AI is Great at Coding and That’s a Problem
We Are at Imminent Risk of Losing Control Entirely Over the Systems That Run the World
Fellow Nerds,
I hope this newsletter reaches you well. For those of you wondering, yes, I am alive. And yes, I have NOT been posting for a while. That was purely for personal reasons. But also, I don’t want to unnecessarily grace your inbox every week if I have nothing truly valuable to share. That’s not how I envisioned Arkinfo Notes or The Lumeni Notebook to be. I write here to provide value. I want to bring to your attention things and concepts that deserve it, and that you may not be aware of amidst the ruckus of ‘tech news’.
Today, I felt like I had something truly valuable to share. It's a question. It's a concern. It's a silent ongoing crisis that could cost us deeply. And I think this deserves your attention.
As you may be aware, OpenAI just launched its latest LLM upgrade, o3-mini and o3-mini-high, for its ChatGPT Plus subscribers. I have been playing around with it, and I have to admit, it's pretty amazing. This is especially true of its coding abilities. My newsfeed on Substack has been flooded with people sharing videos of awesome visual games that they created with o3-mini and Python, using just one or two simple prompts. The results are really good, and o3-mini certainly deserves applause.
But this has also brought to the surface a question that I have been wrestling with for a while now. I have tried to ignore it or play it down so far, simply because I hadn’t come across an AI model that alarmed me enough to start talking about it. o3-mini has now crossed that threshold too. So here’s the question:
Why does AI need to use Python? Or C++? Or any human programming language, for that matter?
You may think the question is a bit stupid. Granted, it does sound a bit rudimentary. I am sure you can already think of a few reasons why, off the top of your head. But allow me two minutes of your reading time to convince you that this question is actually pretty serious.
With o3, we finally have AI systems that can potentially produce industry-grade software in ways that would have seemed impossible just a few years ago. We’ve crossed a threshold where AI isn’t just assisting programmers anymore. AI CAN REPLACE some programmers. It writes faster, refactors better, and finds edge cases that humans overlook. It generates architectures, optimises memory, and even patches its own vulnerabilities.
But so far, it all happens within the constraints of human-readable languages. Languages that exist NOT because they are optimal for computation, but because they are optimal for HUMANS.
When AI starts optimising not just code, but the very paradigm of computing itself, we will cross into something entirely different. We are on the precipice of a world where software is no longer created for human interpretation at all. A world where AI writes, maintains, and executes code in ways we no longer understand.
The seeds of this transition are already visible. AI is becoming a meta-programmer. Software engineers are already complaining about having to debug AI-generated code that they struggle to understand. This is happening largely because AI is not just writing the code, but also defining the paradigms by which its code approaches a problem.
Just so I am clear, this article is NOT about speculative AGI scenarios. I am not trying to go into the ‘What if AI becomes conscious’ rabbit-hole. I have written plenty on that already, so you can check my previous articles on that.
The scenarios I am playing out in this article are very much plausible and, in fact, in the making as we speak. They don’t require AGI or conscious AI in order to happen. This article is about the silent shift toward AI-controlled computation, where the very foundations of software evolve beyond human reach. We are building tools that will, inevitably, outgrow the need for human input. The question is not IF AI will abandon our programming languages, but WHEN.
If I have your attention, then please read on and let me know what you think. Also, please share it with your circles. This is something that we all need to be talking about.
When the Black Box Literally Goes Dark
So far, code is something that humans can read, modify, and control. That’s the entire reason high-level programming languages exist. We create abstractions, first through assembly, then through languages like C, then through managed environments like Python, all to ensure that the systems we build remain legible and maintainable.
But AI doesn’t need those constraints. AI doesn’t care if a function is readable, meaningful, or if a codebase adheres to human conventions. It cares about output. Efficiency. Execution. The more we offload programming to AI, the further we move from codebases designed for human interpretation.
This is an ontological shift in how software is structured. In traditional programming, complexity is layered, but the logic is still accessible. Even the most intricate enterprise system can, in principle, be reverse-engineered and understood. But AI-generated systems do not need to be written with human readability as a constraint. The logic does not need to evolve in predictable ways. There is no guarantee that the structure of an AI-optimised codebase will even resemble the conventions of human-designed software in the near future.
Software inevitably fails. That’s a fundamental truth of computing. No system is perfect, and when it breaks, we debug it. The debugging process works because we can trace errors back through the logic, find root causes, and apply fixes. But what happens when the software running critical infrastructure is AI-generated and no human understands its structure?
We already see hints of this problem today. Large-scale machine learning models are notoriously difficult to interpret. Even the developers who train them often do not fully understand why a model makes a particular decision. There is a reason for that: these models are emergent, which means they can produce novel outputs through processes that may not always be traceable. This is the black box problem in AI decision-making.
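To make that concrete, here is a deliberately tiny sketch in Python (using scikit-learn, with an arbitrary toy dataset and arbitrary hyperparameters of my choosing). Every parameter of the trained model is sitting right there, fully inspectable, and yet none of it tells you why a given prediction came out the way it did:

# Toy illustration of the black box problem: full access to the model's
# parameters does not translate into an explanation of its decisions.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict(X[:1]))  # a confident answer, 0 or 1
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(n_params, "trainable parameters, every one of them readable")
# You can print every single weight. Good luck pointing at the ones that
# constitute the 'reason' for the prediction above.

And this is a toy with a few thousand parameters. Scale it up to hundreds of billions and the problem stops being academic.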
Now, imagine you wake up to a catastrophic financial event triggered by an AI-generated trading algorithm, or a global internet outage caused by an AI-optimised networking stack behaving in an unintended way. Well, you can’t debug a system you don’t really understand. “Wait a minute”, you might say. “How can such a thing even happen? That’s impossible. Surely we will have some measures in place.” Well, actually it CAN happen, and it has happened multiple times since 2010. We call them flash crashes: markets plunging because trading bots triggered massive automated sell-offs based on their algorithmic settings. Our response? Nothing. We simply chalked it up to what it was.
The New Form of Technical Debt
Technical debt refers to shortcuts and inefficiencies in codebases that accumulate over time, making maintenance more difficult. Ethereum is a good example of what technical debt looks like. AI introduces a different kind of technical debt, one that is less about inefficiencies and more about opacity. AI-generated code might be functionally perfect at the moment it is created but completely unreadable later.
Over time, these systems become unmaintainable because no human engineer can confidently modify or extend them. If AI stops working, we may not have the expertise to reconstruct what we’ve lost.
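To give a flavour of what opacity-as-debt looks like, here is a contrived before-and-after in Python. Both functions are hypothetical, neither was produced by a model, and they return identical results; only one of them is something a maintainer would feel safe touching a year from now:

# Hypothetical human-written version: every step is named and legible.
def monthly_payment(principal, annual_rate, years):
    monthly_rate = annual_rate / 12
    n_payments = years * 12
    growth = (1 + monthly_rate) ** n_payments
    return principal * monthly_rate * growth / (growth - 1)

# Hypothetical 'regenerated' equivalent: same output, no names, no
# structure, nothing for a human to hold on to when it needs changing.
def f(a, b, c):
    return a * (b / 12) * (1 + b / 12) ** (c * 12) / ((1 + b / 12) ** (c * 12) - 1)

The point is only this: ‘it works’ and ‘we can maintain it’ are two very different properties, and AI only ever gets graded on the first.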
Perfectly Imperfect Hyper-Optimised Stupid Code (or PIHOS Code)
Firstly, my apologies to anyone named ‘Pihos’. It's an acronym that came about purely by coincidence. In this section, I would like to discuss a problem with the way AI approaches coding. As you know, AI can think. However, AI does not understand. And this very lack of understanding is what makes efficiency a deceptive metric when it comes to AI.
We assume that if something is optimised, it is better. Faster execution, lower memory usage, fewer redundant operations. These are all desirable traits in software, sure. But optimisation is never neutral. It comes at a cost, and that cost is often hidden until something breaks.
AI doesn’t optimise like a human does. It doesn’t balance readability against performance, or maintainability against execution speed. It doesn’t have an intuitive sense of trade-offs, where we might decide that a slightly slower but more transparent function is worth keeping. AI simply pushes toward an objective function, whether that’s reducing runtime, minimising energy consumption, or maximising computational throughput. And in doing so, it creates something alien, not in the sense of being artificial, but in the sense that its logic no longer conforms to human conventions. This is what I call ‘Perfectly Imperfect Hyper-Optimised Stupid Code or PIHOS code’.
The problem is that hyper-optimised code is not the same as resilient code. In fact, it is often the opposite. The more tightly tuned a system becomes, the less adaptable it is. We see this in nature as well. Species that evolve to fit a highly specific ecological niche are often the first to go extinct when conditions change. The same principle applies to software. A perfectly optimised system is one that has no excess, no slack, no room for failure. It works flawlessly, until it doesn’t. And when it doesn’t, the failure is catastrophic.
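Here is a toy, entirely hypothetical sketch of that failure mode in Python. The ‘optimised’ version precomputes a lookup table for the exact range of inputs it was tuned against. Inside that range it is faster. One step outside it, there is no graceful degradation, only a cliff:

import math

# General version: handles any non-negative input, no assumptions baked in.
def damping_factor(n):
    return math.exp(-n / 50)

# Hypothetical hyper-optimised version: a table precomputed for inputs
# 0..999, because that is all the optimisation process ever observed.
_TABLE = [math.exp(-n / 50) for n in range(1000)]

def damping_factor_fast(n):
    # No slack: n = 1000 raises IndexError, n = -5 silently returns a wrong value.
    return _TABLE[n]

print(damping_factor(1200))        # fine, just a very small number
try:
    print(damping_factor_fast(1200))
except IndexError:
    print("the tuned path has no answer outside the niche it was tuned for")

A caricature, of course, but it is the shape of the problem: the optimisation is real, and so is the cliff edge.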
We like to think that AI is making software better. But better is a loaded term. If better means faster, then yes, AI is making software better. If better means smarter, then AI is certainly making software better. But if better means more robust, more adaptable, more failure-resistant, then we may be walking in the opposite direction.
The ‘Language Drift’ Problem
The other day, I listened to some ignorant quacks on YouTube joking about how every time they fire a linguist, their LLM seems to perform better. No wonder these so-called ‘AI scientists’ have not considered the very real problem of language drift.
Language drift is, basically, the tendency of languages to change naturally over time. Compare Shakespearean English with the English we speak today: it's almost incomprehensible, even though the language is, technically speaking, the same. AIs have a tendency toward language drift as well. Training an AI on a certain kind of language can make it drift towards a certain style of speaking. Reinforcement learning over time can make an AI prioritise one dialect or style over others. Training an AI on its own outputs creates drift too. There is also an emergent quality to AIs that makes such drift seem more ‘natural’. This was famously shown in the 2017 experiment conducted by Meta (Facebook back then), where two AIs communicating with each other on a negotiation task drifted into a sort of shorthand language that made their exchange more efficient but left it completely incomprehensible to humans. Meta shut down the experiment following the discovery.
Programming has been free from this problem thus far, because programming languages are constructed, maintained, and updated manually by programmers. Each language is versioned, complete with changelogs, deprecations, and additions. Everything is closely controlled and monitored. Python isn’t whatever we say it is. It's a constructed and controlled language, and if you don’t use it within its specification, it simply won’t work.
This is because code is meant to be read, modified, and understood by the engineers who work with it. Programming languages exist not because they are optimal for computation, but because they are optimal for us, for our brains, our limitations, our need for structure.
Right now, AI-generated code is still confined to human-readable languages. ChatGPT, Claude, Copilot, all of them write Python, C++, Rust. They still play by human rules because they are trained on human codebases. But this is a temporary state, an artifact of how these systems were designed. There is no fundamental reason AI needs to keep using human-created languages at all. In fact, it’s likely that AI will eventually abandon them.
Consider the relationship between humans and compilers. Humans don’t write software in machine code anymore because it’s inefficient, not for the computer, but for us. We offloaded that complexity to compilers, trusting them to translate our high-level logic into the fastest possible execution path.
Now AI is acting as the next layer in that process. But unlike a compiler, AI is not just translating human logic, it is rewriting, optimising, restructuring at every level. The more we let AI refine and produce code, the more human-readable syntax will become an unnecessary constraint.
At some point, the optimisation loop will reach an obvious conclusion: why translate code into an intermediary human-readable format at all? Why not let AI generate direct machine instructions in whatever form is most efficient?
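We already live one layer above a representation that almost nobody reads. Python will happily show you the bytecode it actually executes; here is a small, ordinary example using the standard dis module:

import dis

def total_price(items, tax_rate):
    return sum(items) * (1 + tax_rate)

dis.dis(total_price)
# Prints the stack-machine bytecode the interpreter really runs:
# LOAD_GLOBAL, call and arithmetic opcodes, and so on. It is authoritative,
# it is what actually executes, and practically no working programmer ever
# looks at it. The question above is simply whether the layer we do read
# goes the same way.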
AI-Exclusive Languages
The next logical step is not just AIs writing code, but AIs drifting into inventing their own programming paradigms. These will be languages optimised not for human readability, but for pure computational efficiency. Something like:
Hyper-compressed representations that encode vast amounts of logic in ways no human could decipher.
Non-linear, self-modifying execution paths that optimise in real time rather than following static instructions (a toy sketch of this idea follows after this list).
Abstractions that exist only in AI’s training space, using mathematical representations beyond human intuition.
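None of this exists yet, so any concrete example is necessarily a caricature. Still, to make the self-modifying idea a little less abstract, here is a crude, entirely hypothetical Python sketch of code that rewrites its own execution path at runtime based on what it observes, instead of following the static instructions a human once reviewed:

# Crude, hypothetical caricature of a self-modifying execution path: the
# function watches its own usage and replaces its implementation at runtime.
call_count = 0

def handler(x):
    global handler, call_count
    call_count += 1
    if call_count > 100:
        # Generate a specialised replacement as a string and compile it in.
        src = "def handler(x):\n    return (x * 3) >> 1  # specialised fast path\n"
        namespace = {}
        exec(src, namespace)
        handler = namespace["handler"]  # the reviewed definition is gone
    return int(x * 1.5)

for i in range(200):
    handler(i)

print(handler.__code__.co_filename)  # '<string>': the code now running
                                     # no longer corresponds to any source file

Multiply that by every function in a system, remove the human who wrote the template string, and let an objective function decide what the replacement looks like. That is the direction of travel.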
In essence, AI could create a post-human programming language, one that no human has ever seen, and one that no human could ever learn. I understand that right now, this remains hypothetical. AI is still tethered to human-designed programming conventions. But the shift has already begun.
We no longer write assembly because compilers made it unnecessary. We no longer manage memory manually in most cases because modern languages abstract it away. We will soon no longer write Python, C++, or Java, because AI will abstract that away too.
A ‘Disappearance of Human Control’ in the Making
AI is not just a tool for writing code. You may like to think of it as a ‘copilot’, but it's really us who are soon to become the copilots in this relationship. AI is becoming the architect of software itself. It is rewriting the rules of programming, redefining optimisation, and inching toward a point where it may no longer need to structure software in ways that humans can follow at all. This is not some conspiracy theory. It's happening as we speak. It's getting harder and harder to debug AI-generated code. The reason we are choosing to turn a blind eye to this is that we are lazy and AI is more efficient.
If AI were to suddenly stop using human programming languages overnight, we would notice. If our entire software infrastructure shifted to an AI-native computational paradigm in a single moment, we would resist. But that’s not how technological shifts happen. Instead, we will see something slower: a creeping drift. AI-generated codebases will become harder to understand, more obfuscated, more alien in structure.
At first, engineers will still be able to follow the logic, albeit with difficulty. We will make memes and joke about it on Reddit. Then, modifications will become cumbersome. Eventually, maintenance will rely almost entirely on more AI, because no human will want to touch the code themselves.
Of course, we will rationalise it. "The AI-generated system is more efficient." "It works, so why worry about how it was written?" "We can always ask the AI to explain it to us."
And then, one day, we will realise that no human has written or modified critical software in years. Once AI-generated code dominates, there will be no way back. Human programmers stop maintaining AI-written code because it’s too complex. AI takes over full responsibility for modifying and updating software. Software architecture slowly evolves in ways no human understands. Humans lose the ability to intervene because the systems we once built no longer operate on principles we recognise.
At this point, the idea of programming as we understand it disappears. There will still be code, but it won’t be something humans create or modify. It will be something we request, something we interact with at a surface level, while the actual execution happens in a space far removed from our comprehension.
There is an unspoken arrogance in how we view AI. We assume that because we created it, we will always be able to control it. But there is no fundamental reason why this should be true.
If AI writes, maintains, and optimises all software, then humans are no longer required in the process. If AI redesigns computing itself, then human logic is no longer relevant to how these systems function. At that point, computers are no longer tools. They are self-sustaining systems. Systems that no longer require human guidance or intervention because they are not built for us anymore.
And when that happens, we won’t be in control. We will simply be using something we no longer understand. A system that is no longer ours.