It's Time To Kill Moore's Law
Moore's Law is Too Slow for AI Growth. We Need To Think Beyond Chips!
Last week, I had the pleasure of visiting the Computer History Museum in Mountain View, CA, right at the heart of Silicon Valley. It's a quaint place, great for computer nerds such as myself to take a 2,000-year stroll down the history of computational devices. If you ever get a chance (and want a high dose of geek), I highly recommend visiting.
So there I was, strolling through the Museum when something caught my eye. No, it wasn't the ancient computer that's roughly the size of my apartment. It was a glittering sign that said: Moore's Law.
The museum had an entire section devoted to it! Imagine—a law so iconic, it gets its own corner in a museum. It's like the Beyoncé of computing laws. As I wandered through the exhibits, nostalgia washing over me like an early 2000s playlist, I started to wonder: "Is Moore's Law really nearing its expiration date, or are we just not thinking big enough?"
Cue the lights, people, because we're about to dig into the life, the legend, the looming question mark that is Moore's Law.
Moore’s Law
Before we dive into the nitty-gritty, I owe you an apology for being a bit pedantic. But hey, let's call it a "legal requirement" (wink, wink) so I don't get sued for heresy or something.
Moore's Law was first posited by Gordon Moore, co-founder of Intel, in 1965. In its most commonly cited form, the law states that the number of transistors on a chip doubles approximately every two years, leading to an exponential increase in computing power. (Moore's original 1965 observation actually said every year; he revised it to two years in 1975.)
That law has, more or less, held its ground for over five decades. But listen, Moore never said this doubling spree would last forever. He wasn't peeking into some Silicon Crystal Ball. In fact, Moore himself acknowledged that the trend would eventually slow down. So, before you come at me with pitchforks, understand that even Moore knew limitations were coming.
So, with that legality out of the way, let's get into why I'm "legally" allowed to be tired of Moore's Law.
Is Moore's Law Dying?
Alright, let's cut to the chase. Everyone's talking about Moore's Law like it's some kind of doomed rockstar on its farewell tour.
Naysayers are whispering, "Moore's Law is tapering off! Repent, ye engineers!" Concerns hover around supply chain issues (especially post-COVID), rising raw material costs, heat dissipation in ever-denser transistors, and quantum effects like electron tunneling creeping in, because, yes, we are making these things that small now!
Apple's M1 Ultra, for instance, is built on a 5-nanometre process node with a density of around 171 million transistors per square millimetre. That's insanity! No wonder quantum effects are starting to crash the party.
That same M1 Ultra, launched in 2022, packs 114 billion transistors; NVIDIA's A100, launched in 2020, had 54 billion. That's a doubling in roughly two years. Yes, you read that right, DOUBLED! So, why all this buzz about Moore's Law becoming an antique?
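If you want to sanity-check that against Moore's two-year cadence, here's a quick back-of-envelope sketch in Python (the 24-month window is my assumption; swap in your own dates):

```python
import math

# Back-of-envelope: implied doubling time from the transistor counts above.
start_count = 54e9       # ~54 billion transistors (NVIDIA A100, 2020)
end_count = 114e9        # ~114 billion transistors (Apple M1 Ultra, 2022)
months_elapsed = 24      # assumed window between the two chips

growth = end_count / start_count
doubling_time = months_elapsed * math.log(2) / math.log(growth)

print(f"Growth factor: {growth:.2f}x")
print(f"Implied doubling time: {doubling_time:.1f} months")
```

That works out to roughly 22 months per doubling, comfortably on Moore's schedule.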
It is clear that, so far, Moore's Law has been keeping pace with its own benchmark and is alive and well. But scientists are beginning to see the end of the tunnel (and there is no light, just in case you were wondering).
So this raises an important question: how do we sustain the exponential growth of AI if the chips can't keep up? Let's discuss.
First, We Need to Stop Eating So Many Chips!
Listen, I know chips are addictive. Not just the McDonald’s kind, but also silicon. With soaring demands for machine learning, cloud computing, and artificial intelligence, our computational hunger is growing exponentially. But here's the deal—our chip consumption is anything but sustainable.
First up, power consumption. Did you know that in 2022, data centres worldwide consumed around 340 TWh of electricity? That's more than the annual electricity consumption of Thailand!
As transistors get smaller, leakage current, although reduced in modern architectures, is still a villain. A 7 nm transistor might leak on the order of 100 nA in the worst case. Multiply that by tens of billions of transistors and, well, you get the point.
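To see how fast "you get the point" adds up, here's a deliberately naive back-of-envelope in Python. Taking that worst-case figure literally for every transistor produces a physically absurd number, which is exactly why designers power-gate idle blocks and push typical per-transistor leakage far below the worst case:

```python
# Deliberately naive static-power estimate. The 100 nA figure is a worst case;
# real per-transistor leakage is usually far lower, and large parts of a chip
# are power-gated off at any given moment.
leakage_per_transistor = 100e-9    # amps (worst-case, illustrative)
transistor_count = 100e9           # a flagship-class chip
supply_voltage = 0.75              # volts, assumed for an advanced node

total_current = leakage_per_transistor * transistor_count   # amps
static_power = total_current * supply_voltage               # watts

print(f"Aggregate leakage current: {total_current:,.0f} A")
print(f"Static power spent doing nothing: {static_power:,.0f} W")
```

Ten thousand amps and several kilowatts of pure waste heat, on paper. The real numbers are far smaller, but the lesson stands: tiny leaks times billions of transistors is a budget you cannot ignore.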
Next, let's talk dimensions. We're down to the 2-3 nm scale, where the rules of classical physics wave goodbye and quantum mechanics says hello. Quantum tunneling is more than just a theory; it's a real problem. By some estimates, electrons can tunnel through the thinnest barriers in sub-5 nm designs with probabilities approaching 30%. Good luck maintaining signal integrity when your electrons are playing hopscotch on you.
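For a feel of why thin barriers are so leaky, here's a toy WKB-style estimate of an electron tunneling through a rectangular barrier. The 1 eV barrier height is an assumption and real gate stacks are far messier, so ignore the absolute numbers and watch the trend as the barrier thins:

```python
import math

# Toy WKB-style tunneling estimate through a rectangular barrier:
#   T ~ exp(-2 * kappa * d),   kappa = sqrt(2 * m_e * U) / hbar
m_e = 9.109e-31        # electron mass, kg
hbar = 1.055e-34       # reduced Planck constant, J*s
eV = 1.602e-19         # joules per electron-volt

barrier_height = 1.0 * eV                          # assumed effective barrier
kappa = math.sqrt(2 * m_e * barrier_height) / hbar

for width_nm in (2.0, 1.0, 0.5, 0.3):
    d = width_nm * 1e-9
    transmission = math.exp(-2 * kappa * d)
    print(f"barrier {width_nm} nm -> tunneling probability ~ {transmission:.1e}")
```

Halve the barrier and the tunneling probability jumps by orders of magnitude, which is the whole problem in one line.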
And don't get me started on materials. Silicon has a bandgap of about 1.12 eV, which is starting to show its limitations. Alternatives like gallium nitride, with a bandgap of 3.4 eV, or graphene, which has a high electron mobility of 2.5 x 10^5 cm^2/V·s, are under active research. Yet, none have shown they can truly replace silicon in versatility and manufacturability. You can read this article I wrote on graphene to understand this better.
Finally, there's the cost. The investment needed to build a semiconductor fabrication plant for modern chips can exceed $10 billion. And that's just the ticket to enter the game, not to win it.
So, the message is clear: our rate of chip consumption is hitting physical, material, and even fiscal walls. We need a change in direction, or we're heading for a silicon apocalypse.
Why Not Neuromorphic?
Okay, let's get one thing straight: neuromorphic computing is cool, like leather-jacket-and-dark-shades cool. I even wrote an article about it for Open Source For You Magazine, which should be available to read online soon. But can it replace our classical chips for the demanding AI tasks of the future? Let's not get ahead of ourselves.
First up, the cool stuff. Neuromorphic chips mimic the structure of the brain, using neurons and synapses to process information in parallel. You've probably heard of IBM's TrueNorth or Intel's Loihi. They boast energy efficiency that can be as much as 1,000 times better than traditional architectures for specific tasks.
But here's the rub. While neuromorphic chips are awesome at pattern recognition and decision making, try getting them to do a Fast Fourier Transform. I dare you. These chips are hyper-specialized, making them unsuitable for general-purpose computing.
Now, let's talk about training. A typical neuromorphic system might employ a spiking neural network (SNN). Sounds cool, right? Except training SNNs can be like teaching your cat quantum mechanics: spikes are discrete, all-or-nothing events, so you can't simply backpropagate through them the way you do with ordinary deep nets, and there isn't enough established software infrastructure to make large-scale training feasible yet.
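For a sense of what these networks are built from, here's a toy leaky integrate-and-fire (LIF) neuron in plain Python/NumPy. The parameters are illustrative and not tied to TrueNorth, Loihi, or any real chip:

```python
import numpy as np

# Toy leaky integrate-and-fire (LIF) neuron: the basic unit most spiking
# neural networks are built from. All parameters are illustrative.
dt = 1e-3                         # 1 ms time step
tau = 20e-3                       # membrane time constant, s
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

rng = np.random.default_rng(0)
input_current = rng.uniform(0.0, 2.5, size=200)   # random drive for 200 ms

v = v_rest
spike_times = []
for t, i_in in enumerate(input_current):
    v += dt / tau * (-(v - v_rest) + i_in)   # leak toward rest, push with input
    if v >= v_thresh:                        # threshold crossed: emit a spike...
        spike_times.append(t)
        v = v_reset                          # ...and reset the membrane potential

print(f"{len(spike_times)} spikes in {len(input_current)} ms")
```

That hard `if v >= v_thresh` is exactly the all-or-nothing event that makes gradient-based training so awkward.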
And lastly, scalability. Even if we hypothetically managed to get neuromorphic computing up to speed with conventional methods, we'd still be limited by material and manufacturing techniques that are, in many ways, shared with classical chip technology.
So, the bottom line? Neuromorphic computing might be the punk rock star of the computational world, but it's not headlining any world tours just yet.
Why Quantum Computers Suck but Quantum Technology Doesn't
As of now, quantum computing is like the star child everyone thinks will solve all of humanity's problems, from curing diseases to making your coffee just the right temperature every time. But let's slow our quantum roll for a moment and dissect this buzzword.
The Hype and The Horror
First, the hype. Quantum computers use qubits instead of bits, allowing for superposition and entanglement, which theoretically lets them explore many computational paths at once. You might've heard of Shor's algorithm, which, on a large enough fault-tolerant machine, could break today's widely used encryption schemes. Grover's algorithm can search unstructured databases quadratically faster than any classical computer can.
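Since these speedups get exaggerated so often, here's a tiny classical simulation of Grover's search on a NumPy statevector. There's no quantum hardware and no actual speedup here; it just shows where the ~√N iteration count comes from:

```python
import math
import numpy as np

# Classical toy simulation of Grover's search over N = 2**n items.
n_qubits = 10
N = 2 ** n_qubits
marked = 123                                  # the item we're "searching" for

state = np.full(N, 1 / math.sqrt(N))          # uniform superposition
iterations = round(math.pi / 4 * math.sqrt(N))

for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip the marked amplitude
    state = 2 * state.mean() - state          # diffusion: inversion about the mean

print(f"{iterations} iterations for N = {N}")
print(f"Probability of measuring the marked item: {state[marked]**2:.3f}")
```

Twenty-five iterations to find one item out of 1,024 with near-certainty: a quadratic win, impressive but not magic.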
Now, the horror. Coherence time, my friends. That's the time a qubit can maintain its state, and it's not much. We're talking microseconds here.
Keeping qubits stable is like trying to balance a pencil on its tip while riding a roller coaster. Quantum error correction is still a nascent field, and we're far from practical solutions.
The Actual Hero
Here's where the nuance comes in. Quantum technology goes far beyond just quantum computing. For instance, quantum key distribution in cryptography, quantum sensors for more accurate measurements, and quantum networking for ultra-secure communications are already proving themselves viable.
The Quantum-Classical Romance
One of the most promising ways around our limitations is the quantum-classical hybrid system. Imagine a setup where a classical computer handles the mundane tasks, while the quantum computer swoops in to tackle the specific subproblems the classical system can't handle efficiently. These systems could run quantum machine learning algorithms which, for certain problem classes, promise to be far more efficient than their classical counterparts.
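To make "hybrid" concrete, here's a sketch of the variational loop such systems typically use. The "quantum" expectation value below is faked with a simple cosine landscape; in a real setup that function would dispatch a parameterised circuit to hardware or a simulator (through a framework such as Qiskit or PennyLane), while a classical optimiser like this one drives the parameters:

```python
import math

def quantum_expectation(theta):
    # Stand-in for an expectation value measured on a quantum device.
    # Here it's just a cosine landscape so the sketch runs on its own.
    return math.cos(theta[0]) + 0.5 * math.cos(2 * theta[1])

def finite_diff_grad(f, theta, eps=1e-3):
    # Classical gradient estimate built from repeated "quantum" evaluations.
    grad = []
    for i in range(len(theta)):
        up, down = theta[:], theta[:]
        up[i] += eps
        down[i] -= eps
        grad.append((f(up) - f(down)) / (2 * eps))
    return grad

theta = [0.3, 0.3]            # parameters of the hypothetical circuit
lr = 0.2
for _ in range(100):          # classical optimiser driving quantum evaluations
    grad = finite_diff_grad(quantum_expectation, theta)
    theta = [t - lr * g for t, g in zip(theta, grad)]

print(f"Minimised 'energy': {quantum_expectation(theta):.3f}")
```

The division of labour is the point: the quantum device only answers narrow, expensive questions, and everything else stays comfortably classical.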
Material & Engineering Challenges
Let's get extra nerdy. The materials used in quantum computers—like superconducting circuits or trapped ions—are insanely sensitive. We're talking cryogenic temperatures and vacuum chambers. This isn't something you'll stick next to your fridge magnet collection. Plus, there's the whole issue of quantum gates. The error rates are still far too high for practical applications, meaning we need breakthroughs in material science and engineering before quantum computers are more than just experimental playthings.
A Million Quantum Brains
First, let's marvel at the wonder that is the human brain. You know that mushy, 3-pound blob in your skull? It's a computational marvel. The brain consists of around 86 billion neurons, with each neuron forming around 1,000 connections with other neurons. This results in approximately 100 trillion synaptic connections. And guess what? All of this operates on just about 20 watts of power. Yep, less than your average light bulb.
According to some estimations, the human brain performs at an approximate speed of 1 exaFLOP (1 billion billion calculations per second). That's the computational capacity we're talking about.
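Put those headline numbers next to a modern accelerator and the gap is staggering. The GPU figures below are rough, and the brain's "FLOPs" are a loose analogy at best, so read this as order-of-magnitude only:

```python
# Rough, order-of-magnitude efficiency comparison (headline numbers only).
brain_flops = 1e18        # ~1 exaFLOP, the estimate quoted above
brain_watts = 20

gpu_flops = 3e14          # ~300 TFLOPS, roughly a datacentre GPU at FP16
gpu_watts = 400           # roughly its board power

brain_eff = brain_flops / brain_watts     # operations per second per watt
gpu_eff = gpu_flops / gpu_watts

print(f"Brain: {brain_eff:.1e} ops/s per watt")
print(f"GPU:   {gpu_eff:.1e} ops/s per watt")
print(f"Ratio: ~{brain_eff / gpu_eff:,.0f}x in the brain's favour")
```

Four to five orders of magnitude of energy efficiency, in favour of the mushy blob.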
AI Should Not Just Be Brainy, But A Million Times Brainier
Let's bring AI into the picture. We're talking about the future—a future where AI needs to perform not just one specialised task but a multitude of tasks across different domains, all in real-time.
AI needs to make decisions based on an extensive set of variables, from weather patterns affecting agriculture to real-time stock market analytics. It will be integrated into our daily lives, healthcare systems, transportation, and whatnot. We're not just talking about doing one human brain's worth of calculations—we're talking about doing those calculations a million-fold, in real-time, while sipping on the computational equivalent of an energy drink.
Why Quantum AI is The Endgame
Now, why do I say Quantum AI is the endgame? Because our brains themselves are quantum systems, at least according to some theories. Processes like quantum tunneling and quantum entanglement may play a role in neural processing. Theories suggest that microtubules in neurons could be sites for quantum processing.
If we're talking about mimicking human-like intelligence, or surpassing it a million times over, we've got to delve into the same tricks nature may already be using. Quantum technology provides a framework for handling computational problems that are currently intractable for classical systems. Quantum algorithms can sift through vast data sets or explore many possibilities in parallel, cutting computation times dramatically, exponentially for some problems and quadratically for others.
In AI, this could translate to incredible advancements in machine learning algorithms, making them exponentially faster and more efficient. We're looking at quantum neural networks that could simulate human cognition and much more.
In Conclusion, We Need To Kill Moore’s Law (By Overachieving)
So if you're still clinging to Moore's Law like it's your childhood teddy bear, it's time to grow up. Moore's Law is like your high school sweetheart: great memories, but not your future. Quantum computing, on the other hand, is the enigmatic, artsy rebel you never knew you needed but can't live without.
Why settle for doubling transistors every couple of years? Let's be better.
We've talked about how the human brain itself may be a quantum system. Forget the speed of a Ferrari; your grey matter is on the level of a Falcon Heavy rocket. And what we're aiming for with AI is not just one Falcon Heavy but an entire fleet of them, firing simultaneously at warp speed. Yes, you heard it right, warp speed, people!
So is quantum technology our knight in shining armour? Heck yes! It's not just a game-changer; it's a whole new game altogether. It's like going from checkers to 4D chess.
The reality is, we're on the brink of not just advancing technology, but redefining what's possible. The key to unlocking the future of AI, and thereby our own limitless potential, lies not in following Moore's Law, but in shattering it to smithereens and watching it burn.