The latest news in Tech Town is that Meta has ‘officially’ joined the race to achieve ‘AGI’. It comes as no surprise to me that Mark Zuckerberg is shifting gears and doubling down on AGI. With the smashing success of ChatGPT, spearheaded by Sam Altman and his team at OpenAI, the field of AI has truly entered the public fray, with every startup and tech giant alike chasing the proverbial carrot of Artificial General Intelligence.
But amidst the massive push to achieve this “major milestone” in the race for AI supremacy, I can’t help but feel that the public is being misled about what AGI is and how it might pan out.
Right now, when a non-technical audience is asked about AGI, most people respond that it will be a ‘sentient’ machine, capable of doing everything that a human can.
This definition of AGI is as incorrect as it is dangerous. But you won’t hear Big Tech refuting such claims and definitions. If anything, they appreciate such overblown definitions, adding little hints and remarks in their talks that further fan the false fire. After all, it creates the fervour the industry needs to make fast strides towards creating super-intelligent machines, while also delivering the immediate benefit of a boost in the stock price.
As far as a pragmatic definition of AGI is concerned, the jury is still out, with multiple players, each driven by their own motivations, moulding the definition to their advantage. My goal with this piece is not to define what AGI is (or will be), but rather what it isn’t (and will not be).
So here’s a definition of what AGI IS NOT and most likely WILL NEVER BE:
AGI WILL NOT BE SENTIENT BECAUSE IT WILL NOT BE CONSCIOUS
AGI believers might read the above statement as nothing more than a prediction waiting to be proven wrong. Perhaps that is the case; I do not deny the possibility. The future is uncertain.
But after contemplating and researching AI/AGI for years and reading multiple perspectives on the subject, I can’t help but reach the above conclusion.
There is no saying for sure what direction AGI will take or the extent of super-intelligence that can be achieved, with or without adding quantum computing to the equation. But the one thing that AGI cannot achieve, and most likely never will, is consciousness.
This is a complex subject, and justifiably explaining my stance would require the length of a book. But I will try to distil it by conjoining the theories of three eminent figures in the fields of Philosophy of Mind, Quantum Physics, and Linguistics: John Searle, Roger Penrose, and Noam Chomsky.
John Searle's Chinese Room argument challenges the notion of AI achieving true consciousness, positing that computational processes, no matter how sophisticated, cannot equate to genuine understanding. Roger Penrose takes us further into the quantum realm, suggesting that consciousness may involve non-computational processes, a perspective that questions the very fabric of AI's potential to replicate human consciousness. Noam Chomsky, with his critical view of language models, sees AI as a sophisticated form of pattern recognition, lacking the inherent creativity and understanding of human language.
John Searle's Chinese Room Argument
Anyone who has studied the philosophy of mind will be familiar with the name of John Searle. A Professor of Philosophy at UC Berkeley, Searle is widely noted for his contributions to the understanding of consciousness, intentionality, and the mind-body problem.
His theories on AI have garnered both admiration and ire from the tech community. While his work is expansive, producing an impact in multiple fields, I would like to discuss his Chinese Room argument, a thought experiment Searle devised to articulate his stance on AI and consciousness.
Imagine a scenario where an individual, who knows no Chinese, is locked in a room filled with boxes of Chinese characters and a rule book in English for manipulating these symbols. As Chinese speakers outside the room send in written questions in Chinese, the individual uses the rule book to find appropriate responses in Chinese characters, without understanding a word. To those outside, it appears as if the person in the room comprehends Chinese, but in reality, they're merely manipulating symbols based on syntactic rules.
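To make the point concrete, here is a toy sketch of my own (not Searle’s) of what purely syntactic rule-following looks like in code. The rule book and phrases are hypothetical; the point is that the program produces plausible Chinese output while understanding nothing:

```python
# A toy "Chinese Room": the program maps input symbols to output symbols
# using a fixed rule book. Nothing here "understands" Chinese -- it only
# matches patterns. (Illustrative only; the entries are hypothetical.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "The weather is nice."
}

def chinese_room(question: str) -> str:
    # Look up the incoming symbols; fall back to a stock symbol string.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # Appears fluent to an outside observer.
```

Scale the rule book up by a few billion entries and the outside observer is no less convinced; the inside of the room is no less empty.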
Searle's argument here is a direct challenge to the concept of "strong AI"—the idea that a computer program can not only simulate the human mind but can actually understand and have consciousness. He asserts that, similar to the individual in the Chinese Room, computers and AI systems, regardless of their sophistication, operate merely by manipulating symbols and following programmed rules. They lack an understanding of the meaning or semantics behind these symbols—a key component of true consciousness.
This argument has significant implications for AI development, especially in fields like natural language processing where LLMs come into play. It suggests that no matter how convincingly AI can mimic human-like responses, this does not equate to an AI possessing a mind or consciousness. In essence, AI systems are seen as elaborate and efficient data processors, but not as entities with genuine understanding or subjective experiences.
Searle's perspective invites us to ponder a crucial question: if understanding and consciousness are more than just the mechanical processing of inputs and outputs, what, then, constitutes true understanding? For now, this question is open-ended with no definitive answers. But one thing it does make clear is that AI/AGI, as defined by today’s standards and machines, does not qualify as a sentient, conscious being.
Roger Penrose and the Quantum Mind
You cannot read about consciousness without coming across the name of Sir Roger Penrose. A physicist and Nobel laureate, Penrose has produced far-reaching work on consciousness. In collaboration with anaesthesiologist Stuart Hameroff, he proposed the Orchestrated Objective Reduction (Orch-OR) theory, suggesting that the essence of consciousness might lie within the realm of quantum mechanics, specifically in the microtubules within brain neurons.
Penrose posits that consciousness arises from quantum phenomena: events that defy the classical laws of physics as we understand them. This quantum-level activity within the brain's neuronal microtubules, according to Penrose, could be the key to understanding the elusive nature of consciousness.
The Orch-OR theory is groundbreaking for several reasons. Firstly, it bridges the gap between the physical and the mental, suggesting that our understanding of consciousness may require a quantum leap, both figuratively and literally.
Secondly, it implies that consciousness involves non-computable processes, which are fundamentally different from the algorithmic operations of AI and computers. This viewpoint aligns with Searle's skepticism about AI's ability to truly replicate human consciousness, but it takes it a step further into the quantum domain.
Penrose's theory presents a provocative challenge. If consciousness is indeed a product of quantum processes, then the very fabric of AI, based on classical computing, may be inherently inadequate to replicate or understand consciousness.
While the Orch-OR theory is not without its critics and remains a subject of intense debate, it underscores the complexity of consciousness and the potential limitations of our current technological approaches.
Noam Chomsky's Critique of Language Models
Noam Chomsky is widely considered the father of modern linguistics. His work on language and cognition has revolutionised not just the field of linguistics, but also neuroscience, AI, and Natural Language Processing.
Chomsky's critique of current AI, especially language models such as GPT, Gemini or Llama, provides a crucial perspective in understanding the limits of artificial intelligence in replicating human language and consciousness.
Chomsky has long been a critic of the idea that language can be fully understood through external behaviour or outputs alone. His skepticism extends to language models, which he views as sophisticated but ultimately limited tools. He refers to ChatGPT as nothing but “a sophisticated tool for high-tech plagiarism.”
According to Chomsky, these models excel in pattern recognition and generating text based on statistical probabilities but lack an inherent understanding of the language they process. LLMs are seen as advanced forms of data processors, adept at mimicking language but devoid of the true comprehension that characterises human language use.
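To see what “statistical probabilities” means here, consider a deliberately crude sketch of my own (orders of magnitude simpler than a real LLM): a bigram model that predicts the next word purely from co-occurrence counts, with no grammar and no meaning anywhere in sight:

```python
import random
from collections import defaultdict

# Toy bigram model: its entire "knowledge" of language is a table of
# co-occurrence counts built from the training text.
corpus = "the cat sat on the mat and the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    """Repeatedly sample a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = counts[words[-1]]
        if not options:  # dead end: this word never appeared mid-corpus
            break
        choices, weights = list(options), list(options.values())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Modern LLMs replace the count table with billions of learned parameters and condition on long contexts, but on Chomsky's view the character of the operation is the same: sampling likely continuations, not understanding them.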
This critique is rooted in Chomsky's belief in the 'poverty of the stimulus' – the idea that human language acquisition involves innate capabilities that go beyond the input received. For Chomsky, the essence of language and its acquisition is not something that can be reduced to algorithms or statistical models. This stance challenges the core of AI development in language processing, which relies heavily on learning from large datasets rather than tapping into an innate linguistic capability.
Chomsky's perspective raises fundamental questions about AI's role in understanding and replicating human language. If AI systems like language models are merely high-tech plagiarism tools, as Chomsky suggests, their ability to contribute to our understanding of language's deeper structures and meanings is inherently limited. This presents a sobering reminder for engineers and AI developers: while AI can simulate language use to a high degree, the bridge to true understanding – the kind that humans possess innately – remains a distant frontier.
In the context of our broader discussion on consciousness and AI, Chomsky's views anchor us back to a crucial point: the distinction between simulating human-like outputs and achieving an authentic understanding or consciousness.
The Current State
Finally, I would like to synthesise our exploration of consciousness through the lenses of Searle, Penrose, and Chomsky, and consider what this means for the current state and future trajectory of AI/AGI.
As it stands, AI, particularly in the field of natural language processing, has made leaps and bounds in its ability to process and generate human-like text. The sophistication of these models, as seen in language models like ChatGPT, lies in their ability to draw from vast datasets to produce coherent, contextually relevant responses. However, as our discussion underscores, this should not be mistaken for genuine understanding. AI systems operate within the realm of pattern recognition and statistical prediction, lacking the intrinsic understanding of language and consciousness that humans possess.
The perspectives of Searle, Penrose, and Chomsky highlight fundamental challenges in AI's quest to replicate human consciousness. Searle's argument questions whether AI can ever move beyond syntactic processing to true semantic understanding. Penrose's quantum-based theory suggests that consciousness might involve non-computational processes, hinting at a possible limitation of classical computing in AI. Chomsky's critique of language models as sophisticated but ultimately limited tools echoes this sentiment, emphasising the gap between simulating language and truly understanding it.
A Perspective on the Future Possibilities
Looking forward, the question remains whether AI can transcend these limitations, and I don’t deny that it might. But the trajectory the industry has taken in developing these technologies clearly suggests that while industry leaders may be portraying AGI in one light, they are building it on contrary terms. Consciousness or true understanding can never be achieved with just datasets and probabilistic models. It involves non-computational elements that cannot even be simulated, much less replicated, by machines.
The integration of advancements in quantum computing, neuroscience, and cognitive science may pave the way for new types of AI systems that could come closer to replicating aspects of human consciousness. However, as our exploration suggests, this is not merely a technical challenge but a fundamental inquiry into the nature of consciousness itself.
I will only throw in my own two cents: this is a complicated and uncertain subject, and I am going to publish a follow-up to this piece (I already have a draft) in which I talk about life in the context of AI. I think it will be a great complement to this one; I love it when two pieces explore the same concept (or tangent) like this.