Why ‘Complex Emergent Intelligence’ Is The Future of AI
Discussing the Path Ahead for AI Using Complexity Theory
Fellow Nerds,
Earlier this month, I launched the Complex Emergent Model of Language Acquisition (CEMLA), which is available open access on PhilPapers. I introduced it to my audience on The Lumeni Notebook, and it was an instant hit, drawing interest from people across fields, from quantum computing to linguistics.
CEMLA proposes a dynamic framework for understanding language acquisition, grounded in the principles of self-organisation and emergent behaviour. It recognises that language is NOT a static or rule-bound construct. It evolves as a product of continuous interaction and feedback, shaped by cultural, cognitive, and environmental factors. This model is a departure from traditional linear interpretations, viewing language as a complex adaptive system that mirrors the non-linearities of natural systems.
When I launched the model, I never expected it to get much traction, so I was pleasantly surprised by how positively and widely it was received. That reception motivated me to discuss with you, this week, the concept of Complex Emergent AI and why I think it is the way to break future AI benchmarks in a meaningful and useful way.
There is no shortage of debate in the AI community about the long-term pursuit of artificial general intelligence (AGI). Many researchers are chasing AGI, trying to build a future where machines possess true consciousness (whatever that means). I, however, am NOT one of them. AGI, in my view, is a fallacy. When it comes to consciousness, I tend to lean on the direct and pragmatic approach of Dr. John R. Searle. Consciousness (however you’d like to define it) is ultimately a biological phenomenon, deeply rooted in the structures of organic life. No matter how sophisticated the machine, it cannot experience, feel, or be in the way that a living organism does.
That doesn’t mean AI cannot exhibit intelligence-like behaviour. In fact, we are already seeing early glimpses of emergent behaviour in large language models (LLMs) like GPT-4. But herein lies the problem. These models are fundamentally computational and static, constrained by their training data and lacking the dynamic, feedback-driven learning that is crucial for true emergent intelligence. This is where CEMLA could be of help.
Stop Chasing the Wrong Stuff
To understand where AI development has veered off course, we must first address the elephant in the room: artificial general intelligence.
The crux of the issue with AGI, in my view, is that it rests on the assumption that consciousness, or something akin to it, can be engineered. However, consciousness is NOT a programmable feature. It is a deeply biological process, arising from the particularities of organic systems and rooted in the neural and cognitive architectures of living organisms.
Even if we were to literally power an AI system with a biological heart or some other organic element, we still wouldn’t derive consciousness from it. Machines, no matter how complex, are fundamentally different from biological organisms. Consciousness is not simply a matter of complexity or computational power. It is not a purely quantitative effect. It is a qualitative phenomenon, inseparable from the living process itself. The fundamental mistake in the AGI narrative is the belief that we can somehow bypass the biological and still arrive at the same destination.
This is why I argue that the pursuit of AGI is but a distraction from AI’s true potential.
Why Complex Emergent Intelligence is the Way Forward
The current trajectory of AI development has inherent limitations. Even OpenAI, with its latest o1 reasoning model, seems to be grappling with this. What’s the problem? These systems remain rigid, that’s the problem.
They require static training regimes and manual fine-tuning. They lack fluidity and adaptability. And simply throwing more computational power and data at them is not going to make the problem disappear. This needs to be fixed from the ground up, envisioning a completely new paradigm for building language models.
If we were to adapt the principles of CEMLA and apply them to AI, the result would be a radical departure from existing methods, calling for AI systems that operate as complex adaptive systems. What would that entail? Let’s discuss.
AI as a Complex Adaptive System
CEMLA offers a perspective in which linguistic elements form an adaptive network of nodes, where connections strengthen or weaken based on feedback. In this framework, AI systems need to move beyond static pattern recognition toward becoming self-organising systems. This would mean that instead of being trained once and deployed, AI would evolve continuously, similar to how neural connections in the brain reorganise with each new piece of linguistic input.
Imagine an AI architecture built to self-reconfigure its internal models as it interacts with users and environments, adjusting in real time without needing retraining. This is fundamentally different from today’s models, which require retraining cycles to update their knowledge.
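To make that a little less abstract, here is a minimal sketch of what such an adaptive network might look like in code. It is purely illustrative: the class name, the update rule, and the decay term are my own assumptions, not something prescribed by CEMLA.

```python
from collections import defaultdict

class AdaptiveNetwork:
    """Toy self-reconfiguring network: nodes are linguistic elements, and
    edge weights strengthen or weaken with feedback at interaction time."""

    def __init__(self, learning_rate=0.1, decay=0.01):
        self.weights = defaultdict(float)   # (node_a, node_b) -> connection strength
        self.learning_rate = learning_rate
        self.decay = decay                  # unused connections slowly fade

    def interact(self, active_nodes, feedback):
        """Update connections among co-active nodes.
        feedback > 0 strengthens them, feedback < 0 weakens them."""
        for a in active_nodes:
            for b in active_nodes:
                if a < b:
                    self.weights[(a, b)] += self.learning_rate * feedback
        # Global decay: the network gradually 'forgets' connections it never uses.
        for edge in list(self.weights):
            self.weights[edge] *= (1.0 - self.decay)

    def strongest_associations(self, node, k=3):
        """Read out the current structure: top-k neighbours of a node."""
        neighbours = [(b if a == node else a, w)
                      for (a, b), w in self.weights.items()
                      if node in (a, b)]
        return sorted(neighbours, key=lambda x: -x[1])[:k]

# Usage: the network reorganises with every interaction; there is no retraining cycle.
net = AdaptiveNetwork()
net.interact({"rain", "umbrella", "wet"}, feedback=+1.0)   # successful exchange
net.interact({"rain", "sunscreen"}, feedback=-0.5)         # unhelpful association
print(net.strongest_associations("rain"))
```

The point is the shape of the loop: every interaction changes the structure itself, and knowledge updates arrive continuously rather than in discrete training runs.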
Emergent Intelligence
In the context of CEMLA, emergence refers to complex patterns arising from simple interactions between linguistic elements. This principle could fundamentally alter how we design AI systems, moving from brute-force statistical learning to emergent behaviour through interaction and experience.
AI systems based on CEMLA principles would no longer be trained just to recognise patterns but to allow for the spontaneous emergence of new capabilities through interaction. This means that intelligence wouldn’t be explicitly programmed but would emerge from the interactions between the system’s various components and its environment. Instead of predicting the next word in a sentence based solely on training data, a CEMLA-aligned AI would allow contextual interactions and feedback loops to shape new, dynamic language outputs.
To model this mathematically, the system’s learning could be framed with non-linear differential equations, capturing a defining feature of real-world interactions: small inputs can lead to disproportionately large changes in the system’s behaviour.
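If you want a feel for that non-linearity, here is a tiny numerical sketch. The particular equation (a bistable system with a cubic term) is my own illustrative choice, not an equation from the CEMLA paper; it simply shows how two almost identical inputs can tip the same system into very different stable states.

```python
# A minimal sketch of non-linear dynamics: dx/dt = x - x**3 + drive.
# The cubic term creates two attractors; near the tipping point, a small
# change in the drive produces a disproportionately large change in outcome.

def simulate(x0, drive, steps=10000, dt=0.01):
    """Euler-integrate the toy system forward in time and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + drive)
    return x

# Two nearly identical inputs, very different final behaviour:
print(simulate(x0=-1.0, drive=0.38))   # settles near the negative attractor (about -0.6)
print(simulate(x0=-1.0, drive=0.40))   # tips over to the positive attractor (about +1.2)
```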
Feedback Loops and Self-Organisation
A core principle of CEMLA is that language learning is guided by positive and negative feedback loops, where successful interactions strengthen certain neural pathways, and unsuccessful ones weaken them. In AI development, the introduction of feedback loops and self-organising principles would move us closer to creating systems that can learn and adapt autonomously.
Positive feedback would strengthen connections within the AI’s internal model (akin to Hebbian learning in the brain), while negative feedback would trigger a recalibration of its strategies. This would give rise to AI systems capable of self-organisation, an intelligence that isn’t imposed but emerges over time.
This would involve using dynamic weighting matrices, which adjust based on the success of each interaction, allowing the AI to autonomously reorganise its internal structure. The result would be an AI that doesn’t merely respond to inputs but learns from its own failures and successes, continually refining its model to become more sophisticated without human intervention.
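As a hedged illustration of what such a dynamic weighting matrix could look like, here is a toy Hebbian-style update, signed by feedback and kept bounded by a crude norm constraint. Again, the specific rule is an assumption of mine for the sake of the example, not a specification from CEMLA.

```python
import numpy as np

class DynamicWeights:
    """Toy dynamic weighting matrix: Hebbian strengthening on positive
    feedback, recalibration (weakening) on negative feedback."""

    def __init__(self, n_units, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_units, n_units))
        self.lr = lr

    def update(self, activity, feedback):
        """activity: vector of unit activations for this interaction.
        feedback: scalar in [-1, 1] judging how well the interaction went."""
        a = np.asarray(activity, dtype=float)
        # Hebbian term: units active together become more strongly coupled,
        # signed by feedback so failures weaken the same couplings.
        self.W += self.lr * feedback * np.outer(a, a)
        # Soft norm constraint: keeps the matrix bounded so the system keeps
        # reorganising instead of saturating (a crude stand-in for proper
        # normalisation schemes).
        norm = np.linalg.norm(self.W)
        if norm > 1.0:
            self.W /= norm

    def respond(self, activity):
        """Current 'behaviour': activity propagated through the learned couplings."""
        return np.tanh(self.W @ np.asarray(activity, dtype=float))

# Each interaction reshapes the weights; there is no separate retraining phase.
dw = DynamicWeights(n_units=4)
dw.update([1.0, 0.0, 1.0, 0.0], feedback=+1.0)  # successful pairing of units 0 and 2
dw.update([0.0, 1.0, 0.0, 1.0], feedback=-0.5)  # unsuccessful pairing of units 1 and 3
print(dw.respond([1.0, 0.0, 0.0, 0.0]))
```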
This is NOT to be confused with Generalised Reinforcement Learning
It is important that I clarify here that self-organisation, in the context of the CEMLA framework, is NOT the same thing as Monte Carlo methods or RL.
Both Monte Carlo and generalised RL are designed to optimise towards a pre-defined goal or reward function. In RL, the agent interacts with the environment, gathers feedback, and updates its policies based on maximising future rewards. The entire system is oriented around improving performance for that particular objective, which is often rigidly defined and specific to the task at hand.
A CEMLA-inspired AI won’t simply optimise towards a predefined goal. Instead, the system continuously reorganises itself in response to feedback in a broader, more holistic sense. It’s not just aiming for task-specific optimisation. It’s dynamically adjusting its structure based on all the interactions it experiences, which can span multiple domains or contexts simultaneously. The intelligence that emerges is not tied to a single reward metric but to the ability to self-organise in an open-ended manner, adjusting in ways that extend beyond predefined rewards.
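To make the contrast concrete, here is a deliberately simplified comparison: a textbook Q-learning update, whose only notion of ‘better’ is a larger expected reward for one fixed objective, next to the open-ended reorganisation sketched earlier. Both snippets are caricatures for illustration, not faithful implementations of either paradigm.

```python
from collections import defaultdict

# Generalised RL: every update is in service of one predefined reward signal.
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Classic temporal-difference update: the agent's only notion of 'better'
    is a larger expected reward for a fixed, task-specific objective."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

Q = defaultdict(lambda: defaultdict(float))
q_update(Q, state="s0", action="a1", reward=1.0, next_state="s1")

# CEMLA-style self-organisation, by contrast, has no reward function to
# maximise: feedback from any interaction, in any domain, reshapes the
# structure itself (as in the AdaptiveNetwork sketch earlier), and the
# 'goal' is simply to keep reorganising coherently as experience accumulates.
```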
I will likely discuss this in much greater detail in future articles. But for now, this basic distinction should suffice.
Phase Transitions
In AI today, training is a gradual, often incremental process. However, CEMLA suggests that learning, especially in complex adaptive systems, doesn’t always occur linearly. There are moments when the system reaches a critical threshold and reorganises itself, resulting in a sudden leap in capability. In AI, this could translate to systems that experience non-linear growth.
These phase transitions could be modelled using sigmoidal functions, where small, continuous inputs eventually lead to abrupt, large-scale changes in the system’s behaviour. AI systems designed with this in mind could experience sudden boosts in fluency or problem-solving ability as they accumulate experiences and interactions. Imagine, for a second, a world where you no longer have to build the NEXT BIG MODEL in AI; instead, the AI builds it itself through phase transitions. If you are an AI developer, this should be music to your ears.
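For the curious, here is what ‘modelled using sigmoidal functions’ might look like in the simplest possible terms: capability as a logistic function of accumulated experience, flat for a long stretch and then jumping across a narrow critical region. The threshold and steepness values are arbitrary, chosen only to make the shape visible.

```python
import math

def capability(experience, threshold=100.0, steepness=0.15):
    """Logistic (sigmoidal) model of a phase transition: capability barely
    moves below the threshold, then rises sharply across a narrow band."""
    return 1.0 / (1.0 + math.exp(-steepness * (experience - threshold)))

for e in (60, 80, 95, 100, 105, 120, 140):
    print(f"experience={e:>3}  capability={capability(e):.3f}")
# The output climbs from roughly 0.00 at 60 to 0.50 at the threshold and
# nearly 1.00 by 140: a long quiet phase, then an abrupt, large-scale shift.
```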
But What About The Terminator?
Okay, at this point I can already hear the word ‘Terminator’ being hurled my way, along with a whole bunch of Arnold Schwarzenegger gifs. NO, I am not saying we should build a Terminator.
It’s a natural question though, I’ll give you that. The fear comes from the assumption that autonomy and emergent intelligence must somehow lead to uncontrollable, conscious AI. But that misses a critical distinction. Consciousness itself is unnecessary for emergent intelligence, and frankly, it’s not something we need or should aim for. Consciousness, as a biological process, has no real value in an AI system. What we do need is conscious-like behaviour: the ability to adapt, learn, and respond intelligently within strictly defined guardrails.
Building AI systems that exhibit conscious-like behaviour within clear, pre-defined guardrails is, in fact, the best safeguard against the very scenarios people fear. Why? Because it ensures that AI systems remain highly functional and adaptive, but always under control. By focusing on creating controlled emergent intelligence, we avoid the pitfalls of trying to recreate human-like sentience, while still leveraging the most advanced forms of AI development.
This approach neutralises the Terminator narrative by designing out the risk from the very beginning. Instead of fearing the rise of uncontrollable AI, we should focus on creating AI that functions intelligently within clearly defined guardrails, leveraging its emergent capabilities to solve complex problems without ever stepping outside the boundaries set for it. Because the AI would be self-organising and emergent, its adherence to those guardrails would improve with every interaction it has with its environment and other agents, sparing us, as developers, from playing cat-and-mouse with the bad actors trying to use the AI for destructive purposes. I hope this addresses your concerns satisfactorily. If not, please let me know, and I would be happy to delve deeper into this in future articles.
Notes
Alright, I think I have said what I had to say, so without further ado, I think I will just say some more. If you are bored, I don’t hold it against you. But given the gravity of what we are discussing, I just can’t fathom getting bored in the first place.
I just want to share some notes on the success metrics we use in the field of AI these days. The current narrative often treats AGI as the ultimate benchmark for success, i.e., machines that can mimic human intelligence across all domains. But CEMLA presents a different vision. Instead of aiming for AGI, we should aim for systems that excel in adaptability, creativity, and contextual intelligence without requiring consciousness or fully generalised intelligence.
Success is measured by the system’s ability to adapt in real-time to novel situations, learning continuously without retraining. This adaptability can be measured in fields like autonomous systems, where AI would need to adjust its behaviour based on changing physical environments, or in healthcare, where it could refine its diagnostic models as it learns from patient outcomes.
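One hedged way to turn ‘adapts in real time to novel situations’ into a number is to measure how quickly a system’s error recovers after an unannounced shift in its environment. The metric below is my own illustrative proposal, not an established benchmark.

```python
def time_to_recover(errors, shift_step, tolerance=0.1):
    """Adaptability metric (illustrative): after the environment shifts at
    `shift_step`, how many further interactions does the system need before
    its error falls back below `tolerance`? Lower is more adaptive."""
    for step, err in enumerate(errors[shift_step:]):
        if err <= tolerance:
            return step
    return None  # never recovered within the logged window

# Example: error spikes when the environment changes at step 5,
# then decays as the system reorganises itself online.
errors = [0.05, 0.04, 0.05, 0.04, 0.05, 0.60, 0.35, 0.18, 0.09, 0.06]
print(time_to_recover(errors, shift_step=5))   # -> 3
```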
AI systems built with CEMLA principles would be evaluated by their ability to generate novel solutions to complex problems. In scientific research, success wouldn’t just be about processing data but about identifying previously unseen connections and patterns, leading to breakthroughs in areas like drug discovery, materials science, or even fundamental physics. I don’t think AI development, on its current trajectory, can get there.
Traditional AI systems rely heavily on expanding computational capacity: training models on bigger datasets, building larger neural networks, and increasing processing power. While this has driven much of the recent progress, it is hitting diminishing returns. You can see it happening as we speak. Not even two years into the current LLM boom, these models are already hitting a figurative ceiling. More data doesn’t necessarily lead to more intelligence, and bigger models often come with greater inefficiencies.
Self-organising systems don’t need massive datasets to function. They need the ability to learn from fewer examples, using context and feedback to refine their intelligence. This shifts the focus from scaling up to building more intelligent, self-regulating architectures capable of growth and adaptation over time.
In fields like robotics or climate science, CEMLA-inspired AI could discover emergent solutions to complex, multi-variable problems by constantly interacting with and learning from its environment. Such systems would no longer be constrained by the need for predefined answers; they would evolve their understanding, uncovering new pathways that could lead to real innovation.