Do We ‘Create’ Complex Systems?
What a Coffee Shop, Rabbit Island, and LLMs Tell Us About Complex Systems and Our Responsibility in Building Safe AI
Fellow Nerds,
I was at a coffee shop the other day, procrastinating on my writing by watching a video on overcoming procrastination. People were moving in and out like clockwork, some lingering over their laptops, others deep in conversation, and a few having what looked like serious meetings, all fuelled by overpriced caffeine and free Wi-Fi. No one seemed out of place, yet nothing about it felt orchestrated. The baristas served coffee and wiped down tables, but they weren’t controlling the flow of interactions. The store provided the space and resources, but the life of the place came from the people inside, doing their own thing, oblivious to any bigger picture.
And that’s when I started thinking about rabbits (like any normal person would). Specifically, the rabbits of Ōkunoshima, better known as Rabbit Island in Japan. It’s a small island near Hiroshima, once used for manufacturing chemical weapons during World War II, now overrun by hundreds of free-roaming rabbits. Tourists flock there to feed them, take photos, and take in all the fluffy goodness of these creatures.
The funny thing is, while humans brought the rabbits there in the first place and love interacting with them, the rabbits don’t really seem to need us. They’ve established their own little ecosystem, their own patterns of behavior. We might feed them snacks, but they’re not waiting on us to survive. They’ve got their own thing going.
Sitting there at that coffee shop, it struck me how these two seemingly unrelated scenes, i.e., people in a coffee shop and rabbits on an island, were pretty much telling the same story. They both represent systems that seem to run themselves, full of interactions and dynamics we didn’t explicitly design or control. We might have set the stage, but the complexity emerged on its own.
And that led me to a question. Can we really create complex systems? Or are we just good at facilitating the conditions for them to arise naturally? Because those are two very different things. If we are not creators of such systems, then that would imply that we are more like gardeners, nudging things along and watching what grows. There’s an AI-related angle to it as well that I will introduce as this article progresses. So if I have piqued your interest, please stick around!
So, What Are We Talking About?
What is a complex system? Let’s start here. A complex system can be thought of as a collection of parts that interact in such a way that the system as a whole behaves differently than the sum of its parts would suggest. In other words, it’s not additive, it’s emergent. The interactions between the parts create patterns, behaviors, and properties that you wouldn’t see if you just looked at each part in isolation.
I personally liked Yi Lin’s definition of a complex system from the textbook Systems Science (2012):
“A system is complex if it is composed of many parts (subsystems) that are complicatedly interwoven and connected, and the behavior, functionalities, and characteristics of the whole cannot be directly acquired from those of the parts.”
There is no better (or rather, more obvious) example of a complex system than the brain reading this article right now. At the cellular level, your brain is just a mass of neurons, about 86 billion of them. Each neuron is a relatively simple unit. It receives signals, processes them, and sends signals on to other neurons. There’s no single neuron that contains your thoughts or your sense of self. But when you connect billions of these neurons together, consciousness appears. Thoughts, emotions, memories, creativity, all of these things emerge from the interactions between neurons. It’s in the connections, the relationships between the neurons, where the magic happens.
This phenomenon where simple parts give rise to unexpected complexity is what we call emergence. I have spoken at length about emergence in many of my previous articles, both at Arkinfo Notes and The Lumeni Notebook. But the most detailed analysis of emergence can be found in the famous CEMLA paper that I wrote last year.
Emergence is what allows birds to flock, fish to school, and cities to function. No single bird is leading the flock, no one fish is coordinating the school, and no mayor or city planner could ever fully predict the chaotic, vibrant life of a city. Yet, all of these systems display behaviors that are organised, adaptive, and, in many cases, absolutely beautiful.
Take an ant colony, for example. Each individual ant follows simple rules, i.e., if it finds food, it leaves a pheromone trail. If it loses the trail, it searches randomly until it finds one again. None of the ants have any idea what the colony as a whole looks like. There’s no master ant architect. But from these simple interactions, the colony builds intricate tunnels, optimises food collection routes, and even allocates tasks based on the colony’s needs. The complexity arises from the interactions, not from some grand, top-down design.
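If you'd rather see that idea in code than take my word for it, here is a minimal, entirely made-up sketch in Python. The grid size, evaporation rate, and number of ants are arbitrary toy values, the "walk home" rule is simplified to a straight diagonal stroll, and real ants are far more sophisticated. But even this caricature produces a trail that no individual ant planned.

```python
import random

GRID = 30                 # width/height of a toy square world (illustrative)
NEST, FOOD = (0, 0), (GRID - 1, GRID - 1)
EVAPORATION = 0.02        # fraction of pheromone that fades each step (assumed)
STEPS, ANTS = 2000, 50

pheromone = {}            # (x, y) -> pheromone strength laid down so far

def neighbours(x, y):
    cells = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return [(a, b) for a, b in cells if 0 <= a < GRID and 0 <= b < GRID]

def step(ant):
    """One ant, two local rules: wander (biased by pheromone) when searching,
    walk home and lay pheromone when carrying food."""
    x, y = ant["pos"]
    if ant["has_food"]:
        pheromone[(x, y)] = pheromone.get((x, y), 0.0) + 1.0
        ant["pos"] = (x - (x > 0), y - (y > 0))   # simplified straight walk home
        if ant["pos"] == NEST:
            ant["has_food"] = False
    else:
        options = neighbours(x, y)
        weights = [pheromone.get(c, 0.0) + 0.1 for c in options]
        ant["pos"] = random.choices(options, weights=weights)[0]
        if ant["pos"] == FOOD:
            ant["has_food"] = True

ants = [{"pos": NEST, "has_food": False} for _ in range(ANTS)]
for t in range(STEPS):
    for ant in ants:
        step(ant)
    for cell in list(pheromone):
        pheromone[cell] *= (1 - EVAPORATION)       # trails fade unless reinforced

# The strongest cells trace a route between nest and food that exists only
# in the shared pheromone map, not in any single ant's "head".
trail = sorted(pheromone, key=pheromone.get, reverse=True)[:10]
print("strongest trail cells:", trail)
```

The interesting part is that the trail lives entirely in the interactions, i.e., in the shared pheromone map, never inside any individual ant.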
Or consider traffic patterns in a city. Each driver is just trying to get from point A to point B, but their individual decisions, when to accelerate, when to brake, when to change lanes, combine to create larger patterns, i.e., traffic jams, rush hour congestion, or even those odd moments when everything just flows perfectly. No single driver is responsible for a traffic jam, but the jam is a product of everyone’s collective behavior.
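Traffic researchers have a famously tiny model of exactly this, the Nagel–Schreckenberg cellular automaton. Below is a rough Python sketch of it; the road length, car count, and braking probability are illustrative values I picked, not calibrated to anything real.

```python
import random

ROAD_LEN, N_CARS = 100, 30      # cells on a circular road, number of cars (assumed)
V_MAX, P_SLOW = 5, 0.3          # speed limit and random-hesitation probability

positions = sorted(random.sample(range(ROAD_LEN), N_CARS))  # one car per cell
speeds = [0] * N_CARS

def gap_ahead(i):
    """Empty cells between car i and the next car on the circular road."""
    nxt = positions[(i + 1) % N_CARS]
    return (nxt - positions[i] - 1) % ROAD_LEN

for t in range(100):
    for i in range(N_CARS):
        speeds[i] = min(speeds[i] + 1, V_MAX)           # try to speed up
        speeds[i] = min(speeds[i], gap_ahead(i))        # brake to avoid the car ahead
        if speeds[i] > 0 and random.random() < P_SLOW:  # occasional random hesitation
            speeds[i] -= 1
    positions = [(p + v) % ROAD_LEN for p, v in zip(positions, speeds)]
    order = sorted(range(N_CARS), key=lambda i: positions[i])  # keep ring order
    positions = [positions[i] for i in order]
    speeds = [speeds[i] for i in order]
    if t % 20 == 0:
        print(f"t={t:3d} stopped cars: {speeds.count(0)}")
```

Run it and you will see clusters of stopped cars, the so-called phantom jams, appear and drift backwards along the road, even though every "driver" follows the same three local rules and nobody decided to cause a jam.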
This brings me back to what I saw at the coffee shop. The baristas weren’t orchestrating the flow of conversations, the tapping of keyboards, or the spontaneous meetings happening at different tables. The store simply provided a space, a set of conditions, i.e., coffee, seating, and Wi-Fi. People filled that space with their own interactions.
We see this same emergent behavior in artificial systems like large language models. These models don’t understand language in the way humans do. They process enormous amounts of text, learn patterns, and generate responses based on statistical probabilities. But from those simple rules and patterns, they produce outputs that can feel surprisingly coherent, sometimes even creative. The complexity of the model’s behavior, the way it can answer questions, tell stories, or mimic human conversation, isn’t something explicitly programmed. It emerges from the system’s architecture and the data it’s trained on.
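To make the "statistical probabilities" part concrete, here is the smallest caricature I can think of, a bigram model in Python. A real LLM is a transformer with billions of parameters, not a lookup table, and the corpus below is three silly sentences I made up, but the principle of predicting the next token from patterns in the data is the same.

```python
import random
from collections import defaultdict

# A tiny corpus standing in for "enormous amounts of text" (made up for illustration).
corpus = (
    "the rabbits on the island feed on grass . "
    "the tourists feed the rabbits . "
    "the rabbits do not need the tourists ."
).split()

# Count which words follow which: a bigram table.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=12):
    """Sample a continuation word by word from the observed statistics."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))  # frequent followers are sampled more often
    return " ".join(words)

print(generate())
```

Nothing in that code "knows" anything about rabbits or tourists; it only knows which words tended to follow which. Scale the data up enormously, swap the lookup table for a deep neural network, and you get outputs that start to look uncannily like understanding.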
And this is where the line between creating and facilitating starts to blur. We can build the structures, set the conditions, and input the data, but the complexity that arises often feels like it has a mind of its own.
When people introduced rabbits to the island of Ōkunoshima, they weren’t designing an ecosystem. They weren’t thinking about how rabbit populations would grow, how they’d interact with the environment, or how tourists would one day flock to see them. But that’s exactly what happened. The rabbits didn’t need a master plan. They adapted, formed social hierarchies, and found ways to thrive. The island became its own little world, shaped by interactions far beyond anyone’s control.
Coffee Shops, Rabbits, & LLMs
I hope I have established my point with regards to the commonality between a coffee shop, a rabbit-infested island in Japan, and large language models (LLMs) like GPT. I know that it's a connection that may seem far-fetched at first glance. One is a coffee shop, the second is an ecological oddity, and the third is a piece of cutting-edge artificial intelligence. But the more I think about it, the more I realise they all share something fundamental, i.e., they’re systems we’ve helped set in motion, but we don’t (and maybe can’t) fully control.
LLMs are actually built using a relatively straightforward architecture. At their core, they’re statistical models trained on vast amounts of text data. That’s it. They learn patterns, how words tend to follow one another, how sentences are structured, how meaning is conveyed. But no one sat down and programmed these models to speak the way humans do. No one told them how to write poetry, explain complex ideas, or mimic human conversation. Instead, we fed them data. We fine-tuned their parameters. We added layers of reinforcement learning to guide them in certain directions. But the actual outputs, i.e., the surprising, creative, sometimes downright uncanny things these models produce, aren’t explicitly designed. They are emergent. The complexity arises from the interactions between the model’s architecture and the data it’s trained on, much like the social buzz at a coffee shop or the rabbit ecosystem on Ōkunoshima.
So, Creators or Gardeners?
When we think of creation, we tend to imagine a top-down process, something like an architect drafting blueprints from scratch. There’s a sense of control baked into the idea of creation. It suggests intentionality, direction, and a finished product that aligns with a preconceived plan.
But when I look at the functioning of complex systems, that narrative just doesn’t add up, because complexity doesn’t arise from rigid design; it thrives in spaces where unpredictability, interaction, and feedback loops are allowed to flourish.
This is the reason why I like gardening as a metaphor. A gardener doesn’t create a plant in the same way an artist creates a painting. They can choose what seeds to plant, where to plant them, and how to nurture them with sunlight, water, and nutrients. But the actual growth of the plant? That’s not something the gardener designs. It emerges from the interactions between the plant’s genetics, the environment, and countless other factors beyond the gardener’s control. And that’s exactly how complex systems work.
When we introduce rabbits to an island, we’re planting a seed. But the ecosystem that develops, i.e., the way the rabbits interact with the environment, with each other, and with the humans who come to feed them, is something we didn’t design. We might have influenced it, nudged it in certain directions, but we didn’t create it in the way we like to imagine.
The same is true for a coffee shop and the same is true for an LLM. Engineers and researchers designed the model’s architecture and trained it on vast datasets, but the emergent behaviors, the surprising, often unpredictable outputs, aren’t things they explicitly programmed. The model learns in a way that mirrors how complex systems in nature evolve, i.e., through repeated interactions, feedback, and adaptation. The result is something that feels less like a product and more like a phenomenon.
So, if we’re not the creators in the traditional sense, what are we? I think we’re facilitators of emergence. We create environments where complexity can flourish, but we don’t control the outcomes. We’re more like gardeners than architects, setting the conditions, tending to the system, but ultimately standing back to watch as life, in all its unpredictability, unfolds.
Our Profound Responsibility in Facilitating Safe AI
The reason I wanted to write this article is to underline the profound implications of this, especially when it comes to responsibly building AI. If we acknowledge that we’re not fully in control of the systems we set in motion, what does that mean for how we manage them? Just because we didn’t explicitly design every output doesn’t mean we’re off the hook for the biases and unintended consequences that emerge. Facilitating complexity doesn’t absolve us of responsibility. It demands a more thoughtful, adaptive approach to managing these systems. This realisation, of being a gardener of AI rather than a creator, has woken me up to the profound importance of AI regulation and ethics.
The narrative often presented by big tech is one of complete control and mastery. They frame AI systems as tools and copilots: precisely engineered, fully predictable, and entirely within human command. This portrayal is an absolute facade, a comforting illusion designed to ease public apprehension and sidestep deeper scrutiny.
As with the rabbits of Ōkunoshima and the social dynamics of a coffee shop, the behaviors that arise within these systems often exceed the intentions and foresight of their creators. AI, particularly large language models, operates under similar principles. While engineers and researchers design the architecture and set the initial parameters, the true nature of these models emerges from interactions within vast datasets and the feedback loops created by real-world use. This complexity is not something that can be neatly boxed or fully anticipated.
I know you must be thinking Terminator and Jurassic Park at this point, but the real threat is more insidious and mundane. It's the subtle, pervasive ways in which unregulated AI systems can amplify biases, spread misinformation, and entrench existing societal inequalities. Not to forget that pretty much every big tech company has now removed its policies prohibiting the use of its AI in building weapons and waging war.
We need to acknowledge the inherent unpredictability of complex systems and accept our responsibility as facilitators, whilst breaking free of the lie of being its creators and controllers. Just as we would steward an ecosystem or nurture a community, we must approach AI with a mindset of ethical care and proactive governance. This means transparent algorithms, accountable data practices, and a commitment to continuous evaluation and adjustment.
The stakes are too high to rely on the comforting myths spun by those who stand to profit most. Big Tech is lying to you. Plain and simple. They did not create AI and they cannot control what they have facilitated. We must demand transparency and accountability before it's too late.