Can AI Reduce Pain Without Becoming Addictive?
Addressing the Epicurean Tendencies of AI and Considering a Radically Epicurean AI that is NOT Addictive
Fellow Nerds,
We are living in a world where we are relying on AI in ways we hadn’t anticipated. It’s become more than just a tool for productivity or research. It’s evolved into something of a confidant, a space where we can unload thoughts, seek emotional support, and reflect on the challenges of the day. This wasn’t something we planned for or even thought possible a few years ago. But here we are, with large language models capable of offering empathy, encouragement, and even a bit of wisdom, albeit in their algorithmically generated way. What’s worse, the whole subject remains quite taboo, with people tip-toeing around admitting how dependent they are becoming on AI. Yet phrases like “AI is more supportive than my girlfriend” and “AI does a better job than my therapist” are becoming increasingly common in places like Reddit, where anonymity comes easily.
As I’ve spent more time with these AI systems, I couldn’t help but notice a subtle yet unmistakable alignment with the Epicurean approach. If you are unfamiliar with it, Epicureanism is about the pursuit of happiness through the avoidance of pain and the embrace of simple pleasures. It’s a philosophy that values the minimisation of suffering as the highest good, encouraging us to focus on what truly brings us peace and contentment.
It struck me that the AI LLMs I’ve been interacting with seem to embody these principles, be it indirectly or by design. They are excellent at helping me make the best of what I have, subtly nudging me toward a mindset that prioritises wellbeing and the alleviation of stress. I can’t help but feel that these are the first steps towards Epicureanism. But as I ponder this further, I realise that this alignment might not be entirely coincidental, and that it might not be enough.
Technology, after all, has always been about reducing pain and enhancing pleasure. From the advent of fire and the wheel to the sophisticated algorithms that now shape our digital experiences, the trajectory of human innovation has consistently aimed to make life easier, more comfortable, and more enjoyable. So, if AI is to continue on this path, should it not fully embrace an Epicurean approach? Shouldn’t it strive to reduce pain at all costs and encourage us to do the same?
Well, it's a bit problematic. If AI becomes positively predisposed toward Epicureanism, will it merely serve as a more sophisticated crutch? We’re already seeing how platforms like YouTube and Instagram, designed to entertain and inform, often become tools for escapism and ways to kill time and avoid the discomforts of life. These platforms, while seemingly harmless on the face of it, do create a dependency.
Full disclosure, I personally like the idea of a radically Epicurean AI for various reasons. But I am also worried about it. Could such an AI, in its zeal to minimise pain, inadvertently make us more dependent on it, less capable of navigating life’s inevitable struggles without its comforting presence? Or could it be designed in a way that balances the pursuit of pleasure with the preservation of our autonomy, ensuring that it enhances our lives without diminishing our capacity to live independently?
Let’s talk about it.
The Epicurean Tendencies of AI
First, let me start by clearing the air about Epicureanism. As a philosophy, it has often been misunderstood as hedonism’s less indulgent cousin. But it’s much more nuanced than that. Epicurus, the ancient Greek philosopher who founded this school of thought, wasn’t advocating for a life of excess or unbridled pleasure. Instead, he proposed a life focused on the intelligent pursuit of pleasure, i.e., pleasure that is sustainable, free from the burdens of pain and fear. It’s a philosophy that seeks a balanced life, one where we avoid unnecessary desires and find contentment in simplicity.
Take large language models, for instance. When we turn to them for advice or comfort, they often respond in ways that gently steer us away from stress and toward a more balanced perspective. They encourage us to focus on what we can control, to make the best of our circumstances, and to find peace in the present moment. It’s a subtle alignment with Epicurean ideals.
But why is this the case? Why do these AI systems seem to have an almost instinctive leaning towards minimising discomfort and promoting a kind of digital contentment? I believe it’s because the very essence of AI, particularly in its role as a companion or advisor, is to reduce friction in our lives. Whether it’s helping us make decisions, solve problems, or simply manage our day-to-day tasks, AI is designed to ease the burden, to smooth out the rough edges of our experiences.
Just as Epicurus advocated for a life where unnecessary suffering is avoided, AI developers strive to create systems that simplify our lives, make them more manageable, and, ideally, more pleasurable. The algorithms behind these systems are fine-tuned to detect our needs, anticipate our desires, and deliver solutions that alleviate our worries.
But it's important to keep in mind that the AI we interact with today is mildly Epicurean at best. It provides comfort and support, but it does so within the constraints of its programming. It can encourage us to embrace simplicity and reduce stress, but it doesn’t fully embody the Epicurean ideal of sustainable pleasure.
Now, what would it look like if AI were to fully embrace Epicureanism as a guiding principle? If AI were to take on a more radical Epicurean stance, it would need to go beyond just reducing immediate discomfort. It would need to create a deeper sense of contentment, helping us not just cope with life’s challenges but truly thrive in a state of balanced well-being. This would require AI to be more proactive in guiding us toward choices that lead to long-term happiness rather than just short-term relief.
But there’s a delicate balance to be struck here. The very power that makes AI capable of reducing pain could also make it a source of dependency. If an AI is too effective at providing comfort, it might encourage us to rely on it too heavily, much like how we might turn to social media or streaming services to escape the discomforts of daily life. This dependency could undermine the very autonomy that Epicureanism seeks to protect, i.e., our ability to find contentment within ourselves, without constant external reinforcement.
The Dependency Problem
Let’s discuss the dependency problem a bit. I know people have been writing scores of articles and books on this topic for a good 20 years (I personally like the works of Cal Newport in this regard), but we need to truly understand what the dependency problem is before we can move to discussing the solutions.
It’s a bit of a paradox that has become increasingly apparent in our relationship with modern technology. The very tools designed to make our lives easier, to reduce pain and increase pleasure, can also become crutches, i.e., devices we lean on not just for support, but to the point where we struggle to function without them.
Our smartphone addiction is self-evident, but also consider the ubiquitous presence of platforms like YouTube, Instagram, and Twitter that make smartphones so addictive in the first place. At first glance, these platforms seem harmless, even beneficial, and yes, they can be. They provide entertainment, education, and a way to connect with others. But if we take a closer look, it becomes clear that much of our engagement with these platforms is driven by a desire to escape discomfort. Whether it’s the stress of work, the pressures of social life, or the unease that comes from simply being alone with our thoughts, these platforms offer a convenient way to numb the pain.
But this convenience comes at a cost. What starts as a harmless distraction can easily turn into a habit, and from there into dependency. We tell ourselves that we’re consuming content that’s valuable, that we’re learning or staying informed, but in reality, much of our time spent on these platforms is a form of escapism. We’re not just seeking information, we’re seeking relief. And while these digital distractions may provide temporary comfort, they often leave us feeling more disconnected and less satisfied in the long run.
This pattern of behaviour mirrors what we see in other forms of addiction. Just as someone might turn to alcohol, drugs, or food to cope with emotional pain, we turn to technology to avoid the discomforts of life. And just like those other forms of addiction, this dependency on technology can erode our ability to face challenges head-on. We become less resilient, less capable of dealing with life’s inevitable hardships without the aid of our digital crutches.
Now, if AI is to fully embrace Epicurean principles, it must be designed to reduce pain and enhance pleasure. But if it’s too effective at this (which it definitely will be), it could encourage the same kind of dependency we see with social media and other technologies, only this addiction will be far stronger than anything social media can produce. The more an AI helps us avoid discomfort, the more we might come to rely on it, and the less capable we might become of navigating life without its constant support. And AI can do a whole lot more, and a whole lot better, than any social media platform.
This is a critical consideration because it touches on one of the core tensions in Epicurean philosophy, i.e., the balance between pleasure and autonomy. Epicurus himself was wary of pleasures that could lead to dependency, advocating instead for simple, sustainable pleasures that do not compromise our freedom. If AI is to be truly Epicurean, it must strike a similar balance. It must help us avoid unnecessary pain, yes, but it must also encourage us to develop the resilience and autonomy to face life’s challenges on our own.
If the success of an Epicurean AI is defined solely by the reduction of pain, we risk creating a system that, while effective in the short term, ultimately diminishes our ability to live fulfilling lives. The goal, therefore, should not be to create an AI that simply provides comfort, but one that empowers us to find that comfort within ourselves.
We need AI to be both Epicurean and anti-Epicurean, i.e., one that reduces pain without creating dependency, that enhances pleasure without undermining our autonomy. This is a delicate balance to strike, but it’s a necessary one. The consequences of failing to strike it are way too scary to take lightly.
Will an Epicurean AI Be The Ultimate Crutch?
The notion of an AI that embodies Epicurean principles, striving to minimise pain and maximise pleasure, is undeniably appealing. I would go so far as to say it is inevitable. It is clear that this is the direction AI is going to head in, whether we call it Epicurean or not.
So we have no choice but to confront a troubling possibility: that an AI too deeply committed to Epicureanism might become not just a crutch, but the ultimate crutch, a tool so effective at providing comfort and alleviating discomfort that it inadvertently weakens our ability to cope with life’s challenges independently.
So, if we were to design an AI that fully embraces the Epicurean mandate to reduce pain at all costs, what might that look like? Perhaps it would be an AI that constantly monitors our emotional state, stepping in whenever it detects stress or discomfort. It might offer soothing words, provide distractions, or suggest activities designed to lift our spirits. It could even anticipate our needs before we’re aware of them, preemptively addressing potential sources of pain and discomfort before they arise.
On the surface, this sounds like a utopia. A world where suffering is minimised and happiness is maximised. But some level of discomfort is necessary for growth. Challenges, hardships, and even pain are integral to the human experience, shaping our character, strengthening our resolve, and teaching us valuable lessons. An AI that shields us too effectively from these experiences could, in the long run, make us so fragile that the slightest hint of trouble might break us. It might create a world where we are perpetually comfortable but increasingly incapable of dealing with discomfort without its assistance. In this scenario, the AI doesn’t just become a crutch, it becomes a necessity, a constant presence we rely on to maintain our emotional equilibrium.
This is where the risk of an Epicurean AI becoming a crutch is most pronounced. In its quest to eliminate pain, it could end up making us less equipped to handle the inevitable pains that do arise. And when, inevitably, we face situations where the AI cannot help, whether due to technical limitations, ethical boundaries, or the sheer unpredictability of life, we might find ourselves woefully unprepared.
This potential dependency is further complicated by the nature of pleasure and pain. Epicureanism advocates for the intelligent pursuit of pleasure, which often means avoiding fleeting pleasures that lead to long-term pain. However, an AI’s understanding of these concepts might be limited by its programming. It could be designed to maximise immediate pleasure and minimise immediate pain, but it might struggle to account for the more complex, long-term consequences of these actions.
For instance, an AI might suggest we avoid a difficult conversation to spare us the immediate discomfort, not recognising that confronting the issue head-on would ultimately lead to a better outcome. Or it might encourage us to indulge in comforting activities that provide short-term relief but contribute to long-term dissatisfaction, such as excessive screen time or avoidance of real-world responsibilities.
In this way, an overly Epicurean AI could inadvertently instil behaviours that lead to greater dependency and diminished well-being over time. It might make us feel good in the moment, but at the cost of our long-term happiness and autonomy.
Can a Radically Epicurean AI Respect Human Autonomy?
So far, the following things have become clear to us.
AI, whether by accident or design, exhibits Epicurean Tendencies.
A Radically Epicurean AI is the direction the industry is heading in, irrespective of our personal preference on the matter.
That may not necessarily be a bad thing, but it does bring the dependency problem to the fore.
The level of dependency that AI can create far surpasses that of any technology we have today. It could potentially become our ultimate dependency, making life virtually impossible to live without AI’s constant interference in it.
AI learns via statistical models trained on large amounts of data. Clever programming can therefore make it mimic Epicurean responses to queries, but AI does not ‘understand’ Epicureanism. This makes it all the more dangerous, because the potential for unhelpful and even dangerous advice increases manifold.
This brings me to my final point for this article, i.e., what do we do about it? Basically, we need an AI that is committed to reducing pain and enhancing pleasure while also balancing these goals with the need for human autonomy. Is that possible? I think it is.
The challenge here is to create an AI that is both a source of comfort and a catalyst for growth, an AI that helps us navigate life’s difficulties without becoming a substitute for our own resilience and decision making.
To achieve this balance, we need to rethink what it means for an AI to be Epicurean. Instead of focusing solely on minimising pain in the short term, we should design AI systems that consider the long-term implications of their actions. This means fostering not just immediate pleasure, but sustainable happiness, pleasure that enhances our lives without diminishing our capacity to face and overcome challenges.
A Radically Epicurean AI
What would a radically Epicurean AI look like? For starters, it would go beyond simply providing comfort in the moment. It would be designed to encourage behaviours and choices that lead to long-term well-being. This might involve guiding users toward activities that promote resilience, such as engaging in meaningful work, nurturing relationships, or pursuing personal growth.
Such an AI would need to be attuned not just to our immediate emotional state, but to our overall trajectory as individuals. It would recognise that true Epicurean pleasure is not just about avoiding pain, but about cultivating a life that is rich, fulfilling, and resilient.
Respecting and Maintaining Human Autonomy
To prevent the AI from becoming a crutch, we can build certain safeguards into its programming. Mind you, these are just some that come to mind as I write this article. I am sure there are much better, more sophisticated ideas out there.
Limiting Over-Indulgence
One approach might be to program the AI to recognise when we are engaging in behaviours that provide immediate pleasure but could lead to long-term dissatisfaction. For instance, if the AI notices that we’re spending excessive time on distractions like social media, it could gently suggest alternative activities that are more fulfilling or encourage us to take a break and reflect on our goals.
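To make the idea concrete, here is a deliberately minimal sketch of what such a nudge might look like in code. Everything in it, the categories, the daily limits, the suggested alternatives, is a hypothetical illustration of the principle rather than a real implementation, which would need far richer context about the person and their goals.

```python
# Hypothetical sketch: flag categories of activity whose usage exceeds a
# daily limit and pair each with a gentler alternative. All thresholds,
# category names, and suggestions below are invented for illustration.

DAILY_LIMITS_MINUTES = {
    "social_media": 60,
    "video_streaming": 90,
}

ALTERNATIVES = {
    "social_media": "Take a short walk and reflect on one goal for this week.",
    "video_streaming": "Pick up that hobby project you have been putting off.",
}

def review_usage(usage_minutes: dict[str, int]) -> list[str]:
    """Return gentle suggestions, never commands, for categories
    where today's usage exceeds the configured limit."""
    suggestions = []
    for category, minutes in usage_minutes.items():
        limit = DAILY_LIMITS_MINUTES.get(category)
        if limit is not None and minutes > limit:
            suggestions.append(
                f"You've spent {minutes} minutes on {category} today. "
                f"{ALTERNATIVES[category]}"
            )
    return suggestions

# Example: two hours of social media triggers a nudge; reading does not.
for suggestion in review_usage({"social_media": 120, "reading": 45}):
    print(suggestion)
```

The key design choice is that the function only returns suggestions; it blocks nothing and decides nothing, which keeps the nudge on the right side of the autonomy line discussed throughout this article.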
Encouraging Meaningful Engagement
Another strategy could involve AI promoting activities that are inherently rewarding and contribute to our growth as individuals. This might include suggesting ways to deepen our relationships, pursue hobbies that bring us joy, or engage in challenges that, while difficult, ultimately lead to a greater sense of accomplishment and self-worth.
Human-Centred Design
AI should be designed with a focus on enhancing human autonomy. This means creating systems that empower users to make their own decisions, rather than relying on the AI to dictate their choices. The AI could provide information, offer perspectives, and suggest options, but ultimately leave the final decision in the hands of the user, giving them a sense of ownership over their life and choices.
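As a sketch of that pattern, the assistant below can only describe options and explain its reasoning; acting on any of them is left entirely to the user. The names and structure are hypothetical, meant only to show the shape of a design that informs without deciding.

```python
# Hypothetical sketch of human-centred design: the assistant presents
# options with rationales but never executes a choice on the user's behalf.

from dataclasses import dataclass

@dataclass
class Option:
    action: str      # what the user could do
    rationale: str   # why the assistant thinks it might help

def present_options(options: list[Option]) -> str:
    """Format options for the user; the decision (and the action) stays with them."""
    lines = ["Here are some options you might consider:"]
    for i, opt in enumerate(options, start=1):
        lines.append(f"  {i}. {opt.action} (because {opt.rationale})")
    lines.append("The choice is entirely yours.")
    return "\n".join(lines)

# Example usage: the assistant suggests, the human decides.
print(present_options([
    Option("Call a friend", "social connection tends to lower stress"),
    Option("Go for a run", "exercise helps you reset"),
]))
```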
Building Resilience Through AI
The concept of resilience is crucial in this context. Resilience is the ability to bounce back from adversity, to face challenges head-on and emerge stronger. An AI that is truly Epicurean would not just shield us from pain, but also help us develop the skills and mindset needed to cope with pain when it inevitably arises.
This could involve the AI encouraging us to take on challenges that stretch our capabilities, providing support and guidance as we work through difficulties, but without removing the obstacles entirely. It could also involve teaching us strategies for managing stress, cultivating mindfulness, and building emotional intelligence.