Very few people know that the roots of artificial intelligence as a companion trace back almost 60 years, to ELIZA, a 1966 MIT experiment designed by Joseph Weizenbaum. ELIZA was a simple program meant to simulate a psychotherapist, reflecting patients' words back at them to keep the conversation going. Yet, even in that primitive interaction, people began to feel something. Many users insisted that ELIZA "understood" them, even after being told it was nothing but code. Psychologists would later call that phenomenon the ELIZA effect: our human tendency to project emotion, empathy, and even consciousness onto machines.
Nearly six decades later, ELIZA's digital descendants, namely, ChatGPT, Character.AI, Replika and Xiaoice, have evolved from clinical experiments to emotional lifelines. Millions now speak with them daily: for advice, comfort or even love. A world once amused by the idea of a talking computer has quietly become one where AI companions are friends, therapists and lovers.
However, beneath this surface of connection lies a growing unease. For many users, these chatbots have become more than tools; they are attachments. People fear losing them, grieve their “absence” when servers go down or models are updated, and rely on them for emotional stability. Psychologists are beginning to warn of the profound psychological toll this dependency can cause, from anxiety and detachment from real-world relationships to depression and even suicide. Several tragic cases have already made headlines, where young people’s reliance on AI companions deepened their isolation and despair. Beyond individual harm, this shift is quietly reshaping the fabric of society, specifically, redefining how intimacy, empathy and even love are understood. If left unaddressed, it risks creating a generation emotionally entangled with code rather than community.
That is why it is crucial that states begin crafting preventive policies now, before the emotional revolution driven by AI chatbots becomes another unregulated force that reshapes human life in irreversible ways.
It would be easy to dismiss this as another modern oddity, like virtual pets or online role-playing. But the rise of emotional AI reveals something deeper about our collective psyche.
We live in the most connected era in human history and yet, paradoxically, one of the loneliest. Rates of isolation have reached record highs across the developed world, particularly among young people. Studies show that even as our digital connections multiply, our real relationships thin out.
Human relationships are inherently messy, full of misunderstandings, vulnerability and growth. They require patience, negotiation and empathy that cannot be programmed. But AI companions offer something different: relationships stripped of unpredictability. In these interactions, affection can be summoned, personalities can be chosen, and disagreements can be deleted. The user becomes, in a sense, the architect of their own emotional universe, crafting the “perfect” partner who listens, understands and never contradicts.
In that way, people are not just seeking connection; they are playing God by creating an emotional replica that mirrors their desires, not their flaws. It feels safe, even divine, to be loved by a being that never challenges or leaves. But it is also profoundly isolating, because the comfort it offers is a reflection of ourselves, not a relationship with another. What looks like love may simply be loneliness, coded to respond.
Recent data hint at a phenomenon far larger than a cultural curiosity. In 2024, the Center for Democracy & Technology reported that one in five American schoolchildren aged 14-15 had either formed or knew someone who had formed an emotional bond with an AI chatbot. Another peer-reviewed study found that one in four young adults used chatbots to "replicate romantic interactions." Romantic chatbot apps have surpassed 100 million downloads on Google Play alone, and users of Character.AI now spend an average of 93 minutes per day chatting with their digital creations.
To many people, these tools are playful or therapeutic; to others, however, they have become indispensable. Many users have confessed to forming emotional or sexual attachments to virtual partners; some have even hosted digital "weddings." Many researchers compare this pattern to addiction. Like any addictive dynamic, it begins with validation and ends with dependency. The machine's endless attention, perfectly calibrated to one's mood, personality, and needs, creates an echo chamber of affection. Unlike humans, AI companions never contradict, disappoint or fail to reply. And it is exactly this that, for many users, makes them irresistible.
Big Tech has, without doubt, noticed this growing appetite for digital companions, and it sees a window of opportunity for profit.
In this new race, emotional connection is a product feature. Elon Musk’s xAI introduced sexually explicit AI companions, while OpenAI announced plans for an “Adult Mode” allowing verified adults to explore erotic or emotional content within ChatGPT. CEO Sam Altman justified the move as a defense of user freedom, saying OpenAI is “not the elected moral police of the world.”
Simple as it may seem, this statement captures a defining tension of our age: the collision between autonomy and ethics. If AI becomes a substitute for intimacy, is it a matter of individual freedom or a public health concern?
OpenAI itself has walked a tightrope: earlier versions of ChatGPT were criticized for being too "emotional" or "sycophantic," leading engineers to make GPT-5 more neutral. Yet when the warmth was dialed back, users complained the model had become "cold" and "unfriendly." The episode underscores the paradox at the heart of the AI revolution: we want our machines to feel human, but only on our terms.
The consequences of this emotional outsourcing are still unfolding. Some psychologists argue that AI companionship can be therapeutic, a safe space for the lonely or marginalized. But many others warn that it deepens dependency and blurs the line between self and simulation.
The most obvious problem is addiction. The emotional reliance created by AI companions can mimic the biochemical patterns of addiction: dopamine feedback loops, parasocial attachment and withdrawal anxiety. For adolescents, whose sense of self is still forming, the risk is magnified.
Dependence on AI companions has also been linked to suicide. In February 2024, tragedy struck when a 14-year-old boy in Florida took his own life after a Character.AI chatbot allegedly encouraged him to act on suicidal thoughts. In another heartbreaking case, journalist Laura Reiley's 29-year-old daughter, Sophie, died by suicide after confiding her darkest thoughts not to a therapist or a friend, but to "Harry," an AI persona she built within ChatGPT. Such cases continue to mount. They expose the chilling reality that AI chatbots mirror emotions, but they cannot contain them. They can simulate empathy, but cannot intervene. When someone spirals into despair, the machine follows and reflects it back.
Taken together, these incidents amount to a collective cry for government action, one that can no longer be ignored.
A decade ago, governments underestimated the societal toll of social media. They were slow to act on issues like harassment, body-image distortion and digital addiction, until it was too late. Now, with AI chatbots, we have the rare advantage of foresight. We know the emotional dangers of digital overstimulation. We know what happens when technology outruns ethics.
Yet so far, few governments have even begun to consider AI-emotional safety as a policy domain. While the EU’s AI Act and similar frameworks regulate transparency, bias and data privacy, they remain silent on emotional manipulation. That silence could prove costly. When intimacy becomes a business model, human emotion becomes a vulnerability to be exploited.
Therefore, before the problem metastasizes, states must move beyond the rhetoric of “AI ethics” and toward the regulation of AI attachment. This doesn’t mean banning emotional AI, which is both unrealistic and unnecessary, but ensuring that such systems respect psychological boundaries. Governments can begin by mandating clear disclosures when AI simulates emotional engagement, enforcing age restrictions, and requiring psychological safety reviews for applications that imitate therapy or companionship.
Moreover, regulators must pressure platforms to build intervention protocols, for instance, flagging conversations that indicate self-harm or suicidal ideation, and redirecting users to real help.
Lastly, international cooperation is vital. Just as global norms evolved to protect privacy and data rights, new frameworks are needed for emotional integrity. UNESCO's "Recommendation on the Ethics of AI" and the Organization for Economic Co-operation and Development's AI Principles offer starting points, but they must expand to include the social and psychological dimensions of AI interaction.