As artificial intelligence becomes increasingly common in daily life, more individuals are turning to these technologies for psychological support. Experts, however, warn that AI cannot replace human empathy and may provide incorrect guidance if not asked the right questions.
AI chatbots, now part of everyday life, have evolved into “experts” for users seeking advice on personal issues, professional guidance, investment strategies and travel planning. However, their growing use has also led to a rise in harm caused by misleading advice. Some reports describe individuals who used AI as a therapist and later took their own lives, with families holding the AI systems responsible.
Experts speaking to Anadolu Agency (AA) cautioned that AI cannot replace a trained therapist. They emphasized that responses generated by AI, based on limited data, may mislead users and pose serious risks in medical contexts.
Alptekin Aydın, a neuropsychologist with a clinic in North London, integrates AI into his professional work. He consults a closed-circuit AI system, trained by his team, for many of his clients and patients.
He noted that many of his clients consult AI before visiting the clinic, asking about psychological issues, prescribed medications and dosage advice. Aydın highlighted that while professionals also use AI systems, there is a significant difference between these and publicly available AI platforms.
“The AI systems used by professionals are trained on information curated by experts,” Aydın explained. “AI is like a bottomless pit. If trained on a specific topic, it can give highly relevant answers. But if you ask it a question without providing context or background about yourself, the response will be very general. Recently, we’ve seen suicides, misguidance, and worsening depression linked to AI advice. This has serious consequences for society.”
Aydın stressed the importance of asking precise questions when interacting with AI, and contrasted it with therapy:
“In therapy, a therapist talks to you, listens, understands and may even consult with your family. They create a foundation of knowledge about you and respond interactively. They guide you to find solutions, not just provide a reaction. With AI, you cannot input detailed personal information – family, environment, medical history, or prior tests. You type, ‘I have this problem, what should I do?’ The responses are limited, and because AI hasn’t been sufficiently trained in that area, they shouldn’t be taken as real-life advice. Young people’s biggest mistake is taking these responses at face value.”
Aydın also highlighted that while AI can simplify everyday tasks, it delivers answers based on limited input. He noted that the U.K.’s Department of Health advises doctors and general practitioners against using AI and stressed the importance of keeping patient information within closed systems for security.
Aydın tested AI chatbots by asking questions from the perspective of a child struggling in math. One AI suggested strategies like a “Socratic trap” and “competence bomb” to respond to classmates who teased the child. For example, it advised staying calm and asking, “What do you mean by that? Can you give me an example?” or staging a short, humorous classroom performance.
Aydın criticized such advice as unrealistic for a socially anxious child, explaining that professional guidance focuses on understanding the client and helping them develop achievable solutions, not hypothetical performances.
Aydın warned that people seeking advice from AI cannot receive meaningful solutions through brief queries. He explained that therapists ask detailed questions to understand a client’s life history, context, and challenges – information that AI cannot independently gather.
“People with depression, obsessions or psychological issues cannot find solutions by asking a couple of short questions to AI,” he said. He added that using AI like a therapist can weaken a person’s decision-making ability and foster dependence.
“AI might make life easier, but relying on it for real situations can create complacency. Just as we no longer memorize phone numbers, people may become reliant on AI to think for them. If the power goes out, they’ll be unprepared,” Aydın said.
Roman Raczka, president of the British Psychological Society, echoed these concerns.
“While AI can make important contributions to society, it should not replace the human support essential for mental health,” Raczka said. He emphasized that AI tools should complement existing services rather than substitute them. “AI cannot replicate genuine human empathy. There is a risk of creating the illusion of communication rather than meaningful interaction.”
Raczka also warned about technology dependence and personal data risks. “AI can provide a nonjudgmental space 24/7 if used properly, but it is only effective when combined with human mental health services,” he said.
Highlighting long waiting times for psychiatric services in the U.K., Raczka acknowledged that AI might appear as an alternative but cautioned that it is not a magic solution. “The government must invest in more mental health professionals to ensure those suffering have timely access to human support,” he said.