High school students see AI as a 'friend' and that is a problem
"Teachers report that students increasingly struggle with basic thinking skills. When asked to explain their reasoning, many are unable to do so." (Shutterstock Photo)

As teens increasingly rely on AI for homework and emotional support, governments and schools must guide its safe use

New research shows that teenagers worldwide turn to ChatGPT for homework, comfort and life advice, while schools scramble to catch up.

A recent study involving 545 high school students in Kashmir reveals an urgent educational challenge: nearly 96% use ChatGPT regularly for schoolwork, mirroring patterns found in other research across Cambodia, Hungary and broader Europe. But the numbers tell only part of the story. Students describe these AI tools not merely as academic aids, but also as companions and partners in decision-making.

"ChatGPT feels like a trustworthy friend," one teenager explained. Another said, "AI assists me in making decisions and organizing my life." One student put it plainly: "I use ChatGPT like a friend, especially when I have no one else to talk to or share my feelings with."

This emotional reliance is deeply concerning. These language models were not designed for therapeutic purposes, and recent global incidents have highlighted the dangers when teenagers treat chatbots as therapists, sometimes with tragic outcomes. Yet the allure is undeniable: AI tools are convenient, patient and always available. They do not judge. They do not get tired. For a generation raised on immediate digital gratification, this is irresistible.

Self-taught and unaware

Nobody taught them how to use AI. Students learn through social media and YouTube rather than structured instruction, creating stark inequalities. Those with better internet access and language skills gain advantages while others fall behind. Without formal training, students miss critical knowledge about how AI systems work and why they produce false information. They learn how to prompt ChatGPT but not how to evaluate its responses.

Meanwhile, students expressed deep ambivalence about their AI use. "Sometimes I feel using ChatGPT for my assignments is like cheating," one 16-year-old admitted. Another confessed, "I feel I am not using my cognitive capabilities. I even feel too lazy to cross-check whether the information is right or wrong."

Teenagers know something is not quite right. They feel the cognitive laziness creeping in. They recognize ethical boundaries blurring. But they continue anyway because everyone else is doing it, because it is easy, and because no adult authority has definitively told them to stop.

Policy vacuum

ChatGPT launched in November 2022. By the time schools began discussing policies, students had already integrated such tools into their daily routines. The study documented a troubling institutional failure: 77% of students reported that their schools have no official stance on AI tools. Only 2% said AI is explicitly accepted, while 20% said it's prohibited.

This policy vacuum leaves everyone unprepared. "Teachers themselves need proper training," one senior educator emphasized. "Students also need structured AI literacy programs."

Assessment has become particularly challenging. With class sizes often exceeding 40 students, teachers struggle to identify AI-generated work. "It is common to see homework that is unusually perfect," one teacher noted. "But proving it was AI-generated is nearly impossible." Even when students recognize the ethical problems, the absence of consequences means there is little deterrent.

Crisis of dependency

Emotional connection emerged as the strongest factor driving AI adoption, surpassing even convenience. Students do not just find AI helpful; they experience it as a companion, which explains why ethical concerns have little impact on behavior. Even students who feel guilty continue using AI. The emotional pull outweighs moral qualms.

Teachers report that students increasingly struggle with basic thinking skills. When asked to explain their reasoning, many are unable to do so. "With rampant AI use, students are not growing intellectually," one teacher warned. "They are using AI to complete tasks instead of developing their cognitive capabilities."

If an entire generation learns to produce answers without understanding concepts, what happens when they face challenges that AI cannot solve?

Despite these drawbacks, some teachers show conditional optimism. "The use of AI in our schools is good, and I firmly support it," one said, "but we must learn and teach students to use it responsibly."

That "but" carries enormous weight. Used thoughtfully, AI can personalize learning and support struggling students. Used carelessly, it becomes a crutch that weakens the muscles it was meant to strengthen.

What must be done?

Schools must develop clear, written policies on AI use: not blanket bans, but thoughtful guidelines that define appropriate uses.

Students need structured AI literacy programs that cover both technical skills and ethical considerations. These programs should teach them how to evaluate tools, detect bias and maintain critical thinking skills.

Teachers require immediate professional development to understand AI tools and redesign assessment methods. Traditional homework may need to give way to oral explanations and process-focused evaluation that cannot be outsourced to machines.

Educational authorities should integrate AI literacy into national curricula now. The urgency does not allow for the usual five-year revision cycles.

Schools also need to strengthen human-centered counseling services. If teenagers turn to AI for emotional support, it suggests a deficit in human connection that technology cannot truly fill.

Finding balance

Türkiye's rapidly digitalizing education system faces the same challenges documented elsewhere. The good news? Türkiye can learn from global experiences and implement thoughtful AI integration before problems become entrenched. The bad news? Time is running out. Students are already using these tools extensively, whether schools acknowledge it or not.

AI can be a friend, a tool or a threat, depending on how it's used. Used effectively, it offers personalized support and instant feedback. Used poorly, it replaces thinking and substitutes algorithms for human relationships.

As one thoughtful student put it: "AI is safe and ethical if used correctly, if we use it and learn from it rather than just copying and pasting."

The teenagers in this study are already living in a world enhanced by AI. The question is not whether AI belongs in education; it's already there. The question is whether adults will provide the guidance, policies and literacy programs needed to ensure it helps rather than harms.

Current educational systems are moving too slowly to keep pace with the technological reality that students inhabit. We cannot afford to keep playing catch-up. The goal of education isn't just getting correct answers. It's developing the ability to think, question and understand for yourself.