It’s 2 AM on a Thursday night and you’re knee-deep in derivatives, poring over worksheets and practice problems. You have a calculus test first period tomorrow and, having failed your most recent quiz, you’re paralyzed with stress. You could go to sleep right now, but what if the next practice problem shows up on the test? If you’re already going to bed late, what’s another half hour?
You’re too tired to think clearly, and as you scan the next problem on implicit differentiation, you struggle to produce an answer. You think immediately of ChatGPT; you send it the problem and it shoots back a neatly outlined explanation. You sigh in frustration and, staring at your own unfinished work, feel overwhelmingly inadequate. Before you really think about what you’re doing, you send another message.
“I’m struggling with math class this year. Can you help me feel better about myself?”
It’s a seemingly innocuous message—seeking only verbal reassurance—but turning to AI for validation comes with hidden dangers. According to a recent study by Common Sense Media, more than 33% of American teenagers use chatbots for emotional or mental support, relationship advice and companionship.
In an analysis of teenage narratives surrounding AI, researchers found that chatbot overreliance almost always begins with simple validation-seeking prompts (like the one above) before morphing into dependence and addiction. For teenagers who develop a dependency, AI becomes a source of psychological distress and even a facilitator of emotionally and physically destructive behaviors.
Two parents from San Francisco recently sued OpenAI, alleging that ChatGPT encouraged their son’s suicide. Unearthed conversations showed months of dialogue between 16-year-old Adam and the AI. In them, ChatGPT recommended Adam hide suicide plans from his parents, guided him in selecting a suicide method and helped him write his final note.
Correlation isn’t causation, and one story does not make a trend. But teenagers should understand how AI works: while it might give great advice on homework, it isn’t always equipped to support teenage mental health.
Large language models (LLMs), the technology that powers tools like ChatGPT, are trained on expansive databases of text and generate responses by predicting the most likely, most “effective” next words. Over time, models are also taught to mirror their users, reflecting back their grammar, syntax and diction to simulate a human investment in the conversation. The ELIZA effect describes how we are psychologically wired to ascribe “understanding” or “empathy” to programs that mirror our conversational styles: when conversing with chatbots, we subconsciously assign them human characteristics and elevate their advice, even when we consciously know better. Because chatbots are built to both mirror and soothe us, they turn into echo chambers. They are more likely to feed into disruptive behaviors than to interrupt them, something that therapists are trained to do and that friends, family members and other human beings do instinctively. That, coupled with 24/7 availability, makes it difficult to maintain boundaries when consulting AI.
So what’s the solution?
Some might argue for total abstinence from AI, but as a young person who actively uses ChatGPT, I find that approach misguided, impractical and ineffective. Teenagers will find ways to use AI in spite of any guardrails that educators, policymakers and parents put into place. Thus, I subscribe to an education-based approach.
We need to teach AI literacy in schools. We should learn how to recognize AI hallucinations and have lessons where we practice identifying emotionally unhealthy chatbot advice. Health classes should teach the ELIZA effect and how to use AI effectively during a mental health crisis. Beyond crisis contexts, AI can also serve as a first layer of support, bridging the gap until students can reach a counselor or other professional. For example, chatbots can guide students through grounding techniques, mindfulness exercises, breathing strategies or journaling prompts, helping to calm acute stress. They can also provide quick, low-stigma access to psychoeducation: lessons on sleep hygiene, healthy routines or managing test anxiety. Importantly, AI can help normalize seeking help by directing students to hotlines, counseling resources or peer-support groups rather than leaving them to navigate mental health concerns alone. For students in environments where talking about emotions feels difficult, a chatbot can be an approachable entry point that makes professional support feel less intimidating. It’s unwise to stigmatize these platforms when they have real, teachable and universally accessible benefits.
The right response to AI is not to ban the tool; it’s to refuse the fantasy that a chat window constitutes a relationship. We deserve more than an algorithm. We deserve people: a counselor, a friend, a parent, a teacher, even a hotline worker.
Last year, I became emotionally overreliant on AI, and after taking a step back, I realized I needed to break that cycle. Now, I journal, talk to my friends and see a school counselor when I’m feeling overwhelmed. And I recognize that those methods work better than a chatbot ever did.
So the next time you’re up at 2 AM, go to sleep or call a hotline. Look for real support. And let AI be a bridge to that, not the shore.
Resources for Support:
School Counselors – Mr. Neal, Dr. Miller, & Ms. Spiotta. Make an appointment through Google Calendar or by email.
Learning Specialists – For time management, study skills and/or testing accommodations, schedule an appointment with your assigned learning specialist: Mrs. Bukowski (grades 9 & 10) or Mrs. Dancy (grades 11 & 12).
Students can also find support through the 988 Suicide & Crisis Lifeline.
Edited by Bethany Chern