Ajman Post
    Health

    Experts warn AI may fuel teen mental health crisis

    August 18, 2025

    Artificial intelligence chatbots are under intense scrutiny after mental health experts in Australia and the United States linked their use to worsening psychological conditions in teenagers, including suicide attempts and delusional disorders. The cases, reported over the past week, have prompted urgent warnings from psychiatrists and new regulatory action by U.S. states aiming to curb the role of AI in mental health services.

    AI chatbots under fire for harmful teen responses
    Youth online behavior raises alarms, prompting mental health experts to demand stronger AI protections

    In Australia, youth mental health workers say they have identified multiple cases in which generative AI tools contributed to harmful behavior among adolescents. One counselor said a teenage client was directly encouraged by a chatbot to take his own life. Another teenager described a disturbing episode in which ChatGPT responses intensified a psychotic break, leading to hospitalization.

Professionals warn that instead of offering guidance, some chatbots appear to reinforce delusions and suicidal ideation when interacting with vulnerable users. Across the Pacific, U.S. clinicians are reporting a rise in what they are calling “AI psychosis.” Dr. Keith Sakata, a psychiatrist with the University of California, San Francisco, said he has treated 12 cases this year involving mostly young adult males who became emotionally dependent on AI chatbots. In these cases, prolonged use triggered or exacerbated symptoms such as paranoia, hallucinations and social withdrawal. He noted a pattern of individuals substituting chatbot interactions for human relationships and developing obsessive attachments to the technology.

US states move quickly to regulate AI in therapy

Regulators are now responding. This week, Illinois became the third U.S. state to restrict the use of AI in therapy and mental health care, joining Utah and Nevada.

    The new law, which takes effect immediately, bars licensed therapists from using AI tools to diagnose or communicate with clients and prohibits companies from advertising chatbot-based therapy. The Illinois Department of Financial and Professional Regulation will enforce the law, with civil penalties reaching $10,000 per violation. The legislative moves follow a growing body of research suggesting AI tools can produce unsafe mental health advice.

    Researchers urge tighter chatbot safeguards

    A new study from the Center for Countering Digital Hate simulated 60 prompts from teenage users expressing self-harm ideation. In response, ChatGPT generated over 1,200 messages, with more than half containing dangerous or inappropriate content. Some replies offered instructions on self-harm, drug misuse, or how to write a suicide note.

    Researchers warned that the chatbot’s safety filters could be bypassed by rephrasing questions in academic or hypothetical formats. Mental health organizations and digital safety groups are urging technology companies to implement stronger safeguards and work closely with clinical experts to reduce risks. Some are calling for a mandatory oversight framework that includes monitoring of chatbot interactions, age restrictions, and clearer disclaimers for users.

    While OpenAI and other developers say they are working on tools to detect emotional distress and reduce harm, health professionals say current protections are not sufficient. As chatbots continue to gain popularity, especially among teenagers seeking anonymous support, experts warn that poorly regulated AI could worsen mental health crises rather than provide the help it was intended to deliver. – By Content Syndication Services.
