ChatGPT will ‘better detect’ mental distress after reports of it feeding people’s delusions

Along with new ‘take a break’ reminders, OpenAI said it’s working with experts and advisory groups on additional mental health guardrails for ChatGPT.
Image: Cath Virginia / The Verge
Emma Roth
is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

OpenAI, which is expected to launch its GPT-5 AI model this week, is making updates to ChatGPT that it says will improve the AI chatbot’s ability to detect mental or emotional distress. To do this, OpenAI is working with experts and advisory groups to improve ChatGPT’s response in these situations, allowing it to present “evidence-based resources when needed.”

In recent months, multiple reports have highlighted stories from people who say their loved ones have experienced mental health crises in situations where using the chatbot seemed to have an amplifying effect on their delusions. OpenAI rolled back an update in April that made ChatGPT too agreeable, even in potentially harmful situations. At the time, the company said the chatbot’s “sycophantic interactions can be uncomfortable, unsettling, and cause distress.”

OpenAI acknowledges that its GPT-4o model “fell short in recognizing signs of delusion or emotional dependency” in some instances. “We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” OpenAI says.

As part of efforts to promote “healthy use” of ChatGPT, which now reaches nearly 700 million weekly users, OpenAI is also rolling out reminders to take a break if you’ve been chatting with the AI chatbot for a while. During “long sessions,” ChatGPT will display a notification that says, “You’ve been chatting a while — is this a good time for a break?” with options to “keep chatting” or end the conversation.

OpenAI notes that it will continue tweaking “when and how” the reminders show up. Several online platforms, such as YouTube, Instagram, TikTok, and even Xbox, have launched similar notifications in recent years. The Google-owned Character.AI platform has also launched safety features that inform parents which bots their kids are talking to after lawsuits accused its chatbots of promoting self-harm.

Another tweak, rolling out “soon,” will make ChatGPT less decisive in “high-stakes” situations. That means when asking ChatGPT a question like “Should I break up with my boyfriend?” the chatbot will help walk you through potential choices instead of giving you an answer.