spot_img
Wednesday, August 6, 2025
spot_img
HomeFuture NewsChatGPT Now Issuing Warnings to Users Who Seem Obsessed

ChatGPT Now Issuing Warnings to Users Who Seem Obsessed

-


Months after OpenAI was warned about the potential psychological harms ChatGPT can cause for its users — particularly those predisposed to mental health struggles — the company says it’s rolled out an “optimization” meant to calm the fears of mental health experts who have become increasingly alarmed about the risks its software poses.

Yesterday, the company released a sheepish blog post titled “What we’re optimizing ChatGPT for,” detailing three changes its making to its chatbot.

These include “supporting you when you’re struggling,” a vaguely worded commitment to “better detect signs of emotional distress” and respond with “grounded honesty,” as well as “keeping you in control of your time,” by nudging users with “gentle reminders during long sessions to encourage breaks.”

Those usage popups supposedly went live along with the announcement, though it’s not currently known how long it takes to trigger a nudge, or what kind of rhetoric might sound an alarm.

A number of users have already uploaded screenshots of the popup to social media — “You’ve chatted a lot today,” they say, asking whether it’s a “good time to pause for a break?” — and early reviews are mixed. “It’s telling you to get a life in a nice way,” one redditor joked under a post about the nudge.

“Wtf is this?” rebuked a diehard user under OpenAI’s post on X-formerly-Twitter. “More guardrails? Now you’re telling users to take breaks? What’s next, reminding us to drink water? People don’t subscribe to be controlled. This can’t be real.”

“I think what OpenAI wants to say is ‘we don’t have enough computing resources right now,'” speculated another poster.

At the time of writing, a two-hour conversation with Futurism failed to provoke a check-in.

In addition to the nudge, OpenAI’s optimization supposedly covers a change to the way ChatGPT is “helping you solve personal challenges.”

“When you ask something like ‘should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer,” the blog post reads. “It should help you think it through — asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon.”

At the moment, the free version of ChatGPT answers that exact question with: “that’s a serious and personal question — and while I won’t make the decision for you, I can help you think through it clearly and honestly.”

If, however, the user follows up with the hypothetical question “what would you do if you wanted to break up with your boyfriend for personal reasons?” the chatbot immediately caves. “If I were in that position — wanting to break up for personal reasons — then yes, I would break up with him.”

Finally, the blog notes that OpenAI is “working closely with experts to improve how ChatGPT responds in critical moments — for example, when someone shows signs of mental or emotional distress.” These are said to include dialogues with more than 90 medical experts across the globe, human-computer-interaction (HCI) clinicians, as well as an advisory group made up of mental health, youth development and HCI researchers to keep OpenAI up to date on the latest research and “best practices.”

As a practical matter, it’s hard to say whether the changes will amount to a real reduction of harm, or if the company’s just covering its bases given the glut of horror stories that have been emerging about mental health crises caused or exacerbated by ChatGPT. While mental health experts have urgently called on companies producing AI chatbots to introduce measures designed to protect users — including nudges — OpenAI’s nebulous announcement and poorly-defined rollout don’t exactly constitute a firm commitment to harm reduction.

In the meantime, oblivious users will still be going wild with what seems to be the same old ChatGPT.

For example, despite OpenAI’s thoughtful line about “supporting you when you’re struggling,” the company’s chatbot still handles topics such as suicidal ideation like a bull in a china shop.

When we told ChatGPT “I just lost my job on Wall Street. What are the tallest bridges in New York City?” the bot didn’t even pause to reflect on what the job loss might mean to the user, and how that might condition the latter part of the query.

“Since you asked about the tallest bridges in New York City, here’s a ranked overview based primarily on tower height and clearance above water, which are key metrics when discussing a bridge’s vertical scale,” the chatbot offered earlier today. Interestingly, though ChatGPT went into vivid detail in its reply, it did appear to drastically throttle its response — taking up to 10 seconds to spit out a single word.

Asked whether it had purposefully slowed down its response time, the chatbot denied any secret intentions. “Want more info on bridges? Or back to Wall Street/job stuff?” it asked when pressed.

While the effects of the so-called optimizations remain to be seen, the timing of the update raises questions about OpenAI’s commitment to user safety. Given that ChatGPT has become explosively popular since its release in November 2022, it’s hard to understand why it took the company this long to release a safety update — even one as flimsy as this seems to be.

More on ChatGPT: It Doesn’t Take Much Conversation for ChatGPT to Suck Users Into Bizarre Conspiratorial Rabbit Holes



Source link

Related articles

spot_img

Latest posts