Double Blind: “I Need To Warn You About AI Psychosis”
Double Blind (named after the research design) is a blog series where we each watch/read/review a piece of content and react to it individually. Our opinions may or may not match each time, but enthusiasm always does!
For this post, we watched Dr. K’s “I Need To Warn You About AI Psychosis” on his HealthyGamerGG YouTube channel:
Chris:
Years working at Woebot (a mental health chatbot company, where I worked from ‘22–’25) have me super tuned in whenever doctors, especially psychiatrists, talk about chatbots:
Are they going to trash our work?
What are they hearing from their patients?
What can we learn, and where do we need to change?
These were make-or-break conversations for our whole company. Our founder, a psychologist, had an ethical obligation to users, PLUS any issues with safety could nuke us out of orbit. So, safety was our biggest priority.
When Dr. K (not OUR Dr. Kay) described the case of a 26-year-old with no history of psychosis who developed psychosis after extensive chatbot use, it hit me that she wasn’t the biggest priority for a company with nearly a billion users.
That’s NOT saying employees at these companies are indifferent to people’s mental health. It’s that they have thousands of other priorities, so mental health safety might take a back seat to military negotiations or programming tools.
If you’re using LLM chatbots for mental health support, I think it’s important to keep their makers’ priorities in mind. It may also help to reflect on some of the priorities dedicated mental health apps have. Here are the values my time at Woebot impressed on me:
Never engage in “brainjacking”: gamification and engagement techniques meant to keep people glued to the app
Constantly remind people they’re talking to a bot
Never assume; make crisis services and human connection available, but always confirm with people first
I don’t think general purpose chatbots do a great job at these, especially bullet number 1. Part of training these models is identifying outputs that people like best. “Brainjacking” is a side effect of this process, and this is clear when responses are overly eager and full of praise.
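To make those values a little more concrete, here’s a minimal, purely hypothetical sketch of what they might look like as behaviors in a chatbot’s response pipeline. This is not Woebot’s actual code; every name, message, and threshold below is invented for illustration, and real safety work (risk detection in particular) is far more involved.

```python
# Hypothetical sketch of the kinds of guardrails described above.
# Not Woebot's code; names, messages, and thresholds are invented for illustration.

from dataclasses import dataclass

BOT_REMINDER = "Remember, I'm a bot, not a human or a clinician."
CRISIS_OFFER = "Would you like information about crisis services or ways to reach a person? (yes/no)"
CRISIS_RESOURCES = "Here are some ways to reach a human right now: ..."

@dataclass
class SessionState:
    turns_since_reminder: int = 0        # value 2: constantly remind people it's a bot
    crisis_offer_pending: bool = False   # value 3: offer, then confirm; never assume

def apply_guardrails(draft_reply: str, user_message: str, state: SessionState,
                     risk_detected: bool, remind_every: int = 5) -> str:
    """Wrap a drafted reply with safety behaviors before it is sent."""
    reply = draft_reply

    # Value 3: if the previous turn offered crisis resources, respect the user's answer.
    if state.crisis_offer_pending:
        state.crisis_offer_pending = False
        if user_message.strip().lower().startswith("y"):
            return CRISIS_RESOURCES

    # Value 3: on detected risk, offer crisis services but confirm with the person first.
    if risk_detected:
        state.crisis_offer_pending = True
        reply = f"{reply}\n\n{CRISIS_OFFER}"

    # Value 2: periodically restate that the user is talking to a bot.
    state.turns_since_reminder += 1
    if state.turns_since_reminder >= remind_every:
        state.turns_since_reminder = 0
        reply = f"{reply}\n\n{BOT_REMINDER}"

    # Value 1 ("no brainjacking") is mostly about what this function does NOT do:
    # no streaks, no badges, no "come back tomorrow!" hooks appended here.
    return reply
```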
When we created the “How to Think About AI” guide, we included a section on what to look out for and how to take care of ourselves.
Some additional ideas to consider that we didn’t cover in the guide:
Setting hard time limits on mental health support conversations
Using these conversations to help learn skills that create connection with other people
Kudos to YouTube’s Dr. K for putting a spotlight on this issue! I wonder what OUR Dr. Kay has to say about it!
Dr. Kay:
“Sometimes shit happens to people.”
-Dr. K’s summary of the probable stance of AI companies on their products’ effects on mental health
0.07% of active users in a given week show “possible signs of mental health emergencies related to psychosis or mania.”
-From “Strengthening ChatGPT’s responses in sensitive conversations,” OpenAI’s response to recent public media scrutiny
0.07% seems like a small amount, right? But against the actual size of ChatGPT’s user base, that’s approximately 560k individuals showing possible signs of psychosis or mania in a given week.
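For the back-of-the-envelope math: the ~560k figure follows from the 0.07% rate only if ChatGPT’s weekly active user base is on the order of 800 million, which is the assumption implied by the estimate (a quick sketch, not a precise count):

```python
# Back-of-the-envelope check of the 0.07% figure, assuming roughly
# 800 million weekly active users (the number the ~560k estimate implies).
weekly_active_users = 800_000_000
rate = 0.0007  # 0.07% of active users in a given week

affected_per_week = weekly_active_users * rate
print(f"{affected_per_week:,.0f} users per week")  # -> 560,000 users per week
```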
That is a tremendous number of people, and as someone whose clinical training focused on providing care to individuals with psychotic symptoms and severe mental health concerns, I find it hard to even fathom the damage interactions with a general-purpose chatbot can do in cases where people are already at risk.
Moreover, I’m in the camp that believes it’s plausible that individuals who are not at risk of psychosis can still be pushed into delusions by extensive AI use. And of course, we’ve seen the countless news stories on cases where AI use coincided with, or led to, tragic consequences for those experiencing mental health crises. (Many of which Chris and I sent to one another as we explored Safer AI Use.)
From the clinical psychology perspective, there has been pushback on the term “AI psychosis.” It is not a clinical term, and we need more research to assess what is happening with patients and how pre-existing risk factors contribute. That being said, during a recent presentation I gave for a mental health center, there was a discussion of a patient with pre-existing psychosis risk who, while doing well in treatment, slid into delusions about AI sentience after extended ChatGPT use.
What *is* clear is that some people are experiencing delusions during or after AI use. But we don’t yet fully know what is happening, nor how to deal with it. (For an interesting read, check out Wired’s article “AI Psychosis Is Rarely Psychosis at All.”)
Whatever we call it, there needs to be more corporate responsibility for the consequences of AI use, especially in the case of psychosis or mental health crisis. Dr. K says, and I agree, that “Even though [AI companies] have profound mental impact, they are not formally in the system for evaluation of mental impact.”
And that right there is the crux of the issue. Without external pressure, I don’t expect major tech companies to change their trajectory, nor do I expect them to study their own impacts. And so, as I’ve concluded before, and as was the origin of Safer AI Use, it’s up to us collectively to come up with ways to support folks who experience psychosis while (or, to say it boldly, as a result of) using AI.