Why Safer AI Use?
Over a period of months, we’d send texts back and forth, hop on phone calls, and jump on regular Zooms with a sense of growing horror. We’d share new articles and the latest writeups from our connections. We’d share our own thoughts, our disbelief, and our feelings of “what do we do?” The topic: the emerging use of AI for emotional support, life guidance, and companionship, and its consequences. We saw the growing problem of reliance on AI chatbots, the lack of protections for users, and the increasing harm people were experiencing when they tried to seek support. And we knew it didn’t have to be this way.
Sometime in July we started talking about what we called “the AI consumer guide.” Both of us had spent years working in healthtech. We’d seen the rise (and fall) of technology hype aimed at the same fundamental issue: the lack of a functioning mental healthcare system in the US. What do we do when there’s a growing need for support and little opportunity to seek it?
Algorithms are capable of some powerful things if we have a sense of what’s actually happening. If we don’t have that sense, we may end up fully trusting these systems with life coaching, companionship, and our own mental health. When it gets to that point, the results can be heartbreaking: amplified delusions, GPT-guided suicides, deepened social isolation; the list goes on.
One thing we stand firm on is that we can’t blame or shame individuals who choose to use AI foundation models to seek support or companionship. People are doing the best they can to get support, and those same people are sometimes getting hurt. We also know that we can’t wait for the big technology companies to add the guardrails that should have been there in the first place. So where did we land?
Safer AI Use is a social good project that provides free harm-reduction resources to people who choose to engage with AI for emotional support & life guidance. We use the term “safer” instead of “safe” because we do not believe AI use for these purposes can be fully safe, and to call attention to the risk that technology companies are choosing to put us in. The language and lens we take is humanistic, and while each of us comes from a mental health & technology background, we do not position ourselves as experts saying “you must.” Instead, we see ourselves as community members aiming to provide collective care through these resources, all of which come from a “we/us” lens.
This work is truly a labor of love and care, and we hope you find the resources we offer useful. Please share widely, and reach out to us with any thoughts: connect@saferaiuse.org. We’ll continue to add more resources and collaborators as time goes on. For now, we’re a small but mighty team of two, trying to offer what help we can.
With care,
Dr. Kay Nikiforova & Chris Hemphill