Safer AI Use

In 2025, therapy and companionship became the number one reason people use large language models, according to Harvard Business Review.

Should we celebrate, or should we be concerned?

On the one hand, it’s great that there are tools available that can provide instant guidance and support. But it raises a bigger question: “How do I know the support I’m receiving is good?”

You can find many examples of people sharing how LLM-based tools helped them through dark situations. But you’ll also find many cases where the same tools fueled psychosis, isolation, and suicide.

Dr. Kay Nikiforova, a researcher and psychologist, and Chris Hemphill, who has built and sold healthcare AI tools for more than eight years, created Safer AI Use to steer people away from those harmful outcomes.

Dr. Kay Nikiforova

I support people seeking support. And in that, I also understand how the US healthcare system and our own social support networks may not be giving us the support we need. AI is easy to access and it’s there 24/7.

It’s important to feel like we’re understood and listened to, and talking to AI can make us feel like that. But making AI respond in very “human-like” and enthusiastic ways was a design choice by companies to keep people using the product. And in the last year, I’ve seen increasing signs of the damage these “human-like” responses are causing.

I hope that those who visit, use, and share our resources find the information they need to keep themselves and others safer in their AI use, to recognize when things might be going wrong, and to learn ways to take care of themselves in the process.

As for me, my central focus has been community and community care for years now. And in relation to AI use, I *know* that the tech companies won’t come and save us. We will, our communities will, and this is our small way of doing just that.

Chris Hemphill

LLMs being good at a few things doesn’t mean they’re good at everything. It’s been so invigorating working in healthcare ‘AI’ and in mental health tools, but hype leads to harm if people don’t know when things are going off the rails.

I hope that people who come here learn to understand these tools well enough to know when they’re wrong, and I hope they get the confidence to push back on them too!

Also, it pains me that we’re supposedly more connected than ever before, yet we’re also the loneliest we’ve ever been. So my other big hope is that our resources help guide people to be tighter with their IRL communities, AND I hope that our readers get inspired to build causes and tech platforms that connect folks as well. Reclaim the algorithm!