Open any app store and you’ll see an ocean of mental health tools. Mood trackers, artificial intelligence (AI) “therapists,” psychedelic-trip guides, and more are on offer. Industry analysts now count over 20,000 mental health apps and about 350,000 health apps overall, and those numbers are thought to have doubled since 2020 as venture money and Gen Z demand have poured in. (Gen Z consists of those born between roughly 1995 and 2015.)
But should you actually trust a bot with your deepest fears? Below, we unpack what the science says, look at where privacy holes lurk, and walk through a 7-point checklist for vetting any app before you pour your heart into it.
Who Uses AI Mental Health Apps and Chatbots?
According to a May 2024 YouGov poll of 1,500 U.S. adults, 55% of Gen Z respondents said they feel comfortable discussing mental health with an AI mental health chatbot, while a February 2025 SurveyMonkey survey found that 23% of Millennials already use digital therapy tools for emotional support. The top draws across both groups were 24/7 availability and the perceived safety of anonymous chat.
And this makes sense: we know that many people (in some cases, most) with mental health issues are not getting the care they need, and the main barriers are cost, including lack of insurance, followed by plain lack of access. Add to that all the people I hear from every day who are not getting sufficient relief from their treatment; many of them, too, find it appealing to get extra support from an AI chatbot.
What Exactly Is an AI Mental Health App?
There are many definitions of what an AI mental health app is, some more grounded in science than others. Below are the categories people commonly consider to be AI mental health apps (although some wouldn’t technically qualify as AI per se).
- Generative AI chatbots — Large-language-model (LLM) companions such as Replika, Poe, or Character AI that improvise conversation; many people also use ChatGPT, Claude, or another general-purpose AI this way.
- Cognitive behavioral therapy (CBT)-style bots — Structured programs like Woebot or Wysa that follow CBT scripts. (Because these bots are scripted rather than generative, they are less like true AI. This may make them safer, however.)
- Predictive mood trackers — Apps that mine keyboard taps, sleep, and speech for early-warning signs of depression or mania. (Although I have my suspicions about how accurate these are.)
- Food and Drug Administration (FDA)-regulated digital therapeutics — A tiny subset of apps cleared as medical devices that require a prescription for access and have shown effectiveness in peer-reviewed studies. Few of these exist right now, but more are in the works.
Promised AI Mental Health App Benefits and Reality Checks
Marketing pages for AI mental health apps tout instant coping tools, stigma-free chats, and “clinically proven” outcomes. That may be only partly true. A 2024 systematic review covering 18 randomized trials did find “noteworthy” reductions in depression and anxiety versus controls; however, these benefits were no longer seen after three months.
This is not to suggest that no AI app has real science or benefits behind it; it’s only to say that you have to be very careful about who and what you trust in this field. It’s also possible to get some benefit from general-purpose apps, depending on who you are and what you’re using them for.
What the Best Mental Health AI App Evidence Shows
| Study | Design | Key findings |
|---|---|---|
| Therabot randomized controlled trial (RCT) (NEJM AI, Mar 2025) | 106 adults with major depressive disorder (MDD), generalized anxiety disorder (GAD), or at clinically high risk for feeding and eating disorders; 8-week trial | 51% drop in depressive symptoms, 31% drop in anxiety, and 19% average reduction in body-image and weight-concern symptoms vs. waitlist; researchers stressed the need for clinician oversight |
| Woebot RCT (JMIR Form Res, 2024) | 225 young adults with subclinical depression or anxiety; 2-week intervention with Fido vs. a self-help book | Anxiety and depression symptom reduction seen in both groups |
| Chatbot systematic review (J Affect Disord, 2024) | 18 RCTs with 3,477 participants | Noteworthy improvements in depression and anxiety symptoms at 8 weeks; no changes detected at 3 months |
In short: Early data look promising for mild-to-moderate symptoms, but no chatbot has proven it can replace human therapy for crises or complex diagnoses, and none has shown long-lasting results.
Mental Health App Privacy and Data Security Red Flags
Talking to a mental health app is like talking to a therapist, but without the protections that a registered professional who is part of an official body would offer. And keep in mind that in safety testing, some AI models have even resorted to blackmail when placed in extreme, contrived scenarios. In short, be careful what you tell those zeros and ones.
Here are just some of the issues to consider:
Because most wellness apps sit outside the Health Insurance Portability and Accountability Act (HIPAA), which normally protects your health data, your chats can be mined for marketing unless the company voluntarily locks them down. Then, of course, there’s the question of who is monitoring these companies to ensure they do what they say they’re doing in terms of protection. Right now, everything is voluntary and unmonitored (except in the case of digital therapeutics, which are cleared by the FDA).
The FDA has issued draft guidance outlining how AI-enabled “software as a medical device” should be tested and updated over its lifecycle, but it is still just a draft.
AI Mental Health App Ethical and Clinical Risks
This is the part that really scares me. Without legal oversight, who ensures that ethics are even implemented? And without humans, who accurately assesses clinical risks? The last thing any of us wants is for an AI to miss the risk of suicide or to have no human to report it to.
The ethical and clinical risks of AI mental health apps include, but are certainly not limited to, missed or unescalated signs of suicide risk, advice delivered without clinician oversight, and the privacy gaps described above.
Your 7-Point AI Mental Health Safety Checklist
If you’re trusting your mental health to an AI chatbot or app, you need to be careful about which one you pick. Consider:
- Is there peer-reviewed evidence? Look for published trials, not blog testimonials.
- Is there a transparent privacy policy? Look for plain language, opt-out options, and no ad tracking.
- Is there a crisis pathway? The app should surface 9-8-8 or local hotlines on any self-harm mention, or better yet, it should connect you with a live person.
- Is there human oversight? Does a licensed clinician review or supervise content?
- What is its regulatory status? Is it FDA-cleared or strictly a “wellness” app?
- Are there security audits? Is there third-party penetration testing or other independent testing indicating that security and privacy controls are in place?
- Does it set clear limits? Any reputable app should state that it is not a substitute for professional diagnosis or emergency care.
(The American Psychiatric Association has some thoughts on how to evaluate a mental health app as well.)
Use AI Mental Health Apps, But Keep Humans in the Loop
Artificial intelligence chatbots and mood-tracking apps are no longer fringe curiosities; they occupy millions of pockets and search results. Early trials show that, for mild-to-moderate symptoms, some tools can shave meaningful points off depression and anxiety scales in the short term (if not in the long term). Yet just as many red flags wave beside the download button: short-term evidence, porous privacy, and no guarantee a bot will recognize — or responsibly escalate — a crisis.
So, how do you know what AI to trust? Treat an app the way you would a new medication or therapist: verify the science and privacy policies, and insist on a clear crisis plan. Don’t make assumptions about what’s on offer. Work through the seven-point checklist above, then layer in your own common sense. Ask yourself: Would I be comfortable if a stranger overheard this conversation? Do I have a real person I can turn to if the app’s advice feels off base, or if my mood nosedives?
Most importantly, remember that AI is always an adjunct, not a replacement for real-world, professional help. True recovery still hinges on trusted clinicians, supportive relationships, and evidence-based treatment plans. Use digital tools to fill gaps between appointments, in the middle of the night, or when motivation strikes, but keep humans at the center of your care team. If an app promises what sounds like instant, risk-free therapy or results, scroll on. Don’t risk your mental health and even your life on marketing hype.