14.7 Million People Trust These Health Apps. They Shouldn't.
Security researchers found over 1,500 vulnerabilities in popular mental health apps. Vibe coding is about to make the problem much worse.
Ten mental health apps.
Over 14.7 million combined downloads on Google Play.
And more than 1,500 security vulnerabilities between them, including 54 rated high-severity.
That’s what mobile security firm Oversecured found when they scanned popular Android apps designed to help people with depression, anxiety, panic attacks, and bipolar disorder. (source)
One therapy app with over a million downloads had 85 medium- and high-severity flaws on its own. Some of these bugs could let an attacker access internal app activities that handle authentication tokens and session data. Which means your therapy records could be exposed.
Other apps stored data locally in a way that gave any app on your phone read access to your CBT session notes and mood logs.
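The "any app on your phone can read it" failure ultimately comes down to file permissions. Here's a simplified, desktop-Python illustration of the difference (function names like `save_mood_log` are ours, not from any audited app); on Android, which is Linux-based, the analogous mistake is writing to shared storage or with world-readable modes instead of the app's private directory:

```python
import os
import stat

def save_mood_log(path: str, entry: str, private: bool = True) -> None:
    """Append a mood-log entry, then set the file's permissions.

    Hypothetical example: 0o600 restricts access to the owner,
    while 0o644 leaves the file readable by every other user
    (on Android, every other app's UID).
    """
    with open(path, "a") as f:
        f.write(entry + "\n")
    os.chmod(path, 0o600 if private else 0o644)

def world_readable(path: str) -> bool:
    """Return True if any other user on the system can read the file."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)
```

A file saved with `private=False` here is exactly the kind of "locally stored but effectively public" data Oversecured flagged: no exploit needed, just a read.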
And six of those ten apps? They explicitly told users their data was private or encrypted. Which, as it turns out, wasn’t the case.
Why This Is Worse Than a Typical Data Breach
Mental health data isn’t like a leaked credit card number.
You can cancel a credit card in five minutes and you’re not responsible for fraudulent charges. You can’t un-share that you’re managing bipolar disorder, attending therapy for trauma, or tracking self-harm indicators.
The economics reflect this.
On dark web marketplaces, medical records routinely sell for $250 to over $1,000 per record, according to Experian and multiple cybersecurity firms. A stolen credit card number goes for $1 to $5. (source)
Mental health records are arguably worth even more than typical medical data because they carry massive blackmail and social engineering potential. Therapy session transcripts, medication schedules, and mood logs are exactly the kind of information that can destroy careers, relationships, and lives if it ends up in the wrong hands.
Remember the Tea app?
Last July, the dating safety app (which marketed itself as “the safest place” for women to share sensitive information) suffered a catastrophic breach. Over 72,000 user images (including government IDs) and 1.1 million private messages were leaked on 4chan, affecting more than 1.6 million users.
Women who had shared intimate details about their relationships, including discussions of abuse and infidelity, were suddenly exposed. Websites popped up where strangers could rate stolen selfies. The app now faces multiple class-action lawsuits.
Tea was a warning shot. Mental health apps are the next, much larger target.
Why It’s About To Get Much Worse
Here’s what connects these dots:
we’re in the middle of an explosion in AI-generated apps, and security isn’t keeping up.
The Google Play Store peaked at around 3.6 million apps in 2017, then declined for years as Google cleaned out low-quality listings. (source)
But that trend has reversed; the store is now back above 3.9 million apps and adding roughly 1,200 new ones every day.
A major driver?
“Vibe coding” — the practice of building apps by describing what you want to AI tools like Cursor, Replit, and Lovable, which then generate all the code. Collins Dictionary named it their Word of the Year for 2025. It lets people with zero security experience ship production apps to millions of users.
We’re trying not to be sensationalist, but the security track record is alarming.
Research from security firm Escape found over 2,000 vulnerabilities and 400+ exposed secrets across just 5,600 vibe-coded apps they analyzed. (source) Wiz Research reported that roughly 1 in 5 vibe-coded apps had security misconfigurations. (source)
A Tenzai assessment of five leading vibe coding tools found 69 vulnerabilities across just 15 test applications, including several rated critical. (source) And a 2025 study found that approximately 45% of AI-generated code contains security vulnerabilities. (source)
Now combine that with the mental health app market, where the data is extraordinarily sensitive, users are often in vulnerable emotional states, and the barrier to entry just dropped to “describe what you want and hit publish.”
📌 If you missed it: We did a breakdown of the online age verification chokepoint strategy, and why Facebook’s Zuckerberg wants identity checks baked into your operating system. Read on X.
Our Take
We’re headed toward a wave of mental health app breaches that will make the Tea incident look small. Everything is converging on a dangerous outcome:
a surge of AI-generated apps built by people who don’t understand security, handling some of the most sensitive data imaginable, downloaded by millions of users who trust the app store listing at face value.
And the app stores aren’t going to save you here.
Google has tightened quality controls, but its review process doesn’t catch the kinds of vulnerabilities Oversecured found: things like hardcoded API keys, insecure session token generation, and misconfigured internal activities. These are architectural security failures, not policy violations.
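To make "insecure session token generation" concrete, here's a simplified Python sketch (the function names are ours, not taken from any audited app). A token built from a general-purpose PRNG like `random` is deterministic: anyone who recovers or guesses the seed can regenerate it. Tokens should instead come from the operating system's cryptographic RNG, which `secrets` wraps:

```python
import random
import secrets

def weak_token(seed: int) -> str:
    """INSECURE: random.Random is a deterministic Mersenne Twister.
    The same seed always produces the same 'session token'."""
    rng = random.Random(seed)
    return "".join(rng.choice("0123456789abcdef") for _ in range(32))

def strong_token() -> str:
    """Draws 32 random bytes from the OS cryptographic RNG,
    returned as 64 hex characters. Not reproducible or guessable."""
    return secrets.token_hex(32)
```

The predictable version is the kind of flaw an automated scanner flags and a store review never sees: `weak_token(1234)` returns the identical string every time, so an attacker who learns how the seed is chosen (often a timestamp) can mint valid sessions.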
We’re about 12–18 months away from a major mental health app breach that results in blackmail campaigns. The data is too valuable, the apps are too poorly built, and the attack surface is growing exponentially. Scammers and extortionists who currently buy medical records on the dark web will specifically target therapy and mental health platforms because the leverage is unmatched.
What You Can Do Right Now
Before you download any app, but especially a mental health or wellness app, check one thing: when was it last updated?
Oversecured noted that only four of the ten apps they tested had been updated this month; some hadn’t been touched since September 2024. An app that isn’t being actively maintained almost certainly isn’t being actively secured. It’s not a perfect signal, but it’s the fastest filter you have.
Beyond that, consider whether you actually need an app for this. A notes app with local-only storage and no cloud sync might serve you better than a slick AI therapy chatbot that’s sending your deepest thoughts to a poorly secured backend server.
Looking Ahead
This is one of the privacy stories we’ll be watching most closely this year. The intersection of AI-generated software, sensitive health data, and minimal oversight is a perfect storm.
And if this post has you wondering what’s already out there about you, from health apps, data brokers, or anything else, we created a guide for exactly that. How Exposed Are You Online? walks you through finding out.
Looking for help with a privacy issue? Chances are we’ve covered it already or will soon. Follow us on X and LinkedIn for updates on this topic and other internet privacy topics.
Disclaimer: None of the above is to be deemed legal advice of any kind. These are *opinions* written by a privacy and tech attorney with years of experience working for, with, and against Big Tech and Big Data. This post is for informational purposes only and is not intended for use in furtherance of any unlawful activity. This post may also contain affiliate links, which means that at no additional cost to you, we earn a commission if you click through and make a purchase.

Privacy freedom is more affordable than you think. We tackle the top Big Tech digital services and price out privacy friendly competitors here. The results may surprise you.
Do you own a Smart TV? If so, you won’t want to miss this reader fav post Smart TV Privacy Settings: How to Disable Tracking on Every Brand.
If you’re reading this but haven’t yet signed up, join for free (4.5K+ subscribers strong) and get our newsletter delivered to your inbox by subscribing here 👇