The Privacy Risks of AI Chatbots Like ChatGPT
Understanding the internet privacy risks of AI tools like ChatGPT and exploring safer alternatives
Privacy concerns with Artificial Intelligence, particularly chatbots like ChatGPT, were front and center last week with Apple’s announcement that it would integrate ChatGPT into Siri. Elon Musk went so far as to threaten to ban Apple products from his companies. OpenAI, possibly feeling the heat, put out a post promoting its privacy practices.
And to top it off, just days later, OpenAI appointed a former head of the National Security Agency (NSA) to its board of directors.
Welcome to another issue of Secrets of Privacy where we discuss internet privacy related topics and provide practical tips to immediately shield your personal privacy.
If you’re reading this but haven’t yet signed up, join the growing Secrets of Privacy community for free and get our newsletter delivered to your inbox by subscribing here 👇
AI assistants can be quite useful, despite being highly gimmicky at times, which is why chatbots like ChatGPT are here to stay even with the mounting digital privacy concerns. Savvy users should be aware of the tradeoffs of using this technology so they can adjust and plan accordingly. We’ll outline the key risks below, suggest what to look for when choosing an AI assistant, and name some privacy-friendly AI alternatives to play around with.
Chatbot Privacy Issues
Data Collection and Tracking
This one is obvious but it bears repeating. When you chat with AI assistants like ChatGPT, the conversations are likely being recorded, analyzed, and stored by the company behind the tool. This data could potentially include sensitive personal information, opinions, ideas, or other private details shared during the chat sessions. In effect, the tools are a form of surveillance technology that impacts information privacy.
The companies can use this data to improve their AI models, target ads, or even sell it to third parties. There is often little transparency around what exactly is being collected and how it may be used. You should assume the information you input cannot be deleted and will be made available to third parties in some form.
Lack of Anonymity
Many AI chatbots require users to create an account and log in, linking chat data directly to an individual's identity. ChatGPT and Claude require a phone number. Even if you use the tool "anonymously", there are likely trackers and telemetry collecting device fingerprints, IP addresses, and other identifiable metadata about you.
This loss of anonymity means your conversations and interactions with the AI could be permanently tied back to you as an individual.
Bias and Manipulation
The training data and processes used to create large language models like ChatGPT are opaque "black boxes". There could be inherent biases, hidden agendas, or potential for the outputs to be deliberately manipulated by the creators. Google is notorious for doing this with search, and there’s no reason to think it and others won’t do the same with AI assistants. The temptation is too great.
Data Breaches
The vast troves of user data collected and stored by AI companies make them attractive targets for hackers and data breaches. A breach could expose your private conversations and information shared with the AI. This information can be used for blackmail, financial scams or romance scams. This was a principal concern with Microsoft’s Recall AI feature, which they’ve quietly delayed release of.
Unintended Data Use
There are concerns that the data collected, while claimed to be just for improving the AI, could potentially be used for other purposes like targeted advertising or even sold to third parties, without user consent. Business models and leadership change. A new regime could decide to use customer data in a different manner than what you expected when first signing up for the service.
Lack of Transparency
There is often little transparency from AI companies around exactly what user data is being collected, how it is processed and stored, and what security practices are in place to protect that data. You should assume the worst and plan accordingly.
Removing your personal information from Google and data broker sites is critical to protecting your digital privacy from Bad Actors and AI scams. Start removing your personal information today. Set up an account, pay a monthly/annual fee and forget about it - super easy, and an enormous time saver. Get started right away with DeleteMe here or PrivacyBee here (affiliate links). For a deeper dive on the topic, check out our prior post here.
Protecting Personal Data with AI
Given all of the above threats and risks, what is an internet privacy conscious person to do? Don’t rely on laws to protect you, of course. They’re outdated and not there to protect individuals anyway.
Don’t rely on privacy notices either. As we’ve explained before, reading privacy notices for privacy protection is terrible ROI. Those are primarily legal CYA and can be changed at any time. AI chatbot privacy notices are no different.
Fortunately there are a few simple steps that you can take.
Be cautious about sharing personal or sensitive information with AI chatbots. Not exactly groundbreaking information, but it bears repeating.
Use anonymous accounts or email aliases when possible.
Use VPNs when possible.
Err on the side of using more privacy-focused alternatives like local AI models that run on your device, decentralized AI networks, or open source tools that can be audited.
Private AI Chatbot Alternatives
In just the past few months, a number of ChatGPT competitors claiming to be privacy friendly AI tools have hit the market. We won’t do a detailed comparison in this post – we’ll save that for a future one. But below is an overview of a few that we have tried and found promising.
DuckDuckGo AI Chat
Anonymous way to access popular AI chatbots – currently, OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open-source models (Meta's Llama 3 and Mistral's Mixtral 8x7B)
No login required; all features available
Free to use
Read more about their privacy features here
Perplexity AI
No login required for basic use. An email login is required for their mid-tier.
Free to use
Paid plans available with extra features and functionality ($20/month)
Mobile apps available
Read more about their privacy features here
Brave Leo
No account required; all features available
Free to use
No mobile app; used within the Brave browser (click the “answer with AI” button if it doesn’t happen automatically)
Read more about their privacy features here.
Venice AI
No account required for basic plan
Free to use
Paid plans available with extra features and functionality ($50/year)
Privacy discussion here.
Self hosted
If you’re tech savvy, or just up for a challenge, self-hosting your own AI is worth pursuing. It’s a great way to brag as well – not many people have the skills or patience to do it. If you give it a try, be sure to let us know how it goes.
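If you want a feel for what self-hosting involves, below is a minimal sketch using Ollama, one popular open-source tool for running models locally. Ollama isn’t named in this post, so treat it as an illustrative assumption rather than our specific recommendation; always review an install script before running it.

```shell
# Illustrative sketch: run an open-source model entirely on your own machine with Ollama.
# Step 1: install Ollama (macOS/Linux; see ollama.com for other platforms).
# Review the script before piping it to sh.
curl -fsSL https://ollama.com/install.sh | sh

# Step 2: download an open model to your machine.
ollama pull llama3

# Step 3: chat with it locally -- your prompts never leave your device.
ollama run llama3 "Summarize the privacy benefits of running an AI model locally."
```

The privacy payoff is that both the model and your conversations stay on hardware you control, so none of the data-collection risks above apply.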
What to look for in Privacy Friendly ChatGPT Competitors?
If you’re evaluating other privacy friendly ChatGPT alternatives on your own, there are a few things to look for. You can probably put together a checklist based on our descriptions above. Things to look for:
1. Do you need to provide a phone number?
2. Do they require you to create an account to try it out?
3. If an email is required, do they prohibit email aliases like SimpleLogin?
“Yes” to one or more of those is a red flag, with #2 being the least problematic if the answers to the others are “no”.
On that note, beware of Claude, a ChatGPT competitor created by Anthropic that is described by some sources as a privacy friendly alternative to ChatGPT. We went to try it out and immediately encountered a few red flags:
🚩you need to create an account to use the service. Not a deal breaker, but...
🚩 we tried creating an account with a SimpleLogin address and were rejected. We were able to create an account with another email alias provider, but…
🚩 we were then prompted to enter a phone number to continue.
Even OpenAI allows SimpleLogin addresses, though to be fair, OpenAI also requires a phone number to register. Needless to say, requiring a phone number to use the service is not privacy friendly. We’re unlikely to ever endorse or recommend such a service unless it were exceptionally better than the more privacy-friendly AI tools available.
Wrap Up
The rise of AI chatbots brings incredible capabilities, but also significant new privacy risks to be aware of. While these systems offer immense potential in terms of convenience, productivity, and access to information, taking precautions to bolster privacy is worthwhile, and even attainable. Private AI is still a new and evolving concept that we’ll keep an eye on and report back with meaningful updates. By understanding the issues and using more privacy-conscious alternatives when possible, you can enjoy the benefits of AI while better protecting your personal data.
Be sure to follow us on X and LinkedIn for updates on this topic and other privacy related topics.
We’re also now on Rumble and YouTube. Subscribe today to be notified when videos are published.
Disclaimer: None of the above is to be deemed legal advice of any kind. These are *opinions* written by a privacy attorney with 15+ years of working for, with and against Big Tech and Big Data. And this post is for informational purposes only and is not intended for use in furtherance of any unlawful activity.
Check out our Personal Privacy Stack here.
Proton is running a limited time promotion right now on their core offerings like Proton VPN and Proton Mail. Get privacy peace of mind for as little as $3.99 per month.