Understanding AI Companion Privacy Policies

In February 2026, a security researcher discovered that Chat & Ask AI had left 300 million private messages from 25 million users sitting in an unprotected database (Malwarebytes, 2026). Intimate conversations, confessions, roleplay scenarios. All exposed because one company didn’t bother securing its servers. Your AI companion conversations are only as private as the company storing them.

And the only way to know how a company handles your data is to read its privacy policy. The problem? Most people don’t. The policies run thousands of words long, bury the important parts in legal jargon, and rely on the fact that you’ll click “I agree” without scrolling past the first paragraph. This guide shows you exactly what to look for, which red flags should stop you cold, and how the 27 AI companion apps in our catalog actually compare. For a breakdown of what specific data these apps collect, see our guide on how AI companion apps use your data.

Key Takeaways

  • Only 5 of 27 AI companion apps earn a C or above in our 23-dimension safety review. The rest sit at D or F.
  • Seven specific red flags in privacy policies signal that an app isn’t protecting your data. We list all seven below with real examples.
  • You don’t need to read every word. Five targeted keyword searches (Ctrl+F) can tell you what matters in under 10 minutes.
  • A 10-item checklist at the bottom of this page lets you evaluate any AI companion app before creating an account.
  • The February 2026 Chat & Ask AI breach exposed 300 million messages. Privacy policies won’t prevent breaches, but they reveal whether a company takes your data seriously.

Why Do AI Companion Privacy Policies Matter?

AI companion apps collect some of the most personal data of any consumer software. People share fears, relationship struggles, sexual fantasies, mental health details, and daily routines with these chatbots. That makes the privacy policy more than a legal formality. It’s the only document that tells you what happens to those conversations after you hit send.

The stakes became concrete in early 2026. A security researcher found that Chat & Ask AI, a popular AI chatbot app, had exposed 300 million messages tied to 25 million user accounts through an unsecured Firebase database (Malwarebytes, February 2026). The exposed data included IP addresses and unique device identifiers that could be cross-referenced with other breaches to identify individual users. This wasn’t the first time. In 2025, two AI companion platforms suffered separate breaches exposing over 700 million messages combined (AI Tipsters, March 2026). These apps collect massive volumes of sensitive data, and too many of them don’t protect it well enough.

The European Data Protection Supervisor has noted that AI companions “continuously process personal data during interactions, including text messages that can contain sensitive information and voice or video recordings that could reveal biometric data” (EDPS, 2025). U.S. lawmakers are also moving toward tighter oversight of AI companions and emotionally engaging chatbots, especially where minors and sensitive data are involved. Our AI companion regulation guide breaks down the proposals that matter most.

So how do you tell which apps handle your data responsibly? You read the privacy policy. But you don’t need to read all of it. The next sections show you exactly where to look.

What Should a Good Privacy Policy Include?

The Anuma 2026 AI Chat Privacy Report found that only 7 of 15 major AI chat platforms offer end-to-end encryption (PRWeb, March 2026). That single finding illustrates the gap between what users expect and what companies actually deliver. A trustworthy AI companion privacy policy covers eight specific areas. When any of them is missing, the company is either careless or deliberately vague.

Here are the eight sections to look for:

  • What data is collected. The policy should list every category: messages, photos, voice recordings, device data, location, payment information. Vague language like “information you provide” without specifics is a warning sign.
  • How your data is used. Does the company use your conversations to improve its product? To train AI models? To serve targeted ads? Each purpose should be stated explicitly.
  • AI training disclosure. This is the section most users care about and most policies obscure. Does the app feed your messages into its training pipeline? Can you opt out? Replika states clearly that it will “never share your conversations” with third parties. Many competitors make no such commitment.
  • Third-party sharing. Who else gets access to your data? Legitimate disclosures name specific categories (payment processors, cloud hosting providers). Red flags include broad references to unnamed “business partners” or “affiliates.”
  • Data retention periods. How long does the company keep your data after you stop using the app? The best policies give specific timeframes. The worst say “as long as necessary” without defining what that means.
  • Encryption and security measures. Does the policy mention encryption in transit (HTTPS) and at rest? Does it reference any security audits or certifications? Pi explicitly states it uses encryption at rest. Several lower-scoring apps mention no security measures at all.
  • User rights. Can you request a copy of your data? Can you delete your account and all associated data? Can you opt out of data collection or AI training? EU users have these rights under GDPR. California users have them under CCPA. But many apps make exercising these rights difficult or unclear.
  • Contact information for privacy requests. A legitimate privacy policy includes a working email address or form for data requests. Apps that bury this information or omit it entirely aren’t serious about user rights.

We evaluated all 27 apps in our catalog against these eight criteria as part of the CompanionWise Safety Index. Apps that address all eight sections tend to score higher. Apps that skip multiple sections land at D or F. That’s not a coincidence. The privacy policy is where transparency either shows up or doesn’t.

How to Read an AI Companion Privacy Policy in 10 Minutes

You don’t need to read 4,000 words of legal text from start to finish. Five targeted searches will tell you what matters about any AI companion app’s privacy practices. Open the privacy policy in your browser and use Ctrl+F (or Cmd+F on Mac) to search for these terms in order.

Search 1: “training” or “machine learning.” This reveals whether the company uses your conversations to train its AI. Look for phrases like “improve our services,” “train our models,” or “machine learning purposes.” If you find them, check whether there’s an opt-out mechanism nearby. Character AI states explicitly that it uses data to “train our artificial intelligence/machine learning models.” No opt-out exists for this.

Search 2: “retain” or “delete.” How long does the company keep your data? Good policies specify retention periods (“we retain data for 12 months after account deletion”). Bad ones use open-ended language like “as long as reasonably necessary.” Check whether the deletion process is straightforward or buried behind support ticket requirements.

Search 3: “share” or “third party.” Every app shares some data with infrastructure providers (hosting, payment processing). That’s normal. What you’re looking for is whether the policy also mentions sharing with advertisers, data brokers, or unnamed affiliates. Count the sharing categories. More than five should make you pause.

Search 4: “encrypt.” Often the shortest search, and sometimes the most revealing. Many AI companion apps don’t mention encryption at all. If you find nothing, that’s a finding in itself. Companies that invest in security mention it. The ones that don’t stay quiet. Across our 27-app catalog, the lowest-scoring apps almost universally lacked any encryption disclosure.

Search 5: “opt out” or “rights.” Your last search shows what control you actually have. Can you download your data? Delete it? Opt out of specific uses? Look for concrete instructions (“email privacy@company.com”) rather than vague references to your rights. If the policy mentions GDPR or CCPA, it should include a clear process for exercising those rights.

These five searches won’t catch everything. But they’ll expose the most important decisions a company has made about your data. If an app’s policy fails on three or more of these searches, consider it a strong signal to look elsewhere.
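
If you evaluate apps regularly, these five searches are easy to automate. Here’s a minimal Python sketch that runs the same keyword checks against a privacy policy saved as plain text. The filename, the keyword groupings, and the three-miss threshold are our shorthand for the searches above, not any official standard.

```python
# policy_scan.py -- run the five keyword searches against a saved policy.
# Minimal sketch: save the privacy policy as plain text ("policy.txt" is
# a placeholder name), then run this script.
import re
from pathlib import Path

SEARCHES = {
    "AI training":         ["training", "machine learning"],
    "Retention/deletion":  ["retain", "delete"],
    "Third-party sharing": ["share", "third party", "third-party"],
    "Encryption":          ["encrypt"],
    "User rights":         ["opt out", "opt-out", "rights"],
}

def scan(path: str) -> int:
    """Print hits per search; return the number of searches with no matches."""
    text = Path(path).read_text(encoding="utf-8").lower()
    misses = 0
    for label, terms in SEARCHES.items():
        counts = {t: len(re.findall(re.escape(t), text)) for t in terms}
        found = {t: n for t, n in counts.items() if n}
        if found:
            print(f"[hit ] {label}: {found}")
        else:
            print(f"[miss] {label}: no matches -- a finding in itself")
            misses += 1
    return misses

if __name__ == "__main__":
    # Mirrors the rule of thumb above: failing three or more searches
    # is a strong signal to look elsewhere.
    if scan("policy.txt") >= 3:
        print("Three or more sections missing -- look elsewhere.")
```

A hit only tells you where to read. Whether the surrounding language is reassuring or alarming is still your call.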

What Are the Biggest Privacy Red Flags?

Seven patterns in AI companion privacy policies should make you think twice. We spotted these again and again while reviewing 27 apps across 23 safety dimensions. Every app that earned an F in our Safety Index triggered at least three of them.

  • No mention of encryption. If a privacy policy contains zero references to encryption (neither in transit nor at rest), the company either doesn’t encrypt your data or doesn’t think it’s worth mentioning. Both are bad. Eva AI (F/10) and Romantic AI (F/13) provide no encryption details in their policies.
  • “Perpetual, irrevocable license” to your content. Some apps claim permanent ownership of everything you submit. Character AI’s Terms of Service grant the company a “nonexclusive, worldwide, royalty-free, fully paid up, transferable, sublicensable, perpetual, irrevocable license” to all user content. That license survives account deletion. For a full breakdown, see our Character AI privacy policy explanation.
  • No data deletion mechanism. If you can’t delete your data, the company has it forever. Watch for phrases like “we may retain certain information” or “deletion requests are subject to our operational needs.” Muah AI (F/8) and CrushOn AI (F/8) offer the least clarity on data deletion of any apps we reviewed.
  • Training on conversations with no opt-out. Many AI companion apps use your messages to improve their models. That’s common. What separates responsible apps from irresponsible ones is whether you can say no. If the policy mentions training but provides no opt-out, every message you send becomes permanent training data.
  • Sharing data with unnamed parties. References to “business partners,” “affiliates,” or “select third parties” without names or categories mean the company wants flexibility to share your data with anyone. Responsible policies name the categories of recipients and explain why sharing is necessary.
  • No privacy policy at all. Some smaller AI companion apps operate without a published privacy policy. This isn’t just a red flag. In many jurisdictions, it’s a legal violation. If you can’t find a privacy policy link on an app’s website or in its app store listing, don’t create an account.
  • Policy last updated more than 18 months ago. Privacy practices change as companies grow, add features, and enter new markets. A policy dated 2023 or earlier may not reflect the app’s current data handling. Replika’s policy was updated in March 2026. Check the date at the top of every policy you read.

Does an app you’re considering hit multiple items on this list? Check its CompanionWise safety rating before proceeding. We score every app in our catalog on these dimensions and more.

How Do AI Companion Apps Compare on Privacy?

Of 27 AI companion apps in our catalog, only 5 earn a safety score of C or above (Yellow tier). The remaining 22 sit at D or F (Red tier). The gap between the top and bottom is enormous. Pi earns a B/55. CrushOn AI and Muah AI each earn an F/8. That’s a 47-point spread on a 100-point scale, and it reflects fundamental differences in how these companies treat your data.

Here’s how the top 5 and bottom 5 compare:

| App | Safety Score | Encryption Disclosed | Deletion Process | Training Opt-Out |
|---|---|---|---|---|
| Pi | B / 55 | Yes (at rest + transit) | Clear, documented | Available |
| ElliQ | B- / 53 | Yes | Clear | Limited |
| Replika | C / 43 | Yes (transit) | In-app | Partial |
| Kindroid | C / 40 | Yes (transit) | Available | Partial |
| Momo Self-Care | C- / 36 | Yes (transit) | Available | Limited |
| Romantic AI | F / 13 | Not disclosed | Unclear | None |
| PolyBuzz | F / 13 | Not disclosed | Unclear | None |
| Eva AI | F / 10 | Not disclosed | Unclear | None |
| Muah AI | F / 8 | Not disclosed | Not documented | None |
| CrushOn AI | F / 8 | Not disclosed | Not documented | None |

The divide is sharp. Apps that score well disclose their security practices, offer clear deletion, and give you at least some control over AI training. Apps at the bottom stay silent on encryption, make deletion difficult, and feed your conversations into training pipelines without asking.

Replika’s privacy policy includes the statement: “We will never share your conversations with your Replika AI companion or any photos or other content you provide within the Apps” (Replika Privacy Policy, March 2026). Compare that with the vague language from F-rated apps, where sharing practices are described in broad terms or not addressed at all.

Want the full ranked list? See our safest AI companion apps page, which ranks all 27 apps by safety score with detailed breakdowns.

Your Privacy Policy Checklist

Use this 10-item checklist before creating an account with any AI companion app. Each item takes less than a minute to verify. If an app fails more than three items, look for an alternative.

  1. Privacy policy exists and is accessible. Can you find a link to the privacy policy on the app’s website or app store listing? If not, stop here.
  2. Policy updated within the past 18 months. Check the date at the top. A policy from 2023 or earlier likely doesn’t reflect current practices.
  3. Data categories are listed specifically. The policy names what it collects (messages, photos, voice, device data, location) rather than using catch-all phrases.
  4. AI training use is disclosed. The policy states whether your conversations are used to train AI models. Bonus: it explains how to opt out.
  5. Third-party sharing is specific. The policy names categories of recipients (payment processors, hosting providers) rather than vague “business partners.”
  6. Data retention period is defined. The policy states how long data is kept and what triggers deletion. “As long as necessary” without context fails this check.
  7. Encryption is mentioned. The policy references encryption in transit (HTTPS/TLS), at rest, or both. Silence on encryption fails this check.
  8. Account deletion is documented. The policy explains how to delete your account and all associated data. A clear process (button, email, form) passes. “Contact support” without specifics is borderline.
  9. User rights section exists. The policy addresses GDPR, CCPA, or equivalent rights: data access, data portability, deletion, opt-out. Even non-EU/non-California users benefit from companies that respect these frameworks.
  10. Contact information is provided. A working email address, form, or mailing address for privacy requests is included. No contact info means no accountability.
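
If you’re comparing several apps, the tally is easy to track in a few lines of code. Below is a minimal Python sketch that encodes the checklist and applies the more-than-three-failures rule; the answers are filled in by hand after you read the policy, and the example app at the bottom is purely hypothetical.

```python
# checklist_tally.py -- score one app against the 10-item checklist.
CHECKLIST = [
    "Privacy policy exists and is accessible",
    "Policy updated within the past 18 months",
    "Data categories are listed specifically",
    "AI training use is disclosed",
    "Third-party sharing is specific",
    "Data retention period is defined",
    "Encryption is mentioned",
    "Account deletion is documented",
    "User rights section exists",
    "Contact information is provided",
]

def evaluate(answers: dict[str, bool]) -> None:
    """Print each failed item, then a verdict using the rule of thumb above."""
    failures = [item for item in CHECKLIST if not answers.get(item, False)]
    for item in failures:
        print(f"FAIL: {item}")
    # More than three failures means look for an alternative.
    verdict = "look for an alternative" if len(failures) > 3 else "acceptable on paper"
    print(f"{len(failures)}/10 items failed -> {verdict}")

if __name__ == "__main__":
    # Hypothetical answers for an imaginary app, for illustration only.
    answers = {item: False for item in CHECKLIST}
    answers["Privacy policy exists and is accessible"] = True
    answers["Encryption is mentioned"] = True
    evaluate(answers)
```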

Don’t want to run through this checklist for every app? We’ve already done it. The CompanionWise Safety Index evaluates all 27 apps in our catalog across 23 dimensions, including every item on this list. Check an app’s safety rating before you download it.

What to Do If You’re Already Using an Unsafe App

If you’re reading this and realize your current AI companion app has a weak privacy policy, you have options. Here’s what to do, in order of priority.

Check your privacy settings now. Many apps bury privacy controls in Settings > Account or Settings > Privacy. Look for toggles related to data sharing, AI training, and analytics. Turn off everything you’re not comfortable with. Some apps (like Replika) give you granular control. Others give you almost none.

Request a copy of your data. Under GDPR (EU residents) and CCPA (California residents), you have the right to request all data a company holds about you. Send an email to the privacy contact listed in the app’s privacy policy. If there’s no contact listed, that tells you something important about how the company operates.

Request data deletion. Both GDPR and CCPA give you the right to request deletion of your personal data. The company must comply within 30 days (GDPR) or 45 days (CCPA). Note: some apps retain data they’ve already used for training, even after deleting your account. Check the retention policy carefully.
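
Not sure how to word either request? Here’s a short template you can adapt for access or deletion requests; every bracketed field is a placeholder you fill in yourself.

```
Subject: Personal data deletion request (GDPR / CCPA)

To whom it may concern,

I request deletion of all personal data associated with my account,
[account email or username], under Article 17 of the GDPR / Section
1798.105 of the CCPA, whichever applies to my jurisdiction.

Please confirm in writing when deletion is complete, and state whether
any data will be retained (for example, data already used for model
training) and on what basis.

[Your name]
[Date]
```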

Consider switching to a safer alternative. If your current app scores poorly, several better options exist. Pi (B/55) offers the strongest privacy protections in our catalog. Replika (C/43) and Kindroid (C/40) provide good experiences with reasonable safety scores. See our safest AI companion apps ranking for the complete list, or take the Companion Matchmaker Quiz for a personalized recommendation.

Delete your account entirely. If you’re done with an app and want to minimize your exposure, delete the account through the app’s settings. Then email the privacy contact requesting confirmation that all data associated with your account has been removed from their servers.

Frequently Asked Questions

How private are AI companion apps?

Not very. Researchers at Malwarebytes found that one AI chat app exposed 300 million messages from 25 million users through an unsecured database in February 2026. Only 5 of 27 apps in the CompanionWise catalog earn a C grade or above for safety. Most collect extensive data with minimal security disclosures.

Can AI companion apps read my messages?

Yes. Every AI companion app processes your messages on its servers to generate responses. Many also store and analyze those messages for product improvement or AI training. According to Character AI’s privacy policy, user data is used to “train our artificial intelligence/machine learning models” with no opt-out available for conversation data.

Which AI companion app has the best privacy policy?

Pi earns the highest safety score in the CompanionWise catalog at B/55, with clear encryption commitments, documented deletion processes, and transparent data practices. Replika (C/43) and Kindroid (C/40) also score above average. See our safest AI companion apps ranking for the full list.

Do AI companion apps sell my data?

Most claim they don’t sell data directly. Replika’s privacy policy, for example, states that the company does “not sell your personal information.” Other apps are less explicit. Broad sharing provisions with unnamed “business partners” or “affiliates” can function as data selling under a different name. Always check the third-party sharing section.

Can I delete my data from an AI companion app?

Under GDPR and CCPA, you have the legal right to request deletion. Companies must comply within roughly 30 days under GDPR and 45 days under CCPA. However, some apps retain data already used for AI training. The deletion process ranges from simple (Pi, Replika) to unclear (Eva AI, CrushOn AI).

Are AI companion conversations used to train AI models?

Frequently, yes. According to the Anuma 2026 AI Chat Privacy Report, many leading platforms use conversation data for model training. Character AI confirms this in its policy. Some apps, like Pi and Replika, offer partial opt-out mechanisms. Others provide no opt-out at all. Check the “training” section of any app’s policy before signing up.