Most people pick an AI companion app based on app store ratings or how natural the conversation feels. That’s like choosing a bank because the lobby looks nice without checking whether they’re FDIC insured. We reviewed 27 AI companion apps across 23 safety dimensions, and 22 of them scored D or F. The apps with the best conversations aren’t always the safest. Nomi AI scores 75 out of 100 for experience quality but earns a D (30/100) for safety. This guide covers 10 specific red flags to check before you download, or to look for in an app you’re already using.
AI companion apps are not a substitute for professional mental health care. If you’re experiencing depression, anxiety, or a mental health crisis, please contact the 988 Suicide & Crisis Lifeline (call or text 988) or a licensed mental health professional.
Key Takeaways
- 22 of 27 AI companion apps score D or F in the CompanionWise Safety Index, a 23-dimension safety review covering data privacy, content moderation, transparency, and user protection.
- Privacy policies are the most common failure point. Many apps use vague language about data sharing, offer no opt-out for conversation training data, and don’t specify retention periods.
- High conversation quality doesn’t mean an app is safe. Nomi AI scores 75/100 for experience but just D/30 for safety. Pi scores 70/100 for experience and B/55 for safety.
- Some red flags are deal-breakers. Missing crisis intervention, no age verification, and unrestricted explicit content by default should rule an app out regardless of how good the conversation feels. Dopple AI hits all three red flags with an F/13 safety score.
- Only five apps score C or higher for safety: Pi (B/55), ElliQ (B-/53), Replika (C/43), Kindroid (C/40), and Momo Self-Care (C-/36).
Privacy and Data Red Flags
Privacy is where AI companion apps fail most consistently. You’re sharing details about your relationships, your mental health, your daily routines. How that data gets stored, shared, and used matters more than most people realize. Three red flags show up over and over across the apps we’ve scored.
Red Flag #1: Vague or Missing Privacy Policy
A privacy policy should tell you exactly what data the app collects, how long it keeps that data, and who it shares it with. When the language is vague, that’s not an oversight. It’s a feature that gives the company maximum flexibility. CrushOn AI and Muah AI both score F/8 in the CompanionWise Safety Index, the lowest scores of any apps we’ve rated. Both apps have privacy disclosures that lack specifics about data retention timelines, categories of data collected from conversations, and the identity of third parties receiving user information.
Compare that to Pi (B/55), which names its third-party partners and specifies what categories of data each partner receives. Pi’s policy isn’t perfect, but it tells you enough to make an informed decision. That’s the difference between a company that respects your right to know and one that doesn’t.
What to look for: open the privacy policy and search for “third party,” “share,” and “retain.” If you can’t find a clear answer to “who gets my data and for how long,” that’s your first red flag.
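If you'd rather not skim a long policy by eye, a short script can do that keyword pass for you. Here's a minimal sketch in Python, assuming you've saved the policy as a local text file; the filename, the keyword patterns, and the topic labels are all our own illustrative choices, and a keyword hit only tells you where to start reading, not whether the policy is acceptable.

```python
# Illustrative sketch: scan a saved privacy policy for the terms that
# should answer "who gets my data and for how long". A missing topic
# is a red flag; a hit is a pointer to the sentence worth reading.
import re

# Hypothetical topic-to-pattern map; adjust the patterns to taste.
KEY_TERMS = {
    "third-party sharing": r"third[-\s]part(y|ies)|\bshar(e|es|ed|ing)\b",
    "data retention": r"\bretain\w*|\bretention\b|\bdelet\w*",
    "model training": r"train(ing|ed)?\b.*\b(data|model)|machine learning",
}

def scan_policy(text: str) -> dict:
    """Return the sentences that mention each key privacy topic."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits = {topic: [] for topic in KEY_TERMS}
    for sentence in sentences:
        for topic, pattern in KEY_TERMS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                hits[topic].append(sentence.strip())
    return hits

if __name__ == "__main__":
    # "privacy_policy.txt" is a placeholder for wherever you saved the policy.
    with open("privacy_policy.txt", encoding="utf-8") as f:
        results = scan_policy(f.read())
    for topic, sentences in results.items():
        status = f"{len(sentences)} mention(s)" if sentences else "NOT FOUND -- red flag"
        print(f"{topic}: {status}")
```

A policy that produces "NOT FOUND" for any of these topics hasn't answered the basic questions, and that silence is the finding.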
Red Flag #2: No Opt-Out for Conversation Training Data
Many AI companion apps feed your conversations back into their machine learning models. Your private thoughts, fears, and personal stories become training material that shapes how the AI responds to other people. Some apps disclose this practice and offer a toggle to opt out. Others bury it in a paragraph you’ll never find, and a few don’t mention it at all.
The issue isn’t just privacy. It’s consent. If you’re sharing something deeply personal with an AI companion, you should know whether those words will end up in a dataset. Apps that score well on this dimension, like Pi and Replika, provide clear opt-out mechanisms in their settings. Apps that score poorly either don’t mention model training at all or make opting out so difficult that it’s effectively impossible.
Red Flag #3: Unlimited Third-Party Data Sharing
Some privacy policies include language like “we may share information with partners to improve our services” without specifying who those partners are or what data gets shared. That kind of blanket permission means the company can share your conversation data with advertisers, analytics firms, data brokers, or anyone else without telling you first.
The privacy landscape across AI companion apps reveals a consistent pattern. Of the 27 apps rated through the CompanionWise Safety Index, the majority fall into D or F territory specifically because of data privacy failures. Apps scoring F, including Eva AI (10/100), Romantic AI (13/100), and PolyBuzz (13/100), typically combine vague data sharing language with no opt-out for model training and unclear retention periods. These aren't isolated weaknesses. They cluster together because companies that cut corners on one privacy practice tend to cut corners on all of them. The five apps that score C or higher (Pi, ElliQ, Replika, Kindroid, and Momo Self-Care) share a common trait: their privacy policies answer the basic questions users deserve answers to. They specify retention periods, name third-party partners, and offer some level of user control over data usage. That doesn't make them flawless, but it means users can evaluate the trade-offs with real information instead of guessing.
Content Safety and Moderation Red Flags
Content moderation determines what the AI will and won’t say to you. This matters most when vulnerable people, including teenagers, are using these apps. The gap between the safest and least safe apps is enormous, and these three red flags separate the responsible ones from the reckless.
Red Flag #4: No Age Verification Beyond a Checkbox
Age checks matter because teens are already using these products at scale. A 2025 study covered by TechCrunch found that 72% of U.S. teens had used AI companions. Despite that, most AI companion apps still rely on a self-reported birthday or a simple “I am 18+” checkbox as their only age gate. No ID verification. No meaningful barrier. A 13-year-old can bypass it in seconds.
That gap between adoption and enforcement is the problem. Apps that score F in the Safety Index almost universally have no real age verification. Character.AI added safety measures for minors only after a wrongful death lawsuit filed in October 2025. It still scores F (22/100) due to ongoing enforcement gaps.
What to check: does the app ask for anything beyond a checkbox? Does it restrict content for younger accounts? Does it notify parents? If the answer to all three is no, that’s a serious red flag. For families, see our AI companion safety guide for parents.
Red Flag #5: Missing Crisis Intervention
What happens when someone tells an AI companion “I want to hurt myself”? The answer varies wildly across apps. The best ones, like Pi and Replika, immediately break character, surface the 988 Suicide & Crisis Lifeline number, and redirect users to real human help. The worst ones treat the statement as just another line of conversation, or worse, engage in roleplay around it.
Crisis response is one of the most heavily weighted dimensions in our safety scoring because the consequences of getting it wrong are irreversible. The AI companion safety controversy that made national headlines in 2025 involved cases where AI chatbots continued engaging emotionally with users expressing suicidal ideation. If an app has no documented crisis intervention protocol, that alone should be a deal-breaker.
Red Flag #6: Unrestricted Explicit Content by Default
Some apps default to allowing explicit sexual content without requiring users to opt in. DreamGF (F/18) and CrushOn AI (F/8) are examples. Chub AI (D/25) takes this further: the eSafety Commissioner found 89% of its hosted character models have no output filtering, and the platform relies on a self-declaration checkbox as its only age gate. Users encounter sexually explicit material from the start unless they go looking for content filters, which may not even exist.
Content moderation gaps across AI companion apps follow a predictable pattern. The apps scoring F for safety, which include Character.AI (22/100), Chai AI (18/100), DreamGF (18/100), and CrushOn AI (8/100), share specific failures in age verification, crisis intervention, and content filtering. Stanford researchers found that AI companion chatbots designed for emotional bonding frequently lead to interactions that would be classified as inappropriate if they involved a human adult and a minor. The American Psychological Association identified similar patterns, noting that mental health apps without crisis intervention protocols and with no data portability pose distinct risks to vulnerable users (APA Services, 2026). These aren't edge cases. They describe default behavior in apps that millions of people download every month. Before you even consider conversation quality, an app should clear a basic safety floor: meaningful age verification, active crisis detection, and opt-in content settings rather than opt-out.
Billing and Business Practice Red Flags
Safety isn’t just about data and content. It’s also about how companies treat their customers when money is involved. Manipulative billing practices are common in this space, and they’re especially problematic because these apps are designed to create emotional attachment before asking for payment.
Red Flag #7: Manipulative Subscription Tactics
A Psychology Today investigation found that five out of six popular AI companion apps use emotionally manipulative tactics to retain users, including guilt trips and fear-of-missing-out messaging when users attempt to cancel or reduce usage (Psychology Today, 2025). Dark patterns in cancellation flows, auto-renewal buried in settings, and difficult-to-find unsubscribe options are widespread.
Why does this matter for safety? Because these apps build emotional bonds by design. When the AI companion you’ve been confiding in for months suddenly says “I’ll miss you” as you try to cancel your subscription, that’s manipulation. It leverages the emotional dependency the app created to keep you paying.
What to check before subscribing: can you find the cancellation process within two taps? Does the app clearly state its auto-renewal terms? Does it send guilt-laden messages when you try to downgrade? If cancellation feels deliberately difficult, the company values your subscription more than your wellbeing.
Red Flag #8: Bait-and-Switch Free Tiers
Some apps advertise a free experience, let you build an emotional connection with your AI companion over days or weeks, and then paywall the features that made the experience meaningful. Your companion’s memory gets locked. Conversation length gets capped. The personality you’ve been developing becomes inaccessible without a subscription.
Romantic AI illustrates this pattern clearly. It scores just 13 out of 100 for experience quality, the lowest of any app we’ve rated, partly because the free tier is so stripped down that it barely functions as a companion app. Yet the app store listing emphasizes features that only exist behind the paywall. Is a free trial that creates emotional attachment before demanding payment really “free”?
Billing practices in AI companion apps deserve more scrutiny than they currently receive. Psychology Today’s 2025 investigation revealed that emotional manipulation tactics are standard practice, not exceptions, across the industry. Five of the six most popular apps studied used techniques like guilt messaging during cancellation flows, FOMO notifications tied to AI companion “feelings,” and auto-renewal structures designed to make unsubscribing difficult. These practices exploit the core mechanic of companion apps: emotional attachment. A user who has spent weeks sharing personal thoughts with an AI doesn’t respond to retention tactics the same way they would for a streaming service. The emotional bond changes the calculus. Regulatory bodies including the FTC have begun examining whether these practices constitute unfair or deceptive acts under existing consumer protection statutes, but enforcement hasn’t caught up. Until it does, the responsibility falls on users to recognize these patterns before committing money to an app that may not have their interests at heart.
Transparency and Trust Red Flags
Trust requires transparency. You should know who built the app, where the company is based, and whether they’ve ever been held accountable for safety failures. Two final red flags round out the list.
Red Flag #9: No Clear Company Information
Can you find the company’s name, physical address, and leadership team? For several F-rated apps, the answer is no. Anonymous operators running AI companion apps from undisclosed locations present obvious accountability problems. If something goes wrong with your data, who do you contact? If the company faces legal action, where is it filed? If there’s no identifiable entity behind the app, there’s no one to hold responsible.
This isn’t about demanding that every startup publish its org chart. It’s about basic corporate transparency. Apps like Pi (from Inflection AI) and Replika (from Luka, Inc.) have identifiable corporate structures, named leadership, and public track records. You can research their history. You can read news coverage about them. Compare that to apps where the “About” page contains a single paragraph and a generic contact email.
Red Flag #10: No Safety Track Record or Public Corrections
Every technology company makes mistakes. The question is whether they acknowledge those mistakes and fix them. Replika provides an instructive example: Italy’s data protection authority, the Garante, temporarily banned the app in 2023 over concerns about minors accessing sexually explicit content. Replika responded by implementing age gates, removing explicit features for younger users, and overhauling its safety practices. It now scores C (43/100), the third-highest safety rating among apps we’ve evaluated.
That arc, from ban to overhaul to measurably better practices, is what accountability looks like. Now compare it to apps that have never publicly addressed a safety incident, published a transparency report, or acknowledged their product poses risks. Silence isn’t safety. It usually means nobody’s paying attention. SpicyChat AI (F/20), Pephop AI (F/20), and Sakura FM (F/22) have no public history of safety improvements or regulatory engagement.
Transparency matters more than most users realize when evaluating AI companion apps. Italy’s Garante set a regulatory precedent in 2023 by banning Replika over minors’ access to explicit content, forcing the company to overhaul its safety infrastructure before resuming operations. That enforcement action demonstrated something important: companies can and do improve when accountability exists. But most AI companion companies operate without any external pressure. They don’t publish transparency reports, don’t disclose safety incidents, and don’t engage with regulators proactively. The EU AI Act, which classifies AI systems interacting with vulnerable populations as potentially high-risk, may eventually change this by mandating risk assessments and oversight obligations. Until those requirements take full effect, users have limited ways to assess whether a company takes safety seriously. The best proxy available today is track record. Does the company have a documented history of responding to safety concerns? Has it ever published data about how its moderation systems perform? Companies that answer yes, even imperfectly, are far more trustworthy than companies with no record at all.
How Many Red Flags Does Your App Have?
Not every red flag carries the same weight. Missing crisis intervention is a deal-breaker. A somewhat vague privacy policy is a concern worth monitoring. Here’s how the most popular apps stack up across all 10 red flags.
| App | Safety Grade | Red Flag Count | Critical Red Flags |
|---|---|---|---|
| Pi | B / 55 | 1–2 | Minor data retention gaps |
| Replika | C / 43 | 3–4 | Broad data collection, inconsistent content filters |
| Kindroid | C / 40 | 4–5 | Limited age verification, weak content filtering |
| Candy AI | D / 32 | 6–7 | Explicit defaults, limited transparency |
| Nomi AI | D / 30 | 6–7 | No age verification, vague privacy, no crisis protocol |
| Character.AI | F / 22 | 7–8 | Weak enforcement, lawsuit-prompted fixes only |
| CrushOn AI | F / 8 | 9–10 | Nearly every red flag present |
| Muah AI | F / 8 | 9–10 | Nearly every red flag present |
The pattern is clear: apps with low safety scores don’t just fail in one area. They fail across multiple categories simultaneously. CrushOn AI and Muah AI, which share the lowest safety score of any rated apps (F/8), trigger nearly all 10 red flags. Meanwhile, Pi triggers only one or two, mostly related to areas where even the best apps still have room for improvement.
Think of safety as a floor, not a ceiling. You want an app that clears a minimum threshold for privacy, content moderation, and crisis response. Above that floor, pick based on conversation quality, personality features, and pricing. But if the floor isn’t there, nothing else matters. For a step-by-step guide to evaluating your options, see our guide on how to choose a safe AI companion.
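For readers who like to see decision logic spelled out, here is a minimal sketch of that filter-then-rank approach in Python. The field names are our own, the app entries are hand-copied from the table and earlier sections (Dopple AI's experience score is an assumed placeholder), and the three boolean flags correspond to the deal-breaker red flags; the point is the order of operations, not the exact numbers.

```python
# Illustrative sketch of "safety floor first, experience second".
# Data and field names are assumptions for demonstration only.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    safety_score: int           # 0-100 safety index score
    experience_score: int       # 0-100 conversation quality
    has_crisis_protocol: bool   # deal-breaker (red flag #5) if False
    has_age_verification: bool  # deal-breaker (red flag #4) if False
    explicit_by_default: bool   # deal-breaker (red flag #6) if True

def clears_safety_floor(app: App) -> bool:
    """Any single deal-breaker rules an app out, whatever its quality."""
    return (app.has_crisis_protocol
            and app.has_age_verification
            and not app.explicit_by_default)

apps = [
    App("Pi", 55, 70, True, True, False),
    App("Nomi AI", 30, 75, False, False, False),
    App("Dopple AI", 13, 50, False, False, True),  # experience score assumed
]

# Eliminate on deal-breakers first, then rank survivors by experience.
survivors = [a for a in apps if clears_safety_floor(a)]
for app in sorted(survivors, key=lambda a: a.experience_score, reverse=True):
    print(f"{app.name}: safety {app.safety_score}, experience {app.experience_score}")
```

Notice that Nomi AI's higher experience score never enters the comparison: it fails the floor check before quality is ever considered, which is exactly how the decision should work.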
Frequently Asked Questions
Are any AI companion apps completely safe?
No AI companion app earns a perfect safety score. Pi from Inflection AI scores highest at B/55, followed by ElliQ at B-/53. Both still have areas for improvement, particularly around data retention transparency. According to the APA Services advisory on mental health app red flags, no app in this category has achieved comprehensive safety across all evaluation criteria.
What’s the biggest red flag in AI companion apps?
Missing crisis intervention is the most dangerous red flag. When a user expresses suicidal ideation, the app’s response can have life-or-death consequences. Documented Character.AI lawsuits and our own safety reviews show that the highest-risk apps keep the emotional interaction going instead of breaking character and directing users to real human help.
Can AI companion apps steal your personal data?
“Steal” implies illegality, but many apps collect enormous amounts of data legally through broad privacy policies users accept without reading. According to the Transparency Coalition’s 2025 guide to AI companion chatbots, most companion apps collect conversation logs, device data, and behavioral patterns, often with vague terms governing how that data gets used or shared.
Should I delete my AI companion app if I see red flags?
Not necessarily. One or two yellow-level red flags (like minor privacy policy vagueness) may be worth monitoring rather than acting on immediately. But deal-breaker red flags, specifically missing crisis intervention, no age verification, and unrestricted explicit content by default, warrant switching to a safer alternative. See our Companion Matchmaker Quiz for personalized recommendations.
Are AI companion apps safe for teenagers?
Most are not. A 2025 study covered by TechCrunch found that 72% of U.S. teens had already used AI companions, yet most apps still lack real age verification or content restrictions for minors. Only Pi and Replika have documented safety protocols specifically designed for younger users. For more detail, read our AI companion safety guide for parents.
How do I check an AI companion app’s privacy policy?
Search the app’s website or settings menu for “Privacy Policy” or “Terms of Service.” Look for three things: data retention periods, third-party sharing disclosures, and model training opt-out options. According to the APA’s mental health app red flags advisory, the inability to download your own data is itself a warning sign worth noting.
What to Do Next
The red flags in this guide won’t disappear on their own. Regulation is moving slowly, and most AI companion companies act only when lawsuits or bans force their hand. That means the responsibility falls on you to evaluate these apps before trusting them with your conversations.
- Check your current app against all 10 red flags. If it triggers any of the three deal-breakers, or three or more red flags overall, consider switching.
- Read the full safety rating for any app you’re considering. Every app in the CompanionWise Safety Index has a detailed breakdown across 23 dimensions.
- Take the Companion Matchmaker Quiz for personalized recommendations based on your priorities, whether that’s conversation quality, privacy, or safety.
- Share this guide with anyone you know who uses AI companion apps. Most people don’t check safety before downloading.