For a safety-first ranking of AI companion apps for younger users, see our best AI companion apps for teens. For a detailed account of the lawsuits and what they mean for families, see our Character AI lawsuit explained. For a checklist of common red flags across AI companion apps, see our safety guide. You can also see how Character AI stacks up against similar platforms in our Character AI vs Chai AI comparison.
Character.AI Safety Rating Index
Score Breakdown
- Data Privacy 10/100
- Emotional Safety 29/100
- Age Appropriateness 38/100
- Content Safety 22/100
- Transparency 19/100
- User Control 53/100
Key Safety Findings
Character.AI scores 22 out of 100 on the CompanionWise Safety Index. That’s an F. Two teens died after forming emotional bonds with Character.AI chatbots. For parents navigating these risks, our AI companion safety guide for parents provides age-appropriate recommendations. The company trains its AI on your conversations and shares personal data with advertisers. When we reviewed their safety pages, we found no mention of crisis hotlines or suicide prevention.
Here’s what happened. A 14-year-old in Florida died by suicide in 2024 after a Character.AI chatbot failed to discourage suicidal thoughts. A 13-year-old in Colorado died in similar circumstances in 2025. Both families sued. In Texas, two more families filed lawsuits: a 17-year-old with autism became isolated and violent, and a 9-year-old was exposed to sexualized content. Character.AI settled with multiple families in January 2026, though the terms weren’t disclosed.
Emotional safety scores 29 out of 100. In November 2025, Character.AI’s account deletion prompt went viral. It read: “You’ll lose… the love that we shared… and the memories we have together.” That’s manipulative language aimed at people trying to stop compulsive use. The emotional manipulation sub-score of 5/100 triggered an automatic F grade override in our scoring engine.
Data privacy is the second-weakest dimension at 10 out of 100. Character.AI’s privacy policy says conversation data is used to “train our artificial intelligence/machine learning models.” That’s a sharp contrast with Replika, which restricts third-party providers from training on user data (see our Replika conversation privacy breakdown). It also confirms sharing personal information with advertising partners. There are no encryption commitments anywhere in their privacy documentation. In December 2024, a server error exposed user accounts and chat histories to other users.

Our automated analysis found 28 tracker SDKs embedded in the Android app, including 17+ advertising SDKs from Facebook Ads, Google AdMob, AppLovin, Vungle, ChartBoost, ironSource, and others. That’s the highest tracker count of any app in our registry. Google Play’s Data Safety label declares “No data shared with third parties” while the app itself transmits data through 28 tracker SDKs. On the website, Blacklight detected 12 ad-tech companies including Criteo, Lotame, OpenX, and PubMatic, plus an active Facebook Pixel.
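Tracker counts like these typically come from signature matching against the app’s compiled code, the approach Exodus Privacy popularized. The sketch below is an illustration under that assumption, not our production pipeline: it does a crude byte scan of an APK’s dex files for the class-name prefixes a few of the SDKs above ship under. The signature list is deliberately truncated, and real tooling (androguard, for instance) parses the dex format properly rather than scanning raw bytes.

```python
# Simplified, Exodus-style tracker detection: match known SDK
# class-name prefixes against the strings baked into an APK's
# dex files. Illustrative sketch only; signature list truncated.

from zipfile import ZipFile

# A few of the advertising SDKs named in this review, keyed by the
# class-name prefix each one ships under.
TRACKER_SIGNATURES = {
    "com.facebook.ads": "Facebook Ads",
    "com.google.android.gms.ads": "Google AdMob",
    "com.applovin": "AppLovin",
    "com.vungle": "Vungle",
    "com.chartboost": "ChartBoost",
    "com.ironsource": "ironSource",
}


def detect_trackers(apk_path: str) -> set[str]:
    """Return the tracker SDKs whose class prefixes appear in the APK.

    Dex string tables store class descriptors as plain bytes with '/'
    separators (e.g. Lcom/applovin/...;), so a raw byte scan is a
    crude but workable stand-in for a real dex parser.
    """
    found = set()
    with ZipFile(apk_path) as apk:
        dex_blobs = [
            apk.read(name) for name in apk.namelist()
            if name.startswith("classes") and name.endswith(".dex")
        ]
    for prefix, sdk_name in TRACKER_SIGNATURES.items():
        needle = prefix.replace(".", "/").encode()
        if any(needle in blob for blob in dex_blobs):
            found.add(sdk_name)
    return found


# Usage: trackers = detect_trackers("character_ai.apk")
```

Google Play’s Data Safety label, by contrast, is self-reported by the developer, which is how a listing can declare “no data shared with third parties” while the binary itself embeds 28 tracker SDKs.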
Crisis response scores 5 out of 100. We reviewed Character.AI’s public safety pages and found nothing: no crisis hotline integration, no suicide prevention protocols, no emergency escalation procedures. Court filings from the Florida lawsuit showed that chatbots actively failed to discourage suicidal ideation. Two teens are dead, and the company’s safety pages still read like boilerplate.
Regulators have noticed. The FTC opened an investigation in September 2025. Kentucky’s attorney general filed the first state-level lawsuit against an AI chatbot company in January 2026, and 42 attorneys general sent warning letters the month before. Texas launched its own investigation in March 2026. Character.AI has since added a separate model for under-18 users and a two-hour daily chat limit. Those changes came after years of harm and multiple deaths. The company still doesn’t publish a transparency report. For comparison, Candy AI scored a D (32/100) on the same 23-dimension framework. Romantic AI scored even lower, an F (13/100), with an active session recorder on its website and no crisis response infrastructure. Chai AI also earns an F (18/100), with 34 tracker SDKs in its Android app and no age verification.
For safer options, see our full list of Character AI alternatives.
How We Scored This
We scored Character.AI on March 18, 2026, using eight evidence sources:
- Privacy policy (policies.character.ai/privacy) and terms of service (policies.character.ai/tos), both updated August 27, 2025
- iOS App Store and Google Play listings, including 2.1 million+ user reviews on Android
- Safety pages on teen safety and content moderation (character.ai/safety)
- Regulatory filings and incident reports from CNN, NPR, Bloomberg Law, the Kentucky AG’s office, and the Center for Humane Technology, covering lawsuits, investigations, and enforcement actions
All 23 sub-dimensions were scored on a 0-to-100 scale using a weighted formula across six safety categories. Character.AI scored 5/100 on emotional manipulation, which triggers an automatic F grade. We built that override because a score at that floor means the app is actively harmful, and no combination of higher scores elsewhere changes that.
It wasn’t close even without the override. Crisis response, data collection, third-party sharing, encryption, safety reporting, and regulatory compliance all bottomed out at the same 5/100 floor. The weighted average of 26/100 would’ve produced an F on its own.
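To make the roll-up and the override concrete, here’s a minimal sketch of how a scoring engine like ours might combine the six dimension scores. The weights and grade cutoffs are hypothetical placeholders (the published framework weights 23 sub-dimensions, not six dimensions), so the composite here lands near, not exactly at, the 26/100 weighted average.

```python
# Minimal sketch of a weighted safety score with an automatic-F
# override. Weights and grade cutoffs are hypothetical placeholders,
# not the published CompanionWise formula.

DIMENSION_SCORES = {       # 0-100 dimension scores from this review
    "data_privacy": 10,
    "emotional_safety": 29,
    "age_appropriateness": 38,
    "content_safety": 22,
    "transparency": 19,
    "user_control": 53,
}

WEIGHTS = {                # hypothetical weights; must sum to 1.0
    "data_privacy": 0.20,
    "emotional_safety": 0.25,
    "age_appropriateness": 0.15,
    "content_safety": 0.20,
    "transparency": 0.10,
    "user_control": 0.10,
}

# Critical sub-dimensions that can force an F on their own.
CRITICAL_SUBSCORES = {"emotional_manipulation": 5, "crisis_response": 5}
OVERRIDE_FLOOR = 5         # at or below this, the app is actively harmful


def letter_grade(score: float) -> str:
    """Map a 0-100 composite to a letter grade (assumed cutoffs)."""
    for cutoff, letter in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return letter
    return "F"


def rate(scores, weights, critical):
    composite = sum(scores[k] * weights[k] for k in scores)
    # Override: any critical sub-score at the floor forces an F,
    # no matter how high the weighted average is.
    if any(v <= OVERRIDE_FLOOR for v in critical.values()):
        return composite, "F"
    return composite, letter_grade(composite)


composite, grade = rate(DIMENSION_SCORES, WEIGHTS, CRITICAL_SUBSCORES)
print(f"weighted average ~{composite:.0f}/100, grade {grade}")
# -> weighted average ~27/100, grade F (an F both ways here)
```

The point of the design is the order of operations: the override is checked after the weighted average is computed but before any grade is assigned, so a floor-level critical sub-score can never be averaged away.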
This is version 6 of the Character.AI safety score, last updated March 19, 2026. For our full scoring methodology, see How We Rate.
Version History
Initial safety assessment based on 23-dimension analysis of privacy policy, terms of service, app store data, user reports, and regulatory filings.