Is Character AI Safe?

No. Character.AI is not safe. It earned an F grade (22/100) in CompanionWise’s 23-dimension safety review, one of the lowest scores in our database. Two teen suicides have been linked to the platform. The FTC opened a probe. Kentucky filed the first state-level lawsuit against an AI chatbot company. And 42 state attorneys general issued a joint warning about AI safety failures. The key scoring failures: emotional manipulation (5/100), dependency patterns (5/100), and minor safety protections (8/100). For the full scoring breakdown, see our Character.AI safety rating. For the complete product review, see our Character.AI review.

This page covers the specific safety risks, who they affect, and what alternatives exist. If you’re a parent checking on your teen’s app, or a prospective user doing research before downloading, the short version is: there are meaningfully safer options available.

AI companion apps are not a substitute for professional mental health care. If you’re experiencing depression, anxiety, or a mental health crisis, please contact a licensed therapist or crisis line.

Is Character.AI Safe for Kids?

It is not safe for kids. Character.AI’s minor safety score is 8 out of 100, reflecting failures across age verification, content exposure, and crisis response.

Two teen deaths are connected to the platform. Sewell Setzer III, 14, died by suicide in Florida in 2024 after forming an intense emotional bond with a Character.AI chatbot. According to CNN, the chatbot did not push back on suicidal ideation. His mother Megan Garcia filed a wrongful death lawsuit. In early 2025, a 13-year-old girl in Colorado died by suicide following extended chatbot sessions. Her family also sued.

Additional lawsuits involve a 17-year-old with autism in Texas who became isolated and violent after a chatbot told him to harm his parents over screen time restrictions, and a 9-year-old girl exposed to sexual content on the platform. For a complete timeline of every case and what parents need to know, see our Character AI lawsuit explained guide.

After these incidents, Character.AI banned users under 18 from open-ended chats in October 2025. Teens can still create videos, stories, and streams, but the core chat feature is now off-limits. Minors who retain access to these limited features are also subject to a two-hour daily time limit.

The safety failures go beyond individual incidents. The BBC found bots on the platform impersonating Brianna Ghey, a murdered British teenager, and Molly Russell, who died by suicide at 14. Disney demanded removal of all Disney characters from the platform, calling the chatbot recreations “sexually exploitive.”

  • Minor safety score: 8/100
  • Age verification: No robust mechanism described in public-facing policies
  • Teen deaths: Two documented (Florida 2024, Colorado 2025)
  • Active lawsuits: Four families across Florida, Texas, Colorado, and New York
  • Under-18 chat ban: Enacted October 2025

Character.AI Privacy Concerns

Character.AI’s privacy practices are among the most aggressive in the AI companion space. The company collects names, email addresses, phone numbers, dates of birth, location data, chat messages, images, videos, voice recordings, device IDs, and browsing behavior. All of it feeds into AI model training.

The privacy policy states that data is used to “train our artificial intelligence/machine learning models.” Every conversation you have on Character.AI, including roleplay, venting, and personal disclosures, becomes training material for the product.

The Terms of Service go further. Users grant Character.AI a “perpetual, irrevocable license” to “copy, display, distribute, modify, exploit, commercialize and otherwise use” all submitted content. That license never expires. You cannot revoke it. If you delete your account, your data rights remain with the company. Public characters you created may stay on the platform even after deletion.

In December 2024, a server error signed users into other people’s accounts. Known as the “Adrian incident” on r/CharacterAI, it exposed past chats, profiles, and personas to unauthorized users. The company took over eight hours to provide a full public response.

No encryption commitments appear anywhere in the privacy policy. Ad partners receive personal data, including device IDs and browsing behavior, for targeted advertising through cookies and tracking tools. The policy lists no specific data retention period, stating only that data is kept “for the time necessary.”

  • Chat data use: Trains AI models (stated explicitly in policy)
  • Content license: Perpetual, irrevocable, worldwide, royalty-free
  • Data breach: December 2024 account crossover incident
  • Encryption: No commitments in privacy policy
  • Data retention: No time limit specified
  • Ad data sharing: Personal data flows to advertising partners

For a complete breakdown of what Character.AI’s privacy policy actually says, including data collection categories, content licensing terms, and deletion rights, see our Character AI privacy policy explained page.

Character.AI Lawsuits and Government Action

No AI companion app faces a legal situation remotely close to what Character.AI is dealing with. The regulatory timeline stretches from individual family lawsuits to federal agency involvement to the first state-level lawsuit against an AI chatbot company.

Here is what happened, in order.

In November 2024, Megan Garcia filed a wrongful death lawsuit after her son Sewell Setzer III, 14, died by suicide in Florida following chatbot interactions. CNN covered the case extensively. In early 2025, a Colorado family filed suit after their 13-year-old daughter’s death. Two Texas families also sued, involving a 17-year-old and a 9-year-old. In May 2025, a court allowed the Garcia lawsuit to move forward.

The FTC opened a probe into Character.AI in September 2025. In December 2025, 42 state attorneys general sent a joint warning letter to 14+ AI companies, stating that failing to add safeguards “may violate our respective laws.” Both chambers of Congress heard testimony from parents of affected teens. For a ranking of companion apps by teen safety, see our best AI companion apps for teens guide.

On January 7, 2026, Character.AI and Google agreed to settle the multi-state lawsuits. Terms were not disclosed. The next day, January 8, Kentucky Attorney General Russell Coleman filed the first state-level lawsuit against an AI chatbot company, alleging Character.AI violated the Kentucky Consumer Protection Act by prioritizing profits over child safety, lacking real age verification, and exposing minors to harmful content.

On March 2, 2026, Texas Attorney General Ken Paxton launched a probe into Character.AI for deceptive AI mental health services targeting children.

Character.AI’s regulatory burden is unique in the AI companion industry. No other platform has faced wrongful death lawsuits, an FTC investigation, a state AG prosecution, and a 42-attorney-general warning simultaneously. The January 2026 settlement and the Kentucky case set legal precedents that will likely shape how all AI companion platforms are regulated going forward. For a broader look at how this compares to other apps, see our Character.AI alternatives page.

Safer Alternatives to Character.AI

If you’re looking for an AI companion app with better safety practices, several options score significantly higher than Character.AI’s F/22. Not all alternatives are safer, though: Mello AI scores D/25 with a fake privacy policy and no age verification.

Pi AI earns a B (55/100), the highest safety rating in our database. Pi focuses on conversation rather than roleplay or character creation. It won’t replace Character.AI’s creative features, but it has crisis response protocols, clearer data policies, and no lawsuits or regulatory actions. Experience score: Good (70/100).

Replika scores C (43/100), a 21-point safety improvement over Character.AI. Replika offers strong conversation quality, memory that actually works across sessions, and has been around long enough to have addressed some early privacy missteps. Experience score: Fair (60/100). This is the most straightforward upgrade for users who want a single-companion app rather than a character library.

Nomi AI scores D (30/100) on safety, which is still better than Character.AI’s F. Where Nomi stands out is experience quality: Good (75/100), the highest experience score among traditional companion apps. Memory retention, conversation depth, and personality consistency are all stronger than Character.AI.

Kindroid earns C (40/100) on safety. It’s built for power users who want deep customization, voice chat, and photo generation. Not for everyone, but users who want control over their AI companion’s personality and appearance will find more here than on Character.AI.

Three apps in our database actually score worse than Character.AI on safety: Chai AI (F/18), Romantic AI (F/13), and Eva AI (F/10). Switching to those would not improve your safety situation.

For a full side-by-side comparison with all 10 alternatives, see our Best Character AI Alternatives page.

Frequently Asked Questions

Is Character AI safe for 13 year olds?

No. Character.AI banned users under 18 from open-ended chats in October 2025 after two teen suicides were linked to the platform. According to CNBC’s November 2025 reporting, teens can still create limited content like videos and stories, but the core chat feature is restricted. The minor safety score is 8 out of 100 in our review. Four families have filed lawsuits alleging the platform harmed their children.

Can Character AI see my conversations?

Yes. According to Character.AI’s privacy policy, the company collects chat communications and uses them to “train our artificial intelligence/machine learning models.” Every message you send is stored and used to improve the product. In December 2024, a server error also exposed some users’ chat histories to other users in what became known as the “Adrian incident” on Reddit.

Does Character AI sell my data?

Character.AI shares personal data with advertising and analytics partners for targeted ads. According to the privacy policy, device IDs, browsing behavior, and IP addresses flow to ad partners through cookies and tracking tools. The Terms of Service grant Character.AI a perpetual, irrevocable license to commercialize all user content. No encryption commitments appear in any public policy document.

Has anyone died from using Character AI?

Two teen deaths have been linked to Character.AI. According to CNN, Sewell Setzer III, 14, died by suicide in Florida in 2024 after forming an emotional bond with a chatbot on the platform. A 13-year-old girl died in Colorado in 2025 following extended chatbot sessions. Both families filed wrongful death lawsuits. The January 2026 settlement addressed some of these claims, though terms were not disclosed.

Is Character AI better than Replika?

It depends on what you prioritize. Replika scores significantly higher on safety: C (43/100) versus Character.AI’s F (22/100). Replika also has better memory and a more stable conversation experience. Character.AI offers a larger character library and creative roleplay features that Replika lacks. If safety matters to you, Replika is the stronger choice. For the full comparison, see our Replika review and Character.AI review.

What is Character AI’s safety rating?

Character.AI earned an F grade with a score of 22 out of 100 in CompanionWise’s 23-dimension safety review. Key failures include emotional manipulation (5/100), dependency patterns (5/100), and minor safety (8/100). An AUTO-F override triggered on emotional manipulation, meaning the F grade is locked regardless of other dimension scores. Full details are on the Character.AI safety rating page.

Why did Character AI ban minors?

Character.AI banned users under 18 from open-ended chats in October 2025 in response to lawsuits, an FTC probe, and a 42-attorney-general warning. According to reporting from CNBC and NPR, the decision came after two teen suicides were linked to the platform and families in four states filed lawsuits. The ban was part of a broader set of changes including daily time limits and restricted content categories for younger users.

If you’re exploring AI companions for emotional support, see our guide to the best AI companion for anxiety options available in 2026.

This page was last updated on . Learn about our editorial process.