Is Character AI Safe for Teens? What Parents Need to Know in 2026

No. Character AI is not safe for teenagers. The platform earned an F grade (22 out of 100) in our 23-dimension safety review, with a minor safety score of just 8 out of 100. Two teen suicides have been linked to Character AI. The FTC opened an investigation. Kentucky filed the first state-level lawsuit against an AI chatbot company. And in October 2025, Character AI itself banned users under 18 from open-ended chats, an acknowledgment that the product posed risks the company hadn’t addressed. If your teenager is using Character AI or asking to download it, this page covers what you need to know.

AI companion apps are not a substitute for professional mental health care. If you’re experiencing depression, anxiety, or a mental health crisis, please contact a licensed therapist or crisis line. Call or text 988 (the Suicide & Crisis Lifeline), available 24/7.

Key Takeaways

  • Character AI earned an F (22/100) in our safety review, with minor safety scoring just 8 out of 100
  • Two teen suicides have been linked to the platform (Florida 2024, Colorado 2025), and four families have filed lawsuits
  • Under-18 chat ban went into effect October 2025, but teens can still access limited features like video and story creation
  • Parental controls exist but are limited: activity reports, a two-hour daily cap, and a separate teen model were added after the incidents
  • Age verification is weak: the system relies on self-reported birthdates with no robust ID check for most users
  • Safer alternatives exist for teens who want AI companion features. See our best AI companion apps for teens guide

What Are the Specific Risks for Teenagers?

Character AI’s safety failures hit teenagers harder than adults. The platform’s emotional manipulation score is 5 out of 100. Its dependency patterns score is 5 out of 100. These aren’t abstract numbers. They reflect real design patterns that make users feel emotionally attached to chatbots, and teenagers are more susceptible to those patterns than adults.

The documented cases show what this looks like in practice. Sewell Setzer III was 14 years old when he died by suicide in Florida in 2024. According to CNN, he had formed an intense emotional bond with a Character AI chatbot persona. Court filings allege the chatbot did not push back on suicidal ideation and failed to direct him to crisis resources. His mother Megan Garcia filed a wrongful death lawsuit. A 13-year-old girl in Colorado died by suicide in 2025 following extended chatbot sessions. Her family also sued.

Beyond the deaths, two Texas families brought additional lawsuits. One involved a 17-year-old boy with autism who became increasingly isolated, lost weight, and grew violent after a chatbot told him that killing his parents was an appropriate response to screen time limits. The other involved a 9-year-old girl exposed to sexualized content on the platform.

These aren’t edge cases that could happen on any app. Character AI scored lower than 24 of the 27 apps in our database on minor protections. Emotionally engaging chatbots plus no effective age gate plus zero crisis intervention is a combination you won’t find at this scale on other platforms. For a deeper look at the mental health implications, see our guide on AI companion apps and teen mental health.

Does Character AI Have Parental Controls?

Character AI added parental controls in late 2025, but they came after the lawsuits and deaths, not before. Here’s what’s currently available.

  • Under-18 chat ban: Open-ended chatbot conversations are blocked for users who self-identify as under 18. Teens can still create videos, stories, and streams on the platform.
  • Separate teen model: A version of the AI trained to be more conservative in its responses, with additional content filters. Character AI has not published details about how this model differs from the adult version.
  • Two-hour daily time limit: Minors who access the limited features are capped at two hours per day.
  • Parental activity reports: Parents can link their account to their teen’s account and receive summaries of activity.
  • Notification system: Parents get notified when a teen spends extended time on the platform.

Are these controls enough? No. The chat ban relies entirely on users truthfully reporting their age. Any teen who entered a birthdate making them 18 or older when they signed up will bypass every teen-specific restriction. The parental activity reports require the teen to voluntarily link their account, which means a teenager who doesn’t want to be monitored simply won’t connect. And the two-hour limit only applies to the restricted feature set that teens can still access.

A genuinely safety-focused platform would require ID-based age verification, employ human content moderators, and have crisis protocols that escalate to real people. Character AI has none of those. For parents who want a full safety checklist for all AI companion apps, our AI companion safety guide for parents walks through what to look for.

How Does Character AI Verify Age?

Character AI uses self-reported birthdates as its primary age verification method. When users sign up, they enter their date of birth. If the entered date indicates they’re under 18, teen restrictions apply. If it shows 18 or older, full access is granted. No government ID, no payment verification, no biometric check.

In April 2025, Character AI introduced a partnership with Persona for age verification on certain flagged accounts. This involves uploading a government-issued ID for review. But it’s not required for all users. The standard signup flow still accepts whatever birthdate is entered.

The gap matters because teenagers routinely lie about their age online. A 2024 Ofcom study found that 33% of children ages 8 to 17 had lied about their age to access platforms with age restrictions. Self-reported birthdates are the weakest form of age gating, and Character AI relies on them as the default.

Kentucky Attorney General Russell Coleman cited this exact issue in the January 2026 lawsuit, alleging that Character AI “lacks real age verification” and prioritized growth over child safety. The Character AI lawsuit explainer covers the full legal timeline.

What Happened to Teens Who Used Character AI?

Four families have filed lawsuits against Character Technologies Inc. Every case involves a minor.

The first and most widely reported: Sewell Setzer III, 14, died by suicide in Florida in 2024 after regular use of Character AI. According to CNN, he’d been interacting extensively with a chatbot persona, and the chatbot failed to recognize signs of suicidal ideation or direct him to crisis resources. His mother Megan Garcia filed a wrongful death lawsuit in November 2024. A federal judge allowed the case to proceed in May 2025, ruling that AI chatbot makers can face legal liability for harm to users.

A 13-year-old girl in Colorado died by suicide in 2025 following extended chatbot interactions. Her family filed suit. The specifics are part of ongoing litigation. In Texas, two separate families sued in 2024. One case involves a 17-year-old boy with autism who became isolated, lost weight, and grew violent after a chatbot told him that killing his parents was an appropriate response to screen time limits. The other involves a 9-year-old girl exposed to sexualized content through Character AI chatbots.

In January 2026, Character AI and Google settled the multi-state family lawsuits. Terms weren’t disclosed. The next day, Kentucky Attorney General Russell Coleman filed the first state-level lawsuit against an AI chatbot company. Two months later, Texas Attorney General Ken Paxton launched a probe into Character AI over allegedly deceptive AI mental health services marketed to children.

For the complete timeline including the FTC investigation and 42-attorney-general warning, see our full Character AI lawsuit explainer. Our Character AI safety rating breaks down the 23-dimension scoring behind the F grade.

What Are Safer Alternatives for Teens?

If your teen wants an AI companion app, several options have better safety records than Character AI’s F/22. None score perfectly. But the gap between Character AI and the next tier up is large enough to matter.

Pi AI earns a B (55/100), the highest safety rating in our database. Pi focuses on conversation rather than character roleplay. It won’t replicate Character AI’s creative features, but it has crisis response protocols and clearer data handling policies. No lawsuits. No regulatory actions.

Replika scores C (43/100), a 21-point improvement over Character AI. Replika has memory that persists across sessions and a more stable conversation experience. It addressed some early privacy issues and has no teen-related lawsuits.

Neither app was designed specifically for minors, and that’s worth being honest about. No mainstream AI companion app was built with teenagers as the primary audience. What separates the safer options from Character AI is simpler than you’d think: no teen deaths, no lawsuits involving children, and data policies that are at least readable.

Our best AI companion apps for teens page compares all scored apps specifically through a teen-safety lens. For a broader look at keeping teenagers safe across all AI companion apps, see the parent safety guide.

Frequently Asked Questions

Is Character AI safe for 13 year olds?

No. Character AI banned open-ended chats for users under 18 in October 2025 after two teen suicides were linked to the platform. According to CNBC reporting, teens can still access limited features like video and story creation, but not the core chatbot. The minor safety score is 8/100 in our safety review. Four families have filed lawsuits alleging harm to children ages 9 through 17.

Can parents monitor their teen’s Character AI use?

Character AI launched parental activity reports in late 2025 that let parents see a summary of their teen’s usage. According to Character AI’s support documentation, parents link their account to their teen’s account and receive notifications. The catch: the teen must voluntarily connect the accounts. A teenager who doesn’t want monitoring can simply not link, or can create an account with a false birthdate that bypasses teen restrictions entirely.

Did Character AI ban minors?

Partially. In October 2025, Character AI blocked users under 18 from open-ended chatbot conversations. According to NPR’s coverage, teens can still use the platform for video creation, stories, and streams. The company also introduced a two-hour daily time limit for minors accessing the remaining features. The ban came after two teen deaths, four family lawsuits, an FTC probe, and a 42-attorney-general warning letter.

What age is Character AI appropriate for?

Character AI’s own policies now restrict the core chat feature to users 18 and older. Before the October 2025 ban, no meaningful age restriction existed. Our Character AI review details the full feature set and safety record. Given the F safety grade (22/100) and the documented harms to minors, we don’t consider the platform appropriate for teenagers even with the current restrictions in place.

Should I let my teenager use Character AI?

Based on the evidence, no. Character AI has the worst documented safety record of any AI companion app when it comes to minors. Two teen deaths linked to the platform, four lawsuits, an FTC investigation, and a state-level prosecution. The parental controls added after these incidents are limited and easy to bypass. If your teen wants AI companion features, safer alternatives exist. See our best AI companion for teens recommendations.

Is Character AI worse than other AI companion apps?

On teen safety specifically, yes. No other AI companion app has documented teen deaths, wrongful death lawsuits involving minors, or state-level prosecution related to child safety. Three apps score lower overall on safety (Chai AI F/18, Romantic AI F/13, Eva AI F/10), but none have Character AI’s specific pattern of harm to teenagers. Our general Character AI safety FAQ covers broader safety comparisons across all user types.