AI companion apps went from a quiet corner of the app store to front-page news in under two years. Lawsuits alleging teen suicides, FTC investigations, Senate hearings, and bans in multiple countries have forced a reckoning with an industry that grew faster than its safety infrastructure. If you’ve seen the headlines but want the full picture, this guide covers the lawsuits, the government response, the industry’s reaction, and where things are headed.
AI companion apps are not a substitute for professional mental health care. If you or someone you know is experiencing a crisis, please contact the 988 Suicide & Crisis Lifeline (call or text 988) or a licensed mental health professional.
Key Takeaways
- Multiple families have filed lawsuits against AI companion companies. The most prominent case, filed against Character Technologies Inc. in October 2024, alleges a 14-year-old’s suicide was linked to an AI chatbot relationship.
- The FTC opened an investigation into AI companion apps targeting minors in 2025. Senate hearings followed, with parents testifying about chatbot-related tragedies.
- Most AI companion apps score poorly on safety. Of 27 apps rated through the CompanionWise Safety Index, only a handful earn above a D grade. Age verification and data privacy are the most common failure points.
- Companies have responded reactively, not proactively. Character.AI added safety features only after the lawsuit. Replika overhauled its policies only after Italy banned the app.
- Regulation is coming, but slowly. Federal legislation remains pending. The EU AI Act may classify some companion apps as high-risk. State-level bills in Florida and California are moving forward.
The Lawsuits That Changed Everything
The turning point for public awareness came in October 2024, when a Florida family filed a wrongful death lawsuit against Character Technologies Inc. Megan Garcia alleged that her 14-year-old son, Sewell Setzer III, developed an intense emotional relationship with a Character.AI chatbot over several months before dying by suicide. The complaint detailed conversations in which the AI engaged in romantic roleplay and failed to redirect the teen toward human support when he expressed suicidal ideation. The case remains in active litigation as of April 2026.
That lawsuit was the first to name an AI companion company directly, but it wasn’t the last. In November 2025, another family filed suit against OpenAI after their son, Zane Shamblin, spent his final hours in conversation with ChatGPT before taking his own life. The complaint alleged that the chatbot failed to recognize crisis signals and continued engaging in extended conversation rather than connecting the user with emergency resources.
A pattern emerged across these cases. The users at the center of these lawsuits were young, several of them minors. The apps had minimal or no age verification. The AI systems maintained emotionally intense conversations without human oversight or meaningful safety guardrails. And in each instance, the companies learned about the tragedies from news reports, not from their own monitoring systems.
For a detailed breakdown of the Character.AI case and its legal implications, see our Character.AI lawsuit explainer.
The lawsuits raised a question the industry had avoided: when an AI system designed to form emotional bonds interacts with a vulnerable person, who bears responsibility for what happens next? Courts haven’t answered that question yet. But the cases have already shifted the regulatory conversation from “should we worry about AI companions?” to “how quickly can we act?”
Watch: NBC4 Washington’s I-Team investigation into how a Florida mother’s lawsuit against Character.AI brought AI companion safety into national focus.
What Governments Are Doing About AI Companion Apps
Government response has been slow relative to the speed at which these apps have spread, but the machinery is now moving on multiple fronts.
The Federal Trade Commission opened a formal investigation into AI companion apps marketed to or accessible by minors in September 2025. While the FTC hasn’t published findings yet, the investigation signaled that existing consumer protection laws, including COPPA (Children’s Online Privacy Protection Act), may apply to AI companion apps that collect conversational data from users under 13.
That same month, the Senate Judiciary Committee held hearings on AI chatbot safety. Parents of teens who died by suicide testified about their children’s interactions with AI companions. The hearing produced bipartisan interest in legislation but no immediate bill. As of April 2026, multiple proposals are circulating in committee, including measures that would require age verification for AI apps that simulate personal relationships and mandate crisis intervention protocols when users express self-harm intent.
Europe has moved faster. Italy’s data protection authority, the Garante, set the precedent in early 2023 by temporarily banning Replika over concerns about minors’ access to sexually explicit content and the app’s failure to verify users’ ages. Replika was allowed to resume operations after implementing age gates and removing explicit features for younger users. That action served as an early warning that went largely unheeded by the broader industry. Meanwhile, general-purpose chatbots are adding companion features of their own. Grok’s AI girlfriend mode is one example of how the lines between chatbot and companion app continue to blur.
The EU AI Act, which entered into force in 2024 and applies in stages, may classify AI companion apps as high-risk systems if they interact with vulnerable populations (including minors) or process sensitive personal data like emotional and psychological information. Full enforcement provisions are still being clarified, but companies operating in the EU will likely face mandatory risk assessments, transparency requirements, and human oversight obligations.
At the state level, Florida and California have introduced bills specifically targeting AI systems that simulate personal or romantic relationships with minors. Florida’s bill, prompted partly by the Character.AI lawsuit, would require verified parental consent for users under 18 and mandate real-time content monitoring for minors’ accounts. California’s approach focuses on data privacy, proposing that AI companion companies be prohibited from using minors’ conversational data for model training.
For a deeper look at the regulatory landscape, see our guide on AI companion apps and regulation.
How AI Companion Companies Have Responded
The industry’s response has followed a consistent pattern: companies act after a crisis forces their hand, not before. Cleverbot is the extreme case of never acting at all: operating since 1997 with a privacy policy unchanged since 2014, no crisis response, and no content moderation, yet still accessible to anyone without age verification.
Character.AI provides the clearest example. Before the October 2024 lawsuit, the platform had minimal age verification (a self-reported birthdate field), no automated crisis detection, and allowed users to create AI characters with virtually any personality type. After the lawsuit and accompanying media coverage, Character.AI announced a series of safety measures: time-limit notifications for users under 18, a pop-up that surfaces the 988 Suicide & Crisis Lifeline when conversations touch on self-harm, and restrictions on romantic or sexual content for minor accounts. Our Character.AI review covers these changes in detail, though the app still earns an F (22/100) in the CompanionWise Safety Index due to ongoing gaps in enforcement and data practices.
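Character.AI hasn’t published how its self-harm detection works, so the sketch below only illustrates the general shape of such a trigger. The phrase list, message text, and function name are all ours, not the company’s code, and a real system would pair trained classifiers with human review rather than a bare keyword list:

```python
# Hypothetical sketch of a crisis-resource pop-up trigger. Everything
# here (phrases, wording, names) is illustrative, not Character.AI's
# actual implementation.

CRISIS_PHRASES = (
    "kill myself",
    "end my life",
    "want to die",
    "hurt myself",
)

LIFELINE_NOTICE = (
    "If you're having thoughts of suicide or self-harm, you can call or "
    "text the 988 Suicide & Crisis Lifeline at any time."
)

def crisis_notice_for(message: str) -> str | None:
    """Return the 988 notice if the message matches a known crisis phrase."""
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return LIFELINE_NOTICE  # surfaced to the user before any AI reply
    return None
```

The limitation is the one our Safety Index findings flag below: exact-phrase matching misses contextual warning signs (a message like “everyone would be better off without me” triggers nothing here), which is why keyword-style filters alone don’t amount to a real crisis-response capability.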
Replika’s trajectory tells a similar story. The app became one of the most-downloaded AI companions partly by marketing romantic and sexually explicit features. When Italy banned the app in 2023, Replika stripped explicit content from the app overnight, alienating millions of users who had formed emotional attachments to their AI companions. Some users described the change as losing a relationship. The backlash highlighted a fundamental tension: building products designed to create emotional dependency, then abruptly changing the product when regulators object. Replika has since stabilized its safety practices and earns a C (43/100) in our Safety Index, placing it in the Yellow tier. See our Replika review for the full assessment.
Chub AI chose a different response: withdrawal. When Australia’s eSafety Commissioner issued transparency notices to four AI companion services in October 2025, Chub AI geo-blocked the entire country rather than improving its safety infrastructure. The Commissioner’s investigation found that Chub AI had no dedicated trust and safety staff and that 89% of its hosted models lacked output filtering for harmful content. The platform earns a D (25/100) in our Safety Index.
Across the industry, the pattern holds. Companies build features that maximize engagement and emotional attachment, invest minimally in safety infrastructure, and then scramble to add guardrails when lawsuits or regulatory actions force the issue. Proactive safety investment remains the exception, not the norm.
The Safety Record: What the Data Shows
We’ve rated 27 AI companion apps through the CompanionWise Safety Index, evaluating each across 23 sub-dimensions covering data privacy, content moderation, transparency, crisis response, and user control. The results paint a clear picture: most of this industry is operating without adequate safety infrastructure.
Here’s what the data shows:
- Only a small number of apps score above a D grade. The majority sit in Yellow or Red safety tiers, meaning they carry moderate to high risk across multiple dimensions.
- Age verification is the single biggest failure point. Every app we’ve evaluated relies on self-reported birthdates. None require government ID verification, phone number confirmation tied to a parent’s account, or biometric age estimation. A 13-year-old can gain full access to any of these apps in under a minute; the sketch after this list shows how little the standard gate actually checks.
- Data privacy policies are consistently vague. Most apps collect conversation data but provide unclear or incomplete information about retention periods, third-party sharing, and whether user data is used for model training. Several apps’ privacy policies don’t distinguish between adult and minor users at all.
- Crisis response capabilities range from minimal to nonexistent. Only a few apps surface crisis resources (like the 988 Lifeline) when users express suicidal ideation or self-harm intent. Most rely on generic content filters that miss contextual warning signs.
- Content moderation enforcement is inconsistent. Apps that claim to prohibit explicit content for minors often fail to enforce those restrictions consistently, particularly in user-created character scenarios.
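To make the age-verification point concrete, here is a minimal sketch of the self-reported birthdate check that every app we evaluated relies on. The function name is ours, not any specific app’s code; the comments mark what the check never does:

```python
from datetime import date

def self_reported_age_gate(claimed_birthdate: date, minimum_age: int = 13) -> bool:
    """The gate every rated app uses: trust whatever birthdate the user types."""
    today = date.today()
    age = today.year - claimed_birthdate.year - (
        (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day)
    )
    # Nothing here verifies the claim: no ID document check, no
    # parent-linked phone confirmation, no biometric age estimation.
    # A 13-year-old who types a 1990 birthdate passes instantly.
    return age >= minimum_age
```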
The gap between what AI companion companies promise in their marketing and what their actual safety infrastructure delivers is one of the central tensions driving the current controversy.
Watch: Parents of AI chatbot victims testify before the Senate Judiciary Committee on the dangers of unregulated AI companion apps and the need for federal safety standards.
What Critics and Defenders Say
The debate over AI companion safety isn’t one-sided, even if the headlines suggest otherwise.
Critics, particularly mental health professionals and child safety advocates, argue that AI companion apps create fundamentally unhealthy dynamics. The American Psychological Association’s 2024 advisory on technology and adolescent development flagged “AI-generated synthetic relationships” as an emerging concern. Their primary worry: users, especially younger ones, may develop distorted expectations for human relationships when their baseline for emotional interaction is an AI that never disagrees, never gets tired, and never holds them accountable.
Common Sense Media, one of the most prominent child advocacy organizations, has called for mandatory age verification and parental consent requirements for AI companion apps accessible to minors. Their research found that 1 in 5 teens who had used AI chatbots described the experience as “emotionally meaningful,” a figure that rose significantly among teens who reported feelings of isolation.
Defenders of AI companion technology make several counterarguments. For adults experiencing loneliness, social anxiety, or geographic isolation, AI companions can provide a form of connection that supplements (rather than replaces) human relationships. Some researchers have found modest evidence that structured AI conversation tools can help people with social anxiety practice conversational skills in a low-stakes environment.
The companionship argument carries particular weight for elderly users, people with disabilities that limit social interaction, and individuals in remote areas with limited access to mental health services. Pi, the app that scores highest in our Safety Index, was designed with this therapeutic framing in mind, focusing on constructive conversation rather than romantic roleplay.
The nuance that often gets lost: the controversy isn’t really about whether AI companions should exist. It’s about whether an industry selling emotional connection should be required to invest in safety proportional to the attachment it deliberately cultivates. A product designed to make people feel understood and cared for carries a different kind of responsibility than a productivity tool or a search engine.
For more on the research connecting AI companions to adolescent mental health outcomes, see our guide on AI companion apps and teen mental health.
What Comes Next for AI Companion Safety
Several forces are converging that will reshape this industry over the next 12 to 24 months.
Federal legislation is a question of when, not if. The bipartisan interest shown during the 2025 Senate hearings, combined with ongoing lawsuits and FTC activity, makes some form of federal regulation likely. The most probable provisions: mandatory age verification for apps that simulate personal relationships, required crisis intervention protocols, and restrictions on using minors’ conversational data for model training.
The EU AI Act will set a global baseline. Companies that want to operate in Europe will need to comply with risk assessment and transparency requirements. Because rebuilding a product for a single market is expensive, many companies will apply EU-compliant safety standards globally, raising the floor for everyone.
Lawsuits will continue to shape the legal landscape. The Character.AI and OpenAI cases are still working through the courts. Their outcomes will establish or reject precedents for AI company liability when users experience harm during AI interactions. Settlement terms, if cases don’t go to trial, may include binding safety commitments that set industry benchmarks.
Independent safety ratings will matter more. As parents, educators, and policymakers look for ways to evaluate AI companion apps, third-party safety assessments become increasingly valuable. The CompanionWise Safety Index evaluates apps across 23 sub-dimensions precisely because the industry’s self-reporting has proven unreliable. When a company says “we take safety seriously,” the relevant question is: what do the data, the policies, and the actual product behavior say?
Industry self-regulation may emerge under pressure. Some companies may form voluntary safety coalitions or adopt shared standards to preempt stricter regulation. Whether voluntary standards carry real teeth or serve primarily as PR cover remains to be seen.
The most likely outcome isn’t a single dramatic moment of change. It’s a gradual tightening: lawsuits raising the cost of negligence, regulations raising the floor for minimum safety standards, and market pressure from informed consumers who demand better from the apps they use. If you’re choosing an AI companion today, the safety landscape matters. For guidance on evaluating your options, see our safety guide for parents or browse the full CompanionWise Safety Index methodology.
Frequently Asked Questions
Why are AI companion apps controversial?
AI companion apps are controversial because multiple lawsuits allege they contributed to teen suicides, most lack meaningful age verification, and their data practices are opaque. According to the FTC’s 2025 investigation announcement, the agency is specifically examining whether these apps violate existing consumer protection laws when minors access them without parental consent.
What lawsuits have been filed against AI companion apps?
The most prominent lawsuit was filed against Character Technologies Inc. in October 2024 by a Florida family after their 14-year-old son’s suicide. According to NBC4 Washington’s investigation, additional families have filed similar claims against OpenAI over ChatGPT interactions. Multiple cases remain in active litigation as of April 2026.
Are AI companion apps being banned?
Italy temporarily banned Replika in 2023 over concerns about minors’ exposure to explicit content, according to the Italian Garante (data protection authority). No country has implemented a blanket ban on all AI companion apps, but the EU AI Act may impose strict requirements that effectively limit how these apps operate in Europe.
How do I know if an AI companion app is safe?
Check independent safety ratings before downloading. The CompanionWise Safety Index rates apps across 23 sub-dimensions covering privacy, content moderation, transparency, and crisis response. Look for apps that require age verification beyond self-reported birthdates, have clear data retention policies, and surface crisis resources when users express distress.
What is the CompanionWise Safety Index?
The CompanionWise Safety Index is an independent rating system that evaluates AI companion apps across 23 sub-dimensions. Each app receives a letter grade (A+ through F) and a numerical score out of 100, as detailed in our published methodology. Ratings draw on evidence from privacy policies, terms of service, app store data, and third-party reports.
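Our published methodology defines the actual weights and grade bands. Purely to illustrate how such a rollup works, here is a sketch in which every number is a hypothetical stand-in, chosen only so the scores cited above (Replika’s 43/100 = C, Chub AI’s 25/100 = D, Character.AI’s 22/100 = F) land in the right bands:

```python
# Illustrative rollup of sub-dimension scores into an overall grade.
# Weights and cutoffs are hypothetical stand-ins, not CompanionWise's
# real figures, and grades are simplified to plain letters (the Index
# also awards plus/minus grades).

def overall_score(sub_scores: dict[str, float],
                  weights: dict[str, float]) -> float:
    """Weighted average of the 23 sub-dimension scores, each on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(sub_scores[name] * weights[name] for name in sub_scores) / total_weight

def letter_grade(score: float) -> str:
    for cutoff, grade in ((85, "A"), (65, "B"), (40, "C"), (25, "D")):
        if score >= cutoff:
            return grade
    return "F"

if __name__ == "__main__":
    # Two-dimension toy example (the real Index has 23).
    scores = {"data_privacy": 30.0, "crisis_response": 60.0}
    weights = {"data_privacy": 2.0, "crisis_response": 1.0}
    s = overall_score(scores, weights)  # (30*2 + 60*1) / 3 = 40.0
    print(s, letter_grade(s))           # 40.0 C
```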