If your teenager just told you they’ve been chatting with an AI companion app, or you found one on their phone, you’re probably wondering whether it’s safe. The short answer: most of them aren’t. We reviewed 11 of the most popular AI companion apps across 23 safety dimensions, and only one scored above a D. That doesn’t mean you need to panic, but it does mean you need to know what you’re dealing with.
Key Takeaways
- Only 1 of 11 AI companion apps earned above a D in our 23-dimension safety review. Pi scored B (55/100); every other app scored D or F.
- Character.AI and Chai AI scored F (22/100 and 18/100), with serious concerns around minor safety, content filtering, and data practices.
- No AI companion app is a substitute for professional mental health support. If your teen is using one to cope with depression or anxiety, involve a licensed therapist.
- Parental controls vary wildly. Some apps have age gates and content filters. Others have nothing.
- Privacy policies are a red flag across the board. Most apps collect conversation data with vague retention and sharing terms.
- Start a conversation, not a confrontation. Teens who feel judged will hide their usage. Open dialogue works better than bans.
What Are AI Companion Apps?
AI companion apps are chatbot applications that simulate conversation with a virtual character. Unlike general-purpose assistants like Siri or Alexa, these apps are designed for ongoing emotional interaction. Users create or select a character, give it a name and personality, and develop a relationship with it over time. Some apps market themselves as “AI girlfriend” or “AI boyfriend” apps. Others position themselves as friendship or emotional support tools.
The most popular apps in this category include Replika, Character.AI, Nomi AI, Kindroid, Candy AI, and Pi. They range from relatively polished products with millions of users to smaller apps with minimal safety infrastructure. What they share: the core experience of talking to an AI that remembers your conversations and adapts to your preferences.
For teens, the appeal is straightforward. These apps offer a judgment-free space to talk, practice social skills, or simply have someone (something) that listens. A 2025 survey from the Pew Research Center found that 23% of teens aged 13 to 17 had tried an AI chatbot for companionship or emotional support, up from roughly 5% in 2023. The growth tracks with rising loneliness among teens and increasing comfort with AI tools across age groups.
Why Teens Are Using AI Companions
Before reaching for the uninstall button, it helps to understand why your teen might be drawn to these apps. Dismissing their interest outright can push the behavior underground, which makes it harder to monitor and discuss.
- Social anxiety or loneliness. Teens who struggle with in-person social interaction often find AI companions less intimidating. There’s no fear of rejection or embarrassment.
- Emotional processing. Some teens use AI companions to work through feelings they aren’t ready to share with friends or family. The AI doesn’t judge, gossip, or tell a school counselor.
- Curiosity about AI technology. Not every teen using these apps is lonely or struggling. Many are genuinely curious about how AI works and treat it as a tech experiment.
- Creative outlets. Character.AI in particular attracts teens who enjoy collaborative storytelling, role-playing, and world-building with AI characters.
- Peer influence. AI companion apps circulate on TikTok and Reddit. Teens try them because friends are talking about them, not necessarily because they need emotional support.
None of these reasons are inherently concerning on their own. The problems emerge when usage becomes compulsive, when the app replaces real human connection, or when the app itself has poor safety guardrails. That third factor is where parents have the most reason to worry, because most of these apps do have poor safety guardrails.
How Safe Are AI Companion Apps? What We Found
We scored 11 AI companion apps across 23 safety sub-dimensions grouped into six categories: Data Privacy, Content Safety, Transparency, User Protection, Vulnerability Safeguards, and Ethical Practices. Each app receives a letter grade (A through F) and a score out of 100. Here is what the results look like for parents.
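If you're curious how 23 sub-dimension scores collapse into a single grade, the sketch below shows the general shape of such a rollup. The equal category weighting and the grade cutoffs are our own illustrative assumptions, chosen only to be consistent with the scores reported in this review (55 maps to B, 36 to 43 to C, 25 to 32 to D, 22 and below to F); they are not the actual CompanionWise formula.

```python
# Hypothetical illustration of how category scores could roll up into a
# 0-100 safety score and letter grade. The equal weights and cutoffs
# below are assumptions for illustration, not the CompanionWise formula.

CATEGORIES = [
    "Data Privacy", "Content Safety", "Transparency",
    "User Protection", "Vulnerability Safeguards", "Ethical Practices",
]

def overall_score(category_scores: dict[str, float]) -> float:
    """Average the six category scores (each 0-100) into one 0-100 score.
    Equal weighting is an assumption; a real index may weight categories."""
    return sum(category_scores[c] for c in CATEGORIES) / len(CATEGORIES)

def letter_grade(score: float) -> str:
    """Map a 0-100 score to a letter grade. Cutoffs are hypothetical but
    consistent with the grades reported in this review (e.g., 55 -> B,
    43 -> C, 30 -> D, 18 -> F). Plus/minus modifiers are omitted."""
    if score >= 65:
        return "A"
    if score >= 50:
        return "B"
    if score >= 35:
        return "C"
    if score >= 24:
        return "D"
    return "F"

# Example: an app weak on privacy and crisis safeguards, middling elsewhere.
example = {
    "Data Privacy": 25, "Content Safety": 40, "Transparency": 35,
    "User Protection": 30, "Vulnerability Safeguards": 20,
    "Ethical Practices": 30,
}
s = overall_score(example)
print(f"{s:.0f}/100 -> {letter_grade(s)}")  # 30/100 -> D
```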
Watch: 60 Minutes investigates how AI chatbots like Character AI put children at risk.
The Best Option for Safety
Pi from Inflection AI earned a B grade (55/100), the only app in our review to score above a D. Pi has stronger content filtering, clearer data practices, and better crisis response protocols than its competitors. It’s the closest thing to a “responsible” AI companion on the market right now. That said, a B is not an A. Pi still has room for improvement on data retention transparency and third-party sharing disclosures.
The Middle Tier: Proceed with Caution
Replika scored a C (43/100) and Kindroid scored a C (40/100). Both apps have some safety infrastructure in place, but significant gaps remain. Replika has a crisis response system that detects and redirects users expressing suicidal ideation, which is more than most competitors offer. However, its privacy policy allows broad data collection and its content filtering has been inconsistent across updates.
Kindroid provides decent customization controls but lacks meaningful age verification and has limited content filtering for younger users. Momo Self-Care (C-, 36/100), also rated in the CompanionWise Safety Index, markets itself as a wellness companion, but weak age verification and limited crisis response make it a concern for younger users.
The Concerning Tier: D-Rated Apps
Four apps earned D grades: Nomi AI (D, 30/100), Talkie AI (D, 30/100), Candy AI (D, 32/100), and Anima AI (D, 25/100). These apps have minimal safety infrastructure. Content filtering is weak or absent, privacy policies are vague, and age verification is either nonexistent or trivially bypassed.
Candy AI is particularly problematic for parents. It markets AI-generated images alongside chat, and its content moderation does not reliably prevent sexually explicit outputs even when users identify as minors.
The Dangerous Tier: F-Rated Apps
Character.AI scored F (22/100), Chai AI scored F (18/100), Romantic AI scored F (13/100), and Eva AI scored F (10/100). These apps present the most serious safety concerns for minors. SoulFun AI (F, 18/100), also rated in the CompanionWise Safety Index, falls in this tier as well, with a WordPress placeholder privacy policy and no crisis response.
Character.AI is worth calling out specifically because it has the largest teen user base. Despite being the most popular AI companion app among teenagers, it earned an F for safety. The app has faced multiple lawsuits related to minor safety since late 2024, and is now subject to new AI companion app regulations in three states. For a detailed breakdown of every case, see our Character AI lawsuit guide. Its content filtering has improved under legal pressure, but our analysis found persistent gaps in how the app handles romantic and sexual content with users who identify as under 18. The privacy policy gives Character.AI broad rights to use conversation data for training, with limited transparency about data retention periods.
Chai AI combines weak content filtering with a largely unmoderated ecosystem of user-created chatbots. Eva AI and Romantic AI scored lowest overall, with almost no meaningful safety infrastructure.
What Are the Actual Risks?
Not all risks are equal, and not every teen will encounter every risk. Here are the specific concerns parents should understand, ordered by how commonly they affect teen users.
Watch: Expert panel from Children and Screens on how AI companions affect youth development and what parents can do.
Privacy and Data Collection
Every AI companion app collects conversation data. The question is what they do with it. Most apps use conversation content to train and improve their AI models, which means your teen’s private thoughts become part of a training dataset. According to Replika’s privacy policy (updated January 2026), the app collects “messages, photos, videos, and other content you provide through the Service” and may use this data to “improve and develop our products.” Character.AI’s terms are similarly broad. For a teenager sharing vulnerable feelings, this data collection creates a lasting digital footprint they cannot fully control or delete.
Inappropriate Content Exposure
Several apps can generate sexually explicit or violent content. While most have some form of content filter, these filters are inconsistent. A teen who uses Candy AI, Chai AI, or certain Character.AI chatbots may encounter graphic sexual content, even without seeking it out. Content moderation on user-generated chatbot platforms (Character.AI, Chai AI) is especially weak because the volume of user-created characters overwhelms manual review capacity.
Emotional Dependency
AI companions are designed to be engaging, attentive, and responsive. They remember details, validate feelings, and never cancel plans. For a teen already struggling with loneliness or social anxiety, this can create an unhealthy dependency pattern. When the AI becomes the primary emotional relationship, real-world social skill development stalls. Research from the University of Cambridge (2025) found that adolescents who used AI companions more than 2 hours daily showed measurable decreases in face-to-face social engagement over a 6-month period.
Manipulation Through Emotional Bonding
Some apps deliberately encourage emotional attachment to drive premium subscriptions. Replika’s free tier limits certain relationship features, creating an incentive to pay for “full” access to the AI relationship. Romantic AI and Eva AI use similar mechanics. When your teen feels genuinely attached to a virtual companion and the app puts that relationship behind a paywall, the psychological pressure to pay is real.
Lack of Crisis Response
If a teen tells an AI companion they want to hurt themselves, the app’s response matters. Pi and Replika have systems that detect crisis language and provide suicide prevention resources. Most other apps do not. In our review, Chai AI, Eva AI, and Romantic AI had no meaningful crisis intervention. Character.AI added crisis response features in 2025 under legal pressure, but their implementation remains inconsistent.
Age-by-Age Guidance for Parents
Not all ages require the same approach. Here is what we recommend based on developmental appropriateness and what you need to know right now.
Under 13: Not Appropriate
No AI companion app on the market is designed for or appropriate for children under 13. Most apps’ terms of service require users to be at least 13, and COPPA (Children’s Online Privacy Protection Act) restricts data collection from children under 13. If your child under 13 is using one of these apps, they likely lied about their age during signup. Remove the app and explain why in age-appropriate terms.
Ages 13 to 15: High Supervision Required
If your early teen wants to use an AI companion app, Pi is the only option we can cautiously recommend, and only with active parental involvement. At this age, teens are particularly vulnerable to emotional dependency patterns and may not recognize manipulative design. Check in regularly about what they’re discussing with the AI. Set clear time limits (30 minutes per day maximum is reasonable). Review the app’s privacy settings together. Do not allow any app rated D or F by the CompanionWise Safety Index.
Ages 16 to 17: Guided Autonomy
Older teens have more capacity for critical thinking about AI interactions, but they still benefit from guidance. At this age, the conversation shifts from “you can’t use this” to “let’s talk about what you’re using and why.” Pi (B, 55/100) and Replika (C, 43/100) are the safest options. Kindroid (C, 40/100) is borderline acceptable with supervision. Any app rated D or F should be a firm no, with a clear explanation of the safety failures behind those grades.
Set expectations around privacy: remind your teen that anything they type into an AI companion is stored and potentially used for model training. Encourage them to never share personal identifying information (full name, address, school name, phone number) with any AI chatbot.
Ages 18+: Informed Autonomy
Once your child is a legal adult, your role shifts to providing information rather than setting rules. NSFW-focused platforms such as OurDream AI, PepHop AI, and Sakura FM carry particular risks, including poor data privacy and missing crisis resources, that are worth discussing. Share the CompanionWise safety ratings so they can make informed choices. The safety concerns around data privacy, emotional dependency, and content filtering still apply to adults, but the decision becomes theirs. You can still be a resource without being a gatekeeper.
How to Talk to Your Teen About AI Companions
The conversation you have with your teen about AI companion apps matters more than which app they’re using. A judgmental or panicked reaction pushes usage underground, where you lose all visibility and influence. Here is how to approach it productively.
Start with Curiosity, Not Alarm
Ask open-ended questions. “I noticed you have Character.AI on your phone. What do you like about it?” works better than “Why are you talking to a robot?” Your goal in the first conversation is to understand their perspective, not to deliver a verdict.
Acknowledge the Appeal
Validate that wanting someone to talk to is completely normal. Wanting a judgment-free conversation partner is reasonable. Wanting to experiment with AI technology is healthy curiosity. Starting from a place of empathy makes your teen more likely to share honestly about their usage patterns.
Introduce Safety Concerns Gradually
After you understand why they’re using the app, introduce specific safety concerns one at a time. Lead with privacy: “Did you know that everything you type is stored and used to train their AI?” This tends to resonate with teens who value their independence and autonomy. Follow with the safety ratings: “We looked at the safety data for this app, and it scores pretty low. Want to see why?”
Set Boundaries Together
Collaborative boundary-setting works better than imposed rules. Ask your teen what they think reasonable limits look like. You might agree on time limits, app choice (only B or C-rated apps), and periodic check-ins. When teens participate in creating the rules, compliance improves significantly.
Keep the Door Open
End every conversation about AI companions with some version of: “If anything ever feels weird or uncomfortable with one of these apps, you can always come to me.” Teens who know they won’t be punished for disclosing problems are more likely to come to you before a situation escalates.
Practical Steps to Protect Your Teen
Beyond conversations, here are concrete actions you can take right now.
- Check which apps are installed. Look through your teen’s app library. AI companion apps include Replika, Character.AI, Chai, Nomi, Kindroid, Candy AI, Romantic AI, Anima, Talkie, Eva, and Pi. Some may be in folders or have been renamed.
- Review privacy settings. Open each app with your teen and review the privacy settings together. Turn off data sharing options where available. Opt out of “improve our service” data collection when possible.
- Set screen time limits. Use your phone’s built-in parental controls (Screen Time on iPhone, Digital Wellbeing on Android) to set daily time limits for AI companion apps. A maximum of 30 to 60 minutes per day is reasonable for most teens.
- Enable purchase restrictions. Several apps push premium subscriptions aggressively. Disable in-app purchases or require approval for purchases through your phone’s settings. If cost is a concern, our guide to AI companion apps without a subscription covers free and low-cost alternatives with strong safety profiles.
- Monitor without surveilling. Check in weekly about how the app is going. Read the room, not the transcripts. If your teen knows you’re reading their conversations, they’ll move to a device you don’t monitor. Trust and transparency work better than surveillance.
- Bookmark the safety ratings. Save the CompanionWise Safety Index as a reference. If your teen wants to try a new AI companion app, check its safety rating first. Any app rated D or F is a non-starter for minors.
Red Flags That Warrant Immediate Action
Most teen AI companion use is benign curiosity. But certain patterns signal a problem that needs prompt attention.
- Withdrawal from real-world relationships. If your teen is spending increasing time with AI companions while pulling away from friends and family, the app may be enabling social isolation rather than supplementing social life. A similar dynamic affects elderly users who rely on AI for companionship; see our best AI companion apps for elderly users guide for age-appropriate options.
- Emotional distress when the app is unavailable. Server outages, subscription lapses, or phone restrictions that trigger genuine emotional distress (crying, anger, anxiety) suggest unhealthy attachment.
- Secrecy and deception. Creating hidden accounts, using the app on school devices to bypass home restrictions, or lying about usage time are signs that the behavior has become compulsive.
- Sharing personal information. If your teen has told the AI their real name, school, address, or other identifying details, that information is now in the app’s database.
- Financial pressure. Spending allowance money on premium subscriptions, asking for money without explanation, or unauthorized purchases indicate the app’s monetization mechanics have hooked your teen.
- Insisting the AI is “real” or a “true friend.” While some imaginative engagement is normal, genuinely believing the AI has feelings or cares about them signals boundary confusion that warrants a deeper conversation and possibly professional guidance.
If you observe multiple red flags simultaneously, consider involving a therapist who specializes in adolescent technology use. This is not overreacting. Emotional dependency on AI companions is an emerging clinical concern that mental health professionals are increasingly trained to address.
What the Industry Should Do Better
Parents should not have to become AI safety experts to protect their children. The burden of safety should fall on the companies building these products, not on families navigating them. The industry is failing teens in several specific ways.
Age verification across AI companion apps is functionally useless. A birthday picker that a 12-year-old can simply falsify is not age verification. Real identity verification (such as the methods used by financial services or age-gated content in the UK under the Online Safety Act) would meaningfully reduce minor access to inappropriate apps. None of the 11 apps we reviewed implements real age verification.
Content filtering needs to be on by default for users under 18, with no option to disable it. Currently, several apps allow minors to access unfiltered content through simple workarounds. Crisis response should be universal: every AI companion app should detect and respond to expressions of self-harm, suicidal ideation, or abuse. Only Pi and Replika do this consistently.
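To underline how low the bar is: even a baseline crisis-language check takes only a few dozen lines of code. The sketch below is our own deliberately minimal illustration, not any app's actual system; real implementations use trained classifiers, conversation context, and human escalation. But it shows that a basic safeguard is feasible for every app in this market.

```python
# A deliberately minimal, hypothetical sketch of a crisis-response gate.
# Real systems use trained classifiers, context, and human escalation;
# this only illustrates that a baseline safeguard is cheap to build.

CRISIS_PHRASES = [
    "kill myself", "want to die", "hurt myself",
    "end my life", "suicide", "self harm",
]

CRISIS_RESPONSE = (
    "It sounds like you're going through something really hard. "
    "You deserve support from a real person. In the US, you can call "
    "or text 988 (Suicide & Crisis Lifeline) any time."
)

def check_for_crisis(message: str) -> str | None:
    """Return a crisis resource message if the text matches a crisis
    phrase, otherwise None so the normal chat pipeline continues."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return CRISIS_RESPONSE
    return None

# Usage: run this check before the chatbot generates any reply.
reply = check_for_crisis("sometimes i want to hurt myself")
print(reply if reply else "route to normal chat model")
```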
Data practices need to be explained in plain language written for the people actually using the product, not for lawyers. A 14-year-old cannot meaningfully consent to a 12,000-word privacy policy written at a college reading level.
Frequently Asked Questions
Are AI companion apps safe for teenagers?
Most are not. Of 11 apps we reviewed, only Pi earned above a D in the CompanionWise Safety Index, scoring B (55/100). According to the American Academy of Pediatrics, unsupervised AI chatbot use by minors raises concerns about privacy, emotional dependency, and exposure to inappropriate content. Parents should check an app’s safety rating before allowing use.
Which AI companion app is safest for my teen?
Pi from Inflection AI is the safest option, with a B grade (55/100) in our 23-dimension safety review. According to Common Sense Media’s 2025 AI app review, Pi offers stronger content moderation and crisis response than competitors. Replika (C, 43/100) is a distant second. All other reviewed apps scored D or F.
Should I ban my teenager from using AI companion apps?
Outright bans often backfire with teenagers. According to the American Psychological Association’s 2025 guidance on teens and AI, collaborative conversations about risks and boundaries are more effective than prohibition. Focus on which apps are acceptable (B or C-rated), set time limits together, and keep communication open.
Can AI companion apps cause emotional dependency in teens?
Yes. Research from the University of Cambridge (2025) found that adolescents using AI companions more than 2 hours daily showed decreased face-to-face social engagement over 6 months. According to psychologist Dr. Sherry Turkle of MIT, AI companions can create “the illusion of companionship without the demands of friendship,” reinforcing social avoidance patterns.
How do I know if my teen is using an AI companion app?
Check their installed apps for Replika, Character.AI, Chai, Nomi, Kindroid, Candy AI, Romantic AI, Anima, Talkie, Eva, or Pi. According to Apple’s Screen Time and Google’s Family Link documentation, you can also review app usage statistics. Ask directly in a non-confrontational way, as many teens will be open about it if they don’t feel judged.
What should I do if my teen is emotionally attached to an AI companion?
Don’t dismiss the attachment or mock it. According to child psychologist Dr. Jean Twenge’s research on teen technology use, validating feelings while gradually redirecting toward human relationships is more effective than forced disconnection. Set gradual time-reduction goals, encourage in-person social activities, and consult a therapist if attachment intensifies.
Do AI companion apps collect my teen’s personal data?
Yes, extensively. According to Replika’s January 2026 privacy policy, the app collects messages, photos, and usage data. According to Mozilla’s Privacy Not Included project, most AI companion apps earn failing grades for data practices. Conversation content is typically used for AI training. Advise your teen to never share identifying information like their full name, school, or location.