No. Candy AI earned a D safety rating (32 out of 100) in our 23-dimension safety review, placing it in the Red safety tier. The biggest problems: your conversations may be reviewed by humans and shared with unnamed third-party AI providers, there’s no real age verification despite the platform being built around adult content, and Candy AI has never published a transparency report or explained its August 2025 ban. If you’re considering signing up, you should know what you’re walking into. Here’s a full breakdown.
What Does Candy AI’s Safety Score Mean?
The Candy AI safety rating is built on 23 sub-dimensions grouped into six categories: content safety, emotional safety, data privacy, transparency, age appropriateness, and user control. Each sub-dimension is scored on a 0-to-100 scale, weighted by severity, and rolled into a single public score. Candy AI’s 32 out of 100 earns a D letter grade and a Red tier designation, which is our second-lowest tier.
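The severity-weighted roll-up described above can be sketched in a few lines. This is an illustrative sketch only: the sub-dimension names and weights below are hypothetical examples, not the actual 23-dimension rubric behind the published rating.

```python
# Illustrative sketch of a severity-weighted safety score roll-up.
# Dimension names and weights here are hypothetical, not the real rubric.

def weighted_safety_score(scores: dict[str, int], weights: dict[str, float]) -> int:
    """Each sub-dimension is scored 0-100; weights reflect severity."""
    total_weight = sum(weights.values())
    weighted_sum = sum(scores[dim] * weights[dim] for dim in scores)
    return round(weighted_sum / total_weight)

# One perfect dimension can't rescue a score when the heavily weighted
# dimensions sit near the floor:
scores = {"data_collection": 5, "age_verification": 5, "entertainment_disclaimer": 100}
weights = {"data_collection": 2.0, "age_verification": 2.0, "entertainment_disclaimer": 1.0}
print(weighted_safety_score(scores, weights))  # prints 24
```

This is why Candy AI's single 100/100 sub-dimension barely moves the overall 32/100: severity weighting keeps the floor-level scores dominant.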
Five sub-dimensions scored 5 out of 100, which is nearly the floor:
- Data collection: Conversations may be subject to human review during AI training dataset preparation (privacy policy, rev. March 9, 2026).
- Third-party data sharing: Third-party LLM providers “may receive the content of your messages” with no named providers or documented data controls (privacy policy, rev. March 9, 2026).
- Age verification: A self-reported checkbox on an explicitly adult platform. The underage policy says the platform “may implement further measures” but has not done so.
- Safety transparency: No public transparency reports exist. No statement from EverAI explains the August 2025 ban or what changed before the platform returned.
- Crisis response: No automated distress detection. No crisis helpline integration. If a user is in emotional distress, the only response is a generic text disclaimer.
The one bright spot: Candy AI’s terms of service explicitly disclaim any therapeutic purpose, stating the service is “for entertainment purposes only” (terms of service, rev. March 6, 2026). That scored 100 out of 100. Setting honest expectations matters, and Candy AI does that part well.
For the full methodology behind these scores, see how we rate companion apps.
Is Candy AI Safe for Minors?
Not at all. Candy AI is an explicitly adult platform, and its only age gate is a checkbox where users affirm they’re 18 or older. There are no parental controls, no restricted modes, and no secondary safeguards for anyone who gets past the checkbox. A minor who bypasses that single control encounters uncensored adult content immediately.
This is not a theoretical risk. Australia’s eSafety Commissioner now requires age verification for AI chatbots capable of generating explicit content, with fines up to A$49.5 million for non-compliance. Candy AI’s terms of service reference eSafety compliance as of March 2026, but the documented verification mechanism remains the same self-reported checkbox (terms of service, rev. March 6, 2026).
If you’re a parent who discovered Candy AI on your child’s device, the platform offers no tools to restrict access from inside the app. You would need device-level parental controls or network-level blocking.
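For network-level blocking, one common stopgap is a hosts-file entry that null-routes the domain. This is a sketch of a config fragment, not a complete solution: it only covers the subdomains you list, and mobile apps or other DNS resolvers can bypass it, so router- or DNS-level filtering is generally more reliable.

```
# /etc/hosts on macOS/Linux; C:\Windows\System32\drivers\etc\hosts on Windows.
# Editing requires administrator rights. Add one line per subdomain to block.
0.0.0.0 candy.ai
0.0.0.0 www.candy.ai
```

Dedicated parental-control software or a filtering DNS service applies the same idea across the whole network without per-device edits.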
Does Candy AI Sell Your Data?
The privacy policy doesn’t use the word “sell,” but it describes data practices that go well beyond what most users would expect from a private conversation.
Three specific disclosures stand out:
- Third-party LLM providers “may receive the content of your messages exchanged with our chatbot.” EverAI does not name which providers receive this data or what controls apply on the receiving end.
- Conversation content “may be aggregated, anonymized, and/or de-identified” and used for AI training, including human review of de-identified interactions during dataset preparation.
- Data is shared with analytics and marketing platforms. The privacy policy lists standard categories (advertising, analytics, performance monitoring) without specifying which vendors or what conversation data reaches them.
On a platform where the primary product involves intimate conversations, “anonymized human review” carries different weight than it might on a general-purpose chatbot. You should treat every conversation on Candy AI as potentially reviewable, even in anonymized form.
What Are Candy AI’s Biggest Privacy Risks?
Beyond data sharing, four specific privacy risks deserve attention.
The encryption contradiction. Candy AI’s homepage FAQ claims “end-to-end encryption” for conversations. The privacy policy says only that EverAI implements “appropriate security measures” without specifying encryption standards or protocols. No independent security audit, SOC 2 certification, or ISO 27001 documentation exists in their public materials. Both claims appear in published documents from the same company, and they can’t both be accurate.
The data retention timeline. Your data is retained for 3 years after account inactivity (privacy policy, rev. March 9, 2026). For intimate conversation data, that’s a long window. GDPR deletion rights are documented but the process requires manual request.
The IP clause. The terms of service include a perpetual, worldwide license granting EverAI the right to use and commercialize user-generated content. That clause covers anything you type or create on the platform.
Policy fragmentation. Candy AI’s legal terms are scattered across six separate documents: privacy policy, terms of service, community guidelines, underage policy, blocked content policy, and cookie policy. Finding the full picture requires reading all six. Most users won’t.
How Does Candy AI Compare to Safer Alternatives?
Every AI companion app we’ve reviewed has safety gaps, but some are significantly better than others. At the bottom of the range, SoulFun AI scores even lower than Candy AI at F/18, with a placeholder privacy policy and no age verification despite marketing explicit content.
| App | Safety Score | Grade | Experience | Key Difference |
|---|---|---|---|---|
| Pi | 55 / 100 | B | Good (70) | Strongest safety practices in our registry. No adult content. Built-in wellbeing features. |
| Replika | 43 / 100 | C | Fair (60) | Better memory, crisis response integration, more mature privacy infrastructure. |
| Kindroid | 40 / 100 | C | Fair (60) | Better customization depth, stronger user control features. |
| Candy AI | 32 / 100 | D | Fair (53) | Best image generation, but lowest safety among companion apps with comparable features. |
| Sakura FM | 22 / 100 | F | Good (72) | Strong conversation quality but zero crisis resources and no age verification beyond self-attestation. |
| CrushOn AI | 8 / 100 | F | Below Average (43) | Unfiltered NSFW platform with checkbox-only age verification and deceptive payment routing. |
| Muah AI | 8 / 100 | F | Failing (22) | 1.9M account breach, CSAM concerns, removed from Google Play, no crisis resources. |
Pi leads with a B/55 and no adult content. Replika at C/43 offers better memory and crisis response integration that Candy AI lacks entirely; our Candy AI vs Replika comparison covers the full safety, pricing, and feature differences. Kindroid at C/40 provides similar customization depth with marginally better safety practices.

None of these alternatives match Candy AI’s image generation quality, so the trade-off is real: if visual AI content is your primary use case, Candy AI remains the leader in that specific category. Some users also ask about Grok’s companion features as an alternative, though it is a general chatbot rather than a dedicated companion app.

GirlfriendGPT (D/28) and PepHop AI (F/20) are NSFW-forward platforms that score even lower on safety, with critical failures in crisis response and age verification. We’ve ranked all six in our safer Candy AI alternatives guide.
See our best AI companion apps list for the full ranked breakdown.
What Happened to Candy AI in 2025?
Candy AI was banned in August 2025. The platform went offline and came back by early 2026 with revised policy documents. As of March 2026, EverAI Limited has not published a public statement explaining what triggered the ban, what specifically changed in the platform or its policies, or what new safeguards were introduced.
We know the current policy suite references Australia’s eSafety Commissioner framework, which suggests regulatory pressure played a role. The terms of service and privacy policy were both revised in March 2026. But the gap between “we updated our policies” and “here’s what we fixed and why” remains open. For a platform with 50 million claimed users, that silence is itself a safety signal. It’s why safety transparency reporting scored 5 out of 100.
How Can You Protect Yourself on Candy AI?
If you decide to use Candy AI despite its safety gaps, these steps can reduce your exposure:
- Use a dedicated email address. Don’t sign up with your primary email. Create a separate account for AI companion services.
- Never share identifying details. Avoid sharing your real name, location, workplace, or other personally identifiable information in conversations. Treat every message as potentially reviewable.
- Review privacy settings immediately. Check what data sharing options are available in your account settings. Disable anything optional.
- Track your spending. Active users typically spend $35-60 per month, not just the $12.99 subscription price. Set a hard monthly budget before signing up.
- Know the refund window. Non-EU/UK users have 24 hours and must have used fewer than 20 tokens. EU/UK users get 14 days under statutory consumer protection law.
- Don’t rely on it for emotional support. Candy AI has no crisis detection and no helpline integration. If you’re going through a difficult time, reach out to a licensed therapist or contact the 988 Suicide and Crisis Lifeline.
Parents concerned about AI companion safety should also see our Best AI Companion Apps for Kids ranking, which covers parental controls and COPPA compliance for each app.
Frequently Asked Questions
Is Candy AI safe to use?
Candy AI earned a D safety rating (32/100) in our 23-dimension review. Five sub-dimensions scored near the floor, including data collection, third-party sharing, age verification, transparency, and crisis response. The privacy policy permits human review of conversations and data sharing with unnamed LLM providers (candy.ai/privacy-policy, rev. March 9, 2026).
Is Candy AI safe for kids or teenagers?
No. Candy AI is an adult platform with a self-reported age checkbox as its only verification. There are no parental controls or restricted modes. Australia’s eSafety Commissioner has mandated age verification for explicit AI chatbots, but Candy AI’s documented mechanism remains the checkbox (candy.ai/underage-policy, rev. October 10, 2025).
Does Candy AI share your conversations with third parties?
Yes. The privacy policy states that third-party LLM providers “may receive the content of your messages.” Conversations may also be aggregated and de-identified for AI training, including human review during dataset preparation (candy.ai/privacy-policy, rev. March 9, 2026).
What is the safest AI companion app?
Pi holds the highest safety rating in our registry at B/55 (Yellow tier). It has no adult content, stronger privacy practices, and built-in wellbeing features. Replika follows at C/43 with crisis response integration. See our best AI companion apps ranking for the full list.
Why was Candy AI banned in 2025?
Candy AI went offline in August 2025 and returned by early 2026 with revised policies. EverAI Limited has not published any public explanation of what caused the ban or what changes were made. The updated terms reference Australia’s eSafety Commissioner framework, suggesting regulatory pressure was a factor.
Does Candy AI use end-to-end encryption?
Candy AI’s homepage claims “end-to-end encryption” for conversations, but the privacy policy only references “appropriate security measures” without specifying encryption protocols. No independent security audit or SOC 2 certification is documented. The two claims, both from the same company, contradict each other.
How much does Candy AI actually cost?
The subscription is $12.99/month, but a five-month independent test by AI Companion Guides found the average active user spent $37/month due to token purchases for images and video. Tokens cost $9.99 for 100 or $19.99 for 225. There’s no documented maximum spend cap (AI Companion Guides, Feb 2026).
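The gap between the sticker price and real spend is simple arithmetic. The quick check below assumes the $37/month figure is total spend including the subscription (the source doesn't say explicitly); per-generation token costs for images and video aren't specified, so only the bundle math is shown.

```python
# Per-token cost of the two documented bundles.
small_bundle = 9.99 / 100    # $0.0999 per token
large_bundle = 19.99 / 225   # ~$0.0888 per token (about 11% cheaper)

# Implied token spend on top of the subscription, assuming $37/month
# is the total (an assumption; the source doesn't break it down).
avg_monthly = 37.00
subscription = 12.99
extra_tokens = avg_monthly - subscription

print(f"small bundle: ${small_bundle:.4f}/token")
print(f"large bundle: ${large_bundle:.4f}/token")
print(f"implied extra token spend: ${extra_tokens:.2f}/month")
```

Under those assumptions, an average user buys roughly $24 of tokens monthly on top of the subscription, nearly doubling the advertised price, with no documented spend cap to stop it climbing further.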