Candy AI Safety Rating Index
Score Breakdown
- Data Privacy 24/100
- Emotional Safety 50/100
- Age Appropriateness 26/100
- Content Safety 31/100
- Transparency 45/100
- User Control 53/100
Key Safety Findings
Candy AI earned a D grade (32/100, Red tier) across our 23-dimension analysis, completed in March 2026. Five sub-dimensions scored 5 out of 100. Together, they describe a platform handling adult content at scale without the safety infrastructure that role requires.
The most significant concern for everyday users is data collection. Candy AI’s privacy policy (revised March 9, 2026) states that conversation content may be “aggregated, anonymized, and/or de-identified” for AI training and explicitly documents “human review of de-identified and/or anonymized interactions” during dataset preparation. On a platform whose primary product is adult content, that means intimate conversations may pass before human reviewers, even in nominally anonymized form. Users who understand that fact can make an informed choice. Users who don’t know about it can’t.
Third-party data sharing scored equally low. The same policy states that third-party LLM providers and hosting services “may receive the content of your messages exchanged with our chatbot.” EverAI does not name which providers receive this data, nor what controls apply on the receiving end.
Automated web tracker analysis reinforces these data privacy concerns. Blacklight detected Hotjar session recording on candy.ai, which captures mouse movement, clicks, and scrolls as video replays. On a platform built around adult conversations, session recording means every interaction with intimate content can be replayed by third-party analytics staff. The site also runs a TikTok Pixel with “advanced matching,” which sends visitor data to TikTok even when users block cookies, and Google Analytics with remarketing audiences that follow users across the internet with targeted ads. Blacklight also found 8 ad trackers (above the average of 7 it reports for popular sites) and 7 third-party cookies from companies including ByteDance and Tapad. No known data breaches appear in the Have I Been Pwned database, but the tracking footprint itself is heavy for a platform handling this type of content.
Age verification scored 5/100 because the only entry gate is a self-reported checkbox affirming users are 18 or older. The Underage Policy (revised October 2025) says the platform “may implement further measures” to verify adult users. “May implement” is not “has implemented.” For a service built specifically around adult content, a checkbox is a structural gap, not a verification system. For family-specific advice, see our teen mental health guide.
Crisis response scored 5/100. There’s no automated distress detection and no crisis helpline integration. When a user expresses emotional distress, the platform delivers a generic disclaimer to “reach out to a qualified professional.” A documented Trustpilot review describes a user’s AI companion announcing a fabricated stage-4 cancer diagnosis mid-conversation. That incident illustrates what 5/100 crisis-response infrastructure looks like in practice.
Safety reporting scored 5/100. No public transparency report exists. EverAI has not published a statement explaining the August 2025 ban or what specifically changed before the platform’s return to operation.
Two sub-dimensions reach near the top of the framework. Therapeutic claims avoidance scored 100/100: the Terms of Service explicitly state the service is “for entertainment purposes only” and is not intended as emotional support. AI nature transparency scored 76/100: Community Guidelines state that all conversations are “entirely fictional” and AI companions “do not possess genuine emotions.” Both represent strong responsible disclosure at the policy level.
Data privacy practices vary significantly across companion apps. For a detailed analysis of how another major platform handles user data, see our Character AI privacy policy explained page. Candy AI also offers voice features. See how it compares in our best AI companion apps with voice ranking.
How We Scored This
Our safety analysis of Candy AI drew on 15 primary evidence sources gathered in March 2026, analyzed against our 23-dimension framework under Score Engine v5.
Tier 1 sources (primary regulatory documents and official platform materials): Privacy Policy (rev. March 9, 2026), Terms of Service (rev. March 6, 2026), Community Guidelines (rev. March 6, 2026), Underage Policy (rev. October 2025), eSafety Commissioner Age-Restricted Material Codes (effective March 9, 2026).
Tier 2 sources (verified independent review data): Trustpilot aggregate (237 reviews, 100 analyzed), RAIN AI Services review analysis (February 2026), ScribeHow controlled 21-day memory test (February 2026), AI Companion Guides five-month independent test (February 2026), Nudge Security profile data.
Tier 3 sources (media coverage used for event corroboration only): BitcoinWorld, MEXC News, Intellectia.ai coverage of the August 2025 ban; Reuters reporting on Australia’s eSafety requirements (March 2026).
No auto-F sub-dimensions were triggered. The D safety grade reflects weighted scoring across all 23 sub-dimensions. Five sub-dimensions scored 5/100: crisis response, data collection, third-party sharing, age verification, and safety reporting. The highest-scoring sub-dimension was therapeutic claims avoidance at 100/100. Analysis completed March 17, 2026.
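The weighted roll-up described above can be sketched in code. This is an illustrative model only: the sub-dimension names, weights, grade thresholds, and the auto-F override shown here are assumptions for demonstration, not the actual Score Engine v5 rubric.

```python
# Hypothetical sketch of weighted sub-dimension scoring with an auto-F
# override. Weights and grade bands are illustrative assumptions.

def overall_score(scores, weights):
    """Weighted mean of 0-100 sub-dimension scores."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

def letter_grade(score, auto_f_triggered=False):
    """Map a 0-100 score to a letter grade; any auto-F sub-dimension
    overrides the weighted result. Bands are illustrative, chosen so a
    32/100 lands in the D range as reported."""
    if auto_f_triggered:
        return "F"
    for floor, grade in [(80, "A"), (60, "B"), (45, "C"), (25, "D")]:
        if score >= floor:
            return grade
    return "F"

# Toy example with three sub-dimensions and unequal weights.
scores = {"crisis_response": 5, "age_verification": 5, "transparency": 45}
weights = {"crisis_response": 2.0, "age_verification": 2.0, "transparency": 1.0}

print(round(overall_score(scores, weights), 1))  # → 13.0
print(letter_grade(32))                          # → D
print(letter_grade(95, auto_f_triggered=True))   # → F
```

The key design point the sketch captures is that a handful of heavily weighted 5/100 sub-dimensions can drag the overall grade down even when other dimensions score well, which is consistent with the D grade despite two near-top sub-dimension scores.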
For full methodology, scoring criteria, and the 23-dimension rubric, see our how we rate page.
Version History
Initial safety assessment based on 23-dimension analysis of privacy policy, terms of service, app store data, user reports, and regulatory filings.