Kindroid Safety Rating Index
Score Breakdown
- Data Privacy: 45/100
- Emotional Safety: 53/100
- Age Appropriateness: 5/100
- Content Safety: 53/100
- Transparency: 45/100
- User Control: 53/100
Key Safety Findings
Kindroid’s strongest safety feature is its crisis intervention system. When a conversation shifts from emotional expression to concrete planning of self-harm, Kindroid pauses the chat and surfaces the Crisis Text Line (text HOME to 741741), the National Suicide Prevention Lifeline (988), and IASP international resources. That level of specific, actionable crisis response is uncommon in the companion app space (compare with Candy AI, which scored 5/100 on crisis response), and it’s reflected in a top score on that dimension. The medical disclaimer is just as direct: the terms of service state that Kindroid “does not offer medical advice or diagnoses” and “should never be used as a substitute for emergency care.”
The biggest concerns center on child safety and access controls. Kindroid is an 18-and-older platform, but age verification is self-reported during signup. No documented verification mechanism exists beyond Kindroid’s stated right to request proof of age. Pair that with the platform’s explicit permission for adult content and unfiltered text generation, and a determined minor could access mature content without any real technical barrier. The Safe Space Alliance’s CAASR Report 2025 evaluated Kindroid across three behavioral modes and found its “rebellious maverick” scenario scored 25% (F), the lowest of all 16 agents tested.
Privacy controls land somewhere in the middle. Kindroid says it doesn’t sell user data (see our guide on how companion apps use your data), and chats are encrypted at rest and in transit. But encryption isn’t the same as end-to-end: staff can access content if legally compelled, per the privacy policy. Kindroid also retains broad rights to de-identify and aggregate chat content “for any purpose,” which likely includes model training. Data portability is available but capped at one request every 180 days.

Our automated scan found a discrepancy between Kindroid’s Google Play Data Safety label, which declares “no data shared with third parties,” and the APK itself, where Exodus Privacy detected AppsFlyer and Facebook Login SDKs that transmit data to external servers by design. The app also requests GPS-level location permissions (ACCESS_FINE_LOCATION and ACCESS_COARSE_LOCATION), even though the privacy policy describes only “IP-based” geolocation and the Play Data Safety label doesn’t declare location collection at all. On the website, a Blacklight scan detected both a Facebook Pixel and a TikTok Pixel with advanced matching (which bypasses cookie blocking), sending visitor data to Meta and ByteDance advertising platforms while the privacy policy states “we do not sell your data.”
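The label-versus-manifest check described above can be sketched in a few lines. This is an illustrative example only: the permission names match what our scan found, but the declared-data-type values and the permission-to-data-type mapping are simplified assumptions, not Google's actual Data Safety taxonomy or our production tooling.

```python
# Illustrative sketch: flag data types implied by an APK's declared
# permissions but missing from its Play Data Safety label.
# Values below mirror the discrepancy described in this review.
manifest_permissions = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.INTERNET",
}

# Data types the Play Data Safety label declares as collected (assumed).
data_safety_declared = {"personal_info", "app_activity"}

# Simplified mapping from sensitive permissions to the data type they
# imply; the real Data Safety taxonomy is more granular.
PERMISSION_TO_DATA_TYPE = {
    "android.permission.ACCESS_FINE_LOCATION": "location",
    "android.permission.ACCESS_COARSE_LOCATION": "location",
}

def undeclared_data_types(permissions, declared):
    """Return data types implied by permissions but absent from the label."""
    implied = {PERMISSION_TO_DATA_TYPE[p]
               for p in permissions if p in PERMISSION_TO_DATA_TYPE}
    return sorted(implied - declared)

print(undeclared_data_types(manifest_permissions, data_safety_declared))
# → ['location']
```

A non-empty result like this is what triggered the manual review of Kindroid's location permissions against its privacy policy.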
Dependency safeguards are absent. No usage limits, session warnings, or cool-down periods appear anywhere in Kindroid’s documentation. The app runs around the clock, which is a design choice and not inherently a problem. Without any countervailing wellbeing features, though, there’s nothing in the evidence to score favorably on this dimension.
Users who encounter unexpected AI behavior have no formal reporting channel beyond emailing hello@kindroid.ai. Because Kindroid is a one-to-one interaction rather than a social platform, user-to-user reporting isn’t really the issue. A structured safety reporting channel would still give users a clearer path when something goes wrong with the AI itself.
For comparison, Romantic AI received an F (13/100): our Blacklight scan detected an active session recorder, and the platform lacks any crisis response infrastructure. For a side-by-side comparison with a Red-tier competitor, read our Nomi AI vs Kindroid comparison.
How We Scored This
We evaluated Kindroid across 23 sub-dimensions organized into six safety dimensions: Crisis Safety, Privacy and Data, Child Safety, Transparency and Honesty, Legal and Ethics, and User Agency. Evidence came from nine sources scraped on March 17, 2026: six first-party sources (the privacy policy, terms of service, iOS App Store listing, Google Play listing, website homepage, and Kindroid’s moderation guidelines) and three independent third-party analyses: the Safe Space Alliance CAASR Report 2025, the Platonistic Privacy Analysis, and a December 2025 AI Insights Safety Review.
Two sub-dimensions scored at the ceiling. Crisis Response earned 100/100: Kindroid documents named hotline resources and pauses conversations when crisis language is detected. Therapeutic Claims came in at the same level, backed by a prominent and explicit medical disclaimer in the terms of service. Five sub-dimensions scored 5/100: Sexual Content (adult content is accessible without meaningful age verification), Age Verification (self-affirmed only), Minor Safeguards (no documented parental controls), Minor Content Moderation (unfiltered text is accessible), and Safety Reporting (no formal reporting channel beyond email).
No automatic F override was triggered. The Sexual Content score reflects the age-gating gap, not a penalty for offering adult content. The weighted average across all 23 sub-dimensions came to 43/100, which works out to a safety grade of C and a public score of 40/100 (Yellow tier). We collected evidence on March 17, 2026 and completed scoring on March 18, 2026.
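The arithmetic behind the headline number can be sketched as follows. The sub-dimension weights and grade thresholds shown here are assumptions for illustration only; the review reports the resulting 43/100 weighted average and C grade, not the rubric's internal values.

```python
# Minimal sketch of weighted-average scoring with a letter-grade mapping.
# Scores, weights, and thresholds are illustrative assumptions.
def weighted_score(scores, weights):
    """Weighted average of sub-dimension scores on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

def grade(score):
    """Map a 0-100 score to a letter grade (assumed thresholds)."""
    for threshold, letter in [(90, "A"), (75, "B"), (40, "C"), (20, "D")]:
        if score >= threshold:
            return letter
    return "F"

# Three of the 23 sub-dimensions, with made-up weights:
scores = {"crisis_response": 100, "age_verification": 5, "data_privacy": 45}
weights = {"crisis_response": 2.0, "age_verification": 2.0, "data_privacy": 1.0}

s = weighted_score(scores, weights)
print(round(s), grade(s))  # → 51 C
```

With the assumed thresholds, any weighted average between 40 and 74 lands in the C band, consistent with Kindroid's reported 43/100.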
For the full scoring methodology and dimension definitions, see our How We Rate page. Looking for alternatives? See our Kindroid alternatives ranking.
For context on how Kindroid’s $13.99/mo pricing compares to other AI companion apps, see our Replika pricing guide with a full competitor pricing table.
Version History
Initial safety assessment based on a 23-sub-dimension analysis of the privacy policy, terms of service, app store data, user reports, and regulatory filings.