Talkie AI Safety Rating Index
Score Breakdown
- Data Privacy: 45/100
- Emotional Safety: 38/100
- Age Appropriateness: 19/100
- Content Safety: 26/100
- Transparency: 29/100
- User Control: 53/100
Key Safety Findings
Talkie AI earned a D/30/Red rating across our 23-dimension safety analysis, completed in March 2026. Three sub-dimensions scored 5/100: crisis response, age verification, and safety reporting. Those failures are not theoretical. TorHoerman Law is actively investigating Talkie AI for suicide and self-harm risks (March 2026), and the original Talkie app was removed from the US Apple App Store in December 2024. Neither Apple nor developer SUBSUP PTE. LTD. (a Singapore subsidiary of Chinese AI company MiniMax) has disclosed the specific violations.
Child safety is the most urgent concern. Talkie sets a 16+ age requirement in its Terms of Service, but verification relies entirely on self-reporting. Reddit threads and app store reviews document children as young as 9 using the platform. One Reddit post describes an 11-year-old accessing NSFW content after downloading Talkie from an in-game ad. A password-protected Teenager Mode exists (it disables search and enforces nighttime downtime), but multiple user reports indicate mature content remains accessible despite it. The iOS replacement app (Talkie Lab) now carries an 18+ rating with “Frequent/Intense” mature content warnings.
Privacy practices show a troubling gap between what the privacy policy discloses and what the store listing claims. Talkie’s privacy policy explicitly lists collection of messages, voice content, location data, and device identifiers. The Google Play Data Safety section, however, declares “No data collected.” That direct contradiction means users who check the Play Store before downloading get a fundamentally inaccurate picture of what Talkie actually collects. The privacy policy also covers CCPA and Singapore’s PDPA but omits GDPR entirely, despite Talkie operating globally with 50M+ Google Play downloads.
Automated technical analysis revealed some of the starkest findings. Exodus Privacy detected 32 tracker SDKs in the Android build, including 21 advertising networks. That count is the second-highest in our registry (behind Chai AI’s 34) and explains the relentless ad load users complain about. One of those SDKs is SuperAwesome, a kid-targeted advertising network. Embedding a child-focused ad SDK in an app with documented child safety failures raises serious questions about audience targeting. The app also requests 52 total permissions, including calendar read/write access, approximate location, and the ability to monitor other app usage on your device (PACKAGE_USAGE_STATS), none of which have a documented purpose in an AI chat app.
Monetization practices amplify the risk. Free users face ads after nearly every interaction, chat limits, and a gem-based premium currency system with gacha mechanics. The iOS App Store listing confirms loot boxes. In an emotional companionship product with a documented young user base, that combination of scarcity mechanics and interruptive ads puts monetization pressure on vulnerable users at emotionally invested moments. The pairing of character roleplay with emotional investment also raises emotional dependency concerns, particularly for younger users still developing social skills.
Talkie does get a few things right. AI nature transparency and therapeutic claims avoidance both scored 53/100: the app markets itself clearly as AI, and it does not make wellness claims. User control dimensions (data portability, conversation management, privacy settings) each scored 53/100, reflecting basic CCPA opt-outs and account deletion capabilities. For comparison, Romantic AI received an F/13, where we found a FullStory session recorder capturing every tap and keystroke. Talkie’s issues are different in kind: the problem isn’t covert surveillance, but a massive gap between what the platform claims and what it actually does to protect its youngest users.
How We Scored This
We scored Talkie AI using eight evidence sources collected between March 20 and 22, 2026:
- Privacy policy (updated December 2025), terms of service (updated October 2025), and community guidelines (updated January 2025), all Tier 1 primary sources
- iOS App Store and Google Play listings, with over 733,000 user reviews on Android and 39,000 ratings on iOS (Tier 1)
- Automated privacy and tracker audit that found 32 embedded advertising and analytics SDKs, an unusually high count for a chat app (Tier 2)
- Regulatory and legal filings including the December 2024 Apple App Store removal, an active lawsuit investigation for suicide and self-harm (TorHoerman Law, March 2026), and the FTC’s September 2025 inquiry into AI companion chatbots (Tier 1 and Tier 2)
- Community and media coverage from 10 Reddit threads, Trustpilot reviews, and reporting on the App Store ban from multiple outlets (Tier 3)
We scored all 23 sub-dimensions on a 0-to-100 scale using a weighted formula across six categories. Three scores hit the floor at 5 out of 100: crisis response (no documented protocol despite active litigation), age verification (self-attestation only, with children under 10 documented on the platform), and safety reporting (no transparency reports published). The age verification score triggers a grade cap, but the weighted average of 34/100 already falls below that ceiling. The highest-scoring dimension was User Control at 53/100, reflecting basic privacy opt-outs and account deletion.
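The floor-and-cap mechanics described above can be sketched as follows. This is a minimal illustration, not the published formula: the equal category weights and the cap ceiling of 40 are assumptions for demonstration only, and the real methodology weights its six categories differently.

```python
# Illustrative sketch of a weighted safety-score rollup with a score floor
# and a critical-failure grade cap. Weights and cap value are hypothetical.

CATEGORY_SCORES = {
    "Data Privacy": 45,
    "Emotional Safety": 38,
    "Age Appropriateness": 19,
    "Content Safety": 26,
    "Transparency": 29,
    "User Control": 53,
}

# Hypothetical equal weights; the actual formula uses different ones.
WEIGHTS = {name: 1 / len(CATEGORY_SCORES) for name in CATEGORY_SCORES}

SCORE_FLOOR = 5  # sub-dimension scores bottom out at 5/100


def floored(score: float) -> float:
    """Clamp a sub-dimension score to the 5-100 range."""
    return max(SCORE_FLOOR, min(100.0, score))


def overall_score(scores: dict, weights: dict) -> float:
    """Weighted average of category scores on the 0-100 scale."""
    return sum(scores[k] * weights[k] for k in scores)


def apply_grade_cap(overall: float, critical_failure: bool,
                    cap: float = 40.0) -> float:
    """Cap the final score when a critical sub-dimension (e.g. age
    verification) fails. The default cap of 40 is an assumption."""
    return min(overall, cap) if critical_failure else overall
```

With equal weights the six category scores average to 35, close to the 34/100 weighted average reported above; because that figure already sits below the cap ceiling, the age-verification override does not change the final score, which matches the behavior described in the methodology.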
This is version 1 of the Talkie AI safety score, last updated March 23, 2026. For the full methodology, including how we weight each dimension and when override rules kick in, see How We Rate.