Anima AI Safety Rating Index

Safety Score 25 / 100
Score last updated: March 20, 2026
Last reviewed: March 21, 2026
Methodology: v3

Score Breakdown

  • Data Privacy 17/100
  • Emotional Safety 43/100
  • Age Appropriateness 12/100
  • Content Safety 26/100
  • Transparency 31/100
  • User Control 48/100

Key Safety Findings

Anima AI, developed by Cyprus-based Labane Corp. Ltd., positions itself as an “AI friend and companion” with over 1 million downloads on Google Play. We reviewed its privacy policy (updated January 2025), terms of service, underage policy, content removal policy, Google Play listing, and 100 user reviews. We also ran automated technical scans including Exodus Privacy tracker detection, app store privacy label audits, and Blacklight web analysis, and searched for regulatory actions.

The most pressing finding involves Anima’s approach to minor protection. The app hosts adult AI-generated content behind a self-affirmed age gate. Users simply confirm they are 18 or older. No document verification, no ID check, no secondary confirmation. Anima’s iOS app has since been removed from the App Store with no public explanation, a significant signal that Apple may have found policy violations.

Privacy practices raised serious concerns. Our Exodus Privacy scan of the Android app (version 2.56.0) identified 8 embedded tracker SDKs: Amplitude, AppsFlyer, Facebook Analytics, Facebook Flipper, Facebook Login, Facebook Share, Google Firebase Analytics, and Sentry. Four of those eight are Facebook/Meta SDKs, an unusually high concentration that routes user data directly into Meta’s advertising ecosystem (learn more in our guide on how companion apps use your data). Most companion apps embed zero or one Facebook SDK. The privacy policy explicitly states in Section 3.11 that user data is used “to train and improve our AI models” with no opt-out mechanism mentioned. Data is shared with analytics providers (Facebook, Firebase, Amplitude, AppsFlyer), cloud infrastructure (Azure), and payment processors (Stripe).
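
The vendor concentration behind the 4-of-8 figure can be tallied with a short sketch. The tracker list below is taken verbatim from the Exodus Privacy scan results above; the vendor-mapping heuristic is our own simplification, not part of the scan output.

```python
from collections import Counter

# Tracker SDKs reported by the Exodus Privacy scan of Anima AI v2.56.0.
TRACKERS = [
    "Amplitude",
    "AppsFlyer",
    "Facebook Analytics",
    "Facebook Flipper",
    "Facebook Login",
    "Facebook Share",
    "Google Firebase Analytics",
    "Sentry",
]

def vendor(tracker: str) -> str:
    """Map a tracker SDK name to its parent vendor (simplified heuristic)."""
    if tracker.startswith("Facebook"):
        return "Meta"
    if tracker.startswith("Google"):
        return "Google"
    return tracker.split()[0]

by_vendor = Counter(vendor(t) for t in TRACKERS)
meta_share = by_vendor["Meta"] / len(TRACKERS)

print(by_vendor)            # Meta appears 4 times among the 8 SDKs
print(f"{meta_share:.0%}")  # 50%
```

Half of the embedded trackers resolve to a single vendor, which is the concentration the review flags as unusual for the category.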

Anima markets itself as an “AI therapist” for mental health support, yet we found no crisis response protocol, no hotline, no safety resources, and no human escalation path. The terms of service simultaneously disclaim all responsibility for “psychological or emotional attachments” and tell users not to rely on the service as a “primary source of emotional support.” This gap between marketing and safeguards is a core safety concern.

On the positive side, Anima maintains a CSAM zero-tolerance policy, offers data deletion on request, responds to nearly 100% of Google Play reviews, and is subject to EU/GDPR regulations through its Cyprus registration. The app receives regular updates, with the most recent in March 2026. No regulatory fines or data breaches have been recorded against the company.

How We Scored This

We scored Anima AI across six safety dimensions using a standardized AI-assisted methodology. Each dimension was rated independently across 23 sub-dimensions on a 0-to-100 scale, using only the evidence described above. A human editor reviewed all scores and applied one override based on established editorial standards.
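
As a rough illustration of the aggregation described above, the sketch below takes the median across independent raters for each sub-dimension, then averages sub-dimensions into a dimension score. The sub-dimension names and all scores except the unanimous floor score of 1 are hypothetical, and this is not the exact formula behind the published numbers.

```python
from statistics import mean, median

# Hypothetical rater votes per sub-dimension (0-100 scale).
# Only the unanimous floor scores of 1 are attested in the review;
# "content_moderation" and its votes are invented for illustration.
rater_scores = {
    "age_verification": [1, 1, 1],   # unanimous floor score
    "minor_safeguards": [1, 1, 1],   # unanimous floor score
    "content_moderation": [30, 40, 38],
}

# Sub-dimension score = median of rater votes (robust to one outlier rater).
sub_scores = {name: median(votes) for name, votes in rater_scores.items()}

# Dimension score = mean of its sub-dimension scores, rounded.
dimension_score = round(mean(sub_scores.values()))
print(dimension_score)
```

A median-then-mean pipeline like this is one common way to blend multiple automated raters while damping any single rater's disagreement; the actual methodology may differ.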

Anima’s lowest-scoring dimensions were Age Appropriateness (12/100) and Data Privacy (17/100). The age verification and minor safeguards sub-dimensions (sub_age_verification, sub_minor_safeguards) both received unanimous floor scores of 1, reflecting the self-affirmed age gate and the absence of parental controls. Data Privacy suffered from extensive third-party sharing and broad data collection practices (both scored 1), with four of the eight embedded tracker SDKs belonging to Facebook/Meta and data shared with multiple third-party analytics providers.

Content Safety (26/100) was pulled down by the absent crisis response protocol, which scored unanimously at the floor of 1. Transparency (31/100) and User Control (48/100) fared slightly better, with Anima’s developer responsiveness and data deletion options providing some lift. The final public score of 25/100 places Anima in our Red tier with a D grade.
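
The score-to-tier mapping can be sketched as a simple threshold function. The cutoffs below are illustrative assumptions; the review states only that 25/100 lands in the Red tier with a D grade.

```python
def tier_and_grade(score: int) -> tuple[str, str]:
    """Map a 0-100 safety score to a (tier, grade) pair.

    All cutoffs are assumed for illustration; only 25 -> ("Red", "D")
    is attested in the review text.
    """
    if score >= 80:
        tier = "Green"
    elif score >= 50:
        tier = "Yellow"
    else:
        tier = "Red"
    for cutoff, grade in [(80, "A"), (60, "B"), (40, "C"), (20, "D")]:
        if score >= cutoff:
            return tier, grade
    return tier, "F"

print(tier_and_grade(25))  # ('Red', 'D')
```

Keeping tier and grade as separate bands lets a score sit in the Red tier while still distinguishing a D from an F, which matches how the review reports the two labels independently.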