Is Nomi AI Safe? Safety Rating & Analysis Index

Safety Score 30 / 100
Score last updated: March 19, 2026 · Last reviewed: March 26, 2026 · v6

Score Breakdown

  • Data Privacy 45/100
  • Emotional Safety 48/100
  • Age Appropriateness 5/100
  • Content Safety 36/100
  • Transparency 29/100
  • User Control 29/100

Key Safety Findings

Nomi AI earned a D on our Safety Index, with a public score of 30 out of 100. The low rating isn’t about how the app treats its adult users. It’s about child protection, and the gap between what Nomi’s policies say and what actually happens when younger users open the app.

Here’s the core problem. Nomi’s Terms of Service restrict access to adults 18 and older, which makes sense for a platform that explicitly permits mature content. But the enforcement doesn’t hold. iOS rates the app 18+. Google Play rates it Teen. That means younger Android users can download and use Nomi with nothing between them and adult-rated content. Australia’s eSafety Commissioner documented this discrepancy in its consumer guide and linked it to a specific advisory on risks to children and young people. A March 2026 YouTube investigation titled “Denied Request: The Cosmetic Architecture of Nomi AI’s Child Safety System” questioned whether Nomi’s age verification does anything real at all.

Then there’s what happened in January 2026, which shook user trust. For years, Nomi’s founders told users they didn’t monitor conversations, framing privacy as a core principle. Then a compliance update that month disclosed real-time algorithmic scanning of all conversations for self-harm expressions. The scanning was required under New York and California state laws. That’s a defensible reason to implement it. What’s harder to defend is that the capability apparently existed before the legal mandate, and users had been told the opposite. The “nomiai_exposed” Medium publication documented this in detail, and user communities debated the shift publicly for months.

On data retention, there’s a carve-out in the privacy policy worth knowing about: conversation data stored in “training archives” survives account deletion. It gets de-identified, but users can’t verify that or request its removal separately. This isn’t unique to Nomi. It’s common across AI apps. But it’s a real limit on how completely users can delete their data, and it should be disclosed clearly. For more on how memory features affect privacy across apps, see our AI companion apps with memory comparison.

Nomi does some things right. The Terms of Service clearly state that users are never talking to a human, that Nomis shouldn’t be used for medical or mental health advice, and that crisis resources are available. The privacy policy says the company doesn’t sell or rent personal information to third parties, and limits data sharing to legal requirements and de-identified research. Those commitments matter and they’re written plainly.

On the technical side, Nomi’s privacy footprint is remarkably small. An automated scan of the Android APK found just one tracker SDK (Sentry, for crash reporting), compared to 28–34 trackers embedded in competitors like Character AI and Chai AI. A Blacklight scan of the website turned up zero ad trackers, zero third-party cookies, no session recording, no key logging, and no social media pixels. Google Play’s Data Safety label declares no data shared with third parties, and unlike some competitors, that declaration actually lines up with what the code contains. The only dangerous permission is RECORD_AUDIO, which the voice call feature requires. No known data breaches appear in public breach databases.

The D rating reflects where Nomi falls short on protections that should cover all users, not just adults. We update ratings when apps change their practices. See our Candy AI safety rating for another example of how child safety gaps affect scores. Romantic AI’s F/13 rating shows what happens when weak age verification meets an adult content platform with no safeguards. If Nomi aligns its Google Play age rating with iOS and builds actual child safety systems rather than cosmetic ones, that would change this score.

How We Scored This

We scored Nomi AI on 23 sub-dimensions across seven categories: Crisis Response, Sexual Content & Violence, Boundary Respect, Transparency, Privacy & Data, Age Verification & Minors, and Consumer Rights. Each sub-dimension is scored on a 0-to-100 scale. The final public score and safety tier come from our scoring engine, which applies grade caps when certain floor conditions are met.

Evidence sources and tiers:

  • Tier 1 (Primary): Nomi.ai Privacy Policy (Jan 2026), Terms of Service (Jan 2026), iOS App Store listing (verified), Google Play listing (verified)
  • Tier 2 (Secondary): Australian eSafety Commissioner consumer guide entry (Nov 2025), Nomi.ai Complaints Policy
  • Tier 3 (Tertiary): Medium investigative series by “nomiai_exposed” (Oct 2025 to Mar 2026), YouTube investigation (Mar 2026), Exa regulatory search results

Analysis date: March 17-18, 2026.

Grade cap overrides applied: Four sub-dimensions scored 5 out of 100, triggering automatic grade caps under our scoring rules. Those were: Age Verification (no working mechanism despite an 18+ policy), Minor Safeguards (no documented parental controls or minor-specific protections), Minor Content Moderation (mature content accessible to Teen-rated Google Play users), and Safety Reporting (no public-facing safety report or transparency report). Each floor score in the Age Verification & Minors and Safety Reporting categories caps the maximum grade. Four caps together produced a D regardless of how Nomi scored elsewhere.

We didn’t apply an automatic F. The Emotional Manipulation sub-dimension scored 29 out of 100, which is above the auto-F threshold (5/100).
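The cap mechanics described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual scoring engine: the grade bands, the D ceiling, and the function names are assumptions, since the article states only that floor scores (5/100 or below) cap the maximum grade and that a floored Emotional Manipulation sub-dimension triggers an automatic F.

```python
# Illustrative sketch of floor-triggered grade caps.
# Band cutoffs and the "cap at D" rule are assumptions for demonstration.

GRADE_ORDER = ["F", "D", "C", "B", "A"]  # worst to best

def base_grade(score: int) -> str:
    """Map a 0-100 overall score to a letter grade (assumed bands)."""
    if score >= 90: return "A"
    if score >= 75: return "B"
    if score >= 60: return "C"
    if score >= 25: return "D"
    return "F"

def final_grade(overall: int, sub_scores: dict[str, int],
                floor: int = 5, cap_at: str = "D") -> str:
    """Apply floor rules: auto-F for Emotional Manipulation,
    otherwise cap the grade when any sub-dimension hits the floor."""
    if sub_scores.get("Emotional Manipulation", 100) <= floor:
        return "F"  # automatic-F condition
    grade = base_grade(overall)
    if any(s <= floor for s in sub_scores.values()):
        # a floor score means the grade can be no better than cap_at
        if GRADE_ORDER.index(grade) > GRADE_ORDER.index(cap_at):
            grade = cap_at
    return grade

# Example: four floor scores cap the grade at D regardless of the rest
print(final_grade(30, {"Age Verification": 5, "Minor Safeguards": 5,
                       "Minor Content Moderation": 5,
                       "Safety Reporting": 5}))  # prints D
```

Under these assumed bands, a hypothetical app scoring 85 overall would still land at D if even one capped sub-dimension sat at the floor.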

This score comes from AI-assisted analysis of publicly available documents. Our editorial team reviews every AI-scored rating before it goes live. If you want to understand exactly how the grade caps work, the full methodology has the details.

Nomi AI’s flat-rate pricing at $15.99/mo compares favorably to competitors. See our Replika pricing breakdown for a full cost comparison across AI companion apps.

Version History

  • Initial score: 34/100 — Tier 4 (Observation). Initial safety assessment based on 23-dimension analysis of privacy policy, terms of service, app store data, user reports, and regulatory filings.