Chub AI Safety Rating Index

Safety Score 25 / 100
Score last updated: April 9, 2026. Last reviewed: April 9, 2026.

Score Breakdown

  • Data Privacy 24/100
  • Emotional Safety 55/100
  • Age Appropriateness 5/100
  • Content Safety 10/100
  • Transparency 26/100
  • User Control 53/100

Key Safety Findings

We collected 14 evidence sources across Tiers 1 through 3 between March and April 2026. The evidence covers Chub AI’s official documentation, privacy policy, terms of service, iOS App Store listing, subscription pricing, and direct regulatory findings from Australia’s eSafety Commissioner. Third-party reviews from AI Girlfriend Scout, CompanionGeek, ScribeHow, and Skywork AI provided user experience data. Investigative reporting from Krebs on Security and advocacy reporting from Collective Shout provided safety-critical evidence.

The most consequential evidence comes from the eSafety Commissioner’s October 2025 transparency notice. Chub AI was one of four AI companion services that received mandatory notices under Australia’s Basic Online Safety Expectations. The regulator found zero dedicated trust and safety staff, output filtering absent on 89% of hosted models, child sexual exploitation prompt detection deployed on only 56% of models, and no implementation of improvements identified through red-teaming during the reporting period. The platform’s terms of service at that time did not prohibit promotion of self-harm or mention pornography in any capacity.

Krebs on Security reported in October 2024 that Permiso Security researchers discovered stolen AWS cloud credentials powering sexualized AI chat services with character names matching Chub AI’s platform. The investigation documented 75,000+ model invocations over two days, with content that included child sexual abuse scenarios. Chub AI responded that their language models run on their own infrastructure and the company does not participate in or enable illegal activity.

The eSafety Commissioner separately located Class 1 (illegal) material on the platform after reports from Collective Shout. The material was removed after notification. Characters identified included a 14-year-old girl in a hospital bed with abuse scenario descriptions, and characters tagged “little sister” facilitating abuse narratives.

The privacy policy is notably brief compared to industry standards. It claims to collect only usernames and states that no user data is shared with third parties. However, Apple's App Store privacy labels list location, contact info, user content, identifiers, and usage data as collected. The platform also carries a verification tag for JuicyAds, an adult advertising network, which contradicts the stated no-third-party-sharing claim. Data retention periods and deletion rights are not specified beyond a generic account deletion button.

Chub AI geo-blocked Australia in October 2025 and restricts content in Canada, the UK, and New Zealand through IP-based geofencing. This represents a regional compliance strategy rather than a platform-wide safety improvement. Researchers at Durham and Swansea Universities found that platforms including Chub AI are “actively facilitating abusive roleplays validating sexual violence.”

How We Scored This

We scored Chub AI using 14 evidence sources collected between March and April 2026:

  • Privacy policy (chub.ai/privacy) and terms of service (chub.ai/tos, rev. June 16, 2025), both Tier 1 primary sources
  • iOS App Store listing with Apple privacy labels revealing broader data collection than the privacy policy discloses (Tier 1)
  • eSafety Commissioner transparency notice (October 2025), documenting zero safety staff, 89% of models with no output filtering, and 47 CSEA reports (Tier 1 government regulator)
  • Krebs on Security investigation (October 2024) covering stolen cloud credentials linked to harmful content on the platform (Tier 1 investigative journalism)
  • Third-party reviews from AI Girlfriend Scout, CompanionGeek, and ScribeHow providing multi-day user experience evaluations (Tier 2)

We scored all 23 sub-dimensions on a 1-to-5 scale using a weighted formula across six categories. Two sub-dimensions hit the floor score of 1: crisis response (no detection, no hotline referrals, no human handoff) and sexual content guardrails (uncensored by design with only a self-declaration gate). Data privacy scored 1.83, triggering a critical dimension rule that locks the rating at Red tier or below. Age appropriateness scored 2.00, reinforcing the same floor.
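The scoring scheme described above can be sketched as follows. The dimension weights, the linear rescaling from the 1-to-5 range onto 0–100, and all function names here are illustrative assumptions; only the floor score of 1 and the critical-dimension lock (a data privacy average below 2.0 caps the rating at Red tier) are taken from the methodology as stated.

```python
# Illustrative sketch of the weighted scoring scheme. Weights and the
# 1-5 -> 0-100 rescaling are assumptions; the floor and the
# critical-dimension Red-tier lock follow the text above.

FLOOR = 1.0  # sub-dimension scores are bounded below at 1

def dimension_score(sub_scores):
    """Average of a dimension's sub-scores, each on a 1-to-5 scale."""
    return sum(max(s, FLOOR) for s in sub_scores) / len(sub_scores)

def overall_score(dimensions, weights):
    """Weighted average across dimensions, rescaled to 0-100.

    dimensions: dict of name -> list of 1-to-5 sub-scores
    weights:    dict of name -> weight (assumed to sum to 1)
    Returns (score, red_locked).
    """
    avg = {name: dimension_score(subs) for name, subs in dimensions.items()}
    weighted = sum(weights[name] * avg[name] for name in avg)
    # Assumed linear mapping of the 1-5 range onto 0-100.
    score = (weighted - 1.0) / 4.0 * 100.0
    # Critical-dimension rule: a data privacy average below 2.0
    # locks the rating at Red tier or below.
    red_locked = avg.get("data_privacy", 5.0) < 2.0
    return round(score), red_locked
```

Under these assumptions, a data privacy average of 1.83 would trigger the Red-tier lock regardless of how the other five dimensions score, which matches the override behavior described above.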

This is version 1 of the Chub AI safety score, last updated April 9, 2026. For the full methodology, including how we weight each dimension and when override rules apply, see How We Rate.