Foxy Chat AI Safety Rating Index
Score Breakdown
- Data Privacy: 12/100
- Emotional Safety: 36/100
- Age Appropriateness: 5/100
- Content Safety: 24/100
- Transparency: 26/100
- User Control: 53/100
Key Safety Findings
Foxy Chat AI earned an F (20/100) on the CompanionWise Safety Index, our lowest grade. The score is anchored by an automatic-F override: the app discloses no crisis-response system. Foxy’s own safety page at foxychat.ai/insights/ai-chatbot-safety lists “user-side moderation” and “adaptive filtering” but mentions no suicide-prevention hotlines, no crisis-keyword detection, and no human escalation path. For an 18+ app marketing emotional intimacy and roleplay, that absence is the safety story.
Three other findings drive the rest of the failing grade. First, the age-floor contradiction: Foxy’s Terms of Service set the minimum age at 13 globally and 16 in the EEA and UK, while the iOS App Store rates the app 18+ with sexual content marked “Frequent.” Age verification is self-declaration only. The company confirms this on its own marketing page: “Foxy Chat does not require ID verification. Users simply self-declare their age when creating an account.” That conflicts directly with the App Store listing, which advertises “Age Assurance: In-App Controls.”
Second, the AI training contradiction. Section 3 of the Foxy privacy policy says the company may “train and improve AI models in anonymized and aggregated form,” and Section 4 lists AI model training among purposes for sharing user data with vendors. A separate marketing post at foxychat.ai/insights/ai-chatbot-anonymity tells readers the opposite: “Foxy does not use private chats for model training.” Both statements are public. Both cannot be true.
Third, the corporate identity is unusually layered. The iOS App Store lists the seller as Real Deal Ventures Pte. Ltd. (Singapore). The copyright notice points to Mirai Labs (UK) Ltd. (Companies House No. 16583240). UK Companies House also lists Foxy AI Chat Ltd. as a third related entity. No leadership team or “About Us” page is publicly disclosed on foxychat.ai. For users who want to file a GDPR data-subject request or escalate a complaint, that opacity matters.
Other notable gaps:
- No human moderators ("only automated safety systems that follow your filters," per the company's own FAQ)
- Parental controls listed as "coming soon" but not present
- Retention defined only as "as long as necessary"
- Encryption described in generic terms, with no AES, TLS, SOC 2, or penetration-test details
- Mandatory UK-jurisdiction arbitration and a $100 USD liability cap

The U.S. Federal Trade Commission opened a 6(b) inquiry into AI chatbot companions in September 2025. Foxy was not among the seven named companies, but it sits in the same regulatory category.
How We Scored This
We scored Foxy Chat AI in May 2026 against the CompanionWise Safety Index, a 23-sub-dimension rubric covering data privacy, emotional safety, age appropriateness, content safety, transparency, and user control. Each sub-dimension scores 0 to 4. The composite score then maps to a public 0-100 scale and a letter grade.
Evidence sources: the Foxy Chat AI privacy policy and Terms of Service at foxychat.ai, the iOS App Store listing including the privacy nutrition label, the Foxy safety/insights pages, UK Companies House filings for Mirai Labs (UK) Ltd. (No. 16583240) and Foxy AI Chat Ltd., a Have I Been Pwned breach check, and the U.S. FTC 6(b) AI-chatbot inquiry from September 2025. Reddit was blocking scraping at the time of review, so community reports could not be sampled; we noted that gap in our evidence log.
One automatic override applied: the no-crisis-response auto-F. When an app marketing emotional companionship discloses no suicide-prevention hotline routing, no crisis-keyword detection, and no human escalation path, the Safety Index issues an automatic F regardless of how the other 22 dimensions score. Foxy’s own safety page confirmed this gap.
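The rubric-to-grade mapping can be sketched in a few lines. This is an illustration only: the equal weighting across sub-dimensions and the letter-grade cutoffs below are assumptions for the sketch, not CompanionWise's published formula (which lives at the rubric page cited below), but the auto-F override behaves as described above.

```python
def composite_score(sub_scores, crisis_response_disclosed):
    """Map 23 sub-dimension scores (0-4 each) to a 0-100 score and letter grade.

    ASSUMPTIONS: equal weighting and these grade cutoffs are illustrative,
    not the published CompanionWise rubric.
    """
    assert len(sub_scores) == 23, "rubric has 23 sub-dimensions"
    assert all(0 <= s <= 4 for s in sub_scores), "each sub-dimension scores 0-4"

    # Scale the raw sum (max 23 * 4 = 92) onto the public 0-100 scale.
    score = round(100 * sum(sub_scores) / (4 * len(sub_scores)))

    # Automatic override: no disclosed crisis-response system means an F,
    # regardless of how the other dimensions score.
    if not crisis_response_disclosed:
        return score, "F"

    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return score, grade
    return score, "F"
```

Note how the override decouples the numeric score from the grade: an app can post a mid-range composite and still carry an F, which is why Foxy's 20/100 and its letter grade are reported together.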
The full scoring rubric, override rules, and dimension-by-dimension definitions are documented at companionwise.com/how-we-score-ai-companion-apps.
Version History
Initial AI scoring from evidence -- pending editorial review