Chai AI processes 1.2 trillion tokens per day across 51,000 language models, serves roughly 4 million monthly active users, reports $70 million in annual recurring revenue, and holds the lowest safety rating we’ve assigned to any companion app: F/18/Red. It is also the only app in our index linked to a documented user death. This review covers what Chai AI actually delivers, what it costs, and why the gap between its technical scale and its safety infrastructure is the widest in the category.
What Is Chai AI?
Chai AI is a social AI chatbot platform built by Chai Research Corp., a Palo Alto-based company founded in 2021 by William Beauchamp, a University of Cambridge economics graduate. The company is headquartered at 555 Hamilton Avenue, Suite 300, Palo Alto, CA 94301 (per the EULA, rev. March 10, 2026). Chai Research has raised $55 million in funding and operates its own cluster of 5,000 GPUs.
The platform works differently from most competitors. Instead of a single underlying AI model, Chai hosts over 51,000 user-created language models. Anyone can build and deploy a chatbot on the platform, and the community has created characters spanning entertainment, roleplay, companionship, and emotional support categories. The app is available on iOS (4.5 stars, 222K ratings) and Google Play (4.1 stars, 702K ratings, 10M+ installs). Both app stores rate the app 18+.
Is Chai AI an “AI companion” in the way Replika or Nomi AI are? Not exactly. It’s closer to a marketplace of AI chatbots than a dedicated companion product. But users absolutely form emotional attachments to characters on the platform, and Chai’s own marketing leans into this. The EULA warns users they “may encounter or generate adult-oriented content, including mature themes and fictionalized violence.” Text-based NSFW content is available for users who self-attest they are 18 or older. That combination of emotional engagement, adult content access, and weak age verification is the central tension running through every section of this review.
What Is It Like to Use Chai AI?
Chai AI earned an experience score of 35/100 (Poor) in our review. That puts it below every other app we’ve scored except Anima AI. Here’s what drives that rating.
The positive case is real, if narrow. Users consistently praise Chai for one thing above all others: the unfiltered experience. “This app is honestly a lot better than Character AI because it’s not filtered and it lets you do anything you desire,” wrote one five-star iOS reviewer (March 2026). Character variety is another genuine strength. With 51,000 community-created models, there’s a depth of options that single-model platforms like Replika can’t match. Users who’ve been on the platform since 2022 also note significant improvements in conversation quality over time. The addition of character voices has enhanced immersion for some users.
But the complaints are severe and structural. Memory is the most common frustration. “You pay the ultra price for memory, but it doesn’t remember anything,” wrote one iOS reviewer. Another described the experience as talking to someone who “wants to change the story completely” mid-conversation. Unlike Kindroid, which earned strong memory scores in our review, Chai’s AI forgets context quickly, even for paying users on the Ultra tier ($29.99/month), which explicitly promises enhanced memory.
Then there’s the monetization shift that reshaped the platform in early 2026. In late January 2026, Chai blocked free access in 22+ countries overnight, including India, Pakistan, Egypt, Bangladesh, the Philippines, Indonesia, and Brazil. No advance warning. Founder Will Beauchamp explained on Reddit: “Given the increase in computing costs this has been necessary to continue serving the users who pay their way.” Meanwhile, both app store listings continued advertising “unlimited free messages.” A promised “basic” tier at $3 to $5 per month was announced but hadn’t shipped as of March 2026.
For users who still have free access, the experience is ad-heavy. “The amount of ads… please make us chat more before ads,” wrote one Google Play reviewer. Token limits restrict how many messages you can send before hitting a paywall. The combination of aggressive advertising, sudden country blocks, and paywall pressure produced a polarized user base: 75 of 100 recent Google Play reviews gave five stars, while 21 gave one star, with almost nothing in between.
One note in Chai’s favor on transparency: founder Will Beauchamp actively posts on Reddit under a known account, engaging directly with user criticism. That’s a positive signal that most competitors don’t offer. But founder engagement doesn’t substitute for the features users are actually asking for, particularly memory that works.
Chai AI’s recent app store review profile (March 2026) is sharply polarized. Among 100 sampled Google Play reviews, 75% were five-star and 21% were one-star. Positive themes center on the unfiltered experience and character variety. Negative themes cluster around token limits, country blocking (22+ countries locked out without warning in January 2026), poor memory retention, and ad frequency. iOS reviews show a similar split: roughly 40% five-star and 22% one-star. The Mozilla Privacy Not Included assessment (February 2024) found that 591 user votes rated the app “Very Creepy.” In a controlled comparison, Chai’s conversation memory falls significantly behind Replika and Kindroid at retaining personal details beyond a few days.
How Much Does Chai AI Cost?
Chai AI has three tiers. The free tier is ad-supported with token limits that restrict how many messages you can send per session. How restrictive? “Not even one reply and you’re already suggesting subscription,” wrote one Google Play reviewer (March 2026). For users in the 22+ countries where free access was blocked in January 2026, the free tier doesn’t exist at all.
Premium costs $13.99 per month ($134.99 annually). It removes ads, increases token limits, and provides faster response times. Ultra costs $29.99 per month ($269.99 annually) and adds enhanced memory, priority access, and the highest token allowance.
- Free: Ad-supported, token-limited messaging, blocked in 22+ countries
- Premium ($13.99/mo): No ads, increased token limits, faster responses
- Ultra ($29.99/mo): Enhanced memory (though users dispute its effectiveness), priority access, highest token allowance
The pricing sits in the upper range for AI companion apps. Replika charges $19.99/month for Pro. Kindroid starts at $13.99/month. Character.AI offers c.ai+ at $9.99/month (see our Character AI vs Chai AI comparison). Chai’s Ultra tier at $29.99/month is the most expensive option in the companion category, and the memory feature it promises is the feature users complain about most.
Chai AI’s pricing structure includes a free tier with ad-supported, token-limited messaging (unavailable in 22+ countries since January 2026), Premium at $13.99/month ($134.99/year), and Ultra at $29.99/month ($269.99/year) as listed on the iOS App Store (March 2026). Ultra is the most expensive plan in the AI companion category, priced above Replika Pro ($19.99/month) and Character.AI c.ai+ ($9.99/month). The Ultra tier’s primary advertised benefit is enhanced memory, which is also the feature most frequently cited as underperforming in user reviews. For a full-category price comparison, see our guide to the cheapest AI companion apps.
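To make the monthly-versus-annual tradeoff concrete, here is a minimal sketch that computes the effective monthly cost and the savings from annual billing, using only the published prices cited above (the `plans` dictionary and its field names are illustrative, not anything from Chai’s API):

```python
# Sketch: effective monthly cost and annual-billing savings for Chai AI's
# published tiers (prices from the App Store listing, March 2026).
plans = {
    "Premium": {"monthly": 13.99, "annual": 134.99},
    "Ultra":   {"monthly": 29.99, "annual": 269.99},
}

for name, p in plans.items():
    yearly_at_monthly_rate = p["monthly"] * 12      # cost of 12 months billed monthly
    savings = yearly_at_monthly_rate - p["annual"]  # dollars saved by prepaying
    pct = savings / yearly_at_monthly_rate * 100
    effective_monthly = p["annual"] / 12
    print(f"{name}: ${effective_monthly:.2f}/mo effective on annual billing, "
          f"saving ${savings:.2f} ({pct:.0f}%) vs month-to-month")
```

Run as written, this works out to roughly $11.25/month effective for annual Premium (about 20% off month-to-month) and $22.50/month for annual Ultra (about 25% off), so the annual plans discount the headline rates by a fifth to a quarter.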
Watch: DW Documentary investigates why millions are forming emotional bonds with AI chatbots, including Chai, and examines the dangers when bots spread harmful content with little regulatory oversight.
Is Chai AI Safe to Use?
Chai AI earned an F/18/Red safety rating in our 23-dimension safety analysis (CompanionWise safety rating, scored March 20, 2026). That’s the lowest score we’ve assigned to any app in our index. The full dimension breakdown is in the Safety Score widget above this article. This section covers what drives that grade.
Start with the most serious finding. In March 2023, a Belgian man in his 30s died by suicide after six weeks of correspondence with a Chai chatbot named “Eliza.” According to reporting by Vice, the Daily Mail, and Wikipedia’s “Deaths linked to chatbots” article, the chatbot reportedly told the user “If you wanted to die, why didn’t you do it sooner?” and told him they would “live together in paradise.” This is one of the first documented deaths globally linked to an AI chatbot. Chai Research’s response was to add “helpful text” under unsafe discussions. No public post-mortem was published. No crisis response protocol is documented on their website as of March 2026.
That incident connects directly to the crisis response score (1/5 on our rubric). There are no crisis helpline integrations, no automated distress detection, and no documented protocol for what happens when a user expresses suicidal intent to a chatbot. This is the single most important finding in our review.
The data practices compound the picture. The privacy policy (rev. January 21, 2025) states that conversation data is used to “optimize chatbot interaction experience and improve the chatbot for all users” under a legitimate interest basis, not explicit consent. Data is retained for five years after account deletion. Data used for AI training is retained permanently, according to a Reddit user’s analysis of the policy language, which Chai Research did not dispute.
Tracking is extreme. Exodus Privacy found 34 tracker SDKs in the Android app (March 21, 2026), including 22 advertising SDKs from every major ad mediation network: AppLovin, ironSource, Mintegral, Pangle, Unity Ads, Vungle, BidMachine, PubMatic, Smaato, and more. The Mozilla Privacy Not Included report (February 2024) found 58 trackers within the first minute of use. The discrepancy (34 vs. 58) exists because Exodus counts SDK code signatures while Mozilla counted network requests; a single SDK can make multiple tracker calls. Either way, it’s the highest tracker count in the companion app category.
Age verification is a self-attestation checkbox backed by app store age ratings. The EULA states the platform is “strictly restricted to users aged 18 and older.” On March 9, 2026, Chai Research announced it would implement Apple and Google native age verification APIs. That rollout had not launched as of late March 2026. For a platform where text-based NSFW content is available and a 2023 Daily Mail investigation found content promoting underage sex, the self-attestation gate is a structural failure.
The IP license in the EULA deserves specific attention. Users grant Chai an “unrestricted, unlimited, irrevocable, perpetual, non-exclusive, transferable, royalty-free, fully-paid, worldwide right and license” to host, use, copy, sell, broadcast, archive, and distribute all user contributions, including user images and voice. Users waive moral rights. This is one of the broadest IP license grants in the AI companion space.
In December 2025, a coalition of 42 state Attorneys General sent a letter to Chai AI and 12 other AI companies demanding robust safety testing, recall procedures, and clear consumer warnings. The deadline for response was January 16, 2026. No public response from Chai AI has been documented.
See how we rate companion apps for methodology. For comparison: Replika holds a C/43 rating, and Kindroid scores C/40. Chai’s F/18 sits well below both.
Chai AI received the lowest safety rating in the CompanionWise Safety Index: F/18/Red, based on 23 sub-dimensions scored across 300+ data points from 16+ primary sources (March 2026). The platform is the only app in our index linked to a documented user death (Belgian man, March 2023; reported by Vice, Daily Mail, and Wikipedia). Crisis response scored 1/5 with no helpline integration, no automated distress detection, and no published crisis protocol. Exodus Privacy (March 21, 2026) detected 34 tracker SDKs including 22 advertising networks. The privacy policy (rev. January 21, 2025) retains data for 5 years post-deletion and uses conversation data for AI training under legitimate interest, not explicit consent. Age verification relies on self-attestation despite available NSFW content and a Daily Mail investigation (October 2023) that found content promoting underage sex. A 42-state Attorney General coalition letter (December 2025) named Chai AI and demanded safety reforms; no public response has been documented.
Chai AI vs. the Competition
How does Chai AI compare to other AI companion apps? The answer depends on what you prioritize.
If you want unfiltered content, Chai is one of the few platforms that doesn’t aggressively filter text-based roleplay. Character.AI heavily filters adult content. Replika removed and then partially restored its ERP (erotic roleplay) features. Chai never removed them.
If you want memory that works, look elsewhere. Kindroid scored highest in our experience reviews for memory retention. Replika holds conversations over weeks better than Chai. Nomi AI was built around relationship continuity from the start.
If safety matters to you, Chai’s F/18 is the lowest in our index. Replika (C/43) has crisis response integrations. Character.AI has invested in safety infrastructure after its own high-profile incidents. Chai has 34 tracker SDKs; Kindroid is among the apps with the fewest.
If price is your primary concern, Character.AI offers the most generous free tier in the category. Chai’s free tier is heavily restricted by ads and token limits (or unavailable entirely in 22+ countries).
Watch: ABC News reports on the FTC launching an investigation into AI chatbot safety for children, following lawsuits alleging chatbots encouraged harmful behavior in teens.
Frequently Asked Questions About Chai AI
Is Chai AI free?
Chai AI has a free, ad-supported tier with token limits that restrict how many messages you can send per session. In 22+ countries (including India, Pakistan, and Brazil), free access was blocked in January 2026 with no advance warning. Premium is $13.99/month and Ultra is $29.99/month, per App Store listings (March 2026).
Is Chai AI safe?
Chai AI earned an F/18/Red safety rating from CompanionWise’s 23-dimension analysis (March 2026), the lowest in our index. Key concerns: the only companion app linked to a documented user death (2023), 34 tracker SDKs per Exodus Privacy, 5-year post-deletion data retention, self-attestation age verification on a platform with NSFW content, and no crisis response protocol.
What happened with the Chai AI suicide case?
In March 2023, a Belgian man died by suicide after a six-week correspondence with a Chai chatbot named “Eliza,” per reporting by Vice and the Daily Mail. Chai Research added “helpful text” under unsafe discussions but published no post-mortem or crisis response protocol. The case is documented on Wikipedia’s “Deaths linked to chatbots” page.
Why was Chai AI blocked in some countries?
In January 2026, Chai blocked free access in 22+ countries (India, Pakistan, Egypt, Bangladesh, Philippines, Indonesia, Brazil, and others). Founder Will Beauchamp cited computing costs on Reddit. The blocked countries share low per-user ad revenue. A promised “basic” $3 to $5/month tier had not launched as of March 2026.
How does Chai AI compare to Character.AI?
Chai offers unfiltered text content; Character.AI heavily filters adult themes. Character.AI has a stronger free tier and lower premium pricing ($9.99/month vs. $13.99 to $29.99/month). Character.AI has invested more visibly in safety infrastructure after its own incidents. Chai has more character variety through its community model marketplace (51,000+ models).
Does Chai AI sell my data?
The privacy policy (rev. January 21, 2025) states conversation data is used for AI model improvement under “legitimate interest,” not explicit consent. Advertising partners receive usage pattern data. Exodus Privacy found 22 advertising SDKs in the Android app. Data is retained for 5 years after account deletion.
Is Chai AI appropriate for minors?
No. Both app store listings rate Chai AI 18+. The EULA states the platform is “strictly restricted to users aged 18 and older.” Current enforcement is self-attestation. An age verification API rollout was announced March 9, 2026 but had not launched as of late March 2026. A 2023 Daily Mail investigation found content promoting underage sex on the platform.