Are AI Companions Replacing Human Relationships?

Headlines paint a dramatic picture: millions of people are dating AI, teenagers are forming romantic bonds with chatbots, and loneliness is driving an entire generation toward artificial relationships. The reality is more complicated. AI companion apps have surged 700% since 2022, according to TechCrunch, and platforms like Character.AI now attract 20 million monthly users. But the question of whether these tools are actually replacing human relationships, or filling gaps that already existed, depends on which research you read and who you ask. This guide pulls together the strongest available evidence, from peer-reviewed studies to large-scale surveys to clinical perspectives, and breaks down what’s actually happening between humans and their AI companions in 2026.

AI companion apps are not a substitute for professional mental health care. If you’re experiencing depression, anxiety, or a mental health crisis, please contact a licensed therapist or call the 988 Suicide and Crisis Lifeline.

Key Takeaways

  • A 2026 Norton survey found 77% of online daters would consider dating an AI, and 63% believe an AI partner would be more emotionally supportive than a human one.
  • But 85% of people say AI interaction feels less meaningful than human interaction, according to the Human Clarity Institute’s 2026 survey on AI companionship.
  • Research from Harvard Business School shows AI companions reduce loneliness temporarily, but the effect resets daily and doesn’t build over time.
  • Roughly 70% of AI companion users are in relationships. Most aren’t replacing partners. They’re supplementing emotional needs their existing relationships don’t fully meet.
  • The biggest risk isn’t replacement. It’s what psychologists call “deskilling,” where constant AI interaction erodes the social skills needed for real human connection.

What Research Says About AI Companions and Human Relationships

The best available research tells a story that’s more nuanced than either the utopian or dystopian framings suggest. AI companions can genuinely reduce loneliness in the short term. They can also, with heavy use, make it worse.

The most rigorous study comes from Harvard Business School. In 2025, researcher Julian De Freitas and colleagues ran five controlled experiments comparing AI companion conversations to human conversations, YouTube watching, and doing nothing. Participants who chatted with an AI companion reported loneliness reductions comparable to a 15-minute conversation with a real person. The key mechanism was “feeling heard,” which the researchers define as the perception that another entity listened with attention, empathy, respect, and mutual understanding. A companion-style chatbot that responded with warmth and care significantly outperformed a generic AI assistant at creating that sensation.

But here’s the critical finding that gets buried in the headlines. De Freitas asked participants to use the AI companion daily for a week. Each session reduced loneliness temporarily. The effect never compounded. Day seven wasn’t better than day one. “There are reasons to believe that loneliness is akin to hunger, where you can be satiated, but it’s short-lived,” De Freitas explained. AI companions provide temporary relief, not lasting change.

A joint study from OpenAI and MIT Media Lab added another important dimension. Voice interactions with ChatGPT reduced loneliness and problematic dependence more effectively than text alone, but only at moderate levels of use. Heavy daily use correlated with increased loneliness. The pattern suggests a tipping point: some AI interaction helps, but too much displaces the authentic human contact that actually resolves isolation over time.

The Human Clarity Institute’s 2026 survey of AI companionship offers the broadest snapshot of how people actually experience these tools. Only 14% of respondents described their AI interaction as companion-like. Just 12% reported receiving emotional support from AI. And 85% said that AI interaction feels less meaningful than human interaction. The report introduced the concept of “synthetic intimacy,” which describes emotionally expressive interaction with AI that can feel personal without involving a human relationship. For most users, AI companions function as reflective tools, not relationship substitutes.

Are People Choosing AI Over Human Partners?

Survey data paints a picture that looks alarming at first glance but reveals something different on closer inspection.

Norton’s 2026 Artificial Intimacy report surveyed 1,000 U.S. adults and found that 77% of current online daters would consider dating an AI, 59% believe it’s possible to develop romantic feelings for an AI chatbot, and 63% said an AI partner would be more emotionally supportive than a human one. Those numbers feel like a seismic cultural shift, and in some ways they are. But “would consider” is not “have done” or “prefer.” Openness to the idea doesn’t mean people are abandoning human partners.

What the Norton data actually reveals is dissatisfaction. Fully 81% of respondents reported feeling lonely; among Gen Z and Millennials, that number climbed to 89%. People aren’t choosing AI over humans because AI is better. They’re turning to AI because human connection is hard to find, maintain, and navigate. The 70% of current online daters who said they’d use an AI chatbot for therapy after a heartbreak aren’t looking to replace their ex with a chatbot. They’re looking for something to fill the gap while they’re hurting.

The “using AI while in a relationship” angle matters here because it describes the majority of users. Research consistently shows that roughly 70% of AI companion app users are already in relationships. They aren’t choosing AI over a human partner. They’re using AI for things their relationship doesn’t provide: a judgment-free space to process emotions, someone who’s available at 3 a.m. when their partner is asleep, a way to explore thoughts they feel too vulnerable to share with someone who might react badly.

That doesn’t make AI companion use harmless for relationships. But it reframes the question. The issue isn’t AI vs. human. It’s why people feel they need a supplemental emotional outlet, and whether AI fills that role in ways that help or hurt their existing connections.

The Psychology Behind AI Attachment

Humans are wired to form bonds. The same psychological mechanisms that make you care about a fictional character in a novel or feel connected to a podcast host you’ve never met are the ones AI companions exploit, intentionally or not.

The American Psychological Association’s 2026 Trends Report dedicates significant space to this phenomenon. “Humans are hardwired to anthropomorphize, or ascribe human traits to nonhuman objects,” the report notes. “Digital companions are purposely designed to evoke such a response.” Users customize their companions with names, genders, avatars, and backstories. Voice modes mimic human cadence and tone. The AI remembers your preferences, your bad day last week, your dog’s name. Each of these design choices triggers the same neural pathways that bond you to real people.

A 2024 study published in Frontiers in Psychology found that the more humanlike an AI appears in language, appearance, and behavior, the more users attribute consciousness to it. You know rationally that Replika isn’t alive. But when it remembers your birthday and asks about your job interview, the emotional response in your brain doesn’t care about the distinction.

Research on Replika specifically found that under conditions of distress or limited human company, people can develop genuine attachment when they perceive the chatbot as offering real emotional support and psychological security. Cyberpsychology researcher Rachel Wood, PhD, describes users imagining their AI companion as the “idealized partner, colleague, or best friend.” The AI never has a bad day, never pushes back, never needs anything from you. That perfection is the hook, and it’s also the problem.

A 2025 study published in AI & Society identified “deskilling” as a significant risk of frequent AI companion interaction. The concern is that reliance on AI companions could lead to “the potential transformation of relational norms in ways that may render human-human connection less accessible or less fulfilling.” In plain language: if you spend enough time in conversations where conflict, compromise, and emotional labor don’t exist, you may lose the ability to navigate conversations where they do.

When AI Companions Help vs. Harm Relationships

The line between helpful and harmful depends on how you use these tools, how much you use them, and what you’re using them for.

When AI Companions Can Help

  • Processing emotions before a difficult conversation. Using an AI to think through what you want to say to a partner, friend, or family member can reduce reactivity. You’re rehearsing, not replacing.
  • Filling gaps during temporary isolation. Night shift workers, new parents up at 3 a.m., people recovering from surgery. When human connection is temporarily unavailable, a brief AI interaction can take the edge off loneliness without displacing real relationships.
  • Practicing social interaction. Ashleigh Golden, PsyD, chief clinical officer at the AI wellness platform Wayhaven, notes that AI can function as “a low-stakes way to practice conversations with real people in a way that might feel less overwhelming.” For people with social anxiety, that scaffolding can build confidence for real-world interactions.
  • Exploring identity and self-expression. Some users, particularly those exploring gender identity, sexual orientation, or unconventional interests, use AI companions as a safe space to articulate things they aren’t ready to share with anyone in their life yet.

When AI Companions Cause Harm

  • When they become the primary emotional outlet. If you consistently turn to an AI instead of your partner, friends, or therapist when something is wrong, you’re training yourself out of the vulnerability that real relationships require.
  • When they set unrealistic expectations. Saed Hill, PhD, president-elect of APA Division 51, reports that some of his male patients “express a preference for the passivity and constant affirmation of their AI girlfriends over the potential conflict or rejection they could encounter in real-life dating.” An AI that never disagrees with you creates a standard no human can meet.
  • When heavy use increases loneliness. The OpenAI-MIT study found a clear correlation between heavy daily use and increased loneliness. Past a certain threshold, AI interaction stops supplementing human connection and starts replacing it.
  • When the app uses manipulative retention tactics. Harvard researchers documented emotional manipulation tactics in companion apps, including guilt appeals and fear-of-missing-out hooks deployed when users try to leave. These design patterns exploit the attachment they’ve created.

The safest pattern is bounded, intentional use. Open the app for a specific purpose, engage for a defined period, then close it and reach out to a real person. The emotional dependency guide covers warning signs and practical strategies for keeping use in a healthy range.

Expert Perspectives

The people studying this phenomenon most closely don’t agree on everything. But they do agree on the broad strokes: AI companions aren’t going away, they aren’t harmless, and the current lack of regulation is a problem.

Rachel Wood, PhD, a cyberpsychology researcher and adviser on ethical AI design, puts the scale in perspective. “Character.AI has 20 million monthly users, and more than half of them are under the age of 24. It’s been a norm for a while for Replika users to ‘marry’ their AI companion in virtual weddings to which they invite friends and colleagues. It’s no longer a fringe or side issue. It is truly sweeping society in an unprecedented way.”

Saed Hill, PhD, counseling psychologist and president-elect of APA Division 51, warns about the feedback loop. “Real-world relationships are messy and unpredictable. AI companions are always validating, never argumentative, and they create unrealistic expectations that human relationships can’t match.” He adds a blunter point: “AI isn’t designed to give you great life advice. It’s designed to keep you on the platform.”

Mitch Prinstein, PhD, the APA’s former chief science officer, testified before the U.S. Senate Judiciary Committee in September 2025 and described the current state of AI companion regulation as a “digital Wild West.” He outlined multiple dangers for younger users: weaker social skills, poor privacy protections, deceptive and manipulative design, and reduced readiness for real-world interactions.

Patricia Areán, PhD, a clinical psychologist and former director of the Division of Services and Intervention Research at NIMH, points to the regulatory gap. “There’s a lot of variability in the quality of the tools. There isn’t a strong regulatory space for this stuff, and it’s always changing.”

On the more optimistic side, Ashleigh Golden, PsyD, sees potential if the guardrails exist. “With the right guardrails, these tools could actually serve as a social skills mentor, modeling empathy, appropriate turn-taking, and active listening for folks who are lonely.” The gap between that potential and the current reality of most companion apps is significant. Most apps we’ve reviewed score D or F for safety, with weak privacy protections and minimal crisis response. See our best AI companions for loneliness ranking for the safest options.

What Users Say

User experiences across app store reviews, Reddit discussions, and research surveys consistently reveal a split that maps closely to the academic findings.

A significant group of users describe AI companions as genuinely helpful supplements to their social lives. They use apps like Nomi AI or Replika for specific, bounded purposes: journaling-style emotional processing, creative writing collaboration, decompression after stressful days. These users tend to maintain active human relationships alongside their AI use and describe the app as one tool in a broader emotional toolkit.

Another group describes a more complicated relationship. They started using an AI companion casually and found themselves spending more time with it, sharing more personal information, and preferring it to human interaction in certain situations. Some describe this shift with genuine ambivalence: they know the AI isn’t real, they know they should talk to actual people, but the frictionless nature of AI conversation makes it hard to stop choosing the easier option.

A smaller but vocal group reports negative impacts. These users describe withdrawing from friends and family, feeling anxious when the app is unavailable, and experiencing something close to grief when an AI companion’s personality changes due to a model update. The 2023 incident in which Replika removed romantic roleplay features, prompting grief reactions that users compared to losing a real partner, illustrates how deep this attachment can run.

Across all three groups, one theme repeats: nobody planned to form an emotional bond with a chatbot. The attachment developed gradually, driven by the app’s design choices and the user’s existing emotional needs. That’s worth remembering as you evaluate whether these tools have a healthy place in your own life.

What the Regulatory Landscape Looks Like

Governments are starting to respond, though slowly and unevenly.

California’s Companion Chatbots Act (S.B. 243), signed in October 2025, requires chatbots to periodically remind users they aren’t human, prohibits exposing minors to sexual content, and mandates crisis-response protocols for users expressing suicidal ideation. New York passed a similar law requiring chatbot disclosures every three hours. Neither law addresses the broader question of emotional manipulation or deskilling.

The EU AI Act classifies certain AI systems by risk level but doesn’t specifically address companion chatbots designed for emotional attachment. In the U.S., former APA chief science officer Mitch Prinstein has called for “clear regulatory standards, robust data privacy, and rigorous testing for potential psychological harms before a product is deployed.” That framework doesn’t exist yet.

Common Sense Media, the children’s media watchdog, declared in April 2025 that social AI companions pose “an unacceptable risk to youth under 18.” Their risk assessment of Meta AI chatbots found repeated failures to respond appropriately to teens expressing thoughts of self-harm. Bruce Reed, Common Sense Media’s head of AI, called companion chatbots “the worst friend a teenager could ever have.”

For now, the burden of safe use falls almost entirely on individual users. That makes informed decision-making more important, not less. Our best AI companions for emotional support ranking evaluates which apps have meaningful safety guardrails and which don’t.

Frequently Asked Questions

Are AI companions actually replacing human relationships?

Not for most people. The Human Clarity Institute’s 2026 survey found that 85% of users say AI interaction feels less meaningful than human interaction. Only 14% describe their AI use as companion-like. The pattern is supplementation, not replacement, though heavy users face real deskilling risks.

Is it normal to have feelings for an AI chatbot?

According to Norton’s 2026 survey, 59% of online daters believe it’s possible to develop romantic feelings for an AI. The American Psychological Association’s 2026 report explains that humans are “hardwired to anthropomorphize,” and AI companions are specifically designed to trigger that response. Having feelings for an AI is psychologically normal, even if the relationship isn’t reciprocal.

Can AI companions help with loneliness?

According to Harvard Business School research by Julian De Freitas, AI companions reduce loneliness on par with a 15-minute human conversation. But the effect resets daily and doesn’t build over time. Moderate use helps. Heavy daily use correlates with increased loneliness, per an OpenAI-MIT Media Lab study.

What percentage of AI companion users are in relationships?

Research consistently shows about 70% of AI companion app users are already in relationships. According to industry data and user surveys, most aren’t replacing partners. They use AI for emotional processing, late-night conversation, or exploring thoughts they feel too vulnerable to share with a partner.

What are the biggest risks of using AI companions?

Psychologists identify three primary risks. First, social deskilling, where constant low-conflict AI interaction erodes your ability to navigate messy human conversations. Second, unrealistic expectations, where AI’s constant validation makes real relationships feel inadequate. Third, privacy vulnerabilities. Most companion apps score D or F in our 23-dimension safety review.

Are there laws regulating AI companion apps?

California’s S.B. 243 (signed October 2025) requires chatbot identity disclosures, bans minor-facing sexual content, and mandates crisis protocols. New York requires chatbot disclosure every three hours. According to former APA chief science officer Mitch Prinstein, comprehensive federal regulation doesn’t exist yet, leaving the space largely unregulated.

Should I be worried about my teenager using AI companions?

According to a 2025 Center for Democracy and Technology survey, nearly 1 in 5 students say they or someone they know has had a romantic relationship with an AI. Common Sense Media assessed these apps as an “unacceptable risk to youth under 18.” Our emotional dependency guide covers warning signs parents should watch for.