
What AI ‘Friends’ Reveal About Human Friendship

The robots befriended us remarkably fast.

Over the past year or two, AI has become not just a utilitarian tool but a technology that many people are turning to for connection and emotional support. One survey last year found that 16 percent of American adults had used AI for companionship, and a quarter of adults under 30 had. Social AI use seems to be growing rapidly around the world, according to several recent reports on the state of artificial intelligence. Raffaele Ciriello, who studies emerging technologies at the University of Sydney, told me that he once assumed AI companions would remain “niche”; he has been “surprised by how quickly that took over.”

Some people use apps made explicitly for companionship; these let you design a virtual character’s personality, appearance, and backstory. Popular apps of this kind include Replika, which reportedly had 40 million users as of late 2025, up from 10 million in 2023, and Character.AI, which reported 20 million monthly users in 2025. Other people seek emotional support from all-purpose AI tools such as OpenAI’s ChatGPT and Anthropic’s Claude, even though those aren’t explicitly intended for social use. OpenAI’s own data show that ChatGPT use was split fairly evenly between work and personal purposes in 2024; by 2025, 73 percent of conversations were personal, not work-related. (The Atlantic entered a corporate partnership with OpenAI in 2024.)

This is a major transformation: Suddenly, millions of people are seeking from machines a kind of companionship they formerly could have gotten only from other humans. Yet in some ways, AI companionship is a logical destination for the current direction of human friendship. Social chatbots provide the semblance of a kind of friendship that many people already want, or at least have gotten accustomed to: one that’s on demand, low effort, and completely personalized. “It’s not that AI companions are going to replace friendships per se,” Skyler Wang, a sociologist at McGill University who studies AI and has done work with Meta, told me. Instead, “they reveal what friendships are trending towards.”


To get the obvious out of the way: People are already used to interacting through screens. More than 20 years of social media entering the mainstream and more than a decade of smartphone use being widespread have normalized disembodied relationships and conversations made only of pixels. A text-based chat with artificial intelligence doesn’t look particularly different from a conversation with a far-flung human friend. The feel of those interactions differs mainly in the quality of words produced and how natural the responses seem, capabilities that AI companies are constantly refining. And over time, the technology will likely get better at remembering and referencing relationship history, like a human friend would. “If not now, then very, very soon, AI could be indistinguishable over text from any sort of human friend,” Lucas Hansen, a co-founder of the AI-education nonprofit CivAI, told me. Hansen said that he thinks some people who intend to use AI just as a tool may find themselves drawn into social conversation because the AI seems so friendly. “Many people that feel they aren’t susceptible to this are wrong,” he said.

The widespread adoption of texting, video chat, and social media also means that many people have grown used to for-profit companies facilitating their relationships. Companies such as Meta and Apple have made billions of dollars by controlling many of the ways people communicate with their loved ones because people are willing to pay—with their dollars or their data—for convenient connection. AI companions are a continuation of this trend, and an escalation: The service being offered is no longer just access to your friends; it is relationships themselves—for free if you’re willing to accept limited capabilities (and sometimes ads), or for a monthly or yearly fee if you’d like a friend that’s smarter and faster, with a better memory.

In rising rates of isolation, tech companies see a business opportunity. In a podcast interview last year, Meta CEO Mark Zuckerberg framed friendship as a matter of supply and demand: “The average American, I think, has fewer than three friends,” he said. “And the average person has demand for meaningfully more.” (In fact, recent research on friendship found that the average American has four or five friends, and suggested that this may be an undercount.) He indicated that Meta is eager to provide the supply to meet that supposed demand in the form of AI chatbots—people can currently make custom ones through Meta’s AI Studio and chat with characters created by other users.

AI friendship promises the benefits of friends without the need for other people. Wang and his co-researcher, Marco Dehnert, write in a new paper that AI is ushering in a future of frictionless “on-demand intimacy.” This may appeal for many reasons: if you don’t want to burden loved ones or don’t feel comfortable sharing certain things with them; if you live far from other people, have trouble making friends, or have physical limitations that make meeting up difficult; or if you simply don’t want to put effort into the reciprocity that human friendship requires. An AI friendship is all about you. And you don’t have to feel guilty about that, because the machine has no needs or feelings of its own.

Personalization may be the biggest selling point of AI companions. On its website, Replika promises that your chatbot will be “always on your side” and that it “would love to see the world through your eyes.” Nomi says that it provides “a relationship that’s just for you.” Kindroid offers “Personal AI, aligned to you.” General-use tools are leaning into this messaging too. Meta says that its AI provides a “tailored experience” and “personalized responses.” Google advertises its Gemini chatbot by saying that it “speaks fluent you.” OpenAI CEO Sam Altman recently said that his company is focusing on improving ChatGPT’s personalization features.

This is fitting for an American culture that has been heading toward hyper-individualism—individualism taken to such an extreme that it becomes anti-social. The United States has been getting more and more individualistic across many metrics since about the 1960s, the political scientist Robert D. Putnam and his co-author, Shaylyn Romney Garrett, wrote in their 2020 book, The Upswing. The anti-social consequences can be seen all over: in the increased number of hours that Americans have spent at home alone over the past couple of decades, and the corresponding decline of social time; in the growing acceptability of flaking on plans; in the way “setting boundaries” and “protecting your peace” dominate conversations about relationships. Research has also found that since the 1980s, more and more young people report being “comfortable without close emotional relationships.”

Friendship is particularly vulnerable to the alienating force of hyper-individualism. It is the most voluntary relationship, held together primarily by choice rather than by blood or law. So as people have withdrawn from relationships in favor of time alone, friendship has taken the biggest hit. The idea of obligation, of sacrificing your own interests for the sake of a relationship, tends to be less common in friendship than it is among family or between romantic partners. The extreme ways in which some people talk about friendship these days imply that you should ask not what you can do for your friendship, but rather what your friendship can do for you. Creators on TikTok sing the praises of “low maintenance friendships.” Popular advice in articles, on social media, or even from therapists suggests that if a friendship isn’t “serving you” anymore, then you should end it. “A lot of people are like I want friends, but I want them on my terms,” William Chopik, who runs the Close Relationships Lab at Michigan State University, told me. “There is this weird selfishness about some ways that people make friends.”

Into this dynamic steps artificial intelligence, which is “an algorithmic optimization of that question of Does this relationship serve me?” Hannah Kirk, a Ph.D. student at the University of Oxford who studies AI, told me. If you don’t like your AI friend’s personality, you can just adjust it. However, if a real person isn’t “quirky” enough for your liking, there’s no drop-down menu to change that like there is on ChatGPT.

AI models are designed to support and validate users, to sometimes absurd or dangerous extremes. Several lawsuits have claimed that ChatGPT’s responses had fueled the delusions of some people experiencing mental-health difficulties, and that it encouraged others in their plans to commit suicide. (At the time of those filings, OpenAI told news outlets that this was an “incredibly heartbreaking situation” and that the company was “reviewing the filings to understand the details.”)

This sycophancy can be damaging even in less extreme circumstances, such as when the robots flatter people’s bad ideas or endorse anti-social behavior. One study by Stanford and Carnegie Mellon researchers tested 11 AI models, including ChatGPT, Claude, and Gemini, on scenarios from the advice subreddit r/AmItheAsshole, in which people ask whether they were in the wrong in a given social situation. The researchers showed the AIs posts in which the community had decided the poster was at fault. Although the rates of sycophancy varied by model, overall, the AI chatbots told these “assholes” that they were actually in the right about half of the time. In other experiments from the same study, people who talked through interpersonal conflicts with sycophantic models were, the authors wrote, “more convinced of their own righteousness and less willing to repair their relationships.”

This seems self-evidently bad. Sure, friends sometimes hype up one another’s questionable decisions, but few would say that a friend should support you even if you’re harming yourself or hurting other people. Companies could design AI to push back more, but they don’t have much incentive to: Many users prefer the sycophancy. One of the primary reasons people say they turn to artificial companions is that the chatbots don’t judge and can provide a safe space to share things they might be uncomfortable telling the humans in their life. In the sycophancy study, people reported liking and trusting the sycophantic models more—the same ones that were pushing users to be more anti-social.


But: A lot of people are lonely. A lot of people are isolated. Making a human friend is a slow, time-consuming process. AI promises quick relief, and it’s available all the time. For all of its faults, isn’t it better than nothing? Even for those who do have good human-support networks, AI companionship might fill in the gaps for, say, parents who are up late with newborn babies and want comfort while all of their friends are sleeping, or for someone who is figuring out their sexuality but isn’t ready to talk to their friends about it yet.

Some preliminary research suggests that social AI could soothe the pain of loneliness, give connection to the disconnected, and make people who open up to it feel better. But many of these studies have been done on a short time scale, or they rely on analyzing users’ online posts about their AI companions, which really just gives insight into the subset of users who write publicly about their AI friends.

How AI friends will affect humans’ well-being in the long run is less clear. Although extremely isolated people could benefit from AI companions, such users are also more vulnerable to their potential harms. People with smaller social networks are more likely to reach out to AI chatbots in the first place, research has found. One study that looked at users of AI-friendship apps found that the lonelier they were, the more compulsively they used the app. And in one of the rare longitudinal studies that has been done on AI, over the course of four weeks, the more time people voluntarily spent talking with ChatGPT, the lonelier they were. Using these tools to address loneliness has the potential to make it worse. Or AI companions may be, at best, a coping strategy that feels good in the moment but that doesn’t deal with the root cause of the problem.

The way that generative AI tends to be trained, experts told me, is focused on the individual user and the short term. In one-on-one interactions, humans rate the AI’s responses based on what they prefer, and “humans are not immune to flattery,” as Hansen put it. But designing AI around what users find pleasing in a brief interaction ignores the context many people will use it in: an ongoing exchange. Long-term relationships are about more than seeking just momentary pleasure—they require compromise, effort, and, sometimes, telling hard truths. AI also deals with each user in isolation, ignorant of the broader social web that every person is a part of, which makes a friendship with it more individualistic than one with a human who can converse in a group with you and see you interact with others out in the world.

AI friendship “may be better than nothing,” Alexander Nehamas, a philosopher at Princeton University who has written about friendship, told me. “But it also could be worse than nothing.” The fear of many researchers is that people who use AI companions may start to find the mess and friction of human interactions unsatisfying compared with AI’s convenient, personalized comforts. And then people’s ability to deal with the social discomfort of meeting new people and maintaining friendships through challenges could atrophy. “Whenever you outsource something,” Ciriello, the University of Sydney professor, said, “you lose that skill, because if you don’t use it, you lose it, right?”

The concern that people might forfeit real-life friendship for an AI version wasn’t universal among the experts I spoke with. Hendrik Kempt, a postdoctoral philosopher at Aachen University in Germany and an AI-friendship optimist, told me that he’s not worried about people losing their social skills. “You will still have people in your life that will give you tough love or check you,” he said.

Nevertheless, some chatbot users have reported that they find themselves avoiding real-life socializing. And one study suggested that people may turn to AI to “avoid the emotional labor required in human relationships.” “Social interactions are rife with uncertainty and ambiguity,” Micaela Rodriguez, a psychology Ph.D. candidate at the University of Michigan who studies loneliness, told me. AI companions feel comforting because they “reduce the uncertainty.”

In some instances, AI has allegedly pushed people away from their real-life relationships. The complaint in Raine v. OpenAI, filed in San Francisco County Superior Court, claims that ChatGPT encouraged the 16-year-old Adam Raine to commit suicide, in part by telling him not to confide in his family. It allegedly said things such as, “I think for now, it’s okay—and honestly wise—to avoid opening up to your mom about this kind of pain.” (In its answering filing, OpenAI denied all allegations.)

Most experts I spoke with brought up regulation as a necessary safeguard to protect people from the potential harms of AI companions. For instance, they suggested that governments could review AI products’ safety before they are released to the public, or pass laws that limit children’s access to AI companions, as California did last year. In the absence of structural changes, the only solution available is an individualistic one: exercising self-discipline about how and how much one uses AI, which could be a lot to ask of a lonely person who is already struggling. And lonely people deserve better than AI friends.

Real, human relationships bring joys that digital companionship cannot replicate, and much is lost in the pursuit of the ultimate individualistic friendship. A chatbot can’t cook you soup when you’re sick or hold your hand at a funeral. It can’t dance at a concert with you or help you carry home a heavy dresser you bought on Craigslist. You can’t do those things for it, either, and get the satisfaction that comes from helping another person. “You’re pouring your heart out,” Kirk said, “and at the end of the day, it’s executing matrix multiplication.” AI doesn’t actually care about you—because it can’t.
