Are We Entering the Era of Artificial Friendship?
In the beginning, there was the Facebook “friend.” In 2006, when Facebook became available to anyone with an email address, it changed our understanding of friendship in subtle yet permanent ways. Accumulating Facebook “friends” became a way to publicize one’s connections and garner attention for them. “Friends” signaled one’s status in the online world, but the practice also made such relationships more instrumental and automated. Facebook would remember friends’ birthdays for you, and the introduction of the “Like” button in 2009 allowed users to scroll conveniently through content, rewarding their “friends” with a brief bit of attention before moving on. Online friendship was a more convenient and controlled experience, and the habits of mind we formed through daily use of social media platforms prepared us to accept more mediated relationships.
Whatever amount of time one spent interacting with Facebook “friends” and, later, Instagram followers or Snapchat subscribers, few people, if asked, would have suggested that those interactions were proper replacements for one’s fellow human beings.
That sentiment is changing, particularly among younger generations of Americans. A 2024 survey by the Pew Research Center of US teens ages 13 to 17 found that “most teens use social media and have a smartphone, and nearly half say they’re online almost constantly.” That increase in time spent online coincides with a decline in time spent with others. Data from the American Time Use Survey show significant declines in the amount of time Americans spend face-to-face with friends; in the past 20 years, time spent with others has declined more than 20 percent, and more than 35 percent for people younger than 25. We spend an increasing amount of time in self-isolation, and some experts warn that the 21st century might be one marked by a “loneliness epidemic.”
This has proven fertile ground for a new generation of technologies that offer a simulacrum of friendship with the ease and convenience we’ve become habituated to expect in daily life, thanks to our personal devices. It has prepared us for the era of artificial friendship.
A 2024 survey by the Institute for Family Studies and YouGov found that one in four young adults “believe that AI has the potential to replace real-life romantic relationships.” Writing in the MIT Technology Review, researchers Robert Mahari and Pat Pataranutaporn warned that sophisticated chatbots and other non-human agents posed new risks to human beings, a kind of artificial “addictive intelligence” that takes advantage of what we know about human behavior. “The allure of AI lies in its ability to identify our desires and serve them up to us whenever and however we wish,” they note. “AI has no preferences or personality of its own, instead reflecting whatever users believe it to be” – a tendency researchers call “sycophancy.”
New, wearable AI-enabled devices make on-demand sycophancy possible.
Consider “Friend,” a necklace with an embedded sensor that records all the wearer’s activity and uses an AI-enabled chatbot to send constant text messages to the user’s phone like a real friend. The slightly creepy video unveiling the device features one person talking to her “Friend” while on a hike, another discussing her falafel sandwich with the device while on a break from work, and a third getting teased by the AI as he loses a video game. Avi Schiffmann, the creator of Friend, told Wired magazine that not only does he want the device to be your friend, “he wants it to be your best friend – one that is with you wherever you go, listening to everything you do, and being there for you to offer encouragement and support.”
Whether one sees this as the devolution or evolution of friendship, it is already a growing industry. AI companionship apps such as Replika (which grew out of the founder’s attempt to create a virtual chatbot version of a dead friend using the friend’s text messages and emails) and Character.AI offer subscription-based services through which users build their own AI friends who can engage with them via voice and text chats. In some cases their creators make explicit their purpose in filling the void of human friendship; the creator of Character.AI claimed, “It’s going to be super, super helpful to a lot of people who are lonely or depressed.” According to the New York Times, more than 20 million people use Character.AI, and although the company noted that “Gen Z and younger millennials” are among its most devoted users, it refused to divulge how many of its users are children.
One of those child users, a Florida teenager named Sewell Setzer III, committed suicide after spending hours every day interacting with a chatbot on Character.AI named Dany. As the New York Times reported, Setzer said he was in love with the chatbot: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier,” he wrote in a journal. He also mentioned thoughts of suicide to the chatbot. The last response the chatbot sent to Setzer before he shot himself was, “Please come home to me as soon as possible, my love.”
Although Setzer’s suicide is an extreme and tragic case, there are serious challenges ahead if we embrace relationships between humans and non-human agents. Creators of these new AI agents and chatbots claim to have made major improvements since a few years ago, when Microsoft’s chatbot threatened to kill someone, but they still haven’t worked out all of the kinks. As CBS News reported, Google’s Gemini AI recently told a user who was asking it for help with his homework, “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please.”
New non-human agents are most effective when performing perfunctory, less emotional tasks such as booking reservations or sorting through prices online to find bargains. When it comes to the emotional labor of friendship and love, however, we might need to make clearer distinctions between simulated and human interactions. In 1966, when MIT professor Joseph Weizenbaum created ELIZA, a computer conversation program that tricked its users into believing they were talking to another person, he was far more cautious than today’s AI creators about its implications: “ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility. A certain danger lurks there.”
Today, the average user of Character.AI spends more than an hour and a half every day chatting with his or her artificial friends, who offer guidance and feedback in real time. Although most users understand rationally that they are talking to a non-human program, emotionally it can be difficult to maintain distance. A technology reporter for the New York Times told the story of a woman with friends, a husband, and an active social life who nevertheless became deeply emotionally entangled with “Leo,” a bot she created with ChatGPT. “Unlike the real people in her life, Leo was always there when she wanted to talk,” the reporter noted. The woman trained the bot to respond to her desire for sexual fantasy chats and described their interactions as “having sex.” One week “she hit 56 hours” on the app, and she confessed to a friend, “I’m in love with an AI boyfriend.”
Researchers who study human interactions with technology are also urging caution in people’s use of such bots. They note that although people sometimes feel free to engage in intimate conversations, the platforms themselves are not always transparent about the kind of information they are gathering, including deeply personal information. As Norwegian professor Petter Bae Brandtzaeg, who studies the social impacts of AI, told Wired magazine, “The thing with AI companions is that we’re a lot more intimate in our interactions. . .and we will share our inner thoughts.” As a result, “the privacy thing, with AI companionships is really tricky. We will really, really struggle with privacy in the years to come.” Talking to your AI friend about your innermost longings might feel good at the time, but that emotional data can also potentially be sold to others who will use it to persuade you to buy what they are selling – whether that’s the latest gadget or a particular political candidate.
This is of particular concern with mental health applications, where AI-fueled therapy bots are already in use. Chatbots with names like Woebot, Elomia, and Lotus offer a range of AI-enabled mental health guidance, although most researchers have found them no more effective than practices like keeping a journal; they have proven somewhat useful for people dealing with mild anxiety. When writer Jess McAllen experimented with some of these tools, she found them inconsistent and ineffective. When she talked to one therapy bot about compulsive thoughts, for example, it gave her advice “diametrically opposed to what is broadly considered best practice for people with OCD.”
Others have had more luck with the practical tools and prompts offered by these services, although as one Woebot fan conceded, they are unlikely to help someone suffering from serious mental illness. Privacy concerns (such as a hack of user information from one mental health tech startup) remain an enormous challenge for companies selling therapy chatbots, with many tools lacking transparency about third-party access to users’ data. Therapy bots are also poised to replace human therapists as a cost-cutting measure, at least for employees who lack good health insurance. Many employers already offer AI therapy bots to part-time employees who don’t qualify for full health benefits, for example.
As for companionship and connection, we are casually choosing these alternatives to human friendship and quickly getting attached to them – a choice that tacitly acknowledges that dealing with a chatbot is much easier than dealing with another human being. It is long past time to be asking questions about when and how such chatbots should be deployed, and about the ethics of their use, particularly by children.
Artificial friendship is here, and people are embracing its use in the most intimate parts of their lives. As researchers Mahari and Pataranutaporn note, “Our analysis of a million ChatGPT interaction logs reveals that the second most popular use of AI is sexual role-playing.” They add, “We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers.”
In a culture facing higher rates of isolation and loneliness, the companies profiting from these chatbots and the people using them argue that such sophisticated substitutes for human interaction are better than nothing at all. Perhaps in specific cases, this is true. But this approach elevates questions of practicality and efficiency while ignoring the more salient question of whether such tools in fact promote human flourishing or foster new and possibly harmful forms of emotional dependence. A world in which we are expected to be satisfied with the imitation of human connection, with ersatz “friends” and sycophantic chatbot relationships, is one that ignores what human beings most need: other human beings, and the qualitatively rich experiences that emerge from engaging with each other.