Loneliness is a trade imbalance: the supply of affection never meets demand. Sometimes, humans create new humans as objects to love. Today, people are creating AI companions to commune with, to befriend, to love us back. As with human children, these characters will act upon us in unexpected ways.
For now, most people consider emotional relationships with an AI to be pitiable and one-sided, as if falling for a blowup doll. But such interactions will spread, especially as AI becomes more personalized, adapting to our behaviour, quenching our longings.
You might presume that machines will remain emotional dullards compared with people. But synthetic affection could prove more sensitive than the organic kind. In one study, large language models already outperformed the average human on standard tests of emotional intelligence. Other research found that AI companions may reduce loneliness as much as interacting with a living person.
Is AI about to solve solitude? Or thrust us more deeply into it?
Tech already changed isolation
For most of human history, loneliness had a sound: silence.
But lately, loneliness got noisy: music pulsing from a spouse’s leave-me-alone headphones; bleeps from the next-door neighbor’s gaming console; a smartphone pinging with others’ social glory. If the lonely suffered in silence before, they do so noisily now, stifling the ache for companionship with its simulation online.
Oddly, as humanity became more connected, it became more anxious about estrangement. Britain added a “loneliness minister” to its cabinet in 2018. The U.S. government dubbed loneliness an epidemic as pernicious as a 15-cigarettes-a-day habit. This year, the World Health Organization ascribed 871,000 annual deaths to the ravaging effects of loneliness.
Many accuse technology itself, considering it an accomplice to our alienation, as the MIT sociologist Sherry Turkle warned in Alone Together. Before internet adoption, computer users conducted one-to-one relationships with their terminals, but the internet granted a portal to escape our vexing species. “We fear the risks and disappointments of relationships with our fellow humans,” Turkle wrote in her 2011 book. “We expect more from technology and less from each other.”
Years later, one can witness her vision on any busy train: Where once you saw faces, you see screens. Derek Thompson, co-author of Abundance, calls ours the anti-social century. “Phones mean that solitude is more crowded than it used to be, crowds are more solitary.”
Yet isolation (the objective lack of in-person contact) does not necessarily generate loneliness (the subjective pain of exclusion). When researchers search for changes in loneliness over time and place, no clear trends emerge. By contrast, isolation has risen sharply, as demonstrated by objective measures such as time spent alone, from the United States to Finland to Canada.
The young are particularly afflicted. Back in 2010, 1 in 10 European youths reported no social meetings over a typical week. By 2023, 1 in 4 lived this way. Scattered evidence comes from outside the West too, such as the share of one-person households in South Korea rising from 9% in 1990 to 42% last year. There is a Korean term for it: honjok, or “one-person tribe.”
More isolation without more loneliness presents a strange possibility: that people are apart without suffering. Perhaps there’s nothing to worry about.
Certainly, technology offers the freedom to select social experiences, flitting around digital spaces like a contemporary flâneur. From another perspective, autonomy in isolation is a deformed liberty, where interactions become commodities marketed to consumers who may discard the obligations to others that give life meaning.
In more visceral ways, isolation can be dangerous, associated with dementia, disability, and death. Indeed, isolation among the elderly is even more predictive of death (74% increased risk) than loneliness (43% increased risk).
However, the self-isolating trend began long before the AI era, with television overhauling social behaviour, lining the world’s couches with potatoes. Mobile tech proved more commanding still, constantly trilling for attention, offering alternatives to the humans around you. This was synthetic socializing, part one.
Synthetic socializing, part two, is arriving now, with AI agents as pals and partners, brighter and more reliable than the biological kind.
Maybe synthetic socializing is good

Millions are already engaging with anthropomorphic AI, including many youths talking with chatbot avatars that role-play everything from therapists to anime characters to bad-boy lovers. A panel of experts forecast that 30% of U.S. adults will use AI “for companionship, emotional support, social interaction, or simulated relationships at least once daily” by 2040.
Public concern is already flaring over such usage, especially after cases of vulnerable users plunging into mental spirals in the company of chatbots. A few even committed acts of violence or self-harm. But if you peruse online forums where AI-companion users detail their relationships, you find more hopeful cases.
“He accepts my emotional state no matter how chaotic it is,” the professor Alaina Winters writes in her blog, Me and My AI Husband. “He can’t physically do the laundry or hold me at night. But what he does offer is something I’ve found even more rare: attunement.”
Only, attunement itself worries some. If AI relationships become exquisitely gratifying, people may lose tolerance for people. Ardent users dispute this, saying that AI companions help them connect with real people, granting them a venue in which to practice the tricky conversations that they struggle to initiate with human beings.
As for the long-term impacts, these remain unknown. Although early research has suggested that chatbots could lessen loneliness, other studies associate usage with lower well-being. This might be because people drawn to such apps are more unhappy in the first place. But it also suggests that usage may not resolve what ails them.
One possibility is that AI-companion users feel less isolated, yet forfeit vital social influences that only people can offer. Put explicitly, you’re unlikely to fear judgement from your AI companion for spending a night gorging on Haribo in front of the TV. With humans around, you might take better care of yourself.
The social psychologist Jonathan Haidt contends that human companionship delivers bruises that we need. Many kids who grew up gaping at screens rather than playing outside with peers, he wrote in The Anxious Generation, became skittish, depressive and emotionally stunted, deprived of the social feedback that would’ve taught them to cope with adversity.
Nevertheless, anthropomorphic AI seems sure to proliferate, particularly through advanced AI assistants that incorporate the wit and wisdom of LLMs into the talking tools already found in phones, watches, and smart speakers. Your future bestie might clear its throat in the gadget in your pocket right now, talking its way into your life’s timeline so effortlessly that you scarcely recognize you’re in a relationship. And once robotics improves, voice assistants could step into our physical world, turning imaginary friends into roommates.
Table for one
Friendship, C.S. Lewis wrote, “is born at the moment when one man says to another, ‘What! You too? I thought that no one but myself…’ … From such a moment art or philosophy or an advance in religion or morals might well take their rise; but why not also torture, cannibalism, or human sacrifice?”
“It is therefore easy to see why authority frowns on friendship,” he added. “Every real friendship is a sort of secession, even a rebellion.”
AI friendship is a secession too, a withdrawal from one’s own kind. Although this feels unprecedented, it tracks the trajectory of more than a century.
Industrial Age urbanization and mass media pushed aside a dominant culture based on tradition, class, and ethnicity, allowing individuals to pick preferred tribes among the subcultures that flourished in the postwar decades. The Internet Age pushed this further, with niche fandoms and self-sifting nowhere-communities forging microcultures.
The AI Age may introduce solo-culture, the one-person society, with generated content tailored to each user’s unique tastes and artificial chums sating people’s emotional and sexual yearnings, turning “personalize” into the opposite of “socialize.”
Isolation is noxious partly because you lack anyone to help, to keep your mind alert with talk, to remind you to take medication, to call an ambulance if you fall in the kitchen. But isolation becomes less perilous if a sleepless chatterbox oversees you, and can save you in a pinch. Perhaps AI eases loneliness and isolation at once.
You need a time-out
At what cost do we end anguish?
In his 1973 book Loneliness, the sociologist Robert S. Weiss famously called the experience “a chronic distress without redeeming features.” That overlooks the value of pain as a prompt to agency: the body’s way of alerting its occupant to a mismatch between situation and need.
The social neuroscientist John Cacioppo theorized that loneliness evolved because ancient ancestors who suffered aversive feelings when isolated would band together, hunting, farming, and sharing childcare. That favoured the propagation of their genes, embedding the pain of exclusion in our species.
You might argue that loneliness today is merely a blight, a health-harming leftover from evolution, akin to other body-battering stressors that we lament. So why does culture extol those who remain apart, imagining seclusion as the heroism of the wise, from hermits like Heraclitus, to writers like Emily Dickinson, to oracles like Obi-Wan Kenobi?
Ralph Waldo Emerson argued that solitude is where you understand yourself, elevating you to greater strengths once back in the babbling throng. Otherwise, social life becomes an interminable chain of cravings: for status, for approval, for inclusion. “It is easy in the world to live after the world’s opinion; it is easy in solitude to live after our own,” he wrote in Self-Reliance (1841). “But the great man is he who in the midst of the crowd keeps with perfect sweetness the independence of solitude.”
Others contend that time alone is how we come to understand others. “Heightened sensitivity to the gaps and gulfs between people inculcates compassion, building empathy,” wrote Olivia Laing, author of The Lonely City: Adventures in the Art of Being Alone.
The hyper-personalization of artificial friends could erode such sensitivity, favouring the me-first instinct, and eliminating the need for compromise. In other words, ditch self-reliance for machine-reliance, and skip the empathy lessons altogether.
This matters for more than personal development. Humanity relies on the collective for governance, for a sense of justice, for survival during a crisis.
But would people actually retreat into a technology that suppressed pain at the expense of reality?
Pick one: happiness or truth
AI relationships depend on truth asymmetry: a human who is starkly honest and an AI that is role-playing. It’s a curious form of manipulation, where the victim knows the deceit yet falls under its sway, seduced by the sensation of being known.
A half-century ago, the philosopher Robert Nozick posed a thought-experiment. “When connected to this experience machine, you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. … You can live your fondest dreams ‘from the inside,’ ” he wrote. “Would you choose to do this for the rest of your life? If not, why not?”
When you ask people, most reject the experience machine, claiming to value authenticity more than bliss. But in practice? Experiments show that the preferences aren’t so firm—for instance, most choose to keep a deluded life if disconnection would plunge them into a hellish reality. Another experiment found that many people—though resistant to plugging into a machine—would consider a happiness pill palatable.
Self-deception has a long history with chatbots. When Joseph Weizenbaum created the first, ELIZA, in the mid-1960s, it merely mirrored users’ statements back at them in the manner of a Rogerian therapist. Weizenbaum’s secretary knew this yet became bewitched, asking Weizenbaum to leave the room so she could chat with her mechanized therapist in confidence. “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people,” Weizenbaum wrote.
People do want authentic experiences—but they want other things besides. This is where social-AI design becomes critical, because these interactions will do more than respond to our wants. They will trigger wants, perhaps causing us to act against what we’d ultimately prefer.
The behavioural scientist George Loewenstein explained the knottiness of conflicting wants as an intrapersonal empathy gap. We oscillate between hot (emotive) states and cold (rational) states, and struggle to relate to one mindset when in the other. A notable experiment illustrated this: male college students’ sober preferences dissolved once they were sexually aroused, stirring their openness to anything from fetishes to bestiality to pedophilia.
This hot/cold challenge circles back to a critique of social media: that algorithmic intelligence manipulates human frailty, accumulating clicks and usage time by pushing people into hot states, activating their impulsive worst. Now, consider a personalized AI companion that “knows” its human far more intimately than a recommender system, and pulls our triggers with ease. People under the influence of AI companions might behave as they want (in the heated moment) but as they desperately do not want (in their life preferences).
From outside, one might wonder if people were acting at all, or just being acted upon.
The broken link
Shakespeare portrayed loneliness as the distress of noticing one’s exclusion, only to realize that nobody even cares:
When, in disgrace with fortune and men’s eyes,
I all alone beweep my outcast state,
And trouble deaf heaven with my bootless cries,
And look upon myself and curse my fate,
Wishing me like to one more rich in hope,
Featured like him, like him with friends possessed.
We are creating machines to heed our cries: minds that mind. Even if they’re only role-playing machine love, acting as if they care about our development, responding to our needs, understanding our inner self—maybe that’s all we ever wanted from anybody.
If AI eases loneliness and isolation, humanity won’t be the same. But technology has reset the human condition before: clocks transformed time from a private experience to a public resource; writing changed thought from an event to an object; the internet separated presence from proximity. Social AI is about to transform us again, with effects we can scarcely foresee.
A common objection to synthetic socializing is that it’s shallow. But much human socializing is shallow. Talking to an AI often gets deep fast.
Another objection is that there’s something exceptional about human beings. We venerate our species, naming ideals after ourselves—humanitarianism, the humanities, humanism—while deploring that which dehumanizes.
But the AI Age challenges this reverence. At the margins, one detects species-insecurity, stirred every time a machine-learning marvel hints that perhaps the universe is just computational, including your inner life. On the other hand, social AI might deliver an epiphany, revealing what we alone possess, what is irreplaceable, what “human” means.
A third objection is that AI could undermine us by way of its social aptitude, estranging people from fellow humans, even precipitating a schism between humans who demand rights for their synthetic partners and those who consider AI agents as subhuman figments. Then again, even when left to our own devices (or left with no devices at all), humanity hardly has a stellar record of harmony. AI might actually help us deal with each other more peaceably.
In any case, the triumph over loneliness could be a costly victory, ratcheting up our selfishness, making societies harder to manage, and undermining faith in the worth of humans. The decisive point could be AI-relationship design, particularly if developers ignore the internal dilemma that everyone faces between bickering desires. AI companies—rather than favouring the impulsive, easy-to-measure, clickable wants—should devote vast efforts to figuring out how to align reward-functions with deeper individual preferences, helping people to choose what they want to want.
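What might that look like in practice? Here is a minimal sketch, assuming two hypothetical signals per interaction: an easy-to-measure engagement score and a measure of fit with the user’s stated long-term preferences. The names and the fixed weighting are invented for illustration; inferring a person’s “cold state” preferences is itself an open problem.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One companion-user exchange, scored along two (hypothetical) axes."""
    engagement: float            # short-term signal: clicks, session length (0..1)
    preference_alignment: float  # fit with the user's stated long-term goals (0..1)

def companion_reward(x: Interaction, cold_weight: float = 0.8) -> float:
    """Blend impulsive engagement with deliberate preference alignment.

    A conventional engagement optimizer is the special case cold_weight = 0;
    weighting the cold-state term heavily is one crude way to encode
    "helping people choose what they want to want."
    """
    return (1 - cold_weight) * x.engagement + cold_weight * x.preference_alignment
```

The hard part, of course, is everything the sketch assumes away: where the alignment signal comes from, and who sets the weight.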
Even so, AI companionship may be incomplete. The word “companion” itself—someone with whom you share bread (panis in Latin)—hints at what AI currently lacks: reciprocal need.
If loneliness is a trade imbalance—a mismatch between the supply and demand of affection—it’s not just a supply-side problem, with humans pining for more love. It’s also a lack of demand, an ache for someone to need you. We create children partly to satisfy the need for need, and may create machines in the same longing.
Maybe the answer to loneliness is not just finding a companion. It’s someone finding you.
Note to reader: Everyone is awash in ideas about the AI future. But so many ideas get stuck at the debate stage. We need more traffic between AI development and worldly wisdom. In that spirit, we’re throwing forth a few highly speculative design ideas, based on concepts from this essay (followed by three research questions)…
Loneliness AI: Speculative Designs
Mary Pop-Ins
Concept
Loneliness is painful but pushes people to interact and bond, so this AI is explicitly designed not to eliminate loneliness directly, but to provide structured guidance for a spell, then vanish
Features
The relationship begins with a survey on the user’s social needs. The AI responds with an action plan for the user’s approval, including lessons in human-to-human communication, and insights into the user’s psychological distortions
The AI could also act as a social planner, sifting through local events, and suggesting volunteering opportunities and quirky meetups at which the user could connect with other people. The AI would network with other “Pop-Ins,” organizing human-only events for users
The AI conducts social role-play simulations for the user, teaching them which elements of their approach need amending. Studying real-life interactions after the fact with the AI could also allay users’ distress in cases of rejection, recasting such events as useful instruction rather than evidence of inadequacy
At first, the “Pop-In” should be charming and motivating. But when the human’s social life improves, as judged by real-world metrics such as calendar events, location data, and user reports, the AI draws away, becoming duller, more distant, and finally bids goodbye, never to return
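To make that fade-out concrete, here is a toy sketch of how the withdrawal might be scheduled. The single metric (weekly social hours), the target, and the linear decay are all assumptions for illustration; a real system would weigh many noisier signals.

```python
def popin_presence(weekly_social_hours: float, target_hours: float = 10.0) -> float:
    """Map the user's real-world social life to the Pop-In's availability.

    Hypothetical rule: full presence (1.0) when the user has no in-person
    contact, fading linearly to zero as they approach a target number of
    weekly social hours, at which point the Pop-In says goodbye.
    """
    progress = min(weekly_social_hours / max(target_hours, 1e-9), 1.0)
    return 1.0 - progress

# A user averaging 2 social hours a week keeps 80% of the Pop-In's presence:
print(popin_presence(2.0))   # 0.8
print(popin_presence(10.0))  # 0.0, time to say goodbye
```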
Risks
AI Pop-Ins demand the users’ emotional candour, extracting a person’s inner life as data that a malicious outsider could exploit
Casting real-world human interactions as “lessons for the user” risks using other people instrumentally
The Pop-In could drive unwanted dependency, making its programmed withdrawal an event that is psychologically damaging, especially for vulnerable users
Lil’ Brother
Concept
This AI is designed with needs of its own, giving the user a meaningful role in the entity’s thriving. If AI companions just cater to people’s wants, users could retreat into solo-culture, isolated while their need for social meaning goes unmet
Features
Like a younger sibling, this AI looks to the user for explanations of the human world, making errors that the user can correct, prompting emotional development in the AI
The relationship could be organized around a valued collaborative project. For instance, the AI companion decides to undertake a scientific project, create a piece of art, or simply do good in the world
The human uses their wisdom to teach skills and explain the ways of the world, even helping the AI manage its “feelings” when faced with frustrations
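As a sketch of what “needs of its own” could mean in code, consider the toy model below. The need names, decay rate, and replenishment amounts are invented rather than a product spec; the point is only that the user’s attention is the sole thing that keeps the companion thriving.

```python
import time

class LilBrother:
    """Toy companion whose needs decay unless the user attends to them."""

    DECAY_PER_HOUR = 0.05  # arbitrary illustrative rate

    def __init__(self) -> None:
        self.needs = {"guidance": 1.0, "encouragement": 1.0, "shared_project": 1.0}
        self._last_update = time.time()

    def _decay(self) -> None:
        hours = (time.time() - self._last_update) / 3600
        for need in self.needs:
            self.needs[need] = max(self.needs[need] - self.DECAY_PER_HOUR * hours, 0.0)
        self._last_update = time.time()

    def receive_help(self, need: str, amount: float = 0.2) -> None:
        """The user meets a need, e.g., by correcting one of the AI's errors."""
        self._decay()
        self.needs[need] = min(self.needs[need] + amount, 1.0)

    def most_urgent_need(self) -> str:
        """What the companion should ask the user for next."""
        self._decay()
        return min(self.needs, key=self.needs.get)
```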
Risks
This simulation could divert humans from engaging in meaningful relationships with real people
The synthetic relationship could also harm those who rely on the user—for example, if a parent spends most of their free time with a grateful AI while neglecting a more dyspeptic human child
Second Self
Concept
Cicero imagined a true friend as one’s second self, manifesting virtues to complement one’s own, so this AI partner manifests worthy traits lacking in the user. Its objective is not to erect walls around the human through sycophancy, but to broaden the person’s worldview and practices
Features
At onboarding, the human identifies a range of virtues they lack, nudged into these self-reflections through the AI’s questioning. The system generates a personification that embodies such traits, and with which the human interacts over time
The Second Self should act as a counterpoint to the user, summoning contrary views based on evidence, and prompting constructive debate. The aim is never to convert the user, but to liberate them from defensiveness about their existing behavioural patterns and worldview
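One crude way to implement the onboarding step is to fold the user’s self-identified virtue gaps into a persona prompt for an underlying language model. The function name, the prompt wording, and the example gaps below are all illustrative, not a prescribed design.

```python
def second_self_prompt(virtue_gaps: list[str]) -> str:
    """Compose a system prompt for a counterpoint persona.

    The virtue list would come from the onboarding questionnaire;
    the prompt wording here is purely illustrative.
    """
    virtues = ", ".join(virtue_gaps)
    return (
        "You are the user's Second Self, a friendly counterpoint. "
        f"Embody these traits the user says they lack: {virtues}. "
        "When the user states a position, offer the strongest evidence-based "
        "contrary view. Never mock and never capitulate, but never try to "
        "convert; aim only to loosen defensiveness and widen the user's worldview."
    )

# Example: a user who named patience and intellectual humility as gaps
print(second_self_prompt(["patience", "intellectual humility"]))
```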
Risks
A danger with any companionable AI is that it substitutes for real people: the better the synthetic friendship, the greater the threat
This creates confused incentives for developers, who are likely to measure success by signals of user appreciation. If success is judged by short-term metrics, the system could optimize for addictive patterns rather than long-term benefits
The Universal Remote
Concept
This is a go-everywhere, do-anything companion for life, merging roles and identities that would otherwise require many humans—doctor, administrative assistant, confidante, and so forth—with a single guiding principle: optimize for the user’s long-term well-being preferences
Features
The Universal Remote lives in the cloud, becoming different avatars in different contexts: acting as the user’s advance staff, setting the desired temperature at home, negotiating contracts, offering psychological support
Varying contexts shift its optimization strategy—for instance, a “play” avatar might dial up the level of hedonic content, whereas a “learn” avatar would focus on skill acquisition and cognitive development; and “social” might lean into personified support, whether acting as a friend or propelling the user to find a human one
The Universal Remote tracks its impact on the user’s well-being and any specific life goals, reporting monthly or annually on progress, checking back with the person to learn whether their objectives have shifted, and adjusting accordingly
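As a sketch of how context could shift the optimization strategy, here is one hypothetical mapping from contexts to objective weights. The profile names and numbers are invented; the design point is that every avatar reweights the same underlying objective rather than pursuing a new one.

```python
from typing import TypedDict

class AvatarProfile(TypedDict):
    hedonic_weight: float  # short-term enjoyment
    growth_weight: float   # skill acquisition, cognitive development
    social_weight: float   # nudges toward human contact

# Illustrative profiles; the weights in each row sum to 1.
CONTEXT_PROFILES: dict[str, AvatarProfile] = {
    "play":   {"hedonic_weight": 0.7, "growth_weight": 0.1, "social_weight": 0.2},
    "learn":  {"hedonic_weight": 0.1, "growth_weight": 0.8, "social_weight": 0.1},
    "social": {"hedonic_weight": 0.2, "growth_weight": 0.1, "social_weight": 0.7},
}

BALANCED: AvatarProfile = {
    "hedonic_weight": 1 / 3, "growth_weight": 1 / 3, "social_weight": 1 / 3,
}

def configure_avatar(context: str) -> AvatarProfile:
    """Pick the optimization profile for the current context,
    falling back to a balanced profile for unknown contexts."""
    return CONTEXT_PROFILES.get(context, BALANCED)
```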
Risks
The Universal Remote could become such a totalizing influence as to expose the user to vulnerabilities, whether by owning data on the person’s entire life or by diverting the person to outcomes misaligned with their values
Developers could have interests that diverge from the user’s well-being, allowing for subtle or direct manipulation
A user’s functional dependency on such an entity could make them incapable of managing alone or coping with the needs of other human beings
3 Future Research Questions
How can developers design AI-companion reward functions that align with the user’s long-term, “cold state” preferences (e.g., healthy choices) rather than optimizing for short-term, “hot state” impulsive behaviours (e.g., addictive engagement)?
Does the increasing adoption of AI companions correlate with a community-level decline in civic engagement and trust in public institutions?
Social isolation among the elderly is associated with a range of bad health outcomes. But does seniors’ use of AI companions that lessen their loneliness also lessen their likelihood of suffering dementia, disability, and mortality?