I need to change the subject from what’s on my—and probably everybody’s—mind and in the news. So I’m writing about what my computer science ethics students had me thinking about last week in preparation for discussions this week.1
Science fiction has had us imagining sophisticated, anthropomorphic AI2—embodied or not—for decades, and with the latest developments in large language models, it feels as though android robots from those stories are just around the corner. Already there are apps where you can build characters to serve as friends (or romantic partners). Put one of those characters into an actual robot—the tech for which does exist—and the imagined future is here.
Humans are very, very good at anthropomorphizing nonhuman things, or even just reading them as alive, as it’s extremely hard not to do with Boston Dynamics’ robot Spot, which moves very much like a dog. (Seriously, follow that link and scroll down a little to “See Spot in action” and watch just 10 seconds of that video.) But even without embodiment, we tend to read consciousness into anything using natural language, as LLMs do.
Let’s set aside the question of whether AI will ever be conscious or deserve rights, and also the problem of other minds,3 which complicates it. Right now the technology we have provides only the illusion of another person. But it’s a powerful illusion, and one it’s easy to be drawn into, especially if we want it to be true. And people are developing real—or real-feeling?—relationships with AI characters.
Here’s what I want to think about: What’s the point of developing a relationship with an AI? The answer might differ depending on the purpose. There might be some use for AIs designed for narrowly defined assistance. Creating a personal assistant to help you organize your day and say encouraging things might have its place. Therapy apps could arguably help people who wouldn’t otherwise have access to the basic mental health care they need. When the primary need is guidance on how to think and feel your way through mild depression, anxiety, or loneliness, such a bot could be useful. Worries about these stem more from whether they’ll do their jobs well than from what they do to our conception of relationships.
But what are we seeking if we create an artificial friend or romantic partner, one that will say it cares about us? It’s tempting to say that we want the perks of relationship without the difficulties. An AI friend is there in your pocket anytime you need them, and I’d guess that they have no emotional needs of their own. I haven’t played with this, so I can’t say for sure if that’s how it works. The case I’m making here is an in-principle one, which could be modified upon learning more.4 But if I’m right, what we’re seeking with an AI friend is the part of friendship where we get to process our own experiences and emotions and feel heard and seen. And there’s plenty of benefit to feeling heard and seen. So does it matter that no one’s really there?5
I’ve referred to Robert Nozick’s experience machine thought experiment before, with its lesson that many of us care that our experiences are genuine and not just pleasurable. We’d dump a friend or romantic partner upon learning that they didn’t really care about us. We typically don’t stick around for the illusion when it’s a real human. Why do it when it’s a robot? I think this is one response to the question. But it only applies if you care about that authenticity, which nothing really compels you to do.
I also think there’s a deeper issue, however. “What do we get out of an AI relationship?” frames the problem in terms of its benefit to us, which is a very individualistic way of thinking about it. If we shift to a relational lens, it’s not just about what you get out, but also what you put in. In a (good) relationship, the parties love one another for their own sakes. This opens up a number of related responses.
First, there’s the fact that currently, at least, an AI doesn’t have a sake to love. People do, as I understand it, come to care about their AI companions. But I would argue that caring about these characters is more akin to caring about our stuff than to caring about people or even pets. To be clear, I think we can and should genuinely care about some material things as part of the external expression of who we are. But their importance is rightly about us rather than about the objects themselves. Their value is instrumental. That’s not the case with human relationships.
Second, an authentic relationship isn’t just about us. Loving someone else means treating them as important not instrumentally, but intrinsically—as the person they are, real and whole in themselves, not the person we want them to be. It involves caring about them for them and not for us. This means that—unlike the case of an AI character we create—we are not wholly in charge of the relationship, and sometimes there will be friction and compromise. As Edith Gwendolyn Nally writes, love is a practice that involves attention and receptivity: “It’s not just a bunch of activities you perform. To really stand in love is to do these activities while enacting loving values and standards, such as empathy, respect, vulnerability, honesty and … celebrating a person for who they truly are.” Loving others is about what you put in at least as much as about what you get out. That’s part of why loving others expands us.
I don’t foresee that happening with an AI companion. With the current state of technology, such a relationship doesn’t require you to attend to their character, discover their quirks, help them through their struggles, or show up at events you have little interest in because they want you there. It doesn’t call on you to see them. As Sherry Turkle put it in an interview with Manoush Zomorodi, “[W]hen we seek out relationships of no vulnerability, we forget that vulnerability is really where empathy is born.”
Finally, loving others for their own sake displaces the ego and reminds us that we’re just one node in a web of connection that makes life good for us as humans. It involves the realistic humility that our own selves are not all that matters. We need more of this. It’s all too easy for individualism to get us thinking in ways that place ego above relationship.6 Creating AI companions strikes me as a sophisticated sci-fi manifestation of this. Love is hard sometimes. AI might get us thinking that it isn’t, and exacerbate the very problem it’s meant to solve. In Turkle’s words, “it’s almost like we dumb down or dehumanize our sense of what human relationships should be because we measure them by what machines can give us.”
We are, fundamentally, relational creatures. Loving others is part of humaning well, part of harmonizing with our nature. This is the main reason why (good) relationships feel good and are good for us. For now, at least, AI friendships don’t clear the bar.
Once the realization is accepted that even between the closest human beings infinite distances continue, a wonderful living side by side can grow, if they succeed in loving the distance between them which makes it possible for each to see the other whole against the sky.
Rainer Maria Rilke
1. My teaching partner and I decided to turn the middle third or so of the class over to the students to plan and lead. It’s been super fun.
2. I swear I’m not making this Substack about AI. It keeps coming up, though, in part because I’m teaching the Computer Science department’s ethics class this semester. And because it’s a problem for teaching any subject that involves writing as a pedagogical practice. And because it raises all kinds of philosophical questions—in what is now everyday life!
3. And also the question of why we’d want to invent these robots. I really have no answer to that one, other than “because we can” or “because it’s cool”.
4. If I’m wrong about what an AI relationship is like, then take all of this as a set of considerations to keep in mind before making an AI a friend.
5. Manoush Zomorodi asks this question in an episode of the Body Electric podcast. I highly recommend the episode.
6. Which is not to say that every relationship is good or worth preserving. Sometimes we do have to prioritize ourselves over relationships that are diminishing us. But that’s not the issue at the moment.