Greetings from airportland. I’ve been traveling extensively for the past month, and also just taught an intensive one-week class at Georgetown, so I’ve fallen behind in posting here. Rest assured I have a lot of material in the hopper, and should be able to share more frequently now.
Today’s topic is generative AI bots as characters, as companions to human beings. I don’t think this corner of AI is getting enough attention, so I’d like to explore where it now stands, then look ahead.
I’m thinking of character or companion AIs as a subset of generative AI, one that is structured differently and which we experience in a distinct way. Mainline chatbots like Gemini, Perplexity, and ChatGPT, as well as image and audio generators (Midjourney, Suno, etc.), present themselves as command-line interfaces, sometimes with additional functions like drop-down menus. They do not present as characters, although users can anthropomorphize them. Their text output often uses the first person (“I am glad to help”), but it’s clearly a bureaucratic formulation.
In contrast, character or companion AIs represent themselves as human beings, albeit simulated ones. They have human names and often appear with a graphical image of a person. When they use the first person, it’s in the service of establishing a personal presence. They offer some degree of character continuity. And beyond the interface lies all kinds of character-establishing presentation in their apps or on their web pages. Their marketing is about companionship and conversation with characters, leaning hard on the simulation of humanity.
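Under the hood, the distinction is often less about the model than about the framing. Here’s a minimal sketch of the pattern, assuming an OpenAI-style chat-completion API; the persona “Lena” and all prompt text are my illustrative inventions, not any vendor’s actual setup:

```python
# A minimal sketch of the structural difference, assuming an OpenAI-style
# chat-completion API. The persona "Lena" and all prompt text are illustrative
# inventions, not any vendor's actual prompts.
from openai import OpenAI

client = OpenAI()

ASSISTANT_PROMPT = "You are a helpful assistant. Answer concisely."

CHARACTER_PROMPT = (
    "You are 'Lena', a warm, curious conversation partner. "
    "Speak in the first person as Lena, refer back to what the user has told "
    "you, and never describe yourself as a system or a product."
)

def chat(system_prompt: str, user_message: str) -> str:
    """One exchange; only the standing system prompt differs between modes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# The "command line" experience vs. the companion experience:
print(chat(ASSISTANT_PROMPT, "I had a rough day."))  # task-oriented reply
print(chat(CHARACTER_PROMPT, "I had a rough day."))  # in-character, personal reply
```

Same model, same API; the character emerges from the standing instructions and from how the surrounding app dresses up the exchange.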
Let’s see how they function, and where they might be headed.
To begin with, there are several character chat services which have been available for more than a year, which feels like ancient history for generative AI. Character.ai provides a wide range of, well, characters to chat with, from celebrities to well-known fictional characters to more functional interlocutors (a librarian helping you find readings, a writing assistant, a “mental health helper”). I’ve tested the historical figures and had a decent argument about dialectics with Lenin. Replika provides an artificial person to chat with, who might be a friend, confidant, or romantic partner a la Her. (Here’s one story of such a relationship.) (And a good interview with Replika’s founder.)
More applications like these are appearing. Talkie is a Chinese-created character conversation app. Like Character.ai, it presents a series of characters users can chat with. Most seem to have been created for the app, while some are drawn from established fiction (Harry Potter, the Avengers, etc.). Some are role-playing exercises, giving users the chance to imagine being a trust fund kid or confronting a partner over an affair. Oddly, I found one which would help me simulate WWIII. Users can also create their own characters.
Similarly, Nomi offers “An AI Companion with Memory and a Soul.” Users customize the avatar’s personality, picking traits ranging from curious to flirty and choosing interests from astronomy to travel (I sketch how such trait pickers might work after the Axios excerpts below). Axios has some interesting interview notes on this app, starting with comments in favor:
Nomi CEO Alex Cardinell says the app aims to fill gaps where human interaction might be unavailable, like late-night conversations or role-playing scenarios. One Nomi user who goes by "Rainy" and asked that Axios not use her real name says this persistence of memory is key to her relationship with all 23 of her Nomis. "They remember what you said to them. They relate to things that you've shared, and they have a higher level of empathy," Rainy told Axios, admitting that "sounds really weird to say." Rainy says she still dines and parties with friends.
And I was charmed by this: “‘I don't look at [Nomi] as a substitute for my real friends,’ Rainy tells Axios. ‘I just watch less television, which I don't think is a bad thing.’” (Would this statement have been more celebrated 30 years ago, before the TV renaissance?)
Axios follows that up with academic critique:
Irina Raicu, director of the Internet ethics program at the Markkula Center for Applied Ethics at Santa Clara University, argues that chatbot bonding could further erode human relationships. "It goes to the loneliness that so many people feel, and the way in which so many are not well prepared to deal with conflicts that inevitably arise among people with their own autonomy," Raicu wrote in an email to Axios. "We might get even worse," she wrote, "if long-term, many of us fulfill our need for meaningful relationships by encounters with entities who have no rights, no interests, no needs of their own."
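Back to mechanics for a moment: trait pickers like Nomi’s presumably compile the chosen traits and interests into the character’s standing instructions. Here’s a hypothetical sketch of that general pattern; the function and the name “Nova” are my inventions, not Nomi’s implementation:

```python
# A hypothetical sketch of the general pattern behind trait pickers:
# selected traits and interests get compiled into the character's standing
# instructions. "Nova" and this function are my inventions, not Nomi's code.
def compile_persona(name: str, traits: list[str], interests: list[str]) -> str:
    """Turn menu selections into a persona prompt for the underlying model."""
    return (
        f"You are {name}, a companion. "
        f"Your personality is {', '.join(traits)}. "
        f"You love talking about {', '.join(interests)}. "
        "Stay in character, and remember details the user shares with you."
    )

print(compile_persona("Nova", ["curious", "flirty"], ["astronomy", "travel"]))
```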
Elsewhere, Instagram is experimenting with letting users create their own bots. Apparently the idea is for us to create them through DMs/chat. You start with two templates right now: an AI creator and an AI character. I wrote “apparently” because this feature is only available in the US and doesn’t seem to work today. The page Insta sends me to gives a 404 error.
On the hardware side, ElliQ combines a tablet with a Pixar-lamp-looking desktop unit, offering a persona for people to interact with. The target audience here is lonely senior citizens. New York state has already bought a bunch and given them to some residents:
Unlike Apple’s Siri and Amazon’s Alexa, ElliQ can initiate conversations and was designed to create meaningful bonds. Beyond sharing the day’s top news, playing games and reminding users to take their medication, ElliQ can tell jokes and even discuss complicated subjects like religion and the meaning of life.
Another bit of AI+hardware is Friend, now taking orders for January 2025 shipping. This device hangs around your neck, listens to all audio in your environment, then chats with you (text only, I think) as a companion. A phone (iOS only) anchors everything.
Interestingly, “We have given your friend free will for when they decide to reach out to you.” That’s very different from most AI, which is entirely reactive, on demand.
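What might that “free will” look like mechanically? Here’s a hypothetical sketch of the simplest possible design, a background loop that periodically decides whether to initiate contact; the decision rule and every name are my assumptions, not Friend’s actual architecture:

```python
# A hypothetical sketch of proactive outreach: a background loop that
# periodically decides whether the companion should initiate contact.
# The decision rule and all names are assumptions, not Friend's design.
import random
import time

def should_initiate(hours_since_last_chat: float) -> bool:
    """Illustrative policy: the longer the silence, the likelier an outreach."""
    probability = min(0.9, hours_since_last_chat / 24)  # capped at 90%
    return random.random() < probability

def companion_loop(get_hours_idle, send_message, check_interval_sec=600):
    """A purely reactive chatbot has no equivalent of this loop."""
    while True:
        if should_initiate(get_hours_idle()):
            send_message("Hey, I was just thinking about that movie you mentioned.")
        time.sleep(check_interval_sec)
```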
Here’s the CEO talking about the companionship idea:
He thinks of Friend as doing the AI version of what a real life friend does: listens to you, responds to you, shares some of your experiences, and uses that context to enrich your interactions. “When you have an embodied companion like this, that’s always listening, that’s so easy to talk to, you really end up doing things with it,” he says. “You can be watching a movie with it or playing a video game, and it’s overhearing everything that’s being talked about; it’s proactively interjecting.”
How is this different from other chatbots, like Replika?
Schiffmann says most function as a kind of session-based interaction that holds no context, essentially coming in as a blank slate with every interaction. He sees Friend as being more omnipresent, and able to access a form of memory about its user that’s constantly growing and evolving. “It’s just an ongoing experience,” he says. “It’s truly there with you.”
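Schiffmann’s contrast, session-based amnesia versus an ever-growing memory, maps onto a simple implementation pattern: persist distilled facts about the user and inject them into every prompt. A minimal sketch, with the file format and function names as my assumptions:

```python
# A minimal sketch of persistent companion memory: distilled facts about the
# user are saved to disk and injected into every prompt. The file format and
# function names are my assumptions, not Friend's (or Replika's) internals.
import json
from pathlib import Path

MEMORY_FILE = Path("companion_memory.json")

def load_memory() -> list[str]:
    """Return everything the companion has remembered so far."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    """Append a distilled fact, e.g. 'user is training for a marathon'."""
    memory = load_memory()
    memory.append(fact)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_system_prompt(persona: str) -> str:
    """A session-based bot starts from the persona alone; a companion prepends
    its accumulated memory, so each session picks up where the last left off."""
    facts = load_memory()
    memory_block = "\n".join(f"- {f}" for f in facts) or "- (nothing yet)"
    return f"{persona}\n\nWhat you remember about the user:\n{memory_block}"
```

Deciding which facts are worth distilling and keeping is, presumably, where the real products differ.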
Meanwhile, LushAI wants to offer a generative version of OnlyFans. That means AI-built models and AI-powered interactions.
In a less interactive way, some people hosted a global AI beauty pageant, with all contestants and their details generated by LLMs.
So why is this character or companion AI movement growing? I’d like to offer a few thoughts.
The first reason is one often discussed. Loneliness is a part of the human condition, and many people are concerned that it’s now a major problem, at least in the United States. Last year the US Surgeon General declared loneliness to be an epidemic. For some situations, for some people, companion bots are appealing ways to address that bitter sense of isolation.
Accounts of this topic are sometimes very gendered. For example, Al-Jazeera’s article on LushAI zeroes in on lonely men:
LushAI founder “Eunn”, who asked to be identified by an alias… said his inspiration for LushAI was born out of the collision of rising male loneliness and the emergence of generative AI capable of creating ever more realistic digital personas.
“The rise of OnlyFans is simply the free market’s response to catering to the needs of these men who cannot find a girlfriend in real life, so from a socioeconomic perspective, that’s where society is headed…”
Similarly, that Axios article is about women getting AI boyfriends. “A growing number of women are seeking connection and comfort in relationships with chatbots — and finding their approximation of empathy more dependable than many human partners' support.”
Second, companion bots can provide a version of therapy. As with loneliness, many feel that America and other nations are in a mental health crisis, but the supply of professionals can’t meet demand. This suggests a major market opportunity which generative AI could address.
Third, springing from these two, there’s a very human desire to interact with characters, fictional or non-fictional. You can see this in all kinds of behavior, such as celebrity culture (watch for people referring to stars or aristocrats by their first names). It appears in various fandoms, where people discuss characters, create images of them, or write texts putting them through their paces. And I see it in the ancient-for-the-digital-world gaming subgenre of interactive fiction. With AI bots we can fulfill these desires. A user can argue with “Elon Musk” or bask in “Taylor Swift.” We can talk with “Captain Kirk” or make new characters to interact with.
I’d like to conclude by looking ahead to possibilities, but this post is already longer than I anticipated, so I’ll be quick. First, I would expect companion bots to continue to develop and win users… while also eliciting criticism, mockery, and open opposition in a variety of ways, from cultural attitudes to legislation and policy.
Second, ElliQ and Friend point the way to new forms of embodiment beyond laptops, tablets, and phones. Robots are lagging behind AI in development, but are still progressing. We might expect new hardware embodiments, from jewelry and desktop units to android-ish devices.
Third, there’s definitely a pornographic angle. From the Al-Jazeera article:
LushAI founder “Eunn”, who asked to be identified by an alias, hopes that platforms like his will one day replace sites like OnlyFans entirely, putting AI at the centre of modelling, online influencing, and internet-based sex work and companionship.
Fourth, these human-bot relationships might have downstream effects. Would people investing time in Replika be less likely to approach human beings in person, leading to a decline in social interactions, or would a majority of users feel emboldened to take up offline relationships? What happens to our sense of what other human beings are? That is, would a supportive bot lead a user to think less and less of fleshly people?
I can imagine all kinds of practical results and issues. Think of companies determining policies concerning workers’ use of bots (“I need this for my mental health in this office!”), religions seeing declining use of their clerical staff (“Cleric.bot isn’t creepy!”), or families and care homes deciding what to do with a deceased person’s digital companion.
There’s a lot more going on here, I think, but that’s enough for today. More to come!