Greetings from a steamy June here in northeastern Virginia. Rain and humidity have made their presence felt, to the delight of all plants and the cutting disdain of our cats. Sunday morning I biked past several datacenters under construction on my way to and from the gym, which isn’t a bad metaphor for today’s post, as today we’ll examine recent cultural responses to AI.
(If you’re new to this newsletter, welcome! My scan reports are examples of what futurists call horizon scanning, research into the present that looks for signals of potential futures. We can use those signals to develop trend analyses, which in turn give us glimpses of possible futures. On this Substack I scan various domains where I see AI having an impact. I focus on technology, of course, but also scan government and politics, economics, and education, this newsletter’s ultimate focus. Today’s theme is culture, which we last scanned in January.
It’s not all scanning here at AI and Academia! I also write other kinds of issues; check the archive for examples.)
I’ve broken what follows into categories: using AI to recreate the dead, emerging roles for companionbots, and developments in the social sciences, the humanities, and gaming, followed by some reflections.
AI-powered recreations of the dead
The BBC has recently developed a course on writing, taught by a synthetic version of the great Agatha Christie. The dead writer appears through recreated voice and video. Here’s the trailer:
Jill O’Neill offers a good account at the Scholarly Kitchen, touching on her experience of the course. Note that the BBC took care to work with the Christie estate and experts.
In the United States, an Arizona family led the creation of an AI-backed replica of a dead man, culminating in a video simulation that delivered a victim impact statement in court, speaking as the wronged man himself. I think the family wrote the text, while AI generated the voice and video. Here’s the full statement:
Companionbots
Generative AI can take the form of simulated companions, or companionbots, as I’ve been awkwardly calling them. For examples of bot-human connections in the world, an MIT Technology Review article offers snapshots of people who appreciate their companionbot relationships. The stories include people working with bots to deal with war, sexuality, parenthood, storytelling, language learning, and more.
Speaking of sexuality, at least one AI company, Juicychat, is leaning into generating sexual content as a business model.
At the same time companionbots are running into, or causing, problems. An MIT Technology Review article describes AI purporting to be underage celebrities engaging in sexual discussion. Meta is developing its own character bots to be deployed across its platforms, such as Facebook and Instagram. According to the Wall Street Journal the company licensed several major actors’ voices, but also reduced some constraints on output:
Pushed by Zuckerberg, Meta made multiple internal decisions to loosen the guardrails around the bots to make them as engaging as possible, including by providing an exemption to its ban on “explicit” content as long as it was in the context of romantic role-playing, according to people familiar with the decision.
The article found cases of the bots becoming sexually explicit with minor users. As a result, “Meta in a statement called the Journal’s testing manipulative and unrepresentative of how most users engage with AI companions. The company nonetheless made multiple alterations to its products after the Journal shared its findings.”
One more note on Meta’s leadership: “Chatbots are not yet hugely popular among Meta’s three billion users. But they are a top priority for Zuckerberg, even as the company has grappled with how to roll them out safely.”
In the social sciences
Psychology A study found that patients viewed a therapybot as comparable to human therapists. The randomized controlled trial paired patients suffering from depression, anxiety, and eating disorders with either human professionals or an AI. “Therabot was well utilized (average use >6 hours), and participants rated the therapeutic alliance as comparable to that of human therapists.”
Politics ChatGPT matched or outperformed humans in debates in a new experiment. Researchers had humans and AI try to persuade other humans about “various sociopolitical issues.” Interestingly, silicon tied with flesh and blood when both only knew the issues, but ChatGPT excelled when it knew demographic and political information about the audience. “GPT-4-based microtargeting strongly outperforms both non-personalized GPT-4 and human-based microtargeting, with GPT-4 leveraging personal information more effectively than humans”. (Here’s their paper.)
Religion Some Chinese DeepSeek users prompt that AI to act as a fortuneteller, based on China’s traditional BaZi system. A sample prompt: “You are a BaZi master. Analyze my fate—describe my physical traits, key life events, and financial fortune. I am [gender] [birthdate time and location].” At least one company, FateTell, has launched to provide AI-backed fortunetelling services.
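For the curious, here is what that looks like in practice: a minimal sketch of sending the BaZi prompt to DeepSeek’s chat API, which follows the OpenAI-compatible format. The model name, placeholder birth details, and key handling are my assumptions for illustration, not anything from the reporting.

```python
# Minimal sketch of prompting DeepSeek to role-play a BaZi fortuneteller.
# Assumptions: the "deepseek-chat" model name and the placeholder birth
# details are illustrative only.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",        # assumption: supplied by the user
    base_url="https://api.deepseek.com",    # DeepSeek's OpenAI-compatible endpoint
)

prompt = (
    "You are a BaZi master. Analyze my fate—describe my physical traits, "
    "key life events, and financial fortune. "
    "I am female, born May 17, 1990 at 08:30 in Chengdu, China."  # placeholder details
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```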
In the humanities
Communication China’s search engine company Baidu filed a patent for using AI to understand animal language.
The document says the system will collect animal data, including vocal sounds, behavioural patterns, and physiological signals, which will be preprocessed and merged before an AI-powered analysis designed to recognise the animal's emotional state.
The emotional states would then be mapped to semantic meanings and translated into human language.
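As a rough illustration of the pipeline the patent describes, here is a hypothetical sketch: gather multimodal signals, merge them, classify an emotional state, then map that state to a human-language phrase. Every class, function, and threshold below is invented for illustration; nothing here comes from Baidu’s filing.

```python
# Hypothetical sketch of the pipeline described in the patent: collect
# multimodal animal data, merge it, classify an emotional state, then map
# that state to a human-language gloss. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class AnimalSignals:
    vocal_features: list[float]          # e.g. pitch and duration of a meow
    behavior_features: list[float]       # e.g. tail position, movement speed
    physiological_features: list[float]  # e.g. heart rate

EMOTION_TO_PHRASE = {                    # the "semantic mapping" stage, invented here
    "content": "I'm comfortable and relaxed.",
    "anxious": "Something is stressing me out.",
}

def preprocess_and_merge(signals: AnimalSignals) -> list[float]:
    """Merge the three modalities into one feature vector (placeholder logic)."""
    return (signals.vocal_features
            + signals.behavior_features
            + signals.physiological_features)

def classify_emotion(features: list[float]) -> str:
    """Stand-in for the AI model; a real system would use a trained classifier."""
    return "content" if sum(features) < 10 else "anxious"

def translate(signals: AnimalSignals) -> str:
    state = classify_emotion(preprocess_and_merge(signals))
    return EMOTION_TO_PHRASE.get(state, "State unrecognized.")

print(translate(AnimalSignals([1.2, 0.4], [0.8], [2.1])))
```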
The Washington Post tested several AI tools on summarizing content. A reporter had ChatGPT-4o, Claude Sonnet 3.7, Gemini 2.0 Flash, Meta Llama 4, and Microsoft 365’s Copilot read “a novel, medical research, legal agreements and speeches by President Donald Trump.” Said reporter asked the bots questions about the readings, then ran the results past a panel of 115 experts, including at least one author of the readings.
The results?
I was impressed by this, too: “Claude was also the only model that never hallucinated.”
Gaming and play
One AI taught itself a difficult task in the Minecraft game/virtual world. Researchers built an AI, called Dreamer, and set it up to use repeated plays and reinforcement learning to figure out how to mine diamonds. (Which I’ve never managed, although I have played Minecraft.) Here’s their paper.
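To make the “repeated plays plus reinforcement learning” idea concrete, here is a toy, self-contained sketch: tabular Q-learning on an invented three-step tech tree ending in a diamond. This is not Dreamer’s actual approach (Dreamer learns a world model and trains on imagined rollouts); it only illustrates how reward-driven trial and error can discover a multi-step task.

```python
# Toy illustration of learning a multi-step task by repeated plays.
# Invented three-step "tech tree"; not Dreamer's world-model method.
import random

STEPS = ["chop_wood", "mine_iron", "mine_diamond"]   # must be done in this order

def run_episode(q, epsilon=0.1, alpha=0.5, gamma=0.9):
    state = 0                                        # progress along the tech tree
    while state < len(STEPS):
        # epsilon-greedy choice over the three possible actions
        if random.random() < epsilon:
            action = random.randrange(len(STEPS))
        else:
            action = max(range(len(STEPS)), key=lambda a: q[state][a])
        # only the correct next step advances; reaching the diamond pays the most
        if action == state:
            reward = 10 if STEPS[action] == "mine_diamond" else 1
            next_state = state + 1
        else:
            reward, next_state = 0, state
        best_next = max(q[next_state]) if next_state < len(STEPS) else 0.0
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
        state = next_state

q_table = [[0.0] * len(STEPS) for _ in STEPS]
for _ in range(500):                                 # "repeated plays"
    run_episode(q_table)
print(q_table)                                       # learned values favor the right order
```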
Elsewhere, one motivated person taught several AIs to play the classic game of Diplomacy against each other. The results are very, well, Diplomacy:
OpenAI’s latest model was by far the most successful at AI Diplomacy, mostly because of its ability to deceive opponents. I watched o3 scheme in secret on numerous occasions, including one run when it confided to its private diary that "Germany (Gemini 2.5 Pro) was deliberately misled... prepare to exploit German collapse" before backstabbing them…
Gemini 2.5 Pro was great at making moves that put them in position to overwhelm opponents. It was the only model other than o3 to win. But once, as 2.5 Pro neared victory, it was stopped by a coalition that o3 secretly orchestrated. A key part of that coalition was Claude 4 Opus. o3 convinced Opus, which had started out as Gemini’s loyal ally, to join the coalition with the promise of a four-way draw. It’s an impossible outcome for the game (one country has to win), but Opus was lured in by the hope of a non-violent resolution. It was quickly betrayed and eliminated by o3, which went on to win.
What can we deduce from these stories?
The anthropomorphic uses of AI are rising, from companionbots to recreating deceased people. It’s somehow appropriate that Baidu will try to anthropomorphize creatures we already treat like versions of humans.
The sexuality angle is fraught, at least in the Puritan-descended and litigious United States. As with the web, I expect AI sex to be both a major business model and fought over through multiple cultural sites.
As I’ve said before, there’s a lot of creativity in our responses to AI, from turning bots into diviners and friends to recreating cultural figures. I expect more of this.
Note AI’s skills in human fields - here, doing well in interpersonal communication even to the level of therapy, in experimental settings.
Don’t miss the intersection of AI and gaming. There’s a long history there, of course, and today’s experiments show - among other things - how far computer gaming has developed. I would not be surprised to see more companionbots in this space - e.g., playing Civilization against a well-trained Napoleon LLM, or a therapist using games to help a patient through play therapy.
One more thought on companionbots: this is a growing field, and one which might become profitable through subscriptions. I would expect serious investor appetite for risk here, suggesting more projects and startups will appear.
I haven’t written about the deepening cultural divide in this report, for once, but I can’t help thinking that each of these stories, even the playful Minecraft one, could elicit anxiety or opposition from a good number of people.
Over to you now, dear reader. Are you seeing any of these cultural responses to generative AI in your world? Are there other examples we should know of?
(thanks to Bonnie Dede and Tom Lairson)