AI and culture in late summer 2025
Greetings from early September, when summer tries its best to fight off the first hints of autumn, at least here in the Washington, DC area. This means fall classes are about to start for those of us working in higher ed, or have already begun. I’m in the latter group: as I write this, two of my seminars have started and the third - about AI! - begins tomorrow. (I’ll share the syllabus on my blog.)
But today’s post is not about academics. No, now we’re going to look into cultural responses to AI. Let’s fire up the scanner.
(If you’re new to this newsletter, welcome! This is one of my scanner reports, examples of what futurists call horizon scanning: research into the present that looks for signals of potential futures. We can use those signals to develop trend analysis, which in turn helps us create glimpses of possible futures. On this Substack I scan various domains where I see AI having an impact, documenting each instance. I focus on technology, of course (for example), but also scan government and politics, economics, and education, this newsletter’s ultimate focus. We last looked into culture in June.)
(It’s not all scanning here at AI and Academia! I also write other kinds of issues; check the archive for examples.)
There are four categories today. We’ll start off with examples of AI as cultural artifacts, then AI for storytelling. Next we’ll look into the companionbot movement and check on the deep cultural divide over AI. I’ll conclude with some future-oriented reflections.
AI as cultural artifact
A new band, The Velvet Sundown, made quite a splash on Spotify, but turned out to be a kind of hoax or provocation with a great deal of AI-generated content.
Former CNN journalist Jim Acosta interviewed an AI-created video representation of a murdered young man, asking “him” how he would address gun violence.
In the world of food, an AI-powered restaurant is scheduled to open this month in Dubai. The place is named WOOHOO, possibly regrettably, and the name of the AI creating the menu is "Chef Aiman." Human chefs will actually cook the food. In the world of toys, Mattel and OpenAI announced they were working on including AI in products like Barbie.
Elsewhere, when Anthropic retired the Claude 3 Sonnet model, 200 fans held a funeral for the AI in San Francisco. I’m not sure how much of the event was sincere and how much was camp, self-parody, satire, or art project.
AI for storytelling
Some creatives use AI to make or contribute to their work. Author Rie Qudan stated that she wrote her novel Sympathy Tower Tokyo with some AI assistance:
part of it – 5% was the figure given, though she now says that was only an approximation – was written using artificial intelligence. This, she tells me, comprised parts of the novel which are presented as a character’s exchange with ChatGPT. But Qudan also “gained a lot of inspiration” for the novel through “exchanges with AI and from the realisation that it can reflect human thought processes in interesting ways”. Qudan’s use of AI, in other words, seeks not to deceive the reader but to help us to see its effects.
The book won the Akutagawa prize.
Hashem al-Ghaili created a nine-minute speculative fiction video called “The Sentence” using Google’s Veo 3 generator. The story concerns an alternative to capital punishment:
Netflix used generative AI for at least one scene in El Eternauta, its adaptation of a classic graphic novel.
A Christian media organization used generative AI to produce a video of scenes from the Book of Revelation:
An advertising agency used AI to make a manic, nearly berserk ad for Kalshi, a prediction market.
Ad agency and client alike say they’re pleased with the attention the video stirred up. I suspect that if I understood spectator sports better, I’d appreciate this more.
Companionbots
The use of generative AI as companions for humans seems to be holding steady or growing. For evidence, I can point to Reddit, which now has boards on AI relationships, AI companions, people having relationships with Replika bots, and people connecting with Kindroids, whatever those are. Last month Al Jazeera reported on a group of women who had AI boyfriends, then felt gutted when those apps were upgraded.
On a related note, a group of researchers built an AI called Centaur which, they argued, replicated human characteristics fairly well. A Reuters article explores some neurodivergent people using AI to fine-tune their communications for neurotypical folks. In one example,
she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she’ll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend. She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D’hotman says, the chatbot is positive and non-judgmental.
That’s a feeling other neurodivergent people can relate to.
More therapybots have appeared, like Ash.
For a different perspective we can look to an Anthropic survey of user behavior, which examined the affective and emotional connections users form with Claude. On that platform, the researchers found only about 3% of users turned to Claude for emotional relationships, and then mostly for emotional relationship advice:
So perhaps the companionbot function is relatively scarce, at least on Claude. Meanwhile, some users turn to Claude for conversation in other ways:
Perhaps most notably, we find that people turn to Claude for companionship explicitly when facing deeper emotional challenges like existential dread, persistent loneliness, and difficulties forming meaningful connections. We also noticed that in longer conversations, counselling or coaching conversations occasionally morph into companionship—despite that not being the original reason someone reached out.
Interestingly, people’s tone (depending on how they measured it) tended to become more positive over the course of a Claude session.
Meanwhile, another group of researchers studied AI for mental therapy purposes and found the technology not fit for purpose. Their results show chatbots encouraging dangerous mental processes and also disdaining psychological problems. “[W]e conclude that LLMs should not replace therapists.”
The divide over AI
Generative AI remains controversial, with proponents and opponents trying to persuade the world of their causes, often in response to current events. A legal data scholar launched a public database of cases where lawyers presented AI-generated hallucinations in court. Someone else set up an AI Darwin Awards site. Keen-eyed Warren Ellis shared a list of emerging epithets aimed at AI:
Personally, I have only heard the last two used by people not circulating that post.
Signs of this divide appeared across the world of culture. At FanExpo Canada several vendors demonstrated AI art, eliciting protests and a police presence. At another fan convention, DragonCon, authorities removed one vendor for misrepresenting her art as human-generated. Critics charged a computer game company with using AI-generated content in ads for one game, including at least one alleged deepfake.
A Hollywood Reporter article explores a divided film industry, with new companies and established ones alike offering AI services to studios, while others oppose the effort. Disney apparently halted AI use in two movies because of concerns about intellectual property ownership and public backlash.
During the Trump administration’s National Guard deployment to Los Angeles, protestors torched a group of Waymo self-driving cars. Brian Merchant argues that the attackers saw the vehicles as linked to police surveillance and Silicon Valley. A half-dozen masked activists attacked an Oregon coworking space, charging it with supporting AI: “‘Fuck AI’ was tagged in pink paint across a vehicle parked close to the entry of the building… Papers left behind at Kiln after the vandalism tell of a ‘Butlerian Jihad Against AI.’”
(Pedantically, I must disagree with that article’s explanation of the term “Butlerian jihad.” The authors attribute it to Brian Herbert, but clearly the origin is in Herbert’s father’s classic novel Dune. The name is a reference to Samuel Butler’s 1872 novel Erewhon, where a society turns violently against machines.)
A cofounder of Reddit used Midjourney to animate a photo of himself as a child, hugged by his mother, as he had no videos of that experience. Many users critiqued this act as politically and psychologically unsound.
What might we take away from these stories?
It seems that several trends I’ve been tracking persist: anthropomorphizing AI into human companions, a cultural split over AI, and creative uses of the technology. This isn’t news per se, but is useful as further evidence for these trends continuing and perhaps having some influence in the future.
I do wonder how marginal some of these currents are. The Anthropic study shows a very small proportion of users actively engaging with AI as companions. The violent acts against Waymo and that Oregon coworking space seem to have elicited no further actions. As for people using AI to create things, the videos shown above might be edge cases. I haven’t seen good data about just how many folks use LLMs for this much creativity. We might have to fall back on historical patterns about active creativity and political insurgency, expecting only a relative handful of people to take part. And I remain cautious about the Claude finding, wondering how other platforms might demonstrate different use cases.
AI to recreate the dead continues to be one small but fascinating theme with deep psychological and cultural roots.
Over to you now, dear reader. Are you seeing any of these cultural responses to generative AI in your world? Are there other examples we should know of?
(thanks to Doug Belshaw, Tom Haymes, Amanda Lee, and Peter Shea)

Excellent and tactful scan.
+1 for Best Dune Reference ever.
AI just can't compare to human creativity. The hours it takes to get the prompts right are better spent drawing, painting, and actually doing the creative work. Having said that, the Bible one nailed the seven-headed beast in Revelation (and not much else). The attention to detail requires a human eye.
I like AI as a tool, in the same way I like having a color wheel and HTML color codes. I'd prefer a human assistant, but I can't get that for $20/month.