How is culture responding to generative AI?
Today I’ll share some stories which recently crossed my scanner, chosen because they point to interesting trends.
Key note: the cultural divide I’ve been noting and forecasting keeps deepening. I cover that at the end of this email.
Media on demand

Let’s start with some music. Here’s an AI-generated Depeche Mode song. I am of precisely the age and musical background for this to gobsmack me. I spent some minutes Googling, trying to determine if this was a real Depeche Mode song. Instead, Mks (whoever that is) created this song using Udio. AI-generated pop music is now here - not in the future, but at your fingertips as I write this. [EDITED TO ADD] …and now we can’t listen, as someone just removed the video from YouTube. Copyright or trademark claim? There isn’t any information. So listen to this one instead, which is pretty good and serves the same purpose:
AI-generated video is a related but different situation for on-demand, generated media. Video is much more complex, requiring an order of magnitude more computing power and design. I’ve covered some early starts; today we saw a Sora-generated Toys “R” Us commercial. Now we have news of Showrunner, which promises to create short TV episodes on demand. It’s still in alpha, but follows the now-established AI video pattern of offering some early examples. Here’s a 30-second skit of two talking cars. “What We Leave Behind” is a nearly one-minute story of two siblings and their mother, anime style, only slightly animated. More ambitious is this five-minute clip, which includes a South Park-style sketch, prefaced by an introduction in a different style.
Assuming this works to some degree, what’s the use case? Showrunner’s CEO mentions generating sequel content to a beloved show, and that makes sense, as the vast body of fanfiction proves. I wonder how many people would use the app to create shows from scratch.
Anthropomorphism, pro and con

I’m fascinated by how we sometimes engage with generative AI as a character, as a simulacrum of a human being. Applications like Character.ai and Replika don’t get much press, and most of what they do get isn’t good, but the desire to interact with software as if it were a human-ish entity seems undeniable. Today’s example is an article weighing the benefits and drawbacks of generative AI therapy for teenagers. It’s an ambivalent account, switching back and forth between treating such conversations as dangerous or problematic and treating them as harmless or beneficial.
AI versus abuse: Japanese firm SoftBank is experimenting with using AI to reduce the amount of hostile verbiage customers hurl at call center staff. Emotion canceling tries to preserve a caller’s information while reducing the associated friction:
The technology does not change the wording, but the pitch and inflection of the voice is softened.
For instance, a woman’s high-pitched voice is lowered in tone to sound less resonant. A man’s bass tone, which may be frightening, is raised to a higher pitch to sound softer…
[Toshiyuki Nakatani, a SoftBank employee] said he hopes that AI “will become a mental shield that prevents operators from overstraining their nerves.”
These stories describe what looks like internal research. SoftBank could turn emotion canceling into a product or service for others to purchase.
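The core trick the quote describes, leaving the words intact while softening the pitch, can be sketched in a few lines. This is purely illustrative: SoftBank has not published its method, and a production system would use a phase vocoder or similar technique to shift pitch without altering duration. The naive resampling sketch below, assuming only NumPy, changes both:

```python
import numpy as np

def crude_pitch_shift(signal: np.ndarray, semitones: float) -> np.ndarray:
    """Shift pitch by resampling. Naive: pitch changes, but so does
    duration. Real emotion-canceling systems would preserve timing."""
    factor = 2 ** (semitones / 12.0)  # frequency ratio per semitone
    positions = np.arange(0, len(signal) - 1, factor)
    return np.interp(positions, np.arange(len(signal)), signal)

# Example: soften a high-pitched 440 Hz tone by lowering it an octave.
sr = 8000
t = np.arange(sr) / sr                    # one second of audio
tone = np.sin(2 * np.pi * 440 * t)
softened = crude_pitch_shift(tone, -12)   # dominant frequency near 220 Hz
```

Dedicated audio libraries offer ready-made versions of the duration-preserving variant; the point here is only how little machinery the basic idea requires.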
Libraries as generative AI allies

The excellent Dan Cohen has a thoughtful argument for libraries to contribute to AI training. The idea is that libraries can make available well-structured, high-quality texts, the careful ingestion of which would likely improve any AI dataset.
One generation’s AI gender divide?

Gen Z men are significantly more likely to have used generative AI than their female counterparts, at least according to a Slack study.
However, I take such self-reported studies with a big grain of salt. The many negative attributes culture assigns to AI (it’s for cheating, for abuse, etc.) make it unlikely that all survey respondents will accurately describe their experience. Call it an AI instance of the Bradley Effect. Nonetheless, the finding isn’t too surprising, given the general association culture tends to draw between men and technology. Let’s look for more evidence.
AI in health, continued

A Google Gemini spinoff called Personal Health Large Language Model (PH-LLM) apparently outperformed human professionals in giving health advice.
PH-LLM achieved 79% in the sleep exams and 88% in the fitness exam — both of which exceeded average scores from a sample of human experts, including five professional athletic trainers (with an average of 13.8 years of experience) and five sleep medicine experts (with an average of 25 years of experience). The humans achieved an average score of 71% in fitness and 76% in sleep.
I found this noteworthy in terms of culture, rather than technology, for two reasons. First, health care is one of the most fundamental topics we shape and consider through culture. Seeing generative AI do well here can be either terrifying (remember how attached we can be to some medical professionals) or impressive as one way to deploy AI.
Second, this success points to the broader problem of how we acculturate automation when it exceeds human capabilities. We have a long history of seeing our tools as implements firmly under our control. Many cultural habits and stories around technology portray devices as inferior to humans, especially in tropes not explicitly about the tools themselves (think: war stories and weapons, or medical stories and their instruments). To the extent that AI can outperform skilled humans at key tasks, how will we respond? Some science fiction famously points to dread and resistance. Are there other options available?
THE CULTURAL DIVIDE CONTINUES
I’ve been noting and forecasting a cultural split over generative AI for a while, and reality keeps offering examples. Here’s the story of a Swiss film whose creator used ChatGPT to generate the screenplay, and which elicited enough protests for a London venue to cancel its British premiere. At a smaller scale, here’s a fun Tumblr meme calling for us to “NOIRMAX.” It sports an AI-generated image, which, despite getting many likes and reblogs, elicits nothing but opposition in every single comment:
How many reading this had a similar “ew ai” reaction to the two images I created and shared above?
In a more, ah, generative way, we find this account of Stack Overflow users seeking to sabotage their own contributions to the site in protest of its collaboration with OpenAI. Stack Overflow fought back:
Since the announcement, some users have attempted to alter or delete their Stack Overflow posts in protest, arguing that the move steals the labor of those who contributed to the platform without a way to opt out. In retaliation, Stack Overflow staff have reportedly been banning those users while erasing or reverting the protest posts. On Monday, a Stack Overflow user named Ben took to Mastodon to share his experience of getting suspended after posting a protest message:
Stack Overflow announced that they are partnering with OpenAI, so I tried to delete my highest-rated answers.
Stack Overflow does not let you delete questions that have accepted answers and many upvotes because it would remove knowledge from the community.
So instead I changed my highest-rated answers to a protest message.
Within an hour mods had changed the questions back and suspended my account for 7 days.
The divide appeared in the movie industry in the US, too, as this report makes clear.
“There are tons of people who are using AI, but they can’t admit it publicly because you still need artists for a lot of work and they’re going to turn against you,” says David Stripinis, a VFX industry veteran who has worked on Avatar, Man of Steel and Marvel titles. “Right now, it’s a PR problem more than a tech problem.”
“Producers, writers, everyone is using AI, but they are scared to admit it publicly,” agrees David Defendi, a French screenwriter and founder of Genario, a bespoke AI software system designed for film and television writers. “But it’s being used because it is a tool that gives an advantage. If you don’t use it, you’ll be at a disadvantage to those who are using AI.”
The divide over generative AI seems to be deepening, with participants willing to devote some of themselves - their work, their reputation - to opposing or advancing it. The split may appear in a wide range of human experience, albeit with different balances and appearances.
…and that’s all for now. More scanner issues are in the pipeline, along with some AI experiments and reflections.