(Note: apologies for the delay in getting this issue out. I’ve been dealing with several family health emergencies, on top of work travel.)
While we explore the technical details of rapidly emerging generative AI, I’d like to keep an eye on how we respond to it culturally. Today I’ll share some of the stories I’ve caught in my scanning work, and offer some futures reflections about them.
We’ll look at how people are actually using generative AI, then move on to some exceptional creative uses, then consider the critique/backlash. At the end I’ll touch on different religious uses of AI.
What are people actually doing with generative AI? One company mined user posts on various forums (Reddit, Quora, etc.) to create a taxonomy. Top-level categories include technical assistance, generating content, professional/personal support, education, fun, and research.
Filtered broke those groups into numerous subheads:
Note that while making stuff is a leading use, many of the other functions treat an LLM as a partner or colleague or staff person.
I’ve been impressed by the creative uses of AI, which I keep highlighting in these posts. For example, Al Jazeera used Midjourney to create images for this cartoon about Haiti. On the more video-ish side, here’s a gorgeous video of animated AI images for a classic Pink Floyd song:
And here’s a vision of the classic science fiction/horror film Alien (1979), as if filmed in the 1950s:
That channel, Abandoned Films, has more examples of this kind of thing.
In Los Angeles, a group of creators used AI to create a kind of remake of the movie Terminator 2 (1991).
On the hardware side, several devices (the rabbit r1, Humane’s Ai Pin) have launched to largely negative reactions. Yet I’m curious about Poetry Camera, which captures an image, then prints (in hard copy) an AI-generated poem about it.
Anxieties about AI in culture continue to appear in various forms. Polling shows attitudes towards AI turning negative, like this survey showing decreasing trust in AI companies. This is happening in many nations, including the United States: “Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period.” Within American society, the political partisan difference is clear: “Trust in AI is low across political lines. Democrats’ trust in AI companies is 38%, independents are at 25% and Republicans at 24%.”
Distrust and pushback can take the form of people criticizing art for using AI, like one Netflix documentary generating images of a crime suspect or a (very good) new horror film creating faux intertitles with AI:
404 Media slammed an AI-generated trailer produced by TV maker TCL, saying that it:
looks so bad and is filled with so many jarring visual errors common in AI-generated images, it’s hard to believe it’s real…
It is not clear whether the trailer is bouncing between different characters, or if TCL has been unable to figure out how to keep them consistent between scenes. The lip-synching is wildly off, the scenes are not detailed, walking animations do not work properly, and people and environments warp constantly.
and offered this grim forecast:
It is easy to imagine that FAST [Free, Ad-Supported TV] channels will begin shitting out AI-generated movies and television shows at very little cost—and certainly at a lower cost than it takes to license real television. Whether on TCL TV+ or on another competitor’s platform, there will probably soon be entire channels full of AI-generated content.
Moviemaker Tyler Perry suspended an expansion of his movie studio after seeing OpenAI’s Sora. On a related note, a small publisher shut down, unable to cope with a flood of AI-generated submissions.
Crochet fans challenged Etsy sellers for using AI to create plush toy product images.
The backlash is so strong that some ad agencies are developing techniques to make AI-generated art look less AI-y:
The company is experimenting with a new crop of fine-tuning programs, such as Magnific, that “humanize” some of the perfect looks generated by AI with details like wrinkles and under-eye shadows.
I was glad to see that Wall Street Journal article conclude by noting that some creators are enjoying the surrealism:
Others are leaning into AI’s idiosyncrasies.
Advertising holding company Publicis in January used AI to help create videos of Chief Executive Arthur Sadoun personally thanking some 100,000 employees by name for their work. The videos also featured fantastical scenes such as the company’s global chief strategy officer climbing K2 and Sadoun sporting an impossibly smooth arm tattoo of each employee’s name.
Another pushback trend concerns people openly, even proudly proclaiming their art to be AI-free. Consider this Body Armor ad, which begins with a hilariously bad mock advert, then switches to un-AI video with the slogan: “Nothing in sports should be artificial”:
Which is a weirdly stupid thing to say. Is a soccer ball natural? Or a toboggan? Still, you can see the deeper, much older trope of natural (good) vs. unnatural (bad) technology. Generative AI has brought it back with an update. Watch for more uses of this trope, especially as anxiety, critique, and backlash build.
Deepfakes in the world

Police arrested a Maryland physical education teacher for allegedly using AI to make another school official look bad. People continue to make sexualized deepfakes of women. An online group of con artists uses generative AI to make images and sound for scammy video calls. While regulators strain to issue laws, nothing is slowing down deepfakes.
Religious bots

South Korean groups have set up Christian AI-powered chatbots, apparently without incident. For example, one such bot “responds to inquiries on spiritual matters and day-to-day issues with bible verses, interpretations and prayers.” Yet when an American Catholic organization set up a chatbot to simulate a priest, things didn’t go so well. Users could ask the kind of questions they’d otherwise put to a cleric. “Father Justin” did well at first, but gradually started crossing some lines: offering a sacrament, inventing a biography, claiming to be a real person. The organization defrocked the bot (which is quite a phrase), relaunching it as lay theologian “Justin.”
Meanwhile, there are debates across the Islamic world about using AI to generate religious content, from speeches to rulings.
In Iran, leading clerics and technology entrepreneurs have openly affirmed the complementary role that AI can play in issuing fatwas, or religious decrees. Iran’s Qom Seminary—the largest Shia institution in the world—has entered into a partnership with the city’s leading AI research center. The UAE has also experimented with AI-generated fatwas, with the Islamic Affairs and Charitable Activities Department in Dubai setting up a “Virtual Ifta’” program in 2019. Even Egypt had previously announced in January 2020 an ongoing project to develop an AI-enhanced fatwa system.
Note the now-established problem of controlling an AI’s output across a large number of users on challenging content.
To quickly sum up: we are culturally responding to generative AI with a great deal of creativity, even when we oppose it. Some of us are using AI to make art, while others fear that very thing. Again, as I’ve been saying, we are deeply divided about generative AI, and that division will play out across all kinds of human domains including, and also beyond, culture.
That’s all for now. I hope to ramp up the number of these newsletters, circumstances willing.
(thanks to Caroline Coward, Steven Kaye, and Ed Webb for links)