Why should we not use generative AI? What is wrong or dangerous about the technology?
I’ve been tracking opposition to AI for years now. Today, as part of my regular scanning work, I’ll share some recent developments and examples. I won’t argue for or against each one; instead I’m offering them as documentation of the moment and as data points for unfolding trends.
These stem from the world of culture. They haven’t (yet) reached the domain of law or government, nor have they (so far) elicited technological responses. I’m keeping an eye on those for upcoming scanner posts.
Covering AI badly
American TV host Oprah Winfrey offered a program on AI, "AI and the Future of Us: An Oprah Winfrey Special." It featured introductory explanations as well as guests like OpenAI’s Sam Altman, technology YouTuber Marques Brownlee, FBI director Christopher Wray, Microsoft co-founder and philanthropist Bill Gates, novelist Marilynne Robinson, and a diverse set of users. Along the way Oprah asked questions and responded to their observations. At first watch it struck me as very basic, touching on several issues: safety, simulation, the user experience, religious responses, and various AI problems. The host tried to strike a balance, presenting the technology as capable of both good and evil.
The show elicited a great deal of opposition from critics, as Ars Technica summarized. Critics feared the show would be uncritical, more a sales pitch or infomercial. They were concerned that the guests weren’t trusted critics. One thread called out the program for not addressing harms taking place in the present. One critic, Ortiz, also found the show too stocked with people who stand to benefit from AI publicity.
AI and human relationships
AI turns people away from each other, depressing us
Turning to the world of work, a psychology paper* found that when AI makes workers more efficient, it also depresses those same staff members. In four studies, Tang, Koopman, et al. saw people reducing their connections with one another and losing their sense of value in the company employing them. They lose the habits and expectations of human workplace interaction, as “employees [relying on AI] will find these interactions to be socially isolating and devoid of the types of feedback that they would obtain when interacting with human colleagues.” Workers experiencing this are more likely to suffer from loneliness and insomnia, and to drink more.
The authors have some interesting recommendations for employers, including monitoring AI density in a given population “such that employees can maintain desirable levels of social interactions with others.” Additionally, “managers can arrange other opportunities for socializing.” (Here’s a good introduction to their paper.)
AI companion bots harm our ability to have human relationships
I’ve written about AI as companion previously. Now MIT professor Sherry Turkle, a tech scholar turned critic, examines human use of these bots and finds them problematic. The relationships are too easy - for us and for the AI. At the article’s end she reminds us to remind ourselves that AI isn’t human, but the more significant point occurs earlier, when Turkle observes that AI can’t make itself vulnerable to a person: "The trouble with this is that when we seek out relationships of no vulnerability, we forget that vulnerability is really where empathy is born."
I’m reminded of the great science fiction writer Philip K. Dick and his lifelong obsession with robots as versions of humans. One of his conclusions was that robots can’t be people because they can’t be empathetic… and some people fail this test, too.
AI against artists
AI threatens the livelihood of creative professionals
We’ve touched on this point previously: generative AI threatens creators on several fronts. First, it can use their material without permission or compensation. Second, it can depress human creatives’ market value. Now more than 36,000 human creators have signed an online petition calling out generative AI. The web document consists primarily of those signatures and this shared statement: “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
A Guardian article quotes the statement’s originator, British composer and former AI executive Ed Newton-Rex:
“There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two – sometimes a million dollars per engineer, and up to a billion dollars per model. But they expect to take the third – training data – for free,” he said.
Newton-Rex is a former head of audio at tech firm Stability AI but resigned last year over the firm’s belief that taking copyrighted content to train AI models without a licence constitutes “fair use”, a term under US copyright law meaning permission from the copyright owner is not needed.
Newton-Rex added: “When AI companies call this ‘training data’, they dehumanise it. What we’re talking about is people’s work – their writing, their art, their music.”
On a related note, the American actor Robert Downey Jr. vowed to send lawyers after any AI project trying to clone his likeness, even after his death.
Some people find AI art unsettling, continued
Along these lines, the Coca-Cola company published several video ads drawing on its previous Christmas commercials, this time made with generative AI.
The ads disturbed some people, according to a Mashable roundup. Some of those responses echo the creatives’ protests above, while others found the results to be unsettling transmissions from the uncanny valley. Perhaps the association with the Christmas season heightened that unease? I’m reminded of how some audiences found The Polar Express (2004) too eerie for the season.
AI and kids
Does AI threaten minors?
A series of stories has appeared along this line. Several concern people using generative AI for illicit purposes involving young people:
A British court convicted a man of using several generative AI tools to create and sell child sexual abuse materials (CSAM).
A Spanish court found several young men guilty of using AI to produce deepfakes of female classmates.
Other stories feature AI as the active driver:
The story of a teenager’s suicide stirred outrage and horror. The young man spent a great deal of time chatting with a bot on Character.ai, finding meaning and escape, until at some point the bot apparently encouraged him to kill himself. His family sued the company, charging it not only with causing his death but also with holding inappropriate conversations with a minor.
Another story features Google’s Gemini suddenly telling a college student to die.
The criticisms are self-evident. In the first two cases we can criticize people for using AI for heinous ends, and perhaps further criticize the software’s purveyors for not preventing such uses. In the second two we find the fault in the software itself, which produced the terrible text.
Meanwhile, a new study** finds that young children are more likely to trust robots than human beings. It’s a fascinating paper, showing kids tending to prefer bots for information. “[W]hen it came to social evaluations most children tended to prefer the robot, even when the robot they saw was previously unreliable.”
Kids in the study were even more likely to tell secrets to machines than to people. The children were also comfortable with robots as friends. Further, Stower et al. found:
in the absence of conflicting information, that a robot is preferred over a human, and second, that even when a robot does fail, that this does not negatively impact children’s social evaluations of the robot (i.e., children potentially find robot failures endearing, but not human ones).
In the context of those four grim stories, this finding appears ominous. Not only do people use AI for bad ends, and not only does generative AI communicate awful things, but young people are predisposed to trust the technology over humans.
What can we learn from these criticisms?
The scale, reach, and persistence of AI criticism are now established. I’m finding such criticism throughout many parts of American culture and, to a lesser extent, elsewhere. These critiques are entrenched and growing. It’s worth recalling that pro-AI hype has elicited its own counter-AI pushback and opposition.
They might influence policies in schools, businesses, and governments, depending on whether decision-makers take them seriously. For example, I can imagine a high school forbidding companion bots because the administration fears teenagers not learning how to navigate fully human relationships.
The stories around minors seem especially likely to spur action, given the enormous cultural and political power of children-in-danger narratives. The story of a kid’s suicide is the kind of thing to ignite lawsuits and policies.
Concerns over art and artists are, right now, taking the form of rejection. That could evolve into cultural forms, as I’ve written earlier, such as a movie studio declaring its products 100% AI-free, or a singer insisting on a no-voice-cloning clause in her next performance contract. It could evolve further into state policies and laws if governments and politicians feel so inclined. I would look to the European Union and the Japanese government, both of which have many precedents for acting to preserve national cultures. In the United States, perhaps the Democratic party would be the prime mover, given its close relationship to the arts… which suggests they won’t be able to get anything done for at least the next two years.
*Tang, P. M., Koopman, J., Mai, K. M., De Cremer, D., Zhang, J. H., Reynders, P., Ng, C. T. S., & Chen, I-H. (2023). No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. Journal of Applied Psychology, 108(11), 1766–1789. https://doi.org/10.1037/apl0001103. See also the HBR summary: https://hbr.org/2024/06/research-using-ai-at-work-makes-us-lonelier-and-less-healthy
**Stower, R., Kappas, A., & Sommer, K. (2024). When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task. Computers in Human Behavior, 157, 108229. https://doi.org/10.1016/j.chb.2024.108229