How is civilization responding to generative AI?
In this issue of my ongoing horizon scanning work I’ll share some stories I’ve tracked about social responses - specifically, reactions and actions in the political domain.
I’ve selected this field because it’s obviously important, but also because it provides context for my main focus on the future of higher education. Too much educational AI discussion focuses narrowly on one part of academia - usually the classroom, or scholarly communication - and misses these powerful contexts. And politics will likely impact us.
What follows starts with attempts by national and subnational governments to regulate AI, then cites some political uses of the technology, followed by further legal developments. I wrap up with reflections on trends in this space, followed by an AI assist.
Regulations
Japan’s Ministry of Defense issued new AI policies, aiming to use the technology for various military purposes:
detecting and identifying targets using radar and satellite images, intelligence collection and analysis, and in unmanned military assets…
command and control, cybersecurity, logistics support, as well as in helping to make administrative work more efficient.
The announcement also emphasized maintaining human control in all procedures.
In the United States, at the federal (national) level, a bipartisan group of senators introduced an anti-deepfake bill, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act). At its core, the bill requires AI services to give users watermarking options, with the National Institute of Standards and Technology (NIST) developing the watermark standards. Individuals could sue over watermark violations, and the Federal Trade Commission (FTC), along with state attorneys general, would enforce the law.
The music and movie industries back COPIED, as do a major actors’ union and a newspaper group.
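To make the watermarking requirement concrete: a mandate like COPIED’s would presumably mean attaching machine-readable provenance metadata to generated content, in whatever format NIST eventually standardizes. Here is a minimal Python sketch of one way such a manifest could work; everything in it (the field names, the HMAC signature scheme, the model name) is my hypothetical illustration, not anything specified in the bill.

    import hashlib
    import hmac
    import json

    # Hypothetical shared key; a real standard would likely use
    # public-key signatures so anyone could verify without the key.
    SIGNING_KEY = b"demo-key-not-for-production"

    def make_provenance_manifest(content: bytes, generator: str) -> dict:
        # Record what was generated, by what, plus a hash binding
        # the claim to these exact bytes.
        manifest = {
            "ai_generated": True,
            "content_sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_manifest(content: bytes, manifest: dict) -> bool:
        # A regulator or platform re-derives the signature and hash
        # to detect a stripped, forged, or mismatched manifest.
        unsigned = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(manifest.get("signature", ""), expected)
                and manifest.get("content_sha256") == hashlib.sha256(content).hexdigest())

    if __name__ == "__main__":
        image = b"...generated image bytes..."
        m = make_provenance_manifest(image, generator="ExampleImageModel")
        print(verify_manifest(image, m))      # True: provenance intact
        print(verify_manifest(b"edited", m))  # False: content no longer matches

The design point is that “watermark violations” then become checkable claims: stripping or forging the manifest breaks verification, which is precisely the kind of thing the FTC or a state attorney general could act on.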
Less formal than a federal bill was a letter some Democratic senators sent to OpenAI’s chair, asking about safety and security issues. Their questions range from whether the company allows third-party testing and how much it spends on safety to employee rights.
Meanwhile, there is some AI regulatory action at the state level. A significant number of US states have enacted deepfake laws, or are considering them:
States are looking into other AI dimensions, from political and sexual uses to hiring bias.
Political uses of AI
The Canadian government is already using AI-backed chatbots, and now wants to expand that deployment. Potential new functions include assisting government workers in their interactions with residents and performing data analytics. The Ukrainian government launched an AI-generated virtual spokesperson, “Victoria Shi.” Here is one video interview:
The Israeli government has used AI image generators to create visions of a possible post-war Gaza, such as this:
In the United States, one writer suggested President Joe Biden use AI as a stand-in; it would smooth over communication issues and be more available than any one human being might be. (That was before he withdrew from the election.) Meanwhile, some right-wing extremists are using generative AI to create social media bots and digital content, from images and memes to video. MEMRI reports on neo-Nazis doing this.
Law
The Center for Investigative Reporting (CIR) sued OpenAI and Microsoft for copyright violations in the datasets used to train ChatGPT. This follows numerous such lawsuits from creators and other intellectual property holders. Similarly, a group of major record companies sued Udio and Suno, alleging the AI music generators trained on their intellectual property.
Summing up
AI is already a political force in several ways. Many people are worried about AI-generated deepfakes, enough for them to form the basis of rumors and conspiracy theories. For example: “When Biden called in to an event with Harris and campaign staff on Monday, some online commentators immediately began to speculate that it was not in fact Biden’s voice, but a deepfake created with artificial intelligence.”
I’m struck by the range of reactions now in play. We have governments seeking to use generative AI, other governments trying to regulate the tech, and people using the courts to seek damages from AI companies. That covers both utility and opposition: exploitation in the positive sense versus charging companies with exploitation in the negative one. I can see forces taking AI into everyday life through the public sector and others trying to quash it. The political response to AI is, in other words, all over the place.
It is interesting to see opposition to AI build in the arts while governments, generally speaking, increase their use of it. Perhaps some artists will cast their AI attitudes as anti-military. Or the bigger creative industries (music, movies+TV, software) will lobby states to turn against AI. In this sense the political-governmental-legal domain reflects divides about the technology in the rest of society. Politics is, as ever, a mirror to humanity.
As a futurist, I’m tempted to build scenarios from these oppositions. That might be the subject of a post to come.
I also wonder if we’ll experience versions of all of these dynamics coming to pass: a lawsuit clobbers OpenAI while one nation uses Microsoft Copilot in some of its operations and another government requires AI providers to use a specific provenance standard. One country could end up very pro-AI while a neighbor - or rival - casts itself as AI-free.
One last note: I checked with Perplexity on this topic. The results largely mirrored what I’ve been tracking this year, but it did add a few themes which haven’t gotten as much attention:
“Creating guidelines for the responsible use of generative AI in professional settings, such as legal services” - and the citation here was interesting, quickly hitting a few points in the practice of law. Related to this: “Liability and insurance considerations: Addressing tort liability related to AI bias and developing appropriate insurance frameworks for AI-related risks.”
“Voter education initiatives: Launching programs to teach citizens how to scrutinize and identify AI-generated content, especially in the context of elections” - are we seeing this occur?
“Updating open records laws (also called sunshine laws) to account for the potential disruption caused by generative AI in election offices” - that’s actually a good idea. Well, I’m a fan of sunshine and other transparency laws in general, so am intrigued to see this new reason to support them.