How is AI impacting politics at this time? How are we responding through governments, policies, and regulations? Who is seeking to influence political AI?
Today I’d like to share results from my ongoing environmental scan. (In this series I examine the present and recent past for evidence of AI developments likely to shape the future. In each run I focus on a particular domain of human knowledge and experience; previous reports have scanned economics (for example), culture (for example), and, of course, higher education (example).)
1 Actions in the AI and government space
There’s a lot happening here, starting with the Paris AI Summit taking place as I write this. French president Macron has made the case for humanity to look to France, er, Europe for AI development. Interestingly, Macron argued that European electricity for AI was more sustainable than America’s, despite also striking a “30 to 50 billion euros” deal with the United Arab Emirates (UAE) to build AI infrastructure in France. Additionally, Macron aims to raise even more funds for a European competitor to America’s proposed Stargate (see this post). An early draft of the summit statement has apparently leaked and met with little support.
Other presenters made different arguments. Indian prime minister Modi envisioned AI as an economic boon, especially for the global south. European Commission president von der Leyen described raising a Stargate-scale 200 billion euros for the continent’s AI work. American vice president Vance called for less regulation and for an end to content moderation, which he cast as censorship. Beyond the individual presentations, the United States and Britain each declined to sign the summit statement. It seems that most speakers viewed AI as an economic motor.
Elsewhere, businesses are pitching AI to governments. OpenAI announced ChatGPT Gov, “a new tailored version of ChatGPT designed to provide U.S. government agencies with an additional way to access OpenAI’s frontier models.” It looks like this service, or set of services, would let federal offices run enterprise versions of ChatGPT on Microsoft’s cloud. There are questions about ChatGPT Gov’s security and implementation. FedScoop observes that “OpenAI appears to be the U.S. generative AI company farthest along in pursuing federal agency use cases, though Anthropic is also looking to work with the government.”
Various governments have taken actions around AI. Canada has launched the Canadian AI Safety Institute (CAISI), a government research and project development organization. That page emphasizes understanding AI problems, including “risks posed by synthetic content, including impersonation and fraud, as well as risks posed by the development or deployment of systems that may be dangerous or hinder human oversight.” CAISI is one of several Canadian AI efforts. Meanwhile, the Chinese state opened an anti-monopoly investigation into Nvidia, the chipmaker supplying much of the LLM AI industry.
Elsewhere in the government space, we saw a great deal of anxiety last year about AI’s impact on elections. I shared this concern, but it looks like AI’s effects were minimal, at least according to two Harvard researchers. The uses that did appear included machine translation; bots (some with avatars) that reached out to, and answered questions from, voters; and content generation (for example). Schneier and Sanders found that deepfakes barely registered:
a stream of AI-faked celebrity endorsements or viral deepfake images and videos misrepresenting candidates’ actions and seemingly designed to prey on their political weaknesses… didn’t seem to have much effect.
One interesting governance or regulatory failure:
Despite market leader OpenAI’s emphasis on banning political uses and its use of AI to automatically reject a quarter-million requests to generate images of political candidates, the company’s enforcement has been ineffective and actual use is widespread.
In one creative use, a candidate deployed a chatbot as a stand-in for a human opponent who had refused to debate.
Looking ahead, we have one signal of potential military use. American special operations forces are exploring the creation of human-passing AI agents for social media activity, according to The Intercept.
2 Attempts to regulate
Back to Europe: Garante, Italy’s data regulator, straight up banned China’s DeepSeek over potential privacy problems. (Readers might recall a similar, short-term block of ChatGPT back when it took off.)
Meanwhile, the European Union has launched a two-pronged consultation on its unfolding AI Act.
The first prong concerns how the law defines AI systems (as opposed to “simpler traditional” software). The EU is asking the AI industry, business, academia, and civil society for views on the clarity of key elements of the Act’s definition, as well as for examples of software that should be out of scope.
The second prong concerns banned uses of AI. The Act prohibits a handful of use cases considered “unacceptable risk,” such as China-style social scoring. The bulk of the consultation focuses here: the EU wants detailed feedback on each banned use and seems particularly keen on practical examples.
Similarly, the British government has launched an AI policy consultation, but along different lines:
boosting trust and transparency between sectors, by ensuring AI developers provide right holders with greater clarity about how they use their material.
enhancing right holders’ control over whether or not their works are used to train AI models, and their ability to be paid for its use where they so wish.
ensuring AI developers have access to high-quality material to train leading AI models in the UK and support innovation across the UK AI sector.
Additionally, the consultation addresses other emerging issues, including copyright protection for computer-generated works and the status of digital replicas.
Here in the United States there has been a flurry of regulatory moves. The outgoing Biden administration published policies setting up a three-tiered global system for sales of chips intended for AI use. The first tier covers friendly nations, while the third imposes restrictions on adversaries, notably China. The second tier: “most of the world — will be subject to caps restricting the number of A.I. chips that can be imported, though countries and companies are able to increase that number by entering into special agreements with the U.S. government.” This elicited a good deal of diplomatic protest.
A few days later, the new Trump administration issued orders overturning many Biden orders and policies, including the October 2023 AI executive order.
Meanwhile, Josh Hawley, a Republican senator from Missouri, introduced a bill aimed at criminalizing American use of DeepSeek. His remarks included these: “Every dollar and gig of data that flows into Chinese AI are dollars and data that will ultimately be used against the United States… America cannot afford to empower our greatest adversary at the expense of our own strength.”
Also around this time the United States Copyright Office issued its opinion on copyright and AI. It’s a fascinating report, reflecting a year of consultation and research. The conclusions are weirdly relaxed: wholly AI-generated material does not receive copyright protection, while human creators can use AI to make copyrightable work, and no new laws are needed. There’s also this big opening for legal action to come: “Whether human contributions to AI-generated outputs are sufficient to constitute authorship must be analyzed on a case-by-case basis.”
Naturally I had to have DALL-E do this: [image]
If you’re looking for AI regulations, some of my Georgetown University colleagues have just launched a handy archive of them, called ETO AGORA (AI GOvernance and Regulatory Archive).
That’s all for this run of the scanner. Next up we have scans of AI and economics, culture, education, and more.
(thanks to Ruben Puentedura for links and thoughts)