Today I’ll continue my practice of sharing interesting AI developments, this time with a focus on politics. (Last issue was about AI tech; more topics are coming up.)
A high-profile AI-in-politics story took place in the United States, concerning the presidential primaries. In January someone sent an audio message to thousands of New Hampshire voters. In the clip an AI-generated imitation of Joe Biden’s voice asked listeners not to vote in that primary election:
(source) The Biden administration denied making any such recording or call. Now the state’s attorney general has traced the fake to its source: “Life Corp., a Texas telecom marketing company.” Life Corp doesn’t have a website and might be just one guy.
In response, the federal government, through the FCC, banned AI-generated voices in robocalls, drawing on a 1991 law, the Telephone Consumer Protection Act.
That’s an unusual federal action, as Congress looks stymied on AI regulation. Possibly in response to that silence, states and even cities have been passing their own AI laws.
There’s a second type of political LLM usage already in play. In December one candidate had an AI voice bot make calls on her behalf, publicly acknowledged as such; I’m not sure how the new FCC ruling affects that kind of operation. Will regulations (and voters) accept bot content when it’s aboveboard? Supporters of one presidential candidate used OpenAI’s technology to produce a campaign chatbot, until OpenAI shut it down.
The super PAC, called We Deserve Better, had contracted with AI start-up Delphi to build the bot. OpenAI suspended Delphi’s account late Friday in response to a Washington Post story on the super PAC, noting that OpenAI’s rules ban the use of its technology in political campaigns. Delphi took down Dean.Bot after the account suspension.
To be clear, the problem here doesn’t seem to be malicious faking but a candidate openly cloning himself. Is OpenAI only opposed to this when it’s for a political campaign? How might they respond to a more successful campaign doing this?
A third political AI capacity appeared in Davos, where another audio service offered a real-time voice translation of one nation’s president.
This feat involved translating the address of Argentina’s president, Javier Milei, from Spanish into multiple languages, including English, French, Mandarin, and Arabic, as he addressed the gathering.
The tech came from HeyGen. (I’ve started playing with it and will share results.) This is an advance over New York City’s mayor using AI to generate robocalls in multiple languages, given its synchronous nature.
A fourth political use is less flashy: government officials using generative AI in their paperwork. In India, Google is partnering with the Maharashtra state government to provide AI-led services (https://www.business-standard.com/economy/news/google-partners-with-maharashtra-govt-to-provide-ai-led-services-124020801682_1.html), and the American state of Pennsylvania has begun a pilot along these lines. On a related point, OpenAI adjusted its policies to allow some military uses.
All of these examples are unitary, coming from individual governments or single-nation actors. I’m curious about inter-governmental AI moves, which we’ve seen starting to occur. But we’re also seeing some private action on this front. The Aspen Institute launched an AI election initiative designed to bring together multiple players and institutions. The Financial Times broke a story about a Middle Eastern group arranging discussions between American AI companies and Chinese state-backed institutions.
According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the North American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.
Attendees said the talks allowed both sides to discuss the risks from the emerging technology and encourage investments in AI safety research. They added that the ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology.
“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one person present at the talks. “And if they agree, it makes it much easier to bring the others along.”
Note the role of Chinese academics. Were any American academics involved?
One last political point: two professors called out another deepfake (or simply fake) problem, that of bad actors using generative AI to create false historical documents.
The prospects of political actors using generative A.I. to effectively reshape history — not to mention fraudsters creating spurious legal documents and transaction records — are frightening.
The authors recommend versions of digital watermarking as a solution.
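The authors don’t spell out an implementation, so here is a minimal, illustrative sketch of a related idea: recording a document’s hash and a keyed signature so later tampering can be detected. It uses only Python’s standard library; the key, record format, and function names are my own assumptions, and true digital watermarking (embedding a signal inside the content itself) is considerably more involved.

```python
# Illustrative sketch only: hash-and-sign provenance for a text document.
# Real digital watermarking embeds a signal in the content itself; this
# simpler approach just lets an archive detect later tampering.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = b"archive-signing-key"  # hypothetical key held by the archive


def register_document(text: str) -> dict:
    """Produce a provenance record an archive could publish alongside a document."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode("utf-8"), hashlib.sha256).hexdigest()
    return {
        "sha256": digest,
        "signature": signature,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_document(text: str, record: dict) -> bool:
    """Check the document against its record, and the record against the archive's key."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode("utf-8"), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    letter = "Text of a historical letter..."
    record = register_document(letter)
    print(json.dumps(record, indent=2))
    print(verify_document(letter, record))                 # True: untouched
    print(verify_document(letter + " [edited]", record))   # False: tampering detected
```

A scheme like this can show a document hasn’t changed since it was registered, but it can’t prove the document was genuine in the first place, which is why it would be only a partial answer to the problem the authors raise.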
Let me take a step back. We have just seen multiple political generative AI uses appear in the real world:
Official outreach and communication.
Deepfake persuasion efforts.
Translation.
Bureaucratic operations.
We’re also seeing multiple political responses in play:
National regulation
State and local regulation
Companies interacting with governments
Non-state actors creating political content and convening meetings
In short, the AI and politics world is already complex and growing.
Looking ahead, we can expect some of these uses to continue, unless cut back by regulation and/or cultural outrage. In the United States, 2024 looks like a banner year for hungry AI political entrepreneurs.
I’m interested in other uses which seem possible, based on history and non-political uses observed so far:
- planning election, espionage, and military campaigns
- some politicians and bureaucrats proclaiming their content to be AI-free, perhaps supported by regulation and vetted by third parties
- changes to political art and storytelling
- generative AI in political games and simulations
- inter-state conflict over AI regulation and development
- efforts to create inter-state AI bodies or dedicate existing organizations to that purpose
- citizens and non-citizens who aren’t political professionals (i.e., not Life Corp) contributing AI-generated content to political actions
- AI-created political characters, from the person on the street to analysts and leaders
…and more.
Next up on the scanner: business and economics.