I am overdue on an AI scanner report, not having done one since March. So let’s fire up the scanner and check on one topic for which we have a lot of material in the hopper - politics! Let’s check on the ways governments and politicians have been responding to AI. tl;dr - they’ve been busy.
(If you’re new to this feature of my newsletter, I regularly conduct what futurists call horizon scanning: searching the world for signals of potential futures. We try to sample a wide range of sources and always to accumulate documentation. Here I check on how different parts of the world engage AI: politics, economics, culture, education, and so on. I then track these signals over time to see what they suggest about the future.)
There’s been quite a lot of activity in this field, so I’m dividing it up into:
International geopolitics
Nations
Copyright
1. International geopolitics
The Trump administration is increasing its pressure on Chinese AI by trying to get Nvidia to sell that nation fewer and less powerful chips. And the government might be targeting DeepSeek: “It also is weighing penalties that would block DeepSeek from buying U.S. technology and debating barring Americans’ access to its services…” The context here is, of course, the US-China cold war.
Along those exact lines, OpenAI made a public pitch to the Trump administration for assistance against DeepSeek. The company asked the federal government for help getting ChatGPT into international markets, for protection from copyright lawsuits (see below), and for more governmental use of AI. The same document calls for “catalyz[ing] a reindustrialization,” which means, I think, federal support for AI infrastructure - i.e., datacenters and the electricity and water to keep them going.
OpenAI also reported that it found the Chinese government using AI for various information warfare purposes. One was to “gather real-time reports about anti-Chinese posts on social media services in Western countries.” A second “used OpenAI’s technologies to generate English-language posts that criticized Chinese dissidents.” OpenAI’s official report dubs the two efforts Peer Review and Sponsored Discontent, finds that Chinese actors most often used Meta’s open-source model Llama, and includes examples like this one:
We recently banned a ChatGPT account that was generating comments critical of Chinese dissident Cai Xia; the comments were posted on social media by accounts that claimed to be people from India and the US, and did not appear to attract substantial engagement…
In this recent operation, the same actor who used ChatGPT to generate comments also used the service to generate long-form news articles in Spanish that denigrated the United States, published by mainstream news outlets in Latin America, with bylines attributing them to an individual and sometimes, a Chinese company. This is the first time we’ve observed a Chinese actor successfully planting long-form articles in mainstream media to target Latin America audiences with anti-US narratives, and the first time this company has appeared linked to deceptive social media activity.
There have been some US state-level actions along these lines. Virginia, where I live, banned DeepSeek from state devices. (I encountered this in person, working with some local educators.)
Britain seems to be working on repairing or maintaining its US ties by focusing on shared AI work. According to Politico, London is proposing a trade deal based on leveraging British computing strength and shared political identities as Western democracies.
On a different geopolitical fault line, someone used AI to create a short video visualizing one of Trump’s plans for a postwar Gaza Strip, reimagined as a tourist destination. There’s a lot going on in it, from clips of a delighted Elon Musk eating local food and being showered with money, to a giant golden statue of Trump.
AI is also starting to appear in one particular, emerging geopolitical domain: underwater infrastructure, including pipelines and internet cables. A Wall Street Journal article points out that gathering intelligence on these submarine struggles requires sifting through a lot of data, which is where militaries can bring in AI. Companies working at this AI-underwater intersection are already emerging, like North.io.
At great depths in the ocean there is little light, so the data needed for navigation or observation is mainly acoustic. The information—collected by academic research institutes, wind-farm operators and other sea-based commercial operations—comes from sonar systems that locate objects in the ocean, seismic recording devices that register earthquakes, and satellites.
North.io’s innovation has been to create a way to manage the huge amount of data and standardize it to make it accessible on a variety of cloud-based systems. Its main product is the TrueOcean data-management system, with users that include the military, research institutes and companies in the offshore wind industry.
“We are creating a digital twin of the ocean,” [Jann Wendt, chief executive] says. “This wasn’t possible a few years ago.”
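For the technically curious: the core engineering problem described above is schema normalization, getting many incompatible sensor formats into one shape so cloud systems and models can use them. Here’s a minimal sketch of that idea in Python. To be clear, this is my own illustration, not TrueOcean’s actual design; North.io hasn’t published its internals, and every field name below is invented.

```python
# Hypothetical sketch of undersea data standardization. All field names
# and record formats here are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OceanObservation:
    source: str          # e.g. "sonar", "seismic", "satellite"
    timestamp: datetime  # always stored in UTC
    lat: float
    lon: float
    depth_m: float       # meters below the surface
    value: float         # the measurement itself
    unit: str            # e.g. "dB"

def normalize_sonar(raw: dict) -> OceanObservation:
    """Map one (imagined) vendor's sonar ping format onto the shared schema."""
    return OceanObservation(
        source="sonar",
        timestamp=datetime.fromtimestamp(raw["epoch_s"], tz=timezone.utc),
        lat=raw["position"]["lat"],
        lon=raw["position"]["lon"],
        depth_m=raw["depth"],
        value=raw["echo_strength"],
        unit="dB",
    )

# Usage: once every reading lands in the shared record type, downstream
# consumers (cloud storage, analytics, AI models) see one format, not dozens.
ping = {"epoch_s": 1700000000, "position": {"lat": 54.3, "lon": 10.1},
        "depth": 412.0, "echo_strength": -37.2}
print(normalize_sonar(ping))
```

The design point is simple: a separate normalizer per data source, one shared record type for everything after that. That is what makes a “digital twin of the ocean” tractable at all.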
On a different level, AI regulation proposals proliferate. One Popular Science article noted two such drives in European and American states, each focused on deepfakes and misinformation. A Spanish bill would penalize companies for not properly labeling AI-generated content. Note the European framing: “the Spanish bill follows guidelines set by the broader EU AI Act that officially took effect last year.”
2. Nations
China: Beyond the alleged use of Llama we noted above, the nation’s excitement over DeepSeek’s success seems to be leading to government use of the tech. Techdirt found evidence of “more than a dozen local governments’ declarations of DeepSeek’s benefits for monitoring and managing situations online and off.”
The group indicated that since it adopted DeepSeek, it has seen great advances in both its efficiency at parsing “public opinion data” from across the internet and its ability to filter out noise, and that it can now more quickly identify potential hazards when monitoring hot topics. In addition, DeepSeek can automatically generate “strategic recommendations for public opinion response” based on its analysis of vast quantities of data, and provide smarter “suggestions for handling public opinion.”
It’s worth stating the obvious: this is part of China’s large digital surveillance and control enterprise.
Canada: That country’s Competition Bureau started investigating the extent to which landlords use AI to determine rental rates.
United States: The National Institute of Standards and Technology (NIST) has, under the Trump administration, posted new policies about AI safety. To quote Wired’s writeup:
[NIST] has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduces a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”
This strikes me as consistent with remarks from Vice President Vance downplaying safety concerns. One more passage from that article:
“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.
Don’t miss that last bit about fear. I’m seeing more and more statements like that from American researchers working on a range of topics.
Elsewhere in the federal government, the United States Army is using an AI tool nicknamed CamoGPT to find and remove diversity, equity, and inclusion (DEI) content from training materials. This, too, is consistent with Trump’s program, as well as that of Project 2025. (Here’s our open reading of that document.)
On a more speculative level, RAND researcher Timothy R. Heath urges us to consider the chance that disruptive AI could increase the odds of American civil unrest or even civil war. Drawing on the work of Peter Turchin, Heath starts from the possibility of AI driving elite dissatisfaction as automation blocks some people from white-collar careers, then adds the possibility of broader under- and unemployment.
Liberal democracies may… experience a growing temptation to adopt illiberal practices to marginalize and suppress rival elites. The fraying of democratic norms, increasing resort to hardball politics, the spread of political violence, and general decline in popular support for democratic institutions in the wealthiest and most stable democracies suggest that these trends may already be well underway.
3. Copyright
(Copyright is largely a national body of laws and policies, but this is hugely important for AI and has a very distinctive character, so I’m separating it here.)
A United States judge ruled against large language models in a signal copyright case. Circuit Court judge Stephanos Bibas agreed with publisher Thomson Reuters that an AI legal research firm, Ross Intelligence, infringed on its intellectual property by training models on some of Thomson Reuters’ legal content. One key detail: the judge determined that Ross used Thomson Reuters material in order to compete with Thomson Reuters. That’s a powerful argument against fair use in this case, and we might expect to see it in other, bigger cases.
Perhaps anticipating or responding to this legal action, OpenAI argued that courts should find fair use to protect generative AI - for national security reasons. If courts follow Bibas and declare no fair use protection for LLM training, then America would “forfeit… our AI lead to the PRC by [not] preserving American AI models’ ability to learn from copyrighted material.”
A group of legal scholars also weighed in on the side of fair use protecting LLMs, but from a different angle. Rebecca Tushnet et al. argue in a brief supporting Meta (which faces a major copyright suit) that generative AI transforms the original material enough to constitute something new, not just a copy.
Yet Meta faces a different copyright challenge. It looks like the company trained Llama at least in part on a lot of material from LibGen, a leading pirated-content database. Alex Reisner points out that Meta staff probably used BitTorrent to download the material; since BitTorrent clients upload pieces to other peers while downloading, Meta likely distributed the pirated files as well, compounding the offense. Meta denies this.
In another AI copyright case, a judge found in favor of Anthropic when music publishers charged it with training on their copyrighted materials (song lyrics).
Lee rejected the publishers' argument that Anthropic's use of their lyrics caused them irreparable harm by diminishing their licensing market.
"Publishers are essentially asking the Court to define the contours of a licensing market for AI training where the threshold question of fair use remains unsettled," [U.S. District Judge Eumi] Lee said.
Now, this is not the end of the case, just an early stage, but still an interesting development.
Summing up, it’s obvious that governments are continuing to grapple with generative AI. Geopolitics and national politics keep shifting, reshaping policy positions as they do. Meanwhile, the AI copyright wars are heating up. Readers may remember how I’ve argued that IP lawsuits present a serious threat to the technology.
The geopolitical angle doesn’t get enough attention, I think. It’s clear that AI companies are positioning themselves to take maximum advantage of shifting international strategies, which then has effects on institutional and individual users. There’s some uncertainty as the Trump administration pivots wildly across trade and alliance actions. Domestic AI policies also shift as nations try to position themselves for best advantage. We need to keep these levels of human response to AI in mind.