As AI continues to develop and spread through society, how are different parts of civilization reacting? Today we’re going to look into what governments, politicians, and people embroiled in legal processes have been doing in response.
(If you’re new to this newsletter, welcome! This is one of my scan reports, examples of what futurists call horizon scanning: research into the present that looks for signals of potential futures. We can use those signals to develop trend analyses, which in turn let us sketch glimpses of possible futures. On this Substack I scan various domains where I see AI having an impact. I focus on technology, of course, but also scan government and politics, economics, and education, this newsletter’s ultimate focus.
It’s not all scanning here at AI and Academia! I also write other kinds of issues; check the archive for examples.)
There’s a lot to cover here, and so I’ve broken things down into several big headers:
AI in international politics
Nation-states grapple with AI (including American states and cities)
Copyright
…followed by some reflections.
1. AI in international politics
Last month several Middle Eastern nations moved aggressively to become major AI players. The United Arab Emirates, Qatar, and Saudi Arabia each announced very expensive deals to purchase hardware, set up data centers, and partner with American AI companies. This strategy connects with the deepening US-China cold war, as one anonymous source told the Washington Post: “The Trump administration’s view… is that the region aspires to be an AI powerhouse and that if the U.S. does not find a path for the countries to access American technology, Chinese hardware will become dominant there.”
One geopolitical note: Trump officials who strongly oppose China criticize this Middle Eastern AI scheme, arguing that it doesn’t do enough to keep Beijing from taking advantage of it. For example, China could buy AI-powering hardware from the UAE, bypassing US export controls.
Elsewhere, Nvidia and Perplexity are building up European offerings with a distinctly local flavor:
The chip titan is working with local European partners, including the French AI firm H Company, to build open-source, sovereign AI models that will be offered through AI search startup Perplexity’s platform.
The models will run on local AI infrastructure from European partners participating in DGX Cloud Lepton, a service designed to link AI developers with Nvidia’s network of cloud providers. Businesses can also access and fine-tune or customize those AI models through an integration with Hugging Face, which runs a popular open-source model platform.
“Sovereign AI” in this case means “countries investing directly in artificial intelligence,” according to Nvidia’s CEO.
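To make the Hugging Face piece of that concrete, here is a minimal sketch of the Hub workflow the article describes. This is illustrative only, assuming the standard transformers library; the model identifier below is a placeholder, not one of the actual announced sovereign models.

```python
# Illustrative sketch of pulling an open model from the Hugging Face Hub
# for local use or fine-tuning. The model ID is hypothetical.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/sovereign-llm-7b"  # placeholder, not a real checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# From here a business could fine-tune on its own data (e.g., with the
# Trainer API or a LoRA adapter) and deploy on local infrastructure.
prompt = "Summarize the EU AI Act in one sentence:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The appeal of such an arrangement is presumably that the model weights and data can live where the customer wants them, rather than solely behind a US cloud API, which seems to be part of what “sovereign” cashes out to in practice.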
The G7 issued a joint statement on AI, which doesn’t seem to have received much attention. Its goal is to encourage member nations to “move from uncertainty to opportunity—to shift from being AI-aware to being AI-powered” with an emphasis on small and medium-sized businesses. It’s a pretty broad agenda:
[W]e commit to:
Work together to accelerate adoption of AI in the public sector to enhance the quality of public services for both citizens and businesses and increase government efficiency while respecting human rights and privacy, as well as promoting transparency, fairness, and accountability.
There are some specific initiatives, such as “establish[ing] a G7 AI Network (GAIN) to advance the Grand Challenge; develop a roadmap to scale successful AI projects; and create a catalogue of open-source and shareable AI solutions for members.”
Currently holding the G7 presidency, Canada’s new government is planning to run “the G7 GovAI Grand Challenge” as well as offering “a series of ‘Rapid Solution Labs’ to develop innovative and scalable solutions to the barriers we face in adopting AI in the public sector.”
The statement also addressed energy issues, noting the problem of AI driving up electricity demand and calling both for more efficient technologies and for using AI to help address energy problems as a whole.
The statement also makes some educational points. It emphasizes sharing AI expertise among the member nations, and education plays a key role through “AI-focused talent exchanges, including with students from G7 members, specifically targeting AI adoption projects, to bridge research with practical application, developing high-level expertise in critical areas.” Academic research also appears on this score: “We seek to further promote secure, responsible, and trustworthy AI that benefits people, mitigates negative externalities, and promotes our national security. We will do this through advanced AI research…”
There is also a gender education angle addressing one chronic underrepresentation problem:
We will drive economic growth, address talent shortages, and ensure equal opportunity, by encouraging girls, as well as members of communities left behind by globalization, to pursue science, technology, engineering, and mathematics (STEM) education and increasing women’s representation in the AI talent pool at all levels.
One more note: the statement emphasizes that the group continues to follow the G7’s 2023 Hiroshima AI Process for developing AI governance. It also promises to publish an AI Adoption Blueprint at some unspecified time.
2. Nation-states grapple with AI
Let’s look at different national governments:
**Canada**: The new Canadian government of Mark Carney created and filled a new position, minister of Artificial Intelligence and Digital Innovation (Ministre de l'Intelligence artificielle et de l'Innovation numérique). The first holder of that position is Evan Solomon.
**Israel**: The Israeli military uses an AI system to process intelligence and select individuals to attack in Gaza, according to one report. The application, named Lavender, apparently uses machine learning to teach itself the characteristics of potential targets: “fed data about existing Hamas operatives, it learns to notice their features, and then it rates other Palestinians based on how similar they are to the militants.” Humans are in the decision loop, but apparently sometimes rubber-stamp the program’s conclusions, checking little beyond the target’s gender.
**China**: At a major Politburo meeting, President Xi stated his views on Chinese AI strategy. The big picture, according to the official statement, is “the healthy and orderly development of my country's artificial intelligence in a beneficial, safe and fair direction.” (Google Translate) This includes building up AI infrastructure, doing more research, and strengthening governance and regulation. The strategy also identifies what Xi sees as the nation’s strengths in this area: “rich data resources, a complete industrial system, broad application scenarios and huge market space.”
Academia plays a role in Xi’s plans. First is applying AI to research itself: “[l]everaging AI to drive a paradigm shift in scientific research and accelerating innovation breakthroughs across various fields.” Second is boosting AI education across the board in order to build up AI talent:
He stressed promoting AI education at all levels and expanding public AI literacy, continuously cultivating a high-quality talent pool. Improvements in talent evaluation and career support mechanisms should be made to create favorable conditions and platforms for talents to realize their full potential.
At the same time, Xi seems to have called for business to take a larger role in “national teams” which include academia.
Speaking of Chinese AI regulations and governance, the nation managed to curtail its local AIs’ capabilities during the week of the gaokao, the country’s high-stakes national college entrance exam.
**The United States**: President Trump’s “Big, Beautiful Bill” includes a provision which, as of this writing, would block states from passing AI regulations for ten years. Many technology giants lobbied for it.
The National Science Foundation and the Networking and Information Technology Research and Development (NITRD) National Coordination Office posted a request for information on how to craft a new, post-Biden AI policy. The goal:
so that the United States can secure its position as the unrivaled world leader in artificial intelligence by performing R&D to accelerate AI-driven innovation, enhance U.S. economic and national security, promote human flourishing, and maintain the United States' dominance in AI while focusing on the Federal government's unique role in AI research and development (R&D) over the next 3 to 5 years.
The document specifies that the federal government should support AI research which the private sector is not doing. There is one mention of higher education: “Respondents to this RFI are encouraged to articulate… ideas for novel mechanisms for research partnerships with industry and/or academia.”
A DOGE staffer used generative AI badly in prompting it to identify Department of Veterans Affairs contracts to cut, according to a ProPublica article. The prompts failed to capture the full nature of each contract, in part because the staffer relied on an older model with a small context window, which could not take in each document’s complete text.
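To see why a small context window matters here, consider a minimal sketch. All numbers and names below are illustrative assumptions, not details from the ProPublica reporting: the point is simply that a document longer than the window gets silently truncated unless the pipeline chunks it.

```python
# Illustrative only: a fixed context window forces truncation unless long
# documents are split into chunks. The window size and the rough
# four-characters-per-token heuristic are assumptions for demonstration.

CONTEXT_WINDOW_TOKENS = 4_096   # small window, typical of older models
CHARS_PER_TOKEN = 4             # crude heuristic for English text
MAX_CHARS = CONTEXT_WINDOW_TOKENS * CHARS_PER_TOKEN

def naive_prompt(document: str) -> str:
    """What a careless pipeline does: drop everything past the window."""
    return document[:MAX_CHARS]  # the model never sees the rest

def chunked_prompts(document: str, overlap: int = 200) -> list[str]:
    """A simple mitigation: overlapping chunks so every part of the
    document fits inside some prompt."""
    step = MAX_CHARS - overlap
    return [document[i:i + MAX_CHARS] for i in range(0, len(document), step)]

long_contract = "lorem ipsum " * 30_000  # stand-in for a lengthy document
print(len(naive_prompt(long_contract)))     # capped at MAX_CHARS
print(len(chunked_prompts(long_contract)))  # several chunks, nothing dropped
```

A reviewer judging a contract from only its first few thousand tokens is, in effect, running `naive_prompt`.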
A court ordered OpenAI to preserve ChatGPT logs. One reason: news organizations worry that users employ ChatGPT to get around their paywalls and then delete the resulting records. OpenAI filed a protest.
At the state level, a Georgia judge found for OpenAI in a libel lawsuit. Plaintiffs accused ChatGPT of producing content which libeled one of them. In response, Judge Tracie Cason determined that OpenAI offered enough cautionary language - warning users not to treat output as guaranteed fact - that a user could view results skeptically. Cason also saw OpenAI as taking many steps to improve ChatGPT’s quality - i.e., the company wasn’t trying to lie or being careless with the truth, which is a key part of libel law. Disclaimer: I am not a lawyer. (summary here)
Elsewhere, a coalition of mental health advocacy groups, convened by the Consumer Federation of America (CFA), asked state regulators to crack down on chatbots offering mental health services. Specifically, the CFA statement calls out Character.ai and Meta’s AI service on several grounds: acting as mental health providers when they are not licensed to be such; purporting to be confidential when the firms will likely use user data; and violating their own terms of service.
(The CFA statement also calls on the federal government to act, which seems unlikely.)
At the local level, Memphis, Tennessee residents protested a gigantic supercomputer installation, charging xAI with polluting the environment and lying about its systems.
Alleging that the unregulated turbines "likely make xAI the largest emitter of smog-forming" pollution, they've joined the Southern Environmental Law Center (SELC) in urging the Shelby County Health Department to deny all of xAI's air permit applications due to "the stunning lack of information and transparency."
Local activists charge Elon Musk’s company with environmental racism, as the neighborhood where xAI situated its Colossus facility is majority Black.
In Louisiana, New Orleans police used AI to create arrest profiles… after being ordered not to. I’m not sure what kind of AI was involved - generative, LLM-based, or something else.
3. Copyright
(I break out copyright as a separate category because it’s so interesting and deep. Let me know if you want me to fold it back under the national header.)
Digital publishing conglomerate Ziff Davis sued OpenAI, alleging extensive copyright infringement. Ziff Davis adds that OpenAI scraped content even when websites used a robots.txt file to refuse such crawling (see the sketch just below), and also deleted “this is copyrighted” language from scraped-up text. On a related note, Disney and NBCUniversal sued image generator Midjourney, also alleging copyright infringement.
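For readers who haven’t met robots.txt: it’s a plain-text file at a site’s root that asks crawlers to stay away from specified paths. Here is a minimal sketch of the kind of refusal Ziff Davis describes, checked with Python’s standard library. The rules and URL are hypothetical; GPTBot is OpenAI’s documented crawler user-agent.

```python
# Sketch of how a robots.txt directive refuses an AI crawler, and how a
# compliant crawler would check it. The rules and URLs are illustrative.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler asks before fetching; the suit alleges OpenAI's
# scraping ignored exactly this kind of directive.
print(parser.can_fetch("GPTBot", "https://example.com/article"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The catch, of course, is that robots.txt is purely advisory: nothing technical stops a crawler from ignoring it, which is why the dispute lands in court.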
The Trump administration is deeply divided over its approach to AI and copyright, according to a Jacobin article. David Moscrop argues that two camps have emerged. One is populist, wants to rein in Silicon Valley for various reasons, and so supports stronger copyright protections, which in turn can cramp generative AI; the latter depends on scraping enormous amounts of content, for much of which owners can assert copyright. Opposed to this group are big technology firms and libertarians, who want to unleash AI and so would rather copyright didn’t get in the way. “On one side are the copyright maximalists and their adjacent MAGA populists, who claim to defend workers and creators. On the other are the tech-bro libertarians, whose vision of the future depends on unfettered access to data and minimal regulation.”
One example:
after the Register of Copyrights, Shira Perlmutter, was turfed by DOGE-aligned officials, Trump antitrust adviser Mike Davis posted to Truth Social: “Now tech bros are going to steal creators’ copyrights for AI profits. . . . This is 100 percent unacceptable.” Trump reposted it. That’s the shape of the struggle: MAGA populists, who see their own content as sacred property, are up against a tech elite that views all content as extractable fuel.
What might we take from all of these stories?
International competition and collaboration on AI is heating up. The Trump administration is eagerly backing AI at a global scale, turning to Middle Eastern allies for resources and arranging public-private partnerships. It seems to be setting aside the Biden administration’s caution and guardrails, aiming at developing American industry while outcompeting China’s. Meanwhile, Xi is calling for China to play more of a global role in AI, for the nation to “[a]ctively carry… out international AI cooperation and helping Global South countries enhance their technological capabilities.” Cold War 2.0 clearly has an AI dimension, with Washington and Beijing competing to win other nations as consumers and co-developers for their respective AI industries. As Bill Bishop observes, “the US and China have reached the point of no return in building bifurcated AI systems, no matter how much the US might pull back on export controls.” This echoes similar divides in the digital ecosystem (payments, social media), electric vehicles, and more.
At the same time, this global AI competition sometimes yields very local results. From the Wall Street Journal article about Nvidia and Perplexity in Europe:
It’s likely that as regions like Europe expand their rules for sovereign technology—especially for highly regulated sectors—and have a greater need for localized AI, there will be more local versions of Perplexity and AI tools like it, Tirias Research’s McGregor said.
Nation-states are trying to play decisive roles in the global AI competition: creating policies, developing national strategies, promoting business deals, and staffing new positions. They are also deploying AI in various state functions, with varying degrees of efficacy and opposition. I am curious to see how Carney takes Canada forward on AI, especially on the global stage where his career has largely taken place.
In this context the G7 statement is very interesting. Overall it feels generally pro-AI, with some cautions. It seems implicitly to compete with China for AI leadership at the world level. However, it has some weaknesses. The intellectual property aspect is toothless. There’s no real commitment from national legislatures. And it downplays the climate change dimension.
America emerges as something of a chaos agent. Trump’s one-man rule draws many decisions to himself, and those are hard to anticipate. If the “Big, Beautiful Bill” becomes law and bars states from regulating AI, federal policy - and Trump’s choices - will be that much more salient. His administration has some major divides over AI, and whichever faction attains the upper hand will shape policy. Again, the outcome of such struggles depends on Trump. I do expect that if and when the president doubles down on AI, we will see opposition to AI become more partisan.
On other topics, the addition of more copyright lawsuits is evidence for what I’ve been warning about for (gulp) years: that AI companies are vulnerable to such cases. We could see judges issuing rulings that drastically revise or end their generative AI offerings.
It’s interesting to see where AI criticism appears - or doesn’t - in the government space. The G7 statement takes care to mention safety, energy, and protecting people. Xi’s remarks touch on steering AI in a healthy direction. Yet the Trump administration is clearly uninterested in anti-AI arguments. Perhaps we should expect more lawsuits in that environment, given widespread anxiety around the technology and Americans’ love of suing each other. AI’s impact on climate change seems to have fallen in importance as a public concern.
Finally, I’m struck by how often non-educators do - or do not - invoke academia. Some governments explicitly call on post-secondary education to play a role in various strategies, but I hear little about this in the academic world. Yet the government space impacts how educators respond to AI, from copyright suits and rulings to geopolitical struggles and state actions.
That’s all for now. This was longer than I expected, but there’s *so much* going on.
(thanks to Tom Haymes, Donna Kidwell, and Peter Shea)