I’ve been sharing what I’ve discovered in my AI environmental scanning. So far I’ve focused on technological and educational dimensions. I’d like to extend that scanning into other domains, starting today with… politics.
New regulations and self-regulations
The European Parliament adopted an AI law. It calls for greater transparency from AI developers and warns of risky uses, such as in critical infrastructure. There’s also this requirement for any AIs developed in Europe: “Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.” I don’t know how they’ll enforce that - in all member states, or just the nation where coding and training take place?
At the same time, the Indian government decided to require “significant” (i.e., not startup) AI firms to get state approval before proceeding with work. Specifically, it’s about AI threats to politics: “The advisory… also asks tech firms to ensure that their services or products ‘do not permit any bias or discrimination or threaten the integrity of the electoral process.’”
One likely reason for this is the appearance a few weeks ago of a very impressive deepfake, which showed “KT Rama Rao, a leader of the Bharat Rashtra Samiti that was ruling the state, calling on people to vote in favour of the Congress [party].” Another reason might be this AI-generated political endorsement from someone who’s been dead for five years:
As the creator helpfully explained, “there is a market opening up [for such deepfakes]…. You can attribute some statements to a particular person and that kind of gives more value to it.”
The Indian policy is just an advisory for now, but might signal more Indian regulation to come. Perhaps this will inspire other countries as well.
It might also be a response to this declaration by 20 AI and AI-related companies against using AI for election fraud. Who signed up? “Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm all signed the accord. Artificial intelligence startups OpenAI, Anthropic and Stability AI also joined the group, alongside social media companies such as Snap, TikTok and X.” Here’s a sense of the accord’s language and thinking:
Participating companies in the accord agreed to eight high-level commitments, including assessing model risks, “seeking to detect” and address the distribution of such content on their platforms and providing transparency on those processes to the public. As with most voluntary commitments in the tech industry and beyond, the release specified that the commitments apply only “where they are relevant for services each company provides.”
“Democracy rests on safe and secure elections,” Kent Walker, Google’s president of global affairs, said in a release. The accord reflects the industry’s effort to take on “AI-generated election misinformation that erodes trust,” he said.
It’s not clear what this means in practice. I would infer that each company will add new guardrails, and I’d like to test them for this. The usual caveats about industry self-regulation apply: it tends to aim at forestalling government regulation, it’s hard to verify, and it’s difficult to trust companies to regulate against their own self-interest.
Military AI experiments
The United States Army is experimenting with generative AI bots as battle planning assistants (here’s an ungated link). So far it sounds like my experiments:
In the experiment, they provided information about the simulated battlefield terrain and details on friendly and enemy forces, along with military lessons on attacking and defending. Then they gave each AI assistant a mission to destroy all enemy forces and seize an objective point.
Each AI assistant responded within seconds by proposing several courses of action. A person playing the commander could then ask the AI to refine those proposals – such as making sure friendly units take control of a specific bridge – before approving final orders.
But unlike what I did, the army had the LLMs compete against other AIs. Interesting results: “The AI assistants based on OpenAI’s GPT models outperformed the two other AI agents. But they were not perfect by any means, suffering more casualties than the other AI agents while accomplishing the mission objectives.”
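To give a concrete, purely illustrative sense of what that kind of experiment involves, here is a minimal sketch of the propose-then-refine loop described above, assuming an OpenAI-style chat API. The scenario text, model name, and helper functions are my own inventions, not details taken from the Army study.

```python
# Minimal sketch of an LLM "battle planning assistant" loop (illustrative only).
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SCENARIO = """
Simulated terrain: a river valley with a single bridge at grid NK4512.
Friendly forces: two mechanized infantry companies and one artillery battery.
Enemy forces: one dug-in infantry company holding the far bank.
Mission: destroy all enemy forces and seize Objective IRON on the far bank.
"""

def propose_courses_of_action(scenario: str) -> str:
    """Ask the model for several candidate courses of action."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a battle planning assistant. Propose three "
                        "distinct courses of action, each with risks and tradeoffs."},
            {"role": "user", "content": scenario},
        ],
    )
    return response.choices[0].message.content

def refine_plan(scenario: str, plan: str, commander_guidance: str) -> str:
    """Let the human 'commander' push back before approving final orders."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a battle planning assistant."},
            {"role": "user", "content": scenario},
            {"role": "assistant", "content": plan},
            {"role": "user", "content": commander_guidance},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    plan = propose_courses_of_action(SCENARIO)
    print(plan)
    print(refine_plan(SCENARIO, plan,
                      "Revise so friendly units take control of the bridge at NK4512 first."))
```

The Army version presumably feeds in far more doctrine and terrain detail, and, as the article notes, then pits the resulting plans against opposing AI agents in a wargame simulation.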
The idea of national or “sovereign” AI
Most of the major AI projects have aimed at being global in use and application. That is, although their datasets may reflect certain cultural biases and their servers may sit in particular nations, companies and nonprofits have positioned them as useful for any human with the right infrastructure. Readers know I’ve been forecasting variations on this pattern, such as national AI, and now Nvidia’s CEO Jensen Huang has called for exactly that, dubbing the idea “sovereign AI.”
What might this look like? Huang advised developing nations’ leaders to “codify the language, the data of your culture into your own large language model.” AI should “codif[y] your culture, your society’s intelligence, your common sense, your history – you own your own data.”
Ars Technica sees this as, unsurprisingly, a play for even more Nvidia chip sales:
The concept of each nation owning its own AI infrastructure is convenient for Huang and Nvidia because it would mean that the market for its AI-accelerating hardware products would span every country on the globe. Since the tech industry is possibly at the beginning of an adoption curve for deep learning AI applications, that could result in dramatic growth for Nvidia in the near future.
Let me tie together these stories. You can imagine a national stack of AI, starting with companies and their materials located in a specific country. The country’s AI projects would focus on local language, traditions, histories, folkways, geography, food, music, and more. Its national government would seek to guide and regulate AI according to current politics. Its military would apply locally produced AI for its own needs, as would its intelligence services and general bureaucracy. Each nation would offer its own AIs and AI frameworks for regulation, development, and support. Think of it as #digitalWestphalia.
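To make that stack a little more concrete, here is a purely hypothetical sketch of the layers such a national AI effort might comprise. Every name and example value below is my own invention for illustration, not a description of any real country or project.

```python
# Hypothetical outline of a "national AI stack" (#digitalWestphalia).
# All fields and example values are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class NationalAIStack:
    country: str
    languages: list[str]          # local languages the models are trained on
    cultural_corpora: list[str]   # national archives, media, folkways, geography, music
    compute: list[str]            # domestically located data centers and chips
    regulator: str                # the body that guides and audits AI per current politics
    government_uses: list[str] = field(default_factory=list)  # bureaucracy, intelligence services
    military_uses: list[str] = field(default_factory=list)    # locally produced military AI

example = NationalAIStack(
    country="Examplestan",
    languages=["Examplic"],
    cultural_corpora=["national library scans", "public broadcaster archive", "folk music corpus"],
    compute=["state-owned data center"],
    regulator="Ministry of Digital Sovereignty",
    government_uses=["citizen services assistant"],
    military_uses=["logistics planning assistant"],
)
```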
That’s in opposition to the global, transnational AI we’ve seen so far from Google, OpenAI, Microsoft, etc. Future trends sometimes follow Hegel or Newton, as one development elicits its opposite. The questions now follow easily: which will gain more prominence, global or national AI? How might they conflict - or synthesize?
That’s all for this scanner. Next up: economics, society, culture… and, of course, technology and education.
So: My virtually sovereign god-intelligence will beat yours any day now.
Take Ukraine for starters.
Global hybrid war, however, requires $7 Trillion just to play at the high-stakes table this decade.
RE: national v transnational AI. I agree the future is more compartmentalized than the commercial vendors might prefer. I suspect the end state is something like early EC federation, where transnational AIs dip into national AIs as needed to answer transnational questions. Nations own and monetize the national AIs; the transnational vendors own and monetize the transnational--and possibly provide services for monetizing the national AIs.
In addition to the political-economic forces pushing toward such an outcome (how can the EU not federate?), from an engineering perspective one would expect this to improve results over either the national or transnational approach alone. MSGoogleMetaInc. would be fools not to anticipate and build the infrastructure for such, and compete to provide that infrastructure to the national AI builders.
G7 attempts to regulate efforts to build military AIs in other nations are going to be interesting to watch. Nations will surely keep military and other 'sensitive' data out of the federation or in siloed federations of their own (e.g., NATO), but there are powerful incentives for most nations to play, to varying degrees and with varying constraints, for commercial and social purposes. Note that federation gives nations wanting to clamp down a built-in choke-point by means of which to meter domestic and international interactions to conform to their preferences ("information mercantilism"). Most will see that as a feature, not a bug.
We might fall short and end up with two or three competing federations reflecting geopolitical alignments, but for at least some purposes, until and unless diplomatic and trading relationships break down completely (and perhaps even then), the federation is likely to be global. I could see many possible twists and turns along the way, and predicting intermediate states and sequences will be very tricky, but a final outcome looking something like this, 15-25 years from now, is where I'd put my money.
Side-note: the biggest threat to the national-AI scenario may be sub-national and ideological AI. Once fragmentation starts, it's hard to stop. There are currently US state governors and their parties, as well as political parties in almost every nation, who would be excited at the possibilities for thought control inherent in subnational AI. Ideologically, just imagine what a transnational FoxAI could do to strengthen political biases and bubbles, domestically and transnationally. #confirmationbAIas