Greetings from northern Virginia, where I’m home for a few days between a whirlwind of trips (Ohio, Texas, Germany).
The American national elections have sidelined much of my research, as people try to grapple with what a second Trump administration might mean. I’ve been doing forecasting work on this for years, and so synthesized much of that this week on my blog with an eye towards academia.
Today I want to build on that work by sharing some thoughts about what Trump 2.0 might mean for generative AI. I’m working on several levels here: how a new administration might regulate AI at the federal level; the effects of Trump’s energetic use of the bully pulpit; and actions by other political actors in response to the White House. (I have other topics along these lines to write, if I can get the time, including how AI deepfakes proved to have no effect on the campaigns.)
A few caveats: first, Trump is a famously mercurial and chaotic figure, so there’s more uncertainty than usual in this situation. Second, I’m focusing here on his administration, not on how AI evolves on its own or under the impact of non-Trump forces, such as European Union regulation, Chinese investment, and so on. Third, the usual reminder that AI is developing both very quickly and in many locations around the world. You’ll see signs of these uncertainties in the best writing on the topic, like this Brookings article.
One more note: this is a first pass on the subject. There aren’t many second-order effects here. I have some more thoughts and hope to turn them into posts, if I can get the time. But I want to share this now, especially given the topic’s complexity.
tl;dr - much is contradictory or vague at this point, but there are many, many levels on which the Trump-AI topic could play out.
1: The emerging Trump administration
It’s important to note that there’s a regulatory and leadership rupture here. The Biden administration is very much a lame duck, and the Trump team isn’t installed in power until late January. Once the latter occurs, there’s a period of adjustment as people exit positions and new people take them up, as policies go through proposal and refinement phases, and as the new administration responds to events in the world. I wrote “rupture” to signal not just a pause but also the Republican hostility to Democratic policies, practices, and signaling. Switching to new leadership, and the reversals that will entail, will also take time. Here I agree with Marc Watkins that the government is missing a window when it could otherwise be responding to a famously fast-moving technology.
We might indeed expect the Trump team’s first move to simply be undoing Biden administration actions. So that White House’s Blueprint for an AI Bill of Rights and Biden’s executive order on AI should vanish. The National Institute of Standards and Technology (NIST)’s “Artificial Intelligence Risk Management Framework” would likely disappear. AI.gov website content should face a similar fate. NIST’s AI Safety Institute (AISI) could see staff exit and be replaced by people with other plans and ideas, or the institute could simply face shutdown. The Office of Management and Budget (OMB) AI policy will likely go. Guidelines, policies, and regulations that address any “woke” issues (race, gender), touch on climate change, or involve international cooperation will likely face the ax.
Now, what will Trump offer as a replacement? The field is fairly open here. Remember that Trump is in a very friendly regulatory environment, with majorities in Congress (for at least the next two years) and a supportive Supreme Court (for at least the next four years).
It’s important to remember that there are many signs of Trump 2.0 being very pro-technology. The Agenda 47 document (Trump’s published platform) calls for more tech in border enforcement (and, presumably, the massive deportation process): “We will… and use advanced technology to monitor and secure the Border.” It similarly invokes high technology for a military resurgence: “We will invest in cutting-edge research and advanced technologies…” There are already AI applications in both of those domains, from data analysis to developing plans and target identification. If these ideas continue, we should expect more AI funding and other support in ICE, Homeland Security, and the Department of Defense.
Moreover, Agenda 47 calls for supporting AI in a specific way:
We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing. (frequent capitalizations in original)
This sounds like a clear call for federal backing. That could occur in terms of funding for AI projects across the board (think Department of Defense contracts, Department of Commerce business support grants, Department of Energy research, etc.), removing Biden’s regulations (see above), and more. Imagine Trump using the Defense Production Act (1950) to launch an Operation Warp Speed-style AI development blitzkrieg if, say, a Chinese firm has an AI breakthrough, or just to strike a pose or reward funders and supporters currently in favor.
It’s worth repeating that Trump’s third presidential campaign emphasized opposition to Silicon Valley censorship and content moderation. This was a consistent theme, and it’s worth thinking through how it could play out as policy. Imagine the removal of regulations on bias, offensive speech, and discrimination resulting from AI. Wired has a good cautionary column on this view.
The China cold war
Trump has made opposing China a key theme of his new administration, notably by proclaiming some upcoming massive tariffs. Project 2025 is very consistent about organizing the entire government to vigorously oppose China in Cold War 1.0 fashion. This could lead to specific regulations and policies aimed at Chinese AI producers. An increased commitment in the intelligence community towards AI detection and presumably suppression could also follow. (cf Project 2025, p. 217)
The intensifying struggle over chip production and distribution might take a turn if Trump blocks access to American-made materials, and if China retaliates, especially on rare earths. Has the Trump team thought this through, wargamed it? Perhaps they plan to lean on the Netherlands, with its key chipmaking equipment sector. Maybe they want to start off in fevered global competition mode, in order to fall back on deals they bet will be better for Americans.
We could see this conflict heighten a general technology trend of global digital divisions, or the splinternet. Different nations and companies carve out their own digital domains by various means, like content blocking and favoring certain domains and providers. Perhaps the US and China develop very separate AI ecosystems, much as their mobile device worlds have already diverged. For now, consider the possibilities of a US-China AI Cold War.
Recall that this new cold war is a global, intercontinental thing, as Beijing and Washington struggle to influence nations around the world. Both will doubtless seek to sell domestic AI products and services to other nations, as Matthew Mittlesteadt observes. The new Trump administration seems likely to support this kind of economic growth.
Mittlesteadt points out a major problem facing a Trump team seeking to win such an AI war with China: the US depends on that nation to make AI work.
Many of the hardware components that make AI and digital tech possible rely on imported materials not found or manufactured in the United States. Neither arsenic nor gallium arsenide, used to manufacture a range of chip components, have been produced in the United States since 1985. Legally, arsenic derived compounds are a hazardous material, and their manufacture is thus restricted under the Clean Air Act. Cobalt, meanwhile, is produced by only one mine in the U.S. (80 percent of all cobalt is produced in China). While general tariffs carry the well-meaning intent of catalyzing and supporting domestic manufacturing, in many critical instances involving minerals, that isn’t possible, due to existing regulations and limited supply. Many key materials for AI manufacture must be imported, and tariffs on those imports will simply act as a sustained squeeze on the tech sector’s profit margins.
I can imagine the new American government pushing hard and in multiple directions to solve this problem through re-sourcing minerals elsewhere, lobbying for better prices and availability from China, and perhaps (under multiple layers of deniability) industrial sabotage. Not to mention urging American firms to innovate new AI forms which don’t depend so heavily on Beijing.
To return to higher education, again I note that we could see political pressure on American academics not to work with Chinese faculty, students, or staff on AI and related topics.
I suspect cybersecurity in general will appeal to Trump, given his persona and record. He could back CISO positions in government and industry, especially if phrased as protecting Americans from malign foreign actors. AI regulation advocates would do well to emphasize security aspects of their proposals.
On the economic side of AI, I suspect we should see more private-public partnerships and outsourcing. This is very much Trump, as a private sector booster, but also classic Republican thinking and policy. Such a way of thinking could lead to outsourcing some federal technology functions to private companies, like (hypothetically) hiring OpenAI to generate deportation raid plans for ICE or having Amazon provide cloud services for military analysis for Middle Eastern contingencies. Already there’s a deal between Amazon, Palantir, and Anthropic to supply AI to the government.
An interesting question is how Trump will react to the idea of AI throwing Americans out of work. Agenda 47 prominently features “TURN THE UNITED STATES INTO A MANUFACTURING SUPERPOWER” (caps in original, of course) as its fifth point, right on the front page. Using AI to accelerate American industrial output should appeal to Trump, but he might be appalled if that means fewer human workers involved in that economic renaissance.
On another economic front, all signs point to Trump ending Lina Khan’s antitrust work at the Federal Trade Commission (FTC). This could lead to AI business consolidation, with bigger firms snapping up smaller ones.
On the hardware side, we’re seeing AI increasingly appear in robotics, from industrial manufacturing to self-driving cars. I suspect a Trump administration will support this as a business growth measure. Other topics we’ve mentioned here should play a role on the robotic side, such as blocking Chinese hardware imports, the military partnering with more American robotics companies, and so on.
Open source AI
Will Trump 2.0 back open source? This might sound like too technical a question for the man and the moment, but it’s one which has already come up. Elon Musk is a big open source AI backer. Mark Zuckerberg, who seems to be trying to woo Trump, has made Meta’s AI open source (by some definitions of the term, like open weights; I won’t get into that now).
Yet Sharon Goldman points out that when American AI projects release their work as open source, China and other adversaries can use it. We could see Trump opposing open source AI for this reason, and on a number of levels (regulations, contracts, the bully pulpit), as a matter of security.
Nuclear power
I think Justin Joffe is on the right track when he notes the Biden administration’s quick policy response to Microsoft and Google’s plans for expanding nuclear electricity generation to power AI-focused datacenters. Will Trump support bringing Three Mile Island back online, both literally and as a symbol for reviving and building other atomic power plants to meet AI’s ferocious energy needs? Or will Trump oppose this simply because Biden liked it?
Grist is unsure, as is Utility Dive; both point to mixed Trump signals. Yes, Agenda 47 favors more energy (“Republicans will unleash Energy Production from all sources, including nuclear, to immediately slash Inflation and power American homes, cars, and factories with reliable, abundant, and affordable Energy”), but Trump was ambivalent during his Joe Rogan appearance. In one discussion with Elon Musk, Trump complained about AI’s power needs. I imagine TV images of nuclear disaster horrified him, yet he also wants to avoid supporting renewable energy.
2: The rest of the nation reacts in the short term
All of the preceding concerns the federal level of American government. How will states, counties, and cities react? How will Trump 2.0 influence businesses and individuals working on AI issues?
Some state governments have issued advanced AI regulations, notably California. TechCrunch observes that:
In March, Tennessee passed a law protecting voice artists from AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI deployments. And in September, California Governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require companies to publish details about their AI training.
State policymakers have introduced close to 700 pieces of AI legislation this year alone.
I suspect state and local Republicans are watching the emerging administration closely, looking for cues to follow. Similarly, Democrats must be scanning the scene to see what to oppose at local levels. We could see red and blue states (those with solid Republican or Democratic governance, respectively) take up opposing AI measures: Democrats might advocate for safety and bias regulations, for example, while Republicans proclaim a more open AI business development environment in their locales.
I do wonder how sexual deepfake laws will play out. Will Republicans support criminalizing such deepfakes as part of their tradition of sexual regulation and censorship, or will they hold back as a pro-business move? Will Democrats push fiercely for deepfake controls as part of their 2024 gender politics, or step back in response to their massive electoral defeat?
How will states, counties, and cities respond if, as I hypothesize, Trump supports AI-infused robots? I suspect we’ll see some polities resist self-driving cars, like Waymo’s recent expansion.
At the level of businesses and individuals, there are some interesting players and options in the mix. Elon Musk is currently a high-level Trump ally and booster. I wrote “currently” because Trump famously burns through people at a furious rate, and he could easily dump or sideline Musk in a year or less. In the meantime, we might see Musk’s Grok AI become more friendly to the GOP while also gaining Republican-aligned investment. Musk might lobby Trump for his version of AI regulation, including monitoring the technology for existential threats to humanity.
Accelerationists on the right side of the aisle might find Trump an ally of sorts and double down on AI. This could take the form of a newfound push for longtermist thinking and projects, or new investments in various Singularity-related ideas (mind uploading to the cloud, mind extension into robots, etc.). We should also expect accelerationist splinter groups around Trump, like a pronatalist sect or an America First version which steers the US toward a posthuman, galactic existence.
Yet there are also some high-profile pro-Trump AI opponents. Tucker Carlson is apparently one such, warning against AI as a threat to jobs and human autonomy, calling for people to nuke server farms around the world, and proclaiming “There is no upside to AI.” I also wonder if right-wing populists would celebrate AI as a threat to certain jobs: lawyers, government officials, bureaucrats, educators.
Then there are AI firms just lobbying for support and mindshare. OpenAI apparently just pitched the emerging Trump administration on backing AI. The pitch isn’t public yet, as far as I can tell, so I have to run with this Washington Post summary.
I am curious how AI firms which agreed to the Biden administration’s safety guidelines will act now.
Some wild card possibilities
All of the preceding is based on evidence from the past: Trump’s first administration (like his AI statement), his third presidential campaign, recent statements. That’s fruitful ground for futurists, but it does run the risk of being biased towards continuity and more predictable developments. I want to be open to possibilities that are less predictable, more strange: wild cards, in professional parlance. These are based on some signals and trends, but are more unlikely and chaotic - which is appropriate to the topic.
Several ideas:
Trump suddenly turns against AI, fearing it as a threat to American workers or even to birth rates (too many people living primarily online), and pushes for firm regulations. Some investors, spooked, withdraw support.
At some point during the next four years one American AI project produces something which convinces a good number of people that it is artificial general intelligence (AGI). Does the Trump team celebrate and back its implementation everywhere, or fear and want it quashed?
Same idea as above, but the AGI appears in China. Or Europe.
Generative AI suffers a major cutback or dieback from one or several sources: quality collapse; financial failure; courts ruling against LLMs on copyright grounds; major public revulsion. Would a Trump team try to keep AI afloat, or abandon it because the president despises losers?
This is all for now. You can see that the topic has many, many moving parts, as well as a great deal of uncertainty.
I’ll keep tracking this as part of my scanning posts. I might follow up, time permitting, with a visualization of this post or another one looking at secondary effects and possibilities.
(thanks to Ruben Puentedura for discussion and Tyler Cowen for one link)