The question I’ve heard more than any other for the past half-year has been, “What might AI mean for higher education?”
There are many variations of this. Will ChatGPT kill the student essay? Is my job (as faculty member, staffer, publisher) in danger? What kind of new jobs are colleges preparing students for, if any? Will my campus continue to exist, or transform into some strange new institution? Can I help my students learn more? Together, these questions coalesce around a single intersection: that of a complex, troubled academic ecosystem with a rapidly growing AI industry.
This newsletter is all about that intersection, with an eye towards its future development. Today I’d like to help frame a way of thinking about it by zooming out to the big picture of how AI might engage other domains of civilization, and then how those forces might reshape higher ed. I’m doing this to situate, strategically, where our operational planning and action can proceed.
For today’s purposes I want to limit the time frame to just the rest of 2023, and to assume that generative AI (art creation programs like DALL-E, text generators like ChatGPT) improves gradually, not radically, over that period. I should also issue a caveat: what follows raises many more questions and possibilities than certainties.
There are many ways to proceed with futures work at the macro level. Some futurists use versions of the delightfully named PEST method, which invites us to consider the Political, Economic, Social, and Technological domains. The more formidably acronymed STEEP approach adds Environmental for a second E, and it’s an approach I often use, at least as a start. For today I’ll stick with PEST, as I want to devote a full newsletter to that second E later on.
We can start with the economic domain, as that’s where many questions come from. “Will AI take jobs from those now working at them?” is the usual query. Right now… it’s very unclear. Jobs are complex entities, consisting of many functions, some of which change over time. We’re starting to see AI help workers perform some functions of their jobs, but not all. My favorite boring, non-sexy example of AI used in real life is this story about realtors using ChatGPT to generate sales copy. In this case software takes up part of one function of the job: writing text to sell houses. No realtors have lost their jobs as a result (yet! I can imagine someone botching the writing badly enough to have to exit the field).
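To make that adaptation concrete, here is a minimal sketch of the realtor workflow in code. It assumes OpenAI’s Python package (the v1 client) and an API key in the environment; the property details and model choice are mine, purely for illustration.

```python
# A minimal sketch of the realtor use case: asking a chat model to draft
# listing copy. Assumes the openai Python package (v1 client) and an
# OPENAI_API_KEY environment variable; the property details are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

property_facts = (
    "3 bed / 2 bath colonial, 1,850 sq ft, renovated kitchen, "
    "half-acre lot, walking distance to the elementary school"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You write warm, concise real estate listing copy."},
        {"role": "user",
         "content": f"Draft a 100-word listing for: {property_facts}"},
    ],
)
print(response.choices[0].message.content)
```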
We can imagine other jobs going through a similar adaptation process. Academics might face this process in our work, on two levels. First, we incorporate more AI into our various tasks: research, grant writing, memos, student feedback, recommendations, etc. Second, we shift our teaching to help prepare students for such adapted careers.
On a different economic level, we can also turn to the historical record of the past 200 years for another view. Broadly speaking, the sequence of industrial revolutions surfaced a variety of new machines and practices which made existing functions and jobs obsolete. Trains, then cars displaced horses for transport. Gas lamps, then electric lights rendered wax and tallow candles obsolete. Yet generally, when part or all of an economic niche faded, a larger one appeared, yielding a net gain in employment and wealth for the population. For example, a New England industry shipped lake and pond ice down the Atlantic seaboard to be used for cooling in the Southeast. The inventions of refrigeration and air conditioning rendered that trade obsolete, but resulted in a far larger industry. “Creative destruction” is one term for this process.
Will AI follow the same path? We can imagine copywriters of all kinds falling away like horse jockeys after the automobile, but being succeeded by new jobs and businesses. AI ethicist seems likely to be a job in high demand (if not one Google is necessarily comfortable with). Prompt engineer or prompt instructor - someone who helps others craft the most effective text input for an AI - is already surfacing as a useful function; it’s not a stretch to imagine it becoming a full-time profession.
Alternatively, the age of AI might break with that industrial record and yield fewer net jobs, or less work overall. We might not replace the jobs software devours. For example, I can imagine (if not celebrate) the Hollywood writers’ strike ending by December with fewer writers employed by the studios, and all involved using AI to create images, storyboards, pitches, scripts, ads, etc. The number of artists (making images) declines similarly, as people drop out of the field when they can’t compete on price with Midjourney. Some people work on the wrangling or administrative end in these fields, handling the complexity of multiple software packages and their relationships with the world, but they are very small in number. New AI drives down employment overall, giving rise to an era of widespread unemployment, which could take various forms: a large population out of work, fewer working hours, reduced work demands, an expanded social safety net, etc.
Speaking of deep economic possibilities, we need to bear in mind some divergent paths coming up quickly within the emerging AI industry. Right now it’s a field of giants, dominated by massively capital-intensive entities: OpenAI, backed by Microsoft; Google/Alphabet; Amazon; Meta. Most of the other digital giants seem to be using their resources to craft their own AI strategies and offerings. Will this pattern continue?
Possibly not. There are possibilities for producing good quality AI at much smaller cost. Open source applications are developing fast, as this famous Google memo warned. Some approaches allow training on far smaller datasets. If either or both of these paths prove viable, we could see an explosion of medium-sized, then small AI providers.
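To give a sense of how low the barrier already is, here is a rough sketch of the small-and-open path: running a modest open model locally with Hugging Face’s transformers library, no data center required. The model here (GPT-2, roughly 124 million parameters) is just one example; swap in whatever fits your hardware.

```python
# A rough sketch of running a small open-source model locally with
# Hugging Face transformers; gpt2 is one example among many.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The future of open-source language models is",
    max_new_tokens=40,
    do_sample=True,
)
print(result[0]["generated_text"])
```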
I’m also expecting the huge influx of investment to start calling the AI tune. As Carlota Perez pointed out in her underappreciated Technological Revolutions and Financial Capital, capital initially supports the blossoming of innovation, but then shifts to discipline it towards business needs. If we have a Gartner hype cycle crash before December, watch for investors to continue shaping the AI which survives.
Let’s set aside the macro economic picture for the moment - we can return to it later on - and move on to the political domain. This is trickier to wade into, given the intense polarization and degradation of American politics, not to mention one hot war (Ukraine) and the escalating Cold War 2.0 between Washington and Beijing, but we can identify some key points:
Regulation. While legislatures often lag behind technologies (I’ve experienced, and heard, some true horror stories), they do have a tendency to eventually aim policies at novel things. Several European nations have already moved quickly to push AI companies to change some details. The EU and the United States could enact some laws over the next half year, especially if goaded by events. The Chinese government already takes a more active role in promoting AI; it’s an open question how that might change under political or economic pressure.
Intellectual property, 1. Mandatory caveat: I am not a lawyer, although I have consulted with law schools. When it comes to copyright, there’s a lot of scrambling going on, based on a philosophical yet also legal question: what is creativity? Can software count as a creator and thus own copyright? So far rulings have said no, reserving creativity and IP for the humans using these tools. People may contest this, if the history of copyright policy and case law is any guide. Further, there is the radical question of AI datasets including immense amounts of copyrighted material. For example, consult this clever Washington Post resource, which lets you check the URLs of content to see if they’re within the splendidly named Colossal Clean Crawled Corpus. (Some of my blogs and articles are.) Creators, especially visual artists, have cried foul, and rightly so, since what is going on is clearly unauthorized, uncompensated use. American fair use law shouldn’t apply here, since most AIs are aimed at turning a profit. There is the very real possibility that copyright action could stymie AI growth or cut back the existing industry.
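For the programmatically inclined, here is a deliberately naive sketch of the same kind of lookup, using the copy of C4 that the Allen Institute for AI hosts on Hugging Face. The Post’s tool relies on a prebuilt index; streaming the corpus like this is far too slow for real use, and the target URL is just an illustration.

```python
# A naive, illustrative check of whether a site appears in the Colossal
# Clean Crawled Corpus (C4), streamed from the copy hosted on Hugging Face.
# Scanning all English-split records this way is impractical; real tools,
# like the Washington Post's, work from a prebuilt index.
from datasets import load_dataset

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

target = "example-academic-blog.org"  # hypothetical site to look for
for i, record in enumerate(c4):
    if target in record["url"]:
        print("Found:", record["url"])
        break
    if i >= 100_000:  # give up after a small sample
        print("Not found in the first 100,000 records.")
        break
```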
Intellectual property, 2. When it comes to trademarks, another branch of IP, one wonders if AI app coders will build in guardrails to prevent generation of, say, a certain charismatic footwear swoosh, or a beloved rodent, etc. Since most IP action is based on people suing each other, we should anticipate suits along these lines, to which app owners and lawmakers may respond with new strictures.
Geopolitics. The American and Chinese governments are engaging in technological competition along multiple fronts, including AI, and this can easily take restrictive forms. We’ve seen an anticipation of this with many American states trying to restrict access to TikTok; imagine the same for overtly AI products and services. Beijing might encourage Chinese computer scientists to be less friendly with American colleagues, to share less information, and to send fewer graduates to United States grad programs.
Using AI for political purposes. This is an easy one. We’ve already seen ad agencies use AI to create political images, some of which appeared in video clips. We should expect that many government officials, politicians, and nonprofit staff are using chatbots to workshop ideas, generate text, and to develop strategies. By December 2023 we could see AI across the world of politics, both overtly and behind the scenes. Instances could easily spark outrage or support.
Such public reactions bring us to the social or cultural domain. The views people form about a technology, the way individual choices build up into mass perspectives, and how we seek to influence those decisions all exert a powerful force on a technology’s use. In 2023 AI seems to divide people along a wide spectrum of views. There are enthusiasts; practical users, like those realtors; those who haven’t tried AI or just didn’t get it to work right; and active opponents - not to mention those who haven’t thought about the tech or simply aren’t interested... much as people have responded to many new technologies, historically.
How will hundreds of millions, then billions of people make up their minds and express their thoughts about artificial intelligence? These positions might crystallize over the next six months. One AI school of thought could come to the fore in a given nation, population, political party, or industry. Think of a city or province where AI enthusiasm takes the lead, or a nation where an active majority despises the stuff. As with the political domain, a spectacular incident could drive such opinions - viz, a celebrity throwing themselves behind AI, or a telegenic disaster credibly blamed on the technology. These emergent views could develop into ideologies or schools of thought. Naturally they may compete.
At a different level, new AI may change culture in a very material way over the next six months, simply by helping us make a lot more stuff. Craiyon, DALL-E, and Midjourney are already helping some number of people create images for web pages, social media, videos, posters, and slideshows. ChatGPT, Bard, Bing, and others have enabled a torrent of textual content, from new Amazon ebooks to emails, memos, summaries, and much else. This has all kinds of potential implications, which we’re just starting to realize:
Reducing some people’s anxiety around writing.
Exacerbating the information overload issue.
Switching the nature of writing from composition to a mix of prompt engineering and editing.
Inventing “AI writing” detection as a principle of reading and also of software development (see the sketch after this list).
Causing a rethink of what authorship is (what percent of a text must one manually enter to be considered its creator?)
Expanding the amount of content some deem problematic: disinformation, misinformation, plagiarism. Explicit abuse may be more challenging to create, given the guardrails on the AI giants, but users can be creative.
Determining the truth of a given object will become increasingly challenging. It will also summon up entrepreneurial energies of all kinds.
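On the detection point flagged in the list above, here is a toy sketch of one common approach: scoring a passage’s perplexity under a language model, on the theory that machine-generated text tends to look less surprising to such a model. Real detectors are more elaborate, and all of them are unreliable; treat this purely as an illustration.

```python
# A toy "AI writing" detector: lower perplexity under a language model is
# (weakly, unreliably) suggestive of machine-generated text. Illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# Bland, predictable prose scores low; strange prose scores high.
print(perplexity("The quarterly report indicates steady growth in all sectors."))
print(perplexity("Zebra umbrellas quarrel softly beneath mathematical thunder."))
```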
In short, we might have more culture by December than we would have, minus large language models.
Now we can circle back to what seems like the most obvious macro domain to consider, that of technology. The questions I’d ask here are: how do other technologies interact with AI, and what impact might those connections have?
Already companies and individuals are plugging various AI software into other applications through APIs, such as Microsoft adding its Copilot service across its Office suite. Chat-generated text and images are already appearing across the web, from articles (ahem) to avatars, and fulfilling the role of stock photos (ahem again). Over the next few months we can envision producers of video, games, and (any surviving) virtual worlds using tools like Midjourney to populate their content more quickly than humans usually do. Users may turn to generative AI to assist their 3D printing efforts, yielding physical products shaped to some degree by the software. Many technology providers may consider using AI in their interfaces, especially by expanding their chatbot functions. In addition, there’s the huge potential - or specter - of AI’s role in coding, whether as assistant, co-creator, or programmer replacement. Meanwhile, the potential tidal wave of cultural content sweeping across the networked world could challenge current technology to cope, as search engines navigate a massive new stratum of web pages.
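As one small illustration of that wiring, here is a hedged sketch of asking an image model to stand in for a stock photo, again assuming OpenAI’s Python package (v1 client) and an API key; the model name and prompt are mine.

```python
# A sketch of generating a stock-photo substitute via an image API.
# Assumes the openai Python package (v1 client) and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A sunlit university library reading room, watercolor style, no people",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # temporary URL for the generated image
```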
Domain after domain, PEST, possibility upon possibility - while I’ve broken these down by individual categories, all of these domains interact with each other, as economics and politics coincide and technology and culture intertwine. Widespread technological unemployment could well yield political pressure to regulate AI more steeply, or just ban it. Cultural pressure to restrict AI can convince legislators in one area to restrict the tech, while giving another nation the opportunity to be more welcoming. New technological implementations offer new ways to tell stories.
All of this forms a context for academics as we reflect on how AI impacts our work. One new use of AI with a preexisting technology can change a job market, which alters how a college or university prepares students for that work. National regulation will alter the availability of the AI we might consider using in our research. Policies and practices aimed at detecting AI-generated images and writing force us to adjust how we use those tools in teaching and in the other operations of a campus. Chatbots and chatbot detectors seem headed for an escalating arms race, which forms the ground of teaching and assessing writing. And so on.
I’ll pause for now. This is a lot for one newsletter and you can see how the possibilities open up. I’d really like to hear your responses and your own thoughts about how this frame works for you, and where things might be headed through December 2023.
Recent AI finds of note: