One vision of the future of AI in academia
Responding to a Chronicle of Higher Education article
Greetings from a turbulent May, friends. There’s a great deal going on in the AI space, and I have many posts in the pipeline. Today’s is going to focus on one article about AI, a fascinating essay which imagines a powerful transformation in higher education.
It’s paywalled, so I will summarize its main points, then add some reflections.
In his vividly written “Are You Ready for the AI University?” Scott Latham depicts two visions: first, a transformation of today’s colleges and universities; second, new, AI-centered institutions.
For the former, Latham starkly describes decline as AI increasingly suffuses the academy. Campus staff will gradually shrink. To his credit, the author notes adjuncts and part-timers will be the first to go. “Over the next decade, through attrition and displacement, AI is going to decimate faculty ranks.” And: “Will jobs be lost? Yes, sadly and permanently.”

Interestingly, the author is bullish on humanities faculty, because “as AI fully assimilates itself into society, the ethical, moral, and legal questions will bring the humanities to the forefront.”
In his view, students will prefer AI in classes, due in part to their life experience but more to the technology’s personalization and convenience. For example,
In the near future, if a student misses class, they will be able to watch a recording that an AI bot captured. Or the AI bot will find a similar lecture from another professor at another accredited university. If you need tutoring, an AI bot will be ready to help any time, day or night. Similarly, if you are going on a trip and wish to take an exam on the plane, a student will be able to log on and complete the AI-designed and administered exam. Students will no longer be bound by a rigid class schedule. Instead, they will set the schedule that works for them.
Latham envisions campuses supplying AI agents, which become central to student experience:
Students will no longer sign up for courses; they will work with their AI agents to build personalized instruction. A student who requires a biology course as part of their major won’t take the standard three-credit course with a lecture and lab that meets for 14 weeks with the same professor. Instead, the student will ask their AI agent to construct a course that transcends the classroom, campus, and time. The AI agent would find expert scholars across the globe, line up real-time or recorded video lectures, and simultaneously incorporate material from YouTube, Google, and university libraries. If the AI agent can’t find lab space on campus, it will help find a lab with capacity halfway across the world and enable the student to participate using an augmented-reality headset. AI agents that have evolved with a student throughout college will be able to design assessments that reflect that student’s learning style, ensuring they have achieved fluency in the subject.
Remaining faculty increasingly work with AI, “becom[ing] technologists as much as scholars.”
They will need to train AI in how to help them build lectures, assessments, and fine-tune their classroom materials. Further training will be needed when AI first delivers a course. While students will readily accept non-human professors, any new technology will have “hiccups” that will require human intervention to resolve. Once the training wheels are off, AI-taught courses will become the dominant paradigm.
Outside of the classroom, Latham imagines AI playing a larger role in scholarship, including accelerating research and improving logistical processes. As part of that, grantors will want to see more AI in applications. Beyond research, the article describes increasing automation of other staff functions now performed by professionals in admissions, financial aid, the registrar’s office, career services, and assessment. Even senior administration is not immune:
Major organizational decisions related to expanding into new markets, building new laboratories, launching new programs, increasing housing capacity, investing in athletics and student life, and allocating resources will all be made with the guidance of AI. We are already seeing CEOs relying on AI for data-driven, resource-intensive decisions in industries far more complex than higher education, such as the life sciences, health care, and defense.
Having outlined the AI-ization of the academy, Latham then adds a new post-secondary player: AI-centric campuses. He invites us to “[i]magine a university employing only a handful of humans, run entirely by AI: a true AI university.” They will operate at lower costs and can then compete on lower prices. They will also have a different staffing situation than today’s campuses:
AI U will have quite a simple organizational structure. It will have an office of academic affairs with a seasoned provost and team. They will select a tight set of academic disciplines that lend themselves to the early-stage capabilities of artificial intelligence, such as accounting or history. As illustrated earlier, academic departments will have human chairs who will manage AI agents within each discipline.
Latham sees a bunch of startups in this area, with many of them failing. But some will prosper. One notable way for them to succeed:
AI U will target the “degree completer” market: the 40 million Americans with college credit but no degree. For the past decade, higher education has been chasing this elusive market to no avail. We have been trying to put a square peg into a round hole. AI U will help these individuals complete their degrees.
By the end, the author imagines new “AI U”s coexisting with older, AI-suffused colleges and universities.
I applaud Latham for the clarity of his vision and the details he supplies to flesh it out. It’s good to see interesting and thoughtful educational futures work! That said, I would like to raise some challenges to it.
To begin with, I would build on Latham’s scenario by adding some more features to it. Alongside the AI-ified and shrunken academy and the born-AI universities, there should also be individuals using AI to learn on their own. If AI continues to improve and if people’s usage both grows and becomes more skilled, we could see growing numbers of AI-autodidacts (AI-didacts?) studying across the curriculum and beyond. Commercial and nonprofit services might support them, such as tutorbots and personalized mentors. I’m not sure how large that sector would grow, as the self-taught are often a small proportion of the population, but it’s worth noting.
In response, we might see academic institutions try to support them. Much as some K-12 schools and publishers now serve homeschoolers in various ways, some colleges and universities might market themselves to the AI-autodidacts. They could offer human tutors to supplement the bots, along with human personal support (mental health, career advice). They could also offer certification for AI-powered learning: tests and credentials of all kinds. Campuses facing financial or political pressures might see this as a fruitful path to take. Perhaps people would refer to them as “AI finishing schools.”
Some other institutional forms are possible in this post-AI context. Years ago I offered a somewhat tongue-in-cheek scenario for a Retro Campus, which would require all members to leave digital devices at the campus gates. Back then I imagined such a school as a haven for those who dreaded screens and wanted to resist Silicon Valley. It would teach with pre-1995 technologies like books and analog lab equipment. The digital world would appear in the curriculum as an object of critical study. Returning to the present and Latham’s AI-saturated future, we can imagine more interest in the Retro College or Retro University as opposition to LLMs builds.
I would also press on Latham’s note that traditional campuses would persist. He offers this cautionary observation:
Millions of students will continue to want an old-fashioned college experience complete with dorm rooms, a football stadium, and world-class dining. However, these experiences are not mutually exclusive: Even these tradition-bound institutions will employ AI. The market expectation will be that top-tier institutions will provide both an unparalleled student experience and AI-empowered education.
(As a futurist I appreciate when people foresee continuity with the present. When we focus on differences, similarities can fall aside. Yet they do persist.) This seems plausible to me in the American context of the traditional undergraduate experience. However, I would add the likelihood that many people will view the resulting degrees as worth less. Absent any solution to the cheating problem, it seems likely that a significant proportion of the population will look askance at the football-plus-AI model. (I wrote about this at greater length here.)
On a more critical note, there are some passages in the Chronicle article which I think misfire. The article mentions learning styles, and I can’t tell if it means that debunked theory which just won’t die or is simply using the phrase as shorthand for customization. There’s also an offhand comment about teaching which is cruelly dismissive and just wrong: “Do professors really think it can’t narrate and flip through PowerPoints as well as a human instructor?” Besides being wrong, that’s simply going to lose a lot of readers. And I’m skeptical about the mention of blockchain credentials, as those seem to have stalled out.
More salient are some broader problems. One is the assumption that AI will move from strength to strength, gradually conquering the academy and the world. Readers and listeners know I would caution anyone urging such a view, given what I see as the fragility of the generative AI industry. Very quickly: there’s no working business model for LLMs; copyright lawsuits now under way could easily shut down big AI projects; regulation could constrain the field; concerns about climate impacts could change regulation and consumer use; cultural unease is widespread; we might come to see AI’s quality problems (hallucinations, etc.) as fatal. AI *could* survive all of this in some way, of course, especially since the underlying ideas are now widespread, but I think anyone looking into AI in higher education’s future needs to be cautious.
Second, Latham dismisses academic resistance to AI too quickly, in my estimation, as I’m seeing more and more opposition rising across American and European higher education. The article at one point has faculty helping train AI in teaching functions -
Out of the gate, professors will work with technologists to get AI up to speed on specific disciplines and pedagogy. For example, AI could be “fed” course material on Greek history or finance and then, guided by human professors as they sort through the material, help AI understand the structure of the discipline, and then develop lectures, videos, supporting documentation, and assessments.
- and I believe a good number of faculty, especially humanists, will oppose this, at least personally, if not organizing against it at a departmental or institutional level. I’ve written about this repeatedly in this newsletter. There is a lot to say about this, of course, but for now I just want to mention that faculty might not go quietly into the AI night.
Further, and I haven’t posted about this yet (it’s coming!), academic and cultural resistance to AI might become more political than it already is. I’m referring not to politics in general or to the sense that the personal is political, but specifically to American party politics. There is now an argument that AI is fascist on multiple fronts, from aesthetics to an intertwined history of technology and the extreme right. (Helen Beetham is excellent on this score: for example) Others charge generative AI with being a multi-level threat to democracy. I don’t have time to summarize the details of this critique, nor will I take a position on its merits now. Instead, I would cite the obvious: these charges align with Democratic opposition to Trump, seeing him as an anti-democratic authoritarian and a fascist. I would then draw your attention to the Republican party’s embrace of AI, which you can see in Project 2025 and some of Trump’s remarks. Those same Republicans are fiercely attacking academics (I humbly point to my video series on the topic). Academic opposition to AI and to the GOP can coincide and reinforce each other, especially if the Trump administration pushes for AI in higher education, as it recently did in K-12. I’m waiting for the administration to more openly advocate for AI in higher ed. Here’s one example of the AI-Trump link from the progressive side just this morning.
I do wonder to what extent the Democratic party might take a position on this score. Its progressive wing has been increasingly anti-Silicon Valley for some time. Perhaps opposing AI in higher education will appeal to some Democratic leaders, at least in blue states. Perhaps old union politics will reappear in states where faculty and staff are more likely to be unionized. Should Democratic interest in the AI-ization of the academy appear (and it might not), then the GOP might double down on it, as is their habit. Then, to bring this back to the academy, academic rejection of AI might become an expression of party politics, with Democratic faculty, staff, and students opposing AI as part of their opposition to Trump. To return to Latham’s article at last: academic anti-AI activism would thus be even more energetic.
One more bit of pushback: Latham’s AI future seems predicated on screen-based interfaces, unless I’m misreading it. Perhaps people who feel there are already too many screens in their lives would prefer to step away from them for their academic experience, at least to some degree. Yet there might be a second stage to the AI-ization of higher education, once robotics and interfaces advance further. Rather than thumbing text into an iPhone app or reading text scrolling along a laptop window, we might start asking verbal questions of bots standing near us and listening to their audible responses. Alternatively, we might feel comfortable interacting with a screen-based or projected virtual being who appears in two dimensions, or some version of three, presenting as a character rather than a chat box. I’m thinking today of Gibson’s Idoru and Krieger’s waifu in Archer, yet there are many other fictional examples.
To sum up: Scott Latham offers a powerful and potentially disturbing vision of the post-AI university. It’s very much worth thinking through.
PS: There have been some interesting articles on AI and education over the past month. Perhaps I should do some more of these posts on them.
I appreciate the criticism. For me it is simply about this technology welding itself onto the rest of work, society, and pretty much everything. So we must adapt. But I think we can, and there will be new jobs that mix tech with humanities: https://www.collegetowns.org/p/ai-wrangler-job-of-the-future-combines
I am curious why he did a 180-degree turn from his earlier piece in Inside Higher Ed, where he advised the professoriate to resist AI - https://www.insidehighered.com/opinion/views/2024/06/14/memo-faculty-ai-not-your-friend-opinion