Higher education and AI in late 2025/early 2026
Another scan of developments aimed at the new year
Greetings from winter storm Fern, dear readers. I’m writing this from the Washington, DC area, where around six inches of snow descended, followed by six hours of a kind of freezing drizzle. Hours with shovel and snow-blower have made up for the gym being closed today, along with most of the local economy. Hopefully the electricity will hold out long enough for me to share this newsletter with you all.
In this freezing context, I’d like to return to examining what’s been happening recently with higher education and AI. Quite a lot has been going on over the past few months.
I’ve broken up what follows into several categories: AI, teaching, and research; scholarly publication and libraries; critiquing and opposing AI; general observations and research; and some reflections.
(If you’re new to this newsletter, welcome! This is one of my scan reports, which are examples of what futurists call horizon scanning, research into the present and recent past to identify signals of potential futures. We use those signals to develop trend analysis, which we can use to create glimpses of possible futures. On this Substack I scan various domains where I see AI having an impact. I focus on technology, of course, but also scan government and politics, economics, and education, this newsletter’s ultimate focus.
It’s not all scanning here at AI and Academia! I also write other kinds of issues; check the archive for examples.)
AI, teaching, and research
Over the past few months I’ve seen many stories of academics using AI for teaching and research purposes. Purdue University (Indiana) did both, announcing it would expand its AI efforts. On the teaching side, all students will need to achieve “AI working competency” to graduate. What does this entail? The provost will lead departments along these lines:
the goal of this requirement will be to ensure that, from exposure and awareness to skill-building and problem-solving mastery, Purdue students possess job-ready skills and critical thinking competencies to:
Understand and use the latest AI tools effectively in their chosen field(s), including identifying the key capabilities, strengths and limits of AI technologies, as well as ways that AI can transform existing methods, processes and tools
Recognize and communicate clearly about AI use, decisions and limitations, including developing and defending decisions informed by AI-driven insights, as well as recognizing the presence, influence and consequences of AI in decision-making
Adapt to and work with future AI developments effectively and continually
Interestingly, Purdue doesn’t use the term “literacy.” On the research side, Purdue will support AI efforts in food systems, health, manufacturing, military, and transportation. There are also partnerships with Google, Apple, and local K-12 schools.
On a related note, Wayne State University (Michigan) is launching an Institute for AI and Data Science (AIDAS), focused on research. It seems to be small in scale, however:
Vice President for Research and Innovation Ezemenari Obasi said the institute would cost $200,000 over three years and would be funded by the Division of Research and Innovation. After that, he said, the institute was expected to find other sources of funding.
A Universität Zürich (Switzerland) team is starting an interesting project, using open source AI to produce new LLMs for one particular subject area:
A family of 4 billion (B) parameter large language models (LLMs) based on the Qwen3 architecture, trained from scratch on 80B tokens of historical data up to knowledge-cutoffs ∈ {1913, 1929, 1933, 1939, 1946}, using a curated dataset of 600B tokens of time-stamped text
At a similar if smaller scale, an American college student built a small language model trained only on 19th century British texts.
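To make the knowledge-cutoff idea behind both projects concrete, here is a minimal sketch of selecting pretraining data by date, assuming a corpus of time-stamped documents; the field names and corpus format are hypothetical, not drawn from either project’s code.

```python
from datetime import date

# Knowledge cutoffs used by the Zurich team (per the quoted abstract).
CUTOFFS = [1913, 1929, 1933, 1939, 1946]

def filter_corpus(documents, cutoff_year):
    """Keep only documents published up to the given cutoff year.

    `documents` is assumed to be an iterable of dicts with hypothetical
    `timestamp` (a datetime.date) and `text` fields.
    """
    return [doc for doc in documents if doc["timestamp"].year <= cutoff_year]

# Example: build one training set per historical cutoff.
corpus = [
    {"timestamp": date(1905, 6, 1), "text": "..."},
    {"timestamp": date(1921, 3, 15), "text": "..."},
    {"timestamp": date(1940, 11, 2), "text": "..."},
]
training_sets = {year: filter_corpus(corpus, year) for year in CUTOFFS}
print({year: len(docs) for year, docs in training_sets.items()})
# {1913: 1, 1929: 2, 1933: 2, 1939: 2, 1946: 3}
```

Each resulting model variant then only ever "knows" the world up to its cutoff, which is what makes these historical LLMs interesting research instruments.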
Stanford University has been working energetically on AI. One faculty team developed a new way of training AI visual capabilities; their method focuses less on recognizing objects than on the functions of those objects. The authors foresee applications in robotics. A different Stanford effort is using AI to build virtual scientists working in virtual labs. A third deployed AI to design viruses, from the genome up.
Across the country, a group of Harvard University researchers published on an AI application they named PDGrapher, which would help generate new medications. Its code is freely available on GitHub. A University of Virginia professor wrote up his experiments using AI agents to help academics conduct economics research.
Back to the learning world: one of the founders of the learning management system/virtual learning environment industry, Matt Pittinsky, wrote up his thoughts on how AI might change the LMS/VLE. Briefly, he sees AI powering a massive new feature set around personalized learning, which will become the lion’s share of the LMS. He also envisions chatbots becoming the interface through which instructors create and redesign classes in the LMS.
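To make the personalization piece of that vision concrete, here is a minimal sketch of the kind of learner record and sequencing logic an AI-driven LMS might maintain. Every name here is hypothetical; this is not Pittinsky’s design or any vendor’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One unit of learning content the system can sequence."""
    title: str
    skill: str
    difficulty: int  # 1 (intro) .. 5 (advanced)

@dataclass
class LearnerState:
    """Per-student record a personalization engine might update."""
    name: str
    mastery: dict = field(default_factory=dict)  # skill -> 0.0..1.0

    def next_activity(self, catalog: list) -> Activity:
        # Target the learner's weakest skill, at a difficulty
        # just above their current mastery level.
        weakest = min(self.mastery, key=self.mastery.get)
        candidates = [a for a in catalog if a.skill == weakest]
        target = int(self.mastery[weakest] * 5) + 1
        return min(candidates, key=lambda a: abs(a.difficulty - target))

catalog = [
    Activity("Intro to supply curves", "microeconomics", 1),
    Activity("Elasticity problem set", "microeconomics", 3),
    Activity("Essay: market failures", "writing", 2),
]
student = LearnerState("Ada", mastery={"microeconomics": 0.4, "writing": 0.7})
print(student.next_activity(catalog).title)  # "Elasticity problem set"
```

In Pittinsky’s vision, an LLM would presumably sit on top of records like these, both choosing the next step and conversing with the student about it.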
A Georgia State University - Perimeter College professor described an interesting use of AI, teaching students to prompt a chatbot to simulate a conversation with an ancient literary character.
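For instructors who want to try something similar, here is a minimal sketch of wiring such a persona up programmatically, assuming the OpenAI Python SDK; the model choice and persona prompt are illustrative, not the professor’s actual assignment.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona instruction is what students would draft and refine.
persona = (
    "You are Odysseus, speaking shortly after your return to Ithaca. "
    "Answer in character, drawing only on events in the Odyssey, and "
    "say so openly when a question falls outside what Odysseus could know."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What did you fear most on the voyage home?"},
    ],
)
print(response.choices[0].message.content)
```

Of course, students in such an exercise usually work in a chat interface rather than code; the point of the sketch is just how little scaffolding the simulation requires.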
MOOC provider Udemy announced new AI functionality for its online classes, spanning a range of tools:
Instructor AI tools that will enable instructors to transform their trusted course content and underlying learning objectives into interactive microlearning activities.
Adaptive, AI-sequenced experiences, using a mix of short videos, quizzes, and other instructional and active learning content.
Instructor-validated quality, allowing learning content to remain practical and grounded in real-world expertise.
Context-aware interactions, enabling AI-enhanced responses informed by learners’ background, goals and motivations.
Expanded opportunities for instructors, welcoming both existing Udemy instructors and new creators who specialize in short-form, interactive learning experiences.
Some of those echo Pittinsky’s views, like building AI into instructor course development. Adaptive experiences and context-aware interactions suggest personalization.
Scholarly publication and libraries
On the scholarly publication side, an MIT team surveyed hundreds of academic authors for their views on using their material to train AI. The results are quite nuanced and worth digging through (here’s a second part with advice). Some highlights: around half of the authors are open to formal partnerships with AI firms; there is widespread demand for attribution; and there remain much anxiety and many questions.
Along these lines, Northeastern University dean and history professor Dan Cohen described a project that combines a local Model Context Protocol (MCP) server and the Claude AI service into a plugin aimed at giving students an accessible entrance into large scholarly databases.
(Dan also offers the term “LAG, or library-augmented generation.”)
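For readers curious about the plumbing, here is a minimal sketch of what one such MCP tool could look like, using the FastMCP helper from the official MCP Python SDK; the catalog-search function is a hypothetical stand-in for whatever databases the real plugin queries.

```python
from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("library-search")  # server name is illustrative

@mcp.tool()
def search_catalog(query: str, limit: int = 5) -> list[dict]:
    """Search the library catalog and return brief records.

    In a real deployment this would query the institution's discovery
    layer or database API; here it is a hypothetical stub.
    """
    # Placeholder results standing in for a real database call.
    return [
        {"title": f"Result {i} for '{query}'", "call_number": "TBD"}
        for i in range(1, limit + 1)
    ]

if __name__ == "__main__":
    mcp.run()  # serves the tool so a client like Claude can call it
```

Once such a server is registered with Claude, the model can decide when a student’s question warrants a catalog search, which is the "library-augmented generation" move.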
Critiquing and opposing AI
Conversely, there were many cases of academic researchers and institutions critiquing or opposing AI over the past few months. Two University of Sheffield (Britain) professors found that ChatGPT consistently fails to recognize when a scholarly paper has been retracted or debunked. Michigan State University’s extension program is casting itself in opposition to AI, launching “a campaign to position MSU Extension as the antidote to AI slop.”
Some professors are turning to oral exams to mitigate AI-enabled cheating, as in this writeup of a New York University business professor’s approach. A San Francisco State University anthropology professor called on the California State University system to suspend its relationship with OpenAI, arguing that ChatGPT is doing serious mental health harm to students.
General observations and research
On a broader level, Washington University professor Ian Bogost found that college and university students have broadly embraced AI in ways that have changed their study habits, and that “plenty of professors are oblivious.”
It isn’t that they fail to understand the nature of the threat to classroom practice. But my recent interviews with colleagues have led me to believe that, on the whole, faculty simply fail to grasp the immediacy of the problem. Many seem unaware of how utterly normal AI has become for students.
A Chronicle of Higher Education survey found academics divided over the potential impact of AI on their institutions. A bit more than half saw danger to the academy and deemed major structural action necessary, while around one third thought the danger overblown and change unneeded.
Readers may catch a couple of sentences in the report from me, where I aver that colleges and universities have little handle on the cheating problem.
Some reflections
What do these stories tell us about academic responses to AI in late 2025 and early 2026?
Last summer I posted some thoughts on a similar question, and some of those takes still hold. We are still seeing a wide range of academic uses across the curriculum, at multiple scales (from a single student up to an entire university), and a lot of collaborative projects. The deep divide over AI within the academy persists, with opposition and critique taking many forms.
Adding to those: I’m glad to see the open source projects from Zurich, Harvard, et al., both in their use of open source tools and in their sharing of results in the open. I’ve been calling for this kind of work for years.
I’m very curious about Pittinsky’s LMS/VLE vision. It represents quite a transformation for that mature technology. Which established providers will take it up fully? How many startups will aim new projects at that vision?
There’s also that overarching sense of academia falling behind the revolutionary technology. The Chronicle survey and Bogost’s essay depict colleges and universities as institutions, and academic workers as individuals, struggling to keep up and respond well to the challenge.
Are you seeing similar developments at your institutions? Are there other academia and AI stories we should be discussing?
Now, on to other and promised newsletters. More coming up!
(thanks to Bonnie Dede, Will Emerson, Karl Hakkarainen, Steven Kaye, Joe Essid)
Comments
Last night, our university president noted that a "storm" and a "tsunami" are coming for higher ed, from the impact of rapidly advancing AI.
I appreciated this scan of higher education and AI. It reflects much of what I’m seeing on the ground as well.
I teach AI in the Tippie College of Business at the University of Iowa, have been a member of our college AI task force, and am also the author of a textbook (AI in Business: Creating Value Responsibly). From that vantage point, one thing I’d emphasize is that AI’s impact on higher education is less about tools and more about judgment.
At Iowa, we’ve taken a university-wide approach through an interdisciplinary AI Certificate that spans business, engineering, liberal arts, and beyond. That structure matters. AI doesn’t “belong” to one discipline, and neither do the questions it raises about responsibility, sustainability, bias, or human decision-making.
In my own teaching, I don’t treat AI as something to ban or something to outsource thinking to. I treat it as a system students will be expected to work with when they leave campus. They need to use it critically, reflectively, and transparently. That means designing assignments where AI use is explicit, discussed, and evaluated, not hidden. It also means shifting some emphasis from product to process: how students frame questions, evaluate outputs, and recognize limitations.
I try to drive home that generative AI is designed to produce plausible responses based on patterns and probabilities in data, not to establish truth. Veracity is not its objective. Because of that, students must bring domain knowledge to the interaction and critically assess any response AI produces, rather than treating fluency as accuracy.
I share the concern that institutional responses often lag behind student behavior. Students are already using these tools extensively; the real risk is leaving them to develop habits without guidance. The arms race in detecting AI is futile, and students will need AI skills in the work world. Our responsibility, then, is to help them develop the judgment to know when AI helps, when it misleads, and when human insight still matters most.
I’d welcome hearing how other instructors are navigating the tension between educational objectives, academic integrity, students' realities, and workforce expectations.