Greetings from a wet and hot northern Virginia. Summer has arrived in force, drenching us with heavy rains and pushing temperatures into the upper 90s °F (mid-30s °C), along with hefty amounts of humidity.
Two questions for you, dear readers, before this post gets going. First, would you mind if this newsletter offered shorter and more frequent issues? I realize some of the posts have been very long, which might be daunting. Second, would any of you be interested in a podcast version within this Substack platform? I’ve been considering it and also have a sound booth standing ready.
Now, to our point today. Let’s focus our AI-and-the-future gaze on education, and under that header, zoom in on the question of academic integrity. A New York magazine article about students cheating with AI has taken the world by storm. It shares a lot of anecdotes showing the many ways students use LLMs to produce content they were assigned to create on their own. Usefully, the article also reports on how students think and talk about these emerging practices. If you haven’t read it yet and you work in education, I strongly recommend it.
By the end of “Rampant AI Cheating Is Ruining Education Alarmingly Fast” you may feel frightened or outraged. You may note that James D. Walsh depicts no solutions or successful institutional responses. Software purporting to detect AI cheating is a failure and shows no sign of meaningful improvement. Some pedagogical practices can only work on the margins, like oral performance or blue book writing, given limitations of time and handwriting practice. The article doesn’t mention AI giants activating watermarks, which they haven’t done yet; the possibilities there are obscure.
I must admit that the article didn’t surprise me at all. Faculty, staff, and students from many colleges and universities have told me stories like those for years. Some of us have been speaking about the possibilities of AI-enabled cheating for a long time, even before ChatGPT blew up, and then went on to warn about the reality. I hope the article gets more people thinking more seriously about the problem.
Looking ahead, there are several options for academics and others in our ecosystem.
**Full-scale resistance**

We could oppose AI across higher education. I’m hearing this more and more, mostly from humanities and/or writing faculty. The argument is that there’s no way to reduce AI cheating, so we must block the technology. Individual faculty should ban generative AI from their classes by all available means: class policy, plus in-class observation and action (watching for students using AI and telling them to stop, with sanctions). Campus IT should block AI at the enterprise level.
Readers will no doubt offer objections to this. Blocking AI from campus networks doesn’t prevent users from accessing it through cell phone networks. Outside of classrooms, where they spend most of their time, students will be free to hit up LLMs. And there’s the job argument, that the workforce demands employees with AI skills and it would be malpractice to prevent students from obtaining them. Resisting AI might be both impractical and a bad idea.
**Redesign assessment from top to bottom**

If we don’t fight to remove AI from our institutions, we need to rethink and redo how we assess learning. This has to happen at all levels, from final reports and papers to exams, quizzes, and discussion board posts. It would have to occur across all disciplines. Some pre-AI practices could come in handy here, like emphasizing the assignment process over the final product. A series of new assessment practices has also emerged among professors experimenting with AI-enabled instruction.
One huge obstacle: higher education generally doesn’t have the capacity for such a vast undertaking. Most colleges and universities lack the funding to support faculty and staff in such a redesign, especially on a short timeline. And a good number of faculty and staff are exhausted, burned out, or terrified for a variety of reasons: COVID-19 and its aftermath, institutional financial stresses, the impact of Trump’s second administration, dangerous local or state politics, and more. Consider how the largest proportion of instructors, adjuncts, is especially challenged: as a rule they have little institutional support, they are incentivized to be forgiving toward students, and they often work multiple, additional jobs.
Another, related obstacle: such a deep rethink of assessment entails rethinking pedagogical practice. That runs into the resource problem just mentioned, as well as the challenge of a lot of skilled professionals having to rethink their teaching in a hurry, under new pressures. So rebooting assessment is an enormous task for an overworked and exhausted professoriate and support staff.
**Plan on AI going away**

There is historical precedent for this. We can find many technologies which bloomed then withered, either disappearing or existing only on the margins: pneumatic tubes, 8-track tapes, Second Life. Some people today crow about the decline of blogging. There are also examples of technologies which we regulated so strongly that they didn’t dominate the world, such as nuclear power for electricity or bioengineering.
There are also established human responses to technology and innovation which power such an attitude. I already mentioned exhaustion, and can testify that some overwhelmed faculty and staff greet AI with resignation: “Here’s *another* threat to the academy, one that’s enormously complex and difficult to follow? No thanks/I’m retiring soon.” And I can cite many conversations with faculty members over the years who insisted that certain technologies were fads, soon to disappear: the internet, the world wide web, digital spreadsheets, open education resources.
Specifically concerning AI, these academics can also draw support from the emerging industry’s fragility. As I keep reminding people, there are reasons that the sector might shrink or collapse: the barrage of copyright lawsuits; government regulation; popular opposition; the lack of a profitable business model; the possibility of declining output quality.
**Accept that higher education’s reputation is declining**

If this goes on - that is, if generative AI continues to exist at an acceptable, consumer level; if academia doesn’t reboot assessment; if colleges and universities fail to bar AI from our communities; if no regulations appear to help us - then we should expect the public’s estimation of academia to drop. The American public has already soured on us over the past decade (for example). Now they will become rightfully skeptical of degrees and graduates, assuming some significant amount of cheating is now baked in. Yes, cheating is as old as the academy (think fraternity house files), but now it’s much, much more extensive. This worsened attitude will have implications across the board, from employers’ hiring practices to government funding and philanthropic support. It might also depress enrollment.
**Transform higher education for a post-AI world**

Alternatively, we could embrace the unfolding AI revolution and redesign academia in response. Not just grading, but also how we do research, how campus operations work, and how we structure our institutions. This approach obviously runs into the problems mentioned above.
…whenever I offer multiple possibilities for the future, including scenarios, it’s worth remembering that two or more might come to pass at the same time. Higher education on the global stage is a broad, deep, and complicated ecosystem with all kinds of regional, national, and local variations. We could easily see multiple academics, units, and campuses trying out all of these options.
Which are you seeing in your world?
I don’t see how we get away from #3. You can’t ban it. It’s not like bioengineering or nuclear weapons. You need a lot of technology (just ask the Iranians or North Koreans or Pakistanis) to build nuclear weapons. There are a lot of choke points.
Banning AI is like trying to ban the wind from an outdoor wedding.
As I have blogged about before, the real threat AI poses is not to learning but to established systems of education. We have to get past this idea that learning and education are the same thing.
We can protect learning and go well beyond that to augmenting it if we are willing to see AI for what it is, face the flaws of our existing processes (like extrinsic motivators such as grades and everything that flows from them), and imagine new possibilities for our students and citizens.
If we don’t do those things, the existing systems are doomed in any case. If we do, we can reinvent our institutions for a post-Gutenberg world. I am hopeful that we can do the latter or that alternative systems will emerge that do embrace those possibilities.
Our priority needs to be to protect and augment our humanity. It’s inhumane systems (like assembly line learning and adjunct gig workers) that produce the outcomes everyone seems to fear the most, not AI.
There is no “AI cheating crisis.” It is a crisis of “academic rigor,” and of education’s failure to adapt when technology catches up with its tradition of teaching to the lowest-hanging fruit of learning outcomes.