49 Comments
User's avatar
The Mental Forge's avatar

There is no “AI cheating crisis.” It is a crisis of “academic rigor,” and education’s failure to adapt when technology catches up with its tradition of teaching to the lowest hanging fruit in terms of learning outcomes.

Expand full comment
Bryan Alexander's avatar

Mental Forge, can you say more about rigor? That is, are you seeing a decline in rigor for various reasons (COVID, for example) which predates AI? Are we not being rigorous enough in assessment?

Expand full comment
The Mental Forge's avatar

Hey Bryan, James Hammer here… former educator (23 years in the classroom) and CEO for Mental Forge Media.

Here’s my response piece- not specifically to your article, but to the general “AI is cheating” mantra that’s making the rounds on social media.

https://open.substack.com/pub/thementalforge/p/the-ai-cheating-crisis-is-actually?r=4h4cyo&utm_medium=ios

A couple of the main points regarding rigor involve the reality that 60-70% of students were cheating prior to AI, 47% of tenured academic faculty acknowledge academic rigor has been in decline for decades- 33% admit reducing rigor themselves in their own courses. One analysis suggests the decline has been happening over the last 40-45 YEARS. Cheating remains consistent in only about 10% of current student work, even with AI.

In 2020, the transition to effective online learning because of the pandemic should have been like flipping a switch, but the fact it was a disastrous debacle left education with egg on its face and the entirety of the U.S. population with its confidence in our education system rocked to its core.

Technology’s encroachment into education, and the resistance in the system to embrace it and teach kids how to effectively use it (because of the difficulty in controlling it) has led to an increasing dissonance between what classroom learning could be and what it is… and our students have picked up on this as the entire system teeters toward irrelevance in an increasingly technological world.

Thanks for seeking more info… if I can offer any additional insight, please feel free to ask.

Expand full comment
Bryan Alexander's avatar

Mr. Hammer, many thanks for this substantial answer, as well as for the pointer to your Substack (just subscribed).

That's the arc of declining rigor I thought you were referring to.

I have more ideas about what caused it (start with adjuntification) but let's look to the options we have now, based on your experience and work. How might higher education rethink how we teach with and about technology in order to rebuild rigor and relevance?

Expand full comment
The Mental Forge's avatar

Essentially, if your lesson or assignment could be executed with an AI prompt then it wasn’t about meaningful learning to begin with, but merely information acquisition.

Expand full comment
Ernie Reyes's avatar

The way this comment is phrased isn't entirely accurate. Students will use AI to generate work for any assignment, regardless of how low- or high-stakes it is. If your point is that students are more likely to rely on AI when the material feels vague, overwhelming, or irrelevant to their lives, then I agree. But even so, many students will use AI simply because it's the easier path, and taking the easier path is (has become?) commonplace. I'm not surprised that, as of now, studies show AI hasn’t led to a noticeable increase in cheating, but it’s worth noting that middle schoolers are already using it to complete homework, and both faculty and students anticipate more academic dishonesty in the future due to AI.

I think it's important that we use this moment to reflect, reevaluate, and reassess what we believe and why. As Bryan notes, there are real issues with a lack of funding, energy, and perhaps even interest, especially when changes are expected to happen so quickly. For now, we need to remain critical of the situation at hand and avoid falling into the trap of “shiny object syndrome,” technological solutionism, or simply appealing to novelty.

Expand full comment
Bryan Alexander's avatar

I'm hearing AI use being general, for high and low stakes work alike, so long as it involves writing.

I think we're at an impasse.

Expand full comment
The Mental Forge's avatar

No. I think it’s spot on. I’m not talking about what assignments students might or might not, will or will not use AI for. I’m talking about the rigor of the work, not the behavior of students.

You are absolutely correct, in your assertion students will use these tools on any and all assignments, but we are talking about two different things.

You are also absolutely correct that we need to take the time to reflect, reevaluate, and reassess. Educators need to do this through the lens of what is best for students, not what’s convenient for them. Technology has emerged that schools will NEVER be able to control… sorry, but that’s the reality. If the classroom fails to adjust, it will become completely irrelevant. We’re already halfway there.

The time has come to build learning experiences that are on par with the direction that the workforce as a whole will move. AI is going to significantly impact every industry… people in the workforce are not going to be replaced by AI… they are going to be replaced by PEOPLE who know HOW to use AI.

Being critical doesn’t mean we should maintain a death grip on the status quo of a clearly ineffective system. Nor does it mean we should “follow shiny objects.” It means we need to find that middle ground where wisdom and experience intersect and make the most informed decision we can.

The U.S., in particular, clearly has an “academic rigor” problem that has been laid bare by AI. We must respond.

Creating an economy that blames kids for cheating is disingenuous and is not the answer.

Expand full comment
Ernie Reyes's avatar

When you say, "Educators need to do this through the lens of what is best for students, not what’s convenient for them," I completely agree. It is, however, demoralizing to see so many educators trying to be equity-minded and student-centered (reevaluating their approach) only to still see students lacking interest and continuing to use AI to generate their work. This is perhaps one of the many reasons why a significant number of teachers have been leaving the profession, especially since the COVID-19 pandemic.

You're also right when you say, "Creating an economy that blames kids for cheating is disingenuous and is not the answer." Part of the dialogue I hear from technological solutionists is that AI will cure many of the ills in the education system, yet there is little discussion of the longstanding systemic inequalities that remain in place. AI may be the tipping point that forces us to pause and reflect on the status quo. Thus, it’s important to understand why students feel the need to cheat. On one hand, it’s the easier path; on the other, it may signal that students feel insecure or ill-prepared. If that's the case, we have to ask: why is this so? The longstanding structures of our current education system certainly need a revamp, but it's just as important that we adopt a constrained (rather than unconstrained) view moving forward.

Lastly, I am very wary when you say, "People in the workforce are not going to be replaced by AI… they are going to be replaced by PEOPLE who know HOW to use AI." There’s reason to believe that AI (like the technologies before it) will continue to widen wealth inequality. As Philip Schellekens and David Skilling note, “While AI will, hopefully, boost macro-level productivity, it could widen income disparities within countries, benefiting highly skilled workers, displacing lower-skilled jobs in repetitive tasks, and concentrating wealth among those who control the technology.” This is why it’s important to be critical. If we continue to fall for the allure of the “shiny object” that is AI without fully engaging with it in meaningful ways, the results (much like we’ve seen with social media) will be unintended and potentially harmful.

Expand full comment
Bryan Alexander's avatar

Thank you for thess thoughtful comments, Ernie. You should imagine me nodding at many points: understanding why students cheat, the frustration of equity-minded faculty, the bad problem of widening inequalities.

"AI may be the tipping point that forces us to pause and reflect on the status quo" - this is what I hope. Each major digital technology has defamiliarized some current practices. AI might do this in a big way.

Expand full comment
The Mental Forge's avatar

I think we have to start with pulling our head out of the sand and taking an honest inventory. We can’t fix what we won’t acknowledge is broken.

Educators have to start seeing themselves as the professionals they are and begin building their teaching practice. We’ve bought into a lie that we’re only valuable attached to a campus or district… that’s not true any longer, and with artificial intelligence, we can build and scale and translate curriculum like we’ve never been able before.

Schools have to stop wasting billions of dollars on trying to control technology, and start teaching students how to cope with the reality of its presence and its integration into every aspect of human life.

This has to start in elementary and secondary education, then it will make its way to higher education.

Professors and tenured faculty have to stop seeing themselves as siloed experts and embrace their learning community AS community and embrace the expertise that comes from areas and departments that don’t necessarily resonate with their perception of academic value.

We have to re-examine our entire assessment structure and build assessments that delve deeply into higher order thinking skills. Ironically, AI can help us create those assessments with excellence, if we simply ask the right questions.

There is no silver bullet, and it will take time… time our students just don’t have the luxury of wasting while we bicker about protecting a system that clearly no longer meets their needs in preparing for the real world.

I hope we’re able to make that happen… we can’t afford the alternative.

Expand full comment
Robert Beutner's avatar

Thank you Brian. This, I believe, captures the moment. I can tell you, as an Instructional Technologist I have been fielding questions about ways to track use of Ai and the response that I have given is met with a meaure of frustration and resignation. And I can atest that the fatigue is real. I think the first best step, is to start a conversation across the curriculum where a shared language can be developed about the ethical uses of Ai and the intended uses of Ai so that both the faculty and students have a starting point.

Expand full comment
Bryan Alexander's avatar

Good thoughts, Brian. Have you seen any examples of such a conversation that we could draw upon?

Expand full comment
Tom Haymes's avatar

I don’t see how we get away from #3. You can’t ban it. It’s not like bioengineering or nuclear weapons. You need a lot of technology (just ask the Iranians or North Koreans or Pakistanis) to build nuclear weapons. There are a lot of choke points.

Banning AI is like trying to ban the wind from an outdoor wedding.

As I have blogged about before, the real threat AI poses is not to learning but to established systems of education. We have to get past this idea that learning and education are the same thing.

We can protect learning and go well beyond that to augmenting it if we are willing to see AI for what it is, face the flaws of our existing processes (like extrinsic motivators such as grades and everything that flows from them), and imagine new possibilities for our students and citizens.

If we don’t do those things, the existing systems are doomed in any case. If we do, we can reinvent our institutions for a post-Gutenberg world. I am hopeful that we can do the latter or that alternative systems will emerge that do embrace those possibilities.

Our priority needs to be to protect and augment our humanity. It’s inhumane systems (like assembly line learning and adjunct gig workers) that produce the outcomes that everyone seems to fear the worst, not AI.

Expand full comment
Bryan Alexander's avatar

Well said as always, Tom.

"and imagine new possibilities for our students and citizens" - sounds like the last option, no?

Expand full comment
Dariusz's avatar

👏👏👏 Couldn’t agree more.

Expand full comment
Stephen Fitzpatrick's avatar

Full scale resistance and planning on AI going away are guaranteed to make higher education obsolete. It’s absurd to think you can keep this technology out of the classroom. But number 2 is hard. Really hard. Which means it will be a bumpy ride. The widening gap between skeptical resistors who are praying this will all just go away and prognostications like AI 2027 is getting more and more pronounced. For a really bonkers set of predictions, read Ross Douthat’s interview with Daniel Kokotajlo, one of the authors of AI 2027. Which is the more likely reality we will be living in? I don’t think it’s the one where AI just withers and “goes away.”

Expand full comment
Bryan Alexander's avatar

AI 2027 is on my list to write about.

Good point about the gap widening.

Expand full comment
Joe Essid's avatar

Steve, good point. I privately called the CCCC Chair's "Refusing Generative AI" that Bryan links to "a professional suicide note." It's part of the turn away from techno-rhetoric and a turn to identity politics that have, in my opinion, made that organization lose credibility.

Expand full comment
Stephen Fitzpatrick's avatar

I'm a realist. My HS kids just did a podcast and basically laid out how much they have been using AI this year and how the whole landscape is changing. I have a post coming out tomorrow with some thoughts.

Expand full comment
Bryan Alexander's avatar

Steve, did your high school try to reduce access to AI?

Expand full comment
Stephen Fitzpatrick's avatar

Like many schools, we are still trying to figure things out. Some teachers ban it, others don't (I'm one of the others). Most barely talk about it. There is not a lot of consistency. But we're also looking at an AI tool like flintk12 so we're really all over the place. I don't really know how you "reduce access" to AI without reducing access to the internet which, for obvious reasons, is a big step. It's embedded in everything so it's just not realistic. The only path forward I see is complete transparency.

Expand full comment
Bryan Alexander's avatar

"Most barely talk about it." Wow. Is that resignation?

"I don't really know how you "reduce access" to AI without reducing access to the internet" - same here. You'd need to Faraday cage your classroom, but students get online the minute they leave.

Expand full comment
Stephen Fitzpatrick's avatar

Yup. School year is almost over….

Expand full comment
Ryan M Allen's avatar

Colleges and universities that emphasizes the foster the value of actually being there will be the winners. If classes and programs can easily be replicated in online format, then they are not doing it right. We need to give students a reason to be there. The good news is a lot of places already have this as an advantage. The bad news is many places do not and will likely close in the next 10-20 years.

Expand full comment
Bryan Alexander's avatar

Ryan, your thoughtful comment reminds me of changes to (of all things) movie theaters.

Once home theater experiences improved and streaming services took off cinemas has to figure out how to compete. Hence a tidal wave of efforts: cushy chairs, expanded food offerings, toys to buy, etc.

To some extent parts of higher ed have been doing this quite literally with the amenities arms race. But we should do more.

Expand full comment
Steve Boronski's avatar

There’s a relatively easy way to detect AI cheating that an academic friend told me and it’s to add some random words to the question posed but make the text very small and white on a white background. Then when the student cuts and pasted it into the AI the answer has a random statement within it.

Expand full comment
Tom Haymes's avatar

If there’s an easy way to detect it, there will be an even easier way to defeat it.

Expand full comment
Bryan Alexander's avatar

Steve and Tom, have you seen anyone defeat this yet?

Offhand I (if I were a student) would scrape the assignment and dump it into a text file, then delete everything besides the question.

Expand full comment
Steve Boronski's avatar

The students, at this point in time, are unaware that this is happening and it doesn’t take too much effort to conjure up some statement, even openly, that would make it obvious that the student was using AI.

And to be fair, using AI shouldn’t necessarily be a problem if, and only if, the student has built on the AI.

Expand full comment
Bryan Alexander's avatar

I suspect Tiktok videos will get the word out.

But can you say more about what "the student has built on the AI" means?

Expand full comment
Steve Boronski's avatar

I mean creativity, AI isn’t able to be creative, it might be able to learn and analyse data but it cannot be “creative”.

Expand full comment
Bryan Alexander's avatar

Ah, I see what you mean. Thank you.

Students using AI to enhance their creativity is one pedagogical path.

Expand full comment
Stephen Fitzpatrick's avatar

This is so 2023 ... Is this really where we want to be, playing these kinds of cat and mouse games with students? Student use can now run circles around these techniques. That's not how the more sophisticated kids are using AI anymore - the cut and pasters are going to get caught regardless. Is it really that satisfying to find a set of random words in a kid's assignment?

Expand full comment
Bryan Alexander's avatar

Personally, I'd find it depressing. But how many faculty would fine it necessary or just?

Expand full comment
Gary Bartanus's avatar

As a social constructivist instructional designer, I deeply appreciate your framing of the crisis and your scenario-based approach, Bryan. You’ve captured the urgency and complexity of the moment. But I believe only two of the five strategic responses you outline—#2 (redesign assessment) and #5 (transform higher education)—align with both sound pedagogy and a future-ready vision of education. Here’s why:

Why Resistance (#1) and Retreat (#3, #4) Won’t Work

• Banning AI in classrooms ignores the world students already live and work in. It's a losing game of cat-and-mouse that privileges surveillance over learning. Worse, it sidelines the opportunity to model ethical, purposeful use of AI tools.

• Waiting for AI to collapse or hoping the public forgets risks irrelevance. History tells us that technology doesn't disappear when it becomes pervasive—it evolves and embeds itself deeper.

Academic denialism only invites public distrust.

Why Redesigning Assessment (#2) Is Non-Negotiable

Social constructivism reminds us that learning is a process of making meaning through interaction, not simply transferring and absorbing content. When students can outsource answers to machines, it’s a sign that our assessments are emphasizing product over process.

Instead, we must:

• Design authentic assessments that ask learners to apply, reflect, critique, and create in ways that demonstrate deep understanding.

• Shift toward formative, scaffolded tasks where learners show how they’re thinking, not just what they can produce.

• Embrace peer collaboration, self-assessment, and dialogue-based inquiry—all hard to fake and rich in learning value.

Why Transforming Academia (#5) Is the Only Sustainable Path

AI isn't just a challenge to academic integrity—it’s a transformative force in society. That means we need to:

• Integrate AI literacy across disciplines, teaching students to critique, question, and co-create with these tools, not passively consume them.

• Leverage AI to augment learning—not replace it. Tools like adaptive feedback, simulation, and accessibility aids can create more inclusive and engaging experiences.

• Rethink what it means to be “educated” in a world where information is cheap, but critical thinking, ethical judgment, and creative synthesis are more valuable than ever.

We don’t need to panic. We need to design. This is our chance to re-centre education on what truly matters: learning, meaning-making, and human growth.

Thanks again for opening this conversation, Bryan. Your work continues to help us frame the future with clarity and care.

Expand full comment
Bryan Alexander's avatar

Thank you for this kind reply, Gary. And I appreciate your constructivist approach, which I try to teach about and with myself.

To your points:

Banning AI - I agree, but am hearing more calls for this from more faculty. I've heard several IT consultants say it's worth banning AI even knowing that's imperfect, because the ethics are so stark. Some say there are no good uses of LLMs. And to be fair, many of these folks are anti-surveillance.

I fear the divide is deepening.

AI collapse - that's my sense as well, but I'm still keeping options open. The IP lawsuits are especially powerful (potentially) here.

Transforming assessment or the academy as a whole - how can we do this, institutionally, given our strained capacity?

Expand full comment
Gary Bartanus's avatar

Bryan, I’ve been racking my brain for a succinct and intelligent-sounding response to your excellent question about how we redesign academia with strained capacity—and I’ve come up empty.

Then again, as you may have guessed from my spelling of centre, I’m Canadian. Our institutions may be underfunded, but at least they’re not being actively dismantled by an orange buffoon courting undereducated voters to pave the way for his next “term.”

So if things get too surreal down south, please know you and your constructivist colleagues have a standing invitation up here. We still believe in climate science, public health, and (mostly) facts. And we’d love your help redesigning assessments that can’t be outsourced to chatbots. 😉

Expand full comment
Bryan Alexander's avatar

Gary, I'm so sorry for the chaos inflicted on your nation's campuses with the cutbacks on international enrollment. Maybe Carney can offer some more support, but I'm skeptical. (Alex Usher is my reliable guide to Canadian higher ed)

And thank you for the kind (of course) invitation. As a longtime fan of Canada, this is sorely tempting.

Expand full comment
Joe Essid's avatar

These are excellent observations for this (semi) retiree. My plans for future classes, including one on SF this summer:

--redesign assignments to feature an iterative process where AI helps shape an argument and critique it, alongside human partners. We'll do a podcast as well as a short synthesis paper. AI will help in drafting and revising each. I'll point them to AI that can do the best work as co-pilots and give them some starter prompts. Students are lousy at prompt-engineering and many only think of ChatGPT and not the galaxy of AI out there.

--refocus on reading with a triple-entry journal online, where students post a summary of a point of interest, then in the next column, posit an analysis. In the final column, they ask unresolved or interesting questions to bring to class. Partners will add commentary to this documebnt.

--The class grade thus becomes 50% participation, including that deep-reading journal. To get an A, they have to talk and add value when they talk in class, and do that nearly every time.

--Up my expectations for A work. I tell them already "you will not have a job in 5 years if you cannot add value to AI content." That's harsh, but so is life. They need to hear this instead of being further coddled in college.

I'm not worried about cheating. Those kids won't have jobs soon and its their fault.

Expand full comment
Stephen Fitzpatrick's avatar

Grade the process, not the product.

Expand full comment
Bryan Alexander's avatar

I thought we were supposed to do that in writing classes.

Expand full comment
Bryan Alexander's avatar

That's a fascinating strategy, Joe. How large a class can it scale up to?

Expand full comment
William Scott Harkey's avatar

Yes, for podcasts, and if the forum could be posted in a podcast form as well. Thanks for all you do.

Expand full comment
Brent A. Anders, PhD.'s avatar

Excellent write-up, Brayan. Thank you. The article from the New York Magazine is right on point and matches what I have seen, experienced, and researched. This is a reality that I started to highlight to all six months before ChatGPT came out (https://www.academia.edu/82177646/A_Pressing_Need_for_Artificial_Intelligence_in_Academia). We are now almost three years into having freely and fully available generative AI, and yet, I still run into many in academia who want to just take a wait-and-see approach. We are long past that.

Academia must evolve. You mentioned some important points. Assignments and assessments must evolve away from just using essays. Experiential learning has always been key, and higher education must fully embrace that if it is to have any hopes of remaining and becoming truly relevant once again.

Expand full comment
Karl Hakkarainen's avatar

I am an instructor with the Worcester Institute for Senior Education (WISE), a lifelong learning organization based at Assumption University in Worcester MA. I just finished teaching two courses on AI. The first was an overview of the state of AI as it was in the last months of last year. The second, completed just two weeks ago, showed how to use AI for historical analysis. Can AI make sense of the 72,000 pages of JFK assassination documents? (Short answer: after a lot of Python code slogging through lots of documents, the result is "Dunno.")

Most of these seniors weren't fans of AI, although few had used any of the products. They were scared and curious, scared because they heard and read the stories about cheating, energy consumption, and 2001: A Space Odyssey and curious so that they showed up. I believe that they're a bit less scared.

My father grew up in a world without radio. I have seven great-grandchildren who will never know a world without AI. They will also know a world that I won't know and have jobs that look nothing like the jobs that I had. The best that I can do is help to dial down the fear among cohorts, children, and grandchildren. To quote another old guy:

Your old road is rapidly agin'

Please get out of the new one

If you can't lend your hand...

My thanks to Bryan for keeping us facing forward.

Expand full comment
Bryan Alexander's avatar

What a great perspective, Karl. Thank you.

PS: did I tell you about making a 2001 joke to my students, and none had seen it?

Expand full comment
Karl Hakkarainen's avatar

Yikes.

Let us remember that it is now their lawn that they're telling us to get off of. (He says, ending a sentence with _two_ prepositions.)

Expand full comment
Bryan Alexander's avatar

Ha!

Expand full comment