68 Comments

Bryan Alexander:

Quick note: I'm really grateful that you ask academic audiences to support climate action.

Karen Cangialosi:

Not just support, but center environmental and social issues in our work.

Bryan Alexander:

Dear Karen, thank you for sharing this article, which I'd somehow missed. I admire your passion for imagination and justice. What a great response to the AI challenge.

Karen Cangialosi:

Thanks Bryan! I appreciate all of your work so much. (Haven't seen you in person since we invited you to our USNH event years ago). :-)

Andrew Chambers:

This is an excellent summary and I can only concur. There is much food for thought here. Thank you. The university where I presently work wants to open up and encourage the use of AI, but we will need very strong support for academics should this position take root. It is the ultimate challenge, bigger than anything I have seen over 35 years in Instructional Design.

Bryan Alexander:

Thank you, Andrew. Is your university offering support for faculty and staff trying to think this through and develop practices?

Andrew Chambers:

Yes, but the centrally developed position statement on AI (meant to drive developments and support) is new and still open for comment. The university is small in comparison to the Go8 (Group of Eight) universities in Australia, and has recently undergone extensive restructuring and cost-saving measures. The team supporting AI from a technical perspective is growing, as is the support side. I am still pondering what support should look like.

I come from the old days of classroom-based training in the LMS and delivery of centralised/specialised elearning support. Now there are workshops, seminars, exemplars, discussion groups, etc., but little availability of hand-holding, which academics would need. With the (almost) hands-off approach now taken in these workshops, it is hard to see how academics will gain the necessary in-depth support 'on the ground'. This is not a simple change, and not an adjustment to a new policy or standard. It is a wholesale onslaught of disruption that I doubt even Clayton Christensen could have envisaged!

Bryan Alexander:

Thank you for the sketch of Oz, a country I haven't visited for too long.

Serious question: what do you make of AI for that handholding?

Felix:

This is also what I'm noticing. I'm currently a student, and my (Dutch) university really wants to encourage AI use, although they do want us to do things responsibly (not using it as a source, fact checking, etc.).

They're not really offering us any other tools, either: every time I tell my professors that I refuse to use any AI, they try to convince me otherwise ("Perplexity lists its sources!", for example) instead of teaching me how to use, say, Google Scholar. That's the only other tool I personally know for the literature research I need to do, so I feel very limited in that research.

At the same time, they're also not offering concrete ways to actually use AI. They're really letting people do their own thing, instead of teaching them how to use it "responsibly" (quotations because I believe its environmental impact is far too great to ever be truly responsible). For example, I recently heard that my uni has a contract with Microsoft that means they're not allowed to use any data that they get from Copilot, and so that's the "officially approved" AI tool that they tell us to use. However, this was in a specific extracurricular discussion about AI; none of my classmates, or even professors, seem to know that this is the case. I think most of them use ChatGPT because it was the first popular model. There's no communication at all when it comes to these things.

(At that same extracurricular, everybody but me and one other person uses AI: 12 out of 14 students. And they were surprised to hear that I didn't! I heard one even say that they know and care about the environmental impact, but that it's apparently "just so difficult" not to use it.)

Bryan Alexander:

That's fascinating, Felix.

Not offering technologies other than AI - is this because they see AI swamping all other digital tools, or because they have budget limitations?

Concrete ways of using AI: I'll keep writing about this.

Felix:

I'll be honest, I don't actually know. I think it's because staff aren't being offered any of these tools either, so they have to improvise and figure out what they themselves prefer to use. And for many of them, that's turned out to be AI, and for some reason they can't fathom anyone wanting to use something else, even when I tell them that I want exactly that.

Bryan Alexander:

Very interesting. And folks often don't have time to explore on their own.

MWiseman:

Thank you for this extensive list. I believe you are onto a solution, Bryan: we need to change assessment from top to bottom. Yes, that will be challenging, and it will only happen in small steps, including everyone and listening to voices across campus. On my campus, I am proposing an AI Community of Practice, so we can begin having these conversations. It will take a village.

Bryan Alexander:

A CoP is a good idea. Do you have any interest from upper administration, like provost, board, president?

MWiseman:

Thus far, at the VP for Academic Affairs level, so this is positive. I need faculty involvement now. We're starting the semester with a Professional Development day featuring a workshop in a Design Thinking format, to bubble up ideas from the campus audience. I envision those answers steering the conversations in the AI CoP.

Bryan Alexander:

VPAA is vital - well done!

Any promising faculty leads?

MWiseman:

I'll be starting with the Deans, Dept Chairs & Program Directors....it's a matter of interest in AI and bandwidth....

Bryan Alexander:

Good luck. And please feel free to share any of my Substack posts, if they can be of use.

Donald Clark:

The whole piece is framed as seeing AI as a problem.

First: Flip it into an opportunity on teaching, learning, assessment, and the scourge of admin.

Second: Look at tools such as Norvalid. The problem is that no university would buy a rock-solid AI detection tool, as it would show mass cheating and be applied to all faculty output: past Masters/PhDs, slide decks, etc. There have been sackings on that front already.

Third: We also have to be honest and accept that assessment is far too ‘text’ based. Much of it does not assess real skills or performance – even critical thinking.

Fourth: Far more focus on formative assessment, feedforward and retrieval practice.

Fifth: Open up to adaptive platforms. They work.

Sixth: Use more scenario and simulation assessment.

I could go on and on...

Turning assessment into a toxic cat-and-mouse game will not work: there are too many mice, the mice are smarter than the cats (Tom and Jerry comes to mind), and the mice will win.

Dr. D | Still Being Human:

Excellent reply, Donald - I agree wholeheartedly. We cannot run from the inevitable. Check out my response to Bryan in this thread if you'd like.

Stay human,

Dr. D

Bryan Alexander:

Donald, I was hoping you'd weigh in.

How does Norvalid compare with Turnitin and ChatZero?

A forthcoming post is about your flipped idea of seeing AI as learning enhancement.

Karl Hakkarainen:

Could you elaborate on item 5, adaptive platforms? I have my ideas of what that phrase means, but I need some help with specifics.

Donald Clark:

Written a lot on this and helped build a few.

Different species:

PRE-COURSE:

Student data defines pathways

Pre-test defines pathways

Learning styles – NO, don't do it!

IN-COURSE:

Continuous adaptation

Formative feedback, sophisticated use of data (general and specific)

Good at catching common misconceptions

Allows learners to go at their own pace

Stops and overcomes specific difficulties before moving on

Sometimes subject-specific: maths, languages, etc.

POST-COURSE:

Continuous adaptation, with shared data across courses/programmes

Adaptive assessment

Adaptive retention

Performance support

More detail in my book "AI and Learning":

https://www.amazon.co.uk/Artificial-Intelligence-Learning-Generative-Development/dp/1398615668
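
To make the in-course pattern above concrete: here is a minimal Python sketch of two of the listed behaviours, "pre-test defines pathways" and "stops and overcomes specific difficulties before moving on." The unit names, threshold, and quiz callback are invented for illustration; this is not any real adaptive platform's design.

```python
# Illustrative sketch of two adaptive behaviours: "pre-test defines
# pathways" and "stops and overcomes specific difficulties before
# moving on". All names and thresholds are invented assumptions.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    # map of known misconceptions to targeted formative feedback
    misconceptions: dict[str, str] = field(default_factory=dict)

def choose_pathway(pretest_score: float) -> list[Unit]:
    """Pre-test defines the pathway: weak scores get a remedial unit."""
    core = [Unit("fractions"), Unit("ratios")]
    if pretest_score < 0.5:
        remedial = Unit("number sense",
                        {"a/b is two numbers": "A fraction names one value."})
        return [remedial] + core
    return core

def run_unit(unit: Unit, quiz, threshold: float = 0.8) -> None:
    """Loop on a unit, giving formative feedback, until mastery."""
    score, answer = quiz(unit)
    while score < threshold:  # learner goes at their own pace
        hint = unit.misconceptions.get(answer, "Try a worked example.")
        print(f"[{unit.name}] formative feedback: {hint}")
        score, answer = quiz(unit)

# Usage: for unit in choose_pathway(pretest_score=0.4): run_unit(unit, quiz_fn)
```

The point of the sketch is the control flow: the pre-test gates the pathway once, while the mastery loop gates every unit continuously.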

Dr. D | Still Being Human:

Nice summary, Bryan. As a professor, you hit all the crucial points about what can be done. I agree with the statement that in 2025 many professors haven't used an LLM; in my opinion, this is a disservice to students.

My take (I'll wind up making this a post of mine) is that for the large majority of professions, traditional higher education needs a complete revamping. We can't keep our heads in the sand. Do students need to know material? Yes, and in some professions more than ever. It's easier for me in healthcare because students (med students, nursing, physical therapists, etc.) all have to pass their boards, which have been multiple choice for decades and decades, so our assessments are primarily proctored in-person exams, along with other projects. However, for the projects, there is no way to avoid AI.

AI is here to stay and will develop at an exponential rate. Not having students use AI is likely worse than telling students not to use the internet 30 years ago. Remember how they used to say it's not what you know but how fast you can find it? Well, with AI, education should revolve around what you build. Employers are already letting go of thousands of people; we have to send students off with skills. Gone are the entry-level, theory-like jobs. It's like professors who teach business and entrepreneurship without having a business: students will need to produce.

Yes, this is scary, but if there's no value in an education, students won't pay for it any longer, and we already see that happening. Heck, I have my doctorate, but unless one is going to be a physician, I don't think I'd go for a PhD at this time. And to be quite honest, it doesn't mean much unless actual research is being done in a lab, versus just writing papers that AI can do. Assessments have to be oral presentations or in-class writing, as you've stated. The typical PhD program takes approximately 10,000 hours; at this point, I'd use all that time to learn how to create something useful with AI. Imagine what one could do with 10,000 hours of creating without having to go into debt. Are these concerns an academic wants to hear? Of course not, but this is needed more than ever.

Thanks for your post, and stay human.

Dr. D

Bryan Alexander:

Dr. D, I'm glad you're representing allied health, which often doesn't get enough attention in higher ed discussions.

It sounds like we need to do more than revamp assessments. Reboot the whole system?

Dr. D | Still Being Human:

Ugh. I just typed a whole response that withered away. In short:

- Thanks for the acknowledgment. Sometimes allied health can feel like outcasts, so to speak.

- I feel a good educator is constantly revamping course material and lectures for their students (I do). Sadly, I look forward to course evaluations from students (a requirement from our accrediting body) so that I can tweak my weaknesses and/or bolster my strengths. Such actions are the least we can do for our students, who invest so much in their education, in addition to their trust that we will prepare them. Failing to provide them the proper tools for success will ultimately leave them with a worthless piece of paper, which unfortunately happens to so many as it is.

FYI - your post catalyzed my recent post today as I have been thinking about this for some time. Feel free to check it out! https://substack.com/home/post/p-168337969

Stay human.

Dr. D

Bryan Alexander:

Oh, I hate losing comments and posts! Sorry.

(I think my fellow humanists often have a hard time seeing the rest of the academy, and we tend to make a lot of noise.)

Agreed on continuous revision (says the person doing that this week for fall classes). But comments today make me think of revising the whole system, from grading and certification to class size and professional development.

Now over to your post -

Joseph Thibault:

"Turnitin and ChatZero are the leaders in this field."

Tbh, I'd never heard of ChatZero, nor seen it in the major studies. There are lots of players in this field; Copyleaks, Originality.ai, and Pangram are usually the ones I see listed in studies and comparisons.

Bryan Alexander:

ChatZero is very interesting. It took off in part because it came from within academia, a Princeton undergrad's project.

Joseph Thibault:

Wrong company; you are talking about Edward Tian and GPTZero, I think.

Joseph Thibault:

Ironically, ChatZero looks to be a cheating/cheating-enablement tool, aka a text humanizer.

Bryan Alexander:

Ah, thank you.

dave cormier:

We're actually perfectly set up for that change. The challenge is choosing what to do. I'm convinced that the change has to be in the student: we've spent years relying on our power over them to get them to do the work. We need to find a way to explain to them that there is a purpose in actually doing the work. And I don't know if that's possible.

Bryan Alexander:

How do you mean, perfectly set up for the change? Say more, please!

Lisa Dunick:

For the last decade or more, our students have been trained that only outputs on high-stakes tests matter. Of course they want a bot that can magically spit out the "correct" (or at least authoritative and confident) answer. Recentering education as a practice and learning as a value is sorely needed. Not sure how to convince them, though…

Karl Hakkarainen:

I had a conversation with a professor at an elite college who reported that his students were really struggling because a) they'd gotten all the right answers to get into this school and were now a middling elite among many others, and b) they were thrashing when faced with problems that didn't have a "correct" answer. The professor was helping the students unlearn the tricks and techniques that got them into the college. Anyone who has had to unlearn what they regard as core values in order to embrace something new will empathize with that panic.

Bryan Alexander:

I have had similar conversations with so many faculty. Heck, this happens to me sometimes. Poor Chinese students with the Gaokao.

Bryan Alexander:

That's a really good point, Lisa.

Felix:

Ugh, this. I feel like I'm the only one in my class still putting effort into understanding things. Consequently, I'm the one carrying the group projects.

Bryan Alexander:

Felix, may I ask what subjects you are studying?

Felix:

Life Sciences, at a Dutch university of applied sciences.

Bryan Alexander:

Good for you. We'll need more health professionals.

Nicholas Spina:

This is a really good overview of the reality of the situation, which I agree is grim. It seems in the short run - which is all we can plan for given the scale and speed of technological improvement - the best option is to try and keep courses small, which of course is a real challenge for most schools. But if we can keep classes under roughly 30, cold calling on students using a kind of Socratic dialogue and utilizing small pop quizzes taken in class may be the only way to get around this problem. Of course, AI wearables will undermine even these approaches. I would like to add another pedagogy, which is the use of complex, semester-long simulations that require teamwork, negotiation, real time strategizing, etc. Sure, AI can help with it, but these sort of simulations prioritize human skills. Not going to work with all or even most disciplines though.

Bryan Alexander:

Nicholas, I think small classes would be a big improvement - although the financial cost would be huge.

Say more about those simulations? Are you thinking of, say, Harvard Business case studies or Reacting to the Past games?

Nicholas Spina:

Yes, those are both fine. Many international relations classes use simulations like Statecraft or variants of strategy board games that link to course material. Model UN too. From an AI perspective, the benefit of a simulation is that papers/reflections must reference specific moments of the exercise and cannot be done exclusively with an LLM. But these kinds of simulations are often tailored for social science courses and won't work in all disciplines. Still, it's a tool.

Bryan Alexander:

I love simulations for political topics. Two colleagues and I built one back in 1999: https://web.archive.org/web/20160312005252/https://toolormethod.wlu.edu/insights.html Having assignments reference a detail of a sim is very clever.

Mike Cosgrave:

I've been doing "process-based writing" for over a decade as a way to teach students about the various steps in the process of reading, analysis, and writing. Reactions vary: masters students see the benefits of a structured writing process, undergraduates less so.

Bryan Alexander:

Interesting difference! Why are undergrads less excited, because it's so new?

Mike Cosgrave:

I think so, and they haven't had as much experience of written work where they are expected to read a range of texts and integrate the analysis: in school they live on textbooks.

Bryan Alexander:

I wonder just how much K-12 writing has been downgraded.

Mary Harriet Talbut:

I don’t think you are being too bleak, and I wholly support rethinking assessment, from top to bottom. It has needed to be done for some time. As an Instructional Designer, I have discussed with faculty many of the points you make and find a true frustration among faculty and students. The students don’t want faculty to use AI, and the faculty don’t want students to use AI. I know you are specifically dealing with higher education, but we are seeing students coming to college already using AI and not totally understanding the implications. I think there is going to be a combination of many strategies, and content will play a part.

Bryan Alexander:

Agreed about the K-12 uses, Mary, and I should have mentioned that.

In your ID work, how are faculty planning to handle this come fall?

Mary Harriet Talbut:

We have been talking about it already, but I am going to start again with why we need to rethink assessments from top to bottom, for the reasons you gave. OK if I borrow your ideas and give you total credit?

Bryan Alexander:

Absolutely! Please share this post.

Steve Covello:

Here are several articles I have posted related to this topic:

- Develop new academic Competencies that embrace the ability to produce volumes of research at scale using AI: https://stevecovello.substack.com/p/what-is-scholarship-in-the-age-of

- Develop AI sense-giving chatbots for use as learner assistants according to principles of Sense-making and User-based Design: https://stevecovello.substack.com/p/online-learner-ai-chatbot-assistants

- There is no good place to put an AI sense-giving chatbot in an LMS because the LMS is not a learning application: https://stevecovello.substack.com/p/the-lms-has-nothing-to-do-with-learning

- When assessing AI-infused learning, assess reflection, not proficiency: https://stevecovello.substack.com/p/why-do-we-assess-reflection-in-an

Finally, a toolkit for discussing what is possible for using AI in instruction, "Controlling AI in Instruction": three levels of thinking about whether (and how) AI fits into an assignment design; if so, a structural framework for development; and guidelines for the language to use in syllabi and assignments that prescribe the use of AI. https://pressbooks.usnh.edu/controllingai/

Bryan Alexander:

My dear Steve, you have written so much and so well. (Folks, subscribe!)

I do like Devin's ideas, your focus on simulations, and changing how we assess.

One question: how can students do your symphony conductor's work when they are, as it were, still so new to music?

Steve Covello:

This is an institutional question. What is it that we do that is relevant to a person seeking "higher education" in whatever definition that might be? If we intend to graduate students with the capacity to interpret volumes of research using AI, then the institution ought to build programmatic structures to enable them. If the Competencies are mapped across several levels of coursework, then it *should* result in a Capstone project that corresponds to the capability to operate in the metaphor of the orchestra conductor, in whatever domain they study.

That's a big "if." We already know that mapping Competencies across a program is embraced across only certain institutions.

Bryan Alexander:

It's very hard for an institution to do competencies at scale and in detail. We prefer disaggregation.

I suspect we're at the big picture phase now.

Tom Haymes:

There is only one answer to this problem: get rid of the word "cheating" in academia. We have debased learning so much that it's just a stupid game to most students. Meaningless games beg for cheating.

We need to recognize that extrinsic motivators (grades, credit hours, completion) make their learning journeys nothing more than a maze that's begging for shortcuts.

There is no easy solution to these systemic issues. We have to convince students that their learning is more important than the factory systems of education they find themselves in. This is not easy and they have been well-trained in the system and what it takes to succeed in it (hint: it's not creative thinking).

Being able to think creatively for yourself is a necessity in a world where AI can mimic humans but not provide authenticity. We need to be teaching our students to survive in that world.

We should be teaching our students how they can use AI to augment their creative and intellectual capabilities. There is a lot of learning that will happen on that pathway.

In my classes I emphasize using AI to augment their work, not replace it. I also try to convince them that if they use AI to supplant themselves in their learning, it will supplant them in their professional lives because they won't have the skills to survive. Some of them buy it.

Finally, I emphasize formative assessment and growth in my class. This has two positive effects. First, it puts them at the center of the learning experience, not me. Second, it forces them to transform the things they build for class (idea -> blog -> visual communication in a web site). If they "cheat" along the way, they won't understand enough about what they're working on to do the next steps.

Bryan Alexander:

This sounds like another call for us to rethink the whole system, Tom.

(and for more people to teach like you!)

Dan Bousfield:

I teach large classes that use unproctored online multiple choice questions. This is our current workaround:

Every multiple-choice question has five answers, with one option being 'this was not discussed in lecture'.

If you have a question with data 'X' that was not discussed in lecture (and you are sure of that), and you combine it with data 'Y' that was discussed in lecture, you can have ChatGPT generate all the distractor answers.

You can even make the premise quite absurd, but ChatGPT will still generate something it thinks is plausible (and that creates a false positive if they try to cheat). A sketch of how this could be automated follows at the end of this comment.

What does Atlantis symbolize in lesson 10 on health systems?

a) A model for sustainable health systems
b) An example of a successful ancient health system
c) A real historical health system
d) A cautionary metaphor for idealized health systems
e) This was not discussed in lesson 10

It is both a carrot (you know cheating will hurt you, so don't bother trying) and a stick (if you do cheat, you will hurt yourself).

Pedagogically, it's equivalent to attendance, but it does at least require human attention.
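
As promised, a hypothetical Python sketch of one way to automate generating such an item with the OpenAI API. The model name, prompt wording, and helper function are my assumptions, not the actual workflow described above, and generated distractors would still need human review before use.

```python
# Hypothetical sketch: generate distractors about material NOT in the
# lesson, then add the honest "not discussed" option as the key.
# Model choice, prompt, and structure are illustrative assumptions.
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def trap_question(off_topic: str, lesson: str) -> dict:
    stem = f"What does {off_topic} symbolize in {lesson}?"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (f"Write four plausible multiple-choice answers to: "
                        f"'{stem}' One answer per line, no letters or numbers."),
        }],
    )
    distractors = resp.choices[0].message.content.strip().splitlines()[:4]
    key = "This was not discussed in the lesson"
    options = distractors + [key]
    random.shuffle(options)
    return {"stem": stem, "options": options, "answer": key}

# Usage: item = trap_question("Atlantis", "lesson 10 on health systems")
```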

Leslie Donovan:

I love the way Bryan has broken down the options for teaching with AI or avoiding cheating through AI. This topic is something I have been struggling with mightily, and I am hoping to sort out a better way to handle it for my fall classes. I am faculty in an honors college, teaching primarily inter/multidisciplinary humanities and communications, and I am very interested in what people have to share.

In my spring class for seniors (https://honors.unm.edu/academics/course-previews/courses/what-worlds-may-come-studies-for-the-future.html), I was so frustrated about what to do with AI that I essentially gave up and avoided it. I assigned no papers (which hurt this humanities gal!) and had our online LMS discussion earn 25% of the grade. My students did a LOT of writing in the LMS discussion every week, and mostly it seemed like they did not use gen AI for that. However, I am at a loss about how we can reshape higher education curriculum if we no longer assign papers outside of class. In my early days of teaching, I HATED grading in-class essays and writing, and do not want to go back to that again. This post is intended to convey that I'm here, interested, and struggling with these issues as well.

Bryan Alexander:

First off, Leslie, I love that class! You know how to make a futurist happy.

Second, I'm impressed that students mostly avoided AI for the LMS discussion threads, but fear that might not keep happening. I've heard stories of heaps of AI answers, responding to each other.

Have you had students make video or audio stories?

All best -

Charles A. Fndley:

Thanks for pulling all this together. I have experienced so many of the different approaches that folks seem to be taking. I developed a completely new course in Crisis Management last spring, and in the process I tried to do something different. I ended up with a process: Assessment Learning, with incremental scaffolding. I didn't have an agenda or label when I started; when I pulled it all together, I attached the labels. I did create a little write-up to explain it and perhaps motivate some instructors or Instructional Designers to give it a try. Since I can't attach a file here, I emailed you a short write-up.

Bryan Alexander:

Got it. Will reply in email. Thank you.

Lb:

Curious what you all think of this newer code. I'm copying from an article I read:

https://openai-openai-detector.hf.space/

From the essay:

"Recently a Princeton student wrote some code to identify plagiarism of ChatGPT. I tested it out and it worked! First I copied and pasted an essay about the theme of Harry Potter and the Sorcerer's Stone written by ChatGPT. Here were the results: [screenshot of detector output]

Then I copied and pasted my own essay and here were the results: [screenshot of detector output]

This tool was able to accurately identify which essay was plagiarized and written by ChatGPT and which essay was written by me, a human!"
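
The Hugging Face space linked above appears to wrap OpenAI's public GPT-2 output detector (a RoBERTa classifier). For anyone who wants to try this class of detector locally, here is a minimal sketch assuming the transformers library and the public openai-community/roberta-base-openai-detector checkpoint; the essay strings are placeholders, and scores should be treated as indicative, not proof.

```python
# Minimal sketch: score texts with the public GPT-2 output detector.
# Model id and labels ("Real" = human, "Fake" = model-generated) are
# assumptions based on the public Hugging Face checkpoint.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

essays = {
    "chatgpt_essay": "...",  # placeholder: paste the ChatGPT essay here
    "my_essay": "...",       # placeholder: paste the human-written essay here
}

for name, text in essays.items():
    result = detector(text, truncation=True)[0]
    print(f"{name}: {result['label']} ({result['score']:.3f})")
```

Note that this checkpoint was trained on GPT-2 output, so how well it handles text from newer models is an open question.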

Bryan Alexander:

That sounds like the GPTZero origin.
