Teaching with AI: the cloister and the starship
Greetings from that time of year when teachers and students hurtle towards fall classes. My first class has already met, running all of last week. My second starts Thursday. The third, one focused on AI, begins the week after.
Among other preparatory and kickoff work, my colleagues and I are considering how to approach AI across the board. Assessment, what to teach about AI, which applications to show students, how to talk with students about this controversial tech, and more are on the table (or in Canvas). I know other faculty are thinking about these issues, and some are already implementing responses in their higher education classes.
Today I’d like to focus on one pedagogical model that has recently surfaced. It’s called the cloister and the starship. (One source) (Another; archived) (Speaking of which, I’ve just started a series of posts on teaching with AI. More to come.)
The idea is straightforward. A student’s experience alternates between AI and no-AI zones. In the AI space (the starship, as in lots of impressive high tech) the learner has free rein: for a certain time (and perhaps space), students enjoy untrammeled access to the full powers of generative AI, using it for coursework and much else. Then they switch to a space without any AI (the cloister), where they must nurture their non-AI skills, including mental and social abilities. Here students rely instead on older technologies for study, such as paper, pens, non-intelligent lab equipment, and books.
The names “cloister” and “starship” are a bit hyperbolic, tongue in cheek. AI as a starship suggests images of technological advancement and power, coolness and imagination. Cloister is retro, reaching back in time, hinting at stillness and silence. Perhaps the terms are overkill, but let’s see where they take us.
The starship and cloister may be divided by either space or time. We can imagine different spaces on a campus: certain classrooms, labs, and library rooms are AI-enabled or AI-allowed spaces, while others are designated no-AI zones. Or the difference might break out in time, either by posted schedules (“AI permitted in the Learning Commons from 1-4 pm”) or under the direction of an instructor (“You may now use AI for the next 30 minutes”).
This model draws on two strong and opposed positions within the academy. There is the argument that we should, or even must, teach with and about AI for various reasons: AI offers new pedagogical possibilities, we need to prepare students for a world shaped by AI, we have to teach in new ways given AI’s widespread impact, and so on. Against this stands the anti-AI movement, which sees the technology as problematic or destructive for many reasons: bias, sapping student learning, deskilling professionals, links to reactionary politics, and more. I won’t rehearse either school’s arguments in full for this post, as readers are probably familiar with each. The point is that cloister/starship splits the difference between the two, or occupies a middle zone.
That split might not satisfy either strong position. Those opposed to AI may feel that allowing starship times or spaces still maintains institutional complicity with a technology they find reprehensible and dangerous. The academic population, especially its most marginal and vulnerable members, remains exposed. In contrast, those promoting AI might see the cloister as a way for instructors and students to evade a technology they consider transformative and essential.
Personally, I have done versions of starship/cloister in my teaching practice. Over the past two years I’ve encouraged my students to use generative AI for various purposes and in different ways, including by teaching them about tools and prompt engineering, as well as having them teach each other. I’ve also recommended they avoid AI for some tasks and at certain times. On a related note, I structure my digital storytelling classes to have high-tech and no-tech times. The high-tech time is for working on digital audio, video, images, and so on; the low-tech time is the story circle, when participants share their progress in a round-table setting, emphasizing their humanity and shared learning. (Here’s my digital storytelling book, if you’re curious.)
At the same time I can see all kinds of problems with cloister/starship, starting with enforcement. It’s extremely difficult to ban AI from a campus space unless one encloses it in a Faraday cage or ruthlessly polices access to hardware. Ezekiel Emanuel recently argued for the latter:
What I would really like is for every university classroom to be treated more like the sensitive compartmented information facilities, or SCIFs, in the White House and other government buildings: Phones are not permitted and are locked in cubbies outside of every room. Students would have to deposit their phones before class and pick them up after class.
Such measures are very difficult in practice, though policing hardware might be done to a given professor’s or staff member’s satisfaction. It’s also possible that students will overuse AI zones and avoid the cloisters, which raises support issues.
The cloister and starship names will probably irk some academics for various reasons. “Cloister” may seem too twee or elitist, for example, while “starship” may rankle those opposed to space travel or who dislike science fiction. And some find the historian Niall Ferguson disagreeable or politically abhorrent. (He’s the one who came up with the terms, as best I can discern.)
I have some other issues with the model as well. These kinds of discussions about educational technology tend to privilege humanities practices over the rest of the curriculum. That is, the cloister appears as a place of books and writing by hand, but not so much as, say, one of voting datasets and telescopes. I’m also curious about Ferguson’s idea on admissions: “Revise admissions procedures to ensure the university attracts students capable of coping with the discipline of the cloister as well as the opportunities of the starship.” I’m not sure how that might take shape.
Looking ahead, I think a good number of academics and institutions may adopt this model, though not under those names. Imagine a department where some professors ban AI and others embrace it: majoring in that field, you might live the cloister-starship alternation. Or consider personal and social practices, when a group in a shared space (a class, study group, or meeting) switches between deliberate AI practice and its opposite. Cloister/starship might fade from memory as a name while still describing at least some parts of normal life.
In the meantime, I must return to launching not necessarily a starship, but three classes and several speaking engagements, not to mention completing a round of copy edits for the new book. Yet never fear, dear readers, as there’s a stack of Substack posts in process!
(many thanks to Stephen Downes, Ruben Puentedura, and Peter Shea for conversation on this topic; cloister photo by Michael Tinkler)

Thank you for this post. I'd not been familiar with Ferguson's arguments before. His two metaphors describe our situation rather well. I do find a strong dislike for SF and space travel among many left-leaning colleagues, which explains why they have not read a work of science fiction that offers a middle path out of catastrophe: A Canticle for Leibowitz.
For those who don't know it: the novel has cloistered monks saving the remains of a technological civilization after a nuclear war and, in time, building starships. They do this out of a moral imperative to help humanity by preserving knowledge and, when a new Renaissance looks likely to lead to another nuclear age, carrying the word of God and the fruits of human learning to the stars.
What if we began our discussions of AI not with politics or even economics but instead with morals and ethics? How can we include this technology in ways that preserve human dignity and agency? That cause the least harm possible to our lives and natural world?
Thank you for describing this so thoroughly. These are definitely the two camps that are warring right now on the role of AI in education.
I'm in a professional school, so we have an added responsibility to educate social workers who have competencies as determined by our accreditation body. There is a specific competency around being able to use technology ethically. So given the changes in workplaces, I'm not sure that not using it at all in education is an ethical option for us, although we do have faculty who fall into this camp. I was just part of a faculty committee charged with developing guiding principles for our school, and we came up with these (https://socialwork.buffalo.edu/information-faculty-staff/guiding-principles-generative-artificial-intelligence.html). They're simply a start: very general, and they don't speak to the how of teaching about/with generative AI.
Finally, I have to say I am always a bit suspicious of either/or arguments, so I like how you've articulated using one and then the other. An advantage of using both the starship and the cloister is that it forces you to consider where and why you would use one and when you might use the other. And yet I also think categorical thinking can trap us in boxes (even when we try to bring the two opposites together), and I'm much more interested in exploring the full spectrum of possibilities (e.g., Johansen, 2020). I look forward to reading more of what you have to say on this topic!
Johansen, B. (2020). Full-Spectrum Thinking: How to Escape Boxes in a Post-Categorical Future. Berrett-Koehler Publishers, Inc.