11 Comments
Joe Essid

Thank you for this post. I'd not been familiar with Ferguson's arguments before. His two metaphors describe our situation rather well. I do find a strong dislike for SF and space travel among many left-leaning colleagues, which explains why they have not read a work of science fiction that offers a middle path out of catastrophe: A Canticle for Leibowitz.

For those who don't know it: the novel follows cloistered monks who save the remains of a technological civilization after a nuclear war and, in time, build starships. They act out of a moral imperative to help humanity by preserving knowledge and, as a new Renaissance threatens to bring another nuclear age, by carrying the word of God and the fruits of human learning to the stars.

What if we began our discussions of AI not with politics or even economics but instead with morals and ethics? How can we include this technology in ways that preserve human dignity and agency? That cause the least harm possible to our lives and natural world?

Bryan Alexander

Thank you, Joe.

Agreed on the opposition to sf from left academics, which always either maddens or saddens me.

Fascinating thought, connecting AI and Canticle. (But don't forget how the novel ends.) (Did you ever see Babylon 5's tribute episode?) For me the connection makes me think about human nature through AI: the sharks in our depths as well as the good monks.

Joe Essid

I mentioned this to you in a PM, but the ending struck me with the monks' notion that our Earth was too benign a place for humanity to thrive long-term. Miller's characters have a dark view of our nature. I love the final image of the monk knocking the dirt of Earth off his sandals as he departs our planet.

For our species to thrive without a new cycle of self-destruction, the monks cling to the hope that more difficult worlds among the stars will keep us busy with survival. I don't know that they are wrong; if you wear young men out in the fields all day, they don't have as much idle time or energy to get into trouble.

Bryan Alexander

You remind me of Georges Bataille's theory of "the accursed share": that we have a superfluous amount of energy that we need to deal with. Historically we did it through intense religion or military service.

Joe Essid

In retirement, I resumed building scale models at a pace of about one per month. I'm not great at it but also not horrid. I can use an airbrush well.

Honestly, hobbies like that bleed off surplus energy. They often immerse one in a community of others with similar interests. I wish more of us had active hobbies, instead of looking at screens.

Bryan Alexander

That's a good example. Hobbies require a lot of time and, at times, careful attention. Models of what, may I ask?

Joe Essid

Mostly WW2 aircraft, with some civil aviation subjects and the occasional ship or tank. I recently rehabbed a model that I built 40 years back and that sailed a lot further than its namesake did in 1944. https://modelingmadness.com/review/misc/ships/j/esshin.htm

David Gibson

It's a very useful distinction, and thank you for pointing me to Ferguson's piece. Among other things, it conveys the implicit message that you're either in the cloister or on the starship; you can't be on the starship pretending to be in the cloister, which is the approach of some of my colleagues: making it fully possible to use AI while hoping that students will not.

Derek Bruff

The University of Sydney has leaned into this model. Are you familiar with their “two-lane” approach? https://canvas.sydney.edu.au/courses/63765/pages/the-new-sydney-assessment-framework-the-two-lane-approach

Nancy J. Smyth, PhD, LCSW

Thank you for describing this so thoroughly. These are definitely the two camps warring right now over the role of AI in education.

I'm in a professional school, so we have an added responsibility to educate social workers who have competencies as determined by our accreditation body. There is a specific competency around being able to use technology ethically. So, given the changes in workplaces, I'm not sure that not using it at all in education is an ethical option for us, although we do have faculty who fall into this camp. I was just part of a faculty committee charged with developing guiding principles for our school, and we came up with these (https://socialwork.buffalo.edu/information-faculty-staff/guiding-principles-generative-artificial-intelligence.html). They are simply a start: very general, and they don't speak to how to teach about or with generative AI.

Finally, I have to say I am always a bit suspicious of either/or arguments, so I like how you've articulated using one and then the other. An advantage of using both the starship and the cloister model is that it forces you to consider where and why you would use one and when you might use the other. And yet I also think categorical thinking can trap us into boxes (even when we try to bring the two opposites together), and I'm much more interested in exploring the full spectrum of possibilities (e.g., Johansen, 2020). I look forward to reading more of what you have to say on this topic!

Johansen, B. (2020). Full-Spectrum Thinking: How to Escape Boxes in a Post-Categorical Future. Berrett-Koehler Publishers, Inc.
