Discussion about this post

Joe Essid

Thank you for this post. I'd not been familiar with Ferguson's arguments before. His two metaphors describe our situation rather well. I do find a strong dislike for SF and space travel among many left-leaning colleagues, which explains why they have not read a work of science fiction that offers a middle path out of catastrophe: A Canticle for Leibowitz.

For those who don't know it: the novel has cloistered monks saving the remains of a technological civilization after a nuclear war and, in time, building starships. They do this out of a moral imperative to help humanity by preserving knowledge and, under the influence of a new Renaissance likely to lead to another nuclear age, carrying the word of God and the fruits of human learning to the stars.

What if we began our discussions of AI not with politics or even economics but instead with morals and ethics? How can we include this technology in ways that preserve human dignity and agency? That cause the least harm possible to our lives and natural world?

Nancy J. Smyth, PhD, LCSW

Thank you for describing this so thoroughly. These are definitely the two camps that are warring right now on the role of AI in education.

I'm in a professional school, so we have an added responsibility to educate social workers who have competencies as determined by our accreditation body. There is a specific competency around being able to use technology ethically. So given the changes in workplaces, I'm not sure that not using it at all in education is an ethical option for us, although we do have faculty who fall into this camp. I was just part of a faculty committee charged with developing guiding principles for our school, and we came up with these (https://socialwork.buffalo.edu/information-faculty-staff/guiding-principles-generative-artificial-intelligence.html). They're simply a start: very general, and they don't speak to how to actually teach about or with generative AI.

Finally, I have to say I am always a bit suspicious of either/or arguments, so I like how you've articulated using one and then the other. An advantage of using both the starship and the cloister model is that it forces you to consider where and why you would use one and when you might use the other. And yet I also think categorical thinking can trap us into boxes (even when we try to bring the two opposites together), and I'm much more interested in exploring the full spectrum of possibilities (e.g., Johansen, 2020). I look forward to reading more of what you have to say on this topic!

Johansen, B. (2020). Full-Spectrum Thinking: How to Escape Boxes in a Post-Categorical Future. Berrett-Koehler Publishers, Inc.

