How can we best use artificial intelligence in college and university classes?
For today’s post I’d like to radically shift ground from our previous discussion, which was at a very high, macro level. Now I’d like to drill down to the micro level, considering one particular use of AI for creative teaching and learning.
Here I’ll assume readers are familiar with at least the basics of using AI tools to generate images (Midjourney, DALL-E, Craiyon, etc.) and to create text (ChatGPT, Bard, Bing, etc.). Let me know if you’d like me to write up some intro posts to those applications.
Today I’m starting from a single, interesting video which demonstrates fascinating possibilities for digital storytellers. A YouTube channel, Views of an AI, recently published “Blade Runner 1929”, a very short video which blends the classic 1982 film with 1920s movies and styles, focusing on Fritz Lang’s work. It’s just over three minutes long, and you can watch it now:
So why does this matter to those of us thinking about higher education and AI?
Take a closer look at the video. It materially consists of several elements: a voiceover, a series of images (slightly animated; notice how we close in a bit on each), and a music track.
Now that’s actually a classic combination for DIY video creation. The StoryCenter project (formerly Center for Digital Storytelling) honed this back in the 1990s for their autobiographical stories. The idea is to get first-time video creators to write and record a voiceover, mix in images and sound within a video editor, and voila! The curriculum is powerful and accessible. It can work splendidly in higher education, with students creating their own digital stories within classes. (I’ve taught workshops on this for years, and also wrote about it and other digital storytelling approaches in my first book.)
What’s different here is that Views of an AI built this out largely or entirely through AI tools.
Look at those material items again. The voiceover? Written by an AI and turned into audio by software. The images? Created from human prompts in Midjourney. The music is, I think, human created. And the idea? It came from a discussion between a human and an AI. From the credits:
How might this play out in a college or university class?
Imagine assignments which require students to craft such a video. Start from film, media studies, or computer science classes. Students work through a process:
Come up with a story idea, perhaps in conversation with an AI, and ideally in discussion with classmates and the instructor.
Craft a voiceover and record it. Save the audio file.
Create a series of images to illustrate the story’s key points and ideas. Save them, carefully labeled.
Import audio and images into a video editor. Synchronize them into a cohesive whole, and export to YouTube, a learning management system, or elsewhere. (One scripted way to handle this step is sketched below.)
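To make that last step concrete, here is a minimal sketch of how the assembly could be scripted rather than done by hand in a GUI editor. It assumes Python with the moviepy library (the 1.x API), a recorded voiceover file, and a handful of generated images; every file name below is a placeholder, not part of the original video’s workflow.

```python
# A minimal sketch: stitch generated images and a recorded voiceover into one video.
# Assumes moviepy 1.x (pip install moviepy) plus ffmpeg on the system;
# file names are placeholders for the student's own assets.
from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

IMAGE_FILES = ["01_city.png", "02_detective.png", "03_rooftop.png"]  # hypothetical
VOICEOVER = "voiceover.mp3"                                          # hypothetical

narration = AudioFileClip(VOICEOVER)
seconds_per_image = narration.duration / len(IMAGE_FILES)

# Give each still the same share of the narration's running time.
clips = [ImageClip(path).set_duration(seconds_per_image) for path in IMAGE_FILES]

video = concatenate_videoclips(clips, method="compose").set_audio(narration)
video.write_videofile("story.mp4", fps=24)
```

The slow zoom-in that “Blade Runner 1929” applies to each image could be layered on top of this, but even the plain version gets students from a folder of assets to a shareable file.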
There’s a lot going on in this process. Students will need support at each point, support offered by the instructor, peers, staff, or digital resources. There are all kinds of questions about audio recording, video design, what makes a good voiceover, and so on. Fortunately, that’s all well known, as video-based digital storytelling dates back a generation.
What’s new is doing this through AI. Students first have to learn how to use these apps at a basic level, then how to shape better results through prompt engineering and iteration. They’ll also have to practice getting content out of the apps and into other tools.
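One way to make “iteration” tangible is to have students keep every prompt variant alongside its result, rather than overwriting earlier attempts. Here is a small sketch of that habit in script form, using the OpenAI Python SDK as a stand-in for whichever text tool a class actually adopts; the model name and the specific prompts are my assumptions, purely for illustration.

```python
# A sketch of prompt iteration: run several drafts of the same request and
# keep each version so students can compare what changed and why.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the model name is a placeholder for whatever the class chooses.
from openai import OpenAI

client = OpenAI()

prompt_versions = [
    "Write a 150-word voiceover for a noir story set in 1929 Berlin.",
    "Write a 150-word voiceover for a noir story set in 1929 Berlin, "
    "narrated in first person by a weary detective.",
    "Same as before, but in the clipped style of a silent-film intertitle.",
]

for i, prompt in enumerate(prompt_versions, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute the class's chosen model
        messages=[{"role": "user", "content": prompt}],
    )
    draft = response.choices[0].message.content
    # Save each draft with its prompt so the revision history stays visible.
    with open(f"draft_{i}.txt", "w", encoding="utf-8") as f:
        f.write(f"PROMPT:\n{prompt}\n\nDRAFT:\n{draft}\n")
```

Keeping the revision trail also gives instructors something to assess beyond the final artifact: how the student’s prompts and judgments evolved.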
All of this brings up a host of questions which the class instructor and the host institution should address. Which AI applications should students use? How can the class support these rapidly evolving, third-party services? If students refuse to use them, what alternatives are available? What are the copyright issues? How should the institution - and the creators - preserve the results? These are questions instructors, staff, and students need to work through.
Let’s turn back to the curriculum. Based on the digital storytelling historical record, I can envision other classes and fields making use of the “Blade Runner 1929” model:
Literature: students imagine visually what they’ve been reading, perhaps with a creative twist.
History: students create a narrative of an event or situation.
Philosophy: visualize an abstraction, perhaps by working a concept or question through a story.
Sciences focused on the very small (microbiology, nanotechnology) or the very large (astronomy, earth science): students visualize these subjects and trace them through a process over time.
Creative arts: well, that’s an obvious one.
Any class where development of student voice is an issue: the practice of creating such an artifact falls right in here.
Readers can no doubt think of more.
Such assignments don’t need to follow the “Blade Runner 1929” model closely. Nothing says the resulting video should be 3+ minutes long. We can make a one-, two-, or ten-minute video. The timing of images is up to the creator - we can flick past them quickly or pause to savor one at a time. And the whole world of soundtracks is open, from music to sound effects to silence. Individual students or groups can make such stories.
For that matter, why a video output? Students could make simpler products, perhaps as a scaffold towards video, such as audio only (i.e., a podcast) or a comic/web page combining text and images. Alternatively, students could create more ambitious projects, from longer videos to working up AI content for a tabletop or computer game.
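For the comic or web-page variant, the assembly can be even lighter: a few lines that wrap generated images and captions in HTML. A minimal sketch, using only the Python standard library; the panel file names and captions are placeholders for whatever a student produces.

```python
# A minimal sketch: turn image/caption pairs into a single web "comic" page.
# Pure standard library; all file names and captions are placeholders.
from html import escape
from pathlib import Path

panels = [
    ("01_city.png", "Berlin, 1929. The rain never stops."),
    ("02_detective.png", "A detective takes one last case."),
    ("03_rooftop.png", "On the rooftops, the future is already watching."),
]

rows = "\n".join(
    f'<figure><img src="{escape(img)}" alt="{escape(caption)}">'
    f"<figcaption>{escape(caption)}</figcaption></figure>"
    for img, caption in panels
)

page = (
    "<!DOCTYPE html>\n<html><body>\n"
    "<h1>Blade Runner 1929: a comic</h1>\n"
    f"{rows}\n</body></html>"
)
Path("comic.html").write_text(page, encoding="utf-8")
```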
Now, please don’t interpret the enthusiasm of this post to mean “everyone should make these AI-powered videos all the time!” As with digital storytelling, and most educational technology, this works better in some curricula than others, with some pedagogies than others, and in some academic programs than others.
There’s a case to be made for having students create such videos without AI assistance. They can learn all kinds of skills on their own in so doing, from photography to writing. There’s also the empowerment of learning to do these tasks on one’s own.
Further, there are many critical and political objections to incorporating new AI into the modern classroom. Relying on the for-profit vendors contributes to their bottom line and yokes pedagogy to business to some degree. Large language models have scraped copyrighted material without permission or compensation. Some AI have shown bias along racial, gender, linguistic, and other lines. Students may hold these views and resist using the technology. Instructors may do so as well. (Yes, there are posts in the works about the resistance to AI.)
Yet I think AI offers some advantages worth considering. First, if an instructor (or academic program or institution) wants to help students use AI thoughtfully, rather than just as consumers or for cheating, then composing stories is a classic way of doing so. Second, students may be able to parlay AI-wrangling practice into internships or jobs, or at least into further learning and thence to employment. Third, AI can assist students in being creative in areas where they are anxious, undersupported, or just awkward. I know from experience that a good number of people are terrified of writing. And I am personally dreadful at making images. A “Blade Runner 1929” experience might encourage students in those areas. Fourth, currently existing mainstream AI tends to be, well, mainstream and centrist in its output. Guardrails restrict what we can do. Prose tends to the bland and bureaucratic. Students can use that output to get a sense of centrist, inoffensive views on certain ideas - then push off into their own directions.
Let me close by looking ahead on this score. We can readily imagine the development of AI tools which make such creative storytelling even easier, notably AI video generators. We should expect AI storytelling apps to surface. “Blade Runner 1929” might rapidly become old school or retro in a few years, or even months.
Yet I think the key point remains, of students using the tech in a postsecondary education setting to be creative and learn from the process. As the generative AI revolution proceeds, we should expect to see more pedagogical uses and experiments along these lines.