Today I took in a webinar on AI and higher education. The American Association of Colleges and Universities hosted “The AI Revolution: Transforming Higher Education for the Workforce of Tomorrow” and I’d like to share my running notes.
Please let me know if this kind of live-Substacking is useful, and if I should do more.
Bill McKinney (Senior Advisor for Regional Campus Affairs, AAC&U) introduced the panel by arguing that AI will drastically change the world of work, and therefore higher education needs to change in response.
McKinney then asked panelists to speak to how AI impacts preexisting labor conditions. Brian Haugabrook (Specialist Leader, Deloitte) saw the pandemic as increasing how much we value time at home, while people increasingly hold multiple jobs and careers over their lifetimes. In response, higher education has to focus on teaching the value of information and the importance of collaboration. Earl Buford (President, Council for Adult and Experiential Learning (CAEL)) saw higher education already responding. Krystal Rawls (Workforce Integration Network Director, California State University, Dominguez Hills) thinks that higher education is already prepared, as we teach critical thinking, yet we should also prepare for AI’s limitations.
McKinney: how can higher ed better support adult learners? Rawls thinks AI can make education more accessible through chatbots that assist learners. Yet we have to be intentional about making sure the tools are equitable. Buford thinks 2- and 4-year institutions need to be better connected for workforce development. Prior learning assessment should also expand. AI can provide practitioners with more information about finding employers.
Audience question: which jobs are going to disappear or be changed drastically in the next five years? Haugabrook: administrative, repetitive jobs, including some within higher education. Rawls: we may see fewer jobs, rather than jobs disappearing (not sure I follow this) and we’ll need more jobs supervising automation, plus jobs changing with AI’s impact.
Rawls went on: we need to educate educators on AI. We need to use a positive, growth mindset for student development in order to reduce plagiarism, showing learners that faculty are also learning. Further, we need to teach students to critique these tools, while joining in the conversation about shaping a new world.
McKinney: what does trusted information mean? Haugabrook: we can use AI to help us sift through dis- and misinformation. At the same time educators have to prepare for increasing security threats. Buford: here’s the importance of teaching critical thinking.
McKinney asks: how can we approach equity? Buford: universities need to better recruit underrepresented populations (for IT, I think).
McKinney wonders about the nature of AI as a tool, especially as tools often contain biases about their use. Rawls wonders how to bring incarcerated people into the AI world, and recommends that students co-create AI work. She reports asking ChatGPT to create three headlines each for a Black man and a white man. Replies for the former were all about interactions with police, while those for the latter were about professional success. When she asked an image generator to create a picture of a woman with her characteristics, the result was an Asian, not a Black, person. An audience member points to the Black in AI project.
McKinney asks about using AI for education outside of campuses. Haugabrook describes minimizing administrative work using AI, along with creating more personalized responses to users. He adds that he uses Bard to help with his own PhD research. Buford mentions using AI to help generate real-time intelligence.
McKinney wonders about teaching more STEM, and critical thinking, in the context of Grawe’s demographic cliff. He asks Rawls about how institutions should reexamine their curriculum, especially in light of changing students. Rawls advises faculty to use AI tools, supported by institutions; “faculty are not prepared yet… to genuinely engage with this tool without training.” Further, no gatekeeping by fear:
“Students have the right to be educated on today’s tools, not on yesterday’s fears.”
A final round of comments: Haugabrook advises us to study the tech, knowing that it’s developing quickly. Make sure campuses have good partnerships. Rawls: don’t let fear control the future.
Some thoughts from me: first, I was delighted that AAC&U actually supported chat. I’ve been frustrated by many webinars which turn off that crucial audience service.
Second, this felt like it had an institutional leadership, administrative focus. Much of the conversation addressed how campus leaders should support faculty, think about labor market dimensions, and sequence their responses over time. There wasn’t a focus on, say, pedagogical practices, the research enterprise, the role of the library, etc. Nor was there a dive into how AI works, either technologically or as business/nonprofit enterprises. There also wasn’t an exploration of global issues other than potential alterations in the job market.
Third, the overall tenor was both productive and optimistic. Yes, all panelists mentioned problems (Haugabrook stood out for me here), but they all urged institutions to engage with AI.
Fourth, the consensus seemed to be that inter-institutional collaboration was a key part of academic AI strategy.
I’m glad to see AAC&U convene this panel, and I hope for more.
PS: let me know if you’d like me to do more live event Substacking.