Hello from San Diego. I'm at the ASU-GSV conference for various purposes, the leading one being to track how educators, companies, and financial firms are thinking about AI. Here I’ll share some notes from the day.
I began by going to the AIR Show, the event’s spinoff focused on AI and education. It took place at a separate hotel, and apparently included a lot of K-12 teachers and staff for the two days before I arrived. A medium-sized vendor hall greeted me with a mix of AI start-ups and established players showing off their new or updated wares.
I was scheduled to do a brief talk with my friend Ian Wilhelm, a Chronicle of Higher Education reporter. The session was billed as both a "fireside chat" and, somehow, a "keynote interview." In practice this meant Ian and I had just 20 minutes on stage to explore our respective work on AI. As the session description put it,
Together, they will explore the strategic decisions facing academic institutions: whether to pioneer the integration of AI technologies with all their inherent risks and rewards or to navigate a more conservative path forward.
We planned to hit our major themes rapidly, and did so. Ian presented some very interesting Chronicle survey results, which showed that respondents viewed teaching as both the leading area for realizing AI’s potential and academia’s leading vulnerability to AI’s dangers. I noted that, at the same time, I’ve seen and expect a great deal of non-pedagogical use, as faculty, staff, and students turn to chatbots for instrumental writing purposes.
We progressed to how academic institutions might engage with AI. I pointed out a widespread lack of institutional engagement at the strategic or practical level, with leadership punting classroom AI decisions to departments or individual instructors. We discussed the substantial degree to which academic AI choices are at the mercy of external forces and random events, including copyright lawsuits, national policies, business model failure and transformation, business decisions, impacts on labor markets, and rapid technological development.
Ian raised the idea of AI playing a role in academic labor negotiations. I thought it likely that faculty and administrators would push for agreements on AI in teaching and research. Next, I hammered home the emergence of a deepening cultural divide over generative AI, with opposing positions hardening. I also urged the audience to consider the advantages of academics doing open source LLM work.
At the end, Ian asked me to imagine what kind of post-AI college or university might greet his son when he turns eighteen in 2033. I offered four brief scenarios for post-AI academia:
Cyborg academy. Everyone uses AI in daily operations, from students and faculty to HR and presidents. Staffing remains roughly the same in nature and size.
The classic industrial-age model. Adopting AI has ended some academic jobs; in response, new ones have appeared and some established ones have changed. Faculty numbers have been reduced, especially among those teaching introductory classes. New jobs include a successor to the guidance counselor, focused on helping students use AI for learning and life.
Robocollege model. Higher ed institutions have died back, especially the majority below the elite tier. Lower-class students are accustomed to learning from commodity AI; in contrast, having human professors is a sign of high status and wealth.
Competing AI college models. Some institutions are anti-AI, banning the technology and celebrating LLM-less human creativity. Other colleges and universities embrace AI, each resembling one of the preceding scenarios.
I asked the audience to vote on which of these they thought most likely to occur. By a large margin they went for #2 (transformed academic jobs), which really fascinates me. They didn’t think job losses were likely, nor did they see a post-secondary world divided by AI attitudes as the future.
After my session, I took in a panel on AI and the digital divide, focusing on historically Black colleges and universities (HBCUs), including Virginia State University. Panelists began by speaking of the importance of good data, data analytics, and having systems in place to use them. Discussion moved on to the challenges of making data work in enterprise systems.
How can HBCUs use AI? There was a lot of caution expressed on this point. One panelist argued that AI was not in a good enough state to use in a “governing” way, “especially for our students.” Several emphasized co-design work with students. I asked about open source; one CIO responded that they saw research universities as the ones to work on that.
After this I looked in on a session on storytelling and AI. It began by asking us to solve a children’s puzzle; the presenter then showed us a video about it, which suggested some ideas about storytelling. Unfortunately, I missed the rest, as I had to leave for my next session.
That session was a gathering with a great deal of free-wheeling conversation about AI. Some participants talked about integrating AI into professional workflows. I spoke about the challenges of regulating AI, touching on Section 230, the potential impact of copyright rulings, the cultural split I noted above, the slippery question of pinning down harm from AI-generated content, and more. There were some good points about historical antecedents.
Next, a panel on YouTube in education discussed AI. An Amoeba Sisters producer described using image generators to quickly produce sketches to work from. A teacher urged educators to teach more media literacy: “I’m trying to instill in my students the habit of questioning images.” A YouTube representative (I think) spoke about using generative AI to create images and videos of science experiments impossible or impractical to attempt in the offline world. Another panelist hoped that AI would serve as an assistant to creators rather than as a creator itself. An English teacher (I think) thought video creators would use textbots to write scripts, but that it would still take people to read them aloud. She emphasized that students need to seriously improve their oral communication skills.
So ended the day’s sessions. The day was also marked by connecting and reconnecting with readers, Future Trends Forum fans, clients, people I knew from the old NITLE days, and still more. ASU-GSV is a large event, with possibly more than 7,000 attendees, so it’s a good place to network and be with friends.
One question I have been trying to answer, so far without success, is to what extent finance capital is trying to wrangle generative AI. I’m drawing on Carlota Perez’s important book, Technological Revolutions and Financial Capital (2003), which establishes that after a new technology erupts into the world, investors historically intervene to fund its development but also to reshape it for maximum profit and stability. I’m curious about what financiers today would like to see in generative AI: which forms they prefer, which uses they’ll support. I’ve seen signs of this in the wider world, of course, but am still looking for what the ASU-GSV crowd has to say.
Overall, the theme was constructive engagement with generative AI. I didn’t see many signs of critique, although there were notes of concern about bias in AI and, even more, about hallucinations and errors. There was a lot of curiosity about uses of generative AI in education and energy around the forms it might take. Excitement was in the air. To be fair, these observations are based solely on my limited experience of the event.