Greetings from a very busy and strange March, dear readers. As we try to work on AI, the world's polycrisis keeps ratcheting up. (I've been posting more videos about this.) This means - among other things - that I have plenty of horizon scanning materials in the pipeline.
But today I wanted to update you all on how colleges and universities are engaging with AI.
Overall, as I travel, present, talk with people, and scan the world, I’m seeing many trends, efforts, and attitudes from the past half year persist. There is a significant push to explore and implement AI across the board, from the classroom to research and campus operations. Institutions are trying out various structures of response: committees, task forces, energetic individual leaders, externally funded projects, etc. Assessment and cheating loom large. IT departments are struggling to support access to AI at enterprise level, while many users either use free or paid consumer applications. There’s a great deal of unease and anxiety about AI, as well as pushback and opposition. The situation is complicated and shifting.
Now, let’s examine some recent stories. I’ll start off with some projects developed outside the academy, then explore on-campus developments in research and teaching.
1 Pushes and projects outside the academy
Google has been busy. They released an AI Co-scientist, an application aimed at helping people do scientific research. It’s a bit more elaborate than the now-established basic chatbot - I’m interested in how it uses multiple agents:
Meanwhile, a Google team published a paper claiming that their custom-built AI engine, AlphaGeometry2, solved 84% of difficult geometry problems from the International Mathematical Olympiad. This achieved gold medallist level, they claim. It's also a big improvement over their previous effort: “we have significantly boosted the overall solving rate of AlphaGeometry2 to 84% for all geometry problems over the last 25 years, compared to 54% previously.” One caveat: this doesn’t seem to be a consumer-ready app, but an experimental engine using some Google Gemini tools.
2 Into higher education
We can start by catching up with what’s happening on campuses at the nexus of teaching and research: Bowdoin College just announced it was setting up a new AI effort, the Hastings Initiative for AI and Humanity. The “Hastings” in the name is Reed Hastings, the Netflix chair and co-founder, who donated $50 million to start it off. And starting off right now means:
hiring ten new faculty members in a range of disciplines;
supporting current faculty who want to incorporate and interrogate AI in their teaching, research, and artistic work;
leading conversations about the uses of AI and the changes and challenges it will bring, including workshops, symposia, and support for student research.
Why set this up? Hastings explained in a New York Times interview:
“We’re going to be fighting for the survival of humanity and the flourishing of humanity,” Mr. Hastings said. He compared A.I. to social networks, noting that social networking had grown so fast that few people initially understood the changes it might bring to human interactions and behavior.
“The A.I. change, I think, will be much bigger than the social networking change,” Mr. Hastings added. “So it’s important to get started early before we’re overwhelmed by the problems.”
Note that Bowdoin is a liberal arts college, not a research university.
Elsewhere in the American northeast, Emerson College is experimenting with AI offerings and projects through its Emerging Media Lab. That local media report contains some fascinating details, like a group of students refusing to do an AI video authoring assignment, one professor having different AI policies for two different classes, and a researcher building an AI-powered emotional support robot:
Note this reasoning behind committing to AI research in teaching:
[T]he college has faced an enrollment decline, which has led to budget cuts and layoffs at the beginning of this academic year. [Marlboro Institute professor and AI Taskforce member Russ] Newman said that investing resources in AI might help increase enrollment.
“Figuring out what our unique stamp on an education that happens to incorporate AI is, is only going to strengthen our overall standing,” he said.
Shifting our focus to research, and moving across the country, we find that a Stanford University professor developed a generative AI tool, Evo 2, which can produce DNA sequences of more than one million nucleotides. It was trained on quite a genetic database: “the known genomes of 15,000 or so plants and animals – the eukaryotes – which includes humans. Our dataset has now expanded from about 300 billion nucleotides to almost 9 trillion with Evo 2… It’s like a representative snapshot of all species on Earth.”
One interesting detail: “In terms of safety, we have left out the genomes of viruses to prevent Evo 2 from being used to create new or more dangerous diseases.”
Another academic research project examined AI and found it badly wanting in one area. A Columbia (University) Journalism Review article determined that AI citation of news sources was terrible. Klaudia Jaźwińska and Aisvarya Chandrasekar asked a series of chatbots to produce citations for journalistic articles, based on quotations, and the results were generally wrong - or, worse, confidently incorrect.
At a larger scale, a group of community colleges in the Midwest, Rocky Mountains, and Northeast joined forces to develop shared classes about AI. Facilitated by the Complete College America nonprofit, Atlanta Metropolitan State College, City Colleges of Chicago, the City University of New York (CUNY), Cuyahoga Community College (Tri-C), and Pikes Peak State College will develop curricula aimed at preparing students for employers’ AI needs. Two other organizations play a role in this project, Axim Collaborative and Riipen.
On a related note, some Chinese universities are expanding their AI course offerings. (archived) For example, perhaps the nation’s most esteemed institution, “Tsinghua University plans to take on 150 undergraduate students for a new academy to ‘cultivate talent in fields that integrate AI with other disciplines,’ official news agency Xinhua reported on Sunday.” Note the rationale: “The report said the aim was to ‘serve the needs of the country’s strategy and the development of society.’” The same article reports that some students are using AI assistants in their studies.
At a still larger level, the American Association of Colleges and Universities (AAC&U) surveyed several hundred campus leaders, “university presidents, chancellors, provosts, rectors, academic affairs vice presidents, and academic deans,” about AI on campus. The results are fascinating and useful, starting with the stated opinion that a supermajority of students - but not faculty - are using generative AI. More faculty support is needed for the age of AI, including help mitigating AI-enabled cheating, which respondents saw on the rise. There’s also this caution:
Large majorities of these leaders cite specific hindrances to GenAI adoption and integration at their schools. The challenges most often mentioned include faculty unfamiliarity with or resistance to GenAI, distrust of GenAI tools and their outputs, and concerns about diminished student learning outcomes.
At a meta-level, the AAC&U survey is also useful for what it shows about what academic leaders think is happening with AI. It’s very important that we have such studies; thank you to AAC&U.
Some reflections on these stories:
I’m fascinated by the different reasons academics have for doing AI work. For some it’s research to make something new, while for others it’s to study AI itself. Rationales for doing more AI teaching range from supporting national strategy to better preparing students for careers to keeping one college afloat.
Similarly, I’m struck by the huge diversity of approaches campuses are taking. There’s analytical research, producing new stuff, crafting class policies, making new curricula, launching an initiative, etc.
It’s important to note the rise of collaborative projects. Within campuses, the Emerson and Bowdoin efforts show the importance of interdisciplinary work. Then it’s great to see inter-campus projects - once again, community colleges show the way.
At the same time, it’s vital to keep an eye on what the AI corporations are providing, and how rapidly that is developing. I know this is a cliché, but it’s true.
Next up: we turn the scanner to other domains.
(thanks to a bunch of friends and colleagues for links and ideas, including Tom Lairson, Howard Rheingold, and Peter Shea)