An essential player in the academic artificial intelligence world is campus technology staff. They are the people who make crucial decisions about how colleges and universities engage with AI, from developing and maintaining the necessary infrastructure to helping faculty choose tools and practices to supporting actual day-to-day usage. Yet they don’t get enough attention in discussions about AI and academia.
Earlier this month more than 7,000 of us attended the 2023 EDUCAUSE conference in Chicago, probably the best single venue on Earth for exploring what campus IT thinks and does about AI right now. By “attended” I mean I gave a preconference workshop, sat in on a bunch of sessions, talked with a ton of vendors, spoke with a range of IT staff from CIOs to lab managers, and ran a Future Trends Forum session smack dab in the middle of the conference floor.
(I also played an epic pickleball game with EDUCAUSE’s president and some staff, but that’s a story for another time, especially once the photos and video get out.)
I’ll summarize these observations and conversations by drawing out some themes. I’ll add a couple of futures notes at the end.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feaadfe61-707c-40dd-9778-ecbe06c39a1b_1024x1024.png)
One caveat: EDUCAUSE is a huge conference. There are multiple tracks of events every day and, as I said, thousands of people. I did my best to cover a lot of ground, but I could easily have missed some things. If I did, dear readers, please let us know in the comments.
EARLY, FAST-MOVING DAYS

There was a general sense that generative AI is just starting to impact higher education and the world, and that the technologies are developing incredibly quickly. For all the talk about hype, I only met people who were curious and either humble about their understanding, focused on one piece of the field, or simply selling their slice of AI and unwilling to comment on the rest.
In my workshop people consistently volunteered information about various practices, AI tools, and institutional strategies. In that classic Web 2.0 way, we built up our shared knowledge. And there was always someone ready to help with paywalls. I, for one, was glad to learn about Scite, Symon, and Research Rabbit.
CURIOSITY AND INITIAL RESEARCH

Many, many folks spoke to me of trying to understand large language models. Because they were in the IT world, most were keen to get their hands dirty by trying the technology and sometimes building things themselves.
One person mentioned this story of an AI “leading” a German church service.
Lance Eaton made this observation, which I agree with: “There’s a general confusion and angst about what we even know about generative AI collectively, and with that (or without that), how do we go forward?”
I was surprised that the EDUCAUSE Top 10 issues list didn’t include AI, and that the organization instead treated AI as a kind of honorary important topic.
A SENSE OF SOMETHING LIKE INEVITABILITY

A common sentiment was that AI was likely to keep growing in the world, and that higher education would have to prepare students for that future. As one or more commentators put it in our Google Doc, “Generative AI is not going away and students need to know how to use it effectively and responsibly.”
FIRST STAGE WORK

A good number of folks spoke of initial work with and on AI. These efforts included:
- Testing out AI plagiarism detection tools
- Hands-on exploration of one learning management system (LMS)’s first AI offering
- Learning how to support AI in preexisting, already supported tools, including Photoshop
- Using ChatGPT to produce writing, from test questions to job descriptions, cover letters, and scope statements
- Using image generators to create content for a virtual world
- Creating PHP and Python code, including code for AWS research environments
One liberal arts college reported setting up a study group.
The University of Michigan was an outlier here, setting up an internal, campus-wide, protected AI sandbox.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F14e40a43-ca33-4c1a-a76e-9d7876999609_1792x1024.png)
SKEPTICISM

Without running a real poll or survey, I can only estimate that about half the people I heard from expressed some criticism of emerging AI. The most popular concern was over bias, especially by race and gender. There was also a current of anxiety over AI errors and hallucinations. In my workshop folks recommended various AI critiques, including “The AI Dilemma.”
I think everyone was fine with colleges and universities teaching and researching how to critique AI.
There was some concern about how people could access AI. That included some thoughts about disabilities, but centered more on the digital divide in its various forms, especially the ability to pay fees. Lance Eaton has good comments on this; be sure to read through to his points on open access.
A LITTLE RADICAL CRITICISM

A few people took a more critical stance than their colleagues. One person told me she thought of herself as a Luddite, in the Brian Merchant/Cory Doctorow sense. There was some concern about committing academia to the whims and profits of giant, sometimes badly behaved companies. Some people were worried about LLMs’ impact on climate change or on water supplies. On a different level, several voiced the classic open source critique of proprietary software.
Nobody argued for refusing or blocking LLMs.
NEITHER UTOPIA NOR DYSTOPIA

The overall sense was of AI occupying a kind of middle ground. In Rogers’ classic innovation diffusion model, most discussions centered on AI uses which could yield incremental improvements to various academic technology functions: memo writing, student creativity, and so on. Or they were about ways of reducing particular problems. We were past the stage of appealing to early adopters and nowhere near wrangling with laggards. Few, if any, spoke seriously of AI bringing about a utopian world or a dystopian nightmare.
LOOKING AHEAD
As a futurist, I presented on some directions I thought AI might take, and asked folks for their thoughts. Several themes stood out:
AI winter/hype crash

Many, many people were concerned about AI hype and expected LLMs to slide down the famous Gartner Hype Cycle, perhaps very quickly. Some older folks remembered previous AI winters (times when faith in the technology faded, funding dried up, and no alternatives emerged) and thought another could be in the offing.
This gave rise to some campus ed tech concerns. Which providers would pull the plug on which services, or go out of business entirely? If academic interest faded, following broader social attitudes, how would campuses maintain AI services?
Interstitial stage before AI in everything

This was something I’d been thinking about, and something some EDUCAUSE attendees were keen on. They pointed out how many people were experiencing generative AI through preexisting, more familiar applications, like Photoshop, Google Docs, or Excel. Perhaps campus IT should devote more resources to supporting and teaching people about LLMs in those contexts, rather than in standalone services like ChatGPT and Midjourney.
Looking at the bigger picture: if generative AI continues growing in service offerings and usage, spreading through a huge range of information and technology niches, perhaps the next stage will see ubiquitous AI. Much as a range of tools has spread far beyond its origins - text editors, time displays, audio recording - generative AI might become that general. We don’t use word processors in the 21st century, in the sense of old hunks of hardware, but we process words all over the place.
If that’s the case, perhaps today’s range of AI-specific tools will decline or disappear. When a user wants text generated they can turn to email or a word processor. If they want an image crafted they ask their game console. When they need audio created, they ask their car to do so and play it back at an appropriate volume.
If so, then campus IT’s AI strategy has to shift. In terms of training and user support, teams would have to focus on traditional and mainstream applications rather than the plethora of AI-focused apps. They would also have to bring their criticisms of AI to bear when negotiating campus service agreements with giants like Microsoft, Google, Apple, Amazon, etc.
When I shared these thoughts with some folks, they generally agreed the scenarios weren’t implausible, but it seemed too early to see them in play now.
Were any of you in Chicago for the event, dear readers? What do you think of this snapshot from the campus IT point of view?