I have three notes for you today, two brief and one more extended.
First, a link. I just posted a large critical AI resource list/bibliography on my blog.
That started off with me trying to pick the best readings for one class in my Technology and Innovation seminar, then grew into a blog post asking for suggestions, which in turn elicited all kinds of responses. I corralled them all into one post. Hopefully it’s useful. Later on, I might turn it into a WordPress “page” or a Google Doc.
Second, an update on subscriptions: so far I’ve published everything here on the free and open web. Yet this work on AI and education’s future is taking up more and more of my time. It’s also starting to incur software costs, and I will need to get some new hardware to experiment with open source LLMs. So I’m going to gradually place some content behind a paywall in order to make this work sustainable.
I’m an open person. I publish as much of my work as I can on the open web. Yet it’s time to shift somewhat so that I can keep this going.
Now, third, an extended reflection on…
L’AFFAIRE ALTMAN
The story of OpenAI’s board firing CEO Sam Altman seems to have wound down. As of this writing, Altman is back in his old job, all but one of the board members have been booted, and the curious for-profit/non-profit centaur is striding across the plains of large language models once more.
There are plenty of summaries out there, insofar as we can find enough material to summarize. Instead of adding another, I’d like to take a longer view. Looking back as well as ahead, I have a few thoughts.
This is a story of opacity. It’s about innumerable private negotiations stacked on top of multiple layers of secrets. There is still no clarity around, for example, why the old OpenAI board fired Altman, nor around the process of choosing the new board. Rumors and speculation naturally fly, while all kinds of players issue formal statements and sub rosa leaks. As I’ve said before, this is a poor way to develop a potentially world-transforming technology. The story hides much of what actually happened from the world of developers and users, not to mention humanity in general. I see no signs that OpenAI or Microsoft, which owns 49% of it, is interested in becoming more transparent.
Perhaps the Altman saga will drive some users to alternative AI applications. Maybe Bard saw an uptick over the past week. Claude, Poe, and others might have enjoyed some growth in their user bases. I haven’t seen data, but I am very curious. Similarly, perhaps the OpenAI shambles encouraged exploration of open source. Did Hugging Face and Red Pajamas see new arrivals?
As OpenAI staff and stakeholders struggled with each other and the rest of us tried to figure out what was happening, emergent AI ideologies came into view. We learned of a divide among AI partisans: some want to speed up development, hoping to learn from and otherwise benefit from a faster rate of growth, and to achieve artificial general intelligence as soon as possible (AI ASAP?). These AI accelerationists oppose other factions, which want to slow down AI development, fearing its dangers. That camp includes people wanting to address problems of bias and representation among LLM developers, as well as more extreme believers who dread an AI attack on the human race: doomers or decelerationists. Some of these want to align emerging AI with some model of what’s good for humanity; indeed, OpenAI has a project along those (ahem) lines, possibly led by one of the board members who ousted Altman.
Some of the accelerationists blend their ideology with preexisting effective altruism, the now somewhat disreputable belief that the rich should contribute to society in ways which improve the long-term odds of human survival. AI plays a key role in this vision, from threatening to doom humanity to enabling a grand new civilization. That’s effective accelerationism, or e/acc.
Does Altman’s return to power represent a win for acc or e/acc? Probably, as he was a booster of speedy development. But the story also points to a broader ideological divide around AI, which may well seep out beyond the battered OpenAI boardroom.
(I think this divide is an expression of a deeper ideological split, one concerning the next century or two of human civilization, which I’ve started to write about elsewhere.)
There are rumors about another OpenAI project which might have inspired Altman’s brief fall. The story goes that Q* can reason its way through basic math, not through token prediction as with ChatGPT, but by actually grappling with the underlying ideas. Such a development could, given many grains of salt, represent a major step toward artificial general intelligence.
What does the Q* story mean?
For a start it reminds us of the sheer opacity of OpenAI. We have nothing real to go on, certainly no code to test out. Second, to the extent that people in OpenAI thought Q* was a real thing, it offers an example of the ideological wars mentioned above in practice, shaping a company/nonprofit’s strategy. Third, it could be a red herring, either misdirection from some OpenAI players or just noise emitted from a group of smart people in chaos.
Altman’s rapid fall and rise have soaked up a tremendous amount of discursive space. The saga dominated AI discussion, especially in non-specialist media, which means it blotted out a huge range of other AI issues: ongoing copyright challenges, policy development, the emergence and use of new apps (I’m working on a couple of GPTs!), and popular attitudes toward the technology.
As an observer, I have to dig up what’s just been buried. We all might have to catch up.
At a meta level, what made l’affaire Altman such a spectacle? Why did media and people follow the story so closely? I’d love to see a good study of this. For now I can offer some speculation. Altman’s firing was a narrative jolt to the excitement over AI, giving it a sudden sideways turn. For those who saw Altman as heroic or at least as the single visible protagonist for LLMs, his shocking descent was irresistible and meaningful, as was his triumphant return. For their opposites, those critical of AI and/or viewing its development with dread, perhaps the firing and chaos were a kind of spectacular proof of their views: that AI enterprises were incompetent or heinous.
For other people, regardless of their attitudes toward the tech, perhaps the story resonated with feelings of precarity about their own work lives. People experiencing or fearing sudden and unreasonable job loss might have felt a connection, or just well-informed schadenfreude. Additionally, Altman was AI’s great celebrity, the human face of the tech, which made dramatic swings in his fortunes a natural draw for the media who make, and the people who consume, celebrity culture.
What does this story mean for higher education? I think we can apply some of the preceding points to colleges and universities. Opacity means campus faculty, staff, and students lack good information about OpenAI products and have little negotiating power in any enterprise relationship. We are making decisions about what to use and how in a vacuum. Perhaps this will nudge some academics to the alternative AIs previously mentioned.
Ideological conflict will start showing up on campus. For now I suspect it’ll be limited to computer science departments and IT units, but we shouldn’t be surprised to hear others thinking about accelerationism and deceleration. If AI grows in academic impact, these ideologies might start to play a role in our collective action around the technology.
I am curious to see if the AI ideologies connect with other debates on campus. I’ve previously written about opposing pro-growth and degrowth thinking about academic institutions and can imagine partisans of those sides feeling a resonance with the AI schools. Academic critics of the digital world, from Shoshana Zuboff and Ruha Benjamin to Chris Gilliard and Anna Mills, might see deceleration as an ally or at least a lesser evil.
That’s enough for now. I fear making too much out of one small if spectacular story, but this one has had some significant impact. It also stands at the crossroads of several major ideas. Please let me know what you think.
Now to try out some new AI exercises on my students, then fire up another scanner issue!
Great points, Bryan. This was a powerful and dramatic story on many different levels, which I think you nicely highlighted. I agree that there will be a growing movement of opposing ideologies regarding the acceleration of AI. I noticed this at a recent conference I attended (OEB). Very smart people there. We had a fun and humorous debate about whether or not AI would cause more harm than good when it comes to learning. A majority said it wouldn't, but I was surprised by how many did feel it would cause more harm than good. The debate brought out good points on both sides. What are your thoughts?
You've brought important points to the fore -- _opacity_ in particular. What no one is mentioning, however, is the DoD. My brother, a military engineer by background, recently retired as an MS exec deeply involved with the early stages of OpenAI -- the MIC is central to much of it, of course.