On this last day of 2023 - in its very last hours - I want to wrap up the year by considering how AI has impacted two areas: culture and higher education. For each I’ll add some projections into 2024.
(This is a sequel to my last post, which looked at AI in terms of technology and economics.)
1. Culture and society
Generative AI’s rapid expansion in 2023 elicited quite a range of cultural and social responses. We’ve seen attitudes stretching from delirious enthusiasm (AI is the most significant invention since fire!) to apocalyptic gloom (we need to call in airstrikes on rogue servers!) and a pretty wide range in between, not to mention people shrugging off LLMs because they don’t care or have other things to worry about.
Some of the extreme attitudes are worth noting here. On the dreadful side there have been calls to pause or stop AI research and development (for example). That option is still in the air as 2024 draws nigh. One reason is the argument that AI presents an existential threat to humanity. This can play out in all kinds of ways, such as artificial general intelligence replacing human functions or everyone’s favorite lazy invocation of Skynet (the machines just kill us all; Harlan Ellison did it earlier and better, but so did this movie and of course the original robot story). I remain fond of the paperclip maximizer idea: an AI stubbornly performs a human-set task to the bitter end, devouring human civilization and more in the process.
Opposing them are arguments for LLMs enhancing human life. Sometimes this means democratizing creativity. It can mean automating drudgery. And perhaps generative AI points the way to the old dream of reducing working hours, freeing up human lives for other pursuits.
This range of AI attitudes needs further research. At least someone should whip up an infographic.
Beyond AI cultural attitudes are AI uses. I was surprised that we didn’t see more geopolitical uses of these tools, like generating speeches, psyops, or propaganda, but perhaps that’s been happening under the radar. Who checks a party hack’s work when they put out a poster? Perhaps politicians and operatives have taken generative AI more seriously and used it as an advisor or colleague, as I’ve suggested. Again, just because there haven’t been stories about this doesn’t mean it hasn’t been happening.
More visible have been AI uses in arts and communication. I think most of us have seen DALL-E, Midjourney, etc. images in the world (see my latest example above). AI-generated TV news anchors are on offer from one new service. Chatbot text looks like it’s out in the world, too, albeit harder to discern, although some entertaining examples do crop up. Older forms of AI have been in computer gaming for a while; my sense is that designers and companies are using generative AI across the board, from making new assets to generating scripts.
Absent the legal and policy challenges I’ve mentioned before, it’s a fair bet people will keep using generative AI to generate stuff in 2024.
Anxiety about and criticism of AI-generated content might lead people to label their content as not using AI, using some AI, or being all AI. Here’s an example which just came into my inbox this morning:
I imagine something like Creative Commons licenses: NO-AI, SO-AI, ALL-AI. Perhaps we’ll see a nonprofit or business emerge as an arbiter of such declarations, setting up a Good Housekeeping kind of seal of authentic humanness. Or folks will just do it themselves. It’s interesting to think about how best to prove such a thing - by providing videos of oneself at work, perhaps?
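To make the idea concrete, here is a minimal sketch of what a machine-readable version of such a declaration could look like: a JSON “sidecar” file produced by a bit of Python. The three label names follow the scheme imagined above; the field layout, the sidecar-file convention, and the evidence field are all hypothetical, not an existing standard:

```python
# A minimal, hypothetical sketch of a machine-readable AI-use declaration,
# loosely modeled on Creative Commons license tags. The labels come from the
# post's imagined scheme; everything else is an assumption for illustration.
import json
from dataclasses import dataclass, asdict
from datetime import date

ALLOWED_LABELS = {"NO-AI", "SO-AI", "ALL-AI"}

@dataclass
class AIUseDeclaration:
    label: str              # one of the three labels above
    creator: str            # who is making the claim
    declared_on: str        # ISO date of the declaration
    evidence_url: str = ""  # optional pointer to proof, e.g. a work-session video

    def __post_init__(self):
        if self.label not in ALLOWED_LABELS:
            raise ValueError(f"label must be one of {sorted(ALLOWED_LABELS)}")

def write_sidecar(decl: AIUseDeclaration, path: str) -> None:
    """Write the declaration as a JSON 'sidecar' file next to the content."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(asdict(decl), f, indent=2)

# Example: an author declares an essay was written without AI assistance.
decl = AIUseDeclaration(
    label="NO-AI",
    creator="Jane Author",
    declared_on=date.today().isoformat(),
    evidence_url="https://example.com/writing-session-video",
)
write_sidecar(decl, "essay.ai-use.json")
```

A third-party arbiter, if one emerged, could countersign files like this; self-declaring creators could simply publish them alongside their work.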
Further down the communication road is the place where users interact with an AI who represents a person. I’ve spoken before about the clear market appeal of bots like Character.ai and Replika, which create versions of historical, fictional, or idealized people. Hello History focuses on historical chats. Even with social scorn, I think we’ll see more of these appear.
One angle of this AI use is creating bots of living people. OpenAI’s DIY ChatGPT gives users the chance to make textbot versions of themselves (yes, I’m still working on the one for my professional self). Other services are starting to appear - Donald Clark has had some inspiring successes with generating video avatars of himself. I don’t think this has caught on enough to draw significant cultural responses.
Things get tricky when someone creates a doppelganger of someone else, with or without their consent. A recent Politico story describes some examples, like a technologist creating an app of his favorite but unavailable therapist. We are stumbling into a time when we can make digital twins and copies of living people which are convincing enough for at least some audiences to be valuable.
We need to spend more time on this. Our Future Trends Forum session last week dove into the idea and came up with all kinds of dimensions and issues, from cost and copyright to privacy and labor.
Last cultural point: the way that AIs produce problematic content was a major concern in 2023 and will probably be so next year. I mean “problematic” in the full sense of AIs creating creepy art, hallucinating or bullshitting, showing biases, etc. And while sometimes this occurs without the user’s intent, I anticipate we’ll be worried as well about people using the technology to deliberately create, enhance, and share lies.
2. Higher education at last
There’s a lot under this header, so let me share what I can before running out of time:
2023 showed higher ed engaging… slowly and gingerly with generative AI. As far as I can tell, access to LLM tools is widespread (i.e., IT blocks were rare), which means some proportion of students, faculty, and staff have been using ChatGPT et al. Institutions have generally avoided issuing policies, deferring instead to academic units (departments, libraries, IT shops) and individual faculty. Some faculty and staff use AI to assist in operations, notably in document development.
I don’t know what number of students use chatbots to produce material for their assignments, but can’t escape the sense that the real number is immense, and campuses have generally not taken this into account. Chatbot content detectors have failed to be of use so far. I also don’t know how many students use chatbots to summarize reading. For me this means two statements, which I’ll offer as hypotheses:
Fall 2023 saw a substantial increase in student cheating, enabled by AI.
Spring 2024 will see even more.
People who are concerned about academic integrity, as well as those worried about grade inflation, may well think this academic year represents a significant downturn in student learning.
There have been calls for information and digital literacy to address AI (for example), which is consistent with the history of those movements. I’m not seeing most institutions heed such calls, at least to the level of, say, hiring more information literacy librarians or setting up digital literacy centers. But they probably should.
I expected a lot more academic resistance to generative AI than I’ve seen. I was thinking of dread about students cheating and fear of instructors being replaced; those notes have been sounded, just less often than I would have thought. One brilliant librarian on an early Future Trends Forum session recommended we not use LLMs at all, as they were too problematic, and I’ve had some friends tell me this, but that attitude hasn’t taken hold. As an outlier, there has been one open academic attack on AI, and I wouldn’t be surprised to see more like it in 2024. Skepticism will persist for a variety of reasons. The counter-hype body of generative AI critique, partly coming from within academia, has won academic adherents, and this will certainly continue. (I just found a scholarly article referring to chatbot content the authors deem problematic as “botshit.”)
That seems likely to describe 2024, in fact: academics starting to make use of the tech, with some criticism on the margins. However, if external forces reshape AI - policy and legal threats, a massive cultural turn against it - then academics might step back from the technology somewhat.
What else should we expect from generative AI and higher education in 2024?
Chatbot detectors will keep trying to improve after their failures. Watch for Turnitin, ZeroGPT and others to issue iteration after iteration. This is a great space for startups to appear.
Educational technology companies will likely roll out AI services within their preexisting applications. That is, think about learning management systems/virtual learning environments adding chatbots for discussion board and syllabus generation. Imagine decent chatbots appearing within other enterprise systems.
At the same time I suspect we’ll see more academic use of generative AI in non-AI campus applications than in AI-specific apps like Bard. Already students, staff, and faculty have access to LLM tools through Google and Microsoft applications.
2023 saw a torrent of pedagogical AI uses. Instructors having students analyze bot output, students using LLMs as writing buddies, using AI to facilitate simulations, and many more are out there in the world now. Faculty and staff are researching these uses, so look for output to appear over 2024.
AI has long held out the possibility of personalized learning. We might see progress on that front in 2024, as campuses set up programmatic pilots. Imagine, say, a university making available “MyCoach” software for each student, with the app trained on various curricular materials, then fed a particular student’s data. We should expect a company to offer something like this, too.
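To flesh out that speculation, here is a toy sketch of how a “MyCoach”-style app might assemble a personalized prompt: retrieve relevant course material, attach the student’s own record, and hand the result to whatever LLM the campus has licensed. Everything here (the data shapes, the crude keyword-overlap retrieval, the stubbed model call) is an illustrative assumption, not a description of any real product:

```python
# A toy sketch of a "MyCoach"-style personalized tutor: retrieve relevant
# course material, attach a student's own data, and build a prompt for an
# LLM. The retrieval method and data shapes are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    name: str
    courses: list
    recent_grades: dict = field(default_factory=dict)
    stated_goals: str = ""

def retrieve_materials(question: str, curriculum: dict, k: int = 2) -> list:
    """Crude keyword-overlap retrieval over curricular text chunks.
    A real system would use embeddings and a vector store."""
    q_words = set(question.lower().split())
    scored = [
        (len(q_words & set(text.lower().split())), title, text)
        for title, text in curriculum.items()
    ]
    scored.sort(reverse=True)
    return [(title, text) for score, title, text in scored[:k] if score > 0]

def build_prompt(student: StudentRecord, question: str, curriculum: dict) -> str:
    """Assemble the coaching prompt: course context + student context + question."""
    context = "\n\n".join(
        f"[{title}]\n{text}" for title, text in retrieve_materials(question, curriculum)
    )
    return (
        f"You are a study coach for {student.name}, enrolled in {', '.join(student.courses)}.\n"
        f"Recent grades: {student.recent_grades}\n"
        f"Goals: {student.stated_goals}\n\n"
        f"Relevant course material:\n{context}\n\n"
        f"Student question: {question}\n"
        f"Answer with concrete study advice tied to the material above."
    )

# Example usage with made-up data; the final step (calling an LLM) is left out.
curriculum = {
    "Week 3: Photosynthesis": "Light reactions, Calvin cycle, chlorophyll absorption spectra...",
    "Week 4: Cellular respiration": "Glycolysis, Krebs cycle, oxidative phosphorylation...",
}
student = StudentRecord("Ana", ["BIO 101"], {"Quiz 2": "C+"}, "raise my quiz scores")
prompt = build_prompt(student, "I keep mixing up the Calvin cycle and the Krebs cycle.", curriculum)
print(prompt)  # would be sent to an LLM of the campus's choosing
```

In practice the retrieval step would run over an institution’s actual curricular materials, and feeding in individual student records would raise exactly the privacy and FERPA questions campuses will have to work through.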
Two related points: first, I wonder where research uses will go. I’ve already forecast possibilities of using AI for various publication purposes, such as literature review and draft feedback. We’ve seen some journals block AI from author fields, but it seems likely that we’ll see more LLMs behind the scenes, backing articles and monographs.
Second, medical applications fascinate me. On the one hand, allied health is a heavy user of technology in general, hardware and software, so it shouldn’t surprise the reader to learn of AI pilot projects. Medicine also has a longstanding communication problem, so it should also not shock us to see the field turn to AI for assistance (and assistants), as other industries have done. Yet on the other hand, medicine has much higher hurdles for tech adoption than many other fields. Human lives are at stake, obviously, which ramps up demands for quality output. Medicine also has a stack of policy defenses, from national law (think HIPAA in the United States) to professional constraints and training guidelines. I’ll watch this area carefully for its own sake, but also to see what might apply to the rest of higher education.
Back to the digital twin idea. I think we’ll see instructorbots appear in 2024. They can come from many sources: students coding up a beloved prof, administrators wanting to experiment with automating instruction, experimenters wanting to get ahead on the topic and maybe break new ground, companies eager to cash in on two centuries of automation practice, nonprofits seeking to bring faculty to underserved communities, other experimenters wanting to do digital twin research on professors, and more. I don’t think we’ll see bots replacing professors at scale by this time next year, but we should expect instances and perhaps pilots, plus a guaranteed controversy.
On the critical side, perhaps we will see stronger action. Months ago I asked if academics would try to block AI projects from scanning their content (on campus: courseware, local repositories, special collections; hosted elsewhere: scholarly publications); perhaps we’ll see efforts to do so, especially as lawsuits over intellectual property proceed. Clarke and Esposito recently argued that scholarly publishers are handing their market value over to AI companies. What actions will such attitudes drive?
I am curious about how campus drives for inclusion and social justice will intersect with AI in 2024. On the one hand some of the strongest critiques of LLMs have come from antiracist scholars, drawing attention to bias in datasets. Others have pointed out social inequalities in being able to access and make use of the technology. On the other hand, I’ve heard from other academics that they see ways of using generative AI to redress inequities of access to education.
There’s still more to say, and I’d like to hear from you in comments, but let me wrap up this post for now with a pointer to what we might want to do.
There’s a crying need for faculty and staff professional development about generative AI. The topic is complicated and fast moving. Already the people I know who are seriously offering such support are massively overscheduled. Digital materials are popular. Books are lagging but will gradually surface. I hope we see more academics leading such professional development offerings.
For an academic institution to take emerging AI seriously, it might have to set up a new body. Present organizational nodes are not necessarily a good fit. For example, a computer science department can be of great help in explaining the technology, but might not have a lot of experience in supporting non-CS teaching with AI. Campus IT will probably be overwhelmed already, and might not have the academic clout needed to win the attention of some faculty and staff. Perhaps a committee or team is a better idea, with members drawn from a heterogeneous mix of the community. Not to be too alarmist, but we might learn from how some institutions set up emergency committees to handle COVID in 2020, bringing together diverse subject matter experts, stakeholders, and operational leaders. If a campus population comes to see AI as a serious threat, this might be a useful model.
This is the heroic age of generative AI, as it were, with major developments under way and many changes happening quickly. Things will settle down in a bit, most likely, as new technologies become production-level services and as the big money and governments start corralling AI for their ends, at least until the next wave hits. My point is that colleges, universities, and individual academics have the opportunity to exert influence on the field while it’s still fluid. As customers, as partners, as intellectuals, we can engage with the AI efforts. That engagement can take various forms, including creating open source projects, negotiating better service contracts with providers, lobbying for regulations, and issuing public scholarship. I hope campuses can grasp and support such work.
That’s it for 2023. Thank you for reading and supporting this Substack. What do you anticipate for next year in AI, culture, and higher education?