The year 2023 is running out of days, so I’ll use this time to look back on what happened in AI and education, and to look ahead to where it all might go in 2024.
Let’s start with the technologies and how they developed, then consider the economic angle, and I’ll add some projections based on both. In the next post we’ll turn to society and education.
1: Generative AI takes off
2023 saw generative AI simply explode. The large language model became widely recognized across the technology world, inspiring all kinds of research and development and eliciting excitement, fear, hype, critique, and discussion.
The LLM poster child ChatGPT appeared in late 2022, running on GPT-3.5, and has raced ahead ever since. New iterations appeared (now GPT-4), new features emerged, and above all a ton of usage took place. Microsoft, which helped fund OpenAI, embraced the technology and plugged versions of it into all kinds of services. Google started the year by launching a quick competitor, Bard, then ended it with another, Gemini. Baidu launched Ernie. Meta put out various AI tools. Amazon launched Amazon Q, albeit only in preview and for licensing fees.
Image-generation tools proliferated and improved. Midjourney led the way in richness and complexity of output, but OpenAI’s DALL-E advanced rapidly and is now roughly on par. Other types of generative AI appeared, producing presentation slides, comic book pages, mini-clips of audio (one Google project), and more.
Those different modalities started crossing into each other. OpenAI connected DALL-E to ChatGPT. Bard allowed image uploading. Multimodality became the watchword, with image and audio input and output. Perhaps more importantly, AI started appearing within non-AI apps, from Google Workspace to Photoshop. Microsoft infused Copilot across its services, from the desktop to Teams to Office. Snapchat and Instagram added generative AI tools.
While these companies (plus OpenAI, a strange hybrid of nonprofit and firm) were energetically producing services, open source efforts emerged. Hugging Face became a leader in open source generative AI, hosting a range of projects. Meta’s Llama models spun out into the open ecosystem. RedPajama published a vast training dataset.
An important move in 2023 was AI narrowing. That is, we started the year with applications aimed at speaking generally and creating images for everyone, globally. (That’s one of my go-to uses of Bard and ChatGPT: getting a sense of the consensus story about a topic.) We still have those: ChatGPT, Midjourney, etc. But as I forecast, projects appeared with narrower ambitions, built for smaller-than-global functions and audiences. MonadGPT is a fun little example, a chatbot trained on texts from one region and historical era. Elon Musk fired up a sarcastic chatbot just for Twitter/X. OpenAI launched a DIY chatbot service, letting users craft bots for their own purposes.
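To make that narrowing concrete, here’s a minimal sketch of how one might scope a general chat model down to a single historical persona with a system prompt, which is roughly the mechanism behind such DIY bots. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the persona and wording are my own invention, not any particular project’s.

```python
# A sketch of AI "narrowing": constrain a general model to one persona.
# Assumes the openai package (v1.x) and OPENAI_API_KEY set in the
# environment; the persona below is invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a chatbot steeped in the pamphlets and sermons of "
    "seventeenth-century England. Answer in that period's voice and "
    "worldview, and profess ignorance of anything later."
)

def narrow_chat(user_message: str) -> str:
    """Send one user turn through the narrowly scoped persona."""
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(narrow_chat("What causes disease?"))
```

Note how little machinery the narrowing can require: the scope lives in the prompt, not in the model itself, though projects like MonadGPT go further and train on period texts.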
2: The macroeconomic side
Generally speaking, the business of generative AI boomed. The R&D mentioned above was powered by large investments. Huge firms led the way, from Google and Meta to Amazon and especially the oft-derided Microsoft.
Startups popped up, most notably Anthropic, whose Claude chatbot is now in version 2. Initial forms of new AI businesses started to appear; for example, OpenAI announced a store where people can buy and sell chatbots and plugins. And, as noted above, AI expanded across industries. Morgan Stanley added a bot for its staff to use internally.
However, I don’t think we’ve seen a good business case for these major AI enterprises. Remember, these huge systems are very expensive to set up, train, and run. As I’ve said before, if Google or Microsoft uses AI to increase the value of its preexisting offerings, is that likely to boost usage enough to pay back the enormous investments involved? I haven’t seen the paid subscription model prove robust in practice yet. Will enough businesses pay to improve their processes (cf. one survey)? Does Meta think AI will turn its fortunes around after the metaverse flop?
LLMs are also very costly in terms of electrical power, so much so that Microsoft is apparently considering nuclear power plants to run its AI systems. Research continues to examine just how big a wattage (and carbon) footprint generative AI puts down; generating images takes far more energy than generating text, according to one paper. That carbon footprint, however, doesn’t seem to have had much impact on AI debates so far.
Beyond the AI business itself, arguments and anxieties continue over LLMs’ impact on the labor market. How many people will the technology render obsolete, and how many new jobs will it create, in the classic industrial revolution pattern? How many coders will civilization actually need to hire as AI gets better at generating software? By 2023’s end I saw no sign of these questions being settled; they, and employment concerns generally, remain.
One last note on the economic angle: the Altman affair saw a frenzy of activity and interest, then ended with something close to a status quo ante, with Altman leading OpenAI once more. I stand by my observations on what the story revealed: a damning lack of transparency, the growth of competing AI-centered ideologies, and the news media’s fierce hunger for human stories in the AI space.
3: So what then? or, some projections
Based on what we’ve seen from the technology and economics of AI in 2023, what should we anticipate for 2024?
More research and development will certainly occur, often at a frantic pace, fueled by hype, usage, and investment from private and public sources. AI offerings will become more full-featured and more complex, spreading into different computing domains with multimodality as the default. Watch for generative audio and video as well as improvements to text and images.
AI will most likely continue to appear in otherwise non-AI applications. Imagine generative text, images, and other media across the digital landscape, from gaming to insurance forms, transportation to government. More people may end up using AI-infused tools than AI-specific ones.
We should expect more AI aimed at narrower audiences than the entire world. Months ago I wrote that we should see AI created by and for particular nations, corporations, religions, and fanbases; I stand by that.
We should also look for generative AI to appear across a range of hardware. Meta already got some AI into glasses. I wouldn’t be surprised to see applications appear on game consoles, smartwatches, e-readers, bicycles, medical devices, and more.
Problematic AI output will continue to be an issue, and not a simple one to address (this story just broke as one example). Already we’ve seen calls to apply information and digital literacy to the problem, and it might also drive government regulation. Naturally there are technological solutions in the offing, like digital watermarks and using generative AI to correct generative AI; none have had real-world impact yet.
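For the curious, here’s a toy sketch of the statistical watermarking idea from the research literature (the “green list” approach of Kirchenbauer et al., 2023): the generator secretly favors a pseudorandom half of the vocabulary, and a detector later checks whether suspiciously many words fall in that half. Everything here, from the hashing scheme to the word-level granularity, is simplified for illustration, not a production implementation.

```python
# Toy "green list" watermark detector: unwatermarked text should have
# roughly half its words "green"; watermarked text should have far more.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, keyed on the
    previous token, so the partition is reproducible at detection time."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all tokens come out green

def green_z_score(tokens: list[str]) -> float:
    """z-score of the observed green fraction against the 0.5 expected
    for unwatermarked text; large positive values suggest a watermark."""
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

sample = "the rain in spain stays mainly in the plain".split()
print(f"z = {green_z_score(sample):.2f}")
```

Detection here is purely statistical, which is also its weakness: paraphrasing or light editing can wash the signal out, one reason none of these schemes has yet mattered in practice.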
Government actions: I’m not sure where these are headed. 2023 saw one big international statement and an American directive focused on safety. Safety will probably loom large, especially to the extent legislators and populations make the “we failed to corral social media and now must control AI” argument. On the flip side, we should anticipate more national investment in developing AI projects aimed at local interests, like this Japanese effort.
One underappreciated AI form is the character bot, like those from Character.ai and Replika. These seem to have an audience, or a market, and I’d expect them to grow and develop. As I’ve said elsewhere, and as per a Future Trends Forum session this week, we should see various forms of digital twins develop, including post-mortem ones. Hearing the dead speak is a feature of older media, like the gramophone and movies; now we can have approximate conversations with them. I’m waiting for “twin me” webpages, into which we can enter varied datasets for training.
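If such “twin me” pages do appear, I’d guess many will work less by retraining a model than by retrieval: pulling relevant passages from a person’s own writings into the prompt (so-called retrieval-augmented generation). Here’s a deliberately crude sketch of that pattern; the corpus, scoring function, and prompt wording are all illustrative stand-ins.

```python
# A crude sketch of a retrieval-based "digital twin": find the passages
# of a person's writing most relevant to a question, then ground the
# model's answer in them.
def score(query: str, passage: str) -> int:
    """Crude relevance: count words shared between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def build_twin_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Assemble a prompt grounding the model in the person's own words."""
    top = sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]
    context = "\n".join(f"- {p}" for p in top)
    return (
        "Answer as this person would, drawing on their own writings:\n"
        f"{context}\n\nQuestion: {query}"
    )

# Illustrative stand-in for an uploaded personal dataset.
corpus = [
    "I have always believed teaching is a form of storytelling.",
    "My grandmother's garden taught me patience.",
    "Technology should serve learners, not the other way around.",
]
print(build_twin_prompt("What do you think about teaching?", corpus))
```

The heavier alternative, fine-tuning a model on the datasets themselves, would serve the same end at far greater cost.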
Once again I summon up an existential threat: not a threat from AI to humanity, but a threat to the existence of generative AI itself. The history of AI is a series of booms and busts, a.k.a. “AI winters,” and there’s no reason to suppose the current wave will be immune. Previous AI developments suffered Arctic fates when they failed to achieve their postulated goals, and it’s possible LLMs will fail in ways of their own, such as never staunching the flow of bad output.
At the same time I see the chance of a hype crash. The hype machine keeps churning, and we know what that often means. Gartner positions generative AI at the very tip of its famous Peak of Inflated Expectations:
…which means a tumble into the Trough of Disillusionment is coming up fast. Don’t be surprised when the hype machine stutters and media start emitting more skeptical, then hostile, noises.
Other threats to AI loom ahead. I mentioned challenges to business models above. Security risks also stand out, as text or voice input can easily carry information ripe for misuse. Furthermore, many are concerned about AIs generating biased content, especially along lines of gender and race. High-profile bias incidents might damage LLMs’ usage.
I am especially focused on copyright lawsuits. Just this week the New York Times joined a parade of such legal challenges, which broadly accuse for-profit AI projects of monetizing intellectual property by training on content without permission. As I’ve said before, it’s not hard to imagine a judge in any one of these cases ordering (say) OpenAI to suspend its applications, or ruling that they be shut down or erased permanently. We have to be ready for such events, which many will no doubt greet with shock and surprise.
What might we anticipate for 2024 which 2023 doesn’t suggest?
There have been sporadic shouts about generative AI yielding artificial general intelligence (AGI), either through current technology evolving past some key threshold or through experimentation on LLMs’ fundamentals. I’m skeptical, but it’s a possibility, and we should at least prepare for shouted claims that AGI has been attained.
In contrast, I’m bullish on the open source side. There’s a *lot* of development happening in that world, with a wide range of attitudes, projects, and philosophies in play. At the least, open source AI might shrink the electrical (and carbon) footprints of LLMs. We might also expect developments the giants currently won’t allow.
That’s all for this post. In the next I’ll look into AIs in society and higher education, again peering back across 2023 and ahead to 2024.