Greetings from a very hot late June, here in northeastern Virginia. Today we’ll fire up the scanner and check out the latest in AI economics, with reflections on implications for the future.
An emerging business partnership model

OpenAI and TIME magazine agreed to a partnership, whereby the AI firm could ingest the periodical’s archives going back a century. ChatGPT output would “featur[e] a citation and link back to the original source on Time.com.” On its side, TIME would “gain access to OpenAI's technology to develop new products for its audiences.”
Meanwhile, YouTube is apparently negotiating with the music industry to license individual performers for audio cloning.
Apple’s partnership with OpenAI is another example of this emerging trend. There are several key features to note. First, Apple will deploy AI at two levels: improving the preexisting Siri service locally on devices, and pushing more complex tasks to ChatGPT in the cloud. Second, as Singularity University notes, Apple isn’t paying money for the deal: “It involves no cash for now: OpenAI will get exposure to Apple customers in exchange for granting access to its top AI model.” Third, it’s not an exclusive deal: “Apple has made it clear it’s open to other partnerships in the future. This could mean a similar arrangement with Google or Anthropic.”
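To make the two-level idea concrete, here is a minimal sketch of how such on-device/cloud routing might look. This is purely illustrative: the function names, the crude complexity check, and the handoff logic are my own assumptions, not Apple’s or OpenAI’s actual implementation.

```python
# Illustrative sketch of a two-tier assistant: handle simple requests locally,
# escalate complex ones to a cloud-hosted model. All names here are hypothetical;
# this is not Apple's or OpenAI's actual architecture.

def looks_complex(request: str) -> bool:
    """Crude stand-in for whatever heuristics or classifiers decide
    that a request needs a larger cloud model."""
    return len(request.split()) > 20 or "write" in request.lower()

def handle_locally(request: str) -> str:
    """Placeholder for an on-device model (e.g., a revamped Siri)."""
    return f"[on-device answer to: {request}]"

def handle_in_cloud(request: str) -> str:
    """Placeholder for a call to a cloud model such as ChatGPT,
    presumably gated behind a user consent prompt."""
    return f"[cloud answer to: {request}]"

def assistant(request: str) -> str:
    # Route based on the (hypothetical) complexity check.
    if looks_complex(request):
        return handle_in_cloud(request)
    return handle_locally(request)

if __name__ == "__main__":
    print(assistant("Set a timer for ten minutes"))
    print(assistant("Write a detailed itinerary for a week-long trip through Portugal"))
```

The design question is simply where the complexity threshold sits and which model answers on each side of it.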
Overall, this trend could be very significant if it scales up. From the YouTube story above:
YouTube is seeking new deals at a time when AI companies such as OpenAI are striking licensing agreements with media groups to train large language models, the systems that power AI products such as the ChatGPT chatbot. Some of those deals are worth tens of millions of dollars to media companies, insiders say.
This could be a way for the pro-AI side to make progress in the great cultural divide I’ve mentioned before. Dan Cohen reminds us of some of the fraught issues in connecting AI to music.
We should also keep an eye on non-business partnerships: nonprofits, governments.
One AI failure

McDonald’s ended its trial of an IBM-provided AI for drive-through orders. The problem? “Two sources familiar with the technology told CNBC that among its challenges, it had issues interpreting different accents and dialects, which affected order accuracy.”
AI’s power demands

One criticism of generative AI is that it draws a huge amount of electricity in order to function. The training process and the massive amount of cloud computing involved appear to increase our total carbon footprint, right when we should be reducing it.
Now the cultural divide over AI comes for the CO2 question. Bill Gates asks us not to be so concerned, arguing that AI can help us reduce electricity demand and envisioning companies paying a “green premium” for that power. An Ars Technica article dives into just how much electricity generative AI consumes and puts it in context with the far larger and longer-running growth of data centers, which support so many other digital functions, like videoconferencing and web content provision. Generative AI might grow to be roughly the size of PC gaming in terms of electrical demand.
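To see what “putting it in context” might look like, here’s a back-of-the-envelope sketch. Every number in it is a placeholder I’ve chosen for illustration, not a figure from the Ars Technica piece or anywhere else; swap in whatever estimates you trust.

```python
# Back-of-the-envelope comparison: generative AI's share of data center electricity.
# All values are illustrative placeholders, NOT sourced figures.

total_datacenter_twh = 300.0   # hypothetical annual data center electricity use, TWh
generative_ai_twh = 10.0       # hypothetical annual generative AI electricity use, TWh

share = generative_ai_twh / total_datacenter_twh
print(f"Generative AI would be {share:.1%} of data center demand under these assumptions.")

# Even rapid growth changes the picture slowly if the starting base is small:
growth_rate = 0.5              # hypothetical 50% annual growth in AI demand
years = 3
future_ai_twh = generative_ai_twh * (1 + growth_rate) ** years
print(f"After {years} years at {growth_rate:.0%} growth: roughly {future_ai_twh:.0f} TWh.")
```

The takeaway from the article is about relative scale, not these particular digits.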
On AI investment

I found this fascinating chart of AI investment by nation, relative to GDP, with some very interesting results:
Very interesting to see Israel and Singapore stand out: very small countries, yet apparently with significant AI investment under way. (I found this in one of Adam Tooze’s excellent newsletters, but can’t for the life of me find the right one. I can’t source it from Daily Dot either, as they are subscription only.)
Cloud vs web vs AI crawler

Wired magazine criticized the Perplexity AI engine for crawling websites which had refused to be crawled. For decades, websites have been able to add a small file, robots.txt, which tells visitors whether or not they may scrape content. In Wired’s account, Perplexity’s robots ran roughshod over robots.txt.
WIRED observed a machine tied to Perplexity—more specifically, one on an Amazon server and almost certainly operated by Perplexity—doing this on WIRED.com and across other Condé Nast publications.
In response, Amazon’s cloud services division is now investigating.
There are several interesting points here. One is seeing a cloud provider disciplining one of its customers. Another is the example (and I haven’t seen it debunked) of generative AI continuing to snarf down as much content as possible, obstacles be damned.
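For readers who haven’t looked under the hood, robots.txt is simply a text file at a site’s root that asks crawlers to stay out of some or all paths; compliance is entirely voluntary, which is the crux of the Perplexity story. Here’s a minimal sketch of how a polite crawler would check it, using Python’s standard library; the user agent string and URLs are hypothetical examples, not Perplexity’s or Wired’s actual values.

```python
# A polite crawler checks robots.txt before fetching. This sketch uses Python's
# standard-library parser; the user agent and URLs below are hypothetical examples.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

user_agent = "ExampleAIBot"  # hypothetical crawler name
page = "https://example.com/some-article"

if rp.can_fetch(user_agent, page):
    print("robots.txt permits fetching this page.")
else:
    print("robots.txt asks this crawler to stay away; a polite bot stops here.")
```

The point of the Wired piece is that nothing enforces this check; a crawler that skips it gets the content anyway.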
Let me close this scanner with a couple of other economics-related thoughts. First, I’m still not seeing a viable business model for generative AI. The technology is enormously expensive, especially at scale, and there don’t seem to be any major income flows. I don’t mean venture capital investment, but revenue from sales. OpenAI makes some money from subscriptions, but nobody is claiming that’s enough to keep the systems running. I don’t imagine Microsoft or Google are bringing in heaps of new cloud and application subscribers because they added AI to the mix. There is now a scramble to make generative AI pay off. What happens if nobody solves that problem?
Second, here I’m influenced by Carlota Perez’s Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (2002). Perez argues that after an initial rush of funding into a wild new technology, funders then pull back to discipline the chaos. Venture capital gets serious about wanting returns, and businesses learn to adapt. Governmental regulation looms large, adding to companies’ desire to settle into smooth and profitable operations.
This is where it feels like we’re headed with generative AI. We’re moving past the “frenzy” stage Perez describes, beyond the AI toothbrush (seriously). It’s an incredibly fast journey, which makes it hard to track. That doesn’t mean there won’t be hype and goofiness, but it does mean we should watch for firms resetting their expectations and plans.
Meanwhile, what does this mean for higher education? We are in a better position to negotiate with vendors, to a degree, as they are feeling financial pressure. We also have the opportunity to develop open source applications and uses. Academics can look to those smaller nations for innovations. New partnership models are emerging.
We also need to watch what happens to the industry with respect to the business case I mentioned. A massive contraction is possible.
(thanks to Vlad Oligarchsky, Anne Boysen, and George Station)
As a vendor who has delivered for over 40 years, I can say HE is not a big market. They are not really buying AI, unless it is bundled with their VLE. They are already way behind other types of organisation, largely because of mindset but also deeply embedded practices and cultural attitudes: negativity, obsession with credentialism, a focus on lecturers rather than learners, and so on. Nor is internal development of open source products likely, as they are not set up to develop products; that all comes from outside.