Today I’d like to return to the question of existential risk - not to humanity, but to generative AI. Let’s explore this as a futures exercise based on a single prompt:
What happens if a court suspends ChatGPT?
For a quick explanation, generative AI applications rely on big datasets for training purposes. These models and their training datasets are huge - hence the name *large* language models (LLMs). And some of those datasets contain copyrighted material. This is the basis of a number of lawsuits against some AI providers, and the resulting court action might lead us to a breaking point for LLMs.
Let’s assume for the sake of argument that, as I’ve argued previously, a judge in one of these court cases decides that OpenAI, in training ChatGPT, likely committed a vast amount of copyright infringement. The judge orders OpenAI to suspend the bot’s operations until the court can complete its work. Or perhaps that’s the final ruling: OpenAI must pull the plug on ChatGPT. Either way, when users click on https://chat.openai.com/ they see neither a login box nor the familiar interface, but only an embarrassed apology or a defiant note.
(Please remember that I am not a lawyer. I’ve consulted with law schools and believe that everyone should have access to law, but I will gladly defer to actually trained counsel for details stemming from their expertise.)
What might this event mean? How might the world respond?
On a practical technological level, ChatGPT just shuts down. DALL-E might follow suit. Every application relying on ChatGPT’s APIs starts failing. The OpenAI GPT marketplace stalls. Developers are frustrated.
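To make that developer frustration concrete, here is a minimal sketch of how a dependent application might fail and degrade. It assumes the OpenAI v1 Python client; the `summarize` helper and its crude fallback are my own illustrations, not anyone’s production code.

```python
# Minimal sketch: an app that depends on ChatGPT's API, with a fallback
# for the day the service stops answering. Assumes the openai v1.x client;
# the helper and its fallback are illustrative, not production code.
from openai import OpenAI, APIConnectionError, APIStatusError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    """Try ChatGPT first; degrade gracefully if the service is gone."""
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"Summarize: {text}"}],
        )
        return response.choices[0].message.content
    except (APIConnectionError, APIStatusError):
        # In the court-ordered shutdown scenario, every call lands here.
        return text[:200] + "…"  # crude extractive fallback
```

Multiply that `except` branch across every plugin, wrapper, and marketplace app, and the scale of the breakage becomes clear.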
A flurry of lawsuits would likely follow, not least because Americans are litigious. OpenAI would appeal and fight back. Some copyright holders might see this as a blazed path to compensation and, er, follow suit. Lawyers will argue about the nature of the precedent just set. Beyond the realm of American law, other nations may see similar legal actions. Legislators everywhere might view this as an opening for new regulations on LLMs in particular or AI in general.
In terms of public opinion, the suspension of ChatGPT would represent the collapse of hyperbolic hype for some, a decisive rupture in the LLM story. Down it slides along Gartner’s hype curve. More broadly, it might seem like the start of a plunge into another AI winter.
AI winter means a drop in funding, as venture capital flees what it deems to be a failed model. Grants vanish. Research falls away, as investigators don’t want to waste efforts on a doomed path. This period could last for years, if history is any guide.
More on public opinion: I wouldn’t be surprised to see popular applause from all quarters, with lots of people cheering on an AI defeat. Recall that AI polls very negatively, at least in the United States. People are concerned about the technology’s job impacts, about bias, and the range of critiques we’ve already discussed. We could imagine all kinds of people enjoying some schadenfreude: intellectual property holders, legacy media, security experts. Observers will proclaim the death of generative AI. Some will compare generative AI to NFTs or bitcoin, deeming them to be flimflam, scams, tulips.
I wonder what happens to media and pop culture coverage of Sam Altman in particular, ChatGPT’s public face. Will he become another villainous techbro in popular estimation, that rising antagonist, or a tragic figure meriting sympathy? Or would he instead become a doughty fighter, if he campaigns against the ruling?
At the same time others will protest the ruling using various legal and other arguments. AI optimists and accelerationists will critique the judge and call for a renewed artificial intelligence development effort. Some investors will defend their investments. Developers and researchers will probably howl. And the huge number of ChatGPT users will be depressed, some calling for the return of their app.
How would other commercial LLM providers respond, once OpenAI loses? Google has invested enormously and developed extensively. Perhaps it will energetically redefine its AI as something very different from ChatGPT, first rhetorically and in digital presentation, then in architecture. Microsoft will be dragged down to the extent it uses OpenAI’s tools. It might pivot to other generative AI applications in a hurry - expect another rebranding of Copilot, perhaps. Both commercial giants will lawyer up and start lobbying for legal protections. Both may suffer serious financial losses and declines in market value.
Perhaps new AI projects would appear, trying to carve out safer service paths. Some of these could train apps on public domain sources and/or licensed content, avoiding the copyright problem. One program could crawl the Internet Archive, or the IA could do so itself (is this already happening?). Academic and cultural heritage institutions (libraries, archives, museums, galleries) could also mount such efforts. Governments might train AI on their own publications: reports, records, laws, public communications, histories, etc. We could view that as dull (BureaucratBot!), benign (citizens using a policy bot to answer civics questions), or threatening (more powerful apps for authoritarians).
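What would that safer path look like in practice? At minimum, filtering a training corpus down to public domain or permissively licensed material before any training run. Here is a toy sketch; the license labels and metadata fields are invented for illustration, since real archival metadata schemas vary widely.

```python
# Toy sketch of a license-filtered training corpus. The license labels
# and metadata fields are invented for illustration; real archival
# metadata schemas vary widely.
ALLOWED_LICENSES = {"public-domain", "cc0", "cc-by", "government-work"}

def filter_corpus(documents):
    """Yield only the text of documents whose license clears the bar."""
    for doc in documents:
        if doc.get("license", "").lower() in ALLOWED_LICENSES:
            yield doc["text"]

corpus = [
    {"text": "Call me Ishmael...", "license": "public-domain"},
    {"text": "A 2023 news article...", "license": "all-rights-reserved"},
    {"text": "An agency report...", "license": "government-work"},
]

clean = list(filter_corpus(corpus))  # keeps the first and third items
print(len(clean))  # -> 2
```

The hard part, of course, is not the filter but the provenance: knowing, at scale, what the license of each document actually is.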
What might the apparent fall of ChatGPT mean for higher education?
The technological absence will ripple across colleges and universities as faculty, staff, and students can no longer use the app. Projects using the API - how many are there? - will break.
Some academics will jubilate, especially those who dread students using AI to cheat. Critics may see this as a fair ruling, on balance, and recommend more court decisions of this kind. Others will think they’ve dodged a bullet and can move on. Those working most closely with AI - IT staff, educational technologists, computer scientists, some librarians - might find it difficult to win resources and support for that work, including work with other generative AI tools not yet juridically shut down… which could make it harder to maintain or launch open source LLM initiatives. Campus-retained lawyers may recommend institutions back away from LLMs or AI in general; academics might heed this.
Academically adjacent organizations will react in other ways. Textbook and scholarly publishers, for example, will have to deal with losing ChatGPT access if they built it into their workflows. They might back away from other LLMs for fear of follow-up lawsuits. Funders, be they private or public, could react in different ways, either supporting academics in generative AI work to stiffen their resolve amid the sudden collapse, or warning applicants away from the whole field. Local, state, or federal authorities could issue regulations, as noted above, which would impact colleges and universities.
Naturally I asked some AIs about this question. Here are some answers.
ChatGPT:
If a judge were to order the shutdown of ChatGPT, several consequences and considerations would emerge, spanning legal, technological, and societal domains.
1. **Legal Precedents and Implications**: Such a ruling would likely set a significant legal precedent regarding the regulation of artificial intelligence and similar technologies. It could spark debates over free speech, innovation, and the control of digital tools. The specifics of the ruling would be critical in understanding its broader implications. For instance, if the shutdown is due to privacy concerns, it might lead to stricter regulations on how AI handles user data.
2. **Technological Impact**: A shutdown would halt the service provided by ChatGPT, impacting users and developers who rely on it. It could also influence the development of AI technologies, possibly leading to more cautious approaches in the future. Other AI developers might modify their projects to avoid similar legal challenges.
3. **Economic and Business Consequences**: OpenAI, the organization behind ChatGPT, and its stakeholders would face significant impact. This could include financial losses and a shift in business strategy. Additionally, companies and individuals who integrate ChatGPT into their operations would need to find alternatives.
4. **Educational and Research Implications**: ChatGPT is used in various educational and research settings. A shutdown could impede certain types of research and learning experiences, particularly those that leverage its language processing capabilities.
5. **Public Reaction and Discussion**: Such a decision would likely spark public debate about AI ethics, the role of judiciary in technology, and the balance between innovation and regulation. This could influence public opinion and potentially lead to more stringent public policy regarding AI.
6. **International Considerations**: The impact of a U.S. judge's ruling might be limited geographically, but it could influence international perspectives and policies on AI. Different countries might react differently, leading to a patchwork of AI regulation globally.
7. **Future of AI Development**: This event could lead to a more cautious approach in AI development, with a greater focus on ethical considerations, transparency, and compliance with legal standards.
In summary, the shutdown of ChatGPT by a judicial order would be a complex event with far-reaching implications across various sectors. It would likely become a landmark case in the evolving relationship between law, technology, and society.
Perplexity:
If a judge shuts down ChatGPT, it would impact the legal profession and the use of AI in legal services. The use of ChatGPT in the legal field has raised concerns about its potential to produce inaccurate and unreliable information, replace human lawyers, and lead to job losses. Several judges have taken steps to ensure that AI-generated content is declared and checked by humans to prevent the submission of fictitious or misleading information in court. In specific cases, lawyers have been fined for submitting fake cases created by ChatGPT, leading to sanctions and ethical considerations regarding the use of AI in the legal profession[1][2][3][4][5]. The implications of shutting down ChatGPT would likely involve a reevaluation of the use of AI in legal services and the development of regulations to address the challenges and ethical considerations associated with its use.
Citations:
[1] https://www.seyfarth.com/news-insights/update-on-the-chatgpt-case-counsel-who-submitted-fake-cases-are-sanctioned.html
[2] https://techcrunch.com/2023/05/30/no-chatgpt-in-my-court-judge-orders-all-ai-generated-content-must-be-declared-and-checked/
[3] https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/
[4] https://www.cbsnews.com/news/chatgpt-judge-fines-lawyers-who-used-ai/
[5] https://www.courthousenews.com/sanctions-ordered-for-lawyers-who-relied-on-chatgpt-artificial-intelligence-to-prepare-court-brief/
Bard:
The potential consequences of a judge shutting down ChatGPT depend heavily on the context and reasoning behind the ruling. Here are some possibilities, ranging from minimal impact to significant disruptions:
**Minimal impact:**

- **Temporary shutdown:** If the shutdown is temporary, it would likely cause inconvenience for users and potentially impact research or projects relying on ChatGPT. However, other similar language models are available, and ChatGPT could resume operation after addressing the judge's concerns.
- **Limited functionality:** The judge might restrict certain features or outputs of ChatGPT, addressing specific concerns about bias, misinformation, or harmful content. This could limit its usefulness for some tasks but wouldn't necessarily cause a complete shutdown.

**Significant disruptions:**

- **Permanent shutdown:** If the judge deems ChatGPT too risky or harmful and orders a permanent shutdown, it would be a major blow to OpenAI and the field of large language models. Research and development in this area could be set back, and projects relying on ChatGPT would need to find new solutions.
- **Wider precedent:** Depending on the legal arguments and reasoning behind the shutdown, it could set a precedent for regulating other AI systems or large language models. This could lead to stricter regulations and hinder the development of this technology.
- **Public debate:** The shutdown could spark a public debate about the role of AI in society and the ethical considerations surrounding large language models. This could raise awareness about potential risks and benefits, leading to more informed discussions about AI development and use.
It's important to remember that ChatGPT is just one specific language model, and its shutdown wouldn't stop the development of similar AI technologies. However, it could still have significant consequences for OpenAI, the field of AI research, and potentially even wider societal discussions about the role of AI in our lives.
Ultimately, the specific impacts of a judge shutting down ChatGPT depend on the exact circumstances and the broader context surrounding the decision.
I hope this information is helpful! Let me know if you have any other questions.
I am mildly entertained by the tone I read into each: defensiveness from ChatGPT, something like cattiness from the others.
Let me pause and turn the idea over to you all. How do you think such a potential event might play out in the world - and in your own work and life?
ChatGPT and other LLMs are the inevitable consequence of the commodification of ideas. The fight here is between two corporate models over who owns the ideas of our society. This is a perversion of the original idea enshrined in the US Constitution: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
AIs could be among the most powerful tools ever created to "promote the Progress of Science and useful Arts." Fights over copyright kind of miss the point. The point is whether OpenAI has the right to commodify other people's ideas. In doing this, it is following in the venerable tradition of Disney and the content industry that emerged in the 20th century.
Unfortunately, this kind of decision would most likely be incredibly destructive. ChatGPT's construction of new works out of existing data is merely an automation of what artists and scholars have been doing for centuries. Such a decision would also likely break truly open AI models, because they wouldn't be able to use "proprietary" ideas either.
Sam Altman and other AI CEOs demanded AI regulation last year. This was a disingenuous ploy to raise the barriers to entry into the field. I think we should take them up on it and regulate to decommodify the models being used. There is a simple expedient to achieve this goal: demand that all models adopt a minimum level of transparency. I don't think that's the kind of regulation they wanted to see.
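What might that minimum level of transparency look like? One possibility is a machine-readable training-data disclosure shipped with every model, along the lines of this sketch. The schema and URL are invented for illustration; no such standard currently binds AI vendors.

```python
# Sketch of a machine-readable training-data disclosure. The schema is
# invented for illustration; no such standard currently binds AI vendors.
import json

disclosure = {
    "model": "example-llm-7b",
    "training_data_sources": [
        {"name": "Project Gutenberg crawl", "license": "public-domain"},
        {"name": "Licensed news archive", "license": "commercial-license"},
        {"name": "Web crawl, mixed provenance", "license": "unknown"},
    ],
    "copyrighted_material_present": True,
    "opt_out_mechanism": "https://example.com/opt-out",  # hypothetical URL
}

print(json.dumps(disclosure, indent=2))
```

Even a disclosure this thin would let the public, and the courts, see what a model was actually built on.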
However, transparency is unlikely to be the regulatory course taken, because the only group that would benefit from such a regulatory model is the public. Content providers simply want the AI companies to pay "rent" on the ideas they are using (this is already happening: OpenAI has signed agreements with several content providers to train its models on their material).
The problem with this approach is that it raises the barriers to entry and virtually ensures that monolithic corporate models will be the only ones with access to a corpus significant enough to train powerful models. This also suits the incumbent AI companies, because they have the resources to work out these kinds of deals.
The other thing this kind of framework would do is give an advantage to players who, for one reason or another, sit outside the US legal umbrella and can hack and steal information to create models even more powerful than the artificially siloed models a pay-to-play system would establish in the US.
The losers here would be the general public and the educational communities that could emerge around AI systems. If the cost of entry rises due to paying "idea fees," large chunks of the American populace will be excluded (as is already happening with the $20 monthly subscriptions). We already suffer under idea fees that hide behind paywalls to scholarly content (this doesn't benefit the authors financially either - still waiting on my check, Elsevier).
The PC revolution was ultimately a democratizing force in our society. The Internet/Web has been a bit more uneven, but more and more people have access to it as costs have come down and its necessity has become apparent (we still have work to do there).
These technologies have transformed how we view ourselves. They have opened up vast new possibilities for lifelong learning, employment and entrepreneurship. Holding onto the Disney version of copyright will bankrupt these kinds of systems.
Furthermore, I think that any effort to do this will be futile in the end as economies based on false scarcity inevitably lead to black markets. AI itself will create a disruptive force in these warring economies.
It would be better, however, if we used the current AI moment as an opportunity to rethink how we approach the commodification of ideas. We need to defend open models and push for the transparency in our systems that AI could open up. I'm trying to be optimistic that this will happen without too much chaos in the interim.
Call me cynical if you wish, but I have to say that, as with every other illegal activity on the internet, criminal enterprises will carry this forward anyway. Any court of law needs to take that into consideration when making a decision.
I understand that the New York Times is complaining - this is the newspaper that covered up communism's atrocities and praised a certain German dictator.