Discussion about this post

Tom Haymes

OpenAI and other LLM makers are the inevitable consequence of the commodification of ideas. The fight here is between two corporate models contending over who owns the ideas of our society. This is a perversion of the original purpose enshrined in the US Constitution: "To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."

AI can be one of the most powerful tools ever created to "promote the Progress of Science and useful Arts." Fights over copyright rather miss the point. The real question is whether OpenAI has the right to commodify other people's ideas. In doing so, it is following in the venerable tradition of Disney and the content industry that emerged in the 20th century.

Unfortunately, a decision of this kind would most likely be incredibly destructive. ChatGPT's assembly of new works out of existing data is merely an automation of what artists and scholars have been doing for centuries. Such a decision would also likely break truly open AI models, because they too would be barred from using "proprietary" ideas.

Sam Altman and other AI CEOs demanded AI regulation last year. This was a disingenuous ploy to raise the barriers to entry into the field. I think we should take them up on it and regulate to decommodify the models being used. There is a simple expedient for achieving this goal: demand that all models adopt a minimum level of transparency. I don't think that's the kind of regulation they wanted to see.

However, transparency is unlikely to be the regulatory course taken, because the only group that would benefit from such a regulatory model is the public. Content providers simply want the AI companies to pay "rent" on the ideas they use (this is already happening: OpenAI has signed training agreements with several content providers).

The problem with this approach is that it raises the barriers to entry and virtually ensures that only monolithic corporations will have access to a corpus large enough to train powerful models. This also suits the incumbent AI companies, because they have the resources to work out these kinds of deals.

This kind of framework would also advantage players who, for one reason or another, sit outside the US legal umbrella and can hack and steal information to build models even more powerful than the artificially siloed ones a pay-to-play system would establish in the US.

The losers here would be the general public and the educational communities that could emerge around AI systems. If the cost of entry rises because of "idea fees," large chunks of the American populace will be excluded (as is already happening with the $20 monthly subscriptions). We already suffer under idea fees in the form of paywalls on scholarly content, which don't benefit the authors financially either - still waiting on my check, Elsevier.

The PC revolution was ultimately a democratizing force in our society. The Internet and the Web have been a bit more uneven, but more and more people have gained access as costs have come down and their necessity has become apparent (we still have work to do there).

These technologies have transformed how we view ourselves. They have opened up vast new possibilities for lifelong learning, employment and entrepreneurship. Holding onto the Disney version of copyright will bankrupt these kinds of systems.

Furthermore, I think any effort to do this will prove futile in the end, as economies based on false scarcity inevitably lead to black markets. AI itself will be a disruptive force in these warring economies.

It would be better, however, if we used the current AI moment as an opportunity to rethink how we approach the commodification of ideas. We need to defend open models and push for the transparency in our systems that AI could open up. I'm trying to be optimistic that will happen without too much chaos in the interim.

Steve Boronski

Call me cynical if you wish, but like every other illegal activity on the internet, criminal enterprises will carry it forward anyway. Any court of law needs to take that into consideration when making a decision.

I understand that the New York Times is complaining; this is the newspaper that covered up communism's atrocities and praised a certain German dictator.
