Selling Your House For Firewood
Media companies are cutting deals with OpenAI that they will regret.
I am not a business guru. Nor will I ever be a business guru, due to rampant discrimination against socialists in the business world. But if I were a business guru, one of my core guiding principles would be "do not plant the seeds for my own industry's demise." Sadly, it feels like this principle is being lost in my own industry today.
News Corp announced this week that it has signed a deal with OpenAI. The deal, which could be worth up to $250 million over five years, will allow OpenAI to train its models on content from the Wall Street Journal and other News Corp publications. Other media companies like Axel Springer and the Financial Times have signed similar deals. The opposite approach has been taken by the New York Times, which has filed a lawsuit against OpenAI and Microsoft saying they illegally went ahead and trained their models on NYT content, and implying that billions of dollars in damages should therefore be due.
One thing that should be said right at the start of any discussion about this is: Yeah, OpenAI did that. We know they did it! They fed everything they could find on the internet into their models to produce their product. This is very much in line with the approach of many successful tech companies of the past, which is “just do what we want and pretend that laws don’t really apply to our technology because it’s new and then get so big and rich that we’re able to sort of buy our way out of it on the back end.” This was, for example, the approach that Uber took when they said “we’ll pretend we’re not a taxi company so we’ll just open a taxi company in every city but not follow any of the laws that regulate taxi companies,” and what Airbnb did when they said “we’re not actually in the hotel business so we will just open one zillion unlicensed hotel rooms everywhere and pay no attention to any meddlesome regulations.” In both cases, the approach worked—the companies established themselves and damaged their unfortunate competitors who were subject to existing regulations and are now able to fight about how to write new regulations for themselves, at their leisure.
The conceit that any new technology renders all preexisting laws and regulations inapplicable is a profitable one. Hundreds of billions of dollars can be made with the “ask forgiveness, not permission” philosophy. That is what OpenAI is doing now. “Copyright laws? Well you see, the people who wrote those laws never really said they applied to AI models that did not exist when they were writing the laws, so we figured, hey, it should be fine to just use the whole internet to train our models. Right? Did we make a gaffe? Well, geez, dang, let’s sit down and work out a nice little payment to you to make up for it.” The amounts of money that media companies are getting in these deals sound nice up front, but they are peanuts for OpenAI, which is probably worth more than $100 billion already. Assuming they are not stupid enough to just think that this is lucky free money, the media companies themselves are already making the calculation that OpenAI is so big and established that fighting its fundamental business model is hopeless. This was the most telling part of the Wall Street Journal’s story about the News Corp deal:
Some media executives regretted not driving a harder bargain years ago [with the tech platforms that came to monopolize online advertising] and are looking to take a tougher stance now on AI—but missing out on potential revenue from licensing is also a big risk.
“It’s in my interest to find agreements with everyone,” Le Monde CEO Louis Dreyfus said in an interview, referring to tech companies. “Without an agreement, they will use our content in a more or less rigorous and more or less clandestine manner without any benefit for us.”
That's a man who has given up! That's a man who sees this tech company come along and steal valuable journalism in order to build a machine that will put his staff out of work and says… "eh, can't fight 'em. Let's get a few bucks at least." The New York Times' lawsuit, at least, is an attempt to draw a line in the sand, to police the borders of journalism in a way that could force tech to exist alongside it, rather than to swallow it up. I don't know if the lawsuit will succeed or not. But the alternate path of cutting licensing deals to train the Automation Death Star to more precisely replicate your work in the future is the equivalent of feeling pleased with yourself for making five bucks selling your house keys to some burglars.
There is, of course, one important wrinkle to the odd decision-making process going on inside of these media companies. The executives making these deals with OpenAI likely do not imagine that AI is the sort of thing that will put them out of work. They imagine instead that a properly trained AI might put their staff out of work. Which, hell, could be a great thing from the company's perspective. Lower labor costs! More profits for the executive bonus pool! If AI is just a straightforward version of automation that applies to creative and white collar jobs rather than to blue collar manufacturing jobs, then it is easy to see why company executives would feel cavalier about opening their doors to it. (Yesterday, the new CEO of the Washington Post company told staffers that the paper has to have "AI everywhere in our newsroom." Remember this when the next layoffs hit.)
In the media industry, though, it’s not going to be that simple. First, we should be busting our asses right now to lay down this principle of journalism ethics: Thou shalt not publish any AI-written journalism. Ever! AI, no matter how well trained, no matter how much it can simulate the tone of a newspaper, and no matter how many editors you have look over what it spits out, lacks one vital thing that must be present in any ethical journalism: Accountability. It cannot tell you how it made the decisions to write what it wrote, and the editor checking its work can’t tell you either. Therefore AI must be limited to being a tool for journalists to use, rather than a technology that replaces journalists. I assure you that that is not a limit that the executives of media companies imagine exists. But if they don’t respect it, they are selling out their own future by auctioning off the credibility that is their real product.
In fact, the more widespread AI chum becomes online the more valuable human journalistic credibility will become. The media companies that look at the rise of AI as something to mostly be defended against are the ones that are actually protecting their own long term value. The ones that rush to position themselves as the most AI-friendly, and happily sell their archives to train the next generation of AI, and scheme to lay off as many editorial employees as they can while they automate their work, will come to realize over time that they are fading ever deeper into the mist of indistinguishability. Ubiquitous AI, which is coming, means ubiquitous creepy simulacra of the human voice. What will stand out will be humanity, and the accountability that comes with it.
I'm sure that the taxi industry wishes that it had hastened to regulate Uber when it first showed up. The hotel industry wishes it had pressed legislators to rein in Airbnb before it spread. Newspapers wish that they had not allowed Facebook and Google to steal their advertising business out from underneath them. All these things can be learning opportunities for the rise of AI. The workers who stand to have their own jobs swept away by a shitty algorithmic clone understand all of this already. It's the executives I'm worried about. Which just drives home the point that the best use of all for AI would be to automate the CEOs.
More
Related: My piece in CJR on AI and journalism ethics; Public Funding of Journalism Is The Only Way; Incuriosity, Inc.
I will be speaking at a fundraising dinner for the Louisville DSA on Saturday, June 22. If you’re in Louisville, put it on the calendar and come through. This week a DSA candidate in Louisville won an election for Metro Council. It’s working!
Thank you to all of you who read How Things Work. As I’ve said before, the “business theory” of this publication is: I will keep it free for everyone to read, and this will work as long as those of you who can afford to pay for it become paid subscribers. It’s not very expensive. Very affordable really. If you are not yet a paying subscriber, please ask yourself: Do you enjoy reading this publication? Would you like it to continue to exist? Do you make over, like, $75k a year? If you answered “yes” to these questions, but you are not yet a paying subscriber, please take a few seconds right now and become one. Let us all prove to the world that this funding method works.