Why The Atlantic signed a deal with OpenAI

Today, I’m talking to Nicholas Thompson, the CEO of The Atlantic, one of the oldest magazines in the United States — like really old. It was founded in 1857 and is now owned by Laurene Powell Jobs, whose last name I am certain that Decoder listeners will recognize.

I was really excited to talk to Nick — like so many media CEOs, he just signed a deal allowing OpenAI to use The Atlantic’s vast archives as training data, but he also has a rich background in tech. Before he was the CEO of The Atlantic, Nick was the editor-in-chief of Wired, where he set his sights on AI reporting well before anyone else, including me. So he’s been paying attention to this for a long time.

Now, I feel like I should disclose right away that Vox Media, The Verge’s parent company where I work, also has a deal with OpenAI, which was announced on the same day as The Atlantic’s deal.

I actually don’t know very much about the terms of our deal, since I’m on the editorial side of the house and there’s a strict firewall between the business side and the editorial side. I suspect all of these deals are pretty similar, but I actually asked Nick about that. And there’s a pretty funny reason that he doesn’t know either; you’ll hear us talk about it.

Of course, I also asked Nick why he was willing to sign a deal with OpenAI in the first place, and why now when there’s so much general unhappiness about AI companies using other people’s work without permission, and specific unhappiness with OpenAI. You’ll hear Nick explain that what he really wanted to get back was a sense of control: Control over how much data was being used, how results were being displayed, and, of course, over how much money The Atlantic was being paid.

You’ll hear Nick say this all sounds like OpenAI is gearing up to build a next-generation search product, which of course led us to talking about Google and whether getting Google to pay for AI search is a realistic goal.

I was also really interested in asking Nick about the general sense that the AI companies are getting vastly more than they're giving with these sorts of deals: yes, they're paying some money, but I've heard from so many of you that the money might not be the point. That there's something else going on here, that maybe allowing creativity to get commodified this way will come with a price tag so big that money can never pay it back.

If there is anyone who could get into it with me on that question, it’s Nick. This one went long, and it’s a good one. Okay, Nick Thompson, CEO of The Atlantic. Here we go.

This transcript has been lightly edited for length and clarity.

Nick Thompson, you are the CEO of The Atlantic. You are also, notably for this conversation, the former editor-in-chief of Wired. Welcome to Decoder.

Thank you so much, Nilay. I’m delighted to be here.

I am really excited to talk to you. I bring up the Wired thing because I want to talk to you about AI and the deals media companies like The Atlantic, and notably Vox Media, the company that I work for, are making with companies like OpenAI. It feels like you have to understand the media business, the tech business, and where the tech business might be going in relation to the media. Let's start at the very beginning: why make a deal like this with OpenAI? What is your deal with OpenAI?

We can go through it the complex way or the simple way. The simple way is that we believe it provides revenue, but more importantly, it provides a potential traffic source, an avenue for a product partnership that could be very beneficial, and a way for us to help shape the future of AI.

AI is coming, and it is coming quickly. We want to be part of whatever transition happens. The transition might be bad, the transition might be good, but we believe the odds of it being good for journalism and the kind of work we do at The Atlantic are higher if we participate in it. So we took that approach.

We started talking to all the AI companies, all of the large language model companies. We had parameters that we would accept for a deal, parameters we would not accept for a deal, and we reached a deal with OpenAI. So that’s the basic framework.

What were the parameters?

The deal really has three parts, four parts, depending on how you look at it. Part one is for a limited period of time, two years in our case, they’re allowed to train on our data. So they can read Atlantic stories and they can incorporate that into their base large language model. We have some controls over the kind of outputs they’re allowed to give to people, but they’re allowed to train on our data for two years.

The second part of the deal is the product partnership. They give us credits, and we're building tools on the business side with the engineering team that use OpenAI. So we don't have to rely on Llama; we're just using OpenAI.

Beyond the credits, we are working with them. At some point there may be engineering support, there may not be; who knows exactly how that is going to work, but that is a potentially valuable part. And we are launching a labs site soon where we'll have a whole bunch of experimental tools to help readers.

The first one we created was a Chrome extension that, as people are reading elsewhere on the web, will show them related stories The Atlantic has written, just stuff like that. So we'll have a labs experimental site. That's the second part of the deal.

The third part is this very interesting search element. Right now, OpenAI has a browse mode, and they can link out to Atlantic stories. They have said that they're going to build a search product. They have not launched it, but they have said they would build it. We have allowed them to include The Atlantic in their search product.

Our view is that if this becomes an important way that people navigate the internet, that it will be better for us to be in it than to not be in it, and also to help shape it than not help shape it. So that’s the third part.

And then the fourth part is that there is a line back and forth. When we see something, like when we noticed something interesting in browse mode about the URLs and the way they link out to media websites, we go back and forth and those things get fixed. So our sense is that we are helping the product evolve in a way that is good for serious journalism and good for The Atlantic.

So those are the key components of the deal. Underlying it is a view that journalists and media companies should be paid for their work. Obviously, the large language model companies scraped without permission and did not pay us. We think we should be paid for that.

There are a whole bunch of ways you can get paid for that, you can sue, you can do deals, you can shake your fist. You figure out whatever the best approach is to get paid, but there should be a fair exchange in value. So that is a key part of it.

But we also believe that the world will be a better place for serious journalism if content like what's created at The Atlantic and at The Verge is part of these models. If the search results return Verge stories, that is better for the readers and better for the world than if they do not, right? There are all kinds of trade-offs, but that is another element of it.

There’s a lot there. I want to take one piece of it and just focus on it for one second. You mentioned revenue. How much money is it over two years?

Some of the terms of the deal are nondisclosure agreements. Obviously I can’t disclose that particular term, but it is a fair exchange in value.

Do you think it’s material or meaningful to The Atlantic’s revenue as a whole?

So it's short-term revenue. Is it material in 2024, is it material in 2025, the two years of the deal? Of course. Would you want to extrapolate out to 2026? Of course not.

One of the things that we all know from deals with tech companies is they care about their interests, not your interests. They do deals that end, you don’t expect it to continue forever.

I feel like the industry learned that lesson in the hardest possible way, the rug pull of Facebook’s various news initiatives or Google’s various news initiatives and that money going away. Basically everyone depended on those companies and then that dependency was revealed to be in error. Do you feel that? Was that skepticism present when you were talking to OpenAI?

Yes and no. So I think there was a different mistake. My view, my philosophy, and this is not a perfect metaphor, is that basically the editorial work sits upstream and then everything else is downstream. That’s the way you run a business.

So you decide what stories you’re going to run, the editors choose them, they write them to the best way they can, and then you fight like hell to get as much traffic as you can from Google, from Facebook, on Instagram, TikTok, whatever you’re doing to get them on to read in the right way. You do all those things, but you do those things after you’ve written exactly the story you want.

And where the companies made mistakes is that they moved the Google and the Facebook stuff upstream, and so they signed these deals. And they didn’t just expect that the revenue would continue forever, which is mistake one. But the much more serious mistake is if you start to assign stories, or edit stories, or change even a word in the stories because you want to have it go viral on Facebook, then you’ve started to sacrifice the thing that you do that matters, right?

I've spent most of my life working at just three places. I worked at Wired, then it was The New Yorker, then back to Wired, and now I'm at The Atlantic. All of those places have kind of different dynamics. At The New Yorker, what you're doing is fighting really hard to make sure everyone pays attention, fighting for traffic on Facebook and Google, but there is no risk that the business will be moved to the wrong place in the river. At other magazines, and certainly across a lot of media, you saw exactly that happen.

The crucial mistake is making the business deal upstream of the editorial. Mistake number one is assuming that these companies will partner with you forever, and that if they say, "We're going to give you X money this year," you'll still have that X money in three years, after the contract ends. That is a mistake.

But the much more important mistake is if you start to change the sacred thing you do, which is the creation of stories, for the platforms.

So now, back to the AI deal. Is there any way in which we will change the way we do our stories because of this deal? Absolutely not. This will have no effect. We will do the exact same stories in 2024 and 2025 that we would have done if we didn't have this deal.

One of the big criticisms here is, okay, you sold this stuff for two years, they’re going to train their model, it’s going to get better. Then the deal will end. They won’t pay you again, but they will have already trained the model. And that value will remain forever and then they will just continue doing whatever they want to do.

There are about 20 different terms that are important when you're negotiating a deal like this, and that is one of the important ones. It has been publicly stated, so I can say this: they are destroying our data. They will use our data to train any model that they build in the next two years, the two years after we signed the deal.

They train each new model on entirely new data, so they will have our data for the next two years, but when it gets to GPT-6 they won't, unless they have another deal. That clause is important both for the reason you said and because it gives us more leverage when there's another moment of negotiation.

It feels like OpenAI is the challenger. They obviously are the upstart, they’re chaotic in the ways that startups can be chaotic, in a fun way and also in a compromised way.

The real target here, it feels like, is Google, which has had a very extractive relationship with the media for a long time. It is now keeping more of that traffic for itself, building AI search products, delivering AI results, and paying no one. Do you think a deal like this helps you get leverage against Google?

I think so. Google has a different situation, where they have so much more leverage on us, because you can't block Google. I mean, there are ways you can partially block Google, and you can block this Googlebot but not that Googlebot, but they have a lot more leverage on us than OpenAI does, so the negotiations are different.
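The distinction he's gesturing at here is concrete: the major companies run multiple crawlers under separate user-agent tokens, and publishers control each one through robots.txt. A minimal sketch of how a publisher might allow search indexing while opting out of AI-training crawlers (Googlebot, Google-Extended, and GPTBot are the publicly documented token names; verify against each company's current crawler documentation before relying on them):

```text
# robots.txt: allow search crawling, opt out of AI-training crawlers.

# Google Search indexing: an empty Disallow value permits everything.
User-agent: Googlebot
Disallow:

# Google's separate token controlling use of content for AI training.
User-agent: Google-Extended
Disallow: /

# OpenAI's training crawler.
User-agent: GPTBot
Disallow: /
```

Under the Robots Exclusion Protocol, an empty Disallow value permits crawling, so search indexing continues while the training tokens are blocked. Compliance is voluntary on the crawler's side, which is part of why these negotiated deals matter at all.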

I also would imagine that they are waiting. There are a lot of things happening with OpenAI, including the New York Times lawsuit, and I think they're waiting to see how that shakes out. I haven't talked to Google directly about this, but if they pay for content, do they have to pay for all the links? And do they have to pay back for 25 years' worth of it?

So I don’t know what their calculations are, but I think they’re watching what’s happening. And my hope will be that there’s a fair value exchange with Google as they build AI search.

That part where you said OpenAI has already taken it: they've already scraped what they refer to as publicly available information, which, according to the reports we've heard, might include everything up to YouTube. Do you feel like you're taking the payment now as recompense for what they've already taken? Or is this for the future?

That's a hard question to answer. This is not like you committed a sin and you're paying us for the sin; we don't view it that way. We view it as: you created value. I was trying to do a calculation the other day. I was like, "How much value did high-quality journalistic content create for OpenAI?" You can actually do a rough back-of-the-envelope calculation and see, based on it, what they owe the journalism industry, or what the journalism industry contributed.

And of what the journalism industry contributed, you can think about what percent should go to us and what percent they should keep, right? That's sort of one way you come up with a number. I don't view it as paying for a sin. I view it as, "Okay, they've built this thing, it has this value, we're part of it, we'd like to be paid for it."

That calculation, when you went to OpenAI with it, did it match what they wanted to pay you? Or were you higher or lower?

That particular calculation has so much variation in it, depending on how much you weigh each of the factors, but it's roughly where we ended up.

The reason I ask it that way is the notion that this is a pre-settlement for a lawsuit that you might’ve filed the way that the New York Times filed a lawsuit, or you’re setting a price floor for a further negotiation with Google, really changes the way you think about the deal itself, right?

So if you’re saying, “You already took it. Just pay us to catch us up, and then in two years, we’ll start over from scratch,” that changes versus, “You’re building GPT-5 and a search product. We want to be on the ground floor as the challenger to Google.” You might accept a discount in that case because you think the upside is higher. What’s the balance there?

We want to maximize several things, right? We want to maximize the amount of money that comes to serious journalism companies. We want to shape the industry in the best possible direction, based on the values we think are important. We want to bring in as many readers as we possibly can. And as we think through the deal, we're weighing all of those things.

Now, the question of how you maximize money for a publication like The Atlantic is interesting, because you do have an option. You can take the New York Times route or the Alden Global Capital route, and you can sue. We looked at that calculation in the case of OpenAI and chose not to sue. That doesn't mean we're not going to sue any other large language model company out there.

You weigh what they're offering on all those fronts, all the benefits: the product partnerships, search, et cetera. You weigh those against what it would cost to sue and how much you would get from it, and then you make a choice.

It’s been reported that The Times is a million dollars deep into its legal fees against OpenAI. That’s-

Suggesting they expect to get more than $1 million for the content.

They assume they'll get more than $1 million. The Atlantic is owned by a billionaire, Laurene Powell Jobs. Would she have fronted $1 million in legal fees, or is that off the table for you?

That’s a complicated question. I mean, the answer of course, yes, right? If we made an argument to her that this is what is best for the future of serious journalism, then she would certainly have supported it.

The reason I ask that question that way is, there’s a lot of risk there, and when you have a rich owner, you can accept maybe more risk than if you are a publicly traded company or you have a bunch of VC money like Vox does. But the risk there is almost impossible to ascertain because the copyright law argument is a total coin flip at this moment in time.

Do you think it’s a coin flip? Or do you think it’s a 60/40, 40/60, 70/30, 30/70?

I think it’s a pure coin flip, actually.

You think it’s 50/50? Former copyright lawyer Patel here.

And that is a pure lawyer answer. I think you can run through the argument, and on a good day, a judge who has just used DALL-E to make a storybook for their grandchild is on your side; on a bad day, they've just seen the two startups that ripped off Johnny B. Goode, with the RIAA suing them, and you lose. I think it is as emotional a decision as almost anything right now.

But do you actually think The Times is going to reach an outcome, or do you think they’re going to settle it? Partly you settle based on where you think the case is going, right? And you do the arguments and you’re like, “Oh my God. It’s now 70/30, so we should settle on different terms.”

Right. I think there's that, and we haven't gotten through any of that. We really haven't seen anything substantive from OpenAI about how they've trained most of these models. It's really under lock and key: what they've trained on, what their approach to training was, what their approach to copyright law in training was. So sure, maybe as time goes on, that will change.

But just running straight through the argument: you ingest a bunch of data, you train a model on it, which means you set some weights and throw the data out, and then you can do this generation. Who knows? If The Times wins, for example, and your two years are up, and it turns out it wasn't fair use to train these models, do you think you'll be able to get more money? Are you just waiting out the clock on these lawsuits?

Oh. If The Times wins, we will get more money from everybody. Every journalistic organization will get much more money from everybody, right?

If The Times loses-

We’ll all get much less.

I’m just asking, how are you factoring that risk?

Basically, you have conversations with your lawyers, and I talked to lots of copyright lawyers to decide. If I thought The Times had a 99% chance of winning, I would have a very different perspective going into these negotiations. If I thought The Times had a 1% chance of winning, a different perspective, right? So you make your decisions based on that.

You also weigh other things, right? Will text be important to training large language models two years in the future, or will it all be multimodal data? Will synthetic data be so good? Right? I’ve had people making large language models basically say, “We don’t need you because we can do it all through synthetic data in the future.” And maybe the synthetic data is derivative of the organic data, but you have to weigh what will your data be worth tomorrow?

And therefore, are you getting a better deal now, or will you get a better deal tomorrow? Do you think your data is going to be worth more tomorrow because text will still be valuable? If you think that the organic, human-certified data we create at The Atlantic, and have been creating forever, is going to be more and more valuable, and you think The Times is going to win, then you will be more cautious. You would demand more in the deals. I'm not saying you wouldn't do any deals, but you'd just have a different framework.

Do you think the decision to take the deal now is rooted in, "Well, we can get some revenue now, and hopefully all of these copyright lawsuits…"? Because there are a lot of them. The industry really just has to win one to get to where you're saying, right? The record labels have to win, or The Times has to win, or Sarah Silverman has to win, and then the dominoes start falling in your favor.

But here’s one more factor which I think is interesting. I believe that us doing this deal and the Wall Street Journal doing their deal helps The Times because it shows that there is a market for this stuff.

There’s a criticism like, “Why is there not this collective action?” And the reasons why there isn’t collective action are hard, including antitrust law, which means that I can’t talk to Bankoff and negotiate with him-

Jim Bankoff is the CEO of Vox Media.

Right. So Jim and I can't talk and negotiate together to get better terms for both of us. There's another collective action problem: if you join a group, a consortium, the money presumably is spread based on word contribution, but some people, like The Times, presumably think their brand value and their words are worth more on a per-word basis. At the top of the food chain, they have an incentive not to join a consortium. So you have a whole bunch of reasons why you can't do collective bargaining together as an industry to get better terms, which would probably be better overall for media.

While that is true, one of the ways we can help the industry is by making deals and setting a market. I believe that us doing a deal with OpenAI makes it easier for us to make deals with the other large language model companies, if those come about; it makes it easier for other journalistic companies to make deals with OpenAI and others; and it makes it more likely that The Times wins its lawsuit.

The fourth factor in the fair use analysis that a court would do is the effect of the new use on the market for the old work. And you’re saying, well, you have to have a market. You have to set some prices for this kind of use.

And we are setting the market.

And you think that that over time will strategically help The Times?

The Times case is going to depend on a thousand things that are more important, but I do think that, as a general principle, setting a market and getting a fair exchange of value is good precedent for our industry.

There’s another layer of implications to taking this kind of deal, and it comes from the people who are making all of the content, who are making the work, who are writing the stories and making all the podcasts. And the thing that really strikes me about it is that The Atlantic’s union is mad. The Vox Media Union, which the Verge team that I manage is in, is mad. The union for New York Magazine, another Vox Media imprint, is mad. They’ve all written letters and circulated statements saying they’re outraged about this, and I’ve been thinking a lot about that outrage and what it means.

No one seems mad when a media organization licenses its content to Apple News or we publish on YouTube, even if the terms from YouTube or any of these other platforms are worse or feel even more exploitative. I've been trying to pull this apart, and what I've kind of landed on is that the copyright part of this is just an economic argument. You took our stuff, you didn't pay for it, now you've got to pay for it. You want to use it in some new way? We'll come to an agreement on some parameters, and you'll pay for it.

And the money on the economic side does not cure the moral problem people see, which is partially a labor issue (this technology might displace all of us on some timeline) and partially just, "Hey, you took this stuff." And now the CTO, Mira Murati, is running around saying maybe some creative jobs shouldn't exist, right? There's a blitheness to this industry, particularly from OpenAI.

And that disconnect between the economic problem that copyright law might help you solve or The Times case might help you extract more money from, and the moral dilemma, seems like it’s wider than ever.

Oh, I totally agree. I wrote a book on the history of the Cold War that was published in 2009, and when I learned it was in the training set of Llama, I had this emotional reaction: "Wait. So the book was pirated?" And not only that, it was chopped up into the wrong order. It was like this violation, right?

And so I think there’s at least two things that are super important here. There’s one, that feeling, like, “Wait a second, they just took this. They didn’t pay for it.” And then secondly, there’s this fear, which is AI could do terrible things to our industry. Absolutely. So you have those two very emotional factors coming together, and this is a deal with an AI company.

So my view, or my role as CEO, is to try to put that aside and say, "What I'm trying to optimize for is the future health of The Atlantic, the future economics of The Atlantic, the future of this industry. I'm weighing all these different factors together, and I think the deal, net-net, is very good for us in all these ways."

AI is this rainstorm, or this hurricane, and it's coming toward our industry, right? It's tempting to just go out and say, "Oh my God, there's a hurricane coming, and I'm angry about that." But what you really want to do, if it's a rainstorm, is put on a raincoat and put up an umbrella. If you're a farmer, you want to figure out what new crops to plant. You want to prepare and deal with it.

And so my job is to try to separate the fear of what might happen and work as hard as I can for the best possible outcome, knowing that because I have done a deal with an AI company, people will be angry because AI could be a very bad thing, and so there’s this association. But regardless, I have to try to do what is best for The Atlantic and for the industry.

That was the CEO answer. There’s a reason I introduced you as the former editor-in-chief of Wired, because I want that answer too, which is you ran an industry-leading publication during the social media era.

A lot of what I’ve heard from people who wish to regulate AI or slow it down or anything is we failed to learn anything from the social media era. We failed to learn how to regulate these companies, we failed to learn how to hold them in check. We all certainly failed to learn how to get paid for how much they use our content. Facebook made a bunch of money distributing our content and media companies made none. YouTube, I think, still doesn’t pay high enough rates to support a news organization on YouTube, and it’s just a moral failure on YouTube’s part.

From that perspective, as you watched the social media era unfold, what mistakes from that era are you trying to avoid making? Because the idea that the tech companies are just the weather is very tempting: they're just going to do this, and we can't stop it. Social media is just going to happen to us.

And it did, but I think a lot of people are looking back on that and saying, "Boy, did we just make a bunch of assumptions about their motivations and how people would communicate using these tools." Those assumptions turned out to be utterly wrong, and we should have actually stopped it earlier or changed it earlier.

Answering as a CEO, that is what we are trying to do. We are trying to figure out a way for these tools to evolve in the best possible direction. Maybe the weather is the wrong metaphor, because we do have some control in the very early stages in making these things better. Just like if there had been a way, early in Facebook's history, to shift the way News Feed worked so that established brands weren't given the same weight as non-established brands. There were like 20 fundamental sins at the beginning of News Feed, which ended up being hugely damaging to both journalism and American democracy.

But one of the tweaks would've been: can you change the weighting, or the design, or the way fonts work, or whatever, so that somebody in Macedonia can't start a publication called The Verge with another "z" at the end that looks just like you and gets the exact same weight? I think one of the lessons is to pay a lot of attention. The AI search products have not been built and have not been launched. As they're built and launched, what values do we want embedded in them? How much text do we want them to show? How do we want the external links to work? What level of summarization do we want? Those are really crucial questions to get right at the beginning, and I think we are more likely to get them right as they do these kinds of deals.

The other thing I'll say, though, as the former editor of Wired: oh my God, some days I wake up and I'm like, "I wish I was a reporter again." The stories are so amazing. I mean, you guys are telling a lot of them, but the opportunity to report right now, it is total madness. It's the best story to report on in years. It's incredible. And I can't do any of that, because I'm a businessman now and I don't even talk to the editors. I don't even know what we're going to run in The Atlantic today. I spent a lot of my time writing those stories on Facebook back when I was there and at Wired, and I loved that. I love writing about these crazy people in this world of churn making these massive decisions. It's so much fun.

When you say it's all crazy out there, the thing that really strikes me is that even two years ago, people thought the internet had sort of calcified into a series of platforms and that's what it was going to look like. Then Elon bought Twitter, then ChatGPT showed up, and now it feels like everything's breaking apart. And the thing that feels most like it's breaking apart to me is the assumption that the big platforms have our best interests at heart, or can be trusted, or trusted with our children. You see the spate of legislation out there that would regulate how kids use platforms. You see all the reporting about Facebook willfully ignoring some of the problems it causes with teenagers.

The other side of it is that a lot of the underlying assumptions about the value being exchanged are kind of like Google's assumptions. Google does image search, gets sued, and wins because they're a bunch of kids. Google indexes all of our sites, but they send us traffic, and we sort of agreed with that approach for a long time. They keep winning because they are innocents, or at least hold themselves out to be innocents, and they deliver a lot of value in a new way. That part feels like it has definitely changed to me: this assumption that it's just a bunch of kids trying to change the world, and of course we should let them skate by and ask for forgiveness, not permission. Do you think, from the business point of view, that that is actually going to create opportunities to bring value back to the people who make the work? Because that's the real problem here.

I don't think that's changed now. I think that changed in late 2016, early 2017, and then by Cambridge Analytica, which was 2018, I think that's when… I mean, that is all changing now. You are very right that it is changing, but I think the trajectory-

The specific similarity that I’m drawing is not “I can’t trust them because of Cambridge Analytica.” I’m pointing right at Perplexity scraping a bunch of paywalled websites and showing the results, or OpenAI training on YouTube to make Sora, or Suno, the company the RIAA just sued, making music… And the underlying piece of it is, “Well, it’s just out there. It’s just ours to take, and we’ll pay some money to cure it at the end, and that’s just the cost of doing business.”

So this is so at stake, and it’s at stake today, this very moment while we’re doing this, and it’s at stake in the case of Perplexity, I think. So Google got away with stuff because, “Hey, we’re cool kids and we’re wearing five-fingered lizard shoes to meetings with senators.” And it’s all cool, and they get away with it for a while, and then eventually regulations catch up. They have to balance. It’s complicated. The dynamic changes. For Facebook, the dynamic changes after the election of Trump, and then even more so with Cambridge Analytica.

Uber comes along and has a totally different strategy, which is, “We’re going to get away with it by just ignoring everything, and then making so much money that we’re huge, and then we’ll follow along.” Which is a very different approach. I think that Perplexity is trying to decide: are we going to be Uber, and just ignore robots.txt? You read all these stories. Or are we going to try to do kind of the Google thing and be like, “We’re an AI company. We’re going to get big and see what happens”? Or are we going to change and cooperate with the publishers? And I think that is at stake right now.

And my sense is that there are probably ways that we, as an industry, can push Perplexity onto that third path that I’m talking about, where they are a responsible player that doesn’t do 900-word summaries of a 901-word story, and that actually does sort of a fair use summary and a proper link out. Will that happen? If that happens, that is so much better for us than if it does not. And so what is the role that I can play in making that happen? And what is the role that you can play in making that happen? That is very important for the future of media.

And I think it’s particularly important because the biggest thing happening to media right now, or the most… You talked about this in the amazing conversation with Ezra Klein, where you guys talked about the enshittification of the web; that is the thing that is most at stake right now. AI content right now is bad. What if AI content becomes good? What if the web becomes sort of indistinguishable and you can’t find your way around? How do you navigate through that? Building search engines that are still able to direct you to legitimate, real content, not the billions of spin-offs, is one of the most existential problems that exists. And if that problem is not solved, we’re in a world of hurt. So that’s the thing happening right now that I am most worried, intrigued, and interested in for the next couple of years.

Because Google’s entire business model probably depends on the open web. I mean, the thing you’re talking about breaking is Google Search, broadly. If the web becomes so enshittified that Google cannot sort the wheat from the chaff, that version of the web comes to an end, and maybe we’ve all paid enough attention to Perplexity and they have a deal, or OpenAI’s search product has better sources from The Atlantic and whoever else, and that will become the winner because people will seek out quality. It’s a big bet, but it kind of relies on the web becoming so polluted that Google can’t sort it out.

When I think about, “What is The Atlantic’s future?” you have to decide, “Okay, what happens if the web becomes super polluted?” If it does become super polluted, will Google, Perplexity, OpenAI, whatever the next Bing is, whatever search startup there is, be able to navigate it? If they can’t, how then do we have a successful business model? Do you rely only on direct… Basically, if the web is gone as a distribution mechanism for The Atlantic, how do we reach readers? Well, thank God we have a print magazine. It’s the most hilarious thing, the revenge of print. There’s print, but of course there are also your apps, and there are the direct relationships you have with people in your newsletters.

And then the interesting question is, “What about the walled gardens? What about Apple News? Do they become more important?” If the web becomes so polluted that you can’t really have a functional website there, do you rely more on those places? Now, my hope is that the web doesn’t get so polluted, and I think one of the key tasks of the tech industry and everybody else is to try to make sure it doesn’t get that way, but who knows? The financial incentives for pollution are high. Maybe it gets so polluted that the polluters no longer have an incentive to be out there. Who knows what’s going to happen? Anyway, figuring out a strategy for a world of maximum pollution is a fun part of my job.

I have this concept that I call “Google Zero,” which is the notion that, for basically every publisher, 30 percent of traffic comes from Google, give or take. Over time, that number is going down, or the ecosystem is moving away from some publishers and more toward others. And so eventually you have to just look in the mirror and say, “Okay, if my Google traffic goes to zero, what am I? Is there still a business here?” What does The Atlantic’s business look like if Google goes to zero?

We’re fine, I think. I mean, we have a very strong subscription business and those people renew. And so you can imagine a situation where our subscription business, which is already the majority of our revenue, becomes an even higher percentage of it, and we are figuring out… Some of those people find us from Google. The question is, in your Google Zero, as long as there are still a few thousand queries a month that are “subscribe to The Atlantic,” we can lose the “What is the meaning of life” queries.

Oh, I see. Super Bowl queries are gone.

Super Bowl queries are gone.

But as long as people are Googling, “How do I subscribe to The Atlantic,” and then subscribing to The Atlantic, that’s fine.

So as long as it’s Google one, not Google zero, we’re in a decent spot.

Do you sort your Google queries right now based on the value of which queries convert and which queries don’t?

No, but we go into Google Search Console. The top 10 queries are “The Atlantic,” “Atlantic subscribe.” Subscribe is not in there, but people who are searching for The Atlantic probably have high subscription intent. So actually, when ChatGPT came out, we did a fun analysis where we went through Google Search Console, through each of the top hundred queries and then a random sample up to the hundred-thousandth query, and analyzed whether that query would go away with a perfect chatbot. And the Atlantic queries aren’t going to go away. But “What is the meaning of life,” which actually does, or used to, direct you to an Atlantic article by Arthur Brooks?

Those queries go away. And so then we were like, “Okay, how much of our traffic will disappear, and then how much of our subscriptions will disappear?” And the traffic decline is much steeper than the subscription decline. So in a world of Google one, let’s call it, we take a real hit in traffic and readership, and that has knock-on effects. It has some knock-on effects on our subscription business. It has knock-on effects on our advertising business. It has knock-on effects on the number of people who read a story, which maybe makes a journalist less likely to write. All those different things, but it’s not crippling. We are less dependent on Google, in a deep way, I think, than most publishers. But still, we’re profitable, but not massively profitable, and a hit is a hit, so we’d have to figure that out.

We’ve talked a lot about the web becoming so polluted that it might just be too polluted to operate on. Do you think there’s another referrer out there? Do you think OpenAI’s search product might become a reliable referrer of traffic to you?

That’s the bet. There are a lot of people out there who are like, “Well, OpenAI’s search doesn’t work.” It’s like, “Yeah, AI search doesn’t work right now. It doesn’t work very well. AI is good for a lot of things. It’s not good for search.” That means that Google search traffic is not going to go away for a while. Once AI is good at search, that’s when Google’s regular ten-blue-links search traffic goes away. There’s a hedge built into this. I do think that AI search will start to work. It’s a hard problem, because the logic of the vector model that underlies the base training models is not good for search. Then you have to build another model on top of it, which is your RAG model. But your RAG model isn’t just going to do a basic search. It has to do a whole bunch of other complex things.

So we’re building internal Atlantic search, and it’s super complicated. Somebody puts in a query, and we do A, B, and C with AI in order to get the best results. That’s us with one engineer working on this problem. Over time, the AI companies will put many engineers on this problem, and they will, I think, figure out how to solve search. So when that happens, it’ll be a partial replacement for Google, and God willing, there will be norms within the AI search industry where, again, you won’t be giving a 900-word search result about a 901-word article. You will be linking out in a way that gets people to that article. So God willing, that happens. Do I think that three years from now we will have as much search traffic from Perplexity, OpenAI, and all of their competitors as we do right now from Google? Absolutely not. But will we have some? I sure hope so.
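The two-stage retrieval described here, expanding a short query with a model and then comparing the expanded query against the archive, can be sketched in miniature. Everything below is a toy illustration: the three-document archive, the synonym table standing in for the model-based expansion step, and the bag-of-words vectors standing in for real embeddings are all hypothetical.

```python
import math
from collections import Counter

# Toy archive standing in for a publisher's corpus (hypothetical content).
ARCHIVE = {
    "meaning-of-life": "what is the meaning of life happiness purpose arthur brooks",
    "ai-search": "how ai search engines retrieve and rank articles with embeddings",
    "subscribe": "subscribe to the atlantic magazine subscription",
}

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def expand_query(query: str) -> str:
    # Stand-in for the expansion step: rewriting a short query into a longer,
    # richer one before retrieval. A real system would call a language model here.
    synonyms = {
        "ai": "ai artificial intelligence search engines",
        "life": "life meaning purpose happiness",
    }
    return " ".join(synonyms.get(w, w) for w in query.lower().split())

def search(query: str, k: int = 1) -> list[str]:
    # Compare the expanded query against every document and rank by similarity.
    qvec = embed(expand_query(query))
    ranked = sorted(ARCHIVE, key=lambda d: cosine(qvec, embed(ARCHIVE[d])), reverse=True)
    return ranked[:k]
```

Real systems replace the bag-of-words vectors with dense embeddings in a vector index, and the dictionary lookup with a model call, but the shape of the pipeline, expand then retrieve, is the same.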

If you have a paywall and that’s the main revenue stream, what’s the value in letting OpenAI synthesize any of your work in a search product like this? Is it sending you traffic?

Yeah, it’s sending traffic.

Are you guaranteed any traffic from them?

We’re not guaranteed traffic, but it’s sending us traffic. Every visitor who comes to the site, A, as long as they like what they see, increases the brand value, and B, there’s some chance they end up subscribing. And we don’t have a hard paywall. We have a paywall with adjustable rules. Many people hit the gate, and some people don’t, so some people won’t hit the gate; they’ll read an article and see our ads, or maybe they’ll even see an ad on the gated article. So there’s lots of revenue we make off of every person who comes to The Atlantic.
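A “paywall with adjustable rules” can be reduced to a very small decision function. The sketch below is purely illustrative, with made-up thresholds, not The Atlantic’s actual logic, which also factors in things like experiments, sell-through rates, and per-story subscription propensity.

```python
def should_gate(articles_read_this_month: int,
                is_subscriber: bool,
                free_article_limit: int = 4) -> bool:
    """Decide whether this pageview hits the paywall gate.

    free_article_limit is a made-up knob; a metered paywall makes limits
    like this adjustable per experiment, traffic source, or story.
    """
    if is_subscriber:
        return False  # subscribers never hit the gate
    # Non-subscribers get a metered allowance before the gate appears.
    return articles_read_this_month >= free_article_limit
```

The point of keeping the rules in one adjustable function is that the business side can tune the limit, or A/B test it, without touching the rest of the site.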

One of the things that has really struck me in conversations with various social media executives is the belief, and it is a rock-solid belief, that what social media products have revealed is that people don’t care about brands. They care about people, and all media will be individuals, not brands. I don’t believe them. I think that is a very self-serving approach from the social media companies, because they have an infinite supply of teenage creators they can replace at will. But you run one of the most storied brands in media. When you hear an Adam Mosseri post that, and he posts things like that all the time, what’s your reaction? Is it, “Is this correct?” Is it, “We need to just walk away from platforms like this”? Or is it more mercenary: “We need to find some customers on Instagram and show them the brand there, but we can’t depend on them”?

We’ve been through this in the last five years in our business. If brands didn’t matter and all that mattered were individuals, everybody would be on Substack. And that’s not the way it shook out. The way it shook out is that there are some individuals who’ve made amazing brands and are on Substack, and there are some people who are at The Atlantic or at The Verge.

The incentives are different for every writer, for every individual, for every editor. And so I think the way it’s shaken out is that social media did help individual brands; Mosseri has been talking about this for 15 years, and he is right about that. I remember the first time I heard him make this argument, I’m pretty sure he was talking about the NBA. It used to be the East vs. the West in the All-Star Game; now it’s Team Giannis vs. Team LeBron. I think that’s the metaphor he used the first time I heard him talk about it.

And it’s true. It did make LeBron’s individual brand more valuable, but LeBron still plays for the Lakers. Giannis still plays for the Bucks. The brands, the teams, the structures still exist in this world. I think you’ve seen a little bit of a barbell in our industry, and in a lot of industries, where power, wealth, and influence have accumulated at the far end, to institutions like The New York Times and to individuals, and then it’s local news in the middle that’s been crushed. I think you may see more of that in the future, and so my hope would be that The Verge and The Atlantic are on the far side of the barbell.

Yeah, that is also my hope. I look at the far side of the barbell, and I look at The New York Times, which is undoubtedly one of the winners in all of this. They can fund the millions of dollars in legal fees, to no result, on the back of a word game subscription service. That is the revenue. That’s what makes the product sticky: Connections and Wordle.

Remarkable.

It’s very smart. They were very clever. They realized that the word games are going to fund the words on the front page.

Or the other way around. You feel better about playing Wordle because it’s part of The New York Times, because you associate The New York Times with the Ukraine coverage. I don’t know exactly how the virtuous circle works, but they figured out a good way of doing it.

The thing that scares me about that is… that’s what I hope, that it’s all just one synthesis of brand value and journalistic value. But then I hear people say the content is not actually valuable. People are paying for the games. They come back every day for the games, and the content is just there. And if they weren’t paying for the games, they would just find the content elsewhere because it’s free. And we’re getting to a place where the high-quality work is behind the paywall and low-quality pollution is freely available to everyone else, and the information environment has been destroyed. And if you say the content isn’t valuable, and I’ve heard so many people say the content itself is not valuable, that it’s the services that are valuable, or the distribution, or the ad targeting, then at some point, none of us can pay anyone. At some point, we’re just saying the content isn’t valuable. We might as well let teenagers read our stories on TikTok for free.

I have a good counterpoint to that.

So there’s this magazine I’m very familiar with, and they’ve been entirely unsuccessful at launching games products and crosswords products and derivative products, and their CEO even tried to spin up an AI-based social media platform, which ended up getting sold, and yet it’s still making lots of money and it’s still profitable. It’s called The Atlantic. A hundred percent, the content is valuable. People are paying us. We’re the experiment proving that The New York Times content is valuable. We have well over a million people, we just announced, and they’re paying us ever more money for our content. They’re not paying for anything else.

Do you think that that is extensible? The Atlantic is singular; it’s an institution in America. Do you think that’s extensible to local news? Do you think it’s extensible to some of these small communities, or to what are effectively news deserts where there’s nothing?

One of the hard things is that if you look at the brands that have been most successful with paywalls, they tend, not completely (The Information is a good counterexample), to be brands that have existed for a long time and have built up a lot of brand value. And so could you create a new local newspaper and, even with fantastic reporting, create a paid model for it?

That is a hard problem. I have not seen great evidence that that could be done in the long run. Maybe it can be. But I have no doubt that there is really good content that can be extremely valuable, and if you create something with extreme value, you can get people to pay for it. Now, you have to run your business efficiently and you have to be lean about it and you have to figure out all the smart ways to get people to read it. It is definitely doable, and I 100% completely, totally, fundamentally disagree with all those people who’ve been saying to you that the content is not valuable. They’re just wrong.

You said something to me as you were on your way to The Atlantic that has stuck with me ever since. I asked you why you were leaving your post as editor-in-chief at Wired to go be the CEO of The Atlantic, and you said this thing to me that I’ve never stopped thinking about. You said, “I can’t wait to run the product team.” You were so excited about it. It was the thing. And as an editor-in-chief, I was like, “Yep, that’s it. That’s why you would go. I want to run the product team. That’s the thing I want.” You’ve been there for a minute. You are making deals with another giant product organization. You’re getting some credits to use OpenAI’s systems. What are the products you want to build?

It’s funny you mentioned that today. Probably two hours ago, I was talking to the guy who’s building internal AI search at The Atlantic. It’s just a demo model, and who knows whether it’ll ever go out on our website. But the question is, how do you build an AI-based search engine internal to The Atlantic? And it’s an amazing problem, because you could just say, “Well, just send the query out to OpenAI and search the database they have,” but that’s not the right way to do it.

Some of the way people do search is they’ll take a query and run another model on it to turn it into a 500-word query, and then they’ll compare that 500-word query to the database. Is that the way to do it? Is that the right way to do it?

Okay. Then how do you write the query for how you want to compare it? It’s an amazing problem. And when I was an editor, I wouldn’t get to deal with that. Jeff Goldberg doesn’t get to be in these conversations. Jeff Goldberg gets to decide how we’re going to cover the industry, but I get to decide how we’re going to run our AI search product. I love getting to help run the product and engineering team and helping to hire those people. And one of the most fun things I’ve gotten to do in the last three years is run hundreds of experiments on how our paywall works, how our pricing works, how our checkout page works. We’re getting to run this operation like a machine, like a tech team. We get to say, “Well, okay, let’s run this price test, and let’s change this color, and let’s have the paywall rules be A and B, unless the sell-through rate crosses X, and unless the subscription propensity in a particular story…”

I just love that stuff. That was part of the reason I loved being at Wired: I’m a nerd. And the opportunity to be in those conversations, to help shape those conversations, and then to see that it’s working, that the data science team, the product team, the engineering team, and the consumer team have built this thing that took us from losing a lot of money to making money in a short period of time, that’s great and that’s really fun. Now, the next evolution will be: can we at The Atlantic help build the next set of products that’ll help this industry survive in the era of crazy AI? Maybe we can, maybe we can’t. But the fact that I have an opportunity to help try to do that is great, so I stand by that. We have a great product and engineering team.

I think in the social media era, a lot of media companies thought they were suppliers of content to other distribution. You’re talking about owning the product, right? You want an app on the home screen next to Instagram. Do you think about that as a competitive field you’re in now?

You mean like The Atlantic app or a separate thing that we build?

Just your product experience. Again, in the social media era, the BuzzFeeds of the world were like, “Our business is going viral better than anyone else can go viral on someone else’s platform.” That feels over in many ways.

That’s over.

And now you’re saying, “I’m going to run a product team to build a product.” That product is competing for attention with everything else.

As we’ve mapped out how The Atlantic thrives in a world where the web goes away, redoing our app has been one of the major initiatives. We didn’t even have an Android app. So building an Android app, getting it to parity, getting the feature set so you could build a feature on iOS and have it launch on both, figuring out what it is that the readers want. It’s not the most glamorous stuff, but it’s really important and really cool. And so now, absolutely, we’re figuring out how to build that product and how to compete. In my ideal version, three years from now at The Atlantic, we’ll have figured out something amazing and launched it into the world, something that is really good for long-form journalism.

I took one crack at this. Through Emerson Collective, which is the parent company of The Atlantic, I built a social media platform. I worked with my partner, Raffi Krikorian, to build a platform called Speakeasy, and the idea was to create conversations that are positive and fulfilling online. Twists and turns of a startup and all this and that, and we ended up selling the technology to McCourt and Project Liberty, which is a good outcome. But, as you may have noticed, we didn’t end up supplanting Twitter. Still, the opportunity to do that kind of thing is wonderful, and I hope that there’s a moment to do that at The Atlantic. Whether it’s something that’s related to The Atlantic and its mission or something that’s directly part of The Atlantic, that is a hundred percent something I hope we can do in the next two or three years.

Alright. Well, you’re going to have to come back and show me that product when you launch this thing.

Ideally, you’ll see it and you’ll be like, “Oh, that’s cool. It first came up when we were talking on Decoder.”

Yeah, I’m excited for it. Nick, thank you so much for joining the show.

Thank you so much. It’s great to talk with you.

Decoder with Nilay Patel /

A podcast from The Verge about big ideas and other problems.

https://www.theverge.com/2024/7/11/24196396/the-atlantic-openai-licensing-deal-ai-news-journalism-web-future-decoder-podcasts