Today, I’m talking to Harvard law professor and internet policy legend Lawrence Lessig. Larry is a defining expert when it comes to free speech and the internet. He’s taught law for more than 30 years at the University of Chicago, Stanford, and Harvard; he co-founded Creative Commons in 2001; and he’s published a dozen books since 1999 delving into the intersection of communications, money, the media, the internet, and democracy itself.
Larry is a hero of mine. I have been reading him since I was in college, and so much of his work has directly shaped so much of my thinking. It was an honor to spend some time with him. But I have to tell you, this episode is a little looser than usual. See, the Decoder crew was in Boston last week for the launch of Harvard’s new Applied Social Media Lab, which Larry is a part of. The launch event for the lab was a glitzy affair featuring a number of guest speakers, including former President Barack Obama, who was set to join Decoder and talk about how social media can actually benefit democracy and how we might build toward that goal.
Unfortunately, President Obama had to drop out of the event due to illness. We are working with his team on rescheduling that conversation, which we’re excited to have for you in the future. Thankfully, Larry was available to fill in, and he was more than prepared to dive deep on the big issues the lab was designed to address and that I’ve been thinking about quite a bit lately. As you’ll hear us say, there’s a lot to unpack here.
You’ll hear us agree that the internet at this moment in time is absolutely flooded with disinformation, misinformation, and other really toxic stuff that’s harmful to us as individuals and, frankly, to our future as a functioning democracy. But you’ll hear us disagree a fair amount about what to do about it.
Here in the United States, the First Amendment puts really strong protections around speech and heavily limits what the government can do to regulate any of it — even outright lies, which we saw with both covid-19 and the 2020 election. But because there’s so much stuff on the internet that people do want taken down, a number of strategies to get around the First Amendment have cropped up. For example, there is one law that is really effective at regulating speech on the internet: copyright law. Filing a DMCA claim on something is one of the fastest ways to get it removed from the internet, and there’s a whole folk understanding of copyright law, which is often wrong, that has sprung up in the creator economy.
For instance, Larry and I talked about the current and recurring controversy around react videos on YouTube, not what they are but what they represent: the users of a platform trying to establish their own culture around what people can and cannot remix and reuse — their own speech regulations based in copyright law. That’s a fascinating cultural development. There are a lot of approaches to create these types of speech regulations that get around the First Amendment, and I wanted to know how Larry felt about that as someone who has been writing about speech on the internet for so long. His answers really surprised me.
Of course, we also had to talk about artificial intelligence. You’ll hear us pull apart two different types of AI that are really shaping our cultural experiences right now. There’s algorithmic AI, which runs the recommendation engines on social platforms and tries to keep you engaged. And then there’s the new world of generative AI, which everyone agrees is a huge risk for the spread of misinformation, both today and in the future, but which no two people seem to agree on how to tackle. Larry’s thoughts here were also surprising. Maybe, he says, we need to get all of politics offline if we’re going to solve this problem.
This conversation is full of twists and turns, and I came away with so much to think about. Again, it’s pretty loose; Professor Lessig just agreed to come hang out with me at the last minute, and we just talked about it. Maybe we should do all of our Decoders like this.
Okay, Professor Lawrence Lessig. Here we go.
This transcript has been lightly edited for length and clarity.
Lawrence Lessig, you are a professor of law at Harvard Law School. You’ve written a lot about copyright law, a lot of which I read when I was a young law student. Welcome to Decoder.
I am really excited to talk to you. We are here at Harvard, which just launched the Applied Social Media Lab, which is intended to study the future of social media and its interaction with democracy. There’s a lot of ideas to unpack there. Really quickly, tell us what the lab is meant to do and your involvement in it.
I think what’s important is how it wants to do it. What the lab is going to do is bring people from industry, not loaned from industry but people who have left industry, who have decided they want to try to do good in the world, into the context of Harvard Law School, where we’ll have policy people engaging with technology people, architecting how we could build a better social media. Of course, social media means media generally — how we can take what has turned into a really poisonous soup and filter it into something that might actually be helpful for democracy.
There are a few things happening right now in that soup that are new, one of which is AI, which I want to talk about. Another is two different wars on two different fronts that are defined by mis- and disinformation at scale. A third is what feels to me like a generational reset of how we think about social media at large. We came through the Facebook era. We went through what you might call the Twitter era. The Twitter era is definitively over. It is now a different company with a different name, and it feels like a younger generation is looking at social media differently. They’ve been raised with different expectations. All of that feels like, “Well, we’ve been at this for over a decade. Should we try something new, or is something meaningful finally going to change?” Do you feel that as well?
Absolutely. I think that the worst part of social media’s place in society is people not understanding, in fact, what it is. People have a naïve view: they open up their X feed or their Facebook feed, and [they think] they’re just getting stuff that’s given to them in some kind of neutral way, not recognizing that behind what’s given to them is the most extraordinary intelligence that we have ever created in AI that is extremely good at figuring out how to tweak the attitudes or emotions of the people they’re engaging with to drive them down rabbit holes of engagement. The only thing they care about is engagement.
The Facebook Files were released two years ago by Frances Haugen — I was honored to be able to represent her at the first stages of that. If you look at the Facebook Files, they are filled with engineers, really good engineers, trying to do the right thing, trying to talk about how to make the platform safer so it doesn’t lead young girls to commit suicide or lead to radicalism in politics. And again and again, they were overruled by people who were focused on the business model of engagement to drive profits. I think once you step back, you realize, “Wow, all of this is being fed to me for a reason, and the reason really has nothing to do with making democracy work better or helping me understand the world better — it’s all about using me for an end that is not my own end.” The AIs that are doing it are the most powerful intelligence that has ever tried to do this. It’s terrifying when you realize it because [it’s] not clear what we do about it.
There’s a lot to unpack there. There are the commercial incentives of the social media platforms. There is the notion that we should regulate them in some way. There’s the emergence of new technology, which might be hard to understand even for the technologists, let alone the average person. Let’s start with that middle bit: how we might regulate it.
In countries around the world, in Europe in particular and China most of all, the governments just go ahead and regulate the social media networks. They do not have the restrictions on speech regulations that we have in this country with the First Amendment. We keep trying to get around it. There are a number of ways our government tries to get around it.
Lately, it seems like you can just be Republican and pass some speech regulations in Texas and Florida. (We’ll just see how that goes.) You can ban some books. You might do what’s called jawboning, where the Biden administration put a lot of forceful pressure on Facebook and other platforms — they got themselves in trouble, but they did it. You could use other laws. You could use copyright law, which is a frequent stand-in for speech regulations because everyone accepts it. Is that an appropriate approach — “We can’t just do it straight up because of the First Amendment, so we’ll find other ways to do it”?
It’s important to be clear about the first part of what you said, which I think is really important, and that is the way other countries are actually doing it. China obviously is not a political system to be recommended, but we can observe that there’s a Chinese version of TikTok and an American version of TikTok. The American version of TikTok is completely uncontrolled. It runs all the time. It feeds the worst possible content to especially young people to drive them to engage.
The Chinese version of TikTok is blocked during certain hours. It limits the total amount of time you’re allowed to be on it. The substance of what it provides is aiming to lead people to want to be astronauts. If you ask our kids what their number one dream job is, it’s being a social influencer. If you ask the Chinese kids, it’s being an astronaut. That’s not an accident. That’s not the free market working. That is an intentional design decision that they’re making towards helping their kids and we’re making towards giving up on our kids.
I would say if you went and asked a bunch of parents in America, “Should the government pass some speech regulations that make our TikTok look more like Chinese TikTok?” and then described those outcomes, they would say, “Yes,” and then you would have to say, “Well, we can’t because of the First Amendment.” That seems like a really important tension that we are facing right now.
Even among my own colleagues, we have this conversation about “How should the First Amendment be applied in this context?” Let’s take a very precise example. Imagine China invaded Taiwan tomorrow and immediately started amplifying all the American voices on TikTok that were saying that “Taiwan had it coming. China’s the natural leader for this area. It’s ridiculous that we’ve been resisting One China for so many years” and suppressed all the American voices on that platform that were saying, “This is outrageous. This is a free people,” blah, blah, blah. That decision to suppress and amplify is what we call editorial judgment. It is the core of First Amendment protection. When you say, “Could we do anything about it?” the standard First Amendment answer is: “No, there’s nothing you could do about that. That’s absolutely the most protected thing you could have.”
It’s amazing to see how we slid to this position of being vulnerable in this way. If, in 1960, the Soviet Union had come to the FCC and said, “We’d like to open up a news station in the United States for Americans to consume in the United States,” the FCC, of course, would’ve blocked it, but we’ve moved to a place where we have no legal capacity to do anything about this kind of influence, with the consequence being that we are so incredibly vulnerable. The First Amendment is basically going to knock down most of these experiments that are happening within the states.
What saddened me the most about this: when Frances Haugen originally testified in the fall of 2021, there was a wide range of agreement among Republicans and Democrats that this was a huge problem. We needed to do something about it. Then, AOC did her first TikTok, and her first TikTok was rejecting the idea that the United States government should do something to respond to TikTok. Her basic argument was, “It’s unfair to do something to TikTok when we still have not passed any privacy legislation in the United States,” which, number one, demonstrated she didn’t understand anything about the nature of the problem and, number two, demonstrated that money has gone into the Democratic Party in a way to divide the two parties when it comes to social media and what we do about social media. The First Amendment, of course, is an important barrier, but even more important than the First Amendment is the fact that the economy of influence that’s governing in this space is an economy of influence of money.
Wait — I’m more cynical about this than you are, maybe even more cynical than you’re suggesting.
Wow, that’s hard to believe.
I am pretty cynical about the bipartisan nature, or the supposed bipartisan nature, of wanting to regulate Facebook. What I see is a bunch of politicians on both sides who know the First Amendment exists. You can’t just go ask the platforms for content moderation that would favor their party. Whenever they see a weapon like the Haugen papers, the Facebook Files, they say, “Oh, we can just threaten you with legislation that may or may not pass, but we’ll threaten you with it. That’ll be a lawsuit, and that’ll be a long process for you to go fight it, and the public opinion will be against you, or you can just change your rules. You can just moderate content to favor the Republicans or favor the Democrats.”
I see that cycle play out over and over and over again, and to me, it appears to be wholly in bad faith. The First Amendment exists. No one wants to go litigate whether the government should make speech regulations, but can we go threaten a bunch of these companies with pretend legislation or manufactured outrage because of what has been leaked? Sure we can. Then, maybe they’ll turn the knobs and amplify our content more. That cycle seems to be more dangerous than almost anything because it is effectively speech regulation in practical impact on these companies without any sort of oversight or any check on how much it might be imposed.
Absolutely, and that’s exactly what happened in 2016 and in 2020. Indeed, the Facebook Files demonstrated that though, for example, conservatives said that they were being discriminated against by these platforms, in fact, the interventions from the political department of Facebook overwhelmingly were interventions to bend the rules in favor of allowing conservatives to do whatever they wanted on the platform. You’re right. That dynamic was certainly what happened in that election.
2024 will be that plus an order of magnitude greater danger produced by foreign governments that have targeted the United States in this election. There are all sorts of stories of the Russians in 2016 and 2020. The Chinese in 2024 — it’s already been revealed that this is what they’re doing. The first round will be the Taiwanese elections in January of ’24, where they have already begun to figure out how they can flood that space to create disinformation to tilt the election in the way that they want to tilt it. That’s just a dry run. In fact, the technology they’re using will work better in America than it does in Taiwan because it turns out the AI is better tuned to English than it is to the Taiwanese version of Chinese. We will be completely vulnerable to something that has as its purpose screwing up our election.
Look: Mark Zuckerberg doesn’t hate America. Facebook wasn’t trying to destroy American democracy — it was just trying to make money, but now, you’ll have even more sophisticated AI focused like a laser on blowing up the basic institutions of our democracy.
The mechanism is AI? Or the expression is AI in your mind, specifically TikTok?
It’s the algorithms behind things like TikTok, but it’s also, more importantly, the content generation capabilities that LLMs will give them. The LLM capability to mislead people into believing that they are engaging with certain people about certain things, that they’ve seen a certain speech by Obama, a speech by Biden, or a speech by Trump — all of these things that we don’t have any defense mechanism for right now will just flood into the system. Think of it like pathogens. They will just spread pathogens throughout the system, and we have no antibodies for them.
As you’re pointing out, because you’re obviously legally educated, we also have a constitutional barrier to building those antibodies or at least to the government taking steps to build those antibodies. The most dominant platforms for facilitating the spread of information have basically given up. I’m not sure what Facebook will actually do in 2024; they’re back and forth about it, but Elon Musk has fired the election team. That platform alone is terrifying in terms of what it’s going to do for misunderstanding in this context.
Assuming it’s still there by the time of the election.
It’s a year from now —
Something like that. But this war in Israel right now, I think, is a perfect example of this. The level of misinformation on both sides is astonishing, and yet, you don’t see people reacting in a healthy way. It’s not like you see them shifting to sources that will help them understand. Instead, they’re just doubling down on the sources that help them to misunderstand.
I think one of the most startling facts of the past three years: after January 6th, a bunch of polling reported in January of 2021 found that 70 percent of Republicans believed Donald Trump’s claim that the election was stolen. When that number came out, everybody was saying, “Well, that’s just temporary. They’ll get it. Eventually, they’ll just relax.”
That was always wishful thinking in my mind.
It went down a little bit, but it’s now come back up. If we were talking about the Soviet Union and you said, “Well, in the Soviet Union, they believe that America caused the war in whatever, and they still believe that completely false fact,” we’d say, “Yeah, well, that’s because you’ve got state-run media.” Well, we don’t have state-run media here, but you can still perpetrate an obvious lie, and that stays. Not just stays: it gets embedded into the identity of the people who consume that media, and not accidentally, both because the business model of social media is engagement and it turns out the politics of hate is the most effective way to get people to follow, and also because there are strategic bad actors who are very good at leveraging and using the platform.
I want to stay focused on the AI component for one second. There are the AI-powered recommendation algorithms that a normal consumer encounters when they use a platform. There, I think you have a TikTok problem. The Chinese government is entangled with TikTok in some way. Whether that’s meaningful or not is actually very hard to know, but they’re definitely entangled with ByteDance, which owns TikTok, and people might encounter those recommendation algorithms. Those are obviously opaque. Everyone’s pages are similarly opaque. There’s not even a shared cultural context behind the algorithms everyone is experiencing there.
That’s one bit of regulation, and you can see that the government might be able to fumble its way towards a theory of banning or restricting TikTok or forcing a sale to Oracle, all these ideas that they’ve tried. “This is a foreign actor. They’ve built a distribution channel to millions upon millions of young Americans. We should probably take a look at that.” You can see how the government could wind its way to the Supreme Court and potentially win on some theory of restricting that platform.
The other part of it — “Oh, there are a bunch of LLMs in the world that can be used to make misinformation, and we should impose some sort of regulatory structure such that you can’t just lie to people at scale.” That runs headfirst into the First Amendment. Even if you’re a foreign national in this country and you decide to use Google Bard to tell a lie, that’s fine. I can’t figure out how to make that illegal. Can you figure out how to make that illegal?
Let’s talk about both. I think it’s nice that you unpacked them that way. The TikTok issue is, I think, extremely hard. Montana has basically passed a law to ban TikTok. Immediately, TikTok filed a suit challenging it, bringing a whole bunch of artists forward who said that their whole livelihood was TikTok, so therefore, it’s the core of First Amendment protection. I think it’s going to be hard under the First Amendment. I actually volunteered to help Montana defend their law, but I think it’s going to be hard. Even if you solve the foreign influence problem, you still haven’t solved the business model problem because, even if you know that China is not intentionally trying to screw with the elections, you still have a platform that’s trying to figure out how to make people spend more time on the platform. It just is our psychology that the best way to do that is to turn us into conspiracy theorists who are crazy about all sorts of —
I just want to — I agree with you in that that is effective. I don’t think it’s the best way. I would actually offer you one very meaningful counter-example, which is: I think Taylor Swift is a much more powerful figure in American culture and politics right now, and she’s done the exact opposite, which is harder. I think it’s cheaper and easier to make people angry, but it’s more effective and more durable to make people not angry.
Absolutely, but the question is, if you are sitting there watching week-by-week engagement numbers, Taylor Swift invested a long time in building the brand that would make it so that she could be an oracle of something good and true, and it’s not —
I’m saying most durable art makes people feel good, not bad.
I love what she’s doing, and she’s a hero, not just because —
I’m not asking you to criticize Taylor Swift. I’m just challenging your premise that the single most effective way to build engagement is anger.
I would just say that the fact that you can point to Taylor Swift, and I can point to everything in social media, doesn’t mean that I’ve lost the argument. You’re right. Taylor Swift is a good example. Who else?
Marvel movies. You can pick any number of things that make people generally feel good.
Let’s just recognize that’s going to be a hard problem, and people feeling good doesn’t mean that they know the truth. There are a lot of people who believe the Big Lie who feel good about what they’re doing. They’re defending the truth, defending America, and they’re told that in a very positive way.
Would you make that illegal? The thing that I’m stuck on here is, again, I came up reading your work, which was very much in favor of loosened speech regulations. In particular, I think the stuff that you have published that has been most meaningful to me is the idea that the more we tend towards copyright maximalism, the more we backdoor our way into speech regulations so that Disney is now in control of art because Disney has overwhelming copyright protection. It feels like you’re arguing the flip now, that we actually need some more speech regulations, that we should find a way to regulate what is allowed on these platforms or what people are allowed to make with technology. That feels like a really big shift for you.
I don’t think it’s a flip. I wouldn’t admit it’s a flip, but I don’t feel like it’s a flip. It feels like it’s a different problem. I don’t favor the idea of speech regulations in the sense of the government deciding which speech is allowed and which isn’t. I think it’s important, though, that the government’s allowed to address a new kind of problem. Algorithms are a new kind of problem. Some people say, “When the algorithm decides to feed me stuff that makes me believe that vaccines don’t work, that’s just like the New York Times editors choosing to put certain things on the op-ed page or not.”
It’s not. It’s not at all like that. When the New York Times humans make that judgment, they make a judgment that’s reflecting values and understanding of the world and what they think they’re trying to produce. When the algorithm figures out what to feed you so that you engage more because you believe vaccines don’t work, that is not those humans.
Indeed, the best example of this is in 2017. In the fall of 2017, ProPublica published an article about how Facebook had an ad category called “Jew haters.” You could buy “Jew haters” as a category and advertise to Jew haters. When ProPublica published this, Facebook was like, “Whoa, whoa, whoa, whoa. We didn’t do that. There was no human that wrote that category.” That was true. It was the AI that generated that category. The point is that we’ve got to break the tyranny of analogical reasoning here. It’s like the editor at The New York Times, but it is not the editor at The New York Times. If you do that, then you can begin to see that we need to have a capacity to respond to this new kind of threat, which is different from, I think, the traditional issues that we were talking about in the context of copyright. That was the TikTok issue. Remind me of the second one because I thought that was really interesting.
That a foreign national in this country uses ChatGPT to tell a lie, and that should be somehow restricted.
The challenge there is, you’re right, the foreign national has certain rights.
We’ll just make the foreign national Vladimir Putin. Vladimir Putin shows up, probably in Florida, and he opens up ChatGPT and says, “Tell some lies about the election.” That is almost certainly protected speech.
Certainly protected speech, although remember that the DC Circuit, in an opinion by Brett Kavanaugh affirmed without dissent by the United States Supreme Court, upheld the idea that you could limit the ability of legal immigrants to spend money independently of a political campaign because we’re trying to protect the American democracy to be a democracy that Americans choose. There’s a tension in the jurisprudence. On the one hand, you speak like Citizens United does. It says that the question is just the speech, and does the government have the right to regulate “speech”? It doesn’t matter who’s speaking. [Supreme Court Justice Antonin] Scalia, when he wrote his concurrence in Citizens United, said, “Everybody’s making a big deal about the fact that we’re giving rights to corporations. That has nothing to do with it. The question is whether the government has a right to regulate the speech. Who cares who’s uttering it? It could be a robot uttering the speech, a replicant uttering the speech. It could be you uttering the speech.”
I don’t think that was in Citizens United, replicants.
It should have been. It was a really important point. [Laughs] If you adopt that position, then you’re right: there’s nothing you can do about Vladimir Putin — but I don’t think we’re going to settle on that position. I think we’re going to settle on a position that begins to recognize we have a lot to do to protect democratic processes.
That’s actually related to the work that we’re doing at this new lab because there are a bunch of people out there like, “How do we reform the media to make it so it’s like our ideal of what the media should be?” I think that’s hopeless. God bless them, and good luck. I think we instead need to recognize that we need to protect democracy from the radiation of these AIs. Think of it as if it’s like all of a sudden, we can’t —
Just to be clear, this is the generative AIs, not the recommendation algorithm AIs.
I think they’re the same thing right now. They will be deployed hand in glove in order to achieve the objective of the person deploying them.
Walk me through that specifically because I’m still stuck on the idea that if we’re going to write some regulation, especially in the context of speech, it ought to be strict scrutiny from the beginning, very narrowly tailored. That means you have to define the problem. When you say “hand in glove,” there’s going to be a bunch of weird generative AI-created content about our election that gets fed through an AI recommendation algorithm?
I’ll walk you through this, but let’s be clear about what I’m trying to say: What I’m trying to say is that the enterprise of figuring out how to regulate media to solve that problem is, I think, hopeless. We’re not going to regulate media to solve that problem. That problem’s going to be there in some form, some metastasized, dangerous form, regardless of what regulation there is, even if we have creative ways of thinking about how to get around the First Amendment. These two things work together because the recommendation engine is just very good at figuring out what people are open to.
You have a firehose of AI-generated content —
— and the recommendation algorithm is going to —
— figuring out where to aim it. It’s like shotguns that have guided missiles on top of them. The pellets have guidance systems. My point is that’s going to happen, and we should recognize we can’t do democracy in that space. It’s not going to work. Then, we’ve got to think about: where can we rebuild democracy, or what could it look like that was protected from that sort of thing?
One of the most interesting democratic innovations happening around the world, completely invisible in American media, is the explosion of things called citizen assemblies. France did a huge one around climate change. Ireland has done a whole bunch of them, including two that proposed ending regulations on abortion and endorsing same-sex marriage. The citizen assembly came up with those solutions, overwhelmingly supported them, and then it went out to referendum. The public overwhelmingly supported them. We know that the politicians in Ireland could never have supported those two things, but these citizen assemblies could do it. There are hundreds in Japan; they’re all through Europe right now. There’ve been some tiny experiments in the United States, but not much.
The point about these citizen assemblies is that these are places where citizens confront each other. They’re large and representative. They hear the other side. They see that the other side’s not a bunch of reptiles — they’re ordinary humans with the same issues, like, “How do I make sure my kid has a job after high school?” When they experience that, they deliberate, and they come up with some resolution of whatever issue is being presented to them, and that process has been protected. It’s almost like it’s in a shelter from the corrupting influences of AI, whether foreign-dominated influences or even just commercial engagement influences. What I think is that the more people see this, the more they’ll be like, “Well, let’s see more of this. Let’s figure out how we can make this more of our democratic process.”
At the lab, we are just closing a deal to acquire a really powerful virtual deliberation platform that has been a proprietary platform and charges a bunch of money for people to use it. We’re going to open-source it, and we’re going to make it available in every single context. Churches could use it. Schools could use it. Local communities could use it. A DAO could use it. A game could use it. You could be in the middle of organizing your friends at a game in a clan, and then you push the deliberation button. It will be an API. It opens into a healthy deliberation context. We’re doing this because we believe, first, that we don’t know how best to enable deliberation. Second, we think people out there do. Third, we think that, when they do, they’ll begin to think about this as another way to do democracy, and it would exercise a muscle of a kind of democratic relation to other people that we don’t have right now.
Our democratic relation to other people is to hate the other side. The politics of hate is not just that we’re polarized — it’s that we have to turn people who don’t hold the same political views as we do into villains. What I believe is that we’re going to have to find a way to begin to build something different while, at the same time, we’re going to fight the war to make sure the media doesn’t poison us too much. Again, the radiation metaphor, I think, is really powerful here. The skies open up, and we’re not protected from the UV rays in the way we were before. We’re going to have to go underground, we’re going to have to have shades on, and we’re going to have to protect as much as we can. [It’s] not clear we’re going to succeed.
It’s hard to acknowledge exactly how terrifying these threats are. The politics threat is one thing. Your friends report a conversation with one of the senior developers at one of the big AI companies, who said, “My kids are not going to see high school.” What that statement meant is that he thinks we’re not going to be able to control what happens with AI. It is an existential threat that we will not meet. When you realize exactly how dangerous these things are and exactly how weak our capacity to do something collectively about it is, there are a lot of reasons to be terrified about it, and that leads a bunch of people to say, “Well, whatever. I’ll just spend my time watching Netflix.”
It is interesting how much the AI doomers are also the people building the systems.
A fascinating relationship. I only have a few minutes left with you. There is one way that the current moment in AI could come to an end, and I would be remiss if I didn’t ask you specifically about it, which is: All of these LLMs are built on vast amounts of training data scraped from the open internet. Who knows if that was appropriate, legal, or authorized? There are a number of fair use cases pointed at these LLM systems now. There’s one against OpenAI. There’s, I believe, one coming against Google. If there isn’t, there will be. I can make that prediction, even if I don’t know. They’ve got the most money. My colleague Sarah Jeong says, “It feels like this industry is built on a time bomb. This is a house of cards because no one knows whether this copying was fair use or not.” You’re a copyright law professor.
Do you think it’s fair use or not?
I have two strong views, and one is very surprising. The not surprising view I have is that, whether you call it fair use or not, using creative work to learn something, whether you’re a machine or not, should not be a copyright event. Now, maybe we should regulate in another way. Maybe we should have a compulsory license-like structure or some structure for compensation. I’m all for that, but the idea that we try to regulate AI through copyright law is crazy talk.
But that’s where we are right now.
Yeah, that’s what they’re trying to do.
It is always the first and fastest regulatory method.
It’s got the most vigorous remedies.
It’s got the most money on the other side of it.
The most money, yeah. I think all of this in the American context should be considered fair use.
Is that a policy decision or a legal conclusion?
It’s a legal conclusion. I think that [if you] run the fair use analysis, that’s what you get.
Even if — take the Sarah Silverman case as an example. They clearly took her entire book, and they can clearly spit out excerpts of her entire book. Somehow, in that, you run that analysis… I think that’s a coin flip. Fair use in the courts right now feels like maybe more of a coin flip than ever before.
There’s a recent case that might make it more of a coin flip than I would’ve thought. [Supreme Court Justice Elena] Kagan wrote a very strong dissent in the case. Maybe that signals that copyright law is shifting in a new way. The question of legal access to the underlying material is always there. I’m saying if you have access to the underlying material and you have the machine —
That’s where your license scheme would come in.
Yeah, and I’m not even sure it’s a licensing point. My point is, if it’s out there in the world, somebody has legal access to it and they use it to learn, that’s not a copyright event. Reading a book is not a copyright event. Even though when you do it online, it technically copies, the whole point is it shouldn’t be a copyright event because the equivalence — reading — is the sort of thing that was free. It was protected as free. Copyright was a narrow range of controls that we had to impose to create incentives for authors. I don’t think any of those controls are relevant to the context of the training. I’m a very strong “Training is free.” The view I have, which is surprising to people, or people who know anything, the 10 people in the world who know anything about my views about copyright —
I have to make sure I’m one of those people.
— is that I absolutely think that, when you use AI to create work, there ought to be a copyright that comes out of that.
The copyright office has just said no.
They’ve said no. What they’ve said is, “Maybe if you have a complicated enough prompt, then you can get a copyright.” We used to say fair use was the right to hire a lawyer. Now, copyright is the right to hire a lawyer because you’re like, “Here’s my prompt. Is that a copyright or not?” when what we need is an efficient system to basically just allocate the rights.
Now, the tweak I would make, which I think is really critical, is that you get a copyright with these AI systems if and only if the AI system itself registers the work and includes provenance in the registration so that I know exactly who created it and when, and it’s registered so it’s easy for me to identify it. The biggest hole in copyright law, a so-called property system, is that it’s the most inefficient property system known to man. We have no way to know who owns what. We have no way to know with whom to negotiate. Certain entities love that, like the collecting rights societies. They love the fact that it’s impossible to know because then you’ve got to have these huge collective rights societies. The reality is we could have a much more efficient system for identifying “ownership,” and I think AI could help us to get there. I would say let’s have a system where you get a copyright immediately and —
I push Generative Fill in Photoshop; it immediately goes to the copyright office, says in some database, “Here’s my picture I made in Photoshop.”
And, “Here’s how it was made, when it was made, and what fed into it.” Whatever the provenance has to be to make it useful, I’m not sure of that exactly, but if you began to do that, you would begin to build an infrastructure of registries that would make it easier for us to begin to navigate in this context. The other reason to push for this is that artists in the next 10 years are going to increasingly move to AI generation for their art. If you don’t get copyright from that, then basically, these people have almost no way to make a living.
You’ll remember this, I hope, from copyright law: at the birth of America, foreign authors got no American copyright, and all the Americans thought, “This is great. We’re protecting the Americans against the foreigners.” Of course, what that meant is that all the English books were much cheaper than the American books. The American authors were at a disadvantage because the English authors weren’t getting copyright. The American authors began to push, “Give everybody copyright so that there’s no un-level playing field.”
Well, that’s the same with this generative AI. If, when I sit down and I make a creative work, I get a copyright, and you push a button on Midjourney and there’s no copyright there, the people consuming that work, like businesses trying to build advertising, are going to stop dealing with the artists, they’re going to just deal with Midjourney, and they’re going to get all this stuff for free that they can use in a commercial way that the artists before would’ve been compensated for.
Don’t the basic laws of supply and demand get in the way well before the copyright licensing cost? If I’m using Midjourney, I can make 10,000 images in the time it takes an artist to make their first cup of coffee. This is what I hear from artists and musicians: our markets are about to get flooded with C+ work because most of it’s C+ work, and most people don’t care enough, but if you have enough supply of C+ work, the price of that will fall to zero, and no one will ever pay us for A+ work. Whether paying me for A+ work comes with an appropriate copyright license or not seems pretty secondary to that basic economic problem.
Well, think about photographs. The same dynamic happened with photography. In the old days, when the only people who had cameras were professionals, the quality of photographs was very good. Then, all these consumer cameras and then digital cameras came along, so the number of pictures in the world went up dramatically, and the average quality of them went down. Now, you would say, “Did that mean that there wasn’t a demand for professional photographers?” Well, a lot of them went away, but there are still pretty good professional photographers who are hired for substantial amounts for their particular professional work.
My point, I think, is slightly different. All I’m saying is we need to have a world where everybody’s on a level playing field — every creative work is protected in some sense. I just want to radically lower the cost of negotiating that protection. If we radically lowered the cost of negotiating that protection, then the best work would rise to the surface more easily, and people would be rewarded for having figured out how to produce that best work.
I would say, knowing what I know of you, that is a surprising viewpoint! Do you feel like you’ll end up in a place where copyright steps in, as it so often does, as another solution to other problems? Provenance: “I want to know this was made by a person. Now, there’s a federal database that would tell me if it’s a person or an AI, and copyright will be the vehicle by which this information is disseminated.”
It could be copyright. This actually relates to another part of a really great question you asked before that I think is important to be clear about. When you were asking about AI in the context of elections and how you are ever going to control that stuff because there are all sorts of … right now, proprietary AI companies like OpenAI have said, “You can’t use this content for political speech,” but we have all sorts of open-source models out there that can be used for political speech.
One question is, “Can you do anything about that?” The answer is absolutely: you could control American campaigns. People are not creative enough about how to control it, but if, for example, you made every treasurer of every campaign swear under penalty of perjury that no money was spent for any AI-generated content in that campaign, you could shut it down right away, but that still would leave you vulnerable to foreign influence. Now, that’s not to say that there shouldn’t be something we do about it — we should do that — but the point is it will never be complete.
I think it’s the same point with the provenance issue. There are so many fantastic databases out there that are being developed, blockchain databases, that would allow us to do much more in identifying provenance and ownership of content. I think that if they were set up so that the copyright office could recognize them, that would be the best of both worlds. We don’t want the copyright office doing it because no government agency is going to have the capacity to do it in the creative ways that it’s being done right now, but we need to get to a place where that is easier, and I think the market could drive us there more effectively than the copyright lawsuits because copyright applied to provenance claims is going to be a really hard thing to enforce.
I think the blockchain people are desperate for a use case that looks like this, anything other than what they’ve got now.
I want to end more in the weeds on something: I started out by saying it feels like a reset moment on the internet. The platforms are shifting; user behavior is shifting. A thing that has jumped out to me, maybe more than anything, is that younger people on the internet are so deeply aware of copyright law in a way that my generation wasn’t, beyond getting sued for using Napster. That was basically our interaction with copyright law.
Right now, on YouTube, there is a controversy over so-called react videos, where one creator makes a video, a bigger creator reacts to that video, adds nothing other than some faces, potentially, and then they get all the views. This is fine or not fine, but actually, within the creator sphere on YouTube, the notion that this is a copyright violation, that something should be done, that this is wrong, and that they’re going to reach for copyright law is very strong. There’s actually a copyright maximalism amongst younger creators that is shocking to me. Do you see that there’s a new folk private copyright law where, because speech regulations are definitely against the American idea, we’re going to substitute in our folk wisdom about what copyright law can or should be? We’re going to say fair use like a magical incantation to claim moral superiority and have a fight. That seems all very wrong to me. Something bad is about to happen here because we’re once again training people to talk about regulating speech without actually talking about regulating speech.
I think it’s actually a reaction to a very bad implementation that YouTube made of the original effort to protect copyright owners from piracy on YouTube. YouTube set up this very complicated … not complicated, but it’s a very sophisticated content identification system to be able to identify content. Then, if content is identified, you can either demonetize the site that’s doing it or order the site to take it down. That created both a war for legitimate uses… For example, I gave a bunch of speeches that included music, and the label did a takedown. My speeches were speeches about copyright law using these as examples to demonstrate what the law was, and they issued a takedown to me. YouTube was very automatic about it. I fought back, and the label eventually threatened me, and then I sued them and got Liberation Music to agree that they weren’t going to play this game anymore.
Of course, everybody plays that game, where they’re issuing takedowns to everybody, but that created this really perverse incentive for people to basically create complete ripoffs of other people’s work and then just use the same mechanism to go after them. These were all complete fraudsters. There are people who take public domain work, they put it up, and then they use this registration system to say, “These other people producing the public domain work are violating my copyright,” and the machine’s not smart enough to be able to do anything about it. This, I think, has created an economy of what feels to me like real piracy because these are not creators who are trying to do something creative, remixing in some interesting way; they’re just trying to exploit the system to steal from others. That reaction creates a counter-reaction, which is, I think, the culture you’ve identified.
I’m not sure how we get beyond it other than a commitment by platforms like YouTube to be more sophisticated in their policing about what’s actually going to be allowed and protected as a remix and what isn’t. If they were actually aggressively policing that in a way that’s consistent with the values of copyright, I don’t think it would be triggering the kind of anger that’s being triggered on the other side. I’m just as angry if I see somebody take creative work, just put a smiley face on it and then sell that or try to monetize that — that’s totally wrong. But I think everybody should be able to agree that we should be allowed to take creative work and comment on it, critique it, or use it to… I love the people who try to teach guitar, and they find that they have a certain two or three notes that then trigger this reaction, and their whole site gets demonetized.
It’s particularly bad in music. I know a lot of music podcasters, for example, who just won’t put their work on YouTube because it’s too hard.
Music is, quite frankly, one of the worst areas for fair use.
This is what radicalized me, by the way.
If you try to give music the equivalent of what we have for text — and film is the same — the freedoms that we take for granted in the context of text just don’t exist in the context of music and don’t exist in the same way in the context of film. They could. What’s necessary is for the courts or maybe Congress to tilt it in a direction to try to achieve a kind of common recognition of fair use across these platforms. Instead, there have been conventions set by industries long before the internet that create these expectations.
This is happening right now. YouTube is entering into deals, particularly with Universal Music, where they’re going to invent some private copyright law on the platform to deal with AI. There was an AI artist on YouTube that sounded like Drake. It was a big scandal. Universal got very mad about it. YouTube has basically said, “We’re going to invent some stuff for you so that if there’s AI that sounds like Drake, we’ll let you take it down.” That’s not in any federal law that I can find or any decision that I can find yet. It hasn’t been litigated. It’s not really in any state law outside of likeness. You basically have a private copyright law on one platform for the benefit of the music label. You could argue that that is an appropriate market solution to this problem, but it feels like we should probably actually have a law. Where do you think that lands?
It could be a market solution, assuming we’re not having an antitrust issue involved. I’m not sure I would assume that right now.
With YouTube specifically?
Yeah, with YouTube and some of these labels. I have a lot of sympathy for the artists who are anxious about the fact that their style is taken and used in a particular way. AI is obviously making this easy, trivial. That’s why I said at the beginning I think there might be sui generis ways to compensate for that sort of consequence.
I think what we need is a vigorous debate on both sides of it, and what we saw in the early internet was that most of the loudest, most important forces were coming from the maximalist control perspective. That was a mistake. It was a mistake for artists. I remember 20 years ago, artists were convinced that this campaign of copyright extremism would produce an internet that would be profitable for artists. Well, ask artists how much they get from Spotify today. It’s actually worked against the interest of artists. I think that if we’d had a healthier debate back then, more open, less moralistic, less framed as pirates on one side and believers in property on the other, we could’ve come up with a better solution. I hope that we have… well, I don’t actually hope — I don’t think there’s any hope for this at all, but what we ought to be having is a more healthy debate about that today.
I think that’s all we can hope for. Professor Lessig, thank you so much for being on Decoder. I really appreciated it.