In this special “2x” episode (#32) of our news show 16 Minutes — where we quickly cover the headlines and tech trends, offering analysis, frameworks, explainers, and more — we cover the tricky but important topic of Section 230 of the Communications Decency Act. The 1996 law has been in the headlines a lot recently, in the context of Twitter, the president’s tweets, and an executive order put out by the White House just this week on, quote, “preventing online censorship”. All of this is playing out against the broader, more profound cultural context and events around the death of George Floyd in Minnesota and beyond, and ongoing old-new debates around content moderation on social media.
To make sense of only the technology and policy aspects of Section 230 specifically — and where the First Amendment, content moderation, and more come in — a16z host Sonal Chokshi brings on our first-ever outside guest for 16 Minutes, Mike Masnick, founder of the digital-native policy think tank Copia Institute and editor of the longtime news & analysis site Techdirt.com (which also features an online symposium for experts discussing difficult policy topics). Masnick has written extensively about these topics — not just recently but for years — along with others in media recently attempting to explain what’s going on and dissect what the executive order purports to do (some are even tracking different versions as well).
So what’s hype/ what’s real — given this show’s throughline! — around what CDA 230 precisely does and doesn’t do, the role of agencies like the FCC, and more? What are the nuances and exceptions, and how do we tease apart the most common (yet incorrect) rhetorical arguments such as “platform vs. publisher”, “like a utility/ phone company”, “public forum/square” and so on? Finally: how does and doesn’t Section 230 connect to the First Amendment when it comes to companies vs. governments; what does “good faith” really mean and what are possible paths and ways forward among the divisive debates around content moderation? All this and more in this 2x+ long explainer episode of 16 Minutes.
An explanation of what Section 230 is and what it covers [2:03]
Publishers vs. platforms and a discussion of current events [6:37]
Why platforms are not legally considered public utilities or “public squares” [11:57]
An overview of the Executive Order on Section 230, and the powers of the FCC [18:10]
How the Executive Order restricts federal spending on platforms [27:13]
The difficulty of content moderation and why Section 230 protects all websites [30:23]
Mike: The law is actually very short, very simple, and very straightforward. And I should note that the Communications Decency Act itself did have many more things that it did, but all of that was determined to be unconstitutional. So the only thing that survives is Section 230. There was a big lawsuit, Reno v. ACLU, in the late 90s, and that threw out most of the Communications Decency Act as unconstitutional; the thing that remained was 230.
So Section 230…really does two things, and they’re somewhat related, and they’re both incredibly important to the functioning of the modern internet. The first thing that it does is it puts the liability on the person actually violating the law. So, if someone goes onto a website and says something that is defamatory or otherwise violates the law, the liability for that action belongs to the person who is speaking — and not the platform or site that is hosting that content.
The second thing that it does is that if a website chooses to moderate its content (or anything that is put on the site), then it is not liable for those moderation choices.
Sonal: I’m so glad you’re bringing that up because this is the #1 thing I wanted to start with, which is, the flip side of it — not just the protection, but the fact that they can moderate whatever they want — so can you actually break that down, Mike? What does that mean?
Mike: So, where it came from — which I think is important to give sort of the history very quickly — is that there were a series of lawsuits in the early 90s that tried to hold internet services that had moderated some content liable for what their users posted. These were defamation cases, effectively.
The most famous one is Stratton Oakmont v. Prodigy, and as a little fun aside, Stratton Oakmont was a financial firm that was immortalized in the movie “The Wolf of Wall Street” —
Sonal: That’s a fun fact.
Mike: Yes. And Stratton Oakmont got upset because people on Prodigy’s message boards were accusing the company of being up to no good, and so they sued Prodigy.
A court said that Prodigy was liable for the libelous statements because Prodigy positioned itself as a family-friendly service that would moderate content. Because it moderated some content — [i.e., taking] down cursing or porn (or anything that it felt was inappropriate) — the judge held that Prodigy was liable for anything left up, as if it had written that content itself.
And that freaked out people in Congress, namely…two members of the House: Chris Cox (a Republican) and Ron Wyden (a Democrat). They put together Section 230 to say, wait, that’s crazy. If a website wants to moderate content to create, for example, a family-friendly environment, it shouldn’t get sued for the content that it chose not to take down.
And so that section of CDA 230 is designed to make sure that any website can moderate content how it sees fit, in good faith, to present the content in a way that meets with the goals of the service.
Sonal: Right. And to be clear, these are not just quote “content moderation things.” It could be spammy posts [or] the kind of thing that would actually turn you off from using the service [e.g.] a family-friendly site getting rid of porn. The companies can use whatever discretion they want, as long as it complies with their terms of service, which could change.
But what’s interesting about this back-story: it’s a very small thing that was preserved, but it had huge consequences for where we are today, in terms of the internet we have today. Whether it’s going on a recipe-swap site; whether it’s sharing photos of family and friends; whether it’s posting a car for sale — there are so many layers to this. It has allowed the modern internet to thrive. One of the best lines I heard (I think this was actually in The Verge) is that, in many ways, this Act was a gift not to big companies, but a gift to the internet.
Mike: I think the point is not that it is the biggest gift to big internet companies OR that it’s the biggest gift to the internet — I think it’s really the biggest gift [to] free speech for everybody, right? Because if you don’t have 230 set up the way it is set up, there would be much more limited ability for users to actually post content online.
It’s a little bit crazy to me that people think that changing or getting rid of 230 will enable more free speech, when the balances that are set up within 230 are very much designed as a gift to free speech.
Sonal: Okay, so now my question for you is: given that we did enter this world of user-generated content — whether on sites like YouTube with videos, educational or non-educational, political or non-political — we now live in a world where…people often use the framing of “platform versus publisher” (which I think is kind of meaningless and arbitrary). Sometimes, [they also] use the ridiculous phrase, “platisher,” as a hybrid of the two. I’d love to get your take on that framing and how that does (or doesn’t) apply here?
Mike: So, one of the things that comes up over and over that people say is, “Well if they moderate or if they change content, they are no longer a publisher; they are now a platform, and therefore, they lose Section 230 protections.”
The law makes NO distinction between platform and publisher; the law is not designed to protect one or the other, or say that there is a difference between them — there’s no classification. It’s not a safe harbor where you have to meet, you know, a, b, and c criteria in order to get the protections. You just need to be an “interactive computer service” that hosts third-party content.
So the debate over “Are they a publisher, or are they a platform?” is completely meaningless under the law.
Sonal: Let’s actually talk about some recent events because I think it’s a useful case in sort of understanding 230 — and then we can break down some of the recent news as well around that.
So one recent event is that, earlier this week, [on] one of the President’s tweets, Twitter added a link to other sites as a sort of quote “fact-check” mechanism. This could be contentious because a lot of people do not actually believe that everything the media writes is correct. That said, it linked to third-party news sites, and it was labeled as a fact-check feature.
Then they added another thing where they kept a tweet from the president up — in the context of the Minnesota George Floyd protests — but put a limit on it where people could retweet with comment, but they couldn’t retweet, like, or reply to it, because it violated the site’s terms of service around speech that incites violence.
And so, in case one, they were adding what they quote called a “fact check” layer; in case two, they were adhering to their own terms of service around speech that incites violence, though they said they kept the tweet up in the public interest.
So that’s a super, super high-level summary of what happened so far. My question for you now, Mike, is how [Section 230] does and doesn’t apply here? Because in this case, “the fact check” could be construed as commentary content, not just third-party content.
Mike: It’s a really complex topic. Each layer of it adds new complexities and each of those complexities are in some way important.
Let’s do the two tweets separately. The first tweet, they added something — and just as a minor correction (and this has been going around a lot; people said this is the first time that Twitter had used this) — Twitter has been using that feature over the last couple of months, but this is the first time they have done it on a politician’s tweet.
What’s amusing to me is the time I saw it used…about two weeks earlier. It was used to debunk a Jimmy Kimmel video that was making fun of Mike Pence. Twitter put on a thing that said this is manipulated media, it is not accurate. It was a tweet that had gone viral. It was making fun of something that Pence had done, and Twitter stepped in and said no, this is incorrect or manipulated media. [Twitter then] had a link to third party content saying why it was manipulated.
And so, that is allowed under 230. What it is doing is adding more speech — it is linking to other sources [and] providing more context. The part that is not protected by 230, was never protected by 230, and no change to 230 is going to change that, is any speech that comes directly from Twitter itself. So, in this case, that was the very narrow line that was put under Trump’s tweet. It said something like, “Get more facts about mail-in ballots,” or something to that effect. That particular line is from Twitter itself and therefore is not protected by 230. But, it IS completely protected by the First Amendment. The third-party content that they link to is then protected by 230.
[In] the second tweet, Twitter did something new (which I had not seen before): they put up a note saying that the tweet violated the terms of service; however, they wanted to keep it up because they felt [the tweet] was relevant and important for people to see — but with the understanding that it violated the terms of service. So they added more context and limited the ability for people to retweet or reply to it.
And again, this is 100% allowed by 230. It did not remove the content; it didn’t take it down. Even if Twitter chose to take it down, or take down his speech, or take down that tweet, that wouldn’t violate his free speech rights. The First Amendment protects people from the government acting, not from a company. Now, I have also since seen Twitter use the “This tweet violates our terms of service, but we are leaving it up because it is newsworthy” message on at least one other tweet this morning, from somebody who was defending the president.
Sonal: Let me ask you another question, and then we can break down the executive order…Since we already debunked this platform-publisher distinction, [what do you make of the argument that] these companies that provide these interactive web services are like phone companies? They always use this line: “Oh, but imagine if the phone company decided to take down that conversation you had and interrupted you in the middle?” What do you make of that analogy?
Mike: Yeah so that is popular in a wide variety of circles across the political spectrum and doesn’t fall into any sort of partisan viewpoint — sort of the “public utility” argument.
Sonal: And that by the way, of course reminds me of Net Neutrality, which we both covered quite a bit.
Mike: Right. There are some funny parallels between this situation and Net Neutrality in that a lot of people’s positions are reversed from one to the other.
Sonal: We don’t have the time to talk about Net Neutrality, but I covered it extensively at WIRED, as you know, and from all different perspectives: from carriers, to FCC, to internet companies, you name it. That is exactly what’s fascinating to me, is that the positions and the sides are inverted in this case. So, anyway, what do you make of the phone and common carrier-type argument?
Mike: So it’s an important one to understand, but I don’t think it applies. I think that most people who are deeply familiar with public utilities and what is required to be declared a public utility would recognize that internet services — what’s sometimes called “edge providers” (which are the services that you and I use every day, that we interact with) — do not qualify and do not meet the requirements of a typical traditional utility service. To clarify, what that means [is that] usually a utility service is something that is offered to everybody, but is also commodified. If you use AT&T, or Sprint, or Verizon, you are getting the exact same service. There is no real differentiation in terms of the service that you’re getting — it is a commodity, one provider to the other. Same thing.
That is not the case with various internet edge providers, e.g., Google, YouTube, Twitter, Facebook. Each of them has all of these different features and all these different things. They are not 1:1 replaceable. It is not a commodity that you can switch out; therefore, the public utility argument does not really apply.
You can argue that there should be some other kind of classification (and some people do argue that), but comparing them directly to a telephone service is different because it’s not core infrastructure — [they’re] things that are at the edge, things that you use as a service provided beyond that.
Sonal: What do you then make of this “public square / public forum” argument?
Mike: People say that Twitter or Facebook shouldn’t be allowed to do any moderation or take down any content because it’s the new public square, and that doing so therefore violates users’ rights. They will often point to two different cases in making this argument: one is Pruneyard, the other is Packingham. These two cases…have been brought up in a whole bunch of lawsuits, and I’ll just say that every time they’ve been brought up in a lawsuit to argue that a social media site is the public square, they have failed. I have not seen a single judge anywhere agree that these things make any sense in this context.
But just to give the quick background on the two cases (they go deep, but I’m going to give it as high-level and as quick as I can): Pruneyard was a case about a mall that was trying to kick people out, effectively. It was argued that the mall was a gathering place that had become a sort of de facto public square, and that this took away some of the rights of the private property owner [of the] mall to kick people out. The court said that it was a de facto public square and the owner could not kick people out.
Now it is an extraordinarily limited ruling and extraordinarily focused on the facts of that case. [The mall] was effectively the only place in town that anyone could gather. The mall owner sort of acted as a local government and was therefore replacing government functions — functions that normally were done exclusively by the government. Every other case after that that references Pruneyard has effectively limited it; it only applies in a very, very narrow situation, which is basically Pruneyard and Pruneyard alone. You can’t just say that something is a public square.
The Packingham case is a more recent case. It was a Supreme Court case that kicked out a state law that basically said if criminals had done some sort of criminal activity online, part of their punishment could be that they are barred from using the internet. The Supreme Court said you cannot pass a law that kicks people off of the internet because the internet is so essential to people’s lives and ability to work and all that kind of stuff. So people have taken that to mean [that] the services themselves cannot kick people off, but that is not what the case has said — it just said that the government cannot pass a law that forces people offline.
There is a third case that people never mention but is the most important case. It was just decided last summer, and that is the Manhattan Community Access case (involving the Manhattan Neighborhood Network). I won’t get into the details, but the Supreme Court ruling was written by Brett Kavanaugh, the most recent appointee, and his ruling said that you can’t just declare any place where people can speak — even if a lot of people speak there — a public square. It doesn’t become a state action; it doesn’t take on government control.
The idea that something is a public square or that there is state action involved from a private company only applies in a very limited set of circumstances where that service or operation is, again, replacing activities that were traditionally done by the government. The ruling makes it very clear that Twitter, Facebook, YouTube, and every other website out there does not qualify. They are not replacing government [and] are not offering services that were traditionally given by the government.
Sonal: Right. This basically means that if the sites DO perform services that are exclusively provided by the government (e.g., if the government decided that all tax reimbursement would be done entirely online and no longer through the U.S. Postal Service), they would then have to comply [with those] provisions.
Mike: Right, there could be an example, something that was traditionally and exclusively handled by the government. I could see an argument where someone could not be kicked off or blocked because that would imply state action issues.
Sonal: So now let’s talk about the news — again, as a way to explain what CDA 230 is and isn’t. We’ve explained and debunked some of the myths and framings around arguments of platforms vs publishers and analogies to phone systems. Let’s talk very briefly about the recent executive order that was issued this week…[and discuss] what the executive order can and can’t do here, or what it purports to do and doesn’t do.
Mike: There were drafts of this executive order that made the rounds over the last two years. This is something that the White House has been thinking about. I reported on it, and a number of other news sites reported on it, as different drafts of earlier versions of this executive order were leaked out to the press. And the story is that, in the past, they passed this around to different agencies like the FCC and the FTC. The message that the White House got back was that this was unconstitutional and they couldn’t do any of this, but it seems that they took it out of the drawer, dusted it off, and then put a fresh coat of paint on it.
[The executive order] says a lot of very angry stuff about the internet services and platforms and the way they handle moderation. There are seven different sections, but [there are] two sort of scary-ish parts of the executive order. For one, the order effectively tasks the FCC with coming up with a new interpretation of 230 [and] hints very strongly at what the FCC’s interpretation should be — that interpretation is totally at odds with both what is written in the law and what 20 years of case law have said.
That’s worrisome only to the extent that anyone would ever actually pay attention to that FCC interpretation. The ruling in Reno v. ACLU (which is the lawsuit that rejected most of the Communications Decency Act as unconstitutional) made it extremely clear that the FCC has no authority whatsoever to regulate websites. None, zero, zilch. It’s not even an open question — they cannot.
Sonal: And just to be very clear here, the FCC (Federal Communications Commission) is an independent agency; it has a five-member commission. I believe there’s currently three Republicans, two Democrats.
Because it comes up a lot — what the FCC can and can’t do. It cannot make laws, but it does have the ability to interpret existing laws and put out certain rulemakings. They do these Requests for Comment, which create public records of people’s commentary and whatnot. They also have the power to ask for documents — and they can do distracting things — but they do not have lawmaking authority. So I think it’d be very helpful for you to break down a bit more specifically what they can and can’t get away with.
Mike: So they can do rulemaking, and that is a long, involved process. Interestingly, because [the FCC] is an independent agency, the President cannot instruct them to do something. The executive order instructs the NTIA (which is part of the Commerce Department) to ask the FCC to do this. Technically, the FCC does not need to do this, but the FCC will certainly feel the pressure to probably do something. The FCC could certainly create a lot of nuisance, and yes, there will be comment periods, and people’ll have to testify and put in comments. As we saw with the Net Neutrality proceeding, the comment system was filled up with bots and nonsense, so the commenting and rulemaking process is a bit fraught with distraction.
And so yes, [the FCC] can do rulemaking and can do something to enforce that rulemaking — if the rulemaking covers things that the FCC is authorized by Congress to have regulatory power over —
Sonal: That are in its jurisdiction, so to speak.
Mike: That are in its jurisdiction — and websites are clearly not. Congress has never said that websites are within the FCC’s jurisdiction, and the main court case that tested the theory that they were has said no.
One other thing that I do want to note about the executive order and the request to the FCC is that it is couched in terms that totally misinterpret CDA 230.
Sonal: Which is?
Mike: So earlier, I talked about the two different parts of CDA 230 — one is about liability for third-party content, and one is about the platform’s protection in moderation. There are a few very narrow conditions on that moderation ability. They say it has to be in good faith, and there’s a list of different content that you can moderate, which includes “otherwise objectionable” content.
That “otherwise objectionable” category is very, very broad — it can cover basically whatever the platform thinks is otherwise objectionable. And arguing over “good faith” would open up a whole other First Amendment can of worms.
But what the instructions to the FCC indicate is that those limitations — the good faith, otherwise objectionable stuff — somehow apply to the first part of CDA 230, which is the part about not being responsible for third-party content. That has never been the case. Nobody’s ever suggested it is the case. It has never shown up in any lawsuit. It has never been argued in a legitimate way, and yet the executive order suggests that the FCC should look into whether or not that interpretation makes sense.
Sonal: So you’re basically saying that the first provision of CDA 230 — that sites are not liable for libelous content that their users might put on their sites (or any other content their users might post) — is being conflated in this case with the good-faith aspect of being able to moderate at their discretion in “good faith.”
Mike: Exactly; they’re sort of mixing those two things up. I would argue that is done in bad faith, to make use of the “good faith” limitation on all this.
Sonal: Oh, man. Right. What other aspects of the executive order — again, without going into every little detail, because this is really more about the underlying principles — would you say have an impact on understanding and really interpreting and explaining what CDA 230 is and isn’t?
Mike: So one important part — and this was added at the last minute, perhaps literally, because the draft that was leaked the night before did not have this, but the final executive order did — is that it instructs the Attorney General to draft a law — oddly, not a federal law, but something like a reference state law — to effectively reinterpret CDA 230 in a way that diminishes its power.
And that could be problematic. Here’s an aside that I probably should have brought up earlier: 230 is not a universal immunity. It is not as universal as people make it out to be. One thing that it does not cover is federal criminal liability. So, if you break a federal law — drug trafficking, human trafficking —
Sonal: Child pornography, etc.
Mike: Child pornography, all of that stuff — the sites are still liable; 230 specifically exempts that. So, the Justice Department and the FBI, if they felt that any of these platforms were violating federal law, they have always, always under 230 been able to go after those sites and that includes third-party content. There are a whole bunch of conditions on that.
So if there is drug dealing or human trafficking going on on those sites, [they] potentially could be criminally liable. The Attorney General, and the Justice Department, and the FBI have always had a way to make use of the law to go after these sites. And yet for the last few months, the Attorney General has been attacking 230 and acting as if it limited his power in some way, when it simply does not. But now he can draft a law, and he’s sort of already been doing that.
Sonal: What’s so amazing about what you just said, though, Mike, is that the part about federal law immediately reminded me of the encryption debate — which we actually have discussed on this very show “16 Minutes” (and listeners can listen to our reframing of that debate); another place where policymakers on both sides have very conflicted views.
Mike: Yeah, and there’s already a bill that’s in Congress — that was put together with the help of the Attorney General — and it sort of ties the 230 debate to the encryption debate. And, it’s very convoluted.
Sonal: Oh, this is the EARN IT–
Mike: The EARN IT Act. And what it has the potential to do is to say that if you are offering end-to-end encryption on your service, you no longer get 230 protections (it’s a little more complicated than that); but the ability to do that in a manner that would remain constitutional is a pretty big question.
But again, it could create a huge nuisance. And part of this is also [that the Attorney General is] going to establish a working group, so there will be discussions, roundtables, panels, hearings, subpoenas, and all sorts of things that are going to happen in the meantime that are designed to be an intimidation tactic to try and — the phrase that everyone uses is “work the refs,” right? It means basically, “Hey, Twitter/ Facebook/ YouTube, if you don’t want us to keep causing trouble for you, maybe don’t be mean to us,” you know. Don’t fact check us. Don’t limit our tweets. Don’t limit our content. Don’t put extra notices on it or other limitations on it, because the more you do that, the more of a pain we’re going to be to you.
Sonal: To summarize, at a super high level, the FCC has extremely limited jurisdiction over websites, specifically; the Attorney General does have some ability.
We haven’t talked about what’s not in the executive order, but this is where a bit of the dust storm is very distracting: Congress could choose to rewrite the policy (if it wanted), using this as an incitement to do so.
Mike: Yeah. There are people in both the House and Senate who have said that they will introduce legislation based on this and try to do more than the executive order can do. Whether or not that legislation can actually go anywhere, any such legislation would almost certainly be subject immediately to a First Amendment challenge and would likely fail — but that would be many years into the future.
Sonal: Right. So we forgot one bit of the executive order, which is probably the only legit thing in there, seemingly: part of it carried the threat of limiting any government advertising dollars going to these sites. And by the way, I did a little quick check, and based on federal procurement records (this is according to The Verge), apparently only $200,000 of advertising has been bought on Twitter specifically since 2008 — which sounds a little crazy to me. That can’t be capturing everything; it seems way too low. But even still, it does suggest that government advertising is actually a very tiny piece of the bottom-line revenues of these companies. But I’m curious for your take on that.
Mike: Yeah. So, that is one thing that an executive order actually can do, right, which is instruct certain federal agencies in terms of how they’re spending their money in some form or another —
Sonal: — Oh by the way, to be clear, when you say “their money,” we’re actually still talking about taxpayer money here.
Mike: Yes, yes — mostly taxpayer money. There are a few exceptions, but mostly taxpayer money is what we’re talking about here. What’s funny is the executive order sort of implies that it is telling agencies to stop spending on these websites, but it doesn’t actually say that. It says they have to account for what they’re spending, and they have to submit it to the Office of Management and Budget, and then something maaay happen in the future based on that. And the implication is that they should not be spending.
So, there could be a tiny, tiny, tiny miniscule drop in spending and, what’s silly of course is that, I would bet that the various political campaigns of everyone who is cheering this on are still spending much more money themselves as campaigns on these social media platforms in order to advertise.
Sonal: No question! On all sides.
Mike: So the one concern from a societal perspective is that the few federal agencies that do advertise on social media actually probably have pretty good reason for that, and the one big example is the Census Bureau. And it’s 2020, and we’re in the midst of supposedly collecting the census.
Sonal: I forgot about that, right.
Mike: Because every 10 years, we have to do a census.
And one of the best ways the government has found to get out the word and to get people to actually fill out their census forms is through advertising on social media; therefore, pulling that budget and telling the Census Bureau that it cannot advertise could actually limit the Census Bureau’s ability to collect the data that it is required under the Constitution to collect.
Sonal: So, Mike, this is a wonderful summary so far of what Section 230 of the Communications Decency Act does and doesn’t allow; of the recent news, what’s hype/ what’s real, and sort of really using that to explain these laws that have allowed our modern internet. I will be linking — just in the show notes so people know — to a lot of the articles that did good explainers, a lot of your wonderful pieces in particular, as well as the actual executive order, and the analysis of the differences that Eric Goldman (our mutual friend) put up.
One question I do have for you — this is very much playing out against a broader backdrop of debates around big tech and around content moderation — is, given that the recent example did not necessarily remove or even fully restrict content (except maybe in spread, engagement, and scale), there [have] been a lot of complaints about things like shadow banning. There’s also a lot of conflation between content and behaviors (like what sites can do versus what they say), and for me, it seems like when it comes to this content moderation debate, you’re damned if you do and you’re damned if you don’t.
I’m curious for your thoughts on a) where this fits in that longer/broader scape of that debate; and then b) is there a way forward in your mind?
Mike: So I put a joke on Techdirt a few months ago, and I keep referring to it over and over again. There’s a famous economist, Kenneth Arrow, who had this thing called Arrow’s impossibility theorem. He looked at all the different kinds of voting systems and argued that none of them can accurately reflect the will of the populace. And so I did a play on that, which I called — humbly — the Masnick Impossibility Theorem.
Sonal: You are a very humble guy. We go way back, I think it’s been quite a number of years I’ve known you.
Mike: I don’t even remember how long ago that was, but it was way back.
Sonal: It might be like 15. No, not 15. Maybe 15, almost like 12 years now. I don’t know.
Mike: Could be, yeah.
Sonal: I love that you named it after yourself; I want to hear about the Masnick Impossibility Theorem!
Mike: It is impossible to do content moderation well, and there are a variety of reasons for that.
That is just the reality of the process of moderating content, and nothing is going to fix that. Hiring more human moderators is not going to fix that; building better AI is not going to fix that. You can improve on it but one of the nice things about Section 230 — and the way it is structured in that there is no liability for the moderation — is that it allows for different experimentation to happen.
So you have very different approaches. And, everybody focuses on Twitter, and Facebook, and YouTube — but then you have to take into account tons of other sites, including Wikipedia. Wikipedia is allowed to have all these individuals editing their platform because of 230. Or you look at another site like Reddit, right; Reddit has set up all these different subreddits, and each of them have their own moderators that allow them to set up their own rules. That’s allowed — that is possible — because of Section 230. And any of these changes could make those kinds of things impossible.
Sonal: It’s funny because in the examples you listed, you named sites that are very often used by students, like Wikipedia for research; but also, I just wanna make the point on this that it applies to vaccine sites and anti-vaxxer sites. It applies to all kinds of sites, and that variety is partly the point here as well. I think that’s really important to underscore.
Mike: And let me underscore it even further. CDA 230 protects every website online. People say that, “Oh it’s a gift to big tech and newspapers don’t get this.” No, newspapers get it too for their website; every website gets this, and that means your personal blog. It means when you retweet someone, you get that protection as well.
All of these things, and all of these other sites, and all these other services, and everything that everyone is building — I mean lots of people listening to this are building different internet services — all of those services are protected by 230. And this matters waaaaay beyond just the big three or four companies out there.
Sonal: I am so glad you brought that up, Mike, because the most and really only alarming line in the executive order to me was this quote: “For purposes of this order, the term ‘online platform’ means any website or application that allows users to create and share content or engage in social networking or any general search engine.” And that is quite literally every site.
Mike: That is every site.
Sonal: Every site of every size. And it makes me think of the other law — it’s not Masnick’s Law of Impossibilities — it is the Law of Unintended Consequences.
And this seems true for every regulation — and I think of GDPR and all these other regulations — that all they really did, in fact, was help the bigger companies, the very group they were trying not to [help]. All the smaller players, who don’t have huge compliance arms, legal officers, and the people they can hire to moderate, process queries, and handle takedown requests, get punished — which then further entrenches [the incumbents]. So it’s a vicious loop, essentially.
Mike: And that should be very scary. Because the executive order itself starts out by claiming that the reason they have to do this is that there are a limited number of social media sites out there.
And yet the definition in the setup of what they’re trying to do would effectively limit that even further, by making it impossible for new competition to show up and for smaller sites to exist. And the more you put in place these kinds of rules and regulations, the more difficult you make it for there to be any new startups in this space, any new websites — because it becomes a costly mess for any smaller website to comply.
Sonal: Right. And while I completely agree with you that people alone or technology alone are not the answer, one thing I do want to point out about the “way forward” part of it is that this debate conflates the question of WHO decides with the size of the company that decides.
So, for instance, instead of having a single CEO decide, “This is my vision for this big company,” crypto is an often-cited case as a way forward for thinking about the governance of some of these sites in a crypto-native, decentralized way — my partner Chris Dixon wrote an op-ed in WIRED about this a couple of years ago — so that it’s “a community owned and operated service” (which is his way of thinking about it). You and I have talked about crypto many, many times over the course of our friendship and years (and at the inaugural Copia Institute event, I think you had a whole section on crypto, if I remember); I’m curious for your thoughts on that as well.
Mike: Yeah so last year, I wrote a paper for the Knight First Amendment [Institute] at Columbia University, which is called “Protocols, Not Platforms.”
Sonal: Ahh, I remember this.
Mike: Oh yeah, the horcruxes.
Sonal: I teased you about it where I was like, “Mike, hallows, not horcruxes, Mike!” And I myself do not love when people use Harry Potter analogies, but my god, that was so perfect for that. I’m sorry. It’s very much “Hallows, Not Horcruxes” — which is great — “Protocols, Not Platforms.”
Mike: Yeah, you know, that paper discusses what the content-moderation world looks like in a distributed, decentralized system — potentially based on crypto. The paper touches on not just crypto, but more decentralized, interoperable, protocol-based systems.
And that changes a number of the content moderation questions. It doesn’t make them go away — and I do think that is one mistake that some people make: they think if we just set it up on a crypto-based distributed system, then we can just wash our hands of it, and it’s everybody’s individual decision, however it’s implemented; let that happen.
Sonal: It also doesn’t leave room for the variety of governance approaches that are inevitable in that as well. Because for the record, just as you’re arguing for a variety of experiments — whether it’s a privately owned or publicly owned company, centralized or decentralized, whichever — even in the crypto world, there’s a variety of governance approaches that can be applied, which is great. And there have been a lot of experiments already playing out on that front when it comes to protocols.
Mike: And I think that’s good! It is that experimentation that we need.
And that experimentation is not designed just to like find the best result, but to recognize that there are different best results for different communities, and different purposes, and different services. There are certain cases where you want a Wikipedia approach; and there are certain cases where you want a Reddit approach; and there are certain cases where you want a Twitter approach; and whatever other approaches there are as well.
You can have all these different things, and some of them work [only] in some cases. The only way we’re allowed to figure that out is if we have the freedom to make those choices and see what happens.
Sonal: That’s a wonderful note to end on.
So, in this show, we ask our guests (our experts) to bottom-line it for me. And while this has been longer than 16 minutes — it’s a special long episode — bottom-line it for me, Mike. What’s the big takeaway?
Mike: So, the rules of how the internet works are under attack. This executive order by itself is not going to effectively change anything directly: It’s going to cause a lot of heat and light, but very little actual fire.
What we are seeing — and this goes beyond just this executive order — is that people are really trying to change the way moderation works online. We’ve already seen some laws — both in the U.S., and certainly outside the U.S., there have been a bunch of laws directed at content moderation — and that is going to continue. I worry very strongly about what that does, whether it locks everyone into a specific type of content moderation, and what that means over the long term for freedom of speech on the internet.
Sonal: Thank you so much for joining this segment, Mike.
Mike: Thank you, for having me.
16 Minutes is a short news podcast covering the top headlines of the week, separating what’s real from what’s hype.