Katherine Fruge Corry
During a joint session of Congress convened on January 6, 2021, to count the electoral votes and confirm the electoral victory of President-elect Joseph Biden, a radical faction of Trump supporters stormed the Capitol building in an unsuccessful attempt to thwart the democratic process. Tragically, several lives were lost in connection with these events. Despite the attack, democracy prevailed: order was restored to the Capitol, and Joseph Biden was formally declared the victor of the 2020 presidential election, in accordance with the will of American voters. Two days later, Twitter, Facebook, Snapchat, and Instagram, among others, permanently or indefinitely suspended then-President Donald Trump’s social media accounts. In the aftermath of these suspensions, media consumers throughout the world could hear the resounding silence. To many, the silence ushered in relief and an end to the stream of election misinformation emanating from Donald Trump’s social media accounts. To others, these actions set an Orwellian precedent for broader censorship of conservatives by Big Tech companies.
Twitter’s rationale for permanently suspending Donald Trump from its site was that he violated its Glorification of Violence policy. Twitter cited the following two tweets from Donald Trump’s account as violations of that policy:
The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!
To all of those who have asked, I will not be going to the Inauguration on January 20th.
Specifically, Twitter’s public statement laid out five points explaining why these tweets, in the context of the aftermath of the storming of Capitol Hill, violated its Glorification of Violence policy. The suspension essentially came as a result of Twitter’s determination that Trump’s tweets were likely to encourage and inspire people to replicate the criminal acts that occurred on Capitol Hill on January 6, 2021. The justification for this determination rests wholly on Twitter’s subjective interpretation of Donald Trump’s tweets, and the inferences that Twitter feared its users might draw from them.
In contrast to the widespread and swift censorship of Donald Trump by social media companies following the Capitol Riots, other world leaders who have made controversial statements on social media have not been so sharply monitored and censored. For instance, Ayatollah Ali Khamenei, Supreme Leader of Iran, has not been permanently suspended from Twitter despite tweeting inflammatory rhetoric that incites violence. Several blatantly problematic tweets from Khamenei’s account that remain unflagged and uncensored to this day include the following:
Our stance against Israel is the same stance we have always taken. #Israel is a malignant cancerous tumor in the West Asian region that has to be removed and eradicated: it is possible and will happen. 7/31/91.
Those who ordered the murder of General Soleimani as well as those who carried this out should be punished. This revenge will certainly happen at the right time.
Millions attending Martyrs Soleimani & Abu Mahdi’s funerals in Iraq & Iran was the 1st severe slap to the US. But the worse one is overcoming the hegemony of Arrogance & expelling the US from the region. Of course, revenge will be taken on those who ordered it & the murderers.
The next question to ask is: why is it a crime to raise doubts about the Holocaust?
At a hearing on anti-Semitism in front of the Knesset during the summer of 2020, Israeli lawmakers asked a Twitter spokeswoman why—in light of Twitter’s heavy policing of Donald Trump’s accounts—Ali Khamenei’s account had not been suspended from Twitter and why certain tweets from his account had not been censored. Defending Twitter’s choices, the spokeswoman cited a Twitter policy whereby tweets from world leaders generally are not in violation of Twitter’s rules when they interact with fellow public figures, comment on current affairs, remark on economic and military issues, or make strident statements of foreign policy.
Comparing Twitter’s treatment of Donald Trump’s account to its treatment of Ali Khamenei’s account demonstrates that social media companies often fail to apply their content moderation standards in a consistent and fully transparent manner. Given the prevalence of social media use in modern culture, social media companies inevitably play a substantial role in facilitating communication. Moreover, a small number of powerful companies dominate the entire social media sphere—practically, users have few alternatives outside of Twitter, Facebook, Instagram, and Snapchat. Given the amount of power that social media companies wield over communication, and the lack of alternatives available to users, serious problems emerge when social media companies enforce their content moderation standards inconsistently or ambiguously.
Recognizing these problems, another world leader, Mexican President Andrés Manuel López Obrador, posed the following question in response to the social media ban of Donald Trump: “How can a company act as if it was all powerful, omnipotent, as a sort of Spanish Inquisition on what is expressed?” The answer is that under 47 U.S.C. § 230, Twitter, Facebook, Instagram, and all other U.S.-based online services that publish third-party content are given the right to curate their websites according to their own standards without facing liability. These companies are acting lawfully and within their rights under § 230 in banning Donald Trump. Furthermore, § 230 shields these companies from the potential for liability that was traditionally attendant to publishing and distributing third-party content, such as defamation liability. Simply put, Big Tech companies—and all other online service providers—can censor and de-platform whomever and whatever they want, and they cannot be sued for doing so, even when they act like traditional publishers.
Section 230 was enacted in 1996, at a time when no one could imagine that 25 years later, some 4.66 billion people worldwide would become active internet users. Additionally, § 230 was enacted before anyone could predict that Big Tech and social media platforms would serve as the de facto gatekeepers of the information technology industry and become dominant, influential forces in American culture, as well.
Section 230 is the internet’s First Amendment, enshrined not in the Bill of Rights, but rather, in a congressional statutory enactment. Without § 230, internet service companies would be hamstrung by traditional standards of publisher and distributor liability, significantly stifling the growth of the internet. Hidden in the Communications Decency Act, § 230 promotes the development of the internet and other web-based technologies while protecting free expression and open, diverse discourse on the internet. Each of the 10 most trafficked websites in the United States in 2020 relies heavily on § 230 in order to exist and thrive in its current form. Thus, it is clear that nearly every American with internet access is deeply impacted by § 230. No other country has protections similar to those contained in § 230. Because it provides internet-based companies the opportunity to exploit the internet largely unfettered, § 230 gives the United States a global competitive advantage, helping to solidify the dominance of the American economy.
Though the internet was intended to function as a forum for open political debates, partisans have drawn the very infrastructure of the internet technology industry into their political controversies. Politicians have made § 230 a central focus of these conflicts. Recently, § 230 has engendered significant bipartisan disdain. Democrats complain that, because of § 230, online service companies do not fight harder to censor hate speech and false information online. Republicans complain that § 230 allows Big Tech and other internet service companies to censor conservatives with impunity. Both parties have § 230 in their crosshairs. President Biden has stated, “Section 230 should be revoked, immediately should be revoked, number one. For Zuckerberg and other platforms.” Former President Donald Trump tweeted, on May 29, 2020, “REVOKE 230!” Are they correct—should Congress repeal § 230?
In this blog post, I will first explore the traditional liability of publishers for third-party content. Next, I will explain what prompted Congress to enact § 230, and how the policy goals behind § 230 were achieved. Finally, I will discuss recent suggestions with regards to the future of § 230.
I. Traditional Liability of Publishers and Distributors
Section 230 was Congress’s legislative response to two court cases out of New York. Cubby, Inc. v. CompuServe, Inc. and Stratton Oakmont, Inc. v. Prodigy Services Co. were the first cases to address what standard of liability should apply to an online service provider for defamatory third-party content. To understand why § 230 was enacted, it is useful to briefly review the traditional framework of liability for third-party content, and the distinction between publisher liability and distributor liability.
Under traditional common law principles, a publisher of a third party’s defamatory statements shares the same liability for such statements as the speaker. In contrast, liability for distribution of defamatory materials exists under common law principles “if, but only if, [the distributor] knows or has reason to know of its defamatory character.” Distributors, as opposed to publishers, are considered passive conduits of materials they distribute or deliver because they lack the editorial control of a publisher. Publishers exercise editorial control—which includes deciding what material to publish, making determinations as to the content of the material published, and choosing when to publish or withdraw materials. And with editorial control comes increased liability. Moreover, imposition of a higher standard of liability on distributors would result in severely restricting public access to reading materials, because the distributor would self-censor and restrict distribution to those materials it can personally verify and pre-screen.
Cubby, Inc. v. CompuServe, Inc. was the first case to address the appropriate standard of liability for third-party content in the online context. The defendant, CompuServe Inc., provided a general online information service which allowed its subscribers to access thousands of information sources, including special-interest forums and electronic bulletin boards. A columnist for one of the special-interest forums posted defamatory comments about a competitor, and the competitor sued CompuServe for libel, alleging that CompuServe should be responsible for the defamatory remarks since it hosted the statements on its forums. CompuServe moved for summary judgment, asserting that it was a distributor of the statement, and that as a distributor, it could not be held liable because it neither knew nor had reason to know of the allegedly defamatory statements. The court agreed with CompuServe, holding that the appropriate standard of liability to be applied to CompuServe was distributor liability—namely, whether CompuServe knew or had reason to know of the alleged defamatory statements. In applying the standard, the court noted that CompuServe had no ability to pre-screen publications before they were posted to the forums, because once a publication was submitted, it was uploaded instantaneously. CompuServe held little to no editorial control over the materials posted to its online forums, just as a brick-and-mortar library holds no editorial control over the print materials it circulates. CompuServe could not pre-screen and examine every publication on its forums, and the First Amendment prevents a rule that would require a distributor to examine every piece of material it distributes. Ultimately, the court found it undisputed that CompuServe had neither knowledge nor reason to know of the alleged defamatory statements. Therefore, the court granted CompuServe’s motion for summary judgment, dismissing the libel claims against it.
The second case to take up the question of the appropriate standard of liability for online third-party content is Stratton Oakmont, Inc. v. Prodigy Services Co. Prodigy was an online services company that hosted electronic bulletin boards. An unidentified third party posted allegedly defamatory statements about Stratton Oakmont, Inc. on Prodigy’s bulletin boards, and, in response, Stratton Oakmont sued Prodigy for libel. Prodigy held itself out to the public as a family-friendly computer service that exercised editorial control over the content of its bulletin boards. Prodigy engaged in curation by pre-screening posts for compliance with its content guidelines. The court found that this content moderation was an exercise of editorial control over the content posted to its bulletin boards. Thus, the court held that Prodigy was a publisher of the third-party content for purposes of the libel claim. The court distinguished Cubby on the grounds that Prodigy, unlike CompuServe, chose to exercise editorial control over the content on its forums, which opened Prodigy up to publisher liability.
Cubby held that internet service providers were entitled to be treated as distributors, not publishers. Stratton Oakmont held that only those internet service providers that exercise no editorial control over publicly posted materials would receive distributor treatment, while service providers that exercised some editorial control—by moderating and curating content—would be treated as publishers. The result was a “moderator’s dilemma,” whereby online services that host third-party content would be forced to choose between two strategies: exercise full editorial control by screening and reviewing all third-party content, and accept liability for whatever legally problematic content slips through the cracks; or exercise no editorial control over user content, minimizing potential liability while leaving their services open to all manner of problematic third-party content. On the one hand, not all internet service businesses can implement full editorial control over the third-party content posted to their servers because of financial or logistical constraints. On the other hand, services that cannot implement full control may be unable to leave their sites open to all forms of expression without compromising their values, forcing small-scale religious and child-friendly sites offline entirely. Trapped in the vise created by the moderator’s dilemma, many online services would likely have abandoned all efforts to moderate user content or would have shut down completely. However, the negative implications of Cubby and Stratton Oakmont were never realized. Five weeks after the Stratton Oakmont decision, Congressmen Chris Cox and Ron Wyden introduced H.R. 1978, the Internet Freedom and Family Empowerment Act, which would later become § 230 of the Communications Decency Act.
II. The Act
Congressmen Wyden and Cox were concerned that the holding in Stratton Oakmont would spread to courtrooms across the country and stifle the growth and potential of the burgeoning internet industry. They further recognized that Stratton Oakmont incentivized internet service providers to exercise no moderation or regulation over third-party content. Based on these concerns, Wyden and Cox set out to craft legislation that achieved two ends: first, immunizing online service providers from liability for third-party content, in order to promote the growth of online services, and, second, permitting online service providers to self-regulate and set their own content moderation policies, without being held responsible for the third-party content posted on their sites. Their proposed legislation was codified in § 230 of the Communications Decency Act. Subsection (c) of § 230 specifically carries into effect the congressmen’s two stated goals by providing:
(c)(1): No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(c)(2): No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In order to encourage the growth of internet service providers, subsection (c)(1) grants online services sweeping immunity from liability for claims arising from third-party content posted on their platforms. Courts have interpreted subsection (c)(1) expansively, finding that it shields online services from liability even when the provider knew or should have known of the defamatory or illegal third-party content. This expansive interpretation has rendered the traditional common law distinction between publishers and distributors obsolete in the online context. Hence, lawsuits seeking to place online services in the position of a publisher or distributor of third-party content are categorically barred.
Subsection (c)(2) (the “Good Samaritan” provision) achieves Cox and Wyden’s second goal. In order “to remove disincentives for the development and utilization of blocking and filtering technologies,” § 230(c)(2) shields online services from liability as a publisher when they exercise editorial control by blocking and screening offensive or otherwise objectionable material.
Section 230 eliminates the moderator’s dilemma. Online services, under § 230, have the discretion to engage in a wide range of content moderation methods, while at the same time maintaining immunity from liability for defamatory or illegal content that slips through the cracks. Hence, even when online services exercise editorial control by moderating and curating third-party content, they are still shielded from liability, and “[t]his includes promoting a political, moral, or social viewpoint.”
Section 230 has been described as a flagship example of “internet exceptionalism”—treating the internet more favorably than traditional media because of the novelty, uniqueness, and superiority of the internet. Even as early as 1995, Congress recognized that the internet offered a forum for open discourse, innovation, economic and intellectual opportunity, and diversity of opinion. Given the millions of users of online services in 1995, Congress was aware that it would be impossible for online service providers to screen every individual third-party post for potential problems. Today, with the number of internet users up to 4.66 billion, the feat would be even more impracticable. If these online service providers faced potential liability for the third-party content posted on their services, practical considerations would severely constrain the amount and variety of third-party content they could host. Congress understood that the threat of tort lawsuits would chill free speech on the internet and would amount to “another form of intrusive government regulation of speech.” In light of the clear implications on open discourse on the internet, Congress chose to immunize internet service providers from liability for third-party content.
Furthermore, § 230 aims to keep government regulation of the internet at a minimum. Hence, § 230(a)(4) states, “The Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation.” Moreover, § 230(b)(2) explicitly sets forth that it is the policy of the United States “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” Thus, under § 230, internet service providers have the discretion to regulate their own platforms, according to their own standards, driven not by government compulsion or the threat of civil or criminal liability, but by market forces, personal values, and the court of public opinion.
III. Internet Dumpster Fires and a New Form of “Self-Help”—Embracing Internet Exceptionalism Means Taking the Bad with the Good
Without § 230, the internet as we know it could not exist. Internet services such as Twitter, Facebook, YouTube, and Yelp require some liability protection from third-party postings in order to continue to provide the crowdsourced content that their users consume every day. At the same time, however, § 230 is also partially to blame for the proliferation of evils on the internet, because it immunizes sites when they host revenge porn; false, defamatory, and reputation-destroying reviews; gruesome videos; and calls to violence, as well as content that harasses women and racial minorities, which can have the practical effect of suppressing their voices.
There are many individuals whose lives have been devastated by online defamation, revenge porn, and harassment. Even worse, § 230 tells these victims they have no recourse against the online services that provided a platform for such content. Cecilia Barnes learned this lesson in an extremely unpleasant way. When Barnes broke up with her boyfriend, he became angry and sought revenge by creating fake profiles in Barnes’s name on a website run by Yahoo!, Inc. The boyfriend impersonated Barnes in Yahoo! chat rooms, striking up conversations with men and directing them to the fake profiles. These profiles were filled with sexual overtures and nude photographs of Barnes, as well as her contact information and home address. Men began contacting Barnes and showing up at her home with expectations of sexual intercourse. Barnes complained directly to Yahoo!, but Yahoo! did nothing to remove the fake profiles. Barnes eventually filed suit; however, the U.S. Court of Appeals for the Ninth Circuit dismissed Barnes’s negligence claim, holding that Yahoo! was immunized under § 230.
The experiences of victims like Cecilia Barnes cast doubt on Congress’s assertion that, under its laissez-faire approach to online speech, “[t]he Internet and other interactive computer services have flourished, to the benefit of all Americans.” Section 230 allows “unfettered speech,” ranging from the valuable to the vile. Without § 230, however, the power over communication would revert to traditional media. In comparison to the broader U.S. population, the rich and powerful have disproportionate access to the channels of traditional media. It is important to recognize, however, that the internet offers more than simply a digitized version of traditional print media; rather, the internet allows two-way, instantaneous communication on a nationwide, even global, scale. Thus, the internet plays a part in giving victims of harassment and members of marginalized groups an avenue for “self-help” and a voice. Videos posted and shared on social media capturing incidents of police brutality towards black people have helped bring public attention and awareness to the issue of racism in policing. These widely shared videos can serve as eyewitness documentation of real-time events, which can be used to verify the accuracy of official police incident reports. The video depicting an officer kneeling on the neck of George Floyd was circulated throughout social media, sparking a national outcry against police violence and racism. Without the protections of § 230, the threat of defamation liability would likely lead online service companies to refuse to host controversial content addressing incidents of police brutality. In that way, § 230 helps bring awareness to injustice, and awareness is an important step towards accountability and improvement.
IV. “Everybody Hates § 230”: Redirecting Misplaced Ire from § 230 to Big Tech Oligopolies
As explained above, § 230 has produced significant benefits, but those benefits have not come without a cost. Recently, national debate has focused on whether the balance struck by § 230 should be recalibrated to deal with the exponential growth of the world wide web in the past quarter century. Since § 230 is a congressional statute and not a constitutional amendment, it can be repealed or amended by Congress at any time.
In the wake of the assault on Capitol Hill and the “Trump-ban,” Republicans increasingly claim that internet service companies over-moderate and target conservatives, arguing that the Capitol breach is being used as a pretext for broader censorship of conservative viewpoints online. Democrats, on the other hand, increasingly claim that social media sites are responsible for under-moderating election misinformation, specifically in allowing their sites to be used in planning the Capitol breach and in proliferating the type of content that motivated the breach.
One proposal offered by members of both political parties is revoking § 230. If § 230 were revoked, common law standards set by the courts would govern the liability of online services for illegal, defamatory, or otherwise problematic third-party content. Given that the only two cases to address the issue under the common law standard are Cubby and Stratton Oakmont, online services would be left in limbo until the courts formulate a coherent common law standard.
Repealing § 230 would not result in “neutrality” because the First Amendment already gives these interactive internet service companies the right to choose what content they will or will not display on their platforms. What § 230 does is shield these companies from being held liable for third-party content on their sites when they engage in moderation, thereby encouraging self-moderation. Moreover, under the First Amendment, interactive internet services, like traditional publishers and distributors, do not face liability for speech that is protected by the First Amendment. What § 230 does is provide protection against liability for third-party content that is not protected by the First Amendment, such as defamatory or illegal content. Without § 230, only First Amendment protections would exist in the online context, and it is a harsh reality that only the most profitable Big Tech companies could continue to thrive in their current forms.
A handful of Big Tech companies and social media giants control much of what is said and seen online, but they do not always apply their moderation policies consistently, and enforcement decisions are often less than transparent. Being censored or fact checked on Facebook, Instagram, or Twitter is so consequential because there are no comparable alternatives to turn to. If the internet is to continue to function as a “forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity,” as the drafters of § 230 intended, a small number of companies cannot wield untempered, outsized control over online speech. Competition among online interactive services helps combat both censorship and concentrations of power in the online service industry. Without § 230, smaller and newer online services—including those with less-restrictive moderation policies—hardly stand a fighting chance against their more powerful competitors. And all internet users would be worse off without § 230. After all, “a variety of sites with a plethora of moderation practices keeps the online ecosystem workable for everyone. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.”
The problems inherent in Big Tech’s control over online speech cannot be addressed by forcing companies to host or remove particular content, or by requiring them to be politically neutral in their moderation decisions. The First Amendment, not § 230, constrains the government from requiring private companies to host certain content on, or remove protected content from, their platforms. Government should instead consider an antitrust approach to Big Tech, in order to help increase competition among online interactive services. Congress should also address the data capture and data usage models that drive Big Tech’s dominance. And by increasing transparency, notice, and opportunities for appeal, internet service companies can improve their current self-moderation efforts.
 Certification of Electoral Votes (January 6-7, 2021), Ballotpedia, https://ballotpedia.org/Certification_of_electoral_votes_(January_6-7,_2021) [https://perma.cc/4P4H-P8V6] (last visited Jan. 27, 2021).
 Sara Fischer & Ashley Gold, All the Platforms That Have Banned or Restricted Trump So Far, Axios (Jan. 11, 2021), https://www.axios.com/platforms-social-media-ban-restrict-trump-d9e44f3c-8366-4ba9-a8a1-7f3114f920f1.html [https://perma.cc/8YA5-BQHD].
 Big Tech and Censorship, Economist (Jan. 16, 2021), https://www.economist.com/leaders/2021/01/16/big-tech-and-censorship [https://perma.cc/K34K-4TQS].
 Donald Trump (@RealDonaldTrump), Twitter (Jan. 8, 2021). More information cannot be provided in regard to this tweet because Donald Trump’s Twitter account has been suspended indefinitely.
 Donald Trump (@RealDonaldTrump), Twitter (Jan. 8, 2021).
 Twitter, Inc., Permanent Suspension of @realDonaldTrump, Twitter Blog, (Jan. 8, 2021), https://blog.twitter.com/en_us/topics/company/2020/suspension.html [https://perma.cc/U4W8-TB4P].
 Id. Twitter specifically stated that its determination that Trump’s two above-cited tweets were likely to inspire further violence was based on the following five factors: (1) Trump’s statement that he would not be attending the inauguration was being received by Trump supporters as further confirmation of the illegitimacy of the election and as Trump disavowing his previous claim that there would be an orderly transition of power; (2) the second tweet may also serve as encouragement to those potentially considering violent acts that the inauguration would be a “safe” target, because Trump will not be at the inauguration; (3) the use of the words “American Patriots” to describe his supporters is also being interpreted as support for those committing violent acts at Capitol Hill on January 6, 2021; (4) that the use of the words “giant voice” to describe his supporters and that those supporters “will not be disrespected” is being interpreted as further indication that Trump does not plan to facilitate an orderly transition, but rather that he plans to continue to empower and support those who believe he won the election; and (5) that future plans for armed protests had already begun proliferating, both on and off Twitter. Id.
 Ayatollah Ali Khamenei (@Khamenei.ir), Twitter (June 3, 2018, 12:49 PM), https://twitter.com/khamenei_ir/status/1003332853525110784 [https://perma.cc/VT2L-DMEK].
 Ayatollah Ali Khamenei (@Khamenei.ir), Twitter (Dec. 16, 2020, 4:48 AM), https://twitter.com/khamenei_ir/status/1339160462932533249 [https://perma.cc/2EMV-39KE].
 Ayatollah Ali Khamenei (@Khamenei.ir), Twitter (Dec. 16, 2020, 5:03 AM), https://twitter.com/khamenei_ir/status/1339164261982097409 [https://perma.cc/N8NS-4JUS].
 Ayatollah Ali Khamenei (@Khamenei.ir), Twitter (Oct. 28, 2020, 11:48 AM), https://twitter.com/khamenei_ir/status/1321494146989907969 [https://perma.cc/EW38-DERD].
 Ebony Bowden, Twitter Defends Blocking Trump Tweets but Not Iran’s Ayatollah Khamenei, N.Y. Post (July 29, 2020), https://nypost.com/2020/07/29/twitter-defends-blocking-trump-tweets-but-not-irans-ayatollah-khamenei/ [https://perma.cc/48U5-D39M].
 Mark Stevenson, Mexican President Mounts Campaign against Social Media Bans, Associated Press (Jan. 14, 2021), https://apnews.com/article/donald-trump-marcelo-ebrard-mexico-media-social-media-a5303f532810447575ccf2af6692a2d4 [https://perma.cc/W6XX-GBDT].
 47 U.S.C. § 230 (2018); Corynne McSherry, EFF’s Response to Social Media Companies’ Decisions to Block President Trump’s Accounts, Electronic Frontier Foundation (Jan. 7, 2021), https://www.eff.org/deeplinks/2021/01/eff-response-social-media-companies-decision-block-president-trumps-accounts [https://perma.cc/XS29-F4L3].
 McSherry, supra note 18.
 47 U.S.C. § 230(c)(1)–(2).
 Joseph Johnson, Worldwide Digital Population as of October 2020, Statista (Jan. 27, 2021), https://www.statista.com/statistics/617136/digital-population-worldwide/#statisticContainer [https://perma.cc/2Y9F-YC54].
 See 47 U.S.C. § 230; see also Jeff Kosseff, The Twenty-Six Words That Created the Internet 253 (2019).
 47 U.S.C. § 230(a)–(b).
 Top 100: The Most Visited Websites in the US, Semrush, https://www.semrush.com/blog/most-visited-websites/ [https://perma.cc/HGG8-SEFX] (last visited Jan. 27, 2021). The top 10 most trafficked websites in 2020 were Google, YouTube, Facebook, Amazon, Wikipedia, Yahoo, Reddit, Pornhub, Twitter, and Instagram.
 Eric Goldman, Internet Law: Cases & Materials 330 (July 14, 2017 ed.).
 See David Post, A Bit of Internet History, or How Two Members of Congress Helped Create a Trillion or So Dollars of Value, Washington Post (Aug. 27, 2015), https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/08/27/a-bit-of-internet-history-or-how-two-members-of-congress-helped-create-a-trillion-or-so-dollars-of-value/?utm_term=.87428a710ed7 [https://perma.cc/RYK9-KBN4]; see also Latest Developments in Combating Online Sex Trafficking: Hearing on H.R. 1865 Before the S. Comm. on Commc’n and Tech., H. Comm. on Energy and Commerce, 115th Cong. 47 (2017) (written remarks of Professor Eric Goldman).
 Big Tech and Censorship, supra note 5.
 Anshu Siripurapu, Trump and Section 230: What To Know, Council on Foreign Relations (Dec. 2, 2020, 7:00 AM), https://www.cfr.org/in-brief/trump-and-section-230-what-know [https://perma.cc/9X5Q-P7FC].
 New York Times Opinion Editorial Board, Joe Biden: Former Vice President of the United States, N.Y. Times (Jan. 17, 2020), https://www.nytimes.com/interactive/2020/01/17/opinion/joe-biden-nytimes-interview.html [https://perma.cc/ZSC7-D4EM].
 Donald Trump (@RealDonaldTrump), Twitter (May 29, 2020).
 CDA 230: Legislative History, Electronic Frontier Foundation, https://www.eff.org/issues/cda230/legislative-history [https://perma.cc/77RM-HWNQ] (last visited Jan. 27, 2021).
 Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).
 Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *1 (N.Y. Sup. Ct. May 24, 1995).
 In the context of defamation law, distributors are considered to be a subset within the larger publisher category; hence, distributor liability is a species of publisher liability. Distributors are held to a different standard of liability than pure publishers—distributors must have knowledge or a reason to know of the defamatory content. See Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997).
 Restatement (Second) of Torts § 578 (Am. L. Inst. 1977). Except as provided in § 581, “one who repeats or otherwise republishes defamatory matter is subject to liability as if he had originally published it.” Id. Publication of defamatory matter occurs when defamatory words are intentionally or negligently communicated to someone other than the person being defamed. Id. § 577. Republication occurs when a person repeats defamatory words he heard or read, or when a person prints or reprints defamatory words which were previously published either verbally or in writing. Id. § 578. Each time defamatory material is communicated by a new person, a new publication or republication has occurred, and liability can attach to each incident. Id. § 578 cmt. b.
 Id. § 581.
 Stratton Oakmont, 1995 WL 323710, at *3.
 Smith v. California, 361 U.S. 147, 152–53 (1959).
There is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller . . . . For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature . . . . And the bookseller’s burden would become the public’s burden, for by restricting [the bookseller] the public’s access to reading matter would be restricted.
 Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 137 (S.D.N.Y. 1991).
 Id. at 138.
 Id. at 141.
 Id. at 137.
 Id. at 140.
 Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710, at *1 (N.Y. Sup. Ct. May 24, 1995).
 Id. at *2.
 Id. at *2–3.
 Id. at *4.
 Id. at *5.
 Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135, 140–41 (S.D.N.Y. 1991).
 Stratton Oakmont, 1995 WL 323710, at *4–5.
 Latest Developments in Combating Online Sex Trafficking, supra note 27.
 Kosseff, supra note 23, at 81.
 Id. at 75.
 Id. at 81.
 47 U.S.C. § 230 (2018).
 Id. § 230(c).
 Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997).
 Id. at 332.
 Id. at 330.
 See 47 U.S.C. § 230(c)(2).
 Id. § 230(b)(4).
 Zeran, 129 F.3d at 330.
 Latest Developments in Combating Online Sex Trafficking, supra note 27.
 John Bergmayer, What Section 230 Is and Does – Yet Another Explanation of One of the Internet’s Most Important Laws, Public Knowledge (May 14, 2019), https://www.publicknowledge.org/blog/what-section-230-is-and-does-yet-another-explanation-of-one-of-the-internets-most-important-laws/ [https://perma.cc/5ZPE-M8K5] (emphasis added).
 See Eric Goldman, The Third Wave of Internet Exceptionalism, Tech. & Marketing Law Blog (Mar. 11, 2009), https://blog.ericgoldman.org/archives/2009/03/the_third_wave.htm [https://perma.cc/5MPR-D2DE]. “Internet exceptionalism” rests on the view that the internet is a novel, unique, and inherently special medium of communication. It eschews regulations that treat the internet the same way as traditional print media, holding instead that the internet should be subject only to laws specifically tailored to it. See id.
 47 U.S.C. § 230(a).
 Zeran v. Am. Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997).
 Johnson, supra note 21.
 Zeran, 129 F.3d at 331.
 Id. at 330.
 Id. at 330–32.
 47 U.S.C. § 230(a)(4) (2018).
 Id. § 230(b)(2).
 Stewart Baker, Reforming Section 230 of the Communications Decency Act, Reason (June 19, 2020, 5:29 PM), https://reason.com/volokh/2020/06/19/reforming-section-230-of-the-communications-decency-act/ [https://perma.cc/8WWG-LE27].
 Kosseff, supra note 23, at 265.
 See 47 U.S.C. § 230(c).
 Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1098 (9th Cir. 2009).
 Id. at 1106. Barnes’s complaint also included a breach of contract claim grounded on a theory of promissory estoppel. Id. at 1109. Since the promissory estoppel claim was not at issue on appeal, the court did not resolve it. Id.
 47 U.S.C. § 230(a)(4) (2018).
 Kosseff, supra note 23, at 253.
 Id. at 266.
 Id. at 268.
 Id. at 268–70.
 Audra D.S. Burch & John Eligon, Bystander Videos of George Floyd and Others Are Policing the Police, N.Y. Times (May 26, 2020), https://www.nytimes.com/2020/05/26/us/george-floyd-minneapolis-police.html [https://perma.cc/HD8H-F774].
 Senator Ron Wyden, I Wrote This Law to Protect Free Speech. Now Trump Wants to Revoke It, CNN Business (June 9, 2020, 10:31 AM), https://www.cnn.com/2020/06/09/perspectives/ron-wyden-section-230/index.html [https://perma.cc/E9PV-XQBH].
 Brooke Conrad, Section 230: Both Sides Say It Needs Work, but They Don’t Agree on Why, Sinclair Broadcast Group (Jan. 25, 2021), https://katv.com/news/nation-world/section-230-both-sides-say-it-needs-work-but-they-dont-agree-on-why [https://perma.cc/XMG7-VKZ9].
 See 47 U.S.C. § 230 (2018).
 Caitlin Johnstone, MSM Already Using Capitol Hill Riot To Call For More Internet Censorship, Ron Paul Institute for Peace and Prosperity (Jan. 7, 2021), http://ronpaulinstitute.org/archives/featured-articles/2021/january/07/msm-already-using-capitol-hill-riot-to-call-for-more-internet-censorship/ [https://perma.cc/97UE-S9RG].
 Kate Conger et al., Violence on Capitol Hill Is a Day of Reckoning for Social Media, N.Y. Times (Jan. 6, 2021), https://archive.is/2bRR8#selection-437.0-449.14 [https://perma.cc/47UA-KHWA].
 New York Times Opinion Editorial Board, supra note 33; Donald Trump (@RealDonaldTrump), Twitter (May 29, 2020).
 See Reno v. ACLU, 521 U.S. 844 (1997) (recognizing that the First Amendment fully applies to online speech); see also Elliot Harmon, It’s Not Section 230 Trump Hates, It’s the First Amendment, Electronic Frontier Foundation (Dec. 9, 2020), https://www.eff.org/deeplinks/2020/12/its-not-section-230-president-trump-hates-its-first-amendment [https://perma.cc/MKG5-64R8].
 Jason Kelly, Section 230 Is Good, Actually, Electronic Frontier Foundation (Dec. 3, 2020), https://www.eff.org/deeplinks/2020/12/section-230-good-actually [https://perma.cc/WFL8-E83D].
 See 47 U.S.C. § 230(c) (2018).
 Elliot Harmon, Changing Section 230 Would Strengthen The Biggest Tech Companies, N.Y. Times (Oct. 16, 2019), https://www.nytimes.com/2019/10/16/opinion/section-230-freedom-speech.html [https://perma.cc/SF6D-3SYD].
 Kelly, supra note 118.
 Harmon, supra note 121; Kelly, supra note 118.
 Harmon, supra note 121; Kelly, supra note 118.
 Harmon, supra note 121; Kelly, supra note 118.
 Kelly, supra note 118.
 47 U.S.C. § 230(a)(3) (2018).
 Kelly, supra note 118.
 Harmon, supra note 121.
 Kelly, supra note 118.