Document Type

Article

Abstract

Social media is a valuable tool that has allowed its users to connect and share ideas in unprecedented ways. But this ease of communication has also opened the door for rampant abuse. Indeed, social networks have become breeding grounds for hate speech, misinformation, terrorist activities, and other harmful content. The COVID-19 pandemic, growing civil unrest, and the polarization of American politics have exacerbated this toxicity in recent years.

Although social platforms engage in content moderation, the criteria for determining what constitutes harmful content are unclear both to their users and to the employees tasked with removing it. This lack of transparency has given social platforms the flexibility to remove content as it suits them: in the way that best maximizes their profits. But it has also inspired little confidence in social platforms’ ability to solve the problem independently, and it has left legislators, legal scholars, and the general public calling for a more aggressive, and often government-led, approach to content moderation.

The thorn in any effort to regulate content on social platforms is, of course, the First Amendment. With this in mind, a variety of options have been suggested to curb harmful content without running afoul of the Constitution. Many legislators have suggested amending or altogether repealing section 230 of the Communications Decency Act. Section 230 is a valuable legal shield that immunizes internet service providers, like social platforms, from liability for the content that users post. This approach would likely reduce the volume of online abuse, but it would also have the practical effect of stifling harmless, and even socially beneficial, dialogue on social media.

While there is a clear need for some level of content regulation on social platforms, the risks of government regulation are too great. Yet the current self-regulatory scheme, under which each platform polices its own content independently, has failed, allowing an abundance of harmful speech to persist online. This Article explores these models of regulation and suggests a third model: industry self-regulation. Although there is some legal scholarship on social media content moderation, none of it explores such a model. As this Article will demonstrate, an industry-wide governance model is the optimal solution for reducing harmful speech without hindering the free exchange of ideas on social media.

DOI

10.37419/LR.V8.I3.1

First Page

451

Last Page

494
