Social media companies may increase content oversight if motivated by federal incentives and liabilities.
Online platforms can be “double-edged swords”: they generate spectacular economic returns while also offering a forum for bad actors and bad behavior. In our view, only a combination of government regulation and pressure on companies to self-regulate has a chance of curbing the dark side of social media. U.S. Supreme Court Justice Clarence Thomas has hinted at a similar position.
Although the challenges facing social media are global and require multinational solutions, we focus here mainly on the United States. We start by clarifying the problem we are trying to solve. On the one hand, social media platforms have an incentive problem. Facebook, Instagram, Google, YouTube, Twitter, and other platforms frequently accelerate the viral distribution of extreme content because it generates greater user engagement, which translates into billions of dollars in advertising revenue.
The more outrageous the content—for example, stories of grand conspiracies around the U.S. presidential election, the coronavirus, and COVID-19 vaccines—the greater the profit opportunity.
Yes, the social media companies have “terms of service” that allow them to delete harmful or illegal content. But there are no financial incentives for these firms to police themselves.
On the other hand, the volume of social media content—four billion videos are viewed on Facebook alone each day—makes it difficult and expensive, although not impossible, to track, curate, or edit so many postings.
We have been studying and writing about digital platforms since they first emerged in the 1990s. Our research has led us to conclude that many of the proposals being discussed in the press and the U.S. Congress are unlikely to work. Let us consider them one by one.
One option is to break up Facebook. This option would reduce Facebook’s market power, but it does not solve any of the incentive problems or conflict-of-interest issues identified by Frances Haugen, the former Facebook employee and whistleblower. Even if Facebook, Instagram, and WhatsApp were split into separate companies, each application would still be a huge, powerful network with no new incentive to stop bad behavior. Similar problems would persist if YouTube were detached from Google.
Another option is for the federal government to regulate social media algorithms directly. This option, which Haugen proposed, is possible but would be difficult to execute for technical reasons. Social media and search algorithms are built with artificial intelligence technology, and they are difficult for outsiders, and even for the developers themselves, to fully understand, let alone pull apart and alter. Furthermore, these algorithms rely on machine learning and big data: they continually evolve as users interact with the platforms billions of times a day. Putting in a bureaucratic change-control process monitored by government officials or even expert special masters, as occurred with the settlement of the Microsoft antitrust trial, would likely freeze the technology and put a damper on innovation.
A third option would be for lawmakers to eliminate Section 230. This change would get rid of the 1996 law that protects online platforms from civil liability for content they disseminate. But Section 230 distinguishes platforms from publishers, which are fully responsible for their content. As such, the law remains fundamental to the operations of all online platforms. We want these internet businesses to continue to be as open as possible to innovation and to facilitate freedom of expression in user-generated content, without being legally liable for each of the billions of posts on their sites.
A better alternative would be to combine government regulations that alter incentives with stronger pressure on social media companies to regulate themselves. We offer three suggestions.
First, the U.S. government should require all social media platforms to abide strictly by their own “terms of service,” vesting enforcement power over these terms with the Federal Communications Commission (FCC), Federal Trade Commission (FTC), or even the Securities and Exchange Commission (SEC). These terms already give social media companies the ability to curate content or shut down accounts when they see misuse of their platforms. For example, Facebook has a set of “community standards” comprising a long list of potentially prohibited content, including posts considered to encourage violence and criminal behavior, child or sexual exploitation, hate speech, false news, and intellectual property violations.
Having terms of service is not enough, however. The federal government needs to make sure that social media firms adhere to their publicly stated terms of service and other standards. One way to force adherence would be government lawsuits or civil litigation, with fines levied not only against the companies but also against their CEOs, other executives, and members of their boards of directors. Just as the Sarbanes-Oxley Act of 2002 made officers and directors personally liable for financial statements, new regulations could make Mark Zuckerberg and the Facebook board liable for failures to implement their terms of service.
Second, Congress, with the cooperation of the U.S. Department of Justice, should revise Section 230. We suggest that the revisions make it possible to hold the online platforms accountable for profits they earn from deliberately disseminating harmful content. The companies could then become open to civil lawsuits or specific FCC, FTC, SEC, or Justice Department actions. Since Facebook and Alphabet, Google’s parent company, collectively earned $69 billion in net income in 2020, episodic fines are unlikely to alter behavior—as we have seen with European antitrust fines against Google or the $5 billion FTC fine levied against Facebook. It should be possible, however, to force the platforms to identify profits gained from advertisements tied to these viral posts and then to have those monies clawed back as fines. Those numbers could be very large.
Finally, any additional government regulation should be administered in a way that promotes more effective self-regulation, as occurred with movie ratings and tobacco and alcohol advertising in the 1960s and 1970s, and later with pornography, terrorist recruitment content, and self-preferencing in airline flight listings. The technology that digital platforms use is too complex and fast-changing, and the content too vast, to rely only on government monitoring.
Historically, when new, complex technologies have arisen and new industries have emerged, a credible threat of intrusive government regulation has pushed many companies to regulate their own operations. In industries such as movies and video games, the threat of government intervention spurred coalitions of firms to establish content standards and codes of conduct, including content rating systems, which have worked reasonably well. This coalition approach could work with social media platforms as well.
Ultimately, accountable self-regulation is good for business: social media platforms need to take more responsibility for their impact on the world, or they risk continuing to damage essential common resources for information and commerce, namely the internet itself and user trust in digital content.