The Social Responsibility of Social Media Platforms


Scholar weighs options for putting an end to harmful speech on social media platforms.

One in three people globally use social media platforms, such as Facebook, Twitter, and TikTok. These platforms act as breeding grounds for viral cartoon memes and trendy dance choreography, as well as widespread social movements and vibrant displays of activism.

But social media also has a dark side. Disinformation, hate speech, revenge pornography, harassment, terrorist activity, and sex trafficking run rampant on social media sites. Critics claim that social media platforms do not take sufficient action to eliminate the damaging forms of speech that proliferate on their sites.

In a recent article, Nina Brown of the Newhouse School of Public Communications at Syracuse University argues that popular social media platforms fail to prioritize safety in their operations, allowing harmful speech to propagate.

As they currently stand, social networks are self-regulated. Through private content moderation, most social media platforms employ a combination of algorithmic and human review to determine what content to remove from their sites.

The wide discretion that platforms possess over content moderation can be dangerous, Brown warns. Social media platforms often put greater weight on generating profits than on protecting users from destructive speech. This is why, Brown explains, greater regulatory oversight is needed to prompt change within the industry.

Brown examines the law’s interaction with social media in the United States, flagging the Supreme Court’s treatment of social media as identical to its treatment of print media. In both cases, the Court gives great deference to the speech protections of the First Amendment in the U.S. Constitution. That deference means that the content of online communications remains largely unregulated.

Congress has followed the Supreme Court’s lead and enacted Section 230 of the Communications Decency Act. Section 230 offers internet service providers immunity from liability for content posted by third parties on their platforms. Brown notes that courts apply this liability shield broadly. Section 230 has protected providers from claims pertaining to intentional infliction of emotional distress, terrorism support, and defamation.

Brown explains that social media platform users implicitly rely on the wide protections of Section 230. Without these liability safeguards, the ways in which users interact with these online spaces would be limited, as platforms would constrain users’ ability to comment freely on content or post product reviews.

The Section 230 shield, however, stirs controversy on both ends of the political spectrum. Conservative critics push back against Section 230 for stifling viewpoint neutrality, calling out social platforms for censoring political perspectives. Democratic lawmakers, on the other hand, criticize Section 230 for fostering a lack of platform accountability on topics such as the spread of falsehoods and child exploitation.

Despite calls for change to Section 230, Brown says that the reform of liability allocation could lead to problems with “de facto” government regulation of speech: The government would be able to “deputize” platforms into creating content-based censorship.

If Congress removed the immunity law, online platforms would be encouraged to put content through a rigorous moderation process to minimize risk. According to Brown, that process would force platforms to draw content-based distinctions, which could lead to over-blocking, over-censorship, and impermissible government interference with speech.

To determine the best way to reconcile Section 230 with the need for content moderation, Brown plays a game that she calls “regulatory goldilocks.”

Brown assesses the three primary options for regulatory action in the social media field: self-regulation, government regulation, and industry regulation. Given the proven problems with the current self-regulatory model and the First Amendment issues inherent in a government regulation model, Brown shoots down the first two options.

Brown identifies the third option, industry regulation, as the “just right” regulatory route to address platforms’ harmful speech problem.

Brown explains that self-regulatory councils would be at the forefront of the industry regulation model. Because of their position as independent bodies, these councils could wield the authority to set industry standards and enforce their own regulations.

Turning to other industry self-regulatory councils as examples, Brown emphasizes the success of the Advertising Self-Regulatory Council (ASRC). The advertising industry developed the ASRC at the height of the public’s distrust of advertising in the early 1970s. The ASRC stepped in to combat negative attitudes with industry reform, restoring regulatory protection for consumers.

Brown says that the ASRC model could work in the social media industry. In response to social media platforms’ issues with damaging speech, Brown recommends building a Social Platform Regulatory Council (SPRC) to set regulations for social media platforms.

According to Brown, the SPRC would need four basic elements to function: voluntary participation across platforms, a diverse board composition, a guiding set of principles, and the authority to make real change.

Brown underscores that the SPRC model “must have teeth.” In particular, Brown argues that a connection between the SPRC and a government agency is crucial, as it could be an avenue for enforcing industry standards. In addition, Brown envisions that the SPRC would need the ability to impose penalties for noncompliance.

Brown believes that her proposed Social Platform Regulatory Council could answer the public call to rid social media platforms of harmful speech, a call that rings louder every day.