In this week’s Saturday Seminar, experts discuss regulatory solutions for content-filtering algorithms on social media platforms.
Frances Haugen, a former data scientist at Facebook, testified before the U.S. Congress in 2021 that the social media platform was harming children. Haugen urged lawmakers to act, pointing to numerous studies finding that Instagram, which is owned by Facebook, led to worsened body image and suicidal thoughts among teenage girls. The problem, according to Haugen, stemmed from the algorithms Facebook was using.
Social media algorithms are processes designed to filter content, recommend new content, or even delete content from users’ feeds entirely. Ultimately, social media algorithms determine the pictures, videos, or information that users see.
Facebook uses an engagement-based algorithm, which ranks content by the number of “likes” and “comments” a post receives rather than displaying posts in chronological order. In her testimony, Haugen contended that, as a result, users are more likely to encounter extremist or hateful content, which tends to generate more online reactions.
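To make the distinction concrete, the sketch below contrasts a chronological feed with a simple engagement-ranked feed. It is only an illustration under stated assumptions: the `Post` structure, the field names, and the weighting of comments over likes are invented for this example and do not describe Facebook’s actual ranking system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime  # when the post was published
    likes: int           # reaction count
    comments: int        # comment count

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Order posts newest-first, ignoring engagement."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_feed(posts: list[Post], comment_weight: float = 2.0) -> list[Post]:
    """Order posts by a simple engagement score.

    Weighting comments more heavily than likes is an assumption made for
    illustration; the point is that a high-reaction post can outrank newer posts.
    """
    return sorted(posts, key=lambda p: p.likes + comment_weight * p.comments, reverse=True)

if __name__ == "__main__":
    posts = [
        Post("a", "calm update", datetime(2021, 10, 5, 12, 0), likes=12, comments=1),
        Post("b", "provocative claim", datetime(2021, 10, 4, 9, 0), likes=300, comments=180),
        Post("c", "family photo", datetime(2021, 10, 5, 15, 0), likes=40, comments=5),
    ]
    print([p.author for p in chronological_feed(posts)])  # ['c', 'a', 'b']
    print([p.author for p in engagement_feed(posts)])     # ['b', 'c', 'a']
```

On this toy data, the oldest post rises to the top of the engagement-ranked feed simply because it drew the most reactions, which is the dynamic Haugen described.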
Researchers have also investigated the content that other social media platforms recommend to their users. A recent study discovered that YouTube’s algorithm was recommending videos containing violent and disturbing content. Another study revealed that Twitter’s algorithm promotes more right-leaning than left-leaning content.
Under Section 230 of the Communications Decency Act, social media platforms are not held liable for the content that their users post on their platforms. Some proponents of this immunity argue that regulating private social media companies is an abridgment of free speech. But social media is playing an increasingly significant role in society. A 2021 survey, for example, found that approximately 53 percent of adults get their news from social media. Accordingly, many scholars argue that there should be greater regulatory oversight. Activists such as Frances Haugen are urging policymakers to hold social media companies liable for their algorithmic designs that determine how content is spread.
In this week’s Saturday Seminar, we collect scholarship that discusses the ways in which social media algorithms filter information and highlights regulatory solutions for preventing the spread of harmful content.
- In an article published in The University of Chicago Law Review, Dan L. Burk of the University of California, Irvine School of Law discusses social media algorithms designed to delete content for copyright infringement. Burk explains that some of this deleted content may qualify for protection under the fair use exception, leading some scholars to propose incorporating fair use into the algorithms themselves. But building fair use metrics into algorithms designed to delete copyrighted content would be problematic, Burk argues, because of the complex and dynamic nature of the legal standard. Furthermore, he contends that “algorithmic fair use” would be difficult to implement both technically and practically, and would ultimately “degrade the exception into an unrecognizable form.”
- In an article published in The Case Western Reserve Journal of International Law, Rebecca J. Hamilton of American University Washington College of Law explores the algorithms that social media platforms use to remove terrorist content. YouTube is largely self-regulated and decides for itself what content to remove from its platform, Hamilton explains. But in 2017, the United Kingdom government successfully urged YouTube to change its algorithm to detect and remove terrorist content. Hamilton explains that the European Commission followed by increasingly pushing for social media regulation, and in 2019 the European Parliament approved fines for social media companies that fail to remove terrorist content within an hour of detection. Hamilton cautions, however, that relying on YouTube’s algorithm to achieve the state’s regulatory goals could eventually create problems when those goals do not align with business incentives, because companies “cannot be expected to prioritize the goals of justice and accountability in the face of competing business demands.”
- In a forthcoming paper in The Albany Law Journal of Science and Technology, Xiaoren Wang of The University of Glasgow School of Law discusses the effect of algorithm regulations on user creativity on platforms such as YouTube. Wang argues that YouTube’s recommendation algorithm, which suggests viral video content to users based on their recorded preferences, hinders creative production. She contends that the algorithm directs user attention away from other videos and discourages the creation of content that is not mainstream. Wang suggests that existing data protection regulations, such as the European Union (EU) General Data Protection Regulation (GDPR), can offset this negative effect. She explains that GDPR Articles 22, 21, and 17 give users the right to opt out of recommendation algorithms, allowing them to reject recommendations based on personal data processing and instead engage with content organically.
- In an article published in the Journal of Law and Innovation, Ana Santos Rutschman of Villanova University Charles Widger School of Law discusses the role of algorithms in the regulation of misinformation on social media platforms. Rutschman explains that social media companies currently police the content on their own platforms. She argues that this self-regulation suffers from technical limitations, as many companies rely on imperfect algorithms to monitor high volumes of content. To combat misinformation, she recommends that the United States expand the self-regulation framework by adopting legislation similar to the EU Code of Practice on Disinformation, which clarifies what constitutes misinformation and provides an annex of recommended practices for industry to adopt.
- In a recent article published in the Internet Policy Review, Joe Whittaker of Swansea University and his coauthors assess the degree to which recommendation algorithms on digital platforms create “filter bubbles,” which direct users towards content with which they already agree. They find, for example, that YouTube is twice as likely to recommend extremist content to users who engage with far-right materials. Furthermore, they argue that the amplification of extremist content by algorithms is currently a blind spot for social media regulation. They contend that legislators struggle to address this amplification because of the dearth of appropriate, transparent policy instruments available. To address the issue, Whittaker and his coauthors encourage a co-regulatory approach that combines self-regulation by platforms with traditional regulation by public authorities.
- According to Anja Lambrecht of London Business School and her coauthors, charities seeking exposure through digital advertising on YouTube may not benefit from algorithm-produced “up-next” recommendations as much as they anticipate. They find that the algorithm frequently steers users toward videos unrelated to the focal charity and that the likelihood that a suggested video is unrelated to the charity grows as users follow a chain of recommendations. These results, Lambrecht and her colleagues argue, demonstrate a potential mismatch in expectations between YouTube and the charity at issue and suggest minimal echo chamber effects for videos not intended to polarize viewers.