The Regulation of Addiction

Matthew Lawrence describes how regulation can be used to respond to technology addiction.

In a recent discussion with The Regulatory Review, Matthew Lawrence, a professor of law at Emory University School of Law, advocates regulation to address addictive design in technology.

Lawrence shares concerns that social media platforms and associated technologies use psychological tricks to entice users into endless engagement that becomes addictive. Addictive design techniques, he warns, are dangerous to users and are viewed by some policymakers as a public health emergency.

Regulators and legal bodies, according to Lawrence, struggle to balance tech companies’ First Amendment rights against users’ right to be free from addiction. Social media companies argue that platforms are “regulation free zone[s]” because their functions are “expressive.” Lawrence counters, however, that even if those functions are expressive in nature, states may have an overriding interest in protecting their citizens from life-altering addictions.

Lawrence acknowledges that reform will be a hard-fought battle. He advocates solutions that encourage social media companies to develop platforms that are good for mental health and prioritize community and socialization over user engagement.

In addition to teaching at Emory Law, Lawrence served as a senior advisor to the Drug Enforcement Administration, special legal advisor to the U.S. House of Representatives Budget Committee, a trial attorney at the Department of Justice’s Federal Programs Branch, and an attorney advisor in the Office of Management and Budget’s Office of General Counsel. Lawrence was previously an assistant professor of law at Penn State Dickinson Law and held a courtesy appointment as an assistant professor at Penn State College of Medicine before joining the faculty at Emory University School of Law. Lawrence is an affiliate faculty member at Harvard Law School’s Petrie-Flom Center for Health Law Policy, Bioethics, and Biotechnology and was previously a fellow with the Center.

Lawrence was awarded the Provost’s Distinguished Teaching Award for Excellence in Graduate and Professional Education by Emory University in 2023. He was also recognized by the American Society for Law, Medicine, and Ethics as a 2017 Health Law Scholar.

The Regulatory Review is pleased to share the following interview with Matthew Lawrence.

The Regulatory Review: Your article, Addiction and Liberty, proposes a right to “freedom from addiction,” particularly as it relates to technology addiction. How does recognizing this right contribute to broader efforts to regulate big tech while still promoting individual autonomy in the digital age?

Lawrence: First of all, I know readers probably have many questions about what I mean by a “freedom from addiction,” what “addiction” means for constitutional purposes, how history and tradition support recognizing such a liberty interest, extensions, counter-arguments, limitations, and so on. I won’t try to address those here—but I do my best in the article linked above. Relatedly, I try to be more careful in writing than in, say, a podcast interview or conversation. But for purposes of this fun feature, I’m being a bit more conversational—I refer readers who want more careful statements of my views or any of the things I discuss to my articles or the specific sources I reference.

As for your question about how the right to freedom from addiction balances big tech regulation with individual liberty, I would flag two things.

First, state lottery reform advocates allege that slot machines are built to use psychological tricks like “intermittent reinforcement and variable reward”—called “operant conditioning”—to foster compulsion in users. Advocates for regulating social media allege that social media uses the same tricks. For example, in pending lawsuits, plaintiffs allege—to paraphrase—that Instagram withholds “likes” rather than providing them immediately and that the resulting buildup of “likes” produces a jackpot-like effect.

Put to one side the question of whether casinos and social media platforms—and sports betting apps that blur the lines between casinos and social media—should be able to use such design techniques without even warning users about their potential psychological effects. The core liberty concern is that governments will use these same techniques to foster addiction in residents. State lottery reform advocates allege that governments have done this since the legalization of state lotteries in the latter half of the twentieth century—that lotteries act as a tax on those residents in whom they foster compulsion. The right to freedom from addiction says that fostering psychological compulsion in residents is a constitutionally suspect regulatory tool.

Second, I want to return to the question of regulating addictive design by private companies, such as social media platforms. I have written elsewhere about a host of regulatory activity in this space. Critics say that regulating social media interferes with the First Amendment, while proponents argue that there are broad state interests in doing so to foster public health generally. But courts look to constitutionally protected liberty interests in identifying substantial and compelling state interests. Recognizing a right to freedom from addiction gives states a narrow basis on which to regulate addictive design by social media companies that is less susceptible to arguments that they “prove too much” than broader public health justifications for regulation.

TRR: In Addiction and Liberty, you discuss how social media companies often encourage addiction in users without their knowledge. What regulation might prevent these efforts?

Lawrence: Zephyr Teachout has a great work in progress, In Techno Parentis, describing the different efforts. Courts might ensure social media companies have an incentive to implement cost-justified measures to protect users through negligence or products liability approaches. States might regulate social media companies’ design choices, by banning particular features, for example. Congress might impose duties on social media companies—as in the Kids Online Safety Act—or empower an agency to regulate. Or policymakers might empower schools or parents to regulate their classes or kids.

My own view is that right now, we need to see experimentation on all these fronts through laboratories of democracy and diverse regulatory approaches. First, we need to learn what works. But second, between Section 230 and the First Amendment, courts will develop the rules of the road for legislatures in the years to come and likely invalidate many well-meaning attempts simply for failure to follow the constitutional rules that courts are only beginning to develop. That’s a byproduct of our system, which in a developing space like this, forces legislatures to write laws without the benefit of clarity about the limits on their authority, which comes later when courts adjudicate constitutional challenges to enacted laws. But it means we should spread our regulatory eggs across as many baskets as possible.

TRR: Your article, Public Health Law’s Digital Frontier, raises questions about the regulatory frameworks governing online platforms’ impact on mental health. What role do you envision health law regulation playing in addressing addictive design practices online?

Lawrence: Addiction is a public health problem, and the regulation of addictive products and services is a major public health challenge. That’s obviously true of addictive drugs—hence the Controlled Substances Act—and of addictive behaviors—hence gambling regulation.

On a terminological note, and at the time of this writing, medical providers do not recognize social media addiction as an illness—the process by which they make those calls is a slow-moving one. But I still think the term “addiction” is useful here for reasons I’m writing about—and I hope to have something posted on that soon.

There have been many major failures in addiction policy, including failures to protect public health and policies that have caused affirmative harm. Often, stigma surrounding addiction and the idea that addiction is morally different from other chronic illnesses have fueled our addiction policy failures.

My hope is that understanding social media addiction as a public health problem from the start—rather than a moral problem or even a medical problem—will help us regulate better in this space, help more people and, of course, cause less harm. The ubiquity of social media actually gives me hope here. Half of Americans answer “yes” in surveys asking if they are “addicted” to their smartphones. That means that it will be much harder for the stigma and marginalization that have held back addiction policy in other domains to distort efforts to reduce the harms of addictive design.

TRR: How does the conflict between public health regulation and internet regulation impact potential regulatory solutions?

Lawrence: The internet-tech paradigm is focused on innovation but distrusts regulation. The public health paradigm starts from the premise that many problems we face can be best addressed working together through our legislatures and courts. I see this when reading cases addressing addictive design claims. Courts that conceptualize the issues as an internet or tech problem tend to rule for the defendants, and courts that conceptualize the issues as a public health problem tend to issue nuanced rulings that allow many claims to enter discovery.

TRR: You recommend that courts include addictive design regulation in public health law. From a standpoint of health law regulation, what criteria should courts consider when adjudicating cases related to addictive design? How might their rulings shape future regulatory approaches?

Lawrence: In our system of government, we make decisions about how to resolve hard policy questions through democratic processes. Is concern about social media a “moral panic” that should be ignored for fear of intruding on platforms’ innovation? Or is concern that regulatory protections intended to safeguard the public health will interfere with innovation or economic growth itself a “corporate panic” cooked up by industry as a lobbying tactic? These aren’t questions for courts or law professors to decide. They are questions for “we the people” to decide through our elected representatives. Often, we opt to delegate decisions to political appointees and agencies making choices through processes—such as notice-and-comment rulemaking—that bridge democratic input with expertise. That’s the administrative state.

The upshot is that the criteria courts should consider are whatever criteria the law says. The only exception comes when the Constitution itself limits what kinds of laws might be passed. On that, I think a point made by Justice Kagan in her opinion in Moody v. NetChoice is really important. As I read her arguments and to generalize, platforms have argued that social media is something like a regulation-free zone because of the First Amendment. Kagan said otherwise. Yes, she said, the First Amendment protects much platform activity, but it does not protect everything platforms do. It’s a question that must be evaluated platform function by platform function, looking to whether the particular function is expressive. And even as to platform functions that are expressive, the state might still have a sufficient interest in regulating—such as, I would argue, to protect residents’ liberty interest in freedom from addiction.

TRR: Given the challenges posed by addictive design and the mental health crisis associated with it, what collaborative efforts do you believe are necessary among policymakers, regulators, and industry stakeholders to address these issues effectively within the framework of health law regulation?

Lawrence: I have one collaborative effort I’d like to see. It is very hard for individual users, parents, or schools to depart from whatever norm develops around kids’ social media use. If all the kids in a friend group use Snapchat, then there is tremendous peer pressure for other kids to use that same service. Changes have to be made group-wide, and that’s very hard for individual users to force.

I’d like to see leading expert entities—such as the American Psychiatric Association or the Surgeon General—work together to create an accreditation for social media platforms. To be accredited, platforms would have to show that they take all reasonable steps to protect kids’ health—something like the tort negligence standard. Then, policymakers and parents could piggyback off the accreditation. Rather than ban social media in their houses, parents could ban non-accredited social media—a well-built Wi-Fi router might even permit this as a one-click parental setting. Similarly, Apple and Android might build in parental controls allowing such limits—prohibiting access to non-accredited apps—to be imposed in a phone’s operating system. And policymakers, too, might vary regulatory controls by whether a platform is accredited or not.

TRR: Looking ahead, what recommendations would you offer for the regulation of addictive design and promotion of digital well-being from a health law perspective? How might these recommendations balance the need for regulation with the goal of fostering innovation and economic growth in the technology sector?

Lawrence: First, I don’t want to lose sight of the benefits of social media, or sports betting apps for that matter. The best addictive design regulation would encourage social media and related apps to compete on how good platforms can be for mental health, community, socialization, and other things users affirmatively value, rather than competing, as some say platforms currently do, on designing their products to foster compulsive use and maximize users’ time on device.

Second, anyone thinking about regulating in this space needs to be mindful of how high the odds are stacked against positive change. If a state wants to understand how a design feature like infinite scroll impacts kids, it has to offer a grant to a clinician who gets approval from an institutional ethics review board to study impacts on kids—including informed consent—and then wait for the results to go through peer review.

If a platform wants to understand how a design feature impacts users, it can randomly roll out the design feature to a subset of users and see what happens. Moreover, if a state wants to regulate, critics—and maybe courts—will ask it to explain how the design feature it wants to regulate works and why it is harmful. By contrast, platforms’ business model gives them an evolutionary incentive to adopt any change that increases time on device—even if they don’t understand why or how it’s impacting users to produce that result. And then, on top of all this, addictive design has proven intensely lucrative, so the platforms can hire the smartest people not just to build their products but to intervene in legislative fights, regulatory fights, and even in the development of scholarship around these questions. What I think of as the iron law of lobbying looms large here: The more money a group makes from the status quo, the more influence it has with which to protect the status quo.