Automation in the administrative state could upset the relationship between people and their government.
The U.S. Environmental Protection Agency (EPA), the U.S. Department of Veterans Affairs, and the U.S. Food and Drug Administration are just a few of the agencies turning to automation as a way to improve regulatory functioning. In the years ahead, we will see only more instances of agency use of cyberdelegation—or the reliance on computer programs to make government decisions. Thoughtful use of computers in administrative government—and in particular the deployment of artificial intelligence technologies involving expert systems and deep learning—has the potential to increase consistency in decision-making and to help agency officials understand a complex and changing world well enough to make better decisions.
But the advantages of cyberdelegation in the administrative state will bring with them at least four sets of challenges warranting careful scrutiny. First, the societal value of government reliance on computer programs will depend on highly contestable assessments of programs’ objectives. And deciding how to instruct computer programs on matters of broad public concern—telling them what to maximize—will be more difficult in practice than in theory.
These difficulties will arise even when there is widespread societal agreement about a given general goal, such as keeping food safe at a reasonable cost, or reducing vulnerability to terrorist attacks, in part because agreement at a high level of generality rarely translates into consensus on how to implement policies through administrative agencies.
Plenty of debate will occur within agencies and among legislators about the precise mix of goals that should animate various administrative decisions, such as the imposition of economic sanctions. It is easy enough to suggest that the goal is to change the behavior of the target country. But the details matter. Implementing policy often involves political tradeoffs that an expert system could elide from view but would nonetheless make, implicitly, simply by applying a particular analytical technique.
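To see how a seemingly technical choice encodes a political one, consider a minimal sketch, in deliberately simplified Python, of a scoring function an expert system might maximize when ranking sanctions packages. Every name, factor, and weight below is hypothetical, invented for illustration rather than drawn from any actual system.

```python
# Hypothetical sketch: ranking candidate sanctions packages by a
# weighted objective. The weights are where the politics live; a
# system that maximizes this score makes those tradeoffs implicitly,
# even if no one ever debates them out loud.

def sanctions_score(behavior_change, civilian_harm, domestic_cost,
                    w_change=1.0, w_harm=0.5, w_cost=0.25):
    # Setting w_harm at 0.5 rather than 5.0 is a judgment about how
    # much civilian hardship "counts" against changing the target
    # country's behavior. That is a policy choice, not a technicality.
    return (w_change * behavior_change
            - w_harm * civilian_harm
            - w_cost * domestic_cost)

# Two illustrative packages, with each factor scored on a 0-to-1 scale:
aggressive = sanctions_score(0.9, 0.8, 0.6)  # broad trade embargo
targeted = sanctions_score(0.6, 0.2, 0.1)    # narrow asset freezes

print(aggressive, targeted)  # 0.35 vs. 0.475: "targeted" ranks first...
# ...but with w_harm=0.1 the scores become 0.67 vs. 0.555 and the
# ranking flips, without the tradeoff ever surfacing as a policy debate.
```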
A second challenge will be determining how much is lost when machines replace human cognition. Our often under-theorized goals must inform whether we should try to screen out features of human cognition—including the oft-mentioned “heuristics” and “biases”—that diverge from conventional, easily systematized accounts of rationality. There is no reason to think that all heuristics and biases are bad from a social welfare perspective; whether a heuristic is valuable depends on one’s goals for society.
Some features of human cognition that vary from conventional rationality—such as the tendency to weigh the stories of specific individuals more heavily than aggregate statistical information—may be integral to qualities such as empathy, or to the ability of policymakers to explain governmental decisions to the public. Accordingly, at least in some circumstances, quirks of human decision-making that are often treated as “biases” to be screened out may instead merit an increasingly important place in legal decision-making, particularly as more routine decisions come to be guided by algorithms.
Third, potential side effects from automation must be considered. Incorporating computer programs into the administrative state could carry cybersecurity risks and other adverse impacts that will not necessarily be weighed in the calculus encouraging reliance on those programs.
It may be tempting to ignore cybersecurity problems because we have yet to develop an effective technical means for quantifying the risks. But it would be a serious mistake to consider the benefits of automation without considering the associated security problems.
For example, greater EPA reliance on pervasive data gathering and computer programs to target enforcement could produce a world with less pollution, but also one more exposed to cybersecurity threats that could, at a minimum, compromise the integrity of the regulatory process and, at worst, be exploited to damage industrial infrastructure. Cybersecurity problems should loom especially large given the many examples of governmental failures involving information technology.
Fourth, heavy reliance on computer programs may adversely affect the extent of deliberation that occurs in the administrative state. Implicit in democratic governance is an aspiration for dialogue and exchange of reasons that are capable of being understood, accepted, or rejected by policymakers, representatives of organized interests, and members of the public.
Except when computerized decisions can rely on relatively straightforward, rule-like structures, difficulties will arise in supplying explanations of how decisions were made that policymakers and the public can sufficiently understand. For example, if computer systems determined how to allocate scarce inspection resources among processing facilities handling the increasing proportion of the American food supply that comes from abroad, it would probably matter to importers and consumers that these systems could not yield carefully reasoned explanations for the choices undertaken.
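The contrast can be made concrete with a brief sketch, again in hypothetical Python. The rule-like trigger below can be explained to an importer in a single sentence; the learned priority score cannot. The facility fields, thresholds, and training data are all invented for illustration and describe no actual agency system.

```python
# Hypothetical contrast: a rule-like inspection trigger versus an
# opaque learned priority score for imported-food facilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_inspect(prior_violations, product_risk):
    # Explainable: "flagged because of two or more prior violations,
    # or because it handles high-risk products." Each factor can be
    # cited to the importer in plain language.
    return prior_violations >= 2 or product_risk == "high"

# An opaque alternative: a model fit to synthetic "historical" data.
rng = np.random.default_rng(0)
X = rng.random((200, 12))  # 12 invented facility features
y = (X @ rng.random(12) + rng.normal(0, 0.5, 200)) > 3.0

model = LogisticRegression(max_iter=1000).fit(X, y)

def model_based_priority(features):
    # The score folds together learned weights over many interacting
    # features; there is no short, reason-giving account of why one
    # facility outranked another.
    return model.predict_proba([features])[0][1]

print(rule_based_inspect(3, "low"))               # True, and why is obvious
print(model_based_priority(list(rng.random(12)))) # a number, and why is not
```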
Confronting these four major challenges today is important because powerful path-dependent effects will make it difficult to undo the use of algorithms once they are incorporated into legal decision-making. Path dependence will arise because infrastructure is costly to replace and because it habituates people to making decisions in a particular way.
For example, given recent advances in DNA sequencing and genetic medicine, it is not difficult to envision an ever-greater role for expert systems in analyzing information relevant to the approval of specialized drugs. Even though computer programs and organizational expertise may function as complements today, they may become substitutes at a later time. Once an agency’s organizational expertise begins to erode due to greater reliance on computerized decision systems, the agency will face steep costs in recovering that expertise.
Overall, the administrative state is about expertise and, more importantly, about translating that expertise into engagement with the broader public. Administrative decision-making involves moving back and forth between discourses of expert knowledge and legal authority and conversations entailing public deliberation and moral debate. The core question underlying cyberdelegation is what happens to this process of translation when automated systems take a more prominent role in the administrative state.
This is not to say that the status quo is any deliberative panacea. On the contrary, it is easy to criticize the current administrative state for offering the public too few opportunities to participate in decisions. Yet growing reliance on automated computer programs to make sensitive decisions will only complicate what little deliberation does occur. Cyberdelegation risks diffusing responsibility between an agency’s leadership and the team or set of machines that designed the relevant software, raising the likelihood that decisions will be made on a basis that human participants could not understand or even explain.
This essay, part of a four-part series, draws on Justice Cuéllar’s forthcoming book chapter, Cyberdelegation and the Administrative State, which formed the basis of his remarks delivered at the annual regulation dinner at the University of Pennsylvania Law School.