Agencies, policymakers, and the courts can all address the risks associated with cyberdelegation.
Given widely circulating media accounts that a foreign power used cyber-intrusions in an effort to affect a recent American national election, it is not radical to suggest that reliance on computers to make agency decisions is a risky enterprise. But in some ways, cybersecurity problems are just the tip of the iceberg. From cybersecurity risks to changes in public deliberation, government agencies’ use of automation and artificial intelligence will pose numerous challenges for the administrative state.
Although no simple compass or rubric exists to decide precisely how to navigate these uncharted waters, the following ideas offer a few possibilities for how agencies, policymakers, and the courts could help increase society’s capacity to make informed choices about the use of automation in the administrative state.
First, it may be worth exploring how we might better police the extent of human decision-makers’ engagement with automated expert systems. Until now, the courts have been reluctant to probe the actual decision-making of administrative leaders under the so-called presumption of regularity that emerged over time following Morgan v. United States. In rejecting a challenge to an order by the Secretary of Agriculture fixing maximum rates to be charged by market agencies at the Kansas City Stockyards, the Supreme Court in Morgan declined to allow an intrusive analysis of the Secretary’s actual decision-making process and considerations. “It was not the function of the court to probe the mental processes of the Secretary in reaching his conclusions,” the Court concluded, “if he gave the hearing which the law required.” Courts have been loath to stray from this presumption of regularity over the decades, so it has persisted, and with it their unwillingness to police exactly who makes a given decision.
As reliance on information technology increases, courts and policymakers should consider taking more seriously the requirement that accountability be lodged in specific decision-makers. Perhaps it is time to recalibrate the “presumption of regularity” to ensure that agency officials have clearly recognized the risks of relying on automated analytical techniques that are too complex or opaque for the officials themselves to understand entirely.
As a practical matter, this approach raises difficult further questions about the scope of discovery in suits to review administrative action, but perhaps those questions are worth facing, given the risk that decision-makers will rely on algorithms they do not fully understand.
Second, on a related note, arbitrary and capricious review may prove most meaningful if it encompasses whether the substantive explanations offered in, say, justifications for rulemaking are consistent with the analytical techniques actually used to make decisions. It is one thing to justify a program to freeze assets associated with organizations that meet a specific, statutorily grounded threshold of suspicion; it is quite another to deploy algorithms that entirely redefine that threshold, dynamically, in response to new information. Attention to cybersecurity risks may also fit within the context of arbitrary and capricious review.
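To make that distinction concrete, consider a minimal sketch in Python. Everything in it is hypothetical, invented purely for exposition rather than drawn from any actual program: it contrasts a fixed, statutorily grounded suspicion threshold with an adaptive screener whose operative cutoff drifts as new cases arrive.

```python
# Hypothetical illustration: a fixed statutory threshold versus an
# adaptive one that drifts as new cases arrive. All names and values
# are invented for exposition, not drawn from any actual program.

STATUTORY_THRESHOLD = 0.75  # fixed by statute or rule; stable and reviewable


def freeze_assets_fixed(suspicion_score: float) -> bool:
    """Apply the threshold as written: the legal standard never moves."""
    return suspicion_score >= STATUTORY_THRESHOLD


class AdaptiveScreener:
    """Re-estimates its own cutoff from the scores it observes."""

    def __init__(self, initial_threshold: float = 0.75, rate: float = 0.05):
        self.threshold = initial_threshold
        self.rate = rate

    def update(self, observed_score: float) -> None:
        # Exponential moving average: every new case nudges the cutoff.
        self.threshold = (1 - self.rate) * self.threshold + self.rate * observed_score

    def freeze_assets(self, suspicion_score: float) -> bool:
        return suspicion_score >= self.threshold


screener = AdaptiveScreener()
for score in [0.40, 0.35, 0.30]:  # a run of low-scoring cases...
    screener.update(score)
print(round(screener.threshold, 3))  # 0.693 -- quietly below the announced 0.75
```

The second design turns the operative legal standard into a moving target; surfacing that divergence between the rule an agency announces and the rule its software actually applies is precisely what a consistency-focused version of arbitrary and capricious review would need to do.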
Third, agencies must accelerate efforts to engage scholars, civil society, and other stakeholders in increasing our understanding of how to harness the analytical capacity of automated computer systems without eroding our sense of how decisions are made. As part of this process, agencies should consider engaging in medium-to-long-term planning about how they would address the use of automation within the rulemaking process. The U.S. Food and Drug Administration, for example, could further investigate how trends in artificial intelligence might change the agency’s use of outside experts in the drug approval process. Officials at the U.S. Department of Labor may face unexpected challenges arising from labor market changes driven by automation. Virtually all agencies will benefit from explicitly experimenting with different models of decision-making that aim to leverage artificial intelligence technologies while keeping humans in the loop.
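One such human-in-the-loop model can be sketched in a few lines of Python. The sketch below is purely illustrative, with a confidence band and names invented for exposition: the automated system decides only the cases on which it is highly confident, and refers everything else to a human official.

```python
# Hypothetical human-in-the-loop gate: the system decides only when it
# is confident; borderline cases are referred to a human official.
# The threshold and field names are invented for exposition.

from dataclasses import dataclass


@dataclass
class Recommendation:
    case_id: str
    approve: bool
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


def route(rec: Recommendation, auto_band: float = 0.95) -> str:
    """Return who decides the case: the system or a human reviewer."""
    if rec.confidence >= auto_band:
        action = "approve" if rec.approve else "deny"
        return f"{rec.case_id}: automated decision ({action})"
    return f"{rec.case_id}: referred to human reviewer"


for rec in [Recommendation("A-101", True, 0.99),
            Recommendation("A-102", False, 0.62)]:
    print(route(rec))
# A-101: automated decision (approve)
# A-102: referred to human reviewer
```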
These efforts will matter because, increasingly, agencies and entire governments will face the challenge of how to instruct complex machines that will work across domains and agency jurisdictions, aggregate data, and guide human decisions. Government agencies often struggle even to update conventional information technology infrastructure, so the ability to integrate artificial intelligence into administrative tasks may seem far-fetched.
Yet ironically, such weakness could strengthen the case for using systems that adapt and learn. Such systems may prove crucial to reducing the gap between a machine’s capacity and that of a person familiar with an agency’s culture and organizational routines. As a general matter, the more computer systems that perform administrative tasks become adaptive and capable of modifying themselves, the more likely they are to avoid the problems of efficacy and cost that sometimes plague government information technology projects.
But as software becomes more analytically sophisticated, and in particular more adaptive, to the point of being able to rewrite much of its own code, it will become more difficult to predict longer-term consequences, ranging from subtle changes in function to unexpectedly rapid growth in analytical capacity. As machines become more capable of optimizing to achieve the goals we articulate, higher stakes attach to how we articulate those goals and the trade-offs we allow. Crucial to our ability to navigate these dilemmas will be a cadre of lawyers and policymakers who understand artificial intelligence, its possibilities and limits, and particularly its capacity to adapt in unexpected ways.
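To see why goal articulation carries such weight, consider one more minimal Python sketch, in which every name and number is a hypothetical stand-in: an optimizer told only to maximize throughput selects a very different operating point from one whose objective makes the accuracy trade-off explicit.

```python
# Hypothetical illustration of why goal articulation matters: the same
# optimizer, handed two different objective functions, selects very
# different operating points. All names and numbers are invented.

# Candidate operating points for a claims-processing system:
# (cases decided per day, error rate among those decisions)
candidates = [(500, 0.02), (900, 0.08), (1400, 0.20)]


def naive_objective(throughput: float, error_rate: float) -> float:
    # Goal as carelessly articulated: "decide as many cases as possible."
    return throughput


def tradeoff_objective(throughput: float, error_rate: float,
                       penalty: float = 10000.0) -> float:
    # Goal with the trade-off made explicit: each point of error rate
    # costs `penalty` units of throughput-equivalent value.
    return throughput - penalty * error_rate


print(max(candidates, key=lambda c: naive_objective(*c)))     # (1400, 0.2)
print(max(candidates, key=lambda c: tradeoff_objective(*c)))  # (500, 0.02)
```

In both cases the machine optimizes faithfully; what changes the outcome is how the goal and its trade-offs were written down.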
Lawyers and policymakers will almost certainly need to adjust their approaches to using automation in the administrative state, since different scenarios involving automation are possible, and some will prove far more difficult to manage than others. What makes little sense is to ignore the dilemmas that society will confront as the administrative state comes increasingly to rely on automated systems. Nor is it justified to assume that human decision-making is so fundamentally flawed that it must be tamed by computer systems.
At its core, the administrative state is about reconciling calculations of social welfare with procedural constraints. It is an enterprise that pivots in subtle and profound ways on human institutions, assumptions, and aspirations—however imperfectly fulfilled—for deliberation.
An alternative that promises to make the regulatory process vastly more tractable, technically precise, and less messy by leaning on algorithms and neural networks will likely remain alluring, because collective human decisions are as messy and imperfect as human societies themselves. The biggest risk associated with automation is to assume that most of what concerns the administrative state can be made simpler, more predictable, cheaper, and more effective without any trade-offs. Whether that perspective originates from a deep-seated view that governing is simple or from the seemingly anodyne choices made by a software engineer deciding how to visually present the results of a complex deep learning algorithm, the problem with that perspective is that it elides precisely the sort of deliberation about the nature of social welfare that justifies the administrative state in the first place.
This essay, part of a four-part series, draws on Justice Cuéllar’s forthcoming book chapter, Cyberdelegation and the Administrative State, which formed the basis of his remarks delivered at the annual regulation dinner at the University of Pennsylvania Law School.