Using Artificial Intelligence in Administrative Agencies

ACUS issues a statement to help agencies make more informed decisions about artificial intelligence.

Federal agencies increasingly rely on artificial intelligence (AI) tools to do their work and carry out their missions. Nearly half the federal agencies surveyed for a recent report commissioned by the Administrative Conference of the United States (ACUS) employ or have experimented with AI tools. These agencies have used AI tools across an array of governance tasks, including adjudication, enforcement, data collection and analysis, internal management, and public communications.

Agencies’ interest in AI tools is not surprising. These tools promise to improve the accuracy and efficiency of government functions ranging from regulatory enforcement to benefits adjudication.

But AI tools also present potential problems for governance. If used extensively and without care, AI tools could hollow out agencies’ human expertise. Agency personnel might grow too trusting of AI tools or defer too readily to the tools’ determinations in fields that call for the exercise of human discretion. 

Artificial intelligence tools might also increase opacity in public decision-making when agencies rely on tools they cannot adequately explain. And AI tools might sometimes exacerbate human biases, whether because of their designs or the data on which the tools rely. 

Beyond these concerns, AI tools present practical challenges for agencies related to privacy, data security, technical capacity, procurement, data usage, oversight, and accountability.

For scholars and students of AI, these are not new concerns. They have been discussed and debated for years. Despite the consideration they have received, however, problems persist. 

For instance, a 2018 study by researchers Joy Buolamwini and Timnit Gebru measured the accuracy of three commercial gender classification algorithms. Buolamwini and Gebru found that all three algorithms misclassified lighter-skinned males less than 1 percent of the time but misclassified darker-skinned females between 21 percent and 35 percent of the time. 

The stark discrepancy revealed by Buolamwini and Gebru’s research illustrates the need for care in deploying AI tools, particularly as federal agencies’ uses of such tools rapidly expand.

For that reason, late last year ACUS convened a committee to identify issues agencies ought to consider when adopting, procuring, or modifying AI tools and when developing procedures for their use and regular monitoring. The committee included dozens of ACUS members, along with academic and private-sector experts on different aspects of agencies’ use of AI.

The committee drew on two ACUS-commissioned reports to inform its work: one by law professors Daniel Ho, David Freeman Engstrom, and Catherine Sharkey and Supreme Court of California Justice Mariano-Florentino Cuéllar, and the other by University of Pennsylvania law professor Cary Coglianese.

The committee convened four times, worked through several draft statements, and ultimately produced a ten-page statement that ACUS adopted during its December plenary session. The ACUS statement provides a series of factors for agencies to consult when making decisions about AI tools. 

ACUS divided the statement into nine subsections: transparency, harmful bias, technical capacity, obtaining AI systems, data, privacy, security, decisional authority, and oversight. Each subsection covers a different set of issues agencies will confront when making decisions about AI tools, identifies specific risks, and suggests strategies for mitigating them.

Recognizing the wide variety of AI tools and their uses, the statement stops short of instructing agencies about which specific tools or methods to employ in addressing particular problems. It respects that agencies might reasonably emphasize different considerations in making decisions about whether or how to use AI tools. Agencies may, for example, have different priorities and constraints, or need to employ the technologies for different purposes.

For instance, in the subsection on decisional authority, ACUS does not prescribe a definitive test for ascertaining when an agency has delegated too much decision-making responsibility to an AI tool. Instead, it highlights the real risks that come with relying on an AI tool to make certain decisions. Those risks include the possibility that human operators will devolve too much responsibility to, or place too much trust in, the digital tools. In some instances, the public might also react negatively to having certain decisions made by AI tools rather than by humans.

The ACUS statement is clear that agencies ought to consider these risks when making decisions about what AI tools to deploy and how to deploy them, but it is silent about how agencies ought to balance those risks against countervailing considerations such as efficiency and predictability in decision-making. The “right” way to manage those tradeoffs is, ultimately, a question of policy, to be decided on an agency-by-agency basis. 

ACUS’s goal in adopting the statement is simply to make it more likely that agencies will resolve those policy questions with a fuller understanding of the many values at stake in decisions about AI tools.

There will be those who wish the statement provided more precise and tailored guidance about the right and wrong ways to use specific AI tools or even the circumstances in which they should be deployed. But the committee made a conscious choice to avoid those questions. 

Agencies’ uses of AI tools are developing rapidly, as are the tools themselves. Recommendations aimed at specific practices or existing technologies would have a comparatively short shelf life. By contrast, ACUS grounded its statement in familiar administrative law principles—such as transparency, accountability, and reason-giving—and familiar objectives of agency administration—such as ensuring privacy and security and building institutional capacity. ACUS intends the statement to remain relevant and useful for as long as those key principles and objectives do.

If agencies are attentive to the considerations laid out in the statement, they will make better informed decisions about AI tools, be more alert to the values at stake in selecting, obtaining, and deploying those tools, and be better equipped to monitor and understand the consequences of deploying them in different environments.

The statement should help agencies use artificial intelligence tools in ways that maximize their potential benefits and reduce the attendant risks. Given the growing ubiquity of AI in federal agencies, that is an outcome everyone should hope for.

Mark Thomson

Mark Thomson is a deputy research director at the Administrative Conference of the United States.

John F. Cooney

John F. Cooney is a senior fellow at the Administrative Conference of the United States. 

The views expressed in this essay are those of the authors and do not necessarily represent the views of the Administrative Conference or the federal government.

This essay is part of a seven-part series, entitled Improving the Accessibility and Transparency of Administrative Programs.