Retooling the Acquisition Gateway for Responsible AI

For government to make the most of artificial intelligence, it needs changes throughout the procurement lifecycle.

The federal government’s demand for artificial intelligence (AI) systems far exceeds its in-house capacity to design, develop, field, and monitor this technology at scale. Accordingly, many if not most of the AI systems used by government will be procured through contracts with technology firms. This outsourcing of algorithmic governance brings many advantages; chief among them is the government’s ability to capitalize on industry’s innovation, institutional know-how, and high-skilled workforce.

But acquiring AI is not business as usual. Procurement law—like other areas of law—will need retooling to respond to the unique opportunities and challenges of algorithmic governance.

Beyond bits and bytes, AI systems are infused with value-laden tradeoffs among what is technically feasible, socially acceptable, economically viable, and legally permissible. Without proper planning and precaution, the government may acquire AI with embedded policies that align with commercial interests but are shrouded in secrecy, prone to bias, and at odds with norms of good governance. The fact that AI systems are virtually unregulated in the private market only exacerbates these concerns.

The opacity of acquired AI systems, for example, might violate constitutional and statutory requirements for government transparency and accountability. Even if those thresholds are met, the inputs and outputs of AI systems may violate antidiscrimination norms, privacy laws, and domain-specific legal constraints. More generally, public anxieties around big data, big government, and big tech converge in the acquisition gateway in ways that can derail specific agency initiatives or undermine the legitimacy of algorithmic governance.

To be sure, procurement law cannot solve all of the challenges ahead. But any reformist agenda aimed at addressing the challenges will be hazardously incomplete without also retooling procurement law. More than a marketplace, the acquisition gateway can be a policymaking space to enable, and check, the ambitions of algorithmic governance.

Toward those ends, I have elsewhere prescribed a pragmatic agenda for “acquiring ethical AI,” keyed to four phases of the procurement lifecycle: (1) acquisition planning; (2) market solicitation; (3) bid evaluation and source selection; and (4) contract performance. Some of the main points to be addressed in each phase include the following:

  • Acquisition Planning: Unlike conventional software and information technology, AI has unique risks pertaining to safety, transparency, accountability, bias, and privacy that are not captured in existing acquisition planning policies or protocols. To address this gap, tailored AI risk assessments should be required as a matter of law. Skeptics may worry that AI risk assessments will create a bureaucratic drag on AI acquisitions. To some extent, however, that is the point: to carve time and space for critical deliberations that otherwise may not occur or would come too late. Especially under current market conditions, it would be irresponsible for the government to acquire AI solutions without rigorously screening for risks relating to safety, discrimination, privacy, transparency, and accountability. Furthermore, humans interacting with an AI system may reject its outputs or engage in (risky) compensating behaviors if they do not trust the technology. Done right, AI risk assessments can set the foundations for mission success and AI trust, both inside and outside of government.
  • Market Solicitation: The dividends of AI risk assessments extend beyond the planning phase. Most pertinent here, the government should recast any identified risks as focal points in its market requests for information and bid proposals. For example, agencies can ask prospective vendors to describe how their developmental protocols and practices enable end-to-end auditability of their AI solutions. Moreover, vendors can be asked to describe whether and how their AI solution will be explainable, interpretable, nondiscriminatory, and so on.
  • Evaluation and Source Selection: Under existing regulations, agencies must evaluate vendor proposals solely on the criteria pre-specified in the relevant contract solicitation. Thus, agency officials will need to include ethical AI requirements in contract solicitations if the government intends to award contracts even partly on that basis. Moving forward, ethical AI standards could be established—such as by the National Institute of Standards and Technology—and contract awards conditioned on compliance with such standards. Alternatively, or in addition, ethical AI criteria could be factored into a contracting officer’s pre-award responsibility determination as threshold requirements for doing business with the government. For example, in connection with a particular contract or class of acquisitions, prospective vendors could be required to allow independent third-party auditing of their proposed AI solutions. Vendors could also be required to waive trade-secrecy claims under certain conditions or in certain contexts, such as in adjudicatory settings where the government must provide an explanation for an AI output.
  • Contract Performance: Even if the recommendations above are duly implemented, the operational challenges during contract performance will be significant and varied. Most of the challenges are rooted in the government’s market dependencies, which are exacerbated by informational, cultural, and regulatory asymmetries. For instance, when acquiring commercial off-the-shelf AI solutions, the government’s data rights are significantly limited. Consequently, the government may forfeit crucial opportunities to address its legal and operational needs, both at the time of purchase and thereafter. By contrast, customized AI systems may be better suited for the government’s missions and lifecycle needs. But this gives rise to different challenges. Among them, the government cannot lawfully outsource inherently governmental functions. Although this is a notoriously fuzzy and forgiving constraint, it could—and arguably should—prevent the government from devolving value-laden policy choices to vendors in the AI development process.

Although there are no simple solutions, the status quo will not do. If the government’s use of AI is to be for the people, and paid for by the people, then it must be safe, fair, transparent, and accountable to the people. By centering ethical AI across the procurement lifecycle, agency officials and vendors will be required to think more holistically—and competitively—about the AI tools passing through the acquisition gateway for governmental use.

David S. Rubenstein is the James R. Ahrens Chair in Constitutional Law and Director of the Robert Dole Center for Law and Government at Washburn School of Law.

This essay is part of a nine-part series entitled Artificial Intelligence and Procurement.