The Procurement Path to AI Governance

Procurement standards could promote responsible use of artificial intelligence by government.

The growing interest in and adoption of artificial intelligence systems by all levels of government have inspired significant debate.

AI has much to offer government, as automated tools can make officials' decision-making processes faster, more precise, and more predictable. But algorithms, if not implemented carefully, risk cutting humans out of decision-making processes, introducing or reinforcing biases in those decisions, and depriving the public of insight into how systems of government work.

Although those dangers are not unique to the use of AI, the sophisticated, black-box nature of many machine-learning systems has engendered a degree of public suspicion over their use. That suspicion demands that government officials take care when developing and deploying AI tools.

For agencies that rely on outside contractors to build and operate their AI systems, that care should begin at the very moment a contract with a vendor is negotiated and signed. The procurement process itself, in other words, can provide an important path to AI governance.

Concerns about the irresponsible use of AI tools are hardly unwarranted. Undetected defects and biases in algorithms have at times led governments to deny people welfare benefits erroneously and to subject individuals to unfair and unjustified arrests, pretrial detentions, financial penalties, and prison sentences.

Responding to these concerns and building public trust in algorithmic governance will take some work, since public officials must be able to demonstrate that their uses of machine learning generate accurate, unbiased, and consistent results. That endeavor will require sharing with the public information about what decisions are informed by algorithms, how those algorithms were developed, what data inputs they consider, and what they do with that data.

Providing adequate transparency may prove challenging, however, when AI tools used by state and federal agencies are created by outside contractors. When faced with requests for transparency, those companies have sometimes invoked trade secret protection to try to preserve the proprietary nature of their algorithms’ operations and underlying data.

Failing to push back on that secrecy can pose legal as well as ethical quandaries for government. For instance, a federal court in Texas found enough evidence to proceed to trial on a due process claim brought by public school teachers against their district, which had decided whom to terminate based in part on performance evaluation scores generated by a privately designed algorithm.

The creator of the Texas school district’s algorithm—a private firm hired by the district—claimed trade secret protection and refused to disclose its methodology for calculating the teachers’ evaluation scores. The court accepted that claim but reasoned that, unless the school district could find some other way to let teachers test the accuracy of their scores, its “policy of making high stakes employment decisions based on secret algorithms” would be deemed “incompatible with minimum due process.”

As we have discussed elsewhere, governments are not without recourse in balancing their desire to reap the benefits of AI tools with their legal and ethical obligations to use those tools in a transparent, accountable, and unbiased manner.

One administrable path forward is through the creation and implementation of AI-specific procurement standards, which set the terms under which a public body will acquire services from private contractors. Governments can craft their contracts with vendors of AI systems to include specific principles by which contractors must abide in designing and operating algorithms for public-sector use. Those principles can also outline expectations about what information vendors must be prepared to share with government agencies and the public.

This approach is far from unprecedented, as public procurement has regularly been employed as a tool for advancing policy goals. For instance, agencies at various levels of government must give certain preferences to female-, minority-, and veteran-owned businesses in awarding contracts. They must also follow “green” procurement rules that prioritize suppliers who comply with specified environmental standards.

Similarly, governments can encourage the adoption of certain best practices for the development and acquisition of AI by incorporating provisions into their contracts that mandate conformity with those norms.

To determine what those standards should be, agencies can look to existing frameworks for the responsible, transparent, and fair use of AI that have been produced at the local, state, federal, and international levels, or they can craft new statements of principles to guide how they expect AI systems to be created and used. Agencies can then distill those principles into concrete metrics and standards to which AI vendors must adhere as a condition of their public contracts.

Turning to procurement as a vehicle for promulgating good-government values in AI tools offers a number of advantages. Most immediately, it allows agencies to address directly the transparency and explainability questions that arise when they use technologies developed by private firms. Contracts can require vendors to accept limited waivers of trade secret protections, so that agencies can monitor the algorithms they deploy for inaccuracies and biases and so that affected members of the public can learn, at a minimum, what inputs those algorithms analyze and what results they generate.

Beyond offering minimal assurance that government agencies can justify their use of AI tools to judges and members of the public, procurement can promote responsible AI practices more broadly. It can furnish a means of AI governance by contract.

Governing AI by contract would provide flexibility, too, allowing agencies to tailor contractual obligations to the specific circumstances of each use of AI and adapt those standards as norms and technology evolve.

By being intentional about the procurement of AI tools, government may also produce positive downstream effects throughout the industry. In other fields, such as data encryption and energy-efficient building standards, standard-setting via government contracting has helped nudge private-sector companies to match those norms.

Here, too, procurement may help diffuse norms of algorithmic transparency and fairness, beginning with contractors that serve both public- and private-sector clients. These large contractors may find it more efficient to follow the same best practices in building and operating all of their algorithmic systems, even those not subject to government procurement policies. As more vendors embrace ethical values and principles of AI design, those norms will become more mainstream, producing ripple effects across the industry.

Better AI governance through public procurement could still prove successful, of course, even if it only brings about reform in the public sector. The power of the state to affect individuals’ lives can create uniquely heightened concerns about governmental use of algorithms, particularly when those tools’ inner workings are hidden from the public. It is thus especially important for government agencies to ensure that their uses of AI systems satisfy principles of explainability and fairness and conform to legal obligations such as due process and equal protection.

Public bodies can work toward those aims by using procurement to set standards ex ante for the transparent and ethical development and implementation of machine learning in government. Doing so will enable regulators to enjoy the benefits of AI-informed decision-making while still adhering to good-government values such as openness, evenhandedness, and accountability.

Procurement ultimately offers a path toward a governmental AI paradigm that can more readily garner the public’s support and trust.

Lavi M. Ben Dor is a law clerk to the Honorable Kent A. Jordan, United States Court of Appeals for the Third Circuit. This essay reflects the views of this author in his individual capacity and not those of Judge Jordan or the Court.

Cary Coglianese is the Edward B. Shils Professor of Law and Professor of Political Science at the University of Pennsylvania, where he directs the Penn Program on Regulation and serves as the faculty advisor to The Regulatory Review.

This essay is part of a nine-part series entitled Artificial Intelligence and Procurement.