Robot Regulators Could Eliminate Human Error

Scholar examines what a world of regulation by robot might look like—an innovation that could be just around the corner.

Long a fixture of science fiction, artificial intelligence is now part of our daily lives, even if we do not realize it. Through the use of sophisticated machine learning algorithms, for example, computers now automatically filter spam messages out of our email. Algorithms also identify us in our photos on Facebook, match us with prospective partners on online dating sites, and suggest movies to watch on Netflix.

These uses of artificial intelligence hardly seem troublesome. But should we worry if government agencies start to use machine learning?

Complaints abound even today about the uncaring “bureaucratic machinery” of government. Yet seeing how machine learning is already replacing jobs in the private sector, we can easily imagine a literal machinery of government in which decisions once made by human public servants are increasingly made by machines.

Technologists warn of an impending “singularity,” when artificial intelligence surpasses human intelligence. Entrepreneur Elon Musk cautions that artificial intelligence poses one of our “biggest existential threats.” Renowned physicist Stephen Hawking eerily forecasts that artificial intelligence might even “spell the end of the human race.”

Are we ready for a world of regulation by robot? Such a world is closer than we think—and it could actually be worth welcoming.

Government agencies already rely on machine learning for a variety of routine functions. The Postal Service uses learning algorithms to sort mail, and cities such as Los Angeles use them to time their traffic lights. Uses like these seem relatively benign, but machine learning could also be used to make more consequential decisions. Disability claims might one day be processed automatically with the aid of artificial intelligence. Pilot licenses could be awarded based on the safety risks that complex algorithms predict each applicant poses.

The Environmental Protection Agency is already exploring learning algorithms to help make regulatory decisions about which toxic chemicals to control. Faced with tens of thousands of new chemicals that could potentially harm human health, federal regulators have supported the development of a program that prioritizes which of the many chemicals in production should undergo more in-depth testing. By some estimates, machine learning could save the EPA up to $980,000 per toxic chemical positively identified.
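To make the idea concrete, the short sketch below shows how such a prioritization step might work in principle. It is purely illustrative, not the EPA’s actual system: the random forest model, the synthetic chemical descriptors, and the risk scores are all assumptions chosen for the example.

```python
# Hypothetical sketch of ML-based chemical prioritization.
# Not the EPA's actual system: the model choice, descriptors,
# and data here are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=42)

# Stand-in training data: descriptors for 500 chemicals whose
# toxicity is already known from past in-depth testing
# (label 1 = toxic, 0 = not toxic).
X_known = rng.normal(size=(500, 4))
y_known = (X_known[:, 0] + 0.5 * X_known[:, 2]
           + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Score untested chemicals by predicted probability of toxicity,
# then queue the highest-risk ones for in-depth testing first.
X_untested = rng.normal(size=(10, 4))
risk = model.predict_proba(X_untested)[:, 1]
for rank, idx in enumerate(np.argsort(risk)[::-1], start=1):
    print(f"Testing priority {rank}: chemical #{idx} "
          f"(predicted toxicity risk {risk[idx]:.2f})")
```

The point of a ranking like this is triage: a regulator with a fixed testing budget spends it where the predicted risk is highest, rather than working through chemicals in arbitrary order.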

It’s not hard, then, to imagine a day when even more regulatory decisions are automated. Researchers have shown that machine learning can lead to better outcomes in determining whether parolees ought to be released or domestic violence orders should be imposed. Could the imposition of regulatory fines one day be determined by a computer instead of a human inspector or judge? Quite possibly so, and it would be a good thing if machine learning could improve accuracy, eliminate bias and prejudice, and reduce human error, all while saving money.

But can we trust a government that bungled the initial rollout of Healthcare.gov to deploy artificial intelligence responsibly? In some circumstances, we should.

After all, no matter how advanced technology becomes, humans will ultimately retain control over the value choices embedded in any robotic regulatory machine. These machines depend on humans to set their parameters and input instructions, and humans can always pull the plug on systems that go awry. Of course, to avoid ever reaching that point, government agencies will need to build up their own human expertise to understand artificial intelligence systems and how they work. At times, appropriate safeguards will be needed, and responsible officials will need to ensure that the “big data” their algorithms process does not carry built-in historical human biases.

Nevertheless, just as private-sector applications of artificial intelligence promise to improve inventory control, make cars safer, and enhance medical decision-making, a world in which regulators and robots work together could also be much better and safer.

Regulating by robot need not conjure up a dark future to be feared. Rather, it should be approached with cautious optimism. Government needs to strengthen its human expertise so that it can understand and apply machine learning responsibly, and it needs to engage in a public process of issuing guidance that clarifies the core values algorithms should optimize, including how trade-offs between competing values should be made. By taking steps like these now, government will be better positioned to take advantage of machine learning’s potential to bring to the public sector the same kinds of improvements we are witnessing in the private sector.

Cary Coglianese

Cary Coglianese is the Edward B. Shils Professor of Law, Professor of Political Science, and Director of the Penn Program on Regulation at the University of Pennsylvania Law School. He is co-author of the Georgetown Law Journal’s forthcoming article, Regulating by Robot: Administrative Decision-Making in the Machine-Learning Era, as well as one of the winners of the Fels Policy Research Initiative’s inaugural collaborative grants in recognition of his project, Optimizing Government: Policy Challenges in the Machine Learning Age. Coglianese is the founder of and faculty advisor to RegBlog.

This essay originally appeared in the San Francisco Chronicle on May 5, 2016.