Algorithmic Safeguards for the People and by the People

The White House seeks public input on a “bill of rights for an automated society.”

We live our lives enmeshed in applications of artificial intelligence (AI): virtual assistants such as Siri or Alexa listen to us, search bars finish our thoughts for us, and social media sites curate the information we consume. Decisionmakers may also use AI when determining whom to hire, who receives loans and medical support, or even whom to hold without bail.

Scholars, activists, government leaders, and members of the public worry that these technologies threaten constitutional rights, further discrimination, and enable oppression.

In October 2021, the White House Office of Science and Technology Policy (OSTP) responded to these worries by announcing plans for a “digital bill of rights” that would “clarify the rights and freedoms we expect data-driven technologies to respect.” In an opinion piece first published in WIRED, Eric Lander, Director of the OSTP, and Alondra Nelson, OSTP Deputy Director for Science and Society, noted that the creation of a new bill of rights for an automated age is only a first step; government agencies must also decide how those rights are to be enforced.

The original Bill of Rights was meant to protect Americans against government encroachment on individual liberties, but, as NPR reported as early as 2013, these protections often cannot hold up against the powers of new technology, whether wielded by government or private industry. It is not clear what binding legal effect, if any, the digital bill of rights planned by the White House would have.

To help people understand how AI has affected the legal landscape, the OSTP hosted six virtual panel discussions on AI’s impact on consumer rights and protections, the criminal justice system, civil law, democratic values, social welfare, and the health care system. Each session brought together representatives from industry, academia, and advocacy groups, as well as members of the public, to discuss both the promises and pitfalls of artificial intelligence.

Panelists addressed concerns that AI tools may perpetuate the biases implicit in their creation, noting, however, that AI has the potential to be more equitable than the systems it may replace.

Frida Polli, the CEO of the AI-driven recruiting platform pymetrics, explained that standardized screening tools and tests have been used in hiring for decades, often leaving out marginalized groups. According to Polli, algorithmic tools could lead to more equitable hiring practices because they can provide a more individualized and nuanced view of a candidate than a screening exam can.

Similarly, Sean Malinowski, a former chief of detectives at the Los Angeles Police Department, argued that, for those worried about police bias and corruption, automated systems could be more equitable because they limit officer discretion.

Some panelists questioned whether a technology-oriented bill of rights is the right approach to regulating data at all.

In one session, Jumana Musa, the director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers, argued that discussions of what kinds of technology can be used, and how, are “not going to ever address the underlying issues” of the criminal justice system. Racial justice, she contended, would be better served by decriminalizing mental health issues, addiction, and poverty. Instead of asking what guardrails should be placed on technology, Musa suggested, policymakers should consider whether technology is even a viable solution to the underlying problem.

In another session, Fabian Rogers, a community advocate in New York City, emphasized that policymakers should not get caught up in regulating the technology but instead should attempt to fix the underlying systems that technology is meant to support.

Other panelists worried that systemic change, although necessary, would take time, and they suggested that a digital bill of rights would be necessary to safeguard civil liberties during the transition period.

Panelists generally agreed that public input at all stages of development is critical to ensuring that technology is used fairly. Rogers suggested that governments create oversight boards made up of multiple community stakeholders who can decide which technologies are implemented and how they are used. One panelist emphasized that community members who choose to engage in these discussions should be compensated; otherwise, only those who can afford to take time off from work to participate would have their voices heard.

The OSTP also solicited information about AI-enabled biometrics, technologies that use facial features, physical movements, heart rate, and other physical indicators to identify people and infer information about them. The OSTP hosted two listening sessions on public and private uses of biometric technologies and requested input from anyone who has ever been affected by them.

Members of the public may email comments to the OSTP at ai-equity@ostp.eop.gov.