Scholar argues that public attitudes about facial recognition tools should inform governmental use of the technology.
Most people use facial recognition every day. Their phones scan their faces to unlock the screen. Their social media apps recognize their faces when suggesting photo tags.
Government officials also use facial recognition every day. But in a forthcoming article, Matthew Kugler of Northwestern Law argues that public attitudes and expectations with respect to facial recognition should limit the government’s use of this technology.
Facial recognition is a type of artificial intelligence that matches images of people’s faces to other faces in a database. Kugler focuses his discussion on public sector uses of the technology, including in law enforcement investigations, security screenings at government offices and facilities, and fraud detection.
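For readers unfamiliar with the underlying mechanics, the sketch below illustrates the one-to-many matching step at the core of these systems. It is a minimal illustration only, assuming the open-source face_recognition Python library; the filenames and the small "database" of enrolled photos are hypothetical and are not drawn from Kugler’s article.

```python
# Minimal sketch of one-to-many face matching (illustrative only).
# Assumes the open-source `face_recognition` library; all filenames are hypothetical.
import face_recognition

# Build a small "database" of known face encodings from enrolled photos.
known_files = ["person_a.jpg", "person_b.jpg"]  # hypothetical enrollment images
known_encodings = []
for path in known_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:
        known_encodings.append(encodings[0])  # one face per enrollment photo

# Encode the probe image (for example, a still taken from surveillance footage).
probe = face_recognition.load_image_file("probe.jpg")  # hypothetical probe image
probe_encodings = face_recognition.face_encodings(probe)

if probe_encodings and known_encodings:
    # Compare the probe face against every enrolled face and report the closest match.
    distances = face_recognition.face_distance(known_encodings, probe_encodings[0])
    best = distances.argmin()
    # A lower distance means a closer match; the library's default tolerance is 0.6.
    print(f"Closest match: {known_files[best]} (distance {distances[best]:.2f})")
```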
Kugler describes how the police typically use facial recognition technology to match images not only against government-owned databases but also against publicly available photos from social media platforms. The Detroit Police Department, for example, connects its facial image database with cameras at various locations in the city, including health clinics, schools, and apartment buildings. In addition, many public schools use facial recognition tools to confirm that those entering school facilities are students or staff.
These uses have sparked concern about the reach of facial recognition tools. Some critics view the technology as improper government surveillance, while others worry that it erodes free speech and personal autonomy, Kugler observes.
Some individuals might also be surprised to learn that private companies working for government agencies have scraped their facial images from public websites. They might likewise feel that their privacy is infringed when their faces are among the millions that government officials analyze to identify a law enforcement suspect.
In addition, facial recognition technologies tend to produce disproportionate rates of false positives and false negatives for certain ethnic groups, Kugler notes. He explains that these accuracy concerns compound a deeper problem of “automation bias,” which can arise when officials rely on technology as a replacement for other ways of gathering and analyzing information.
In light of concerns over government uses of facial recognition, Kugler makes the case for new regulations. As it stands, the federal government has not passed any legislation restricting the technology, and state and local laws are “scattered,” according to Kugler.
He argues that lawmakers need guidance on what types of facial recognition use cases are “reasonable” to the public. “It is helpful to know whether 20 percent, or 80 percent, of people consider the information gathering an undue intrusion,” he writes.
Kugler reports that empirical studies on this question demonstrate that the public’s comfort with facial recognition tools varies with the context in which they are used. People are more comfortable with the government’s use of facial recognition technology to investigate major crimes, but they take greater issue with other government uses, such as tracking attendees of Alcoholics Anonymous meetings or political rallies.
Kugler contends that these public attitudes should inform regulators’ approaches to reining in facial recognition. Specifically, he suggests that the discomfort of even a substantial minority could form a legislative “floor”: lawmakers could ban those uses of the technology that make a substantial share of the public uncomfortable and then decide whether and how to limit other uses.
Kugler also recommends that the government be permitted to use facial recognition technology only in limited circumstances: to verify identity in places where the government already demands identity verification, such as airports; to scan for security risks within government buildings, including schools; and to facilitate security at major public events.
When law enforcement uses facial recognition, Kugler proposes rules similar to those found in the Wiretap Act, a federal law that governs when and how law enforcement can intercept people’s communications. Before a court will issue a warrant authorizing such interception, the Wiretap Act requires prosecutors to demonstrate that conventional law enforcement techniques would not suffice to achieve their objective.
Even if eventual legislation does not require law enforcement to obtain a warrant before using facial recognition technology for identity verification, Kugler argues that new laws should at least mandate additional steps before governmental authorities engage in live scanning, a form of facial surveillance that captures images of individuals entering a particular area and matches them against a database in real time. Such mandated steps “would prevent live tracking from becoming an everyday tool while preserving it as an option for more serious offenses,” Kugler claims.
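To make the distinction concrete, the sketch below shows the basic shape of a live-scanning loop: camera frames are encoded and compared against a watchlist as they arrive, rather than in a one-off investigative query. It is an illustrative assumption-laden sketch, relying on OpenCV and the face_recognition library with a hypothetical, empty watchlist; it is not a description of any system discussed in Kugler’s article.

```python
# Illustrative sketch of a live-scanning loop (not drawn from Kugler's article).
# Assumes OpenCV (cv2) and the face_recognition library; the watchlist is hypothetical.
import cv2
import face_recognition

watchlist_encodings = []  # hypothetical pre-computed encodings of enrolled faces

capture = cv2.VideoCapture(0)  # open the default camera
try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # face_recognition expects RGB images; OpenCV captures frames in BGR order.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for encoding in face_recognition.face_encodings(rgb):
            # Compare each detected face in the frame against the watchlist.
            matches = face_recognition.compare_faces(watchlist_encodings, encoding)
            if any(matches):
                print("Watchlist match detected in this frame")
finally:
    capture.release()
```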
Kugler concludes that, although facial recognition is here to stay, “Americans reject universal face surveillance.” He argues that preserving individual privacy will require prohibiting certain governmental uses of the technology while allowing others with appropriate guardrails.