Scholars examine the evolving regulatory landscape of facial recognition technology.
In the past few decades, facial recognition has emerged as one of the most powerful and controversial tools of artificial intelligence. Capable of identifying and verifying individuals by analyzing their facial features, facial recognition technology has become embedded in everyday life, transforming tasks such as unlocking smartphones and screening airline passengers.
Facial recognition technology operates through a multi-step process of detection, alignment, and matching. The system measures up to 68 distinct data points, such as the corners of the eyes, the bridge of the nose, and the contours of the jaw, to create a detailed map of facial features. This “faceprint” can then be compared against a database of known faces. Modern systems employ deep learning algorithms that improve accuracy through exposure to millions of facial images.
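For readers curious how that detect, encode, and compare pipeline looks in practice, the sketch below uses the open-source Python face_recognition library, which wraps dlib’s 68-point landmark detector and a deep-learning face encoder. It is a minimal illustration, not a depiction of any vendor’s or government’s system, and the image file names are placeholders.

```python
# Minimal sketch of a detect-encode-compare pipeline using the open-source
# face_recognition library. File names below are hypothetical placeholders.
import face_recognition

# Step 1: detection and alignment -- load images and locate facial landmarks
# (eye corners, nose bridge, jawline, and other reference points).
known_image = face_recognition.load_image_file("enrolled_face.jpg")
unknown_image = face_recognition.load_image_file("query_face.jpg")
landmarks = face_recognition.face_landmarks(known_image)

# Step 2: encoding -- reduce each detected face to a numeric "faceprint"
# (a 128-dimensional vector produced by a deep learning model).
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Step 3: matching -- measure how close the query faceprint is to the
# enrolled one; a distance below the tolerance threshold counts as a match.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
is_match = face_recognition.compare_faces([known_encoding], unknown_encoding, tolerance=0.6)[0]

print(f"Distance: {distance:.3f}  Match: {is_match}")
```

In a deployed system, the single enrolled faceprint above would be replaced by a database of thousands or millions of encodings, and the choice of the matching threshold is one place where the accuracy disparities discussed later in this piece arise.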
The technology has found widespread adoption. Law enforcement agencies increasingly rely on it for criminal investigations, with over 100 U.S. police departments now subscribing to facial recognition services. Private sector use has also expanded rapidly. Many employers use facial scans to track work attendance, and amusement parks such as Shanghai Disneyland allow season pass holders to enter the park by scanning their faces.
The regulatory framework governing facial recognition technology remains fragmented in the United States. Although the Civil Rights Acts of 1957 and 1964 prohibit discrimination based on race, color, religion, sex, and national origin, there are currently no federal constitutional provisions or laws that specifically regulate or restrict the federal government’s use of facial recognition technology or other forms of artificial intelligence.
At the state and local levels, however, several governments have enacted their own regulations. For example, Illinois’s Biometric Information Privacy Act requires companies to obtain written consent before collecting biometric data. In 2019, San Francisco became the first U.S. city to ban government use of facial recognition, followed by other municipalities including Boston and Portland, Oregon.
Proponents of the technology argue that facial recognition serves vital public safety functions. According to recent polling, 46 percent of American adults support law enforcement’s use of facial recognition technology for public safety purposes. The technology has helped solve cold cases, locate missing persons, and prevent criminal activities.
Critics of facial recognition technology, however, raise significant concerns about privacy implications and potential misuse. Civil rights advocates warn that widespread adoption of facial recognition systems threatens individual privacy and could enable mass surveillance. The technology can track individuals’ movements across multiple locations without their consent, potentially chilling free expression and assembly rights.
Technical limitations compound these concerns. A study by the National Institute of Standards and Technology found that leading facial recognition technologies showed error rates up to 100 times higher for Black and Asian faces than for white faces. These disparities have led to misidentifications and wrongful arrests.
As facial recognition technology grows more sophisticated, policymakers face mounting pressure to develop comprehensive regulatory frameworks. Success will require balancing legitimate public safety applications with robust protections for privacy and civil rights. The challenge lies in crafting regulations that promote responsible innovation while preventing misuse and discrimination.
In this week’s Saturday Seminar, scholars discuss current regulations surrounding facial recognition technology.
- In a recent article in the Notre Dame Journal of Law, Ethics & Public Policy, Shlomit Yanisky-Ravid of Fordham University School of Law and practitioner Kyle Fleming raise concerns about privacy rights and civil liberties as facial recognition technology becomes increasingly prevalent. The authors argue that current U.S. constitutional protections under the Fourth Amendment may be insufficient to protect privacy in the digital age, as traditional Fourth Amendment analysis focuses primarily on physical trespass and body searches, not digital surveillance. In light of these inadequacies, Yanisky-Ravid and Fleming recommend tailored regulatory approaches and call for stricter oversight of facial identification applications and a ban on indiscriminate use of the technology.
- In an article in the Washington University Global Studies Law Review, practitioner Christopher Kim argues that although facial recognition technology offers beneficial applications, its potential for abuse, including oppression and privacy violations, requires immediate attention. Kim proposes three main regulatory interventions for minimizing the technology’s potential for discriminatory application. First, Kim recommends increasing transparency in the use of facial recognition technology by requiring that companies seek approval from regulatory bodies for each new proposed use of the technology. Second, Kim proposes a ban on the technology in high-risk contexts involving vulnerable populations, such as minors. Finally, Kim calls for clear remedial measures for misuse and misidentification, including private rights of action and mandatory investigations by independent agencies.
- In a recent article in the North Carolina Law Review, Amanda Levendowski, a professor at Georgetown Law, highlights the inadequacies of current frameworks for regulating facial surveillance technology. Levendowski explains that existing solutions, such as voluntary corporate moratoria and piecemeal local legislation, have failed to prevent the spread of invasive and systemically biased face surveillance technology. She proposes instead using copyright law to target the unauthorized use of copyrighted photographs to build the databases upon which these technologies rely. Although Levendowski concedes that this solution is not perfect, she urges the strategic application of copyright law to hold companies accountable and challenge the unchecked proliferation of facial surveillance while broader federal regulations remain out of reach.
- In a recent article in the DePaul Law Review, Samuel D. Hodge, Jr., a professor at Temple University Beasley School of Law, warns that the advantages of the growing commercial use of facial recognition technology must be weighed against its potential harms. Hodge notes that facial recognition technology can offer enhanced security and tailored consumer experiences, but emphasizes accompanying ethical issues, such as algorithmic bias, privacy invasions, and misuse risks. Hodge also explains that facial recognition technology’s reliance on flawed datasets leads to racial and gender-based misidentifications. Accordingly, Hodge advocates for regulation and business practices that strike a proper balance between useful commercial application and safeguarding individual rights and privacy.
- In a student note published in Stetson Law Review, practitioner Hope Corbit proposes legislation that could regulate companies’ use of facial recognition technology. One aspect of the legislation Corbit discusses is a federal biometric privacy bill to create a set of standards for private companies that contract with the government. Corbit also recommends the creation of an independent biometric privacy safety board that would both generate guidelines for and inspect companies’ policies and procedures on biometric data collection. According to Corbit, this legislation should require all companies using biometric technology to acquire written consent from their users before any information is collected, used, or transferred.
- In a recent article in the Columbia Science and Technology Law Review, Matthew Kugler of Northwestern Pritzker School of Law examines public attitudes toward the government’s use of facial recognition technology. Through three studies, Kugler highlights broad support for its application in addressing major crimes, enhancing security in airports and schools, and streamlining identity verification in secured spaces. Yet Kugler also notes significant discomfort with its general use in public spaces, particularly where it could restrict freedoms such as assembly. Based on these findings, Kugler proposes a tiered approach: non-law enforcement uses should be permitted if sufficiently narrow and targeted, while law enforcement use of facial recognition technology should require a warrant.
The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.