Fighting Discrimination from Unfair Algorithms

By taking enforcement actions, the Federal Trade Commission can set standards for algorithmic fairness and nondiscrimination.

This spring, Federal Trade Commission (FTC) Chair Lina Khan and the heads of three other federal enforcement agencies announced their agencies' authority and commitment to enforce existing laws against discrimination and bias in automated systems. And just last week, the FTC reportedly opened an investigation into OpenAI, the maker of the widely known ChatGPT tool, to determine whether it has engaged in unfair business practices.

FTC enforcement actions combating algorithmic discrimination would represent a powerful regulatory strategy for protecting consumers. Such actions would build on the agency's ongoing but separate campaign to protect consumer privacy.

With respect to privacy protection, the FTC has not merely enforced the promises companies make in their privacy policies; it has also developed a body of substantive rules to protect consumers based on their reasonable expectations of privacy. These rules function essentially as a common law of privacy, which companies rely on to guide their own privacy practices.

Now, with a growing wave of algorithmic tools and systems, the FTC has another opportunity to lead the charge in protecting consumers from discrimination created by biases in new digital technologies. Extending the FTC’s practice of bringing enforcement actions to police algorithmic harms is a natural next step from its enforcement of privacy violations.

With the rise of surveillance capitalism, companies have been motivated to overlook privacy obligations and collect troves of consumer data to build algorithmic systems and extract profits, often at the expense of consumers' economic or emotional well-being.

The FTC has recognized that once a company has trained an algorithm on illegally obtained or biased data, remedies should not stop at requiring the company to update its privacy and cybersecurity practices. To that end, the FTC has already ordered two companies to delete algorithms they built with wrongfully obtained consumer data. The FTC has also published an advance notice of proposed rulemaking to identify further consumer harms, and potential remedies, related to algorithms.

Rulemaking, however, is notoriously slow, while algorithmic tools—and the discrimination they can produce—are rapidly spreading.

Already, for example, algorithmic systems have led to discriminatory decisions that harm people, such as by denying them access to credit, employment, and housing. Bias pervades the algorithmic technology itself, including facial recognition, speech recognition, and natural language processing. The FTC has warned companies to ensure that their algorithms do not produce biased impacts.

Moving forward, the FTC should bring enforcement actions to crack down on discrimination in algorithmic decision-making systems.

The FTC has authority to enforce the Equal Credit Opportunity Act, an antidiscrimination statute that empowers the FTC to challenge algorithms that play a part in credit discrimination. The FTC also has substantial power to challenge algorithmic discrimination that harms consumers under its authority in Section 5 of the FTC Act to prevent “unfair or deceptive acts or practices.”

A “deceptive” business practice under Section 5 involves a material representation, omission, or practice that is likely to mislead a consumer acting reasonably under the circumstances. The easiest deceptions to pursue would be broken promises. Given the current state of the technology, the FTC could treat a claim that an algorithmic system is “100% bias-free” as per se deceptive. Even absent such sweeping claims, the FTC should still hold companies responsible for the representations and promises they make to consumers about their products' capabilities.

Meanwhile, an “unfair” business practice under Section 5 is one that causes or is likely to cause substantial injury to consumers that they cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or competition.

Pursuing “unfairness” is a trickier approach to take. But the FTC recently used a Section 5 unfairness theory to combat discriminatory business practices. In a case involving automobile sales, the FTC accused an auto dealer's executives of engaging in unfair business practices by imposing higher interest rate markups and extra fees on Black and Latino consumers than on similarly situated white consumers. The Commission reached a $3.3 million settlement with the executives.

In this matter, the FTC demonstrated how it can use its Section 5 unfairness authority to challenge discriminatory practices. Notably, the FTC's complaint included a separate count alleging that the discriminatory financing also violated the Equal Credit Opportunity Act. The FTC argued that the existence of that antidiscrimination statute did not preclude the Commission from also using its Section 5 unfairness authority to pursue discriminatory practices. The FTC's choice to include this count signals its commitment to use “all” of its available authorities to target harmful discrimination, including Section 5. And the FTC's “unfair discrimination” argument would apply in the same way if an algorithm were involved.

Moreover, using the Commission's unfairness authority under Section 5 would provide important benefits in cases involving algorithmic discrimination. The unfairness standard sidesteps the “black box” problem, in which even companies themselves do not know how their algorithms produce discriminatory results. The FTC can establish standards that protect consumers from the “substantial injury” of algorithmic discrimination without having to prove anything about the motives of companies, or of their algorithms.

The unfairness standard also accounts for the inherent power imbalance between companies that deploy algorithmic decision-making systems and the consumers who are subject to them. Consumers often cannot reasonably avoid the algorithms that affect them. In fact, they may not even know that an algorithm is at play.

Ultimately, by taking enforcement actions under Section 5's unfairness standard, the FTC will have the flexibility to weigh any countervailing benefits to consumers or competition and to act when the Commission deems it appropriate. And by bringing enforcement actions to combat algorithmic discrimination, the FTC can not only remedy the practices of the companies it pursues but also put other companies on notice to update their own practices.