Why Metaphors Matter in AI Law and Policy

Scholar warns that figures of speech play an outsized role in shaping artificial intelligence regulation.

Is artificial intelligence (AI) a “critical strategic asset” or a “generally enabling technology”? Is it “our creation” or an “agent”?

It depends on which aspects of the technology one cares about or wishes to regulate, Matthijs Maas, a Senior Research Fellow at the Institute for Law & AI, writes in a new article. He argues that although none of these terms is strictly right or wrong, policymakers must carefully assess and choose the figures of speech they use to refer to AI. For better or worse, these figures of speech could determine the shape that AI policy takes, Maas notes.

Maas defines analogies and metaphors as “communicated framings” that liken a thing or issue to something else and, in doing so, suggest how to respond to it. Such framings are fundamental to human reasoning and policymaking, Maas avers.

Maas offers an “atlas” of 55 terms that policymakers, experts, and the larger public use to refer to AI. Each invokes, explicitly or implicitly, a reference to some other, generally better understood entity or process. Importantly, the terms carry the connotations of, and value judgments about, those other phenomena. Because each of these framings highlights certain qualities, benefits, or risks of AI, it obscures others.

As a result, the 55 terms Maas catalogs can be contradictory. For instance, conceptualizing AI as a “black box” implies a lack of transparency and an inability to comprehend how AI tools function, whereas “organism” presents the “development, evolution, mechanism, and function” of AI tools as discoverable and explainable phenomena.

Whether policymakers will address a given AI-related issue may depend on how “regulatory narratives” frame the issue and whether the framing resonates with policymakers on an intuitive level, Maas suggests. In shaping government perceptions of a technology, metaphors and analogies affect how a government program develops its regulatory priorities.

For example, Maas observes that the U.S. military’s early use of the term “cyberspace” analogized the internet to another domain of conflict, alongside land, sea, air, and space. This strategic rhetoric helped justify the military’s involvement in cybersecurity and the creation of the U.S. Cyber Command under the Department of Defense, Maas contends.

One analogy AI regulators are likely to draw on is that of consumer “products,” according to Maas. A regulatory program that frames AI systems as “products” may imply that the technology can be subsumed under existing frameworks for product safety and for redressing consumer harms.

But this framing is not without risks.

Framing AI as “products” with attendant consumer harms could divert attention from risks present in the development of AI products, Maas argues. It could also obscure dangers AI tools may pose to humans’ fundamental rights, if such rights are not easily conceived as consumer harms.

Maas also predicts that metaphors will play a critical role in legislation and in the courts.

Maas highlights the “battle of analogies” in recent class action lawsuits against the makers of the AI image generators Midjourney and Stable Diffusion. In these lawsuits, the parties disagree over whether the programs are mere “collage tools” that copy and regurgitate human-made images or “art inspectors” that make careful and sophisticated analyses of the human works on which they train. If one of these metaphors gains traction, it could affect how courts rule on whether the AI companies have violated copyrights or made non-infringing “fair use” of human works, Maas suggests.

A perhaps more important issue, Maas notes, is how Section 230 of the Communications Decency Act (CDA) will be applied to AI-generated content. The resolution of the matter could turn on which existing technologies the U.S. Congress or federal courts analogize AI to.

Section 230, passed by the U.S. Congress in 1996, shields web platform operators from liability for carrying illegal content created entirely by third parties. But “information content providers”—those who create or contribute to unlawful content—receive no such protection under Section 230.

If courts interpret AI chatbots such as ChatGPT as, or as similar to, a “search engine,” then the makers of such bots will likely be immune from liability, Maas explains. Given their ability to generate novel content, however, courts could also analogize chatbots to information content providers, which would leave AI companies outside Section 230’s protection and expose them to liability for their chatbots’ outputs. The latter framing could threaten the future viability of many AI products, Maas notes.

Regulators and courts probably have no option but to analogize AI to earlier technologies, Maas recognizes. Indeed, the existing frames associated with those technologies could prove quite helpful in regulating AI, he says. Maas warns, though, that the further removed an analogy is from the unique characteristics of a largely unprecedented technology, the more likely it is to carry dangerous implications.

Maas urges policymakers and lawmakers to analyze the figures of speech about AI that they encounter and to consider whether other metaphors might be more suitable and useful. Importantly, regulators should weigh which features a given metaphor highlights and which it occludes, Maas emphasizes. Otherwise, the hasty adoption of metaphors could inhibit understanding of AI and even produce ineffective and harmful laws, Maas warns.