Current tort law regimes fail to address the nuances of AI innovation and consumer harms.
Artificial intelligence (AI) is ushering in a brave new world. Critics decry computer vision as “killer robots,” denounce autonomous vehicles as unethical, and accuse AI innovations at large of “summoning the demon.” Restrictive regulations are often lauded as a logical line of defense against consumer harms.
But strict liability—a regime that holds manufacturers responsible regardless of fault—is the wrong tool for regulating this growing sector. In a new research paper, I examine the shortcomings of the current tort regime and propose a framework better suited to the public policy objectives of innovation incentives, manufacturer responsibility, and consumer protection.
Strict liability penalizes manufacturers for consumer harms regardless of knowledge or precautions, which stifles innovation. Furthermore, the existing literature overlooks the reality that many of these mistakes—which I call “efficient errors”—are beneficial to innovation and benign to consumers. Classifying all errors as per se inefficient is problematic: the logic is economically unsound, and it would deprive consumers of significant advances in financial literacy, labor, digital pathology, fraud detection, business management, medical care, and the analysis of large datasets across industries.
In many cases, efficient errors are tolerable because AI performance nonetheless surpasses human baselines. AI lacks the biological limitations and cognitive biases of its human counterparts—an AI surgeon will never forget a sponge in a patient’s body, and an AI doctor will more accurately identify patients at high risk of fatal heart attacks. If the data used to train an AI system are rife with human errors, subsequent remedial measures should target the root of the bias—humans—rather than merely the symptoms—the AI.
In other cases, efficient errors are unavoidable because every alternative course of action presents a zero-sum, game-theoretic dilemma in which one party’s gain entails another’s loss. AI technologies must make decisions that compromise between individually and systemically rational choices, which are often at odds. For instance, an autonomous driving system may have to choose between the good of the community, such as avoiding pedestrians, and the good of the individual, such as protecting the driver. Because even human subjects find these choices difficult, such cases should be left to a jury.
In addition, some errors are necessary for AI technologies to learn and improve. Under the exploration-exploitation tradeoff, all decision-making involves a fundamental choice at each step: “exploit” by making the best decision given current information, or “explore” by gathering more information. The choice to explore may produce an immediate outcome worse than the current best alternative yet yield invaluable improvements for the technology, as the sketch below illustrates.
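To make the tradeoff concrete, here is a minimal sketch of an epsilon-greedy strategy, the textbook formalization of exploration versus exploitation in a multi-armed bandit problem. The scenario, parameter values, and function names are illustrative assumptions for this essay, not drawn from the underlying paper.

```python
import random

def epsilon_greedy(true_rewards, epsilon=0.1, steps=1000):
    """Illustrative epsilon-greedy bandit: with probability epsilon,
    explore a random option; otherwise exploit the current best estimate."""
    n = len(true_rewards)
    estimates = [0.0] * n  # running average of observed reward per option
    counts = [0] * n
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            # Explore: deliberately accept a possibly worse immediate outcome.
            choice = random.randrange(n)
        else:
            # Exploit: pick the best option given current information.
            choice = max(range(n), key=lambda i: estimates[i])
        # Noisy observed reward centered on the option's true value.
        reward = true_rewards[choice] + random.gauss(0, 1)
        counts[choice] += 1
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        total_reward += reward
    return total_reward, estimates

# Example: three options with payoffs unknown to the decision-maker.
if __name__ == "__main__":
    total, est = epsilon_greedy([1.0, 2.0, 1.5])
    print(f"total reward: {total:.1f}, learned estimates: {est}")
```

Occasional exploration lowers reward in the short run, but it is the only way the system learns which option is actually best; in the essay’s terms, those short-run losses are efficient errors.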
Lastly, some AI errors pose no harm at all but are misclassified under outdated legal frameworks designed for older technologies. For instance, the National Highway Traffic Safety Administration’s pre-1980s regulations still require all vehicles, including autonomous ones, to be equipped with side-view mirrors—no matter that an autonomous vehicle has no need for them. Manufacturers should not incur penalties for surface violations of outdated regulations.
Regulating AI manufacturers under the negligence rule likewise proves inadequate, making it prohibitively difficult for consumers to recover damages. Negligence may fit traditional products, where a plaintiff can trace causation back to the negligent individual who caused the error, but for AI technologies, the “black box” of AI algorithms obscures the vectors of causation. Under a negligence regime, proving that the AI caused the harm is generally simple, but proving causation by the human agent behind the AI is difficult. As a result, consumers will be unable to meet the outdated causation burden—the linchpin of the case.
An effective tort regime must navigate a middle ground. The current system deals out rough justice—strict liability for AI sold as products and the negligence rule for AI offered as services—and both fail to account for the unique characteristics of AI that ultimately create commercial value.
Instead, I propose a new framework to regulate AI technologies: bestowing corporate personhood. Under this proposal, an AI system would itself be incorporated as a limited liability company (LLC) subject to direct liability, while its human members and managers would face only limited liability for harms resulting from the technology.
Corporate limited liability strikes an optimal balance between manufacturer liability and consumer compensation. Manufacturers are relieved of absolute liability, which enables experimentation and innovation. The AI black box transforms from an insurmountable impediment into an accessible compensation target through an easier causation burden, mandatory corporate insurance, and accountability mechanisms such as “piercing the corporate veil,” whereby the limited liability of human members, managers, and shareholders is lifted to hold those agents responsible for egregiously wrongful acts.
Many critics have resisted reevaluating the tort regime for AI technologies, arguing that calling AI “persons” is “highly counterintuitive” when they lack “additional qualities typically associated with human persons, such as freedom of will, intentionality, self-consciousness, moral agency, or a sense of personal identity.”
Such arguments are non sequiturs. The issue is a legal matter of policy innovation and consumer protection, not philosophical intuition. Corporate personhood does not require proof that AI is equivalent to a natural person. Corporate divisibility permits granting “more, fewer, overlapping,” or disjointed sets of rights and obligations, which circumvents the philosophical objections to giving AI the complete set of rights of full legal personhood. The current tort system needs urgent renovation that considers consumers and manufacturers alike.
Alicia Lai is a student at the University of Pennsylvania Law School.
Portions of this essay draw on the author’s paper, “Artificial Intelligence, LLC: Corporate Personhood for AI,” forthcoming in the Michigan State Law Review.