Robots Are People Too…Maybe

Scholar argues that individual robots should be regulated based on three key traits.

Movies like Transformers and television shows like Westworld invite viewers to see robots as human. Silicon Valley has yet to produce such lifelike entities, but lawmakers are already considering how to assign rights and responsibilities to robots and their creators.

In 2017, the European Parliament proposed an “electronic persons” status for robots, quickly drawing criticism from robotics researchers and legal experts, who called the idea “inappropriate.” Instead of creating a new legal status, Professor Ignacio Cofone of McGill University’s Faculty of Law recommends in a recent paper that lawmakers classify robots and other artificial intelligence entities on a “continuum between tools and people.”

Cofone argues that by determining whether a particular robot most resembles a tool, corporation, animal, child, or adult, regulators can assign legal rights and responsibilities to the robot, its creator, or its user. He contends that legal treatment should depend on three core characteristics: the robot’s ability to interact with the world, the foreseeability of its actions, and the way people perceive it.

The most important of these three characteristics, Cofone writes, is how humans perceive the robot, a quality he terms “social valence.” The more empathy people feel toward a robot—the more human they think it is—the more vulnerable they become in relation to it.

A robot with high social valence could be capable of deceiving people, Cofone warns. It “could pretend to care for our interest” but in fact be programmed to serve “the commercial interests of other people.” He emphasizes that humans have extensive experience in defending themselves against deception by other people, but so far they have virtually no experience defending against deception by robots. Regulators designing consumer protections should consider a robot’s social valence in their analysis, Cofone urges.

Cofone also argues that the foreseeability of a robot’s actions should determine “how liable other people should be” for those actions. He contends that if a robot could make its own decisions, that would justify allocating liability to the robot itself rather than to its creator. Such a system would require the creation of legal incentives to which robots could—and would—respond.

But experts agree that robots do not yet possess the ability to make their own decisions. Cofone acknowledges that today’s primary regulatory issues concern when to hold creators responsible for their robots. That analysis, he states, should depend on the foreseeability of a robot’s actions.

Cofone uses the example of Tay—a short-lived Microsoft chatbot—to illustrate how foreseeability can be used to evaluate product liability for robots. He writes that within 16 hours of online interaction with humans, Tay unexpectedly “became racist and sexist, denied the Holocaust, and supported Hitler.” Although such statements are protected speech in the United States, in other countries, such as Germany, they would have been criminal. Liability should depend on the degree to which Microsoft could have foreseen Tay’s behavior, Cofone argues. He acknowledges that regulators could impose strict liability on robot creators to encourage maximum caution, but he argues that a foreseeability analysis is preferable because in tort law “one is rarely responsible for what one cannot foresee.”

Cofone also explores the third characteristic—a robot’s physical form, or “embodiment,” which shapes how it can interact with the world. But he emphasizes that plenty of artificial intelligence technology can affect the world without a physical presence: smart home thermostats, trading algorithms, and Siri, for example. For that reason, he concludes that embodiment is not essential when assigning rights and responsibilities to robots.

Cofone concludes his paper by addressing robot rights. Beyond allocating responsibility for harms caused by robots, he argues that regulators should consider how to allocate rights as well. These questions include whether robots should have free speech rights and whether they should own the copyright in the works they produce—and, if not, who should. Questions of rights should be addressed in conjunction with questions of responsibility, Cofone suggests.

By evaluating robots according to their social valence and emergence (the foreseeability of their actions), Cofone argues, regulators can place individual robots on a “continuum between tools and people” to determine their rights and responsibilities.