Scholar calls for preventative regulation to protect users in virtual reality.
Neal Stephenson’s 1992 science fiction novel Snow Crash pioneered the concept of the “Metaverse,” a virtual reality playground where anything is possible. In Stephenson’s metaverse, humans inhabit fantasy worlds with superhuman abilities that far surpass those of their real-world counterparts.
In 2023, technological advancements driven by industry giants such as Meta, Amazon, Google, and Microsoft bring Stephenson’s vision closer to reality than fiction.
Concerned about potential abuse, Omer Aamir, a former research fellow at the O’Neill Institute for National and Global Health Law, argues that creative regulatory guardrails must be applied to the metaverse before widespread adoption. He warns that without virtual privacy protections against biometric data abuse, criminal activity, and financial fraud, metaverse users will face serious real-life dangers.
Aamir describes the metaverse as a “decentralized environment” focused on digital asset ownership. He reports that in the metaverse, interweaving virtual communities and individuals, represented by virtual avatars, “develop professionally, socialize, entertain, undertake commerce and even trade with virtual properties.” According to Aamir, virtual communities in the metaverse exist parallel to real life and remain accessible from any location through devices such as smartphones and virtual reality headsets. Aamir claims that the practical applications of the metaverse are extensive, including virtual therapeutics, remote surgery, entertainment, and education.
Aamir argues, however, that the metaverse poses unique regulatory challenges due to its lack of physical boundaries. Users in the United States, Japan, and India may all interact in a single virtual space governed by European Union law. The users remain physically elsewhere but are constrained by the local laws of their respective countries. Aamir further reports that governance systems differ across virtual platforms: some virtual worlds are controlled by a centralized developer, while others rely on democratic self-governance. Aamir reports that tech companies such as Google and Apple, which both offer virtual reality hardware, maintain differing codes of conduct as well.
Aamir argues that policymakers must determine a common source of legal authority to overcome the non-physical nature of the metaverse. He warns that as the metaverse grows, crimes such as identity theft, fraud, and trading in illicit items will become more sophisticated, especially in the absence of universal policing systems. He reports that many of the same problems associated with cryptocurrency, such as asset devaluation, theft, and the use of funds to bankroll terrorism, will threaten metaverse users. According to Aamir, simply adding content moderators is not a sufficient solution, so regulators must turn to other, more creative options.
Aamir also warns that legislators must implement user protections against predatory advertisers that leverage biometric data. He reports that virtual reality hardware is capable of reading heart rate, body temperature, eye dilation, and other physiological responses. Tech companies, according to Aamir, record these data and sell them to corporations such as Nike, Ford, Louis Vuitton, and Zara for use in product advertising. These corporations then map the biometric data to create individual “biometric psychography” profiles for each user that guarantee higher rates of content engagement. Without regulation, Aamir warns, content and ads tailored perfectly to unconscious physiological responses will cause severe addiction issues in users.
Aamir claims that the nature of the metaverse also makes effective regulation of biometric advertising difficult. He argues that policymakers must broaden their understanding of the metaverse and revamp current protection efforts. According to Aamir, the European Union, through its General Data Protection Regulation (GDPR), and several U.S. states have attempted to protect biometric data and undercut predatory advertisers, with limited success. Delaware law, for example, narrowly protects biometric data gathered from “measurements or analysis of human body characteristics for authentication purposes.”
Aamir warns, however, that the Delaware legislation, and others like it, fails to anticipate the true scope of the metaverse. In virtual reality, immersive technology collects many abstract categories of data beyond physical features. Delaware law, for example, protects iris patterns but overlooks habitual tendencies that are equally vulnerable to abuse. Furthermore, according to Aamir, the Delaware legislation covers only data used for “authentication purposes,” leaving a major loophole for advertisers to store data about interests or motivations that is not used for authentication.
Aamir also reports that certain features of the metaverse run counter to core principles of the GDPR. The GDPR grants users the right to have their personal data erased when collection or use of the data is no longer needed. Non-fungible tokens (NFTs), which permanently record digital assets in order to preserve ownership and value, cannot conform to this requirement without undermining their core functionality. Again, Aamir argues that policymakers must rethink their approach to regulation if they hope to achieve success.
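The technical conflict here can be illustrated with a toy example. The sketch below is a deliberately simplified, hypothetical append-only ledger (a stand-in for the blockchains underpinning NFTs, not any real NFT implementation): each entry embeds the hash of the previous entry, so erasing any record, as a GDPR deletion request would demand, breaks the integrity of every entry that follows it.

```python
import hashlib
import json

class Ledger:
    """Toy append-only ledger: each entry stores the hash of the previous one."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def add(self, record):
        # Link the new entry to the chain via the previous entry's hash.
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = json.dumps({"record": record, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})

    def verify(self):
        # Re-derive every hash; any missing or altered entry breaks the chain.
        prev = self.GENESIS
        for e in self.entries:
            body = json.dumps({"record": e["record"], "prev": e["prev"]},
                              sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = Ledger()
ledger.add({"token": 1, "owner": "alice"})
ledger.add({"token": 2, "owner": "bob"})
assert ledger.verify()       # intact chain verifies

del ledger.entries[0]        # "erase" alice's record, as the GDPR would require
assert not ledger.verify()   # the remaining chain no longer verifies
```

Permanence is the feature being sold, which is why, as Aamir notes, deletion rights and NFTs are hard to reconcile by design rather than by accident.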
Although Aamir criticizes potential misuses of the metaverse, he remains convinced that its benefits are worth pursuing. He argues that virtual reality learning will make education more accessible, sustainable, and effective, and that the metaverse will provide new sources of income for entrepreneurs who lack access to traditional banks. As long as regulators are willing to think broadly about virtual reality regulation, Aamir seems cautiously optimistic that the technology will be an overall benefit to humankind.