In this week’s Saturday Seminar, experts propose ways to regulate voice-activated technology and protect consumer privacy.
Around the world, billions of people have their own personal assistants to help them with everyday tasks, such as making shopping lists or scheduling meetings. But these assistants are not people. They are virtual assistants that can be summoned instantly to perform tasks with a simple voice command, such as “Hey, Siri.”
Voice assistants, such as Amazon’s Alexa and Apple’s Siri, are artificial intelligence-based software tools that respond to voice commands and operate on devices such as smartphones, tablets, and smart speakers. IBM released the first voice assistant, the IBM Shoebox, in 1962; it could perform arithmetic in response to spoken commands. Apple, however, helped make the technology commonplace when it introduced Siri in 2011.
In recent years, devices with voice assistant technology have risen in popularity, and technological advancements have expanded the range of tasks they can perform. The Shoebox could recognize only 16 words; today, voice assistants such as Alexa speak multiple languages and can play music, search for products online, and tell jokes.
This technology has caused consumers to worry, however, that voice assistants are listening to and recording their private conversations. In 2020, Google confirmed that its Google Home devices were recording users’ conversations due to a software update error. In addition, some consumers with an Amazon Echo have reported a striking similarity between products promoted to them on Amazon and products they mentioned in past conversations. Amazon reports that Alexa devices start listening only when they hear the wake word “Alexa,” but surveys indicate that many consumers are not convinced.
By listening to and recording users’ voices, voice assistants gather large amounts of personal data that technology companies can share with third parties. Technology experts expect the voice assistant market to continue growing, yet few federal regulations apply to voice-activated technology. The U.S. Federal Trade Commission (FTC) has offered recommendations to help consumers protect their privacy from voice assistants, but some experts argue that agencies should strengthen regulatory oversight of this developing technology.
In this week’s Saturday Seminar, we collect scholarship discussing voice assistants and proposing regulatory solutions to protect consumer privacy.
- In an article published in the Journal of Strategic Innovation and Sustainability, Clovia Hamilton of the Indiana University Kelley School of Business, William Swart of East Carolina University, and Gerald M. Stokes of Stony Brook University examine the ethical and legal implications of using voice-activated personal assistants. Hamilton, Swart, and Stokes explore the privacy risks of using the technology in different environments, including homes, financial institutions, and schools. They report that about half of all adults in the United States use voice-activated personal assistants in these spheres, but many consumers are unaware of the risks these tools pose. In light of the technology’s growing prevalence, the authors advocate making an institution responsible for “the administration and provision of protective measures” for consumers who use artificial intelligence systems.
- In an article published in IEEE Transactions on Technology and Society, Anthony Aguirre of the University of California, Santa Cruz and several coauthors explain that artificial intelligence (AI) assistants create conflicts of interest between the technology companies behind them and consumers. Companies that sell AI assistants have a financial incentive to gather users’ personal data, Aguirre and his coauthors explain, and this incentive can make AI assistants disloyal to consumers. For example, AI assistants may manipulate users into purchasing products that benefit the companies behind them, the authors contend. They suggest a regulatory framework that would impose a duty of loyalty on AI assistants to serve consumers’ best interests and require AI systems to disclose the specific “steps and data sources that led to particular recommendations or actions.”
- In a paper presented at the 2019 Research Conference on Communication, Information and Internet Policy, René Arnold and his coauthors analyze the emerging policy challenges posed by consumer use of voice assistants. Examining usage patterns across five dominant voice assistants, Arnold and his coauthors find that typical use revolves around music playback, internet searches, and calling features. Because current usage remains relatively basic, they conclude that immediate legislative action to regulate digital voice assistants may not be necessary. Yet, given the prevalence of devices with voice assistants, they warn that a major advancement in technological capabilities could drive rapid consumer adoption before policymakers can respond. To avoid this outcome, they suggest that policymakers continue to monitor both consumer usage patterns and the capabilities of voice assistants.
- In a recent article published in International Data Privacy Law, Stanislaw Piasecki of the University of Amsterdam and Jiahong Chen of the University of Sheffield School of Law argue that smart devices’ compliance with the European Union’s General Data Protection Regulation (GDPR) is necessary to protect the data of vulnerable users, such as children and adults with mental or physical disabilities. They stress the importance of informing users about smart device data collection processes and find that the GDPR’s transparency, comprehensibility, and accessibility requirements allow vulnerable users to inform themselves properly and exercise their data protection rights. To ensure transparency for all users, Piasecki and Chen argue that organizations behind smart devices should adopt special data protection measures for vulnerable populations, such as opt-in mechanisms, high privacy settings, and child-friendly language.
- In an article published in IA, Culture et Médias, Eleonore Fournier-Tombs and Céline Castets-Renard of the University of Ottawa explore the effects of personal assistants with female voices, such as Amazon’s Alexa, on gender norms. They argue that these assistants display stereotypically female attributes, such as compliance and subordination, and that their pervasiveness in the modern world might perpetuate sexist representations of women. To combat gender discrimination, Fournier-Tombs and Castets-Renard recommend fostering a more gender-diverse artificial intelligence ecosystem by funding more companies run by women. They also suggest requiring developers to test technologies to protect users from psychological and other harms.
- The European Union should revise the GDPR’s framework for deciding who qualifies as a controller, the party that determines the purposes and means of processing personal data, argue Jurriaan van Mil and João Pedro Quintais of the University of Amsterdam. In a recent article published in the Computer Law & Security Review, they contend that clarifying who qualifies as a controller is critical because virtual assistants pose individual privacy threats such as hacking and the recording of personal conversations. Recent developments in case law and supervisory guidance have made the legal test for identifying controllers complicated and ambiguous, van Mil and Quintais explain. Because an expansive interpretation of controllership can paradoxically diminish data protection by diluting the distribution of responsibility, they recommend a concept of controllership under which only the parties in charge of functional data processing qualify as controllers.
The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.