Scholar argues that the EU’s AI Act needs redrafting to protect consumers from manipulative digital practices.
Advances in artificial intelligence (AI) now allow digital sellers, social media companies, and other digital operators to manipulate the choices of digital consumers with greater precision, sometimes without the consumer’s awareness.
The European Union’s recently approved Artificial Intelligence (AI) Act could address AI-driven “dark patterns”—deceptive design techniques some digital platforms use to shape consumers’ perceptions and actions. Yet the Act’s language is too vague and uncertain, Mark Leiser argues in a recent paper, and some dark patterns could slip through the cracks. He urges European policymakers to clarify the Act’s crucial provisions to protect consumers from these growing threats to their autonomy.
Dark patterns deceive consumers into making purchases or subscribing to services, consenting to broad third-party uses of personal data, or otherwise acting in ways that benefit digital platforms. These devices are all detrimental to consumers’ interests but vary widely in their subtlety and sophistication.
Legacy dark patterns such as hidden charges, obscured buttons, misleading navigation features, and preselected options can victimize vulnerable groups but are rather straightforward for EU regulators to identify and enforce against.
More insidious tactics driven by algorithms and artificial intelligence—“darker” and “darkest” patterns, in Leiser’s phrasing—pose a greater challenge to regulate.
These tactics include “sensory manipulation,” such as subliminal messaging and the use of imperceptible noises or background audio at frequencies that induce desired psychological responses.
Algorithmic manipulation, another dark pattern, builds on covert surveillance of consumer behavior to present consumers with curated streams of information that can determine what content they engage with. Adaptive algorithms may introduce or entrench users’ cognitive and behavioral biases and distort their decisions to the benefit of platform operators. These processes are opaque, leaving consumers unaware of the extent to which their choices are pre-configured, Leiser argues.
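To make this feedback loop concrete, consider a minimal, hypothetical sketch of an engagement-driven feed ranker. It is not drawn from Leiser’s paper; the topic list, function names, and reinforcement factor are illustrative assumptions only.

```python
# Hypothetical sketch of an engagement-driven feed ranker; all names and the
# reinforcement factor are illustrative assumptions, not from Leiser's paper.
from collections import defaultdict
import random

TOPICS = ["politics", "sports", "shopping", "health"]

# Per-user topic weights start uniform and are never disclosed to the user.
user_weights = defaultdict(lambda: {topic: 1.0 for topic in TOPICS})

def rank_feed(user_id, items):
    """Order candidate items by the user's inferred topic weights."""
    weights = user_weights[user_id]
    return sorted(items, key=lambda item: weights[item["topic"]], reverse=True)

def record_click(user_id, item):
    """Reinforce whatever the user engaged with, entrenching existing biases."""
    user_weights[user_id][item["topic"]] *= 1.2  # assumed reinforcement factor

# Over repeated sessions, early clicks come to dominate the ranking: the user
# keeps seeing more of the same without being told the feed is adapting.
items = [{"id": i, "topic": random.choice(TOPICS)} for i in range(20)]
feed = rank_feed("user-42", items)
record_click("user-42", feed[0])
```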
Behavioral conditioning techniques, furthermore, provide subtle positive or negative feedback to users to encourage or discourage certain online behaviors. Some digital interfaces mimic games, dispensing tangible or virtual rewards to users who take certain “pathways” toward the operator’s preferred objectives.
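As a rough illustration of such conditioning, the sketch below dispenses rewards for platform-preferred actions on an unpredictable schedule; the action names and the reward probability are assumptions made for illustration, not details from Leiser’s paper.

```python
# Hypothetical sketch of a variable-reward loop of the kind Leiser describes as
# behavioral conditioning; action names and the reward rate are assumptions.
import random
from typing import Optional

PREFERRED_ACTIONS = {"share", "invite_friend", "enable_notifications"}

def maybe_reward(action: str) -> Optional[str]:
    """Reward platform-preferred actions on an intermittent schedule, which
    conditions behavior more strongly than a predictable reward would."""
    if action in PREFERRED_ACTIONS and random.random() < 0.3:  # assumed rate
        return random.choice(["badge", "bonus_points", "streak_extended"])
    return None  # neutral or discouraged actions receive no positive feedback

for action in ["browse", "share", "share", "invite_friend"]:
    print(action, "->", maybe_reward(action))
```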
Platforms may combine these and other tactics to overwhelm users’ psychological defenses. Dark patterns may influence behavior immediately, such as causing the user to make a certain purchase, or alter the user’s preferences incrementally over time.
It is not always simple to distinguish legitimate, hard-bargaining persuasion from deceptive manipulation. Although the EU’s Digital Services Act prohibits superficial dark patterns, it and other consumer protection and data privacy laws may not address more deeply embedded deceptive designs, Leiser states.
The AI Act, upon whose terms the European Parliament and Council reached a provisional agreement on December 8, 2023, could change that, Leiser writes, but not in its current form. The Act subjects AI systems to requirements of varying stringency depending on the risks that a system poses to consumers’ well-being and rights. Article 5 of the Act would ban the most harmful AI systems.
Article 5(1)(a) would prohibit AI systems that use “subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior in a manner” that could cause physical or psychological harm.
Article 5(1)(b) would prohibit AI systems that exploit populations made vulnerable by age or disability to materially distort their behavior in a manner that could cause physical or psychological harm.
Leiser observes that the open-endedness of Article 5’s legal wording is a double-edged sword. By potentially addressing every device of subconscious manipulation, the provision may fail to ban any in particular.
The notions of subconscious influence and psychological harm, Leiser notes, lack a uniform meaning across the medical, psychological, and legal fields. Policing the boundary between conscious and subconscious influences could prove impossible for regulators without clearer guidance.
Showing a causal nexus between a subliminal technique and some material harm, a requirement for imposing liability, will also be difficult in practice. Leiser is concerned that many dark patterns that alter a user’s behavior incrementally may not meet the causation threshold.
In practice, many dark patterns that “navigate the peripheries of consciousness without breaching into the territory of material harm” could go unregulated.
Leiser recommends a more precise framework, grounded in psychological as well as technological research, that applies squarely to manipulative AI systems.
Leiser provides a table categorizing various “psychological techniques” by the types of manipulation they represent and the material harms they may cause, such as mental health issues or overconsumption. He argues that such a rigorous understanding of how dark patterns can harm consumers must inform Article 5’s reach.
With this understanding in mind, the drafters should clarify ambiguous concepts, Leiser argues. His proposed redrafting of 5(1)(a) omits the imprecise term “subliminal techniques.” Instead, it bans AI systems that “employ techniques which exert an influence on individuals at a subconscious level,” including “any form of stimuli not consciously registered.”
If “subliminal techniques” are to remain part of 5(1)(a)’s language, Leiser suggests they be defined as “any attempt to influence that bypasses conscious awareness, including non-perceptible stimuli.”
Leiser would also alter 5(1)(b)’s scope to ban practices that exploit certain psychological traits, such as suggestibility, “nudgeability,” and various cognitive biases, in addition to recognized mental disabilities. Article 5(1)(b) could be written to prohibit targeting users based on any distinct characteristic.
Furthermore, Leiser suggests that the Act should clarify that AI systems may not influence users’ values and decisions by any nonobvious means. The drafters should use express wording to prohibit both tactics that induce immediate consumer responses and tactics that alter consumer preferences and behavior incrementally, which may not produce immediate, tangible harms.
New AI systems will enable dark patterns that target individuals’ biases and vulnerabilities with frightening precision. The onus is on policymakers, in the EU and elsewhere, to be more precise themselves in drafting regulation that protects consumers’ freedom of choice, Leiser concludes.