The OECD should offer recommendations on the full potential and risks of artificial intelligence in rulemaking.
The use of artificial intelligence (AI) in the public sector is increasing in many legal systems. The best-known examples involve adjudicating social benefits and incentives, evaluating asylum requests, admitting students to universities, selecting public officials, and assessing the reliability of firms in government contracting. Although many AI uses deal with law enforcement, applications of AI in the regulatory life-cycle—from the drafting of new rules to their ex post evaluation—are less obvious, but certainly no less sensitive.
Regulation—that is, the setting, monitoring, and enforcement of rules—appears to be a classic target for the use of AI tools. AI in regulation holds promise for the same reasons as any data-driven decision-making: it can speed up processes and reduce the need for specialized staff, while also enabling more effective and fine-tuned interventions. Moreover, AI has the potential to overcome human cognitive biases, reduce errors, decrease noise in decision-making, and prevent corruption.
The exact scope of AI use in the public sector remains unclear, however, because transparency is limited in both the United States and Europe. People often learn about governmental uses of AI only through press releases, litigation outcomes, or soft regulation.
To be fair, researchers in the United States undertook an extensive, albeit not systematic, survey in 2020, which followed an exercise performed in Australia in 2004. A similar, though less far-reaching, effort surveyed AI uses in Italy from 2021 to 2022.
Beyond these exercises, however, the lack of official and comprehensive mapping of government uses of AI is paradoxical, since the use of AI in rulemaking, adjudication, and enforcement is already a reality in many countries. Transparency about public uses of AI that might affect third-party interests is therefore crucial for maintaining effective procedural and judicial review safeguards. The city of Helsinki, Finland, for instance, launched one of the most accessible and user-friendly centralized registers to make citizens aware of AI applications at the municipal level, along with the associated data and risks.
The spread of AI in the public sector has not gone unnoticed by the Organization for Economic Cooperation and Development (OECD). The OECD’s latest Regulatory Policy Outlook emphasizes the benefits and potential risks of using AI in regulation. The Outlook further underscores that technology can encourage innovation and efficiency in rulemaking and regulatory practice.
Although chapter six of the Regulatory Policy Outlook analyzes the benefits and potential pitfalls of AI for risk-based regulation, the report does not otherwise develop the full potential for, and risks of, AI in rulemaking. Analysis and recommendations from the OECD on AI in rulemaking would be particularly valuable at a time when the use of AI in the public sector is widespread and countries are struggling to balance promoting innovation with protecting human rights.
At the national and European level, some regulations address the use of AI in adjudication and enforcement, but the treatment of AI in rulemaking remains uncertain. For instance, both the General Data Protection Regulation and Europe’s proposed regulatory framework on AI prohibit fully automated adjudication, with some exemptions. Since 2016, France has required that executive agencies indicate whether they used an algorithm to reach administrative decisions. France also recently authorized tax authorities to examine social media using automated processing for evidence of tax fraud.
Although the idea of a computer setting rules might appear futuristic, it is just as viable as using AI for rule monitoring or rule enforcement. AI has several applications in rulemaking.
AI can, for example, carry out automated or semi-automated drafting activities, improving the way rules are constructed and making them available digitally. “Rules as code”—the encoding and translating of a new rule into a computer language while it is being drafted—enhances drafting by automatically detecting inconsistencies or incompatibilities among rules and identifying their unintended consequences. This digital practice is already well established in some countries and has been endorsed by the European Commission.
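To make the idea concrete, consider the following minimal, purely illustrative sketch of what encoding a draft rule might look like. The eligibility rule, its thresholds, and the consistency check are all hypothetical, and real rules-as-code projects rely on far richer formalisms; the point is only that once two versions of a draft rule are executable, a machine can mechanically surface the cases on which they diverge.

```python
# Illustrative "rules as code" sketch: a hypothetical eligibility rule
# expressed as executable logic, plus a naive consistency check that
# flags inputs on which two drafts of the rule disagree.
from dataclasses import dataclass
from itertools import product

@dataclass
class Applicant:
    age: int
    income: float

# Draft rule, version 1: benefit for applicants under 30 earning < 20,000.
def eligible_v1(a: Applicant) -> bool:
    return a.age < 30 and a.income < 20_000

# Draft rule, version 2: amended text lowers the income threshold.
def eligible_v2(a: Applicant) -> bool:
    return a.age < 30 and a.income < 15_000

def find_inconsistencies() -> list[Applicant]:
    """Enumerate a grid of test cases and report where the drafts diverge."""
    conflicts = []
    for age, income in product(range(18, 40, 4), range(0, 30_000, 5_000)):
        applicant = Applicant(age=age, income=float(income))
        if eligible_v1(applicant) != eligible_v2(applicant):
            conflicts.append(applicant)
    return conflicts

if __name__ == "__main__":
    for case in find_inconsistencies():
        print(f"Drafts disagree for {case}")
```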
The OECD’s Regulatory Policy Outlook introduces this new wave in its first chapter, on “regulatory policy 2.0.” That chapter lists the benefits mentioned above and offers a concise description of potential pitfalls. Although the extreme simplification involved in “rules as code” can be benign in itself, it can also impoverish, if not distort, important normative debate. Recent Spanish litigation, for example, has led scholars to question the accuracy of automated translation of rules into code and to ask whether such translation alters existing legal frameworks.
A rules-as-code approach can also enable automated enforcement when rules leave no room for discretion—that is, when rulemaking and adjudication overlap—but at a potential loss of procedural guarantees at both levels. Automated decisions have already devastated numerous communities and prompted litigation in many legal systems. These instances perhaps deserved greater coverage in the OECD’s Regulatory Policy Outlook.
In addition to automating rules as code, AI could be used in rulemaking to support the retrospective review of existing laws and regulations, adapting rules to changing economic contexts or regulatory frameworks. Because any ex post review of the stock of regulations may be influenced by deregulatory or pro-regulatory agendas, the participation of all interested stakeholders can help counterbalance such outcomes. If an AI system performs the review, though, this helpful consultation would not usually take place.
Finally, AI can be used to support rulemakers in collecting and processing data from various sources, such as complaints or enforcement activities, which may in turn reveal the need for regulatory intervention or for updating existing regulations. For instance, the Bank of Italy uses AI to reorganize consumer complaints. Similarly, the U.S. Food and Drug Administration uses post-market surveillance techniques to update rules and guidance. AI also supports rulemakers in reorganizing and analyzing comments collected in high-participation consultations and mass campaigns.
These applications do not come without risks, however: insufficient care in algorithm design can distort and degrade the quality of information available to decision-makers. Inadequate collection of information might, for instance, cause some reports or complaints to receive less attention because of spelling mistakes or jargon. Technology that automatically excludes mass comments may likewise cause regulators to overlook other, legitimate comments. To address this challenge, the European Commission and the European Court of Auditors recommend analyzing comments from mass campaigns separately and presenting the results appropriately.
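As a rough illustration of that separate-analysis idea—and not a depiction of any agency’s actual system—the sketch below groups near-duplicate comments from a hypothetical mass campaign using Python’s standard-library difflib. The sample comments and the 0.9 similarity threshold are invented, and production systems would rely on more robust natural-language techniques; the design goal shown here is that campaign duplicates are counted together while distinct comments remain individually visible to regulators.

```python
# Illustrative sketch: cluster near-duplicate comments from a mass
# campaign so they can be reported separately, without discarding them.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical text."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def group_comments(comments: list[str], threshold: float = 0.9) -> list[dict]:
    """Greedily assign each comment to the first group whose exemplar it
    closely matches; otherwise start a new group."""
    groups: list[dict] = []  # each group: {"exemplar": str, "members": [...]}
    for text in comments:
        for group in groups:
            if similarity(text, group["exemplar"]) >= threshold:
                group["members"].append(text)
                break
        else:  # no existing group matched
            groups.append({"exemplar": text, "members": [text]})
    return groups

comments = [
    "Please withdraw the proposed rule. It harms small firms.",
    "Please withdraw the proposed rule! It harms small firms!!",
    "I support the rule because it improves transparency.",
]
for group in group_comments(comments):
    print(len(group["members"]), "comment(s) like:", group["exemplar"])
```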
The OECD has a long tradition of providing decision-makers with best practices and recommendations to improve the quality of rules and their enforcement, which often support national reforms and spark cultural change. One example among many is the impact of the OECD publication on enforcement through inspections in Italy, which led to Italy’s successful implementation of AI in risk-based inspections in some regions.
This same guiding role, grounded in the analysis of megatrends emerging on the ground, would be valuable in steering OECD member countries toward appropriate care when dealing with AI in rulemaking.
This essay is part of a nine-part series entitled A Global Regulatory Policy Outlook.
This contribution builds on the publication OECD (2021), OECD Regulatory Policy Outlook 2021, OECD Publishing, Paris, https://doi.org/10.1787/38b0fdb1-en. The additional opinions and arguments employed herein are those of the authors and do not necessarily reflect the official views of the OECD or of its Member countries.