ACUS recommends best practices for how agencies manage mass, computer-generated, and falsely attributed public comments.
Falsely generating comments to influence public policy is an old political game. Even in Shakespeare’s Julius Caesar, the Roman senator Cassius attempts to convince Brutus of the public’s support by faking letters “in several hands … as if they came from several citizens.”
The question today is how to handle this old political game when new digital tools are at play.
The Administrative Conference of the United States (ACUS) has been studying this question in the context of “notice-and-comment” rulemaking. ACUS recently issued a new recommendation on how agencies can better manage falsely attributed comments, computer-generated comments, and comments resulting from mass campaigns that generate hundreds of thousands, if not millions, of comments.
In most cases, when an agency wants to create a new regulation, it must first publish a notice of the proposed new regulation in the Federal Register. The agency invites the public to comment on the proposal by giving all “interested persons” the opportunity to submit “data, views, or arguments.” Agency staff then place the comments they receive in an online rulemaking docket and take them into account before issuing a final rule.
Before the internet, the public’s opportunity to comment was constrained by practicalities. To submit a comment, a person had to mail a letter to the agency. To read the comments others had submitted, interested members of the public had to travel in person to a physical reading room in an agency’s office.
Following the introduction of e-rulemaking in the early part of this century, commenting has become simple and open to all. It is also easy to access others’ comments in online dockets at Regulations.gov.
But like other forms of mass participation online, this comment process is susceptible to “astroturfing.” Astroturfing occurs when an individual or group creates an illusion of grassroots support even when no such support exists. An astroturfer can, for example, use false profiles to generate large numbers of comments, creating an illusion that many people support or oppose a policy.
In some recent high-profile rulemakings, agencies have received—or have appeared to receive—millions of comments, many of which were fake or manipulated. As a recent New York Attorney General report suggests, organized astroturf campaigns have produced some of these comments. Some people have used stolen names and addresses to mask their identities when leaving comments.
Falsely attributed comments may raise legal issues under several civil and criminal statutes. They may also mislead agencies—for example, by giving the impression that a non-expert’s opinion is the view of an expert, or by making a business’s opposition to a proposal look like the views of individual community members.
The seeming proliferation of such false or misleading comments has prompted some policymakers to question the integrity of the rulemaking process and whether agencies can adequately address the challenges these comments can pose.
At the same time, even genuine comments are proliferating. Agencies increasingly find that some of their most significant rulemakings garner large numbers of similar or identical comments, frequently in response to calls to action by public interest and advocacy groups. The comments from these mass campaigns are not inherently problematic. They may even indicate robust civic engagement.
But when agencies are confronted with hundreds of thousands, if not millions, of comments on a proposed rule, they can struggle to process and analyze such large volumes of comments. Many of these comments can also be nearly identical, giving agencies little new information. Still, agency staff must process these comments and make them available in their online dockets.
Furthermore, when members of the public go to these online dockets, they may struggle to find what they are looking for when they have to wade through entry after entry of nearly identical comments.
The management challenges mass comment campaigns pose for agencies can be daunting—even more so when taking into account the challenges associated with today’s digital equivalent of Cassius’s fake letters: so-called robo-comments.
A new recommendation put forward by ACUS seeks to help agencies overcome these comment management challenges. It offers agencies best practices for processing and storing mass, computer-generated, and falsely attributed comments.
Grounded in a research report prepared by a team of expert consultants, the ACUS recommendation instructs agencies on what tools they can use to manage their comment submission and review processes, such as reCAPTCHA and deduplication technology. It also explains how agencies can manage their online e-rulemaking dockets and how they can be more transparent and consistent when confronting mass, computer-generated, or falsely attributed comments.
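The recommendation does not prescribe any particular software, but the basic idea behind the deduplication tools it mentions is straightforward. The following Python snippet is a purely illustrative sketch, not drawn from the ACUS report: it shows one hypothetical way a comment-intake pipeline might group exact and near-exact duplicate comments by hashing normalized text, so that thousands of copies of the same form letter can be counted once rather than reviewed individually.

```python
# Illustrative sketch only (hypothetical names and sample comments):
# group comments that are identical after normalizing case, punctuation,
# and whitespace, as a simple form of deduplication.

import hashlib
import re
from collections import defaultdict


def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)      # drop punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace


def group_duplicates(comments: list[str]) -> dict[str, list[int]]:
    """Map a hash of each normalized comment to the indices that share it."""
    groups: dict[str, list[int]] = defaultdict(list)
    for i, comment in enumerate(comments):
        digest = hashlib.sha256(normalize(comment).encode("utf-8")).hexdigest()
        groups[digest].append(i)
    return groups


if __name__ == "__main__":
    sample = [
        "I SUPPORT the proposed rule.",
        "I support the proposed   rule!",
        "The compliance cost estimate in Section 3 is too low.",
    ]
    for digest, indices in group_duplicates(sample).items():
        if len(indices) > 1:
            print(f"{len(indices)} near-identical comments at positions {indices}")
```

Real deduplication tools are more sophisticated, of course, detecting near-duplicates with small wording changes, but the underlying goal is the same: let reviewers focus on the distinct arguments submitted rather than the raw volume of submissions.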
The ACUS recommendation can help agencies and the public better navigate rulemaking dockets, protect individuals whose identities may be falsely used, and aid agencies in synthesizing important public views, data, and other information. The recommendation also encourages agencies to be more direct about the types of comments that they believe would be most helpful during rulemaking.
Altogether, the recommendation gives agencies a roadmap for managing comments while retaining the integrity of the rulemaking process.
ACUS’s recommendation was a long time in the making. The origins of the project date back to a 2018 forum that ACUS held on “mass and fake comments” in the rulemaking process. ACUS also benefited from a 2019 Senate Committee report on the subject, along with several U.S. Government Accountability Office reports. The recommendation emerged at a time of overall growing public and state interest in the implications of commenting in the digital era.
The ACUS recommendation reflects the input of many contributors, including the consultants, ACUS members and staff, and interested members of the public. Although everyone involved agrees that stealing others’ identities and falsely attributing comments to them is inappropriate, a deeper debate surfaced among participants over the potential value of computer-generated comments and mass comment campaigns that seek to influence rulemaking decisions.
Some participants in the recent ACUS process argued that mass comment campaigns should be discouraged outright, as the rulemaking process is not supposed to be a plebiscite. These participants worried that mass comments bring no value and only increase agencies’ workloads.
Other participants argued that mass comments can provide useful information about public values or help agencies gauge expected compliance with a proposed regulation. Some participants recognized that even computer-generated comments might be helpful when, for example, bots assemble disparate information or identify “typos, broken links, and incorrect legal citations in proposed rules,” ultimately helping “make regulations more accurate.”
Rather than trying to settle the debate over computer-generated comments or mass comment campaigns, the recent ACUS recommendation gives agencies general guidelines for how to improve their management of the comments they receive, in whatever volume and from whatever origin. ACUS took this approach because—whatever the ultimate value may be of the different kinds of comments made possible by digital technology—agencies should expect that they will continue to receive mass, computer-generated, and falsely attributed comments in high-profile rulemakings.
When they do, the ACUS recommendation can help.
The views expressed in this essay are those of the author and do not necessarily represent the views of the Administrative Conference or the federal government.
This essay is part of a four-part series on the Administrative Conference of the United States, entitled Improving Participation, Impact, and Fairness in the Administrative State.