The Evidence Act and Regulations

The future of evidence-based regulation has never looked brighter.

Two recent recommendations from the Administrative Conference of the United States (ACUS) call for agencies to promote a “culture of retrospective review” by planning for evaluation of a rule as it is being written. Such planning includes defining how success will be measured, how the necessary data will be collected, and what kind of experimental methodology will be used to draw valid causal inferences.

Drawing on these two ACUS recommendations, Cary Coglianese and I have previously argued that having agencies create evaluation plans when they develop new regulations could help strengthen a culture of retrospective review of regulation. Just as society demands that pharmaceutical companies test how well their medical interventions work, regulators should likewise be expected to plan for and test how well their policy interventions work. In crafting an evaluation plan, an agency should develop a hypothesis about how its new regulation will work, specify the data it will need to collect to support or refute that hypothesis, and specify the methodology it will use to test it (such as a randomized controlled trial or a quasi-experimental method).

In just the past year, Congress has taken action in exactly the direction that ACUS had recommended and Coglianese and I had urged. Both the House and the Senate overwhelmingly passed the Foundations for Evidence-Based Policymaking Act of 2018, which was signed into law. This Evidence Act and accompanying Office of Management and Budget (OMB) guidance call on agencies to apply to their programs the same basic framework laid out in the ACUS recommendations and our recent article.

Under the Evidence Act, all agencies subject to the Chief Financial Officers (CFO) Act must create “learning agendas” and “evaluation plans.” Agencies not subject to the CFO Act are also strongly encouraged to do so.

A learning agenda is simply a list of research questions that the agency intends to answer through evaluation. For example, agency managers might ask: How successful are our regulations in promoting national-level environmental improvement? Or they might ask: How can we change our internal human resources process to shorten processing times for applicants?

Evaluation plans are where the rubber meets the road. Agencies are required to identify from their learning agendas all “significant evaluations” and develop and publish a plan for those evaluations. Whether an evaluation is significant enough to warrant a published plan is up to the agency, but the agency must consider factors such as the importance of the program to the agency mission, the size of the program in terms of the number of people affected, and the extent to which an evaluation will fill an important knowledge gap in the program.

The Act and accompanying OMB guidance do not specifically state that an agency must create an evaluation plan for a regulation. Agencies are left to decide what constitutes a “program” to be evaluated, as long as they consider importance, size, and knowledge gaps. Many regulations—for example, economically significant environmental and health regulations—affect all or nearly all people in the country. For this reason, it is difficult to imagine a scenario in which evaluation plans are not created for at least some regulations.

Required evaluation plans must include at least the following five elements:

  • Questions to be answered: In laying out the key questions, evaluation plans need to describe a program’s purpose and objectives and explain how the program is linked to its intended effects. Each agency is encouraged to discuss any evaluation activities that relate to its entries in the Unified Agenda, recognizing that these activities need to occur well before the development of economically significant regulations.
  • Information needed for evaluations: Agencies must note whether they will undertake new information collection requests or if they will use existing information.
  • Methods to be used: Agencies must describe the evaluation design. For example, will they use a randomized controlled trial or a quasi-experimental method? Agencies could benefit from reviewing the ACUS recommendation “Learning from Regulatory Experience” as they explore the analytical and ethical advantages and disadvantages associated with each approach.
  • Anticipated challenges: Agencies should discuss any challenges they anticipate in carrying out their evaluations. For instance, agencies may find that collecting new data requires them to be mindful of the Paperwork Reduction Act.
  • Dissemination: Agencies must propose how they will disseminate evaluation results and use them to inform policymaking.

Agencies do not need to create an evaluation plan before a program is put in place. An agency can create evaluation plans solely for existing programs. Agencies, however, would do well to heed the advice of ACUS’s recommendation on retrospective review, which encourages them, prior to issuing new regulations, to establish a framework for assessing those regulations in the future and then to include portions of that framework in each rule’s preamble.

Creating such an evaluation plan before a rule is issued saves the agency time and energy later. A prospective evaluation plan can ensure that the agency has identified the data it needs to evaluate the rule, identified potential roadblocks to obtaining that data, and designed the rule to allow for valid causal inferences. Once a rule is in place, it is still possible, although far more difficult, to design a valid plan for determining its effectiveness.

Of course, it is not possible to create an evaluation plan for every program or regulation. The Evidence Act and OMB guidance recognize that agencies have limited resources. With respect to rules, for example, agencies still must satisfy requirements that long predate the Evidence Act, including those under Executive Order 12,866 and various statutes.

Evaluation can be very time-intensive. Regulatory impact analyses often consist of hundreds of pages of economic, engineering, and other technical analyses. Fortunately, the ACUS recommendation on retrospective review lays out 11 factors that agencies can consider when deciding which rules to prioritize for evaluation plans.

ACUS recommends that agencies take into account factors such as the agency’s degree of confidence in its initial estimates of regulatory benefits and costs, the likelihood of increasing net benefits and the magnitude of those potential benefits, and the likelihood of attaining the statutory objective. Whether an agency chooses to use these criteria or to develop separate or additional criteria of its own, it will need some way of prioritizing the rules for which it will create evaluation plans.

By requiring agencies to create evaluation plans for their programs, the Evidence Act empowers agencies to think rigorously about whether the solutions they put forward adequately address the problems they are designed to remedy. By publishing these evaluation plans and soliciting input from the public, agencies can refine their research methodologies and develop better-informed programs, including their regulatory programs.

Thanks to the requirements of the Evidence Act, the future of evidence-based regulation has never looked brighter.

Todd Rubin

Todd Rubin is an Attorney Advisor at the Administrative Conference of the United States.

The views expressed in this essay are those of the author and do not necessarily represent the views of the Administrative Conference.

This essay is part of a 13-part series, entitled Using Rigorous Policy Pilots to Improve Governance.