Data from recent rulemakings do not show that OIRA review systematically lowers estimated benefits and increases estimated costs.
Today’s regulatory process is anything but simple and transparent. Critics on both sides of the political divide complain that agencies and the Office of Information and Regulatory Affairs (OIRA) are impervious “black boxes.”
The critics further charge that OIRA routinely slashes benefit estimates of rulemakings and often inflates cost projections to delay “economically significant” rules. Because of the opacity of the process, we cannot say exactly why rulemakings change at OIRA and at independent agencies. We can, however, examine how the regulations change as they move from proposed to final rules.
An analysis of 160 rulemakings published in 2012 and 2013 (excluding airworthiness directives) shows that the calculated costs of 74 rules (46 percent) increased from the proposed to the final rule stage. Only 46 rules (28 percent) had lower estimated costs, and the costs of 40 rules (25 percent) did not change during the rulemaking process.
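To make the arithmetic concrete, the tally above amounts to comparing each rule's proposed and final cost estimates and counting the direction of change. The short Python sketch below illustrates that calculation with made-up figures; the rules list and its values are hypothetical placeholders, not the underlying dataset.

```python
# Hypothetical sketch: classify each rule by the direction of its cost change
# between the proposed and final rule stages. The (proposed, final) pairs
# below are placeholders standing in for the 160-rule dataset.
rules = [
    (120.0, 150.0),  # cost rose at the final rule stage
    (80.0, 60.0),    # cost fell
    (45.0, 45.0),    # cost unchanged
    # ... the remaining rules would be listed here
]

increased = sum(1 for proposed, final in rules if final > proposed)
decreased = sum(1 for proposed, final in rules if final < proposed)
unchanged = sum(1 for proposed, final in rules if final == proposed)

total = len(rules)
for label, count in (("increased", increased),
                     ("decreased", decreased),
                     ("unchanged", unchanged)):
    print(f"{label}: {count} rules ({count / total:.0%})")
```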
Independent agencies are not generally required to submit their rules to OIRA review. Not surprisingly, cost changes for rules that OIRA reviewed differed dramatically from those for rules issued by independent agencies. The estimated costs of finance rulemakings – regulations by the Consumer Financial Protection Bureau (CFPB), the Commodity Futures Trading Commission (CFTC), and the Securities and Exchange Commission (SEC) – increased by an average of 1,691 percent from the proposed to the final rule stage. In contrast, healthcare rulemaking costs increased by 143 percent, and Environmental Protection Agency (EPA) rulemaking costs increased by only 41 percent.
(Data on file with author.)
The sample-wide average showed a nearly 400 percent increase in calculated costs as rules moved through the rulemaking process, although this figure is inflated by a few finance regulations. The first column in the accompanying table is the raw average percent change in calculated costs from the proposed to the final rule. The next column is the average excluding the three rules with the largest percent changes in each category (the “adjusted average”), which better captures the typical change.
The finance rules had the widest dispersion of percent changes, as shown by the dramatic drop from the raw average to the “adjusted average.” Excluding outliers mattered less in the other two areas because the dispersion was not nearly as pronounced.
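The “adjusted average” described above is essentially a trimmed mean: the three largest percent changes in each category are dropped before re-averaging. A minimal Python sketch follows, using invented percent changes and category labels rather than the actual data, to show the form of the calculation.

```python
# Hypothetical sketch: compute the raw average percent change and an
# "adjusted average" that drops the three largest percent changes in each
# category before re-averaging. Values and labels are placeholders.
percent_changes = {
    "finance":    [5.0, 12.0, 30.0, 250.0, 900.0, 4000.0],
    "healthcare": [-10.0, 0.0, 15.0, 40.0, 120.0, 300.0],
    "epa":        [-5.0, 0.0, 10.0, 25.0, 60.0, 150.0],
}

for category, changes in percent_changes.items():
    raw_average = sum(changes) / len(changes)
    trimmed = sorted(changes)[:-3]  # drop the three largest changes
    adjusted_average = sum(trimmed) / len(trimmed)
    print(f"{category}: raw {raw_average:.0f}%, adjusted {adjusted_average:.0f}%")
```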
When analyzing these data, we expected large, “economically significant” rulemakings to receive special scrutiny. We hypothesized that more expensive rules would be subject to longer OIRA review times and more pressure from outside interests during the rulemaking process, and that they would therefore show larger percentage changes.
However, the evidence does not support these expectations. Regression analyses reveal no correlation between the aggregate cost of a proposed rule and either the net or the percent change in its calculated cost. Likewise, there is no correlation between the length of review time at OIRA and either the net or the percent change in rule cost.
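As an illustration of the kind of test this describes, the sketch below regresses the percent change in calculated cost on a rule's proposed cost and its OIRA review length. The DataFrame, its column names (proposed_cost, review_days, pct_change), and the values are all assumptions for demonstration, not the author's actual specification or data.

```python
# Hypothetical sketch: regress the percent change in calculated cost on the
# proposed rule's aggregate cost and the length of OIRA review. The DataFrame,
# column names, and values are invented for illustration only.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "proposed_cost": [10.0, 250.0, 35.0, 1200.0, 90.0, 15.0],  # $ millions
    "review_days":   [30, 120, 45, 200, 60, 90],               # days at OIRA
    "pct_change":    [5.0, -10.0, 40.0, 2.0, 100.0, -3.0],     # proposed -> final
})

X = sm.add_constant(df[["proposed_cost", "review_days"]])
model = sm.OLS(df["pct_change"], X).fit()

# Coefficients near zero with large p-values would be consistent with the
# finding that neither rule size nor review length predicts the change.
print(model.summary())
```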
Although critics may have other reasons to chastise OIRA for lengthy rule reviews, there is no statistical evidence that OIRA unduly distorts the cost-benefit calculus of rules by holding them for longer periods.
A more targeted look at the federal government’s most notable regulations reveals results similar to those of the larger sample. Among the rules that OIRA included in its 2013 Draft Report to Congress, fourteen contained both cost and benefit estimates at both stages of the rulemaking process (proposed and final rule). Only three of these rules increased in cost from their proposed to their final version; six decreased in cost, and five had no change. Recall that in the broader sample roughly half of the rules increased in cost, compared with just over a fifth in this smaller OIRA sample.
On the benefits side of the cost-benefit calculus, seven of the fourteen most significant rules increased in calculated benefits from the proposed to the final rule stage. Four rules saw decreases, and three remained unchanged. The average calculated benefits in this sample increased by 14 percent, but the sample is surely too small to support sweeping conclusions about benefit changes.
Suggestions that OIRA systematically reduces benefits and inflates costs are simply not supported by the data. Changes in agencies’ cost-benefit analyses appear instead to be driven by the particulars of individual rules. There is a great deal of variability in the data that neither the length of OIRA review nor the cost of the rule appears to explain.
Of course, this systematic analysis of how rules change cannot establish why they change. Answering that question will require greater transparency at OIRA and in agency processes.