Regulators should rely on both clinical and actuarial judgment when setting policy.
Regulatory agencies use either clinical or actuarial judgment to set priorities and develop internal policy. Scholars have long debated the relative merits of each. Yet lawyers, including those who oversee the administrative state, are less accustomed to probing this distinction; many are not aware of it at all. Lawyers and agency officials need to understand the distinction so they can find ways to improve regulatory decision-making.
Clinical judgment primarily follows from cumulative expertise and experience, drawing upon agency experts’ own received wisdom and that of their professional community. In a way, it reflects the method a doctor routinely uses with a patient. Medical examination produces factual findings, which the physician compares against an experiential knowledge base. That appraisal, in turn, yields a diagnosis and possible treatment protocols.
Actuarial methods, on the other hand, reach the same kind of diagnosis through statistical analysis. Clinical judgment is certainly empirical, but actuarial prediction uses quantitative methods, such as regression models, to demonstrate a relationship between observable factors and predicted outcomes. The actuarial decision-maker’s process is more removed from the subject, and literally more calculated, than the clinician’s. It deemphasizes intuition, which may or may not be evidence-based, in favor of externally validated conclusions.
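To make the contrast concrete, the following is a minimal sketch, in Python, of an actuarial estimate in action: a logistic regression fit to historical records yields a risk score derived entirely from data rather than intuition. The variables and figures are hypothetical placeholders, not any agency’s actual model.

```python
# Minimal sketch of an actuarial risk estimate: a statistical model,
# fit to historical records, maps observable inputs to a predicted
# probability. All data here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical observations: each row is a past case described by
# measurable factors (e.g., exposure level, prior incidents).
X = np.array([[0.2, 1], [0.9, 4], [0.4, 0], [0.7, 3],
              [0.1, 0], [0.8, 5], [0.3, 1], [0.6, 2]])
# Whether the adverse outcome actually occurred (1) or not (0).
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# The actuarial "judgment" for a new case is a number derived from
# the fitted relationship, not from professional intuition.
new_case = np.array([[0.5, 2]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Predicted probability of adverse outcome: {risk:.2f}")
```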
An optimal approach to regulatory decision-making would combine clinical and actuarial judgment. Two suggestions might help regulatory agencies do so.
First, federal actors should not dispense with clinical judgment altogether. Rather, they should withhold their own decision criteria until they have reviewed an actuarial assessment. This sequential combination of objective empirical evidence and professional judgment will strengthen the knowledge base from which new policy emerges.
Second, agencies should at some point also evaluate the two approaches against each other. Low-cost randomized controlled trials (RCTs) should yield an ideal combination of empirically based evidence and considered choice. This proposal borrows a methodological page from the U.S. Food and Drug Administration, deploying RCTs (whether as pilots or as more comprehensive, multi-year studies) to build a more reliable basis for rulemaking.
Consider, for example, administrative risk assessment, which, when successful, efficiently identifies threats to public health, safety, and security. Civil service staff undoubtedly rely on professional heuristics and shelves of acquired knowledge when measuring the likelihood of perilous events. But agencies addressing issues ranging from natural disaster hazard mitigation to environmental safety also follow actuarial, evidence-based practices.
Choosing one approach—clinical or actuarial judgment—exclusively over the other represents a false dichotomy. The acceptance of quantitative prediction does not demand the rejection of purely human decision-making. Quite the contrary. Human decision-making will always exist, but, as John Monahan of the University of Virginia School of Law argues, it “must be disciplined and checked and it is crucial that the process start with an actuarial estimate of risk.”
Too many commentators have staked all-or-nothing positions when it comes to the clinical-actuarial divide. With respect to assessing criminal risk, algorithm skeptics worry about biased classification without rigorous evidence. Proponents of actuarial methods boast of their clear superiority over “unstructured clinical judgment.” Even a third way has its detractors; one scholar found that combined actuarial and clinical approaches were more error-prone than pure reliance on an actuarial tool. That conclusion, however, assumes that humans are adulterating the algorithmic recipe with their own preferred clinical ingredients. But that would be an “off-label” use of the instrument, one that its developers presumably discourage.
I reject these all-or-nothing options and instead encourage innovative federal agencies to embrace actuarial tools as suggestive baselines. For any outcome that a regulatory body seeks to improve, there is no a priori reason to adopt either a strictly formulaic or a strictly human-generated answer.
One example would be ecological risk assessments used by the U.S. Environmental Protection Agency. Federal guidelines encourage several methods of classifying a potentially threatened population. But classifications alone are of limited value. Some process must then convert the designation into an actionable rule or procedure. Agency officials maintain control over the evidence-based result by using their clinical faculties to embrace or reject the actuarial assignment. In that way, an actuarial “decision” is merely a suggestion; a human actor retains the final word.
An ideal protocol sequences the actuarial and the clinical, in that order. This path combines the best of both worlds: a commitment to widely held, scientifically backed principles, together with the preservation of ultimate authority and responsibility with agency staff.
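Rendered schematically, the sequence might look like the sketch below, in which the actuarial tool proposes a classification and the human reviewer retains the final word, with both steps preserved in the record. The names, threshold, and structure are illustrative assumptions, not any agency’s actual procedure.

```python
# Sketch of the actuarial-then-clinical sequence described above.
# The function names and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    actuarial_risk: float   # validated statistical estimate
    actuarial_flag: bool    # the tool's suggested classification
    final_flag: bool        # the human reviewer's final word
    reviewer_note: str      # rationale recorded for transparency

RISK_THRESHOLD = 0.7  # hypothetical cutoff from the validated tool

def sequential_decision(risk: float, clinical_review, note: str) -> Decision:
    """Start with the actuarial estimate; the reviewer may accept or
    reject it, but the suggestion and rationale are both logged."""
    suggested = risk >= RISK_THRESHOLD
    final = clinical_review(suggested, risk)
    return Decision(risk, suggested, final, note)

# Example: the reviewer accepts the actuarial suggestion here, but
# retains authority to depart from it with a documented reason.
accept = lambda suggested, risk: suggested
record = sequential_decision(0.82, accept, "No contrary field evidence.")
print(record)
```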
This sequential decision-making is more defensible, transparent, and flexible than either approach standing alone. Regulatory innovation can start, on this account, with the selection of a validated actuarial tool. Armed with an evidence base, the agency can leverage proven arguments for using particular input factors, algorithmic weighting schemes, and outcome measures as relevant to its regulatory objectives.
Moreover, a take-it-or-leave-it actuarial recommendation preserves the agency’s flexibility of choice. The agency, however, should evaluate further. If the regulatory body is interested in more than an actuarial tool’s predictive validity, and specifically in whether the tool improves preferred outcomes when used, then it can and should test the tool’s recommendations against purely clinical judgment. A low-cost pilot RCT might be a required step in the initial implementation plan. Alternatively, the agency could conduct an RCT (assuming ethical feasibility) after adoption.
The point of a rigorous evaluation would not be a return to exclusively actuarial or clinical judgment. An agency could use RCTs to determine whether relying on an actuarial baseline followed by a clinical decision works. If it does not, the agency should seek a substitute quantitative instrument and repeat the trial. This iterative approach to policymaking avoids complete reliance on either judgment paradigm, and it ensures that agency staff have the best empirical evidence at their disposal. It also demonstrates that the methodological war presumed by current scholarship might end in a reasoned truce.
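As a back-of-the-envelope illustration of what such a pilot evaluation might compute, the sketch below compares outcome rates across two randomized arms, one using the actuarial-then-clinical protocol and one using clinical judgment alone, with a standard two-proportion z-test. The counts are invented for illustration.

```python
# Sketch of the analysis behind a low-cost pilot RCT: cases are
# randomized between the actuarial-plus-clinical protocol and pure
# clinical judgment, and outcome rates are compared. The counts
# below are invented for illustration.
import math

# Hypothetical pilot results: (successes, cases assigned) per arm.
treated_success, treated_n = 72, 100   # actuarial baseline + clinical review
control_success, control_n = 58, 100   # clinical judgment alone

p1 = treated_success / treated_n
p2 = control_success / control_n
pooled = (treated_success + control_success) / (treated_n + control_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / treated_n + 1 / control_n))
z = (p1 - p2) / se

# Two-sided p-value from the normal approximation.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"Difference in success rates: {p1 - p2:+.2f} (z = {z:.2f}, p = {p_value:.3f})")
```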
The administrative state should embrace both quantitative recommendations and clinical assessment as compatible means of achieving the goal of optimal regulation.
This essay is part of a 13-part series, entitled Using Rigorous Policy Pilots to Improve Governance.