A new proposal seeks to reduce the time and cost of securing approval for human subject research.
Ask researchers about one of the biggest impediments to experimental research, and they will point to the federal law that requires all federally funded human subject research to get preapproval from an ethics committee known as an institutional review board (IRB).
Over the decades, IRBs—regulated by the Office for Human Research Protections in the U.S. Department of Health and Human Services—have increased their jurisdiction to cover any interaction of scholars with human beings, including oral histories, ethnography, studies by students, and more. All have to be preapproved. A researcher wants to pay internet users to assess visual images? Get IRB approval. Interview lawyers about their practice? Get IRB approval. Poll students about their experiences? Get IRB approval.
It is widely recognized that IRBs have exercised “mission creep,” continuously expanding the de facto scope of their oversight. Some might describe this trajectory charitably as the advance of ethical norms, but the cost of IRB expansion is undeniable: more burden on researchers, slowdown of research, fewer studies, and inevitably less progress.
Can this burden be reduced without increasing risks to subjects? The University of Chicago is about to launch a pilot reform to test this question. The reform will address the great majority of social science experiments that are classified as minimal risk—by my own count, well over 95 percent of the protocols received by the social science IRBs are treated as either “exempt” or “expedited.”
The reform is propelled by a simple premise: Instead of applying for IRB approval, researchers would self-determine that their studies are low-risk and launch them without IRB review.
This reform is entirely in line with the law. Federal law, it turns out, does not require these minimal-risk experiments to be reviewed and preapproved by an IRB. The law lists several categories of human subject research that are “exempt” from IRB review. For example, research that only includes educational tests, surveys, interviews, or public observation, where subjects are not put at any meaningful risk of loss, is expressly exempt.
The problem is that research universities in this country subscribe to the view that it is up to an IRB to determine whether particular research fits within the listed exemptions. Researchers know neither the law nor which studies are exempt, and these scientists are therefore instructed to apply for IRB determination of exemption. Even if ultimately exempt, they have to incur all the compliance costs and delays associated with such applications. Paradoxically, universities have uniformly adopted a system in which researchers need an IRB review to determine that they do not need an IRB review.
The law, however, does not mandate this. The Office for Human Research Protections explicitly permits—and even encourages—researchers to make a self-determination of their exempt status. The Office has stated that “the regulations do not require that someone other than the investigator be involved in making a determination that a research study is exempt.” It also endorses procedures that rely on investigator self-determination, such as checklists or web-based forms.
Working with the IRB, I developed a model webtool at the University of Chicago to screen exempt research. Researchers would answer simple questions to determine if their research is exempt from IRB review, such as “Does the study involve interactions with children or their environment?” and “Does the study involve interventions that, under normal circumstances, are likely to be painful, harmful, or that subjects are likely to find offensive or embarrassing?”
The questions are drafted by the IRB to ensure that all risks are addressed. If the answers to all the questions are “no,” the researcher would receive a formal exemption from IRB review. Any responses of “yes” would then divert the researcher to a traditional IRB review. It would only take a few minutes to complete this self-determination, and I expect that the great majority of behavioral studies could comply with the legal oversight requirements by using this instantaneous mechanism.
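The screening logic just described—all “no” answers yield an exemption, any “yes” diverts the protocol to traditional review—can be sketched in a few lines. The question texts below are illustrative placeholders, not the IRB's actual checklist:

```python
# A minimal sketch of the self-determination screening logic.
# The third question is a hypothetical example, not drawn from
# the University of Chicago webtool.
SCREENING_QUESTIONS = [
    "Does the study involve interactions with children or their environment?",
    "Does the study involve interventions that, under normal circumstances, "
    "are likely to be painful, harmful, or that subjects are likely to find "
    "offensive or embarrassing?",
    "Does the study collect identifiable private information without consent?",
]

def screen(answers):
    """Return 'exempt' if every answer is False ('no');
    otherwise divert the protocol to traditional IRB review."""
    if len(answers) != len(SCREENING_QUESTIONS):
        raise ValueError("one answer required per question")
    return "exempt" if not any(answers) else "full IRB review"
```

The design is deliberately conservative: a single “yes” is enough to route the study to human review, so the tool can only under-claim exemptions, never over-claim them.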
Naturally, such a reform breeds concern and unease about the effects of self-determination. Would this really save time and increase the number of experiments? Would it cause more harm to subjects? Would it be misused by researchers? This anxiety has slowed down the implementation of the self-exemption tool.
To address this uncertainty, I propose a supervised rollout of the reform in experimental fashion. Researchers preparing a human subject study and seeking IRB approval would be randomly assigned into one of two channels: either pass through the new self-determination webtool, or be diverted to traditional in-person IRB review. With hundreds if not thousands of protocols per year, this experiment could have enough power to offer statistically meaningful comparisons.
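The random assignment of incoming protocols to the two channels could be implemented in many ways; one simple sketch (an illustrative choice, not part of the actual pilot design) hashes each protocol's identifier so the assignment is deterministic, reproducible, and auditable:

```python
import hashlib

CHANNELS = ("self-determination webtool", "traditional IRB review")

def assign_channel(protocol_id: str) -> str:
    """Assign a protocol to one of the two review channels.

    Hashing the protocol ID yields an effectively random but
    reproducible 50/50 split: the same protocol always lands in
    the same channel, which keeps the randomization auditable.
    """
    digest = hashlib.sha256(protocol_id.encode()).digest()
    return CHANNELS[digest[0] % 2]
```

With hundreds of protocols per year, a split of this kind would quickly produce balanced treatment and control groups.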
The plan, still in the works, is to compare the two forms of review by measuring outputs. On the benefit side, we would measure the difference in durations, cost of compliance, and levels of researchers’ satisfaction across the two modules. On the risk side, we would measure any harms to subjects, the incidence of complaints by the subjects or third parties, breaches of data security, and any researcher misconduct such as misreporting. We can even measure whether the channel of determination—self versus IRB—affects research outcomes, such as quality of publication.
Such pilot experiments can answer the ultimate question of whether IRB review of low-risk studies adds any social value or prevents any harm. What is more appropriate than to roll out a new legal template for oversight of experiments…experimentally?
This experiment has not yet begun. I am ready to share the blueprint for the self-determination tool and the plan for its experimental rollout with any academic institution ready and willing to launch this experiment.
This essay is part of a 13-part series, entitled Using Rigorous Policy Pilots to Improve Governance.