Forces that Shape and Constrain Medical Practice

Michelle Mello discusses how the law, artificial intelligence, and the COVID-19 pandemic have shaped health care.

In a conversation with The Regulatory Review, leading empirical health law scholar Michelle Mello shares her perspective on the forces that shape the practice of medicine and how policymakers might learn from the past to improve the environment in which health care professionals deliver care.

In recent years, the health care industry has seen significant change driven by shifts in regulatory requirements following the pandemic and by an increasing focus on the roles that artificial intelligence (AI) and for-profit actors play in medicine.

Mello discusses how the shadow of the law and regulation can have a meaningful effect on what providers do to deliver safe and equitable care to patients. Highlighting lessons learned from the pandemic, Mello calls attention to the implications of a regulatory landscape that leaves too much space for defensive medicine and profit-driven investors. In doing so, she describes how policymakers can learn from providers and patients to help health care systems bear the cost of care and confront the challenges of adapting to changing technology.

Mello is a professor of law at Stanford Law School and a professor of health policy in the Department of Health Policy at Stanford University School of Medicine. She is an expert in health law and policy, with research that covers alternative dispute resolution, bioethics, AI, and medical malpractice. Mello’s research and professional achievements have garnered widespread recognition, including election to the National Academy of Medicine and the Alice S. Hersh New Investigator Award from AcademyHealth. Her recent research examines how law and regulation shape health outcomes and the delivery of care.

The Regulatory Review is pleased to share the following interview with Michelle Mello.

The Regulatory Review: How does law promote safety in medicine?

Mello: It fills in the gaps where professional self-regulation and market forces don’t yield the level of safety we want and are willing to pay for. The major tools in the legal toolkit include medical malpractice and product liability, requirements tied to receiving federal health program funding, requirements for professional licensure, and antitrust laws.

Some rules directly spur specific behaviors—for example, maintaining a certain level of nurse staffing. Others create incentives—for participating in an accountable care organization, for example—or they facilitate action by others in the market, such as by requiring information disclosure to patients selecting a hospital for surgery or to insurers populating a physician network.

TRR: By the same token, how does law stand in the way of safe and effective medical care? Does it, for example, constrain providers’ ability or willingness to think outside of the box to achieve better health outcomes for their patients?

Mello: There are many examples, but to name a couple: the perceived threat of liability can chill clinicians and provider organizations from doing things that would benefit patients—such as using a novel technology that doesn’t have an established track record of safety—or spur them to do things that aren’t necessarily good for patients—so-called “defensive medicine.” Imposing regulatory mandates without providing the resources to meet them can force health care organizations into hard choices, perhaps to patients’ detriment.

Along those lines, I’ve been thinking a lot about new rules for organizations that use AI tools. Health care organizations are supposed to be investigating those tools to see if they’re biased, but I have doubts that many organizations will really know how to do that in a meaningful way, and no one’s offering up money or technical assistance to help them.

TRR: For policymakers looking to empower physicians in the context of future public health emergencies, what lessons are most important to learn from the COVID-19 pandemic?

Mello: One of the bright spots in the pandemic story is that the federal government and states relaxed many of their everyday regulatory requirements for the practice of medicine by health care facilities, clinicians, and nurses. This not only allowed providers to flex and surge their capacity to nimbly meet emerging needs, but it also led to some innovations with demonstrable, lasting benefits to patients, such as expanded telehealth. There are good reasons to have strong structures in place to regulate health care quality and safety in “peacetime,” but showing flexibility during emergencies can be the difference between life and death—and can illuminate new ways of organizing work outside of emergencies.

TRR: In your view, how much room does the current regulatory landscape leave for empathy in the practice of medicine? What could policymakers do to create more space for empathy?

Mello: The single greatest threat I perceive is the march of highly profit-oriented owners and investors into health care provision. Private equity has a time and a place, but I am very skeptical that it is compatible with providing the space for high-quality, compassionate care in facilities like nursing homes and hospices. More generally, consolidation and ownership by huge, integrated corporations seem to be driving a wedge between clinicians and their professional norms and ideals, including patient-centered care.

TRR: Is defensive medicine a serious problem in the U.S. health care system?

Mello: Yes and no. There’s little doubt that it contributes to high costs. My work estimated its contribution at $45.6 billion in 2008, or about $70.3 billion in today’s dollars. That is not chump change, but it’s a small part of what’s driving overall health care costs. As for whether it’s a serious safety problem, that’s hard to say. Some services that are provided for defensive reasons probably end up benefiting patients: that extra scan the physician didn’t think was needed ends up showing something worrisome, for instance. Other services burden patients with invasive procedures, infection risk, inconvenience, and anxiety.

TRR: How might the experiences of both providers and patients be used to shape policies that could unsettle practices of defensive medicine?

Mello: The trick is to align good medical practice with successfully avoiding malpractice claims and malpractice payments. It would be possible, for example, for state legislatures to give courts more guidance about how to incorporate reputable practice guidelines into malpractice determinations, as a few states experimented with doing in the 1990s.

Another promising approach is for health care organizations and liability insurers to have a strong policy of proactively offering compensation when they determine that care falls below the standard of care but vigorously defending the care when they find it is reasonable. That would give physicians greater confidence that these organizations will stand behind them when a claim is filed in a situation where the patient had a bad outcome but the care was good.

TRR: Earlier this year, you testified before the U.S. Senate Committee on Finance at a hearing focused on the governance of AI tools in health care. Given how quickly the field of AI is changing, how would you recommend health care policymakers approach emerging risks of AI tools?

Mello: Most of the policy emphasis right now is on figuring out wise regulations for AI models themselves; comparatively little attention is being paid to how those tools get used. Problems can arise even for AI models and tools that test well during development, because much depends on how human users interact with them and how they’re integrated into clinical workflows. For this reason, part of the oversight of these tools needs to come from the organizations that choose to adopt them. Policymakers can facilitate that by requiring organizations to have standards and processes in place for evaluating and monitoring AI tools—and, critically, by providing a way to reimburse entities for the costs of running them.