The Future of Technology in Health Care

Scholars discuss the need for federal regulations to combat risks associated with technology in health care.

Most U.S. adults use technology to improve their health—nearly 60 percent browse the Internet for medical information, and over 40 percent obtain care through telemedicine. Despite technology’s health care potential, however, six out of ten Americans are uncomfortable with their health care provider relying on AI to diagnose diseases and recommend treatments.

AI can enhance quality of care by helping physicians verify their diagnoses and detect diseases earlier. For example, researchers have found that AI technology can help predict a patient’s risk of breast cancer. Similarly, a combination of physician expertise and AI algorithms can increase the accuracy of diagnoses.

Yet, AI systems can fail, and if humans rely too much on software, an underlying problem in one algorithm can injure more patients than a single physician’s error. In addition, AI algorithms incorporate biases from available data. For example, Black patients receive, on average, less pain medication than white patients. An algorithm trained to recommend pain treatment from these health records could suggest lower doses of painkillers for Black patients, irrespective of biological needs.

At the same time, technology can help underserved communities gain access to health care. These communities often experience shortages of trained practitioners and standard health care facilities, resulting in higher risk of disease and misdiagnoses. Telehealth, as one example, increases access to quality care by allowing patients to meet with doctors online or have their vitals monitored remotely.

Currently, no federal law regulates the use of AI in health care. Although the U.S. Food and Drug Administration (FDA) reviews most products that use technology or AI software on patients, it does not currently determine whether uses of AI in health care are safe for patients. Instead, FDA clears AI-enabled devices through a process known as 510(k) review. During a 510(k) review, a manufacturer must show that its technology is “substantially equivalent” to a product already on the market. The process allows AI-enabled devices to reach the market without clinical trials proving their safety or accuracy.

Last year, the Biden Administration pledged to oversee the responsible development of AI, including in health-related fields. President Joseph R. Biden’s executive order on the subject includes requirements for health care providers to inform users when the content they provide is AI-generated and not reviewed by a physician. In addition, providers are responsible for mitigating potential risks posed by the technology and ensuring that it expands access to care.

Health professionals have also expressed concern about adolescents self-diagnosing medical conditions discussed by influencers who promote telemedicine on social media. Currently, FDA does not require telemedicine companies to disclose information about potential risks of services, and companies receive free speech protections as “advertisers.”

Advocates for stricter regulation of technology in health care point out that telehealth providers escape regulation by classifying themselves as communication platforms that connect patients with doctors, and not as providers of medical services. Telehealth companies maintain their independence from medical providers, allowing them to avoid legal liability for those providers’ actions.

In this week’s Saturday Seminar, scholars offer varying suggestions on regulating the use of technology in health care.

  • AI algorithms are inherently biased, yet no federal regulation addresses the risk of biased diagnostics when AI is used in health care, recent Seattle University School of Law graduate Natalie Shen argues in an article in the Seattle Journal of Technology, Environmental & Innovation Law. Shen explains that in the absence of federal action, states have taken the lead in passing laws to address automated decision systems such as AI in health care. By analyzing New Jersey’s and California’s approaches, Shen recommends improvements to future state legislation, including extending any future law’s coverage to the private health insurance sector, and imposing continuous assessment requirements as AI technology evolves.
  • In an article for the Virginia Law Review, Berkeley Law School’s Khiara M. Bridges argues that educating patients about the risk of race-based algorithmic bias should be a prerequisite to using AI in health care. Bridges explains that people of color are more likely to distrust physicians and health care institutions and thus are likely to be skeptical of medical AI. Furthermore, medical algorithms are developed based on a primarily white “general population,” reducing their predictive accuracy for communities of color, Bridges notes. She argues that disclosure of AI-related risks would foster patient-physician dialogue in communities of color, encouraging more patients of color to use the technology and ultimately remedying existing algorithmic biases.
  • Regulation of AI-enabled health tools must include pre-market authorization and continued performance monitoring, urge Joana Gonçalves-Sá of Complexity Science Hub and Flávio Pinheiro of NOVA Information Management School in a chapter in Multidisciplinary Perspectives on Artificial Intelligence and the Law. Gonçalves-Sá and Pinheiro propose improvements to FDA’s Total Product Lifecycle pilot program, which tracks the safety risks of AI. Under the program, an AI company can achieve “precertified status” if it can demonstrate that it develops high-quality algorithms and continues to monitor their effectiveness after market entry, Gonçalves-Sá and Pinheiro explain. FDA should also investigate the reliability of the datasets and engineers that train AI tools, Gonçalves-Sá and Pinheiro recommend.
  • Regulators should lower legal barriers that prevent community organizations such as Black churches from helping poor and marginalized people to gain access to telehealth services, argues Meighan Parker of the University of Chicago Law School in a recent article in the Columbia Science and Technology Law Review. Parker notes that although community organizations such as Black churches could help some people to overcome mistrust of health care providers, involving them could cause conflicts between the churches’ beliefs and patients’ medical needs, or open the churches to malpractice liability. In response, Parker proposes softening or adjusting regulatory barriers to ensure that churches will not face ethical conflict or legal liability for connecting people with needed telehealth services.
  • In a note in the Washington Journal of Law, Technology & Arts, Kaitlin Campanini, a student at Pace University Elisabeth Haub School of Law, argues that the U.S. Drug Enforcement Administration’s lax regulation of telehealth providers has worsened inadequate mental health treatment and increased excessive drug prescriptions. Although telehealth providers’ business models can render treatment more convenient and affordable, the expedited treatment model they offer “blurs the line between offering health care to patients and selling controlled substances to customers,” Campanini writes. Such companies fall into a regulatory gray area: they disclaim providing medical services by maintaining that they are independent of providers, yet they aggressively market stimulants to consumers and facilitate questionable prescriptions after short, virtual evaluations.
  • In a recent note in the Belmont Law Review, J.D. candidate Nora Klein argues that regulators should close legal loopholes that allow direct-to-consumer (DTC) pharmaceutical companies to unfairly influence social media users. Klein notes that DTC pharmaceutical companies have avoided FDA advertising regulations in part by labeling themselves as entities over which FDA has no regulatory authority. Accordingly, these entities are subject only to Federal Trade Commission (FTC) advertising regulations, which are difficult to enforce, Klein observes. She argues that the DTC model is harmful because it leads to misdiagnoses and patient complications more often than traditional health care services do. To address the problem, Klein proposes that FDA require DTC pharmaceutical companies to disclose important drug information to consumers.

The Saturday Seminar is a weekly feature that aims to put into written form the kind of content that would be conveyed in a live seminar involving regulatory experts. Each week, The Regulatory Review publishes a brief overview of a selected regulatory topic and then distills recent research and scholarly writing on that topic.