Does Industry Self-Regulation of Mental Health Apps Protect Consumers?

Scholars advocate increased regulation of mental health apps.

More than half of the adults who experienced mental illness in 2023 did not receive treatment for it. A growing number of mental health apps may help fill this gap by providing tools for self-management and access to talk therapy.

But in a recent article, Leah R. Fowler and Jessica L. Roberts, both of the University of Houston Law Center, argue that mental health apps have inadequate consumer protections and require increased regulation.

Fowler and Roberts explain that barriers to mental health care, such as the high cost of insurance, the scarcity of services in underserved areas, and the stigma surrounding mental health treatment, often prevent individuals from seeking help.

Meanwhile, mental health apps are often free or low cost. They also provide anonymity, which can help individuals who face stigma in seeking mental health care, Fowler and Roberts note. The authors cite several studies showing that some mental health apps can work. For example, one meta-analysis (a study that pools the results of numerous studies to look for patterns) by physician and researcher Joseph Firth showed that these apps have the potential to reduce depressive symptoms, especially for mild to moderate depression. Fowler and Roberts point to another meta-analysis by Firth, which found that, for anxiety, mental health apps can replace outpatient therapy sessions without significantly lessening treatment efficacy.

Despite their potential, mental health apps vary widely in quality, Fowler and Roberts argue. They warn that, although some apps are evidence-based and show clinical benefit, others may be downright harmful. Unfortunately, consumers have no way to distinguish between the two, according to Fowler and Roberts. They point to the lack of uniform regulations for mental health apps as a potential source of consumer confusion.

Both the U.S. Food and Drug Administration (FDA) and the U.S. Federal Trade Commission (FTC) currently regulate mental health apps only to a minimal degree, Fowler and Roberts explain.

FDA categorizes some of these apps as medical devices that require approval or clearance before they can enter the market. FDA categorizes other apps as low-risk or non-medical devices, however, which can be marketed to the public without prior FDA approval. Fowler and Roberts note that this light-touch regulatory strategy is meant to encourage innovation by minimizing market-entry requirements for products that are unlikely to harm consumers.

The FTC’s role is to police deceptive or unfair advertising, sometimes by bringing enforcement actions against apps that make claims unsupported by evidence. Fowler and Roberts suggest, however, that the FTC typically intervenes only in cases of severe misrepresentation.

Ultimately, they argue that, given both agencies’ minimal enforcement, the mental health app industry is largely self-regulated.

In 2020, FDA issued guidance that Fowler and Roberts claim contributed to this state of affairs by making it easier for consumers to access digital health therapeutic devices. Under the guidance, FDA would not investigate low-risk products that made mental health claims, nor would the agency take enforcement actions to hold these apps to medical device standards.

Fowler and Roberts argue that, after FDA issued this guidance, mental health apps began to appear more “medical,” and it became unclear to consumers which apps were highly regulated and which were not. The result, Fowler and Roberts suggest, is a mental health app market marked by high availability but limited evidentiary support. Too many studies of these apps are methodologically weak, they say, and the more rigorous efficacy studies are often not available to consumers.

Fowler and Roberts concede that a lack of evidence does not always mean a mental health app is harmful. They argue, however, that even a merely benign app can cause harm if consumers perceive it as helpful: individuals who replace valuable mental health services with an unhelpful app may see their symptoms worsen.

In fact, consumers usually select an app without researching it, relying solely on the information the app store displays, Fowler and Roberts note. This display, they contend, creates an informational asymmetry: app developers know the actual quality of their apps, but consumers do not and must blindly choose from search results that present the apps in a nearly identical way.

To resolve this informational asymmetry in the absence of more rigorous regulatory oversight, Fowler and Roberts propose a voluntary labeling system to help distinguish higher-quality apps from lower-quality ones. In their proposed system, app stores would require developers to label their apps with information showing how the apps function and what scientific evidence supports their claims.
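As a rough illustration only, the structured disclosure such a label might capture could look something like the sketch below. The schema and every field name in it are hypothetical assumptions for this example, not details drawn from Fowler and Roberts's proposal.

```python
from dataclasses import dataclass, field

# Hypothetical label schema -- all field names are illustrative
# assumptions, not part of Fowler and Roberts's actual proposal.
@dataclass
class AppEvidenceLabel:
    app_name: str
    intended_use: str            # e.g., "self-management of mild depression"
    therapeutic_approach: str    # e.g., "guided CBT exercises"
    peer_reviewed_studies: int   # number of supporting published studies
    study_citations: list = field(default_factory=list)  # links or DOIs

# Example disclosure a developer might file with an app store (fictional app).
label = AppEvidenceLabel(
    app_name="ExampleCalm",
    intended_use="self-management of mild to moderate depression",
    therapeutic_approach="guided cognitive behavioral therapy exercises",
    peer_reviewed_studies=2,
    study_citations=["doi:10.0000/example-rct"],
)
```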

They suggest that app stores should incorporate study results into their display algorithms so that evidence-based apps would appear higher in search results than apps without such support. With this evidence available before download, Fowler and Roberts claim, consumers would be able to make informed decisions with a clearer understanding of an app’s quality.
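A minimal sketch of how such a ranking adjustment could work, assuming a simple weighted blend of store relevance and an evidence score; the 0.3 weight and the evidence metric below are hypothetical assumptions, not details from the article:

```python
# Hypothetical ranking sketch: blend store relevance with an evidence score.
# The weighting and the evidence metric are assumptions for illustration.

def evidence_score(app: dict) -> float:
    """Crude proxy: more peer-reviewed studies yields a higher score, capped at 1.0."""
    return min(app.get("peer_reviewed_studies", 0) / 5, 1.0)

def rank_apps(apps: list, evidence_weight: float = 0.3) -> list:
    """Order apps by a weighted blend of store relevance and evidentiary support."""
    return sorted(
        apps,
        key=lambda a: (1 - evidence_weight) * a["relevance"]
        + evidence_weight * evidence_score(a),
        reverse=True,
    )

# Fictional catalog: AppB has weaker store relevance but stronger evidence.
apps = [
    {"name": "AppA", "relevance": 0.9, "peer_reviewed_studies": 0},
    {"name": "AppB", "relevance": 0.8, "peer_reviewed_studies": 4},
]
print([a["name"] for a in rank_apps(apps)])  # ['AppB', 'AppA']
```

The point of the blend is that evidentiary support nudges rather than dictates placement, so an app with strong evidence can outrank a slightly more popular app without one.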

Fowler and Roberts also note that this labeling system could lead to higher profits for developers who create evidence-supported apps. Because users struggle to distinguish evidence-supported apps from lesser-quality apps under the current system, a labeling system would draw attention to the comparative quality of these apps and drive purchases.

Ultimately, even as Fowler and Roberts urge the tech industry to adopt a labeling system, they suggest that such a system would be only an initial step toward improving consumer safety. Ideally, they hope to see federal legislation that creates universal standards for app security and efficacy.