Week in Review

The Education Department issues new debt relief program regulations, FDA increases the availability of infant formula, and more…

IN THE NEWS

  • The U.S. Department of Education issued final regulations to expand eligibility for and improve debt relief programs. One rule establishes a defense for borrowers against repayment—either individually or as a group—if their institutions misled or manipulated them. A borrower can also discharge debt if their school closes or if they have a permanent disability. The rules permit eligible borrowers making income-based payments to receive one-time account adjustments under the Public Service Loan Forgiveness program. The regulations “ensure that all our targeted debt relief programs live up to the promises made by Congress in the Higher Education Act,” said Education Secretary Miguel Cardona.
  • The U.S. Food and Drug Administration (FDA) announced efforts to increase the availability of infant formula in the United States by permitting the importation of formula products from Ireland and New Zealand. FDA stated that the products will be sold at major U.S. stores and that the “agency continues to dedicate all available resources to help ensure that safe and nutritious infant formula products remain available for use in the U.S.” FDA’s announcement built on industry guidance it issued in September to keep certain formula products on the market, which FDA claimed will help “make families less susceptible to shocks in the infant formula market.”
  • A federal judge in Arizona declined to issue an injunction that would have prevented Clean Elections USA, a group committed to fighting potential voter fraud, from monitoring early voting ballot drop boxes. The lawsuit arose from three complaints alleging that Clean Elections USA violated the Voting Rights Act by intimidating voters at early voting drop boxes, including by photographing voters and their license plates. The judge found that, based on the evidence the parties had presented, Clean Elections USA’s actions did not fall into “any traditionally recognized category of voter intimidation” but maintained that he would welcome new evidence on the matter.
  • The U.S. Centers for Medicare & Medicaid Services finalized a rule that adjusts payment rates for physicians providing behavioral health services. One provision now allows providers of opioid use disorder treatments to bill for telehealth mobile unit services, such as treatment vans. The Centers also lifted certain restrictions on non-physician providers of behavioral health services, such as marriage therapists and professional counselors. The rule also promotes racial equity in access to Medicare programs by creating new financial incentives for physicians to provide coordinated care in underserved areas.
  • The Federal Trade Commission (FTC) issued an order against education technology company Chegg for inadequate data security practices. The FTC alleged that Chegg’s failure to improve its practices led to four data breaches that exposed its customers’ and employees’ personal information, including Social Security numbers. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, noted that the “order requires the company to strengthen security safeguards, offer consumers an easy way to delete their data, and limit information collection on the front end.”
  • A federal judge in New York denied a health provider’s attempt to prevent the U.S. Department of Labor from pursuing monetary damages on behalf of a COVID-19 whistleblower. The suit alleged that the health provider violated the Occupational Safety and Health Act when it terminated an employee who reported COVID-19 concerns about an in-person meeting. The court noted that when administrative agencies bring these types of legal actions, the agencies do so both to vindicate broader public rights and to represent the injured party. The Labor Department praised the ruling for reaffirming “the central importance of strong whistleblower protection provisions and enforcement.”
  • The U.S. Environmental Protection Agency (EPA) published a new list of drinking water contaminants that it plans to monitor for harmful or potentially harmful effects on human health under the Safe Drinking Water Act. Under the Act, EPA must create a list every five years of potentially dangerous contaminants found in drinking water that are not already subject to national drinking water regulations. EPA’s latest list added over 60 new individual chemicals and expanded the existing categories of per- and polyfluoroalkyl substances (PFAS), which are non-biodegradable, potentially harmful man-made substances. EPA Assistant Administrator for Water Radhika Fox stated that the new list is a step toward better protecting the public health—and the environment—from “forever-chemicals,” such as PFAS, and represents the “latest milestone in our regulatory efforts to ensure safe, clean drinking water for all communities.”
  • New York City passed a law that requires employers to include a “good faith salary range” in their job advertisements. The law specifies that the posted salary range must represent what the employer, in good faith, believes it would pay for the position at the time it posts the advertisement. The city’s Commission on Human Rights explained that employers must provide both a minimum and a maximum salary, and that job postings with language such as “$15 per hour and up” or “maximum $50,000 per year” would not comply with the new law because they include only one side of the salary range.

WHAT WE’RE READING THIS WEEK

  • In a working paper, J.S. Nelson, visiting researcher with the Program on Negotiation at Harvard Law School, argued that the United States should adopt widespread environmental, social, and governance (ESG) standards to keep up with other developed nations. Nelson identified the tension between recent U.S. court cases limiting administrative authority, the Biden Administration’s climate change policies, and new administrative rules around climate change as a major limiting factor in America’s ability to adopt and implement ESG standards. In addition, Nelson pointed out that the imprecise ESG standards currently in place in the United States can lead to more prosecutions for corporate misrepresentation and fraud around ESG issues. Nelson claimed that, until regulators and lawmakers adopt more precise standards, corporations will remain confused about how to comply with ESG mandates and the United States will remain “out-of-step” with other developed countries in tackling the climate crisis and other social issues.
  • In an article in the Harvard Journal of Law & Technology, Ifeoma Ajunwa, a professor at the University of North Carolina School of Law, argued that regulators should require employers to perform routine audits of the algorithms that they use to process and review job applications. Ajunwa noted the rise of automated hiring systems that screen out applicants without any human decision-making and claimed that these employment algorithms may violate antidiscrimination laws and equal opportunity principles by making hiring decisions biased against protected groups, such as female and Black job candidates. As a solution, Ajunwa proposed that legislatures require companies to conduct self-audits that verify “the accuracy of predictions made by the automated hiring system.” Ajunwa also recommended that a government agency serve as an external auditor of companies’ algorithmic decision-making tools.
  • In a Yale Law Journal article, Daniel Walters, associate professor of law at Texas A&M University School of Law, argued that agencies should incorporate political conflict into their regulatory practices to enhance democracy. Walters emphasized that his theory of democratic participation encourages conflict between agencies and the President because it provides additional avenues for democratic accountability. Walters concluded that, by increasing opportunities for accountability, agencies could address the “perception that the administrative state persistently favors entrenched or favored interests.”

EDITOR’S CHOICE

  • In an essay in The Regulatory Review, Ashley Casovan and Var Shankar, directors at the Responsible AI Institute, discussed procurement criteria that regulators prescribe for organizations’ contracts with vendors of artificial intelligence (AI) tools. Casovan and Shankar called on regulators to tailor procurement criteria based on the relative risks that different types of AI pose. Casovan and Shankar explained that risk in the use of AI tools includes not only the extent to which the tools meet standards such as accountability, fairness, and explainability, but also their compliance with regulations. Such a risk-based approach, Casovan and Shankar argued, would put organizations that procure AI on notice of issues that may arise later in their contracts with AI vendors.