Will the White House’s AI Policies Lead to Permissionless Innovation?

New artificial intelligence guidelines aim to improve oversight of growing automation in the United States.

The rise of automation through artificial intelligence—or AI, for short—has become a topic of increasing concern throughout society.

Last month, the White House Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) spoke to concerns about AI by issuing a statement containing 10 high-level principles aimed at providing direction to U.S. federal agencies seeking to regulate artificial intelligence. These policy principles follow from last year’s AI Executive Order 13859 announcing the American AI Initiative. This broad initiative is focused on directing federal funding, research, and infrastructure toward AI research and deployment. The executive order did not, however, direct federal agencies on how to approach regulating AI technology once deployed, a gap filled by the recently released policy principles.

According to the OSTP and OMB, the new AI policy principles rest on three major pillars:

  • Ensuring public engagement and trust by increasing public participation in AI policymaking;
  • Limiting regulatory overreach by requiring cost-benefit analysis before any AI regulations are imposed, with an emphasis on flexible regulatory frameworks and on interagency cooperation to avoid duplicative efforts; and
  • Promoting trustworthy AI by considering fairness, openness, nondiscrimination, transparency, safety, and security in regulation.

As federal agencies grapple with AI and its myriad implications for society and the U.S. economy, these AI principles signal a clear win for “permissionless innovation”—the development and circulation of products or services without prior approval from regulators—over the “precautionary principle” of using traditional approval processes before new technologies can be introduced in the marketplace.

This win for permissionless innovation should not be underestimated. Given AI’s potential to disrupt society in profound ways, there have been some calls for immediate and more heavy-handed AI regulation both at home and abroad.

In a parallel universe, the OSTP AI principles might have taken a more populist approach to the technology given its potential to disrupt the workforce, impacting blue collar jobs in particular. Estimates of the likely job losses from AI-enabled automation vary widely and are somewhat speculative. For example, a 2013 Oxford University report predicts that U.S. job losses from AI-related automation may reach as high as 47 percent over the course of the next decade. But an Organization for Economic Cooperation and Development report estimates that only 9 percent of jobs are at high risk of automation. That said, all the relevant studies point in the same direction: AI will result in labor force disruption and job losses.

Similarly, AI also raises ethical and privacy considerations that have led other jurisdictions to consider slowing down its adoption altogether. For example, the European Commission is reportedly considering a ban on AI-enabled facial recognition technology in public spaces, citing ethical and privacy concerns. (Already, some localities in the U.S. have restricted use of this technology.) This policy approach with respect to facial recognition systems suggests that the EU may be willing to adopt the precautionary principle in formulating regulations governing other AI-powered technologies.

In light of concerns about AI, federal regulators in the United States might also have been tempted to slow down the pace of AI adoption rather than stand aside and allow the technology to emerge under a light-touch regulatory framework. That is what makes the OSTP/OMB principles so significant.

At the end of the day, OSTP and OMB have stuck with permissionless innovation, which has been the north star for technology regulation in the United States for over two decades. It likely reinforced the White House’s decision that a hands-off approach aligns with U.S. national security interests as the United States competes against China for global AI dominance. The OSTP/OMB approach should help to accelerate the United States in its race against China, and this was no doubt an important consideration for the decision-makers involved.

The alignment of economic and national security interests appears to have outweighed competing equities in the development of the recent federal principles on regulating AI. This has been so even in a White House otherwise characterized by robust, and often healthy, competition between economic populist concerns about maintaining good jobs for lower-skilled workers and conservative policy perspectives that favor unimpeded technological innovation.

Abigail Slater

Abigail Slater is the Antitrust Law Section Leader at the American Bar Association and the Senior Vice President of Policy & Strategy at the Fox Corporation in Washington, D.C.