Harnessing AI to Combat Climate Change

At a Penn Program on Regulation workshop, Cass Sunstein explains how AI can help consumers make climate-friendly choices.

In the same year that OpenAI released its revolutionary generative artificial intelligence (AI) program ChatGPT, greenhouse gas concentrations, global sea levels, and ocean temperatures reached record highs.

Might this monumental technological development help people around the world address the monumental challenge posed by climate change?

Yes, argued law professor Cass Sunstein, speaking at a recent Penn Program on Regulation workshop.

During the workshop, Sunstein proposed that policymakers, climate activists, and developers mobilize “choice engines”—AI-driven algorithms that could nudge consumers toward sustainable decision-making.

Aside from reducing consumers’ carbon footprints, choice engines could also assist consumers in making prudent financial decisions, Sunstein explained. For example, choice engines could help reduce both the environmental and consumer costs associated with motor vehicles or appliances that consume significant amounts of non-renewable energy.

After noting the tension between externalities (costs imposed on the climate) and internalities (costs individual consumers impose on their future selves), Sunstein outlined how choice engines could reconcile these often-competing interests. Sunstein argued that in many instances, the climate-friendly choice is also the wallet-friendly choice. Consumers, however, may not realize this and instead end up making what Sunstein called an “imperfect choice.”

According to Sunstein, choice engines would address four of the main reasons that consumers make imperfect choices: lack of information, optimistic bias, inertia, and present bias.

Consumers rarely have sufficient information about the environmental or long-term economic consequences of a particular decision to make an informed choice. At the same time, Sunstein pointed out that people tend to be overly optimistic about themselves and believe that their decisions will not negatively impact the environment.

Inertia also affects individual decision-making: the cognitive “tax” associated with departing from the status quo often inhibits behavior change. Similarly, Sunstein explained that, because of a bias toward the present, some people prefer a smaller but immediate benefit, such as $10 today, over a larger but delayed one, such as $20 in a month.

Sunstein argued that choice engines may help consumers overcome these informational deficits and behavioral biases while also tailoring recommendations to diverse consumer preferences. According to Sunstein, in contrast to traditional, more paternalistic means of government intervention, such as label mandates or taxation, choice engines can account for an individual consumer’s needs, priorities, and values and then suggest a particular course of action.

Crucially, this capacity means that choice engines could preserve freedom of choice while still nudging consumers toward more sustainable decision-making, Sunstein maintained.

For example, Sunstein envisioned a choice engine that recommends a particular car based on a consumer’s individualized preferences. For a consumer uninterested in an electric car, the choice engine could instead recommend a relatively fuel-efficient gas-powered car. Sunstein outlined how a choice engine could make suggestions based on long-term costs or emissions, thus overcoming consumers’ information deficit, their tendency to downplay their own environmental impacts, and their prioritization of lower upfront costs over long-term savings.
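To make the intuition concrete, the following is a minimal, hypothetical sketch of how such a recommendation might be computed. It is not Sunstein’s implementation, and the vehicle names and dollar figures are invented for illustration only: the sketch simply ranks options by upfront price plus projected energy costs over an ownership horizon, with an optional, consumer-chosen penalty per kilogram of emissions.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    purchase_price: float       # upfront cost, dollars (illustrative figures)
    annual_energy_cost: float   # fuel or electricity, dollars per year
    annual_emissions_kg: float  # CO2-equivalent, kilograms per year

def lifetime_score(v: Vehicle, years: int, emissions_weight: float) -> float:
    """Upfront price + energy costs over the horizon, plus an optional
    consumer-chosen penalty (dollars per kg) for emissions."""
    return (v.purchase_price
            + years * v.annual_energy_cost
            + emissions_weight * years * v.annual_emissions_kg)

def recommend(vehicles: list[Vehicle], years: int = 10,
              emissions_weight: float = 0.0) -> Vehicle:
    """Return the option with the lowest lifetime score.
    emissions_weight = 0 models a purely wallet-focused consumer;
    larger values model stronger climate preferences."""
    return min(vehicles, key=lambda v: lifetime_score(v, years, emissions_weight))

if __name__ == "__main__":
    options = [
        Vehicle("Gas SUV", 35_000, 2_400, 4_600),
        Vehicle("Fuel-efficient sedan", 28_000, 1_200, 2_300),
        Vehicle("Electric car", 40_000, 600, 900),
    ]
    # A consumer indifferent to emissions still sees the long-term savings.
    print(recommend(options).name)                         # fuel-efficient sedan
    # A consumer who weights emissions at $0.50/kg is steered further.
    print(recommend(options, emissions_weight=0.5).name)   # electric car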

Sunstein classified his proposal as a form of libertarian paternalism because, although choice engines may nudge consumers to reduce their externalities, the consumer ultimately retains the choice. According to Sunstein, choice engines could align climate-friendly and wallet-friendly decision-making and reach those less receptive to more heavy-handed regulations.

Although Sunstein maintained that choice engines are less intrusive and paternalistic than traditional interventions, he conceded that they retain a degree of paternalism that may invite pushback.

Sunstein also cautioned that “coarse” rather than “personalized” choice engines could replicate the problems of traditional, “mass” interventions that have a poor track record of producing meaningful behavioral change.

Furthermore, Sunstein recognized that AI may replicate human biases or develop its own biases. And he warned that choice engine developers could be self-interested and exploit consumers’ informational deficits or behavioral biases.

Sunstein urged policymakers to guard against such risks by adopting regulations that scrutinize AI for deception and manipulation.

Despite AI’s challenges, Sunstein celebrated its potential both to improve consumer welfare and to reduce climate externalities. Sunstein remains optimistic, ultimately concluding that choice engines represent an “entirely realistic” means of mitigating climate change. And he emphasized that now is the time for action, the time to remember that what “we hold in our hands is our planet.”

Sunstein’s presentation was part of a year-long workshop series on “AI and Climate Change: Global Sustainability in an Era of Artificial Intelligence” organized by Cary Coglianese, the director of the Penn Program on Regulation. A video recording of Sunstein’s presentation, along with recordings of all the other workshops in the series, can be found at the Penn Program on Regulation’s YouTube channel.