Outcome-Based Cooperative Regulation

The key to improving regulation rests with strengthening its capacity to reinforce productive cooperation.

We need to cooperate more if we want to solve problems. And society’s problems are only getting bigger, with more risk.

Cooperation enables learning and improving. Defeating pandemic viruses requires cooperating in research into vaccines and their dissemination to people. Defeating global financial crises involves using common or public funds and taxes to support banks, companies, and families. Defeating aggressive militaries and electronic aggression and fraud calls for cooperating too, starting with the sharing of information. And people succeed best in commercial institutions where all employees work together in a culture of cooperation.

But today, too many relationships are polarized and, as a result, many institutions are significantly under-achieving. Regulation, for example, finds itself in an inherently polarized situation. So how do regulators solve this and achieve more by cooperating more?

Outcome-based cooperative regulation is a new model of regulation based on scientific research on how people behave—rather than on theories of philosophy, law, or economics about how they might behave or ought to behave. This regulatory model is part of a wider model of how people achieve more through outcome-based cooperation. Both these models are already successful in various contexts but putting all the elements together unlocks huge opportunities for achieving more—as various organizations and regulators are finding.

But exactly how do we cooperate more? The answers are widely known but poorly applied. In essence, the answers involve purposes, outcomes, trust, evidence, ethics, feedback, self-scrutiny, and support.

Purposes and outcomes. To cooperate, we need to know what we are cooperating about. What are we trying to do and achieve? What outcomes are we intending to deliver? Which outcomes are good, and which ones are harmful? We can then monitor outcomes and see if we are delivering good or harm. We can see if we are improving performance or not.

But we need to agree on our common purposes and priorities. Since there are so many purposes, and some conflict with others, we need to discuss and rank them. We might not be able to achieve all purposes at the same time, so sequencing is critical.

For example, corporations need to produce profits, but society needs protection from harm: the classic regulatory conflict. Corporations, however, can pursue many more purposes, such as employment and various social, environmental, community, and national goals. This reality is reflected in the shift in emphasis in corporate management and regulation from maximizing shareholder value as the sole purpose of corporations to the pursuit of ESG goals and conscious and stakeholder capitalism.

It will be more effective to try to agree on how to achieve all these purposes in advance, rather than expect companies and regulators to work out the answers once a conflict arises. Admittedly, it is far easier for regulators to measure their outputs—such as the number of statements or rules issued, or the number of inspections or fines imposed—than to demonstrate outcomes—such as safe streets, a safe internet, or safe investments. But it is the outcomes that matter. Stating them clearly will help.

Trust, evidence, values, and ethics. Cooperation is based on trust. Trust is the mental mechanism that enables people to plan and act in the face of uncertainty. Predicting the future is impossible, but people can have a level of confidence that things will work well and a level of trust that people will behave as expected.

All human systems and relationships are based on trust. But today, too often people say they do not trust various politicians, companies, or partners. We should not be surprised if conflict and under-achievement are the consequence.

Trust is based on evidence. The best evidence builds up over time and forms a consistent and coherent whole about whether someone can be trusted. Much of the evidence in investment, markets, and regulation is familiar, including auditing results and systems that control activities against agreed standards. We have recently realized that evidence of behaviors and organizational cultures is also essential, even if difficult to produce.

Humans evaluate evidence of behavior, and even of official standards and rules, against an inherent set of ethical values. People’s brains are wired to know the difference between right and wrong. But people also possess heuristics and biases. They can maintain self-worth by tricking themselves that what they do is right—that is, cognitive dissonance. This capacity for self-deception makes openness to scrutiny and challenge essential.

The trick is to turn things around: Can an organization produce evidence of why it should be trusted by staff, investors, customers, and society?

Humans perform better when their intrinsic motivation is high. All good managers—and employees—know this. Supporting others’ autonomy, competence, and relatedness works; undermining these qualities impedes performance.

Creating incentives and rewards in human terms works better than framing them as financial targets—and it also reduces the risk of bad outcomes. Similarly, setting universal goals in terms of human, social, and environmental values—such as achieving both prosperity and protection—will generate broader involvement and increased trust.

Realizing the ideal. This model can also apply to all human relationships: families, communities, organizations, regulation, and dispute resolution. Where it applies, it will achieve more and better outcomes.

Yet whether achieving improved outcomes will be easier or more challenging depends on various personal, social, and public factors that can either facilitate or impede cooperation. Both the Nordic states and New Zealand, for example, benefit from high social capital, while numerous nations are held back by corruption and lack of coordination mechanisms. Similarly, attempts to “control” employees are problematic.

The ideal, though, is achievable. It largely exists in high-risk safety systems, such as aviation safety. That system was devised in the 1980s to move away from rules and enforcement to a performance-based “open and just” culture that applies across a network of all public and private organizations and actors, however modest their individual contribution. The Boeing 737 MAX disasters demonstrated what happens when responsibility is delegated without justified trust, and a commercial organization pursues only a profit goal, corrupting the culture. Volkswagen’s “dieselgate” and many other examples show what can go wrong.

Outcome-based cooperative regulation. The model of outcome-based cooperation in regulation involves six core elements:

1. All stakeholders agree on purposes, outcomes, evidence metrics, and systems.

2. They agree on expectations for how those who wish to be trusted should behave and set out this agreement in a code.

3. All actors who wish to join the “trust community” and “regulatory trust track” produce evidence that they are trustworthy. The type of evidence will evolve and be proportionate to the business and risk.

4. Those actors who do not wish to produce evidence of trustworthiness continue to be regulated under traditional rules and enforcement, but without the benefits of a trusted reputation—including the benefits of regulatory sandboxing, such as procurement, commercial, employment, and investment advantages, and a reduction in regulatory burden.

5. The trusted parties cooperate in a trusted and respectful environment, identifying and fixing problems, delivering desired outcomes, and increasing performance.

6. All parties help to identify harms and risks quickly and take action to deliver protection.

The basic elements of an outcome-based cooperative regulation typically involve three core players. The first is a stakeholder council that oversees the operation, and mode of operation, of the entire system. The stakeholder council ensures that the system is operating well and that there are no gaps. It also sets the primary code of behavior.

The second player—the regulator—represents the state and works to protect society and markets overall. It oversees the operational aspects of the system. It may be empowered to make the code mandatory and to refer complaints to an ombudsman. It also enforces the legal rules that act as the boundary of society’s requirements, any breach of which may trigger enforcement action. But it differentiates between breaches that are deliberate or reckless—which may trigger enforcement—and those committed by well-intentioned actors who take an ethical approach to preventing, identifying, rectifying, and learning from risk and harm—which instead would usually trigger cooperative support and intervention.

The third and final player in an outcome-based cooperative regulatory system is an ombudsman: an independent source of trusted information, advice, early resolution of problems and disputes, and decisions applying the code, including by referring points of law to a judge. The ombudsman plays a key role in providing information, communication between parties, and mediation. It aggregates data from all inquiries and disputes, and then feeds the learning back for appropriate action by companies, regulators, consumers, and others.

Many of these elements operate well in the United Kingdom and some other markets. They draw together developments in regulation and enforcement, as well as in dispute resolution and the use of online systems and artificial intelligence. They build on learning about why problems occur in organizations and businesses—and how to avoid them. Today, ground-breaking examples of the complete model of outcome-based cooperation are being considered in areas as diverse as financial services, energy and climate change, water, property and housing, biomedicine, and medical devices.

Ultimately, society needs to cooperate more to solve the complex problems facing the world. Orienting regulation more toward cooperation is one important step in a much-needed direction.

Christopher Hodges is emeritus professor of justice systems at the University of Oxford.