The agency’s design allows it to develop advanced technologies, some of which raise serious ethical concerns.
The federal agency central to developing the modern internet and other life-changing technologies is unique in structure, with an impressive record of advancements. But do its entrepreneurial mission and nimble decision-making expose the agency to a greater risk of ethical issues than other, more traditionally designed agencies face?
DARPA—the Defense Advanced Research Projects Agency—is a research and development agency within the U.S. Department of Defense. Known as the “Pentagon’s Brain,” it is responsible for a host of defense-focused inventions that have seeped into everyday life. Aside from the internet, DARPA can be credited with the early development of GPS, Siri, the computer mouse, and a host of other technologies.
The agency’s history mirrors its innovative mission. In the midst of the space race of the late 1950s, President Dwight D. Eisenhower formed what was then called ARPA. Many of the agency’s initial functions were later transferred to the National Aeronautics and Space Administration. But Eisenhower’s creation retained its key purpose of developing advanced technology in the name of national security. To reflect this defense-focused purpose, ARPA changed its name to DARPA in 1972.
Perhaps the ingenuity of DARPA lies not just in its inventions, but also in its governmental setup.
DARPA is not an independent agency but a component of the Department of Defense: its director and deputy director approve programs and report to Defense Department leadership. Below the directors, 220 government employees span six technical offices, including around 100 program managers.
This “flat” organizational model gives program managers the power to explore innovative project ideas without many bureaucratic delays. To deepen its technological expertise, DARPA collaborates with a range of experts in academia, government, and the private sector. Importantly, although DARPA receives consistent funding from Congress and is subject to general oversight from government officials, it functions without extensive layers of supervision.
Such a model came about by design. DARPA administrators enjoy distinctive flexibilities, including “special statutory hiring authorities and alternative contracting vehicles” that allow the agency to advance projects quickly. DARPA hires program managers for limited tenure, usually three to five years, to ensure they work at the agency “to get something done, not build a career.” Although this hiring policy results in quick turnover, managers have more flexibility than those in other U.S. research agencies.
These features, not enjoyed by other federal agencies, give DARPA its entrepreneurial style of management. Indeed, DARPA prides itself on “pushing the frontiers of what is possible.” This creative spirit has resulted in projects that test technological boundaries.
But therein lies the rub. Jeffrey Mervis of Science notes that critics of DARPA claim that the agency’s autonomy-driven nature can create ethical pitfalls. Other commentators also suggest that the limitless mentality of DARPA officials allows ethical discussions to fall by the wayside.
For example, the agency has proposed developing neurotechnology that alters the brains of soldiers. To create a stronger combatant—or even an invincible “super-soldier”—DARPA has explored human enhancements, such as implanted memory chips and a brain-machine interface. Although these ideas are aspirational and even hard to imagine, they illustrate just how far the agency is willing to push its technology.
Currently, DARPA is attempting to create fighter planes that operate solely with artificial intelligence (AI), rather than human pilots. As the Pentagon plans to invest almost a billion dollars in AI-driven technology, critics oppose allowing AI to make “life-taking decisions” in wartime. Hundreds of scholars and tech companies have called for regulations prohibiting these types of lethal autonomous weapons.
DARPA does recognize the ethical, legal, and social implications of its work. In response to growing concerns, the agency has developed procedures that assess these implications throughout the development of any given project. Skeptics, however, worry that these measures are not enough.
Still, the U.S. government and others continue to think the DARPA design has promise. Countries such as Japan and Germany have launched DARPA-inspired programs. Moreover, the United Kingdom and the Liberal Party of Canada seek to emulate the DARPA model with similar agencies. And within the United States, President Biden has proposed the creation of two DARPA-like agencies for health and climate initiatives.
It remains to be seen whether future renditions of DARPA’s “high-risk, high-reward” model will face ethical and moral questions similar to those that critics, and DARPA itself, have acknowledged. As shown by DARPA’s valuable work, cutting through red tape and expediting output can yield unrivaled results. But the ultimate consequences of technological innovations developed through DARPA may be harder to discern.