
On July 16, 1945, some of the greatest minds on Earth gathered in the New Mexico desert to watch the first test of a nuclear weapon. As tension mounted prior to the 5.30am detonation, physicist Enrico Fermi joked with other scientists present – including Richard Feynman and Robert Oppenheimer – saying “Let’s make a bet whether the atmosphere will be set on fire by this test”.

Fermi’s joke was underpinned by a serious query, made during the first months of the Manhattan Project by Edward Teller: In exploding a nuclear fission weapon, was there a chance that the temperature of the blast could fuse together nuclei of light elements in the atmosphere, releasing further huge amounts of atomic energy (the reaction which would be used in later, larger nuclear weapons)? If so, a runaway chain reaction might occur, through which the entire atmosphere of planet Earth could be engulfed in a nuclear fusion explosion.

The proposition was taken seriously, even though subsequent calculations would show that the chain reaction was an ‘impossibility’. It is said that this fear was also one of the reasons the Nazis baulked at building their own nuclear weapon in 1942. According to Albert Speer:

Professor Heisenberg had not given any final answer to my question whether a successful nuclear fission could be kept under control with absolute certainty, or might continue as a chain reaction. Hitler was plainly not delighted with the possibility that the Earth under his rule might be transformed into a glowing star.

Hitler did, however, see the macabre, surreal humour of even needing to pose the question, sometimes joking that “the scientists in their worldly urge to lay bare all secrets under heaven might some day set the globe on fire”.

The Nazi leader’s off-hand joke glosses over an extraordinary insight: 1942 marks an important time in the history of humanity, a turning point – a moment when our quest for knowledge reached a point where we wondered whether we now had the god-like ability to destroy the entire Earth.

In the intervening three quarters of a century, the further advancement of science has given rise to new fears of humanity creating its own apocalypse: the advent of genetically engineered ‘superbug’ bioweapons; the ‘grey goo’ scenario of runaway molecular nano-machines consuming everything on Earth; the suggestion that particle colliders might destroy the Earth via the creation of black holes or strange matter; the advent of a malevolent, super-intelligent Artificial Intelligence (the ‘Skynet’ scenario).

And as time goes on, these scenarios will not only further proliferate, but the technology required to achieve them will move closer to ‘off-the-shelf’ rather than being rare and expensive. So is it time that research into some of these areas was carefully monitored and regulated?

These concerns are at the heart of a new paper posted at arXiv.org, “Agencies and Science Experiment Risk”, authored by Associate Professor of Law Eric E. Johnson:

There is a curious absence of legal constraints on U.S. government agencies undertaking potentially risky scientific research. Some of these activities may present a risk of killing millions or even destroying the planet. Current law leaves it to agencies to decide for themselves whether their activities fall within the bounds of acceptable risk. This Article explores to what extent and under what circumstances the law ought to allow private actions against such nonregulatory agency endeavors. Engaging with this issue is not only interesting in its own right, it allows us to test fundamental concepts of agency competence and the role of the courts.

Johnson notes that the Acts which govern much of this research were written in the 1940s, and thus “never comprehended today’s exotic agency hazards”. Furthermore, he says, this legal gap “might be less troubling if it were not for insights from behavioral economics, neoclassical economics, cognitive psychology, and the risk-management literature, all of which indicate that agency scientists are prone to misjudging how risky their activities really are.”

Given that “the exotic agency-science risks discussed here constitute a truly elite set of menaces”, Johnson finds it “all the more remarkable that our legal structure refrains from engaging with them.”

As examples for discussion, he concentrates on two scenarios: particle colliders creating strange matter, and a plutonium-fueled spacecraft crashing into the Earth. Both have already prompted real-world public concerns about the possible dangers – the former with the 1999 worries over ‘strangelets’ being created at the Relativistic Heavy Ion Collider (RHIC); the latter with the ‘Stop Cassini’ protest in the lead-up to that probe’s 1997 launch. Johnson digs into the debates that occurred regarding the risks of both scenarios, and shows quite clearly that self-evaluation by the agencies involved cannot be trusted: “when it comes to low probability/high-harm scenarios occasioned by an agency’s own conduct, that agency is unlikely to adequately safeguard the public interest.”

For instance, NASA calculated the possible deaths resulting from a Cassini fly-by crash at 5000, while other notable scientists estimated numbers from 200,000 to 40 million. And Sir Martin Rees criticised a paper dismissing the risks of strangelets by saying the theorists “seemed to have aimed to reassure the public . . . rather than to make an objective analysis.”

In summary, Johnson notes:

These sorts of ultrahazardous-risk issues are unlikely to go away on their own. To the contrary, we should expect them to proliferate… Thus, a refusal of the law to deal with agency-created risk becomes increasingly undesirable.

What other end-of-world scenarios should we be looking out for? And what are your thoughts on regulating these areas more carefully?

(h/t Norman)