
Ending the World, For Science! Should We Start Regulating ‘Ultra-Hazardous’ Research That Has the Potential to Destroy Us?

On July 16, 1945, some of the greatest minds on Earth gathered together in the New Mexico desert to watch the first test of a nuclear weapon. As the tension mounted prior to the 5.30am detonation, physicist Enrico Fermi joked with other scientists present – including Richard Feynman and Robert Oppenheimer – by saying “Let’s make a bet whether the atmosphere will be set on fire by this test”.

Fermi’s joke was underpinned by a serious query, raised during the first months of the Manhattan Project by Edward Teller: in exploding a nuclear fission weapon, was there a chance that the temperature of the blast could fuse together nuclei of light elements in the atmosphere, releasing further huge amounts of atomic energy (the reaction that would be used in later, larger nuclear weapons)? If so, a runaway chain reaction might occur, through which the entire atmosphere of planet Earth could be engulfed in a nuclear fusion explosion.
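
In rough terms – and this is only a simplified sketch of the reasoning, not the wartime calculation itself – a self-sustaining atmospheric burn would require the energy released by fusing the air’s light nuclei in any heated region to outpace the energy that region radiates away, i.e. dE_fusion/dt > dE_radiated/dt. If radiation carries heat off faster than fusion can replace it, the burn snuffs itself out rather than spreading.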

The proposition was taken seriously, even though subsequent calculations would show that such a chain reaction was an ‘impossibility’. It is said that this same fear was one of the reasons the Nazis baulked at building their own nuclear weapon in 1942 – the same year Teller raised his question. According to Albert Speer:

Professor Heisenberg had not given any final answer to my question whether a successful nuclear fission could be kept under control with absolute certainty, or might continue as a chain reaction. Hitler was plainly not delighted with the possibility that the Earth under his rule might be transformed into a glowing star.

Hitler did, however, see the macabre, surreal humour in even needing to pose the question, sometimes joking that “the scientists in their worldly urge to lay bare all secrets under heaven might some day set the globe on fire”.

The Nazi leader’s off-hand joke glosses over an extraordinary insight: 1942 marks a turning point in the history of humanity – the moment when our quest for knowledge forced us to ask whether we now had the god-like ability to destroy the entire Earth.

In the intervening three-quarters of a century, the further advancement of science has produced new fears of humanity creating its own apocalypse: genetically engineered ‘superbug’ bioweapons; the ‘grey goo’ scenario of runaway molecular nano-machines consuming everything on Earth; the suggestion that particle colliders might destroy the Earth via the creation of black holes or strange matter; and the advent of a malevolent, super-intelligent Artificial Intelligence (the ‘Skynet’ scenario).

And as time goes on, these scenarios will not only further proliferate, but the technology required to achieve them will move closer to ‘off-the-shelf’ rather than being rare and expensive. So is it time that research into some of these areas was carefully monitored and regulated?

These concerns are at the heart of a new paper posted at arXiv.org, “Agencies and Science Experiment Risk”, authored by Associate Professor of Law Eric E. Johnson:

There is a curious absence of legal constraints on U.S. government agencies undertaking potentially risky scientific research. Some of these activities may present a risk of killing millions or even destroying the planet. Current law leaves it to agencies to decide for themselves whether their activities fall within the bounds of acceptable risk. This Article explores to what extent and under what circumstances the law ought to allow private actions against such nonregulatory agency endeavors. Engaging with this issue is not only interesting in its own right, it allows us to test fundamental concepts of agency competence and the role of the courts.

Johnson notes that the Acts which govern much of this research were written in the 1940s, and thus “never comprehended today’s exotic agency hazards”. Furthermore, he says, this legal gap “might be less troubling if it were not for insights from behavioral economics, neoclassical economics, cognitive psychology, and the risk-management literature, all of which indicate that agency scientists are prone to misjudging how risky their activities really are.”

Given that “the exotic agency-science risks discussed here constitute a truly elite set of menaces”, Johnson finds it “all the more remarkable that our legal structure refrains from engaging with them.”

As examples for discussion, he concentrates on two scenarios: particle colliders creating strange matter, and a plutonium-fueled spacecraft crashing into the Earth. Both have already prompted real-world public concerns about the possible dangers – the former with the 1999 worries over ‘strangelets’ being created at the Relativistic Heavy Ion Collider (RHIC); the latter with the ‘Stop Cassini’ protests in the lead-up to that probe’s 1997 launch. Johnson digs into the debates over the risks of both scenarios, and shows quite clearly that self-evaluation by the agencies involved cannot be trusted: “when it comes to low probability/high-harm scenarios occasioned by an agency’s own conduct, that agency is unlikely to adequately safeguard the public interest.”

For instance, NASA calculated the possible deaths resulting from a Cassini fly-by crash at 5000, while other notable scientists estimated numbers from 200,000 to 40 million. And Sir Martin Rees criticised a paper dismissing the risks of strangelets by saying the theorists “seemed to have aimed to reassure the public . . . rather than to make an objective analysis.”

In summary, Johnson notes:

These sorts of ultrahazardous-risk issues are unlikely to go away on their own. To the contrary, we should expect them to proliferate… Thus, a refusal of the law to deal with agency-created risk becomes increasingly undesirable.

What other end-of-world scenarios should we be looking out for? And what are your thoughts on regulating these areas more carefully?

(h/t Norman)

  1. GMOs are another example –
    GMOs are another example – creating Bt toxin-producing plants, releasing them into the environment, and then seeing a correlated rise in Crohn’s, ulcerative colitis, and other related intestinal conditions. Nature is very unpredictable and uncontrollable – gene transfer cannot be ruled out, and it’s possible gut bacteria could inherit the Bt toxin code, turning people’s bodies against themselves. To allow profit- and greed-motivated corporations to supervise their own testing, and to release their genetically modified creations into nature where they can never be recalled or controlled, creates a potential for catastrophe – similar to the Africanized killer bee escape, which still terrorizes and kills people every year. In the last couple of years several people were killed in separate incidents by Africanized bees in CA, near where I live.

    Yes, governments and scientists should bear the burden of proving their experiments are safe before they conduct them – whether those experiments are food-related, military, high-energy particle accelerators, etc… Their arrogance and recklessness are absurd.

    Hindsight is always 20-20, but by then it is sadly too late to undo the damage.

    1. burden of proof
      You say that governments and scientists should prove that their research will be safe, before they start serious work.

      But how do you enforce that? Who gets to decide?

      My point is that this idea is not enforceable – especially when we talk about government-sponsored research, but also private research. If someone wants to do the work, they will. Who could stop them?

      And who has the authority to decide?

  2. A lot of areas of scientific inquiry have in fact been banned
    It’s a great question, but I suppose in my mind the larger question is: why did we ban researching psychedelic drugs? Why did we ban researching marijuana (I’m talking about the US, where I live)? And it’s easy to forget that replicating Reich’s orgone studies is also banned. Most countries have bans on cloning as well.

    So, I think when having this conversation it’s pretty important to acknowledge that research not only can be banned, but has been, repeatedly. The real question is why on Earth we think manipulating one’s consciousness is so dangerous that it can’t even be studied, but building murderbots? Nukes that could destroy society? Totally cool. There are some incredibly primitive spiritual superstitions at play here that I think it’s important to acknowledge. But as to how you could possibly ban areas of scientific inquiry? You make it illegal and throw people in jail. It’s already been done.

  3. It also becomes a question of
    It also becomes a question of scale. If I want to “experiment” and do “research” on bomb-building, there are laws against that. And rightly so – I don’t want to blow up my neighbor by accident. There was a guy who built his own nuclear reactor in his backyard shed – he got in a little trouble.

    Yet corporations and governments are exempt, even though their “research” and experiments have the potential to kill millions, or – in the extreme – to end mankind. To say that because it would be difficult to enforce it should not be attempted is ridiculous. A small, powerful or influential group of people should not be making decisions that affect large numbers of people, and potentially the world’s population, without some type of oversight and accountability… though it obviously happens.

    This is not anti-science. It is pro-responsible science. And what we have, far too often, is irresponsible science.

    1. being responsible
      True, companies and individuals should be responsible. So should governments.

      But who should enforce all this? The US government? The Soviet government? The Chinese government? All of those have been involved in very risky research.

      Who is left? The UN? At best the UN has some moral authority – but that’s about it, and whoever does not care about that can be sued after the fact. Except that when it comes to the really dangerous things, there will be nobody left to sue.

      My point is not that it is difficult to enforce. My point is that it is impossible.

      It does not help us to say that some behaviour is irresponsible, when we don’t have any mechanisms to even slow the irresponsible people down. And in my estimation, we don’t have anything. So the only choice I see is to stay ahead with at least our understanding.

      Take nuclear weapons – that is 1945 technology. People will catch up.
      Take genetically modified crops – people will learn this too. Now, saying (as Monsanto does) that seeds blown over from an authorized field onto a non-participating neighbor’s land belong to Monsanto – that is just plain ridiculous and should be thrown out of court. Those laws exist already.

  4. Clearly one has to start
    Clearly one has to start somewhere. The article states that even Hitler, as insane as he was, “baulked at building [his] own nuclear weapon” due to the perceived possibility of a runaway chain reaction. The point is: just because China or… won’t play by the same rules, should we throw all the rules out? One must lead by example…

  5. Caution vs Fear-mongering
    The “Cassini” protests were stupid. The RTGs were essentially unbreakable, and the amount of plutonium-238 insufficient to cause even 50 deaths – by themselves. High estimates required fine powdering and dispersal within confined spaces to increase cancer *risks* – yet no one bats an eyelid at the finely dispersed radioactives belched from coal-fired power plants all the time, or the radon build-up in basements in granitic provinces across North America, which is ignored until someone has the balls to enforce ventilation codes.

    We accept everyday risks all the time – the USA doesn’t do anything about the car-deaths or gun-deaths that are easily preventable and *known* killers like *tobacco* and *coal-ash* are perfectly legal to inflict upon other people.

    And don’t get me started on the anti-GMO fear-mongering and closely related “organic food” bullshit.

    1. political talk lately
      Political talk lately has degenerated into panic mongering. On all sides.

      In the USA, Trump is bad. Hillary is bad too, but not as bad as Trump – unless you listen to Trump.

      I did listen to the Green party candidate. More panic. The excessive student loan debt is a reason to panic. So are a bunch of other issues. Panic panic panic.

      Isn’t the definition of panic that you make stupid decisions while in that state of mind?

  6. The point of the subject was
    The point of the subject was catastrophic, ultra-hazardous, world-ending science experiments.
    “We accept everyday risks all the time – the USA doesn’t do anything about the car-deaths or gun-deaths that are easily preventable and *known* killers like *tobacco* and *coal-ash* are perfectly legal to inflict upon other people.”
    These are all problems, but not world-ending ones. Hypothetical runaway black holes, GMOs run rampant, etc… these are the world-ending items mentioned. And with the exponential growth of science and technology, the possibility of an experiment resulting in an accidental catastrophe increases accordingly. Caution is appropriate.

    1. exponential
      Exponential growth sounds scary, doesn’t it?

      That is what it is meant to do – it is supposed to scare us.
      If anything, we are running out of smart people doing real science, because we are spending a lot more time and money entertaining today’s kids than we spent educating their grandparents.

      Education takes time and effort. And we are subcontracting out most of it these days. To schools and kindergarten and babysitters. Parents (mostly) just want their kids out of the way.

      The problem with these parents is that they are pretty much uneducated themselves, both in terms of science and in terms of responsibility. Today it is desirable to just do whatever your company wants, so you get ahead financially as fast as possible. I consider that very harmful.

      Freedom is not just doing what you want. More important is not doing what you don’t want to do – often because you find it irresponsible or morally distasteful.

      We do not want to farm out moral judgement the way we farm out education of the kids.

      But hey, that would be work – making decisions ourselves, with the risk that we will not find the results comfortable. Nah, it’s a lot easier to listen to some “authority”.
