The nuclear power complex made a huge mistake in how it originally promoted itself. That’s the contention of former MIT professor Jack Devanney in his latest book, Why Nuclear Power Has Been a Flop at Solving the Gordian Knot of Electricity Poverty and Global Warming.
Since the industry’s early days, proponents have alleged that the probability of a sizable release of radioactive material from a nuclear plant accident “is so low that you can just assume it won’t happen.” This assertion embodies the fiction that any release of radioactive material is unacceptable.
However, despite subsequent decades of propaganda to the contrary, not all radioactive releases pose a danger to humans or the environment. Therefore, Devanney states, instead of focusing on the probability of any radioactive release, no matter how small, emphasis should be placed on the potential consequences of a release. This is because small releases do not warrant the same response as large ones.
Regardless, the aim of the nuclear industry was eliminating all concern regarding accidental radiation releases of any amount whatsoever. “Once the industry and the regulators promulgated this falsehood, they had to try to make it true,” Devanney explains.
Unfortunately, the cost associated with that effort is pricing nuclear power out of the energy market. Let’s look at how the technique adopted by the industry to measure nuclear risks contributed to this situation.
Probabilistic Risk Analysis
“Probabilistic risk analysis” (PRA) has become the cornerstone of the nuclear regulators’ approach to nuclear safety analysis. Devanney explains that PRA is the tool that provides the basis for declaring potentially harmful radiation releases to not be a credible concern.
He argues that while we should take reasonable measures to ensure radioactive releases are rare, the real issue is explaining the potential consequences of a release. In other words, consequences, which rest on scientific fact, are more important and more concrete than probabilities, which rest on limited data and "expert" opinion. Resources could then be used to develop appropriate consequence-mitigating measures.
Interestingly, the use of PRA avoids the need for safety testing. "The PRA paperwork might be horribly expensive, but it was a hell of a lot cheaper than building a plant just to put it through a series of rigorous stress tests," writes Devanney. Both the Semiscale Program of 1965-1986 and the Loss-of-Fluid Test (LOFT) facility employed in the 1970s and 1980s at the Idaho National Lab used reduced-scale systems to perform loss-of-coolant testing in lieu of constructing a full-scale pressurized-water reactor for safety tests. Devanney alleges that PRA became a substitute for very expensive testing that would validate or invalidate the safety case of a nuclear plant license applicant.
Logical Risk Assessment
To understand proper risk assessment, consider the following maxims of risk management paraphrased from P.L. Clemens and R. R. Mohr (Concepts in Risk Management, February 2002):
Everything has hazards, and all hazards have risk.
Risks are not equally consequential.
Risk has two components — severity and probability of occurrence. Both must be evaluated to assess risk.
Man lacks omniscience — some risks won’t be known.
Man lacks precognition — some risks won’t be foreseen.
Man’s resources are limited — resources available to control risks are also limited.
A thing is “safe” only to the degree that its risks are acceptable. There is no absolute safety.
Recognized risks exceeding an acceptability limit must be made known to those who may suffer their consequences.
Probabilities of all risks are finite. Therefore, the bizarre (low probability) mishap will occur — sometime, somewhere.
The last maxim is critical to understanding why the focus of nuclear power plant safety should be on accident consequences, rather than probability of occurrence.
We can easily get caught in the trap of believing that because we can calculate an event’s probability, we therefore know its likelihood of occurrence. Such calculations, although necessary for performing bona fide risk assessments, must not be afforded absolute status.
Low-probability events do occur despite best efforts to minimize their likelihood. Just because an event has an extremely low probability of occurrence doesn't mean it can't happen today, tomorrow, or next week. In fact, it could occur today, and tomorrow, and next week. It is not likely, but neither are any other unlikely catastrophes that strike without any apparent schedule (earthquakes, floods, tsunamis).
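A quick calculation makes the point concrete: even a very rare event becomes likely once enough independent trials accumulate. The sketch below uses made-up probabilities purely for illustration, not actual reactor failure data.

```python
# Illustrative only: how a "rare" event becomes likely over many
# independent trials. The per-year probability here is invented.

def prob_at_least_one(p_per_year: float, years: float) -> float:
    """Probability of at least one occurrence over `years` independent trials."""
    return 1.0 - (1.0 - p_per_year) ** years

# A hypothetical 1-in-10,000-per-reactor-year event, across a fleet of
# 400 reactors each operating for 40 years (16,000 reactor-years):
p = prob_at_least_one(1e-4, 400 * 40)
print(f"{p:.0%}")  # roughly 80% -- the "bizarre" mishap is near-certain
```

This is why the last maxim holds: over enough reactor-years, even very small per-year probabilities compound toward certainty, so planning must assume the event will eventually occur somewhere.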
Breaking Public Trust
The risk of assuring the public that a very low-probability radiation release won’t happen is that when it does happen, public trust is lost. Such is the legacy of Pennsylvania’s Three Mile Island (TMI), where a 1979 accident triggered widespread, unwarranted panic. In the preface to the seventh edition of his book The Health Hazards of Not Going Nuclear, engineering professor Petr Beckmann quoted news stories that erupted in the wake of TMI with the recurrent theme: “Scientists told us an accident of this type was virtually impossible, but now it has happened.”
Beckmann pointed out that, with an accident like TMI, “which resulted in nothing but property damage,” the probability of a radiation release is not insignificant. What is highly improbable with such an accident is any loss of life or long-term adverse health consequences.
Meanwhile, Beckmann explained that the "average dose received by people in the neighborhood of TMI due to the accident was one millirem." To put that in perspective, consider that passengers on a coast-to-coast jet flight receive about 5 additional millirems due to cosmic radiation. That helps explain why, during 18 years of health monitoring following the TMI incident, the Pennsylvania Department of Health found no adverse health effects attributable to TMI in more than 30,000 people living within five miles of the plant.
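The dose figures above can be put side by side in a short calculation. The TMI and jet-flight numbers come from the text; the ~300 mrem/yr U.S. natural-background figure is an added assumption (a commonly cited approximate value), not from the article.

```python
# Rough dose comparison; all units are millirem (mrem).
tmi_avg_dose = 1          # average dose near TMI from the accident (per text)
jet_flight = 5            # coast-to-coast flight, cosmic radiation (per text)
natural_background = 300  # approximate U.S. annual background (assumption)

print(tmi_avg_dose / jet_flight)          # TMI dose = 1/5 of one flight
print(tmi_avg_dose / natural_background)  # well under 1% of a year's background
```

On these figures, the average accident dose near TMI was one-fifth of what a single cross-country flight delivers, which underscores why the monitoring found no attributable health effects.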
Rebuilding Public Trust
Finally, mention should be made of the maxim listed above: "Recognized risks exceeding an acceptability limit must be made known to those who may suffer their consequences." This is critical to gaining and maintaining the public's trust.
The nuclear power complex must first educate the public regarding the real health consequences (or lack thereof) of potential nuclear plant accidental radiation releases. This will establish a scientifically based acceptability limit for such situations. The industry can then specify under what circumstances accidental releases might exceed acceptable limits and plan mitigating measures to limit public exposure.
Of course, this means that the U.S. Nuclear Regulatory Commission must abandon the flawed "linear no threshold" (LNT) radiation dose-response model, which erroneously assumes that all radiation doses, however small, are harmful. LNT must give way to a realistic model with a defined threshold for actual biological harm.