Here’s a particularly egregious and scientifically vapid (as you’d expect, of course) interview with Helen Caldicott… recorded recently on some kind of “environmentalist” podcast.
Now… I could write a comprehensive technical deconstruction and debunking of essentially the whole lot… but I’m only one person, with a finite amount of time.
I’ll get started, for now, by taking a look at just one particular sentence of nonsense from Caldicott.
Ask questions. Seek the evidence. Ask everybody questions, and never take anybody’s word for it. Are factual statements backed up by evidence? Are quantitative statements backed up by measurements, calculations, or derivations? Can those measurements or derivations be described and reproduced? Read everything you possibly can, and you decide.
Do people like Caldicott have the right idea? Or do people like George Monbiot have the right idea – that beneath the FUD, rhetoric and hysteria, these people have absolutely no real evidence, facts, knowledge or technical literacy at all?
HC: “Well it won’t recover. These accidents go on forever because plutonium’s half-life is 24,400 years. It lasts for half a million years. Thirty tons of plutonium got out at Chernobyl.”
Thirty tons of plutonium “got out” at Chernobyl!?
Personally, that reads many thousands of counts per minute on my baloney detector.
Let’s follow Dr. Caldicott’s favourite piece of advice… let’s read her book. Surely, just like all of Caldicott’s other “references” usually are, it’s got to be “in my book”, right?
“Plutonium is so carcinogenic that the half-ton of plutonium released from the Chernobyl meltdown is theoretically enough to kill everyone on Earth with lung cancer 1100 times, if it were to be uniformly distributed into the lungs of every human being.”
(From Nuclear Power is Not The Answer).
Hmmmm. Curious. It looks like we’ve gone from “a half-ton” in the book to “thirty tons” in this recent interview. Well, so much for “you should read my book… it’s all in the book!”
(By the way… that “kill everyone on Earth with lung cancer 1100 times…” bit is complete baloney. But that’s a story for another day.)
Reactor-grade plutonium typically consists of approximately:

- 1.3% 238Pu, with a half-life of 87.7 years and a specific activity of 634 GBq/g;
- 56.7% 239Pu, with a half-life of 24,110 years and a commensurately far smaller specific activity of 2.3 GBq/g;
- 23.2% 240Pu, with a half-life of 6564 years and a specific activity of 8.40 GBq/g;
- 13.9% 241Pu, with a half-life of 14.35 years and a specific activity of 3.84 TBq/g;
- 4.9% 242Pu, with a half-life of 373,300 years and a specific activity of 145 MBq/g.
Taking the weighted sum of all the above, we find that the overall specific activity of reactor-grade plutonium is 545.3 GBq/g, predominantly due to the 241Pu and the 238Pu content.
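As a quick sanity check, that weighted sum is easy to reproduce. Here’s a minimal sketch in Python, using only the composition and specific-activity figures quoted above:

```python
# Reactor-grade Pu: (mass fraction, specific activity in GBq/g),
# using the figures quoted in the text above.
composition = {
    "Pu-238": (0.013, 634.0),
    "Pu-239": (0.567, 2.3),
    "Pu-240": (0.232, 8.40),
    "Pu-241": (0.139, 3840.0),  # 3.84 TBq/g expressed in GBq/g
    "Pu-242": (0.049, 0.145),   # 145 MBq/g expressed in GBq/g
}

# Weighted sum over the mass fractions gives the overall specific activity.
specific_activity = sum(f * a for f, a in composition.values())
print(f"{specific_activity:.1f} GBq/g")  # → 545.3 GBq/g
```

Almost all of that activity comes from the 241Pu and 238Pu terms, as noted above.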
(Reactor-grade plutonium is considerably more radioactive than weapons-grade plutonium, due to the presence of substantial concentrations of these relatively unstable, high-activity plutonium nuclides. Weapons-grade plutonium is almost entirely 239Pu, which despite being a good fissile fuel, is more stable and less radioactive. The radiological heat output of 238Pu, gamma-radiation (from the 241Am daughter of 241Pu) and the high rate of neutron emission from the spontaneous fission of 240Pu all make these nuclides extremely deleterious and undesirable in nuclear weapon design and engineering.)
The best estimate of the quantity of plutonium (a reactor-grade cocktail of different plutonium nuclides) released at Chernobyl, based on the available data and as published in the reports of the Chernobyl Forum, is 3 PBq (3×10¹⁵ Bq).
The approximate total mass, based on the best available data, of plutonium released into the environment at Chernobyl is 3 PBq divided by 545.3 GBq/g.
As the British physicist David MacKay put it: “I’m not trying to be pro-nuclear. I’m just pro-arithmetic.”
It’s 5.5 kilograms.
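Dividing the published release figure by the specific activity is a one-liner; here’s the arithmetic, using the figures above:

```python
released_activity_bq = 3e15        # 3 PBq of Pu released (Chernobyl Forum)
specific_activity_bq_g = 545.3e9   # 545.3 GBq/g, reactor-grade Pu (from the text)

mass_kg = released_activity_bq / specific_activity_bq_g / 1000
print(f"Mass of Pu released: {mass_kg:.1f} kg")  # → Mass of Pu released: 5.5 kg

# How far off are Caldicott's figures?  (Taking "ton" as a metric tonne.)
print(f"Book ('half a ton'):       ~{500 / mass_kg:.0f}x the best estimate")
print(f"Interview ('thirty tons'): ~{30000 / mass_kg:.0f}x the best estimate")
```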
Incidentally, that’s a very small amount of plutonium compared to the amount of plutonium that has been dispersed around the environment from half a century of nuclear weapons testing. 5.5 kilograms of plutonium is, approximately, the amount of plutonium in the pit of a single nuclear weapon. A single zero-yield “fizzle” of a nuclear weapon with no fission, or a zero-yield one-point-implosion safety test, or the accidental HE explosion (without proper implosion of the primary, as in the Palomares and Thule accidents) of a single nuclear weapon will disperse a roughly comparable mass of plutonium into the environment. (But less radioactivity, since weapons-grade Pu is less radioactive than reactor-grade Pu.)
So, Caldicott has gone from exaggerating the true number by a factor of approximately 100 to exaggerating the true number by a factor of approximately 6000.
Anyway… let’s just step back a minute. 30 tons of plutonium released at Chernobyl? Let’s apply what scientists, engineers and technologists sometimes refer to as the “reasonableness test” or the “smell test”. Can you quickly “smell” the data and determine if it is roughly plausible or not?
The total mass of uranium dioxide fuel in the fuel assemblies of a fully fueled RBMK reactor is about 180 tonnes. That’s about 159 tonnes of uranium, if you take off the mass of the oxygen in the uranium dioxide. When LEU fuel is irradiated at a typical burnup in a nuclear power reactor, about one percent of the mass of the original uranium ends up as transuranic actinides, mostly plutonium, by the time the fuel is removed. So, that’s a total plutonium inventory in the Chernobyl reactor of approximately 1.6 tonnes.
So, if we make a conservative, pessimistic and entirely unrealistic assumption that 100% of the plutonium inventory in the nuclear fuel was entirely vaporised and released into the environment during the Chernobyl accident, that would be 1.6 tonnes of plutonium released to the environment. (In reality, that fraction was something more like 0.34% of the total inventory of plutonium within the irradiated uranium dioxide fuel.)
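The back-of-envelope inventory arithmetic above can be sketched the same way (the 1% transuranic conversion and 0.34% release fraction are the figures from the text):

```python
uo2_tonnes = 180                    # UO2 in a fully fueled RBMK
u_fraction = 238 / (238 + 2 * 16)   # mass fraction of U in UO2, ~0.88
u_tonnes = uo2_tonnes * u_fraction        # ~159 t of uranium
pu_tonnes = u_tonnes * 0.01               # ~1% becomes transuranics (mostly Pu)
released_kg = pu_tonnes * 1000 * 0.0034   # ~0.34% of inventory actually released

print(f"Pu inventory: ~{pu_tonnes:.1f} t; released: ~{released_kg:.1f} kg")
# → Pu inventory: ~1.6 t; released: ~5.4 kg
```

Reassuringly, the ~5.4 kg you get this way agrees well with the 5.5 kg derived independently from the activity figures above.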
So, does “thirty tons” pass the smell test? Not by a long shot.
I’m going to focus this post on radiation dosimetry – because radiation dosimetry is what really matters in terms of deciding whether anybody can actually get hurt. So far, nobody around Fukushima has been hurt by radioactivity, although of course tens of thousands are still dead or missing because of this great tragedy.
It doesn’t matter what you do or don’t do to the reactors or the used fuel, or what condition they’re in – at the end of the day, the radiation dose to the public is how we measure the effect of this incident on the public and its potential for harm.
To be honest, I’m really not concerned much with what the dose rates are in the plant itself.
The men and women who work there understand dose rates and health physics quite well. They routinely work in areas of elevated above-background dose, and they know how to work safely in those environments. They understand how to measure and quantify the radiation field in the working environment, and the accumulated doses that they’re personally receiving.
They understand how to manage shielding, exposure time, radiation measurement and dosimetry in order to get the work done safely and effectively.
Even with abnormally elevated radiation fields in some areas as a result of these incidents, they still know how to work safely. If the radiation dose rate in some particular area is so highly elevated that it cannot be entered safely for any length of time at all, then they won’t be entering it.
It’s pointless to scare the public with elevated on-site dose rate measurements. They’re not working on the site. Leave that for the people with health physics training. I’m much more interested in off-site dose rate measurements, personally, as those are the measurements that are actually of relevance to the public.
Off-site radiological dose rates
The KEK accelerator physics complex in Tsukuba (165 km from Fukushima) has a webpage showing their real-time measurements of the environmental gamma dose rate (counted with a GM tube). They’re currently measuring 0.17 μSv/h, at the time I write this, which has been fairly constant over the last few days, except for a brief, narrow spike up to 0.6 μSv/h, which they observed on the 16th.
(NOTE: Just to avoid any ambiguity, “m” means milli, 10⁻³. “μ”, or “u” if you don’t have access to the Greek alphabet in your software, means micro, 10⁻⁶. And when I say milli I’m quite sure I mean milli, and when I say micro I’m quite sure I mean micro. Because these prefixes are important, I make damn sure to get them right.)
Each year, a resident of the United States receives an average total dose from natural background radiation of about 3.1 mSv: the dose from natural radioactivity in the Earth, and from cosmic rays from space. That’s equal to 0.354 μSv/h.
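Converting an annual dose into an average hourly dose rate is simple arithmetic; a small helper like this (a sketch, not any official conversion tool) covers the comparisons used throughout this post:

```python
HOURS_PER_YEAR = 365.25 * 24  # ~8766 h

def annual_to_hourly(msv_per_year):
    """Annual dose in mSv -> average dose rate in uSv/h."""
    return msv_per_year * 1000 / HOURS_PER_YEAR

print(f"{annual_to_hourly(3.1):.3f} uSv/h")  # US natural background → 0.354 uSv/h
print(f"{annual_to_hourly(260):.1f} uSv/h")  # Ramsar, Iran → 29.7 uSv/h
```

The 0.17 μSv/h currently measured at KEK then comes out at a bit under half of that 0.354 μSv/h US background figure.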
In practice, the average dose that a person receives each year in the United States is significantly higher than that natural background dose, about twice that, once you’ve added on the dose from medical imaging and things like that.
The radiation dose rate being measured in Tsukuba right now, after the Fukushima accident, is less than half of the average natural background radiation dose rate that a person receives in the United States. This includes all sources of radiation in Tsukuba, including natural geological radioactivity, cosmic radiation, and any radioactivity released at Fukushima, as well as any ionising radiation from the particle accelerators at KEK, which is what these sensors are actually intended to monitor.
That brief, narrow spike seen in the radiation field measured at KEK doesn’t really concern me. The radiation dose you’ll receive if you hang around in that area for an extended period of time is the area under the graph – the integral – over that period of time. For such a short, sharp spike, the overall potential dose is still quite small.
In order to quantify the potential harm from a significant release of radioactivity, it would make more sense to “filter” the dose-rate data from the detector with a rolling average, making it more straightforward to interpret the potential to receive any significant radiation dose.
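As a sketch of what I mean, here’s a trailing moving average applied to an illustrative trace. The numbers are made up to resemble the KEK readings, not real data:

```python
def rolling_mean(samples, window):
    """Trailing moving average; windows are shorter at the start of the series."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

# Steady 0.17 uSv/h background with one brief spike to 0.6 uSv/h:
trace = [0.17] * 10 + [0.6] + [0.17] * 10
smoothed = rolling_mean(trace, window=6)
print(max(trace), round(max(smoothed), 3))  # the spike barely moves the average
```

The smoothed peak stays below 0.25 μSv/h even though the raw trace briefly touched 0.6 μSv/h, which reflects the small area under that narrow spike.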
KEK is also measuring the concentration of 131I and the short-lived fission product 132Te in the atmosphere and reporting regular updates to this data online. The concentrations we’re looking at here are extremely small – on the order of 10 microbecquerels per cubic centimeter – but they are concentrations which they are able to accurately measure at KEK, using a high-volume air sampler and a high-purity germanium gamma-ray spectrometer.
The environmental gamma-ray dose rate measured in Tokyo, averaged over the hour from 11 pm to midnight on March 18th, was 0.0471 μSv/h. This radiological monitor in Tokyo returned its highest reading yet on the 16th, from 05:00 to 05:59: a dose rate of 0.143 μSv/h.
So, that most recent figure from Tokyo is 13% of the average natural background radiation dose rate in the United States. One banana dose is something like 0.1 μSv, so what we’re measuring in Tokyo at the moment comes in at just under 0.5 banana per hour. (One banana per hour, and you’re going to triple that dose rate.)
The highest figure measured at all in recent days, 0.143 μSv/h, is equal to 40% of the average natural background in the United States.
The radiation level in Saitama, outside Tokyo, is also being recorded and charted on the web. As of 21:00 on the 18th of March, they report a dose rate of 0.058 μSv/h. The maximum value reported, at 1 am on March 17, is 0.067 μSv/h. These figures are 16% and 19% of US natural background, respectively.
Two things are apparent from this data.
(a) Japan has very low levels of natural background radiation compared to the continental United States. (This is interesting in itself! It’s probably a combination of low elevation – more atmospheric shielding from cosmic rays – and a relatively low abundance of uranium and its daughter products in the ground.)
(b) The ionising radiation dose rate that people receive across Japan has not been elevated significantly at all, at least outside the immediate vicinity of the plant, as a result of the Fukushima damage.
(ASIDE: If you can’t read Japanese – I can’t – a little bit of Google’s automatic translation goes a long way in helping you sort through this important data.)
If we look at the 5 monitoring sites closest to the 30 km radius marked on the chart, the last three measurements marked on the chart for each of those sites are 52, 52, 52; 140, 140, 150; 40, 45, 45; 8.5, 9.0, 8.7; and 1.6, 1.6, 2.0 μSv/h.
This tells us that there is detectable radioactivity which is moving in a narrow plume in the atmosphere – it is not distributed out isotropically, which is indeed exactly what you would expect from thinking about the meteorology.
This chart of compiled radiation measurements also tells us a very similar story.
At that 30 km radius, the average dose rate from that monitoring station which reports the high outlier values – the one corresponding to the location of the plume – is 143 μSv/h.
I wonder what radionuclides are present in that plume? The presence of 131I, 132Te and 133Xe would tell us that this radioactivity has come from a reactor; the absence of these short-lived fission products would tell us it has come from used fuel in the pool. A little bit of gamma spectroscopy, and we would have the answers.
The presence of these radionuclides as measured at KEK confirms that at least a tiny bit of radioactivity has been released from the reactors themselves.
That’s fairly high, but it’s not obviously high enough to hurt people. If you stood in the location of that plume for an entire week, you would receive about 24 mSv – a dose consistent with a relatively high-dose nuclear imaging procedure, such as using 201Tl to make an image of a tumor.
If we remove those three outliers corresponding to the plume location from the above set of numbers and take the mean of the remaining values, this gives us a rough idea of the mean dose rate elsewhere along the 30 km radius, outside the location where the plume is passing. That mean value is 26 μSv/h.
If you were standing in that radiation field, 26 μSv/h, for five hours per day every day for a year, you would reach a total annual dose of 47 mSv, which is just above the allowed occupational radiation dose – above natural and non-occupational background – of 50 mSv per year, for people working around radioactivity, such as nuclear power plant employees. (This is the limit set in the United States by the NRC; I’m not sure what the corresponding dose limit is in Japan, but it will be something loosely similar.)
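Putting those figures together in one place, here’s a reconstruction of the arithmetic above, using the readings listed earlier:

```python
# Last three readings from each of the five stations near the 30 km radius:
readings = [52, 52, 52, 140, 140, 150, 40, 45, 45,
            8.5, 9.0, 8.7, 1.6, 1.6, 2.0]

plume = [140, 140, 150]                 # the outlier station under the plume
others = [r for r in readings if r not in plume]

plume_mean = sum(plume) / len(plume)    # ~143 uSv/h
other_mean = sum(others) / len(others)  # ~26 uSv/h

week_msv = plume_mean * 24 * 7 / 1000          # standing in the plume for a week
year_msv = round(other_mean) * 5 * 365 / 1000  # 5 h/day for a year at ~26 uSv/h
print(f"{plume_mean:.0f} uSv/h, {other_mean:.0f} uSv/h, "
      f"{week_msv:.0f} mSv/week, {year_msv:.0f} mSv/yr")
```

That reproduces the ~143 μSv/h plume average, the ~26 μSv/h mean elsewhere on the radius, the ~24 mSv week-in-the-plume figure, and the ~47 mSv annual figure compared against the occupational limit.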
But it’s well worth remembering that the radioactivity present now, at very low levels, will not be sticking around for a whole year. It is dispersing rapidly, and dose rates fall away sharply with distance from the Fukushima site. As we move further out from the 30 km radius marked on that map, the dose rates recorded are all at harmless levels, consistent with background radiation dose rates experienced by people in the United States and elsewhere across the world.
In Ramsar, Iran, the natural background radiation dose rate is unusually high – 260 mSv per year in some places. That is about 30 μSv/h, which is higher than the mean value of about 26 μSv/h measured at the monitoring stations 30 km from Fukushima, as described above.
The people of Ramsar experience a background radiation dose significantly above that which most other people across the world experience – but they do not seem to experience any ill health effects at all from this.
I hope all the above helps to put these dose rates in context.
The composition of the radionuclides that are responsible for most of the radioactivity in used nuclear fuel that has been stored in a cooling pool for a few months is very different from that in nuclear fuel in a reactor that is operating, or has just been shut down.
Fuel straight out of an operating reactor contains a number of short-lived, high-specific-activity fission product radionuclides, which are of the greatest health physics significance in the time immediately following severe reactor accidents.
Some of these short-lived fission products include iodine-131, xenon-133, xenon-135, tellurium-131, tellurium-132, and ruthenium-105. These short-lived fission products were very significant contributors to people’s radiation doses in the environment around Chernobyl in the time immediately following the Chernobyl disaster, for example, when they were dispersed from “hot” nuclear fuel from the reactor.
However, they are not present to any significant level in stored nuclear fuel, because they decay away relatively fast, and they cannot contribute any significant source term into the environment in some sort of accident scenario involving used nuclear fuel which has been stored for a month or three post-defueling.
So, what radionuclides are present in stored fuel? The main ones of interest here are the longer-lived fission products. 137Cs, 85Kr and 90Sr are the most significant ones. Of these, 85Kr is a gas, and has the most potential to be readily released from the fuel into the atmosphere. 137Cs accounts for most of the radioactivity of the used nuclear fuel, and it is usually the most feared radionuclide in the used fuel inventory, in terms of the potential source term released from an accident with a used-fuel pool.
Fuel-pool water evaporation
“In this house, we obey the laws of thermodynamics!”
— Homer Simpson
With the used fuel heating the water in the Unit 4 fuel transfer pool, how long will it take for all the water to be boiled away? It is actually possible to know this, without any real speculation. The physics is pretty simple.
We know that there is a full core-load of used fuel in the Unit 4 defueling pool, which was put there after the reactor was shut down for inspection on November 30.
(Assumption I’ve made here which may possibly be wrong: That that single core-load of fuel is the only fuel in the pool. Is there additional fuel in the pool? If there is, somebody needs to tell me how much and how long it has been cooling for, so we can re-run the numbers – or you can take what I’ve explained here and re-calculate it for yourself.)
That fuel has been cooling for the last 3.5 months, or approximately 9.2 × 10⁶ s.
After this time, the radiothermal power output of the used fuel is small. Looking at this decay heat chart, we read that the decay heat is approximately 2 MWt. However, this chart is for a power reactor with a thermal power rating of 3000 MW. (And I’ve done a not-really-precise job of eyeballing that chart.)
But the Fukushima-I Unit 4 reactor has an electrical power capacity of 784 MW; that’s about 2352 MW thermal. So, we need to scale back the above figure commensurately; it’s approximately 1.6 MWt, from the entire core load of fuel.
The latent heat of vaporisation of water, at 100 C, is 2260 kJ/kg. (Let’s assume, conservatively, that the water in the pool is boiling; it’s at 100 degrees C, and the only route of energy dissipation from the system is through vaporisation of the water. This also assumes that none of the energy released is stored in the water by means of a rise in the water’s temperature, because it’s already at boiling point, and that there is no functioning mechanism for otherwise cooling that water.)
1.6 MW ÷ (2260 kJ/kg × 1 kg/L) ≈ 0.7 litres per second, or about 61 cubic metres per day.
The used fuel pool at Vermont Yankee, which is also a GE BWR-4, is 40 feet long, 26 feet wide and 39 feet deep, and is normally filled with 35,000 cubic feet of water. I will make an assumption that the Fukushima I Unit 4 used fuel pool has the same dimensions.
The level of water in the used fuel pool is normally 16 feet above the top of the fuel assemblies. With the water evaporating at the rate described above, the water level will drop by about 2 feet per day.
Uncovering of the fuel assemblies would then take about eight days. (Beginning from the point where the water reached boiling point, after active cooling ceased.)
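All of the above boil-off arithmetic fits in a few lines. This is a sketch under the stated assumptions (Vermont Yankee pool dimensions, a single core load, water already at boiling, no leakage):

```python
# Decay heat: ~2 MWt read off a 3000 MWt reference chart, scaled to
# Unit 4's ~2352 MWt rating.
decay_heat_w = 2.0e6 * (2352 / 3000)          # ~1.6 MW

LATENT_HEAT = 2.26e6                          # J/kg, vaporisation of water at 100 C
boil_rate_l_s = decay_heat_w / LATENT_HEAT    # ~0.7 kg/s, i.e. ~0.7 L/s
daily_loss_m3 = boil_rate_l_s * 86400 / 1000  # ~60 m3/day

FT = 0.3048                                   # metres per foot
area_m2 = (40 * FT) * (26 * FT)               # pool footprint, ~97 m2
drop_ft_day = (daily_loss_m3 / area_m2) / FT  # ~2 ft/day

days = 16 / drop_ft_day                       # 16 ft of water above the fuel
print(f"{drop_ft_day:.1f} ft/day; fuel uncovered after ~{days:.0f} days")
# → 2.0 ft/day; fuel uncovered after ~8 days
```

If there is additional, older fuel in the pool, its extra decay heat shortens this timescale; the same few lines can simply be re-run with a larger heat term.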
(Working assumption which you may subject to some skepticism: That there is no form of leakage or other water loss pathway from the used fuel pool.)
It seems plausible that a fire hose or something can be used to add water to the pool at a rate equal to this loss rate of 700 ml per second. The Chernobyl-style helicopter drops seem like overkill, and they are doing an effective job of whipping up “Chernobyl again” fear, and images that the newspapers are having a field day with.
“Is it true they have nuke stuff inside of them?”
“What happens if one gets busted open? Everyone gets all mutated?”
“If you ever find yourself in the presence of a destructive force powerful enough to decapsulate those isotopes,” Ng says, “radiation sickness will be the least of your worries.”
— Neal Stephenson, Snow Crash
At the present time, the people of Japan are struggling to deal with one of the most serious natural disasters anywhere in the world in recent recorded history. My thoughts are with them.
But I’ll get straight to the point. All right, what do we actually know about the effects of this disaster on Unit 1 at the Fukushima I Nuclear Power Station?
There are 53 operational nuclear power reactors in Japan today. Most of them were operating normally at the time of the recent earthquake, and continued to operate normally, since they were relatively far from the earthquake’s epicentre. Some were offline at the time for routine scheduled maintenance or refueling. Several reactors closer to the epicentre experienced normal, automatic reactor trips (“SCRAMs”, in Western nuclear engineering parlance), controlled by the Reactor Protection System (RPS), exactly as you would expect – either in the presence of ground acceleration under earthquake conditions, or due to a loss of electricity grid connectivity to the plant (known as a Loss of Offsite Power, or LOOP, in nuclear power engineering parlance), which is a very likely event during a severe earthquake.
For the purposes of designing safe nuclear power plants, loss of offsite power is recognized as a relatively frequent, relatively high probability event. For the purposes of designing safe nuclear power plants, especially in Japan, it is recognized that the plant can be subjected to a severe earthquake – and on the Japanese coast, to tsunami surges as well.
And of those 53 power reactors, only one is behaving in a somewhat abnormal way in its shutdown state – Unit 1 at the Fukushima I Nuclear Power Station. (Fukushima Dai-Ichi, as opposed to Fukushima Dai-Ni, which is the Fukushima II plant.) The other 52 are completely normal, either operating or behaving as predicted in a shutdown state. To see that 52 out of 53 are behaving completely normally, and many are still operating normally, generating electricity on the grid, in the wake of one of the strongest earthquakes the world has ever seen in an industrialized area, shows you just how resilient nuclear power infrastructure is in response to natural forces like this.
Meanwhile, oil refineries and natural gas plants are pretty much all going up in flames in Japan’s earthquake-affected areas.
The absolute worst case scenario that we could potentially be looking at here is partial melting damage to the nuclear fuel – similar to the Three Mile Island accident. This will not harm any people or harm the environment, but it will have serious financial and political costs for TEPCO, it may write off the reactor, and it will be a significant political and rhetorical advantage for anti-nuclear activism and FUD.
If there is any real environmental damage to come out of this accident, it will come as a result of increased use of coal and fossil fuels instead of nuclear energy.
Fukushima I-1 is a General Electric BWR (Boiling Water Reactor), with a (relatively small) nameplate capacity of 460 MW. It first achieved criticality in October 1970, only 3 years after its construction commenced in 1967.
Fukushima I Unit 4, Unit 5 and Unit 6 were all already offline for maintenance and fueling operations at the time of the earthquake, and Units 1, 2 and 3 were shut down automatically at the time of the earthquake, either by the RPS seismic sensors or by the RPS relays opening when off-site power was disrupted, completely as intended.
The control rods are all already fully driven into the reactors, and the reactors are fully subcritical. The systems are not even close to criticality and cannot reach criticality – or any measure of supercriticality – at all. This has been the case at all times following the initial RPS trips and control rod insertion at the time of the earthquake.
However, for a limited period of time following reactor shutdown, cooling of the reactor core still has to be maintained, to dissipate the decay heat of the short-lived fission products in the nuclear fuel. And that cooling, and how it is maintained, or not maintained, in the absence of offsite power, is at the root of our discussion of all this fuss at Fukushima.
Even with the reactor in a subcritical configuration, with control rods inserted, if the reactor core coolant level drops excessively and it is not replenished, over the course of the next 48 hours or so following reactor shutdown, the fuel can eventually heat up excessively from its decay heat, leading to core damage – partial melting of the fuel, which will be very difficult and costly to fix. This is not significantly dangerous for the people and the environment around the nuclear reactor. This worst-case scenario, damage to the fuel in the reactor core, is not dissimilar to the damage to the Three Mile Island Unit 2 PWR in the United States in 1979; although it is worth noting here that the TMI reactors are Pressurized Water Reactors and the Fukushima reactors are BWRs.
As the name suggests, however, the decay heat will decay away fairly rapidly. As the short-lived fission products generating the heat continue to decay, the fuel’s thermal power output will drop, within a few days, below levels that are potentially problematic in the absence of proper cooling. The reactor will then be in what is known as “cold shutdown”. At that point, only minimal coolant injection into the reactor will be required, and preparations can be begun to remove the nuclear fuel from the core.
We’re probably already not far from reaching this point, chronologically, at Fukushima I-1. The decay heat from the fission products in the fuel has been decaying constantly for the last couple of days, ever since control rod insertion at the time of the earthquake. We’re now reaching the point, over the coming few days, where the risk of further potential core damage has passed.
Following LOOP, most of a reactor’s instrumentation and emergency systems are generally transferred to a backup auxiliary power supply provided on site by diesel generators, or to batteries in the case of some systems. However, it appears that these generators at Fukushima I-1 were damaged by the earthquake: the diesel generators appeared to start up correctly, and then stopped abruptly about an hour later.
So, what happens to a BWR, in terms of its decay heat removal, when the reactor is tripped, and offsite power is offline, and the auxiliary electric power supply from the diesel generators is offline?
To find out, we need to take a closer look at the BWR. To start with, here are a few little diagrams that basically illustrate the architecture of a typical BWR of the kind we’re discussing here. Click through for the full-sized images.
Wikipedia has a surprisingly good page on the safety systems of a Boiling Water Reactor, and I think that’s a very good place to start. It’s not too technical – it is Wikipedia, after all – but it’s impressively solid, technically literate material for a Wikipedia article.
Like all light-water reactors, the GE BWR has a negative void coefficient. In other words, as the proportion of steam to liquid water within the reactor increases, neutron moderation decreases, since the lower-density steam is a less effective moderator. The reactor’s average neutron energy spectrum hardens a little, and this causes the neutronic power output to decrease, since with enriched uranium fuel in a thermal spectrum, hardening the neutron energy spectrum decreases the fission power output.
A sudden increase in steam pressure within the BWR (caused, for example, by the closing of the main steam isolation valve from the reactor) will cause a sudden increase in the proportion of liquid water to steam within the reactor, which will cause an increase in the reactor’s power output, due to the negative void coefficient. Such an event is known as a pressure transient.
The BWR is specifically designed to suppress such pressure transients by safely venting the overpressure, through safety relief valves, to below the surface of a pool of liquid water within the containment. This toroidal tank, known as the torus, is shown on the drawings above. There are 11 safety overpressure relief valves on the older generation of BWRs such as the ones at Fukushima, and only a couple of them need to be opened in order to completely mitigate a pressure transient.
Although a pressure transient will cause a transient in the fission power output for a brief moment, the rapid actuation of the pressure relief valves will cause the pressure to drop off rapidly, and correspondingly, the neutronic power will rapidly drop off once the valves are opened, to a level far below nominal operating power.
There is an intrinsic physical relationship between temperature, pressure and fission power output in a light-water reactor, because of the void reactivity coefficient.
The Emergency Core Cooling System, the ECCS, of a light-water reactor is made up of a set of many interrelated, redundant layers of different safety systems which are designed to protect the nuclear fuel within the reactor pressure vessel from overheating in the event of a loss of coolant level, by maintaining that coolant level. To understand what’s going on at Fukushima, it is good to have a basic understanding of what these different systems are.
The Emergency Core Cooling System(s)
The High Pressure Coolant Injection System (HPCI) is the first line of defense in the ECCS. The HPCI is designed to inject substantial quantities of water into the reactor while it is at high pressure, and to prevent the activation of the additional, redundant low-pressure “layers” of the ECCS. HPCI can deliver approximately 19,000 L/min to the core at any core pressure above 690 kPa (100 psi). This is usually enough to keep the water levels sufficiently high to avoid activating the low-pressure “layers” of ECCS except in a major contingency, such as a large break in the makeup water line. The HPCI necessarily operates at a high pressure because it injects water into the reactor at a high flow rate against the high pressure already within the reactor, without releasing that pressure.
It’s worth noting here that whilst the Fukushima reactor may be losing coolant level at a limited rate through steam venting through the pressure relief valves into the torus, there is no pipe break, no stuck-open valve, or any other serious large-scale LOCA scenario here with a serious rate of coolant loss, which is the kind of thing the ECCS is designed to safely compensate for.
The HPCI system is powered by steam from the reactor – its operation is not dependent on off-site power, or power from the diesel generators, or battery power. It is powered by the heat remaining in the reactor itself.
It is completely plausible that a turbine trip, with sudden closure of the main steam isolation valve (MSIV) between the reactor and the turbine hall, will cause a significant power transient in the reactor, for the reasons described above, and that steam venting through the relief valves as a result of that transient will cause some loss of coolant level. The HPCI system is more than adequate to make up the reactor water level in this scenario.
The next one of the redundant components of the ECCS is the Reactor Core Isolation Cooling System, or RCIC. RCIC is also one of the high-pressure coolant injection systems, capable of injecting approximately 2000 L/min of water into the reactor core. The RCIC is able to operate with no source of electric power other than battery power, and is capable of providing decay heat removal by itself in the event of a station blackout, where off-site power is lost and the backup power supply from the diesel generators also fails.
If the water level cannot be maintained with the HPCI and/or the RCIC, and the core water level is still falling below some preset point even with these systems working full-bore – for example, because of a large-break LOCA – then the next layers of redundancy in the ECCS respond: the depressurisation and low-pressure coolant injection systems.
For the low-pressure coolant injection components of the ECCS to operate, the pressure within the reactor must first be reduced by the depressurisation system. The Automatic Depressurization System (ADS) is designed to activate in the event that the reactor pressure vessel is retaining pressure, but the water level cannot be maintained using high-pressure cooling alone, and low-pressure cooling must be initiated. When the ADS activates, it rapidly releases pressure from the reactor vessel in the form of steam, through pipes routed to below the water level in the torus, which is designed to condense the steam released into it. This brings the reactor vessel pressure below 32 atmospheres, allowing the low-pressure components of the ECCS to be activated.
The low-pressure ECCS systems have extremely large capacities compared to the high pressure systems and are powered by multiple different power sources. They will maintain any required water level, and in the event of a worst-case LOCA, such as a break of a large water pipe feeding into the reactor vessel below core level, which could potentially lead to temporary fuel rod “uncovery”, they will rapidly return the water level over the fuel in the core prior to the fuel heating to the point where core damage could occur.
The Low Pressure Core Spray System (LPCS) is the first of the low-pressure ECCS components, designed to suppress the steam generated by a major contingency. As such, it prevents the reactor vessel pressure from rising back above the LPCI coolant injection pressure of 32 atmospheres. It activates once the pressure in the reactor has fallen below 32 atmospheres, and delivers approximately 48,000 L/min of water in a deluge from the top of the core.
The Low Pressure Coolant Injection System, LPCI, is the final piece of the ECCS – its “heavy artillery”. Consisting of 4 pumps driven by diesel engines, it is capable of injecting a mammoth 150,000 L/min of water into the core. Combined with the core spray system to keep steam pressure in the core sufficiently low, the LPCI can suppress all core-cooling contingencies by rapidly and completely flooding the core with coolant. One should also note that the diesel-engine driven pumps that run the LPCI are completely independent of off-site electrical grid power, they are independent of steam power being extracted from the reactor (unlike HPCI), and they are independent of the diesel generators that provide the backup electricity supply for the plant in the event of the loss of offsite power.
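As a quick sanity check on the flow rates quoted above, the various injection capacities can be tabulated in a few lines of Python (all figures are the approximate values from the text; the variable names are my own):

```python
# Approximate ECCS coolant injection capacities, as quoted above (L/min).
eccs_flow_l_per_min = {
    "HPCI": 19_000,   # steam-driven high-pressure coolant injection
    "RCIC": 2_000,    # battery-operable reactor core isolation cooling
    "LPCS": 48_000,   # low-pressure core spray, deluge from the top of the core
    "LPCI": 150_000,  # diesel-engine-driven low-pressure coolant injection
}

# Combined low-pressure deluge available once the ADS has dropped the
# vessel pressure below about 32 atmospheres:
low_pressure_total = eccs_flow_l_per_min["LPCS"] + eccs_flow_l_per_min["LPCI"]
print(low_pressure_total, "L/min")  # 198000 L/min
```

That combined 198,000 L/min is the “approximately 200,000 L/min” of water that comes into play in the design basis accident walkthrough below.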
The Standby Liquid Control System, the SLCS, is used in the event of major contingencies as a last-ditch measure to prevent core damage. It is not intended ever to be used, as the RPS and ECCS are designed to respond to all contingencies, even if multiple components of those systems fail, but if a complete ECCS failure occurs, it could be the only thing capable of preventing core damage. The SLCS consists of a tank containing a large quantity of water loaded with soluble nuclear poisons (such as boron) protected by explosively-opened valves and redundant battery-operated pumps, allowing the injection of the water into the reactor against any pressure within it. This water is fully capable of dissipating heat from the nuclear fuel; and the nuclear poisons in this water will send the system fully subcritical even if, somehow, insertion of the control rods has completely failed (which is not the case at present for any of these Japanese nuclear power reactors.)
The SLCS is a system that is never meant to be activated unless all other measures have failed to maintain integrity of the nuclear fuel. In the older generation of existing BWRs its activation could cause sufficient damage to the plant (due to the salts used as neutronic poisons causing corrosion and contamination of the whole nuclear steam supply system) that it could make the reactor inoperable without a complete overhaul.
There is now talk of pumping seawater into the reactor building; although the information in the press on this subject seems to be vague and confused. There is very little good, unambiguous information out there. Are we talking about spraying seawater within the reactor building, in order to condense steam and reduce the temperature and pressure? That seems to make sense. Are we talking about spraying seawater within the drywell to help cool the reactor pressure vessel, and reduce temperatures within the drywell? That also makes sense.
Surely it wouldn’t make sense to actually inject seawater into the actual Nuclear Steam Supply System, would it? This would cause significant problems with regard to contamination and corrosion of the entire nuclear steam supply system, which would be difficult, time-consuming and expensive to rectify. Why would this ever be considered, when the SLCS and the ECCS systems are designed to perform the same function, safely and reliably, under adverse emergency conditions, without ruining the reactor? I do not expect that this is actually what is being planned – but again, the information that is trickling out through the hysterical mass media is so bad, it’s hard to tell.
The Emergency Core Cooling Systems and the Design Basis Accident
(I’ve borrowed most of the material for this example scenario illustrated here from here.)
The Design Basis Accident (DBA) for a nuclear power reactor is the most severe possible single accident that the designers of the plant and the regulatory authorities could realistically imagine, as a contingency which the operators of the plant must be able to handle. It is, also, by definition, the accident the safety systems of the reactor are designed to respond to successfully, even if it occurs when the reactor is in its most vulnerable state.
The DBA for the BWR consists of the total rupture of a large coolant pipe in the location that is considered to place the reactor in the most danger of harm – specifically, for the older generations of existing BWRs, such as the Fukushima BWRs, the DBA consists of a “guillotine break” in the coolant loop of one of the recirculation jet pumps, which is substantially below the core waterline, and as such, has the makings of a very serious Loss of Coolant Accident or LOCA. The DBA scenario combines this large-scale loss of coolant with a simultaneous loss of feedwater to make up for the water boiled in the reactor (a loss of feedwater, or LOFW), combined with a simultaneous collapse of the regional power grid, resulting in a loss of power to certain reactor emergency systems, or LOOP.
The BWR is designed to shrug this accident off without core damage.
The Design Basis Accident is not directly relevant to what happened to the reactor at Fukushima, but it is a good example to use to illustrate how the various different layers of the ECCS and the Reactor Protection System work under severe accident conditions, which is important background to a good understanding of what happened at Fukushima.
The immediate result of such a large-scale pipe break (we’ll call this time T+0) would be a pressurized stream of water well above boiling point shooting out of the broken pipe into the drywell, which is at atmospheric pressure. As this water stream flashes into steam, the pressure sensors within the drywell will report a pressure increase to the Reactor Protection System, within no more than 300 milliseconds; that is, by T+0.3. The RPS will interpret this pressure increase signal as the sign of a break in a pipe within the drywell. As a result, the RPS immediately initiates a full SCRAM, closes the Main Steam Isolation Valve (isolating the containment building), trips the turbines, attempts to spin up RCIC and HPCI using the residual steam, and starts the diesel-driven pumps for LPCI and the core spray.
Now, let’s assume that the LOOP occurs at this time, at T+0.5. The RPS is on an uninterruptible power supply, so it continues to function. It immediately detects the loss of offsite power, enters a fully defensive state, and trips the reactor and the turbine, if it has not already done so. Within a second of the power outage, auxiliary batteries and compressed air supplies are starting the emergency diesel generators. Power will be restored by T+25 seconds.
(Remember that at Fukushima I, the backup diesel generator failed shortly after, but there was no real pipe break or LOCA. But never mind that, in the scenario we’re looking at here. In any case, remember that many of the ECCS sub-systems have different, redundant energy sources.)
Due to the rapid escape of coolant from the reactor core, HPCI and RCIC will fail rapidly due to loss of steam pressure, but this is immaterial, as the 2,000 L/min flow rate of RCIC available after T+5 is insufficient to maintain the water level; nor would the 19,000 L/min flow of HPCI, available at T+10, be enough to maintain the water level, even if it could work without steam, in the event of such a serious LOCA. At T+10, the temperature of the reactor core, at approximately 285 °C at this point, begins to rise as enough coolant has been lost from the core that voids begin to form in the coolant between the fuel rods and they begin to heat rapidly. By T+12 seconds from the initial LOCA, fuel rod uncovery begins. At approximately T+18, parts of the rods have reached 540 °C.
At T+40, the core temperature is at 650 °C and rising steadily; the LPCI and the pressure-regulating core spray kick in and begin deluging the steam above the core, and then the core itself. A large amount of hot steam still trapped above and within the core has to be knocked down first, or the water will be flashed to steam prior to it hitting the fuel. This happens after a few seconds, as the approximately 200,000 L/min of water these systems release begin to cool first the top of the core, with the LPCI deluging the fuel rods, and the core spray suppressing the generated steam until at approximately T+100 seconds, when all of the fuel is now subject to this deluge and the last remaining hot spots at the bottom of the core are now being cooled.
The peak temperature attained in the fuel elements in this scenario – at the bottom of the core, the last hot spot to be cooled by the water deluge – is 900 °C, even with temporary uncovery of the fuel rods. That is well below the maximum of 1200 °C which is acceptable before fuel damage begins.
The core is now cooled rapidly and completely by the LPCI, and following cooling to a reasonable temperature, below that consistent with the generation of steam, the core spray is shut down and the LPCI flow rate is decreased to a level consistent with maintaining a steady temperature of the fuel rods, which will drop over a period of days as the fission-product decay heat output within the fuel dies away. After a few days, decay heat will have abated sufficiently that defueling of the reactor can commence. Following defueling, LPCI can be shut down completely.
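To put rough numbers on how quickly that decay heat falls away, here is a sketch using the Way–Wigner approximation – a standard textbook rule of thumb for fission-product decay heat, not a figure taken from this specific scenario:

```python
# Way-Wigner approximation: fraction of full thermal power still being
# produced as fission-product decay heat, t seconds after shutdown, for a
# reactor previously operated at power for T seconds.
def decay_heat_fraction(t: float, T: float) -> float:
    return 0.0622 * (t ** -0.2 - (t + T) ** -0.2)

one_year = 365.25 * 24 * 3600  # assume roughly a year of operation at power

for label, t in [("10 seconds", 10.0), ("1 hour", 3600.0), ("1 day", 86400.0)]:
    print(f"{label} after shutdown: {decay_heat_fraction(t, one_year):.2%}")
```

The numbers come out at a few percent of full power seconds after shutdown, falling to well under one percent after a day – which is why the LPCI flow can be wound back over the following days.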
On March 12, there was an explosion near the Fukushima I-1 reactor building. What happened?
There are about five different layers of containment, in a power reactor like the ones at Fukushima, between the people outside and the potentially dangerous radioactive fission products within the nuclear fuel.
The fuel rods themselves are clad in tubes of zirconium alloy, and that represents one such layer. That nuclear fuel is inside the reactor pressure vessel, which is made of steel six inches thick, and that reactor vessel is the next such layer. The reactor pressure vessel is within the primary containment vessel, the drywell, which is made of steel one inch thick, and that represents the next such layer. The primary containment vessel is within the secondary containment structure, which is made of steel-reinforced, pre-stressed concrete between 4 and 8 feet thick. The reactor building which is built around the secondary containment structure is the last of these multiple layers of containment, and it is also made of steel-reinforced, pre-stressed concrete, between 30 cm and 1 m thick.
If every possible measure standing between safe operation of the plant and severe core damage and melting of the nuclear fuel fails, the containment can be sealed indefinitely, and it will prevent any significant release of radioactivity to the outside environment occurring under any circumstances.
Now, let’s look at some diagrams of these structures. Click-through for the full resolution images.
The outermost layer of the multiple layers of containment – the reactor building – has walls and a roof made of solid concrete, and it’s roughly cube-shaped.
On top of the concrete reactor building, however, there is an additional part of the structure – it is not made of concrete, but it is made of steel, with steel sheets over a steel frame. This steel building on top of the reactor building houses the fuel transfer crane, and it is built on top of the concrete roof of the reactor building. I’m referring to the part of the structure above the concrete shield plug and the refueling platform at the top of the concrete reactor building, as shown on the first of the diagrams above.
It is this relatively weak steel structure on top of the reactor building, which is not really part of the reactor building proper, which seems to have been blown out by a hydrogen explosion.
The explosion at Fukushima I-1 does not appear to have occurred within nor does it appear to have breached any of the fundamental layers of containment structure described above.
Now, an explosion has not occurred as a result of a release of nuclear energy. That is a scenario that is simply outside the laws of reality. An explosion can be caused by one of two things: a chemical explosion, such as the ignition of a hydrogen-oxygen mixture, or a sudden release of stored gas or steam pressure.
It appears that the structure has probably been damaged as a result of a hydrogen explosion. It’s probable that excessive hydrogen generated within the reactor core – either radiolytically, or chemically by the reduction of water in the presence of the zirconium cladding at significantly elevated temperatures – has been vented into the torus, and that as temperatures and pressures have begun to rise within the torus, steam has been vented out of the torus into the surrounding reactor building. From there, the hydrogen mixed with that steam and water vapour has risen, as hydrogen does, worked its way up through the reactor building, and accumulated at the top, in the area around the fuel transfer crane. It appears that the accumulated hydrogen has then mixed with air and exploded.
Radiochemistry and radioactivity releases to the environment
When a light-water reactor is operating, some of the oxygen-16 in the water is activated into radioactive nitrogen-16, by the 16O(n, p)16N reaction. 16N is very short lived, with a half-life of only 7 seconds, but its specific activity is correspondingly very high. When a BWR nuclear power station is operating, the entire nuclear steam supply system, including the turbine hall, is a radiological controlled area, due to the radioactivity from 16N. However, after reactor shutdown, the 16N decays very quickly, reducing the radiation dose around the turbines to negligible levels basically immediately. This is one key difference between a BWR power plant and a PWR power plant – since the secondary coolant loop that drives the turbine in a PWR is isolated from the reactor’s primary coolant by the steam generator, the secondary coolant is never radioactive during reactor operation.
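The speed of that die-off is easy to see from the half-life alone. A minimal sketch (the 7.13-second half-life used here is the standard tabulated value, consistent with the “only 7 seconds” above):

```python
N16_HALF_LIFE_S = 7.13  # seconds; tabulated half-life of nitrogen-16

def n16_fraction_remaining(t_seconds: float) -> float:
    """Fraction of the 16N activity remaining t seconds after shutdown."""
    return 0.5 ** (t_seconds / N16_HALF_LIFE_S)

# Five minutes after shutdown, essentially nothing is left:
print(n16_fraction_remaining(300.0))  # on the order of 1e-13
```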
There can also be very small amounts of other radionuclides formed within the reactor coolant, for example tritium, which is formed by the fission of boron used as a soluble reactivity shim in the reactor coolant (if you really want to know: neutron capture on 10B forms an excited state in 11B, which splits apart into two 4He nuclei and one 3H nucleus. A similar reaction occurs beginning from 11B, with the re-emission of one additional neutron in the breakup of the excited 12B nucleus), and 14C, which is formed from nitrogen compounds such as hydrazine which are added for pH control and oxygen scavenging in the reactor coolant.
If excessive pressure within the torus or within the primary containment vessel is vented out into the reactor building, and from there it is allowed to escape out into the atmosphere, then small amounts of these radionuclides may be released out into the atmosphere, which is a possible scenario we might be seeing at Fukushima.
(A quick note on terminology: Radiation is not a substance; it cannot leak, nor can it contaminate a person. To speak of an escape of radiation from a nuclear power station is rather like speaking of an accidental leak of light from a lightbulb factory. What we are talking about here is a possible release of radioactivity – of a substance containing a radioactive nuclide.)
We will know, over the next few days, exactly what the true situation is regarding the composition, and quantity, of any releases of radionuclides into the outside environment. It’s very easy to detect radioactivity, to measure it quantitatively with high precision, and to discriminate the presence of different radionuclides and identify them.
What I suspect we might see from some anti-nuclearists, however, is something that we saw after Three Mile Island, and something that we still see on rare occasions up to the present day in regards to TMI – the conspiracy theories.
Some people will probably try and claim that there were actually enormous releases of radioactivity into the environment and it was never really measured or documented – or that it was measured and known that there were huge releases of radioactivity into the environment at Fukushima, but there’s a big conspiracy by big bad unethical TEPCO or by the Big Bad Nuclear Industry in a more general sense (and by the evil government and the conspirators at the IAEA, and the armies of Big Nuclear Shill bloggers, of course!) to cover it all up! We saw this once or twice after TMI, and I think we’ll see it again from those who are truly devout believers in the absolute, unmoderated evil of the Big Bad Nuclear Industry.
Of course, that’s absolute nonsense, for exactly the same reasons that it’s nonsense in the context of TMI. You simply cannot ever, in any context, release a very large amount of radioactivity into the atmosphere and cover it up or keep it quiet.
Look at Chernobyl for example. The Soviets didn’t tell the West about it immediately – they didn’t even tell their own nuclear scientists. Soviet nuclear experts found out about it when radiation sensors at nuclear research sites and nuclear power plants (e.g. the Ignalina plant in what is now Lithuania) across the eastern USSR started going off, and the West found out about it when radiation sensors at Sweden’s Forsmark NPP and other Swedish nuclear engineering facilities started going off. (For more on this note regarding Chernobyl, see the excellent first chapter of Richard Rhodes’ Arsenals of Folly.)
Nuclear power plants and other facilities that use radioactive materials are all over the place in our society, and they all have sensors and instruments to make sure everything is safe and radioactive contamination does not occur. If a Chernobyl-style event occurs, you will detect it at any such site. Any nearby NPP. Any nearby molecular biology lab working with radiolabels. Any nearby physics lab. Any nearby clinic working with X-rays or medical imaging. Anyone nearby developing photo film.
If a person who has recently had a radiopharmaceutical medical imaging procedure walks into a nuclear power plant or physics lab, or a radiation detector installed at a border crossing or port around the USA, they’ll set off alarms.
Radioactivity is so easy to detect that in 1896 Becquerel discovered it accidentally.
I remember that there was a case, in November of 2008 I think, where a little bit of radioactive 133Xe was vented from the ANSTO Lucas Heights radiopharmaceuticals facility… this was quickly detected in Melbourne by the atmospheric radiochemistry monitoring station there – one of a large, extremely sensitive network of such sites all around the world, being developed for the CTBTO for CTBT verification, which are used to detect any possible nuclear weapon test.
Japan, and Hong Kong and mainland China have plenty of expertise and infrastructure that they can use to, for example, perform the sensitive analysis of fission-product radionuclides in the atmosphere to monitor nuclear weapons testing and nuclear fuel processing in the DPRK… so they can also certainly analyse the presence of traces of artificial radionuclides in the atmosphere from this nuclear power plant incident.
How many nuclear power stations are there in the United States that are located relatively close to TMI, in the states geographically around TMI? What did their radiological monitors show? Anything? Photographic film from all around the area was collected and examined – no radiation was recorded.
Basically, the whole idea of such an enormous cover up is just an enormous, impractical conspiracy theory – which would need to involve the state government, the federal government, the nuclear energy industry, and huge numbers of the public and huge numbers of scientists and industries – like an Apollo hoax conspiracy theory.
We will know, over the next few days, exactly what the true situation is regarding the composition, and quantity, of any releases of radionuclides into the outside environment. There are no coverups or conspiracies in this context – there simply cannot be.
I hope you’ve found this post helpful. Please feel free to post comments, with any further discussion, questions, criticisms or what-have-you. I will likely follow up this post with a future post, following future developments of this issue, and responding to questions or new information.
Well, long time no post. I hope all my readers are well.
So, apparently today is something called “Blog Action Day“, and this year the topic of interest is anthropogenic forcing of the climate system, and mitigating the potential thereof.
So, OK, I thought I’ll write a blog post about it. The day is supposed to be about action, as the name suggests, so let’s talk about specific actions, with a view towards making a significant mitigation, in a realistic way, of Australia’s anthropogenic carbon dioxide emissions.
Australia’s brown coal (lignite) fired electricity generators have by far the highest specific carbon dioxide emissions intensity per unit of electrical energy generated, since they’re burning relatively high moisture brown coal. They are the most concentrated point contributors to the anthropogenic GHG output. Therefore, these are the “low-hanging fruit” – a very valuable target to look at first and foremost if we want to make the greatest realistic mitigation of the country’s carbon dioxide emissions in a practical way, followed by black coal-fired generators.
Australia’s total net greenhouse gas emissions in 2006 were 549.9 million tonnes of CO2 equivalent.
If we look at the three main sets of lignite-fired generators in the Latrobe valley in Victoria, they represent a very concentrated point source of CO2 output, so they’re a very good case to focus on specifically.
In 2006, Hazelwood generated 11.6 TWh of electrical energy, and 16,149,398 tonnes of carbon dioxide to atmosphere.
In 2006, Loy Yang A generated 15.994 TWh of electrical energy sent out to the grid and 19,326,812 tonnes of carbon dioxide to atmosphere.
I’ll exclude Loy Yang B from this list for the moment, since its numbers are eluding me.
In 2006, the Yallourn power station generated 10.392 TWh of electrical energy sent out to the grid and 14,680,000 tonnes of carbon dioxide to atmosphere.
If you look at the total contribution of just those three brown-coal-fired plants combined, you’re looking at 9.12 percent of Australia’s total anthropogenic carbon dioxide emissions. If you replace those with clean technology that can deliver an equivalent electricity output, you get a 9.12 percent reduction in Australia’s CO2 emissions. (When you include Loy Yang B, I think it’s approximately 11-12%.)
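That 9.12 percent figure is straightforward to reproduce from the numbers above:

```python
# 2006 emissions figures quoted above, in tonnes of CO2 (equivalent).
australia_total_t = 549_900_000

lignite_plants_t = {
    "Hazelwood":  16_149_398,
    "Loy Yang A": 19_326_812,
    "Yallourn":   14_680_000,
}

combined_t = sum(lignite_plants_t.values())
share = combined_t / australia_total_t
print(f"{combined_t:,} t CO2 = {share:.2%} of the national total")
```

That comes out at 50,156,210 tonnes of CO2, i.e. 9.12 percent of the national total.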
That’s not a bad target for Australia to implement for the relatively short term for a real reduction in CO2 emissions. It can actually be done, if the real political will exists to do it.
Now, I’m not interested in this “100% renewable energy by 2020” business from the extremist any-excuse-for-a-protest Socialist Alternative set, because it is nonsense.
Replacing all the coal-fired and gas-fired generators in this country inside 10 years (and presumably only using wind turbines and solar cells, not nuclear energy of course since it doesn’t fit their para-religious ideology)? That’s complete bullshit, of course, because in the real world it cannot be done.
There’s a difference between setting a challenging target and setting a nonsense target. Unless you’re only trying to implement a political bullshit stunt instead of actually trying to hit your targets.
Of course, you don’t just close down the coal-fired generators. You’ve actually got to build their clean replacements first. So what do you use that can realistically replace a coal-fired power station? Nuclear power, of course.
Now, again, to be realistic, we probably can’t build LFTR/MSR, PBMR/HTGR, IFR/PRISM or any kind of nuclear fusion based generation capacity on a large scale to generate grid-connected energy right now. That’s not to say that pilot-scale research and development on those very cool technologies shouldn’t continue, but right now, getting more nuclear energy on the grid means advanced light water reactors – or maybe heavy water CANDU-type things, or conventional sodium-cooled fast reactors maybe. The most practical thing for serious deployment in the relatively short term is advanced LWR technology. In the slightly longer term, there is certainly a place to be encouraging both Gen. IV and fusion.
To get the same amount of energy as the total output from those coal plants, as above, which we’re talking about replacing, we need 4.56 GW of installed nuclear capacity, assuming a 95% capacity factor.
With 4 x 1154 MWe Westinghouse AP1000s, with a 95% capacity factor, you’ve got 4.62 GW, which is a little more than what’s needed.
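The capacity arithmetic works like this (a sketch, using 8760 hours per year and the 95% capacity factor assumed above):

```python
# Annual output of the three lignite plants above, in TWh (2006):
coal_output_twh = 11.6 + 15.994 + 10.392   # 37.986 TWh

capacity_factor = 0.95
hours_per_year = 8760

# Installed nuclear capacity needed to match that annual energy output:
required_gw = coal_output_twh * 1000 / hours_per_year / capacity_factor
print(f"required:  {required_gw:.2f} GW")   # required:  4.56 GW

# Four 1154 MWe Westinghouse AP1000s:
installed_gw = 4 * 1.154
print(f"installed: {installed_gw:.2f} GW")  # installed: 4.62 GW
```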
You can easily have four nuclear power reactors integrated into one nuclear power plant.
Now, how much does it cost?
On March 27, 2008, South Carolina Electric & Gas applied to the Nuclear Regulatory Commission for a COL to build two AP1000s at the Virgil C. Summer nuclear power plant in South Carolina. On May 27, 2008, SCE&G and Santee Cooper announced an engineering, procurement, and construction contract had been reached with Westinghouse. Costs are estimated to be approximately $9.8 billion for both AP1000 units, plus transmission facility and financing costs.
That gives you an idea of how much a nuclear power plant costs today, in the current financial environment, in the current regulatory environment.
If we double that figure of USD$9.8 billion to cover four reactors, and convert it to Australian dollars, we get approximately AUD$21.4 billion. There will be some saving, since we’re considering building four reactors at one plant, not two independent two-reactor plants.
How much that saving will be, quantitatively, I don’t really know. If the cost is reduced by 30%, we’re looking at 15 billion Australian dollars.
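For what it’s worth, here’s that arithmetic laid out. The exchange rate of 0.916 USD per AUD is my assumption (roughly the rate at the time of writing), and the 30% multi-unit saving is purely illustrative:

```python
usd_two_units = 9.8e9            # SCE&G estimate for two AP1000 units
usd_four_units = 2 * usd_two_units

usd_per_aud = 0.916              # assumed exchange rate; adjust to taste
aud_four_units = usd_four_units / usd_per_aud
print(f"AUD {aud_four_units / 1e9:.1f} billion")   # AUD 21.4 billion

# Hypothetical 30% saving from building four units at a single site:
aud_with_saving = aud_four_units * (1 - 0.30)
print(f"AUD {aud_with_saving / 1e9:.0f} billion")  # AUD 15 billion
```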
How long would it take? If the real political will exists to do it, 10 years is heaps of time. We could probably do even more in that timeframe if we really, really wanted to. AP1000 construction takes 36 months from first concrete poured to fuel load, if you ignore any political protest rubbish.
This is really just a base-line relatively achievable “base case”. After this decade, of course, the rate of nuclear power deployment – and related GHG emissions mitigation – could foreseeably accelerate.
What about the uranium input? About 600 tonnes of natural uranium per year total, for all four reactors. Australia’s present production, off the top of my head, is something like 10,000-11,000 tonnes. Australia’s present uranium production can very, very easily provide for Australia’s total electricity production even without expansion of uranium production – again, considering the inefficient once-through use of low-enriched uranium in conventional LWRs.
What about the so-called “waste”?
Roughly 80-85 tonnes of used uranium fuel per year. 96% of that is unchanged uranium, so that 76.8 tonnes of uranium can be separated and re-used. It’s just uranium, so it’s not going to hurt you.
The remaining 3200 kg is made up of the valuable, interesting and unique byproduct materials from a nuclear reactor – unique resources with all kinds of different technological applications, which aren’t all radioactive, which you cannot get anywhere else.
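The mass balance above, as a sketch (taking the lower 80-tonne figure):

```python
spent_fuel_t = 80.0        # tonnes of used fuel per year, lower figure above
uranium_fraction = 0.96    # fraction of used fuel that is unchanged uranium

reusable_uranium_t = spent_fuel_t * uranium_fraction
byproducts_kg = (spent_fuel_t - reusable_uranium_t) * 1000

print(round(reusable_uranium_t, 1), "t of uranium for re-use")  # 76.8
print(round(byproducts_kg), "kg of byproduct materials")        # 3200
```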
Anyway, that’s one scenario which I happen to think has a lot of merit.
Maybe you don’t agree – but if you don’t agree, I’d love to see you elucidate an alternative scenario which can deliver the equivalent greenhouse gas emissions mitigation – shown to be accurate in a quantitative way – within a comparable timeframe and within a comparable cost.
It will not be inexpensive, and it will not happen overnight – but I have yet to see any scenario which can honestly do the same job faster and cheaper, when some real quantitative analysis is applied.
I wrote this in response to a comment over at Brave New Climate (shameless plug), but I thought it was quite a nice little post, so as not to waste it, I’ll re-post it.
PS: Sorry about my lack of blog activity lately, sometimes real life seems to eat up my time. Extra apologies, especially, if you’ve posted comments that the software has kept from being posted pending manual moderator approval, and therefore your comments haven’t been getting through.
Ah, the old Three Mile Island conspiracy theory, the notion that there were enormous amounts of radioactivity released, and people who experienced acute radiation poisoning, that was somehow covered up.
Let’s look, for a moment, at the Chernobyl disaster. When the Chernobyl disaster happened, we didn’t see the Soviet premier calling up Reagan to tell him all about this catastrophic accident and large release of radioactivity, did we? Of course, they tried to keep it a secret.
So, how did we in the industrialised world find out about the Chernobyl accident and this large release of radioactivity? We found out about it when all the radiological sensors and alarms started going off at the Forsmark nuclear power plant in Sweden.
As another example, we all know today that it is possible, if you live in an area of relatively uranium-rich geology, for significant amounts of radioactivity in the form of radon to leach into your basement naturally from the surrounding rock.
But how was this radon issue first discovered? It was discovered when a person employed at a nuclear power station in the United States kept setting off the radiological sensors when he arrived at work every day.
Those incidents show you just how sensitive the detectors and monitors used at facilities like nuclear power plants are.
In the modern world, if there is some sort of massive release of radioactivity into the atmosphere it is impossible to hide it or to cover it up.
If this massive release of radioactivity at TMI were not just a myth, it would have left clear evidence on every bit of photographic film for miles around. After the accident, all such photographic film was collected and analysed by Kodak, and no such evidence was found.
There are many other nuclear power plants in Pennsylvania and neighbouring states which aren’t too far away from TMI. They would have recorded real evidence of a large cloud of radioactivity in the environment, if it was really present to that extent.
You’d record real evidence of it anywhere where photographic film is stored or used. You’d record it at every nuclear power plant, or anywhere else where radioactive materials are stored or used where health physics controls are implemented. You’d record real evidence of it anywhere where medical or industrial X-ray images are made. You’d record it anywhere where radioactivity is used for scientific or medical purposes. You’d record it everywhere where particle detectors are used for physics experiments. You’d maybe even detect it on every old duck-and-cover civil defence radiological detector that someone might have had lying around as an unpleasant relic of history.
But no such real physical recorded evidence to support the theory was ever recorded, anywhere. People went looking for it, but it wasn’t there.
There are people who claim they got sick as a result of the Three Mile Island accident, who claim that they exhibited symptoms consistent with acute radiation poisoning, and massive doses of ionising radiation.
There are also people who claim that they have been made sick by witches who put curses on them – once upon a time, upon hearing such stories, we’d have them identify the witch, who would then be tortured and murdered on the strength of those stories alone.
There are people who tell stories about how they’ve been beamed aboard an extraterrestrial flying saucer and sexually molested by the aliens – but once again, as with the above examples, these are mere stories, and there is no actual evidence that stands up to scientific enquiry.
Shortly after the TMI accident, people like Helen Caldicott went and gathered up local residents and described to them the scary-sounding symptoms of acute radiation poisoning from acute exposure to massive doses of ionising radiation, and implied that that’s what would happen to them. With that kind of fear and stress, it’s no surprise that we can see mass hysteria, and we can see people who say that they think they might be starting to exhibit those symptoms that they’ve been told about.
But instruments and detectors and photographic films and thermoluminescent dosimeter crystals aren’t subject to fear, panic and mass hysteria – and they recorded nothing.
A common claim of TMI conspiracy theorists such as Caldicott is that longer lived radionuclides, such as Cs-137, Sr-90, Pu-239 (or pick your favourite moderate-to-long half-life well-known reactor-produced radionuclide) were released into the environment at TMI, not just short-lived gaseous fission products.
But if such nuclides were released, you could go and take some soil from TMI, and physically show the evidence of such release, because those radionuclides would still, mostly, be there. They can show us real, undeniable, physical evidence today, if that hypothesis is true. But that evidence is never forthcoming.
ABC Unleashed has recently featured an article by environmentalist Geoffrey Russell; Rethinking Nuclear Power.
It’s worth reading.
I like the idea of closing down uranium mines, and using existing stocks of mined uranium efficiently.
Uranium mining is far less environmentally intensive than mining coal, of course, but it’s basically inevitable that all mining is fairly environmentally intensive, and it’s always an appealing prospect if we can mine less material (whilst still maintaining our energy supplies and our standards of living, of course.)
I have to admit, when I first saw Geoff’s claim that we could completely eliminate uranium mining, I was skeptical. So I took a more detailed look.
A nuclear reactor which is efficiently consuming uranium-238 and driving a relatively high-efficiency engine (typically, a Brayton-cycle gas turbine) will require approximately one tonne of uranium input for one gigawatt-year of energy output. This high-efficiency use of U-238 could best be realised in something like an IFR or a liquid-chloride-salt reactor (the latter being essentially the fast-neutron, uranium-fuelled variant of a LFTR). This figure of one tonne of input fertile fuel per gigawatt-year is also comparable for the efficient use of thorium in a LFTR.
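That one-tonne-per-gigawatt-year figure is easy to sanity-check from first principles. The sketch below assumes ~200 MeV of thermal energy per fission and a 40% thermal-to-electric conversion efficiency for the gas turbine (both round assumed values, not figures from the post):

```python
# Sanity check: electric output from fully fissioning one tonne of U-238.
# Assumptions: ~200 MeV thermal per fission, 40% conversion efficiency.
MEV_PER_FISSION = 200.0
EV_TO_J = 1.602e-19            # joules per electron-volt
AMU_TO_KG = 1.6605e-27         # kilograms per atomic mass unit
U238_MASS_AMU = 238.0
THERMAL_EFFICIENCY = 0.40

# Thermal energy released per kilogram of U-238 fissioned:
joules_per_kg = (MEV_PER_FISSION * 1e6 * EV_TO_J) / (U238_MASS_AMU * AMU_TO_KG)

gw_year_joules = 1e9 * 3.156e7  # one gigawatt-year, in joules

gw_years_electric = 1000 * joules_per_kg * THERMAL_EFFICIENCY / gw_year_joules
print(f"{gw_years_electric:.2f} GW-years(e) per tonne")
```

The result comes out at almost exactly one gigawatt-year of electricity per tonne, consistent with the figure above.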
There are about one million tonnes of already mined, refined uranium in the world – the so-called “depleted uranium” – just sitting around waiting to be put to use.
According to one source, the exact worldwide inventory of depleted uranium is 1,188,273 tonnes.
The total electricity production across the world today is about 19.02 trillion kWh.
Therefore, total worldwide stocks of depleted uranium, used efficiently in fast reactors, could provide every bit of worldwide electricity production for about 550 years.
That’s not forever, but it’s a surprisingly long time. And that’s just “depleted uranium” stocks; not including the stocks of HEU and plutonium from the arsenals of the Cold War, and not including the large stockpile of uranium and plutonium that exists in the form of “used” LWR fuel.
I know some thorium proponents aren’t going to like this; but there’s a strong case to be made here that uranium-238 based nuclear energy has a clear advantage over thorium, simply because of these huge stockpiles of already-mined uranium, for which there exists no comparable thorium resource already mined. The 3,200 tonnes of thorium nitrate at NTS is tiny compared to the uranium “waste” stockpile, but they’re both really useful energy resources which can replace the need for more mining.
Any type of breeder or burner reactor utilising 238U or 232Th as fuel requires an initial charge of fissile material to “kindle” it; however, this requirement for fissile material is quite small, and personally, I think the inventories of HEU and weapons-grade plutonium recovered from the gradual dismantlement of the Cold War arsenals are perfectly suited to this purpose – destroying those weapons materials whilst putting them to a valuable use.
Then again, with the means to completely replace the use of coal and fossil fuels in a way that requires very little or no uranium mining, I really hope the rest of the world keeps buying that iron and copper and bauxite. Alternatively, we’re going to have to start developing a more technologically based economy in this country to make up the reduction in exports of these commodities – perhaps developing and selling reactor technology?
Developing uranium enrichment technology, such as SILEX, is of limited usefulness, because the relatively inefficient thermal-neutron fission of 235U – and hence the need for enrichment – will not supply any large portion of world energy demand in a sustainable fashion over the long term. The small amount of 235U in nature is simply of limited significance.
Alternatively, perhaps a shake up of agriculture, using extensive desalination to supply fresh water requirements, might be used to replace Australia’s income from coal and uranium. I’m not sure.
Tip ‘o the hat to Barry at Brave New Climate for pointing out this article.
Source: The World Factbook, 2008 ed. (jokes about the integrity of the CIA’s intelligence aside…)
…I’ve been a little busy lately, and you may have noticed the absence of many new posts.
More regrettably, though, there seems to be a large number of comments accumulating in the pending-moderation queue which haven’t been posted.
So, if you’ve been trying to post comments without success, this is why, and I will make sure they’re all posted now. (Unless there are any I really insist on moderating, but that’s unlikely.) I will have to try experimenting with the WordPress settings regarding spam filtering and automatic comment approval a little bit.