# Physical Insights

An independent scientist’s observations on society, technology, energy, science and the environment. “Modern science has been a voyage into the unknown, with a lesson in humility waiting at every stop. Many passengers would rather have stayed home.” – Carl Sagan

## Archive for the ‘nuclear energy’ Category

I’m going to focus this post on radiation dosimetry – because radiation dosimetry is what really matters in terms of deciding whether anybody can actually get hurt. So far, nobody around Fukushima has been hurt by radioactivity, although of course tens of thousands are still dead or missing because of this great tragedy.

It doesn’t matter what you do or don’t do to the reactors or the used fuel, or what condition they’re in – at the end of the day, the radiation dose to the public is how we measure the effect of this incident on the public and its potential for harm.

To be honest, I’m really not concerned much with what the dose rates are in the plant itself.

The men and women who work there understand dose rates and health physics quite well. They routinely work in areas of elevated above-background dose, and they know how to work safely in those environments. They understand how to measure and quantify the radiation field in the working environment, and the accumulated doses that they’re personally receiving.

They understand how to manage shielding, exposure time, radiation measurement and dosimetry in order to get the work done safely and effectively.

Even with abnormally elevated radiation fields in some areas as a result of these incidents, they still know how to work safely. If the radiation dose rate in some particular area is so highly elevated that it cannot be entered safely for any length of time at all, then they won’t be entering it.

It’s pointless to scare the public with elevated on-site dose rate measurements. They’re not working on the site. Leave that for the people with health physics training. I’m much more interested in off-site dose rate measurements, personally, as those are the measurements that are actually of relevance to the public.

The KEK accelerator physics complex in Tsukuba (165 km from Fukushima) has a webpage showing their real-time measurements of the environmental gamma dose rate (counted with a GM tube). They’re currently measuring 0.17 μSv/h, at the time I write this, which has been fairly constant over the last few days, except for a brief, narrow spike up to 0.6 μSv/h, which they observed on the 16th.

(NOTE: Just to avoid any ambiguity, “m” means milli, 10⁻³. “μ”, or “u” if you don’t have access to the Greek alphabet in your software, means micro, 10⁻⁶. And when I say milli I’m quite sure I mean milli, and when I say micro I’m quite sure I mean micro. Because these prefixes are important, I make damn sure to get them right.)

Each year, a resident of the United States receives an average total dose from background radiation of about 3.1 mSv. This is the radiation dose from natural background sources; from natural radioactivity in the Earth, and cosmic rays from space. That’s equal to 0.354 μSv/h.

In practice, the average dose that a person receives each year in the United States is significantly higher than that natural background dose, about twice that, once you’ve added on the dose from medical imaging and things like that.
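For anyone who wants to check these unit conversions for themselves, here’s a quick sketch in Python; the 3.1 mSv/year and 0.17 μSv/h figures are the ones quoted above.

```python
# Convert an annual radiation dose to an equivalent constant dose rate.
# 3.1 mSv/yr is the average US natural background quoted above; the
# 0.17 uSv/h KEK reading is taken from the text for comparison.

HOURS_PER_YEAR = 365.25 * 24  # 8766 h

def annual_msv_to_usv_per_hour(annual_msv):
    """mSv per year -> microsieverts per hour."""
    return annual_msv * 1000.0 / HOURS_PER_YEAR

us_background = annual_msv_to_usv_per_hour(3.1)  # ~0.354 uSv/h
kek_reading = 0.17                               # uSv/h, from the KEK page

print(f"US natural background: {us_background:.3f} uSv/h")
print(f"Tsukuba reading is {kek_reading / us_background:.0%} of that")
```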

The radiation dose rate being measured in Tsukuba right now, after the Fukushima accident, is less than half of the average natural background radiation dose rate that a person receives in the United States. This includes all sources of radiation in Tsukuba, including natural geological radioactivity, cosmic radiation, and any radioactivity released at Fukushima, as well as any ionising radiation from the particle accelerators at KEK, which is what these sensors are actually intended to monitor.

That brief, narrow spike seen in the radiation field measured at KEK doesn’t really concern me. The radiation dose you’ll receive if you hang around in that area for an extended period of time is the area under the graph – the integral – over that period of time. For such a short, sharp spike, the overall potential dose is still quite small.

In order to quantify the potential harm from a significant release of radioactivity, it would make more sense to “filter” that dose-rate data from the detector as a rolling average, making it more straightforward to interpret the potential to receive any significant radiation dose.
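As a toy illustration of both points: the dose is the time-integral of the dose rate, and a rolling average flattens a short spike. The trace here is hypothetical, loosely modelled on the KEK numbers above, not their actual data.

```python
# Total dose is the time-integral of dose rate; a rolling average
# smooths out short spikes. Hypothetical 1-minute samples with a brief
# spike, loosely modelled on the KEK trace described in the text.

def total_dose_usv(rates_usv_per_h, dt_hours):
    """Integrate dose-rate samples (rectangle rule) into a dose."""
    return sum(rates_usv_per_h) * dt_hours

def rolling_mean(samples, window):
    """Simple trailing rolling average over `window` samples."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1) : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

dt = 1 / 60                  # one-minute samples, in hours
baseline = [0.17] * 120      # steady 0.17 uSv/h background
spike = [0.6] * 10           # a ten-minute spike at 0.6 uSv/h
trace = baseline[:60] + spike + baseline[60:110]  # two hours total

dose = total_dose_usv(trace, dt)
smoothed = rolling_mean(trace, 30)
print(f"Dose over {len(trace) * dt:.1f} h: {dose:.3f} uSv")
print(f"Peak raw rate: {max(trace):.2f} uSv/h; "
      f"peak 30-min average: {max(smoothed):.2f} uSv/h")
```

Even with the spike, the total dose over the two hours is well under half a microsievert, which is the point: the area under a short, sharp spike is small.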

KEK is also measuring the concentration of 131I and the short-lived fission product 132Te in the atmosphere and reporting regular updates to this data online. The concentrations we’re looking at here are extremely small – on the order of 10 microbecquerels per cubic centimeter – but they are concentrations which they are able to accurately measure at KEK, using a high-volume air sampler and a high-purity germanium gamma-ray spectrometer.

This site gives us a continually updated log of the gamma-ray dose rate in Tokyo.

The environmental gamma-ray dose rate measured in Tokyo, averaged across the hour from 11 pm to midnight on March 18th, was 0.0471 μSv/h. This radiological monitor in Tokyo returned its highest reading yet on the 16th, from 05:00 to 05:59, at a dose rate of 0.143 μSv/h.

So, that most recent figure from Tokyo is 13% of the average natural background radiation dose rate in the United States. One banana dose is something like 0.1 μSv, so what we’re measuring in Tokyo at the moment comes in at just under 0.5 banana per hour. (One banana per hour, and you’re going to triple that dose rate.)

The highest figure measured in recent days, 0.143 μSv/h, is equal to 40% of the average natural background in the United States.

The radiation level in Saitama, outside Tokyo, is also being recorded and charted on the web. As of 21:00 on the 18th of March, they report a dose rate of 0.058 μSv/h. The maximum value reported, at 1 am on March 17, is 0.067 μSv/h. These figures are 16% and 19% of US natural background, respectively.
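If you want to reproduce those percentages, the arithmetic is trivial; the readings are the ones quoted above, compared against the 0.354 μSv/h US average derived earlier.

```python
# Express the Tokyo and Saitama readings quoted above as a fraction of
# the average US natural background dose rate.

US_BACKGROUND = 0.354  # uSv/h, derived from 3.1 mSv/yr

readings = {
    "Tokyo, 18 March (hourly average)": 0.0471,
    "Tokyo, 16 March (peak hour)": 0.143,
    "Saitama, 18 March 21:00": 0.058,
    "Saitama, 17 March 01:00 (peak)": 0.067,
}

for label, rate in readings.items():
    print(f"{label}: {rate} uSv/h = "
          f"{rate / US_BACKGROUND:.0%} of US background")
```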

Two things are apparent from this data.

(a) Japan has very low levels of natural background radiation compared to the continental United States. (This is interesting in itself! It’s probably a combination of both low elevation providing shielding from cosmogenic radiation and a relatively low abundance of uranium and its daughter products in the ground.)

(b) The background ionising radiation dose rate that people receive across Japan has not been elevated significantly at all, at least outside the immediate vicinity of the plant, as a result of the Fukushima damage.

(ASIDE: If you can’t read Japanese – I can’t – a little bit of Google’s automatic translation goes a long way in helping you sort through this important data.)

If we look at the 5 monitoring sites closest to the 30 km radius marked on the chart, the last three measurements marked on the chart for each of those sites are 52, 52, 52; 140, 140, 150; 40, 45, 45; 8.5, 9.0, 8.7; and 1.6, 1.6, 2.0 μSv/h.

This tells us that there is detectable radioactivity which is moving in a narrow plume in the atmosphere – it is not distributed out isotropically, which is indeed exactly what you would expect from thinking about the meteorology.

This chart of compiled radiation measurements also tells us a very similar story.

At that 30 km radius, the average dose rate from that monitoring station which reports the high outlier values – the one corresponding to the location of the plume – is 143 μSv/h.

I wonder what radionuclides are present in that plume? The presence of 131I, 132Te and 133Xe would tell us that this radioactivity has come from a reactor; the absence of these short-lived fission products would tell us it has come from used fuel in the pool. A little bit of gamma spectroscopy, and we would have the answers.

The presence of these radionuclides as measured at KEK confirms that at least a tiny bit of radioactivity has been released from the reactors themselves.

That’s fairly high, but it’s not obviously high enough to hurt people. If you stood in the location of that plume for an entire week, you would receive about 24 mSv – a dose consistent with a relatively-high-dose nuclear imaging procedure, such as using 201Tl to image a tumour.

If we remove those three outliers corresponding to the plume location from the above set of numbers and we take the mean of the remaining values, this gives us a rough idea of the mean dose rate elsewhere along the 30 km radius, outside the location where the source term of radioactivity is passing in a plume of wind. That mean value, then, is 26 μSv/h.
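Here’s the same outlier-removal arithmetic spelled out, using the fifteen readings listed above.

```python
# The fifteen readings quoted above (three per site, five sites near the
# 30 km radius), in uSv/h.
readings = [52, 52, 52, 140, 140, 150, 40, 45, 45,
            8.5, 9.0, 8.7, 1.6, 1.6, 2.0]

plume = [140, 140, 150]  # the outlier station under the plume
others = [r for r in readings if r not in plume]

plume_mean = sum(plume) / len(plume)
others_mean = sum(others) / len(others)
print(f"Plume station mean: {plume_mean:.0f} uSv/h")
print(f"Mean elsewhere on the 30 km radius: {others_mean:.0f} uSv/h")
```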

If you were standing in that radiation field, 26 μSv/h, for five hours per day every day for a year, you would reach a total annual dose of 47 mSv, which is just below the allowed occupational radiation dose – above natural and non-occupational background – of 50 mSv per year, for people working around radioactivity, such as nuclear power plant employees. (This is the limit set in the United States by the NRC; I’m not sure what the corresponding dose limit is in Japan, but it will be something loosely similar.)
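The arithmetic for that occupancy-weighted annual dose:

```python
# Annual dose from standing in the off-plume mean dose rate for five
# hours a day, compared with the 50 mSv/yr US NRC occupational limit.
rate = 26.0             # uSv/h, mean away from the plume
hours_per_day = 5

annual_usv = rate * hours_per_day * 365
annual_msv = annual_usv / 1000
print(f"Annual dose at {hours_per_day} h/day: {annual_msv:.1f} mSv")
print("US NRC occupational limit: 50 mSv/yr")
```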

But it’s well worth remembering that that radioactivity that is present now, in very low levels, will not be sticking around for a whole year. It is dispersing rapidly, and it drops away exponentially as you move away from the Fukushima site. As we move further out from the 30 km radius marked on that map, the dose rates recorded are all at harmless levels, consistent with background radiation dose rates experienced by people in the United States and elsewhere across the world.

In Ramsar, Iran, the natural background radiation dose rate is unusually high, at 260 mSv per year in some places. That is 30 μSv/h, which is higher than the mean value of about 26 μSv/h measured at these monitoring stations 30 km west of Fukushima, as I described above.

The people of Ramsar experience a background radiation dose significantly above that which most other people across the world experience – but they do not seem to experience any ill health effects at all from this.

I hope all the above helps to put these dose rates in context.

The composition of the radionuclides that are responsible for most of the radioactivity in used nuclear fuel that has been stored in a cooling pool for a few months is very different than in nuclear fuel in a reactor that is operating, or has just been shut down.

Fuel straight out of an operating reactor contains a number of rather short lived, rather high specific activity fission product radionuclides which are of the largest health physics significance in the time immediately following severe reactor accidents.

Some of these short-lived fission products include iodine-131, xenon-133, xenon-135, tellurium-131, tellurium-132, and ruthenium-105. These short-lived fission products were very significant contributions to people’s radiation doses in the environment around Chernobyl in the time immediately following the Chernobyl disaster, for example, when they were dispersed from “hot” nuclear fuel from the reactor.

However, they are not present to any significant level in stored nuclear fuel, because they decay away relatively fast, and they cannot contribute any significant source term into the environment in some sort of accident scenario involving used nuclear fuel which has been stored for a month or three post-defueling.

So, what radionuclides are present in stored fuel? The main ones of interest here are the longer-lived fission products. 137Cs, 85Kr and 90Sr are the most significant ones. Of these, 85Kr is a gas, and has the most potential to be readily released from the fuel into the atmosphere. 137Cs accounts for most of the radioactivity of the used nuclear fuel, and it is usually the most feared radionuclide in the used fuel inventory, in terms of the potential source term released from an accident with a used-fuel pool.

Fuel-pool water evaporation

“In this house, we obey the laws of thermodynamics!”
— Homer Simpson

With the used fuel heating the water in the Unit 4 fuel transfer pool, how long will it take for all the water to be boiled away? It is actually possible to know this, without any real speculation. The physics is pretty simple.

We know that there is a full core-load of used fuel in the Unit 4 defueling pool, which was put there after the reactor was shut down for inspection on November 30.

(Assumption I’ve made here which may possibly be wrong: That that single core-load of fuel is the only fuel in the pool. Is there additional fuel in the pool? If there is, somebody needs to tell me how much and how long it has been cooling for, so we can re-run the numbers – or you can take what I’ve explained here and re-calculate it for yourself.)

That fuel has been cooling for the last 3.5 months, or approximately 9.2 × 10⁶ s.

After this time, the radiothermal power output of the used fuel is small. Looking at this decay heat chart, we read that the decay heat is approximately 2 MWt. However, this chart is for a power reactor with a thermal power rating of 3000 MW. (And I’ve done a not-really-precise job of eyeballing that chart.)

But the Fukushima-I Unit 4 reactor has an electrical power capacity of 784 MW; assuming a typical one-third thermal efficiency, that’s about 2352 MW thermal. So, we need to scale back the above figure commensurately: it’s approximately 1.6 MWt, from the entire core load of fuel.
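As a sanity check on my eyeballing of that chart, the Way–Wigner approximation for fission-product decay heat gives a similar answer. (The approximation, and the one-year operating time assumed here, are my assumptions, not something taken from the chart or from TEPCO.)

```python
# Sanity check of the ~2 MWt chart reading with the Way-Wigner
# approximation for fission-product decay heat. t = time since
# shutdown, t0 = prior operating time, both in seconds; the result is
# the fraction of the operating thermal power P0.

def decay_heat_fraction(t, t0):
    """Way-Wigner: P/P0 = 0.0622 * (t**-0.2 - (t + t0)**-0.2)."""
    return 0.0622 * (t ** -0.2 - (t + t0) ** -0.2)

t = 9.2e6    # ~3.5 months since shutdown, from the post
t0 = 3.15e7  # assume roughly one year of prior operation

for p0_mw in (3000, 2352):
    heat = decay_heat_fraction(t, t0) * p0_mw
    print(f"{p0_mw} MWt core: {heat:.2f} MWt of decay heat")
```

This lands right around 2 MWt for the 3000 MWt reference core and about 1.6 MWt for the Unit 4 core, consistent with the figures read off the chart.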

The latent heat of vaporisation of water, at 100 C, is 2260 kJ/kg. (Let’s assume, conservatively, that the water in the pool is boiling; it’s at 100 degrees C, and the only route of energy dissipation from the system is through vaporisation of the water. This also assumes that none of the energy released is stored in the water by means of a rise in the water’s temperature, because it’s already at boiling point, and that there is no functioning mechanism for otherwise cooling that water.)

1.6 MW ÷ (2260 kJ/kg × 1 kg/L) ≈ 0.7 litres per second, or about 61 cubic metres per day.

The used fuel pool at Vermont Yankee, which is also a GE BWR-4, is 40 feet long, 26 feet wide and 39 feet deep, and is normally filled with 35,000 cubic feet of water. I will make an assumption that the Fukushima I Unit 4 used fuel pool has the same dimensions.

The level of water in the used fuel pool is normally 16 feet above the top of the fuel assemblies. With the water boiling away at the rate calculated above, the water level will drop by about 2 feet per day.

Uncovery of the fuel assemblies will take eight days. (Beginning from the point where the water level reached boiling point, after active cooling ceased.)

(Working assumption which you may subject to some skepticism: That there is no form of leakage or other water loss pathway from the used fuel pool.)
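Putting the whole calculation together in one place (the pool dimensions are the Vermont Yankee figures assumed above, and the 1.6 MWt decay heat is the scaled chart reading):

```python
# End-to-end boil-off estimate for the Unit 4 fuel pool, repeating the
# post's arithmetic. Pool dimensions are the Vermont Yankee figures
# assumed in the text; decay heat is the scaled chart reading.

FT_TO_M = 0.3048

decay_heat_w = 1.6e6   # W, from the decay-heat estimate above
latent_heat = 2.26e6   # J/kg, vaporisation of water at 100 C
density = 1000.0       # kg/m^3, so 1 kg of water is 1 litre

boil_rate_kg_s = decay_heat_w / latent_heat            # ~0.7 kg/s
boil_rate_m3_day = boil_rate_kg_s / density * 86400    # ~61 m^3/day

pool_area_m2 = (40 * FT_TO_M) * (26 * FT_TO_M)         # 40 ft x 26 ft
drop_m_day = boil_rate_m3_day / pool_area_m2
headroom_m = 16 * FT_TO_M                              # water above fuel

print(f"Boil-off: {boil_rate_kg_s:.2f} L/s, "
      f"{boil_rate_m3_day:.0f} m^3/day")
print(f"Level drop: {drop_m_day / FT_TO_M:.1f} ft/day")
print(f"Days until fuel uncovery: {headroom_m / drop_m_day:.0f}")
```

Any water loss pathway other than boiling (leaks, sloshing from the earthquake) would shorten this timescale, which is why that working assumption matters.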

It seems plausible that a fire hose or something can be used to add water to the pool at a rate equal to this loss rate of 700 ml per second. The Chernobyl-style helicopter drops seem like overkill, and they are doing an effective job of whipping up “Chernobyl again” fear, and images that the newspapers are having a field day with.

Written by Luke Weston

March 18, 2011 at 6:19 pm

## All right, it’s time to stop the Fukushima hysteria.

“Is it true they have nuke stuff inside of them?”
“What happens if one gets busted open? Everyone gets all mutated?”
“If you ever find yourself in the presence of a destructive force powerful enough to decapsulate those isotopes,” Ng says, “radiation sickness will be the least of your worries.”

— Neal Stephenson, Snow Crash

At the present time, the people of Japan are struggling to deal with one of the most serious natural disasters anywhere in the world in recent recorded history. My thoughts are with them.

But I’ll get straight to the point. All right, what do we actually know about the effects of this disaster on Unit 1 at the Fukushima I Nuclear Power Station?

There are 53 operational nuclear power reactors in Japan today. Most of them were operating normally at the time of the recent earthquake, and continued to operate normally, since they were relatively far away from the earthquake’s epicentre; some were offline at the time for routine scheduled maintenance or refueling. Several reactors closer to the epicentre experienced normal, automatic reactor trips (“SCRAMs”, in Western nuclear engineering parlance) controlled by the RPS (the Reactor Protection System), exactly as you would expect either in the presence of ground acceleration under earthquake conditions, or due to a loss of electricity grid connectivity to the plant (known as a Loss of Offsite Power, or LOOP, in nuclear power engineering parlance), which is a very likely event during a severe earthquake.

For the purposes of designing safe nuclear power plants, loss of offsite power is recognized as a relatively frequent, relatively high probability event. For the purposes of designing safe nuclear power plants, especially in Japan, it is recognized that the plant can be subjected to a severe earthquake – and on the Japanese coast, to tsunami surges as well.

And of those 53 power reactors, only one is behaving in a somewhat abnormal way in its shutdown state – Unit 1 at the Fukushima I Nuclear Power Station. (Fukushima Dai-Ichi, as opposed to Fukushima Dai-Ni, which is the Fukushima II plant.) The other 52 are completely normal, either operating or behaving as predicted in a shutdown state. To see that 52 out of 53 are behaving completely normally, and many are still operating normally, generating electricity on the grid, in the wake of one of the strongest earthquakes the world has ever seen in an industrialized area, shows you just how resilient nuclear power infrastructure is in response to natural forces like this.

Meanwhile, oil refineries and natural gas plants are pretty much all going up in flames in Japan’s earthquake-affected areas.

The absolute worst case scenario that we could potentially be looking at here is partial melting damage to the nuclear fuel – similar to the Three Mile Island accident. This will not harm any people or harm the environment, but it will have serious financial and political costs for TEPCO, it may write off the reactor, and it will be a significant political and rhetorical advantage for anti-nuclear activism and FUD.

If there is any real environmental damage to come out of this accident, it will come as a result of increased use of coal and fossil fuels instead of nuclear energy.

Fukushima I-1 is a General Electric BWR (Boiling Water Reactor), with a (relatively small) nameplate capacity of 460 MW. It first achieved criticality in October 1970, only 3 years after its construction commenced in 1967.

Fukushima I Unit 4, Unit 5 and Unit 6 were all already offline for maintenance and fueling operations at the time of the earthquake, and Units 1, 2 and 3 were shut down automatically at the time of the earthquake, either by the RPS seismic sensors or by the RPS relays opening when off-site power was disrupted, completely as intended.

The control rods are all already fully driven into the reactors, and the reactors are fully subcritical. The systems are not even close to criticality and cannot reach criticality – or any measure of supracriticality – at all. This has been the case at all times following the initial RPS trips and control rod insertion at the time of the earthquake.

However, for a limited period of time following reactor shutdown, cooling of the reactor core still has to be maintained, to dissipate the decay heat of the short-lived fission products in the nuclear fuel. And that cooling, and how it is maintained, or not maintained, in the absence of offsite power, is at the root of our discussion of all this fuss at Fukushima.

Even with the reactor in a subcritical configuration, with control rods inserted, if the reactor core coolant level drops excessively and it is not replenished, over the course of the next 48 hours or so following reactor shutdown, the fuel can eventually heat up excessively from its decay heat, leading to core damage – partial melting of the fuel, which will be very difficult and costly to fix. This is not significantly dangerous for the people and the environment around the nuclear reactor. This worst-case scenario, damage to the fuel in the reactor core, is not dissimilar to the damage to the Three Mile Island Unit 2 PWR in the United States in 1979; although it is worth noting here that the TMI reactors are Pressurized Water Reactors and the Fukushima reactors are BWRs.

As the name suggests, however, the decay heat will decay away fairly rapidly – and the fuel’s thermal power output will drop below levels which are potentially problematic in the absence of proper cooling after a few days. It will take a few days for the fuel rods to stabilise their own temperature, in the absence of active water cooling, as the short-lived fission products in the fuel which are generating the heat continue to decay. The reactor will then be in what is known as “cold shutdown”. At that point, only minimal coolant injection into the reactor will be required, and preparations can begin to remove the nuclear fuel from the core.

We’re probably already not far from reaching this point, chronologically, at Fukushima I-1. The decay heat from the fission products in the fuel has been decaying constantly for the last couple of days, ever since control rod insertion at the time of the earthquake. We’re now reaching the point, over the coming few days, where the risk of further potential core damage has passed.

Following LOOP, most of a reactor’s instrumentation and emergency systems are generally transferred to a backup auxiliary power supply provided on site by diesel generators, or to batteries in the case of some systems. However, it appears that the diesel generators at Fukushima I-1 were damaged by the earthquake: they appeared to start up correctly, and then stopped abruptly about an hour later.

So, what happens to a BWR, in terms of its decay heat removal, when the reactor is tripped, and offsite power is offline, and the auxiliary electric power supply from the diesel generators is offline?

To find out, we need to take a closer look at the BWR. To start with, here are a few little diagrams that basically illustrate the architecture of a typical BWR of the kind we’re discussing here. Click through for the full-sized images.

Wikipedia has a surprisingly good page on the safety systems of a Boiling Water Reactor, and I think that’s a very good place to start. It’s not too technical – it is Wikipedia, after all – but it’s impressively good, technically literate introductory material for a Wikipedia article.

Like all light-water reactors, the GE BWR has a negative void coefficient. As the proportion of steam to liquid water within the reactor increases, the moderation of neutrons decreases, since the lower-density steam is a less effective moderator, and the reactor’s average neutron energy spectrum hardens a little. Because the reactor operates on enriched uranium fuel in a thermal neutron energy spectrum, hardening the spectrum decreases the fission power output, and so the reactor’s neutronic power output decreases.

A sudden increase in steam pressure within the BWR (caused, for example, by the closing of the main steam isolation valve from the reactor) will cause a sudden increase in the proportion of liquid water to steam within the reactor, which will cause an increase in the reactor’s power output, due to the negative void coefficient. Such an event is known as a pressure transient.

The BWR is specifically designed to suppress such pressure transients, by safely venting the overpressure through safety relief valves to below the surface of a pool of liquid water within the containment. This toroidal-shaped tank, known as the torus, is shown on the drawings above. There are 11 safety overpressure relief valves on the older generation of BWRs such as the ones at Fukushima, and only a couple of them need to be opened in order to completely mitigate a pressure transient.

Although a pressure transient will cause a transient in the fission power output for a brief moment, the rapid actuation of the pressure relief valves will cause the pressure to drop off rapidly, and correspondingly, the neutronic power will rapidly drop off once the valves are opened, to a level far below nominal operating power.

There is an intrinsic physical relationship between temperature, pressure and fission power output in a light-water reactor, because of the void reactivity coefficient.

The Emergency Core Cooling System, the ECCS, of a light-water reactor is made up of a set of many interrelated, redundant layers of different safety systems which are designed to protect the nuclear fuel within the reactor pressure vessel from overheating in the event of the loss of coolant level, by maintaining that coolant level. To understand what’s going on at Fukushima, it is good to have a basic understanding of what these different systems are.

The Emergency Core Cooling System(s)
————————————————————

The High Pressure Coolant Injection System (HPCI) is the first line of defense in the ECCS. The HPCI is designed to inject substantial quantities of water into the reactor while it is at high pressure, and to prevent the activation of the additional, redundant low-pressure “layers” of the ECCS. HPCI can deliver approximately 19,000 L/min to the core at any core pressure above 690 kPa (100 psi). This is usually enough to keep the water levels sufficiently high to avoid activating the low-pressure “layers” of ECCS except in a major contingency, such as a large break in the makeup water line. The HPCI necessarily operates at a high pressure because it injects water into the reactor at a high flow rate against the high pressure already within the reactor, without releasing that pressure.

It’s worth noting here that whilst the Fukushima reactor may be losing coolant level at a limited rate through steam venting through the pressure relief valves into the torus, there is no pipe break, no stuck-open valve, or any other serious large-scale LOCA scenario here with a serious rate of coolant loss, which is the kind of thing the ECCS is designed to safely compensate for.

The HPCI system is powered by steam from the reactor – its operation is not dependent on off-site power, or power from the diesel generators, or battery power. It is powered by the heat remaining in the reactor itself.

It is completely plausible that a turbine trip, with sudden closure of the main steam isolation valve (MSIV) between the reactor and the turbine hall, will cause a significant power transient in the reactor, for the reasons described above, and steam venting into the relief valves as a result of that transient will cause some loss of the coolant level. The HPCI system is more than adequate to make up the reactor water level in this scenario.

The next one of the redundant components of the ECCS is the Reactor Core Isolation Cooling System, or RCIC. RCIC is also one of the high-pressure coolant injection systems, capable of injecting approximately 2000 L/min of water into the reactor core. The RCIC is able to operate with no source of electric power other than battery power, and is capable of providing decay heat removal by itself in the event of a station blackout, where off-site power is lost and the backup power supply from the diesel generators also fails.

If the water level cannot be maintained with the HPCI and/or the RCIC, and the core water level is still falling below some preset point even with these systems working full-bore, then the next systems in the stack of ECCS systems respond. If, for some reason such as a large-break LOCA, the water level cannot be maintained, we then move to looking at the next layers of redundancy in the ECCS – the depressurisation and low-pressure coolant injection systems.

For the low-pressure coolant injection components of the ECCS to operate, the pressure within the reactor must be reduced by the depressurization system. The Automatic Depressurization System (ADS) is designed to activate in the event that the reactor pressure vessel is retaining pressure, but the water level cannot be maintained using high pressure cooling alone, and low pressure cooling must be initiated. When the ADS activates, it rapidly releases pressure from the reactor vessel in the form of steam, through pipes routed to below the water level in the torus, which is designed to condense the steam released into it, bringing the reactor vessel pressure below 32 atmospheres and allowing the low pressure components of the ECCS to be activated.

The low-pressure ECCS systems have extremely large capacities compared to the high pressure systems and are powered by multiple different power sources. They will maintain any required water level, and in the event of a worst-case LOCA, such as a break of a large water pipe feeding into the reactor vessel below core level, which could potentially lead to temporary fuel rod “uncovery”, they will rapidly return the water level over the fuel in the core prior to the fuel heating to the point where core damage could occur.

The Low Pressure Core Spray System (LPCS) is the first of the low-pressure ECCS components, designed to suppress steam generated by a major contingency. As such, it prevents the reactor vessel pressure from re-increasing above the LPCI coolant injection pressure of 32 atmospheres. It activates once the pressure in the reactor has fallen below 32 atmospheres, and delivers approximately 48,000 L/min of water in a deluge from the top of the core.

The Low Pressure Coolant Injection System, LPCI, is the final piece of the ECCS, the “heavy artillery” in the ECCS. Consisting of 4 pumps driven by diesel engines, it is capable of injecting a mammoth 150,000 L/min of water into the core. Combined with the core spray system to keep steam pressure in the core sufficiently low, the LPCI can suppress all core-cooling contingencies by rapidly and completely flooding the core with coolant. One should also note that the diesel-engine driven pumps that run the LPCI are completely independent of off-site electrical grid power, they are independent of steam power being extracted from the reactor (unlike HPCI), and they are independent of the diesel generators that provide the backup electricity supply for the plant in the event of the loss of offsite power.

The Standby Liquid Control System, the SLCS, is used in the event of major contingencies as a last-ditch measure to prevent core damage. It is not intended ever to be used, as the RPS and ECCS are designed to respond to all contingencies, even if multiple components of those systems fail, but if a complete ECCS failure occurs, it could be the only thing capable of preventing core damage. The SLCS consists of a tank containing a large quantity of water loaded with soluble nuclear poisons (such as boron) protected by explosively-opened valves and redundant battery-operated pumps, allowing the injection of the water into the reactor against any pressure within it. This water is fully capable of dissipating heat from the nuclear fuel; and the nuclear poisons in this water will send the system fully subcritical even if, somehow, insertion of the control rods has completely failed (which is not the case at present for any of these Japanese nuclear power reactors.)

The SLCS is a system that is never meant to be activated unless all other measures have failed to maintain integrity of the nuclear fuel. In the older generation of existing BWRs its activation could cause sufficient damage to the plant (due to the salts used as neutronic poisons causing corrosion and contamination of the whole nuclear steam supply system) that it could make the reactor inoperable without a complete overhaul.
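To pull the stack of ECCS layers described above together in one place, here is a compact summary table. The flow figures are the approximate ones quoted in this post for the older BWR generation; the categorisation is my own shorthand.

```python
# Compact summary of the ECCS layers described in the text, with the
# approximate flow figures quoted above for the older BWR generation.

ECCS_LAYERS = [
    # (system, pressure regime, flow in L/min, power source)
    ("HPCI", "high pressure (>690 kPa)",       19_000,  "reactor steam"),
    ("RCIC", "high pressure",                   2_000,  "battery"),
    ("ADS",  "depressurisation to <32 atm",      None,  "vents steam"),
    ("LPCS", "low pressure, core spray",       48_000,  "multiple sources"),
    ("LPCI", "low pressure, core flooding",   150_000,  "dedicated diesels"),
    ("SLCS", "last-ditch, any pressure",         None,  "battery pumps"),
]

for name, regime, flow, power in ECCS_LAYERS:
    flow_s = f"{flow:,} L/min" if flow else "n/a"
    print(f"{name:5s} {regime:30s} {flow_s:15s} {power}")
```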

There is now talk of pumping seawater into the reactor building, although the information in the press on this subject seems vague and confused. There is very little good, unambiguous information out there. Are we talking about spraying seawater within the reactor building, in order to condense steam and reduce the temperature and pressure? That seems to make sense. Are we talking about spraying seawater within the drywell to help cool the reactor pressure vessel and reduce temperatures within the drywell? That also makes sense.

Surely it wouldn't make sense to actually inject seawater into the actual Nuclear Steam Supply System, would it? This would cause significant problems with regard to contamination and corrosion of the entire nuclear steam supply system, which would be difficult, time consuming and expensive to rectify. Why would this ever be considered, when the SLCS and the ECCS systems are designed to perform the same function, safely and reliably, under adverse emergency conditions, without ruining the reactor? I do not expect that this is actually what is being planned – but again, the information that is trickling out through the hysterical mass media is so bad, it's hard to tell.

The Emergency Core Cooling Systems and the Design Basis Accident
——————————————————————————————————–

(Most of the material for the example scenario illustrated here is borrowed from here.)

The Design Basis Accident (DBA) for a nuclear power reactor is the most severe possible single accident that the designers of the plant and the regulatory authorities could realistically imagine, as a contingency which the operators of the plant must be able to handle. It is, also, by definition, the accident the safety systems of the reactor are designed to respond to successfully, even if it occurs when the reactor is in its most vulnerable state.

The DBA for the BWR consists of the total rupture of a large coolant pipe at the location considered to place the reactor in the most danger of harm. For the older generations of existing BWRs, such as the Fukushima BWRs, this is a "guillotine break" in the coolant loop of one of the recirculation jet pumps, substantially below the core waterline, which has the makings of a very serious Loss of Coolant Accident, or LOCA. The DBA scenario combines this large-scale loss of coolant with a simultaneous loss of feedwater (LOFW) to make up for the water boiled off in the reactor, and a simultaneous collapse of the regional power grid, resulting in a loss of offsite power (LOOP) to certain reactor emergency systems.

The BWR is designed to shrug this accident off without core damage.

The Design Basis Accident is not directly relevant to what happened to the reactor at Fukushima, but it is a good example to use to illustrate how the various different layers of the ECCS and the Reactor Protection System work under severe accident conditions, which is important background to a good understanding of what happened at Fukushima.

The immediate result of such a large-scale pipe break (we’ll call this time T+0) would be a pressurized stream of water well above boiling point shooting out of the broken pipe into the drywell, which is at atmospheric pressure. As this water stream flashes into steam, the pressure sensors within the drywell will report a pressure increase to the Reactor Protection System, within no more than 300 milliseconds; that is, by T+0.3. The RPS will interpret this pressure increase signal as the sign of a break in a pipe within the drywell. As a result, the RPS immediately initiates a full SCRAM, closes the Main Steam Isolation Valve (isolating the containment building), trips the turbines, attempts to spin up RCIC and HPCI using the residual steam, and starts the diesel-driven pumps for LPCI and the core spray.

Now, let's assume that the LOOP occurs at this time, at T+0.5. The RPS is on an uninterruptible power supply, so it continues to function. The RPS immediately detects the loss of offsite power, enters a fully defensive state, and trips the reactor and the turbine if it has not already done so. Within less than a second of the power outage, auxiliary batteries and compressed air supplies are starting the emergency diesel generators. Power will be restored by T+25 seconds.

(Remember that at Fukushima I, the backup diesel generators failed shortly afterwards, but there was no real pipe break or LOCA. Never mind that in the scenario we're looking at here. In any case, remember that many of the ECCS sub-systems have different, redundant energy sources.)

Due to the rapid escape of coolant from the reactor core, HPCI and RCIC will fail rapidly due to loss of steam pressure, but this is immaterial, as the 2,000 L/min flow rate of RCIC available after T+5 is insufficient to maintain the water level; nor would the 19,000 L/min flow of HPCI, available at T+10, be enough to maintain the water level, even if it could work without steam, in the event of such a serious LOCA. At T+10, the temperature of the reactor core, at approximately 285 °C at this point, begins to rise as enough coolant has been lost from the core that voids begin to form in the coolant between the fuel rods and they begin to heat rapidly. By T+12 seconds from the initial LOCA, fuel rod uncovery begins. At approximately T+18, parts of the rods have reached 540 °C.

At T+40, the core temperature is at 650 °C and rising steadily; the LPCI and the pressure-regulating core spray kick in and begin deluging the steam above the core, and then the core itself. A large amount of hot steam still trapped above and within the core has to be knocked down first, or the water would flash to steam before it hits the fuel. This takes a few seconds, as the approximately 200,000 L/min of water these systems release begins to cool the top of the core, with the LPCI deluging the fuel rods and the core spray suppressing the generated steam. By approximately T+100 seconds, all of the fuel is subject to this deluge, and the last remaining hot spots at the bottom of the core are being cooled.

The peak temperature attained in the fuel elements in this scenario, even with temporary uncovery of the fuel rods, is 900 °C, at the bottom of the core, the last hot spot to be cooled by the water deluge. This is well below the 1200 °C limit at which fuel damage begins.

The core is now cooled rapidly and completely by the LPCI. Once the core has cooled below the temperature at which steam is generated, the core spray is shut down, and the LPCI flow rate is decreased to a level that maintains a steady fuel-rod temperature, which will fall over a period of days as the fission-product decay heat within the fuel dies away. After a few days, decay heat will have abated sufficiently that defueling of the reactor can commence. Following defueling, LPCI can be shut down completely.
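The way decay heat dies away over those first few days can be sketched with the standard Way-Wigner empirical approximation. To be clear, this formula is a textbook rule of thumb, not something from the Fukushima plant documentation, and the coefficient and exponents below are just the commonly quoted values:

```python
# Decay heat after shutdown, via the Way-Wigner empirical approximation:
#   P(t)/P0 ~ 0.0622 * (t^-0.2 - (t + T)^-0.2)
# where t = seconds since shutdown, T = seconds of prior full-power operation.
# This is a rough rule of thumb, good to perhaps a factor of two.

def decay_heat_fraction(t_s, operating_s=365 * 86400):
    """Fraction of full thermal power remaining t_s seconds after shutdown."""
    return 0.0622 * (t_s ** -0.2 - (t_s + operating_s) ** -0.2)

for label, t in [("1 minute", 60), ("1 hour", 3600),
                 ("1 day", 86400), ("1 week", 7 * 86400)]:
    print(f"{label:>8}: {100 * decay_heat_fraction(t):.2f}% of full power")
```

Even an hour after shutdown the core is still producing around 1% of its full thermal power, which for a large BWR is tens of megawatts of heat; this is why continued coolant injection matters so much, and why the required flow rate falls off over days rather than minutes.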

The Explosion
———————

On March 12, there was an explosion near the Fukushima I-1 reactor building. What happened?

There are five layers of containment, in a power reactor like the ones at Fukushima, between the people outside and the potentially dangerous radioactive fission products within the nuclear fuel.

The fuel rods themselves are clad in tubes of zirconium alloy, and that represents one such layer. That nuclear fuel is inside the reactor pressure vessel, which is made of steel six inches thick, and that reactor vessel is the next such layer. The reactor pressure vessel is within the primary containment vessel, the drywell, which is made of steel one inch thick, and that represents the next such layer. The primary containment vessel is within the secondary containment structure, which is made of steel-reinforced, pre-stressed concrete between 4 and 8 feet thick. The reactor building which is built around the secondary containment structure is the last of these multiple layers of containment, and it is also made of steel-reinforced, pre-stressed concrete, between 30 cm and 1 m thick.

If every possible measure standing between safe operation of the plant and severe core damage and melting of the nuclear fuel fails, the containment can be sealed indefinitely, and it will prevent any significant release of radioactivity to the outside environment occurring under any circumstances.

Now, let’s look at some diagrams of these structures. Click-through for the full resolution images.

The outermost layer of the multiple layers of containment – the reactor building – has walls and a roof made of solid concrete, and it’s roughly cube-shaped.

On top of the concrete reactor building, however, there is an additional part of the structure: it is not made of concrete, but of steel sheets over a steel frame. This steel building, which houses the fuel transfer crane, sits on the concrete roof of the reactor building. I'm referring to the part of the structure above the concrete shield plug and the refueling platform at the top of the concrete reactor building, as shown in the first of the diagrams above.

It is this relatively weak steel structure on top of the reactor building, not really part of the reactor building proper, that seems to have been blown out by a hydrogen explosion.

The explosion at Fukushima I-1 does not appear to have occurred within nor does it appear to have breached any of the fundamental layers of containment structure described above.

Now, an explosion has not occurred as a result of a release of nuclear energy. That is a scenario that is simply outside the laws of reality. An explosion can be caused by one of two things: a chemical explosion, such as the ignition of a hydrogen-oxygen mixture, or a sudden release of stored gas or steam pressure.

It appears that the structure has probably been damaged as a result of a hydrogen explosion. It's probable that excessive hydrogen generated within the reactor core, either radiolytically or chemically by reduction of water on the zirconium cladding at significantly elevated temperatures, was vented into the torus; and, as temperatures and pressures within the torus rose, steam from the torus was vented out into the surrounding reactor building. From there, the hydrogen mixed with that steam and water vapor rose, as hydrogen does, worked its way up through the reactor building, and accumulated at the top, in the area around the fuel transfer crane. It appears that the accumulated hydrogen then mixed with air and exploded.

——————————————————————————————

When a light-water reactor is operating, some of the oxygen-16 in the water is activated into radioactive nitrogen-16, by the 16O(n, p)16N reaction. 16N is very short lived, with a half-life of only 7 seconds, but its specific activity is correspondingly very high. When a BWR nuclear power station is operating, the entire nuclear steam supply system, including the turbine hall, is a radiologically controlled area, due to the radioactivity from 16N. However, after reactor shutdown, the 16N decays very quickly, reducing the radiation dose around the turbines to negligible levels basically immediately. This is one key difference between a BWR power plant and a PWR power plant – since the secondary coolant loop that drives the turbine in a PWR is isolated from the reactor's primary coolant by the steam generator, the secondary coolant is never radioactive during reactor operation.
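To put that "decays very quickly" in numbers: with a half-life of about 7 seconds (7.13 s is the commonly quoted value), the 16N activity falls by many orders of magnitude within a couple of minutes of shutdown. A quick sketch:

```python
import math

N16_HALF_LIFE = 7.13  # seconds; the commonly quoted half-life of nitrogen-16

def remaining_fraction(t_s, half_life=N16_HALF_LIFE):
    """Fraction of the initial 16N activity remaining after t_s seconds."""
    return math.exp(-math.log(2) * t_s / half_life)

for t in (7, 30, 60, 120):
    print(f"after {t:>3} s: {remaining_fraction(t):.1e} of initial activity")
```

After two minutes, less than a hundred-thousandth of the original activity remains, which is why the turbine hall of a BWR ceases to be a radiological concern essentially as soon as the reactor shuts down.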

There can also be very small amounts of other radionuclides formed within the reactor coolant; for example tritium, which is formed by the fission of boron used as a soluble reactivity shim in the reactor coolant (if you really want to know: neutron capture on 10B forms an excited state of 11B, which splits apart into two 4He nuclei and one 3H nucleus. A similar reaction occurs beginning from 11B, with the re-emission of one additional neutron in the breakup of the excited 12B nucleus), and 14C, which is formed from nitrogen compounds such as hydrazine, added for pH control and oxygen scavenging in the reactor coolant.

If excessive pressure within the torus or within the primary containment vessel is vented out into the reactor building, and from there it is allowed to escape out into the atmosphere, then small amounts of these radionuclides may be released out into the atmosphere, which is a possible scenario we might be seeing at Fukushima.

(A quick note on terminology: Radiation is not a substance; it cannot leak, nor can it contaminate a person. Speaking of an escape of radiation from a nuclear power station is a bit like speaking of an accidental leak of light from a lightbulb factory. What we are talking about here is a possible release of radioactivity: of a substance containing a radioactive nuclide.)

We will know, over the next few days, exactly what the true situation is regarding the composition, and quantity, of any releases of radionuclides into the outside environment. It’s very easy to detect radioactivity, to measure it quantitatively with high precision, and to discriminate the presence of different radionuclides and identify them.

What I suspect we might see from some anti-nuclearists, however, is something that we saw after Three Mile Island, and something that we still see on rare occasions up to the present day in regards to TMI – the conspiracy theories.

Some people will probably try and claim that there were actually enormous releases of radioactivity into the environment and it was never really measured or documented – or that it was measured and known that there were huge releases of radioactivity into the environment at Fukushima, but there’s a big conspiracy by big bad unethical TEPCO or by the Big Bad Nuclear Industry in a more general sense (and by the evil government and the conspirators at the IAEA, and the armies of Big Nuclear Shill bloggers, of course!) to cover it all up! We saw this once or twice after TMI, and I think we’ll see it again from those who are truly devout believers in the absolute, unmoderated evil of the Big Bad Nuclear Industry.

Of course, that’s absolute nonsense, for exactly the same reasons that it’s nonsense in the context of TMI. You simply cannot ever, in any context, release a very large amount of radioactivity into the atmosphere and cover it up or keep it quiet.

Look at Chernobyl for example. The Soviets didn't tell the West about it immediately – they didn't even tell their own nuclear scientists. Soviet nuclear experts found out about it when radiation sensors at nuclear research sites and nuclear power plants (e.g. the Ignalina plant in what is now Lithuania) across the USSR started going off, and the West found out about it when radiation sensors at Sweden's Forsmark NPP and other Swedish nuclear engineering facilities started going off. (For more on this note regarding Chernobyl, see the excellent first chapter of Richard Rhodes' Arsenals of Folly.)

Nuclear power plants and other facilities that use radioactive materials are all over the place in our society, and they all have sensors and instruments to make sure everything is safe and radioactive contamination does not occur. If a Chernobyl-style event occurs, you will detect it at any such site. Any nearby NPP. Any nearby molecular biology lab working with radiolabels. Any nearby physics lab. Any nearby clinic working with X-rays or medical imaging. Anyone nearby developing photo film.

If a person who has recently had a radiopharmaceutical medical imaging procedure walks into a nuclear power plant or a physics lab, or past one of the radiation detectors installed at border crossings and ports around the USA, they'll set off alarms.

Radioactivity is so easy to detect that in 1896 Becquerel discovered it accidentally.

I remember that there was a case, in November of 2008 I think, where a little bit of radioactive 133Xe was vented from the ANSTO Lucas Heights radiopharmaceuticals facility… this was quickly detected in Melbourne by the atmospheric radiochemistry monitoring station which is part of the network being developed for the CTBTO for CTBT verification… one of a large network of such sites, which are extremely sensitive, all around the world which are used to detect any possible nuclear weapon test.

Japan, Hong Kong and mainland China all have plenty of expertise and infrastructure that they can use to, for example, perform sensitive analysis of fission-product radionuclides in the atmosphere to monitor nuclear weapons testing and nuclear fuel processing in the DPRK… so they can certainly also analyse the presence of traces of artificial radionuclides in the atmosphere from this nuclear power plant incident.

How many nuclear power stations are there in the United States located relatively close to TMI, in the states geographically around it? What did their radiological monitors show? Anything? Photographic film from everyone around the area was collected and examined – no radiation exposure was recorded.

Basically, the whole idea of such a cover-up is just an enormous, impractical conspiracy theory, one which would need to involve the state government, the federal government, the nuclear energy industry, and huge numbers of the public, scientists and industries; it's like an Apollo hoax conspiracy theory.

We will know, over the next few days, exactly what the true situation is regarding the composition, and quantity, of any releases of radionuclides into the outside environment. There are no coverups or conspiracies in this context – there simply cannot be.

I hope you’ve found this post helpful. Please feel free to post comments, with any further discussion, questions, criticisms or what-have-you. I will likely follow up this post with a future post, following future developments of this issue, and responding to questions or new information.

Written by Luke Weston

March 13, 2011 at 8:55 pm

## Rethinking nuclear power.

ABC Unleashed has recently featured an article by environmentalist Geoffrey Russell; Rethinking Nuclear Power.

I like the idea of closing down uranium mines, and using existing stocks of mined uranium efficiently.

Uranium mining is far less environmentally intensive than mining coal, of course, but it’s basically inevitable that all mining is fairly environmentally intensive, and it’s always an appealing prospect if we can mine less material (whilst still maintaining our energy supplies and our standards of living, of course.)

I have to admit, when I first saw Geoff’s claim that we could completely eliminate uranium mining, I was skeptical. So I took a more detailed look.

A nuclear reactor which is efficiently consuming uranium-238 and driving a relatively high efficiency engine (typically, a Brayton-cycle gas turbine) will require approximately one tonne of uranium input for one gigawatt-year of energy output. This high efficiency use of U-238 could be best realized in something like an IFR or a liquid-chloride-salt reactor (the latter is essentially the fast-neutron uranium fueled variant of a LFTR). This figure of one tonne of input fertile fuel per gigawatt-year is also comparable for the efficient use of thorium in a LFTR.

There are about one million tonnes of already mined, refined uranium in the world, just sitting around waiting to be put to use, in the form of so-called "depleted uranium".

According to one source, the exact worldwide inventory of depleted uranium is 1,188,273 tonnes [1].
The total electricity production across the world today is about 19.02 trillion kWh [2].

Therefore, total worldwide stocks of depleted uranium, used efficiently in fast reactors, could provide every bit of worldwide electricity production for about 550 years.
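That "about 550 years" is easy to verify from the figures above; the only extra assumption in this sketch is an average year of 8,766 hours:

```python
# Back-of-envelope check of the "~550 years" claim, using the numbers above.

DU_STOCK_TONNES = 1_188_273          # worldwide depleted uranium inventory [1]
WORLD_ELEC_KWH_PER_YEAR = 19.02e12   # total world electricity production [2]

# 1 tonne of fertile fuel ~ 1 GW-year of output in an efficient fast reactor.
KWH_PER_GW_YEAR = 1e6 * 8766         # 1,000,000 kW for 8,766 hours

demand_gw_years = WORLD_ELEC_KWH_PER_YEAR / KWH_PER_GW_YEAR
years = DU_STOCK_TONNES / demand_gw_years

print(f"World demand: about {demand_gw_years:.0f} GW-years of electricity per year")
print(f"Depleted uranium stockpile lasts about {years:.0f} years")
```

The result comes out at roughly 548 years, so "about 550 years" holds up.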

That’s not forever, but it’s a surprisingly long time. And that’s just “depleted uranium” stocks; not including the stocks of HEU and plutonium from the arsenals of the Cold War, and not including the large stockpile of uranium and plutonium that exists in the form of “used” LWR fuel.

I know some thorium proponents aren't going to like this; but there's a strong case to be made here that uranium-238 based nuclear energy has a clear advantage over thorium, simply because of these huge stockpiles of already-mined uranium, for which there exists no comparable thorium resource already mined. The 3,200 tonnes of thorium nitrate at NTS is tiny compared to the uranium "waste" stockpile, but they're both really useful energy resources which can replace the need for more mining.

Any type of breeder or burner reactor utilising 238U, or 232Th, as fuel requires an initial charge of fissile material to “kindle” it; however this requirement for fissile material is quite small; and personally, I think the inventories of HEU and weapons-grade plutonium recovered from the gradual dismantlement of the arsenals of the Cold War are perfectly suited to this purpose – destroying those weapons materials, whilst putting them to a valuable use.

Then again, with the means to completely replace the use of coal and fossil fuels in a way that requires very little or no uranium mining, I really hope the rest of the world keeps buying that iron and copper and bauxite. Alternatively, we’re going to have to start developing a more technologically based economy in this country to make up the reduction in exports of these commodities – perhaps developing and selling reactor technology?

Developing uranium enrichment technology, such as SILEX, is of limited usefulness because of the relatively inefficient thermal-neutron fission of 235U, and hence the need for enrichment; it will not supply any large portion of world energy demand in a sustainable fashion over the long term. The small amount of 235U in nature is of limited significance over the long term.

Alternatively, perhaps a shake up of agriculture, using extensive desalination to supply fresh water requirements, might be used to replace Australia’s income from coal and uranium. I’m not sure.

Tip o' the hat to Barry at Brave New Climate for pointing out this article.

[1]: http://www.wise-uranium.org/eddat.html
[2]: From the World Factbook, 2008 ed. (jokes about the integrity of CIA’s intelligence aside…)

Written by Luke Weston

May 3, 2009 at 8:56 am

## Burning money with solar power in Victoria. Again.

It has been announced this week that the Victorian Government will promote renewable energy by spending $100 million to establish a new regional solar power station, subject to the Federal Government matching its commitment. Premier John Brumby will announce both initiatives today, focusing on the plan for a 330 gigawatt-hours per year solar plant with the capacity to power the equivalent of 50,000 homes.

All right. More kumbaya and rainbows and sunshine courtesy of Brumby.

This proposed new solar power station will supposedly generate 330 gigawatt-hours of electrical energy per year. (The Age article originally mentioned a "330 gigawatt" plant, but they later caught the egregious mistake and edited it.)

How much energy is that? In 2006, Loy Yang unit A in Victoria generated 15,995 GWh of electrical energy, sent to the grid. (In doing so, it emitted 19,314,994 tonnes of CO2 equivalent, along with a whole lot of other environmentally and aetiologically nasty, dangerous, toxic waste, such as fly ash, SO2 and NO2.) That's just one example of one of the coal-fired generators, of course. This proposed solar power station would therefore generate about 2 percent of the output of that one single coal-fired generating station.

How much will this plant cost? We don't know. The article doesn't say, nor does Brumby's original press release. We don't know how much it costs, and I doubt Brumby knows, either.

> …promote renewable energy by spending $100 million to establish a new regional solar power station, subject to the Federal Government matching its commitment.

OK… we know that it costs at least $200 million. There is actually a convenient benchmark which we can use to estimate how much the whole project will actually cost, and that is the $420 million solar energy installation planned by Solar Systems for northwestern Victoria. This is another expensive solar energy project that the Victorian government just loves to talk about as a poster child for their clean, green ways.

The Solar Systems project, with 154 MW of nameplate capacity, will generate 270 GWh per annum, and will cost 420 million dollars. If we assume that the newly proposed 330 GWh/annum installation might cost about the same, for a given amount of capacity, then we can expect that it will cost 513 million dollars.

To replace Loy Yang A, to have the equivalent amount of energy generation, you’d need 49 such installations of this size, at a cost of approximately 25 billion dollars to construct.

If you build a modern* nuclear power plant, with two 1100 MWe reactors operating with a 90% capacity factor, the plant will generate about 17,356 GWh per annum. That is, such a plant will replace Loy Yang A’s output about 1.09 times over; it’s more than sufficient.
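The arithmetic behind this comparison is worth laying out explicitly. Here is a quick sketch using the figures quoted above, again assuming 8,766 hours in an average year:

```python
# Solar Systems benchmark: $420 million for 270 GWh per annum.
cost_per_gwh_per_year = 420e6 / 270
proposed_plant_cost = cost_per_gwh_per_year * 330   # the 330 GWh/yr proposal
print(f"Estimated cost of the 330 GWh/yr plant: ${proposed_plant_cost / 1e6:.0f} million")

# Installations needed to match Loy Yang A (15,995 GWh in 2006):
n_plants = 15995 / 330
print(f"Installations needed: {n_plants:.1f} (round up to 49)")
print(f"Total cost: ${n_plants * proposed_plant_cost / 1e9:.1f} billion")

# The nuclear alternative: 2 x 1100 MWe at a 90% capacity factor.
nuclear_gwh_per_year = 2 * 1100 * 0.90 * 8766 / 1000   # MW x hours -> GWh/yr
print(f"Nuclear plant output: {nuclear_gwh_per_year:,.0f} GWh/yr, "
      f"{nuclear_gwh_per_year / 15995:.2f}x Loy Yang A")
```

That gives the $513 million plant cost, the roughly 49 installations at about $25 billion, and the 17,356 GWh per annum nuclear figure used in the text.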

How much does it cost, to build such a nuclear power plant?
Go on, consider an exaggerated, extra-conservative cost estimate from your local greenies. 9 billion dollars? 12 billion? 14 billion? 15 billion?

In every case, even with the most pessimistic cost estimates for nuclear power, it’s far, far cheaper than solar, assuming that you’re actually capable of counting kilowatt-hours.

(* Modern, but not bleeding edge. We’ll consider the presently available modern Generation III LWRs such as Westinghouse AP1000 that are available immediately, not Generation IV fast spectrum reactors, liquid fluoride reactors, or things like that, just to be a little conservative about it.)

Brumby’s press release says that they aim to have the plant operating by 2015. So, they aim to have the plant operating within six years.

Six years? To think that opponents of nuclear energy say that it takes too long to deploy.

If it takes six years to build, and you need 49 of them to replace one coal-fired station, would it take 294 years for them to accomplish that goal? Perhaps I'm being a tiny bit mendacious; you never know, perhaps they could achieve faster deployment by constructing them in parallel, and maybe it would only take 200 years, or 150 years. Maybe.

Six years is in fact sufficient time to construct a nuclear power plant, if you're serious about doing it and don't allow it to be delayed. All the nuclear units at the Kashiwazaki-Kariwa nuclear generating station in Japan were each constructed in timescales of between three and five years; Kashiwazaki-Kariwa Unit 2 and Unit 5 both commenced construction in 1985, and both were completed by the end of 1990, within 5 years. Obviously the Japanese operators failed to see any relevance whatsoever of a certain ill-fated Soviet graphite pile to their operations.

Even if you want to talk about conservative, drawn out timescales for the construction of new nuclear power in Australia, say, 10 years maybe, it’s still a far, far faster option, for a given amount of energy delivered, than solar or wind.

Written by Luke Weston

March 11, 2009 at 12:50 pm

## Nuclear power and terrorist proliferation of nuclear weapons

Is the plutonium that is potentially formed within certain types of fuels in nuclear fission power reactors really suitable for the construction of nuclear weapons? How accessible and usable is such plutonium for such a purpose? How hard would it be to construct a nuclear weapon employing such material? Could terrorists steal nuclear fuel from a nuclear power reactor and construct a working nuclear explosive device, in practice?

What characteristics would such a device have? Given the terrible power of nuclear weapons, and the very real threat of terrorists who would love nothing more than to wield such power, these are perhaps important questions to consider.

I assert that, no, there is no real threat here that is anywhere near as plausible in the real world as it is sometimes beaten up to be. Can terrorists steal nuclear fuel, and build a nuclear weapon? No. I don’t think so.

I mainly just wrote this because (i) I just wanted to get this off my chest, and it’s good to have a go at the unrealistic nonsense that gets bandied about without any real factual evidence to back it up, and (ii) because I found the Kessler paper interesting.

This little piece of writing of mine owes a lot to the always entertaining and scientifically interesting posts of NNadir, especially this one, and this one, where I was pointed to the interesting publications of Kessler and colleagues. Love your work, NNadir.

My little essay is here (PDF format).

Pointing out typos, peer review, comments, grammatical suggestions and other interesting discussion and feedback are all appreciated.
(I know the sentence is too long in the last paragraph on page 5, and there’s a typo on the first line of page 13. Those are fixed in the CVS. )

I hope you find it enjoyable, interesting and/or useful.

Written by Luke Weston

March 2, 2009 at 3:41 am

## The Australian Government’s domestic solar PV subsidy…


The federal government has recently announced it will scrap the unpopular means test for the federal subsidy for domestic solar PV arrays, which restricted the rebate to households earning less than $100,000. The size of the rebate was formerly $8 per watt of installed nameplate capacity, up to a maximum of $8000. The rebate will now be smaller: $5/W, up to a maximum of $7500. Sounds good, right? But it's horrendously expensive – the government is in effect paying $5/W for the cheapest, nastiest polycrystalline silicon PVs on the market.

There are scores of companies jumping on the bandwagon to sell these little 1-1.5 kW rooftop PV systems, advertising and promoting and installing them – because they’re making a fortune from the increase in business resulting from the subsidy.

The government rebate does not cover the full cost of such a system; therefore, in order to generate as much interest as possible, the vendors are trying to keep system costs as low as absolutely possible, so that the out-of-pocket cost to the customer is as small as possible. Therefore, all such systems are exclusively cheap, inefficient, basic polysilicon devices. After all, an advanced solar-concentrating collector with a high-efficiency CdTe cell, stacked heterojunction cell, sliver cell or the like does not attract any higher subsidy than the basic polycrystalline Si device.

Advocates such as the Australian Greens say that such a scheme "supports the solar industry" – but all it does is support the environmentally damaging low-cost manufacturing of polycrystalline silicon in China; it doesn't support innovation in advanced PV technology or anything like that.

What if the same amount of subsidy might be better spent elsewhere? Here’s a hypothetical idea to think about.

1. Go and find a suburb or a city or a community which has about 31,000 households. I’m certain there are 31,000 households in this country who support what I’m about to elucidate.

2. Get each household to put up AUD $1200 or so, temporarily.

3. Take that 25 million US dollars and purchase a 25 MWe Hyperion Power Module, or something similar.

4. At 25 MWe divided between 31,000 households, that's a little over 25 GJ per year, which is a little more than Australia's present average household electricity consumption. This doesn't just generate a fraction of your household electricity needs – it generates 100% of it, and there will be no more electricity bills.

5. That corresponds to a nameplate capacity of 807 watts per household. Since the government hands out a subsidy of $5/W for solar photovoltaics with a 20% capacity factor, they should hand out $22.50/W for nuclear energy with a 90% capacity factor, right?

6. Collect your $18,157.50 rebate from the government. Less the $1200 investment, that's $16,957.50 immediate profit in your pocket. This is exactly the same rate of payment per energy produced that presently exists in the form of the PV subsidy.

7. Go to the pub. Got to stimulate that economy, you know.
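The arithmetic in the steps above can be sanity-checked in a few lines. This is just a sketch using the post’s own figures; the $22.50/W rate is simply the $5/W PV subsidy scaled by the ratio of capacity factors (90% / 20% = 4.5).

```python
# Sanity check of the hypothetical rebate arithmetic above.
households = 31_000
plant_w = 25e6                            # 25 MWe Hyperion-class module
per_household_w = plant_w / households    # ~806.5 W; the post rounds to 807

seconds_per_year = 365.25 * 24 * 3600
energy_gj = per_household_w * seconds_per_year / 1e9
print(f"{energy_gj:.1f} GJ per household per year")   # 25.4 GJ

rate = 5.00 * (0.90 / 0.20)               # $22.50 per nameplate watt
rebate = 807 * rate                       # using the post's rounded 807 W
print(f"rebate ${rebate:,.2f}, profit ${rebate - 1200:,.2f}")
# rebate $18,157.50, profit $16,957.50
```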

I wonder how many ordinary Australian households would support nuclear energy if you paid them $17,000 for doing so?

To replace one Loy Yang type coal-fired power station* with solar cells, we would need 6,082,342 homes equipped with 1.5 kW solar photovoltaic arrays. With a $7,500 rebate for each one, that would cost the government 45.6 billion dollars per large coal-fired power station.

* (Loy Yang generated 15,995 GWh in 2006.)

Solar photovoltaics typically have a capacity factor of about 20%, and we’ll suppose the panels have a lifetime of, say, 30 years.
Therefore, this scheme costs the government 9.5 cents per kWh generated.

If the government purchases nuclear power plants, they will cost, say, 10 billion dollars (let’s be conservative) for a nuclear power plant with two 1100 MW nuclear power reactors which will operate with a 90% capacity factor and a lifetime of 50 years. The capital cost of plant dominates the overall cost of nuclear energy.

Therefore, the nuclear power plants would cost the government 1.15 cents per kWh – 12% of the cost of the solar rebate scheme. And that solar figure is the government’s rebate alone, without the rest of the price of these systems.
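The two cost-per-kWh figures above follow directly from the stated assumptions. A minimal sketch, counting government outlay only in each case:

```python
# Solar: $7,500 rebate x 6,082,342 homes, Loy Yang-scale output
# (15,995 GWh/yr) over an assumed 30-year panel lifetime.
solar_outlay = 6_082_342 * 7_500
solar_kwh = 15_995e6 * 30
print(f"solar:   {100 * solar_outlay / solar_kwh:.1f} c/kWh")   # 9.5

# Nuclear: $10e9 plant, 2 x 1100 MW, 90% capacity factor, 50-year life.
nuclear_kwh = 2 * 1_100e3 * 0.90 * 8766 * 50    # kW x hours
print(f"nuclear: {100 * 10e9 / nuclear_kwh:.2f} c/kWh")         # 1.15
```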

This solar rebate is just another political enterprise involving renewable energy which can’t be scaled up: it hands out free money to the public, makes a bunch of money for the solar panel vendors (including many dangerous fossil fuel vendors such as British Petroleum), and mendaciously makes the government look like they’re actively getting the country running on clean energy.

ASIDE: I’m going to start cross-posting some blog content on the Daily Kos. It’s a good site for engaging with many, many readers – many of whom perhaps aren’t already so convinced of the virtue of nuclear energy – so there’s plenty of engaging, active discussion, and the opportunity to maybe convince some people. Even if that’s just a few people, it’s still a very positive thing.

Written by Luke Weston

December 18, 2008 at 6:43 am

## Thought for the day.

Of all the G20 nations, there are only a few without nuclear power. There is only one nation among the G20 which has no nuclear power reactors, and has no active interest in implementing them.

Written by Luke Weston

November 18, 2008 at 2:50 am

## Thermodynamics, stars, uranium, life and everything: Part II

with one comment

The nuclear energy available from existing uranium deposits, from the unused energy in current inventories of used radioactive “waste”, from the heat produced by the radioactive decay of uranium, thorium and potassium deep inside the Earth (in other words, geothermal energy), and from uranium in seawater could indeed last billions of years – until the Sun evolves off the main sequence and, with that, life on this world ends.

If the energy from the Sun is “renewable”, then nuclear energy is every bit as renewable.

The concentration of uranium in seawater in the world ocean is about 3.3 parts per billion. The total mass of Earth’s hydrosphere is about 1.4×10²¹ kilograms, therefore putting the total mass of uranium in the world ocean at 4.62 billion tonnes.

Current total world demand for electricity stands at 16,330 TWh per year. Let’s conservatively suppose that, over the millennia to come, the average total world demand for electricity is four times what it is at present, or 65,320 TWh per year. Conventional LEU-fueled light-water reactors, with inefficient once-through fuel use, consume about 200 tonnes of mined uranium per gigawatt-year of electric power generation.

Hence, if we make the assumption that all the nuclear energy generation over these coming millennia is performed with this inefficient once-through LEU fuel chain and no recycling or reprocessing of nuclear fuels is performed, then the world demand for uranium can be expected to be 1.49 million tonnes per year.

Hence, consuming 1.49 million tonnes of uranium per year to supply all the world’s electricity (65,320 TWh per year corresponds to an average generation rate of about 7452 GW), the 4.62 billion tonnes of uranium presently dissolved in the ocean will supply the world’s electricity for about 3100 years.

$\mathrm{\frac{4.62 \times 10^{9}\ tonnes}{(200\ tonnes\ per\ GW\cdot year) \cdot (7452\ GW)}\ \approx\ 3100\ years}$

Here we have assumed that no use is made of efficient, advanced reactors or breeder reactors and no use is made of the excess “depleted” uranium-238 or natural thorium, no deuterium is used for nuclear fusion, and no uranium is mined on land. Such assumptions are of course ridiculous, but let’s just be as conservative as possible, for argument’s sake for the purposes of this baseline, worst-case scenario.

If we considered a truly efficient use of nuclear fuel, we may consider an efficient, advanced reactor such as a molten-salt reactor, efficiently transmuting uranium-238 into plutonium-239 in situ to generate energy. We may assume that 200 MeV of energy is released per fission event, and that the efficiency of the ²³⁸U transmutation and liberation of useful energy output from these nuclear processes within the reactor is, say, 75% overall. If we assume that this thermal energy is converted in a Brayton-cycle power plant with a thermodynamic efficiency of 50%, then we can work out the amount of natural uranium required to fuel the reactor.

$\mathrm{\frac{1\ GW\ \cdot\ 1\ year\ \cdot\ 238\ u\ \cdot\ 1.66 \times 10^{-24}\ grams/u}{200\ MeV\ \cdot\ 75\% \cdot\ 50\%}\ =\ 1.04\ tonnes}$

Just over one tonne of natural uranium is required to generate one gigawatt-year of energy. (That number is basically the same, incidentally, if we’re looking at efficiently burning thorium in a MSR.) If we utilised nuclear energy efficiently, like this, then the 4.62 billion tonnes of uranium presently dissolved in the ocean would supply the demand discussed above, 65,320 TWh per year, for just under an astonishing 600,000 years!

$\mathrm{\frac{4.62 \times 10^{9}\ tonnes}{(1.038\ tonnes\ per\ GW\cdot year) \cdot (7452\ GW)}\ \approx\ 597{,}000\ years}$
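The one-tonne-per-gigawatt-year figure can be re-derived numerically from the stated assumptions (200 MeV per fission, 75% utilisation, 50% thermal-to-electric efficiency); this is just a check of the arithmetic above, not a reactor design claim:

```python
# Re-deriving the ~1 tonne of natural uranium per GW-year figure.
MEV_TO_J = 1.602e-13                     # joules per MeV
AMU_TO_G = 1.66e-24                      # grams per atomic mass unit
GW_YEAR_J = 1e9 * 365.25 * 24 * 3600     # joules in one gigawatt-year

useful_j_per_fission = 200 * MEV_TO_J * 0.75 * 0.50
atoms_fissioned = GW_YEAR_J / useful_j_per_fission
tonnes_u = atoms_fissioned * 238 * AMU_TO_G / 1e6   # grams -> tonnes
print(f"{tonnes_u:.2f} tonnes of U per GW-year")    # 1.04
```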

However, we are not finished yet. Elution of uranium from the Earth’s crust into the ocean occurs on an ongoing basis, adding about 3.24×10⁴ tonnes of uranium to the ocean annually.

Cohen* argued that we could recover uranium from seawater at perhaps half of that rate: 16,000 tonnes of uranium per year. This quantity of uranium would supply 15.4 TW of electric power, if used efficiently as outlined above. In order to supply 65,320 TWh of electricity per year – four times the current worldwide demand for electricity – we only require about 7750 tonnes of uranium per year, less than half that figure of 16,000 tonnes.

[* Many of you will be familiar with Cohen's work, but if you are not, that book is highly recommended.]

Cohen argues that given the geophysical cycles of erosion, subduction and uplift, the uranium elution into the oceans would last for five billion years, at a rate of withdrawal of 6500 tonnes per year. At a rate of consumption of 7750 tonnes per year, in the absence of the use of any uranium and thorium mined on the crust, or the use of deuterium for nuclear fusion, the uranium from the oceans alone can be expected to meet world demand for electricity, at 65,320 TWh of electricity per year, for 4.2 billion years. Over a timeframe on the order of 10⁹ years, of course, some non-trivial fraction will be lost, simply due to radioactive decay – however, at the same time, we have not even begun to consider the use of uranium and thorium reserves in the crust, or the use of the vast supply of deuterium as an energy source.
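Pulling the seawater-uranium arithmetic from the preceding paragraphs together in one short sketch (all inputs are the figures quoted above; 8766 hours per year converts the 65,320 TWh/yr demand into an average rate of roughly 7452 GW):

```python
u_tonnes = 1.4e21 * 3.3e-9 / 1000        # ocean inventory: 4.62e9 tonnes
demand_gw = 65_320e3 / 8766              # ~7452 GW average demand

print(round(u_tonnes / (200 * demand_gw)))    # once-through LWRs: 3100 yr

need_t_per_yr = 1.04 * demand_gw         # efficient full burn
print(round(need_t_per_yr))              # ~7750 t/yr vs 16,000 recoverable

years = 6_500 * 5e9 / need_t_per_yr      # elution-limited resource
print(round(years / 1e9, 1))             # 4.2 billion years
```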

Clearly nuclear energy remains a viable resource on the Earth for a time scale of approximately five billion years – these nuclear fuels will not be consumed or depleted over a timeframe comparable to the life of the sun on the main sequence. Just as the finite hydrogen within the core of the Sun is a “renewable” energy resource, so too is the finite resource of terrestrial nuclear energy an equally renewable energy resource.

However, there is one final point we have overlooked. Even during its life in the main sequence, the Sun is evolving, as with all such stars. The Sun is gradually increasing in luminosity, by about 10% every one billion years, and its surface temperature is correspondingly slowly rising. This increase in the luminosity of the sun is such that in about one billion years, the surface temperature of the Earth will have become permanently too high for liquid water to exist, the oceans will evaporate and a catastrophe of the most immense proportions imaginable will overtake our planet. The Aztecs foretold a time ‘when the Earth has become tired… when the seed of Earth has ended’. All life on Earth will be extinguished, billions of years before the nuclear fuels are depleted.

In the meantime, our descendants will have evolved into something quite different, as far divergent from us in evolutionary terms as we are from the simplest one-celled organisms to have existed on the Earth. If they still inhabit the Earth, our descendants will leave, perhaps to Mars, or to the moons of the gas giants, Europa, perhaps, rich in water and perhaps not dissimilar to Earth if warmed up a little, or perhaps to a younger, more distant world, orbiting a younger star, around which their civilization will flourish once more.

Written by Luke Weston

October 12, 2008 at 1:12 pm

## Green Heretic: environmentalist Mark Lynas investigates nuclear energy.

“Card-carrying” UK Green and climate change expert Mark Lynas has been scorned by eco-colleagues for that greatest of big-G Green heresies: daring to investigate and discuss nuclear energy in a rational, fact-driven way.

> “Except, well, I don’t believe that any more. Just a month ago I had a Damascene conversion: the Green case against nuclear power is based largely on myth and dogma.”

This investigation of what Lynas has learned, how he’s thought about it, how his views have changed, and how people have responded to him for it, is very much worth reading.

I was going to back up Lynas’ positions by posting this on the discussion thread on his website – but it’s a bit long and I don’t think they will post it, so I’ll post the following here instead.

I will cite or respond to a number of previous comment posts, in chronological order as you read down through the thread:

Written by Luke Weston

October 3, 2008 at 12:55 am

## Nuclear fuel recycling in the United States

Earlier this month an editorial was posted on GreenvilleOnline.com titled Nuclear reprocessing is risky and impractical, laying out the case against recycling of nuclear fuels (or at least the case against conventional methods for recycling of conventional nuclear fuels). (Thanks to Atomic Insights for the story tip.)

The editorial states:

Nuclear reprocessing separates plutonium from radioactive waste so that it can be reused to generate additional energy. However, reprocessing also has an unfortunate side effect: It dramatically increases the volume of radioactive waste.

Of course, if the alternative to nuclear fuel recycling is to take all the used fuel and label it as supposed “waste” material – and that is indeed the alternative – then recycling of the nuclear fuel self-evidently reduces the volume of material that is considered “waste”.

Typical used fuel from a typical LWR with LEU fuel consists of approximately 96% uranium (uranium-238 and uranium-235, chemically unchanged from the original fuel), about 3% fission-product nuclides, about 1% plutonium – roughly half of which is plutonium-239, the other half comprising other plutonium nuclides – and small trace quantities of other actinides, including a little U-236, U-232, Np-237, Am-241 and what have you.

Even if nuclear reprocessing involves only taking the uranium from that nuclear fuel, then immediately, with uranium separation alone, you’ve removed 96% of the mass of the radioactive “waste” that you need to deal with – and that’s without any consideration of the valuable, useful materials which constitute the other four percent.

If “nuclear waste” is such a terrible concern, then the first thing that should be done is to make sure we’re not wasting it.

The separation of plutonium is not necessary in any way for the use of nuclear energy, nor is it required at any point for the efficient recycling of used uranium fuels. The separation of plutonium, contrary to popular belief, is not the point of nuclear fuel recycling. Separation of plutonium is an integral part of nuclear weapons building, and it is technologies developed for that latter purpose which have, historically, been applied to the recycling of power reactor fuels.

To construct a nuclear fission weapon from plutonium does indeed require the chemical separation of pure plutonium from uranium irradiated within a nuclear reactor – but that’s the only thing that requires separation of plutonium. This is why separation of plutonium, or the possibility of it, seems to be viewed with distrust and suspicion, especially at the Savannah River Site, perhaps, given its historical mission of the production of weaponisable plutonium via nuclear reactors and PUREX extraction.

Even if you want to use plutonium from used civilian reactor fuel efficiently, and recycle it back into the recycled nuclear fuel, where it serves as a potent, valuable energy source, chemical separation of plutonium is not needed. Even though most established, mature efforts for the recycling of nuclear fuels at the industrial scale involve the PUREX process, which was designed and established specifically to support the production of separated plutonium for nuclear weapons, there is no reason why this process is essential at all. It’s quite straightforward to modify the chemistry of the solvent extraction process so that the plutonium is kept combined with the other actinides, so that this material can be recycled into new nuclear fuel without any material being produced that presents any proliferation risk. That is what is done with the COEX or DIAMEX chemical processes, and what can be done even better via pyroprocessing or in-situ separation of nuclear poisons in a molten salt reactor.

Even if the potential for diversion of weaponisable plutonium was considered so grave that we insisted on taking the plutonium and disposing of it in some kind of deep geological repository, this would only constitute 1% of the fuel – so we wouldn’t be losing much of the fuel, really. However, plutonium-239 is a moderately long-lived nuclide – with a half-life of about 24,100 years, it doesn’t just go away overnight if put in a geological repository. So, in decades to come, the material could still be removed, and weaponised.

The only proper way to get rid of plutonium, if you’re really concerned about nuclear weapons proliferation, is to fission it in a nuclear reactor – and, lo and behold, you get plenty of clean, safe energy to boot, at the same time.

> According to the Union of Concerned Scientists, “After reprocessing … the total volume of nuclear waste will have been increased by a factor of twenty or more ….”

Of course, that’s simply absurd. What sort of definition of reprocessing are they using? What evidence is provided for such a claim?

> For instance, discharges of iodine-129, a very long-lived carcinogen, have contaminated the shores of Denmark and Norway at levels 1,000 times higher than nuclear weapons fallout.

Well, does that tell us anything? What is the background dose rate to the public as a result of the nuclear weapons fallout, and what is the contribution added to the dose rate to the public as a result of nuclear fuel reprocessing?

> Health studies indicate that significant excess childhood cancers have occurred near French and English reprocessing plants.

Is there any peer-reviewed, scientifically motivated, literature which demonstrates the existence of such excess childhood cancers, and demonstrates, or even reasonably motivates, a causal connection between the two?

> In 2003, for example, researchers from Harvard’s Kennedy School of Government said that reprocessing costs more than twice as much as safe, on-site interim storage of nuclear waste.

The report cited, from the Belfer Center for Science & International Affairs at Harvard University, The Economics of Reprocessing vs. Direct Disposal of Spent Nuclear Fuel, states:

> At a uranium price of $40/kgU (comparable to current prices), reprocessing and recycling at a reprocessing price of $1000/kgHM would increase the cost of nuclear electricity by 1.3 mills/kWh. Since the total back-end cost for the direct disposal is in the range of 1.5 mills/kWh, this represents more than an 80% increase in the costs attributable to spent fuel management (after taking account of appropriate credits or charges for recovered plutonium and uranium from reprocessing).

Furthermore, the editorial’s authors continue with much the same assertion:

> In 2007, the National Academies of Science (NAS) noted that no reprocessing technology currently on the table “is at a stage of reliability and understanding that would justify commercial-scale construction” and the report therefore concluded “there is no economic justification for going forward with this program at anything approaching a commercial scale.”

> The nuclear industry has reached a similar conclusion. A 2007 report by the Keystone Center, underwritten by various utility companies, said “reprocessing of spent fuel will not be cost-effective in the foreseeable future.”

The “reprocessing is not economically competitive” argument basically boils down to the idea that recycling the used fuel is more expensive than the inefficient, once-through use of newly mined uranium.

People are frequently concerned about the environmental intensiveness of uranium mining and the handling of radioactive wastes from nuclear power – and yet recycling and efficient re-use of nuclear fuels minimise the requirement for both of these things. To me, the argument against recycling because recycling costs more is ridiculous, and it’s essentially equivalent to eschewing the use of alternative energy systems in favor of more coal, because coal is cheaper.

Written by Luke Weston

September 16, 2008 at 11:49 am