Posts Tagged ‘nuclear energy’
I’m going to focus this post on radiation dosimetry – because radiation dosimetry is what really matters in terms of deciding whether anybody can actually get hurt. So far, nobody around Fukushima has been hurt by radioactivity, although of course tens of thousands are still dead or missing because of this great tragedy.
It doesn’t matter what you do or don’t do to the reactors or the used fuel, or what condition they’re in – at the end of the day, the radiation dose to the public is how we measure the effect of this incident on the public and its potential for harm.
To be honest, I’m really not concerned much with what the dose rates are in the plant itself.
The men and women who work there understand dose rates and health physics quite well. They routinely work in areas of elevated above-background dose, and they know how to work safely in those environments. They understand how to measure and quantify the radiation field in the working environment, and the accumulated doses that they’re personally receiving.
They understand how to manage shielding, exposure time, radiation measurement and dosimetry in order to get the work done safely and effectively.
Even with abnormally elevated radiation fields in some areas as a result of these incidents, they still know how to work safely. If the dose rate in some particular area is so highly elevated that it cannot be entered safely for any length of time at all, then they won't be entering it.
It’s pointless to scare the public with elevated on-site dose rate measurements. They’re not working on the site. Leave that for the people with health physics training. I’m much more interested in off-site dose rate measurements, personally, as those are the measurements that are actually of relevance to the public.
Off-site radiological dose rates
The KEK accelerator physics complex in Tsukuba (165 km from Fukushima) has a webpage showing their real-time measurements of the environmental gamma dose rate (counted with a GM tube). At the time I write this, they're measuring 0.17 μSv/h, which has been fairly constant over the last few days, apart from a brief, narrow spike up to 0.6 μSv/h observed on the 16th.
(NOTE: Just to avoid any ambiguity, "m" means milli, 10⁻³. "μ", or "u" if you don't have access to the Greek alphabet in your software, means micro, 10⁻⁶. And when I say milli I'm quite sure I mean milli, and when I say micro I'm quite sure I mean micro. Because these terms are important, I personally make damn sure to get them right.)
Each year, a resident of the United States receives an average total dose from background radiation of about 3.1 mSv. This is the radiation dose from natural background sources; from natural radioactivity in the Earth, and cosmic rays from space. That’s equal to 0.354 μSv/h.
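For anyone who wants to check that conversion themselves, it's one line of arithmetic — a quick sketch in Python (the function name is just mine, for illustration):

```python
# Convert an annual dose (mSv/year) into an average dose rate (uSv/h).
HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours

def annual_msv_to_usv_per_hour(msv_per_year):
    """mSv/year -> uSv/h; 1 mSv = 1000 uSv."""
    return msv_per_year * 1000.0 / HOURS_PER_YEAR

rate = annual_msv_to_usv_per_hour(3.1)
print(f"{rate:.3f} uSv/h")  # ~0.354 uSv/h
```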
In practice, the average dose that a person receives each year in the United States is significantly higher than that natural background dose, about twice that, once you’ve added on the dose from medical imaging and things like that.
The radiation dose rate being measured in Tsukuba right now, after the Fukushima accident, is less than half of the average natural background radiation dose rate that a person receives in the United States. This includes all sources of radiation in Tsukuba, including natural geological radioactivity, cosmic radiation, and any radioactivity released at Fukushima, as well as any ionising radiation from the particle accelerators at KEK, which is what these sensors are actually intended to monitor.
That brief, narrow spike seen in the radiation field measured at KEK doesn’t really concern me. The radiation dose you’ll receive if you hang around in that area for an extended period of time is the area under the graph – the integral – over that period of time. For such a short, sharp spike, the overall potential dose is still quite small.
To quantify the potential harm from a significant release of radioactivity, it would make more sense to "filter" that dose-rate data from the detector as a rolling average, making it much more straightforward to interpret the potential to receive any significant radiation dose.
KEK is also measuring the concentration of 131I and the short-lived fission product 132Te in the atmosphere and reporting regular updates to this data online. The concentrations we’re looking at here are extremely small – on the order of 10 microbecquerels per cubic centimeter – but they are concentrations which they are able to accurately measure at KEK, using a high-volume air sampler and a high-purity germanium gamma-ray spectrometer.
The environmental gamma-ray dose rate measured in Tokyo between 11 pm and midnight on March 18th, averaged over that hour, was 0.0471 μSv/h. This radiological monitor in Tokyo returned its highest reading yet on the 16th, from 05:00 to 05:59, at a dose rate of 0.143 μSv/h.
So, that most recent figure from Tokyo is 13% of the average natural background radiation dose rate in the United States. One banana dose is something like 0.1 μSv, so what we’re measuring in Tokyo at the moment comes in at just under 0.5 banana per hour. (One banana per hour, and you’re going to triple that dose rate.)
The highest figure measured at all in recent days, 0.143 μSv/h, is equal to 40% of the average natural background in the United States.
The radiation level in Saitama, outside Tokyo, is also being recorded and charted on the web. As of 21:00 on the 18th of March, they report a dose rate of 0.058 μSv/h. The maximum value reported, at 1 am on March 17, is 0.067 μSv/h. These figures are 16% and 19% of US natural background, respectively.
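If you want to reproduce those percentages (using the 0.354 μSv/h US natural background figure from above):

```python
US_BACKGROUND = 0.354  # uSv/h, from 3.1 mSv/year of natural background

# Reported environmental dose rates (uSv/h):
measurements = {
    "Tokyo, 18 Mar 23:00-24:00": 0.0471,
    "Tokyo, 16 Mar peak":        0.143,
    "Saitama, 18 Mar 21:00":     0.058,
    "Saitama, 17 Mar peak":      0.067,
}

for label, rate in measurements.items():
    pct = 100.0 * rate / US_BACKGROUND
    print(f"{label}: {rate} uSv/h = {pct:.0f}% of US natural background")
```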
Two things are apparent from this data.
(a) Japan has very low levels of natural background radiation compared to the continental United States. (This is interesting in itself! It's probably a combination of low elevation, which means more atmospheric shielding from cosmic rays, and a relatively low abundance of uranium and its daughter products in the ground.)
(b) The ionising radiation dose rate that people receive across Japan has not been elevated significantly at all, at least outside the immediate vicinity of the plant, as a result of the Fukushima damage.
(ASIDE: If you can’t read Japanese – I can’t – a little bit of Google’s automatic translation goes a long way in helping you sort through this important data.)
If we look at the 5 monitoring sites closest to the 30 km radius marked on the chart, the last three measurements for each of those sites are 52, 52, 52; 140, 140, 150; 40, 45, 45; 8.5, 9.0, 8.7; and 1.6, 1.6, 2.0 μSv/h.
This tells us that there is detectable radioactivity which is moving in a narrow plume in the atmosphere – it is not distributed out isotropically, which is indeed exactly what you would expect from thinking about the meteorology.
This chart of compiled radiation measurements also tells us a very similar story.
At that 30 km radius, the average dose rate from that monitoring station which reports the high outlier values – the one corresponding to the location of the plume – is 143 μSv/h.
I wonder what radionuclides are present in that plume? The presence of 131I, 132Te and 133Xe would tell us that this radioactivity has come from a reactor; the absence of these short-lived fission products would tell us it has come from used fuel in the pool. A little bit of gamma spectroscopy, and we would have the answers.
The presence of these radionuclides as measured at KEK confirms that at least a tiny bit of radioactivity has been released from the reactors themselves.
That dose rate in the plume, 143 μSv/h, is fairly high, but it's not obviously high enough to hurt people. If you stood at the location of that plume for an entire week, you would receive about 24 mSv — a dose figure consistent with a relatively-high-dose nuclear imaging procedure, such as using 201Tl to image a tumour.
If we remove those three outliers corresponding to the plume location from the above set of numbers and we take the mean of the remaining values, this gives us a rough idea of the mean dose rate elsewhere along the 30 km radius, outside the location where the source term of radioactivity is passing in a plume of wind. That mean value, then, is 26 μSv/h.
If you were standing in that radiation field, 26 μSv/h, for five hours per day every day for a year, you would reach a total annual dose of 47 mSv, which is just below the allowed occupational radiation dose — above natural and non-occupational background — of 50 mSv per year, for people working around radioactivity, such as nuclear power plant employees. (This is the limit set in the United States by the NRC; I'm not sure what the corresponding dose limit is in Japan, but it will be something loosely similar.)
But it’s well worth remembering that that radioactivity that is present now, in very low levels, will not be sticking around for a whole year. It is dispersing rapidly, and it drops away exponentially as you move away from the Fukushima site. As we move further out from the 30 km radius marked on that map, the dose rates recorded are all at harmless levels, consistent with background radiation dose rates experienced by people in the United States and elsewhere across the world.
In Ramsar, Iran, the natural background radiation dose rate is unusually high, at 260 mSv per year in some places. That is 30 μSv/h, which is higher than the mean value of about 26 μSv/h measured at these monitoring stations 30 km west of Fukushima, as I described above.
The people of Ramsar experience a background radiation dose significantly above that which most other people across the world experience – but they do not seem to experience any ill health effects at all from this.
I hope all the above helps to put these dose rates in context.
The composition of the radionuclides that are responsible for most of the radioactivity in used nuclear fuel that has been stored in a cooling pool for a few months is very different than in nuclear fuel in a reactor that is operating, or has just been shut down.
Fuel straight out of an operating reactor contains a number of rather short lived, rather high specific activity fission product radionuclides which are of the largest health physics significance in the time immediately following severe reactor accidents.
Some of these short-lived fission products include iodine-131, xenon-133, xenon-135, tellurium-131, tellurium-132, and ruthenium-105. These short-lived fission products were very significant contributions to people’s radiation doses in the environment around Chernobyl in the time immediately following the Chernobyl disaster, for example, when they were dispersed from “hot” nuclear fuel from the reactor.
However, they are not present to any significant level in stored nuclear fuel, because they decay away relatively fast, and they cannot contribute any significant source term into the environment in some sort of accident scenario involving used nuclear fuel which has been stored for a month or three post-defueling.
So, what radionuclides are present in stored fuel? The main ones of interest here are the longer-lived fission products. 137Cs, 85Kr and 90Sr are the most significant ones. Of these, 85Kr is a gas, and has the most potential to be readily released from the fuel into the atmosphere. 137Cs accounts for most of the radioactivity of the used nuclear fuel, and it is usually the most feared radionuclide in the used fuel inventory, in terms of the potential source term released from an accident with a used-fuel pool.
Fuel-pool water evaporation
“In this house, we obey the laws of thermodynamics!”
— Homer Simpson
With the used fuel heating the water in the Unit 4 fuel transfer pool, how long will it take for all the water to be boiled away? It is actually possible to know this, without any real speculation. The physics is pretty simple.
We know that there is a full core-load of used fuel in the Unit 4 defueling pool, which was put there after the reactor was shut down for inspection on November 30.
(Assumption I’ve made here which may possibly be wrong: That that single core-load of fuel is the only fuel in the pool. Is there additional fuel in the pool? If there is, somebody needs to tell me how much and how long it has been cooling for, so we can re-run the numbers – or you can take what I’ve explained here and re-calculate it for yourself.)
That fuel has been cooling for the last 3.5 months, or approximately 9.2 × 10⁶ s.
After this time, the radiothermal power output of the used fuel is small. Looking at this decay heat chart, we read that the decay heat is approximately 2 MWt. However, this chart is for a power reactor with a thermal power rating of 3000 MW. (And I’ve done a not-really-precise job of eyeballing that chart.)
But the Fukushima-I Unit 4 reactor has an electrical power capacity of 784 MW; that’s about 2352 MW thermal. So, we need to scale back the above figure commensurately; it’s approximately 1.6 MWt, from the entire core load of fuel.
The latent heat of vaporisation of water, at 100 C, is 2260 kJ/kg. (Let’s assume, conservatively, that the water in the pool is boiling; it’s at 100 degrees C, and the only route of energy dissipation from the system is through vaporisation of the water. This also assumes that none of the energy released is stored in the water by means of a rise in the water’s temperature, because it’s already at boiling point, and that there is no functioning mechanism for otherwise cooling that water.)
1.6 MW / 2260 kJ/kg ≈ 0.7 kg/s — and at about 1 kg per litre, that's 0.7 litres of water boiled away per second, or 61 cubic metres per day.
The used fuel pool at Vermont Yankee, which is also a GE BWR-4, is 40 feet long, 26 feet wide and 39 feet deep, and is normally filled with 35,000 cubic feet of water. I will make an assumption that the Fukushima I Unit 4 used fuel pool has the same dimensions.
The level of water in the used fuel pool is normally 16 feet above the top of the fuel assemblies. With the water level evaporating at the rate described above, the water level will drop by 2 feet per day.
Uncovery of the fuel assemblies will take eight days. (Beginning from the point where the water level reached boiling point, after active cooling ceased.)
(Working assumption which you may subject to some skepticism: That there is no form of leakage or other water loss pathway from the used fuel pool.)
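The whole back-of-envelope calculation fits in a few lines of Python, using the figures above (with the Vermont Yankee pool dimensions standing in for Unit 4, as assumed):

```python
# Boil-off estimate for the Unit 4 used fuel pool.
FT_TO_M = 0.3048

# Decay heat: ~2 MWt read off the chart for a 3000 MWt core,
# scaled to Unit 4's ~2352 MWt rating:
decay_heat_W = 2.0e6 * 2352.0 / 3000.0  # ~1.6 MWt

L_VAP = 2.260e6  # J/kg, latent heat of vaporisation of water at 100 C
boil_rate_kg_s = decay_heat_W / L_VAP               # ~0.7 kg/s, i.e. ~0.7 L/s
boil_rate_m3_day = boil_rate_kg_s * 86400 / 1000.0  # ~60 m3/day

pool_area_m2 = (40 * FT_TO_M) * (26 * FT_TO_M)  # 40 ft x 26 ft pool
drop_ft_per_day = (boil_rate_m3_day / pool_area_m2) / FT_TO_M  # ~2 ft/day

days_to_uncover = 16.0 / drop_ft_per_day  # 16 ft of water above the fuel
print(f"{boil_rate_kg_s:.2f} kg/s, {drop_ft_per_day:.1f} ft/day, "
      f"uncovery in ~{days_to_uncover:.0f} days")
```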
It seems plausible that a fire hose or something can be used to add water to the pool at a rate equal to this loss rate of 700 ml per second. The Chernobyl-style helicopter drops seem like overkill, and they are doing an effective job of whipping up “Chernobyl again” fear, and images that the newspapers are having a field day with.
Well, long time no post. I hope all my readers are well.
So, apparently today is something called "Blog Action Day", and this year the topic of interest is anthropogenic forcing of the climate system, and mitigating the potential thereof.
So, OK, I thought I’ll write a blog post about it. The day is supposed to be about action, as the name suggests, so let’s talk about specific actions, with a view towards making a significant mitigation, in a realistic way, of Australia’s anthropogenic carbon dioxide emissions.
Australia’s brown coal (lignite) fired electricity generators have by far the highest specific carbon dioxide emissions intensity per unit of electrical energy generated, since they’re burning relatively high moisture brown coal. They are the most concentrated point contributors to the anthropogenic GHG output. Therefore, these are the “low-hanging fruit” – a very valuable target to look at first and foremost if we want to make the greatest realistic mitigation of the country’s carbon dioxide emissions in a practical way, followed by black coal-fired generators.
Australia’s total net greenhouse gas emissions in 2006 were 549.9 million tonnes of CO2 equivalent.
If we look at the three main sets of lignite-fired generators in the Latrobe valley in Victoria, they represent a very concentrated point source of CO2 output, so they’re a very good case to focus on specifically.
In 2006, Hazelwood generated 11.6 TWh of electrical energy, and 16,149,398 tonnes of carbon dioxide to atmosphere.
In 2006, Loy Yang A generated 15.994 TWh of electrical energy sent out to the grid and 19,326,812 tonnes of carbon dioxide to atmosphere.
I’ll exclude Loy Yang B from this list for the moment, since its numbers are eluding me.
In 2006, the Yallourn power station generated 10.392 TWh of electrical energy sent out to the grid and 14,680,000 tonnes of carbon dioxide to atmosphere.
If you look at the total contribution of just those three brown-coal-fired plants combined, you're looking at 9.12 percent of Australia's total anthropogenic carbon dioxide emissions. If you replace those with clean technology that can deliver an equivalent electricity output, you get a 9.12 percent reduction in Australia's CO2 emissions. (When you include Loy Yang B, I think it's approximately 11-12%.)
That’s not a bad target for Australia to implement for the relatively short term for a real reduction in CO2 emissions. It can actually be done, if the real political will exists to do it.
Now, I’m not interested in this “100% renewable energy by 2020” business from the extremist any-excuse-for-a-protest Socialist Alternative set, because it is nonsense.
Replacing all the coal-fired and gas-fired generators in this country inside 10 years (and presumably only using wind turbines and solar cells, not nuclear energy of course since it doesn’t fit their para-religious ideology)? That’s complete bullshit, of course, because in the real world it cannot be done.
There's a difference between setting a challenging target and setting a nonsense target — unless, of course, you're staging a political bullshit stunt instead of actually trying to hit your targets.
Of course, you don’t just close down the coal-fired generators. You’ve actually got to build their clean replacements first. So what do you use that can realistically replace a coal-fired power station? Nuclear power, of course.
Now, again, to be realistic, we probably can’t build LFTR/MSR, PBMR/HTGR, IFR/PRISM or any kind of nuclear fusion based generation capacity on a large scale to generate grid-connected energy right now. That’s not to say that pilot-scale research and development on those very cool technologies shouldn’t continue, but right now, getting more nuclear energy on the grid means advanced light water reactors – or maybe heavy water CANDU-type things, or conventional sodium-cooled fast reactors maybe. The most practical thing for serious deployment in the relatively short term is advanced LWR technology. In the slightly longer term, there is certainly a place to be encouraging both Gen. IV and fusion.
To get the same amount of energy as the total output from those coal plants, as above, which we’re talking about replacing, we need 4.56 GW of installed nuclear capacity, assuming a 95% capacity factor.
With 4 x 1154 MWe Westinghouse AP1000s, with a 95% capacity factor, you’ve got 4.62 GW, which is a little more than what’s needed.
You can easily have four nuclear power reactors integrated into one nuclear power plant.
Now, how much does it cost?
On March 27, 2008, South Carolina Electric & Gas applied to the Nuclear Regulatory Commission for a COL to build two AP1000s at the Virgil C. Summer nuclear power plant in South Carolina. On May 27, 2008, SCE&G and Santee Cooper announced that an engineering, procurement, and construction contract had been reached with Westinghouse. Costs are estimated to be approximately $9.8 billion for both AP1000 units, plus transmission facility and financing costs.
That gives you an idea of how much a nuclear power plant costs today, in the current financial environment, in the current regulatory environment.
If we double that figure of USD $9.8 billion — four units instead of two — we get USD $19.6 billion, or about AUD $21.4 billion. There will be some saving, since we're considering building four reactors at one plant, not two independent two-reactor plants.
How much that saving will be, quantitatively, I don’t really know. If the cost is reduced by 30%, we’re looking at 15 billion Australian dollars.
How long would it take? If the real political will exists to do it, 10 years is heaps of time. We could probably do even more in that timeframe if we really, really wanted to. AP1000 construction takes 36 months from first concrete poured to fuel load, if you ignore any political protest rubbish.
This is really just a base-line relatively achievable “base case”. After this decade, of course, the rate of nuclear power deployment – and related GHG emissions mitigation – could foreseeably accelerate.
What about the uranium input? About 600 tonnes of natural uranium per year total, for all four reactors. Australia's present production, off the top of my head, is something like 10,000-11,000 tonnes. Australia's present uranium production could very, very easily fuel Australia's total electricity production without any expansion of uranium mining — and that's even with the inefficient once-through use of low-enriched uranium in conventional LWRs.
What about the so-called “waste”?
Roughly 80-85 tonnes of used uranium fuel per year. About 96% of that is unchanged uranium, so roughly 76.8 tonnes of uranium can be separated and re-used. It's just uranium, so it's not going to hurt you.
The remaining 3200 kg is made up of the valuable, interesting and unique byproduct materials from a nuclear reactor — unique resources, not all of them radioactive, with all kinds of different technological applications, which you cannot get anywhere else.
Anyway, that’s one scenario which I happen to think has a lot of merit.
Maybe you don’t agree – but if you don’t agree, I’d love to see you elucidate an alternative scenario which can deliver the equivalent greenhouse gas emissions mitigation – shown to be accurate in a quantitative way – within a comparable timeframe and within a comparable cost.
It will not be inexpensive, and it will not happen overnight – but I have yet to see any scenario which can honestly do the same job faster and cheaper, when some real quantitative analysis is applied.
ABC Unleashed has recently featured an article by environmentalist Geoffrey Russell; Rethinking Nuclear Power.
It’s worth reading.
I like the idea of closing down uranium mines, and using existing stocks of mined uranium efficiently.
Uranium mining is far less environmentally intensive than mining coal, of course, but it’s basically inevitable that all mining is fairly environmentally intensive, and it’s always an appealing prospect if we can mine less material (whilst still maintaining our energy supplies and our standards of living, of course.)
I have to admit, when I first saw Geoff’s claim that we could completely eliminate uranium mining, I was skeptical. So I took a more detailed look.
A nuclear reactor which is efficiently consuming uranium-238 and driving a relatively high efficiency engine (typically, a Brayton-cycle gas turbine) will require approximately one tonne of uranium input for one gigawatt-year of energy output. This high-efficiency use of U-238 would be best realised in something like an IFR or a liquid-chloride-salt reactor (the latter is essentially the fast-neutron, uranium-fuelled variant of a LFTR). This figure of one tonne of input fertile fuel per gigawatt-year is also comparable for the efficient use of thorium in a LFTR.
There are about one million tonnes of already-mined, refined uranium in the world — the so-called "depleted uranium" — just sitting around waiting to be put to use.
According to one source, the exact worldwide inventory of depleted uranium is 1,188,273 tonnes.
The total electricity production across the world today is about 19.02 trillion kWh.
Therefore, total worldwide stocks of depleted uranium, used efficiently in fast reactors, could provide every bit of worldwide electricity production for about 550 years.
That’s not forever, but it’s a surprisingly long time. And that’s just “depleted uranium” stocks; not including the stocks of HEU and plutonium from the arsenals of the Cold War, and not including the large stockpile of uranium and plutonium that exists in the form of “used” LWR fuel.
I know some thorium proponents aren't going to like this; but there's a strong case to be made here that uranium-238 based nuclear energy has a clear advantage over thorium, simply because of these huge stockpiles of already-mined uranium, for which there exists no comparable thorium resource already mined. The 3,200 tonnes of thorium nitrate at NTS is tiny compared to the uranium "waste" stockpile, but they're both really useful energy resources which can replace the need for more mining.
Any type of breeder or burner reactor utilising 238U, or 232Th, as fuel requires an initial charge of fissile material to “kindle” it; however this requirement for fissile material is quite small; and personally, I think the inventories of HEU and weapons-grade plutonium recovered from the gradual dismantlement of the arsenals of the Cold War are perfectly suited to this purpose – destroying those weapons materials, whilst putting them to a valuable use.
Then again, with the means to completely replace the use of coal and fossil fuels in a way that requires very little or no uranium mining, I really hope the rest of the world keeps buying that iron and copper and bauxite. Alternatively, we’re going to have to start developing a more technologically based economy in this country to make up the reduction in exports of these commodities – perhaps developing and selling reactor technology?
Developing uranium enrichment technology, such as SILEX, is of limited usefulness because the relatively inefficient thermal-neutron fission of 235U, and hence the need for enrichment, will not supply any large portion of world energy demand in a sustainable fashion over the long term. The small amount of 235U in nature is of limited significance, over the long term.
Alternatively, perhaps a shake up of agriculture, using extensive desalination to supply fresh water requirements, might be used to replace Australia’s income from coal and uranium. I’m not sure.
Tip o' the hat to Barry at Brave New Climate for pointing out this article.
(Source: the World Factbook, 2008 ed. — jokes about the integrity of the CIA's intelligence aside…)
It has been announced this week that the Victorian Government will promote renewable energy by spending $100 million to establish a new regional solar power station, subject to the Federal Government matching its commitment.
Premier John Brumby will announce both initiatives today, focusing on the plan for a solar plant generating 330 gigawatt-hours per year, with the capacity to power the equivalent of 50,000 homes.
All right. More kumbaya and rainbows and sunshine courtesy of Brumby.
This proposed new solar power station will supposedly generate 330 gigawatt-hours of electrical energy per year. (The Age article originally mentioned a “330 gigawatt” plant, but they later caught the egregious mistake and edited it.)
How much energy is that?
In 2006, Loy Yang unit A in Victoria generated 15,995 GWh of electrical energy, sent to the grid.
(In doing so, it emitted 19,314,994 tonnes of CO2 equivalent, and a whole lot of other environmentally and aetiologically nasty, dangerous, toxic waste, such as fly ash, SO2 and NO2, as well.) That’s just one example of one of the coal-fired generators, of course.
Therefore, this proposed solar power station would generate about 2 percent of the output of that one single coal-fired generating station (330 GWh out of 15,995 GWh).
How much will this plant cost? We don’t know. The article doesn’t say, nor does Brumby’s original press release. We don’t know how much it costs, and I doubt Brumby knows, either.
…promote renewable energy by spending $100 million to establish a new regional solar power station, subject to the Federal Government matching its commitment.
OK… we know that it costs at least $200 million. There is actually a convenient benchmark which we can use to make an estimate of how much the whole project will actually cost, and that is the $420 million solar energy installation planned by Solar Systems for northwestern Victoria. This is another expensive solar energy project that the Victorian government just loves to talk about as a poster child for their clean, green ways.
The Solar Systems project, with 154 MW of nameplate capacity, will generate 270 GWh per annum, and will cost 420 million dollars. If we assume that the newly proposed 330 GWh/annum installation might cost about the same, for a given amount of capacity, then we can expect that it will cost 513 million dollars.
To replace Loy Yang A, to have the equivalent amount of energy generation, you’d need 49 such installations of this size, at a cost of approximately 25 billion dollars to construct.
If you build a modern* nuclear power plant, with two 1100 MWe reactors operating with a 90% capacity factor, the plant will generate about 17,356 GWh per annum. That is, such a plant will replace Loy Yang A’s output about 1.09 times over; it’s more than sufficient.
How much does it cost, to build such a nuclear power plant?
Go on, consider an exaggerated, extra-conservative cost estimate from your local greenies. 9 billion dollars? 12 billion? 14 billion? 15 billion?
In every case, even with the most pessimistic cost estimates for nuclear power, it’s far, far cheaper than solar, assuming that you’re actually capable of counting kilowatt-hours.
(* Modern, but not bleeding edge. We’ll consider the presently available modern Generation III LWRs such as Westinghouse AP1000 that are available immediately, not Generation IV fast spectrum reactors, liquid fluoride reactors, or things like that, just to be a little conservative about it.)
Brumby’s press release says that they aim to have the plant operating by 2015. So, they aim to have the plant operating within six years.
Six years? To think that opponents of nuclear energy say that it takes too long to deploy.
If it takes six years to build one, and you need 49 of them to replace one coal-fired station, would it take 294 years to accomplish that goal? Well, perhaps I'm being a tiny bit facetious. You never know, perhaps they could achieve faster deployment by constructing them in parallel, and maybe it would only take 200 years, or 150 years. Maybe.
Six years is in fact sufficient time to construct a nuclear power plant, if you're serious about doing it and don't allow it to be delayed. All the nuclear units at the Kashiwazaki-Kariwa nuclear generating station in Japan were each constructed on timescales of between three and five years; Kashiwazaki-Kariwa Unit 2 and Unit 5 both commenced construction in 1985, and both were completed by the end of 1990, within 5 years. Obviously the Japanese operators failed to see any relevance whatsoever of a certain ill-fated Soviet graphite pile to their operations.
Even if you want to talk about conservative, drawn out timescales for the construction of new nuclear power in Australia, say, 10 years maybe, it’s still a far, far faster option, for a given amount of energy delivered, than solar or wind.
Is the plutonium that is potentially formed within certain types of fuels in nuclear fission power reactors really suitable for the construction of nuclear weapons? How accessible and usable is such plutonium for such a purpose? How hard would it be to construct a nuclear weapon employing such material? Could terrorists steal nuclear fuel from a nuclear power reactor and construct a working nuclear explosive device, in practice?
What characteristics would such a device have? Given the terrible power of nuclear weapons, and the very real threat of terrorists who would love nothing more than to wield such power, these are perhaps important questions to consider.
I assert that, no, there is no real threat here that is anywhere near as plausible in the real world as it is sometimes beaten up to be. Can terrorists steal nuclear fuel, and build a nuclear weapon? No. I don’t think so.
I mainly just wrote this because (i) I just wanted to get this off my chest, and it’s good to have a go at the unrealistic nonsense that gets bandied about without any real factual evidence to back it up, and (ii) because I found the Kessler paper interesting.
This little piece of writing of mine owes a lot to the always entertaining and scientifically interesting posts of NNadir, especially this one, and this one, where I was pointed to the interesting publications of Kessler and colleagues. Love your work, NNadir🙂
My little essay is here (PDF format).
Typo reports, peer review, comments, grammatical suggestions and other interesting discussion and feedback are appreciated.
(I know the sentence is too long in the last paragraph on page 5, and there’s a typo on the first line of page 13. Those are fixed in the CVS.🙂 )
I hope you find it enjoyable, interesting and/or useful.
The federal government has recently announced it will scrap the unpopular means test for the federal subsidy for domestic solar PV arrays, which restricted the rebate to households earning less than $100,000.
The size of the rebate was, formerly, $8 per watt of installed nameplate capacity, up to a maximum of $8000. The rebate will now be smaller; $5/W, up to a maximum of $7500.
Sounds good, right? But it’s horrendously expensive – the government is in effect paying $5/W for the cheapest, nastiest polycrystalline silicon PVs on the market.
There are scores of companies jumping on the bandwagon to sell these little 1-1.5 kW rooftop PV systems, advertising and promoting and installing them – because they’re making a fortune from the increase in business resulting from the subsidy.
The government rebate does not cover the full cost of such a system, so in order to attract as much interest as possible, the vendors keep the price the customer pays as low as they possibly can. Therefore, all such systems are exclusively cheap, inefficient, basic polysilicon devices. After all, an advanced solar-concentrating collector with a high-efficiency CdTe cell, stacked heterojunction cell, sliver cell or the like attracts no higher subsidy than the basic polycrystalline Si device.
Advocates such as the Australian Greens say that such a scheme “supports the solar industry” – but all it does is support the environmentally-damaging low-cost manufacturing of polycrystalline silicon in China, and it doesn’t support innovation in advanced PV technology or anything like that.
Might the same amount of subsidy be better spent elsewhere? Here’s a hypothetical idea to think about.
1. Go and find a suburb or a city or a community which has about 31,000 households. I’m certain there are 31,000 households in this country who support what I’m about to elucidate.
2. Get each household to put up AUD $1200 or so, temporarily.
3. Take that 25 million US dollars and purchase a 25 MWe Hyperion Power Module, or something similar.
4. At 25 MWe divided between 31,000 households, that’s a little over 25 GJ per household per year, which is a little more than Australia’s present average household electricity consumption. This doesn’t just generate a fraction of your household electricity needs – it generates 100% of it, and there will be no more electricity bills.
5. That corresponds to a nameplate capacity of 807 watts per household. Since the government hands out a subsidy of $5/W for solar photovoltaics with a 20% capacity factor, they should hand out $22.50/W for nuclear energy with a 90% capacity factor, right?
6. Collect your $18,157.50 rebate from the government. Less the $1200 investment, that’s $16,957.50 immediate profit in your pocket. This is exactly the same rate of payment per energy produced that presently exists in the form of the PV subsidy.
7. Go to the pub. Got to stimulate that economy, you know.
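The arithmetic in the steps above can be checked in a few lines (all figures are the post's own illustrative numbers – the 25 MWe module size, 31,000 households, $1200 buy-in and $5/W PV subsidy – not vendor quotes):

```python
import math

# Illustrative numbers from the steps above, not vendor quotes
HOUSEHOLDS = 31_000
PLANT_W = 25e6              # 25 MWe nameplate
BUY_IN = 1_200              # dollars per household
PV_SUBSIDY = 5.0            # $/W for solar PV, at ~20% capacity factor
PV_CF, NUCLEAR_CF = 0.20, 0.90

watts_per_household = PLANT_W / HOUSEHOLDS            # ~806.5 W nameplate
sec_per_year = 365 * 24 * 3600
gj_per_household = watts_per_household * sec_per_year / 1e9   # ~25.4 GJ/yr

# Scale the $5/W PV subsidy by the ratio of capacity factors
nuclear_subsidy = PV_SUBSIDY * NUCLEAR_CF / PV_CF     # $22.50/W

rebate = math.ceil(watts_per_household) * nuclear_subsidy     # 807 W -> $18,157.50
profit = rebate - BUY_IN                                      # $16,957.50

print(f"{gj_per_household:.1f} GJ/household/yr")
print(f"rebate ${rebate:,.2f}, profit ${profit:,.2f}")
```

The $22.50/W figure is just the $5/W PV subsidy scaled by the ratio of capacity factors (0.90/0.20 = 4.5), i.e. the same payment per unit of energy actually delivered.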
I wonder how many ordinary Australian households would support nuclear energy if you paid them $17,000 for doing so?
To replace one Loy Yang type coal-fired power station* with solar cells, we would need 6,082,342 homes equipped with 1.5 kW solar photovoltaic arrays.
With a $7500 rebate for each one, that would cost the government 45.6 billion dollars for each large coal-fired power station replaced.
* (Loy Yang generated 15,995 GWh in 2006.)
Solar photovoltaics typically have a capacity factor of about 20%, and we’ll suppose the panels have a lifetime of, say, 30 years.
Therefore, this scheme costs the government 9.5 cents per kWh generated.
If the government instead purchases nuclear power plants at, say, 10 billion dollars apiece (let’s be conservative) for a plant with two 1100 MW reactors, operating at a 90% capacity factor over a lifetime of 50 years – and the capital cost of plant dominates the overall cost of nuclear energy –
then the nuclear power plants would cost the government 1.15 cents per kWh – 12 percent of the cost of the solar rebate scheme. And that’s comparing against the government’s rebate alone, without the rest of the price of these PV systems.
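The two cost-per-kWh figures above follow directly from the stated assumptions (6,082,342 homes, $7500 rebate, 1.5 kW arrays at 20% capacity factor over 30 years, versus a $10 billion plant with 2200 MW at 90% capacity factor over 50 years):

```python
# Solar rebate scheme (figures as assumed above)
homes = 6_082_342
rebate_per_home = 7_500                   # dollars
pv_kw, pv_cf, pv_life = 1.5, 0.20, 30     # nameplate kW, capacity factor, years
pv_energy_kwh = homes * pv_kw * pv_cf * 8760 * pv_life
solar_cost_per_kwh = homes * rebate_per_home / pv_energy_kwh   # ~ $0.095

# Nuclear plant purchase (figures as assumed above)
plant_cost = 10e9                          # dollars, two 1100 MW reactors
nuc_mw, nuc_cf, nuc_life = 2_200, 0.90, 50
nuc_energy_kwh = nuc_mw * 1_000 * nuc_cf * 8760 * nuc_life
nuclear_cost_per_kwh = plant_cost / nuc_energy_kwh             # ~ $0.0115

print(f"solar rebate: {solar_cost_per_kwh * 100:.1f} c/kWh")
print(f"nuclear:      {nuclear_cost_per_kwh * 100:.2f} c/kWh")
print(f"ratio: {nuclear_cost_per_kwh / solar_cost_per_kwh:.0%}")
```

Note that the solar figure counts only the government's rebate outlay, while the nuclear figure counts the full capital cost of the plant, which makes the comparison conservative.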
This solar rebate is just another mendacious political enterprise involving renewable energy which can’t be scaled up: it hands out free money to the public, makes a bunch of money for the solar panel vendors (including many dangerous fossil fuel vendors such as British Petroleum), and makes the government look like they’re actively getting the country running on clean energy.
ASIDE: I’m going to start cross-posting some blog content on the Daily Kos. I think it’s a nice site to engage with many, many readers – many of whom perhaps aren’t already so convinced of the virtue of nuclear energy – so, there’s plenty of engaging, active discussion, and the opportunity to maybe convince some people – even if that’s just a few people it’s still a very positive thing.