
Is nuclear a good investment?

In this much-belated follow-up to the eligibility criteria post on nuclear power, I consider the first merit criterion: is nuclear power a good investment? Is building a nuclear power plant a good way to turn a pile of money into a bigger pile? Or are there better ways of doing that?

Imagine you’ve been saving for a while, you have $50 billion burning a hole in your pocket, and you want to invest it in some clean electricity production. Nuclear looks okay; people keep telling you it’s clean and reliable. Everyone in the neighbourhood seems to have solar now as well, so you might look at that too.

There’s a bit to consider: nuclear takes a long time to build, but once it’s running it produces electricity almost all the time. Solar is very quick to build, but only produces power when the sun shines. Which will give you more electricity for your investment?

In the first post I assumed an 8-year construction schedule; the Nuclear Energy Agency allows 4-8 years depending on where you are, excluding permitting, financing and design. As this would be the first nuclear power plant ever built in Australia, I think ten years is reasonable. Hinkley Point, with full government support, has taken at least two years just to decide whether to build at all, and isn’t expecting electricity until 2022 or so.

$50b is about the expected cost of the Hinkley Point reactor. For that price you get 3200MW, provided by two steam turbines, generating power almost all the time, for a modeled capacity factor of 85%. This might be a touch low, but it doesn’t make much difference in the model: the US hit about 91% capacity factor last year, a record, and its long-term average is around 90%. France, in comparison, was closer to 80%, but nuclear makes up a much higher proportion of generation there, so it has to ramp up and down to follow the load. Spread across a ten-year construction period, construction costs are around $5b each year.
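As a quick sanity check on those numbers, here’s a minimal sketch in Python, using only the figures above:

```python
# Rough check of the nuclear assumptions stated in the post.
capacity_mw = 3200        # two steam turbines
capacity_factor = 0.85    # modeled capacity factor
hours_per_year = 8760

annual_twh = capacity_mw * hours_per_year * capacity_factor / 1e6
print(f"Nuclear output: {annual_twh:.1f} TWh/year")      # ~23.8 TWh
print(f"Construction spend: ${50 / 10:.0f}b per year")   # $50b over 10 years
```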

How much solar would that buy? I’ve been using $2/Watt as an estimate for about 18 months, and it’s probably a bit high now for rooftop, where the installation market is very competitive. I’ve seen commercial systems go in recently at $1.10 – $1.40/Watt, but they have all been on flat rooftops. If you spend $5b a year on solar it will require more space than some factory roofs, so the cost of installing in a field is more appropriate. These tend closer to $2.40/Watt, which is what I’ve modeled.

Solar’s capacity factor is quite easily determined from climate data, using maps like this from the Bureau of Meteorology. Find the insolation, multiply it by the panel efficiency, and it’s possible to calculate the average energy output per kW, like this table. For the model I have assumed 4kWh/kW/day, slightly more than Sydney’s average but a lot less than Brisbane, Cairns and Alice Springs. 4kWh/kW per day is an implied capacity factor of 17%.
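That implied capacity factor falls straight out of the assumption; a one-line check:

```python
# 4 kWh/kW/day against a theoretical maximum of 24 kWh/kW/day.
kwh_per_kw_per_day = 4.0
implied_cf = kwh_per_kw_per_day / 24
print(f"Implied capacity factor: {implied_cf:.1%}")  # ~16.7%, the 17% above
```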

Solar is much easier to build. Australia has been adding about 800MW each year for the last five years, without really trying. At $2.40/Watt, $5b will buy a touch over 2GW of capacity. That represents a pretty big increase on current installation rates, so maybe 2GW in the first year is a bit ambitious. If this were an actual 10-year program, installation capabilities would ramp up and should easily reach 2GW/year by the third or fourth year. I have modeled a constant program of 2GW per year to keep it simple; again, this doesn’t make much difference to the result.

I also haven’t modeled any change in the price of solar, which is extremely generous to nuclear. Installed prices have roughly halved in the last five years, continuing the trend of the last 40 years. It would be quite reasonable to assume that the cost of solar will halve again during this period.

The price of nuclear hopefully doesn’t change during the build, although these plants have a bad record of hitting their construction budgets and timelines. This excellent article in Grist summarises studies into power projects and how often they run over time and budget. Nuclear is the worst: on average more than 100% over budget, with almost 100% of projects running over time. Of the generating technologies, solar is the most reliable, with an average cost overrun of less than 10% and 40% of projects running over schedule.

First, the graph of capacity over time. Solar installs just over 2GW a year; nuclear installs nothing for ten years. This is just peak output though: nuclear can provide that around 90% of the time, solar less than 20%.

[Figure: Capacity over time]

The interaction between capacity and capacity factor is shown in the annual output graph. Solar’s output increases each year for the first ten as new capacity is added, then decays over time as the panels degrade; I’ve modeled a 1% reduction in output each year. Nuclear generates nothing for ten years, then 3200MW forever. Degrading at 1%, solar’s annual output falls to match nuclear’s some time after year 30. The solar array would have replaced some panels and inverters by this time, but many of the original panels would still be working.

[Figure: Annual MWh]

Solar gets a massive head start through easy construction and modular design, which starts generating electricity from day one. Even with the panels degrading, and despite nuclear power’s famously high capacity factor, nuclear doesn’t generate as much cumulative electricity as solar until about 80 years into this scenario.

[Figure: Cumulative generation]
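For anyone who wants to poke at the assumptions, here is a minimal sketch of the model behind these graphs, using only the figures above: $5b/year at $2.40/Watt, 4kWh/kW/day, 1% degradation, and 3200MW of nuclear at 85% capacity factor after year ten. The exact crossover year it prints will shift with details like mid-year installation, so treat it as indicative rather than a reproduction of my spreadsheet.

```python
# Toy model: cumulative generation, solar vs nuclear, on the post's assumptions.
SOLAR_GW_PER_YEAR = 5 / 2.4       # $5b/year at $2.40/W, ~2.08 GW/year
SOLAR_KWH_PER_KW_DAY = 4.0        # assumed insolation-derived yield
DEGRADATION = 0.99                # 1% output loss per year of panel age
NUKE_GW, NUKE_CF, NUKE_START = 3.2, 0.85, 10   # online after year 10

cum_solar = cum_nuke = 0.0
for year in range(1, 91):
    # Ten annual tranches of solar, each degrading with its age (output in TWh).
    tranches = range(1, min(year, 10) + 1)
    solar = sum(SOLAR_GW_PER_YEAR * SOLAR_KWH_PER_KW_DAY * 365 / 1000
                * DEGRADATION ** (year - t) for t in tranches)
    nuke = NUKE_GW * 8760 * NUKE_CF / 1000 if year > NUKE_START else 0.0
    cum_solar += solar
    cum_nuke += nuke
    if cum_nuke >= cum_solar:
        print(f"Nuclear's cumulative output catches solar in year {year}")
        break
```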

(I’ve just realised that the y-axis in all three graphs is labelled ‘Year’; it should read MWh.)

There’s a gentle curve in solar’s output as the panels wear over time, but in anything less than a 50-year investment, solar is going to make more electricity per dollar than nuclear, by some margin. Solar provides returns much faster and with much lower risk, with a strong chance that construction will get cheaper throughout a program rather than more expensive. The numbers used above are extremely favourable to nuclear: they assume the nuclear project will be delivered on time and on budget, which is demonstrably unlikely, and that solar will see no cost improvement, despite doing the same thing every year for ten years with a product demonstrated to lower its manufactured price over time. Even on estimates that favour it, nuclear produces nowhere near as much electricity as solar, and it’s only going to get worse.

The absurd complexity in the manufacture and design of nuclear power plants leads to severe cost and timeline uncertainty, while the opposite is true of solar. No one will build a nuclear power plant in Australia without significant government intervention. And it won’t just require money, but the political stamina to commit to a project of staggering cost and no discernible output for over a decade. It won’t happen. Nuclear has already lost.


Waste heat?

I thought this was a great piece this week by Glenn Platt on the anatomy of the Australian electricity network. I just wanted to expand on one point that I’ve been meaning to write about for a while.

“A typical coal-fired power station loses (or wastes) almost 70% of the energy that goes into it, when converting the energy in coal to electricity, and up to a further 10% is lost during the transmission and distribution stage. An old-fashioned light bulb then loses 98% of this energy to make light.”

These numbers are all of similar magnitude to what I have heard, except no one really uses incandescent lights anymore, do they? What surprises many people in these figures is the 70% waste from a coal-fired power station. Surely there is something we can do with all that wasted energy?

Coal plant efficiencies in Australia range from approximately 25% for the oldest plants to about 35% for the newest. The old plants could be improved with new technology, but the cost of retrofitting is so high that it is often cheaper to build a new plant. Or just keep operating the old one.

The theoretical maximum efficiency of any system that uses heat to create electricity is governed by the Carnot relationship, or even better the Chambadal-Novikov efficiency. Leaving out the maths, both of these relationships state that a conversion process is more efficient when the hot part is hotter and the cold part is colder. In a power station the hot part is the water in the coal boiler, usually somewhere between 500 and 600 degrees Celsius. This temperature is limited by the materials available and their cost. An old unit might use copper or stainless steel, whereas a newer plant would use more exotic materials like titanium alloys. The material needs to tolerate 1000 degrees and more in the burning coal stream and not corrode from the steam passing through it. When a steam tube fails it leaks steam into the boiler, and once enough go there’s no point running the boiler any more. Shutting down and restarting a coal boiler is a two-day job, so they really don’t want tubes to fail.
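For the curious, the maths I’m leaving out is short. A hedged sketch, with illustrative temperatures rather than any particular plant’s:

```python
# Carnot vs Chambadal-Novikov efficiency for an illustrative coal boiler.
T_hot = 550 + 273.15    # steam at 550 degrees C, in kelvin (assumed)
T_cold = 25 + 273.15    # ambient at 25 degrees C (assumed)

carnot = 1 - T_cold / T_hot                       # ideal, frictionless ceiling
chambadal_novikov = 1 - (T_cold / T_hot) ** 0.5   # ceiling at maximum power
print(f"Carnot: {carnot:.1%}")                    # ~63.8%
print(f"Chambadal-Novikov: {chambadal_novikov:.1%}")  # ~39.8%
```

Note how close the Chambadal-Novikov figure is to the 25-35% that real plants actually achieve.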

The cold part is essentially ambient temperature, but power stations do a few cunning things with water to create a cold vacuum on the turbine and maintain efficiency. The huge curved concrete cooling towers we associate with power stations, both nuclear and coal, are one of the solutions to keeping water cool. This technology has generally been surpassed, and newer plants favour fan-forced cooling.

So the maximum heat in the steam tubes governs maximum efficiency, and materials science governs the maximum temperature.

There are however reasons beyond this that plants do not extract every last joule from their coal. Coal is essentially pure carbon, which burns to give CO2. But buried alongside the coal are other elements and compounds: some hinder the combustion process, while others combust to give other damaging compounds. Water occurs in quite high concentrations in brown coal, as high as 60% in some places. Water, somewhat obviously, hinders combustion, because energy that should have been heating the steam pipes is instead heating the water in the coal. There are also sulphur compounds, which combust to give SO2, sulphur dioxide, one of the compounds responsible for acid rain. Acid forms when water and SO2 mix, and since there is water in the exhaust stream, the two can combine and form acid in the smoke stack, which causes big problems. To stop this happening, operators keep the stack temperature elevated, around 130 degrees and above, to ensure that acid does not condense in the stack.

Others have regulatory restrictions that mandate minimum stack temperatures. I know of one plant whose state regulatory authority mandates an exit gas temperature above 150 degrees; the higher temperature helps the exhaust gas rise further and disperse higher in the atmosphere.

Coal plants aren’t running as efficiently as they theoretically could be, but there are good technical and economic reasons why they aren’t.

There aren’t many waste-heat opportunities in the electricity sector that haven’t been exploited to some degree. The best example of this ruthless efficiency is the combined-cycle gas turbine. In this system a gas turbine, essentially a plane engine, burns gas and generates electricity. Carnot still applies, but since the combustion happens inside the engine, the theoretical maximum efficiency is much higher. Practically though, an ‘open-cycle’ gas turbine achieves somewhere around 40% efficiency. Gas turbines mostly convert kinetic energy, expanding gas, into electricity, so the gas that comes out is roaring hot. To capture this heat, combined-cycle gas turbines place a steam boiler in the exhaust stream, which drives another turbine and creates more electricity. This arrangement pushes the overall efficiency up to 60% and higher.
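The arithmetic behind that jump from 40% to 60% is worth seeing; a minimal sketch with illustrative component efficiencies:

```python
# How a bottoming steam cycle lifts combined-cycle efficiency.
eta_gas = 0.40     # open-cycle gas turbine (illustrative)
eta_steam = 0.35   # steam cycle running off the turbine exhaust (illustrative)

# The steam cycle only sees the energy the gas turbine didn't convert.
eta_combined = eta_gas + (1 - eta_gas) * eta_steam
print(f"Combined-cycle efficiency: {eta_combined:.0%}")  # ~61%
```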

These efficiencies are possible with coal, but it’s difficult, cutting-edge technology. In the proposed systems the coal is liquefied and sprayed into an internal combustion engine, conceptually similar to a diesel engine. The hot part is inside the cylinder, with no need to heat water, so the maximum possible efficiency is much higher: over 50%, some have suggested. But this is hard to do; coal has all sorts of things in it that you would not want inside your tractor motor, and removing them costs energy.

It’s getting harder to see why anyone bothers burning the stuff at all.

If the answer is nuclear, you don’t understand the question.

When selecting technology to solve a problem, a common method is to use “eligibility criteria” and “merit criteria”. Eligibility criteria are the must-haves: if you are selecting a new machine to generate electricity, “does it generate electricity” is the obvious one. Eligibility criteria are all pass/fail tests. Merit criteria are what you use to split the options that pass. Say you’re comparing a diesel engine against a natural gas engine to power a remote site; a merit criterion could be “is fuel available all year round?”

In this post I will focus on the eligibility criteria for powering Australia while reducing the greenhouse gas emissions associated with electricity production. I will spend a long time arguing that, no, nuclear should not be considered, because it is utterly incapable of solving the problem. In later posts I will discuss the merit criteria, where again nuclear falls down.



The Flaws Inherent in Capacity Factor

Capacity factor is a performance indicator for generation assets, and must always be calculated retrospectively: it is a measure of how a generator has performed. Strictly speaking, it is annual generation divided by nameplate capacity times 8760 (the hours in a year). It is a percentage indicator of how much electricity the generator produced compared to its theoretical maximum.
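In code, the definition is one line (the example numbers are illustrative):

```python
def capacity_factor(annual_mwh: float, nameplate_mw: float) -> float:
    """Annual generation divided by the theoretical maximum output."""
    return annual_mwh / (nameplate_mw * 8760)

# e.g. a 3200 MW plant that generated 23.8 TWh over the year:
print(f"{capacity_factor(23_800_000, 3200):.0%}")  # ~85%
```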

It is often used, badly, in internet arguments about the relative merits of the available generating technologies. But like any gross indicator, if you don’t fully understand what is bundled up in the calculation, you don’t really understand the metric.

The big coal-burners aim for very high capacity factors. As noted in the first baseload post, their business is based on burning as much coal as possible, ideally running flat out all the time. Anything over 90% would be a pretty solid result, numbers around 70% are far more common, and I’ve even seen some in the 50s. As you can be pretty sure these plants want to run all the time, the capacity factor is a very gross indicator of how much maintenance the plant needs and of its operating profile within the market. Coal is a pretty cheap fuel, and maintenance costs are relatively fixed, so these plants run as often as possible to smear that maintenance cost across more generation.

Consider too all of this in the context of “baseload”. A plant that only runs half the time is considered baseload? This is our benchmark for reliable electricity production?

For the gas burners, fuel cost is the main driver, so they run less frequently, waiting for the moments when the price of electricity is high enough to make some money. I will do a series on generation technology, but for the moment it is sufficient to say that gas lends itself more readily to this sort of on/off operation. For these generators, the capacity factor, taken in isolation, is an indicator of volatility in the market, a proxy for how often electricity was valuable enough to cover the costs of burning gas. Not surprisingly, there is a huge range in gas-plant capacity factors, with 50% being very high, and low single figures for some of the acute peaking plants. I have even heard a story of a small gas engine that was commissioned the day before the market went to its maximum price (capped at $12,500/MWh), ran for a few hours, paid off the capital and made some money, and was then mothballed the next day to wait for the next maximum.

Using capacity factor as the sole comparison with renewables is farcical.

For wind power, the problem comes from the inclusion of “nameplate capacity” in the calculation. Unlike steam turbines, wind turbines don’t really have a hard upper limit. In a standard coal plant, each of the four 250MW turbines will produce 250MW when running hard, and never more; their nameplate capacity is the limit of their performance.

For a wind turbine there is an element of arbitrariness in the declaration of capacity. Sure, they have a safe upper limit, at which the turbine does a few things to protect itself, but that’s not the nameplate capacity. That capacity is usually chosen to represent the output at a given wind speed; say 5m/s. So they might call it a 2MW turbine, but that is neither the maximum output nor the average output. And that difference is important, because the relationship between wind speed and output is non-linear (roughly cubic below the rated limit): the difference between output at 3m/s and 4m/s is LESS than the difference between 4m/s and 5m/s. The output gets bigger faster as wind speed increases. So averaging a year’s output and dividing it by this flawed theoretical maximum gives some idea of how much the turbines were used, but not a very detailed picture. It is little better than a guess at how likely a wind farm is to meet production within a one-year window, which is a proxy for how windy it was last year. It is essentially a weather report.
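To see the non-linearity, here’s a toy power curve. The constant is chosen so a hypothetical turbine produces 2MW at 5m/s; it bundles up air density, swept area and efficiency, and is purely illustrative:

```python
# Toy wind power curve: output scales with the cube of wind speed
# (below the rated limit).
def rotor_power_mw(wind_speed: float, k: float = 2.0 / 5**3) -> float:
    return k * wind_speed ** 3

for v in (3, 4, 5):
    print(f"{v} m/s -> {rotor_power_mw(v):.2f} MW")
# 3 m/s -> 0.43 MW, 4 m/s -> 1.02 MW, 5 m/s -> 2.00 MW:
# the step from 4 to 5 is bigger than the step from 3 to 4.
```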

Knowing all of that, capacity factors for wind-farms range from about 20% to 40%.

While this is linked to a long-term problem I have with qualitative analysis in this field, there are better ways to talk about renewables’ output. They will be complex, but renewable energy’s interaction with our demand profile is a pretty dynamic problem, and we need a more complicated dialogue to make sense of it.

There are two major flaws in using capacity factor to describe the output of renewables. The first is that it says nothing about the “shape” of the output profile. A 40% capacity factor suggests that the farm ran flat out exactly 40% of the time. But is that how wind works? The wind blows at exactly 5m/s four out of ten days? Seems unlikely. Perhaps it ran at half capacity for 80% of the time? Or better yet, maybe it ran at exactly 40% capacity all the time? That would be terrific! My main point here is that the detail is lost, and no consideration of how this output meets our needs fits into such a coarse indicator. So, and we’re nearing the limits of my maths here, a better descriptor would be a frequency histogram of the values in the output data set, what the industry calls a duration curve. It paints a picture of the data: how often the farm achieves maximum, how long it sits at zero, the range of values, that sort of thing. (A Fourier transform of the output would add something different again: the periodicity, the daily and seasonal cycles.) Complicated? Yes, no doubt. Useful? Absolutely. An even simpler method might be just to describe the mean and standard deviation: the average production and how often production is near that average. Again though, this will be thrown out severely by days of zero production, where the histogram would capture them.
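As a sketch of what those richer descriptors look like, here’s a synthetic year of hourly wind output; the Weibull draw stands in for real SCADA data, and all the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
nameplate_mw = 2.0

# Synthetic hourly output for one year, clipped at nameplate.
output_mw = np.clip(rng.weibull(2.0, 8760) * 0.45, 0.0, 1.0) * nameplate_mw

print(f"Capacity factor: {output_mw.sum() / (nameplate_mw * 8760):.0%}")
print(f"Mean: {output_mw.mean():.2f} MW, std dev: {output_mw.std():.2f} MW")
print(f"Hours near zero: {(output_mw < 0.1).sum()}")

# The duration curve -- output sorted high to low -- shows the 'shape'
# that a single capacity-factor number hides.
duration_curve_mw = np.sort(output_mw)[::-1]
```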

The second flaw is related to the first, and is particularly pertinent to solar electricity. Solar is regularly reported as having a capacity factor of around 25%, and given that it requires sunlight this isn’t really surprising. There is even a decent argument that the 25% should really be 50%: the “theoretical maximum” for solar surely doesn’t include night time, does it? Leaving that aside, 25% suggests that solar panels achieve maximum output for about six hours a day, intuitively the three hours either side of midday. The internet argument goes that a 25% capacity factor isn’t enough, and that means we need to install four times as much generation. But this idea is mired badly in the baseload mindset and utterly ignores the fact that demand changes during the day, and by quite a lot. Again, a better descriptor is possible, one that considers the relationship between what is essentially free energy and network demand. What if solar output matched air-conditioner demand? That seems pretty likely too: hot days are sunny days. I’ve linked to this article before, but it’s still useful; look at how solar power is contributing to our grid at the moment. Perhaps with a bit of clever engineering and some insulated houses we could directly match solar output to air conditioners, and come home to a house that has been cooled automatically, with power generated on-site?

I’ll leave it there for the moment, but I wanted to seed some ideas. There are some huge opportunities to link renewable energy supply with electricity use, and even some very interesting options for direct energy use, in, say, a thermal-cycle air conditioner rather than a compressor-driven unit.

I’ll summarise the whole lot with “I don’t really like capacity factor as a stand-alone descriptor”, and encourage you to be skeptical of anyone who does like it. You’re not getting the whole picture.