
Is nuclear a good investment?

In this much-belated follow-up to the eligibility criteria post on nuclear power, I consider the first merit criterion: is nuclear power a good investment? Is building a nuclear power plant a good way to turn a pile of money into a bigger pile? Or are there better ways of doing that?

Imagine you've been saving for a while, you have $50 billion burning a hole in your pocket, and you want to invest it in some clean electricity production. Nuclear looks okay; people keep telling you it's clean and reliable. But everyone in the neighbourhood seems to have solar now as well, so you might look at that too.

There’s a bit to consider; nuclear takes a long time to build, but once it’s running it produces electricity almost all the time. Solar is very quick to build, but only produces power when the sun shines. Which will give you more electricity from your investment?

In the first post I assumed an 8-year construction schedule; the Nuclear Energy Agency allows 4-8 years depending on where you are, excluding permitting, finance raising and design. As this would be the first nuclear power plant ever built in Australia, I think ten years is reasonable. It has taken Hinkley Point, with full government support, at least two years just to decide whether to build it, and they're expecting electricity in 2022 or so.

$50b is about the expected cost for the Hinkley Point reactor. For that price you get 3200MW, provided by two steam turbines. These will generate power almost all the time, for a modeled capacity factor of 85%. This might be a touch low, but it doesn’t make much difference in the model; the US hit about 91% capacity factor last year, a record, and their long term average is around 90%. France in comparison was closer to 80%, but nuclear makes up a much higher proportion of generation in France, so has to ramp up and down to follow the load. The construction period is about ten years, so construction costs are around $5b each year.

How much solar would that buy? I’ve been using $2/Watt as an estimate for about 18 months, and it’s probably a bit high now for rooftop where the installation market is very competitive. I’ve seen commercial systems go in recently at $1.1 – $1.4/Watt, but they have all been on flat rooftops. If you spend $5b a year on solar it will require more space than some factory roofs, so the cost of installing in a field is more accurate. These tend closer to $2.4/Watt, which is what I’ve modeled.

Solar's capacity factor is quite easily estimated from climate data, using insolation maps like this one from the Bureau of Meteorology. Find the insolation, multiply it by the panel efficiency, and you can calculate the average energy output per kW, as in this table. For the model I have assumed 4kWh/kW/day, slightly more than Sydney's average, but a lot less than Brisbane, Cairns and Alice Springs. 4kWh/kW per day is an implied capacity factor of 17%.
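The arithmetic behind that implied capacity factor is worth making explicit. A quick sketch, using my assumed yield (the 4kWh/kW/day is a model input, not measured data):

```python
# Implied solar capacity factor from an assumed average daily yield.
daily_yield_kwh_per_kw = 4.0                 # assumed model input, per the text
theoretical_max_kwh_per_kw = 24.0            # one kW running flat out for 24 hours
capacity_factor = daily_yield_kwh_per_kw / theoretical_max_kwh_per_kw
annual_yield_kwh_per_kw = daily_yield_kwh_per_kw * 365

print(f"{capacity_factor:.1%}")              # 16.7%, rounded to 17% in the text
print(annual_yield_kwh_per_kw)               # 1460.0 kWh per installed kW per year
```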

Solar is much easier to build. Australia has been adding about 800MW each year for the last five years, without really trying. At $2.40/Watt, $5b will buy a touch over 2GW of capacity. That represents a pretty big increase on current installation rates, so maybe 2GW in the first year is a bit ambitious. If this were an actual 10-year program your installation capabilities would ramp up, easily reaching 2GW/year by the third or fourth year. I have modeled a constant program of 2GW per year to keep it simple; again, this doesn't make much difference to the result.

I also haven't modeled any change in the price of solar, which is extremely generous to nuclear. Installed prices have roughly halved in the last five years, continuing the trend of the last 40 years. It would be quite reasonable to assume that the cost of solar would halve again during this period.

The price of nuclear hopefully doesn't change during the build, although these plants have a bad record of hitting their construction budgets and timelines. This excellent article in Grist summarises some studies into power projects and how often they run over time and budget. Nuclear is the worst, on average running more than 100% over budget, with almost 100% of projects running over time. Of the generating technologies, solar is the most reliable, with an average cost overrun of less than 10% and about 40% of projects running over schedule.

First, the graph of capacity over time. Solar installs just over 2GW a year; nuclear installs nothing for ten years. This is just peak output though: nuclear can provide that around 90% of the time, solar less than 20%.


The interaction between capacity and capacity factor is shown in the annual output graph. Solar's output increases each year for the first ten as new capacity is added, then decays over time as the panels degrade; I've modeled a 1% reduction in capacity each year. Nuclear generates nothing for ten years, then 3200MW forever. Degrading at 1%, solar's annual output falls to match nuclear's some time after year 30. The solar array would have replaced some panels and inverters by this time, but many of the original panels would still be working.

Annual MWh

Solar gets a massive head start through easy construction and modular design, which theoretically generates electricity from day 1. Even with the panels degrading and nuclear power's famous high capacity factor, nuclear doesn't catch up to solar's cumulative generation until about 80 years into this scenario.
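For anyone who wants to poke at the model rather than trust my graphs, here is a minimal sketch of it. Every input is one of the assumptions stated above ($5b a year for ten years, $2.40/Watt, 17% solar capacity factor, 1% annual degradation, 3200MW of nuclear at 85% from year 11); change them and the crossover years move too:

```python
HOURS_PER_YEAR = 8760

# Nuclear: nothing during the ten-year build, then 3200 MW at 85% capacity factor.
def nuclear_annual_mwh(year):
    return 3200 * HOURS_PER_YEAR * 0.85 if year > 10 else 0.0

# Solar: $5b/year at $2.40/W buys ~2083 MW each year for ten years, at 17%
# capacity factor, with each year's cohort degrading 1% per year once installed.
mw_per_year = 5e9 / 2.40 / 1e6

def solar_annual_mwh(year):
    return sum(
        mw_per_year * HOURS_PER_YEAR * 0.17 * 0.99 ** (year - i)
        for i in range(1, min(year, 10) + 1)
    )

# Year in which nuclear's annual output first beats degraded solar (early 30s here).
annual_cross = next(y for y in range(1, 200) if nuclear_annual_mwh(y) > solar_annual_mwh(y))

# Year in which nuclear's cumulative output catches solar's (around year 80 here).
cum_n = cum_s = 0.0
for y in range(1, 200):
    cum_n += nuclear_annual_mwh(y)
    cum_s += solar_annual_mwh(y)
    if cum_n > cum_s:
        cumulative_cross = y
        break

print(annual_cross, cumulative_cross)
```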

Cumulative generation

(A note on the graphs: the x-axis in all three is Year, and the outputs are in MWh.)

There's a gentle curve in solar as the panels wear over time, but in anything less than a 50-year investment solar is going to make more electricity per dollar than nuclear, by some margin. Solar provides returns much faster and with much lower risk than nuclear, with a strong chance that construction will get cheaper throughout a program, rather than more expensive. The numbers used above are extremely favourable for nuclear. They assume that the nuclear project will be delivered on time and on budget, which is demonstrably unlikely. They also assume that solar will see no cost improvement, despite doing the same thing every year for ten years with a product demonstrated to lower its manufactured price over time. Nuclear produces nowhere near as much electricity as solar on conservative estimates, and it's only going to get worse.

The absurd complexity in manufacture and design of nuclear power plants is a weakness that leads to severe cost and timeline uncertainty, while the opposite is true of solar. No one will build a nuclear power plant in Australia without significant government intervention. And it won’t just require money, but the political stamina to commit to a project of staggering cost and no discernible output for over a decade. It won’t happen. Nuclear has already lost.

Climate Change Reporting and the Truth

I'd wondered previously how much "cherry picking" goes on among those who report on climate change, in particular those unique individuals who believe that climate change is not a real thing and that no, we really shouldn't do anything about it.

The “windmills don’t stop coal being burnt” report doing the rounds this week has been an astonishing example. Here’s my take on it from last week.

The centre of the story is Hamish Cumming, a retired engineer, who made some phone calls and used some AEMO dispatch data to conclude, fairly ambitiously, that when the windmills come on the coal plants don't turn down, so the windmills aren't doing anything. This showed conclusively, using "real data", that the windmills weren't contributing. It was seized upon enthusiastically by Jo Nova, Independent Australia, Andrew Bolt (whose comment moderators deleted my link to the post where I strongly disagreed), Australian Climate Madness, and of course The Australian.

All of these outlets thought the carbon intensity of the grid, and whether or not windmills influence it, was worthy of a story.

No doubt then all of these people will be happy to know there is an index of carbon intensity, published monthly by Pitt and Sherry, who, like Hamish, are engineers.

It shows pretty clearly that in about 2009 emissions stopped following the pattern of demand and started decreasing. It also shows a corresponding decline in black coal and an increase in renewables. Brown coal is probably not the first to go in the bidding order because it is the cheapest. The recent carbon price will have an impact on that though.
So a retired engineer releases a report, based on phone calls, letters and publicly available network data (read: very low resolution), that completely disagrees with an index which has been published monthly for years, and they publish the retired engineer. I suppose I shouldn't be surprised.


Tristan over at Climate Spectator has a good post on this as well.





Sigh. Windmills Don’t Work Because Coal is Baseload

I suppose I shouldn't be so despondent about this article; nonsense like this is why I started this blog. I am referring to an article in The Australian yesterday, about a study suggesting that wind-farms don't actually save any emissions, because the coal-fired power they are displacing is baseload.

It ran in The Australian, so to engage with this "debate" you'll need to follow this link, then click on the first article, "Hopes of slashing greenhouse emissions blowing in the wind." Go and read the whole catastrophe and come back.

While you’re at it, if you haven’t read the first baseload post here that will provide some background.

The thrust of it is some research conducted by Victorian mechanical engineer Hamish Cumming, who has looked at "publicly available data" and determined that, because there is no evidence in these data that the coal plants are burning less coal, the windmills obviously aren't abating any carbon.

I will leave aside the carbon permit and subsidies claims, which will take another post to tackle properly, and concentrate on this analysis. How right is he? What assumptions support these findings? Is the data actually available to make this determination?

From the article: “A forensic examination of publicly available power-supply data shows Victoria’s carbon-intensive brown-coal power stations do not reduce the amount of coal they burn when wind power is available to the grid.”

Is evidence of whether or not the coal fired power-stations are ramping down when wind comes on, evidence that the turbines are abating carbon? No.

The only reliable measure of how much carbon is abated by renewables, and whether or not they are making a difference, is in the long-run averages: looking back over a year of data and comparing the MWh generated across the network with the tonnes released. If we start generating more electricity per tonne of CO2, then renewables are contributing and the carbon price has had an impact. The data described above does not include any hard data about tonnes emitted; this is a closely kept secret of the generating companies, as it directly affects their competitiveness in the market.

Note too that he has only considered Victorian power-stations. There is no requirement that the stations that power down are the ones nearby, nor any requirement that the most polluting ones power down. Sure, it would be good if that did happen, but current thinking is that with a carbon price the most polluting plants will be proportionally less competitive, and so will be dispatched less frequently. If abating carbon were the only goal of the market then the bidding order would be based on carbon intensity and Hazelwood would only get dispatched a couple of times a year. But the market is based on cost, with the addition of a carbon price as a pollution proxy, and so the most expensive generation is turned off first. Not the most polluting.

“Cumming says surplus energy is wasted to make room for intermittent supplies from wind.”

I have considered a whole post on this idea previously, but we can tackle it now. This statement shows a disturbing lack of understanding of how electricity and the network actually operate, with a fair degree of entrenched baseload thinking to really skew it. What on earth is surplus energy?

Our grid operates on alternating current (posts on this are at my old blog, here, here and here) and is managed to maintain frequency. Demand loads come and go during the day and generation is tuned to meet demand. If the frequency drops, that means more demand has come on and so more generation will be dispatched; obviously the reverse applies as well. The frequency is managed extremely tightly; if it gets too far from the 50Hz that equipment across the grid is designed to use, that equipment starts breaking. So frequency is managed to 50Hz, +/- 0.15Hz, and if it goes beyond that, emergency protection settings are enacted and loads get dumped.

Note that nowhere in here is any mention of surplus energy (strictly electricity, but that distinction doesn't seem to matter to many). We always generate exactly the same amount of electricity as we use. Always. So while the signal of Victorian brown coal generators turning off at exactly the same time as wind turning on is not visible, it is wrong to use this information to make claims about whether or not any greenhouse emissions have been saved.
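The generation-follows-frequency idea can be sketched in a toy simulation. Every constant here is invented purely for illustration (real governor and inertia figures vary plant to plant and are nothing like this simple); the point is only that generation chases demand via frequency, so there is never any "surplus energy" sloshing around:

```python
# Toy droop-control sketch: generation responds to frequency error, and
# frequency drifts with the generation/demand imbalance. Illustrative only.
f_nominal = 50.0      # Hz
deadband = 0.15       # Hz; the +/- band the operator manages to
base_gen = 1000.0     # MW of scheduled generation
demand = 1000.0       # MW
droop_gain = 500.0    # MW of governor response per Hz of error (assumed)
drift = 0.0005        # Hz/s of frequency drift per MW of imbalance (assumed)
dt = 0.1              # s per simulation step

f = f_nominal
for step in range(1200):            # two minutes of simulated time
    if step == 600:
        demand += 50.0              # a 50 MW load switches on mid-run
    gen = base_gen + droop_gain * (f_nominal - f)
    f += drift * (gen - demand) * dt

# Frequency settles just below 50 Hz, well inside the band, and generation
# ends up matching demand exactly; nothing is ever "surplus".
print(round(f, 3), round(gen, 1))
```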

“Cumming’s findings have been confirmed by Victoria’s coal-fired electricity producers and by independent energy analysts who say it is more efficient to keep a brown-coal power-station running than turn it down and then back up.”

There is some ambiguous language here, but I am sure that the coal-fired power-stations would have confirmed that they prefer not to change load. That's what baseload means. It doesn't mean that they can't or don't change load, and nor does it confirm the findings. A fairly typical 250MW turbine can turn down to 30% of its rated load, in increments of about 5MW per minute, and a typical power-station will have four of these turbines. These generators have always had the capability to change loads; demand has always been changeable, and there is no network difference between a decrease in demand and an increase in generation. So wind turbines and other intermittent generators are just another element dragging the grid frequency away from 50Hz, and it is entirely manageable.
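Those ramping figures imply a lot of flexibility. A rough sketch with the numbers above (250MW turbines, 30% minimum load, about 5MW per minute):

```python
# How much, and how fast, can a typical 4-turbine coal station turn down?
rated_mw = 250.0
turbines = 4
min_load_mw = 0.30 * rated_mw                    # 75 MW minimum stable load each
ramp_mw_per_min = 5.0

swing_per_turbine = rated_mw - min_load_mw       # 175 MW of range per turbine
station_swing = turbines * swing_per_turbine     # 700 MW across the station
minutes_full_to_min = swing_per_turbine / ramp_mw_per_min

print(station_swing, minutes_full_to_min)        # 700.0 MW of swing, 35.0 minutes
```

So a single station can shed 700MW, without switching anything off, in a bit over half an hour.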

Here is the reference for the coal data used in this study:

In a letter to Victorian Attorney-General Robert Clark, Cumming said the owners of Yallourn, Hazelwood and Loy Yang power stations had confirmed in writing that the power stations combined consume about 7762 tonnes of coal an hour.

“They have confirmed that the power stations do not change the coal feed intake 24 hours a day, seven days a week, 365 days a year. The coal consumed by these three power stations alone makes base-load power available at a rate of 6650 megawatts,” Cumming wrote.

Given that Yallourn is partially underwater, we know this is not true.

Beyond that though, this is an extremely technical field, and I would expect this sort of analysis to include actual feed-rates of coal, compared to output, correlated across a wide number of sites, to have any hope of supporting the claim that wind-turbines do not displace coal-fired generation. This seems to be an anecdote, aggregated across four major power-stations (counting Loy Yang A and B separately) and based on the assumption that output is constant.

There is one extremely technical point that is a possible mechanism for wind power not abating emissions, and I would be reluctant to say which way this fell. Baseload plants have a Best Efficiency Point, where the electricity produced per tonne of coal is maximised. Any deviation from that point means slightly less efficient generation. But there are two problems with this: the change in efficiency is minute, and it assumes that the plant was already operating at its best efficiency point. The whole-cycle efficiency of a plant might be 30%, plus or minus just a couple of points until the extremes of its operating range. The question then is: does efficiency fall so far that it wasn't worth turning the wind turbine on at all? Only long-run data will confirm or deny this. Ditto for the operating-point assumption. There is just no point commenting on the electricity network unless you have reams of data. It's too complicated not to.

This study has been quite widely covered, getting a run with Andrew Bolt, Jo Nova and the piece above in the Australian.

One hopes that those mentioned would like to discuss an alternative viewpoint. I am also a mechanical engineer and I've got a bit of an idea how electricity works. While Hamish Cumming's analysis is a valuable contribution to the debate, I think it is a starting point only, and it would strongly benefit from some more data. It is not possible to make those findings from the data presented.

I’ll post this on their blogs and we’ll see how it goes. Maybe if you see someone else running this line you could point them here. I think we could have a more nuanced discussion than this.
And big hat-tip to reader @andrew_hedge who pointed the Oz piece at me.
PS I’ve had some twitter discussions since this about what the actual abatement is: this report is a good starting point. But, these are modeled values which will upset a lot of people. Like I said, long run data is the only way to crack this one.

The Flaws Inherent in Capacity Factor

Capacity Factor is a performance indicator for generation assets, and must always be calculated retrospectively. It is a measure of how a generator has performed. Strictly speaking, it is the annual generation divided by nameplate capacity times 8760 (the hours in a year). So it is a percentage indicator of how much electricity the generator produced, compared to its theoretical maximum.
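The definition is simple enough to write down as a one-liner, with a made-up example plant (the figures are hypothetical, chosen only to give a round answer):

```python
HOURS_PER_YEAR = 8760

def capacity_factor(annual_generation_mwh, nameplate_mw):
    """Actual annual generation as a fraction of the theoretical maximum."""
    return annual_generation_mwh / (nameplate_mw * HOURS_PER_YEAR)

# A hypothetical 1000 MW plant that generated 6,132,000 MWh last year:
print(f"{capacity_factor(6_132_000, 1000):.0%}")   # 70%
```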

It is often used, badly, in internet arguments about the relative merits of the available generating technologies. But, like any of these gross indicators if you don’t fully understand what is bundled up in the calculation, you don’t really understand the metric.

The big coal-burners aim for very high capacity factors. As noted in the first baseload post, their business is based on burning as much coal as possible, ideally running flat out all the time. Anything over 90% would be a pretty solid result, with numbers around 70% far more common, and I’ve even seen some in the 50s. As you can be pretty sure these plants want to run all the time, the capacity factor is a very gross indicator of how much maintenance the plant needs, and their operating profile within the market. Coal is a pretty cheap fuel, and their maintenance costs are relatively fixed, so they run as often as possible to smear that maintenance cost across more generation.

Consider too all of this in the context of “baseload”. A plant that only runs half the time is considered baseload? This is our benchmark for reliable electricity production?

For the gas burners, their fuel cost is the main driver, so they run less frequently, waiting for the moments when the price for electricity is high enough to make some money. I will do a series on generation technology, but for the moment it is sufficient to say that gas lends itself more readily to this sort of on/off operation. For these generators, the capacity factor, taken in isolation, is an indicator of volatility in the market, a proxy for how often electricity was valuable enough to cover the costs of burning gas. Not surprisingly, there is a huge range in gas-plant capacity factors, with 50% being very high, and low single figures for some of the acute peaking plants. I have even heard a story of a small gas engine that was commissioned the day before the market went to its maximum price (capped at $12,500/MWh), ran for a few hours, paid off its capital and made some money, and was mothballed the next day to wait for the next maximum.

Using capacity factor as the sole comparison with renewables is farcical.

For wind power, the problem comes from the inclusion of “nameplate capacity” in the calculation. Unlike steam turbines, wind turbines don’t really have a hard upper limit. In a standard coal plant, the 4 x 250MW turbines will all produce 250MW when running hard; they will never go above this. Their nameplate capacity is the limit of their performance.

For a wind turbine there is an element of arbitrariness in the declaration of capacity. Sure, they have a safe upper limit, at which the turbine does a few things to protect itself, but that's not the nameplate capacity. That capacity is usually chosen to represent the output at a given wind-speed; say 5m/s. So they might call it a 2MW turbine, but that is not the maximum output, nor even the average output. And that difference is important, because the relationship between wind-speed and output is non-linear; i.e. the difference between output at 3m/s and 4m/s is LESS than the difference between 4m/s and 5m/s. The output gets bigger, faster, as wind-speed increases. So averaging that output over a year and dividing it by the flawed theoretical maximum gives some idea of how much the turbines were used, but not a very detailed picture. It is little better than a guess at how likely a wind-farm is to meet production within a one-year window, which is a proxy for how windy it was last year. It is essentially a weather report.
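The non-linearity is easy to see with the standard cubic relationship: below rated speed, power available in the wind scales with the cube of wind speed. The scaling constant here is arbitrary, and real turbines add cut-in, rated and cut-out behaviour on top of this, so treat it as a sketch of the shape only:

```python
# Equal 1 m/s steps in wind speed give unequal steps in output.
def wind_power_kw(v_ms, k=16.0):       # k is an arbitrary illustrative constant
    return k * v_ms ** 3               # cubic region only, below rated speed

p3, p4, p5 = (wind_power_kw(v) for v in (3.0, 4.0, 5.0))
print(p4 - p3, p5 - p4)                # 592.0 then 976.0: the second step is bigger
```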

Knowing all of that, capacity factors for wind-farms range from about 20% to 40%.

While this is linked to a long-term problem I have with qualitative analysis in this field, there are better ways to talk about renewables output. They will be complex, but renewable energy's interaction with our demand profile is a pretty dynamic problem, and so we need a more complicated dialogue to make sense of it.

There are two major flaws with using capacity factor to describe the output of renewables. The first is that there is no comment on the "shape" of the profile. A 40% capacity factor suggests that the farm ran flat out exactly 40% of the time. But is that how wind works? The wind blows at exactly 5m/s four out of ten days? Seems unlikely. Perhaps it ran at half capacity for 80% of the time? Or better yet, maybe it ran at exactly 40% capacity all the time? That would be terrific! My main point here is that the detail is lost, and no consideration of how this output meets our needs fits into such a coarse indicator.

So, and we're nearing the limits of my maths here, there are better descriptors one could build from a wind-farm's output data-set. A frequency histogram of the values in the data set would paint a picture of the data: how often it achieves maximum, how long it sits at zero, the range of values, that sort of thing. (A Fourier transform, strictly, describes the periodic patterns in the data, such as daily and seasonal cycles, rather than the distribution of values, but either would tell you far more than a single percentage.) Complicated? Yes, no doubt. Useful? Absolutely. An even simpler method might be just to describe the mean and standard deviation: the average production and how often production is near that average. Again though, this can be thrown out severely by days of zero production, which the histogram would capture.
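A quick illustration of why the shape matters: two synthetic profiles with the same 40% capacity factor but utterly different character. The data is invented; the point is that the mean alone hides everything the standard deviation (or a full histogram) reveals:

```python
import random
import statistics

random.seed(1)
nameplate_mw = 2.0
hours = 8760

# Profile A: flat out 40% of the time, off the rest (lumpy, wind-like).
profile_a = [nameplate_mw if random.random() < 0.4 else 0.0 for _ in range(hours)]
# Profile B: a steady 40% of nameplate, all the time (the "terrific" case).
profile_b = [0.4 * nameplate_mw] * hours

def cap_factor(profile):
    return sum(profile) / (nameplate_mw * len(profile))

# Both report ~0.4, but the spread around the mean is completely different.
print(round(cap_factor(profile_a), 2), round(cap_factor(profile_b), 2))
print(round(statistics.pstdev(profile_a), 2), statistics.pstdev(profile_b))
```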

The second is related to the first, and is particularly pertinent when considering solar electricity. Solar is regularly reported as having a capacity factor of around 25%, and given that it requires sunlight this isn't really surprising. There is even a decent argument that the 25% should really be 50%; the "theoretical maximum" for solar surely doesn't include night time, does it? Leaving that aside, this suggests that solar panels achieve maximum output for about 6 hours a day; intuitively we know this is probably three hours either side of midday. The internet argument goes that a 25% capacity factor isn't enough, and that means we need to install four times as much generation. But this idea is mired badly in the baseload mindset and utterly ignores the fact that demand changes during the day, and by quite a lot.

Again a better descriptor is possible, considering the relationship between what is essentially free energy and network demand. What if solar output matched air conditioner demand? That seems pretty likely too; hot days are sunny days. I've linked to this article before, but it's still useful. Look at how solar power is contributing to our grid at the moment. Perhaps with a bit of clever engineering and some insulated houses we could directly match solar output to air conditioners, and come home to a house that has been cooled automatically, with power generated on-site?

I’ll leave it there for the moment, but I wanted to seed some ideas. There are some huge opportunities to link renewable energy supply with electricity use, and even some very interesting options for direct energy use, in say a thermal cycle air-conditioner, rather than a compressor driven unit.

I'll summarise the whole lot with "I don't really like capacity factor as a stand-alone descriptor", and encourage you to be skeptical of anyone who does like it. You're not getting the whole picture.


Unravelling the Baseload Furphy – part 1 of X

The Australian Energy Market Operator (AEMO) have released their 2012 Statement of Opportunities report and there are some very interesting outcomes for the enthusiast. However, the main reason I want to talk about this today is to start addressing some of the common misconceptions about the electricity market and point you at some of the more accurate answers.

Baseload. It’s the word everyone knows about electricity generation now, bandied around in pubs and on QandA. Broadly speaking, baseload generation can be thought of as ‘energy supply that is available all the time’. It was mostly created as a term to differentiate fossil fuel power from renewables, whose energy source is intermittent. A less conciliatory way of thinking of baseload is “electricity supply which is too inflexible to switch on or off”.

It extends from the idea that the big thermal power plants (in particular coal and gas boilers, not turbines) require a long time to start up and shut down. Big coal burners I've worked on in the past require 24 hours' notice to heat a boiler and get to full load; gas boilers are slightly faster because the fuel ignites more easily, but we're still talking about 16 hours. At the other end, shutting down a boiler from flat out can take more than 8 hours.

Each power station will have a number of boiler-turbine trains, typically four, each rated to about a quarter of the output. So in a week where demand goes from zero to maximum and back to zero, a power plant would spend four days (of a total of 28 "boiler-days") starting boilers and not making electricity, and more than another full day shutting them all down.
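The boiler-day arithmetic above, made explicit (the 24-hour start and 8-hour stop figures are the ones quoted earlier):

```python
# One week of zero-to-maximum-to-zero demand for a 4-train power station.
trains = 4
start_hours, stop_hours = 24, 8

available_boiler_days = trains * 7                 # 28 boiler-days in the week
startup_boiler_days = trains * start_hours / 24    # 4.0 spent heating boilers
shutdown_boiler_days = trains * stop_hours / 24    # ~1.3 spent shutting down

print(available_boiler_days, startup_boiler_days, round(shutdown_boiler_days, 1))
```

That is roughly a fifth of the week's available boiler-days spent producing no electricity at all, which is why these plants hate cycling.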

To avoid start-ups and shut-downs, generators do some cunning things. Off-peak hot water is the classic; this allows residential consumers to receive a signal from the network that they could start heating water now, and that spare load overnight is dumped into hot water for reduced price. This does however give an artificial indication of how much electricity we need overnight. Related, I have heard that Japan solved this problem (nukes operate much the same way as a coal plant) by encouraging similar cheap power overnight, leading to the extraordinary streetlights in Tokyo we see today.

So, think of baseload supply as “on regardless of if you want it or not”. One does not build more baseload supply to meet higher peak demands.

And finally we are at today’s furphy; that baseload supply is “planned” and “controlled” by government or AEMO. That we “need” baseload supply and that someone should fix it.

AEMO's Statement of Opportunities, linked at the top of the page, is about as close to planning and controlling the market as they get. It is an assessment of network capability over the coming years: a guess at how our electricity demand will grow (although it has shrunk in the last three years, hence the revision of requirements), and whether we have the generation capacity to meet it.

But that’s the end of AEMO’s involvement. Now, potential investors will look at this information and decide if they really do want to build a power station. And if they do, what sort will it be? There is no mandate that new generation be baseload, intermittent or peaking. Each individual investing corporation decides for themselves whether or not they can make money in the current market. And given what I’ve told you above, do you think baseload generation would be a new investor’s first choice?

The table on page 3 of the Executive Summary supports this. New generation projects announced across the NEM include 14GW of wind, a little under 12GW of open-cycle gas (classic peaking power) and about 3GW each of black coal and combined-cycle gas, the two major baseload technologies.

The rest of the report discusses outages and developments on existing equipment. Again there are some interesting bits and bobs in here (I didn't realise there was any hope of Munmorah or Morwell ever running again). In particular, a 566MW open-cycle gas turbine plant in Mortlake is a large development, able to deliver about half a coal plant's output in under an hour.

And lastly, mostly for my interest, the 60MW upgrade to Eraring powerstation, with another 60MW coming. I’ve seen this before in other plants, and in those cases the extra capacity has come from an uprated turbine; all other components are unchanged. This suggests that there are STILL developments being made in turbine design, which I find endlessly fascinating. That we can still take a plant that was designed 40+ years ago and wring another 5 or 10% out of it with a new turbine design blows my mind.