
The Flaws Inherent in Capacity Factor

Capacity factor is a performance indicator for generation assets, and is always calculated retrospectively: it is a measure of how a generator has performed. Strictly speaking, it is annual generation divided by nameplate capacity multiplied by 8760 (the hours in a year). It is thus a percentage indicator of how much electricity the generator produced, compared with its theoretical maximum.
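
As a minimal sketch of the calculation (the plant and its numbers here are invented for illustration):

```python
# Capacity factor: actual annual generation as a fraction of the theoretical
# maximum (the nameplate capacity running for every hour of the year).
HOURS_PER_YEAR = 8760

def capacity_factor(annual_generation_mwh: float, nameplate_mw: float) -> float:
    """Return the capacity factor as a fraction between 0 and 1."""
    return annual_generation_mwh / (nameplate_mw * HOURS_PER_YEAR)

# An invented 1000MW coal plant that generated 6,100,000MWh in a year:
print(f"{capacity_factor(6_100_000, 1000):.0%}")  # -> 70%
```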

It is often used, badly, in internet arguments about the relative merits of the available generating technologies. But, like any of these gross indicators, if you don’t fully understand what is bundled up in the calculation, you don’t really understand the metric.

The big coal-burners aim for very high capacity factors. As noted in the first baseload post, their business is based on burning as much coal as possible, ideally running flat out all the time. Anything over 90% would be a pretty solid result; numbers around 70% are far more common, and I’ve even seen some in the 50s. As you can be pretty sure these plants want to run all the time, the capacity factor is a very gross indicator of how much maintenance the plant needs and of its operating profile within the market. Coal is a pretty cheap fuel, and maintenance costs are relatively fixed, so these plants run as often as possible to smear that maintenance cost across more generation.

Consider too all of this in the context of “baseload”. A plant that only runs half the time is considered baseload? This is our benchmark for reliable electricity production?

For the gas burners, fuel cost is the main driver, so they run less frequently, waiting for the moments when the price of electricity is high enough to make some money. I will do a series on generation technology, but for the moment it is sufficient to say that gas lends itself more readily to this sort of on/off operation. For these generators, the capacity factor, taken in isolation, is an indicator of volatility in the market: a proxy for how often electricity was valuable enough to cover the cost of burning gas. Not surprisingly, there is a huge range in gas-plant capacity factors, with 50% being very high and low single figures common among the acute peaking plants. I have even heard a story of a small gas engine that was commissioned the day before the market hit its maximum price (it is capped at $12,500/MWh), ran for a few hours, paid off its capital and made some money, and was mothballed the next day to wait for the next price spike.

Using capacity factor as the sole comparison with renewables is farcical.

For wind power, the problem comes from the inclusion of “nameplate capacity” in the calculation. Unlike steam turbines, wind turbines don’t really have a hard upper limit. In a standard coal plant, the 4 x 250MW turbines will all produce 250MW when running hard; they will never go above this. Their nameplate capacity is the limit of their performance.

For a wind turbine there is an element of arbitrariness in the declaration of capacity. Sure, they have a safe upper limit, at which the turbine does a few things to protect itself, but that’s not the nameplate capacity. The nameplate is usually chosen to represent the output at a given wind speed; say 5m/s. So they might call it a 2MW turbine, but that is not the maximum output, nor even the average output. And that difference is important, because the relationship between wind speed and output is non-linear (the power in the wind scales with the cube of wind speed); i.e. the difference between output at 3m/s and 4m/s is LESS than the difference between 4m/s and 5m/s. The output gets bigger, faster, as wind speed increases. So averaging that output over a year, and dividing it by this somewhat arbitrary theoretical maximum, gives some idea of how much the turbines were used, but not a very detailed picture. It is little better than a guess at how likely a wind farm is to meet its production estimates within a one-year window, which in turn is a proxy for how windy it was last year. It is essentially a weather report.
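
A quick sketch of that non-linearity, using the ideal power-in-the-wind formula (the swept area is an arbitrary choice and real turbines capture only a fraction of this, so the absolute numbers are illustrative; the spacing between them is the point):

```python
# Power in the wind scales with the cube of wind speed:
# P = 0.5 * rho * A * v^3 (a real turbine extracts only a fraction of this).
RHO = 1.225   # air density, kg/m^3
AREA = 5000   # swept area, m^2 (arbitrary illustrative value)

def wind_power_kw(v_ms: float) -> float:
    """Ideal power in the wind through the rotor, in kW."""
    return 0.5 * RHO * AREA * v_ms**3 / 1000

for v in (3, 4, 5):
    print(f"{v} m/s -> {wind_power_kw(v):6.0f} kW")
# 3 m/s ->     83 kW
# 4 m/s ->    196 kW   (+113 kW for one extra m/s)
# 5 m/s ->    383 kW   (+187 kW for the same extra m/s)
```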

Knowing all of that, capacity factors for wind-farms range from about 20% to 40%.

While this is linked to a long-term problem I have with qualitative analysis in this field, there are better ways to talk about renewables output. They will be complex, but the interaction of renewable output with our demand profile is a pretty dynamic problem, so we need a more complicated dialogue to make sense of it.

There are two major flaws in using capacity factor to describe the output of renewables. The first is that it says nothing about the “shape” of the profile. A 40% capacity factor suggests that the farm ran flat out exactly 40% of the time. But is that how wind works? The wind blows at exactly 5m/s four days out of ten? Seems unlikely. Perhaps it ran at half capacity for 80% of the time? Or, better yet, maybe it ran at exactly 40% capacity all the time? That would be terrific! My main point here is that the detail is lost; no consideration of how this output meets our needs fits into such a coarse indicator. So, and we’re nearing the limits of my maths here, there are richer descriptions one could build from a wind farm’s output data-set. A frequency histogram of the values in the data set (in the industry this gets plotted as a “duration curve”) paints a picture of the data: how often output reaches maximum, how long it sits at zero, the range of values, that sort of thing. (A Fourier transform would go a step further again, revealing the periodicity in the output: daily and seasonal cycles.) Complicated? Yes, no doubt. Useful? Absolutely. An even simpler method might be just to quote the mean and standard deviation: the average production, and how far production typically strays from that average. Again, though, the mean will be thrown out severely by days of zero production, which the histogram would capture.
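
A minimal sketch of those simpler descriptors, using an invented output trace for a hypothetical 100MW wind farm:

```python
import statistics

# Invented hourly output (MW) for a hypothetical 100MW wind farm over a
# week; a real analysis would use a year or more of metered data.
output_mw = [0, 0, 12, 35, 60, 88, 100, 71, 40, 22, 5, 0] * 14  # 168 hours

mean = statistics.mean(output_mw)
print(f"mean: {mean:.1f} MW, std dev: {statistics.stdev(output_mw):.1f} MW")
print(f"capacity factor: {mean / 100:.0%}")

# A crude frequency histogram in 25MW bins: this is the "shape" information
# that the single capacity-factor number throws away.
edges = [0, 25, 50, 75, 101]
for lo, hi in zip(edges, edges[1:]):
    share = sum(lo <= x < hi for x in output_mw) / len(output_mw)
    print(f"{lo:3d}-{min(hi, 100):3d} MW: {share:.0%} of hours")
```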

The second is related to the first, and is particularly pertinent when considering solar electricity. Solar is regularly reported as having a capacity factor of around 25%, and given that it requires sunlight this isn’t really surprising. There is even a decent argument that the 25% should really be 50%: the “theoretical maximum” for solar surely doesn’t include night time, does it? Leaving that aside, a 25% capacity factor suggests that solar panels achieve maximum output for about 6 hours a day, and intuitively we know this is probably the three hours either side of midday. The internet argument goes that a 25% capacity factor isn’t enough, and that means we need to install four times as much generation. But this idea is mired badly in the baseload mindset, and utterly ignores the fact that demand changes during the day, and by quite a lot. Again, a better descriptor is possible, one that considers the relationship between what is essentially free energy and network demand. What if solar output matched air-conditioner demand? That seems pretty likely too; hot days are sunny days. I’ve linked to this article before, but it’s still useful. Look at how solar power is contributing to our grid at the moment. Perhaps with a bit of clever engineering and some insulated houses we could directly match solar output to air conditioners, and come home to a house that has been cooled automatically, with power generated on-site?
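
The arithmetic behind that adjustment is simple enough to spell out:

```python
# A solar plant at full output for ~6 hours a day scores 6/24 = 25% if the
# "theoretical maximum" counts all 8760 hours. Exclude night (roughly half
# the hours) and the very same production profile scores 6/12 = 50%.
full_output_hours = 6
print(f"all hours:     {full_output_hours / 24:.0%}")  # 25%
print(f"daylight only: {full_output_hours / 12:.0%}")  # 50%
```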

I’ll leave it there for the moment, but I wanted to seed some ideas. There are some huge opportunities to link renewable energy supply with electricity use, and even some very interesting options for direct energy use: in, say, a thermal-cycle air conditioner rather than a compressor-driven unit.

I’ll summarise the whole lot with “I don’t really like capacity factor as a stand-alone descriptor”, and encourage you to be skeptical of anyone who does like it. You’re not getting the whole picture.


Government and Renewables – Submissions

There are a couple of consultation and submission processes open at the moment, which have the capacity to significantly change the way Government engages with the renewable energy sector.

The first is a process to discuss the funding strategy of ARENA (the Australian Renewable Energy Agency, formerly ACRE, the Australian Centre for Renewable Energy, which I worked on):

Submissions—draft general funding strategy.

And second, there is a review into the Mandatory Renewable Energy Target:

Renewable Energy Target

ARENA is a statutory authority outside departmental/ministerial structures, governed by a board of industry experts. It is allocated money in the Federal budget, and then decides how to spend it. ARENA inherited a couple of programs already running, with contract management responsibilities to complete, such as the REDP (Renewable Energy Demonstration Program), Solar Flagships and a few other little odds and ends. Full list here.

I hope this consultation process does influence their decisions, as the past few years have been difficult in renewable support programs. The very large ones, such as Solar Flagships, have struggled to get contracts signed and money out the door. Why? It’s hard to put a finger on any one cause, but my suspicion is that as the programs become more ambitious (Solar Flagships would use $1B to fund “the largest solar plant in the world”) the risk profile extends beyond what government wants to engage with. Most of these programs are on a 1:1 or 1:2 funding agreement, meaning private sector funds must be secured before the government will sign.

There is a giant body of work on how government should engage with renewables, the cornerstones of which are the Wilkins review from 2008 and the Garnaut review from 2011. Of these gigantic documents, chapter 6 of Wilkins is particularly interesting, discussing the most effective way for government to support renewable energy technologies. The thrust is that government involvement should be matched to how “mature” the technology is, and that the earlier the intervention on the innovation pathway, the more beneficial to the Australian economy. On the pathway from Basic Research -> Technology Research (application) -> Market Demonstration (which includes “demonstration scale” then “pilot scale”) -> Commercialisation -> Market Accumulation -> Diffusion/Acceptance, government should stop being involved somewhere around the demonstration/pilot stage and let the “pull” of the market do the rest. My understanding is that this is how ARENA will operate, so expect to see a change from the giant projects of the last few years to more early-stage/university research projects. This submission process will flesh out whether or not the rest of the sector thinks this is a good way to go.

The RET review has been a big deal in the press, as everyone in the industry has an opinion on it. As an example, here is a piece from Climate Spectator, summarising some of the industry opinions.

The RET was created in 2001 as a way of supporting renewable energy. I won’t go into the mechanism in too much detail today, but I’ve talked about it previously here.

The “problem” being reviewed here is the target itself. The RET was originally created to ensure 20% of Australia’s electricity came from renewables, but it was legislated as a gigawatt-hour target, not a percentage-of-generation target. This seems esoteric at first, but it means they projected future electricity use and calculated what 20% of that would be. However, since electricity use has dropped in the last three years, those projections are now too high, and it looks like the RET will deliver more than 20% renewables as a result.

Why is that a problem?

A lot of people don’t think it is a problem; too much renewable electricity seems like a good problem to have. But the difficulty is that the RET drives up electricity prices. Not by much, compared to network costs in the last few years, but that *is* the purpose of the policy: it is a “cross subsidy”, where the costs are smeared across all consumers and renewables are supported. So, with the political pressure of higher power prices, there is a temptation to bring the RET back to 20% of actual generation and reduce the upward pressure on prices. The flipside is what happens to the renewable energy sector. The concern within the sector is that changing the target will reduce the incentive for renewables (which is true), leading to projects falling over or becoming unprofitable. Whether that outcome eventuates remains to be seen, and I would expect most of these projects to have priced a bit of REC price volatility into their revenue estimates. Obviously, though, this straw might be the one that breaks the camel’s back.

What to do then? Grant King, CEO of Origin Energy, has suggested raising the target to 25% but also moving the date back five years: essentially extending the same rate of growth out past the end of the program. They could rein the target back in and fix it to a percentage of all electricity again, or do nothing. It’s a tough call, and I’m glad it’s Minister Combet’s to make, not mine.


How Solar PV is Changing our Grid

The Conversation.

Just a short one from me, to link to this excellent piece in The Conversation, which tells the story of how solar PV is changing our electricity market. Graphs! Analysis! Please, more coverage like this.


More Dams: Why Not?

Hydro power has been used as a stick to beat the left for years, fairly ignorantly as it turns out. As recently as last week, luminaries such as Piers Akerman, renowned hydrogeologist and power-systems engineer, have been hoping to wedge the left by pointing out their perceived hypocrisy. The old adage goes: if the left were really serious about addressing climate change, they would dam the country to capacity and power our economy to hydroelectric, zero-emissions victory!

There are three major errors bundled into this statement, all easily disproved. I’m surprised Piers’ engineering training hasn’t led him to these conclusions himself, given how confidently he backs hydro, calling it “the only real alternate source of renewable energy”.

For starters, the renewable and sustainable credentials of large-scale hydro-electric power are very much in dispute. From a sustainability point of view, the problem is methane. Dams require large areas of land to be flooded, usually land containing plant material. The water excludes oxygen, causing the plant material to decompose anaerobically, and anaerobic decomposition produces methane, a powerful greenhouse gas. The maths of the comparison is complicated, but lowering CO2 emissions by emitting more methane, roughly 20 times more potent as a greenhouse gas, is a pretty risky strategy.

Whether damming a river counts as a renewable energy source is debatable too. I assume in this case Piers is referring to how wonderfully dispatchable hydro power is: one literally just turns on the tap and electricity comes out. But, obviously, rain is required for there to be any water to let out. Even really wet places run out of water and therefore run out of electricity. The Basslink DC cable from Tasmania to the mainland was built to transfer Tasmania’s abundant hydro power and make some money in the NEM. However, it stopped raining, and rather than exporting, Tasmania imported 70% of its electricity in 2007. For the same reason that “more dams” is not a suitable potable-water supply policy, more dams will not guarantee our electricity supply.

How much capacity are we talking about anyway? If we built every single dam available to us in Australia, how much electricity could we produce? The current best estimate for “technically feasible” (at any cost) hydroelectric power in Australia is 60TWh annually. In 2009-10, we used closer to 240TWh, about four times the total hydro power available.

So can hydro-electricity power Australia? Not really. Is it renewable anyway? Almost certainly not. Are there actually low carbon alternatives? Absolutely. And I’ll get to them in future posts.


The NEM, Bidding Orders and the Carbon Price

Understanding the relationship between “bidding orders” and the National Electricity Market (NEM) is central to any understanding of the carbon price and its implications. This is where the carbon-price rubber hits the carbon road: the point where the policy actually intervenes in the market. Again, in this post I’m going to avoid the politics of the carbon price and talk about the where and how of the policy. Suffice to say the current policy is not exactly as I would have designed it, but I’m not an elected representative, so it’s hardly my problem, is it?

This post is intended to address the common fallacy that, since households are receiving compensation for the carbon price impact, there will be no change in consumer behaviour. Here I hope to demonstrate, firstly, that this isn’t true, and secondly that household behaviour doesn’t really matter much anyway.

The NEM is an odd shorthand for the electricity network connecting Queensland, NSW, Victoria, SA, Tasmania and, obviously, the ACT. If you’re an enthusiast, here are 28 pages on the topic. There is a separate network on the western seaboard that hasn’t been connected across the Nullarbor, called the South West Interconnected System, which I’m not that familiar with, but I understand it works very similarly. Internationally, the NEM is regarded as about the largest independent electricity network, which I assume means they’ve discounted the interconnected European grid. Its trading practices are well regarded internationally, delivering low-cost electricity per kWh, but in recent years the related network expenses have dragged the whole electricity supply price up (PDF from Electricity Users Association, comparing international electricity prices).

This market determines which generators get dispatched, when, and at what price. For today’s post we’ll just concentrate on the burners, as intermittent renewables make this whole concept a lot more complex.

Every day, each individual generator bids into the market the price at which it is prepared to offer capacity. As an example, consider a 1000MW black-coal-burning plant on an average day. I will be completely making these numbers up, but I assure you they are about right.

The generator will know its cost of production, as a bundled cost including fuel and operational expenses (OPEX). Let’s say the fuel to supply a MW for an hour costs $10 and the OPEX for the same amount of energy is also about $10, for a total of roughly $20/MWh. The operators, or “traders”, for the business might decide that they need to receive $40/MWh to bother operating, so they offer their first 200MW at $40. They also decide that demand is unlikely to reach very high levels today, so they bid to get as much as possible dispatched: the next 400MW is offered at $50/MWh, the following 200MW at $150/MWh, and they withhold the last 200MW to perform maintenance.
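
That bid structure, written out as data (same invented numbers):

```python
# One day's bid ladder for the invented 1000MW black-coal plant:
# (capacity offered in MW, price in $/MWh). The final 200MW is withheld
# for maintenance, so it simply isn't offered.
coal_plant_bids = [
    (200, 40.0),    # first tranche: short-run cost plus a margin
    (400, 50.0),    # priced to be dispatched on a normal day
    (200, 150.0),   # only runs when prices spike
]
print(f"offered: {sum(mw for mw, _ in coal_plant_bids)} MW of 1000 MW")
```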

Every generator in the NEM does this every day. There are some very complex rules about reserving bids and notifying of maintenance, but on a normal day this is what happens.

Then AEMO takes the bidding structures for each of the 50-odd generators across the network, and ranks their bids. Maybe the brown coal generators in Victoria fill most of the lower capacity, then black coal in NSW and Qld, then a few gas plants around the country if things get desperate.

As electricity demand rises during the day, AEMO keeps dispatching from plants with higher bids. As each new, higher bid is dispatched, every generator being dispatched at that moment receives the new, higher price.

Cheaper electricity gets dispatched the most frequently.
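
Putting those last few paragraphs together, here is a toy version of the clearing process (the generators and their bids are invented, and the real dispatch engine is vastly more sophisticated):

```python
# Toy merit-order dispatch: stack bids cheapest-first, dispatch until demand
# is met, and pay every dispatched generator the marginal (highest) price.
bids = [
    ("brown coal (Vic)", 1000, 25.0),
    ("black coal (NSW)", 800, 40.0),
    ("black coal (Qld)", 800, 50.0),
    ("gas peaker",       300, 150.0),
]  # (name, MW offered, $/MWh)

def clear_market(bids, demand_mw):
    dispatched, remaining = [], demand_mw
    for name, mw, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(mw, remaining)
        dispatched.append((name, take))
        remaining -= take
        marginal_price = price  # the last bid dispatched sets the price
    return dispatched, marginal_price

dispatched, price = clear_market(bids, demand_mw=2200)
for name, mw in dispatched:
    print(f"{name}: {mw} MW, paid ${price}/MWh")  # everyone gets $50/MWh
```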

Pricing carbon dioxide emissions in this mix changes the OPEX calculations for each generator, and thus their bids into the NEM. Those with higher carbon intensity will probably bid more than those with lower carbon intensity and the bidding order will change.

The carbon intensity of each generator is strongly related to its fuel source and weakly related to the equipment the generator is using. A casual glance through the National Greenhouse Accounts Factors Workbook (page 12 of the pdf) gives a good indication of the emissions intensity of each fuel source. Anthracite (black coal) and brown coal have similar emissions intensities (88.2 and 92.7 kg CO2 per GJ respectively), but black coal contains almost three times the energy per tonne (29 versus 10.2 GJ/tonne).

Coming back to the example, the black coal generator in NSW would expect the carbon price to add about $23 onto each MWh in its OPEX calculations (each MWh from black coal emits roughly a tonne of CO2, and the price is $23 per tonne). So, where previously it might have bid in at $40/MWh, it now bids in at around $60/MWh. However, as the fee only appears as an expense to the company, it may choose not to pass through the full cost.
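
The back-of-envelope arithmetic behind that number, assuming (my assumption, not a figure from the workbook) a thermal efficiency of around 35% for the plant:

```python
# Rough carbon cost per MWh for a black coal generator. The emissions factor
# is the NGA Factors figure quoted above; the 35% efficiency is an assumption.
emissions_kg_per_gj = 88.2    # black coal, kg CO2-e per GJ of fuel burned
efficiency = 0.35             # GJ of electricity out per GJ of fuel in
carbon_price = 23.0           # $/tonne CO2-e (the 2012-13 fixed price)

fuel_gj_per_mwh = 3.6 / efficiency  # 1 MWh of electricity is 3.6 GJ
tonnes_per_mwh = fuel_gj_per_mwh * emissions_kg_per_gj / 1000
print(f"{tonnes_per_mwh:.2f} t CO2/MWh -> "
      f"~${tonnes_per_mwh * carbon_price:.0f}/MWh")  # ~0.91 t -> ~$21/MWh
```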

This is why it is so difficult to accurately model the costs of the carbon price; each individual generator can choose how much of the price they pass through.  Some of the higher-intensity plants might only pass through three-quarters of the price to ensure they are dispatched. Others might pass it all through because it suits their business model.

Overall, though, the plants with the highest carbon intensities will be dispatched less frequently, and Australia’s carbon emissions will probably decrease. Notice, however, that the purchasing decision happens well away from the consumer, who only experiences an increased cost. While electricity has always been a household cost, the trickle-through of the carbon price may lead individuals to respond to this increase and improve their household energy efficiency. Perhaps it will also encourage the generation sector to consider its efficiency options more carefully, as the benefits of energy efficiency are magnified by the carbon price. Many of Australia’s plants were built in the 1980s and well before; perhaps the carbon price will encourage a few to upgrade some significant bits of gear.

And so this price signal will play out throughout the economy. Again, electricity has always been a cost of manufacturing goods, so that proportion of the cost of manufacture will increase a little bit, and may differentiate some products. My suspicion though is you probably wouldn’t notice it until you were buying aluminium by the tonne.


Unravelling the Baseload Furphy – part 1 of X

The Australian Energy Market Operator (AEMO) have released their 2012 Statement of Opportunities report and there are some very interesting outcomes for the enthusiast. However, the main reason I want to talk about this today is to start addressing some of the common misconceptions about the electricity market and point you at some of the more accurate answers.

Baseload. It’s the word everyone knows about electricity generation now, bandied around in pubs and on QandA. Broadly speaking, baseload generation can be thought of as ‘energy supply that is available all the time’. It was mostly created as a term to differentiate fossil fuel power from renewables, whose energy source is intermittent. A less conciliatory way of thinking of baseload is “electricity supply which is too inflexible to switch on or off”.

It extends from the idea that the big thermal power plants (in particular coal and gas boilers, not turbines) require a long time to start up and shut down. Big coal burners I’ve worked on in the past require 24 hours’ notice to heat a boiler and get to full load; gas boilers are slightly faster because the fuel ignites more easily, but we’re still talking about 16 hours. At the other end, shutting down a boiler from flat out can take more than 8 hours.

Each power station will have a number of boiler-turbine trains, each rated to about a quarter of the station’s output. So, in a week where demand went from zero to maximum and back to zero, a four-train power plant would spend 4 boiler-days (of a total of 28 boiler-days in the week) starting boilers and not making electricity, and more than another boiler-day shutting them all down.
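
Spelling out that boiler-day arithmetic, with the startup and shutdown times quoted above:

```python
# Boiler-days a four-train coal station spends starting and stopping over a
# week that runs from zero to maximum and back to zero.
trains = 4
startup_hours = 24    # heating a boiler to full load
shutdown_hours = 8    # winding a boiler down from flat out

total_boiler_days = trains * 7                  # 28 boiler-days in the week
starting = trains * startup_hours / 24          # 4.0 boiler-days
stopping = trains * shutdown_hours / 24         # ~1.3 boiler-days
print(f"{starting + stopping:.1f} of {total_boiler_days} boiler-days "
      "spent not generating")
```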

To avoid start-ups and shut-downs, generators do some cunning things. Off-peak hot water is the classic: residential consumers receive a signal from the network that they can start heating water now, and the spare capacity overnight is dumped into hot water at a reduced price. This does, however, give an artificial indication of how much electricity we need overnight. Relatedly, I have heard that Japan solved this problem (nukes operate much the same way as a coal plant) by encouraging similarly cheap power overnight, leading to the extraordinary streetlights we see in Tokyo today.

So, think of baseload supply as “on whether you want it or not”. One does not build more baseload supply to meet higher peak demands.

And finally we arrive at today’s furphy: that baseload supply is “planned” and “controlled” by government or AEMO; that we “need” baseload supply and someone should fix it.

AEMO’s Statement of Opportunities, linked at the top of the page, is about as close to planning and controlling the market as they get. It is an assessment of network capability over the coming years: a forecast of how our electricity demand will grow (although it has shrunk in the last three years, hence the revision of requirements) and whether we have the generation capacity to meet it.

But that’s the end of AEMO’s involvement. Now, potential investors will look at this information and decide if they really do want to build a power station. And if they do, what sort will it be? There is no mandate that new generation be baseload, intermittent or peaking. Each individual investing corporation decides for themselves whether or not they can make money in the current market. And given what I’ve told you above, do you think baseload generation would be a new investor’s first choice?

The table on page 3 of the Executive Summary supports this: new generation projects announced across the NEM include 14GW of wind, a little under 12GW of open-cycle gas (classic peaking power), and about 3GW of both black coal and combined-cycle gas, the two major baseload technologies.

The rest of the report discusses outages and developments on existing equipment. Again, there are some interesting bits and bobs in here (I didn’t realise there was any hope of Munmorah or Morwell ever running again). In particular, the 566MW open-cycle gas turbine plant at Mortlake is a large development, able to deliver about half a coal plant’s output in under an hour.

And lastly, mostly for my interest, the 60MW upgrade to Eraring power station, with another 60MW coming. I’ve seen this before in other plants, and in those cases the extra capacity came from an uprated turbine; all other components were unchanged. This suggests that there are STILL developments being made in turbine design, which I find endlessly fascinating. That we can take a plant designed 40+ years ago and wring another 5 or 10% out of it with a new turbine design blows my mind.



Why are electricity bills going up?

Gillard and Abbott are trading rhetoric at the moment, trying to blame someone for increases in electricity bills. I’m not touching the politics of this issue as I’m pretty confident the political discussion will be neither edifying nor enlightening.

I’ve talked a bit on my other blog about how an electricity bill is structured. This is important background in considering how electricity bills have increased over time.

This pdf from the NSW Government, looking at the drivers behind the recent price rises, is pretty unequivocal about the causes:

“At least 80% of the percentage increases in the IPART 2010 determination of regulated retail tariffs are attributed to increased network charges.”

There are definitely other contributors: the Mandatory Renewable Energy Target; feed-in tariffs for solar panels; Yallourn power station being underwater recently hasn’t helped; and a few other little regulatory odds and ends. Also, now that the carbon price has been introduced, it will increase bills as well.

The rest of the bill, the cost of energy, hasn’t changed nearly as dramatically in the last few years. A glance through the Australian Energy Market Operator’s numbers at the bottom of the page confirms this.

So why are we suddenly spending so much on the network? Again, there is a multitude of reasons: assets reaching the end of their life and needing replacement, and some perverse incentives for State governments to “gold plate” their networks, though that contribution isn’t likely to be too significant. No, the big reason is our increased prosperity and our desire to air-condition our homes.

This excellent piece from Matthew Wright at Climate Spectator describes the economics of increased air-conditioning demand and the corresponding network demand outcomes. Being more of an industrial specialist, I was astonished by the size of some of these numbers. Residential air-conditioning loads across the grid could be as high as 16GW; for comparison, that means if everyone turned on their AC at once, about 16 large (roughly 1GW) coal-fired power plants would have to ramp up to full power. Off the top of my head, 16 is about the number of plants of that size in Australia.

So, when all the consumers suddenly have the capacity to double their electricity consumption with the flick of a switch, what does this do to an electricity network? One of the more illuminating parts of Gillard’s recent speech on this topic was the “roads are like electricity networks” analogy: to avoid congestion (which, stretching the metaphor, means blackouts) the network must be sized to handle the maximum possible demand.

Are there alternatives? The Draft Energy White Paper puts the electricity network upgrade cost at about $7000 per 2kW (which is pretty standard) air-conditioner installation. Since residents aren’t charged connection fees for big new equipment, nor demand fees like industrial customers, this extra network construction cost must be averaged across the grid in “network costs”. Even if you don’t have a glorious new LG ArtCool to drop your temperature during the cricket.

One idea is to make customers pay that upgrade cost at the point of sale: “That’ll be $2000 for the air con, and $7k for your network upgrade.” While broadly sensible, this would more than quadruple the sticker price of an air conditioner, and I doubt many politicians would find that palatable.

Improve the efficiency of air conditioners? Wright’s article above suggests this approach, and I agree it has merit. The problem I see is the political reality of this sort of intervention. The so-called Pink Batts scheme was also designed to address this problem: better insulated houses mean lower air-con loads and lower peak demand. Further, there is good evidence that it was successful in this aim, as electricity use in the NEM (the Eastern states and SA) has decreased for the last three years. There were other factors, of course, but on the only metric available it seems successful. Yet despite the technical success it was a political disaster for Labor and Peter Garrett. I suspect politicians will be nervy in future about committing to this sort of diffuse incentive for fear of the repercussions. Incentives for solar hot water could also have an impact, and the Small-Scale Renewable Energy Target does help a little in this area. However, as most of the electricity it displaces is off-peak, the peak demand impact is likely to be small.

My preference leans toward a couple of different demand side response ideas. First, and there are various moves afoot to achieve this, a market that trades demand would provide incentives for large industrial users to switch off at times of high demand. This will probably work at lowering demand, but the customers switching off will be the first to benefit financially, with the trickle down to residential consumers likely quite slow.

The standout, for a number of reasons, is a proper smart meter network. There would be flow-on effects that will make a lot of people grumpy, like time-of-use pricing and perhaps the ability to remotely switch off loads, but it is the most effective way for consumers to make informed decisions about their energy use and costs. The common cry from those opposed is that “it will make me pay more”, which is not entirely true. More accurate would be to say that smart meters COULD make you pay more, but they also give you powerful tools to pay less.

Trials as part of the Smart Cities program showed customers on Magnetic Island were saving up to $25 a month on their electricity bills, by responding to price signals through their smart meters. Why not let people in the wider network have this opportunity, and if some people’s bills go up, maybe they will have a stronger incentive to curb their energy use?