Some problems with calculating energy efficiency benefits: Part 2
In Part 1, we saw that the simplifying assumptions necessary to calculate the Energy Star program’s energy, cost, and emissions savings introduce a good deal of uncertainty into those calculations. In Part 2, below, we’ll look at two reasons to worry that Energy Star’s numbers overestimate the savings actually attributable to the program.
Counterfactual consternation
A fundamental difficulty in measuring the effects of an intervention is that it requires attempting to quantify a counterfactual – we want to know the difference between what actually happened and what would have happened without the intervention. This can sometimes be relatively straightforward. Suppose mosquito bed netting were distributed in a region that has consistently seen ~100 annual cases of malaria over the prior decade. If there are only 53 cases the next year, then we can be reasonably confident that the netting prevented some 40+ cases of malaria. However, that confidence is only warranted if everything else stayed the same. If the area also saw a severe drought that year, then some of those avoided malaria cases would likely be the result of a reduction in mosquito breeding grounds and perhaps even a reduction in population due to deaths caused by the drought. Likely, the netting helped, but it is very difficult to say how much.
This sort of complication is especially relevant to Energy Star’s estimates of its impacts on emissions. In a footnote, it explicitly says that these numbers “do not account for overlapping impacts of regulatory programs and may be affected by other dynamics on the electrical grid.” That “may be” surely undersells the point, particularly for the historical figures. Even a casual survey of the effects of “other dynamics” on the electrical grid over the past three decades is the work of a book, not a blog post. And those effects just as surely pale in comparison to the effects of regulation. Indeed, the ever-quickening pace of energy efficiency-focused regulation means that it is likely to skew even annual numbers. Without taking such effects into account, we simply cannot say what effect Energy Star has had with any precision.
Hell is (predicting) other people
Above, we looked at some unavoidable assumptions that go into Energy Star’s benefits calculation. The final issue we’ll consider is an assumption that underlies the entire methodology.
The benefits touted by Energy Star are based on amounts of avoided energy consumption. However, it doesn’t have (and can’t feasibly collect and process) the data it would need to directly measure those amounts. Instead, it calculates them from the data it does have – the energy efficiencies of the products (and homes, etc.) it certifies compared to those it doesn’t. The problem is that actual energy consumption depends on two things: how efficient a product is and how much it is used. Energy Star assumes that when a consumer purchases an Energy Star product, it makes a difference to the former but not the latter.
In reality, the amount of use is determined by human users with human psychologies, and human psychologies are complicated. They are heuristic kluges subject to myriad effects such as “moral licensing,” in which doing something good in one respect (e.g. buying an energy efficient washer) lets us feel like we can let ourselves slide in another (e.g. washing smaller loads of laundry more frequently).[1] Indeed, we needn’t even appeal to “psychological fallacies” to see the perils of assuming that consumers will use Energy Star products the same way they use others. One might perfectly rationally upgrade to an energy efficient HVAC system not to save on the electric bill, but to keep the bill the same while enjoying warmer winters and cooler summers. Regardless of the reasons, such changes in behavior are well known to result in the “rebound effect,” whereby increases in energy efficiency are at least partially offset by increases in use (sometimes to the point of increasing overall consumption).[2]
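To make the structure of the problem concrete, here is a minimal sketch in Python – with entirely made-up numbers – of how a savings estimate that holds use fixed compares with actual consumption once use responds to the efficiency gain.

```python
# Hypothetical illustration of the rebound effect. All numbers are invented;
# the point is only the structure of the calculation: consumption depends on
# both energy intensity (kWh per use) and amount of use.

def annual_kwh(kwh_per_use: float, uses_per_year: float) -> float:
    """Annual consumption as energy intensity times amount of use."""
    return kwh_per_use * uses_per_year

baseline = annual_kwh(kwh_per_use=1.00, uses_per_year=300)   # conventional product
assumed  = annual_kwh(kwh_per_use=0.75, uses_per_year=300)   # certified, same use assumed
rebound  = annual_kwh(kwh_per_use=0.75, uses_per_year=360)   # certified, 20% more use
backfire = annual_kwh(kwh_per_use=0.75, uses_per_year=420)   # certified, 40% more use

print(f"Estimated savings (use held fixed): {baseline - assumed:.0f} kWh")   # 75 kWh
print(f"Savings with a modest rebound:      {baseline - rebound:.0f} kWh")   # 30 kWh
print(f"'Savings' when use rises enough:    {baseline - backfire:.0f} kWh")  # -15 kWh
```

The numbers are arbitrary, but the structure is not: whether the estimate overstates the benefit depends entirely on how use responds, which is exactly what the methodology assumes away.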
While data is still thin on the ground, there is at least some evidence of an Energy Star rebound effect. Ohler et al.[3] examined two household surveys on energy use specifically to determine the effects of Energy Star refrigerators, clothes washers, dishwashers, computers, and televisions on actual energy consumption. Of these, only refrigerators yielded a statistically significant reduction in annual energy use. It makes sense that if only one category of product made a difference it would be refrigerators, which are usually in more or less constant use. However, even here, the average reduction in energy consumption was only 3%, despite Energy Star certification for the category requiring electricity usage at least 10% below minimum federal standards.
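As a back-of-the-envelope comparison (using the headline figures above, and glossing over the fact that the 10% is measured against the federal minimum while the 3% is measured against non-certified units in the surveys), most of the rated gain simply does not show up in measured consumption:

```python
# Rough comparison of the rated efficiency gain with the measured reduction in
# consumption for Energy Star refrigerators. The 10% and 3% are the figures
# cited above; note they are measured against different baselines, so this is
# only an order-of-magnitude comparison.

rated_reduction = 0.10     # certification threshold: at least 10% below federal minimum
measured_reduction = 0.03  # average observed reduction in annual consumption

shortfall = 1 - measured_reduction / rated_reduction
print(f"Share of the rated saving not visible in measured consumption: {shortfall:.0%}")  # 70%
```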
Ohler et al. also give some insight into another way that consumer behavior can undermine the benefits of energy efficiency. Their results suggested that Energy Star televisions may actually be associated with increased energy usage. This is because consumers can turn off the energy-efficiency features of TVs in order to get better performance. This finding aligns with research on energy efficiency in another major product category – thermostats. For instance, a 2022 working paper found that smart thermostats provided no statistically significant energy savings.[4] The problem, it seems, is that users frequently override thermostat settings in ways that undermine the projected 10–23% benefits.[5]
A cautionary conclusion
To be clear, the purpose of this post is not to (ahem) throw shade at Energy Star. Problems with its benefits methodology do not mean it isn’t a beneficial program. For one, it has almost certainly increased consumer awareness of lifetime costs, as opposed to initial costs, and it has helped drive an overall market trend toward greater energy efficiency. Nor is there any reason to think that all of the benefits it touts are undone by the rebound effect or owed to parallel regulatory changes. There is simply reason to be cautious about anything like a precise accounting of those benefits unless and until such effects are incorporated into the calculation.
[1] Wikipedia: Self-Licensing.
[2] Wikipedia: Rebound effect (conservation).
[3] Ohler et al., 2020. “A study of electricity savings from Energy Star appliances using household survey data.” Energy Policy.
[4] Brandon et al., 2022. “The Human Perils of Scaling Smart Technologies: Evidence from Field Experiments.” National Bureau of Economic Research. This study does not seem to have used Energy Star certified thermostats (I am not aware of any that do), though there is no reason to expect a difference in the frequency or nature of user overrides.
[5] The data is not completely univocal on this. Huchuk et al. found that the negative effects of user overrides were largely offset by overrides that reduced consumption, though these results appear to be in the minority.