Center for Energy, Economic &
Environmental Policy
Rutgers, The State University of New Jersey
33 Livingston Avenue, First Floor
New Brunswick, NJ 08901
http://ceeep.rutgers.edu/
732-789-2750
Fax: 732-932-0394
Working Paper #3
“PACT-a-Mole”: The Case Against Using the
Program Administrator Test for Energy
Efficiency Programs
Frank A. Felder and Rasika Athawale
January 2016
“PACT-a-Mole”: The Case Against Using the Program Administrator Test for Energy
Efficiency Programs
Frank A. Felder1 and Rasika Athawale2
Abstract
Flat to declining wholesale electricity prices, natural gas prices, and demand, combined with
policies pursuing all cost-effective energy efficiency, are prompting a reevaluation of cost-benefit
analysis of energy efficiency programs. In particular, the Total Resource Cost test and the
Societal Cost test are being questioned as to whether they should be replaced by the Program
Administrator Cost test. This paper makes the case against replacing these tests.
Keywords
Cost effectiveness, Energy efficiency programs, Net energy saving, Discount Rate, Non-energy
benefit
1 Center for Energy, Economic & Environmental Policy, Edward J. Bloustein School of Planning
and Public Policy, Rutgers University, 33 Livingston Avenue, New Brunswick, NJ 08901, USA
([email protected])
2 Center for Energy, Economic & Environmental Policy, Edward J. Bloustein School of Planning
and Public Policy, Rutgers University, 33 Livingston Avenue, New Brunswick, NJ 08901, USA
I. Introduction
In the United States, the deployment of energy efficiency (EE) programs has made use of
ratepayer monies since the energy crisis of 1973. Because of the use of public funds, cost-
benefit analysis (CBA) of such spending has been an integral part of EE program planning,
delivery, and evaluation.3 A combination of factors, including the deep and long recession that
began in 2008, led to flattening electricity demand and a dramatic reduction in wholesale natural
gas prices. These trends put downward pressure on wholesale electricity prices and
contributed to a reevaluation of the metrics policymakers use to evaluate EE programs.4
Some energy efficiency analysts want to reevaluate whether the Total Resource Cost
test (TRC) and the Societal Cost Test (SCT) should be used in the analysis and evaluation of EE
programs. One stylized example used to motivate this reevaluation illustrates that under current
supply cost assumptions, Home Performance with Energy Star, a common EE program, would
not save enough to be justified under a TRC test (Neme and Kushler 2010). If programs that
save 25-30% of heating usage cannot be justified, these authors ask rhetorically, how can deep
energy retrofits be justified?5 They conclude that something must be wrong with the TRC test.
Others (Vine et al. 2012; SEE Action 2015) have proposed revising the TRC test so that more
emphasis is placed on carbon emissions reduction.
A common refrain, if not a mantra, is that EE is the most cost-effective option.
Statements such as “Energy efficiency is one of the easiest and most cost effective ways to
combat climate change, clean the air we breathe, improve the competitiveness of our businesses
and reduce energy costs for consumers” (DOE, undated) are frequently made. The U.S.
Environmental Protection Agency released a report with a variation of this claim in its title
(Prindle 2009). For some, EE is the “low-hanging fruit” of U.S. energy policy
(Latiner 2009); for others it is the “invisible fuel,” which is of course the cheapest (The
Economist 2015). Vine et al. (2012) make an even stronger claim: “First, energy efficiency has
proven itself as a cost-effective resource and is widely regarded as the least-cost utility system
resource available; so much of the historically intense scrutiny has faded in many (but not all)
states.” Along with this claim lies a corollary theme that the existing energy efficiency gap
(Jaffe and Stavins 1994) – underinvestment in energy efficiency – is solely a result of market
4 U.S. wholesale natural gas prices at Henry Hub dropped from $8.86/mmBTU in 2008 to
$4.39/mmBTU in 2014 (US EIA, 2015). Retail electricity demand remained flat from 2007
through 2014, with a major dip in demand in 2009 (US EIA, 2015).
5 A deep energy retrofit is a whole-building analysis and construction process that uses
integrative design to achieve much larger energy savings than conventional energy retrofits.
Deep energy retrofits are often very expensive propositions, and some experts have voiced
skepticism about the use of public funds for such projects, which they believe are not cost
efficient compared with other available options for achieving energy goals. For one such expert
opinion, see “Deep energy retrofits are often misguided” (Holladay 2014).
failures that can be corrected with policy and market interventions (LBNL 1996).
However, given recent experience in which aggressive, let alone standard, EE programs fail
the TRC test, how can one believe the EE mantra? In addition, if an investment hurdle
rate is applied that is larger than the discount rate used by most analysts, who ignore the
irreversibility of investments and the option to wait (Jaffe et al. 2004), then even more programs
would fail the TRC test. One possible resolution is to question the validity of the mantra, either
in general or by claiming that there are numerous exceptions to it. This approach involves working
through the long-standing and ongoing split in the literature regarding just how cost effective
energy efficiency is (Felder 2013; Allcott and Greenstone 2012). One of many examples is a
recent experimental study of the largest U.S. residential weatherization program, which finds that
the program’s costs outweigh its benefits (Fowlie et al. 2015, working paper 020).
Another option is to claim that the TRC and SCT do not include significant benefits,
specifically non-energy benefits (NEBs), and therefore systematically and materially
underestimate the value of EE. A complementary critique of the TRC and SCT is that estimating
the incremental cost of EE measures, which both tests require but the Program Administrator
Cost Test (PACT)6 does not, is “inherently difficult” (NHPC 2011) and fraught with error.
(Publicly known databases of incremental measure costs such as California’s 2004-2005
Database for Energy Efficiency Resources – DEER Study – only account for equipment costs
and do not include soft costs such as those related to design, risk mitigation and transaction costs
(Mahone 2009)). Analysts who advance the NEB claim then propose one of four approaches
for policymakers (Neme and Kushler 2010; Muncaster et al. 2011; Skumatz 2015): 1. Adjust
the TRC to only include the costs associated with energy savings; 2. Include NEBs in the TRC
and SCT; 3. Modify the TRC; or 4. Replace the TRC and SCT with the PACT.
This paper critiques the proposal to replace the TRC (and also the SCT) with the PACT.
After summarizing the various EE CBAs, Section II examines the claims and counterclaims
related to the case for replacing the TRC and SCT with the PACT. Section III discusses future
directions.
II. The Case against the Total Resource Cost Test and Counterarguments
A. Description of Energy Efficiency Cost-benefit Analyses
EE can refer to measures (e.g., an air conditioner, a refrigerator), projects (an integrated
set of measures that should be evaluated as one due to the complex interactions between them),
programs (e.g., residential appliance or commercial lighting), or portfolios (integrated
6 Also called the Utility Cost Test, depending upon whether the program administrator is a utility
or other administrative entity.
combinations of programs). To avoid repeating the terms measures, projects, programs, and
portfolios, the word program generically refers to all of them when distinguishing among them
is not relevant to the specific point being made.
The five standard EE CBA calculations are presented in Table 1. Each of these calculates
a different measure of cost effectiveness from a different perspective as indicated by their names.
The use of CBA to evaluate EE programs is long-standing and prevalent (California Public
Utility Commission and California Energy Commission 1983; Kushler et al. 2012). A recent
survey of EE CBA, however, concludes that “diversity and inconsistency among states is the
rule” with respect to EE CBA (Kushler et al. 2012; Skumatz 2015).
Table 1: Benefits and Costs Considered by Various Energy Efficiency Cost-Benefit
Analyses

Question the CBA test answers:
  PCT  — Are participants better off?
  PACT — Will utility bills decrease?
  RIM  — Will rates decrease?
  TRC  — Will energy costs decrease?
  SCT  — Is society better off?

Component                                      PCT   PACT   RIM   TRC   SCT
Avoided Costs                                         A      A     A     A
Savings from Other Resources                                       S     S
Non-monetized Benefits                         Np                        Ns
Transaction Costs                              -T                        -T
Incremental Equipment and Installation Costs   -I                  -I    -I
Program Overhead Costs                                -P     -P    -P    -P
Rebates and other Incentive Payments           R      -R     -R
Bill Savings                                   B             -B

Source: EPA 2008 with modifications made by authors.
The energy gap can be formalized based upon Table 1. This gap occurs in theory when an EE
program would not be implemented by the participant without a rebate or incentive and yet the
program passes the SCT. These two conditions result in the following pair of inequalities:
(1) Np – T – I + B < 0 < A + S + Ns – T – I – P
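The five calculations and the energy-gap condition in (1) can be sketched in Python directly from Table 1. This is our own illustrative code, not the paper’s; the function names and all dollar amounts are hypothetical.

```python
# Signed sums of the Table 1 components for each test, plus the
# energy-gap condition in inequality (1). All inputs are present-value
# dollar amounts; the example figures below are hypothetical.

def pct(Np, T, I, R, B):
    """Participant Cost Test: are participants better off?"""
    return Np - T - I + R + B

def pact(A, P, R):
    """Program Administrator Cost Test: will utility bills decrease?"""
    return A - P - R

def rim(A, P, R, B):
    """Ratepayer Impact Measure: will rates decrease?"""
    return A - P - R - B

def trc(A, S, I, P):
    """Total Resource Cost test: will energy costs decrease?"""
    return A + S - I - P

def sct(A, S, Ns, T, I, P):
    """Societal Cost Test: is society better off?"""
    return A + S + Ns - T - I - P

def energy_gap(Np, T, I, B, A, S, Ns, P):
    """Inequality (1): the participant would not act alone (no rebate),
    yet the program passes the SCT."""
    return (Np - T - I + B < 0) and (A + S + Ns - T - I - P > 0)

# Hypothetical program: without a rebate the participant loses money,
# but society gains, so the energy gap exists.
print(energy_gap(Np=10, T=5, I=100, B=80, A=90, S=20, Ns=15, P=10))  # True
```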
Although these five tests are commonly grouped and referred to as cost-effectiveness
analysis, they answer five very different questions as indicated in Table 1. A related issue is that
the terms cost-effective, least expensive and efficient are sometimes used interchangeably.
However, the term efficient, refers to a societal perspective whereas cost-effective and least
expensive may be applied either to a societal perspective or the perspective of a particular group
within society. Thus, it is entirely possible, and even likely, that a specific EE program is cost-
effective from the perspective of one or more subgroups (participant, utility, ratepayer), but still
inefficient, and not cost-effective from a societal perspective. In this case the EE program is not
Pareto efficient, meaning that not adopting the measure would result in all parts of society being
the same or better off than by adopting it. The confusion among these terms leads to some
fundamental errors, as discussed below.
The word test suggests that an EE program passes or fails according to whether the ratio of
benefits to costs is greater than one (on a present value basis, i.e., accounting for the time value of money).
Although the term cost-benefit is used, typically when analysts refer to the ratio, they are
referring to the ratio of benefits to cost. In this case, when the ratio is greater than one, the
benefits exceed the costs (and hence the Program “passes” the cost-benefit test); otherwise the
costs exceed the benefits.
The CBA calculations, however, do not have to be formally used as a test. Whether to
use any, one, or more than one test is a policy decision.7 So it is important to distinguish between
the calculations and the policy decision regarding if and how to use the results of those
calculations. A policymaker may choose not to have any of these calculations performed, have
only a few performed, or have all five performed at the measure, project, program and/or
portfolio level. A policymaker may then decide whether to have a formal test to determine
which EE programs to pursue using some or all of these calculations as tests. Moreover, if
policymakers want to use one or more of these calculations as a test, they do not necessarily have
to set the passing threshold to one.
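The distinction between the calculation and the policy decision about how to use it can be sketched as follows; this is a hypothetical helper of our own, in which the passing threshold is an explicit policy parameter rather than a fixed value of one.

```python
# Sketch of using a CBA calculation as a formal test. The threshold is
# a policy choice; it defaults to one but need not equal one.

def passes_test(benefits_pv, costs_pv, threshold=1.0):
    """A program 'passes' when its benefit-cost ratio (present value
    benefits over present value costs) meets the policy threshold."""
    return benefits_pv / costs_pv >= threshold

print(passes_test(120.0, 100.0))                 # True  (ratio 1.2)
print(passes_test(120.0, 100.0, threshold=1.5))  # False
```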
B. Analysis of the Case for Replacing the Total Resource Cost Test with the
Program Administrator Test
In this section, the proposal to replace the TRC with the PACT is examined. The first
version of this claim, Claim 1a, is that the PACT evaluates the cost-effectiveness of EE more
accurately than the TRC. The error with this claim is that it compares apples to oranges.
Although analysts do recognize that the five different tests are answering the question about cost-
effectiveness from five different perspectives, this important caveat becomes lost in statements
such as the following: “In particular, we suggest that there is a need to reconsider the current
reliance on the TRC for determining whether an energy efficiency measure or program is cost-
effective” (Kushler and Neme 2010). Such confusion leads to investigations of which test is the
best. In some jurisdictions, the results of the various tests are averaged (Dunsky et al. 2012),
presumably based upon the incorrect assumption that whatever is being measured is
7 While many state utility commissions use TRC as the basis for approval or disapproval of
energy efficiency program expenditure, some states such as Michigan, Connecticut, Texas and
others use the PACT (or UCT) as the primary cost-effectiveness screening test. The state of
California, which originally used the TRC for program screening, has shifted to a weighted
average of the TRC and UCT (2/3 TRC to 1/3 UCT).
improved by measuring it in multiple ways and averaging the results.8 Once this conflation is
untangled, it becomes clear that a statement such as “the CBA Test X is inaccurate therefore
CBA Test Y should be used” is a non sequitur. Even if Test Y is 100% accurate, it is not
answering the question Test X is attempting to answer.
The second version, Claim 1b, is that since the TRC test does not consider non-energy
benefits (NEBs), it should be replaced with the PACT (Neme and Kushler 2010; SEE Action
2015) or modified (Muncaster et al. 2011). PACT proponents argue that since the TRC in
practice does not contain non-energy benefits, which are large (and at the same time difficult,
expensive, and controversial to quantify), the incremental costs of EE measures should also be
ignored; the result is that socially cost-effective EE measures, which would otherwise be rejected
under the TRC, would be implemented (Neme and Kushler 2010). This approach therefore, according to
these same authors, avoids the difficulties and expenses of estimating NEBs and incremental
costs. Using the notation from Table 1 and subtracting the TRC from the PACT, the result is the
following:
(2) PACT - TRC = - R - S + I
If NEBs are larger than this difference
(3) NEBs > (I - R) - S
then replacing the TRC with the PACT still results in socially efficient outcomes.
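Equations (2) and (3) can be sketched as follows; the code and the numbers are our own hypothetical illustration of the PACT-minus-TRC gap and the NEB size needed for the swap to remain socially efficient.

```python
# Sketch of equations (2) and (3) using the Table 1 notation.
# R = rebates, S = savings from other resources, I = incremental costs.
# All values are hypothetical.

def pact_minus_trc(R, S, I):
    """Equation (2): PACT - TRC = -R - S + I."""
    return -R - S + I

def neb_threshold_met(NEBs, R, S, I):
    """Equation (3): the swap preserves socially efficient outcomes
    only if NEBs > (I - R) - S."""
    return NEBs > (I - R) - S

R, S, I = 30.0, 10.0, 100.0
print(pact_minus_trc(R, S, I))           # 60.0
print(neb_threshold_met(70.0, R, S, I))  # True  (70 > 60)
print(neb_threshold_met(50.0, R, S, I))  # False
```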
The PACT proponents’ argument that “two wrongs make a right” should be evaluated
closely in this context. According to those supporting Claim 1b, EE has many types of NEBs that
collectively have a large magnitude (Skumatz 2015; Neme and Kushler 2010). Notice that to
substantiate Claim 1b, the very studies that proponents want to avoid have to be conducted in
order to test its validity.
Furthermore, it does not follow that there should necessarily be EE programs that provide
incentives for programs that satisfy Equation 3. If the private benefits are large relative to the
private costs (and Skumatz 2015 reports total participant non-energy benefits to four significant
digits of 144.1% of the energy savings value), then it is not clear why government intervention is
necessary, because the private entity may already have the incentive to participate in such EE
programs. In notation, if
(4) Np + B > I
then, the private entity should invest in EE without government incentives. Using Skumatz’s
value for NEB, equation (4) becomes
8 Another possible interpretation is that the averaging reflects the “weights” of the various
stakeholder perspectives that policymakers want to consider.
(5) 2.441*B > I
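Equations (4) and (5) can be sketched as follows; the code is our own, and the example figures are hypothetical. The factor 2.441 comes from setting Np = 1.441 × B, per Skumatz’s reported participant NEB value of 144.1% of the energy savings value.

```python
# Sketch of equations (4) and (5): the private-investment condition,
# with and without Skumatz's reported NEB value. Numbers are hypothetical.

def invests_privately(Np, B, I):
    """Equation (4): the participant invests without incentives
    if Np + B > I."""
    return Np + B > I

def invests_privately_skumatz(B, I):
    """Equation (5): with Np = 1.441 * B, the condition becomes
    2.441 * B > I."""
    return 2.441 * B > I

B, I = 100.0, 200.0
print(invests_privately(1.441 * B, B, I))  # True (244.1 > 200)
print(invests_privately_skumatz(B, I))     # True
```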
Second, the PACT’s argument either ignores or minimizes non-energy costs. The
importance of transaction costs is well known in general (Coase 1960) and in the context of EE
(Björkqvist and Wene 1993). Examples of these costs include the time to apply for program
rebates, hiring and supervising contractors, the risk of poor performance or fraud, reduced
convenience, and increased disposal or recycling costs (Fowlie et al. 2015, working paper 016;
US DOE 2012; US GAO 2010; EPA 2008). In fact, it has been recognized that whereas much
emphasis has been placed on including NEBs, comparable attention to non-energy costs is
missing (EPA 2008). Fortunately, there is a test that does account for other non-monetized
benefits and costs, the SCT, so it is even more surprising that Claim 1 is made.
Third, some of the non-energy benefits are not “benefits” as economists use the term but
are transfer payments. This important distinction is recognized in EE CBA. As indicated in
Table 1, the TRC does not include “rebates and other incentives” to program participants as a
“benefit” because they are a transfer from one part of society (program non-participants) to
another part (program participants). It is common to classify the long list of claimed non-energy
benefits into three major categories: utility, participant, and societal. For example, from a
societal perspective, reducing a utility’s bad debt and arrearages is a transfer to utility
shareholders and other ratepayers (depending on whether and to what extent the utility is allowed to recover
these costs from other ratepayers). The reduction of the administrative costs associated with
these debts, however, is a cost saving because it avoids the use of resources that would otherwise
be necessary.
At times, the NEB logic takes a somewhat moral turn, whereby investing in an EE
measure (even if it is socially inefficient) is presented as the righteous thing to do. Consider the
following argument from Dunsky et al. (2012):
“Take, for example, the hypothetical case of a near-zero energy new home…. Its proud
owners may have been motivated by a combination of factors: their personal dedication
to environmental responsibility, their belief that their young children would grow up in a
healthier and more comfortable home environment, and their calculation that the
incremental mortgage payments are expected to be nearly offset by reduced energy bills
over time. Most DSM PAs and advocates would likely want to showcase the project, and
many policymakers would be glad to be associated with it in one way or another. Yet by
considering only the utility’s avoided costs (often lower than utility rates) and none of the
other decision motivators, the TRC result would likely be negative.”
One non-energy benefit that some claim is frequently ignored by the TRC is the positive
macro-economic benefits or “multiplier effect” of EE program investments. Proponents of a
modified version of TRC cite precedents such as the British Columbia Utilities Commission’s
(BCUC) decision to account for NEBs either by using a deemed rate (equal to 15% of long-run
avoided costs) or by a customized rate determined for a particular program based on a detailed
study by the program administrator (Muncaster et al. 2011). In short, EE creates jobs and other
economic activity beyond the actual financial investments associated with producing and
installing EE measures. This effect, however, is already captured in the CBA. The value of the
benefits and costs reflect the corresponding economic value of deploying a set of costs to obtain
a set of benefits. If the SCT is less than 1, that means the social costs exceed the social benefits.
As this socially inefficient investment ripples through the economy, the inefficiency multiplies.
Moreover, assume that the SCT for a program is greater than one, for example 1.5. Now
assume that in order to fund that program, resources are reallocated from another economic
activity whose SCT is 2. In this case, society overall is less efficient by investing in EE than in
not doing so. Thus, if claims are to be made about the macro-economic impact of EE, those
claims must also demonstrate that resources were not diverted from more efficient economic
activities than the EE program.
Another claim, Claim 2, is that estimating incremental costs is difficult to impossible
(especially for measures such as whole building and integrated designs (Mahone 2009)) and
therefore the TRC test is not reliable and should be replaced with the PACT (NHPC 2011). This
claim can be made independently to justify replacing the TRC and SCT with the PACT or made
in tandem with Claim 1b above. It is not enough, however, to only claim that incremental costs
are uncertain. Other assumptions used by other tests are also uncertain such as avoided costs,
savings from other resources, and other non-monetized benefits and costs. Claim 2 must also
identify uncertainties that are unique to incremental costs, relative to the other assumptions
used by the PACT, and that prevent the TRC from being reliable.
Specifically, Claim 2 must also make two other supporting claims: a) the uncertainty in
estimating incremental costs is fundamentally harder than for the other uncertainties that are part
of the other tests, and b) the uncertainty in estimating incremental costs cannot be handled by
existing techniques. Without both of the supporting claims, 2a and 2b, rejecting the TRC test
and accepting another one such as the PACT is not logical. If Claim 2a is false, then the
uncertainties associated with estimating incremental costs are analytically no different from other
uncertainties so it would be logically inconsistent to reject one test but accept another due to an
issue that both have. To state this another way, one could flip Claim 2 and state that the PACT
should be rejected and replaced with the TRC using the same erroneous reasoning used by
advocates of Claim 2.
In fact, a reasonable case can be made that avoided cost assumptions are more
uncertain, both in degree and kind, than incremental cost assumptions. Avoided costs require
forecasts of electricity and heating fuel prices out for at least a decade or more. Electricity
forecasts depend on numerous uncertain assumptions, many of which cannot be easily
characterized by probability distributions such as fuel prices, technological changes, public
policies such as greenhouse gas policies, and transmission investments. In contrast,
incremental costs, which may be difficult to estimate in many cases, are bounded by the
difference between the cost of the baseline project and the EE project.
Of course, CBA inputs are always prone to uncertainty, which may propagate through the
analysis and introduce uncertainty into the ultimate cost and benefit estimates. Sensitivity
analyses are often used to characterize uncertainty in CBAs. Often, the highest and lowest
possible values for various inputs are used to assess the impact on net benefits. As stated by Jaffe
and Stavins (2007), there are three critical problems with this type of analysis. First, a sensitivity
analysis fails to use all available information on the assumed value of parameters. Generally, the
values at either extreme of a given range are less likely to occur than the base case assumption.
Second, a sensitivity analysis does not provide information about the variance of the distribution
of net benefits. In some cases, two policies may have very similar net benefits, but one policy
may have a smaller variance. Policymakers would want to choose that policy because it will have
a higher likelihood of producing the expected benefits. A third limitation of conventional
sensitivity analysis is that this type of analysis perturbs one or two values at a time in isolation,
which is not a real world simulation. In actuality, many parameters are interacting with one
another simultaneously. These limitations are all overcome by Monte Carlo analysis, which uses
probability distributions to vary several uncertain parameters at once. Cost-benefit
calculations are carried out thousands of times to produce a probability distribution of net
benefits. This type of analysis allows policy makers to assess the probability of particular
outcomes.
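A minimal Monte Carlo sketch of a TRC benefit-cost ratio, along the lines described above, follows. The distributions, parameter values, and sample size are our own hypothetical illustration, not from any actual EE program.

```python
# Monte Carlo sketch: draw uncertain inputs from assumed distributions,
# compute the TRC benefit-cost ratio for each draw, and summarize the
# resulting distribution of outcomes. All figures are hypothetical.
import random
import statistics

random.seed(0)  # reproducible draws

def trc_ratio():
    # Benefits: avoided costs and savings from other resources.
    avoided = random.gauss(100.0, 15.0)
    other_savings = random.gauss(10.0, 3.0)
    # Costs: incremental measure costs and program overhead.
    incremental = random.gauss(80.0, 10.0)
    overhead = random.gauss(12.0, 2.0)
    return (avoided + other_savings) / (incremental + overhead)

draws = [trc_ratio() for _ in range(10_000)]
print(round(statistics.mean(draws), 2))                    # expected TRC ratio
print(round(sum(r > 1.0 for r in draws) / len(draws), 2))  # probability of passing
```

Unlike a one-at-a-time sensitivity analysis, every draw perturbs all four inputs simultaneously, so the output distribution reflects their joint variation.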
Now assume that Claim 2a is true but Claim 2b is false: one could then address the harder
uncertainties associated with incremental costs using existing and well-developed analytical
techniques (Ting et al. 2013). Specifically, given that Claim 2b is false,
the uncertainty of incremental costs can be propagated when calculating the TRC. There are
three possible outcomes: the range of uncertainty in the TRC is entirely above 1, entirely below
1, or spans 1. In the first and second cases, the EE Program would pass (or fail) the TRC test and
the Program Administrator can proceed with confidence.
In the third outcome (TRC spans 1), the Program Administrator needs a decision rule to
figure out what to do. To give a concrete example, assume that the TRC test is calculated for an
EE Project and the answer is 1 +/- 0.3, meaning that the range of the TRC is between 0.7 and 1.3.
From an analytical perspective, this is an entirely satisfactory answer. In short, the analyst does
not know, given the uncertainties associated with the calculation, whether the EE Program in
question would pass the TRC test. Presumably, a Program Administrator would have policies
put in place to determine what to do. In this unlikely case in which the result is literally on the
knife edge between passing and failing, the Program Administrator needs a rule to break the tie
such as flipping a coin or having ties always result in not implementing that Program. In
much more likely cases in which the uncertainty is not evenly divided on both sides of one, the
obvious decision rule is to calculate the expected TRC result and proceed accordingly if it is
above or below one.
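The decision rule described above can be sketched as follows; the interface, the midpoint-as-expected-value shortcut, and the tie-break handling are illustrative choices of ours, not a prescription from the paper.

```python
# Sketch of a decision rule for a TRC estimate with an uncertainty
# range [low, high]. When the range spans 1, fall back to the expected
# TRC (here approximated by the midpoint); a knife-edge result still
# needs a separate tie-breaking policy.

def trc_decision(low, high):
    if low > 1.0:
        return "pass"   # entire range above 1
    if high < 1.0:
        return "fail"   # entire range below 1
    expected = (low + high) / 2.0
    if expected > 1.0:
        return "pass"
    if expected < 1.0:
        return "fail"
    return "tie"        # knife-edge case: needs a tie-breaking policy

print(trc_decision(1.1, 1.5))  # pass
print(trc_decision(0.6, 0.9))  # fail
print(trc_decision(0.8, 1.4))  # pass (expected value above 1)
print(trc_decision(0.5, 1.5))  # tie  (expected value exactly 1)
```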
Another claim, Claim 3, is that the PACT is the appropriate test because it properly
compares supply side options with energy efficiency ones (Neme and Kushler 2010). This claim
is that an apples-to-apples comparison should be made between supply side options and demand
options (energy efficiency ones) and that the PACT does this. For instance, take the example of
a Program Administrator procuring supply and demand side options. If a supply option costs
$0.10/kWh and a demand option costs $0.08/kWh, then the claim is that the Program
Administrator should, on behalf of ratepayers, procure the $0.08/kWh option since it is less
expensive. By doing so, the Program Administrator is only paying $0.08/kWh on behalf of
ratepayers and should not account for any costs borne by the Program Participant.
Claim 3 makes two errors. First, it wrongly assumes that the Program Administrator
should not consider other costs and benefits – in the language of economists, negative and
positive externalities – associated with supply options. In fact, the development of Integrated
Resource Planning (sometimes called Least Cost Planning) has emphasized since the 1970s that
the emissions (negative externalities) associated with supply options should be internalized in
(added to) the supply cost to have an apples-to-apples comparison between supply and demand
side options (Felder 2013; Ottinger et. al. 1990). Not accounting for these costs would not be
least cost planning and hence the motivation for that phrase. By ignoring the difference between
Incremental Costs and Rebates, the PACT is externalizing the cost and therefore incorrectly
calculates the cost of the EE Program and ignores the costs associated with emissions. The
second error is even more fundamental. Claim 3 ignores a ratepayer cost. The Participant has to
pay the difference in Incremental Cost minus Rebate in order to obtain the energy savings.
Although this difference in cost does not show up on the Participant’s utility bill, it is
nonetheless a cost.
There is a variation on Claim 3 that states that when analyzing the costs of supply
options, the Program Administrator or its equivalent does not analyze the details of the supplier’s
cost or its components and therefore, by analogy, this Program Administrator should not analyze
the components, specifically the difference between Incremental Costs and Rebates for particular
EE Programs (Neme and Kushler 2010). This variation fails for the same reason Claim 3 does:
it ignores costs that are incurred to procure that EE Program.
A specific example illustrates this point. Assume a supplier quoted a price for electricity
at $0.10/kWh at the generator’s location and assume it costs $0.02/kWh in transmission and
distribution costs to deliver the electricity to retail customers. The Program Administrator would
consider the cost of that supply option to be $0.12/kWh, because it would have to pay the supplier
$0.10/kWh and the ratepayer would have to pay the transmission and distribution owner
$0.02/kWh. Likewise, ignoring the cost borne by the EE Program Participant is incorrect.
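The same arithmetic can be sketched in a few lines of code (a minimal illustration; the function names are ours, the supply figures are the hypothetical ones from the example above, and the EE figures are likewise illustrative):

```python
# Delivered cost of a supply option: the Program Administrator pays the
# supplier, and ratepayers pay the transmission and distribution (T&D) owner.
# Hypothetical figures from the example in the text.

def delivered_supply_cost(generator_price: float, td_cost: float) -> float:
    """Total cost per kWh of a supply option, including T&D delivery."""
    return generator_price + td_cost

def full_ee_cost(incremental_cost: float, rebate: float) -> float:
    """Total cost of an EE measure: the Rebate funded through rates plus the
    Participant's unreimbursed share of the Incremental Cost."""
    return rebate + (incremental_cost - rebate)  # equals incremental_cost

print(round(delivered_supply_cost(0.10, 0.02), 2))  # 0.12 ($/kWh)
print(full_ee_cost(15.0, 8.0))                      # 15.0, not the $8.0 rebate alone
```

Counting only the $0.10/kWh paid to the supplier, or only the Rebate paid to the Participant, understates the resource's true cost in exactly the same way.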
Another variation of Claim 3 suggests that because the TRC counts public subsidies as costs,
it creates an artificial barrier for demand-side resources relative to supply-side resources. This
claim holds that there are large public subsidies deployed in support of supply-side resources
that are not counted as “costs incurred by power suppliers.” By the same logic, the subsidies
provided to consumers should not be treated as costs, which is not the case when the TRC is used
as the cost-effectiveness screening tool, and therefore the PACT is more appropriate (Neme and
Kushler 2010; NHPC 2011).
Claim 4 (Neme and Kushler 2010) is that the PACT is simpler than the TRC because it
does not require quantifying non-energy benefits and results in less complexity and controversy.
The Michigan utilities, for example, in their joint response to Public Act 295, urged the
Michigan Public Service Commission to prefer the PACT (there termed the Utility System
Resource Cost Test, or USRCT) over other options because it is “most practical and
straightforward to implement” (MPSC 2013). A related supporting claim is that determining
non-energy benefits is “methodologically challenging” and therefore “expensive,” so any
quantification exercise (required for the TRC and SCT, but not for the PACT) will further
harm the benefit-cost ratio (SEE Action 2015). One wonders, however, what the limiting
principle regarding simplicity is.
According to Lazar and Colburn (2013), while it is easy to quantify program costs, doing
the same for program benefits, especially non-energy benefits, is difficult. As a result, most
cost-effectiveness screening tests systematically undervalue EE benefits. But if private NEBs
are so high9, why are EE programs even needed to channel money into achieving them? It does
not follow as a matter of logic that if EE is cost-effective then it should be funded.
C. Limitations of the Program Administrator Cost Test
The fundamental limitation of the PACT is that it is not Pareto efficient: there are
situations in which the PACT result is below or above one yet all of society would be as well
or better off if the Program Administrator did the opposite of what the PACT result indicates.
Neme and Kushler (2010) acknowledge this problem. Some examples help illustrate the
limitations of the PACT.
In Case 1, assume the PACT is greater than one but the TRC test is less than one. Also
assume, for simplicity, that the Savings from Other Resources and Program Overhead Costs are
both zero. If the TRC test is less than 1, then Avoided Costs are less than the Incremental
Equipment and Installation Costs, or in notation I > A. As an example, $15 is being spent to
save $10; someone has to pay for this $5 loss. The TRC test is $10/$15 =
0.67. The PACT, however, can be greater than 1 even if the TRC test is less than 1. Using the
9 “Because these benefits are so large, failing to include them in the TRC and SCT can bias
regulatory decisions against cost-effective efficiency investments – to the detriment of our
economy and society” (Lazar and Colburn 2013).
![Page 13: “PACT-a-Mole”: The Case Against Using theceeep.rutgers.edu/wp-content/uploads/2016/02/WP3... · (Latiner 2009) and for others it is the “invisible fuel” which is of course](https://reader031.vdocuments.net/reader031/viewer/2022011912/5fa533418feb3021537f12e3/html5/thumbnails/13.jpg)
13
above numbers, assume that the Rebate and other Incentive Payments total to $8. The PACT is
$10/$8 = 1.25.
The reason the TRC test can be less than 1 while the PACT is greater than 1 is that the
PACT does not consider all of the costs. It ignores the difference between the incremental costs
and the rebate, which the Program Participant pays. Thus, by splitting the costs between two
groups (Program Participants and non-participants) and ignoring the costs borne by one group,
the PACT can be greater than 1, incorrectly suggesting that the Program is cost-effective.
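Case 1 can be verified with a short calculation (a minimal sketch; the function and variable names are ours, and Program Overhead Costs and Savings from Other Resources are assumed zero as stated above):

```python
# Simplified TRC and PACT benefit-cost ratios for Case 1, with Program
# Overhead Costs and Savings from Other Resources assumed to be zero.

def trc(avoided_costs: float, incremental_costs: float) -> float:
    """TRC ratio: avoided costs over all incremental costs, whoever pays them."""
    return avoided_costs / incremental_costs

def pact(avoided_costs: float, rebates: float) -> float:
    """PACT ratio: avoided costs over program (rebate) costs only."""
    return avoided_costs / rebates

A = 10.0  # Avoided Costs ($)
I = 15.0  # Incremental Equipment and Installation Costs ($)
R = 8.0   # Rebate and other Incentive Payments ($)

print(round(trc(A, I), 2))   # 0.67: the Program fails the TRC test
print(round(pact(A, R), 2))  # 1.25: the same Program passes the PACT
print(I - R)                 # 7.0: the Participant's cost, which the PACT ignores
```

The $7 the Participant pays out of pocket appears in neither the numerator nor the denominator of the PACT, which is precisely how the two tests diverge.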
The energy efficiency industry has long pointed out an analogous problem in a different
context. The problem is referred to as “split incentives,” which is shorthand for “split incentives
and costs” (Gillingham et al. 2012). A standard example involves a landlord and a tenant. The
landlord purchases the refrigerator but the tenant pays the electric bill. The landlord does not
purchase an energy-efficient refrigerator because the tenant receives the benefit of a reduced
electric bill. The tenant is not willing to purchase the energy-efficient refrigerator because the
tenant will likely move, leaving the refrigerator behind, before the end of its useful life. The
conclusion drawn from this and similar examples is that energy efficiency programs are needed
to correct this inefficiency.
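The landlord-tenant logic can be made concrete with a small worked example (the dollar figures here are illustrative assumptions, not from the paper, and discounting is ignored for simplicity):

```python
# Split incentives: with illustrative (assumed) numbers, neither the landlord
# nor the tenant buys the efficient refrigerator, even though the purchase is
# worthwhile over the appliance's full life.

extra_price = 100.0         # efficient model costs $100 more up front
annual_bill_savings = 30.0  # electric bill falls $30/year (tenant's benefit)
fridge_life_years = 10      # useful life of the refrigerator
tenant_stay_years = 2       # tenant expects to move after 2 years

landlord_net = -extra_price                                         # pays, saves nothing
tenant_net = annual_bill_savings * tenant_stay_years - extra_price  # if the tenant bought it
joint_net = annual_bill_savings * fridge_life_years - extra_price   # society's perspective

print(landlord_net)  # -100.0: the landlord will not buy
print(tenant_net)    # -40.0: neither will the tenant
print(joint_net)     # 200.0: yet the investment is jointly cost-effective
```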
By splitting the incremental costs between two groups and ignoring the costs borne by the
Program Participant, the PACT is, in effect, the mirror image of the landlord-tenant example. It
splits the costs and in doing so results in the implementation of inefficient Programs whenever
the PACT exceeds one but the SCT and/or TRC is less than one.
Some experts support such investments, claiming that under the PACT more measures and
programs can be implemented (ones that would not have passed the TRC), which can increase
their market adoption rates and thus reduce costs over time (Cadmus 2011; Dunsky et al. 2012).
Therefore, according to them, the TRC test impedes broader, long-term objectives.
III. Future Directions
First and foremost, more thought is required on what underlying question cost-effectiveness
analysis of EE programs is meant to answer. Opting for a change in the primary screening test
(the PACT over the TRC), or for modifications to the test, may lead to better benefit-cost ratios.
However, it does not answer whether the purpose of cost-effectiveness analysis is to aid in the
selection of programs (given a certain budget that can be expended on energy efficiency) or to
inform program design decisions (including the type of measures, target participants, reach, etc.).
Given the heterogeneity of benefit recipients, one may also explore whether different sets
of rules (benefit-cost ratio above one, choice of primary screening test) can be applied to different
participants. For example, one set of rules could apply to, say, low-income residential
consumers and a different set to industrial consumers.
Finally, since the complexity of determining incremental costs and non-energy benefits has
been repeatedly cited as a reason for their non-inclusion, it is worthwhile for researchers to
develop suitable quantification methods.
References
Allcott, H., & Greenstone, M. (2012). Is there an energy efficiency gap? The Journal of
Economic Perspectives, 26(1), 3-28.
Björkqvist, O., & Clas-Otto, W. (1993). A study of transaction costs for energy investments in
the residential sector. Paper presented at the 1993 Summer Study, Stockholm.
Cadmus Group. (2011). Whose Perspective? The Impact of the Utility Cost Test.
California Public Utility Commission and California Energy Commission. (1983). Standard
Practice for Cost-Benefit Analysis of Conservation and Load Management Programs. California:
California Public Utility Commission and California Energy Commission.
Coase, R. (1960). The Problem of Social Cost. Journal of Law and Economics, Vol. 3, 1-44.
Department of Energy (DOE), http://www.energy.gov/science-innovation/energy-efficiency.
Accessed 17 Oct. 2015.
Dunsky, P., Boulanger F., Mathot P. (2012). Screening DSM: When the TRC Blocks Efficiency,
What’s Next? 2012 ACEEE Summer Study
http://newbuildings.org/sites/default/files/ScreeningDSM-Dunsky_Boulenger_Mathot.pdf.
Accessed 22 January 2016.
Environmental Protection Agency. National Action Plan for Energy Efficiency (2008).
Understanding Cost-Effectiveness of Energy Efficiency Programs: Best Practices, Technical
Methods, and Emerging Issues for Policy-Makers. Energy and Environment Economics, Inc. and
Regulatory Assistance Project.
Holladay, M. (2014). Deep energy retrofits are often misguided. Green Building Advisor.
http://www.greenbuildingadvisor.com/blogs/dept/musings/deep-energy-retrofits-are-often-
misguided. Accessed 20 January 2016.
Felder, F. (2013). The evolution of demand side management in the US. In F. Sioshansi (Ed.),
End of Electricity Demand Growth: How energy efficiency can put an end to the need for more
power plants (pp. 179-200). Elsevier Press.
Fowlie, M., Greenstone, M., Wolfram, C. (2015). Are the Non-Monetary Costs of Energy
Efficiency Investments Large? Understanding Low Take-up of a Free Energy Efficiency
Program. E2e Working Paper, 016.
Fowlie, M., Greenstone, M., Wolfram, C. (2015). Do Energy Efficiency Investments Deliver?
Evidence from the Weatherization Assistance Program. E2e Working Paper, 020.
Gillingham, K., Harding, M., Rapson, D. (2012). Split Incentives in Household Energy
Consumption. The Energy Journal, 33(2), 37-62.
Jaffe, A., Newell, R. G., Stavins, R. N. (2004). Economics of Energy Efficiency. Encyclopedia of
Energy, 2.
Jaffe, A., & Stavins, R. N. (2007). On the value of formal assessment of uncertainty in regulatory
analysis. Regulation & Governance, 1, 154-171.
Jaffe, A., & Stavins, R. N. (1994) The energy-efficiency gap. What does it mean? Energy Policy,
22(10), 804-810, doi: 10.1016/0301-4215(94)90138-4
Kushler, M., Nowak, S., Witte, P. (2012). A National Survey of State Policies and Practices for
the Evaluation of Ratepayer-funded Energy Efficiency Programs. ACEEE Report.
Latiner, J. (2009). How Big Energy Efficiency? Exploring further possibilities. American
Council for an Energy-Efficient Economy. http://www.eesi.org/files/laitner_020509.pdf.
Accessed 20 January 2016.
Lazar, J. & Colburn, K. (2013). Recognizing the Full Value of Energy Efficiency: What’s Under
the Feel-Good Frosting of the World’s Most Valuable Layer Cake of Benefits. RAP Report.
Lawrence Berkeley National Laboratory. (1996). Market Barriers to Energy Efficiency: A
Critical Reappraisal of the Rationale for Public Policies to Promote Energy Efficiency.
https://emp.lbl.gov/sites/all/files/lbnl-38059_0.pdf. Accessed 22 January 2016.
Mahone, D. (2009). Incremental Measure Costs in New Construction Programs: White Paper on
Best Practices and Regulatory Issues. CALMAC Study ID: PGE0273.01
Michigan Public Service Commission. (2013). Readying Michigan to Make Good Energy
Decisions: Energy Efficiency.
http://switchboard.nrdc.org/blogs/pkenneally/Governor%20Report.pdf. Accessed 22 January
2016.
Muncaster, K., Pape-Salmon, A., Smith, S., Warren, M. (2011). Adventures in Tweaking the
TRC: Experiences from British Columbia. 2012 ACEEE Summer Study on Energy Efficiency in
Buildings. http://aceee.org/files/proceedings/2012/data/papers/0193-000258.pdf. Accessed 20
January 2016.
Prindle, W. (2009). National Action Plan for Energy Efficiency: Energy Efficiency as a
Low-Cost Resource for Achieving Carbon Emissions Reductions.
National Home Performance Council (NHPC). (2011). Getting to Fair Cost-Effectiveness
Testing: Using the PACT, Best Practices for the TRC, and Beyond.
http://www.homeperformance.org/sites/default/files/trc.pdf. Accessed 22 January 2016.
Neme, C. & Kushler, M. (2010). Is it Time to Ditch the TRC? Examining Concerns with Current
Practice in Benefit-Cost Analysis. ACEEE Summer Study on Energy Efficiency in Buildings.
http://aceee.org/files/proceedings/2010/data/papers/2056.pdf. Accessed 15 December 2015.
Ottinger et al. (1990). Environmental Costs of Electricity. New York, NY: Oceana Publications,
Inc.
Skumatz, L. A. (2015). Efficiency Programs’ Non-Energy Benefits: How States Are Finally
Making Progress in Reducing Bias in Cost-effectiveness Tests. The Electricity Journal, 28(8),
96-109.
State and Local Energy Efficiency Action Network (SEE Action). (2015). A Policymaker’s
Guide to Scaling Home Energy Upgrades. Prepared by Robin LeBaron and Kara Saul-Rinaldi of
the Home Performance Coalition.
The Economist. (2015). Invisible Fuel: The biggest innovation in energy is to go without.
http://www.economist.com/news/special-report/21639016-biggest-innovation-energy-go-
without-invisible-fuel. Accessed 1 February 2016.
Ting, M., Rufo, M., Messenger, M., Loper, J. (2013). Measure costs – the forgotten child of
energy efficiency analysis. ECEEE Summer Proceedings.
US Department of Energy (DOE), Office of Inspector General, Office of Audits and Inspections.
(2012). California Energy Commission – Energy Efficiency and Conservation Block Grant
Programs Funds Provided by the American Recovery and Reinvestment Act of 2009. OAS-RA-
13-01.
US Energy Information Administration (EIA), www.eia.gov/forecasts/steo/query/. Accessed
23 Nov 2015.
US General Accountability Office (GAO). (2010). Energy Star Program Covert Testing Shows
the Energy Star Program Certification Process is Vulnerable to Fraud and Abuse. Report to the
Ranking Member, Committee on Homeland Security and Governmental Affairs, U.S. Senate.
GAO-10-470.
Vine, E., Hall, N., Keating, K. M., Kushler, M., Prahl, R. (2012). Emerging issues in the
evaluation of energy-efficiency programs: the US experience. Energy Efficiency 5, 5–17.