Article

Determinants of Demand Response Program Participation: Contingent Valuation Evidence from a Smart Thermostat Program

Department of Economics, The University of New Mexico, 1915 Roma Ave. NE 1019 ECON 1006E, Albuquerque, NM 87131, USA
* Author to whom correspondence should be addressed.
Energies 2022, 15(2), 590; https://doi.org/10.3390/en15020590
Submission received: 17 December 2021 / Revised: 7 January 2022 / Accepted: 13 January 2022 / Published: 14 January 2022
(This article belongs to the Topic Electricity Demand-Side Management)

Abstract

As renewable electricity generation continues to increase in the United States (US), considerable effort goes into matching heterogeneous supply to demand at a sub-hour time step. As a result, some electric providers offer incentive-based programs for residential consumers that aim to reduce electric demand during high-demand periods. There is little research into the determinants of consumer response to incentive-based programs beyond typical sociodemographic characteristics. To add to this body of literature, this paper presents the findings of a dichotomous choice contingent valuation (CV) survey on US ratepayers' participation in a direct-load-control scheme that uses a smart thermostat to reallocate consumer electricity demand on summer days when grid stress is high. Our results show that approximately 50% of respondents are willing to participate at a median willingness-to-accept (WTA) value of USD 9.50 (95% CI: 3.74, 15.25) per month for one summer (June through August), or slightly less than USD 30 per annum. Participation is significantly affected by a respondent's attitudes and preferences regarding various environmental and institutional perspectives, but not by sociodemographic characteristics. These findings suggest that utilities designing direct-load-control programs may improve participation by tailoring incentives to customers' attitudes and preferences.

1. Introduction

The United States (US) electric grid has seen ongoing modernization efforts to incorporate renewable energy sources (e.g., hydroelectric, wind, biomass, solar, and geothermal) as well as to increase reliability and resilience (e.g., battery storage, distributed-feeder microgrids). Renewable generation is being widely adopted in the US, accounting for more than 19% of consumption in 2019 [1]. With renewable generation capacity increasing annually, particularly for wind and solar energy, the intermittency of renewable generation poses a challenge for electricity producers in meeting peak energy demand cycles. For residential consumers, these peaks occur in the morning as customers prepare to leave for work and in the evening when they return. Wind and solar generation generally do not correspond to the demand cycles of residential customers, with most production occurring during the middle of the day when customers are away from their homes. This mismatch has spurred investments and research into programs designed to reduce residential electricity use during peak hours, collectively known as demand response programs [2].
Demand response programs currently used in the US include “price-based” programs and “incentive-based” programs. Price-based programs use various pricing mechanisms to influence consumer behavior. For example, utilities can impose time-of-use (TOU) pricing, which assigns higher electric rates to times of day when demand is historically highest, or critical peak pricing (CPP), under which utilities adjust rates to mitigate demand on high-energy-demand days. Additionally, real-time pricing (dynamic pricing) programs exist, which tie electric rates to production costs. Programs using price-based incentives have been shown to prompt electricity customers to respond to price signals [3,4,5,6]. Incentive-based programs leave prices unchanged but offer some monetary incentive for behavior changes, such as a rebate for reducing peak demand or compensation for allowing the electric provider to control internet-connected appliances (sometimes called direct-load-control programs). (For a more in-depth explanation of price-based and incentive-based programs, see Parrish et al. [7].) Popular examples of direct-load-control programs are the various “smart thermostat programs” (STPs) [8]. STPs typically involve the installation of a smart thermostat in the home of the participant that can be controlled by the electric utility, which can then raise the temperature setting in the summer during high-grid-stress events to reduce demand on the system. Participants are typically compensated for their discomfort through credits applied to their bill. Additionally, direct-load-control programs have great potential at the electricity aggregator level to minimize costs and better meet demand [9,10,11].
Demand response programs, both trials and established programs, have seen an increase in participation over recent years, but still include a relatively low percentage of customers. Parrish et al. [7] reviewed residential demand response programs across the world and found that for programs offering an opt-in enrollment scheme, participation was 10% or less of the target population, with active engagement even lower in some cases. Results vary widely by the type of demand response program, opt-in versus opt-out strategies, and other institutional and regional factors. A recent US Federal Energy Regulatory Commission report by Burns et al. [12] shows that participation in US incentive-based programs has increased by 6%, or 565,000 customers, since 2013. Regardless, participation in demand response programs needs to scale with the continued integration of renewable energy sources. A paucity of research exists on the effectiveness of STPs, both in terms of their ability to reduce peak energy loads and their ability to retain participants. Most research to date is descriptive and does not aim to maximize participation in demand response programs nor determine the predictors of who participates and why [7,13,14,15]. The difficulty with participation lies in its voluntary nature. Participation should be voluntary because the effects of an STP on a household's electric bill and on their experience with the program are uncertain, and customers should therefore be allowed to re-evaluate whether their level of compensation is sufficient. Additionally, customers may have valid privacy concerns about the level of data and control that the provider would have over the household [16]. Moreover, most direct-load-control (DLC) programs allow the customer to manually override the DLC intervention; Sarran et al. [17] found an average override rate of 12.9%. Mandatory enrollment in such a program could lead to legal ramifications over invasions of privacy and loss of agency in controlling electricity use. In an effort to increase enrollment, this study attempts to better understand the determinants of incentive-based demand response program participation, using a smart thermostat program as an exemplar program type, in order to inform ongoing efforts by electric utilities across the US to increase participation in these types of programs.
This work aims to determine residents' willingness to participate in a demand response program that incorporates a utility-controlled smart thermostat into the household, given some level of compensation. Willingness to participate is elicited using a contingent valuation survey, a widely used stated preference technique for estimating nonmarket values [18]. In this study, we are particularly interested in understanding the determinants of ratepayers' willingness to accept (WTA) compensation for participation in a hypothetical smart thermostat program, which has both market (e.g., electricity prices) and nonmarket components (e.g., public grid reliability, availability of electricity for critical services during disaster events).
Our findings suggest sociodemographic characteristics are not significant determinants of smart thermostat program participation compared to energy and environmental attitudes, preferences, and ideologies. We also find that the median compensation required for participation is USD 9.50 (95% CI: 3.74, 15.25) per month for each of the summer months. This compensation amount is robust to the inclusion or exclusion of determinants of participation, indicating that the required monetary compensation is relatively homogeneous across respondents. Our results further show that, in addition to direct ratepayer compensation, electric providers seeking to increase participation in STPs should carefully tailor their programs to customer preferences, which has the potential to increase participation rates significantly beyond what can be achieved by compensation alone.

2. Background and Literature Review

Demand response programs aiming to shift residential electricity demand away from peak demand times, or to reduce demand entirely, are becoming more common as renewable energy is generated throughout the day when most residential users are not home. A recent literature review categorized residential demand response programs into price-based schemes and incentive-based schemes [7]. Price-based schemes attempt to alter residential electricity consumption by shifting demand to off-peak times through a dynamic pricing structure. These programs simply charge more for electricity when the grid is strained and the marginal cost of production is high, and charge less when demand is low. This price-based mechanism requires consumers to make intertemporal decisions regarding electricity use to successfully optimize with respect to their budget constraints. Incentive-based schemes attempt to produce the same outcome without allowing the price of electricity to fluctuate throughout the day. This is achieved by offering rebates to customers who reduce electricity use during peak times or by allowing the utility to control the scheduling and use of certain appliances, an approach often known as direct load control. A large majority of demand response research centers on price-based programs, their efficacy, and their retention rates.
A review of the current literature suggests that customers associate dynamic pricing schemes with an overarching theme of complication. Studies show that dynamic pricing complicates the optimization problem for customers and introduces uncertainty into the electric bill [15,19]. Dynamic pricing programs require considerable effort to optimize consumption to achieve the same level of spending as before and may not yield the same level of utility due to changed habits and lost convenience. In addition, dynamic pricing can be tied to market prices for inputs, causing unexpected fluctuations, and severe inequities in bill savings exist among groups of customers [20]. Dynamic pricing is further complicated by the fact that smart meters and advanced metering infrastructure (AMI) are a prerequisite for participation. AMI enables high-resolution two-way communications with the electric provider but incurs capital, operational, and maintenance costs. While the net benefits of AMI are positive in utility-scale projects, they require large amounts of capital to implement. Cost estimates for AMI vary greatly across the world, ranging from USD 140–450 per meter in the Northeast USA to EUR 180–200 per meter in the European Union [21,22]. It is important to note that the true cost to utility coffers is obscured by public assistance and grant funding. Additionally, larger utilities can negotiate lower costs per meter by leveraging their buying power.
The incentive-based approach to demand response, specifically direct-load-control programs, may offer a less complicated alternative to price-based programs. For example, customers on a direct-load-control program are not subject to the risk associated with changing market prices and do not need to rely on self-optimization. STPs are a common type of direct-load-control program in the US. Such programs typically involve the installation of an internet-enabled thermostat, and participants agree to a set number of temperature increases during the summer in order to reduce demand on the grid during critical peak periods. In several studies conducted by the US Department of Energy, customers with programmable control thermostats saw larger and more predictable reductions in electricity demand than those without [23]. In addition, all of the studies reported positive benefit–cost ratios. While much of the research centers on the US, several studies show that STPs have the potential to reduce residential electricity demand in the Kingdom of Saudi Arabia [24,25], Turkey [26], and Canada [27], to mention a few. It is for these reasons that STPs are a worthwhile companion in the ongoing effort to increase the prevalence of demand response programs, as the “set it and forget it” nature of smart thermostats allows customers to participate more easily in demand response programs [28]. The question then becomes, how do we increase enrollment rates in a smart-thermostat-style program?
The determinants of enrollment in these programs are of significant interest to the industry. Several large electric providers in the US have already implemented STPs (e.g., Public Service Company of New Mexico (PNM), with a customer base of ~500,000 people, and Nevada Energy (NVEnergy), with a customer base of 1.3 million people) without a clear understanding of what types of people will join or how to target their efforts to maximize enrollment. In order for direct-load-control programs to affect residential demand in a meaningful way, an empirically based approach to recruitment and to setting the optimal compensation amount is required, which is the contribution of this work. Such knowledge is critical as the US continues its era of grid modernization, as STPs are an important way to increase the reliability and resilience of electric grids, for example by reducing blackouts and brownouts [29,30]. STPs are designed to reduce the electricity demand of a number of households during periods of high grid stress, thereby reducing the probability of lapses in electricity service. In combination with other demand response strategies, these types of direct-load-control programs are a crucial part of the solution to intermittency-related problems.
There is a limited body of research focusing on the determinants of enrollment as well as the compensation required for direct-load-control programs. A recent Belgian survey showed respondents were willing to accept USD 49 annually to participate in a smart-appliance-based demand response program, with environmental attitudes, length of control, and sociodemographic characteristics impacting participation [31]. When focusing specifically on direct load control of heating and cooling appliances (using smart thermostats), the research is sparse. A 2018 study of California, Texas, Virginia, and Tennessee ratepayers found that approximately half of respondents were willing to participate in a summer smart thermostat program without compensation, and that participation rates are boosted if an override option is allowed and an incentive of USD 30 is given [32]. While research in this area is becoming more popular, questions about the determinants of direct-load-control participation and their magnitudes remain.

3. Data and Methods

3.1. Overview of Contingent Valuation

This study adopts the stated-preference contingent valuation (CV) methodology. The use of CV is helpful for determining the total economic value of a good or service, which is a combination of use and nonuse values [18]. Use values are benefits that the participant receives directly from participation. In the case of a smart thermostat program, use values could include both a monetary incentive and a potential decline in expenditures due to reduced electricity use. These values are often measurable through market structures (e.g., the electricity market), but the CV methodology additionally allows us to elicit the more difficult nonmarket values for private goods that a respondent may hold. These nonmarket values could include the value of participating in a cause or the value of future benefits to themselves and others [33]. Grid modernization, of which smart thermostats are a part, encompasses a broad spectrum of both market and nonmarket benefits. Grid modernization efforts increase reliability, resilience (e.g., reductions in the number of brownouts and blackouts), and the number and type of renewable generation sources. By using the CV methodology in this work, we provide estimates of the total economic value to society of a hypothetical smart thermostat program, which we operationalize by eliciting monetary estimates of ratepayers' willingness to accept (WTA) compensation to participate in the program. The WTA framework is used because it most accurately reflects the real-world conditions in which smart thermostat programs are marketed to customers: customers are asked to participate in the program given some form of incentive, often monetary. Beyond simply estimating WTA, we also devote considerable attention to comparing and contrasting monetary and nonmonetary determinants of program participation, including how participation varies by attitudinal and socioeconomic characteristics.

3.2. Survey Design

In June of 2019, a nationwide survey of 497 US electric ratepayers was conducted using the Qualtrics online platform. (Israel [34] shows that the required sample size for a population larger than 100,000, a 95% confidence interval, and a 5% margin of error is 400. Bujang et al. [35] show that sample sizes may need to be larger when using logistic regression and find that a sample size of 500 is sufficient for a large population. Additionally, calculating the minimum sample size for a 95% CI, a margin of error of 5%, and a 50% population proportion using Cochran's sample size formula, following Taherdoost [36], results in a sample size of 385.) Qualtrics uses a double opt-in process in which potential respondents sign up for a panel, and if their demographics match the quotas that are set for census representation, they then receive an invitation to participate. Respondents are notified via email and invited to participate in the survey for a given incentive (typically equivalent to a few US dollars). To ensure our sample consisted only of electric ratepayers, potential respondents had to pay their electric utility bill every month and be willing and able to install a smart thermostat in their home (if the respondent was a renter, then they must have landlord approval to do so). The survey took an average of 22 min to complete, and those who completed the survey were awarded an average of USD 3.30 as a participant incentive. The survey questions were designed using the best practices for internet-based surveys found in the work of Dillman et al. [37], including the use of pretesting and the tailored design method, and we further followed the recommendations of Johnston et al. [38] for designing a stated preference valuation instrument.
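For reference, the Cochran calculation cited in the parenthetical note above works out as follows (a minimal worked example assuming the standard form of the formula, with z = 1.96 for a 95% CI, p = 0.5, and e = 0.05):

$$n_0 = \frac{z^2\,p(1-p)}{e^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.05^2} \approx 385$$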
Survey respondents were presented with a hypothetical smart thermostat program that is similar to ones currently in use across the US (e.g., PNM’s Power Saver (New Mexico), NVEnergy’s Powershift (Nevada), and SoCalGas’s Smart Therm Program (California)). In the scenario, the electricity provider would install a smart thermostat free of charge, under the condition that during high-peak-demand events, the provider would automatically increase the respondent’s temperature setting (by various amounts and for various lengths of time) to lower grid stress. (The smart thermostat that was described to the respondent was a Wi-Fi-enabled digital thermostat that would be installed by their electric provider and would allow the provider to adjust the temperature setting on specific days.) Several potential benefits of the program were presented to the respondents, including a reduction in their household’s electric bill, potential gains to the reliability of their power supply if this program was widely adopted in their service area (a nonmarket dimension), and that additional infrastructure investments in power plants and transmission lines could be delayed due to the reduction in grid stress (a full description of the impacts of the program presented to respondents is available in Appendix A). Respondents were also told about potential costs of the program, such as losing direct control of their air-conditioner (A/C) or other cooling system for short periods of time on high-grid-stress days, potential discomfort from a warmer home, and any inconvenience associated with learning how to operate a new thermostat device. Following a dichotomous choice format, respondents were shown a level of compensation picked randomly from a discrete uniform distribution ranging from USD 1 to USD 20. Respondents were also presented with a cheap talk script which aims to reduce the hypothetical bias associated with CV results [39]. If the respondent agreed to participate at the presented level of compensation, the monthly reward would be in addition to any savings on the electric bill that the respondent received from reduced electricity use during peak times. (Offering compensation in addition to bill savings is required to elicit nonmarket values associated with a hypothetical program.) The specific language of the valuation question asked of all respondents is presented below (the full valuation text can be found in Appendix A):
Assuming that you do not know by how much your electric bill would decrease under [the smart thermostat program], would your household participate in [the smart thermostat program] for one summer (June–August) if your electric provider gave you a USD [uniform random offered payment amount between USD 1 and USD 20] monthly money reward for each of the months of June, July, and August?
0—No
1—Yes
2—Not Sure
When asked to participate from June through August given a randomly distributed compensation amount, 49.3% of respondents said Yes, 29.4% of respondents said No, and 21.3% of respondents said they were Not Sure. This finding suggests that nearly a majority of surveyed US electric ratepayers are willing to participate in a smart thermostat program if they receive some form of compensation that randomly varies between USD 1 and USD 20. In the next section, we outline the empirical methods for calculating the average amount of compensation required for participation and the methods used to investigate determinants of program participation.
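As a rough sketch of the randomized assignment described above, the offered monthly reward can be drawn from a discrete uniform distribution over USD 1–20 and the three response codes tabulated afterward. The variable names and simulated data below are purely illustrative and are not the authors' actual survey code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Each of the 497 respondents is shown one randomly drawn monthly reward (USD 1-20).
n_respondents = 497
offer = rng.integers(1, 21, size=n_respondents)  # discrete uniform, inclusive of 20

# Simulated responses with the shares reported in the text (Yes 49.3%, No 29.4%,
# Not Sure 21.3%), used here only to show how a raw tabulation would be produced.
response = rng.choice(["Yes", "No", "Not Sure"], size=n_respondents,
                      p=[0.493, 0.294, 0.213])
df = pd.DataFrame({"offer": offer, "response": response})

print(df["response"].value_counts(normalize=True).round(3))
```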

3.3. Determinants of Participation and Median WTA

The empirical analysis focuses on (i) the determinants of respondent participation in the smart thermostat program and (ii) the estimation of median WTA for program participation. For (i), a logistic maximum likelihood estimator (often referred to as a logit) is used to investigate the magnitude by which attitudes, preferences, and sociodemographic characteristics affect a respondent's willingness to participate in the smart thermostat program when some nonzero amount of compensation is provided. The use of a logit model in economics to provide this type of inference is well documented [40] and is a precursor to (ii), the estimation of the median WTA for the average ratepayer [41]. The probability that a respondent chose Yes is assumed to be logistic, with participation dependent on a vector of observable characteristics, X. These characteristics take the form of attitudes, preferences, and sociodemographic characteristics elicited throughout the survey. The log-likelihood that a respondent participates is then described as
$$\ln L = \sum_{i=1}^{n} \Big[ y_i^{*} \ln\!\big(\Phi(X\beta)\big) + \big(1 - y_i^{*}\big) \ln\!\big(1 - \Phi(X\beta)\big) \Big] \qquad (1)$$
In Equation (1), if respondent $i$ votes to participate in the program, the latent class variable $y_i^{*}$ is equal to 1; otherwise, it is equal to 0. Equation (1) is fit using a logistic distribution whose cumulative distribution function (CDF) includes the logged payment level, $\Phi(X\beta) = \big(1 + \exp(X\gamma - \beta t/\sigma)\big)^{-1}$. When the likelihood is maximized, the $\gamma$ vector contains the point estimates that can be converted to marginal effects evaluated at the mean of each variable; $\beta$ is the point estimate for the payment level a respondent was assigned, $t$; and $\sigma$ is the estimated variance. These estimates describe how the likelihood of compensated participation changes as a respondent deviates from the average response.
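A minimal sketch of how Equation (1) could be estimated in practice is given below. The file name, column names, and covariate list are hypothetical stand-ins for the survey variables in Table 2, and the marginal effects call mirrors the evaluation at covariate means described in the text; this is an illustration rather than the authors' actual estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical layout: one row per respondent, with a 0/1 participation indicator
# (y_i*), the randomly assigned monthly reward in dollars, and the attitudinal covariates.
df = pd.read_csv("survey_responses.csv")  # assumed file name

covariates = [
    "energy_conservation", "survey_considered", "allow_appliance_control",
    "pollution_concern", "ideology", "work_from_home", "neighbors_participate",
]

X = df[covariates].copy()
X["log_offer"] = np.log(df["offer"])  # logged payment level t, as in the CDF of Equation (1)
X = sm.add_constant(X)

# Logistic maximum likelihood estimation of Equation (1)
logit_fit = sm.Logit(df["participate"], X).fit()
print(logit_fit.summary())

# Marginal effects evaluated at the mean of each variable, as reported in Table 3
print(logit_fit.get_margeff(at="mean").summary())
```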
Given the hypothetical nature of the program in question and the stated preference methodology, respondents were asked a follow-up question designed to identify protest response behavior, following best practices [42]. This question is only offered to a respondent who answers No to the valuation question, and the respondent is asked to select the most appropriate reason for answering No from 13 possible choices (exact question wording available in Appendix B). Only 3 of the 13 possible choices represented a “true” economic No. (True economic No responses indicate disagreement with the valuation of the program as presented. These responses were “I need more information about how my electricity provider would decide on which days to raise my home temperature”, “This program is not worth it to me”, or “The offered money reward is too small”.) All other possible responses represented various forms of protest against the direct-load-control concept itself or the survey instrument. Table 1 categorizes respondent answers to the valuation question using the results from the protest question. Approximately 15% of No responses to the valuation question represent some form of protest behavior.
There is no consensus in the literature on how Protest No and Not Sure responses should be treated [43,44]. We take a conservative approach, following Carson et al. [45], and code Protest No and Not Sure responses as No. This decision is empirically justified by the results of pairwise likelihood ratio tests for combining Protest No, Not Sure, and No responses, in the spirit of Jones et al. [46]. (The null hypothesis is that a pair of categories can be combined. Failure to reject the null hypothesis indicates that combining the two categories is empirically supported. True No and Protest No: $\chi^2 = 2.21$, $p = 0.137$; True No and Not Sure: $\chi^2 = 0.86$, $p = 0.354$; Protest No and Not Sure: $\chi^2 = 0.422$, $p = 0.490$. These tests are also performed by collapsing each of the above categories one at a time and then retesting. Collapsing True No and Protest No into a single category and testing whether that combined category can be further combined with Not Sure yields $\chi^2 = 0.017$, $p = 0.896$, indicating that we can indeed collapse all dissenting responses into a single “No” category.)
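The exact form of the pairwise pooling tests is not spelled out in the text, but one common implementation, sketched below under that assumption, restricts the sample to the two candidate categories, models category membership as a function of the logged offer, and performs a one-degree-of-freedom likelihood ratio test against an intercept-only model; failure to reject supports pooling the categories. Variable names continue the hypothetical layout used above, including an assumed response_category column.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def pooling_lr_test(data: pd.DataFrame, cat_a: str, cat_b: str):
    """LR test of whether response categories cat_a and cat_b can be combined."""
    sub = data[data["response_category"].isin([cat_a, cat_b])].copy()
    y = (sub["response_category"] == cat_a).astype(int)

    x_full = sm.add_constant(np.log(sub[["offer"]]))             # intercept + logged offer
    full = sm.Logit(y, x_full).fit(disp=False)
    null = sm.Logit(y, np.ones((len(sub), 1))).fit(disp=False)   # intercept only

    lr_stat = 2 * (full.llf - null.llf)                          # one restriction -> 1 df
    p_value = stats.chi2.sf(lr_stat, df=1)
    return lr_stat, p_value

# Example: failure to reject suggests True No and Protest No can be pooled.
lr_stat, p_value = pooling_lr_test(df, "True No", "Protest No")
print(f"chi2 = {lr_stat:.2f}, p = {p_value:.3f}")
```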
Several predictors of participation in the smart thermostat program are evaluated in order to capture potential heterogeneity among the diverse group of survey respondents. The characteristics chosen encompass a wide range of attitudes and preferences that may impact enrollment. Given that the program has potential environmental benefits, environmental attitudes are included. Given that the program involves some loss of agency, questions regarding the respondent's electric provider are included. Finally, various characteristics that might affect electricity use habits or enrollment are also included. The specific respondent characteristics included in X in Equation (1) are as follows: the importance of energy conservation, whether their electric provider would consider the results of this survey, whether the respondent would relinquish control of some smart appliances to the provider, their concern for air and water pollution from electricity production, their political ideology, whether a household member works from home at least three days per week, and the likelihood that their neighbors would participate if asked to do so. In addition, several key sociodemographic characteristics are included: age, gender, educational attainment, and household income. Table 2 presents summary statistics for these variables. Respondents reflected a range of attitudes, preferences, and behaviors but were fairly representative of sociodemographic characteristics in the US population. Respondents averaged a household income bracket between USD 50,000 and USD 74,999, a middle-of-the-road political ideology, above-average education relative to census estimates, and a representative gender distribution. (The Qualtrics sampling procedure allowed quotas to be set for demographics so that survey respondents match US national averages per the 2018 American Community Survey. We used quotas for age, race, and census region.)
The median WTA compensation amount is then calculated following processes described by Cameron and James [41]. Median WTA is defined as
$$\mathrm{MD}_{\varepsilon}(WTA) = \exp\!\left[ z \delta^{*} \right] \qquad (2)$$
where the inner product $(X\gamma - \beta t/\sigma)$ taken from the CDF in Equation (1) is rewritten as $(t, X)\left[\beta/\sigma \;\; \gamma/\sigma\right]' = z\delta^{*}$ and $\delta^{*}$ is a vector of covariate averages. The value obtained from Equation (2) represents the minimum amount of money needed by the respondent to participate in the hypothetical smart thermostat program, conditional on the included respondent characteristics. (Versions of Equation (2) with and without respondent characteristics will be estimated and presented to show how characteristics impact median WTA.) Median WTA is calculated as opposed to the mean due to mathematical limitations exposed by the variance of the error term that arises during the derivation of WTA for our data. (For an in-depth explanation of why mean WTA cannot be calculated in some cases, see Appendix B of Haab and McConnell [47].)
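Under the convention that the acceptance probability rises with the logged offer, median WTA can be recovered from the fitted logit by finding the offer at which the index is zero while the other covariates are held at their sample means. The snippet below continues the hypothetical names from the earlier sketch and is an illustration of Equation (2) rather than the authors' code.

```python
import numpy as np

# Fitted index: const + X'gamma_hat + b_hat * log(offer). Median WTA is the offer at
# which the acceptance probability equals one half, i.e., the index equals zero when
# the covariates (including the constant) are held at their sample means.
params = logit_fit.params
b_hat = params["log_offer"]                       # coefficient on the logged payment
gamma_hat = params.drop("log_offer")              # constant plus covariate coefficients

xbar = X.drop(columns="log_offer").mean()         # sample means (the constant averages to 1)
median_wta = np.exp(-(xbar @ gamma_hat) / b_hat)  # exponentiate back to dollars
print(f"Median WTA: USD {median_wta:.2f} per month")
```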

4. Results and Discussion

4.1. Determinants of Participation

Three versions of Equation (1) are estimated: (1) a parsimonious model in which only the offered payment amount is considered, (2) a model which includes the attitudes and preferences listed in Table 2, and (3) a model which includes all the covariates listed in Table 2. Table 3 displays the results for these three models. The presented point estimates represent the average marginal effects of a one-unit increase from the mean of each variable; that is, a one-unit increase from the mean results, on average, in a change of that many percentage points in the likelihood of a respondent voting Yes to the smart thermostat program.
An important result is that the compensation amount is a significant and positive predictor across all three specifications, indicating that as the compensation amount increases, the probability of respondent enrollment also increases. With the inclusion of covariates across specifications, the magnitude by which a log-dollar increase affects participation is similar. (Compensation undergoes a natural log transformation, $\log_e(x)$, in accordance with the methods described by Haab and McConnell [47].) For example, an increase in compensation of 1 log-dollar increases the probability of enrollment by 6.4% in model (1), 6.2% in model (2), and 6.7% in model (3). The magnitude of this effect suggests that monetary compensation can indeed increase respondent participation in the program; however, as will be shown shortly, other, nonmonetary factors and characteristics have the potential to be more impactful, increasing participation by greater amounts than can be achieved from compensation alone.
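Because compensation enters in logs, a "one log-dollar" increase corresponds to multiplying the offer by e ≈ 2.718 rather than adding one dollar. The short illustration below, with arbitrary example offers, makes that conversion explicit; the 6.2–6.7 percentage-point figures are the marginal effects reported above.

```python
import numpy as np

# A one-unit increase in log(offer) multiplies the offer by e ~ 2.718. The estimated
# marginal effects of roughly 0.062-0.067 therefore mean an e-fold increase in the
# monthly reward raises the enrollment probability by about 6-7 percentage points.
for offer in (1, 5, 10, 20):
    print(f"USD {offer:>2} -> USD {offer * np.e:5.2f} is a one log-dollar increase")
```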
In model (2), the attitudes and preferences of the respondent are significant and consistent predictors of program participation. The importance of energy conservation is positively associated with a respondent's probability of enrollment by a statistically significant amount: as a respondent places a higher level of importance on energy conservation, the probability of enrollment increases by 5.7% for every 1-point increase. Whether a respondent believed their utility would consider the results of this survey when developing their smart thermostat program was also a significant and positive predictor of enrollment, providing an important check that respondents viewed the survey instrument as consequential [48]. A respondent who believed the utility would consider these results has a 16.9% higher probability of enrolling than one who did not. If a respondent is amenable to the utility having direct control of their smart appliances (not limited to A/C), there is a significant and positive impact on the likelihood of enrollment: a respondent who agreed to direct control had an 11.7% higher probability of enrolling than one who did not. This makes sense, as a positive response to this question indicates openness to the concept of direct load control, while a negative response may reflect a reluctance to allow utility control of appliances, which could stem from a variety of reasons. The likelihood of enrolling declines for respondents more concerned about air and water pollution. While this result may seem counterintuitive, there is a potential explanation: respondents who rank their concern for pollution higher may associate an electric provider with said pollution even if the provider does not directly generate electricity. This direct or pass-through relationship may have an adverse impact on a respondent's willingness to enroll in STPs, but these relationships were not elicited in this study. Certainly, this suggests an avenue for future research, as enrollment in demand response programs is highly contingent upon the level of trust that a utility has with its customers [49,50]. The political ideology of the respondent is also a significant determinant of enrollment. Given the scale on which ideology is measured, a higher number indicates a more conservative respondent, and more conservative respondents have a lower likelihood of participation: as a respondent ranks 1 point higher on the scale (more conservative), their probability of enrolling decreases by 2.2%. This may be associated with ideological characteristics common among conservative respondents; the relationship between conservative ideologies and views on energy-efficiency attitudes and choices is well established [51]. The presence of a household member who works from home at least three days per week increases the likelihood of enrollment in a statistically significant way: households in this category have an 8.2% higher probability of enrolling than those without. This could be attributed to higher initial electricity use in the home, or the underlying cause of working from home could be correlated with conservation efforts. Peer pressure or neighborhood effects also appear to significantly impact smart thermostat program participation.
Results show that those who live near neighbors they think would participate have a 10.9% higher probability of enrolling than those who live near neighbors they think would not participate if asked. This result is not surprising, as social influences from neighbors have been shown to be a significant determinant of energy conservation behavior [52]. Given the magnitude of the impact that this determinant has, it may prove to have viable strategy implications for providers, as is discussed in the conclusion and policy implications section of the paper.
In model (3), we include the set of sociodemographic characteristics shown in Table 2. In this model, most attitude and preference determinants remained similar in magnitude to those found in model (2), save for one notable difference: the inclusion of these characteristics led to the loss of statistical significance of political ideology. This may be associated with correlations between political ideology and the sociodemographic characteristics. To avoid the effects of collinearity between some sociodemographic characteristics, such as income and age or income and education, each variable was added to the regression one at a time, and the results were robust: the sociodemographic characteristics are not significant. (Piecewise results available upon request.) Overall, model (3) results are consistent with current research into how sociodemographic characteristics impact participation in demand response programs. In a review of the literature, Xu et al. [32] found inconsistent evidence of a statistical linkage between enrollment in price-based programs and sociodemographics. Our research is the first to conduct a parametric statistical analysis of sociodemographics and enrollment in an incentive-based program, and we find there is no association. This may seem counterintuitive, since a clear relationship exists between sociodemographic characteristics and energy consumption [53,54]. With a direct-load-control program, however, the behavioral response becomes less associated with the respondent's electricity use profile and more associated with their willingness to accept a loss of agency for some level of compensation. This decision-making process may be only loosely correlated with electricity consumption and more strongly correlated with the attitudes and preferences used in this study.
Note that across models (2) and (3), we find consistent evidence that respondent attitudes and preferences, such as views on direct utility control of at-home appliances (including but not limited to their thermostat), survey consequentiality, neighbor participation, and working from home, are each larger predictors of program participation than an additional log-dollar of compensation alone. This is not to suggest that compensating smart thermostat program participants is unwarranted (indeed, our results show that compensation can increase participation); rather, the implication of the results in Table 3 is that compensation is only one of several factors, and perhaps not even the most salient factor, that should be considered for program recruitment purposes. Electric providers, based on the results presented here, likely need to carefully consider respondent and household characteristics, especially surrounding issues of trust and neighborhood effects, when designing their STPs, and not only the level of compensation provided.
As an additional check, some literature finds that variables associated with at-home electricity usage are significant determinants of participation in demand response programs [53,54]. To examine these potential relationships, Table 4 presents estimated versions of Equation (1) that include controls for the size of the home, the number of people in the home, the average bill size, and whether or not the household has more than one sensitive group (e.g., seniors, small children). The point estimates represent the average marginal effects evaluated at the mean and follow the same interpretation as those in Table 3. All of these characteristics except the number of people in the household are insignificant in explaining the variation in participation in this program. The results of the model in column 2 show that for each additional household member above the mean of 2.68, the likelihood of participation in the proposed program increases by 3.86%. When these four determinants were added in various combinations to column 3 of Table 3, all of them, including the number of people in the household, become statistically insignificant. This suggests that traditional explanatory variables focused on potential electricity use and sociodemographics are eclipsed by more powerful determinants, namely the attitudinal and preference-based explanatory variables described in Table 2 and Table 3.

4.2. Median WTA

Using the results in Table 3, Equation (2) is used to calculate the median WTA for the smart thermostat program. The median WTA when no covariates are considered except for the offered payment level, model (1) from Table 3, is USD 9.71 (95% CI: 3.03, 16.39). (All confidence intervals are calculated using the delta method.) When the attitudes and preferences from model (2) are included, the median WTA is USD 9.84 (95% CI: 3.35, 16.33). When all the covariates are included, as in model (3), the median WTA is USD 9.50 (95% CI: 3.74, 15.25). These figures amount to approximately USD 30 per annum, which is similar to established programs that offer USD 25 per annum (e.g., PNM Power Saver in New Mexico). (The Turnbull empirical distribution estimator of WTA was also used to test whether nonparametric techniques produce results similar to the parametric technique used in this paper [55]. The estimated lower bound of WTA was USD 7.78 with a variance of USD 1.50.) This small change in WTA with the inclusion of covariates may indicate that while the likelihood of participation can be significantly influenced by these covariates, the amount of compensation required may not be. These results strongly suggest that participation in these types of programs is more significantly impacted by the attitudes and preferences of the respondents (as discussed in the previous section) than by the amount of compensation provided. Simply increasing the compensation amount to improve participation numbers may therefore not be the most effective strategy for electric providers. Given that the probability of enrollment increases by only 6.2–6.7% per log-dollar, a better strategy would be to devote resources to determinants that have a larger impact on enrollment. For example, targeting households with an individual who works at home and targeting those who live near neighbors who already participate in the program could increase the probability of participation by 8.2% and 10.9%, respectively. Both of these determinants have stronger impacts on the probability of enrollment than an increase in compensation.
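The Turnbull lower bound mentioned in the parenthetical note can be computed directly from the acceptance rates at each offered amount. The sketch below assumes that the share of Yes responses at each offer estimates Pr(WTA ≤ offer) and uses a simple running maximum in place of a full pooled-adjacent-violators step; data and column names follow the earlier hypothetical sketches, so this is illustrative only.

```python
import numpy as np
import pandas as pd

def turnbull_lower_bound(offers: pd.Series, accepted: pd.Series) -> float:
    """Lower-bound estimate of mean WTA from dichotomous choice data."""
    tab = (pd.DataFrame({"offer": offers, "yes": accepted})
             .groupby("offer")["yes"].mean().sort_index())

    # Enforce a monotonically nondecreasing empirical CDF of WTA across offers
    # (running maximum as a simple stand-in for pooling adjacent violators).
    cdf = np.maximum.accumulate(tab.to_numpy(dtype=float))

    t = np.concatenate(([0.0], tab.index.to_numpy(dtype=float)))  # t_0 = 0
    F = np.concatenate(([0.0], cdf, [1.0]))                       # F_0 = 0, F_{M+1} = 1
    mass = np.diff(F)                       # probability mass in each interval [t_j, t_{j+1})
    return float(np.sum(t * mass))          # assign each interval's mass to its lower endpoint

lower_bound = turnbull_lower_bound(df["offer"], df["participate"])
print(f"Turnbull lower-bound WTA: USD {lower_bound:.2f}")
```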
It is possible that varying levels of compensation are required depending on institutional or social constructs. To expand on this concept, a heterogeneity analysis of median WTA was conducted. Several potential sources of heterogeneity were examined such as political ideology, geographic region, and state-level renewable portfolio standards (RPSs)—see Table 5.
The values in Table 5 represent the median WTA for each category as compared to the listed base category. In terms of regional heterogeneity, the southern census region has a median WTA that differs statistically from that of the Midwest (base) by USD 14.21. This indicates that the minimum compensation required by the average respondent living in the South is USD 14.21 higher than that required by the average respondent living in the Midwest. This result is robust to changing the base case. (Evidence from the sample suggests that southern census region respondents are on average more conservative, with 74.8% of respondents identifying as middle-of-the-road or above on the conservative scale shown in Table 3, where 1 is liberal, 4 is middle-of-the-road, and 8 is conservative. This suggests that the difference in compensation required for the South could be driven by political ideology.) Median WTAs across political ideologies displayed the expected point estimates but were not statistically different: point estimates suggest that conservative respondents required more compensation than middle-of-the-road respondents and liberal respondents required less, but this finding is not statistically supported due to overlapping confidence intervals. The magnitude of a state's RPS has a significant impact on median WTA. The variation in median compensation with the magnitude of the RPS may reflect institutional attention to electricity policy. If a respondent lives in a state where the RPS requires renewables in the energy mix to be between 1% and 50%, their median WTA is USD 11.84 higher than that of respondents in states without any RPS. If a respondent lives in a state where the legal requirement is between 50% and 100%, the median WTA is USD 6.76 higher, but this estimate is only significant at the 85% confidence level. While RPS levels may impact the amount of press regarding electricity generation and conservation in the state, it is also possible that there are unobserved characteristics shared across states in each category. For example, states with an RPS requirement between 50% and 100% are California, Colorado, Maine, Maryland, Nevada, New Jersey, New Mexico, New York, Oregon, and Washington, as well as the District of Columbia. Respondents from these states may share determinants that impact the amount they are willing to accept for participation, related to economic, environmental, political, or ideological similarities.

5. Conclusions and Policy Implications

Demand response programs have become synonymous with grid modernization efforts in the US. Many US electric providers offer some form of price- or incentive-based program. The ability to use direct-load-control programs to reduce residential demand during peak times has the potential to avoid grid instability and high-cost generation. Participation in these programs varies widely, and until now, the determinants of participation specifically related to direct-load-control programs have been largely understudied. This paper utilized an original contingent valuation study to elicit the determinants of ratepayer participation in a compensated smart thermostat demand response program. Findings suggest nearly half of all respondents were willing to participate in the smart thermostat program for USD 9.50 per month from June through August, which amounts to USD 28.50 per annum. Given that this paper is the first to conduct a WTA analysis on such a direct-load-control program, there are no examples in the current literature for a direct comparison.
Our major contribution to the literature revolves around the determinants of participation. Our findings suggest that sociodemographic characteristics are not significant determinants of participation when attitudes and preferences on various other topics are also considered. Participation in the direct-load-control program outlined in this paper was largely driven by non-sociodemographic characteristics, most notably the respondents' attitudes and preferences surrounding energy conservation, the ability of this survey to impact utility decisions, their willingness to relinquish control over appliances to their utility, their political ideology, whether or not someone in the household works from home, and whether or not they think their neighbors would participate if asked. Unfortunately, data regarding these characteristics are not readily available to most electric providers. While the amount of compensation offered does produce a statistically significant increase in enrollment, a monetary-focused strategy may not be an efficient one: when compared to the attitudes and preferences of the respondent, the effect of the compensation amount is relatively small, and the most powerful determinants of enrollment are the aforementioned attitudes and preferences. To successfully increase enrollment in a smart thermostat program, electric providers should consider focusing their energy and resources on areas that rival compensation in their impact on enrollment. Electric providers can take the marginal effects presented in columns 2 and 3 of Table 3 and Table 4 for characteristics that are often available from census-level data (e.g., income) and compare the means from this survey with the means of their own customer base. This will allow utilities to determine whether their customers will be more or less likely to participate. A limitation is that many of the most impactful effects come from nonpublic attitudes and characteristics. Electric providers should survey their customers on a wide variety of topics, and our work suggests certain characteristics (e.g., views on pollution, climate, and energy conservation) that providers may want to include to better estimate appropriate compensation amounts.
An example of this is the peer effect elicited from the survey. Respondents were 10.9–11.1% (depending on the model specification) more likely to enroll in the program if they thought that their neighbors would do so if asked, a magnitude much higher than the 6.2–6.7% increase in participation for each log-dollar of compensation. This type of peer pressure behavior is common in decision making and may prove useful to program designers. Advertisements or public service announcements related to the smart thermostat program could include statistics or words of encouragement for neighborhood or large-scale adoption of the program. To fully optimize spending, electric providers may simply target households surrounding a current program participant. This may be a low-cost alternative to increasing the compensation level. More generally, strategies that aim to shape customer attitudes may also serve as an alternative to increased compensation levels. For example, campaigns focused on increasing the importance a target population places on energy conservation and on building trust with the provider would, based on our results, have real and positive impacts on enrollment at a magnitude much larger than that of simply increasing the compensation level. Electric providers may find that acquiring the attitudes and preferences of their customer base is difficult but necessary. As the above findings indicate, these characteristics will inform program designers on the best approach for increasing enrollment in STPs. Complementary to the results presented here, our work suggests avenues for future research. The most apparent would be to test alternative STP structures in terms of duration, payment method, informational treatments, and so on. While this type of analysis would focus less on overall consumer response, it would provide useful insights into what program structure would maximize enrollment.
This research utilized a stated preference technique to elicit median WTA estimates for electric ratepayers in the US. Another valuation technique that could be used for this research is a revealed preference approach, in which existing program data are used to estimate the WTA amount. Revealed preference approaches remove the potential for hypothetical bias and thus should be a priority for future research on the topic of STPs. More specifically, electric utilities could survey new and current customers for the characteristics outlined in this paper. This would allow the utility to determine which characteristics impact enrollment, retention, and satisfaction, the ultimate goals of STPs; we leave this to future research.

Author Contributions

Conceptualization, J.K., B.J. and J.C.; methodology, J.K., B.J. and J.C.; software, J.K.; validation, J.K. and B.J.; formal analysis, J.K.; data curation, J.K., B.J., and J.C.; writing—original draft preparation, J.K.; writing—review and editing, J.K., B.J. and J.C.; supervision, B.J. and J.C.; project administration, J.C. and B.J.; funding acquisition, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This material is based upon work supported in part by the National Science Foundation EPSCoR Cooperative Agreement OIA-1757207. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Institutional Review Board Statement

The study was approved by the Institutional Review Board of The University of New Mexico (1132419-4, 4 August 2020) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not currently publicly available but will be made available at a future date.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Consider a program where a free “smart” digital thermostat will be professionally installed in your home by your current electricity provider. This thermostat will be provided to you at no initial cost and with no re-occurring cost. After installation, the technician will explain the main features of the smart thermostat to you and answer any questions you may have regarding its use.
Once installed, the smart thermostat will operate just like a regular digital thermostat that can be adjusted to a desired temperature setting. However, during the summer months (June–August) the thermostat may:
  • Automatically raise your temperature setting by 2–3 °F (1.11–1.67 °C) above your average weekday setting for up to 90 min at a time on summer weekdays when there is an increased risk of a blackout or brownout.
  • Automatically raise your temperature setting by up to 5 °F (2.78 °C) (but never higher than 79 °F (26 °C)) above your average weekday setting for up to 90 min at a time on very hot summer weekdays when the outdoor highest temperature is over 95 °F (35 °C).
In total, no more than 10 separate automatic temperature increases will occur in your household over an entire summer.
Whether the smart thermostat actually raises your home temperature will be decided on a daily basis by your electricity provider. Typically, this will only occur on summer weekdays when total electricity demand in your community is high.
No temperature setting changes will occur on summer weekends or during the non-summer months.
Impacts of [the smart thermostat program]:
  • Reduced electricity use, which may lower your household’s monthly electric bill.
  • Improvements to the reliability of the power supply, thereby decreasing the likelihood of blackouts or brownouts in your service area.
  • Delays in the need for additional infrastructure investments in power plants and transmission lines.
At this point in time, it is not certain what the monthly electric bill savings would be for any specific household participating in [the smart thermostat program]. Therefore, electricity providers are considering offering a monthly money reward to encourage household participation in the program.
This money reward would be in addition to any reductions in your electric bill due to reduced electricity use under the program. The money reward would be in the form of a monthly dollar credit applied to your electric bill balance during the summer months of June–August.
Since the final amount of the money reward has not been determined by providers, we are asking different households about different amounts. Even if the amount we ask seems very low or very high, please answer carefully. This will allow us to determine whether people think the program is worthwhile for their household at whatever level the final money reward is determined to be.
For this study, it is important that you tell us which money reward you prefer, based only on your personal evaluation of what incentive would be required for your household to participate in [the smart thermostat program].
Aggregate results from this study will be made available to US electricity providers and state and federal electric regulatory agencies. However, no individual-specific results will ever be shared.
Assuming that you do not know by how much your electric bill would decrease under [the smart thermostat program], would your household participate in [the smart thermostat program] for one summer (June–August) if your electric provider gave you a USD [offered payment amount between USD 1 and USD 20] monthly money reward for each of the months of June, July, and August? [No, Yes, Not Sure].
Note: Respondents also had access to the information from the previous description to ensure a fully informed response. Follow-up questions were asked to determine a respondent’s level of certainty with their decision and their level of understanding, and if a No or Not Sure answer was given, the participant was then asked why. This allowed us to determine the difference between a true economic No and a protest No.

Appendix B

We would like to know why your household would not participate in Program A. Please select the most important reason.
  • I’m opposed to giving my electric provider automatic control of my thermostat.
  • The proposed temperature setting changes would make it too hot in my home.
  • I don’t like smart digital thermostats.
  • I don’t feel safe having somebody come into my home to install the thermostat.
  • I need more information about how my electric provider would decide on which days to raise my home temperature.
  • I don’t trust my electric provider.
  • This program is not worth it to me.
  • The program lasts too long (i.e., one summer is too long).
  • The duration of the temperature change (90 min) is too long.
  • The offered money reward is too small.
  • I’m concerned that my smart thermostat could be hacked.
  • Other reason (please specify).
Note: Responses were presented in random order to the respondent.
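The distinction between a true economic No and a protest No (see Table 1 below) rests on how these reasons are coded. The paper does not print its exact coding rule; the sketch below shows one plausible mapping, assuming that only the two value-related reasons count as economic and all other reasons count as protests. The mapping is our assumption, not the authors' documented procedure.

```python
# Hypothetical mapping from the Appendix B reason list to response types.
# Assumption: reasons tied to the offer's value are economic; everything else is a protest.
ECONOMIC_REASONS = {
    "This program is not worth it to me.",
    "The offered money reward is too small.",
}

def classify_no(reason: str) -> str:
    """Classify a stated non-participation reason as a true economic No or a protest No."""
    return "True No" if reason in ECONOMIC_REASONS else "Protest No"

print(classify_no("The offered money reward is too small."))  # True No
print(classify_no("I don't trust my electric provider."))     # Protest No
```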

References

  1. U.S. Energy Information Administration. Annual Energy Outlook 2020 with Projections to 2050; U.S. Energy Information Administration, Office of Energy Analysis: Washington, DC, USA, 2020.
  2. Albadi, M.H.; El-Saadany, E.F. Demand Response in Electricity Markets: An Overview. In Proceedings of the 2007 IEEE Power Engineering Society General Meeting, Tampa, FL, USA, 24–28 June 2007; IEEE: Tampa, FL, USA, 2007; pp. 1–5.
  3. Allcott, H. Rethinking Real-Time Electricity Pricing. Resour. Energy Econ. 2011, 33, 820–842.
  4. Herter, K. Residential Implementation of Critical-Peak Pricing of Electricity. Energy Policy 2007, 35, 2121–2130.
  5. Jessoe, K.; Rapson, D. Knowledge Is (Less) Power: Experimental Evidence from Residential Energy Use. Am. Econ. Rev. 2014, 104, 1417–1438.
  6. Wolak, F.A. Do Residential Customers Respond to Hourly Prices? Evidence from a Dynamic Pricing Experiment. Am. Econ. Rev. 2011, 101, 83–87.
  7. Parrish, B.; Gross, R.; Heptonstall, P. On Demand: Can Demand Response Live up to Expectations in Managing Electricity Systems? Energy Res. Soc. Sci. 2019, 51, 107–118.
  8. Chanana, S.; Arora, M. Demand Response from Residential Air Conditioning Load Using a Programmable Communication Thermostat. Int. J. Electr. Comput. Eng. 2013, 7, 1670–1676.
  9. Ruiz, N.; Cobelo, I.; Oyarzabal, J. A Direct Load Control Model for Virtual Power Plant Management. IEEE Trans. Power Syst. 2009, 24, 959–966.
  10. Shayegan-Rad, A.; Zangeneh, A. Optimal Contract Pricing of Load Aggregators for Direct Load Control in Smart Distribution Systems. Turk. J. Electr. Eng. Comput. Sci. 2019, 27, 167–180.
  11. Zakernezhad, H.; Nazar, M.S.; Shafie-khah, M.; Catalão, J.P.S. Optimal Resilient Operation of Multi-Carrier Energy Systems in Electricity Markets Considering Distributed Energy Resource Aggregators. Appl. Energy 2021, 299, 117271.
  12. Burns, D.; Bialecki, T.; Gil, G.; Kathan, D.; Lee, M.; Peirovi, S.; Puram, R. 2020 Assessment of Demand Response and Advanced Metering; Federal Energy Regulatory Commission: Washington, DC, USA, 2020.
  13. Arimura, T.H.; Li, S.; Newell, R.G.; Palmer, K. Cost-Effectiveness of Electricity Energy Efficiency Programs. Energy J. 2012, 33, 63–99.
  14. Asensio, O.I.; Delmas, M.A. Nonprice Incentives and Energy Conservation. Proc. Natl. Acad. Sci. USA 2015, 112, E510–E515.
  15. Parrish, B.; Heptonstall, P.; Gross, R.; Sovacool, B.K. A Systematic Review of Motivations, Enablers and Barriers for Consumer Engagement with Residential Demand Response. Energy Policy 2020, 138, 111221.
  16. McKenna, E.; Richardson, I.; Thomson, M. Smart Meter Data: Balancing Consumer Privacy Concerns with Legitimate Applications. Energy Policy 2012, 41, 807–814.
  17. Sarran, L.; Gunay, H.B.; O’Brien, W.; Hviid, C.A.; Rode, C. A Data-Driven Study of Thermostat Overrides during Demand Response Events. Energy Policy 2021, 153, 112290.
  18. Freeman III, A.M.; Herriges, J.A.; Kling, C.L. The Measurement of Environmental and Resource Values: Theory and Methods; Routledge: London, UK, 2014.
  19. Ito, K. Do Consumers Respond to Marginal or Average Price? Evidence from Nonlinear Electricity Pricing. Am. Econ. Rev. 2014, 104, 537–563.
  20. Horowitz, S.; Lave, L. Equity in Residential Electricity Pricing. Energy J. 2014, 35, 1–23.
  21. Northeast Energy Efficiency Partnerships, Inc. Advanced Metering Infrastructure: Utility Trends and Cost-Benefit Analyses in the NEEP Region; Northeast Energy Efficiency Partnerships, Inc.: Lexington, MA, USA, 2017.
  22. Tounquet, F.; Alaton, C. Benchmarking Smart Metering Deployment in the EU-28; European Commission: Brussels, Belgium, 2019.
  23. U.S. Department of Energy. Customer Acceptance, Retention, and Response to Time-Based Rates from the Consumer Behavior Studies; U.S. Department of Energy Office of Electricity Delivery and Energy Reliability: Washington, DC, USA, 2016.
  24. Alshahrani, J.; Boait, P. Reducing High Energy Demand Associated with Air-Conditioning Needs in Saudi Arabia. Energies 2018, 12, 87.
  25. Krarti, M. Evaluation of Occupancy-Based Temperature Controls on Energy Performance of KSA Residential Buildings. Energy Build. 2020, 220, 110047.
  26. Duman, A.C.; Erden, H.S.; Gönül, Ö.; Güler, Ö. A Home Energy Management System with an Integrated Smart Thermostat for Demand Response in Smart Grids. Sustain. Cities Soc. 2021, 65, 102639.
  27. Vellei, M.; Martinez, S.; Le Dréau, J. Agent-Based Stochastic Model of Thermostat Adjustments: A Demand Response Application. Energy Build. 2021, 238, 110846.
  28. U.S. Department of Energy. Smart Grid System Report; 2018 Report to Congress; U.S. Department of Energy: Washington, DC, USA, 2018.
  29. Conchado, A.; Linares, P. The Economic Impact of Demand-Response Programs on Power Systems. A Survey of the State of the Art. In Handbook of Networks in Power Systems I; Sorokin, A., Rebennack, S., Pardalos, P.M., Iliadis, N.A., Pereira, M.V.F., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 281–301. ISBN 978-3-642-23193-3.
  30. Siano, P. Demand Response and Smart Grids—A Survey. Renew. Sustain. Energy Rev. 2014, 30, 461–478.
  31. Srivastava, A.; Van Passel, S.; Kessels, R.; Valkering, P.; Laes, E. Reducing Winter Peaks in Electricity Consumption: A Choice Experiment to Structure Demand Response Programs. Energy Policy 2020, 137, 111183.
  32. Xu, X.; Chen, C.; Zhu, X.; Hu, Q. Promoting Acceptance of Direct Load Control Programs in the United States: Financial Incentive versus Control Option. Energy 2018, 147, 1278–1287.
  33. Carson, R.T.; Flores, N.E.; Meade, N.F. Contingent Valuation: Controversies and Evidence. Environ. Resour. Econ. 2001, 19, 173–210.
  34. Israel, G.D. Determining Sample Size. Qual. Health Res. 1992, 10, 3–5.
  35. Bujang, M.A.; Sa’at, N.; Sidik, T.M.I.T.A.B.; Joo, L.C. Sample Size Guidelines for Logistic Regression from Observational Studies with Large Population: Emphasis on the Accuracy Between Statistics and Parameters Based on Real Life Clinical Data. Malays. J. Med. Sci. 2018, 25, 122–130.
  36. Taherdoost, H. Determining Sample Size; How to Calculate Survey Sample Size. Int. J. Econ. Manag. Syst. 2017, 2, 237–239.
  37. Dillman, D.A.; Smyth, J.D.; Christian, L.M. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method; John Wiley & Sons: Hoboken, NJ, USA, 2014.
  38. Johnston, R.J.; Boyle, K.J.; Adamowicz, W.; Bennett, J.; Brouwer, R.; Cameron, T.A.; Hanemann, W.M.; Hanley, N.; Ryan, M.; Scarpa, R.; et al. Contemporary Guidance for Stated Preference Studies. J. Assoc. Environ. Resour. Econ. 2017, 4, 319–405.
  39. Penn, J.; Hu, W. Cheap Talk Efficacy under Potential and Actual Hypothetical Bias: A Meta-Analysis. J. Environ. Econ. Manag. 2019, 96, 22–35.
  40. Cramer, J.S.; Ridder, G. The Logit Model in Economics. Stat. Neerl. 1988, 42, 297–314.
  41. Cameron, T.A.; James, M.D. Efficient Estimation Methods for “Closed-Ended” Contingent Valuation Surveys. Rev. Econ. Stat. 1987, 69, 269–276.
  42. Carson, R.T.; Czajkowski, M. The Discrete Choice Experiment Approach to Environmental Contingent Valuation. In Handbook of Choice Modelling; Edward Elgar Publishing: Cheltenham, UK, 2014.
  43. Caudill, S.B.; Groothuis, P.A.; Whitehead, J.C. The Development and Estimation of a Latent Choice Multinomial Logit Model with Application to Contingent Valuation. Am. J. Agric. Econ. 2011, 93, 983–992.
  44. Champ, P.A.; Alberini, A.; Correas, I. Using Contingent Valuation to Value a Noxious Weeds Control Program: The Effects of Including an Unsure Response Category. Ecol. Econ. 2005, 55, 47–60.
  45. Carson, R.T.; Hanemann, W.M.; Kopp, R.J.; Krosnick, J.A.; Mitchell, R.C.; Presser, S.; Ruud, P.A.; Smith, V.K.; Conaway, M.; Martin, K. Referendum Design and Contingent Valuation: The NOAA Panel’s No-Vote Recommendation. Rev. Econ. Stat. 1998, 80, 335–338.
  46. Jones, B.A.; Berrens, R.P.; Jenkins-Smith, H.; Silva, C.; Ripberger, J.; Carlson, D.; Gupta, K.; Wehde, W. In Search of an Inclusive Approach: Measuring Non-Market Values for the Effects of Complex Dam, Hydroelectric and River System Operations. Energy Econ. 2018, 69, 225–236.
  47. Haab, T.C.; McConnell, K.E. Valuing Environmental and Natural Resources: The Econometrics of Non-Market Valuation; New Horizons in Environmental Economics; Edward Elgar Publishers: Cheltenham, UK, 2002.
  48. Carson, R.; Groves, T.; List, J.; Machina, M. Probabilistic Influence and Supplemental Benefits: A Field Test of the Two Key Assumptions Underlying Stated Preferences; Department of Computational Social Science, UC San Diego: La Jolla, CA, USA, 2004; manuscript in preparation.
  49. Morris, P.; Buys, L.; Vine, D. Moving from Outsider to Insider: Peer Status and Partnerships between Electricity Utilities and Residential Consumers. PLoS ONE 2014, 9, e101189.
  50. Stenner, K.; Frederiks, E.R.; Hobman, E.V.; Cook, S. Willingness to Participate in Direct Load Control: The Role of Consumer Distrust. Appl. Energy 2017, 189, 76–88.
  51. Gromet, D.M.; Kunreuther, H.; Larrick, R.P. Political Ideology Affects Energy-Efficiency Attitudes and Choices. Proc. Natl. Acad. Sci. USA 2013, 110, 9314–9319.
  52. Nolan, J.M.; Schultz, P.W.; Cialdini, R.B.; Goldstein, N.J.; Griskevicius, V. Normative Social Influence Is Underdetected. Pers. Soc. Psychol. Bull. 2008, 34, 913–923.
  53. Guerin, D.A.; Yust, B.L.; Coopet, J.G. Occupant Predictors of Household Energy Behavior and Consumption Change as Found in Energy Studies Since 1975. Fam. Consum. Sci. Res. J. 2000, 29, 48–80.
  54. Hayn, M.; Bertsch, V.; Fichtner, W. Electricity Load Profiles in Europe: The Importance of Household Segmentation. Energy Res. Soc. Sci. 2014, 3, 30–45.
  55. Haab, T.C.; McConnell, K.E. Referendum Models and Negative Willingness to Pay: Alternative Solutions. J. Environ. Econ. Manag. 1997, 32, 251–270.
Table 1. Respondent participation results for the smart thermostat program.

Response | Percent | N
Yes | 49.30% | 245
True No | 14.49% | 72
Protest No | 14.89% | 74
Not Sure | 21.33% | 106

Note: A true No indicates a respondent who declined for an economic reason, such as the offered compensation being too low. A protest No indicates a respondent who declined out of opposition to the smart thermostat program itself rather than to its economic value.
Table 2. Summary statistics of determinants.

Variables | Description | Coding | N | Mean | S.D. | Min | Max
Compensation Offer | Amount offered for participation | Discrete between USD 1 and USD 20 | 497 | 10.8 | 5.7 | 1.0 | 20.0
Attitudes and Preferences
Energy Conservation | Importance of energy conservation | 0–4, 0 = not at all important, 4 = very important | 497 | 3.0 | 0.9 | 0.0 | 4.0
Utility Consideration | Thinks utility will consider survey results | 0 = no, 1 = yes | 496 | 0.7 | 0.5 | 0.0 | 1.0
Utility Control | Would allow utility control of major appliances | 0 = no, 1 = yes | 495 | 0.3 | 0.5 | 0.0 | 1.0
Pollution | Concern for air and water pollution from electricity production | 0–4, 0 = not at all concerned, 4 = very concerned | 497 | 2.6 | 1.1 | 0.0 | 4.0
Political Ideology | Political ideology | 1–7, 1 = strongly liberal, 4 = middle of the road, 7 = strongly conservative | 495 | 4.0 | 1.8 | 1.0 | 7.0
Work at Home | Whether a household member works from home at least 3 times a week | 0 = no, 1 = yes | 497 | 0.3 | 0.5 | 0.0 | 1.0
Neighbor Participation | Likelihood neighbors would participate if asked | 0–3, 0 = not likely, 3 = likely | 497 | 1.1 | 0.8 | 0.0 | 3.0
Sociodemographics
Age | Age of respondent | Continuous | 497 | 47.9 | 16.0 | 18.0 | 84.0
Gender | Gender of respondent | 0 = male, 1 = female, 2 = other | 497 | 0.5 | 0.5 | 0.0 | 2.0
High Education | Education level of respondent | 0 = less than Bachelor’s, 1 = Bachelor’s or higher | 497 | 0.6 | 0.5 | 0.0 | 1.0
Income | Household income bracket | 1 = <USD 20,000, 2 = USD 20,000–29,999, 3 = USD 30,000–49,999, 4 = USD 50,000–74,999, 5 = USD 75,000–99,999, 6 = USD 100,000–149,999, 7 = USD 150,000–199,999, 8 = >USD 200,000 | 492 | 4.9 | 1.8 | 1.0 | 8.0
Table 3. Average marginal effects of participating in the smart thermostat program.

Variable | (1) | (2) | (3)
Log (Offered Payment) | 0.0642 ** (0.0288) | 0.0617 ** (0.0266) | 0.0671 ** (0.0267)
Attitudes and Preferences
Energy Conservation | – | 0.0572 ** (0.0257) | 0.0556 ** (0.0260)
Utility Consideration | – | 0.169 *** (0.0458) | 0.175 *** (0.0460)
Utility Control | – | 0.117 ** (0.0521) | 0.113 ** (0.0524)
Pollution | – | −0.0362 * (0.0218) | −0.0375 * (0.0220)
Political Ideology | – | −0.0216 * (0.0122) | −0.0192 (0.0125)
Work at Home | – | 0.0815 * (0.0449) | 0.0806 * (0.0458)
Neighbor Participation | – | 0.109 *** (0.0289) | 0.111 *** (0.0292)
Sociodemographics
Age | – | – | −0.00107 (0.00140)
Gender | – | – | 0.0361 (0.0439)
High Education | – | – | 0.0142 (0.0476)
Log (Income) | – | – | 0.0199 (0.0487)
Observations | 497 | 492 | 484
Pseudo R2 | 0.00707 | 0.139 | 0.148
Percent Correct | 52.31 | 66.67 | 66.53

Note: Each column presents results from a different estimated version of Equation (1). All point estimates represent the average marginal effects of a one-unit increase. Standard errors in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1.
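As a hedged illustration of how point estimates like those in Table 3 could be produced, the sketch below fits a participation logit and reports average marginal effects using statsmodels. It assumes a respondent-level DataFrame df with a binary yes indicator, a log_offer column (the log of the randomized USD 1–20 offer), and covariate names taken from Table 2; it is a sketch of the general approach, not the authors' code.

```python
import pandas as pd
import statsmodels.api as sm

def participation_ame(df: pd.DataFrame) -> pd.DataFrame:
    """Logit of Yes/No participation on the offer and attitude covariates, reported as AMEs."""
    covariates = ["log_offer", "energy_conservation", "utility_consideration",
                  "utility_control", "pollution", "political_ideology",
                  "work_at_home", "neighbor_participation"]
    X = sm.add_constant(df[covariates])
    fit = sm.Logit(df["yes"], X).fit(disp=0)
    # Average marginal effects of a one-unit increase, analogous to column (2) of Table 3.
    return fit.get_margeff(at="overall").summary_frame()
```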
Table 4. Determinants associated with electricity use.

Variables | (1) | (2) | (3) | (4)
Log (Offered Payment) | 0.0649 ** (0.0289) | 0.0680 ** (0.0287) | 0.0667 ** (0.0289) | 0.0660 ** (0.0289)
Home size (sqft) | −0.00620 (0.0220) | – | – | –
Number of people | – | 0.0386 ** (0.0186) | – | –
Average bill | – | – | −0.0206 (0.0196) | –
At least one sensitive group | – | – | – | 0.0323 (0.0481)
Observations | 497 | 496 | 497 | 497
Pseudo R2 | 0.00718 | 0.0134 | 0.00867 | 0.00772
Percent Correct | 51.51 | 53.43 | 54.73 | 53.92

Note: The dependent variable is an indicator for whether the respondent answered Yes. Point estimates are average marginal effects; standard errors in parentheses. Sensitive groups are defined as seniors, children, babies and toddlers, individuals with physical disabilities, or individuals with special needs. ** p < 0.05.
Table 5. Heterogeneity analysis.

Specification | Median WTA
Regional Variation
 South | USD 14.21 (8.25) *
 Northeast | USD 6.86 (5.45)
 Midwest (Base) | –
 West | USD 3.32 (2.83)
Political Ideology
 More Liberal | USD 8.28 (9.97)
 Middle of the Road (Base) | –
 More Conservative | USD 10.53 (12.02)
Maximum RPS a
 No Mandate (Base) | –
 1–50% Renewables | USD 11.84 (6.80) **
 50–100% Renewables | USD 6.76 (4.69) †

Note: All estimates use the covariates listed in column 3 of Table 3; median WTA is computed using Equation (2) in the text. a The state must have a legally mandated percentage of renewable electricity generation by any time frame; this measures whether a state has an RPS at all, as well as its magnitude. ** p < 0.05, * p < 0.1, † p < 0.15.
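Equation (2) itself is not reproduced in this back matter. For a logit that is linear in the log of the offer, Pr(Yes) = Λ(α + β ln t + γ′x), a standard closed form sets Pr(Yes) = 1/2 and solves for the offer, giving median WTA = exp(−(α + γ′x)/β); the sketch below computes this quantity. This is the conventional log-offer formula and is offered only as an assumption about the general approach, not as a restatement of the paper's exact Equation (2).

```python
import numpy as np

def median_wta(alpha: float, beta: float, gamma: np.ndarray, x_bar: np.ndarray) -> float:
    """Median WTA from a log-offer logit, evaluated at covariate values x_bar."""
    # Pr(Yes) = 0.5  <=>  alpha + beta*ln(t) + gamma'x_bar = 0  <=>  t = exp(-(alpha + gamma'x_bar)/beta)
    return float(np.exp(-(alpha + gamma @ x_bar) / beta))

# Illustrative (made-up) coefficients only; these are not the paper's estimates.
print(round(median_wta(alpha=-1.2, beta=0.5, gamma=np.array([0.3]), x_bar=np.array([0.6])), 2))
```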
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
