Article

Development and Validation of the Climate Capability Scale

1 Department of Psychology, Faculty of Human and Health Science, Swansea University, Swansea SA2 8PP, UK
2 School of Management, Faculty of Human and Social Science, Swansea University, Swansea SA1 8EN, UK
3 Helen’s Place Education Consultancy, 34 Southwood Road, Wiltshire BA14 7BZ, UK
4 Department of Communications and Journalism, Bournemouth University, Dorset BH12 5BB, UK
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(15), 11933; https://doi.org/10.3390/su151511933
Submission received: 18 May 2023 / Revised: 24 July 2023 / Accepted: 26 July 2023 / Published: 3 August 2023

Abstract:
Climate change poses a serious existential threat to life on our planet. If we are to mitigate the most damaging impacts of climate change, there is a need for citizens who are willing and able to make changes to their individual behaviours, but who are also politically engaged and motivated to participate in, and advocate for, systemic change; there is a need for citizens who are Climate Capable. However, there is no scale currently available with which to measure the climate capability of adults and adolescents. Through an iterative process across three studies with 849 UK adults, we developed and validated a 24-item Climate Capability Scale. In a further study, with 458 UK adolescent participants (aged 12–15), we validated the scale for use with adolescents. We demonstrate that the scale is internally consistent, has good test–retest reliability, correlates with measures of related constructs such as environmental worldview and scientific literacy, and predicts self-reported pro-environmental behaviour. The Climate Capability Scale may have particular value in educational and public engagement contexts for measuring the effectiveness of programs and interventions designed to increase Climate Capability, as well as similar approaches to heighten engagement with the climate crisis.

1. Introduction

Climate change is one of the most imminent and serious threats to humans, plants, and animal species on Earth [1,2]. We have limited time to reduce carbon emissions if we are to slow down warming trends and prevent the worst potential outcomes from being realized [3,4]. Achieving meaningful change requires citizens who are informed, motivated, and empowered to make behavioural changes and to push for systemic transformation [5,6]. In the present study, we outline and develop the concept of “Climate Capability”, which builds on and updates the earlier work of Seyfang et al. [5] and which is intended to capture some of the key ways in which citizens are engaged in tackling the climate crisis. We propose a novel instrument designed to measure Climate Capability and to report our approach to its development and validation.
Seyfang et al. [5] outlined three dimensions of “Carbon Capability”: (i) decision-making; (ii) individual behaviours and practices; and (iii) broader engagement with systems of provision and governance. The decision-making dimension incorporates factual knowledge and understanding, along with motivation, judgments, and skills. The individual behaviour and practices dimension relates to private-sphere environmental behaviours [7], such as energy conservation and individual decisions surrounding transportation, food, and purchasing. The governance dimension relates to engagement with policy and government via democratic participation. A “Carbon Capable” public is necessary if we are to meet carbon reduction targets; we need citizens who are willing and empowered to make individual lifestyle changes, but who also engage with broader systems of governance to influence environmental policies, infrastructure, and systems of provision [5,8].
Whitmarsh et al. [8] argue that a “Carbon Capable” individual is one who understands the limits of individual action and the need for systemic change, collective action and governmental solutions. A Carbon Capable individual is also one with the skills to evaluate the reliability of various sources of information about climate change and about carbon emissions reduction.
In this paper, we broaden Seyfang et al.’s [5] Carbon Capability construct to Climate Capability. We define Climate Capability as the degree to which individuals: (i) have the skills, understanding, and motivation to make behavioural changes that will reduce their individual contribution to climate change; and (ii) appreciate the need for collective action and governance to limit the magnitude of climate change and mitigate its effects. We broadened the construct because the general public, despite being generally aware that carbon emissions contribute to climate change, harbour significant misconceptions about carbon, its relationship with private-sphere behaviour, and the mechanisms through which carbon dioxide affects the climate. For example, in a postal survey of a representative sample of the UK general public, participants tended to connect carbon emissions to unrelated issues such as ozone depletion, and they tended to connect carbon emissions with industry rather than with domestic behaviours; some respondents also noted uncertainty about the relationship between carbon emissions and climate change [8]. Thus, rather than focusing on individuals’ understanding of carbon and carbon emissions (the antecedents of climate change), we focus more directly on their understanding of climate change itself.

1.1. The Need for a Validated Scale of Climate Capability

A validated Climate Capability Scale has many potential applications, including tracking changes in Climate Capability in the general public over time; exploring demographic and cultural associations with Climate Capability; measuring relationships between Climate Capability and important real-world outcomes (e.g., adoption of lower-carbon lifestyles, political engagement, and support for environmental policies); and evaluating the effectiveness of interventions such as public messaging campaigns, public engagement activities, and educational programs.
In the educational sphere, climate change is likely to become an increasingly important focus within school and university curricula as nations attempt to meet their obligations to cut carbon emissions [9]. This is already happening in Wales, UK, where the Welsh Government has explicitly written climate change into its new curriculum, expecting students to “show their commitment to the sustainability of the planet” [10]. This shift presents opportunities for the development of innovative interventions to increase Climate Capability amongst children and adolescents [11,12,13]. Indeed, adolescents are a pivotal group to engage with climate change. Not only will they be living with the consequences of climate change throughout their lives, but they also have less firmly established worldviews and habits than adults, and are in a strong position to become agents of change [11]. As interventions targeted at this age group proliferate, there will be an increasing need for valid and reliable tools with which to measure attitudes, beliefs, and understanding of climate change.
There are many existing scales of related constructs, such as environmental attitudes (e.g., the Environmental Attitudes Inventory [14]; the New Ecological Paradigm Scale—Revised [15]) and climate change literacy (e.g., the Climate Change Knowledge Scale [16]). However, these scales tend to focus on the first dimension of Climate Capability—attitudes, understanding, and judgment. Our goal was to develop a scale that covered all three dimensions—attitudes and understanding, willingness to make behavioural changes, and appreciation of the need for systemic change.
If such a scale is to be useful in educational and public engagement contexts, it also needs to be brief and accessible. We developed potential scale items in collaboration with practising schoolteachers, including a dyslexia consultant, to ensure that the items were as clear as possible. In addition, as we explain below, we avoided items that included negations and used a simple Yes/No/Don’t Know response format.
There is considerable debate concerning whether scales should include positively and negatively worded items. Mixing item types can counteract acquiescence bias (the tendency to agree with statements, regardless of their content) and facilitate screening of inattentive responders (e.g., by removing participants who provide inconsistent responses across items with opposing meanings). However, negatively worded items tend to decrease internal reliability (e.g., Swain et al. [17]) and are less accessible to children, people with lower reading ages, and adults with lower educational levels [18,19,20]. Assuming that there are other methods in place to ensure that participants are responding attentively, many authors argue that negatively worded items are best avoided (e.g., Barnette [21]). To maximise accessibility, we therefore avoided items that included negations.
Mellor and Moore [22] demonstrated that children (aged 6–13) struggle to use Likert scales to describe internal thoughts, feelings, and beliefs, although this varies with factors such as age, item concreteness, and the type of Likert scale being used (e.g., numerical vs. verbal). Children (aged 5–12) also tend to use the extreme endpoints on Likert scales, which can undermine the reliability and validity of the measure [23]. Even among adults, there are sub-populations of respondents who have difficulty using Likert scale items, including individuals with low reading ages [20]. For maximal accessibility, we used a “Yes/No/Don’t Know” response scale.

1.2. Aims of the Present Study

We aimed to develop and validate a reliable scale which captured the multi-faceted construct of Climate Capability—understanding of climate change and its effects; the motivation and willingness to reduce individual contributions to climate change; and an appreciation of the role of governance and systemic change in tackling climate change.
We generated an initial pool of 70 items. The aim of Study 1 was to pilot test and reduce our 70-item pool in a sample of 300 adult participants, thus creating the Climate Capability Scale (CCS). In Study 2, with 300 participants, we aimed to examine the internal reliability, test–retest reliability, and convergent and divergent validity of the CCS using an independent sample. In Study 3, we aimed to examine the predictive validity of the CCS using an independent sample of participants. Overall, in Studies 1–3, we developed and validated the CCS in adult participants, investigating its internal consistency, test–retest reliability, factor structure, convergent and divergent validity, and external validity. In Study 4, we examined the suitability of the CCS for use with adolescents (12–15 years old) and explored the differences between adults’ and adolescents’ responses.

2. Materials and Methods

Data availability statement: Anonymised data are available at the following link: https://osf.io/8y6s5/?view_only=00665152c9ec4e7fa74f5ba19b373cda, accessed on 1 December 2022.
All studies reported in this manuscript received ethical approval from the Department of Psychology Ethics Committee at Swansea University.

2.1. Study 1

2.1.1. Study 1: Item Development and Refinement

First, we generated potential scale items. We looked to existing scales of climate change knowledge and attitudes (e.g., Cordero et al. [9]; Field et al. [24]) to compile a database of potential items, grouped by theme. This allowed us to identify gaps in coverage in existing scales (e.g., ability to seek out and evaluate reliable sources of information, appreciation of the need for governmental and societal action). Where gaps were identified, we drafted new potential items.
We next reduced the initial item pool to minimise redundancies, reduce biased language, and increase accessibility by simplifying the vocabulary and syntax. Because our goal was to develop a scale that could be used with adolescents as well as with adults, we worked closely with two experienced secondary school teachers, one of whom was a specialist in science teaching, and the other of whom was a specialist in special educational needs, including dyslexia. After this process, 70 items remained. These were organised along the three dimensions of Seyfang et al.’s Carbon Capability model [5], each of which was divided into further subthemes. The full item pool can be seen in Table S1 in the Supplementary Materials.
Dimension 1 (knowledge and attitudes) included four subthemes: acceptance and understanding of anthropogenic climate change (9 items; e.g., “Most scientists agree that humans are causing climate change”); knowledge of warming trends (7 items, e.g., “In the next 50 years, Earth is going to get hotter”); understanding the consequences of climate change (9 items, e.g., “Climate change will cause some animals and plant species to go extinct”); and climate change scepticism (4 items, e.g., “Scientists exaggerate how much the climate will change”).
Dimension 2 (individual behaviours) included three subthemes: private-sphere behavioural change (10 items, e.g., “I have made changes to how I live to reduce my effect on the planet”); information seeking (10 items, e.g., “I have watched documentary programmes about climate change”); and self-efficacy (10 items, e.g., “I know how to reduce my effect on the planet”).
Dimension 3 (governance) included two subthemes: role of governance (3 items, e.g., “Government action is needed to tackle climate change”); and engagement with governance (6 items, e.g., “I have chosen to write to my MP about climate change”).
We also included an optimism theme (3 items, e.g., “I am hopeful that we can reduce the effects of climate change”). Though this theme did not fit neatly within any of the three Carbon Capability dimensions, optimism and hope have been incorporated into some models of environmental behaviour change (e.g., Cantell et al. [25], Ross et al. [26]), so we sought to capture those factors here.
Our aims for Study 1 were to:
  • Pilot test the 70-item pool in a sample of adult participants;
  • Identify problematic items;
  • Identify the best items to create a short, yet comprehensive, Climate Capability Scale;
  • Explore the internal reliability of the items left at the end of this process;
  • Explore the factor structure of the items at the end of this process.

2.1.2. Study 1: Participants

We planned to recruit 300 participants through the participation platform Prolific (www.prolific.co, accessed on 15 July 2021). This target sample size was selected to be large enough to provide meaningful parameter estimates while also balancing resource constraints. To be eligible, participants had to be currently residing in the UK and have a minimum approval rate of 90% (i.e., at least 90% of their previous Prolific submissions had to have been approved).
In total, 301 participants completed the study. We excluded participants who responded incorrectly to any of three attention check questions (n = 2; ~0.6%) and participants who provided nonsensical, irrelevant, or incomprehensible responses to the open-ended prompt “In your own words, what was this study about?” (n = 9). The final sample, therefore, consisted of 290 participants. Demographic characteristics are shown in Table 1.

2.1.3. Study 1: Materials

Study 1 consisted of a single, 70-item survey, which is reproduced in full in the Supplementary Materials (Table S1).

2.1.4. Study 1: Procedure

Participants were invited to participate in a study on attitudes towards climate change via their Prolific dashboards. Upon selecting the survey, participants were presented with an information screen and provided their informed consent.
Participants were then presented with the 70 items in a random order. For each item, participants were required to make a “Yes”, “No”, or “Don’t Know” response. Three attention check questions were embedded in the survey, which directed participants to make a specific response (e.g., “Please select Don’t Know”).
After completing the survey, participants were asked about their age, gender, education, and employment status. Finally, participants were required to type a response to the prompt “In your own words, what do you think this questionnaire was about?” Participants were then debriefed.

2.2. Studies 2 and 3: Internal Reliability, Test–Retest Reliability, Convergent and Divergent Validity, and Predictive Validity

2.2.1. Studies 2 and 3: Overview

In Study 2, we aimed to examine the internal reliability, test–retest reliability, and convergent and divergent validity of the CCS in an independent sample. To test convergent validity, participants completed two scales of constructs that we would expect to be strongly related to Climate Capability. First, we measured ecological worldview using the New Ecological Paradigm Scale—Revised (NEP-R) [15]. This scale was selected in part because of its very wide usage in the environmental psychology literature, but also because of its brevity (15 items). Second, we measured knowledge of climate change using the Climate Change Knowledge Scale (CCK) [16]. The CCK was chosen again because of its relative brevity, but also because it is restricted very narrowly to the understanding of facts relating to climate change, without incorporating any moral or attitudinal items. To test divergent validity, we measured basic scientific literacy using the National Science Foundation Scientific Literacy Indicator Items (NSF Indicators) [27]. The NSF Indicator items were chosen because they were developed to be understandable for a general population with varying levels of literacy and education. Divergent validity would be demonstrated if the relationship between the NSF Indicators and the CCS were weaker than the relationships between the CCS, the NEP-R, and the CCK. We predicted that CCS scores would be positively correlated to all other measures, but that the correlations would be stronger for the NEP-R and CCK than for the NSF Indicator items.
In Study 3, we aimed to examine the predictive validity of the CCS in an independent sample of participants. Specifically, we aimed to examine how strongly the CCS predicted self-reported regular private sphere environmental behaviours, as well as less frequent environmental behaviours (e.g., engagement with government, charitable volunteering, and larger lifestyle changes). To this end, participants completed the CCS along with two indices of their pro-environmental behaviours, both of which were created for the current research.

2.2.2. Studies 2 and 3: Participants

We planned to recruit 300 participants for each study through the participation platform Prolific (www.prolific.co, accessed on 20 July 2021). As in Study 1, participants had to be current residents of the UK, have a minimum Prolific approval rate of 90%, and have not participated in any previous Climate Capability Scale study.
In total, 300 participants completed Study 2. We excluded participants who responded incorrectly to any of the attention check items (n = 4; ~1.3%) and participants who provided nonsensical, irrelevant, or incomprehensible responses to the open-ended prompt “In your own words, what was this survey about?” (n = 24; ~8%). Therefore, the final sample for Study 2 consisted of 272 participants. To assess test–retest reliability, 100 participants were invited to complete the CCS again one week later. In total, 87 participants provided complete, usable data for both sessions.
In total, 301 participants completed Study 3. We excluded participants who responded incorrectly to any of the attention check items (n = 4; ~1.4%) and participants who provided nonsensical, irrelevant, or incomprehensible responses to the open-ended prompt “In your own words, what was this survey about?” (n = 10; ~3.3%). Therefore, the final sample consisted of 287 participants. The demographics for Studies 2 and 3 are shown in Table 1.

2.2.3. Studies 2 and 3: Materials

The Climate Capability Scale (CCS)

The CCS consisted of 24 items. The item “I know how much different activities affect the planet” was added to the 23 items retained from Study 1. This item was added because it captured an important aspect of Climate Capability which was missing from our original item pool. We expected it to load heavily on the “Self-Efficacy” factor, as it was conceptually related to the other items within that factor.
For each item, participants responded either Yes, No, or Don’t Know. For most items, Yes responses were scored as 2 points, Don’t Know responses were scored as 1 point, and No responses were scored as 0 points. Items marked with an asterisk were reverse-scored. Total CCS scores were obtained by summing the item points. Thus, scores could range from 0 to 48.
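To make the scoring rule concrete, the sketch below implements it in Python. It is a minimal sketch under stated assumptions: the response options are the strings described above, and the reverse-scored item identifiers are hypothetical placeholders rather than the actual asterisked CCS items.

```python
# Minimal scoring sketch for a Yes/No/Don't Know scale such as the CCS.
# The reverse-scored item identifiers below are hypothetical placeholders.

FORWARD = {"Yes": 2, "Don't Know": 1, "No": 0}
REVERSE = {"Yes": 0, "Don't Know": 1, "No": 2}
REVERSE_ITEMS = {"ccs_04", "ccs_05"}  # stand-ins for the asterisked items

def score_ccs(responses):
    """responses: dict mapping item identifier -> 'Yes'/'No'/'Don't Know'.

    Returns the summed total; with 24 items, totals range from 0 to 48.
    """
    total = 0
    for item, answer in responses.items():
        mapping = REVERSE if item in REVERSE_ITEMS else FORWARD
        total += mapping[answer]
    return total
```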

The New Ecological Paradigm Scale—Revised (NEP-R)

The NEP-R [15] consists of 15 items answered on a five-point scale (strongly agree, mildly agree, unsure, mildly disagree, strongly disagree). Positive items (e.g., “The balance of nature is very delicate and easily upset”) were scored from 0 (strongly disagree) to 4 (strongly agree). Negative items (e.g., “Humans were meant to rule over the rest of nature”) were scored from 0 (strongly agree) to 4 (strongly disagree). The scores were summed such that total scores could range from 0 to 60. Higher scores on this scale indicate a stronger ecological worldview.
The items were presented in a random order, and participants were required to respond to every item. Internal reliability was acceptable; McDonald’s ω = 0.81, Cronbach’s α = 0.80.

The Climate Change Knowledge Scale (CCK)

The CCK [16] consists of 19 statements to which participants respond either Correct, Wrong, or Don’t Know. Correct items (e.g., “Greenhouse gases partly retain the Earth’s heat radiation”), were scored such that a “Correct” response was assigned 2 points, a Don’t Know response was assigned 1 point, and a Wrong response was assigned 0 points. For incorrect items (e.g., “CO2 is harmful to plants”), reverse coding was used. The scores were summed such that the total scores could range from 0 to 38.
The items were presented in a random order, and participants were required to respond to every item. The internal reliability for our sample was below the conventionally accepted 0.80 threshold (McDonald’s ω = 0.68, Cronbach’s α = 0.69).

The National Science Foundation’s Scientific Literacy Indicator Items (NSF Indicators)

The NSF indicators [27] consist of nine scientific statements, some true and some false. For each item, participants responded True, False, or Don’t Know. For true statements (e.g., “Electrons are smaller than atoms”), “True” responses were assigned 2 points, “Don’t Know” responses were assigned 1 point, and “False” responses were assigned 0 points. For false statements (e.g., “Antibiotics kill viruses as well as bacteria”), reverse coding was used. The scores were summed; total scores could range from 0 to 18.
Because the NSF indicator items were not designed to measure a single underlying construct, a reliability analysis was not conducted.

Regular Private Sphere Environmental Behaviour Index

This index was created to capture self-reported engagement in everyday private sphere behaviours that contribute towards the minimisation of one’s individual carbon footprint. In total, there were 15 items, many of which were adapted from Whitmarsh et al. [8]. Participants indicated how frequently they engaged in each behaviour on a five-point scale (Never, Rarely, Sometimes, Often, Always). For all items, Never responses were scored as 0, Rarely as 1, Sometimes as 2, Often as 3, and Always as 4. The scores were summed; therefore, the total index scores could vary between 0 and 60.

Infrequent Environmental Behaviour Index

This index was created to capture pro-environmental behaviours that people might engage in less frequently. The index consisted of eight items, which covered political engagement (three items), charitable engagement (three items), and behaviours related to flying and carbon offsetting (two items).
Participants were asked whether they had ever engaged in each of these actions. Their response options were “No—never” (scored as a 0), “Yes—once or twice” (scored as a 1), or “Yes—several times” (scored as a 2). The scores were totalled across all items such that the total scores could vary from 0 to 16.

2.2.4. Studies 2 and 3: Procedure

The participants for Studies 2 and 3 were invited to participate in a study on attitudes towards climate change via their Prolific dashboards. Participants were presented with an information screen and were asked to provide their informed consent.
In both studies, participants first completed the CCS. Items were presented in a random order, and were interspersed with three attention check questions which directed participants to make a specific response (e.g., “Please select No”). Participants were required to provide a response for each item.
In Study 2, following completion of the CCS, participants completed the NEP-R, the CCK, and the NSF Indicators. The order of these three scales was randomised for each participant, as was the order of the items within each scale. Participants were required to respond to each item.
In Study 3, following completion of the CCS, the participants completed the Regular Private Sphere Environmental Behaviour Index and the Infrequent Environmental Behaviour Index. The order of these two indices was randomly determined for each participant. The order of items within each measure was randomised for each participant. Three attention checks were embedded in the CCS and one was embedded in each of the behaviour indices. These attention checks directed participants to make a specific response (e.g., “Please select No—Never”).
Finally, participants provided demographic information and were asked to write, in their own words, what they thought the questionnaires had been measuring. All participants were then debriefed.
In Study 2, participants in the test–retest sample were invited to participate in Session 2 exactly 7 days after they completed Session 1; the survey was kept open for one week. The majority of participants who returned (~75%) did so on the first day.

2.3. Study 4: Adolescents’ Scale Responses

2.3.1. Study 4: Overview

Our final aim was to explore adolescents’ responses to the CCS. Specifically, we aimed to: (1) establish whether the scale and its subscales had acceptable internal reliability in adolescents; (2) assess the factor structure in a sample of adolescents; and (3) explore differences between adolescent and adult responses to the CCS.

2.3.2. Study 4: Participants

Teachers from five secondary schools were recruited through academic networks and through Twitter. A total of 615 adolescents from 5 secondary schools completed the CCS as part of a classroom activity. The data from 76 participants were removed because they did not fully complete the survey. A further 58 participants were removed because they did not consent for their data to be used for research purposes, and data from 23 participants were removed because the responses indicated inattentive responding (i.e., “straight-lining”, implausibly fast completion times). The final adolescent sample therefore included 458 participants.
Thirty participants (6.6%) did not provide their school year groups. Of the remaining 428 participants, 77 (18.0%) were in Year 8 (12–13 years old), 145 (33.9%) were in Year 9 (13–14 years old), and 206 (48.1%) were in Year 10 (14–15 years old). Due to an oversight, data on gender were not recorded at one school (n = 28). Of the remaining 430 participants, 85 (19.8%) were female, 302 (70.2%) were male, and 17 (4.0%) indicated that they were non-binary; 26 participants (6.0%) chose not to report their gender.
Note that the gender imbalance in this sample occurred because the school that contributed the most participants (n = 244) was a boys’ school. To address this imbalance, all analyses were repeated with this school removed. These analyses are reported in full in the Supplementary Materials, but there were no notable discrepancies compared to the full sample’s results.

2.3.3. Study 4: Materials

The 24-item Climate Capability Scale (CCS) was used.

2.3.4. Study 4: Procedure

Participants were invited to complete the CCS as part of a classroom activity by their teachers. Students were instructed that they did not have to participate if they did not want to, and that they could withdraw at any point by closing their internet browser.
Upon opening the survey, the participants read a brief information screen which informed them of the purposes of the study. Before proceeding to the survey, the participants were presented with two options: they could allow their survey responses to be used for research purposes, or they could opt out. They were also reminded that they could close their browser at any time if they did not wish to take part.
Participants then completed the CCS independently. Items were presented in a random order. Participants were required to provide a response to each question.
Following the CCS, the participants were asked to indicate their school year group and their gender identity. They were also asked to provide a memorable word which they could use to request withdrawal of their data at a later date. Finally, the participants were thanked for their participation and debriefed.

3. Results

3.1. Study 1: Results

The aim of Study 1 was to pilot test and narrow down our item pool. We first examined the distribution of responses across each item (see Table S2 in the Supplementary Materials) to identify items with low response variability (>85% “Yes” or “No” responses), which lack the ability to discriminate between respondents. Sixteen such items were identified as candidates for removal. Next, we examined the item–rest correlation for each item. Three items were identified which correlated poorly with the rest; all of these items were part of the optimism theme.
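A minimal sketch of these screening steps is shown below, assuming the scored responses (0/1/2 per item) are held in a pandas DataFrame; the 0.20 item–rest cut-off is an illustrative assumption, as no numerical threshold is stated above.

```python
# Item screening sketch: flag items with low response variability and items
# with poor item-rest correlations. Assumes one scored column (0/1/2) per item.
import pandas as pd

def screen_items(scored: pd.DataFrame, max_extreme_share=0.85, min_item_rest=0.20):
    flagged = {}
    for item in scored.columns:
        counts = scored[item].value_counts(normalize=True)
        # Share of respondents giving the "No" (0) or "Yes" (2) response.
        extreme_share = max(counts.get(0, 0.0), counts.get(2, 0.0))
        # Correlation of the item with the total of all remaining items.
        rest_total = scored.drop(columns=item).sum(axis=1)
        item_rest_r = scored[item].corr(rest_total)
        if extreme_share > max_extreme_share or item_rest_r < min_item_rest:
            flagged[item] = {"extreme_share": extreme_share, "item_rest_r": item_rest_r}
    return flagged
```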
We removed 17 of the 19 candidates for removal before proceeding to an Exploratory Factor Analysis (EFA). We retained two of the low-variability items, both of which were under the role of governance subtheme (“Government action is needed to tackle climate change” and “The whole system we live in needs to change to tackle climate change”) because of their theoretical importance within the Carbon Capability framework [5]. The EFA was conducted on the remaining 53 items using JASP 0.13.1 (www.jasp-stats.org, accessed on 13 February 2022). Factors were extracted using parallel analysis with oblique promax rotation.
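The factor analyses reported here were run in JASP; for readers who prefer a scripted workflow, the following is a rough Python equivalent using the third-party factor_analyzer package, with a simple simulation-based parallel analysis. It is a sketch under those assumptions, not a reproduction of the JASP procedure.

```python
# Approximate EFA workflow: parallel analysis to choose the number of factors,
# then extraction with oblique (promax) rotation. Requires the third-party
# factor_analyzer package; data is a pandas DataFrame of scored item responses.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    rng = np.random.default_rng(seed)
    fa = FactorAnalyzer(rotation=None)
    fa.fit(data)
    observed, _ = fa.get_eigenvalues()
    sim_eigs = []
    for _ in range(n_sims):
        sim = pd.DataFrame(rng.normal(size=data.shape))
        fa_sim = FactorAnalyzer(rotation=None)
        fa_sim.fit(sim)
        sim_eigs.append(fa_sim.get_eigenvalues()[0])
    # Retain factors whose eigenvalues exceed those from random data.
    threshold = np.percentile(np.array(sim_eigs), percentile, axis=0)
    return int(np.sum(observed > threshold))

def run_efa(data):
    n_factors = parallel_analysis(data)
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(data)
    return pd.DataFrame(fa.loadings_, index=data.columns)
```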
The EFA revealed a five-factor structure, with factors that could be interpreted as scepticism (e.g., “Scientists exaggerate how much the climate will change”); behaviour change (e.g., “I have made changes to the way I live to reduce my effect on the planet”); self-efficacy (e.g., “I know how to find out how much different activities affect the planet”); information seeking (e.g., “I look for news articles about climate change”); and knowledge of warming trends (e.g., “The Earth is hotter now than it was 100 years ago”).
We used this EFA to identify further candidates for removal. First, we identified items with factor loadings below 0.40. For larger factors, we also identified items that were redundant. However, our decisions were not purely data-driven; we retained the three items relating to role of governance even though they did not form a distinct cluster in this analysis because of their theoretical importance.
We retained the 23 items shown in Table 2, which we submitted to a second EFA. We first examined a scree plot, which suggested a possible six-factor solution. We allowed the extraction of all factors with eigenvalues greater than 0. In accordance with the scree plot, six factors emerged which were readily interpretable. In addition to the five factors that emerged from the larger item set, the three role of governance items formed their own distinct factor.
We assessed the fit of the model to the data by examining the root mean square error of approximation (RMSEA). The RMSEA can vary from 0 to 1, with smaller numbers indicating better fit; the conventional cut-off for acceptable fit is <0.08. We also examined the Tucker–Lewis Index (TLI), which adjusts model fit estimates for parsimony; the conventional cut-off for this index is ≥0.90. The RMSEA was acceptable at 0.07, though the TLI was just below the 0.90 threshold (0.89).
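For reference, the commonly used definitions of these two indices, in terms of the model and baseline (null) model chi-square statistics, are shown below; note that software packages differ slightly, for example in whether N or N − 1 appears in the RMSEA denominator.

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2_{\text{model}} - df_{\text{model}},\, 0\right)}{df_{\text{model}}\,(N - 1)}},
\qquad
\mathrm{TLI} = \frac{\chi^2_{\text{null}}/df_{\text{null}} \;-\; \chi^2_{\text{model}}/df_{\text{model}}}{\chi^2_{\text{null}}/df_{\text{null}} \;-\; 1}
```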
Finally, we examined the internal consistency of the scale. We report the commonly used index Cronbach’s α, but focus on McDonald’s ω, which has been recommended as a superior metric because it does not assume that all items have equal covariance with the true score [28,29]. McDonald’s ω can be interpreted in a similar way to Cronbach’s α; that is, an internally consistent scale will have ω ≥ 0.80. Both Cronbach’s α (0.87) and McDonald’s ω (0.83) indicated acceptable internal reliability.
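As an illustration of how the two coefficients differ, the sketch below computes Cronbach’s α from item and total variances, and McDonald’s ω in its single-factor (omega total) form from standardised factor loadings. It assumes scored item responses in a pandas DataFrame and uses the third-party factor_analyzer package; it is not the procedure used to produce the values reported here.

```python
# Reliability sketch: Cronbach's alpha and a single-factor McDonald's omega.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mcdonald_omega(items: pd.DataFrame) -> float:
    # Fit a one-factor model on the correlation matrix (standardised loadings).
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(items)
    loadings = fa.loadings_.flatten()
    uniquenesses = 1 - loadings ** 2
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())
```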
In Study 1, we tested an initial pool of 70 items in an online adult sample. After eliminating items with low variability and low item–total correlations, we were left with 53 items, which were submitted to an Exploratory Factor Analysis. Items with low factor loadings and/or high redundancy were removed, leaving a 23-item scale with a six-factor structure and good internal consistency.

3.2. Study 2 and 3: Results

Our first analyses were conducted on the pooled CCS data from Studies 2 and 3 (n = 559), as they followed identical procedures up until completion of the CCS. Pooling the data therefore allowed us to increase the reliability of our parameter estimates.

3.2.1. CCS—Descriptive Statistics

Table 3 shows the frequencies of “Yes”, “No” and “Don’t know” responses for each item. We successfully avoided ceiling and floor effects for most items. However, the variability was relatively low for Items 1 to 3, which assessed factual knowledge about warming trends; more than 80% of people correctly responded that the Earth is warming, that it will continue to warm, and that sea levels will rise. Variability was also low for Items 13 to 15, which all concern the need for societal and political action. More than 90% of respondents stated that government action and systemic change are needed, and more than 85% of respondents indicated that new laws were needed in order to tackle climate change.
Variability was generally higher for behaviour change items. While most people indicated that they had made (~79%) or were going to make (~81%) some changes to their behaviour, fewer were able to endorse that they had made changes in any specific areas of their lives, such as their eating habits (~50%) or their purchasing habits (~74%).
Items related to information seeking also showed considerable variability. Relatively low numbers of participants indicated that they seek out information about climate change, whether through news articles (~38%), documentary programmes (~45%), social media (~26%), or websites/blogs (~18%).
Responses were also highly variable across the self-efficacy items 20 to 24. Only ~32% of participants responded that they knew a lot about climate change. Just over half indicated that they knew where to find good sources of information; a similar figure indicated that they knew how to find credible information about how much different activities affect the planet.
Total scores on the CCS ranged from 2 to 48, with a median score of 34.0. The mean CCS score was 33.54 (SD = 8.81). The scores were negatively skewed (see Figure 1), with very few scores at the low end of the scale. However, the responses were not at the ceiling, as there was a positive tail of scores of 40 or higher.

3.2.2. CCS—Confirming the Factor Structure of the CCS

We attempted to replicate the factor structure that had emerged from the Exploratory Factor Analysis in Study 1 using the pooled sample from Studies 2 and 3 (n = 559). The 24 CCS items were entered into a Confirmatory Factor Analysis, with the factor structure specified in Table 2. Because we expected factors to be correlated, we used Maximum Likelihood estimation with robust standard errors.
We assessed the fit of the model to the data by examining the RMSEA, which assesses absolute model fit, and the Comparative Fit Index (CFI) and the Tucker–Lewis Index (TLI), which adjust model fit estimates for parsimony. The RMSEA was acceptable at 0.067, 90% CI [0.062, 0.072], suggesting an adequate absolute fit. However, both the CFI (0.889) and the TLI (0.871) were slightly below the conventional cut-offs of 0.90, suggesting that a more parsimonious model might be preferred.
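The sketch below shows how a confirmatory model of this kind could be specified in Python with the third-party semopy package; the analyses reported here were not run this way, and the factor and item names are illustrative stand-ins for the structure in Table 2, with only a subset of factors shown.

```python
# CFA sketch using semopy's lavaan-style model syntax. Item and factor names
# are hypothetical placeholders for part of the six-factor structure in Table 2.
import pandas as pd
import semopy

MODEL_DESC = """
scepticism       =~ ccs_04 + ccs_05 + ccs_06
behaviour_change =~ ccs_07 + ccs_08 + ccs_09 + ccs_10
self_efficacy    =~ ccs_20 + ccs_21 + ccs_22 + ccs_23 + ccs_24
"""

def fit_cfa(data: pd.DataFrame) -> pd.DataFrame:
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                    # maximum likelihood (Wishart) by default
    stats = semopy.calc_stats(model)   # includes chi-square, RMSEA, CFI, TLI
    return stats[["RMSEA", "CFI", "TLI"]]
```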
To explore this possibility, we conducted an Exploratory Factor Analysis using parallel analysis to identify factors. This analysis returned five factors. Notably, the three governance items did not cluster together to form a single factor. Indeed, they did not significantly load on any factor. However, we did not want to discard these items because the understanding that there is a need for broader system change is an important aspect of Climate Capability [5]. We, therefore, retained the 24-item scale with a six-factor structure.
Finally, we assessed the internal reliability of each of the six subscales (see Table 4). Despite each subscale consisting of between just three and five items, the reliability was generally acceptable. The lowest reliability found was for the knowledge subscale, while the highest was found for the scepticism subscale. These results are encouraging, and suggest that it would be reasonable to examine subscales within the CCS rather than using only the global measure.

3.2.3. Internal Consistency and Test–Retest Reliability

In both Study 2 and Study 3, the internal consistency of the CCS was acceptable: Study 2, α = 0.85; ω = 0.84, 95% CI [0.82, 0.87]; Study 3, α = 0.88; ω = 0.85, 95% CI [0.86, 0.90]. All items correlated positively with the total score of the remaining items, and removing any individual item would not have meaningfully increased either α or ω.
To examine test–retest reliability, we correlated the Session 1 scores with the Session 2 scores for the retest sub-sample (n = 87) in Study 2. The correlation was positive and very large—r = 0.90, 95% CI [0.85, 0.93]—indicating high test–retest reliability.

3.2.4. Convergent and Divergent Validity

In Study 2, we examined the convergent and divergent validity of the CCS by correlating scores with two other scales of related constructs: environmental worldview (the NEP-R) and climate change literacy (the CCK). We also correlated the CCS scores with the NSF Indicators, which assess general scientific literacy. We expected the CCS to correlate positively with each of these three measures. However, because of the more substantive conceptual overlap, we expected that the CCS would correlate more strongly with the NEP-R and the CCK than with the NSF Indicators. Thus, convergent validity would be indicated by strong, positive correlations with the NEP-R and CCK, and divergent validity would be indicated by a significantly weaker positive correlation with the NSF Indicators.
Figure 2 visualises the relationships between the CCS and each of the other three measures. As predicted, the CCS correlated positively with all three measures: NEP-R, r = 0.52, 95% CI [0.43, 0.60]; CCK, r = 0.49, 95% CI [0.39, 0.58]; and NSF Indicators, r = 0.27, 95% CI [0.16, 0.38]. Thus, higher CCS scores were associated with a stronger ecological worldview, greater climate change literacy, and greater general scientific literacy.
To test the hypothesis that the CCS would correlate more strongly with the NEP-R and CCK than with the NSF indicator items, we used Steiger’s (1980) method for comparing related correlation coefficients, which we implemented using Lee and Preacher’s (2013) online calculator (http://quantpsy.org/corrtest/corrtest2.htm, accessed on 20 March 2022). As predicted, the CCS correlated more strongly with the NEP-R than with the NSF indicator items (z = 3.90, p < 0.001), and the CCS correlated more strongly with the CCK than with the NSF indicator items (z = 4.21, p < 0.001). This pattern of results confirms that the CCS has good convergent and divergent validity.
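For transparency, a sketch of the test statistic implemented by that calculator (Steiger’s pooled-correlation form) is given below; r12 and r13 are the two correlations being compared, r23 is the correlation between the two non-shared measures (not reported above), and n is the sample size.

```python
# Steiger's (1980) test for two dependent correlations sharing one variable,
# using the pooled-correlation (Z1*) form. Returns an approximate z statistic.
import math

def steiger_z(r12, r13, r23, n):
    z12, z13 = math.atanh(r12), math.atanh(r13)
    r_bar = (r12 + r13) / 2
    rb2 = r_bar ** 2
    psi = r23 * (1 - 2 * rb2) - 0.5 * rb2 * (1 - 2 * rb2 - r23 ** 2)
    s_bar = psi / (1 - rb2) ** 2          # estimated covariance of the two z's
    return (z12 - z13) * math.sqrt((n - 3) / (2 - 2 * s_bar))

# Illustrative use: comparing the CCS-NEP-R correlation (0.52) with the
# CCS-NSF correlation (0.27) would additionally require the NEP-R-NSF
# correlation, which is not reported in the text.
```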

3.2.5. External Validity

In Study 3, we examined the external validity of the CCS by correlating CCS scores with self-reported pro-environmental behaviours. Breakdowns of responses to the behavioural indices can be seen in Table 5 (Regular Environmental Behaviour Index) and Table 6 (Infrequent Environmental Behaviour Index). The mean score on the Regular Environmental Behaviour Index was 40.49 (SD = 7.53), with scores ranging from 15 to 59. The distribution exhibited slight negative skew, as shown in Figure 3a. In contrast, the distribution of scores on the Infrequent Environmental Behaviour Index showed extreme positive skew, with the majority of scores bunched at the bottom of the scale (Figure 3b). The median score was 2.00, with an interquartile range of 3.00. Scores ranged from 0 to 14.
The reliability of the Regular Private Sphere Environmental Behaviour Index was acceptable: α = 0.78; ω = 0.79, 95% CI [0.74, 0.82]. An inspection of individual item statistics did not reveal any problematic items. The reliability of the Infrequent Environmental Behaviour Index was similar: α = 0.75; ω = 0.76, 95% CI [0.70, 0.79]. Again, no problematic items were identified by our inspection of individual item statistics. All items were therefore retained.
We examined the predictive validity of the CCS by examining correlations with the behavioural indices. Because of the extreme skew in the Infrequent Environmental Behaviour Index, we used the more conservative Spearman’s rho coefficient. The scores on the CCS were strongly correlated with the scores on the Regular Environmental Behaviour Index: rs = 0.62, 95% CI [0.54, 0.68], p < 0.001. The scores on the CCS were also strongly positively correlated with the scores on the Infrequent Environmental Behaviour Index: rs = 0.54, 95% CI [0.45, 0.62], p < 0.001. These correlations are visualised in Figure 3 (Panels c and d).
In summary, Study 3 established that scores on the CCS are strongly predictive of self-reported pro-environmental behaviours. Participants who score highly on the CCS are likely to engage in more regular private-sphere environmental behaviours (e.g., conserving water and energy, walking instead of driving). They are also more likely to report engaging in more infrequent behaviours, including political engagement and collective activism. Taken together, these findings suggest that the CCS has good predictive validity.

3.3. Study 4: Adolescents’ Scale Responses

3.3.1. Internal Reliability

The internal reliability was good, as assessed by Cronbach’s α (0.82) and McDonald’s ω (0.82; 95% CI [0.80, 0.85]). These values were slightly lower than those for the pooled adult sample (α = 0.87, ω = 0.84), though still above the conventional cut-off of 0.80. An inspection of item statistics revealed that all items were positively correlated with the total score; the lowest item–rest correlation was 0.224 (Item 3: “In the next 50 years, sea levels will rise”). The reliability would not have been meaningfully improved by dropping any individual items; therefore, all items were retained.

3.3.2. Confirming the Factor Structure in Adolescent Responses

A Confirmatory Factor Analysis was conducted by applying the six-factor structure from the adult data, using Maximum Likelihood estimation with robust standard errors. The fit indices were very similar to those found for the adult participants. The RMSEA was 0.053, 90% CI [0.047, 0.058], indicating an acceptable model fit. Both the CFI (0.898) and the TLI (0.882) were slightly below the conventional 0.90 threshold. Reliability coefficients for each of the six subscales are shown in Table 7. They ranged from 0.66 to 0.81, and were generally slightly lower than the corresponding reliabilities of the adult samples.
To investigate the possibility that there may be a more parsimonious factor structure, we conducted an Exploratory Factor Analysis on the adolescent data, with oblique promax rotation and parallel analysis. This analysis revealed a five-factor structure. In contrast to the adult data, the three “Knowledge of Warming Trends” items clustered with the three “Role of Governance” items in a distinct “Knowledge and Governance” factor. These findings suggest that basic climate literacy may be closely intertwined with an appreciation of the role of and need for governance within this age group. The internal reliability of this “Knowledge and Governance” factor was ω = 0.709, 90% CI [0.667, 0.750], α = 0.712. However, our re-analysis of the data with the boys’ school excluded returned a six-factor structure which mirrored the factor structure found in the adult data. Specifically, the knowledge of warming trends and role of governance items clustered into two distinct factors, as in the adult data. Further research in adolescents will be needed in order to confirm the factor structure; for now, we recommend retaining the six-factor structure with separate knowledge of warming trends and role of governance factors.
Finally, item 24 (“I know how much different activities affect the planet”) did not significantly load on any factor. However, the overall scale reliability was not substantially improved by its removal. Furthermore, the reliability of the self-efficacy subscale would have been adversely affected by its removal, indicating that the item should be retained in the CCS when used with adolescents.

3.3.3. Comparing Adult and Adolescent Responses to the CCS

The mean CCS score for adolescents was 29.98 (SD = 7.92). We compared adolescents’ total CCS scores with adults’ total CCS scores from the pooled Study 2 and 3 sample (M = 33.54, SD = 8.81). A Levene’s test indicated that the variances were unequal: F(1) = 6.47, p = 0.013. In addition, Shapiro–Wilk tests indicated that both the adult data (W = 0.971, p < 0.001) and the adolescent data (W = 0.992, p = 0.018) were not normally distributed. Consequently, we compared the adult and adolescent data using a Mann–Whitney U test, which makes no assumptions about the shapes of the underlying distributions. This test indicated that total CCS scores were significantly lower for adolescents than for adults: U = 95,377, p < 0.001, r = 0.255.
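A sketch of this comparison is shown below, assuming the two groups’ total CCS scores are held in arrays; the effect size is computed as r = |Z|/√N from the normal approximation to U (ignoring tie corrections), which is assumed here to be the convention behind the value reported above.

```python
# Mann-Whitney U comparison of two groups of total CCS scores, with an
# approximate effect size r = |Z| / sqrt(N) from the normal approximation to U.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(adult_scores, adolescent_scores):
    u, p = mannwhitneyu(adult_scores, adolescent_scores, alternative="two-sided")
    n1, n2 = len(adult_scores), len(adolescent_scores)
    mean_u = n1 * n2 / 2
    sd_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # ignores tie corrections
    z = (u - mean_u) / sd_u
    r = abs(z) / np.sqrt(n1 + n2)
    return u, p, r
```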
To answer the question of whether adolescents’ scores were systematically lower than adults’ scores across the entire scale or whether there were specific areas of divergence, we compared the adolescents’ and adults’ subscale scores. We used a Bonferroni-corrected alpha level of 0.008 (0.05/6) for these comparisons. The subscale totals are shown in Table 8, along with the inferential comparisons. The adolescents’ scores were significantly lower than the adults’ on the scepticism, behaviour change, role of governance, and information seeking subscales. The largest effect was found for the behaviour change subscale (r = 0.344).
Interestingly, adolescents’ self-efficacy scores were significantly higher than adults’, although the effect size was small (r = 0.102). Adults and adolescents did not significantly differ in their scores on the knowledge of warming trends subscale, with an effect size close to zero (r = 0.012).
Table S2, in the Supplementary Materials, shows comparisons between adolescents and adults for each individual item. Briefly, in keeping with the subscale analysis, adolescents tended to make fewer “yes” responses than adults across most items, and this was particularly evident in the behaviour change and information seeking items. Adults’ and adolescents’ responses were quite similar across most of the knowledge of warming trends and self-efficacy items, though there were some exceptions to this. Notably, adolescents were more likely than adults to respond “yes” to the item “I know a lot about climate change”.
In summary, Study 4 showed that the CCS can be used with 12–15 year olds. Within this age group, the CCS retained acceptable internal consistency and a readily interpretable factor structure. Differences between adults’ and adolescents’ responses shed light on some potentially fruitful targets for intervention. For example, increasing media literacy and knowledge of the scientific process might ameliorate scepticism within this age group, and identifying where adolescents see opportunities for behavioural change may also be important in helping adolescents become effective agents of change.

4. Discussion

Reducing carbon emissions to mitigate the worst impacts of climate change will require a Climate Capable populace—a general public who are able and motivated to reduce their individual contributions to climate change, but who are also politically engaged and willing to push for change at the societal level [5,8]. A reliable and valid measure of Climate Capability has many applications, including tracking population-level changes in Climate Capability over time and evaluating the effectiveness of educational programmes, public engagement activities, and public messaging campaigns. Across four studies with a total of 849 adult and 458 adolescent participants, we demonstrated that the 24-item CCS is internally consistent and reliable, has good convergent and divergent validity, and predicts self-reported pro-environmental behaviours. Below, we consider each of these properties in more depth before considering the insights that can be gleaned from comparisons between adults’ and adolescents’ responses to the CCS.

4.1. Reliability and Validity of the CCS

In every sample of participants, the internal consistency of the CCS was good (ω > 0.80). Analyses of individual item statistics showed that all items correlated positively with the total score, and there were no individual items that had a substantial adverse effect on scale consistency. In Study 2, we examined test–retest reliability after a seven-to-nine-day delay. The responses were very consistent across time points, indicating excellent reliability. Taken together, these findings give us confidence that the CCS reliably captures stable individual differences in Climate Capability.
In Study 2, we examined the convergent and divergent validity of the CCS. The CCS correlated strongly with related constructs, namely, ecological worldview (measured by the NEP-R; [15]) and climate change literacy (measured by the Climate Change Knowledge Scale; [16]). In other words, participants with high CCS scores were also likely to have a stronger ecological worldview, and to be more knowledgeable about the causes and consequences of climate change. These correlation coefficients were around 0.5, indicating that around 25% of the variance was shared between the CCS and each of the other two scales. Thus, while there was substantial overlap in these constructs, they were also clearly distinct; the overlap was not sufficiently high to suggest that we were simply measuring the same construct using a different name (the “jangle fallacy”; [30]).
The CCS scores correlated positively with broader scientific literacy (as assessed by the NSF indicator items), but this correlation was substantially weaker than for the more closely aligned constructs of environmental attitudes and climate change literacy. This finding indicates good levels of divergent validity; the CCS is clearly distinct from—though weakly related to—general scientific literacy.

4.2. Insights from the CCS

Whitmarsh et al. [8] argued that, while some basic level of scientific literacy may be necessary for Climate Capability, it is not sufficient. Indeed, people’s attitudes toward climate change and their support for environmental policies are influenced by many factors, including political ideology [31], religiosity [32], and worldview (e.g., Poortinga et al. [33]). Our own data underscore the complexity of the relationship between knowledge and Climate Capability, as the shared variance between Climate Capability and climate change knowledge was modest (around 24%). Indeed, Figure 2 makes it clear that knowledge and Climate Capability are separable—there are individuals who score very highly on measures of climate change knowledge but who have low Climate Capability, and vice versa.
Examining individual item responses within the CCS also illuminates the disconnect between an understanding of climate change and the motivation to act. Most participants (approximately 80–90%) correctly identified that the Earth is warmer now than it was a century ago, that it will continue to warm, and that the sea levels will rise as a result of this warming. Encouragingly, scepticism was low; less than 10% of participants indicated that they believed that scientists are exaggerating the extent to which the climate will change and the consequences of climate change. These results are broadly in line with previous research which has documented that climate change scepticism is a minority view among the British public (Poortinga et al. [33]).
However, despite high levels of understanding of basic warming trends and low levels of scepticism, fewer than half of our participants reported that they actively sought out information about climate change. A minority of our adult participants felt that they knew a lot about climate change. Many indicated that they did not know what sources of information they could trust to learn more about climate change and about the contributions of different activities to climate change. Whitmarsh et al. [8] argued that Carbon Capable individuals should have the ability to evaluate different sources of information for their reliability. Our findings underscore this point and suggest that increasing media literacy and equipping individuals with the skills to evaluate information will be an important target for Climate Capability interventions.

4.3. Adolescents’ Responses to the CCS

The CCS was designed to be accessible to adolescents, as well as to adults. Our initial investigation here, with 458 adolescents aged 12 to 15, indicates that the CCS is internally consistent when used with respondents in this age group, and that it returns an interpretable factor structure. These properties make the CCS a potentially valuable tool for measuring the effectiveness of educational programs on Climate Capability. We note, however, that further work needs to be carried out to establish validity and test–retest reliability within the adolescent population (e.g., by validating scores against other related scales that are appropriate for use with adolescents).
Adolescents tended to have lower CCS scores than adults. However, this difference was not found across all subscales. Adolescents scored similarly to adults on the knowledge of warming trends subscale, and they actually had higher scores than adults on the self-efficacy subscale. This is perhaps not surprising, as British adolescents have grown up in a society in which climate change is widely accepted and is increasingly being incorporated into school science curricula. In contrast, the wide range of ages in our adult sample will translate into quite wide differences in educational experiences with climate change. Many adults within our sample would have received no formal education on climate change, or very limited education. Furthermore, scientific understanding of climate change has evolved over the past few decades, which also will have been reflected in our adult participants’ educational experiences.
The subscale in which adolescents and adults diverged most strongly was the Behaviour Change subscale. This discrepancy is not surprising. Adolescents have less autonomy than adults; decisions about food, purchasing, transport, etc., will likely fall to caregivers. An important direction for future research will be identifying where adolescents perceive that they have the agency and opportunity to make pro-environmental changes to their own behaviours, and where they perceive opportunities to influence the behaviours of their families, peers, and local communities.
Adolescents were also less likely than adults to report that they had spoken to their families about how they could reduce their contributions to climate change. For many adolescents, starting these conversations at home may be one of the most powerful tools at their disposal. There are likely to be many reasons why adolescents are not having these conversations, and further research will be needed to understand what these reasons are and how they might be overcome. We suggest that Climate Capability interventions with adolescents may wish to focus on increasing the motivation and willingness to begin conversations in the home about reducing environmental impact.

4.4. Limitations and Future Directions

While we hope that the CCS will be a useful tool for researchers and practitioners alike, we acknowledge that there is scope for further refinement and adaptation. For instance, responses to the Role of Governance items clustered near the ceiling, with approximately 90% agreement with each item. Future refinements of the scale might focus on generating new items that discriminate more finely between participants with differing views on the role of government in climate change mitigation and adaptation.
Through the online platform Prolific (www.prolific.co, accessed on 15 July 2021), we were able to recruit adult participants who were diverse in terms of age, educational background, and employment status. However, an important limitation of our study is that women were persistently over-represented in our adult samples, comprising around two-thirds of the total adult sample. The opposite was true of our adolescent data, in which boys were over-represented. A re-analysis of the data from Study 4 with a large all-boys' school removed is reported in the Supplementary Materials; there were no substantive differences in the results between the two samples. Nonetheless, we would encourage researchers to strive for more gender-balanced samples in future work to improve the generalisability of the results.
Climate Capability is shaped by educational, social, and cultural forces. We focused here on participants currently residing in the UK. Participants in other countries and regions may well respond quite differently to many of the items in the CCS, and the psychometric properties of the scale, including its factor structure and reliability indices, may differ across regions. An important goal for future research will be to adapt and validate the CCS for different countries, allowing for the exploration of cross-cultural differences in Climate Capability.
The CCS was designed to be short enough to be used in a variety of settings, including classrooms and public outreach, while still providing sufficient coverage of the Climate Capability construct. We recommend that the CCS be used and scored as a whole scale rather than as individual factor scores, for which reliability tended to be lower. However, the development of a long-form CCS, with 10–15 items per factor, might allow for a more fine-grained assessment of Climate Capability, including evaluation of the effectiveness of interventions designed to increase specific facets of Climate Capability (e.g., behaviour change, appreciation of the need for governance, self-efficacy).
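To make the whole-scale recommendation concrete, the sketch below shows one way total CCS scores could be computed. The response coding (Yes = 2, Don't know = 1, No = 0) and the resulting 0–48 range are assumptions made for illustration rather than the authors' published scoring key; only the reverse scoring of the four scepticism items follows the asterisks in Table 3.

```python
from typing import List

# Assumed response coding (illustrative, not necessarily the authors' scoring key).
CODING = {"Yes": 2, "Don't know": 1, "No": 0}
MAX_ITEM_SCORE = 2
# Items 4-7 (the scepticism items) are reverse scored, per the asterisks in Table 3.
REVERSE_SCORED = {4, 5, 6, 7}   # 1-based item numbers

def score_ccs(responses: List[str]) -> int:
    """Return a total CCS score from 24 ordered item responses."""
    if len(responses) != 24:
        raise ValueError("Expected responses to all 24 CCS items")
    total = 0
    for item_number, response in enumerate(responses, start=1):
        score = CODING[response]
        if item_number in REVERSE_SCORED:
            score = MAX_ITEM_SCORE - score   # agreeing with a scepticism item lowers the total
        total += score
    return total

# Example: answering "Yes" to every item yields 40 under this coding (48 - 8),
# because endorsing the four scepticism items reduces the total.
print(score_ccs(["Yes"] * 24))
```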
We demonstrated that the CCS predicts self-reported environmental behaviour. However, participants may inflate the frequency of their self-reported environmental behaviour in order to present themselves more positively. We are encouraged that responses to our behavioural indices were quite similar to those reported by Whitmarsh et al. [8]. In addition, many items received low levels of endorsement, suggesting that participants did not systematically inflate their responses across all behavioural indices. Nonetheless, correlating CCS scores with objective data on specific environmental behaviours (e.g., purchasing behaviour, food consumption, and transportation) would allow for a more rigorous assessment of predictive validity.
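As an illustration of the kind of predictive-validity check suggested above, the sketch below correlates CCS totals with a hypothetical objective behaviour measure using Spearman's rank correlation; the data and variable names are invented for the example.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data for 100 people: total CCS scores and an objective behaviour
# measure (e.g., car-free commuting days logged in a travel diary) -- invented values.
rng = np.random.default_rng(seed=1)
ccs_totals = rng.integers(low=10, high=49, size=100)
car_free_days = 0.3 * ccs_totals + rng.normal(loc=0, scale=3, size=100)

# Spearman's rank correlation is robust to the skew typical of count-like behaviour data.
rho, p_value = spearmanr(ccs_totals, car_free_days)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3g}")
```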

5. Conclusions

If we are to minimise the extent of climate change, we need Climate Capable citizens who are motivated and empowered to make changes within their own lives, but who are also politically engaged and willing to push for systemic change [5,8]. A valid and reliable tool for measuring Climate Capability has many potential applications, including measuring the effectiveness of educational programs and public engagement activities, tracking trends in Climate Capability in the general public over time, and exploring a wide range of demographic, attitudinal, and personality-related correlates of Climate Capability. Across four studies with more than 1300 adult and adolescent participants, we developed and validated the 24-item Climate Capability Scale (CCS). The CCS is internally consistent and reliable; it correlates positively with existing measures of environmental worldview, climate literacy, and general scientific literacy; and it predicts self-reported pro-environmental behaviours. The CCS also points to particular areas that should become the focus of Climate Capability interventions, including increasing media literacy and political engagement, and supporting adolescents in identifying opportunities for behaviour change.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/su151511933/s1, Table S1. Response frequencies to the 70 items tested in Study 1. Table S2. Item-level comparisons between adolescent and adult responses to the CCS. Table S3. McDonald’s ω (with 95% Confidence Intervals) for the CCS subscales in the reduced and full adolescent samples. Table S4. Comparisons between adults and adolescents’ CCS subscale scores. Table S5. Item-level comparisons between the reduced adolescent and adult responses to the CCS.

Author Contributions

Conceptualization, R.H. and J.A.R.; methodology, R.H.; validation, R.H.; formal analysis, R.H.; investigation, J.A.R., R.H. and H.R.; writing—original draft preparation, R.H.; writing—review and editing, J.A.R.; funding acquisition, J.A.R. and R.L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the UK EPSRC through Impact Acceleration Account grant EP/R511614/1, administered by Swansea University.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of Swansea University for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Anonymised data for this project can be found at https://osf.io/8y6s5/?view_only=00665152c9ec4e7fa74f5ba19b373cda, accessed on 1 December 2022.

Acknowledgments

We would like to thank Martyn Steiner for advising us on suitable language and questions to use with adolescents. We thank all the teachers and students of the participating schools. We acknowledge Stuart Capstick for his thorough reading and comments on a draft of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Urban, M.C. Accelerating extinction risk from climate change. Science 2015, 348, 571–573.
2. Mora, C.; Dousset, B.; Caldwell, I.R.; Powell, F.E.; Geronimo, R.C.; Bielecki, C.R.; Counsell, C.W.W.; Dietrich, B.S.; Johnston, E.T.; Louis, L.V.; et al. Global risk of deadly heat. Nat. Clim. Chang. 2017, 7, 501–506.
3. IPCC. Global Warming of 1.5 °C. An IPCC Special Report on the Impacts of Global Warming of 1.5 °C above Pre-Industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change, Sustainable Development, and Efforts to Eradicate Poverty; IPCC: Geneva, Switzerland, 2018.
4. Climate Change Committee. Independent Assessment of UK Climate Risk: Advice to Government for the UK’s Third Climate Change Risk Assessment (CCRA3); Climate Change Committee: London, UK, 2021.
5. Seyfang, G.; Lorenzoni, I.; Nye, M. Personal carbon trading: Notional concept or workable proposition? Exploring theoretical, ideological and practical underpinnings. In CSERGE Working Paper EDM 07-03; UEA: Norwich, UK, 2007.
6. Prosser, A.M.B.; Whitmarsh, L. Net Zero: A Review of Public Attitudes and Behaviours; CAST: Bath, UK, 2022.
7. Stern, P.C. New Environmental Theories: Toward a Coherent Theory of Environmentally Significant Behavior. J. Soc. Issues 2000, 56, 407–424.
8. Whitmarsh, L.; Seyfang, G.; O’Neill, S. Public engagement with carbon and climate change: To what extent is the public ‘carbon capable’? Glob. Environ. Chang. 2011, 21, 56–65.
9. Cordero, E.C.; Centeno, D.; Todd, A.M. The role of climate change education on individual lifetime carbon emissions. PLoS ONE 2020, 15, e0206266.
10. Welsh Government. The Four Purposes of the Curriculum for Wales; Welsh Government: London, UK, 2018.
11. Harker-Schuch, I. Why Is Early Adolescence So Pivotal in the Climate Change Communication and Education Arena? In Climate Change and the Role of Education; Filho, W.L., Hemstock, S.L., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 279–290.
12. Kluczkovski, A.; Lait, R.; Martins, C.A.; Reynolds, C.; Smith, P.; Woffenden, Z.; Lynch, J.; Frankowska, A.; Harris, F.; Johnson, D.; et al. Learning in lockdown: Using the COVID-19 crisis to teach children about food and climate change. Nutr. Bull. 2021, 46, 206–215.
13. Rudd, J.A.; Horry, R.; Skains, R.L. You and CO2: A Public Engagement Study to Engage Secondary School Students with the Issue of Climate Change. J. Sci. Educ. Technol. 2020, 29, 230–241.
14. Milfont, T.L.; Duckitt, J. The environmental attitudes inventory: A valid and reliable measure to assess the structure of environmental attitudes. J. Environ. Psychol. 2010, 30, 80–94.
15. Dunlap, R.E.; Van Liere, K.D.; Mertig, A.G.; Jones, R.E. New Trends in Measuring Environmental Attitudes: Measuring Endorsement of the New Ecological Paradigm: A Revised NEP Scale. J. Soc. Issues 2000, 56, 425–442.
16. Tobler, C.; Visschers, V.H.M.; Siegrist, M. Consumers’ knowledge about climate change. Clim. Chang. 2012, 114, 189–209.
17. Swain, S.D.; Weathers, D.; Niedrich, R.W. Assessing Three Sources of Misresponse to Reversed Likert Items. J. Mark. Res. 2008, 45, 116–131.
18. Marsh, H.W. Negative item bias in rating scales for preadolescent children: A cognitive-developmental phenomenon. Dev. Psychol. 1986, 22, 37–49.
19. Melnick, S.A.; Gable, R.K. The use of negative item stems: A cautionary note. Educ. Res. Q. 1990, 14, 31–36.
20. Williams, S.A.; Melvin, S.S. The Effect of Reading Ability and Response Formats on Patients’ Abilities to Respond to a Patient Satisfaction Scale. J. Contin. Educ. Nurs. 2001, 32, 60–67.
21. Barnette, J.J. Effects of Stem and Likert Response Option Reversals on Survey Internal Consistency: If You Feel the Need, There is a Better Alternative to Using those Negatively Worded Stems. Educ. Psychol. Meas. 2000, 60, 361–370.
22. Mellor, D.; Moore, K.A. The Use of Likert Scales With Children. J. Pediatr. Psychol. 2014, 39, 369–379.
23. Chambers, C.T.; Johnston, C. Developmental Differences in Children’s Use of Rating Scales. J. Pediatr. Psychol. 2002, 27, 27–36.
24. Field, E.; Schwartzenberg, P.; Berger, P. Canada, Climate Change and Education: Opportunities for Public and Formal Education (Formal Report for Learning for a Sustainable Future); York University Printing Services: North York, ON, Canada, 2019.
25. Cantell, H.; Tolppanen, S.; Aarnio-Linnanvuori, E.; Lehtonen, A. Bicycle model on climate change education: Presenting and evaluating a model. Environ. Educ. Res. 2019, 25, 717–731.
26. Ross, H.; Rudd, J.A.; Skains, R.L.; Horry, R. How Big Is My Carbon Footprint? Understanding Young People’s Engagement with Climate Change Education. Sustainability 2021, 13, 1961.
27. National Science Board. Science and Engineering Indicators 2018; NSB-2018-1; National Science Board: Alexandria, VA, USA, 2018.
28. Deng, L.; Chan, W. Testing the Difference Between Reliability Coefficients Alpha and Omega. Educ. Psychol. Meas. 2017, 77, 185–203.
29. Hayes, A.F.; Coutts, J.J. Use Omega rather than Cronbach’s Alpha for Estimating Reliability. Commun. Methods Meas. 2020, 14, 1–24.
30. Kelley, T.L. Interpretation of Educational Measurements; World Book: New York, NY, USA, 1927.
31. Smith, E.K.; Mayer, A. Anomalous Anglophones? Contours of free market ideology, political polarization, and climate change attitudes in English-speaking countries, Western European and post-Communist states. Clim. Chang. 2019, 152, 17–34.
32. Morrison, M.; Duncan, R.; Parton, K. Religion Does Matter for Climate Change Attitudes and Behavior. PLoS ONE 2015, 10, e0134868.
33. Poortinga, W.; Spence, A.; Whitmarsh, L.; Capstick, S.; Pidgeon, N.F. Uncertain climate: An investigation into public scepticism about anthropogenic climate change. Glob. Environ. Chang. 2011, 21, 1015–1024.
Figure 1. Distribution of CCS scores in the pooled data from Studies 2 and 3.
Figure 2. Scatterplots depicting the relationships between CCS scores and (a) NEP-R scores; (b) CCK scores; (c) NSF indicator scores.
Figure 3. Relationships between CCS scores and environmental behaviour. (a) Histogram of Regular Environmental Behaviour Index scores. (b) Histogram of Infrequent Environmental Behaviour Index scores. (c) Scatterplot of the relationship between CCS scores and Regular Environmental Behaviour Index scores. (d) Scatterplot of the relationship between CCS scores and Infrequent Environmental Behaviour Index scores.
Table 1. Demographic characteristics of the samples tested in Studies 1, 2, and 3. Study 1 had 290 participants, Study 2 had 272 participants, and Study 3 had 287 participants.

Demographic | Response | Study 1 | Study 2 | Study 3
Gender | Female | 72.76% | 57.35% | 67.94%
 | Male | 27.24% | 41.18% | 31.01%
 | Other gender identity | 0.00% | 1.47% | 1.05%
Age | Range | 18–65 | 18–73 | 18–72
 | Median | 32.00 | 26.00 | 33.00
 | Mean (SD) | 33.84 (10.77) | 29.13 (10.39) | 35.22 (11.54)
Highest level of education | No formal qualifications | 0.00% | 0.74% | 0.35%
 | Secondary school qualifications | 11.03% | 19.48% | 6.97%
 | College/sixth form qualifications | 27.24% | 23.90% | 30.31%
 | Undergraduate degree | 27.24% | 36.40% | 45.30%
 | Master’s degree | 14.73% | 16.91% | 13.24%
 | Doctoral degree | 2.76% | 1.83% | 3.83%
Employment status | Unemployed and seeking work | 10.69% | 10.29% | 3.14%
 | Unemployed and not seeking work | 3.45% | 3.31% | 1.39%
 | Homemaker | 5.52% | 2.21% | 3.48%
 | In part-time employment | 17.59% | 13.97% | 12.89%
 | In full-time employment | 39.66% | 38.60% | 55.05%
 | Self-employed | 7.59% | 6.99% | 6.62%
 | In education or training | 10.35% | 17.65% | 10.11%
 | Retired | 1.03% | 1.50% | 3.48%
 | Other/Missing | 4.14% | 4.78% | 1.05%
Table 2. Factor loadings for the 23-item CCS in Study 1.

Item | Factor | Loading
Earth is hotter now than it was 100 years ago | Knowledge of Warming Trends | 0.591
In the next 50 years, Earth is going to get hotter | Knowledge of Warming Trends | 0.777
In the next 50 years, sea levels will rise | Knowledge of Warming Trends | 0.505
Scientists exaggerate how much the climate will change | Scepticism | 0.748
Scientists exaggerate the effects of climate change | Scepticism | 0.723
The media exaggerate how much the climate will change | Scepticism | 0.935
The media exaggerate the effects of climate change | Scepticism | 0.916
I have made changes to how I live to reduce my effect on the planet | Behaviour Change | 0.821
I am going to make changes to how I live to reduce my effect on the planet | Behaviour Change | 0.740
I have made changes to the food I eat to reduce my effect on the planet | Behaviour Change | 0.615
I have made changes to what I buy to reduce my effect on the planet | Behaviour Change | 0.655
I have talked to my family about ways we can reduce our effect on the planet | Behaviour Change | 0.730
Government action is needed to tackle climate change | Role of Governance | 0.619
We need new laws to tackle climate change | Role of Governance | 0.830
The whole system we live in needs to change to tackle climate change | Role of Governance | 0.585
I look for news articles about climate change | Information Seeking | 0.732
I look for documentary programmes about climate change | Information Seeking | 0.721
I look for videos on social media about climate change | Information Seeking | 0.846
I look for blogs and websites about climate change | Information Seeking | 0.719
I know a lot about climate change | Self-Efficacy | 0.461
I know where to find good information about climate change | Self-Efficacy | 0.869
I know what sources of information I can trust to learn more about climate change | Self-Efficacy | 0.888
I know how to find more about how much different activities affect the planet | Self-Efficacy | 0.702
Table 3. Item frequencies (%) across all CCS items in the combined data from Study 2 (Session 1) and Study 3. The * denotes reverse scored items.

Item | Yes | No | Don’t Know
1. Earth is hotter now than it was 100 years ago | 82.83 | 1.25 | 15.92
2. In the next 50 years, Earth is going to get hotter | 86.23 | 1.79 | 11.99
3. In the next 50 years, sea levels will rise | 85.33 | 2.50 | 12.17
4. Scientists exaggerate how much the climate will change * | 6.62 | 79.79 | 13.60
5. Scientists exaggerate the effects of climate change * | 6.08 | 81.93 | 11.99
6. The media exaggerate how much the climate will change * | 13.95 | 69.41 | 13.95
7. The media exaggerate the effects of climate change * | 14.67 | 69.41 | 15.92
8. I have made changes to how I live to reduce my effect on the planet | 79.43 | 17.71 | 2.86
9. I am going to make changes to how I live to reduce my effect on the planet | 81.40 | 7.69 | 10.91
10. I have made changes to what I eat to reduce my effect on the planet | 49.91 | 45.44 | 4.65
11. I have made changes to what I buy to reduce my effect on the planet | 74.42 | 21.29 | 4.29
12. I have talked to my family about reducing our effect on the planet | 64.04 | 34.35 | 1.61
13. Government action is needed to tackle climate change | 93.92 | 2.86 | 3.22
14. We need new laws to tackle climate change | 87.12 | 4.65 | 8.23
15. The whole system we live in needs to change to tackle climate change | 90.16 | 4.29 | 5.55
16. I look for news articles about climate change | 37.57 | 59.21 | 3.22
17. I look for documentary programmes about climate change | 45.08 | 52.24 | 2.68
18. I look for videos on social media about climate change | 25.58 | 73.70 | 0.72
19. I look for blogs and websites about climate change | 17.53 | 80.86 | 1.61
20. I know a lot about climate change | 31.66 | 50.98 | 17.35
21. I know where to find good information about climate change | 52.59 | 28.27 | 19.14
22. I know what sources of information I can trust to learn more about climate change | 53.67 | 24.87 | 21.47
23. I know how to find out how much different activities affect the planet | 60.11 | 26.12 | 13.78
24. I know how much different activities affect the planet | 60.29 | 23.61 | 16.10
Table 4. Reliability of the six CCS subscales in the pooled sample from Study 2 and Study 3.

Subscale | McDonald’s ω (95% CI) | Cronbach’s α
Knowledge of warming trends | 0.70 [0.64, 0.73] | 0.68
Scepticism | 0.90 [0.87, 0.90] | 0.89
Behaviour change | 0.80 [0.77, 0.82] | 0.80
Role of governance | 0.79 [0.75, 0.81] | 0.78
Information seeking | 0.76 [0.71, 0.78] | 0.75
Self-efficacy | 0.79 [0.76, 0.81] | 0.79
Table 5. Distribution of responses to items on the Regular Environmental Behaviour Index.

Item | Never | Rarely | Sometimes | Often | Always
Reuse or repair items instead of throwing them away | 0.7% | 5.9% | 34.8% | 43.2% | 15.3%
Recycle | 0.7% | 0.3% | 3.5% | 32.1% | 63.4%
Choose products that are more environmentally friendly | 1.4% | 7.7% | 36.6% | 40.4% | 13.9%
Choose items with less packaging | 2.4% | 6.6% | 35.5% | 45.3% | 10.1%
Buy second hand clothing and other items | 9.4% | 21.3% | 31.0% | 30.7% | 7.7%
Donate, swap, or hand down items to be reused | 2.8% | 5.2% | 22.3% | 39.0% | 30.7%
Reduce the amount you buy | 2.1% | 12.9% | 36.6% | 38.7% | 9.8%
Compost or separate your kitchen waste | 17.8% | 16.4% | 13.9% | 16.7% | 35.2%
Avoid eating meat | 17.8% | 22.6% | 28.2% | 13.6% | 17.8%
Eat food which is locally grown or in season | 1.7% | 13.6% | 41.1% | 38.7% | 4.9%
Turn off lights you are not using | 0.0% | 0.0% | 7.0% | 34.8% | 58.2%
Turn off the tap while you brush your teeth | 2.8% | 3.8% | 10.5% | 22.0% | 61.0%
Line dry washing instead of using a dryer (when weather conditions are appropriate) | 3.8% | 4.9% | 13.2% | 26.5% | 51.6%
Walk, cycle, or take public transport for short journeys (i.e., less than three miles) | 2.8% | 7.7% | 22.6% | 42.2% | 24.7%
Save water by taking fewer or shorter showers or baths | 7.0% | 14.6% | 28.6% | 32.8% | 17.1%
Table 6. Distribution of responses to items on the Infrequent Environmental Behaviour Index.

Item | No—Never | Yes—Once or Twice | Yes—Several Times
Signed petitions about environmental issues | 36.2% | 38.3% | 25.4%
Written to your MP about an environmental issue | 89.2% | 8.7% | 2.1%
Joined a protest or march about an environmental issue | 90.6% | 8.0% | 1.4%
Donated money to an environmental cause | 47.7% | 39.7% | 12.5%
Volunteered for an environmental charity or cause | 78.7% | 15.3% | 5.9%
Engaged in tree planting activities | 62.4% | 27.9% | 9.8%
Used “carbon offsetting” | 74.9% | 18.8% | 6.3%
Pledged not to fly for a period of time (e.g., 1 year, 2 years, ever again) | 81.9% | 10.5% | 7.7%
Table 7. Reliability of the six CCS subscales in the adolescent sample.

Subscale | McDonald’s ω (95% CI) | Cronbach’s α
Knowledge of warming trends | 0.70 [0.66, 0.75] | 0.70
Scepticism | 0.78 [0.75, 0.81] | 0.75
Behaviour change | 0.81 [0.78, 0.84] | 0.80
Role of governance | 0.66 [0.61, 0.72] | 0.66
Information seeking | 0.73 [0.69, 0.77] | 0.76
Self-efficacy | 0.69 [0.64, 0.73] | 0.68
Table 8. Comparisons between adults’ and adolescents’ CCS subscale scores.

Subscale | Adolescents M (SD) | Adults M (SD) | Comparison
Knowledge of warming trends | 5.48 (1.09) | 5.49 (0.99) | W = 129,513.5, p = 0.67, r = 0.012
Scepticism | 5.69 (2.32) | 6.59 (2.25) | W = 96,137.0, p < 0.001, r = 0.249
Behaviour change | 5.25 (3.33) | 7.23 (3.09) | W = 83,991.5, p < 0.001, r = 0.344
Role of governance | 5.18 (1.31) | 5.59 (1.10) | W = 108,895.5, p < 0.001, r = 0.149
Information seeking | 1.60 (2.23) | 2.60 (2.72) | W = 101,961.5, p < 0.001, r = 0.203
Self-efficacy | 6.68 (2.74) | 6.05 (3.16) | W = 141,068.0, p = 0.005, r = 0.102
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
