Article
Peer-Review Record

“The Only Thing We Have to Fear Is Fear Itself”: Predicting College Students’ Voting Behavior Using the Extended Parallel Process Model

Soc. Sci. 2023, 12(11), 628; https://doi.org/10.3390/socsci12110628
by Anthony J. Roberto 1,*, L. D. Mattson 1, Paige A. Von Feldt 1 and Xin Zhou 2
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 31 August 2023 / Revised: 10 October 2023 / Accepted: 3 November 2023 / Published: 10 November 2023
(This article belongs to the Special Issue Political Communication and Emotions)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. The manuscript furthers our understanding of how a classic theory (the extended parallel process model, or EPPM) can be used to predict and explain college students’ voting behavior. The authors hypothesized that the EPPM would accurately predict danger control outcomes (i.e., that severity, susceptibility, self-efficacy, and response-efficacy would predict voting intentions, and that voting intentions would predict voting behavior), as well as one fear control outcome.

The authors develop a robust methodological approach to show the relationship between fear and the different factors influencing voting behavior. As stated by the EPPM, a fear appeal is understood as a persuasive message that seeks to change intentions by generating emotional responses; these perceptions convey a significant threat to the public.

2. The article has a scientific structure. Clear hypotheses, together with a structural model of the hypothesized relationships, are provided, and statistical tests are used to address both. In this sense, Table 1 is quite illustrative, as it shows the scale items for each variable. Beyond that, the authors describe the research design (data collected over a seven-day period 30-40 days before the election and over a seven-day period immediately following it), but the importance of the 2022 midterm elections should be emphasized. Why are they worth analyzing?

It might be helpful to return to this information (the hypotheses) in a broader way in the Results; this section is too brief. Although the implications of the study are addressed in the Discussion, further details should be given to contextualize the results.

3. The list of references is up-to-date and suitable for the present study.

4. This paper clearly teaches the reader a great deal about predicting college students’ voting behavior, but the sample seems small (only 178 participants were used to answer the research questions). In my opinion, this is the main flaw of the article, as the authors themselves acknowledge. However, I think the strong methodological design counteracts this limitation.

5. The authors highlight the relevance of applying a well-regarded theory to a new context, but, as mentioned above, the selection of the midterm elections requires further explanation. Surprisingly, results were largely inconsistent with the EPPM’s predictions regarding the fear control processes; it would be worth pointing out some reasons for this here. The authors also acknowledge limitations and future avenues of research.

6. Lastly, the discussion is sometimes worded confusingly. I would recommend identifying two or three empirical contributions of this paper to this line of inquiry. In terms of formatting, a blank space is sometimes missing between the subsections of the introduction.

7. In short, this paper is an interesting contribution to the political field, particularly as it aligns with prior scholarship on the extended parallel process model. Only some minor changes are requested, such as reinforcing the results and providing more context on the chosen elections.

Author Response

We would like to thank both reviewers for their helpful feedback.  It is very much appreciated, and we believe it has led to a stronger manuscript. Below we outline how we addressed each of the main issues raised by the reviewers.

 

REVIEWER 1

Reviewer 1 raised five key points regarding the manuscript:

Point #1 asked us to discuss the importance of studying the 2022 midterm elections (i.e., “why is it worth analyzing them”).

We now list three key reasons the 2022 midterm elections were worthy of study in the introduction.

 

Point #2 asked us to expand the results section by incorporating information about the 2022 election.

 While we agree including information about the 2022 elections is important (addressed above), we do not believe it is appropriate to include such information in the results section. However, we are happy to reconsider this position if we missed or misunderstood something (please excuse us if that is the case).

 

Point #3 focused on why the EPPM did such a poor job predicting the fear control process.

We now review six possible reasons for this finding. These include (1) the amount and type of media exposure, (2) possible changes in perceived threat and efficacy between T1 and T2, (3) differences between the voting and health domains, (4) the possibility that other, unmeasured fear control processes may have been affected, (5) the fact that (unmeasured) background/contextual factors may impact the predictive and explanatory power of the EPPM, and (6) the possibility that this portion of the model may have worked differently for individuals with different political philosophies.

 

Point #4 focused on the clarity of the discussion section.  

We significantly rewrote the theoretical implications portion of the discussion section (see Point #3). In doing so, and based on this and your previous point, we realized there were some redundancies and omissions, and we hope the revision has addressed the problem. However, if we have missed or misunderstood anything, we are happy to fix it (in which case we would appreciate any specific guidance you can provide).

 

Point #5 focused on a few formatting issues.

We were not able to find the exact formatting issues you mentioned but are happy to fix them if you can provide specific locations. As a reminder, we submitted our manuscript in APA format, and it was converted by the publisher before being sent to the reviewers. Our best guess is that this is when any formatting issues emerged.

We would also like to note that we were directed to edit the publisher-formatted document for the revision. Thus, it is possible that additional formatting errors are currently present. We are confident that we and/or the publisher will be able to address these if the manuscript is ultimately accepted for publication.

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript under review reports the results of a longitudinal study intended to predict college students' voting behavior during the 2022 U.S. midterm elections. The hypotheses and methods are based on the extended parallel process model. The authors find support for hypotheses predicting that danger control variables are able to explain variations in voting intentions. Voting intentions predicted voting behavior as expected.

As a political psychologist, I find these research questions interesting. However, I do have substantial concerns about the way that the study was conducted as well as the hypotheses and analyses. I detail those concerns below.

I do not recall reading any statements on open science practices. Were the methods and hypotheses preregistered? Are the data available in a public repository? If so, be sure to state so and provide a link to the registration and data. If not, please explain why this was not possible or considered not to be necessary.

I also do not recall any multivariate hypotheses. Am I mistaken or were all hypotheses predicting bivariate relationships? If so, then it would seem that the primary analyses should have been Pearson's correlations. The authors instead report what they call a confirmatory factor analysis but present figures that illustrate what looks more like a path model. The path model is more interesting, but I do not see how your hypotheses or analyses support or test a path model. 

I could see reframing all of the bivariate hypotheses in a way that would be amenable to conducting several linear multiple regression analyses (e.g., "The combination of perceived susceptibility, self-efficacy, and response-efficacy will explain significant variance in voting intention. When controlling for each of the other predictors, each predictor in the model will explain significant unique variance in voting intention in a positive direction."). This would be more interesting than the correlations and more appropriate than a confirmatory factor analysis. Separate figures would then illustrate each regression analysis for each outcome variable.
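For concreteness, the kind of regression analysis suggested here could look roughly like the sketch below. This is illustrative only; the data file and column names are hypothetical and are not taken from the manuscript.

```python
# Minimal sketch of the suggested multiple regression reframing.
# Assumes a pandas DataFrame with hypothetical EPPM-construct columns.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("eppm_survey.csv")  # hypothetical data file

predictors = ["severity", "susceptibility", "self_efficacy", "response_efficacy"]
X = sm.add_constant(df[predictors])              # add an intercept term
ols = sm.OLS(df["voting_intention"], X).fit()    # regress intention on all predictors

print(ols.rsquared)    # variance explained by the combination of predictors
print(ols.summary())   # slope, t, and p for each predictor's unique contribution
```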

Alternatively, or in addition, mediational hypotheses could be posited (e.g., "Perceived susceptibility, self-efficacy, and response-efficacy will positively and indirectly predict voting behavior through the mediating mechanism of voting intention.").

The analysis for this would be complicated (at least the way I know how to do it). You would need to use Hayes' PROCESS macro and the following strategy (since each iteration of PROCESS can only handle one variable in the X position, and you would need three in my example above): PROCESS can estimate a mediation model with statistical controls, so it can also estimate a model with multiple X variables. However, in order to estimate the direct and indirect effects of all k X variables, PROCESS must be executed k times, each time entering one variable as X and the remaining k − 1 X variables as covariates. Each run of PROCESS generates the direct and indirect effects of the variable listed as X, so repeating this k times generates the indirect effects for all k X variables. Mathematically, all resulting path, direct, and indirect effects will be the same as if they had been estimated simultaneously (as in a structural equation modeling program).
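As an illustration of the strategy described above (this is not Hayes' PROCESS macro itself), the same point estimates can be obtained from two ordinary least squares regressions. The sketch below assumes hypothetical column names, treats the binary outcome with a linear probability model, and omits the bootstrap confidence intervals that PROCESS would provide.

```python
# Rough sketch of linear mediation with multiple predictors (a_i * b logic).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("eppm_survey.csv")  # hypothetical data file
xs = ["susceptibility", "self_efficacy", "response_efficacy"]

# a-paths: regress the mediator (voting intention) on all k predictors at once.
a_model = sm.OLS(df["voting_intention"], sm.add_constant(df[xs])).fit()

# b-path and direct effects: regress the outcome on the mediator plus all predictors.
b_model = sm.OLS(df["voted"], sm.add_constant(df[xs + ["voting_intention"]])).fit()

b = b_model.params["voting_intention"]
for x in xs:
    indirect = a_model.params[x] * b   # indirect effect of x through intention
    direct = b_model.params[x]         # direct effect of x on behavior
    print(f"{x}: indirect={indirect:.3f}, direct={direct:.3f}")
```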

 

Comments on the Quality of English Language

N/A

Author Response

We would like to thank both reviewers for their helpful feedback.  It is very much appreciated, and we believe it has led to a stronger manuscript. Below we outline how we addressed each of the main issues raised by the reviewers.

 

REVIEWER 2

Reviewer 2 raised two key points regarding the manuscript:

 

Point #1 asked whether the methods and hypotheses were preregistered and/or whether the data were available in a public repository.

The short answer to both questions is “no.” The key reason is that this is an uncommon practice in the communication field. For example, the lead author has been conducting research for over 32 years and (1) has never preregistered hypotheses and methods or used a public repository before, and (2) has never been asked this question before. That does not mean we are not open to doing so in the future. Unfortunately, it is not possible for us to preregister our hypotheses and methods at this point. We are open to depositing our data file in a public repository if required. However, for now, we have simply added a statement that the data file will be made available upon request to the first author (which is more common in our field and appears to be an acceptable practice based on our review of other manuscripts recently published in Social Sciences).

 

Point #2 posed several different questions regarding data analysis.

We believe the current data analytic plan is appropriate, as it is common to use structural equation modeling (SEM) to analyze data when testing theories like the EPPM, which was proposed as a set of structural relations among variables (Stephenson, Holbert, & Zimmerman, 2006). SEM is a multivariate analytical technique that combines path analysis, factor analysis, and simultaneous modeling. It differs from path analysis or mediation analysis because it allows researchers to separate out measurement error and estimate the systematic relationships among the latent variables in the model. Finally, prior research has used structural equation modeling to test the EPPM (e.g., Author, 2023; Guidry et al., 2019; Birmingham et al., 2015; Hosseini et al., 2022).
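For readers unfamiliar with how such a model is specified, a minimal sketch is shown below. It assumes the third-party Python package semopy (which uses lavaan-style model syntax) and hypothetical scale-item and variable names; it illustrates the general SEM approach rather than the authors' actual analysis script.

```python
# Minimal sketch of an EPPM-style structural equation model (hypothetical names).
import pandas as pd
import semopy

model_desc = """
# measurement model: latent constructs measured by hypothetical scale items
severity          =~ sev1 + sev2 + sev3
susceptibility    =~ sus1 + sus2 + sus3
response_efficacy =~ re1 + re2 + re3
self_efficacy     =~ se1 + se2 + se3

# structural model: danger control predictors -> intention -> behavior
voting_intention ~ severity + susceptibility + response_efficacy + self_efficacy
voting_behavior  ~ voting_intention
"""

df = pd.read_csv("eppm_survey.csv")   # hypothetical data file
model = semopy.Model(model_desc)
model.fit(df)                         # maximum likelihood estimation by default
print(model.inspect())                # path coefficients, standard errors, p-values
```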

 

Author (2023).

Birmingham, W. C., Hung, M., Boonyasiriwat, W., Kohlmann, W., Walters, S. T., Burt, R. W., ... & Kinney, A. Y. (2015). Effectiveness of the extended parallel process model in promoting colorectal cancer screening. Psycho-Oncology, 24, 1265-1278.

Guidry, J. P., Carlyle, K. E., Perrin, P. B., LaRose, J. G., Ryan, M., & Messner, M. (2019). A path model of psychosocial constructs predicting future Zika vaccine uptake intent. Vaccine, 37, 5233-5241.

Hosseini, Z., Mouseli, A., Aghamolaei, T., Mohseni, S., Shahini, S., & Dadipoor, S. (2022). Predictors of adopting smoking preventive behaviors by university students: The extended parallel process model fitness test. Journal of Substance Use, 1-8.

Stephenson, M. T., Holbert, R. L., & Zimmerman, R. S. (2006). On the use of structural equation modeling in health communication research. Health Communication, 20, 159-167.

 
