
Adaptation of an Evidence-Based Online Depression Prevention Intervention for College Students: Intervention Development and Pilot Study Results

Tracy R. G. Gladstone, L. Sophia Rintell, Katherine R. Buchholz and Taylor L. Myers
Wellesley Centers for Women, Wellesley College, Wellesley, MA 02481, USA
Boston Children’s Hospital, Boston, MA 02115, USA
Author to whom correspondence should be addressed.
Soc. Sci. 2021, 10(10), 398
Submission received: 1 September 2021 / Revised: 29 September 2021 / Accepted: 13 October 2021 / Published: 16 October 2021
(This article belongs to the Special Issue Technological Approaches for the Treatment of Mental Health in Youth)


College and university students across the United States are experiencing increases in depressive symptoms and risk for clinical depression. As college counseling centers strive to address the problem through wellness outreach and psychoeducation, limited resources make it difficult to reach students who would most benefit. Technology-based prevention programs have the potential to increase reach and address barriers to access encountered by students in need of mental health support. Part 1 of this manuscript describes the development of the Willow intervention, an adaptation of the technology-based CATCH-IT depression prevention intervention using a community participatory approach, for use by students at a women’s liberal arts college. Part 2 presents data from a pilot study of Willow with N = 34 students (mean age = 19.82, SD = 1.19). Twenty-nine participants (85%) logged onto Willow at least once, and eight (24%) completed the full intervention. Participants positively rated the acceptability, appropriateness, and feasibility of Willow. After eight weeks of use, results suggested decreases in depressive symptoms (95% CI (0.46–3.59)), anxiety symptoms (95% CI (0.41–3.04)), and rumination (95% CI (0.45–8.18)). This internet-based prevention intervention was found to be acceptable and feasible to implement, and its use may be associated with decreased internalizing symptoms.

1. Introduction

Depression presents a significant and increasingly common problem for students on college and university campuses across the United States. According to a review by Ibrahim et al. (2013), the rate of depression among college students ranges from 10 to 85%, with a mean prevalence of 30.6%. A 2019 report of the American College Health Association’s National College Health Assessment (ACHA-NCHA) indicates that 45.1% of college students report they “felt so depressed that it was difficult to function” in the past 12 months, an increase from 36.1% in 2015 (American College Health Association 2015, 2019). This assessment also reports that 20.0% (22.5% female, 11.7% male) of college students were “diagnosed or treated by a professional” for depression within the past 12 months, an increase from 14.9% (17.0% female, 9.3% male) in 2015 (American College Health Association 2015, 2019). Moreover, a recent analysis of 5 years of national data from college counseling centers reveals increasing rates of depression symptoms, anxiety symptoms, and suicidal thinking among students on campus (Xiao et al. 2017). This literature suggests that depression risk and depression diagnoses among college students are increasing and represent a significant challenge for young adults.
Among college students with a diagnosed mood disorder or with elevated symptoms of depression, fewer than half seek mental health treatment (Blanco et al. 2008; Eisenberg et al. 2007). The low rate of help-seeking behaviors among college students may be partially explained by students’ perceptions of public stigma, lack of time, or privacy concerns, as well as a lack of concern about their symptoms (Eisenberg et al. 2009; Hunt and Eisenberg 2010). Data also suggest that students from certain demographic groups are particularly unlikely to access mental health services, including students from lower socioeconomic backgrounds, and students who identify as Asian or Pacific Islander (Eisenberg et al. 2007). Finally, most campus counseling centers are already overwhelmed with demand. Directors of counseling centers report increased numbers of students who require mental health support and increased demand for mental health services. As a result, they have expanded their external referral networks, increased counseling staff, and resorted to waitlists and limits on the number of sessions in order to manage students’ needs (Gallagher 2014; Xiao et al. 2017).
Technology-based interventions have the potential to address many of the concerns expressed by students with depressive symptoms who are reluctant to seek mental health support at their campus counseling centers or are unable to access support due to limited resources at counseling centers. These interventions offer several distinct advantages over more traditional face-to-face approaches, including easy access, privacy, personalization, and decreased cost (Munoz et al. 2010; Van Voorhees et al. 2010). In fact, technology-based interventions are non-consumable, which means that, once developed, they can be used repeatedly, and globally, without losing their preventive power. They may also be accessed by multiple people simultaneously (Munoz et al. 2010) without cost to people who may otherwise not have the resources for the prevention or treatment of depressive symptoms.
To date, while a handful of internet-based depression intervention programs have been developed for adolescents and young adults, these programs have been associated with only small to moderate effect sizes over relatively short follow-up intervals (Abuwalla et al. 2018). Such intervention programs are being implemented across college campuses in the United States, but these programs generally focus on decreasing symptoms in students already diagnosed with clinical depression rather than preventing the onset of clinical depression among students with subthreshold symptoms of depression (Davies et al. 2014). Technology-based prevention programs, which aim to identify individuals who are at risk and provide them with skills and resources in order to prevent the onset of disease, have been found to be an effective method of decreasing disease burden (van Zoonen et al. 2014). While low-intensity, online interventions for depression have been found to be as effective as traditional interventions, few preventive interventions have been developed for and evaluated in the college student population specifically (Cook et al. 2019; Furukawa et al. 2021; Harrer et al. 2019).
Developed initially for use in primary care, the CATCH-IT (Competent Adulthood Transition with Cognitive Behavioral Humanistic and Interpersonal Training) intervention is a self-guided prevention intervention resource targeting adolescents at risk for depression (i.e., experiencing depressive symptoms below the threshold needed for a diagnosis of clinical depression). CATCH-IT is based on a theoretical model of strengthening individual coping using internet-based learning, and it incorporates character stories, peer videos, and design elements to meet current social media standards. CATCH-IT includes 14 web-based modules and a motivational interview to engage adolescents. Modules teach strategies from Behavioral Activation, Cognitive Behavioral Therapy (CBT), and Interpersonal Psychotherapy (IPT), as well as relaxation techniques. The preventive program aims to help symptomatic adolescents at risk for clinical depression to increase healthy activities, develop skills to maintain a flexible and balanced cognitive style, improve interpersonal relationships, and engage more positively in their community.
Preliminary studies reveal that the CATCH-IT intervention is acceptable to adolescents from a range of social, economic, and racial backgrounds, and that the use of this intervention is associated with long-term decreases in depressive symptoms and disorders at follow-up intervals ranging from 6 to 12 months (Van Voorhees et al. 2008, 2009). Specifically, a clinical trial of the CATCH-IT intervention showed decreased symptoms of depression across a 12-month follow-up interval (Ip et al. 2016), and a multi-site trial revealed that the CATCH-IT program, when used by adolescents who presented with elevated symptoms of depression (i.e., at risk for clinical depression), was associated with decreased risk for depressive disorders across a 6-month follow-up interval (Gladstone et al. 2018). However, to date, the CATCH-IT intervention has not been adapted for use on college campuses.
While not all high school graduates pursue post-secondary education, a 2017 report found that two-thirds of U.S. high school graduates attended post-secondary institutions (U.S. Department of Labor 2018). Given that there are so few evidence-based interventions available for young adults struggling with mental health concerns, and that college and university campuses are home to the majority of young adults in America, Hunt and Eisenberg suggest that interventions such as CATCH-IT be adapted for college students (Hunt and Eisenberg 2010).
To this end, and consistent with the ecological validity model of program adaptation (Bernal et al. 1995), we used a community participatory approach (Israel et al. 1998) to develop a technology-based depression prevention program that (1) targets symptomatic college students at risk for clinical depression and (2) could be sustained within the existing infrastructure of the college. Community-Based Participatory Research (CBPR) is defined by Israel et al. (1998) as “a collaborative, partnership approach to research that equitably involves, for example, community members, organizational representatives, and researchers in all aspects of the research process. Partners contribute their expertise and share responsibilities and ownership to increase understanding of a given phenomenon, and incorporate the knowledge gained with action to enhance the health and well-being of community members.” This method is often used to adapt findings from controlled trials to real-world interventions in diverse contexts (Wallerstein and Duran 2010). The strengths of CBPR include incorporating community practices, beliefs, and theories that may inform interventions; utilizing bi-directional learning and collective decision making; developing sustainable systems and community ownership such that the intervention does not depend on research funding; and developing trusting partnerships between researchers and communities (Wallerstein and Duran 2010).
In this manuscript, we present in Part 1 the adaptation process we employed, and we share data from qualitative analyses of focus groups we conducted with stakeholders in the college community. In Part 2, we present data from an open trial pilot study of this new, technology-based resource.

2. Part 1: Intervention Development

2.1. Introduction

The intervention development process comprised six key phases, each involving collaboration with community members and stakeholders. Phase 1 was a meeting with the college President and Dean of Students to share information about evidence-based depression prevention and to introduce the CATCH-IT intervention as a model for adaptation to the college student community. Support and endorsement from college leadership were essential to building credibility and establishing relationships with stakeholders at every level of the project. The college leadership provided introductions to the head of the counseling center, members of the student life team, and other key stakeholders in the community. Phase 2 was a series of meetings with a variety of community stakeholders, including faculty and staff from the offices of Religious and Spiritual Life, Disability Services, Intercultural Education, Athletics, LGBTQ+ Support, Student Wellness, and other individual faculty members. The research team conducted semi-structured interviews with stakeholders to learn about the characteristics and needs of students, whether there was a perceived need for a depression prevention intervention, and what key issues should be addressed in such an intervention. When the proposed intervention was described during these meetings, the importance of including a motivational component was discussed, and stakeholders committed to providing this motivational support to students they referred to an online depression prevention intervention. Phase 3 consisted of a series of student focus groups aimed at eliciting key issues and challenges to be addressed in the intervention, as well as suggestions for implementing the program on campus. Qualitative analysis of the focus groups informed the adaptation of the intervention and is presented in this manuscript. Phase 4 of the development process involved direct student input.
A Student Advisory Board was assembled, consisting of 14 students who met monthly to discuss topics central to intervention development (e.g., feedback on program content, outreach methods). Additionally, student interns were hired to assist in drafting and revising intervention content. Phase 5 was the website creation and adaptation of the CATCH-IT intervention content for the college student population. Phase 6 involved the beta testing and pilot testing of the new intervention.

2.2. Methods

Davies et al. (2014) recommend utilizing qualitative methods, such as focus groups, in order to ensure that interventions targeting college student populations meet students’ needs. Below, we provide an analysis of the data collected from student focus groups and describe how student input informed the adaptation of the intervention.

2.2.1. Recruitment of Participants

Two recruitment strategies were used: campus-wide email advertisements and targeted outreach to specific identity-based campus organizations (e.g., racial/cultural groups, LGBTQ+, peer educators, students with disabilities). Focus groups for specific populations served the dual purposes of ensuring the representation of a broad range of student identities, as well as providing a comfortable environment for students to discuss personal or sensitive issues. A total of 88 students (47 first-years, 15 sophomores, 8 juniors, 16 seniors) participated in 13 focus groups of approximately five to twelve students each. Each group lasted for 60 minutes. Diverse populations represented in the sample include African/African Americans, Asian/Asian Americans, Latinx individuals, athletes, first-generation college students, international students, LGBTQ+ individuals, peer educators, students with disabilities, transfer students, and non-traditional students. Student interest in the focus groups was high, so additional sessions were scheduled to include as many participants as possible.

2.2.2. Focus Group Protocol

Sessions were conducted following the framework for collecting focus group data provided by Onwuegbuzie et al. (2009). In-person sessions were conducted on the college campus, facilitated by members of the research team consisting of a psychologist and research associate. A waiver of written informed consent, including consent to audio record, was obtained via email prior to each focus group session. Permission to record was requested again at the beginning of each session. Five sessions were not recorded in order to honor students’ preferences. All participants were given $15 in appreciation of their participation.
Facilitators led a semi-structured conversation addressing students’ positive and challenging experiences in college and how such experiences impact mood and mental health. Specifically, the discussions focused on three research questions: (1) What issues, positive or negative, are central to the experience of college students at this institution? (2) What strategies do students suggest for the implementation of an online depression prevention intervention at this institution? and (3) What kinds of people and experiences should be represented in the intervention? Recordings were transcribed, and written notes were typed for groups that were not recorded.

2.2.3. Data Analysis

The focus group data were analyzed using a thematic analysis approach as described by Braun and Clarke (2006, 2012). To begin, Researcher 1 read and annotated the dataset, inductively generating a preliminary list of codes. Then Researchers 1, 2, and 3 independently coded two transcripts using the codes generated by Researcher 1. Codes were not mutually exclusive, and each response could contain multiple codes. Together, all three researchers compared their coding and reconciled discrepancies by clarifying and editing the codes. With the revised list of codes, all three researchers independently coded two additional transcripts. Intercoder reliability was calculated to protect against investigator bias, using Miles and Huberman’s (1994) formula: number of agreements divided by the sum of agreements and disagreements. The resulting rate of intercoder reliability was 0.92. The remaining eleven transcripts were divided between the researchers and coded independently. Researchers reviewed the coded data to identify areas of similarity and overlap between codes and to generate themes. The themes that were both prevalent and salient to the research questions were included in the results.
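The Miles and Huberman (1994) agreement formula used above can be expressed as a short function. The following is an illustrative sketch; the function name and the agreement counts in the example are ours, not taken from the study:

```python
def intercoder_reliability(agreements: int, disagreements: int) -> float:
    """Miles and Huberman (1994): agreements / (agreements + disagreements)."""
    total = agreements + disagreements
    if total == 0:
        raise ValueError("no coding decisions to compare")
    return agreements / total

# Hypothetical counts: 92 agreements and 8 disagreements across two transcripts.
print(intercoder_reliability(92, 8))  # 0.92
```

With 92 agreements out of 100 coding decisions, this yields the 0.92 rate reported above; any combination of counts in the same proportion gives the same result.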

2.3. Results

Participants provided detailed information about their experiences as students at this institution, with particular regard to impact on mental health as well as suggestions for the adaptation and implementation of the intervention. They talked openly about positive experiences as well as challenges they and their peers have experienced related to academic and social pressure, navigating on-campus resources, and the needs that a depression prevention intervention could serve within the community. After the diverse responses to discussion topics were categorized, themes emerged and are presented within the three central research questions. Examples of direct quotations from the focus groups relating to these themes are available in Table S1.

2.3.1. Research Question #1: What Issues, Positive or Negative, Are Central to the Experience of College Students at This Institution?

Academic Pressure. Many participants took pride in the institution’s academic rigor and praised the faculty. Students also reported struggles to earn good grades, find time to complete all of their work, and get enough sleep. The concept of “stress culture” was named in nearly all focus groups, meaning a culture in which stress is normalized as a necessary element of academic success. Academic pressure was also described in the form of perceived judgment by peers and competition between students, which reportedly feels isolating to some students, making it difficult to seek out support from their peers.
Financial Stress. Participants shared how socioeconomic status impacted multiple elements of the student experience, from securing campus jobs to balancing work and academics, participating in social life, and meeting family expectations. Some students with jobs on campus discussed feeling overwhelmed by having to manage time between work, academics, extracurricular activities, and family responsibilities and noted the gap between students who work and students who do not have jobs. Some students noted the burden of financial stress on campus as well, mentioning social stratifications based on financial limitations.
Access to Resources. Participants noted a desire to locate and utilize more of the available resources at the institution, but they identified barriers to doing so. Namely, some students did not know that certain resources were provided to students or they were unsure where to find the resources they needed. In some cases, students found existing resources insufficient for their needs. Another participant felt that the college is now making progress supporting first-generation and underrepresented students and encouraged the continuation of efforts to support such students. Participants suggested that the online intervention should incorporate a centralized comprehensive list of existing resources that could help students in times of need.
The Community. The subject of campus community emerged as a theme with both positive and negative implications. When the idea of the community came up in a positive light, participants spoke to the support they receive from faculty, deans, residence advisors, and peers. They also reported feeling empowered, respected, and inspired by others in the community. Some negative aspects of community culture, which impacted mental health, were cited as well. For example, students shared experiences of being ‘misgendered’ by professors who refused to use a student’s stated pronouns. Participants also discussed instances of feeling excluded on campus, citing selective student social organizations.
Physical Environment. The physical campus environment was celebrated for its beautiful scenery, which students said had a positive effect on their mood. In addition to the beauty of the campus, participants felt safe on campus, reporting feeling comfortable keeping dorm rooms unlocked, leaving personal belongings unattended in public spaces on campus, and walking through campus late at night. Some students also found the campus isolating because of long walks between buildings, the distance from the nearest big city, and lack of social spaces. Some students expressed discomfort in the college town in which the campus is situated, noting they sometimes feel unwelcome by the local community.

2.3.2. Research Question #2: What Content Should the Intervention Cover, and How Should the Intervention Be Delivered?

Content to Include. Many students advocated that the intervention should teach specific skills and strategies, rather than provide general advice. For example, students asked for guided writing activities, time management skill building, strategies for balancing responsibilities, and relaxation methods. Participants criticized other mental health resources for being standardized, general, and providing only superficial advice focused on ‘self-care,’ and they discussed their preference for concrete strategies and steps to take to improve low mood. Participants also overwhelmingly suggested that psychoeducation be included in the intervention, noting that many students have trouble identifying that they may be struggling and may need support.
Likewise, participants suggested that the intervention incorporate examples of student stories that could help students to feel less alone in their struggles. Participants asked for the intervention to feature a compilation of resources that students often need when they are struggling, so that it is easier to find help during hard times. For example, a participant suggested including guides for communicating with professors to ask for extensions.
Development and Implementation. Much of the feedback participants provided related to the process by which the intervention would be developed, introduced to the student body, and utilized by students. Participants were supportive of the private, flexible, and accessible nature of the intervention, as it offered an option very different from in-person counseling and could be used for short periods of time at the user’s convenience. Participants also found it meaningful to know that the intervention is rooted in evidence-based strategies. Participants suggested that these qualities be highlighted in the campus introduction to the intervention and rollout for the purpose of attracting students to use it. A key suggestion from participants was that the intervention development and implementation be driven by students. They suggested involving peer leaders in the roll-out of the intervention and responded well to the idea of a Student Advisory Board that would help to develop and approve the intervention content. It was important to participants that the online depression prevention intervention be differentiated from the college counseling center. However, students did suggest emphasizing the fact that intervention users would be connected to a point-person at the college, as participants felt that human connection would be important to supplement the online program.

2.3.3. Research Question #3: How Can the Intervention Best Reflect and Address the Diversity of the Student Population?

Diverse Personal Experiences. A key goal of the project was to customize the intervention for the student community at this particular college. We asked focus group participants what topics and common experiences should be represented within the intervention. Participants suggested including representations of students who are dealing with mental illness or managing the implications of physical, chronic, and invisible illnesses or diseases. Participants also recommended that the intervention include stories of students with diverse family backgrounds or those managing problems in their family life at home. Other suggested topics include homesickness, relationship problems, academic struggles, holding a leadership position, and navigating major transitions such as starting college or preparing to graduate.
Self-Identification. The focus groups generated a long list of identities and traits that participants felt should be represented in the intervention. These characteristics included racially and ethnically diverse students, students who identify as LGBTQ+, international students, first-generation college students, transfer students, religious students, low-income students, and athletes.

2.4. Discussion

Analysis of data from the student focus groups illuminated common experiences and core values of students at the college that were central to the development and implementation of a depression prevention intervention. Participants highlighted elements of student life that can impact mental health, including a high-stress academic environment, financial hardship, challenges accessing and utilizing resources for support on campus, social exclusivity, discomfort in the local college town, supportive faculty, helpful peer residence advisors, an engaged student body, an empowering community environment, and the scenic campus. The focus groups also generated feedback on specific features students would like to see in the intervention (i.e., specific skill-building strategies, psychoeducation, and a resource list), suggestions for the implementation of the intervention on campus (i.e., student involvement, connection to a live person for support, emphasis on evidence-based strategies, privacy, flexibility, and use of example student stories), and representation within the intervention, including diverse personal experiences and demographic identities.
Feedback from stakeholders and focus group participants was used to develop the intervention, an adaptation of the CATCH-IT intervention targeting students at a northeastern liberal arts college. The adaptation process involved taking the content, references, language, and imagery targeted for the adolescent audience and replacing it with alternatives appropriate for the social, intellectual, and developmental needs of college students. We honored students’ suggestion that the process be student-driven. Thus, students were involved in the most significant modifications—student interns created campus-specific resource lists to be featured in the intervention, wrote video blog scripts, created characters that represented the diverse characteristics suggested by focus group participants, acted in the video blogs, and recorded voiceover audio. Student artwork and photographs were featured throughout the intervention. A Student Advisory Board reviewed new material, in consultation with the research team. In addition, a student intern and the Student Advisory Board generated several possible names for the intervention and selected the name Willow based on a quote by Robert Jordan: “The oak fought the wind and was broken, the willow bent when it must and survived.”
The college’s information and technology department partnered with the research team to develop a platform for the intervention using Qualtrics, which is HIPAA-compliant. The platform enables users to navigate between modules in the intervention, and it supports interactive activities such as matching games, choose your own adventure scenarios, and text-entry worksheets. The online intervention was configured to track user data, including progress, number of modules completed, overall time elapsed, and text inputted into the program. Four students participated in preliminary beta testing of the intervention and provided feedback to the research team. Minor errors in functionality and content were corrected. Part 2 describes the pilot study of the intervention.

3. Part 2: Pilot Study

3.1. Introduction

The Willow intervention was tested in an open trial pilot study with students at the small northeastern college. The primary goal of the pilot study was to establish the feasibility and acceptability of the intervention for use with young-adult college students. Data were collected pertaining to participant use of the intervention, satisfaction, and clinical outcomes. While the intervention primarily targets symptoms of depression in college students, anxiety symptoms were measured as well, given the frequent overlap between symptoms of these disorders and the potential for cross-over effects on comorbid symptoms from interventions targeting one or the other disorder (Gladstone et al. 2020).

3.2. Method

3.2.1. Participants

Participants included 35 first- through fourth-year students from a northeastern liberal arts women’s college. One participant was removed from data analysis after receiving an incorrect intervention link. Participants ranged in age from 18 to 22 (mean age = 19.82, SD = 1.19) and primarily identified as female (97%), with 3% identifying as genderqueer. Demographic data are presented in Table 1 below.

3.2.2. Procedure

Students were referred to the study by stakeholders in the college community (e.g., class deans, deans of intercultural education, athletic staff, religious chaplains, and residence life staff), who, upon receiving study fliers from the research team, shared them with students. All students who expressed interest in the study were invited to participate, regardless of demographic characteristics or baseline depression status.
Prior to beginning the study, students completed a virtual informed consent process and provided online written consent. Then, the participants completed baseline questionnaires administered through Qualtrics. Students also participated in a brief (5–10 min) motivational interview by phone with a clinician from the research team in order to enhance the change process and help them weigh the pros and cons of participating in this preventive intervention. Upon completion of the first survey and motivational interview, participants were provided a log-in username and password for the program. Participants were able to access the program through a website on any internet-connected device. A second motivational interview was conducted by phone after four weeks of program use. Time 2 questionnaires, in addition to satisfaction measures, were administered at eight weeks. Participants received $20 at each time point (baseline and eight weeks) to compensate them for their time. The study period started in April 2021 and continued past the end of the academic year. Of note, baseline measures were collected during the semester, and follow-up measures were collected after the end of the semester for almost all participants.

3.2.3. Measures

Demographics Form: Demographic information was gathered using a study-specific form. Information about gender identity, race, ethnicity, and parent/guardian level of education was collected.

Willow Usage Variables

The Qualtrics platform hosting Willow collected usage variables of interest, including the number of Modules Used, the number of Characters Typed into Willow as part of the free-response questions, the Overall Time spent in the Willow program, and the Time spent per Module.

Participant-Reported Feasibility, Acceptability, and Usability Measures

Acceptability of Intervention Measure (AIM): The AIM is a four-item measure of implementation that assesses acceptability, “the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory.” This measure has adequate psychometric properties. Test/retest reliability coefficients range from 0.73 to 0.88 in prior research (Weiner et al. 2017). The Cronbach’s alpha in this sample is 0.89.
Intervention Appropriateness Measure (IAM): The IAM is a four-item measure of implementation that assesses appropriateness, “the perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer.” This measure has adequate psychometric properties. Test/retest reliability coefficients range from 0.73 to 0.88 in prior research (Weiner et al. 2017). In this sample, the Cronbach’s alpha is 0.90.
Feasibility of Intervention Measure (FIM): The FIM is a four-item measure of implementation that assesses feasibility, “the extent to which a new treatment or innovation can be successfully used or carried out within a given agency or setting.” This measure has adequate psychometric properties. Test/retest reliability coefficients range from 0.73 to 0.88 in prior research (Weiner et al. 2017). The Cronbach’s alpha in this sample is 0.79.
The Usefulness, Satisfaction, and Ease of Use Questionnaire (USE): The USE questionnaire is a 28-item measure of the subjective usability of a product or service. A sample item is: “It is simple to use.” This measure has adequate psychometric properties (overall alpha in prior research = 0.98) (Gao et al. 2018). In this sample, the Cronbach’s alphas are 0.82 for usefulness, 0.77 for ease of use, 0.79 for ease of learning, and 0.91 for satisfaction.
Treatment Utilization Measure: This measure (Eisenberg et al. 2009) is an adaptation of that used by Eisenberg et al. (2007) and assesses perceived need for mental health services and medication use over the past year.
Willow Open Response Questionnaire: This is a study-specific form comprising 12 open-response questions addressing participant satisfaction with specific elements of the online intervention program.

Participant-Reported Mental Health Measures

PHQ-8: The Patient Health Questionnaire (PHQ-8) is an eight-item self-report screening measure for depression, used in place of the PHQ-9 (which adds an item assessing suicidal thinking/behavior) when phone- and internet-based assessment makes it difficult to address the safety concerns that the ninth item may raise (Kroenke et al. 2009). The PHQ-8 assesses symptoms of depression over the past two weeks using a four-point Likert scale. It demonstrates adequate psychometric properties, with an alpha of 0.89 in prior research (Shin et al. 2019), and it is highly correlated with the PHQ-9 (r = 0.97; Kroenke 2002). Cronbach’s alphas for this study are 0.80 at baseline and 0.89 at Time 2.
GAD-7: The GAD-7 is a seven-item self-report screening measure for generalized anxiety disorder. Each item is scored from 0 to 3, with total scores ranging from 0 to 21. The GAD-7 demonstrates adequate psychometric properties with an alpha value of 0.92 and test/retest reliability intraclass correlation of 0.83 in prior research (Spitzer et al. 2006). Cronbach’s alphas for this sample are 0.84 at baseline and 0.86 at Time 2.
Ruminative Response Scale (RRS): The RRS is a 22-item self-report instrument that measures ruminative coping styles. Respondents answer questions on a scale of 1 (almost never) to 4 (almost always). This scale has adequate psychometric properties with an alpha coefficient of 0.90 and a test/retest correlation of 0.67 in prior research (Treynor et al. 2003). In this sample, the Cronbach’s alphas are 0.93 at baseline and 0.91 at Time 2.
Dysfunctional Attitude Scale (DAS-9): The DAS-9 is a nine-item self-report measure that aims to determine stable negative attitudes that people with depression hold about themselves, the world, and their future. The measure uses a five-point Likert scale to assess how much the respondent agrees or disagrees with each statement and has adequate psychometric properties. The internal consistency is 0.93 in prior research, and this measure is correlated with the Beck Depression Inventory at 0.65 (Weissman 1978). Cronbach’s alphas for this sample are 0.82 at baseline and 0.81 at Time 2.
Multidimensional Scale of Perceived Social Support (MSPSS): The MSPSS is a 12-item scale that measures perceived support from family, friends, and a significant other (“special person”). A sample item is “My family really tries to help me.” Respondents answer items on a seven-point Likert scale. This scale has adequate psychometric properties with a Cronbach’s alpha of 0.93 in prior research (Canty-Mitchell and Zimet 2000). For this sample, the Cronbach’s alphas are 0.83 at baseline and 0.67 at Time 2.
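The Cronbach’s alpha values reported for the measures above follow the standard internal-consistency formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of total scores). A minimal Python sketch with hypothetical item responses (not study data) illustrates the computation:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    item_variances = [statistics.variance(col) for col in items]
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(item_variances) / statistics.variance(totals))

# Hypothetical responses: three items, five respondents, 0-3 Likert scale
items = [[1, 2, 3, 2, 1],
         [1, 3, 3, 2, 1],
         [0, 2, 3, 2, 1]]
alpha = cronbach_alpha(items)  # high alpha: the items move together
```

Sample (n − 1) variances are used throughout, matching the default in most statistical packages.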

3.3. Data Analysis

IBM SPSS Statistics 28 was used for all quantitative data analyses. One participant was removed from all analyses because she was sent an incorrect link to the Willow program. Usage data (modules completed, overall time, and characters typed into the program) were downloaded directly from Qualtrics, the platform in which Willow was created. The overall time for one participant was an extreme outlier (over two standard deviations from the mean). To preserve data while providing a more useful assessment of time spent on Willow, we used a Winsorized estimate for this data point, substituting the outlier with the next highest value plus one (Reifman and Garrett 2010). Time per module was calculated by dividing the overall time by the number of modules completed by the participant. Descriptive statistics are presented for both the full sample (including those who did not log on to Willow) and the subsample who logged on to Willow at least once.
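The outlier substitution described above (replacing a value more than two standard deviations from the mean with the next highest value plus one) can be sketched as follows; the usage times below are hypothetical, not the study data:

```python
import statistics

def winsorize_extreme(values, z=2.0):
    """Replace any value more than `z` SDs from the mean with the
    next highest non-outlying value plus one (per Reifman and Garrett 2010)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    non_outliers = [v for v in values if abs(v - mean) <= z * sd]
    cap = max(non_outliers) + 1
    return [v if abs(v - mean) <= z * sd else cap for v in values]

# Hypothetical overall times (minutes), including one extreme outlier
times = [0.0, 45.5, 88.0, 120.0, 353.0, 26927.0]
adjusted = winsorize_extreme(times)  # outlier becomes next highest value + 1
```

This keeps every participant in the analysis while preventing a single extreme value from dominating the mean.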
Descriptive statistics were computed for measures of feasibility, acceptability, and usability for the subgroup of participants who logged on to Willow at least once, since all items of these measures were directly related to use of Willow and were only administered at Time 2.
Analyses of the mental health measures were conducted using an intent-to-treat (ITT) approach that included all cases regardless of whether participants logged onto Willow. An ITT design is a practical approach that helps avoid the overstatement of effectiveness that can occur when reporting only on participants who engaged in interventions exactly as prescribed by the researchers (Heritier et al. 2003; Hollis and Campbell 1999). To address missingness in our data, we used multiple imputation to impute missing data across baseline and Time 2. We imputed at the item level of each measure rather than at the scale level (Gottschall et al. 2012). To best estimate each missing value, we included the following variables: age, gender, ethnicity, parent education, modules completed, and all items of the baseline and Time 2 PHQ-8, GAD-7, RRS, DAS, and MSPSS (Bartlett et al. 2011; Buuren 2018). The multiple imputation method selected was a fully conditional specification (FCS) model, an iterative Markov chain Monte Carlo (MCMC) procedure. The iteration limit was 10, and linear regression models were used to impute missing data into five different datasets.
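As an illustration of the fully conditional specification approach (chained regression models with posterior draws), the sketch below uses scikit-learn’s IterativeImputer on synthetic data; it mirrors, but is not identical to, the SPSS procedure used in the study:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
# Synthetic item-level data: 34 participants x 6 items, roughly 5% missing
X = rng.normal(size=(34, 6))
X[rng.random(X.shape) < 0.05] = np.nan

# FCS: impute each variable from the others iteratively, sampling from the
# posterior so that the five imputed datasets differ (as in MICE/SPSS FCS)
m = 5
imputed_sets = [
    IterativeImputer(max_iter=10, sample_posterior=True, random_state=i)
    .fit_transform(X)
    for i in range(m)
]
```

Each of the m completed datasets is then analyzed separately, and the per-dataset results are combined into one pooled estimate.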
For each mental health measure, we conducted a paired t-test to compare the means of the baseline and Time 2 scores. SPSS provides a pooled result for paired t-tests, with the results from each of the five datasets combined into one pooled estimate. The fraction of missing information (fmi), an estimate of the ratio of missing information to complete information (Buuren 2018), is provided for each estimate. We report these pooled estimates alongside results from the observed data (where pairwise deletion was used) for comparison.
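The pooling that SPSS performs follows Rubin’s rules: average the m point estimates, then combine within- and between-imputation variance. A minimal sketch with hypothetical estimates (the fmi here is the common large-sample approximation, without the small-sample degrees-of-freedom adjustment):

```python
import statistics

def pool_estimates(estimates, variances):
    """Pool m point estimates and their within-imputation variances
    using Rubin's rules; returns (pooled estimate, total variance,
    approximate fraction of missing information)."""
    m = len(estimates)
    q_bar = statistics.mean(estimates)      # pooled point estimate
    w = statistics.mean(variances)          # within-imputation variance
    b = statistics.variance(estimates)      # between-imputation variance
    t = w + (1 + 1 / m) * b                 # total variance
    fmi = (1 + 1 / m) * b / t               # approx. fraction of missing info
    return q_bar, t, fmi

# Hypothetical mean baseline-to-Time-2 differences from five imputed datasets
diffs = [2.1, 1.9, 2.3, 2.0, 2.2]
vars_ = [0.55, 0.60, 0.52, 0.58, 0.56]
est, total_var, fmi = pool_estimates(diffs, vars_)
```

When the m estimates agree closely (small between-imputation variance), the fmi is small and the pooled result is dominated by the within-imputation uncertainty.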

3.4. Results

3.4.1. Descriptives of Missing Data

Of the 29 participants included in the descriptive analysis for measures of feasibility, acceptability, and usability, 23 participants (79%) had complete data on these measures. At the item level, 5% of the data were incomplete. The percentage of missing data across the seven feasibility, acceptability, and usability measures was 3–8%. Little’s MCAR test was nonsignificant, indicating that we could not reject the null hypothesis that the data were missing completely at random.
For the full intent-to-treat sample, twenty-five participants (73.5%) had complete data across the two timepoints. At the item level, 4% of the data were incomplete. The percentage of missing data across the five mental health variables from both timepoints ranged from 0 to 5%. While Little’s MCAR test was nonsignificant, given the nonresponse on the Time 2 survey from two participants, it is likely that the missing data are better categorized as missing at random rather than missing completely at random (Jakobsen et al. 2017).

3.4.2. Usage Results

Participants’ usage of Willow is summarized in Table 2. Twenty-nine participants (85%) logged onto Willow at least once, and eight participants (24%) completed the full 14 modules. There were no significant differences in baseline depressive or anxiety symptoms between participants who logged onto Willow and those who did not. The mean number of modules completed was 6.32 (SD = 5.41) for the full sample. Participants engaged in the interactive part of Willow to varying degrees. Characters typed ranged from 0 to 7555 with a mean of 1857.88 (SD = 2176.68). The overall time that participants spent on Willow ranged from 0 min to 353.01 min with one outlier participant who spent 26,927 min on the program. As described above, we used a Winsorized estimate to replace this single outlying value. The mean time spent on Willow was 88.26 (SD = 101.15) minutes with participants averaging 13.24 (SD = 13.32) minutes per module.

3.4.3. Feasibility, Acceptability, and Usefulness Results

Participants (n = 23) rated the acceptability, appropriateness, and feasibility of Willow on a five-point scale (higher scores were more favorable). All three were rated favorably, with mean ratings of 3.89 (SD = 0.75) for acceptability, 3.89 (SD = 0.83) for appropriateness, and 4.07 (SD = 0.43) for feasibility. Usefulness, ease of use, ease of learning, and satisfaction were rated on a seven-point scale (higher scores were more favorable). These variables were also rated favorably, with mean ratings of 4.67 (SD = 1.09) for usefulness, 5.66 (SD = 0.73) for ease of use, 6.47 (SD = 0.64) for ease of learning, and 4.65 (SD = 1.25) for satisfaction. Participants gave a mean rating of 5.17 (SD = 1.30) on a seven-point scale (higher scores were more favorable) when asked how likely they were to recommend the program to a friend.

3.4.4. Mental Health Results

Descriptive statistics are presented in Table 3. Participants’ baseline PHQ-8 scores ranged from 1 to 18 with a pooled mean estimate of 8.36 (SD = 4.72). Nine participants reported minimal symptoms of depression, 14 reported mild symptoms, seven reported moderate symptoms, and four reported moderately severe symptoms. Participants’ baseline GAD-7 scores ranged from 0 to 18 with a pooled mean estimate of 7.12 (SD = 4.60). Eleven participants reported minimal symptoms of anxiety, 12 reported mild symptoms, eight reported moderate symptoms, and three reported severe symptoms. At baseline, 25 participants (73.5%) reported that they thought they needed help for emotional or mental health problems in the past 12 months, and nine participants (26.5%) reported taking medication for mental health problems in the past 12 months.
Results from the paired t-tests conducted on the imputed datasets suggested significant decreases in depressive symptoms (95% CI (0.46–3.59)), anxiety symptoms (95% CI (0.41–3.04)), and rumination (95% CI (0.45–8.18)). Participants’ scores on measures of social support and dysfunctional attitudes showed no change. Similar results were found using the observed data with pairwise deletion.

3.5. Discussion

We conducted a pilot study to test the acceptability and feasibility of the Willow intervention among college students attending a women’s college in the northeast. While our sample size was small (N = 34), participants represented a diverse group of students.
Participants’ engagement with the Willow program over the two-month study period was good. The majority of participants (85%) logged onto Willow at least once and completed an average of 6 out of 14 modules. Participants spent an average of 88 min on the program. Data from an RCT testing CATCH-IT in adolescents found that adolescents completed an average of three out of 15 modules and spent an average of 100 min on the CATCH-IT website (Gladstone et al. 2018), demonstrating that Willow users generally completed more modules than adolescent CATCH-IT users, but in less time. It is possible that college students have had more exposure to online module-based learning programs and are thus able to move more efficiently through them. While it has been suggested that increased engagement in online interventions may predict better outcomes in the prevention and treatment of depression (Christensen et al. 2002; Christensen et al. 2006; Van Voorhees et al. 2011), the relationship between use and outcome may not be linear (Donkin et al. 2013). Moreover, our current measures of adherence (e.g., modules completed, time logged in) may not be reliable if, for example, some intervention users practice applying the concepts they have learned while not logged into the program (Lenhard et al. 2019). At this point, there is no evidence of a threshold effect for the use of this or related interventions; in fact, participants in this and similar technology-based depression prevention programs exhibited preventive benefits using only a few modules (Gladstone et al. 2018; Ip et al. 2016). It is also possible that different populations benefit from different amounts of engagement, depending on personal characteristics and developmental level.
Participants reported baseline depressive and anxiety symptoms in the minimal to severe ranges with the majority of participants reporting minimal to mild symptoms of depression and anxiety. Overall, participants reported significantly lower levels of depressive and anxiety symptoms after having access to the Willow program for two months. In addition, participants’ rumination scores significantly decreased over the course of the study. We did not find differences in participants’ scores on a measure of social support, nor did we find any change in participants’ overall cognitive style. Given the short (2-month) follow-up interval used for this pilot study, we are not surprised that changes in social support did not emerge, as it often takes some time for young adults’ social networks to respond to changes in interpersonal style or social problem-solving skills. Similarly, while Willow teaches healthier ways of responding to negative life events, it would likely take longer than 2 months for participants to consolidate this information such that it would translate into overall changes in cognitive style. The absence of changes in social support and cognitive style also suggests that other mechanisms (e.g., increase in pleasurable activities, reduced ruminative coping) may better account for any preventive effects associated with the Willow intervention.
While we did not have a control group and therefore are unable to provide evidence of the intervention’s efficacy, the decrease in scores suggests that the use of Willow did not increase participants’ distress or symptoms. It is possible that the intervention provided some support for students through the stress of the final exam period. However, it is also possible that the decrease in students’ symptom scores reflects their response to the two brief motivational interviews, which are a standard part of the Willow program, and not their use of the actual modules. Additionally, given that the Time 2 survey was administered after the academic year ended, it is also possible that the observed decrease in symptoms and rumination reflects the decrease in stress that comes after finals rather than intervention effects alone. Similarly, the study took place during the COVID-19 pandemic, and strict safety precautions were in place; over the study period, these precautions were eased, possibly influencing students’ symptoms of depression and anxiety.
It is worth noting that the pilot study recruited students who presented with a broad range of depressive and anxiety symptoms. In the CATCH-IT trial, recruited participants were considered at risk for depression (i.e., history of depressive episode and/or current subthreshold symptoms), and the results suggested that teens who were more symptomatic at baseline benefitted more from CATCH-IT (Gladstone et al. 2018). In the Willow pilot study, in contrast, 14 of the 34 participants would not have been considered at risk for depression based on their baseline PHQ-8 scores, which may have impacted our findings. However, nearly three-fourths of the participants indicated that they felt they had needed mental health treatment over the past year, suggesting that Willow users may have been more likely to find the intervention relevant to their needs. This may have been reflected in participants’ favorable ratings of Willow, with ratings for feasibility, ease of use, and ease of learning in the high range.
Overall, the pilot study provides data that support the use of Willow, an adapted web-based depression prevention intervention, in a college student population. Our results suggest that students found Willow to be useful and appropriate, and it was feasible to implement. Students engaged in the program and reported symptom decreases over the study period.

4. Overall Discussion

We have presented a model for adapting and testing a web-based depression prevention intervention for college campuses. Through our six-phase community participatory process, we were able to engage key stakeholders (e.g., administration, faculty, staff, students, campus clinicians) in the adaptation of an existing evidence-based depression prevention intervention for adolescents. The stakeholders provided support and knowledge that guided the adaptations and helped the researchers to develop an intervention that was specific to the campus environment. In addition, the stakeholders’ involvement in the development of the Willow program increased their knowledge about the intervention and their positive expectations regarding its success.
Consistent with the literature on the value of tailoring interventions to specific target populations (e.g., Bernal et al. 1995), many students expressed their appreciation for the specificity of the Willow program to their college campus. In fact, interventions that are adapted to particular groups’ cultural and demographic characteristics have been associated with better adherence and benefit (e.g., Huey and Tilley 2018; Vogel et al. 2019). The model that we used to adapt and customize the Willow program for a particular college campus community can be replicated at other universities and has the potential to aid in the creation of new interventions targeting specific groups of college students. After initial development and adaptation, Willow or a Willow-like program can be sustained without the need for delivery by mental health professionals. Rather, campus stakeholders (e.g., faculty, deans, athletic staff, residential support staff) can be trained to increase student engagement and motivation for using such a web-based resource, ultimately reducing symptoms of depression and anxiety, and potentially preventing the onset of more significant mental health concerns. While the delivery of motivational interviews could add to the complexity of web-based interventions, such brief interviews can be scripted and have been associated with increased engagement with online interventions (Evers et al. 2005; Verheijden et al. 2007).
As stated above, the pilot data provided preliminary evidence that the Willow program is acceptable to students, feasible to implement among the college community, and does not increase distress or symptoms in students. Our results showing a significant decrease in depressive and anxiety symptoms as well as rumination also suggest that Willow may have a positive impact on students. Further research with a larger sample and a control group is needed to gain a better understanding of the Willow program’s potential in preventing depression and supporting students in managing academic stress.
Although Willow is intended as a preventive intervention for college students with subthreshold symptoms of depression, it may also have utility as a bridge for students who are reluctant to seek face-to-face mental health support in the community. Since Willow is customized to campus culture and can be used independently and flexibly, it may be an effective way to provide initial support for students who are reluctant to seek mental health resources or treatment and to encourage students to seek additional supports, as needed. In fact, the final module of the Willow program encourages users to seek mental health support if their symptoms persist or worsen, and the video blogs embedded within the intervention model students recognizing that they need mental health treatment and seeking such resources on campus. Thus, in addition to possibly reducing depressive symptoms among at-risk college students, this program may serve an important function in helping students address the barriers to pursuing treatment for clinical depression, when indicated.
Study findings should be considered in light of several limitations. First, the pilot study consisted of a self-selected convenience sample, with over two-thirds of the sample reporting that they had sought assistance for emotional problems over the past 12 months and with nine participants reporting minimal depressive symptoms at baseline. Given that this intervention is designed to be used as a depression prevention intervention for symptomatic college students, the pilot study sample may not be reflective of the population intended to use Willow. Relatedly, the self-report assessment battery used in the pilot sample only enabled us to observe changes in depressive symptomatology but did not examine suicidal thinking/behavior or allow us to determine the effects of the intervention on the onset of depressive episodes. Ultimately, to evaluate any preventive benefits of this intervention in a randomized trial, it will be important to assess for changes in depressive symptoms and evaluate the presence of more significant manifestations of depressive illness over time.
Second, the Willow program was adapted specifically for students at a northeast liberal arts women’s college, and the participants in the study identified exclusively as female or gender queer. It is well known that there are significant gender differences in risk, expression, and response to depression. Depression is twice as prevalent in adolescent and adult females compared to males (Avenevoli et al. 2015; Kessler et al. 1993; Seedat et al. 2009), although symptom profiles vary by gender (Martin et al. 2013), and men are less likely to seek help for their symptoms than women (Addis and Hoffman 2017). Considering the homogeneity of our sample, the acceptability of the program cannot be generalized to all colleges or universities. However, we believe that these findings are particularly pertinent given that depression is such a significant problem among female students. Thus, we provide a detailed account of our adaptation process for this specific campus such that the same process can be used to produce a tailored intervention for other college settings.
Third, as noted above, the timing of the pilot study was such that participants completed the baseline assessment measures during the final quarter of the spring semester and the follow-up measures after final exams. Thus, it is possible that the lower symptom levels reported by students at the follow-up assessment reflect a reduction in stress that students experienced once the final exam period was over rather than any benefits they received through the use of the Willow program. However, it is also the case that several students reported increased stress with the end of the school year and the return to their family homes. In order to examine environmental factors that may contribute to observed decreases in symptoms over time, a life events measure could be added to future evaluations of the Willow program. Additionally, conducting such evaluations at different points during the school year would shed light on this issue.
Finally, the pilot study took place during the COVID-19 pandemic, when the campus environment presented unique challenges for students with and without symptoms of depression and when rates of depression increased particularly for females (e.g., Chen et al. 2020; Gladstone et al. 2021; Magson et al. 2021). Moreover, follow-up assessments for this pilot study were conducted when COVID-19 cases were decreasing and social distancing restrictions were being lifted. As such, it is difficult to discern the effects of Willow against these environmental changes that may influence participants’ mood and levels of stress.

5. Conclusions

The Willow intervention holds promise as a low-cost, accessible, and non-consumable resource to support college students with symptoms of depression and anxiety. Moreover, the adaptation process used in developing this intervention may inform the development and deployment of campus-tailored depression prevention interventions for college students across a range of college and university environments. While Willow and similar individually-oriented interventions hold promise as low-cost, accessible ways to support young adults who are struggling with symptoms of depression and anxiety, broader social and environmental action is essential to address the contextual factors that account for the increasing incidence of depression and anxiety among youth in America.

Supplementary Materials

The following are available online at, Table S1: Direct quotes from focus group participants relating to the themes that emerged from each research question.

Author Contributions

Conceptualization, T.R.G.G., L.S.R. and K.R.B.; methodology, T.R.G.G., L.S.R. and K.R.B.; formal analysis, K.R.B.; investigation, T.R.G.G., L.S.R., K.R.B. and T.L.M.; resources, T.R.G.G. and K.R.B.; data curation, L.S.R., K.R.B. and T.L.M.; writing—original draft preparation, T.R.G.G., L.S.R., K.R.B. and T.L.M.; writing—review and editing, T.R.G.G., L.S.R., K.R.B. and T.L.M.; supervision, T.R.G.G. and K.R.B.; project administration, L.S.R. and T.L.M.; funding acquisition, T.R.G.G. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by the Huiying Memorial Foundation (GR26486).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of Brandeis University, which provides IRB services to Wellesley College (protocol code 21022R approved 13 July 2020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.


Acknowledgments

Thank you to the students, administrators, faculty, and staff members at a liberal arts college in the northeast for your support of this project and for your participation in all stages of this effort. Thank you to members of the Willow Student Advisory Board, student interns, student writers, student artists, student beta testers, and student actors and voice actors for your assistance with all aspects of intervention development. We are grateful to Erica Plunkett for assistance conducting focus groups; Christine Arumainayagam for her writing and editorial assistance; Anne Diehl for her help communicating with college alumnae; Amy Brooks for her assistance with audio for the intervention and video blogs; Sue Sours and Keng Wai Woo for their assistance with all technological aspects of the intervention; and Benjamin Van Voorhees for his central role in the development of the CATCH-IT intervention, on which the Willow adaptation is based. Thank you also to the Huiying Memorial Foundation for their generous support of this work.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.


References

  1. Abuwalla, Zach, Maureen D. Clark, Brendan Burke, Viktorya Tannenbaum, Sarvanand Patel, Ryan Mitacek, Tracy Gladstone, and Benjamin Van Voorhees. 2018. Long-term telemental health prevention interventions for youth: A rapid review. Internet Interventions 11: 20–29. [Google Scholar] [CrossRef] [PubMed]
  2. Addis, Michael E., and Ethan Hoffman. 2017. Men’s depression and help-seeking through the lenses of gender. In The Psychology of Men and Masculinities. Washington, DC: American Psychological Association, pp. 171–96.
  3. American College Health Association. 2015. American College Health Association-National College Health Assessment II: Reference Group Undergraduates Executive Summary. Hanover: American College Health Association.
  4. American College Health Association. 2019. American College Health Association-National College Health Assessment II: Undergraduate Student Executive Summary Spring 2019. Silver Spring: American College Health Association.
  5. Avenevoli, Shelli, Joel Swendsen, Jian-Ping He, Marcy Burstein, and Kathleen Ries Merikangas. 2015. Major depression in the national comorbidity survey-adolescent supplement: Prevalence, correlates, and treatment. Journal of the American Academy of Child & Adolescent Psychiatry 54: 37–44.
  6. Bartlett, Jonathan W., Chris Frost, and James R. Carpenter. 2011. Multiple imputation models should incorporate the outcome in the model of interest. Brain 134: e189; author reply e90.
  7. Bernal, Guillermo, Janet Bonilla, and Carmen Bellido. 1995. Ecological validity and cultural sensitivity for outcome research: Issues for the cultural adaptation and development of psychosocial treatments with Hispanics. Journal of Abnormal Child Psychology 23: 67–82.
  8. Blanco, Carlos, Mayumi Okuda, Crystal Wright, Deborah S. Hasin, Bridget F. Grant, Shang-Min Liu, and Mark Olfson. 2008. Mental health of college students and their non-college-attending peers: Results from the National Epidemiologic Study on Alcohol and Related Conditions. Archives of General Psychiatry 65: 1429–37.
  9. Braun, Virginia, and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3: 77–101.
  10. Braun, Virginia, and Victoria Clarke. 2012. Thematic Analysis. In APA Handbook of Research Methods in Psychology, Vol. 2: Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological. Edited by Harris E. Cooper, Paul M. Camic, Debra L. Long, Abigail T. Panter, David E. Rindskopf and Kenneth J. Sher. Washington, DC: American Psychological Association, pp. 57–71.
  11. Buuren, Stef van. 2018. Flexible Imputation of Missing Data, 2nd ed. Boca Raton: CRC Press.
  12. Canty-Mitchell, Janie, and Gregory D. Zimet. 2000. Psychometric properties of the Multidimensional Scale of Perceived Social Support in urban adolescents. American Journal of Community Psychology 28: 391–400.
  13. Chen, Fangping, Dan Zheng, Jing Liu, Yi Gong, Zhizhong Guan, and Didong Lou. 2020. Depression and anxiety among adolescents during COVID-19: A cross-sectional study. Brain, Behavior, and Immunity 88: 36–38.
  14. Christensen, Helen, Kathleen M. Griffiths, and Ailsa Korten. 2002. Web-based cognitive behavior therapy: Analysis of site usage and changes in depression and anxiety scores. Journal of Medical Internet Research 4: e3.
  15. Christensen, Helen, Kathleen M. Griffiths, Andrew J. Mackinnon, and Kylie Brittliffe. 2006. Online randomized controlled trial of brief and full cognitive behaviour therapy for depression. Psychological Medicine 36: 1737–46.
  16. Cook, Lorna, Mohammod Mostazir, and Edward Watkins. 2019. Reducing Stress and Preventing Depression (RESPOND): Randomized Controlled Trial of Web-Based Rumination-Focused Cognitive Behavioral Therapy for High-Ruminating University Students. Journal of Medical Internet Research 21: e11349.
  17. Davies, E. Bethan, Richard Morriss, and Cris Glazebrook. 2014. Computer-delivered and web-based interventions to improve depression, anxiety, and psychological well-being of university students: A systematic review and meta-analysis. Journal of Medical Internet Research 16: e130.
  18. Donkin, Liesje, Ian B. Hickie, Helen Christensen, Sharon L. Naismith, Bruce Neal, Nicole L. Cockayne, and Nick Glozier. 2013. Rethinking the dose-response relationship between usage and outcome in an online intervention for depression: Randomized controlled trial. Journal of Medical Internet Research 15: e231.
  19. Eisenberg, Daniel, Ezra Golberstein, and Sarah E. Gollust. 2007. Help-seeking and access to mental health care in a university student population. Medical Care 45: 594–601.
  20. Eisenberg, Daniel, Marilyn F. Downs, Ezra Golberstein, and Kara Zivin. 2009. Stigma and help seeking for mental health among college students. Medical Care Research and Review 66: 522–41.
  21. Evers, Kerry E., Carol O. Cummins, James O. Prochaska, and Janice M. Prochaska. 2005. Online health behavior and disease management programs: Are we ready for them? Are they ready for us? Journal of Medical Internet Research 7: e27.
  22. Furukawa, Toshi A., Aya Suganuma, Edoardo G. Ostinelli, Gerhard Andersson, Christopher G. Beevers, Jason Shumake, Thomas Berger, Florien Willemijn Boele, Claudia Buntrock, Per Carlbring, and et al. 2021. Dismantling, optimising, and personalising internet cognitive behavioural therapy for depression: A systematic review and component network meta-analysis using individual participant data. Lancet Psychiatry 8: 500–11.
  23. Gallagher, Robert P. 2014. National Survey of College Counseling Centers 2014. Alexandria: The International Association of Counseling Services, Inc.
  24. Gao, Meiyuzi, Phil Kortum, and Frederick Oswald. 2018. Psychometric Evaluation of the USE (Usefulness, Satisfaction, and Ease of use) Questionnaire for Reliability and Validity. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62: 1414–18.
  25. Gladstone, Tracy, Daniela Terrizzi, Allison Pauson, Jennifer Nidetz, Jason Canel, Eumene Ching, Anita Berry, James Cantorna, Joshua Fogel, Milton Eder, and et al. 2018. Effect of Internet-based Cognitive Behavioral Humanistic and Interpersonal Training vs. Internet-based General Health Education on Adolescent Depression in Primary Care: A Randomized Clinical Trial. JAMA Network Open 1: e184278.
  26. Gladstone, Tracy R. G., Jennifer A. J. Schwartz, Patrick Pössel, Amanda M. Richer, Katherine R. Buchholz, and L. Sophia Rintell. 2021. Depressive Symptoms Among Adolescents: Testing Vulnerability-Stress and Protective Models in the Context of COVID-19. Child Psychiatry & Human Development.
  27. Gladstone, Tracy, Katherine R. Buchholz, Marian Fitzgibbon, Linda Schiffer, Miae Lee, and Benjamin W. Van Voorhees. 2020. Randomized Clinical Trial of an Internet-Based Adolescent Depression Prevention Intervention in Primary Care: Internalizing Symptom Outcomes. International Journal of Environmental Research and Public Health 17: 7736.
  28. Gottschall, Amanda C., Stephen G. West, and Craig K. Enders. 2012. A Comparison of Item-Level and Scale-Level Multiple Imputation for Questionnaire Batteries. Multivariate Behavioral Research 47: 1–25.
  29. Harrer, Mathias, Sophia H. Adam, Harald Baumeister, Pim Cuijpers, Eirini Karyotaki, Randy P. Auerbach, Ronald C. Kessler, Ronny Bruffaerts, Matthias Berking, and David D. Ebert. 2019. Internet interventions for mental health in university students: A systematic review and meta-analysis. International Journal of Methods in Psychiatric Research 28: e1759.
  30. Heritier, Stephane R., Val J. Gebski, and Anthony C. Keech. 2003. Inclusion of patients in clinical trial analysis: The intention-to-treat principle. Medical Journal of Australia 179: 438–40.
  31. Hollis, Sally, and Fiona Campbell. 1999. What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ 319: 670–74.
  32. Huey, Stanley J., and Jacqueline L. Tilley. 2018. Effects of mental health interventions with Asian Americans: A review and meta-analysis. Journal of Consulting and Clinical Psychology 86: 915–30.
  33. Hunt, Justin, and Daniel Eisenberg. 2010. Mental health problems and help-seeking behavior among college students. Journal of Adolescent Health 46: 3–10.
  34. Ibrahim, Ahmed K., Shona J. Kelly, Clive E. Adams, and Cris Glazebrook. 2013. A systematic review of studies of depression prevalence in university students. Journal of Psychiatric Research 47: 391–400.
  35. Ip, Patrick, David Chim, Ko Ling Chan, Tim M. Li, Frederick Ka Wing Ho, Benjamin W. Van Voorhees, Agnes Tiwari, Anita Tsang, Charlie Wai Leung Chan, Matthew Ho, and et al. 2016. Effectiveness of a culturally attuned Internet-based depression prevention program for Chinese adolescents: A randomized controlled trial. Depression and Anxiety 33: 1123–31.
  36. Israel, Barbara A., Amy J. Schulz, Edith A. Parker, and Adam B. Becker. 1998. Review of community-based research: Assessing partnership approaches to improve public health. Annual Review of Public Health 19: 173–202.
  37. Jakobsen, Janus C., Christian Gluud, Jørn Wetterslev, and Per Winkel. 2017. When and how should multiple imputation be used for handling missing data in randomised clinical trials—A practical guide with flowcharts. BMC Medical Research Methodology 17: 162.
  38. Kessler, Ronald C., Katherine A. McGonagle, Marvin Swartz, Dan G. Blazer, and Christopher B. Nelson. 1993. Sex and depression in the National Comorbidity Survey. I: Lifetime prevalence, chronicity and recurrence. Journal of Affective Disorders 29: 85–96.
  39. Kroenke, Kurt, and Robert L. Spitzer. 2002. The PHQ-9: A new depression diagnostic and severity measure. Psychiatric Annals 32: 509–15.
  40. Kroenke, Kurt, Tara W. Strine, Robert L. Spitzer, Janet B. Williams, Joyce T. Berry, and Ali H. Mokdad. 2009. The PHQ-8 as a measure of current depression in the general population. Journal of Affective Disorders 114: 163–73.
  41. Lenhard, Fabian, Kajsa Mitsell, Maral Jolstedt, Sarah Vigerland, Tove Wahlund, Martina Nord, Johan Bjureberg, Hanna Sahlin, Per Andrén, Kristina Aspvall, and et al. 2019. The Internet Intervention Patient Adherence Scale for Guided Internet-Delivered Behavioral Interventions: Development and Psychometric Evaluation. Journal of Medical Internet Research 21: e13602.
  42. Magson, Natasha R., Justin Y. A. Freeman, Ronald M. Rapee, Cele E. Richardson, Ella L. Oar, and Jasmine Fardouly. 2021. Risk and Protective Factors for Prospective Changes in Adolescent Mental Health during the COVID-19 Pandemic. Journal of Youth and Adolescence 50: 44–57.
  43. Martin, Lisa A., Harold W. Neighbors, and Derek M. Griffith. 2013. The experience of symptoms of depression in men vs. women: Analysis of the National Comorbidity Survey Replication. JAMA Psychiatry 70: 1100–6.
  44. Miles, Matthew B., and A. Michael Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook. Thousand Oaks: SAGE Publications, Inc.
  45. Muñoz, Ricardo F., Pim Cuijpers, Filip Smit, Alinne Z. Barrera, and Yan Leykin. 2010. Prevention of major depression. Annual Review of Clinical Psychology 6: 181–212.
  46. Onwuegbuzie, Anthony J., Wendy B. Dickinson, Nancy L. Leech, and Annmarie G. Zoran. 2009. A qualitative framework for collecting and analyzing data in focus group research. International Journal of Qualitative Methods 8: 1–21.
  47. Reifman, Alan, and Kristina Garrett. 2010. Winsorize. In Encyclopedia of Research Design. Thousand Oaks: SAGE Publications, Inc., pp. 1636–38.
  48. Seedat, Soraya, Kate M. Scott, Matthias C. Angermeyer, Patricia Berglund, Evelyn J. Bromet, Traolach S. Brugha, Koen Demyttenaere, Giovanni de Girolamo, Josep M. Haro, Robert Jin, and et al. 2009. Cross-national associations between gender and mental disorders in the World Health Organization World Mental Health Surveys. Archives of General Psychiatry 66: 785–95.
  49. Shin, Cheolmin, Seung-Hoon Lee, Kyu-Man Han, Ho-Kyoung Yoon, and Changsu Han. 2019. Comparison of the Usefulness of the PHQ-8 and PHQ-9 for Screening for Major Depressive Disorder: Analysis of Psychiatric Outpatient Data. Psychiatry Investigation 16: 300–5.
  50. Spitzer, Robert L., Kurt Kroenke, Janet B. Williams, and Bernd Löwe. 2006. A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine 166: 1092–97.
  51. Treynor, Wendy, Richard Gonzalez, and Susan Nolen-Hoeksema. 2003. Rumination Reconsidered: A Psychometric Analysis. Cognitive Therapy and Research 27: 247–59.
  52. U.S. Department of Labor, Bureau of Labor Statistics. 2018. College Enrollment and Work Activity of Recent High School and College Graduates. Available online: (accessed on 5 August 2021).
  53. Van Voorhees, Benjamin W., Joshua Fogel, Mark A. Reinecke, Tracy Gladstone, Scott Stuart, Jackie Gollan, Nathan Bradford, Rocco Domanico, Blake Fagan, Ruth Ross, and et al. 2009. Randomized Clinical Trial of an Internet-Based Depression Prevention Program for Adolescents (Project CATCH-IT) in Primary Care: 12-Week Outcomes. Journal of Developmental and Behavioral Pediatrics 30: 23–37.
  54. Van Voorhees, Benjamin W., Karen Vanderplough-Booth, Joshua Fogel, Tracy Gladstone, Carl Bell, Scott Stuart, Jackie Gollan, Nathan Bradford, Rocco Domanico, Blake Fagan, and et al. 2008. Integrative internet-based depression prevention for adolescents: A randomized clinical trial in primary care for vulnerability and protective factors. Journal of the Canadian Academy of Child and Adolescent Psychiatry 17: 184–96.
  55. Van Voorhees, Benjamin W., Natalie Watson, John F. P. Bridges, Joshua Fogel, Jill Galas, Clarke Kramer, Marc Connery, Ann McGill, Monika Marko, Alonso Cardenas, and et al. 2010. Development and pilot study of a marketing strategy for primary care/internet-based depression prevention intervention for adolescents (the CATCH-IT intervention). Primary Care Companion to the Journal of Clinical Psychiatry 12.
  56. Van Voorhees, Benjamin W., Nicholas Mahoney, Rina Mazo, Alinne Z. Barrera, Christopher P. Siemer, Tracy Gladstone, and Ricardo F. Muñoz. 2011. Internet-based depression prevention over the life course: A call for behavioral vaccines. Psychiatric Clinics 34: 167–83.
  57. van Zoonen, Kim, Claudia Buntrock, David Daniel Ebert, Filip Smit, Charles F. Reynolds 3rd, Aartjan T. F. Beekman, and Pim Cuijpers. 2014. Preventing the onset of major depressive disorder: A meta-analytic review of psychological interventions. International Journal of Epidemiology 43: 318–29.
  58. Verheijden, Marieke W., Marielle P. Jans, Vincent H. Hildebrandt, and Marijke Hopman-Rock. 2007. Rates and determinants of repeated participation in a web-based behavior change program for healthy body weight and healthy lifestyle. Journal of Medical Internet Research 9: e1.
  59. Vogel, Erin A., Johannes Thrul, Gary L. Humfleet, Kevin L. Delucchi, and Danielle E. Ramo. 2019. Smoking cessation intervention trial outcomes for sexual and gender minority young adults. Health Psychology 38: 12–20.
  60. Wallerstein, Nina, and Bonnie Duran. 2010. Community-based participatory research contributions to intervention research: The intersection of science and practice to improve health equity. American Journal of Public Health 100: S40–S46.
  61. Weiner, Bryan J., Cara C. Lewis, Cameo Stanick, Byron J. Powell, Caitlin N. Dorsey, Alecia S. Clary, Marcella H. Boynton, and Heather Halko. 2017. Psychometric assessment of three newly developed implementation outcome measures. Implementation Science 12: 108.
  62. Weissman, Arlene N., and Aaron T. Beck. 1978. Development and Validation of the Dysfunctional Attitude Scale: A Preliminary Investigation. Washington, DC: ERIC Clearinghouse.
  63. Xiao, Henry, Dever M. Carney, Soo Jeong Youn, Rebecca A. Janis, Louis G. Castonguay, Jeffrey A. Hayes, and Benjamin D. Locke. 2017. Are we in crisis? National mental health and treatment trends in college counseling centers. Psychological Services 14: 407–15.
Table 1. Summary of participant demographics.
Participants (N = 34)
Class year, %
   4th: 15
Age, mean (SD): 19.82 (1.19)
Gender identity, %
   Gender queer: 3
Race, %
   Asian or Asian American: 38
   Black or African American or Caribbean: 12
   Latinx or Hispanic: 3
   Native American or Alaskan Native: 3
   Other/Prefer not to respond: 12
Highest level of parent/guardian education, %
   Graduate degree: 65
   Bachelor’s degree: 17
   Some college: 15
   High school or equivalent degree: 3
Table 2. Usage data from Willow.
                      Full Sample (N = 34)    Participants Who Logged On (n = 29)
                      Mean (SD)               Mean (SD)
Modules Used          6.32 (5.41)             7.41 (5.11)
Characters Typed      1857.88 (2176.68)       2256.00 (2204.40)
Overall Time *        88.26 (109.15)          94.53 (102.28)
Time Per Module *     N/A                     13.24 (13.32)
* A Winsorized estimate was used for one outlier for Overall Time. This estimate was used to calculate Time Per Module for this participant.
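The Winsorized adjustment described in the footnote above can be sketched as follows. This is a minimal illustration of one-sided Winsorization, not the authors' actual procedure: the 95th-percentile cap and the sample times are assumptions, since the paper does not report the cutoff used.

```python
import numpy as np

def winsorize_upper(values, pct=0.95):
    """Cap values above the given percentile at that percentile
    (one-sided Winsorization; the cutoff here is an assumption)."""
    values = np.asarray(values, dtype=float)
    cap = np.percentile(values, pct * 100)  # upper cap from the data itself
    return np.minimum(values, cap)          # leave values below the cap unchanged

# Hypothetical overall-time values in minutes, with one extreme outlier
times = [52.0, 61.5, 70.0, 88.0, 95.0, 480.0]
adjusted = winsorize_upper(times)
```

Winsorizing keeps the outlying participant in the analysis (unlike trimming) while limiting the outlier's influence on the mean and SD.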
Table 3. Clinical measures at baseline and eight weeks.
Measure                        BL Mean (SD)    T2 Mean (SD)    Mean Diff.   95% CI of the Difference   df       Fraction of Missing Info.

Observed Estimates (Without Imputed Data)
PHQ-8 (depressive symptoms)    8.23 (4.78)     6.23 (5.43)     2.00         (0.32, 3.68)               30
GAD-7 (anxiety symptoms)       7.00 (4.61)     5.34 (4.45)     1.66         (0.27, 3.04)               31
RSS (rumination)               49.82 (14.42)   44.79 (12.59)   5.04         (0.49, 9.59)               27
MSPSS (social support)         69.45 (9.40)    67.66 (7.10)    1.79         (−0.85, 4.44)              28
DAS (cognitive styles)         32.83 (6.73)    33.63 (6.18)    −0.80        (−2.52, 0.92)              29

Pooled Estimates (Imputed Data)
PHQ-8 (depressive symptoms)    8.36 (4.72)     6.34 (5.21)     2.02         (0.46, 3.59)               3085     0.037
GAD-7 (anxiety symptoms)       7.12 (4.60)     5.39 (4.34)     1.73         (0.41, 3.04)               9740     0.020
RSS (rumination)               49.02 (14.30)   44.70 (13.57)   4.32         (0.45, 8.18)               19,010   0.015
MSPSS (social support)         70.21 (9.43)    67.75 (7.34)    2.46         (−0.14, 5.06)              13,219   0.018
DAS (cognitive styles)         32.29 (6.86)    33.18 (6.14)    −0.89        (−2.44, 0.66)              2495     0.041
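The pooled estimates, large df values, and fraction-of-missing-information column in Table 3 arise from combining results across multiply imputed datasets. A minimal sketch using the standard Rubin's rules is shown below; these are the textbook formulas, not the authors' code, and the example inputs are hypothetical.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool per-imputation point estimates and squared standard errors
    via Rubin's rules (sketch; assumes nonzero between-imputation variance)."""
    q = np.asarray(estimates, dtype=float)  # estimate from each imputed dataset
    u = np.asarray(variances, dtype=float)  # squared SE from each imputed dataset
    m = len(q)
    qbar = q.mean()                         # pooled point estimate
    w = u.mean()                            # within-imputation variance
    b = q.var(ddof=1)                       # between-imputation variance
    t = w + (1 + 1 / m) * b                 # total variance of the pooled estimate
    riv = (1 + 1 / m) * b / w               # relative increase in variance
    df = (m - 1) * (1 + 1 / riv) ** 2       # Rubin's df; very large when riv is small
    fmi = (riv + 2 / (df + 3)) / (1 + riv)  # fraction of missing information
    return qbar, np.sqrt(t), df, fmi
```

When the imputations agree closely (small between-imputation variance), riv is small and df becomes very large, which is why pooled df values in the tens of thousands, as in Table 3, are unremarkable.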
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Gladstone, T.R.G.; Rintell, L.S.; Buchholz, K.R.; Myers, T.L. Adaptation of an Evidence-Based Online Depression Prevention Intervention for College Students: Intervention Development and Pilot Study Results. Soc. Sci. 2021, 10, 398.
