3.1. Quantitative Analysis
Quantitative analysis involved an Analysis of Variance (ANOVA) on the collected data set for items Q1 ÷ Q20 (Table 3). The statistical analysis was conducted to examine the differences between groups on a particular measure. The groups in the data set were the different questions (Q1, Q2, Q3, etc.), and the measures being analyzed were the responses given to each question. These responses were numerical, with each number representing an option on a 1–7 Likert scale (Appendix A, Figure A1). The items for public engagement must share common variance, correlate with each other, and, at the same time, each item must correlate with the score that reflects this attribute.
After conducting the two-factor ANOVA without replication, the results include the source of variation, the sum of squares (SS), the degrees of freedom (df), the mean squares (MS), the F-ratio, the p-value, and the F critical value. These indicate that there is a significant difference between the means of the groups on the measure being analyzed (p-value less than 0.05). The source of variation was broken down into three main parts: Rows, Columns, and Error.
The Rows source of variation demonstrates that there is a significant difference between the means of the groups formed by rows. Given its degrees of freedom (df = 43, corresponding to the 44 respondents), the Rows source of variation in the ANOVA results refers to the variation in responses between the different respondents. The calculated values of SS = 2102.727, df = 43, MS = 48.90063, F = 23.51994, p-value = 3.6·10^−114, and F crit = 1.394538 are all indicators of the statistical significance of this variation. The results suggest that there is a significant difference in the responses across the data set, with a large F-ratio and a very small p-value. Thus, all values are significant, indicating that there is a difference in means among the groups. The relevance of these values is that they can be used to identify which questions are most important to the participants, which questions are not well understood, and which questions measure different aspects of public engagement.

The Columns source of variation shows that there is a significant difference between the means of the groups formed by columns: SS = 113.1636, df = 19, MS = 5.955981, F = 2.864672, p-value = 4.31·10^−5, and F crit = 1.599272. The calculated values are significant, indicating again that there is a difference in means among the groups. The Columns source of variation in this analysis refers to the variation in responses between the different questions. The relevance of the calculated values in terms of the questions can be determined by looking at the p-value and the F-value for each question. A low p-value (typically below 0.05) and a high F-value indicate that there is a significant difference in the responses between the different questions, suggesting that the question measures a different aspect of public engagement. For example, for the question “Does the university offer incentives and rewards to promote public engagement?” (Q6), the p-value is low and the F-value is high, indicating that there is a significant difference in responses between this question and the other questions. Thus, offering incentives and rewards is an important factor in promoting public engagement [12,39]. On the other hand, for the question “Does the university integrate external services into its portfolio of services to promote public engagement?” (Q8), the p-value is relatively high and the F-value relatively low, indicating that there is not a significant difference in responses between this question and the other questions. This shows that integrating external services may not be a major factor in promoting public engagement [15,18,19].

The Error source of variation is the variability that is not explained by the other sources of variation; it represents the random variation, or noise, in the data set. In terms of the questions, it represents the degree to which the responses to each question vary from the overall mean of the sample. A lower error variance corresponds to more consistent, less random responses for a given question, while more variable, less consistent responses have a higher error variance.
Focusing on the need to assess the consistency and reliability of the scale used, Cronbach’s Alpha was employed to assess reliability and internal consistency in the development and validation stages. The questionnaire for public engagement has a Cronbach’s Alpha of 0.957483, a strong indicator of internal consistency, which means that the items on the scale are measuring the same underlying construct and the results are reliable. The results show that there is a significant difference between the means of the groups or conditions on the measure being analyzed, and the variation comes from both Rows and Columns. Moreover, the Cronbach’s Alpha coefficient was used in the analysis of the results as the main indicator of the measurement accuracy of the test. Since F > F crit (23.51994 > 1.394538), the null hypothesis is rejected: the population means are not all equal, meaning that at least one of the means is different. Because p < 0.001, at least two means differ highly significantly from each other.
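The statistics above can be reproduced from the raw respondents × questions score matrix. The following sketch, using only the Python standard library and a small illustrative matrix rather than the study’s data set, shows how a two-factor ANOVA without replication and Cronbach’s Alpha are computed (the function names are ours, not the study’s):

```python
def two_factor_anova_without_replication(data):
    """data: list of rows (respondents), each a list of item scores.
    Returns the F-ratios for rows and columns plus the degrees of freedom."""
    r, c = len(data), len(data[0])
    grand_mean = sum(sum(row) for row in data) / (r * c)
    row_means = [sum(row) / c for row in data]
    col_means = [sum(data[i][j] for i in range(r)) / r for j in range(c)]

    ss_rows = c * sum((m - grand_mean) ** 2 for m in row_means)
    ss_cols = r * sum((m - grand_mean) ** 2 for m in col_means)
    ss_total = sum((data[i][j] - grand_mean) ** 2
                   for i in range(r) for j in range(c))
    ss_error = ss_total - ss_rows - ss_cols

    df_rows, df_cols = r - 1, c - 1
    df_error = df_rows * df_cols
    ms_rows, ms_cols = ss_rows / df_rows, ss_cols / df_cols
    ms_error = ss_error / df_error
    return {"F_rows": ms_rows / ms_error, "F_cols": ms_cols / ms_error,
            "df": (df_rows, df_cols, df_error)}

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    r, k = len(data), len(data[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([data[i][j] for i in range(r)]) for j in range(k)]
    total_var = var([sum(row) for row in data])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy matrix: 4 respondents x 2 items (not the study's data)
scores = [[1, 2], [2, 1], [3, 4], [4, 5]]
print(two_factor_anova_without_replication(scores))
```

In the study, rows correspond to the 44 respondents (df = 43) and columns to the 20 questions (df = 19); the resulting F values are then compared against the tabulated F critical values, as reported above.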
To further analyze the significance of each question, Table 4 was put together, containing the number of respondents (Count), the sum of scores (Sum), the average of scores, and the variance and standard deviation (Std. Dev.) for each item (Q1 ÷ Q20). The results show a range of averages and variances among the questions. The average ranges from 3.477 to 4.795, and the variance ranges from 3.469 to 5.465, indicating that there is a significant difference between the means of the questions on the measure being analyzed. It is also worth noting that the variance is an indicator of the spread of the data: the larger the variance, the more spread out the data, possibly involving the presence of outliers.

A low standard deviation means that most of the scores are near the mean, while a high value means that the scores are more dispersed. To identify which questions are considered more significant by the participants, the average scores were evaluated and contrasted among the questions. Questions with higher average scores are considered more significant by the participants. Furthermore, a lower standard deviation implies that the responses are more consistent; hence, it is more likely that the question is considered important by the participants. Based on the results from Table 4, in hierarchical order, starting with the most important, questions Q1, Q12, Q13, Q9, and Q10 are the most significant for the participants in terms of importance and consistency.
To determine which questions are not well understood, apart from the standard deviation, the distribution of responses was calculated and analyzed. The distribution of scores is a measure of how the scores are spread across the range for each question. It can be visualized for all 20 questions using the histogram and the frequency distribution presented in Figure 1.

For example, for question Q1, the frequency of scores is given by {1:4, 2:1, 3:8, 4:5, 5:5, 6:11, 7:10}: four respondents gave a score of 1, one respondent gave a score of 2, eight respondents gave a score of 3, and so on. Questions with a wide range of responses and a high standard deviation are generally not well understood. For all 20 questions, the calculated range was 6. Although the standard deviation for all questions is low, the study requires further clarification for question Q15. The average values for the questions range from 3.477 to 4.795, with Q1 having the highest average value of 4.795. The participants generally agreed that the universities’ commitment to public engagement is clearly defined. However, it is worth noting that the average for Q1 is only moderately above the midpoint of the scale (4), which means that the results are not overwhelmingly in favor of the statement. Some participants disagreed or were uncertain about the statement; thus, there is a need for further investigation [18].
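As a check, the descriptive statistics for Q1 can be recomputed directly from the frequency distribution quoted above; the short script below uses only the counts given in the text:

```python
from collections import Counter

# Q1 score frequencies reported in the text: {score: number of respondents}
q1_freq = Counter({1: 4, 2: 1, 3: 8, 4: 5, 5: 5, 6: 11, 7: 10})

n = sum(q1_freq.values())                         # number of respondents
mean = sum(s * f for s, f in q1_freq.items()) / n
# sample variance computed from the grouped frequencies
var = sum(f * (s - mean) ** 2 for s, f in q1_freq.items()) / (n - 1)
score_range = max(q1_freq) - min(q1_freq)

print(n, round(mean, 3), score_range)  # 44 4.795 6
```

The recomputed values are consistent with Table 4: 44 respondents, an average of 4.795 (the highest among the items), and a range of 6.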
Regarding the documentation of public commitment (Q2), the lowest results were recorded in Greece (with an average of 3.85) and the best results were recorded in Germany (with an average of 6.33), indicating that German universities have the best practices for documenting public engagement activities. The results suggest that the commitment to public engagement is well documented, but there may be room for improvement in terms of clarity and dissemination of information. As other research shows, confusion on the subject can be due to a lack of consistency in the channels of information and the diversity of tools [11,34]. In order to further investigate this issue, Q3 was analyzed.
According to the respondents, most universities make efforts so that their documented commitment to public engagement is known and understood; there are no significant differences between the partner countries. This conclusion aligns with several other findings at a European level and can be explained mainly by cultural and societal similarities, but also by strategic collaboration paths between institutions [6,7,9,22,24]. Based on the results, it can be inferred that the universities may need to improve their efforts to ensure that their documented commitment to public engagement is also publicly known and understood. Such strategies are implemented and actively promoted by universities and institutions worldwide, but with notable differences in the effectiveness of the tools [26,33]. Depending on the cultural approach, universities need to establish the most effective methods for undertaking public engagement documentation.
When asked whether people from different levels of the university are responsible for the implementation of the public involvement agenda (Q4), the respondents appreciated the efforts of the university staff, suggesting that there is a fair level of responsibility among people at different levels of the university for implementing the public engagement agenda. European universities tend to assume a high level of responsibility in undertaking academic third-mission actions, endeavors sustained by a variety of common efforts and initiatives [6,7,12,22]. However, there is still room for improvement, as the mean score is not the highest, indicating that there may be some lack of clarity or understanding of the responsibilities related to public engagement across different levels of the university. Several studies found that this lack of clarity can be due to improper communication throughout universities’ management and organizational hierarchies [17,19].
Surveyed universities are concerned with investments to encourage public involvement (average = 4.159 for Q5), but they are less involved in offering incentives and rewards to promote audience involvement (average = 3.773 for Q6). Some universities have been known to strongly encourage public engagement through student involvement, which has proven beneficial in the long-term development of third-mission strategies [37]. The EU has promoted the continuous development of public engagement through the academic third mission of universities [6], so as to counteract the gap between academia and entrepreneurs. The average score for Q6 is 3.773, which is relatively low compared to the other questions. For this question, the respondents generally disagree with, or are neutral about, the statement that their universities offer incentives and rewards to promote public engagement. The standard deviation of 1.975 also suggests that there is a significant amount of variation in the responses, indicating that some respondents may strongly disagree while others may be more neutral or only slightly disagree. There is clearly room for improvement in this area for the universities in terms of offering incentives and rewards to promote public engagement. This is mainly performed through structural funds [8,9], but also through local initiatives [13,15].
The results for questions Q7, Q8, and Q9 were very close to the central tendency (averages: Q7 = 3.818, Q8 = 3.477, Q9 = 4.295). Training activities to support public involvement are not sufficient, and services to promote public involvement are less satisfactory in the surveyed universities. A fair interpretation of the obtained results is that the respondents do not believe that the university is effectively integrating external services into its portfolio to promote public engagement. This was also the case for several other institutions outside of the study [15,20,21,30]. Thus, this is a clear area for improvement for the universities in terms of their public engagement efforts, in correlation with other literature findings [32,39].
For questions Q10 and Q11 there are no significant differences between the results collected from different countries. In the opinion of the respondents, these results reflect a satisfactory preoccupation of universities with using updated methods and approaches to develop public engagement skills among students and with integrating public engagement practices into study programs [23]. At the same time, respondents do not consider the integration of public engagement practices into degree programs to be fully effective; they stated that there are universities where the public is involved only to some extent in the study programs. The justification for this statement is based, in the opinion of the respondents, on the fact that the universities consider the opinion of the public based on the feedback received from them, especially feedback formulated during internships and volunteering. It could be beneficial to follow up with strategies that have proven successful within one common framework [18,22,24].
By identifying the needs of external stakeholders (Q13 = 4.636), the universities are involved in the promotion of interdisciplinary educational paths (Q12 = 4.705), as the surveyed professors claim. Most of the participants think that their university is effectively promoting interdisciplinary educational paths, and this is something that is positively perceived by the respondents, a result that aligns with most literature research [20,21,32].
Regarding the evaluation of the activities and results of public commitment (Q15 = 4.023) and the indicators used (Q14 = 3.636), the best results were recorded in the universities of Romania and Lithuania, and lower results were obtained in Greece. These results could be explained by the fact that the respondents from Romania are teaching staff directly involved in the evaluation activity, compared to Greece, where doctoral students were involved in the survey. This context also explains the average obtained for question Q16 (3.795) regarding the communication of the evaluation results on the impact of the institutions’ activities. This issue is of particular importance in the process of standardization, and universities should address their challenges based on proven strategies [16]. The results suggest that the respondents feel that the universities are not effectively using indicators to measure their activities and public engagement results, and it may be beneficial for universities to review and improve their methods for measuring and evaluating the effectiveness of public engagement activities. Insight into these processes is given by the literature and by professionals [11,14,20]. The low average score and large variation in responses suggest that this may be an area where the universities could improve their public engagement efforts [2]. This set of data shows that there is a need for the universities to improve in integrating the results of their public engagement activities into future planning and organizational development [2,4]. The standard deviation of 2.103 for Q17 means that the responses to this question are relatively spread out, which is also supported by the distribution of scores. In the ANOVA table, the values reveal a significant difference between the means of the different rows, indicating that the responses to this question vary between different groups. Regarding the influence of universities at the local and regional level (Q17), the lowest average was obtained for universities in Greece; for the other countries, the results were approximately equal.
The social impact of public involvement activities and its definition at the university level are not fully satisfactory for respondents from all countries (Q18, Q19), with the averages obtained being close to the recorded central tendency. A similar result was recorded for question Q20 regarding the integration of interested parties in the management of the institution. Based on the obtained results, it can be concluded that the universities are generally successful in setting and communicating the goals and objectives of their public engagement activities and have a clear sense of direction in terms of how they want to create impact. This is a positive indication and hints at the fact that the universities effectively communicate their purpose and objectives with regard to public engagement with their communities and stakeholders [13,15]. Relationships with various stakeholders are crucial for universities in order to train students for real-life scenarios and offer a smooth transition to the job market. Integration initiatives include joint labs, entrepreneurship accelerators, spin-off communities, and many others, for the mutual benefit of universities and companies alike [13,20,21,36,39].
To assess the dependence between pairs of quantitative variables in the sample of data collected by applying the questionnaire, Pearson’s correlation coefficient (r) was determined. The obtained coefficients take values between −1 (perfectly negative correlation) and 1 (perfectly positive correlation). The sign of the coefficient represents the direction of the correlation: a positive value corresponds to variations in the same direction, and a negative one to variations in opposite directions. The absolute values of the correlation coefficients, presented in Table 5, express the intensity of the association between the items. Thus, for α < 0.05, values of the correlation coefficient from −0.25 to 0.25 represent a weak or zero correlation; from 0.25 to 0.50 (or from −0.25 to −0.50), an acceptable degree of association; from 0.50 to 0.75 (or from −0.50 to −0.75), a moderate to good correlation; and from 0.75 to 1 (or from −0.75 to −1), a very good correlation.
Among all the survey items in the first part of the questionnaire, only positive values were recorded, corresponding to variations in the same direction. There are some moderate-to-strong positive relationships between the different questions. For example, Q2 and Q3 have a correlation coefficient of 0.84, indicating a strong positive relationship between the two questions. Q4 and Q5 have a correlation coefficient of 0.66, and Q5 and Q6 one of 0.74, both indicating moderately positive relationships. The highest associations were recorded between items Q18 and Q19 (0.87), Q2 and Q3 (0.84), and Q15 and Q19 (0.80). However, there are also weaker relationships between certain questions: for example, Q10 and Q14 have a correlation coefficient of 0.35 and Q8 and Q17 one of 0.41, both representing only an acceptable degree of association on the scale above. The weakest correlations between items were recorded between Q12 and Q1 (0.12), Q12 and Q5 (0.13), and Q12 and Q3 (0.17). These results suggest that there are moderate to strong positive relationships between some of the questions, indicating that the answers to these questions may be related to one another, while other pairs show weaker or negligible relationships. It is important to keep in mind that correlation does not imply causation, and further analysis would be needed to understand the underlying relationships between the variables.
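The coefficient and the intensity bands described above can be expressed compactly. The sketch below (standard library only; function names are ours) computes Pearson’s r for two response vectors and maps |r| onto the four bands used for Table 5:

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def association_band(r):
    """Intensity of the association for |r|, per the bands used in the text."""
    a = abs(r)
    if a < 0.25:
        return "weak or zero"
    if a < 0.50:
        return "acceptable"
    if a < 0.75:
        return "moderate to good"
    return "very good"

print(association_band(0.84))  # very good (e.g., the Q2-Q3 pair)
```

Applied to the reported coefficients, 0.84 (Q2 and Q3) falls in the "very good" band, while 0.12 (Q12 and Q1) falls in the "weak or zero" band.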
3.2. Relative Importance of Community Engagement
The questionnaire was put together so that each answer reflects a different facet of community engagement in European universities. Questions do not overlap in information but rather offer a complementary vision of how universities integrate community engagement practice into their academic third missions. Thus, each question is viewed both as a separate entity, with its own value within the overall objective of the questionnaire, and as a puzzle piece in the development of transformative actions.
In this context, results obtained by ANOVA and Pearson’s correlation showed that further analysis is necessary to substantiate the construction of a cohesive framework that could impact the decision-making process regarding community engagement in European universities.
Given the complexity of the analyzed issue, the Analytic Hierarchy Process (AHP) was applied to define the importance of each of the 20 questions as an underlying component of community engagement. The authors identified AHP as the most suitable method, attributing its effectiveness to its ability to minimize biases in the results of the decision-making process [40,41]. This approach necessitated a total of 190 pairwise comparisons among all 20 questions. In AHP, a consistency ratio below 10% is considered acceptable for maintaining result accuracy [42]. Goepel’s AHP Online System facilitated the analysis [43].
A decision matrix needs to be put together, evaluating the importance of each question in relation to all others and the degree of that importance. The AHP scale used was: 1—Equal Importance, 3—Moderate Importance, 5—Strong Importance, 7—Very Strong Importance, 9—Extreme Importance (with 2, 4, 6, and 8 as in-between values). To set the values for each pair of questions, the calculated standard deviations (Table 4) were used.
There are two important steps in putting together the matrix: 1. determining which question is more important than the other; 2. determining how much more important one question is than the other, based on the AHP scale. The first step is straightforward, as the question with the lowest standard deviation is the more important of the two being compared.

The second step involves weighing the differences in standard deviation and spreading them across the 9-point scale. A square matrix is used to calculate the standard deviation differences:

x_ij = σ_i − σ_j,  (1)

where x_ij is the difference between the standard deviation σ_i of question Q_i and the standard deviation σ_j of question Q_j. If x_ij has a negative value, then question Q_i is more important than question Q_j. Based on the maximum absolute value among these differences, each question pair is assigned a point on the AHP scale, according to the procedure shown in Table 6.
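A short sketch of these two steps (direction from the sign of x_ij, intensity from its magnitude) might look as follows. Note that the exact bin boundaries belong to Table 6; the equal-width binning below is an assumption made purely for illustration, as are the standard-deviation values used:

```python
def diff_matrix(stdevs):
    """x_ij = std dev of Qi minus std dev of Qj, for all question pairs.
    A negative x_ij means Qi (lower spread) is the more important question."""
    return [[si - sj for sj in stdevs] for si in stdevs]

def ahp_value(x_ij, max_abs_diff):
    """Map |x_ij| onto the 1-9 AHP scale.
    ASSUMPTION: nine equal-width bins over [0, max_abs_diff]; the study's
    actual binning procedure is the one defined in Table 6."""
    return 1 + round(8 * abs(x_ij) / max_abs_diff)

diffs = diff_matrix([1.862, 1.975, 2.103])   # toy standard deviations
max_abs = max(abs(d) for row in diffs for d in row)
print(ahp_value(diffs[0][2], max_abs))       # largest contrast maps to 9
```

The sign of each x_ij then determines which side of the reciprocal pair (a_ij versus a_ji = 1/a_ij) receives the scale value in the decision matrix.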
Using the criteria given in Table 6, 190 pairwise comparisons were made and an AHP decision matrix was put together (Figure 2a). The relative importance of each question was calculated from the decision matrix using the principal eigenvector solution, with five iterations and a delta value of 4.7 × 10^−8. Each question’s weight was assigned based on its priority in the AHP ranking, as shown in Figure 2b.
The consolidated results of the AHP reveal a consistency ratio of 3.5% (Figure 3), significantly lower than the predetermined threshold. Consequently, the model’s inconsistencies are within an acceptable range, allowing the derived importance coefficients to be reliably utilized in subsequent decisions.
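The principal-eigenvector computation and the consistency check can be sketched as follows. This is a minimal power-iteration version under stated assumptions, not a reimplementation of Goepel’s tool; Saaty’s random-index values are tabulated here only up to n = 10, so the study’s 20 × 20 matrix would need an extended table.

```python
def ahp_priorities(M, iterations=5):
    """Priority weights of a pairwise comparison matrix M via power
    iteration (principal eigenvector), plus Saaty's consistency ratio."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iterations):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    # Estimate the principal eigenvalue lambda_max from (M w)_i / w_i
    mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(mw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
          8: 1.41, 9: 1.45, 10: 1.49}        # Saaty's random index
    cr = ci / ri[n] if n in ri else None     # consistency ratio
    return w, cr

# A perfectly consistent 3x3 toy matrix (weights proportional to 4:2:1)
M = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
weights, cr = ahp_priorities(M)
```

For a perfectly consistent matrix, lambda_max equals n and the consistency ratio is 0; the study’s 3.5% indicates only mild inconsistency, well under the 10% threshold.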
AHP shows that the most important questions relate to the promotion of interdisciplinary educational paths (Q12), the clarity of the public engagement definition (Q1), the integration of external services into universities’ portfolios of services to promote public engagement (Q8), and the offer of incentives and rewards to promote public engagement (Q6). Q12, although the most important for the surveyed universities, has the lowest correlation coefficients of all questions, implying that this is a mandatory area of improvement and further investigation for all universities.

It is interesting to note that ANOVA identified Q1 as having the highest average value in the group, and according to AHP, it is the second most important component for universities. In this regard, there is a balance between value and importance, and further steps might involve improving functionality rather than value.

The ANOVA on Q8 showed that European universities do not effectively integrate external services into their portfolio to promote public engagement, a result that corroborates its importance. AHP shows that universities should implement a more efficient framework targeting practical solutions for external service integration. Q6 correlates positively with all other questions, showing a grounded connection in the research and making its importance valuable for further analysis and improvement. Based on the AHP and ANOVA results, the authors put together a set of recommendations and limitations for the current study.
3.3. Recommendations and Study Limitations
The Academic Third Mission refers to the engagement of universities with their local communities through activities such as research, education, and services [5,23]. Public engagement, or the involvement of citizens in these activities, is crucial for the success of the Third Mission [35]. However, the results of the current study indicate that there are a number of challenges to effective public engagement in tertiary education. These challenges include a lack of awareness and understanding of the Third Mission among citizens, difficulty in involving citizens in decision-making processes, and conflicts of interest that arise in the participatory process. In light of these challenges, it is essential to develop strategies for improving public engagement in tertiary education through the Academic Third Mission [18,19,22]. Some possible strategies include increasing awareness and understanding of the Third Mission among citizens, involving citizens in decision-making processes and providing them with the tools and resources to participate effectively, and addressing conflicts of interest in the participatory process. Based on the obtained results, the authors propose nine different strategies (S1 ÷ S9) for further development.
Improving public engagement in tertiary education requires a multifaceted approach, emphasizing transparency, early involvement, and a culture of participation. A key strategy is enhancing transparency and communication between universities and the community (S1). This can be effectively achieved by regularly publishing the results of participatory activities on the university’s website and establishing a dedicated online channel to listen to and implement citizens’ recommendations. Involvement of citizens should begin at the initial stages (S2), including the collection and processing of context data, identification of priorities, and planning and programming of interventions. Such early engagement ensures that their needs and perspectives are integral to decision-making processes. Additionally, fostering a culture of participation within the university is crucial (S3). This involves providing training and support to staff and students in participatory methods and encouraging active participation in decision-making processes. The formation of interest groups and coalitions during debates ensures diverse perspectives in decision-making (S4). Equally important is the regular evaluation and monitoring of the participation process (S5) to identify areas for improvement, ensuring inclusivity and fairness. Diverse participatory methods, such as town meetings, deliberative surveys, and design workshops, are essential to represent varied viewpoints (S6). Collaboration with other organizations and experts is another key aspect (S7), providing access to a broad range of perspectives and expertise in decision-making. It is also important to consider the available resources and the level of conflict (S8) related to the intervention area and the local community before implementing any strategy. Finally, supporting citizens to understand their needs and make informed decisions is paramount (S9). 
This includes informing them of the outcomes of the participatory process, the work conducted by researchers and experts, and collecting feedback for potential interventions and improvements. A specific online channel for listening to and implementing citizens’ recommendations further supports this strategy, making for a more robust and inclusive approach to public engagement in tertiary education.
In order to facilitate the implementation of the above strategies, the study showed that there are still several areas in which universities can improve their engagement with citizens through the Academic Third Mission [1,4]. In order to effectively involve citizens in the decision-making process and ensure that their needs are being met, universities should consider implementing a variety of good practices. First, universities should prioritize transparency and communication throughout the participatory process. This includes clearly communicating the goals and objectives of the participatory process to citizens, as well as providing regular updates on the progress of the process and the outcomes achieved [2]. Universities should also make an effort to ensure that the results of the participatory process are widely shared and easily accessible to citizens, for example through a dedicated section on the university website. Second, universities should actively involve citizens in the planning and implementation of Third Mission activities. This can be achieved through a variety of methods, such as working groups, town meetings, and participatory budgeting [20]. By involving citizens in the planning process, universities can ensure that their needs and priorities are taken into account and that the resulting interventions are more effective. Third, universities should consider providing support to citizens to understand their needs and make informed decisions. This can be achieved through a variety of methods, such as information desks, listening points, and providing information about the final result produced by the participatory process and the work conducted by researchers and experts [21,23,30]. Fourth, in order to prevent conflicts of interest, universities should have a clear policy in place to identify and address such situations. This can include the establishment of a conflict-of-interest committee, the implementation of a code of conduct, and the provision of training to staff and stakeholders on how to handle conflicts of interest [33,35]. Finally, universities should conduct regular evaluations of the participatory process to identify areas for improvement and ensure that the needs and priorities of citizens are being met. This can include conducting surveys or focus groups to gather feedback from citizens, as well as conducting internal evaluations of the process [37].
The study revealed the main areas of improvement for the involved European universities, and some important recommendations were proposed for further development. Based on these, an initial framework is proposed in Figure 4.
To substantiate the framework and apply the identified sustainable strategies, the project consortium developed an online platform that enables stakeholders to get involved, participate, and decide on sustainable academic contexts. The platform is available at www.tenacityplatform.com (accessed on 15 November 2023) and supports the sustainable implementation of academic deliberative arenas for open science and innovation, as well as the delivery of an e-learning platform for academic deliberative practitioners. In accordance with the study findings, the platform allows six main categories of stakeholders to participate in the creation of sustainable academic practices: citizens, policy makers, professors, researchers, students, and teachers.
An important feature of this interactive tool is the iterative feedback loop, which allows participants in the deliberative process to improve on any subroutine, enhancing the overall sustainability and likelihood of use in future applications. This approach also lowers the impact of the identified limitations, potentially eliminating some of them altogether. Multifunctionality was also promoted, and the organic development of novel avenues was permitted, all leading to sustainable product development in academic settings.
Nevertheless, the study has limitations that should be considered when interpreting the presented findings and conclusions. One potential limitation is the small sample size of the survey. With only 44 participants, it is difficult to generalize the findings to the larger population of citizens and universities. Small samples may have limited representativeness and statistical power, and assumptions such as normality can be more challenging to meet. Nonetheless, even a small quantitative study can establish baseline data on a topic, providing a starting point for future research and comparisons.
Additionally, the survey responses were self-reported and may not accurately reflect the true experiences and perspectives of the participants. The study also relies on the assumption that the participants have a clear understanding of the term “participatory practices” and have had similar experiences in their participation in university activities. There could also be a bias in the survey responses, as the participants may have had a vested interest in presenting their experiences in a certain way. Another limitation is that the study does not consider other factors that may influence the implementation of participatory practices in universities. For example, the survey does not take into account the specific political, economic, and cultural context of each university or the level of resources available to support participatory practices.
Another notable limitation is that the study does not consider how the COVID-19 pandemic may have affected the ability of citizens and universities to engage in participatory practices, such as the shift to online engagement or the reduced availability of resources. The small sample size and self-reported nature of the survey responses, along with the assumptions made about the participants’ understanding and experiences, may limit the generalizability of the findings. The study also does not take into account other factors that may influence the implementation of participatory practices in universities. To overcome these limitations, further quantitative research on larger samples is recommended. Future actions include using the current study as a pilot to inform a larger, more comprehensive research project. Additional qualitative methods, such as focus groups or case studies, will also supplement the survey data to provide a richer, more nuanced understanding of the Third Mission in different European HEIs, further developing the proposed framework.
The advantages of using ANOVA in our design analysis also mitigate some of these limitations. It allowed us to quantify trends and patterns in community engagement even with the small sample size, providing initial insights and identifying potential areas of interest for further qualitative analysis. The quantitative data collection involved standardized instruments; the survey used Likert scales, allowing for consistency in data collection and facilitating comparisons across respondents and institutions.
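For readers wishing to reproduce the analysis on their own data, the two-factor ANOVA without replication used in this study can be sketched as follows. This is an illustrative reimplementation rather than the exact computation performed for the paper; the `scores` matrix below is a hypothetical respondents-by-questions example on the 1–7 Likert scale, and `numpy` and `scipy` are assumed to be available.

```python
import numpy as np
from scipy import stats

def anova_two_factor_no_replication(data):
    """Two-factor ANOVA without replication on a respondents x questions matrix.

    Rows are one factor (respondents), columns the other (questions).
    Returns (SS, df, MS, F, p) for each factor and (SS, df, MS) for error.
    """
    data = np.asarray(data, dtype=float)
    r, c = data.shape
    grand = data.mean()

    # Partition the total sum of squares into Rows, Columns, and Error.
    ss_rows = c * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = r * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    df_rows, df_cols = r - 1, c - 1
    df_error = df_rows * df_cols

    ms_rows = ss_rows / df_rows
    ms_cols = ss_cols / df_cols
    ms_error = ss_error / df_error

    # F-ratios and right-tail p-values from the F distribution.
    f_rows, f_cols = ms_rows / ms_error, ms_cols / ms_error
    return {
        "rows": (ss_rows, df_rows, ms_rows, f_rows,
                 stats.f.sf(f_rows, df_rows, df_error)),
        "cols": (ss_cols, df_cols, ms_cols, f_cols,
                 stats.f.sf(f_cols, df_cols, df_error)),
        "error": (ss_error, df_error, ms_error),
    }

# Hypothetical example: 5 respondents x 4 Likert-scale items (1-7).
scores = [[5, 6, 4, 7],
          [4, 5, 3, 6],
          [6, 7, 5, 7],
          [3, 4, 2, 5],
          [5, 6, 4, 6]]
table = anova_two_factor_no_replication(scores)
```

A factor is considered significant when its p-value falls below 0.05, as in the Rows and Columns sources of variation reported above.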