Article

Instrumentalization of a Model for the Evaluation of the Level of Satisfaction of Graduates under an E-Learning Methodology: A Case Analysis Oriented to Postgraduate Studies in the Environmental Field

by Eduardo García Villena 1,2,*, Silvia Pueyo-Villa 3,4, Irene Delgado Noya 1,5,*, Kilian Tutusaus Pifarré 1,2,*, Roberto Ruíz Salces 1 and Alina Pascual Barrera 5,6
1 Higher Polytechnic School, Universidad Europea del Atlántico (UNEATLANTICO), Isabel Torres 21, 39011 Santander, Spain
2 Department of Environment and Sustainability, Universidad Internacional Iberoamericana (UNINI-PR), Arecibo 00613, Puerto Rico
3 Department of Languages and Education, Universidad Europea del Atlántico (UNEATLANTICO), Isabel Torres 21, 39011 Santander, Spain
4 Department of Language, Education and Communications Sciences, Universidad Internacional Iberoamericana (UNINI-PR), Arecibo 00613, Puerto Rico
5 Department of Project Management, Universidad Internacional Iberoamericana (UNINI-MX), Campeche 24560, Mexico
6 Area of Environmental Engineering, Universidade Internacional do Cuanza, Rua Padre Fidalgo s/n, Municipio do Cuito, Bié, Bairro Sede, Angola
* Authors to whom correspondence should be addressed.
Sustainability 2021, 13(9), 5112; https://doi.org/10.3390/su13095112
Submission received: 31 March 2021 / Revised: 28 April 2021 / Accepted: 28 April 2021 / Published: 3 May 2021
(This article belongs to the Topic Scientific Advances in STEM: From Professor to Students)

Abstract: The purpose of this article was to evaluate the level of satisfaction of a sample of graduates in relation to different online postgraduate programs in the environmental area, as part of the process of continuous improvement in which the educational institution was immersed for the renewal of its accreditation before the corresponding official bodies. Based on the bibliographic review of a series of models and tools, a Likert scale measurement instrument was developed. This instrument, once applied and validated, showed a good level of reliability, with more than three quarters of the participants having a positive evaluation of satisfaction. Likewise, to facilitate the relational study, and after confirming the suitability of performing a factor analysis, four variable grouping factors were determined, which explained a good part of the variability of the instrument’s items. As a result of the analysis, it was found that there were significant values of low satisfaction in graduates from the Eurasian area, mainly in terms of organizational issues and academic expectations. On the other hand, it was observed that the methodological aspects of the “Auditing” and “Biodiversity” programs showed higher levels of dissatisfaction than the rest, with no statistically significant relationships between gender, entry profile or age groups. The methodology followed and the rigor in determining the validity and reliability of the instrument, as well as the subsequent analysis of the results, endorsed by the review of the documented information, suggest that the instrument can be applied to other multidisciplinary programs for decision making with guarantees in the educational field.

1. Introduction

Mejías and Martínez [1] refer to satisfaction as the level of students’ state of mind towards their institution, which results from comparing their perception of the fulfillment of their needs, expectations and requirements.
According to González, Tinoco and Torres [2], the abundant current literature on the relationship between the satisfaction of graduates and the degree of educational quality in obtaining different academic degrees, as well as its quantification, demonstrates the importance of this topic for universities interested in attracting new generations of students to the different modalities of higher education.
Pichardo and García [3] note a growing interest in understanding the needs that university students have regarding their learning contexts, with the aim of improving the higher education process.
In parallel, studies on student satisfaction are required by national and international university evaluation bodies [4] and are indicative of areas of improvement for better positioning in academic performance among higher education institutions [5].
However, educational quality is often confused with the quality of the service offered. Indeed, according to Mejías and Martínez [1], the objective is to find out what needs the student has and not so much how efficient the service provided is, despite the obvious positive correlation between student satisfaction and the quality of the service provided [4,5,6].
Most of the models found in the bibliography address face-to-face teaching and, to a lesser extent, the distance modality. In the former, it is common practice to attribute little relevance to new technologies, which, when they do appear, usually do so as just one more element of the teaching itself, as occurs in the proposal of Mejías and Martínez [1], instead of being granted the separate prominence they deserve.
Virtual education plays a preponderant role in satisfying the right (and the need) that everyone has to access education [7]. In this sense, already at the end of the 20th century, and in the first decade of the 21st century, there were two approaches to virtual education: a partial one, where the training activity, training materials, platforms and the cost/benefit ratio were evaluated [8,9,10], and a global one, referring to evaluation systems centered on models and/or standards of total quality and on the practice of benchmarking. In this context, according to Pereira and Gelvez [11], as in face-to-face education programs, in the systemic models of quality evaluation in virtual education, there is no clear concept of what quality is and that determines the criteria to be assessed, so it is important to establish standards for the quality of e-learning.
One of these standards is “Quality Management. Quality of Virtual Training” [12], which determines the three factors involved in meeting the needs and expectations of students: recognition of training for employability, learning methodology and accessibility.
In this context, the lack of knowledge of the methodology involved in e-learning and the digital divide represent two major challenges, since not all students feel comfortable with the procedures or have equal opportunities in accessing technologies, respectively [13].
Indeed, the pandemic caused by the emergence of the SARS-CoV-2 coronavirus at the end of 2019 and in the first quarter of 2020 has only deepened the problem of inequalities: on the one hand, the lack of access to education supported by ICTs became evident for some families, whether due to a lack of resources or of technical support [14]; on the other hand, confinement and teleworking led to a significant increase in the number of accesses to the programs offered by educational institutions (Figure 1).
Other standards are the EFQM model and the ISO 9000 family of standards, which indicate, as a requirement for a quality management system, the need to establish a process for measuring customer or user satisfaction [4].
The reason for measuring student satisfaction lies in the fact that students are the main factor and guarantee of the existence and maintenance of educational organizations. Students are the recipients of education; they are the ones who can best value it and, although they have a partial vision, their opinion provides a reference that should be taken into account [4,15].
According to Mejías and Martínez [1], “measuring student satisfaction in a consistent, permanent and adequate manner would guide the right decisions to increase their strengths and remedy their weaknesses.”
Romo, Mendoza and Flores [16] emphasize the importance of measuring student satisfaction through tests of contrasted validity and reliability, in order to establish plans for improving educational quality.
The validity of the measurement instrument can be content, criterion and construct validity. In general, it is customary in models to carry out content validity by means of a literature review and consultation with panels of experts [1].
Reliability is commonly established by the statistical parameter “Cronbach’s alpha,” widely used in the university environment [17], as a simple way of measuring how reliable the instrument is, on a scale where values approaching unity are the most desirable.
The importance of having valid and reliable instruments (from a statistical point of view) is therefore evident, given the implications that may arise from their use [18].
Surdez et al. [5] used as an evaluative instrument a questionnaire to estimate university students’ satisfaction with education (SEUE), proposed by [15], and adapted by the authors themselves (Table 1).
In this context, Fainholc [19] suggests that the traditional application of business quality models or others based on management rather than on the teaching–learning process to measure user satisfaction in distance education based on the completion of an opinion questionnaire can lead to misunderstandings if the institution and its technological-educational proposal are not known in depth.
Such a proposal should be based on indicators related, among others, to the fulfillment of the Sustainable Development Goals (SDGs) and the needs and expectations of stakeholders in the university context [20].
In the scope of this research, the delivery of environmental programs is fundamental to develop competencies in graduates that contribute to sustainability within the framework of the SDGs. In this sense, the sustainability model should be one of the guiding principles to be included transversally in the different curricula of an institution [21].
Based on the background described above, the purpose of this research was to develop a model for an educational institution to evaluate the degree of satisfaction of the graduates of several postgraduate programs in the field of the environment, through a valid and reliable instrument, applied to a test directed to a sample of 150 participants. It is hoped that the results of this report will contribute to the continuous improvement of the quality of training, to the promotion of new university programs and to an increase in the competency profile of the participants, which will undoubtedly result in better performance at the professional level to develop sustainable policies in the future.

2. Materials and Methods

The methodology followed in this work is based on a cross-sectional survey suited to the purposes of descriptive and relational research, with a quantitative, non-experimental approach, since no hypotheses were posed and no variables were manipulated; rather, “data were measured, evaluated or collected on various aspects, dimensions or components of the phenomenon to be investigated [in its natural working environment and at a single point in time]” [4,22].
As can be seen from the purpose that guided this research, the methodological scope comprised, broadly speaking, four distinct stages. First, based on the literature review, a Likert scale instrument was developed. Second, the validity and reliability of this instrument were determined using the data collected, under a quantitative approach [22], on the satisfaction of a sample of 150 graduates of various online postgraduate programs in the environmental field in relation to different aspects of their training. Third, a factor analysis was carried out to group variables or items of similar characteristics into a reduced number of factors and thus obtain a simplified model. Finally, a study was carried out to determine possible relationships based on the results obtained.

2.1. Population and Sample

The target population consisted of a total of 215 graduates from different online graduate programs in the environmental area. To determine the necessary sample size, and given that the intention was to estimate distributions or percentages when working with qualitative variables, the following sample size calculation formula for a finite population was used [23]:
n = N · Z²(1−α/2) · (p·q) / [(N − 1) · ε² + Z²(1−α/2) · (p·q)]
where:
  • n = required sample size;
  • N = population size;
  • Z(1−α/2) = 1.96 (Z-statistic at a 95% confidence level);
  • p = q = 0.5 (typical values for worst-case conditions);
  • ε = 0.05 (maximum admissible sampling error).
Substituting the values into the formula yielded a required sample size of at least n ≈ 138.
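The substitution above is easy to reproduce; a minimal sketch (the function name and defaults are ours, not the article’s):

```python
def sample_size(N, p=0.5, q=0.5, z=1.96, eps=0.05):
    """Minimum sample size for a finite population of size N
    when estimating a proportion (worst case p = q = 0.5)."""
    num = N * z**2 * p * q
    den = (N - 1) * eps**2 + z**2 * p * q
    return num / den

# For the target population of 215 graduates:
print(round(sample_size(215)))  # -> 138
```

The final sample of 150 participants therefore comfortably exceeds the minimum of 138.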
The sampling procedure was non-probability convenience sampling.

2.2. Data Collection

At the end of the corresponding postgraduate program, the final project director sent an email with the final grade and an access link, inviting the 215 recent graduates of the postgraduate programs completed between 1 January 2016 and 31 December 2018 to participate voluntarily in the survey.
The data were collected through the “Google Forms” platform, which allows administering questionnaires over the Internet.
On the other hand, participants were assured that their contribution to the research was confidential.
During the analyzed period, survey data were received from a total of 168 students (78%). Of these, 18 were discarded when deficiencies were detected in the filling out of the form. SPSS version 26 statistical software was used for data analysis.

2.3. Methodology

In particular, the following activities were carried out:
(1) Selection of theories, models and tools for training evaluation, through bibliographic review techniques and analysis of the educational institution’s processes.
(2) Identification of indicators based on the models, the literature review and the authors’ experience.
(3) Selection of nominal qualitative variables (gender, origin and program) and ordinal variables (age group, entry profile and graduate satisfaction).
(4) Development of a measurement instrument or questionnaire for the variable “graduate satisfaction” on a Likert scale (“1. Strongly disagree” to “4. Strongly agree”), drafted in Microsoft Word.
(5) Determination of the validity of the instrument through consultation with a panel of experts.
(6) Application of the instrument to a final sample of 150 graduates of different online environmental postgraduate programs.
(7) Determination of the reliability of the proposed measuring instrument using Cronbach’s alpha in SPSS version 26.
(8) Testing the adequacy of a factor analysis by means of the Kaiser–Meyer–Olkin (KMO) sampling adequacy statistic and Bartlett’s test of sphericity.
(9) Execution of the factor analysis and determination of factors or homogeneous groups of variables.
(10) Search for possible significant relationships between variables by means of the chi-square statistic, using SPSS version 26, Infostat 2020 and Excel.
(11) Interpretation of results for decision making.

3. Results

3.1. Selection of Nominal and Ordinal Qualitative Variables

The nominal qualitative variables considered during the research were “gender,” “origin” and the “program” studied by the graduate. Each was assigned a categorical value to differentiate its categories. The assignment of these categories was exhaustive and mutually exclusive, i.e., no notion of quantity, order or hierarchy was associated with them [18].
On the other hand, the chosen ordinal qualitative variables did have an implicit ordering of the attribute. In this case, the “age group,” “entry profile” and “graduate satisfaction” variables were considered.

3.2. General Characteristics of the Graduates

As shown in Table 2, 70.7% of the graduates were male and the remaining 29.3% were female; 37.3% came from South America, 32% from North America, 24% from Central America and 6.7% from Eurasia. In relation to age, the 30–39 years age group accounted for 40.7%, the 20–29 years group for 22%, the 40–49 years group for 20%, the 50–59 years group for 13.3% and the 60–69 years group for 4%. With regard to previous studies, 66% had completed a degree/diploma/bachelor’s, 15.3% a master’s degree, 14.7% a postgraduate degree and 4% a doctorate. Finally, 44.7% studied the Climate Change program, 19.3% the Marine Sciences program, 16% the Water program and 4% the Biodiversity program.

3.3. Determination of the Indicators of the “Graduate Satisfaction” Variable

Based on the literature review, a total of ten dimensions with their corresponding indicators were identified (Table 3).

3.4. Instrument Design Based on the Measurement Criteria

Once the list of indicators was available, the corresponding measurement criteria for the diagnosis were proposed. To this end, a bibliographic search and the contribution of a panel of experts were used.
In this way, thirteen items or measurement criteria were obtained, which provided the basis for a Likert scale questionnaire with scale categories that gradually ranged from “1. Strongly disagree” to “4. Strongly agree” to measure the variable of graduate satisfaction in relation to the reference postgraduate programs (Table 4).

3.5. Validity and Reliability of the Measurement Instrument

The validity of the measurement instrument was determined based on the relevance, pertinence and clarity of each of the items by a panel of experts [4].
Ultimately, the aim was to see what proportion of the measurement criteria was accepted by each of the experts. To do this, each expert filled in a table in which he or she considered whether that question was of significant interest (relevant), whether it was suitable (feasible, novel, ethical and interesting) and, finally, whether it was unambiguously written (clear).
For example, items were considered ambiguous if they included two activities for a single answer, such as “The tutor has adequately complied with the teaching plan and the qualification of exercises”, or if they referred to assumptions, for example, “I consider that my classmates adequately value the activities of the tutor”.
Once the tables were completed, the frequencies were determined and tested against a test proportion of 85% using the binomial test in SPSS. In this way, significance levels were obtained that allowed us, in all cases, to accept the null hypothesis that the proportion of accepted criteria in the instrument was 85%.
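The same binomial check can be reproduced outside SPSS; a sketch with SciPy’s `binomtest` (the counts below are hypothetical, for illustration only):

```python
from scipy.stats import binomtest

# Hypothetical tally: one expert accepted 12 of the 13 measurement criteria.
# H0: the true acceptance proportion equals the test proportion of 0.85.
result = binomtest(k=12, n=13, p=0.85, alternative="two-sided")
print(result.pvalue > 0.05)  # True -> H0 (85% acceptance) cannot be rejected
```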
The reliability of the instrument was based on the determination of Cronbach’s alpha and included all the ordinal qualitative variables or items associated with the “graduate satisfaction” variable.
Cronbach’s alpha provided a result of 0.834, considered a good value for internal consistency [18]. Consequently, we proceeded to the study of the information collected.
Table 5 shows the statistics associated with Cronbach’s alpha. In the last column, it can be seen that all are greater than 0.808 and that the elimination of item 3 could improve the value obtained. However, it was considered that the small improvement in the statistic (from 0.834 to 0.836) was not significant enough to compensate for the loss of information that the exclusion of the item from the analysis would entail, so it was decided to leave it.
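Cronbach’s alpha itself is straightforward to compute from the raw item scores; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                          # number of items
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly consistent (identical) items give alpha of 1.0:
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```

Applied to the 150 × 13 matrix of Likert responses, this reproduces the 0.834 reported above.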

3.6. Measures of Central Tendency and Dispersion for Ordinal Qualitative Variables

Table 6 shows the measures of central tendency (mean) and dispersion (standard deviation) for the ordinal qualitative variables.
Therefore, the mean and the standard error of the mean were as follows (Table 7):
It could be observed that the mean of age group was in the range of 30–39 years old and that the mean of entry profile was located between undergraduate and postgraduate.

3.7. Exploratory Factor Analysis

The aim was to group the items into a reduced number of factors capable of explaining the greatest possible proportion of the total variability contained in them. To this end, the variables or items should meet the assumptions of normality, homoscedasticity (equality of variances) and multicollinearity (correlation between variables).
From the statistical analysis, it was observed that the data did not follow a normal distribution. However, according to Hair et al. [24], these three assumptions can be ignored, as factor analysis is more conceptual than statistical; in fact, these checks are rarely carried out.
Aiquipa [25] suggests the need to use the Kaiser–Meyer–Olkin (KMO) sampling adequacy coefficient, complemented by Bartlett’s test of sphericity, to check whether factor analysis is applicable.
As shown in Table 8, the KMO statistic was 0.793 ≥ 0.5, indicating a strong correlation between the items; therefore, factor analysis was applicable [26].
Likewise, the p-value of Bartlett’s test of sphericity, which is very sensitive to the sample size, turned out to be lower than the significance level in the social sciences (0.05), thus confirming, with more consistency, the suitability of performing factor analysis.
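Both adequacy checks can be reproduced from the correlation matrix; a sketch of the standard formulas (this implementation is ours, not the article’s):

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

def bartlett_sphericity(X):
    """Bartlett's test: H0 = the correlation matrix is the identity."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return stat, chi2_dist.sf(stat, dof)

def kmo(X):
    """Kaiser-Meyer-Olkin measure: values near 1 indicate strong shared correlation."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    # Partial correlations from the inverse of the correlation matrix
    partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    np.fill_diagonal(partial, 0.0)
    r = R - np.eye(R.shape[0])              # off-diagonal correlations
    return (r**2).sum() / ((r**2).sum() + (partial**2).sum())
```

A KMO ≥ 0.5 together with a Bartlett p-value below 0.05, as obtained here (0.793 and p < 0.05), supports running the factor analysis.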
It is also advisable to review the values on the diagonal of the anti-image matrix to see how close they are to unity and whether any variable or item needs to be eliminated.
In this sense, Hair et al. [24] recommend eliminating, by default, the variable with the lowest value, which, as shown in Table 9, corresponded to item No. 3: “I am satisfied with the attention received prior to enrollment”.
Once this item was eliminated, factor analysis was performed again, showing an improvement in the KMO statistic (Table 10).
Table 11 shows how the anti-image matrix also generally yields values closer to unity than in the previous case.
The communalities matrix also improved in this case. Table 12 shows that the model is able to reproduce 65.1% of the variability of the first item, 68.5% of the second item, etc.
On the other hand, although the scree plot recommended selecting three factors for the analysis (those with eigenvalues above unity), it was decided to choose four, based on the literature review, which recommends reaching an approximate value of 60–65% of rotated cumulative explained variance in the social sciences [27].
As shown in Table 13, after the rotation, there was a redistribution of the variability among the factors, with the four components explaining 63.095% of the total variability, which is an appropriate percentage in the context of the research.
The matrix of rotated components provided the items corresponding to each of the factors (Table 14).
Table 14 shows that factor 1 refers to the methodology followed during the program. Factor 2 is related to the organization. Factor 3 refers to the fulfillment of the academic expectations of the graduate. Finally, factor 4 is related to the work of the teaching staff.
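The extraction-plus-rotation step (principal-components extraction followed by varimax, the usual SPSS default) can be sketched with NumPy; this is an illustrative reimplementation, not the article’s code:

```python
import numpy as np

def pca_loadings(X, n_factors):
    """Principal-component loadings from the correlation matrix."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1][:n_factors]   # largest eigenvalues first
    return vecs[:, order] * np.sqrt(vals[order])

def varimax(L, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (p x k) loading matrix."""
    p, k = L.shape
    R = np.eye(k)
    prev = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() - prev < tol:
            break
        prev = s.sum()
    return L @ R
```

Rotation only redistributes the explained variance among the factors; the cumulative total (63.095% here) is unchanged, which makes the rotated solution easier to interpret without losing information.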
Table 15 shows the measurement criteria grouped by factors and other indicators.

3.8. Analysis of Variance (ANOVA)

ANOVA assumes normality and homoscedasticity.
Table 16 shows that residuals were normally distributed (p-value > 0.05).
Figure 2 illustrates that the residuals do not contain systematic information, i.e., they do not follow a definite pattern such as a “funnel shape,” which would violate the hypothesis of equal variances. However, this is only a graphical check, which must be confirmed with Levene’s statistic (Table 17).
Levene’s homogeneity test confirms that the analysis of variance can be performed (p-value > 0.05).
The analysis of variance yielded p-values > 0.05, which means that the differences between the factor means were attributed to random chance (Table 18).
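Both checks are available in SciPy; a sketch on synthetic factor scores (the data below are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Three hypothetical groups of graduate factor scores drawn from the
# same distribution, mimicking the "no real differences" scenario.
g1 = rng.normal(3.7, 0.3, 50)
g2 = rng.normal(3.7, 0.3, 50)
g3 = rng.normal(3.7, 0.3, 50)

lev_stat, lev_p = stats.levene(g1, g2, g3)   # homogeneity of variances
f_stat, f_p = stats.f_oneway(g1, g2, g3)     # one-way ANOVA on the means
# Large p-values would support equal variances / equal means, respectively.
```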

3.9. Categorization of Factors

Based on the four factors previously found, four new variables were created, each as the sum of the corresponding items, for each of the 150 participants.
These values were then classified into different categories: “very low”, “low”, “medium” and “high,” which are intended to give an idea of the level of satisfaction corresponding to each factor.

3.10. Overall Relationship between Variables

Before studying whether there are significant relationships between the factors and the rest of the variables, it is useful to approach this same objective from a global perspective.

3.10.1. Creation of the “Level of Satisfaction” Variable

To this end, a new global variable called “level of satisfaction” was created from the sum of the items for each of the 150 participants.
The descriptive data of central tendency and dispersion of this new variable are shown in Table 19.

3.10.2. Normality Test of the Data of the Variable “Satisfaction Level”

In order to determine whether the “satisfaction level” variable was distributed according to a normal distribution, and given that the sample was larger than 50 individuals, the Kolmogorov–Smirnov test was carried out in the SPSS software version 26 [28].
The result is shown in Table 20.
Since the resulting p-value (0.001) is less than 0.05, the null hypothesis of normality is rejected: the data do not follow a normal distribution. Consequently, the methods used later to test for possible significant relationships between variables had to be nonparametric.
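This check is easy to reproduce; a sketch with SciPy on synthetic, deliberately non-normal scores (the data are invented, only the procedure mirrors the article). Note that SPSS applies the Lilliefors correction when the normal parameters are estimated from the data, so SciPy’s plain K–S p-value will not match SPSS exactly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic bimodal "satisfaction" totals -- clearly non-normal.
scores = np.concatenate([rng.normal(36, 1.5, 75), rng.normal(50, 1.5, 75)])

stat, p = stats.kstest(scores, "norm", args=(scores.mean(), scores.std(ddof=1)))
print(p < 0.05)  # True -> normality rejected, use nonparametric tests
```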

3.10.3. Categorization of the Variable “Level of Satisfaction”

Next, the “level of satisfaction” variable was grouped into a range of three categories: low, medium and high. For this purpose, two cut-off points were sought to establish these categories.
The following cut-off points, at the mean ± 0.75 standard deviations, were considered:
44.58 − 0.75 × 3.876 ≈ 42
44.58 + 0.75 × 3.876 ≈ 47
Figure 3 shows the frequencies of the “graduate satisfaction” variable with the lower and upper cut-off values.
As a result, the ranges shown in Table 21 were obtained, to which a categorical value and level of satisfaction were assigned.
Table 21 shows that 53% of the graduates had a medium level of satisfaction with the institution’s online environmental postgraduate programs, 23% had a low level of satisfaction and the remaining 24% were highly satisfied after completing the corresponding postgraduate program.
Since the assumption of normality was not met for the grouped “level of satisfaction” variable, parametric tests such as Pearson’s could not be applied, so it was necessary to resort to nonparametric tests, which are a priori less powerful.
Since there are nominal variables, it was not possible to apply Spearman’s correlation; therefore, it was justified to resort to the chi-square test.
The results of applying the chi-square test to each of the five variables of the model against the grouped “level of satisfaction” variable gave rise to as many contingency tables, which are summarized in Table 22.
Since, in all cases, the p-value was ≥ 0.05, the null hypothesis of no relationship between the variables could not be rejected in any case.
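Each of these contingency-table tests can be reproduced with `scipy.stats.chi2_contingency`; the table below is hypothetical (gender × grouped satisfaction level, with counts roughly matching the marginal percentages reported above):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: male, female. Columns: low, medium, high satisfaction.
table = np.array([[25, 56, 25],
                  [10, 24, 10]])

chi2, p, dof, expected = chi2_contingency(table)
print(dof)        # -> 2 degrees of freedom for a 2x3 table
print(p >= 0.05)  # True -> no significant gender/satisfaction association
```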

3.11. Relationship between the Factors and the Rest of the Variables of the Model

As in the previous case, the results of applying the chi-square test to each of the four factors of the model vs. the rest of the variables gave rise to a set of contingency tables summarized in Table 23.
In this case, significant relationships between variables do appear. Where the p-value is < 0.05, the null hypothesis of no relationship between the variables is rejected. A stacked bar chart is therefore justified in these cases.
This case is proof of how important and useful it is to perform a factor analysis, since by analyzing each factor separately, information unnoticed when approaching the problem from a global perspective comes to light.
Figure 4 shows that, in general, there is a medium to high level of satisfaction with the methodology followed; however, the “Biodiversity” and “Audits” programs show high levels of dissatisfaction with this factor.
Figure 5 shows a fairly homogeneous level of satisfaction with the organization depending on the country of origin, except in the case of Eurasia, where graduates have very significant values of dissatisfaction, around 40%. There are also significant levels of dissatisfaction among graduates from South and Central America.
Figure 6 shows that, with regard to academic expectations, there is a high level of satisfaction by areas of origin, except in the case of graduates from Eurasia, whose values contrast once again with the rest.

4. Discussion

In this research article, a model based on a series of variables gathered from the literature was developed to evaluate the satisfaction of 150 graduates of various online postgraduate programs in the environmental field, and to study possible relationships between the variables.
In this research, the estimated global mean of the variable “graduate satisfaction” was 3.72, very close to the maximum value of the Likert scale, with a standard error of 0.02, which means that the potential error made in the estimation with respect to the true average does not exceed 0.04 (with 95% confidence).
The objective was to implement the model in a Likert scale questionnaire (“1. Strongly disagree” to “4. Strongly agree”) in order to measure the variable “graduate satisfaction” in relation to the reference postgraduate programs. This approach is endorsed by most of the literature consulted, which derives the items of the measurement instrument from indicators already contemplated in research by one or more authors, even when the number of scale points differs [5,6,16]. For example, Álvarez et al. [6] rely on the studies of [15]. In this sense, it is concluded that the items of the instrument can be obtained from the indicators of other models, regardless of the number of scale points contemplated.
With the aim of establishing its validity and reliability, the results yielded a value of Cronbach’s alpha statistic of 0.836, which indicates that the measurement instrument is consistent over time and is within the range of values of good reliability. These results are similar to those found in most of the literature, where the values of this coefficient are above 0.80 [5,16], which hints at good to excellent reliability [18]. In most models, the validity of the content of the instrument is verified by means of a literature review and an expert panel consultation. If it is verified to be possible from a statistical point of view, an exploratory factor analysis is carried out, in order to find the factors based on the grouping of items [4,5]. In other cases, a criterion and construct validity is also performed [1].
In order to group homogeneous variables into factors and reduce the complexity of the instrument, the feasibility of an exploratory factor analysis was investigated, yielding a KMO (Kaiser–Meyer–Olkin) sampling adequacy of 0.807 and a Bartlett’s test of sphericity with a p-value below 0.001, thus demonstrating the suitability of applying factor analysis. The exploratory factor analysis yielded a total of four factors: methodology, organization, academic expectations and teaching work, which together explained 63% of the total variance. These results are in agreement with studies such as Hair et al. [24], who recommend eliminating, by default, the item with the lowest coefficient on the diagonal of the anti-image matrix, or Pardo and Ruíz [26], who require KMO values higher than 0.5 and a null Bartlett’s sphericity p-value for a good fit, in order to find a reduced number of homogeneous groups or factors that can, in turn, be crossed with other variables to establish possible relationships between them [29]. It is therefore concluded that, with this data reduction technique, in addition to obtaining an instrument with good internal consistency, operating on the data matrix is considerably simplified, since after regrouping only four factors are handled.
When the level of satisfaction with the environmental postgraduate programs was determined overall, 77% of the graduates showed a medium-to-high level of satisfaction. In addition, no statistically significant differences were found between the means of the factors of the "graduate satisfaction" variable, so the estimated differences between means were attributed to chance.
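The comparison of factor means can be reproduced with a one-way ANOVA on the item means grouped by factor (values taken from Table 15). The sketch below uses Python and SciPy rather than the SPSS employed in the study:

```python
from scipy.stats import f_oneway

# Item means grouped into the four factors (values from Table 15)
methodology   = [3.71, 3.61, 3.80, 3.72]   # items 12, 4, 13, 1
organization  = [3.75, 3.63, 3.65]         # items 7, 6, 10
expectations  = [3.71, 3.70, 3.85]         # items 5, 11, 9
teaching_work = [3.73, 3.73]               # items 2, 8

f_stat, p_value = f_oneway(methodology, organization, expectations, teaching_work)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

With these inputs the test reproduces the analysis-of-variance table reported in the article (Table 18: F = 0.616, Sig. = 0.624), confirming that the factor means do not differ significantly.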
Regarding the objective of finding relationships between the level of graduate satisfaction and the remaining variables (gender, entry profile, origin, program and age group), the chi-square test applied globally did not yield significant results. This is consistent with [4,5,16,30], who likewise found no significant differences between satisfaction and dimensions such as gender, average years at university and school cycle. By contrast, Kuo et al. [31], after a preliminary test with 111 students from the United States measuring satisfaction in an online course, found that satisfaction was conditioned by the handling of ICTs and that there were differences by gender, academic level (undergraduate versus graduate) and time spent. However, the chi-square test applied to each individual factor did show significant relationships between methodology, organization and expectations, on the one hand, and the program studied and the graduate's origin, on the other. Specifically, significant levels of dissatisfaction were found with the methodology of the "Auditing" and "Biodiversity" programs, and with organization and academic expectations among graduates from Central America, South America and the Eurasia zone, respectively. It is worth noting that no significant differences emerged at the overall level, which underlines the value of factor analysis for uncovering findings that would otherwise go unnoticed. Finally, the fact that no other significant relationships were found does not mean that they do not exist, only that they could not be demonstrated at a significance level of 0.05 with the number of study units available.
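The per-factor tests above are chi-square tests of independence on contingency tables. A minimal sketch with SciPy; the counts below are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: grouped satisfaction level (rows: low,
# medium, high) cross-tabulated against a two-category variable such as
# gender. These counts are made up purely to show the mechanics.
observed = np.array([[ 8, 26],
                     [28, 52],
                     [12, 24]])

chi2_stat, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2_stat:.3f}, dof = {dof}, p = {p_value:.3f}")
```

A p-value above 0.05, as in the global tests of this study, means the null hypothesis of independence between satisfaction level and the crossed variable cannot be rejected.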

5. Conclusions and Recommendations

Throughout the investigation, the following points were found:
  • The importance for educational institutions of knowing the degree of satisfaction of their postgraduate graduates, whether to attract new generations of students [2], to respond to third parties [4] or, fundamentally, to identify their needs [1].
  • The existence of a great heterogeneity of models for measuring student satisfaction in general and graduate satisfaction in particular, some of them quite complex.
  • The existence of partial and global approach models for virtual education, as well as standards for this training modality.
  • The possibility of developing a Likert scale instrument to measure the satisfaction of 150 graduates of various online postgraduate programs in the environmental area.
  • The validity and reliability of the measurement instrument, established through expert review and a Cronbach's alpha of 0.834, respectively; this value, close to unity, indicates good internal consistency.
  • The adequacy of factor analysis, supported by a KMO sampling adequacy of 0.793 (greater than 0.5) and a p-value close to zero for Bartlett's test of sphericity.
  • An improvement in the KMO coefficient (from 0.793 to 0.807) when one of the items was removed from the study, together with a marginal increase in Cronbach's alpha (from 0.834 to 0.836).
  • The existence of four new underlying variables or factors as a result of the factor analysis: methodology, organization, academic expectations and teaching work, which together explain 63% of the total variance.
  • A medium-high level of satisfaction in the order of 77% with environmental online postgraduate programs.
  • The absence of statistically significant differences between the means of the dimensions of the "graduate satisfaction" variable, with the estimated differences between means therefore attributed to chance.
  • That the results of the chi-square test indicate that there is no significant overall relationship (α = 0.05) between the level of satisfaction and the rest of the variables (gender, entry profile, origin, program and age group).
  • That, at the level of individual factors, it is possible to establish significant relationships between methodology, organization and academic expectations with the programs and the origin of the graduate.
  • That graduates from Central and South America have higher dissatisfaction values than the rest in the organizational sphere.
  • That graduates from the Eurasian zone have higher dissatisfaction values than the rest in organizational aspects and academic expectations.
  • That the “Audit” and “Biodiversity” programs present higher levels of dissatisfaction than the rest in relation to methodological issues.
To conclude, some recommendations would be the following:
  • Investigate the causes of dissatisfaction among graduates from Central and South America, and especially from Eurasia, and subsequently review the organizational processes and the management of academic expectations.
  • Review the methodological issues of the “Audit” and “Biodiversity” programs.
  • Compare the results with those of other online and face-to-face postgraduate or undergraduate degrees [4].
  • Complement the results obtained with the opinions of the teaching staff [32].
  • Expand the sample with more participants from Europe and Asia.
  • Improve the institution’s policy to facilitate access to training according to the participants’ social and economic context.

Author Contributions

Conceptualization, R.R.S. and A.P.B.; data curation, A.P.B.; formal analysis, R.R.S.; investigation, S.P.-V.; methodology, E.G.V.; project administration, I.D.N.; resources, K.T.P.; software, I.D.N.; supervision, S.P.-V.; validation research and measurement instrument validity, E.G.V., S.P.-V. and I.D.N.; visualization, S.P.-V.; writing—original draft, E.G.V.; writing—review and editing, I.D.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research is part of European Project Erasmus + Lovedistance (Reference: 609949-EPP-1-2019-1-PTEPPKA2-CBHE-JP), funded by SODERCAN (Society for Regional Development of Cantabria), in conjunction with CITICAN (Investigation and Technology Centre of Cantabria).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the conditions of the project contract with the funder (Society for Regional Development of Cantabria).

Acknowledgments

The authors would like to thank the Centro de Investigación y Tecnología Industrial de Cantabria (CITICAN) and the Universidad Europea del Atlántico for their valuable collaboration.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mejías, A.; Martínez, D. Desarrollo de un instrumento para medir la satisfacción estudiantil en educación superior. Docencia Univ. 2009, 10. Available online: http://saber.ucv.ve/ojs/index.php/rev_docu/article/view/3704 (accessed on 15 March 2021).
  2. González, R.; Tinoco, M.; Torres, V. Análisis de la satisfacción de la experiencia universitaria de los egresados en 2015 de la Universidad de Colima. Paradig. Económico 2017, 8, 59–84. [Google Scholar]
  3. Pichardo, M.; García, B.A. El estudio de las expectativas en la universidad: Análisis de trabajos empíricos y futuras líneas de investigación. REDIE: Rev. Electrónica Investig. Educ. 2007, 9, 1–16. [Google Scholar]
  4. Pérez, F.J.; Martínez, P.; Martínez, M. Satisfacción del estudiante universitario con la tutoría. Diseño y validación de un instrumento de medida. Estud. Educ. 2015, 29, 81–101. [Google Scholar]
  5. Surdez, E.G.; Sandoval, M.d.C.; Lamoyi, C.L. Satisfacción estudiantil en la valoración de la calidad educativa universitaria. Educ. Educ. 2018, 21, 9–26. [Google Scholar] [CrossRef] [Green Version]
  6. Álvarez, J.; Chaparro, E.M.; Reyes, D.E. Estudio de la satisfacción de los estudiantes con los servicios educativos brindados por instituciones de educación superior del Valle de Toluca. REICE. Rev. Iberoam. Calid. Efic. Cambio Educ. 2015, 13. Available online: https://revistas.uam.es/reice/article/view/2788 (accessed on 20 March 2021).
  7. García, L. La Educación a Distancia: De la Teoría a la Práctica; Ariel: Barcelona, Spain, 2002. [Google Scholar]
  8. Van Slyke, C.; Kittner, M.; Belanger, F. Identifying Candidates for Distance Education: A Telecommuting Perspective. In Proceedings of the Americas Conference on Information Systems, Baltimore, MD, USA, 1998; pp. 666–668. [Google Scholar]
  9. McArdle, G.E. Training Design and Delivery; American Society for Training and Development: Alexandria, VA, USA, 1999. [Google Scholar]
  10. Kirkpatrick, D.L. Evaluating Training Programs: The Four Levels; Berrett-Koehler Publishers: San Francisco, CA, USA, 1994. [Google Scholar]
  11. Pereira, A.; Gelvez, L.N. Propuesta de un modelo latinoamericano para apoyar la gestión de calidad de la educación virtual. Un enfoque dinámico sistémico. Available online: https://reposital.cuaed.unam.mx:8443/xmlui/bitstream/handle/20.500.12579/5314/VEAR18.0426.pdf?sequence=1&isAllowed=y (accessed on 15 March 2021).
  12. Asociación Española de Normalización y Certificación, Gestión de la Calidad. Calidad de la Formación Virtual, (UNE 66181:2012). 2012.
  13. Cabero-Almenara, J.; del-Carmen Llorente-Cejudo, M.; Puentes-Puente, A. Online Students’ Satisfaction with Blended Learning. Comunicar 2009, 18, 149–157. [Google Scholar] [CrossRef] [Green Version]
  14. García, N.; Rivero, M.L.; Ricis, J. Brecha digital en tiempo del COVID-19. Rev. Educ. HEKADEMOS 2020, 28, 76–85. [Google Scholar]
  15. Gento, S.; Vivas, M. EL SEUE: Un Instrumento para Conocer la Satisfacción de los Estudiantes Universitarios con su Educación. Acción Pedagógica 2003, 12, 16–27. [Google Scholar]
  16. Romo, J.R.; Mendoza, G.; Flores, G. Relaciones conceptuales entre calidad educativa y satisfacción estudiantil, evaluadas con ecuaciones estructurales: El caso de la facultad de filosofía y letras de la Universidad Autónoma de Chihuahua. 2012. Available online: http://cie.uach.mx/cd/docs/area_04/a4p11.pdf (accessed on 7 March 2021).
  17. González, J.; Pazmiño, M. Cálculo e interpretación del Alfa de Cronbach para el caso de validación de la consistencia interna de un cuestionario, con dos posibles escalas tipo Likert. Rev. Publicando 2015, 2, 62–67. [Google Scholar]
  18. Rodríguez, J.; Reguant, M. Calcular la fiabilidad de un cuestionario o escala mediante el SPSS: El coeficiente alfa de Cronbach. REIRE Rev. d’Innovació Recer. Educ. 2020, 13, 1–13. [Google Scholar] [CrossRef]
  19. Fainholc, B. La calidad en la educación a distancia continúa siendo un tema muy complejo. Rev. Educ. Distancia 2004. Available online: https://revistas.um.es/red/article/view/25311 (accessed on 11 March 2021).
  20. Perero, G.; Isaac, C.L.; Díaz, S.; Ramos, Y. Propuesta de indicadores valorativos de la sostenibilidad de universidades ecuatorianas. Ing. Ind. 2020, 41, e4125. [Google Scholar]
  21. Piza-Flores, V.; Aparicio, J.L.; Rodríguez, C.; Beltrán, J. Transversalidad del eje “Medio ambiente” en educación superior: Un diagnóstico de la Licenciatura en Contaduría de la UAGro. RIDE. Rev. Iberoam. Investig. Desarro. Educ. 2018, 8, 598–621. [Google Scholar] [CrossRef] [Green Version]
  22. Hernández, R.; Fernández, C.; Baptista, P. Metodología de la Investigación, 3rd ed.; McGraw-Hill: New York, NY, USA, 2003. [Google Scholar]
  23. Torres, M.; Karim, P. Tamaño de una muestra para una investigación de mercado. Facultad de Ingeniería. Universidad Rafael Landívar. Boletín Electrónico 2021, 2. Available online: https://docplayer.es/424351-Tamano-de-una-muestra-para-una-investigacion-de-mercado.html (accessed on 13 March 2021).
  24. Hair, J.; Anderson, R.; Tatham, R.; Black, W. Análisis Multivariante, 5th ed.; Prentice Hall: Madrid, Spain, 1999. [Google Scholar]
  25. Aiquipa, J. Diseño y validación del inventario de dependencia emocional. Rev. Investig. Psicol. 2012, 15, 133–145. [Google Scholar] [CrossRef] [Green Version]
  26. Pardo, A.; Ruiz, M. SPSS11. Guía para el Análisis de Datos, 1st ed.; McGraw Hill: Madrid, Spain, 2002. [Google Scholar]
  27. Vailati, P. Alfa de Cronbach y Análisis Factorial en SPSS—Investigación de Mercados II UADE. 2020. Available online: https://www.youtube.com/watch?v=PjZZeajjZYU&t=1603s (accessed on 23 March 2021).
  28. De la Garza, J.; Morales, B.N.; González, B.A. Análisis Estadístico Multivariante; McGraw Hill: New York, NY, USA, 2013. [Google Scholar]
  29. Johnson, D. Métodos Multivariados Aplicados al Análisis de Datos; International Thomson Editores: London, UK.
  30. Troyano, Y.; García, A.J. Expectativas del alumnado sobre el profesorado tutor en el contexto del Espacio Europeo de Educación Superior. Boletín RED-U 2009, 7. [Google Scholar] [CrossRef]
  31. Kuo, Y.C.; Walker, A.E.; Belland, B.R.; Schroder, K.E.E. A predictive study of student satisfaction in online education programs. Int. Rev. Res. Open Distrib. Learn. 2013, 14, 16–39. [Google Scholar] [CrossRef] [Green Version]
  32. Llorent, V.; Cobano, V. Análisis crítico de las encuestas universitarias de satisfacción docente. Rev. Educ. 2019, 385, 91–117. [Google Scholar] [CrossRef]
Figure 1. Number of accesses and connection hours of different programs of an educational institution. Note. The overprinted circle illustrates the increase in accesses and connection hours to an educational institution’s online programs, starting in the first quarter of 2020.
Figure 2. Residuals vs. fitted values.
Figure 3. Frequency distribution of the “level of satisfaction” variable and normal distribution curve (mean = 44.58 and standard deviation = 3.876) overprinted. Note. The lower (42) and upper (47) cut-off values for determining the categories are shown.
Figure 4. Level of satisfaction referring to methodology by program.
Figure 5. Level of satisfaction referring to the organization by origin.
Figure 6. Level of satisfaction referring to academic expectations by origin.
Table 1. Variables and associated dimensions used as an evaluative instrument.

Variables | Dimensions
--- | ---
Section I: demographic variables | School situation
 | Individual profile
Section II: student satisfaction | Teaching–learning profile
 | Respectful treatment of the people with whom he/she must interact to achieve his/her academic goals
 | Teaching–learning space infrastructure
 | Self-realization

Note. Own elaboration based on Surdez et al. [5].
Table 2. Educational and demographic characteristics of the graduates.

Attribute | Category | n | %
--- | --- | --- | ---
Gender | 1. Male | 106 | 70.7
 | 2. Female | 44 | 29.3
Origin | 1. North America | 48 | 32
 | 2. Central America | 36 | 24
 | 3. South America | 56 | 37.3
 | 4. Eurasia | 10 | 6.7
Program | 1. Audits | 15 | 10
 | 2. ISO 14001 | 9 | 6
 | 3. Biodiversity | 6 | 4
 | 4. Waters | 24 | 16
 | 5. Marine Sciences | 29 | 19.3
 | 6. Climate Change | 67 | 44.7
Age group (years) | 1. 20–29 | 33 | 22
 | 2. 30–39 | 61 | 40.7
 | 3. 40–49 | 30 | 20.0
 | 4. 50–59 | 20 | 13.3
 | 5. 60–69 | 6 | 4
Entry profile | 1. PhD | 6 | 4
 | 2. Master's degree | 23 | 15.3
 | 3. Postgraduate | 22 | 14.7
 | 4. Degree/Dip./Bachelor | 99 | 66
Table 3. Dimensions and indicators of the "graduate satisfaction" variable.

Dimensions | Indicators
--- | ---
Self-realization | Level of achieved academic satisfaction
Relevance for training | Degree of learning achieved
Academic program requirements | Time pressures, attention effort, complexity of the tasks…
Economic and social context of the participant | Number of scholarships granted, payment facilities…
Interaction between participants, tutors and other interested parties | Degree of satisfaction with the channels established for external communication
Academic program | Degree of satisfaction with the curriculum design
Continuous evaluation | Degree of satisfaction with continuous evaluation activities
Teacher evaluation | Degree of satisfaction with academic tutors; degree of satisfaction with the tutor of the final project
Accessibility to the product and other services offered by the institution | Number of shipments of teaching material, reception times…
Virtual campus | Degree of ease of use of the virtual campus; technical support and number of incidents
Table 4. Likert scale questionnaire for measuring graduate satisfaction with postgraduate programs.

Item No. | Measurement Criteria
--- | ---
1 | Delivery of didactic materials has been punctual and on time
2 | My tutor has gone out of his/her way to help me
3 | I am satisfied with the attention I received prior to enrollment
4 | The handling of the virtual campus has been very user friendly
5 | Overall, I am satisfied with the program
6 | My assessment of the organization of the program is very satisfactory
7 | The Institution has provided me with facilities to carry out the study
8 | My Final project manager has been accessible
9 | This program will be relevant to my professional training and performance
10 | I found the information provided during the program to be sufficient
11 | The academic program has met my initial expectations
12 | I found the contents of the program interesting
13 | My assessment of the continuous evaluation is very satisfactory
Table 5. Statistics associated with Cronbach's alpha.

Item | Scale Mean if Item Deleted | Scale Variance if Item Deleted | Corrected Item-Total Correlation | Cronbach's Alpha if Item Deleted
--- | --- | --- | --- | ---
1 | 44.36 | 14.433 | 0.674 | 0.810
2 | 44.35 | 15.478 | 0.371 | 0.829
3 | 44.58 | 15.024 | 0.331 | 0.836
4 | 44.47 | 14.264 | 0.568 | 0.815
13 | 44.28 | 15.156 | 0.414 | 0.827
Table 6. Central tendency and dispersion statistics for ordinal qualitative variables.

Variable | M | SD
--- | --- | ---
Graduate satisfaction | |
  Item 1 | 3.72 | 0.493
  Item 2 | 3.73 | 0.504
  Item 3 | 3.50 | 0.673
  Item 4 | 3.61 | 0.601
  Item 5 | 3.71 | 0.651
  Item 6 | 3.63 | 0.550
  Item 7 | 3.75 | 0.480
  Item 8 | 3.73 | 0.473
  Item 9 | 3.85 | 0.408
  Item 10 | 3.65 | 0.567
  Item 11 | 3.70 | 0.576
  Item 12 | 3.71 | 0.595
  Item 13 | 3.80 | 0.543
Age group | 2.37 | 1.089
Entry profile | 3.43 | 0.893

Note. M and SD represent mean and standard deviation, respectively.
Table 7. Mean and standard error of the mean of graduate satisfaction variable.

Variable | M | SE
--- | --- | ---
Graduate Satisfaction | 3.72 | 0.02

Note. M and SE represent mean and standard error of the mean, respectively.
Table 8. KMO and Bartlett's test.

Kaiser–Meyer–Olkin measure of sampling adequacy | | 0.793
Bartlett's test of sphericity | Approx. chi-square | 601.695
 | df | 78
 | Sig. | 0.001
Table 9. Anti-image correlation matrix.

 | Item 1 | Item 2 | Item 3 | Item 4 | Item 5 | Item 6 | Item 7 | Item 8 | Item 9 | Item 10 | Item 11 | Item 12 | Item 13
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Item 1 | 0.813 a | −0.290 | −0.111 | 0.110 | −0.154 | 0.150 | −0.232 | −0.149 | −0.200 | −0.072 | −0.029 | −0.412 | 0.015
Item 2 | −0.290 | 0.766 a | 0.070 | −0.189 | 0.050 | 0.007 | 0.017 | −0.223 | 0.002 | 0.068 | −0.064 | 0.169 | −0.071
Item 3 | −0.111 | 0.070 | 0.655 a | −0.509 | 0.019 | −0.012 | −0.143 | −0.018 | 0.044 | −0.089 | 0.053 | 0.103 | 0.155
Item 4 | 0.110 | −0.189 | −0.509 | 0.742 a | 0.014 | −0.083 | 0.011 | 0.016 | −0.089 | 0.035 | −0.003 | −0.394 | −0.096
Item 5 | −0.154 | 0.050 | 0.019 | 0.014 | 0.830 a | −0.272 | 0.111 | 0.051 | −0.082 | −0.100 | 0.020 | −0.090 | −0.053
Item 6 | 0.150 | 0.007 | −0.012 | −0.083 | −0.272 | 0.706 a | −0.215 | −0.164 | −0.159 | 0.285 | −0.310 | −0.035 | −0.059
Item 7 | −0.232 | 0.017 | −0.143 | 0.011 | 0.111 | −0.215 | 0.851 a | 0.032 | 0.094 | −0.155 | −0.087 | −0.046 | 0.019
Item 8 | −0.149 | −0.223 | −0.018 | 0.016 | 0.051 | −0.164 | 0.032 | 0.892 a | −0.015 | −0.205 | −0.115 | −0.070 | 0.010
Item 9 | −0.200 | 0.002 | 0.044 | −0.089 | −0.082 | −0.159 | 0.094 | −0.015 | 0.853 a | −0.118 | −0.109 | 0.126 | −0.089
Item 10 | −0.072 | 0.068 | −0.089 | 0.035 | −0.100 | 0.285 | −0.155 | −0.205 | −0.118 | 0.764 a | −0.392 | 0.029 | −0.113
Item 11 | −0.029 | −0.064 | 0.053 | −0.003 | 0.020 | −0.310 | −0.087 | −0.115 | −0.109 | −0.392 | 0.830 a | −0.128 | 0.115
Item 12 | −0.412 | 0.169 | 0.103 | −0.394 | −0.090 | −0.035 | −0.046 | −0.070 | 0.126 | 0.029 | −0.128 | 0.780 a | −0.352
Item 13 | 0.015 | −0.071 | 0.155 | −0.096 | −0.053 | −0.059 | 0.019 | 0.010 | −0.089 | −0.113 | 0.115 | −0.352 | 0.812 a

a Measures of sampling adequacy (MSA).
Table 10. KMO and Bartlett's test without item 3.

Kaiser–Meyer–Olkin measure of sampling adequacy | | 0.807
Bartlett's test of sphericity | Approx. chi-square | 538.981
 | df | 66
 | Sig. | 0.001
Table 11. Anti-image correlation matrix without item No. 3.

 | Item 1 | Item 2 | Item 4 | Item 5 | Item 6 | Item 7 | Item 8 | Item 9 | Item 10 | Item 11 | Item 12 | Item 13
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
Item 1 | 0.814 a | −0.285 | 0.062 | −0.153 | 0.150 | −0.252 | −0.152 | −0.196 | −0.083 | −0.023 | −0.405 | 0.033
Item 2 | −0.285 | 0.771 a | −0.178 | 0.049 | 0.008 | 0.028 | −0.222 | −0.001 | 0.075 | −0.068 | 0.163 | −0.083
Item 4 | 0.062 | −0.178 | 0.835 a | 0.028 | −0.103 | −0.073 | 0.008 | −0.078 | −0.012 | 0.028 | −0.399 | −0.021
Item 5 | −0.153 | 0.049 | 0.028 | 0.827 a | −0.272 | 0.115 | 0.052 | −0.083 | −0.099 | 0.019 | −0.093 | −0.056
Item 6 | 0.150 | 0.008 | −0.103 | −0.272 | 0.698 a | −0.219 | −0.164 | −0.159 | 0.285 | −0.31 | −0.034 | −0.058
Item 7 | −0.252 | 0.028 | −0.073 | 0.115 | −0.219 | 0.837 a | 0.030 | 0.101 | −0.17 | −0.08 | −0.032 | 0.042
Item 8 | −0.152 | −0.222 | 0.008 | 0.052 | −0.164 | 0.03 | 0.889 a | −0.014 | −0.207 | −0.115 | −0.069 | 0.013
Item 9 | −0.196 | −0.001 | −0.078 | −0.083 | −0.159 | 0.101 | −0.014 | 0.855 a | −0.114 | −0.111 | 0.122 | −0.097
Item 10 | −0.083 | 0.075 | −0.012 | −0.099 | 0.285 | −0.17 | −0.207 | −0.114 | 0.761 a | −0.390 | 0.039 | −0.101
Item 11 | −0.023 | −0.068 | 0.028 | 0.019 | −0.31 | −0.08 | −0.115 | −0.111 | −0.39 | 0.829 a | −0.134 | 0.108
Item 12 | −0.405 | 0.163 | −0.399 | −0.093 | −0.034 | −0.032 | −0.069 | 0.122 | 0.039 | −0.134 | 0.774 a | −0.374
Item 13 | 0.033 | −0.083 | −0.021 | −0.056 | −0.058 | 0.042 | 0.013 | −0.097 | −0.101 | 0.108 | −0.374 | 0.823 a

a Measures of sampling adequacy (MSA), once item 3 has been removed.
Table 12. Communalities matrix.

Item No. | Initial | Extraction | Item No. | Initial | Extraction
--- | --- | --- | --- | --- | ---
1 | 1.0 | 0.651 | 8 | 1.0 | 0.574
2 | 1.0 | 0.685 | 9 | 1.0 | 0.577
4 | 1.0 | 0.572 | 10 | 1.0 | 0.578
5 | 1.0 | 0.591 | 11 | 1.0 | 0.611
6 | 1.0 | 0.682 | 12 | 1.0 | 0.776
7 | 1.0 | 0.700 | 13 | 1.0 | 0.574
Table 13. Total variance explained.

 | Extraction Sums of Squared Loadings | | | Rotation Sums of Squared Loadings | |
Component | Total | Variance % | Accumulated % | Total | Variance % | Accumulated %
--- | --- | --- | --- | --- | --- | ---
1 | 4.427 | 36.892 | 36.892 | 2.171 | 18.089 | 18.089
2 | 1.160 | 9.664 | 46.557 | 2.093 | 17.440 | 35.529
3 | 1.075 | 8.960 | 55.517 | 1.705 | 14.207 | 49.736
4 | 0.909 | 7.579 | 63.095 | 1.603 | 13.360 | 63.095

Note. Extraction method: principal component analysis.
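The "variance explained" percentages in Table 13 come from the eigenvalues of the items' correlation matrix under principal component extraction. A sketch of that computation, with random synthetic data standing in for the actual item scores:

```python
import numpy as np

def explained_variance(scores: np.ndarray) -> np.ndarray:
    """Percentage of total variance carried by each principal component
    of the items' correlation matrix (principal component extraction)."""
    R = np.corrcoef(scores, rowvar=False)
    eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
    return 100 * eigenvalues / eigenvalues.sum()        # trace(R) = n_items

# Synthetic stand-in for an (n_respondents, n_items) Likert score matrix
rng = np.random.default_rng(0)
scores = rng.integers(1, 5, size=(150, 12)).astype(float)
pct = explained_variance(scores)
print(pct.round(1), "first four:", pct[:4].sum().round(1), "%")
```

In the study, the first four components accumulated 63.095% of the total variance, which motivated retaining four factors.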
Table 14. Matrix of rotated components.

 | Component 1 | Component 2 | Component 3 | Component 4
--- | --- | --- | --- | ---
Item 12 | 0.793 | 0.308 | |
Item 4 | 0.716 | | |
Item 13 | 0.715 | | |
Item 1 | 0.483 | 0.422 | | 0.466
Item 7 | | 0.777 | |
Item 6 | | 0.675 | 0.397 |
Item 10 | | 0.665 | | 0.343
Item 5 | | | 0.724 |
Item 11 | | | 0.693 |
Item 9 | | | 0.616 | 0.431
Item 2 | | | | 0.792
Item 8 | | 0.440 | | 0.564

Note. Values below 0.3 have been eliminated. Items are grouped by the factor on which they load.
Table 15. Measurement criteria grouped by factors and measures of central tendency.

Factor | Item No. | Mean Item | Mean Factor | Measurement Criteria
--- | --- | --- | --- | ---
Methodology | Item 12 | 3.71 | 3.71 | I found the contents of the program interesting
 | Item 4 | 3.61 | | The handling of the virtual campus has been very user friendly
 | Item 13 | 3.80 | | My assessment of the continuous evaluation is very satisfactory
 | Item 1 | 3.72 | | Delivery of didactic materials has been punctual and on time
Organization | Item 7 | 3.75 | 3.68 | The Institution has provided me with facilities to carry out the study
 | Item 6 | 3.63 | | My assessment of the organization of the program is very satisfactory
 | Item 10 | 3.65 | | I found the information provided during the program to be sufficient
Academic expectations | Item 5 | 3.71 | 3.75 | Overall, I am satisfied with the program
 | Item 11 | 3.70 | | The academic program has met my initial expectations
 | Item 9 | 3.85 | | This program will be relevant to my professional training and performance
Teaching work | Item 2 | 3.73 | 3.73 | My tutor has gone out of his/her way to help me
 | Item 8 | 3.73 | | My Master's Final project director has been accessible
Table 16. Test of normality. Shapiro–Wilk test (modified *).

Variable | n | M | SD | W * | p-Value (one-tailed)
--- | --- | --- | --- | --- | ---
Residual Satisfaction | 12 | 0.00 | 0.06 | 0.91 | 0.3917

Note. (*) indicates a modified version of the Shapiro–Wilk test. M and SD represent mean and standard deviation, respectively.
Table 17. Test of homogeneity of variances.

Satisfaction | Levene Statistic | df1 | df2 | Sig.
--- | --- | --- | --- | ---
 | 1.303 | 3 | 8 | 0.339
Table 18. Table of analysis of variance.

F.V. | SS | df | MS | F | Sig.
--- | --- | --- | --- | --- | ---
Between | 0.009 | 3 | 0.003 | 0.616 | 0.624
Within | 0.041 | 8 | 0.005 | |
Total | 0.05 | 11 | | |
Table 19. Basic statistics of the level of satisfaction variable.

 | N | Minimum | Maximum | M | SD
--- | --- | --- | --- | --- | ---
Level of Satisfaction | 150 | 29 | 48 | 44.58 | 3.876

Note. M and SD represent mean and standard deviation, respectively.
Table 20. Kolmogorov–Smirnov normality test of the Satisfaction Level variable.

 | Kolmogorov–Smirnov a | | | Shapiro–Wilk | |
Variable | Statistic | df | Sig. | Statistic | df | Sig.
--- | --- | --- | --- | --- | --- | ---
Satisfaction Level | 0.190 | 150 | 0.001 | 0.810 | 150 | 0.001

a Lilliefors correction.
Table 21. Grouping ranges for evaluating graduate satisfaction levels.

Value | Range | Frequency | % | Level of Satisfaction
--- | --- | --- | --- | ---
1 | Values ≤ 42 | 34 | 23 | Low
2 | Values between 43 and 47 | 80 | 53 | Medium
3 | Values ≥ 48 | 36 | 24 | High
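The grouping in Table 21 amounts to a simple banding of each graduate's total score. A sketch of that banding rule (cut-offs taken from the table; the sample scores are illustrative only):

```python
# Cut-off values from Table 21: totals <= 42 are "Low",
# 43-47 are "Medium", and >= 48 are "High".
def satisfaction_level(score: int) -> str:
    if score <= 42:
        return "Low"
    if score <= 47:
        return "Medium"
    return "High"

# Illustrative totals spanning the observed range (29 to 48)
scores = [29, 42, 43, 47, 48]
print([satisfaction_level(s) for s in scores])
# → ['Low', 'Low', 'Medium', 'Medium', 'High']
```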
Table 22. Summary of contingency tables of chi-square test values ("Grouped Level of Satisfaction" vs. "Other Variables").

Variables | Grouped Level of Satisfaction (p-Value) **
--- | ---
Gender | 0.858
Age group | 0.666
Origin | 0.059
Entry profile | 0.778
Program | 0.327

Note. ** Variable statistically significant (p-value < 0.05).
Table 23. Summary of contingency tables of chi-square test values ("Level of Satisfaction by Factors" vs. "Other Variables").

Variables | Methodology | Organization | Academic Expectations | Teaching Work
--- | --- | --- | --- | ---
Gender | 0.738 | 0.369 | 0.072 | 0.482
Age group | 0.927 | 0.750 | 0.984 | 0.462
Origin | 0.166 | 0.003 ** | 0.045 ** | 0.187
Entry profile | 0.369 | 0.954 | 0.831 | 0.811
Program | 0.002 ** | 0.104 | 0.073 | 0.920

Note. ** Variable statistically significant (p-value < 0.05).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
