Article

A Methodological Approach for Developing and Validating a Parsimonious and Robust Measurement Tool: The Academic E-Service Quality (ACEQUAL) Model

Toni Lupo and Ester Buscarino
1 Department of Engineering, Università degli Studi di Palermo, 90128 Palermo, Italy
2 ELMI S.r.l., 90146 Palermo, Italy
* Author to whom correspondence should be addressed.
Educ. Sci. 2021, 11(10), 613; https://doi.org/10.3390/educsci11100613
Submission received: 31 July 2021 / Revised: 28 September 2021 / Accepted: 1 October 2021 / Published: 4 October 2021
(This article belongs to the Special Issue Student Preferences and Satisfaction: Measurement and Optimization)

Abstract: Nowadays, in the higher education sector, the quality measurement process of education-related services plays a crucial role in supporting focused and targeted improvement activities centered on students' needs. These are considered crucial factors for dealing with the current academic competitive context. Therefore, the quality measurement process has to be precise and accurate; namely, the measurement model on which it is based has to be parsimonious and robust. The present work proposes an effective and easy-to-use methodological approach for supporting the structuring of such a measurement tool. Its effectiveness is shown with reference to the academic e-service provided at the University of Palermo. In particular, taking into account the students' viewpoints and perspectives, a measurement model of the academic e-service quality is developed and validated, thus addressing a gap in the literature on the subject. Finally, a survey is conducted, and the academic e-service quality shortcomings and criticalities it highlights are discussed. The outcomes of this study may be of interest to practitioners and researchers in the field, offering important suggestions on how to support the structuring of a measurement model, as well as the data-driven service quality improvement process.

1. Introduction

Assessment processes of perceived quality nowadays play a crucial role in supporting organizational success. They can suitably inform the definition of improvement policies and even represent a fundamental step in the continuous improvement cycle [1]. Furthermore, they allow the customer satisfaction level to be evaluated against operational and managerial aspects. Finally, performance assessment processes strongly support the effectiveness of the internal and external communication system concerning the achieved performance levels and the organization's capability to reach the promised ones [2]. In the service context, the precise and accurate assessment of the provided quality level is characterized by a certain complexity. In the first instance, service quality is not an absolute and exclusive service feature, independent of the customer's perceptions, as it is for typical tangible products. Rather, service quality is an intangible and latent entity that fully involves the customer's cognitive sphere. In particular, it cannot be directly measured; its assessment is performed indirectly by taking into account suitable manifest aspects of the service (i.e., items) related to its fundamental performance aspects (i.e., dimensions). The relationship between items and related dimensions can be formalized by means of appropriate service quality conceptual models [3].
Based on the previous considerations, the aim of the present work is to accurately and precisely measure the quality level perceived by students in an academic context with regard to the e-service provided through the academic web portal. The literature in this area appears to be lacking despite the great relevance of such a context [4]. Indeed, the academic e-service involves all the students at the university: through it, the academic organization is introduced, tasks and related services for students are described and delivered, and useful information is widely disseminated. Furthermore, the academic e-service quality affects the students' academic efficiency and perceived atmosphere of comfort and order, as well as their overall satisfaction [5]. The latter impacts the students' behavioral intention and aptitude to spread word-of-mouth, which affect both the university reputation and student retention [6,7]. These aspects appear to be crucial in the current academic competitive context [8]. As a result, the academic e-service quality needs to be at the highest possible level, and consequently it appears necessary to measure it continuously via a reliable measurement tool [9]. In the light of the previous considerations, in this paper a conceptual model for measuring the academic e-service quality is developed and validated, thus addressing the lack of literature in this relevant context. For this purpose, an effective and easy-to-use methodological approach that combines engineering and statistical techniques to support the structuring of a parsimonious and robust measurement model of service quality is herein proposed. This further aspect represents an additional contribution of the present work. Finally, a survey is conducted taking into account the viewpoints and perspectives of students at the University of Palermo, and the academic e-service quality shortcomings and criticalities it highlights are discussed.
The remainder of the present paper is organized as follows. The literature analysis is reported in Section 2, whereas materials and methods are supplied in Section 3. Section 4 summarizes the results obtained from the empirical analysis conducted. Discussion and conclusions are finally reported in Section 5.

2. Literature Analysis

Service quality is an entity closely related to customer satisfaction [10]. According to Petrick [11], service quality represents the user's cognitive assessment of a given service, whereas customer satisfaction refers to the pleasurable way in which the perceived service performance makes the customer feel. Lehtinen and Lehtinen [12] defined service quality in terms of physical, interactive and business quality attributes: physical quality refers to the tangible aspects of the service, interactive quality to the customer-provider interaction, and business quality to the provider's image and reputation. A further conceptualization of service quality was proposed by Rust and Oliver [13], who suggested a three-component quality model comprising the customer-provider interaction, the service environment and the results. Grönroos [14] stated that service quality includes the functional quality, the technical quality and the provider image. In particular, the functional quality focuses on how the service is delivered, the technical quality on what is provided, while the provider image acts as a mediator between the functional and technical quality. The ServQual instrument [15] represents the first conceptual model specifically developed for measuring the functional quality of services. It theoretically represents service quality as a multidimensional entity on two levels: the dimensions are stated at the upper level, while the items are considered at the basic one. In particular, ServQual consists of 5 dimensions and 22 items, and its implementation involves conducting a double survey with the aim of capturing both the customers' perceived quality and their expectations. Cronin and Taylor [16] criticized the double ServQual questionnaire. Accordingly, the authors proposed a new conceptual model based only on the assessment of service performance. Such a model treats service quality as a one-dimensional construct in which the related items are directly considered for evaluating the service quality.
Considering the higher education sector, the creation of a more suitable and pleasing learning environment, along with the provision of high-quality education-related services, nowadays represents the fundamental driver for facing the highly competitive pressure related to student recruitment, retention and loyalty [17,18]. In particular, the student satisfaction that arises from providing highly performing education-related services significantly affects the results of the so-called "ranking war" [19]. In Italy, the idea of evaluating the service quality in the academic context is quite recent. The Legislative Decree n. 19/2012 [20] made mandatory the implementation of an initial and periodical accreditation system, the periodical assessment of education-related services and the employment of an effective internal and external communication system. In particular, the National Agency for the Evaluation of Universities and Research Institutes (ANVUR) provides the procedures to be used for self-assessment, periodical assessment and accreditation, centered on the requirements specified by ISO 9001:2015 (Quality management systems) [21]. The fundamental aim is to promote the provision of education-related services deeply centered on students' needs.

E-Service Quality

With the widespread adoption of e-services and e-commerce, in recent years many researchers have developed conceptual models and approaches for measuring e-service quality. Compared with the measurement tools used for evaluating the quality level of traditional services, these models additionally account for the hedonic and utilitarian value perceived by users. For example, Loiacono et al. [22] developed the WebQual tool, composed of 12 dimensions and 36 items, on the basis of the theory of reasoned action [23]. Yoo and Donthu [24] developed the SiteQual model, consisting of 4 dimensions and 9 items, which is able to measure the e-service quality of e-commerce activities. More recently, the WebQual 4.0 model was developed on the basis of the ServQual scale, and it is composed of 23 items within 3 dimensions [25]. Further service quality conceptual models recently developed in the literature for different e-service settings are shown in [26,27,28,29,30]. Regarding the academic e-service quality, to the best of the authors' knowledge, the literature appears to be lacking despite the great relevance of this issue. For that reason, a parsimonious and robust measurement model of the academic e-service quality was developed and validated, as subsequently detailed.

3. Materials and Methods

The development process of a service quality measurement model herein proposed can be summarized in three fundamental steps, as shown in Figure 1 [31].

3.1. Step 1: Context Analysis and Evaluation of Existing Literature

This is an introductory step that aims to provide an overview of the fundamental performance aspects characterizing the e-service under analysis and its reference stakeholders. In particular, obtaining structured information about the reference context allows the analyst to better understand the fundamental characteristics of the measurement model to be developed, in terms of the items to be included, the related dimensions and the relationships between them. Therefore, the context analysis has to be aimed at acquiring useful data and information, and for this purpose it is also necessary to evaluate the reference literature to support the item generation phase, as detailed below.

3.2. Step 2: Item Generation and Revision; Development of the Preliminary Questionnaire

On the basis of the context analysis and the evaluation of the existing literature, the items deemed relevant for the service quality measurement model are generated. After this phase, it is necessary to revise the generated items in order to select the most effective ones, so as to pursue the development of a parsimonious measurement model. To carry out this task, different knowledge-based exploratory frameworks can be considered [32]. At the end of the item revision process, the retained items are transposed into the preliminary questionnaire, which is used to conduct the data collection phase required to develop and validate the measurement model through the statistical analyses subsequently detailed.

3.3. Step 3: EFA/CFA and Development of the Final Questionnaire

The Exploratory Factor Analysis (EFA) [33] and the Confirmatory Factor Analysis (CFA) [34] are multivariate statistical analyses aimed, respectively, at describing the latent structure of the data and at confirming such a structure. The latent structure shows the constructs, or dimensions, of the phenomenon under analysis, which are not directly observable from the collected data.

3.4. EFA

The items correlation matrix represents the fundamental input of EFA, while its fundamental output consists of the extracted factors and the related factor loadings, namely the correlation coefficients between items and factors. The factorial model is expressed by Equation (1):
X = Λξ + δ        (1)
where:
  • X is the items vector;
  • Λ is the factor loadings matrix;
  • ξ is the extracted factors vector;
  • δ is the random errors vector.
The relevant choices in EFA concern the optimal number of factors to be extracted and the factor rotation method to be employed. The former represents the optimal trade-off between two conflicting criteria: on the one hand, maximizing the explained items variance by extracting a high number of factors, resulting in a complex latent structure; on the other hand, extracting a low number of factors so as to obtain a strongly synthesized latent structure. Different criteria can be considered to support the analyst's choice [35]. As regards the factor rotation, it is typically performed to simplify the conceptual understanding of the extracted factors. It can be carried out via two fundamental methods: the first orthogonally rotates the extracted factors, thus maintaining their independence; the second, in contrast, does not impose the orthogonality condition between the extracted factors. For more details, the reader can refer to [36]. An illustrative implementation is sketched below.
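The sketch assumes Python with the third-party factor_analyzer package and a hypothetical CSV file of item responses; the file name, the number of factors and the rotation choice are illustrative assumptions rather than the study's exact settings.

```python
# A minimal EFA sketch (illustrative, not the study's exact computation),
# assuming the third-party `factor_analyzer` package and a hypothetical CSV
# file with one column per item and one row per valid questionnaire.
import pandas as pd
from factor_analyzer import FactorAnalyzer

responses = pd.read_csv("valid_questionnaires.csv")  # hypothetical file name

# Extract 3 factors by maximum likelihood; 'varimax' is an orthogonal rotation
# (keeps factors independent), while e.g. 'promax' would drop that constraint.
efa = FactorAnalyzer(n_factors=3, method="ml", rotation="varimax")
efa.fit(responses)

loadings = pd.DataFrame(efa.loadings_, index=responses.columns)  # matrix Λ
communalities = pd.Series(efa.get_communalities(), index=responses.columns)
print(loadings.round(2))
print(communalities.round(2))
```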

3.5. CFA

The factorial model parameters, comprising the extracted factors and the related factor loadings, represent the fundamental input of CFA, while its main output is the confirmation of the factorial model, obtained by comparing the reproduced items covariance matrix Σ(θ) with the experimental one Σ. The covariance matrix Σ(θ) has the formulation shown in Equation (2):
Σ(θ) = Λx Φ Λx′ + Θδ        (2)
where:
  • Φ is the common factors covariance matrix;
  • Θδ is the measurement errors covariance matrix;
  • Λx is the factor loadings matrix, and Λx′ its transpose.
The maximum likelihood method is implemented in order to minimize the discrepancy between Σ(θ) and Σ via the so-called fitting functions [37]. Finally, on the basis of the CFA results, the confirmed framework is transposed into the final customer satisfaction questionnaire, which, in addition to the items validated through CFA, can also include items related to the students' socio-personal information and their overall satisfaction.
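As an illustration of Equation (2) and of the role of the fitting function, the following numpy sketch builds a reproduced covariance matrix from placeholder parameters and evaluates the maximum likelihood discrepancy; all parameter values are illustrative assumptions, not estimates from this study.

```python
# A numpy sketch of Equation (2) and the maximum likelihood fitting function;
# all parameter values below are placeholders, not estimates from this study.
import numpy as np

n_items, n_factors = 13, 3
rng = np.random.default_rng(0)

Lambda_x = rng.uniform(0.4, 0.8, size=(n_items, n_factors))  # loadings, Λx
Phi = np.eye(n_factors)                                      # factors covariance, Φ
Theta_delta = np.diag(rng.uniform(0.2, 0.6, size=n_items))   # error covariance, Θδ

Sigma_theta = Lambda_x @ Phi @ Lambda_x.T + Theta_delta      # reproduced covariance, Eq. (2)

def ml_discrepancy(S: np.ndarray, Sigma: np.ndarray) -> float:
    """ML fitting function: F = ln|Σ(θ)| + tr(S Σ(θ)^-1) - ln|S| - p."""
    p = S.shape[0]
    Sigma_inv = np.linalg.inv(Sigma)
    return (np.log(np.linalg.det(Sigma)) + np.trace(S @ Sigma_inv)
            - np.log(np.linalg.det(S)) - p)

print(ml_discrepancy(Sigma_theta, Sigma_theta))  # 0.0 at a perfect fit
```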

4. Empirical Analyses Results

4.1. Context Analysis and Evaluation of Existing Literature

As aforementioned, in the present work a conceptual model for measuring the academic e-service quality was developed and validated. This e-service involves several groups of users, namely students, graduates, academics and employees, among others. These user groups can be characterized by distinct needs to be satisfied and even by different expectation levels. Thus, perspectives and viewpoints about the academic e-service quality can present significant differences depending on the considered user group. For example, academics may be greatly concerned by academic e-service aspects only marginally taken into account by employees or students. For this reason, the present work considers the students' viewpoints about the academic e-service quality, students being the key stakeholders of the considered e-service. As regards the existing literature in this field, to the best of the authors' knowledge, it appears to be lacking. Hence, reference was made to the related contexts of software and website quality. First, the ISO/IEC 25010:2011 standard [38,39] was considered, which introduces a software quality model including several relevant quality attributes. In addition, the following references were also considered. Baharuddin et al. [40] analyzed 25 quality dimensions for mobile applications, which were synthesized and prioritized to obtain the 10 most important usability dimensions. Coursaris and Kim [41] adapted an evaluation framework to the context of the mobile computing environment. Han et al. [42] developed an empirical quality model able to point out the functional relationships among usability criteria and user interface aspects. Lupo and Bellomo [43] developed a methodological framework based on a DANP model for evaluating software quality in terms of usability. Orehovački et al. [44] proposed an articulated framework, including six dimensions comprising 33 attributes, for evaluating the quality-in-use of Web 2.0 applications. Seffah [45] unified existing standards and models into a consolidated hierarchical quality model.

4.2. Item Generation and Revision; Development of the Preliminary Questionnaire

This phase was carried out on the basis of the literature analysis and with the support of brainstorming activities involving highly experienced experts in the field. Brainstorming is a technique widely used in the problem-solving context [46], which has found effective applications in diverse settings, such as teaching social studies [47], software development [48], strategies for tourism development [49], etc. The focus was initially on aspects regarding the website usability, since the latter represents a crucial feature as regards the website attractiveness and perceived quality; the related items were thus generated. Subsequently, the focus shifted to the activities that students carry out through the website, since it became evident that the website has to ensure safety, effectiveness and efficiency; also in this case, the related items were generated. Finally, aspects related to the website availability and to the reliability of its fundamental contents were considered, and the related items were generated. From this analysis, 27 items were obtained, as shown in Table 1.
After this phase, the generated items were revised in order to select the most effective ones. For this purpose, the quantitative criterion developed by Lawshe was considered in view of its ease of use and high reliability [50]. In particular, a panel of eight experts was specifically selected to carry out this task and, as regards the validity of both the formulation and the content of the generated items, each involved expert provided one of the following judgments: "essential", "useful, but not essential" or "not essential". Then, the collected evaluations were used to assess the agreement levels among experts through the so-called Content Validity Ratio (CVR) index, with regard to the validity of the items' formulation (CVRF) and content (CVRC), by using the relationship reported below:
CVRx = (NE − N/2) / (N/2),   x ∈ {F, C}
where:
  • NE is the number of experts judging the item as “essential”;
  • N is the number of involved experts.
Wilson et al. [51] provided the validity acceptance/rejection criterion by suggesting reliable critical values for the CVR index. In particular, for a panel of N = 8 experts and a two-tailed significance level of 0.05, the CVR critical value is equal to 0.693, as shown in Table 2.
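For concreteness, the CVR computation and the acceptance check can be scripted in a few lines; the following minimal Python sketch assumes the eight-expert panel above, with illustrative "essential" counts.

```python
# A minimal sketch of Lawshe's CVR and the acceptance check for the
# eight-expert panel; the "essential" counts below are illustrative.
def cvr(n_essential: int, n_experts: int) -> float:
    """Content Validity Ratio: CVR = (NE - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

N, CRITICAL = 8, 0.693  # critical value for N = 8 (Table 2, two-tailed 0.05)
for n_essential in (8, 7, 6):
    value = cvr(n_essential, N)
    verdict = "accepted" if value >= CRITICAL else "rejected"
    print(f"NE = {n_essential}: CVR = {value:+.2f} -> {verdict}")
# NE = 8 -> +1.00 accepted; NE = 7 -> +0.75 accepted; NE = 6 -> +0.50 rejected
```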
Rejected items were excluded or revised according to their detailed evaluation results, while those deemed valid were directly retained. For example, as can be seen from Table 1, the items "The renewal of the password every 120 days is safety" and "Access to the website (login/logout) is safe" were merged into a single item, since both aim at measuring the security construct of the service, thus obtaining the new item "The password management guarantees the access security". Conversely, items that simultaneously focus on several service aspects were split. For example, the item "The appearance of the website is pleasant, and the contents are clear", which simultaneously investigates the "pleasantness of appearance" and the "clarity of the content", was split into two items: "The appearance of the website is pleasant" and "The contents of the website are clear". Finally, items that were considered unclear were reformulated. At the end of this revision process, 20 final items were obtained, as shown in Table 1. These items were then transposed into the preliminary questionnaire, which was used to collect the data needed for the subsequently detailed statistical analyses.

4.3. EFA/CFA and Development of the Final Questionnaire

On the basis of the developed preliminary questionnaire, a web-based investigation was conducted from December 2019 to February 2020, involving a sample of 285 students selected from the engineering, architecture and education science degrees, and the collected data were used to perform EFA and CFA. In particular, for each questionnaire the percentage of missing answers was assessed, and questionnaires with more than 5% missing answers were excluded, i.e., four questionnaires. Moreover, questionnaires presenting the same response level on all the considered items were also excluded, as such patterns suggest a low level of interest of the related respondents (i.e., acquiescent respondents); 21 questionnaires were excluded on this basis. The next phase concerned the testing of the univariate and multivariate normality of the data. The univariate normality was verified via the Kolmogorov–Smirnov and Shapiro–Wilk tests, while the multivariate normality was tested through the multivariate kurtosis coefficient. These analyses highlighted five anomalous questionnaires, which were excluded. The final number of valid questionnaires was 255. To support the EFA and CFA detailed below, the suggestions reported in Hair et al. [52] were considered.
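The screening steps just described can be reproduced with standard tools. The following is a minimal sketch assuming pandas and scipy; the thresholds follow the text, while the file and column names are hypothetical.

```python
# A minimal data-screening sketch mirroring the steps above (thresholds from
# the text; file and column names are hypothetical), using pandas and scipy.
import pandas as pd
from scipy import stats

df = pd.read_csv("raw_questionnaires.csv")  # 285 collected questionnaires

# 1. Exclude questionnaires with more than 5% missing answers.
df = df[df.isna().mean(axis=1) <= 0.05]

# 2. Exclude acquiescent respondents (same response level on every item).
df = df[df.nunique(axis=1) > 1]

# 3. Univariate normality checks, item by item (Shapiro-Wilk and K-S tests);
#    a multivariate kurtosis check would follow on the retained rows.
for col in df.columns:
    x = df[col].dropna()
    _, p_sw = stats.shapiro(x)                        # Shapiro-Wilk
    _, p_ks = stats.kstest(stats.zscore(x), "norm")   # Kolmogorov-Smirnov
    print(f"{col}: Shapiro-Wilk p = {p_sw:.3f}, K-S p = {p_ks:.3f}")
```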

4.4. EFA

The feasibility of EFA needs to be preliminarily verified via suitable tests. In particular, it is necessary to conduct both the Bartlett's test, for verifying the significance of the items correlation matrix, and the Kaiser–Meyer–Olkin (KMO) test, required for confirming the sample size adequacy. In the case treated herein, the Bartlett's test was significant and the KMO test presented a value of 0.87, greater than the threshold of 0.70 (a minimal scripted check is sketched below). Moreover, three factors were extracted considering the maximum likelihood method and according to the Cattell criterion [53]. Table 3 shows the obtained factor loadings, the total variance explained and the items communalities, namely the fraction of the items' information shared with the factorial model.
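The sketch below assumes the third-party factor_analyzer package and a hypothetical file containing the 255 valid questionnaires; both are illustrative assumptions.

```python
# A minimal sketch of the EFA feasibility checks, assuming the
# `factor_analyzer` package and the DataFrame of valid questionnaires.
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

responses = pd.read_csv("valid_questionnaires.csv")  # hypothetical file name

chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)

print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.4f}")  # significant -> proceed
print(f"KMO = {kmo_overall:.2f}")  # > 0.70 -> sample size adequate
```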
As can be seen from Table 3, the factorial model is characterized by some extremely low factor loading values, particularly as regards items 3, 10, 11, 13, 18, 19 and 20, i.e., the items not retained in the final model of Table 4. In addition, these items are also characterized by low communality values, highlighting a poor factorial model fit. Thus, a new EFA was carried out excluding these items, and the related results are shown in Table 4.
The new factorial model obtained was considered satisfactory, since it presents acceptable levels of total variance explained, factor loadings and items communalities. Factor 1, characterized by items 6 (U1), 16 (U2), 1 (U3), 5 (U4), 7 (U5) and 12 (U6), includes the service aspects related to the Website usability (D1). Factor 2, characterized by items 9 (S1), 4 (S2), 14 (S3) and 8 (S4), includes the service aspects related to the Website security (D2). Factor 3, characterized by items 2 (C1), 15 (C2) and 17 (C3), includes the service aspects related to the Website fundamental contents (D3).
Finally, the internal consistency of the extracted factors was verified by means of Cronbach's Alpha:
α = [k/(k − 1)] × (1 − Σᵢ₌₁ᵏ σ²Yi / σ²X)
where:
  • k is the number of items within the factor;
  • σ²X is the variance of the total score;
  • σ²Yi is the variance of item i, with i = 1, …, k.
A Cronbach's Alpha value greater than the threshold of 0.70 reveals an adequate internal consistency [54]; thus, the obtained results, shown in Table 4, were considered satisfactory. In the light of the previous considerations, the EFA results were deemed suitable for performing the CFA detailed below.
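A direct numpy implementation of the formula above may clarify the computation; the score matrix below is randomly generated for illustration only and does not reproduce the study data.

```python
# A direct numpy implementation of Cronbach's Alpha as defined above; the
# score matrix is randomly generated for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (respondents x items) matrix for one extracted factor."""
    k = scores.shape[1]                          # number of items in the factor
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances, sigma^2_Yi
    total_var = scores.sum(axis=1).var(ddof=1)   # total score variance, sigma^2_X
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(255, 6)).astype(float)  # 255 respondents, 6 items
print(round(cronbach_alpha(scores), 2))  # near 0 for random data; >= 0.70 is adequate
```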

4.5. CFA

The factorial model fit was verified by means of the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA). CFI values greater than 0.90 indicate an acceptable model fit. RMSEA ranges from 0 to 1, with smaller values indicating a better fit; a value of 0.06 or less is indicative of an acceptable model fit. Table 5 shows the obtained results.
Finally, the validity of the resulting factorial model as a measurement tool was assessed with reference to the effectiveness of its extracted factors. The convergent validity was tested through the Composite Reliability (CR) and the Average Variance Extracted (AVE). In particular, CR and AVE values greater than or equal to 0.7 and 0.5, respectively, are indicative of a good convergent validity of the factorial model. The discriminant validity was instead tested considering the Maximum Shared Variance (MSV) and the square root of the AVE. In particular, the factorial model can be confirmed in terms of discriminant validity if the MSV values are less than the AVE values, and the values of the AVE square root are greater than the correlation coefficients between factors [55]. Table 6 shows the obtained results: in the factor correlation block, the diagonal values are the AVE square roots, while the off-diagonal values are the correlation coefficients between factors.
On the basis of these results, the framework composed of 13 items within three dimensions, shown in Table 4, represents a valid, reliable and parsimonious ACademic E-service QUALity (ACEQUAL) measurement model. Finally, such a conceptual model was transposed into the final customer satisfaction questionnaire shown in Appendix A.
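To illustrate how these indices follow from standardized loadings, the sketch below computes CR, AVE and the AVE square root from a placeholder loading vector loosely resembling the D1 loadings in Table 4; since Table 6 is based on the CFA estimates, the outputs here are illustrative and will not match it exactly.

```python
# A sketch of the convergent/discriminant validity indices computed from
# standardized factor loadings; the loading vector is a placeholder.
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum(lambda))^2 / ((sum(lambda))^2 + sum(1 - lambda^2))."""
    s = loadings.sum()
    return s**2 / (s**2 + (1.0 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings."""
    return (loadings**2).mean()

lam_d1 = np.array([0.74, 0.71, 0.68, 0.67, 0.64, 0.62])  # placeholder loadings
cr = composite_reliability(lam_d1)
ave = average_variance_extracted(lam_d1)
print(f"CR = {cr:.2f}, AVE = {ave:.2f}, sqrt(AVE) = {np.sqrt(ave):.2f}")
# Discriminant validity: MSV (largest squared inter-factor correlation) must be
# below AVE, and sqrt(AVE) must exceed the inter-factor correlations.
```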

4.6. Survey and Results Analysis

Based on the developed final customer satisfaction questionnaire, a web-based survey was conducted to assess the academic e-service quality provided at the University of Palermo. In particular, 218 students were involved between May and June 2020, using the Google Forms application in consideration of the pandemic period. The students involved, mainly from engineering degrees but also, to a limited extent, from architecture and education sciences, were selected for convenience reasons related to the necessity of carrying out the study within a reasonable time during the first wave of COVID-19. Thus, the respondents' sample was neither purely random nor extended to all the disciplinary areas within the University of Palermo; this represents the main limitation of the present study. Table 7 shows the profile of the involved respondents.
Most of the respondents are aged between 19 and 21 years, 25% of them belong to the 22–24 years class, and about 20% of them to the 25–27 years class. Regarding the frequency of website use, 72% of respondents declared at least weekly use of the academic e-service, confirming that most of them have a high level of knowledge of the e-service features. The collected questionnaires were examined to verify their validity, and 22 questionnaires were excluded as incomplete or related to acquiescent respondents. Thus, in total, 196 questionnaires were considered valid for the analysis. Figure 2 shows the average scores, as well as the related standard deviations, for the investigated ACEQUAL items and the overall satisfaction aspects.
As can be seen, the overall satisfaction levels of the involved students as regards the academic website (G1), the student web portal (G2) and their perceived quality level (G3) are quite high, namely 7.74/10, 8.08/10 and 7.98/10, respectively. Thus, the quality level of the delivered academic e-service, taken as a whole, is perceived as moderately high. As regards the items evaluation, considering the dimension D1 "Website usability", except for the item U4 "It is easy to interact with the website", which scored 3.51/5, all the other items are characterized by average scores of less than 3, which represents the neutrality level of the considered evaluation scale. In particular, the item U3 "The appearance of the website is in harmony with the academic context" is the one with the lowest score, equal to 2.79/5. Taking into account the dimension D2 "Website security", it should be highlighted that all its items are characterized by scores higher than the neutrality level of the evaluation scale. In particular, the item S2 "The users' privacy is protected (personal data protection)" is the one with the highest score, equal to 3.92/5. Finally, regarding the dimension D3 "Website fundamental contents", the item C2 "The availability of pre-filled forms is useful" scored 3.00/5, while the other two items were evaluated with higher scores.
The obtained results are particularly relevant in the light of their high capability to measure, in a precise and accurate way, the latent constructs of the academic e-service quality, which arises from the high validity and reliability of ACEQUAL. Therefore, these results can reliably drive targeted and focused actions aimed at improving the academic e-service quality.

5. Discussion and Conclusions

With reference to the academic e-service, the fundamental purpose of the present work was to show how important the service quality is, and how relevant its accurate and precise measurement can be. Indeed, the competitive pressure related to student recruitment, retention and loyalty that characterizes the higher education sector pushes towards the delivery of highly performing education-related services, such as the academic e-service. In particular, the latter involves all the students at the university and allows them to take advantage of several highly functional and useful services concerning both the educational and the organizational context. Moreover, the quality level perceived by students as regards the academic e-service, as an education-related service, represents an antecedent of their overall satisfaction, which is a crucial aspect for dealing with the current academic competitive context.
For these reasons, and also considering the lack of literature in this relevant field, a parsimonious and robust measurement model of the ACademic E-service QUALity (ACEQUAL) was developed and validated, on the basis of the experts' suggestions and recommendations and taking into account the viewpoints and perspectives of students at the University of Palermo. Furthermore, the relevant aspects regarding both the generation of the fundamental items and the development and validation process of the service quality measurement model were based on an in-depth literature analysis. The suitability of ACEQUAL for measuring the academic e-service quality was established via its convergent and discriminant validity, namely the suitability of the ACEQUAL items for measuring the academic e-service quality constructs and the satisfactory discrimination level among the meanings covered by the ACEQUAL dimensions, respectively. Thus, the capability of ACEQUAL to measure the academic e-service quality in an accurate and precise way was quantitatively supported. By contrast, to the best of the authors' knowledge, the methodologies developed in the literature in this field of investigation are typically based on Multi-Criteria Decision-Making (MCDM) approaches. In particular, the academic e-service quality is typically scored on the basis of suitable criteria, and possibly underlying items, directly generated from experts' and decision makers' proposals, literature analyses and, in some cases, survey results. In other words, quantitative analyses able to show the model's capability to effectively measure the service quality are not considered. In this respect, for more details, the reader can refer to [56,57,58,59]. Thus, the present work represents a first attempt at evaluating the quality of the academic e-service via a measurement model whose validity and reliability are quantitatively supported. This aspect represents the main contribution of the present paper. Moreover, in the present work, it was deemed necessary to propose an effective methodological approach to support the structuring process of a service quality measurement model. Such an approach unifies and consolidates the use of techniques and methods whose effectiveness and practicality have been widely shown in numerous and diversified industrial settings. For example, the implementation of brainstorming activities, together with quantitative approaches such as that developed by Lawshe [50], for carrying out the item generation and revision phases represents, in the authors' view, an innovation in this field of inquiry. Finally, a survey was conducted, and the service quality shortcomings and criticalities it highlighted were discussed.
Future research developments in this area may concern two distinct directions. In the first instance, the student sample required to carry out the development and validation of the measurement model could be defined in a fully random manner and extended to all the disciplinary areas of the University of Palermo, so as to take into account more diversified viewpoints and perspectives about the academic e-service quality. In this way, the main limitation of the present study, namely the representativeness level of the considered sample, which was defined with some limitations due to the pandemic period, could be overcome. Additionally, such a study could involve further academic settings. Indeed, different performance levels of the provided academic e-services in terms of website usability, security and contents, but also with reference to interactivity and user interfaces, which were not relevant in the considered study, could generate different expectation/need levels in the related stakeholders. As a consequence, the service quality structure obtained herein, in terms of dimensions and related items, might not be confirmed. Nevertheless, such an extension would allow a better generalization, as well as greater robustness, of the ACEQUAL model.

Author Contributions

Conceptualization, T.L.; data curation, E.B.; formal analysis, T.L. and E.B.; investigation, E.B.; methodology, T.L. and E.B.; project administration, T.L.; supervision, T.L.; writing—original draft preparation, T.L.; writing—review and editing, T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Student Satisfaction Questionnaire

The purpose of this questionnaire is to investigate how the student evaluates the quality of the provided academic e-service. The questionnaire is strictly anonymous. Thank you for your collaboration.
Sex: Male / Female
Age: 19–21 / 22–24 / 25–27 / over 27
Frequency of the academic e-service use: Daily / Weekly / Monthly
Below are shown the questionnaire items, with reference to aspects related to the usability, security and fundamental contents of the student website. The 5-point scale detailed below is used to assess the level of your quality perception:
(1) Strongly disagree; (2) Disagree; (3) Neutral; (4) Agree; (5) Strongly agree
Website usability
U1. The appearance of the website is pleasant (1–5)
U2. The website is easy to use (1–5)
U3. The appearance of the website is in harmony with the academic context (1–5)
U4. It is easy to interact with the website (1–5)
U5. Surfing on the website is simple (1–5)
U6. The structure of the website is adaptable to mobile devices (1–5)
Website security
S1. The available information is reliable (1–5)
S2. The users' privacy is protected (personal data protection) (1–5)
S3. Confidentiality during booking exams is assured by the website (1–5)
S4. Payment and enrollment transactions are supported with confidence (1–5)
Website fundamental contents
C1. The website provides comprehensive information (1–5)
C2. The availability of pre-filled forms is useful (1–5)
C3. Study plans are easy to find out (1–5)
Below are shown the final items of the questionnaire, related to your overall satisfaction. Here, the 10-point scale detailed below is used:
(1) Strongly dissatisfied; …; (5) Neutral; …; (10) Strongly satisfied.
G1. Express your overall satisfaction level about the academic website (1–10)
G2. Express your overall satisfaction level about your student web portal (1–10)
G3. Express your overall perceived e-service quality level (1–10)

References

1. Buer, S.V.; Fragapane, G.I.; Strandhagen, J.O. The data-driven process improvement cycle: Using digitalization for continuous improvement. IFAC-PapersOnLine 2018, 51, 1035–1040.
2. Lupo, T.; Bellomo, E. DINESERV along with fuzzy hierarchical TOPSIS to support the best practices observation and service quality improvement in the restaurant context. Comput. Ind. Eng. 2019, 137, 106046.
3. Lupo, T.; Cusumano, M. Towards more equity concerning quality of Urban Waste Management services in the context of cities. J. Clean. Prod. 2018, 171, 1324–1341.
4. Bringula, R.P. Influence of faculty- and web portal design-related factors on web portal usability: A hierarchical regression analysis. Comput. Educ. 2013, 68, 187–198.
5. Harrati, N.; Bouchrika, I.; Tari, A.; Ladjailia, A. Exploring user satisfaction for e-learning systems via usage-based metrics and system usability scale analysis. Comput. Hum. Behav. 2016, 61, 463–471.
6. Alves, H.; Raposo, M. Conceptual model of student satisfaction in higher education. Total Qual. Manag. Bus. Excel. 2007, 18, 571–588.
7. Thomas, S. What drives student loyalty in universities: An empirical model from India. Int. Bus. Res. 2011, 4, 183–192.
8. Lupo, T. A fuzzy ServQual based method for reliable measurements of education quality in Italian higher education area. Expert Syst. Appl. 2013, 40, 7096–7110.
9. Seth, N.; Deshmukh, S.G.; Vrat, P. Service quality models: A review. Int. J. Qual. Reliab. Manag. 2005, 22, 913–949.
10. Rust, R.T.; Oliver, R.L. (Eds.) Service Quality: New Directions in Theory and Practice; Sage Publications: Thousand Oaks, CA, USA, 1993.
11. Petrick, J.F. First timers and repeaters perceived value. J. Travel Res. 2004, 42, 397–407.
12. Lehtinen, U.; Lehtinen, J.R. Service Quality: A Study of Quality Dimensions; Service Management Institute: Helsinki, Finland, 1982.
13. Rust, R.T.; Oliver, R.W. Video dial tone: The new world of services marketing. J. Serv. Mark. 1994, 8, 5–16.
14. Grönroos, C. Service Management and Marketing; Lexington Books: Lexington, MA, USA, 1990; Volume 94.
15. Parasuraman, A.; Zeithaml, V.A.; Berry, L.L. SERVQUAL: A multiple item scale for measuring customer perceptions of service quality. J. Retail. 1988, 64, 12–40.
16. Cronin, J.J.; Taylor, A.S. Measuring service quality: A reexamination and extension. J. Mark. 1992, 56, 55–68.
17. Danbert, S.J.; Pivarnik, J.M.; McNeil, R.N.; Washington, I.J. Academic success and retention: The role of recreational sports fitness facilities. Recreat. Sports J. 2014, 38, 14–22.
18. Helgesen, O.; Nesset, E. What accounts for students' loyalty? Some field study evidence. Int. J. Educ. Manag. 2007, 21, 126–143.
19. Letcher, D.W.; Neves, J.S. Determinants of undergraduate business student satisfaction. Res. High. Educ. J. 2010, 6, 1.
20. Italian Legislative Decree n. 19/2012. Available online: https://www.anvur.it/wp-content/uploads/2015/02/2.%20Dlg%2019_2012.pdf (accessed on 1 October 2021).
21. International Organization for Standardization. Quality Management Systems; ISO 9001:2015; International Organization for Standardization: Geneva, Switzerland, 2015.
22. Loiacono, E.T.; Watson, R.T.; Goodhue, D.L. WebQual: A measure of website quality. Mark. Theory Appl. 2002, 13, 432–438.
23. Ajzen, I.; Madden, T.J. Prediction of goal-directed behavior: Attitudes, intentions, and perceived behavioral control. J. Exp. Soc. Psychol. 1986, 22, 453–474.
24. Yoo, B.; Donthu, N. Developing a scale to measure the perceived quality of an Internet shopping site (SITEQUAL). Q. J. Electron. Commer. 2001, 2, 31–45.
25. Barnes, S.J.; Vidgen, R.T. An integrative approach to the assessment of e-commerce quality. J. Electron. Commer. Res. 2002, 3, 114–127.
26. Zeithaml, V.A. Service excellence in electronic channels. Manag. Serv. Qual. Int. J. 2002, 12, 135–139.
27. Wolfinbarger, M.; Gilly, M.C. eTailQ: Dimensionalizing, measuring and predicting etail quality. J. Retail. 2003, 79, 183–198.
28. Bauer, H.H.; Falk, T.; Hammerschmidt, M. eTransQual: A transaction process-based approach for capturing service quality in online shopping. J. Bus. Res. 2006, 59, 866–875.
29. Parasuraman, A.; Zeithaml, V.A.; Malhotra, A. E-S-QUAL: A multiple-item scale for assessing electronic service quality. J. Serv. Res. 2005, 2, 213–233.
30. Ding, D.X.; Hu, P.J.H.; Sheng, O. E-SELFQUAL: A scale for measuring online self-service quality. J. Bus. Res. 2011, 64, 508–515.
31. DeVellis, R.F. Scale Development: Theory and Applications; Sage Publications: Thousand Oaks, CA, USA, 2016; Volume 26.
32. Lupo, T.; Delbari, S.A. A knowledge-based exploratory framework to study quality of Italian mobile telecommunication services. Telecommun. Syst. 2018, 68, 129–144.
33. Fabrigar, L.R.; Wegener, D.T. Exploratory Factor Analysis; Oxford University Press: Oxford, UK, 2011.
34. Brown, T. Confirmatory Factor Analysis for Applied Research; The Guilford Press: New York, NY, USA, 2015; p. 72.
35. Yong, A.G.; Pearce, S. A beginner's guide to factor analysis: Focusing on exploratory factor analysis. Tutorials Quant. Methods Psychol. 2013, 9, 79–94.
36. Costello, A.B.; Osborne, J. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Pract. Assess. Res. Eval. 2005, 10, 1–7.
37. Barbaranelli, C.; Natali, E. Psychological Tests. Psychometric Theories and Models; Carocci: Rome, Italy, 2005.
38. International Organization for Standardization. Product Quality Standard; ISO/IEC 25010:2011; International Organization for Standardization: Geneva, Switzerland, 2011.
39. International Organization for Standardization. Quality in Use Standard; ISO/IEC 25010:2011; International Organization for Standardization: Geneva, Switzerland, 2011.
40. Singh, D.; Razali, R. Usability dimensions for mobile applications: A review. Res. J. Appl. Sci. Eng. Technol. 2013, 5, 2225–2231.
41. Coursaris, C.K.; Kim, D.J. A meta-analytical review of empirical mobile usability studies. J. Usability Stud. 2011, 6, 117–171.
42. Han, S.H.; Yun, M.H.; Kim, K.-J.; Kwahk, J. Evaluation of product usability: Development and validation of usability dimensions and design elements based on empirical models. Int. J. Ind. Ergon. 2000, 26, 477–488.
43. Lupo, T.; Bellomo, E. A methodological framework based on a DANP model for evaluating the software quality in terms of usability: A preliminary investigation on mobile operating systems. Decis. Sci. Lett. 2020, 521–536.
44. Orehovački, T.; Granic, A.; Kermek, D. Evaluating the perceived and estimated quality in use of Web 2.0 applications. J. Syst. Softw. 2013, 86, 3039–3059.
45. Seffah, A.; Donyaee, M.; Kline, R.B.; Padda, H.K. Usability measurement and metrics: A consolidated model. Softw. Qual. J. 2006, 14, 159–178.
46. Rawlinson, J.G. Creative Thinking and Brainstorming; Routledge: London, UK, 2017.
47. Alshammari, M.K. Effective brainstorming in teaching social studies for elementary school. Am. Int. J. Contemp. Res. 2015, 5, 70–75.
48. Filippova, A.; Trainer, E.; Herbsleb, J. From diversity by numbers to diversity as process: Supporting inclusiveness in software development teams with brainstorming. In Proceedings of the IEEE/ACM 39th International Conference on Software Engineering, Buenos Aires, Argentina, 20–28 May 2017.
49. Martelo, R.J.; Acevedo, D.; Fong, W. Definition of strategies for tourism in Cartagena through brainstorming and problem trees. Contemp. Eng. Sci. 2018, 11, 1051–1058.
50. Lawshe, C.H. A quantitative approach to content validity. Pers. Psychol. 1975, 28, 563–575.
51. Wilson, F.R.; Pan, W.; Schumsky, D.A. Recalculation of the critical values for Lawshe's content validity ratio. Meas. Eval. Couns. Dev. 2012, 45, 197–210.
52. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 7th ed.; Pearson Education Limited: London, UK, 2013.
53. Cattell, R.B. The scree test for the number of factors. Multivar. Behav. Res. 1966, 1, 245–276.
54. Gliem, J.A.; Gliem, R.R. Calculating, interpreting, and reporting Cronbach's alpha reliability coefficient for Likert-type scales. In Proceedings of the Midwest Research-to-Practice Conference in Adult, Continuing, and Community Education, Columbus, OH, USA, 8–10 October 2003.
55. Gaskin, J.; Lim, J. Model Fit Measures. Gaskination's StatWiki. 2016. Available online: http://statwiki.gaskination.com/index.php?title=Main_Page (accessed on 1 October 2021).
56. Nagpal, R.; Mehrotra, D.; Bhatia, P.; Sharma, A. FAHP approach to rank educational websites on usability. Int. J. Comput. Digit. Syst. 2015, 4, 251–260.
57. Nagpal, R.; Mehrotra, D.; Bhatia, P.; Sharma, A. Rank university websites using fuzzy AHP and Fuzzy TOPSIS approach on usability. Int. J. Inf. Eng. Electron. Bus. 2015, 10, 29–36.
58. Jain, D.; Garg, R.; Bansal, A. A parameterized selection and evaluation of e-learning websites using TOPSIS method. Int. J. Res. Dev. Technol. Manag. Sci. 2015, 22, 12–26.
59. Delice, E.K.; Gungor, Z. The usability analysis with heuristic evaluation and analytic hierarchy process. Int. J. Ind. Ergon. 2009, 39, 934–939.
Figure 1. Development process of a service quality measurement model.
Figure 2. Survey results.
Table 1. Generated items, CVR analysis and item revision results.

Generated Item | CVRF | CVRC | Item | Revision
The website appearance is in harmony with the academic context | >0.69 | >0.69 | Item 1 | Included
The website provides comprehensive information | >0.69 | >0.69 | Item 2 | Included
The response times of the website are adequate | >0.69 | >0.69 | Item 3 | Included
The users' privacy is protected (personal data protection) | >0.69 | >0.69 | Item 4 | Included
It is easy to interact with the website | >0.69 | >0.69 | Item 5 | Included
The appearance of the website is pleasant and the contents are clear | <0.69 | >0.69 | Items 6 and 20 | Split: "The appearance of the website is pleasant" (Item 6); "The contents of the website are clear" (Item 20)
Surfing on the website is simple | >0.69 | >0.69 | Item 7 | Included
The website supports operations with confidence (payments/registration) | <0.69 | >0.69 | Item 8 | Modified: "Payment and enrollment transactions are supported with confidence"
The available information is reliable | >0.69 | >0.69 | Item 9 | Included
The website allows direct communication with professors | >0.69 | >0.69 | Item 10 | Included
The structure of the website is adaptable to mobile devices | <0.69 | >0.69 | Item 11 | Modified: "The website is easy to use on the smartphone"
The website is well optimized for mobile devices | >0.69 | >0.69 | Item 12 | Included
The access to the website (login/logout) is safe | <0.69 | >0.69 | Item 13 | Merged: "The password management guarantees the access security"
The renewal of the password every 120 days is safety | <0.69 | >0.69 | (merged into Item 13) |
Confidentiality during booking exams is assured by the website | >0.69 | >0.69 | Item 14 | Included
The availability of pre-filled forms is useful | >0.69 | >0.69 | Item 15 | Included
The website is easy to use | >0.69 | >0.69 | Item 16 | Included
Study plans are easy to find out | >0.69 | >0.69 | Item 17 | Included
Accessibility to the different sections of the website is easy | >0.69 | >0.69 | Item 18 | Included
The search function is effective | >0.69 | >0.69 | Item 19 | Included
The website is updated promptly | >0.69 | >0.69 | – | Excluded (not essential)
The website implements secretarial services | >0.69 | <0.69 | – | Excluded (not essential)
The booking limit for exams is adequate | >0.69 | <0.69 | – | Excluded (not essential)
The website is multi-language | >0.69 | <0.69 | – | Excluded (not essential)
The website allows the quick execution of payment/registration operations | <0.69 | >0.69 | – | Excluded (redundant)
The payment methods are simple and safe | <0.69 | >0.69 | – | Excluded (redundant)
Access to the website (login/logout) is simple | >0.69 | <0.69 | – | Excluded (not essential)
Table 2. Critical CVR values [51].

N | 0.2 | 0.1 | 0.05 | 0.02 | 0.01 | 0.002   (level of significance for a two-tailed test)
5 | 0.573 | 0.736 | 0.877 | 0.990 | 0.990 | 0.990
6 | 0.523 | 0.672 | 0.800 | 0.950 | 0.990 | 0.990
7 | 0.485 | 0.622 | 0.741 | 0.879 | 0.974 | 0.990
8 | 0.453 | 0.582 | 0.693 | 0.822 | 0.911 | 0.990
9 | 0.427 | 0.548 | 0.653 | 0.775 | 0.859 | 0.990
10 | 0.405 | 0.520 | 0.620 | 0.736 | 0.815 | 0.977
11 | 0.387 | 0.496 | 0.591 | 0.701 | 0.777 | 0.932
12 | 0.370 | 0.475 | 0.566 | 0.671 | 0.744 | 0.892
Table 3. EFA results.

Item | Factor 1 | Factor 2 | Factor 3 | Communality
Item 6 | 0.70 | – | – | 0.59
Item 16 | 0.63 | – | – | 0.47
Item 1 | 0.62 | – | – | 0.45
Item 12 | 0.62 | – | – | 0.50
Item 7 | 0.61 | – | – | 0.46
Item 5 | 0.60 | – | – | 0.42
Item 18 | 0.59 | – | – | 0.46
Item 19 | 0.53 | – | – | 0.29
Item 11 | 0.45 | – | – | 0.36
Item 3 | 0.37 | – | – | 0.27
Item 10 | 0.22 | – | – | 0.10
Item 9 | – | 0.73 | – | 0.55
Item 4 | – | 0.70 | – | 0.49
Item 14 | – | 0.64 | – | 0.47
Item 8 | – | 0.57 | – | 0.45
Item 13 | – | 0.37 | – | 0.23
Item 2 | – | – | 0.75 | 0.64
Item 15 | – | – | 0.58 | 0.43
Item 17 | – | – | 0.53 | 0.38
Item 20 | – | – | 0.37 | 0.20
Total variance explained: 41.24%
Table 4. New EFA results.

Item | Factor 1 (D1) | Factor 2 (D2) | Factor 3 (D3) | Communality | Cronbach's Alpha
Item 6 (U1) | 0.74 | – | – | 0.51 | 0.84
Item 16 (U2) | 0.71 | – | – | 0.50 |
Item 1 (U3) | 0.68 | – | – | 0.44 |
Item 5 (U4) | 0.67 | – | – | 0.44 |
Item 7 (U5) | 0.64 | – | – | 0.47 |
Item 12 (U6) | 0.62 | – | – | 0.47 |
Item 9 (S1) | – | 0.79 | – | 0.58 | 0.78
Item 4 (S2) | – | 0.72 | – | 0.48 |
Item 14 (S3) | – | 0.64 | – | 0.47 |
Item 8 (S4) | – | 0.55 | – | 0.47 |
Item 2 (C1) | – | – | 0.79 | 0.63 | 0.71
Item 15 (C2) | – | – | 0.67 | 0.45 |
Item 17 (C3) | – | – | 0.53 | 0.47 |
Total variance explained: 49.77%
Table 5. Model fitting indices.

Index | Value | Recommended Value
CFI | 0.96 | >0.90
RMSEA | 0.05 | <0.06
Table 6. Convergent and discriminant validity indices (in the factor correlation block, the diagonal values are the AVE square roots; the off-diagonal values are the correlation coefficients between factors).

Dimension | CR | AVE | MSV | (D1) | (D2) | (D3)
Website usability (D1) | 0.80 | 0.50 | 0.36 | 0.71 | – | –
Website security (D2) | 0.76 | 0.52 | 0.22 | 0.43 | 0.72 | –
Website fundamental contents (D3) | 0.76 | 0.51 | 0.36 | 0.60 | 0.48 | 0.71
Table 7. Profiles of respondents.

Characteristic | Category | No. Respondents | %
Sex | Female | 105 | 52
Sex | Male | 113 | 48
Age | 19–21 | 95 | 44
Age | 22–24 | 55 | 25
Age | 25–27 | 43 | 20
Age | More than 27 | 25 | 11
Frequency of website use | Daily | 65 | 30
Frequency of website use | Weekly | 92 | 42
Frequency of website use | Monthly | 61 | 28
