
Construction of the Quality Evaluation Index System of MOOC Platforms Based on the User Perspective

1 Faculty of Education, Fujian Normal University, Fuzhou 350117, China
2 College of Chinese Language and Literature, Fujian Normal University, Fuzhou 350117, China
3 School of Economics & Management, Xiamen University of Technology, Xiamen 361024, China
* Author to whom correspondence should be addressed.
Sustainability 2021, 13(20), 11163; https://doi.org/10.3390/su132011163
Submission received: 5 September 2021 / Revised: 4 October 2021 / Accepted: 4 October 2021 / Published: 9 October 2021

Abstract
Massive open online courses (MOOCs) have become a mainstream form of online learning. At present, various countries are vigorously developing MOOC platforms, which provide a helpful way for people to acquire knowledge and skills. However, the quality of MOOC platforms varies, which makes it challenging for learners to find excellent courses. Since the evaluation of MOOC quality is a multiple criteria decision-making issue, it is important to identify the major dimensions and criteria that determine platform quality. This paper determines the weight of each dimension and criterion by using the best worst method (BWM). The results indicate that content accuracy has the greatest impact on MOOC quality. This paper selected five well-known domestic MOOC websites as research objects and used the VIKOR analysis method to rank the platform quality of the five chosen websites. The results show that IMOOC and XuetangX are ranked as the top two websites. These research results help learners deepen their understanding of MOOC platforms and can serve as a reference for MOOC platforms seeking to improve their quality. Techniques that reduce the uncertainty of expert judgments (such as rough sets, fuzzy theory, and grey correlation) and models that clarify the influence relationships between criteria (DEMATEL-ANP) can be applied in future research.

1. Introduction

With the rapid development of internet information technology, the emergence of online education has changed traditional teaching methods, enabled learning across time and space, and made learning channels flexible and diverse. In 2008, the scholars Cormier and Alexander proposed the concept of massive open online courses (MOOCs). Subsequently, excellent MOOC platforms such as Coursera, EdX, and Udacity successively appeared outside of China, while Chinese MOOC platforms such as XuetangX and Open Learning are also gradually developing. Although MOOC education is very popular, it also has many problems, such as low course completion rates, a lack of motivation for learning, difficulty in self-regulated learning, wasted course resources, and an inability to adapt to the complex and changeable online education environment. These problems have led to low passing rates and uneven evaluation systems for MOOCs. Cabrera and Fernández-Ferrer [1] noted that the main teaching limitation of MOOCs lay in teaching evaluation activities and that it was necessary to explore online evaluation systems to make MOOCs a new teaching and learning method in the future and to ensure their survival and development. This study attempts to establish a scientific, effective, and reasonable MOOC platform quality evaluation index system to fill this research gap.
As MOOC platforms involve many factors, such as course content [2,3], course form [4], and website design [5,6], the evaluation of MOOC platform quality is a multicriteria decision-making (MCDM) problem. Tzeng et al. [7] proposed a new hybrid MCDM model that breaks the assumption of independence between criteria by combining factor analysis with the Decision-Making Trial and Evaluation Laboratory (DEMATEL) to effectively evaluate online education. Lin [8] and Qiu and Ou [9] applied the fuzzy analytic hierarchy process (AHP) to obtain the weights of an indicator system and to evaluate the quality of specific MOOC courses in order to improve the quality of MOOCs. These studies have improved the evaluation system of educational service quality to a certain extent.
This paper is divided into six sections. Section 1 introduces the research background of MOOCs, the current dilemma, and the research purpose and significance. Section 2 reviews the literature and summarizes research on the quality of MOOC platforms and related methods. Section 3 presents the BWM model used to calculate the index weights, with the following five MOOC platforms used for the empirical analysis: MOOC.CN, IMOOC, XuetangX, Study.163, and Open Learning. The data analysis is implemented in Section 4. The interpretation of the results, theoretical applications, and implications are discussed in Section 5. Finally, Section 6 summarizes the development status of MOOCs and provides suggestions for their future development.

2. Literature Review

MOOCs provide an opportunity to expand access to and participation in education, and the massive open nature of MOOCs allows learners to control their learning progress. Huang et al. [10] studied this topic and concluded that the development of online education is closely related to the indicators that affect platform quality. Terras and Ramsay [11] emphasized the importance of designing and developing MOOCs from the perspective of learners' psychology and discussed how learners can self-regulate and overcome psychological barriers to use MOOCs effectively.
Scholars have conducted research on the evaluation of MOOC education quality across various dimensions. Yousef et al. [6] identified six categories of standards for MOOC design quality, namely, instructional design, assessment, user interface, video content, social tools, and learning analytics. Lin [8] proposed dividing website quality into four standards: system quality, information quality, service quality, and attractiveness; these determine the relative weights of the website quality standards and improve the effectiveness of a website. Chiu et al. [12] claimed that satisfaction plays an important role in online learning and that information, system, service, procedure, interaction, and other factors can also affect learners' satisfaction. Qi and Liu [13] utilized latent Dirichlet allocation (LDA), an autoencoder, and a text classification model to establish a curriculum evaluation system based on MOOC reviews. Combining these ideas with a theoretical framework, Drake et al. [14] proposed five principles for MOOC design: courses should be meaningful, attractive, measurable, accessible, and extensible. Miranda et al. [15] proposed using data mining and fuzzy set methods to evaluate MOOCs and obtained an evaluation framework with five first-level indicators: course content, instructional design, interface design, media technology, and curriculum management. Nie et al. [16] proposed a systematic approach to diagnosing and evaluating the quality of MOOC courses that integrates a standardized rubric, expert feedback, data mining, and emotion detection into the AHP. Rong et al. [17] constructed a MOOC evaluation index system containing 6 elements and 16 indicators and used a multigranular unbalanced hesitant fuzzy linguistic term set (MGUHFLTS) to describe these indicators.
From various perspectives, many studies have indicated that the quality of education services is an important driver of learners' satisfaction and of attracting new learners. Selim [18] used the perceived usefulness and ease-of-use constructs of the Technology Acceptance Model (TAM) to evaluate college students' acceptance of a course website and proved that practicality and ease of use are key factors in improving the acceptance of course websites. Through investigation and research, Sun et al. [19] showed that teachers' attitudes toward teaching courses, students' attitudes toward the curriculum, flexibility of use, and assessment diversity were key factors influencing learners' perceived satisfaction. Yepes-Baldó et al. [5] provided quality indicators related to instructional design and platforms for MOOC developers to better improve the education quality of online learning platforms.
In this paper, after referring to the relevant studies of the above experts, the MOOC platform quality evaluation index system is constructed from the four dimensions of system function, teaching resources, teaching effect, and social interaction, with 20 secondary indicators under these dimensions, as follows:

2.1. System Function (X1)

When analyzing the quality of MOOC platforms, Lin [20] proposed system function reliability, accessibility, and acceptable response time as factors; these factors could help improve the learning outcomes of students on MOOC platforms. Lin [8] and Büyüközkan et al. [21] both mentioned the dimension of system function in their articles, but their focuses varied: Lin [8] mentioned updating learning materials and claimed that the update frequency was an important indicator for evaluating e-learning. MOOC platforms provide interface functions for accessing social networks (such as Twitter, Facebook, and LinkedIn), which can facilitate interaction between participants [2,3]. In the opinion of Büyüközkan et al. [21], system confidentiality is an important foundation of website systems, and data confidentiality is the standard by which to evaluate whether a website is safe. Fesol and Salam [22] and Ossiannilsson et al. [23] discussed how the flexibility of learning can make learning more convenient and save time for learners. Liu et al. [24] believed that interface design elements, including the layout, navigation, and links, affect the user experience. Tools such as navigational maps or frames with indices can ensure quality and help students understand their progress, where they are, and what tasks remain to be completed [3,5,25]. Based on the above literature, the indicators under the system function dimension comprise flexibility of use, functional diversity, system reliability, system confidentiality, update frequency, and learning navigation.

2.2. Teaching Resources (X2)

High-quality teaching resources are the foundation of the successful operation of MOOCs. The clear organization and structure of course content are important indicators of the quality of teaching resources [5]. Chiu et al. [12] believed that users can improve their professional skills and knowledge through MOOCs and that high-quality course materials are crucial to their learning satisfaction. Tzeng et al. [7] noted that course websites that provide learners with more accurate and complete information resources help them better understand the learning materials. Dehghani et al. [26] designed a conceptual model for identifying teachers' competence in MOOCs and found that professional skills (instructional content development, instructional design, evaluation, communication, participation, management, and technical skills) were one of the main categories. Li et al. [27] and Li et al. [28] mentioned the importance of personalized learning: with the integration of MOOCs and adaptive learning technologies, MOOC users can easily access personalized support to effectively meet their learning needs, and students can adjust the learning content, processes, and activities according to their personal characteristics and learning difficulty. Castaño et al. [2] suggested that the use of diverse resources helps to focus attention on the curriculum, and the richness of course resources is an important indicator [5]. In addition to the accuracy, completeness, specialty, and personalization of teaching resources as quality indicators, this paper also adds the richness of teaching resources, which gives a more comprehensive evaluation of teaching resources.

2.3. Teaching Effect (X3)

The teaching effect is the scale used to measure the quality of classroom teaching. Course evaluation mirrors course quality [29]. Rotgans and Schmidt [30] showed that the instructional design elements of MOOCs give learners autonomy, which may motivate learners to enjoy courses, devote more energy to understanding the learning content, continue to participate in classroom activities, and develop positive learning emotions. Instructional design is an important part of course development, especially for MOOCs, which urgently require effective methods to attract many diverse learners [31]. Instructional design quality is positively correlated with MOOC ranking [32]. There is a direct link between a well-designed course and the motivation of its participants; thus, students can be motivated by the attractiveness of the course content [2]. Therefore, the teaching effect dimension includes attraction, brand effect, instructional design, and course evaluation.

2.4. Social Interaction (X4)

Social interaction is an important dimension in evaluating the quality of MOOCs. When Marks [33] studied online learning, he found three types of interaction behaviors: teachers with students, students with students, and students with content. The interaction between teachers and students mainly includes online and offline methods. High-quality learner activities and learner interactions are important parts of high-quality MOOCs [25]. Fesol and Salam [22] proposed that online interaction can deepen mutual understanding between teachers and students in a timely manner. Yepes-Baldó et al. [5] used the communication of courses through offline resources (communication methods, flyers, posters, brochures, etc.) as one of the indicators for measuring the quality of MOOC platforms. Liu et al. [24] believed that interactivity is another important factor affecting MOOC learners' engagement. In addition, Li et al. [28] believed that companies working with third-party testing institutions could address cheating and plagiarism in online testing, thereby enhancing academic integrity and users' trust in MOOC learning. Through the above analysis, this study found that user trust can also be used as an indicator under the social interaction dimension.
In summary, the specific references of the dimensions and indicator sources of this paper are listed in Table 1.
This paper selected five MOOC websites as research objects, namely, MOOC.CN (V1), IMOOC (V2), XuetangX (V3), Study.163 (V4), and Open Learning (V5). These five MOOC platforms are well-known websites in China; they were established relatively early and have many users and course resources. MOOC.CN offers rich online education resources, free learning, and a high degree of openness to users. IMOOC is a specialized internet IT skills learning website with more than 21.5 million users, more than 1500 cooperative lecturers, and more than 3000 self-produced courses. XuetangX is a MOOC platform initiated by Tsinghua University in October 2013; it is a research exchange and achievement application platform of the Online Education Research Center of the Ministry of Education. Study.163 is an online practical skills learning platform created by NetEase, officially launched at the end of December 2012. Open Learning is now the largest MOOC learning community in China; it has gathered nearly one million learners and is committed to providing Chinese users with a platform to select courses, comment on them, and communicate and share them. Although many MOOC platforms charge fees, IMOOC (V2) follows a free model, and the index system contains no indicators for additional services such as follow-up support; this is a shortcoming of the dimension and indicator selection in this paper.

3. The Research Methods

As a branch of the decision-making management research field, MCDM involves many methods, such as the AHP, ANP, DEMATEL, TOPSIS, and grey relational analysis. MCDM models have been widely used in online education, e-commerce, supply chain management, energy management, and other industries [34,35,36]. Sadi-Nezhad et al. [34] proposed a fuzzy analytic network process (FANP) model to evaluate network learning systems and used it to evaluate the existing network learning platforms of some universities. In this study, the best worst method (BWM) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) were adopted. The BWM is an improvement on the AHP: instead of full pairwise comparisons, only the best and worst dimensions or indicators are compared with the remaining ones. This processing reduces the number of comparisons, simplifies the calculation, and greatly reduces the amount of data; furthermore, the smaller the consistency indicator value, the higher the reliability of the acquired data [37]. You et al. [38] used the BWM to determine the weights of the evaluation standard of power grid operations and evaluated the operating performance of power grid enterprises. Gupta et al. [39] and van de Kaa et al. [40] used the BWM to determine the relative importance of green innovation factors, in studies of supplier selection based on green innovation ability and of the competition between battery and fuel cell electric vehicles, respectively. VIKOR is an MCDM method first developed by Opricovic in 1998 for solving complex decision situations [41]; it is used to solve discrete decision-making problems with incommensurable and inconsistent criteria [42]. Shojaei et al. [41] used the VIKOR technique to propose an evaluation and ranking model for the performance of airports.

3.1. Introduction of BWM

With the best worst method (BWM), the decision maker does not need to compare all pairs of criteria as in the traditional analytic hierarchy process. After determining the best dimension, the worst dimension, and the best and worst criteria under each dimension through expert interviews, the method only requires pairwise comparisons between the best criterion and the other criteria and between the other criteria and the worst criterion. The BWM provides more consistent comparisons than the AHP, and the metric weights obtained by the BWM are highly reliable [43]. For fully consistent comparison systems, the BWM produces a unique solution; for comparison systems with three or more criteria that are not fully consistent, multiple optimal solutions are possible, and the weights can be reported as intervals [44]. The specific steps are as follows:
Step 1: Determine the set of decision indicators. The decision maker identifies the $n$ indicators $\{Z_1, Z_2, Z_3, \ldots, Z_n\}$ used for decision making.
Step 2: Determine the best and worst indicators. The decision maker selects the best (the most desirable, preferred, or important) and the worst (the least desirable or least important) indicators from n indicators.
Step 3: Compare the best indicator with the other indicators. The decision maker rates the relative importance of the best indicator over each other indicator on a scale of 1 to 9. The resulting best-to-others (BO) vector is expressed as $A_b = (a_{b1}, a_{b2}, \ldots, a_{bn})$, where $a_{bj}$ represents the importance of the best indicator $b$ compared with indicator $j$. Obviously, $a_{bb} = 1$.
Step 4: Compare the other indicators with the worst indicator. The decision maker uses the same scale of 1 to 9 to rate the relative importance of each indicator over the worst. The resulting others-to-worst (OW) vector is expressed as $A_w = (a_{1w}, a_{2w}, \ldots, a_{nw})^T$, where $a_{jw}$ represents the importance of indicator $j$ compared with the worst indicator $w$. Obviously, $a_{ww} = 1$.
Step 5: Determine the optimal weights $(w_1^*, w_2^*, \ldots, w_n^*)$. The optimal weights are those for which the maximum absolute difference among $\left| \frac{w_b}{w_j} - a_{bj} \right|$ and $\left| \frac{w_j}{w_w} - a_{jw} \right|$ over all $j$ is minimized. This can be written as the following minimax model:

$$\min_{w} \max_{j} \left\{ \left| \frac{w_b}{w_j} - a_{bj} \right|, \left| \frac{w_j}{w_w} - a_{jw} \right| \right\} \quad \text{s.t.} \quad \sum_j w_j = 1, \quad w_j \geq 0 \text{ for all } j \quad (1)$$
This can be converted into the following minimization problem:

$$\min \theta \quad \text{s.t.} \quad \left| \frac{w_b}{w_j} - a_{bj} \right| \leq \theta, \quad \left| \frac{w_j}{w_w} - a_{jw} \right| \leq \theta, \quad \sum_j w_j = 1, \quad w_j \geq 0 \text{ for all } j \quad (2)$$
For any given value of $\theta$, multiplying the first constraint in Formula (2) by $w_j$ and the second by $w_w$ turns them into linear constraints; the solution space of Formula (2) is then the intersection of the $2n - 3$ (where $n$ is the number of indicators, $n \geq 2$) resulting pairs of constraints. Therefore, for a sufficiently large $\theta$, the solution space is nonempty. Solving Formula (2) yields the optimal weights $(w_1^*, w_2^*, \ldots, w_n^*)$ and $\theta^*$.
Definition 1: A comparison system is fully consistent when $a_{bj} \times a_{jw} = a_{bw}$ holds for all $j$, where $a_{bj}$, $a_{jw}$, and $a_{bw}$ denote the preference of the best indicator over indicator $j$, the preference of indicator $j$ over the worst indicator, and the preference of the best indicator over the worst indicator, respectively.
Table 2 shows the maximum value of $\theta$ (the consistency index) for each value of $a_{bw}$.
Using the consistency index (Table 2), the consistency rate (CR) is calculated as follows:

$$\mathrm{CR} = \frac{\theta^*}{\text{Consistency Index}} \quad (3)$$
The consistency rate lies in the interval [0, 1]. The closer the value is to 0, the higher the consistency; conversely, the closer the value is to 1, the lower the consistency.
As mentioned above, Formula (2) can produce multiple optimal solutions. A unique solution can be obtained by minimizing the maximum of the set $\left\{ \left| w_b - a_{bj} w_j \right|, \left| w_j - a_{jw} w_w \right| \right\}$ instead, which is expressed by the following formula:

$$\min_{w} \max_{j} \left\{ \left| w_b - a_{bj} w_j \right|, \left| w_j - a_{jw} w_w \right| \right\} \quad \text{s.t.} \quad \sum_j w_j = 1, \quad w_j \geq 0 \text{ for all } j \quad (4)$$
Formula (4) can be converted into the following linear programming model:

$$\min \theta^L \quad \text{s.t.} \quad \left| w_b - a_{bj} w_j \right| \leq \theta^L, \quad \left| w_j - a_{jw} w_w \right| \leq \theta^L, \quad \sum_j w_j = 1, \quad w_j \geq 0 \text{ for all } j \quad (5)$$
Formula (5) is a linear problem with a unique solution; solving it yields the optimal weights $(w_1^*, w_2^*, \ldots, w_n^*)$ and $\theta^{L*}$. For this model, the closer the value of $\theta^{L*}$ is to 0, the higher the consistency.
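To make the weight computation concrete, the following is a minimal Python sketch of the linear BWM model in Formula (5), solved with scipy.optimize.linprog and checked against the consistency index of Table 2. The comparison vectors in the usage example are illustrative only (patterned loosely after Table 5, with the best-to-worst preference assumed to be 8); they are not the actual expert data of Section 4.

```python
import numpy as np
from scipy.optimize import linprog

# Consistency index values from Table 2, keyed by a_bw.
CI = {1: 0.00, 2: 0.44, 3: 1.00, 4: 1.63, 5: 2.30,
      6: 3.00, 7: 3.73, 8: 4.47, 9: 5.23}

def _diff(n, i, j, a):
    """Coefficient vector of the linear expression w_i - a * w_j."""
    v = np.zeros(n)
    v[i] += 1.0
    v[j] -= a
    return v

def bwm_weights(a_bo, a_ow, best, worst):
    """Solve the linear BWM model of Formula (5) as an LP.

    a_bo: best-to-others vector (a_bj); a_ow: others-to-worst vector (a_jw);
    best/worst: indices of the best and worst criteria.
    Returns (weights, theta_L)."""
    n = len(a_bo)
    c = np.r_[np.zeros(n), 1.0]                    # minimize theta_L
    A_ub, b_ub = [], []
    for j in range(n):
        for expr in (_diff(n, best, j, a_bo[j]),   # w_b - a_bj * w_j
                     _diff(n, j, worst, a_ow[j])): # w_j - a_jw * w_w
            # |expr| <= theta_L, split into two linear inequalities
            A_ub.append(np.r_[expr, -1.0]); b_ub.append(0.0)
            A_ub.append(np.r_[-expr, -1.0]); b_ub.append(0.0)
    A_eq, b_eq = [np.r_[np.ones(n), 0.0]], [1.0]   # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[n]

# Illustrative comparison of the four dimensions: best = X2 (index 1),
# worst = X4 (index 3); a_bw assumed to be 8 (hypothetical values).
a_bo = [8, 1, 4, 5]   # preference of X2 over (X1, X2, X3, X4)
a_ow = [4, 8, 3, 1]   # preference of (X1, X2, X3, X4) over X4
w, theta_L = bwm_weights(a_bo, a_ow, best=1, worst=3)
cr = theta_L / CI[8]  # consistency check in the spirit of Formula (3)
print(np.round(w, 3), round(theta_L, 3), round(cr, 3))
```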

3.2. VIKOR

The VIKOR method is an effective tool in multiple criteria decision-making technology. It is intended for decision makers who cannot, or do not know how to, clearly express preferences, or who face inconsistency and conflict between the evaluation criteria. The VIKOR method addresses these problems so that decision makers can accept compromise solutions. Chitsaz and Banihabib [45] accordingly stated that VIKOR provides decision makers with a compromise ranking based on "proximity" to the "ideal" solution.
First, the ideal solution and the negative ideal solution are defined: the ideal solution takes the best value of each evaluation criterion, and the negative ideal solution takes the worst value of each evaluation criterion. All alternatives are evaluated according to each criterion function, and the ranking is performed based on proximity to the ideal solution. The method uses the $L_p$-metric as the aggregation function:
$$L_{p,j} = \left\{ \sum_{i=1}^{n} \left[ \frac{w_i \left( h_i^* - h_{ij} \right)}{h_i^* - h_i^-} \right]^p \right\}^{1/p}, \quad 1 \leq p \leq \infty, \quad j = 1, 2, 3, \ldots, J \quad (6)$$
In the above formula, $j$ is the alternative (scheme) index and $i$ is the evaluation criterion index; $h_{ij}$ represents the performance value of the $j$th alternative on the $i$th criterion; and $h_i^*$ and $h_i^-$ represent the best and worst values of the $i$th criterion over all alternatives, respectively. $p$ is the distance parameter of the aggregation function (generally 1, 2, or ∞; this paper takes $p = 1$), $n$ is the number of criteria, $w_i$ is the weight of the $i$th criterion, and $L_{p,j}$ is the distance of alternative $j$ from the ideal solution.
$$h_i^* = \left\{ \max_j h_{ij} \mid i \in I_1; \; \min_j h_{ij} \mid i \in I_2 \right\} \quad (7)$$

$$h_i^- = \left\{ \min_j h_{ij} \mid i \in I_1; \; \max_j h_{ij} \mid i \in I_2 \right\} \quad (8)$$

In Formulas (7) and (8), $I_1$ represents the set of benefit-type criteria and $I_2$ the set of cost-type criteria. Thus, the positive ideal solution and the negative ideal solution are obtained.
The second step is to calculate the group benefit $S_j$ (closeness to the optimal solution) and the individual regret $R_j$ (distance on the worst criterion) of each alternative in the comprehensive evaluation.
$$S_j = \sum_{i=1}^{n} w_i \, \frac{h_i^* - h_{ij}}{h_i^* - h_i^-} \quad (9)$$

$$R_j = \max_i \left[ w_i \, \frac{h_i^* - h_{ij}}{h_i^* - h_i^-} \right] \quad (10)$$

where $j = 1, 2, 3, \ldots, J$ and $w_i$ represents the weight of the $i$th indicator; $S_j$ represents the group benefit of alternative $j$, where a smaller $S_j$ indicates a greater group benefit; and $R_j$ represents the individual regret, where a smaller $R_j$ indicates a smaller individual regret.
The third step is to calculate the benefit ratio $Q_j$ of each alternative.

$$Q_j = v \, \frac{S_j - S^*}{S^- - S^*} + (1 - v) \, \frac{R_j - R^*}{R^- - R^*}, \quad S^* = \min_j S_j, \; S^- = \max_j S_j, \; R^* = \min_j R_j, \; R^- = \max_j R_j \quad (11)$$
where $v$ represents the coefficient of the decision mechanism. If $v > 0.5$, the decision is made according to the principle of benefits first; if $v \approx 0.5$, the decision is made according to the principle of balanced compromise; and if $v < 0.5$, the decision is made according to the principle of costs first. In this paper, $v = 0.5$; that is, the tradeoffs between benefits and costs are balanced.
The fourth step is to sort the alternatives according to $S_j$, $R_j$, and $Q_j$.
The fifth step is to rank the alternatives by the value of $Q_j$ when the following two conditions are met; the alternative with the minimum $Q_j$ wins.
Condition 1 (acceptable advantage): $Q(a) - Q(b) \geq 1/(J - 1)$, where $Q(b)$ is the $Q$ value of the scheme ranked first by $Q$, $Q(a)$ is the $Q$ value of the scheme ranked second, and $J$ is the total number of schemes. The difference between the $Q$ values of two adjacently ranked schemes must exceed $1/(J - 1)$ for the first-ranked scheme to be judged optimal. If there are more than two schemes, the first-ranked scheme is compared with the other schemes in turn to determine whether Condition 1 is met.
Condition 2 (acceptable stability in decision making): after ranking by $Q$, the $S$ of the first-ranked scheme must rank better than the $S$ of the second-ranked scheme, or its $R$ must rank better than the $R$ of the second-ranked scheme. If there are more than two schemes, the first-ranked scheme is compared in order with the other schemes to determine whether it meets Condition 2.
If the first-ranked scheme satisfies both Condition 1 and Condition 2, it is the optimal scheme. If the first-ranked scheme and the second-ranked (or another) scheme satisfy only Condition 2 and not Condition 1, then all schemes that satisfy Condition 2 but not Condition 1 together form the set of optimal compromise schemes.
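To make these steps concrete, the following is a minimal Python sketch of the VIKOR procedure for benefit-type criteria (the questionnaire scores in Section 4 are all benefit-type). The usage example reuses the 4 × 4 excerpt of matrix D that appears in Section 4.2 together with hypothetical equal weights; it is an illustration under those assumptions, not the paper's full computation.

```python
import numpy as np

def vikor(perf, weights, v=0.5):
    """Rank alternatives with VIKOR per Formulas (7)-(11).

    perf: m x n matrix of m alternatives rated on n benefit-type criteria
    (each criterion must take at least two distinct values);
    weights: n criterion weights summing to 1;
    v: decision mechanism coefficient (0.5 = balanced compromise).
    Returns (S, R, Q, indices of alternatives sorted by ascending Q)."""
    perf = np.asarray(perf, dtype=float)
    w = np.asarray(weights, dtype=float)
    h_star = perf.max(axis=0)                     # positive ideal, Formula (7)
    h_minus = perf.min(axis=0)                    # negative ideal, Formula (8)
    d = w * (h_star - perf) / (h_star - h_minus)  # weighted normalized gaps
    S, R = d.sum(axis=1), d.max(axis=1)           # Formulas (9) and (10)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))  # Formula (11)
    return S, R, Q, np.argsort(Q)

# Usage with the 4 x 4 excerpt of matrix D (Section 4.2) and hypothetical
# equal weights for the four excerpted indicators.
perf = [[4.8, 4.1, 5.0, 4.5],
        [4.7, 4.4, 5.0, 4.9],
        [5.0, 4.9, 5.6, 4.6],
        [3.9, 4.6, 4.5, 4.3]]
weights = [0.25, 0.25, 0.25, 0.25]
S, R, Q, order = vikor(perf, weights)
print(np.round(Q, 3), order)
```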

4. Data Analysis

This paper completed the data collection process through expert interviews and questionnaires. The researchers designed two questionnaires: the first compared the importance of the dimensions and secondary indicators of MOOC platform quality, and the second assessed the quality evaluation performance of the MOOC websites. The weighting questionnaire was designed based on the BWM: the best and worst dimensions, and the best and worst indicators within each dimension, were selected, and then each was compared with the remaining dimensions or indicators; finally, the dimension and indicator weights were calculated. The questionnaire participants were all teachers or students with a certain understanding of MOOCs. The middle school teachers and university professors all came from the MOOC Teaching Expert Database and were creators and providers of MOOC courses; each expert hosted more than three courses on average, had participated in the construction of other courses, and had more than 8 years of rich MOOC course construction and teaching experience. We also selected students from various schools with different professional backgrounds, including one on-the-job student, all of whom had participated in several MOOC courses during the semester. The specific information of the interviewees is shown in Table 3.
Affected by the COVID-19 pandemic, the researchers interviewed the ten experts through Tencent Conference and asked them to fill out the questionnaires. A total of two questionnaires were designed for this study: questionnaire 1 was used to evaluate the importance of the four dimensions and 20 criteria, and questionnaire 2 was used to evaluate the performance of the five MOOC websites. Contacting the 10 experts and collecting their completed questionnaires took from 20 March to 16 April 2020. The specific implementation process is as follows.

4.1. Weight Calculation

The BWM model is used to calculate the weights of the four dimensions that measure the quality of MOOC websites (system function, teaching resources, teaching effect, and social interaction) and of the 20 indicators, such as flexibility of use and functional diversity. The corresponding data were obtained by issuing and collecting the questionnaires. The specific process is as follows:
Step 1: Define the set of dimensions that affect the quality evaluation of a MOOC website as $X = \{X_1, X_2, X_3, X_4\}$ and the set of indicators as $Z = \{Z_1, Z_2, Z_3, \ldots, Z_n\}$.
Step 2: Determine the best and worst standards. Ten experts selected the optimal dimension or indicator and the worst dimension or indicator from the dimensions and indicators.
Step 3: Establish the evaluation scale between the design indicators. The specific scoring scale is shown in Table 4.
Step 4: Determine the preference of the best criterion over all other criteria, using the evaluation scale in Table 4 to indicate the degree of preference. The result is the best-to-others (BO) vector.
Step 5: Determine the preference of all criteria over the worst criterion, using the same evaluation scale. The result is the others-to-worst (OW) vector.
This paper issued 10 questionnaires. The best and worst dimensions and indicators were selected by the experts, and then the best and worst dimensions or indicators were compared with the others to obtain $A_b = (a_{b1}, a_{b2}, \ldots, a_{bn})$ and $A_w = (a_{1w}, a_{2w}, \ldots, a_{nw})^T$. Table 5 shows the dimensional comparison of one expert and its results.
After comparing the dimensions, the indicators under each dimension were compared in the same way. Finally, the weights and consistency rates were calculated from each expert's data. Table 6 shows the optimal weight of each dimension for each expert and the corresponding CRs, together with the average weights over the 10 sets.
To test the consistency of the pairwise comparisons mentioned in Section 3, the CR can be used as a measure of consistency. Using the θ values in Table 6, the consistency ratio CR was obtained according to Formula (3). Obviously, the larger θ is, the higher the consistency ratio and the lower the reliability. Since the CRs listed in Table 6 are close to zero for all experts, it can be concluded that all 10 questionnaires have good consistency; that is, the questionnaire is valid, and the data have high reliability.
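As a simple check of the averaging behind Table 6, the short sketch below recomputes the mean of the ten experts' X2 weights, reproducing the reported average of 0.520:

```python
import numpy as np

# Expert weights for dimension X2 taken from Table 6.
x2 = [0.638, 0.533, 0.302, 0.140, 0.707, 0.480, 0.590, 0.619, 0.522, 0.670]
print(round(float(np.mean(x2)), 3))  # 0.52, i.e., the 0.520 reported in Table 6
```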
Step 6: The dimension weights were solved using the model in Formula (5); the weights of the dimensions are W1 = 0.104, W2 = 0.520, W3 = 0.296, and W4 = 0.079. According to the formula Global Weight = Local Weight × Dimension Weight, the weight of each dimension and indicator was calculated, as shown in Table 7:
From Table 7, according to the dimension weights, the dimensions can be sorted as follows: X2 > X3 > X1 > X4. In the same way, the table shows that, among the 20 indicators, content accuracy is the most important, while user trust and learning navigation are the least important. After deriving the weights of each dimension and indicator, the results can be used as a basis to analyze the performance of a MOOC platform. The following subsection uses VIKOR analysis to evaluate MOOC website performance.
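As a quick check of the aggregation rule, the sketch below recomputes a few global weights from the dimension weights in Table 6 and the local weights in Table 7; small discrepancies with Table 7 are rounding effects.

```python
# Spot-check of Global Weight = Local Weight x Dimension Weight,
# using the dimension weights of Table 6 and local weights of Table 7.
dim_w = {"X1": 0.104, "X2": 0.520, "X3": 0.296, "X4": 0.079}
local_w = {"X11": ("X1", 0.272), "X21": ("X2", 0.307),
           "X33": ("X3", 0.394), "X45": ("X4", 0.108)}
global_w = {k: round(dim_w[d] * lw, 3) for k, (d, lw) in local_w.items()}
print(global_w)  # {'X11': 0.028, 'X21': 0.16, 'X33': 0.117, 'X45': 0.009}
# Table 7 reports X21 as 0.159; the small gap is a rounding artifact.
```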

4.2. Case Application

This paper takes as cases five MOOC websites with a high reputation in China: MOOC.CN (V1), IMOOC (V2), XuetangX (V3), Study.163 (V4), and Open Learning (V5). The evaluation scale of the website quality evaluation is shown in Table 8.
The five websites’ performance evaluation processes are as follows:
(1) Establish a standardized evaluation matrix, which is derived from the returned MOOC platform quality evaluation performance questionnaires. The number of websites evaluated is $m$, giving the website vector $B = (B_1, B_2, B_3, \ldots, B_m)^T$; the number of indicators evaluated is $n$, giving the indicator vector $C = (C_1, C_2, C_3, \ldots, C_n)^T$. Matrix $D$ collects the ratings of the indicators for the websites. Then, Equation (6) is used.
$$D = \begin{pmatrix} 4.8 & 4.1 & 5.0 & 4.5 \\ 4.7 & 4.4 & 5.0 & 4.9 \\ 5.0 & 4.9 & 5.6 & 4.6 \\ 3.9 & 4.6 & 4.5 & 4.3 \end{pmatrix}$$
(2) Determine the positive ideal solution and the negative ideal solution for each indicator and obtain Table 9 according to Formulas (7) and (8).
Due to the limited length of this paper, Table 9 lists only the positive ideal solutions and the negative ideal solutions for the secondary indicators under the system function dimension.
(3) The group benefit $S_j$, the individual regret $R_j$, and the comprehensive performance ratio $Q_j$ are calculated and ranked using Formulas (9)–(11) in Section 3, and the results are shown in Table 10.
Table 10 lists the calculation results (note that v is 0.5). As described in Section 3, sorting is based on the values of Q, S, and R. The best website with the minimum Q (IMOOC V2) is assumed to be b, and the second website (XuetangX V3) is assumed to be a. To determine the best website, Condition 1 and Condition 2 should be met. Then, the value of Q(a) − Q(b) is equal to 0.156, which is not greater than DQ = 1/(5 − 1) = 0.25; therefore, Condition 1 is not satisfied. In addition, V2 (IMOOC) is ranked first in S and R, so it is a stable choice in the decision process that satisfies Condition 2.
When Condition 1 is not met, XuetangX must be included in the compromise solution: with IMOOC as the leading alternative, Q(a) − Q(b) = 0.156, which is less than DQ = 0.25. Therefore, to approach the ideal solution, we propose IMOOC and XuetangX as a set of compromise solutions.
This compromise solution is stable for values of v between 0 and 1 (benefits-first rule v ≥ 0.5, costs-first rule v ≤ 0.5, and balanced compromise v ≈ 0.5). Table 11 shows the Q values and website ranking when v = 1/3, and Table 12 shows them when v = 2/3.
Comparing the $Q_j$ values of Table 11 and Table 12, the ordering of the websites does not change between v = 1/3 and v = 2/3. However, in Table 12, the difference between the Q values of websites V4 and V2 is less than 0.25; that is, Condition 2 is satisfied but Condition 1 is not, so at v = 2/3 the performances of V2, V3, and V4 cannot be strictly separated. Nevertheless, the overall changes are small, so the data have a certain accuracy, and the obtained website ranking has a certain reliability.
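The acceptable-advantage test can be reproduced directly from the published Table 10 values; the short sketch below checks Condition 1 and derives the compromise set.

```python
# Condition 1 (acceptable advantage) applied to the Q values of Table 10.
Q = {"V1": 0.692, "V2": 0.000, "V3": 0.156, "V4": 0.271, "V5": 1.000}
DQ = 1 / (len(Q) - 1)                     # 1/(J - 1) = 0.25 for J = 5
ranked = sorted(Q, key=Q.get)             # ['V2', 'V3', 'V4', 'V1', 'V5']
print(Q[ranked[1]] - Q[ranked[0]] >= DQ)  # False: Condition 1 fails
# Compromise set: schemes whose advantage over the leader is below DQ.
print([a for a in ranked if Q[a] - Q[ranked[0]] < DQ])  # ['V2', 'V3']
```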

4.3. Sensitivity Analysis

Through a sensitivity analysis of the index weights, a new performance ranking is obtained to test robustness [39,46]. The weight of the highest-weighted indicator, content accuracy (X21), among the 20 indicators is adjusted in steps of 0.1, changing from 0.1 to 0.9, to observe changes in the performance ranking, as shown in Table 13 (in which the weight of content accuracy (X21) changes from 0.1 to 0.9).
In Table 13, the initial ranking is (4, 1, 2, 3, 5). After the sensitivity analysis, there is no change in the ranking, indicating that the selected indicators are scientific and reasonable.
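The reweighting step behind Table 13 can be sketched as follows. The proportional redistribution of the remaining weight is an assumption (the paper does not state its redistribution rule); vikor, perf, and weights refer to the hypothetical sketch in Section 3.2, and idx_x21 is the assumed column position of content accuracy (X21) in the performance matrix.

```python
import numpy as np

def adjust_weight(weights, idx, new_w):
    """Set weights[idx] to new_w and rescale the remaining weights
    proportionally so the vector still sums to 1 (assumed rule)."""
    w = np.asarray(weights, dtype=float)
    out = w * (1.0 - new_w) / (w.sum() - w[idx])
    out[idx] = new_w
    return out

# Vary the weight of content accuracy (X21) from 0.1 to 0.9 and re-rank.
idx_x21 = 0  # assumed column of X21 in the illustrative perf matrix
for trial in np.arange(0.1, 1.0, 0.1):
    *_, order = vikor(perf, adjust_weight(weights, idx_x21, trial))
    print(round(float(trial), 1), order)  # a stable ordering shows robustness
```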
Figure 1 is obtained from the rankings in Table 13 and can be used to judge the relative strengths and weaknesses of the websites. When choosing a MOOC website for learning, considering all aspects comprehensively, users should take IMOOC (V2) and XuetangX (V3) as their first choices; the order of the other three is Study.163 (V4), MOOC.CN (V1), and Open Learning (V5). As seen from Figure 1, this ranking is reliable.

5. Discussion

Based on the data collected by the indicator and performance questionnaires, the results shown above were obtained using the BWM and VIKOR methods. The results show that, among the four dimensions, teaching resources (W2 = 0.520) have the largest weight, followed by the teaching effect (W3 = 0.296), the system function (W1 = 0.104), and finally social interaction (W4 = 0.079). IMOOC (V2) ranks first in the performance analysis because its rich content, extensive resources, and accurate, authoritative material give users confidence and trust in its courses. The teachers on IMOOC are mainly industry experts and are authoritative, and users benefit considerably from their classes; consequently, IMOOC courses are often rated as "lively and interesting", which further increases IMOOC's competitive advantage. XuetangX is the second-best website: its scores across the various dimensions and indicators are relatively even, and its overall level is good. Users are thus very concerned about whether the content of a website is complete and accurate and whether the instructors' teaching skills are at a professional level; these are important factors when evaluating a website.
The teaching effect is a comprehensive evaluation of the curriculum. The weight of instructional design ranks second among all of the index weights. Instructional design covers the teaching content, the design of the video courseware, the design of the teaching objectives, the characteristics of the subject, and the development of related knowledge, which are additional points that attract users. IMOOC (V2) relies on famous teachers, so its performance in the teaching effect dimension is among the best. Due to the limitations of internet technology, the performance data for the system functions are distributed evenly among the five websites. With the constant innovation of information technology, the system function needs to keep improving. Compared with the traditional teaching model, online education has certain risks; platforms should pay attention to the reliability of their systems so that users can trust and continue to use them. By strengthening the confidentiality of the platforms, the diversity of functions, and other services, user satisfaction can be improved.
The results from computing and analyzing the data show that the scores of the five indicators of social interaction (user supervision, incentives, classroom interaction, offline communication, and user trust) are significantly lower (the averages of the upper bounds of the scores given by the 10 experts are G41 = 4.7, G42 = 5, G43 = 5, G44 = 4.6, and G45 = 5). This shows that MOOC websites still lack social interaction. In the dimension weight analysis, the proportion of social interaction is small; after all, users visit MOOC platforms mainly for learning and are less concerned about social interaction. However, with the rapid development of networks, the authors believe that the proportion of social interaction will slowly increase. Although online education classrooms are not as rich as traditional classrooms, they can be expanded through after-class interactions between teachers and students and among students, and by designing a simple and convenient interface so that users are willing to participate in interaction, enhancing the user experience. The social interaction performance of Study.163 (V4) is the best among the five websites; its after-school interactions include course teachers answering questions on the BBS and assigning and grading homework, whereas other websites lack a mechanism for effective communication with teachers.
In summary, by establishing a quality evaluation index system for MOOC platforms, this paper provides MOOC developers with quality indicators related to platform construction and course design to help them better plan, design, and organize implementation, which has a positive impact on improving the teaching quality of MOOC platforms. The numbers of online self-learners, course providers, and online MOOC platforms have grown significantly in recent years, making the quality evaluation of MOOC platforms very important. After referring to the relevant research of the experts cited above, this paper constructed the MOOC platform quality evaluation index system from the four dimensions of system function, teaching resources, teaching effect, and social interaction, with 20 secondary indicators. The BWM method was then used to calculate the weight of each dimension affecting the MOOC platform quality evaluation, and a VIKOR analysis of the indicators of five MOOC websites was conducted to obtain a reasonable evaluation based on the index system.
The research results could provide the following advantages: (i) the appropriate quality evaluation criteria for MOOC platforms can be determined, (ii) advanced models can be applied to determine the weights of the dimensions and criteria of the evaluation system, (iii) a highly reliable evaluation of MOOC platform performance can be provided, and (iv) targeted suggestions for improving MOOC platform construction based on expert judgment can be provided. The evaluation system could provide guidance for the construction of MOOC platforms.

6. Conclusions

This paper constructed a MOOC platform quality evaluation index system. Through the analysis of the four dimensions (teaching resources, teaching effect, system function, and social interaction) and 20 indicators such as content accuracy and the richness of teaching resources, the results show that the most important dimension affecting the quality evaluation of MOOC platforms is teaching resources. An empirical analysis was then conducted on the case of five MOOC websites, and the performance rankings of the five websites were obtained.
Teaching resources are the most important dimension affecting the quality of a MOOC platform. The teaching content should be accurate, attention should be paid to the quality and source of the courses, and more open resources should be provided to enhance the loyalty of users. While improving course quality, it is also necessary to pay attention to system functions: providing targeted services for users, meeting users' needs, and further maintaining and upgrading the diversity of functions and the confidentiality of the system to enhance user satisfaction. Social interaction should also be emphasized: online exchanges and interactions between teachers and students, together with the construction and maintenance of forums and discussion boards, can solve students' problems in a timely manner and enhance the student experience.
This paper used the hybrid BWM-VIKOR model to study the quality evaluation performance of domestic MOOC platforms. The BWM relies on expert analysis and selection; thus, the data obtained are somewhat subjective and may be slightly biased. The 10 experts who completed the questionnaires in this paper were mostly students and teachers; interviewing scholars who have conducted research in the field of MOOCs would make the data more authoritative. Therefore, the questionnaire data obtained in this paper have certain limitations. In addition, the five websites in the performance evaluation are limited to domestic platforms, and the constructed MOOC platform quality evaluation system may not be applicable to international platforms. Subsequent research can build a more complete dimension and indicator system and select a more comprehensive set of MOOC platforms to calculate performance; the results will then be more reliable and accurate. Therefore, it is recommended that follow-up scholars study in this direction.

Author Contributions

P.-Y.S. analyzed the data, reviewed the literature, and wrote the article. J.-H.G. analyzed the data. Q.-G.S. designed the research protocols, wrote the article, formatted the article, and revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in the first half of 2021 by the Xiamen University of Technology High-Level Talents Scientific Research Start-Up Project (YSK21007R), the Xiamen University of Technology Scientific Research Climbing Plan (XPDST20002), the Youth Project of the Humanities and Social Sciences of the Ministry of Education of China (18YJC630140), the Social Science Fund of Fujian Province of China (FJ2020B023), the Xiamen University of Technology Young Teachers' Research Project (XPDKQ18044), the Major Projects of Fujian Social Science Research Base in 2018 (FJ2018JDZ056), and the Fujian Province Social Science Planning Basic Research Annual Project in 2018 (FJ2018C028).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are extremely grateful for the valuable comments made by the Sustainability editorial team to improve the quality of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cabrera, N.; Fernández-Ferrer, M. Examining MOOCs: A Comparative Study among Educational Technology Experts in Traditional and Open Universities. Int. Rev. Res. Open Dis. 2017, 18, 47–67.
2. Castaño, C.; Maiz, I.; Garay, U. Design, Motivation and Performance in a Cooperative MOOC Course. Rev. Comunicar 2015, 44, 19–26.
3. Sánchez-Vera, M.D.; León-Urrutia, M.; Davis, H. Challenges in the Creation, Development and Implementation of MOOCs: Web Science Course at the University of Southampton. Rev. Comunicar 2015, 22, 37–44.
4. Alemán, L.Y.; Sancho-Vinuesa, T.; Gómez, M.G. Pedagogical Quality Indicators for the Design of a Massive Open Online Course for Educational Update. RUSC Univ. Knowl. Soc. 2015, 12, 104–119.
5. Yepes-Baldó, M.; Romeo, M.; Martín, C.; García, M.Á.; Monzó, G.; Besolí, A. Quality Indicators: Developing “MOOCs” in the European Higher Education Area. Educ. Media Int. 2016, 53, 184–197.
6. Yousef, A.M.F.; Chatti, M.A.; Schroeder, U.; Marold, W. What Drives a Successful MOOC? An Empirical Examination of Criteria to Assure Design Quality of MOOCs. In Proceedings of the 14th International Conference on Advanced Learning Technologies, Athens, Greece, 7–10 July 2014; pp. 44–48.
7. Tzeng, G.H.; Chiang, C.H.; Li, C.W. Evaluating Intertwined Effects in E-Learning Programs: A Novel Hybrid MCDM Model Based on Factor Analysis and DEMATEL. Expert Syst. Appl. 2007, 32, 1028–1044.
8. Lin, H.F. An Application of Fuzzy AHP for Evaluating Course Website Quality. Comput. Educ. 2010, 54, 877–888.
9. Qiu, J.P.; Ou, Y.F. Construction and Application of Quality Evaluation Index System of MOOC. High. Ed. Dev. Eval. 2015, 31, 72–81.
10. Huang, W.; Liu, X.; Shi, P.; Li, Y.F. Research on Online Education Model Evaluation under the Background of “Internet +”. J. Inform. 2016, 35, 124–129.
11. Terras, M.M.; Ramsay, J. Massive Open Online Courses (MOOCs): Insights and Challenges from a Psychological Perspective. Br. J. Educ. Technol. 2015, 46, 472–487.
12. Chiu, C.M.; Chiu, C.S.; Chang, H.C. Examining the Integrated Influence of Fairness and Quality on Learners’ Satisfaction and Web-Based Learning Continuance Intention. Inform. Syst. J. 2007, 17, 271–287.
13. Qi, C.; Liu, S.D. Evaluating On-Line Courses via Reviews Mining. IEEE Access 2021, 9, 35439–35451.
14. Drake, J.R.; O’Hara, M.T.; Seeman, E. Five Principles for MOOC Design: With a Case Study. J. Inf. Technol. Educ. Innov. Pract. 2015, 14, 125–143.
15. Miranda, P.; Isaias, P.; Pifano, S. Model for the Evaluation of MOOC Platforms. J. Financ. 2015, 27, 765–777.
16. Nie, Y.J.; Luo, H.; Sun, D. Design and Validation of a Diagnostic MOOC Evaluation Method Combining AHP and Text Mining Algorithms. Interact. Learn. Environ. 2020, 29, 315–328.
17. Rong, L.; Wang, L.; Liu, P.; Zhu, B. Evaluation of MOOCs Based on Multigranular Unbalanced Hesitant Fuzzy Linguistic MABAC Method. Int. J. Intell. Syst. 2021, 36, 5670–5713.
18. Selim, H.M. An Empirical Investigation of Student Acceptance of Course Websites. Comput. Educ. 2003, 40, 343–360.
19. Sun, P.C.; Tsai, R.J.; Finger, G.; Chen, Y.-Y.; Yeh, D. What Drives a Successful E-Learning? An Empirical Investigation of the Critical Factors Influencing Learner Satisfaction. Comput. Educ. 2008, 50, 1183–1202.
20. Lin, H.F. Measuring Online Learning Systems Success: Applying the Updated DeLone and McLean Model. Cyberpsychol. Behav. 2007, 10, 817–820.
21. Büyüközkan, G.; Arsenyan, J.; Ertek, G. Evaluation of E-Learning Web Sites Using Fuzzy Axiomatic Design Based Approach. Int. J. Comput. Int. Syst. 2010, 3, 28–42.
22. Fesol, S.F.A.; Salam, S. Towards MOOC for Technical Courses: A Blended Learning Empirical Analysis. Int. J. Adv. Sci. Eng. Int. 2016, 6, 1141–1147.
23. Ossiannilsson, E.; Altınay, Z.; Altınay, F. Towards Fostering Quality in Open Online Education through OER and MOOC Practices. In Open Education: From OERs to MOOCs; Lecture Notes in Educational Technology; Springer: Berlin/Heidelberg, Germany, 2016; pp. 189–204.
24. Liu, S.Q.; Liang, T.Y.; Shao, S.; Kong, J. Evaluating Localized MOOCs: The Role of Culture on Interface Design and User Experience. IEEE Access 2020, 8, 107927–107940.
25. Lowenthal, P.R.; Hodges, C.B. In Search of Quality: Using Quality Matters to Analyze the Quality of Massive, Open, Online Courses (MOOCs). Int. Rev. Res. Open Dis. 2015, 16, 83–101.
26. Dehghani, S.; Fini, A.A.S.; Zeinalipour, H.; Rezaei, E. The Competencies Expected of Instructors in Massive Open Online Courses (MOOCs). Interdisciplinary J. 2020, 11, 69–83.
27. Li, Y.H.; Zhao, B.; Gan, J.H. Make Adaptive Learning of the MOOC: The CML Model. In Proceedings of the 10th International Conference on Computer Science & Education, Cambridge, UK, 22–24 July 2015; pp. 1001–1004.
28. Li, B.; Wang, X.H.; Tan, S.C. What Makes MOOC Users Persist in Completing MOOCs? A Perspective from Network Externalities and Human Factors. Comput. Hum. Behav. 2018, 85, 385–395.
29. Ye, Z.X.; Luo, R. Evaluating Online Courses: How Learners Perceive Language MOOCs. Lect. Notes Comput. Sci. 2021, 12511, 334–343.
30. Rotgans, J.I.; Schmidt, H.G. Situational Interest and Academic Achievement in the Active-Learning Classroom. Learn. Instr. 2011, 21, 58–67.
31. Adair, D.; Alman, S.W.; Budzick, D.; Grisham, L.M.; Mancini, M.E.; Thackaberry, A.S. Many Shades of MOOCs. Internet Learn. J. 2014, 3, 3–20.
32. Wang, X.; Lee, Y.J.; Lin, L.; Mi, Y.; Yang, T.T. Analyzing Instructional Design Quality and Students’ Reviews of 18 Courses out of the Class Central Top 20 MOOCs through Systematic and Sentiment Analyses. Internet High. Educ. 2021, 50, 100810.
33. Marks, R.B.; Sibley, S.D.; Arbaugh, J.B. A Structural Equation Model of Predictors for Effective Online Learning. J. Manag. Educ. 2005, 29, 531–563.
34. Sadi-Nezhad, S.; Etaati, L.; Makui, A. A Fuzzy ANP Model for Evaluating E-Learning Platform. Lect. Notes Comput. Sci. 2010, 6096, 254–263.
35. Li, X.Y.; Zhu, Q.H. Evaluating the Green Practice of Food Service Supply Chain Management Based on Fuzzy DEMATEL-ANP Model. In Proceedings of the Seventh International Conference on Electronics and Information Engineering, Nanjing, China, 17–18 September 2016; p. 103222J.
36. Jovanović, B.; Filipović, J.; Bakić, V. Prioritization of Manufacturing Sectors in Serbia for Energy Management Improvement—AHP Method. Energy Convers. Manag. 2015, 98, 225–235.
37. Guo, S.; Zhao, H.R. Fuzzy Best-Worst Multi-Criteria Decision-Making Method and Its Applications. Knowl.-Based Syst. 2017, 121, 23–31.
38. You, P.P.; Guo, S.; Zhao, H.R.; Zhao, H.R. Operation Performance Evaluation of Power Grid Enterprise Using a Hybrid BWM-TOPSIS Method. Sustainability 2017, 9, 2329.
39. Gupta, H.; Barua, M.K. Supplier Selection among SMEs on the Basis of Their Green Innovation Ability Using BWM and Fuzzy TOPSIS. J. Clean. Prod. 2017, 152, 242–258.
40. Van de Kaa, G.; Scholten, D.; Rezaei, J.; Milchram, C. The Battle between Battery and Fuel Cell Powered Electric Vehicles: A BWM Approach. Energies 2017, 10, 1707.
41. Shojaei, P.; Haeri, S.A.S.; Mohammadi, S. Airports Evaluation and Ranking Model Using Taguchi Loss Function, Best-Worst Method and VIKOR Technique. J. Air Transp. Manag. 2018, 68, 4–13.
42. Yuan, Y.; Guan, T.; Yan, X.B.; Li, Y.J. Supplier Selection Decision Model Based on Hybrid VIKOR Method. Control Decis. 2014, 29, 551–560.
43. Rezaei, J. Best-Worst Multi-Criteria Decision-Making Method. Omega 2015, 53, 49–57.
44. Rezaei, J. Best-Worst Multi-Criteria Decision-Making Method: Some Properties and a Linear Model. Omega 2016, 64, 126–130.
45. Chitsaz, N.; Banihabib, M.E. Comparison of Different Multi Criteria Decision-Making Models in Prioritizing Flood Management Alternatives. Water Resour. Manag. 2015, 29, 2503–2525.
46. Shao, Q.G.; Liou, J.J.H.; Weng, S.S.; Su, P.Y. Constructing an Entrepreneurship Project Evaluation System Using a Hybrid Model. J. Bus. Econ. Manag. 2020, 21, 1329–1349.
Figure 1. Weight sensitivity map.
Table 1. Dimensions and indicators of MOOC platform quality evaluation.

| Dimension | Indicators | Literature Sources |
| --- | --- | --- |
| System function (X1) | Flexibility of use (X11) | [11,22,23] |
|  | Functional diversity (X12) | [2,3] |
|  | System reliability (X13) | [8,33] |
|  | System confidentiality (X14) | [10] |
|  | The update frequency (X15) | [7,8] |
|  | Learning navigation (X16) | [3,5,8,24,25] |
| Teaching resources (X2) | Content accuracy (X21) | [5,7,8,10] |
|  | Content integrity (X22) | [8,20] |
|  | The richness of classroom resources (X23) | [2,5,9] |
|  | Professional teaching skills (X24) | [26] |
|  | Personalized learning (X25) | [27,28] |
| Teaching effect (X3) | Attraction (X31) | [2,8] |
|  | Brand effect (X32) | [10] |
|  | Instructional design (X33) | [30,31,32] |
|  | Evaluation of teaching curriculum (X34) | [29] |
| Social interaction (X4) | User supervision (X41) | [5,28] |
|  | Incentive measures (X42) | [7] |
|  | Classroom interaction (X43) | [22,24,25,32] |
|  | Offline communication (X44) | [5] |
|  | User trust (X45) | [8,10,28] |
Table 2. Consistency index (CI) [43].

| $a_{bw}$ | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Consistency index (CI) | 0.00 | 0.44 | 1.00 | 1.63 | 2.30 | 3.00 | 3.73 | 4.47 | 5.23 |
Table 3. Basic information of experts.

| No. | Service Unit | Career |
| --- | --- | --- |
| 1 | Li yang International Education, International Course Mathematics Department | teacher |
| 2 | Yao Hua International School, High School | teacher |
| 3 | School of Economics and Management, Xiamen University of Technology | professor |
| 4 | School of Economics and Management, Xiamen University of Technology | professor |
| 5 | School of Economics and Management, Xiamen University of Technology | professor |
| 6 | Shanghai Institute of Neuroscience, Chinese Academy of Sciences | postgraduate |
| 7 | Wuhan University Health College | postgraduate |
| 8 | Xi’an Jiaotong University | postgraduate |
| 9 | Xi’an University of Finance and Economics | postgraduate |
| 10 | Hainan Gaoting Real Estate Brokerage Co., Ltd. | supervisor |
Table 4. Evaluation scale.

| Evaluation Scale | Definition | Description |
| --- | --- | --- |
| 1 | Equally important | Comparing two factors, the factors have the same importance |
| 2 | - | - |
| 3 | Slightly important | Comparing two factors, one factor is slightly more important than the other |
| 4 | - | - |
| 5 | Obviously important | Comparing two factors, one factor is obviously more important than the other |
| 6 | - | - |
| 7 | Very important | Comparing two factors, one factor is much more important than the other |
| 8 | - | - |
| 9 | Extremely important | Comparing two factors, one factor is far more important than the other |
Table 5. Comparison between dimensions.

| BO | X1 | X3 | X4 |
| --- | --- | --- | --- |
| X2 | 8 | 4 | 5 |

| OW | X4 |
| --- | --- |
| X1 | 4 |
| X3 | 3 |
Table 6. Comparison of dimension weights.

| Dimension | Average Value | Expert 1 | Expert 2 | Expert 3 | Expert 4 | Expert 5 | Expert 6 | Expert 7 | Expert 8 | Expert 9 | Expert 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| X1 | 0.104 | 0.108 | 0.133 | 0.054 | 0.116 | 0.122 | 0.049 | 0.197 | 0.137 | 0.065 | 0.063 |
| X2 | 0.520 | 0.638 | 0.533 | 0.302 | 0.140 | 0.707 | 0.480 | 0.590 | 0.619 | 0.522 | 0.670 |
| X3 | 0.296 | 0.195 | 0.267 | 0.522 | 0.683 | 0.069 | 0.359 | 0.148 | 0.191 | 0.326 | 0.205 |
| X4 | 0.079 | 0.059 | 0.067 | 0.122 | 0.061 | 0.102 | 0.112 | 0.066 | 0.054 | 0.087 | 0.063 |
| θ | - | 1.725 | 1.000 | 0.725 | 2.113 | 1.228 | 0.725 | 1.000 | 2.469 | 2.000 | 1.725 |
| CR | - | 0.330 | 0.268 | 0.139 | 0.404 | 0.235 | 0.139 | 0.224 | 0.472 | 0.447 | 0.330 |
Table 7. Weights and rankings of dimensions and indicators.

| Dimension | Weight | Index | Local Weight | Global Weight | Ranking |
| --- | --- | --- | --- | --- | --- |
| System functions (X1) | 0.104 | Use flexibility (X11) | 0.272 | 0.028 | 11 |
|  |  | Functional diversity (X12) | 0.177 | 0.018 | 13 |
|  |  | System reliability (X13) | 0.171 | 0.018 | 14 |
|  |  | System confidentiality (X14) | 0.105 | 0.011 | 18 |
|  |  | Update frequency (X15) | 0.186 | 0.019 | 12 |
|  |  | Learning navigation (X16) | 0.089 | 0.009 | 19 |
| Teaching resources (X2) | 0.520 | Content accuracy (X21) | 0.307 | 0.159 | 1 |
|  |  | Content integrity (X22) | 0.205 | 0.107 | 4 |
|  |  | Classroom resource richness (X23) | 0.130 | 0.067 | 7 |
|  |  | Professional teaching skills (X24) | 0.212 | 0.110 | 3 |
|  |  | Personalized learning (X25) | 0.146 | 0.076 | 5 |
| Teaching effect (X3) | 0.296 | Attraction (X31) | 0.247 | 0.073 | 6 |
|  |  | Brand effect (X32) | 0.149 | 0.044 | 9 |
|  |  | Instructional design (X33) | 0.394 | 0.117 | 2 |
|  |  | Teaching course evaluation (X34) | 0.210 | 0.062 | 8 |
| Social interaction (X4) | 0.079 | User supervision (X41) | 0.156 | 0.012 | 17 |
|  |  | Incentives (X42) | 0.189 | 0.015 | 15 |
|  |  | Classroom interaction (X43) | 0.376 | 0.030 | 10 |
|  |  | Offline communication (X44) | 0.171 | 0.014 | 16 |
|  |  | User trust (X45) | 0.108 | 0.009 | 20 |
| Total | 1.000 |  | 4.000 | 1.000 |  |
Table 8. Performance evaluation scale.

| Evaluation Scale | Definition | Description |
| --- | --- | --- |
| 1 | Very poor | The performance of the MOOC website is very poor. |
| 2 | Poor | The performance of the MOOC website is poor. |
| 3 | Relatively poor | The performance of the MOOC website is relatively poor. |
| 4 | Normal | The performance of the MOOC website is normal. |
| 5 | Better | The performance of the MOOC website is better. |
| 6 | Good | The performance of the MOOC website is good. |
| 7 | Very good | The performance of the MOOC website is very good. |
Table 9. Positive and negative ideal solutions for evaluation indicators.

| Solution | X11 | X12 | X13 | X14 | X15 | X16 |
| --- | --- | --- | --- | --- | --- | --- |
| $h^*$ | 5.6 | 5.8 | 5.6 | 5.1 | 5.5 | 5.1 |
| $h^-$ | 3.9 | 4.1 | 4.5 | 4.0 | 4.3 | 3.3 |
Table 10. Website rating.

| Website | $S_j$ | $R_j$ | $Q_j$ | Rank |
| --- | --- | --- | --- | --- |
| V1 | 0.849 | 0.101 | 0.692 | 4 |
| V2 | 0.235 | 0.046 | 0.000 | 1 |
| V3 | 0.344 | 0.063 | 0.156 | 2 |
| V4 | 0.269 | 0.101 | 0.271 | 3 |
| V5 | 0.921 | 0.159 | 1.000 | 5 |
Table 11. Qs and website ranking when v = 1/3.

| Website | $Q_j$ | Rank |
| --- | --- | --- |
| V1 | 0.625 | 4 |
| V2 | 0.000 | 1 |
| V3 | 0.155 | 2 |
| V4 | 0.344 | 3 |
| V5 | 1.000 | 5 |
Table 12. Qs and website ranking when v = 2/3.

| Website | $Q_j$ | Rank |
| --- | --- | --- |
| V1 | 0.760 | 4 |
| V2 | 0.000 | 1 |
| V3 | 0.157 | 2 |
| V4 | 0.197 | 3 |
| V5 | 1.000 | 5 |
Table 13. Weight sensitivity analysis ($w_{21}$ from 0.1 to 0.9).

| Website | 0.159 (initial) | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| V1 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| V2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| V3 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| V4 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| V5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 | 5 |