Article

Development and Validation of an Instrument to Evaluate Technology-Enhanced Learning and Teaching Sustainability in Teaching Spelling

Faculty of Education, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(5), 4256; https://doi.org/10.3390/su15054256
Submission received: 22 December 2022 / Revised: 15 February 2023 / Accepted: 22 February 2023 / Published: 27 February 2023
(This article belongs to the Collection Technology-Enhanced Learning and Teaching: Sustainable Education)

Abstract
(1) Background: The current advancement in technology-enhanced learning and teaching sustainability has extended into the teaching of spelling. Teaching spelling is paramount, as it is the impetus for English language mastery. However, a persistent paucity of research on technology-enhanced learning and teaching of spelling motivated the purpose of this study: to undertake a pioneering preliminary study on the development and validation of an instrument (initially with 43 developed items under six constructs) based on the unified theory of acceptance and use of technology (UTAUT) model to evaluate such sustainability. (2) Methods: Content validity was determined in two stages: Stage 1, the instrument development stage, and Stage 2, the instrument validation stage. (3) Results: For the first research question, the five experts agreed on 40 items (I-CVI = 1), while three items received contradicting ratings (FC7 = 0.40; BI7 = 0.40; UB6 = 0.60), which validated the instrument of this study. For the second research question, the remaining 40 items yielded S-CVI (average) = 1 and S-CVI (universal agreement) = 1. (4) Conclusions: The final 40-item instrument is content-valid and could be used in a separate study to evaluate technology-enhanced learning and teaching sustainability in teaching spelling, ultimately advancing English language mastery.

1. Introduction

English language mastery is paramount because English is a global lingua franca [1,2]. Accordingly, spelling is vital in English language learning and teaching because it is the foundation for developing literacy and mastering other English language skills [3,4,5,6]. In the same vein, [7] embraces the idea of 21st-century, child-centered (paido-centric) learning and teaching that uses information and communication technology (ICT). The use of technology-enhanced learning and teaching has become preponderant in language learning and teaching. Ref. [8] highlighted how English language teachers could boost the perceptions of their students, the Generation Z millennials, with technology-enhanced learning and teaching. Furthermore, the unprecedented COVID-19 pandemic accelerated the adoption of emerging technologies in education [6,9], particularly technology-enhanced learning and teaching [10].
English language teachers are the front liners of language education, and their opinions must be taken into consideration so that knowledge of spelling can be transmitted effectively via technology-enhanced learning and teaching [11,12]. Evidently, studies have shown that teaching readiness can influence students’ accomplishments [13,14,15]. Accordingly, once proven content-valid, the instrument developed in this study will later be used to gather and analyze empirical data on technology-enhanced learning and teaching sustainability in teaching spelling, thereby gauging English language teachers’ instructional preparedness. In line with this, [16] asserted that providing quality education fulfills the fourth of the Sustainable Development Goals (SDG 4), ensuring language education sustainability.
However, for the past decade, from 2012 to 2022, most studies have focused on students [6,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. Only four studies related to teaching spelling, with or without technology applications [11,12,33,34]. Admittedly, these numbers reflect a paucity of research concerning technology-enhanced learning and teaching sustainability in teaching spelling, as English language teachers have received far less attention than students in this area of research.
Thus, this study is purposefully proposed as a preliminary study to develop and validate an instrument that can evaluate technology-enhanced learning and teaching sustainability in the teaching of spelling. In this study, the item-level and scale-level content validity indices (CVI) of the instrument were computed in accordance with two research questions:
RQ1: What is the item-level content validity index (I-CVI) of the instrument to evaluate technology-enhanced learning and teaching sustainability in teaching spelling?
RQ2: What is the scale-level content validity index (S-CVI) of the instrument to evaluate technology-enhanced learning and teaching sustainability when teaching spelling?

1.1. Teaching Spelling

Ref. [35] states that there are three main approaches to teaching spelling: the phonemic, morphemic, and whole-word approaches. For the phonemic approach, [36] described how the phonics of a word is used to teach its spelling. In the whole-word approach, spelling instruction uses either explicit or implicit learning. For the morphemic approach, [37] explained that it builds on a consonant–vowel–consonant (CVC) order of letters, and when a short morphograph ends, and the next morphograph begins, with a vowel, the final consonant is taught repeatedly.
Previously, English language teachers taught mainly through rote memorization and evaluated their students’ ability to spell words in class [4,38]. Many education systems still rely on test-based, passive learning with few technology tools [38,39,40,41,42], yielding poor outcomes in students’ spelling. Recent research, however, reported successes in using mobile learning interventions for learning to spell [43,44,45,46]; such methods should be adopted by English language teachers for teaching spelling.
In addition, in the surveys conducted by [45,47] on students at different levels, the participants of both surveys stated that they enjoyed using social media while simultaneously improving their spelling of words. In line with this, [25] observed how teaching spelling using Telegram Autobot, a social application, can improve primary school students’ learning of compound-noun spelling. Ref. [35] indicated that this knowledge, together with English language teachers’ intentions and use of technology-enhanced learning and teaching for the teaching of spelling, would help students improve their spelling.

1.2. Past Studies on the Use of Technology-Enhanced Learning and Teaching in Teaching Spelling

English language teachers must keep abreast of the latest technology-enhanced learning and teaching platforms to transfer spelling knowledge to their students, the Generation Z millennials. According to [8,48], Generation Z millennials are digital natives of technology. However, despite English language teachers being the front liners in transmitting spelling knowledge, limited studies focus on them and the teaching of spelling. As highlighted in the introduction, for the past decade, from 2012 to 2022, there have been only four related studies on the teaching of spelling [11,12,33,34].
Study [11] examined 56 Canadian teachers’ perceptions of teaching spelling in the elementary grades through a survey questionnaire. The teachers suggested that spelling should be emphasized more in the curriculum and in teacher education, and that various resources for teaching spelling be provided, including online materials via technology-enhanced learning and teaching.
On the other hand, [33] interviewed 27 primary schoolteachers regarding their opinions on the influence of texting on their students’ literacy development. Responses were mixed: some teachers were supportive of texting and its technological effects on their students’ literacy, while others were not.
Subsequently, [34] discovered that English language teachers should review students’ spelling development to enable organized spelling guidance to be carried out, providing students with the required spelling instruction. Additionally, [12] appraised trends in teaching spelling, from traditional methods to the usage of technology.
Generally, although there have been several highlights on English language teachers, their teaching of spelling, and their acknowledgement of technology-enhanced learning and teaching, no studies have addressed the gap between technology-enhanced learning and teaching sustainability and teaching spelling. Thus, it is imperative that a preliminary study be conducted to outline the development and validation of an instrument to evaluate technology-enhanced learning and teaching sustainability in teaching spelling, which would be a pioneer study in the realm of education. This preliminary study aimed to ensure that the instrument was content-valid [49] before it was distributed to English language teachers, so that subsequent evaluations of technology-enhanced learning and teaching sustainability for the teaching of spelling yield precise results.

1.3. Development and Validation of an Instrument to Evaluate Technology-Enhanced Learning and Teaching Sustainability in Teaching Spelling

There were 43 items relating to six constructs in the instrument, adapted from the unified theory of acceptance and use of technology (UTAUT) model of [50]. Ref. [50] explained that the UTAUT model was proposed to comprehend human acceptance behavior. The UTAUT model, as proposed by [50], consists of six constructs, namely performance expectancy (PE), effort expectancy (EE), social influences (SI), facilitating conditions (FC), behavioral intention (BI), and use behavior (UB), with behavioral intention (BI) also acting as a mediator. Gender, age, experience, and voluntariness of use act as moderators in the UTAUT model. The rationale for adopting the internal structure of the UTAUT model was that it suited the purpose of the developed and validated instrument, which is to evaluate technology-enhanced learning and teaching sustainability in teaching spelling.
Four constructs in the instrument of this study, namely performance expectancy (PE), effort expectancy (EE), social influences (SI), and facilitating conditions (FC), were retained as in the original UTAUT model. In addition, behavioral intention (BI), which functions as a mediator, and use behavior (UB) retained their positions while being adapted to the current context, so that technology-enhanced learning and teaching sustainability for teaching spelling can subsequently be evaluated in a separate study. However, the moderators were not highlighted in the adapted model since the focus of this study is the development and validation of the six constructs in the UTAUT model; the usual role of moderators is to help judge the external validity of a study by identifying the limitations of the relationships between variables [51]. Figure 1 displays the constructs of the adapted UTAUT model utilized in the development and validation of the instrument in this study.
In this study, performance expectancy (PE) measures how technology-enhanced learning and teaching could benefit English language teachers when teaching spelling to students. Effort expectancy (EE) measures the degree of ease associated with the use of technology-enhanced learning and teaching when teaching spelling among English language teachers. Social influences (SI) measure the degree to which English language teachers feel that opinions of others (peers, colleagues, administrators, students, parents, and family members) who are close to them are essential in influencing their behavior toward using technology-enhanced learning and teaching for the teaching of spelling. Facilitating conditions (FC) measure the degree to which English language teachers believe that resources, expertise, internet speed and training are accessible [52].
The respective factors of performance expectancy (PE), effort expectancy (EE), social influences (SI), and facilitating conditions (FC) are expected to influence behavioral intention (BI) and, in turn, the use of technology-enhanced learning and teaching for the teaching of spelling. Behavioral intention (BI) measures the English language teachers’ intention to use technology-enhanced learning and teaching for teaching spelling and acts as the mediator for observing whether the four factors affect use behavior (UB) directly or indirectly. Use behavior (UB) measures the teachers’ actual use of technology-enhanced learning and teaching for teaching spelling, which is expected to be influenced by the four factors and mediated by behavioral intention (BI).
Correspondingly, the UTAUT constructs have their roots in self-determination theory. References [53,54] explained that self-determination theory (SDT) is a theory of human motivation which holds that people become self-determined when their needs for competence, relatedness, and autonomy are fulfilled. SDT, proposed by [55], is applicable in explaining English language teachers’ driving factors in the context of technology-enhanced learning and teaching in teaching spelling, whereby it makes them feel responsible for ensuring English language mastery among their students.
Likewise, expert validation is necessary to ensure the relevancy of the developed items to the constructs in the instrument. Content validity is pertinent in any instrument development to ensure that the instrument measures what it is supposed to measure [56,57,58]. In most cases, however, the I-CVI is presented only in methodological research describing the content validation process. Additionally, [59] commented that authors of scale development papers often did not specify their S-CVI calculation method, although two methods are available to calculate the S-CVI, namely S-CVI (average) and S-CVI (universal agreement).
Figure 1. Constructs in the instrument based on the adapted UTAUT model [58].
The sole study that acknowledged these two methods of computing the index was [60]. They calculated their S-CVI using the averaging approach, explaining that, with so many raters in their study, content validity would be jeopardized if they used the S-CVI (universal agreement) method, which requires 100 percent agreement. By comparison, this study demonstrates both S-CVI (average) and S-CVI (universal agreement) to specify them clearly and to highlight that there are different ways to calculate the S-CVI for content-validity purposes.

2. Methods

There are three common types of validity: content, construct, and criterion-related validity. Since content validity is a prerequisite for the other types, it should be a top priority during instrument development and validation [49]. Ref. [58] explained that content validation refers to the evaluation of each item to determine its suitability for the instrument’s development purpose. In this study, an instrument was developed and validated to evaluate technology-enhanced learning and teaching sustainability for teaching spelling among English language teachers.
According to [61], there are two stages of content-validity determination: Stage 1, the instrument development stage, and Stage 2, the instrument validation stage. Stage 1 involves the identification of content and constructs, followed by item generation. In Stage 2, the content validity of the items and of the complete instrument is judged, based on feedback and ratings from the five experts regarding the items’ relevance to the constructs in the UTAUT model. In parallel, the content validity index (CVI) is the most frequently reported approach to content validity in instrument development reports because it offers clear information about each item, which can be used for instrument modification or item deletion [59,62,63]. This is fundamental to ensuring the general validity of an instrument [49].
Notably, the content validation process focuses on two elements: the items’ representativeness and their relevancy to what the researcher intends to measure [64]. Previously, the development and validation of instruments were often reported only transiently [49], which is the rationale for conducting this study. This study demonstrates the development and validation of an instrument for evaluating technology-enhanced learning and teaching sustainability in the teaching of spelling.

2.1. Stage 1: Instrument Development Stage

In the instrument development stage, the deductive method, based on an extensive review of past studies and advocated by [65,66], was utilized to generate items. Similarly, [67] indicates that most instrument development studies use the deductive method to generate items. The constructs need to be clearly outlined to expedite the validation process in the next stage [56,68]. This has been a common practice for item generation and validation purposes [69].
In this study, six constructs were identified, namely performance expectancy (PE), effort expectancy (EE), social influences (SI), facilitating conditions (FC), behavioral intention (BI), and use behavior (UB), adapted from the UTAUT model [50] to suit the purpose of instrument development, which is to evaluate technology-enhanced learning and teaching sustainability for teaching spelling. The items in the instrument were presented in English since the target participants were English language teachers, who were expected to comprehend the language and respond to the items easily. The instrument in this study consisted of 43 items across the 6 constructs: performance expectancy (PE) contains 8 items, while effort expectancy (EE), social influences (SI), facilitating conditions (FC), behavioral intention (BI), and use behavior (UB) contain 7 items each.
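As a quick arithmetic check, the item counts are consistent with the stated total: 8 + 5 × 7 = 43 items across the six constructs.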

2.2. Stage 2: Instrument Validation Stage

Expert feedback is part of pre-testing in a quantitative research design, used to validate the questionnaire items before proceeding to a pilot test and field study. The instrument developed in this study to evaluate technology-enhanced learning and teaching sustainability for the teaching of spelling was submitted to field experts for review to ensure that its content was appropriate and could serve the purpose of the study, as suggested by [49,70,71]. In general, experts should have expertise in the concepts, theories, and issues that govern the subject of the instrument, as well as in the techniques of instrument construction that affect its structural format.
Parallel to this, [61,63] recommended between five and ten content experts to be able to regulate chance agreements to a sufficient extent. On another note, [59] commented that having more than ten experts involved in the process is not recommended as an increase in experts reduces the likelihood of consensus. In this study, five experts were selected to validate the questionnaire. All five experts involved in validating the instrument of this study were academicians who worked in the field of education as lecturers.
Their areas of expertise were diversified: Experts 1, 3, 4, and 5 possessed a common background in ESL and educational technology, while Expert 2 had expertise in curriculum and alternative assessment, which is also closely related to the purpose of the study, namely to validate an instrument to evaluate technology-enhanced learning and teaching sustainability for teaching spelling. All five experts were expected to give an appropriate rating of relevance to the items in the instrument, considering their expertise in the realm of education. Their working experience ranged from 5 to 26 years, with an overall mean of 16 years (SD = 7.56, n = 5).
Initially, each expert was provided, via email, with an appointment letter to validate the instrument in this study, together with information on the purpose of the study, the conceptual framework, detailed instructions, and the complete set of 43 items under the 6 respective constructs, as suggested by [72]. The experts were requested to give their professional judgment, identify deficient areas, and provide feedback on rectifying sentence structure. This ensured that any difficulties English language teachers might encounter in deciphering the instructions and items would be resolved at this stage, before the instrument was distributed to them later.
A 4-point rating scale was used instead of a 3- or 5-point scale to avoid a neutral midpoint [59,62]. It provided the instrument developer with the specific information needed to calculate a meaningful CVI for validating the instrument, which would later evaluate technology-enhanced learning and teaching sustainability for teaching spelling.
Each expert, independent of the others, was requested to rate the relevancy of each item as “item is not relevant (1)”, “item is somewhat relevant (2)”, “item is quite relevant (3)”, or “item is highly relevant (4)”. After validating the questionnaire, the experts returned the completed instruments individually to the researcher via email or the WhatsApp application.
Following this, the content validity index (CVI) was calculated. In this study, the viewpoints on the items were quantified by computing the item-level content validity index (I-CVI) and the scale-level content validity index (S-CVI), as suggested by [59,62]. The first index covers the content validity of individual items; the second covers the content validity of the overall scale. The I-CVI value is obtained by assigning a coded value to each rating.
The characteristics of the CVI make it robust and allow direct interpretation, which assists in constructing valid data related to the content validity [73] of a new (or revised) item and scale. A low CVI value may indicate that the operationalization of the underlying construct was poor or that the construct specifications provided to the experts were insufficient; consequently, interpretations and discussions of the findings, or their comparison with other studies, would be invalid [74].
Ratings of “1” or “2” were coded as “0”, while ratings of “3” or “4” were coded as “1”, as recommended by [75], yielding a dichotomous scale. For each item, the number of experts coded as “1” (i.e., in agreement) was divided by the total number of experts who rated the instrument, as illustrated in the sketch below.
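As a minimal illustrative sketch (not the authors’ code), the coding and division can be expressed in Python; the raw ratings in the example are hypothetical, since the study reports only the dichotomized codes:

```python
# Illustrative sketch of the I-CVI computation described above:
# ratings of 3 or 4 are coded as 1, ratings of 1 or 2 as 0,
# and the coded values are averaged across experts.
def i_cvi(ratings):
    """Compute the item-level CVI from one 4-point rating per expert."""
    codes = [1 if r >= 3 else 0 for r in ratings]  # dichotomize ratings
    return sum(codes) / len(codes)

# Hypothetical raw ratings for an item judged relevant by 2 of 5 experts
# (matching the dichotomized pattern of item FC7 in Table 2):
print(i_cvi([2, 1, 4, 3, 2]))  # -> 0.4
```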
However, with the I-CVI there is also the possibility that agreement between raters happens by chance. References [59,62] recommended that the CVI should be “1” in the case of five or fewer experts (Table 1), as in this study. This means that the content validity of every item in the instrument must have the consensus of all experts. A CVI value of less than “1” meant that the item could insufficiently address the construct being explored, because it raised concerns of objectivity and relevance [76], and had to be revised. In alignment with this, the selected experts in this study were also expected to consider in their judgement that media or tools should not replace the English teachers’ role, as these teachers scaffold learning to give a better outcome for learners [77,78].
On the other hand, there are two ways to compute the S-CVI. In the average (Ave) approach, the sum of the I-CVIs is divided by the total number of items [59]. In the universal agreement (UA) approach [59], the number of items considered relevant by all the judges is divided by the total number of items.
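In formula form (notation ours, not from the source), with N the total number of items:

\[
\text{S-CVI(Ave)} = \frac{1}{N}\sum_{i=1}^{N}\text{I-CVI}_i, \qquad
\text{S-CVI(UA)} = \frac{\left|\{\, i : \text{I-CVI}_i = 1 \,\}\right|}{N}.
\]

The following section highlights the findings of this study.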

3. Results

In this study, the instrument consisted of 43 items developed according to six constructs based on the UTAUT model. Table 2 displays each of the five experts’ ratings of every item and the CVI calculation. The overall rubric and evaluation items were validated using the quantitative measure of the content validity index (CVI).

3.1. Value of Item-Level Content Validity Index (I-CVI)

The item-level content validity index (I-CVI) quantifies each item’s validity and relevancy. The I-CVI value for each item was calculated in a single column of an electronic spreadsheet. Table 2 shows the I-CVI value for each item, calculated from the total number of experts who gave a rating of “…quite relevant (3)” or “…highly relevant (4)”, both of which were coded as “1”. The sum of the I-CVIs of the 43 items in the six respective constructs was 41.4 (as shown in Table 2).
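As an arithmetic check derived from the item-level values in Table 2:

\[
\sum_{i=1}^{43}\text{I-CVI}_i = 40(1.00) + 2(0.40) + 1(0.60) = 41.4.
\]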

3.2. Value of Scale-Level Content Validity Index (S-CVI)

The S-CVI was calculated to ensure the content validity of the overall scale; it can be reported as S-CVI (average) and S-CVI (universal agreement). As observed in Table 2, all five experts agreed that 40 items were relevant (I-CVI = 1.00) but had differing opinions on three items, resulting in I-CVIs of 0.40 (FC7 and BI7) and 0.60 (UB6). Minor amendments were made to some of the items in the instrument, such as removing redundancy and rephrasing words so that they were more relevant and comprehensible in the context of the study. Across the 43 items, the S-CVI (average) was 0.96, while the S-CVI (UA), the proportion of items rated three or four by all experts, was 0.93, as displayed in Table 3 and spelled out below.
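Using the values from Table 3:

\[
\text{S-CVI(Ave)} = \frac{41.4}{43} \approx 0.96, \qquad
\text{S-CVI(UA)} = \frac{40}{43} \approx 0.93.
\]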
Both the S-CVI (average) and S-CVI (universal agreement) values fell short of the acceptable cut-off CVI value of 1 for five experts, as suggested by [59,62]. To meet this cut-off, every item in the instrument must be rated ‘3’ or ‘4’ (coded as ‘1’) by all five experts. Thus, three of the 43 items were removed, since [59,62] state that an item that does not reach this threshold would usually be deleted from the final instrument.
Table 4 lists the three items with I-CVI values lower than 1.00 that were marked for deletion, a step crucial to the next stage of instrument validation: two items were valued at 0.40 (FC7 and BI7) and one at 0.60 (UB6), all below 1.00.
Once these items (FC7, BI7, and UB6) were deleted, every remaining item had an I-CVI of 1, so the scale-level indices also rose to 1. Thus, a content-valid instrument with a final 40 items across the six UTAUT-based constructs was developed and validated successfully in this study.
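Recomputed over the remaining 40 items, each with I-CVI = 1, both scale-level indices reach the required cut-off, consistent with the values reported in the abstract:

\[
\text{S-CVI(Ave)} = \frac{40(1.00)}{40} = 1, \qquad \text{S-CVI(UA)} = \frac{40}{40} = 1.
\]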

4. Discussion

This study fills the research gap by establishing a preliminary study for developing and validating an instrument to evaluate technology-enhanced learning and teaching sustainability in the teaching of spelling. It reported in detail the initial development of the 43 items relevant to the six constructs of the adapted UTAUT model. After the validation process by the five experts and the CVI calculation, 40 items remained, with three items omitted to fulfill the cut-off value of 1, as suggested by [59,62]. This shows that it is imperative to document the detailed process of developing and validating an instrument so that it is content-valid before being distributed for a pilot test or field study, as with the instrument in this study for evaluating technology-enhanced learning and teaching sustainability for teaching spelling.
This suggests that, at the initiation of the content validation process, the researchers, as the scale developers in this study, must be committed to developing good items, constructing specifications, and choosing a competent panel of experts [62]. In this study, 40 of the 43 items across the six constructs obtained consensus validation from the five experts for the instrument evaluating technology-enhanced learning and teaching sustainability for the teaching of spelling. A total of three items were omitted owing to redundancy and ambiguity, as they did not receive agreement from all the experts. The remaining items were refined based on the five experts’ feedback before being included in the pilot test and field study. This also enables other English language teachers, or like-minded researchers in the same field, to evaluate the pertinence of the items when considering technology-enhanced learning and teaching for teaching spelling in their own practice.
Palpably, the S-CVI (average) method was preferred for the scale-level CVI, although there may be solid reasons for choosing the S-CVI (universal agreement) method. The rationale is that universal agreement becomes excessively strict when there is a large number of experts on the validation panel [60]; imposing 100 percent agreement seems highly conservative, especially if some experts hold a skewed perspective or cannot comprehend the task. Thus, the most informative approach, as reflected in this study, was to calculate the S-CVI both ways and provide both results.
The findings of this study carry three main implications. First, the measurement of content validity based on the CVI (I-CVI and S-CVI) indicated that this instrument has been developed and validated to evaluate technology-enhanced learning and teaching sustainability for teaching spelling among English language teachers. Second, English is important as a global language [6]; thus, this study contributes a developed and validated instrument for technology-enhanced learning and teaching sustainability in teaching spelling, which forms the basis for language learning and teaching.
Third, the findings of this study raise English language teachers’ awareness of adopting and ensuring technology-enhanced learning and teaching sustainability when teaching spelling, because the mastery of spelling forms the basis for advancing English language skills. It is essential to integrate technology-enhanced learning and teaching into learning and teaching sessions because the students are Generation Z millennials, as suggested by [6].
Similarly, the UTAUT constructs, which have their roots in SDT, are deemed applicable in explaining English language teachers’ driving factors in the context of technology-enhanced learning and teaching for teaching spelling, whereby they would feel responsible for the learning and teaching outcomes that ensure English language mastery among their students. However, media or tools should not replace these teachers’ role, as they scaffold learning to give a better outcome for learners [77,78]. The findings of this instrument contribute to the body of knowledge on evaluating technology-enhanced learning and teaching sustainability for the teaching of spelling.

5. Conclusions

This paper demonstrated a preliminary study to develop and validate an instrument that can evaluate technology-enhanced learning and teaching sustainability for teaching spelling, which is vital in scale development. It is a stepping stone toward creating a conducive environment for sustainable education in the long term, exhibiting the impacts of technology in education. Although content validity is subjective, using this method adds objectivity. The knowledge contributed by the five experts in this study provided the researchers with valuable information for revising the instrument. Admittedly, scale developers who calculate the CVI in their content validation efforts should be explicit about their S-CVI calculations, as displayed in this study.
Throughout this study, experience and knowledge, alongside strong conceptualizations of the constructs, good items, and a judicious choice of experts, have been essential [63,79]. Additionally, providing the five experts with explicit instructions regarding the rudimentary constructs and the rating task was necessary [49,61] for validating the instrument to evaluate the technology-enhanced learning and teaching sustainability of teaching spelling. Despite this, the reliability (consistency) of the instrument has not yet been established; it is recommended that the instrument be tested on English language teachers of various backgrounds as participants to ascertain its reliability.
This instrument was developed and validated as a preliminary study. After minor improvements in sentence construction and other details, it could be utilized in a pilot test and field study to infer results from the collected data. It could also serve as a cross-reference for researchers from related fields who wish to adopt CVI content-validation measures in instrument validation, as in [80]. In the context of evaluating technology-enhanced learning and teaching sustainability, it additionally promotes awareness of language education sustainability, ultimately advancing students’ learning and teaching of the English language to the next level of mastery. This is essential because, as [16] put it, providing quality education fulfills the fourth of the Sustainable Development Goals (SDG 4).

Author Contributions

Conceptualization, E.L.Y.Y., H.H. and M.M.Y.; methodology, E.L.Y.Y., H.H. and M.M.Y.; software, E.L.Y.Y.; validation, E.L.Y.Y., H.H. and M.M.Y.; writing—original draft preparation, E.L.Y.Y.; writing—review and editing, E.L.Y.Y., H.H. and M.M.Y.; visualization, E.L.Y.Y.; supervision, H.H. and M.M.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Faculty of Education, Universiti Kebangsaan Malaysia (GG-2022-018).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peter, T.; Jean, H. Proquest Firm International English: A Guide to the Varieties of Standard English, 5th ed.; Taylor and Francis: London, UK, 2013. [Google Scholar]
  2. Yunus, M.M.; Lau, E.Y.Y.; Mohd Khair, A.H.; Mohd Yusof, N. Acquisition of Vocabulary in Primary Schools via GoPic with QR Code. Int. J. Engl. Lang. Lit. Stud. 2020, 9, 121–131. [Google Scholar] [CrossRef]
  3. Yeiser, S.E.; Ehredt, A.; Haydon, M. Spelling in the Classroom. Appl. Behav. Anal. Interv. Strateg. Lit. 2012, 8, 1–6. [Google Scholar]
  4. Adoniou, M. Teaching and Assessing Spelling [Literacy Leadership Brief]. Int. Lit. Assoc. 2019, 9450, 1–16. [Google Scholar]
  5. Treiman, R. Learning to Spell Words: Findings, Theories, and Issues. Sci. Stud. Read. 2017, 21, 265–276. [Google Scholar] [CrossRef]
  6. Lau, E.Y.Y.; Mohamad, M. Spelling Mastery via Google Classroom among Year 4 Elementary School ESL Students during the COVID-19 Pandemic. J. Educ. E-Learn. Res. 2021, 8, 206–215. [Google Scholar] [CrossRef]
  7. Shaharanee, I.N.M.; Jamil, J.M.; Rodzi, A.S.S.M. The Application of Google Classroom as a Tool for Teaching and Learning. J. Telecommun. Electron. Comput. Eng. 2016, 8, 5–8. [Google Scholar] [CrossRef]
  8. Hashim, H. Application of Technology in the Digital Era Education. Int. J. Res. Couns. Educ. 2018, 1, 1. [Google Scholar] [CrossRef]
  9. Peñarrubia-Lozano, C.; Segura-Berges, M.; Lizalde-Gil, M.; Bustamante, J.C. A Qualitative Analysis of Implementing E-Learning during the COVID-19 Lockdown. Sustainability 2021, 13, 3317. [Google Scholar] [CrossRef]
  10. Rafiq, K.R.M.; Hashim, H.; Yunus, M.M. New Qualitative Perspective in Human–Computer Interaction: Designing Mobile English for STEM. Front. Psychol. 2022, 13, 863422. [Google Scholar] [CrossRef]
  11. Doyle, A.; Zhang, J.; Mattatall, C. Spelling Instruction in the Primary Grades: Teachers’ Beliefs, Practices, and Concerns. Read. Horiz. J. Lit. Lang. Arts 2015, 54, 10–19. [Google Scholar]
  12. Pan, S.C.; Rickard, T.C.; Bjork, R.A. Does Spelling Still Matter—And If So, How Should It Be Taught? Perspectives from Contemporary and Historical Research. Educ. Psychol. Rev. 2021, 33, 1523–1552. [Google Scholar] [CrossRef]
  13. Slavit, D.; Nelson, T.H.; Lesseig, K. The Teachers’ Role in Developing, Opening, and Nurturing an Inclusive STEM-Focused School. Int. J. STEM Educ. 2016, 3, 7. [Google Scholar] [CrossRef] [Green Version]
  14. Ismail, M.E.; Hashim, S.; Hamzah, N.; Samad, N.A.; Masran, S.H.; Daud, K.A.M.; Amin, N.F.M.; Shamsudin, M.A.; Kamarudin, N.Z.S. Factors That Influence Students’ Learning: An Observation on Vocational College Students. J. Tech. Educ. Train. 2019, 11, 93–99. [Google Scholar] [CrossRef]
  15. Leng, K.S.; Razali, F.; Ayub, A.F.M. The Effectiveness of Realistic Mathematics Education Approach toward Students’ Learning: A Systematic Literature Review of Empirical Evidence. J. Crit. Rev. 2020, 7, 548–552. [Google Scholar] [CrossRef]
  16. Rosa, W. Goal 4: Ensure Inclusive and Equitable Quality Education and Promote Lifelong Learning Opportunities for All; Springer Publishing: New York, NY, USA, 2017. [Google Scholar]
  17. Wood, C.; Jackson, E.; Hart, L.; Plester, B.; Wilde, L. The Effect of Text Messaging on 9- and 10-Year-Old Children’s Reading, Spelling and Phonological Processing Skills. J. Comput. Assist. Learn. 2011, 27, 28–36. [Google Scholar] [CrossRef]
  18. Verheijen, L. The Effects of Text Messaging and Instant Messaging on Literacy. Engl. Stud. 2013, 94, 582–602. [Google Scholar] [CrossRef]
  19. Shokri, H.; Abdolmanafi-Rokni, S.J. The Effect of Using Educational Computer Games on Recall and Retention of Spelling in Iranian EFL Learners. Int. J. Appl. Linguist. Engl. Lit. 2014, 3, 169–175. [Google Scholar] [CrossRef]
  20. Vásquez, A.; Nussbaum, M.; Sciarresi, E.; Martínez, T.; Barahona, C.; Strasser, K. The Impact of the Technology Used in Formative Assessment. J. Educ. Comput. Res. 2017, 54, 1142–1167. [Google Scholar] [CrossRef]
  21. Lin, P.H.; Liu, T.C.; Paas, F. Effects of Spell Checkers on English as a Second Language Students’ Incidental Spelling Learning: A Cognitive Load Perspective. Read. Writ. 2017, 30, 1501–1525. [Google Scholar] [CrossRef]
  22. Moser, G.P.; Morrison, T.G.; Wilcox, B. Supporting Fourth-Grade Students’ Word Identification Using Application Software. Read. Psychol. 2017, 38, 349–368. [Google Scholar] [CrossRef]
  23. Rimbar, H. The Influence of Spell-Checkers on Students’ Ability to Generate Repairs of Spelling Errors. J. Nusant. Stud. (JONUS) 2017, 2, 1. [Google Scholar] [CrossRef] [Green Version]
  24. Ngesi, N.; Landa, N.; Madikiza, N.; Cekiso, M.P.; Tshotsho, B.; Walters, L.M. Use of Mobile Phones as Supplementary Teaching and Learning Tools to Learners in South Africa. Read. Writ. 2018, 9, 1–12. [Google Scholar] [CrossRef]
  25. Bakar, S.F.A.; Fauzi, F.H.; Yasin, N.F.M.; Yunus, M.M. Compound Chunk: Telegram Autobot Quiz to Improve Spelling on Compound Nouns. Int. J. Acad. Res. Progress. Educ. Dev. 2019, 8, 48–63. [Google Scholar] [CrossRef] [PubMed]
  26. Alasmari, N.; Alamri, N. Does the MS Spell Checker Effectively Correct Non-Native English Writers’ Errors? A Case Study of Saudi University Students. Glob. J. Hum. Soc. Sci. 2019, 19, 33–52. [Google Scholar] [CrossRef]
  27. Mudassir, Q.; Gul, S.; Siming, I.A.; Siming, G.A. The Use and Impact of Spell Checker among QUEST Undergraduates by Using Computer-Based Software Word Processor. Int. J. Comput. Sci. Netw. Secur. 2020, 20, 66–71. [Google Scholar]
  28. Lau, E.Y.Y.; Mohamad, M. Utilising E-Learning to Assist Primary School ESL Pupils in Learning to Spell during COVID-19 Pandemic: A Literature Review. Creat. Educ. 2020, 11, 1223–1230. [Google Scholar] [CrossRef]
  29. Haque, S.M.F.; Al Salem, N.M. Social Media in EFL Context: Attitudes of Saudi Learners. J. Lang. Teach. Res. 2019, 10, 1029. [Google Scholar] [CrossRef]
  30. Brown, S.; Allmond, A. Emergent Bilinguals’ Use of Word Prediction Software Amid Digital Composing. Read. Teach. 2021, 74, 607–616. [Google Scholar] [CrossRef]
  31. Wong, W.L.; Maarop, A.H.; Chuah, K.P. Did You Run the Telegram? Use of Mobile Spelling Checker on Academic Writing. Multiling. Acad. J. Educ. Soc. Sci. 2022, 10, 1–19. [Google Scholar]
  32. McCarthy, K.S.; Roscoe, R.D.; Allen, L.K.; Likens, A.D.; McNamara, D.S. Automated Writing Evaluation: Does Spelling and Grammar Feedback Support High-Quality Writing and Revision? Assess. Writ. 2022, 52, 100608. [Google Scholar] [CrossRef]
  33. Wray, D. An Exploration of the Views of Teachers Concerning the Effects of Texting on Children’s Literacy Development. J. Inf. Technol. Educ. Res. 2015, 14, 271–282. [Google Scholar] [CrossRef]
  34. Treiman, R. Teaching and Learning Spelling. Child Dev. Perspect. 2018, 12, 235–239. [Google Scholar] [CrossRef]
  35. Simonsen, F.; Gunter, L. Best Practices in Spelling Instruction: A Research Summary. J. Direct Instr. 2001, 1, 97–105. [Google Scholar]
  36. Westwood, P. Learning to Spell: Enduring Theories, Recent Research and Current Issues. Aust. J. Learn. Diffic. 2018, 23, 137–152. [Google Scholar] [CrossRef]
  37. Alipour, M.; Salehuddin, K.; Stapa, S.H. An Overview of the Persian EFL Learners’ Spelling Difficulties. Int. J. Multicult. Multireligious Underst. 2019, 6, 127. [Google Scholar] [CrossRef]
  38. Wahyu, D.A. Improving Students’ English Spelling Ability Through Concentration Game and Tell a Story Game. In Proceedings of the 2nd National Conference on Teaching English for Young Learners, Jawa Tengah, Indonesia, 10–12 July 2012; pp. 205–217. [Google Scholar]
  39. Al-Sobhi, B.; Md Rashid, S.; Abdullah, A.N. Arab ESL Secondary School Students’ Attitude Toward English Spelling and Writing. Sage Open 2018, 8, 2158244018763477. [Google Scholar] [CrossRef] [Green Version]
  40. Yeung, S.S.; Qiao, S. Developmental Trends and Precursors of English Spelling in Chinese Children Who Learn English-as-a-Second Language: Comparisons between Average and at-Risk Spellers. Res. Dev. Disabil. 2019, 93, 103456. [Google Scholar] [CrossRef]
  41. Martin, K.I. The Impact of L1 Writing System on ESL Knowledge of Vowel and Consonant Spellings. Read. Writ. 2017, 30, 279–298. [Google Scholar] [CrossRef]
  42. Bear, D.R.; von Gillern, S.; Xu, W. Learning to Spell in English by Chinese Students: A Cross-Sectional Study. TESOL Int. J. 2018, 13, 47–66. [Google Scholar]
  43. Powell, D.; Dixon, M. Does SMS Text Messaging Help or Harm Adults’ Knowledge of Standard Spelling? J. Comput. Assist. Learn. 2011, 27, 58–66. [Google Scholar] [CrossRef]
  44. Shih, R.-C.; Lee, C.; Cheng, T.-F. Effects of English Spelling Learning Experience through a Mobile LINE APP for College Students. Procedia Soc. Behav. Sci. 2015, 174, 2634–2638. [Google Scholar] [CrossRef] [Green Version]
  45. Wilson, F. The Effect of Social Media on the Spelling Ability of Students: A Case Study of Federal College of Education (FCE) Yola. Edelweiss Appl. Sci. Technol. 2018, 2, 262–274. [Google Scholar] [CrossRef]
  46. Tshering, P.; Norbu, D.; Dorji, S.; Dema, N.; Dhungyel, P.R. Experience of a Gamified Spelling Solving by Students for Learning Spelling: Development of Kids Spell Dzongkha App. In Proceedings of the 2018 International Conference on Current Trends towards Converging Technologies, ICCTCT 2018, Coimbatore, India, 1–3 March 2018; pp. 1–5. [Google Scholar] [CrossRef]
  47. Rou, L.Y.; Yunus, M.M.; Suliman, A. The Influence of Social Media on Spelling Skills among Primary School Students. Int. J. Innov. Creat. Change 2019, 7, 284–297. [Google Scholar]
  48. Md Yunus, M.; Ang, W.S.; Hashim, H. Factors Affecting Teaching English as a Second Language (TESL) Postgraduate Students’ Behavioural Intention for Online Learning during the COVID-19 Pandemic. Sustainability 2021, 13, 3524. [Google Scholar] [CrossRef]
  49. Yusoff, M.S.B. ABC of Content Validation and Content Validity Index Calculation. Educ. Med. J. 2019, 11, 49–54. [Google Scholar] [CrossRef]
  50. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User Acceptance of Information Technology: Toward A Unified View. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef] [Green Version]
  51. Bhandari, P. Mediator vs. Moderator Variables: Differences & Examples. Available online: https://www.scribbr.com/methodology/mediator-vs-moderator/ (accessed on 20 December 2022).
  52. Sivathanu, B. Adoption of Digital Payment Systems in the Era of Demonetization in India. J. Sci. Technol. Policy Manag. 2019, 10, 143–171. [Google Scholar] [CrossRef]
  53. Legault, L. Self-Determination Theory. In Encyclopedia of Personality and Individual Differences; Springer International Publishing: Cham, Switzerland, 2017; pp. 1–9. [Google Scholar]
  54. Lopez-Garrido, G. Self-Determination Theory and Motivation. Available online: https://www.simplypsychology.org/self-efficacy.html (accessed on 11 February 2023).
  55. Ryan, R.M.; Deci, E.L. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef]
  56. DeVellis, R.F. Scale Development: Theory and Applications, 4th ed.; SAGE Publications: Thousand Oaks, CA, USA, 2017. [Google Scholar]
  57. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E. Multivariate Data Analysis, 8th ed.; Cengage Learning: London, UK, 2019. [Google Scholar]
  58. Ramli, N.F.; Talib, O.; Hassan, S.A.; Manaf, U.K.A. Development and Validation of an Instrument to Measure STEM Teachers’ Instructional Preparedness. Asian J. Univ. Educ. 2020, 16, 193–206. [Google Scholar] [CrossRef]
  59. Polit, D.F.; Beck, C.T. The Content Validity Index: Are You Sure You Know What’s Being Reported? Critique and Recommendations. Res. Nurs. Health 2006, 29, 489–497. [Google Scholar] [CrossRef] [Green Version]
  60. Rubio, D.M.G.; Berg-Weger, M.; Tebb, S.S.; Lee, E.S.; Rauch, S. Objectifying Content Validity: Conducting a Content Validity Study in Social Work Research. Soc. Work Res. 2003, 27, 94–104. [Google Scholar] [CrossRef]
  61. Lynn, M.R. Determination and Quantification of Content Validity. Nurs. Res. 1986, 35, 382–386. [Google Scholar] [CrossRef]
  62. Polit, D.F.; Beck, C.T.; Owen, S.V. Focus on Research Methods: Is the CVI an Acceptable Indicator of Content Validity? Appraisal and Recommendations. Res. Nurs. Health 2007, 30, 459–467. [Google Scholar] [CrossRef]
  63. Zamanzadeh, V.; Ghahramanian, A.; Rassouli, M.; Abbaszadeh, A.; Alavi-Majd, H.; Nikanfar, A.-R. Design and Implementation Content Validity Study: Development of an Instrument for Measuring Patient-Centered Communication. J. Caring Sci. 2015, 4, 165–178. [Google Scholar] [CrossRef]
  64. Kamaluddin, M.R.; Nasir, R.; Wan Sulaiman, W.S.; Khairudin, R.; Ahmad Zamani, Z. Validity and Psychometric Properties of Malay Translated Religious Orientation Scale-Revised among Malaysian Adult Samples. Akademika 2017, 87, 133–144. [Google Scholar] [CrossRef] [Green Version]
  65. Hinkin, T.R. A Brief Tutorial on the Development of Measures for Use in Survey Questionnaires. Organ. Res. Methods 1998, 1, 104–121. [Google Scholar] [CrossRef]
  66. Uy, H. Development and Content Validity of the Readiness for Filial Responsibility Scale. J. Stud. Soc. Sci. Humanit. 2020, 6, 100–115. [Google Scholar]
  67. Morgado, F.F.; Meireles, J.F.; Neves, C.M.; Amaral, A.; Ferreira, M.E. Scale Development: Ten Main Limitations and Recommendations to Improve Future Research Practices. Psicol. Reflex. E Crit. 2017, 30, 3. [Google Scholar] [CrossRef] [Green Version]
  68. Bustamante, J.C.; Rubio, N. Measuring Customer Experience in Physical Retail Environments. J. Serv. Manag. 2017, 28, 884–913. [Google Scholar] [CrossRef]
  69. Mohd Yusoff, S.; Tengku Ariffin, T.F. Development and Validation of Contextual Leadership Instrument for Principals in Malaysian School Context (MyCLIPS). Lead. Policy Sch. 2021, 20, 1–16. [Google Scholar] [CrossRef]
  70. Heale, R.; Twycross, A. Validity and Reliability in Quantitative Studies. Evid. Based Nurs. 2015, 18, 66–67. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. De Souza, A.C.; Alexandre, N.M.C.; Guirardello, E.d.B.; de Souza, A.C.; Alexandre, N.M.C.; Guirardello, E.D.B. Propriedades Psicométricas Na Avaliação de Instrumentos: Avaliação Da Confiabilidade e Da Validade. Epidemiol. Serviços Saúde 2017, 26, 649–659. [Google Scholar] [CrossRef] [PubMed]
  72. Waltz, C.; Strickland, O.; Lenz, E. Measurement in Nursing Research; Davis Company: Philadelphia, PA, USA, 1991. [Google Scholar]
  73. Masuwai, A.; Shah Saad, N. Evaluating the Face and Content Validity of a Teaching and Learning Guiding Principles Instrument (TLGPI): A Perspective Study of Malaysian Teacher Educators. Geografia 2016, 12, 11–21. [Google Scholar]
  74. Vakili, M.M.; Jahangiri, N. Content Validity and Reliability of the Measurement Tools in Educational, Behavioral, and Health Sciences Research. J. Med. Educ. Dev. 2018, 10, 106–118. [Google Scholar] [CrossRef] [Green Version]
  75. Gilbert, G.E.; Prion, S. Making Sense of Methods and Measurement: Lawshe’s Content Validity Index. Clin. Simul. Nurs. 2016, 12, 530–531. [Google Scholar] [CrossRef]
  76. Sangoseni, O.; Hellman, M.; Hill, C. Development and Validation of a Questionnaire to Assess the Effect of Online Learning on Behaviors, Attitudes, and Clinical Practices of Physical Therapists in the United States Regarding Evidenced-Based Clinical Practice. Internet J. Allied Health Sci. Pract. 2013, 11, 7. [Google Scholar] [CrossRef]
  77. Gharehblagh, N.M.; Nasri, N. Developing EFL Elementary Learners’ Writing Skills through Mobile-Assisted Language Learning (MALL). Teach. Engl. Technol. 2020, 20, 104–121. [Google Scholar]
  78. Rafiq, K.R.M.; Hashim, H.; Yunus, M.M. Sustaining Education with Mobile Learning for English for Specific Purposes (ESP): A Systematic Review (2012–2021). Sustainability 2021, 13, 9768. [Google Scholar] [CrossRef]
  79. Davis, L.L. Instrument Review: Getting the Most from a Panel of Experts. Appl. Nurs. Res. 1992, 5, 194–197. [Google Scholar] [CrossRef]
  80. Nithia, K.; Yusop, F.D.; Chua, Y. Development and Validation of an Authentic-Based Competency Assessment Rubric for Secondary School Multimedia Production Subject. Int. J. Educ. Pedagog. 2020, 2, 112–121. [Google Scholar]
Table 1. The number of experts and their implication on the acceptable cut-off score of CVI.

Number of Experts | Acceptable CVI Values | Source of Recommendation
Two experts | At least 0.80 | Davis (1992) [79]
Three to five experts | Should be 1 | Polit & Beck (2006) [59]; Polit et al. (2007) [62]
At least six experts | At least 0.83 | Polit & Beck (2006) [59]; Polit et al. (2007) [62]
Six to eight experts | At least 0.83 | Lynn (1986) [61]
At least nine experts | At least 0.78 | Lynn (1986) [61]
Source: [49].
Table 2. Content validity index (CVI) calculation.

Item | Expert ratings (E1 E2 E3 E4 E5) | Experts in Agreement | I-CVI | UA
PE1. I find that using m-learning to teach spelling is interesting. | 1 1 1 1 1 | 5 | 1.00 | 1
PE2. I find that using m-learning to teach spelling is user-friendly. | 1 1 1 1 1 | 5 | 1.00 | 1
PE3. I find that using m-learning to teach spelling is cost-effective. | 1 1 1 1 1 | 5 | 1.00 | 1
PE4. I find that using m-learning to teach spelling can be conducted anytime. | 1 1 1 1 1 | 5 | 1.00 | 1
PE5. I find that using m-learning to teach spelling can be conducted anywhere. | 1 1 1 1 1 | 5 | 1.00 | 1
PE6. I find that using m-learning to teach spelling is beneficial for students. | 1 1 1 1 1 | 5 | 1.00 | 1
PE7. I find that I can guide students better with the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
PE8. I find that I can increase my work productivity with the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE1. I am good in using m-learning to teach spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE2. I find it easy to be skillful in using m-learning to teach spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE3. I keep up with the trend in using m-learning to teach spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE4. I can download teaching and learning materials from many sources (such as video, audio, slides, notes and exercises) for the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE5. I can combine the use of m-learning and the strategies in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE6. I can solve simple technical issues when using m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
EE7. The use of m-learning in teaching spelling stresses me. | 1 1 1 1 1 | 5 | 1.00 | 1
SI1. I use m-learning in teaching spelling because the school administrators encourage me to integrate m-learning into my teaching. | 1 1 1 1 1 | 5 | 1.00 | 1
SI2. I use m-learning in teaching spelling because my colleagues encourage me to integrate m-learning into my teaching. | 1 1 1 1 1 | 5 | 1.00 | 1
SI3. I use m-learning in teaching spelling because my colleagues use m-learning in their teaching. | 1 1 1 1 1 | 5 | 1.00 | 1
SI4. I use m-learning in teaching spelling because I could get technical support from others. | 1 1 1 1 1 | 5 | 1.00 | 1
SI5. When I use m-learning in teaching spelling, I receive positive feedback from students. | 1 1 1 1 1 | 5 | 1.00 | 1
SI6. When I use m-learning in teaching spelling, I receive positive feedback from parents. | 1 1 1 1 1 | 5 | 1.00 | 1
SI7. People who influence my behaviour think that I should use m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
FC1. I have access to teaching and learning resources for the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
FC2. I have access to tutorials for the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
FC3. I have access to create teaching and learning materials for the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
FC4. I have a mobile device that allows me to use m-learning in teaching spelling anytime. | 1 1 1 1 1 | 5 | 1.00 | 1
FC5. I have a mobile device that allows me to use m-learning in teaching spelling anywhere. | 1 1 1 1 1 | 5 | 1.00 | 1
FC6. I have a stable Internet connection to use m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
FC7. I have a poor Internet connection to use m-learning in teaching spelling. | 0 0 1 1 0 | 2 | 0.40 | 0
BI1. I will try my best to use m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
BI2. I will attend related workshops to improve my skills in the use of m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
BI3. I will use m-learning in teaching spelling in the future. | 1 1 1 1 1 | 5 | 1.00 | 1
BI4. I will combine the use of m-learning in teaching spelling with other language skills. | 1 1 1 1 1 | 5 | 1.00 | 1
BI5. The use of m-learning will change the way I teach spelling to my students. | 1 1 1 1 1 | 5 | 1.00 | 1
BI6. I will recommend my colleagues to use m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
BI7. I do not plan to use m-learning in teaching spelling in the future. | 0 0 1 1 0 | 2 | 0.40 | 0
UB1. I use m-learning in teaching spelling. | 1 1 1 1 1 | 5 | 1.00 | 1
UB2. I combine the use of m-learning in teaching spelling with other language skills. | 1 1 1 1 1 | 5 | 1.00 | 1
UB3. I use m-learning in teaching spelling through activities of detecting spelling mistakes in given words to students. | 1 1 1 1 1 | 5 | 1.00 | 1
UB4. I use m-learning in teaching spelling through individual practices. | 1 1 1 1 1 | 5 | 1.00 | 1
UB5. I use m-learning in teaching spelling through group practices. | 1 1 1 1 1 | 5 | 1.00 | 1
UB6. I use m-learning in teaching spelling more compared to teaching spelling through face-to-face method. | 0 1 1 1 0 | 3 | 0.60 | 0
UB7. The use of m-learning in teaching spelling adds additional duties to my regular work. | 1 1 1 1 1 | 5 | 1.00 | 1
Proportion relevance (per expert) | 0.93 0.95 1.00 1.00 0.93 | Sum of I-CVI = 41.4 | Sum of UA = 40
Note: Proportion relevance is the average proportion of items judged as relevant by each of the 5 experts; m-learning (also known as mobile learning).
Table 3. Sum of I-CVI and UA.

Sum of I-CVI: 41.4 | Sum of UA: 40
S-CVI (Average) = Sum of I-CVI / No. of items = 0.96 | S-CVI (Universal Agreement) = Sum of UA / No. of items = 0.93
Source: Adapted from [49].
Table 4. List of items for deletion with I-CVI value less than 1.00 (n = 3).

Item (n = 43) | E1 | E2 | E3 | E4 | E5 | Total Experts Agreed | I-CVI
FC7. I have a poor Internet connection to use m-learning in teaching spelling. | – | – | / | / | – | 2 | 0.40
BI7. I do not plan to use m-learning in teaching spelling in the future. | – | – | / | / | – | 2 | 0.40
UB6. I use m-learning in teaching spelling more compared to teaching spelling through face-to-face method. | – | / | / | / | – | 3 | 0.60
Note: “/” indicates the expert selected a rating of “3” or “4” (expert columns reconstructed from the codes in Table 2).