Article

Second Phase of the Adaptation Process of the Mathematics Self-Efficacy Survey (MSES) for the Mexican–Spanish Language: The Confirmation

by Gustavo Morán-Soto 1,2 and Omar Israel González Peña 3,4,5,6,*
1 Department of Basic Sciences, Instituto Tecnológico de Durango, Durango 34080, Mexico
2 Department of Basic Sciences Campus Durango, Tecnológico Nacional de Mexico, Ciudad de Mexico 03330, Mexico
3 School of Engineering and Science, Tecnológico de Monterrey, Monterrey 64849, Mexico
4 Institute for the Future of Education, Tecnológico de Monterrey, Monterrey 64849, Mexico
5 Evidence-Based Medicine Research Unit, Children’s Hospital of Mexico Federico Gómez, National Institute of Health, Ciudad de Mexico 06720, Mexico
6 National Institute of Sciences and Innovation for the Formation of the Scientific Community, INDEHUS, Anillo Periferico 4860, Arenal de Guadalupe, Tlalpan, Ciudad de Mexico 14389, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2905; https://doi.org/10.3390/math10162905
Submission received: 16 July 2022 / Revised: 6 August 2022 / Accepted: 10 August 2022 / Published: 12 August 2022

Abstract:
Investing in the development of professionals in STEM areas brings a country great economic and quality-of-life benefits. Unfortunately, there is a gender gap, with women lagging behind their peers, and minority groups such as Hispanics are grossly underrepresented in these careers. It is therefore a priority to generate assessment instruments adapted to the cultural context and language of Latino students in order to attract a more diverse population to STEM areas. This study presents a thorough validation process for the adaptation of the Mathematics Self-Efficacy Survey (MSES) to the Spanish language and the Mexican engineering context. Exploratory and confirmatory factor analyses were conducted with data collected from 683 Mexican engineering students to analyze its validity. The results highlight that the original three dimensions of the MSES still provide a sound structure for assessing math self-efficacy, and the confirmatory factor analysis eliminated items that were outdated or out of context for this specific population. As a result, this study presents a 12-item adaptation that could help Latino researchers collect reliable math self-efficacy data to better understand how their students feel when they learn and practice mathematics.

1. Introduction

1.1. Cultural and General Context of Mathematics in STEM Education

There is a prevailing need for countries all over the world to pursue constant development by training their human capital in the areas of science, technology, engineering, and mathematics (STEM) [1]. This need has led countries such as the United States and Mexico to work on initiatives within their national plans to promote their economic development [2,3], making extra efforts to increase the quantity and quality of their professionals in STEM areas, as well as trying to implement and develop new technology aimed at solving their most urgent needs. However, developing professionals in STEM areas across different regions and countries has been a challenge. One reason could be that minority groups such as African Americans and Latinos lag behind in representation in these careers [4]; likewise, there is a gender gap in which women are not represented in equal proportion to men among graduates in these careers, nor are they present in the labor force in a proportion equal to their male peers [5,6,7,8]. Therefore, educational institutions at all levels must be prepared to promote all students’ interest in studying STEM careers, as well as to improve their curricular plans and pedagogical strategies to help students successfully complete their majors despite their possible cultural differences [9].
The current trends in educational innovation in upper secondary and higher education contemplate several options that may help improve the promotion of STEM careers among students [10], considering strategies such as applying different didactic methods, improving assessment processes, involving technology in the classroom, training instructors and administrative staff, or developing both technical and transversal skills [11,12,13,14]. The success of these strategies would be of great help in fulfilling world labor market demands, as well as the UNESCO and OECD goals of developing and enhancing young students’ creativity through design activities and the use of technology, which are an important part of the STEM areas [15].
Although there are many factors that may influence students’ decisions to pursue a STEM-related major [14,16,17,18], the literature suggests that students who have developed good skills in mathematics [19,20] and have taken more math courses at the high school level are the ones who most frequently enroll in STEM-related majors [21,22,23]. Having a good set of mathematical skills could also increase students’ probability of successfully completing their studies in STEM majors due to the confidence that they develop in their math abilities [24].
This study focuses on students’ perceptions of their math abilities and their feelings when they are learning and practicing math. According to the math self-efficacy literature, students who report high math self-efficacy are more likely to be involved in math-related activities and to use their math knowledge to solve problems requiring advanced calculations [25,26,27,28]. Conversely, students with low math self-efficacy are usually less interested in taking math-related courses that could help them develop their math abilities [29], making them more likely to have negative experiences while learning math and to avoid math-related activities in school and in their daily lives [30,31]. The self-efficacy concept was developed by Bandura [32], who defined it as “the perception of people about their capacities to organize themselves and carry out the actions necessary to achieve a certain level of performance” (p. 391). Self-efficacy beliefs are a core concept in Bandura’s Social Cognitive Theory, playing a determining role in people’s selection of activities and in their effort, persistence, and emotional reactions when performing such activities [32].
In the end, students’ math self-efficacy feelings can influence their decision to pursue and successfully complete a STEM major, as these majors include many courses that use math as a tool to solve problems and design technological solutions. Hence the importance of this type of research: this study presents an instrument that could help STEM educators and professors better understand their students’ math self-efficacy feelings and discusses the relevance of cultural and context adaptations.

1.2. Purpose

The goal of this study was to continue the validation process of the adaptation of the Mathematics Self-Efficacy Survey (MSES) to the Spanish language and Mexican engineering context. This study is an extension of the exploratory analysis conducted by Morán-Soto et al. [33]; further factorial analyses were conducted to confirm previous findings, aiming to provide sound evidence that the instrument adaptation can be used to assess engineering students’ math self-efficacy in future research projects. The thorough validation process presented in this study is important given the relevance of current research analyzing the effects of math self-efficacy on students around the world.
Despite findings highlighting the connection between students’ math self-efficacy feelings and their interest and motivation to select and complete a STEM major [34,35], there are methodological flaws in the way that self-efficacy data are collected by some researchers. These methodological issues usually arise when math self-efficacy is measured using well-recognized instruments without considering the differences created by students’ educational level, major, language, and cultural context [36,37,38]. If researchers collect new data sets using instruments that are not adapted and tested for validity in the target population, then the findings and conclusions of that study could be biased by different interpretations of the items and by cultural and contextual misunderstandings [39], jeopardizing the trustworthiness of the results [40]. Developing awareness of this trustworthiness issue could help future researchers improve the quality of their work, since there are many examples of studies that collected math self-efficacy information using the Mathematics Self-Efficacy Survey (MSES) without considering any context and cultural adaptation or thorough validation tests to confirm the suitability of this instrument in the context of their research [36,41,42].
The MSES was selected for this adaptation to the Spanish language and Mexican engineering context due to its global relevance and constant use during four decades of math self-efficacy research since its creation [43]. The MSES is frequently used in self-efficacy research despite the multiple instruments that have been created by other researchers around the world, including instruments designed to measure primary and secondary students’ math self-efficacy [44,45] and others aiming to measure this variable for college students in different contexts [46,47]. Although these tailor-made instruments are well developed and are usually based on Bandura’s guide for constructing self-efficacy scales [48], most of the math self-efficacy research in the current literature relies on the MSES or one of its three subconstructs to assess math self-efficacy in different contexts, including studies with adult students [49], nursing students [50,51], gifted students [52], and pre-service teachers [53], among others.
In the end, having more reliable assessment tools to collect math self-efficacy information in different contexts, languages, and cultures will help the international research community to feel more welcome and included in the current math education research agenda, which ultimately will provide engineering educators and professors with information to develop a better understanding of their students’ needs while learning and practicing math. This adaptation in particular will be of great value for Latino researchers interested in math self-efficacy research conducted in Hispanic countries since there is a lack of suitable instruments that could yield reliable assessments of this construct in the Spanish language and Latin American context and culture.

2. Materials and Methods

2.1. Participants

The data for this study were collected in two different semesters, with a sample of 396 participants from the fall 2021 semester and 287 participants from the spring 2022 semester. All participants (n = 683) were first-year students enrolled in a mandatory math course in different engineering programs at a public university in Mexico that is part of the largest technological university system in the country. All participants voluntarily agreed to answer a paper-based survey during their class time.
Data from 396 participants were collected for the first sample with 251 males (63%) and 145 females (37%) during the fall 2021 semester. The age of the participants ranged between 17 and 41 (mean = 18.87), and 7% were enrolled in biochemistry, 3% in civil, 9% in electrical, 38% in industrial, 20% in mechanical, 17% in chemistry, and 6% in computing systems engineering. Likewise, the second sample was collected from 287 participants with 150 males (52%) and 137 females (48%) during the spring 2022 semester, with 20% enrolled in biochemistry, 17% in civil, 5% in electronic, 27% in industrial, 10% in mechatronics, 4% in chemistry, and 17% in computing systems engineering. All participants from the spring 2022 sample were different from the ones who answered the survey during the fall 2021 data collection.
These samples were used in two different validation tests: the fall 2021 sample for an exploratory factor analysis and the spring 2022 sample for a confirmatory factor analysis. The literature suggests at least six participants per item for factorial analysis [54,55,56]. Therefore, the size of both samples met the minimum criterion ratio suggested in psychometrics for structural validation analysis, with subject-to-item ratios of approximately 11 for the fall 2021 sample and 8.4 for the spring 2022 sample.
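The subject-to-item arithmetic above can be checked in a couple of lines. The item counts below are assumptions inferred from the paper (35 administered items for the fall 2021 ratio; 34 retained items for spring 2022, which reproduces the reported 8.4):

```python
# Rule of thumb: at least ~6 respondents per item for factor analysis.
# Item counts are assumptions inferred from the paper's figures.
def subject_item_ratio(n_subjects: int, n_items: int) -> float:
    return n_subjects / n_items

fall = subject_item_ratio(396, 35)    # ≈ 11.3
spring = subject_item_ratio(287, 34)  # ≈ 8.4
print(round(fall, 1), round(spring, 1))
```

Both ratios clear the six-per-item minimum with room to spare.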
Additionally, the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy [57] was checked for both samples, with a KMO = 0.94 for the fall 2021 sample (for the exploratory factor analysis) and a KMO = 0.95 for the spring 2022 sample (for the confirmatory factor analysis). These KMO values were close to 1, indicating that the patterns of correlations are relatively compact, and factor analysis should yield reliable results [57]. These results showed that both sample sizes were adequate for the factorial analyses [58].
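The KMO statistic compares observed correlations against partial correlations: values near 1 mean the partial correlations are small and the correlation pattern is compact. A minimal pure-Python sketch, using a toy 3-item correlation matrix with illustrative values (not the study’s data):

```python
from math import sqrt

def invert(m):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(m)
    a = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # partial pivoting
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def kmo(corr):
    """Overall Kaiser-Meyer-Olkin measure from a correlation matrix."""
    n = len(corr)
    inv = invert(corr)
    r2 = p2 = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                r2 += corr[i][j] ** 2
                # partial correlation from the inverse correlation matrix
                partial = -inv[i][j] / sqrt(inv[i][i] * inv[j][j])
                p2 += partial ** 2
    return r2 / (r2 + p2)

# Toy 3-item correlation matrix (illustrative values only)
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
print(round(kmo(R), 3))
```

For real data one would first compute the item correlation matrix from the survey responses; packages such as R’s `psych::KMO` implement the same formula.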

2.2. Instrument

The instrument adapted in the previous study [33] was the “Mathematics Self-Efficacy Survey” [43]. This instrument was selected due to its consistent results in assessing students’ mathematical self-efficacy perceptions in high school [59] and college [60,61]. The MSES was originally written in English and consists of 52 items presented in Likert-scale form, asking participants to rate their level of confidence from 0 (“no confidence at all”) to 9 (“complete confidence”). In the previous research conducted by Morán-Soto et al. [33], the MSES items were translated from English to Spanish, and some cultural adaptations were made to adjust the instrument and test the validity of the data collected from Mexican engineering students. Additionally, the Likert scale was shifted to a 1-to-10 scale, aiming to avoid possible issues arising from treating “0” as a value while keeping the original 10-point range and helping Mexican students rate their math self-efficacy level.
The 37 items resulting from the MSES adaptation in the previous research [33] were divided into three subconstructs: 13 items for everyday math activities (e.g., calculate the total of your grocery bill in your head as you add the items one by one), 8 for math-related courses (e.g., obtaining a B or higher grade in your geometry course), and 16 for math problem solving (e.g., the average of three numbers is 30; if the fourth number is at least 10, what is the average of the four numbers?). Continuing the validation process of the MSES and following the suggestion from the previous adaptation and validation phases described by Morán-Soto et al. [33], the eight math-related course items of the MSES adaptation were removed and replaced with six math courses more appropriate to the engineering students’ context. The six math courses were selected from the curriculum that most engineering students must complete for their major in Mexico [62]: (1) precalculus, (2) differential calculus, (3) integral calculus, (4) linear algebra, (5) vector calculus, and (6) differential equations. Thus, the instrument adaptation used 35 items to collect information for the two samples (fall 2021 and spring 2022).

2.3. Exploratory Factor Analysis

The multivariate normality properties of the data were tested to determine the suitability of conducting factor analyses given the nature of the data. Two separate multivariate normality analyses were performed, one for each collected sample: the fall 2021 sample with 396 participants and the spring 2022 sample with 287 participants. The fall 2021 sample showed coefficients ranging from −2.3834 to −0.4661 for skewness and from 2.4292 to 9.2376 for kurtosis, while the spring 2022 sample ranged from −2.0786 to −0.4878 for skewness and from 2.4207 to 8.5523 for kurtosis. According to Kline [63], items that reach absolute values greater than 3.0 for skewness and 10.0 for kurtosis could be problematic for large samples. Hence, these two samples were considered within the acceptable range of multivariate normality.
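Kline’s screening rule can be expressed directly in code. A minimal sketch using population moments (the exact estimator the authors used is not specified, so treat this as one plausible formulation), with toy Likert-style responses:

```python
def skewness(xs):
    """Third standardized moment (population form)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return (sum((x - m) ** 3 for x in xs) / n) / s2 ** 1.5

def kurtosis(xs):
    """Fourth standardized moment (non-excess; a normal curve gives ~3)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return (sum((x - m) ** 4 for x in xs) / n) / s2 ** 2

def flag_problematic(item_responses, skew_cut=3.0, kurt_cut=10.0):
    """Kline's rule of thumb: |skewness| > 3 or kurtosis > 10 is problematic."""
    return [i for i, xs in enumerate(item_responses)
            if abs(skewness(xs)) > skew_cut or kurtosis(xs) > kurt_cut]

# Toy responses for two hypothetical items on a 1-10 scale
items = [[1, 2, 3, 4, 5] * 10, [10, 10, 10, 10, 9] * 10]
print(flag_problematic(items))  # → []
```

Items that survive this screen, as all items did in both samples here, are considered close enough to normality for the factorial analyses.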
Continuing with the validation process, an exploratory factor analysis (EFA) was conducted with the fall 2021 sample (n = 396) looking for replicability of the factorial model established in the previous work by Morán-Soto et al. [33]. This EFA was carried out aiming to confirm the previous evidence of the structural model stability, as well as to analyze how the six new items load in the math-related course subconstruct.
Before conducting the EFA, a scree-plot test was performed aiming to identify the ideal number of factors to consider when performing the EFA. The graphical results of the scree-plot are shown in Figure 1 (elbow graph). These results helped to determine that three factors were optimal for the EFA, which was in line with the factors established by the original instrument and the previous results adapting the MSES to the Spanish language [33,43].
Following the scree-plot test results, a three-factor EFA was conducted using an oblique (promax) rotation [64]. This rotation is normally used when multivariate data normality has previously been established by skewness and kurtosis analyses [65]. Performing an additional EFA was necessary to check the stability of the suggested model structure. The structure of any instrument is the baseline for collecting reliable data on variables with multiple subconstructs [65], hence the importance of conducting EFA replications to provide enough evidence that the instrument structure remains stable when data from different populations are analyzed. For this particular case, the EFA was necessary due to the substitution of the six math courses, aiming to evaluate engineering students’ math self-efficacy in a better context and looking for more reliable subconstructs to explain math self-efficacy [66]. As a result, the three-factor EFA was conducted before moving to the following factorial analysis test (confirmatory factor analysis).
The cutoff value to evaluate the items using the EFA was a loading factor of 0.32 or higher, which helped to determine if such items could be considered as part of a factor [67]. Although other researchers suggest different cutoff factor loadings such as Hinkin suggesting 0.40 [68] or Costello and Osborne suggesting 0.30 [69], this study used the 0.32 or higher suggested by Tabachnick and Fidell [67] to allow the data to define the most appropriate structure for the items assessing math self-efficacy, as there is not a global consensus about the right cutoff value for “good” factor loadings [70]. Using EFA is a common strategy when social science researchers are testing new or adapted instruments aiming to validate the appropriateness of the data collected with such instruments and the way that certain items group in multidimensional variables [71].
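The cutoff rule amounts to assigning each item to its strongest factor and dropping items that never reach the threshold. A small sketch with hypothetical item labels and loadings (the only loading taken from the paper is Q20’s 0.27):

```python
def assign_items(loadings, cutoff=0.32):
    """loadings maps item -> its loadings on each factor.
    Assign each item to its strongest factor; drop items whose best
    absolute loading falls below the cutoff (0.32 per Tabachnick & Fidell)."""
    kept, dropped = {}, []
    for item, ls in loadings.items():
        best = max(range(len(ls)), key=lambda f: abs(ls[f]))
        if abs(ls[best]) >= cutoff:
            kept[item] = best
        else:
            dropped.append(item)
    return kept, dropped

# Toy loadings on three factors; Q3's values are invented for illustration
example = {"Q3": [0.70, 0.10, 0.05], "Q20": [0.10, 0.27, 0.05]}
print(assign_items(example))  # → ({'Q3': 0}, ['Q20'])
```

Raising the cutoff to Hinkin’s 0.40 or lowering it to Costello and Osborne’s 0.30 only changes the `cutoff` argument, which is why the choice of threshold matters.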

2.4. Confirmatory Factor Analysis

Following the EFA results, a confirmatory factor analysis (CFA) was conducted with the spring 2022 sample (n = 287). The CFA is used in social science research to test whether items hypothesized to measure a single underlying latent construct actually do so [72]. Unlike the EFA, the CFA explicitly tests a previously specified model to determine whether the items load onto the specific factors expected by the researcher [73]. The latent factors were allowed to covary during the CFA, which is consistent with the oblique rotation previously used in the EFA [71,74].
The results of the CFA were tested with several indexes of model fit and significance of the paths between latent variables for the three subconstructs using the remaining items following the EFA results. These fit indexes included chi-square, which should be non-significant at the p > 0.05 value [75]; Comparative Fit Index (CFI), expecting acceptable values above 0.95 [72,76]; Tucker Lewis Index (TLI), expecting acceptable values above 0.95 [72,76]; and root mean square error of approximation (RMSEA), expecting values less than 0.08 for moderate fit, 0.05 for a good fit, or 0.01 for excellent fit [77]. All these fit indexes were used to evaluate the CFA results because each index provides a different facet of how the model fits the data. The fit indexes and chi-square statistics resulting from the CFA were adjusted using the Satorra–Bentler correction aiming to find a robust solution due to sample size and the nature of the data [78]. At the end of the CFA, some items were deleted one by one to conduct an additional CFA after each item deletion aiming to optimize model fit indexes and statistical significance in the CFA results [79]. These deleted items were selected from the ones that showed the lowest EFA factor loadings with the goal of reaching an acceptable model to assess math self-efficacy.
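The fit indexes named above follow standard formulas relating the model chi-square to a baseline (null-model) chi-square. A sketch with hypothetical chi-square values (not the study’s actual results) showing how CFI, TLI, and RMSEA are computed:

```python
from math import sqrt

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative Fit Index: improvement of the model over the baseline."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis Index: compares chi2/df ratios, penalizing complexity."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

def rmsea(chi2_m, df_m, n):
    """Root mean square error of approximation for a sample of size n."""
    return sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

# Hypothetical model vs. baseline chi-squares for n = 287
print(round(cfi(100.0, 51, 1500.0, 66), 3))  # → 0.966
print(round(tli(100.0, 51, 1500.0, 66), 3))  # → 0.956
print(round(rmsea(100.0, 51, 287), 3))       # → 0.058
```

In practice, SEM software such as R’s lavaan reports these directly, including Satorra-Bentler-scaled versions; the point of the sketch is only how the thresholds (CFI/TLI > 0.95, RMSEA < 0.08/0.05) relate to the chi-square statistics.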

2.5. Internal Consistency

At the end of the factorial analyses, the reliability indexes for each of the three resulting math self-efficacy subconstructs were calculated using Cronbach’s Alpha (α), expecting values of 0.80 or higher to validate the internal consistency of each subconstruct [80]. All the quantitative analyses including the descriptive, factorial, and reliability analyses were performed using the statistical software R [81].
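Cronbach’s alpha for each subconstruct follows the usual variance decomposition (item variances against the variance of the summed scale). A minimal sketch with a hypothetical four-item subconstruct and invented responses:

```python
def cronbach_alpha(items):
    """items: one list of responses per item, aligned by respondent."""
    k = len(items)
    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent scale score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy subconstruct: four hypothetical items, four respondents, 1-10 scale
subconstruct = [
    [7, 8, 9, 10],
    [6, 8, 9, 10],
    [7, 7, 9, 9],
    [6, 8, 8, 10],
]
print(round(cronbach_alpha(subconstruct), 2))
```

Values of 0.80 or higher, as required here, indicate that the items in a subconstruct move together closely enough to be treated as one scale.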

3. Results

The results of the EFA are presented in Table 1. Only one item (Q20: “In Starville, a calculation with Ө using two variables a & b is defined by a Ө b = a × (a + b). Therefore, 2 Ө 3 equals to __________”) was deleted, as its factor loading of 0.27 in the math problem-solving subconstruct did not reach the minimum criterion of 0.32 to be considered part of a factor. The remaining 34 items were grouped into three factors, explaining a total cumulative variance of 45%. These items presented factor loadings ranging from 0.38 to 0.88 (see Table 1), in line with the guidelines for optimal practice in exploratory factor analysis, where a loading greater than 0.32 is recommended for interpreting EFA results [69].

3.1. Confirmatory Factor Analysis

The model fit resulting from the CFA using the spring 2022 sample (n = 287) with the 34 items remaining after the EFA (see Table 1) did not reach acceptable standards. Hence, items with the lowest factor loadings in the EFA and in the CFA results of each successive run were systematically removed (see Table 1), aiming to achieve an acceptable model fit [72,76]. Although the EFA and CFA factor loadings are usually similar, the CFA factor loadings change each time an additional CFA is conducted after deleting one item. Therefore, the deleted items were not always the ones with the lowest EFA factor loadings; they were chosen considering both the new CFA loadings and the EFA loadings. Following this approach, 22 items were removed (see Table 1) from the three subconstructs of the adapted instrument: nine items from everyday math activities (Q1, Q2, Q4, Q5, Q8, Q9, Q10, Q11, and Q13; see Table 2), two from math-related courses (Q14 and Q19; see Table 2), and eleven from math problem solving (Q21, Q22, Q23, Q24, Q25, Q27, Q28, Q29, Q32, Q34, and Q35; see Table 2).
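The delete-and-refit loop can be sketched schematically. Everything here is hypothetical: `fit_fn` stands in for rerunning the CFA and returning a CFI-like index, and the real procedure also weighed EFA loadings, so this is a deliberate simplification with toy values:

```python
def prune_items(loadings, fit_fn, target=0.95):
    """Repeatedly drop the weakest-loading item and 'refit' until the
    fit index reaches the target. `fit_fn` is a hypothetical stand-in
    for refitting the CFA on the surviving items."""
    items = dict(loadings)
    while len(items) > 1 and fit_fn(items) < target:
        weakest = min(items, key=items.get)
        del items[weakest]  # delete one item, then refit on the next pass
    return items

# Toy loadings and a toy fit function that improves as weak items go
toy = {"Q1": 0.40, "Q2": 0.55, "Q3": 0.62, "Q4": 0.75, "Q5": 0.80}
fit = lambda its: 0.90 + 0.02 * (5 - len(its))
print(sorted(prune_items(toy, fit)))  # → ['Q4', 'Q5']
```

The one-at-a-time deletion matters: each removal changes the remaining loadings and fit, so deleting several items in a single batch could discard items that would have survived a refit.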
In the end, the final CFA model was tested with 12 items, with four items for each of the three subconstructs, resulting in good overall fit indexes, with a Comparative Fit Index of 0.954, a Tucker Lewis Index of 0.950, and a root mean square error of approximation value of 0.056. The chi-square statistic was significant with a p < 0.001, which was an expected result due to the sample size (n = 287); as such, this value is still considered within the range of a good model fit. All these fit index values suggest that this model is appropriate for future data collection with similar populations in the Spanish language. The results of the CFA are shown in Figure 2 with acceptable loading factors ranging from 0.69 to 0.88.

3.2. Internal Consistency

The factor reliability analyses of the three math self-efficacy subconstructs are presented in Table 3, showing acceptable Cronbach’s alpha values.

4. Discussion

The EFA was conducted with 35 items measuring self-efficacy in three subconstructs: (a) everyday math activities (13 items), (b) math-related courses (6 items), and (c) math problem solving (16 items). Conducting this EFA was an important step in providing further evidence that the MSES adaptation to the Spanish language clusters as in the original MSES three-dimension model [43] and replicates the model suggested by the previous EFA [33].
Conducting this EFA was necessary due to the substitution of the original math-related course items in this study’s data collection. The eight math-related course items were replaced by six college-level math courses that are part of the engineering curricula in most Mexican engineering majors [62]. These six items were not included in the previous EFA, hence the importance of conducting an additional EFA including them. In addition to analyzing these new items, the EFA was conducted to evaluate the replicability of the structural solution obtained in previous factorial analyses [65].
Providing further evidence that the three MSES subconstructs are conserved as established by the original theory (everyday math activities, math-related courses, and math problem solving; see Table 1) was important for the validation process of the data collected with this instrument adaptation. These results are especially relevant since other authors have suggested that more subconstructs could be necessary for the MSES structure in their EFA validation tests. A study from Kranzler and Pajares [82] suggested that four factors would describe the MSES subconstructs in a better way, separating the math-related courses into two different subconstructs: one with the math and statistics courses, and the second one with courses where math is used but is not the main focus. Another study from Langenfeld and Pajares [83] suggested that five factors would facilitate a better assessment of math self-efficacy, separating the math-related courses subconstruct into two in a similar way to that of Kranzler and Pajares [82] and forming an additional problem-solving subconstruct with only two items.
The previous EFA results presented by Morán-Soto et al. [33] could not be fully replicated in this study’s EFA due to the elimination of item Q20. Although this one item did not reach a loading factor greater than 0.32 on any subconstruct, the EFA replication showed that the three-dimension model for measuring math self-efficacy was highly stable. These results provide confidence that the three-factor model proposed by the original version of the MSES, and retained in this study, still works in a stable way for math self-efficacy data collection.
This study was able to conduct the confirmatory phase of the validation process through CFA due to the positive results of the structural tests of the MSES adaptation to the Spanish language and Mexican engineering students’ context using exploratory analyses [66]. The confirmatory phase of an instrument’s validation process helps researchers test whether each item belongs to the factors established by the exploratory factor analyses [72]. Therefore, the validation process moved to a confirmatory modeling stage in which the 34 remaining items were tested to confirm that they belong to each of the three previously established subconstructs (everyday math activities, math-related courses, and math problem solving; see Table 1).
The confirmatory phase of the validation process is stricter than the exploratory phase due to the existing hypothesis that each item loads onto a specific factor [72]. Therefore, deleting more items during the confirmatory process is usually necessary to achieve an acceptable model fit for the CFA, losing items that lack a meaningful connection with the responses to the other items previously grouped into each factor [73]. During the confirmatory phase of the adaptation process of the MSES to the Spanish language and Mexican engineering context, 22 items were deleted before reaching an acceptable model fit. Although losing 22 items is not ideal in validation tests, it is important to meet the minimum model fit for the CFA [76] to provide evidence that the items of each subconstruct reliably assess all the dimensions of the variable; the importance of these model fit indexes was clearly established by Hu and Bentler, who adjusted the cutoff from 0.90 [84] to 0.95 in their more recent analysis of CFA results [76]. The loss of these items resulted in a 12-item model assessing three subconstructs of math self-efficacy with four items each (everyday math activities, math-related courses, and math problem solving; see Figure 2). Of the 22 items deleted following the CFA results, nine were from everyday math activities, two from math-related courses, and eleven from the math problem-solving subconstruct (see Table 1).
A detailed analysis of the items deleted after the CFA showed that most of the nine items removed from the everyday math activities subconstruct were related to activities that are no longer part of students’ common routines (see Table 2). These items asked students to perform math calculations within activities that were common in students’ lifestyles several years ago, such as making your own bookshelves and curtains, or opening a savings account and investing your savings. A lack of experience with these kinds of activities could disconnect students from the math calculations involved, creating extra cognitive load unrelated to their confidence in performing the calculations [85]. The elimination of this type of item was expected, as the original version of the MSES was developed four decades ago, and students’ lifestyles have changed drastically in that period [86]. These findings suggest that some items of this MSES adaptation to the Mexican engineering context need to be updated. This way, students could be asked about activities that are related to their actual context and involve the use of technology, generating a better understanding of their math self-efficacy in their current daily activities [87].
Likewise, analyzing the items deleted from the math-related courses subconstruct reveals a difference between the two math courses that were removed from the instrument adaptation and the four that were kept (see Table 2). The precalculus and differential equations courses are not required of all engineering students in the Mexican engineering curricula [62]. For instance, differential equations is required only in certain majors that use these advanced math applications to solve problems, such as electrical, mechatronics, and chemistry. Conversely, precalculus is taken only by students who did not reach the minimum math-knowledge requirement on the selection test and therefore need an extra semester of math courses to level up their math abilities. This context may help explain why these two courses were not kept in the math-related courses subconstruct after the CFA: students who experienced the precalculus course responded differently from those who started with differential calculus as their first college math course, as did students from different majors who are already familiar with the topics they will face in the differential equations course.
The subconstruct that lost the most items during the CFA was math problem solving, with eleven items deleted (see Table 2). The issue with these items was similar to that of the everyday math activities subconstruct: the items were outdated or out of context. The problem was greater for the math problem-solving items because engineering students usually work with advanced math topics, using technology such as calculators and software to solve all kinds of technological problems. Hence, the items in the original version of the MSES were not challenging enough, or the engineering students perceived them as different from the way they practice and develop their mathematical skills in their college math courses [28,88,89,90]. This idea is supported by the four items this subconstruct kept, in which students are asked about trigonometry applications, rates involving distances on a map, and equations that could be solved using a calculator or a model. These findings pinpoint the importance of context when developing or adapting instruments [48]. Adapting the problem-solving items to use college-level math problems in the context of engineering applications may present a better challenge for engineering students, resulting in a better assessment of this math self-efficacy subconstruct.
The results and discussion about the outdated items, and the way Mexican engineering students responded to them, highlight the importance of conducting thorough validation tests and of considering cultural and contextual aspects that may affect the assessment of any variable. If researchers truncate an instrument, using only one section of a multidimensional variable, their results could be incomplete or biased due to insufficient information. Likewise, if researchers modify an item they consider outdated or out of context without collecting data from a sample of the target population to conduct the necessary tests, their results and conclusions would be biased due to the lack of evidence about how the target population responds to the items. Hence, conducting EFA and CFA will help researchers enhance the trustworthiness of their findings, helping them avoid rushed decisions such as deleting or modifying items without sufficient evidence from data collected from the target population.
This study presents a thorough validation process using exploratory and confirmatory factor analyses. This detailed process may help researchers avoid trustworthiness issues stemming from a lack of validation tests and may keep them from rushing validation by conducting only basic content validity tests [36,91]. The methodological steps followed in this validation process could help math educators validate data collected with new or adapted instruments, especially those who would otherwise assess math self-efficacy with the MSES without appropriate validation tests adapting it to the culture, language, and context of their target population [37,53].
The adaptation of the MSES to the Mexican engineering context and Spanish language described in this study (see Table 4) highlights the contextual differences that different populations may face [92,93]. Most of these contextual and cultural issues are difficult to evaluate without data collection and confirmatory factor analysis, which provides evidence of the belonging of the items to the predicted factors based on how the target population perceives and responds to them. In the end, performing the confirmatory analysis provides certainty that future data collections using this instrument adaptation will be more accurate and will help researchers better understand students’ experiences learning math in Spanish-speaking populations [72]. The final version resulting from this validation process is presented in Table 4, showing the 12 items (four for each of the three subconstructs) confirmed by all the validation tests for collecting data to assess the math self-efficacy of Spanish-speaking populations in Mexico.
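As an illustration of the kind of model-fit screening involved in a CFA, the Python sketch below checks a set of fit indices against the cutoffs commonly attributed to Hu and Bentler [76] (CFI and TLI ≥ 0.95, RMSEA ≤ 0.06, SRMR ≤ 0.08). The index values in the example are hypothetical, not the ones obtained in this study.

```python
# Minimal sketch: screening CFA fit indices against commonly cited
# cutoffs (Hu & Bentler: CFI/TLI >= .95, RMSEA <= .06, SRMR <= .08).
# The fit values below are hypothetical, for illustration only.

CUTOFFS = {
    "CFI": (">=", 0.95),
    "TLI": (">=", 0.95),
    "RMSEA": ("<=", 0.06),
    "SRMR": ("<=", 0.08),
}

def acceptable_fit(indices):
    """Map each fit index to True if it meets its cutoff."""
    result = {}
    for name, (op, cutoff) in CUTOFFS.items():
        value = indices[name]
        result[name] = value >= cutoff if op == ">=" else value <= cutoff
    return result

# Hypothetical fit indices for a three-factor model
fit = {"CFI": 0.96, "TLI": 0.95, "RMSEA": 0.05, "SRMR": 0.04}
print(acceptable_fit(fit))  # every index meets its cutoff here
```

In practice these indices come from dedicated SEM software (the study cites R [81]); the point of the sketch is only that each index is compared against its own directional threshold before a model is retained.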

4.1. Implications for Practice

Developing better instruments to assess math self-efficacy in different contexts and languages will help researchers improve the quality of their studies, facilitating the design of better strategies to spark the interest of diverse populations in STEM majors and to help them successfully complete their studies [94]. A sound assessment of engineering students’ math self-efficacy levels can contribute to a better understanding of how they learn, practice, and apply math topics to solve problems in STEM areas, which ultimately could help educators create better learning environments for developing students’ mathematical skills [95]. Additionally, a better understanding of the psychological aspects that may affect students’ math learning could have a positive impact on underrepresented groups in STEM-related majors [96,97], helping educators around the world find ways to motivate them to pursue a STEM career. In this way, students of different genders and backgrounds could feel more welcome and understood, which ultimately may foster a greater diversity of people interested in learning and practicing mathematics and applying it to STEM topics [23].

4.2. Limitations and Future Work

A limitation of this study was the sample size, which was constrained because this study was a follow-up to the exploratory and replication phase of the MSES adaptation to the Spanish language [33]. The sample selection was therefore limited to first-year students of the same institution, with similar characteristics, who had not previously participated in the data collection, yielding smaller samples for both the fall 2021 and spring 2022 data collections.
This study presents a thorough adaptation of the MSES to the Spanish language and the context of Mexican engineering students. Although this process will facilitate data collection aimed at expanding the literature on mathematics self-efficacy in Spanish-speaking communities, further cultural adaptations and validation tests are advisable before using this instrument to collect information in other Spanish-speaking countries in Latin America or Europe, or among Spanish-speaking students in the United States. The adapted version of the MSES presented in this study provides a good starting point for adaptation projects in other contexts and cultures, and the validation process followed here could serve as a guide for continuing to refine an instrument that facilitates a thorough assessment of students’ math self-efficacy levels across cultures and contexts.

5. Conclusions

This study highlights the importance of performing thorough validation tests before collecting data with new or adapted instruments. Conducting the EFA and replicating the results with an additional EFA was a relevant step in providing sound evidence that the structure of the MSES adaptation to the Spanish language was stable, with the three dimensions suggested by the original instrument 40 years ago. Replicability tests aimed at confirming a stable instrument structure should be considered a necessary step for researchers seeking data validation, and this replication step should precede a CFA to obtain more reliable results.
Having presented evidence of the replicability of the instrument structure with three math self-efficacy subconstructs, the CFA was conducted with the 34 remaining items. The 22 items eliminated according to the CFA results are clear evidence that instruments need to be adapted to the context of the population to be assessed. The original MSES was developed four decades ago, and many of its items were outdated and out of step with current students’ lifestyles. Although the MSES is a well-recognized test used in math self-efficacy studies around the globe, this study suggests that many cultural and contextual adaptations are needed before using it to assess math self-efficacy in future studies.
Although this study presented an exhaustive validation process through exploratory and confirmatory factor analyses, data validation should be considered an ongoing process, to be revisited as the needs and experiences of diverse populations change. This detailed process can help researchers avoid reliability issues stemming from a lack of validation tests and can keep them from shortcutting validation by performing only basic content validity tests. As the 12-item instrument presented in this study (see Table 4) went through a rigorous process of exploratory and confirmatory factor analyses to adapt the MSES to the Spanish language and Mexican engineering context, it can be used to collect data from similar populations, and it can also serve as a guide for continued instrument refinement through further adaptations and validation tests. In the end, having better instruments to assess math self-efficacy in the Spanish language would help Latino researchers become more involved in the math education agenda, with the goal of improving Latino students’ math learning.

Author Contributions

Conceptualization, G.M.-S.; Methodology, Validation, Formal Analysis, Investigation, Resources, Visualization, G.M.-S. and O.I.G.P.; Data Curation, Project Administration, and Funding Acquisition, O.I.G.P.; Writing—original draft, G.M.-S.; Writing—review and editing, O.I.G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the academic vice-rectory of the Instituto Tecnológico de Durango, since the institution does not have an Institutional Review Board for studies involving humans.

Informed Consent Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the academic vice-rectory of the Tecnológico Nacional de México/Instituto Tecnológico de Durango, since the institution does not have an Institutional Review Board for studies involving humans.

Data Availability Statement

Due to institutional restrictions and protocol norms, the data are available upon justified request, subject to an evaluation confirming that no ethical violations are involved.

Acknowledgments

The authors acknowledge the Writing Lab, Institute for the Future of Education, Tecnológico de Monterrey, Mexico, for the financial and logistical support in the publication of this open-access study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Thomas, B.; Watters, J.J. Perspectives on Australian, Indian and Malaysian Approaches to STEM Education. Int. J. Educ. Dev. 2015, 45, 42–53. [Google Scholar] [CrossRef]
  2. Committee on STEM Education. Federal Science, Technology, Engineering, and Mathematics (STEM) Education; National Science and Technology Council: Washington, DC, USA, 2013.
  3. Gobierno de la República. Plan Nacional de Desarrollo 2019–2024; Gobierno de la República: Mexico City, Mexico, 2019.
  4. Funk, C.; Parker, K. Diversity in the STEM Workforce Varies Widely across Jobs; Pew Research Center: Washington, DC, USA, 2018. [Google Scholar]
  5. Gallindo, E.L.; Cruz, H.A.; Moreira, M.W.L. Critical Examination Using Business Intelligence on the Gender Gap in Information Technology in Brazil. Mathematics 2021, 9, 1824. [Google Scholar] [CrossRef]
  6. McCullough, L. Barriers and Assistance for Female Leaders in Academic Stem in the US. Educ. Sci. 2020, 10, 264. [Google Scholar] [CrossRef]
  7. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering; National Science Foundation: Alexandria, VA, USA, 2019.
  8. Morán-Soto, G.; González-Peña, O.I. Mathematics Anxiety and Self-Efficacy of Mexican Engineering Students: Is There Gender Gap? Educ. Sci. 2022, 12, 391. [Google Scholar] [CrossRef]
  9. English, L.D. STEM Education K-12: Perspectives on Integration. Int. J. STEM Educ. 2016, 3, 3. [Google Scholar] [CrossRef]
  10. Krstikj, A.; Sosa Godina, J.; García Bañuelos, L.; González Peña, O.I.; Quintero Milián, H.N.; Urbina Coronado, P.D.; Vanoye García, A.Y. Analysis of Competency Assessment of Educational Innovation in Upper Secondary School and Higher Education: A Mapping Review. Sustainability 2022, 14, 8089. [Google Scholar] [CrossRef]
  11. Expósito López, J.; Romero-Díaz de la Guardia, J.J.; Olmos-Gómez, M.D.C.; Chacón-Cuberos, R.; Olmedo-Moreno, E.M. Enhancing Skills for Employment in the Workplace of the Future 2020 Using the Theory of Connectivity: Shared and Adaptive Personal Learning Environments in a Spanish Context. Sustainability 2019, 11, 4219. [Google Scholar] [CrossRef]
  12. Garcia-Esteban, S.; Jahnke, S. Skills in European Higher Education Mobility Programmes: Outlining a Conceptual Framework. High. Educ. Ski. Work. Based Learn. 2020, 10, 519–539. [Google Scholar] [CrossRef]
  13. Benete, L. Transversal competencies in education policies and practice (Regional synthesis report, 2015). In Proceedings of the Central Asian Workshop on ESD and GCED, Almaty, Kazakhstan, 27 September 2017. [Google Scholar]
  14. Shaughnessy, M. Mathematics in a STEM Context. Math. Teach. Middle Sch. 2013, 18, 324. [Google Scholar]
  15. Rieckmann, M. Learning to transform the world: Key competencies in education for sustainable development. In Issues and Trends in Education for Sustainable Development; Leicht, A., Heiss, J., Byun, W.J., Eds.; UNESCO Publishing: Paris, France, 2018. [Google Scholar]
  16. De las Cuevas, P.; García-Arenas, M.; Rico, N. Why Not STEM? A Study Case on the Influence of Gender Factors on Students’ Higher Education Choice. Mathematics 2022, 10, 239. [Google Scholar] [CrossRef]
  17. Dou, R.; Hazari, Z.; Dabney, K.; Sonnert, G.; Sadler, P. Early Informal STEM Experiences and STEM Identity: The Importance of Talking Science. Sci. Educ. 2019, 103, 623–637. [Google Scholar] [CrossRef]
  18. Kennedy, T.J.; Odell, M.R.L. Engaging Students in STEM Education. Sci. Educ. Int. 2014, 25, 246–258. [Google Scholar]
  19. Ejiwale, J.A. Barriers to Successful Implementation of STEM Education. J. Educ. Learn. 2013, 7, 63. [Google Scholar] [CrossRef]
  20. Rozgonjuk, D.; Kraav, T.; Mikkor, K.; Orav-puurand, K.; Täht, K. Mathematics Anxiety among STEM and Social Sciences Students: The Roles of Mathematics Self-Efficacy, and Deep and Surface Approach to Learning. Int. J. STEM Educ. 2020, 7, 46. [Google Scholar] [CrossRef]
  21. Maltese, A.V.; Tai, R.H. Pipeline Persistence: Examining the Association of Educational Experiences with Earned Degrees in STEM among U.S. Students. Sci. Educ. 2011, 95, 877–907. [Google Scholar] [CrossRef]
  22. Trusty, J. Effects of High School Course-Taking and Other Variables on Choice of Science and Mathematics College Majors. J. Couns. Dev. 2002, 80, 464–474. [Google Scholar] [CrossRef]
  23. Tyson, W.; Lee, R.; Borman, K.M.; Hanson, M.A. Science, Technology, Engineering, and Mathematics (STEM) Pathways: High School Science and Math Coursework and Postsecondary Degree Attainment. J. Educ. Stud. Placed Risk 2007, 12, 243–270. [Google Scholar] [CrossRef]
  24. Rose, H.; Betts, J.R. Math Matters: The Links between High School Curriculum, College Graduation, and Earnings; Public Policy Institute of California: San Francisco, CA, USA, 2001. [Google Scholar]
  25. Hackett, G. Role of Mathematics Self-Efficacy in the Choice of Math-Related Majors of College Women and Men: A Path Analysis. J. Couns. Psychol. 1985, 32, 47–56. [Google Scholar] [CrossRef]
  26. Mau, W.C. Factors That Influence Persistence in Science and Engineering Career Aspirations. Career Dev. Q. 2003, 51, 234–243. [Google Scholar] [CrossRef]
  27. Lent, R.; Lopez, F.; Bieschke, K. Mathematics Self-Efficacy: Sources and Relation to Science-Based Career Choice. J. Couns. Psychol. 1991, 38, 424–430. [Google Scholar] [CrossRef]
  28. Grigg, S.; Perera, H.N.; McIlveen, P.; Svetleff, Z. Relations among Math Self Efficacy, Interest, Intentions, and Achievement: A Social Cognitive Perspective. Contemp. Educ. Psychol. 2018, 53, 73–86. [Google Scholar] [CrossRef]
  29. Lent, R.W.; Brown, S.D.; Hackett, G. Toward a Unifying Social Cognitive Theory of Career and Academic Interest, Choice, and Performance. J. Vocat. Behav. 1994, 45, 79–122. [Google Scholar] [CrossRef]
  30. Gardner, J.; Pyke, P.; Belcheir, M.; Schrader, C. Testing our assumptions: Mathematics preparation and its role in engineering student success. In Proceedings of the 2007 American Society for Engineering Education Annual Conference and Exposition, Honolulu, HI, USA, 24–27 June 2007. [Google Scholar]
  31. Middleton, J.A.; Krause, S.; Maass, S.; Beeley, K.; Collofello, J.; Culbertson, R. Early course and grade predictors of persistence in undergraduate engineering majors. In Proceedings of the Frontiers in Education Conference, FIE 2014, Madrid, Spain, 22–25 October 2014. [Google Scholar] [CrossRef]
  32. Bandura, A. Social Foundations of Thought and Action: A Social Cognitive Theory; Prentice Hall: Englewood Cliffs, NJ, USA, 1986. [Google Scholar]
  33. Morán-Soto, G.; Valdivia Vázquez, J.A.; González Peña, O.I. Adaptation Process of the Mathematic Self-Efficacy Survey (MSES) Scale to Mexican-Spanish Language. Mathematics 2022, 10, 798. [Google Scholar] [CrossRef]
  34. Marra, R.M.; Rodgers, K.A.; Shen, D.; Bogue, B. Women Engineering Students and Self-Efficacy: A Multi-Year, Multi-Institution Study of Women Engineering Student Self-Efficacy. J. Eng. Educ. 2009, 98, 27–38. [Google Scholar] [CrossRef]
  35. O’Brien, V.; Martinez-Pons, M.; Kopala, M. Mathematics Self-Efficacy, Ethnic Identity, Gender, and Career Interests Related to Mathematics and Science. J. Educ. Res. 1999, 92, 231–235. [Google Scholar] [CrossRef]
  36. Brown, S.; Burnham, J. Engineering Student’s Mathematics Self-Efficacy Development in a Freshmen Engineering Mathematics Course. Int. J. Eng. Educ. 2012, 28, 113–129. [Google Scholar]
  37. Briley, J.S. The Relationships among Mathematics Teaching Efficacy, Mathematics Self-Efficacy, and Mathematical Beliefs for Elementary Pre-Service Teachers. Issues Undergrad. Math. Prep. Sch. Teach. 2012, 5, 1–13. [Google Scholar]
  38. May, D.K. Mathematics Self-Efficacy and Anxiety Questionnaire. Ph.D Thesis, The University of Georgia, Athens, GA, USA, 2009. [Google Scholar]
  39. Andrews, P.; Diego-Mantecón, J. Instrument Adaptation in Cross-Cultural Studies of Students’ Mathematics-Related Beliefs: Learning from Healthcare Research. Comp. J. Comp. Int. Educ. 2015, 45, 545–567. [Google Scholar] [CrossRef]
  40. Creswell, J.W. Research Design: Qualitative, Quantitative and Mixed Methods Research; Sage Publications, Inc.: Thousand Oaks, CA, USA, 2009. [Google Scholar]
  41. Zalazar Jaime, M.F.; Aparicio Martín, M.D.; Ramírez Flores, C.M.; Garrido, S.J. Estudios Preliminares de Adaptación de La Escala de Fuentes de Autoeficacia Para Matemáticas. Rev. Argent. Cienc. Comport. 2011, 3, 1–6. [Google Scholar]
  42. Camposeco Torres, F.d.M. La Autoeficacia Como Variable En La Motivación Intrínseca y Extrínseca En Matemáticas a Través de Un Criterio Étnico. Ph.D. Thesis, Universidad Complutense de Madrid, Madrid, Spain, 2012. [Google Scholar]
  43. Betz, N.E.; Hackett, G. The Relationship of Mathematics Self-Efficacy Expectations to the Selection of Science-Based College Majors. J. Vocat. Behav. 1983, 23, 329–345. [Google Scholar] [CrossRef]
  44. Chen, P.P. Exploring the Accuracy and Predictability of the Self-Efficacy Beliefs of Seventh-Grade Mathematics Students. Learn. Individ. Differ. 2003, 14, 77–90. [Google Scholar] [CrossRef]
  45. Kitsantas, A.; Cheema, J.; Ware, H.W. Mathematics Achievement: The Role of Homework and Self-Efficacy Beliefs. J. Adv. Acad. 2011, 22, 310–339. [Google Scholar] [CrossRef]
  46. Zakariya, Y.F.; Goodchild, S.; Bjørkestøl, K.; Nilsen, H.K. Calculus Self-Efficacy Inventory: Its Development and Relationship with Approaches to Learning. Educ. Sci. 2019, 9, 170. [Google Scholar] [CrossRef]
  47. Pajares, F.; Graham, L. Self-Efficacy, Motivation Constructs, and Mathematics Performance of Entering Middle School Students. Contemp. Educ. Psychol. 1999, 24, 124–139. [Google Scholar] [CrossRef]
  48. Bandura, A. Guide for constructing self-efficacy scales. In Self-Efficacy Beliefs of Adolescents; Pajares, F., Urdan, T., Eds.; Information Age Publishing: Charlotte, NC, USA, 1997; pp. 307–337. [Google Scholar]
  49. McMullan, M.; Jones, R.; Lea, S. Math Anxiety, Self-Efficacy, and Ability in British Undergraduate Nursing Students. Res. Nurs. Health 2012, 35, 178–186. [Google Scholar] [CrossRef] [PubMed]
  50. Riddle, K.; Domiano, L. Does Teaching Methodology Affect Medication Dosage Calculation Skills of Undergraduate Nursing Students? J. Nurs. Educ. Pract. 2020, 10, 36–41. [Google Scholar] [CrossRef]
  51. Andrew, S.; Salamonson, Y.; Halcomb, E.J. Nursing Students’ Confidence in Medication Calculations Predicts Math Exam Performance. Nurse Educ. Today 2009, 29, 217–223. [Google Scholar] [CrossRef] [PubMed]
  52. Junge, M.E.; Dretzke, B.J. Mathematical Self-Efficacy Gender Differences in Gifted/Talented Adolescents. Gift. Child. Q. 1995, 39, 22–26. [Google Scholar] [CrossRef]
  53. Bates, A.B.; Latham, N.; Kim, J. Linking Preservice Teachers’ Mathematics Self-Efficacy and Mathematics Teaching Efficacy to Their Mathematical Performance. Sch. Sci. Math. 2011, 111, 325–333. [Google Scholar] [CrossRef]
  54. Cattell, R. The Scientific Use of Factor Analysis; Plenum: New York, NY, USA, 1978. [Google Scholar]
  55. Gorsuch, R.L. Factor Analysis, 2nd ed.; Erlbaum: Hillsdale, NJ, USA, 1983. [Google Scholar]
  56. Kline, P. An Easy Guide to Factor Analysis; Routledge: New York, NY, USA, 1994. [Google Scholar]
  57. Kaiser, H.F. A Second Generation Little Jiffy. Psychometrika 1970, 35, 401–415. [Google Scholar] [CrossRef]
  58. Hutcheson, G.D.; Sofroniou, N. The Multivariate Social Scientist: Introductory Statistics Using Generalized Linear Models; Sage: London, UK, 1999. [Google Scholar]
  59. Nielsen, I.L.; Moore, K.A. Psychometric Data on the Mathematics Self-Efficacy Scale. Educ. Psychol. Meas. 2003, 63, 128–138. [Google Scholar] [CrossRef]
  60. Hackett, G.; Betz, N.E. An Exploration of the Mathematics Self-Efficacy/Mathematics Performance Correspondence. J. Res. Math. Educ. 1989, 20, 261–273. [Google Scholar] [CrossRef]
  61. Silk, K.J.; Parrott, R.L. Math Anxiety and Exposure to Statistics in Messages about Genetically Modified Foods: Effects of Numeracy, Math Self-Efficacy, and Form of Presentation. J. Health Commun. 2014, 19, 838–852. [Google Scholar] [CrossRef]
  62. Gobierno de la República. Programa de Trabajo Anual 2021; Gobierno de la República: Mexico City, Mexico, 2021.
  63. Kline, R.B. Principles and Practice of Structural Equation Modeling; Guilford Press: New York, NY, USA, 2005. [Google Scholar]
  64. Hendrickson, A.; White, P. Promax: A Quick Method for Rotation to Oblique Simple Structure. Br. J. Stat. Psychol. 1964, 17, 65–70. [Google Scholar] [CrossRef]
  65. Osborne, J.W. Best Practices in Exploratory Factor Analysis; Createspac Publishing: Scotts Valley, CA, USA, 2014. [Google Scholar]
  66. Osborne, J.W.; Fitzpatrick, D.C. Replication Analysis in Exploratory Factor Analysis: What It Is and Why It Makes Your Analysis Better. Pract. Assess. Res. Eval. 2012, 17, 15. [Google Scholar]
  67. Tabachnick, B.G.; Fidell, L.S. Using Multivariate Statistics; Pearson: Boston, MA, USA, 2001. [Google Scholar]
  68. Hinkin, T.R. A Brief Tutorial on the Development of Measures for Use in Survey Questionnaires. Organ. Res. Methods 1998, 1, 104–121. [Google Scholar] [CrossRef]
  69. Costello, A.B.; Osborne, J. Best Practices in Exploratory Factor Analysis: Four Recommendations for Getting the Most from Your Analysis. Pract. Assess. Res. Eval. 2005, 10, 7. [Google Scholar]
  70. Howard, M.C. A Review of Exploratory Factor Analysis Decisions and Overview of Current Practices: What We Are Doing and How Can We Improve? Int. J. Hum. Comput. Interact. 2016, 32, 51–62. [Google Scholar] [CrossRef]
  71. Lee, W.C.; Godwin, A.; Hermundstad, A.L. Development of the Engineering Student Integration Instrument: Rethinking Measures of Integration. J. Eng. Educ. 2018, 107, 30–55. [Google Scholar] [CrossRef]
  72. Brown, T.A. Confirmatory Factor Analysis for Applied Research, 2nd ed.; Guilford Press: New York, NY, USA, 2015. [Google Scholar]
  73. Fernández, M.; Benítez, J.; Pichardo, M.; Fernández, E.; Justicia, F.; García, T.; García-Berbén, A.; Justicia, A.; Alba, G. Análisis Factorial Confirmatorio de Las Subescalas Del PKBS-2 Para La Evaluación de Las Habilidades Sociales y Los Problemas de Conducta En Educación Infantil. Electron. J. Res. Educ. Psychol. 2010, 8, 1231–1252. [Google Scholar] [CrossRef]
  74. Fabrigar, L.; Wegener, D.; MacCallum, R.; Strahan, E. Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychol. Methods 1999, 4, 272. [Google Scholar] [CrossRef]
  75. Byrne, B. Structural Equation Modeling with EQS and EQS/Windows: Basic Concepts, Applications, and Programming; Sage Publications: Thousand Oaks, CA, USA, 1994. [Google Scholar]
  76. Hu, L.; Bentler, P.M. Cutoff Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria versus New Alternatives. Struct. Equ. Model. Multidiscip. J. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  77. MacCallum, R.C.; Browne, M.W.; Sugawara, H.M. Power Analysis and Determination of Sample Size for Covariance Structure Modeling. Psychol. Methods 1996, 1, 130–149. [Google Scholar] [CrossRef]
  78. Satorra, A.; Bentler, P.M. Scaling Corrections for Statistics in Covariance Structure Analysis (UCLA Statistics Series 2); University of California at Los Angeles, Department of Psychology: Los Angeles, CA, USA, 1988. [Google Scholar]
  79. Batista-Foguet, J.; Coenders, G.; Alonso, J. Análisis Factorial Confirmatorio. Su Utilidad En La Validación de Cuestionarios Relacionados Con La Salud. Med. Clín. 2004, 122, 21–27. [Google Scholar] [CrossRef] [PubMed]
  80. Thorndike, R.M.; Thorndike-Christ, T. Measurement and Evaluation in Psychology and Education, 8th ed.; Pearson: Boston, MA, USA, 2010. [Google Scholar]
  81. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2012; Available online: http://www.r-project.org (accessed on 10 July 2022).
  82. Kranzler, J.H.; Pajares, F. An Exploratory Factor Analysis of the Mathematics Self-Efficacy Scale—Revised (MSES-R). Meas. Eval. Couns. Dev. 1997, 29, 215–228. [Google Scholar] [CrossRef]
  83. Langenfeld, T.E.; Pajares, F. The Mathematics Self-Efficacy Scale: A Validation Study. In Proceedings of the Annual Meeting of the American Educational Research Association, Atlanta, GA, USA, 12–16 April 1993. [Google Scholar]
  84. Hu, L.; Bentler, P. Evaluating Model Fit. In Structural equation modeling: Concepts, Issues, and Applications; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 1995; pp. 76–99. [Google Scholar]
  85. Kitchener, K. Cognitive, Metacognitive, and Epistemic Cognition; a Three Level Model of Cognitive Processing. Hum. Dev. 1983, 26, 222–232. [Google Scholar] [CrossRef]
  86. Lopez, K.A.; Willis, D.G. Descriptive versus Interpretive Phenomenology: Their Contributions to Nursing Knowledge. Qual. Health Res. 2004, 14, 726. [Google Scholar] [CrossRef] [PubMed]
  87. Ellington, A.J. A Meta-Analysis of the Effects of Calculators on Students’ Achievement and Attitude Levels in Precollege Mathematics Classes. J. Res. Math. Educ. 2003, 34, 433–463. [Google Scholar] [CrossRef]
  88. Kelley, T.R.; Knowles, J.G. A Conceptual Framework for Integrated STEM Education. Int. J. STEM Educ. 2016, 3, 11. [Google Scholar] [CrossRef]
  89. Chen, X.; Soldner, M. STEM Attrition: College Students’ Paths into and Out of STEM Fields Statistical Analysis Report; National Center for Education: Jessup, MD, USA, 2013.
  90. Geisinger, B.N.; Raman, D.R. Why They Leave: Understanding Student Attrition from Engineering Majors. Int. J. Eng. Educ. 2013, 29, 914–925. [Google Scholar]
  91. Randhawa, B.S.; Beamer, J.E.; Lundberg, I. Role of Mathematics Self-Efficacy in the Structural Model of Mathematics Achievement. J. Educ. Psychol. 1993, 85, 41–48. [Google Scholar] [CrossRef]
  92. Walther, J. Understanding Interpretive Research through the Lens of a Cultural Verfremdungseffekt. J. Eng. Educ. 2014, 103, 450–462. [Google Scholar] [CrossRef]
  93. Walther, J.; Sochacka, N.W.; Kellam, N.N. Quality in Interpretive Engineering Education Research: Reflections on an Example Study. J. Eng. Educ. 2013, 102, 626–659. [Google Scholar] [CrossRef]
  94. Bowen, G.A. Grounded Theory and Sensitizing Concepts. Int. J. Qual. Methods 2006, 5, 12–23. [Google Scholar] [CrossRef]
  95. MacFarlane, B.; MacFarlane, B. Infrastructure of comprehensive STEM programming for advanced learners. In STEM Education for High-Ability Learners Designing and Implementing Programming; MacFarlane, B., Ed.; Routledge: New York, NY, USA, 2016; pp. 139–160. [Google Scholar]
  96. Flores, A. Examining Disparities in Mathematics Education: Achievement Gap or Opportunity Gap? High Sch. J. 2007, 91, 29–42. [Google Scholar] [CrossRef]
  97. Lent, R.W.; Brown, S.D.; Hackett, G. Contextual Supports and Barriers to Career Choice: A Social Cognitive Analysis. J. Couns. Psychol. 2000, 47, 36–49. [Google Scholar] [CrossRef]
Figure 1. Scree plot of the eigenvalues, showing an inflection at the third factor.
Figure 2. CFA resulting model on the adaptation process of the Mathematics Self-Efficacy Survey (MSES) to a Mexican–Spanish language instrument.
Table 1. Factor loadings resulting from the exploratory factor analysis.
Item     Everyday Math Activities     Math-Related Courses     Math Problem Solving
Q1 *     0.59
Q2 *     0.48
Q3       0.70
Q4 *     0.62
Q5 *     0.64
Q6       0.69
Q7       0.76
Q8 *     0.64
Q9 *     0.49
Q10 *    0.55
Q11 *    0.64
Q12      0.65
Q13 *    0.38
Q14 *                                 0.72
Q15                                   0.80
Q16                                   0.82
Q17                                   0.77
Q18                                   0.74
Q19 *                                 0.80
Q21 *                                                          0.62
Q22 *                                                          0.51
Q23 *                                                          0.55
Q24 *                                                          0.55
Q25 *                                                          0.58
Q26                                                            0.67
Q27 *                                                          0.88
Q28 *                                                          0.64
Q29 *                                                          0.61
Q30                                                            0.77
Q31                                                            0.75
Q32 *                                                          0.69
Q33                                                            0.77
Q34 *                                                          0.63
Q35 *                                                          0.38
Items indicated with a * were ultimately removed from the CFA to strengthen the model fit.
Table 2. Items deleted from the instrument to improve CFA model fit.
Items
Q1Determine how much interest you will pay on a $675 USD loan over two years at 14.75 % per year.
Q2Calculate the amount of wood you need to buy to build two bookcases 2 m high and 1 m wide.
Q4Calculate how much fabric you need to buy to make curtains for two equal square windows 1.5 m on each side.
Q5Calculate how much interest you will earn on your savings account in 6 months, and analyze how that interest is calculated.
Q8Determine the amount of tip corresponding to your share of a restaurant bill divided by eight people.
Q9Set a monthly budget for yourself.
Q10Balance your expenses and your weekly income without mistake.
Q11Determine which of the two summer jobs is the best offer, one with a higher salary but no benefits, the other with a lower salary but this covers the costs of lodging, maintenance and transportation.
Q13Calculate the quantities of a recipe for a dinner for 41 people when the original recipe is for 12 people.
Q14Precalculus
Q19Differential equations
Q21 Sally needs three pieces of cardboard for a school project. If the pieces are represented by rectangles A, B, and C, help Sally arrange their areas in increasing order (assume b > a). (Accompanying figure of the three rectangles not shown.)
Q22The average of three numbers is 30. The fourth number is at least 10. What is the minimum average of the four numbers?
Q23 To build a table, Michele needs 4 pieces of wood that are 2.5 feet long for the legs. He wants to determine how much lumber he needs for 5 tables, reasoning as follows: 5 × (4 × 2.5) = (5 × 4) × 2.5. What property of real numbers is Michele using?
Q24Five points lie on a line. T is next to G. K is next to H. C is next to T. H is next to G. Determine the order of appearance of these five points on the line.
Q25There are three numbers. The second is twice the first, and the first is a third of the other number. The sum of the three is 48. Find the largest number.
Q27The hands of a clock make an obtuse angle when it marks ____ o’clock. (Consider that there is more than one correct answer).
Q28Bridget buys a package containing 9-cent and 13-cent stamps for $2.65 USD. If there are 25 stamps in the package, how many 13-cent stamps are in the package?
Q29Write an equation that expresses the condition “the product of two numbers R and S is one less than twice the sum of both”.
Q32 The formula to convert temperature from Celsius to Fahrenheit is F = (9/5)C + 32. How many degrees Fahrenheit is 20 degrees Celsius?
Q34 If 3X − 2 = 16, what is the value of X?
Q35Fred’s bill for some household items was $13.64 USD. If he paid with a $20 USD bill, how much change should they give him?
Table 3. Cronbach’s alpha results of the first and second sample.
Subconstruct: Cronbach's Alpha
Everyday math activities: 0.86
Math-related courses: 0.88
Math problem solving: 0.85
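The internal-consistency values in Table 3 follow the standard Cronbach's alpha formula, alpha = k/(k−1) × (1 − Σ item variances / total-score variance). A minimal sketch of that computation (the function name and the NumPy implementation are ours, not the authors'):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Applied to the responses for each subconstruct's items separately, this yields one alpha per dimension, as reported above; values near 0.85 to 0.88 indicate good internal consistency.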
Table 4. Validated items of the instrument after adjustment of the CFA model.
Items
Everyday math activities
Q3 Calcular los impuestos que tendrías que pagar en un año de trabajo dependiendo de tu ingreso anual (30% de ISR). [Calculate the taxes you would have to pay for a year of work depending on your annual income (30% income tax, ISR).]
Q6 Calcular cuánto interés ganarás con tu cuenta de ahorros en 6 meses, y analizar cómo ese interés es calculado. [Calculate how much interest you will earn on your savings account in 6 months, and analyze how that interest is calculated.]
Q7 Estimar el costo total de tu mandado en tu cabeza conforme tomas los artículos. [Estimate the total cost of your groceries in your head as you pick up the items.]
Q12 Calcular cuánto ahorrarías si hay un 15% de descuento en un artículo que deseas comprar. [Calculate how much you would save if there is a 15% discount on an item you want to buy.]
Math-related courses
Q15 Cálculo Diferencial [Differential Calculus]
Q16 Cálculo Integral [Integral Calculus]
Q17 Cálculo Vectorial [Vector Calculus]
Q18 Álgebra Lineal [Linear Algebra]
Math problem solving
Q26 En cierto triángulo, el lado más corto es de 6 pulgadas, el lado más largo es el doble de largo que el más corto, y el tercer lado es 3.4 pulgadas más corto que el lado más largo. ¿Cuál es la suma de los tres lados en pulgadas? [In a certain triangle, the shortest side is 6 inches, the longest side is twice as long as the shortest, and the third side is 3.4 inches shorter than the longest side. What is the sum of the three sides in inches?]
Q30 Formular el problema que debe resolverse para encontrar el número que se pide en la expresión “seis menos que el doble de 4 5/6”. [Formulate the problem that must be solved to find the number asked for in the expression “six less than twice 4 5/6”.]
Q31 En cierto mapa, 7/8 de pulgada representan 200 millas. ¿Qué tan separadas están dos ciudades cuya distancia de separación en el mapa es de 3 1/2 pulgadas? [On a certain map, 7/8 of an inch represents 200 miles. How far apart are two cities whose separation on the map is 3 1/2 inches?]
Q33 Resuelve 3 3/4 − 1/2 = [Solve 3 3/4 − 1/2 =]
Share and Cite

MDPI and ACS Style

Morán-Soto, G.; González Peña, O.I. Second Phase of the Adaptation Process of the Mathematics Self-Efficacy Survey (MSES) for the Mexican–Spanish Language: The Confirmation. Mathematics 2022, 10, 2905. https://doi.org/10.3390/math10162905

