Article

Correlation between High School Students’ Computational Thinking and Their Performance in STEM and Language Courses

by Aikaterini Bounou 1, Konstantinos Lavidas 1,*, Vassilis Komis 1, Stamatis Papadakis 2 and Polyxeni Manoli 1

1 Department of Educational Science and Early Childhood Education, University of Patras, 26504 Patras, Greece
2 Department of Preschool Education, University of Crete, 74150 Rethymnon, Greece
* Author to whom correspondence should be addressed.
Educ. Sci. 2023, 13(11), 1101; https://doi.org/10.3390/educsci13111101
Submission received: 4 September 2023 / Revised: 17 October 2023 / Accepted: 26 October 2023 / Published: 31 October 2023

Abstract

According to numerous researchers, a clear and direct correlation exists between Computational Thinking (CT) and courses falling under the purview of Science, Technology, Engineering, and Mathematics (STEM), which argues for the integration of CT into the curricula of STEM courses. Nonetheless, only a few studies have scrutinized this correlation in depth; most treat it as implicit and concentrate predominantly on the empirical assessment of CT within the curriculum of a single STEM discipline. This research seeks to evaluate the Computational Thinking abilities of 80 high school students in Greece and discern the extent of their correlation with academic performance in STEM and Greek language courses. To accomplish this objective, a longitudinal survey was executed, commencing with the administration of a test designed to gauge the fundamental components of Computational Thinking. This test draws its inspiration from internationally recognized computer competitions and serves as a credible assessment tool. Subsequently, an assessment was carried out to ascertain the degree of correlation between students’ Computational Thinking aptitude and their written performance in the subjects encompassed by the STEM category and the Greek language courses. The outcomes of this investigation revealed a statistically significant correlation between students’ Computational Thinking proficiency and their performance in these academic subjects, extending also to the academic direction of study chosen by the students. Based on the findings of this research, implications and pedagogical recommendations are delineated, while the limitations encountered during this study are acknowledged.

1. Introduction

Computational Thinking (CT) has occupied researchers’ interest over the last fifteen years. It has been a subject of broader investigation [1,2,3,4,5,6,7,8,9,10,11,12,13,14], including research focusing on CT introduction into the curricula of primary and secondary education [2,15,16,17,18,19,20,21,22,23,24,25,26]. Research on CT also focuses on the methods of its evaluation. Many researchers assess students’ programming or computational skills, considering that these skills represent students’ levels of computational thinking. In contrast, some researchers try to assess students’ problem-solving skills. Moreover, some researchers try to assess the existence and levels of the CT trait through general questionnaires not directly related to a specific course [4,5,11,13,27,28,29,30,31,32,33,34].
Seymour Papert introduced CT in 1980 [27] to denote the changes computers could cause in thought processes during mathematical education. In 2006, Jeannette Wing reintroduced the term, trying to give the first modern definition: “the processes of human thought involving problem-solving, system designing and understanding of human behaviour, based on concepts that are fundamental to Computer Science” [14]. More recently, Wing followed up with a concise operational definition of CT [35]: “Computational thinking is the thought processes involved in formulating a problem and expressing its solution(s) in such a way that a computer—human or machine—can effectively carry out”.
Since then, many researchers have tried to define CT clearly. However, this has yet to be accomplished, given the different, partially overlapping definitions found in the literature [4,32,33,36]. Many researchers directly correlate CT with computer science (CS) [12,37,38]. These researchers structure the definition of CT using computational concepts and computational skills derived from computer science. Furthermore, they correlate CT with skills required in CS and in problem-solving in general. They consider CT a set of skills essential for students to deepen their understanding of computer science, solve everyday problems, and succeed both in scientific fields and in their overall academic performance [39,40,41].
In this context, computational thinking has been linked to STEM education, a key component of which is problem-solving. According to Siekmann [42], the term STEM in education is defined as follows: “STEM is an acronym for the disciplines of science, technology, engineering, and mathematics taught and applied either in a traditional and discipline-specific manner or through a multidisciplinary, interconnected, and integrative approach. Both approaches are outcome-focused and aim to solve real-world challenges”. In many countries (e.g., Australia, Sweden, South Korea, Poland, and the USA), CT has been introduced, or attempts have been made to introduce it, into the secondary education curricula, either through computer science or through STEM courses [16,17,23,26]. Similar innovative efforts to introduce CT in secondary education are now being made in Greece, as confirmed by the new curricula published in 2021 by the Institute of Educational Policy (IEP) [43]. In Greece’s new curricula, the STEM subjects are identified as physics, chemistry, and biology under Science; computer science (CS) under Technology; and Mathematics. Engineering is considered to be covered by physics. Within the middle school curricula, CT is recognized as the most basic and most frequently used practice of CS and is taught within this course. At the same time, CT is recognized as a horizontal skill required to address complex, interdisciplinary, authentic, real-world problems. Within the high school curricula, CT is not taught as part of any course, whether in the STEM curricula or elsewhere. The planning anticipates that all students, regardless of the orientation they choose in the 2nd grade, will have developed their computational thinking skills through STEM courses in the 1st and 2nd grades and become capable of applying computational practices, focusing on problem-solving and creating digital artefacts.
One of the most critical problems regarding promoting and integrating CT in these courses is the tests (or procedures) for measuring and assessing it. Since we accept that CT is supported and promoted through these courses [3,10,12], we should also find a way to assess and measure students’ acquisition of this trait. Many researchers have developed tests in this direction, but there is a lack of widely accepted CT assessment tests. Most of these tests involve programming environments, and few aim to assess CT independently of programming environments [4,33,34]. Another interesting finding is that the efforts to assess CT traits in students of the last grades of secondary education are significantly less than similar assessments in primary and lower grades of secondary education [33,34].
Therefore, as many researchers argue [4,11,35], constructing reliable CT assessment tests whose validity and reliability have been verified remains an open issue. In addition, it is worthwhile to strengthen research into the assessment of CT in upper secondary students, bearing in mind that such tests help students develop their abilities and upgrade their learning; that is, they are pillars of meaningful CT learning. Furthermore, confirming whether STEM courses promote CT and help students develop this kind of thinking is not merely interesting but essential, as it would facilitate the preparation of curricula and could provide an impetus for the distinct introduction of CT into the curricula of STEM courses.

2. Theoretical Perspectives

2.1. Computational Thinking and STEM Courses

Many researchers and organizations [2,6,32,44,45,46,47] correlate CT with computer science (CS) by giving CT characteristics derived directly from this science (Table 1). Barr and Stephenson [2], trying to formulate a functional definition directly relevant to teachers (especially in secondary education), included specific skills in this definition, such as abstraction, algorithmic thinking, automation, synthesis and decomposition of problems, parallelism, simulation, and data collection, analysis, and representation. In 2013, Grover and Pea [6] identified the critical components of CT through an extensive literature review. The most vital component of CT is the skill of abstraction, followed by skills such as generalization, algorithmic thinking, decomposition, iterative and recursive thinking, conditional logic, debugging, and the efficiency and performance of code.
In 2015, Google introduced a definition of CT on its website “Exploring Computational Thinking”. According to this definition, CT is a problem-solving process involving a number of characteristics and dispositions; it is a fundamental component in the development of computer science, but it can also support problem-solving in other scientific fields, such as mathematics, the natural sciences, and the human sciences. In January 2021, the British Computer Society (BCS) updated its definition of CT as the thought process of identifying a problem and its solution so that a computer system can carry it out. In this definition, the BCS emphasizes that CT does not necessarily require the use of computer systems, while highlighting that CT allows computer systems to be exploited to advantage in problem-solving.
On the other hand, many researchers associate CT with skills required in problem-solving, such as abstraction, decomposition, logic, algorithmic thinking, automation, generalization, evaluation, and debugging [33,34] (Table 2). These researchers’ counterargument is that CT does not require computers. On the contrary, many of the practices of CT are found in both mathematics and the natural sciences. In Realistic Mathematics Education (RME), the process of modelling and, simultaneously, mathematizing a real-world problem using algorithmic thinking is one of the most common educational practices and has been the subject of broad debate in the context of mathematics teaching over the last fifty years [48,49,50,51,52,53,54]. At the same time, modelling, abstract thinking, simulation, and problem-solving practices are integral to the evolution of the natural sciences, to the experimental process within these sciences, and to their didactics [55,56,57,58,59,60].
The previous observations led to a new perspective on the very nature of CT and made researchers wonder whether it is inextricably linked to CS. Through the reports of the NRC (National Research Council) in 2010 and 2011 [8,9], a new perspective on CT was introduced, according to which computer science does not have a monopoly on CT: “CT is a set of skills transferred between different disciplines” [9] (p. 54). Furthermore, as stated in the report: “At its core, CT is independent of technology … to be a competent user of CT is not necessarily related to one’s ability to use modern information technology” [9] (p. 61). In 2011, the CSTA (Computer Science Teachers Association) and ISTE (International Society for Technology in Education) [61] created a list of CT skills, stating that they can be used in solving everyday problems in many different disciplines and at different levels of education. In 2016, the CSTA, in its updated K-12 Computer Science Standards [45], stated: “We believe that CT is a problem-solving methodology that extends the field of CS to all other sciences, providing the appropriate means to analyze and achieve a solution of various problems solved computationally”. Through research involving preservice teachers, Yadav et al. [41] introduced and explained five critical components of CT: problem identification and decomposition, abstraction, logical thinking, algorithms, and debugging. In 2016, Weintrop et al. [12] introduced a classification of CT components for mathematics and the natural sciences. According to this classification, the predominant practices of CT in STEM courses are data management practices, modelling and simulation practices, computational problem-solving practices, and thinking practices for complex systems. The necessity of developing a taxonomy of CT components in the context of mathematics and the natural sciences stemmed from the fact that CT, as its definition has evolved, is an integral part of the practices developed in the evolution of these sciences. Regarding the teaching of these sciences in secondary education, as Weintrop et al. [12] stated, using CT deepens students’ understanding of these disciplines, and vice versa: mathematics and the natural sciences provide a clear framework within which CT can be applied and mastered.

2.2. Computational Thinking and Language Courses

Unfortunately, there is not enough material in the literature that directly links computational thinking to language learning. This is likely because computational thinking skills are directly related to computer science. Nevertheless, the role of language is crucial in cultivating and promoting computational thinking skills since language is the tool through which articulation and reflection occur in all processes that require the activation of these skills. Reinforcing this belief is the presentation by Lu and Fletcher in 2009 at the 40th ACM Technical Symposium on Computer Science Education [62]. In addition, we have identified a small number of cases where attempts have been made to integrate CT processes and skills into language courses [63,64]. Some of these have involved modelling [65,66] or even the production of computer games [67].
However, we have not found any research correlating the skills students acquire in language classes with CT skills.

2.3. CT Assessment Tool

The methods that various researchers have used to assess CT vary. In these assessments, researchers generally measure the student’s CT trait through procedures such as selected- or constructed-response tests, portfolio assessments, computational or programming environments, surveys, and interviews. The context in which most of these evaluations occur is the classroom, particularly the computer science course or one of the other STEM courses [33,34]. The confusion surrounding the structuring of CT assessment procedures is evident, primarily due to the lack of consensus on the structural features and precise definition of CT [11,33,34]. According to Tang et al. [34], the lack of a theoretical framework that would create consensus among the various definitions of CT has led to a poor understanding of the components of CT and of the difference between CT and other thought processes. This has resulted in a lack of clarity in the methods for assessing CT. A significant share of CT assessment processes, either within or outside the framework of STEM courses, involves programming environments, which require familiarity and, therefore, cannot easily be used to draw safe conclusions [5,11,13,28,30,31,32,33,68]. In addition, as both Papert and Wing have pointed out, CT and programming should be treated as separate yet compatible learning tools that can broaden problem-solving processes [34].
Another method of evaluation is the use of opinion polls or surveys. However, the self-report nature of such instruments is a significant disadvantage, as they capture students’ intuitive perceptions and attitudes rather than their actual skills [34]. Yet another assessment method is student interviews, which explore students’ understanding of CT and codify their behaviours and verbal communication. The problems with this assessment method are its high cost, long duration, and the low percentage of students that can be evaluated this way [34].
Researchers also use multiple evaluation methods, including compiling a portfolio for each student under investigation, containing the student’s constructions, tasks, and tests, combined with an interview [32,33,34]. A substantial proportion of CT evaluation processes concern CT assessment instruments [1,7,32], often used with psychometric tools. These tests consist of multiple-choice or open-ended questions aimed at an initial measurement of CT components. The criticism here focuses on the fact that such tests measure perceptions and attitudes, while CT is a skill [28]. These tests may be the only ones that do not directly link CT to CS.
As a result, researchers aim to create tests for evaluating CT in a non-programming environment [34], a goal that this type of evaluation partially achieves. Many of these tests [7,68] draw some of their material from the Bebras competition (http://www.bebras.org/, accessed on 18 January 2022), an international initiative to disseminate CS, programming, and CT to students of all ages. Another source is the “Talent Search” tests of the Computer Olympiad of South Africa (Computer Olympiad ‘Talent Search’ papers, http://www.olympiad.org.za/talent-search/, accessed on 18 January 2022) [7]. Both competitions aim to promote CT, require the use of CT, and involve students from over 40 countries annually. The tests of these competitions aim to elicit students’ computational thinking skills, thus empowering them to solve problems arising from everyday life without requiring a programming device or platform [34,69].
According to Poulakis and Politis [33], the answer to the absence of common ground in terms of CT assessment is to shift the research to questions related to knowledge transfer, i.e., whether students can transfer the acquired knowledge and skills of CT and apply them to other areas, such as problems of everyday life, a view also supported by Kalelioğlu et al. [70]. Román-González et al. [69], in their classification of CT assessment tools, referred to a category of CT assessment tools called CT skill transfer tools, whose objective is to assess to what extent students are capable of transferring computational thinking skills to different kinds of problems, within different contexts and situations. Both Román-González et al. [69] and other researchers [7,61,71,72] agree that the Bebras competition and the Computer Olympiad focus on assessing the ability to transfer CT skills to everyday problems. These tools are particularly suitable for assessing the degree of retention and transfer of CT skills after a sufficient period has elapsed since acquiring these skills [69]. Accepting, therefore, that basic CT skills are informally cultivated in students through STEM courses, we chose to formulate a test using data from both the Bebras competition and the related South African Computer Science Olympiad “Talent Search” in order to measure students’ degree of acquisition of the latent CT trait.

2.4. Research Objectives

The focus of educational reform at the dawn of the 21st century in many countries, such as the USA, Finland, and Australia, has been on the so-called 21st-century skills. These are critical skills that anyone who wants to be an active citizen must have. The main factors behind the emergence of these skills were the globalized economy and market, the development of new technologies and communication, and the need for lifelong learning [47]. In education, these skills are mainly summarized in the “4Cs”, i.e., critical thinking and problem-solving, communication, collaboration, creativity, and innovation. Since their introduction, the 4Cs have gradually become accepted as fundamental elements of many school curricula worldwide.
In February 2018, S. Grover published an article entitled “The 5th ‘C’ of 21st Century Skills? Try Computational Thinking (Not Coding)” [73], in which she argues: “There is growing recognition in the education systems around the globe that being able to problem-solve computationally, that is, to think logically and algorithmically, and use computational tools for creating artefacts including models and data visualizations, is rapidly becoming a prerequisite competency for all fields. I argue that we need computational thinking (CT) to be another core skill, the ‘5th C’ of 21st-century skills, that is taught to all students”. The article urgently raised the need to introduce the teaching of CT in secondary education, arguing that CT is a critical skill that students should master by the time they complete their K-12 education.
In addition, many researchers argue that the most appropriate way to introduce CT into the curricula is through its integration into STEM and language courses. They argue that CT is primarily related to thinking rather than programming [74]. It is a common belief that physics, mathematics, informatics, chemistry, and biology constitute the appropriate framework for promoting students’ acquisition of the essential practices of CT [75,76,77,78,79]. Moreover, this holds whether or not CT is linked to programming environments: the link is strongly confirmed for computer science, but it is not necessary in physics, mathematics, chemistry, and biology. Through their inquiry-based and experimental nature, these courses cultivate many skills that are components of CT, even when this is not explicitly stated in the course objectives. We should also pay attention to the crucial, dual role of language in supporting thought. Good use of language supports both the processing of information (it helps us think) and the communication of information [80]. CT requires students to formulate logical arguments and thinking that can be articulated and communicated through language. Therefore, we deemed it necessary to include language as well as STEM subjects in our research to explore whether there is a correlation with students’ CT levels.
CT, in all its aspects, and especially in relation to language and STEM courses, is an emerging field of research. One of the main reasons is that many countries are choosing to introduce CT into their curricula through integration into existing subjects. Thus, there has been an effort in recent years to investigate the correlation between CT levels and students’ academic performance in language and STEM courses. However, relevant studies remain few. In particular, we identified three relevant studies related to primary education carried out in the last three years. The research by Polat et al. [81] concerns school students in Turkey and suggests a positive effect of good performance in mathematics and CS on students’ CT levels; the effect of mathematics on students’ CT levels was stronger than that of CS. Sun et al. [82], in research involving school students in China, investigated the correlation between students’ attitudes towards STEM subjects and their CT skills and found that the type of learning attitude predicted students’ CT levels. Also, Sun et al. [83], in research that again involved Chinese students, found significant bidirectional correlations between students’ academic performance in STEM and language courses and their CT skills. Finally, we identified two studies on secondary education conducted in the last three years. The research by Chongo et al. [84] involved 128 students in a science field of study in Malaysia and found a statistically significant correlation between students’ CT skills and mathematics achievement. Hava and Koyunlu Ünlü [85] found a significant correlation between Turkish high school students’ CT skills, their interest in STEM careers, and their attitudes towards inquiry learning. In addition, the above observations are consistent with the meta-analysis of Lei et al. [86], which covered earlier similar studies and reported positive correlations between students’ CT levels and academic achievement.
Considering that efforts to assess students’ CT in the final years of secondary education are limited [33,34] and that the development of CT is a requirement of the new curricula in Greece, it is worth exploring the emerging connection of CT with STEM and language courses. In this context, this research aims to investigate whether students’ level of CT correlates with their performance in the STEM subjects (mathematics, natural sciences, and informatics) and the Greek language subject; another aim of this study is to explore how these levels vary depending on the direction of study that students have chosen to attend in the last grade of secondary education. It should be noted that in Greece, students choose a field of study in the second grade of high school. Thus, students in the science and economics directions attend more STEM courses (mathematics, physics, chemistry, biology) than students in the humanities. Based on this fact, and given the direct relationship between STEM courses and CT, we decided to investigate the existence of a correlation between the chosen field of study and students’ CT. Students’ performance in each subject was based on written tests delivered during the first semester of the school year, which took place at the same time as the CT assessments.
Finally, in the last grade of the Lyceum, the language course contains elements of grammar and syntax of the Greek language while simultaneously emphasizing academic writing skills.
Given all the above, we formulated the following research questions:
  • Is our research tool adequate for estimating students’ CT levels?
  • Is there a correlation between students’ CT levels and their performance in STEM and language courses?
  • Is there a detectable correlation between students’ CT levels and their choice of field of study?

3. Method

3.1. Settings and Participants

Our research is longitudinal [87], as it tracked the performance of 80 students in the final grade of an Athenian Lyceum on a computational thinking test and on the recapitulative written tests of the first semester of the school year 2021–2022. These written tests were preceded by teaching with worksheets, laboratory activities, and problem-solving activities during the first semester of study; as such, they review the work done in class during that semester. These data were collected in collaboration with the physics, biology, mathematics, informatics, and language teachers and with the students’ consent. Initially, a CT assessment test was administered to the 80 students in November 2021, during the second teaching hour of the timetable, within the framework of one of the STEM subjects taught. The class teachers supervised the students taking the test throughout the procedure and, after consultation with the teacher-researcher, ensured the conditions were appropriate for conducting the test. The test duration was one teaching hour, i.e., 50 min. Participants were given the test printed on paper and an answer sheet with sufficient space for any rough work they might need to do to answer the test questions.
By the end of January 2022, when the first semester of the school year in secondary education ended, all students participated in the mandatory recapitulative written tests in all subjects taught, following article 86 of the relevant Law 4823/2021. The grades of these written recapitulative tests in mathematics, physics, biology, informatics, and Greek language are objective, since all students were examined on the same subjects and under the same conditions. The teacher-researcher collected these grades with the teachers’ consent so that they could be subjected to the appropriate statistical processing to address the research questions posed. The 80 students who participated were divided into five different classes. Their distribution with respect to the direction of study they had chosen was as follows: 15 students in Humanities Studies, 33 in Science Studies, and 32 in Economics–Informatics Studies.
Finally, the research objectives and process were described in detail to the teachers and the participating students to encourage greater participation in the survey [88]. Students’ consent to the survey was also requested; simultaneously, we assured them that the test results would be used exclusively for research purposes and would not affect their scores. Before conducting this study, the researchers received approval from the school supervisors and the Research Ethics Board (REB) designated by the University of Patras, specifically from the Department of Educational Science and Early Childhood Education (83965/4-11-2021).

3.2. Research Tool for Assessing CT Skills

The research tools that have been used to assess CT vary; most involve activities in a programming environment, and only a small number aim at an initial measurement of CT components without a direct connection between CT and CS. In constructing the CT assessment test used in our research, we aimed for a test that was not directly related to any of the teaching subjects of the Lyceum curricula. Therefore, this test contained questions related to the transfer of knowledge, i.e., whether students can transfer the acquired knowledge and skills of CT and apply them to other areas, such as problems of everyday life. According to researchers [61,68,69,70,71], the Bebras competition and the Computer Olympiad belong to this category. This is one of the most important reasons we formulated a test using material from both the Bebras competition and the related Computer Science Olympiad “Talent Search” of South Africa. In addition, among the criteria that the questions of these competitions meet are independence from any programme of study and the ability to be answered in 3 to 4 min, with pencil and paper or on a computer, without presupposing a unique solution method [65,70,71].
Therefore, we constructed a test (Appendix A) that fulfils the following conditions: (a) it does not rely on previously acquired knowledge; (b) it is not oriented to the subject of informatics/programming; (c) it corresponds to the level of students in the last grade of secondary education in Greece, regardless of the direction of studies they have chosen; (d) it is of a size that can be solved with pencil and paper in 50 min (that is, one teaching hour); and (e) it consists of questions coming from a reliable CT assessment source.
In the first stage, the test consisted of 18 questions that examined various components of CT, such as abstraction, logical thinking, decomposition, evaluation, hierarchy, algorithmic thinking, problem solving, modelling, and simulation. This test was piloted in two 25-student classes of third-grade students in two Lyceums other than the one where the research would take place. This process was deemed necessary to observe the students’ reactions in real time, assess the difficulty level and clarity of the questions under actual conditions, and deal with any problems that might appear. As a result of the pilot implementation, we reformulated some questions to make them more understandable to the students. Moreover, we removed four questions judged not to satisfy the required level of difficulty; that is, they did not discriminate clearly, either because they were so straightforward that everyone answered them or because they were so challenging that none of the students in the pilot could answer them.

3.3. The Strategy of Data Analysis

To check the measurement quality of the test, mainly how well it measures the students’ latent variable CT, we used the item response theory (IRT) approach [89]. IRT comprises a collection of modelling techniques for analyzing the items of a scale or test used to measure one or more latent variables [90]. Using IRT models, we not only improved the scoring accuracy of the latent trait but also improved the future conduct of the test by identifying which test items best measure the latent trait and should, therefore, be used in any reapplication of the test [89].
Several parametric unidimensional models can be satisfactorily fitted to the data. For dichotomous items, as in this research, parametric logistic (PL) models with 1 (the Rasch model), 2, and 3 parameters are the most common [89,91]. In the one-parameter model, the only item parameter is the difficulty (or location) of the item. In the two-parameter model, two parameters vary for each item: the difficulty and the discrimination. Finally, in the three-parameter model, a guessing parameter is added for the probability of a correct answer by chance [89]. In any case, researchers should choose the model best fitted to the data. To determine which model fits the data best, the fit should be assessed both for all items as a whole and for each item separately [91]. In this direction, the −2 log-likelihood index, which follows the χ2 distribution, and the χ2 difference were utilized. To test the reliability of the data (whether they provide adequate information on the students’ latent trait), we examined the item information curves. Finally, to estimate the latent trait, we used the expected a posteriori (EAP) method, which is flexible and produces more accurate scores [91].
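For reference, the three models have the following standard (textbook) item response functions, where P_i(θ) is the probability that a student with latent trait level θ answers item i correctly, b_i is the item difficulty, a_i the discrimination, and c_i the guessing parameter; this is standard IRT notation rather than a reproduction of the formulas in [89]:

```latex
% 1PL (Rasch), 2PL, and 3PL item response functions
\[
\text{1PL: } P_i(\theta) = \frac{1}{1 + e^{-(\theta - b_i)}}, \qquad
\text{2PL: } P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}},
\]
\[
\text{3PL: } P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
\]
```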
The quality control of the test according to IRT was carried out in the R environment [92] with the “ltm” package [93]. At the same time, parallel analysis was used to test the unidimensionality of the instrument in the R environment with the “psych” package [94]. Finally, the analysis investigating the relationship between each student’s level of CT and other variables, such as students’ grades in STEM subjects and demographic characteristics, was carried out in SPSS [95].
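As an illustration, the following minimal R sketch reproduces the steps of this pipeline. The data frame name `responses` (dichotomous 0/1 scores for items q1–q14) is an assumption for exposition, not a variable taken from the study’s actual scripts:

```r
# Minimal sketch of the analysis pipeline described above (illustrative names).
# Assumes 'responses' is a data.frame of dichotomous (0/1) scores for q1..q14.
library(psych)  # parallel analysis
library(ltm)    # IRT models

# 1. Unidimensionality: parallel analysis with 2000 resampled/simulated datasets
fa.parallel(responses, fa = "pc", cor = "tet", n.iter = 2000)

# 2. Fit the 1PL (Rasch), 2PL, and 3PL models
fit1pl <- rasch(responses)
fit2pl <- ltm(responses ~ z1)
fit3pl <- tpm(responses)

# 3. Compare nested models via -2 log-likelihood (likelihood-ratio) tests
anova(fit1pl, fit2pl)  # 2PL vs. 1PL
anova(fit2pl, fit3pl)  # 3PL vs. 2PL

# 4. Item-level fit and difficulty/discrimination estimates (cf. Tables 3 and 4)
item.fit(fit2pl)
coef(fit2pl)

# 5. Latent CT trait scores via expected a posteriori (EAP)
scores <- factor.scores(fit2pl, method = "EAP", resp.patterns = responses)
```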

4. Results

Concerning the unidimensionality of the instrument, the parallel analysis revealed only one factor, with an eigenvalue of 3.52, significantly higher than the 95th percentile value (1.97) of the eigenvalues produced from 2000 random samples drawn from the original sample. Therefore, the instrument measures a single dimension, that is, the students’ latent trait.
Moreover, the one-, two-, and three-parameter models were fitted, and their fit to the data was checked. The table shows the χ2 fit indices and the comparison of successive models with the Δχ2 index. The comparison of the first model with the second supports the second as the optimal model; comparing the second with the third model then confirms the prevalence of the second model as the one that acceptably describes the data. The fit indices (Table 3) of each test item confirm this fact. The last column of Table 4 clearly shows that all items fit the data well, since for none of them is the χ2 value statistically significant.
Table 4 also shows the difficulty and discrimination values of each item according to the prevailing two-parameter model. More specifically, considering the difficulty parameter, we observe that this test consisted mainly of easy questions, since 10 out of 14 items had a negative difficulty. Only four items, notably items 9 and 1, are difficult. This means that most items of this test assess latent trait levels below the average (students with low and moderate performance in CT). Regarding the discrimination values, we observe that most are greater than one. Therefore, these items can discriminate between students with similar latent trait levels.
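To make the sign convention concrete, consider an illustrative item (values chosen for exposition, not estimates from Table 4) with difficulty b = −1 and discrimination a = 1.2 under the 2PL model; a student of average ability (θ = 0) answers it correctly with probability

```latex
\[
P(\theta = 0) = \frac{1}{1 + e^{-1.2\,(0 - (-1))}}
              = \frac{1}{1 + e^{-1.2}} \approx 0.77,
\]
```

so items with negative difficulty are “easy” in the sense that even average students are likely to solve them, and they are most informative about students below the average trait level.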
Finally, the following diagrams show the information curves for the latent trait, for each item separately (Figure 1) and for all items cumulatively (Figure 2). Overall, the test concentrates most of its information (63.06%) on latent trait levels up to a moderate level.
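In the ltm package used above, these curves and the reported information share can be reproduced as follows (a sketch continuing the earlier one; the cut-off θ = 1 for a “moderate level” is our assumption, not a value stated in the paper):

```r
# Item and test information curves for the fitted 2PL model (cf. Figures 1 and 2).
plot(fit2pl, type = "IIC")             # one information curve per item
plot(fit2pl, type = "IIC", items = 0)  # cumulative test information curve

# Share of the total test information located below a moderate trait level;
# the upper bound theta = 1 is an assumed cut-off.
information(fit2pl, range = c(-10, 1))
```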
Therefore, future versions of this test should remove items of similar difficulty that assess low-performing students and add more challenging items that assess high-performing students. Thus, for subsequent uses of the test, aiming at a better and more complete assessment of students’ latent CT trait, we propose excluding items q2 and q4 but not q12: q12, with a higher discrimination value and, therefore, more information, covers this area of the latent trait more adequately. Items q3, q5, and q6 should also be excluded, since their discrimination values of less than one indicate that they cannot provide sufficient information, while their difficulty range is covered by items q7 and q8.
Table 5 shows the correlations (r coefficients and 95% confidence intervals of r) between students’ latent CT trait and their written grades in the STEM and Greek language subjects. There is no statistically significant correlation with informatics. Finally, the non-parametric Kruskal–Wallis test, carried out to check whether CT differs across the three directions of study, revealed a statistically significant difference (χ2(2) = 9.174, p = 0.010) in favour of the students of Science Studies (Science Studies: M = 0.33, SD = 0.84; Humanities Studies: M = −0.36, SD = 0.61; Economics–Informatics Studies: M = −0.21, SD = 0.96).
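For completeness, a sketch of these final analyses in R (the grade vectors and the `direction` factor are illustrative names standing in for the collected school records; the study itself ran this stage in SPSS):

```r
# Correlations of the latent CT trait with course grades, and the
# Kruskal-Wallis test across directions of study (illustrative names).
df <- data.frame(ct        = scores$score.dat$z1,  # EAP estimates from above
                 physics   = physics_grades,       # assumed grade vectors
                 biology   = biology_grades,
                 maths     = maths_grades,
                 language  = language_grades,
                 direction = factor(direction))    # 3-level factor

cor.test(df$ct, df$physics)   # Pearson r with a 95% CI (cf. Table 5)
cor.test(df$ct, df$language)

kruskal.test(ct ~ direction, data = df)
```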

5. Discussion

The present research aimed to investigate the latent CT trait of students in the last grade of Lyceum (research question 1) as well as its correlation with their performance in STEM subjects (mathematics, natural sciences, and informatics) and the Greek language subject (research question 2). Another focus of this study was to explore whether there was a correlation between the latent trait of CT and the orientation choice made by the students who participated in the survey (research question 3).
Regarding the first question, the test assesses students’ latent trait up to a moderate level for a very satisfactory percentage (63.06%) of its information. As emerged from the evaluation of the questions of our research instrument, the 14 questions included could be reduced to 9 without losing the information the test provides. In particular, we concluded that items q2, q3, q4, q5, and q6 should be removed, since, on the one hand, their difficulty level is covered by other items and, on the other hand, they have lower discriminative capacity. However, our research tool needs to become more accurate in assessing students whose latent trait levels are higher on the scale. Therefore, it would be appropriate to remove questions of similar difficulty and discrimination, as questions of a similar type can fulfil their role, and to enrich the tool with questions of gradually increasing difficulty and higher discrimination that would allow us to distinguish more clearly and accurately the students who have achieved higher levels of CT.
Regarding the second research question, on the correlation between students’ latent CT trait and their written performance in STEM and language subjects, we found a relatively strong correlation for physics and biology and a moderate correlation for mathematics and the Greek language subject, whereas no correlation was found for computer science. These findings generally agree with the conclusions of the studies of Polat et al. [81] and Sun et al. [83] for primary education and with the research of Chongo et al. [84] and Hava and Koyunlu Ünlü [85] for secondary education. In support of our conclusions, we should mention that, according to Orban and Teeling-Smith [57], this is to be expected for physics, as most CT skills are exercised in physics teaching. Moreover, the results of this study are in line with the reports of Beheshti [96] and Weintrop et al. [12], who found high rates of CT practice use in STEM fields even in high school, which Hu [51] also supports for mathematics. Furthermore, the correlation of students’ CT with the Greek language course shows language’s vital role, as it supports all subjects [80,97]. Our finding about the CS course can be attributed to the inherent context of its teaching approach: in the last grade of the Greek Lyceum, CS is mainly oriented towards learning programming languages. Another explanation may lie in the background of the student population that chose Economics–Informatics, the direction in which this course was taught in this particular school year and school.
Finally, for the third research question, in which we examined the correlation between students’ CT and their field of study, the results indicated that students of Science Studies showed statistically significantly higher levels of the latent trait than students of the other two fields of study. We have not identified any prior research that tests the correlation between CT levels and the field of study. However, it was appropriate to investigate this correlation since, as explained above, in science and economics studies, students take STEM subjects (mathematics, physics, chemistry, biology, and CS) to a greater extent than in humanities. Notably, the research of Chongo et al. [84], which investigated the correlation between mathematics achievement and CT levels, involved science students on the basis of the same criterion. The result of our research was expected, as the learning background of students choosing Science Studies requires immersion in mathematics and physics throughout the Lyceum. After all, these are two scientific subjects whose teaching and learning presuppose and promote thought processes similar, if not identical, to CT practices. Most CT practices are widely used in the teaching, learning, and experimental processes of physics throughout high school without being explicitly labelled as CT processes [56,57,58,59]. As Orban and Teeling-Smith [57] report, physics teachers who consult Weintrop et al.’s [12] classification of the components of CT identify practices that they have already used and continue to use in their teaching. A similar observation can be made about teaching Lyceum mathematics in the context of Realistic Mathematics Education (RME), the dominant trend in contemporary mathematics education [48,49,50,51,54].

6. Limitations and Implications

The processing of the students’ responses to the designed test indicated that it measures a student’s latent trait to a satisfactory degree. However, to confirm that it measures students’ computational thinking, the resulting measurement must be correlated with another tool used to measure CT [87]. In addition, we will need to redesign the test by reconstructing the questions, focusing on gradually increasing their difficulty level and their discrimination, so that the test satisfactorily assesses students’ latent CT trait across the full spectrum of levels [89].
One limitation of the present study is that the sample was selected through convenience sampling. Nevertheless, this research is only the beginning of a process that aims to identify the direct correlation between CT and STEM and Greek language courses, provided that this correlation is confirmed by other research in Greece and abroad. In that case, the next stage is to recognize those practices of STEM and language courses that are essentially CT practices and to strengthen and enrich them, with the ultimate goal of developing and empowering future citizens’ CT. It is, therefore, imperative to enrich curricula with appropriate activities that promote and enhance students’ acquisition of the characteristics of CT in the context of STEM and language courses.
Finally, measuring CT was particularly enjoyable for the students. All students who participated in the process were preparing to participate in the Panhellenic Exams of the current school year. They considered measuring CT an enjoyable and exciting break, leading to many discussions on the test questions for several weeks after its end.

Author Contributions

Methodology, A.B., K.L. and V.K.; formal analysis, A.B. and K.L.; data curation, A.B.; writing—original draft, A.B. and K.L.; writing—review and editing, V.K., S.P. and P.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The research protocol conformed to the ethical guidelines of the European Union and was approved by the Institutional Review Board of the Department of Educational Science and Early Childhood Education of the University of Patras (protocol code: 83965; date: 4 November 2021). Permission was given by students prior to the research. Ethical considerations and guidelines concerning individuals’ privacy were carefully considered throughout the research process.

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The datasets generated and analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

q1. Turn the Cards (Logic)
Question:
Cards have a letter on one side and a number on the other. A beaver shows you the four cards below.
[Image i001]
The beaver says:
“If there is a vowel on one side of a card, then there is always an even number on the opposite side of the same card”.
A vowel is one of A, E, I, O or U.
You can only see one side of each card, and you must work out whether the beaver is telling the truth.
Which cards must be flipped over to determine if the beaver is telling the truth?
Possible Answers:
All of them
E and 7
E and 2
E, V and 2
q2. Truth (Logic)
Question:
Beaver Bob only tells the truth on Monday, Wednesday and Friday and always lies on all other days of the week. Today, he says: “Tomorrow, I will tell the truth”.
What day is it?
Possible Answers:
(A) Tuesday
(B) Friday
(C) Saturday
(D) Sunday
q3. Well Placed Towers (Hierarchical Structure)
Look at the towers shown below.
A tower is “well placed” if all the towers to the left of it are shorter and all towers to its right are taller.
[Image i002]
Question:
Write the letter of all the well-placed towers in the correct box on your answer sheet.
q4. Glasses (creating and following algorithms)
Question:
There are five empty glasses on a table. One is facing down, and four are facing up.
[Image i003]
Flipping a glass changes it from facing up to facing down or from facing down to facing up.
In one turn, you must flip exactly three different glasses. The glasses which are flipped do not need to be adjacent.
What is the minimum number of turns to make all glasses face up?
Possible Answers:
Two turns
Three turns
Five turns
It is not possible to make all glasses face up
q5. Magic Word (solving a problem)
Question:
A magic word is needed to open a box. A secret code assigns each letter of the alphabet to a unique number. The code for the magic word is written outside the box.
[Image i004]
What is the magic word?
Possible Answers:
LOOSER
WINNER
LOTTOS
TICKET
q6. Red Riding Hood (solving a problem)
Little Red Riding Hood wants to pick flowers for her grandmother. Her garden is divided into several flower beds. Each flower bed has a certain number of flowers.
Little Red Riding Hood picks flowers from the top left flower bed and goes down to the bottom right. She can walk downwards (↓) or to the right (→) but in no other direction.
Question:
Select the flower beds on her path to collect the most flowers.
[Image i005]
q7. Subway (evaluation)
A train system consists of four train lines that start at the stations: Acton, Bams, Chat, and Dinmore.
[Image i006]
John went to the Zoo. He changed train lines once, at Moor, Museum, Mart or Market.
Question:
From which station did he start his journey?
Acton, Bams, Chat or Dinmore
q8. Seating Arrangement (evaluation)
Eight friends are sitting in a circle. They are all facing inwards. We know the following facts about where they are sitting:
  • Alice is sitting directly opposite David.
  • Henry is sitting between Greta and Eugene.
  • Franny is not next to Alice or David.
  • There is one person between Greta and Claire.
  • Eugene is sitting immediately to David’s left.
[Image i007]
Question:
Place the friends in the correct places in the circle.
Possible Answers:
(There may be multiple correct solutions; you only need to find one.)
Which of the following shows the correct seat allocation?
A: 1 = Alice; 2 = Bruce; 3 = Claire; 4 = David; 5 = Eugene; 6 = Franny; 7 = Greta; 8 = Henry
B: 1 = Alice; 2 = Bruce; 3 = Henry; 4 = Greta; 5 = Franny; 6 = Eugene; 7 = David; 8 = Claire
C: 1 = Alice; 2 = Franny; 3 = Eugene; 4 = Claire; 5 = David; 6 = Bruce; 7 = Greta; 8 = Henry
D: 1 = Alice; 2 = Claire; 3 = Franny; 4 = Bruce; 5 = David; 6 = Eugene; 7 = Henry; 8 = Greta
q9. Coloured paper (abstraction)
Beaver Bert has a long strip of coloured paper for a party. The strip has three different colours (yellow, red, blue) in a regularly repeating pattern.
Bert’s friend, James, has cut out a section of the paper, as shown in the diagram below.
[Image i008]
James says he will return the missing paper if Bert can correctly guess the cut-out size.
Question:
How many coloured squares does the missing piece of paper have?
Possible Answers:
31
32
33
34
q10. Match-up (abstraction)
Question:
LEAD is to DEAL as 9514 is to ………
Possible Answers:
(a) 9514
(b) 9451
(c) 4519
(d) 4159
q11. A walk in the park (modelling and simulation)
This is the map of a park:
The green circles with letters represent the trees, and the brown lines are paths.
[Image i009]
Note that some letters are used to label more than one tree. Walking from tree F to tree B can be described as F D E C A B.
Last Sunday, two families walked in the park.
The Wilde family’s walk was B A A A C E D E E D A.
The Gilde family’s walk was F D C D A E A D E D A.
Both families started their walks at the same time.
Walking from one tree to another, down one path, takes the same time.
Question:
How many times did the two families meet at a tree?
Possible Answers:
A. Once; B. Twice; C. Three times; D. They never met at any of the trees
q12. Warehouse (modelling and simulation)
In a warehouse, three robots always work as a team.
When the team gets a direction instruction (N, S, E, W), all robots in the grid will move one square in that direction simultaneously.
[Image i010]
After following a list of instructions, the robots pick up the object found in their final square.
For example, if we give the list N, N, S, S, and E to the team, robot A will pick up a cone, robot B will pick up a ring, and robot C will pick up a cone.
Question:
Which list of instructions can be sent to the robots so that the team picks up precisely a sphere, a cone, and a ring?
Possible Answers:
A. N, E, E, E
B. N, E, E, S, E
C. N, N, S, E, N
D. N, E, E, S, W
q13. Encoded messages (decomposition)
Beaver Alex and Beaver Betty send each other messages using the following sequence of transformations on every word.
For example, the word “BEAVER” is transformed into “WBFCSF”.
[Image i011]
Beaver Betty receives the encoded message “PMGEP” from Beaver Alex.
Question:
What did Beaver Alex want to say?
Possible Answers:
RIVER, KNOCK, FLOOD or LODGE
q14. Stars (decomposition)
Stella, the beaver, loves to draw stars. She has devised a system for labelling her stars according to their shape. She uses two numbers:
  • The number of points on the star.
  • A number indicating whether a line from a point is drawn to the nearest point (1), the second-closest point (2), etc.
Here are four examples of Stella’s labelling system:
[Image i012]
Question:
How would Stella label the next star?
[Image i013]

References

  1. Arastoopour, G.I.; Dabholkar, S.; Bain, C.; Woods, P.; Hall, K.; Swanson, H.; Horn, M.; Wilensky, U. Modeling and measuring high school students’ computational thinking practices in science. J. Sci. Educ. Technol. 2020, 29, 137–161.
  2. Barr, V.; Stephenson, C. Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? ACM Inroads 2011, 2, 48–54.
  3. National Research Council US. A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas; National Academies Press: Washington, DC, USA, 2012.
  4. Cutumisu, M.; Adams, C.; Lu, C. A scoping review of empirical research on recent computational thinking assessments. J. Sci. Educ. Technol. 2019, 28, 651–676. [Google Scholar] [CrossRef]
  5. González, M.R. Computational thinking test: Design guidelines and content validation. In Proceedings of the EDULEARN15 Conference, Barcelona, Spain, 6–8 July 2015; pp. 2436–2444. [Google Scholar]
  6. Grover, S.; Pea, R. Computational thinking in K-12: A review of the state of the field. Educ. Res. 2013, 42, 38–43. [Google Scholar] [CrossRef]
  7. Gouws, L.A.; Bradshaw, K.; Wentworth, P. Computational thinking in educational activities: An evaluation of the educational game light-bot. In Proceedings of the 18th ACM Conference on Innovation and Technology in Computer Science Education, Canterbury, UK, 1–3 July 2013; pp. 10–15. [Google Scholar]
  8. National Research Council US. Report of a Workshop on the Scope and Nature of Computational Thinking; National Academies Press: Washington, DC, USA, 2010.
  9. National Research Council US. Report of a Workshop on the Pedagogical Aspects of Computational Thinking; National Academies Press: Washington, DC, USA, 2011.
  10. National Research Council US. Next Generation Science Standards: For States, by States. 2013. Available online: https://nap.nationalacademies.org/catalog/18290/next-generation-science-standards-for-states-by-states/ (accessed on 20 July 2022).
  11. Wang, C.; Shen, J.; Chao, J. Integrating computational thinking in stem education: A literature review. Int. J. Sci. Math. Educ. 2021, 20, 1949–1972. [Google Scholar] [CrossRef]
  12. Weintrop, D.; Beheshti, E.; Horn, M.; Orton, K.; Jona, K.; Trouille, L.; Wilensky, U. Defining computational thinking for mathematics and science classrooms. J. Sci. Educ. Technol. 2016, 2, 127–147. [Google Scholar] [CrossRef]
  13. Werner, L.; Denner, J.; Campe, S.; Kawamoto, D.C. The fairy performance assessment: Measuring computational thinking in middle school. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education, Raleigh, NC, USA, 29 February–3 March 2012; pp. 215–220. [Google Scholar]
  14. Wing, J.M. Computational thinking. Commun. ACM 2006, 49, 33–35. [Google Scholar] [CrossRef]
  15. Bower, M.; Wood, L.N.; Lai, J.W.; Highfield, K.; Veal, J.; Howe, C.; Lister, R.; Mason, R. Improving the computational thinking pedagogical capabilities of schoolteachers. Aust. J. Teach. Educ. 2017, 42, 53–72. [Google Scholar] [CrossRef]
  16. Heintz, F.; Mannila, L.; Färnqvist, T. A review of models for introducing computational thinking, computer science and computing in K-12 education. In Proceedings of the 2016 IEEE Frontiers in Education Conference (FIE), Erie, PA, USA, 12–15 October 2016; pp. 1–9. [Google Scholar]
  17. Hsu, Y.C.; Irie, N.R.; Ching, Y.H. Computational thinking educational policy initiatives (CTEPI) across the globe. TechTrends 2019, 63, 260–270. [Google Scholar] [CrossRef]
  18. Kafai, Y.B.; Proctor, C. A Revaluation of Computational Thinking in K-12 Education: Moving Toward Computational Literacies. Educ. Res. 2022, 51, 146–151. [Google Scholar] [CrossRef]
  19. Lockwood, J.; Mooney, A. Computational thinking in education: Where does it fit? A systematic literary review. arXiv 2017, arXiv:1703.07659. [Google Scholar] [CrossRef]
  20. Mannila, L.; Dagiene, V.; Demo, B.; Grgurina, N.; Mirolo, C.; Rolandsson, L.; Settle, A. Computational thinking in K-9 education. In Proceedings of the Working Group Reports of 2014 on Innovation & Technology in Computer Science Education Conference, Uppsala, Sweden, 23–25 June 2014; pp. 1–29. [Google Scholar]
  21. Mohaghegh, M.; McCauley, M. Computational Thinking: The Skill Set of the 21st Century. Int. J. Comput. Sci. Inf. Technol. 2016, 7, 1524–1530. [Google Scholar]
  22. Nordby, S.K.; Bjerke, A.H.; Mifsud, L. Computational thinking in the primary mathematics classroom: A systematic review. Digit. Exp. Math. Educ. 2022, 8, 27–49. [Google Scholar] [CrossRef]
  23. Settle, A.; Franke, B.; Hansen, R.; Spaltro, F.; Jurisson, C.; Rennert-May, C.; Wildeman, B. Infusing computational thinking into the middle- and high-school curriculum. In Proceedings of the 17th ACM Annual Conference on Innovation and Technology in Computer Science Education, Haifa, Israel, 3–5 July 2012; pp. 22–27. [Google Scholar]
  24. Voogt, J.; Fisser, P.; Good, J.; Mishra, P.; Yadav, A. Computational thinking in compulsory education: Towards an agenda for research and practice. Educ. Inf. Technol. 2015, 20, 715–728. [Google Scholar] [CrossRef]
  25. Yadav, A.; Hong, H.; Stephenson, C. Computational thinking for all: Pedagogical approaches to embedding 21st century problem solving in K-12 classrooms. TechTrends 2016, 60, 565–568. [Google Scholar] [CrossRef]
  26. Yadav, A.; Good, J.; Voogt, J.; Fisser, P. Computational thinking as an emerging competence domain. In Competence-Based Vocational and Professional Education; Springer: Cham, Switzerland, 2017; pp. 1051–1067. [Google Scholar] [CrossRef]
  27. Papert, S. Mindstorms: Children, Computers, and Powerful Ideas; Basic Books: New York, NY, USA, 1980. [Google Scholar]
  28. Çoban, E.; Korkmaz, Ö. An alternative approach for measuring computational thinking: Performance-based platform. Think. Ski. Creat. 2021, 42, 100929. [Google Scholar] [CrossRef]
  29. García-Peñalvo, F.J.; Cruz-Benito, J. Computational thinking in pre-university education. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2–4 November 2016; pp. 13–17. [Google Scholar]
  30. Gouws, L.A.; Bradshaw, K.; Wentworth, P. First year student performance in a test for computational thinking. In Proceedings of the South African Institute for Computer Scientists and Information Technologists Conference, East London, South Africa, 7–9 October 2013; pp. 271–277. [Google Scholar]
31. Korkmaz, Ö.; Bai, X. Adapting computational thinking scale (CTS) for Chinese high school students and their thinking scale skills level. Particip. Educ. Res. 2019, 6, 10–26. [Google Scholar] [CrossRef]
  32. Shute, V.J.; Sun, C.; Asbell-Clarke, J. Demystifying computational thinking. Educ. Res. Rev. 2017, 22, 142–158. [Google Scholar] [CrossRef]
  33. Poulakis, E.; Politis, P. Computational thinking assessment: Literature review. In Research on E-Learning and ICT in Education; Tsiatsos, T., Demetriadis, S., Mikropoulos, A., Dagdilelis, V., Eds.; Springer Nature: Berlin, Germany, 2021; pp. 111–128. [Google Scholar] [CrossRef]
  34. Tang, X.; Yin, Y.; Lin, Q.; Hadad, R.; Zhai, X. Assessing computational thinking: A systematic review of empirical studies. Comp. Educ. 2020, 148, 103798. [Google Scholar] [CrossRef]
  35. Weintrop, D.; Beheshti, E.; Horn, M.S.; Orton, K.; Trouille, L.; Jona, K.; Wilensky, U. Interactive assessment tools for computational thinking in high school STEM classrooms. In Proceedings of the International Conference on Intelligent Technologies for Interactive Entertainment, Chicago, IL, USA, 9–11 July 2014; pp. 22–25. [Google Scholar]
  36. Wing, J.M. Computational Thinking Benefits Society. In 40th Anniversary Blog of Social Issues in Computing; DiMarco, J., Ed.; Academic Press: New York, NY, USA, 2014; Volume 2014, Available online: http://socialissues.cs.toronto.edu/index.html%3Fp=279.html (accessed on 28 March 2023).
  37. Brennan, K.; Resnick, M. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 Annual American Educational Research Association Meeting, Vancouver, BC, Canada, 13–17 April 2012; p. 25. [Google Scholar]
  38. Denner, J.; Werner, L.; Ortiz, E. Computer games created by middle school girls: Can they be used to measure understanding of computer science concepts? Comp. Educ. 2012, 58, 240–249. [Google Scholar] [CrossRef]
  39. Barr, D.; Harrison, J.; Conery, L. Computational thinking: A digital age skill for everyone. Learn. Lead. Technol. 2011, 38, 20–23. [Google Scholar]
  40. Selby, C.; Woollard, J. Computational Thinking: The Developing Definition; University of Southampton, e-prints: Southampton, UK, 2013. [Google Scholar]
  41. Yadav, A.; Mayfield, C.; Zhou, N.; Hambrusch, S.; Korb, J.T. Computational thinking in elementary and secondary teacher education. ACM Trans. Comput. Educ. 2014, 14, 1–16. [Google Scholar] [CrossRef]
  42. Siekmann, G. What is STEM? The Need for Unpacking its Definitions and Applications; National Centre for Vocational Education Research (NCVER): Adelaide, Australia, 2016. [Google Scholar]
  43. Institute of Educational Policy (IEP). Primary and Secondary Education Programs of Studies. Available online: http://iep.edu.gr/el/nea-programmata-spoudon-arxiki-selida (accessed on 20 July 2022).
  44. Aho, A.V. Computation and computational thinking. Comp. J. 2012, 55, 832–835. [Google Scholar] [CrossRef]
  45. CSTA Standards Task Force. [Interim] CSTA K-12 Computer Science Standards; CSTA: New York, NY, USA, 2016. [Google Scholar]
  46. Google. Exploring Computational Thinking. Available online: https://edu.google.com/resources/programs/exploring-computational-thinking/ (accessed on 20 July 2022).
  47. Theodoropoulou, I.; Lavidas, K.; Komis, V. Results and prospects from the utilization of Educational Robotics in Greek Schools. Technol. Knowl. Learn. 2021, 28, 225–240. [Google Scholar] [CrossRef]
  48. Blum, W.; Ferri, R.B. Mathematical modelling: Can it be taught and learnt? J. Math. Modell. Appl. 2009, 1, 45–58. [Google Scholar]
  49. Gravemeijer, K.; Doorman, M. Context problems in realistic mathematics education: A calculus course as an example. Educ. Stud. Math. 1999, 39, 111–129. [Google Scholar] [CrossRef]
  50. Hiebert, J.; Carpenter, T.P.; Fennema, E.; Fuson, K.; Human, P.; Murray, H.; Olivier, A.; Wearne, D. Problem solving as a basis for reform in curriculum and instruction: The case of mathematics. Educ. Res. 1996, 25, 12–21. [Google Scholar] [CrossRef]
  51. Hu, C. Computational thinking: What it might mean and what we might do about it. In Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, Darmstadt, Germany, 27–29 June 2011; pp. 223–227. [Google Scholar]
  52. Kallia, M.; van Borkulo, S.P.; Drijvers, P.; Barendsen, E.; Tolboom, J. Characterizing computational thinking in mathematics education: A literature-informed Delphi study. Res. Math. Educ. 2021, 23, 159–187. [Google Scholar] [CrossRef]
  53. Lockwood, E.; DeJarnette, A.F.; Asay, A.; Thomas, M. Algorithmic Thinking: An Initial Characterization of Computational Thinking in Mathematics. In Proceedings of the Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, Tucson, AZ, USA, 3–6 November 2016. [Google Scholar]
  54. Polya, G. How to Solve it: A New Aspect of Mathematical Method; Princeton University Press: Princeton, NJ, USA, 2004; Volume 85. [Google Scholar]
  55. Gilbert, J.K. Models and modelling: Routes to more authentic science education. Int. J. Sci. Math. Educ. 2004, 2, 115–130. [Google Scholar] [CrossRef]
  56. Larkin, J.H.; McDermott, J.; Simon, D.P.; Simon, H.A. Models of competence in solving physics problems. Cogn. Sci. 1980, 4, 317–345. [Google Scholar] [CrossRef]
  57. Orban, C.M.; Teeling-Smith, R.M. Computational thinking in introductory physics. Phys. Teach. 2020, 58, 247–251. [Google Scholar] [CrossRef]
  58. Reif, F.; Heller, J.I. Knowledge structure and problem solving in physics. Educ. Psychol. 1982, 17, 102–127. [Google Scholar] [CrossRef]
  59. Heller, J.I.; Reif, F. Prescribing effective human problem-solving processes: Problem description in physics. Cognit. Instr. 1984, 1, 177–216. [Google Scholar] [CrossRef]
60. Sengupta, P.; Dickes, A.; Farris, A. Toward a phenomenology of computational thinking in STEM education. In Computational Thinking in the STEM Disciplines; Springer: Cham, Switzerland, 2018; pp. 49–72. [Google Scholar] [CrossRef]
61. ISTE; CSTA. Operational Definition of Computational Thinking for K-12 Education. 2011. Available online: https://cdn.iste.org/www-root/Computational_Thinking_Operational_Definition_ISTE.pdf (accessed on 20 July 2022).
  62. Lu, J.J.; Fletcher, G.H. Thinking about computational thinking. In Proceedings of the 40th ACM Technical Symposium on Computer Science Education, Chattanooga, TN, USA, 4–7 March 2009; pp. 260–264. [Google Scholar]
  63. Parsazadeh, N.; Cheng, P.Y.; Wu, T.T.; Huang, Y.M. Integrating computational thinking concept into digital storytelling to improve learners’ motivation and performance. J. Educ. Comput. Res. 2021, 59, 470–495. [Google Scholar] [CrossRef]
  64. Nesiba, N.; Pontelli, E.; Staley, T. DISSECT: Exploring the relationship between computational thinking and English literature in K-12 curricula. In Proceedings of the 2015 IEEE Frontiers in Education Conference (FIE), El Paso, TX, USA, 21–24 October 2015; pp. 1–8. [Google Scholar]
  65. Rottenhofer, M.; Sabitzer, B.; Rankin, T. Developing Computational Thinking skills through modeling in language lessons. Open Educ. Stud. 2021, 3, 17–25. [Google Scholar] [CrossRef]
  66. Sabitzer, B.; Demarle-Meusel, H.; Jarnig, M. Computational thinking through modeling in language lessons. In Proceedings of the 2018 IEEE Global Engineering Education Conference (EDUCON), Santa Cruz de Tenerife, Spain, 17–20 April 2018; pp. 1913–1919. [Google Scholar]
  67. De Paula, B.H.; Burn, A.; Noss, R.; Valente, J.A. Playing Beowulf: Bridging computational thinking, arts and literature through game-making. Int. J. Child Comput. Interact. 2018, 16, 39–46. [Google Scholar] [CrossRef]
  68. López, A.R.; García-Peñalvo, F.J. Relationship of knowledge to learn in programming methodology and evaluation of computational thinking. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain, 2–4 November 2016; pp. 73–77. [Google Scholar]
  69. Román-González, M.; Moreno-León, J.; Robles, G. Combining assessment tools for a comprehensive evaluation of computational thinking interventions. In Computational Thinking Education; Springer: Singapore, 2019; pp. 79–98. [Google Scholar]
  70. Kalelioğlu, F.; Gulbahar, Y.; Kukul, V. A framework for computational thinking based on a systematic research review. Baltic J. Modern Comput. 2016, 4, 583–596. [Google Scholar]
  71. Calcagni, A.; Lonati, V.; Malchiodi, D.; Monga, M.; Morpurgo, A. Promoting computational thinking skills: Would you use this Bebras task? In Proceedings of the International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, Helsinki, Finland, 13–15 November 2017; pp. 102–113. [Google Scholar]
  72. Dagienė, V.; Futschek, G. Bebras international contest on informatics and computer literacy: Criteria for good tasks. In Proceedings of the International Conference on Informatics in Secondary Schools-Evolution and Perspectives, Lausanne, Switzerland, 23–25 October 2008; pp. 19–30. [Google Scholar]
  73. Grover, S. The 5th ‘C’ of 21st Century Skills? Try Computational Thinking (Not Coding). 2018. Available online: https://www.edsurge.com/news/2018-02-25-the-5th-c-of-21st-century-skills-try-computational-thinking-not-coding (accessed on 28 March 2023).
  74. Li, Y.; Schoenfeld, A.H.; di Sessa, A.A.; Graesser, A.C.; Benson, L.C.; English, L.D.; Duschl, R.A. Computational thinking is more about thinking than computing. J. STEM Educ. Res. 2020, 3, 1–18. [Google Scholar] [CrossRef]
  75. Zhang, N.; Biswas, G. Defining and assessing students’ computational thinking in a learning by modeling environment. In Computational Thinking Education; Springer: Singapore, 2019; pp. 203–221. [Google Scholar]
  76. Aslan, U.; La Grassa, N.; Horn, M.; Wilensky, U. Putting the taxonomy into practice: Investigating students’ learning of chemistry with integrated computational thinking activities. In Proceedings of the American Education Research Association Annual Meeting (AERA), Virtual, 17–21 April 2020. [Google Scholar]
77. Lapawi, N.; Husnin, H. The effect of computational thinking modules on achievement in science. Sci. Educ. Int. 2020, 31, 164–171. [Google Scholar] [CrossRef]
  78. Chongo, S.; Osman, K.; Nayan, N.A. Impact of the Plugged-In and Unplugged Chemistry Computational Thinking Modules on Achievement in Chemistry. EURASIA J. Math. Sci. Technol. Educ. 2021, 17, 1–21. [Google Scholar] [CrossRef]
  79. Weller, D.P.; Bott, T.E.; Caballero, M.D.; Irving, P.W. Development and illustration of a framework for computational thinking practices in introductory physics. Phys. Rev. Phys. Educ. Res. 2022, 18, 020106. [Google Scholar] [CrossRef]
  80. Britton, J. Language and Learning; Penguin Books: London, UK, 1970. [Google Scholar]
  81. Polat, E.; Hopcan, S.; Kucuk, S.; Sisman, B. A comprehensive assessment of secondary school students’ computational thinking skills. Br. J. Educ. Technol. 2021, 52, 1965–1980. [Google Scholar] [CrossRef]
  82. Sun, L.; Hu, L.; Yang, W.; Zhou, D.; Wang, X. STEM learning attitude predicts computational thinking skills among primary school students. J. Comput. Assist. Learn. 2021, 37, 346–358. [Google Scholar] [CrossRef]
  83. Sun, L.; Hu, L.; Zhou, D. The bidirectional predictions between primary school students’ STEM and language academic achievements and computational thinking: The moderating role of gender. Think. Ski. Creat. 2022, 44, 101043. [Google Scholar] [CrossRef]
  84. Chongo, S.; Osman, K.; Nayan, N.A. Level of Computational Thinking Skills among Secondary Science Student: Variation across Gender and Mathematics Achievement. Sci. Educ. Int. 2020, 31, 159–163. [Google Scholar] [CrossRef]
  85. Hava, K.; Koyunlu Ünlü, Z. Investigation of the relationship between middle school students’ computational thinking skills and their STEM career interest and attitudes toward inquiry. J. Sci. Educ. Technol. 2021, 30, 484–495. [Google Scholar] [CrossRef]
  86. Lei, H.; Chiu, M.M.; Li, F.; Wang, X.; Geng, Y.J. Computational thinking and academic achievement: A meta-analysis among students. Child. Youth Serv. Rev. 2020, 118, 105439. [Google Scholar] [CrossRef]
  87. Creswell, J.W. Educational Research: Planning, Conducting, and Evaluating Quantitative; Prentice Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  88. Lavidas, K.; Petropoulou, A.; Papadakis, S.; Apostolou, Z.; Komis, V.; Jimoyiannis, A.; Gialamas, V. Factors Affecting Response Rates of The Web Survey with Teachers. Computers 2022, 11, 127. [Google Scholar] [CrossRef]
  89. Baker, F.B.; Kim, S.H. The Basics of Item Response Theory Using R; Springer: New York, NY, USA, 2017; pp. 17–34. [Google Scholar]
  90. De Ayala, R.J. The Theory and Practice of Item Response Theory; Guilford Publications: New York, NY, USA, 2013. [Google Scholar]
  91. Tsigilis, N. Examining Instruments’ Psychometric Properties within the Item Response Theory Framework: From Theory to Practice. Hell. J. Psychol. 2019, 16, 335–376. [Google Scholar] [CrossRef]
  92. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018; Available online: https://www.R-project.org/ (accessed on 10 May 2022).
  93. Rizopoulos, D. LTM: An R package for latent variable modeling and item response analysis. J. Stat. Softw. 2006, 17, 1–25. [Google Scholar] [CrossRef]
  94. Revelle, W. Psych: Procedures for Personality and Psychological Research (Version 1.8.4) (Computer Software); Northwestern University: Evanston, IL, USA, 2018. [Google Scholar]
  95. Field, A. Discovering Statistics Using IBM SPSS Statistics, 5th ed.; SAGE: Washington, DC, USA, 2018. [Google Scholar]
  96. Beheshti, E. Computational thinking in practice: How STEM professionals use CT in their work. In Proceedings of the American Education Research Association Annual Meeting, San Francisco, CA, USA, 27 April–1 May 2017. [Google Scholar]
  97. Childs, P.E.; Markic, S.; Ryan, M.C. The role of Language in the teaching and learning of chemistry. In Chemistry Education: Best Practices, Opportunities and Trends; García-Martínez, J., Serrano-Torregrosa, E., Eds.; Wiley: Hoboken, NJ, USA, 2015; pp. 421–446. [Google Scholar] [CrossRef]
Figure 1. Information curves for each element of the questionnaire.
Figure 2. Cumulative information curve for all elements.
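For readers who wish to reproduce curves of this kind, the sketch below shows how item and test information curves can be plotted with the ltm package [93] in R [92]. It is a minimal illustration only: the `responses` data frame is simulated stand-in data with the same shape as the study's 80 × 14 response matrix (the real data are not published), and the calls use the package's standard plotting interface.

```r
# Minimal sketch of how figures like Figures 1 and 2 can be produced with ltm.
# 'responses' is SIMULATED stand-in data: an 80 x 14 data frame of 0/1 item
# scores, not the study's actual test results.
library(ltm)

set.seed(1)
responses <- as.data.frame(matrix(rbinom(80 * 14, 1, 0.6), ncol = 14,
                                  dimnames = list(NULL, paste0("q", 1:14))))

fit2pl <- ltm(responses ~ z1)          # two-parameter logistic (2PL) model

plot(fit2pl, type = "IIC")             # one information curve per item (cf. Figure 1)
plot(fit2pl, type = "IIC", items = 0)  # test information curve (cf. Figure 2)
```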
Table 1. CT definitions (I): Computational Thinking (related to Computer Science).

| Barr and Stephenson (2011) [2] | Grover and Pea (2013) [6] |
| --- | --- |
| Data collection | Abstractions and pattern generalizations |
| Data representation and analysis | Systematic processing of information |
| Abstraction | Symbol systems and representations |
| Analysis and model validation | Algorithmic notions of flow of control |
| Automation | Structured problem decomposition (modularizing) |
| Testing and verification | Iterative, recursive, and parallel thinking |
| Algorithms and procedures | Conditional logic |
| Problem decomposition | Efficiency and performance constraints |
| Control structures | Debugging and systematic error detection |
| Parallelization | |
| Simulation | |
Table 2. CT definitions (II): Computational Thinking (related to Computer Science and STEM).

| Yadav et al. (2014) [41] | ISTE and CSTA (2011) [61] | Weintrop et al. (2016) [12] |
| --- | --- | --- |
| Problem identification and decomposition | Formulating problems in a way that enables us to use a computer and other tools to help solve them | Data practices |
| Abstraction | Logically organizing and analyzing data | Modelling and simulation practices |
| Logical thinking | Representing data through abstractions, such as models and simulations | Problem-solving practices |
| Algorithms | Automating solutions through algorithmic thinking (a series of ordered steps) | Systems thinking practices |
| Debugging | Identifying, analyzing, and implementing possible solutions to achieve the most efficient and effective combination of steps and resources | |
| | Generalizing and transferring the problem-solving process to a wide variety of problems | |
Table 3. Fit indices of each model.

| Model | Log-Likelihood | χ² | Δχ² | Δdf | p-Value |
| --- | --- | --- | --- | --- | --- |
| 1PL | −622.48 | 1244.96 | | | |
| 2PL | −605.35 | 1210.70 | 34.26 | 14 | 0.002 |
| 3PL | −598.02 | 1196.04 | 14.66 | 14 | 0.402 |
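The comparison in Table 3 is a sequence of likelihood-ratio tests between nested IRT models. The following is a hedged sketch of how such a comparison can be run with ltm [93]; `responses` is the same simulated stand-in as above, so the printed statistics will not match the published values.

```r
# Sketch of the nested-model comparison behind Table 3 (simulated data, so
# the numbers will differ from the paper's). anova() prints log-likelihoods
# and the likelihood-ratio test (the delta chi-square, df, and p-value).
library(ltm)

set.seed(1)  # same simulated stand-in data as in the first sketch
responses <- as.data.frame(matrix(rbinom(80 * 14, 1, 0.6), ncol = 14,
                                  dimnames = list(NULL, paste0("q", 1:14))))

fit1pl <- rasch(responses)     # 1PL: one difficulty per item, common discrimination
fit2pl <- ltm(responses ~ z1)  # 2PL: adds item-specific discrimination
fit3pl <- tpm(responses)       # 3PL: adds a lower asymptote ("guessing") per item

anova(fit1pl, fit2pl)  # in Table 3 this step is significant (p = 0.002): prefer 2PL
anova(fit2pl, fit3pl)  # this one is not (p = 0.402): retain the simpler 2PL
```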
Table 4. Parameter values and fit indices for every element.

| Element | Difficulty | Discrimination | χ² | Pr(>χ²) |
| --- | --- | --- | --- | --- |
| q1 | 1.266 | 0.324 | 4.0136 | 0.9208 |
| q2 | −0.426 | 1.114 | 6.4867 | 0.6337 |
| q3 | −1.030 | 0.605 | 6.0765 | 0.6436 |
| q4 | −0.136 | 1.269 | 5.5186 | 0.8515 |
| q5 | −3.020 | 0.705 | 9.9105 | 0.2772 |
| q6 | −0.707 | 0.750 | 5.6962 | 0.7426 |
| q7 | −1.245 | 2.033 | 8.0635 | 0.3168 |
| q8 | −2.208 | 1.238 | 7.3209 | 0.4653 |
| q9 | 1.047 | 0.773 | 7.2294 | 0.6436 |
| q10 | −1.200 | 1.290 | 8.1537 | 0.5149 |
| q11 | 0.162 | 0.952 | 9.5822 | 0.3663 |
| q12 | −0.122 | 4.272 | 5.3639 | 0.495 |
| q13 | −0.486 | 2.250 | 9.2992 | 0.2673 |
| q14 | 0.166 | 2.018 | 10.1331 | 0.3564 |
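Table 4's columns correspond to the standard outputs of a fitted 2PL model: per-item difficulty and discrimination estimates, plus per-item chi-square fit statistics. The sketch below, again on simulated stand-in data, shows where such numbers come from in ltm [93].

```r
# Sketch of how the Table 4 columns can be obtained from a fitted 2PL model
# (simulated stand-in data; the estimates will not match the published values).
library(ltm)

set.seed(1)  # same simulated stand-in data as in the first sketch
responses <- as.data.frame(matrix(rbinom(80 * 14, 1, 0.6), ncol = 14,
                                  dimnames = list(NULL, paste0("q", 1:14))))

fit2pl <- ltm(responses ~ z1)

coef(fit2pl)      # per-item difficulty (Dffclt) and discrimination (Dscrmn)
item.fit(fit2pl)  # per-item chi-square statistics with Pr(>X^2), as in Table 4
```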
Table 5. Correlations between the latent trait of CT (2PL scores) and grades of STEM and Greek language courses, with 95% confidence intervals.

| Course | N | r | Lower 95% CI | Upper 95% CI |
| --- | --- | --- | --- | --- |
| Mathematics | 51 | 0.270 | 0.045 | 0.500 |
| Physics | 33 | 0.514 | 0.208 | 0.729 |
| Informatics | 32 | −0.046 | −0.389 | 0.307 |
| Biology | 13 | 0.806 | 0.459 | 0.940 |
| Greek Language | 80 | 0.272 | 0.056 | 0.464 |

Note: N = number of students who sat the written test in each course; r = correlation between students' 2PL ability scores and their course grades.
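The correlations in Table 5 pair each student's latent ability estimate from the 2PL model with a course grade. As a hedged sketch: factor.scores() from ltm [93] yields the ability estimates, and base R's cor.test() gives Pearson's r with a 95% confidence interval (the paper also cites the psych package [94] for such procedures). The grade vector below is simulated purely for illustration.

```r
# Sketch of the Table 5 computation: correlate 2PL ability scores with course
# grades. Both 'responses' and 'grades' are SIMULATED stand-ins.
library(ltm)

set.seed(1)  # same simulated stand-in data as in the first sketch
responses <- as.data.frame(matrix(rbinom(80 * 14, 1, 0.6), ncol = 14,
                                  dimnames = list(NULL, paste0("q", 1:14))))

fit2pl <- ltm(responses ~ z1)

# Ability estimate (z1) for each observed response pattern, in row order.
theta <- factor.scores(fit2pl, resp.patterns = responses)$score.dat$z1

# Hypothetical grades on the Greek 0-20 scale; in the study, only students
# who sat a given course's written test would enter that course's row.
grades <- pmin(20, pmax(0, round(12 + 2 * theta + rnorm(80), 1)))

cor.test(theta, grades)  # Pearson's r with a 95% confidence interval
```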
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
