Article

Effects of the International Training Program for Enhancing Intelligent Capabilities through Blended Learning on Computational Thinking, Artificial Intelligence Competencies, and Core Competencies for the Future Society in Graduate Students

1 Division of Software Engineering, Pai Chai University, Daejeon 35345, Republic of Korea
2 Department of Nursing, Catholic Kkottongnae University, Cheongju-si 28211, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(3), 991; https://doi.org/10.3390/app14030991
Submission received: 19 December 2023 / Revised: 14 January 2024 / Accepted: 15 January 2024 / Published: 24 January 2024
(This article belongs to the Special Issue ICTs in Education)

Abstract

Background: The purpose of this study was to identify the effects of an international training program for enhancing intelligent capabilities through blended learning on computational thinking, artificial intelligence (AI) competency, and core competencies for the future society in graduate students enrolled in the Smart Information Communication Technology (SMART ICT) course. The teaching model followed the ADDIE framework. Methods: This study is a quasi-experimental study based on a nonequivalent control group design. Study subjects were assigned to an experimental (n = 20) or control group (n = 20). The experimental group participated in the international training program in a blended learning form: real-time online classes (60 min per session for a week, six sessions) and face-to-face classes (4–8 h per session for 9 days, six sessions). The variables were measured with a self-report questionnaire and were evaluated before, right after, and in the 12th week of the program. Results: The AI competency of the experimental group changed significantly across time points (F = 6.76, p = 0.002) and differed significantly from that of the control group (F = 9.77, p = 0.003). Conclusions: This study suggests applying an international training program based on blended learning to strengthen intelligent capabilities such as AI competency.

1. Introduction

With the advancement of computer devices and the development of software and information and communications technology (ICT), the amount of collectable data has risen explosively. Today, such information is used across many fields of society, and many kinds of intelligent activities are being automated [1]. In order to keep up with social changes and needs, it is important to cultivate talented people who can solve problems on the basis of AI and software [2]. Computers are being applied in various ways across all fields, including language, mathematics, medicine, management, law, politics, and the arts [3]. This requires the capability to extract the key elements of a problem and to automate their solution with a computing device [4]. That capability is computational thinking, an efficient and systematic problem-solving competency applicable to all fields of study [5]. It is essential for talented people in the present age, in which industrial convergence actively occurs [4,6].
In line with the advent of the intelligent information society, the Korean government has implemented software education as part of elementary, middle, and high school curricula, and since 2015 it has been enhancing the software competency of not only software majors but all students through the Software-Centric University Project [7,8,9]. As a result, awareness of software education has increased, and related curricula and teaching methods have been continuously improved [10,11]. The variety of software learning and experience offered by universities gives non-major students confidence, and more non-major students now pursue software-based plural majors through double-major or interdisciplinary-studies systems [1,10,12]. Recently, the Ministry of Education announced the introduction of ‘artificial intelligence (AI) education’ in the 2022 Revised Curriculum, scheduled to be applied from 2025 [13]. This education covers a variety of contents, including understanding and experiencing AI concepts and principles, problem finding through the sharing and analysis of social phenomena, creative problem solving using data and statistics, and AI ethics concerning AI’s effects on society [14,15,16]. Therefore, it is necessary to provide subject-based interdisciplinary education that solves various problems by utilizing AI and software. To do that, AI education in universities should focus on actively operating programs that find and support student-led activities, rather than on one-time, school-led AI and software programs [1]. To produce practical educational effects, basic AI and software education programs must be developed that strengthen the connections among learners’ different departments and majors in higher education, and analyzing learners by school is a prerequisite for this [17,18,19]. In addition, experts should be trained and related competencies enhanced not only through domestic curricula but also through international training programs, giving students a chance at interdisciplinary study by exploring the curricula and educational methods of other countries [14,20]. Since the 2000s, globalization has allowed universities to adopt new technologies and exchange information immediately. As a result, their communication has become more efficient, and university members have built more collaboration in research and education through international networks [21,22]. However, there are few cases in which international training programs and field experiences aimed at undergraduate or graduate students have been objectively and comparatively analyzed, and in which their actual conditions or characteristics have been determined and used to expand subsequent international training programs [21].
Blended learning describes learning that mixes various event-based activities, including face-to-face classrooms, real-time or live e-learning, and self-directed learning. NIIT categorizes blended learning models into skill-driven, attitude-driven, and competency-driven learning [23,24,25,26]. In particular, skill-driven blended learning is a useful method for developing specific knowledge and skills by combining self-directed learning with instructor support. This study involves the international training program for enhancing intelligent capabilities and applies blended learning as a mixture of real-time classes and face-to-face classes. Real-time classes feature facilitator interaction through discussion forums, instructor overviews, and feedback, while traditional face-to-face classes are interactive through instructor outlines, demonstrations, discussions, and feedback [23,24,25,26]. By applying these learning methods, we aim to confirm changes in computational thinking, AI competency, and core competencies.
The ADDIE model consists of five stages (analysis, design, development, implementation, and evaluation), and the stages are organically related. Because each step follows a detailed and specific logical order and includes processes important for successfully achieving the purpose, instructors should design classes by considering the interdependence among all elements. The five steps can be repeated during a program, so the model is cyclical and experiential, allowing effective decision making according to the situation based on various data. Above all, the ADDIE model is structured to meet diverse educational requirements in both traditional education and real-time online class environments, making it very useful for designing and evaluating learning experiences, courses, and educational content. Therefore, this study applies a blended learning design based on the ADDIE model to the international training program for enhancing intelligent capabilities and confirms its educational effect [27,28,29].
Critical thinking, communication skills, creativity, and collaboration are suggested as the competencies needed to train talented people who can respond to the changes of the fourth industrial revolution and as the human-resource qualities needed in hyper-connected smart industries. These competencies are key not only for the future society but also for individual majors, and they are considered highly important [23,24]. Accordingly, this study measures the effectiveness of the international training program for enhancing intelligent capabilities and provides fundamental material for the program’s expansion and improvement.
The details of the study objectives are as follows:
  • First, verify the effect of the international training program for enhancing intelligent capabilities on the subjects’ computational thinking.
  • Second, verify the effect of the international training program for enhancing intelligent capabilities on the subjects’ artificial intelligence competency.
  • Third, verify the effect of the international training program for enhancing intelligent capabilities on the subjects’ core competencies for the future society.
The research hypotheses of this study are as follows:
H1: 
The experimental group participating in the international training program for enhancing intelligent capabilities will score higher in computational thinking than the control group that did not participate.
H2: 
The experimental group participating in the international training program for enhancing intelligent capabilities will score higher in artificial intelligence competency than the control group that did not participate.
H3: 
The experimental group that participated in the international training program for enhancing intelligent capabilities will score higher in future society core competency scores than the control group that did not participate.

2. Materials and Methods

2.1. Study Design

This is a quasi-experimental study based on a nonequivalent control group design that examines the effects of the international training program for enhancing intelligent capabilities on computational thinking, AI competency, and core competencies for the future society.

2.2. Study Subjects and Sampling Method

The accessible population of this study consisted of graduate students participating in the Smart Information Communication Technology (SMART ICT) course for employees at a university situated in D city. The selected study subjects were those who understood the content of the questionnaire, could fill in the questionnaire directly or reply in writing, had never participated in an international training program similar to that in this study, had over 80% attendance, understood the purpose and procedure of this study, and wished to participate voluntarily. Subjects were excluded if they were absent from the program at least twice, were participating in a similar program at school, or gave insufficient questionnaire answers that could not be used as research data.
The sample size was calculated using G*Power 3.1.2 based on the effect size (f2 = 1.0), significance level (α = 0.05), and power (1 − β = 0.80). The required sample size for each group was determined to be 17 participants. In consideration of a possible dropout rate, 20 participants were assigned to each of the experimental and control groups. All participants successfully completed the program and responded to both pre- and post-surveys. As a result, data from 40 participants were used in the final analysis.
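For reference, the same a priori calculation can be reproduced outside G*Power. The short sketch below assumes the reported effect size corresponds to Cohen’s d = 1.0 for an independent two-group comparison and uses the statsmodels power module; it is an illustration, not the procedure used in the study.

```python
# Sketch of the a priori sample-size calculation, assuming the reported effect
# size corresponds to Cohen's d = 1.0 for an independent two-group t-test.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=1.0,          # assumed Cohen's d
    alpha=0.05,               # significance level
    power=0.80,               # 1 - beta
    ratio=1.0,                # equal group sizes
    alternative='two-sided',
)
print(math.ceil(n_per_group))  # -> 17 participants per group before attrition
```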

2.3. Study Tools

2.3.1. Computational Thinking (CT)

Computational thinking refers to procedural thinking for solving problems efficiently according to the fundamental concepts and principles of computers [25]. Computational thinking was measured with the tool developed by Hong et al. [10]. This tool consists of a total of 17 items, 4 sub-factors (decomposition, pattern recognition, abstraction, algorithm), and a 5-point Likert scale. A higher score indicates a higher level of computational thinking. In the study by Hong et al. [10], Cronbach’s α was 0.972, and the reliability of each sub-factor was decomposition, 0.928; pattern recognition, 0.902; abstraction, 0.890; and algorithm, 0.928. In this study, Cronbach’s α was 0.978, and the reliability of each sub-factor was decomposition, 0.937; pattern recognition, 0.919; abstraction, 0.921; and algorithm, 0.933.
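The scale reliabilities reported here and in Sections 2.3.2 and 2.3.3 are Cronbach’s α values. As a minimal illustration of how α is computed from item responses, the sketch below applies the standard formula to a small hypothetical response matrix; it does not use the study’s data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total)).
# The response matrix below is hypothetical and only illustrates the computation.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = respondents and columns = scale items."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the scale total
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-point Likert responses (6 respondents x 4 items)
responses = np.array([[4, 5, 4, 4],
                      [3, 3, 4, 3],
                      [5, 5, 5, 4],
                      [2, 3, 2, 3],
                      [4, 4, 5, 5],
                      [3, 4, 3, 3]])
print(round(cronbach_alpha(responses), 3))
```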

2.3.2. Artificial Intelligence (AI) Competency

AI competency refers to the ability to solve problems based on an understanding of artificial intelligence [15,16]. AI competency was measured with the tool developed by Yoo et al. [15]. This tool consists of a total of 17 items, 5 sub-factors (knowledge representation and reasoning, data understanding and learning, machine learning, deep learning, AI ethics), and a 5-point Likert scale. A higher score indicates a higher level of AI competency. In the study by Yoo et al. [15], Cronbach’s α was 0.960. In this study, Cronbach’s α was 0.977, and the reliability of each sub-factor was knowledge representation and reasoning, 0.901; data understanding and learning, 0.907; machine learning, 0.878; deep learning, 0.957; and AI ethics, 0.910.

2.3.3. Core Competencies

The core competencies for future talents refer to those identified by the Ministry of Education in the 4th Industrial Revolution Innovative Leading University Project: critical thinking, communication skills, creativity, and collaboration. In this study, these core competencies were measured with the Innovative Curriculum-based Core Competency Scale developed by Kwon [30], a tool for measuring the core competencies of future talents. This tool consists of a total of 19 items, 4 sub-factors (critical thinking, communication skills, creativity, collaboration), and a 7-point Likert scale. A higher score indicates higher levels of critical thinking, communication skills, creativity, and collaboration, which fall under the innovative curriculum-based core competencies. In the study by Kwon [30], Cronbach’s α was 0.85. In this study, Cronbach’s α was 0.976, and the reliability of each sub-factor was critical thinking, 0.942; communication skills, 0.899; creativity, 0.936; and collaboration, 0.916.

2.4. Study Intervention

The Design Model of the International Training Program for Enhancing Intelligent Capabilities

ICT, which stands for information and communication technology, encompasses information technology and telecommunication technology. It covers the software technology required to operate and manage information devices such as computers, media, and audiovisual equipment, as well as all methods of collecting, producing, processing, preserving, transmitting, and utilizing information with such technology [26]. To cultivate talents with problem-solving skills based on artificial intelligence and software, it is crucial to enhance their practical competencies. The international training program for enhancing intelligent capabilities provides an environment in which learners actively participate in practical work and can proactively solve problems. In this study, the instructional design of the international training program is based on the ADDIE model in order to maintain an organic relationship across stages and to provide a field-based program that reflects learners’ needs [27,28,29]. This instructional model consists of five stages: analysis, design, development, implementation, and evaluation (See Table 1).
  • Analysis stage of ADDIE model
  1. Curriculum
The Department of Smart ICT Convergence at this university aims to foster core talents leading the intelligence and innovation of the main regional industries. The department’s curriculum focuses on information and communication technologies in the smart ICT convergence field, such as wireless communication convergence, AI and big data, cyber security and XR, bio-digital, and intelligent robotics [31]. The international training program for enhancing intelligent capabilities in this department provides various opportunities to engage in joint R&D convergence projects with enterprises through connected activities with specialized global companies and overseas universities with excellent research capabilities. In this way, the program enhances the department’s core competencies: practice and challenge, creative convergence, and problem-solving ability.
  2. Learners’ characteristics
Participants in the Smart ICT Convergence Department at this university are employees working at local companies who are responsible for various tasks related to artificial intelligence, big data, drones, and more. Therefore, they already feel the necessity of IT convergence and have the basic knowledge and qualifications. Nevertheless, in order to perform R&D convergence projects with companies, it is necessary to explore ways to raise their levels of understanding, application, utilization, and evaluation of the latest technologies among the five main information and communication technologies mentioned earlier [32]. This international training program for enhancing intelligent capabilities encourages participants to freely select topics in a creative problem-solving way and solve problems through discussion [19]. Participants’ active engagement is essential in this program.
  3. Environment analysis
The process of creative problem solving can stimulate learners’ practice and challenge, creative convergence, and problem-solving ability [33]. Therefore, it should consist of three steps. The first step involves identifying problems and topics and conducting education based on skill levels or areas of interest. The second step includes classifying skill levels by major, providing pre-training, and conducting in-depth learning for creative problem solving. The third step is the evaluation phase, where participants can share their experiences through a creative problem-solving competition.
  • Design stage of ADDIE model
  1. Definition of learning objectives
This program aims to improve learners’ knowledge of the latest technologies in the ICT field, focusing on enhancing their understanding, application, utilization, and evaluation of these technologies; to help them find solutions to business problems; and to create new ideas by utilizing that knowledge.
  2. Teaching–learning process plan
This program was designed to apply the blended learning education approach reflecting both real-time online classes and face-to-face classes, and to be divided into general and advanced courses according to learners’ levels [24,25,32]. The real-time online classes have six sessions over one week, each of which lasts 90 min. The face-to-face classes have six sessions over nine days, each of which lasts 4–8 h. Real-time online classes include lectures and presentations from U.S. professors and experts in the ICT industry, along with discussions. The face-to-face classes comprise special lectures and presentations from U.S. professors and experts, group discussions, one-on-one feedback on technological challenges, field trips related to the major, cultural exploration, and individual project presentations for creative problem solving. The detailed teaching–learning process plan is shown in Figure 1.
  3. Learning environment plan
The real-time online classes of this program utilize Zoom Video Communication to create a real-time environment for interactive discussions among professors, experts in the industry, and learners. Learners are formed into small-sized groups. Individuals facing similar problems or topics are grouped together for effective presentations and discussions [24,25,32].
Of the face-to-face classes, the field trip component involves planning visits to various local community organizations, companies, universities, and exhibitions in the United States that have close relevance to information and communication technology. This allows participants to interact with local experts and explore equipment, facilities, organizational culture, and more. Safety plans for participants during field trips are also prepared [24,25,32]. For expert lectures, presentations, and discussion sessions, the classrooms are set up with structures conducive to two-way communication, such as the tables arranged for effective teaching, presentations, and discussions. Cultural exploration activities were planned to enhance the understanding of American culture.
  4. Assessment plan
As for the assessment, an individual evaluation was planned and implemented. The evaluation was made by professors and experts. The evaluation criteria included the application level of knowledge for overcoming technological challenges or solving research problems, problem-solving abilities, presentation skills, and other multifaceted aspects.
  • Development stage of ADDIE model
  1. Development of problems
The main education contents in this program include wireless communication convergence, AI and big data, cyber security and XR, bio-digital, and intelligent robotics [30]. This education program was developed together with two computer engineering professors and two industry experts in order to enhance its validity.
  2. Development of evaluation tool
The evaluation was based on a report. More specifically, the achievement (completeness) related to the resolution of technical difficulties and issues in line with the training objectives is evaluated. In addition, the quality of the report, presentation skills, and other relevant factors are evaluated. Individual assessments are conducted for each participant.
  • Implementation stage of ADDIE model
In this training program, the 1-week real-time online classes had 6 sessions, each of which lasted 90 min; the face-to-face classes took place in the United States over 9 days with 6 sessions, each of which lasted 4–8 h. The detailed class schedule for each day is shown in Appendix A. The real-time online classes covered such topics as ‘artificial intelligence, intelligent robots, and human-centered approaches in AI’. The face-to-face classes included ‘seminars on Data Science, Machine Learning, and AI, the understanding of U.S. patented technologies and IT trends in the US’, as well as general lectures and expert special lectures on ‘transformers, IT project management in the U.S., big data analysis’, and more. Field trips included visits to two U.S. universities related to ICT, system resource computer exhibition, WireBarley, and various local community institutions in the U.S. To enhance the understanding of U.S. culture, which is helpful for development in the ICT field, cultural exploration was incorporated into the program. The final session included individual project presentations, discussions, and an award ceremony to foster creative problem-solving skills.
  • Evaluation stage of ADDIE model
As for the evaluation in this program, individual assessments were carried out, involving both report writing and presentations. The evaluation of creative problem solving was conducted individually. It focused on the ability to overcome technological challenges and solve problems in line with the final objective of this global program. The evaluation of the ability to solve technological difficulties and problems was based on the feasibility of the content described in the first report and the verbal skills during the second presentation. The summary of main concepts was also included as an assessment criterion to check the understanding of the program’s objectives.

2.5. Statistical Analysis

The collected data were analyzed using SPSS/WIN 20.0. The normality assumption of the dependent variables was examined through the Kolmogorov–Smirnov (K-S) test, histograms, skewness, and kurtosis. The results indicated that the normality assumption was not met for the computational thinking score immediately after the intervention and the core competencies score in the 12th week of the intervention (K-S test D: 0.15, p < 0.05; 0.15, p < 0.05), whereas the other variables satisfied the normality assumption (K-S test D range: 0.07 to 0.13, p > 0.05). The skewness values for the three variables ranged from −0.82 to 0.06, and kurtosis values ranged from −0.31 to 1.23, all within the −2 to +2 range. Since most variables met the normality assumption, parametric statistics were used to analyze the study results. The general characteristics of the participants were analyzed using frequencies and percentages, and homogeneity was tested with the Chi-square test and the independent t-test. To verify the effects of the international training program for enhancing intelligent capabilities on the computational thinking, artificial intelligence competency, and core competencies of the experimental and control groups, a repeated-measures analysis of variance was conducted. In cases where the sphericity assumption was not met, the Greenhouse–Geisser epsilon correction was applied. Group comparisons at each point of time for changes in the dependent variables were analyzed with an independent samples t-test.
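The analysis above was carried out in SPSS. As a rough illustration of the same steps (normality screening followed by a group × time repeated-measures ANOVA with a Greenhouse–Geisser correction), the sketch below uses synthetic long-format data and assumes the Python libraries scipy and pingouin; it mirrors the logic only and is not the study’s actual analysis.

```python
# Illustrative analysis pipeline on synthetic data (not the study data):
# normality screening, then a mixed (group x time) repeated-measures ANOVA.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical long-format data: 40 subjects x 3 time points (pre, post, week12).
subject = np.repeat(np.arange(40), 3)
group = np.repeat(np.where(np.arange(40) < 20, "exp", "control"), 3)
time = np.tile(["pre", "post", "week12"], 40)
score = rng.normal(3.8, 0.6, 120) + np.where((group == "exp") & (time != "pre"), 0.5, 0.0)
df = pd.DataFrame({"subject": subject, "group": group, "time": time, "score": score})

# Normality screening, as in the study: K-S statistic plus skewness and kurtosis.
z = (df["score"] - df["score"].mean()) / df["score"].std(ddof=1)
d_stat, p_val = stats.kstest(z, "norm")
print(f"K-S D = {d_stat:.2f} (p = {p_val:.3f}), "
      f"skew = {stats.skew(df['score']):.2f}, kurtosis = {stats.kurtosis(df['score']):.2f}")

# Mixed ANOVA; pingouin reports Greenhouse-Geisser-corrected p-values for the
# within-subject factor when Mauchly's test indicates a sphericity violation.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group", correction="auto")
print(aov.round(3))
```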

2.6. Data Collection Method and Procedure

This study was approved as a research project (2023-002-001-07) by the Institutional Review Board at University C. Data were collected from July to October 2023. The study involved the international training program for enhancing the intelligent capabilities of SMART ICT graduate students. The study details, including the title, purpose, method, and procedure, were posted publicly on an online bulletin board to encourage the voluntary participation of graduate students.
Participants were assigned to either the experimental group (Group 1) or the control group (Group 2). During participant selection, it was announced that Group 1 would participate in both the international training program and the general graduate course of Smart Information Communication Technology (SMART ICT) for enhancing intelligent capabilities, while Group 2 would participate only in the general graduate SMART ICT course at the university.
The SMART ICT convergence major is a field-based master’s and doctoral degree program centered on employed people. In other words, it is a general smart information and communication technology (SMART ICT) graduate school course. The educational goal is to develop practical capabilities that can be used immediately by carrying out projects. The curriculum designed in the department focuses on information and communication technologies in the smart ICT convergence field. A total of 24 credits or more must be completed over 2 years, and the curriculum consists of an in-depth major course (12 credits or more) and a convergence project (12 credits or more).
This international training program for enhancing intelligent capabilities encourages participants to freely select topics in a creative problem-solving way and solve problems through discussion. The global program is designed based on the ADDIE model, and classes are operated in a blended manner. The program was designed to apply the blended learning education approach, reflecting both real-time online classes and face-to-face classes, and to be divided into general and advanced courses according to learners’ levels.
The experimental group participated in the global program during the vacation period after completing one semester of the SMART ICT graduate school course from March to June 2023. For one week, students studied in Korea through real-time online courses with professors and experts from the United States, and then immediately traveled to the United States to participate in a 9-day face-to-face course to strengthen their intelligence capabilities. The control group participated in one semester of the SMART ICT graduate school course from March to June 2023.
The participants in the experimental group, who expressed their voluntary participation, took part in the pre-survey. After their attendance in the real-time online classes (90 min, 6 sessions, 1 week) and the face-to-face classes (4–8 h, 6 sessions, 9 days) and in the 12th week of the international training program, they participated in the questionnaire surveys on computational thinking, artificial intelligence competency, and the innovative curriculum-based core competencies.
The control group participated in surveys only, three times (one pre-survey, two post-surveys) (See Figure 2). For Group 1 and Group 2, a uniform resource locator (URL) was posted on the online bulletin board three times so that they could respond to the surveys. The first page of the website, accessed voluntarily by participants, showed a description of this study and asked whether they agreed to join; the next page presented the questionnaire items. Participants who wanted to join the study voluntarily were asked to sign the agreement and respond to the questionnaire items on demographic information, computational thinking, artificial intelligence competency, and the innovative curriculum-based core competencies. The study participants received a gift as a token of appreciation.

3. Results

3.1. The Study Participants’ General Characteristics and Homogeneity Testing

The study participants’ gender, age groups, education levels, and major satisfaction were analyzed. As a result, the two groups had homogeneity (See Table 2). Regarding gender, the experimental group and the control group had a similar distribution, with a 30.0% female ratio and a 70.0% male ratio. In terms of age groups, the experimental group had a slightly higher proportion of participants in their 20s, accounting for 35.0%, and the control group had the majority of participants in their 30s, accounting for 55.0%. With regard to educational levels, in the experimental group, 85.0% held a bachelor’s degree, 5.0% (one participant) held a master’s degree, and 10.0% (two participants) held a doctoral degree. The control group had a similar distribution, with 90.0% holding a bachelor’s degree and 10.0% (two participants) holding a master’s degree. Regarding satisfaction with their majors, in both the experimental and control groups, over 80.0% expressed satisfaction, 20.0% had a moderate level of satisfaction, and no one was dissatisfied.
The dependent variables—computational thinking, artificial intelligence competency, and core competencies—were not significantly different between the two groups, and thus the groups were confirmed to be homogeneous (See Table 2).

3.2. Verification of the Effects of the International Training Program for Enhancing Intelligent Capabilities

3.2.1. Group Comparison at Each Point of Time for a Change in Computational Thinking

To verify the effects of the international training program for enhancing intelligent capabilities, a repeated measures analysis of variance was carried out to analyze the change in computational thinking of the experimental and control groups at each measurement point of time. The results are as follows (see Table 3). Because computational thinking failed to meet sphericity (W = 0.75, p < 0.005), the Greenhouse–Geisser epsilon correction (ε = 0.801) was applied. No significant effects were found for group, time point, or the group × time interaction (Table 3).
According to the analysis on each subfactor, the decomposition score of the experimental group right after the program increased by 0.29 ± 0.64, compared to the score before the program, whereas that of the control group decreased by −0.02 ± 0.56. The difference in the change was not statistically significant (t = 1.62, p = 0.113). However, the decomposition score of the experimental group in the 12th week of the program increased by 0.39 ± 0.61, compared to the pre-program score, whereas that of the control group decreased by −0.06 ± 0.64. The difference in the change was statistically significant (t = 2.27, p = 0.029). According to the independent samples t-test conducted for the group comparison at each point of time for the change, the experimental group had a greater change between the pre-program and in the 12th week of the program than the control group. The algorithm score of the experimental group right after the program increased by 0.35 ± 0.71, compared to the pre-program score, whereas the control group decreased by −0.11 ± 0.46. Therefore, the difference in the change was statistically significant (t = 2.40, p = 0.021). Additionally, the algorithm score of the experimental group in the 12th week of the program increased by 0.32 ± 0.58, compared to the pre-program score, whereas the control group decreased by −0.18 ± 0.75. Therefore, the difference in the change was statistically significant (t = 2.33, p = 0.025). According to the independent samples t-test conducted for the group comparison at each point of time for the change, the experimental group had a greater change between the pre-program and in the 12th week of the program than the control group.
The pattern recognition score of the experimental group right after the program increased by 0.26 ± 0.82, compared to the pre-program score, and the control group’s score increased by 0.05 ± 0.74. However, the difference in the change was not statistically significant (t = 0.85, p = 0.397). The pattern recognition score of the experimental group in the 12th week of the program increased by 0.28 ± 0.69, compared to the pre-program score, whereas the control group decreased by −0.08 ± 0.73. Nevertheless, the difference in the change was not statistically significant (t = 1.65, p = 0.107). The abstraction score of the experimental group right after the program increased by 0.17 ± 0.88, compared to the pre-program score, and the control group’s score increased by 0.11 ± 0.55. The difference in the change was not statistically significant (t = 0.265, p = 0.790). The abstraction score of the experimental group in the 12th week of the program increased by 0.22 ±0.67, compared to the pre-program score, whereas the control group showed almost no change with a score of 0.00 ± 0.70. The difference in the change was not statistically significant (t = 1.03, p = 0.308) (Table 3).
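The subfactor comparisons above contrast the two groups’ pre-to-post change scores with an independent samples t-test. A minimal sketch of that comparison is shown below; the arrays are hypothetical and serve only to illustrate the computation.

```python
# Change-score comparison: compute each participant's change from baseline and
# compare groups with an independent samples t-test. Hypothetical data only.
import numpy as np
from scipy import stats

exp_pre  = np.array([3.4, 3.9, 4.1, 3.6, 3.8])   # hypothetical baseline scores
exp_post = np.array([3.8, 4.2, 4.5, 4.0, 4.1])   # hypothetical 12th-week scores
ctl_pre  = np.array([3.5, 3.7, 4.0, 3.9, 3.6])
ctl_post = np.array([3.4, 3.8, 3.9, 3.8, 3.6])

exp_change = exp_post - exp_pre
ctl_change = ctl_post - ctl_pre

t_stat, p_val = stats.ttest_ind(exp_change, ctl_change)  # Student's t-test
print(f"experimental change = {exp_change.mean():.2f} ± {exp_change.std(ddof=1):.2f}, "
      f"control change = {ctl_change.mean():.2f} ± {ctl_change.std(ddof=1):.2f}, "
      f"t = {t_stat:.2f}, p = {p_val:.3f}")
```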

3.2.2. Group Comparison at Each Point of Time for a Change in AI Competency

AI competency satisfied sphericity (W = 0.88, p = 0.98), and significant main effects were observed for time point (F = 6.76, p = 0.002) and group (F = 9.77, p = 0.003). However, there was no significant group × time interaction (F = 2.77, p = 0.069). The AI competency score of the experimental group right after the program increased by 0.79 ± 0.88, compared to that before the program, and the control group’s score increased by 0.26 ± 0.71. The difference in the change was statistically significant (t = 2.11, p = 0.041). Additionally, the experimental group’s AI competency score in the 12th week of the program increased by 0.81 ± 0.68, compared to the pre-program score, and the control group’s score increased by 0.10 ± 1.36. The difference in the change was statistically significant (t = 2.06, p = 0.048). According to the independent samples t-test conducted for the group comparison at each point of time for the change, the experimental group had a greater change than the control group (Table 4).
According to the analysis of each subfactor, the deep learning score of the experimental group right after the program increased by 0.90 ± 1.10, compared to the pre-program score, and the control group’s score increased by 0.35 ± 1.05. The difference in the change was not statistically significant (t = 1.61, p = 0.115). The experimental group’s deep learning score in the 12th week of the program increased by 1.02 ± 10.93, compared to the pre-program score, and the control group’s score increased by 0.10 ± 1.49. The difference in the change was statistically significant (t = 2.35, p = 0.024), with the experimental group showing a greater change than the control group. According to the independent samples t-test conducted for the group comparison at each point of time for the change, the experimental group had a greater change than the control group in the 12th week of the program relative to before the program.
The knowledge inference score of the experimental group right after the program increased by 0.75 ± 1.10, compared to the pre-program score, and the control group’s score increased by 0.07 ± 1.15. However, the difference in the change was not statistically significant (t = 1.89, p = 0.066). The knowledge inference score of the experimental group in the 12th week of the program increased by 0.70 ± 1.03, compared to the pre-program score, and the control group’s score increased by 0.15 ± 1.28. However, the difference in the change was not statistically significant (t = 1.49, p = 0.144). The data comprehension learning score of the experimental group right after the program increased by 0.71 ± 0.89, compared to the pre-program score, and the control group’s score increased by 0.23 ± 0.64. The difference in the change was not statistically significant (t = 1.93, p = 0.061). Additionally, the experimental group’s data comprehension learning score in the 12th week of the program increased by 0.61 ± 0.63, compared to the pre-program score, whereas the control group’s score decreased by −0.07 ± 1.55. The difference in the change was not statistically significant (t = 1.83, p = 0.074). The experimental group’s machine learning score right after the program increased by 0.71 ± 0.94, compared to the pre-program score, and the control group’s score increased by 0.30 ± 0.90. The difference in the change was not statistically significant (t = 1.41, p = 0.166). The experimental group’s machine learning score in the 12th week of the program increased by 0.78 ± 0.67, compared to the pre-program score, and the control group’s score increased by 0.10 ± 1.49. The difference in the change was not statistically significant (t = 1.88, p = 0.068). The experimental group’s AI ethics score right after the program increased by 0.91 ± 1.03, compared to the pre-program score, and the control group’s score increased by 0.35 ± 0.89. The difference in the change was not statistically significant (t = 1.87, p = 0.069). The experimental group’s AI ethics score in the 12th week of the program increased by 0.93 ± 0.98, compared to the pre-program score, and the control group’s score increased by 0.26 ± 1.46. However, the difference in the change was not statistically significant (t = 1.68, p= 0.101) (Table 4).

3.2.3. Group Comparison at Each Point of Time for Changes in Core Competencies

Core competencies did not satisfy sphericity (W = 0.65, p < 0.005), so the Greenhouse–Geisser epsilon correction (ε = 0.744) was applied. A significant main effect was observed for time point (F = 7.04, p = 0.004), but there was no significant effect of group and no significant group × time interaction (Table 5).
According to the analysis of each subfactor, the experimental group’s critical thinking score right after the program increased by 0.85 ± 1.50, compared to the pre-program score, and the control group’s score increased by 0.12 ± 0.91. The difference in the change was not statistically significant (t = 1.83, p = 0.074). The experimental group’s critical thinking score in the 12th week of the program increased by 0.45 ± 1.05, but the control group’s score showed almost no change, at 0.00 ± 1.23. The difference in the change was not statistically significant (t = 1.23, p = 0.222). The experimental group’s communication score right after the program increased by 0.74 ± 1.31, compared to the pre-program score, and the control group’s score increased by 0.13 ± 0.98. The difference in the change was not statistically significant (t = 1.60, p = 0.117). In addition, the experimental group’s communication score in the 12th week of the program increased by 0.59 ± 0.98, compared to the pre-program score, but the control group’s score decreased by −0.16 ± 1.09. The difference in the change was statistically significant (t = 2.27, p = 0.029). The experimental group’s creativity score right after the program increased by 1.79 ± 1.51, compared to the pre-program score, and the control group’s score increased by 1.30 ± 0.98. The difference in the change was not statistically significant (t = 1.20, p = 0.237). The experimental group’s creativity score in the 12th week of the program increased by 0.26 ± 0.87, compared to the pre-program score, whereas the control group’s score decreased by −0.30 ± 1.36. The difference in the change was not statistically significant (t = 1.54, p = 0.130) (Table 5).

4. Discussion

This study aimed to provide foundational data for expanding and enhancing the ADDIE-model-based international training program for enhancing intelligent capabilities by applying the 2-week program to graduate students participating in the Smart Information Communication Technology (SMART ICT) course for employees and examining the changes in their computational thinking, AI competency, and core competencies.
The first hypothesis, “the experimental group participating in the international training program for enhancing intelligent capabilities will score higher in computational thinking than the control group that did not participate”, was rejected. The computational thinking of the experimental group participating in the international training program improved continuously, albeit to a minimal extent. In contrast, in the control group, all the scores for computational thinking were similar to the pre-scores or decreased. Although the experimental group’s computational thinking score increased continuously, the increase was much smaller than that in the AI competency score. The change was minimal because the program mainly focused on the AI field, and because the participant group, made up of graduate students and employees working in companies, was already equipped with fundamental computational thinking abilities. The sample size of this study was sufficient for a repeated measures analysis. Nevertheless, it is necessary to enlarge the sample and verify the results through repeated research.
The second hypothesis, “the experimental group participating in the international training program for enhancing intelligent capabilities will score higher in artificial intelligence competency than the control group that did not participate”, was supported. After the international training program was applied, the changes in the scores for AI competency were significantly different. The experimental group, which participated in the international training program, showed a continuous increase in the score for AI competency. In contrast, in the control group, all the scores for AI competency were similar to the pre-scores or decreased. This suggests that the international training program in this study had a particularly significant effect on AI competency. In the study by Yoo et al. [15], who applied a 30 h AI class to university students for 15 weeks, AI competency also improved. The Germo [33] study showed similar results. Although the class operation procedure in that study was different from that in this study, the result indicates that the systematic organization and operation of educational content contribute to program effectiveness. In this study, the international training program for enhancing intelligent capabilities was designed on the basis of the ADDIE model. It incorporates a blended learning approach with online and offline intensive education [27,34]. The real-time online classes consisted of six sessions over one week, each of which lasted 90 min. The face-to-face classes consisted of six sessions over nine days in the U.S., each of which lasted 4–8 h [35,36]. The program, which secured over 45 h of education, provided ample opportunity for knowledge acquisition and extension. In particular, this program enabled participants to deepen their online learning during offline sessions and divided the course into general and in-depth sections to offer tailored education based on learners’ levels [26,37]. All the special lectures and presentations on information and communication technology were case based. To enhance their practical skills, participants had field experiences in the ICT sector. Above all, presentations, small group discussions, and especially continuous one-on-one feedback were applied to help learners find solutions to technological challenges not only at the personal level but also at the corporate level. These diverse efforts are considered to have positively influenced the increase in the AI competency score and the persistence of that increase from the start of the program to the 12th week. The participants in this study were employees working in various positions related to AI or big data at local companies. They were required to make continuous improvements in knowledge and application abilities related to the latest technologies in their fields, rather than to obtain sporadic knowledge. However, the AI competency score of the participants in this study was similar to that (4.07) of university students in previous research [38]. Therefore, it is necessary to provide education in the ICT field continuously.
The third hypothesis, “the experimental group that participated in the international training program for enhancing intelligent capabilities will score higher in future society core competency scores than the control group that did not participate”, was rejected. The core competencies score rose until the point right after the intervention but then decreased, so in comparison with the control group it showed no significant change. In this program, discussions were included in classes in consideration of the participants’ improvements in core competencies, such as critical thinking, communication, creativity, and collaboration skills. In addition, participants were required to present individual projects to enhance creative problem-solving abilities [20,22,39]. One-on-one expert feedback was provided as much as possible to help them address any technological challenges. However, these results mirror those for computational thinking. It is interpreted that the participant group, made up of graduate students and employees working in companies, was already equipped with fundamental core competencies.
Computational thinking is the problem-solving process based on algorithms and logic. AI is the process of solving problems with data-based reasoning and learning. These two processes are not separate but very closely related, and have many things in common [39]. Previous research already confirmed the positive correlations between AI competency, computational thinking, and core competencies [15,40]. In the world we live in at present, a variety of social fields are interconnected through information infrastructure, and a diversity of intelligent activities are automated based on AI technology [32]. In short, the current society is an intelligence information society. The ideal qualifications for talents in such a society include not only expertise in various fields, but also computational thinking to view and solve problems from a future-oriented perspective, interpret rationally, and collect and utilize information with the use of ICT; AI knowledge; critical thinking, communication, and a collaborative attitude crucial for problem solving [1,22,25,31]. In other words, it is required to obtain AI knowledge, the knowledge of computer application, and expertise in various fields [25,38]. Therefore, in order to improve AI competency, it is crucial to include the educational contents in consideration of the expansion of computational thinking and operate the project-based education with practical outputs.
This study tried to design and operate the program systematically in order to give the participants an opportunity to reorganize their knowledge, thinking processes, and technological skills, considering that, as advanced learners, they already have substantial experience and knowledge. The results indicated that the international training program was effective in increasing the AI competency score, and that its effect lasted until the 12th week of the program. This study is meaningful in that the international training program for enhancing intelligent capabilities showed the potential to become a useful intervention program. Therefore, it is suggested that the international training program for enhancing intelligent capabilities can be applied to or utilized by undergraduate students and various age groups as well as graduate students. However, there are several limitations in interpreting the results. Firstly, the study was conducted in a single institution, and arbitrary allocation, rather than random sampling, was used to control the diffusion effect of the program. Secondly, although participants were chosen according to selection criteria and demographic homogeneity between the experimental and control groups was secured, it is impossible to completely rule out the possibility of confounding variables due to the lack of random allocation. Thirdly, the study is meaningful in that it verified the short-term effectiveness of the international training program, because its effects on AI competency were evaluated right after the program and in the 12th week of the program; however, its long-term effectiveness could not be determined. Therefore, in a follow-up study, it is necessary to examine the long-term effect through repeated research. Finally, in this study, the program, which could have been delivered a single time, was provided with real-time online classes (six sessions for one week, 60 min per session) and face-to-face classes (six sessions for nine days, 4–8 h per session), amounting to 30–45 h of education, equivalent to the hours for 2–3 credits. Future AI education should be learner-centric, continuous, and creative, rather than one-time education, and should emphasize sufficient education hours and systematic contents. Therefore, it is suggested that the international training program for enhancing intelligent capabilities in this study can be used as fundamental material when interdisciplinary education is attempted in various fields.

5. Conclusions

This study applied a 2-week, 45 h international training program for enhancing intelligence capabilities to 40 graduate students participating in the SMART ICT course. As a result, it was confirmed that the artificial intelligence competency scores of the experimental group remained improved from immediately after the program up to the 12th week, compared to the control group that did not participate in the program.
We attribute these results to three strategies. First, the international training program applied the ADDIE instructional model to the curriculum design of the program, which is suitable for increasing knowledge, skills, and application of the latest technologies in the ICT field. Second, customized training was applied to reflect the needs of participants identified during the analysis process of the ADDIE model design phase. Third, the blended learning method with real-time online classes and face-to-face classes was applied to enhance the effectiveness of the classes for sustainable education.
Therefore, the educational significance of this study is that it involves a blended learning program designed based on the ADDIE instructional model that provides a model of a circular education system rather than one-time education. It can also be used as a long-term research model to strengthen AI capabilities. In order to implement a long-term research model, we propose applying an educational program that can be linked to the international training program of this study and improving the quality of education by continuing the process of evaluating artificial intelligence capabilities to produce results.

Author Contributions

Methodology, E.-Y.O.; Software, E.-Y.O.; Data curation, Y.-H.A.; Writing—original draft, Y.-H.A.; Writing—review & editing, E.-Y.O.; Visualization, Y.-H.A.; Project administration, E.-Y.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (IITP-2024-RS-2022-00156334).

Institutional Review Board Statement

This study is a research project approved by the Institutional Review Board of Catholic Kkottongnae University of South Korea (2023-002-001-07 and date of approval 31 July 2023).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent was obtained from the participants to publish this paper.

Data Availability Statement

Data can be obtained by contacting the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Course operation procedures.
Categories / Contents
Real-time online classes
1st
International training program for improvement in intelligent capabilities’ orientation: Real-time online classes
Pre-test: Computational thinking, AI competency, core competence
Team building: team composition, team slogan, team member role, ground rule
Lecture class operation
-
Topic: AI trends in the US and case-based AI education
-
Presentation of technological challenges and sharing of issues
-
Tailored class after the division into general and advanced courses according to the difficulty level of technological challenges
-
Presentation and Q&A
2nd
~
3rd
Lecture class operation
Level 1: General course
-
U.S. intelligent robot research trends and case study (1)
-
Presentation and Q&A
Level 2: Advanced course
-
U.S. intelligent robot research trends and case study (1)
-
Presentation and Q&A
4th
Lecture class operation
Level 1: General course
-
AI application case based on the trend analysis of User Experience (UX) (step 1)
-
Presentation and group discussion
Level 2: Advanced course
-
AI application case based on the trend analysis of User Experience (step 1)
-
Presentation and group discussion
5th
Lecture class operation
-
AI-User-Centered Tech
-
Presentation and group discussion
6th
International training program for improvement in intelligent capabilities’ orientation: Face-to-face classes
Ceremony of international training program for improvement in intelligent capabilities
Face-to-face classes
1st
~
2nd
Start of face-to-face classes (arrival in USA): International training program for improvement in intelligent capabilities
Greeting from the president of university
Lecture class operation
-
Class related to the 4th real-time online classes
Relationship between core competencies (critical thinking, creativity, communication, and collaboration) and artificial intelligence
AI application case based on the trend analysis of User Experience (UX) (step 2)
-
Analysis on the needs and levels of learners in terms of class topics
-
Presentation and discussion
3rd
Lecture class operation
-
AI application and utilization case of Amazon (step 1)
-
Presentation, Q&A
Field trip class operation
-
Visit to: Montgomery County Department of Police
The case of AI-based digital forensics analysis and application case in the U.S. police
-
Demonstration, field experience, observation
Lecture class operation
-
Importance of digital security (case of the U.S. Administration)
-
Utilization of U.S. database and information security
-
Presentation and Q&A
4th
Lecture class operation
-
Data Science and Machine Learning: Graph Neural Network research and presentation
-
Presentation and discussion
-
One-on-one feedback on technological difficulties
Field trip class operation
-
Visit to: Amazon (step 2)
-
Field experience, observation
Field trip class operation
-
Local facility: Encryption Museum
-
WireBarley visit
-
Explanation by field expert, field experience, observation
5th
Lecture class operation
-
Special lecture class related to the 2nd and 3rd real-time online classes: U.S. intelligent robot research trends and case study
U.S. intelligent robot research trends and case study
-
Presentation and group discussion
-
One-on-one feedback on technological difficulties
Field trip class operation
-
System Source Computer Exhibits visit
-
WireBarley visit
-
Experiential learning, meeting with field experts, tours of equipment or facilities
-
Explanation by field expert
6th
Lecture class operation
-
The understanding of the cultural diversity of the U.S. (part I), English class
Lecture class operation
-
Topic: The understanding of the cultural diversity of the U.S. (part II)
Lecture class operation
Level 1: General course
-
Application case of transformers (step 1)
-
Presentation and discussion
-
One-on-one feedback on technological difficulties
-
IT Project Management of America (step 1)
-
Presentation and discussion
-
One-on-one feedback on technological difficulties
Level 2: Advanced course
-
Application case of transformers (step 2)
-
Presentation and discussion
-
One-on-one feedback on technological difficulties
-
IT Project Management of America (step 2)
-
Presentation and discussion
-
One-on-one feedback on technological difficulties
7th
Lecture class operation
-
Understanding of the U.S. Patent Technology and IT Trend in the U.S.
(application for U.S. intelligent property rights patent, management, effect)
-
Presentation and discussion
Creative problem-solving competition
-
Presentation and discussion
-
Awards
8th
~
9th
Culture experience of United States of America (3)
Post-test: Computational thinking, AI competency, core competence
Finishing of face-to-face classes (arrival in Republic of Korea)
12 weeks later
Follow-up test: Computational thinking, AI competency, core competence

References

  1. Lee, M.J.; Lee, K.S.; Ju, H.J.; Kim, Y.M. Educational program design for cultivating the basic competency of the intelligent information society: Computing thinking and AI Literacy. J. Gen. Educ. 2022, 20, 123–153. [Google Scholar] [CrossRef]
  2. Kong, H.; Yuan, Y.; Baruch, Y.; Bu, N.; Jiang, X.; Wang, K. Influences of artificial intelligence (AI) awareness on career competency and job burnout. Int. J. Contemp. Hosp. Manag. 2021, 33, 717–734. [Google Scholar] [CrossRef]
  3. Lee, S.H. Effect of software education based on computational thinking on the learning interest and career recognition of elementary school students. J. Elem. Educ. Stud. 2018, 25, 9–72. [Google Scholar]
  4. Wing, J.M. Computational thinking. Commun. ACM 2006, 49, 33–35. [Google Scholar] [CrossRef]
  5. Appio, F.P.; Broring, S.; Sick, N.; Lee, S.J.; Mora, L. Editorial deciphering convergence: Novel insights and future ideas on science technology and industry convergence. IEEE Trans. Eng. Manag. 2023, 70, 1389–1401. [Google Scholar] [CrossRef]
  6. Cansu, F.K.; Cansu, S.K. An overview of computational thinking. Int. J. Comput. Sci. Educ. Sch. 2019, 3, 17–30. [Google Scholar] [CrossRef]
  7. SW-Centered University Project Notice. Available online: https://www.iitp.kr/kr/1/business/businessNotify/view.it?ArticleIdx=676&count=true&page=1 (accessed on 28 November 2023).
  8. Lee, E. Perspectives and challenges of informatics education: Suggestions for the informatics curriculum revision. J. Korean Assoc. Comput. Educ. 2018, 21, 1–10. [Google Scholar]
  9. Jang, E.; Kim, J. Contents analysis of basic software education of non-majors students for problem solving ability improvement-focus on SW-oriented University in Korea. J. Internet Comput. Serv. 2019, 20, 81–90. [Google Scholar]
  10. Hong, S.Y.; Goo, E.H.; Shin, S.H.; Lee, T.K.; Seo, J.Y. Development the measurement tool on the software educational effectiveness for non-major undergraduate students. Korean Assoc. Comput. Educ. 2021, 24, 37–46. [Google Scholar]
  11. Pi, S.Y. A study on coding education of non-computer majors for IT convergence education. J. Digit. Converg. 2016, 14, 1–8. [Google Scholar] [CrossRef]
  12. Seo, J.Y.; Shin, S.H. A case study on the effectiveness of major-friendly contents in software education for the non-majors. J. Digit. Converg. 2020, 18, 55–63. [Google Scholar]
  13. The 19th Social Relations Ministers’ Meeting and the 1st People Investment Talent Training Council. Available online: https://if-blog.tistory.com/12712 (accessed on 28 November 2023).
  14. Jeon, I.S.; Jun, S.J.; Song, K.S. Teacher training program and analysis of teacher’s demands to strengthen artificial intelligence education. J. Korean Assoc. Inf. Educ. 2020, 24, 279–289. [Google Scholar]
  15. Yoo, S.J.; Baek, J.S.; Jang, Y.J. Analysis of the relationship between AI competency and computational thinking of AI liberal arts class students. Korean Assoc. Comput. Educ. 2022, 25, 15–26. [Google Scholar]
  16. McCarthy, J. From here to human-level AI. Artif. Intell. 2007, 171, 1174–1182. [Google Scholar] [CrossRef]
  17. Noh, J.Y. Analysis of the effectiveness of liberal SW education focused on developing computational thinking and creative problem solving ability. J. Ind. Converg. 2023, 21, 123–135. [Google Scholar]
  18. Kim, D.J.; Ha, E.Y. The future direction of information education in university according to computerization. J. Digit. Converg. 2015, 13, 33–40. [Google Scholar] [CrossRef]
  19. Benjamin, R. CLA (Collegiate Learning Assessment). New York. 2010. Available online: http://www.cae.org/cla (accessed on 28 November 2023).
  20. Berg, M. From globalization to global history. Hist. Workshop J. 2007, 64, 335–340. [Google Scholar] [CrossRef]
  21. Jeong, H.Y. An analysis of overseas field experience programs in teacher education institution focusing on strengthening global competencies of preservice teachers. J. Korean Teach. Educ. 2012, 29, 475–499. [Google Scholar]
  22. The Challenge of Establishing World-Class Universities. Available online: https://elibrary.worldbank.org/doi/abs/10.1596/978-0-8213-7865-6 (accessed on 28 November 2009).
  23. Valiathan, P. Blended learning models. Learn. Circuits 2002, 3, 50–59. [Google Scholar]
  24. Resien, C.; Sitompul, H.; Situmorang, J. The effect of blended learning strategy and creative thinking of students on the results of learning information and communication technology by controlling prior knowledge. Bp. Int. Res. Crit. Linguist. Educ. (BirLE) J. 2020, 3, 879–893. [Google Scholar] [CrossRef]
  25. Akkoyunlu, B.; Soylu, M.Y. A study on students’ views on blended learning environment. Turk. Online J. Distance Educ. 2006, 7, 43–56. [Google Scholar]
  26. Alshahrani, A. The impact of ChatGPT on blended learning: Current trends and future research directions. Int. J. Data Netw. Sci. 2023, 7, 2029–2040. [Google Scholar] [CrossRef]
  27. Spatioti, A.G.; Kazanidis, I.; Pange, J. A comparative study of the ADDIE instructional design model in distance education. Information 2022, 13, 402. [Google Scholar] [CrossRef]
  28. Morales González, B. Instructional design according to the ADDIE model in initial teacher training. Apertura 2022, 14, 80–95. [Google Scholar] [CrossRef]
  29. Rajapboyevna, X.Q.; Umarjonovna, Y.G.; Qizi, Y.D.U. The ADDIE Model. Gospod. Innow. 2022, 21, 262–263. [Google Scholar]
  30. Kwon, S.K. Key competence measurement development and validation based on innovative education curriculum: Focusing on engineering college. J. Educ. Cult. 2020, 26, 129–152. [Google Scholar]
  31. Bieser, J.C.; Hilty, L.M. Assessing indirect environmental effects of information and communication technology (ICT): A systematic literature review. Sustainability 2018, 10, 2662. [Google Scholar] [CrossRef]
  32. Alzahrani, M.G. The developments of ICT and the need for blended learning in Saudi Arabia. J. Educ. Pract. 2017, 8, 79–87. [Google Scholar]
  33. Germo, R.R. Blended learning approach in improving student’s academic performance in information communication, and technology (ICT). TransNav Int. J. Mar. Navig. Saf. Sea Transp. 2022, 16, 251–256. [Google Scholar] [CrossRef]
  34. Seels, B.B.; Richey, R.C. Instructional Technology: The Definition and Domain of the Field; Information Age Publishing: Washington, DC, USA, 1994. [Google Scholar]
  35. Mazhar, H.; Jam, M.; Mazhar, H. Effect of blended learning strategies on university students’ skill development. Pak. J. Educ. Res. 2023, 6, 263–278. [Google Scholar]
  36. Sofya, R.; Utami, P.R.; Ritonga, M. The effect of blended learning on student learning outcomes. In Proceedings of the Ninth Padang International Conference on Economics Education, Economics, Business and Management, Accounting and Entrepreneurship (PICEEBA 2022); Advances in Economics, Business and Management Research Series; Atlantis Press: Amsterdam, The Netherlands, 2023; pp. 286–295. [Google Scholar]
  37. Woo, N.C.; Hyo, L.M. The development of an instructional design model for blended learning-based extra curriculum in university education. J. Lifelong Learn. Soc. 2021, 17, 111–137. [Google Scholar]
  38. Choi, M.S.; Choi, B.J. A study on the PBL-based AI education for computational thinking. Korea Inst. Converg. Signal Process. 2021, 22, 110–115. [Google Scholar]
  39. Toffler, A. Power Shift: Knowledge, Wealth, and Power at the Edge of the 21st Century; Bantam Books: New York, NY, USA, 1963; Available online: https://books.google.co.kr/books?id=gEBQEAAAQBAJ/ (accessed on 28 November 2023).
  40. Celik, I. Exploring the determinants of artificial intelligence (AI) literacy: Digital divide, computational thinking, cognitive absorption. Telemat. Inform. 2023, 83, 102026. [Google Scholar] [CrossRef]
Figure 1. Teaching–learning process plan.
Figure 2. Flow of the study.
Table 1. The instructional model.
- Analysis: curriculum analysis, learner analysis, environment analysis
- Design: learning objectives, assessment tool design, media selection
- Development: teaching materials development, material production
- Implementation: implementation
- Evaluation: evaluation of training performance
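The phase-to-activity mapping in Table 1 can also be written down as a small data structure, for example to drive a simple course-design checklist. The snippet below is only an illustrative encoding of the table in Python; the dictionary and function names are ours, not the authors', and nothing about it is prescribed by the study.

```python
# Illustrative encoding of the ADDIE phases and activities listed in Table 1.
# The names ADDIE_MODEL and design_checklist are hypothetical, chosen for this sketch.
ADDIE_MODEL = {
    "Analysis": ["Curriculum analysis", "Learner analysis", "Environment analysis"],
    "Design": ["Learning objectives", "Assessment tool design", "Media selection"],
    "Development": ["Teaching materials development", "Material production"],
    "Implementation": ["Implementation"],
    "Evaluation": ["Evaluation of training performance"],
}

def design_checklist() -> None:
    """Print the phases in order with their activities as a simple planning checklist."""
    for phase, activities in ADDIE_MODEL.items():  # dicts preserve insertion order in Python 3.7+
        print(phase)
        for activity in activities:
            print(f"  [ ] {activity}")

if __name__ == "__main__":
    design_checklist()
```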
Table 2. The study participants’ general characteristics and homogeneity testing (N = 40).

Characteristics: Categories | Exp. (n = 20), n (%) or M ± SD | Cont. (n = 20), n (%) or M ± SD | χ2 or t | p
Gender: Male | 14 (70.0) | 14 (70.0) | 0.000 | 1.000
Gender: Female | 6 (30.0) | 6 (30.0) | |
Age (year): 20s | 7 (35.0) | 4 (20.0) | 3.77 | 0.287
Age (year): 30s | 6 (30.0) | 11 (55.0) | |
Age (year): 40s | 2 (10.0) | 3 (15.0) | |
Age (year): 50s | 5 (25.0) | 2 (10.0) | |
Education: Bachelor’s degree | 17 (85.0) | 18 (90.0) | 2.36 | 0.307
Education: Master’s degree | 1 (5.0) | 2 (10.0) | |
Education: Doctoral degree | 2 (10.0) | 0 (0.0) | |
Satisfaction with major: Satisfied | 17 (85.0) | 16 (80.0) | 0.17 | 0.677
Satisfaction with major: Moderate | 3 (15.0) | 4 (20.0) | |
Satisfaction with major: Dissatisfied | 0 (0.0) | 0 (0.0) | |
Computational thinking | 3.72 ± 0.32 | 3.91 ± 0.55 | −1.38 | 0.175
AI competency | 3.13 ± 0.50 | 3.01 ± 1.01 | 0.43 | 0.663
Core competencies | 5.23 ± 0.61 | 5.07 ± 0.91 | 0.62 | 0.534
Exp. = experimental group; Cont. = control group; AI = artificial intelligence; M = mean score; SD = standard deviation; χ2 = chi-square statistic; t = t-statistic; p = significance level; significance at p < 0.05.
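For readers who want to reproduce this style of baseline homogeneity check, the sketch below shows how a chi-square test for a categorical characteristic and an independent t-test for a baseline scale score could be computed with SciPy. It is a minimal illustration under stated assumptions: the gender counts are taken from Table 2, but the score vectors are randomly generated placeholders, since the raw data are available only on request from the corresponding author.

```python
# Minimal sketch of the homogeneity tests summarized in Table 2 (NumPy and SciPy assumed available).
import numpy as np
from scipy import stats

# Chi-square test on a categorical characteristic (gender counts taken from Table 2).
gender_counts = np.array([[14, 6],    # experimental group: male, female
                          [14, 6]])   # control group: male, female
chi2, p_gender, dof, expected = stats.chi2_contingency(gender_counts)
print(f"Gender: chi2 = {chi2:.3f}, p = {p_gender:.3f}")

# Independent t-test on a baseline scale score.
# These vectors are hypothetical placeholders, not the study data.
exp_pre = np.random.default_rng(0).normal(3.72, 0.32, 20)
cont_pre = np.random.default_rng(1).normal(3.91, 0.55, 20)
t, p_score = stats.ttest_ind(exp_pre, cont_pre)
print(f"Computational thinking (baseline): t = {t:.2f}, p = {p_score:.3f}")
```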
Table 3. Effects of the international training program for enhancing intelligent capabilities on computational thinking (N = 40). Values are M ± SD.

Computational thinking
- Exp.: Pre 3.72 ± 0.32; Post 3.99 ± 0.79; Follow-up 4.02 ± 0.55. Post–Pre 0.26 ± 0.69 (t = 1.38, p = 0.175); Follow-up–Pre 0.30 ± 0.54 (t = 2.12, p = 0.040)
- Cont.: Pre 3.91 ± 0.55; Post 3.92 ± 0.60; Follow-up 3.83 ± 0.75. Post–Pre 0.00 ± 0.48; Follow-up–Pre −0.08 ± 0.60
- Sources: G, F = 0.01, p = 0.902; T, F = 0.94, p = 0.376; G*T, F = 1.70, p = 0.195

Decomposition
- Exp.: Pre 3.78 ± 0.44; Post 4.07 ± 0.83; Follow-up 4.17 ± 0.46. Post–Pre 0.29 ± 0.64 (t = 1.62, p = 0.113); Follow-up–Pre 0.39 ± 0.61 (t = 2.27, p = 0.029)
- Cont.: Pre 3.92 ± 0.63; Post 3.90 ± 0.58; Follow-up 3.86 ± 0.77. Post–Pre −0.02 ± 0.56; Follow-up–Pre −0.06 ± 0.64
- Sources: G, F = 0.49, p = 0.487; T, F = 1.41, p = 0.249; G*T, F = 2.43, p = 0.095

Pattern recognition
- Exp.: Pre 3.71 ± 0.36; Post 3.97 ± 0.82; Follow-up 4.00 ± 0.72. Post–Pre 0.26 ± 0.82 (t = 0.85, p = 0.397); Follow-up–Pre 0.28 ± 0.69 (t = 1.65, p = 0.107)
- Cont.: Pre 3.95 ± 0.67; Post 4.00 ± 0.68; Follow-up 3.86 ± 0.82. Post–Pre 0.05 ± 0.74; Follow-up–Pre −0.08 ± 0.73
- Sources: G, F = 0.06, p = 0.794; T, F = 0.69, p = 0.478; G*T, F = 0.98, p = 0.367

Abstraction
- Exp.: Pre 3.66 ± 0.38; Post 3.83 ± 0.87; Follow-up 3.88 ± 0.76. Post–Pre 0.17 ± 0.88 (t = 0.26, p = 0.790); Follow-up–Pre 0.22 ± 0.67 (t = 1.03, p = 0.308)
- Cont.: Pre 3.82 ± 0.51; Post 3.93 ± 0.62; Follow-up 3.82 ± 0.72. Post–Pre 0.11 ± 0.55; Follow-up–Pre 0.00 ± 0.70
- Sources: G, F = 0.18, p = 0.673; T, F = 0.75, p = 0.477; G*T, F = 0.44, p = 0.643

Algorithm
- Exp.: Pre 3.73 ± 0.44; Post 4.08 ± 0.77; Follow-up 4.05 ± 0.47. Post–Pre 0.35 ± 0.71 (t = 2.40, p = 0.021); Follow-up–Pre 0.32 ± 0.58 (t = 2.33, p = 0.025)
- Cont.: Pre 3.98 ± 0.68; Post 3.87 ± 0.62; Follow-up 3.80 ± 0.83. Post–Pre −0.11 ± 0.46; Follow-up–Pre −0.18 ± 0.75
- Sources: G, F = 0.19, p = 0.664; T, F = 0.55, p = 0.547; G*T, F = 2.93, p = 0.069

Exp. = experimental group; Cont. = control group; M = mean score; SD = standard deviation; G = group; T = time; G*T = group by time; F = F-statistic; t = t-statistic; p = significance level; significance at p < 0.05.
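Tables 3–5 report, for each variable, the group (G), time (T), and group-by-time (G*T) effects together with t-tests on the change scores. As a rough guide to how such statistics are typically computed, the sketch below assumes a long-format data frame with hypothetical columns id, group, time, and score, and uses pingouin’s mixed ANOVA plus an independent t-test comparing the two groups’ post–pre change scores; it illustrates the general analysis pattern only and is not the authors’ analysis code.

```python
# Minimal sketch of a group x time analysis like the one summarized in Tables 3-5.
# Assumes a long-format DataFrame with hypothetical columns: id, group, time, score,
# where time takes the values "pre", "post", and "follow-up".
import pandas as pd
import pingouin as pg
from scipy import stats

def analyze(df: pd.DataFrame) -> None:
    # Mixed (split-plot) ANOVA: between-subject factor = group, within-subject factor = time.
    aov = pg.mixed_anova(data=df, dv="score", within="time",
                         subject="id", between="group")
    print(aov[["Source", "F", "p-unc"]])  # rows for the group, time, and interaction effects

    # Independent t-test on post - pre change scores, comparing the two groups.
    wide = df.pivot_table(index=["id", "group"], columns="time", values="score").reset_index()
    wide["change"] = wide["post"] - wide["pre"]
    exp_change = wide.loc[wide["group"] == "Exp", "change"]
    cont_change = wide.loc[wide["group"] == "Cont", "change"]
    t, p = stats.ttest_ind(exp_change, cont_change)
    print(f"Post-Pre change, Exp vs. Cont: t = {t:.2f}, p = {p:.3f}")
```

Follow-up–pre comparisons could be handled the same way, and within-group change over time could be examined with paired t-tests (scipy.stats.ttest_rel), if that matched the intended analysis.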
Table 4. Effects of the international training program for enhancing intelligent capabilities on AI competency (N = 40). Values are M ± SD.

AI competency
- Exp.: Pre 3.13 ± 0.50; Post 3.92 ± 0.84; Follow-up 3.94 ± 0.47. Post–Pre 0.79 ± 0.88 (t = 2.11, p = 0.041); Follow-up–Pre 0.81 ± 0.68 (t = 2.06, p = 0.048)
- Cont.: Pre 3.01 ± 1.01; Post 3.28 ± 0.87; Follow-up 3.12 ± 0.80. Post–Pre 0.26 ± 0.71; Follow-up–Pre 0.10 ± 1.36
- Sources: G, F = 9.77, p = 0.003; T, F = 6.76, p = 0.002; G*T, F = 2.77, p = 0.069

Knowledge inference
- Exp.: Pre 3.07 ± 0.56; Post 3.82 ± 1.01; Follow-up 3.77 ± 0.76. Post–Pre 0.75 ± 1.10 (t = 1.89, p = 0.066); Follow-up–Pre 0.70 ± 1.03 (t = 1.49, p = 0.144)
- Cont.: Pre 3.02 ± 1.08; Post 3.10 ± 1.08; Follow-up 3.17 ± 0.94. Post–Pre 0.07 ± 1.15; Follow-up–Pre 0.15 ± 1.28
- Sources: G, F = 5.42, p = 0.025; T, F = 3.25, p = 0.044; G*T, F = 1.79, p = 0.173

Data understanding and learning
- Exp.: Pre 3.43 ± 0.51; Post 4.15 ± 0.74; Follow-up 4.05 ± 0.50. Post–Pre 0.71 ± 0.89 (t = 1.93, p = 0.061); Follow-up–Pre 0.61 ± 0.63 (t = 1.83, p = 0.074)
- Cont.: Pre 3.36 ± 1.10; Post 3.60 ± 0.93; Follow-up 3.28 ± 0.90. Post–Pre 0.23 ± 0.64; Follow-up–Pre −0.07 ± 1.55
- Sources: G, F = 6.98, p = 0.012; T, F = 4.23, p = 0.025; G*T, F = 2.31, p = 0.116

Machine learning
- Exp.: Pre 3.18 ± 0.52; Post 3.90 ± 0.87; Follow-up 3.97 ± 0.42. Post–Pre 0.71 ± 0.94 (t = 1.41, p = 0.166); Follow-up–Pre 0.78 ± 0.67 (t = 1.88, p = 0.068)
- Cont.: Pre 2.98 ± 1.17; Post 3.28 ± 0.92; Follow-up 3.07 ± 0.84. Post–Pre 0.30 ± 0.90; Follow-up–Pre 0.08 ± 1.52
- Sources: G, F = 10.46, p = 0.003; T, F = 5.25, p = 0.007; G*T, F = 2.15, p = 0.123

Deep learning
- Exp.: Pre 2.93 ± 0.76; Post 3.83 ± 1.03; Follow-up 3.96 ± 0.53. Post–Pre 0.90 ± 1.10 (t = 1.61, p = 0.115); Follow-up–Pre 1.02 ± 0.93 (t = 2.35, p = 0.024)
- Cont.: Pre 2.78 ± 1.19; Post 3.13 ± 0.96; Follow-up 2.88 ± 0.84. Post–Pre 0.35 ± 1.05; Follow-up–Pre 0.10 ± 1.49
- Sources: G, F = 11.09, p = 0.002; T, F = 6.84, p = 0.002; G*T, F = 3.12, p = 0.050

AI ethics
- Exp.: Pre 3.01 ± 0.85; Post 3.93 ± 0.80; Follow-up 3.95 ± 0.62. Post–Pre 0.91 ± 1.03 (t = 1.87, p = 0.069); Follow-up–Pre 0.93 ± 0.98 (t = 1.68, p = 0.101)
- Cont.: Pre 2.93 ± 1.18; Post 3.28 ± 0.94; Follow-up 3.20 ± 0.88. Post–Pre 0.35 ± 0.87; Follow-up–Pre 0.26 ± 1.46
- Sources: G, F = 6.42, p = 0.016; T, F = 7.92, p = 0.001; G*T, F = 2.01, p = 0.140

AI = artificial intelligence; Exp. = experimental group; Cont. = control group; M = mean score; SD = standard deviation; G = group; T = time; G*T = group by time; F = F-statistic; t = t-statistic; p = significance level; significance at p < 0.05.
Table 5. Effects of the international training program for enhancing intelligent capabilities on core competencies (N = 40). Values are M ± SD.

Core competence
- Exp.: Pre 5.23 ± 0.61; Post 5.98 ± 1.13; Follow-up 5.52 ± 0.78. Post–Pre 0.74 ± 0.93 (t = 1.46, p = 0.151); Follow-up–Pre 0.28 ± 0.87 (t = 1.51, p = 0.138)
- Cont.: Pre 5.07 ± 0.91; Post 5.46 ± 1.09; Follow-up 4.89 ± 1.24. Post–Pre 0.38 ± 0.60; Follow-up–Pre −0.18 ± 1.09
- Sources: G, F = 3.11, p = 0.086; T, F = 7.04, p = 0.004; G*T, F = 1.11, p = 0.320

Critical thinking
- Exp.: Pre 4.53 ± 0.89; Post 5.38 ± 1.32; Follow-up 4.98 ± 0.96. Post–Pre 0.85 ± 1.50 (t = 1.83, p = 0.074); Follow-up–Pre 0.45 ± 1.05 (t = 1.23, p = 0.222)
- Cont.: Pre 4.60 ± 1.15; Post 4.72 ± 0.97; Follow-up 4.60 ± 1.12. Post–Pre 0.12 ± 0.91; Follow-up–Pre 0.00 ± 1.23
- Sources: G, F = 1.62, p = 0.210; T, F = 3.13, p = 0.049; G*T, F = 1.76, p = 0.178

Communication
- Exp.: Pre 4.86 ± 1.07; Post 5.60 ± 1.10; Follow-up 5.45 ± 0.99. Post–Pre 0.74 ± 1.31 (t = 1.60, p = 0.117); Follow-up–Pre 0.59 ± 0.98 (t = 2.27, p = 0.029)
- Cont.: Pre 4.69 ± 1.14; Post 4.83 ± 0.98; Follow-up 4.52 ± 1.11. Post–Pre 0.13 ± 0.98; Follow-up–Pre −0.16 ± 1.09
- Sources: G, F = 5.73, p = 0.022; T, F = 2.50, p = 0.089; G*T, F = 2.09, p = 0.131

Creativity
- Exp.: Pre 5.32 ± 0.81; Post 5.69 ± 1.11; Follow-up 5.58 ± 0.95. Post–Pre 1.79 ± 1.51 (t = 1.20, p = 0.237); Follow-up–Pre 0.26 ± 0.87 (t = 1.54, p = 0.130)
- Cont.: Pre 5.33 ± 0.97; Post 5.31 ± 1.07; Follow-up 5.03 ± 1.31. Post–Pre 1.30 ± 0.98; Follow-up–Pre −0.30 ± 1.36
- Sources: G, F = 1.54, p = 0.222; T, F = 0.61, p = 0.545; G*T, F = 1.09, p = 0.339

Collaboration
- Exp.: Pre 5.47 ± 0.73; Post 5.82 ± 1.01; Follow-up 6.06 ± 0.72. Post–Pre 0.35 ± 1.15 (t = 0.67, p = 0.503); Follow-up–Pre 0.59 ± 0.80 (t = 2.04, p = 0.048)
- Cont.: Pre 5.44 ± 1.07; Post 5.54 ± 1.30; Follow-up 5.28 ± 1.54. Post–Pre 0.10 ± 1.18; Follow-up–Pre −0.15 ± 1.43
- Sources: G, F = 1.97, p = 0.168; T, F = 0.79, p = 0.454; G*T, F = 1.78, p = 0.176

Exp. = experimental group; Cont. = control group; M = mean score; SD = standard deviation; G = group; T = time; G*T = group by time; F = F-statistic; t = t-statistic; p = significance level; significance at p < 0.05.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
