Article

Prediction of Training Cost and Difficulty for Aircraft-Type Transition Based on Similarity Assessment

1 Civil Aviation School, Northwestern Polytechnical University, Xi’an 710072, China
2 Shanghai Aircraft Design and Research Institute, Shanghai 200030, China
* Author to whom correspondence should be addressed.
Aerospace 2024, 11(2), 166; https://doi.org/10.3390/aerospace11020166
Submission received: 17 January 2024 / Revised: 12 February 2024 / Accepted: 13 February 2024 / Published: 17 February 2024

Abstract
As aviation technology advances, numerous new aircraft enter the market. These not only offer airlines technological and fuel efficiency advantages but also present the challenge of how to conduct pilots’ aircraft-type transition training efficiently and economically. To address this issue, this study designed a methodology to quantitatively assess the similarity in panel display control design and standard operating procedures (SOPs) between aircraft types. Then, by combining the results of a questionnaire survey on A320, A330, B737, and B777 transition training and training cost data, it was verified quantitatively that inter-aircraft similarity has a positive impact on reducing the difficulty and cost of transition training. Taking aircraft-type similarity as a feature, the KNN algorithm was used to construct a difficulty prediction model for aircraft-type transition training programs. To overcome the limitation of insufficient training cost data volume, this study adopted a transfer learning method to construct a prediction model of the transition training cost, and the resulting high prediction accuracy demonstrates the effectiveness of the approach. The research in this paper not only provides strong data support for the resource planning and cost management of airlines’ aircraft-type transition training but also provides new research perspectives and methodological guidance for the field of aviation training.

1. Introduction

With the progressive evolution of aviation technology, there is a consistent influx of innovative aircraft models into the market. These aircraft are increasingly preferred by airlines due to their superior technological advancements, enhanced fuel efficiency, and enriched passenger experience. The incorporation of these new models necessitates that pilots acquaint themselves with distinct operating protocols, system intricacies, and technical nuances. To expedite their integration into service, airlines frequently adopt aircraft-type transition training methodologies to prepare pilots for these new aircraft types. Furthermore, as airlines augment and rejuvenate their fleets, pilots encounter the imperative to operate a diverse array of aircraft, catering to the exigencies of various routes. After transition training, pilots can be flexibly allocated across different routes and aircraft, thereby improving the operational efficiency of airlines [1]. Consequently, aircraft-type transition training emerges as a topic of concern for aviation enterprises.
In the domain of airline operations, a pivotal concern is the optimization of pilot training for aircraft-type transition. This entails achieving minimal duration and cost without compromising flight safety. A key factor in this context is the degree of similarity (or dissimilarity) between aircraft types [2]. For instance, the Flight Standards Department of the Civil Aviation Administration of China (CAAC) [3] classified aircraft type differences into five grades, A–E, in their 2019 document titled “Reduced-Time Conversion Courses and Mixed-Fleet Flying”. The greater the differences between aircraft types, the more comprehensive the content required for transition training. Nonetheless, this document only provides classifications of differences for a limited number of aircraft types and does not offer an effective method for evaluating type differences. Notably, there is a dearth of international scholarly focus on the evaluation of aircraft-type differences and on the exploration of pilot transition training methodologies and associated costs. However, numerous academic endeavors have delved into factors influencing flight training, cost analyses, and enhancements in training techniques, providing a foundational base for our research.
Kreienkamp et al. [4] used the Myers–Briggs Type Indicator to gauge the personality difference scores between male trainees and instructors, as well as the training duration for student pilots. Their study revealed a correlation between personality difference scores and the training time of student pilots. In a distinct study, Dennis et al. [5] assessed the impacts of computer-based simulation on flight training. Their controlled experiment determined that PC-based flight simulation serves as a cost-effective and efficient supplement to traditional flight training. Using regression analysis, Polstra [6] discerned that students under the tutelage of senior instructors required less time to complete training compared with those trained by junior instructors. Notably, total flight time emerged as the most potent predictor of instructional efficacy. This research advocates for prioritizing flight instructors with extensive total flight time during recruitment. Li et al. [7] delved into factors influencing flight training efficiency for Chinese pilots. They identified pilot stress, training interaction, and the application of metacognitive strategies as paramount. Their conclusions underscored that while stress indirectly affected simulator training efficiency, training interaction directly influenced it as a dynamic process. The implementation of metacognitive strategies was found to optimize pilots’ cognitive resource coordination. Tatli et al. [8] explored the ramifications of meteorological conditions on flight training. Their data-driven investigation highlighted that adverse weather events augmented the workload of flight trainers, extended training durations, and intensified maintenance and repair activities, culminating in a significant surge in flight training expenses. Mostarac et al. [9] investigated the structure of flight training syllabi and their specific impact on the proactive planning of training operations for military aircraft pilots. Sharma and Carroll [10] proposed a hypothetical model to illustrate the direct and indirect relationships between certified flight instructors’ (CFIs’) personality traits, self-efficacy, risk perception, safety climate, and other factors and safe behavior and discussed their impact on flight training.
Orlansky and String [11] explored whether the acquisition cost of flight simulators is worth the flight training time savings and provided a preliminary cost model that identifies the data needed for cost estimates used in a cost–benefit analysis of flight training. Hoeft and Anderson [12] analyzed and evaluated the cost–benefit of four helicopter flight training programs to find the best option to train pilots for both the CH-60S and SH-60R models. Young and Fanjoy [13] focused on the significance of new flight technology instrumentation and the associated cost implications. They explored various methods and tools for advanced university flight training and proposed a targeted approach to study the potential for collaboration with aviation industry partners using high-cost instruments. Pope [14] compared the initial, fixed, and variable costs between Pilot Training-Next (PTN) and Undergraduate Pilot Training (UPT). The results show that PTN innovations have not only resulted in significant cost savings for the U.S. Air Force but that these savings also accrue annually. Ross and Gilbey [15] conducted a review and scope delineation of the current state of research on the use of extended reality simulations as a replacement for traditional flight simulators and aircraft. The conclusion drawn is that extended reality technology has the potential to be successfully used in flight training, saving time and money while also enhancing training efficacy.
Kardi [16], using a comparative analysis of training methodologies and objectives between Indonesian Air Force Undergraduate Pilot Training (IAF-UPT) and U.S. Navy Undergraduate Pilot Training (USN-UPT), advocated for the integration of audio–visual and video instruction in flight training. He further emphasized the role of simulator training and championed continuous innovation in foundational training. In a distinct study, Johnson et al. [17] pioneered scenario-based simulation training (SBST) to scrutinize pilots’ capabilities in threat and error management (TEM). Their findings revealed that SBST adeptly bridges existing voids in primary flight training, fostering enhanced simulation fidelity across all training tiers. McClernon et al. [18], using empirical investigations, discerned that incorporating stress management techniques during flight training augmented performance during genuine high-stress flight tasks, thereby highlighting the potential benefits of embedding a stress training module within the flight training curriculum. Lu et al. [19] introduced a multivariate subsequence double-window search (MSDW) algorithm predicated on Euclidean distance, aiming to ameliorate the challenges associated with the suboptimal completeness of flight training subject identification within flight training datasets. Furthermore, they propounded an incremental learning-centric predictive model tailored for flight training data [20]. This model, characterized by its impressive predictive accuracy and real-time performance, augments safety paradigms in flight training. Yusuf [21] developed a Fuzzy Cognitive Map model driven by human factors for susceptibility to startle, which was used to analyze the causality of startle in flight with the aim of enhancing future flight training paradigms. Li [22] proposed an evaluation method that combines fuzzy comprehensive evaluation with the Analytic Hierarchy Process (AHP) to assess the competency of pilots during flight training. The analysis of the evaluation results indicated that this method is an effective integration of subjective and objective approaches, aligning with the subjective scores given by instructors.
A thorough review of the extant literature reveals that the majority of academic endeavors predominantly revolve around the determinants, cost implications, and methodological enhancements of flight training. Notably, scant attention has been accorded to the niche domain of aircraft-type transition training. Existing discourse concerning the influence of aircraft-type similarity on the intricacy and financial aspects of pilot transition training largely remains confined to qualitative elucidations and anecdotal accounts, conspicuously devoid of rigorous quantitative scrutiny. Such a knowledge lacuna renders airlines bereft of a robust reference during the formulation of pilot transition training strategies. In addition, data-driven training cost prediction models in some specific training segments face the challenges posed by the shortage of data volume. In light of this, this paper first provides an in-depth analysis of the design of panel display controls and the characteristics of their standard operating procedures (SOPs) across different aircraft types and constructs a scientific similarity assessment methodology accordingly. Subsequently, this study quantitatively verifies the positive effect of inter-aircraft similarity on reducing the difficulty and cost of transition training based on the results of a questionnaire survey on transition training between A320, A330, B737, and B777. Further, utilizing the similarity in transition training programs as a feature, a model predicting the difficulty of aircraft-type transition training is established using the KNN algorithm. The model’s high predictive accuracy substantiates its effectiveness. Finally, to overcome the difficulty of the insufficient data volume of transition training costs, this study innovatively adopts the transfer learning method to construct a transition training cost prediction model by utilizing the knowledge learned from the training difficulty prediction model. The completely correct prediction results on six validation samples not only validate the feasibility of the method but also demonstrate its potential application in the field of aviation training. In the above research work, the following innovations are distilled:
  • A scientific and systematic methodology dedicated to the quantitative assessment of similarity between civil aircraft types is developed.
  • The quantitative analysis confirms the positive role of aircraft-type similarity in reducing the difficulty and cost of transition training.
  • Using the KNN algorithm, this study successfully constructs a highly accurate model for predicting the difficulty of transition training.
  • Facing the challenge of the insufficient data volume of transition training costs, this study innovatively adopts the transfer learning method to construct a reliable training cost prediction model, which further enhances the practical value of this study.

2. Quantitative Analysis of Aircraft-Type Similarity

2.1. Aircraft-Type Similarity Assessment Method

For pilots, as they move from one type of aircraft to another for training, it becomes clear that while many airplanes may be similar in appearance and basic structure, there are still many differences at the operational level [23]. These differences are usually manifested in two main areas: operating objects and operating procedures. Operating objects refer to the various instruments, levers, switches, and buttons on an airplane that are directly related to flight operations. Operating procedures, on the other hand, relate to the specific steps and processes that pilots need to follow during the various phases of flight, such as takeoff, cruise, and landing. While these processes may have commonalities across most airplanes, each model may have its own specific requirements and details. Therefore, the similarity between aircraft types will be evaluated in terms of both their display and control components and their flight operating procedures.
Within an aircraft’s cockpit, the display and control components of each system serve dual functions. They not only act as informational portals for pilots to discern the operational status and planning of respective systems but also function as interactive interfaces for pilots to convey control command information to the aircraft. Distinct types of display and control components possess individual manipulation characteristics and realize specific control functionalities. The design of these display controls is intrinsically linked to both system functionalities and human–computer interaction dynamics. Relevant data show that poor coordination between the design of control components and the human–machine relationship is one of the main causes of accidents [24]. However, a degree of similarity in display and control component design allows pilots in transition training to leverage their knowledge and proficiency from previous aircraft models. This similarity enhances the coordination in the human–machine relationship, substantially reducing the likelihood of such incidents.
In the domain of display control design, several pivotal elements are present, encompassing layout positioning, control types, symbolic representation, appearance, and color coding [25]. It is imperative to highlight that a control’s appearance is intrinsically intertwined with its specific type. For instance, button configurations typically adhere to a matrix design, while knobs predominantly assume a circular form. Furthermore, there is a prevalent consistency in color coding across various aircraft models; controls deemed of paramount importance are often denoted with more vivid and conspicuous hues. Thus, in our endeavor to assess the similarity in display controls within civil aircraft cockpits, our analysis is channeled toward three cardinal components: layout positioning, control types, and identifications.
The cockpit panel of a typical civilian aircraft is divided into several main areas including:
  • Instrument panel: the area where the pilot has the best line of sight, generally laying out some of the most frequently viewed display information.
  • Center console: the area most accessible to the pilot, generally laying out the most frequently operated or important control components.
  • FCU: the second most accessible area to the pilot after the center console, and generally lays out the more frequently operated controls and display components.
  • Overhead panel (top panel): the largest panel area; it generally lays out infrequently operated controls.
  • MCDU/CDU panel: located on either side of the center console or under the dashboard, it generally lays out the more frequently operated but less important control components.
  • Communication Panel: located on the rear side of the center console, this panel generally lays out the more frequently operated but less important control components.
First, the display and controls are divided into these six panel regions and then the similarity in the types and logos of the controls in each panel is evaluated.
Controls can be categorized by type as follows:
  • Push switch: An interactive control that the user can press to trigger some action or function.
  • Knob: A circular control that is rotated to adjust or select a specific value.
  • Toggle: Typically, an on/off design that allows the user to switch between two or more states.
  • Slide: Allows the user to select or adjust values within a range of values by sliding.
  • Joystick: Usually used for control in two or three dimensions, such as a flight stick.
  • Handwheel: Large knob, usually used for fine-tuning or manual control of a machine.
  • Pedal: A control operated by the foot.
Display pieces can be categorized by type as follows:
  • Indicator Light: A small electronic light that uses light to indicate a status or warning.
  • Dial: An instrument with markers and pointers for displaying the values of certain physical quantities.
  • Digital display: A display that shows specific values by means of numbers.
  • Display: An electronic screen that can display images, text, and video.
Each type of display control differs in terms of space required, visual effect, tactile effect, and ease of operation, and the comparison results are shown in Table 1 and Table 2.
An analysis of the data in Table 1 and Table 2 indicates that each display control can be categorized into five distinct levels based on the required space, visual effect, tactile effect, and operational convenience. Accordingly, the levels “poor (small)”, “slightly poor (slightly small)”, “average”, “slightly good (slightly large)”, and “good (large)” have been transformed into numerical scores of 0.2, 0.4, 0.6, 0.8, and 1, respectively. With this transformation, each operational method can be converted into a specific score sequence. Taking the button as an example, its score vector can be represented as [0.2, 0.2, 0.2, 0.8]. The disparity between any two operational methods can then be quantified by computing the Euclidean distance between their respective score sequences. Consequently, the similarity assessment value between various control types can be calculated as per Equation (1).
$$C_{type} = 1 - \frac{\sqrt{\sum_{k=1}^{4} \left( x_{ik} - x_{jk} \right)^{2}}}{4}$$ (1)
where $x_{ik}$ denotes the score value of the $k$-th attribute of the $i$-th object.
Utilizing Equation (1) in conjunction with the data presented in Table 1 and Table 2, the similarity assessment values between pairwise display control methods can be calculated, as illustrated in Table 3 and Table 4.
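To make the computation concrete, the following is a minimal Python sketch of Equation (1). The push-switch score vector matches the example given above; the knob vector and the function name are hypothetical placeholders rather than the actual entries of Table 1.

```python
import math

# Attribute order: required space, visual effect, tactile effect, ease of operation.
# Scores use the 0.2/0.4/0.6/0.8/1.0 mapping described above. The push-switch
# vector matches the example in the text; the knob vector is a hypothetical
# placeholder, not the actual entry of Table 1.
SCORES = {
    "push_switch": [0.2, 0.2, 0.2, 0.8],
    "knob":        [0.4, 0.4, 0.6, 0.6],
}

def type_similarity(a, b):
    """Similarity between two control types per Equation (1):
    one minus the Euclidean distance of their score vectors, scaled by 4."""
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1 - distance / 4

print(round(type_similarity(SCORES["push_switch"], SCORES["knob"]), 3))
```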
The identification of display controls plays a pivotal role, ensuring that pilots can swiftly and accurately discern the functionality of each control, subsequently eliciting the appropriate response. Identifications are typically represented by strings composed of vocabulary, abbreviations, or simple symbols. Assessing the similarity between two display control identifiers fundamentally entails contrasting the degree of similarity between two strings. Within the domain of natural language processing, the Levenshtein distance, also known as the Edit Distance, is a widely adopted method for gauging the similarity between two strings [26]. The Edit Distance quantifies the minimum number of editing operations required to transform one string into another [27]. The implementation steps are as follows:
(1)
Initialize a matrix of size $(m+1) \times (n+1)$, where $m$ and $n$ represent the lengths of the two strings, respectively.
(2)
Set the values of the first row from 0 to $n$ and the values of the first column from 0 to $m$. These indicate the number of edits required to transform an empty string into each prefix of the other string.
(3)
For each pair of characters from the two strings, calculate the corresponding matrix element according to the following rules:
  • If the two characters are identical, the value of the current cell is taken from the cell diagonally above and to the left.
  • If the characters differ, the value of the current cell is the minimum value from the cell above, to the left, or diagonally above and to the left, incremented by 1. This corresponds to an insertion, deletion, or substitution operation.
(4)
Once the matrix is fully populated, the value in the bottom-right corner represents the Levenshtein distance between the two strings.
(5)
Calculate the similarity between the two strings using Equation (2).
$$C_{mark} = 1 - \frac{lev}{\max\{m, n\}}$$ (2)
where $lev$ denotes the Levenshtein distance between the two strings.
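As an illustration, the following is a minimal Python sketch of the edit-distance procedure and Equation (2). The two identifier strings are hypothetical examples, not taken from any aircraft manual.

```python
def levenshtein(s, t):
    """Minimum number of insertions, deletions, and substitutions needed to
    turn string s into string t (dynamic programming over an
    (m + 1) x (n + 1) matrix)."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # first column: 0..m
    for j in range(n + 1):
        d[0][j] = j          # first row: 0..n
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
    return d[m][n]

def mark_similarity(s, t):
    """Identifier similarity per Equation (2)."""
    if not s and not t:
        return 1.0
    return 1 - levenshtein(s, t) / max(len(s), len(t))

# Hypothetical identifiers used only for illustration.
print(round(mark_similarity("ENG ANTI ICE", "ENG ANT ICE"), 2))
```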
In the practical process of assessing the similarity between display and control components of two aircraft types, there arises the challenge of how to pair-match the components from the two models. Our adopted approach is to match based on the functionality of the components. The functional similarity in two display and control components is categorized into three levels: high, medium, and low, with respective scores of 1, 0.66, and 0.33 assigned as weighting coefficients.
By synthesizing the similarity calculation methods for each design element of the components described above, the evaluation of the similarity in the display and control components in the cockpit of the two models is calculated as shown in Equation (3):
$$C_{Manipulate} = \frac{1}{6}\left( C_{Instrument} + C_{Control} + C_{Sunvisor} + C_{Overhead} + C_{MCDU/CDU} + C_{Communication} \right)$$ (3)
The similarity in each panel is calculated as shown in Equation (4):
$$C_{panel} = \frac{1}{N_{new\_aircraft}} \sum_{i=1}^{n} \alpha \left( \beta C_{i}^{type} + (1 - \beta) C_{i}^{mark} \right)$$ (4)
where $\alpha$ denotes the functional similarity coefficient; $\beta$ represents the weight coefficient (in this study, the type of a control and its identification are considered roughly equally important, so $\beta$ is set to 0.5); $C_{i}^{type}$ denotes the similarity in manipulation type of the $i$-th display and control component; $C_{i}^{mark}$ denotes the identification similarity of the $i$-th component; and $N_{new\_aircraft}$ represents the total number of display controls on that panel of the new aircraft type. It is worth noting that the panel similarity assessment between aircraft types might be influenced by the direction of transition training. This design consideration is grounded on the perspective that the difficulty of transition training varies with the direction of the transition.
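To illustrate how Equation (4) aggregates per-component results, the following is a minimal Python sketch. The matched-component list, its similarity values, and the component count are hypothetical placeholders rather than data from the study.

```python
# Each matched pair of display/control components carries a functional-similarity
# weight alpha (1, 0.66, or 0.33), a control-type similarity C_type, and an
# identification similarity C_mark. The entries below are hypothetical
# placeholders, not data from the paper.
matched_components = [
    {"alpha": 1.00, "c_type": 1.0, "c_mark": 0.92},
    {"alpha": 0.66, "c_type": 0.8, "c_mark": 0.50},
    {"alpha": 0.33, "c_type": 0.6, "c_mark": 0.40},
]

def panel_similarity(pairs, n_new_aircraft, beta=0.5):
    """Panel similarity per Equation (4): weighted sum over matched components,
    normalized by the display-control count of the new aircraft type's panel."""
    total = sum(p["alpha"] * (beta * p["c_type"] + (1 - beta) * p["c_mark"])
                for p in pairs)
    return total / n_new_aircraft

# Assume the corresponding panel of the new aircraft type has 4 components.
print(round(panel_similarity(matched_components, n_new_aircraft=4), 3))
```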
The flight maneuvering procedures of civil aircraft generally encompass standard operating procedures (SOPs), supplementary procedures, and non-normal emergency procedures. Notably, the most crucial and frequently used maneuvering procedure is the SOP, and our evaluation of maneuvering procedure similarity among aircraft models focuses on the SOP.
The standard operating procedures (SOPs) for each aircraft type are primarily divided into stages such as pre-flight preparation, taxiing, takeoff, climb, cruise, descent, approach, landing, and engine shutdown. Aircraft manufacturers customize a set of procedures for each flying phase, requiring pilots to execute the respective operations in sequence. Taking the A320 model as an example, in the pre-taxiing phase, pilots need to perform operations in the following sequence: selecting the engine start mode, activating the main engine power, monitoring engine parameters, shutting off the APU air supply, and activating the engine anti-ice switch as required, among others. It is important to note that the pre-flight preparation phase is an exception; during this phase, pilots are required to check the control statuses across various cockpit panels without a strict sequential order.
To streamline computation, a labeling mechanism for these operations has been introduced. The label assigned to each operation is based on the system its corresponding control belongs to. For example, the operation “main engine power” is labeled as “power system”, whereas the “APU air supply switch” falls under the “APU system”. Based on data collation, controls within the cockpit of civil aircraft are grouped into the following systems: air conditioning, automatic flight, communication, electrical, equipment, warning, flight controls, fuel, hydraulic, anti-ice/de-ice, indication/recording, landing gear, navigation, oxygen, APU, and power. As a result, the flight phase operations of any aircraft type can be depicted as a sequence of these 16 systems. With this foundation, the task of comparing flight operation procedures between aircraft models can be converted into an assessment of the similarity between their label sequences.
In the realm of biology, the comparison of the similarity between two gene segments typically uses the Needleman–Wunsch algorithm. This algorithm calculates the similarity in two sequences using the scoring of matches, mismatches, and gaps between sequence units [28]. Inspiration is drawn from this method, and its adaptation for computing the similarity between flight operation action sequences of two aircraft types is proposed. The implementation steps of this approach are as follows:
(1)
Matrix initialization: Create a matrix of size $(m+1) \times (n+1)$, where $m$ and $n$ are the lengths of the two sequences, respectively, and the values of the first row and column are usually initialized to the cumulative gap penalty values.
(2)
Filling the matrix: Use a predefined scoring scheme to fill the matrix (for the scheme in this paper, the match score is 1 and the mismatch penalty and gap penalty are 0). For each cell in the matrix, consider the following three possible scores:
(a)
From the value of the cell above plus the gap penalty.
(b)
From the value of the cell on the left plus the gap penalty.
(c)
From the value of the cell on the upper left plus the match or mismatch score.
The maximum of these three scores is selected as the current cell value.
(3)
Read the comparison score S from the lower right corner of the matrix.
(4)
Obtain the similarity by comparing the score S to the upper score limit, which, in this paper, is equal to the length of the SOP sequence of the new aircraft type, $N_{new}$, as in Equation (5).
$$C_{sop} = \frac{S}{N_{new}}$$ (5)
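The following is a minimal Python sketch of the adapted Needleman–Wunsch comparison and Equation (5), using the scoring scheme stated above (match score 1, mismatch and gap penalties 0). The two system-label sequences are short hypothetical examples, not actual SOP data.

```python
def alignment_score(seq_old, seq_new, match=1, mismatch=0, gap=0):
    """Needleman-Wunsch global alignment score with the scheme used in the
    text: match score 1, mismatch penalty 0, gap penalty 0."""
    m, n = len(seq_old), len(seq_new)
    f = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        f[i][0] = i * gap
    for j in range(n + 1):
        f[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = f[i - 1][j - 1] + (match if seq_old[i - 1] == seq_new[j - 1]
                                      else mismatch)
            f[i][j] = max(f[i - 1][j] + gap, f[i][j - 1] + gap, diag)
    return f[m][n]

def sop_similarity(seq_old, seq_new):
    """SOP similarity per Equation (5): alignment score divided by the length
    of the new aircraft type's SOP sequence."""
    return alignment_score(seq_old, seq_new) / len(seq_new)

# Short hypothetical system-label sequences for one flight phase (not real SOP data).
old_type = ["power", "APU", "fuel", "anti-ice", "automatic flight"]
new_type = ["power", "fuel", "anti-ice", "navigation", "automatic flight"]
print(round(sop_similarity(old_type, new_type), 2))
```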
Having accomplished the quantitative assessment of similarity in operational objects and procedural flow between two aircraft types during transition training, the similarity between the aircraft types can be derived from a weighted sum of the similarity evaluation values from the panel and the SOP. This is formally expressed as:
$$C = \frac{\varphi}{6} \sum_{i=1}^{6} C_{i}^{panel} + (1 - \varphi) C_{sop}$$ (6)
where $\varphi$ is the weight coefficient. This paper considers the panel similarity and the SOP similarity to be equally important in transition training, so $\varphi$ is set to 0.5.
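A minimal sketch of Equation (6) follows, combining hypothetical per-panel similarities with a hypothetical SOP similarity (none of the values below are results from the study).

```python
def overall_similarity(panel_similarities, c_sop, phi=0.5):
    """Overall inter-type similarity per Equation (6): a weighted combination
    of the mean panel similarity and the SOP similarity (phi = 0.5)."""
    mean_panel = sum(panel_similarities) / len(panel_similarities)
    return phi * mean_panel + (1 - phi) * c_sop

# Hypothetical per-panel similarities (instrument panel, center console, FCU,
# overhead panel, MCDU/CDU, communication panel) and a hypothetical SOP similarity.
print(round(overall_similarity([0.80, 0.75, 0.70, 0.65, 0.80, 0.85], 0.60), 3))
```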

2.2. Aircraft-Type Similarity Assessment Cases and Analysis

To validate the aircraft-type similarity quantification methodology proposed in this study, the similarity evaluation between Airbus models A320 and A330, and Boeing models B737NG and B777 was taken as case studies. By referencing the FCOM manuals of each aircraft type, comprehensive data on display controls and operational procedures were gathered. Using the assessment method delineated in Section 2.1, the similarity in controls across these aircraft models was evaluated. The results of this assessment are presented in Table 5.
As can be seen from the data in Table 5, in the comparison of types from the same manufacturer (in both cases a short- and medium-range narrow-body aircraft versus a long-range wide-body aircraft), the average panel similarity between Airbus’s A330 and A320 is very high, reaching 0.75, while the average panel similarity between Boeing’s B737 and B777 is only about 0.4. This indicates that Airbus retains a high degree of commonality in panel design across its models, while Boeing adopts more differentiated designs between the two types. In the comparison of models from different manufacturers, the average panel similarity between the A320 and B737NG, which are both narrow-body aircraft, is only about 0.3, providing further evidence of the different strategies and philosophies of the manufacturers in terms of aircraft design and cockpit layout. Taken together, the panel similarity between types from the same manufacturer is higher than that between models from different manufacturers, and the panel similarity between Airbus’s A330 and A320 is much higher than that between Boeing’s B737 and B777. This result is also consistent with the empirical perceptions of most practitioners, supporting the reliability of the present methodology for assessing inter-aircraft panel similarity.
The similarity in the SOP between the four types including Airbus’s A320 and A330 and Boeing’s B737NG and B777 is also evaluated according to the evaluation methodology in Section 2.1, and the results are shown in Table 6.
Based on the data in Table 6, it can be determined that the SOP similarity between Airbus’s A320 and A330 reaches an approximate level of 0.6, while the similarity between Boeing’s B737NG and B777 is about 0.3. This difference suggests that there is a significant difference in the philosophies adopted by the two manufacturers in the design of their new aircraft. Specifically, Airbus seems to focus more on consistency across its models, while Boeing may be more inclined to introduce technical performance innovations in its new models. In addition, the similarity in the SOP between A320 and B737NG from the two different manufacturers is approximately 0.5, which may reflect the fact that although each manufacturer has its own unique design philosophy, there are some commonalities at the operational level as both are short- and medium-haul narrow-body aircraft.

3. Difficulty and Cost Analysis of Aircraft-Type Transition Training

3.1. Analysis of the Difficulty of Transition Training

In the previous section, a quantitative assessment of the similarity between models was performed. The next challenge was the quantitative assessment of the difficulty of transition training between aircraft types. Given that the assessment of training difficulty involves the subjective experience of individuals, a questionnaire method was chosen to collect data on human perception of difficulty. Subsequently, these data were analyzed using mathematical and statistical methods to obtain quantitative indicators of training difficulty.
A questionnaire was designed for the six scenarios of A320 to A330, A330 to A320, B737 to B777, B777 to B737, B737 to A320, and A320 to B737, where the questions required choices to be made on how much effort was spent on the FCU panel, CDU/MCDU, center console, instrument panel, communication panel, top panel, and SOP training, respectively, in the course of transition training. The options were: A. none, B. very little, C. average, D. a lot, and E. very much.
Our questionnaire was administered to type instructors of a Chinese flight training company, which fits the needs of our data source well. While a comprehensive survey, i.e., a survey of each training session, would be preferable, it is difficult to achieve given the available resources and organizational conditions, so this paper adopts a sampling method. In order to ensure the reliability of the data, real-name and on-site collection methods were used to conduct questionnaire surveys with instructors of each type.
A total of 66 valid questionnaires were received, of which 15 were for A320 to A330 scenarios, 18 for A330 to A320 scenarios, 9 for B737 to A320 scenarios, 5 for B737 to B777 scenarios, 14 for B777 to B737 scenarios, and 5 for A320 to B737 scenarios. The statistics of the questionnaire results are shown in Table 7.
The training difficulty level was categorized into A, B, C, D, and E (in increasing order of difficulty) based on the answers collected to the question “How much effort is spent”. The most frequently selected option was taken as the training difficulty rating. If two options were selected with the same frequency, the tied option whose neighboring option received more votes was taken as the rating. For example, in the A320 to B737 training, in the learning difficulty questionnaire results for the communication panel, the probability of options B and C being selected was 40%, but the probability of B’s neighbor, option A, being selected was greater than the probability of C’s neighbor, option D. Therefore, it was determined that the learning difficulty of the communication panel in the A320 to B737 training was rated as B. The final ratings are shown in Table 8.
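The following is a minimal Python sketch of this rating rule (mode with a neighbor-based tie-break). The vote counts are hypothetical and only qualitatively resemble the A320 to B737 communication panel case described above.

```python
OPTIONS = ["A", "B", "C", "D", "E"]  # in increasing order of difficulty

def difficulty_rating(votes):
    """Most frequent option wins; on a tie, the tied option whose (non-tied)
    neighboring option collected more votes is chosen."""
    counts = [votes.get(opt, 0) for opt in OPTIONS]
    top = max(counts)
    tied = [i for i, c in enumerate(counts) if c == top]
    if len(tied) == 1:
        return OPTIONS[tied[0]]

    def neighbor_votes(i):
        neighbors = [j for j in (i - 1, i + 1)
                     if 0 <= j < len(OPTIONS) and j not in tied]
        return max((counts[j] for j in neighbors), default=0)

    return OPTIONS[max(tied, key=neighbor_votes)]

# Hypothetical vote counts: B and C tie, but B's neighbor A outpolls C's
# neighbor D, so the rating is B.
print(difficulty_rating({"A": 1, "B": 2, "C": 2, "D": 0, "E": 0}))
```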
Based on the analysis of the data in Table 8, the variability in training difficulty across panels is clearly visible. Among them, the overhead (top) panel presents the highest training difficulty, while the communication panel shows a relatively low learning burden. This can be attributed to the fact that, in the cockpit of civil aircraft, the overhead panel often serves as the most complex control panel, carrying the centralized controls of numerous systems. In contrast, communication panels are simpler and more intuitive, with a limited number of controls, making them less difficult to learn.
Among the different aircraft-type transition scenarios, transitions between the A320 and B737 present the most significant difficulties, especially on core panels such as the instrument panel, MCDU, and FCU. There may be significant differences in the operating logic and interface layout between the two models, so pilots may need to spend extra time and effort to adapt during the transition. In the case of the A320 and A330 transition, the learning difficulty is relatively low, thanks in part to Airbus’s strategy of consistency and standardization in product design.
Further comparing the difficulty of panel learning with SOP learning, it is found that SOP is generally more difficult to learn than a single-panel operation. This is mainly due to the fact that SOP consists of multiple steps that need to be executed in a specific order and timing. Pilots not only need to memorize these steps but also need to deeply understand the logic behind each step and its timing. In addition, SOP may involve multiple aircraft systems and panels, which naturally increases the difficulty by requiring more knowledge from the pilot.
Considering the context of the same pair of aircraft types but with different transition directions, differences in learning difficulty are also observed. As an example, the panel learning difficulty of A320 to A330 is 0.26 on average, while the difficulty of A330 to A320 drops to 0.18. This can be explained by the fact that the systems and functions of A330, which is a long-haul, wide-body aircraft, may surpass those of A320 in terms of complexity. Therefore, pilots transferring from A320 to A330 will need to adapt to a greater number of new functions and systems during the learning process.
As shown in Figure 1, a clear trend can be observed: as the difficulty of transition training increases, the similarity between aircraft types in terms of training objects decreases markedly. This finding not only visualizes the relationship between inter-aircraft differences and training difficulty but also further confirms, through quantitative assessment, the important role of inter-aircraft similarity in streamlining the transition training process. Therefore, this result not only provides an in-depth understanding of the challenges faced by pilots during transition training but also highlights the importance of considering inter-aircraft similarity when designing training programs in order to develop refined transition training courses and improve training efficiency.

3.2. Analysis of the Cost of Transition Training

Transition training for pilots primarily encompasses two segments: ground theory training and simulator training. Typically, professional training institutions plan the duration of a pilot’s study in these two areas based on relevant regulations and the characteristics of each aircraft type. Generally, the per-unit-time cost of ground theory training does not show significant variation across different transition training programs. However, there is a noticeable discrepancy in the per-unit-time cost of simulator training, mainly influenced by the scale and model of the simulator’s construction. In this study, it was observed that the four aircraft models focused on—A320, A330, B737, and B777—do not present significant differences in the procurement costs of simulators. From this, it can be inferred that the cost of transition training for these models is primarily driven by the differences in aircraft types. Detailed transition training cost data (2020 data) was obtained from a flight training company in China, as shown in Table 9.
According to the data in Table 9, the training cost for a single pilot’s transition from A320 to A330 is significantly lower than in the other scenarios, totaling only RMB 65,800, compared with RMB 268,200 for a single-person transition from B737 to B777. This observation further confirms that the similarity between aircraft types largely influences the cost of transition training. It is worth noting that the single-person transition training cost from B777 to B737 and the single-person transition training cost between A320 and B737 both amounted to RMB 237,800. This may imply that the training providers regarded the differences among the three aircraft types as being at the same level; accordingly, in developing the training course program, the agency adopted a uniform and standardized program. Considering that training costs are affected by annual price fluctuations and airlines’ standardized training programs, it is more practical to classify training costs into grades. Based on the recommendations of relevant professionals, this study grades the 2020 training costs according to the following intervals: [0, 5), [5, 15), [15, 25), [25, 35), and [35, ∞), which correspond to the five grades A, B, C, D, and E, respectively. The cost grades for the six transition training scenarios are shown in Table 9.
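For reference, the grading step can be sketched as below. Matching the interval boundaries to the magnitude of the quoted costs (e.g., RMB 65,800 corresponding to 6.58) suggests that the intervals are expressed in units of RMB 10,000; this unit is an assumption inferred from the data, not an explicit statement in the text.

```python
def cost_grade(cost_rmb):
    """Map a single-person transition training cost to grade A-E, assuming the
    interval boundaries [0, 5), [5, 15), [15, 25), [25, 35), [35, inf) are
    expressed in units of RMB 10,000 (an assumption inferred from the data)."""
    cost_wan = cost_rmb / 10_000
    for upper, grade in [(5, "A"), (15, "B"), (25, "C"), (35, "D")]:
        if cost_wan < upper:
            return grade
    return "E"

# Example costs quoted in the text (Table 9).
print(cost_grade(65_800))    # A320 -> A330
print(cost_grade(268_200))   # B737 -> B777
```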
Equation (6) was used to calculate the assessed values of similarity between the different models. These values were combined with the data on the cost of transition training provided in Table 9 to plot the relationship between similarity and the cost of transition training, as shown in Figure 2.
According to the demonstration in Figure 2, the single-training cost of transition training shows a clear decreasing trend with increasing type similarity, providing quantitative evidence for the positive effect of aircraft type similarity on reducing training costs for transition training. These findings and research efforts can provide valuable data support for airlines and training organizations when developing transition training strategies and cost control measures.

4. Predictive Modeling of the Difficulty and Cost of Transition Training

4.1. Predictive Modeling of Transition Training Difficulty

In the aviation industry, the process of transitioning a pilot from one aircraft type to another is challenging. This transition not only requires pilots to master the technical characteristics and operational requirements of the new aircraft type but also to adapt to different flight environments and emergency response mechanisms. Therefore, an accurate training difficulty prediction model can provide important guidance to pilots in the transition and reduce the risks associated with unfamiliarity with the new aircraft type. In addition, for airlines, effective training resource allocation and cost control are key to improving competitiveness. Using predictive modeling, companies can arrange training courses and simulator resources more rationally, thus improving training efficiency and reducing unnecessary expenses.
In this paper, it is verified that the similarity in training programs between models is an important factor affecting the difficulty of training. Therefore, a prediction model for transition training difficulty based on the similarity between models is built. As mentioned in the previous section, the main content of transition training is difference training on the operation panels and difference training on the operating procedures. The following prediction of training difficulty also focuses on these two aspects.
In the previous section, the panel similarity between models was quantified as a value of 0–1, and the training difficulty was divided into five grades, A–E. Therefore, a prediction model for the panel training difficulty of transition aircraft types was established, i.e., a classification model of panel training difficulty according to panel similarity. Given the limitations in sample size and the number of features, machine learning and deep learning algorithms such as the convolutional neural network (CNN) and support vector machine (SVM) are prone to overfitting. After exploring a variety of traditional classification methods, including nonlinear regression, random forest, and decision trees, it was found that the best performance was demonstrated by the K-nearest neighbor (KNN) algorithm. Therefore, this study ultimately selected the KNN algorithm to construct a predictive model for the difficulty of aircraft-type transition training.
The KNN algorithm, as an instance-based learning method, is mainly applied to classification and regression tasks. The core of this algorithm is based on the intuitive assumption that similar data points tend to be close to each other [29]. It defines “proximity” by measuring the distance between data points, thus identifying the K closest neighbors to the new data point. The categories of these neighbors are then used to predict the category of the new data point using a majority voting mechanism, i.e., the new data point is classified into the most common category among its K nearest neighbors. The steps in the application of the algorithm are as follows:
  • Select the value of K: Determine the value of K. The choice of the value of K affects the results of the algorithm; too small a value of K may lead to overfitting of the model and too large a value may lead to overgeneralization. Often, it is necessary to choose the best K value using cross-validation [30].
  • Calculate the distance: For each test data point, calculate the distance between it and all the training data points. Common distance metrics include Euclidean distance, Manhattan distance, and Chebyshev distance. In this paper, Euclidean distance (Equation (1)) is used to build the model.
  • Find the nearest K neighbors: For each test data point, find the nearest K training data points.
  • Voting decision: Determine which category among these K neighbors has the most, and then categorize the test data points into this category.
  • Output the prediction result: Use step 4 to determine the predicted categorization of the new data point.
The similarity assessment values between panels in Table 5 are taken as the sample features, and the assessment results of panel transition training difficulty in Table 8 are taken as the classification results. A total of 36 sample sets were divided into a training set and a test set at a ratio of 8:2 to build the panel training difficulty prediction model. Based on cross-validation (Figure 3), the optimal K value is 3, and the prediction effect of the established model is shown in Figure 4.
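A minimal sketch of this modeling step is shown below, assuming scikit-learn and a synthetic one-dimensional dataset in place of the 36 samples drawn from Tables 5 and 8; the feature values and labels are placeholders, not the study’s data.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic placeholder data standing in for the 36 samples built from
# Tables 5 and 8: a one-dimensional panel-similarity feature and A-E
# difficulty labels encoded as integers 0-4.
X = np.linspace(0.1, 0.9, 36).reshape(-1, 1)
y = np.digitize(X[:, 0], bins=[0.26, 0.42, 0.58, 0.74])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Select K by cross-validation on the training set (the paper reports K = 3
# as optimal for its data).
cv_scores = {k: cross_val_score(KNeighborsClassifier(n_neighbors=k),
                                X_train, y_train, cv=3).mean()
             for k in range(1, 10)}
best_k = max(cv_scores, key=cv_scores.get)

model = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print(best_k, model.score(X_train, y_train), model.score(X_test, y_test))
```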
The panel transition training difficulty prediction model built with the KNN algorithm achieves a prediction accuracy of 90% and 86% on the training set and test set, respectively. The high accuracy on the training set indicates that the model can effectively learn and recognize the panel differences between aircraft types to predict the training difficulty, while the comparable accuracy on the test set further validates that the model has high reliability and generalization ability. Overall, this model can be used as an effective tool for predicting panel transition training difficulty.
When the model for predicting the transition training difficulty of the SOP was being built, the problem of insufficient sample data (only six samples) was encountered. The calculation of the SOP similarity between models is completely different from the panel similarity calculation method, and this difference means that the model originally used to predict panel training difficulty cannot be directly applied to predict SOP transition training difficulty. However, considering the consistency in the effect of the similarity in training objects on transition training difficulty, a solution is proposed: utilizing a transfer learning approach. The core of transfer learning lies in the reuse of existing knowledge [31], which can originate from models trained on similar tasks or from tasks in different domains but with transferable characteristics. Especially when facing the challenge of data scarcity, transfer learning shows its unique advantages [32]. By transferring knowledge from existing models, effective learning on the target task can be achieved with less data.
In the transfer learning model, Professor Yang Qiang [33] proposed the Marginal Distribution Adaptation (MDA) method, whose goal is to reduce the distance between the marginal probability distributions of the source domain $X_s$ and the target domain $X_t$, so as to accomplish transfer learning. The method assumes that there exists a feature mapping $\Phi$ such that the mapped data distributions satisfy $P(\Phi(X_s)) \approx P(\Phi(X_t))$, and further assumes that if the marginal distributions are close, the conditional distributions of the two domains will be close as well, i.e., $P(y_s \mid \Phi(X_s)) \approx P(y_t \mid \Phi(X_t))$.
The panel transition training data are used as the source domain and the SOP transition training data as the target domain. Since the features are one-dimensional, a linear kernel mapping $\Phi$ needs to be found so that the feature distributions of the two domains are close to each other. The Anderson–Darling method, which is applicable to small samples, is used to test the normality of the source domain features. The calculated statistic is 0.71, which is smaller than the critical value of 0.721 at the 5% significance level. Therefore, it can be assumed that the source domain features obey the normal distribution $X_s \sim N(0.478, 0.0546)$. Then, feature scaling is used to find the linear kernel mapping function $y = 1.997x - 0.487$ for the target domain features so that the target domain and source domain features are approximately identically distributed (as shown in Figure 5). Finally, the transformed target domain features are substituted into the panel transition training difficulty prediction model to predict the SOP transition training difficulty, and the prediction results are shown in Figure 6.
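The following is a minimal sketch of this adaptation step, assuming NumPy and SciPy and using placeholder source- and target-domain feature arrays; the fitted coefficients therefore will not reproduce the reported mapping y = 1.997x - 0.487 exactly.

```python
import numpy as np
from scipy import stats

# Placeholder feature arrays: panel similarities (source domain) and SOP
# similarities (target domain); the study uses the data behind Tables 5 and 6.
source = np.array([0.55, 0.42, 0.31, 0.68, 0.47, 0.50, 0.39, 0.61])
target = np.array([0.60, 0.58, 0.33, 0.29, 0.49, 0.51])

# Anderson-Darling normality check on the source-domain features (the paper
# reports a statistic of 0.71 against a 5% critical value of 0.721).
ad = stats.anderson(source, dist="norm")
print("A-D statistic:", round(ad.statistic, 3))

# Linear kernel mapping y = a*x + b that rescales the target features to the
# source mean and standard deviation (marginal distribution matching).
a = source.std(ddof=1) / target.std(ddof=1)
b = source.mean() - a * target.mean()
target_mapped = a * target + b
print("mapping: y = %.3f x %+.3f" % (a, b))

# target_mapped can now be fed to the KNN model trained on the source domain,
# e.g. difficulty_model.predict(target_mapped.reshape(-1, 1)).
```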
As can be seen in Figure 6, the application of the transfer learning method to the prediction of the transition training difficulty of the civilian aircraft SOP shows remarkable results, with a prediction accuracy of 83%. The prediction of the training difficulty of five out of the six samples is completely correct. Only one sample was mispredicted: a program rated as difficulty level B in the questionnaire was predicted as level C. This may be because there are too few characterizing factors, which prevented the model from fully capturing all the key factors that affect training difficulty. Overall, however, this result fully demonstrates the effectiveness and feasibility of the transfer learning model for SOP training difficulty prediction. The model successfully utilizes knowledge gained in a related task and applies it to a new specific task, resulting in highly accurate predictions on a limited dataset.

4.2. Predictive Modeling of Transition Training Cost

With the rapid development of aviation technology and the continued expansion of route networks, pilots need frequent training to change types of aircraft in order to adapt to different types of aircraft. However, such training is usually accompanied by significant costs, including direct financial overhead and indirect time costs. Therefore, the establishment of an effective model for predicting the cost of transition training is of great significance for airlines in planning their budgets, optimizing resource allocation, and improving training efficiency.
The previous section quantitatively verified that inter-aircraft similarity is a key factor affecting the transition training cost, so the training cost can be predicted based on the evaluated value of inter-aircraft similarity. However, the problem of insufficient sample data was also encountered. Drawing on the successful experience of the SOP training difficulty prediction model, the transfer learning model is used to establish a training cost prediction model. The panel similarity data is taken as the source domain and the whole aircraft type similarity data as the target domain, and the knowledge learned from the panel transition training difficulty prediction model is used to predict the transition training cost.
Feature scaling is used to find the linear kernel mapping function $y = 1.5792x - 0.2807$ for the aircraft-type similarity feature so that it is close to the distribution of the feature data in the source domain. Then, substituting the transformed target domain features into the panel transition training difficulty prediction model predicts the transition training cost level, and the prediction results are shown in Figure 7.
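The same rescaling can be reused for the cost model; a minimal sketch follows, with hypothetical whole-aircraft similarity values (the mapping coefficients are those reported above, and difficulty_model stands for the previously trained KNN classifier).

```python
# Hypothetical whole-aircraft similarity values (Equation (6) outputs) for six
# transition scenarios; not the actual values computed in the study.
aircraft_similarity = [0.68, 0.62, 0.35, 0.33, 0.30, 0.31]

# Linear kernel mapping reported in the text for the cost model.
mapped = [1.5792 * x - 0.2807 for x in aircraft_similarity]
print([round(v, 3) for v in mapped])

# The rescaled features would then be passed to the previously trained
# panel-difficulty KNN model to obtain a predicted cost grade per scenario,
# e.g. cost_grades = difficulty_model.predict(np.array(mapped).reshape(-1, 1)).
```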
As can be seen in Figure 7, the model constructed based on similarity and the transfer learning method for predicting the transition training cost shows remarkable results. The model successfully and accurately predicted the training cost level for each of the six different aircraft-type transition training cases. This important achievement not only highlights the efficiency and accuracy of the model but also provides a reliable cost prediction tool for airlines in transition training resource planning. However, due to the limited number of validation samples, these results are not sufficient to fully demonstrate the model’s generalization ability. Nonetheless, they fully demonstrate the great potential of inter-aircraft similarity and transfer learning-based approaches in solving the problem of transition training cost prediction.

5. Summary

In this study, a set of scientific quantitative evaluation methods was constructed to assess the similarity of the panel display control design and standard operating procedures (SOPs) of different aircraft types. Questionnaires and mathematical statistics were used to quantitatively assess the learning difficulty of each panel and SOP in transition training. Transition training between the A320, A330, B737, and B777 was chosen as the research sample to quantitatively verify the influence of inter-aircraft similarity on the difficulty and training cost of transition training. Finally, the KNN algorithm was used to establish a training difficulty prediction model, and the knowledge learned from the training difficulty prediction model was used to predict the training cost of transition training using the transfer learning method. Based on the above research, this paper draws the following main conclusions:
(1)
The assessment system constructed in this study can accurately quantify the similarity between different aircraft types.
(2)
With the application of the assessment method in this paper, it is quantitatively verified that inter-aircraft similarity has a significant positive effect in reducing the difficulty and cost of transition training.
(3)
The prediction model established using the KNN algorithm, with the similarity of training objects as its feature, can accurately predict the difficulty of transition training programs.
(4)
The transfer learning method can effectively solve the problem of insufficient sample size. Combined with the results of an inter-aircraft similarity assessment, it can accurately predict the cost of transition training, which provides reliable decision support for airlines in terms of transition training resource planning and cost management.
In summary, by predicting training difficulty and costs through the assessment of similarity in product controls and operational processes, this study has paved new paths for research in training for product transitions. It directly ties to human cognitive processes, namely, by enhancing the similarity between learning materials to reduce cognitive load, accelerate the learning process, and improve overall learning efficiency. Our research suggests that by analyzing and evaluating the similarity between different products’ control elements and operational processes, the training process can be predicted and optimized. This strategy is not limited to the aviation sector but is also applicable to multiple industries such as automotive, shipping, and heavy machinery. For example, in the automotive industry, the difficulty and cost of driver training can be assessed by comparing the control systems of new and old vehicle models; in the shipping industry, more efficient training plans can be designed by analyzing the similarity in operational processes, ensuring that crew members can quickly master new technologies. Therefore, our study not only emphasizes the importance of similarity in assessing and optimizing the training process but also demonstrates how this concept can be applied to training design across different products and industries, offering a productive route for future research and practice. To further enhance the model’s accuracy and universality, future research could focus on expanding the size of the data sample, delving deeper into the key factors affecting training costs, and optimizing the algorithm and feature selection processes.

Author Contributions

Conceptualization, Y.Z.; methodology, K.C.; software, K.C.; validation, J.F.; formal analysis, K.C.; investigation, Y.Z.; resources, J.F.; data curation, Y.Z.; writing—original draft preparation, K.C.; writing—review and editing, Y.Z. and J.F.; visualization, K.C.; supervision, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Graph comparing transition training difficulty with the average similarity in the corresponding training objects.
Figure 2. Relationship diagram between model similarity and transition training cost.
Figure 3. K-value cross-validation results of the training difficulty prediction model.
Figure 4. Visualization of the results of the prediction of the transition training difficulty of the panel.
Figure 5. Distribution of source domain and target domain data before (left) and after (right) transformation.
Figure 6. Prediction result of SOP transition training difficulty.
Figure 7. Prediction result of the transition training cost.
Table 1. Characteristics of each type of control element.
Control Type | Control Method | Required Space | Visual Effect | Tactile Effect | Operational Convenience
Button | Finger press | Small | Poor | Poor | Slightly good
Knob | Finger twist | Slightly small | Slightly poor | Medium | Slightly good
Toggle | Finger flick | Slightly small | Medium | Slightly poor | Good
Slider | Finger slide | Medium | Slightly good | Medium | Medium
Joystick | Arm pull | Slightly large | Good | Good | Slightly good
Handwheel | Arm twist | Large | Slightly good | Good | Medium
Pedal | Foot press | Slightly large | Poor | Slightly good | Medium
Table 2. Characteristics of each type of display device.
Display Type | Display Method | Required Space | Visual Effect | Tactile Effect | Information Content
Indicator light | Color, on/off | Small | Good | None | Small
Meter dial | Pointer reading | Medium | Medium | None | Medium
Digital display | Numeric display | Medium | Good | None | Medium
Display screen | Graphics, position | Large | Medium | None | Large
Table 3. Pairwise similarity assessment values for control types.
 | Button | Knob | Toggle | Slider | Joystick | Handwheel | Pedal
Button | 1.00 | 0.88 | 0.87 | 0.79 | 0.68 | 0.68 | 0.78
Knob | 0.88 | 1.00 | 0.91 | 0.88 | 0.79 | 0.79 | 0.87
Toggle | 0.87 | 0.91 | 1.00 | 0.87 | 0.79 | 0.76 | 0.80
Slider | 0.79 | 0.88 | 0.87 | 1.00 | 0.87 | 0.86 | 0.83
Joystick | 0.68 | 0.79 | 0.79 | 0.87 | 1.00 | 0.91 | 0.79
Handwheel | 0.68 | 0.79 | 0.76 | 0.86 | 0.91 | 1.00 | 0.83
Pedal | 0.78 | 0.87 | 0.80 | 0.83 | 0.79 | 0.83 | 1.00
Table 4. Pairwise similarity assessment values for display types.
 | Indicator Light | Meter Dial | Digital Display | Display Screen
Indicator light | 1.00 | 0.77 | 0.81 | 0.60
Meter dial | 0.77 | 1.00 | 0.87 | 0.81
Digital display | 0.81 | 0.87 | 1.00 | 0.77
Display screen | 0.60 | 0.81 | 0.77 | 1.00
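The pairwise values in Tables 3 and 4 serve as a look-up for element-level similarity. As a hedged sketch only (not the paper's exact aggregation formula), a panel-level similarity could be obtained by matching corresponding elements on two panels and averaging their look-up values; the equal weighting and the example element pairing below are assumptions for illustration.

```python
# Illustrative aggregation of element-level similarity values into a panel score.
CONTROL_SIM = {
    ("button", "knob"): 0.88, ("knob", "toggle"): 0.91,
    ("joystick", "handwheel"): 0.91, ("slider", "slider"): 1.00,
}
DISPLAY_SIM = {
    ("indicator light", "indicator light"): 1.00,
    ("meter dial", "digital display"): 0.87,
    ("digital display", "display screen"): 0.77,
}

def lookup(table, a, b):
    """Symmetric look-up of a pairwise similarity value."""
    return table.get((a, b), table.get((b, a), 0.0))

def panel_similarity(element_pairs):
    """Unweighted mean similarity over matched element pairs of two panels."""
    scores = [lookup(table, a, b) for table, a, b in element_pairs]
    return sum(scores) / len(scores)

# Example: a small panel with two matched controls and one matched display element
pairs = [
    (CONTROL_SIM, "button", "knob"),
    (CONTROL_SIM, "joystick", "handwheel"),
    (DISPLAY_SIM, "meter dial", "digital display"),
]
print(round(panel_similarity(pairs), 2))  # 0.89 with the values above
```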
Table 5. Cockpit panel similarity assessment results between two aircraft types.
Scenario | FCU | MCDU | Center Console | Communication Panel | Instrument Panel | Head Panel | Average Value
A320 to A330 | 0.86 | 0.75 | 0.77 | 0.87 | 0.70 | 0.56 | 0.75
A330 to A320 | 0.91 | 0.90 | 0.72 | 0.87 | 0.60 | 0.51 | 0.75
B737 NG to B777 | 0.38 | 0.63 | 0.20 | 0.47 | 0.24 | 0.29 | 0.37
B777 to B737 NG | 0.48 | 0.49 | 0.32 | 0.57 | 0.46 | 0.20 | 0.42
A320 to B737 NG | 0.28 | 0.30 | 0.30 | 0.36 | 0.37 | 0.12 | 0.29
B737 NG to A320 | 0.40 | 0.33 | 0.34 | 0.34 | 0.22 | 0.08 | 0.29
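The Average Value column in Table 5 appears to be the arithmetic mean of the six panel scores in each row; for example, for A320 to A330, (0.86 + 0.75 + 0.77 + 0.87 + 0.70 + 0.56) / 6 ≈ 0.75.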
Table 6. Results of the SOP similarity assessment between two aircraft types.
Transition Training | SOP Similarity
A320 to A330 | 0.66
A330 to A320 | 0.58
B737 NG to B777 | 0.30
B777 to B737 NG | 0.41
A320 to B737 NG | 0.51
B737 NG to A320 | 0.44
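The SOP scores in Table 6 quantify how closely two types' ordered procedures agree. As an illustration only, one common way to obtain such a score is a normalized edit (Levenshtein-type) similarity over the sequence of operation steps, as sketched below; the step lists are invented for the example, and the paper's own alignment and weighting may differ.

```python
# Hedged illustration: SOP similarity as a normalized sequence similarity.
def edit_distance(a, b):
    """Classic Levenshtein distance between two sequences of steps."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[m][n]

def sop_similarity(sop_a, sop_b):
    """1 minus the edit distance normalized by the longer procedure length."""
    return 1.0 - edit_distance(sop_a, sop_b) / max(len(sop_a), len(sop_b))

# Invented toy procedures for two aircraft types
sop_type_a = ["set parking brake", "battery on", "apu start", "align irs", "fmc init"]
sop_type_b = ["set parking brake", "battery on", "apu start", "fmc init", "flaps check"]
print(round(sop_similarity(sop_type_a, sop_type_b), 2))  # 0.6 for this toy example
```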
Table 7. Results of the questionnaire on the difficulty of transition training.
Scenario | Object | A | B | C | D | E
A320 to A330 | FCU | 3 | 11 | 1 | 0 | 0
A320 to A330 | MCDU | 2 | 12 | 1 | 0 | 0
A320 to A330 | Center console | 3 | 7 | 4 | 1 | 0
A320 to A330 | Communication panel | 5 | 9 | 1 | 0 | 0
A320 to A330 | Instrument panel | 3 | 9 | 3 | 0 | 0
A320 to A330 | Head panel | 1 | 7 | 7 | 0 | 0
A320 to A330 | SOP | 0 | 7 | 6 | 1 | 1
A330 to A320 | FCU | 9 | 7 | 1 | 1 | 0
A330 to A320 | MCDU | 9 | 8 | 0 | 1 | 0
A330 to A320 | Center console | 7 | 7 | 3 | 1 | 0
A330 to A320 | Communication panel | 11 | 6 | 0 | 1 | 0
A330 to A320 | Instrument panel | 9 | 8 | 0 | 1 | 0
A330 to A320 | Head panel | 2 | 15 | 0 | 1 | 0
A330 to A320 | SOP | 1 | 7 | 6 | 3 | 1
B737 to B777 | FCU | 0 | 2 | 2 | 1 | 0
B737 to B777 | MCDU | 0 | 3 | 1 | 1 | 0
B737 to B777 | Center console | 1 | 1 | 1 | 2 | 0
B737 to B777 | Communication panel | 0 | 4 | 1 | 0 | 0
B737 to B777 | Instrument panel | 1 | 1 | 1 | 2 | 0
B737 to B777 | Head panel | 0 | 1 | 2 | 2 | 0
B737 to B777 | SOP | 0 | 0 | 2 | 3 | 0
B777 to B737 | FCU | 3 | 9 | 2 | 0 | 0
B777 to B737 | MCDU | 5 | 9 | 0 | 0 | 0
B777 to B737 | Center console | 1 | 4 | 5 | 4 | 0
B777 to B737 | Communication panel | 6 | 8 | 0 | 0 | 0
B777 to B737 | Instrument panel | 2 | 12 | 0 | 0 | 0
B777 to B737 | Head panel | 2 | 4 | 1 | 4 | 3
B777 to B737 | SOP | 1 | 3 | 4 | 5 | 1
A320 to B737 | FCU | 0 | 1 | 1 | 3 | 0
A320 to B737 | MCDU | 0 | 1 | 2 | 2 | 0
A320 to B737 | Center console | 0 | 1 | 1 | 2 | 1
A320 to B737 | Communication panel | 0 | 2 | 2 | 1 | 0
A320 to B737 | Instrument panel | 0 | 1 | 2 | 1 | 0
A320 to B737 | Head panel | 0 | 0 | 1 | 0 | 2
A320 to B737 | SOP | 0 | 1 | 2 | 2 | 0
B737 to A320 | FCU | 1 | 2 | 4 | 1 | 1
B737 to A320 | MCDU | 0 | 1 | 5 | 2 | 1
B737 to A320 | Center console | 0 | 0 | 6 | 2 | 1
B737 to A320 | Communication panel | 0 | 5 | 3 | 1 | 0
B737 to A320 | Instrument panel | 0 | 4 | 4 | 1 | 0
B737 to A320 | Head panel | 0 | 1 | 1 | 3 | 4
B737 to A320 | SOP | 0 | 3 | 3 | 2 | 1
Table 8. Results of the evaluation of the difficulty of transition training.
Scenario | FCU | MCDU | Center Console | Communication Panel | Instrument Panel | Head Panel | Panel Average Value | SOP
A320 to A330 | B | B | B | B | B | B | B | B
A330 to A320 | A | A | B | A | A | B | C | A
B737 to B777 | C | B | D | B | D | D | D | C
B777 to B737 | B | B | C | B | B | D | D | B
A320 to B737 | D | C | D | C | C | E | C | D
B737 to A320 | C | C | C | B | D | E | C | C
Average value | B | B | B | B | B | B | B | B
Table 9. Data related to the cost of aircraft-type transition training.
Scenario | Duration of Theoretical Training (h) | Duration of Simulator Training (h) | Cost of Theoretical Training (RMB 10,000) | Cost of Simulator Training (RMB 10,000) | Total Cost (RMB 10,000) | Cost Level
A320 to A330 | 40 | 16 | 0.5 | 6.08 | 6.58 | B
A330 to A320 | 40 | 16 | 0.5 | 6.08 | 6.58 | B
B737 to B777 | 140 | 64 | 2.5 | 24.32 | 26.82 | D
B777 to B737 | 140 | 56 | 2.5 | 21.28 | 23.78 | C
A320 to B737 | 140 | 56 | 2.5 | 21.28 | 23.78 | C
B737 to A320 | 140 | 56 | 2.5 | 21.28 | 23.78 | C
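As a consistency check on Table 9, the total cost in each row is the sum of the theoretical and simulator training costs; for example, for B737 to B777, 2.5 + 24.32 = 26.82 (RMB 10,000).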
