Article

A Multiple Response Prediction Model for Dissimilar AA-5083 and AA-6061 Friction Stir Welding Using a Combination of AMIS and Machine Learning

Rungwasun Kraiklang, Chakat Chueadee, Ganokgarn Jirasirilerd, Worapot Sirirak and Sarayut Gonwirat
1 Department of Industrial Engineering, Faculty of Engineering and Technology, Rajamangala University of Technology Isan, Nakhon Ratchasima 30000, Thailand
2 Department of Industrial and Environmental Management Engineering, Faculty of Liberal Arts and Sciences, Sisaket Rajabhat University, Sisaket 33000, Thailand
3 Department of Industrial Engineering, Faculty of Engineering, Rajamangala University of Technology Lanna Chiang Rai, Chiang Rai 57120, Thailand
4 Department of Computer Engineering and Automation, Faculty of Engineering and Industrial Technology, Kalasin University, Kalasin 46000, Thailand
* Author to whom correspondence should be addressed.
Computation 2023, 11(5), 100; https://doi.org/10.3390/computation11050100
Submission received: 23 March 2023 / Revised: 3 May 2023 / Accepted: 9 May 2023 / Published: 15 May 2023
(This article belongs to the Section Computational Engineering)

Abstract

This study presents a methodology that combines artificial multiple intelligence systems (AMISs) and machine learning to forecast the ultimate tensile strength (UTS), maximum hardness (MH), and heat input (HI) of AA-5083 and AA-6061 friction stir welding. The machine learning model integrates two machine learning methods, Gaussian process regression (GPR) and a support vector machine (SVM), into a single model, and then uses the AMIS as the decision fusion strategy to merge SVM and GPR. The generated model was utilized to anticipate three objectives based on seven controlled/input parameters. These parameters were: tool tilt angle, rotating speed, travel speed, shoulder diameter, pin geometry, type of reinforcing particles, and tool pin movement mechanism. The effectiveness of the model was evaluated using a two-experiment framework. In the first experiment, we used two newly produced datasets, (1) the 7PI-V1 dataset and (2) the 7PI-V2 dataset, and compared the results with state-of-the-art approaches. The second experiment used existing datasets from the literature with varying base materials and parameters. The computational results revealed that the proposed method produced more accurate prediction results than the previous methods. For all datasets, the proposed strategy outperformed existing methods and state-of-the-art processes by an average of 1.35% to 6.78%.

1. Introduction

Aluminum alloys are primary materials used for product components in many industries, such as the aviation, automotive, railroad, and marine industries, because of their corrosion resistance, strength, and formability [1,2,3]. In particular, AA5XXX and AA6XXX alloys are utilized in various advanced commercial applications because of their strength and good weldability [4,5]. Welding of complex dissimilar joints between these materials often cannot be avoided. However, in the fusion welding of complex joints, dissimilar aluminum alloys have low weldability because of differences in chemical composition, mechanical properties, thermal expansion coefficient, and melting point, which manifest as metallurgical problems such as distortion, shrinkage, and porosity resulting from melting and solidification along the weld line [6,7,8]. Solid-state welding is a method for joining dissimilar materials that offers effective weldability and reduces weld seam defects because the process temperature remains below the melting point [9,10,11]. The friction stir welding (FSW) process produces good weld quality, and the weld seam between the two materials has good mechanical properties. However, it is difficult to select the parameters for dissimilar welding because of the variety of process parameter relationships and their impact on weld seam quality [12,13].
There has been much research carried out into FSW of dissimilar aluminum alloys, including comparative studies of process parameters and their effects on mechanical properties and metallurgical structure, and these are essential considerations in the configuration of the weld line [14]. The process parameters that influence weld seam quality are: tool rotational speed (TRS), tool travel speed (TTS), shoulder diameter (SD), pin diameter (PD), pin length (PL), penetration depth (PeD), tool tilt angle (TA), pin type (PT), tool traveling method (TM), type of reinforcement particles (TP), and filled additive techniques (TAD) [15,16,17,18,19,20]. Unsuitable process parameters result in insufficient heat generation and mixing of material, which lead to metallurgical changes involving the grain size, microstructure, defects, and intermetallic compound phases, as well as precipitation in the nugget zone (NZ), the thermomechanically affected zone (TMAZ), and the heat-affected zone (HAZ) [21,22,23,24]. As a result, the mechanical properties are not as anticipated. For FSW, the related parameters need to be controlled to achieve good metallurgical structure and mechanical strength.
Luesak et al. [25] reported optimal welding parameters, obtained using a modified differential evolution approach, for the dissimilar welding of AA5083–AA6061 with particle reinforcement of the weld seam in a multiresponse process. The welding parameters that influence the metallurgical structure and strength of the weld seam are: rotational speed, tool travel speed, shoulder diameter, tilt angle, pin geometry, type of particles, and tool pin movement. The significant desired responses defining the quality and strength of an FSW weld line are: ultimate tensile strength (UTS), maximum hardness (MH), and heat input (HI). Controlling the welding parameters is essential for the generation of sound mechanical properties [26]. The authors of [27] showed that the UTS and MH could be predicted in a multiobjective optimization by using the controllable parameter values as inputs to a prediction model. Examples of multiobjective prediction methods from the literature include response surface methodology (RSM) [28,29] and grey relation analysis (GRA) [30,31]. Furthermore, the authors of [32] applied an experimental design to develop an artificial multiple intelligence system (AMIS) to solve both single and multiple objectives in FSW and identify an optimal solution.
In recent years, friction stir welding research has applied artificial intelligence (AI) methods extensively, including in industrial development research [33,34]. To predict mechanical properties, Senapati et al. [35] used an artificial neural network (ANN) simulation involving tensile and yield strength, elongation, bending stress, and grain size, taking tool rotational speed (TRS), tool travel speed (TTS), and penetration depth (PeD) into consideration. In addition, [36] used a Mamdani-type fuzzy logic model in FSW to predict tensile strength and weld seam hardness, and [37] applied the support vector machine (SVM) method and ANN modeling. Furthermore, [38] simulated FSW parameters and responses using an ANN approach with regard to the hardness in the heat-affected zone (HAZ) and the peak temperature in the stir zone and the HAZ; the results were consistent with those in the literature. Machine learning (ML) has often been applied in science and engineering [39], including to noisy and sparse data [40]. Machine learning has therefore been used to forecast multiple weld seam properties and remains important when combined with AI for high performance. In [41,42], the authors demonstrated the application of heterogeneous ensemble AI models to categorize types of drug resistance in tuberculosis patients and to identify and classify inappropriate images. Applying a heterogeneous ensemble AI model to forecast mechanical properties should likewise provide higher-quality solutions. Furthermore, ensemble machine learning (EML) has been developed for use in many forecasting models together with other approaches, such as ML, to successfully optimize parameters [43]. In [44], the authors set out an approach for the categorization of types of drug resistance in tuberculosis patients, and [31] applied the ensemble deep learning (EDL) approach to classify irregularities in medical images that were missed when checked by radiologists. Our literature review found that the EDL and EML methods provided more effective prediction results than the individual deep learning (DL) and ML versions of the models.
Therefore, our research used EML and AI to estimate the UTS, MH, and HI for high weld seam quality in the FSW process. The gaps in the research (see Table 1) are as follows: (1) a gap concerning the methodology for the prediction of the UTS, MH, and HI, where an EML approach forecasts the multiple objectives of the FSW process from a predefined set of parameters; and (2) a gap regarding the types of joining material to which a heterogeneous ensemble machine learning architecture is applied to predict the optimal values of the UTS, MH, and HI in the FSW process. In addition, this research makes the following contributions: (1) the combination of AMIS and machine learning methods was used for the first time to predict the multiple responses of dissimilar friction stir welding; and (2) two novel datasets based on the seven controlled parameters were proposed. These were utilized to construct successful algorithms for predicting multiple responses of friction stir welding using the test data.

2. Research Methodology

This section presents the research method applied to the artificial multiple intelligence system (AMIS)–machine learning combination model for the prediction of UTS, MH, and HI for AA5083 and AA6061 from the determined parameters. The research outline is shown in Figure 1.

2.1. Dataset Arrangement

The study data used in our experiment consisted of a training dataset (80%) and a testing dataset (20%), divided into two groups: 7PI-V1 and 7PI-V2. For example, the 7PI-V1 dataset had 57 samples, divided into 45 training samples and 12 testing samples, and was used to test the performance of the proposed model, as shown in Table 2. The dataset was obtained via an initial demonstration using AA-5083 and AA-6061 materials. The UTS and MH properties of the two materials are shown in Table 3. The FSW experiment used worksheet plates that were 75 mm wide, 200 mm long, and 6 mm thick.
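As a minimal sketch of the 80/20 split described above (the feature matrix X and response matrix y here are random placeholders, not the study data, which are given in Tables 4, 5, and 12; scikit-learn is assumed as the tooling):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: 57 samples, 7 input parameters, 3 responses (UTS, MH, HI)
rng = np.random.default_rng(0)
X = rng.random((57, 7))
y = rng.random((57, 3))

# 80% training / 20% testing split, matching the 45/12 division of 7PI-V1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (45, 7) (12, 7)
```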

2.1.1. Dataset 7PI-V1

The parameters used for FSW were as proposed by Luesak et al. [25]: four continuous parameters (rotational speed, tool travel speed, tool shoulder diameter, and tool tilt angle) and three categorical variables (pin geometry, particle types, and tool pin movement). The four continuous variables were set at the variable levels shown in Table 4.

2.1.2. Dataset 7PI-V2

This dataset was experimentally designed using the Taguchi method [30]. Commercial AA-5083 and AA-6061 alloy plates cut to dimensions of 75 mm wide, 200 mm long, and 6 mm thick were used in the FSW experiment, and the parameters for the welding experiment are given in Table 5. A CNC milling machine (HAAS, Model TM2) was utilized in the FSW process. The moving direction of the tool was the rolling direction of the specimen, and the tool geometry for the experiment is shown in Figure 2. During FSW, the heat input was measured using a thermal infrared imaging camera (FLUKE, Model Ti480 Pro). After FSW, a waterjet machine (MAXIEM, Model 1530) was used to cut all the specimens in the transverse direction following the ASTM E8M-04 standard for testing tensile strength and maximum hardness, as shown in Figure 3. The tensile testing was carried out at room temperature using a universal testing machine (LLOYD, Model LS100-Plus) at a crosshead speed of 0.5 mm/min. Microhardness testing was conducted using a Vickers hardness testing machine (Mitutoyo, Model MVK-H1) with a test load of 100 g and a 15 s holding time. The experimental and test procedures are shown in Figure 4. The results of the 7PI-V1 experiment were compared with related work to test and confirm the earlier predictions.

2.2. AMIS–Machine Learning Combination Model

The AMIS–machine learning combination model was designed to improve the forecasting of the UTS, MH, and HI in FSW of AA-5083 and AA-6061 aluminum alloys. The input variables were the parameters of [25], and the modeling followed the method proposed by Matitopanum et al. [59] with the addition of a new method, the AMIS–machine learning combination model, which consists of the following parts.

2.2.1. Gaussian Process Regression

Gaussian process regression (GPR) is one of the base learners in the machine learning combination model and was used to improve the presented ensemble machine learning model. GPR estimates the relationship between the input (X) and output (Y) of the process and provides a conditional (predictive) distribution, as shown in Figure 5.
Figure 5 depicts GPR as a graphical model, with square boxes, circles, and arrows denoting the nodes associated with the observed values, which form a group of random variables.
The general process of GPR is given by the following formula:
$$ y = f(x) + \mathcal{N}(0, \sigma_n^2) \qquad (1) $$
where $f(x)$ is the latent function of the input, $\mathcal{N}(0, \sigma_n^2)$ is the observation noise, and $\sigma_n^2$ is the noise variance.
One choice of covariance function $k(x, x')$ is the radial basis function (RBF) kernel, obtained via the following formulae:
$$ k(x, x') = \sigma_f^2 \exp\left(-\frac{(x - x')^2}{2 l^2}\right) \qquad (2) $$
$$ k(x, x') = \sigma_f^2 \exp\left(-\frac{(x - x')^2}{2 l^2}\right) + \sigma_n^2 \, \delta(x, x') \qquad (3) $$
where $\sigma_f^2$ determines the maximum covariance: if $x \approx x'$, then $k(x, x')$ approaches this maximum and $f(x)$ is closely correlated with $f(x')$ in Equation (2). In Equation (3), $\delta(x, x')$ is the Kronecker delta and $l$ is the length scale. The covariance function is likewise used as the kernel of the support vector machine described below to generate the covariance matrix.
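The paper does not give implementation details beyond Equations (1)-(3); a minimal sketch of GPR with an RBF covariance plus a white-noise term, written with scikit-learn (an assumption on our part; Table 7 lists kernel = 'rbf' and noise = 0.2 for the GPR regressor) and random placeholder data, could look like this:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# RBF covariance (Equation (2)) scaled by sigma_f^2, plus a white-noise term
# sigma_n^2 * delta(x, x') as in Equation (3)
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)

# Placeholder training data: 45 samples of 7 welding parameters -> one response (e.g., UTS)
X_train, y_train = np.random.rand(45, 7), np.random.rand(45)
gpr.fit(X_train, y_train)

# Predictive mean and standard deviation for unseen parameter settings
y_mean, y_std = gpr.predict(np.random.rand(12, 7), return_std=True)
```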

2.2.2. Support Vector Machine

A support vector machine (SVM) is a supervised learning algorithm that can be used for regression problems. The purpose of SVM regression is to find a function that accurately predicts the output value for a given input.
SVM regression works by maximizing the margin around the fitted function so that it generalizes well to real output values. It is well suited to modeling the relationship between the input and output variables when the data are complex and cannot easily be modeled with a linear function.
For prediction, a new input is mapped to the hyperplane and the identified function is used to calculate the predicted value. The kernel is defined as a radial basis function (RBF), using the following equation:
$$ k(x, x') = \exp\left(-\gamma \, \lVert x - x' \rVert^2\right) \qquad (4) $$
where $\gamma$ (gamma) is a scaling factor and $x$, $x'$ are the input vectors.
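For the SVM regressor with the RBF kernel of Equation (4), a minimal scikit-learn sketch (assuming the gamma = 7 and C = 0.2 settings reported in Table 7, with random placeholder data) might be:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Support vector regression with the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2);
# gamma and C follow the user-defined parameters listed in Table 7
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", gamma=7, C=0.2))

# Placeholder data: 45 training samples with 7 welding parameters, one response
X_train, y_train = np.random.rand(45, 7), np.random.rand(45)
svr.fit(X_train, y_train)
y_pred = svr.predict(np.random.rand(12, 7))
```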

2.2.3. Ensemble Strategy

Both homogeneous and heterogeneous structures can be used to carry out the ensemble strategy. The heterogeneous structure used here combines the GPR method with a 50% contribution and the support vector machine with a 50% contribution, as proposed by [59] and shown in Figure 6.
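A sketch of this heterogeneous structure, in which the GPR and SVM base learners each contribute 50% of the combined prediction (the fitted gpr and svr objects are assumed from the sketches above; this is an illustration, not the authors' exact implementation):

```python
def heterogeneous_predict(gpr_model, svm_model, X):
    """Average the GPR and SVM predictions with equal 50%/50% weights (Figure 6)."""
    return 0.5 * gpr_model.predict(X) + 0.5 * svm_model.predict(X)

# Example usage with previously fitted models (assumed):
# y_ensemble = heterogeneous_predict(gpr, svr, X_test)
```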

2.2.4. Decision Hybridization Strategy

The decision hybridization strategy is the final forecasting stage that reports the results of the machine learning approach. It combines the results of the individual base learners into one, improving on the solution of any single model.
- Unweighted average ensemble (UWE)
Here, the outputs of the base learners are combined using an unweighted average to define the merged decision of the model [60].
- Weighted ensemble optimization using artificial multiple intelligence systems (WEAMIS)
The EML method improves the predictive values for the UTS, MH, and HI by developing a cluster of learning techniques, called weighted ensemble optimization using artificial multiple intelligence systems (WEAMIS), to determine the appropriate fusion weighting. WEAMIS is then applied to find the best weighting, taking the UWE as a baseline, to improve the quality of the final solution.
The optimization model aims to combine the base learner forecasts, using optimized weights for the ensemble as a whole, so as to minimize the total predicted root mean square error (RMSE), as shown in Equation (5). The ensemble prediction $\hat{y}_i$ is given by Equation (6):
$$ \mathrm{RMSE} = \sqrt{\frac{1}{I}\sum_{i=1}^{I}\left(y_i - \hat{y}_i\right)^2} \qquad (5) $$
$$ \hat{y}_i = \sum_{j=1}^{J} \omega_j \hat{Y}_{j} \qquad (6) $$
where $\omega_j \ge 0$ and $\sum_{j=1}^{J} \omega_j = 1$.
The RMSE of a single learner is also obtained using Equation (5), where $I$ is the number of observations, $y_i$ is the actual value of observation $i$, $\hat{y}_i$ is the prediction for observation $i$, $\hat{Y}_j$ is the set of prediction values when using model $j$ as the learner, and $J$ is the total number of learners used in the model.
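A sketch of Equations (5) and (6): the ensemble prediction is a convex combination of the base learner predictions, scored by the RMSE it produces. The function and array names here are illustrative; the actual weight search is performed by AMIS as described next.

```python
import numpy as np

def ensemble_rmse(weights, base_preds, y_true):
    """RMSE (Equation (5)) of the weighted ensemble prediction (Equation (6)).

    base_preds: array of shape (J, I), predictions of J base learners for I observations.
    weights:    array of shape (J,), non-negative weights that are normalized to sum to 1.
    """
    weights = np.clip(weights, 0.0, None)
    weights = weights / weights.sum()   # enforce sum(w_j) = 1
    y_hat = weights @ base_preds        # y_hat_i = sum_j w_j * prediction of learner j
    return np.sqrt(np.mean((y_true - y_hat) ** 2))
```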
The artificial multiple intelligence system (AMIS) method uses several artificial intelligences in combination to assist in the identification of optimal solutions [42,51]. Each component of the system is called an intelligence box (IB) and contains an algorithm with unique properties. AMIS comprises four steps: (1) generation of a set of population members known as work packages (WPs), (2) performing each WP by selecting a specified IB, (3) updating the track information, and (4) repeating steps 2 to 3 until the termination condition is met.
Step 1. Initial generation of the work package
The initial work packages are generated randomly as NP random vectors, where NP is set so that the SVM and GPR learners receive an equal number of positions (1–100), as recommended by [42,51]. This research used two tracks (i = 2) of random values as an example, as shown in Table 6.
Table 6 shows the initial work package with values at positions 1–100. The initial work package is filled with random numbers between 0 and 1, and the next work package solution set is selected using a roulette wheel method and passed to the chosen intelligence box (IB) [61].
Each work package then selects an IB, and each IB is performed individually to improve the current solution of each track i. The selection probability is computed using the following formula:
$$ P_{bt} = \frac{F N_{b,t-1} + (1-F) A_{b,t-1} + M J_{b,t-1}}{\sum_{b=1}^{B} \left[ F N_{b,t-1} + (1-F) A_{b,t-1} + M J_{b,t-1} \right]} \qquad (7) $$
where $P_{bt}$ represents the probability of selecting intelligence box b in iteration t; $N_{b,t-1}$ is the number of positions that selected intelligence box b in the previous iterations; $A_{b,t-1}$ is the average objective function value of all positions that selected intelligence box b in the previous iteration; and $J_{b,t-1}$ is a value that is incremented by 1 if intelligence box b found the best result in the latest iteration. Prasitpuriprecha et al. [42] recommended a scaling factor F = 2 and an improvement factor M = 1 as good initial choices.
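A sketch of the selection probability in Equation (7) and the roulette-wheel draw of an intelligence box, with F = 2 and M = 1 as recommended; the array values are illustrative, and the clipping of non-positive scores is a practical guard added here, not part of the original formulation:

```python
import numpy as np

def ib_probabilities(N, A, J, F=2.0, M=1.0):
    """Selection probability of each intelligence box (Equation (7)).

    N: times each IB was selected in previous iterations
    A: average objective value of positions that selected each IB in the last iteration
    J: reward counter, incremented when an IB produced the best result
    """
    score = F * N + (1.0 - F) * A + M * J
    score = np.clip(score, 1e-12, None)  # guard against non-positive scores
    return score / score.sum()

def roulette_select(prob, rng):
    """Roulette-wheel selection of one intelligence box index."""
    return rng.choice(len(prob), p=prob)

rng = np.random.default_rng(1)
p = ib_probabilities(N=np.ones(8), A=np.full(8, 0.5), J=np.zeros(8))
chosen_ib = roulette_select(p, rng)
```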
Step 2. Performing the WP to select the specified IB
In this step, each WP selects a specified IB to improve the quality of the current solution and move toward the optimal solution. Our research utilized the following eight intelligence boxes, as recommended by [32]:
$$ Y_{ijq} = \rho Y_{rjq} + F_1 \left( B_{j}^{gbest} - Y_{rjq} \right) + F_2 \left( Y_{mjq} - Y_{rjq} \right) \qquad (8) $$
$$ Y_{ijq} = Y_{rjq} + F_1 \left( B_{j}^{gbest} - Y_{rjq} \right) + F_2 \left( B_{hj}^{pbest} - Y_{rjq} \right) \qquad (9) $$
$$ Y_{ijq} = Y_{rjq} + F_1 \left( Y_{mjq} - Y_{njq} \right) \qquad (10) $$
$$ Y_{ijq} = S_{ij} \qquad (11) $$
$$ Y_{ijq} = \begin{cases} Y_{ijq} & \text{if } S_{ij} \le CR_h \\ S_{ijq} & \text{otherwise} \end{cases} \qquad (12) $$
$$ Y_{ijq} = \begin{cases} Y_{ijq} & \text{if } S_{ij} \le CR_h \\ B_{j}^{gbest} & \text{otherwise} \end{cases} \qquad (13) $$
$$ Y_{ijq} = \begin{cases} Y_{ijq} & \text{if } S_{ij} \le CR_h \\ Y_{njq} & \text{otherwise} \end{cases} \qquad (14) $$
$$ Y_{ijq} = \begin{cases} Y_{ijq} & \text{if } S_{ij} \le CR_h \\ S_{ij} Y_{ijq} & \text{otherwise} \end{cases} \qquad (15) $$
Each IB operation is executed for all WPs in iteration t, and each WP selects its preferred IB to search for the answer to the problem using Equations (8) to (15), where $Y_{ijq}$ on the left-hand side is the new value of WP track i generated in the position using the IBs; r, n, and m are specific elements of the WP set (1 to Q) that are not equal to q; $S_{ij}$ is a random number between 0 and 1 for WP track i, element j; $B_{j}^{gbest}$ is the value of the global best WP in iteration t; and $B_{hj}^{pbest}$ is the value of the personal best. A crossover rate ($CR_h$) of 0.8 is used, as recommended by [32].
Step 3. Updating track information
Some track information must be updated before it can be used in subsequent iterations, using the formulae in Equations (16) and (17), as in ref [42]:
$$ Y_{ij,t+1} = \begin{cases} Z_{ijt} & \text{if } f(Z_{ijt}) \le f(X_{ijt}), \text{ updating } f(X_{ijt}) = f(Z_{ijt}) \\ X_{ij,t+1} & \text{otherwise} \end{cases} \qquad (16) $$
$$ W_i = Y_{ijt} \quad \text{for all } i \text{ and } t \qquad (17) $$
where $Y_{ij,t+1}$ is the tracked value of i in iteration t + 1, $f(Z_{ijt})$ is the objective function value of $Z_{ijt}$, and $f(X_{ijt})$ is the objective function value of $X_{ijt}$. The values of $W_i$ are then carried over to $X_{ij,t+1}$, which is the updated value.
Step 4. Repeating WP in steps 2 to 3.
Steps 2 and 3 are repeated until the termination condition is met. The stopping criterion is set to 100 iterations [62].
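Putting the four steps together, a condensed sketch of how an AMIS-style search could look when used to find the ensemble weights is given below. It uses a single DE-style move in the spirit of Equation (8) as the intelligence box, a greedy track update in the spirit of Equation (16), and the 100-iteration stop criterion; the function and parameter names are illustrative, and this is a simplified sketch, not the authors' exact implementation.

```python
import numpy as np

def amis_weight_search(base_preds, y_true, n_tracks=20, n_iter=100, F1=0.7, F2=0.7, seed=0):
    """Simplified AMIS-style search for ensemble weights minimizing RMSE.

    base_preds: (J, I) predictions of J base learners; y_true: (I,) targets.
    """
    rng = np.random.default_rng(seed)
    n_learners = base_preds.shape[0]

    def rmse(w):
        w = np.clip(w, 0, None)
        w = w / w.sum()
        return np.sqrt(np.mean((y_true - w @ base_preds) ** 2))

    # Step 1: initial work packages (random weight vectors in [0, 1])
    wp = rng.random((n_tracks, n_learners))
    fitness = np.array([rmse(w) for w in wp])

    for _ in range(n_iter):                      # Step 4: repeat for 100 iterations
        gbest = wp[np.argmin(fitness)]
        for i in range(n_tracks):
            # Step 2: DE-style intelligence box move (in the spirit of Equation (8))
            r, m = rng.choice(n_tracks, size=2, replace=False)
            trial = wp[r] + F1 * (gbest - wp[r]) + F2 * (wp[m] - wp[r])
            trial = np.clip(trial, 0, 1)
            # Step 3: greedy track update (in the spirit of Equation (16))
            f_trial = rmse(trial)
            if f_trial <= fitness[i]:
                wp[i], fitness[i] = trial, f_trial

    best = np.clip(wp[np.argmin(fitness)], 0, None)
    return best / best.sum(), fitness.min()
```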

3. Results

The proposed model was simulated in Python using a PC with an Intel Core™ i3–1.70 GHz CPU and 5 GB of RAM. The framework for the experiments is shown in Figure 7.
The parameters of the GPR, SVM, random forest (RF), AdaBoost (AB), gradient boosting (GB), and presented (WEAMIS) models, taken from [59], are set out in Table 7.

3.1. Testing of the Proposed Model on the Existing Datasets

This experiment was performed using the datasets 11PI-V1 and 11PI-V2, as recommended by [59]. In addition to [59], the baseline models used in this experiment were adapted from [63], [51], [57], and [48], corresponding to the GPR and SVM, RF, AB, and GB methods, respectively. The algorithm names GPR-HO-UWE, GPR-HO-WEDE, SVM-HO-UWE, SVM-HO-WEDE, HE-UWE, and HE-WEDE were taken from [59]. The proposed models are named GPR-HO-WEAMIS, SVM-HO-WEAMIS, and HE-WEAMIS. The heterogeneous ensemble structure models are HE-UWE and HE-WEAMIS, in which UWE and WEAMIS are used as the decision fusion method. The results for the 11PI-V1 and 11PI-V2 datasets are shown in Table 8.
Table 8 shows the best obtainable method for the evaluation of the UTS, MH, and HI with the 11 enforced parameters. For the 11PI-V1 dataset, the proposed model improved the UTS, MH, and HI predictions by 10.99%, 40.51%, and 38.91%, respectively, and for the 11PI-V2 dataset by 51.87%, 36.76%, and 20.14%, respectively. The existing solutions were less precise than the best one obtained using the HE-WEAMIS model, in which the RMSE of 11PI-V1 for the UTS was reduced from 3.73 to 3.32, the MH from 3.16 to 1.88, and the HI from 6.58 to 4.02. Furthermore, the RMSE of 11PI-V2 for the UTS was reduced from 6.69 to 3.22, the MH from 3.21 to 2.03, and the HI from 7.35 to 5.87. This shows that using WEAMIS as the decision fusion approach provides greater accuracy. K-fold cross-validation was also executed with 2-, 3-, and 5-fold splits of the model; the results of the experiment are shown in Table 9.
Table 9 provides the computational results in which HE-WEAMIS obtains the best solution of all the proposed methods, as evidenced by its lower variance and RMSE.
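The k-fold validation reported in Table 9 can be reproduced in outline with scikit-learn's cross-validation utilities; a hedged sketch for one regressor and one response (2-, 3-, and 5-fold splits, scored by RMSE, with random placeholder data standing in for the training dataset) is:

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

# Placeholder data standing in for one response column of the training dataset
X, y = np.random.rand(45, 7), np.random.rand(45)
model = SVR(kernel="rbf", gamma=7, C=0.2)

for k in (2, 3, 5):
    scores = cross_val_score(model, X, y,
                             cv=KFold(n_splits=k, shuffle=True, random_state=0),
                             scoring="neg_root_mean_squared_error")
    print(f"{k}-fold RMSE: {-scores.mean():.2f} ± {scores.std():.2f}")
```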

3.2. Testing of the Proposed Model with the Seven Parameters of the Existing Dataset (7PI-V1)

The values of the seven parameters were obtained randomly, as recommended by [25]. This dataset was tested using the methods set out in Section 3.1. The models used were machine learning and ensemble machine learning, and these provided results for UTS, MH, and HI values. These results were compared with the results from the various methods, as shown in Table 10.
Table 10 reveals that HE-WEAMIS is the best prediction method given the seven parameters of 7PI-V1, improving the UTS, MH, and HI predictions by 49.38%, 45.95%, and 24.88%, respectively. The RMSE of 7PI-V1 for the UTS was reduced from 9.66 to 4.89, the MH from 7.40 to 4.00, and the HI from 6.31 to 4.74.
Table 11 shows the percentage differences of the results. HE-WEAMIS was the best method, providing better solutions than the other methods considered here and the approach described in [34]. The HE-WEAMIS results increase the prediction quality when compared with the GPR [63], SVM [63], RF [51], AB [57], GB [48], GPR-HO-UWE, GPR-HO-WEDE, SVM-HO-UWE, SVM-HO-WEDE, HE-UWE, HE-WEDE [59], SVM-HO-WEAMIS, and GPR-HO-WEAMIS approaches. The results showed much higher improvements of 27.34%, 45.95%, and 19.25% for the UTS, MH, and HI, respectively, when compared with the approach used by [34]. Furthermore, these results outperform the RF, AB, and GB methods on the UTS, MH, and HI values by up to 49.38%, 45.28%, and 27.85%, respectively. Using the optimum weights from the AMIS (HE-WEAMIS) method improved the answer quality compared with the UWE (HE-UWE) method for the UTS, MH, and HI by 3.93%, 1.96%, and 8.85%, respectively.
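The %diff values in Tables 11 and 14 appear consistent with the relative RMSE reduction of each method against HE-WEAMIS; a small sketch of that assumed calculation (for example, the GPR UTS entry: (6.73 − 4.89)/6.73 × 100 ≈ 27.3%) is:

```python
def pct_diff(rmse_method, rmse_weamis):
    """Assumed %diff definition: relative RMSE reduction achieved by HE-WEAMIS."""
    return (rmse_method - rmse_weamis) / rmse_method * 100.0

print(round(pct_diff(6.73, 4.89), 2))  # ~27.34, matching the GPR UTS row of Table 11
```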

3.3. Testing of the Proposed Model with the Unseen Seven-Parameter Dataset Generated Using the Taguchi Method (7PI-V2)

This proposed dataset consists of seven parameters and comprises 54 samples; the details and results are shown in Table 12. The dataset was analyzed using the machine learning methods to compare the models' accuracy; the results are displayed in Table 13.
Table 13 reveals that HE-WEAMIS is the best method for predicting the UTS, MH, and HI based on the seven parameters of 7PI-V2, providing a larger relative RMSE improvement for the UTS, MH, and HI than on the 7PI-V1 dataset. The predictive errors were reduced from 13.15 to 5.44, from 6.54 to 2.40, and from 8.18 to 4.49 for the UTS, MH, and HI, respectively.
Table 14 presents the percentage differences in the results, showing that HE-WEAMIS is the best model, similar to the 7PI-V1 dataset in Table 11. The UTS, MH, and HI responses provide high performance values of 58.63%, 63.30%, and 45.11%, respectively, and are higher than those of the RF, AB, and GB methods, which were 41.82%, 43.13%, and 32.38%, respectively. Therefore, using optimum weightings improved the quality of the results with the AMIS (HE-WEAMIS) approach when compared with the values for UTS, MH, and HI from the UWE (HE-UWE) method, with percentage differences in the results of 9.63%, 5.14%, and −6.15%, respectively.
Figure 8a–d shows a comparison of the errors obtained from the different forecasting models. The GPR-HO-UWE and HE-UWE methods were selected for comparison with the HE-WEAMIS approach. The predicted values are plotted against the actual values along a 45° reference line, together with error bands of ±3%. All the graphs show that the forecasts obtained using the suggested model are close to the experimentally obtained UTS, MH, and HI values. The predicted and actual values of the UTS, MH, and HI from the suggested model are similar to those obtained with the other ensemble learning models.

4. Discussion

We presented an AMIS–machine learning combination model for multiple response prediction of AA-5083/AA-6061 friction stir welding, using the model parameters set out in Table 2, for prediction of the UTS, MH, and HI, including (1) tilt angle, (2) tool travel speed, (3) shoulder diameter, and (4) tool rotation speed. Our research executed and analyzed comparisons with the results obtained by [34,64]. The calculation results from testing the proposed approach (HE-WEAMIS) showed that it was superior to the GPR [63], SVM [63], RF [51], AB [57], and GB [48] methods in the literature. These existing methods did not perform as well as the heterogeneous ensemble network model. The resulting findings were consistent with the conclusions of [41,43]. These two studies built homogeneous and heterogeneous ensemble networks and also explained how a practical decision fusion strategy is more effective than unweighted average (UWE) methods. The decision fusion strategy is the reason for the significantly higher performance of our proposed model. Our methodology and its results are supported by [43,45,65,66].
In summary, the error percentage improvements for the UTS, MH, and HI clearly show that the WEAMIS method outperformed the existing methods on the datasets. These improvements were 15.75%, 16.75%, and 41.35%, respectively, for the 7PI-V1 training and testing datasets, and 56.25%, 59.20%, and 27.62%, respectively, for the 7PI-V2 dataset. Moreover, the difference in error rate between the training and testing datasets was small, indicating good generalization. The training and testing results indicate that our method (WEAMIS) is tolerant of dataset permutations and continues to outperform other methods even when the dataset is altered; although the number of parameters used to predict the UTS, MH, and HI increased, our model remained effective and outperformed methods such as SVM, GPR, RF, AB, and GB.
Table 13 shows the solutions calculated using the controlled parameters at the different levels. Furthermore, Table 2 shows that the proposed model is the most accurate in predicting the UTS, MH, and HI values, consistent with the results from [62]. Using the ML approach together with the ensemble learning method could reduce the RMSE and increase prediction accuracy. Table 13 presents the prediction results of the model for the UTS, MH, and HI of the parameter set, confirming that this method provides the highest accuracy. The respective CC and RMSE results were 0.981 and 5.44 for the UTS, 0.980 and 2.40 for the MH, and 0.988 and 4.49 for the HI. These results are more accurate than those in [34,64].
This article shows how the values of the UTS, MH, and HI can be accurately predicted using a model with an increased number of input parameters. The model presented provides more accurate prediction of the UTS, MH, and HI values with fewer input parameters than the models proposed in [24,34,35,37]. The results of the proposed model are more accurate and can be successfully applied to predict the UTS, MH, and HI for friction stir welding with all input parameters and can identify a solution without sample destruction.

5. Conclusions

Our research studied the characteristics of seven parameters in the FSW process using an ensemble machine learning model developed to predict the UTS, MH, and HI. The aluminum alloys AA-5083 and AA-6061 were joined using friction stir welding in the experiments. The presented approach combines GPR and support vector machine models. Two different decision fusion methods were applied in this research: the unweighted average ensemble (UWE) and the artificial multiple intelligence system (AMIS), applied to the outputs of the GPR and SVM models. The proposed model was tested on two datasets to predict the multiple responses of the weld seam. The WEAMIS method provided higher performance than all current methods. The ensemble structures were homogeneous and heterogeneous. When tested using the 7PI-V1 and 7PI-V2 datasets, the average values for the UTS, MH, and HI were 5.27%, 3.30%, and 4.71%, respectively. Weighting optimization using the AMIS method improved the average solution quality compared with the UWE model, with average values for the UTS, MH, and HI of 6.78%, 3.55%, and 1.35%, respectively. The experimental results from the datasets confirmed that the proposed model outperformed existing methods. Therefore, we conclude that the heterogeneous ensemble structure and AMIS improve on the fusion weighting of the earlier methods.
We also conclude that the machine learning combination model should be able to estimate responses for untested parameter combinations, avoiding the need for a full factorial design when conducting the experiment and generating the research dataset. The experiments also show that the prediction results from the model remain highly accurate when larger datasets are used. Future research should proceed in the following two directions to obtain high-quality results: (1) application-based or online tools that use the proposed model for welding, enabling the selection of appropriate parameters to obtain the desired UTS, MH, and HI values; and (2) the exploration of other parameters to obtain new results that could enable the model to be applied to different materials, parameters, and outputs using a progressive function system.

Author Contributions

Conceptualization, R.K. and C.C.; methodology, G.J.; software, S.G.; validation, R.K. and G.J.; formal analysis, W.S.; investigation, R.K.; resources, C.C.; data curation, S.G.; writing—original draft preparation, R.K.; writing—review and editing, G.J. and W.S.; visualization, C.C.; supervision, R.K.; project administration, R.K. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research was supported by the Department of Industrial Engineering, Rajamangala University of Technology Isan, Nakhon Ratchasima, Thailand.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Laska, A.; Szkodo, M.; Cavaliere, P.; Perrone, A. Influence of the Tool Rotational Speed on Physical and Chemical Properties of Dissimilar Friction-Stir-Welded AA5083/AA6060 Joints. Metals 2022, 12, 1658. [Google Scholar] [CrossRef]
  2. Torzewski, J.; Łazińska, M.; Grzelak, K.; Szachogłuchowicz, I.; Mierzyński, J. Microstructure and mechanical properties of dissimilar friction stir welded joint aa7020/aa5083 with different joining parameters. Materials 2022, 15, 1910. [Google Scholar] [CrossRef] [PubMed]
  3. Zainelabdeen, I.H.; Al-Badour, F.A.; Suleiman, R.K.; Adesina, A.Y.; Merah, N.; Ghaith, F.A. Influence of Friction Stir Surface Processing on the Corrosion Resistance of Al 6061. Materials 2022, 15, 8124. [Google Scholar] [CrossRef] [PubMed]
  4. Rani, P.; Mishra, R. Influence of Reinforcement with Multi-Pass FSW on the Mechanical and Microstructural Behavior of Dissimilar Weld Joint of AA5083 and AA6061. Silicon 2022, 14, 11219–11233. [Google Scholar] [CrossRef]
  5. Ogunsemi, B.; Abioye, T.; Ogedengbe, T.; Zuhailawati, H. A review of various improvement strategies for joint quality of AA 6061-T6 friction stir weldments. J. Mater. Res. Technol. 2021, 11, 1061–1089. [Google Scholar] [CrossRef]
  6. Da Silva, C.L.M.; Scotti, A. The influence of double pulse on porosity formation in aluminum GMAW. J. Mater. Process. Technol. 2006, 171, 366–372. [Google Scholar] [CrossRef]
  7. Fang, X.; Zhang, J. Effect of underfill defects on distortion and tensile properties of Ti-2Al-1.5 Mn welded joint by pulsed laser beam welding. Int. J. Adv. Manuf. Technol. 2014, 74, 699–705. [Google Scholar] [CrossRef]
  8. Guo, H.; Hu, J.; Tsai, H.-L. Formation of weld crater in GMAW of aluminum alloys. Int. J. Heat Mass Transf. 2009, 52, 5533–5546. [Google Scholar] [CrossRef]
  9. Darji, R.; Joshi, G.; Badheka, V.; Patel, D. Applications of Friction-Based Processes in Manufacturing. In Proceedings of the 6th International Conference on Advanced Production and Industrial Engineering (ICAPIE)—2021, Delhi, India, 18–19 June 2021. [Google Scholar]
  10. Rajendran, C.; Srinivasan, K.; Balasubramanian, V.; Sonar, T.; Balaji, H. Friction stir welding for manufacturing of a light weight combat aircraft structure. Mater. Test. 2022, 64, 1782–1795. [Google Scholar] [CrossRef]
  11. Singh, V.P.; Patel, S.K.; Ranjan, A.; Kuriachen, B. Recent research progress in solid state friction-stir welding of aluminium–magnesium alloys: A critical review. J. Mater. Res. Technol 2020, 9, 6217–6256. [Google Scholar] [CrossRef]
  12. Threadgill, P.L.; Leonard, A.J.; Shercliff, H.R.; Withers, P.J. Friction stir welding of aluminium alloys. Int. Mater. Rev. 2009, 54, 49–93. [Google Scholar] [CrossRef]
  13. Lakshminarayanan, A.K.; Malarvizhi, S.; Balasubramanian, V. Developing friction stir welding window for AA2219 aluminium alloy. Transac. Nonferrous Metals Soc. China 2011, 21, 2339–2347. [Google Scholar] [CrossRef]
  14. Zhu, C.; Tang, X.; He, Y.; Lu, F.; Cui, H. Characteristics and formation mechanism of sidewall pores in NG-GMAW of 5083 Al-alloy. J. Mater. Process. Technol. 2016, 238, 274–283. [Google Scholar] [CrossRef]
  15. Bisadi, H.; Tavakoli, A.; Sangsaraki, M.T.; Sangsaraki, K.T. The influences of rotational and welding speeds on microstructures and mechanical properties of friction stir weld Al5083 and commercially pure copper sheets lap joint. Mater. Design 2013, 43, 80–88. [Google Scholar] [CrossRef]
  16. Kadaganchi, R.; Gankidi, M.R.; Gokhale, H. Optimization of process parameters of aluminum alloy AA 2014-T6 friction stir welds by response surface methodology. Def. Technol. 2015, 11, 209–219. [Google Scholar] [CrossRef]
  17. Amir, H.L.; Salman, N. Effect of Welding Parameters on Microstructure, Thermal, and Mechanical Properties of Friction-Stir Welded Joints of AA7075-T6 Aluminum Alloy. Metall. Mater. Transac. A 2014, 45A, 2792–2807. [Google Scholar]
  18. Khan, N.Z.; Khan, Z.A.; Siddiquee, A.N. Effect of shoulder diameter to pin diameter (D/d) ratio on tensile strength of friction stir welded 6063 aluminium alloy. Mater. Today Proc. 2015, 2, 1450–1457. [Google Scholar] [CrossRef]
  19. Liu, H.; Zhang, H.; Pan, Q.; Yu, L. Effect of friction stir welding parameters on microstructural characteristics and mechanical properties of 2219-T6 aluminum alloy joints. Int. J. Mater. Form. 2012, 5, 235–241. [Google Scholar] [CrossRef]
  20. Elangovan, K.; Balasubramanian, V. Influences of tool pin profile and welding speed on the formation of friction stir processing zone in AA2219 aluminium alloy. J. Mater. Process. Technol. 2008, 200, 163–175. [Google Scholar] [CrossRef]
  21. Ilangovan, M.; Rajendra Boopathy, S.; Balasubramanian, V. Effect of tool pin profile on microstructure and tensile properties of friction stir welded dissimilar AA 6061eAA 5086 aluminium alloy joints. Def. Technol. 2015, 11, 174–184. [Google Scholar] [CrossRef]
  22. Ghaffarpour, M.; Kolahgar, S.; Dariani, B.M.; Dehghani, K. Evaluation of Dissimilar Welds of 5083-H12 and 6061-T6 Produced by Friction Stir Welding. Metall. Mater. Transac. A 2013, 44, 3697–3707. [Google Scholar] [CrossRef]
  23. RajKumar, V.; VenkateshKannan, M.; Sadeesh, P.; Arivazhagan, N.; Ramkumar, K.D. Studies on Effect of Tool Design and Welding Parameters on the Friction Stir Welding of Dissimilar Aluminium Alloys AA 5052—AA 6061. Procedia Eng. 2014, 75, 93–97. [Google Scholar] [CrossRef]
  24. Kasman, S.; Yenier, Z. Analyzing dissimilar friction stir welding of AA5754/AA7075. Int. J. Adv. Manuf. Technol. 2014, 70, 145–156. [Google Scholar] [CrossRef]
  25. Luesak, P.; Pitakaso, R.; Sethanan, K.; Golinska-Dawson, P.; Srichok, T.; Chokanat, P. Multi-Objective Modified Differential Evolution Methods for the Optimal Parameters of Aluminum Friction Stir Welding Processes of AA6061-T6 and AA5083-H112. Metals 2023, 13, 252. [Google Scholar] [CrossRef]
  26. Kianezhad, M.; Raouf, A.H. Effect of nano-Al2O3 particles and friction stir processing on 5083 TIG welding properties. J. Mater. Process. Technol. 2019, 263, 356–365. [Google Scholar] [CrossRef]
  27. Kahhal, P.; Ghasemi, M.; Kashfi, M.; Ghorbani-Menghari, H.; Kim, J.H. A multi-objective optimization using response surface model coupled with particle swarm algorithm on FSW process parameters. Sci. Rep. 2022, 12, 1–20. [Google Scholar] [CrossRef] [PubMed]
  28. Verma, S.; Gupta, M.; Misra, J.P. Optimization of process parameters in friction stir welding of armor-marine grade aluminium alloy using desirability approach. Mater. Res. Express 2018, 6, 026505. [Google Scholar] [CrossRef]
  29. Rajakumar, S.; Balasubramanian, V. Establishing relationships between mechanical properties of aluminium alloys and optimised friction stir welding process parameters. Mater. Des. 2012, 40, 17–35. [Google Scholar] [CrossRef]
  30. Gupta, S.K.; Pandey, K.; Kumar, R. Multi-objective optimization of friction stir welding process parameters for joining of dissimilar AA5083/AA6063 aluminum alloys using hybrid approach. J. Mater. Design Appl. 2018, 232, 343–353. [Google Scholar] [CrossRef]
  31. Kesharwani, R.; Panda, S.; Pal, S. Multi objective optimization of friction stir welding parameters for joining of two dissimilar thin aluminum sheets. Procedia Mater. Sci. 2014, 6, 178–187. [Google Scholar] [CrossRef]
  32. Pitakaso, R.; Nanthasamroeng, N.; Srichok, T.; Khonjun, S.; Weerayuth, N.; Kotmongkol, T.; Pornprasert, P.; Pranet, K. A Novel Artificial Multiple Intelligence System (AMIS) for Agricultural Product Transborder Logistics Network Design in the Greater Mekong Subregion (GMS). Computation 2022, 10, 126. [Google Scholar] [CrossRef]
  33. Kim, S.W.; Kong, J.H.; Lee, S.W.; Lee, S. Recent advances of artificial intelligence in manufacturing industrial sectors: A review. Int. J. Precis. Eng. Manuf. 2022, 2022, 1–19. [Google Scholar] [CrossRef]
  34. Eren, B.; Guvenc, M.A.; Mistikoglu, S. Artificial intelligence applications for friction stir welding: A review. Metals Mater. Int. 2021, 27, 193–219. [Google Scholar] [CrossRef]
  35. Senapati, N.P.; Panda, D.; Bhoi, R.K. Prediction of multiple characteristics of Friction-Stir welded joints by Levenberg Marquardt algorithm based artificial neural network. Mater. Today Proc. 2021, 41, 391–396. [Google Scholar] [CrossRef]
  36. Ashok, S.; Ponni alias sathya, S. A fuzzy model to predict the mechanical characteristics of friction stir welded joints of aluminum alloy AA2014-T6. Aeronaut. J. 2022, 1–13. [Google Scholar] [CrossRef]
  37. Sarsilmaz, F.; Kavuran, G. Prediction of the optimal FSW process parameters for joints using machine learning techniques. Mater. Test. 2021, 63, 1104–1111. [Google Scholar] [CrossRef]
  38. Dutt, A.K.; Sindhuja, K.; Reddy, S.V.N.; Kumar, P. Application of Artificial Neural Network to Friction Stir Welding Process of AA7050 Aluminum Alloy. Proc. ICAIASM 2019, 2021, 407–414. [Google Scholar]
  39. Frank, M.; Drikakis, D.; Charissis, V. Machine-learning methods for computational science and engineering. Computation 2020, 8, 15. [Google Scholar] [CrossRef]
  40. Poulinakis, K.; Drikakis, D.; Kokkinakis, I.W.; Spottswood, S.M. Machine-Learning Methods on Noisy and Sparse Data. Mathematics 2023, 11, 236. [Google Scholar] [CrossRef]
  41. Prasitpuriprecha, C.; Jantama, S.S.; Preeprem, T.; Pitakaso, R.; Srichok, T.; Khonjun, S.; Weerayuth, N.; Gonwirat, S.; Enkvetchakul, P.; Kaewta, C. Drug-Resistant Tuberculosis Treatment Recommendation, and Multi-Class Tuberculosis Detection and Classification Using Ensemble Deep Learning-Based System. Pharmaceuticals 2022, 16, 13. [Google Scholar] [CrossRef] [PubMed]
  42. Prasitpuriprecha, C.; Pitakaso, R.; Gonwirat, S.; Enkvetchakul, P.; Preeprem, T.; Jantama, S.S.; Kaewta, C.; Weerayuth, N.; Srichok, T.; Khonjun, S. Embedded AMIS-Deep Learning with Dialog-Based Object Query System for Multi-Class Tuberculosis Drug Response Classification. Diagnostics 2022, 12, 2980. [Google Scholar] [CrossRef] [PubMed]
  43. Yin, L.; Du, X.; Ma, C.; Gu, H. Virtual Screening of Drug Proteins Based on the Prediction Classification Model of Imbalanced Data Mining. Processes 2022, 10, 1420. [Google Scholar] [CrossRef]
  44. Karki, M.; Kantipudi, K.; Yang, F.; Yu, H.; Wang, Y.X.J.; Yaniv, Z.; Jaeger, S. Generalization Challenges in Drug-Resistant Tuberculosis Detection from Chest X-rays. Diagnostics 2022, 12, 188. [Google Scholar] [CrossRef]
  45. Mishra, A.; Sefene, E.M.; Nidigonda, G.; Tsegaw, A.A. Performance Evaluation of Machine Learning-based Algorithm and Taguchi Algorithm for the Determination of the Hardness Value of the Friction Stir Welded AA 6262 Joints at a Nugget Zone. arXiv 2022, arXiv:2203.11649. [Google Scholar]
  46. Al-Enzi, F.S.; Mohammed, S. Prediction Of Hardness And Wear Behaviour Of Friction Stir Processed Cast A319 Aluminum Alloys Using Machine Learning Technique. Eng. Res. J. 2020, 46, 16–26. [Google Scholar]
  47. Vignesh, R.V.; Padmanaban, R. Artificial neural network model for predicting the tensile strength of friction stir welded aluminium alloy AA1100. Mater. Today Proc. 2018, 5, 16716–16723. [Google Scholar] [CrossRef]
  48. Sefene, E.M.; Tsegaw, A.A.; Mishra, A. process parameter optimization of 6061AA friction stir welded joints using supervised machine learning regression-based algorithms. J. Soft Comput. Civil Eng. 2022, 6, 127–137. [Google Scholar]
  49. Anandan, B.; Manikandan, M. Machine learning approach with various regression models for predicting the ultimate tensile strength of the friction stir welded AA 2050-T8 joints by the K-Fold cross-validation method. Mater. Today Commun. 2023, 34, 105286. [Google Scholar] [CrossRef]
  50. Kumar, A.K.; Surya, M.S.; Venkataramaiah, P. Performance evaluation of machine learning based-classifiers in friction stir welding of Aa6061-T6 alloy. Int. J. Interact. Design Manuf. 2022, 2022, 1–4. [Google Scholar] [CrossRef]
  51. Verma, S.; Misra, J.P.; Popli, D. Modeling of friction stir welding of aviation grade aluminium alloy using machine learning approaches. Int. J. Modell. Simul. 2022, 42, 1–8. [Google Scholar] [CrossRef]
  52. Syah, A.; Astuti, W.; Saedon, J. Development of prediction system model for mechanical property in friction stir welding using support vector machine (SVM). J. Mech. Eng. 2018, 216–225. [Google Scholar]
  53. Verma, S.; Misra, J.P.; Singh, J.; Batra, U.; Kumar, Y. Prediction of tensile behavior of FS welded AA7039 using machine learning. Mater. Today Commun. 2021, 26, 101933. [Google Scholar] [CrossRef]
  54. Hartl, R.; Vieltorf, F.; Benker, M.; Zaeh, M.F. Predicting the ultimate tensile strength of friction stir welds using Gaussian process regression. J. Manuf. Mater. Process. 2020, 4, 75. [Google Scholar] [CrossRef]
  55. Mishra, A. Artificial intelligence algorithms for the analysis of mechanical property of friction stir welded joints by using python programming. Welding Technol. Rev. 2020, 92. [Google Scholar] [CrossRef]
  56. Upender, K.; Kumar, B.; Rao, M.; Ramana, M.V. Friction Stir Welding of IS: 65032 Aluminum Alloy and Predicting Tensile Strength Using Ensemble Learning. In Proceedings of the International Conference on Advances in Mechanical Engineering and Material Science, Andhra Pradesh, India, 22–24 January 2022; pp. 103–114. [Google Scholar]
  57. Mishra, A.; Morisetty, R. Determination of the Ultimate Tensile Strength (UTS) of friction stir welded similar AA6061 joints by using supervised machine learning based algorithms. Manuf. Lett. 2022, 32, 83–86. [Google Scholar] [CrossRef]
  58. Mishra, A.; Dasgupta, A. Supervised and Unsupervised Machine Learning Algorithms for Forecasting the Fracture Location in Dissimilar Friction-Stir-Welded Joints. Forecasting 2022, 4, 787–797. [Google Scholar] [CrossRef]
  59. Matitopanum, S.; Pitakaso, R.; Sethanan, K.; Srichok, T.; Chokanat, P. Prediction of the Ultimate Tensile Strength (UTS) of Asymmetric Friction Stir Welding Using Ensemble Machine Learning Methods. Processes 2023, 11, 391. [Google Scholar] [CrossRef]
  60. Gonwirat, S.; Surinta, O. Optimal weighted parameters of ensemble convolutional neural networks based on a differential evolution algorithm for enhancing pornographic image classification. Eng. Appl. Sci. Res. 2021, 48, 560–569. [Google Scholar]
  61. Chiaranai, S.; Pitakaso, R.; Sethanan, K.; Kosacka-Olejnik, M.; Srichok, T.; Chokanat, P. Ensemble Deep Learning Ultimate Tensile Strength Classification Model for Weld Seam of Asymmetric Friction Stir Welding. Processes 2023, 11, 434. [Google Scholar] [CrossRef]
  62. Sethanan, K.; Pitakaso, R. Improved differential evolution algorithms for solving generalized assignment problem. Expert Syst. Appl. 2016, 45, 450–459. [Google Scholar] [CrossRef]
  63. Verma, S.; Gupta, M.; Misra, J.P. Performance evaluation of friction stir welding using machine learning approaches. MethodsX 2018, 5, 1048–1058. [Google Scholar] [CrossRef] [PubMed]
  64. De Filippis, L.A.C.; Serio, L.M.; Facchini, F.; Mummolo, G.; Ludovico, A.D. Prediction of the vickers microhardness and ultimate tensile strength of AA5754 H111 friction stir welding butt joints using artificial neural network. Materials 2016, 9, 915. [Google Scholar] [CrossRef] [PubMed]
  65. Gonwirat, S.; Surinta, O. DeblurGAN-CNN: Effective Image Denoising and Recognition for Noisy Handwritten Characters. IEEE Access 2022, 10, 90133–90148. [Google Scholar] [CrossRef]
  66. Noppitak, S.; Surinta, O. dropCyclic: Snapshot ensemble convolutional neural network based on a new learning rate schedule for land use classification. IEEE Access 2022, 10, 60725–60737. [Google Scholar] [CrossRef]
Figure 1. Framework of the AMIS–ML combination model for the forecasting of UTS, MH, and HI.
Figure 2. Tool geometry for the experiment: (a) 3D and (b) 2D views.
Figure 3. Specimen cutting for tensile and hardness testing.
Figure 4. The FSW experimental process.
Figure 5. Gaussian process regression [59].
Figure 6. Framework of homogenous and heterogenous structures [59].
Figure 7. Experimental framework.
Figure 8. Compared tested values of the actual vs. predicted UTS for the (a) 11PI-V1, (b) 11PI-V2, (c) 7PI-V1, and (d) 7PI-V2 datasets.
Table 1. Literature review of materials, parameters, responses, and methods used in earlier literature and this study.
Columns: Materials | Parameters: TRS (rpm), TTS (mm/min), SD (mm), PD (mm), PL (mm), PeD (mm), TA (degrees), TM, PT, TP, TAD, Other | Responses: UTS (MPa), MH (HV), HI (°C), Other | Methods: ML, EML, Other | Ref.
AA6262--------------[45]
A319---------No. of Passes--wear rates--[46]
AA6061-T6 ------------[47]
AA6061---------axial forces-----[48]
AA 2050-T8-------------[49]
AA6061-T6------tool hardness--yield strength--[50]
AA6082------------- [51]
AA6061---------------[52]
AA7039--------------[53]
ENAW-6082-T6---------------[54]
AA6061-T6--------------[55]
IS:65032-------------[56]
AA6061---------axial forces-----[57]
AA5754/C11000--------------[58]
AA5083-AA6061------[59]
AA5083-AA6061--------our
Table 2. Dataset details.
Dataset Type | 7PI-V1 | 7PI-V2
Training dataset | 45 | 43
Testing dataset | 12 | 11
Total | 57 | 54
Table 3. Mechanical properties of base materials.
Aluminum Alloy | Ultimate Tensile Strength (MPa) | Maximum Hardness (HV)
AA5083 | 300 | 91
AA6061 | 310 | 107
Table 4. Input parameters for friction stir welding.
Continuous variables (levels −1 and 1):
Tool tilt angle (degrees): 0, 3
Tool rotational speed (rpm): 150, 1500
Tool travel speed (mm/min): 15, 135
Shoulder diameter (mm): 18, 25
Categorical variables (levels):
Pin geometry: Straight cylinder, Hexagonal cylinder, Threaded cylinder
Reinforcement particle type: Silicon carbide, Aluminum oxide
Tool pin movement: Straight, Zigzag, Circles
Table 5. Input parameters for friction stir welding.
Continuous variables (levels −1 and 1):
Tilt angle (degrees): 0, 3
Tool rotational speed (rpm): 800, 1500
Tool travel speed (mm/min): 15, 75
Shoulder diameter (mm): 18, 24
Categorical variables (levels):
Pin geometry: Straight cylinder, Threaded cylinder, Hexagonal cylinder
Reinforcement particle type: Silicon carbide, Aluminum oxide
Tool pin movement: Straight, Zigzag, Circles
Table 6. The track i and the position.
Track i \ Position: 1, 2, 3, 4, 5, 6, 7, 8, ..., 99, 100
Track 1: 0.39, 0.56, 0.24, 0.97, 0.82, 0.33, 0.06, ..., 0.14, 0.29
Track 2: 0.67, 0.51, 0.19, 0.72, 0.21, 0.48, 0.80, ..., 0.32, 0.45
Table 7. Parameter determination.
Regressor | User-Defined Parameters
GPR [63] | Kernel = 'rbf', gamma = 7, noise = 0.2
RF [51] | Learner = 100, max leaf = 1
SVM [63] | Kernel = 'rbf', gamma = 7, C = 0.2
AB [57] | Learner = 100, max leaf = 5
GB [48] | Learner = 100, max leaf = 5, learning rate = 0.001
Ho-UWE, HE-UWE, Ho-WEDE, HE-WEDE, GPR-HO-UWE, GPR-HO-WEDE, SVM-HO-UWE and SVM-HO-WEDE [57] | learner = 100
Our presented ensemble learning (Ho-WEAMIS, WEAMIS, SVM-HO-WEAMIS and GPR-HO-WEAMIS) | learner = 100
Table 8. Performance of the various machine learning models on the FSW-11PI-V1 and FSW-11PI-V2 datasets.
Columns: Machine learning and ensemble machine learning method | 11PI-V1 training dataset: CC and RMSE for UTS, MH, HI | 11PI-V1 testing dataset: CC and RMSE for UTS, MH, HI | 11PI-V2 testing dataset: CC and RMSE for UTS, MH, HI.
GPR [63]0.9830.9650.9683.733.166.580.9900.9710.9873.733.166.580.9950.9670.9686.693.217.35
SVM0.9810.9860.9714.522.065.690.9890.9870.9914.522.065.690.9830.9710.9766.633.177.09
RF0.9750.9820.9794.232.275.750.9900.9800.9904.232.275.750.9960.9730.9854.892.876.21
AB0.9810.9680.9784.632.585.770.9890.9770.9904.632.585.770.9760.9770.9834.062.516.11
GB0.9840.9760.9814.672.765.830.9880.9790.9914.672.765.830.9740.9710.9766.673.157.12
GPR-HO-UWE0.9890.9790.9883.621.924.270.9930.9870.963.621.924.270.9890.9670.9715.723.247.32
GPR-HO-WEDE0.9900.9780.9893.671.914.290.9920.9870.9963.671.914.290.9940.9760.9864.092.446.02
SVM-HO-UWE0.9850.9730.9774.232.275.450.9900.9800.9924.232.275.450.9940.9800.9894.002.315.92
SVM-HO-WEDE0.9910.9820.9914.022.114.980.9940.9820.9934.022.114.980.9890.9790.9893.902.345.93
HE-UWE0.9920.9850.9923.531.894.150.9930.9880.9973.531.894.150.9940.9810.9903.782.275.89
HE-WEDE0.9920.9900.9933.461.884.160.9950.9890.9973.461.884.160.9960.9870.9903.392.075.88
HE-WEAMIS0.9940.9880.9973.321.884.020.9960.9890.9983.321.884.020.9980.9880.9903.222.035.87
SVM-HO-WEAMIS0.9930.9870.9933.891.894.320.9910.9880.9963.891.894.320.9900.9820.9893.962.215.91
GPR-HO-WEAMIS0.9930.9800.9963.671.914.280.9950.9870.9963.671.914.280.9890.9790.9893.872.355.92
Table 9. K-fold cross-validation of the models with the training dataset.
Columns: Machine learning and ensemble machine learning method | RMSE for UTS (2-cv, 3-cv, 5-cv) | RMSE for MH (2-cv, 3-cv, 5-cv) | RMSE for HI (2-cv, 3-cv, 5-cv).
GPR6.70 ± 0.126.70 ± 0.125.33 ± 0.117.38 ± 0.416.07 ± 0.414.93 ± 0.3611.78 ± 0.5610.06 ±0.568.51 ± 0.56
SVM7.02 ± 0.427.02 ± 0.425.72 ± 0.374.41 ± 0.254.13 ± 0.253.43 ± 0.2210.55 ±0.2810.08 ±0.287.71 ± 0.28
RF7.11 ± 0.217.11 ± 0.205.69 ± 0.185.23 ± 0.384.71 ± 0.383.92 ± 0.3411.38 ±0.758.89 ± 0.757.56 ± 0.75
AB [57]6.18 ± 0.386.18 ± 0.385.05 ± 0.346.63 ± 0.315.64 ± 0.314.59 ± 0.279.47 ± 0.5110.51 ±0.517.85 ± 0.51
GB [48]5.06 ± 0.235.06 ± 0.234.09 ± 0.206.7 ± 0.266.07 ± 0.265.01 ± 0.2310.41 ±0.898.98 ± 0.897.91 ± 0.89
GPR-HO-UWE4.66 ± 0.174.66 ± 0.173.91 ± 0.154.08 ± 0.404.05 ± 0.403.26 ± 0.357.55 ± 0.227.25 ± 0.225.50 ± 0.22
GPR-HO-WEDE4.6 ± 0.184.6 ± 0.183.81 ± 0.164.03 ± 0.314.11 ± 0.313.29 ± 0.288.52 ± 0.747.21 ± 0.745.53 ± 0.74
SVM-HO-UWE5.53 ± 0.215.53 ± 0.214.67 ± 0.185.75 ± 0.314.70 ± 0.313.84 ± 0.289.21 ± 0.88.48 ± 0.807.25 ± 0.80
SVM-HO-WEDE5.42 ± 0.345.42 ± 0.344.21 ± 0.305.71 ± 0.334.84 ± 0.333.79 ± 0.298.45 ± 0.417.32 ± 0.416.33 ± 0.41
HE-UWE4.36 ± 0.174.36 ± 0.173.73 ± 0.154.39 ± 0.283.86 ± 0.283.12 ± 0.256.92 ± 0.626.39 ± 0.625.43 ± 0.62
HE-WEDE [59]4.65 ± 0.184.65 ± 0.183.74 ± 0.164.43 ± 0.394.01 ± 0.393.34 ± 0.358.03 ± 0.346.29 ± 0.345.35 ± 0.34
HE-WEAMIS4.56 ± 0.304.56 ± 0.313.62 ± 0.284.84 ± 0.373.91 ± 0.373.21 ± 0.326.19 ± 0.525.47 ± 0.524.89 ± 0.52
SVM-HO-WEAMIS4.57 ± 0.284.57 ± 0.283.66 ± 0.254.25 ± 0.284.02 ± 0.283.26 ± 0.257.84 ± 0.757.18 ± 0.755.43 ± 0.75
GPR-HO-WEAMIS4.63 ± 0.164.63 ± 0.163.79 ± 0.144.78 ± 0.393.72 ± 0.393.24 ± 0.346.71 ± 0.467.13 ± 0.465.39 ± 0.46
Table 10. Performance of dissimilar machine learning models on the 7PI-V1 dataset.
Columns: Machine learning and ensemble machine learning method | Training dataset: CC (UTS, MH, HI) and RMSE (UTS, MH, HI) | Testing dataset: CC (UTS, MH, HI) and RMSE (UTS, MH, HI).
GPR0.9610.9300.9719.487.806.220.9870.9670.9786.737.405.10
SVM0.9590.9590.9699.626.246.070.9820.9790.9716.225.025.87
RF0.9830.9830.9975.033.762.790.9710.6680.9628.477.316.31
AB0.9830.9670.9866.144.734.260.9800.9850.9697.024.555.70
GB0.9840.9730.9916.083.993.390.9660.9710.9599.666.716.57
GPR-HO-UWE0.9930.9290.9685.043.743.030.9930.9800.9795.034.975.03
GPR-HO-WEDE0.9880.9530.9714.613.673.010.9910.9860.9805.354.424.97
SVM-HO-UWE0.9920.9810.9914.823.752.810.9900.9850.9755.664.505.51
SVM-HO-WEDE0.9890.9650.9854.713.632.980.9920.9870.9775.254.185.36
HE-UWE0.9880.9820.9954.563.692.790.9930.9880.9775.094.085.20
HE-WEDE0.9970.9830.9924.313.562.730.9960.9840.9824.914.344.85
HE-WEAMIS0.9990.9910.9944.123.332.780.9960.9900.9834.894.004.74
SVM-HO-WEAMIS0.9980.9910.9924.233.372.750.9960.9880.9814.904.074.93
GPR-HO-WEAMIS0.9970.9900.9954.173.312.760.9970.9880.9824.874.084.87
Table 11. The different ratios (%diff) of the 7PI-V1 dataset.
Method | %diff UTS | %diff MH | %diff HI
GPR | 27.34 | 45.95 | 7.06
SVM | 21.38 | 20.32 | 19.25
RF | 42.27 | 45.28 | 24.88
AB | 30.34 | 12.09 | 16.84
GB | 49.38 | 40.39 | 27.85
GPR-HO-UWE | 2.78 | 19.52 | 5.77
GPR-HO-WEDE | 8.60 | 9.50 | 4.63
SVM-HO-UWE | 13.60 | 11.11 | 13.97
SVM-HO-WEDE | 6.86 | 4.31 | 11.57
HE-UWE | 3.93 | 1.96 | 8.85
HE-WEDE | 0.41 | 7.83 | 2.27
HE-WEAMIS | 0.00 | 0.00 | 0.00
SVM-HO-WEAMIS | 0.20 | 1.72 | 3.85
GPR-HO-WEAMIS | −0.41 | 1.96 | 2.67
Table 12. The results of the experiment.
Columns: Set | SD (mm) | TA (degrees) | TRS (rpm) | TTS (mm/min) | PT | TP | TM | UTS (MPa) | MH (HV) | HI (°C).
118080015TCSiCZigzag182.6685.71412.66
2211.580015TCSiCZigzag180.1685.35417.50
324380015TCSiCZigzag183.4087.16409.73
4180120050StCSiCZigzag208.7188.46442.69
5211.5120050StCSiCZigzag209.8589.71438.01
6243120050StCSiCZigzag207.5988.64444.58
7180150075HCSiCZigzag239.06101.87439.67
8211.5150075HCSiCZigzag241.90100.28440.63
9243150075HCSiCZigzag236.89102.29436.69
10180120075TCSiCStraight240.57102.71440.93
11211.5120075TCSiCStraight241.55100.47436.56
12243120075TCSiCStraight237.7352.99434.88
13180150015StCSiCStraight245.41101.74450.34
14211.5150015StCSiCStraight246.24101.18449.26
15243150015StCSiCStraight242.86102.18452.78
1618080050HCSiCStraight157.8166.98399.37
17211.580050HCSiCStraight156.4865.85395.32
1824380050HCSiCStraight158.8564.82400.29
19180150050TCSiCcircles241.59102.97435.56
20211.5150050TCSiCcircles238.71100.61433.33
21243150050TCSiCcircles243.06101.67437.35
2218080075StCSiCcircles165.0070.13406.41
23211.580075StCSiCcircles163.2768.23403.09
2424380075StCSiCcircles166.0371.19410.81
25180120015HCSiCcircles234.79100.85448.10
26211.5120015HCSiCcircles239.18101.13453.24
27243120015HCSiCcircles232.8298.69445.69
28180120050TCAOZigzag218.7295.81428.15
29211.5120050TCAOZigzag216.1996.36421.99
30243120050TCAOZigzag219.7997.58432.49
31180150075StCAOZigzag241.04104.83444.76
32211.5150075StCAOZigzag238.78103.25434.49
33243150075StCAOZigzag242.32106.27446.87
3418080015HCAOZigzag181.3581.96418.53
35211.580015HCAOZigzag182.1482.84421.95
3624380015HCAOZigzag179.5279.79412.45
3718080075TCAOStraight160.0568.26406.64
38211.580075TCAOStraight158.0867.19398.61
3924380075TCAOStraight157.6866.19399.71
40180120015StCAOStraight224.4896.08435.96
41211.5120015StCAOStraight221.1995.27428.16
42243120015StCAOStraight226.4496.99438.16
43180150050HCAOStraight240.99103.9434.62
44211.5150050HCAOStraight245.25104.64439.87
45243150050HCAOStraight243.04104.17435.95
46180150015TCAOcircles237.02102.17438.54
47211.5150015TCAOcircles236.89102.52434.77
48243150015TCAOcircles241.69104.12440.27
4918080050StCAOcircles142.9261.31399.77
50211.580050StCAOcircles141.1460.41396.66
5124380050StCAOcircles141.9260.71398.73
52180120075HCAOcircles222.0290.94432.71
53211.5120075HCAOcircles223.0691.63429.62
54243120075HCAOcircles225.3693.02436.71
Table 13. Compared performance of machine learning methods with the 7PI-V2 dataset.
Columns: Machine learning and ensemble machine learning method | Training dataset: CC (UTS, MH, HI) and RMSE (UTS, MH, HI) | Testing dataset: CC (UTS, MH, HI) and RMSE (UTS, MH, HI).
GPR0.9220.8050.9279.227.856.770.8910.8500.93313.156.548.18
SVM0.9660.7810.9856.227.064.820.9200.8980.97911.315.396.23
RF0.9950.8260.9962.746.193.280.9680.9770.9897.192.534.14
AB0.9750.8010.9895.196.804.250.9450.9380.9829.354.225.69
GB0.9740.8060.9914.606.454.270.9430.9460.9789.094.056.64
GPR-HO-UWE0.9870.8080.9933.376.363.680.9720.9570.9836.593.455.03
GPR-HO-WEDE0.9940.8030.9953.126.413.430.9780.9610.9865.843.334.71
SVM-HO-UWE0.9890.8090.9933.266.423.590.9810.9590.9845.453.504.89
SVM-HO-WEDE0.9870.8070.9933.116.493.690.9790.9620.9865.743.274.71
HE-UWE0.9920.8210.9892.836.253.290.9770.9780.9896.022.534.23
HE-WEDE0.9930.8240.9902.516.153.280.9770.9790.9906.072.464.08
HE-WEAMIS0.9940.8310.9932.385.943.250.9810.9800.9885.442.404.49
SVM-HO-WEAMIS0.9910.8110.9913.026.413.430.9790.9620.9855.662.784.65
GPR-HO-WEAMIS0.9910.8240.9952.666.293.250.9780.9790.9875.862.454.56
Table 14. The different ratios (percentage difference) for the 7PI-V2 dataset.
Method | %diff UTS | %diff MH | %diff HI
GPR | 58.63 | 63.30 | 45.11
SVM | 51.90 | 55.47 | 27.93
RF | 24.34 | 5.14 | −8.45
AB | 41.82 | 43.13 | 21.09
GB | 40.15 | 40.74 | 32.38
GPR-HO-UWE | 17.45 | 30.43 | 10.74
GPR-HO-WEDE | 6.85 | 27.93 | 4.67
SVM-HO-UWE | 0.18 | 31.43 | 8.18
SVM-HO-WEDE | 5.23 | 26.61 | 4.67
HE-UWE | 9.63 | 5.14 | −6.15
HE-WEDE | 10.38 | 2.44 | −10.05
HE-WEAMIS | 0.00 | 0.00 | 0.00
SVM-HO-WEAMIS | 3.89 | 13.67 | 3.44
GPR-HO-WEAMIS | 7.17 | 2.04 | 1.54
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
