Article

Application of Deep Learning Techniques to Predict the Mechanical Strength of Al-Steel Explosive Clads

by Somasundaram Saravanan 1,*, Kanagasabai Kumararaja 1 and Krishnamurthy Raghukandan 2
1 Department of Mechanical Engineering, Annamalai University, Annamalainagar 608002, TN, India
2 Department of Manufacturing Engineering, Annamalai University, Annamalainagar 608002, TN, India
* Author to whom correspondence should be addressed.
Metals 2023, 13(2), 373; https://doi.org/10.3390/met13020373
Submission received: 30 January 2023 / Revised: 8 February 2023 / Accepted: 10 February 2023 / Published: 12 February 2023
(This article belongs to the Special Issue Explosive Welding and Impact Mechanics of Metal and Alloys)

Abstract
In this study, the tensile and shear strengths of aluminum 6061-differently grooved stainless steel 304 explosive clads are predicted using deep learning algorithms, namely the convolutional neural network (CNN), deep neural network (DNN), and recurrent neural network (RNN). The explosive cladding process parameters, viz., the loading ratio (mass of the explosive/mass of the flyer plate, R: 0.6–1.0), standoff distance, D (5–9 mm), preset angle, A (0–10°), and groove in the base plate, G (V/dovetail), were varied in 60 explosive cladding trials. The deep learning algorithms were trained in a Python environment using the tensile and shear strengths acquired from 80% of the experiments, supplemented by trial runs and previous results. The remaining experimental findings were used to evaluate the developed models. The DNN model predicts the tensile and shear strengths with an accuracy of 95% and less than 5% deviation from the experimental results.


1. Introduction

Aluminum-steel bimetallic clads are widely used in engineering applications, such as shipbuilding, the chemical industry, and commercial and military aircraft, due to their ability to lower the weight of structural components while improving corrosion resistance [1]. However, due to the significant differences in physical and mechanical properties, joining aluminum to steel by traditional fusion welding techniques is difficult. Solid-state welding processes, such as friction welding, explosive cladding, and diffusion bonding, provide reliable options to join this combination [2]. Of the three techniques, explosive cladding is preferred due to its short process time (less than 50 µs) [3].
The quality of an explosive clad is dictated by its mechanical properties, which are influenced by process parameters such as loading ratio, standoff distance, preset angle, surface finish, collision velocity, flyer plate velocity, and flyer plate thickness [4]. Recently, Kumar et al. explosively cladded aluminum with magnesium at varied loading ratios and reported an increase in mechanical strength with the loading ratio [5]. Chen et al. reported the variation in microstructure and mechanical strength of titanium-duplex steel clads subjected to varied standoff distances (1 to 10 mm) [6]. In their attempt to enhance the mechanical strength of the Al-steel clad, Li et al. [7] machined a dovetail groove in the base plate and reported improved mechanical properties. Tamilchelvan et al. [8], while cladding titanium-stainless steel plates, varied the preset angle between 3 and 15° and recommended a maximum of 10°. However, expressing the relationship between process parameters and mechanical strength is intricate, as the mechanism of the explosive cladding process is complicated [9]. In earlier studies, a few researchers described the relationship between interface microstructure and mechanical strength of dissimilar explosive clads [10,11]. Though the metallurgical approach is effective, its complexity and time-consuming nature motivate researchers to look for a rapid and reliable alternative. In recent years, the use of software to predict the mechanical properties of weld joints has been increasing. Artificial neural networks (ANN) and support vector machines (SVM) are the two main techniques employed to predict mechanical properties, owing to their ability to solve complex nonlinear problems.
While predicting the peak temperature developed during dissimilar-grade aluminum friction stir welding, Anandan and Manikandan employed decision tree regression (DTR), random forest regression (RFR), linear regression (LR), polynomial regression (PR), and support vector regression (SVR) machine learning techniques and concluded that the DTR and RFR models are superior owing to their tree-type structure [12]. Likewise, five machine learning techniques were successfully employed by Mishra and Morisetty to predict the impact of process parameters (tool traverse speed, tool rotational speed, and axial force) on the ultimate tensile strength (UTS) of friction stir welded AA6061 alloys [13]. In this context, Feng et al. proposed an SPDTRS-CS-ANN hybrid algorithm to predict the fatigue life of EH36 grade steel friction stir weld joints with a variation below 10% [14]. In a similar attempt, Mongan et al. implemented a hybrid GA-ANN model that predicted the lap shear strength of ultrasonically welded Al 5754 joints with a 7.55% deviation from the experimental results [15]. In a novel attempt, Chen et al. determined the quality of resistance spot-weld joints via online inspection [16].
Deep learning has lately evolved into a more effective technique that is being adopted by many researchers in materials processing due to its greater capability to handle raw data with enhanced precision, reliability, and concise analysis [17]. Ma et al. identified the porosities formed during the laser welding of aluminum alloys using a CNN [18]. Wu et al. used a twenty-layer CNN to predict the weld strength of ultrasonically welded joints [19]. To predict tiny crack patterns in FRP laminates, Ding et al. successfully designed two DNN models based on regression and classification [20]. Wei et al. predicted fracture patterns using integrated neural network and discrete simulation models and concluded that this technique had a higher computational efficiency [21]. To identify voids in friction stir welded joints, Rabe et al. used LSTM and BiLSTM approaches and achieved 93% classification accuracy [22]. Using an LSTM-RNN approach, Wu et al. accurately forecasted the mechanical behavior of structural steel at high temperatures [23]. In the work of Wang et al., a one-dimensional CNN outperformed the LSTM and bidirectional LSTM models in detecting faults in glass-fiber-reinforced polymers [24].
Quick and accurate prediction algorithms are therefore needed to estimate the mechanical properties of explosive clads. Though deep learning approaches, e.g., recurrent neural networks (RNN), convolutional neural networks (CNN), and deep neural networks (DNN), have proven their capabilities, they have not yet been applied to predict the mechanical strength of explosive clads. Hence, convolutional models with single and multiple convolutional layers, a deep neural network, and a recurrent neural network are developed to predict the mechanical properties of Al 6061-SS 304 explosive clads, and the deviation from the experimental results is reported.

2. Materials and Methods

In an inclined explosive cladding configuration (Figure 1a), detailed elsewhere [25], aluminum 6061 (wt.%: Cr-0.23, Si-0.5, Cu-0.28, Fe-0.45, Mg-1.1, Mn-0.15, Zn-0.25, Al-Bal) sheets and stainless steel 304 (Cr-18.9, Ni-8.4, C-0.015, Si-0.48, Cu-0.043, Mn-1.8, Fe-Bal) plates of uniform dimensions (110 mm × 50 mm) were employed as flyer (3 mm thick) and base (8 mm thick) plates, respectively. Prior to cladding, the mating surface of the base plates (SS 304) was machined along the transverse direction to create a dovetail (2 mm wide, 1 mm deep) or V-groove (2 mm wide, 1 mm deep), as illustrated in Figure 1b. The standoff distance, D, between the flyer and base plates was varied from 5 mm to 9 mm, and the preset angle, A, between the participant alloys was varied from 0° to 10°. The chemical explosive (density: 1.2 g/cm3, detonation velocity: 4200 m/s) was packed above the flyer plate and initiated by an electrical detonator, for an explosive loading ratio, R (mass of the explosive/mass of the flyer plate), varying from 0.6 to 1.0. The ranges of the parameters for the experimental conditions attempted (Table 1) were determined from trial experiments.
The explosive clad specimens are shown in Figure 1c, and the characteristic undulating interface microstructures are shown in Figure 1d. When the preset angle, A, is set at 10°, for a loading ratio, R, of 0.6 and a standoff distance, D, of 5 mm, the Al 6061-grooveless SS 304 clad exhibits a wavy morphology with a streak of molten layer (10 µm thick) at the interface. The formation of a molten layer reduces the strength of the clad (Table 1), consistent with a previous study [4]. For the same condition, the Al 6061-‘V’ grooved SS 304 interface microstructure (Figure 1e) shows undulated, continuous bonding at the interface.
Three tensile test specimens were prepared for each condition in the detonation direction (Figure 1g: ASTM E8-16 sub-size standard) and tested in an automated UNITEK-94100 UTM. In a similar way, three shear test specimens (Figure 1f; ASTM B 898 standard) were prepared for each condition and tested by applying a compressive force.
The proposed deep learning models have four inputs (R, D, A, and G) and three outputs (TS, Sh.S, and IS) and were trained with the standardized data obtained from the experiments and trial experiments. Data processing, modeling, and validation are the three essential phases of deep learning [17]. The data acquired from the mechanical tests described above are utilized for the first phase, data processing. After processing, a model is constructed to analyze the data. The selection of algorithms, training, and developing predictions are the steps involved in modeling. Supervised deep learning models, namely CNN, DNN, and RNN, are chosen for modeling owing to their superiority over competing algorithms. Since the demand is to predict the mechanical strength of the explosive clads, regression variants of the above techniques are chosen to build, train, and test the proposed models. The prediction performance and accuracy of the developed models are evaluated in the final stage of deep learning, validation. The systematic steps in the analysis are schematically illustrated in Figure 2.
The original data were divided into training and testing sets for deep learning. The training set comprises 80% of the experimental data (3 specimens for each of the 48 conditions; 48 × 3 = 144 specimens), trial experiments, and previous results. The model is trained, in a Python environment, using the training set (800 data points) and then validated using the test set (200), followed by validation with data not utilized for training and testing. During training, the values of the process parameters (R, A, and D) were fed in their existing form, while the groove (G) was mapped to numerical values (no-groove: 1, V-groove: 2, dovetail-groove: 3). The deep learning models attempted are described below.
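As a sketch, the categorical encoding and 80/20 split described above can be written as follows (the helper names and sample values are illustrative, not the authors' code):

```python
import numpy as np

GROOVE_CODES = {"none": 1, "V": 2, "dovetail": 3}  # no-groove: 1, V: 2, dovetail: 3

def encode_row(R, D, A, groove):
    """R, D, A are fed in their existing form; the groove label is mapped to an integer."""
    return [R, D, A, GROOVE_CODES[groove]]

def split_80_20(X, y, seed=0):
    """Shuffle the dataset and split features/targets 80%/20% for training/testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

# Five illustrative rows (R, D in mm, A in degrees, groove type) and strengths (MPa)
X = np.array([encode_row(0.8, 7, 5, "V"), encode_row(0.6, 5, 0, "none"),
              encode_row(1.0, 9, 10, "dovetail"), encode_row(0.7, 6, 5, "V"),
              encode_row(0.9, 8, 0, "none")], dtype=float)
y = np.array([392.0, 344.0, 370.0, 380.0, 360.0])
Xtr, ytr, Xte, yte = split_80_20(X, y)
```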

3. Deep Learning Models

3.1. Convolutional Neural Network

The mathematical operation of convolution, which extracts particular features in pattern recognition tasks (e.g., from image pixels), is the fundamental idea behind a convolutional neural network. A kernel matrix is slid across the input image matrix to produce feature maps for the subsequent layer [17]. With an image represented by f, the kernel by h, q and r the indices of the resulting row and column, and i and j the relative positions, the convolution is given by [17]
$$(f * h)[q, r] = \sum_{i} \sum_{j} h[i, j]\, f[q - i,\, r - j]$$
The activation function (ReLU) is applied after the convolution operation to produce a non-linear transformation, and max-pooling layers are then applied. Max-pooling layers downsample the output of the feature map in order to make the representation robust to slight changes. The nodes following the pooling layers are flattened into a fully connected layer to produce predictions. To minimize error and avoid the vanishing gradient problem, rectified linear units (ReLU), non-linear activation functions, are applied in each layer [26]. ReLU is written in the following mathematical notation [26]:
$$f(x) = \max(0, x)$$
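The convolution formula above, followed by the ReLU activation, can be illustrated with a minimal NumPy sketch (the toy image and kernel are invented for illustration):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv2d(f, h):
    """'Valid' 2-D convolution per (f*h)[q,r] = sum_ij h[i,j] f[q-i, r-j].
    Flipping the kernel turns mathematical convolution into a sliding dot product."""
    kh, kw = h.shape
    H, W = f.shape
    hf = h[::-1, ::-1]  # flipped kernel
    out = np.zeros((H - kh + 1, W - kw + 1))
    for q in range(out.shape[0]):
        for r in range(out.shape[1]):
            out[q, r] = np.sum(hf * f[q:q + kh, r:r + kw])
    return out

f = np.arange(9.0).reshape(3, 3)         # toy "image"
h = np.array([[1.0, 0.0], [0.0, -1.0]])  # toy kernel
feature_map = relu(conv2d(f, h))         # a 2 x 2 non-negative feature map
```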

3.2. Deep Neural Network

A DNN is a more sophisticated ANN with more hidden layers [27]. One or more neurons are used in each input, hidden, and output layer of the DNN. The number of hidden layers and neurons in a DNN is determined via hyperparameter tuning [28]. Each neuron in a layer is fully connected to the next via weight vectors. In a DNN, each node’s output is routed through a non-linear activation function (ReLU) in the fully connected layers. In other words, each node in a layer receives input from the prior layer via a dense network of connections to make predictions.
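A minimal forward pass through such a fully connected network can be sketched as follows (the layer sizes and random weights are illustrative, not the tuned architecture reported later):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dnn_forward(x, layers):
    """Fully connected forward pass: each hidden layer applies W @ x + b then ReLU;
    the output layer is linear, as is usual for regression."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

rng = np.random.default_rng(1)
sizes = [4, 16, 16, 3]  # 4 inputs (R, D, A, G) -> two hidden layers -> 3 outputs
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
pred = dnn_forward(np.array([0.8, 7.0, 5.0, 2.0]), layers)  # untrained, shapes only
```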

3.3. Recurrent Neural Network

RNNs are frequently employed to solve problems with temporal correlations and those that display temporal dynamic behavior [29]. They create a cycle that joins the hidden layer to the earlier ones. These recurrent units are ideal for problems whose output depends on prior values, since they have the capacity to store historical information from the sequence [29]. In contrast to conventional ANNs, overfitting can be mitigated by randomly dropping out a specific percentage of neurons from the network, whose associated weights are then not updated during the forward or backward pass of the training phase [30].
In neural networks, feedback connections are incorporated in two different ways: feedback on activation and feedback on output. These schemes are distinct from state-space representations of neural networks. A neuron in a network employing activation feedback produces the following output [31]:
$$v(k) = \sum_{i=0}^{M} w_{u,i}(k)\, u(k-i) + \sum_{j=1}^{N} w_{v,j}(k)\, v(k-j)$$
$$y(k) = \Phi(v(k))$$
In a network with output feedback, the transfer function of a neuron can be written as [31]:
$$v(k) = \sum_{i=0}^{M} w_{u,i}(k)\, u(k-i) + \sum_{j=1}^{N} w_{y,j}(k)\, y(k-j)$$
$$y(k) = \Phi(v(k))$$
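Either feedback scheme reduces to a weighted sum over past inputs and past internal states. A single neuron with activation feedback, following the first pair of equations above, can be sketched as (the weights, impulse input, and choice of tanh for Φ are illustrative):

```python
import numpy as np

def activation_feedback_neuron(u, w_u, w_v, phi=np.tanh):
    """Single recurrent neuron with activation feedback:
    v(k) = sum_i w_u[i] u(k-i) + sum_j w_v[j-1] v(k-j),  y(k) = phi(v(k))."""
    M, N = len(w_u) - 1, len(w_v)
    v = np.zeros(len(u))
    y = np.zeros(len(u))
    for k in range(len(u)):
        ff = sum(w_u[i] * u[k - i] for i in range(M + 1) if k - i >= 0)  # input taps
        fb = sum(w_v[j - 1] * v[k - j] for j in range(1, N + 1) if k - j >= 0)  # memory
        v[k] = ff + fb
        y[k] = phi(v[k])
    return y

u = np.array([1.0, 0.0, 0.0, 0.0])              # impulse input
y = activation_feedback_neuron(u, w_u=[0.5], w_v=[0.5])  # v decays by half each step
```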

4. Performance Metrics

In this study, the prediction effectiveness of the attempted models was determined using three statistical measurement parameters. These evaluation parameters quantify how far the predicted values deviate from the actual observations [32]. The coefficient of determination (R2), mean absolute error (MAE), and mean absolute percentage error (MAPE) are the statistical metrics, represented mathematically by Equations (7)–(9). The value of R2 ranges from 0 to 1, and the closer it is to 1, the better the model fits its data. The MAE and MAPE values estimate the modeling error; the smaller the value, the less the discrepancy between the predicted and measured values [33].
$$\mathrm{MAE} = \frac{1}{n}\sum_{k=1}^{n} \left| Y_k - y_k \right| \tag{7}$$
$$\mathrm{MAPE} = \frac{100}{n}\sum_{k=1}^{n} \left| \frac{Y_k - y_k}{Y_k} \right| \tag{8}$$
$$R^2 = 1 - \frac{\sum_{k=1}^{n} \left( Y_k - y_k \right)^2}{\sum_{k=1}^{n} \left( Y_k - Y_{\mathrm{mean}} \right)^2} \tag{9}$$
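The three metrics can be computed directly (the measured and predicted values below are illustrative, not taken from the paper's tables):

```python
import numpy as np

def mae(Y, y):
    """Mean absolute error, Eq. (7)."""
    return np.mean(np.abs(Y - y))

def mape(Y, y):
    """Mean absolute percentage error in %, Eq. (8)."""
    return np.mean(np.abs((Y - y) / Y)) * 100.0

def r2(Y, y):
    """Coefficient of determination, Eq. (9)."""
    return 1.0 - np.sum((Y - y) ** 2) / np.sum((Y - np.mean(Y)) ** 2)

Y = np.array([392.0, 344.0, 370.0, 380.0])  # measured strengths (illustrative, MPa)
y = np.array([390.0, 348.0, 371.0, 378.0])  # predicted strengths (illustrative, MPa)
```

A small MAE/MAPE and an R2 close to 1 together indicate predictions clustered tightly around the ideal line, which is exactly the reading applied to the regression plots in Section 5.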

5. Results and Discussion

5.1. Mechanical Strength of the Dissimilar Explosive Clads

The highest tensile (392 MPa) and shear (262 MPa) strengths of the dissimilar explosive clads were obtained for the experimental condition R: 0.8, D: 7 mm, A: 5°, G: V, whereas the lowest strengths were attained for the parametric condition R: 0.6, D: 5 mm, A: 0°, G: grooveless (TS: 344 MPa, Sh.S: 220 MPa). The lowest strength is attributed to the lower kinetic energy available and the absence of grooves, consistent with earlier reports [34]. Saravanan et al. opined that the minimum strength of the clad should be higher than that of the weaker parent alloy, which is in agreement with this study [35]. On the other hand, for the middle range of process parameters, a ‘V’ grooved base plate produces the highest strength (14% more). The increase in strength while employing a grooved base plate is due to the increase in kinetic energy utilization and bonding region.

5.2. Prediction Using Convolutional Neural Networks

5.2.1. Convolutional Neural Network with a Single Convolutional Layer (CNN1)

The performance of a CNN prediction model is significantly influenced by its structure. A CNN with a minimal number of filters can provide results comparable to models with more filters, thereby improving generalization ability. In addition, using fewer filters demands fewer parameters for efficient prediction [36]. In this study, the prediction model employs 1 × 1 convolutional kernels and 2 × 1 pooling fields. Subsequently, the tensile and shear strengths of the dissimilar explosive clads with and without grooves are predicted by CNN1, as shown in Figure 3a.
Table 2 shows the various hyperparameters (blocks, convolutional layers, dense layers, filters, and units) for CNN1 employing three optimizers, viz., Adam, RMSprop, and SGD. The hyperparameters for the three optimizers are presented in Table 3. To determine the optimal level, the Optuna optimizer framework was employed, as recommended by Kumararaja et al. [33]. Based on the Optuna framework, the Adam optimizer performs better than the other two optimizers; its values are summarized in Table 4.
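The paper tunes hyperparameters with the Optuna framework; a minimal random-search analogue of that loop is sketched below, with a mock objective standing in for model training (the search ranges and the quadratic objective are invented for illustration):

```python
import numpy as np

def mock_validation_error(n_layers, units, lr):
    """Stand-in for training a model and returning its validation error;
    the quadratic form and its optimum are purely illustrative."""
    return (n_layers - 2) ** 2 + ((units - 64) / 64) ** 2 + (np.log10(lr) + 3) ** 2

def random_search(n_trials=50, seed=0):
    """Sample hyperparameters at random, keep the configuration with lowest error."""
    rng = np.random.default_rng(seed)
    best_err, best_params = np.inf, None
    for _ in range(n_trials):
        params = dict(n_layers=int(rng.integers(1, 5)),       # 1..4 layers
                      units=int(rng.integers(8, 257)),        # 8..256 units
                      lr=10.0 ** rng.uniform(-5, -1))         # log-uniform rate
        err = mock_validation_error(**params)
        if err < best_err:
            best_err, best_params = err, params
    return best_err, best_params

best_err, best_params = random_search()
```

Optuna automates this loop with smarter samplers (e.g., TPE) and pruning, which is why it converges to good configurations in fewer trials than plain random search.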
The performance of the CNN1 model in terms of prediction accuracy and error rates is assessed by R2, MAE, and MAPE. The R2 value for CNN1 is 0.8873 (Figure 4), indicating that 12% of the conditions deviate from the ideal prediction line (shown by a red line). In other words, if the scatter points are closer to the diagonal line, the model holds a high R2 value, whereas if the predictions are dispersed away from the diagonal line, the model shows a weaker goodness of fit with a low R2 value [37]. The MAE and MAPE of CNN1, which vary inversely with R2, are 1.9553 and 1.3047, respectively (Table 5).

5.2.2. Convolutional Neural Networks with Two and Three Convolutional Layers (CNN2 and CNN3)

The number of convolutional layers, dense layers, filters, and units is increased, as illustrated in Figure 3b,c, in order to enhance prediction accuracy and decrease errors [38]. The Optuna optimizer framework, as in the previous case, determines the number of filter units and their optimal level, which are displayed in Table 6. Most of the trials in CNN2 were achieved in the region of low objective values via hyperparameter adjustment (Figure 5), demonstrating that the Adam optimizer yields better performance.
The performance metrics of the CNN2 and CNN3 models are presented in Table 5. From Table 5, it is inferred that increasing the number of convolutional layers from 1 to 2 enhances the accuracy (R2 = 0.8963) by 1% and reduces the prediction errors (MAE = 1.7454 and MAPE = 1.2249). This behavior is similar to the reports of Kim et al. [39], who predicted the mechanical behavior of composites. A further increase in convolutional (2 to 3) and dense layers leads to a reduction in the R2 value and augments the error (Table 5). The decline in the R2 value is due to overfitting of the model to the data. Bilgin and Gunestas reported a reduction in R2 value owing to overfitting, consistent with the present study [40]. Figure 6 and Figure 7, respectively, display the linear regression graphs for the CNN2 and CNN3 models. The testing data in CNN2 are more accurate than in CNN1 in making the optimal prediction. On the other hand, a 15% deviation from the ideal predictions is seen in the linear regression plot of CNN3 (Figure 7).

5.3. Prediction Using Deep Neural Networks

The deep neural network (Figure 8) was trained using the standardized data; it has three nodes (TS, Sh.S, and IS) in the output layer and four nodes (loading ratio, standoff distance, preset angle, and type of groove) in the input layer. Numerous models were constructed by varying the number of hidden layers, the number of neurons in the hidden layers, and the optimizers (Adam, RMSprop, and SGD).
The optimal values and hyperparameter ranges are shown in Table 7. The efficiency of the Adam optimizer is superior compared to the other two optimizers (RMSprop and SGD), as seen in Figure 9. The Optuna optimizer framework delivered the final model with the highest prediction accuracy.
The performance metrics of the DNN model are displayed in Table 5. The DNN model shows a 6% improvement in accuracy (R2 = 0.9519) over the CNN model, while the prediction error is also reduced (MAE = 1.0552 and MAPE = 0.7286). The improved prediction performance of the DNN is attributed to high-level learning in the early stages. The R2 value of 0.9519 indicates that less than 5% of the data falls away from the straight line (Figure 10). The testing data in the DNN model are much closer to the straight line, indicating that the errors are more normally distributed than in the CNN models, coherent with the reports of Bilali et al. [41].

5.4. Prediction Using Recurrent Neural Networks

The RNN, having four inputs and three outputs, is shown in Figure 11. The number of recurrent layers, hidden layers, and neurons in the hidden layers and the optimizers (Adam, RMSprop, and SGD) were altered to obtain numerous models. The hyperparameters (filters, dense layers, units in each layer) with the ranges attempted are shown in Table 8. The optimal values of the hyperparameters are obtained while employing the Adam optimizer, which performs better than the others (RMSprop and SGD), as presented in Table 4 and Figure 12, respectively.
The prediction performance of the RNN model is better than the CNN models but less accurate than the DNN model (Table 5). Due to the vanishing gradient problem, the RNN performs less effectively than the DNN, whereas the ability of the RNN to memorize previous inputs results in better predictions than the CNN models. Fei et al. [42] reported that the increase in RNN error is owing to the vanishing gradient, consistent with this study. The mechanical strength of the dissimilar explosive clads predicted by the RNN is 4% less accurate than the DNN model (R2 = 0.9146). The reduction in R2 value increases the prediction error (MAE = 1.4708 and MAPE = 1.0406) compared to the DNN model. As shown in the linear regression plots (Figure 13), less than 9% of the data in the RNN model deviates from the mean value. The testing data in the RNN model are more closely aligned with the straight line than in the CNN models, indicating that the errors are distributed more consistently. Saravanan and Gajalakshmi [43] opined that, in a linear regression plot, the closer the testing points, the fewer the errors.
Of the attempted models, CNN3 shows the highest MAE and MAPE values, which results in the lowest R2 value. The DNN model exhibits the highest accuracy among the attempted deep learning models, with the lowest MAE and MAPE values and the highest R2 value. The DNN model, with 791 and 795 neurons in the first and second layers (Table 7 and Figure 8), effectively predicts the tensile and shear strengths of the Al 6061-SS 304 explosive clads. The optimal parametric conditions determined by the DNN model to attain maximum tensile and shear strengths are R = 0.845, D = 7.6 mm, A = 6°, and G = ‘V’. The experimental and predicted tensile and shear strengths for the optimal parametric conditions are exhibited in Table 9, along with the predictions of the other attempted models for the same condition.

5.5. Confirmation Experiments

Confirmation experiments were performed to cross-validate and confirm the accuracy of the developed models. The tensile and shear strengths for the experimental conditions are presented in Table 9, together with the predicted values of the attempted deep learning models. The errors between the experimental and predicted strengths are given in Table 10. The maximum error (6.71 MPa) is obtained for the CNN3 model, while the best prediction, with the lowest error (0.32 MPa), results from the DNN model. However, the maximum error is less than 7 MPa, which indicates that deep learning techniques can effectively be employed for predicting the mechanical strengths of explosive clads. Among the five deep learning models, the DNN model predicts the mechanical strength of the explosive clads most closely to the experimental values.

6. Conclusions and Future Recommendation

  • It is recommended to employ a ‘V’ grooved base plate with a loading ratio of R = 0.845, a standoff distance of D = 7.6 mm, and a preset angle of A = 6° to attain higher Al 6061–SS 304 clad strengths.
  • In predicting the mechanical strengths of the explosive clads, the DNN model performed better than the other models. High-level learning at the initial stages of the DNN is the basis of its enhanced efficiency. With an MAE of 1.0552 and a MAPE of 0.7286, the DNN model had the fewest prediction errors and the highest prediction accuracy (R2 = 0.9519).
  • The prediction performance of the RNN is 4% less than that of the DNN due to the vanishing gradient during training.
  • The CNN model becomes more accurate when the number of convolutional layers is increased from one to two. On further increasing the convolutional layers, the accuracy decreases as a result of the model overfitting the data.
  • The prediction performance of the RNN model is superior to the CNN models due to its ability to memorize previous inputs and the presence of internal memory.
  • The prediction accuracy and modeling errors of all five deep learning models were improved using the Adam optimization technique. These results support the recommendation of the DNN model for predicting the mechanical strength of explosive clads.
Future research might compare the performance of the CNN model with that of hybrid models, such as CNN+SVR, or hyperparameter-tune alternative hybrid models.

Author Contributions

Conceptualization, methodology, formal analysis, data curation, writing—original draft preparation, S.S.; software, validation, K.K.; writing—review & editing, visualization, supervision, K.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclatures

Preset angle: A
Aluminum: Al
Artificial neural network: ANN
Bidirectional LSTM: BiLSTM
Convolutional neural network: CNN
Convolutional neural network with a single convolutional layer: CNN1
Convolutional neural network with two convolutional layers: CNN2
Convolutional neural network with three convolutional layers: CNN3
Cuckoo search: CS
Standoff distance: D
Deep neural network: DNN
Decision tree regression: DTR
Image: f
Activation function: f
Fibre-reinforced plastic: FRP
Groove in the base plate: G
Genetic algorithm: GA
Kernel: h
Positions: i, j
Linear regression: LR
Long short-term memory: LSTM
Mean absolute error: MAE
Mean absolute percentage error: MAPE
Total number of test datasets: n
Polynomial regression: PR
Row: q
Loading ratio: R
Column: r
Coefficient of determination: R2
Rectified linear units: ReLU
Random forest regression: RFR
Recurrent neural network: RNN
Stochastic gradient descent: SGD
Shear strength: Sh.S
Single-parameter decision-theoretic rough set: SPDTRS
Stainless steel: SS
Support vector regression: SVR
Tensile strength: TS
Ultimate tensile strength: UTS
Input variables: x
Measured values: Yk
Predicted values: yk

References

  1. Kaya, Y. Microstructural, mechanical and corrosion investigations of ship steel-aluminum bimetal composites produced by explosive welding. Metals 2018, 8, 544. [Google Scholar] [CrossRef]
  2. Gullino, A.; Matteis, P.; D’Aiuto, F. Review of aluminum-to-steel welding technologies for car-body applications. Metals 2019, 9, 315. [Google Scholar] [CrossRef]
  3. Elango, E.; Saravanan, S.; Raghukandan, K. Experimental and numerical studies on aluminum-stainless steel explosive cladding. J. Cent. South Univ. 2020, 27, 1742–1753. [Google Scholar] [CrossRef]
  4. Somasundaram, S.; Krishnamurthy, R.; Kazuyuki, H. Effect of process parameters on microstructural and mechanical properties of Ti−SS 304L explosive cladding. J. Cent. South Univ. 2017, 24, 1245–1251. [Google Scholar] [CrossRef]
  5. Kumar, P.; Ghosh, S.K.; Saravanan, S.; Barma, J.D. Effect of Explosive Loading Ratio on Microstructure and Mechanical Properties of Al 5052/AZ31B Explosive Weld Composite. JOM 2023, 75, 167–175. [Google Scholar] [CrossRef]
  6. Chen, X.; Inao, D.; Li, X.; Tanaka, S.; Li, K.; Hokamoto, K. Optimal parameters for the explosive welding of TP 270C pure titanium and SUS 821L1 duplex stainless steel. J. Mater. Res. Technol. 2022, 19, 4771–4786. [Google Scholar] [CrossRef]
  7. Li, X.; Ma, H.; Shen, Z. Research on explosive welding of aluminum alloy to steel with dovetail grooves. Mater. Des. 2015, 87, 815–824. [Google Scholar] [CrossRef]
  8. Tamilchelvan, P.; Raghukandan, K.; Saravanan, S. Kinetic energy dissipation in Ti-SS explosive cladding with multi loading ratios. Trans. Mech. Eng. 2014, 38, 91–96. [Google Scholar]
Figure 1. Work plan: (a) explosive cladding arrangement; (b) grooved base plates; (c) explosive clad; (d,e) interface microstructure; (f) shear test sample; (g) tensile specimen.
Figure 2. Proposed methodology.
Figure 3. CNN architecture: (a) two hidden layers; (b) two CNN layers and two hidden layers; (c) three CNN layers and four hidden layers.
Figure 4. Linear regression plots for CNN1.
Figure 5. Hyperparameter tuning for the CNN2 model.
Figure 6. Linear regression plots for CNN2.
Figure 7. Linear regression plots for CNN3.
Figure 8. DNN Architecture.
Figure 9. Hyperparameter tuning for the DNN model.
Figure 10. Linear regression plots for DNN.
Figure 11. RNN Architecture.
Figure 12. Hyperparameter tuning for the RNN model.
Figure 13. Linear regression plots for RNN.
Table 1. Experimental parameters and strengths.

| No. | R | D (mm) | A (degrees) | Groove | TS (MPa) | Sh. S (MPa) |
|-----|-----|--------|-------------|----------|----------|-------------|
| 1 | 0.8 | 7 | 0 | No | 371 | 252 |
| 2 | 0.8 | 7 | 5 | No | 377 | 259 |
| 3 | 0.8 | 7 | 5 | No | 377 | 259 |
| 4 | 0.6 | 5 | 10 | Dovetail | 352 | 229 |
| 5 | 1.0 | 9 | 10 | V | 379 | 253 |
| 6 | 0.8 | 9 | 5 | No | 373 | 251 |
| 7 | 0.8 | 7 | 5 | Dovetail | 387 | 261 |
| 8 | 0.6 | 5 | 0 | V | 354 | 227 |
| 9 | 0.6 | 9 | 10 | No | 350 | 229 |
| 10 | 1.0 | 5 | 10 | Dovetail | 368 | 242 |
| 11 | 1.0 | 5 | 10 | No | 359 | 246 |
| 12 | 0.6 | 9 | 10 | Dovetail | 356 | 230 |
| 13 | 0.8 | 7 | 10 | No | 375 | 254 |
| 14 | 1.0 | 7 | 5 | Dovetail | 374 | 250 |
| 15 | 0.8 | 9 | 5 | Dovetail | 381 | 254 |
| 16 | 0.6 | 9 | 10 | V | 364 | 235 |
| 17 | 0.8 | 7 | 10 | V | 384 | 261 |
| 18 | 1.0 | 5 | 0 | Dovetail | 365 | 241 |
| 19 | 0.8 | 7 | 5 | Dovetail | 387 | 261 |
| 20 | 0.8 | 7 | 5 | No | 377 | 259 |
| 21 | 0.8 | 7 | 5 | Dovetail | 387 | 261 |
| 22 | 1.0 | 5 | 0 | No | 356 | 242 |
| 23 | 0.8 | 9 | 5 | V | 385 | 257 |
| 24 | 0.6 | 7 | 5 | Dovetail | 358 | 232 |
| 25 | 0.8 | 7 | 5 | Dovetail | 387 | 261 |
| 26 | 1.0 | 9 | 0 | No | 360 | 244 |
| 27 | 0.8 | 7 | 5 | V | 392 | 262 |
| 28 | 1.0 | 9 | 0 | Dovetail | 370 | 243 |
| 29 | 0.6 | 9 | 0 | No | 346 | 222 |
| 30 | 1.0 | 9 | 10 | Dovetail | 372 | 249 |
| 31 | 0.8 | 7 | 5 | V | 392 | 262 |
| 32 | 0.6 | 5 | 10 | No | 349 | 227 |
| 33 | 0.6 | 9 | 0 | V | 362 | 232 |
| 34 | 1.0 | 5 | 10 | V | 371 | 248 |
| 35 | 0.6 | 5 | 10 | V | 360 | 234 |
| 36 | 1.0 | 9 | 0 | V | 377 | 252 |
| 37 | 0.8 | 7 | 0 | Dovetail | 375 | 253 |
| 38 | 0.8 | 5 | 5 | V | 380 | 255 |
| 39 | 1.0 | 9 | 10 | No | 362 | 247 |
| 40 | 0.8 | 7 | 5 | Dovetail | 387 | 261 |
| 41 | 0.6 | 7 | 5 | No | 351 | 231 |
| 42 | 0.8 | 7 | 5 | V | 392 | 262 |
| 43 | 0.8 | 5 | 5 | No | 364 | 245 |
| 44 | 0.8 | 7 | 5 | V | 392 | 262 |
| 45 | 0.8 | 7 | 5 | No | 377 | 259 |
| 46 | 1.0 | 5 | 0 | V | 367 | 247 |
| 47 | 0.6 | 5 | 0 | Dovetail | 349 | 225 |
| 48 | 0.6 | 5 | 0 | No | 344 | 220 |
| 49 | 0.8 | 7 | 5 | V | 392 | 262 |
| 50 | 0.8 | 7 | 5 | No | 377 | 259 |
| 51 | 0.6 | 9 | 0 | Dovetail | 354 | 227 |
| 52 | 1.0 | 7 | 5 | V | 381 | 255 |
| 53 | 0.8 | 7 | 5 | No | 377 | 259 |
| 54 | 0.6 | 7 | 5 | V | 369 | 237 |
| 55 | 0.8 | 5 | 5 | Dovetail | 369 | 247 |
| 56 | 0.8 | 7 | 0 | V | 382 | 261 |
| 57 | 0.8 | 7 | 5 | Dovetail | 387 | 261 |
| 58 | 1.0 | 7 | 5 | No | 364 | 250 |
| 59 | 0.8 | 7 | 5 | V | 392 | 262 |
| 60 | 0.8 | 7 | 10 | Dovetail | 379 | 256 |
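Before training, the 60 trials in Table 1 must be encoded numerically and split 80/20 into training and test sets, as stated in the abstract. A minimal stdlib-only sketch of that preprocessing; the label encoding for the groove type is an assumption for illustration, not taken from the paper:

```python
import random

# A few representative rows from Table 1: (R, D, A, groove, TS, ShS),
# repeated purely to reach the paper's 60-sample count for illustration.
trials = [
    (0.8, 7, 0, "No", 371, 252),
    (0.6, 5, 10, "Dovetail", 352, 229),
    (1.0, 9, 10, "V", 379, 253),
    (0.6, 5, 0, "No", 344, 220),
    (0.8, 7, 5, "V", 392, 262),
] * 12

GROOVE_CODE = {"No": 0, "V": 1, "Dovetail": 2}  # assumed encoding

def encode(row):
    """Map one trial to a feature vector [R, D, A, groove] and targets [TS, ShS]."""
    r, d, a, groove, ts, shs = row
    return [r, d, a, GROOVE_CODE[groove]], [ts, shs]

samples = [encode(row) for row in trials]
random.seed(42)
random.shuffle(samples)

split = int(0.8 * len(samples))      # 80% train / 20% test
train, test = samples[:split], samples[split:]
print(len(train), len(test))         # 48 training, 12 test samples
```

In practice the four inputs would also be scaled (e.g., min–max normalized) before being fed to the networks.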
Table 2. Hyperparameters for CNN1.

| Parameter | Range | Optimal Value |
|---|---|---|
| No. of convolution blocks | 1 to 4 | 1 |
| No. of filters in layer 1 | 4 to 1024 | 421 |
| No. of dense layers | 1 to 4 | 2 |
| No. of units in layer 1 | 4 to 1024 | 722 |
| No. of units in layer 2 | 4 to 1024 | 233 |
Table 3. Hyperparameters for CNN1 with different optimizers.

| Optimizer | Parameter | Range |
|---|---|---|
| RMSprop | Learning rate | 1 × 10⁻⁵ to 1 × 10⁻¹ |
| | Decay | 0.85 to 0.99 |
| | Momentum | 1 × 10⁻⁵ to 1 × 10⁻¹ |
| Adam | Learning rate | 1 × 10⁻⁵ to 1 × 10⁻¹ |
| | Decay | 1 × 10⁻⁵ to 1 × 10⁻¹ |
| SGD | Learning rate | 1 × 10⁻⁵ to 1 × 10⁻¹ |
| | Momentum | 1 × 10⁻⁵ to 1 × 10⁻¹ |
Table 4. Optimal values of the Adam optimizer.

| Adam Optimizer | CNN1 | CNN2 | CNN3 | DNN | RNN |
|---|---|---|---|---|---|
| Learning rate | 0.039 | 0.0205 | 0.0287 | 0.0978 | 0.0148 |
| Decay | 0.026 | 0.0333 | 0.0593 | 0.0539 | 0.0384 |
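The decay values in Table 4 shrink the learning rate as training proceeds. Assuming Keras-style inverse time decay (lr_t = lr₀ / (1 + decay · t), the semantics of the legacy `decay` argument in Keras optimizers; the paper does not state the exact schedule), the optimal DNN settings behave as follows:

```python
def decayed_lr(lr0, decay, step):
    """Keras-style inverse time decay: lr_t = lr0 / (1 + decay * t)."""
    return lr0 / (1.0 + decay * step)

# Optimal DNN values from Table 4
lr0, decay = 0.0978, 0.0539
lrs = [decayed_lr(lr0, decay, t) for t in (0, 10, 100)]
print(lrs)  # learning rate falls monotonically with the iteration count
```

After 100 iterations the rate has dropped to roughly a sixth of its initial value, which helps the fine-grained convergence reflected in the regression plots.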
Table 5. Performance metrics.

| Model | R² | MAE | MAPE |
|---|---|---|---|
| CNN1 | 0.8873 | 1.9553 | 1.3047 |
| CNN2 | 0.8963 | 1.7454 | 1.2249 |
| CNN3 | 0.8523 | 2.3172 | 1.7123 |
| DNN | 0.9519 | 1.0552 | 0.7286 |
| RNN | 0.9146 | 1.4708 | 1.0406 |
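The metrics in Table 5 follow their standard definitions. A stdlib-only sketch, evaluated here on only the six DNN tensile predictions from Table 9, so the numbers will not exactly match Table 5, which covers the full test set:

```python
def r2(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mape(y, yhat):
    """Mean absolute percentage error (%)."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

# Experimental vs. DNN-predicted tensile strengths from Table 9
y    = [344, 346, 362, 370, 392, 393]
yhat = [342.98, 344.96, 363.92, 369.65, 393.78, 391.04]
print(r2(y, yhat), mae(y, yhat), mape(y, yhat))
```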
Table 6. Hyperparameters for CNN2 and CNN3.

| Model | Parameter | Range | Optimal Value |
|---|---|---|---|
| CNN2 | No. of convolution blocks | 1 to 4 | 1 |
| | No. of filters in layer 1 | 4 to 1024 | 21 |
| | No. of filters in layer 2 | 4 to 1024 | 415 |
| | No. of dense layers | 1 to 4 | 2 |
| | No. of units in layer 1 | 4 to 1024 | 907 |
| | No. of units in layer 2 | 4 to 1024 | 774 |
| CNN3 | No. of convolution blocks | 1 to 4 | 1 |
| | No. of filters in layer 1 | 4 to 1024 | 11 |
| | No. of filters in layer 2 | 4 to 1024 | 24 |
| | No. of filters in layer 3 | 4 to 1024 | 32 |
| | No. of dense layers | 1 to 4 | 4 |
| | No. of units in layer 1 | 4 to 1024 | 58 |
| | No. of units in layer 2 | 4 to 1024 | 403 |
| | No. of units in layer 3 | 4 to 1024 | 871 |
| | No. of units in layer 4 | 4 to 1024 | 246 |
Table 7. Hyperparameters for DNN.

| Parameter | Range | Optimal Value |
|---|---|---|
| No. of dense layers | 1 to 4 | 2 |
| No. of units in layer 1 | 4 to 1024 | 791 |
| No. of units in layer 2 | 4 to 1024 | 795 |
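The optimal DNN topology in Table 7 implies a fixed trainable-parameter count via the usual (inputs + 1) × units rule per dense layer. A sketch, assuming 4 inputs (R, D, A, groove) and 2 outputs (tensile and shear strength); the input/output widths are inferred from Table 1, not stated in this excerpt:

```python
def dense_params(n_in, n_out):
    # weights (n_in * n_out) plus one bias per output unit
    return (n_in + 1) * n_out

# inputs -> hidden layer 1 -> hidden layer 2 -> outputs
layers = [4, 791, 795, 2]
total = sum(dense_params(a, b) for a, b in zip(layers, layers[1:]))
print(total)  # 635187 trainable parameters
```

Nearly all of the capacity sits in the 791 × 795 connection between the two hidden layers, which is typical when a small feature vector feeds wide dense layers.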
Table 8. Hyperparameters for RNN.

| Parameter | Range | Optimal Value |
|---|---|---|
| No. of filters in recurrent layer | 4 to 1024 | 430 |
| No. of dense layers | 1 to 4 | 3 |
| No. of units in layer 1 | 4 to 1024 | 307 |
| No. of units in layer 2 | 4 to 1024 | 210 |
| No. of units in layer 3 | 4 to 1024 | 843 |
Table 9. Experimental and predicted strengths.

Tensile strength (MPa):

| R | D | A | G | Exp | CNN1 | CNN2 | CNN3 | DNN | RNN |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 5 | 0 | No | 344 | 340.35 | 341.41 | 339.54 | 342.98 | 341.26 |
| 0.6 | 9 | 0 | No | 346 | 350.76 | 342.55 | 352.71 | 344.96 | 348.84 |
| 0.6 | 9 | 0 | V | 362 | 359.98 | 363.62 | 359.49 | 363.92 | 361.71 |
| 1.0 | 9 | 0 | Dovetail | 370 | 367.45 | 368.35 | 371.45 | 369.65 | 370.55 |
| 0.8 | 7 | 5 | V | 392 | 389.04 | 390.37 | 388.07 | 393.78 | 390.93 |
| 0.84 | 5 | 7.66 | V | 393 | 389.02 | 390.02 | 386.86 | 391.04 | 388.84 |

Shear strength (MPa):

| R | D | A | G | Exp | CNN1 | CNN2 | CNN3 | DNN | RNN |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 5 | 0 | No | 220 | 221.55 | 221.55 | 222.23 | 221.05 | 221.25 |
| 0.6 | 9 | 0 | No | 222 | 223.48 | 223.63 | 224.14 | 222.99 | 223.36 |
| 0.6 | 9 | 0 | V | 232 | 229.78 | 231.28 | 229.38 | 231.68 | 231.38 |
| 1.0 | 9 | 0 | Dovetail | 243 | 245.36 | 242.06 | 240.33 | 244.56 | 244.71 |
| 0.8 | 7 | 5 | V | 262 | 259.61 | 259.98 | 258.81 | 261.06 | 260.64 |
| 0.84 | 5 | 7.66 | V | 264 | 262.07 | 261.65 | 260.06 | 263.03 | 261.58 |
Table 10. Error between experimental and predicted strengths.

Tensile strength (MPa):

| R | D | A | G | Exp | CNN1 | CNN2 | CNN3 | DNN | RNN |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 5 | 0 | No | 344 | 3.65 | 2.59 | 4.46 | 1.02 | 2.74 |
| 0.6 | 9 | 0 | No | 346 | −4.76 | 3.45 | −6.71 | 1.04 | −2.835 |
| 0.6 | 9 | 0 | V | 362 | 2.02 | −1.62 | 2.51 | −1.92 | 0.295 |
| 1.0 | 9 | 0 | Dovetail | 370 | 2.55 | 1.65 | −1.45 | 0.35 | −0.55 |
| 0.8 | 7 | 5 | V | 392 | 2.96 | 1.63 | 3.93 | −1.78 | 1.075 |
| 0.84 | 5 | 7.66 | V | 393 | 3.98 | 2.92 | 6.14 | 1.96 | 4.16 |

Shear strength (MPa):

| R | D | A | G | Exp | CNN1 | CNN2 | CNN3 | DNN | RNN |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 5 | 0 | No | 220 | −1.55 | −1.55 | −2.23 | −1.05 | −1.25 |
| 0.6 | 9 | 0 | No | 222 | −1.48 | −1.63 | −2.14 | −0.99 | −1.36 |
| 0.6 | 9 | 0 | V | 232 | 2.22 | 0.72 | 2.62 | 0.32 | 0.62 |
| 1.0 | 9 | 0 | Dovetail | 243 | −2.36 | 0.94 | 2.67 | −1.56 | −1.71 |
| 0.8 | 7 | 5 | V | 262 | 2.39 | 2.02 | 3.19 | 0.94 | 1.36 |
| 0.84 | 5 | 7.66 | V | 264 | 1.93 | 2.35 | 3.94 | 0.97 | 2.42 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Saravanan, S.; Kumararaja, K.; Raghukandan, K. Application of Deep Learning Techniques to Predict the Mechanical Strength of Al-Steel Explosive Clads. Metals 2023, 13, 373. https://doi.org/10.3390/met13020373
