Robust Process Parameter Design Methodology: A New Estimation Approach by Using Feed-Forward Neural Network Structures and Machine Learning Algorithms
Abstract
1. Introduction
2. Proposed Feed-Forward NN Structure-Based Estimation Methods
2.1. NN Structures
2.2. Proposed NN-Based Estimation Method 1: FFNN Structure
2.2.1. Response Function Estimation Using FFNN
2.2.2. Number of Hidden Neurons
2.2.3. Integration into a Learning Algorithm: Back-Propagation
 Steepest descent is an iterative method that seeks a local minimum by moving in the direction opposite to the gradient; the learning rate controls how far each step moves. In general, the smaller the learning rate, the slower the convergence to a local minimum. In standard steepest descent, the learning rate remains constant throughout the training stage while the weight and bias values are iteratively updated; MATLAB variants such as traingda and traingdx instead adapt the learning rate during training.
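The fixed-step update described above can be sketched as follows (a minimal Python sketch rather than MATLAB; the function and variable names are illustrative, not the paper's notation):

```python
import numpy as np

def steepest_descent(grad, w0, lr=0.1, n_iter=100):
    """Fixed-learning-rate steepest descent: at every iteration the
    weights move against the gradient by a constant step size `lr`."""
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        w = w - lr * grad(w)  # step opposite to the gradient direction
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3);
# a smaller lr needs more iterations to get equally close to w = 3.
w_star = steepest_descent(lambda w: 2.0 * (w - 3.0), w0=[0.0])
```

With lr = 0.1 the error shrinks by a constant factor per iteration, which is why a smaller learning rate converges more slowly.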
 The resilient-BP training algorithm (i.e., trainrp) is a local adaptive learning scheme for supervised batch learning in an FFNN. Resilient back-propagation follows the same overall procedure as standard BP but typically trains the network faster, without requiring the user to tune any free parameters. Moreover, because trainrp ignores the magnitude of the partial derivatives, the direction of each weight update is determined solely by the sign of the derivative.
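The sign-based rule can be sketched as one per-weight Rprop update (an illustrative Python sketch using the commonly cited default constants; the names and values are assumptions, not taken from the paper):

```python
import numpy as np

def rprop_update(grad, grad_prev, step_size,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One resilient-BP (Rprop) update per weight. The per-weight step
    size grows while successive gradients keep the same sign and shrinks
    when the sign flips; the gradient's magnitude is never used, so the
    update direction depends only on the sign of the derivative."""
    same = grad * grad_prev > 0   # sign agreement: accelerate
    flip = grad * grad_prev < 0   # sign change: overshot, back off
    step_size = np.where(same, np.minimum(step_size * eta_plus, step_max), step_size)
    step_size = np.where(flip, np.maximum(step_size * eta_minus, step_min), step_size)
    delta_w = -np.sign(grad) * step_size  # direction from the sign only
    return delta_w, step_size
```

Note that doubling `grad` would leave `delta_w` unchanged, which is exactly the property described in the text.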
 The Levenberg-Marquardt algorithm (i.e., trainlm) is a standard method for solving nonlinear least-squares minimization problems without computing the exact Hessian matrix, which it approximates from first-order (Jacobian) information. It can be thought of as a middle ground between the steepest descent and Gauss-Newton methods.
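A single damped update step can be sketched as follows (an illustrative Python sketch, assuming the standard Gauss-Newton approximation of the Hessian by J^T J; the variable names are not the paper's notation):

```python
import numpy as np

def lm_step(J, r, mu):
    """One Levenberg-Marquardt increment for residuals r with Jacobian J.
    (J^T J + mu*I) approximates the Hessian without second derivatives:
    mu -> 0 recovers a Gauss-Newton step, while a large mu gives a short
    step along the steepest-descent direction."""
    J = np.atleast_2d(np.asarray(J, dtype=float))
    r = np.asarray(r, dtype=float)
    A = J.T @ J + mu * np.eye(J.shape[1])  # damped normal-equations matrix
    return np.linalg.solve(A, -J.T @ r)    # parameter increment

# Fit w in the linear residual model r(w) = J w - y, starting at w = 0
# (so r = -y); with small mu the step is close to the exact solution w = 1.
J = np.array([[1.0], [1.0]])
y = np.array([1.0, 1.0])
step = lm_step(J, -y, mu=1e-3)
```

Increasing `mu` shortens the step, which is how the algorithm interpolates between the Gauss-Newton and steepest-descent behaviors mentioned above.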
2.2.4. Generalization and Overfitting Issues
2.3. Proposed NN-Based Estimation Method 2: CFNN Structure
2.4. Proposed NN-Based Estimation Method 3: RBFN
3. Simulation Studies
3.1. Simulation Study 1
3.2. Simulation Study 2
4. Case Study
5. Conclusions and Further Studies
Author Contributions
Funding
Conflicts of Interest
Appendix A. Weight and Bias Values of the Proposed Neural Network Structure in Simulation Study 1
a. Mean Function  
Weights  Biases  
$\mathbf{W}_{FFNN}^{mean}$  $(\mathbf{t}_{FFNN}^{mean})^{T}$  $\mathbf{a}_{FFNN}^{mean}$  $\mathbf{b}_{FFNN}^{mean}$  
3.6550  4.7000  −0.5320  −5.6150  −0.3370 
4.6370  3.2890  0.7650  −5.1200  
−3.6880  4.6390  0.3910  4.1080  
5.4630  1.9730  −0.1880  −3.5370  
1.3880  −5.7220  0.2690  −2.5120  
−4.3340  3.7490  −0.2760  2.2050  
−2.5860  5.1880  −0.1570  1.3530  
1.4470  5.5880  −0.3560  −0.7180  
4.9970  2.8280  0.1800  −0.0290  
5.1210  2.5860  −0.1410  0.8320  
−5.5550  −1.5900  −0.4130  −1.4570  
5.4750  1.8190  −0.2640  2.1750  
4.6360  −3.3980  0.0280  2.9310  
−0.4150  5.7380  0.4640  −3.6420  
3.1110  −4.8490  0.0690  4.3330  
−4.8490  −2.9730  −0.0830  −5.0940  
−1.0380  −5.9250  −0.5680  −5.5340  
b. Standard Deviation Function  
Weights  Biases  
$\mathbf{W}_{FFNN}^{std}$  $(\mathbf{t}_{FFNN}^{std})^{T}$  $\mathbf{a}_{FFNN}^{std}$  $\mathbf{b}_{FFNN}^{std}$  
2.3640  −2.2900  0.6380  −3.2240  0.5100 
3.1150  3.1890  0.2420  −1.9530  
−1.9210  1.7710  0.1340  2.8660  
0.7590  5.2410  −0.0100  1.6240  
1.4680  −3.0400  0.4510  1.2370  
0.8010  2.7570  0.1990  −3.3430 
Weights  Biases  

$\mathbf{R}_{CFNN}^{mean}$  $(\mathbf{z}_{CFNN}^{mean})^{T}$  $(\mathbf{q}_{CFNN}^{mean})^{T}$  $\mathbf{c}_{CFNN}^{mean}$  $\mathbf{d}_{CFNN}^{mean}$  
3.4680  4.5940  0.5890  −5.7880  −0.3310  
4.7000  3.4200  −0.1830  −5.0120  
−3.5790  4.9050  0.5820  3.9430  
5.2270  2.2140  −0.1130  −3.7440  
1.5730  −5.5360  0.2110  −2.9200  
−4.3610  3.8520  −0.5380  2.0050  
−2.6110  5.1060  −0.2640  1.4290  
1.3240  5.6320  −0.2540  −0.3850  −0.8230  
5.0350  2.7780  0.3700  −0.2030  0.3890  
5.1120  2.6770  0.0370  0.7280  
−5.4880  −1.8040  −0.0510  −1.4370  
5.4600  1.8770  0.1170  2.1230  
4.5330  −3.4080  0.3510  3.1740  
−0.6110  5.8300  0.0350  −3.4760  
3.1690  −4.8040  −0.3720  4.4170  
−4.9220  −3.0140  −0.0030  −5.0410  
−0.9670  −5.6650  −0.5560  −5.7940 
Weights  Biases  

$\mathbf{R}_{CFNN}^{std}$  $(\mathbf{z}_{CFNN}^{std})^{T}$  $(\mathbf{q}_{CFNN}^{std})^{T}$  $\mathbf{c}_{CFNN}^{std}$  $\mathbf{d}_{CFNN}^{std}$  
1.9650  −1.2350  −0.1600 0.1370  0.9570  −3.1820  0.9310 
2.8420  −1.4330  0.3030  1.1660  
−2.7860  −0.2660  −0.3060  0.4420  
1.7340  3.2990  −0.4380  1.9590  
0.6130  3.8920  0.1590  3.1660 
a. Mean Function  b. Standard Deviation Function  

Weights  Biases  Weights  Biases  
$\mathbf{V}_{RBFN}^{mean}$  $(\mathbf{u}_{RBFN}^{mean})^{T}$  $\mathbf{e}_{RBFN}^{mean}$  $\mathbf{f}_{RBFN}^{mean}$  $\mathbf{V}_{RBFN}^{std}$  $(\mathbf{u}_{RBFN}^{std})^{T}$  $\mathbf{e}_{RBFN}^{std}$  $\mathbf{f}_{RBFN}^{std}$  
0.0  −0.5  126.7450  1.3870  31.5280  0.5  −0.5  6.8760  1.3870  33.6620 
0.5  1.0  −29.6660  1.3870  0.5  1.0  −3.6160  1.3870  
−1.0  0.5  9.1310  1.3870  −1.0  −0.5  3.9420  1.3870  
1.0  −0.5  −35.0720  1.3870  −1.0  1.0  −28.7180  1.3870  
−1.0  −1.0  55.8470  1.3870  1.0  −1.0  −24.2000  1.3870  
1.0  1.0  72.6050  1.3870  1.0  0.5  0.1440  1.3870  
−0.5  1.0  7.0470  1.3870  −0.5  −0.5  8.5470  1.3870  
−0.5  0.5  −52.0380  1.3870  0.0  0.5  −11.9050  1.3870  
−1.0  0.0  31.4830  1.3870  −0.5  −1.0  0.5760  1.3870  
1.0  −1.0  77.7170  1.3870  −1.0  0.5  5.4580  1.3870  
1.0  0.0  79.6550  1.3870  0.5  0.5  11.8210  1.3870  
−1.0  1.0  56.6220  1.3870  0.0  0.0  −4.1700  1.3870  
0.5  0.5  43.0670  1.3870  0.0  1.0  −16.5340  1.3870  
0.0  1.0  92.4890  1.3870  1.0  1.0  −23.5990  1.3870  
−1.0  −0.5  13.2020  1.3870  1.0  0.0  −21.1190  1.3870  
−0.5  −0.5  −60.8050  1.3870  0.0  −1.0  −18.9870  1.3870  
0.0  0.5  −59.6330  1.3870  0.5  0.0  −9.2400  1.3870  
−0.5  0.0  65.2090  1.3870  −1.0  0.0  −25.1250  1.3870  
−0.5  −1.0  11.5900  1.3870  0.0  −0.5  −9.8510  1.3870  
0.0  0.0  −2.2410  1.3870  −0.5  0.0  −7.7680  1.3870  
0.5  −1.0  −31.3730  1.3870  1.0  −0.5  2.7330  1.3870  
1.0  0.5  −27.4060  1.3870  −1.0  −1.0  −27.3400  1.3870  
0.5  −0.5  51.3100  1.3870  0.5  −1.0  0.6880  1.3870  
0.5  0.0  −51.9210  1.3870  −0.5  0.5  6.3400  1.3870 
Appendix B. Weight and Bias Values of the Proposed Neural Network Structure in Simulation Study 2
a. Mean Function  b. Standard Deviation Function  

Weights  Biases  Weights  Biases  
$\mathbf{W}_{FFNN}^{mean}$  $(\mathbf{t}_{FFNN}^{mean})^{T}$  $\mathbf{a}_{FFNN}^{mean}$  $\mathbf{b}_{FFNN}^{mean}$  $\mathbf{W}_{FFNN}^{std}$  $(\mathbf{t}_{FFNN}^{std})^{T}$  $\mathbf{a}_{FFNN}^{std}$  $\mathbf{b}_{FFNN}^{std}$  
2.0570  3.9280  0.7630  −4.7780  0.0530  1.7820  1.8550  −0.3890  −1.6150  0.6640 
2.4000  3.9130  0.4830  −3.4680  2.2310  −0.7030  0.5740  0.0480  
−4.5010  −0.4190  0.5650  3.0550  −0.2220  −1.8100  1.0090  −2.9250  
2.9150  3.7590  0.2290  −1.3400  
1.4850  −4.3180  0.9410  −1.3350  
−4.6270  −0.2660  0.6220  −0.1490  
−2.2500  3.9870  −0.6890  −1.0590  
0.7070  4.5580  0.1550  1.5870  
2.6160  3.8280  0.5310  2.4090  
4.3540  1.5560  −0.9940  4.0990  
−2.6170  −3.7890  −1.3930  −4.6190 
Weights  Biases  

$\mathbf{R}_{CFNN}^{mean}$  $(\mathbf{z}_{CFNN}^{mean})^{T}$  $(\mathbf{q}_{CFNN}^{mean})^{T}$  $\mathbf{c}_{CFNN}^{mean}$  $\mathbf{d}_{CFNN}^{mean}$  
5.1170  −0.1120  0.3710 0.8340  −0.3840  −4.9710  0.0220 
4.2260  2.9460  0.2340  −4.0130  
−3.7930  −3.5990  0.0640  3.0950  
4.5130  −0.8420  −0.0300  −3.2350  
1.8880  4.7280  −0.2380  −1.5650  
−4.2200  2.7300  0.0610  0.9880  
−1.7950  4.8310  −0.4210  0.2780  
1.5620  4.7170  −0.5560  0.9210  
3.5040  −3.6720  0.5150  1.6860  
4.1320  2.8270  0.2500  2.6160  
−3.2120  4.0320  0.5630  −3.1850  
4.8210  1.5590  −0.5310  4.0190  
4.4480  2.5990  0.2130  4.9300 
Weights  Biases  

$\mathbf{R}_{CFNN}^{std}$  $(\mathbf{z}_{CFNN}^{std})^{T}$  $(\mathbf{q}_{CFNN}^{std})^{T}$  $\mathbf{c}_{CFNN}^{std}$  $\mathbf{d}_{CFNN}^{std}$  
1.2450  0.7210  0.5230 −0.5240  −0.0310  0.2590  −0.0380 
a. Mean Function  b. Standard Deviation Function  

Weights  Biases  Weights  Biases  
$\mathbf{V}_{RBFN}^{mean}$  $(\mathbf{u}_{RBFN}^{mean})^{T}$  $\mathbf{e}_{RBFN}^{mean}$  $\mathbf{f}_{RBFN}^{mean}$  $\mathbf{V}_{RBFN}^{std}$  $(\mathbf{u}_{RBFN}^{std})^{T}$  $\mathbf{e}_{RBFN}^{std}$  $\mathbf{f}_{RBFN}^{std}$  
0.0  −0.5  64.8980  1.6650  38.2010  0.5  −0.5  −28.2270  1.3870  −1.6450 
0.5  1.0  −6.6220  1.6650  −0.5  0.5  8.2530  1.3870  
−1.0  0.5  17.0920  1.6650  1.0  1.0  21.6710  1.3870  
1.0  −0.5  −8.6860  1.6650  −1.0  −1.0  34.5170  1.3870  
−1.0  −1.0  34.9310  1.6650  1.0  −1.0  9.1260  1.3870  
1.0  0.5  −21.9170  1.6650  −0.5  −1.0  −17.9100  1.3870  
−0.5  1.0  25.6960  1.6650  1.0  0.0  −5.4440  1.3870  
0.5  −1.0  10.9180  1.6650  0.0  −1.0  22.5250  1.3870  
−1.0  0.0  26.8410  1.6650  0.5  1.0  −3.2470  1.3870  
−1.0  1.0  31.4250  1.6650  −1.0  0.0  32.0960  1.3870  
−0.5  −1.0  34.7400  1.6650  −1.0  1.0  17.5940  1.3870  
1.0  1.0  62.8200  1.6650  0.0  0.5  4.8550  1.3870  
0.5  0.0  −47.0190  1.6650  −0.5  0.0  −11.1920  1.3870  
1.0  −1.0  40.4510  1.6650  0.0  1.0  12.6800  1.3870  
1.0  0.0  63.1560  1.6650  −1.0  −0.5  −25.7220  1.3870  
0.0  1.0  55.4460  1.6650  1.0  0.5  4.0170  1.3870  
−0.5  −0.5  −4.6920  1.6650  0.5  0.5  −7.4010  1.3870  
0.5  −0.5  62.5500  1.6650  −0.5  −0.5  20.5440  1.3870  
−0.5  0.0  17.0590  1.6650  1.0  −0.5  26.7630  1.3870  
−0.5  0.5  −11.3100  1.6650  0.5  0.0  27.5990  1.3870  
−1.0  −0.5  21.5900  1.6650  0.5  −1.0  13.4280  1.3870  
0.5  0.5  52.6940  1.6650  −1.0  0.5  −12.6070  1.3870  
0.0  0.5  −59.7970  1.6650  −0.5  1.0  −3.6890  1.3870  
0.0  0.0  35.7290  1.6650  0.0  −0.5  3.9230  1.3870 
Appendix C. Weight and Bias Values of the Proposed Neural Network Structure in the Case Study
Weights  Biases  

$\mathbf{W}_{FFNN}^{mean}$  $(\mathbf{t}_{FFNN}^{mean})^{T}$  $\mathbf{a}_{FFNN}^{mean}$  $\mathbf{b}_{FFNN}^{mean}$  
1.5660  1.9450  1.1790  0.3610  −3.0880  −0.1520 
1.7450  −1.3140  2.0400  0.4270  −2.0620  
−1.6810  2.2590  0.6690  0.2180  1.4730  
1.5770  1.7570  −1.5510  0.1660  −0.9160  
1.1460  −0.0730  2.6140  0.0510  0.1540  
−1.7480  1.4660  1.7830  −0.0630  −0.4130  
−1.3760  −2.3660  0.9550  −0.1300  −1.4310  
0.5280  −0.8930  2.4800  0.2700  2.4590  
1.9980  1.7550  1.2700  −0.0220  2.8680 
Weights  Biases  

$\mathbf{W}_{FFNN}^{std}$  $(\mathbf{t}_{FFNN}^{std})^{T}$  $\mathbf{a}_{FFNN}^{std}$  $\mathbf{b}_{FFNN}^{std}$  
1.9520  −2.0810  0.9550  −0.2470  −3.0060  −0.3080 
1.4520  1.6590  −1.9370  −0.5730  −2.5620  
−1.6750  1.9270  1.5750  −0.0660  1.6940  
1.9030  −0.2630  2.2310  0.1620  −1.0300  
0.9430  2.3320  1.6200  0.1020  −0.3810  
−2.0160  −1.7170  1.3500  −0.2570  −0.1860  
−1.9130  −0.4820  2.2770  0.0650  −0.9540  
0.2000  2.8330  −0.8230  0.1420  1.7220  
2.7580  1.5070  0.6520  0.3000  1.9940  
1.4970  1.7260  −1.1410  −0.9380  3.4800 
Weights  Biases  

$\mathbf{R}_{CFNN}^{mean}$  $(\mathbf{z}_{CFNN}^{mean})^{T}$  $(\mathbf{q}_{CFNN}^{mean})^{T}$  $\mathbf{c}_{CFNN}^{mean}$  $\mathbf{d}_{CFNN}^{mean}$  
1.7480  2.4100  1.0820  0.5820 0.1600 0.6080  −0.0060  −3.2030  0.1530 
2.6590  −0.1570  1.7430  −0.1080  −2.6440  
−2.1980  1.7500  1.4620  −0.2100  2.0920  
2.3100  −1.9110  −0.7440  0.0770  −1.6370  
1.9620  −0.9270  2.3510  −0.0110  −0.8430  
−1.9680  1.8610  −1.6700  −0.0560  0.2480  
−1.5170  2.3360  1.6270  0.4240  −0.2960  
0.2480  2.1750  −2.3370  0.0730  0.8240  
2.7660  0.9810  −1.3500  −0.0010  1.3920  
1.8210  −1.8120  −1.8440  0.2630  2.0910  
−1.6870  1.7400  −2.0190  0.2820  −2.6670  
2.1030  1.8880  1.2460  −0.5660  3.3270 
Weights  Biases  

$\mathbf{R}_{CFNN}^{std}$  $(\mathbf{z}_{CFNN}^{std})^{T}$  $(\mathbf{q}_{CFNN}^{std})^{T}$  $\mathbf{c}_{CFNN}^{std}$  $\mathbf{d}_{CFNN}^{std}$  
2.0800  2.0450  1.7150  −0.4480 −0.6100 0.7170  0.8060  −2.7520  −0.0920 
1.3360  2.1850  1.1350  0.0190  −3.1570  
−2.1980  −0.2670  1.5240  0.0550  2.4750  
1.6410  1.7640  2.2460  −0.5140  −1.7790  
0.9380  −2.5720  1.7050  0.1360  −0.7120  
−3.1130  −0.7290  0.9550  −0.4490  0.6660  
0.8410  2.7640  1.8200  0.4000  −1.2890  
0.9370  2.2670  −1.8640  0.8030  0.9310  
1.7920  2.1840  1.0390  −0.0180  1.7790  
2.2030  0.7220  −2.3830  0.0520  2.2580  
−1.8960  −2.6360  −0.6560  0.0710  −2.9400 
Weights  Biases  

$\mathbf{V}_{RBFN}^{mean}$  $(\mathbf{u}_{RBFN}^{mean})^{T}$  $\mathbf{e}_{RBFN}^{mean}$  $\mathbf{f}_{RBFN}^{mean}$  
1.000  1.000  1.000  804.3300  1.6650  124.7800 
1.000  1.000  0.000  513.9900  1.6650  
1.000  0.000  1.000  442.7600  1.6650  
0.000  0.000  1.000  288.7100  1.6650  
1.000  0.000  0.000  274.6200  1.6650  
0.000  1.000  0.000  219.6200  1.6650  
1.000  −1.000  1.000  250.5800  1.6650  
0.000  1.000  1.000  287.6200  1.6650  
1.000  0.000  −1.000  185.9900  1.6650  
1.000  −1.000  0.000  195.1100  1.6650  
0.000  0.000  0.000  193.3600  1.6650  
−1.000  1.000  0.000  121.6500  1.6650  
0.000  1.000  −1.000  110.8100  1.6650  
−1.000  −1.000  1.000  89.9600  1.6650  
1.000  1.000  −1.000  94.2940  1.6650  
1.000  −1.000  −1.000  64.2240  1.6650  
0.000  −1.000  1.000  75.5390  1.6650  
−1.000  0.000  1.000  45.0000  1.6650  
−1.000  0.000  0.000  27.0270  1.6650  
−1.000  1.000  1.000  21.3410  1.6650  
0.000  −1.000  −1.000  0.0000  1.6650  
−1.000  1.000  −1.000  −25.8380  1.6650  
0.000  0.000  −1.000  −18.8420  1.6650  
−1.000  0.000  −1.000  −33.2240  1.6650  
−1.000  −1.000  0.000  −42.9520  1.6650  
0.000  −1.000  0.000  −53.0300  1.6650  
−1.000  −1.000  −1.000  −95.8940  1.6650 
Weights  Biases  

$\mathbf{V}_{RBFN}^{std}$  $(\mathbf{u}_{RBFN}^{std})^{T}$  $\mathbf{e}_{RBFN}^{std}$  $\mathbf{f}_{RBFN}^{std}$  
1.000  0.000  1.000  158.1600  2.7750  −0.0810 
1.000  1.000  1.000  142.3900  2.7750  
0.000  1.000  1.000  138.8700  2.7750  
−1.000  −1.000  1.000  133.8800  2.7750  
1.000  0.000  0.000  92.4780  2.7750  
0.000  1.000  0.000  88.5840  2.7750  
0.000  0.000  −1.000  80.4730  2.7750  
−1.000  1.000  0.000  63.4950  2.7750  
−1.000  1.000  1.000  55.4860  2.7750  
0.000  0.000  1.000  44.5590  2.7750  
1.000  −1.000  −1.000  42.8840  2.7750  
1.000  −1.000  0.000  32.9120  2.7750  
−1.000  0.000  1.000  29.4130  2.7750  
−1.000  1.000  −1.000  27.6230  2.7750  
1.000  1.000  −1.000  23.6910  2.7750  
0.000  −1.000  1.000  23.4430  2.7750  
1.000  1.000  0.000  21.0030  2.7750  
1.000  −1.000  1.000  18.5040  2.7750  
0.000  −1.000  0.000  17.7250  2.7750  
1.000  0.000  −1.000  16.1390  2.7750  
−1.000  0.000  0.000  15.0480  2.7750  
−1.000  −1.000  −1.000  12.5660  2.7750  
0.000  −1.000  −1.000  8.3970  2.7750  
0.000  1.000  −1.000  4.6000  2.7750  
−1.000  0.000  −1.000  3.4830  2.7750  
0.000  0.000  0.000  −0.0720  2.7750 
Appendix D. Summary of Abbreviations and Main Variables
Division  Description 

$BIC$  Bayesian information criterion 
BP  Back-propagation 
CCD  Central composite design 
CFNN  Cascade-forward back-propagation neural network 
CNN  Convolutional neural network 
DoE  Design of experiments 
DR  Dual response 
$EQL$  Expected quality loss 
FFNN  Feed-forward back-propagation neural network 
LSM  Least squares method 
MLE  Maximum likelihood estimation 
MSE  Mean squared error 
NN  Neural network 
OA  Orthogonal array 
RBFN  Radial basis function network 
RD  Robust design 
RMS  Root mean square 
RSM  Response surface methodology 
WLS  Weighted least squares 
$x$  Input factor 
$\mathbf{x}$  Vector of input factors 
$z$  Noise factor 
$y$  Output response 
$\mathbf{y}$  Vector of output responses 
$\overline{y}$  Mean of observed data 
$s$  Standard deviation of observed data 
${s}^{2}$  Variance of observed data 
$\epsilon$  Error 
$\tau$  Desired target value of a quality characteristic 
| Run | $x_1$ | $x_2$ | $y_{true}$ |
|---|---|---|---|
| 1 | 0.5 | −1.0 | 113.5914 |
| 2 | 1.0 | −0.5 | 112.9478 |
| 3 | −1.0 | −0.5 | 104.7632 |
| 4 | 1.0 | 0.0 | 100.0000 |
| 5 | 0.5 | 0.5 | 78.6806 |
| 6 | 0.0 | 0.0 | 100.0000 |
| 7 | 0.5 | 0.0 | 100.0000 |
| 8 | −1.0 | 0.5 | 89.9294 |
| 9 | −1.0 | 1.0 | 97.0000 |
| 10 | 1.0 | −1.0 | 105.0000 |
| 11 | 0.0 | 1.0 | 91.8451 |
| 12 | −1.0 | 0.0 | 100.0000 |
| 13 | 1.0 | 1.0 | 99.5939 |
| 14 | −0.5 | 0.5 | 64.8503 |
| 15 | 0.0 | −0.5 | 158.0282 |
| 16 | −0.5 | −1.0 | 105.0000 |
| 17 | −0.5 | −0.5 | 127.4106 |
| 18 | 0.5 | 1.0 | 97.0000 |
| 19 | −1.0 | −1.0 | 100.6766 |
| 20 | 0.0 | −1.0 | 113.5914 |
| 21 | 0.0 | 0.5 | 54.8669 |
| 22 | 1.0 | 0.5 | 96.2952 |
| 23 | −0.5 | 0.0 | 100.0000 |
| 24 | −0.5 | 1.0 | 91.8451 |
| 25 | 0.5 | −0.5 | 145.1924 |
| Run | $x_1$ | $x_2$ | $\overline{y}$ | $s$ | $s^2$ |
|---|---|---|---|---|---|
| 1 | 0.50 | −1.00 | 112.5910 | 5.9420 | 35.3060 |
| 2 | 1.00 | −0.50 | 111.1480 | 6.3660 | 40.5310 |
| 3 | −1.00 | −0.50 | 105.1830 | 4.9330 | 24.3300 |
| 4 | 1.00 | 0.00 | 100.6400 | 5.2010 | 27.0510 |
| 5 | 0.50 | 0.50 | 77.4810 | 5.3530 | 28.6530 |
| 6 | 0.00 | 0.00 | 99.7600 | 5.1250 | 26.2680 |
| 7 | 0.50 | 0.00 | 101.0600 | 5.0650 | 25.6490 |
| 8 | −1.00 | 0.50 | 89.9690 | 3.9950 | 15.9580 |
| 9 | −1.00 | 1.00 | 96.6600 | 2.9110 | 8.4740 |
| 10 | 1.00 | −1.00 | 105.8000 | 6.6580 | 44.3270 |
| 11 | 0.00 | 1.00 | 91.7450 | 4.2200 | 17.8060 |
| 12 | −1.00 | 0.00 | 99.7800 | 3.7970 | 14.4200 |
| 13 | 1.00 | 1.00 | 100.3540 | 5.0330 | 25.3290 |
| 14 | −0.50 | 0.50 | 66.1700 | 3.4490 | 11.8960 |
| 15 | 0.00 | −0.50 | 158.7880 | 4.8550 | 23.5740 |
| 16 | −0.50 | −1.00 | 104.5000 | 5.0920 | 25.9290 |
| 17 | −0.50 | −0.50 | 127.9310 | 5.3540 | 28.6630 |
| 18 | 0.50 | 1.00 | 96.5800 | 4.4950 | 20.2080 |
| 19 | −1.00 | −1.00 | 100.8770 | 4.3940 | 19.3060 |
| 20 | 0.00 | −1.00 | 113.8910 | 5.2110 | 27.1530 |
| 21 | 0.00 | 0.50 | 54.8070 | 4.0020 | 16.0170 |
| 22 | 1.00 | 0.50 | 95.8950 | 5.4510 | 29.7140 |
| 23 | −0.50 | 0.00 | 100.8800 | 4.0340 | 16.2710 |
| 24 | −0.50 | 1.00 | 91.2050 | 3.4390 | 11.8270 |
| 25 | 0.50 | −0.50 | 144.5120 | 5.3620 | 28.7530 |
| Model | Response | Transfer Function | Training Function | Architecture | #Epoch |
|---|---|---|---|---|---|
| FFNN | Mean | Tansig-Purelin | Trainlm | 2-17-1 | 3 |
| FFNN | Standard deviation | Tansig-Purelin | Trainrp | 2-6-1 | 44 |
| CFNN | Mean | Tansig-Purelin | Trainlm | 2-17-1 | 5 |
| CFNN | Standard deviation | Tansig-Purelin | Trainrp | 2-5-1 | 28 |
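The architectures above follow MATLAB Neural Network Toolbox naming: a tansig (hyperbolic tangent) hidden layer followed by a purelin (identity) output layer, with, e.g., 2-17-1 meaning 2 inputs, 17 hidden neurons, and 1 output. A minimal NumPy sketch of such a forward pass is shown below; the weights here are randomly initialized placeholders, not the trained networks from the study:

```python
import numpy as np

def ffnn_forward(x, W1, b1, W2, b2):
    """Forward pass of a feed-forward network with a tansig (tanh)
    hidden layer and a purelin (identity) output layer."""
    h = np.tanh(W1 @ x + b1)  # tansig hidden layer
    return W2 @ h + b2        # purelin output layer

# Illustrative 2-17-1 network (matching the mean model above)
rng = np.random.default_rng(0)
n_in, n_hidden = 2, 17
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(1, n_hidden))
b2 = rng.normal(size=1)

y = ffnn_forward(np.array([0.5, -1.0]), W1, b1, W2, b2)
print(y.shape)  # (1,)
```

In the study the weights of these networks are fitted with the trainlm (Levenberg-Marquardt) or trainrp (resilient backpropagation) algorithms rather than set by hand.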
| Model | Response | Transfer Function | Goal | Spread |
|---|---|---|---|---|
| RBFN | Mean | Radbas-Purelin | 1 × 10^{−7} | 0.6 |
| RBFN | Standard deviation | Radbas-Purelin | 1 × 10^{−7} | 0.6 |
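The radbas transfer function in the table is the Gaussian basis exp(−n²); in MATLAB's convention the distance to each center is scaled by 0.8326/spread, so a hidden neuron's response falls to 0.5 when the input is exactly `spread` away from its center. A small sketch of this forward computation, with illustrative centers and weights (not the trained network from the study):

```python
import numpy as np

def radbas(n):
    """MATLAB-style radial basis transfer function: exp(-n^2)."""
    return np.exp(-n ** 2)

def rbfn_forward(x, centers, w, b_out, spread=0.6):
    """One radbas hidden layer plus a purelin output layer.
    Distances are scaled by 0.8326/spread so a neuron's response
    is 0.5 at distance `spread` from its center."""
    dists = np.linalg.norm(centers - x, axis=1)
    h = radbas(dists * 0.8326 / spread)
    return w @ h + b_out

# Hypothetical centers/weights chosen only for illustration
centers = np.array([[0.0, 0.0], [0.5, -0.5], [-0.5, 0.5]])
w = np.array([1.0, 0.5, -0.5])
y = rbfn_forward(np.array([0.0, 0.0]), centers, w, b_out=100.0)
print(y)  # prints 101.0
```

With exact-interpolation training (the small Goal value in the table), one center is placed at every design point and the output weights are solved for directly, which is why the spread is the main tuning parameter.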
| Estimation Model | Optimal $x_1$ | Optimal $x_2$ | Process Mean | Process Bias | Process Variance | EQL |
|---|---|---|---|---|---|---|
| LSM | −0.5870 | −1.0000 | 119.9030 | 8.0960 | 35.7730 | 101.3270 |
| FFNN | −0.5230 | −0.5110 | 127.9900 | 0.0090 | 23.0860 | 23.0860 |
| CFNN | −0.3280 | −0.9290 | 127.9830 | 0.0160 | 27.0260 | 27.0260 |
| RBFN | 0.6860 | −0.7690 | 127.9410 | 0.0580 | 24.3250 | 24.3280 |
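The EQL column can be reproduced from the mean and variance columns as squared bias plus variance, EQL = (τ − μ̂)² + σ̂². The target value τ = 128 is not stated in this excerpt; it is inferred from the table, where mean + bias equals 128 in every row. A quick check:

```python
# Expected quality loss: squared bias plus variance.
# Target 128 is an inference from the table rows, not a given.
def eql(mean, variance, target=128.0):
    bias = abs(target - mean)
    return bias ** 2 + variance

# (process mean, process variance) from the comparison table above
rows = {
    "LSM":  (119.903, 35.773),
    "FFNN": (127.990, 23.086),
    "CFNN": (127.983, 27.026),
    "RBFN": (127.941, 24.325),
}
for model, (m, v) in rows.items():
    print(f"{model}: EQL = {eql(m, v):.3f}")
```

The recomputed values match the tabulated EQLs up to rounding of the reported means, and the same relation holds for the second simulation study and for the case study (where the target is 500).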
| Run | $x_1$ | $x_2$ | $\overline{y}$ | $s$ | $s^2$ |
|---|---|---|---|---|---|
| 1 | 0.5 | −1.0 | 117.0310 | 28.4560 | 809.7210 |
| 2 | 1.0 | −0.5 | 108.9480 | 28.5790 | 816.7760 |
| 3 | −1.0 | −0.5 | 107.1230 | 17.1750 | 294.9700 |
| 4 | 1.0 | 0.0 | 100.5800 | 22.3820 | 500.9420 |
| 5 | 0.5 | 0.5 | 83.9410 | 22.0650 | 486.8490 |
| 6 | 0.0 | 0.0 | 98.3400 | 20.3830 | 415.4530 |
| 7 | 0.5 | 0.0 | 98.0000 | 23.8250 | 567.6330 |
| 8 | −1.0 | 0.5 | 90.1890 | 16.3130 | 266.1150 |
| 9 | −1.0 | 1.0 | 92.2200 | 14.7930 | 218.8280 |
| 10 | 1.0 | −1.0 | 100.1200 | 26.8040 | 718.4340 |
| 11 | 0.0 | 1.0 | 91.2050 | 17.0730 | 291.5000 |
| 12 | −1.0 | 0.0 | 97.5800 | 18.1380 | 328.9830 |
| 13 | 1.0 | 1.0 | 104.2340 | 21.8010 | 475.2960 |
| 14 | −0.5 | 0.5 | 68.7300 | 18.4750 | 341.3320 |
| 15 | 0.0 | −0.5 | 157.4680 | 23.2770 | 541.8020 |
| 16 | −0.5 | −1.0 | 115.1800 | 21.6910 | 470.5180 |
| 17 | −0.5 | −0.5 | 128.1910 | 21.9950 | 483.7670 |
| 18 | 0.5 | 1.0 | 98.3600 | 18.7680 | 352.2350 |
| 19 | −1.0 | −1.0 | 104.5970 | 20.9930 | 440.6870 |
| 20 | 0.0 | −1.0 | 114.6310 | 26.1640 | 684.5700 |
| 21 | 0.0 | 0.5 | 53.3870 | 20.1200 | 404.8260 |
| 22 | 1.0 | 0.5 | 93.0950 | 20.7210 | 429.3470 |
| 23 | −0.5 | 0.0 | 97.8600 | 19.4930 | 379.9600 |
| 24 | −0.5 | 1.0 | 95.2650 | 16.5790 | 274.8610 |
| 25 | 0.5 | −0.5 | 147.8720 | 25.1310 | 631.5690 |
| Model | Response | Transfer Function | Training Function | Architecture | #Epoch |
|---|---|---|---|---|---|
| FFNN | Mean | Tansig-Purelin | Trainlm | 2-11-1 | 5 |
| FFNN | Standard deviation | Tansig-Purelin | Trainrp | 2-3-1 | 23 |
| CFNN | Mean | Tansig-Purelin | Trainlm | 2-13-1 | 6 |
| CFNN | Standard deviation | Tansig-Purelin | Trainrp | 2-1-1 | 29 |
| Model | Response | Transfer Function | Goal | Spread |
|---|---|---|---|---|
| RBFN | Mean | Radbas-Purelin | 1 × 10^{−20} | 0.5 |
| RBFN | Standard deviation | Radbas-Purelin | 1 × 10^{−20} | 0.6 |
| Estimation Model | Optimal $x_1$ | Optimal $x_2$ | Process Mean | Process Bias | Process Variance | EQL |
|---|---|---|---|---|---|---|
| LSM | −1.000 | −1.000 | 117.2490 | 10.7500 | 413.2320 | 528.8130 |
| FFNN | −0.804 | −0.561 | 127.2800 | 0.7190 | 359.3420 | 359.8600 |
| CFNN | −0.383 | −0.274 | 127.2120 | 0.7870 | 445.7840 | 446.4040 |
| RBFN | −0.141 | −0.190 | 127.0680 | 0.9310 | 422.7920 | 423.6600 |
| Model | Response | Transfer Function | Training Function | Architecture | #Epoch |
|---|---|---|---|---|---|
| FFNN | Mean | Tansig-Purelin | Traingdx | 3-9-1 | 105 |
| FFNN | Standard deviation | Tansig-Purelin | Traingda | 3-10-1 | 19 |
| CFNN | Mean | Tansig-Purelin | Traingda | 3-12-1 | 107 |
| CFNN | Standard deviation | Tansig-Purelin | Trainrp | 3-11-1 | 9 |
| Model | Response | Transfer Function | Goal | Spread |
|---|---|---|---|---|
| RBFN | Mean | Radbas-Purelin | 1 × 10^{−27} | 0.5 |
| RBFN | Standard deviation | Radbas-Purelin | 1 × 10^{−27} | 0.8 |
| Estimation Model | Optimal $x_1$ | Optimal $x_2$ | Optimal $x_3$ | Process Mean | Process Bias | Process Variance | EQL |
|---|---|---|---|---|---|---|---|
| LSM | 1.000 | 0.071 | −0.2500 | 494.6720 | 5.3270 | 1977.5320 | 2005.9170 |
| FFNN | 0.999 | 0.999 | −0.6230 | 499.8690 | 0.1310 | 7.7630 | 7.7800 |
| CFNN | 0.792 | −0.752 | 0.9990 | 499.6870 | 0.3120 | 2.8540 | 2.9520 |
| RBFN | 0.999 | 0.999 | −0.4300 | 500.0520 | 0.0520 | 23.2190 | 23.2210 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Le, T.H.; Dai, L.; Jang, H.; Shin, S. Robust Process Parameter Design Methodology: A New Estimation Approach by Using Feed-Forward Neural Network Structures and Machine Learning Algorithms. Appl. Sci. 2022, 12, 2904. https://doi.org/10.3390/app12062904