Article

Optimization of Levenberg Marquardt Algorithm Applied to Nonlinear Systems

College of Information & NetWork Engineering, Anhui Science and Technology University, Chuzhou 233100, China
* Author to whom correspondence should be addressed.
Processes 2023, 11(6), 1794; https://doi.org/10.3390/pr11061794
Submission received: 27 April 2023 / Revised: 26 May 2023 / Accepted: 3 June 2023 / Published: 12 June 2023

Abstract:
As science and technology advance, industrial manufacturing processes become increasingly complicated, and the Back Propagation Neural Network (BPNN) converges comparatively slowly when processing nonlinear systems. In this study, a nonlinear system is used to evaluate the optimization of the BPNN based on the Levenberg-Marquardt (LM) algorithm, and the algorithm's efficacy is verified through MATLAB simulation analysis. The application effect of the enhanced approach is examined using a continuous stirred tank reactor (CSTR) control system as an example. The findings demonstrate that the identification error of the LM optimization algorithm reaches the order of 10−5. The proposed control approach for the reactant concentration CA in the CSTR system provides a better tracking effect and stronger anti-interference capacity, and its overall control effect is superior to that of the PI control method. As a result, the processing accuracy of the optimization model for nonlinear systems is greatly improved. The proposed LM-BP optimization algorithm is evidently more appropriate for nonlinear systems and provides data support for the accuracy study of neural network models and the application of nonlinear systems.

1. Introduction

As part of the industrial revolution, industrial production has been optimized to be more intelligent and automated. The increasing complexity of industrial production raises the requirements for control precision and control speed. The controlled object with multiple inputs and outputs has evolved into the central component of industrial control systems [1]. In recent years, the predictive control of nonlinear systems (NS) has emerged as one of the major topics of predictive control. Dynamically predicting the output value is the key to nonlinear system control, and the system model can be optimized online during each control cycle. However, the majority of extant control optimization techniques for nonlinear systems center on the optimization of computational methods, and their control and forecasting accuracy and efficiency are relatively deficient. The neural network (NN) algorithm is widely employed in rolling optimization solutions and nonlinear system modeling. It has strong nonlinear problem processing ability, high computing efficiency, efficient adaptability to uncertain systems, and enhances nonlinear system control [2,3,4]. However, as research and application have expanded, this algorithm has revealed certain limitations regarding computational efficacy and model selection. Consequently, it is optimized and enhanced using the Levenberg-Marquardt (LM) algorithm. LM is a frequently employed method for solving optimization objective functions of the nonlinear least-squares type [5,6,7]. In this study, the LM algorithm is applied to the BP neural network model, and the nonlinear system serves as the research object for strategy control simulations. The approach is anticipated to increase processing efficacy in nonlinear systems and achieve efficient control of intelligent industrial systems.

2. Related Work

The LM algorithm has many advantages, including a lower probability of collapsing into a local extremum and a high degree of stability, and it is widely used in strategic control problems. To address the difficulty of routinely inspecting smart meters, Chen L et al. proposed an error estimation model for smart meters based on the genetic optimization LM algorithm. The optimization model was applied to an investigation of a local electricity meter. The results demonstrated that it could enhance the accuracy of intelligent meter error estimation [8]. Distance vector hop (DV hop) has stringent network topology requirements and poor positioning precision in practical applications. Shi Q et al. developed a weighted LM algorithm to optimize the initial location of unknown nodes in DV hop in response to this flaw. The study's findings indicated that the revised approach had a stronger ability to adapt to changes in network topology and greatly increased placement accuracy [9]. The pile raft foundation is regarded as a novel offshore and onshore structural technology, but its load sharing and interaction behavior lacks corresponding guidelines. Deb N et al. utilized the LM algorithm to develop NN models for this problem while employing nonlinear multiple regression and artificial neural network (ANN) models to estimate their safety factors. It served as a guide for the development and application of this novel technology [10]. Niu Y. et al. proposed the dynamic fusion of the LM and Whale optimization algorithm (WOA) to enhance the accuracy of star sensor measurements. The global search of WOA and the local optimization capacity of LM were used to address the dependence on initial values and local convergence issues of conventional algorithms. The results of the study demonstrated that this method provided superior performance, greater precision, and efficient parameter optimization for star sensors [11]. Bilski J. et al. modified the LM algorithm locally to solve the issue of inefficient computation of complex NNs. Experiments revealed that local modifications to the LM method substantially enhanced the algorithm's performance in larger networks, suggesting ways to optimize the LM algorithm [12].
NS is a system in which output and input alterations are not proportional. Controlling NSs is one of the most active research areas in the discipline of control. He D et al. proposed a model-free prescribed time controller for NSs. To address controller inaccuracies, an adaptive RBFNN compensator was developed. Using the Lyapunov theorem, the stability of closed-loop systems with model-free prescribed time controllers was analyzed. The study results demonstrated that the proposed control strategy was effective [13,14,15,16]. Liu S. et al. proposed a framework for the dynamic identification of folded fins with free gap nonlinearity based on backbone curves. The methods proposed in this study were more direct and efficient than the majority of extant NS identification methods, and they produced accurate dynamic models for folded fin structures [17]. Kien C V et al. introduced an adaptive inverse multilayer T-S fuzzy controller (AIMFC) for optimal computation of robust control in uncertain NSs. A study comparing the performance of the algorithm under various control parameters showed that its performance in both SMD systems and coupled liquid tank systems was superior to that of inverse fuzzy controllers [18]. Zhang Z et al. adapted the dynamic frequency-based parameter identification method to NSs with a periodic response. An oscillator with nonlinear stiffness and damping was used to assess the performance of the identification method. Results demonstrated that the proposed method required less time and had high recognition accuracy [19]. Zheng Y et al. investigated the adaptive fuzzy event-triggered control problem for a class of uncertain nonlinear systems. To update the controller and fuzzy weight vector to accomplish aperiodic control input signals for NSs, a novel method with combined trigger (CT) behavior was proposed. The analysis revealed that this method significantly decreased controller update frequency while maintaining control performance [20].
According to the aforementioned studies, research on the LM algorithm and nonlinear systems has produced relatively in-depth and abundant findings. In today's studies, however, nonlinear system control strategies are predominantly based on neural networks, which are susceptible to local optima and slow convergence. Meanwhile, the applicability of this control strategy is limited, and control precision and effectiveness cannot be guaranteed. Consequently, this study utilizes the LM algorithm to optimize and enhance the BP neural network. The simulation experiment is conducted on a nonlinear system in order to enhance the nonlinear system's strategy control.

3. Optimization and Identification of NS Model Based on LM-BPNN

3.1. Construction of NS Strategic Control Method Based on BPNN

Table 1 presents the definition of terms used in this paper.
The structure and operating principles of the human brain are the basis for ANNs. This structure is a simplification and abstraction of biological NNs. Figure 1 depicts the neural structure model of ANNs. There are countless NN models with various structures, but ANNs share three basic structural elements: connection weights, activation functions, and summation units [21,22]. The connection weight value represents the strength of the neural connection: a positive value indicates an excitatory connection, while a negative value indicates an inhibitory one. The activation function handles the nonlinear mapping of nonlinear problems, and the summation unit weights and sums the input signals.
Formulas (1) and (2) represent the summation result and the output value of the neuron, respectively, where $x_j$ is the input value, $w_{kj}$ is the connection weight value, $\varphi$ is the excitation function, and $\theta_k$ is the threshold value.
$$u_k = \sum_{j=1}^{n} w_{kj} x_j \tag{1}$$
$$y_k = \varphi\left( \sum_{j=1}^{n} w_{kj} x_j - \theta_k \right) \tag{2}$$
The excitation function is the core of an NN and is closely related to the NN's ability to handle problems. The sigmoid-type excitation function is a commonly used NN excitation function whose output value is bounded; the logarithmic form maps inputs into the interval (0, 1). Sigmoid-type excitation functions can be divided into the logarithmic type (logsig) and the hyperbolic tangent type (tansig), whose expressions are given in Formulas (3) and (4), respectively.
$$f(x) = \frac{1}{1 + e^{-x}} \tag{3}$$
$$f(x) = \frac{1 - e^{-2x}}{1 + e^{-2x}} \tag{4}$$
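For illustration only, the neuron model of Formulas (1) and (2) and the two sigmoid-type excitation functions of Formulas (3) and (4) can be sketched in Python/NumPy as follows; the example inputs, weights, and threshold are arbitrary values and not data from the paper.

```python
import numpy as np

def logsig(x):
    # Logarithmic (logistic) excitation function, Formula (3); output lies in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tansig(x):
    # Hyperbolic tangent excitation function, Formula (4); output lies in (-1, 1)
    return (1.0 - np.exp(-2.0 * x)) / (1.0 + np.exp(-2.0 * x))

def neuron_output(x, w, theta, phi=logsig):
    # Formula (1): weighted summation u_k = sum_j w_kj * x_j
    u = np.dot(w, x)
    # Formula (2): y_k = phi(u_k - theta_k)
    return phi(u - theta)

# Arbitrary example: three inputs, one neuron
x = np.array([0.2, -0.5, 0.8])
w = np.array([0.4, 0.1, -0.3])
print(neuron_output(x, w, theta=0.1), neuron_output(x, w, theta=0.1, phi=tansig))
```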
BPNN is a feedforward neural network. The forward transmission of signals and the backward propagation of errors make up the two stages of the learning process. Figure 2 depicts the BPNN's structure. Forward propagation has two characteristics: the connection weights of the network are not updated during this stage, and when there is a discrepancy between the output and the expected value, the process turns to the error backpropagation procedure. Backpropagation is characterized by transmitting error signals to the neurons in each layer, with each layer receiving its own error signal. In addition, the connection weight values are modified based on each layer's error signal, thereby correcting the output value of each layer.
Although the BPNN technique offers distinct advantages for handling nonlinear mapping issues, it nonetheless has certain drawbacks in practical use [23]. First, BPNN poses a non-convex optimization problem, meaning that the algorithm easily converges to a local optimum. The second issue with BPNN is gradient disappearance: as the number of training iterations increases, the convergence rate and learning efficacy of the BPNN decrease dramatically. Third, BPNN suffers from overfitting. The characteristic of global approximation has enhanced the algorithm's capacity for generalization to some extent, but the issue of overfitting has not been fundamentally resolved. There are currently numerous optimizations for BPNNs, whose primary objectives include accelerating the algorithm's convergence rate and overcoming non-convex optimization. Momentum factor optimization improves the weight correction of the algorithm by introducing a momentum factor $\alpha \in (0, 1)$ into the weight correction term. This optimization concept aims to accelerate the algorithm's convergence rate by suppressing and mitigating disruptions in the learning process. The variable step size momentum optimization method addresses the difficulty of selecting the learning rate $\eta$ while utilizing momentum factor optimization. The variable step size algorithm determines the change in step size based on the gradient change direction of the iterative process and the rate of weight correction. Combining the optimization concept of the momentum factor, Formula (5) gives the weight correction formula for the variable step size momentum optimization method, where $D(k)$ denotes the weight correction (descent) direction at step $k$.
$$w(k+1) = w(k) + \eta(k)\left[(1 - \alpha)D(k) + \alpha D(k-1)\right] \tag{5}$$
In momentum factor optimization and the variable step size algorithm, the learning rate $\eta$ of the BPNN remains constant. However, the learning rate suitable for the BPNN differs across training periods. Therefore, a learning rate increment factor and a decrement factor are introduced into the BP algorithm so that the learning rate adjusts to the change of the error. The functional expression for learning rate selection is shown in Formula (6), where $\eta$ is the learning rate, $k_{inc}$ and $k_{dec}$ are the learning rate increment and decrement factors, and $E(k)$ is the mean square error at step $k$.
$$\eta = \begin{cases} \eta \, k_{inc}, & E(k) < E(k-1) \\ \eta \, k_{dec}, & E(k) > E(k-1) \end{cases} \tag{6}$$
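A minimal sketch of one variable step size momentum update following Formulas (5) and (6) is given below; the function name, the increment/decrement factor values, and the way the correction direction D(k) is supplied are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np

def variable_step_momentum_update(w, d_cur, d_prev, eta, alpha,
                                  err_cur, err_prev, k_inc=1.05, k_dec=0.7):
    """One weight update combining Formulas (5) and (6).

    w                 current weight vector w(k)
    d_cur, d_prev     weight correction directions D(k) and D(k-1)
    eta               current learning rate eta(k)
    alpha             momentum factor, alpha in (0, 1)
    err_cur, err_prev mean square errors E(k) and E(k-1)
    k_inc, k_dec      learning-rate increment/decrement factors (illustrative values)
    """
    # Formula (6): enlarge the step when the error has decreased, shrink it otherwise
    eta = eta * (k_inc if err_cur < err_prev else k_dec)
    # Formula (5): w(k+1) = w(k) + eta(k) * [(1 - alpha) * D(k) + alpha * D(k-1)]
    w_next = w + eta * ((1.0 - alpha) * d_cur + alpha * d_prev)
    return w_next, eta

# Arbitrary example values
w = np.array([0.2, -0.1])
w, eta = variable_step_momentum_update(w, d_cur=np.array([0.05, 0.02]),
                                       d_prev=np.array([0.04, 0.01]),
                                       eta=0.1, alpha=0.9,
                                       err_cur=0.4, err_prev=0.5)
print(w, eta)
```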
The convergence rate of Newton's method is significantly better than that of the gradient descent method. Let $H(k)$ denote the Hessian matrix of the error performance function $E(k)$; then Formulas (7) and (8) can be obtained, where $w(k+1)$ is the connection weight value at step $k+1$.
$$w(k+1) = w(k) - H^{-1}(k)\,\nabla E(k) \tag{7}$$
$$H(k) = \nabla^2 E(k) \tag{8}$$
However, Newton's method requires the calculation of the second-order Taylor expansion of the error performance function $E(k)$, which limits its practical application. The Gauss–Newton method is an improvement over Newton's method: it replaces the Hessian matrix with an approximation so that the second derivatives do not have to be calculated, which makes the optimization technique easier to use [24].
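As a brief illustration of that idea (not code from the paper), the sketch below takes one Gauss–Newton step in which the Hessian is replaced by the product of the Jacobian with its transpose; the toy linear least-squares problem and all numbers are arbitrary assumptions.

```python
import numpy as np

def gauss_newton_step(J, e):
    # Approximate the Hessian of E by J^T J so that second derivatives of the
    # error performance function never have to be computed explicitly.
    H_approx = J.T @ J                    # Hessian approximation
    g = J.T @ e                           # gradient of the sum-of-squares error
    return np.linalg.solve(H_approx, g)   # step = H^-1 * gradient, cf. Formula (7)

# Toy residual e(w) = A w - b; for this linear case the Jacobian of e is A itself,
# so a single Gauss-Newton step reaches the least-squares solution.
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 2.5])
w = np.zeros(2)
e = A @ w - b
w = w - gauss_newton_step(A, e)
print(w)
```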

3.2. Construction of NS Strategic Control Method Based on Optimized BPNN

LM incorporates the advantages of gradient descent and the Gauss–Newton method. It possesses the property of global approximation and is capable of producing local convergence: it not only retains the global nature of the gradient descent method but also achieves convergence at a rate comparable to the Gauss–Newton method. Utilizing the LM algorithm, this study optimizes the BPNN to enhance its processing efficacy in NSs. The error performance function $E(k)$ of the LM algorithm is expressed as a sum-of-squares error, as shown in Formula (9), where $y(k)$ is the actual output value, $y_m(k)$ is the desired output value, and $e(k)$ is the current error.
$$E(k) = \frac{1}{2}\left[y(k) - y_m(k)\right]^2 = \frac{1}{2}e^2(k) \tag{9}$$
Formula (10) is the current gradient function.
$$\nabla E(k) = \frac{\partial E(k)}{\partial w(k)} = e(k)\,\frac{\partial e(k)}{\partial w(k)} = J^T(k)\,e(k) \tag{10}$$
In Formula (10), $J$ is the Jacobian matrix of the first derivatives of the error function with respect to the thresholds and weights. When the error performance function approaches its minimum value, the higher-order terms of the Hessian matrix can be ignored, which yields Formula (11).
$$\nabla^2 E(k) = J^T(k)\,J(k) \tag{11}$$
The Hessian matrix is not always invertible. Therefore, a coefficient $\lambda$ is introduced, which yields the LM weight update rule in Formula (12).
$$w(k+1) = w(k) - \left[J^T(k)J(k) + \lambda I\right]^{-1} J^T(k)\,e(k) \tag{12}$$
In the improved algorithm, when $\lambda \to 0$, Formula (12) reduces to the Gauss–Newton (quasi-Newton) method; when $\lambda \to \infty$, it approaches the gradient descent method. Combined with the analysis of the LM algorithm, this algorithm has three main advantages. First, it is based on a second-derivative solution, so its convergence speed is significantly higher than that of the gradient descent method. Second, the matrix $J^T(k)J(k) + \lambda I$ is always positive definite, which means the LM update always has a solution. Third, the matrix $J$ can be simplified in the calculation, reducing the computational complexity. Figure 3 shows the training steps of the LM-BP optimization algorithm.
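A minimal Python sketch of the LM weight update of Formula (12), together with a damping schedule in the spirit of the training steps in Figure 3, is shown below; the damping factors (multiply by 10, divide by 10), the tolerance, and the residual_fn/jacobian_fn placeholders are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def lm_update(w, J, e, lam):
    # Formula (12): w(k+1) = w(k) - [J^T J + lambda * I]^(-1) J^T e
    A = J.T @ J + lam * np.eye(w.size)   # positive definite for lambda > 0
    return w - np.linalg.solve(A, J.T @ e)

def lm_train(w, residual_fn, jacobian_fn, lam=1e-2, n_iter=100, tol=1e-5):
    """Sketch of an LM training loop: accept a step and shrink lambda when the
    sum-of-squares error decreases, otherwise reject the step and grow lambda."""
    e = residual_fn(w)
    E = 0.5 * np.dot(e, e)               # error performance function, Formula (9)
    for _ in range(n_iter):
        J = jacobian_fn(w)
        w_trial = lm_update(w, J, e, lam)
        e_trial = residual_fn(w_trial)
        E_trial = 0.5 * np.dot(e_trial, e_trial)
        if E_trial < E:                   # improvement: move closer to Gauss-Newton
            w, e, E, lam = w_trial, e_trial, E_trial, lam * 0.1
        else:                             # no improvement: move closer to gradient descent
            lam *= 10.0
        if E < tol:
            break
    return w
```

In network training, residual_fn would return the vector of output errors e(k) over the training samples, and jacobian_fn the Jacobian of those errors with respect to all connection weights and thresholds.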
System identification is the inverse of the system control problem. Based on input and output signals and an equivalence criterion, it determines, from a known class of models, a system model that satisfies the equivalence requirement; this model is then equivalent to the system being evaluated. System identification, therefore, encompasses input and output signals, a known model class, and an equivalence criterion. Input and output signals are the foundation of system identification. The known model class must correspond to the original system. The equivalence criterion determines the precision of the model's operation and is typically a function of the input and output errors.
For system identification, NN models can arbitrarily approximate and self-learn nonlinear problems. To train a model with the characteristics of the identified system, it is necessary to first determine the known NN model. Based on the input and output data of the identified system, learning and weight correction are carried out using the established model until the error performance function satisfies the accuracy requirements. In actual operation, the limitations of NN identification must be considered. One is the limit on the amount of sample data, which impacts the model's accuracy. The second is the approximate nature of the system model. The third is the presence of disturbances in the original data. To keep the identification error as small as possible, the equivalence criterion function is used to evaluate the results and determine whether the parameters of the identification model need to be modified. Figure 4 depicts the serial-parallel structure of NN identification.
The NN model can adaptively track the input and output changes of the identified system. NN identification is therefore divided into parallel and serial-parallel identification structures. In the parallel identification structure, both the process input and the model's own output feedback serve as the model's input; that is, the historical output value of the NN serves as the input of the identification model. The serial-parallel identification structure instead uses the NN's inputs and the system's past outputs as model inputs, as shown in the serial-parallel structure diagram of Figure 4. Therefore, the study employs a serial-parallel structure that can observe the most recent system data and accomplish advanced output prediction. The LM-BP structure is identical to that of the BPNN. In this study, the excitation function between the input layer and the hidden layer is the hyperbolic tangent excitation function (tansig) of Formula (4), and the logarithmic excitation function (logsig) of Formula (3) is used between the hidden and output layers. The output value of the hidden layer is given by Formulas (13) and (14), where $x_i$ is the input node value and $w_{ij}$ is the connection weight value between the input layer and the hidden layer.
$$h_j = g(net_j), \quad j = 1, 2, \ldots, m \tag{13}$$
$$net_j = \sum_{i=0}^{n} w_{ij} x_i, \quad j = 1, 2, \ldots, m \tag{14}$$
The LM-BP algorithm model has a single-output structure. The functional expression of the output value is given by Formulas (15) and (16), where $w_{jk}$ is the connection weight value from the hidden layer to the output layer.
$$y_k = f(net_k), \quad k = 1, 2, \ldots, l \tag{15}$$
$$net_k = \sum_{j=0}^{m} w_{jk} h_j, \quad k = 1, 2, \ldots, l \tag{16}$$
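A minimal sketch of this single-output forward pass (Formulas (13)-(16)) is given below; the network dimensions, the random initial weights, and the simplified handling of the bias terms are illustrative choices, not the paper's trained model.

```python
import numpy as np

def tansig(x):
    # Hyperbolic tangent excitation for the hidden layer, Formula (4)
    return (1.0 - np.exp(-2.0 * x)) / (1.0 + np.exp(-2.0 * x))

def logsig(x):
    # Logarithmic excitation for the output layer, Formula (3)
    return 1.0 / (1.0 + np.exp(-x))

def lm_bp_forward(x, W_ih, W_ho):
    """Forward pass of the single-output identification network.

    x     input node values x_i (a bias node x_0 = 1 may be included in x)
    W_ih  connection weights w_ij from input to hidden layer, shape (m, n)
    W_ho  connection weights w_jk from hidden to output layer, shape (1, m)
    """
    net_j = W_ih @ x          # Formula (14)
    h = tansig(net_j)         # Formula (13): hidden-layer outputs h_j
    net_k = W_ho @ h          # Formula (16)
    return logsig(net_k)      # Formula (15): single network output y

# Illustrative dimensions: bias plus 2 inputs (e.g. y(k-1), u(k-1)), 7 hidden nodes
rng = np.random.default_rng(0)
W_ih = 0.1 * rng.standard_normal((7, 3))
W_ho = 0.1 * rng.standard_normal((1, 7))
print(lm_bp_forward(np.array([1.0, 0.3, -0.1]), W_ih, W_ho))
```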
The equivalence criterion function is the benchmark and objective of system identification. LM-BPNN identification uses the root mean square error (RMSE), average absolute error (AAE), and maximum absolute error (MAE) between the actual output and the expected output as equivalence criterion functions.
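As a brief sketch, the three equivalence-criterion functions could be computed as follows; note that MAE here denotes the maximum absolute error, following the paper's terminology, and the sample values are arbitrary.

```python
import numpy as np

def equivalence_criteria(y_actual, y_expected):
    # Errors between the actual output and the expected output
    err = np.asarray(y_actual) - np.asarray(y_expected)
    rmse = np.sqrt(np.mean(err ** 2))   # root mean square error
    mae = np.max(np.abs(err))           # maximum absolute error (MAE in this paper)
    aae = np.mean(np.abs(err))          # average absolute error
    return rmse, mae, aae

print(equivalence_criteria([0.10, 0.40, -0.20], [0.12, 0.35, -0.25]))
```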

4. Simulation Analysis Based on LM-BP Optimization Algorithm in NSs

4.1. Performance Analysis of LM-BP-Based Optimization Algorithms

To determine the efficacy of LM algorithm optimization and model identification, a simulation experiment of LM-BP algorithm model identification is conducted on an NS. Formulas (17) and (18) give the nonlinear system model and the system input signal, respectively.
$$y(k) = \frac{y(k-1)}{1 + y^2(k-1)} + u^3(k) \tag{17}$$
$$u(k) = 0.5\sin(6k\pi t) \tag{18}$$
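The simulation data implied by Formulas (17) and (18) could be generated as in the sketch below; the sampling-related constant t, the number of steps, the zero initial condition, and the reconstructed reading of Formula (17) are assumptions, and the final lines merely illustrate assembling serial-parallel regressors of the form y(k-1), u(k-1) listed in Table 2.

```python
import numpy as np

def simulate_system(n_steps=500, t=0.01, y0=0.0):
    """Generate input/output data for the nonlinear object of Formulas (17) and (18).

    The sampling-related constant t, the number of steps, and the initial
    condition y0 are illustrative assumptions, not values from the paper.
    """
    k = np.arange(n_steps)
    u = 0.5 * np.sin(6.0 * k * np.pi * t)          # input signal, Formula (18)
    y = np.zeros(n_steps)
    y[0] = y0
    for i in range(1, n_steps):
        # Formula (17) in its reconstructed form:
        # y(k) = y(k-1) / (1 + y(k-1)^2) + u(k)^3
        y[i] = y[i - 1] / (1.0 + y[i - 1] ** 2) + u[i] ** 3
    return u, y

# Serial-parallel regressors [y(k-1), u(k-1)] with target y(k), cf. Table 2
u, y = simulate_system()
X = np.column_stack([y[:-1], u[:-1]])
target = y[1:]
print(X.shape, target.shape)
```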
MATLAB is utilized to simulate the behavior of the nonlinear system. The LM-BPNN is used to identify the nonlinear object of Formula (17). Table 2 outlines the simulation-specific experimental environment parameters.
The convergence rates of the BP, Gauss–Newton-Back Propagation (GN-BP), and LM-BP models are compared in order to verify the convergence ability of the LM-BP model proposed in this study. Figure 5 displays the findings. The BP model has a maximum convergence time of 0.054 s and a minimum of 0.028 s, a range of 0.026 s. The GN-BP model has maximum and minimum convergence times of 0.049 s and 0.026 s, respectively, a range of 0.023 s. The LM-BP model has a maximum convergence time of 0.042 s, a minimum of 0.022 s, and a range of 0.02 s. The average convergence time of the BP model is 0.041 s, while those of the GN-BP and LM-BP models are 0.036 s and 0.029 s, respectively. According to this analysis, the convergence times of the BP and GN-BP methods are highly variable and their convergence effect is inadequate. The convergence time of the LM-BP model proposed in the study is relatively stable and fluctuates minimally across training periods; its convergence performance is clearly superior to that of the BP model.
The ratio of the training set to the test set influences the model's identification accuracy. Therefore, the identification accuracy of the LM-BP model is compared under various training and test set ratios. Figure 6 depicts the model's recognition performance. As shown in Figure 6a, the maximum recognition accuracy for the model with a 60% training set, reached at 100 iterations, is 92.56%; overall, the recognition accuracy is unstable, rising abruptly and then rapidly decreasing. The maximum recognition accuracy for the model with a 70% training set, reached at 130 iterations, is 94.47%, and under this ratio the overall recognition accuracy exhibits a rising trend. The maximum recognition accuracy for an 80% training set, reached at 110 iterations, is 94.51%, and the overall recognition accuracy fluctuates and is unstable under this condition. With a 90% training set, the maximum recognition accuracy, reached at 190 iterations, is 95.53%, and the recognition accuracy remains relatively stable. In other words, dividing the training and test sets at a ratio of 9:1 can effectively enhance recognition accuracy. Machine learning models have a relatively sophisticated network structure, which necessitates more training data to increase model accuracy.
A validation experiment for the LM optimization algorithm is designed based on the simulation parameters listed above. The LM-BPNN model is initially trained, then the algorithm model is validated with test data. Figure 7 depicts the identification outcomes of training samples using LM-BPNN. Figure 7 demonstrates that the object output of the training sample is largely consistent with the identification output of the neural network. The object input and output trends of the sample used for training are consistent. The simulation results of training samples indicate that system identification modeling for NSs is possible using the LM-BPNN algorithm. When 114 iterations are performed, the output value of the training sample satisfies the error accuracy requirements.
The verification performance of the proposed LM-BP model is evaluated. Figure 8 depicts the results of verifying the LM-BPNN identification model with test data. The figure shows a certain discrepancy between the object output of the test data and the NN identification output. The identification error of the test data lies within ±2 × 10−5. Combined with the analysis of the identification results of the training samples, the actual output of the NN algorithm and the model output are basically consistent. The identification errors of the test data and training samples have reached the order of 10−5, satisfying the target error accuracy requirements.
Using the equivalence criterion functions, the identification results of the LM-BP algorithm model, the GN-BPNN, and the BPNN are evaluated in order to accurately depict the modeling performance of the optimization algorithm. The modeling performance statistics of the three algorithm models are presented in Table 3. According to the results, the RMSE of the LM-BPNN optimization algorithm is 0.0451, the MAE is 0.0958, and the AAE is 0.0351. The RMSE of the BPNN is 0.0744, while the MAE and AAE are 0.1775 and 0.0536, respectively. The GN-BPNN optimization method has an RMSE of 0.0539, an MAE of 0.1108, and an AAE of 0.0443. Consequently, the model identification accuracy of the LM optimization algorithm is superior to that of the BPNN algorithm and the Gauss–Newton optimization algorithm. The LM-BPNN is more efficient at processing NSs.

4.2. Application Analysis of LM-BP Optimization Algorithm in NSs

Experiments are conducted on a nonlinear system object to validate the proposed LM-BP model as a solution method for the optimization link in NN predictive control. The experiment includes two hundred iteration stages. Figure 9 depicts the results. According to Figure 9, the overshoot values for the BP and GN-BP methods are 0.25 and 0.2, indicating significant overshoot, and the control outcomes of these two approaches differ significantly. The proposed LM-BP nonlinear system control method has an overshoot of 0.1, its response speed is faster, and its control effect is relatively stable. Based on the reference output values depicted in the figure, this method has greater stability when tracking the predetermined values of the nonlinear control system.
However, the above experiments represent the results of implementing the control methods under optimal conditions, and the optimal state rarely occurs in the actual production process. Signal interference is one of the numerous external interference factors. Therefore, an external interference signal with a magnitude of d(120) = 0.2 is applied to the system at k = 120 in order to make the research findings more consistent with actual conditions. Figure 10 depicts the control effects of the various control methods under external interference. The addition of the interference signal affects all three control methods, and the output curves fluctuate. Figure 10a demonstrates that the control effect of the BPNN model before improvement deviates most from the expected results and has the lowest degree of fit, contributing little to the optimal control of the nonlinear system. With the Gauss–Newton improvement in Figure 10b, the control effect of the algorithm is enhanced to some extent. Figure 10c demonstrates that the LM-BP optimization method obtains the best control effect, as evidenced by the highest degree of fit between the output result and the reference value. This method yields a relatively stable control effect, enabling near-optimal control of the nonlinear system to be achieved.
A continuous stirred tank reactor (CSTR) is a highly nonlinear and coupled control system. The complexity of the process makes it challenging to regulate reactor performance indicators, including temperature, reactant concentration, and coolant flow. CSTR is, therefore, a prevalent benchmark for evaluating the effectiveness of nonlinear control [25,26,27]. The proposed LM-BP model is applied to the CSTR to evaluate its efficacy and is compared with the Proportional-Integral (PI) control method. To imitate the actual situation, Gaussian white noise with a mean value of 0 is added as interference at the same time. The simulation evaluation is conducted in a MATLAB 7.14 environment. The CSTR system predicts and regulates the reactant concentration CA. Figure 11 displays the findings. Figure 11a illustrates the predictive control utilizing the optimization method proposed in the study, and Figure 11b shows the results of the experiment using the PI control approach. After introducing noise interference, the results of both prediction methods are altered to a certain degree. The control effect of LM-BP on CA is essentially consistent with the reference results and is capable of producing a relatively optimal control effect. The tracking control effect of the PI control method varies significantly, and the difference between its control effect and the standard value is relatively substantial; this method does not produce an apparently optimal result. Comparatively, the control method proposed in this study has a better tracking effect on the reactant concentration CA and stronger anti-interference capability, and its overall control effect is superior to that of the PI control method.

5. Conclusions

The BPNN model is most frequently employed in current research on industrial predictive control. Nonetheless, as data processing and system complexity increase, its performance progressively degrades. The LM-BP optimization algorithm is therefore proposed for NS problems, and the identification performance of the LM optimization model is validated through simulation analysis. When the number of iterations for model identification with the LM optimization algorithm reaches 114, the output value satisfies the error accuracy requirements, and the identification error is on the order of 10−5. In the evaluation of modeling performance, the RMSE of the LM optimization algorithm is 0.0451, the MAE is 0.0958, and the AAE is 0.0351. Using the CSTR control system as an illustration for application effect analysis, the tracking effect on the reactant concentration CA is superior to that of the PI control method, with greater anti-interference capacity. The performance of the BPNN in NSs has been considerably enhanced through LM algorithm optimization, which not only improves the processing efficiency and accuracy of the BPNN in NSs but also provides data support for the study of NSs. However, in real-world settings, the system is subject to numerous interference factors, whereas this study considers only a single interference signal. To enhance the model's robustness, additional interference factors need to be verified in future research.

Author Contributions

X.H.: Conceptualization, methodology, formal analysis, writing—original draft preparation, writing—review and editing; H.C.: Data curation; B.J.: Data curation, formal analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All the data supporting this study are available in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mariappan, M.; Tamilselvan, A. An efficient numerical method for a nonlinear system of singularly perturbed differential equations arising in a two-time scale system. J. Appl. Math. Comput. 2022, 68, 1069–1086. [Google Scholar] [CrossRef]
  2. Alabedalhadi, M. Exact travelling wave solutions for nonlinear system of spatiotemporal fractional quantum mechanics equations. Alex. Eng. J. 2022, 61, 1033–1044. [Google Scholar] [CrossRef]
  3. Jin, H.Y.; Wang, Z.A. Global stabilization of the full attraction-repulsion Keller-Segel system. Discret. Contin. Dyn. Syst. -Ser. A 2020, 40, 3509–3527. [Google Scholar] [CrossRef] [Green Version]
  4. Mazumdar, S.; Gangopadhyay, G. Centre manifold analysis of 3-d nonlinear system and kinetic stability of protein assembly. J. Appl. Nonlinear Dyn. 2022, 11, 139–152. [Google Scholar] [CrossRef]
  5. Kandel, S.; Maddali, S.; Nashed, Y.S.G.; Hruszkewycz, S.O.; Jacobsen, C.; Allain, M. Efficient ptychographic phase retrieval via a matrix-free Levenberg-Marquardt algorithm. J. Opt. Express 2021, 29, 23019–23055. [Google Scholar] [CrossRef] [PubMed]
  6. Gan, H.; Xu, C.; Hou, W.; Guo, J.F.; Liu, K.; Xue, Y.J. Spatiotemporal graph convolutional network for automated detection and analysis of social behaviours among pre-weaning piglets. J. Biosyst. Eng. 2022, 217, 102–114. [Google Scholar] [CrossRef]
  7. Almaiah, M.A.; Zahrani, M.A. Multilayer neural network based on mimo and channel estimation for impulsive noise environment in mobile wireless networks. Int. J. Adv. Trends Comput. Sci. Eng. 2020, 9, 315–321. [Google Scholar] [CrossRef]
  8. Chen, L.; Huang, Y.; Lu, T.; Dang, S.L.; Kong, Z.M. Metering equipment running error estimation model based on genetic optimized LM algorithm. J. Comput. Methods Sci. Eng. 2022, 22, 197–205. [Google Scholar] [CrossRef]
  9. Shi, Q.; Xu, Q.; Zhang, J. Amended DV-hop scheme based on N-gram model and weighed LM algorithm. Electron. Lett. 2020, 56, 247–250. [Google Scholar] [CrossRef]
  10. Deb, N.; Pal, N. Interaction behavior and load sharing pattern of piled raft using nonlinear regression and LM algorithm-based artificial neural network. Front. Struct. Civ. Eng. 2021, 15, 1181–1198. [Google Scholar] [CrossRef]
  11. Niu, Y.; Zhou, Y.Q.; Luo, Q. Optimize star sensor calibration based on integrated modeling with hybrid WOA-LM algorithm. J. Intell. Fuzzy Syst. 2020, 38, 2691–2693. [Google Scholar] [CrossRef]
  12. Bilski, J.; Kowalczyk, B.; Marchlewska, A.; Zurada, J.M. Local levenberg-marquardt algorithm for learning feedforwad neural networks. J. Artif. Intell. Soft Comput. Res. 2020, 10, 299–316. [Google Scholar] [CrossRef]
  13. Li, X.; Sun, Y. Stock intelligent investment strategy based on support vector machine parameter optimization algorithm. Neural Comput. Appl. 2020, 32, 1765–1775. [Google Scholar] [CrossRef]
  14. Li, K.; Ji, L.; Yang, S.; Li, H.; Liao, X. Couple-group consensus of cooperative–competitive heterogeneous multiagent systems: A fully distributed event-triggered and pinning control method. IEEE Trans. Cybern. 2022, 52, 4907–4915. [Google Scholar] [CrossRef]
  15. Li, B.; Tan, Y.; Wu, A.; Duan, G. A distributionally robust optimization based method for stochastic model predictive control. IEEE Trans. Autom. Control. 2021, 67, 5762–5776. [Google Scholar] [CrossRef]
  16. He, D.; Wang, H.; Tian, Y. An α-variable model-free prescribed-time control for nonlinear system with uncertainties and disturbances. Int. J. Robust Nonlinear Control. 2022, 32, 5673–5693. [Google Scholar] [CrossRef]
  17. Liu, S.; Zhao, R.; Kaiping, Y.U.; Bowen, Z. Nonlinear system identification framework of folding fins with freeplay using backbone curves. Chin. J. Aeronaut. 2022, 35, 183–194. [Google Scholar] [CrossRef]
  18. Kien, C.V.; Anh, H.; Son, N.N. Adaptive inverse multilayer fuzzy control for uncertain nonlinear system optimizing with differential evolution algorithm. Appl. Intell. 2021, 51, 527–548. [Google Scholar] [CrossRef]
  19. Zhang, Z.; Wang, W.; Wang, C. Parameter identification of nonlinear system via a dynamic frequency approach and its energy harvester application. Acta Mech. Sin. 2020, 36, 606–617. [Google Scholar] [CrossRef]
  20. Zheng, Y.; Gao, S.; Zheng, W.; Dong, H.R. Fuzzy adaptive event-triggered control for uncertain nonlinear system with prescribed performance: A combinational measurement approach. J. Frankl. Inst. 2022, 359, 371–391. [Google Scholar] [CrossRef]
  21. Chen, H.X.; Liu, M.M.; Chen, Y.T.; Li, S.Y.; Miao, Y.Z. Nonlinear lamb wave for structural incipient defect detection with sequential probabilistic ratio test. Secur. Commun. Netw. 2022, 9851533. [Google Scholar] [CrossRef]
  22. Zhang, H.; Tian, Z. Failure analysis of corroded high-strength pipeline subject to hydrogen damage based on FEM and GA-BP neural network. Int. J. Hydrog. Energy 2022, 47, 4741–4758. [Google Scholar] [CrossRef]
  23. Zhang, S.; Zhang, L.; Gai, T.; Xu, P.; Wei, Y. Aberration analysis and compensate method of a BP neural network and sparrow search algorithm in deep ultraviolet lithography. Appl. Opt. 2022, 61, 6023–6032. [Google Scholar] [CrossRef] [PubMed]
  24. Hu, K.; Wang, L.; Li, W. Forecasting of solar radiation in photovoltaic power station based on ground-based cloud images and BP neural network. IET Gener. Transm. Distrib. 2022, 16, 333–350. [Google Scholar] [CrossRef]
  25. Wang, B.; Zhang, Y.; Zhang, W. A composite adaptive fault-tolerant attitude control for a quadrotor uav with multiple uncertainties. J. Syst. Sci. Complex. 2022, 35, 81–104. [Google Scholar] [CrossRef]
  26. Mule, G.M.; Kulkarni, S.; Kulkarni, A.A. An assessment of a multipoint dosing approach for exothermic nitration in CSTRs in series. React. Chem. Eng. 2022, 7, 1671–1679. [Google Scholar] [CrossRef]
  27. Mukherjee, D.; Raja, G.L.; Kundu, P.; Ghosh, A. Design of optimal fractional order lyapunov based model reference adaptive control scheme for CSTR. IFAC-Pap. 2022, 55, 436–441. [Google Scholar] [CrossRef]
Figure 1. Neuron structure model.
Figure 2. Structure of BPNN.
Figure 3. Specific training steps of LM-BP optimization algorithm.
Figure 4. Serial-parallel structure of NN identification.
Figure 5. Comparison of convergence rates between BP model and LM-BP model.
Figure 6. Model recognition accuracy under different training and test set ratios.
Figure 7. Identification results of LM-BPNN in training samples.
Figure 8. Identification results of LM-BPNN in test set.
Figure 9. Output effects of different methods under ideal conditions.
Figure 10. Local output effects of different methods under signal interference.
Figure 11. Concentration tracking of CA substances under different control methods.
Table 1. The definitions of terms.

Term      Definition
CSTR      Continuous stirred tank reactor
LM        Levenberg–Marquardt
BPNN      Back Propagation Neural Network
NN        Neural Network
PI        Proportional-Integral
CA        Reactant concentration
RMSE      Root mean square error
MAE       Maximum absolute error
AAE       Average absolute error
GN-BP     Gauss–Newton-Back Propagation
ANN       Artificial neural network
NS        Nonlinear system
Table 2. The setup of the nonlinear system simulation.

Test Conditions                   Parameter Value
Input signal                      y(k-1), u(k-1)
Output signal                     y(k)
Training function                 trainlm function
Number of hidden layer nodes      7
Learning rate                     0.5
Maximum number of iterations      500
Target error accuracy             10−4
Inertial coefficient              0.05
Input reference trajectory        10 Hz square wave signal
Table 3. Statistical results of modeling performance of LM-BP and BP algorithm models.

Model Type      RMSE      MAE       AAE
LM-BPNN         0.0451    0.0958    0.0351
BPNN            0.0744    0.1775    0.0536
GN-BPNN         0.0539    0.1108    0.0443

Share and Cite

Huang, X.; Cao, H.; Jia, B. Optimization of Levenberg Marquardt Algorithm Applied to Nonlinear Systems. Processes 2023, 11, 1794. https://doi.org/10.3390/pr11061794
