Article

Air Quality Estimation Using Dendritic Neural Regression with Scale-Free Network-Based Differential Evolution

1 College of Computer Science and Technology, Taizhou University, Taizhou 225300, China
2 Faculty of Engineering, University of Toyama, Toyama-shi 930-8555, Japan
3 School of Electrical and Computer Engineering, Kanazawa University, Kanazawa-shi 920-1192, Japan
* Author to whom correspondence should be addressed.
Atmosphere 2021, 12(12), 1647; https://doi.org/10.3390/atmos12121647
Submission received: 10 November 2021 / Revised: 3 December 2021 / Accepted: 6 December 2021 / Published: 9 December 2021
(This article belongs to the Section Air Quality)

Abstract

With the rapid development of the global economy, air pollution, which restricts sustainable development and threatens human health, has become an important focus of environmental governance worldwide. The modeling and reliable prediction of air quality remain substantial challenges because uncertainties residing in emissions data are unknown and the dynamic processes are not well understood. A number of machine learning approaches have been used to predict air quality to help alleviate air pollution, since accurate air quality estimation may yield significant socioeconomic benefits. From this perspective, a novel air quality estimation approach is proposed, which consists of two components: a newly designed dendritic neural regression (DNR) model and a customized scale-free network-based differential evolution (SFDE) algorithm. The DNR can adaptively utilize spatio-temporal information to capture the nonlinear correlation between observations and air pollutant concentrations. Since the landscape of the weight space in DNR is vast and multimodal, SFDE is used as the optimization algorithm due to its powerful search ability. Extensive experimental results demonstrate that the proposed approach provides stable and reliable performance in the estimation of both PM2.5 and PM10 concentrations, being significantly better than several commonly used machine learning algorithms, such as support vector regression and long short-term memory.

1. Introduction

In recent years, due to climate change, industrial production, and population growth, air quality has deteriorated in many parts of the world. This decline in air quality has seriously affected economic development and public health. Fortunately, with gradual improvements in air quality monitoring systems, many countries have established multilevel air quality monitoring networks. The air pollutants of concern mainly include gaseous compounds such as carbon monoxide (CO), sulfur dioxide (SO2), nitrogen oxides (NOx), and ozone (O3), as well as fine particulate matter (PM2.5 and PM10) [1,2,3]. In particular, the PM2.5 and PM10 concentrations have considerable impacts on human health [4]. Hence, it is crucial to forecast the ambient air quality and air quality index to ensure timely and proper responses to heavily polluted weather and to provide guidance for joint emission reduction measures to reduce regional air pollution. In addition, improving the accuracy of air quality analysis and forecasting can help governments to improve the reliability of environmental management and decision-making, and enable timely and effective prevention and control measures to minimize the harm caused by air pollution to people. However, many factors affect air quality, including environmental factors such as temperature, humidity, and wind speed, as well as human factors such as traffic conditions and pollution source emissions; this increases the difficulty of accurately predicting air quality. Therefore, it is particularly important to establish an air quality prediction system that can achieve excellent performance.
Traditional air quality prediction methods mainly include numerical prediction and regression statistics [5]. Numerical air quality models are based on atmospheric dynamics and employ the monitoring information from multiple environmental monitoring stations to establish meteorological emission and chemical models, and thus simulate the migration, exchange, diffusion, and emission of pollutants [6]. However, numerical prediction methods require complex prior knowledge, use unreliable and limited data, and have various usage constraints [7]. Moreover, the requirements for the input data are relatively strict, rendering it difficult to accurately predict air quality in real time [8]. Thus, it is theoretically difficult to simulate the real atmospheric environment. In contrast, the regression statistics-based approach avoids complex theoretical models and instead leverages statistical models to predict air quality based on analyses of historical air quality data. Unlike numerical prediction methods, these approaches rely primarily on historical meteorological data and the regular analysis of monitored pollutant concentrations, combined with meteorological forecast products, to predict pollutant concentrations [9]. Nevertheless, the complex linear or nonlinear relationships between the various factors affecting both air quality and the concentrations of air pollutants are challenging to describe with a definite mathematical model.
Statistics-based air quality prediction models are relatively simple to implement, as the relationships between pollutant concentrations and meteorological factors are established on the basis of statistics. As air pollutant concentration data are nonlinear and irregular, the above-mentioned methods cannot meet the requirements of practical applications to obtain sufficiently accurate and reliable prediction results. However, while the prediction performances of statistics-based methods need to be improved, these models can be applied to predict the air quality in smaller areas, and thus, can provide a certain theoretical basis for future predictions using machine learning and deep learning models. Currently, with the rapid development and application of the Internet of Things and sensor technologies, atmospheric data collected by various sensors and related data collection equipment in cities provide the necessary sources of data for air quality prediction. Since traditional shallow learning models still encounter bottlenecks in utilizing big data, new air quality prediction methods need the support of data-driven models [10]. Recently, many machine and deep learning approaches, such as decision trees (DTs) [11,12], support vector regression (SVR) [13,14,15], long short-term memory (LSTM) [16,17,18], and random forest models [19,20], have been adapted to air quality forecasting. In addition, a universal and effective deep learning air model was proposed to resolve the interpolation, prediction, and feature analysis of air quality at a fine resolution [21] via the embedded feature selection and semisupervised learning of different layers in a deep learning network. This model utilizes relevant information from unlabelled air quality data to improve the interpolation and prediction performance. Moreover, in 2020, a hybrid deep learning model that combines LSTM and convolutional neural networks was developed to improve the air quality prediction accuracy [22]. 
It can consider the spatial correlation characteristics of air pollutants to achieve high prediction performance. Furthermore, in regard to spatiotemporal correlations, a deep learning model was provided in [23] for daily PM2.5 concentration forecasting.
Although machine and deep learning models can rapidly forecast air quality with high accuracy, under many complex air quality conditions, feature extraction is not a simple task, as it requires the artificial design of an effective feature set to predict the training results. Therefore, for long-term air quality predictions and addressing the uncertainties and nonlinear problems in prediction systems, employing machine learning prediction models remains challenging. For instance, an LSTM network cannot model and analyze the complex spatial and temporal correlations with air quality, and its nonlinear spatial dependence is an important factor affecting the air quality prediction performance. Nevertheless, with the rise of artificial neural networks (ANNs), which are data-driven and have various advantages, including data adaptation, parameter self-learning, and combined memory, a number of researchers have attempted to apply various neural network models to predict air quality. For example, a hybrid multilayer perceptron (MLP) and linear regression model was developed on the basis of principal component analysis to analyze air pollution [24]. In [25], a feedforward backpropagation neural network (BPNN) and regression model were combined to predict seasonal indoor PM2.5–10 and PM2.5 concentrations, and another BPNN-based approach was developed in [26] for regional multi-step-ahead PM2.5 forecasting. ANNs were likewise used in a highly polluted region to predict the concentrations of all types of pollutants on subsequent days [1]. Moreover, based on a supervised learning neural network, a modified depth-first search method was employed to estimate PM10 concentrations in [27], and Zhang et al. developed an Elman neural network (ENN)-based model for estimating air quality [28]. In addition, fuzzy neural networks [29] and recurrent neural networks [30,31] have been widely used in air quality prediction.
ANNs can reliably and accurately map the correlations between inputs and outputs, and thus have been extensively applied to the prediction of air quality. However, because each ANN model has unique advantages and limitations, it is difficult to select the most suitable model for all air quality time series. In addition, the above-mentioned models focus only on the signal transmission between neurons and ignore the nonlinear relationship in the dendritic structure of each neuron, which has been verified in biological neurons [32]. Moreover, most training algorithms easily fall into local optima and are sensitive to the initial state with gradient descent information [33,34]. To overcome these limitations and consider the calculation efficiency, in this study, a novel dendritic neural regression model (DNR) is proposed to estimate air quality. It is an improved version of our previously proposed dendritic neural model (DNM), which has been successfully applied to morphological hardware realization [35], classification [36,37,38,39,40], and time series prediction [41,42]. Due to the plasticity of dendrites and the nonlinear characteristics of synapses, the DNM can effectively simulate the processes by which biological neurons transmit information and has the capability to fit complex nonlinear functions well. However, since the original DNM network was designed for classification problems, adaptive pruning is needed in the calculation process to improve the calculation efficiency, especially in the processing of high-dimensional data classification; consequently, the computational complexity is excessively high, leading to a long computation time. In the proposed DNR network, we employ a single-branch approach to reduce the computational complexity, and utilize a new weight to control the strength of the branch. 
In addition, because DNR is used to predict air quality, for which the weight space to be trained is vast and complex, a global optimization algorithm with stronger search capabilities is needed to replace the traditional gradient descent-based algorithm. In this study, to further enhance the air quality prediction performance of DNR, a scale-free network-based differential evolution (SFDE) algorithm is proposed to optimize the weight and threshold of DNR [43]. This scale-free local search method can ensure the diversity of individuals and avoid local optima, thereby helping the differential evolution (DE) algorithm reach the global optimum.
Moreover, one-dimensional air quality time series data can be unpredictable and irregular, and some information is hidden in high dimensions. According to Takens's theory [44], phase space reconstruction (PSR) can extend one-dimensional data into high-dimensional space. If this high-dimensional data space exhibits chaotic characteristics, it is predictable. Therefore, before DNR is implemented for the air quality prediction, first, the mutual information (MI) and false nearest neighbors (FNN) methods are utilized to calculate the time delay and embedding dimension of the dataset. Then, PSR is performed to transform the one-dimensional PM2.5 and PM10 time series data into predictable multidimensional spatial vector data, and the maximum Lyapunov exponent (MLE) is used to validate the chaotic characteristics. Finally, the resulting vectors are used as the training samples of DNR, and the trained DNR network is employed to perform the air quality prediction. For a fair comparison, three PM2.5 and three PM10 concentration datasets from the past two years were selected for experiments to evaluate DNR's prediction performance, and each group of experiments was run independently 30 times. Our extensive experimental and statistical results confirm that the air quality estimations of the proposed DNR network are superior to those of its competitors. The novelty and primary contributions of this study are as follows:
  • A single-branch dendritic neural regression-based approach is proposed to estimate air quality.
  • To enhance the prediction performance of DNR, a customized SFDE algorithm is proposed to optimize DNR.
  • Extensive experiments demonstrate that DNR can more accurately and stably predict the PM2.5 and PM10 concentrations than the existing methods.
  • Two nonparametric statistical tests further verify that the proposed DNR network is superior to nine of its competitors.
The remainder of this paper is organized as follows: Section 2 mainly introduces the related methods and techniques, including the proposed DNR network, SFDE algorithm, chaotic time series theory, and PSR. The relevant details of the experiments are introduced in Section 3, including a brief description of the experimental data, the experimental setup, the evaluation criteria of the prediction methods, and the experimental results, which are presented and discussed in detail. Section 4 presents a conclusion and summarizes the prospects for future work.

2. Proposed Method

2.1. Dendritic Neural Regression

In this study, a single-branch DNR network is proposed to predict and estimate air quality data and enhance the computational efficiency by improving the traditional DNM network. Due to the calculation mechanism of the dendritic structure of DNR, the relationship mapping between the input and output can be performed properly. The architecture of DNR is shown in Figure 1. Figure 1 (right) shows the transformation process from DNM to DNR, which is composed of four layers: synapses, dendrites, a membrane, and a soma layer. As demonstrated in Figure 1, DNR is a feedforward neural network model in which the signal enters from the synaptic layer and is calculated and gradually propagates backward to the output of the soma layer. Figure 1 (left) describes the six connection cases between the synapse structure and the input signal in DNR. The following is a detailed description of DNR.

2.1.1. Synapses

In neurons, synapses are crucial nodes for signal transmission between dendrites or between dendrites and axons. Since the transmission of a signal on a synapse is feedforward and nonlinear in nature and has two states, namely, excitement and inhibition, we can use the sigmoid function to simulate this process. The formula is described as follows:
$$S_i = \frac{1}{1 + e^{-k (w_i y_i - q_i)}},$$
where $y_i$ is the input value of the sample attribute, the hyperparameter $k$ is a positive integer, and $i$ represents the feature dimension of the sample. $w_i$ and $q_i$ are the weight and threshold of the input connection, respectively; for different prediction tasks, their suitable values can be obtained after training based on optimization algorithms. To simulate the excited and inhibited states of synapses, according to the values of $w_i$ and $q_i$, Figure 1 (left) indicates that the connection states can be divided into six cases [42]:
  • $w_i < q_i < 0$ (for example, $w_i = -1$ and $q_i = -0.5$): a high potential input leads to a low potential output, so this state is called an inhibited connection.
  • $0 < q_i < w_i$ (for example, $w_i = 1$ and $q_i = 0.5$): a high potential input leads to a high potential output, so this state is called an excited connection.
  • $q_i < 0 < w_i$ or $q_i < w_i < 0$ (for example, $w_i = 1$ and $q_i = -0.5$, or $w_i = -1$ and $q_i = -1.5$): in these two cases, regardless of the value of the input, the output is always 1, and these two cases are called constant 1 connections.
  • $0 < w_i < q_i$ or $w_i < 0 < q_i$ (for example, $w_i = 1$ and $q_i = 1.5$, or $w_i = -1$ and $q_i = 0.5$): these cases are similar to the third case, except the output is 0 regardless of the input, so both states are called constant 0 connections.
Each input is processed by the sigmoid function of the synaptic layer; this is similar to the pruning operation of a neural network. The trained DNR network can further simplify the calculation cost and improve the calculation efficiency based on this operation.
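As an illustration, the connection states above can be checked numerically. The sketch below is not from the paper: the value of `k` and the example weights are hypothetical choices that match the inequalities in each case.

```python
import numpy as np

def synapse(y, w, q, k=5):
    """DNR synapse: S = 1 / (1 + exp(-k * (w * y - q)))."""
    return 1.0 / (1.0 + np.exp(-k * (w * y - q)))

# High- and low-potential inputs; k = 5 is a hypothetical choice.
lo, hi = 0.0, 1.0
print(synapse(hi, w=-1.0, q=-0.5))  # inhibited: high input -> low output
print(synapse(hi, w=1.0, q=0.5))    # excited: high input -> high output
print(synapse(lo, w=1.0, q=-0.5), synapse(hi, w=1.0, q=-0.5))  # constant-1: output stays high
print(synapse(lo, w=1.0, q=1.5), synapse(hi, w=1.0, q=1.5))    # constant-0: output stays low
```

With a finite $k$ the "constant" cases only approach 0 or 1; larger $k$ sharpens the sigmoid toward the ideal constant outputs.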

2.1.2. Dendrites

In DNR, the dendritic layer collects the nonlinear correlated signals transmitted from the synaptic layer, and these nonlinear relationships can be mapped by multiplication [45]. The formula is defined as:
$$B = \prod_{i=1}^{n} S_i.$$

2.1.3. Membrane

The membrane layer adjusts the intensity of the output signal on the dendritic layer and continues to transmit the obtained signal to the soma body. The calculation formula for this layer is expressed as follows:
$$V = \mu \cdot B,$$
where μ represents the parameter that adjusts the output intensity of the dendritic layer. In DNM, the intensity of the dendritic signal is not considered ( μ = 1 ) , which can also be utilized to distinguish the original DNM network from DNR. In DNR, for a certain regression problem, μ is also used as a weight that is trained and constantly changed by optimization algorithms.

2.1.4. Soma

The soma layer represents the last calculation operation of DNR. We use the sigmoid function as the activation function to map the nonlinear relationships. The soma layer output calculation is expressed as follows:
$$O = \frac{1}{1 + e^{-k_s (V - \alpha)}},$$
where $k_s$ represents a user-defined positive integer, and the parameter $\alpha$ is a threshold of the sigmoid activation function in the cell body.
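The four-layer computation (synapses, dendrite, membrane, soma) can be sketched end to end. This is a minimal NumPy version, where the hyperparameters `k` and `ks`, the feature dimension, and the random inputs are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def dnr_forward(y, w, q, mu, alpha, k=5, ks=5):
    """Single-branch DNR forward pass."""
    S = 1.0 / (1.0 + np.exp(-k * (w * y - q)))      # synaptic layer
    B = np.prod(S)                                  # dendritic layer: product of synapses
    V = mu * B                                      # membrane layer: branch strength mu
    return 1.0 / (1.0 + np.exp(-ks * (V - alpha)))  # soma layer

rng = np.random.default_rng(0)
n = 4                                  # feature dimension (hypothetical)
y = rng.random(n)                      # normalized input in [0, 1]
w = rng.normal(size=n)                 # synaptic weights (to be trained)
q = rng.normal(size=n)                 # synaptic thresholds (to be trained)
out = dnr_forward(y, w, q, mu=1.2, alpha=0.5)
print(out)  # a scalar in (0, 1)
```

In training, `w`, `q`, `mu`, and `alpha` would all be packed into one parameter vector and optimized jointly, which is the role SFDE plays later.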

2.2. Optimization Algorithm

In the calculation of the dendritic layer in DNR, a large number of multiplication operations are employed to map the nonlinear relationships between features, and an adjustment parameter is added to control the branch strength in the membrane layer, making the output of DNR very sensitive to the input of the model. Moreover, the parameter space required for training is vast and complex. Traditional gradient descent optimization algorithms suffer from limitations in such a parameter space. Therefore, we adopt the DE algorithm, which has a more powerful search capability, to replace them. To further improve the search performance of DE, we propose the SFDE algorithm. The following describes the search process and parameter settings of the SFDE algorithm.

2.2.1. Differential Evolution

The DE algorithm is a novel heuristic intelligent optimization algorithm with a simple and computationally efficient structure inspired by the principle of “survival of the fittest”. The basic idea is to create new individuals through differences between individuals and then select better individuals to enter the next generation. The standard DE implementation process consists of four steps: initialization, mutation, crossover, and selection.
First, the population is initialized to P individuals, which are randomly distributed in the solution space. Then, mutation operations are performed on the population. In each generation, according to the differences between individuals, a mutation vector V i is obtained. The common mutation strategies are as follows:
  • rand/a
    $$V_i^t = Y_1^t + F \times \left( Y_2^t - Y_3^t \right),$$
  • best/a
    $$V_i^t = Y_{best}^t + F \times \left( Y_1^t - Y_2^t \right),$$
  • rand/b
    $$V_i^t = Y_1^t + F \times \left( Y_2^t - Y_3^t \right) + F \times \left( Y_4^t - Y_5^t \right),$$
  • best/b
    $$V_i^t = Y_{best}^t + F \times \left( Y_1^t - Y_2^t \right) + F \times \left( Y_3^t - Y_4^t \right),$$
where $Y_{best}^t$ represents the optimal individual in the t-th iteration, $Y_1^t, \ldots, Y_5^t$ are distinct individuals randomly selected from the population, and $F$ is a scaling factor in (0, 1).
After the mutation operation, each target vector $Y_i^t$ and its mutation vector $V_i^t$ are crossed to generate the trial vector $R_i^t = (r_{i,1}^t, r_{i,2}^t, \ldots, r_{i,D}^t)$. The operation is defined as follows:
$$r_{i,j}^t = \begin{cases} v_{i,j}^t, & \text{if } rand_{i,j}[0,1] \le CR_i^t \text{ or } j = D, \\ y_{i,j}^t, & \text{otherwise}, \end{cases}$$
where $r_{i,j}^t$ is the j-th component of the i-th trial vector in the t-th generation, $j = 1, \ldots, D$ with $D$ the dimension of the problem, and $y_{i,j}^t$ and $v_{i,j}^t$ represent the target vector and the mutation vector, respectively, in the j-th dimension.
Finally, based on the current fitness values of $R_i^t$ and $Y_i^t$, the better individual is selected to enter the next generation of evolution. The selection process formula is as follows:
$$Y_i^{t+1} = \begin{cases} R_i^t, & \text{if } f\left(R_i^t\right) \le f\left(Y_i^t\right), \\ Y_i^t, & \text{otherwise}. \end{cases}$$
The above three steps are repeated in each generation until the termination conditions of the algorithm are met.
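The initialize–mutate–crossover–select loop can be condensed into a short sketch. The sphere objective, search bounds, and control parameters below are illustrative stand-ins (not the paper's settings), and the mutation step follows the rand-type strategy above.

```python
import numpy as np

def de_minimize(f, dim, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    """Minimal DE loop with rand-type mutation and binomial crossover."""
    rng = np.random.default_rng(seed)
    Y = rng.uniform(-5, 5, (pop, dim))            # initialization
    fit = np.apply_along_axis(f, 1, Y)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            V = Y[a] + F * (Y[b] - Y[c])          # mutation: rand-type difference vector
            mask = rng.random(dim) <= CR          # binomial crossover mask
            mask[rng.integers(dim)] = True        # guarantee at least one mutant component
            R = np.where(mask, V, Y[i])
            fR = f(R)
            if fR <= fit[i]:                      # greedy selection
                Y[i], fit[i] = R, fR
    best = int(np.argmin(fit))
    return Y[best], fit[best]

# Example: minimize the sphere function (a hypothetical test problem).
x, fx = de_minimize(lambda v: float(np.sum(v**2)), dim=5)
print(fx)  # close to 0
```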

2.2.2. Scale-Free Network-Based Differential Evolution Algorithm

Barabási and Albert proposed the scale-free network model, known as the BA network model, when studying the topology of the World Wide Web [46]. This model has two characteristics: node growth and a preferential connection mechanism. The degree distribution of the BA network conforms to a power law distribution as follows:
$$P(k) \propto k^{-\beta},$$
where $P(k)$ is the probability that the degree of a node equals $k$ and $\beta \in (2, 3)$ is the scaling factor. Therefore, the BA model can describe many real networks. The network model is constructed in the following steps: (1) Node growth: the network initially has $m_0$ nodes, and a new node is added each time to connect to $m$ existing nodes, where $m \le m_0$. (2) Preferential connection: the probability $P_a$ of connecting a newly added node to an existing node $a$ is:
$$P_a = \frac{k_a + 1}{\sum_b \left( k_b + 1 \right)},$$
where $k_a$ denotes the degree of the existing node $a$, and the sum in the denominator runs over all existing nodes $b$. After $n$ iterations of the algorithm, a scale-free network with $m \cdot n$ edges and $N$ ($N = m_0 + n$) nodes is generated. (3) The above operations are repeated until all nodes are connected. In the resulting network, links are more easily established between high- and low-degree nodes, while high-degree nodes rarely connect to each other. The scaling factor $\beta$ can also indicate the degree of connection between low- and high-degree nodes. The formula for calculating $\beta$ is defined as:
$$\beta = \frac{I^{-1} \sum_i x_i y_i - \left[ I^{-1} \sum_i \frac{1}{2}\left( x_i + y_i \right) \right]^2}{I^{-1} \sum_i \frac{1}{2}\left( x_i^2 + y_i^2 \right) - \left[ I^{-1} \sum_i \frac{1}{2}\left( x_i + y_i \right) \right]^2},$$
where $x_i$ and $y_i$ represent the degrees of the two adjacent nodes connected by the i-th link, and $I$ represents the number of links in the network structure. The greater the value of $\beta$, the more connections involving high-degree nodes are likely to be generated.
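The node-growth and preferential-connection steps can be sketched directly. In the sketch below, `m0` matches the SFDE setting mentioned later, while `m`, `n`, and the seed are illustrative assumptions; attachment probability is proportional to degree + 1, as in the formula above.

```python
import numpy as np

def ba_network(m0=10, m=3, n=40, seed=0):
    """Grow a BA scale-free network: each new node links to m existing nodes
    chosen with probability proportional to (degree + 1)."""
    rng = np.random.default_rng(seed)
    deg = {i: 0 for i in range(m0)}       # start from m0 isolated nodes
    edges = []
    for new in range(m0, m0 + n):
        nodes = np.array(sorted(deg))
        p = np.array([deg[v] + 1 for v in nodes], dtype=float)
        p /= p.sum()                      # preferential-connection probabilities
        targets = rng.choice(nodes, size=m, replace=False, p=p)
        deg[new] = 0
        for t in targets:
            edges.append((new, int(t)))
            deg[new] += 1
            deg[int(t)] += 1
    return deg, edges

deg, edges = ba_network()
print(len(edges))         # 120 = m * n edges
print(max(deg.values()))  # a few hubs accumulate far more links than typical nodes
```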
The proposed local search strategy based on the scale-free network ensures the diversity of the population during the search process of the algorithm, which helps the DE algorithm achieve the goal of global optimization. First, based on the population size, a corresponding scale-free network is generated, and all nodes are numbered in the network. Then, the nodes are ranked depending on the number of links. According to the BA algorithm, fewer high-degree nodes and more low-degree nodes can be obtained in the network at this time. After each iteration of the DE algorithm, the individuals are ranked based on their fitness. Finally, the ranked individuals are placed into the network nodes in the same order, and all the individuals are stored in the nodes from high to low (from best to worst). The structure of the SFDE algorithm is shown in Figure 2. The power law distribution reduces the number of high-degree nodes. Therefore, most individuals are close to being excellent individuals after each update, which improves the quality of individuals and generally accelerates the convergence of the algorithm. The update rule for each individual is as follows:
$$Y_i^{t+1} = Y_i^t + rand(0,1) \cdot \left( Y_{i\_n}^t - Y_i^t \right),$$
where $Y_i^t$ is the weight vector of the i-th DE individual in the t-th generation and $Y_{i\_n}^t$ is an individual stored in a node connected to that of $Y_i^t$ in the scale-free network. After each DE iteration, a local search is performed in the scale-free network until the algorithm finally converges. In the process of optimizing DNR, the initial scaling factor $F_0$ of the SFDE algorithm is set to 0.7, the crossover probability is $CR_0 = 0.9$, the scale-free network parameter $m_0$ is set to 10, and the remaining parameters adopt the default values.
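A minimal sketch of this local-search step follows. The ring-shaped neighbor list and the sphere objective are hypothetical stand-ins for the scale-free links and the DNR training error; the sweep ranks individuals (best first), pairs each one with a linked node, and keeps the move only if it improves fitness.

```python
import numpy as np

def sfde_local_search(pop, fitness, neighbors, f, rng):
    """One sweep of the scale-free local search: rank individuals (best first),
    place them on network nodes in that order, and pull each one toward an
    individual stored in a linked node, accepting only improving moves."""
    order = np.argsort(fitness)                # rank -> individual index
    new_pop = pop.copy()
    for rank, i in enumerate(order):
        partner = pop[order[neighbors[rank]]]  # individual stored in a linked node
        trial = pop[i] + rng.random() * (partner - pop[i])
        if f(trial) <= fitness[i]:             # greedy acceptance
            new_pop[i] = trial
    return new_pop

# Toy demo (hypothetical setup): ring-linked nodes and a sphere objective.
rng = np.random.default_rng(1)
f = lambda v: float(np.sum(v**2))
pop = rng.uniform(-1, 1, (6, 3))
fit = np.array([f(x) for x in pop])
neighbors = [(r + 1) % 6 for r in range(6)]    # stand-in for scale-free links
new_pop = sfde_local_search(pop, fit, neighbors, f, rng)
print(all(f(new_pop[i]) <= fit[i] for i in range(6)))  # True
```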

2.3. Chaotic Time Series

Before using DNR to perform an air quality prediction, it is necessary to preprocess the original one-dimensional time series data. In this section, we mainly introduce how to map the one-dimensional time series data to the high-dimensional phase space based on the PSR method. Takens proposed that mapping one-dimensional data to a high-dimensional space requires that the time delay and embedding dimension be determined [44]. The maximum Lyapunov exponent (MLE) is used to determine the chaotic characteristics of the new spatial data. Then, the data are analyzed and predicted in the high-dimensional phase space. The methods of calculating the time delay, the embedding dimension, and the MLE are described below.

2.3.1. Mutual Information

MI measures the correlation between two sets of events. Its theoretical basis is information entropy, and it is a nonlinear method widely used to determine the time delay for PSR; the optimal time delay corresponds to the first minimum of the MI curve. The calculation formula of information entropy is:
$$H(S) = -\sum_{i=1}^{n} P(s_i) \cdot \ln P(s_i),$$
where $s_i$ represents the time series variable, $P(s_i)$ is the probability of $s_i$, and $H(S)$ is the information entropy. The joint information entropy of the two groups of time series is:
$$H(S, Q) = -\sum_{i=1}^{n} \sum_{j=1}^{m} P_{s,q}\left( s_i, q_j \right) \cdot \ln P_{s,q}\left( s_i, q_j \right),$$
where $n$ and $m$ are the lengths of the two time series $S$ and $Q$ (e.g., $x_t$ and $x_{t+\tau}$), respectively. If $S$ is determined, the MI value can be calculated by:
$$I(Q, S) = H(Q) + H(S) - H(S, Q).$$
Thus, based on different time delays $\tau$, the corresponding MI value can be obtained by the following formula:
$$I(\tau) = I\left( x_t, x_{t+\tau} \right) = H(x_t) + H(x_{t+\tau}) - H\left( x_t, x_{t+\tau} \right).$$
The $\tau$ corresponding to the first minimum of $I(\tau)$ is taken as the optimum time delay.
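The delay selection can be sketched with histogram-based entropy estimates. In the sketch below, the bin count, the synthetic sine-plus-noise series, and the search range are illustrative assumptions, not choices from the paper.

```python
import numpy as np

def mutual_information(x, tau, bins=16):
    """Estimate I(x_t, x_{t+tau}) from a 2-D histogram of the paired series."""
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    # Equivalent to H(x_t) + H(x_{t+tau}) - H(x_t, x_{t+tau}).
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def first_minimum_delay(x, max_tau=30):
    """Pick the first local minimum of I(tau) as the time delay."""
    mi = [mutual_information(x, t) for t in range(1, max_tau + 1)]
    for t in range(1, len(mi) - 1):
        if mi[t] < mi[t - 1] and mi[t] < mi[t + 1]:
            return t + 1                       # tau values are 1-indexed
    return int(np.argmin(mi)) + 1

# Synthetic sine-plus-noise series standing in for a PM concentration record.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 400, 2000)) + 0.05 * rng.standard_normal(2000)
tau = first_minimum_delay(x)
print(tau)
```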

2.3.2. False Nearest Neighbors

From a geometric point of view, a chaotic time series is formed by projecting a chaotic motion trajectory in the phase space onto the one-dimensional space [47]. The trajectory may be distorted during projection, so two points that are not neighbors in the phase space may appear to be neighbors after mapping to the one-dimensional space; such points are called false nearest neighbors (FNN). The idea of using the FNN method to obtain the embedding dimension is that as the embedding dimension increases, the chaotic motion trajectory gradually unfolds, the FNN are gradually eliminated, and the chaotic motion trajectory is finally recovered. Therefore, when the FNN phenomenon disappears completely, the corresponding embedding dimension m is the most suitable embedding dimension.
Assuming that the nearest neighbor of $Y_i = \{ y_i, y_{i+\tau}, \ldots, y_{i+(m-1)\tau} \}$ in the m-dimensional phase space is $Y_i^{FNN}$, the distance between the two points is defined as:
$$D_i(m) = \left\| Y_i - Y_i^{FNN} \right\|.$$
When the embedding dimension increases by 1, the distance between these two points becomes:
$$D_i^2(m+1) = D_i^2(m) + \left\| y_{i+m\tau} - y_{i+m\tau}^{FNN} \right\|^2.$$
If $D_i(m+1) \gg D_i(m)$, these two points are far from each other in the higher-dimensional phase space and become false nearest neighbors after mapping to the low-dimensional space. The following ratio can also be utilized to determine whether two points in the space are false neighbors:
$$T_m = \frac{\left\| y_{i+m\tau} - y_{i+m\tau}^{FNN} \right\|}{D_i(m)},$$
where $T_m$ is compared against a determination threshold whose general value range is [10, 50]. If $T_m$ is greater than the threshold, $Y_i$ and $Y_i^{FNN}$ are considered to be false neighbors.
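The ratio test can be turned into a small routine that increases m until the FNN fraction (nearly) vanishes. The 1% cutoff, the threshold of 15, and the synthetic series below are illustrative assumptions.

```python
import numpy as np

def fnn_fraction(x, tau, m, thresh=15.0):
    """Fraction of false nearest neighbors at embedding dimension m."""
    N = len(x) - m * tau
    emb = np.array([x[i:i + m * tau:tau] for i in range(N)])  # m-dim delay vectors
    false = 0
    for i in range(N):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                     # nearest neighbor in dimension m
        growth = abs(x[i + m * tau] - x[j + m * tau])
        if d[j] > 0 and growth / d[j] > thresh:   # the ratio test T_m
            false += 1
    return false / N

def embedding_dimension(x, tau, m_max=10, cutoff=0.01):
    """Smallest m at which the FNN fraction (nearly) vanishes."""
    for m in range(1, m_max + 1):
        if fnn_fraction(x, tau, m) < cutoff:
            return m
    return m_max

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40, 800)) + 0.01 * rng.standard_normal(800)
m = embedding_dimension(x, tau=10)
print(m)
```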

2.3.3. Phase Space Reconstruction

By using the delay $\tau$ and the embedding dimension $m$, the one-dimensional air quality time series $\{ y_1, y_2, \ldots, y_n \}$ can be mapped to an equivalent phase space based on PSR. This method yields the first vector $Y_1 = \{ y_1, y_{1+\tau}, \ldots, y_{1+(m-1)\tau} \}$ in the m-dimensional phase space by extracting m values separated by the delay $\tau$, then slides the window forward one step to yield the second vector $Y_2 = \{ y_2, y_{2+\tau}, \ldots, y_{2+(m-1)\tau} \}$, and so on. Thus, the process of forming the set of vectors $\{ Y_i \}$ is as follows:
$$Y = \begin{bmatrix} Y_1^T \\ Y_2^T \\ \vdots \end{bmatrix} = \begin{bmatrix} y_1 & y_{1+\tau} & \cdots & y_{1+(m-1)\tau} \\ y_2 & y_{2+\tau} & \cdots & y_{2+(m-1)\tau} \\ \vdots & \vdots & & \vdots \\ y_{N-(m-1)\tau-1} & y_{N-(m-2)\tau-1} & \cdots & y_{N-1} \end{bmatrix},$$
where each row $Y_i^T$ serves as an input vector of DNR, and the target vector is:
$$Y_{target} = \left( y_{2+(m-1)\tau},\ y_{3+(m-1)\tau},\ \ldots,\ y_N \right).$$
According to the above methods, the original air quality time series data can be mapped to the phase space by selecting the appropriate time delay and embedding dimensions. The processes of calculating the time delay and embedding dimensions are shown in Figure 3, and the results of τ and m are shown in Table 1.
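The reconstruction can be sketched as a sliding delay embedding paired with one-step-ahead targets; the toy series below is purely illustrative.

```python
import numpy as np

def phase_space_reconstruct(x, tau, m):
    """Map a 1-D series to delay vectors Y_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau})
    and pair each vector with the next value as the regression target."""
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (m - 1) * tau - 1            # leave one step for the target
    Y = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n_vec)])
    target = x[(m - 1) * tau + 1:]
    return Y, target

x = np.arange(10.0)                               # toy series for illustration
Y, t = phase_space_reconstruct(x, tau=2, m=3)
print(Y[0], t[0])                                 # [0. 2. 4.] 5.0
```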

2.3.4. Maximum Lyapunov Exponent

Using the above-mentioned PSR method, one-dimensional data can be mapped to a high-dimensional space. However, before performing the prediction task, the chaotic characteristics of the data need to be verified. The MLE, which is based on the idea that a chaotic system is extremely sensitive to its initial value, is a common tool for determining whether a system exhibits chaotic characteristics. If the MLE is positive, even if the difference between adjacent points in the initial time series is small, their separation will continue to grow with iteration until the trajectories finally diverge [48]. The steps for calculating the MLE are as follows: $Y(t_0)$ is taken as the initial point, its nearest point is selected as $Y_0(t_0)$, and the distance $L_0$ between these two points is calculated. As the two points move along their trajectories, once their distance exceeds the threshold $\mu$, the grown distance $L_1'$ is recorded as:
$$L_1' = \left| Y(t_1) - Y_0(t_1) \right| > \mu.$$
Then, $Y(t_1)$ is taken as the centre point, another point $Y_1(t_1)$ with the closest distance is selected, and the new distance $L_1$ satisfies:
$$L_1 = \left| Y(t_1) - Y_1(t_1) \right| < \mu.$$
The above steps are repeated for m iterations until all points in the time series are traversed, and the MLE $\lambda_{max}$ can be calculated by:
$$\lambda_{max} = \frac{1}{m} \sum_{i=0}^{m} \ln \frac{L_i'}{L_i}.$$
The MLE is used to describe the degree of separation between two similar initial values over time. Therefore, $\lambda_{max} > 0$ indicates that adjacent points will eventually separate, and the trajectory is characterized by local instability. As shown in Table 1, the $\lambda_{max}$ values of all six datasets exceed 0. Thus, the six time series datasets considered in this study all have chaotic properties [48].

3. Experimental Studies

In this section, we primarily introduce the data description and preprocessing, the experimental environment, and related parameter settings. Then, the four indicators employed to evaluate the air quality prediction performance of each method are introduced. Finally, the PM2.5 and PM10 concentrations predicted by all methods are plotted in graphs for a comparative analysis and discussion. The methodological air quality prediction framework of DNR is shown in Figure 4.

3.1. Data Description and Preprocessing

In this study, daily PM2.5 and PM10 concentration data from October 2019 to June 2021 in Beijing, China, were selected to verify the performance of the proposed method. All data were obtained from China’s air quality online monitoring and analysis platform (https://www.aqistudy.cn/, accessed on 30 July 2021); the daily concentrations, in μg/m3, were averaged from the hourly data of the main Environmental Protection Station. First, the original PM2.5 and PM10 data were divided into three subdatasets with the same number of samples, and each subdataset was then split into a training set and a test set. To achieve the best experimental results, the training proportion differed across subdatasets, with the training set taken from the beginning of each subdataset, as shown in Table 2. It should be emphasized that during the model training and PSR steps, the test sets were not visible to the model and were used only for testing and verification of the methods.
Since the synaptic input interval of DNR is [0, 1], the original data should be normalized. Normalization was performed not only to improve computational efficiency but also to fully meet the experimental requirements. The normalization formula is as follows:
$$y_i' = \frac{y_i - \mathrm{Min}(y)}{\mathrm{Max}(y) - \mathrm{Min}(y)} ,$$
where $y_i'$ is the normalized value, and $\mathrm{Max}(y)$ and $\mathrm{Min}(y)$ are the maximum and minimum values of the original data, respectively. As the soma layer of DNR also adopts a sigmoid function as the activation function, the final output of the model also lies in [0, 1]. In the experiments, to restore the original values, we applied inverse normalization to the output results.
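The normalization and its inverse can be sketched in a few lines. This is a minimal illustration of the formula above; the function names are ours, not from the paper:

```python
import numpy as np

def normalize(y):
    """Min-max normalization to [0, 1], as required by DNR's synaptic input.
    Returns the scaled data together with the bounds needed for inversion."""
    y_min, y_max = float(y.min()), float(y.max())
    return (y - y_min) / (y_max - y_min), (y_min, y_max)

def denormalize(y_scaled, bounds):
    """Inverse normalization: map model outputs in [0, 1] back to
    pollutant concentrations in the original units (ug/m3)."""
    y_min, y_max = bounds
    return y_scaled * (y_max - y_min) + y_min
```

Keeping the bounds from the training data lets the model's [0, 1] outputs be mapped back to concentrations, which is how the predicted curves are reported later.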

3.2. Experimental Setup

To improve DNR’s prediction accuracy, we proposed the SFDE algorithm to optimize all the weights and thresholds in the model. To verify the feasibility, superiority, and reliability of the proposed prediction model, we selected nine commonly used machine learning models for comparison with the proposed DNR method: the MLP, ENN, and DT methods; SVR with four different kernel functions, namely, a linear kernel (SVR + L), a radial basis function (RBF) kernel (SVR + R), a polynomial kernel (SVR + P), and a sigmoid kernel (SVR + S) [49]; an LSTM neural network; and DNR optimized by a BP algorithm (DNR + BP).
Before performing air quality estimation, each model has its own hyperparameters that need to be determined. For DNR, changes in its two hyperparameters, $k$ and $\alpha$, have a greater influence on the prediction results than changes in the other settings. For a fair comparison, in this study, we utilized a random search method to optimize the hyperparameters of all the models [50]. The parameter settings of DNR are shown in Table 3, and the settings of the other models are shown in Table 4, where epoch indicates a full iteration and $c$ and $\gamma$ are parameters of the kernel functions in the SVRs. To obtain a reliable performance comparison, each prediction task was independently repeated 30 times for each method, and the reported results are the averages of the repeated experiments. All experiments were performed on multiple computers with the same Intel(R) Core(TM) i7-10770k 3.60 GHz CPU and 32 GB of memory, and the simulation software was MATLAB R2020a.
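Random search over hyperparameters can be sketched as below. The search ranges and the `evaluate` callback are illustrative assumptions; the paper states only that random search [50] was used, not its exact configuration:

```python
import random

def random_search(evaluate, space, n_trials=50, seed=0):
    """Random search: sample each hyperparameter independently and keep
    the configuration with the lowest validation error. `evaluate`
    should train/validate a model and return its error."""
    rng = random.Random(seed)
    best_params, best_err = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in space.items()}
        err = evaluate(params)           # e.g. validation MSE of the model
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# Illustrative ranges for DNR's two hyperparameters; the values chosen
# in Table 3 fall inside them, but the actual ranges are assumptions.
space = {"k": list(range(5, 31)), "alpha": list(range(1, 16))}
```

Compared with grid search, sampling each trial independently covers wide ranges at a fixed budget, which is why random search is a common default for this kind of tuning.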

3.3. Evaluation Indicators

To accurately and uniformly compare the performances of all participating air quality prediction methods, we selected four common evaluation indicators, all of which are calculated from the differences between the predicted and true values: (1) the mean square error (MSE), the expected value of the squared difference between the estimated and true values, which evaluates the degree of data change; a smaller MSE indicates that the model fits the data more accurately; (2) the root mean square error (RMSE), the arithmetic square root of the MSE, which differs from the MSE mainly in its sensitivity to error; (3) the mean absolute error (MAE), which better reflects the actual magnitude of the prediction error; and (4) the mean absolute percentage error (MAPE), which is similar to the MAE; a lower MAPE value indicates a better predictive ability. The corresponding formulas are as follows:
$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_{\mathrm{test}}(i) - y_{\mathrm{test}}(i) \right)^2 ,$$
$$\mathrm{RMSE} = \left[ \frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_{\mathrm{test}}(i) - y_{\mathrm{test}}(i) \right)^2 \right]^{1/2} ,$$
$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{y}_{\mathrm{test}}(i) - y_{\mathrm{test}}(i) \right| ,$$
$$\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_{\mathrm{test}}(i) - y_{\mathrm{test}}(i)}{\hat{y}_{\mathrm{test}}(i)} \right| ,$$
where $\hat{y}_{\mathrm{test}}(i)$ and $y_{\mathrm{test}}(i)$ represent the model-predicted value and the target value of the original data, respectively, and $n$ is the number of samples. It should be noted that all evaluation indicators were calculated on the prediction set, and the results obtained are concentrations of air pollutants.
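The four indicators translate directly into code; a minimal sketch follows. Note that, following the paper's formula, the MAPE here scales the error by the predicted value rather than the more common observed value:

```python
import numpy as np

def mse(y_hat, y):
    """Mean square error between predictions and targets."""
    return float(np.mean((y_hat - y) ** 2))

def rmse(y_hat, y):
    """Root mean square error: the arithmetic square root of the MSE."""
    return mse(y_hat, y) ** 0.5

def mae(y_hat, y):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_hat - y)))

def mape(y_hat, y):
    """Mean absolute percentage error, scaled by the predicted value
    as in the formula above (not the usual observed-value denominator)."""
    return float(np.mean(np.abs((y_hat - y) / y_hat)))
```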

3.4. Results and Discussion

In this section, we present the experimental results and a comparative analysis in detail. The original one-dimensional daily air pollutant concentration time series are irregular and random, and thus difficult to predict. Therefore, the time delay and embedding dimension of the time series samples were first calculated by the MI and FNN approaches, respectively. Then, based on PSR, the time series were constructed into multidimensional predictable space vectors, and the MLE was used to validate their chaotic properties and predictability. Finally, the newly obtained vector data were utilized as the input of the prediction model to predict air quality. For a fair comparison, three daily PM2.5 datasets and three daily PM10 datasets from Beijing were selected to avoid errors in the experimental results.
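The PSR step maps the one-dimensional series into delay vectors; a minimal sketch of Takens' embedding, usable with the τ and m values from Table 1 (the function name is ours):

```python
import numpy as np

def phase_space_reconstruct(x, tau, m):
    """Takens' delay embedding: build vectors
    Y(t) = [x(t), x(t+tau), ..., x(t+(m-1)*tau)] from a 1-D series x."""
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for the chosen tau and m")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
```

For the PM2.5(a) dataset, for example, Table 1 gives τ = 2 and m = 4, so each input vector collects four readings spaced two days apart.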
In addition, nine common prediction models were selected as competitors of DNR, and four prediction performance evaluation metrics were used to measure the predictive abilities of the models. Moreover, the stability of the learning algorithm and its influence on the model are discussed and analyzed, and two nonparametric statistical test methods were employed to verify the validity of the final experimental results.

3.4.1. Comparison Study

Before the experiments, the parameters of all methods were set in accordance with Table 3 and Table 4. To intuitively reflect the concentrations of air pollutants, the outputs of the relevant models were treated by inverse normalization. Table 5 summarizes the prediction results of all models on the six datasets. To avoid erroneous experimental results, all the reported values are the averages of 30 runs. The optimal and second-best results for each dataset are presented in red and blue, respectively. Table 5 demonstrates that DNR achieved excellent performance when predicting both PM2.5 and PM10 concentrations and outperformed the competitor models, which means that DNR can better address daily air quality forecasting problems. Among the four SVR models, SVR + L had the best performance on the PM2.5(b) task, indicating that the linear kernel function has a good fitting ability in the SVR training process. SVR + P also displayed certain advantages on the PM2.5(b) and PM10(c) datasets. However, ENN did not perform well in air quality forecasting, as its MAPE value exceeds 2.5, indicating that ENN cannot reliably map the nonlinear relationship between the input and output and thus cannot accurately predict air quality. In general, SVR + P achieved a prediction effect similar to that of DNR on all six forecasting problems, with a particularly small difference in the MAE. In addition, the DT and LSTM models had mediocre performances, and training the LSTM network required more time to fit the data due to the characteristics of its learning mechanism. The other models could perform the prediction tasks efficiently.
Figure 5 shows the curves of the actual and output values of DNR during the training and prediction processes, where the PM2.5 and PM10 concentrations are colored green and red, respectively, and the gray curve represents the actual values. It is worth emphasizing that the training set comprised the first 75% or 70% of each dataset and was used for model learning, and the remaining part formed the prediction set used to validate the prediction accuracy. Figure 5 demonstrates that DNR can fit the real data reliably. Overall, the predicted values in each prediction problem are consistent with the fluctuations in the observed values. However, there were some defects in the processing of peak and valley (maximum and minimum) values, which deviated from the actual values. As the multiplication operation is used in the dendrites of DNR, the soma layer cannot accurately respond to sudden increases in the input at outlier points. In addition, the green curves appear to fit better than the red curves; the daily PM2.5 concentration data also exhibit stronger chaos, as reflected by their larger MLE values in Table 1. As shown in Table 5 (right), the performance indicators of the six datasets were averaged. Although DNR shows certain disadvantages on the PM2.5(b) dataset, due to its strong nonlinear fitting ability, it is still the best model for the overall air quality prediction task. Additionally, to demonstrate the abilities of the prediction models more clearly, the RMSE and MAPE are presented as bar charts in Figure 6, which show that DNR has clear advantages on both the PM2.5 and PM10 concentration datasets.

3.4.2. Spatial Stability of the Models

According to Table 5, based on the mean results of the 30 independent experiments, the predictive ability of DNR is significantly better than those of the other models. However, these mean results do not reflect the stability of each model across repetitions of the same task. In other words, stability is another criterion for evaluating the models, and a boxplot can be used to visualize the distribution of all the results for each model. Figure 7 shows the MAE values over all 30 independent runs for all the models; the red horizontal line represents the median of the experimental results, and the rectangle represents the overall MAE range of the corresponding model. ENN clearly oscillated significantly on each dataset, thereby showing low stability. MLP, LSTM, DNR + BP, and DNR all showed normal offset ranges, indicating that the randomness of the initial values of the optimization algorithms has a certain impact on the prediction performance of these models. In addition, due to their computational mechanisms, the SVR and DT models all demonstrated high stability, as they produced similar experimental results in every run.
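The spread conveyed by the boxplots can be quantified in a few lines; a small helper of ours (not from the paper) reporting the median and interquartile range of the per-run MAE values:

```python
import numpy as np

def stability_summary(mae_runs):
    """Median and interquartile range (IQR) of MAE values over repeated
    runs; a smaller IQR indicates a more stable model."""
    q1, med, q3 = np.percentile(mae_runs, [25, 50, 75])
    return {"median": float(med), "iqr": float(q3 - q1)}
```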

3.4.3. Effects of Learning Strategies

The results in Table 5 and Figure 5 and Figure 7 show that DNR exhibited strong predictive abilities for both PM2.5 and PM10 datasets. Due to the unique nonlinear multiplication operation in DNR, this method can reliably fit the nonlinear relationship between the input and output, which is a crucial advantage that other ANNs do not possess. However, during the model training process, the SFDE algorithm plays an important role in optimizing the weights and thresholds in DNR. On high-dimensional error surfaces, the SFDE algorithm can capture relatively better training results based on its powerful search capabilities.
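For orientation, the backbone of the optimizer is the classic DE/rand/1/bin update; one generation can be sketched as below. The scale-free wiring that SFDE adds (restricting which individuals may act as r1–r3 via a scale-free neighbourhood) is omitted here, so this is plain DE, not the paper's full SFDE:

```python
import numpy as np

def de_generation(pop, fitness, f_eval, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/1/bin: mutation, binomial crossover,
    and greedy selection. F and CR are typical textbook values."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    for i in range(NP):
        # pick three distinct individuals, all different from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(D) < CR
        cross[rng.integers(D)] = True            # guarantee at least one gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = f_eval(trial)
        if f_trial <= fitness[i]:                # keep the better vector
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```

Because selection is greedy, the best fitness never worsens from one generation to the next, which is the property that makes DE-style training well suited to the rugged weight landscape of DNR.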
Among the competing models, both ENN and MLP are optimized based on gradient descent, which easily falls into local optima in high-dimensional solution spaces and often converges prematurely depending on the quality of the initial values and the complexity of the solution space; as a result, ENN and MLP could not reach the globally optimal state. In response to this disadvantage, LSTM improves the gradient descent search strategy via the Adam method, which enhances performance to a certain extent. However, the LSTM training process incurs a significant computational cost. To further compare the influence of the learning algorithm's search ability on the prediction results, the DNR + BP model was constructed; its structure and hyperparameters are the same as those of DNR, and its learning algorithm is a gradient descent-based BP algorithm. The bottom of Table 5 indicates that the prediction performance of DNR was better than that of DNR + BP on every dataset, demonstrating that in the training stage, the SFDE algorithm can effectively help DNR find a more appropriate combination of parameters for predicting air quality.

3.4.4. Statistical Analysis

In the experiments, all methods were independently run 30 times for each prediction task. To further validate the effectiveness and superiority of the proposed method, the Friedman test and the Wilcoxon rank-sum test were applied to the results of each experiment [51,52]. Both tests were implemented in the KEEL software [53], and the nonparametric statistical test results are shown in Table 6 and Table 7. The Friedman test ranks all models on the same prediction problem, with smaller values indicating better performance. According to Table 6, DNR obtained the best ranking on all six datasets, especially on PM10(b), where its rank reached 1.3. We also averaged the ranking results of each method and found that DNR achieved the best ranking both on the individual datasets and in the average results, which shows that DNR performs air quality prediction excellently. To further examine the significance of the differences between DNR and the competing models, Wilcoxon rank-sum tests were conducted to provide p values; the threshold for a significant difference is 0.05 (5.00 × 10⁻²), and a p value below 0.05 indicates that DNR is significantly superior to its competitor. As shown in Table 7, most of the p values are far below 0.05, which means that DNR has a significantly superior predictive ability compared with the competing models. In particular, the minimum p value of 1.82 × 10⁻¹² appears many times, indicating that DNR has notable superiority over most of the other models. However, when comparing DNR with SVR + L, SVR + P, and LSTM, there are four non-significant differences, which is also consistent with the results of our experiments. In summary, these nonparametric statistical tests reveal that DNR is more suitable for predicting air quality than the other models and can achieve superior performance.
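The pairwise comparison can be illustrated with a plain implementation of the Wilcoxon rank-sum test (normal approximation, no tie correction); the paper used the KEEL software rather than this sketch:

```python
import numpy as np
from math import erfc, sqrt

def ranksum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation,
    adequate for the 30-run samples used in the experiments. Ties are
    not corrected for in this sketch."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = len(a), len(b)
    ranks = np.concatenate([a, b]).argsort().argsort() + 1.0
    W = ranks[:n1].sum()                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0             # mean of W under H0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return erfc(abs(z) / sqrt(2.0))           # 2 * (1 - Phi(|z|))
```

A p value below 0.05 is then read as a significant difference between the two methods, as in Table 7.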

4. Conclusions

Estimates of air quality are conducive to environmental monitoring and governance. However, the irregularity of air quality concentration time series data makes it difficult to predict air quality accurately and stably. In this paper, a novel DNR method was proposed to improve the prediction accuracy of daily PM2.5 and PM10 concentrations. To enhance the performance and robustness of DNR, scale-free network technology was embedded into the DE algorithm to optimize the DNR network. The crucial PSR parameters were calculated by the MI and FNN methods, and the chaotic properties of the datasets were verified by the MLE. Forecasts were performed by numerous competing models on nearly two years of data, comprising three daily PM2.5 and three daily PM10 concentration datasets. Four evaluation metrics and two nonparametric statistical tests were used to evaluate the performance of each competitor. The extensive experimental results confirmed that DNR offers superior accuracy and stability in air quality estimation tasks and can therefore be considered a highly competitive air quality prediction method. This study focused on the prediction of air quality concentration time series but did not quantitatively analyze the factors influencing air pollution. Thus, in future work, we will incorporate more attributes related to air pollution (e.g., temperature, weather, and industrial development factors), observation locations, data modeling, and the prediction of multidimensional attributes.

Author Contributions

Conceptualization, methodology, software, and writing—original draft preparation, Z.S.; investigation, resources, and formal analysis, C.T. and J.Q.; visualization, data curation and validation, J.Q. and B.Z.; supervision, Z.S. and Y.T.; project administration, Z.S. and Y.T.; funding acquisition, Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially supported by the Nature Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 19KJB520015), the Talent Development Project of Taizhou University (No. TZXY2018QDJJ006), the National Science Foundation for Young Scientists of China (Grant No. 61802274), and the Computer Science and Technology Construction Project of Taizhou University (No. 19YLZYA02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Agarwal, S.; Sharma, S.; Suresh, R.; Rahman, M.H.; Vranckx, S.; Maiheu, B.; Blyth, L.; Janssen, S.; Gargava, P.; Shukla, V.; et al. Air quality forecasting using artificial neural networks with real time dynamic error correction in highly polluted regions. Sci. Total Environ. 2020, 735, 139454. [Google Scholar] [CrossRef] [PubMed]
  2. Xu, Y.; Yang, W.; Wang, J. Air quality early-warning system for cities in China. Atmos. Environ. 2017, 148, 239–257. [Google Scholar] [CrossRef]
  3. Cekim, H.O. Forecasting PM10 concentrations using time series models: A case of the most polluted cities in Turkey. Environ. Sci. Pollut. Res. 2020, 27, 25612–25624. [Google Scholar] [CrossRef] [PubMed]
  4. Biancofiore, F.; Busilacchio, M.; Verdecchia, M.; Tomassetti, B.; Aruffo, E.; Bianco, S.; Di Carlo, P. Recursive neural network model for analysis and forecast of PM10 and PM2.5. Atmos. Pollut. Res. 2017, 8, 652–659. [Google Scholar] [CrossRef]
  5. Feng, X.; Li, Q.; Zhu, Y.; Hou, J.; Jin, L.; Wang, J. Artificial neural networks forecasting of PM2.5 pollution using air mass trajectory based geographic model and wavelet transformation. Atmos. Environ. 2015, 107, 118–128. [Google Scholar] [CrossRef]
  6. Baklanov, A.; Mestayer, P.; Clappier, A.; Zilitinkevich, S.; Joffre, S.; Mahura, A.; Nielsen, N. Towards improving the simulation of meteorological fields in urban areas through updated/advanced surface fluxes description. Atmos. Chem. Phys. 2008, 8, 523–543. [Google Scholar] [CrossRef] [Green Version]
  7. Stern, R.; Builtjes, P.; Schaap, M.; Timmermans, R.; Vautard, R.; Hodzic, A.; Memmesheimer, M.; Feldmann, H.; Renner, E.; Wolke, R.; et al. A model inter-comparison study focussing on episodes with elevated PM10 concentrations. Atmos. Environ. 2008, 42, 4567–4588. [Google Scholar] [CrossRef]
  8. Lv, B.; Cobourn, W.G.; Bai, Y. Development of nonlinear empirical models to forecast daily PM2.5 and ozone levels in three large Chinese cities. Atmos. Environ. 2016, 147, 209–223. [Google Scholar] [CrossRef]
  9. Sahu, R.; Nagal, A.; Dixit, K.K.; Unnibhavi, H.; Mantravadi, S.; Nair, S.; Simmhan, Y.; Mishra, B.; Zele, R.; Sutaria, R.; et al. Robust statistical calibration and characterization of portable low-cost air quality monitoring sensors to quantify real-time o 3 and no 2 concentrations in diverse environments. Atmos. Meas. Tech. 2021, 14, 37–52. [Google Scholar] [CrossRef]
  10. Hsieh, H.-P.; Lin, S.-D.; Zheng, Y. Inferring air quality for station location recommendation based on urban big data. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 437–446. [Google Scholar]
  11. Wang, Y.; Kong, T. Air quality predictive modeling based on an improved decision tree in a weather-smart grid. IEEE Access 2019, 7, 172892–172901. [Google Scholar] [CrossRef]
  12. Gore, R.W.; Deshpande, D.S. An approach for classification of health risks based on air quality levels. In Proceedings of the 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM), Aurangabad, India, 5–6 October 2017; pp. 58–61. [Google Scholar]
  13. Liu, B.; Jin, Y.; Li, C. Analysis and prediction of air quality in Nanjing from autumn 2018 to summer 2019 using PCR–SVR–ARMA combined model. Sci. Rep. 2021, 11, 348. [Google Scholar] [CrossRef] [PubMed]
  14. Dun, M.; Xu, Z.; Chen, Y.; Wu, L. Short-term air quality prediction based on fractional grey linear regression and support vector machine. Math. Probl. Eng. 2020, 2020, 8914501. [Google Scholar] [CrossRef]
  15. Leong, W.; Kelani, R.; Ahmad, Z. Prediction of air pollution index (API) using support vector machine (SVM). J. Environ. Chem. Eng. 2020, 8, 103208. [Google Scholar] [CrossRef]
  16. Yan, R.; Liao, J.; Yang, J.; Sun, W.; Nong, M.; Li, F. Multi-hour and multi-site air quality index forecasting in Beijing using CNN, LSTM, CNN-LSTM, and spatiotemporal clustering. Expert Syst. Appl. 2021, 169, 114513. [Google Scholar] [CrossRef]
  17. Jin, N.; Zeng, Y.; Yan, K.; Ji, Z. Multivariate air quality forecasting with nested lstm neural network. IEEE Trans. Ind. Inform. 2021, 17, 8514–8522. [Google Scholar] [CrossRef]
  18. Bai, Y.; Zeng, B.; Li, C.; Zhang, J. An ensemble long short-term memory neural network for hourly PM2.5 concentration forecasting. Chemosphere 2019, 222, 286–294. [Google Scholar] [CrossRef] [PubMed]
  19. Font, A.; Tremper, A.H.; Lin, C.; Priestman, M.; Marsh, D.; Woods, M.; Heal, M.R.; Green, D.C. Air quality in enclosed railway stations: Quantifying the impact of diesel trains through deployment of multi-site measurement and random forest modelling. Environ. Pollut. 2020, 262, 114284. [Google Scholar] [CrossRef] [PubMed]
  20. Stafoggia, M.; Bellander, T.; Bucci, S.; Davoli, M.; De Hoogh, K.; De’Donato, F.; Gariazzo, C.; Lyapustin, A.; Michelozzi, P.; Renzi, M.; et al. Estimation of daily PM10 and PM2.5 concentrations in Italy, 2013–2015, using a spatiotemporal land-use random-forest model. Environ. Int. 2019, 124, 170–179. [Google Scholar] [CrossRef]
  21. Qi, Z.; Wang, T.; Song, G.; Hu, W.; Li, X.; Zhang, Z. Deep air learning: Interpolation, prediction, and feature analysis of fine-grained air quality. IEEE Trans. Knowl. Data Eng. 2018, 30, 2285–2297. [Google Scholar] [CrossRef] [Green Version]
  22. Zhang, Q.; Lam, J.C.; Li, V.O.; Han, Y. Deep-AIR: A hybrid CNN-LSTM framework for fine-grained air pollution forecast. arXiv 2020, arXiv:2001.11957. [Google Scholar]
  23. Pak, U.; Ma, J.; Ryu, U.; Ryom, K.; Juhyok, U.; Pak, K.; Pak, C. Deep learning-based PM2.5 prediction considering the spatiotemporal correlations: A case study of Beijing, China. Sci. Total Environ. 2020, 699, 133561. [Google Scholar] [CrossRef]
  24. Voukantsis, D.; Karatzas, K.; Kukkonen, J.; Räsänen, T.; Karppinen, A.; Kolehmainen, M. Intercomparison of air quality data using principal component analysis, and forecasting of PM10 and PM2.5 concentrations using artificial neural networks, in Thessaloniki and Helsinki. Sci. Total Environ. 2011, 409, 1266–1276. [Google Scholar] [CrossRef]
  25. Elbayoumi, M.; Ramli, N.A.; Yusof, N.F.F.M. Development and comparison of regression models and feedforward backpropagation neural network models to predict seasonal indoor PM2.5–10 and PM2.5 concentrations in naturally ventilated schools. Atmos. Pollut. Res. 2015, 6, 1013–1023. [Google Scholar] [CrossRef]
  26. Kow, P.-Y.; Wang, Y.-S.; Zhou, Y.; Kao, I.-F.; Issermann, M.; Chang, L.-C.; Chang, F.-J. Seamless integration of convolutional and backpropagation neural networks for regional multi-step-ahead PM2.5 forecasting. J. Clean. Prod. 2020, 261, 121285. [Google Scholar] [CrossRef]
  27. Photphanloet, C.; Lipikorn, R. PM10 concentration forecast using modified depth-first search and supervised learning neural network. Sci. Total Environ. 2020, 727, 138507. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, L.; Xie, Y.; Chen, A.; Duan, G. A forecasting model based on enhanced Elman neural network for air quality prediction. In Advanced Multimedia and Ubiquitous Engineering; Springer: Berlin/Heidelberg, Germany, 2018; pp. 65–74. [Google Scholar]
  29. Lin, Y.-C.; Lee, S.-J.; Ouyang, C.-S.; Wu, C.-H. Air quality prediction by neuro-fuzzy modeling approach. Appl. Soft Comput. 2020, 86, 105898. [Google Scholar] [CrossRef]
  30. Ma, J.; Ding, Y.; Cheng, J.C.; Jiang, F.; Tan, Y.; Gan, V.J.; Wan, Z. Identification of high impact factors of air quality on a national scale using big data and machine learning techniques. J. Clean. Prod. 2020, 244, 118955. [Google Scholar] [CrossRef]
  31. Loy-Benitez, J.; Vilela, P.; Li, Q.; Yoo, C. Sequential prediction of quantitative health risk assessment for the fine particulate matter in an underground facility using deep recurrent neural networks. Ecotoxicol. Environ. Saf. 2019, 169, 316–324. [Google Scholar] [CrossRef]
  32. London, M.; Häusser, M. Dendritic computation. Annu. Rev. Neurosci. 2005, 28, 503–532. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Van Ooyen, A.; Nienhuis, B. Improving the convergence of the backpropagation algorithm. Neural Netw. 1992, 5, 465–471. [Google Scholar] [CrossRef]
  34. Weir, M.K. A method for self-determination of adaptive learning rates in back propagation. Neural Netw. 1991, 4, 371–379. [Google Scholar] [CrossRef]
  35. Ji, J.; Gao, S.; Cheng, J.; Tang, Z.; Todo, Y. An approximate logic neuron model with a dendritic structure. Neurocomputing 2016, 173, 1775–1783. [Google Scholar] [CrossRef]
  36. Tang, C.; Ji, J.; Tang, Y.; Gao, S.; Tang, Z.; Todo, Y. A novel machine learning technique for computer-aided diagnosis. Eng. Appl. Artif. Intell. 2020, 92, 103627. [Google Scholar] [CrossRef]
  37. Song, S.; Chen, X.; Song, S.; Todo, Y. A neuron model with dendrite morphology for classification. Electronics 2021, 10, 1062. [Google Scholar] [CrossRef]
  38. Song, S.; Chen, X.; Tang, C.; Song, S.; Tang, Z.; Todo, Y. Training an approximate logic dendritic neuron model using social learning particle swarm optimization algorithm. IEEE Access 2019, 7, 141947–141959. [Google Scholar] [CrossRef]
  39. Tang, Y.; Ji, J.; Gao, S.; Dai, H.; Yu, Y.; Todo, Y. A pruning neural network model in credit classification analysis. In Computational Intelligence and Neuroscience; Hindawi: London, UK, 2018; p. 9390410. [Google Scholar]
  40. Ji, J.; Song, S.; Tang, Y.; Gao, S.; Tang, Z.; Todo, Y. Approximate logic neuron model trained by states of matter search algorithm. Knowl.-Based Syst. 2019, 163, 120–130. [Google Scholar] [CrossRef]
  41. Zhou, T.; Gao, S.; Wang, J.; Chu, C.; Todo, Y.; Tang, Z. Financial time series prediction using a dendritic neuron model. Knowl.-Based Syst. 2016, 105, 214–224. [Google Scholar] [CrossRef]
  42. Song, Z.; Tang, Y.; Ji, J.; Todo, Y. Evaluating a dendritic neuron model for wind speed forecasting. Knowl.-Based Syst. 2020, 201, 106052. [Google Scholar] [CrossRef]
  43. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31. [Google Scholar] [CrossRef]
  44. Takens, F. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence; Springer: Berlin/Heidelberg, Germany, 1981; pp. 366–381. [Google Scholar]
  45. Gabbiani, F.; Krapp, H.G.; Koch, C.; Laurent, G. Multiplicative computation in a visual neuron sensitive to looming. Nature 2002, 420, 320–324. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2002, 74, 47. [Google Scholar] [CrossRef] [Green Version]
  47. Kennel, M.B.; Brown, R.; Abarbanel, H.D. Determining embedding dimension for phase-space reconstruction using a geometrical construction. Phys. Rev. A 1992, 45, 3403. [Google Scholar] [CrossRef] [Green Version]
  48. Wolf, A.; Swift, J.B.; Swinney, H.L.; Vastano, J.A. Determining Lyapunov exponents from a time series. Phys. D Nonlinear Phenom. 1985, 16, 285–317. [Google Scholar] [CrossRef] [Green Version]
  49. Chang, C.-C.; Lin, C.-J. Libsvm: A library for support vector machines. ACM Trans. Intell. Syst. Technol. TIST 2011, 2, 27. [Google Scholar] [CrossRef]
  50. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
  51. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
  52. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  53. Alcalá-Fdez, J.; Sanchez, L.; Garcia, S.; del Jesus, M.J.; Ventura, S.; Garrell, J.M.; Herrera, F. KEEL: A software tool to assess evolutionary algorithms for data mining problems. Soft Comput. 2009, 13, 307–318. [Google Scholar] [CrossRef]
Figure 1. Illustration of the proposed DNR network and the six connection states of the synapse structure.
Figure 2. Topology of the scale-free network in the search space.
Figure 3. Process of calculating the time delay and embedding dimensions of the six air quality time series datasets.
Figure 4. Air quality estimation framework based on DNR.
Figure 5. Air quality time series training and estimation results of DNR for each dataset.
Figure 6. Comparison of the RMSE and MAPE of the estimation models based on six datasets.
Figure 7. Box-and-whisker plots for the MAE of all prediction models.
Table 1. The results of the time delay, embedding dimensions, and MLE of the six air quality time series datasets for PSR.
Dataset     Time Delay τ    Embedding Dimensions m    MLE λmax    Chaotic
PM2.5(a)    2               4                         0.1269      Yes
PM2.5(b)    2               3                         0.1524      Yes
PM2.5(c)    2               3                         0.1506      Yes
PM10(a)     3               3                         0.0683      Yes
PM10(b)     3               5                         0.0409      Yes
PM10(c)     5               5                         0.0022      Yes
Table 2. Descriptions of the Beijing daily PM2.5 and PM10 concentrations in the experiments.
Dataset     Range              Partition Ratio    Instance Number
PM2.5(a)    2019.10–2020.12    75%, 25%           343, 114
PM2.5(b)    2020.01–2021.03    75%, 25%           342, 113
PM2.5(c)    2020.04–2021.06    75%, 25%           342, 113
PM10(a)     2019.10–2020.12    70%, 30%           320, 137
PM10(b)     2020.01–2021.03    70%, 30%           318, 137
PM10(c)     2020.04–2021.06    70%, 30%           318, 137
Table 3. Parameter settings of DNR for each air quality dataset.
Parameter            PM2.5(a)     PM2.5(b)     PM2.5(c)     PM10(a)      PM10(b)      PM10(c)
k                    15           16           15           22           20           21
α                    10           9            10           6            6            5
Popsize and epoch    100, 1000    100, 1000    100, 1000    100, 1000    100, 1000    100, 1000
Table 4. Parameter settings of the prediction models for each air quality dataset.
| Model | Related Parameters |
|---|---|
| ENN | Learning rate = 0.01, epoch = 1000 |
| MLP | Hidden layer = 10, learning rate = 0.01, epoch = 1000 |
| SVRs | Cost (c) = 0.5, γ = 0.2 |
| DT | Min leaf = 25 |
| LSTM | Hidden units = 200, epoch = 1000 |
| DNR + BP | k and α are the same as DNR, learning rate = 0.05 |
Table 5. Experimental results based on 30 runs of all the models on the six air quality datasets. The best and second-best results are highlighted in red and blue, respectively.
| Model | Metric | PM2.5(a) | PM2.5(b) | PM2.5(c) | PM10(a) | PM10(b) | PM10(c) | Avg. |
|---|---|---|---|---|---|---|---|---|
| ENN | MSE | 649,021 | 768,384 | 27,383 | 3575 | 610,219 | 2,873,219 | 821,967 |
| | RMSE | 359.698 | 568.409 | 95.389 | 51.733 | 525.458 | 952.916 | 425.601 |
| | MAE | 79.706 | 159.879 | 27.439 | 29.892 | 161.304 | 347.669 | 134.315 |
| | MAPE | 2.530 | 5.472 | 0.954 | 0.676 | 1.544 | 4.295 | 2.578 |
| MLP | MSE | 773.638 | 2016.49 | 846.855 | 1236.41 | 25,439.5 | 26,261.7 | 9429.09 |
| | RMSE | 27.768 | 44.520 | 29.005 | 35.135 | 159.492 | 162.049 | 76.328 |
| | MAE | 22.730 | 31.375 | 20.469 | 27.901 | 53.371 | 54.520 | 35.061 |
| | MAPE | 1.412 | 0.930 | 1.172 | 0.694 | 0.564 | 0.523 | 0.883 |
| SVR + L | MSE | 659.795 | 1265.69 | 637.809 | 974.774 | 27,243.1 | 28,065.4 | 9807.75 |
| | RMSE | 25.686 | 35.576 | 25.255 | 31.221 | 165.055 | 167.527 | 75.054 |
| | MAE | 19.991 | 25.508 | 16.488 | 23.857 | 53.577 | 54.687 | 32.351 |
| | MAPE | 1.183 | 0.850 | 0.831 | 0.563 | 0.525 | 0.515 | 0.745 |
| SVR + R | MSE | 807.986 | 2255.30 | 853.840 | 1236.54 | 161,070 | 167,873 | 55,682.7 |
| | RMSE | 28.425 | 47.490 | 29.221 | 35.164 | 401.335 | 409.723 | 158.560 |
| | MAE | 22.392 | 33.173 | 20.870 | 27.035 | 103.474 | 122.215 | 54.860 |
| | MAPE | 1.309 | 0.880 | 1.161 | 0.620 | 0.934 | 1.355 | 1.043 |
| SVR + P | MSE | 657.839 | 1300.42 | 631.037 | 934.281 | 24,282.4 | 25,500.0 | 8884.33 |
| | RMSE | 25.648 | 36.061 | 25.120 | 30.566 | 155.828 | 159.687 | 72.152 |
| | MAE | 20.194 | 25.669 | 16.562 | 23.072 | 47.304 | 49.682 | 30.414 |
| | MAPE | 1.195 | 0.801 | 0.879 | 0.542 | 0.473 | 0.425 | 0.719 |
| SVR + S | MSE | 671.895 | 1368.19 | 626.755 | 985.257 | 27,136.4 | 28,303.9 | 9848.73 |
| | RMSE | 25.921 | 36.989 | 25.035 | 31.389 | 164.731 | 168.238 | 75.384 |
| | MAE | 20.522 | 26.400 | 17.044 | 24.125 | 53.257 | 54.630 | 32.663 |
| | MAPE | 1.205 | 0.839 | 0.929 | 0.573 | 0.518 | 0.505 | 0.762 |
| DT | MSE | 1074.89 | 1656.205 | 991.629 | 1707.93 | 24,538.6 | 26,226.0 | 9365.88 |
| | RMSE | 32.786 | 40.696 | 31.490 | 41.327 | 156.648 | 161.944 | 77.482 |
| | MAE | 22.112 | 28.461 | 19.469 | 30.595 | 50.682 | 50.813 | 33.689 |
| | MAPE | 1.145 | 0.813 | 0.886 | 0.688 | 0.556 | 0.465 | 0.758 |
| LSTM | MSE | 1569.63 | 4705.71 | 2244.44 | 4206.42 | 32,055.4 | 32,795.1 | 12,929.5 |
| | RMSE | 39.601 | 68.582 | 47.358 | 64.851 | 179.040 | 181.094 | 96.754 |
| | MAE | 28.173 | 49.997 | 31.432 | 53.575 | 90.685 | 90.821 | 57.447 |
| | MAPE | 0.714 | 0.806 | 0.753 | 0.904 | 0.943 | 0.923 | 0.839 |
| DNR + BP | MSE | 1809.38 | 5065.46 | 2041.34 | 3841.47 | 31,741.4 | 32,667.1 | 12,861.0 |
| | RMSE | 42.537 | 71.172 | 45.181 | 61.980 | 178.161 | 180.741 | 96.629 |
| | MAE | 31.313 | 52.732 | 30.786 | 50.027 | 86.038 | 84.338 | 55.872 |
| | MAPE | 0.828 | 0.879 | 0.743 | 0.800 | 0.845 | 0.763 | 0.810 |
| DNR | MSE | 623.101 | 1299.78 | 569.916 | 909.195 | 23,965.2 | 25,091.6 | 8743.14 |
| | RMSE | 24.958 | 36.040 | 23.871 | 30.151 | 154.801 | 158.403 | 71.371 |
| | MAE | 19.038 | 25.349 | 15.891 | 22.708 | 46.302 | 48.945 | 29.705 |
| | MAPE | 0.704 | 0.803 | 0.747 | 0.535 | 0.462 | 0.462 | 0.621 |
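As a reference for how the entries in Table 5 are computed, the four error metrics can be sketched as below. This is a minimal NumPy sketch with toy values, not the paper's evaluation code; whether MAPE is reported as a fraction or scaled by 100 is a convention the table does not restate.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return the four error metrics reported in Table 5."""
    err = y_true - y_pred
    mse = np.mean(err ** 2)               # mean squared error
    rmse = np.sqrt(mse)                   # root mean squared error
    mae = np.mean(np.abs(err))            # mean absolute error
    mape = np.mean(np.abs(err / y_true))  # mean absolute percentage error
    return mse, rmse, mae, mape           # (mape assumes y_true has no zeros)

y_true = np.array([40.0, 55.0, 80.0, 120.0])   # toy stand-ins for observed PM2.5
y_pred = np.array([42.0, 50.0, 85.0, 110.0])
mse, rmse, mae, mape = regression_metrics(y_true, y_pred)
print(round(mse, 2), round(rmse, 2), round(mae, 2), round(mape, 3))
# 38.5 6.2 5.5 0.072
```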
Table 6. Resulting nonparametric statistical Friedman tests of all prediction models, with the best and second-best results highlighted in red and blue, respectively.
| Ranking | ENN | MLP | SVR + L | SVR + R | SVR + P | SVR + S | DT | LSTM | DNR + BP | DNR |
|---|---|---|---|---|---|---|---|---|---|---|
| PM2.5(a) | 7.425 | 6.575 | 3.825 | 6.675 | 4.25 | 5.175 | 5.75 | 5.925 | 6.5 | 2.9 |
| PM2.5(b) | 9.575 | 5.9 | 2.6 | 7.025 | 2.525 | 4.275 | 4.75 | 6.975 | 8.775 | 2.6 |
| PM2.5(c) | 6.35 | 6.4 | 3.975 | 6.75 | 4.9 | 5.875 | 5.4 | 5.8 | 5.6 | 3.95 |
| PM10(a) | 7.2 | 6.2 | 2.975 | 5.375 | 1.775 | 3.975 | 7.325 | 9.95 | 8.95 | 1.275 |
| PM10(b) | 8.4 | 4.575 | 5.5 | 9.4 | 1.75 | 4.475 | 3.6 | 8.625 | 7.375 | 1.3 |
| PM10(c) | 9.625 | 4.35 | 5.225 | 9.375 | 1.8 | 5.175 | 2.95 | 7.9 | 7.1 | 1.5 |
| Avg. | 8.096 | 5.667 | 4.017 | 7.433 | 2.833 | 4.825 | 4.963 | 7.529 | 7.383 | 2.254 |
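The mean ranks in Table 6 follow the usual Friedman procedure: rank the models on each test case (rank 1 = smallest error), then average the ranks over cases. A minimal NumPy sketch with toy numbers, assuming lower error is better; this is an illustrative helper, not the paper's code:

```python
import numpy as np

def average_ranks(errors):
    """Friedman-style mean ranks: `errors` is (n_cases, n_models); rank each
    row (1 = best, i.e. smallest error) and average the ranks over rows."""
    errors = np.asarray(errors, dtype=float)
    ranks = np.empty_like(errors)
    for i, row in enumerate(errors):
        order = np.argsort(row)
        r = np.empty(len(row))
        r[order] = np.arange(1, len(row) + 1)
        for v in np.unique(row):          # average tied ranks, if any
            mask = row == v
            r[mask] = r[mask].mean()
        ranks[i] = r
    return ranks.mean(axis=0)

# Toy example: 3 models evaluated on 4 cases
errs = [[1.0, 2.0, 3.0],
        [1.5, 1.0, 2.5],
        [0.9, 1.2, 2.0],
        [2.0, 2.2, 1.8]]
print(average_ranks(errs))   # mean rank per model: 1.5, 2.0, 2.5
```

Lower mean rank means more consistently accurate, which is why DNR's 2.254 in the last column of Table 6 marks it as the strongest model overall.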
Table 7. Resulting statistical Wilcoxon tests of the prediction models compared with DNR.
| DNR vs. | ENN | MLP | SVR + L | SVR + R | SVR + P | SVR + S | DT | LSTM | DNR + BP |
|---|---|---|---|---|---|---|---|---|---|
| PM2.5(a) | 3.68 × 10⁻⁷ | 9.22 × 10⁻⁶ | 1.04 × 10⁻⁴ | 6.09 × 10⁻⁸ | 3.34 × 10⁻⁶ | 1.27 × 10⁻⁵ | 1.38 × 10⁻⁷ | 1.02 × 10⁻³ | 2.70 × 10⁻³ |
| PM2.5(b) | 1.82 × 10⁻¹² | 1.95 × 10⁻⁵ | 8.89 × 10⁻¹ | 1.82 × 10⁻¹² | 6.81 × 10⁻¹ | 5.27 × 10⁻⁷ | 8.13 × 10⁻¹⁰ | 2.29 × 10⁻⁹ | 1.81 × 10⁻¹² |
| PM2.5(c) | 1.78 × 10⁻² | 1.30 × 10⁻² | 6.99 × 10⁻² | 5.38 × 10⁻⁴ | 2.59 × 10⁻² | 6.46 × 10⁻⁷ | 4.89 × 10⁻² | 8.17 × 10⁻² | 5.83 × 10⁻³ |
| PM10(a) | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 3.64 × 10⁻¹² | 1.82 × 10⁻¹² | 1.01 × 10⁻⁴ | 3.64 × 10⁻¹² | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² |
| PM10(b) | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 2.63 × 10⁻³ | 1.82 × 10⁻¹² | 3.48 × 10⁻⁸ | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² |
| PM10(c) | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² | 1.04 × 10⁻⁷ | 1.82 × 10⁻¹² | 1.04 × 10⁻⁷ | 1.82 × 10⁻¹² | 1.82 × 10⁻¹² |
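The p-values in Table 7 come from paired Wilcoxon signed-rank tests over the 30 independent runs of each model. A minimal SciPy sketch with synthetic paired results; the data below are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired errors over 30 runs: `rival` is consistently worse than `dnr`
dnr = np.linspace(20.0, 60.0, 30)
rival = dnr + np.linspace(0.5, 2.0, 30)

stat, p = wilcoxon(dnr, rival)   # paired two-sided signed-rank test
print(p)                         # far below 0.05: the difference is systematic
```

A p-value below the usual 0.05 threshold rejects the hypothesis that the two models' paired errors come from the same distribution, which is how the near-zero entries in Table 7 support DNR's advantage.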
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Song, Z.; Tang, C.; Qian, J.; Zhang, B.; Todo, Y. Air Quality Estimation Using Dendritic Neural Regression with Scale-Free Network-Based Differential Evolution. Atmosphere 2021, 12, 1647. https://doi.org/10.3390/atmos12121647