Article

Modeling of Particulate Pollutants Using a Memory-Based Recurrent Neural Network Implemented on an FPGA

by Julio Alberto Ramírez-Montañez 1, Jose de Jesús Rangel-Magdaleno 2, Marco Antonio Aceves-Fernández 1,* and Juan Manuel Ramos-Arreguín 1

1 Facultad de Ingeniería, Universidad Autónoma de Querétaro, Querétaro 76010, Mexico
2 Digital Systems Group, Electronics Department, National Institute for Astrophysics, Optics and Electronics, Puebla 72840, Mexico
* Author to whom correspondence should be addressed.
Micromachines 2023, 14(9), 1804; https://doi.org/10.3390/mi14091804
Submission received: 9 August 2023 / Revised: 4 September 2023 / Accepted: 6 September 2023 / Published: 21 September 2023
(This article belongs to the Special Issue FPGA Applications and Future Trends)

Abstract

The present work describes the training and subsequent implementation on an FPGA board of an LSTM neural network for modeling and predicting the exceedances of criteria pollutants such as nitrogen dioxide (NO2), carbon monoxide (CO), and particulate matter (PM10 and PM2.5). Understanding the behavior of pollutants and assessing air quality in specific geographical regions is crucial: overexposure to these pollutants harms both natural ecosystems and living organisms, including humans, so a solution that can accurately evaluate pollution levels is essential. One potential approach is to implement a reduced LSTM neural network on an FPGA board. This implementation maintained an accumulated error of about 11% with respect to the original LSTM network, demonstrating that the proposed architecture retains its functionality despite the reduced number of neurons in its initial layers. It shows the feasibility of integrating a prediction network into a resource-limited system such as an FPGA board that can nonetheless be easily coupled to other systems. Importantly, this implementation does not compromise the prediction accuracy for either the 24 h or the 72 h time frame, highlighting an opportunity for further enhancement and refinement.

1. Introduction

Environmental pollution is a mixture of naturally and anthropogenically produced pollutants [1]. Reducing pollution or maintaining its level is one of the main objectives worldwide to ensure the health of the population and ecosystems [2]. There are different types of classification of environmental pollution, such as water pollution, land pollution, and air pollution [1,2]. The latter has subclassifications such as greenhouse gases, short-lived pollutants, ozone-depleting substances, and finally, criteria pollutants [1,2,3,4].
Monitoring networks were developed to record the behavior of the different criteria pollutants and various environmental conditions [1,5]. The stations are strategically placed across a specific geographic area and equipped with suitable sensors. This arrangement enables the generation of a comprehensive database that contains records of past pollutant behavior. By analyzing these data, we can assess the region’s progress and determine whether certain stations need to be maintained, replaced, or new ones added to the network [5,6].
Developing algorithms based on artificial intelligence has made it possible to interpret and classify large datasets, generating mathematical models of their behavior [7,8,9,10,11] and, at the same time, predictive models [12,13,14,15]. Artificial intelligence has two essential areas: machine learning and deep learning [16]. The latter covers algorithms such as deep neural networks, including convolutional neural networks and recurrent neural networks, among others.
Recurrent neural networks are one of the main algorithms implemented to detect patterns in continuous records [7,10,11,12,13,14,15], maintaining consistency between the previous data and the new data the network proposes, thereby emulating the behavior of the analyzed records.
In the present work, an LSTM architecture was implemented, which is detailed in a later section. This architecture was selected because of its wide use with records whose behavior is highly nonlinear [11,15], that is, records without periodic behavior, shaped by the interplay of several variables. Such records nevertheless exhibit patterns that an RNN can detect and model, allowing the generation of a predictive model.

1.1. Criteria Pollutants

Criteria pollutants are so named because they are the primary focus of public assessments in documents pertaining to air quality [3,17]. These pollutants are subject to specific concentration limits in the environment to safeguard and uphold the population’s well-being [17,18].
They pertain to a distinct category of atmospheric pollution primarily attributed to human activities, particularly industrial processes [3]. The specific pollutants under consideration include sulfur dioxide, nitrogen dioxide, carbon monoxide, ozone, and particulate matter with diameters less than 10 and 2.5 μm [3,18]. The characteristics of these contaminants are outlined in Table 1 [4,17].
Criteria pollutants have different behaviors; each one is affected by different variables, both climatic and the result of industrial, commercial, or population processes [3,4]. This generates a highly nonlinear behavior, as shown in Figure 1. The acronyms CUT, FAC, HGM, BJU, ATI, CAM, and CCA represent the monitoring stations from which the behavioral records were retrieved [5].
In the present work, we focus on three criteria pollutants for the modeling stage and a fourth for the prediction stage. The modeling stage uses carbon monoxide (CO) and particulate matter (PM10 and PM2.5), whose records show the greatest randomness and, above all, contain the largest number of continuous measurements. For the prediction stage, we added ozone (O3) as a control criteria pollutant, since it is among the most widely studied [3,4].

1.2. Monitoring Networks

Atmospheric monitoring is a set of actions that allow measuring the values of meteorological and air quality parameters in a given region. According to the National Institute of Ecology and Climate Change (INECC by its Spanish acronym), atmospheric monitoring in Mexico is used as an instrument for the establishment of environmental policies to protect the health of the population and ecosystems [2,19].
Atmospheric monitoring stations play a crucial role in generating accurate data and aiding the development of air quality standards while ensuring adherence to them [20]. These stations carry various sensors that record important climatic variables and pollutant concentrations. The primary sensors measure temperature, relative humidity, wind direction, and wind speed, which are essential for monitoring climatic conditions [2,6]. Additionally, pollutant sensors measure concentrations of ozone, nitrogen dioxide, nitrogen monoxide, nitrogen oxides, carbon monoxide, sulfur dioxide, PM10, and PM2.5 [5].

1.3. Recurrent Neural Networks

Recurrent neural networks are deep learning tools that recursively compute new states by applying transfer functions to previous states and their inputs [21,22]. Transfer functions usually comprise an affine transformation followed by a nonlinear function determined by the nature of the problem at hand [10,21].
In 2007, Maass et al. showed that RNNs possess the so-called universal approximation property: the ability to approximate arbitrary nonlinear dynamical systems to arbitrary accuracy by performing complex mappings from input sequences to output sequences [12,23].
RNNs do not have a fixed layer structure, which allows arbitrary connections between neurons; this creates a form of temporality and yields a network with memory. There are several types of recurrent networks, depending on the number of layers and the way backpropagation is performed [12].
Some of the main applications of RNNs are natural language recognition, such as for chatbots or translators, pattern recognition, and prediction in continuous records [10,11,12].

LSTM

The long short-term memory (LSTM) recurrent neural network, proposed by Hochreiter and Schmidhuber in 1997, is a deep learning model [14,22,24] widely used in natural language processing and time series analysis [7,11,15].
Unlike a simple recurrent neural network, which holds only a long-term memory in the weights between neurons, modified during training, and a short-term memory in the activations exchanged between neuron nodes [12,15,22], the LSTM model introduces an internal memory block composed of simple blocks connected in a specific way (see Figure 2a), each of which is described in Equations (1)–(6) [22]; Figure 2b shows the neuron-to-neuron connections and data flow.
Figure 2. LSTM network structures (adapted from [15]).
$$f_t = \sigma(U_f h_{t-1} + W_f x_t + b_f) \tag{1}$$
$$i_t = \sigma(U_i h_{t-1} + W_i x_t + b_i) \tag{2}$$
$$nc_t = \tanh(U_{nc} h_{t-1} + W_{nc} x_t + b_{nc}) \tag{3}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot nc_t \tag{4}$$
$$h_t = o_t \odot \tanh(c_t) \tag{5}$$
$$o_t = \sigma(U_o h_{t-1} + W_o x_t + b_o) \tag{6}$$
The variables are described below:
$x_t$: initial data (input).
$c_{t-1}$, $c_t$: cell state and next cell state.
$h_{t-1}$, $h_t$: hidden state and next hidden state (output).
$U_f$, $U_i$, $U_{nc}$, $U_o$: feedback weights.
$W_f$, $W_i$, $W_{nc}$, $W_o$: internal weights.
$b_f$, $b_i$, $b_{nc}$, $b_o$: biases.
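As a bridge to the hardware description given later, the cell update of Equations (1)–(6) can be written out directly. The following sketch (Python/NumPy, with illustrative parameter names; not the authors' code) evaluates one LSTM time step:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step following Equations (1)-(6).

    p holds the feedback weights U_*, internal weights W_*, and biases b_*
    of the forget (f), input (i), candidate (nc), and output (o) blocks.
    """
    f_t = sigmoid(p["U_f"] @ h_prev + p["W_f"] @ x_t + p["b_f"])      # Eq. (1)
    i_t = sigmoid(p["U_i"] @ h_prev + p["W_i"] @ x_t + p["b_i"])      # Eq. (2)
    nc_t = np.tanh(p["U_nc"] @ h_prev + p["W_nc"] @ x_t + p["b_nc"])  # Eq. (3)
    c_t = f_t * c_prev + i_t * nc_t                                   # Eq. (4)
    o_t = sigmoid(p["U_o"] @ h_prev + p["W_o"] @ x_t + p["b_o"])      # Eq. (6)
    h_t = o_t * np.tanh(c_t)                                          # Eq. (5)
    return h_t, c_t
```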

1.4. FPGA Board

Field programmable gate arrays (FPGAs) are programmable electronic devices based on a matrix of logic blocks whose interconnection and functionality can be configured by the user through a specialized hardware description language (such as VHDL or Verilog) [25,26]; they belong to the class of embedded systems [27]. The term embedded system refers to small systems with limited resources compared with the capabilities of a personal computer [25,26,27].
The main advantage of FPGA boards is that they can implement massively parallel data-processing algorithms through a programmable structure based on blocks that operate together, allowing reconfiguration when necessary to improve the stage-to-stage connections required by a specific algorithm [27,28]. Microcontrollers are another common tool, but they operate sequentially, which limits them to already-defined tasks [27].
FPGA boards contain logic gates, clock controllers, and RAM and ROM blocks, among other elements. In this work, we used the DE10-Standard board, which includes a Cyclone V SX SoC (5CSXFC6D6F31C6N) with a 925 MHz dual-core ARM processor and 64 MB of SDRAM, as well as GPIO connection ports.

2. Methodology

Figure 3 shows a diagram of the implemented methodology, distinguishing its two main stages: the software-level and the hardware-level implementation. Each stage has internal sections, which are explained in more detail below.

3. Data Preparation

The selected database corresponds to the records of the Automatic Atmospheric Monitoring Network (RAMA) of Mexico City for the period from 2000 to 2019. To assess the continuity of the records, the atypical entries corresponding to erroneous measurements, represented in this database by −99, were counted. Table 2 shows the percentages of invalid data for different monitoring stations whose records cover the period mentioned above. The MER station was selected for the development of the present work.
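As an illustration, the invalid-data percentages of Table 2 reduce to counting the −99 markers per station. A pandas sketch follows, in which the file name and column layout are assumptions, since the RAMA download format is not described here:

```python
import pandas as pd

df = pd.read_csv("rama_pm10_2000_2019.csv")       # hypothetical file name
stations = ["MER", "CUA", "PED", "BJU"]           # columns assumed to be station codes
invalid_pct = (df[stations] == -99).mean() * 100  # share of -99 markers per station
print(invalid_pct.round(1))                       # cf. Table 2
```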
Once the stations with the most valid records had been identified, the next step was to propose the behavior of the missing data, for which the multiple imputation by chained equations (MICE) algorithm was implemented. This algorithm has demonstrated its ability to emulate the behavior of missing data by comparison with the valid records [29].
The MICE algorithm operates iteratively, progressively improving the missing-data estimates; the base algorithm is described in Algorithm 1. In the first iteration, a simple imputation (mean) is applied; a regression is then fit for each variable against the others, and the cycle repeats until the defined number of iterations is completed [14,15].
Algorithm 1 MICE algorithm.
    Fill in missing values with random draws from the non-missing data
    for each iteration do
        for each variable v with missing values do
            Subset the data to where v was originally non-missing
            Train a model v ~ X, where X are the other variables in the dataset
            Do either:
                (1) Replace missing values with predictions from the model
                (2) Replace missing values using mean matching
        end for
    end for
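A minimal Python sketch of Algorithm 1 is given below, using linear regression as the per-variable model (the paper does not name the regressor; scikit-learn's IterativeImputer implements the same chained-equations idea):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def mice_impute(X, n_iter=10, seed=0):
    """MICE-style imputation of a 2-D float array X with np.nan as missing."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    missing = np.isnan(X)
    # Initial fill: random draws from the observed values of each column.
    for j in range(X.shape[1]):
        obs = X[~missing[:, j], j]
        X[missing[:, j], j] = rng.choice(obs, missing[:, j].sum())
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            if not missing[:, j].any():
                continue
            others = np.delete(X, j, axis=1)  # the "X" of Algorithm 1
            model = LinearRegression().fit(others[~missing[:, j]],
                                           X[~missing[:, j], j])
            # Option (1) of Algorithm 1: replace with model predictions.
            X[missing[:, j], j] = model.predict(others[missing[:, j]])
    return X
```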
It must still be borne in mind that the imputed values only approximate the behavior of the periods with invalid records [14]; we cannot guarantee that the reconstructed behavior matches what actually occurred in those periods of time [15,29].

4. Network Training

The LSTM network that was chosen comprises two main stages. The first stage is the behavioral modeling stage, which consists of three layers. The second stage is the exceedance detection stage, which also consists of three layers. Figure 4 shows the specific number and type of neurons present in each layer. It is important to note that, in Layer 4, the number of neurons is denoted by “x” because it varies depending on the desired time of anticipation for generating the detection. Specifically, it is equivalent to having one neuron per previous hour. For instance, if a 24 h anticipation is required, then 24 neurons are necessary. On the other hand, if the objective is to predict with a 72 h anticipation, then 72 neurons are needed [14].
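For reference, the layer stack of Figure 4, with the sizes from the "Initial" row of Table 4, could be written as follows in Keras. The framework, activations, and loss are assumptions (the paper does not state them), and the two stages are collapsed into one stack purely for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_network(x_neurons=24, window=24):
    """x_neurons: width of Layer 4, one neuron per hour of anticipation (24 or 72)."""
    model = keras.Sequential([
        layers.Input(shape=(window, 1)),             # one pollutant, hourly window
        layers.LSTM(50, return_sequences=True),      # Layer 1 (modeling stage)
        layers.LSTM(256),                            # Layer 2
        layers.Dense(1),                             # Layer 3
        layers.Dense(x_neurons, activation="relu"),  # Layer 4 (detection stage)
        layers.Dense(10, activation="relu"),         # Layer 5
        layers.Dense(1, activation="sigmoid"),       # Layer 6: exceedance flag
    ])
    model.compile(optimizer="adam", loss="mse")      # assumed training setup
    return model
```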
In the modeling phase of the applied network architecture, the entire dataset was utilized, with 80% of the data designated for training and the remaining 20% reserved for validation. To ascertain the network's ability to identify behavioral patterns, the validation was conducted using the correlation coefficient (CC) [30] and the root-mean-squared error (RMSE) [30,31]. The mathematical expressions for these metrics are provided below:
$$CC = \frac{\dfrac{\sum_{i=1}^{n}(M_i-\bar{M})(R_i-\bar{R})}{n-1}}{\sqrt{\dfrac{\sum_{i=1}^{n}(M_i-\bar{M})^2}{n-1}}\;\sqrt{\dfrac{\sum_{i=1}^{n}(R_i-\bar{R})^2}{n-1}}} \tag{7}$$
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(M_i-R_i)^2} \tag{8}$$
The variables are described below:
$R_i$: real data.
$\bar{R}$: average of the real data.
$M_i$: data modeled by the LSTM network.
$\bar{M}$: average of the data modeled by the LSTM network.
$n$: total number of data points.
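Both metrics translate directly into code; a short NumPy rendering of Equations (7) and (8):

```python
import numpy as np

def cc(m, r):
    """Correlation coefficient, Equation (7)."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    n = len(m)
    cov = np.sum((m - m.mean()) * (r - r.mean())) / (n - 1)
    return cov / (np.sqrt(np.sum((m - m.mean()) ** 2) / (n - 1))
                  * np.sqrt(np.sum((r - r.mean()) ** 2) / (n - 1)))

def rmse(m, r):
    """Root-mean-squared error, Equation (8)."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    return np.sqrt(np.mean((m - r) ** 2))
```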
For the detection of exceedances, the first step is to define what an exceedance is; for this, we used the Mexican standards NOM-025-SSA1-2021, NOM-021-SSA1-2020, and NOM-022-SSA1-2019, which indicate the maximum permitted value and the period over which it is evaluated (see Table 3) [32].
Since the values to be evaluated are averages of the 8 h or 24 h records, a single daily value is obtained; with it, the days with and without exceedances can be labeled and stored together with their previous behavior, either 24 or 72 h in advance.
Once the days with and without exceedances have been labeled, they are placed in chronological order, with 80% of the data selected to train the exceedance detection network and the remaining 20% used to evaluate it.
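A sketch of this labeling step, using the limits of Table 3, might look as follows; the hourly series (a pandas Series indexed by timestamp) and the treatment of the 8 h window are assumptions:

```python
import pandas as pd

# Maximum permitted values and averaging windows from Table 3.
LIMITS = {"PM10": (70.0, "24h"), "PM2.5": (41.0, "24h"),
          "SO2": (104.8, "24h"), "CO": (10_000.0, "8h")}

def label_exceedances(hourly: pd.Series, pollutant: str) -> pd.Series:
    """Return one label per day: 1 = exceedance, 0 = no exceedance."""
    limit, window = LIMITS[pollutant]
    if window == "24h":
        daily = hourly.resample("D").mean()                   # 24 h average
    else:
        daily = hourly.rolling(8).mean().resample("D").max()  # worst 8 h average
    return (daily > limit).astype(int)
```

The chronological 80/20 split then reduces to slicing the labeled series, e.g., labels[:int(0.8 * len(labels))] for training and the remainder for evaluation.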

5. Reduction of LSTM Network Parameters

The utilization of neural networks in FPGAs involves employing smaller networks with a complete connection configuration [27,28]. When implementing LSTM networks, individual neurons are constructed as separate blocks [33]. However, creating a complete network within an FPGA becomes challenging due to the constrained resources available on FPGA boards.
Considering the specific architecture of the implemented network, which encompasses 14,578,523 internal parameters, significant memory capacity is necessary solely for storing the weights and biases of each neuron. This calculation does not even take into account the memory space required for storing the intermediate results generated at each stage until the final output is obtained.
Table 4 shows the initial architecture of the implemented LSTM recurrent network and the characteristics of the final architecture. The reduction was obtained by following the work of Bhattacharya et al. [34]: the number of neurons in the initial layer is set equal to the length of the vector of variables to be estimated, in this case 24, the number of recorded hours in a day. For the hidden layers, multiples of the initial-layer size (again 24) were used; if this does not reproduce results similar to those of the initial architecture, the number of neurons is progressively increased or another hidden layer is added.
The prediction stage, being a simple architecture, is not modified except in its initial layer, denoted with an x, since its number of neurons depends on the prediction horizon, 24 or 72, referring to the hours of anticipation.

6. Implementation of the LSTM Network on an FPGA

Since the neural network was previously trained, the weight matrices corresponding to each layer were extracted to identify their maximum and minimum values and to define their conversion to a binary representation. Two fixed-point options were implemented, 12 bit and 16 bit, in 5.7 and 5.11 formats, respectively: the most-significant bit as the sign bit, four integer bits, and 7 or 11 fractional bits, as shown in Table 5.
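The conversion itself reduces to scaling by 2^7 or 2^11, rounding, and saturating to the four integer bits; a sketch, not the authors' tooling:

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Quantize to two's-complement fixed point: 1 sign + 4 integer + frac_bits bits."""
    scale = 1 << frac_bits                # 2**7 = 128 or 2**11 = 2048
    lo, hi = -16 * scale, 16 * scale - 1  # saturation limits of the 5.x format
    return int(np.clip(round(x * scale), lo, hi))

def to_float(q, frac_bits):
    return q / float(1 << frac_bits)

w = 0.2419                                # e.g., a trained weight
print(to_float(to_fixed(w, 7), 7))        # 0.2421875      (12 bit, 5.7 format)
print(to_float(to_fixed(w, 11), 11))      # 0.24169921875  (16 bit, 5.11 format)
```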
The subsequent phase involved the organization of LSTM neuron operations, aiming to exert control over the selection of corresponding weights and the sequential execution of operations while retaining the results for subsequent iterations. Regarding the internal activation functions of the LSTM neuron, an approximation method was applied. Specifically, the sigmoid function was approximated using a polynomial function (Equation (9)), whereas the hyperbolic tangent function was approximated using a piecewise function (Equation (10)). Figure 5 shows the representation of the digital design resulting from the development of an LSTM neuron.
$$y = -0.0127x^3 + 0.2419x + 0.4999 \tag{9}$$
$$y = \begin{cases} -0.97 & x \le -1.5 \\ -0.1045x^3 + 0.8644x & -1.5 < x < 1.5 \\ 0.98 & x \ge 1.5 \end{cases} \tag{10}$$
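In software, the deviation of Equations (9) and (10) from the exact activations can be checked directly; a small sketch:

```python
import numpy as np

def sigmoid_approx(x):
    """Cubic approximation of the logistic sigmoid, Equation (9)."""
    return -0.0127 * x ** 3 + 0.2419 * x + 0.4999

def tanh_approx(x):
    """Piecewise approximation of tanh, Equation (10)."""
    x = np.asarray(x, float)
    inner = -0.1045 * x ** 3 + 0.8644 * x
    return np.where(x <= -1.5, -0.97, np.where(x >= 1.5, 0.98, inner))

xs = np.linspace(-1.5, 1.5, 301)
print("sigmoid max err:", np.max(np.abs(sigmoid_approx(xs) - 1 / (1 + np.exp(-xs)))))
print("tanh max err:", np.max(np.abs(tanh_approx(xs) - np.tanh(xs))))
```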
Figure 6 shows the basic structure of the internal operations of the LSTM neuron, based on a multiply–accumulate (MAC) unit. The same structure was used to implement the dense neurons, which represent perceptrons, as expressed in Equation (11).
Figure 6. Base operational structure.
$$y = \sum_{i=1}^{n}(x_i W_i) + b \tag{11}$$
The variables are described below:
$x_i$: input data.
$W_i$: weights.
$b$: bias.
Multiplication doubles the word length with respect to the input operands. This study used two fixed-point configurations, 12 bit and 16 bit, whose products are therefore 24 bit and 32 bit, respectively. To maintain the original word length, only bit positions 7 to 18 and 11 to 26 of the respective products were retained. The implemented multiplication and addition structure is illustrated in Figure 7. The addition operation, by contrast, preserves the word length at both input and output, as it is a direct operation.
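The MAC of Equation (11) combined with this bit slicing can be sketched as follows. The example uses the 5.7 format: each 12 bit by 12 bit product carries 14 fractional bits, so discarding the 7 low-order bits (the slice starting at bit position 7) realigns the result with the original format; in hardware, the high-order bits above position 18 are likewise dropped:

```python
def mac_fixed(x_q, w_q, b_q, frac_bits=7):
    """Fixed-point multiply-accumulate of Equation (11).

    x_q, w_q: lists of two's-complement fixed-point integers (5.7 format);
    b_q: bias in the same format. Products are kept at full precision and
    the low frac_bits are sliced off at the end, as in Figure 7.
    """
    acc = int(b_q) << frac_bits        # align the bias with the double-width products
    for xq, wq in zip(x_q, w_q):
        acc += int(xq) * int(wq)       # 12 x 12 bit -> 24 bit product
    return acc >> frac_bits            # keep the slice from bit 7 upward

# Example: y = 0.5 * 1.0 + 0.25 * 2.0 + 0.125 = 1.125 in 5.7 format.
x = [64, 32]                           # 0.5, 0.25  (value * 128)
w = [128, 256]                         # 1.0, 2.0
b = 16                                 # 0.125
print(mac_fixed(x, w, b) / 128)        # -> 1.125
```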
With the weights defined during network training, the corresponding matrices were transferred from a computer into RAM blocks on the FPGA through the RS232 communication protocol. One counter synchronizes the memory blocks with the operations to be performed, and a second counter defines the layer being operated on, until the evaluation of the whole network is complete.
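On the host side, such a transfer might look like the following pyserial sketch; the baud rate, framing, byte order, and port name are all assumptions, as the paper does not detail the serial protocol:

```python
import serial  # pyserial

def send_weights(port, weights_q, word_bytes=2):
    """Stream quantized weights (two's-complement integers) over RS232.

    word_bytes = 2 covers both the 12 bit and 16 bit formats if the
    12 bit words are sign-extended to 16 bits before sending (assumed).
    """
    with serial.Serial(port, baudrate=115200, timeout=1) as link:
        for q in weights_q:
            link.write(int(q).to_bytes(word_bytes, "big", signed=True))

# send_weights("/dev/ttyUSB0", layer1_weights)  # hypothetical port and variable
```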
The final configuration is depicted in Figure 8, illustrating the RAM blocks in which the weights and bias values are stored, as well as the interconnections between the blocks representing each network layer. Internally, these blocks are built from components that implement the LSTM neuron of Figure 5.

7. Results and Discussions

To verify that the parameter reduction from removing LSTM neurons in the second network architecture preserves similar RMSE and CC behavior, we performed evaluations with identical data sections in ten independent tests. The results in Table 6 show that the modeling proposed by the networks remains stable, evaluated neuron by neuron from the initial to the final layer, even when evaluated on the computer.
Once the correct operation of the reduced network had been demonstrated, the next step was to compare the percentage error between the initial data and the data supplied to the FPGA in 12 bit and 16 bit formats, evaluating both the accumulated neuron-to-neuron error and the final error.
Table 7 shows that the minimum accumulated error in the final layer corresponds to the 16 bit format; likewise, the initial-layer error oscillates between 9% and 15%, as seen in the percentage errors of the first and twelfth neurons and the accumulated error of the first layer. By contrast, with the 12 bit format the accumulated errors increase, growing up to 25% in the final layer and producing deficient modeling.
Figure 9 shows the repeatability of the accumulated errors in the final layer of the modeling stage over 20 tests evaluating different data represented in 12 and 16 bit. The accumulated error with 16 bit remains within a range of 11% to 13%, while with the 12 bit representation the error ranges between 25% and 27%.
Figure 10a,b show the behavior of the real data (gray), the data modeled by the original LSTM network (blue), and the data modeled by the FPGA architecture (orange) during the first 72 h (3 days) of January 2022 and 2018, respectively, for PM10. The LSTM network architectures (both original and reduced) behave similarly, with better results in 2018, as shown in Figure 10c, whereas in 2022 the networks failed to model the behavior until after the first 56 h. This may be due to the sharp increases recorded and to the behavior of the previous year; 2018, by contrast, is modeled better because its behavior is more stable.
Figure 10c shows an enlargement of the first 12 h of January 2018, highlighting the difference in modeling between the two LSTM neural network architectures.
Figure 11a,b show the behavior of the real data (gray), the data modeled by the original LSTM network (blue), and the data modeled by the FPGA architecture (orange) during the first 72 h (3 days) of January 2022 and 2018, respectively, for PM2.5. As in the previous figure, the LSTM network architectures (both original and reduced) behave similarly. In modeling high peaks (in the presence of outliers), a notable issue arises from the dissimilar behavior exhibited by the two architectures. Unlike PM10, which displays abrupt changes, PM2.5 demonstrates a more stable behavior, setting the two apart.
Figure 11c shows an enlargement of the first 12 h of January 2018, highlighting the difference in modeling between the two LSTM neural network architectures.
Figure 12a,b show the behavior of carbon monoxide (CO), which is more chaotic than that of PM10 and PM2.5; shown are the real data (gray), the data modeled by the original LSTM network (blue), and the data modeled by the FPGA architecture (orange) during the first 72 h (3 days) of January 2022 and 2018, respectively. Because this behavior was present during the training of the different architectures, it is emulated better, although there are some points where the networks register values higher or lower than the real ones.
Figure 12c shows an enlargement of the first 12 h of January 2018, highlighting the difference in modeling between the two LSTM neural network architectures.
Figure 13a,b show the behavior of ozone (O3), which is simpler than that of PM10 and PM2.5; shown are the real data (gray), the data modeled by the original LSTM network (blue), and the data modeled by the FPGA architecture (orange) during the first 72 h (3 days) of January 2022 and 2018, respectively. Because this behavior was present during the training of the different architectures, it is emulated better, although there are some points where the networks register values higher or lower than the real ones.
Figure 13c shows the behavior in the first 12 h of January 2018, highlighting the difference in modeling between the two LSTM neural network architectures.
Finally, Figure 14 shows the evaluation of the prediction stage, using four pollutants (PM10, PM2.5, O3, and CO) at two horizons (24 and 72 h in advance). Regarding the robustness of the network architecture, the 24 h prediction results were more compact for PM10 (85–89%), PM2.5 (81–84%), and CO (77–82%); O3 had the widest spread, from 81% to 92%. There were atypical points, but these did not fall below 73% in the case of CO.
On the other hand, when an exceedance is predicted 72 h in advance, the valid percentage ranges widen: 80–90% for PM10, 75–83% for PM2.5, 80–90% for O3, and 65–80% for CO, with an atypical drop to 50%. Of the four criteria pollutants evaluated, CO presented the greatest difficulty for the detection of exceedances.

8. Conclusions

When implementing a neural network, in this case an LSTM architecture previously trained on a computer, it is necessary to know the number of internal parameters, since the available memory capacity of an FPGA board is a limiting factor. The process of reducing the network is therefore methodological, requiring stage-by-stage evaluation to verify that the accumulated error does not tend to grow.
The limited memory of an FPGA board is a drawback, compensated by the ease of connecting different peripheral devices, in this case environmental sensors. The adaptability of the FPGA board makes it possible to determine the logic requirements needed to implement the binary values of the weight matrices of the network architecture.
When converting decimal values to binary, it is imperative to carefully consider both the desired accuracy (resolution) and the underlying integer representation. Neglecting these aspects in internal operations can lead to a significant loss of information, in addition to the critical need to preserve a sign bit.
The inherent characteristics of the network enabled a consistent error level to be maintained at each stage. When comparing this operation with the original architecture of the LSTM neural network, the error remained within a range of about 11% during the modeling stage. Furthermore, in the prediction stage, the accumulated errors did not differ significantly between the two evaluated architectures.
One issue that requires attention in future endeavors is identifying the precise moment when a contaminant surpasses the acceptable limit and enhancing the prediction accuracy for longer time frames.

Author Contributions

Conceptualization, M.A.A.-F.; Methodology, J.A.R.-M.; Software, J.M.R.-A.; Validation, J.d.J.R.-M. and J.M.R.-A.; Investigation, J.A.R.-M. and M.A.A.-F.; Resources, J.d.J.R.-M.; Data curation, M.A.A.-F.; Writing—original draft, J.A.R.-M.; Visualization, J.M.R.-A.; Supervision, J.d.J.R.-M. Every author has contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xie, X.; Semanjski, I.; Gautama, S.; Tsiligianni, E.; Deligiannis, N.; Rajan, R.T.; Pasveer, F.; Philips, W. A review of urban air pollution monitoring and exposure assessment methods. ISPRS Int. J. Geo-Inf. 2017, 6, 389. [Google Scholar] [CrossRef]
  2. Idrees, Z.; Zheng, L. Low cost air pollution monitoring systems: A review of protocols and enabling technologies. J. Ind. Inf. Integr. 2020, 17, 100123. [Google Scholar] [CrossRef]
  3. Saxena, P.; Sonwani, S. Criteria Air Pollutants: Chemistry, Sources and Sinks. In Criteria Air Pollutants and Their Impact on Environmental Health; Springer: Berlin/Heidelberg, Germany, 2019; pp. 7–48. [Google Scholar]
  4. Saxena, P.; Sonwani, S. Primary criteria air pollutants: Environmental health effects. In Criteria Air Pollutants and Their Impact on Environmental Health; Springer: Berlin/Heidelberg, Germany, 2019; pp. 49–82. [Google Scholar]
  5. SEDENA. Bases de Datos—Red Automática de Monitoreo Atmosférico (RAMA). 2023. Available online: http://www.aire.cdmx.gob.mx/default.php?opc=%27aKBh%27 (accessed on 8 August 2023).
  6. Al Mamun, M.A.; Yuce, M.R. Sensors and systems for wearable environmental monitoring toward IoT-enabled applications: A review. IEEE Sensors J. 2019, 19, 7771–7788. [Google Scholar] [CrossRef]
  7. Li, X.; Peng, L.; Yao, X.; Cui, S.; Hu, Y.; You, C.; Chi, T. Long short-term memory neural network for air pollutant concentration predictions: Method development and evaluation. Environ. Pollut. 2017, 231, 997–1004. [Google Scholar] [CrossRef]
  8. Barrero-González, D.; Ramírez-Montañez, J.A.; Aceves-Fernández, M.A.; Ramos-Arreguín, J.M. Capability of an Elman Recurrent Neural Network for predicting the non-linear behavior of airborne pollutants. Earth Sci. Inform. 2022, 4, 125–135. [Google Scholar] [CrossRef]
  9. Almosova, A.; Andresen, N. Nonlinear inflation forecasting with recurrent neural networks. J. Forecast. 2023, 42, 240–259. [Google Scholar] [CrossRef]
  10. Kriegeskorte, N.; Golan, T. Neural network models and deep learning. Curr. Biol. 2019, 29, R231–R236. [Google Scholar] [CrossRef]
  11. Kuri-Monge, G.J.; Aceves-Fernández, M.A.; Ramírez-Montañez, J.A.; Pedraza-Ortega, J.C. Capability of a recurrent deep neural network optimized by swarm intelligence techniques to predict exceedances of airborne pollution (PMx) in largely populated areas. In Proceedings of the 2021 International Conference on Information Technology (ICIT), London, UK, 14–16 June 2021; pp. 61–68. [Google Scholar]
  12. Chen, Y.; Song, L.; Liu, Y.; Yang, L.; Li, D. A review of the artificial neural network models for water quality prediction. Appl. Sci. 2020, 10, 5776. [Google Scholar] [CrossRef]
  13. Weerakody, P.B.; Wong, K.W.; Wang, G.; Ela, W. A review of irregular time series data handling with gated recurrent neural networks. Neurocomputing 2021, 441, 161–178. [Google Scholar] [CrossRef]
  14. Montañez, J.A.R.; Fernandez, M.A.A.; Arriaga, S.T.; Arreguin, J.M.R.; Calderon, G.A.S. Evaluation of a recurrent neural network LSTM for the detection of exceedances of particles PM10. In Proceedings of the 2019 16th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Berlin, Germany, 3–6 August 2019; pp. 1–6. [Google Scholar]
  15. Ramírez-Montañez, J.A.; Aceves-Fernández, M.A.; Pedraza-Ortega, J.C.; Gorrostieta-Hurtado, E.; Sotomayor-Olmedo, A. Airborne Particulate Matter Modeling: A Comparison of Three Methods Using a Topology Performance Approach. Appl. Sci. 2021, 12, 256. [Google Scholar] [CrossRef]
  16. Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. 2021, 31, 685–695. [Google Scholar] [CrossRef]
  17. Davidson, C.I.; Phalen, R.F.; Solomon, P.A. Airborne particulate matter and human health: A review. Aerosol Sci. Technol. 2005, 39, 737–749. [Google Scholar] [CrossRef]
  18. Concepción Jiménez, M.D.L. Definición y Validación de Una Metodología Para Correlacionar Concentración de Contaminantes Atmosféricos e Ingresos Hospitalarios. 2015. Available online: https://idus.us.es/handle/11441/40688 (accessed on 8 August 2023).
  19. Molina, L.T.; Velasco, E.; Retama, A.; Zavala, M. Experience from integrated air quality management in the Mexico City Metropolitan Area and Singapore. Atmosphere 2019, 10, 512. [Google Scholar] [CrossRef]
  20. Salcido, A.; Murillo, A.T.C.; Flores, G.A.T.; Flores, N.H.; Sierra, S.C.; Flores, M.A.M.; Aguilar, A.L.C.; Olivares, H.A.S.; Merino, A.I.S.; Gaspar, J.A. Calidad del aire y monitoreo atmosférico. Rev. Digit. Univ. 2019, 20, 3. [Google Scholar]
  21. Bianchi, F.M.; Maiorino, E.; Kampffmeyer, M.; Rizzi, A.; Jenssen, R. Recurrent Neural Network Architectures. In Recurrent Neural Networks for Short-Term Load Forecasting; Springer: Berlin/Heidelberg, Germany, 2017; pp. 23–29. [Google Scholar] [CrossRef]
  22. Hua, Y.; Zhao, Z.; Li, R.; Chen, X.; Liu, Z.; Zhang, H. Deep Learning with Long Short-Term Memory for Time Series Prediction. IEEE Commun. Mag. 2019, 57, 114–119. [Google Scholar] [CrossRef]
  23. Maass, W.; Joshi, P.; Sontag, E. Computational aspects of feedback in neural circuits. PLoS Comput. Biol. 2007, 3, e165. [Google Scholar] [CrossRef]
  24. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306. [Google Scholar] [CrossRef]
  25. Kuon, I.; Tessier, R.; Rose, J. FPGA architecture: Survey and challenges. Found. Trends Electron. Des. Autom. 2008, 2, 135–253. [Google Scholar] [CrossRef]
  26. Farooq, U.; Marrakchi, Z.; Mehrez, H. FPGA architectures: An overview. Tree-Based Heterog. FPGA Archit. 2012, 5, 7–48. [Google Scholar]
  27. Li, Z.; Huang, Y.J.; Lin, W.C. FPGA implementation of neuron block for artificial neural network. In Proceedings of the 2017 International Conference on Electron Devices and Solid-State Circuits (EDSSC), Beijing, China, 10–17 November 2017; pp. 1–2. [Google Scholar]
  28. Ferreira, J.C.; Fonseca, J. An FPGA implementation of a long short-term memory neural network. In Proceedings of the 2016 International Conference on ReConFigurable Computing and FPGAs (ReConFig), Basel, Switzerland, 19–25 April 2016; pp. 1–8. [Google Scholar]
  29. Javadi, S.; Bahrampour, A.; Saber, M.M.; Garrusi, B.; Baneshi, M.R. Evaluation of four multiple imputation methods for handling missing binary outcome data in the presence of an interaction between a dummy and a continuous variable. J. Probab. Stat. 2021, 2021, 1–14. [Google Scholar] [CrossRef]
  30. Runge, J.; Zmeureanu, R. Forecasting energy use in buildings using artificial neural networks: A review. Energies 2019, 12, 3254. [Google Scholar] [CrossRef]
  31. Cabaneros, S.M.; Calautit, J.K.; Hughes, B.R. A review of artificial neural network models for ambient air pollution prediction. Environ. Model. Softw. 2019, 119, 285–304. [Google Scholar] [CrossRef]
  32. Diario Oficial de la Federación. Norma Oficial Mexicana NOM-087-ECOL-SSA1-2002. Available online: https://www.cndh.org.mx/DocTR/2016/JUR/A70/01/JUR-20170331-NOR14.pdf (accessed on 8 August 2023).
  33. Chang, A.X.M.; Martini, B.; Culurciello, E. Recurrent neural networks hardware implementation on FPGA. arXiv 2015, arXiv:1511.05552. [Google Scholar]
  34. Bhattacharya, K.; Hosseini, B.; Kovachki, N.B.; Stuart, A.M. Model reduction and neural networks for parametric PDEs. SMAI J. Comput. Math. 2021, 7, 121–157. [Google Scholar] [CrossRef]
Figure 1. Comparison of recorded behavior for different criteria pollutants. (a) PM2.5, (b) PM10, and (c) NO2.
Figure 3. Methodology.
Figure 4. Stages for proposed LSTM structure.
Figure 5. LSTM neuron in VHDL.
Figure 7. LSTM network structures.
Figure 8. Final structure.
Figure 9. Percentage error.
Figure 10. Modeling results.
Figure 11. Modeling results.
Figure 12. Modeling results.
Figure 13. Modeling results.
Figure 14. Prediction results.
Table 1. Criteria pollutant characteristics.

Sulfur dioxide (SO2). Characteristics: a colorless gas with a pungent odor, generated by fossil-fuel combustion and the smelting of sulfur-containing ores. Damage to health: irritating to the respiratory tract; in high concentrations, it may cause bronchitis and tracheitis.
Nitrogen dioxide (NO2). Characteristics: the main sources of anthropogenic NO2 emissions are combustion processes (heating, electricity generation, and vehicle and ship engines). Damage to health: irritating to the respiratory tract; in high concentrations, it may cause bronchitis and pneumonia.
Carbon monoxide (CO). Characteristics: produced by the incomplete combustion of coal; both human activities and natural sources generate it. Damage to health: in high concentrations, it blocks oxygen transport to cells and causes dizziness, headache, unconsciousness, and even death.
Ozone (O3). Characteristics: formed by the reaction with sunlight of pollutants such as nitrogen oxides (NOx) from vehicle or industrial emissions and volatile organic compounds emitted by vehicles, solvents, and industry. Damage to health: irritating to the respiratory tract; in high concentrations, it reduces lung function, worsens asthma, and aggravates chronic lung diseases.
Particulate matter (PM10). Characteristics: solid or liquid particles of dust, ash, soot, metallic particles, cement, or pollen dispersed in the atmosphere, with diameters less than 10 μm. Damage to health: aggravates asthma and cardiovascular and respiratory diseases; prolonged exposure may increase the risk of mortality.
Particulate matter (PM2.5). Characteristics: being less than 2.5 μm in size, it remains suspended in the atmosphere for long periods, travels long distances, and penetrates the interiors of homes and offices, exposing the population for longer periods. Damage to health: aggravates asthma, reduces lung function, and is associated with the development of diabetes.
Table 2. Invalid data percentages.

Pollutant | MER | CUA | PED | BJU
PM10      |  9% | 10% | 38% | 12%
PM2.5     | 11% | 23% | 30% | 13%
CO        | 11% | 23% | 30% | 13%
NO2       | 12% |  8% | 24% | 43%
Table 3. Data representative of the Mexican basic standards.

Norm              | Pollutant | Maximum Value (μg/m³) | Averaging Time (h)
NOM-025-SSA1-2021 | PM10      | 70                    | 24
NOM-025-SSA1-2021 | PM2.5     | 41                    | 24
NOM-022-SSA1-2019 | SO2       | 104.8                 | 24
NOM-021-SSA1-2020 | CO        | 10,000                | 8
Table 4. LSTM neural network architecture.

        | Layer 1 | Layer 2 | Layer 3 | Layer 4 | Layer 5 | Layer 6
        | LSTM    | LSTM    | Dense   | Dense   | Dense   | Dense
Initial | 50      | 256     | 1       | x       | 10      | 1
Final   | 24      | 24      | 1       | x       | 10      | 1
Table 5. Distribution of the fixed-point 12 and 16 bit words.

Sign  | Integer Part | Decimal Part
1 bit | 4 bit        | 7 bit
0     | 0000         | .0000000
1 bit | 4 bit        | 11 bit
0     | 0000         | .00000000000
Table 6. Comparison of the results obtained.

Network | Internal Parameters | RMSE   | CC   | Prediction 24 h | Prediction 72 h
Initial | 325,286             | 16.145 | 0.97 | 90%             | 85%
Final   | 7486                | 18.251 | 0.96 | 89%             | 84%
Table 7. Error comparison.

            |                | Final Errors
No. of Bits | Initial Values | Neuron 1 | Neuron 12 | Layer 1 | Final Layer
12          | 0.0073%        | 23.17%   | 24.54%    | 15.69%  | 25.75%
16          | 0.0046%        | 15.45%   | 9.27%     | 10.31%  | 11.91%
