Article

Forest Fire Smoke Detection Research Based on the Random Forest Algorithm and Sub-Pixel Mapping Method

College of Forestry, Central South University of Forestry and Technology, Changsha 410004, China
*
Author to whom correspondence should be addressed.
Forests 2023, 14(3), 485; https://doi.org/10.3390/f14030485
Submission received: 23 December 2022 / Revised: 23 February 2023 / Accepted: 24 February 2023 / Published: 28 February 2023
(This article belongs to the Special Issue Forest Fires Prediction and Detection)

Abstract:
In order to locate forest fire smoke more precisely and expand existing forest fire monitoring methods, this research applied a sub-pixel positioning concept to Himawari-8 data for smoke detection. Himawari-8 images of forest fire smoke in Xichang and Linzhi were selected. An improved sub-pixel mapping method based on random forest results was proposed to identify and position smoke at the sub-pixel level, restoring more spatial details of the smoke in the final results. Continuous monitoring of the smoke revealed its dynamic changes. The accuracy of smoke detection was evaluated using a confusion matrix. With the improved sub-pixel mapping method, the overall accuracies were 87.95% and 86.32%. Compared with the raw images, the smoke contours in the improved sub-pixel mapping results were clearer and smoother. The improved method outperforms traditional classification methods in delineating the smoke extent and, in particular, breaks through the limitations of the pixel scale by realizing sub-pixel positioning. Compared with the results of the classic PSA method, there were fewer “spots” and “holes” after correction. The final results of this study show higher smoke discrimination accuracies, providing the basis for another means of forest fire monitoring.

1. Introduction

The rapid development of satellite remote sensing technology has brought broad research prospects to many areas. Meteorological satellites play a vital role in forest fire monitoring. Contemporary satellite forest fire monitoring mainly uses mid-infrared bands. When forest fires occur, burning fuels constantly emit infrared radiation. As soon as satellite sensors receive this radiation, the brightness temperature values of the fire pixels change sharply, forming a strong contrast with other pixels. However, in the early stages of a forest fire, the weak infrared radiation from burning areas cannot be detected by satellites. Additionally, the infrared radiation can be blocked by dense forest canopy. In such cases, the mid-infrared band of a satellite cannot receive enough infrared radiation from the ground, leading to untimely or missed forest fire detection.
In order to solve the above problems, smoke should be another key study object. Smoke is one of the key features of forest fires, produced throughout the entire burning process. Smoke appears before flames; it passes through the dense forest canopy and rises rapidly into the sky. The more intense the combustion, the stronger the smoke. Smoke can be identified by satellite sensors: compositing visible and near-infrared bands can detect smoke successfully. Combining smoke detection with fire point detection can significantly reduce the omission and delay of forest fire detection.
Smoke contains many toxic and harmful gases, which endanger biological safety and the stability of ecosystems [1,2]. Smoke usually lingers in the air for a long time, and under certain atmospheric circulation conditions it can even reach the stratosphere [3]. Timely smoke detection can provide indicative signals for forest fire prevention.
Research on satellite smoke detection began in the 1970s. The main methods are the visual discrimination method [4], the multi-channel threshold method [5], the multi-image time difference method [6], the aerosol inversion method [7], and the pattern recognition method. The visual discrimination method [4] is widely used to determine the approximate position of forest fire smoke by presenting true- or false-color images composited from different band sequences. Xie et al. [5] proposed a multi-channel threshold method based on MODIS (Moderate Resolution Imaging Spectroradiometer) data, using prior knowledge of the spectral radiation characteristics of the bands and regionally optimal thresholds to gradually exclude non-smoke pixels; the smoke area is obtained once all other objects have been removed. However, the threshold settings cannot be unified because of differences between sensors, and small smoke plumes are easily treated as noise pixels. Chrysoulakis et al. [6] proposed a multi-image time difference method that improved smoke detection based on Advanced Very High Resolution Radiometer (AVHRR) data with multi-period and multi-spectral characteristics. After masking water and clouds, anomalies in the NDVI (normalized difference vegetation index) and near-infrared band reflectance were used to detect the plume center; the whole smoke extent was then solved by extending the plume center with temporal and spatial information. However, this method requires cloudless images acquired on a sunny day. The aerosol inversion method [8,9,10] treats smoke as an aerosol and compares the optical thickness and particle size parameters of smoke with those of other aerosol categories, taking into account the multiple scattering and enhanced absorption of smoke in the blue band. However, the applicability of this method differs between regions, and there is no fixed standard for parameter setting.
The pattern recognition method uses the spectral characteristic differences between smoke and other typical ground objects in classification, so as to identify the smoke pixels. For example, Li et al. [11] studied the fire smoke detection algorithm based on a combination of K-means clustering and the Fisher classifier. Li et al. [12] also proposed a forest fire smoke identification model based on the MODIS sensor and BP neural network and introduced the concept of seasonal applicability into forest fire smoke detection. Rui Ba [13] proposed a wildfire smoke detection model based on the MODIS sensor and a convolution neural network. The model was continuously optimized by training a large number of smoke sample datasets (USTC_SmokeRS). Machine learning algorithms are usually designed to fit the complicated relationship between input and output, in order to find the optimal function to classify smoke [14]. These methods require sufficient smoke sample data.
The current methods of smoke detection operate at the pixel level. In Himawari-8 images, mixed pixels are common because of the 2 km pixel scale. In forest fire smoke detection scenarios, the atmospheric apparent reflectance of a low-density smoke pixel also includes contributions from other ground objects, because light can pass through the smoke and be received by the satellite sensors. Pixel-level classification methods assign each pixel to a single class, which tends to cause a loss of detailed information. In order to solve this problem, this paper introduces the sub-pixel mapping idea into smoke locating, aiming to determine the accurate location of smoke at the sub-pixel level and provide effective information for forest fire detection.
The idea of sub-pixel mapping was first proposed by Atkinson [15] in 1997. Each pixel in the image is divided into several sub-pixels according to a scale factor; the endmembers are then located according to the spatial correlation theory of ground objects: the closer two ground objects are, the greater their similarity [16]. The classical sub-pixel mapping algorithms include the pixel swapping algorithm (PSA) proposed by Atkinson [17] and the sub-pixel/pixel spatial attraction model (SPSAM) proposed by Mertens [18]. The former considers the spatial attraction between sub-pixels and adjacent sub-pixels within mixed pixels; the latter concentrates on the spatial attraction between sub-pixels and adjacent mixed pixels. Kasetkasem et al. [19] introduced Markov random field theory, which considers both the spectral information and the spatial correlation of ground objects; it solves the conditional probability corresponding to each sub-pixel and reduces the constraint of the abundance values on the sub-pixel mapping results. Tolpekin [20,21] researched the energy function with and without prior information and applied the Markov model to analyze the uncertainty of sub-pixel mapping. Applications of sub-pixel mapping include water extraction, building extraction, forest land extraction, etc. Ling et al. [22] realized building boundary extraction at the sub-pixel scale using the shape and direction information of buildings on the basis of simulated data and real IKONOS and AVIRIS images. Li et al. [23] completed flood thematic maps of the Yangtze River Basin in China and the Murray–Darling Basin in Australia using the concept of sub-pixel mapping, based on a Bayesian-standardized BPNN and a particle swarm optimization algorithm without additional auxiliary information. Li et al. [24] used real Landsat-5 TM, Landsat-7 ETM+, and MODIS images to detect changes in the tropical rain forests of the Amazon Basin, Mato Grosso, Brazil, caused by excessive logging. Yin et al.
[25] realized green space extraction in the urban–rural boundary area of the Haidian District in Beijing at the sub-pixel scale using GF-2 images. To date, there has been a research gap in using the sub-pixel mapping method in forest fire smoke detection, which is the research focus of this paper.
In this study, smoke detection based on the improved sub-pixel mapping method and random forest was performed. The main body of this paper consists of five sections. The Introduction covers the background, recent studies, and the significance of this paper. The Materials and Methods present the experimental data, introduce the main methods, and show the method flow chart. In the Results section, the results of each procedure are displayed in the form of maps and tables, along with some comparison results. The Discussion analyzes the results and summarizes the limitations of this research. The Conclusion summarizes the entire research and describes its essence and prospects.

2. Materials and Methods

2.1. Study Area

Xichang (schematic diagram B in Figure 1) and Linzhi (schematic diagram A in Figure 1) are two cities with rich vegetation resources, both located in western China. Xichang is situated in Sichuan Province, while Linzhi is located in Tibet. The geographical locations of the study areas are shown in Figure 1. Forest fires occur frequently in both places. The first fire studied here occurred in Xichang on 30 March 2020; the image range is 101.1°–103.8° E and 27.5°–28.8° N. The damaged forest area was 791.6 hectares, and the fire was not completely extinguished until 2 April 2020. The other case is the fire that occurred in Linzhi on 27 October 2021; the image range is 97.1°–98.2° E and 28.5°–29.5° N. The altitude there is 2400 m to 3500 m, and the firefighting work was not fully finished until 3 November 2021. Detecting smoke not only shortens the time needed to discover forest fires but also helps to reduce the losses they cause.

2.2. Data Pre-Processing

In this study, Himawari-8 data were used to detect forest fire smoke, and NOAA-20 data were used to verify the accuracy. Himawari-8 HSD data use a full-disk projection, so projection conversion is required to display the forest fire area clearly in a plane; the essential step is converting the line and column numbers to geographical coordinates. For NOAA-20 data, when converting from RDR to SDR, the “bow-tie effect” needs to be eliminated by geometric correction in order to display the complete image.
In addition, both kinds of raw data have several bands. Single bands were extracted from the raw data and radiometrically calibrated so as to convert the digital numbers to the corresponding reflectance values of each pixel. For Himawari-8 data, bands 1–6 were synthesized; for NOAA-20 data, all of the image bands were synthesized. After that, the images were clipped according to the coordinates of the forest fire area, and registration was performed between the two datasets.
In order to verify the mapping result, NOAA-20 images of the same scene are required. However, the spatial resolution of NOAA-20 images is 375 m, so resampling is necessary. The pixel aggregate method resamples the spatial resolution from 375 m to 400 m. This method considers all the raw pixels that fall inside each interpolated pixel and assigns each a weight coefficient according to its overlapping area. The grid value of each resampled pixel is then the area-weighted average of the raw pixel values it covers.
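As a rough illustration of the pixel aggregate idea described above, the area-weighted aggregation can be sketched as follows. This is only a minimal sketch on a synthetic grid; the function name `pixel_aggregate` is illustrative, and production work would normally rely on a geospatial library such as GDAL that also handles georeferencing.

```python
import numpy as np

def pixel_aggregate(src, src_res, dst_res):
    """Area-weighted aggregation of a 2-D grid from src_res to a coarser dst_res.

    Each output pixel value is the average of the source pixel values it
    overlaps, weighted by the overlapping area (the 'pixel aggregate' idea)."""
    h, w = src.shape
    extent_y, extent_x = h * src_res, w * src_res
    out_h, out_w = int(extent_y // dst_res), int(extent_x // dst_res)
    out = np.zeros((out_h, out_w))
    for oy in range(out_h):
        for ox in range(out_w):
            y0, y1 = oy * dst_res, (oy + 1) * dst_res   # output pixel bounds (m)
            x0, x1 = ox * dst_res, (ox + 1) * dst_res
            total, wsum = 0.0, 0.0
            for sy in range(int(y0 // src_res), min(h, int(np.ceil(y1 / src_res)))):
                for sx in range(int(x0 // src_res), min(w, int(np.ceil(x1 / src_res)))):
                    # overlap area between this source pixel and the output pixel
                    dy = min(y1, (sy + 1) * src_res) - max(y0, sy * src_res)
                    dx = min(x1, (sx + 1) * src_res) - max(x0, sx * src_res)
                    a = max(dy, 0.0) * max(dx, 0.0)
                    total += src[sy, sx] * a
                    wsum += a
            out[oy, ox] = total / wsum
    return out

# e.g. an 8x8 grid at 375 m aggregated to a 400 m grid (7x7 output)
coarse = pixel_aggregate(np.ones((8, 8)), 375, 400)
```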

2.3. Forest Fire Smoke Detection Based on Random Forest

Random forest is a supervised classification method. It optimizes classification performance by integrating a large number of basic classifiers (decision trees). The classic decision tree is CART, proposed by Breiman [26]. Each CART decision tree independently makes a judgment in the form of a vote, and the class with the most votes becomes the class of the pixel. Building each tree involves feature selection, decision tree generation, and pruning [27]. The classification results are therefore affected by the strength of, and the correlations among, the decision trees [28]. The splitting of a decision tree starts from the root node and does not end until the sample characteristics of a child node and its parent node are the same; each split generates two subsets. More splits generally improve decision tree performance, but the calculation becomes more complicated at the same time. The structure of the random forest model is shown in Figure 2.
The concept of “random” is reflected in sample extraction, sample feature selection, and the combination of decision trees. Random forest is a flexible machine learning algorithm with strong fitting ability and good anti-noise performance, and it adapts well to image classification. Overall, the realization of random forest includes two steps: (1) Train the model: samples are randomly drawn with replacement (bootstrap sampling), so that roughly two-thirds of the training samples fall “in the bag” for each tree and the remaining one-third are “out of the bag”, forming the “out-of-bag error” [29]. The out-of-bag error is an unbiased estimate for random forest, used to test the classification effect of each decision tree [28]. Each decision tree splits according to the minimum Gini index, forming the complete random forest model. (2) Input the sample sets into the model: the category with the most votes is output as the classification result.
In order to ensure high accuracy and calculation efficiency of the model, the number of decision trees was set to 100, and that of characteristic variables was set to 6. The number of sample points for each object classification in the two areas are shown in Table 1.
The original sets were divided into 67% training samples and 33% test samples. The training set was applied to building models, and the test set aimed to evaluate the classification effects of the models.
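The training setup described above (100 trees, 6 characteristic variables, a 67%/33% train/test split, and out-of-bag estimation) can be sketched with scikit-learn's `RandomForestClassifier`. The paper does not name an implementation, so this is only an illustrative stand-in on synthetic six-band samples with labels 0–3 for vegetation, bare land, smoke, and cloud.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the sampled Himawari-8 pixels: 6 band reflectances
# per sample, labels 0-3 for the four ground-object classes.
rng = np.random.default_rng(0)
X = rng.random((1000, 6))
y = rng.integers(0, 4, 1000)

# 67% training / 33% test split, 100 trees, 6 candidate features per split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=100, max_features=6,
                             oob_score=True, random_state=0)
clf.fit(X_tr, y_tr)

# out-of-bag error estimate and held-out test accuracy
oob_error = 1.0 - clf.oob_score_
acc = accuracy_score(y_te, clf.predict(X_te))
```

On real smoke samples the test accuracy would be evaluated with the confusion matrix described in Section 2.7.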

2.4. Mixed Pixel Decomposition Based on LMMs

Satellite remote sensing images record the basic information of ground objects with pixels as the basic unit, and the information recorded by the sensor is the sum of the radiation energy of all ground objects within a pixel. Therefore, pixels containing several ground objects are called “mixed pixels”. Current research on mixed pixel decomposition is based on the assumption that there are a few ground objects (endmembers) with stable spectral characteristics [30]. The reflectance value of a mixed pixel can be expressed as a function of the endmember spectra and the area proportions (abundances) of the endmembers [31]. The function is as follows:
M = G(f_1, f_2, …, f_n; R_1, R_2, …, R_n)        (1)
In Equation (1), M is the mixed spectral vector, f_n represents the area proportion of endmember n, R_n denotes the spectrum of endmember class n, and G is the spectral mixture function, which may be a linear or a nonlinear model.
A linear mixture model (LMM) [32] assumes that the spectra of different endmember types are relatively independent and do not interact with each other. The spectral characteristics of the endmember classes within a pixel are therefore linearly additive and are superimposed into a composite spectrum according to their proportions. In satellite remote sensing scenes of smoke detection, given the 2 km pixel scale and the density variation of smoke, LMMs are appropriate. The LMM equation is as follows:
M = Σ_{k=1}^{n} f_k R_{jk} + ε        (2)
In Equation (2), f_k is the abundance of endmember k (the area ratio of the endmember within the pixel), R_{jk} represents the reflectance of endmember k in band j, and ε is the noise vector, which is generally assumed to satisfy a Gaussian distribution [33].
Endmember extraction includes determining the type and quantity of endmembers and the spectral characteristics of each, which directly affects the decomposition results of the mixed pixels [34]. In this study, both the pixel purity index (PPI) [35] and Sentinel-2 10 m spatial resolution images were used in endmember extraction. The PPI performs well in quickly extracting stable ground objects such as vegetation, bare land, and buildings, but it is sensitive to noise; there is no suitable rule for parameter selection, and it is not easy to judge the actual endmember category from the result. Therefore, Sentinel-2 images of the same place and date were used as a reference for endmember extraction.
Abundance inversion [36] in LMMs requires that the abundance values f_k be constrained according to fully constrained least squares (FCLS). The constraint conditions are shown in Equation (3).
Σ_{k=1}^{n} f_k = 1  and  f_k ≥ 0        (3)
The abundance value of each endmember must lie between 0 and 1, which ensures the non-negativity of the abundances. The sum of the abundance values of all endmembers in each pixel equals 1, which ensures the sum-to-one constraint on the abundances [37].
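A minimal sketch of FCLS-style abundance inversion under the constraints of Equation (3): non-negativity is enforced by NNLS, and the sum-to-one constraint is approximated by appending a heavily weighted row of ones to the endmember matrix, a standard trick. The endmember spectra and mixing proportions below are entirely synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(E, m, delta=1e3):
    """Approximate fully constrained least squares (FCLS) unmixing.

    E: (bands x endmembers) endmember spectra; m: (bands,) mixed pixel spectrum.
    Non-negativity comes from NNLS; sum-to-one is enforced softly by a
    heavily weighted (delta) extra row of ones."""
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    m_aug = np.append(m, delta)
    f, _ = nnls(E_aug, m_aug)
    return f

# two endmembers (e.g. smoke and background) over three bands
E = np.array([[0.30, 0.05],
              [0.35, 0.08],
              [0.40, 0.10]])
m = 0.6 * E[:, 0] + 0.4 * E[:, 1]   # noiseless 60/40 mixture
f = fcls(E, m)                       # recovers approximately [0.6, 0.4]
```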

2.5. Sub-Pixel Mapping

In order to reflect the specific contour of the smoke in the image, and thereby extract forest fire smoke at the sub-pixel scale, sub-pixel mapping is required after mixed pixel decomposition. According to Tobler’s first law [38], all objects in geography are inter-related, and the closer the spatial distance, the greater the correlation between two objects [16]. This theory underpins sub-pixel mapping.
This study aimed to extract forest fire smoke, so the other endmembers can be regarded as a whole: smoke sub-pixels equal 1 and all others equal 0 in the final sub-pixel mapping result. The sub-pixel segmentation scale factor is 5, meaning that each mixed pixel of the raw image is split into 25 sub-pixels, and the spatial resolution of the sub-pixel mapping result is 400 m. The PSA and SPSAM methods are compared here.

2.5.1. PSA Method

The pixel swapping algorithm (PSA) was proposed by Atkinson in 2005 [39]. The basic idea is to continuously exchange the positions of sub-pixels to maximize the spatial attraction. Researchers have adopted this method to locate sub-pixels of various land types and to obtain classification images by stacking them one by one [40]. The purpose of this paper is to extract forest fire smoke at the sub-pixel scale, without separately positioning ground objects other than smoke; therefore, the sub-pixels of all other ground objects are uniformly set to a background value.
The PSA method uses the spatial attraction between sub-pixels and neighboring sub-pixels to quantify the spatial correlation [41]. First, the sub-pixel images are initialized according to the abundance inversion results; then, the spatial attraction at each sub-pixel location is calculated. The spatial attraction between sub-pixel P_i and its adjacent sub-pixels for endmember e is expressed as follows:
A(P_i^e) = Σ_{j=1}^{P_t} λ_{ij} Z(x_j^e)        (4)
In Equation (4), P_t is the number of adjacent sub-pixels around P_i, and λ_{ij} represents the distance weight function between P_i and P_j, given in Equation (5). Z(x_j^e) in Equation (6) is the endmember discrimination function, which takes the binary value 0 or 1.
λ_{ij} = exp(−d(P_i, P_j)/α)        (5)
Z(x_j^e) = 1 if sub-pixel j belongs to endmember e; 0 otherwise        (6)
In Equation (5), α is the exponential parameter of the distance decay model, and d(P_i, P_j) represents the Euclidean distance between sub-pixels P_i and P_j. In a two-dimensional plane, it equals the straight-line distance between the two points:
d(P_i, P_j) = √((x_i − x_j)² + (y_i − y_j)²)        (7)
For the sub-pixel P i , the number of neighborhood sub-pixels, P t , can be calculated according to Equation (8):
P_t = (2r + 1)² − 1        (8)
In Equation (8), r represents the search radius of the neighborhood. After the spatial attraction of every sub-pixel in the image has been calculated, the attractions are sorted by endmember category within each initial mixed pixel. For endmember e, the sub-pixel P_i belonging to endmember e with the smallest spatial attraction A(P_i^e) is retrieved, along with the sub-pixel P_j not belonging to endmember e with the largest spatial attraction A(P_j^e). If A(P_i^e) < A(P_j^e), the endmember attributes of sub-pixels P_i and P_j are exchanged; otherwise, the original attributes are maintained. This operation is performed for each endmember and each mixed pixel. Whenever an attribute exchange occurs between sub-pixels, the corresponding spatial attractions are recalculated. The process runs iteratively until the maximum number of cycles is reached or the result converges, at which point the endmember attributes of the final output are stable.
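The attraction calculation of Equations (4)–(8) and a single PSA-style swap check can be sketched as follows. This is a toy 3 × 3 sub-pixel grid with a single smoke endmember, r = 1 and α = 1; all values are illustrative, and a full PSA run would iterate this step over every mixed pixel until convergence.

```python
import numpy as np

def attraction(grid, i, j, r=1, alpha=1.0):
    """Spatial attraction of sub-pixel (i, j) toward the smoke endmember,
    summed over its (2r+1)^2 - 1 neighbours: A = sum exp(-d/alpha) * Z."""
    h, w = grid.shape
    a = 0.0
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                d = np.hypot(di, dj)                   # Euclidean distance, Eq. (7)
                z = 1.0 if grid[ni, nj] == 1 else 0.0  # discrimination function Z
                a += np.exp(-d / alpha) * z
    return a

# One swap check: exchange an isolated smoke sub-pixel with a background
# sub-pixel surrounded by smoke, if that increases the spatial attraction.
grid = np.array([[1, 1, 0],
                 [1, 0, 0],
                 [0, 0, 1]])        # isolated smoke sub-pixel at (2, 2)
a_iso = attraction(grid, 2, 2)      # low: its neighbours are background
a_gap = attraction(grid, 1, 1)      # high: its neighbours are mostly smoke
if a_iso < a_gap:
    grid[2, 2], grid[1, 1] = grid[1, 1], grid[2, 2]   # exchange attributes
```

After the swap, the smoke sub-pixels form one connected patch, which is exactly the spatial-coherence effect PSA aims for.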

2.5.2. SPSAM Method

The sub-pixel/pixel spatial attraction model (SPSAM) method [18] is also a classic method. Its basic principles are as follows: (1) the spatial attraction of neighboring pixels is determined by the abundance value and the distance; (2) a sub-pixel can only generate spatial attraction with its neighboring pixels, whose number is determined by the size of the search window; (3) the spatial attraction between a sub-pixel and pixels outside the neighborhood is ignored due to the large distance [42]. The attraction calculation formula of the SPSAM method is as follows:
A_e(P_ij^{S²}) = Σ_{t=1}^{T} λ_t F_e(P_t) = Σ_{t=1}^{T} F_e(P_t)/d(P_ij, P_t)        (9)
In a two-dimensional coordinate system, A_e(P_ij^{S²}) represents the attraction of sub-pixel P_ij by endmember e within the search window. In Equation (9), i and j represent the horizontal and vertical coordinates, respectively. S is the scale factor of pixel segmentation, so S² represents the number of sub-pixels into which a raw pixel is segmented. T represents the number of pixels in the search window, and F_e(P_t) represents the abundance value of neighboring pixel P_t for endmember e. λ_t is the spatial correlation weight, calculated as follows:
λ_t = 1/d(P_ij, P_t)        (10)
In Equation (10), d(P_ij, P_t) represents the Euclidean distance between the center of sub-pixel P_ij and adjacent pixel P_t; its calculation formula is the same as Equation (7).
The number and location of the sub-pixels belonging to endmember e in a mixed pixel are determined from the spatial attraction A_e(P_ij^{S²}): the sum of the spatial attraction within the search window is maximized, while the number of sub-pixels assigned to endmember e is constrained by the abundance value. The calculation formula is as follows:
N_ij^e = S² × F_e(P_t)        (11)
The number of sub-pixels belonging to endmember e in a mixed pixel is obtained by multiplying the number of sub-pixels in the mixed pixel by the abundance value of endmember e. The top N_ij^e sub-pixels in the spatial attraction ranking are then assigned to endmember e.
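Putting Equations (9)–(11) together, a minimal SPSAM-style mapping of a single endmember might look like the following sketch. The abundance values are synthetic, only the eight neighboring coarse pixels are searched, and the scale factor follows the S = 5 used in this study.

```python
import numpy as np

def spsam_map(abundance, S=5):
    """Minimal SPSAM-style mapping of one endmember.

    abundance: coarse abundance map F_e. Each coarse pixel is split into S*S
    sub-pixels; each sub-pixel is attracted by the abundances of the eight
    neighbouring coarse pixels weighted by 1/distance (Eqs. 9-10), and the
    top N = round(S^2 * F) sub-pixels per coarse pixel are assigned (Eq. 11)."""
    H, W = abundance.shape
    out = np.zeros((H * S, W * S), dtype=int)
    for cy in range(H):
        for cx in range(W):
            scores = []
            for sy in range(S):
                for sx in range(S):
                    # sub-pixel centre in coarse-pixel units
                    py, px = cy + (sy + 0.5) / S, cx + (sx + 0.5) / S
                    a = 0.0
                    for ny in range(max(0, cy - 1), min(H, cy + 2)):
                        for nx in range(max(0, cx - 1), min(W, cx + 2)):
                            if (ny, nx) == (cy, cx):
                                continue
                            d = np.hypot(py - (ny + 0.5), px - (nx + 0.5))
                            a += abundance[ny, nx] / d   # Eq. (9) inner term
                    scores.append((a, sy, sx))
            n = round(S * S * abundance[cy, cx])         # Eq. (11)
            for a, sy, sx in sorted(scores, reverse=True)[:n]:
                out[cy * S + sy, cx * S + sx] = 1
    return out

# smoke abundance 0.48 in the centre pixel, full smoke in the row above:
# the centre pixel's 12 smoke sub-pixels cluster toward the smoke side
F = np.array([[1.0, 1.0, 1.0],
              [0.0, 0.48, 0.0],
              [0.0, 0.0, 0.0]])
sub = spsam_map(F)
```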

2.6. Sub-Pixel Mapping Correction Based on a Random Forest Classification Map

Sub-pixel mapping relies on images without auxiliary information. However, due to the lack of constraint information and of references for setting parameters, the mapping results have uncertainties. The results contain noise pixels, including commission errors and smoke omission errors, which appear as “spots” and “holes” in the images. In order to improve the accuracy of sub-pixel mapping, it is necessary to obtain additional information that reflects the internal spatial distribution characteristics of mixed pixels as constraint conditions for sub-pixel positioning [43]. Smoke floats in the air with changing shapes determined by the distribution of the fire, so it is impractical to record smoke shapes, especially from the satellite perspective. A classification map of reasonable accuracy can therefore provide additional information for correction.
To some extent, a clumping process can solve the problem of spatial discontinuity caused by sub-pixel positioning. The principle of clumping is to use morphological operators to cluster or merge adjacent similar classification regions. First, the classification objects are merged by a dilation operation according to the dilate kernel value; next, the classification images are eroded according to the erode kernel value. Finally, most “spots” and “holes” in the images are eliminated, and the smoke edge becomes smoother. In this study, the dilate kernel value was 5 and the erode kernel value was 3.
The clumping process cannot handle larger-scale “spots” and “holes”. In this case, the random forest classification map was introduced, which helps to reduce the commission and omission errors of smoke in the sub-pixel mapping results. The main approach is a “sliding window” method. If the category of a pixel is inconsistent with those of all eight surrounding pixels, the center pixel is decomposed and its category is determined by sub-pixel mapping; if the category is consistent with those of the surrounding pixels, the center pixel is directly assigned the same category as its neighbors. In this way, pixels misclassified between smoke and other objects can be corrected.
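The clumping and sliding-window steps can be sketched with `scipy.ndimage` morphology. The kernel sizes follow the text (dilate 5, erode 3); the unanimous-neighbor rule in `window_correct` is a simplified stand-in for the full correction, which in the paper also consults the random forest map to arbitrate the decomposed pixels.

```python
import numpy as np
from scipy import ndimage

def clump(mask, dilate_size=5, erode_size=3):
    """Morphological clumping of a binary smoke mask: dilate then erode
    to close small 'holes' and smooth the smoke edge."""
    mask = ndimage.binary_dilation(mask, np.ones((dilate_size, dilate_size)))
    return ndimage.binary_erosion(mask, np.ones((erode_size, erode_size)))

def window_correct(mask):
    """Sliding-window correction: if a pixel's class disagrees with all
    eight neighbours, replace it with the neighbours' (unanimous) class."""
    out = mask.copy()
    h, w = mask.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = mask[i - 1:i + 2, j - 1:j + 2]
            neigh = np.delete(win.ravel(), 4)     # the 8 surrounding pixels
            if (neigh != mask[i, j]).all():
                out[i, j] = neigh[0]              # unanimous neighbours win
    return out

# an isolated misclassified pixel is flipped to its neighbours' class
noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 1
fixed = window_correct(noisy)
```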

2.7. Accuracy Evaluation

For both forest fire smoke detection and sub-pixel mapping parts, accuracy evaluations are necessary. In this study, a confusion matrix was used in the accuracy evaluation. The evaluating indicators were overall accuracy, kappa coefficient, producer accuracy, user accuracy, commission error, and omission error.
When the classification is correct, the results lie on the main diagonal of the matrix, and the misclassified sub-pixels lie off the main diagonal. The larger the element values on the main diagonal, the more accurate the classification; the larger the off-diagonal element values, the more serious the misclassification and the lower the classification accuracy. The confusion matrix structure for smoke detection is shown in Table 2.
The basic statistical estimates of the confusion matrix are as follows:
(1)
Overall Accuracy
The overall classification accuracy is equal to the proportion of correctly classified sub-pixels in the total number of sub-pixels involved in the classification. In a confusion matrix, the overall accuracy [44,45] is the ratio of the sum of the elements on the main diagonal to the sum of all elements in the matrix. It is used to measure the overall classification quality of the model. Its expression is as follows:
Overall Accuracy = (TP + TN)/(TP + FN + FP + TN)        (12)
Producer accuracy [46] and user accuracy [47] are used to reflect the omission error and commission error, respectively. The calculation equations are as follows:
(2)
Producer Accuracy
Producer Accuracy = TP/(TP + FN)        (13)
(3)
User Accuracy
User Accuracy = TP/(TP + FP)        (14)
The kappa coefficient [48,49] is generated by statistical testing to evaluate the accuracy of classification. The equation is as follows:
(4)
Kappa coefficient
Kappa coefficient = (p_o − p_e)/(1 − p_e)        (15)
In Equation (15), p_o is the ratio of the number of correctly classified sub-pixels to the total number of sub-pixels, which equals the overall accuracy. The equation for p_e is as follows:
p_e = [(TP + FN) × (TP + FP) + (FP + TN) × (FN + TN)]/(TP + FN + FP + TN)²        (16)
The kappa coefficient is used to test the consistency between the classification and the ground truth [50]. Its value range is [−1, 1], and it is usually greater than 0; the greater the value, the higher the consistency. The judgment scale for the kappa coefficient is as follows: 0.00–0.20 (slight consistency), 0.21–0.40 (fair consistency), 0.41–0.60 (moderate consistency), 0.61–0.80 (substantial consistency), and 0.81–1.00 (almost perfect) [51,52,53].
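Equations (12)–(16) can be collected into one small helper for binary smoke detection; the confusion-matrix counts below are hypothetical.

```python
def smoke_metrics(tp, fn, fp, tn):
    """Confusion-matrix indicators for binary smoke detection (Eqs. 12-16)."""
    total = tp + fn + fp + tn
    oa = (tp + tn) / total                       # overall accuracy, Eq. (12)
    pa = tp / (tp + fn)                          # producer accuracy (1 - omission)
    ua = tp / (tp + fp)                          # user accuracy (1 - commission)
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)                 # Eq. (15)
    return oa, pa, ua, kappa

# hypothetical counts: 80 true smoke hits, 20 missed, 10 false alarms
oa, pa, ua, kappa = smoke_metrics(tp=80, fn=20, fp=10, tn=90)
# oa = 0.85, pa = 0.80, kappa = 0.70 (substantial consistency)
```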

2.8. Method Flow Chart

The method flow chart of this study is shown in Figure 3.
Step 1: The random forest algorithm is used to confirm whether there is smoke and to locate smoke pixels. It shows superior pixel-level classification ability in classifying smoke, clouds, vegetation, and bare lands. The result map also becomes the constraint information of sub-pixel mapping correction.
Step 2: Mixed pixel decomposition is used to determine the abundance value of each endmember in mixed pixels, providing information for sub-pixel mapping. The results contain the abundance maps of each endmember.
Step 3: Sub-pixel mapping and correction are utilized to locate smoke at the sub-pixel level. This method not only enhances the spatial resolution but also determines the distribution of smoke within mixed pixels. With the improved sub-pixel mapping method, the “spots” and “holes” caused by the lack of constraint information are largely eliminated, restoring a more realistic appearance of the smoke.

3. Results

All the results are based on the two forest fires which occurred in Xichang and Linzhi on 30 March 2020, and 28 October 2021, respectively.

3.1. Results of Spectral Characteristics Extraction

The visible and near-infrared bands (1–6 channels) of Himawari-8 data were selected to analyze the spectral characteristics of smoke, clouds, vegetation, and bare land. The statistical results of atmospheric apparent reflectance mean values are shown in Figure 4.
According to Figure 4, in the visible light bands and the channel 4 near-infrared band (0.86 μm), the average reflectance values of smoke are lower than those of clouds, and the mean reflectance values of clouds are the highest among the four ground objects. For vegetation and bare lands, the mean values are less than 0.1, both lower than that of smoke. In near-infrared band 5, the average reflectance values of bare lands are higher than those of smoke. In near-infrared band 6, the average reflectance values of smoke are the lowest among the four ground objects.

3.2. Results of Forest Fire Smoke Detection

In the pixel-level classification of images, there are four classes: vegetation, bare lands, smoke, and clouds. Figure 5 shows the classification result of the Xichang forest fire based on random forest.
Table 3 shows the pixel quantity statistics of the four classes in the Xichang study area. The combined area proportion of vegetation and bare lands is 74.40%, much higher than those of smoke and clouds, and the area proportion of clouds is more than twice that of smoke. As Figure 5 shows, the main body of the smoke was not obscured by clouds.
Figure 6 shows the classification result of the Linzhi study area based on random forest. The ground objects were marked in four colors. Figure 6 has the same form as Figure 5.
Table 4 shows the pixel quantity statistics of the four classes in the Linzhi study area. The combined area proportion of vegetation and bare lands is 74.46%, whereas the area proportions of smoke and clouds are almost equal. Moreover, the smoke and clouds are irregularly distributed in Figure 6.
Forest fire smoke was successfully detected in both images. However, the fire in Xichang was larger in scale than that in Linzhi. In Xichang, where the average elevation is below 2000 m, smoke and clouds appeared in separate parts of the fire image. Because Linzhi lies at a higher elevation, smoke and clouds overlapped in the same area from the satellite perspective, which complicated the classification.
Using Sentinel-2 data as a reference, test samples of each category were selected uniformly from the Himawari-8 images. To assess the classification accuracy of smoke in the two fires, the overall accuracy, kappa coefficient, smoke producer accuracy, and smoke user accuracy were analyzed, as shown in Table 5.
The overall accuracy and kappa coefficient are computed over all four classes. Both kappa coefficients exceed 0.6, indicating that the classification results agree well with the reference data. The smoke producer accuracies were 88.06% and 90.57%, respectively, meaning that omission errors for smoke were low. However, the smoke user accuracies were only 72.84% and 77.42%, respectively; the commission errors arise mainly from misclassification between smoke and clouds. Accurately distinguishing smoke from clouds remains a challenge for random forest classification.
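All four indicators in Table 5 can be derived from a single confusion matrix. A compact sketch (assuming NumPy; rows of `cm` index the reference class and columns the predicted class, and the example counts are hypothetical):

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, kappa, producer accuracy (1 - omission error), and
    user accuracy (1 - commission error) from confusion matrix cm[ref, pred]."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    producer = np.diag(cm) / cm.sum(axis=1)
    user = np.diag(cm) / cm.sum(axis=0)
    return oa, kappa, producer, user

# Illustrative 2-class matrix (smoke vs. not-smoke), hypothetical counts.
oa, kappa, producer, user = accuracy_metrics([[45, 5], [10, 40]])
```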

3.3. Results of Mixed Pixel Decomposition

According to the classification, the endmembers of the two forest fire images were vegetation, bare lands, smoke, and clouds. The grey maps of endmember extraction in the two study areas are shown in Figure 7 and Figure 8.
Figure 7c and Figure 8c represent the abundance of smoke, with the grey level reflecting the smoke fraction. Pixel values range from 0 to 1; a higher grey value means the pixel is purer (contains fewer other endmembers) and the endmember extraction is more complete. To show the effect of the overall abundance inversion, the abundance maps in RGB form are shown in Figure 9 and Figure 10.
In Figure 9 and Figure 10, red represents smoke, green represents vegetation, blue represents clouds, and the remaining part indicates bare lands. Mixed pixels with unpurified colors contain more than one endmember.
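The abundance inversion in this section rests on a linear mixture model. A minimal per-pixel sketch, assuming SciPy and hypothetical endmember spectra; the sum-to-one constraint is enforced softly by appending a heavily weighted row to the non-negative least-squares system, a common trick rather than the study's exact solver:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, w=1e3):
    """Abundance-constrained linear unmixing of one pixel.
    endmembers: (n_bands, n_endmembers) matrix of pure spectra.
    Returns non-negative abundances that approximately sum to one."""
    n_end = endmembers.shape[1]
    A = np.vstack([endmembers, w * np.ones((1, n_end))])  # sum-to-one row
    b = np.append(np.asarray(pixel, float), w)
    abundances, _ = nnls(A, b)
    return abundances

# Hypothetical 3-band spectra for two endmembers (e.g., smoke and vegetation).
E = np.array([[0.35, 0.05],
              [0.33, 0.06],
              [0.08, 0.30]])
mixed = E @ np.array([0.3, 0.7])   # a 30% smoke / 70% vegetation mixed pixel
ab = unmix_pixel(mixed, E)
```

Applying `unmix_pixel` to every pixel of the image and collecting the smoke component yields an abundance map of the kind shown in Figures 7c and 8c.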

3.4. Results of Sub-Pixel Mapping before and after Correction

The smoke sub-pixel mapping results are images with two classes: smoke and others. The smoke pixel values equal 1; other pixel values equal 0. The sub-pixel mapping results before and after correction based on the PSA and SPSAM methods are shown in Figure 11 and Figure 12.
Figure 11 and Figure 12 compare the PSA and SPSAM methods: Figure 11a,b and Figure 12a,b show the PSA results, and Figure 11c,d and Figure 12c,d show the SPSAM results. Figure 11b,d and Figure 12b,d exhibit fewer spots and holes than Figure 11a,c and Figure 12a,c. Figure 12b has a clearer smoke outline at the bottom, which can be inferred as the origin of the fire. Comparing the sub-pixel mapping results before and after correction shows that most “spots” and “holes” caused by misclassification have been eliminated.
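The PSA idea can be illustrated with a toy pixel-swapping routine: sub-pixels are first allocated randomly inside each coarse pixel according to its smoke abundance, and then, within each coarse pixel, the least spatially attractive smoke sub-pixel is swapped with the most attractive non-smoke sub-pixel, preserving the per-pixel counts. This is a simplified sketch assuming NumPy, not the study's exact implementation:

```python
import numpy as np

def neighbor_sum(m):
    """8-neighbour sum for each cell (the 'attractiveness' of a sub-pixel)."""
    p = np.pad(m, 1)
    return sum(p[1 + di:1 + di + m.shape[0], 1 + dj:1 + dj + m.shape[1]]
               for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))

def pixel_swap(abundance, scale=4, n_iter=20, seed=0):
    """Toy PSA-style sub-pixel mapping of one class from a fraction map."""
    rng = np.random.default_rng(seed)
    R, C = abundance.shape
    fine = np.zeros((R * scale, C * scale), dtype=int)
    counts = np.rint(np.asarray(abundance) * scale * scale).astype(int)
    for i in range(R):                      # random initial allocation
        for j in range(C):
            block = np.zeros(scale * scale, dtype=int)
            block[rng.choice(scale * scale, counts[i, j], replace=False)] = 1
            fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = block.reshape(scale, scale)
    for _ in range(n_iter):                 # iterative within-pixel swapping
        attract = neighbor_sum(fine)
        for i in range(R):
            for j in range(C):
                blk = fine[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
                a = attract[i*scale:(i+1)*scale, j*scale:(j+1)*scale]
                ones = np.flatnonzero(blk.ravel() == 1)
                zeros = np.flatnonzero(blk.ravel() == 0)
                if ones.size == 0 or zeros.size == 0:
                    continue
                worst1 = ones[np.argmin(a.ravel()[ones])]
                best0 = zeros[np.argmax(a.ravel()[zeros])]
                if a.ravel()[best0] > a.ravel()[worst1]:   # swap improves layout
                    blk[np.unravel_index(worst1, blk.shape)] = 0
                    blk[np.unravel_index(best0, blk.shape)] = 1
    return fine

# A checkerboard of pure pixels maps to solid blocks at the fine scale.
fine = pixel_swap(np.array([[1.0, 0.0], [0.0, 1.0]]), scale=2)
```

Because swaps only rearrange sub-pixels within a coarse pixel, the output always honours the abundance counts; the iterations merely cluster the class spatially, which is why isolated "spots" can still survive without extra constraint information.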
In order to highlight the effects of the improved sub-pixel mapping method in smoke detection, the raw Himawari-8 images and corresponding final smoke detection results are shown in Figure 13 and Figure 14.
Figure 13b and Figure 14b contain more detailed spatial information on the smoke than the raw images. The spatial resolution was enhanced from 2 km to 400 m, and the endmember classes within each pixel were relocated. The red rectangular boxes mark the inferred origin positions of the forest fires, and these origin positions are clearer in Figure 13b and Figure 14b. In particular, the outline of the smoke is clearly delineated.
In the accuracy evaluation, NOAA-20 images resampled to a 400 m spatial resolution were used as the control group, and the sub-pixel mapping results before and after correction were the experimental groups; all have the same pixel scale. Four indicators were used: overall accuracy, kappa coefficient, smoke producer accuracy, and smoke user accuracy. The results are shown in Table 6 and Table 7.
The two methods show accuracy differences in smoke sub-pixel positioning. In most cases, the PSA method outperforms the SPSAM method: although the overall accuracies all fall between 80% and 90%, the PSA results retain richer spatial detail than the SPSAM results.
The accuracy evaluation shows that the overall accuracies of both methods improved after correction, to different extents. For the PSA method, the overall accuracies for the two forest fires were 87.95% and 86.32%, increases of 3.59% and 2.80%, respectively, and the kappa coefficients were 0.74 and 0.69, increases of 0.12 and 0.07. For the SPSAM method, the overall accuracies were 86.88% and 85.22%, increases of 2.96% and 0.84%, respectively, and the kappa coefficients were 0.73 and 0.68, increases of 0.11 and 0.04. After correction, the producer accuracy and user accuracy were also enhanced, demonstrating the benefit of introducing the random forest results. However, the user accuracies remained lower than the producer accuracies, reflecting residual commission errors, caused mainly by slight confusion between smoke and clouds along cloud edges. Nevertheless, the smoke detection in this study still achieved high accuracy on the other three indicators.
Most “spots” and “holes” were eliminated; thus, the commission errors and omission errors decreased after correction. Therefore, the smoke user accuracies and producer accuracies were enhanced accordingly. These phenomena demonstrate the superiority of the improved sub-pixel mapping method developed in this study.
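The correction behaviour described above, removing isolated “spots” and filling “holes”, can be approximated by a simple sliding-window majority filter. This is a hedged stand-in assuming NumPy; the study's actual correction additionally uses the random forest result map as constraint information:

```python
import numpy as np

def majority_correct(binary_map, win=3, n_pass=1):
    """Sliding-window majority filter over a binary smoke map:
    a cell becomes 1 only if most cells in its win x win window are 1."""
    m = np.asarray(binary_map).astype(int)
    r = win // 2
    for _ in range(n_pass):
        p = np.pad(m, r, mode='edge')
        out = np.zeros_like(m)
        for i in range(m.shape[0]):
            for j in range(m.shape[1]):
                out[i, j] = int(p[i:i + win, j:j + win].sum() > win * win // 2)
        m = out
    return m

# A lone 'spot' is removed and a lone 'hole' is filled:
spot = np.zeros((5, 5), dtype=int); spot[2, 2] = 1
hole = np.ones((5, 5), dtype=int);  hole[2, 2] = 0
cleaned_spot = majority_correct(spot)
filled_hole = majority_correct(hole)
```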

3.5. Comparisons between Sub-Pixel Mapping Correction and Traditional Classification Methods

Compared with traditional classification methods, the improved sub-pixel mapping method performed better in terms of smoke location. This study selected the BP neural network, random forest, and support vector machine (SVM) as traditional classification methods for comparison. The results are shown in Figure 15 and Figure 16.
In order to highlight the smoke, smoke pixels are shown in white; all other pixels, representing vegetation, bare lands, and clouds, are shown in black. The spatial resolution of the images in Figure 15b and Figure 16b is 400 m, whereas the other subfigures have a 2 km spatial resolution. The accuracy evaluation results are shown in Table 8 and Table 9.
According to Table 8 and Table 9, both the overall accuracy and the kappa coefficient reflect the superiority of the improved sub-pixel mapping method, which performed better in smoke detection than the three traditional classification methods. Among those three methods, random forest showed the best performance in pixel-level smoke classification, which is why the improved sub-pixel mapping method uses the random forest result map as constraint information.

3.6. Results of Continuous Smoke Monitoring

The high timeliness of Himawari-8 data is reflected in the continuous monitoring of forest fire smoke. The forest fire smoke occurring in Liangshan Yi Autonomous Prefecture, Sichuan Province, on 30 March 2020, was selected as the research object for time sequence monitoring. The downloaded remote sensing images were taken at 13:00, 14:00, 15:00, and 16:00 Beijing time.
Smoke detection based on the random forest algorithm was performed on four images; the results are shown in Figure 17.
The red pixels represent smoke. The four maps in Figure 17 show that the smoke range expanded continuously from 13:00 to 16:00. The statistical results for the area proportion of each endmember are shown in Table 10.
From the aspect of time sequence, the area proportion of smoke increased from 0.78% to 27.57% over the three hours. The classification results and the raw Himawari-8 image were overlaid; the accuracy evaluation results are shown in Table 11.
According to the accuracy evaluation, the overall accuracies of the four images are 82.59%, 81.18%, 82.67%, and 83.52%, and the kappa coefficients are 0.76, 0.74, 0.75, and 0.78, showing that forest fire smoke recognition based on the random forest algorithm and Himawari-8 images agrees well with the reference data.
After mixed pixel decomposition and sub-pixel mapping correction had been performed, the smoke abundance results in the form of grey maps are shown in Figure 18.
A higher grey value indicates that the smoke endmember is purer within the pixel, i.e., the smoke abundance is higher. Figure 19 shows the sub-pixel mapping correction results alongside the corresponding raw Himawari-8 images. Although the smoke gradually expanded over time, the clouds largely retained their shape and size, and the two objects remained relatively independent; therefore, misclassification between them along cloud edges could be corrected to some extent.
The accuracies of the sub-pixel mapping correction results were evaluated based on the confusion matrix method, as shown in Table 12.
Table 12 shows that the overall accuracies at 13:00, 14:00, 15:00, and 16:00 were 89.75%, 88.31%, 88.69%, and 86.65%, which are 7.16%, 7.13%, 6.02%, and 3.13% higher than those of the random forest results, respectively. The kappa coefficients of the sub-pixel mapping correction results are 0.78, 0.77, 0.76, and 0.75, respectively. After correction and clumping, most of the holes and spots in the sub-pixel mapping results were eliminated; as a result, the smoke maps look more realistic, and the smoke producer accuracy and user accuracy are higher.

4. Discussion

Most research on smoke detection concentrates on the image and pixel levels [54,55]. Sub-pixel mapping technology is mainly used for the accurate identification of ground objects such as land cover [56], forest lands [57], and water boundaries [58]. Smoke floats in the air, like clouds, which makes it difficult to detect from the perspective of satellite sensors. Pixel-level smoke detection has been studied with machine learning [11,12] and deep learning [59,60] methods, but sub-pixel smoke detection remains very rare. This study selected two typical large-scale forest fires, in Xichang and Linzhi, and proposed an improved sub-pixel mapping method based on the random forest algorithm for locating smoke. The results are discussed below.
In pixel-level smoke detection, random forest [61,62] performs better than the BP neural network [63] and SVM [64]. In this paper, the technique was applied to determine the spatial distribution of smoke in 2 km Himawari-8 images; the overall accuracies of the two forest fire images based on random forest were 83.52% and 84.68% for Xichang and Linzhi, respectively. However, smoke concentration varies across spatial scales, which yields different pixel reflectance values. In the classification result maps, pixels inside the smoke plume are correctly classified, but pixels at the smoke edge are easily misclassified owing to the limitations of the pixel scale. To overcome these limitations and determine the endmember distribution within mixed pixels, a sub-pixel mapping method is required.
In mixed pixel decomposition, LMMs are widely used for multi-spectral images, without considering light reflection between ground objects [65]. In addition, endmember extraction in current research mainly targets surface cover types [66]. This paper treats smoke and clouds as two independent endmembers, a novel attempt in sub-pixel mapping. The approach completes the abundance inversion, determining the endmember abundances in mixed pixels; the abundance map reflects differences in smoke concentration across areas.
The PSA and SPSAM methods [67,68] were compared in terms of sub-pixel mapping. These two methods can realize the sub-pixel positioning of smoke. However, the spatial detail information of the PSA results was richer than that of the SPSAM, and the overall accuracies of the PSA results after correction were higher than those of the SPSAM. Therefore, the PSA method is more suitable than the SPSAM method for smoke detection.
Because classic sub-pixel mapping lacks constraint information, a correction step was designed to improve accuracy. Noise caused by misclassification is a common problem in satellite remote sensing data [69]. Various noise reduction methods have been proposed, such as rank approximation [70], second-generation wavelets [71], double logistic function fitting [72], and the introduction of auxiliary information for correction (e.g., introducing a DEM in water detection) [73]. In this paper, random forest classification maps and a clumping process were introduced for this purpose. After correction based on the “sliding window” method, the overall accuracies and kappa coefficients of both the PSA and SPSAM methods increased to some extent, and most spots and holes disappeared from the result maps.
Continuous monitoring [74] of forest fire smoke at the same location over a period of time based on the PSA method showed that the highly timely Himawari-8 data are feasible for smoke detection research. The variation in smoke range can provide scientific information for forest fire identification. Detecting smoke can address fire-point misjudgment caused by insufficient infrared radiation energy in the early stages of a fire. Because smoke persists throughout the whole process of a forest fire, the development and spreading trends of forest fires can be observed through its continuous monitoring.
One limitation of this study is that optical remote sensing can detect smoke only in daytime. In addition, when the smoke range is smaller than one pixel, the method cannot be applied because it requires spatial correlation information. When clouds block smoke, satellite-based smoke detection is affected, and the detected smoke shape becomes incoherent. Nevertheless, as long as the existence of smoke can be identified in the satellite images, the analysis can be carried out.

5. Conclusions

At present, the most common method of detecting forest fires is searching for high-temperature infrared points. However, infrared energy in the early stages of a fire is often too weak to be identified, usually because of the inadequate burning of combustible materials and occlusion by dense tree canopies. Smoke is generated first when forest fires occur. The combination of visible and near-infrared bands of meteorological satellites can be used to detect forest fire smoke, and combining this with the infrared detection of high-temperature points can solve the problem of missed and delayed judgments in forest fire monitoring.
For Himawari-8 data, the traditional random forest classification method can realize pixel-level smoke detection, but the smoke boundary is unclear at the 2 km pixel scale. To determine the smoke distribution within mixed pixels, the improved sub-pixel mapping method based on the random forest algorithm was introduced. This method enhances the accuracy of smoke detection and restores a more realistic form of the smoke. Continuous monitoring can reveal variations in the smoke, providing information for locating fire sites and predicting fire spread directions; therefore, searching for forest fires through smoke detection is feasible. One contribution of this paper is that it fills the research gap of applying the sub-pixel mapping concept to locating smoke. Another is the use of a random forest result map as constraint information in smoke sub-pixel positioning. In smoke detection scenarios, the improved sub-pixel mapping method based on the random forest algorithm achieves higher accuracies than traditional classification methods and classical sub-pixel mapping methods.
Smoke morphology simulation and traversing fire points through smoke detection results will be the focus of future research. When fire points can be determined, fire suppression actions will be more precise. In practical applications, fire points with geographical coordinates can be provided to improve decision-making.

Author Contributions

G.Z. conceived and designed the study. X.L. wrote the first draft, performed the data analysis, and collected all of the study data. G.Z., S.T., Z.Y. and X.W. provided critical insights in editing the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation Project of China (grant No. 32271879) and the Science and Technology Innovation Platform and Talent Plan Project of Hunan Province (grant No. 2017TP1022).

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef] [PubMed]
  2. Borchers-Arriagada, N.; Palmer, A.J.; Bowman, D.M.J.S.; Williamson, G.J.; Johnston, F.H. Health Impacts of Ambient Biomass Smoke in Tasmania, Australia. Int. J. Environ. Res. Public Health 2020, 17, 3264. [Google Scholar] [CrossRef] [PubMed]
  3. Miranda, A.I. Forecasting the Effects of Wildland Fires on Air Quality and on Human Health. Environ. Sci. Proc. 2022, 17, 9. [Google Scholar]
  4. Chung, Y.-S.; Le, H.V. Detection of forest-fire smoke plumes by satellite imagery. Atmos. Environ. 1984, 18, 2143–2151. [Google Scholar] [CrossRef]
  5. Xie, Y.; Qu, J.J.; Xiong, X.; Hao, X.; Che, N.; Sommers, W. Smoke plume detection in the eastern United States using MODIS. Int. J. Remote Sens. 2007, 28, 2367–2374. [Google Scholar] [CrossRef]
  6. Chrysoulakis, N.; Herlin, I.; Prastacos, P.; Yahia, H.; Grazzini, J.; Cartalis, C. An improved algorithm for the detection of plumes caused by natural or technological hazards using AVHRR imagery. Remote Sens. Environ. 2009, 108, 393–406. [Google Scholar] [CrossRef]
  7. Tao, M.; Wang, J.; Li, R.; Wang, L.; Chen, L. Performance of MODIS high-resolution MAIAC aerosol algorithm in China: Characterization and limitation. Atmos. Environ. 2019, 213, 159–169. [Google Scholar] [CrossRef]
  8. Hsu, N.C.; Tsay, S.C.; King, M.D.; Herman, J.R. Aerosol properties over bright-reflecting source regions. IEEE Trans. Geosci. Remote Sens. 2004, 42, 557–569. [Google Scholar] [CrossRef]
  9. Kaufman, Y.J.; Ichoku, C.; Giglio, L.; Korontzi, S.; Chu, D.A.; Hao, W.M.; Li, R.R.; Justice, C.O. Fire and smoke observed from the Earth Observing System MODIS instrument--products, validation, and operational use. Int. J. Remote Sens. 2003, 24, 1765–1781. [Google Scholar] [CrossRef]
  10. Lyapustin, A.; Korkin, S.; Wang, Y.; Quayle, B.; Laszlo, I. Discrimination of Biomass Burning Smoke and Clouds in MAIAC Algorithm. Atmos. Chem. Phys. 2012, 12, 9679–9686. [Google Scholar] [CrossRef] [Green Version]
  11. Li, X.L.; Wang, J.; Song, W.G.; Ma, J.; Telesca, L.; Zhang, Y.M. Automatic Smoke Detection in MODIS Satellites Data based on K-means Clustering and Fisher Linear Discrimination. Photogramm. Eng. Remote Sens. 2014, 80, 971–982. [Google Scholar] [CrossRef]
  12. Li, X.L.; Song, W.G.; Lian, L.; Wei, X. Forest Fire Smoke Detection Using Back-Propagation Neural Network Based on MODIS Data. Remote Sens. 2015, 7, 4473–4498. [Google Scholar] [CrossRef] [Green Version]
  13. Ba, R.; Chen, C.; Yuan, J.; Song, W.; Lo, S. SmokeNet: Satellite Smoke Scene Detection Using Convolutional Neural Network with Spatial and Channel-Wise Attention. Remote Sens. 2019, 11, 1702. [Google Scholar] [CrossRef] [Green Version]
  14. Mo, Y.; Yang, X.; Tang, H.; Li, Z. Smoke Detection from Himawari-8 Satellite Data over Kalimantan Island Using Multilayer Perceptrons. Remote Sens. 2021, 13, 3721. [Google Scholar] [CrossRef]
  15. Atkinson, P.M.; Cutler, M.E.J.; Lewis, H. Mapping sub-pixel proportional land cover with AVHRR imagery. Int. J. Remote Sens. 1997, 18, 917–935. [Google Scholar] [CrossRef]
  16. Tobler, W.R. A Computer Movie Simulating Urban Growth in the Detroit Region. Econ. Geogr. 1970, 46, 234–240. [Google Scholar] [CrossRef]
  17. Atkinson, P.M. Sub-pixel Target Mapping from Soft-classified, Remotely Sensed Imagery. Photogramm. Eng. Remote Sens. 2015, 71, 839–846. [Google Scholar] [CrossRef] [Green Version]
  18. Mertens, K.C.; Baets, B.D.; Verbeke, L.P.C.; Wulf, R.R.D. A sub-pixel mapping algorithm based on sub-pixel/pixel spatial attraction models. Int. J. Remote Sens. 2006, 27, 3293–3310. [Google Scholar] [CrossRef]
  19. Kasetkasem, T.; Arora, M.K.; Varshney, P.K. Super-resolution land cover mapping using a Markov random field based approach. Remote Sens. Environ. 2005, 96, 302–314. [Google Scholar] [CrossRef]
  20. Tolpekin, V.A.; Hamm, N. Fuzzy Super Resolution Mapping Based on Markov Random Fields. IEEE Trans. Geosci. Remote Sens. 2008, 2, 875–878. [Google Scholar]
  21. Tolpekin, V.A.; Stein, A. Quantification of the Effects of Land-Cover-Class Spectral Separability on the Accuracy of Markov-Random-Field-Based Superresolution Mapping. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3283–3297. [Google Scholar] [CrossRef]
  22. Feng, L.; Xiaoddong, L.; Fei, X.; Shiming, F.; Yun, D. Object-based sub-pixel mapping of buildings incorporating the prior shape information from remotely sensed imagery. Int. J. Appl. Earth Obs. 2012, 18, 283–292. [Google Scholar]
  23. Linyi, L.; Yun, C.; Tingbao, X.; Chang, H.; Rui, L. Integration of Bayesian regulation back-propagation neural network and particle swarm optimization for enhancing sub-pixel mapping of flood inundation in river basins. Remote Sens. Lett. 2016, 7, 631–640. [Google Scholar]
  24. Li, X.; Feng, L.; Foody, G.M.; Yun, D. A Superresolution Land-Cover Change Detection Method Using Remotely Sensed Images with Different Spatial Resolutions. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3822–3841. [Google Scholar] [CrossRef]
  25. Weida, Y.; Jian, Y. Sub-pixel vs. super-pixel-based greenspace mapping along the urban-rural gradient using high spatial resolution Gaofen-2 satellite imagery: A case study of Haidian District, Beijing, China. Int. J. Remote Sens. 2017, 38, 6386–6406. [Google Scholar]
  26. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  27. Ghosh, A.; Sharma, R.; Joshi, P.K. Random forest classification of urban landscape using Landsat archive and ancillary data: Combining seasonal maps with decision level fusion. Appl. Geogr. 2012, 48, 31–41. [Google Scholar] [CrossRef]
  28. Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Hess, K.T. Random forests for classification in ecology. Ecology 2007, 88, 2783–2792. [Google Scholar] [CrossRef]
  29. Hou, N.; Zhang, X.; Zhang, W.; Wei, Y.; Jia, K.; Yao, Y.; Jiang, B.; Cheng, J. Estimation of Surface Downward Shortwave Radiation over China from Himawari-8 AHI Data Based on Random Forest. Remote Sens. 2020, 12, 181. [Google Scholar] [CrossRef] [Green Version]
  30. Adams, J.B.; Smith, M.O.; Johnson, P.E. Spectral mixture modeling: A new analysis of rock and soil types at the Viking Lander 1 site. J. Geophys. Res.-Atmos. 1986, 91, 8098–8112. [Google Scholar] [CrossRef]
  31. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  32. Zhu, J.; Cao, S.; Shang, G.; Shi, J.; Wang, X.; Zheng, Z.; Liu, C.; Yang, H.; Xie, B. Subpixel Snow Mapping Using Daily AVHRR/2 Data over Qinghai–Tibet Plateau. Remote Sens. 2022, 14, 2899. [Google Scholar] [CrossRef]
  33. Jing, L.; Li, X.; Huang, B.; Zhao, L. Hopfield Neural Network Approach for Supervised Nonlinear Spectral Unmixing. IEEE Geosci. Remote Sens. 2016, 13, 1002–1006. [Google Scholar]
  34. Somers, B.; Asner, G.P.; Tits, L.; Coppin, P. Endmember variability in Spectral Mixture Analysis: A review. Remote Sens. Environ. 2011, 11, 1603–1616. [Google Scholar] [CrossRef]
  35. Tompkins, S.; Mustard, J.F.; Pieters, C.M.; Forsyth, D.W. Optimization of endmembers for spectral mixture analysis. Remote Sens. Environ. 1997, 59, 472–489. [Google Scholar] [CrossRef]
  36. Xu, X.; Tong, X.; Plaza, A.; Zhong, Y.; Xie, H.; Zhang, L. Joint Sparse Sub-Pixel Mapping Model with Endmember Variability for Remotely Sensed Imagery. Remote Sens. 2017, 9, 15. [Google Scholar] [CrossRef] [Green Version]
  37. Chang, C.I. Fully Abundance-Constrained Sequential Endmember Finding: Linear Spectral Mixture Analysis. Real-Time Progress. Hyperspectral Image Process. 2016, 2, 291–322. [Google Scholar]
  38. Miller, H.J. Tobler’s First Law and Spatial Analysis. Ann. Assoc. Am. Geogr. 2004, 94, 284–289. [Google Scholar] [CrossRef]
  39. Wang, Q.M.; Atkinson, P.M. The effect of the point spread function on sub-pixel mapping. Remote. Sens. Environ. 2017, 193, 127–137. [Google Scholar] [CrossRef] [Green Version]
  40. Kumar, U.; Ganguly, S.; Nemani, R.R.; Raja, K.S.; Milesi, C.; Sinha, R.; Michaelis, A.; Votava, P.; Hashimoto, H.; Li, S.; et al. Exploring Subpixel Learning Algorithms for Estimating Global Land Cover Fractions from Satellite Data Using High Performance Computing. Remote Sens. 2017, 9, 1105. [Google Scholar] [CrossRef] [Green Version]
  41. Liu, X.; Deng, R.; Xu, J.; Zhang, F. Coupling the Modified Linear Spectral Mixture Analysis and Pixel-Swapping Methods for Improving Subpixel Water Mapping: Application to the Pearl River Delta, China. Water 2017, 9, 658. [Google Scholar] [CrossRef] [Green Version]
  42. Lu, L.; Huang, Y.; Di, L.; Hang, D. A New Spatial Attraction Model for Improving Subpixel Land Cover Classification. Remote Sens. 2017, 9, 360. [Google Scholar] [CrossRef] [Green Version]
  43. Aplin, P.; Atkinson, P.M. Sub-pixel land cover mapping for per-field classification. Int. J. Remote Sens. 2010, 22, 2853–2858. [Google Scholar] [CrossRef]
  44. Ma, X.; Man, Q.; Yang, X.; Dong, P.; Yang, Z.; Wu, J.; Liu, C. Urban Feature Extraction within a Complex Urban Area with an Improved 3D-CNN Using Airborne Hyperspectral Data. Remote Sens. 2023, 15, 992. [Google Scholar] [CrossRef]
  45. Krivoguz, D.; Bondarenko, L.; Matveeva, E.; Zhilenkov, A.; Chernyi, S.; Zinchenko, E. Machine Learning Approach for Detection of Water Overgrowth in Azov Sea with Sentinel-2 Data. J. Mar. Sci. Eng. 2023, 11, 423. [Google Scholar] [CrossRef]
  46. Nelson, M.D.; Garner, J.D.; Tavernia, B.G.; Stehman, S.V.; Riemann, R.I.; Lister, A.J.; Perry, C.H. Assessing map accuracy from a suite of site-specific, non-site specific, and spatial distribution approaches. Remote Sens. Environ. 2021, 260, 112442. [Google Scholar] [CrossRef]
  47. Watanabe, M.; Koyama, C.N.; Hayashi, M.; Nagatani, I.; Tadono, T.; Shimada, M. Refined algorithm for forest early warning system with ALOS-2/PALSAR-2 ScanSAR data in tropical forest regions. Remote Sens. Environ. 2021, 265, 112643. [Google Scholar] [CrossRef]
  48. Foody, G.M. Impacts of ignorance on the accuracy of image classification and thematic mapping. Remote Sens. Environ. 2021, 259, 112367. [Google Scholar] [CrossRef]
  49. Tridawati, A.; Wikantika, K.; Susantoro, T.M.; Harto, A.B.; Darmawan, S.; Yayusman, L.F.; Ghazali, M.F. Mapping the Distribution of Coffee Plantations from Multi-Resolution, Multi-Temporal, and Multi-Sensor Data Using a Random Forest Algorithm. Remote Sens. 2020, 12, 3933. [Google Scholar] [CrossRef]
  50. Jiang, Z.; Zhang, J.; Ma, Y.; Mao, X. Hyperspectral Remote Sensing Detection of Marine Oil Spills Using an Adaptive Long-Term Moment Estimation Optimizer. Remote Sens. 2021, 14, 157. [Google Scholar] [CrossRef]
  51. Gwet, K.L. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance. Educ. Psychol. Meas. 2016, 76, 609–637. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Rigby, A.S. Statistical methods in epidemiology. v. Towards an understanding of the kappa coefficient. Disabil. Rehabil. 2000, 22, 339–344. [Google Scholar] [CrossRef] [PubMed]
  53. Roberts, C.; McNamee, R. A matrix of kappa-type coefficients to assess the reliability of nominal scales. Stat. Med. 1998, 17, 471–488. [Google Scholar] [CrossRef]
Figure 1. Geographical location map of the study areas.
Figure 2. Model structure of the random forest.
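The ensemble structure in Figure 2 is the standard random forest: many decision trees trained on bootstrap samples, with the final class decided by a majority vote over trees. A minimal sketch using scikit-learn is shown below; the band reflectances and class means are synthetic stand-ins, not the paper's Himawari-8 training samples:

```python
# Minimal random forest classification sketch (synthetic data, not the
# paper's Himawari-8 samples): each sample is a vector of band
# reflectances, each label one of the four classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
classes = ["vegetation", "bare_land", "smoke", "cloud"]
# Fake per-class mean reflectance over 6 bands, plus per-sample noise.
means = rng.uniform(0.05, 0.6, size=(4, 6))
X = np.vstack([m + rng.normal(0, 0.02, size=(200, 6)) for m in means])
y = np.repeat(np.arange(4), 200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
# Predict the class of a new "pixel" near the smoke mean spectrum.
pred = clf.predict([means[2] + 0.01])[0]
print(classes[pred])
```

With well-separated class means and small noise, the forest recovers each class reliably; on real imagery the separability of smoke from cloud is the hard part.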
Figure 3. Flow chart of the improved sub-pixel mapping method based on the random forest result.
Figure 4. Statistical results of the reflectance mean value of the endmember. (a) Xichang study area; (b) Linzhi study area. The x-coordinate represents different bands of the satellite, and the y-coordinate represents the mean values of reflectance. The red polyline represents smoke; the blue polyline represents clouds; the green polyline represents vegetation; the yellow polyline represents bare lands.
Figure 5. Random forest classification result of the Xichang study area. The vegetation pixels are in green, bare lands pixels are in yellow, smoke pixels are in red, and clouds pixels are in blue.
Figure 6. Random forest classification result of the Linzhi study area. The vegetation, bare lands, smoke, and clouds pixels are shown in green, yellow, red, and blue, respectively.
Figure 7. Endmember extraction grey maps of the Xichang study area. (a) Vegetation; (b) bare lands; (c) smoke; (d) clouds. The grey level represents the area proportion of the endmember in a pixel.
Figure 8. Endmember extraction grey maps of the Linzhi study area. (a) Vegetation; (b) bare lands; (c) smoke; (d) clouds.
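The abundance grey maps in Figures 7 and 8 come from spectral unmixing: each coarse pixel's spectrum is modeled as a linear mixture of endmember spectra, and the per-class fractions are inverted subject to non-negativity and sum-to-one constraints. The sketch below is one common formulation (constrained least squares via non-negative least squares with a weighted sum-to-one row), not necessarily the exact inversion used in the paper; the endmember spectra are illustrative, not Himawari-8 values:

```python
# Fully constrained linear unmixing sketch: estimate the fraction of each
# endmember in a mixed pixel, with fractions >= 0 and summing to 1.
import numpy as np
from scipy.optimize import nnls

# Rows: 5 bands; columns: endmember spectra
# (vegetation, bare land, smoke, cloud) -- illustrative values only.
E = np.array([
    [0.04, 0.30, 0.45, 0.60],
    [0.06, 0.35, 0.50, 0.65],
    [0.30, 0.40, 0.55, 0.70],
    [0.35, 0.30, 0.60, 0.75],
    [0.40, 0.28, 0.58, 0.80],
])

def unmix(pixel, E, weight=1000.0):
    # Append a heavily weighted row to enforce sum-to-one;
    # nnls itself enforces non-negativity.
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)
    return fractions

# A pixel that is 70% smoke and 30% vegetation.
true_frac = np.array([0.3, 0.0, 0.7, 0.0])
pixel = E @ true_frac
est = unmix(pixel, E)
print(np.round(est, 3))  # close to [0.3, 0.0, 0.7, 0.0]
```

Applied pixel by pixel, the estimated smoke fraction is exactly the grey level shown in the abundance maps.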
Figure 9. RGB abundance map of the Xichang study area. The smoke, vegetation, and clouds pixels are displayed in red, green, and blue colors, respectively.
Figure 10. RGB abundance map of the Linzhi study area.
Figure 11. Sub-pixel mapping result of the Xichang study area. (a) Before correction based on the PSA method; (b) after correction based on the PSA method; (c) before correction based on the SPSAM method; (d) after correction based on the SPSAM method.
Figure 12. Sub-pixel mapping result of the Linzhi study area. (a) Before correction based on the PSA method; (b) after correction based on the PSA method; (c) before correction based on the SPSAM method; (d) after correction based on the SPSAM method.
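The SPSAM results in Figures 11 and 12 rest on the spatial-attraction idea: the smoke fraction of a coarse pixel fixes *how many* of its sub-pixels are smoke, while the fractions of neighbouring coarse pixels decide *which* sub-pixels, via inverse-distance attraction. A toy sketch in that spirit (not the paper's exact implementation) is:

```python
# Spatial-attraction sub-pixel mapping sketch (SPSAM-like, simplified).
import numpy as np

def spsam(frac, S):
    """frac: coarse smoke-fraction map; S: zoom factor per axis."""
    H, W = frac.shape
    fine = np.zeros((H * S, W * S), dtype=int)
    for i in range(H):
        for j in range(W):
            n_smoke = int(round(frac[i, j] * S * S))
            if n_smoke == 0:
                continue
            # Attraction of every sub-pixel inside coarse cell (i, j)
            # from the fractions of the 8 neighbouring coarse pixels.
            attract = np.zeros((S, S))
            for a in range(S):
                for b in range(S):
                    cy, cx = i + (a + 0.5) / S, j + (b + 0.5) / S
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (di, dj) == (0, 0) or not (0 <= ni < H and 0 <= nj < W):
                                continue
                            d = np.hypot(cy - (ni + 0.5), cx - (nj + 0.5))
                            attract[a, b] += frac[ni, nj] / d
            # Label the n_smoke most-attracted sub-pixels as smoke.
            order = np.argsort(attract, axis=None)[::-1][:n_smoke]
            ys, xs = np.unravel_index(order, (S, S))
            fine[i * S + ys, j * S + xs] = 1
    return fine

frac = np.array([[1.0, 0.5],
                 [0.0, 0.0]]) * np.array([[1.0, 1.0], [0.5, 0.0]]) + \
       np.array([[0.0, 0.0], [0.5, 0.0]])  # = [[1.0, 0.5], [0.5, 0.0]]
fine = spsam(frac, 2)
print(fine)
```

For this input, the smoke sub-pixels cluster toward the high-fraction corner, which is exactly the "fewer spots and holes" behaviour the corrected maps show.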
Figure 13. Comparison between the raw image and the final result in the Xichang study area. (a) Himawari-8 band 1 image with a 2 km spatial resolution (single band image); (b) the final result after correction with a 400 m spatial resolution (binary image).
Figure 14. Comparison between the raw image and the final result in the Linzhi study area. (a) Himawari-8 band 1 image with a 2 km spatial resolution (single band image); (b) the final result after correction with a 400 m spatial resolution (binary image).
Figure 15. Smoke detection results (binary image) based on four methods in the Xichang study area. (a) BP neural network (2 km spatial resolution); (b) the improved sub-pixel mapping method (400 m spatial resolution); (c) random forest (2 km spatial resolution); (d) SVM (2 km spatial resolution).
Figure 16. Smoke detection results (binary image) based on four methods in the Linzhi study area. (a) BP neural network (2 km spatial resolution); (b) the improved sub-pixel mapping method (400 m spatial resolution); (c) random forest (2 km spatial resolution); (d) SVM (2 km spatial resolution).
Figure 17. Results of hourly forest fire smoke detection based on the random forest algorithm. The smoke, clouds, vegetation, and bare lands pixels are in red, blue, green, and brown colors, respectively. (a) 13:00; (b) 14:00; (c) 15:00; (d) 16:00.
Figure 18. Results of abundance inversion based on hourly forest fire smoke detection. (a) 13:00; (b) 14:00; (c) 15:00; (d) 16:00. The grey level represents the area proportion of smoke in a pixel.
Figure 19. Comparison between raw Himawari-8 images and sub-pixel mapping correction results. (a) Raw Himawari-8 images (synthetic bands image with a 2 km spatial resolution); (b) sub-pixel mapping correction results (binary image with a 400 m spatial resolution).
Table 1. Number of samples for each class in the two study areas.

| Characteristic Class | Study Area | Training Samples | Test Samples | Total Number of Samples |
|---|---|---|---|---|
| Vegetation | Xichang | 2092 | 1046 | 3138 |
| Vegetation | Linzhi | 328 | 163 | 491 |
| Bare lands | Xichang | 2851 | 1425 | 4276 |
| Bare lands | Linzhi | 711 | 355 | 1066 |
| Smoke | Xichang | 516 | 257 | 773 |
| Smoke | Linzhi | 172 | 86 | 258 |
| Clouds | Xichang | 1186 | 592 | 1778 |
| Clouds | Linzhi | 184 | 92 | 276 |
Table 2. Confusion matrix structure of smoke detection.

| | Smoke in Classification | Others in Classification |
|---|---|---|
| Smoke in real | True Positive (TP) | False Negative (FN) |
| Others in real | False Positive (FP) | True Negative (TN) |
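All of the accuracy figures reported below (overall accuracy, Cohen's kappa, and the smoke producer's and user's accuracies) follow directly from the four counts of this confusion matrix. A short sketch of the standard formulas, using illustrative counts rather than the study's:

```python
# Accuracy metrics from a binary smoke/other confusion matrix.
# The counts below are illustrative, not from the study.
def smoke_metrics(tp, fn, fp, tn):
    n = tp + fn + fp + tn
    overall = (tp + tn) / n
    producer = tp / (tp + fn)   # complement of omission error
    user = tp / (tp + fp)       # complement of commission error
    # Cohen's kappa: agreement beyond what chance would produce.
    p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    kappa = (overall - p_e) / (1 - p_e)
    return overall, producer, user, kappa

oa, pa, ua, k = smoke_metrics(tp=90, fn=10, fp=20, tn=130)
print(f"OA={oa:.2%}  PA={pa:.2%}  UA={ua:.2%}  kappa={k:.2f}")
```

Producer's accuracy asks "of the real smoke pixels, how many were found?", user's accuracy asks "of the pixels labelled smoke, how many really are?" -- the distinction explains why the two columns differ in the tables that follow.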
Table 3. Random forest classification results statistics of the Xichang study area.

| | Vegetation | Bare Lands | Smoke | Clouds |
|---|---|---|---|---|
| Area proportion (%) | 31.49 | 42.91 | 7.76 | 17.84 |
| Number of pixels | 2870 | 3911 | 707 | 1626 |
Table 4. Random forest classification results statistics of the Linzhi study area.

| | Vegetation | Bare Lands | Smoke | Clouds |
|---|---|---|---|---|
| Area proportion (%) | 23.48 | 50.98 | 12.34 | 13.20 |
| Number of pixels | 491 | 1066 | 258 | 276 |
Table 5. Accuracy evaluation of smoke detection based on random forest.

| Study Area | Overall Accuracy (%) | Kappa Coefficient | Smoke Producer Accuracy (%) | Smoke User Accuracy (%) |
|---|---|---|---|---|
| Xichang | 83.52 | 0.66 | 88.06 | 72.84 |
| Linzhi | 84.68 | 0.69 | 90.57 | 77.42 |
Table 6. Accuracy evaluation of the sub-pixel mapping results before and after correction in Xichang.

| Stage | Method | Overall Accuracy (%) | Kappa Coefficient | Smoke Producer Accuracy (%) | Smoke User Accuracy (%) |
|---|---|---|---|---|---|
| Before correction | PSA | 84.36 | 0.62 | 74.07 | 71.94 |
| Before correction | SPSAM | 83.92 | 0.62 | 80.85 | 70.56 |
| After correction | PSA | 87.95 | 0.74 | 92.31 | 75.52 |
| After correction | SPSAM | 86.88 | 0.73 | 84.91 | 73.85 |
Table 7. Accuracy evaluation of sub-pixel mapping results before and after correction in Linzhi.

| Stage | Method | Overall Accuracy (%) | Kappa Coefficient | Smoke Producer Accuracy (%) | Smoke User Accuracy (%) |
|---|---|---|---|---|---|
| Before correction | PSA | 83.52 | 0.62 | 82.69 | 67.72 |
| Before correction | SPSAM | 84.38 | 0.64 | 82.65 | 70.20 |
| After correction | PSA | 86.32 | 0.69 | 83.51 | 75.70 |
| After correction | SPSAM | 85.22 | 0.68 | 83.65 | 71.28 |
Table 8. Accuracy evaluation of smoke detection based on the four methods in Xichang.

| Method | Overall Accuracy (%) | Kappa Coefficient |
|---|---|---|
| BP neural network | 76.92 | 0.54 |
| Sub-pixel mapping correction | 87.95 | 0.74 |
| Random forest | 83.52 | 0.66 |
| SVM | 76.92 | 0.51 |
Table 9. Accuracy evaluation of smoke detection based on four methods in Linzhi.

| Method | Overall Accuracy (%) | Kappa Coefficient |
|---|---|---|
| BP neural network | 81.45 | 0.63 |
| Sub-pixel mapping correction | 86.32 | 0.69 |
| Random forest | 84.68 | 0.69 |
| SVM | 83.06 | 0.65 |
Table 10. Area proportion statistics of hourly smoke detection based on random forest.

| Time | Smoke Area Proportion (%) | Clouds Area Proportion (%) | Vegetation Area Proportion (%) | Bare Lands Area Proportion (%) |
|---|---|---|---|---|
| 13:00 | 0.78 | 24.45 | 43.31 | 31.46 |
| 14:00 | 1.72 | 20.87 | 44.93 | 32.48 |
| 15:00 | 10.74 | 20.58 | 34.04 | 34.64 |
| 16:00 | 27.57 | 21.05 | 35.57 | 15.81 |
Table 11. Accuracy evaluation statistics of hourly smoke detection based on random forest.

| Time | Overall Accuracy (%) | Kappa Coefficient | Smoke Producer Accuracy (%) | Smoke User Accuracy (%) |
|---|---|---|---|---|
| 13:00 | 82.59 | 0.76 | 70.83 | 98.08 |
| 14:00 | 81.18 | 0.74 | 72.56 | 95.34 |
| 15:00 | 82.67 | 0.75 | 78.63 | 93.37 |
| 16:00 | 83.52 | 0.78 | 83.57 | 88.69 |
Table 12. Accuracy evaluation of hourly smoke detection based on sub-pixel mapping correction.

| Time | Overall Accuracy (%) | Kappa Coefficient | Smoke Producer Accuracy (%) | Smoke User Accuracy (%) |
|---|---|---|---|---|
| 13:00 | 89.75 | 0.78 | 78.38 | 86.57 |
| 14:00 | 88.31 | 0.77 | 77.93 | 88.21 |
| 15:00 | 88.69 | 0.76 | 80.46 | 89.94 |
| 16:00 | 86.65 | 0.75 | 77.58 | 85.72 |

Share and Cite

MDPI and ACS Style

Li, X.; Zhang, G.; Tan, S.; Yang, Z.; Wu, X. Forest Fire Smoke Detection Research Based on the Random Forest Algorithm and Sub-Pixel Mapping Method. Forests 2023, 14, 485. https://doi.org/10.3390/f14030485


