Article

A Semantic Segmentation Method for Early Forest Fire Smoke Based on Concentration Weighting

1
School of Technology, Beijing Forestry University, Beijing 100083, China
2
China Fire and Rescue Institute, Beijing 102202, China
3
Ontario Ministry of Northern Development, Mines, Natural Resources and Forestry, Sault Ste. Marie, ON 279541, Canada
*
Author to whom correspondence should be addressed.
Electronics 2021, 10(21), 2675; https://doi.org/10.3390/electronics10212675
Submission received: 26 September 2021 / Revised: 28 October 2021 / Accepted: 29 October 2021 / Published: 31 October 2021

Abstract

Forest fire smoke detection based on deep learning has been widely studied. Labeling the smoke image is a necessity when building datasets for target detection and semantic segmentation. The uncertainty in labeling forest fire smoke pixels caused by the non-uniform diffusion of smoke particles affects the recognition accuracy of the deep learning model. To overcome this labeling ambiguity, the weighting idea was proposed in this paper for the first time. First, the pixel-concentration relationship between the gray value and the concentration of forest fire smoke pixels in the image was established. Second, the loss function of the semantic segmentation method based on concentration weighting was built and improved, so that the network could pay different levels of attention to smoke pixels and better segment smoke by weighting the loss calculation of smoke pixels. Finally, based on the established forest fire smoke dataset, the optimal weighting factors were selected through experiments. The mIoU of the weighted method increased by 1.52% compared with the unweighted method. The weighted method can not only be applied to the semantic segmentation and target detection of forest fire smoke, but is also of significance for the recognition of other dispersive targets.

1. Introduction

The security risks and destruction of ecological balance caused by forest fires have increased dramatically in recent years in terms of both frequency and scale [1,2,3,4]. Forest fire monitoring and detection are therefore of great significance for reducing these hazards. However, relying solely on manual monitoring and detection of forest fires is very laborious. The development of science and technology has made it possible to monitor and detect forest fires automatically [5,6,7].
Many researchers have been working on automatic smoke detection to reduce damage, since smoke provides earlier clues for forest fire alarms than flames [8,9,10,11]. Many forest fire detection methods based on smoke recognition have been proposed in the past decade, among which image-based forest fire smoke detection is the most widely used [12,13,14,15,16,17,18,19]. Strictly speaking, image-based smoke detection for forest fires can be divided into three categories. The first is to judge only whether there is forest fire smoke in an image or not, which is known as whole-image forest fire smoke recognition. The second is not only to recognize whether there is forest fire smoke, but also to indicate the locations of forest fire smoke with bounding boxes [20]; this category is called forest fire smoke detection. The third is to densely classify each pixel in an image, which is known as forest fire smoke segmentation.
Forest fire smoke segmentation is a far more difficult task than forest fire smoke recognition and detection. It requires the accurate separation of forest fire smoke from the background scene of an image at the pixel level. Forest fire smoke segmentation outputs a mask with detailed edges, involving object classification, localization and boundary delineation. Traditional forest fire smoke segmentation methods mainly use hand-crafted features, such as smoke color, texture and motion [21,22,23,24,25,26,27,28,29,30,31]. Nevertheless, it is quite difficult to define, design or choose useful features due to the large variations in forest fire smoke appearance, resulting in poor segmentation performance. Furthermore, some forest fire smoke segmentation methods extract dynamic features from videos [32]; however, these are extremely unstable in bad weather. Therefore, forest fire smoke segmentation from static images plays a very important role in the visual monitoring and detection of forest fire smoke.
In recent years, many methods based on convolutional neural networks (CNNs) have attracted attention due to their outstanding performance in image segmentation [33]. Semantic segmentation based on a CNN, with an arbitrary-size image as input, utilizes a set of convolutional layers, non-linear activation functions, pooling and upsampling layers to output a predicted image [34,35,36,37,38]. Moreover, CNNs have achieved many significant results in the field of visual detection of forest fire smoke [39,40].
For a forest fire smoke segmentation method based on a CNN, it is necessary to manually label each pixel in all training images as forest fire smoke or background. However, the fuzziness, translucency and varied concentration of forest fire smoke make it extremely difficult to label forest fire smoke accurately, resulting in subjectivity and ambiguity in the labels; thus, annotating such a training dataset has become a bottleneck in applying these models to forest fire detection.
The labeling problem is widespread in other recognition tasks based on deep learning and has been studied by many researchers [41,42,43,44,45]. However, in the field of forest fires, the labeling problem has not been studied. This paper focuses on how to reduce the impact of the uncertainty for labeling forest fire smoke on smoke segmentation.
In order to improve the accuracy of semantic segmentation of forest fire smoke images and eliminate the impact of labeling ambiguity on the recognition results, a semantic segmentation method based on concentration weighting was first proposed in this paper. By introducing a weighted factor as a measure of the labeling uncertainty, this method can avoid treating all labeled pixels equally so as to improve the accuracy of the model. The weighted method was tested and evaluated on the forest fire smoke dataset.

2. Materials and Methods

For semantic segmentation of forest fire smoke, the influence of smoke concentration was considered, and the idea of weighting was introduced in this paper. By establishing the pixel-concentration relationship of forest fire smoke in the image, the influence of the labeling ambiguity caused by non-uniform smoke diffusion would be alleviated and the recognition accuracy of forest fire smoke would be improved. The method framework of this paper is shown in Figure 1.

2.1. Forest Fire Smoke Labeling Based on Weight

The inputs of the semantic segmentation network were the original images and the ground truth (GT) images corresponding to them. The pixel value of a forest fire smoke pixel in the GT images was labeled as 1 [48,49] and that of a non-smoke pixel was labeled as 0, as in Figure 2a,b. The concentration of forest fire smoke varies from pixel to pixel because of the non-uniform diffusion of smoke particles. Due to the influence of environmental factors, the concentration of smoke particles gradually decreases during diffusion, which blurs the edges of the smoke in the image or mixes it with backgrounds such as cloud and fog, causing uncertainty in the labeling. It is impossible to reflect this kind of uncertainty by simply labeling pixels as 1 or 0 without distinction, and the resulting labeling inaccuracy leads to misidentification by the trained network model.
The weighting idea in this paper is to integrate the weight into the original method so that the network understands that forest fire smoke differs in concentration. The introduced weighting factor serves as a measure of the uncertainty of the labeled pixels, avoiding treating all labeled pixels equally and identifying forest fire smoke more accurately.
The forest fire smoke concentration is directly correlated with the smoke pixel value in the forest fire smoke image: differences in smoke concentration within the same image are represented by differences in smoke pixel value. For white smoke, the higher the smoke concentration, the higher the smoke pixel value in the image, while black smoke is the opposite.
Therefore, establishing the relationship between the pixel value and the concentration distribution of the forest fire smoke pixel in the image is necessary for the introduction of weight. A normalization method to establish the pixel-concentration relationship was adopted in this paper as shown in Equations (1) and (2).
G_r(x, y) = G(x, y) − min(G(x, y)),  (1)
R_gc = G_r(x, y) / max(G_r(x, y)),  (2)
where G(x, y) is the pixel value of a pixel in the smoke area, shown as the white area in Figure 2b; min(G(x, y)) is the minimum pixel value of the smoke area; G_r(x, y) is the relative pixel value of the smoke area; R_gc is the basic pixel-concentration coefficient; and max(G_r(x, y)) is the maximum relative pixel value of the smoke area.
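As an illustration, the normalization of Equations (1) and (2) can be sketched in a few lines of NumPy (the paper does not publish code, so the function and variable names here are our own):

```python
import numpy as np

def pixel_concentration(gray, smoke_mask):
    """Basic pixel-concentration coefficient R_gc of Equations (1)-(2).

    gray:       2-D array of gray values for the whole image.
    smoke_mask: boolean array, True where the GT image labels smoke.
    Assumes the smoke area is not perfectly uniform (max(G_r) > 0).
    """
    smoke = gray[smoke_mask].astype(float)
    g_r = smoke - smoke.min()          # Equation (1): relative pixel value
    r_gc = g_r / g_r.max()             # Equation (2): normalize to [0, 1]
    out = np.zeros(gray.shape, dtype=float)
    out[smoke_mask] = r_gc
    return out
```

Background pixels are simply left at 0 here; only the smoke area carries a concentration value.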
In order to discriminate between smoke, cloud, and fog, the background information of the smoke should be included in the pixel-concentration relationship. Therefore, the contrast coefficient k was introduced, as shown in Equation (3). The greater the gap between the average pixel value of the forest fire smoke area and the average pixel value of the entire image, the larger the contrast coefficient, so that it is much easier to identify the smoke area.
k = | (Ḡ − Ḡ_p) / (Ḡ_ns − Ḡ_p) |,  (3)
where Ḡ is the pixel mean of the whole image, Ḡ_p is the pixel mean of the smoke area, Ḡ_ns is the pixel mean of the non-smoke area and k is the contrast coefficient. The value range of k is [0, 1], which reflects the relative distance between the pixel mean of the smoke area and the pixel mean of the whole image.
Finally, the weighted coefficient reflecting the pixel-concentration relationship is:
λ = k × R_gc + 1 − k,  (4)
where λ is the concentration weight. The value range of λ is [1 − k, 1]. Equation (4) raises the lower limit of the pixel-concentration relationship, which enhances the confidence of the model for smoke. The weighted image is shown in Figure 2c.
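Equations (3) and (4) can be sketched similarly; a minimal NumPy version, assuming gray values and a boolean smoke mask as above (names are illustrative):

```python
import numpy as np

def concentration_weight(gray, smoke_mask):
    """Contrast coefficient k (Equation (3)) and concentration weight
    lambda = k * R_gc + 1 - k (Equation (4)). Smoke pixels receive a
    weight in [1 - k, 1]; background pixels are left at 0 here."""
    gray = gray.astype(float)
    smoke = gray[smoke_mask]
    non_smoke = gray[~smoke_mask]
    # Equation (3): relative distance of the smoke mean from the image mean
    k = abs((gray.mean() - smoke.mean()) / (non_smoke.mean() - smoke.mean()))
    # Equations (1)-(2): basic pixel-concentration coefficient R_gc
    r_gc = (smoke - smoke.min()) / (smoke.max() - smoke.min())
    weights = np.zeros(gray.shape)
    weights[smoke_mask] = k * r_gc + (1.0 - k)   # Equation (4)
    return k, weights
```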

2.2. Improvement for the Loss Function

In the training process of the semantic segmentation network, when calculating the loss value, the contribution of each pixel in the smoke area to the loss value should be evaluated according to the weighted image. The improved loss function was as follows:
L = α · L_CE + (1 − α) · L_W,  (5)
where L is the overall loss function, L_CE is the cross-entropy loss function [50,51], L_W is the weighted loss function, which calculates the loss between the predicted value and the weight, and α is the control coefficient, which determines the proportion of the weighted part in the overall loss function. L_W and α were determined by experiments.
When the weighted loss was not considered, the network was trained with the cross-entropy loss function alone, as shown in Equation (6). When the weighted loss was considered, the weighting idea was introduced and the loss function was Equation (5).
L_CE = −log(ŷ_i) if y_i = 1;  −log(1 − ŷ_i) if y_i = 0,  (6)
where y_i ∈ {0, 1} is the category label of a real image and ŷ_i ∈ [0, 1] is the predicted probability that the corresponding category label is 1.
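Putting Equations (5) and (6) together, the overall loss for a batch of pixel predictions might look like the following sketch, where the weighted term L_W is taken as an L1 distance to the weight image, one of the candidates discussed below (this is our illustration, not the authors' training code):

```python
import numpy as np

def combined_loss(y_true, y_pred, weights, alpha):
    """Overall loss L = alpha * L_CE + (1 - alpha) * L_W, Equation (5).

    y_true:  0/1 pixel labels, y_pred: predicted probabilities,
    weights: concentration weights lambda, alpha: control coefficient.
    """
    eps = 1e-7                                   # avoid log(0)
    p = np.clip(y_pred, eps, 1.0 - eps)
    # Cross-entropy, Equation (6), averaged over pixels
    l_ce = -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    # Weighted loss: here an L1 distance between prediction and weight
    l_w = np.mean(np.abs(y_pred - weights))
    return alpha * l_ce + (1 - alpha) * l_w
```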
Since the weight is a value distributed in a certain interval, calculating the weighted loss is a regression problem. Common regression loss functions include the Mean Absolute Error loss (L_MAE) [52], the Mean Squared Error loss (L_MSE) [53] and the Cosine Proximity loss (L_CP) [54]. On this basis, the corresponding weighted loss functions were constructed, as shown in Equations (7)–(9).
L_MAE = (1/N) Σ_{i=0}^{N} |ŷ_i − y_i^Δ|,  (7)
L_MSE = (1/N) Σ_{i=0}^{N} (ŷ_i − y_i^Δ)²,  (8)
L_CP = 1 − Σ_{i=0}^{N} ŷ_i · y_i^Δ / ( √(Σ_{i=0}^{N} (ŷ_i)²) · √(Σ_{i=0}^{N} (y_i^Δ)²) ),  (9)
where N is the number of samples and y_i^Δ ∈ [0, 1] is the weight when the corresponding category label is 1. L_MAE and L_MSE are also known as L1 loss and L2 loss, respectively. The original L_CP is the negative of the cosine similarity between the predicted values and the weights; since its minimum is −1, it is not suitable for combination with other loss functions. In this paper, 1 was added to L_CP so that its minimum is 0. The optimal type of weighted loss function L_W was determined by experimental analysis.
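The three candidate weighted loss functions of Equations (7)–(9) can be written down directly; a NumPy sketch with our own function names:

```python
import numpy as np

def l_mae(y_pred, w):
    """Equation (7): mean absolute error (L1 loss) against the weights."""
    return np.mean(np.abs(y_pred - w))

def l_mse(y_pred, w):
    """Equation (8): mean squared error (L2 loss) against the weights."""
    return np.mean((y_pred - w) ** 2)

def l_cp(y_pred, w):
    """Equation (9): cosine proximity loss, shifted by +1 so its minimum is 0."""
    cos = np.sum(y_pred * w) / (np.sqrt(np.sum(y_pred ** 2)) * np.sqrt(np.sum(w ** 2)))
    return 1.0 - cos
```

A perfect prediction (y_pred equal to the weights) drives all three losses to 0, which is what makes the +1 shift of L_CP necessary for combining it with the cross-entropy term.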

3. Results

3.1. Experimental Platform

The experimental environment was Ubuntu 16.04 and the deep learning framework was Keras. The hardware configuration included an E5-2620 CPU and a GeForce GTX 1080 Ti GPU. The encoder of the semantic segmentation network was MobileNet [46] and the decoder was PSPNet [47]. The batch size was 4 during training. The initial learning rate was set to 0.0001 and the optimization method was Adam. The learning rate was then dynamically adjusted according to the loss value of the validation set: once the validation loss stops decreasing, the learning rate decays by a factor of 0.9. The network input consisted of RGB images, GT images and weighted images, all resized to 576 × 576.

3.2. Forest Fire Smoke Dataset

Taking into consideration the environment, shooting angle, shooting distance and interference such as the coexistence of clouds and smoke, the forest fire smoke dataset was composed of 176 forest fire smoke images collected from the literature and websites. According to the concentration of forest fire smoke and cloud interference, the dataset was divided into four categories: thick smoke (TKS), thin smoke (TNS), thick smoke and clouds (TKSC), and thin smoke and clouds (TNSC), as shown in Figure 3. The category distribution, shown in Table 1, is basically the same as that in the literature and websites, because the images were collected at random without considering smoke concentration. 10% of the images in the dataset were randomly selected as the test set, and the remaining images were randomly divided into the training set and the validation set at a ratio of 1:1.
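The split described above (a random 10% test set, with the remainder divided 1:1 into training and validation) can be sketched in plain Python; the names and the seed are our own:

```python
import random

def split_dataset(images, seed=0):
    """Random 10% test split; remaining images split 1:1 into
    training and validation sets, as described in the text."""
    rng = random.Random(seed)
    shuffled = list(images)
    rng.shuffle(shuffled)
    n_test = round(0.1 * len(shuffled))
    test = shuffled[:n_test]
    rest = shuffled[n_test:]
    half = len(rest) // 2
    return rest[:half], rest[half:], test   # train, validation, test
```

With the 176-image dataset this yields an 18-image test set and 79 images each for training and validation.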

3.3. Evaluation Index

To better verify the accuracy of the semantic segmentation network for forest fire smoke recognition, the mean intersection over union (mIoU) was used to evaluate model performance in this paper. The larger the mIoU, the better the recognition performance.
mIoU is a standard indicator of semantic segmentation tasks [55]. In the semantic segmentation field, IoU is essentially a method to quantify the overlap percentage between the target mask and the prediction mask. Specifically, it refers to the ratio of the number of pixels in the common area of the target mask and the prediction mask to the total number of pixels between them. mIoU is the average of IoUs for each category, as shown in Equation (10).
mIoU = (1/(k + 1)) Σ_{i=0}^{k} p_ii / ( Σ_{j=0}^{k} p_ij + Σ_{j=0}^{k} p_ji − p_ii ),  (10)
where, since each pixel in the image has a category label, the total number of categories is assumed to be k + 1, including k object categories and 1 background. p_ij represents the number of pixels of category i predicted to be category j. In this paper, k is 1.
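Equation (10) amounts to building a confusion matrix and averaging the per-class IoU; a minimal sketch, assuming every class appears so no denominator is zero:

```python
import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """mIoU of Equation (10): IoU_i = p_ii / (sum_j p_ij + sum_j p_ji - p_ii),
    averaged over k + 1 classes (here k = 1 object class plus background)."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for g, p in zip(gt.ravel(), pred.ravel()):
        conf[g, p] += 1        # rows: true class i, columns: predicted class j
    ious = []
    for i in range(num_classes):
        denom = conf[i, :].sum() + conf[:, i].sum() - conf[i, i]
        ious.append(conf[i, i] / denom)
    return float(np.mean(ious))
```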

3.4. The Segmentation Results for the Weighted Method

Three main factors affect the weighted method: the relationship between the pixel value and the concentration distribution of forest fire smoke, the type of weighted loss function L_W and the control coefficient α. There are two candidate pixel-concentration relationships, R_gc and λ. The former is simply the normalized relative pixel value of forest fire smoke, which is approximately regarded as its concentration distribution, while the latter multiplies the former by the contrast coefficient k and then raises the lower limit of the pixel-concentration relationship.
For the pixel-concentration distributions R_gc and λ, the experimental segmentation results with different types of weighted loss function L_W and control coefficient α are shown in Figure 4.
The mIoU results for the three types of weighted loss functions L_W are shown in Table 2. Each experiment in Table 2 was repeated 20 times and the average value was taken as the final result. When the pixel-concentration relationship is λ, the optimal weighted loss function is L_MAE and the optimal control coefficient is 0.1; the mIoU of this weighted method is 75.49%, the highest among the weighted methods.
Figure 5 shows the corresponding segmentation images of these methods and provides a visual comparison between the segmentation images and the corresponding GT images. The results obtained by the method proposed in this paper are more similar to the GT images than those of the method without weighting. Figure 5d,h are the results of the weighted method.
In order to evaluate the statistical significance of classification differences, 10-fold cross-validation was performed so that the average (Ave) and standard deviation (SD) of the experimental results could be calculated over 10 runs for the different approaches and algorithms. As shown in Figure 6, the dataset was randomly divided into ten equal parts; one part was used as the test set, and the images in the remaining nine parts were randomly allocated to the training set and the validation set at a ratio of 1:1. Each test set is different, ensuring that every image in the dataset is used as test data exactly once.
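The 10-fold protocol described above can be sketched as follows (a plain-Python illustration with our own names; the folds here are interleaved slices rather than the paper's exact partition):

```python
import random

def ten_fold_splits(images, seed=0):
    """Yield (train, validation, test) for each of 10 folds: every image
    is in the test set exactly once; the other nine folds are split 1:1."""
    rng = random.Random(seed)
    shuffled = list(images)
    rng.shuffle(shuffled)
    folds = [shuffled[i::10] for i in range(10)]
    for i in range(10):
        test = folds[i]
        rest = [img for j, fold in enumerate(folds) if j != i for img in fold]
        rng.shuffle(rest)
        half = len(rest) // 2
        yield rest[:half], rest[half:], test
```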
First, 10-fold cross-validation was used to evaluate the performance of the model with (w.) and without (w/o) weights. As shown in Table 3, the average mIoU of the weighted method was 1.52% higher than that of the method without weighting, which verifies the effectiveness of the weighted method. At the same time, the weighted method has a lower standard deviation, indicating a more stable network.
The performance of the proposed algorithm was then investigated by considering the segmentation performance of the four categories (thick smoke, thin smoke, thick smoke and clouds, and thin smoke and clouds) separately, as shown in Table 4. For TKS, the weighted method performs well compared with the unweighted method. For TNS and TKSC, the mean mIoU of the weighted method is higher than that of the unweighted method, but its standard deviation is larger. The reason for this may be that the amount of data for TNS and TKSC is too small, which leads to larger fluctuations in the model. The result for TNSC further verifies that the weighted method fails when the data for a category is severely insufficient.

4. Discussion

In order to evaluate the effectiveness of the weighted method, comparative experiments with and without the weight were conducted on several common semantic segmentation networks, namely FCN [56], Segnet [57] and Unet [58], and on a forest fire smoke detection method, Frizzi [39]. The control coefficient and the weighted loss function type for all the tested network architectures were determined by experiments similar to those in Section 3.4, as shown in Table 5. The comparative results for the above segmentation methods with and without weighting are shown in Table 6.
As shown in Table 6, for each semantic segmentation network, the mIoU of the weighted method improved over the unweighted method to some degree. The experimental results also show that the optimal type of weighted loss function and the optimal control coefficient may differ between segmentation methods. For a specific dataset, the three parameters of the weighted method, namely the pixel-concentration relationship, the type of loss function and the control coefficient, should be determined by experiments.
The above experiments show that the amount of data is an important factor affecting the weighted method. If the amount of data is too small, the performance of the network fluctuates greatly, as evidenced by the larger standard deviation of the weighted method under 10-fold cross-validation when the data is insufficient. The amount of data will be further expanded in future research. In addition, as a multi-objective optimization problem, the selection of the specific parameters of the weighted method will be a focus of further research.

5. Conclusions

The semantic segmentation method based on concentration weighting was proposed for the first time in this paper. After building a semantic segmentation dataset of forest fire smoke, the pixel-concentration relationship between the gray value and the concentration of forest fire smoke pixels in the image was established. Then the loss function of the semantic segmentation method based on concentration weighting was built and improved. Finally, the optimal weighting factors were selected through experiments. The segmentation experiments based on the weighted method were conducted and the mIoU increased by 1.52% compared with the unweighted method. It can be concluded that the weighted method has better segmentation and recognition performance than the unweighted method and can reduce the influence of labeling ambiguity on segmentation results to a certain extent. The weighted method can not only be applied to the semantic segmentation and target detection of forest fire smoke, but is also of significance for the recognition of other dispersive targets.

Author Contributions

Conceptualization, Z.W. and C.Z.; methodology, Z.W., C.Z. and Y.T.; software, Z.W.; validation, Z.W., C.Z., Y.T., J.Y. and W.C.; formal analysis, Z.W.; investigation, Z.W.; resources, Z.W.; data curation, Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, Z.W., C.Z., Y.T., J.Y. and W.C.; visualization, Z.W.; supervision, C.Z., Y.T., J.Y. and W.C.; project administration, C.Z.; funding acquisition, C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 31971668.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Van Wees, D.; van Der Werf, G.R.; Randerson, J.T.; Andela, N.; Chen, Y.; Morton, D.C. The role of fire in global forest loss dynamics. Glob. Chang. Biol. 2021, 27, 2377–2391.
  2. Coogan, S.C.P.; Daniels, L.D.; Den, B.; Burton, P.J.; Flannigan, M.D.; Gauthier, S.; Kafka, V.; Park, J.S.; Wotton, B.M. Fifty years of wildland fire science in Canada. Can. J. For. Res. 2021, 51, 283–302.
  3. Fischer, R. The Long-Term Consequences of Forest Fires on the Carbon Fluxes of a Tropical Forest in Africa. Appl. Sci. 2021, 11, 4696.
  4. Milanovic, S.; Markovic, N.; Pamucar, D.; Gigovic, L.; Kostic, P.; Milanovic, S.D. Forest Fire Probability Mapping in Eastern Serbia: Logistic Regression versus Random Forest Method. Forests 2021, 12, 5.
  5. Cang, N.M.; Yu, W.J. Early forest fire smoke detection based on aerial video. J. Phys. Conf. Ser. 2020, 1684, 012095.
  6. Chen, T.H.; Wu, P.H.; Chiou, Y.C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; Volume 1703, pp. 1707–1710.
  7. Chino, D.Y.T.; Avalhais, L.P.S.; Rodrigues, J.F.; Traina, A.J.M. BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis. In Proceedings of the 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015; pp. 95–102.
  8. Filkov, A.I.; Duff, T.J.; Penman, T.D. Improving Fire Behaviour Data Obtained from Wildfires. Forests 2018, 9, 81.
  9. McKenzie, D.; Shankar, U.; Keane, R.E.; Stavros, E.N.; Heilman, W.E.; Fox, D.G.; Riebau, A.C. Smoke consequences of new wildfire regimes driven by climate change. Earth’s Future 2014, 2, 35–39.
  10. Hinojosa, M.B.; Laudicina, V.A.; Parra, A.; Albert-Belda, E.; Moreno, J.M. Drought and its legacy modulate the post-fire recovery of soil functionality and microbial community structure in a Mediterranean shrubland. Glob. Chang. Biol. 2019, 25, 1409–1427.
  11. Gigović, L.; Pourghasemi, H.R.; Drobnjak, S.; Bai, S. Testing a New Ensemble Model Based on SVM and Random Forest in Forest Fire Susceptibility Assessment and Its Mapping in Serbia’s Tara National Park. Forests 2019, 10, 408.
  12. Zhao, E.; Liu, Y.; Zhang, J.; Tian, Y. Forest Fire Smoke Recognition Based on Anchor Box Adaptive Generation Method. Electronics 2021, 10, 566.
  13. Liu, C.; Liu, P.; Ji, Y. Research on forest fire smoke detection technology based on video region dynamic features. J. Beijing For. Univ. 2021, 43, 10–19.
  14. Wu, X.; Cao, Y.; Lu, X.; Leung, H. Patchwise dictionary learning for video forest fire smoke detection in wavelet domain. Neural Comput. Appl. 2021, 33, 7965–7977.
  15. Wang, Y.; Dang, L.; Ren, J. Forest fire image recognition based on convolutional neural network. J. Algorithms Comput. Technol. 2019, 13, 1748302619887689.
  16. Sun, X.; Sun, L.; Huang, Y. Forest fire smoke recognition based on convolutional neural network. J. For. Res. 2020, 32, 1921–1927.
  17. Dasari, P.; Reddy, G.; Gudipalli, A. Forest fire detection using wireless sensor networks. Int. J. Smart Sens. Intell. Syst. 2020, 13, 1–8.
  18. Sudhakar, S.; Vijayakumar, V.; Sathiya Kumar, C.; Priya, V.; Ravi, L.; Subramaniyaswamy, V. Unmanned Aerial Vehicle (UAV) based Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Comput. Commun. 2020, 149, 1–16.
  19. Peng, Y.; Wang, Y. Real-time forest smoke detection using hand-designed features and deep learning. Comput. Electron. Agric. 2019, 167, 105029.
  20. Pan, J.; Ou, X.; Xu, L. A Collaborative Region Detection and Grading Framework for Forest Fire Smoke Using Weakly Supervised Fine Segmentation and Lightweight Faster-RCNN. Forests 2019, 12, 768.
  21. Krstinic, D.; Stipanicev, D.; Jakovcevic, T. Histogram-based smoke Segmentation in forest fire detection system. Inf. Technol. Control 2009, 38, 237–244.
  22. Young, M.J.; Seok-hwan, Y. An Intelligent Automatic Early Detection System of Forest Fire Smoke Signatures using Gaussian Mixture Model. J. Inf. Process. Syst. 2013, 9, 621–632.
  23. Prema, C.E.; Vinsley, S.S.; Suresh, S. Multi Feature Analysis of Smoke in YUV Color Space for Early Forest Fire Detection. Fire Technol. 2016, 52, 1319–1342.
  24. Zhou, H. Forest fire smoke detection method based on video image. For. Fire Prev. 2014, 4, 30–33.
  25. Xu, R.; Lin, H.; Lu, K.; Cao, L.; Liu, Y. A Forest Fire Detection System Based on Ensemble Learning. Forests 2021, 12, 217.
  26. Zheng, H.; Zhai, J. Forest fire smoke detection based on video analysis. J. Nanjing Univ. Sci. Technol. 2015, 39, 686–691, 710.
  27. Filonenko, A.; Caceres Hernandez, D.; Jo, K.H. Fast Smoke Detection for Video Surveillance Using CUDA. IEEE Trans. Ind. Inform. 2018, 14, 725–733.
  28. Zhao, M.; Zhang, W.; Wang, X.; Liu, Y. A Smoke Detection Algorithm with Multi-Texture Feature Exploration Under a Spatio-Temporal Background Model. J. Xi’an Jiaotong Univ. 2018, 52, 67–73.
  29. Li, H.; Yuan, F. Image based smoke detection using pyramid texture and edge features. J. Image Graph. 2015, 20, 772–780.
  30. Surit, S.; Chatwiriya, W. Forest Fire Smoke Detection in Video Based on Digital Image Processing Approach with Static and Dynamic Characteristic Analysis. In Proceedings of the 2011 First ACIS/JNU International Conference on Computers, Networks, Systems and Industrial Engineering, Washington, DC, USA, 23–25 May 2011; pp. 35–39.
  31. Yuan, F.; Shi, J.; Xia, X.; Yang, Y.; Fang, Y.; Wang, R. Sub Oriented Histograms of Local Binary Patterns for Smoke Detection and Texture Classification. Ksii Trans. Internet Inf. Syst. 2016, 10, 1807–1823.
  32. Wu, X.; Lu, X.; Leung, H. A motion and lightness saliency approach for forest smoke segmentation and detection. Multimed. Tools Appl. 2020, 79, 69–88.
  33. Zhang, J.; Zhu, H.; Wang, P.; Ling, X. ATT Squeeze U-Net: A Lightweight Network for Forest Fire Detection and Recognition. IEEE Access 2021, 9, 10858–10870.
  34. Krstinic, D.; Braovic, M.; Bozic-Stulic, D. Convolutional Neural Networks and Transfer Learning Based Classification of Natural Landscape Images. J. Univers. Comput. Sci. 2020, 26, 244–267.
  35. Kaabi, R.; Bouchouicha, M.; Mouelhi, A.; Sayadi, M.; Moreau, E. An Efficient Smoke Detection Algorithm Based on Deep Belief Network Classifier Using Energy and Intensity Features. Electronics 2020, 9, 1390.
  36. Tao, C.; Zhang, J.; Wang, P. Smoke Detection Based on Deep Convolutional Neural Networks. In Proceedings of the 2016 International Conference on Industrial Informatics—Computing Technology, Intelligent Technology, Industrial Information Integration (ICIICII), Wuhan, China, 3–4 December 2016; pp. 150–153.
  37. Gagliardi, A.; de Gioia, F.; Saponara, S. A real-time video smoke detection algorithm based on Kalman filter and CNN. J. Real-Time Image Process. 2021, 18, 1–11.
  38. Luo, Y.; Zhao, L.; Liu, P.; Huang, D. Fire smoke detection algorithm based on motion characteristic and convolutional neural networks. Multimed. Tools Appl. 2018, 77, 15075–15092.
  39. Frizzi, S.; Bouchouicha, M.; Ginoux, J.; Moreau, E.; Sayadi, M. Convolutional neural network for smoke and fire semantic segmentation. IET Image Process. 2021, 15, 634–647.
  40. Zhu, G.; Chen, Z.; Liu, C.; Rong, X.; He, W. 3D video semantic segmentation for wildfire smoke. Mach. Vis. Appl. 2020, 31, 1–10.
  41. Fang, M.; Zhu, X. Active learning with uncertain labeling knowledge. Pattern Recognit. Lett. 2014, 43, 98–108.
  42. Harbaugh, R.; Maxwell, J.W.; Roussillon, B. Label Confusion: The Groucho Effect of Uncertain Standards. Manag. Sci. 2011, 57, 1512–1527.
  43. Raith, A.; Schmidt, M.; Schoebel, A.; Thom, L. Extensions of labeling algorithms for multi-objective uncertain shortest path problems. Networks 2018, 72, 84–127.
  44. Zeyl, T.; Yin, E.; Keightley, M.; Chau, T. Partially supervised P300 speller adaptation for eventual stimulus timing optimization: Target confidence is superior to error-related potential score as an uncertain label. J. Neural Eng. 2016, 13, 026008.
  45. Yuan, F.; Zhang, L.; Xia, X.; Wan, B.; Huang, Q.; Li, X. Deep smoke segmentation. Neurocomputing 2019, 357, 248–260.
  46. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
  47. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239.
  48. Yuan, X.; Shi, J.; Gu, L. A review of deep learning methods for semantic segmentation of remote sensing imagery. Expert Syst. Appl. 2021, 169, 114417.
  49. Li, Y.; Shi, T.; Zhang, Y.; Chen, W.; Li, H. Learning deep semantic segmentation network under multiple weakly-supervised constraints for cross-domain remote sensing image semantic segmentation. ISPRS J. Photogramm. Remote Sens. 2021, 175, 20–33.
  50. Xu, H.; Yang, M.; Deng, L.; Qian, Y.; Wang, C. Neutral Cross-Entropy Loss Based Unsupervised Domain Adaptation for Semantic Segmentation. IEEE Trans. Image Process. 2021, 30, 4516–4525.
  51. Niu, X.; Yan, B.; Tan, W.; Wang, J. Effective image restoration for semantic segmentation. Neurocomputing 2020, 374, 100–108.
  52. Qi, J.; Du, J.; Siniscalchi, S.M.; Ma, X.; Lee, C.H. On Mean Absolute Error for Deep Neural Network Based Vector-to-Vector Regression. IEEE Signal Process. Lett. 2020, 27, 1485–1489.
  53. Zhang, N.N.; Ni, J.G.; Chen, J.; Li, Z. Steady-State Mean-Square Error Performance Analysis of the Tensor LMS Algorithm. IEEE Trans. Circuits Syst. II-Express Briefs 2021, 68, 1043–1047. [Google Scholar] [CrossRef]
  54. Pambudi, R.A.; Adiwijaya Mubarok, M.S. Multi-label classification of Indonesian news topics using Pseudo Nearest Neighbor Rule. J. Phys. Conf. Ser. 2019, 1192, 012031. [Google Scholar] [CrossRef]
  55. Hou, F.; Lei, W.; Li, S.; Xi, J.; Xu, M.; Luo, J. Improved Mask R-CNN with distance guided intersection over union for GPR signature detection and segmentation. Autom. Constr. 2021, 121, 103414. [Google Scholar] [CrossRef]
  56. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
  57. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  58. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Volume 9351, pp. 234–241. [Google Scholar]
Figure 1. The main framework of the weighted method proposed in this paper. The network is a pre-existing deep learning semantic segmentation network: the encoder is MobileNet [46] and the decoder is PSPnet [47]. The network was initialized with the weights of a MobileNet pre-trained on ImageNet. The final optimization loss comprises the weighted loss and the cross-entropy loss.
Figure 2. Original image, GT image and weighted image. (a) The original image. (b) The GT image: the white area with pixel value 1 is the smoke region and the black area with pixel value 0 is the non-smoke region. (c) The weighted image: the pixel values of the smoke region are distributed in [0, 1].
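The weighted image in Figure 2c can be sketched as follows. This is a minimal illustration, not the paper's exact formula: it assumes that, within the labeled smoke mask, brighter pixels correspond to denser smoke, and min-max normalizes the gray values inside the mask to [0, 1], with background pixels weighted 0.

```python
# Hedged sketch: build a per-pixel concentration weight map from gray values.
# Assumption (illustrative, not the paper's exact mapping): thicker smoke is
# brighter, so gray values inside the GT smoke mask are min-max normalized
# to [0, 1]; non-smoke pixels get weight 0, matching Figure 2c.

def concentration_weights(gray, mask):
    """gray: 2D list of gray values (0..255); mask: 2D list of 0/1 GT labels."""
    rows, cols = len(gray), len(gray[0])
    smoke_vals = [gray[i][j] for i in range(rows)
                  for j in range(cols) if mask[i][j] == 1]
    lo, hi = min(smoke_vals), max(smoke_vals)
    span = (hi - lo) or 1  # guard against a uniformly bright smoke region
    return [[(gray[i][j] - lo) / span if mask[i][j] == 1 else 0.0
             for j in range(cols)] for i in range(rows)]
```

With this sketch, the darkest smoke pixel gets weight 0 and the brightest gets weight 1, so the loss can later emphasize smoke pixels in proportion to their apparent concentration.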
Figure 3. The four categories of the forest fire smoke dataset. (a) Thick smoke. (b) Thin smoke. (c) Thick smoke and clouds. (d) Thin smoke and clouds.
Figure 4. Comparison of segmentation results with different types of weighted loss function and different values of the control coefficient. (a) The pixel-concentration relationship of forest fire smoke is R_gc; (b) the pixel-concentration relationship of forest fire smoke is λ.
Figure 5. Comparison of segmentation results. (a,e) The original image. (b,f) The GT image. (c,g) Unweighted segmentation. (d,h) Segmentation with λ-MAE-0.1.
Figure 6. Dataset division when using 10-fold cross-validation.
Table 1. The distribution of the four categories for the forest fire smoke dataset.

| Thick Smoke | Thin Smoke | Thick Smoke and Clouds | Thin Smoke and Clouds |
|---|---|---|---|
| 126 | 25 | 21 | 4 |
Table 2. Segmentation results of the pixel-concentration relationship, the type of weighted loss function and the control coefficient.

| Pixel-Concentration Relationship | Weighted Loss Function Type | Control Coefficient | mIoU (%) |
|---|---|---|---|
| R_gc | L_MAE | 0.1 | 74.99 |
| R_gc | L_MSE | 0.8 | 74.31 |
| R_gc | L_CP | 0.9 | 74.25 |
| λ | L_MAE | 0.1 | 75.49 |
| λ | L_MSE | 0.9 | 75.27 |
| λ | L_CP | 0.4 | 75.26 |
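The weighted optimization described above, combining cross-entropy with a concentration-weighted error term scaled by the control coefficient, can be sketched as below. This is an illustrative reading, not the paper's exact formulation: it pairs binary cross-entropy with a weighted MAE term, using the control coefficient `c` (e.g. 0.1, the best value reported for λ-MAE in Table 2) to balance the two.

```python
import math

# Hedged sketch of the weighted loss: total = cross-entropy + c * weighted MAE.
# `w` holds the per-pixel concentration weights in [0, 1] (Figure 2c), and
# `c` is the control coefficient from Table 2. The paper's exact combination
# may differ; this only illustrates how weighting skews attention toward
# denser smoke pixels.

def weighted_loss(pred, target, w, c=0.1, eps=1e-7):
    """pred: predicted smoke probabilities; target: GT labels (0/1);
    w: concentration weights; all flat lists of equal length."""
    n = len(pred)
    ce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
              for p, t in zip(pred, target)) / n
    wmae = sum(wi * abs(p - t) for p, t, wi in zip(pred, target, w)) / n
    return ce + c * wmae
```

Because the MAE term is multiplied by the weight map, errors on high-concentration (thick) smoke pixels are penalized more heavily than errors on faint smoke, which is the core idea of the concentration-weighted method.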
Table 3. Comparative experimental results with or without weighting.

| Number | Part 1 | Part 2 | Part 3 | Part 4 | Part 5 | Part 6 | Part 7 | Part 8 | Part 9 | Part 10 | Ave | SD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PSPnet w/o weight | 74.01 | 75.20 | 70.41 | 74.67 | 77.15 | 69.81 | 79.94 | 73.17 | 69.14 | 75.10 | 73.86 | 3.21 |
| PSPnet w. weight | 75.49 | 79.64 | 73.37 | 75.10 | 75.65 | 70.07 | 80.11 | 75.35 | 72.64 | 76.33 | 75.38 | 2.85 |
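The mIoU values reported in Tables 2–6 follow the standard definition: per-class intersection over union between the predicted and GT pixel sets, averaged over the classes (here smoke and background). A minimal sketch:

```python
# Hedged sketch: mean IoU over the two classes (smoke = 1, background = 0),
# the metric used in Tables 2-6. IoU per class = |pred ∩ GT| / |pred ∪ GT|;
# mIoU is the average over classes present in the union.

def miou(pred, gt, num_classes=2):
    """pred, gt: flat lists of integer class labels of equal length."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```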
Table 4. The segmentation performance for the proposed algorithm in four categories.

| Number | TKS (w/o weight) | TNS (w/o weight) | TKSC (w/o weight) | TNSC (w/o weight) | TKS (w. weight) | TNS (w. weight) | TKSC (w. weight) | TNSC (w. weight) |
|---|---|---|---|---|---|---|---|---|
| Ave | 73.19 | 67.44 | 72.60 | 62.12 | 74.59 | 68.08 | 72.77 | 60.47 |
| SD | 2.84 | 5.52 | 5.82 | 2.45 | 2.41 | 7.13 | 5.88 | 5.42 |
Table 5. Determination of experimental parameters for different segmentation methods.

| Methods | Pixel-Concentration Relationship | Weighted Loss Function Type | Control Coefficient | mIoU (%) |
|---|---|---|---|---|
| FCN | λ | L_MAE | 0.1 | 71.73 |
| Segnet | λ | L_CP | 0.7 | 77.21 |
| Unet | λ | L_MSE | 0.8 | 75.66 |
| Frizzi | λ | L_MAE | 0.2 | 76.37 |
Table 6. Comparison of different segmentation methods with and without weighting.

| Number | FCN w/o | FCN w. | Segnet w/o | Segnet w. | Unet w/o | Unet w. | Frizzi w/o | Frizzi w. |
|---|---|---|---|---|---|---|---|---|
| Ave | 73.63 | 76.01 | 71.89 | 74.11 | 73.35 | 75.12 | 74.20 | 75.68 |
| SD | 4.83 | 3.23 | 3.26 | 3.65 | 3.67 | 2.54 | 2.54 | 2.15 |
Wang, Z.; Zheng, C.; Yin, J.; Tian, Y.; Cui, W. A Semantic Segmentation Method for Early Forest Fire Smoke Based on Concentration Weighting. Electronics 2021, 10, 2675. https://doi.org/10.3390/electronics10212675
