Article

Investigation of Recognition and Classification of Forest Fires Based on Fusion Color and Textural Features of Images

School of Emergency Management and Safety Engineering, China University of Mining & Technology (Beijing), Beijing 100083, China
*
Author to whom correspondence should be addressed.
Forests 2022, 13(10), 1719; https://doi.org/10.3390/f13101719
Submission received: 31 August 2022 / Revised: 11 October 2022 / Accepted: 17 October 2022 / Published: 18 October 2022
(This article belongs to the Section Natural Hazards and Risk Management)

Abstract

An image recognition and classification method based on fusion color and textural features was studied. Firstly, the suspected forest fire region was segmented via the fusion RGB-YCbCr color spaces. Then, 10 kinds of textural features were extracted by a local binary pattern (LBP) algorithm and 4 kinds of textural features were extracted by a gray-level co-occurrence matrix (GLCM) algorithm from the suspected fire region. In terms of its application, a database of the forest fire textural feature vector of three scenes was constructed, including forest images without fire, forest images with fire, and forest images with fire-like interference. The existence of forest fires can be recognized based on the database via a support vector machine (SVM). The results showed that the method’s recognition rate for forest fires reached 93.15% and that it had a strong robustness with respect to distinguishing fire-like interference, which provides a more effective scheme for forest fire recognition.

1. Introduction

The global area affected by forest fires is increasing yearly, and the demand for rapid and efficient forest fire recognition is growing accordingly. Frequent forest fires in California have caused huge property losses, and tens of thousands of acres of land were burned in 2022. Forest fire recognition technology based on image features offers significant advantages, such as high timeliness and a high recognition rate. It can identify forest fires as early as possible to prevent them from expanding in scale, and it replaces the traditional manual lookout and manual secondary image review methods that require high investments but yield poor results. It has become one of the main methods for forest fire monitoring and identification and serves as an early warning solution.
With the continuous maturation of image acquisition and processing technology, scholars have thoroughly researched forest fires from the perspective of image recognition and have proposed a variety of detection and recognition methods. At the same time, the Handbook of Neural Computation [1] and the Handbook of Deep Learning Applications [2] introduced the extensive application of neural computing and deep learning, providing new ideas for forest fire image recognition. In [3,4], an ISSA (improved sparrow search algorithm) was used to set the hidden-layer nodes directly and to input the corresponding weights and biases as feature vectors into an FSCN (fast stochastic configuration network), which was trained on feature vectors extracted from flame and interference images. The method proposed by Roy et al. [5] is based on a LeNet5 convolutional neural network fire detection model combined with an L2-regularized non-sparse solution to classify fire and non-fire images in outdoor environments, and it achieved good measurement accuracy. Chen et al. [6] extracted multiple adjacent frames of flames, extracted dynamic features from the perspective of time and space, and described the dynamic motion recognition of flame textures. The flame's structural and spatial features are extracted from the consistency information of the three orthogonal planes (the image sequence is divided along three orthogonal directions), and the combined HOPC-TOP (which extracts phase-congruency features from the spatial and temporal planes) is used to identify the flame. Liu et al. [7] used an HOG (Histogram of Oriented Gradients) + Adaboost classifier with a high recall rate to preliminarily identify possible forest fires, and then used a high-precision CNN (convolutional neural network) + SVM (Support Vector Machine) classifier to further identify forest fire areas.
The recognition of forest fires from their color and textural features has achieved notable results. Zhang et al. [8] divided the core combustion region, used an Otsu–Kmeans flame image segmentation method to realize the regionalized segmentation of the flame target, extracted 10 feature vectors of the target region as input, constructed a model to output the corresponding combustion state, and established an SVM for classification and recognition. Hosseini et al. [9] proposed a deep-learning method for recognizing flames and smoke, termed "UFS-Net", in which the convolutional neural network structure is customized for flame recognition. A UFS data set (a large number of images and videos collected from various data sources, plus artificial images) is used for training and evaluating the network, which is generally embodied as a computer-vision learning method; the UFS data set can also serve as a flame recognition training set. Wang et al. [10] showed that convolutional neural networks, which are often used for image feature learning, can, when combined with image processing, effectively learn to recognize flames and extract the corresponding features with good performance and efficiency. Jiang et al. [11] applied a technique based on infrared images and flame spectrum threshold analysis to obtain a feature vector and quickly locate fires; correspondingly, it can effectively eliminate various interferences in the forest and various noises in the scene. In the research of Chen et al. 
[12], according to the temporal and spatial motion characteristics of flame, a Gaussian model was first used to extract the flame's motion region; secondly, the flame recognition region was segmented through a flame-filtering algorithm; and finally, recognition was performed according to the statistics of the flame flicker frequency in the segmented region. Muhammad et al. [13] proposed a new method for detecting forest fires using color and multi-color-space local binary patterns based on flame and smoke characteristics and a single artificial neural network; it can detect various challenging flame and smoke regions. Hossain et al. [14] extracted the unique color and textural features of flames and formulated a variety of spatial color vector rules for flame segmentation to divide the feature region. However, because the brightness of a gray image changes uncontrollably, the method is vulnerable to natural and artificial light, which increases the flame recognition error rate. Chen et al. [15] used the flames' color space to filter local noisy feature points, then used a SIFT (Scale Invariant Feature Transform) algorithm to extract fire feature values and converted them into feature vectors to identify fires. Kuang et al. [16] extracted the local textural features of a flame, reduced the dimension of the feature vector via a principal component analysis algorithm, and substituted the result into a genetic algorithm for fire identification after an SVM calculation. Cui et al. [17] first used the watershed algorithm to extract the suspected flame area and then selected four main flame characteristics as the recognition feature vector; irregular interference sources are eliminated according to the irregular characteristics of the flame, which improves flame image processing by both eliminating interference sources and extracting flame characteristics. Prasad et al. 
[18] captured and processed surface texture images and extracted 16 segmentation regions through preprocessing. The GLCM (gray-level co-occurrence matrix) was used to extract and recognize the feature quantities of the surface texture segmentation regions, and the features were fed into an SVM classifier; the approach was also effectively applied to random forest texture recognition. Wang et al. [19] used an HOG algorithm to extract image features from garbage, put forward a new idea of using an SVM to train the classifier, and sent the relevant information to a database as recognized content. Liu et al. [20] proposed a flame detection algorithm based on saliency detection technology and a uniform local binary pattern that can reduce the false alarm rate of fire recognition technologies and improve the accuracy of sample classification. Ashour et al. [21] established an SVM classifier to process the corresponding functions of different steps and judged its characteristics according to the drawn histogram; it also showed excellent performance when substituted into the data set for machine learning.
At the same time, image recognition technology also plays an important role in the related research of fire, fire protection, and forest management. Liu et al. [22] proposed a new method of tree species identification and stock estimation for strengthening forest management. In this method, the forest images are collected by a digital camera. The method uses the UNET (the model used for the semantic segmentation of tree species images) network pre-trained by the VGG16 (as the encoder in the UNET network) model to accurately identify the number of trees and tree species contained in the image. In order to advance the use of UAVs in forest measurement, Seifert et al. [23] chose to use video clips obtained from flight at multiple altitudes in combination with commercial multi-view reconstruction software and a multivariate, generalized additive model for analysis in order to set the best flight parameters and select sensor resolution. Lai et al. [24] combined flames’ surface, invisible heat flow, and temperature into an image recognition system. By changing the forced airflow size and wind direction of the micro wind tunnel, the combustion intensity was studied, the flame combustion and propagation process were identified, and the flame’s temperature and material surface temperature were monitored. He et al. [25] used the image recognition method to quantitatively study the influence of a tunnel’s longitudinal ventilation speed on the intermittent combustion behavior and flame injection behavior of a car. Zhao et al. [26] studied the combustion behavior of a floating roof tank in a chemical industry park. This paper introduces an image recognition method based on images of the flame’s profile, which is used to analyze the necking and periodic fluctuation of flame under different diameter oil pans and different buoyancy plume conditions. Zhao et al. 
[27] used the RGB (Red Green Blue) color rule to determine a flame's shape through the difference between the flame and the background, analyzed the flame diffusion and combustion behavior, and explored the influence of slope on the flames' spread and height with respect to leaked oil. Li et al. [28] captured the flame combustion signal through high-speed photography to determine the diffusion combustion behavior in a tube, manifested as the influence of pressure changes on the ignition mechanism and the propagation of the flame shock wave in the tube.
Previous studies on the characteristics of forest fire images focused on images' color, texture, and motion detection. However, the influence of fire-like interference sources (red objects that interfere with fire image recognition in forests, such as banners, maple leaves, bottles, etc.) on forest fire recognition has not been adequately considered, and the recognition accuracy still needs to be further improved. In this paper, a forest fire recognition method based on fusion color and textural features was investigated. The suspected forest fire region was detected and segmented via the fusion RGB-YCbCr (Y is the luminance component of the color, while Cb and Cr are the blue and red concentration offset components) color spaces. A 14-dimensional vector of the forest fire image was formed by the LBP (Local Binary Patterns) and GLCM (gray-level co-occurrence matrix) algorithms. Consequently, a forest fire can be recognized via comparison to the database of vectors through an SVM, and the method was verified to have high accuracy for forest fire recognition.

2. Method

The technological roadmap of the method is presented in Figure 1. There are four main steps to achieve accurate recognition. Firstly, images were preprocessed and the suspected flame area was extracted based on the fused RGB-YCbCr color spaces. Secondly, the LBP and GLCM algorithms were used to extract the textural features of the suspected flame area, forming a 14-dimensional textural feature vector. Thirdly, a database of textural features was established based on a large number of training images, including normal forest images, forest fire images, and fire-like interference images. Fourthly, forest fires could be identified by judging an image's similarity to the three types of images in the database via a support vector machine. In addition, the accuracy of the method was further improved by continuously expanding the forest fire image database.

2.1. Segmentation of the Suspected Flame Region

2.1.1. Segmentation via RGB Color Space

RGB is a mature color model, and most displays have adopted it. For a flame pixel, the R component is larger than the G and B components, so the differences between R, G, and B can be used to extract red suspected flame pixels from a forest image. The RGB color rules are as follows:
$$R_{\mathrm{I}}(x,y) = \begin{cases} I(x,y), & R(x,y) - G(x,y) > R_{GT} \ \text{and} \ R(x,y) - B(x,y) > R_{BT} \\ 0, & \text{else} \end{cases}$$
Here, RGT is the red–green color threshold and RBT is the red–blue color threshold. Values of 30, 40, 50, 60, and 70 were selected for RGT and RBT, and the extraction results and pixel retention rates of the RGB algorithm were calculated under the different RGT and RBT conditions [29,30,31]. Our results are shown in Figure 2 and Figure 3:
Figure 3 shows that the retained pixel rate decreases as RGT and RBT increase. Since the green concentration of trees and the red concentration of flames in forest images are both large, the difference between R and G better reflects the difference between flame pixels and forest pixels; thus, RGT is the main index for extracting flame components. When RGT is large, the extraction result is mainly determined by the RGT value and is relatively less affected by the RBT value. Comparing the extraction results under different RGT and RBT values with the manual extraction results yields the accuracy of flame extraction, as shown in Table 1.
Table 1 shows that when RGT = 60 and RBT = 40, the extraction effect is the best, as it can accurately exclude non-flame pixels and preserve the burning area of the forest. Some extraction results via the RGB color space of the forest image without fire, with fire, and with fire-like interference are shown in Figure 4. It can be seen that the deficiency of the RGB color space is that some red fire-like interference may be misjudged as fire in Figure 4c.
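As an illustrative sketch (the function name and array conventions here are our own, not the authors' implementation), the RGB rule of Equation (1) with the best thresholds reported in Table 1 can be written as:

```python
import numpy as np

def segment_rgb(img, rgt=60, rbt=40):
    """Keep pixels whose red channel exceeds green by RGT and blue by RBT.

    img: H x W x 3 uint8 array in RGB order. Returns a boolean mask R_I.
    The defaults RGT=60 and RBT=40 follow the best values from Table 1.
    """
    r = img[..., 0].astype(np.int32)  # widen to avoid uint8 overflow
    g = img[..., 1].astype(np.int32)
    b = img[..., 2].astype(np.int32)
    return (r - g > rgt) & (r - b > rbt)

# A flame-colored pixel passes the rule; a green foliage pixel does not.
img = np.array([[[220, 90, 40], [60, 160, 50]]], dtype=np.uint8)
mask = segment_rgb(img)
```

Retained pixels (mask true) keep their original values I(x, y); all others are set to zero.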

2.1.2. Segmentation via YCbCr Color Space

The YCbCr color space mainly focuses on image brightness features, and it can extract suspected flame pixels with high brightness in the image. The conversion formula from RGB space (components in 0–255) to YCbCr space is as follows:
$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} 0.2568 & 0.5041 & 0.0979 \\ -0.1482 & -0.2910 & 0.4392 \\ 0.4392 & -0.3678 & -0.0714 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}$$
YCbCr color rules are as follows:
$$R_{\mathrm{II}}(x,y) = \begin{cases} I(x,y), & Y(x,y) > Y_{mean} \ \text{and} \ C_b(x,y) < C_{b,mean} \ \text{and} \ C_r(x,y) > C_{r,mean} \\ 0, & \text{else} \end{cases}$$
Here, Ymean is the mean value of the brightness of the original image, Crmean is the mean value of the red concentration component of the original image, and Cbmean is the mean value of the blue concentration component of the original image [32]. The extraction results are shown in Figure 5. The deficiency of the YCbCr color space is that some green plants may be recognized as fire, as in Figure 5a,c.
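A minimal NumPy sketch of the conversion in Equation (2) and the rule in Equation (3) might look as follows (function names are illustrative, not the authors' code):

```python
import numpy as np

# BT.601-style RGB -> YCbCr conversion coefficients from Equation (2).
M = np.array([[ 0.2568,  0.5041,  0.0979],
              [-0.1482, -0.2910,  0.4392],
              [ 0.4392, -0.3678, -0.0714]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(img):
    """img: H x W x 3 RGB array with components in 0..255.
    Returns the Y, Cb, and Cr planes as float arrays."""
    ycbcr = img.astype(np.float64) @ M.T + OFFSET
    return ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]

def segment_ycbcr(img):
    """Rule (3): keep bright pixels whose Cb is below and Cr above the
    image-wide means, as in the YCbCr color rule."""
    y, cb, cr = rgb_to_ycbcr(img)
    return (y > y.mean()) & (cb < cb.mean()) & (cr > cr.mean())
```

For a pure white pixel this conversion yields Y ≈ 235 and Cb = Cr = 128, consistent with the 16–235 luma range of the BT.601 convention.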

2.1.3. Segmentation via Fusion RGB-YCbCr Color Spaces

Comparing the extraction results of the RGB and YCbCr color spaces for the forest fire images, the RGB color space was more accurate for regions with a large red component, but it could not accurately exclude some low-brightness interference. The YCbCr color space was more accurate in extracting high-brightness flames, but some non-red pixels could be included. Combining the advantages of the two color spaces, the intersection of the two results, i.e., the fusion of the RGB and YCbCr color spaces, was applied to extract the suspected fire region and improve the segmentation accuracy [33]. The comprehensive extraction rule is written as follows:
$$R_{\mathrm{III}}(x,y) = \begin{cases} I(x,y), & R_{\mathrm{I}}(x,y) \neq 0 \ \text{and} \ R_{\mathrm{II}}(x,y) \neq 0 \\ 0, & \text{else} \end{cases}$$
In addition, the segmentation accuracy was obtained by comparing the different algorithms with the manual extraction results. When the RGB and YCbCr color models were used alone, the segmentation accuracy was 0.8051 and 0.6522, respectively. When the fused RGB-YCbCr color spaces were used, the segmentation accuracy rose to 0.8568, which is better than the traditional RGB or YCbCr color models. The comparison of RGB, YCbCr, and fusion RGB-YCbCr is shown in Figure 6.
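Assuming the threshold values reported above, the fusion rule of Equation (4) can be sketched end-to-end as follows (a self-contained illustration, not the authors' implementation):

```python
import numpy as np

def segment_fusion(img, rgt=60, rbt=40):
    """Equation (4): keep only pixels retained by BOTH the RGB rule,
    Equation (1), and the YCbCr rule, Equation (3).
    img: H x W x 3 RGB uint8 array; returns a boolean mask R_III."""
    f = img.astype(np.float64)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    rule_rgb = (r - g > rgt) & (r - b > rbt)                   # Equation (1)
    y  =  0.2568 * r + 0.5041 * g + 0.0979 * b + 16.0          # Equation (2)
    cb = -0.1482 * r - 0.2910 * g + 0.4392 * b + 128.0
    cr =  0.4392 * r - 0.3678 * g - 0.0714 * b + 128.0
    rule_ycbcr = (y > y.mean()) & (cb < cb.mean()) & (cr > cr.mean())  # Eq. (3)
    return rule_rgb & rule_ycbcr
```

On a toy image containing a bright flame-colored pixel, a green foliage pixel, and a dim dark-red pixel, only the flame-colored pixel survives both rules: the foliage fails the RGB rule and the dim red pixel fails the brightness condition of the YCbCr rule.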
Figure 6 indicates that, compared with the original image, some green and red interference was excluded and the fire area was successfully extracted, indicating that the algorithm has strong robustness. The comparison between Figure 6b,c shows obvious differences between the textural features of the flame and the interference sources. Therefore, considering the special textural features of forest fires, the LBP and GLCM algorithms are used to characterize the textural information of the suspected fire region for recognition and classification.

2.2. Extraction of Textural Features

The textural features of forest fire images generally have a sheet or plane distribution, with dense texture and strong continuity in the central area. The textural features of sunsets are mainly stripes and not densely distributed. The textural features of red leaf forests generally have a point distribution, and the continuity of their central region is poor. The textural features of red stripes are generally continuous and concentrated. The difference in the image textures between forest fire flames and a fire-like interference source can be used to classify and recognize flames in forest fire images.

2.2.1. Extraction of Textural Features via LBP

The LBP algorithm can be used to describe the textural features of forest fire images [34]. With the continuous development and application of the algorithm, the basic LBP pattern has proven too rigid [35]. Ojala et al. [36] extended the LBP algorithm and proposed the uniform pattern, the rotation-invariant pattern, and the rotation-invariant uniform pattern. The uniform pattern reduces the number of distinct LBP feature values. The calculation method is as follows:
$$U(LBP_{P,R}) = \sum_{i=0}^{P-1} \left| s\left(g_{(i+1) \bmod P} - g_c\right) - s\left(g_i - g_c\right) \right|$$
where gc is the gray value of the central pixel, gi is the gray value of a neighborhood pixel, P denotes the number of surrounding pixels, and R represents the neighborhood radius, i.e., the Euclidean distance between the central pixel and the neighborhood pixels. The rotation-invariant pattern solves the problem wherein the LBP value changes due to image rotation or tilt, thus keeping the LBP value unchanged. The calculation method is as follows:
$$LBP_{P,R}^{ri} = \min_{0 \le i \le P-1} \left\{ ROR\left(LBP_{P,R}, i\right) \right\}$$
ROR(x, i) performs a circular bit-wise right shift of x by i bits. The rotation-invariant uniform pattern is obtained by combining the rotation-invariant pattern with the uniform pattern. The calculation method is as follows:
$$LBP_{P,R}^{riu2} = \begin{cases} \sum_{i=0}^{P-1} s\left(g_i - g_c\right), & U(LBP_{P,R}) \le 2 \\ P + 1, & \text{else} \end{cases}$$
This experiment mainly studies the $LBP_{8,1}^{riu2}$ and $LBP_{8,2}^{riu2}$ algorithms, which produce fewer feature values. Taking the $LBP_{8,2}^{riu2}$ algorithm as an example, Figure 7 shows the LBP texture of various forest fire images, and Figure 8 shows the specific values for the three scenes. In other words, 10 textural feature values are extracted from a forest image via the $LBP_{8,2}^{riu2}$ algorithm, which enables the recognition and classification of objects in the following section.
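A compact NumPy sketch of the riu2 operator of Equation (7), written here for the (P, R) = (8, 1) neighborhood to avoid the sub-pixel interpolation needed for R = 2, could be (an illustrative implementation, not the authors' code):

```python
import numpy as np

def lbp_riu2_hist(gray):
    """Rotation-invariant uniform LBP, Equations (5)-(7), for a P=8, R=1
    neighborhood; returns the normalized 10-bin histogram that forms the
    LBP part of the 14-dimensional feature vector.
    gray: 2-D uint8 array; border pixels are skipped for simplicity."""
    g = gray.astype(np.int32)
    # The 8 neighbors in circular order around each interior pixel.
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1),
               (1, 0), (1, -1), (0, -1), (-1, -1)]
    c = g[1:-1, 1:-1]
    # s(g_i - g_c): 1 where the neighbor is >= the center, else 0.
    s = np.stack([(g[1 + dy:g.shape[0] - 1 + dy,
                     1 + dx:g.shape[1] - 1 + dx] >= c).astype(np.int32)
                  for dy, dx in offsets])
    # U value of Equation (5): number of 0/1 transitions around the circle.
    u = np.abs(np.roll(s, -1, axis=0) - s).sum(axis=0)
    # Equation (7): sum of bits if uniform (U <= 2), else the label P + 1 = 9.
    code = np.where(u <= 2, s.sum(axis=0), 9)
    hist = np.bincount(code.ravel(), minlength=10).astype(np.float64)
    return hist / hist.sum()
```

The 10 histogram bins (codes 0 through 8 plus the non-uniform bin 9) match the 10 LBP feature values described above.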

2.2.2. Extraction of Textural Features via GLCM

The gray-level co-occurrence matrix is an algorithm that obtains the textural features of an image by counting the gray levels of pairs of pixels at a given relative position [37,38,39]. The commonly used statistical features of the GLCM algorithm include the angular second moment, contrast, inverse difference moment, and correlation.
The angular second moment is the sum of the squares of the values of the gray-level co-occurrence matrix; it represents the coarseness of a fire's texture and the uniformity of the gray distribution, as in Equation (8):
$$A = \sum_{b_1} \sum_{b_2} \left( C_{b_1, b_2} \right)^2$$
Contrast is the relationship between a pixel value and an adjacent pixel value, which measures the depth and clarity of forest fire image textures, as in Equation (9):
$$C_1 = \sum_{b_1} \sum_{b_2} \left( b_1 - b_2 \right)^2 C_{b_1, b_2}$$
The inverse difference moment measures the degree of textural change and the smoothness of the local area of a forest fire image, as in Equation (10):
$$I = \sum_{b_1} \sum_{b_2} \frac{C_{b_1, b_2}}{1 + \left( b_1 - b_2 \right)^2}$$
Correlation measures the similarity of gray-level co-occurrence matrix elements in the row and column directions and indicates the linear dependence of gray levels in a forest fire image, as in Equation (11):
$$C_2 = \sum_{b_1} \sum_{b_2} \frac{\left( b_1 - \mu_{b_1} \right) \left( b_2 - \mu_{b_2} \right) C_{b_1, b_2}}{\sigma_{b_1} \sigma_{b_2}}$$
Here, mx is the column-wise marginal sum of matrix C, my is the row-wise marginal sum of matrix C, and μb1, μb2, σb1, and σb2 are the means and standard deviations of mx and my. Figure 9 presents the 4 eigenvalues of the three scenes extracted via the GLCM algorithm, and the feature vectors are shown in Table 2. It can be seen that the GLCM eigenvalues of the image without fire are basically zero, while those of the image with fire-like interference present a larger contrast and a negative correlation. The trends of these eigenvalues can be used as criteria for recognition and classification.
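The four GLCM statistics of Equations (8)–(11) can be sketched as follows (a single horizontal pixel offset and 16 gray levels are our illustrative choices; the paper does not specify these parameters):

```python
import numpy as np

def glcm_features(gray, levels=16):
    """Build a normalized GLCM for the (0, 1) horizontal offset, then
    compute the four statistics of Equations (8)-(11): angular second
    moment, contrast, inverse difference moment, and correlation."""
    # Quantize 0..255 gray values into the chosen number of levels.
    q = (gray.astype(np.float64) * levels / 256).astype(np.int32)
    pairs_a, pairs_b = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (pairs_a, pairs_b), 1.0)
    c = glcm / glcm.sum()                       # normalized C matrix
    b1, b2 = np.indices(c.shape)
    asm = (c ** 2).sum()                        # Equation (8)
    contrast = ((b1 - b2) ** 2 * c).sum()       # Equation (9)
    idm = (c / (1.0 + (b1 - b2) ** 2)).sum()    # Equation (10)
    mu1, mu2 = (b1 * c).sum(), (b2 * c).sum()
    s1 = np.sqrt(((b1 - mu1) ** 2 * c).sum())
    s2 = np.sqrt(((b2 - mu2) ** 2 * c).sum())
    denom = s1 * s2
    corr = (((b1 - mu1) * (b2 - mu2) * c).sum() / denom) if denom > 0 else 0.0
    return asm, contrast, idm, corr             # Equation (11)
```

For a perfectly uniform region the matrix collapses to a single entry, giving ASM = 1, contrast = 0, and IDM = 1, which mirrors the near-zero GLCM eigenvalues of fire-free forest images noted above.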
By analyzing and comparing the LBP feature and GLCM feature extraction results of forest images with fire and forest images with fire-like interference sources, a significant difference was discovered between the two. When analyzing the fire’s textural features, the combination of the two can complement each other and improve the accuracy of recognizing flames in forest fire images. The feature vectors extracted by the two were combined to form a new 14-dimensional feature vector to describe the textural features of forest fire flames [40].

2.3. Classifier

In this section, the support vector machine was used to construct a decision function that recognizes and classifies the forest images of the three scenes. The purpose of the SVM algorithm is to construct a decision function that separates the data to the greatest extent. All sample data correspond to the following formula:
$$\underset{w,\, b,\, \xi}{\text{Minimize}} \quad \frac{1}{2} w^{\top} w + C \sum_{i=1}^{n} \xi_i$$
where w is the normal vector of the hyperplane, b is the intercept of the hyperplane, and C is the penalty parameter. The LBP histogram distribution features and GLCM features of forest fire images with flames and of interference images are extracted from the existing samples to form a 14-dimensional vector: X = [L1, L2, L3, …, L10, A, C1, I, C2]. The collected data are used to establish the training set and test set, and a Radial Basis Function (RBF) kernel is used to identify and classify forest fire flames [41,42,43]. The RBF kernel is defined as follows:
$$K(x_i, x) = \exp\left( -\frac{\left\| x - x_i \right\|^2}{2 p^2} \right)$$
where p is the width of the radial basis kernel function. The radial basis function kernel is effective for identifying forest fires because it can handle high-dimensional feature spaces. The main identification and classification steps are as follows.
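Equation (13) can be sketched directly (in practice the kernel would be supplied to an off-the-shelf SVM trainer; the function below is our own illustration):

```python
import numpy as np

def rbf_kernel(X, Z, p=1.0):
    """Equation (13): K(x_i, x) = exp(-||x - x_i||^2 / (2 p^2)),
    evaluated for every row of X against every row of Z.
    X: (n, d) array, Z: (m, d) array; returns an (n, m) kernel matrix."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * p ** 2))
```

The resulting Gram matrix is symmetric with unit diagonal, since each feature vector is at zero distance from itself.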
(1)
Training process: The LBP histogram features and GLCM textural features of the target images are extracted as the training set and input into the SVM for classifier training.
(2)
Recognition process: The LBP and GLCM features of the image to be recognized are extracted and classified by the trained classifier. Finally, the classification performance of the classifier is evaluated based on the recognition results.
This paper uses a support vector machine to identify and classify forest fires and employs a convolutional neural network as a comparison to verify the accuracy and effectiveness of the algorithm. A convolutional neural network is a kind of feedforward neural network employing convolution calculations and possessing a deep structure, and it is one of the representative algorithms of deep learning. Convolutional neural networks have mature applications in the field of computer vision, and this paper uses their image recognition capability to recognize fire in forest fire images. First, our algorithm uses the fused color-space rules to extract the suspected flame area, then uses the LBP-GLCM method to extract the textural features, and finally inputs the textural features into the support vector machine. As a comparison algorithm, the extracted image of the suspected flame area is input into the convolutional neural network.

3. Results and Discussion

This method was implemented in MATLAB R2019a. The operating system of the experimental platform was Windows 11, and the processor was an Intel® Core™ i5-9300H. A total of 1317 forest images were collected from the field, including 513 forest images without fire, 420 forest images with fire, and 384 forest images with fire-like interference. The forest images included a variety of common trees in China, such as birch, pine, and cypress. The forest fire images selected were mainly close-range or high-definition images because their color and textural features are more obvious. The types of fire-like interference included red garbage, red stripes, a red leaf forest, etc. Some of the images from the recognition process for the three scenes are presented in Figure 10. Figure 10a is a normal forest image without fire, and there was obviously no extracted fire region after the color extraction process. Figure 10b is a forest image with fire, while Figure 10c shows a red banner in the forest, which was used as interference in this study.
Table 3 shows the image and feature vectors of the typical image samples used in this study, and Table 4 shows the forest fire identification results under different algorithms.
Table 3 shows that the difference in the textural feature vector between different images is large. In particular, the first 13 terms of the textural feature vector of the forest image without fire are all 0. We can use the different characteristics of the textural features of the different types of images to identify forest fires and input the textural information of the three types of images as feature vectors into the support vector machine.
The results in the table indicate that the recognition rate for forest fire flame images was low when the LBP or GLCM algorithm was used alone. When the two algorithms were fused to obtain the 14-dimensional fusion feature vector and the SVM classifier was used, the recognition rate for forest fires exceeded 90%, and the fusion algorithm $LBP_{8,2}^{riu2}$ + GLCM reached 93.15%; both configurations can accurately identify forest fires. The test set samples were then added to the next training set to extend the image samples, and in the application of the algorithm, different types of forest fire image samples were added to the database. The forest fire image database can thus be gradually expanded to enhance the algorithm's recognition accuracy for forest fire images.
It can be seen from Table 5 that the accuracy of this algorithm is higher than that of the convolutional neural network algorithm, improving the real-time performance of fire warning while ensuring accuracy. The time consumption of this algorithm is only one quarter of that of the convolutional neural network algorithm. The deep-learning method has high requirements regarding equipment performance and a long training time, whereas the proposed algorithm greatly reduces the training and prediction time.

4. Conclusions

This paper proposed a forest fire recognition method based on fusion color and textural features. The suspected fire region was segmented via the fusion RGB-YCbCr color space. Then, the textural features of the suspected fire region were extracted via LBP and GLCM algorithms to form a 14-dimensional textural feature vector. Finally, the forest fire image feature database was established, and the support vector machine was used for forest fire recognition and classification. The results show that the algorithm’s accuracy of recognizing flames in forest fire images can reach 93.15%, and the algorithm has good robustness when fire-like interference appears. In the future, it is proposed to further improve the forest fire image database and expand the training set to include more forest fire images with different shapes, sizes, colors, and burning degrees; add more forest images with fire-like interference and classify them according to different features; and further reduce the algorithm’s training and testing times to provide new concepts for the application of image recognition in forest fire prevention.

Author Contributions

Conceptualization and methodology, C.L.; software and investigation, Q.L.; validation and formal analysis, B.L.; data curation and writing—original draft preparation, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. U2033206), the National Key Research and Development (R&D) Plan (Grant No. 2018YFC0809500), Science and Technology Project of State Grid General Aviation Company Limited (Grant No. 1100/2021-440038), Guizhou Scientific Support Project (Grant No. 2021 General 514), and the Project of the Excellent Youthful Teacher of Fundamental Research Funds for the Central Universities (Grant No. 2022YQAQ05).

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Forest Fire Identification Process.
Figure 2. Pixel retention rate of different RGT and RBT (thresholds).
Figure 3. The relationship between RGT, RBT, and pixel retention rate.
Figure 4. Extraction results via RGB color space: (a1) Forest image without fire; (a2) extraction results of a forest image without fire; (b1) forest image with fire; (b2) extraction results of forest image with fire; (c1) forest image with fire-like interference; (c2) extraction results of forest image with fire-like interference.
Figure 5. Extraction results via YCbCr color space: (a1) Forest image without fire; (a2) extraction results of forest image without fire; (b1) forest image with fire; (b2) extraction results of forest image with fire; (c1) forest image with fire-like interference; (c2) extraction results of forest image with fire-like interference.
Figure 6. Comparison of RGB, YCbCr, and fusion RGB-YCbCr results: (a) extraction results of forest image without fire; (b) extraction results of forest image with fire; (c) extraction results of forest image with fire-like interference.
Figure 7. LBP-extracted image textures of forest fire images: (a1) forest image without fire; (a2) LBP-extracted image textures of forest image without fire; (b1) forest image with fire; (b2) LBP-extracted image textures of forest image with fire; (c1) forest image with fire-like interference; (c2) LBP-extracted image textures of forest image with fire-like interference.
Figure 8. Histograms of various types of forest fire images: (a) LBP(8,2)^riu2 histograms of forest image without fire; (b) LBP(8,2)^riu2 histograms of forest image with fire; (c) LBP(8,2)^riu2 histograms of forest image with fire-like interference.
Figure 9. Gray scale image: (a) forest image without fire; (b) forest image with fire; (c) forest image with fire-like interference.
Figure 10. Experimental image sample and extraction results: (a1) forest image without fire; (a2) color extraction result of forest image without fire; (a3) LBP extraction result of forest image without fire; (a4) gray image of forest image without fire; (b1) forest image with fire; (b2) color extraction result of forest image with fire; (b3) LBP extraction result of forest image with fire; (b4) gray image of forest image with fire; (c1) forest image with fire-like interference; (c2) color extraction result of forest image with fire-like interference; (c3) LBP extraction result of forest image with fire-like interference; (c4) gray image of forest image with fire-like interference.
Table 1. Extraction accuracy of RGB color space with different RGT and RBT threshold values.

          RGT = 30   RGT = 40   RGT = 50   RGT = 60   RGT = 70
RBT = 30  0.6249     0.6534     0.7126     0.7743     0.7683
RBT = 40  0.7095     0.7097     0.7255     0.8051     0.7957
RBT = 50  0.7545     0.7545     0.7552     0.7584     0.7695
RBT = 60  0.7200     0.7200     0.7200     0.7201     0.7186
RBT = 70  0.6415     0.6416     0.6416     0.6416     0.6417
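The RGT/RBT thresholds in Table 1 can be applied with a simple per-pixel rule. The sketch below assumes a common form of the RGB fire-color criterion (R > G > B, with the R - G and R - B differences above the RGT and RBT thresholds); the paper's exact rule may differ, and the input image here is random data rather than a forest photograph:

```python
import numpy as np

def fire_color_mask(img, rgt=60, rbt=40):
    """Per-pixel RGB fire-color rule (assumed form): R > G > B,
    with R - G > rgt and R - B > rbt.  img is an HxWx3 uint8 array."""
    r = img[..., 0].astype(np.int32)
    g = img[..., 1].astype(np.int32)
    b = img[..., 2].astype(np.int32)
    return (r > g) & (g > b) & (r - g > rgt) & (r - b > rbt)

# Random stand-in for a forest image; a real photo would be loaded instead.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

mask = fire_color_mask(img)   # RGT = 60, RBT = 40: the best cell in Table 1
retention = mask.mean()       # fraction of pixels retained
print(f"pixel retention rate: {retention:.4f}")
```

The default thresholds RGT = 60 and RBT = 40 correspond to the highest extraction accuracy (0.8051) in Table 1.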
Table 2. Gray-level co-occurrence matrix feature vectors.

Picture Type                               A        C1        I        C2
Forest image without fire                  0        0         0        NaN
Forest image with fire                     0.0351   6.8054    0.6473   0.5862
Forest image with fire-like interference   0.0263   40.2476   0.3872   −0.0207
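The four features in Table 2 can be computed from a gray-level co-occurrence matrix. The sketch below assumes A, C1, I, and C2 denote angular second moment, contrast, inverse difference moment, and correlation in Haralick's classic definitions; this excerpt does not define them, so the names, quantization, and offset choices are illustrative:

```python
import numpy as np

def glcm_features(gray, levels=8, dx=1, dy=0):
    """Normalized co-occurrence matrix for offset (dy, dx), then four
    Haralick-style features: angular second moment (A), contrast (C1),
    inverse difference moment (I), and correlation (C2)."""
    g = (gray.astype(np.float64) / 256 * levels).astype(np.int64)
    h, w = g.shape
    # Paired reference and neighbor pixels for the chosen offset.
    a = g[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = g[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    p = np.zeros((levels, levels))
    np.add.at(p, (a.ravel(), b.ravel()), 1)
    p /= p.sum()

    i, j = np.indices((levels, levels))
    A = (p ** 2).sum()
    C1 = ((i - j) ** 2 * p).sum()
    I = (p / (1.0 + (i - j) ** 2)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    C2 = (((i - mu_i) * (j - mu_j) * p).sum() / (s_i * s_j)
          if s_i * s_j > 0 else float("nan"))
    return A, C1, I, C2

rng = np.random.default_rng(2)
textured = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
flat = np.full((32, 32), 100, dtype=np.uint8)
print(glcm_features(textured))
print(glcm_features(flat))  # contrast 0; correlation undefined (NaN)
```

Note that a region with no gray-level variation makes the correlation denominator zero, consistent with the NaN entry in the no-fire row of Table 2.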
Table 3. Fused color and texture feature vectors of selected forest sample images.

Feature   Without Fire   Fire 1   Fire 2   Fire 3   Interference 1   Interference 2   Interference 3
L1        0.0000         0.1187   0.1292   0.0976   0.0942           0.1417           0.1196
L2        0.0000         0.1021   0.1071   0.0922   0.1077           0.0560           0.1129
L3        0.0000         0.0538   0.0530   0.0759   0.0676           0.0101           0.0509
L4        0.0000         0.0472   0.0472   0.0806   0.0697           0.0126           0.0480
L5        0.0000         0.0413   0.0369   0.0697   0.0601           0.0062           0.0339
L6        0.0000         0.0300   0.0308   0.0532   0.0268           0.0022           0.0170
L7        0.0000         0.0180   0.0266   0.0313   0.0106           0.0000           0.0089
L8        0.0000         0.0206   0.0361   0.0282   0.0047           0.0000           0.0074
L9        0.0000         0.4034   0.3329   0.3245   0.4617           0.7557           0.5004
L10       0.0000         0.1647   0.2002   0.1469   0.0967           0.0155           0.1011
A         0.0000         0.0019   0.0019   0.0022   0.0018           0.0031           0.0011
C1        0.0000         0.9776   0.9471   0.9424   0.9782           1.0018           0.9925
I         0.0000         0.0188   0.0296   0.0336   0.0184           0.0042           0.0098
C2        NaN            0.0017   0.0214   0.0217   0.0016           −0.0091          −0.0033
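Each column of Table 3 is a 14-dimensional vector: ten normalized LBP histogram bins (L1-L10) followed by the four GLCM features (A, C1, I, C2). A minimal sketch of the 10-bin rotation-invariant uniform LBP(8,2) histogram follows; neighbor sampling is simplified to the eight pixels at offset 2 on the axes and diagonals rather than bilinear interpolation on a true circle, so values will differ slightly from a standard implementation:

```python
import numpy as np

# Eight sampling offsets (dy, dx) approximating a circle of radius 2
# (axis and diagonal neighbors; no bilinear interpolation).
OFFSETS = [(-2, 0), (-2, 2), (0, 2), (2, 2), (2, 0), (2, -2), (0, -2), (-2, -2)]

def lbp_riu2_hist(gray):
    """10-bin rotation-invariant uniform LBP histogram (P = 8, simplified R = 2).
    Bins 0-8 count uniform patterns by number of 'on' bits; bin 9 is non-uniform."""
    g = gray.astype(np.int32)
    h, w = g.shape
    center = g[2:h - 2, 2:w - 2]
    bits = np.stack([g[2 + dy:h - 2 + dy, 2 + dx:w - 2 + dx] >= center
                     for dy, dx in OFFSETS])
    u = (bits != np.roll(bits, -1, axis=0)).sum(axis=0)  # circular 0/1 transitions
    s = bits.sum(axis=0)                                 # number of 'on' bits
    codes = np.where(u <= 2, s, 9)                       # uniform -> 0..8, else 9
    hist = np.bincount(codes.ravel(), minlength=10).astype(np.float64)
    return hist / hist.sum()

rng = np.random.default_rng(3)
gray = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
hist = lbp_riu2_hist(gray)  # analogous to L1..L10 in Table 3
print(hist)
```

Concatenating this 10-bin histogram with the four GLCM features yields the 14-dimensional fused vector used for classification.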
Table 4. Forest fire flame identification results.

Algorithm              Vector Dimensions   Sample Size   Correct Recognitions   Accuracy (%)
LBP(8,1)^riu2          10                  190           156                    82.11
LBP(8,2)^riu2          10                  190           163                    85.78
GLCM                   4                   190           161                    84.73
LBP(8,1)^riu2 + GLCM   14                  190           174                    91.58
LBP(8,2)^riu2 + GLCM   14                  190           177                    93.15
Table 5. Comparison of the proposed algorithm and a deep-learning algorithm.

Recognition Algorithm     Proposed Algorithm   Convolutional Neural Network
Identification accuracy   93.15%               91.42%
Total time consumed       9.22 min             28.30 min
Recognition time          0.42 s               1.29 s
Li, C.; Liu, Q.; Li, B.; Liu, L. Investigation of Recognition and Classification of Forest Fires Based on Fusion Color and Textural Features of Images. Forests 2022, 13, 1719. https://doi.org/10.3390/f13101719
