Article

Multispectral Image Super-Resolution Burned-Area Mapping Based on Space-Temperature Information

1 College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
2 Department of Traffic Information and Control Engineering, Tongji University, Shanghai 200092, China
3 Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(22), 2695; https://doi.org/10.3390/rs11222695
Submission received: 29 October 2019 / Revised: 13 November 2019 / Accepted: 14 November 2019 / Published: 18 November 2019
(This article belongs to the Special Issue New Advances on Sub-pixel Processing: Unmixing and Mapping Methods)

Abstract
Multispectral imaging (MI) provides important information for burned-area mapping. Due to the severe conditions of burned areas and the limitations of sensors, the resolution of collected multispectral images is sometimes very rough, hindering the accurate determination of burned areas. Super-resolution mapping (SRM) has been proposed for mapping burned areas in rough images to solve this problem, allowing super-resolution burned-area mapping (SRBAM). However, the existing SRBAM methods do not use sufficiently accurate space information and detailed temperature information. To improve the mapping accuracy of burned areas, an improved SRBAM method utilizing space–temperature information (STI) is proposed here. STI contains two elements, a space element and a temperature element. We utilized the random-walker algorithm (RWA) to characterize the space element, which encompassed accurate object space information, while the temperature element with rich temperature information was derived by calculating the normalized burn ratio (NBR). The two elements were then merged to produce an objective function with space–temperature information. The particle swarm optimization algorithm (PSOA) was employed to handle the objective function and derive the burned-area mapping results. The dataset of the Landsat-8 Operational Land Imager (OLI) from Denali National Park, Alaska, was used for testing and showed that the STI method is superior to the traditional SRBAM method.

1. Introduction

Wildland fires are a challenging problem for the earth's ecosystem, as they affect the balance of greenhouse gases, plant distribution, and inhabitant safety. The distribution of burned areas is fundamental for the study of wildland fires [1] and can be obtained using multispectral imaging (MI). However, due to the severe conditions of burned areas and the limitations of sensors, the resolution of collected multispectral images is sometimes very rough, presenting many mixed pixels, which hinders the accurate determination of burned areas [2]. Since more than one land-cover class is present in a mixed pixel, traditional classification technology, which assigns a single land-cover class to each pixel, often cannot deal effectively with mixed pixels. Therefore, burned-area mapping based on the classification results of coarse images is usually not ideal [3]. To address this issue, super-resolution mapping (SRM) technology has been proposed to handle mixed pixels and obtain burned-area maps, resulting in the method named super-resolution burned-area mapping (SRBAM).
In SRM, a mixed pixel is segmented into S × S subpixels according to the scale S, and each subpixel is then given a class label to produce the final mapping result. In other words, the fractional images obtained by spectral unmixing of the MI are processed to derive a high-resolution classification map at the subpixel scale. Atkinson first suggested that SRM is usually based on the spatial dependence theory; on the basis of this theory, the most likely SRM map is assumed to be the one with the greatest spatial correlation [4]. There are two types of SRM, initialized-then-optimized SRM and soft-then-hard SRM [5]. In the initialized-then-optimized type, land-cover class labels are assigned randomly to subpixels, and the SRM result is then refined by gradually changing the spatial position of the subpixels. The pixel-swapping algorithm (PSA) proposed by Atkinson [6] is typically used for this type of SRM. In PSA, pairs of subpixels of different classes within a coarse pixel are exchanged, and the SRM result is approached iteratively. Other approaches, such as perimeter minimization [7], neighboring values [8], and Moran's I [9], are also used for this type of SRM. To achieve better results, artificial intelligence methods such as the genetic method [10], simulated annealing [11], and particle swarm optimization [12] have been used to optimize this type of SRM. The other SRM type is soft-then-hard SRM [13]. In the soft-then-hard type, high-resolution fractional images giving the proportions with which each subpixel belongs to the different classes are derived from the rough fractional images through subpixel sharpening. On the basis of these proportions, each subpixel is then assigned a class label by a class allocation method. Among subpixel-sharpening methods, various types of spatial attraction models have been proposed to quantify spatial dependence [14,15].
The Hopfield neural network [16,17] is applied to obtain the output of neurons representing subpixels, based on the energy minimization principle. Subpixel-sharpening methods also include the backpropagation neural network [18,19], indicator co-kriging [20,21], and some super-resolution reconstruction algorithms [22,23]. In addition, the selected class allocation method has an impact on the final SRM results. Common class allocation methods include class units [24], assigning the highest soft attribute values first [25], linear optimization [26], and object units [27]. Because SRM is an ill-posed inverse problem that aims to reconstruct detailed information at the subpixel scale from coarser pixels, auxiliary data, such as subpixel-shifted images [28,29], light detection and ranging [30], pan-sharpened images [31,32], and fine-scale information [33], are utilized to improve the results. For example, using subpixel-shifted images as additional information is an effective way to improve mapping accuracy; the basic idea is to combine subpixel-shifted images of the same scene to produce a resolution-enhanced image. These SRM methods have been applied in many areas, including flood inundation mapping [34], water boundary extraction [35], change detection [36], and urban development [37]. From the above introduction, we can draw the following conclusions. First, with artificial intelligence technology, initialized-then-optimized SRM can achieve ideal results; however, this type of SRM usually requires complex physical structures and long computational times, which hinders its wide application. Second, soft-then-hard SRM usually consists of only two steps and can be performed in a simple way; therefore, this type is more commonly used. Finally, when appropriate auxiliary data are available, these data can be used to improve the final SRM results.
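As a concrete illustration of the soft-then-hard workflow described above, the following Python sketch (a minimal example of our own, not code from any cited method) upsamples a one-class fraction image, applies a simple neighbour smoothing as a stand-in for a spatial attraction model, and then allocates labels per coarse pixel by the "highest soft attribute values first" rule:

```python
import numpy as np

def soft_then_hard_srm(coarse_frac, S):
    """Soft-then-hard SRM sketch for one class (binary output map).

    coarse_frac: (H, W) fraction of the class in each coarse pixel.
    Returns an (H*S, W*S) 0/1 map at the subpixel scale.
    """
    H, W = coarse_frac.shape
    # Soft step: replicate fractions to the fine grid, then smooth so
    # subpixels adjacent to class-rich neighbours get higher soft scores
    # (a crude stand-in for a spatial attraction model).
    fine = np.kron(coarse_frac, np.ones((S, S)))
    pad = np.pad(fine, 1, mode="edge")
    soft = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    # Hard step: within each coarse pixel, give the class to the
    # round(frac * S*S) subpixels with the highest soft scores.
    out = np.zeros_like(fine)
    for i in range(H):
        for j in range(W):
            n = int(round(coarse_frac[i, j] * S * S))
            if n == 0:
                continue
            block = soft[i*S:(i+1)*S, j*S:(j+1)*S]
            idx = np.argsort(block, axis=None)[::-1][:n]
            r, c = np.unravel_index(idx, block.shape)
            out[i*S + r, j*S + c] = 1
    return out
```

Because the number of allocated subpixels in each coarse pixel is derived from its fraction, the hard map stays coherent with the coarse fractional image by construction.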
An accurate distribution of burned areas in rough multispectral images can be obtained by the SRM technique SRBAM [3]. However, there are some problems with the existing SRBAM. First, it is based on subpixel space information, whereas object space information has been proved to be more effective and accurate [38,39]; therefore, the space information used in the existing SRBAM is not accurate enough. In addition, temperature information is not fully taken into account in the existing SRBAM [3,16]. Burned areas have higher temperatures than their surroundings, and this important temperature information should be utilized to improve the accuracy of the final mapping. To solve these issues, we propose a novel SRBAM method based on space–temperature information, which we name STI. STI includes a space element and a temperature element. In the space element, we utilize the random-walker algorithm (RWA) [40] to obtain object space information to replace the subpixel space information. The temperature element is calculated using the normalized burn ratio (NBR) [41]. An objective function with space–temperature information is obtained by combining the space information with the temperature information. According to this objective function, the particle swarm optimization algorithm (PSOA) is employed to obtain the final burned-area mapping.
The innovations of STI are twofold: (1) We utilize RWA to characterize the space element with object space information. Since this takes into account the space information both among and within objects, it is more comprehensive and accurate than the subpixel space information [3]. (2) The temperature information is utilized in the proposed STI by calculating the NBR. The experiments reported here, using the Landsat-8 Operational Land Imager (OLI) dataset, show the superiority of STI over other state-of-the-art methods.

2. Dataset

Lightning ignited the Castle Rocks fire in the deep backcountry of Denali National Park, Alaska, in July 2013. More than 12,900 acres were burned in two months, and the fire caused great losses to the local ecosystem and economy. Obtaining a fine spatial distribution of burned areas is very important for firefighting and disaster relief. The experimental dataset was an image of this area acquired by Landsat-8 OLI on 26 August 2013, which can be downloaded from the US Geological Survey (USGS) website: http://earthexplorer.usgs.gov/. The image has a size of 2968 × 2052 pixels and 30 m spatial resolution and is centered at 64°31′N, 152°52′W. As shown in Figure 1a, the five visible main burned areas appear in red due to the false color of the image. For quantitative evaluation, a reference image was derived from Figure 1a by a classification algorithm based on a least-squares support vector machine [42]. There are two class labels (burned area and background) in the reference image shown in Figure 1b; to highlight the burned area, its label is marked in red, and the background label is marked in black.

3. Methodology

3.1. Space Element

Here, we introduce the space element T_spa with object space information to obtain more accurate space information. Figure 2 shows the process of production of the space element. First, the rough multispectral image was upsampled by bicubic interpolation, and a fractional image with the subpixel proportions of the burned-area class was obtained by unmixing the upsampled image.
Second, the first principal component (PC) was extracted from the upsampled image through principal component analysis (PCA). Because this first PC contains a large amount of space information, we segmented it to obtain objects using a multiresolution segmentation method [43]. Q is defined as the segmentation scale parameter, which determines the object size and the condition of merger termination. The segmentation criterion is given by
H = λ × H_spectral + (1 − λ) × H_shape (1)
where H represents the regional difference, and λ is a free parameter balancing the shape difference H_shape and the spectral difference H_spectral.
The shape difference H shape is calculated by
H_shape = λ_shape × (A / N) + (1 − λ_shape) × (A / R) (2)
where A is the actual frontier length of the object region, R is the frontier length of the bounding rectangle of the object region, N is the number of subpixels in the object region, A / N and A / R represent the smoothness and compactness of the object region, respectively, and λ_shape is a free parameter.
The spectral difference H spectral is defined by
H_spectral = Σ_{b=1}^{B} λ_b^spectral × D_b (3)
where b indexes the spectral bands (b = 1, 2, …, B, with B being the total number of bands), D_b is the standard deviation of the spectral values of band b in the object region, and λ_b^spectral is a free parameter for band b.
Among adjacent object regions, we merged two objects with the minimum difference. When H was larger than Q , we terminated the merging process and extracted the final objects.
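The merge criterion of Equations (1)–(3) can be written down directly. This is a hedged sketch: the region representation (a dict holding subpixel spectral values and the two frontier lengths) and the default weights are our own choices, matching the parameter values used later in the experiments (λ = 0.5, λ_shape = 0.4, λ_b^spectral = 1):

```python
import numpy as np

def merge_cost(region, lam=0.5, lam_shape=0.4, lam_spec=None):
    """Regional difference H for a candidate object (sketch of Eqs. 1-3).

    region: dict with
      'values'   -- (N, B) spectral values of the region's N subpixels
      'frontier' -- actual frontier length A of the region
      'rect'     -- frontier length R of the region's bounding rectangle
    Band weights lam_spec default to 1 for every band.
    """
    vals = region["values"]
    N, B = vals.shape
    if lam_spec is None:
        lam_spec = np.ones(B)
    A, R = region["frontier"], region["rect"]
    # shape difference: smoothness (A/N) and compactness (A/R)
    h_shape = lam_shape * (A / N) + (1 - lam_shape) * (A / R)
    # spectral difference: weighted per-band standard deviation
    h_spec = float(np.sum(lam_spec * vals.std(axis=0)))
    return lam * h_spec + (1 - lam) * h_shape
```

In the segmentation loop, the pair of adjacent regions with the smallest `merge_cost` after merging is fused, and merging stops once H exceeds the scale parameter Q.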
Third, the space element T_spa with object space information was derived by RWA. M objects O_m (m = 1, 2, …, M) were derived by segmenting the upsampled image, where object O_m comprises N_m subpixels. The burned-area class proportion L(p_i) of subpixel p_i (i = 1, 2, …, N_m) was obtained from the spectral unmixing of the upsampled image, and the burned-area class proportions of the subpixels were averaged to generate the burned-area class proportion G(O_m) of object O_m:
G(O_m) = (1/N_m) × Σ_{i=1}^{N_m} L(p_i) (4)
The space part T_spa^(i) corresponding to the i-th subpixel was then obtained by RWA, according to Equation (5):
T_spa^(i) = β × T_among(G) + (1 − β) × T_within(G) (5)
where T_among(G) is the space information among objects, T_within(G) represents the space information within each object, G = [G(O_1), G(O_2), …, G(O_M)]^T is a column vector, and β is set to 0.5.
T_among(G) is given by
T_among(G) = G^T L G (6)
where L is the Laplacian matrix:
L_mb = { Σ_k v_mk, if m = b; −v_mb, if m and b are adjacent objects; 0, otherwise } (7)
where v_mb = exp(−ε (v̂_m − v̂_b)²). The free parameter ε was set to 0.6, and v̂_m is the spectral value of the m-th object O_m:
v̂_m = (1/N_m) × Σ_{i=1}^{N_m} v_i (8)
where v_i is the spectral value of the i-th subpixel in object O_m. T_within(G) is defined as
T_within(G) = (1 − G)^T Λ̄ (1 − G) + (G − 1)^T Λ (G − 1) (9)
where the diagonal values of the diagonal matrix Λ̄ are the background class proportions, and the diagonal values of the diagonal matrix Λ are the burned-area class proportions.
We minimized the space element T_spa over all land-cover classes. The minimization formula is
x(p_i) = { 1, if subpixel p_i belongs to the burned-area class; 0, otherwise } (10)
T_spa = Min Σ_{i=1}^{N_m} x(p_i) × T_spa^(i) (11)
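For given object proportions G, the space energy of Equations (5)–(9) can be evaluated as below. This is an illustrative sketch under stated assumptions: the object adjacency matrix, mean object spectra, and the class-proportion diagonals are assumed inputs, and the graph Laplacian is built from the edge weights v_mb = exp(−ε(v̂_m − v̂_b)²) on adjacent objects:

```python
import numpy as np

def space_energy(G, adj, spec, lam_bur, lam_back, beta=0.5, eps=0.6):
    """Space-element energy sketch (Eqs. 5-9).

    G        -- (M,) burned-area proportion of each object
    adj      -- (M, M) boolean adjacency between objects
    spec     -- (M,) mean spectral value of each object
    lam_bur  -- (M,) burned-area proportions (diagonal of Lambda)
    lam_back -- (M,) background proportions (diagonal of Lambda-bar)
    """
    # edge weights v_mb = exp(-eps * (v_m - v_b)^2) on adjacent objects
    W = np.exp(-eps * (spec[:, None] - spec[None, :]) ** 2) * adj
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian (Eq. 7)
    t_among = G @ L @ G                      # information among objects (Eq. 6)
    # information within objects: prior term of Eq. (9)
    t_within = ((1 - G) @ np.diag(lam_back) @ (1 - G)
                + (G - 1) @ np.diag(lam_bur) @ (G - 1))
    return beta * t_among + (1 - beta) * t_within
```

The among-objects term vanishes when all object proportions agree, so minimizing the energy smooths the labeling across spectrally similar neighbouring objects while the within-objects term anchors it to the unmixed class proportions.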

3.2. Temperature Element

A new temperature element T_tem is proposed here to fully utilize the temperature information. T_tem aims to minimize the spectral difference between the observed NBR value (NBR_obe) and the simulated NBR value (NBR_sim). The near-infrared (NIR) band and short-wave infrared band 1 (SWIR1) were used to calculate NBR_obe [41]:
NBR_obe = (1/K) × Σ (ρ_NIR^obe − ρ_SWIR1^obe) / (ρ_NIR^obe + ρ_SWIR1^obe) (12)
where the observed reflectances of the NIR band ρ_NIR^obe and the SWIR1 band ρ_SWIR1^obe are obtained directly from the original MI, the sum runs over the mixed pixels, and K is the number of mixed pixels.
Suppose r_NIR^bur and r_SWIR1^bur are the reflectances of the burned area in the NIR and SWIR1 bands, and r_NIR^non and r_SWIR1^non are the corresponding reflectances of the background. For each mixed pixel in these bands, the proportion of burned area a_NIR^bur or a_SWIR1^bur is the ratio of the number of burned-area subpixels to the total number of subpixels. The proportions of background in the two bands are then 1 − a_NIR^bur and 1 − a_SWIR1^bur, respectively. We modeled the reflectance of each mixed pixel as a linear mixture of all subpixel spectra. The simulated reflectance of each mixed pixel in the NIR band, ρ_NIR^sim, and in the SWIR1 band, ρ_SWIR1^sim, was then calculated using Equations (13) and (14), respectively:
ρ_NIR^sim = (r_NIR^bur × a_NIR^bur) + [r_NIR^non × (1 − a_NIR^bur)] (13)
ρ_SWIR1^sim = (r_SWIR1^bur × a_SWIR1^bur) + [r_SWIR1^non × (1 − a_SWIR1^bur)] (14)
NBR sim is given by:
NBR_sim = (1/K) × Σ (ρ_NIR^sim − ρ_SWIR1^sim) / (ρ_NIR^sim + ρ_SWIR1^sim) (15)
The temperature part T tem was then obtained by minimizing the difference between NBR obe and NBR sim :
T_tem = Min (NBR_obe − NBR_sim)² (16)

3.3. Implementation of STI

To improve the burned-area mapping result, STI is proposed as shown in Figure 3. It includes the following three steps:
Step 1. Bicubic interpolation, segmentation, and RWA were utilized to obtain the space element T spa with more accurate space information. At the same time, the temperature element T tem , which contains rich temperature information, was obtained by calculating the NBR.
Step 2. We merged the space element T spa and the temperature element T tem through a trade-off parameter θ to produce the objective function T with space–temperature information. The aim of the proposed STI is to minimize T . In STI, we consider a weighted sum of the space and temperature elements of STI, because this information-fusion method has a simple physical meaning and is easy to implement. Of course, we can also use other more effective information-fusion techniques, such as multiobjective optimization [44], alpha integration [45], and so on.
Min T = (1 − θ) × T_spa + θ × T_tem (17)
Step 3. To optimize the objective function, PSOA was employed. First, we randomly assigned a burned area or background label to all subpixels. Second, the labels of these subpixels were iteratively changed until the minimum value of T was derived. During each iteration, the burned-area label was changed to the background label and vice versa. If T increased, the change was rejected, otherwise it was accepted. When less than 0.1% of labels were changed, the PSOA was terminated.
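Step 3 can be sketched as below. Note the simplification: the paper employs PSOA, while this sketch (our own) uses a plain greedy flip search with the same accept/reject rule and the same 0.1% stopping criterion, which conveys the iterative label-refinement idea without the swarm machinery:

```python
import numpy as np

def optimize_labels(objective, n_sub, max_iter=50, tol=0.001, rng=None):
    """Label optimisation sketch for Step 3 (greedy stand-in for PSOA).

    objective -- callable mapping a 0/1 label vector to the scalar T
    n_sub     -- number of subpixels
    tol       -- stop when fewer than tol * n_sub labels change in a sweep
    """
    rng = np.random.default_rng(rng)
    labels = rng.integers(0, 2, size=n_sub)   # random initial labels
    best = objective(labels)
    for _ in range(max_iter):
        changed = 0
        for i in range(n_sub):
            labels[i] ^= 1                    # try flipping one label
            t = objective(labels)
            if t < best:                      # accept if T decreases
                best, changed = t, changed + 1
            else:
                labels[i] ^= 1                # otherwise reject the flip
        if changed < tol * n_sub:             # < 0.1% of labels changed
            break
    return labels, best
```

Any objective of the form of Equation (17) can be passed in; the snippet below uses a toy separable objective only to show that the search converges.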

4. Experiments and Results

4.1. Experimental Settings

Fine images of the five visible main burned areas from the experimental dataset are shown in Figure 4. The test sizes of the five burned areas were 720 × 720 pixels, 300 × 300 pixels, 720 × 720 pixels, 400 × 400 pixels, and 500 × 500 pixels. A flowchart of the experimental process is shown in Figure 5. We used the most common experimental process of SRM to conduct the experiments. The five visible main burned areas were downsampled via an S × S mean filter to produce a rough multispectral image. Here, the scale S was set to 8; that is, 8 × 8 pixels in the original fine image were merged into one mixed pixel in the simulated rough image. In this case, the impact of image-registration errors on SRM could be evaluated directly. In addition, a quantitative evaluation could be carried out more reasonably, because a reference image could be derived from the classification result of the fine image and compared with the SRM result from the simulated rough image. Rough images of the five burned areas are shown in Figure 6. Although the false-color rough images highlight the burned areas, it was difficult to obtain accurate distribution and boundary information of the burned areas due to the rough resolution. For example, because of the many mixed pixels at the edge of the burned area, there is an obvious vertical line at the edge of the burned area in Figure 6d compared with Figure 4d. In addition, it was difficult for classification technology to handle the mixed pixels, because one mixed pixel contains more than one land-cover class. To solve this problem, SRM was utilized to handle the mixed pixels and produce accurate burned-area mapping. The least-squares linear mixture model (LSLMM) [46] was applied to the rough images to derive fractional images as inputs. In the segmentation method, λ, λ_shape, and λ_b^spectral were set to 0.5, 0.4, and 1, respectively, according to multiple tests.
The trade-off parameter θ was set to 0.4, 0.5, 0.4, 0.6, and 0.4 in the five test areas, while the segmentation scale parameter Q was set to 15, 10, 15, 20, and 15. All experiments were performed using MATLAB 2018a.
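The S × S mean-filter degradation used to simulate the rough input can be written in a few lines (shown here in Python rather than MATLAB; the function name is our own):

```python
import numpy as np

def mean_downsample(img, S):
    """Degrade a fine image with an S x S mean filter (per band).

    img: (H, W) or (H, W, B) array with H and W divisible by S.
    """
    H, W = img.shape[:2]
    assert H % S == 0 and W % S == 0, "image size must be a multiple of S"
    # group pixels into S x S blocks and average within each block
    return img.reshape(H // S, S, W // S, S, -1).mean(axis=(1, 3)).squeeze()
```

Each coarse pixel value is then the mean of the S × S fine pixels it covers, which is exactly the mixed-pixel simulation described above for S = 8 (and for the S = 5 and S = 10 tests later).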
We tested four SRM methods: the hybrid spatial attraction model (HSAM) [15], the object-scale spatial SRM (OSRM) [39], the SRBAM [3], and the proposed STI. Burned area (%) was defined as the ratio of the number of correctly mapped burned-area subpixels in each SRM result to the total number of burned-area subpixels in the reference image; background (%) was defined analogously as the ratio of the number of correctly mapped background subpixels to the total number of background subpixels in the reference image. The four methods were evaluated on the basis of the determination accuracy of each class (burned area (%) and background (%)), overall accuracy (OA (%)), and the kappa coefficient (Kappa) [5].
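The four evaluation indices can be computed as below; this sketch uses the standard definitions, with burned area (%) and background (%) as the per-class accuracies defined above and Cohen's kappa computed from the 2 × 2 confusion matrix:

```python
import numpy as np

def accuracy_report(pred, ref):
    """Per-class accuracy, OA, and Kappa for binary maps (1 = burned)."""
    pred, ref = pred.ravel(), ref.ravel()
    burned = np.mean(pred[ref == 1] == 1)      # burned area (%) as fraction
    background = np.mean(pred[ref == 0] == 0)  # background (%) as fraction
    oa = np.mean(pred == ref)                  # overall accuracy
    # chance agreement from the marginals, then Cohen's kappa
    p1, r1 = np.mean(pred), np.mean(ref)
    pe = p1 * r1 + (1 - p1) * (1 - r1)
    kappa = (oa - pe) / (1 - pe)
    return burned, background, oa, kappa
```

Multiplying the first three outputs by 100 gives the percentage figures reported in Table 1.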

4.2. Results Analysis

First, a visual comparison was performed. The burned-area mapping results of the SRM methods in the five test areas are given in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. In each figure, a detailed area is marked with a white rectangular frame. When comparing the reference images with the four experimental results, we found that STI outperformed the other three SRM methods, and its results were more similar to the reference images. For burned areas with a complex distribution, such as Areas 1, 3, 4, and 5, there were many disconnected patches, and some small areas disappeared in the results of HSAM, OSRM, and SRBAM. For areas with a simple distribution, such as Area 2, there were many obvious burrs at the boundaries between burned area and background in the results of HSAM, OSRM, and SRBAM. There are two reasons for these phenomena. First, the space information was not accurate enough: HSAM and SRBAM only consider pixel-level space information, which is less detailed than object-level space information, and although OSRM utilizes object-level space information, it only calculates space information among object regions and does not consider space information within object regions. Since the proposed STI utilizes object-level space information both among and within object regions through RWA, its space information is more accurate than that of the other three methods. Second, STI makes fuller use of temperature information than the other methods, so it obtains better burned-area mapping results.
Second, we analyzed the accuracy evaluation indices. The performance of the four SRM methods was evaluated on the basis of burned area (%), background (%), OA (%), and Kappa. As shown in Table 1, the burned area (%) obtained by STI was higher than that obtained by the other three methods. Compared with SRBAM, the burned area (%) of STI increased by 3.29%, 4.73%, 2.85%, 0.63%, and 3.55% in the five test areas. With the aid of space–temperature information, the proposed STI produced the highest OA (%) and Kappa.
Third, we tested the performance of SRM at different scales S, which represent simulated rough input images with different resolutions. HSAM, OSRM, SRBAM, and STI were tested using two other scales (5 and 10) in the five test areas. The burned area (%) of these methods for S = 5 and S = 10 is shown in Figure 12. We found that, as S increased, the burned area (%) determined by the four methods decreased; this is because, as S becomes larger, the input image becomes rougher, posing a greater challenge to SRM. The experimental results showed that STI still determined the highest burned area (%) at different scales S, confirming that STI performs best for inputs with different resolutions.
Fourth, the influence of the selected parameter θ on the proposed method was studied. The five test areas (S = 8) were rerun for 10 values of θ from 0 to 0.9, at an interval of 0.1. The results are shown in Figure 13. There was no contribution of the temperature element T_tem when θ = 0; at that point, only the space element T_spa was working, and therefore the burned area (%) was low. As θ increased, the burned area (%) increased, because the use of temperature information from the temperature element T_tem increased with θ. When θ = 0.4, 0.5, 0.4, 0.6, and 0.4 in the five test areas, respectively, the burned area (%) reached its highest value; at these values, the contributions of the space term T_spa and the temperature term T_tem reached a state of balance. However, as θ increased further, the space term T_spa reduced its contribution to Equation (17), and the burned-area mapping accuracy suffered as a consequence of the decreased space information from the space term T_spa. The parameter θ in STI required several experiments to be determined; that is, the adaptability of STI is not ideal.
Fifth, the impact of the segmentation scale parameter Q on the proposed method was studied. In the proposed STI, the space element T_spa is obtained by calculating the class proportions of objects through RWA. Therefore, the segmentation step that produces the objects is very important for STI, and the quality of the objects is determined by the segmentation scale parameter Q. Because Q determines the object size and the condition of merger termination, we studied the optimal selection of Q in this experiment. Ten Q values from 5 to 50, at an interval of 5, were tested in the five test areas (S = 8). As shown in Figure 14, the selection of Q had an impact on the final mapping accuracy. When Q was not properly selected, the burned area (%) was low, because an inappropriate Q resulted in low-quality objects, which affected the accuracy of the space information in the space element T_spa. After many experiments, the best Q values for the five test areas were found to be 15, 10, 15, 20, and 15. The segmentation scale parameter Q thus also required many experiments to be determined, which again shows that the adaptability of STI is not ideal.
Finally, we analyzed the operation time (s). Figure 15 shows the operation times of the four SRM methods in the five test areas (S = 8). The results showed that STI required the longest time, because it involves more complex processing. Although STI requires more computation time than the other SRM methods, it shows improved performance; nevertheless, the long running time is a disadvantage of the proposed method.

5. Conclusions

Often, traditional classification technology cannot effectively deal with mixed pixels in rough multispectral images. To obtain accurate distributions of land-cover classes in rough images, SRM technology has been proposed; SRM can produce better mapping results than traditional classification methods when processing rough images. In this paper, STI is proposed to improve burned-area mapping by fully utilizing the space–temperature information of burned areas. A space element and a temperature element are used in STI. The RWA is used to compute the segmented objects and obtain the space element with accurate and comprehensive space information. At the same time, the temperature element with full temperature information is obtained by calculating the difference between NBR_obe and NBR_sim. An objective function with space–temperature information is derived by integrating the space element and the temperature element. Finally, the PSOA is utilized to optimize the objective function and produce the burned-area mapping results. Thanks to the space–temperature information, the proposed STI obtains better burned-area mapping results than the existing SRBAM. Experiments on Landsat-8 OLI images of burned areas in Denali National Park, Alaska, showed that STI produced the highest OA (%), achieving 92.39%, 94.21%, 94.42%, 99.01%, and 95.48% in the five tested areas. Although STI performed only 1–2% better in OA (%) than the other SRM methods, it in fact corrected thousands of pixels. For example, the OA (%) of STI was around 1.5% greater than that of SRBAM in Area 1; according to the definition of OA (%), since Area 1 had 720 × 720 pixels, the number of correctly mapped pixels obtained by STI was about 7776 greater than that obtained by SRBAM. Therefore, the gain in accuracy of the proposed method is obvious.
The appropriate values of the parameters θ and Q were selected by multiple tests when using STI. To improve the final mapping results, it is worth studying an adaptive method for selecting the most appropriate values of θ and Q in future work. In addition, it is worth studying how to simplify the structure of the proposed STI and improve its running speed. Finally, the use of new artificial intelligence technology and large amounts of auxiliary data to improve STI and obtain OA values closer to 100% is also worth studying in the future.

Author Contributions

Conceptualization, P.W.; Methodology, P.W.; Software, P.W.; Validation, G.Z., L.Z. and B.J.; Formal analysis, P.W.; Investigation, G.Z.; Resources, L.Z.; Data curation, P.W.; Writing—original draft preparation, P.W.; Writing—review and editing, B.J. and H.L.; Visualization, P.W.; Supervision, L.Z.; Project administration, P.W.; Funding acquisition, L.Z.

Funding

This work was supported by National Natural Science Foundation of China (61801211, 61871218, 61971218); China Postdoctoral Science Foundation (2019M651824), Open Fund of Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University (2019MIP006); National Key R&D Program of China (2018YFE0101000).

Acknowledgments

The authors would like to thank the handling editors and the reviewers for providing valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Giglio, L.; Boschetti, L.; Roy, D.P.; Humber, M.L.; Justice, C.O. The Collection 6 MODIS burned area mapping algorithm and product. Remote Sens. Environ. 2018, 217, 72–85.
2. Bastarrika, A.; Chuvieco, E.; Martin, M.P. Mapping burned areas from Landsat TM/ETM plus data with a two-phase algorithm: Balancing omission and commission errors. Remote Sens. Environ. 2011, 115, 1003–1012.
3. Ling, F.; Du, Y.; Zhang, Y.; Li, X.; Xiao, F. Burned-Area Mapping at the Subpixel Scale With MODIS Images. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1963–1967.
4. Atkinson, P.M. Mapping sub-pixel boundaries from remotely sensed images. In Innovations in GIS; CRC Press: New York, NY, USA, 1997; pp. 166–180.
5. Wang, Q.; Atkinson, P.M.; Shi, W. Indicator cokriging-based subpixel mapping without prior spatial structure information. IEEE Trans. Geosci. Remote Sens. 2015, 53, 309–323.
6. Atkinson, P.M. Sub-pixel target mapping from soft-classified remotely sensed imagery. Photogramm. Eng. Remote Sens. 2005, 71, 839–846.
7. Villa, A.; Chanussot, J.; Benediktsson, J.A.; Jutten, C. Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution. IEEE J. Sel. Top. Signal Process. 2011, 5, 521–533.
8. Ling, F.; Li, W.; Du, Y.; Li, X. Land cover change mapping at the subpixel scale with different spatial-resolution remotely sensed imagery. IEEE Geosci. Remote Sens. Lett. 2011, 8, 182–186.
9. Makido, Y.; Shortridge, A.; Messina, J.P. Assessing alternatives for modeling the spatial distribution of multiple land-cover classes at subpixel scales. Photogramm. Eng. Remote Sens. 2007, 73, 935–943.
10. Verhoeye, J.; De Wulf, R. Land-cover mapping at sub-pixel scales using linear optimization techniques. Remote Sens. Environ. 2002, 79, 96–104.
11. Zhang, Y.; Ling, F.; Li, X.; Du, Y. Super-resolution land cover mapping using multiscale self-similarity redundancy. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 5130–5145.
12. Tong, X.; Xu, X.; Plaza, A.; Xie, H.; Pan, H.; Cao, W.; Lv, D. A new genetic method for subpixel mapping using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2016, 9, 4480–4491.
13. He, D.; Zhong, Y.; Feng, R.; Zhang, L. Spatial-temporal sub-pixel mapping based on swarm intelligence theory. Remote Sens. 2016, 8, 894.
14. Wang, P.; Zhang, G.; Hao, S.; Wang, L. Improving remote sensing image super-resolution mapping based on the spatial attraction model by utilizing the pansharpening technique. Remote Sens. 2019, 11, 247.
15. Ling, F.; Li, X.; Du, Y.; Xiao, F. Sub-pixel mapping of remotely sensed imagery with hybrid intra- and inter-pixel dependence. Int. J. Remote Sens. 2013, 34, 341–357.
16. Wang, P.; Wang, L.; Leung, H.; Zhang, G. Subpixel mapping based on hopfield neural network with more prior information. IEEE Geosci. Remote Sens. Lett. 2019, 8, 1284–1288.
17. Li, X.; Du, Y.; Ling, F.; Feng, Q.; Fu, B. Superresolution mapping of remotely sensed image based on hopfield neural network with anisotropic spatial dependence model. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1265–1269.
18. Nigussie, D.; Zurita-Milla, R.; Clevers, J.G.P.W. Possibilities and limitations of artificial neural networks for subpixel mapping of land cover. Int. J. Remote Sens. 2011, 32, 7203–7226.
19. Shao, Y.; Lunetta, R.S. Sub-pixel mapping of tree canopy, impervious surfaces, and cropland in the Laurentian great lakes basin using MODIS time-series data. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2011, 4, 336–347.
20. Wang, Q.; Atkinson, P.M. The effect of the point spread function on sub-pixel mapping. Remote Sens. Environ. 2017, 193, 127–137.
21. Wang, Q.; Shi, W.; Wang, L. Allocating classes for soft-then-hard sub-pixel mapping algorithms in units of class. IEEE Trans. Geosci. Remote Sens. 2014, 5, 2940–2959.
  22. Wang, P.; Wang, L.; Chanussot, J. Soft-then-hard subpixel land cover mapping based on spatial-spectral interpolation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1851–1854. [Google Scholar] [CrossRef]
  23. Wang, Q.; Shi, W.; Atkinson, P.M. Sub-pixel mapping of remote sensing images based on radial basis function interpolation. ISPRS J. Photogramm. 2014, 92, 1–15. [Google Scholar] [CrossRef]
  24. Wang, Q.; Shi, W.; Atkinson, P.M.; Zhang, H. Class allocation for soft-then-hard subpixel mapping algorithms with adaptive visiting order of classes. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1494–1498. [Google Scholar] [CrossRef]
  25. Chen, Y.; Ge, Y.; Heuvelink, G.B.M.; Hu, J.; Jiang, Y. Hybrid constraints of pure and mixed pixels for soft-then-hard super-resolution mapping with multiple shifted images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2015, 8, 2040–2052. [Google Scholar] [CrossRef]
  26. Wang, P.; Zhang, G.; Kong, Y.; Leung, H. Superresolution mapping based on hybrid interpolation by parallel paths. Remote Sens. Lett. 2019, 10, 149–157. [Google Scholar] [CrossRef]
  27. Ge, Y.; Chen, Y.; Stein, A.; Li, S.; Hu, J. Enhanced sub-pixel mapping with spatial distribution patterns of geographical objects. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2356–2370. [Google Scholar] [CrossRef]
  28. Xu, X.; Zhong, Y.; Zhang, L.; Zhang, H. Sub-pixel mapping based on a MAP model with multiple shifted hyperspectral imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 580–593. [Google Scholar] [CrossRef]
  29. Wang, P.; Wang, L.; Mura, M.D.; Chanussot, J. Using multiple subpixel shifted images with spatial-spectral information in soft-then-hard subpixel mapping. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2017, 13, 1851–1854. [Google Scholar] [CrossRef]
  30. Nguyen, M.Q.; Atkinson, P.M.; Lewis, H.G. Superresolution mapping using Hopfield neural network with LIDAR data. IEEE Geosci. Remote Sens. Lett. 2005, 2, 366–370. [Google Scholar] [CrossRef]
  31. Wang, P.; Wang, L.; Wu, Y.; Leung, H. Utilizing pansharpening technique to produce sub-pixel resolution thematic map from coarse remote sensing image. Remote Sens. 2018, 10, 884. [Google Scholar] [CrossRef]
  32. Wang, P.; Mura, M.D.; Chanussot, J.; Zhang, G. Soft-then-hard super-resolution mapping based on pansharpening technique for remote sensing image. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 334–344. [Google Scholar] [CrossRef]
  33. Thornton, M.W.; Atkinson, P.M.; Holland, D.A. A linearised pixel swapping method for mapping rural linear land cover features from fine spatial resolution remotely sensed imagery. Comput. Geosci. 2007, 33, 1261–1272. [Google Scholar] [CrossRef]
  34. Wang, P.; Zhang, G.; Leung, H. Improving super-resolution flood inundation mapping for multispectral remote sensing image by supplying more spectral information. IEEE Geosci. Remote Sens. Lett. 2019, 16, 771–775. [Google Scholar] [CrossRef]
  35. Xie, H.; Luo, X.; Xu, X.; Pan, H.; Tong, X. Automated subpixel surface water mapping from heterogeneous urban environments using Landsat 8 OLI Imagery. Remote Sens. 2016, 8, 584. [Google Scholar] [CrossRef] [Green Version]
  36. Wang, Q.; Atkinson, P.M.; Shi, W. Fast subpixel mapping algorithms for subpixel resolution change detection. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1692–1706. [Google Scholar] [CrossRef]
  37. Ling, F.; Li, X.; Xiao, F.; Fang, S.; Du, Y. Object-based sub-pixel mapping of buildings incorporating the prior shape information from remotely sensed imagery. Int. J. Appl. Earth Observat. Geoinf. 2012, 18, 283–292. [Google Scholar] [CrossRef]
  38. Wang, P.; Zhang, L.; Zhang, G.; Bi, H.; Mura, M.D.; Chanussot, J. Superresolution land cover mapping based on pixel-, subpixel-, and superpixel-scale spatial dependence with pansharpening technique. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, online. 1–17. [Google Scholar] [CrossRef]
  39. Chen, Y.; Ge, Y.; Heuvelink, G.B.M.; An, R.; Chen, Y. Object-based superresolution land-cover mapping from remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2018, 56, 328–340. [Google Scholar] [CrossRef]
  40. Kang, X.; Li, S.; Fang, L.; Li, M.; Benediktsson, J.A. Extended random walker-based classification of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 144–153. [Google Scholar] [CrossRef]
  41. Holden, Z.A.; Smith, A.M.S.; Morgan, P.; Rollins, M.G.; Gessler, P.E. Evaluation of novel thermally enhanced spectral indices for mapping fire perimeters and comparisons with fire atlas data. Int. J. Remote Sens. 2005, 26, 4801–4808. [Google Scholar] [CrossRef]
  42. Huang, X.; Zhang, L. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2013, 51, 257–272. [Google Scholar] [CrossRef]
  43. Cui, B.; Xie, X.; Ma, X.; Ren, G.; Ma, Y. Superpixel-based extended random walker for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 53, 3233–3243. [Google Scholar] [CrossRef]
  44. Song, M.; Zhong, Y.; Ma, A.; Feng, R. Multiobjective sparse subpixel mapping for remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2019, 47, 4490–4508. [Google Scholar] [CrossRef]
  45. Soriano, A.; Vergara, L.; Ahmed, B.; Salazar, A. Fusion of scores in a detection context based on alpha integration. Neural Comput. 2015, 27, 1983–2010. [Google Scholar] [CrossRef]
  46. Jia, S.; Qian, Y. Spectral and spatial complexity-based hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2006, 45, 3867–3879. [Google Scholar]
Figure 1. (a) False-color image (short-wave infrared 2 band, near-infrared band, and blue band displayed as red, green, and blue, respectively). (b) Reference image.
Figure 2. The flowchart of producing the space element. PCA: principal component analysis, PC: principal component, RWA: random-walker algorithm.
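The core of the pipeline in Figure 2 is the RWA applied to the principal components of the multispectral image: seeded pixels anchor the two classes, and every unseeded pixel receives the probability that a random walker started there reaches a burned-class seed first. The paper's 2-D implementation is not reproduced here; as a minimal sketch of the idea, the snippet below solves the random-walker Dirichlet problem on a 1-D intensity profile (the profile values, seed positions, and `beta` weight are invented for illustration):

```python
import numpy as np

def random_walker_1d(values, seeds, beta=10.0):
    # Minimal random-walker segmentation on a 1-D intensity profile
    # (e.g., first principal component values along a transect).
    # seeds: 1 and 2 mark labelled pixels (class 1 / class 2); 0 = unlabelled.
    values = np.asarray(values, dtype=float)
    seeds = np.asarray(seeds)
    n = len(values)
    # Gaussian-weighted edges between neighbouring pixels: similar
    # intensities -> strong edge, a jump in intensity -> weak edge.
    w = np.exp(-beta * np.diff(values) ** 2)
    # Graph Laplacian of the chain graph.
    L = np.zeros((n, n))
    for i, wi in enumerate(w):
        L[i, i] += wi
        L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi
        L[i + 1, i] -= wi
    unlabeled = np.flatnonzero(seeds == 0)
    labeled = np.flatnonzero(seeds > 0)
    # Dirichlet boundary values: 1 at class-1 seeds, 0 at class-2 seeds.
    m = (seeds[labeled] == 1).astype(float)
    Lu = L[np.ix_(unlabeled, unlabeled)]
    B = L[np.ix_(unlabeled, labeled)]
    # Probability that a walker first reaches a class-1 seed.
    probs = (seeds == 1).astype(float)
    probs[unlabeled] = np.linalg.solve(Lu, -B @ m)
    return probs

# Invented example: three dark pixels and two bright pixels,
# seeded at the two ends of the profile.
pc1 = np.array([0.10, 0.12, 0.11, 0.90, 0.95])
seed_map = np.array([1, 0, 0, 0, 2])
p = random_walker_1d(pc1, seed_map)
```

Pixels 1 and 2 sit on the dark side of the intensity jump, so their probabilities stay near 1 (class 1); pixel 3 sits on the bright side and falls near 0.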
Figure 3. The flowchart of the space–temperature information (STI) method. NBR: normalized burn ratio, PSOA: particle swarm optimization algorithm.
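The temperature element in Figure 3 is derived from the NBR. In its standard form, NBR = (NIR − SWIR2) / (NIR + SWIR2); for Landsat-8 OLI these correspond to bands 5 and 7. A minimal sketch (the sample reflectances below are invented, not taken from the paper's data):

```python
import numpy as np

def nbr(nir, swir2):
    # Normalized burn ratio: (NIR - SWIR2) / (NIR + SWIR2).
    # Burned surfaces reflect little NIR and relatively much SWIR,
    # so NBR drops sharply after a fire.
    nir = np.asarray(nir, dtype=float)
    swir2 = np.asarray(swir2, dtype=float)
    return (nir - swir2) / (nir + swir2 + 1e-12)  # epsilon avoids 0/0

# Invented sample reflectances.
veg = nbr(0.45, 0.10)    # healthy vegetation: strongly positive
burn = nbr(0.10, 0.30)   # burned surface: negative
```

The function is vectorized, so passing the full NIR and SWIR2 bands as 2-D arrays yields an NBR image in one call.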
Figure 4. Fine images of the five burned areas. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 5. Flowchart of the experimental process. SRM: super-resolution mapping.
Figure 6. Rough images of the five burned areas. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 7. Burned-area mapping results in area 1. (a) Reference image. Images obtained by (b) hybrid spatial attraction model (HSAM), (c) object-scale spatial SRM (OSRM), (d) super-resolution burned-area mapping (SRBAM), (e) STI.
Figure 8. Burned-area mapping results in area 2. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 9. Burned-area mapping results in area 3. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 10. Burned-area mapping results in area 4. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 11. Burned-area mapping results in area 5. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 12. Burned area (%) derived using the four methods tested for different values of S. (a) S = 5; (b) S = 10.
Figure 13. Burned area (%) derived using the four methods tested for different values of the weight parameter θ. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 14. Burned area (%) derived using the four methods tested for different values of the segmentation scale parameter Q. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 15. Operation time (s) of the four SRM methods.
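The runtimes in Figure 15 are dominated by the iterative PSOA that optimizes the STI objective function. The paper's discrete subpixel-label version is not reproduced here; as a hedged, generic sketch, a minimal continuous PSO minimizing an arbitrary objective looks like the following (all parameter values are illustrative defaults, not the paper's settings):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    # Generic particle swarm optimizer (minimization).
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia / acceleration weights
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal best positions
    pbest_f = np.apply_along_axis(objective, 1, x)  # personal best values
    gbest = pbest[pbest_f.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, float(pbest_f.min())

# Toy usage: minimize the 2-D sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=2)
```

The per-iteration cost grows with the number of particles and the dimension of the label vector, which is one plausible reason the PSOA-based methods in Figure 15 are slower than the interpolation-based ones.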
Table 1. Evaluating indicators by the four methods. OA: overall accuracy.

Area 1

| Indicator | HSAM | OSRM | SRBAM | STI |
|---|---|---|---|---|
| Burned area (%) | 76.30 | 77.66 | 79.84 | 83.13 |
| Background (%) | 93.10 | 93.50 | 94.13 | 95.09 |
| OA (%) | 89.31 | 89.93 | 90.91 | 92.39 |
| Kappa | 0.6940 | 0.7116 | 0.7397 | 0.7622 |

Area 2

| Indicator | HSAM | OSRM | SRBAM | STI |
|---|---|---|---|---|
| Burned area (%) | 56.50 | 59.73 | 63.66 | 68.39 |
| Background (%) | 95.62 | 95.94 | 96.34 | 96.82 |
| OA (%) | 92.04 | 92.63 | 93.35 | 94.21 |
| Kappa | 0.5212 | 0.5567 | 0.6000 | 0.6321 |

Area 3

| Indicator | HSAM | OSRM | SRBAM | STI |
|---|---|---|---|---|
| Burned area (%) | 72.18 | 73.98 | 77.02 | 79.87 |
| Background (%) | 95.52 | 95.81 | 96.09 | 96.76 |
| OA (%) | 92.28 | 92.78 | 93.39 | 94.42 |
| Kappa | 0.6770 | 0.6979 | 0.7112 | 0.7463 |

Area 4

| Indicator | HSAM | OSRM | SRBAM | STI |
|---|---|---|---|---|
| Burned area (%) | 94.23 | 95.35 | 95.41 | 96.04 |
| Background (%) | 98.54 | 98.59 | 98.60 | 99.23 |
| OA (%) | 98.18 | 98.26 | 98.47 | 99.01 |
| Kappa | 0.9448 | 0.9494 | 0.9531 | 0.9596 |

Area 5

| Indicator | HSAM | OSRM | SRBAM | STI |
|---|---|---|---|---|
| Burned area (%) | 71.60 | 73.14 | 76.27 | 79.82 |
| Background (%) | 96.41 | 96.61 | 97.01 | 97.45 |
| OA (%) | 93.63 | 93.98 | 94.68 | 95.48 |
| Kappa | 0.6801 | 0.6975 | 0.7328 | 0.7627 |
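The OA and Kappa values in Table 1 follow the usual confusion-matrix definitions: OA is the fraction of correctly mapped subpixels, and Kappa corrects that agreement for chance using the row and column marginals. A minimal sketch (the 2-class counts below are invented, not taken from the paper's experiments):

```python
import numpy as np

def oa_kappa(confusion):
    # Overall accuracy and Cohen's kappa from a confusion matrix
    # (rows: reference classes, columns: mapped classes).
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    oa = np.trace(c) / n
    # Expected chance agreement from the row and column marginals.
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2
    return oa, (oa - pe) / (1.0 - pe)

# Invented 2-class example: burned vs. background.
oa, kappa = oa_kappa([[90, 10], [5, 95]])
```

For this example the matrix gives OA = 185/200 = 0.925 and, with chance agreement 0.5, Kappa = 0.85.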

Wang, P.; Zhang, L.; Zhang, G.; Jin, B.; Leung, H. Multispectral Image Super-Resolution Burned-Area Mapping Based on Space-Temperature Information. Remote Sens. 2019, 11, 2695. https://doi.org/10.3390/rs11222695