Technical Note

Monitoring Forest Loss in ALOS/PALSAR Time-Series with Superpixels

Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(5), 556; https://doi.org/10.3390/rs11050556
Submission received: 26 January 2019 / Revised: 23 February 2019 / Accepted: 1 March 2019 / Published: 7 March 2019
(This article belongs to the Special Issue Superpixel based Analysis and Classification of Remote Sensing Images)

Abstract

We present a flexible methodology to identify forest loss in synthetic aperture radar (SAR) L-band ALOS/PALSAR images. Instead of single pixel analysis, we generate spatial segments (i.e., superpixels) based on local image statistics to track homogeneous patches of forest across a time-series of ALOS/PALSAR images. Forest loss detection is performed using an ensemble of Support Vector Machines (SVMs) trained on local radar backscatter features derived from superpixels. This method is applied to time-series of ALOS-1 and ALOS-2 radar images over a boreal forest within the Laurentides Wildlife Reserve in Québec, Canada. We evaluate four spatial arrangements including (1) single pixels, (2) square grid cells, (3) superpixels based on segmentation of the radar images, and (4) superpixels derived from ancillary optical Landsat imagery. Detection of forest loss using superpixels outperforms single pixel and regular square grid cell approaches, especially when superpixels are generated from ancillary optical imagery. Results are validated with official Québec forestry data and Hansen et al. forest loss products. Our results indicate that this approach can be applied to monitor forest loss across large study areas using L-band radar instruments such as ALOS/PALSAR, particularly when combined with superpixels generated from ancillary optical data.

1. Introduction

Accurate forest accounting is important for tracking the global carbon stock and for ecological modeling. JAXA illustrated global forest accounting with L-band imagery in [1]. There is continued interest in L-band land cover and land use change analysis with the current ALOS-2 and SAOCOM missions, and the forthcoming ALOS-4 and NISAR missions, which will provide high temporal and spatial resolution imagery. An important NISAR objective is to monitor forest disturbances at the 1 ha scale [2]. L-band SAR images are not affected by clouds and aerosols, so SAR image stacks may be used for long-term forest studies in regions that are difficult to monitor at high temporal resolution using optical sensors alone. However, SAR images are plagued by speckle noise and are sensitive to dielectric changes in the target (e.g., moisture in biomass). Additionally, certain spectral features useful for forest studies are not visible in the microwave spectrum. Despite these obstacles, SAR images offer an invaluable tool for forest accounting [3].
We present a methodology for detecting forest disturbance from L-band SAR time-series. Given a time-series of images (i.e., an image stack), our method identifies when and where forest disturbance occurred. To determine when a change occurred, we consider a small window of images around a particular date and extract a temporally averaged pair. Using this pair, we apply a two-part change detection method. First, with a segmentation of our image, we derive backscatter features. Then, we use a classifier to determine whether a segment lost forest. We also present empirical uncertainties associated with our change maps.
For our change analysis, we consider a simple pair of features: the initial backscatter and the backscatter change. It is well known that a decrease in HV-polarized backscatter may indicate forest loss [1,4,5]. We incorporate larger spatial scales into the analysis of these features with superpixels. Superpixels refer to the segments that partition an image into small homogeneous areas. We frequently interchange “segments”, “superpixels”, and “superpixel segments” to refer to these contiguous, homogeneous areas.
Ren and Malik introduced the notion of superpixels in [6] as a way to prepare an image for object detection. They assert that “(1) pixels are not natural entities; they are merely a consequence of the discrete representation of images; and (2) the number of pixels is high even at moderate resolution”. Indeed, the latter is particularly relevant for the ALOS/PALSAR tiles that we investigate here, each of which has tens of millions of pixels. Within the remote sensing community, superpixel analysis developed independently with the availability of the eCognition software suite [7] and has typically been referred to as “object-based” analysis [8], a nomenclature likely stemming from the fact that these small segments frequently enclose a single building or road in a given remotely sensed image. Since the introduction of such analyses, numerous algorithms have been proposed to determine such image segmentations [8,9,10,11]. We use Felzenszwalb and Huttenlocher’s method from [9] as implemented in an open-source Python library [12]. We select this algorithm because its parameters depend on the resolution of the image, not its dimensions, allowing us to apply the segmentation to subsets of an image without adjusting the parameters to obtain comparable segmentations. We aggregate segment statistics in linear $\gamma^0$ (i.e., in power units rather than dB).
Superpixel segmentation has become an important tool for a number of remote sensing tasks including change detection [13,14,15,16,17,18], classification [19,20,21,22], and denoising [23,24]. Indeed, JAXA’s global forest/non-forest maps employed segments derived from eCognition’s software suite [1]. Such analyses have been adapted for regional forest studies [4,25,26] and for mangrove monitoring [27,28,29].
Our contribution with this work is to demonstrate that superpixel analysis can be applied directly to time-series of ALOS/PALSAR HV images that have been radiometrically and terrain corrected (RTC) [30]. Adapting such techniques to ALOS/PALSAR time-series is important because JAXA time-series mosaics can have errors related to tile merging or geo-referencing [28]. Using RTC images, we remove the dependence on incidence angle and can apply a single model to the full extent of the time-series. We obtain a change map at the 2 ha scale using superpixels and demonstrate that these change maps have better agreement with validated change products over our site than those produced using individual pixels or square grid cells. Furthermore, we show that using segments derived from auxiliary optical products improves agreement with our validation data further.
We use the mean backscatter within a segment to characterize a superpixel as a proxy for individual pixel backscatter. We expect the use of mean backscatter to mitigate speckle effects because, if a superpixel encloses a single target, the sample mean will approach the mean backscatter associated with this target as more pixels of this target are sampled [20,24]; we do not formally investigate this claim as it is outside the scope of this work. Superpixel-based analysis also speeds up our change analysis because there are fewer superpixels to analyze than pixels.
In our analysis, we compare four different local spatial contexts for change detection: superpixels derived from backscatter, superpixels derived from ancillary optical products, square cells derived from a regular grid, and individual pixels.
Superpixels pose some obstacles for our change analysis. Superpixels need not accurately capture the boundary of forest disturbances and the precise shape and size of the irregular segments are difficult to control. The parameters best suited for a particular image stack depend on the resolution, contrast, and the scale of image features being studied in that stack. In this work, our focus is comparing the four spatial contexts and we select sizes accordingly.
Once features have been extracted from our superpixel segments, we apply a classifier to determine if a disturbance has occurred. One can directly apply an unsupervised classifier as in [31], a statistical test as in [32] or apply a Markov Random Field to further incorporate spatial relationships [33,34]. We take a supervised approach and train an SVM similar to [35] so we can utilize our validation data at our site for training.
Indeed, this work falls into the broader category of change detection methods for remote sensing images. However, we do not compare our methodology to more general change detection techniques, in part because we focus on a specific aspect of change detection in SAR images (i.e., forest loss) and because our validation data is less reliable over the full ALOS/PALSAR time-series. Many state-of-the-art techniques (in particular, deep learning [36]) require a sizeable corpus of accurate training samples for transferring a particular image model to a new sensor [37]. Although we have validation data from the Québec government [38] and the Hansen et al. forest loss products [35], such data does not precisely align with our ALOS/PALSAR time-series in that a change in a particular year may have occurred before or after a particular image was acquired. Furthermore, changes identified in such validation data are derived from aerial photography or optical data and may not be detectable in the ALOS/PALSAR time-series. Similarly, changes detected in the ALOS/PALSAR time-series may not be present in our validation data. Creating a suitable dataset for comparing these methods on ALOS/PALSAR time-series is beyond the scope of this work.
We also note that superpixels are but one way to integrate higher-level spatial scales into change analysis. We have only considered superpixels at one spatial scale in this study, but multiple spatial scales can be analyzed to better model larger features for classification [21,22]. Furthermore, Convolutional Neural Nets (CNNs) are a powerful tool that can integrate multiple spatial scales for classification [39,40] and are able to learn more complicated image features not possible with just superpixels. An active area of research is developing techniques to efficiently transfer CNNs to new sensors and making models less dependent on the site where they are trained [36,40,41].
In what follows, we illustrate how to adapt superpixels to track changes in ALOS/PALSAR time-series. We apply our method to ALOS-1 and ALOS-2 time-series, demonstrating the benefit of superpixels over pixels and square grid cells using Québec forestry data [38] and Hansen et al. forest loss products [35]. We also show that using superpixels derived from ancillary optical data can improve performance further.
In Section 2, we present the preprocessing, change detection, and empirical uncertainty quantification associated with our methodology. In Section 3, we apply our methodology to ALOS/PALSAR HV image stacks over the Laurentides Wildlife Reserve and validate our change maps using Québec’s Ministry of Forests data [38] and Hansen et al. forest disturbance data [35] for ALOS-1 and ALOS-2, respectively. In Figure 1a, we show the extent of an ALOS-1 tile over the Laurentides Wildlife Reserve, which we investigate in Section 3.

2. Methodology

We now describe our methodology for change detection on SAR image stacks to identify forest loss. First, we discuss the preprocessing of an image stack. Then, we discuss our change detection methodology. Finally, we introduce empirical uncertainty measures associated with the change methodology.

2.1. Preprocessing Our Image Stack

Preprocessing our image stack helps mitigate environmental and phenological effects, such as rain and snow cover, that can significantly alter brightness in backscatter images. In this work, we consider two different stacks: HV ALOS-1 image tiles radiometrically and terrain corrected (RTC) by the ASF [42] and HV ALOS-2 image tiles RTC-processed with [43]. RTC images allow us to apply a change model to the full image extent without reference to incidence angle. We select images acquired from June through September, during peak biomass, to avoid snow cover that can impact backscatter returns. We now describe our preprocessing pipeline, which is briefly summarized in Figure 2.
Having a set of RTC images, we project all the images into the same coordinate reference system and mask void pixels consistently through the stack, ignoring any pixel that is void in at least one image of the time-series. With a spatially coregistered and correctly masked stack, we perform channel-by-channel preprocessing. First, we clip the dynamic range of our HV image to fall between −30 and −5 dB. Then, we apply total variation (TV) denoising [44] in dB to remove speckle noise [46]. Although speckle noise is $\gamma$-distributed rather than Gaussian [45], it is additive in dB, so we perform the TV denoising in dB scale. We used a weight parameter $\lambda = 0.25$ for ALOS-1 and $\lambda = 0.5$ for ALOS-2 (see [44] for a description of this parameter). We applied this denoising method instead of the Gaussian filter used in [9], as a Gaussian filter is not appropriate for the statistical distribution of speckle in SAR imagery [45].
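As a concrete illustration, the clipping and denoising steps can be sketched with scikit-image. Note that scikit-image ships Chambolle's TV solver, so using it as a stand-in for the TV denoising cited above (and mapping $\lambda$ onto its weight argument) is our assumption; the function name is illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def preprocess_hv(img_db, weight=0.25):
    """Clip HV backscatter to [-30, -5] dB, then TV-denoise it in dB.

    `weight` plays the role of the lambda parameter in the text
    (0.25 for ALOS-1, 0.5 for ALOS-2).
    """
    clipped = np.clip(img_db, -30.0, -5.0)
    return denoise_tv_chambolle(clipped, weight=weight)
```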
To complete preprocessing, we adjust image statistics through large superpixels. Having images normalized correctly helps ensure that a decrease in backscatter is an indicator of forest loss, rather than an indicator of natural fluctuations in the dielectric properties of the vegetation, for example, changes in vegetation water content as a result of precipitation. Specifically, we normalize a pixel’s backscatter p at image index i according to
$$\hat{p}_i = (p_i - \mu_i)\,\frac{\sigma_0}{\sigma_i} + \mu_0,$$
where $\mu_i$ and $\sigma_i$ are the $i$th image’s mean and standard deviation, respectively, within the segment that pixel $p$ belongs to, and $\mu_0$ and $\sigma_0$ are the corresponding target statistics to which each image is normalized. For the ALOS-1 time-series, we use the superpixels shown in Figure 3. The mean segment size of these superpixels is $9.4 \times 10^4$ ha, four orders of magnitude larger than the minimum forest loss size of 2 ha that we wish to observe. Because ALOS/PALSAR tiles span such a large area, a normalization using an entire image’s statistics does not ensure that smaller image subsets are themselves normalized; indeed, such a subset may exhibit fluctuating brightness over the course of a time-series even if it is undisturbed. Such fluctuations of undisturbed pixels can negatively impact a change detection model’s ability to generalize to an entire ALOS/PALSAR time-series, especially if the model expects a backscatter decrease as an indicator of forest loss. We illustrate this obstacle in Figure 4 using our training site as a subset (see Figure 3 for the location of the training site). We consider consecutive image differences, i.e., $I_{j+1} - I_j$ for the ALOS-1 time-series $I_1, I_2, \ldots, I_n$. We consider these consecutive differences from time-series that have been normalized in two different ways: (1) using global image statistics and (2) using large superpixels. We determine undisturbed areas using the Québec forestry data [38] and remove water areas using a −21 dB threshold [47]. We expect consecutive differences of undisturbed forest pixels over any subset of an image to have approximately zero mean. As we can see in Figure 4a, the consecutive means of undisturbed areas in the training area fluctuate more significantly when normalized using global image statistics. In particular, we can see from this figure that the tile acquired on 2009-06-21 is brighter than other images in the stack (likely due to frost or rain), causing the consecutive differences to skew. We show the empirical difference densities of these two normalizations in Figure 4b,c to highlight this skewing further.
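A minimal sketch of this per-superpixel normalization follows. The function and array names are illustrative, and the choice of a reference image in the stack to supply the target statistics ($\mu_0$, $\sigma_0$) is our assumption, as the text does not specify them.

```python
import numpy as np
from scipy import ndimage

def normalize_image(img, ref, labels):
    """Match each large superpixel's statistics in `img` to those of `ref`.

    img, ref : 2-D backscatter images (dB); `ref` supplies the target
               statistics (mu_0, sigma_0) per segment (an assumption here)
    labels   : integer map of the large normalization superpixels (Figure 3)
    """
    ids = np.arange(labels.max() + 1)
    mu_i = ndimage.mean(img, labels, ids)[labels]
    sd_i = ndimage.standard_deviation(img, labels, ids)[labels]
    mu_0 = ndimage.mean(ref, labels, ids)[labels]
    sd_0 = ndimage.standard_deviation(ref, labels, ids)[labels]
    return (img - mu_i) * (sd_0 / sd_i) + mu_0
```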

2.2. Change Detection

We now describe the change detection methodology that we apply to a preprocessed image stack. The final output of this change detection is a change map indicating regions that were disturbed and the time at which these disturbances took place. For this discussion, we assume each region is disturbed at most once. In this section, we will refer to a “change” region as a region that loses forest.
To determine whether change occurred in a particular image $I_j$ of our stack, we first select a window around this image, specifically a subset of consecutive images containing $I_j$ in our image stack. We call $I_j$ the focal image of our window. We specify a forward window size $w_f$ and a backward window size $w_b$ as in Figure 5. The window sizes $w_f$ and $w_b$ determine the temporal scale we wish to consider: a longer window means the changes should be observable at longer time scales. For our analysis of forest loss, we typically ensure that each window spans a few years (we used $w_b = w_f = 2$ for the ALOS/PALSAR time-series considered here). Within a window, we average the images within the forward and backward windows, respectively, noting that the forward window includes the focal image in our setup. We are left with an image pair on which to perform change detection. These first steps of our change analysis are summarized in the first row of the flow chart in Figure 5.
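A short sketch of this windowing step, with the function and array names as illustrative assumptions:

```python
import numpy as np

def temporal_pair(stack, j, w_b=2, w_f=2):
    """Average the backward and forward windows around focal index j.

    stack has shape (T, H, W); the forward window includes the focal
    image, matching the setup described above.
    """
    assert w_b <= j <= stack.shape[0] - w_f, "focal index must fit the window"
    backward = stack[j - w_b:j].mean(axis=0)
    forward = stack[j:j + w_f].mean(axis=0)
    return backward, forward
```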
Next, we introduce superpixels [9] into our change analysis. We use Felzenszwalb and Huttenlocher’s method [9] because its size parameters are independent of image dimensions when the resolution is fixed. Furthermore, the algorithm [9] only enforces a minimum size on the final segments, allowing for larger segments where there are large homogeneous areas. The algorithm is feasible on large ALOS-1 tiles as its runtime is $O(n \log n)$, where $n$ is the number of pixels [9]. From these segments, we derive the mean backscatter and the mean backscatter change between the forward and backward windows. We extract these superpixels using the first and the last images in our original image stack.
We fix the minimum size $m$ to be 10 pixels and the scale $\kappa$ to be 0.1 for ALOS-1 and ALOS-2 HV backscatter images. These parameters produced segments with mean size 0.3 ha for the ALOS-1 time-series and 0.25 ha for the ALOS-2 time-series. We do not pursue parameter optimization further, as our focus is the viability of superpixels for identifying forest loss within ALOS/PALSAR time-series analysis. To highlight the difference between individual pixel analysis and superpixel analysis, this selection of $m$ ensures that each superpixel covers an area one order of magnitude larger than an individual pixel. Parameter optimization and comparisons with other superpixel segmentations such as mean shift [11] and SLIC [10], as in [48,49], will be explored in future work. Because the superpixels have mean size of at least 0.25 ha for both ALOS time-series, we only consider changes at the 2 ha scale or larger. We found that a 1 ha size filter typically produced an $F_1$ score below 0.4 due to commission error.
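For illustration, a segmentation with these parameters can be produced with scikit-image [12]. The synthetic stand-in stack and the choice to combine the first and last images as a two-channel composite are our assumptions; the text does not spell out how the two dates enter the segmentation.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Stand-in stack of preprocessed HV images (dB); in practice, these are the
# normalized RTC tiles from Section 2.1.
rng = np.random.default_rng(0)
stack = rng.uniform(-30.0, -5.0, size=(8, 256, 256))

# Segment a two-channel composite of the first and last images with the
# parameters above (scale kappa = 0.1, minimum size m = 10 pixels).
composite = np.dstack([stack[0], stack[-1]])
segments = felzenszwalb(composite, scale=0.1, min_size=10, channel_axis=-1)
```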
With this segmentation, we derive features for each superpixel by aggregating backscatter values of those pixels contained in a given superpixel. Here, we consider the mean backscatter obtained from the forward window and the mean backscatter change from backward to forward windows.
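A sketch of this feature extraction, aggregating in linear $\gamma^0$ as stated in the Introduction; taking the change as the difference of the two segment means is one plausible reading of the text, and the names are illustrative.

```python
import numpy as np
from scipy import ndimage

def superpixel_features(fwd_db, bwd_db, segments):
    """Per-segment mean backscatter (forward window) and backscatter change.

    Means are aggregated in linear gamma-naught, then converted back to dB.
    Returns an (n_segments, 2) feature matrix.
    """
    ids = np.arange(segments.max() + 1)
    fwd = 10 * np.log10(ndimage.mean(10 ** (fwd_db / 10), segments, ids))
    bwd = 10 * np.log10(ndimage.mean(10 ** (bwd_db / 10), segments, ids))
    return np.column_stack([fwd, fwd - bwd])
```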
Next, we apply a trained SVM with a radial basis function kernel [50] to determine where changes occur. We train our model on a pair of images over a small study area where there was visible forest loss; Figure 1 shows the extent of this training area. We trained our models using available validated forestry data consistent with the ALOS/PALSAR time-series extent. Because there are far more “no change” than “change” segments, we select a random sample of “no change” segments to balance the classes for training. We ensemble 50 models together (each trained on a different random sample) to remove sample dependence; we discuss the ensembling of SVMs in more detail in Appendix A. With the trained ensemble, we identify change within a temporally averaged pair. To remove small, isolated changes, we apply a size filter, removing changed areas smaller than 2 ha. We summarize the entire change detection methodology in Figure 5. In Figure 6, we illustrate the empirical probability density of our two superpixel features for an ALOS-1 pair over the training region. Specifically, we show how the features are distributed over changed and unchanged regions in Figure 6a,b, and we show the decision boundary and the classification of our SVM in Figure 6c.
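The ensemble training and majority-vote prediction can be sketched with scikit-learn as follows. The parameter values come from the grid search described in Appendix A, the class weighting $C_c$ is omitted here for brevity (see the Appendix A sketch), and probability=True enables the Platt scaling used in Section 2.3; all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_ensemble(X, y, n_models=50, seed=0):
    """Train RBF SVMs, each on all 'change' segments plus an equal-size
    random sample of 'no change' segments (labels in {-1, +1})."""
    rng = np.random.default_rng(seed)
    change, stable = np.flatnonzero(y == 1), np.flatnonzero(y == -1)
    models = []
    for _ in range(n_models):
        idx = np.concatenate([change, rng.choice(stable, change.size, replace=False)])
        models.append(SVC(kernel="rbf", C=50, gamma=50, probability=True).fit(X[idx], y[idx]))
    return models

def predict_change(models, X):
    """Majority vote across the ensemble; ties default to 'no change'."""
    votes = np.sum([m.predict(X) for m in models], axis=0)
    return np.where(votes > 0, 1, -1)
```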

2.3. Empirical Uncertainty Measures

There are numerous sources of possible error in our change detection methodology. We will focus on two important aspects: the superpixel segmentation and the SVM model. We define empirical measures to evaluate our change map in each regard.
We first discuss the empirical certainty associated with the segmentation boundaries of our superpixels. A region of forest loss may poorly coincide with a superpixel’s boundaries. To quantify the possible disagreement of segment changes with finer changes detected at the pixel level, we measure the number of pixels within a superpixel that satisfy the change criterion as determined by our trained SVM. Specifically, given a trained SVM, we can determine which pixels have changed using the same features computed at the pixel level. Having labeled change at the pixel level, we determine the percentage of pixels within a segment that are labeled as change, quantifying our certainty that the segment is labeled correctly. Because this requires pixel-level analysis, this uncertainty measure adds significant computational overhead to our original change analysis. In Figure 7, we illustrate an uncertainty map derived from the first window in our ALOS-1 stack within our training area. This product could be used for change analysis as well, though we do not explore that here.
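A compact sketch of this per-segment certainty measure (names are illustrative):

```python
import numpy as np
from scipy import ndimage

def segment_certainty(pixel_change, segments):
    """Empirical certainty: fraction of pixels within each segment that the
    pixel-level SVM labels as change, returned as a per-pixel map."""
    ids = np.arange(segments.max() + 1)
    frac = ndimage.mean(pixel_change.astype(float), segments, ids)
    return frac[segments]
```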
To quantify the uncertainty associated with the SVM, we use the well-known Platt scaling [51]. This method fits a standard logistic function to the distances of the training samples from the decision boundary. In our case, we ensemble the Platt scaling outputs from all of our models to measure the certainty of a given prediction. Because the parameters required for Platt scaling are determined during training, this measure has much lower computational overhead than the previous one. In Figure 6, we illustrate the features we use for training over regions with forest loss and regions that are undisturbed; specifically, Figure 6c shows the averaged Platt scaling of our ensemble of 50 SVMs over the feature domain.
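Averaging the Platt-scaled probabilities over the ensemble can be sketched as follows, assuming each SVC above was fit with probability=True so that predict_proba runs Platt scaling internally:

```python
import numpy as np

def ensemble_certainty(models, X):
    """Mean Platt-scaled change probability across the ensemble.

    With labels in {-1, +1}, column 1 of predict_proba is P(change).
    """
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models])
    return probs.mean(axis=0)
```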

3. Applications

In this section, we apply our change methodology to ALOS/PALSAR time-series. We illustrate two ALOS/PALSAR time-series in which superpixels outperform individual pixels and square grid cells of comparable size. Our performance metrics are higher for the ALOS-1 time-series because its validation map is hand-labeled from aerial photography [38], whereas the Hansen et al. [35] forest loss products are statistically determined from Landsat mosaics and are thus negatively affected by cloud cover and snow in this area. Furthermore, many of the changes in both optically derived validation datasets are not visible in the ALOS/PALSAR imagery and, conversely, changes seen in the ALOS/PALSAR images may not appear in the validation data. We also note that our evaluation metrics are further hindered by the fact that our validation data is temporally misaligned with our ALOS/PALSAR time-series, in that changes noted in the data may have occurred before or after the ALOS images were acquired.
For these comparisons, we consider a pair of images from the ALOS-1 and ALOS-2 stacks and train on a small subset of the extent using Québec forestry data [38] and Hansen et al. forest loss products [35], respectively. We then validate each methodology on the full tile. After discussing the performance of the methodology using superpixels, square segments, and individual pixels, we apply the trained model to both full time-series to illustrate the proposed data product. The products used for the time-series are summarized in Table 1, which also lists basic topographic features of the study area.

3.1. ALOS-1

We now apply our change methodology to an ASF-processed ALOS-1 stack [42]. We train and validate our methodology using open Québec data [38] produced by the forestry service. We consider only four types of forest disturbances: total cuts, cuts with protection of small or high merchantable stems and soil, and cuts with regeneration protection (see [38]). These correspond to approximately 85% of all disturbances and are visible over the training area we selected. Because we apply a size filter to our final change map, we remove changes within this dataset whose total area is below 2 ha.
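For the raster change maps, the 2 ha size filter can be sketched as follows (the function and argument names are illustrative; for the vector validation data, the same threshold is applied directly to polygon areas):

```python
import numpy as np
from scipy import ndimage

def size_filter(change_mask, pixel_area_ha, min_ha=2.0):
    """Remove connected change regions smaller than min_ha hectares.

    At 12.5 m resolution, one pixel covers ~0.0156 ha, so 2 ha = 128 pixels.
    """
    labels, n = ndimage.label(change_mask)
    areas = ndimage.sum(change_mask, labels, np.arange(1, n + 1)) * pixel_area_ha
    keep_ids = np.flatnonzero(areas >= min_ha) + 1
    return np.isin(labels, keep_ids)
```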
The Québec data, in addition to providing when and where disturbance occurred, also provide a segmentation of the ALOS-1 tile, so we train our model using these segments directly. We also apply our trained model to these segments as an additional point of comparison; because the forest loss data are based on the Québec segments, our methodology works best using these segments. These segments, which were created using aerial photographs, allow us to incorporate optical image information into the SAR analysis. We note that the relative performance of the segmentations over the training area changes when evaluating performance over the full tile. Indeed, pixels perform better than HV-derived superpixels over the training area, but worse over the full tile, as indicated in Table 2. This suggests that superpixels help our change detection model generalize when applied outside the training area. In Table 2, we compare changes tracked using superpixels, square segments, and individual pixels on the full ALOS-1 tile. We show the $F_1$ score (both over the training site and the full tile), the producer accuracy (full tile), and the user accuracy (full tile) using the forestry data as ground truth. Superpixels derived from the radar data perform best after the Québec segments. The main source of mislabeling is false positives, including the expansion of Route 175 [52], which required cutting trees along this highway; these disturbances are not included in the Québec forestry data.
Using our trained ensemble of models, we identify disturbance in the full ALOS-1 stack illustrated in Figure 8. We see the expansion of Route 175 [52] at the bottom of the image.

3.2. ALOS-2

We now describe our change analysis on an ALOS-2 stack over the same area. We use Hansen et al. forest disturbance data [35] to train our model, as the Québec forestry data do not extend past 2014. We performed radiometric terrain correction with [43]. We modify the original Hansen et al. forest loss map so that training is done on segments rather than pixels, mitigating speckle and improving efficiency. First, we extracted superpixels from Hansen et al.’s 2017 Landsat mosaic. Then, using the changes that aligned with our ALOS-2 acquisition dates, we labeled a segment as change if a majority of pixels within the segment were changed. This ensured that segments with a high volume of forest loss were trained correctly. Since regions labeled as undisturbed are randomly sampled during training, we expect false negatives to have minor impact during training. However, when validating our model on the full ALOS-2 tile, we used the original Hansen et al. change map with losses smaller than 2 ha filtered out. We proceed with the same change analysis as in Section 3.1. Table 3 compares the change methodology on the Landsat segments, superpixels, square segments, and individual pixels, illustrating that the superpixels derived from Landsat data produce the most accurate change detection product. As before, we note that, even though the model performs better using pixels than superpixels over the training site, the opposite is true over the full tile. Figure 9 shows a detailed area of the ALOS-2 change map product produced using the backscatter-derived superpixels.
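The majority-vote labeling of the Landsat-derived segments can be sketched as follows; the threshold follows the "majority of pixels" wording above, while the names and implementation details are our assumptions.

```python
import numpy as np
from scipy import ndimage

def majority_labels(loss_mask, segments):
    """Label a segment as 'change' (+1) when a majority of its pixels are
    flagged as loss in the co-registered Hansen et al. map, else -1."""
    ids = np.arange(segments.max() + 1)
    loss_frac = ndimage.mean(loss_mask.astype(float), segments, ids)
    return np.where(loss_frac > 0.5, 1, -1)
```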

4. Conclusions

We have introduced a flexible change detection methodology for identifying forest loss in ALOS/PALSAR images and validated the methodology with official Québec forestry data [38] and Hansen et al. forest loss products [35]. Our methodology uses simple features so that it can be adapted to other forest sites and other L-band image stacks. We have demonstrated the use of superpixel segmentation in our change analysis to improve computational efficiency and incorporate optical information. We compared superpixel segmentation within our change methodology favorably to segments generated by square grid cells and to individual pixels. Furthermore, we illustrated how spatial segmentation can be used to incorporate optical data into the SAR change analysis to improve change detection accuracy. In future work, we plan to compare more spatial segmentation methods, expand our methodology to larger study areas, and analyze image stacks with higher temporal sampling.

Author Contributions

M.S. conceptualized the project presented here. All authors worked on the development of the change detection methodology. C.M. implemented the final methodology in software and performed the analysis. M.D. performed software optimizations and prepared the RTC images for the ALOS-2 time-series. C.M. prepared the original draft, and all authors contributed to revising and editing the subsequent drafts.

Funding

The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work was funded by the NASA Terrestrial Ecology Program in support of the NISAR Science Team.

Acknowledgments

We thank JAXA for their continued support and for providing the ALOS-2 data for this study. We also thank the Québec Ministry of Forests, Wildlife and Parks for providing forest disturbance datasets. We are grateful to Adam Chlus for sharing his initial analysis of the Québec forestry data, and to Nathan Thomas, Daniel Jensen, Tom Van der Stocken, and Charlotte Smetanka for helpful conversations at various stages of this work. © 2019. California Institute of Technology. Government sponsorship acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ALOS-1/-2     Advanced Land Observing Satellite-1/-2
ASF           Alaska Satellite Facility
NISAR         NASA-ISRO Synthetic Aperture Radar
PALSAR        Phased Array type L-band Synthetic Aperture Radar
RTC           Radiometrically and Terrain Corrected
SAOCOM        Satellites for Observation and Communications
SAR           Synthetic Aperture Radar
SVM           Support Vector Machine
TV Denoising  Total Variation Denoising

Appendix A. Ensembling Support Vector Machines for Identifying Forest Loss

Figure A1. A comparison of the distribution of $F_1$ scores as the number of models in the SVM ensemble increases. We obtain these statistics using 1000 different ensembles such that each SVM has parameters $C = \gamma = 50$ and $C_c = 20$.
In this appendix, we review the ensembling of Support Vector Machines (SVMs) that we use to identify forest loss. An SVM determines a decision boundary from a weight vector $\mathbf{w}$, which is a solution to the minimization problem:
$$\mathbf{w} = \operatorname*{arg\,min}_{\mathbf{w},\,w_0}\; \|\mathbf{w}\|^2 + C \sum_{i=1}^{n} \max\left(0,\; 1 - y_i\left(\mathbf{w} \cdot \varphi(x_i) - w_0\right)\right),$$
where $C > 0$, the $x_i$ are the training feature vectors, $\varphi$ is our feature embedding, and the $y_i \in \{-1, 1\}$ are the class labels. Let $y_i = 1$ indicate forest loss and $y_i = -1$ an undisturbed area. The vector $\mathbf{w}$ determines whether a data point $x$ is in a particular class according to the sign of $\mathbf{w} \cdot \varphi(x) - w_0$. Using the so-called “kernel trick” [50], we can nonlinearly embed our features with $\varphi$ without adding significant computational overhead so long as we can specify the kernel $k(x_i, x_j) = \varphi(x_i) \cdot \varphi(x_j)$. We select the radial basis function as our kernel: $k(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2)$. To deal with the class imbalance problem (there are far fewer regions with forest loss than without), we employ two strategies. First, we adjust the weights of the two classes according to a multiplicative constant $C_c(i)$, namely,
$$\mathbf{w} = \operatorname*{arg\,min}_{\mathbf{w},\,w_0}\; \|\mathbf{w}\|^2 + C \sum_{i=1}^{n} C_c(i)\, \max\left(0,\; 1 - y_i\left(\mathbf{w} \cdot \varphi(x_i) - w_0\right)\right),$$
where $C_c(i) = 1$ if $y_i = 1$ and $C_c(i) = c > 0$ is a scaling factor for $y_i = -1$; we refer to $C_c$ by the value $c$ it takes on undisturbed regions. Second, we train our SVM on a random sample of unchanged regions so that the numbers of disturbed and undisturbed areas are equal. Because a model’s output depends on this random sample, we ensemble the SVM models together, classifying each point by majority vote [54]. This strategy also helps with the fact that our data contain a considerable number of false negatives, namely regions with forest loss that are not labeled as such. In Figure A1, we illustrate the results of training an increasing number of models in our ensemble on the training site of the ALOS-1 data. As the number of models increases past 40, the $F_1$ scores of the ensemble SVM level off and remain concentrated around the mean. When selecting the parameters $\gamma$, $C$, and $C_c$, we perform a grid search using our ensemble method to determine the parameters with the optimal $F_1$ score on our training area.
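In scikit-learn, this class weighting corresponds to the class_weight argument of SVC, which multiplies $C$ per class (i.e., $C \cdot C_c(i)$). A minimal sketch with the grid-searched parameters follows; assigning the factor $c = 20$ to the undisturbed class is our reading of the text.

```python
from sklearn.svm import SVC

# Class-weighted RBF SVM matching the weighted objective above:
# scikit-learn uses C * class_weight[label] as the per-class penalty.
svm = SVC(kernel="rbf", C=50, gamma=50, class_weight={-1: 20, 1: 1})
```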

References

1. Shimada, M.; Itoh, T.; Motooka, T.; Watanabe, M.; Shiraishi, T.; Thapa, R.; Lucas, R. New Global Forest/Non-Forest Maps from ALOS PALSAR Data (2007–2010). Remote Sens. Environ. 2014, 155, 13–31.
2. NISAR Science Team. NASA-ISRO SAR Mission Science Users Handbook. 2019. Available online: https://nisar.jpl.nasa.gov/files/nisar/NISAR_Science_Users_Handbook.pdf (accessed on 14 January 2019).
3. Chambers, J.Q.; Asner, G.P.; Morton, D.C.; Anderson, L.O.; Saatchi, S.S.; Espírito-Santo, F.D.; Palace, M.; Souza, C., Jr. Regional Ecosystem Structure and Function: Ecological Insights from Remote Sensing of Tropical Forests. Trends Ecol. Evol. 2007, 22, 414–423.
4. Avtar, R.; Sawada, H.; Takeuchi, W.; Singh, G. Characterization of Forests and Deforestation in Cambodia using ALOS/PALSAR Observation. Geocarto Int. 2012, 27, 119–137.
5. Thomas, N.; Lucas, R.; Bunting, P.; Hardy, A.; Rosenqvist, A.; Simard, M. Distribution and Drivers of Global Mangrove Forest Change, 1996–2010. PLoS ONE 2017, 12, e0179302.
6. Ren, X.; Malik, J. Learning a Classification Model for Segmentation. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 10–17.
7. Flanders, D.; Hall-Beyer, M.; Pereverzoff, J. Preliminary Evaluation of eCognition Object-based Software for Cut Block Delineation and Feature Extraction. Can. J. Remote Sens. 2003, 29, 441–452.
8. Meinel, G.; Neubert, M. A Comparison of Segmentation Programs for High Resolution Remote Sensing Data. Int. Arch. Photogramm. Remote Sens. 2004, 35, 1097–1105.
9. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Graph-based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181.
10. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
11. Comaniciu, D.; Meer, P. Mean Shift: A Robust Approach toward Feature Space Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619.
12. Van der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T. Scikit-image: Image Processing in Python. PeerJ 2014, 2, e453.
13. Lv, N.; Chen, C.; Qiu, T.; Sangaiah, A.K. Deep Learning and Superpixel Feature Extraction based on Contractive Autoencoder for Change Detection in SAR Images. IEEE Trans. Ind. Inform. 2018, 14, 5530–5538.
14. Gong, M.; Zhan, T.; Zhang, P.; Miao, Q. Superpixel-Based Difference Representation Learning for Change Detection in Multispectral Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2658–2673.
15. Zhou, L.; Cao, G.; Li, Y.; Shang, Y. Change Detection Based on Conditional Random Field with Region Connection Constraints in High-Resolution Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3478–3488.
16. Huang, X.; Yang, W.; Xia, G.; Liao, M. Superpixel-based Change Detection in High Resolution SAR Images using Region Covariance Features. In Proceedings of the 8th International Workshop on the Analysis of Multitemporal Remote Sensing Images, Annecy, France, 22–24 July 2015; pp. 1–4.
17. Ertürk, A.; Ertürk, S.; Plaza, A. Unmixing with SLIC Superpixels for Hyperspectral Change Detection. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 10–15 July 2016; pp. 3370–3373.
18. Clewley, D.; Bunting, P.; Shepherd, J.; Gillingham, S.; Flood, N.; Dymond, J.; Lucas, R.; Armston, J.; Moghaddam, M. A Python-based Open Source System for Geographic Object-based Image Analysis (GEOBIA) Utilizing Raster Attribute Tables. Remote Sens. 2014, 6, 6111–6135.
19. Liu, B.; Hu, H.; Wang, H.; Wang, K.; Liu, X.; Yu, W. Superpixel-based Classification with an Adaptive Number of Classes for Polarimetric SAR Images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 907–924.
20. Thompson, D.R.; Mandrake, L.; Gilmore, M.S.; Castano, R. Superpixel Endmember Detection. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4023–4033.
21. Zhang, S.; Li, S.; Fu, W.; Fang, L. Multiscale Superpixel-based Sparse Representation for Hyperspectral Image Classification. Remote Sens. 2017, 9, 139.
22. Audebert, N.; Saux, B.L.; Lefevre, S. How Useful is Region-based Classification of Remote Sensing Images in a Deep Learning Framework? arXiv 2016, arXiv:1609.06861.
23. Fan, F.; Ma, Y.; Li, C.; Mei, X.; Huang, J.; Ma, J. Hyperspectral Image Denoising with Superpixel Segmentation and Low-rank Representation. Inf. Sci. 2017, 397, 48–68.
24. Liu, X.; Jia, H.; Cao, L.; Wang, C.; Li, J.; Cheng, M. Superpixel-based Coastline Extraction in SAR Images with Speckle Noise Removal. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 10–15 July 2016; pp. 1034–1037.
25. Dupuy, S.; Herbreteau, V.; Feyfant, T.; Morand, S.; Tran, A. Land-cover Dynamics in Southeast Asia: Contribution of Object-oriented Techniques for Change Detection. In Proceedings of the 4th International Conference on GEographic Object-Based Image Analysis (GEOBIA 2012), Rio de Janeiro, Brazil, 7–9 May 2012.
26. Dingle Robertson, L.; King, D.J. Comparison of Pixel- and Object-based Classification in Land Cover Change Mapping. Int. J. Remote Sens. 2011, 32, 1505–1529.
27. Myint, S.W.; Giri, C.P.; Wang, L.; Zhu, Z.; Gillette, S.C. Identifying Mangrove Species and Their Surrounding Land Use and Land Cover Classes using an Object-Oriented Approach with a Lacunarity Spatial Measure. GISci. Remote Sens. 2008, 45, 188–208.
28. Thomas, N.; Bunting, P.; Lucas, R.; Hardy, A.; Rosenqvist, A.; Fatoyinbo, T. Mapping Mangrove Extent and Change: A Globally Applicable Approach. Remote Sens. 2018, 10, 1466.
29. Kamal, M.; Phinn, S. Hyperspectral Data for Mangrove Species Mapping: A Comparison of Pixel-based and Object-based Approach. Remote Sens. 2011, 3, 2222–2242.
30. Small, D. Flattening Gamma: Radiometric Terrain Correction for SAR Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3081–3093.
31. Celik, T. Unsupervised Change Detection in Satellite Images using Principal Component Analysis and k-means Clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776.
32. Nielsen, A.A. The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyper-Spectral Data. IEEE Trans. Image Process. 2007, 16, 463–478.
33. Li, S.; Jia, X.; Zhang, B. Superpixel-based Markov Random Field for Classification of Hyperspectral Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 3491–3494.
34. Bruzzone, L.; Prieto, D.F. Automatic Analysis of the Difference Image for Unsupervised Change Detection. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1171–1182.
35. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R.; et al. High-resolution Global Maps of 21st Century Forest Cover Change. Science 2013, 342, 850–853.
36. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
37. Zhang, L.; Zhang, L.; Du, B. Deep Learning for Remote Sensing Data: A Technical Tutorial on the State of the Art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
38. Ministry of Forests, Wildlife and Parks. Ecoforest Map with Disturbances. 2018. Québec Data Portal. Available online: https://www.donneesquebec.ca/recherche/fr/dataset/carte-ecoforestiere-avec-perturbations (accessed on 14 January 2019).
39. Mou, L.; Bruzzone, L.; Zhu, X.X. Learning Spectral-Spatial-Temporal Features via a Recurrent Convolutional Neural Network for Change Detection in Multispectral Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 924–935.
40. Audebert, N.; Le Saux, B.; Lefèvre, S. Semantic Segmentation of Earth Observation Data using Multimodal and Multi-Scale Deep Networks. In Proceedings of the Asian Conference on Computer Vision, Taipei, Taiwan, 20–24 November 2016; pp. 180–196.
41. Lyu, H.; Lu, H.; Mou, L. Learning a Transferable Change Rule from a Recurrent Neural Network for Land Cover Change Detection. Remote Sens. 2016, 8, 506.
42. Alaska Satellite Facility. ASF DAAC 2015; Includes Material © JAXA/METI 2007. Available online: http://dx.doi.org/10.5067/Z97HFCNKR6VA (accessed on 14 January 2019).
43. Simard, M.; Riel, B.V.; Denbina, M.; Hensley, S. Radiometric Correction of Airborne Radar Images over Forested Terrain with Topography. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4488–4500.
44. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
45. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization. IEEE Trans. Image Process. 2010, 19, 1720–1730.
46. Zhao, Y.; Liu, J.G.; Zhang, B.; Hong, W.; Wu, Y.R. Adaptive Total Variation Regularization based SAR Image Despeckling and Despeckling Evaluation Index. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2765–2774.
47. Martinis, S.; Kuenzer, C.; Wendleder, A.; Huth, J.; Twele, A.; Roth, A.; Dech, S. Comparing Four Operational SAR-based Water and Flood Detection Approaches. Int. J. Remote Sens. 2015, 36, 3519–3543.
48. Wang, M.; Liu, X.; Gao, Y.; Ma, X.; Soomro, N.Q. Superpixel Segmentation: A Benchmark. Signal Process. Image Commun. 2017, 56, 28–39.
49. Stutz, D.; Hermans, A.; Leibe, B. Superpixels: An Evaluation of the State-of-the-Art. Comput. Vis. Image Underst. 2018, 166, 1–27.
50. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the 5th Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
51. Platt, J. Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. Adv. Large Margin Classif. 1999, 10, 61–74.
52. Quebec Transportation. Route 73/175 Project. 2008. Available online: https://web.archive.org/web/20110716214657/http://www.mtq.gouv.qc.ca/portal/page/portal/grands_projets/trouver_grand_projet/axe_routier_73_175 (accessed on 14 December 2018).
53. JAXA. ALOS/ALOS-2 User Interface Gateway. Available online: https://auig2.jaxa.jp/ips/home (accessed on 14 December 2018).
54. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012.
Figure 1. In the first row, the boundaries of the Laurentides Wildlife Reserve, an overlapping ALOS-1 tile, and the training site. In the second row, (b) is the HV backscatter over the training site and (c) is a detailed area in the training site. Panels (b,c) are from ASF high-resolution ALOS-1 RTC data [42] (12.5 m resolution). The superpixels have mean size 31.34 pixels (0.49 ha). In the last row, (e) shows superpixels populated with mean backscatter values and (f) shows regular square grid cells with 5-pixel spacing populated with mean backscatter.
Figure 2. Preprocessing a SAR image stack for change detection.
Figure 3. The large superpixels used to normalize backscatter statistics through the ALOS-1 time-series. The superpixels have average area $9.4 \times 10^4$ ha.
Figure 4. Statistics of consecutive differences of the ALOS-1 time-series over undisturbed forest pixels in our training area. Undisturbed pixels are determined with the Québec forestry data [38]. We remove water pixels using a −21 dB threshold as in [47]. In (a), we show the mean of consecutive image differences of undisturbed pixels normalized in two different ways: with statistics in large superpixels (see Figure 3) or with global image statistics from the full tile. We then consider some consecutive differences with high variation (highlighted in gray in (a)) and display their empirical densities separately: (b) shows the differences from images normalized with large superpixels and (c) those normalized with global image statistics.
Figure 5. Schematic for the change detection pipeline.
Figure 6. In (a) and (b), we compare the HV features of the Québec segments we use to determine change: we compare the empirical probability densities over regions with and without forest loss as labeled using Québec disturbance data [38]. The statistics are confined to the training site. In (c), we illustrate how these two features are related and illustrate the Platt scaling used to quantify SVM class certainty. The Platt scaling gives the model’s confidence in change, with 1 the highest and 0 the lowest. We used parameters $C = 50$, $\gamma = 50$, and $C_c = 20$ for each SVM in our ensemble (see Appendix A for a discussion of the parameters).
Figure 7. (a) is a detailed change map from the first window of the ALOS-1 time-series (focal image: 16 June 2007); (b) is the associated empirical change certainty, computed as the percentage of pixels labeled as change by the trained SVM.
Figure 8. A detailed area of the ALOS-1 change map product, including the expansion of Route 175 [52] at the bottom of the image.
Figure 9. A detailed area of the ALOS-2 change map product. The bright white pixels are void data from the RTC correction [43].
Table 1. Available dates for ALOS-1 obtained through the ASF [42] and for ALOS-2 obtained through the ALOS/ALOS-2 user interface gateway [53]. We include the sensor’s mode, topographic data, and image descriptors for reference. We note that we only discuss the upper bounds of the elevation and slope, as there are zero-elevation areas and flat terrain in both extents.

Sensor | Mode | Available Dates (Italicized Are Used in Time-Series and Underlined Are Used for SVM Training and Validation) | Resolution (m) | Elevation $\mu$/max (m) | Slope $\mu$/($\mu + 3\sigma$) (degrees) | Total Area (ha)
ALOS-1 | Fine Beam Dual | 2007-06-16, 2007-08-01, 2008-09-18, 2009-06-21, 2009-08-06, 2010-06-24, 2010-08-09, 2010-09-24 | 12.5 | 801/1148 | 8.44/28.13 | $3.67 \times 10^5$
ALOS-2 | Strip Map (10 m) | 2014-11-22, 2014-12-20, 2015-02-28, 2015-07-04, 2015-08-01, 2016-06-18, 2016-07-02, 2016-11-19, 2017-07-01, 2017-12-16 | 10 | 796/1161 | 8.90/28.79 | $4.54 \times 10^5$
Table 2. Results of change detection performed on the ALOS-1 pair using various segmentations.

Segments | $F_1$ (Training Site) | $F_1$ (Full Tile) | Producer Accuracy (Full Tile) | User Accuracy (Full Tile)
Québec Segments | 0.922 | 0.7719 | 0.6871 | 0.8806
Superpixels | 0.704 | 0.597 | 0.5377 | 0.6709
Pixels | 0.722 | 0.571 | 0.5131 | 0.6436
Squares | 0.708 | 0.567 | 0.5044 | 0.6473
Table 3. Results of change detection performed on the ALOS-2 pair using various segmentations.

Segments | $F_1$ (Training Site) | $F_1$ (Full Tile) | Producer Accuracy (Full Tile) | User Accuracy (Full Tile)
Landsat Segments | 0.831 | 0.5169 | 0.5329 | 0.5019
Superpixels | 0.712 | 0.4841 | 0.5098 | 0.4609
Pixels | 0.716 | 0.4668 | 0.4672 | 0.4665
Squares | 0.71 | 0.458 | 0.4371 | 0.481
