Article

Wetland Mapping with Landsat 8 OLI, Sentinel-1, ALOS-1 PALSAR, and LiDAR Data in Southern New Brunswick, Canada

1 Remote Sensing Laboratory, Faculty of Forestry and Environmental Management, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
2 Department of Forestry and Agro-Environmental Sciences, Università degli Studi di Padova, 35100 Padova, Italy
3 New Brunswick Department of Natural Resources and Energy Development, Fredericton, NB E3B 5A3, Canada
4 Canadian Wildlife Service, Environment and Climate Change Canada, Sackville, NB E3B 5A3, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(13), 2095; https://doi.org/10.3390/rs12132095
Submission received: 21 May 2020 / Revised: 16 June 2020 / Accepted: 24 June 2020 / Published: 30 June 2020
(This article belongs to the Special Issue Environmental Mapping Using Remote Sensing)

Abstract

Mapping wetlands with high spatial and thematic accuracy is crucial for the management and monitoring of these important ecosystems. Wetland maps in New Brunswick (NB) have traditionally been produced by the visual interpretation of aerial photographs. In this study, we used an alternative method to produce a wetland map for southern New Brunswick, Canada, by classifying a combination of Landsat 8 OLI, ALOS-1 PALSAR, Sentinel-1, and LiDAR-derived topographic metrics with the Random Forests (RF) classifier. The images were acquired in three seasons (spring, summer, and fall) with different water levels and during leaf-off/on periods. The resulting map has eleven wetland classes (open bog, shrub bog, treed bog, open fen, shrub fen, freshwater marsh, coastal marsh, shrub marsh, shrub wetland, forested wetland, and aquatic bed) plus various non-wetland classes. We achieved an overall classification accuracy of 97.67%. We compared 951 in-situ validation sites to the classified image and to both the 2016 and 2019 reference maps available through Service New Brunswick. Both reference maps were produced by photo-interpretation of RGB-NIR digital aerial photographs, but the 2019 NB reference map also included information from LiDAR-derived surface and ecological metrics. Of these 951 sites, 94.95% were correctly identified on the classified image, while only 63.30% and 80.02% of these sites were correctly identified on the 2016 and 2019 NB reference maps, respectively. If only the 489 wetland validation sites were considered, 96.93% of the sites were correctly identified as a wetland on the classified image, while only 58.69% and 62.17% of the sites were correctly identified as a wetland on the 2016 and 2019 NB reference maps, respectively.

1. Introduction

Wetlands are defined as lands that are saturated with water long enough to cause the formation of hydric soils and the growth of hydrophytic or water-tolerant plants [1]. Wetlands are found in almost all regions of the world, from the tundra to the tropics, and are a critical part of the natural environment [2,3]. They have high biological diversity and offer critical habitats for numerous flora and fauna species [3]. Wetlands can also provide valuable services to humans, such as flood reduction through the temporary storage and gradual release of stormwater [2]. Wetlands are complex ecological systems that form when hydrological, geomorphological, and biological factors work collectively to create the necessary conditions [4]. There are various types of wetlands depending on regional and local variations in soils, topography, climate, hydrology, water chemistry, vegetation, and other factors, including human disturbances [5,6].
Given that wetlands are facing numerous pressures from human activities, invasive species, and climate change [3], proper management and monitoring tools are necessary to ensure wetland conservation and protection. One fundamental wetland management and monitoring tool is a map of wetlands with high spatial and thematic precision [3], because such maps help to identify the threats and pressures to wetlands and to evaluate the effectiveness of wetland conservation policies [7]. New Brunswick, like several other Canadian provinces, has a wetland conservation strategy, but its successful implementation requires an accurate and up-to-date wetland map. The production of accurate wetland maps relies on the cost-effectiveness and accuracy of data sources, subsequent analyses, and ground-truthing. Wetland mapping in New Brunswick (Canada) to date has been primarily based on the visual interpretation of air photos [8]. The most recent wetland map with wetland classes (hereafter called the 2016 NB reference map) was released by Service New Brunswick in 2016 [9]. It combines information from two GIS layers, namely the regulated wetlands GIS layer and the wetland classes of the GIS layer corresponding to the forest map [10]. Both GIS layers were produced by photo-interpretation of digital orthorectified RGB-NIR 1:12,500 aerial photographs with a 30 cm resolution, following the method developed in 2006 [11]. In 2019, Service New Brunswick released a new wetland map, hereafter called the 2019 NB reference map, but this new map does not include wetland classes [9]. The 2019 NB reference map was produced by photo-interpretation of aerial photographs, but the interpretation also included information from LiDAR-derived surface products at 1 m resolution, following an earlier methodology [12], as well as ground-based observations. In general, photo-interpretation of digitized aerial photography is unreliable for the identification, delineation, and classification of forested wetlands, particularly in regions with dense tree cover, such as most parts of New Brunswick [13,14,15,16].
For mapping natural features over large areas such as the Province of New Brunswick, Earth observation satellite images can be a good alternative to aerial photographs. Indeed, current satellite imagery provides multi-temporal data at a lower cost over large areas, at spatial and temporal resolutions that allow frequent updating of wetland cover maps [17,18,19]. In particular, multi-date satellite images can capture water level and seasonal effects on vegetation, both of which can influence mapping accuracy [20]. While most early wetland mapping studies used optical satellite images [7,21], synthetic aperture radar (SAR) imagery has also been tested for wetland mapping because it can penetrate a vegetation canopy (depending on the wavelength) to detect inundation and is sensitive to moisture conditions [22,23,24]. In contrast to optical satellite data, SAR image acquisition is possible under cloud, haze, and other atmospheric perturbations because SAR sensors are active, emitting their own incident radiation, and are therefore not dependent on solar radiation to acquire images. SAR image quality may, however, be affected by environmental conditions such as wind, rain, and freezing temperatures.
The first SAR studies used mainly single-polarized images, as in several studies related to the Canadian Wetland Inventory [20,25,26] or elsewhere [21]. Wetland classification can be improved by using multiple polarizations as opposed to single-polarized imagery [21,27]. Multiple polarizations can provide more information than a single polarization, especially when the detected objects have a specific orientation. In the case of wetlands, when there is emergent vegetation, L-VV backscatter is relatively low, while L-HH and L-VH backscatters are high [28]. C-HH data were found to be superior to C-HV or C-VV data in delimiting the flood extent of the Elbe River in Germany, although C-HV data provide some information for flood detection [29]. C-HH data provided the highest accuracies for delineating sawgrass and cattails, but C-VV data are useful to separate cattails and low-density marshes [30]. For the X- and L-bands, co-polarizations (HH and VV) give a higher backscatter contrast between swamps and dry forest than cross-polarizations, which enables the separation of flooded and non-flooded forests [31]. However, some studies have noted that cross-polarization is better at separating marsh and swamp classes in the L-band, e.g., [31]. The longer-wavelength P- and L-bands have been useful in penetrating forest canopies to detect standing water, as the surface water produces a double bounce off the tree trunks, enhancing the signal response. C-band data have been useful in detecting standing water under short vegetation [20,31]. C-band and X-band data have also been shown to be favorable in some wooded wetlands with low-density canopies or leaf-off conditions [31,32,33,34]. Due to the complex nature of wetlands as well as the spectral and textural similarities of several wetland classes, multi-source satellite imagery combining SAR imagery at various frequencies with optical imagery is more suitable for classifying wetlands [35]. Indeed, the complementary characteristics of optical, radar, and elevation data enhance the detection of both forested and non-forested wetlands [17]. The combined use of optical and SAR imagery for wetland mapping has already been tested in several studies (Table 1 and Table 2). The fusion of SAR and optical data for wetland mapping has also been proposed [36]. Optical imagery provides vital information on reflectance, which relates to the presence or absence of vegetation, the vegetation type, and soil moisture levels [37]. In dense canopy areas, SAR imagery provides complementary information on surface texture and scattering mechanisms, which also indicates the presence or absence of vegetation, the vegetation type, and the moisture content. Most previous studies on wetland mapping used a non-parametric supervised classifier (Random Forests) (Table 1). Other classifiers have also been tested, such as maximum likelihood, rule-based classifiers, neural networks, and hierarchical decision trees, but they generally produced lower classification accuracies than Random Forests (Table 2). The achieved classification accuracies depend on the classifier used, the number of wetland classes considered, the types of images used, and the region.
The best overall classification accuracy (94.3%) was obtained by [38], who mapped wetlands in central New Brunswick by applying the Random Forests classifier to Landsat 5 TM, ALOS-1 PALSAR (L-HH, L-HV), Radarsat-2 (C-HH, C-HV), DEM, and slope data.
Our study objective was to assess the accuracy of wetland mapping for southern New Brunswick using freely available optical and SAR satellite imagery. As in many Canadian studies, the resulting map included at least the five wetland classes defined in the Canadian Wetland Classification System (CWCS): bog, fen, marsh, swamp, and shallow open water of less than 2 m in depth [1]. Given that combining SAR and optical images has already been demonstrated to be the most useful approach for wetland mapping (Table 1 and Table 2), and to keep the methodology low-cost, our study used freely available optical and SAR satellite imagery, i.e., Sentinel-1 C-band SAR, ALOS-1 PALSAR L-band SAR, and optical Landsat 8 OLI imagery. The images were combined with topographic metrics extracted from LiDAR data, as these have been shown to be useful for wetland mapping [37,38]. The images were classified using Random Forests, which has been successfully used for wetland mapping with optical and SAR data (Table 1). Following Jahncke et al. [37] and LaRocque et al. [38], the satellite images were acquired at different times of the year (including during flood periods) to account for the effects of seasonal (leaf-on/off) and water level variations on the SAR backscatter and optical data. The accuracy of the resulting map and of both the 2016 and 2019 NB reference maps was assessed by comparison with 951 GPS field wetland and non-wetland sites that were visited during the summer and fall of 2016. This comparison allowed the identification of the sources of error for misidentified wetland sites on both NB reference maps and on the classified image.

2. Materials and Methods

2.1. Study Area

The study area in southern New Brunswick encompasses approximately 30,000 km2 between 45° and 46° N and 66° and 67° W (Figure 1). The area is dominated by the extensive floodplains of the Saint John River and its tributaries. Elevations range from 0 m ASL at Grand Lake to 470 m ASL in the Central Uplands Ecoregion in the northwest part of the study area. There is also a hilly region in the southeast part of the study area known as the Caledonia Ecodistrict (Figure 1).
The study area was chosen primarily because of its recognition as a priority wetland conservation area by the Eastern Habitat Joint Venture, the availability of recent LiDAR data, and the high number of easily accessible wetland sites for ground-truthing. While most of the study area still has natural landscapes, it is facing increased pressure from agriculture, urban development, and climate change. The study area experiences warm summers and cold snowy winters. Precipitation is quite evenly distributed throughout the year, ranging from around 1000 mm per year in the central north to 1300 mm on the southern coast. Snowfall is abundant, with 1–3 m of snowfall per year [71]. According to the New Brunswick Ecological Working Group, the area is dominated by a mixture of well-drained soils to the west of the Saint John River and poorly drained soils to the east of the Saint John River [71]. The vegetation in the area is mainly a mixture of hardwood species, e.g., red maple (Acer rubrum), sugar maple (Acer saccharum), beech (Fagus grandifolia), yellow birch (Betula alleghaniensis), white birch (Betula spp.), balsam poplar (Populus balsamifera), and trembling aspen (Populus tremuloides), and softwood species, like balsam fir (Abies balsamea), black spruce (Picea mariana), white spruce (Picea glauca), red spruce (Picea rubens), jack pine (Pinus banksiana), red pine (Pinus resinosa), and white pine (Pinus strobus). The area has the five wetland classes (i.e., bog, fen, marsh, swamp or forested wetland, and shallow/open water) of the Canadian Wetland Classification System (CWCS) [11,41,72], some of them being subdivided as a function of the vegetation cover. In total, the study considered the following 11 wetland classes: open bog, shrub bog, treed bog, open fen, shrub fen, freshwater marsh, coastal marsh, shrub marsh, shrub wetland, forested wetland, and aquatic bed. The 11 non-wetland classes were: unvegetated, sparse vegetation, crop field, pasture/hayfield, grassland, softwood forest, hardwood forest, mixed wood forest, shrubland, forest clear-cut, and deep water.

2.2. Satellite Imagery

The study used optical imagery and two types of SAR imagery. The first SAR imagery was acquired by the Japanese Advanced Land Observing Satellite using its Phased Array type L-band Synthetic Aperture Radar (ALOS-1 PALSAR). The ALOS-1 PALSAR sensor operates in the L-band (1270 MHz, 23.62 cm wavelength) with a pixel resolution of 12.5 m and a swath of 70 km. Incidence angles ranged between 36° and 38°. The images correspond to the ascending, northeast-looking orbit. They were acquired in 2010 during three seasons corresponding to different water levels in the wetlands (low, moderate, or high) (Table 3). We produced one mosaic for each season to cover the full study area.
The second SAR imagery was acquired by the European Sentinel-1 satellite using the Terrain Observation with Progressive Scans SAR (TOPSAR) technique in the high-resolution (HR) Interferometric Wide (IW) swath mode. The sensor operates in the C-band (5.405 GHz, 5.6 cm wavelength) with a pixel resolution of 10 m and a swath of 250 km. The incidence angles ranged between 29.1° and 46°. We used the C-HH and C-HV polarized images as well as the C-VH and C-VV images. The C-HH and C-HV images correspond to the descending, northwest-looking orbit; the C-VH and C-VV images correspond to the ascending, northeast-looking orbit. The S1A Level-1 Ground Range Detected (GRD) data products are already multi-looked and projected to ground range on the WGS84 coordinate system using an Earth ellipsoid model, hence additional multi-looking was unnecessary. The Sentinel-1 images were acquired in 2017 in three seasons (spring, summer, and fall), corresponding to different water levels in the wetlands (flood, high, or low) (Table 4). For each season, the descending image covered the whole study area, while the two ascending images had to be mosaicked.
The optical imagery was acquired by the Landsat 8 Operational Land Imager (OLI) sensor (Table 5). We used the radiometrically calibrated and orthorectified Landsat 8 OLI imagery from the Level-1 Precision Terrain (L1TP) Collection in the GeoTIFF data product, which is the highest Level-1 product and the most suitable imagery for time series analysis [73]. Among the spectral bands of the sensor, we used the following eight bands: Band 1–Ultra Blue (0.43–0.45 μm), Band 2–Blue (0.45–0.51 μm), Band 3–Green (0.53–0.59 μm), Band 4–Red (0.64–0.67 μm), Band 5–Near-Infrared or NIR (0.85–0.88 μm), Band 6–Shortwave Infrared 1 or SWIR-1 (1.57–1.65 μm), Band 7–Shortwave Infrared 2 or SWIR-2 (2.11–2.29 μm), and Band 8–Panchromatic (0.50–0.68 μm). These images have a pixel resolution of 30 m (15 m for the panchromatic band) and a swath of 180 km. We selected images without any cloud or snow on the ground. They were downloaded from the archives accessible through the USGS Global Visualization Viewer website (https://glovis.usgs.gov). The images were acquired in three seasons (spring, summer, and fall), representing different water levels in the wetlands (flood, high, low, or moderate) (Table 5). Covering the whole study area required mosaicking four images, and one mosaic was produced for each season. The mosaics could include images acquired in different years to avoid cloud cover, but a visual inspection did not reveal major changes in land cover between the years.

2.3. Field Data

Between mid-July and October 2016, 1979 GPS ground-truthing sites were visited throughout the study area (Figure 2). Sites were chosen to be easily accessible, well distributed over the study area, and representative of the various classes. Following [73], a site was defined as a wetland when the water table was close to (less than 10 cm below) or at the surface, or when we found indicator plants, soil hydrology, or other signs that the area is very often saturated with water. Some wetland sites had previously been found by the interpretation of aerial photographs or were mapped as a wetland on the NB reference map. At each site, the following measurements were made: GPS location, elevation, and site class based on the class descriptions of Table 6. Ground photographs were also taken, showing the vegetation cover and the soil moisture status. Of the 1979 sites, 1028 were used to delineate the training areas, while the remaining 951 sites were used as validation data (Table 7). The validation sites were also compared to both the 2016 and 2019 NB reference maps. Among the 1028 training sites, 530 were non-wetland or water sites, while 498 were wetland sites. For the validation sites, the total number of non-wetland or water sites was 462 and the total number of wetland sites was 489.

2.4. Lidar Metrics

The Digital Terrain Model (DTM) and the Digital Surface Model (DSM) tiles were downloaded from the High-Resolution Digital Elevation Model (HRDEM) site [74]. Both layers have a spatial resolution of 1 m and were derived from 1 point/m2 LiDAR data acquired in 2013 and 2014, as well as from 6 points/m2 LiDAR data acquired in 2015, throughout the Province of New Brunswick. The tiles were mosaicked in PCI Geomatica into a 15 m mosaic (Figure 1) to match the satellite imagery resolution. Using the same method as [38], we derived several topographic metrics from the DTM to model how and where water will flow, since these metrics are related to landscape shape and position. They include: (i) slope; (ii) profile curvature; (iii) plan curvature; (iv) topographic position index (TPI); (v) topographic wetness index #1; and (vi) topographic wetness index #2. Terrain morphology influences water flow across landscapes, and hence plays a major role in defining where wetlands can develop [11,38]. All the metrics were computed using the System for Automated Geoscientific Analysis (SAGA, v.7.4.0_x64), a free open-source software package designed for spatial data analysis [75]. The software only needs the input files and the grid system to generate the metrics. We also computed the canopy height model (CHM) by subtracting the DTM from the DSM in order to include the vegetation height in the classification. All metric outputs were then exported to ArcGIS 10.6.1 in GeoTIFF format.
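For illustration only, the following minimal R sketch shows how the CHM and two of the DTM-derived metrics could be computed with the terra package; the study itself computed the metrics in SAGA, and the file names here are placeholders.

# Minimal sketch (assumes the 'terra' package and placeholder file names);
# the study itself used SAGA for the topographic metrics.
library(terra)

dtm <- rast("dtm_mosaic_15m.tif")  # LiDAR digital terrain model (bare earth)
dsm <- rast("dsm_mosaic_15m.tif")  # LiDAR digital surface model (top of canopy)

# Canopy height model: surface height minus ground height
chm <- dsm - dtm

# Slope and topographic position index derived from the DTM
slope <- terrain(dtm, v = "slope", unit = "degrees")
tpi   <- terrain(dtm, v = "TPI")

writeRaster(c(chm, slope, tpi), "lidar_metrics_15m.tif", overwrite = TRUE)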

2.5. Image Processing and Classification

The image processing was performed mainly in PCI Geomatica 2018, except where otherwise noted. The ALOS-1 PALSAR imagery was terrain-corrected and georeferenced using the Alaska Satellite Facility's MapReady toolkit [76] and the DTM. The imagery was then exported to PCI Geomatica as a GeoTIFF file. The images for each of the water level/surface conditions were then mosaicked together. Pre-classification processing of the Sentinel-1 GRD data included updating the orbit metadata, noise removal, and terrain correction with the SNAP toolbox [77]. More details about the Sentinel-1 data processing can be found in [78]. The imagery was then exported to PCI Geomatica as a GeoTIFF file. The C-VV and C-VH images were mosaicked, while the C-HH and C-HV images each covered the whole study area. The Landsat 8 OLI imagery was already calibrated and georeferenced. The images were atmospherically corrected using the ATCOR2 program of PCI Geomatica 2018, which uses the algorithm of [79]. Such a correction removes some of the atmospheric interference and converts the image digital numbers into reflectance values. The original multispectral images had a spatial resolution of 30 m and were pan-sharpened to 15 m using the PANSHARP program of PCI Geomatica 2018, which applies the method of Zhang [80]. All the input datasets (Landsat 8 OLI, ALOS-1 PALSAR, Sentinel-1, and LiDAR variables) were then re-projected into the New Brunswick Double Stereographic NAD83 (CanNBnad83) datum with a 15 m pixel resolution. The re-projected data were then used as inputs to a supervised classifier, which requires the delineation of training areas for each class. We delineated a total of 1028 training areas of 40 pan-sharpened Landsat pixels (15 m GSD) that were randomly distributed throughout the study area (Table 7).
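The sketch below illustrates this co-registration step with the R terra package; it is not the authors' workflow (which used PCI Geomatica), EPSG:2953 is assumed here as the code for the New Brunswick Double Stereographic NAD83 (CSRS) projection, and the layer names are placeholders.

library(terra)

nb_crs <- "EPSG:2953"  # assumed EPSG code for NB Double Stereographic NAD83(CSRS)

landsat  <- rast("landsat8_pansharpened.tif")  # placeholder input mosaics
palsar   <- rast("alos_palsar_mosaic.tif")
sentinel <- rast("sentinel1_mosaic.tif")
lidar    <- rast("lidar_metrics_15m.tif")

# Project the pan-sharpened Landsat mosaic to the target CRS at 15 m and use
# it as the common reference grid for all other layers
ref <- project(landsat, nb_crs, res = 15)

# Warp every other layer onto that grid, then stack all bands for classification
stack_all <- c(ref,
               project(palsar, ref),
               project(sentinel, ref),
               project(lidar, ref))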
The training areas were used to calculate the class spectral signatures for the Landsat 8 OLI B2 to B6 images acquired during the three seasons. The class spectral signatures were then used to assess the spectral separabilities between class pairs with the Jeffries–Matusita (J–M) distance, which is defined in [81]. The J–M distance ranges between 0.0 and 2.0, a value of 2.0 indicating 100% class spectral separability. The training areas were then used in the Random Forests (RF) classifier, which was originally developed by Breiman and Cutler [82,83] and which has recently been successfully employed in several wetland mapping studies (Table 1). RF is a non-parametric decision tree supervised classifier that can handle both Gaussian and non-Gaussian data because it does not rely on the parameters of the data distribution. The classification was performed with all input features. Indeed, we compared classification performances using different groups of input features in previous wetland mapping studies [37,38] and concluded that the best case is using all topographic data, SAR imagery, and optical imagery acquired in different seasons. The code used in this study was developed in the R programming language [84]. We used the all-polygon version, which has the advantage of taking into account the actual class size and outperforms the sub-polygon version [85]. The classifier settings were a forest of 500 independent decision trees with the default mtry variable [86]. With the default mtry, a random subset of the input features (the square root of their total number) is evaluated as split candidates at each node of every tree. RF is not sensitive to noise or over-fitting and produces a "Mean Decrease Accuracy" plot that gives the importance of the individual input variables in the classification [87,88,89,90].
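The paper does not reproduce the R script; a minimal sketch of the training and prediction steps with the randomForest package, assuming the training pixels have been extracted into a data frame train_df with a factor column class (and column names matching the layer names of the stack_all raster from the previous sketch), could look as follows.

library(randomForest)
library(terra)

set.seed(1)  # fix the bootstrap samples for reproducibility
rf_model <- randomForest(class ~ ., data = train_df,
                         ntree = 500,        # 500 independent decision trees
                         importance = TRUE)  # keep Mean Decrease Accuracy scores

# Apply the fitted model to every pixel of the stacked inputs
classified <- predict(stack_all, rf_model, na.rm = TRUE)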

2.6. Accuracy Assessment

The classification accuracy was first assessed by comparing independent training areas with the equivalent classified land use in the imagery. Such a comparison was performed using a "confusion matrix" or "error matrix", where each cell gives the number of pixels assigned to a particular class, cross-tabulated against the class defined by the training areas [91]. The confusion matrix allows computing the average and overall accuracies as well as the individual class User's and Producer's accuracies and their related errors (commission and omission, respectively), as described in [91]. However, this classification accuracy is based on training areas and does not give a good assessment of the actual map accuracy. A more robust and independent accuracy assessment is to compare the resulting classified image with an independent set of field observations acquired over the GPS validation sites that were not used as training areas. If the image returns the same class as the one observed at a validation site, the pixel related to that site is assigned a value of 1; otherwise, the value is 0. A percentage of correct identifications can then be computed as a function of the total number of validation sites. Such a comparison was done using the 951 GPS sites related to both wetland and non-wetland classes (Table 7).
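A hedged sketch of this site-based validation in R is given below, assuming a data frame sites with projected x/y coordinates and the observed class, and the classified raster from the sketch in Section 2.5.

library(terra)

pts  <- vect(sites, geom = c("x", "y"), crs = "EPSG:2953")  # validation sites
pred <- extract(classified, pts)[, 2]  # predicted class at each site (first layer)

# Confusion matrix (assumes every class occurs in both margins)
cm <- table(observed = sites$class, predicted = pred)

overall_accuracy   <- sum(diag(cm)) / sum(cm)
producers_accuracy <- diag(cm) / rowSums(cm)  # 1 - omission error
users_accuracy     <- diag(cm) / colSums(cm)  # 1 - commission error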

3. Results

3.1. J–M Distance

Table 8 presents the Jeffries–Matusita (J–M) distances between the class pairs, computed using the reflectance values of the Landsat 8 OLI Band 2 to Band 6 images. The mean J–M distance is 1.971, indicating a good spectral separability between the classes. The minimum separability of 1.706 occurred between the shrub marsh (SM) and shrub wetland (SW) classes. The highest separability values of 2.000 were obtained between the deep water (WA) class and all other classes except the aquatic bed (AB) class.
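For reference, the J–M distance between two classes i and j is commonly computed from the Bhattacharyya distance B_ij under an assumption of Gaussian class distributions; this standard formulation is assumed here to match the definition in [81]:

J_{ij} = 2\left(1 - e^{-B_{ij}}\right), \qquad
B_{ij} = \frac{1}{8}(\mu_i - \mu_j)^{\mathsf{T}} \left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1} (\mu_i - \mu_j)
+ \frac{1}{2}\ln\!\left(\frac{\left|\frac{1}{2}(\Sigma_i + \Sigma_j)\right|}{\sqrt{|\Sigma_i|\,|\Sigma_j|}}\right)

where μ and Σ are the class mean vectors and covariance matrices of the band reflectances. The exponential saturates the distance at 2.0, which is why fully separable class pairs, such as deep water against the upland classes, reach exactly 2.000.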

3.2. Classified Image

Table 9 shows the confusion matrix (and associated classification accuracies) comparing the training areas with the classified image for all 22 landcover classes when RF was applied to the whole dataset. The resulting classified image is presented in Figure 3. We achieved an overall accuracy (OA) of 97.67%, indicating an excellent classification accuracy. As shown in Table 9, the highest User's (UA) and Producer's (PA) accuracies among the upland and water classes were obtained for the deep water (WA) class (99.99%). The highest error of commission (EC) was obtained with the mixed wood forest (MF) class (14.53%), mainly because of confusion with other treed classes such as softwood forest (SF), shrub fen (TF), shrub marsh (SM), shrub wetland (SW), and treed bog (TB). The highest error of omission (EO) occurred with the sparse vegetation (SV) class (16.24%), mainly because of confusion with unvegetated (UV), pasture/hayfield (AP), shrubland (SL), and forested wetland (FW). For the wetland classes, the highest User's and Producer's accuracies were obtained with the coastal marsh (CM) class (99.03%). The highest error of commission was obtained with the shrub wetland (SW) class (12.89%), mainly because of confusion with forested wetland (FW), open fen (OF), shrub marsh (SM), freshwater marsh (FM), shrub fen (TF), and mixed wood forest (MF). The highest error of omission occurred with the shrub marsh (SM) class (18.07%) because of confusion with softwood forest (SF), mixed wood forest (MF), treed bog (TB), and shrub wetland (SW).
The RF classifier produces variable importance rank scores (expressed as a percentile), which give the importance of each variable in the overall classification (Figure 4). For the classification, we used all the input data since, in a previous study [39], we had already performed a detailed analysis of classification performances by generating wetland maps using different groups of variables and showed that the best classification was produced using all variables. The most important variable was the digital terrain model (DTM), whose removal from the model resulted in a decrease in model accuracy of 59%. Four of the LiDAR-derived topographic metrics (TWI02, TPI, TWI01, and CHM) were among the top 10 variables. The other variables in the top ten were the Landsat 8 OLI B1 (ultra-blue) images acquired in summer and spring and the Landsat 8 OLI B2 (blue) image acquired in spring. The most important SAR variable was the Sentinel-1 C-VV imagery, followed by the Sentinel-1 C-HH and C-VH images acquired in summer (Figure 4).
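For completeness, a Mean Decrease Accuracy ranking like the one in Figure 4 can be retrieved from a fitted randomForest model as sketched below, using the hypothetical rf_model object from the Section 2.5 sketch (importance = TRUE must have been set at training time).

library(randomForest)

# Mean Decrease Accuracy scores (type = 1)
mda <- importance(rf_model, type = 1)
mda[order(mda[, 1], decreasing = TRUE), , drop = FALSE]  # variables ranked by importance

varImpPlot(rf_model, type = 1)  # variable-importance plot, analogous to Figure 4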

3.3. Comparison with Independent GPS Sites and the 2016 and 2019 NB Reference Maps

The classified image was validated by comparison with the 951 in-situ GPS field sites. Table 10 shows the confusion matrix (and associated accuracies) comparing the field sites to the classified image. The validation has an overall accuracy of 94.95%. For the upland and water classes, the highest UA and PA occurred with the crop field (AC) and deep water (WA) classes (100%). Forest clear-cut (CC) has the highest error of commission (19.23%) because of confusion with softwood forest (SF) and hardwood forest (HF). The softwood forest (SF) class has the highest error of omission (33.33%), which is due to confusion with treed bog (TB), mixed wood forest (MF), forest clear-cut (CC), and shrub wetland (SW). For most of the wetland classes, the UA and PA were above 91.30%, except for the PA of treed bog (TB) and the UA of freshwater marsh (FM).
The highest UA and PA occurred with the aquatic bed (AB) class. The treed bog (TB) class has the highest error of commission (21.28%), mainly because of confusion with softwood forest (SF) and forested wetland (FW). The highest error of omission was for the freshwater marsh (FM) class (8.70%) because of confusion with pasture/hayfield (AP) and grassland (AG). We also compared the in-situ GPS wetland validation sites to the 2016 provincial wetland reference map (Table 11). For such a comparison, given that the 2016 NB reference map has only wetland classes, all non-wetland classes were combined into a single class. The overall accuracy (63.30%) was well below that obtained with the classified image produced in this study. The associated UA and PA are generally lower, except for coastal marsh (CM). The highest UA and PA occurred with the aquatic bed class (100.00%) and the open bog (OB) class (65.38%), respectively. The open fen (OF) class has the highest error of commission (96.97%), mainly because of confusion with freshwater marsh (FM), shrub wetland (SW), forested wetland (FW), and non-wetland classes. The highest error of omission was for open fen (93.33%) because of confusion with open bog (OB), shrub bog (SB), shrub fen (TF), and freshwater marsh (FM).
We also compared the in-situ GPS validation sites to the 2019 NB reference map (Table 12). Given that the 2019 NB reference map does not have individual wetland classes, all the in-situ GPS validation sites were grouped into two categories: wetland and non-wetland. While the 2019 NB reference map gives a higher overall accuracy (80.02%) than the 2016 NB reference map (63.30%), this new map still has a lower accuracy than the map produced in this study (94.95%). The 2019 NB reference map does not provide individual wetland classes, which limits its practical use for several applications, including ecological studies. When the classified image and the two NB reference maps were compared only to the 489 wetland sites, 96.93% of these sites were classified in a wetland class on the classified image, while only 58.69% and 62.17% of these sites were identified as a wetland on the 2016 and 2019 NB reference maps, respectively.
The confusion matrices of Table 10, Table 11, and Table 12 give a global accuracy assessment of the classified image produced in this study, the 2016 NB reference map, and the 2019 NB reference map, but it is necessary to further investigate the sources of error for each map. Table 13 compares the 2016 NB reference map and the classified image with respect to the number and percentage of the 489 in-situ GPS wetland validation sites that were correctly identified in a wetland class or in the correct wetland class. While 97.34% of the GPS validation sites were correctly identified in a wetland class on the classified image, this percentage dropped to 59.92% in the case of the 2016 NB reference map. Similarly, 94.27% of the GPS validation sites were identified in the correct wetland class on the classified image, while only 29.86% were identified in the correct wetland class on the 2016 NB reference map. To determine which wetland class in the classified map or the 2016 NB reference map has the best or poorest identification (by comparison with the GPS wetland validation sites), we calculated the number and percentage of correctly identified wetland validation sites for each wetland class (Table 14). All the classes were better identified in the classified image than in the 2016 NB reference map. For the classified image, the open fen (OF) and coastal marsh (CM) classes had the highest identification accuracy (100.00%). The treed bog (TB) class had the lowest accuracy (78.72%). For the 2016 NB reference map, the open bog (OB) class had the highest accuracy (65.38%), while the shrub fen (TF) and shrub marsh (SM) classes were never correctly identified (0.00%).
For the classified image, about half of the incorrectly identified in-situ GPS wetland validation sites were not in the right wetland class and the other half were not identified as a wetland at all (Table 15). For the 2016 NB reference map, about two-thirds of the incorrectly identified in-situ GPS wetland validation sites were not in the right wetland class and the other third were not identified as a wetland (Table 15). On the classified image, misidentifications were mainly due to wetlands being identified in the wrong wetland class (Table 16). On the 2016 NB reference map, the sites were either identified in the wrong wetland class (Table 16) or in an upland class (Table 17). The latter case is particularly true for the treed bogs, coastal marshes, shrub wetlands, and forested wetlands (Table 17).

4. Discussion

In this study, we produced a map with 10 non-wetland classes, 11 wetland classes, and one water class by applying the Random Forests classifier to a combination of LiDAR topographic metrics and Landsat 8 OLI, Sentinel-1, and ALOS-1 PALSAR images acquired in three seasons and under different water level conditions. We achieved a mean Jeffries–Matusita (J–M) distance of 1.971, computed between the class pairs using the reflectance values of the Landsat 8 OLI Band 2 to Band 6 images. Such a J–M distance indicates a good mean spectral separability between the classes. The lowest J–M distance (1.706) was obtained for the SM and SW classes, which are hardly distinguishable from each other using optical bands. Both classes have shrub-type vegetation and differ only in the amount of shrub coverage, which the optical images cannot distinguish well. It has already been shown that shrub wetlands are the least separable classes [24,48,53,92,93]. The highest J–M distances were obtained between the deep water (WA) class and the other classes, except for the aquatic bed (AB) class. This can be attributed to the very distinct reflectance of the deep water and aquatic bed classes. Other studies [24,57,92] have also found that open/shallow water is the most separable class from other wetland classes.
The overall classification accuracy obtained (97.67%) was much higher than those obtained in the Random Forests-based wetland mapping studies listed in Table 1. Most of these studies used only a combination of C-band SAR, optical, and DEM data. Some studies also included L-band imagery [41,50,51,56,68], given that the L-band, with its longer wavelength, is more penetrating, especially in high-density canopy areas where C-band and optical beams have limited penetration and cannot reach the floor. However, even when the L-band imagery was included, the corresponding overall accuracies were slightly lower than in this study. The lower classification accuracies of previous studies could be attributed to the fact that few of them considered the seasonality of vegetation and water levels in the imagery selection, as was done in this study, where images were selected over three different seasons to account for the influence of leaf-off/on periods and water levels. In addition, some images for this study were acquired in spring during a flooding event, which aided the delineation of wetlands. In flooding conditions, most wetlands are filled with water, which aids classification [38,51]. In comparison to previous studies [38,49], which also considered seasonality, our classification accuracies were slightly higher because of the use of Landsat 8 OLI optical imagery, which can be pan-sharpened to a better spatial resolution than the Landsat 5 TM imagery used in those studies. Landsat 8 OLI images are advantageous for wetland mapping because they have more bands than Landsat 5 TM and a better radiometric resolution. Similar to [37], we used several LiDAR-derived topographic metrics that can provide much-needed information related to water flow. Our better overall accuracy can also be attributed to the use of pan-sharpened Landsat 8 OLI images and of high-resolution Sentinel-1 and ALOS-1 PALSAR images. High spatial resolution optical images were already known to be advantageous for detecting wetland boundaries and species using Random Forests [19]. Another reason is the use of Random Forests, which has been shown to be superior to other classifiers such as the maximum likelihood, rule-based, neural network, hierarchical decision tree, and support vector machine classifiers in several previous studies [7,47,66,94]. This is evident when comparing Table 1 and Table 2. One of the studies of Table 2 [70] achieved an overall accuracy of 94% by applying a support vector machine classifier, but the resulting map has only two wetland classes, whereas our map included 11 wetland classes.
Four of the LiDAR-derived topographic metrics (TWI02, TPI, TWI01, and CHM) were among the top 10 variables. These LiDAR-derived topographic metrics appear to be very important when mapping both upland and wetland land covers. Topography was shown to be very important when mapping wetlands in Ontario [39], Nova Scotia [37], and Newfoundland and Labrador [54]. The high rank of the LiDAR-derived topographic metrics can be explained by the influence of the topography on the way water flows across or into a wetland. Wetlands can develop in a variety of landscapes, but topography influences the distribution of surplus water and consequently the location of wetlands [73]. The other variables in the top ten were the Landsat 8 B1 (ultra-blue) images acquired in summer and spring and the B2 (blue) image acquired in spring. In numerous previous studies, optical imagery was found to be suitable for mapping wetlands, particularly in the case of open wetlands with short vegetation [20,23,48,49,50]. Most variables derived from SAR imagery were shown to be less important than optical imagery, except for the Sentinel-1 C-VV summer imagery (Figure 4). Such a result agrees with previous studies, which found that optical imagery acquired during high to medium water levels (spring and summer) was more important than SAR images when classifying wetlands [37,38,51]. The most important SAR variable was the Sentinel-1 C-VV imagery, followed by the C-HH and C-VH images acquired in summer. C-VV imagery has proved useful for delineating low-density marshes [30], similar to the coastal and shrub marshes of our study area. In addition, co-polarizations (HH and VV) were found to give a higher backscatter contrast between swamps and dry forest than cross-polarizations [31]. C-band imagery has been useful in detecting standing water under short vegetation [20,31] and was suitable in some wooded wetlands with low-density canopies or leaf-off conditions [31,32,33,34]. While we would expect the longer-wavelength L-band imagery to be useful in penetrating forest canopies to detect standing water, the ALOS-1 PALSAR L-band imagery was less important than the Sentinel-1 C-band imagery. Similar results were reported for mapping wetlands in Yukon, Canada [50]. One probable reason in our case is the time gap between the ALOS-1 PALSAR L-band imagery and the other imagery and data: the ALOS-1 PALSAR L-band imagery was acquired in 2010, whereas the other data (LiDAR, Sentinel-1, Landsat 8 OLI) were acquired between 2013 and 2018. Future mapping efforts and studies would benefit from obtaining all data sources during the same years. Among all the ALOS-1 PALSAR L-band imagery, the L-HV fall imagery was ranked higher than the other L-band images. These results are inconsistent with Bourgeau-Chavez et al. [49], who reported that the L-HH spring bands were the most important among the L-bands.
The classified image produced by this study had an overall validation accuracy of 94.95%. This is somewhat lower than the validation accuracy (98.6%) obtained previously for a study in central New Brunswick [38]. This could be attributed to the fact that the previous study used Radarsat-2 C-band images acquired during flooding events, while, in this study, the Sentinel-1 C-band imagery was acquired at high water levels, but not during flooding events. In addition, the number of validation sites in [38] was lower. The accuracy in the current study was higher than the one we obtained previously in Nova Scotia (88.5%), even though that study also used L-band imagery and imagery acquired during flooding events [38]. While the confusion matrices obtained by comparing the classified image and the two reference maps to the validation GPS sites cannot be compared directly, given the different number of classes of the classified image and the two maps, there was a lower overall accuracy for both the 2016 and 2019 NB reference maps compared to the classified image produced in this study. Both the 2016 and 2019 NB reference maps were produced by photo-interpretation of digitized aerial photography. Such a method has already been shown to be unreliable for mapping forested wetlands, particularly in regions with dense tree cover such as most parts of New Brunswick [13,14,15,37]. The 2019 NB reference map gave slightly better accuracies than the 2016 NB reference map because it also included information from LiDAR-derived 1 m topographic data. Figure 4 supports the notion that LiDAR-derived topographic data are very important for land cover mapping and that adding such information to the analysis is beneficial. Figure 5 compares our classified image, the 2016 NB reference map, and the 2019 NB reference map for a forested wetland site for which a ground picture is given. Only the classified image properly mapped the site as a forested wetland; on both the 2016 and 2019 NB reference maps, the site was mapped as an upland site. Our detailed analysis of the sources of error for each map is consistent with our previous work in central NB, which found that most misidentifications of GPS validation sites occurred for treed wetland classes [38].

5. Conclusions

Traditional mapping of wetlands by visual interpretation of air photos can be costly and time-consuming. This study has demonstrated the potential of applying the Random Forests (RF) classifier to freely available Landsat 8 OLI, ALOS-1 PALSAR, and Sentinel-1 images combined with LiDAR-derived topographic metrics to produce a highly accurate wetland map of southern New Brunswick. The resulting overall classification accuracy (97.67%) and validation accuracy (94.95%) showed that the combination of optical, SAR, and LiDAR data acquired in three seasons with varying water levels in the wetlands is highly efficient for mapping wetlands in forested landscapes, as already shown in recent studies. Our confusion matrices showed that the main sources of error in wetland mapping occurred for treed wetland classes, especially for treed bog (TB). A comparison with the GPS field validation sites indicated that the wetland map produced in this study was more accurate than both the 2016 and 2019 NB reference maps.
Compared to previous studies on wetland mapping using optical, SAR, and LiDAR data (Table 1 and Table 2), exceptionally high classification and validation accuracies were obtained despite the time gaps between the acquisitions of the Landsat 8 OLI, SAR, and LiDAR data. Some landcover changes would have occurred within this period. Further work is needed to assess whether this time gap has a significant effect on the mapping accuracy and whether better accuracy can be obtained using data acquired within the same year. The study used Landsat 8 OLI optical imagery; future work would benefit from testing newly available free optical images such as Sentinel-2.

Author Contributions

Conceptualization, A.L. and B.L.; methodology, A.L. and B.L.; validation, A.L. and C.P.; formal analysis, A.L., B.L., and C.P.; investigation, A.L. and C.P.; resources, B.L., K.C., and A.H.; writing—original draft preparation, A.L., C.P., and B.L.; writing—review and editing, B.L., F.P., K.C., and A.H.; supervision, B.L. and F.P.; project administration, B.L.; funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded through Ducks Unlimited Canada and the partners of the Eastern Habitat Joint Venture via a contract awarded to Prof. Dr. Brigitte Leblon (University of New Brunswick). C.P., a TRANSFOR-M student, was funded by a University of Padova Scholarship.

Acknowledgments

The authors wish to acknowledge and thank Damien LaRocque, Michael Mordini, Sophie Fialdès, Lara Kim, Todd Arseneault, Duncan MacGillivray, Trevor Webb, Andréas Brotzer, and Bennet Wilson for their help during the fieldwork.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rubec, C. The Canadian wetland classification system. In The Wetland Book; Finlayson, C.M., Everard, M., Irvine, K., McInnes, R., Middleton, B., van Dam, A., Davidson, N.C., Eds.; Springer: Dordrecht, The Netherlands, 2018; pp. 1577–1581. [Google Scholar]
  2. Mitsch, W.J.; Bernal, B.; Hernandez, M.E. Ecosystem services of wetlands. Int. J. Biodivers. Sci. Ecosyst. Serv. Manag. 2015, 11, 1–4. [Google Scholar] [CrossRef] [Green Version]
  3. Ramsar Convention on Wetlands. Global Wetland Outlook: State of the World’s Wetlands and Their Services to People; Ramsar Convention Secretariat: Gland, Switzerland, 2018; p. 18. [Google Scholar]
  4. Lynch-Stewart, P.; Neice, P.; Rubec, C.; Kessel-Taylor, I. Federal Policy on Wetland Conservation: Implementation Guide for Federal Land Managers; North American Wetlands Conservation Council (NAWCC): Ottawa, ON, Canada, 1996; p. 44.
  5. Mahdavi, S.; Salehi, B.; Granger, J.; Amani, M.; Brisco, B.; Huang, W. Remote sensing for wetland classification: A comprehensive review. GISci. Remote Sens. 2018, 55, 623–658. [Google Scholar] [CrossRef]
  6. Rampi, L.P.; Knight, J.F.; Pelletier, K.C. Wetland mapping in the upper Midwest United States: An object-based approach integrating lidar and imagery data. Photogramm. Eng. Remote Sens. 2014, 80, 439–448. [Google Scholar] [CrossRef]
  7. Chasmer, L.; Cobbaert, D.; Mahoney, C.; Millard, K.; Peters, D.; Devito, K.; Brisco, B.; Hopkinson, C.; Merchant, M.; Montgomery, J.; et al. Remote Sensing of Boreal Wetlands 1: Data Use for Policy and Management. Remote Sens. 2020, 12, 1320. [Google Scholar] [CrossRef] [Green Version]
  8. Hanson, A.R.; Calkins, L. Wetlands of the Maritime Provinces: Revised Documentation for the Wetlands Inventory; Canadian Wildlife Service: Sackville, NB, 1996; p. 70. ISBN 9780662252856. [Google Scholar]
  9. GeoNB. Available online: http://www.snb.ca/geonb1/e/DC/RW.asp (accessed on 1 September 2016). 1 September 2016 for the 2016 NB reference map and on 1 February, 2020 for the 2019 NB reference map.
  10. GeoNB. Available online: http://www.snb.ca/geonb1/e/DC/forest.asp (accessed on 1 September 2016).
  11. New Brunswick Department of Natural Resources. New Brunswick Wetland Classification for 2003–2012 Photo Cycle; New Brunswick Department of Natural Resources; Fish, and Wildlife Branch: Fredericton, NB, USA, 2006; p. 7.
  12. Murphy, P.N.C.; Ogilvie, J.; Connor, K.; Arp, P.A. Mapping wetlands: A comparison of two different approaches for New Brunswick, Canada. Wetlands 2007, 27, 846–854. [Google Scholar] [CrossRef]
  13. Jacobson, J.E.; Ritter, R.A.; Koeln, G.T. Accuracy of Thematic Mapper derived wetlands as based on national wetland inventory data. In Proceedings of the American Society Photogrammetry and Remote Sensing Technical Papers; ASPRS-ACSM Fall Convention: Reno, NV, USA, 1997; pp. 109–118. [Google Scholar]
  14. Sader, A.; Ahl, D.; Liou, W. Accuracy of Landsat TM and GIS rule-based methods for forest wetland classification in Maine. Remote Sens. Environ. 1995, 53, 133–144. [Google Scholar] [CrossRef]
  15. Todd, A.R.; Hogg, K.W. Automated discrimination of upland and wetland using terrain derivatives. Can. J. Remote Sens. 2007, 33, S68–S83. [Google Scholar]
  16. Chasmer, L.; Mahoney, C.; Millard, K.; Nelson, K.; Peters, D.L.; Merchant, M.; Hopkinson, C.; Brisco, B.; Niemann, O.; Montgomery, J.; et al. Remote sensing of boreal wetlands 2: Methods for evaluating boreal wetland ecosystem state and drivers of change. Remote Sens. 2020, 12, 1321. [Google Scholar] [CrossRef] [Green Version]
  17. Bwangoy, J.R.B.; Hansen, M.C.; Roy, D.P.; Grandi, G.D.; Justice, C.O. Wetland mapping in the Congo Basin using optical and radar remotely sensed data and derived topographical indices. Remote Sens. Environ. 2010, 114, 73–86. [Google Scholar] [CrossRef]
  18. Amani, M.; Salehi, B.; Mahdavi, S.M.; Granger, J.; Brisco, B. Evaluation of multi-temporal Landsat 8 data for wetland classification in Newfoundland, Canada. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS’17), Fort Worth, TX, USA, 23–28 July 2017; pp. 6229–6231. [Google Scholar]
  19. Li, J.; Chen, W. A rule-based method for mapping Canada’s wetlands using optical, radar, and DEM data. Int. J. Remote Sens. 2005, 26, 5051–5069. [Google Scholar] [CrossRef]
  20. Ozesmi, S.L.; Bauer, M. Satellite remote sensing of wetlands. Wetland Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
  21. Brisco, B.; Kapfer, M.; Hirose, T.; Tedford, B.; Liu, J. Evaluation of C-band polarization diversity and polarimetry for wetland mapping. Can. J. Remote Sens. 2011, 37, 82–92. [Google Scholar] [CrossRef]
  22. Brisco, B.; Li, K.; Tedford, B.; Charbonneau, F.; Yun, S.; Murnaghan, K. Compact polarimetry assessment for rice and wetland mapping. Int. J. Remote Sens. 2013, 34, 1949–1964. [Google Scholar] [CrossRef] [Green Version]
  23. White, L.; Brisco, B.; Pregitzer, M.; Tedford, B.; Boychuk, L. RADARSAT-2 beam mode selection for surface water and flood mapping. Can. J. Remote Sens. 2014, 40, 135–151. [Google Scholar]
  24. Grenier, M.; Demers, A.M.; Labrecque, S.; Benoit, M.; Fournier, R.A.; Drolet, B. An object-based method to map wetland using RADARSAT-1 and Landsat ETM images: Test case on two sites in Quebec, Canada. Can. J. Remote Sens. 2007, 33, S28–S45. [Google Scholar] [CrossRef]
  25. Fournier, R.A.; Grenier, M.; Lavoie, A.; Hélie, R. Towards a strategy to implement the Canadian Wetland Inventory using satellite remote sensing. Can. J. Remote Sens. 2007, 33, S1–S16. [Google Scholar] [CrossRef]
  26. Wang, J.; Shang, J.; Brisco, B.; Brown, R.J. Evaluation of multidate ERS-1 and multispectral Landsat imagery for wetland detection in Southern Ontario. Can. J. Remote Sens. 1998, 24, 60–68. [Google Scholar] [CrossRef]
  27. Ramsey, E.W., III; Nelson, G.; Saptoka, S.; Laine, S.; Verdi, J.; Krasznay, S. Using multiple-polarization L-band radar to monitor marsh burn recovery. IEEE Trans. Geosci. Remote Sens. 1999, 37, 635–639. [Google Scholar] [CrossRef]
  28. Henry, J.B.; Chastanet, P.; Fellah, K.; Desnos, Y. Envisat multi-polarized ASAR data for flood mapping. Int. J. Remote Sens. 2006, 27, 1921–1929. [Google Scholar] [CrossRef]
  29. Pope, K.O.; Rejmankova, E.; Paris, J.F.; Woodruff, R. Detecting seasonal flooding cycles in marshes of the Yucatan peninsula with SIR-C polarimetric radar imagery. Remote Sens. Environ. 1997, 59, 157–166. [Google Scholar] [CrossRef]
  30. Henderson, F.; Lewis, A. Radar detection of wetland ecosystems: A review. Int. J. Remote Sens. 2008, 29, 5809–5835. [Google Scholar] [CrossRef]
  31. Townsend, P.A. Relationships between forest structure and the detection of flood inundation in forested wetlands using C-Band SAR. Int. J. Remote Sens. 2002, 23, 443–460. [Google Scholar] [CrossRef]
  32. Lang, M.W.; Kasischke, E.S. Using C-Band synthetic aperture radar data to monitor forested wetland hydrology in Maryland’s coastal plain USA. IEEE Trans. Geosci. Remote Sens. 2008, 46, 535–546. [Google Scholar] [CrossRef]
  33. Lang, M.W.; Townsend, P.; Kasischke, E.S. Influence of incidence angle on detecting flooded forests using C-HH synthetic aperture radar data. Remote Sens. Environ. 2008, 11, 3898–3907. [Google Scholar] [CrossRef]
  34. Whitcomb, J.; Moghaddam, M.; Kellndorfer, J.; McDonald, K.; Podest, E. Wetlands map of Alaska using L-band radar satellite imagery. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium (IGARSS’07), Barcelona, Spain, 23–28 July 2007; Volume 35, pp. 2487–2490. [Google Scholar]
  35. Augusteijn, M.; Warrender, C. Wetland classification using optical and radar data and neural network classification. Int. J. Remote Sens. 1998, 19, 1545–1560. [Google Scholar] [CrossRef]
  36. Bourgeau-Chavez, L.L.; Kasischke, E.S.; Brunzell, S.M.; Mudd, J.P.; Smith, K.B.; Frick, A.L. Analysis of space-born SAR data for wetland mapping in Virginia riparian ecosystems. Int. J. Remote Sens. 2001, 22, 3665–3687. [Google Scholar] [CrossRef]
  37. Jahncke, R.; Leblon, B.; Bush, P.; LaRocque, A. Mapping wetlands in Nova Scotia with multi-beam RADARSAT-2 polarimetric SAR, optical satellite imagery, and Lidar data. Int. J. Appl. Earth Obs. Geoinf. 2018, 68, 139–156. [Google Scholar] [CrossRef]
  38. LaRocque, A.; Leblon, B.; Woodward, R.; Bourgeau-Chavez, L.L. Wetland mapping in New Brunswick, Canada, with Landsat 5 TM, ALOS-1 PALSAR, and Radarsat-2 imagery. Proceedings of the XXIVth ISPRS Congress. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 8, 308–332. [Google Scholar]
  39. Millard, K.; Richardson, M. Wetland mapping with LiDAR derivatives, SAR polarimetric decompositions, and LiDAR-SAR fusion using a RF classifier. Can. J. Remote Sens. 2013, 39, 290–307.
  40. Amani, M.; Mahdavi, S.; Afshar, M.; Brisco, B.; Huang, W.; Mirzadeh, S.M.J.; White, L.; Banks, S.; Montgomery, J.; et al. Canadian wetland inventory using Google Earth Engine: The first map and preliminary results. Remote Sens. 2019, 11, 842.
  41. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Motagh, M. Multi-temporal, multi-frequency, and multi-polarization coherence and SAR backscatter analysis of wetlands. ISPRS J. Photogramm. Remote Sens. 2018, 142, 78–93.
  42. Corcoran, J.M.; Knight, J.F.; Brisco, B.; Kaya, S.; Cull, A.; Murnaghan, K. The integration of optical, topographic, and radar data for wetland mapping in northern Minnesota. Can. J. Remote Sens. 2011, 37, 564–582.
  43. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B. An assessment of simulated compact polarimetric SAR data for wetland classification using Random Forest algorithm. Can. J. Remote Sens. 2017, 43, 468–484.
  44. Whitcomb, J.; Moghaddam, M.; McDonald, K.; Kellndorfer, J.; Podest, E. Mapping vegetated wetlands of Alaska using L-band radar satellite imagery. Can. J. Remote Sens. 2009, 35, 54–72.
  45. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B.; Homayouni, S.; Gill, E.; DeLancey, E.R.; Bourgeau-Chavez, L. Big Data for a big country: The first generation of Canadian wetland inventory map at a spatial resolution of 10-m using Sentinel-1 and Sentinel-2 data on the Google Earth Engine cloud computing platform. Can. J. Remote Sens. 2020, 46, 15–33.
  46. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F. The effect of PolSAR image de-speckling on wetland classification: Introducing a new adaptive method. Can. J. Remote Sens. 2017, 43, 485–503.
  47. Whyte, A.; Ferentinos, K.P.; Petropoulos, G.P. A new synergistic approach for monitoring wetlands using Sentinel-1 and 2 data with object-based machine learning algorithms. Environ. Model. Softw. 2018, 104, 40–54.
  48. Amani, M.; Salehi, B.; Mahdavi, S.M.; Brisco, B. Spectral analysis of wetlands using multi-source optical satellite imagery. ISPRS J. Photogramm. Remote Sens. 2018, 144, 119–136.
  49. Bourgeau-Chavez, L.L.; Endres, S.; Battaglia, M.; Miller, M.E.; Banda, E.; Laubach, Z.; Higman, P.; Chow-Fraser, P.; Marcaccio, J. Development of a bi-national Great Lakes coastal wetland and land use map using three-season PALSAR and Landsat imagery. Remote Sens. 2015, 7, 8655–8682.
  50. Merchant, M.A.; Warren, R.K.; Edwards, R.; Kenyon, J.K. An object-based assessment of multi-wavelength SAR, optical imagery and topographical datasets for operational wetland mapping in Boreal Yukon. Can. J. Remote Sens. 2019, 45, 308–332.
  51. Corcoran, J.M.; Knight, J.F.; Gallant, A.L. Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of Random Forest classification of wetlands in northern Minnesota. Remote Sens. 2013, 5, 3212–3238.
  52. Amani, M.; Mahdavi, S.; Berard, O. Supervised wetland classification using high spatial resolution optical, SAR, and LiDAR imagery. J. Appl. Remote Sens. 2020, 14, 024502.
  53. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Homayouni, S.; Gill, E. The first wetland inventory map of Newfoundland at a spatial resolution of 10 m using Sentinel-1 and Sentinel-2 data on the Google Earth Engine cloud computing platform. Remote Sens. 2019, 11, 43.
  54. Mahdavi, S.; Salehi, B.; Amani, M.; Granger, J.E.; Brisco, B.; Huang, W.; Hanson, A. Object-based classification of wetlands in Newfoundland and Labrador using multi-temporal PolSAR data. Can. J. Remote Sens. 2017, 43, 432–450.
  55. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Gill, E. Full and simulated compact polarimetry SAR responses to Canadian wetlands: Separability analysis and classification. Remote Sens. 2019, 11, 516.
  56. Amani, M.; Salehi, B.; Mahdavi, S.M.; Granger, J.; Brisco, B.; Hanson, A. Wetland classification using multi-source and multi-temporal optical remote sensing data in Newfoundland and Labrador, Canada. Can. J. Remote Sens. 2017, 43, 360–373.
  57. Bourgeau-Chavez, L.L.; Endres, S.; Powell, R.; Battaglia, M.J.; Benscoter, B.; Turetsky, M.; Kasischke, E.S.; Banda, E. Mapping boreal peatland ecosystem types from multitemporal radar and optical satellite imagery. Can. J. For. Res. 2017, 47, 545–559.
  58. Amani, M.; Salehi, B.; Mahdavi, S.M.; Granger, J.; Brisco, B.; Hanson, A. Wetland classification in Newfoundland and Labrador using multi-source SAR and optical data integration. GISci. Remote Sens. 2017, 54, 779–796.
  59. Castañeda, C.; Ducrot, D. Land cover mapping of wetland areas in an agricultural landscape using SAR and Landsat imagery. J. Environ. Manag. 2009, 90, 2270–2277.
  60. Bourgeau-Chavez, L.L.; Riordan, K.; Powell, R.B.; Miller, N.; Barada, H. Improving wetland characterization with multi-sensor, multi-temporal SAR and optical/infrared data fusion. In Advances in Geoscience and Remote Sensing; Jedlovec, G., Ed.; IntechOpen: London, UK, 2009; Chapter 33; pp. 679–708. Available online: https://www.intechopen.com/books/advances-in-geoscience-and-remote-sensing/improving-wetland-characterization-with-multi-sensor-multi-temporal-sar-and-optical-infrared-data-fu (accessed on 12 April 2020).
  61. Baghdadi, N.; Bernier, M.; Gauthier, R.P.; Neeson, I. Evaluation of C-band SAR data for wetlands mapping. Int. J. Remote Sens. 2001, 22, 71–88.
  62. Bolstad, P.V.; Lillesand, T. Rule-based classification models: Flexible integration of satellite imagery and thematic spatial data. Photogramm. Eng. Remote Sens. 1992, 58, 965–971.
  63. Pouliot, D.; Latifovic, R.; Pasher, J.; Duffe, J. Assessment of convolution neural networks for wetland mapping with Landsat in the central Canadian boreal forest region. Remote Sens. 2019, 11, 772.
  64. DeLancey, E.R.; Simms, J.F.; Mahdianpari, M.; Brisco, B.; Mahoney, C.; Kariyeva, J. Comparing deep learning and shallow learning for large-scale wetland classification in Alberta. Remote Sens. 2020, 12, 2.
  65. Dingle-Robertson, L.; King, D.J.; Davies, C. Object-based image analysis of optical and radar variables for wetland evaluation. Int. J. Remote Sens. 2015, 36, 5811–5841.
  66. Gosselin, G.; Touzi, R.; Cavayas, F. Polarimetric Radarsat-2 wetland classification using the Touzi decomposition: Case of the Lac Saint-Pierre Ramsar wetland. Can. J. Remote Sens. 2013, 36, 491–506.
  67. Parmuchi, M.G.; Karszenbaum, H.; Kandus, P. Mapping wetlands using multi-temporal RADARSAT-1 data and a decision-based classifier. Can. J. Remote Sens. 2002, 28, 175–186.
  68. Rebelo, L.M. Eco-hydrological characterization of inland wetlands in Africa using L-band SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 554–559.
  69. Amani, M.; Salehi, B.; Mahdavi, S.M.; Brisco, B.; Shehata, M. A multiple classifier system to improve mapping complex land covers: A case study of wetland classification using SAR data in Newfoundland, Canada. Int. J. Remote Sens. 2018, 39, 1–14.
  70. Kaplan, G.; Avdan, U. Evaluating the utilization of the red edge and radar bands from Sentinel sensors for wetland classification. Catena 2019, 178, 109–119.
  71. Zelazny, V.F.; Martin, G.L.; Toner, M.; Gorman, M.; Colpitts, M.; Veen, H.; Godin, B.; McInnis, B.; Steeves, C.; Wuest, L.; et al. Our Landscape Heritage: The Story of Ecological Land Classification in New Brunswick, 2nd ed.; Government of New Brunswick: Fredericton, NB, Canada, 2007; p. 359.
  72. Tiner, R.W. Wetland Indicators: A Guide to Wetland Identification, Delineation, Classification, and Mapping, 1st ed.; Lewis Publishers: Boca Raton, FL, USA, 1999; p. 593.
  73. USGS EROS. Landsat Collection 1 Level 1 Product Definition, LSDS-1656 Version 2.0; USGS: Sioux Falls, SD, USA, 2019; p. 26. Available online: https://www.usgs.gov/media/files/landsat-collection-1-level-1-product-definition (accessed on 1 September 2019).
  74. Index of /pub/elevation/dem_mne/highresolution_hauteresolution. Available online: https://ftp.maps.canada.ca/pub/elevation/dem_mne/highresolution_hauteresolution/ (accessed on 1 February 2020).
  75. SAGA: GIS Tool Library Documentation (V7.0.0). Available online: http://www.saga.gis.org/saga_tool_doc/7.0.0/index.html (accessed on 27 January 2020).
  76. ALOS PALSAR—Documents and Tools. Available online: https://asf.alaska.edu/data-sets/sar-data-sets/alos-palsar/alos-palsar-documents-tools/ (accessed on 16 March 2020).
  77. SNAP Toolbox. Available online: http://step.esa.int/main/download/ (accessed on 1 November 2019).
  78. Filipponi, F. Sentinel-1 GRD preprocessing workflow. In Proceedings of the 3rd International Electronic Conference on Remote Sensing, Roma, Italy, 22 May–5 June 2019.
  79. Richter, R. Atmospheric/Topographic Correction for Satellite Imagery—ATCOR2/3 User Guide; DLR—German Aerospace Center: Wessling, Germany, 2010; p. 165.
  80. Zhang, Y. Problems in the fusion of commercial high-resolution satellite as well as Landsat 7 images and initial solutions. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 587–592.
  81. Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 1994; p. 494.
  82. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  83. Breiman, L. Manual-Setting Up, Using and Understanding Random Forests V4.0. Available online: https://www.stat.berkeley.edu/~breiman/Using_random_forests_v4.0.pdf (accessed on 20 March 2020).
  84. R Development Core Team. R: A Language and Environment for Statistical Computing. Available online: https://www.r-project.org/ (accessed on 25 March 2020).
  85. Byatt, J.; LaRocque, A.; Leblon, B.; Harris, J.; McMartin, I. Mapping surficial materials in Nunavut using RADARSAT-2 C-HH and C-HV, Landsat 8 OLI, DEM, and slope data. Can. J. Remote Sens. 2019, 44, 491–512.
  86. Liaw, A.; Wiener, M. Package randomForest: Breiman and Cutler’s Random Forests for Classification and Regression; University of California at Berkeley: Berkeley, CA, USA, 2015; Volume 4, p. 29. Available online: https://www.stat.berkeley.edu/~breiman/RandomForests/ (accessed on 25 March 2020).
  87. Waske, B.; Braun, M. Classifier ensembles for land cover mapping using multitemporal SAR imagery. ISPRS J. Photogramm. Remote Sens. 2009, 64, 450–457.
  88. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
  89. Strobl, C.; Boulesteix, A.L.; Kneib, T.; Augustin, T.; Zeileis, A. Conditional variable importance for Random Forests. BMC Bioinform. 2008, 9, 307.
  90. Louppe, G.; Wehenkel, L.; Sutera, A.; Geurts, P. Understanding variable importances in forests of randomized trees. Adv. Neural Inf. Process. Syst. 2013, 26, 431–439.
  91. Congalton, R. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
  92. Ernst-Dottavio, C.L.; Hoffer, R.M.; Mroczynski, R.P. Spectral characteristics of wetland habitats. Photogramm. Eng. Remote Sens. 1981, 47, 223–227.
  93. Lulla, K. The Landsat satellites and selected aspects of physical geography. Prog. Phys. Geogr. Earth Environ. 1983, 7, 1–45.
  94. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
Figure 1. Location of the study area and the corresponding Digital Elevation Model (DEM) and river network.
Figure 2. Location of the ground truth sites in the study area.
Figure 3. Classified image produced by applying the Random Forests classifier to the whole data set.
Figure 4. Variables of importance for a combination of Landsat 8 OLI, ALOS-1 PALSAR, and Sentinel-1 imagery with LiDAR-derived metrics.
Figure 5. Comparison between the classified image, the 2016 NB reference map, and the 2019 NB reference map for a forested wetland site. The ground picture of the site is also given. (a) Ground picture; (b) Classified image; (c) 2016 NB reference map; (d) 2019 NB reference map.
Table 1. Comparison of the best overall accuracy (%) achieved in previous studies that used Random Forests for wetland mapping, as a function of the number of classes and the input datasets.
Overall Accuracy (%) | Number of Classes | Input Data | Region | Authors
70.2 | 4 | Radarsat-2 C-band polSAR, LiDAR-derived data | Ontario, Canada | [39]
70.6 | 5 | Landsat 8 OLI | Canada | [40]
74.3 | 5 | Radarsat-2 (C-HH, C-HV, C-VH, C-VV), ALOS-1 PALSAR (L-HH, L-HV), Terrasar-X (X-HH) | Newfoundland and Labrador, Canada | [41]
74.7 | 3 | Aerial photography, Radarsat-2 C-band polSAR, DEM-derived data | Minnesota, USA | [42]
75.3 | 5 | Radarsat-2 C-band polSAR | Newfoundland, Canada | [43]
77.3 | 13 | JERS-1 (L-HH), DEM-derived data | Alaska, USA | [44]
79.0 | 5 | Sentinel-1 SAR (C-HH, C-HV, C-VV, C-VH), Sentinel-2 | Canada | [45]
81.0 | 6 | RADARSAT-2 C-polSAR | Newfoundland, Canada | [46]
83.3 | 6 | Sentinel-1 (C-VV, C-VH), Sentinel-2, SRTM DEM | South Africa | [47]
86.0 | 5 | ASTER, Landsat 8 OLI, RapidEye, Sentinel-2 | Newfoundland and Labrador, Canada | [48]
86.3 | 5 | Landsat 5 TM, ALOS-1 PALSAR (L-HH, L-HV) | Michigan, USA | [49]
86.5 | 5 | Sentinel-1 (C-VV, C-VH), Sentinel-2, ALOS-1 PALSAR, DEM | Yukon, Canada | [50]
86.7 | 3 | Landsat 5 TM, ALOS-1 PALSAR (L-HH, L-HV), Radarsat-2 C-polSAR, DEM-derived data, soil map | Minnesota, USA | [51]
87.2 | 4 | WorldView-2, Radarsat-2 (C-HH, C-VV), LiDAR | Newfoundland, Canada | [52]
88.0 | 5 | Landsat 8 OLI | Newfoundland, Canada | [18]
88.4 | 5 | Sentinel-1 (C-HH, C-HV, C-VV, C-VH), Sentinel-2 | Newfoundland, Canada | [53]
88.8 | 3 | Radarsat-2 (C-HH, C-HV, C-VV, C-VH) | Newfoundland and Labrador, Canada | [54]
88.8 | 5 | Radarsat-2 C-polSAR and simulated compact polSAR | Newfoundland and Labrador, Canada | [55]
90.0 | 5 | RapidEye, Landsat 8 OLI, Canadian Digital Surface Model (CDSM) | Newfoundland and Labrador, Canada | [56]
91.5 | 6 | QuickBird, Radarsat-2 C-polSAR, LiDAR-derived data | Nova Scotia, Canada | [37]
91.7 | 5 | Landsat 5 TM, ALOS-1 PALSAR (L-HH, L-HV), ERS-1/2 (C-VV) | Alberta, Canada | [57]
92.0 | 5 | RapidEye, Landsat 8 OLI, Radarsat-2 (C-HH+VV, C-HV+VH), Canadian Digital Surface Model (CDSM) | Newfoundland and Labrador, Canada | [58]
94.3 | 5 | Landsat 5 TM, ALOS-1 PALSAR (L-HH, L-HV), Radarsat-2 (C-HH, C-HV), DEM, slope | New Brunswick, Canada | [38]
Table 2. Comparison of the best overall accuracy (%) achieved in previous studies that used various classifiers for wetland mapping, as a function of the classifier, number of classes, and the input datasets.
Classifier | Overall Accuracy (%) | Number of Classes | Input Data | Region | Authors
Maximum Likelihood | 60.1 | 7 | Radarsat-2 C-polSAR | Manitoba, Canada | [21]
Maximum Likelihood | 87.2 | 3 | Landsat 5 TM | Ontario, Canada | [26]
Maximum Likelihood | 89.2 | 2 | Landsat 7 ETM+, ERS-2 (C-VV) | Spain | [59]
Maximum Likelihood | 93.9 | 5 | Radarsat-2 C-simulated compact polSAR | Manitoba, Canada | [22]
Maximum Likelihood | 94.0 | 3 | Landsat 5 TM, ERS-1 (C-VV), JERS-1 (L-HH), Radarsat-1 (C-HH) | Great Lakes, North America | [60]
Rule-based classifier | 85.7 | 4 | Airborne (C-HH, C-HV, C-VV, C-VH) | Ontario, Canada | [61]
Rule-based classifier | 89.3 | 3 | Landsat 7 ETM+, Radarsat-1 (C-HH), DEM, slope | Ontario, Canada | [19]
Rule-based classifier | 94.0 | 2 | Aerial photographs, Landsat 5 TM, soil, landform, vegetation maps | Wisconsin, USA | [62]
Neural Networks | 69.0 | 6 | Landsat 5, SRTM DEM | Alberta, Canada | [63]
Neural Networks | 72.3 | 2 | AIRSAR (C-HH, C-HV, C-VV, C-VH), ATLAS (15 optical bands) | Maryland, USA | [35]
Neural Networks | 80.8 | 4 | Sentinel-1 C-VH, Sentinel-2, SRTM-DEM | Alberta, Canada | [64]
Hierarchical decision tree | 72.7 | 4 | WorldView-2, Radarsat-2 C-polSAR, DEM | Ontario, Canada | [65]
Hierarchical decision tree | 74.4 | 5 | Radarsat-2 C-polSAR | Québec, Canada | [66]
Hierarchical decision tree | 85.0 | 2 | Radarsat-1 (C-HH) | Argentina | [67]
Hierarchical decision tree | 89.0 | 3 | ASTER, Landsat 5 TM, ALOS-1 PALSAR (L-HH, L-VV), DEM | Central Africa | [68]
Others | 76.4 | 5 | Landsat 7 ETM+, Radarsat-1 (C-HH) | Québec, Canada | [24]
Others | 88.0 | 5 | RapidEye, Radarsat-2 (C-HH, C-HV, C-VV, C-VH), Sentinel-1 (C-VH, C-VV), ALOS-1/2 PALSAR (L-HH, L-HV) | Newfoundland and Labrador, Canada | [69]
Others | 94.0 | 2 | Sentinel-1 (C-VV, C-VH), Sentinel-2 | Turkey | [70]
Table 3. Characteristics of the ALOS-1 PALSAR L-HH and L-HV images used in the study as a function of the season and the water levels.
Season | Water Level | Water Height (m) * | Date
Spring | Moderate | 1.63 | 2010-05-23
Spring | Moderate | 1.55 | 2010-05-28
Spring | Moderate | 1.94 | 2010-06-09
Spring | Moderate | 1.79 | 2010-06-14
Spring | Moderate | 1.50 | 2010-06-21
Spring | Moderate | 1.46 | 2010-06-26
Summer | Low | 1.26 | 2010-08-06
Summer | Low | 1.33 | 2010-08-11
Summer | Low | 1.17 | 2010-08-23
Summer | Low | 1.19 | 2010-08-28
Summer | Low | 1.27 | 2010-09-09
Summer | Low | 1.33 | 2010-09-14
Fall | High | 2.44 | 2010-11-06
Fall | High | 3.33 | 2010-11-11
Fall | High | 2.47 | 2010-11-23
Fall | High | 2.30 | 2010-11-28
Fall | High | 3.05 | 2010-12-10
Fall | High | 4.24 | 2010-12-15
(*) Water level recorded daily on the Saint John River at the Gagetown station (Lat. 45° 46′ 07″ N; Long. 66° 08′ 25″ W). https://wateroffice.ec.gc.ca/report/real_time_e.html?stn=01AO012.
Table 4. Characteristics of the Sentinel-1 C-band imagery used in the study as a function of the season, water level, and polarization.
Season | Water Level | Water Height (m) * | Orbit | Date | Polarizations
Spring | High | 2.45 | Ascending | 2017-05-04 | C-VH, C-VV
Spring | Flood | 3.05 | Ascending | 2017-05-11 | C-VH, C-VV
Spring | High | 2.39 | Descending | 2017-05-03 | C-HH, C-HV
Summer | Low | n.r. | Ascending | 2017-08-15 | C-VH, C-VV
Summer | Low | n.r. | Ascending | 2017-09-01 | C-VH, C-VV
Summer | Low | n.r. | Descending | 2017-08-31 | C-HH, C-HV
Fall | Moderate | 1.53 | Ascending | 2017-10-26 | C-VH, C-VV
Fall | High | 2.54 | Ascending | 2017-11-12 | C-VH, C-VV
Fall | High | 2.42 | Descending | 2017-11-11 | C-HH, C-HV
(*) Water level recorded daily on the Saint John River, at Gagetown station (Lat. 45° 46′ 07″ N; Long. 66° 08′ 25″ W). n.r.: not recorded.
Table 5. List of Landsat 8 images used in the study as a function of the season and water level. Path and row values follow the Landsat product identifiers.
Season | Water Level | Water Height (m) * | Date | Path | Row | Landsat Product Identifier
Spring | High | 2.55 | 2018/05/27 | 9 | 28 | LC08_L1TP_009028_20180527_20180605_01_T1
Spring | High | 2.55 | 2018/05/27 | 9 | 29 | LC08_L1TP_009029_20180527_20180605_01_T1
Spring | Flood | 3.85 | 2018/05/18 | 10 | 28 | LC08_L1TP_010028_20180518_20180604_01_T1
Spring | Flood | 3.85 | 2018/05/18 | 10 | 29 | LC08_L1TP_010029_20180518_20180604_01_T1
Summer | High | 2.46 | 2013/09/18 | 9 | 28 | LC08_L1TP_009028_20130918_20170502_01_T1
Summer | High | 2.46 | 2013/09/18 | 9 | 29 | LC08_L1TP_009029_20130918_20170502_01_T1
Summer | Moderate | 1.90 | 2013/08/24 | 10 | 28 | LC08_L1TP_010028_20130824_20170309_01_T1
Summer | Moderate | 1.90 | 2013/08/24 | 10 | 29 | LC08_L1TP_010029_20130824_20170309_01_T1
Fall | Low | 1.20 | 2015/09/24 | 9 | 28 | LC08_L1TP_009028_20150924_20170403_01_T1
Fall | Low | 1.20 | 2015/09/24 | 9 | 29 | LC08_L1TP_009029_20150924_20170403_01_T1
Fall | Low | 1.19 | 2014/09/28 | 10 | 28 | LC08_L1TP_010028_20140928_20170303_01_T1
Fall | Low | 1.19 | 2014/09/28 | 10 | 29 | LC08_L1TP_010029_20140928_20170303_01_T1
(*) Water level recorded daily on the Saint John River, at Gagetown station (Lat. 45° 46′ 07″ N; Long. 66° 08′ 25″ W).
Table 6. Name and description of each landcover class used in this study with its corresponding class code.
Code | Name | Description
UV | Unvegetated | Areas with no vegetation, or vegetation covering less than 25% of the area. May include developed areas, bare rock, beach, and tidal flat.
SV | Sparse vegetation | Areas with a mixture of constructed materials and vegetation (more than 25% of the area) or other covers.
AC | Agricultural crop | Agricultural fields of annual crops.
AP | Pasture or hayfield | Agricultural fields used for harvest or grazing.
AG | Grassland | Land covered with grasses, such as abandoned fields.
SF | Softwood forest | Forest dominated by softwood tree species (>66%).
HF | Hardwood forest | Forest dominated by hardwood tree species (>66%).
MF | Mixedwood forest | Forest with a mixture of softwood and hardwood species.
SL | Shrubland | Land dominated by shrubs (e.g., willows, dogwoods, meadowsweet, bog rosemary, leatherleaf, Labrador tea, and saplings of trees such as red maple).
CC | Forest clearcut | Forestland where most of the trees were recently removed.
OB | Open bog | Wetlands typically covered by peat, which have a saturated water regime and a closed drainage system, and are frequently covered by ericaceous shrubs, sedges, and sphagnum moss and/or black spruce.
SB | Shrub bog | Wetlands typically covered by peat, which have a saturated water regime and a closed drainage system, and are frequently covered by ericaceous shrubs.
TB | Treed bog | Wetlands typically covered by peat, which have a saturated water regime and a closed drainage system, and are frequently covered by sphagnum moss and/or black spruce/larch.
OF | Open fen | Wetlands typically covered by peat, having a saturated water regime and an open drainage system, and typically covered by sedges.
TF | Shrub fen | Wetlands typically covered by peat, having a saturated water regime and an open drainage system, and typically covered by sedges and shrubs.
FM | Freshwater marsh | Wetlands dominated by rooted herbaceous plants, including most typical marshes as well as seasonally flooded wet meadows.
CM | Coastal marsh | Wetlands dominated by rooted herbaceous plants that drain directly into coastal waters and may be at least partially inundated with salt or brackish water.
SM | Shrub marsh | Marshes with shrubs covering between 25% and 50% of the area.
SW | Shrub wetland | Wetlands with more than 50% shrub cover.
FW | Forested wetland | Wetlands dominated by tree species.
AB | Aquatic bed | Wetlands dominated by permanent shallow standing water (<2 m depth during mid-summer) that may contain plants growing on or below the water surface.
WA | Deepwater | Deep water with no vegetation present (e.g., lakes).
Table 7. Number of training polygons and validation GPS sites for each class.
Class Code | Training Polygons | Validation Sites
UV | 46 | 46
SV | 35 | 26
AC | 27 | 24
AP | 42 | 132
AG | 28 | 41
SF | 56 | 33
HF | 54 | 38
MF | 93 | 49
SL | 47 | 24
CC | 44 | 26
OB | 57 | 26
SB | 71 | 37
TB | 77 | 47
OF | 60 | 33
TF | 22 | 29
FM | 25 | 23
CM | 24 | 27
SM | 27 | 31
SW | 58 | 102
FW | 34 | 93
AB | 43 | 41
WA | 58 | 23
Total | 1028 | 951
Table 8. J–M distances computed with the bands 2 to 6 of all the Landsat 8 OLI images.
Class | UV | SV | AC | AP | AG | SF | HF | MF | SL | CC | OB | SB | TB | OF | TF | FM | CM | SM | SW | FW | AB
SV | 1.783
AC | 1.976 | 1.950
AP | 1.996 | 1.940 | 1.932
AG | 1.998 | 1.973 | 1.985 | 1.852
SF | 2.000 | 1.996 | 2.000 | 2.000 | 2.000
HF | 2.000 | 1.990 | 1.999 | 1.997 | 1.993 | 1.999
MF | 2.000 | 1.984 | 1.999 | 1.998 | 1.997 | 1.815 | 1.870
SL | 1.999 | 1.966 | 1.992 | 1.967 | 1.837 | 1.997 | 1.860 | 1.905
CC | 1.998 | 1.984 | 1.998 | 1.994 | 1.982 | 2.000 | 1.999 | 2.000 | 1.973
OB | 2.000 | 1.985 | 1.996 | 1.997 | 1.997 | 2.000 | 2.000 | 1.997 | 1.962 | 1.995
SB | 2.000 | 1.974 | 1.997 | 1.997 | 1.993 | 1.998 | 1.999 | 1.991 | 1.914 | 1.999 | 1.819
TB | 2.000 | 1.992 | 2.000 | 2.000 | 2.000 | 1.839 | 1.999 | 1.922 | 1.992 | 2.000 | 1.986 | 1.825
OF | 1.991 | 1.915 | 1.974 | 1.980 | 1.932 | 1.991 | 1.996 | 1.983 | 1.883 | 1.971 | 1.724 | 1.929 | 1.985
TF | 2.000 | 1.983 | 1.999 | 1.996 | 1.981 | 1.989 | 1.988 | 1.920 | 1.902 | 1.993 | 1.912 | 1.716 | 1.889 | 1.910
FM | 1.999 | 1.996 | 1.999 | 1.994 | 1.958 | 2.000 | 2.000 | 2.000 | 1.984 | 1.997 | 1.999 | 1.989 | 2.000 | 1.968 | 1.971
CM | 1.978 | 1.929 | 1.994 | 1.998 | 1.994 | 2.000 | 1.999 | 1.998 | 1.992 | 1.990 | 1.995 | 1.996 | 1.998 | 1.943 | 1.993 | 1.995
SM | 1.999 | 1.985 | 2.000 | 1.998 | 1.995 | 1.982 | 1.986 | 1.962 | 1.987 | 1.998 | 1.998 | 1.969 | 1.984 | 1.979 | 1.888 | 1.949 | 1.992
SW | 1.998 | 1.958 | 1.999 | 1.995 | 1.975 | 1.985 | 1.958 | 1.920 | 1.920 | 1.971 | 1.986 | 1.944 | 1.990 | 1.904 | 1.790 | 1.944 | 1.985 | 1.706
FW | 1.996 | 1.939 | 1.992 | 1.985 | 1.976 | 1.979 | 1.988 | 1.952 | 1.991 | 1.998 | 1.997 | 1.985 | 1.992 | 1.965 | 1.975 | 1.993 | 1.986 | 1.940 | 1.834
AB | 1.995 | 1.959 | 1.999 | 1.999 | 1.998 | 1.991 | 2.000 | 1.997 | 1.994 | 1.999 | 1.993 | 1.971 | 1.993 | 1.918 | 1.981 | 1.992 | 1.980 | 1.963 | 1.968 | 1.866
WA | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 2.000 | 1.996
(*) mean = 1.971; minimum = 1.706; maximum = 2.000
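For readers reproducing Table 8, the J–M distance between two classes i and j is the standard transformation of the Bhattacharyya distance; a minimal statement of the formula, assuming multivariate-normal class signatures with means \mu_i and covariance matrices \Sigma_i estimated from the training pixels over the five OLI bands, is:

B_{ij} = \frac{1}{8}\,(\mu_i - \mu_j)^{\top} \left( \frac{\Sigma_i + \Sigma_j}{2} \right)^{-1} (\mu_i - \mu_j) + \frac{1}{2}\,\ln \frac{\left| (\Sigma_i + \Sigma_j)/2 \right|}{\sqrt{|\Sigma_i|\,|\Sigma_j|}}, \qquad JM_{ij} = 2\left(1 - e^{-B_{ij}}\right)

JM is bounded in [0, 2]; values approaching 2.000 indicate nearly complete separability, so even the table's minimum of 1.706 (shrub wetland vs. shrub marsh) reflects good spectral separation.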
Table 9. Confusion matrix (in pixels) and associated accuracies when the RF classifier is applied to all the images and the LiDAR-derived metrics (*).
Class | UV | SV | AC | AP | AG | SF | HF | MF | SL | CC | OB | SB | TB | OF | TF | FM | CM | SM | SW | FW | AB | WA | Total | UA (%) | EC (%)
UV1704826100010060210300000500182993.176.83
SV471171329115243121210294001071740139883.7616.24
AC112106910102570200800001000111895.624.38
AP03451566611291000321000010310167293.666.34
AG0201510440951910102022152310113292.237.77
SF0700020590110000021000001000219893.686.32
HF02000320508680001000000000215095.354.65
MF010008664347638008240000017400371893.496.51
SL021124172024174745411302008000186493.726.28
CC1300001211720010300008000174098.851.15
OB00000101420216345301870050000228594.665.34
SB00000403203627358210000019000289194.605.40
TB020002516060010328371020004020305292.967.04
OF021021003215636767820846301322330241986.1513.85
TF0000011281022933207070002800085083.1816.82
FM02001001100040433828025802095486.7913.21
CM06010815011047009150034095695.714.29
SM0000023138500434411091681190111881.9318.07
SW01101033472153371911110062089430229291.148.86
FW0162735315200210011027127260136393.326.68
AB050011509101111732005132815546168992.017.99
WA0000010000002300000013902329025199.980.02
Total1753136010871642106323072160406719011742225130453225227673783792493223981362163290238
PA (%) | 97.20 | 86.10 | 98.34 | 95.37 | 98.21 | 89.25 | 94.91 | 85.47 | 91.90 | 98.74 | 96.09 | 89.82 | 87.97 | 91.56 | 95.93 | 98.92 | 99.03 | 98.28 | 87.11 | 93.39 | 95.22 | 99.99
EO (%) | 2.80 | 13.90 | 1.66 | 4.63 | 1.79 | 10.75 | 5.09 | 14.53 | 8.10 | 1.26 | 3.91 | 10.18 | 12.03 | 8.44 | 4.07 | 1.08 | 0.97 | 1.72 | 12.89 | 6.61 | 4.78 | 0.01
(*) Bold figures denote well classified pixels.
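As a reading aid for Tables 9–12, the user's/producer's accuracies and the commission/omission errors follow directly from the confusion matrix, with rows holding the classified map labels and columns the reference labels. A minimal Python sketch (NumPy assumed; the small 3 × 3 matrix below is illustrative only, not the study's data):

    import numpy as np

    # Rows = map (classified) labels, columns = reference labels,
    # matching the layout of Tables 9-12. Values are illustrative only.
    cm = np.array([[46, 0, 1],
                   [0, 25, 0],
                   [2, 1, 40]], dtype=float)

    diag = np.diag(cm)
    oa = 100.0 * diag.sum() / cm.sum()   # overall accuracy
    ua = 100.0 * diag / cm.sum(axis=1)   # user's accuracy, per map class (row)
    pa = 100.0 * diag / cm.sum(axis=0)   # producer's accuracy, per reference class (column)
    ec = 100.0 - ua                      # errors of commission (EC)
    eo = 100.0 - pa                      # errors of omission (EO)

    print(oa.round(2))
    print(ua.round(2), ec.round(2))
    print(pa.round(2), eo.round(2))

This reproduces, for example, the relation visible throughout the tables that EC and EO are simply the complements of UA and PA.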
Table 10. Confusion matrix (in number of sites) and associated accuracies when comparing the RF-classified image to 951 in-situ GPS field sites (*).
Class | UV | SV | AC | AP | AG | SF | HF | MF | SL | CC | OB | SB | TB | OF | TF | FM | CM | SM | SW | FW | AB | WA | Total | UA (%) | EC (%)
UV | 46 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 47 | 97.87 | 2.13
SV | 0 | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 26 | 96.15 | 3.85
AC | 0 | 0 | 24 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 24 | 100.00 | 0.00
AP | 0 | 0 | 0 | 129 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 130 | 99.23 | 0.77
AG | 0 | 1 | 0 | 0 | 40 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 41 | 97.56 | 2.44
SF | 0 | 0 | 0 | 0 | 0 | 30 | 0 | 3 | 0 | 2 | 0 | 0 | 9 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 45 | 66.67 | 33.33
HF | 0 | 0 | 0 | 0 | 0 | 0 | 36 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 39 | 92.31 | 7.69
MF | 0 | 0 | 0 | 0 | 0 | 2 | 2 | 46 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 51 | 90.20 | 9.80
SL | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 22 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 25 | 88.00 | 12.00
CC | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 21 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 22 | 95.45 | 4.55
OB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 25 | 100.00 | 0.00
SB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 31 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 33 | 93.94 | 6.06
TB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | 37 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 39 | 94.87 | 5.13
OF | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 33 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 35 | 94.29 | 5.71
TF | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 28 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 30 | 93.33 | 6.67
FM | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 21 | 0 | 0 | 0 | 0 | 0 | 0 | 23 | 91.30 | 8.70
CM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 27 | 0 | 0 | 0 | 0 | 0 | 28 | 96.43 | 3.57
SM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 30 | 0 | 0 | 0 | 0 | 30 | 100.00 | 0.00
SW | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 98 | 2 | 0 | 0 | 103 | 95.15 | 4.85
FW | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 91 | 0 | 0 | 92 | 98.91 | 1.09
AB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 40 | 0 | 40 | 100.00 | 0.00
WA | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 23 | 23 | 100.00 | 0.00
Total | 46 | 26 | 24 | 132 | 41 | 33 | 38 | 49 | 24 | 26 | 26 | 37 | 47 | 33 | 29 | 23 | 27 | 31 | 102 | 93 | 41 | 23 | 951
PA (%) | 100.00 | 96.15 | 100.00 | 97.73 | 97.56 | 90.91 | 94.74 | 93.88 | 91.67 | 80.77 | 96.15 | 83.78 | 78.72 | 100.00 | 96.55 | 91.30 | 100.00 | 96.77 | 96.08 | 97.85 | 97.56 | 100.00
EO (%) | 0.00 | 3.85 | 0.00 | 2.27 | 2.44 | 9.09 | 5.26 | 6.12 | 8.33 | 19.23 | 3.85 | 16.22 | 21.28 | 0.00 | 3.45 | 8.70 | 0.00 | 3.23 | 3.92 | 2.15 | 2.44 | 0.00
(*) Diagonal entries denote correctly classified sites.
Table 11. Confusion matrix (in number of sites) and associated accuracies when comparing the 2016 NB reference map to 951 in-situ GPS field sites (*).
Class | OB | SB | TB | OF | TF | FM | CM | SM | SW | FW | AB | Upland+Water | Total | UA (%) | EC (%)
OB | 17 | 12 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 31 | 54.84 | 45.16
SB | 2 | 5 | 0 | 0 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | 50.00 | 50.00
TB | 1 | 5 | 6 | 0 | 5 | 0 | 0 | 0 | 3 | 0 | 0 | 1 | 21 | 28.57 | 71.43
OF | 6 | 2 | 0 | 1 | 4 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 15 | 6.67 | 93.33
TF | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00 | 100.00
FM | 0 | 1 | 0 | 8 | 1 | 11 | 0 | 5 | 4 | 5 | 10 | 0 | 45 | 24.44 | 75.56
CM | 0 | 0 | 0 | 0 | 1 | 0 | 17 | 0 | 1 | 0 | 0 | 0 | 19 | 89.47 | 10.53
SM | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00 | 100.00
SW | 0 | 2 | 0 | 10 | 4 | 2 | 0 | 21 | 39 | 8 | 4 | 3 | 93 | 41.94 | 58.06
FW | 0 | 0 | 1 | 1 | 1 | 2 | 0 | 0 | 2 | 40 | 0 | 2 | 49 | 81.63 | 18.37
AB | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | 0 | 10 | 100.00 | 0.00
Upland+Water | 0 | 11 | 39 | 13 | 8 | 6 | 10 | 5 | 53 | 40 | 17 | 456 | 658 | 69.30 | 30.70
Total | 26 | 38 | 47 | 33 | 28 | 23 | 27 | 31 | 102 | 93 | 41 | 462
PA (%) | 65.38 | 13.16 | 12.77 | 3.03 | 0.00 | 47.83 | 62.96 | 0.00 | 38.24 | 43.01 | 24.39 | 98.70
EO (%) | 34.62 | 86.84 | 87.23 | 96.97 | 100.00 | 52.17 | 37.04 | 100.00 | 61.76 | 56.99 | 75.61 | 1.30
(*) Diagonal entries denote correctly classified sites.
Table 12. Confusion matrix (in number of sites) and associated accuracies when comparing the 2019 NB reference map to the 951 in-situ GPS field sites (*).
Class | Upland+water | Wetland | Total | UA (%) | EC (%)
Upland+water | 457 | 185 | 642 | 71.18 | 28.82
Wetland | 5 | 304 | 309 | 98.38 | 1.62
Total | 462 | 489 | 951
PA (%) | 98.92 | 62.17
EO (%) | 1.08 | 37.83
(*) Diagonal entries denote correctly classified sites.
Table 13. Comparison between the 2016 NB reference map and the classified image for the number and percentage of the 489 in-situ validation GPS wetland sites that were correctly identified in a wetland class or in the correct wetland class.
Category | 2016 NB Reference Map: N | % | Classified Image: N | %
Correctly identified in a wetland class | 293 | 59.92 | 476 | 97.34
Identified in the correct wetland class | 146 | 29.86 | 461 | 94.27
Table 14. Comparison between the 2016 NB reference map and classified image for the number and percentage of the 489 in-situ GPS wetland sites that were correctly identified as a function of the wetland class.
Wetland Class | Total GPS Sites | 2016 NB Reference Map: N | % | Classified Image: N | %
OB | 26 | 17 | 65.38 | 25 | 96.15
SB | 37 | 5 | 13.51 | 31 | 83.78
TB | 47 | 6 | 12.77 | 37 | 78.72
OF | 33 | 1 | 3.03 | 33 | 100.00
TF | 29 | 0 | 0.00 | 28 | 96.55
FM | 23 | 11 | 47.83 | 21 | 91.30
CM | 27 | 17 | 62.96 | 27 | 100.00
SM | 31 | 0 | 0.00 | 30 | 96.77
SW | 102 | 39 | 38.24 | 98 | 96.08
FW | 93 | 40 | 43.01 | 91 | 97.85
AB | 41 | 10 | 24.39 | 40 | 97.56
Total | 489 | 146 | 29.86 | 461 | 94.27
Table 15. Comparison between the 2016 NB reference map and the classified image for the distribution of the incorrectly identified in-situ GPS wetland validation sites as a function of the source of errors.
Source of Errors | 2016 NB Reference Map: N | % | Classified Image: N | %
Not in the right wetland class | 141 | 41.11 | 13 | 46.43
Not in a wetland class | 202 | 58.89 | 15 | 53.57
Total | 343 | 100.00 | 28 | 100.00
Table 16. Comparison between the 2016 NB reference map and the classified image for the number and percentage of the in-situ GPS wetland sites that were not identified in the right wetland class.
Wetland Class | Total GPS Sites | 2016 NB Reference Map: N | % | Classified Image: N | %
OB | 26 | 9 | 34.62 | 1 | 3.85
SB | 37 | 22 | 59.46 | 6 | 16.22
TB | 47 | 2 | 4.26 | 1 | 2.13
OF | 33 | 19 | 57.58 | 0 | 0.00
TF | 29 | 20 | 68.97 | 1 | 3.45
FM | 23 | 6 | 26.09 | 1 | 4.35
CM | 27 | 0 | 0.00 | 0 | 0.00
SM | 31 | 26 | 83.87 | 1 | 3.23
SW | 102 | 10 | 9.80 | 0 | 0.00
FW | 93 | 13 | 13.98 | 2 | 2.15
AB | 41 | 14 | 34.15 | 0 | 0.00
Total | 489 | 141 | 28.83 | 13 | 2.66
Table 17. Comparison between the 2016 NB reference map and the classified image for the number and percentage of the in-situ GPS wetland sites that were not identified as a wetland.
Wetland Class | Total GPS Sites | 2016 NB Reference Map: N | % | Classified Image: N | %
OB | 26 | 0 | 0.00 | 0 | 0.00
SB | 37 | 11 | 29.73 | 0 | 0.00
TB | 47 | 39 | 82.98 | 9 | 19.15
OF | 33 | 13 | 39.39 | 0 | 0.00
TF | 29 | 8 | 27.59 | 0 | 0.00
FM | 23 | 6 | 26.09 | 1 | 4.35
CM | 27 | 10 | 37.04 | 0 | 0.00
SM | 31 | 5 | 16.13 | 0 | 0.00
SW | 102 | 53 | 51.96 | 4 | 3.92
FW | 93 | 40 | 43.01 | 0 | 0.00
AB | 41 | 17 | 41.46 | 1 | 2.44
Total | 489 | 202 | 41.31 | 15 | 3.07
