Article

RadWet: An Improved and Transferable Mapping of Open Water and Inundated Vegetation Using Sentinel-1

Department of Geography and Earth Sciences, Aberystwyth University, Aberystwyth SY23 3DB, UK
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(6), 1705; https://doi.org/10.3390/rs15061705
Submission received: 8 February 2023 / Revised: 17 March 2023 / Accepted: 17 March 2023 / Published: 22 March 2023
(This article belongs to the Special Issue Advances of Remote Sensing and GIS Technology in Surface Water Bodies)

Abstract
Mapping the spatial and temporal dynamics of tropical herbaceous wetlands is vital for a wide range of applications. Inundated vegetation can account for over three-quarters of the total inundated area, yet widely used EO mapping approaches are limited to the detection of open water bodies. This paper presents a new wetland mapping approach, RadWet, that automatically defines open water and inundated vegetation training data using a novel mixture of radar, terrain, and optical imagery. Training data samples are then used to classify serial Sentinel-1 radar imagery using an ensemble machine learning classification routine, providing information on the spatial and temporal dynamics of inundation every 12 days at a resolution of 30 m. The approach was evaluated over the period 2017–2022, covering a range of conditions (dry season to wet season) for two sites: (1) the Barotseland Floodplain, Zambia (31,172 km2) and (2) the Upper Rupununi Wetlands in Guyana (11,745 km2). Good agreement was found at both sites using random stratified accuracy assessment data (n = 28,223) with a median overall accuracy of 89% in Barotseland and 80% in the Upper Rupununi, outperforming existing approaches. The results revealed fine-scale hydrological processes driving inundation patterns as well as temporal patterns in seasonal flood pulse timing and magnitude. Inundated vegetation dominated wet season wetland extent, accounting for a mean 80% of total inundation. RadWet offers a new way in which tropical wetlands can be routinely monitored and characterised. This can provide significant benefits for a range of application areas, including flood hazard management, wetland inventories, monitoring natural greenhouse gas emissions and disease vector control.

1. Introduction

Tropical herbaceous wetlands play a vital role in a number of the world’s biggest ecological challenges: they represent essential ecosystems, supporting high biodiversity of flora and fauna [1]; act as significant sources and sinks of greenhouse gases [1,2]; govern the health and wellbeing of large populations that rely on flood recession farming practices [3]; and tropical wetlands can also pose a hazard to local populations, acting as breeding sites for vector mosquitoes for diseases including malaria and dengue fever [4,5,6,7,8,9,10,11]. Despite their importance, relatively few attempts have been made to produce a routine, tropical herbaceous wetlands mapping system capable of assessing the inter- and intra-annual extent of inundation and the timing and duration of seasonal inundation, all factors which could exhibit a control on malaria vector transmission [10,11].
Satellite Earth Observation (EO) has the potential to provide timely and accurate herbaceous wetland maps using freely available datasets over large regional, national, and even continental scales. Previous attempts at herbaceous wetland mapping, both static and dynamic maps, have used optical [9,10,12,13,14] and radar [5,12,15,16,17,18,19,20,21,22] sensing techniques to varying success.
Herbaceous wetland mapping techniques using optical EO imagery in tropical areas are all inherently limited by spatially extensive and persistent cloud cover. This generally limits the usefulness of optical data in generating operational wetland monitoring tools. Although wetland maps can be produced successfully, due to cloud cover, they tend to be single epoch maps [23] or rely on generating monthly or seasonal composites to generate cloud-free imagery [24].
Radar EO has the advantage of being unaffected by cloud cover and is capable of imaging both day and night. Consequently, it has been used extensively for operational floodwater hazard mapping [12,15,16,17,18,19,21,22]. However, its use for monitoring herbaceous wetlands has received less attention [12,15,20,25,26,27] despite its potentially important role in governing globally significant greenhouse gas emissions [28]. Mapping surface water in wetland environments presents a challenge in that a significant proportion of the inundated area is likely to be vegetated [5] but will not produce the distinctive low backscatter signal that makes open water readily mappable.
Inundated vegetation has a more complex interaction with an incident radar pulse compared to open water: a double-bounce interaction occurs between the underlying water and the vertical structure of the vegetation [5,20,29,30]. This interaction is typically observed in the co-polarised channel and not in the cross-polarised channel, which is much more sensitive to volume scattering and, as such, does not effectively penetrate vegetation due to the de-polarising nature of the vegetation canopy [20,29].
Radar-based approaches for mapping open water tend to focus on high-magnitude, low-frequency flood events rather than seasonal inundation. As such, studies have found success with image differencing approaches, exploiting large differences in backscatter between dry-season imagery and imagery acquired during a flood event, producing high degrees of accuracy (97%) [31]. However, such approaches are not able to identify areas of permanent inundation, the extent of open/vegetated water in the dry season, or the transition between wet and dry periods. Mapping inundation extent throughout the hydrological year is needed if we are to understand the dynamics of wetland systems in response to short- and long-term shocks such as periods of drought. Additionally, inundated areas that persist through the dry season represent potentially important habitats for disease vectors and therefore act as primary targets for vector control initiatives such as larval source management [32,33].
Many floodwater monitoring approaches employ image thresholding to identify areas of low backscatter within a radar image. Efforts have been made using manually [22] or automatically [15,16] defined thresholds applied to the image histogram. However, defining a global threshold in this manner and applying it to the wider scene tends to result in a high degree of false positives where high backscatter targets in an image tend to skew an image histogram, often raising the threshold that can separate areas of low backscatter. To overcome this, split-based thresholding is used that divides the image into a series of sub-tiles to which automatic histogram thresholding is applied to generate a number of local thresholds. These local thresholds are then combined to create an accurate and efficient mapping approach for open water [15,16].
Complex machine learning with automatic generation of training data has been successfully implemented in the Barotseland Floodplain by Hardy et al. [5], where a logical rule base and ancillary datasets were used to successfully map areas of open water and inundated vegetation (>92%). This approach, however, relies on expert knowledge of wet and dry season timing and uses a rule base specific to the study site; it therefore cannot easily be applied in other locations.
Existing flood monitoring techniques, such as image thresholding and time series analysis, show potential to be modified for the detection of seasonally inundated vegetation and areas of open water, and could be applied on a scene-by-scene basis to generate automatically defined training data for a machine learning classifier for use in wetland monitoring. Although image classification techniques exist, there is no evidence that they can be applied to new areas and produce good results without in-depth prior knowledge of the study site, which may not always be available.
This study presents a new globally transferable wetland mapping tool called RadWet that uses split-based image thresholding, dense time series analysis and logical rules to automatically generate training data, which is used in a machine learning classification consensus workflow to accurately map areas of open water and inundated vegetation in herbaceous wetlands. The RadWet workflow has been specifically developed to be globally transferable across study sites, require no prior knowledge of wet/dry season timings, require no user input or optimisation, and produce accurate results in a timely manner on consumer-grade hardware, allowing the workflow to be widely accessible.

2. Materials and Methods

The wetland mapping approach presented here uses an open-source, Python-based supervised machine learning image classification routine, where training data are generated automatically and applied to a machine learning ensemble classification routine. The methodology was built using the freely available Python packages (a) RSGISLib [34], (b) GDAL [35], (c) NumPy [36], and (d) SciKit Learn [37]. Performance was optimised using Numba [38], and the built-in Python multiprocessing package was used to allow parallel processing across multiple CPU cores, with the workflow designed to run locally on consumer-grade hardware.

2.1. Study Sites

2.1.1. The Barotseland Floodplain, Western Zambia

The Barotseland Floodplain is an extensive wetland region within western Zambia, through which the Zambezi River flows in an anabranching form contained within rocky escarpment to the east and west, shown in Figure 1. The floodplain extends from the confluence of the Lungwebungu and the Kabompo rivers, stretching 240 km to the south, with an average width of approximately 30 km and 50 km at its widest point [3,39]. The provincial capital can be found to the east of the floodplain atop an edge escarpment approximately 50 m above the elevation of the main Zambezi Channel [3,39]. Flooding lags behind peak rainfall, where rainfall is greatest in January and reduces to very little or no rainfall in May [3]. Flood water has been observed to build to a peak around March to April, receding to a minimum flood level in September and October [3]. The exact timings of the flooding maxima and minima are highly variable between years [39]; consequently, exact dates cannot be relied on. The floodplain can generally be considered to be covered with clay or loamy sediment [3], as well as Kalahari sands [39]. Vegetation within the floodplain itself consists mainly of common reeds as well as some species of moisture-tolerant shrub, with some higher sandbanks containing species of fern [40]. Within the floodplain, trees are uncommon [40]. On top of the eastern escarpment are regions of Dambos, which can be commonly found around areas of local drainage, usually covered in grasses and reeds that are either permanently or seasonally inundated [5,40].
The floodplain is primarily inhabited by the semi-nomadic Lozi people, with an average population density of approximately 17 per km2. They rely on flood-driven agriculture, fishing, cattle grazing and commercial use of vegetation as their main source of livelihood [41]. During the wet season, they migrate from the floodplain itself to higher ground surrounding the floodplain [42]. The largest urban area in the region is the City of Mongu on the eastern edge of the floodplain atop the eastern escarpment, with a population of approximately 179,585 (2010 Census), and seasonal flooding is not considered to be a hazard for the city.

2.1.2. Rupununi Wetlands, Guyana

The North Rupununi Wetlands are situated within the south-west of Guyana close to the border with Brazil, with the landscape comprising low relief and a mixture of seasonally and permanently inundated wetlands and savannah grassland [43,44,45,46], as seen in Figure 2. Small free-flowing water channels cross the region, as well as larger river systems such as the Rupununi (a tributary of the Essequibo), Takutu and Ireng (Maú) Rivers, which converge to the west of the main wetlands and are themselves tributaries of the Amazon. The Rupununi wetland region experiences a tropical monsoon and tropical savannah climate [47], with a rainy season between April/May and September [44]. During the wet season, the region experiences significant inundation [43,44,45,46], and the Ireng (Maú) and Rupununi rivers become connected through a series of small creeks, notably the Pirara Creek [48].

2.2. Dataset Preparation

RadWet image classification was applied to dual polarised (VV, VH), full resolution Sentinel-1, Interferometric Wide Swath (IW) ground range detected (GRD) data, acquired in the ascending orbit direction, downloaded from NASA’s Alaska Satellite Facility (ASF: https://search.asf.alaska.edu/ accessed on 16 March 2023). Sentinel-1 data were pre-processed by (a) application of the ESA orbit file, (b) calibration to Gamma Nought (γ0), (c) multi-looking, (d) thermal noise correction, (e) Range-Doppler terrain correction, (f) Lee filtering (5 × 5), and (g) calculation of the Normalised Difference Polarisation Index (NDPI) from the co- and cross-polarisation channels.

2.2.1. Overview

The novel RadWet approach presented in this paper automatically extracts training data for open water and inundated vegetation targets, as well as background dry features, to be used in a supervised machine learning consensus image classification, shown in Figure 3.
Training data for open water (OW) and inundated vegetation (IV) are extracted from separate masks, generated based on the general assumption that OW pixels will have a relatively low backscatter response and IV a relatively high one. The generation of these low backscatter and high backscatter masks is described in Section 2.2.2. Training data targets are generated using a series of rules described in Section 2.2.4, applied to the relevant high or low backscatter masks utilising time series metrics. These training data are supplied to the consensus classification approach and run over 25 replicates to produce a final classified product.

2.2.2. Low/High Backscatter Image Masks

The generation of the per-scene low/high backscatter mask is done using split-based thresholding (SBT), pioneered by Martinis et al. [15,16]. The SBT approach applied in this study is summarised graphically in Figure 4, and the processing steps are outlined as pseudo-code in Figure 5. Here, the input Sentinel-1 scene is split into n sub-tiles of 20 × 20 pixels (approximately 813,908 sub-tiles for a typical scene covering the Barotseland Floodplain). A sub-tile can only be considered suitable for analysis where both low/high backscatter classes are present (i.e., a boundary is crossed). In this respect, sub-tiles with homogenous backscatter are rejected.
To do this, the following metrics are calculated per sub-tile: the coefficient of variance (cv) and the ratio (r) between the mean sub-tile pixel backscatter and the global mean pixel backscatter of the Sentinel-1 scene. These metrics are then summarised across all sub-tiles as means, denoted c̄v and r̄. For each sub-tile, cv and r are plotted alongside c̄v and r̄.
As illustrated in Figure 4c, where a sub-tile’s cv and r fall within (measured by Euclidean distance) three standard deviations of c̄v and r̄, the sub-tile is labelled as homogenous and rejected. Conversely, where a sub-tile’s cv and r fall outside three standard deviations of c̄v and r̄ (e.g., Figure 4e), it is considered suitably heterogeneous and carried forward for analysis. This is illustrated graphically in Figure 4b, with sub-tiles (represented by pixels) coloured red where their variance is less than three standard deviations of c̄v and r̄ (i.e., low variance, homogenous) or coloured blue where their variance is greater than three standard deviations of c̄v and r̄ (i.e., high variance, heterogeneous).
Heterogeneous sub-tiles (graphically illustrated in Figure 4d) then undergo Otsu thresholding to determine a threshold between high and low backscatter pixels (Figure 4e), resulting in a list of threshold values for all heterogenous sub-tiles. This list is refined by selecting only those values that are less than the global mean of the Sentinel-1 scene. A global low backscatter threshold is generated for the scene based on the median of this refined list. The list of thresholds for all heterogenous sub-tiles can be further refined to select values greater than the global mean of the Sentinel-1 scene; calculating the median of this refined list provides a very high backscatter threshold. Areas of low backscatter containing areas of open water would be defined where the VV backscatter is less than the low backscatter threshold returned by the split-based thresholding.
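The SBT routine described above can be sketched in Python as follows. This is an illustrative simplification, not the RadWet source: the function names, the 64-bin Otsu histogram and the tiling loop are our assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Minimal histogram-based Otsu threshold for a 1D sample."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                       # weight at or below each bin
    w1 = w0[-1] - w0                           # weight above each bin
    s0 = np.cumsum(hist * centers)
    mu0 = s0 / np.maximum(w0, 1e-12)           # mean of the lower class
    mu1 = (s0[-1] - s0) / np.maximum(w1, 1e-12)
    between = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
    return centers[int(np.argmax(between))]

def split_based_thresholds(scene, tile=20, k=3.0):
    """Derive (low, high) backscatter thresholds from heterogeneous sub-tiles.

    Either threshold may be None if no sub-tile Otsu threshold falls on
    that side of the scene mean.
    """
    rows, cols = scene.shape
    g_mean = scene.mean()
    cv, r, subs = [], [], []
    for i in range(0, rows - tile + 1, tile):
        for j in range(0, cols - tile + 1, tile):
            sub = scene[i:i + tile, j:j + tile]
            cv.append(sub.std() / abs(sub.mean()))   # coefficient of variance
            r.append(sub.mean() / g_mean)            # ratio to scene mean
            subs.append(sub)
    cv, r = np.asarray(cv), np.asarray(r)
    # Standardise (cv, r); sub-tiles further than k standard deviations
    # (Euclidean distance) from the mean point are heterogeneous.
    dist = np.hypot((cv - cv.mean()) / cv.std(), (r - r.mean()) / r.std())
    thresholds = np.array([otsu_threshold(subs[i].ravel())
                           for i in np.flatnonzero(dist > k)])
    lows = thresholds[thresholds < g_mean]
    highs = thresholds[thresholds > g_mean]
    low = float(np.median(lows)) if lows.size else None
    high = float(np.median(highs)) if highs.size else None
    return low, high
```

On a synthetic scene with a water/land boundary, only the boundary-crossing sub-tiles survive the heterogeneity test and contribute Otsu thresholds.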
It was found through initial visual assessment that the very high backscatter threshold was able to discriminate areas of very strong double-bounce backscatter, indicating inundated vegetation. However, the threshold was almost always too high to detect areas of inundated vegetation where the double-bounce backscatter interaction was present but less distinct. Consequently, it was decided to define the high backscatter image mask where the VV backscatter was greater than the low backscatter threshold. This would then include areas of inundated vegetation where the signature double-bounce backscatter interaction is present but weak.

2.2.3. Time Series Metrics

A set of rules is applied to a range of metrics (shown in Table 1) to automatically generate training data from the low and high backscatter masks. These metrics were computed for each study site using the Google Earth Engine cloud computing platform [25], using the whole Sentinel-1 time series (2014–2021) in the ascending direction, and exported as single-band files to be used locally in the classification process.
Due to the complex nature of the interaction of inundated vegetation and an incident radar pulse, it is difficult to delineate training data for inundated vegetation using single image acquisitions [19,51]. Seasonally or ephemerally inundated-vegetation pixels typically experience a relatively high degree of variation in backscatter throughout the hydrological year. This characteristic can be summarised by analysing the time series in backscatter response.
Specifically, inundated vegetation is known to exhibit a double-bounce backscatter mechanism under certain conditions. This mechanism manifests itself in a relatively large difference in backscatter between the VV and VH [5,19,20,29], which can be characterised using the Normalised Difference Polarisation Index (NDPI) [52].
NDPI = (VV − VH) / (VV + VH)
Conceptually, NDPI values for pixels that inundate every year will be significantly more variable than those that tend to remain dry. Therefore, by selecting pixels that are temporally dynamic in NDPI, we can isolate areas of inundated vegetation from dry land cover classes. Additionally, this approach can also separate inundated vegetation from urban areas that also exhibit a double-bounce backscatter response but will tend not to vary significantly over the hydrological year.
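As a minimal illustration, NDPI can be computed per pixel from the two polarisation bands. This sketch assumes linear-power (non-negative) backscatter values; the `eps` guard against division by zero is our addition, not part of the published index.

```python
import numpy as np

def ndpi(vv, vh, eps=1e-12):
    """Normalised Difference Polarisation Index: (VV - VH) / (VV + VH)."""
    vv = np.asarray(vv, dtype=float)
    vh = np.asarray(vh, dtype=float)
    return (vv - vh) / np.maximum(vv + vh, eps)
```

Applied to a (time, rows, cols) stack of VV and VH bands, `ndpi(...).var(axis=0)` gives the per-pixel NDPI time series variance used later to select inundated-vegetation training pixels.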
Additionally, the variation in a pixel’s backscatter response over time can be summarised using the Z-score metric.
Originally developed by Tsyganskaya et al. [19], the Z-Score image is a metric by which backscatter values in a specific Sentinel-1 observation of interest can be normalised and compared across the time series [19] and is calculated per scene using the following equation applied to each polarisation, and the NDPI:
Z-Score = (γ0_S1 − μ) / σ
where:
  • μ is the per-pixel time series mean;
  • σ is the per-pixel time series standard deviation;
  • γ0_S1 is the Gamma Nought corrected backscatter pixel value for that particular Sentinel-1 observation.
The concept of Z-score images in this context was originally developed to highlight low-frequency, high-magnitude flood events, where pixels show a significant deviation from their normal time series values; consequently, Z-score images are not ideally suited to explicitly detecting highly dynamic, seasonally flooded vegetation. Despite this, they can indicate whether a pixel has increased or decreased in backscatter response relative to the time series, aiding the detection of inundated vegetation rather than explicitly defining it.
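The per-scene Z-score image follows directly from the equation above. The sketch below is illustrative and assumes the whole time series is held in memory as a 3D array; the `eps` guard for constant-valued pixels is our assumption.

```python
import numpy as np

def zscore_image(stack, scene_index, eps=1e-12):
    """Z-score of one observation against the per-pixel time series.

    stack: gamma-0 backscatter (or NDPI) time series, shape (t, rows, cols).
    scene_index: index of the Sentinel-1 observation of interest.
    """
    mu = stack.mean(axis=0)                    # per-pixel time series mean
    sigma = stack.std(axis=0)                  # per-pixel standard deviation
    return (stack[scene_index] - mu) / np.maximum(sigma, eps)
```

A pixel whose backscatter in the scene of interest sits well above (or below) its own time series mean receives a large positive (or negative) Z-score.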
Open water and flat dry sand have very similar backscatter responses due to their smooth texture (and low dielectric properties in the case of dry sand), leading to almost identical low backscatter values. As such, ancillary data time series metrics (Table 1) are needed to separate these two land cover classes.
Here, the Tropical Wetland Mapping Tool (TropWet V8-Modified) [24] is used as a means of separating these similar classes. TropWet is a free-to-use Google Earth Engine-based platform for mapping wetlands using spectral unmixing of Landsat imagery into per-pixel fractions of water, vegetation and bare sand/soil. In this instance, a modified version of TropWet was used to generate % occurrence of (i) water and (ii) sand over the period 2000–2020 to give a long time series indication of the locations of permanent water and flat bare earth, which are subsequently used alongside radar backscatter imagery to define training data for areas of open water and sand/bare earth.
NASA’s GRACE and GRACE-FO [53,54] mission data were used as a means of estimating a study area’s hydrological condition based on the mass anomaly, which is used as a proxy for the degree of soil wetness or surface water extent within an area [53,54], expressed as an index between 0 (dry) and 1 (wet) per month. The GRACE index, in this instance, is used in conjunction with TropWet-derived open water occurrence to indicate dry season conditions, under which a low backscatter pixel is more likely to be a dry feature rather than open water (Figure 6). Accounting for hydrological conditions in this manner to refine open water/dry flat sand delineation has been implemented elsewhere [5], but by using GRACE, the user does not need the a priori knowledge of the hydrological conditions of an area that is required in the case of Hardy et al. [5].
Figure 6 demonstrates the concept of the GRACE index and is indicative of reported periods of high and low inundation for the Barotseland Floodplain and North Rupununi Wetlands as defined in the literature by [5] and [44,46], respectively. During periods of increased inundation, the GRACE index value also increases, reducing back down after peak inundation, supporting the assumption that the interannual GRACE mass anomaly is responding to changes in soil moisture and water inundation. The GRACE index value, as presented in Figure 6, is used as part of a dynamic rule base described in Section 2.2.4 for the detection of open surface water rather than to categorise Sentinel-1 observations into wet or dry season scenes.

2.2.4. Training Data Generation

A globally transferable rule base (i.e., the rules can be applied to a Sentinel-1 scene for any time of the year) is applied to the Sentinel-1 observations as well as the time series metrics (shown in Table 2) to generate training data masks for the following classes: (a) Open Water (OW), (b) Flat Bare Earth (FBE), (c) Inundated Vegetation (IV), (d) Background Land Cover (B), and (e) Dense Vegetation (DV), on a local consumer-grade computer.
The low backscatter mask is used for generating OW and FBE training data. OW training pixels are selected where the TropWet Water Occurrence layer (2000–2020) is greater than 90%. Visual sensitivity analysis showed that using a higher % occurrence threshold (i.e., 95% or 100%) was too restrictive, with only a relatively small number of permanently inundated features being available, particularly for the Upper Rupununi, where the trunk river is relatively narrow (~30 m wide), leading to a relatively small and unrepresentative training data sample. Conversely, using a lower threshold (i.e., 70% or 80%) led to the inclusion of areas which are prone to drying out and therefore to an overestimation of OW in the dry season.
Separating OW from FBE is a key challenge in mapping inundation, particularly in areas with flat dry sand, typically found in tropical wetlands during the dry season [5,17]. Here, FBE training samples are defined (applied to the low backscatter mask) using a combination of TropWet % Sand Occurrence (2000–2020), TropWet % Water Occurrence (2000–2020), and the GRACE index, giving both (i) an indication of a pixel’s propensity for being bare sand and (ii) a measure of the hydrological conditions of the Sentinel-1 scene. To enable this combined ruleset, the GRACE index is inverted and scaled from 0–100, i.e., 100 × (1 − GRACE index), to match the scale of the TropWet Occurrence metrics. Specifically, FBE training samples are selected where TropWet Sand Occurrence > 50% and where TropWet Water Occurrence < the scaled, inverted GRACE index. In doing so, pixels that have a low backscatter in the dry season and that historically tend to be classed as sand by TropWet will be labelled as FBE. Conversely, low backscatter pixels in the wet season are likely to be inundated rather than dry sand and are therefore labelled as OW.
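The OW/FBE rules described above can be expressed as boolean array operations. This is a hedged sketch: the array names and the exact way the masks are combined are our assumptions, while the thresholds (90%, 50%, the inverted/scaled GRACE index) follow the text.

```python
import numpy as np

def label_low_backscatter(low_mask, water_occ, sand_occ, grace_index):
    """Split a low-backscatter mask into OW and FBE training pixels.

    water_occ, sand_occ: TropWet % occurrence layers (0-100).
    grace_index: scalar hydrological index for the scene, 0 (dry) to 1 (wet).
    """
    inv_grace = 100.0 * (1.0 - grace_index)       # inverted, scaled to 0-100
    ow = low_mask & (water_occ > 90)              # persistent water -> OW
    fbe = low_mask & (sand_occ > 50) & (water_occ < inv_grace)
    return ow, fbe
```

Note how the rule becomes self-limiting: in wet conditions (GRACE index near 1) the inverted index approaches 0, so almost no low-backscatter pixel can be labelled FBE.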
IV training samples were defined using the high backscatter image. As described earlier (Section 2.2.3), the underlying assumption is that the backscatter of these areas is characteristically variable over time, as defined by the NDPI time series variance and Z-score. Specifically, IV training samples are defined where the NDPI time series variance > the 95th percentile, where the Z-score is < −2, where the VH backscatter < the high backscatter SBT-defined threshold (described in Section 2.2.2) and, finally, where the slope angle < 5°.
The 95th percentile was used to isolate the most variable pixels with high backscatter in the VV band but lower backscatter in the VH channel, typical of IV targets [5,19,20,29,51,55]. Visual sensitivity analysis using lower values resulted in an overestimation of IV training data as vegetated areas will also tend to be relatively variable over time, with backscatter increasing as vegetation canopies become denser, increasing the degree of volumetric scattering. Additionally, the NDPI metric is more sensitive to double-bounce backscatter; therefore, it is able to better discriminate between the volumetric scattering of vegetation canopies over dry soils and the double-bounce backscatter associated with IV [52].
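The IV rules listed above reduce to a conjunction of per-pixel conditions. The sketch below is illustrative; the input array names are assumptions, while the thresholds (95th percentile, Z-score < −2, the SBT-derived VH threshold, slope < 5°) follow the text.

```python
import numpy as np

def iv_training_mask(high_mask, ndpi_var, zscore, vh, slope_deg, vh_high_thresh):
    """Inundated-vegetation training pixels from the high-backscatter mask.

    Keeps only pixels that are highly variable in NDPI over time, deviate
    strongly from their time series, fall below the SBT high-backscatter
    VH threshold, and sit on near-flat terrain.
    """
    var_95 = np.percentile(ndpi_var, 95)           # scene-wide 95th percentile
    return (high_mask
            & (ndpi_var > var_95)
            & (zscore < -2)
            & (vh < vh_high_thresh)
            & (slope_deg < 5))
```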
During initial testing, confusion was found between IV and DV features with a particularly high canopy density, resulting in high-volume backscattering in both the VV and VH channels. This was resolved by applying the SBT-derived threshold to the VH channel to isolate those pixels that exhibit a very high backscatter resulting from volumetric scattering associated with vegetation canopies, labelling them as DV training samples.
The remaining pixels, denoted by a general Background land cover class, were extracted from the high backscatter mask where their NDPI variance < the 95th percentile. This works on the basis that pixels characterised as having low variability in backscatter over time are unlikely to be areas of IV or DV.

2.3. Image Classification

Final image classification was carried out using an object-based approach. Objects were generated separately for the low and high backscatter masks based on a stack of the VV, VH and NDPI bands per Sentinel-1 scene using the Shepherd segmentation algorithm [56], with a minimum of 15 pixels per object. Objects were then populated with the mean and standard deviation of the VV, VH and NDPI bands, as well as the terrain slope (Table 1). These datasets include terrain-based derivatives that have previously been shown to improve wetland classification accuracy by accounting for simple hydrological principles [5,24,57].
The low and high backscatter segmented images are classified independently using a consensus approach [5,58], which has been shown to improve classification performance and computational efficiency when classifying wetland land cover classes. A total of 25 replicate classifications are made per scene for both low and high backscatter regions, using 500 randomly selected training samples per class at each iteration. An object is assigned to a class when there is >70% agreement; e.g., if the classifier identifies an object as being OW more than 18 times out of 25, then it is considered OW. If this 70% agreement threshold is not met, then an object is deemed too uncertain and assumed to be background land cover. The machine learning classifier Extra Random Trees (eRT) was parameterised using the approach described below in Section 2.4.
For the low backscatter image, OW and FBE training masks are used to train a binary eRT classifier. For the high backscatter image, IV, B and DV training masks are used to train a multiclass eRT classifier. The resulting DV class is collapsed into the B land cover class to form a universal “dry” thematic class. These two separate classified outputs are then combined to create a single classified output product (OW, FBE, IV and B) for the input Sentinel-1 scene. This routine was applied to each scene in the archive for both the Barotseland Floodplain (totalling 142 scenes) and the Upper Rupununi Wetlands (totalling 141 Scenes).
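The consensus vote described above can be sketched as follows. This is illustrative: the per-object loop and the background fallback label are our assumptions about implementation detail, while the 25-replicate / >70% agreement logic follows the text.

```python
import numpy as np

BACKGROUND = 0  # fallback label for objects below the agreement threshold

def consensus_label(replicate_preds, agreement=0.7):
    """Combine replicate classifications per object.

    replicate_preds: int array of shape (n_replicates, n_objects).
    An object keeps its modal class only if that class wins more than
    `agreement` of the replicates; otherwise it falls back to background.
    """
    n_rep, n_obj = replicate_preds.shape
    out = np.full(n_obj, BACKGROUND, dtype=int)
    for j in range(n_obj):
        classes, counts = np.unique(replicate_preds[:, j], return_counts=True)
        k = counts.argmax()
        if counts[k] / n_rep > agreement:
            out[j] = classes[k]
    return out
```

With 25 replicates, 18/25 (72%) agreement passes the threshold while 17/25 (68%) does not, matching the worked example in the text.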

2.4. Classifier Parameterisation

Automatically generating training data using the rule base described above in Section 2.2.4 and outlined in Table 2 creates a considerable number of training samples for each class.
For example, when applied to the Barotseland study site, the median number of training samples per class for a scene was: (a) OW: 27,627, (b) FBE: 19,454, (c) IV: 3667, (d) DV: 130,288, and (e) B: 7,158,836. With a training dataset this large, it can be computationally demanding to optimise the classification parameters using random forest machine learning approaches. Although boosted classification algorithms such as XGBoost exist, testing showed that their computational demand was unfeasible, despite implementing GPU hardware acceleration. To resolve this issue, random sampling of the training data (500 samples per class) and a classification consensus approach over 25 classification replicates were implemented. This reduced the per-classification training and prediction time (compared to a single classification using all of the automatically generated training data) whilst maintaining the use of the large training dataset produced through the automated procedure.
Each classification replicate optimises algorithm hyperparameters on the randomly selected sample of training data through a grid search of possible parameters shown in Table 3, along with the frequency that each parameter was used in classification and the mode classification hyperparameters.
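One replicate's hyperparameter search might be sketched with SciKit Learn as follows. The parameter grid shown is a placeholder (the actual search space is given in Table 3), and `fit_replicate` assumes `X, y` are already the random 500-per-class subsample drawn for the replicate.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder grid: the real search space is listed in Table 3.
PARAM_GRID = {"n_estimators": [50, 100], "max_depth": [None, 10]}

def fit_replicate(X, y, param_grid=PARAM_GRID):
    """Grid-search an Extra (Random) Trees classifier for one replicate."""
    search = GridSearchCV(ExtraTreesClassifier(random_state=0),
                          param_grid, cv=3, n_jobs=-1)
    search.fit(X, y)
    return search.best_estimator_
```

Because each replicate only sees a small subsample, the grid search stays cheap even though 25 replicates are fitted per scene.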

2.5. Accuracy Assessment

Validation of the classification products was made using a mixture of Landsat-8, Sentinel-2 and Planet Labs PlanetScope optical imagery where cloud cover permitted. This was supported by field observations made during the 2019 wet season and dry season in Barotseland (Zambia) and a year-round field campaign from 2020–2021 in Upper Rupununi (Guyana). In both locations, information on the presence of water, vegetation cover characteristics, photos and locations were recorded. There were insufficient data points to conduct an accuracy assessment using the field data alone; therefore, this data was used to inform the interpretation of optical satellite imagery in determining ground truth.
In total, 17 scenes for Barotseland and 9 scenes for Upper Rupununi were validated. The selected scenes covered a range of hydrological conditions at both sites, including the dry season, wet season, post-wet season and the period of draw-up preceding the peak of the wet season. Accuracy assessment points were randomly allocated within each validation scene. A total of 28,223 points were scrutinised: 20,240 over Barotseland and 8,094 over Upper Rupununi.
Standard accuracy assessment metrics were generated, including overall accuracy, F1 score, kappa and individual F1 scores for the open water, inundated vegetation and background land cover classes. Following Murray et al. [59,60] and Bunting et al. [61], bootstrapping of accuracy assessment points was used to determine confidence intervals for the accuracy assessment metrics.
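As a sketch, the bootstrapped confidence intervals can be computed by resampling the accuracy assessment points with replacement and recomputing the metric on each draw. This is illustrative Python only: the helper name, resample count and percentile-interval form are assumptions, not the implementation used in the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile CI for an accuracy metric via resampling assessment points."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample points with replacement
        scores[i] = metric(y_true[idx], y_pred[idx])
    lo, hi = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y_true, y_pred), (lo, hi)

# e.g. overall accuracy with a 95% interval:
# oa, (lo, hi) = bootstrap_ci(y_true, y_pred, accuracy_score)
```

Other metrics (kappa, per-class F1) can be passed the same way, e.g. `lambda t, p: f1_score(t, p, average="macro")`.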
The RadWet classification performance was compared against two versions of the Hardy et al. [5] automatic wetland classification approach, using (i) a segmentation with a minimum of 15 pixels per cluster (matching the RadWet minimum mapping unit) and (ii) a segmentation with a minimum of 100 pixels per cluster (the original used in the Hardy et al. approach [5]), as well as (iii) a modification of RadWet in which training data are supplied as manually digitised features using the RegionGrow QGIS Plugin [62] instead of the rule base outlined in Table 2, with all other aspects of the workflow unchanged. It was considered essential to test the new RadWet approach against existing wetland mapping techniques presented in the literature, as well as to compare its performance against a human-defined classification to assess the validity of the automatic training data generation. Pairwise t-tests were performed to determine whether RadWet represented a significant improvement in wetland classification performance.
Sensitivity analysis was used to evaluate the influence of the number of classification replicates and the % agreement with the mode on classification performance for Barotseland. Additionally, the computational demand (time and memory) was quantified to compare RadWet against the manually derived training classification and the Hardy et al. [5] approach, as well as against running RadWet as a single-threaded procedure, as opposed to the multi-threaded procedure that RadWet uses by default.

3. Results

Figure 7 and Figure 8 show examples of RadWet classified outputs for the Barotseland Floodplain, western Zambia, for the wet season (19 April 2020) and dry season (28 August 2018), respectively. Sub-map figures (Figure 7(b1–b3)) demonstrate the ability of RadWet to characterise fine-scale hydrological features, such as anabranches, inundated paleochannels and scroll bar sequences leading to the formation of oxbow lakes and disconnected channels. During the wet season, the area surrounding the main trunk channel becomes inundated, leading to extensive inundated vegetation (Figure 7(b1–b3)) that subsequently dries out in the dry season (Figure 8(b1–b3)).
Outside of the Barotseland Floodplain, the eastern escarpment is characterised by the presence of ‘dambo’ features: vegetated, water-filled topographic lows that are usually permanently inundated (Figure 7(c1–c3) and Figure 8(c1–c3)). These features are detected by RadWet in both the wet and the dry season imagery, although they are more distinct during the wet season, when increased surface water availability leads to an increased double-bounce backscatter interaction. Conversely, as flood water recedes in the dry season, the relative proportion of vegetation above the water’s surface increases, thus increasing the contribution of volume scattering and reducing the ability to detect any inundated vegetation.
Examples of RadWet classified products for north Rupununi, Guyana, are shown in Figure 9 and Figure 10 for the 2020 wet season and dry season, respectively. Figure 9(b1–b3) and Figure 10(b1–b3) demonstrate the ability of RadWet to map Lake Amuku, which is characterised by extensive inundated vegetation in the wet season and, to some extent, in the dry season, and would therefore not be detected using low backscatter-based water detection approaches. During the wet season, the Ireng and Takutu river channels are clearly depicted, but during the dry season, these features become disconnected. This is a result of the channel width contracting to a width that is generally smaller than the RadWet minimum mapping unit. A pixel-based approach may improve the minimum mapping unit.

3.1. Classification Accuracy Assessment

RadWet demonstrated a good level of agreement with optical validation data, with an overall classification accuracy of 88.68% (95th CI: 88.64–88.72) in Barotseland (Table 4), significantly outperforming the other three classification approaches tested (p < 0.01), the closest being the Hardy et al. [5] approach with an overall accuracy of 84.27% (95th CI: 84.2–84.3). RadWet achieved high classification accuracy for all classes, with F1 scores of (i) 0.92 for OW, (ii) 0.828 for IV, and (iii) 0.902 for B, giving a macro F1 score of 0.88 and a test-sample-weighted macro F1 score of 0.86. In comparison to the other three approaches, the main improvement made by RadWet was in the classification of IV, representing a 19% and 11% improvement over the F1 scores achieved with manually defined training data and the Hardy et al. [5] approach, respectively.
Despite the improvements made in the classification of IV, this class demonstrated the lowest accuracy scores for RadWet. Specifically, the producer’s accuracy scores were consistently higher (median 21% higher) than the user’s accuracy scores, indicating a general trend of under-classification. False negatives occurred over areas of inundated vegetation where the canopy is sufficiently dense to exhibit volumetric scattering and is, therefore, indistinguishable from dry vegetation (for example, see Figure 11).
When transferred to the Rupununi Wetlands, RadWet also performed well with a median overall accuracy of 80.15% (95th CI: 80.06–80.23) (Table 5), narrowly but significantly (p < 0.01) outperforming the other three classification approaches tested (the closest was the manually defined training data approach with a median overall accuracy of 79.39%, 95th CI: 79.13–79.64). Similar to Barotseland, RadWet performed best in detecting open water with an F1 score of 0.855 (95th CI: 0.854–0.856), with inundated vegetation returning an F1 score of 0.791 (95th CI: 0.789–0.792) and background land cover returning an F1 score of 0.771 (95th CI: 0.769–0.772). The classification approach using manually defined training data achieved a very high level of accuracy for OW (median F1 0.98), significantly higher than the automatically defined training data approaches, including RadWet. This reflects the ability of the human operator to distinguish between areas of open water and other low backscatter landscape features that were included as training data in RadWet and the Hardy et al. [5] approaches, leading to over-classification of OW (Figure 12), particularly in the dry season when channels are relatively narrow (~15 m), often falling below the minimum mapping unit. In terms of classifying IV, the user’s accuracy scores were very low (medians ranging from 35.7% to 42.2%) for the two Hardy et al. [5] approaches and the manually defined training data approach, compared to a median of 75.8% for RadWet.

3.2. Sensitivity Analysis

Generally, classification performance decreases with an increase in % agreement with the classification mode, with the weighted average F1 score decreasing from 0.88 at 50% agreement to 0.84 at 100% agreement, as seen in Figure 13a. This is mainly driven by the performance of the inundated vegetation class, which demonstrates a marked decrease in the F1 score where agreement with the mode is >90%. The backscatter characteristics of this class are not well constrained due to the variation in vegetation type, canopy density and inundation conditions, alongside the similarity with many ‘dry’ land cover types such as non-inundated vegetation. By increasing the % mode agreement, we can increase the confidence in pixels positively identified as inundated vegetation but, in doing so, introduce a significant number of false negatives, as we restrict the classifier’s ability to sample from the full range of training data generated.
The overall classification performance is improved by carrying out replicate classifications and following the consensus approach to labelling pixels (Figure 13b). Beyond 10 replicates, there is no difference in the overall performance of RadWet; however, the F1 score for inundated vegetation does not stabilise until 20 replicates, which therefore represents the optimum number of replicates to achieve an accurate classification whilst minimising computational demand.
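The % mode-agreement filter evaluated in this sensitivity analysis can be sketched as follows. This is illustrative only: the choice to revert low-agreement pixels to a background label is one plausible handling and an assumption of ours, as is the function name.

```python
import numpy as np
from scipy import stats

BACKGROUND = 0  # illustrative background-class label

def apply_agreement_threshold(votes, threshold):
    """votes: (n_replicates, n_pixels) integer labels from the replicate classifiers.

    Keeps the consensus (mode) label only where the fraction of replicates
    agreeing with it meets the threshold; other pixels revert to background.
    """
    mode = stats.mode(votes, axis=0).mode.ravel()
    agreement = (votes == mode).mean(axis=0)
    return np.where(agreement >= threshold, mode, BACKGROUND)

# Sweeping the thresholds examined in Figure 13a:
# for t in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
#     labels = apply_agreement_threshold(votes, t)
```

Raising the threshold trades commission errors for omission errors, which is the mechanism behind the F1 decline for inundated vegetation above 90% agreement.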

3.3. Wetland Dynamics

By applying RadWet over contiguous Sentinel-1 observations (142 images in total), the fine-scale temporal dynamics in inundation extent can be determined for the Barotseland Floodplain (Figure 14a). This can be used to monitor the timing, magnitude and duration of individual wet seasons, here denoted by the rate of change in inundation extent, where derivative values greater than the 95th percentile or less than the 5th percentile indicate the start and conclusion of wet seasons (Figure 14b).
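The derivative-percentile rule above can be sketched as follows; this is an illustrative sketch, with the function name and the use of `np.gradient` as assumptions rather than the RadWet code.

```python
import numpy as np

def season_transitions(extent_km2):
    """Flag wet-season onset and conclusion from an inundation-extent series.

    extent_km2: inundated area per 12-day Sentinel-1 time step.
    """
    d = np.gradient(extent_km2)               # rate of change of extent
    hi, lo = np.percentile(d, 95), np.percentile(d, 5)
    onsets = np.flatnonzero(d > hi)           # rapid expansion -> wet-season start
    recessions = np.flatnonzero(d < lo)       # rapid drawdown -> wet-season end
    return onsets, recessions
```

Contiguous runs of flagged time steps would then be grouped into individual wet seasons, from which timing, magnitude and duration statistics can be derived.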
Inundation tends to peak on the 8th of March (median date, Std Dev: 9 days). Peak inundation tends to occur 72 days (median, Std Dev: 7 days) after the initial onset of inundation, with the wet season persisting for a median of 156 days. The peak extent for the 2019 wet season was 1500 km2, 70% smaller than the mean of the other four years (~5000 km2), indicating a period of significant drought for the people of Barotseland.
The inundation extent is dominated by inundated vegetation, which accounts for an average of 80% of the total inundated area in the wet season and 51% in the dry season.
Patterns in inundation extent are less distinct in Upper Rupununi due to the complex hydrological mechanisms that govern the area, resulting from contributions from both the Amazon and Essequibo basins. As a consequence, the wet season duration ranges from 180 to 288 days (median: 192 days, Std Dev: 55 days) over the five-year study period. Peak extent tends to occur on 16th July but varies from year to year (Std Dev: 22 days). Similar to Barotseland, inundated vegetation dominates the overall inundation extent (89% in the wet season, 83% in the dry season), as seen in Figure 15.
Alongside temporal dynamics, RadWet is able to depict fine-scale spatial dynamics in inundation extent. A median wet season extent raster was generated for the Barotseland Floodplain to represent the typical inundation extent for the period 2017–2021. This was used to generate annual wet season maps of the deviation in extent from the typical extent (Figure 16). These deviations indicate that the Luena Floodplain region, in the northeast of the study area, experiences a great deal of variation, with a ~40% decrease in inundation occurrence during the 2019 drought and a ~30% increase during 2017, the wettest wet season recorded over the 2017–2021 study period. Additionally, the 2018 and 2021 wet seasons saw an increase in inundation surrounding the main Zambezi channel.
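The deviation mapping reduces to a simple raster comparison; a minimal sketch, assuming per-year inundation-occurrence rasters (in %) are already stacked in memory, with the function name and array layout as our own illustrative choices:

```python
import numpy as np

def occurrence_deviation(annual_occurrence, stack):
    """Deviation of one wet season from the typical (median) occurrence.

    annual_occurrence: (rows, cols) % inundation occurrence for one wet season.
    stack: (years, rows, cols) occurrence rasters for the full study period.
    """
    typical = np.median(stack, axis=0)     # "typical" wet-season occurrence
    return annual_occurrence - typical     # +/- % deviation, per pixel
```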
Similarly, the Upper Rupununi wetlands showed a great deal of spatial variation in extent (Figure 17), deviating ±40% in inundation occurrence over the period 2017–2021; however, specific patterns were less pronounced.

4. Discussion

Using a fully automated approach applied to Sentinel-1 C-band radar imagery, RadWet is able to map both open water and inundated vegetation within herbaceous wetlands with a high degree of accuracy. This represents an improvement over the only other existing automated inundation mapping approach, demonstrated through paired testing [5]. Additionally, RadWet does not require any a priori information about the timing of wet and dry seasons; other studies rely on ancillary information from the GRACE and GRACE-FO missions [64] to define seasonal hydrological conditions and thereby correct for false positive classification of open water over flat, dry sandy areas. Moreover, we were able to demonstrate the transferability of RadWet to two independent wetlands (Barotseland in Zambia and Upper Rupununi in Guyana), representing a significant step forward in our ability to automatically map wetland inundation dynamics.
RadWet employs a novel consensus-based approach to image classification by iteratively sampling the input training data and repeating the classification. We were able to demonstrate a reliable classification after a relatively small number of iterations (n = 25). This efficient approach is implemented in a multi-threaded processing environment, thereby increasing the suitability of RadWet for scaling up as part of broader-scale wetland monitoring programmes. In this respect, RadWet may offer a solution for generating wetland inventories at regional or national scales: products whose absence is cited as a deficiency by the Ramsar Convention for the protection of wetlands [65].

4.1. Areas for Development

RadWet is able to detect inundation in herbaceous vegetation due to the double-bounce backscatter mechanism in C-band Sentinel-1 imagery. However, as the vegetation canopy becomes denser, volumetric scattering dominates, leading to little or no signal from the water’s surface below [5,19,20,29,51]. This was apparent at both study sites where field observations, alongside high-resolution optical Planet imagery, indicated the presence of water underneath herbaceous vegetation canopies, but the Sentinel-1 image lacked any discernible difference from non-inundated vegetation targets. Due to these physical backscatter mechanisms, any approach that relies on Sentinel-1 backscatter, including RadWet, will underestimate the extent of inundation.
To overcome this limitation, future work should focus on the integration of longer wavelength radar systems such as L-band. Longer wavelengths have a greater degree of penetration into vegetation canopies than the C-band, increasing the signal from the surface below. To this effect, there have been examples of the use of L-band ALOS PALSAR imagery for mapping inundation underneath tree canopies in the Amazon [66,67]. Although RadWet was developed for use with Sentinel-1, it can readily be applied to other radar image datasets, including ALOS PALSAR. This is because RadWet automatically defines training data based on the backscatter interaction characteristics of inundated targets, characteristics that remain consistent across radar platforms, with the only variable being radar wavelength. Consequently, applying RadWet to longer wavelength sensors would detect areas of flooded forest or swamp rather than herbaceous inundated vegetation.

4.2. Broader Implications

Currently, the most widely used flood mapping techniques [15,16,18,52,57,68] and surface water mapping approaches (e.g., Global Surface Water: [25]) lack the ability to map inundated vegetation. Our results support the findings presented by Hardy et al. [5], demonstrating that the inundation extent in tropical herbaceous wetlands is dominated by inundated vegetation in both the Barotseland and Upper Rupununi wetlands. This underlines the importance of mapping both inundated vegetation and open water if services are to provide accurate maps of floodwater inundation in these environments.
The ability to map inundated vegetation carries significance for a number of application areas. Tropical wetlands are known to be globally significant sources of greenhouse gas emissions [69,70,71], accounting for an estimated 20% of global CH4 emissions, with inundated vegetation being a key driver behind emission rates [28]. Yet, across the globe, wetland extent and dynamics are poorly quantified and are cited as a key source of uncertainty in greenhouse gas emission modelling [69,70,71,72]. Indeed, models such as WETCHARTS rely on static wetland extent products such as GlobCover and Global Lakes and Wetlands Database (GLWD) [73]. RadWet offers a potential solution for improving methane emission estimates by providing precise and timely updates on wetland extent.
Recent efforts have demonstrated the use of the CYGNSS signal to estimate inundation, providing monthly inundation maps at a resolution of approximately 1.1 km [71]. In contrast, RadWet is able to provide inundation extent maps with a minimum mapping unit of approximately 30 m at a timestep of 12 days (dependent on Sentinel-1 status: note the 2021 failure of the Sentinel-1B satellite affecting image availability). This resolution is critical to understanding the hydrological drivers behind inundation dynamics [74]. In this study, we demonstrate RadWet’s ability to resolve hydrological features such as anabranches, inundated paleochannels, scroll bars and oxbow lakes, features that are unlikely to be detected at mapping units greater than 30 m.
Unlike other radar-based flood mapping approaches [31,44,75,76], RadWet is able to map inundation in dry conditions, with no significant difference between the wet season and dry season accuracy in Barotseland (p = 0.15) and in the Rupununi Wetlands (p = 0.043). This offers benefits for applications in public health, as dry season water sources, particularly those inundated with vegetation, can provide important refuge habitats for disease vectors such as malarial mosquitoes [32,77]. RadWet has the potential to map these habitats, providing geographic targets for malaria control interventions. Additionally, this means that inundation extent can be mapped throughout the year, capturing intra-annual dynamics that can provide value for application areas such as safeguarding livelihoods. For instance, in the Barotseland Floodplain, RadWet was able to identify the ENSO-linked drought event in 2019 [78], with broad implications for people across the broader southern African continent who depend on the floodplain ecology, including a shortage of forage for grazing animals [79].

5. Conclusions

RadWet represents an improved means of mapping both open water and inundated vegetation within herbaceous wetland environments. RadWet is an efficient, fully automated approach using freely available data and software and is therefore suitable for routine wetland monitoring programs at large scales. Being mainly driven by serial Sentinel-1 observations, RadWet can depict fine spatial and temporal inundation dynamics with implications for a range of applications, including improving estimates of greenhouse gas emissions from natural wetlands.

Author Contributions

Conceptualisation, G.O. and A.H.; methodology, G.O., A.H. and P.B.; software, G.O.; validation, G.O.; formal analysis, G.O. and A.H.; investigation, G.O. and A.H.; resources, G.O. and P.B.; data curation, G.O.; writing—original draft preparation, G.O.; writing—review and editing, G.O., A.H. and P.B.; visualisation, G.O.; supervision, A.H. and P.B.; project administration, G.O. and A.H.; funding acquisition, G.O. and A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Aberystwyth University’s AberDoc Programme.

Data Availability Statement

The data are available from the author upon request.

Acknowledgments

The authors would like to thank Donall Cross and Andrea Berardi for supplying field observations for use in validation. The authors would also like to thank Supercomputing Wales, who provided computer resources to allow this work to be completed.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kolka, R.K.; Murdiyarso, D.; Kauffman, J.B.; Birdsey, R.A. Tropical Wetlands, Climate, and Land-Use Change: Adaptation and Mitigation Opportunities. Wetl. Ecol. Manag. 2016, 24, 107–112. [Google Scholar] [CrossRef] [Green Version]
  2. Zhang, Z.; Zimmermann, N.E.; Stenke, A.; Li, X.; Hodson, E.L.; Zhu, G.; Huang, C.; Poulter, B. Emerging Role of Wetland Methane Emissions in Driving 21st Century Climate Change. Proc. Natl. Acad. Sci. USA 2017, 114, 201618765. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Cai, X.; Haile, A.T.; Magidi, J.; Mapedza, E.; Nhamo, L. Living with Floods—Household Perception and Satellite Observations in the Barotse Floodplain, Zambia. Phys. Chem. Earth 2017, 100, 278–286. [Google Scholar] [CrossRef]
  4. Cohen, J.M.; Ernst, K.C.; Lindblade, K.A.; Vulule, J.M.; John, C.C.; Wilson, M.L. Local Topographic Wetness Indices Predict Household Malaria Risk Better than Land-Use and Land-Cover in the Western Kenya Highlands. Malar. J. 2010, 9, 328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Hardy, A.; Ettritch, G.; Cross, D.E.; Bunting, P.; Liywalii, F.; Sakala, J.; Silumesii, A.; Singini, D.; Smith, M.; Willis, T.; et al. Automatic Detection of Open and Vegetated Water Bodies Using Sentinel 1 to Map African Malaria Vector Mosquito Breeding Habitats. Remote Sens. 2019, 11, 593. [Google Scholar] [CrossRef] [Green Version]
  6. Hardy, A.; Mageni, Z.; Dongus, S.; Killeen, G.; Macklin, M.G.; Majambare, S.; Ali, A.; Msellem, M.; Al-Mafazy, A.W.; Smith, M.; et al. Mapping Hotspots of Malaria Transmission from Pre-Existing Hydrology, Geology and Geomorphology Data in the Pre-Elimination Context of Zanzibar, United Republic of Tanzania. Parasites Vectors 2015, 8, 41. [Google Scholar] [CrossRef] [Green Version]
  7. Fillinger, U.; Lindsay, S.W. Suppression of Exposure to Malaria Vectors by an Order of Magnitude Using Microbial Larvicides in Rural Kenya. Trop. Med. Int. Health 2006, 11, 1629–1642. [Google Scholar] [CrossRef]
  8. Sinka, M.E.; Bangs, M.J.; Manguin, S.; Coetzee, M.; Mbogo, C.M.; Hemingway, J.; Patil, A.P.; Temperley, W.H.; Gething, P.W.; Kabaria, C.W.; et al. The Dominant Anopheles Vectors of Human Malaria in Africa, Europe and the Middle East: Occurrence Data, Distribution Maps and Bionomic Précis. Parasites Vectors 2010, 3, 117. [Google Scholar] [CrossRef] [Green Version]
  9. Ndenga, B.A.; Simbauni, J.A.; Mbugi, J.P.; Githeko, A.K.; Fillinger, U. Productivity of Malaria Vectors from Different Habitat Types in the Western Kenya Highlands. PLoS ONE 2011, 6, e19473. [Google Scholar] [CrossRef]
  10. Bomblies, A.; Duchemin, J.B.; Eltahir, E.A.B. Hydrology of Malaria: Model Development and Application to a Sahelian Village. Water Resour. Res. 2008, 44, 12445. [Google Scholar] [CrossRef]
  11. Smith, M.W.; Macklin, M.G.; Thomas, C.J. Hydrological and Geomorphological Controls of Malaria Transmission; Elsevier: Amsterdam, The Netherlands, 2013; Volume 116, pp. 109–127. [Google Scholar]
  12. Ovakoglou, G.; Cherif, I.; Alexandridis, T.K.; Pantazi, X.-E.; Tamouridou, A.-A.; Moshou, D.; Tseni, X.; Raptis, I.; Kalaitzopoulou, S.; Mourelatos, S. Automatic Detection of Surface-Water Bodies from Sentinel-1 Images for Effective Mosquito Larvae Control. J. Appl. Remote Sens. 2021, 15, 014507. [Google Scholar] [CrossRef]
  13. Sánchez-Espinosa, A.; Schröder, C. Land Use and Land Cover Mapping in Wetlands One Step Closer to the Ground: Sentinel-2 versus Landsat 8. J. Environ. Manag. 2019, 247, 484–498. [Google Scholar] [CrossRef]
  14. Pope, K.; Masuoka, P.; Rejmankova, E.; Grieco, J.; Johnson, S.; Roberts, D. Mosquito Habitats, Land Use, and Malaria Risk in Belize from Satellite Imagery. Ecol. Appl. 2005, 15, 1223–1232. [Google Scholar] [CrossRef] [Green Version]
  15. Martinis, S.; Twele, A.; Voigt, S. Towards Operational near Real-Time Flood Detection Using a Split-Based Automatic Thresholding Procedure on High Resolution TerraSAR-X Data. Nat. Hazards Earth Syst. Sci. 2009, 9, 303–314. [Google Scholar] [CrossRef]
  16. Martinis, S.; Kersten, J.; Twele, A. A Fully Automated TerraSAR-X Based Flood Service. ISPRS J. Photogramm. Remote Sens. 2015, 104, 203–212. [Google Scholar] [CrossRef]
  17. Martinis, S.; Plank, S.; Ćwik, K. The Use of Sentinel-1 Time-Series Data to Improve Flood Monitoring in Arid Areas. Remote Sens. 2018, 10, 583. [Google Scholar] [CrossRef] [Green Version]
  18. Twele, A.; Cao, W.; Plank, S.; Martinis, S. Sentinel-1-Based Flood Mapping: A Fully Automated Processing Chain. Int. J. Remote Sens. 2016, 37, 2990–3004. [Google Scholar] [CrossRef]
  19. Tsyganskaya, V.; Martinis, S.; Marzahn, P.; Ludwig, R. Detection of Temporary Flooded Vegetation Using Sentinel-1 Time Series Data. Remote Sens. 2018, 10, 1286. [Google Scholar] [CrossRef] [Green Version]
  20. Tsyganskaya, V.; Martinis, S.; Marzahn, P.; Ludwig, R. SAR-Based Detection of Flooded Vegetation—A Review of Characteristics and Approaches. Int. J. Remote Sens. 2018, 39, 2255–2293. [Google Scholar] [CrossRef]
  21. Landuyt, L.; Van Coillie, F.M.B.; Vogels, B.; Dewelde, J.; Verhoest, N.E.C. Towards Operational Flood Monitoring in Flanders Using Sentinel-1. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11004–11018. [Google Scholar] [CrossRef]
  22. Boryan, C.G.; Yang, Z.; Sandborn, A.; Willis, P.; Haack, B. Operational Agricultural Flood Monitoring with Sentinel-1 Synthetic Aperture Radar. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; pp. 5831–5834. [Google Scholar] [CrossRef]
  23. Whiteside, T.G.; Bartolo, R.E. Mapping Aquatic Vegetation in a Tropical Wetland Using High Spatial Resolution Multispectral Satellite Imagery. Remote Sens. 2015, 7, 11664–11694. [Google Scholar] [CrossRef] [Green Version]
  24. Hardy, A.; Oakes, G.; Ettritch, G. Tropical Wetland (TropWet) Mapping Tool: The Automatic Detection of Open and Vegetated Waterbodies in Google Earth Engine for Tropical Wetlands. Remote Sens. 2020, 12, 1182. [Google Scholar] [CrossRef] [Green Version]
  25. Pekel, J.F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-Resolution Mapping of Global Surface Water and Its Long-Term Changes. Nature 2016, 540, 418–422. [Google Scholar] [CrossRef]
  26. Slagter, B.; Tsendbazar, N.E.; Vollrath, A.; Reiche, J. Mapping Wetland Characteristics Using Temporally Dense Sentinel-1 and Sentinel-2 Data: A Case Study in the St. Lucia Wetlands, South Africa. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102009. [Google Scholar] [CrossRef]
  27. Muro, J.; Canty, M.; Conradsen, K.; Hüttich, C.; Nielsen, A.A.; Skriver, H.; Remy, F.; Strauch, A.; Thonfeld, F.; Menz, G. Short-Term Change Detection in Wetlands Using Sentinel-1 Time Series. Remote Sens. 2016, 8, 795. [Google Scholar] [CrossRef] [Green Version]
  28. Shaw, J.T.; Allen, G.; Barker, P.; Pitt, J.R.; Pasternak, D.; Bauguitte, S.J.B.; Lee, J.; Bower, K.N.; Daly, M.C.; Lunt, M.F.; et al. Large Methane Emission Fluxes Observed From Tropical Wetlands in Zambia. Glob. Biogeochem. Cycles 2022, 36, e2021GB007261. [Google Scholar] [CrossRef]
  29. Woodhouse, I.H. Introduction to Microwave Remote Sensing; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  30. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random Forest Wetland Classification Using ALOS-2 L-Band, RADARSAT-2 C-Band, and TerraSAR-X Imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31. [Google Scholar] [CrossRef]
  31. Clement, M.A.; Kilsby, C.G.; Moore, P. Multi-Temporal Synthetic Aperture Radar Flood Mapping Using Change Detection. J. Flood Risk Manag. 2018, 11, 152–168. [Google Scholar] [CrossRef] [Green Version]
  32. Charlwood, J.D.; Vij, R.; Billingsley, P.F. Dry Season Refugia of Malaria-Transmitting Mosquitoes in a Dry Savannah Zone of East Africa. Am. J. Trop. Med. Hyg. 2000, 62, 726–732. [Google Scholar] [CrossRef] [Green Version]
  33. Hardy, A.J.; Gamarra, J.G.P.; Cross, D.E.; Macklin, M.G.; Smith, M.W.; Kihonda, J.; Killeen, G.F.; Ling’ala, G.N.; Thomas, C.J. Habitat Hydrology and Geomorphology Control the Distribution of Malaria Vector Larvae in Rural Africa. PLoS ONE 2013, 8, e81931. [Google Scholar] [CrossRef] [Green Version]
  34. Bunting, P.; Clewley, D.; Lucas, R.M.; Gillingham, S. The Remote Sensing and GIS Software Library (RSGISLib). Comput. Geosci. 2014, 62, 216–226. [Google Scholar] [CrossRef]
  35. GDAL/OGR contributors {GDAL/OGR} Geospatial Data Abstraction Software Library 2022. Available online: https://gdal.org/ (accessed on 16 March 2023).
  36. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array Programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
  37. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  38. Lam, S.K.; Pitrou, A.; Seibert, S. Numba: A LLVM-Based Python JIT Compiler. In Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC-LLVM’15, Austin, TX, USA, 15 November 2015. [Google Scholar] [CrossRef]
  39. Zimba, H.; Kawawa, B.; Chabala, A.; Phiri, W.; Selsam, P.; Meinhardt, M.; Nyambe, I. Assessment of Trends in Inundation Extent in the Barotse Floodplain, Upper Zambezi River Basin: A Remote Sensing-Based Approach. J. Hydrol. Reg. Stud. 2018, 15, 149–170. [Google Scholar] [CrossRef]
  40. Timberlake, J.; Bingham, M. Vegetation Descriptions of the Upper Zambezi Districts of Zambia. Occasional Publications in Biodiversity No. 22; Biodiversity Foundation for Africa, Bulawayo: Richmond, UK, 2010; Originally Issued as Forest Research Pamphlets by the Zambia Forest Research Department. [Google Scholar]
  41. Estrada-Carmona, N.; Attwood, S.; Cole, S.M.; Remans, R.; DeClerck, F. A Gendered Ecosystem Services Approach to Identify Novel and Locally-Relevant Strategies for Jointly Improving Food Security, Nutrition, and Conservation in the Barotse Floodplain. Int. J. Agric. Sustain. 2020, 18, 351–375. [Google Scholar] [CrossRef]
42. Tweddle, D. Overview of the Zambezi River System: Its History, Fish Fauna, Fisheries, and Conservation. Aquat. Ecosyst. Health Manag. 2010, 13, 224–240.
43. Mistry, J.; Berardi, A.; Simpson, M. Birds as Indicators of Wetland Status and Change in the North Rupununi, Guyana. Biodivers. Conserv. 2008, 17, 2383–2409.
44. Ruiz-Ramos, J.; Berardi, A.; Marino, A.; Bhowmik, D.; Simpson, M. Assessing Hydrological Dynamics of Guyana's North Rupununi Wetlands Using Sentinel-1 SAR Imagery Change Detection Analysis on Google Earth Engine. In Proceedings of the IEEE India Geoscience and Remote Sensing Symposium (InGARSS), Ahmedabad, India, 1–4 December 2020; pp. 5–8.
45. Mistry, J.; Simpson, M.; Berardi, A.; Sandy, Y. Exploring the Links between Natural Resource Use and Biophysical Status in the Waterways of the North Rupununi, Guyana. J. Environ. Manag. 2004, 72, 117–131.
46. Ruiz-Ramos, J.; Marino, A.; Berardi, A.; Hardy, A.; Simpson, M. Characterization of Natural Wetlands with Cumulative Sums of Polarimetric SAR Time Series. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 5899–5902.
47. Beck, H.E.; Zimmermann, N.E.; McVicar, T.R.; Vergopolan, N.; Berg, A.; Wood, E.F. Present and Future Köppen-Geiger Climate Classification Maps at 1-km Resolution. Sci. Data 2018, 5, 180214.
48. Barbosa, R.I.; do Nascimento, S.P.; Weiduschat, A.A.; Bonatto, F. Notes on an Expedition to the Headwaters of the Maú (Ireng) River, Roraima, Brazil. Bol. Mus. Integr. Roraima 2000, 7, 45–54.
49. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sens. Environ. 2017, 202, 18–27.
50. Lehner, B.; Verdin, K.; Jarvis, A. New Global Hydrography Derived from Spaceborne Elevation Data. EOS 2008, 89, 93–94.
51. Tsyganskaya, V.; Martinis, S.; Marzahn, P. Flood Monitoring in Vegetated Areas Using Multitemporal Sentinel-1 Data: Impact of Time Series Features. Water 2019, 11, 1938.
52. Huang, W.; DeVries, B.; Huang, C.; Lang, M.W.; Jones, J.W.; Creed, I.F.; Carroll, M.L. Automated Extraction of Surface Water Extent from Sentinel-1 Data. Remote Sens. 2018, 10, 797.
53. Landerer, F.W.; Swenson, S.C. Accuracy of Scaled GRACE Terrestrial Water Storage Estimates. Water Resour. Res. 2012, 48, 4531.
54. Landerer, F. CSR TELLUS GRACE Level-3 Monthly Land Water-Equivalent-Thickness Surface Mass Anomaly Release 6.0 Version 04 in NetCDF/ASCII/GeoTIFF Formats. Available online: https://podaac.jpl.nasa.gov/dataset/TELLUS_GRAC_L3_CSR_RL06_LND_v04 (accessed on 31 October 2021).
55. Plank, S.; Jussi, M.; Martinis, S.; Twele, A. Combining Polarimetric Sentinel-1 and ALOS-2/PALSAR-2 Imagery for Mapping of Flooded Vegetation. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Kuala Lumpur, Malaysia, 23–28 July 2017; pp. 5705–5708.
56. Shepherd, J.D.; Bunting, P.; Dymond, J.R. Operational Large-Scale Segmentation of Imagery Based on Iterative Elimination. Remote Sens. 2019, 11, 658.
57. Martinis, S.; Groth, S.; Wieland, M.; Knopp, L.; Rättich, M. Towards a Global Seasonal and Permanent Reference Water Product from Sentinel-1/2 Data for Improved Flood Mapping. Remote Sens. Environ. 2022, 278, 113077.
58. Dargie, G.C.; Lewis, S.L.; Lawson, I.T.; Mitchard, E.T.A.; Page, S.E.; Bocko, Y.E.; Ifo, S.A. Age, Extent and Carbon Storage of the Central Congo Basin Peatland Complex. Nature 2017, 542, 86–90.
59. Murray, N.J.; Phinn, S.R.; DeWitt, M.; Ferrari, R.; Johnston, R.; Lyons, M.B.; Clinton, N.; Thau, D.; Fuller, R.A. The Global Distribution and Trajectory of Tidal Flats. Nature 2018, 565, 222–225.
60. Murray, N.J.; Worthington, T.A.; Bunting, P.; Duce, S.; Hagger, V.; Lovelock, C.E.; Lucas, R.; Saunders, M.I.; Sheaves, M.; Spalding, M.; et al. High-Resolution Mapping of Losses and Gains of Earth's Tidal Wetlands. Science 2022, 376, 744–749.
61. Bunting, P.; Rosenqvist, A.; Hilarides, L.; Lucas, R.M.; Thomas, N.; Tadono, T.; Worthington, T.A.; Spalding, M.; Murray, N.J.; Rebelo, L.M. Global Mangrove Extent Change 1996–2020: Global Mangrove Watch Version 3.0. Remote Sens. 2022, 14, 3657.
62. Hardy, A.; Oakes, G.; Hassan, J.; Yussuf, Y. Improved Use of Drone Imagery for Malaria Vector Control through Technology-Assisted Digitizing (TAD). Remote Sens. 2022, 14, 317.
63. Planet Team. Planet Application Program Interface: In Space for Life on Earth; Planet: San Francisco, CA, USA, 2017.
64. Landerer, F.W.; Flechtner, F.M.; Save, H.; Webb, F.H.; Bandikova, T.; Bertiger, W.I.; Bettadpur, S.V.; Byun, S.H.; Dahle, C.; Dobslaw, H.; et al. Extending the Global Mass Change Data Record: GRACE Follow-On Instrument and Science Data Performance. Geophys. Res. Lett. 2020, 47, e2020GL088306.
65. Ramsar. A Framework for Wetland Inventory. In Proceedings of the 8th Meeting of the Conference of the Contracting Parties to the Convention on Wetlands (Ramsar, Iran, 1971), Valencia, Spain, 18–26 November 2002.
66. Chapman, B.; McDonald, K.; Shimada, M.; Rosenqvist, A.; Schroeder, R.; Hess, L. Mapping Regional Inundation with Spaceborne L-Band SAR. Remote Sens. 2015, 7, 5440–5470.
67. Rosenqvist, J.; Rosenqvist, A.; Jensen, K.; McDonald, K. Mapping of Maximum and Minimum Inundation Extents in the Amazon Basin 2014–2017 with ALOS-2 PALSAR-2 ScanSAR Time-Series Data. Remote Sens. 2020, 12, 1326.
68. Li, Y.; Martinis, S.; Plank, S.; Ludwig, R. An Automatic Change Detection Approach for Rapid Flood Mapping in Sentinel-1 SAR Data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 123–135.
69. Lunt, M.F.; Palmer, P.I.; Lorente, A.; Borsdorff, T.; Landgraf, J.; Parker, R.J.; Boesch, H. Rain-Fed Pulses of Methane from East Africa during 2018–2019 Contributed to Atmospheric Growth Rate. Environ. Res. Lett. 2021, 16, 024021.
70. Lunt, M.M.; Palmer, P.P.; Feng, L.; Taylor, C.C.; Boesch, H.; Parker, R.R. An Increase in Methane Emissions from Tropical Africa between 2010 and 2016 Inferred from Satellite Data. Atmos. Chem. Phys. 2019, 19, 14721–14740.
71. Gerlein-Safdi, C.; Bloom, A.A.; Plant, G.; Kort, E.A.; Ruf, C.S. Improving Representation of Tropical Wetland Methane Emissions with CYGNSS Inundation Maps. Glob. Biogeochem. Cycles 2021, 35, e2020GB006890.
72. Pandey, S.; Houweling, S.; Lorente, A.; Borsdorff, T.; Tsivlidou, M.; Anthony Bloom, A.; Poulter, B.; Zhang, Z.; Aben, I. Using Satellite Data to Identify the Methane Emission Controls of South Sudan's Wetlands. Biogeosciences 2021, 18, 557–572.
73. Anthony Bloom, A.; Bowman, W.K.; Lee, M.; Turner, J.A.; Schroeder, R.; Worden, R.J.; Weidner, R.; McDonald, C.K.; Jacob, J.D. A Global Wetland Methane Emissions and Uncertainty Dataset for Atmospheric Chemical Transport Models (WetCHARTs Version 1.0). Geosci. Model Dev. 2017, 10, 2141–2156.
74. Di Vittorio, C.A.; Georgakakos, A.P. Land Cover Classification and Wetland Inundation Mapping Using MODIS. Remote Sens. Environ. 2018, 204, 1–17.
75. Giustarini, L.; Hostache, R.; Matgen, P.; Schumann, G.J.P.; Bates, P.D.; Mason, D.C. A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2417–2430.
76. Westerhoff, R.S.; Kleuskens, M.P.H.; Winsemius, H.C.; Huizinga, H.J.; Brakenridge, G.R.; Bishop, C. Automated Global Water Mapping Based on Wide-Swath Orbital Synthetic-Aperture Radar. Hydrol. Earth Syst. Sci. 2013, 17, 651–663.
77. Cross, D.E.; Thomas, C.; McKeown, N.; Siaziyu, V.; Healey, A.; Willis, T.; Singini, D.; Liywalii, F.; Silumesii, A.; Sakala, J.; et al. Geographically Extensive Larval Surveys Reveal an Unexpected Scarcity of Primary Vector Mosquitoes in a Region of Persistent Malaria Transmission in Western Zambia. Parasites Vectors 2021, 14, 91.
78. Hulsman, P.; Savenije, H.H.G.; Hrachowitz, M. Satellite-Based Drought Analysis in the Zambezi River Basin: Was the 2019 Drought the Most Extreme in Several Decades as Locally Perceived? J. Hydrol. Reg. Stud. 2021, 34, 100789.
79. Sazib, N.; Mladenova, L.E.; Bolten, J.D. Assessing the Impact of ENSO on Agriculture Over Africa Using Earth Observation Data. Front. Sustain. Food Syst. 2020, 4, 188.
Figure 1. OpenTopoMap showing the location of the Barotseland Floodplain study site within western Zambia (highlighted with red square and point), with the main Zambezi Channel highlighted and the Leuna, Lui and Matabele Mulonga tributaries marked.
Figure 2. OpenTopoMap showing the North Rupununi Study Area within Guyana (highlighted with red square and point), with the Takutu, Ireng and Rupununi Rivers labelled. Areas of wetland are not currently depicted in OpenTopoMap data.
Figure 3. Flow diagram summarising the RadWet Sentinel-1 image classification processes for wetland mapping.
Figure 4. Diagram-based description of the split-based thresholding routine showing: (a) input Sentinel-1 observation; (b) split-tile variability based on the Euclidean distance of each sub-tile's cv and r from the scene-mean cv and r; (c) visual description of a homogeneous sub-tile, showing cv and r with the boundary of 3 standard deviations from the scene-mean cv and r, and a histogram of VV backscatter values; (d) candidate heterogeneous sub-tiles used for threshold calculation; (e) candidate heterogeneous sub-tile covering a class boundary of open water and dry background land cover, with a histogram of VV backscatter values and the subsequent split-based threshold.
Figure 5. A pseudo-code description of the split-based thresholding approach used to generate global image thresholds.
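The split-based thresholding routine summarised in Figures 4 and 5 can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the sub-tile size, the definition of r as the ratio of sub-tile mean to scene mean, the candidate-selection distance `k`, and the use of Otsu's method on the pooled heterogeneous tiles are all assumptions.

```python
import numpy as np

def otsu(values, bins=256):
    """Otsu's between-class variance maximisation on a 1-D sample."""
    hist, edges = np.histogram(values, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    total = hist.sum()
    sum_all = (hist * centres).sum()
    best_t, best_var = centres[0], -1.0
    w0 = 0.0
    sum0 = 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * centres[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, centres[i]
    return best_t

def split_based_threshold(img, tile=64, k=3.0):
    """Estimate a global backscatter threshold from heterogeneous sub-tiles.

    The scene is divided into sub-tiles; tiles whose coefficient of
    variation (cv) and mean ratio (r) lie far from the scene-wide means
    are treated as heterogeneous (mixed water/land) and pooled to derive
    a threshold from their bimodal histogram.
    """
    h, w = img.shape
    scene_mean = img.mean()
    stats, tiles = [], []
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            sub = img[i:i + tile, j:j + tile]
            cv = sub.std() / abs(sub.mean())   # coefficient of variation
            r = sub.mean() / scene_mean        # ratio to scene mean
            stats.append((cv, r))
            tiles.append(sub)
    stats = np.array(stats)
    mu, sd = stats.mean(axis=0), stats.std(axis=0)
    # Euclidean distance, in standard-deviation units, from the mean (cv, r)
    dist = np.sqrt((((stats - mu) / sd) ** 2).sum(axis=1))
    candidates = [t for t, d in zip(tiles, dist) if d > k]
    if not candidates:   # homogeneous scene: no reliable global threshold
        return None
    pooled = np.concatenate([t.ravel() for t in candidates])
    return otsu(pooled)
```

On a synthetic scene with dark water (~−20 dB) and brighter land (~−8 dB), the pooled heterogeneous tiles yield a threshold between the two backscatter modes.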
Figure 6. NASA GRACE and GRACE-FO (2000–2020) monthly gravity anomaly expressed as water-equivalent thickness. (a) Monthly water-equivalent thickness over the Barotseland Floodplain, with expected periods of high and low inundation timings defined by [5]. (b) Monthly water-equivalent thickness over the North Rupununi Wetlands, with expected periods of high and low inundation timings defined by [44,46].
Figure 7. Example of RadWet wet season classified output over the Barotseland Floodplain dated 19 April 2020 (a). Subset panels show the RadWet classified output at representative locations, alongside the Sentinel-1 imagery used for the classification and a reference Sentinel-2 optical image. Specifically, (b1–b3) is a section of the main Zambezi channel and surrounding river features; (c1–c3) represents dambo features atop the eastern escarpment; and (d1–d3) is the location of the in-flow of the Leuna floodplain into the main Barotseland Floodplain.
Figure 8. Example of RadWet dry season classified output over the Barotseland Floodplain dated 28 August 2018 (a). Subset panels show the RadWet classified output at representative locations, alongside the Sentinel-1 imagery used for the classification and a reference Sentinel-2 optical image. Specifically, (b1–b3) is a section of the main Zambezi channel and surrounding river features; (c1–c3) represents dambo features atop the eastern escarpment; and (d1–d3) is the location of the in-flow of the Leuna floodplain into the main Barotseland Floodplain.
Figure 9. Example of RadWet wet season classified output over the North Rupununi dated 2 October 2020 (a). Subset panels show the RadWet classified output at representative locations, alongside the Sentinel-1 imagery used for the classification and a reference Sentinel-2 optical image. Specifically, (b1–b3) covers a section of Amoko Lake; (c1–c3) shows the Ireng River, including agricultural land and seasonally inundated savannah; and (d1–d3) is the location of the confluence of the Takutu and Ireng Rivers.
Figure 10. Example of RadWet dry season classified output over the North Rupununi dated 28 March 2017 (a). Subset panels show the RadWet classified output at representative locations, alongside the Sentinel-1 imagery used for the classification and a reference Sentinel-2 optical image. Specifically, (b1–b3) covers a section of Amoko Lake; (c1–c3) shows the Ireng River, including agricultural land and seasonally inundated savannah; and (d1–d3) is the location of the confluence of the Takutu and Ireng Rivers.
Figure 11. (a) Landsat 8 composite (False Colour: NIR, SWIR1, Red); (b) Sentinel-1 observation (False Colour: VV, VH, NDPI); and (c) RadWet output product dated 8 October 2017, showing a section of the Lui River within the Barotseland Floodplain where inundated vegetation can be observed in the Landsat 8 composite, but the typical red/orange double-bounce signature is not present in the Sentinel-1 observation.
Figure 12. (a) PlanetLabs composite (False Colour: NIR, SWIR1, Red) [63]; (b) Sentinel-1 observation (False Colour: VV, VH, NDPI); and (c) RadWet output product dated 29 February 2020, showing a section of the Ireng River within the Rupununi Wetlands Region in Guyana, where the Sentinel-1 observation shows very low backscatter in the northeast, much lower than that of the Ireng River, but this area is shown not to be open water in the PlanetLabs composite.
Figure 13. Classification sensitivity analysis. (a) Classification accuracy for each class and the weighted average F1 score for the classification as classification replicate mode agreement increases, (b) classification accuracy for each class and the weighted average F1 score for the classification as the number of replicates increases.
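The replicate mode-agreement rule explored in Figure 13 can be illustrated with a short sketch. The function and parameter names (`min_agreement`, `nodata`) are hypothetical, and the paper's exact agreement rule may differ.

```python
import numpy as np

def ensemble_mode(reps, min_agreement=0.6, nodata=255):
    """Combine classification replicates by per-pixel majority vote.

    reps: (n_replicates, rows, cols) array of integer class labels.
    Pixels where the modal class wins fewer than `min_agreement` of the
    votes are set to `nodata`.
    """
    n = reps.shape[0]
    classes = np.unique(reps)
    # Votes per class at each pixel: shape (n_classes, rows, cols)
    counts = np.stack([(reps == c).sum(axis=0) for c in classes])
    mode = classes[counts.argmax(axis=0)]
    agreement = counts.max(axis=0) / n
    mode[agreement < min_agreement] = nodata
    return mode, agreement
```

For example, with three replicates voting (1, 1, 1) and (2, 2, 3) at two pixels, the modes are 1 and 2 with agreements 1.0 and 2/3; raising `min_agreement` above 2/3 masks the second pixel.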
Figure 14. (a) RadWet detected total wetted area, inundated vegetation, and open water extent between January 2017 and December 2021 (142 observations) for the Barotseland Floodplain, Zambia. (b) Rate-of-change derivative of the total wetted area, with wet season timings indicated.
Figure 15. (a) RadWet detected total wetted area, inundated vegetation, and open water extent between January 2017 and December 2021 (142 observations) for the Rupununi Floodplain, Guyana. (b) Rate-of-change derivative of the total wetted area, with wet season timings indicated.
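The rate-of-change curves in panel (b) of Figures 14 and 15 are first derivatives of the wetted-area time series. A minimal sketch, assuming area in km² and observation dates expressed in days (Sentinel-1 observations are nominally every 12 days, but gaps occur, so the derivative is taken against the actual dates):

```python
import numpy as np

def wetted_area_rate(dates_days, area_km2):
    """First derivative of total wetted area (km^2 per day).

    Positive values flag the rising limb of the flood pulse
    (wet-season onset); negative values flag the drawdown.
    """
    return np.gradient(np.asarray(area_km2, dtype=float),
                       np.asarray(dates_days, dtype=float))
```

For instance, a series of [100, 100, 220, 220] km² observed at days [0, 12, 24, 36] yields rates of [0, 5, 5, 0] km²/day, locating the flood pulse in the middle interval.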
Figure 16. (a) Median % wet season inundation occurrence for the Barotseland Floodplain over the period 2017–2021 and subsequent % deviation in inundation occurrence annually: (b) 2017, (c) 2018, (d) 2019, (e) 2020, (f) 2021.
Figure 17. (a) Median % wet season inundation occurrence for the Upper Rupununi wetlands over the period 2017–2021 and subsequent % deviation in inundation occurrence annually: (b) 2017, (c) 2018, (d) 2019, (e) 2020, (f) 2021.
Table 1. Summary of ancillary datasets used to assist Sentinel-1 image classification. All datasets were produced in Google Earth Engine [49] and exported as GeoTiff files for local processing.

| Dataset | Sensor/Data | Time Period | Unit | Reference |
|---|---|---|---|---|
| NDPI Mean; NDPI Std Dev; VV Mean; VV Std Dev; VH Mean; VH Std Dev | Sentinel-1 | 2014–2021 | Backscatter dB | N/A |
| TropWet Water Occurrence; TropWet Flat Bare Sand Occurrence | Landsat 5, 7, 8 | 2000–2020 | Percentage Occurrence | [24] |
| Slope Angle | WWF Hydrologically Conditioned DEM | N/A | Degrees | [50] |
Table 2. Summary of thematic classes used in the Sentinel-1 image classification along with a detailed description of the rule base applied to generate classification training data.

| Class Name | Expected Land Cover | Image Used | Training Data Generation Rules |
|---|---|---|---|
| Open Water (OW) | Open Surface Water; Completely Flooded Vegetation | Low Backscatter Image | TropWet Water Occurrence > 90% |
| Flat Bare Earth (FBE) | Flat Dry Sand; Flat Low Texture Bare Earth | Low Backscatter Image | TropWet Sand Occurrence > 50%; TropWet Water Occurrence < 100 × (1 − GRACE Index) |
| Inundated Vegetation (IV) | Flood Inundated Vegetation; Aquatic Vegetation | High Backscatter Image | NDPI Time Series Variance > 95th Percentile; Z-Score < −2; VH < SBT High Backscatter Threshold; Slope < 5° |
| Background Land Cover (B) | Dry Herbaceous Vegetation; Trees; Urban Areas | High Backscatter Image | NDPI Variance < 95th Percentile |
| Dense Vegetation (DV) | Dense Vegetation; Wet Sand (High Dielectric Constant) | High Backscatter Image | VH > SBT VH Threshold |
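Applied to per-pixel arrays, the Table 2 rule base might look like the following sketch. The array names, the ordering in which rules are applied, and the `sbt_high`/`sbt_vh` values (which RadWet derives automatically via split-based thresholding) are illustrative assumptions.

```python
import numpy as np

def label_training_pixels(vh, ndpi_var, zscore, slope_deg,
                          water_occ, sand_occ, grace_index,
                          sbt_high, sbt_vh):
    """Assign thematic training labels following the Table 2 rule base.

    Pixels matching no rule are left as 0 (unlabelled) and excluded
    from training.
    """
    OW, FBE, IV, DV = 1, 2, 3, 4
    labels = np.zeros(vh.shape, dtype=np.uint8)
    # Open Water: persistently wet in the TropWet record
    labels[water_occ > 90] = OW
    # Flat Bare Earth: sandy, rarely wet given current water storage
    fbe = (sand_occ > 50) & (water_occ < 100 * (1 - grace_index))
    labels[fbe & (labels == 0)] = FBE
    # Inundated Vegetation: variable NDPI, anomalous backscatter, flat terrain
    iv = ((ndpi_var > np.percentile(ndpi_var, 95)) & (zscore < -2)
          & (vh < sbt_high) & (slope_deg < 5))
    labels[iv & (labels == 0)] = IV
    # Dense Vegetation: VH above the split-based threshold
    labels[(vh > sbt_vh) & (labels == 0)] = DV
    return labels
```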
Table 3. Extra Random Forest hyperparameters used in grid-search selection for each classification replicate. The count of each hyperparameter value selected across classification replicates over the accuracy assessment scenes is included, as well as the most commonly selected (mode) value.

Barotseland Floodplain—Open Water

| Parameter | Search Range | Selected Values (Count) | Mode Value |
|---|---|---|---|
| Bootstrap | (True, False) | True (3640) | True |
| Max Depth | (8, 10, 15) | 8 (61), 10 (168), 15 (3411) | 15 |
| Min Samples Leaf | (4, 6, 8) | 4 (3500), 6 (112), 8 (28) | 4 |
| Min Samples Split | (2, 5, 10) | 10 (1560) | 10 |
| Number Estimators | (10, 20, 50, 100, 250, 500) | 10 (61), 20 (272), 50 (594), 250 (922), 500 (993) | 10 |

Barotseland Floodplain—Inundated Vegetation

| Parameter | Search Range | Selected Values (Count) | Mode Value |
|---|---|---|---|
| Bootstrap | (True, False) | True (3692) | True |
| Max Depth | (8, 10, 15) | 8 (1297), 10 (1335), 15 (1060) | 10 |
| Min Samples Leaf | (4, 6, 8) | 4 (3388), 6 (227), 8 (77) | 4 |
| Min Samples Split | (2, 5, 10) | 10 (1302) | 10 |
| Number Estimators | (10, 20, 50, 100, 250, 500) | 10 (45), 20 (324), 50 (755), 250 (925), 500 (738) | 250 |

Upper Rupununi—Open Water

| Parameter | Search Range | Selected Values (Count) | Mode Value |
|---|---|---|---|
| Bootstrap | (True, False) | True (3825) | True |
| Max Depth | (8, 10, 15) | 8 (1120), 10 (1321), 15 (1384) | 15 |
| Min Samples Leaf | (4, 6, 8) | 4 (3126), 6 (456), 8 (243) | 4 |
| Min Samples Split | (2, 5, 10) | 10 (1418) | 10 |
| Number Estimators | (10, 20, 50, 100, 250, 500) | 10 (277), 20 (580), 50 (860), 250 (682), 500 (640) | 50 |

Upper Rupununi—Inundated Vegetation

| Parameter | Search Range | Selected Values (Count) | Mode Value |
|---|---|---|---|
| Bootstrap | (True, False) | True (4025) | True |
| Max Depth | (8, 10, 15) | 8 (11), 10 (463), 15 (3551) | 15 |
| Min Samples Leaf | (4, 6, 8) | 4 (4022), 6 (3) | 4 |
| Min Samples Split | (2, 5, 10) | 10 (1524) | 10 |
| Number Estimators | (10, 20, 50, 100, 250, 500) | 10 (4), 20 (65), 50 (421), 250 (1216), 500 (1499) | 500 |
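The grid search behind Table 3 amounts to an exhaustive loop over the search ranges. The sketch below is dependency-free: in practice this is what scikit-learn's `GridSearchCV` over an `ExtraTreesClassifier` would do, with `evaluate` replaced by cross-validated classification accuracy for each replicate.

```python
from itertools import product

# Hyperparameter search ranges from Table 3 (Extra Randomised Trees)
GRID = {
    "bootstrap": [True, False],
    "max_depth": [8, 10, 15],
    "min_samples_leaf": [4, 6, 8],
    "min_samples_split": [2, 5, 10],
    "n_estimators": [10, 20, 50, 100, 250, 500],
}

def grid_search(grid, evaluate):
    """Exhaustive grid search: return the best-scoring parameter set.

    `evaluate` maps a parameter dict to a validation score; it is left
    pluggable so the sketch stays dependency-free.
    """
    names = list(grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

With scikit-learn this becomes `GridSearchCV(ExtraTreesClassifier(), GRID, cv=5)` fitted once per classification replicate, which is how the per-replicate selection counts in Table 3 arise.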
Table 4. Summary of Barotseland Floodplain classification accuracy assessment results. User's and producer's accuracies (%) are given per class: Open Water (OW), Inundated Vegetation (IV), and Dry Background Land Cover (B).

| Classification Method | Statistic | Overall Accuracy (%) | Kappa | OW User's | OW Producer's | OW F1 | IV User's | IV Producer's | IV F1 | B User's | B Producer's | B F1 | Macro F1 | Weighted Avg F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RadWet | Median | 88.675 | 0.804 | 87.809 | 96.267 | 0.918 | 75.644 | 91.561 | 0.828 | 95.497 | 85.522 | 0.902 | 0.883 | 0.885 |
| RadWet | 95% CI | 88.635–88.717 | 0.803–0.804 | 87.700–87.916 | 96.205–96.328 | 0.918–0.919 | 75.526–75.754 | 91.479–91.637 | 0.828–0.829 | 95.460–95.536 | 85.464–85.586 | 0.902–0.903 | 0.883–0.884 | 0.885–0.886 |
| Manually Defined Classification | Median | 82.432 | 0.699 | 93.165 | 93.165 | 0.941 | 59.566 | 83.184 | 0.694 | 91.262 | 79.039 | 0.847 | 0.828 | 0.818 |
| Manually Defined Classification | 95% CI | 82.379–82.485 | 0.699–0.700 | 93.084–93.247 | 93.084–93.247 | 0.941–0.942 | 59.429–59.692 | 83.065–83.299 | 0.693–0.695 | 91.210–91.314 | 78.965–79.108 | 0.847–0.848 | 0.827–0.828 | 0.817–0.818 |
| Hardy et al. [5] (15 Pixels Min Per Cluster) | Median | 84.270 | 0.727 | 83.396 | 92.489 | 0.877 | 64.944 | 91.065 | 0.758 | 95.554 | 79.920 | 0.870 | 0.835 | 0.838 |
| Hardy et al. [5] (15 Pixels Min Per Cluster) | 95% CI | 84.222–84.319 | 0.727–0.728 | 83.274–83.525 | 92.394–92.589 | 0.876–0.878 | 64.823–65.062 | 90.983–91.156 | 0.757–0.759 | 95.517–95.595 | 79.851–79.987 | 0.870–0.871 | 0.835–0.836 | 0.837–0.838 |
| Hardy et al. [5] (100 Pixels Min Per Cluster) | Median | 84.715 | 0.734 | 86.809 | 92.855 | 0.897 | 61.980 | 93.858 | 0.747 | 96.958 | 79.821 | 0.876 | 0.840 | 0.841 |
| Hardy et al. [5] (100 Pixels Min Per Cluster) | 95% CI | 84.668–84.765 | 0.734–0.735 | 86.701–86.920 | 92.759–92.945 | 0.897–0.898 | 61.855–62.100 | 93.784–93.931 | 0.746–0.748 | 96.925–96.989 | 79.752–79.891 | 0.875–0.876 | 0.839–0.840 | 0.840–0.841 |
Table 5. Summary of accuracy assessment statistics for the Rupununi Wetlands in Guyana. User's and producer's accuracies (%) are given per class: Open Water (OW), Inundated Vegetation (IV), and Dry Background Land Cover (B).

| Classification Method | Statistic | Overall Accuracy (%) | Kappa | OW User's | OW Producer's | OW F1 | IV User's | IV Producer's | IV F1 | B User's | B Producer's | B F1 | Macro F1 | Weighted Avg F1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RadWet | Median | 80.147 | 0.699 | 81.450 | 89.991 | 0.855 | 75.812 | 82.613 | 0.791 | 82.525 | 72.270 | 0.771 | 0.805 | 0.803 |
| RadWet | 95% CI | 80.063–80.236 | 0.697–0.700 | 81.297–81.600 | 89.874–90.114 | 0.854–0.856 | 75.654–75.974 | 82.458–82.761 | 0.789–0.792 | 82.381–82.656 | 72.121–72.418 | 0.769–0.772 | 0.805–0.806 | 0.802–0.804 |
| Manually Defined Classification | Median | 79.389 | 0.689 | 98.791 | 96.173 | 0.975 | 52.352 | 81.555 | 0.637 | 85.709 | 66.095 | 0.746 | 0.786 | 0.787 |
| Manually Defined Classification | 95% CI | 79.130–79.641 | 0.685–0.692 | 98.678–98.908 | 95.973–96.370 | 0.974–0.976 | 51.802–52.918 | 81.035–82.076 | 0.633–0.642 | 85.344–86.069 | 65.640–66.534 | 0.743–0.750 | 0.784–0.789 | 0.784–0.789 |
| Hardy et al. [5] (15 Pixels Min Per Cluster) | Median | 77.721 | 0.660 | 91.536 | 89.789 | 0.907 | 53.698 | 82.221 | 0.650 | 85.572 | 67.977 | 0.758 | 0.771 | 0.771 |
| Hardy et al. [5] (15 Pixels Min Per Cluster) | 95% CI | 77.632–77.810 | 0.658–0.661 | 91.434–91.645 | 89.671–89.903 | 0.906–0.907 | 53.498–53.894 | 82.040–82.422 | 0.648–0.651 | 85.453–85.709 | 67.826–68.111 | 0.757–0.759 | 0.770–0.772 | 0.770–0.772 |
| Hardy et al. [5] (100 Pixels Min Per Cluster) | Median | 76.403 | 0.637 | 92.135 | 91.085 | 0.916 | 42.244 | 86.603 | 0.568 | 90.716 | 64.981 | 0.757 | 0.747 | 0.749 |
| Hardy et al. [5] (100 Pixels Min Per Cluster) | 95% CI | 76.305–76.493 | 0.636–0.639 | 92.021–92.237 | 90.978–91.190 | 0.915–0.917 | 42.055–42.436 | 86.414–86.800 | 0.566–0.570 | 90.620–90.822 | 64.839–65.121 | 0.756–0.758 | 0.746–0.748 | 0.748–0.750 |
Oakes, G.; Hardy, A.; Bunting, P. RadWet: An Improved and Transferable Mapping of Open Water and Inundated Vegetation Using Sentinel-1. Remote Sens. 2023, 15, 1705. https://doi.org/10.3390/rs15061705