Technical Note

Semi-Automated Semantic Segmentation of Arctic Shorelines Using Very High-Resolution Airborne Imagery, Spectral Indices and Weakly Supervised Machine Learning Approaches

by Bibek Aryal 1,*,†, Stephen M. Escarzaga 2,†, Sergio A. Vargas Zesati 2,†, Miguel Velez-Reyes 3, Olac Fuentes 4 and Craig Tweedie 2

1 Computational Science Program, The University of Texas at El Paso, 500 W University Ave., El Paso, TX 79968, USA
2 Environmental Science and Engineering Program, The University of Texas at El Paso, 500 W University Ave., El Paso, TX 79968, USA
3 College of Engineering, Electrical & Computer Engineering, The University of Texas at El Paso, 500 W University Ave., El Paso, TX 79968, USA
4 Department of Computer Science, The University of Texas at El Paso, 500 W University Ave., El Paso, TX 79968, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2021, 13(22), 4572; https://doi.org/10.3390/rs13224572
Submission received: 29 September 2021 / Revised: 9 November 2021 / Accepted: 9 November 2021 / Published: 14 November 2021
(This article belongs to the Special Issue Computer Vision and Deep Learning for Remote Sensing Applications)

Abstract:
Precise coastal shoreline mapping is essential for monitoring changes in erosion rates, surface hydrology, and ecosystem structure and function. Monitoring water bodies in the Arctic National Wildlife Refuge (ANWR) is of high importance, especially considering the potential for oil and natural gas exploration in the region. In this work, we propose a modified variant of the deep neural network based U-Net architecture for the automated mapping of 4-band orthorectified NOAA airborne imagery using sparsely labeled training data, and we compare its performance to that of traditional Machine Learning (ML) approaches—namely, random forest and xgboost—and of spectral water indices—the Normalized Difference Water Index (NDWI) and the Normalized Difference Surface Water Index (NDSWI)—to support shoreline mapping of Arctic coastlines. We conclude that the U-Net model can be modified to accept sparse labels as input, and that its results are comparable to those of the other ML methods (an Intersection-over-Union (IoU) of 94.86% using U-Net vs. an IoU of 95.05% using the best performing method).

1. Introduction

The Arctic Ocean has the longest coastline of any ocean on Earth and has recently been characterized as one of the most climate change vulnerable ecosystems on the planet [1]. Some of the most widely seen changes that are occurring include the lengthening of open-water seasons, stronger storms, declining sea ice extent and thickness, lake drainage, and increased rates of coastal erosion, all of which are impacting a range of ecosystem goods and services, wilderness areas, numerous mostly coastal indigenous communities and their subsistence-based food security and cultural and heritage sites, as well as industry, defense, and energy related infrastructure [2,3,4,5]. Approximately 65% of the Arctic coastline is unlithified [6], persists barely above a rising sea level, and/or exhibits degrading ice-rich permafrost [7] that appears to be subsiding in many Arctic locations [8]. Importantly, most sustained arctic coastal erosion studies appear to show an increase in the rate of coastal erosion, and several suggest erosion rates for the last few decades are 50–100% greater than those recorded in the previous half century [9,10,11,12]. Increasing rates of coastal erosion translate to large amounts of soil organic carbon loss, which contribute to positive feedback loops that strengthen warming trends and accelerate ecological changes across the globe.
The need to improve our understanding of Arctic coastal change is well recognized [4,13,14,15]. However, due to the remoteness and difficulty of accessing much of the Arctic, implementing field-based mapping techniques presents a significant logistical challenge. Furthermore, practical limitations such as persistent cloud cover, dynamic weather patterns, and short snow- and ice-free summer months (∼3 months) make it difficult to accumulate a dense time series of remote sensing imagery for land surface and hydrological observation [16]. Beyond the safety considerations of navigating these remote regions, field methods are also costly and time-consuming. Remote sensing data can be helpful in this scenario, and as such, considerable effort has been focused on the development of semi-automated classification approaches using Earth Observation data. Progress, however, appears to have been hindered by a combination of the limited availability of Very High-spatial Resolution (VHR) (<1 m) remote-sensing data and a lack of the novel approaches typically practiced by the computer vision, artificial intelligence, and remote sensing communities. In recent years, researchers have exploited Synthetic Aperture Radar (SAR) images for arctic shoreline and surface water mapping [17,18,19,20]. Unlike optical sensors on most satellite and airborne platforms, SAR produces its own energy and records the amount of that energy reflected back after interacting with the Earth. This gives SAR the distinct advantage of being able to collect imagery regardless of weather, during both day and night.
However, open and freely available SAR imagery can, at its highest spatial resolution, only be obtained at a 10 m ground sampling distance from the European Space Agency’s (ESA) Sentinel-1 satellite. To better understand the patterns, controls, drivers, and impacts of coastal land loss, inundation, and fluxes of soil carbon to the nearshore marine environment, improved-precision coastline (i.e., shoreline) mapping efforts across large remote regions are required at very high spatial resolutions [6,21,22].
Various methods exist for delineating and mapping shoreline positions from both airborne and satellite imagery, including manual digitization [3,9,10,23], image classification [24], principal component analysis [25], thresholding and band ratios [26,27], and spectral indices [28,29]. The latter is the most widely used approach due to its simple calculation from multispectral bands that are commonly available on Earth-orbiting satellites (e.g., Sentinel 2 and GeoEye-1) [30,31] and on commercial-off-the-shelf (COTS) imaging systems used on airborne and, more recently, small unoccupied aerial systems (sUAS) [32]. Spectral water indices such as NDWI [33], the Modified Normalized Difference Water Index (MNDWI) [34], and the NDSWI [35] in particular have proved useful in helping to spectrally separate land and water pixels [35,36,37,38,39].
Developing effective automated/semi-automated techniques for accurate land cover mapping is just as necessary as the availability of VHR remote sensing images, since periodic manual labeling of remote sensing images over a large area is practically impossible. Fortunately, over the years, advances in computer vision and remote sensing have allowed researchers to automate this task using various ML methods [40,41,42]. The introduction of geospatial cloud computing platforms, like Google Earth Engine, has allowed researchers to harness the power of distributed computing for processing dense stacks of satellite imagery and performing machine learning routines for the extraction of coastline data [43]. Neural networks were used as early as 1991 for the extraction of shoreline features [44] using a Multi-Layered Perceptron (MLP). In recent years, many researchers have focused on using Deep Learning (DL) algorithms to extract water bodies from remote sensing images [45,46,47]. Land cover mapping using Convolutional Neural Networks (CNNs) has likewise seen great success in different application domains using remote sensing images [48,49,50]. Advanced DL architectures used to extract water bodies from remote sensing images include CNN [51], a stacked sparse autoencoder architecture [52], the U-Net architecture [53], an edge-aware CNN [54], the Fully Convolutional Network (FCN) [55], the restricted receptive field deconvolution network (RRF DeconvNet) [56], DeepUNet [38], Mask R-CNN [57], and a hybrid ML system consisting of a CNN and a logistic regression classifier [58]. Most of the research involving advanced DL architectures was conducted using 30 m spatial resolution Landsat images.
Despite the advances in techniques for land cover mapping in recent years, access to labelled data remains a limiting factor. Deep neural networks usually require thousands of images as training samples in which the desired features have already been annotated. However, a recently developed CNN-based model, U-Net [59], has been shown to provide highly accurate semantic segmentation with small amounts of training data. Since manual labelling from satellite images requires considerable time and effort to generate training labels, the U-Net model is a good alternative in this situation. While the U-Net model requires significantly smaller volumes of data than traditional deep-learning models, one of its shortcomings is the need for accurate pixel-level annotations during training. These annotations, typically referred to as dense labels, as seen in Figure 1b, require a considerable amount of time and effort, and often skilled annotators with a good understanding of the region of interest and the labelled classes. Sparsely labelled data, as seen in Figure 1c, where only some pixels of a given image are labelled, can be collected in large amounts relatively quickly and cheaply, often without the need for an expert. Promising advances in computer vision research have shown that methods can learn from unlabeled or partially labeled data [60]. As such, many research projects have focused on training CNN models with sparse training labels [61,62,63]. In this paper, we present supervised ML methods that can learn from sparsely labeled data to segment land and water pixels in high-resolution imagery, and we compare their performance to remote sensing-derived indices for detecting water and land boundaries.
In summary, the main objectives of this paper are to:
  • Compare the performances of different ML algorithms and remote sensing indices derived from VHR airborne multispectral imagery for shoreline mapping on the Beaufort Sea coast of the Arctic National Wildlife Refuge, Alaska;
  • Modify the U-Net model (a supervised learning approach for deep neural networks) to accept sparse labels as input for generating densely segmented labels as output.

2. Study Area and Data Sources

Coastlines along the Beaufort Sea are geomorphologically variable but can generally be classified as either bays/inlets, deltas, exposed bluffs, lagoons or tapped basins [64], where coastal lagoons make up over 50% of the region [65]. There are numerous factors controlling erosion of these coastal features including duration of sea ice-free extent, wind fetch length, nearshore bathymetry, land cover type, and ground ice-content [11,66]. These coastal features contain substantial stores of soil organic carbon (SOC) [66] and the erosion and subsequent release of SOC is partially responsible for the organic matter input to nearshore marine environments that supports productive food webs [67] while riverine sediment transport also plays a significant role [68]. This influx of terrestrial and organic materials along with the range of water depths and sediment compositions are what makes these nearshore waters (from a remote sensing perspective) optically complex [4,68]. Within these areas are shallow lagoons and embayments with depths typically no greater than 10 m [69]. Sea level is minimally impacted by tidal range along the Beaufort Sea Coast (<50 cm) but can be dramatically elevated by a couple of meters through wind driven action [3,70].
This study focuses on the coastal margin of the “1002 area” of the Arctic National Wildlife Refuge (ANWR). ANWR is a ∼78,000 km² coastal plain region on the eastern North Slope of Alaska and was established as a refuge in 1980 through the Alaska National Interest Lands Conservation Act by Congress, recognizing the large potential for oil and gas resources and its importance as a wildlife habitat [3]. The 1002 area consists of barrier islands, salt marshes, coastal lagoons, coastal bluffs and river deltas that provide habitats for over 42 fish species, 37 land mammals, eight marine mammals and ∼200 residential and migratory bird species (Figure 2) [3]. These coastlines can display narrow, low-lying beaches while backshore coastal morphology can consist of sand and gravel beaches, barrier islands, wetlands, barrier spits, and low-lying permafrost coastal bluffs with a range of ice content that are typically 2–6 m above sea-level in some areas [3,69,71]. Surface features of the coastal plain within our study area consist of tapped and untapped thermokarst lakes, coalesced low-center polygons, and braided rivers rich with sediment from the interior Brooks Range [3].
We obtained National Oceanic and Atmospheric Administration (NOAA) RSD high-resolution airborne RGB and NIR imagery covering the roughly 170-kilometer coastline of the 1002 area of ANWR, collected between 18–19 July 2017. For both RGB and NIR scenes, 265 orthomosaiced image tiles were downloaded through NOAA’s Data Access Viewer (https://www.coast.noaa.gov/dataviewer/ (accessed on 23 January 2021)). Each image tile measured 2.5 km × 2.5 km, for a total imagery footprint of 1672.80 sq. km (Figure 3 and Figure 4). Image tiles were downloaded in either 3-band (RGB) or 1-band (NIR), 8-bit GeoTIFF format and later reprojected to the NAD83/Alaska Albers (EPSG: 3338) coordinate reference system. Finally, corresponding RGB and NIR image tiles were composited into 265 4-band images using ArcMap 10.6. NOAA RSD collected this imagery from a Beechcraft King Air 350CER manned aircraft flying at a nominal altitude of ∼2286 m above ground level (AGL) with two Applanix Digital Sensor System (DSS) SN580 cameras (one each for RGB and NIR). Image capture and precision georeferencing were synchronized and completed with an on-board Applanix POS/AV410 Global Navigation Satellite System (GNSS) and Inertial Measurement Unit (IMU). The RGB and NIR camera systems had focal lengths of 52 mm and CCD pixel sizes of 5.2 × 5.2 µm and 6.0 × 6.0 µm, respectively, while the ground sampling distance (GSD) of posted RGB and NIR orthomosaic image tiles was 35 cm. The stated horizontal accuracy for posted orthomosaics was +/−1.5 m at 95% CI.

3. Methods

The task of identifying and mapping geomorphological features in remote sensing images fits well within the framework of semantic segmentation. Semantic segmentation is one of the oldest and most widely studied problems in computer vision [72,73,74,75,76] and involves understanding not only what objects are in a scene, but also which regions of the image they occupy and at what spatial resolution. In recent years, land cover mapping using semantic segmentation of satellite/airborne images has seen great success in different application domains. This can partly be credited to an increasingly large amount of fully annotated images. However, collecting large-scale, accurate pixel-level annotations is time consuming and sometimes requires substantial financial investment and skilled labor. We get around such challenges by introducing a new method to train a modified U-Net model with easy-to-generate sparse labels. Here, we apply two remote sensing indices—NDWI [33] and NDSWI [35]—two classical ML techniques—random forest and eXtreme Gradient Boosting, or xgboost [77]—and a modified U-Net (Section 3.2) to automate the mapping of water bodies from airborne imagery in a high-performance cloud computing environment. We then fine-tuned the thresholds used to generate binary labels using a Decision Stump (DS) (Section 3.3). We utilized a modified version of the U-Net architecture [59] to perform semantic segmentation, i.e., pixel-wise classification in images. Using the ANWR as the study area, we leveraged freely available high-resolution orthomosaic imagery for training and evaluation. We generated sparse labels for training and dense labels for evaluation using the approach in Section 3.1. Using these resources, we developed an extensible pipeline, a dataset, and baseline methods that can be utilized for generating land/water masks from high-resolution airborne images. We also present qualitative and quantitative results describing the properties of our models.

3.1. Label Creation Strategy

Composited 4-band airborne image tiles were used to manually delineate areas of both water and land pixels by creating polygon features in ESRI shapefile format within ArcMap. These shapefiles were then used as model training and testing datasets. From the 265 total airborne scenes, 165 scenes were hand-annotated with sparse labels by two remote sensing and ecological scientists familiar with landforms in the study area. Annotations were used for training after filtering out scenes that contained only water or only land, or that had image artifacts due to orthomosaicing. A total of 20 different scenes were used to create dense labels for the test sets. The strategy used by the specialists to make the sparse labels consisted of creating circular polygon features of various sizes within some (but not all) land and water regions of each scene. The dense labels, however, were annotated more carefully, and therefore more time was needed to delineate every single water pixel in each scene. While coastal water annotation was visually straightforward, with the exception of deltaic regions, some terrestrial features required additional scrutiny to determine the presence of surface water. Annotators used a combination of visual characteristics to classify a feature as containing surface water:
  • Dark color in the visual spectrum, indicating sufficient light attenuation in standing water;
  • The presence of reflected light due to ripples or waves caused by wind; and
  • The presence of accumulated white water on the western shorelines of water bodies caused by prevailing easterly winds.
The corresponding land labels were created by inverting water polygons for each scene.
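The sparse-label convention above (small circular polygons inside some land and water regions, everything else unlabeled) can be illustrated with a short NumPy sketch. This is a hypothetical stand-in for the actual ESRI-shapefile workflow in ArcMap; the `(row, col, radius, class_id)` tuple format and the 0 = unlabeled, 1 = land, 2 = water encoding are our own illustrative choices.

```python
import numpy as np

def rasterize_sparse_labels(shape, disks):
    """Paint circular annotations into a sparse label mask.

    disks: iterable of (row, col, radius, class_id) tuples, with
    class_id 1 = land and 2 = water; 0 marks unlabeled pixels.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    for r, c, radius, cls in disks:
        # Fill every pixel whose center lies inside the disk.
        mask[(rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2] = cls
    return mask
```

In this encoding, most of each scene stays 0 (unlabeled), which is exactly the property the masked dice loss in Section 4 exploits.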

3.2. Model Selection

3.2.1. Spectral Water Indices

This study focused on testing and utilizing the capacity of NDWI and NDSWI to mask water and land pixels along arctic coastal tundra shorelines. Due to the selection of available bands in the source imagery used in this study (near-infrared, Red, Green, and Blue), we were limited in the number of indices useful for water detection (primarily those that utilize the near-infrared band, as opposed to the shortwave-infrared). Moreover, NDSWI was specifically developed using in situ hyperspectral data of tundra wetlands, with the goal of developing an index that is not confounded by the atmospheric moisture to which other water spectral indices have shown sensitivity in these arctic coastal marine ecosystems [35]. The equations for the indices are shown in Equation (1).
$$\mathrm{NDWI} = \frac{\mathrm{NIR} - \mathrm{Green}}{\mathrm{NIR} + \mathrm{Green}}, \qquad \mathrm{NDSWI} = \frac{\ln(\mathrm{NIR}) - \ln(\mathrm{Red})}{\ln(\mathrm{NIR}) + \ln(\mathrm{Red})}. \tag{1}$$
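As a concrete reading of Equation (1), both indices can be computed per pixel with NumPy. This is a minimal sketch: it assumes digital numbers of at least 1 for NDSWI (e.g., 8-bit imagery rescaled to 1–255) so both logarithms are defined, and it omits division-by-zero guards for uniform pixels.

```python
import numpy as np

def ndwi(nir, green):
    """NDWI per Equation (1): (NIR - Green) / (NIR + Green)."""
    nir, green = np.asarray(nir, float), np.asarray(green, float)
    return (nir - green) / (nir + green)

def ndswi(nir, red):
    """NDSWI per Equation (1): (ln NIR - ln Red) / (ln NIR + ln Red).
    Assumes digital numbers >= 1 so the logarithms are non-negative."""
    ln_nir = np.log(np.asarray(nir, float))
    ln_red = np.log(np.asarray(red, float))
    return (ln_nir - ln_red) / (ln_nir + ln_red)
```

In the pipeline of Section 4, both functions are applied band-wise to whole image tiles and appended as extra channels.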

3.2.2. Machine Learning

Fundamental to any form of geospatial remote sensing image processing is the need for reliable, repeatable, and accurate landscape feature (e.g., shoreline, bluff edge, and beach width) identification and delineation. Arctic Coastal Change Detection (ACCD) is challenged by the need to detect change in coastal features (waterline, bluff edge, beach, etc.) over thousands of kilometers of coast at high spatial and temporal resolutions. This type of land cover mapping is an application of a wider class of problems in the computer vision community, known as semantic segmentation, for which supervised ML approaches have performed well. Furthermore, these techniques have improved rapidly in recent years due to progress in deep learning and semantic segmentation with Convolutional Neural Networks (CNNs) [59,78]. Recently, high-performance computing, ML, and deep learning approaches have provided solutions for efficient and accurate landscape feature mapping across different ecosystems. In the Arctic, studies have delineated polygonal tundra geomorphologies [45,79], arctic lake features [80], glacier extents [48,81,82], and coastal features [40,41,42,83]. In this research, we propose an automated pipeline using traditional ML methods—random forest and xgboost—and a deep neural network based U-Net architecture for arctic coastal mapping, and we compare their performances. One advantage of ML approaches over spectral indices is that ML-based land cover mapping generalizes to features such as impervious surfaces, wetlands, and plant functional types (PFTs) for which the spectral indices are not well defined.
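The classical ML baselines here treat each pixel's band values (plus index channels) as a feature vector and fit a supervised classifier to the sparsely labeled pixels only. The sketch below uses scikit-learn's `RandomForestClassifier`; the feature layout and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_pixel_classifier(image, sparse_mask, n_trees=50, seed=0):
    """Train a per-pixel random forest from sparsely labeled pixels.

    image:       (H, W, C) float array (e.g., R, G, B, NIR [+ indices]).
    sparse_mask: (H, W) int array; 0 = unlabeled, 1 = land, 2 = water.
    Only labeled pixels contribute training samples.
    """
    labeled = sparse_mask != 0
    X = image[labeled]          # (N_labeled, C) feature vectors
    y = sparse_mask[labeled]
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    clf.fit(X, y)
    return clf

def predict_water_probability(clf, image):
    """Return an (H, W) probability map for the water class (label 2)."""
    h, w, c = image.shape
    proba = clf.predict_proba(image.reshape(-1, c))
    water_col = list(clf.classes_).index(2)
    return proba[:, water_col].reshape(h, w)
```

The probabilistic output is what the threshold fine-tuning step (Section 3.3) later converts to a binary land/water mask.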

3.3. Threshold Fine-Tuning

Given that NDWI and NDSWI values range from −1 to 1, that the outputs from random forest, xgboost, and U-Net are probabilistic and range from 0 to 1, and that we require binary labels as the final segmentation mask, we needed an effective way to convert these intensities to a binary mask. A simple and widely used approach is to use a threshold of 0.5 for probabilistic output intensities and to select a threshold from the literature for NDWI (>=0.3 for water [84]) and NDSWI. Other methods for threshold fine-tuning include analyzing the ROC curve [85], Otsu’s method [86], and the DS—a one-level decision tree [87]. We implement a DS with IoU as the single input feature using exhaustive search.
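The DS fine-tuning can be sketched as an exhaustive search over candidate cuts, scoring each by water-class IoU on the validation set. The search range and the 0.01 step below mirror the sweep described in Section 5.1; the helper names are our own.

```python
import numpy as np

def iou_water(pred, truth):
    """IoU of the water class for two boolean masks (True = water)."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def decision_stump_threshold(intensity, truth, lo=-1.0, hi=1.0, step=0.01):
    """One-level decision tree by exhaustive search: try every threshold
    in [lo, hi] and keep the one maximizing water IoU on validation data."""
    best_t, best_iou = lo, -1.0
    for t in np.arange(lo, hi + step, step):
        iou = iou_water(intensity >= t, truth)
        if iou > best_iou:
            best_t, best_iou = t, iou
    return best_t, best_iou
```

For the probabilistic model outputs the same search is run over [0, 1] instead of [−1, 1].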

4. Architectural Overview

We used two different spectral water indices—NDWI and NDSWI—and three ML methods—random forest, xgboost, and a modified variant of the U-Net architecture [59]—in this study. Since U-Net expects training labels to be dense, we modified the dice loss [88] by masking it with the pixel locations present in the sparse labels. This masked dice loss, with gradient descent as the optimization algorithm, was then used to train the modified U-Net architecture.
Our approach is summarized in the multi-step pipeline presented in Figure 5, using NDWI and NDSWI to generate intensity masks (see Section 3.2.1). These approaches for generating intensity masks do not require training labels. To train the ML models, we first converted the raw vector sparse labels to corresponding image masks for each image. We then augmented the 4-band orthorectified airborne imagery with NDWI and NDSWI as additional channels, divided each airborne image and its corresponding mask into 225 subregions, filtered out any subregion in which fewer than 10% of pixels were labeled, and randomly split the subregions into training, testing, and validation data sets. The training and validation samples are normalized dynamically during training with the mean and standard deviation of the test set. As a post-processing step, we then generated a binary mask by thresholding the intensity for the “water” class using optimal thresholds calculated on the validation set. The final scores that we report use the densely labelled test set.
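The tiling-and-filtering step of the pipeline can be sketched as follows. The 10% labeled-pixel cutoff and the zero-means-unlabeled mask convention follow the description above; the non-overlapping grid and function signature are illustrative assumptions.

```python
import numpy as np

def tile_and_filter(image, sparse_mask, tile=512, min_labeled=0.10):
    """Split an image and its sparse mask into non-overlapping
    tile x tile subregions, keeping only those where at least
    min_labeled of the pixels carry a label (0 = unlabeled)."""
    tiles = []
    h, w = sparse_mask.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            m = sparse_mask[r:r + tile, c:c + tile]
            if (m != 0).mean() >= min_labeled:
                tiles.append((image[r:r + tile, c:c + tile], m))
    return tiles
```

The surviving (subregion, mask) pairs are then shuffled and split into the training, validation, and test partitions.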

Masked Dice Loss

With the introduction of Convolutional Neural Networks (CNNs), many application areas involving semantic segmentation have achieved good results [48,59,78]. One downside of these CNN architectures is the need for densely labeled segmentation data, which can be time consuming to obtain in large amounts. It is therefore a promising direction in computer vision research to develop semantic segmentation methods that can learn without dense labels; such incomplete labels are commonly referred to as weak labels. Previous research has reported semantic segmentation networks trained with various types of weak labels, such as image-level annotations [60,89] and sparse labels [61,63]. In this research, we present the masked dice loss and a method to train a deep neural architecture using sparse training labels and the masked dice loss. Dice loss is based on the Sorensen–Dice coefficient [90,91], a statistic used to gauge the similarity of two samples. In the computer vision community, it was introduced for 3D medical image segmentation [92]. Dice loss is given by the equation:
$$D = \sum_{c \in C} \frac{2 \sum_{i \in I} p_{i,c} \, g_{i,c}}{\sum_{i \in I} p_{i,c}^{2} + \sum_{i \in I} g_{i,c}^{2}},$$
where $C$ is the set of classes present in the image, $I$ is the set of pixels in the image, $p_{i,c}$ denotes the probabilistic output from the model for class $c$ at position $i$, and $g_{i,c}$ denotes the ground truth value for class $c$ at position $i$.
For our purposes, we need to train the semantic segmentation network using sparse labels, so we introduce the masked dice loss:
$$M(i) = \begin{cases} \mathrm{TRUE}, & \text{if } \sum_{c \in C} g_{i,c} \neq 0, \\ \mathrm{FALSE}, & \text{otherwise}, \end{cases} \qquad \mathrm{Masked}D = \sum_{c \in C} \frac{2 \sum_{i \in I,\, M(i)=\mathrm{TRUE}} p_{i,c} \, g_{i,c}}{\sum_{i \in I,\, M(i)=\mathrm{TRUE}} p_{i,c}^{2} + \sum_{i \in I,\, M(i)=\mathrm{TRUE}} g_{i,c}^{2}}.$$
Implementing the masked dice loss, gradients are computed and back-propagated based only on the output for the pixels present in the ground truth sparse labels.
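A framework-agnostic NumPy sketch of the masked dice computation follows (the actual training used PyTorch, per the Reproducibility note). Converting the summed coefficient into a loss via 1 − mean Dice is our own illustrative choice; the masking of unlabeled pixels follows the M(i) definition above.

```python
import numpy as np

def masked_dice_loss(probs, onehot, eps=1e-7):
    """NumPy sketch of the masked dice loss.

    probs:  (C, H, W) per-class probabilities from the network.
    onehot: (C, H, W) one-hot ground truth; pixels whose one-hot
            column is all zero are unlabeled (M(i) = FALSE) and
            are excluded from every sum.
    Returns 1 - mean Dice over classes as an illustrative loss form.
    """
    labeled = onehot.sum(axis=0) > 0        # M(i) over all pixels
    p = probs[:, labeled]                   # (C, N_labeled)
    g = onehot[:, labeled]
    dice = (2.0 * (p * g).sum(axis=1)) / (
        (p ** 2).sum(axis=1) + (g ** 2).sum(axis=1) + eps)
    return float(1.0 - dice.mean())
```

Because only labeled pixels enter the sums, gradients in a differentiable framework would likewise flow only through those pixels, as described above.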
Reproducibility: Our implementation is based on scikit-learn [93] and PyTorch [94]. All networks were trained on an Azure NC6 Virtual Machine powered by an NVIDIA Tesla K80 GPU. The code to replicate our process is available at https://github.com/Aryal007/coastal_mapping (accessed on 30 May 2021).

5. Results

Figure 6 shows examples of output land/water segmentation masks using the different spectral indices and ML approaches. From the figure, we can observe that the U-Net model produces intensities closer to the extreme values for land/water classification than the other methods. For quantitative evaluation, the intensity masks need to be converted to binary masks with a unique value for each class (in this case, land and water), as seen in Figure 7.

5.1. Threshold Fine-Tuning

Rather than simply finding the single threshold that works best for each method, we plot a curve showing the performance at every possible threshold with a step size of 0.01. We use the validation set to determine the threshold that produces the highest IoU for the water class with each method. The highest IoU and the threshold that yielded it on the validation set are summarized in Table 1.
Figure 8 shows the histogram of pixels in the land and water classes in the validation set, the IoU for the land/water classes at threshold intervals of 0.01, the threshold that yielded the maximum IoU, and the maximum possible IoU. Based on the distribution of the histograms for the land and water classes, we expect the results obtained using the remote sensing indices to be more sensitive to the threshold value. This is exactly what we see in the line graph showing land and water IoU at each threshold interval. It is also important to note that the DS-computed thresholds for NDWI (0.78), random forest (0.4), and xgboost (0.38) differ from the commonly used thresholds (0.3, 0.5, and 0.5, respectively), while the DS-computed threshold for the U-Net model is close to the commonly used value of 0.5. This means that the performance of the U-Net model is less dependent on finding the optimal threshold than that of the other methods. At the time of writing, we could not find a published recommended threshold value for NDSWI. Based on our findings, we propose a value of 0.48 as the optimal NDSWI threshold for land/water segmentation in environments similar to the arctic coastal plain of Alaska.

5.2. Evaluations

We evaluated the performances of the spectral water indices, ML models, and U-Net on the densely labeled test set, using IoU, Precision, and Recall as our evaluation metrics. The comparative performance of the different methods can be seen in Table 2.
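The three metrics can be computed from confusion counts for the water class. This minimal sketch treats True = water and omits handling of degenerate cases (e.g., a tile with no predicted or true water).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """IoU, precision, and recall for a binary mask (True = water)."""
    tp = np.logical_and(pred, truth).sum()   # correctly predicted water
    fp = np.logical_and(pred, ~truth).sum()  # land predicted as water
    fn = np.logical_and(~pred, truth).sum()  # water predicted as land
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return iou, precision, recall
```

Swapping the roles of True and False gives the same metrics for the land class.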

Region Based Evaluations

Each scene was divided into 225 (512 × 512-pixel) subregions with no overlap. The subregions were classified as coastal if they contained portions of the land/water interface adjacent to mainland backshore environments. This classification excluded barrier islands and deltaic regions where narrow beaches and/or permafrost bluffs do not directly interface with the waterline. A total of 162 subregions were classified as coastal. We see a similar performance trend across all the models, with random forest performing the best in terms of IoU. The comparative performance of the different methods for region-based evaluation can be seen in Table 3.

6. Discussion

Unprecedented change in the Arctic has drawn the attention of numerous large-scale and long-term research initiatives. NASA’s Arctic-COLORS (https://arctic-colors.gsfc.nasa.gov/ (accessed on 12 November 2021)) and Arctic Boreal Vulnerability Experiment (ABoVE) (https://above.nasa.gov/ (accessed on 12 November 2021)), as well as the newly funded National Science Foundation Long Term Ecological Research project (Beaufort Lagoon Ecosystems LTER), are a few of a much larger group of initiatives currently conducting research to better understand biogeochemical processes and land-marine interactions, and how arctic coastal change is modifying ecological properties and processes. More accurate and high-resolution mapping data will no doubt aid the various research efforts being conducted in this field. Technological advances in remote sensing, computer vision and high-performance computing (HPC), along with the increase in large-scale, agency-level airborne campaigns such as NOAA’s coastal imaging missions conducted by their Remote Sensing Division (RSD), provide a unique opportunity for mapping arctic shorelines across large areas at high spatial resolutions on Alaska’s North Slope.
NOAA airborne image collections across coastal regions primarily bolster NOAA’s mission goal of coastal resiliency by serving as baseline datasets for creating high-resolution orthomosaic imagery to aid in navigation, determining pre- and post-storm conditions, and facilitating coastal-zone management. Shoreline vectors (digital representations of the interface between land and water) derived from this imagery are used in, among many other research and management purposes, efforts for tracking and quantifying rates of coastal change. Generally, these shorelines are operator-derived mono- or stereoscopically, manually digitized or created through feature extraction routines, and published to NOAA’s Continuously Updated Shoreline Product (CUSP). Furthermore, these image collections extend well beyond Alaska’s coastal regions. Similar VHR, 4-band, airborne imagery is collected along the majority of coastlines in the contiguous United States (including the Great Lakes coastal areas) and is freely available to the public. With the open-source methodology presented here, similar land/water segmentation efforts can be expanded to a wide range of coastal regions coinciding with available NOAA image collections.
We show that accurate shoreline mapping in the “1002 area” of the Arctic National Wildlife Refuge (ANWR) can be obtained using two different remote sensing indices, two traditional ML approaches, and a DL method. However, the models have not been tested outside of this region. Direct comparison of the results presented here to previous work is difficult due to the variety of methods and source imagery used in the literature. Similar work to create land/water masks from 1 m resolution, airborne, color-infrared imagery using NDWI thresholds and object-based classification methods in Arctic-Boreal regions reports recall, precision, and IoU of 0.94, 0.87, and 0.83, respectively, for the water class, and of 0.98, 0.99, and 0.98, respectively, for the land class [95]. While we expect the performance of the different models to be relatively similar for this task in other regions, we may see an improvement in performance from DL methods with a larger number of land cover classes [96]. Based on the body of literature around the performance of the U-Net architecture, one would expect U-Nets to outperform single-pixel based models. However, random forest may have performed better for this task because the U-Net model was trained using sparse labels. Furthermore, as CNN models make the explicit assumption that the inputs are images, and thus compute the output of neurons that are connected to local regions in the input, we may not have been able to utilize the full spatial properties of the U-Net model due to the sparsely labelled training data. Further research should consider inter-comparison with a model trained using dense labels to utilize the full spatial properties of the convolutional neural network based architecture.
For future work, we will also investigate the performance of a U-Net model trained with sparse labels at classifying edge pixels, using metrics that better capture change along the highest-priority coastal sections in Alaska [97].
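One common way to train a segmentation network such as U-Net on sparse labels is to mask unlabeled pixels out of the loss. The sketch below illustrates the general idea in PyTorch; the sentinel value, tensor shapes, and toy label positions are illustrative assumptions rather than the paper’s exact configuration:

```python
import torch
import torch.nn as nn

# Sparse labels: most pixels carry a sentinel value meaning "unlabeled".
# CrossEntropyLoss's ignore_index excludes those pixels from the loss,
# so gradients flow only through the sparsely annotated pixels.
UNLABELED = -1
criterion = nn.CrossEntropyLoss(ignore_index=UNLABELED)

# (batch, classes, H, W) logits for a 2-class land/water problem.
logits = torch.randn(2, 2, 4, 4, requires_grad=True)

# Start with every pixel unlabeled, then mark a few annotated pixels.
labels = torch.full((2, 4, 4), UNLABELED, dtype=torch.long)
labels[0, 1, 1] = 0  # a hand-labeled land pixel
labels[1, 2, 3] = 1  # a hand-labeled water pixel

loss = criterion(logits, labels)
loss.backward()  # logits.grad is zero at every unlabeled pixel
```

Because only annotated pixels contribute to the loss, label density directly limits how much spatial context the network can learn to exploit.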

7. Conclusions

We have addressed the problem of training a DL-based U-Net model with sparse labels and have shown that sparsely labeled data allow it to learn a distribution comparable to other ML-based methods. The results are very competitive, although the random forest model provides slightly better results than the sparsely trained U-Net for our land/water classification task. From an operational perspective, our findings suggest that efficient and accurate surface water mapping can be achieved with less labeling effort and a lower barrier to entry in terms of computer science expertise. Since the performance of the remote sensing indices depends strongly on finding the optimal threshold, however, an exhaustive threshold search is needed to obtain the best results. The remote sensing indices (NDWI and NDSWI) perform relatively similarly to the ML-based approaches, which could be attributed to the simplicity of the task itself (only two output classes), or to the limited multispectral information incorporated in the input indices.
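As a concrete illustration of the index-plus-threshold pipeline, the sketch below computes McFeeters’ NDWI from green and NIR reflectance and performs the kind of exhaustive decision-stump threshold search mentioned above. Function names, the search step size, and the toy data are illustrative assumptions, not the paper’s exact implementation:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI: (G - NIR) / (G + NIR), in [-1, 1]."""
    return (green - nir) / (green + nir + 1e-9)  # small term avoids /0

def best_threshold(index, truth, step=0.01):
    """Exhaustive decision-stump search: pick the threshold whose
    binary water mask (index >= t) maximizes IoU with the truth."""
    best_t, best_iou = None, -1.0
    for t in np.arange(-1.0, 1.0 + step, step):
        pred = index >= t
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        iou = inter / union if union else 0.0
        if iou > best_iou:
            best_t, best_iou = t, iou
    return best_t, best_iou

# toy 4-pixel patch: water pixels have high green, low NIR reflectance
green = np.array([0.6, 0.6, 0.2, 0.2])
nir = np.array([0.1, 0.1, 0.5, 0.5])
truth = np.array([True, True, False, False])
t, iou = best_threshold(ndwi(green, nir), truth)  # perfectly separable here
```

On real imagery the classes are rarely perfectly separable, and the search simply returns the threshold with the highest achievable IoU.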

Author Contributions

Conceptualization, B.A., S.A.V.Z. and S.M.E.; methodology, B.A. and O.F.; Coding, B.A.; validation, S.M.E. and S.A.V.Z.; resources, S.M.E.; data curation, S.A.V.Z. and S.M.E.; writing—original draft preparation, B.A., S.A.V.Z. and S.M.E.; writing—review and editing, O.F., C.T. and M.V.-R.; visualization, B.A.; supervision, O.F., C.T. and M.V.-R.; project administration, S.A.V.Z.; funding acquisition, S.A.V.Z., S.M.E. and M.V.-R. All authors have read and agreed to the published version of the manuscript.

Funding

Miguel Velez-Reyes, Craig Tweedie, and Stephen Escarzaga were partially supported by the National Oceanic and Atmospheric Administration, Office of Education Educational Partnership Program award number NA16SEC4810008 and by the NSF LTER award number: 1656026. Sergio Vargas Zesati was partially supported by NASA award numbers NNX17AC58A and 80NSSC21K1164 and NSF ITEX-AON award number: 1836861. The content of the paper is solely the responsibility of the award recipients and does not necessarily represent the official views of the U.S. Department of Commerce, National Oceanic and Atmospheric Administration.

Data Availability Statement

All data used during the study were downloaded through NOAA’s Data Access Viewer (https://www.coast.noaa.gov/dataviewer/ (accessed on 23 January 2021)).

Acknowledgments

We would like to thank Microsoft for providing free Microsoft Azure resources through their AI for Earth grant program. All tests to obtain the results in this paper were conducted on a Microsoft Azure deployment consisting of two HDInsight clusters (an Azure HDInsight Apache Kafka cluster and an HDInsight Apache Spark cluster) communicating directly within a single virtual network. We also acknowledge NOAA for providing the rich dataset on which this work is built. This research was supported in part by the Department of Computer Science and the Environmental Science and Engineering program at the University of Texas at El Paso.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANWR	Arctic National Wildlife Refuge
NDWI	Normalized Difference Water Index
NDSWI	Normalized Difference Surface Water Index
NOAA	National Oceanic and Atmospheric Administration
SAR	Synthetic Aperture Radar
CNN	Convolutional Neural Network
DL	Deep Learning
MLP	Multi Layered Perceptron
FCN	Fully Convolutional Network
ML	Machine Learning
IoU	Intersection-over-Union
DS	Decision Stump
VHR	Very High-spatial Resolution

References

1. Fritz, M.; Vonk, J.E.; Lantuit, H. Collapsing arctic coastlines. Nat. Clim. Chang. 2017, 7, 6–7.
2. Nitze, I.; Cooley, S.W.; Duguay, C.R.; Jones, B.M.; Grosse, G. The catastrophic thermokarst lake drainage events of 2018 in northwestern Alaska: Fast-forward into the future. Cryosphere 2020, 14, 4279–4297.
3. Gibbs, A.E.; Richmond, B.M. National Assessment of Shoreline Change: Historical Shoreline Change along the North Coast of Alaska, US-Canadian Border to Icy Cape; US Department of the Interior, US Geological Survey: Reston, VA, USA, 2015.
4. Hernes, P.; Tzortziou, M.; Salisbury, J.; Mannino, A.; Matrai, P.; Friedrichs, M.A.; Del Castillo, C.E. Arctic-COLORS (Coastal Land Ocean Interactions in the Arctic)—A NASA field campaign scoping study to examine land-ocean interactions in the Arctic. In AGU Fall Meeting Abstracts; American Geophysical Union: Washington, DC, USA, 2014; Volume 2014, p. B43B-0242.
5. Forbes, D.L. State of the Arctic Coast 2010: Scientific Review and Outlook; Land-Ocean Interactions in the Coastal Zone; Institute of Coastal Research: Geesthacht, Germany, 2011.
6. Lantuit, H.; Overduin, P.P.; Couture, N.; Wetterich, S.; Aré, F.; Atkinson, D.; Brown, J.; Cherkashov, G.; Drozdov, D.; Forbes, D.L.; et al. The Arctic coastal dynamics database: A new classification scheme and statistics on Arctic permafrost coastlines. Estuaries Coasts 2012, 35, 383–400.
7. Turetsky, M.R.; Abbott, B.W.; Jones, M.C.; Anthony, K.W.; Olefeldt, D.; Schuur, E.A.; Koven, C.; McGuire, A.D.; Grosse, G.; Kuhry, P.; et al. Permafrost collapse is accelerating carbon release. Nature 2019, 569, 32–34.
8. Streletskiy, D.A.; Shiklomanov, N.I.; Little, J.D.; Nelson, F.E.; Brown, J.; Nyland, K.E.; Klene, A.E. Thaw subsidence in undisturbed tundra landscapes, Barrow, Alaska, 1962–2015. Permafr. Periglac. Process. 2017, 28, 566–572.
9. Jones, B.M.; Farquharson, L.M.; Baughman, C.A.; Buzard, R.M.; Arp, C.D.; Grosse, G.; Bull, D.L.; Günther, F.; Nitze, I.; Urban, F.; et al. A decade of remotely sensed observations highlight complex processes linked to coastal permafrost bluff erosion in the Arctic. Environ. Res. Lett. 2018, 13, 115001.
10. Günther, F.; Overduin, P.P.; Sandakov, A.V.; Grosse, G.; Grigoriev, M.N. Short- and long-term thermo-erosion of ice-rich permafrost coasts in the Laptev Sea region. Biogeosciences 2013, 10, 4297–4318.
11. Tweedie, C.; Aguirre, A.; Cody, R.; Vargas, S.; Brown, J. Spatial and temporal dynamics of erosion along the Elson Lagoon Coastline near Barrow, Alaska (2002–2011). In Proceedings of the Tenth International Conference on Permafrost, Salekhard, Yamal-Nenets Autonomous District, Siberia, Russia, 25–29 June 2012.
12. Lantuit, H.; Pollard, W. Fifty years of coastal erosion and retrogressive thaw slump activity on Herschel Island, southern Beaufort Sea, Yukon Territory, Canada. Geomorphology 2008, 95, 84–102.
13. Richter-Menge, J.; Druckenmiller, M.L.; Jefferies, E.M. Arctic Report Card 2019. Available online: https://www.arctic.noaa.gov/Report-Card (accessed on 14 March 2021).
14. Space Studies Board, National Academies of Sciences, Engineering, and Medicine. Thriving on Our Changing Planet: A Decadal Strategy for Earth Observation from Space; National Academies Press: Washington, DC, USA, 2019.
15. Farquharson, L.M.; Mann, D.; Swanson, D.; Jones, B.; Buzard, R.; Jordan, J. Temporal and spatial variability in coastline response to declining sea-ice in northwest Alaska. Mar. Geol. 2018, 404, 71–83.
16. Marshall, G.; Dowdeswell, J.; Rees, W. The spatial and temporal effect of cloud cover on the acquisition of high quality Landsat imagery in the European Arctic sector. Remote Sens. Environ. 1994, 50, 149–160.
17. Huang, W.; DeVries, B.; Huang, C.; Lang, M.W.; Jones, J.W.; Creed, I.F.; Carroll, M.L. Automated extraction of surface water extent from Sentinel-1 data. Remote Sens. 2018, 10, 797.
18. Main-Knorn, M.; Pflug, B.; Louis, J.; Debaecker, V.; Müller-Wilm, U.; Gascon, F. Sen2Cor for Sentinel-2. In Image and Signal Processing for Remote Sensing XXIII; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10427, p. 1042704.
19. Banks, S.; Millard, K.; Behnamian, A.; White, L.; Ullmann, T.; Charbonneau, F.; Chen, Z.; Wang, H.; Pasher, J.; Duffe, J. Contributions of Actual and Simulated Satellite SAR Data for Substrate Type Differentiation and Shoreline Mapping in the Canadian Arctic. Remote Sens. 2017, 9, 1206.
20. Lu, X.; Yang, K.; Lu, Y.; Gleason, C.J.; Smith, L.C.; Li, M. Small Arctic rivers mapped from Sentinel-2 satellite imagery and ArcticDEM. J. Hydrol. 2020, 584, 124689.
21. Obu, J.; Lantuit, H.; Fritz, M.; Pollard, W.H.; Sachs, T.; Günther, F. Relation between planimetric and volumetric measurements of permafrost coast erosion: A case study from Herschel Island, western Canadian Arctic. Polar Res. 2016, 35, 30313.
22. Ma, S.; Zhou, Y.; Gowda, P.H.; Dong, J.; Zhang, G.; Kakani, V.G.; Wagle, P.; Chen, L.; Flynn, K.C.; Jiang, W. Application of the water-related spectral reflectance indices: A review. Ecol. Indic. 2019, 98, 68–79.
23. Kinsman, N.; Gibbs, A.; Nolan, M. Evaluation of vector coastline features extracted from ‘structure from motion’-derived elevation data. In The Proceedings of the Coastal Sediments; World Scientific: Hackensack, NJ, USA, 2015.
24. Sekovski, I.; Stecchi, F.; Mancini, F.; Del Rio, L. Image classification methods applied to shoreline extraction on very high-resolution multispectral imagery. Int. J. Remote Sens. 2014, 35, 3556–3578.
25. Ghoneim, E.; Mashaly, J.; Gamble, D.; Halls, J.; AbuBakr, M. Nile Delta exhibited a spatial reversal in the rates of shoreline retreat on the Rosetta promontory comparing pre- and post-beach protection. Geomorphology 2015, 228, 1–14.
26. Ozturk, D.; Sesli, F.A. Shoreline change analysis of the Kizilirmak Lagoon Series. Ocean Coast. Manag. 2015, 118, 290–308.
27. Dickens, K.; Armstrong, A. Application of machine learning in satellite derived bathymetry and coastline detection. SMU Data Sci. Rev. 2019, 2, 4.
28. Boak, E.H.; Turner, I.L. Shoreline definition and detection: A review. J. Coast. Res. 2005, 21, 688–703.
29. Choung, Y.J.; Jo, M.H. Comparison between a machine-learning-based method and a water-index-based method for shoreline mapping using a high-resolution satellite image acquired in Hwado Island, South Korea. J. Sens. 2017, 2017, 8245204.
30. Constantino, D.; Pepe, M.; Dardanelli, G.; Baiocchi, V. Using optical satellite and aerial imagery for automatic coastline mapping. Geogr. Tech. 2020, 15, 171–190.
31. Randazzo, G.; Barreca, G.; Cascio, M.; Crupi, A.; Fontana, M.; Gregorio, F.; Lanza, S.; Muzirafuti, A. Analysis of Very High Spatial Resolution Images for Automatic Shoreline Extraction and Satellite-Derived Bathymetry Mapping. Geosciences 2020, 10, 172.
32. Dehm, D.; Becker, R.; Godre, A. SUAS Based Multispectral Imagery for Monitoring Wetland Inundation and Vegetation. Preprints 2019, e201911032. Available online: https://www.preprints.org/manuscript/201911.0326/v1 (accessed on 12 November 2021).
33. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432.
34. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
35. Goswami, S.; Gamon, J.A.; Tweedie, C.E. Surface hydrology of an arctic ecosystem: Multiscale analysis of a flooding and draining experiment using spectral reflectance. J. Geophys. Res. Biogeosci. 2011, 116.
36. Maglione, P.; Parente, C.; Vallario, A. Coastline extraction using high resolution WorldView-2 satellite imagery. Eur. J. Remote Sens. 2014, 47, 685–699.
37. Saeed, A.; Fatima, A. Coastline extraction using satellite imagery and image processing techniques. Red 2016, 600, 720.
38. Li, R.; Liu, W.; Yang, L.; Sun, S.; Hu, W.; Zhang, F.; Li, W. DeepUNet: A deep fully convolutional network for pixel-level sea-land segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3954–3962.
39. Nazeer, M.; Waqas, M.; Shahzad, M.I.; Zia, I.; Wu, W. Coastline Vulnerability Assessment through Landsat and Cubesats in a Coastal Mega City. Remote Sens. 2020, 12, 749.
40. Dixon, B.; Candade, N. Multispectral landuse classification using neural networks and support vector machines: One or the other, or both? Int. J. Remote Sens. 2008, 29, 1185–1206.
41. Kalkan, K.; Bayram, B.; Maktav, D.; Sunar, F. Comparison of support vector machine and object based classification methods for coastline detection. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 7, W2.
42. Bayram, B.; Erdem, F.; Akpinar, B.; Ince, A.; Bozkurt, S.; Reis, H.C.; Seker, D. The efficiency of random forest method for shoreline extraction from LANDSAT-8 and GOKTURK-2 imageries. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 141.
43. Vos, K.; Splinter, K.D.; Harley, M.D.; Simmons, J.A.; Turner, I.L. CoastSat: A Google Earth Engine-enabled Python toolkit to extract shorelines from publicly available satellite imagery. Environ. Model. Softw. 2019, 122, 104528.
44. Ryan, T.; Sementilli, P.; Yuen, P.; Hunt, B. Extraction of shoreline features by neural nets and image processing. Photogramm. Eng. Remote Sens. 1991, 57, 947–955.
45. Zhang, W.; Witharana, C.; Liljedahl, A.K.; Kanevskiy, M. Deep convolutional neural networks for automated characterization of arctic ice-wedge polygons in very high spatial resolution aerial imagery. Remote Sens. 2018, 10, 1487.
46. Liu, W.; Chen, X.; Ran, J.; Liu, L.; Wang, Q.; Xin, L.; Li, G. LaeNet: A Novel Lightweight Multitask CNN for Automatically Extracting Lake Area and Shoreline from Remote Sensing Images. Remote Sens. 2021, 13, 56.
47. Robinson, C.; Ortiz, A.; Malkin, K.; Elias, B.; Peng, A.; Morris, D.; Dilkina, B.; Jojic, N. Human-Machine Collaboration for Fast Land Cover Mapping. Proc. AAAI Conf. Artif. Intell. 2020, 34, 2509–2517.
48. Baraka, S.; Akera, B.; Aryal, B.; Sherpa, T.; Shresta, F.; Ortiz, A.; Sankaran, K.; Ferres, J.L.; Matin, M.; Bengio, Y. Machine Learning for Glacier Monitoring in the Hindu Kush Himalaya. arXiv 2020, arXiv:2012.05013.
49. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119.
50. Bhuiyan, M.A.E.; Witharana, C.; Liljedahl, A.K. Use of Very High Spatial Resolution Commercial Satellite Imagery and Deep Learning to Automatically Map Ice-Wedge Polygons across Tundra Vegetation Types. J. Imaging 2020, 6, 137.
51. Chen, Y.; Fan, R.; Yang, X.; Wang, J.; Latif, A. Extraction of urban water bodies from high-resolution remote-sensing imagery using deep learning. Water 2018, 10, 585.
52. Yang, L.; Tian, S.; Yu, L.; Ye, F.; Qian, J.; Qian, Y. Deep learning for extracting water body from Landsat imagery. Int. J. Innov. Comput. Inf. Control 2015, 11, 1913–1929.
53. Iglovikov, V.; Mushinskiy, S.; Osin, V. Satellite imagery feature detection using deep convolutional neural network: A Kaggle competition. arXiv 2017, arXiv:1706.06169.
54. Cheng, D.; Meng, G.; Xiang, S.; Pan, C. FusionNet: Edge aware deep convolutional networks for semantic segmentation of remote sensing harbor images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 5769–5783.
55. Isikdogan, F.; Bovik, A.C.; Passalacqua, P. Surface water mapping by deep learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4909–4918.
56. Miao, Z.; Fu, K.; Sun, H.; Sun, X.; Yan, M. Automatic water-body segmentation from high-resolution satellite images via deep networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 602–606.
57. Song, S.; Liu, J.; Liu, Y.; Feng, G.; Han, H.; Yao, Y.; Du, M. Intelligent object recognition of urban water bodies based on deep learning for multi-source and multi-temporal high spatial resolution remote sensing imagery. Sensors 2020, 20, 397.
58. Yu, L.; Wang, Z.; Tian, S.; Ye, F.; Ding, J.; Kong, J. Convolutional neural networks for water body extraction from Landsat imagery. Int. J. Comput. Intell. Appl. 2017, 16, 1750001.
59. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
60. Kolesnikov, A.; Lampert, C.H. Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016; pp. 695–711.
61. Alonso, I.; Cambra, A.; Munoz, A.; Treibitz, T.; Murillo, A.C. Coral-Segmentation: Training Dense Labeling Models with Sparse Ground Truth. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, Venice, Italy, 22–29 October 2017.
62. Alonso, I.; Murillo, A.C. Semantic segmentation from sparse labeling using multi-level superpixels. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5785–5792.
63. Wang, S.; Chen, W.; Xie, S.M.; Azzari, G.; Lobell, D.B. Weakly Supervised Deep Learning for Segmentation of Remote Sensing Imagery. Remote Sens. 2020, 12, 207.
64. Jorgenson, M.; Brown, J. Classification of the Alaskan Beaufort Sea Coast and estimation of carbon and sediment inputs from coastal erosion. Geo-Mar. Lett. 2005, 25, 69–80.
65. Harris, C.M.; McTigue, N.D.; McClelland, J.W.; Dunton, K.H. Do high Arctic coastal food webs rely on a terrestrial carbon subsidy? Food Webs 2018, 15, e00081.
66. Ping, C.L.; Michaelson, G.J.; Guo, L.; Jorgenson, M.T.; Kanevskiy, M.; Shur, Y.; Dou, F.; Liang, J. Soil carbon and material fluxes across the eroding Alaska Beaufort Sea coastline. J. Geophys. Res. Biogeosci. 2011, 116.
67. Dunton, K.H.; Schonberg, S.V.; Cooper, L.W. Food web structure of the Alaskan nearshore shelf and estuarine lagoons of the Beaufort Sea. Estuaries Coasts 2012, 35, 416–435.
68. Heim, B.; Abramova, E.; Doerffer, R.; Günther, F.; Hölemann, J.; Kraberg, A.; Lantuit, H.; Loginova, A.; Martynov, F.; Overduin, P.P.; et al. Ocean colour remote sensing in the southern Laptev Sea: Evaluation and applications. Biogeosciences 2014, 11, 4191–4210.
69. Gatto, L.W. Coastal Environment, Bathymetry and Physical Oceanography along the Beaufort, Chukchi and Bering Seas; Technical Report; Cold Regions Research and Engineering Lab: Hanover, NH, USA, 1980.
70. Barnhart, K.R.; Overeem, I.; Anderson, R.S. The effect of changing sea ice on the physical vulnerability of Arctic coasts. Cryosphere 2014, 8, 1777–1799.
71. Jones, B.M.; Hinkel, K.M.; Arp, C.D.; Eisner, W.R. Modern erosion rates and loss of coastal features and sites, Beaufort Sea coastline, Alaska. Arctic 2008, 61, 361–372.
72. Brice, C.R.; Fennema, C.L. Scene analysis using regions. Artif. Intell. 1970, 1, 205–226.
73. Pavlidis, T. Polygonal approximations by Newton’s method. IEEE Trans. Comput. 1977, 26, 800–807.
74. Riseman, E.M.; Arbib, M.A. Computational techniques in the visual segmentation of static scenes. Comput. Graph. Image Process. 1977, 6, 221–276.
75. Ohlander, R.; Price, K.; Reddy, D.R. Picture segmentation using a recursive region splitting method. Comput. Graph. Image Process. 1978, 8, 313–333.
76. Rosenfield, A.; Davis, L. Image segmentation and image model. Proc. IEEE 1979, 67, 764–772.
77. Chen, T.; He, T.; Benesty, M.; Khotilovich, V.; Tang, Y. XGBoost: Extreme gradient boosting. R Package Version 0.4-2; 2015; pp. 1–4. Available online: https://CRAN.R-project.org/package=xgboost (accessed on 12 November 2021).
78. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
79. Abolt, C.J.; Young, M.H. High-resolution mapping of spatial heterogeneity in ice wedge polygon geomorphology near Prudhoe Bay, Alaska. Sci. Data 2020, 7, 87.
80. Cooley, S.W.; Smith, L.C.; Ryan, J.C.; Pitcher, L.H.; Pavelsky, T.M. Arctic-Boreal lake dynamics revealed using CubeSat imagery. Geophys. Res. Lett. 2019, 46, 2111–2120.
81. Chen, F.; Zhang, M.; Tian, B.; Li, Z. Extraction of glacial lake outlines in Tibet Plateau using Landsat 8 imagery and Google Earth Engine. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 4002–4009.
82. Winsvold, S.H.; Kääb, A.; Nuth, C. Regional glacier mapping using optical satellite data time series. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3698–3711.
83. Park, S.J.; Achmad, A.R.; Syifa, M.; Lee, C.W. Machine learning application for coastal area change detection in Gangwon province, South Korea using high-resolution satellite imagery. J. Coast. Res. 2019, 90, 228–235.
84. McFeeters, S.K. Using the Normalized Difference Water Index (NDWI) within a Geographic Information System to Detect Swimming Pools for Mosquito Abatement: A Practical Approach. Remote Sens. 2013, 5, 3544–3561.
85. Ruisánchez, I.; Jiménez-Carvelo, A.M.; Callao, M.P. ROC curves for the optimization of one-class model parameters. A case study: Authenticating extra virgin olive oil from a Catalan protected designation of origin. Talanta 2021, 222, 121564.
86. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
87. Iba, W.; Langley, P. Induction of one-level decision trees. In Machine Learning Proceedings 1992; Elsevier: Amsterdam, The Netherlands, 1992; pp. 233–240.
88. Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Cardoso, M.J. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2017; pp. 240–248.
89. Huang, Z.; Wang, X.; Wang, J.; Liu, W.; Wang, J. Weakly-supervised semantic segmentation network with deep seeded region growing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7014–7023.
90. Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302.
91. Sørensen, T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species and its application to analyses of the vegetation on Danish commons. K. Dan. Vidensk. Selsk. 1948, 5, 1–34.
92. Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
93. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
94. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32; Curran Associates, Inc.: Red Hook, NY, USA, 2019; pp. 8024–8035.
95. Kyzivat, E.D.; Smith, L.C.; Pitcher, L.H.; Fayne, J.V.; Cooley, S.W.; Cooper, M.G.; Topp, S.N.; Langhorst, T.; Harlan, M.E.; Horvat, C.; et al. A high-resolution airborne color-infrared camera water mask for the NASA ABoVE campaign. Remote Sens. 2019, 11, 2163.
96. Hu, Y.; Zhang, Q.; Zhang, Y.; Yan, H. A Deep Convolution Neural Network Method for Land Cover Mapping: A Case Study of Qinhuangdao, China. Remote Sens. 2018, 10, 2053.
97. 2019 Alaska Coastal Mapping Prioritization Survey. Available online: https://aoos.org/wp-content/uploads/2019-AK-Coastal-Mapping-Prioritization-Survey-final-web.pdf (accessed on 1 September 2021).
Figure 1. Our proposed method learns from sparsely labelled data for land/water segmentation. (a) Sample Image; (b) Corresponding Dense Labels; (c) Corresponding Sparse Labels.
Figure 2. Overview map showing the ANWR region on the eastern North Slope of Alaska, the extent of the 2017 NOAA airborne image collections (in blue) and the individual image tiles (in orange) from this collection used for creation of dense testing labels. The top right corner displays an example high-resolution airborne image of a section of coastline acquired by the NOAA RSD airborne imaging system.
Figure 3. Natural color (RGB) image tiles selected from 2017 NOAA airborne imagery taken over the study area displaying the spectral variability of coastal and surface water bodies: (a) lagoonal waters bound by tundra and a sand spit; (b) dark, turbid coastal waters near the mouth of a river; (c) waters of a braided river carrying highly reflective glacial-derived sediments from the Brooks Range to the south of the study area; (d) coastal waters breaking near barrier islands; (e) dark and deep waters of a tundra pond; (f) shallow and turbid waters near a deltaic region; (g) blue waters of a large thermokarst lake; (h) sediment dispersion from coastal erosion in the nearshore; (i) shallow waters of a coalesced low-center polygonal pond.
Figure 4. Natural color (RGB) image tiles selected from 2017 NOAA airborne imagery taken over the study area displaying the spectral variability of land surfaces: (a) coastal foredune features; (b) alluvial deposits; (c) polygonized tundra; (d) drained thaw lake; (e) bare tundra; (f) wet and dry sands along a coastal spit.
Figure 5. Our methodological pipeline takes in airborne imagery and its corresponding sparsely labelled shapefile to prepare subregions for training and evaluation.
Figure 6. Output intensity on a sample image from the test set using different models. The intensity has been normalized to 0–1 range for NDWI and NDSWI for plotting. The intensity for U-Net, Random Forest, and XGBoost represents the probability of the corresponding pixel to be classified as water.
Figure 7. Human annotated dense label for the image in Figure 6. The intensity masks are converted to respective binary masks using thresholds as seen in Section 3.3 for evaluation.
Figure 8. Binary mask generation by thresholding.
Table 1. Threshold selection using an exhaustive decision-stump (DS) search.

| Method | Threshold | IoU (%) |
|---|---|---|
| DS/NDWI | 0.78 | 97.11 |
| DS/NDSWI | 0.48 | 96.42 |
| DS/Random Forest | 0.40 | 98.21 |
| DS/XGBoost | 0.38 | 98.26 |
| DS/U-Net | 0.53 | 97.43 |
Table 2. Experimental results for binary segmentation masks generated using different methods. The IoU when using random forest is higher than when using all other methods for both land and water classes.
Table 2. Experimental results for binary segmentation masks generated using different methods. The IoU when using random forest is higher than when using all other methods for both land and water classes.
Class   Method          IoU (%)   Precision (%)   Recall (%)
water   NDWI            94.90     96.99           97.78
        NDSWI           93.83     95.91           97.75
        Random Forest   95.05     96.33           98.62
        XGBoost         94.94     96.22           98.62
        U-Net           94.86     96.56           98.19
land    NDWI            80.31     90.61           87.60
        NDSWI           75.95     90.01           82.94
        Random Forest   80.51     92.41           86.22
        XGBoost         79.65     93.71           84.16
        U-Net           79.77     92.04           85.68
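The per-class IoU, precision, and recall reported in Tables 2 and 3 follow the standard confusion-matrix definitions. A minimal sketch for a single class, with `pred` and `truth` as hypothetical binary masks (True = water pixel):

```python
import numpy as np

def class_metrics(pred, truth):
    """IoU, precision, and recall (in percent) for one binary class,
    computed from true positives, false positives, and false negatives."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = 100.0 * tp / (tp + fp + fn)
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)
    return iou, precision, recall

# Toy flattened masks: 4 true water pixels, 3 correctly predicted.
pred = np.array([1, 1, 0, 1, 0], dtype=bool)
truth = np.array([1, 1, 1, 1, 0], dtype=bool)
iou, prec, rec = class_metrics(pred, truth)  # 75.0, 100.0, 75.0
```

The land-class rows in the tables come from the same computation with both masks inverted, which is why the two classes can rank the methods differently.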
Table 3. Experimental results for coastal subregions using different methods. The overall performance of all models is better for coastal subregions.
Class   Method          IoU (%)   Precision (%)   Recall (%)
water   NDWI            96.55     97.81           98.69
        NDSWI           95.96     97.41           98.46
        Random Forest   96.73     97.50           99.19
        XGBoost         96.66     97.46           99.16
        U-Net           96.64     97.60           98.99
land    NDWI            88.03     95.18           92.14
        NDSWI           86.03     94.33           90.72
        Random Forest   88.42     96.93           90.96
        XGBoost         88.21     96.82           90.84
        U-Net           88.21     96.23           91.37
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Aryal, B.; Escarzaga, S.M.; Vargas Zesati, S.A.; Velez-Reyes, M.; Fuentes, O.; Tweedie, C. Semi-Automated Semantic Segmentation of Arctic Shorelines Using Very High-Resolution Airborne Imagery, Spectral Indices and Weakly Supervised Machine Learning Approaches. Remote Sens. 2021, 13, 4572. https://doi.org/10.3390/rs13224572