Article

Designing Unmanned Aerial Survey Monitoring Program to Assess Floating Litter Contamination

by Sílvia Almeida 1, Marko Radeta 1,2,3, Tomoya Kataoka 4, João Canning-Clode 1,5, Miguel Pessanha Pais 6,7, Rúben Freitas 1,2 and João Gama Monteiro 1,8,*
1 MARE—Marine and Environmental Sciences Centre/ARNET—Aquatic Research Network, Agência Regional para o Desenvolvimento da Investigação, Tecnologia e Inovação (ARDITI), 9020-105 Funchal, Madeira, Portugal
2 Wave Labs, Faculty of Exact Sciences and Engineering, University of Madeira, 9020-105 Funchal, Portugal
3 Department of Astronomy, Faculty of Mathematics, University of Belgrade, 11000 Belgrade, Serbia
4 Department of Civil & Environmental Engineering, Ehime University, Matsuyama 790-8577, Japan
5 Smithsonian Environmental Research Center, Edgewater, MD 21037, USA
6 MARE—Marine and Environmental Sciences Centre/ARNET—Aquatic Research Network, Faculdade de Ciências, Universidade de Lisboa, 1649-004 Lisboa, Portugal
7 Departamento de Biologia Animal, Faculdade de Ciências, Universidade de Lisboa, 1649-004 Lisboa, Portugal
8 Faculty of Life Sciences, University of Madeira, 9020-105 Funchal, Portugal
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(1), 84; https://doi.org/10.3390/rs15010084
Submission received: 4 November 2022 / Revised: 5 December 2022 / Accepted: 17 December 2022 / Published: 23 December 2022
(This article belongs to the Special Issue Remote Sensing for Mapping and Monitoring Anthropogenic Debris)

Abstract

Monitoring marine contamination by floating litter can be particularly challenging since debris is continuously moving over large spatial extents, pushed by currents, waves, and winds. Assessments of floating litter contamination have mostly relied on opportunistic surveys from vessels, modeling, and, more recently, remote sensing with spectral analysis. This study explores how a low-cost commercial unmanned aircraft system equipped with a high-resolution RGB camera can be used as an alternative to conduct floating litter surveys in coastal waters or from vessels. The study compares different processing and analytical strategies and discusses operational constraints. Collected UAS images were analyzed using three different approaches: (i) manual counting (MC), using visual inspection and image annotation with object counts as a baseline; (ii) pixel-based detection (PBD), an automated color analysis process to assess overall contamination; and (iii) machine learning (ML), automated object detection and identification using state-of-the-art convolutional neural networks (CNNs). Our findings illustrate that MC remains the most precise method for classifying different floating objects. ML still shows heterogeneous performance in correctly identifying different classes of floating litter; however, it demonstrates promising results in detecting floating items, which can be leveraged to scale up monitoring efforts and be used in automated analysis of large sets of imagery to assess relative floating litter contamination.

1. Introduction

Over the last few decades, marine litter has increasingly captured the attention and concerns of scientists, decision makers, and civil society [1,2]. The persistent nature of plastic materials and their increasing global presence in both aquatic [3] and terrestrial ecosystems [4] has resulted in the conception of a new era—“The Plasticene” [5]. The incessant and growing delivery of plastic litter and debris to our oceans has become one of the most significant forms of marine pollution [6,7]. While bans on single-use plastics (e.g., straws) and improved recycling practices are in place, the COVID-19 pandemic has resulted in an immediate increase in personal protective equipment (e.g., discarded face masks), further polluting aquatic environments [8,9]. Indeed, marine litter has become critical to global sustainability, as it affects marine ecosystems and human health [10,11,12].
Tackling plastic pollution in the marine environment requires concerted strategies and strong actions from policymakers and stakeholders on a global scale [13,14]. In fact, several efforts are already in place at the international, national, and regional levels (e.g., the United Nations Convention on the Law of the Sea (UNCLOS) (United Nations Convention on the Law of the Sea (UNCLOS)—The Faculty of Law. Available online: https://www.jus.uio.no/english/services/library/treaties/08/8-01/unclos.xml (accessed on 25 October 2022)), the United Nations Environment Programme (UNEP) (Environment, U.N. UNEP—UN Environment Programme. Available online: http://www.unep.org/node (accessed on 25 October 2022)), the Regional Seas Programme (RSP) (Environment, U.N. Regional Seas Programme. Available online: http://www.unep.org/explore-topics/oceansseas/what-we-do/regional-seas-programme (accessed on 25 October 2022)), and the European Union Marine Strategy Framework Directive (MSFD) (European Commission, Joint Research Centre; MSFD Technical Group on Marine Litter. Monitoring of Floating Marine Macro Litter: State of the Art and Literature Overview; Publications Office: LU, 2022)), with several instruments being recently developed to reduce and manage marine litter [15]. Despite growing concerns, current efforts in tackling marine litter pollution are still (mostly) focused on diagnosing the problem, namely, establishing standard protocols to detect, monitor, and characterize marine litter distribution, including identifying major sources and assessing the multiple impacts of various types of marine litter [16,17].
Most consolidated data on marine litter derive from beached litter monitoring programs and sampling, while seafloor and floating litter data are mostly from isolated or discrete efforts [18,19]. Ocean surface litter contamination assessments still greatly rely on opportunistic reporting from sea vessels [20,21,22,23], complemented with the use of ingested items from target species as a proxy [24] and by a few dedicated protocols used to assess microlitter contamination of surface waters [25,26,27]. Floating macrolitter monitoring has been mostly dependent on vessel-based observers [28,29] (i.e., opportunistically reporting sightings), which makes it difficult to standardize effort, methods, and the produced datasets, and often introduces geographic bias, as most marine observing programs are linked to fisheries (European Commission, Joint Research Centre; MSFD Technical Group on Marine Litter. Monitoring of Floating Marine Macro Litter: State of the Art and Literature Overview; Publications Office: LU, 2022). Additionally, these methods remain time-consuming and entail numerous difficulties associated with vessel size and type, weather, light, and sea conditions. Another issue in the present monitoring of floating litter is that floating debris constantly moves due to weather and currents, providing an additional dynamic dimension and adding complexity to the problem [30,31]. On occasions, numerous factors may cause the accumulation of marine litter in oceanic convergence zones [32,33], which can be detected from vessels, aerial remote sensing, and even satellites [34,35,36]. However, these convergence areas are geographically discrete, and floating litter outside these areas is often in low densities [18,23], making it challenging to detect by vessel-based observers or to collect with current sampling devices. Similarly, their small sizes and low concentrations outside convergence areas make floating litter items difficult to detect from satellite remote sensing platforms [37,38,39].
The use of satellite data could potentially enable the development of cost-effective, repeatable, and fast methods that estimate floating marine litter contamination and distribution over large spatial scales [34,36,40]. However, despite recent efforts to develop analytical sensing methods, most of these applications face challenges with the poor detectability of small objects from space [35,39,41]. Advances in spectral profiling using hyperspectral sensors [16] and in satellite technology are expected to soon enable the detection and even monitoring of high-concentration areas such as the Pacific Garbage Patch and other gyres [42,43,44]. However, monitoring contamination levels in non-accumulation areas will remain challenging.
Remote sensing from aerial platforms, combined with advanced imagery processing and artificial intelligence (AI), provides unique opportunities to advance the monitoring of plastic and litter pollution [45,46]. The application of AI in different areas of oceanic studies is constantly growing. Still, the open challenge in automating imagery analysis is to reduce the labor time spent identifying and classifying target objects and, ultimately, to better understand the distribution and sources of marine litter items. However, when compared to human inspection and annotation, automated object detection and classification in imagery by AI often lacks flexibility in contextual interpretation and relies on well-established, predetermined object categories. In addition, the computing power and the technical skills required to implement automated object detection based on AI can be considerably more demanding than those required for human-supervised imagery annotation.
The growing availability and the development of inexpensive commercial off-the-shelf (COTS) drones and other advanced unmanned aerial systems (UASs) are making high-tech aerial imagery platforms more accessible [47,48,49]. The use of custom-designed UASs is becoming increasingly popular for recreational, industrial, topographic surveying, monitoring, and research purposes due to their relatively low cost, operational flexibility, and simplicity [50,51,52,53,54]. With low-altitude flight, UASs produce aerial imagery with higher resolution than that achieved by current satellites or by manned aerial platforms [37,55,56]. Additionally, most modern UASs include automated flight capabilities, pre-planned mission controls, high-resolution camera systems, and geotagged logs that enhance their operational capabilities and their range of applications [57,58].
The use of UAS-based remote sensing has already demonstrated a variety of research applications in coastal areas [47,49,51,59,60,61,62,63,64,65,66,67,68,69,70,71]. Operational flexibility and simplicity make UASs promising platforms for developing remote sensing protocols and monitoring litter using systematic approaches. There has been a growing number of studies focusing on the use of UAS-based remote sensing and AI to monitor litter pollution; however, most of them have focused on beached litter [51,54,65,67,70,72,73,74,75], and only a few have explored their use for floating litter [38,46,73,76,77]. A recent critical review of beached litter survey studies using UAS remote sensing [65] summarizes the findings of recent studies and outlines basic guidelines for developing and implementing monitoring programs. Despite some of their conclusions being transferable to floating litter monitoring, these studies do not account for the dynamic nature of open waters, the lack of matching references to construct orthophotos, and differences in image background contrasts and complexity. Other studies have focused on the use of UAS aerial images to monitor floating litter using color-based image processing [78], deep learning [45,79], and other remote sensing analytical techniques [39]. However, these studies describe technical advances using a single approach or focus on comparing different AI algorithms and classifiers, lacking an overall evaluation of how to implement a floating litter monitoring program that relies on UAS aerial imagery and missing a critical comparison of different image analysis strategies and options. As such, this study fills some of the current gaps by outlining some of the specificities of floating litter monitoring, including UAS operational constraints, a comparison of manual counting (MC), pixel-based detection (PBD), and machine learning (ML) image analysis, and overall guidelines to design and implement a monitoring program.
Detecting and monitoring floating litter using aerial photography and UAS-based remote sensing pose specific challenges. On land, structure-from-motion photogrammetry uses unique and discrete references in overlapping images to construct a mosaic and estimate position, slope, and other topographic features along the survey area [80,81]. Over open water, the lack of discrete or unique reference points, the homogeneity of images, and the dynamic surface make it virtually impossible to reconstruct orthophotomosaics systematically [82,83]. In theory, one could fly at a high enough altitude to simultaneously include land features (providing discrete matching points) and coastal waters. However, this is generally not a practical solution: the survey area would be greatly constrained to near-shore areas for safety reasons, resolution decreases with altitude, and regulations prohibit or limit the maximum altitude for UAS operations [84,85].
To tackle limitations in photogrammetric mosaic generation from overlapping aerial images over the ocean, we explore the use of a COTS UAS platform to collect multiple (non-overlapping) individual aerial images to assess floating litter contamination. Leveraging specific flight altitudes and information on the camera field of view and sensor dimensions, it is possible to estimate the surface area covered by individual images. This strategy allows one to conduct aerial surveys, collecting multiple images that can be processed and analyzed independently to produce overall assessments over meaningful spatial extents (e.g., 1 km2).
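To illustrate, the per-image footprint can be computed from flight altitude, sensor dimensions, focal length, and image size. The following is a minimal sketch in Python; the sensor and lens values are nominal Mavic 2 PRO camera specifications assumed here for illustration, not parameters reported by this study:

```python
# Minimal sketch of the per-image footprint estimate described above,
# assuming a nadir-pointing camera. Sensor width (13.2 mm), focal length
# (10.26 mm), and image size (5472 x 3648 px) are nominal Mavic 2 PRO
# specifications assumed for illustration.
def image_footprint(altitude_m: float,
                    sensor_w_mm: float = 13.2,
                    focal_mm: float = 10.26,
                    img_w_px: int = 5472,
                    img_h_px: int = 3648) -> tuple[float, float]:
    """Return (GSD in cm/px, footprint area in m^2) for one nadir image."""
    # Ground size of one pixel, by similar triangles through the lens
    gsd_m = altitude_m * (sensor_w_mm / focal_mm) / img_w_px
    area_m2 = (gsd_m * img_w_px) * (gsd_m * img_h_px)  # frame width x height
    return gsd_m * 100.0, area_m2

for alt in (10, 20, 30):
    gsd_cm, area = image_footprint(alt)
    print(f"{alt:>2} m: ~{gsd_cm:.2f} cm/px, ~{area:.0f} m^2 per image")
```

With these nominal values, a 30 m flight yields a GSD of roughly 0.7 cm/px and a footprint of roughly 990 m2, consistent with the ranges reported in Section 2.1.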
In order to assess the feasibility of such a strategy for floating litter monitoring, we designed an experimental trial where floating litter items were deployed and multiple individual aerial images were collected with a UAS to compare three imagery processing and analysis strategies: (i) manual counting (hereinafter abbreviated as MC), an image inspection with supervised object identification and annotation; (ii) pixel-based detection (hereinafter abbreviated as PBD), an automatic color detection of pixels from floating items; and (iii) machine learning (hereinafter abbreviated as ML), an automated object detection and classification. The general objective of this experimental trial was to answer two main questions: (i) can floating litter items be detected from RGB aerial imagery collected by a UAS? (ii) are automated image processing and analysis strategies practical and reliable solutions that can replace human image inspection and litter item classification? Ultimately, this study assesses the operational advantages and disadvantages of different aerial imagery processing strategies for floating litter item detection and provides guidelines for optimizing and implementing floating litter monitoring programs that rely on UAS-based remote sensing using low-cost COTS quadcopters equipped with high-resolution RGB cameras.

2. Materials and Methods

2.1. Data Collection

Conducted in the coastal waters of Madeira Island (Portugal), this study was developed to assess the feasibility of using UAS aerial imagery for detecting and monitoring floating litter by designing an experimental trial with dummy floating litter objects. Preliminary test flights were conducted (using a DJI Phantom 2 Vision+ and a DJI Mavic 2 PRO) from land and vessels to test and assess UAS flight capabilities (e.g., wind speed limit, range, flight time) as well as optimal take-off and landing techniques and optimal imagery sensor settings (compiled in Supplementary Materials S1). Once flight capabilities and operations were tested, an experimental trial flying a Mavic 2 PRO quadcopter from a sea vessel and using selected “dummy” litter items was carried out.
During the experiment, common floating litter items were deployed from a boat while flying a UAS at 10–30 m of altitude, set up to collect an image (5472 × 3648 px) of the sea surface area where litter items had been deployed every 10 s (Figure 1A). There was a total of 28 objects, the majority made of floating plastic. These objects were categorized into nine classes: Cleaner Bottles and Containers (one item); Drink Bottles—Green (six items); Drink Bottles—Transparent (two items); Drink Bottles—Large (>5 L) (two items); Floating Fishing Gear (seven items); Other Containers (one item); Other Floating Debris (no items); Plastic Bags (five items); and Tetra Pak (four items) (see Supplementary Materials S2, Table S1). As litter items scattered across the water, the vessel was repositioned to be outside of the image frame (Figure 1A), and the UAS position was adjusted to capture as many items as possible inside the live feed frame. Deployed objects naturally drifted at different speeds and directions, so after some hovering time collecting imagery (5–10 min), the UAS (Figure 1B) was recovered, and all litter items were successfully collected. The procedure was repeated using different exposure settings, specifically a normal exposure (EV 0) and a low exposure (EV −3), to produce two sets of images, a Blue Set and a Dark Set, respectively (Figure 1C,D). The two image sets were used to compare object detectability under two contrasting exposure conditions. The collection of imagery with low exposure values (EVs) was included to enable a major reduction in light backscatter on the sea surface (i.e., homogenizing the background), while maintaining the ability to detect floating objects by visual inspection and based on RGB profiles.
A total of 148 individual images, with objects and no vessel in the frame, were selected for analysis and further divided into two collections of individual images: a “Blue Set” (Figure 1C) with 74 normally exposed images, with a blue background and normal sun glint and backscatter; and a “Dark Set” (Figure 1D) with 74 underexposed images, with a dark background and reduced sun glint and backscatter. For the selected individual images, ground sampling distance (GSD) ranged between 0.26 and 0.7 cm/px, with estimated areas of 117 to 988 m2, respectively. The two collections were compiled and labeled for individual image analysis using three different strategies to assess floating litter contamination: (i) a visual inspection with manual annotation of detected litter items; (ii) a pixel-based detection color analysis; and (iii) the use of a CNN for automated object detection.

2.2. Comparison of Analytical Procedures

With the rationale of assessing the pros and cons of different analytical and classification approaches, in order to design UAS-based floating litter monitoring programs that are feasible under different conditions, levels of training, and available resources, we compared the three methods by considering (i) the average time required to inspect and process each image; (ii) the ability to adequately assess floating litter contamination; and (iii) the skills and logistical requirements for implementing a monitoring program using each method.
Visual inspections and annotations for single images were considered as reference data to assess and compare the performance of automated methods. Simple descriptive statistics were applied to compare the outputs of the three methods tested, including time for processing, correlations, and standard metrics to assess deep learning classification performance.

2.2.1. Visual Inspection and Manual Classification

Two independent annotations were performed: one to identify and count all floating objects, labeling them with an all-inclusive category “floating litter item” during annotation; and a second one where floating objects were classified and labeled using nine different categories (see Supplementary Materials S2, Table S1). All images from both the Blue and Dark datasets were visually inspected and annotated using DotDotGoose (DotDotGoose. Available online: https://biodiversityinformatics.amnh.org/open_source/dotdotgoose/ (accessed on 25 July 2022)) [86]. For each image, all objects were identified/classified, and the collected data were exported as .CSV files and compiled into a summary table that included information on image file identification, image dataset (Blue or Dark), number of floating items, number of items per category, the time for inspection and annotation using a single object class, and the time for inspection and annotation using the nine classes of floating items.
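As an illustration of the compilation step, the following minimal sketch, assuming pandas, aggregates per-image annotation exports into such a summary table; the file layout and the “class” column name are hypothetical, since DotDotGoose export formats may differ:

```python
# Minimal sketch of compiling per-image annotation exports into the summary
# table described above, assuming pandas; the file layout and the "class"
# column are hypothetical, as DotDotGoose export formats may differ.
import glob
import pandas as pd

rows = []
for path in sorted(glob.glob("annotations/*.csv")):
    points = pd.read_csv(path)               # one row per annotated object
    counts = points["class"].value_counts()  # items per litter category
    rows.append({"image": path, "n_items": len(points), **counts.to_dict()})

summary = pd.DataFrame(rows).fillna(0)       # missing categories -> 0 items
summary.to_csv("summary_table.csv", index=False)
```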

2.2.2. Color- and Pixel-Based Detection Analysis

Images of both the Blue and Dark datasets were compiled for analysis using pixel color differences to estimate overall floating debris in each image, applying a color- and pixel-based analysis [78,87] to detect pixels with color profiles different from the background (e.g., seawater color) (see Supplementary Materials S2, Figure S1). The method consists of generating an image of the color difference between the debris and surrounding water in the CIELuv color space and detecting the debris pixels from the color difference image [78]. The color difference is expressed by the Euclidean distance between two points in the CIELuv color space [78]. The fundamental steps for extracting the “debris pixels” from the color difference images were as follows: (i) generating a smoothed version of each original image using a median box filter with a 200 × 200 px window; (ii) computing the color difference between the denoised and smoothed images in the CIELuv color space converted from the RGB color space; (iii) extracting the pixels of floating macro debris using an appropriate constant threshold value. In this study, the threshold value was set at 60 by trial and error during empirical tests. The percentage of “debris pixels” was calculated for each image and included in the compiled summary table. Performance was assessed using linear regressions, assuming that the automated selection of pixels was proportional and correlated with the number of litter items in each image.
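For concreteness, the following is a minimal sketch of steps (i)–(iii), assuming OpenCV and NumPy; the odd kernel size (201 px, since OpenCV median filters require odd windows) and the color-space scaling details are assumptions rather than the exact implementation of [78]:

```python
# Minimal sketch of the colour-difference "debris pixel" extraction steps
# (i)-(iii) above, assuming OpenCV and NumPy. The odd 201 px window is an
# assumption (OpenCV median filters require odd kernel sizes), as is the
# exact colour conversion; the threshold of 60 follows the text.
import cv2
import numpy as np

def debris_pixel_percentage(bgr: np.ndarray, ksize: int = 201,
                            thresh: float = 60.0) -> float:
    """Percentage of pixels whose CIELuv distance from a heavily
    smoothed background estimate exceeds `thresh` (8-bit BGR input)."""
    # (i) background estimate: large-window median filter on the 8-bit image
    background = cv2.medianBlur(bgr, ksize)
    # (ii) colour difference in CIELuv; float32 input keeps L in [0, 100]
    luv = cv2.cvtColor(bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Luv)
    luv_bg = cv2.cvtColor(background.astype(np.float32) / 255.0,
                          cv2.COLOR_BGR2Luv)
    delta = np.linalg.norm(luv - luv_bg, axis=2)  # Euclidean distance per pixel
    # (iii) constant threshold flags "debris pixels"
    return float((delta > thresh).mean() * 100.0)
```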

2.2.3. Machine Learning for Automated Object Detection and Classification

The Blue and Dark image sets were also used in automated object detection and classification using a state-of-the-art CNN architecture combining MobileNetV2 [88] with the Single-Shot Detection (SSD) algorithm [89]. All images from each dataset were visually inspected and manually annotated with Supervise.ly, an online image annotation tool dedicated to model training (Supervisely: Unified OS for Computer Vision. Available online: https://supervise.ly/ (accessed on 25 July 2022)). Target litter items were identified with bounding boxes and classified within the nine litter categories (see Supplementary Materials S2, Table S1) previously established.
Two models were trained to classify floating objects into the nine pre-established categories (see Supplementary Materials S2, Table S1): one using the Blue Set (normally exposed imagery) and a second using the Dark Set (underexposed imagery). The latter used a total of 4041 images, while the former used 7597 training images, after applying traditional data augmentation techniques (flip, noise, blur) [90]. All 74 images from both the Dark and Blue datasets (at original full-size resolution) were used for model inference. Training and testing procedures involved single- and multiclass identification using object detection, based on ground truth annotations (bounding boxes made by the research authors as annotators) and the bounding boxes predicted by the models. Both models were trained in 12 h using an NVIDIA Tesla P100 PCI-E 16GB GPU on Google Colab, using TensorFlow 1.15.2. Models were trained for 200 k epochs using default hyperparameters, the ReLU6 activation function, and an initial learning rate of 0.004. For performance, a batch size of 12 images was used with down-sampled imagery of 300 × 300 px. Overall model performance was assessed by computing model precision (P), recall (R), and F1 score (F1) [91]. For each model, a stopwatch was used to assess the time for data upload (ground truth imagery, annotations, trained model), the runtime of the model inference script using Jupyter Notebook, the computation resource allocation time on the free GPU instances, and the results download time. For each image, information from both models (i.e., number of items per category, object classification time) was included in the compiled summary table (see above) for comparison and analysis. Performance was assessed using linear regressions, assuming that the number of overall items classified as litter objects would be proportional and correlated with the number of manually labeled items. Additionally, the average over- and underestimations of ML automated classification for each of the nine categories were computed for each of the image sets (i.e., Blue and Dark Sets), in order to assess the ability of ML to correctly classify floating items into each of the nine selected categories. Standard deviations were also calculated to assess variance in the differences between reference data (i.e., number of items manually classified) and the number of items detected by ML for each of the nine categories (see Supplementary Materials S2, Figure S2).
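For reference, the following is a minimal sketch of the precision, recall, and F1 computation, assuming that true positives (TP), false positives (FP), and false negatives (FN) have already been counted by matching predicted and ground truth bounding boxes (the matching step, e.g., by intersection over union, is not shown):

```python
# Minimal sketch of the precision/recall/F1 computation used for model
# assessment, assuming true positives (TP), false positives (FP), and
# false negatives (FN) were already counted by matching predicted and
# ground truth bounding boxes (the matching itself is not shown).
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # correct share of detections
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of items detected
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)           # harmonic mean of P and R
    return precision, recall, f1
```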

3. Results

3.1. Performance Assessment

Visual inspections and manual classification were assumed to have 100% detectability and were used as reference data to assess the performance of the automated approaches. An inspection of a linear regression using the number of identified objects in each image illustrates that the color difference selection of pixels from normally exposed imagery was inadequate for estimating floating litter contamination, with poor correlation with the actual number of items in each image (Figure 2, top-left panel). The method's performance improved when using underexposed imagery, with less backscatter and sun glint (Figure 2, bottom-left panel). However, it still lacked a strong linear correlation with the number of floating items in each image, as one would expect if the automatically selected pixels corresponded to floating debris. Automated floating object detection using ML had a good overall performance in matching human detection and labeling, especially with normally exposed imagery (Figure 2, top-right panel). Under the null hypothesis that the two samples (MC and ML counts for the Blue dataset) have equal means, a two-tailed t-test assuming equal variances, with the critical value 1.9763, did not show statistical significance (p > 0.05). This result indicates that the machine learning method performs at a level similar to the manual counting method when predicting on the Blue dataset. Overall, the lack of strong collinearity with the number of floating items renders the color difference detection of debris pixels from RGB imagery an unreliable method for estimating contamination by floating debris.
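As an illustration of the reported test, the following is a minimal sketch assuming SciPy; the input files are hypothetical placeholders, not the study's data:

```python
# Minimal sketch of the reported equal-variance two-tailed t-test,
# assuming SciPy; the input files are hypothetical placeholders for the
# per-image counts, not the study's data.
import numpy as np
from scipy import stats

mc_counts = np.loadtxt("mc_blue_counts.txt")  # manual counts per image
ml_counts = np.loadtxt("ml_blue_counts.txt")  # ML-detected counts per image

t_stat, p_value = stats.ttest_ind(mc_counts, ml_counts, equal_var=True)

# With 74 images per set, df = 74 + 74 - 2 = 146, and the two-tailed
# critical value at alpha = 0.05 is ~1.976, matching the value above.
t_crit = stats.t.ppf(0.975, len(mc_counts) + len(ml_counts) - 2)
print(t_stat, p_value, t_crit)
```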
ML automated object classification performed differently in discriminating litter categories across the two datasets (Figure 3). Using the manual counts as a reference, the automated object classification using ML had an average under- and overestimation that ranged between −2.95 and 3.32 objects in the Blue Set (Figure 3, left panel) and between −4.18 and 7.43 in the Dark Set (Figure 3, right panel). Unlike pixel-based detection of marine debris (Figure 2), automated classification of floating items overall performs better in normally exposed images (Blue Set) than in underexposed images (Dark Set). An inspection of the average under- or overestimation values and respective standard deviations (Figure 3) illustrates that, in normally exposed imagery (Blue Set), the classes Cleaner Bottles and Containers, Green Drinking Bottles, Floating Fishing Gear, and Plastic Bags were underestimated, whereas Transparent Drinking Bottles, Other Containers, and Other Floating Debris were overestimated (Figure 3, left panel). The categories Cleaner Bottles and Containers and Large Drink Bottles were the most accurately detected, with low average differences and low standard deviations, whereas the category Other Floating Debris appears to be the most challenging, with an average overestimation of 3.22 and the highest standard deviation. In underexposed imagery (Dark Set), ML had better success in correctly classifying items in the categories Cleaner Bottles and Containers and Transparent Drink Bottles, underestimating these with low average differences and relatively low standard deviations (Figure 3, right panel). Floating items in the categories Drink Bottles—Large (>5 L), Other Containers, and Other Floating Debris were significantly overestimated, with relatively high standard deviations, illustrating a poor performance of ML in classifying items in these categories in underexposed imagery.
Average differences and respective standard deviations illustrate to what degree ML can accurately detect and classify a floating object. With lower averages and variances, ML has a better overall performance in classifying floating litter items in normally exposed images (Figure 3). However, it is noteworthy that, for some specific categories (e.g., Transparent Drink Bottles, Floating Fishing Gear), the use of underexposed imagery outperformed the use of normally exposed image sources.

3.2. Comparing Processing Times and Requirements

One additional relevant aspect in the automation of litter detection and/or classification relates to processing times (Table 1). On average, visual inspection and user annotation took 26 s to detect and 52 s to classify all visible objects in a single image. Interestingly, user annotation was slightly faster when inspecting underexposed images (Figure 4). Color- and pixel-based detection had comparable processing times, averaging 43 s to process normally exposed images and 26 s to process underexposed images (Figure 4). As expected, image processing times for ML object classification and detection using deep learning were significantly longer than those of the remaining methods (i.e., visual inspection and color- and pixel-based detection). Interestingly, and similar to the other methods, processing times were faster when dealing with underexposed images (Figure 4).

4. Discussion

Based on a custom-designed experimental trial, this case study assesses the detectability of floating litter items from aerial imagery, appraises the advantages and disadvantages in different imagery processing strategies, and sets guidelines for optimizing and implementing floating litter monitoring programs relying on UAS-based remote sensing.
There are numerous challenges in operating UASs for systematic monitoring of the sea surface; namely, the unpredictability of weather conditions (i.e., wind, clouds); restrictions on flight operations (i.e., geozones and the maximum radio operating range of the drone); the risk of losing the drone (which depends on experience in piloting UASs from vessels); and the fact that varying light (i.e., sun glint, cloud shadows) and sea conditions (e.g., waves) can affect the collected images, subsequently affecting image processing and litter detection. Flat ocean conditions offer a homogeneous background where floating items are easily identified [92]. One important factor that can be compounded by sea conditions is lighting and light backscatter [83]. Ideal light conditions include clear skies, a period when the sun is at a low angle (i.e., high angles increase backscatter for nadir imagery), and flat sea conditions without bright elements (i.e., white caps, foam, waves, and ripples) that would also influence light backscatter on the water surface. Choosing a suitable time of day (from 8 to 10 a.m. and/or 4 to 6 p.m.) and the direction of flight paths helps to enhance image quality by minimizing sun reflection and backscatter over the sea [83]. In turn, this minimizes the spots of undefined shape that create visual noise and hamper manual and automated analysis. Overcast conditions, a high sun, waves, and floating items partially submerged in the water column can easily decrease image quality for object detection and potentially lead to the need to discard a large portion of each image. The use of multispectral sensors can reduce some of the negative impacts of poor conditions, as some channels (i.e., infrared, near-infrared) generally produce outputs that are less sensitive to light backscatter over the sea surface [36,77]. Thermal sensors can also be adequate for detecting large objects with a large air-exposed proportion [93]; however, they are typically unable to detect objects that are frequently submerged and cooled by waves and sea spray. Another constraint of UAS-based remote sensing relates to flight range and the compromise between surveyed area and image resolution. Operational range can vary greatly depending on the UAS. Fixed-wing drones have the advantage of being able to cover larger areas (more battery and longer radio signal range) [94]. However, they tend to require specific conditions for taking off and landing, which makes them less suitable for monitoring surveys from small vessels.
Similar to other studies using UAS aerial imagery to monitor litter, the flight parameters selected for this case study influenced the final result and the detection capability, since flight height, light exposure, and even the orientation of the camera in relation to the light source (among other factors) affect image quality and the perception of some physical characteristics of the objects to be classified, including (i) color reflectance (translucent vs. opaque objects, or the reflected spectral profile of the material); (ii) the definition of object contours (well-defined vs. blurred); and (iii) the “size” of objects (number of pixels). As such, most parameters were kept constant, with the exception of altitude, which varied between 10 and 30 m (providing a range of GSD from 0.26 to 0.7 cm/px and a range of surface area covered from 117 to 988 m2), ensuring that objects were visually identifiable in all selected images. Exposure was purposely set to capture normally exposed images (EV set to 0) and underexposed images (EV set to −3) to enable sun glint and backscatter reduction and to assess whether it affected object detectability. The main reason for carrying out this experiment with two image sets using different light exposures—Blue Set vs. Dark Set—was to understand how differences in exposure and contrast affect the reliability of the automated pixel selection and object detection models. Indeed, one of the biggest problems with nadir images collected over the ocean is the glare from sunlight backscatter, resulting in “specks” of high reflectance that can be misidentified as white floating objects [95]. The use of these contrasting exposure settings was expected to have a major influence on color- and contrast-based analysis and identification of floating items due to the homogenization of the background (i.e., seawater) in underexposed images [36,78].
The conducted experimental trial also allowed us to ascertain how well two autonomous analytical methods (i.e., color- and pixel-based detection and ML object classification) could assess floating litter contamination in comparison to human-supervised annotation of aerial images. Theoretically, the pixel-based detection method would allow one to know the percentage of general contamination of a given area based on the number of “debris” pixels. This method could be useful in scenarios where it is necessary to find places of concentration or sources of contamination by marine litter in a large volume of images and/or with different areas. However, our findings illustrate that the use of color difference “debris” pixel selection to detect floating litter still requires significant improvement. Sun glint and wave crests greatly affect the accuracy of this method and result in numerous false positives; even though it performs better in underexposed images, the computed correlation between selected pixels and the number of litter items was still rather low (Figure 2). Ultimately, the use of color difference debris pixel detection requires additional optimization and development to reduce error; namely, by integrating additional multispectral or hyperspectral data and/or by reducing false-positive pixels in each image by masking all items with bounding boxes automatically detected by the machine learning technique (see Supplementary Materials S2, Figure S2).
Similar to other studies [45,73,76,79], the automated classification of floating objects using ML in this case study also showed promising results in detecting floating items (Figure 2). However, also similar to previous studies, it showed mixed results in accurately discriminating different types of floating items (Figure 3). The categories Drink Bottles—Green and Plastic Bags had comparable underestimation in both datasets. The similarity in reflectance spectra between the blue sea and the translucent green of the bottles could have hampered the detection and classification of these items [34,35,96,97]. In the class Plastic Bags, as the items present different shapes in each image, automated detection may have been negatively affected, as the shape of an object can be a relevant criterion for classification success [98]. The flexibility and mutable shape of Plastic Bags create a handicap for the automatic detection of this item class. The Other Containers and Other Floating Debris categories were overestimated in both datasets, with many false positives being classified within these two categories. The overestimation of Other Containers may be an artefact of the use of a single object within this category, a black container that floated just under the sea surface. The use of a single object, combined with the lensing effect of water over the partially submerged object, may have contributed to the misclassification of shadows and high-contrast areas of images as objects. The Other Floating Debris category was created to enable the algorithms to identify floating objects that could not be classified into one of the existing categories. It generated false positives, mostly produced by high-reflectance backscatter that creates white “false objects” at the sea surface. The transparency of the objects classified as Drink Bottles—Transparent likely influenced the overestimation of these objects in the Blue Set, as differences in light and color profiles are reduced by transparency. The use of low-exposure images (i.e., the Dark Set) appears to mitigate this, producing a lower underestimation than the overestimation obtained with imagery and training sets from the Blue Set. The comparison in performance and accuracy between the Blue and Dark Sets also highlights relevant findings concerning the interaction between the type of object and the environment in which it exists. For some object categories, such as Cleaner Container Bottles and Fishing Gear, using low-exposure, high-contrast images in training and analysis seems to perform better and produce lower errors (under- or overestimation) than normally exposed imagery. These findings suggest that further research is needed on combining multiple sensors producing contrasting exposure images or multiple spectral data to increase the accuracy in discriminating different objects and materials.
One additional and essential indicator for determining the adequate method and analytical approach is the average time required to process each image with each method. Despite the short times required for the manual counting method (visual inspection and classification of floating objects), this method requires human supervision throughout the whole process with 100% dedication of the user, which may hamper scaling up imagery collection efforts. The dedication and time spent by users will increase proportionally and steadily with the number of images to process. However, despite being a tedious and repetitive task, the level of expertise required is minimal, as the user only needs to inspect each image and tag visible litter items. Color difference pixel selection has average processing times comparable to those required for human-supervised annotation (Figure 4); however, it requires more expertise from the user (i.e., advanced image processing and familiarity with programming), and it lacks accuracy in the automated outputs (Figure 2). In the machine learning method, the model requires considerably more time to provide information on the number of different objects than a user takes to visually classify an image and tag the multiple objects (Figure 4); however, an important consideration is that the classification process can mostly run with no human supervision. Indeed, AI algorithms have already been used to automate marine litter recognition from aerial imagery, where the common algorithms applied are typically based on random forests [64,68,99] or deep learning approaches [45,46,79,98]. The main factor encouraging the development of AI algorithms for the automatic identification of floating marine litter is that, after the initial effort of classification and validation, the process can be replicated in future studies without human supervision, creating less time-consuming workflows. The time and effort dedicated, the knowledge, and the skills required to optimize and routinely apply machine learning are often compensated for by being a single initial effort to acquire knowledge and train the model. After this laborious process, the model can continue to be trained with the images that the user feeds it, provided the classification classes are kept constant. This is one of the most significant differences between the compared methods.

5. Conclusions

Overall, the obtained results suggest that UAS remote sensing can be effectively used for floating litter monitoring in two ways: (i) by visually inspecting each image and identifying or classifying floating items, or (ii) by using deep learning to detect floating items without classifying them. Our findings suggest that ML can be used to process large numbers of images autonomously to assess contamination with acceptable error; however, when implementing a dedicated floating litter monitoring program, it is important to consider the level of output detail required. The European Marine Strategy Framework Directive and OSPAR litter monitoring standards require monitoring activities to report litter items classified according to extensive standardized lists. The highly detailed categories of these standard lists are often challenging to discriminate, making their fully automated classification a target for the future. Automated classification will most certainly become more accurate and reliable as research and development progress, and with the introduction of multicamera and multispectral systems, optimized model training, and multistep classification workflows. Many of the constraints on the use of UAS-based remote sensing to detect, map, or monitor litter contamination are related to the aerial imagery processing requirements. Georeferenced individual images or mosaics, collected with regular RGB cameras or with additional channels, require processing and analysis to manually or autonomously detect litter or assess contamination levels. Careful visual inspection of imagery with manual annotation is the simplest solution, but it is more laborious, especially when dealing with large numbers of images and in long-term programs. Conversely, automated object detection can potentially reduce the need for user interaction, but it typically requires higher computational power and programming expertise. In fact, the success of any monitoring program relying on remote sensing will greatly depend on the analysis process and the associated operational costs, processing times, accuracy, and reliability, as substantiated by the findings of this case study.
Monitoring programs that aim to use UAS-based remote sensing in the near future should also consider the frequency and total number of images to be processed when selecting the analytical method that suits them best. Special consideration should also be given to available human resources and their skillset. Annual programs with 0–1000 images to process can consider using visual inspection and manual identification or categorization, as these require low expertise and a total processing time of 9–18 h per year. Large-scale efforts with thousands of images from different sources or with higher frequency should consider implementing an automated classification system using deep learning.
Finally, the findings of this study have not only enabled us to produce recommendations for the selection of imagery processing solutions and general operational guidelines for floating litter monitoring with UAS-based remote sensing (CROSSREF), but also to underline the importance of continued investment and research in improving lightweight remote sensing UAS payloads and in advancing deep learning and artificial intelligence for accurately detecting and classifying litter, in order to automate imagery and remote sensing data processing and analysis.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs15010084/s1, Supplementary Materials S1: Remote sensing and aerial surveys for monitoring floating litter with unmanned aerial systems (UASs): general protocol guidelines for vessel- and shore-based operations. Supplementary Materials S2: Examples of applied methods and the table of floating litter category classes used in image annotation and automated object classification.

Author Contributions

Conceptualization, J.G.M. and S.A.; methodology, J.G.M., T.K., M.R., R.F. and S.A.; formal analysis, T.K., M.R. and S.A.; investigation, J.G.M., T.K., M.R. and S.A.; resources, J.G.M., M.R., J.C.-C.; writing—original draft preparation, S.A.; writing—review and editing, all authors; visualization, J.G.M. and S.A.; supervision, J.G.M., M.R., M.P.P.; project administration, J.G.M. and J.C.-C.; funding acquisition, J.G.M. and J.C.-C. All authors have read and agreed to the published version of the manuscript.

Funding

S.A. is supported by a doctoral fellowship by FCT (UI/BD/151020/2021). M.R. is supported by the FCT grant INTERWHALE (PTDC/CCI-COM/0450/2020). T.K. is supported by the Environment Research and Technology Development Fund (JPMEERF12345678) of the Environmental Restoration and Conservation Agency of Japan (KAKENHI Grant Number 21H01441). J.C.-C. and J.G.M. are funded by national funds through FCT—Fundação para a Ciência e a Tecnologia, I.P., under the Scientific Employment Stimulus—Institutional Calls (CEECINST/00098/2018 and CEECINST/00037/2021, respectively). M.P.P. is funded by FCT/FCUL through researcher contract DL57/2016/CP1479/CT0020. R.F. is supported by a doctoral fellowship by FCT (2022.09961.BD). This work was partially supported by the projects CleanAtlantic (EAPA-46/2016), INTERREG Atlantic Area Program; Oceanlit (MAC2/4.6d/302) and INTERTAGUA (MAC2/1.1.a/385), INTERREG-MAC; LARGESCALE (PTDC/CCI-CIF/32474/2017), FCT—Fundação para a Ciência e a Tecnologia; and project JPNP18016 commissioned by the New Energy and Industrial Technology Development Organization (NEDO). This study also had the support of FCT through the strategic project UIDB/04292/2020 awarded to MARE and through project LA/P/0069/2020 granted to the Associate Laboratory ARNET.

Data Availability Statement

The datasets generated and/or analyzed in the current study are available from the corresponding author on reasonable request.

Acknowledgments

The authors would like to acknowledge the contribution and operational support provided by Madeira Sea Emotions and Madeira Divepoint maritime operators. The machine learning pipeline used in the research was powered by https://wave-labs.org (accessed on 16 December 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Veiga, J.M.; Vlachogianni, T.; Pahl, S.; Thompson, R.C.; Kopke, K.; Doyle, T.K.; Hartley, B.L.; Maes, T.; Orthodoxou, D.L.; Loizidou, X.I.; et al. Enhancing Public Awareness and Promoting Co-Responsibility for Marine Litter in Europe: The Challenge of MARLISCO. Mar. Pollut. Bull. 2016, 102, 309–315.
  2. Gough, A. Educating for the Marine Environment: Challenges for Schools and Scientists. Mar. Pollut. Bull. 2017, 124, 633–638.
  3. Schmid, C.; Cozzarini, L.; Zambello, E. Microplastic’s Story. Mar. Pollut. Bull. 2021, 162, 111820.
  4. Al-Jaibachi, R.; Cuthbert, R.N.; Callaghan, A. Up and Away: Ontogenic Transference as a Pathway for Aerial Dispersal of Microplastics. Biol. Lett. 2018, 14, 20180479.
  5. Reed, C. Dawn of the Plasticene Age. New Sci. 2015, 225, 28–32.
  6. Williams, A.T.; Rangel-Buitrago, N. The Past, Present, and Future of Plastic Pollution. Mar. Pollut. Bull. 2022, 176, 113429.
  7. Villarrubia-Gómez, P.; Cornell, S.E.; Fabres, J. Marine Plastic Pollution as a Planetary Boundary Threat—The Drifting Piece in the Sustainability Puzzle. Mar. Policy 2018, 96, 213–220.
  8. Patrício Silva, A.L.; Prata, J.C.; Walker, T.R.; Campos, D.; Duarte, A.C.; Soares, A.M.V.M.; Barcelò, D.; Rocha-Santos, T. Rethinking and Optimising Plastic Waste Management under COVID-19 Pandemic: Policy Solutions Based on Redesign and Reduction of Single-Use Plastics and Personal Protective Equipment. Sci. Total Environ. 2020, 742, 140565.
  9. Canning-Clode, J.; Sepúlveda, P.; Almeida, S.; Monteiro, J. Will COVID-19 Containment and Treatment Measures Drive Shifts in Marine Litter Pollution? Front. Mar. Sci. 2020, 7, 691.
  10. Woods, J.S.; Verones, F.; Jolliet, O.; Vázquez-Rowe, I.; Boulay, A.-M. A Framework for the Assessment of Marine Litter Impacts in Life Cycle Impact Assessment. Ecol. Indic. 2021, 129, 107918.
  11. Gallo, F.; Fossi, C.; Weber, R.; Santillo, D.; Sousa, J.; Ingram, I.; Nadal, A.; Romano, D. Marine Litter Plastics and Microplastics and Their Toxic Chemicals Components: The Need for Urgent Preventive Measures. Environ. Sci. Eur. 2018, 30, 13.
  12. Abalansa, S.; El Mahrad, B.; Vondolia, G.K.; Icely, J.; Newton, A. The Marine Plastic Litter Issue: A Social-Economic Analysis. Sustainability 2020, 12, 8677.
  13. Ogunola, O.S.; Onada, O.A.; Falaye, A.E. Mitigation Measures to Avert the Impacts of Plastics and Microplastics in the Marine Environment (A Review). Environ. Sci. Pollut. Res. 2018, 25, 9293–9310.
  14. Galgani, F.; Hanke, G.; Werner, S.; De Vrees, L. Marine Litter within the European Marine Strategy Framework Directive. ICES J. Mar. Sci. 2013, 70, 1055–1064.
  15. Chen, C.L. Regulation and management of marine litter. In Marine Anthropogenic Litter; Springer: Cham, Switzerland, 2015.
  16. Maximenko, N.; Corradi, P.; Law, K.L.; van Sebille, E.; Garaba, S.P.; Lampitt, R.S.; Galgani, F.; Martinez-Vicente, V.; Goddijn-Murphy, L.; Veiga, J.M.; et al. Toward the Integrated Marine Debris Observing System. Front. Mar. Sci. 2019, 6, 309.
  17. Danovaro, R.; Carugati, L.; Berzano, M.; Cahill, A.E.; Carvalho, S.; Chenuil, A.; Corinaldesi, C.; Cristina, S.; David, R.; Dell’Anno, A.; et al. Implementing and Innovating Marine Monitoring Approaches for Assessing Marine Environmental Status. Front. Mar. Sci. 2016, 3, 213.
  18. Chambault, P.; Vandeperre, F.; Machete, M.; Lagoa, J.C.; Pham, C.K. Distribution and Composition of Floating Macro Litter off the Azores Archipelago and Madeira (NE Atlantic) Using Opportunistic Surveys. Mar. Environ. Res. 2018, 141, 225–232.
  19. Tekman, M.B.; Krumpen, T.; Bergmann, M. Marine Litter on Deep Arctic Seafloor Continues to Increase and Spreads to the North at the HAUSGARTEN Observatory. Deep Sea Res. Part I Oceanogr. Res. Pap. 2017, 120, 88–99.
  20. Lusher, A.L.; Burke, A.; O’Connor, I.; Officer, R. Microplastic Pollution in the Northeast Atlantic Ocean: Validated and Opportunistic Sampling. Mar. Pollut. Bull. 2014, 88, 325–333.
  21. Rothäusler, E.; Jormalainen, V.; Gutow, L.; Thiel, M. Low Abundance of Floating Marine Debris in the Northern Baltic Sea. Mar. Pollut. Bull. 2019, 149, 110522.
  22. Campana, I.; Angeletti, D.; Crosti, R.; Di Miccoli, V.; Arcangeli, A. Seasonal Patterns of Floating Macro-Litter across the Western Mediterranean Sea: A Potential Threat for Cetacean Species. Rend. Lincei Sci. Fis. Nat. 2018, 29, 453–467.
  23. Suaria, G.; Aliani, S. Floating Debris in the Mediterranean Sea. Mar. Pollut. Bull. 2014, 86, 494–504.
  24. Fossi, M.C.; Pedà, C.; Compa, M.; Tsangaris, C.; Alomar, C.; Claro, F.; Ioakeimidis, C.; Galgani, F.; Hema, T.; Deudero, S.; et al. Bioindicators for Monitoring Marine Litter Ingestion and Its Impacts on Mediterranean Biodiversity. Environ. Pollut. 2018, 237, 1023–1040.
  25. Gajšt, T.; Bizjak, T.; Palatinus, A.; Liubartseva, S.; Kržan, A. Sea Surface Microplastics in Slovenian Part of the Northern Adriatic. Mar. Pollut. Bull. 2016, 113, 392–399.
  26. Herrera, A.; Raymond, E.; Martínez, I.; Álvarez, S.; Canning-Clode, J.; Gestoso, I.; Pham, C.K.; Ríos, N.; Rodríguez, Y.; Gómez, M. First Evaluation of Neustonic Microplastics in the Macaronesian Region, NE Atlantic. Mar. Pollut. Bull. 2020, 153, 110999.
  27. Prata, J.C.; da Costa, J.P.; Duarte, A.C.; Rocha-Santos, T. Methods for Sampling and Detection of Microplastics in Water and Sediment: A Critical Review. TrAC Trends Anal. Chem. 2019, 110, 150–159.
  28. Di-Méglio, N.; Campana, I. Floating Macro-Litter along the Mediterranean French Coast: Composition, Density, Distribution and Overlap with Cetacean Range. Mar. Pollut. Bull. 2017, 118, 155–166.
  29. Ruiz, I.; Burgoa, I.; Santos, M.; Basurko, O.C.; García-Barón, I.; Louzao, M.; Beldarrain, B.; Kukul, D.; Valle, C.; Uriarte, A.; et al. First Assessment of Floating Marine Litter Abundance and Distribution in the Bay of Biscay from an Integrated Ecosystem Survey. Mar. Pollut. Bull. 2022, 174, 113266.
  30. Miladinova, S.; Macias, D.; Stips, A.; Garcia-Gorriz, E. Identifying Distribution and Accumulation Patterns of Floating Marine Debris in the Black Sea. Mar. Pollut. Bull. 2020, 153, 110964.
  31. Carlson, D.F.; Suaria, G.; Aliani, S.; Fredj, E.; Fortibuoni, T.; Griffa, A.; Russo, A.; Melli, V. Combining Litter Observations with a Regional Ocean Model to Identify Sources and Sinks of Floating Debris in a Semi-Enclosed Basin: The Adriatic Sea. Front. Mar. Sci. 2017, 4, 78.
  32. van Sebille, E.; Aliani, S.; Law, K.L.; Maximenko, N.; Alsina, J.M.; Bagaev, A.; Bergmann, M.; Chapron, B.; Chubarenko, I.; Cózar, A.; et al. The Physical Oceanography of the Transport of Floating Marine Debris. Environ. Res. Lett. 2020, 15, 023003.
  33. Fossi, M.C.; Romeo, T.; Baini, M.; Panti, C.; Marsili, L.; Campani, T.; Canese, S.; Galgani, F.; Druon, J.-N.; Airoldi, S.; et al. Plastic Debris Occurrence, Convergence Areas and Fin Whales Feeding Ground in the Mediterranean Marine Protected Area Pelagos Sanctuary: A Modeling Approach. Front. Mar. Sci. 2017, 4, 167.
  34. Hu, C. Remote Detection of Marine Debris Using Satellite Observations in the Visible and near Infrared Spectral Range: Challenges and Potentials. Remote Sens. Environ. 2021, 259, 112414.
  35. Biermann, L.; Clewley, D.; Martinez-Vicente, V.; Topouzelis, K. Finding Plastic Patches in Coastal Waters Using Optical Satellite Data. Sci. Rep. 2020, 10, 5364.
  36. Topouzelis, K.; Papageorgiou, D.; Suaria, G.; Aliani, S. Floating Marine Litter Detection Algorithms and Techniques Using Optical Remote Sensing Data: A Review. Mar. Pollut. Bull. 2021, 170, 112675.
  37. Salgado-Hernanz, P.M.; Bauzà, J.; Alomar, C.; Compa, M.; Romero, L.; Deudero, S. Assessment of Marine Litter through Remote Sensing: Recent Approaches and Future Goals. Mar. Pollut. Bull. 2021, 168, 112347.
  38. Topouzelis, K.; Papakonstantinou, A.; Garaba, S.P. Detection of Floating Plastics from Satellite and Unmanned Aerial Systems (Plastic Litter Project 2018). Int. J. Appl. Earth Obs. Geoinf. 2019, 79, 175–183.
  39. Themistocleous, K.; Papoutsa, C.; Michaelides, S.; Hadjimitsis, D. Investigating Detection of Floating Plastic Litter from Space Using Sentinel-2 Imagery. Remote Sens. 2020, 12, 2648.
  40. Von Schuckmann, K.; Le Traon, P.-Y.; Alvarez-Fanjul, E.; Axell, L.; Balmaseda, M.; Breivik, L.-A.; Brewin, R.J.W.; Bricaud, C.; Drevillon, M.; Drillet, Y.; et al. The Copernicus Marine Environment Monitoring Service Ocean State Report. J. Oper. Oceanogr. 2016, 9, s235–s320.
  41. Martínez-Vicente, V.; Clark, J.R.; Corradi, P.; Aliani, S.; Arias, M.; Bochow, M.; Bonnery, G.; Cole, M.; Cózar, A.; Donnelly, R.; et al. Measuring Marine Plastic Debris from Space: Initial Assessment of Observation Requirements. Remote Sens. 2019, 11, 2443.
  42. Sigler, M. The Effects of Plastic Pollution on Aquatic Wildlife: Current Situations and Future Solutions. Water Air Soil Pollut. 2014, 225, 2184.
  43. Park, Y.-J.; Garaba, S.P.; Sainte-Rose, B. Detecting the Great Pacific Garbage Patch Floating Plastic Litter Using WorldView-3 Satellite Imagery. Opt. Express 2021, 29, 35288.
  44. Lebreton, L.; Slat, B.; Ferrari, F.; Sainte-Rose, B.; Aitken, J.; Marthouse, R.; Hajbane, S.; Cunsolo, S.; Schwarz, A.; Levivier, A.; et al. Evidence That the Great Pacific Garbage Patch Is Rapidly Accumulating Plastic. Sci. Rep. 2018, 8, 4666.
  45. Kylili, K.; Kyriakides, I.; Artusi, A.; Hadjistassou, C. Identifying Floating Plastic Marine Debris Using a Deep Learning Approach. Environ. Sci. Pollut. Res. 2019, 26, 17091–17099.
  46. Garcia-Garin, O.; Monleón-Getino, T.; López-Brosa, P.; Borrell, A.; Aguilar, A.; Borja-Robalino, R.; Cardona, L.; Vighi, M. Automatic Detection and Quantification of Floating Marine Macro-Litter in Aerial Images: Introducing a Novel Deep Learning Approach Connected to a Web Application in R. Environ. Pollut. 2021, 273, 116490.
  47. Monteiro, J.G.; Jiménez, J.L.; Gizzi, F.; Přikryl, P.; Lefcheck, J.S.; Santos, R.S.; Canning-Clode, J. Novel Approach to Enhance Coastal Habitat and Biotope Mapping with Drone Aerial Imagery Analysis. Sci. Rep. 2021, 11, 574.
  48. Olivetti, D.; Roig, H.; Martinez, J.-M.; Borges, H.; Ferreira, A.; Casari, R.; Salles, L.; Malta, E. Low-Cost Unmanned Aerial Multispectral Imagery for Siltation Monitoring in Reservoirs. Remote Sens. 2020, 12, 1855.
  49. Ventura, D.; Bonifazi, A.; Gravina, M.F.; Belluscio, A.; Ardizzone, G. Mapping and Classification of Ecologically Sensitive Marine Habitats Using Unmanned Aerial Vehicle (UAV) Imagery and Object-Based Image Analysis (OBIA). Remote Sens. 2018, 10, 1331. [Google Scholar] [CrossRef] [Green Version]
  50. Whitehead, K.; Hugenholtz, C.H.; Myshak, S.; Brown, O.; LeClair, A.; Tamminga, A.; Barchyn, T.E.; Moorman, B.; Eaton, B. Remote Sensing of the Environment with Small Unmanned Aircraft Systems (UASs), Part 2: Scientific and Commercial Applications. J. Unmanned Veh. Syst. 2014, 2, 86–102. [Google Scholar] [CrossRef] [Green Version]
  51. Papakonstantinou, A.; Batsaris, M.; Spondylidis, S.; Topouzelis, K. A Citizen Science Unmanned Aerial System Data Acquisition Protocol and Deep Learning Techniques for the Automatic Detection and Mapping of Marine Litter Concentrations in the Coastal Zone. Drones 2021, 5, 6. [Google Scholar] [CrossRef]
  52. Gupta, S.G.; Ghonge, M.; Jawandhiya, P.M. Review of Unmanned Aircraft System (UAS). Int. J. Adv. Res. Comput. Eng. Technol. 2013, 2, 1646–1658. [Google Scholar] [CrossRef]
  53. Tatum, M.C.; Liu, J. Unmanned Aircraft System Applications in Construction. Procedia Eng. 2017, 196, 167–175. [Google Scholar] [CrossRef]
  54. Escobar-Sánchez, G.; Haseler, M.; Oppelt, N.; Schernewski, G. Efficiency of Aerial Drones for Macrolitter Monitoring on Baltic Sea Beaches. Front. Environ. Sci. 2021, 8, 560237. [Google Scholar] [CrossRef]
  55. Udin, W.S.; Ahmad, A. Assessment of Photogrammetric Mapping Accuracy Based on Variation Flying Altitude Using Unmanned Aerial Vehicle. IOP Conf. Ser. Earth Environ. Sci. 2014, 18, 012027. [Google Scholar] [CrossRef]
  56. Gray, P.; Ridge, J.; Poulin, S.; Seymour, A.; Schwantes, A.; Swenson, J.; Johnston, D. Integrating Drone Imagery into High Resolution Satellite Remote Sensing Assessments of Estuarine Environments. Remote Sens. 2018, 10, 1257. [Google Scholar] [CrossRef] [Green Version]
  57. Rovira-Sugranes, A.; Razi, A.; Afghah, F.; Chakareski, J. A review of AI-enabled routing protocols for UAV networks: Trends, challenges, and future outlook. Ad Hoc Netw. 2022, 130, 102790. [Google Scholar] [CrossRef]
  58. Al-Rawabdeh, A.; Al-Gurrani, H.; Al-Durgham, K.; Detchev, I.; He, F.; El-Sheimy, N.; Habib, A. A robust registration algorithm for point clouds from uav images for change detection. ISPRS—Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 765–772. [Google Scholar] [CrossRef] [Green Version]
  59. Madurapperuma, B.; Lamping, J.; McDermott, M.; Murphy, B.; McFarland, J.; Deyoung, K.; Smith, C.; MacAdam, S.; Monroe, S.; Corro, L.; et al. Factors Influencing Movement of the Manila Dunes and Its Impact on Establishing Non-Native Species. Remote Sens. 2020, 12, 1536. [Google Scholar] [CrossRef]
  60. Rossiter, T.; Furey, T.; McCarthy, T.; Stengel, D.B. Application of Multiplatform, Multispectral Remote Sensors for Mapping Intertidal Macroalgae: A Comparative Approach. Aquat. Conserv. Mar. Freshw. Ecosyst. 2020, 30, 1595–1612. [Google Scholar] [CrossRef]
  61. Casella, E.; Collin, A.; Harris, D.; Ferse, S.; Bejarano, S.; Parravicini, V.; Hench, J.L.; Rovere, A. Mapping Coral Reefs Using Consumer-Grade Drones and Structure from Motion Photogrammetry Techniques. Coral Reefs 2017, 36, 269–275. [Google Scholar] [CrossRef]
  62. Nahirnick, N.K.; Reshitnyk, L.; Campbell, M.; Hessing-Lewis, M.; Costa, M.; Yakimishyn, J.; Lee, L. Mapping with Confidence; Delineating Seagrass Habitats Using Unoccupied Aerial Systems (UAS). Remote Sens. Ecol. Conserv. 2019, 5, 121–135. [Google Scholar] [CrossRef]
  63. Rossi, L.; Mammi, I.; Pelliccia, F. UAV-Derived Multispectral Bathymetry. Remote Sens. 2020, 12, 3897. [Google Scholar] [CrossRef]
  64. Gonçalves, G.; Andriolo, U.; Pinto, L.; Duarte, D. Mapping Marine Litter with Unmanned Aerial Systems: A Showcase Comparison among Manual Image Screening and Machine Learning Techniques. Mar. Pollut. Bull. 2020, 155, 111158. [Google Scholar] [CrossRef] [PubMed]
  65. Gonçalves, G.; Andriolo, U.; Gonçalves, L.M.S.; Sobral, P.; Bessa, F. Beach Litter Survey by Drones: Mini-Review and Discussion of a Potential Standardization. Environ. Pollut. 2022, 315, 120370. [Google Scholar] [CrossRef] [PubMed]
  66. Gonçalves, G.; Andriolo, U.; Pinto, L.; Bessa, F. Detecting marine litter on sandy beaches by using UAS-based orthophotos and machine learning methods. In Proceedings of the WORKSHOP Standardization of Procedures in Using UAS for Environmental Monitoring, Coimbra, Portugal, 6 November 2019. [Google Scholar] [CrossRef]
  67. Andriolo, U.; Gonçalves, G.; Rangel-Buitrago, N.; Paterni, M.; Bessa, F.; Gonçalves, L.M.S.; Sobral, P.; Bini, M.; Duarte, D.; Fontán-Bouzas, Á.; et al. Drones for Litter Mapping: An Inter-Operator Concordance Test in Marking Beached Items on Aerial Images. Mar. Pollut. Bull. 2021, 169, 112542. [Google Scholar] [CrossRef] [PubMed]
  68. Gonçalves, G.; Andriolo, U.; Gonçalves, L.; Sobral, P.; Bessa, F. Quantifying Marine Macro Litter Abundance on a Sandy Beach Using Unmanned Aerial Systems and Object-Oriented Machine Learning Methods. Remote Sens. 2020, 12, 2599. [Google Scholar] [CrossRef]
  69. Bao, Z.; Sha, J.; Li, X.; Hanchiso, T.; Shifaw, E. Monitoring of Beach Litter by Automatic Interpretation of Unmanned Aerial Vehicle Images Using the Segmentation Threshold Method. Mar. Pollut. Bull. 2018, 137, 388–398. [Google Scholar] [CrossRef]
  70. Merlino, S.; Paterni, M.; Locritani, M.; Andriolo, U.; Gonçalves, G.; Massetti, L. Citizen Science for Marine Litter Detection and Classification on Unmanned Aerial Vehicle Images. Water 2021, 13, 3349. [Google Scholar] [CrossRef]
  71. Merlino, S.; Paterni, M.; Berton, A.; Massetti, L. Unmanned Aerial Vehicles for Debris Survey in Coastal Areas: Long-Term Monitoring Programme to Study Spatial and Temporal Accumulation of the Dynamics of Beached Marine Litter. Remote Sens. 2020, 12, 1260. [Google Scholar] [CrossRef]
  72. Deidun, A.; Gauci, A.; Lagorio, S.; Galgani, F. Optimising Beached Litter Monitoring Protocols through Aerial Imagery. Mar. Pollut. Bull. 2018, 131, 212–217. [Google Scholar] [CrossRef]
  73. Andriolo, U.; Garcia-Garin, O.; Vighi, M.; Borrell, A.; Gonçalves, G. Beached and Floating Litter Surveys by Unmanned Aerial Vehicles: Operational Analogies and Differences. Remote Sens. 2022, 14, 1336. [Google Scholar] [CrossRef]
  74. Fallati, L.; Polidori, A.; Salvatore, C.; Saponari, L.; Savini, A.; Galli, P. Anthropogenic Marine Debris Assessment with Unmanned Aerial Vehicle Imagery and Deep Learning: A Case Study along the Beaches of the Republic of Maldives. Sci. Total Environ. 2019, 693, 133581. [Google Scholar] [CrossRef] [PubMed]
  75. Kako, S.; Morita, S.; Taneda, T. Estimation of Plastic Marine Debris Volumes on Beaches Using Unmanned Aerial Vehicles and Image Processing Based on Deep Learning. Mar. Pollut. Bull. 2020, 155, 111127. [Google Scholar] [CrossRef] [PubMed]
  76. Garcia-Garin, O.; Borrell, A.; Aguilar, A.; Cardona, L.; Vighi, M. Floating Marine Macro-Litter in the North Western Mediterranean Sea: Results from a Combined Monitoring Approach. Mar. Pollut. Bull. 2020, 159, 111467. [Google Scholar] [CrossRef] [PubMed]
  77. Escobar-Sánchez, G.; Markfort, G.; Berghald, M.; Ritzenhofen, L.; Schernewski, G. Aerial and Underwater Drones for Marine Litter Monitoring in Shallow Coastal Waters: Factors Influencing Item Detection and Cost-Efficiency. Environ. Monit. Assess. 2022, 194, 863. [Google Scholar] [CrossRef]
  78. Kataoka, T.; Nihei, Y. Quantification of Floating Riverine Macro-Debris Transport Using an Image Processing Approach. Sci. Rep. 2020, 10, 2198. [Google Scholar] [CrossRef] [Green Version]
  79. Jakovljevic, G.; Govedarica, M.; Alvarez-Taboada, F. A Deep Learning Model for Automatic Plastic Mapping Using Unmanned Aerial Vehicle (UAV) Data. Remote Sens. 2020, 12, 1515. [Google Scholar] [CrossRef]
  80. Clapuyt, F.; Vanacker, V.; Van Oost, K. Reproducibility of UAV-Based Earth Topography Reconstructions Based on Structure-from-Motion Algorithms. Geomorphology 2016, 260, 4–15. [Google Scholar] [CrossRef]
  81. Nex, F.; Remondino, F. UAV for 3D Mapping Applications: A Review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  82. Rusnák, M.; Sládek, J.; Kidová, A.; Lehotský, M. Template for High-Resolution River Landscape Mapping Using UAV Technology. Measurement 2018, 115, 139–151. [Google Scholar] [CrossRef]
  83. Joyce, K.E.; Duce, S.; Leahy, S.M.; Leon, J.; Maier, S.W. Principles and Practice of Acquiring Drone-Based Image Data in Marine Environments. Mar. Freshw. Res. 2019, 70, 952. [Google Scholar] [CrossRef]
  84. Xu, C.; Liao, X.; Tan, J.; Ye, H.; Lu, H. Recent Research Progress of Unmanned Aerial Vehicle Regulation Policies and Technologies in Urban Low Altitude. IEEE Access 2020, 8, 74175–74194. [Google Scholar] [CrossRef]
  85. Stöcker, C.; Bennett, R.; Nex, F.; Gerke, M.; Zevenbergen, J. Review of the Current State of UAV Regulations. Remote Sens. 2017, 9, 459. [Google Scholar] [CrossRef] [Green Version]
  86. Felis, J.J.; Kelsey, E.C.; Adams, J.; Stenske, J.G.; White, L.M. Population estimates for selected breeding seabirds at Kīlauea Point National Wildlife Refuge, Kauaʻi, in 2019. U.S. Geological Survey Data Series. 2020, 1130, 32. [Google Scholar] [CrossRef]
  87. Borghgraef, A.; Barnich, O.; Lapierre, F.; Van Droogenbroeck, M.; Philips, W.; Acheroy, M. An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface. EURASIP J. Adv. Signal Process. 2010, 2010, 978451. [Google Scholar] [CrossRef] [Green Version]
  88. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef]
  89. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multiBox detector. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar] [CrossRef] [Green Version]
  90. Shorten, C.; Khoshgoftaar, T.M. A Survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  91. Brown, J.B. Classifiers and Their Metrics Quantified. Mol. Inform. 2018, 37, 1700127. [Google Scholar] [CrossRef]
  92. Gao, M.; Hugenholtz, C.H.; Fox, T.A.; Kucharczyk, M.; Barchyn, T.E.; Nesbit, P.R. Weather Constraints on Global Drone Flyability. Sci. Rep. 2021, 11, 12092. [Google Scholar] [CrossRef]
  93. Leira, F.S.; Johansen, T.A.; Fossen, T.I. Automatic detection, classification and tracking of objects in the ocean surface from UAVs using a thermal camera. In Proceedings of the 2015 IEEE Aerospace Conference, Big Sky, MT, USA, 7–14 March 2015; pp. 1–10. [Google Scholar]
  94. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef] [Green Version]
  95. Doukari, M.; Batsaris, M.; Topouzelis, K. UASea: A Data Acquisition Toolbox for Improving Marine Habitat Mapping. Drones 2021, 5, 73. [Google Scholar] [CrossRef]
  96. Goddijn-Murphy, L.; Dufaur, J. Proof of Concept for a Model of Light Reflectance of Plastics Floating on Natural Waters. Mar. Pollut. Bull. 2018, 135, 1145–1157. [Google Scholar] [CrossRef] [PubMed]
  97. Lee, Z.; Ahn, Y.-H.; Mobley, C.; Arnone, R. Removal of Surface-Reflected Light for the Measurement of Remote-Sensing Reflectance from an above-Surface Platform. Opt. Express 2010, 18, 26313. [Google Scholar] [CrossRef] [PubMed]
  98. Maharjan, N.; Miyazaki, H.; Pati, B.M.; Dailey, M.N.; Shrestha, S.; Nakamura, T. Detection of River Plastic Using UAV Sensor Data and Deep Learning. Remote Sens. 2022, 14, 3049. [Google Scholar] [CrossRef]
  99. Su, J.; Liu, C.; Coombes, M.; Hu, X.; Wang, C.; Xu, X.; Li, Q.; Guo, L.; Chen, W.-H. Wheat Yellow Rust Monitoring by Learning from Multispectral UAV Aerial Imagery. Comput. Electron. Agric. 2018, 155, 157–166. [Google Scholar] [CrossRef]
Figure 1. From top to bottom, left to right: (A) example of an aerial image from the experimental trial, in which floating litter objects were deployed from the boat to collect aerial imagery; (B) a UAS operator flying the commercial DJI Phantom Series UAV; (C,D) examples of the two types of collected aerial images: normal exposure (Blue Set, (C)) and underexposed at low EV (Dark Set, (D)).
Figure 2. Linear regressions between the number of items per image (i.e., visually identified) and automated estimates from pixel-based detection (left panels) and machine-learning object detection (right panels) for the Blue Set (top) and Dark Set (bottom) of images. Machine learning correlates more strongly with manual counts (R2 = 0.88 and R2 = 0.64 for the Blue and Dark sets, respectively) than pixel-based detection (R2 = 0.002 and R2 = 0.25, respectively).
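For readers reproducing the Figure 2 comparison, the per-image agreement reduces to an ordinary least-squares fit between two count vectors. The sketch below is a minimal illustration: the count arrays are hypothetical placeholders, not the study's data, and scipy.stats.linregress is used to recover the slope, intercept, and R2.

```python
# Minimal sketch: regressing automated per-image detections against manual
# counts, as in Figure 2. The arrays are hypothetical examples only.
import numpy as np
from scipy.stats import linregress

manual_counts = np.array([3, 7, 12, 5, 9, 14, 2, 8])   # items visually identified per image
ml_counts     = np.array([4, 6, 11, 5, 10, 13, 2, 9])  # CNN detections per image

fit = linregress(manual_counts, ml_counts)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, R2={fit.rvalue**2:.2f}")
```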
Figure 3. Average differences, with respective standard deviations, per category in the number of classified objects between manual counting and machine learning, using the Blue Set of normally exposed images (A) and the Dark Set of underexposed images (B). Negative values represent an overall underestimation driven by false negatives (i.e., the average number of undetected objects), and positive values represent an overall overestimation driven by false positives (i.e., the average number of falsely identified objects).
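The per-category bias plotted in Figure 3 is simply the mean (and standard deviation) of the signed difference between automated and manual counts for each class. A minimal sketch follows, assuming hypothetical class names and per-image counts rather than the study's annotations.

```python
# Minimal sketch of the per-category bias in Figure 3: mean difference
# between machine-learning and manual counts per image. All names and
# counts below are hypothetical placeholders.
import numpy as np

manual = {"bottle": np.array([5, 4, 6]), "bag": np.array([3, 2, 4]), "fragment": np.array([10, 12, 9])}
ml     = {"bottle": np.array([4, 4, 5]), "bag": np.array([4, 3, 4]), "fragment": np.array([7, 8, 6])}

for category in manual:
    diff = ml[category] - manual[category]  # negative -> missed objects (false negatives)
    print(f"{category}: mean={diff.mean():+.2f}, sd={diff.std(ddof=1):.2f}")
```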
Figure 4. Average processing times for identification and classification across the three methods tested, using images with normal exposure (Blue Set) and low exposure (Dark Set).
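The pixel-based timings compared in Figure 4 correspond to an automated color-analysis pass over each image. As a rough illustration only, the sketch below counts the fraction of pixels falling outside an assumed blue-water HSV band using OpenCV; the file name and threshold values are hypothetical placeholders, not the study's calibrated workflow.

```python
# Minimal sketch of a pixel-based color detector: flag pixels whose color
# departs from the water background and report their percentage. HSV bounds
# and the input file are hypothetical assumptions.
import cv2
import numpy as np

img = cv2.imread("aerial_frame.jpg")                  # hypothetical UAS image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

water_lo = np.array([90, 50, 50])                     # assumed blue-water band (H, S, V)
water_hi = np.array([130, 255, 255])
water = cv2.inRange(hsv, water_lo, water_hi)          # 255 where the pixel looks like water

litter_fraction = np.count_nonzero(water == 0) / water.size
print(f"% of pixels detected as non-water: {100 * litter_fraction:.4f}%")
```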
Table 1. Summary comparison of performance indicators for manual counting, pixel-based detection, and machine learning in detecting and assessing floating litter contamination from UAS-collected aerial imagery (Blue and Dark Sets). Legend: µ, average; σ, standard deviation; Precision, the proportion of predicted positives that are correct for each class; Recall (sensitivity), the proportion of actual positives that are correctly classified; F1, the harmonic mean of precision and recall, evaluating the balance between the two. For Precision, Recall, and F1, higher values indicate better performance.

Average process times (s):
- Manual Count: identification, 26 (Blue), 22 (Dark); classification, 52 (Blue), 40 (Dark)
- Pixel-Based Detection: processing, 43 (Blue), 26 (Dark)
- Machine Learning: object classification, 159 (Blue), 135 (Dark)

Number of objects classified:
- Manual Count: µ = 141 (Blue), 117 (Dark); σ = 112 (Blue), 112 (Dark)
- Machine Learning: µ = 152 (Blue), 157 (Dark); σ = 116 (Blue), 187 (Dark)

Pixel-Based Detection output:
- % of pixels detected: 0.0025% (Blue), 0.000049% (Dark)
- Estimated area: 5.31 (Blue), 0.089 (Dark)

Performance of the ML method:
- Blue Set: P = 63.59%, R = 78.27%, F1 = 56.33%
- Dark Set: P = 77.62%, R = 77.71%, F1 = 66.15%

Work interface:
- Manual Count: DotDotGoose
- Pixel-Based Detection: custom workflow to generate a new algorithm
- Machine Learning: Supervisely; Google Colab GPU; Python

Required skills:
- Manual Count: basic informatics skills
- Pixel-Based Detection: programming skills; knowledge of color-image processing
- Machine Learning: programming skills; knowledge of deep learning
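The Precision, Recall, and F1 values in Table 1 follow the standard definitions P = TP/(TP + FP), R = TP/(TP + FN), and F1 = 2PR/(P + R), computed from per-class true positives (TP), false positives (FP), and false negatives (FN). A minimal sketch with hypothetical confusion counts:

```python
# Minimal sketch of the Table 1 metrics. The TP/FP/FN values below are
# hypothetical, not the study's confusion counts.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=25, fn=30)
print(f"P={p:.2%}, R={r:.2%}, F1={f1:.2%}")
```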