Article

Comparing Pixel- and Object-Based Approaches for Classifying Multispectral Drone Imagery of a Salt Marsh Restoration and Reference Site

1 Faculty of Forestry and Environmental Management, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
2 Faculty of Natural Resource Management, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
3 Department of Biology, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
4 Canadian Wildlife Service, Environment Canada, P.O. Box 6227, Sackville, NB E4L 4N1, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(6), 1049; https://doi.org/10.3390/rs16061049
Submission received: 28 January 2024 / Revised: 6 March 2024 / Accepted: 11 March 2024 / Published: 15 March 2024
(This article belongs to the Special Issue Remote Sensing for the Study of the Changes in Wetlands)

Abstract

Monitoring salt marshes with remote sensing is necessary to evaluate their state and restoration. Determining appropriate techniques for this can be overwhelming. Our study provides insight into whether a pixel- or object-based Random Forest classification approach is best for mapping vegetation in north temperate salt marshes. We used input variables from drone images (raw reflectances, vegetation indices, and textural features) acquired in June, July, and August 2021 of a salt marsh restoration and reference site in Aulac, New Brunswick, Canada. We also investigated the importance of input variables and whether using landcover classes representing areas of change was a practical way to evaluate variation in the monthly images. Our results indicated that (1) the classifiers achieved overall validation accuracies of 91.1–95.2%; (2) pixel-based classifiers outperformed object-based classifiers by 1.3–2.0%; (3) input variables extracted from the August images were more important than those extracted from the June and July images; (4) certain raw reflectances, vegetation indices, and textural features were among the most important variables; and (5) classes that changed temporally were mapped with user’s and producer’s validation accuracies of 86.7–100.0%. Knowledge gained during this study will inform assessments of salt marsh restoration trajectories spanning multiple years.

1. Introduction

Although wetlands occupy approximately 2–6% of the earth’s surface, they fulfill many ecological functions [1,2]. Wetlands provide carbon sequestration [3,4]; support soil formation and stabilization [5,6]; supply food, water, and plant biomass [7,8,9]; and serve as cultural and recreational areas [10,11]. The quality and quantity of ecosystem services provided by wetlands vary depending on their type, hydrology, water chemistry, soils, and plant species [12,13,14,15]. Unfortunately, global wetland loss has been extensive because of anthropogenic activities, including agriculture, urbanization, aquaculture, and industry, as well as climate change [16,17]. Major efforts are being made to restore wetlands and the beneficial services they provide [18,19,20,21]. Understanding vegetation dynamics during wetland restoration can aid in assessing and modeling the recovery trajectory, evaluating ecosystem services, and planning future restoration projects [22,23].
The ecological importance of wetlands and their vegetation dynamics highlight the need for reliable, accurate, and efficient methods to monitor vegetation changes. Remote sensing offers practical ways to monitor vegetation distributions in areas that are difficult to access on foot, including wetlands. Indeed, many initiatives have been undertaken at regional, national, and international scales to create wetland classification systems and inventories (maps displaying the extent and distribution of wetlands over a geographical area) [24,25]. The wetlands of interest in our study were coastal salt marshes in Atlantic Canada, which have been part of the Canadian Wetland Classification System since its development in 2002 [24,25].
Aerial imagery acquired using sensors fixed to planes [26,27,28] and satellites, including Sentinel [29], Landsat [30,31,32,33], Worldview [33], and others [34], has been successfully used for mapping coastal wetlands. Very-high-spatial-resolution aerial imagery can be acquired using sensors fixed to drones, which have recently (compared to planes and satellites) become popular tools for environmental monitoring and have also been used to map coastal wetland vegetation and its changes [35,36,37,38,39,40,41]. Prior to the development of very-high-spatial-resolution drone and satellite imagery, which are especially valuable for assessing differences in heterogeneous wetland communities with species composition differing at the scale of centimeters [42], a prevalent challenge in mapping wetlands (including salt marshes) was that the spatial resolution of images was too low to assess fine-scale ecological dynamics [43]. Lower-resolution satellites (e.g., Landsat) have typically been restricted to assessing regional extent and loss of marshes [30,31,32,33]. Despite the desire for higher-spatial-resolution imagery for mapping coastal wetlands, such resolution does not always equate to higher classification accuracy [44], especially as classifications become more complicated with additional landcover classes that can now be detected. In addition to achieving very high resolution, drone imagery is more cost-effective and user-friendly for environmental monitoring than imagery acquired with conventional aircraft and spacecraft. Drones operate beneath cloud cover and can be deployed whenever the weather permits, making image acquisition at regular temporal intervals possible and facilitating analysis of vegetation dynamics [45]. As additional images are added to a set of multi-temporal images, however, the time associated with fieldwork, image acquisition, and image processing increases the complexity and financial cost of a monitoring project. Overall, there are many considerations when planning an environmental monitoring project using remote sensing techniques, including which image classification methods are most appropriate.
Much effort has been made to develop image classification methods for vegetation mapping and change analysis, including traditional pixel-based (PB) [46,47] and, more recently, object-based (OB) methods [47,48]. Differences in these methods are attributed to the fundamental unit of analysis. For PB techniques, this is an image pixel, while for OB techniques, image objects (groups of pixels) are first created using image segmentation. Selecting the appropriate method depends partially on the sizes of the features of interest and the spatial resolution of the imagery. PB classification approaches have typically been used for wetland mapping in Canada [25], but OB classification is preferred when objects of interest are substantially larger than the spatial resolution of pixels, which is common when using very-high-spatial-resolution drone imagery [48]. Image segmentation can, however, introduce over- and under-segmentation errors by incorrectly grouping pixels where segments do not represent the heterogeneity of plant communities [49,50]. Selecting optimal segmentation settings for an entire wetland landscape can be challenging due to the potential ranges of wetland vegetation sizes [51]. Among studies specifically investigating whether PB or OB methods are more suitable for classifying imagery acquired over coastal wetlands [50,52], Martinez Prentice et al. [50] found that PB classification slightly outperformed OB classification for wetlands in Estonia, while Zheng et al. [52] found that OB outperformed PB for wetlands in China. When inspecting studies in other wetland environments that use very-high-spatial-resolution multispectral images but do not directly compare classification approaches, there is a preference for using OB methods [27,40,44,53] over PB methods [37]. Within a PB or OB framework (i.e., using either pixels or objects as the base unit of image analysis), many commonly used classification algorithms can be applied for mapping coastal wetland vegetation, such as machine learning algorithms, including Maximum Likelihood [32], Support Vector Machine [40], and Random Forest (RF) [33,34,36,40,50,54]; deep learning algorithms, including Artificial Neural Networks [38,44,52]; and other approaches, such as K-Nearest Neighbours [27]. In general, many studies have investigated the suitability of the available classification algorithms for coastal wetlands and obtained varying results. Among machine learning algorithms, Random Forest is often one of the highest performing [25,52]. A lack of consensus indicates that more research is needed to determine which classification approach, including the base unit of analysis (pixel- or object-based) and algorithm (RF, etc.), is best for mapping vegetative communities of coastal wetlands, including the salt marshes of Atlantic Canada.
Our study navigates the intricate task of selecting effective techniques for monitoring salt marshes through remote sensing, aiming to streamline the process of choosing classification methods tailored for salt marsh mapping in Atlantic Canada. Using drone imagery captured during the growing season of June, July, and August 2021 at a salt marsh restoration and reference site in Aulac, New Brunswick, we sought to identify the optimal Random Forest classification approach—considering both pixel- and object-based methods—for mapping the vegetation within these marshes. Our landcover classes included single and mixed plant species, bare ground, water, various substratum types typical of salt marshes, as well as specific classes to assess monthly image variations (change classes) [46,55,56]. We delved into key input variables for classification, such as reflectance information, vegetation indices, and textural features, and so incorporated a more extensive array of input image variables compared to other remote sensing studies in salt marsh mapping. By evaluating the importance of input variables from June, July, and August, our study provided insight into the value of multi-temporal classification and the times of year that contribute most to achieving high classification accuracy. Our ultimate goal is to guide future classification and change-detection projects in the relatively understudied salt marshes of Atlantic Canada (and other north temperate geographic locations), providing valuable insights for selecting classifiers when using multispectral drone imagery to monitor them.

2. Materials and Methods

2.1. Study Area

The study area in Aulac is located at the head of the Cumberland Basin within the Bay of Fundy (latitude: 45°51′31″N, longitude: 64°18′15″W; Figure 1). Semi-diurnal tidal amplitudes in the Cumberland Basin reach more than 12 m [57]. A managed realignment salt marsh restoration project began here in 2009 under the leadership of Ducks Unlimited Canada (DUC) and partners [23,58,59,60]. The project consists of two restoration (B, C) and two reference (A, D) sites; the study areas of focus for the present paper were the Western reference and restoration sites (A and B, Figure 1). Among the plant taxa present in the sites, we were particularly interested in mapping ecologically significant ones, namely, Spartina alterniflora (saltwater cordgrass, syn. Sporobolus alterniflorus; [61,62]) and Spartina patens (salt marsh hay, syn. Sporobolus pumilus), which are central during restoration [23,60]. We used imagery from the 11th year after dike breach (2021), by which time the restoration site was dominated by S. alterniflora, the low-elevation bioengineer species of salt marshes in eastern North America. The reference site is mainly mid-elevation salt marsh dominated by S. patens (typical of Bay of Fundy salt marshes [60]), with S. alterniflora restricted to creeks and seaward edges. Other vegetation present in the sites included terrestrial species growing on the high-elevation dike and the coastal species freshwater cordgrass (Spartina pectinata (syn. Sporobolus michauxianus)) and seaside arrowgrass (Triglochin maritima). Species growing in low densities and usually mixed with other vegetation included maritime orach (Atriplex spp.), sea lavender (Limonium carolinianum), seaside alkali grass (Puccinellia maritima), sea-blite (Suaeda spp.), sea milkwort (Lysimachia maritima), seaside plantain (Plantago maritima), seaside goldenrod (Solidago sempervirens), and sea glasswort (Salicornia maritima) [59]; hereafter, we refer to these various plants by their genus names, except for the Spartina grasses. The phenological growing period of vegetation in these marshes is from early June, when above-ground biomass first emerges from below-ground roots and rhizomes, to late September, when plants begin to senesce [23].

2.2. Field Data

Field data were collected on 11 June, 11 July, and 8–10 August 2021, using stratified random sampling with quadrats (0.5 × 0.5 m) along three transects in the reference site and four transects in the restoration site (15 quadrats per transect per sampling round). In addition, the perimeter of the sites was surveyed on foot to ensure that all landcover classes were documented. Within each quadrat, the landcover class was recorded, and plant stems were identified and counted. Photographs and GPS points were acquired for each quadrat location. Between the two sites, a total of 30 classes were identified using field data and mosaics of each month’s imagery displayed in true and false colour (Table 1).

2.3. Image Acquisition

A MicaSense Dual Camera System (MicaSense, Seattle, WA, USA) mounted on a DJI Matrice 200 V2 aerial drone (DJI, Nanshan District, Shenzhen, China) was used to acquire multispectral drone images of the sites on 13 June, 12 July, and 10 August 2021 (Table 2). The image acquisitions were planned from shortly after the above-ground biomass had emerged (June) until after the vegetation was fully grown and flowering (August) but before it began to senesce (September) [23].
The camera system consisted of two five-band multispectral cameras, with the ten bands covering visible and near-infrared spectra (Table 3). Images with a spatial resolution of approximately 7 cm were acquired with 80% front and side overlap, following a grid pattern flown over the sites at an altitude of 100 m and a horizontal speed of 10 m s⁻¹. The drone was controlled using DJI Pilot mission planner software v1.7.2 (the Android version compatible with the DJI CrystalSky tablet), a DJI Cendence remote controller, and a DJI CrystalSky tablet (Figure 2). Images of the MicaSense Calibrated Reflectance Spectralon Panel (RP04-1949202-OB; Figure 2E) were acquired immediately before and after each flight.

2.4. Pre-Classification Image Processing

The image processing workflow (Figure 3) first included georeferencing and mosaicking together individual drone images corresponding to each multispectral band using Pix4Dmapper software v.4.6.4 (Pix4D, Prilly, Switzerland). In the processing options, the template used for mosaicking was Advanced Multispectral. Different settings were attempted to create the best mosaic, and the final processing options used were as follows. In the initial processing step, the key points image scale was set to full, meaning that tie points were automatically extracted from the full size of the imagery. Tie points were features that could be detected in more than one overlapping image and formed the three-dimensional point cloud that was used to photogrammetrically orthorectify the image mosaics. Additional points were computed for every 4th pixel (densification), and every point was re-projected in a minimum of three overlapping images. Orthomosaics (photogrammetrically orthorectified image mosaics) were output in GeoTiff file format and calibrated in reflectance for each MicaSense band using images of the calibrated Spectralon panel.
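The panel-based reflectance calibration applied in this step can be illustrated, in simplified form, as a per-band scaling: raw digital numbers are multiplied by the ratio of the panel’s known reflectance (from its calibration certificate) to the mean digital number observed over the panel pixels. The Python sketch below shows only this single-point empirical calibration, not the full Pix4D pipeline (which also accounts for sensor and irradiance corrections); all names are hypothetical.

```python
import numpy as np

def to_reflectance(dn_band, dn_panel_mean, panel_reflectance):
    """Single-point empirical reflectance calibration for one band.

    dn_band: raw digital numbers for the band's mosaic.
    dn_panel_mean: mean digital number over the Spectralon panel's pixels.
    panel_reflectance: the panel's known reflectance in this band, taken
    from its calibration certificate.
    """
    return dn_band * (panel_reflectance / dn_panel_mean)
```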
Among the possible processing options, many of those selected required the greatest amount of computational random-access memory (RAM), and we chose them with the goal of maximizing the quality of our calibrated reflectance mosaics. Lower-quality orthomosaics could include gaps or result in lower spatial resolutions, which we wanted to avoid as this could have reduced our ability to assess small-scale differences that are common in salt marsh vegetation communities. For future projects, constructing less dense point clouds would reduce the computational burden, but tests should be conducted to determine how this affects classification accuracy.
For our project, a 3D textured mesh (a representation of the surface geometry of a scene, created by connecting points within the 3D point cloud) was not generated because the three-dimensional aspects of the sites were not necessary to create high-quality two-dimensional maps. Digital Surface Models (DSMs) and Digital Terrain Models (DTMs) were also excluded from the analysis because we did not have appropriate elevation data to evaluate their accuracy. In a preliminary assessment, we subtracted the DTMs from the DSMs to create canopy height models (CHMs) and compared these to field measurements of plant height; the CHMs included many negative values and did not correlate with plant heights, which confirmed our decision to exclude the DSMs and DTMs. Overall, among the many processing options available in Pix4D, project managers must assess their needs and computational resources when determining which to use.
To ensure that the reflectance mosaics generated from the imagery acquired each month were properly aligned, they were orthorectified using OrthoEngine in the PCI Catalyst software (PCI Geomatics Group Inc., Richmond Hill, ON, Canada). Orthorectified reflectance mosaics were clipped using the Clip function in PCI Catalyst Focus to isolate areas of interest, which were then used to compute vegetation indices and textural features using an EASI script in Catalyst. Many vegetation indices have been developed in various contexts, and the 28 vegetation indices we selected were valuable in previous studies of vegetation mapping (Table 4).
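To illustrate how such indices are derived from the calibrated bands, the sketch below computes common formulations of four indices that appear later in Section 3.3; these follow standard literature definitions and may differ in detail from those in Table 4, and the placeholder arrays are assumptions standing in for the clipped reflectance mosaics.

```python
import numpy as np

# Placeholder reflectance arrays; in practice these would be read from the
# calibrated GeoTIFF mosaics (e.g., with rasterio).
rng = np.random.default_rng(0)
blue, green, red, nir = (rng.random((100, 100)).astype("float32") for _ in range(4))

eps = 1e-6  # guards against division by zero over dark or no-data pixels

ndvi = (nir - red) / (nir + red + eps)     # Normalized Difference (NDVI)
rvi = nir / (red + eps)                    # Red Ratio (RVI)
grvi = nir / (green + eps)                 # Green Ratio (GRVI)
ndavi = (nir - blue) / (nir + blue + eps)  # Normalized Difference Aquatic (NDAVI)
```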
Textural features contain information about the spatial distribution of tonal variations within an image. Textural features were calculated using the gray-level co-occurrence matrix (GLCM) method [73]. GLCM examines the spatial relationship among pixels within a defined kernel size, which was set to 9 in our study. For each of the 10 MicaSense band reflectance images, we calculated 10 textural features (Table 5), for a total of 100 per month.
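As an illustration of the GLCM computation, the following sketch derives a subset of texture measures for a single band over a sliding 9 × 9 window with scikit-image; the quantization to 32 gray levels and the four one-pixel offsets are assumptions, the loop is left unoptimized for clarity, and the study’s full set of 10 measures (including the tonal mean) is not reproduced.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_textures(band, window=9, levels=32):
    """Per-pixel GLCM texture measures over a sliding window.

    `band` is a 2-D reflectance array scaled to [0, 1]; it is quantized to
    `levels` gray levels before the co-occurrence matrices are built.
    """
    q = np.clip((band * (levels - 1)).astype(np.uint8), 0, levels - 1)
    half = window // 2
    names = ("contrast", "homogeneity", "energy", "correlation")
    out = {name: np.zeros_like(band) for name in names}
    for i in range(half, band.shape[0] - half):
        for j in range(half, band.shape[1] - half):
            win = q[i - half:i + half + 1, j - half:j + half + 1]
            # One-pixel offsets in four directions, averaged for rotation invariance.
            P = graycomatrix(win, distances=[1],
                             angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                             levels=levels, symmetric=True, normed=True)
            for name in names:
                out[name][i, j] = graycoprops(P, name).mean()
    return out
```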
OB classification was conducted using Object Analyst in PCI Catalyst and required pre-processing steps, including image segmentation and attribute calculation. Images were segmented using the 10 raw reflectance bands from each month as the source channels. The parameters used for image segmentation were as follows: scale value of 5, shape value of 0.5, and compactness value of 0.5. We tested the effect that different scale values had on classification accuracy and determined that, in general, classification accuracy decreased as the scale value increased (results not published). Following this assessment, we used the smallest scale value to ensure that image objects represented as much of the site’s heterogeneity as possible. Shape and compactness parameters ranged from 0 to 1.0, and default values in Object Analyst were used, but these parameters do not have a large effect on segmentation results for imagery of wetlands [53]. The shape parameter controls how much the segmentation is based on spectral information versus object shape information, and the compactness parameter controls how much the object shape tends to be spatially compact versus spectrally homogeneous (but less compact). After image objects were created, attributes (including raw reflectances, vegetation indices, and textural features) were calculated using average values of all pixels within each image object.
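Object Analyst is proprietary, but the segmentation-then-attribution workflow can be sketched with open-source tools: scikit-image’s SLIC (whose compactness argument plays a role loosely similar to the compactness setting above, and whose segment count controls object size much as the scale value does), followed by per-object means. The placeholder stack and segment count are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import slic  # scikit-image >= 0.19 for channel_axis

# Placeholder for one month's stacked calibrated reflectance mosaic,
# shape (rows, cols, 10 bands); real data would come from the GeoTIFFs.
rng = np.random.default_rng(0)
stack = rng.random((200, 200, 10)).astype("float32")

# Segment using all 10 raw reflectance bands as source channels.
segments = slic(stack, n_segments=2000, compactness=0.5, channel_axis=-1)

# Object attributes: the mean of each input variable over each object's pixels.
labels = np.unique(segments)
object_means = np.stack(
    [ndimage.mean(stack[..., b], labels=segments, index=labels)
     for b in range(stack.shape[-1])],
    axis=1)  # shape: (n_objects, n_bands)
```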

2.5. Image Classification and Accuracy Assessment

Images were classified using a supervised algorithm applied to pixels or objects. The algorithm required the delineation of training areas or objects for each landcover class. We considered a total of 30 classes (Table 1), which included 6 monocultures of vegetation dominated by singular species; 3 mixed assemblages of vegetation; 6 abiotic landscape features, including rocks, driftwood, bare muddy areas, etc.; 5 classes associated with water features of the sites; and 10 classes that changed from month to month. Separate classifications were conducted for the restoration and reference sites, and some classes were observed in only one site. In total, 15 classes were used for the restoration site and 24 for the reference site, with 9 used for both sites.
For PB classification, training areas were primarily delineated as 5-pixel-by-5-pixel square polygons, although smaller and more irregularly shaped polygons were sometimes used in heterogeneous areas. Training areas were used to compute the Jeffries–Matusita (JM) distance between class pairs, which is a measure of class spectral separability. JM distances range from 0 to 2, with values of 2 representing class pairs that are completely separated [74]. JM distances were computed for each month using 10 band reflectance images. For OB classification, training objects were identified using the centroid of training polygons prepared for PB classifications. In total, the classification of the reference site used 539 training polygons and 535 validation polygons for 24 classes, and the classification of the restoration site used 440 training polygons and validation polygons for 15 classes (Figure 4). We used JM distance values from each month to assess how acquisition time affected the spectral separability of the classes used in our study, which, in turn, affected the classification accuracies.
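For reference, the JM distance is conventionally derived from the Bhattacharyya distance between two class signatures; assuming normally distributed classes with mean vectors μ and covariance matrices Σ estimated from the training pixels,

```latex
B_{ij} = \frac{1}{8}(\mu_i - \mu_j)^{T}\left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1}(\mu_i - \mu_j)
       + \frac{1}{2}\ln\!\left(\frac{\left|\tfrac{1}{2}(\Sigma_i + \Sigma_j)\right|}{\sqrt{|\Sigma_i|\,|\Sigma_j|}}\right),
\qquad
\mathrm{JM}_{ij} = 2\left(1 - e^{-B_{ij}}\right)
```

The exponential term saturates JM at 2, which is why completely separable class pairs approach that value.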
The mixed-pixel problem occurs when, at the scale of observation, several classes contribute to the observed spectral response of a pixel. For our project, the scale of observation was ~7 cm, and salt marsh vegetation communities commonly have multiple species growing amongst one another within that scale, which contributes to the mixed-pixel problem. We addressed this problem by including classes of mixed vegetation in areas where multiple species grew together in an area of less than ~49 cm² (the area of a pixel). Many of our mixed classes occurred in the reference site (Site A), as is typical in established salt marshes in the region. In the restoration site (Site B), the plant community primarily consists of monocultures, which produce pure, unmixed pixels; still, mixed pixels can occur in boundary areas where one monoculture transitions to another. A limitation of our study was that we did not have landcover classes to represent every type of mixed pixel.
We used Random Forest (RF), a non-parametric decision tree-type supervised classification algorithm. RF can be executed in R (script written by Ned Horning using the packages maptools, sp, randomForest, raster, and rgdal), which we used for the PB classifications. We used the “all-polygon” version of the algorithm, which considers all the pixels within training area polygons and does not use average values. For OB classification, we used the Random Trees classifier in PCI Catalyst Object Analyst, which uses the same method as the RF algorithm we used for PB classifications. We applied PB and OB classifications to raw reflectances, vegetation indices, and textural features extracted from the multi-temporal drone imagery acquired in June, July, and August over the reference and restoration sites.
The RF classification algorithm randomly sampled input variables as candidates at each node of each tree in the forest, which included 500 independent decision trees with the default mtry value. The default mtry is the square root of p, where p is the number of variables in x (i.e., the matrix of predictors for the classification). Within each decision tree, two-thirds of the training data were randomly selected (“in-bag” data; IB) to develop it. The tree was then validated using the remaining third of the data (“out-of-bag” data; OOB). This process was repeated for 500 decision trees and produced 500 independent classifications which, once combined, produced the final classification map. Finally, RF ranked the degree of importance of each image variable (consisting of reflectances of individual MicaSense bands, vegetation indices, and textural features) in the classification [75]. Rankings were based on the mean decrease in accuracy of each input variable. Mean decrease in accuracy expresses how much OOB accuracy (described below) the model loses when the values of a variable are randomly permuted, effectively removing its information; the more the accuracy suffers, the more important the variable.
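The PB classifications used the R script noted above; as a minimal Python analogue, the sketch below mirrors the settings just described (500 trees, mtry = √p, and OOB validation), with scikit-learn’s permutation importance standing in for the mean decrease in accuracy. The placeholder data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder data standing in for the real inputs: a (n_pixels, 414) matrix
# of input variables sampled from training polygons, with integer class labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 414)), rng.integers(0, 24, 1000)
X_val, y_val = rng.random((300, 414)), rng.integers(0, 24, 300)

rf = RandomForestClassifier(
    n_estimators=500,     # 500 independent decision trees, as in the study
    max_features="sqrt",  # mtry = sqrt(p) candidate variables tried per node
    oob_score=True,       # score each tree on the samples left out of its bootstrap
    n_jobs=-1,
    random_state=42)
rf.fit(X_train, y_train)
print(f"OOB accuracy: {rf.oob_score_:.3f}")

# Permutation importance approximates RF's mean decrease in accuracy: the
# drop in accuracy when one variable's values are randomly shuffled.
imp = permutation_importance(rf, X_val, y_val, n_repeats=5, random_state=42)
top25 = np.argsort(imp.importances_mean)[::-1][:25]
```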
Classification accuracy was assessed in two ways. First, OOB training data were compared to the classified images using a confusion matrix, in which each cell expressed the number of pixels of a class defined by the OOB data that were assigned to each class by the classifier. The confusion matrix allowed for computing average and overall accuracies, kappa coefficients, as well as individual user’s and producer’s accuracies (UAs and PAs) for each class. The second accuracy assessment was conducted by comparing the classified image to validation field data. For each validation data point, classes were extracted from the classified image using the Extract Values to Points tool of ArcMap®. Confusion matrices and associated accuracies were then computed.
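The accuracy measures named above follow directly from the confusion matrix; a minimal sketch with placeholder labels (the variable names are assumptions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# y_true: reference classes from validation data; y_pred: classes extracted
# from the classified image at the same points (placeholder values shown).
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 0, 1, 2, 2, 2, 2, 1])

cm = confusion_matrix(y_true, y_pred)              # rows: reference, columns: classified
overall_accuracy = np.trace(cm) / cm.sum()         # fraction of points classified correctly
producers_accuracy = np.diag(cm) / cm.sum(axis=1)  # per class: complement of omission error
users_accuracy = np.diag(cm) / cm.sum(axis=0)      # per class: complement of commission error
average_accuracy = producers_accuracy.mean()       # mean of per-class producer's accuracies
kappa = cohen_kappa_score(y_true, y_pred)          # chance-corrected agreement
```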

3. Results

3.1. Class Spectral Separability

Reflectance values of landcover classes’ training areas (used for PB classification) were well separated from one another, as indicated by a mean JM distance ranging from 1.94 to 1.97 (Table 6). Because separability values were calculated using data from individual months, we could use these results to assess temporal variability in the landcover class spectral signatures and determine the best month for differentiating certain class pairs. For the reference site, July imagery achieved the lowest average JM distance. For the restoration site, class pairs achieved, on average, higher JM distance values when using July and August imagery compared to using June imagery. Despite adequate average JM distance values, many pairs had low values (<1.90), indicating that they were not well separated. While low JM distances must be considered, our classifications were completed using additional input features of vegetation indices and textures that have improved classification accuracies in the past [36,76] but were not included in the JM distance calculations. In addition, many of our classes represented areas that changed from month to month, and it was expected that these classes would have low separability values when analyzed one month at a time. This is why we used multi-temporal images for producing the final map and why we did not include low JM distance values generated using the change classes during our interpretation of separability results.
Landcover class pairs with low inter-class separability values largely included the vegetation classes (Table 6). These class pairs were expected to achieve the lowest JM distance values because the variation in reflectance between types of vegetation is much smaller than it is between more spectrally contrasting class pairs (e.g., vegetation classes when paired with deep water). In the reference site, the classes most often included in pairs with low separability were the mixed vegetation classes (Classes 17 and 30). In addition, low separability values were generated between Triglochin (Class 16) and S. alterniflora classes, likely resulting from their having similar morphologies and because they often both grew in mixed assemblages with S. patens. For imagery of the restoration site, class pairs with low separability values in each month were S. alterniflora (clean, dense) and S. alterniflora (clean, sparse) (Classes 6 and 13); S. pectinata and dike vegetation (Classes 12 and 15); and rocks and compacted soil (light) (Classes 3 and 10). We found that differences in S. alterniflora density did not affect reflectance characteristics to a degree that resulted in good separability. Also, the reflectance characteristics of S. pectinata and other vegetation growing in high-elevation areas lining the perimeter of the site were not distinct enough to produce high JM distance values. Despite having relatively low separability in all three months, S. alterniflora (clean, dense) and S. alterniflora (clean, sparse) class pairs were most spectrally distinct in August, while S. pectinata and dike vegetation were least spectrally distinct in August.

3.2. Classification

Overall OOB accuracies were >99% when using multi-temporal image sets for both PB and OB methods (Table 7 and Table 8). It is common for RF classifiers to achieve very high out-of-bag accuracy when working with many input variables [33,36,40], and many studies that use RF choose not to present OOB accuracy assessments. We have presented these values because they display how effective the RF classifiers were at assigning training area pixels to the correct classes, and it is valuable to understand how high these values can be when using hundreds of input variables, as we did (we used 414). PB multi-temporal classifications of the reference and restoration sites achieved 0.1–0.2% higher OOB classification accuracy than OB classification did. High individual OOB UAs and PAs were achieved for both sites with both methods. For the reference site, the relatively low UAs and PAs were a result of confusion between S. alterniflora (muddy) (Class 5) and mixed S. alterniflora and S. patens (Class 30) during OB classification; no such confusion occurred during PB classification. For the restoration site, OOB classification error was primarily the result of confusion between S. pectinata (Class 12) and dike vegetation (Class 15). PB classification also showed confusion between Classes 1 and 2, 3 and 10, and 4 and 11. It should be remembered that more data were used in the calculation of PB OOB accuracies, because the classifier considered all pixels within training areas rather than singular objects (i.e., groups of pixels), and so one misclassification during OB classification resulted in a greater loss of accuracy than a few misclassifications during PB classification.
The resulting classified maps produced by PB and OB classifications of multi-temporal images showed the distribution of classes in the sites (Figure 5 for the reference site (A) and Figure 6 for the restoration site (B)).

3.3. Variable Importance

The RF classifier provided a ranking of input variables based on their relative mean decrease in accuracy when omitted. We used 414 total input variables, with 138 generated per image acquisition. For multi-temporal PB classification of the reference site, 8 (32%), 6 (24%), and 11 (44%) of the top 25 input variables were from imagery acquired in June, July, and August, respectively, with 3 (12%), 9 (36%), and 13 (52%) for the OB classifications (Table 9 and Table 10). For the restoration site, 4 (16%), 10 (40%), and 11 (44%) of the top 25 input variables were from imagery acquired in June, July, and August, respectively, for the PB classifications, with 5 (20%), 4 (16%), and 16 (64%) for the OB classifications. For each classification, input variables from August were most prevalent in the top 25. The input variables from July greatly outnumbered the ones from June during the PB classification of the restoration site and the OB classification of the reference site but were slightly outnumbered by those from June during the PB classification of the reference site and OB classification of the restoration site. Overall, the imagery acquired in August consistently provided more than a third of the top 25 ranked variables in the classifications.
Among the 138 total input variables extracted from each month’s imagery, 10 (7%) were raw reflectance bands, 28 (20%) were vegetation indices, and 100 (73%) were textural features. For each site, the same 414 input variables were used for both classification approaches. The most important raw reflectance bands for both approaches were the Green, RedEdge, and Red bands. The Blue band was also ranked high for the OB classification of the reference site. Raw reflectance bands were overrepresented in the top 25 variables of each classification compared to their representation in the dataset. For the PB classifications, raw reflectance bands made up 4 (16%) and 3 (12%) of the top 25 variables for the reference and restoration sites, respectively, and 9 (36%) and 5 (20%) for the OB classifications. Vegetation indices that appeared in the top 25 variables of at least two of our classifications were Normalized Red (NR), Normalized Difference (NDVI), Normalized Near Infrared (NNIR), Green Ratio (GRVI), Red Ratio (RVI), and Normalized Difference Aquatic (NDAVI). The only vegetation index that was not among the top variables of any of our classifications was the Water Adjusted Vegetation Index (WAVI). Only 4 (16%) and 2 (8%) of the 25 top input variables for our PB classifications of the reference and restoration sites, respectively, were vegetation indices. Conversely, vegetation indices comprised 8 (32%) and 15 (56%) of the top 25 ranking variables for the OB classifications. The tonal mean calculation was the most common highly ranked measure of texture during all our classifications, and textures calculated using the reflectance bands Red, Green, Blue, RedEdge, and NIR were all among the top 25 most important variables. For the PB classifications, 17 (68%) and 20 (80%) of the top 25 input variables for the classification of images over the reference and restoration sites, respectively, were textural features (Table 9 and Table 10). OB classifications had fewer textural features in the top 25 variables (8 and 5, for the reference and restoration sites, respectively), but the tonal mean calculation was again the most important.

3.4. Validation Accuracy

Validation accuracy is a more reliable measure of classification accuracy than OOB accuracy because it compares the classified image with an independent set of validation data (Table 11 and Table 12). For the reference site, classification using multi-temporal imagery achieved overall validation accuracies of 92.4% and 91.1% for PB and OB classifications, respectively. The vegetation community was dominated by S. patens, which was often found growing in mixed assemblages with other salt marsh species, including S. alterniflora, Triglochin, Puccinellia, Lysimachia, Plantago, Solidago, Argentina, and Limonium. The S. patens class covered more area than any other and had reductions in accuracy due to confusion with S. alterniflora (clean, dense) (Class 6), mixed mid-elevation vegetation (Class 17), mixed S. alterniflora and S. patens (Class 30), and the change class bare mud to S. alterniflora (Class 9). Spartina alterniflora covered less area in the reference site than in the restoration site. Spartina alterniflora (dense, muddy) (Class 5) had a UA and PA >93.0% for both classification methods, but confusions with Triglochin (Class 16) and mixed S. alterniflora and S. patens (Class 30) were responsible for classification errors. Spartina alterniflora (clean, dense) (Class 6) was mapped with more error than the previous class because of confusion with two mixed vegetation classes (Classes 17 and 30). Triglochin (Class 16) was mapped with greater error of commission during PB classification but greater error of omission during OB classification. Mixed vegetation classes were often misclassified as one another and as the other vegetation types. Multi-temporal RF classifiers performed well at identifying landcover classes that changed in the reference site. The average UA and PA of the change classes (Classes 9, 19, 22, 23, 24, 26, 27, 28, and 29) were 97.4% and 95.6%, respectively, for PB classification and 96.5% and 94.3% for OB classification. Most of these classes were associated with the large salt pool (a depression that holds water at low tide) in the site.
For the restoration site, multi-temporal image classification achieved overall validation accuracies of 95.2% and 93.2% for PB and OB classifications, respectively. Most of the restoration site was covered by a monoculture of S. alterniflora with varying appearance based on its density, the soil moisture content, and the amount of mud on plant leaves (Classes 5, 6, and 13). When using the multi-temporal image set, the average UA and PA of these classes were 95.2% and 98.3%, respectively, for PB classification and 89.4% and 92.5% for OB classification. For PB classification, lower validation accuracies were a result of confusion between S. alterniflora (clean, dense) (Class 6), S. patens (Class 7), and dike vegetation (Class 15) and between S. alterniflora (clean, sparse) (Class 13), S. patens (Class 7), and the change class bare mud to S. alterniflora (Class 9). For OB classification, lower validation accuracies resulted from confusion among these same classes. Spartina patens was mapped with a UA and PA of 97.0% and 91.4%, respectively, for PB classification and 93.9% and 88.6% for OB classification. Reductions in the validation accuracy for this class (S. patens) were due to confusion with Class 6 (S. alterniflora (clean, dense)), Class 13 (S. alterniflora (clean, sparse)), and Class 15 (dike vegetation). Spartina pectinata (Class 12) has been important during restoration dynamics in our sites, but at the time of image acquisition, it was only found growing in a low density along the high-elevation dike areas and showed some confusion with the dike vegetation class (Class 15). Both PB and OB classifiers were able to accurately identify change classes, although there was some confusion in OB classification between the S. alterniflora to wrack (Class 14) and wrack (Class 4) classes and in both classifications between the bare mud to S. alterniflora (Class 9), S. alterniflora (clean, dense) (Class 6), and S. alterniflora (clean, sparse) (Class 13) classes.

4. Discussion

Our primary goal was to determine which classification method, pixel- or object-based, was higher performing and more suitable for monitoring coastal vegetation, using an 11-year-old restoring salt marsh and an established (reference) salt marsh in the upper Bay of Fundy as a case study. A secondary goal was to evaluate the relative importance of input variables (raw reflectances, vegetation indices, and textural features) in our multi-temporal classifications and determine which month(s) of the growing season (June, July, or August) provided the most important variables. We also took the opportunity to test the effectiveness of landcover classes representing areas on the ground that changed throughout our multi-temporal image sets; such change classes may be useful to assess longer time-scale changes, including year-to-year ones, occurring during restoration. Achieving our goals will help to optimize methodologies for remotely sensing the recovery trajectories of salt marshes in the future. Below, we discuss how and why the PB classifier may have outperformed the OB classifier; how the RF classifier generally compares to other commonly used approaches; how the most important input variables varied among the months and classification approaches; considerations related to classifying environments with strong seasonal and annual variation; how using many landcover classes, including those that focus on change in a multi-temporal image set, affects classification results; challenges associated with remote sensing studies, including ours; and recommendations for future classifications assessing annual salt marsh recovery patterns.

4.1. Comparison of Classification Approaches

In our study, PB RF classifiers outperformed OB RF classifiers in overall validation accuracy (92.4% and 95.2% vs. 91.1% and 93.2%). With very-high-spatial-resolution imagery, it is usually recommended to use OB classification methods to reduce “salt-and-pepper” effects in classified maps that can result from PB classification [27]. A methodological contributor to our comparative result may be that the OB classification was conducted on an inadequately segmented image; a poor-quality segmentation directly leads to a low-quality classification [77]. Furthermore, we used an unsupervised segmentation method that could have incorrectly grouped pixels into too many (over-segmentation) or too few (under-segmentation) objects that did not represent single homogeneous classes [49]. As has been mentioned, achieving optimal segmentation is difficult in wetlands with relatively small plants and highly spatially heterogeneous vegetation communities [51]. Even after selecting the most suitable segmentation method, its parameters (including shape, compactness, and size) and input variables can be difficult to optimize because of the high spatial variation and low spectral variation in coastal wetland plant communities [53]. PB classification methods are typically more user-friendly than OB methods because they do not require selection and optimization of segmentation parameters. Further work is needed to optimize the segmentation method to improve the accuracy of OB image classification in the case of our salt marsh sites.
Despite the recent popularity of OB classification methods, which many studies have shown a preference for in mapping coastal wetlands [27,40,53], our study found that the PB RF classification approach achieved a higher validation accuracy. Among the studies directly comparing PB and OB methods and mapping coastal wetlands with drone imagery [50,52], our results aligned with those of Martinez Prentice et al. [50], who found that PB classifiers performed better. Differing results between studies are likely influenced by the nature of the coastal wetland under investigation. The Zheng et al. [52] study, where OB methods outperformed PB methods, was conducted in a sub-tropical (31 degrees latitude) salt and brackish marsh at the mouth of an estuary along the Yellow Sea, which had plant communities consisting of Phragmites sp., Scirpus triqueter, Carex scabrifolia, and Imperata cylindrica [78] (reeds, bulrushes, sedges, and cogon grasses, respectively); these plants are larger, with more showy inflorescences, than the plants of the Aulac marshes. On the other hand, Martinez Prentice et al.’s [50] study was conducted in a north temperate coastal wetland (58 degrees latitude) along the Baltic Sea, where the plant communities include moor grasses (Molinia caerulea) and rushes (Carex panicea) [79]. These Baltic coastal meadows include vegetation that resembles the grasses of the Aulac marshes in size and inflorescence. The physical sizes of plants in the type of coastal wetland investigated could be a large contributor to whether a PB or OB classification technique better maps vegetation patterns.
Many classification algorithms are widely available, and selecting the appropriate one can be difficult. For our study, we used RF-supervised classification approaches. Random Forest, which has been used to achieve high classification accuracies for mapping salt marshes and other coastal wetlands [33,34,36,40,50,54], is considered easily accessible (including within the free software R v4.2.3) and requires relatively few pre-defined parameters. In addition, the structure of the RF classifier is more easily understood than those of deep learning classifiers, including Artificial Neural Networks, which use hidden layers. Despite this, more research investigating deep learning classifiers should be conducted because they can outperform machine learning classifiers, including RF [38,52], although another study found the opposite [54]. Deep learning networks require more intensive computational power and longer training times than RF but have the potential to achieve higher validation accuracies, even with fewer input variables [35]. Many other classification approaches have shown potential for accurate mapping of coastal wetlands [27,32,40,60], and additional approaches are continually being developed. Overall, environmental monitoring project managers need to assess their needs and resources when selecting a classification algorithm for accurate mapping, but the results of our study and others have shown that RF is a user-friendly option that can achieve very high classification accuracies comparable to those of other, more complex classifiers.

4.2. Similarities and Differences in Variable Importance between Classification Approaches

In our multi-temporal RF classifications, the most highly ranked variables differed for the PB and OB classifications but were a combination of raw reflectances, vegetation indices, and textural features from each month for both classifier types. Raw reflectance bands were more prevalent among the top 25 input variables for the OB classifications than for the PB classifications; nonetheless, in all our classifications, raw reflectance bands were better represented among the top variables than among the full set of input variables. The first studies using remote sensing to monitor coastal wetlands typically relied on raw reflectance alone [80] and achieved accuracies ranging from ~70 to 90% [26,31,81,82]. As expected, none of the raw reflectances extracted from the June images was among the top variables in our classifications because above-ground vegetative biomass had just started to emerge and likely showed less variation in reflectance compared to when it was more developed later in the growing season. Among the raw reflectance bands in the top 25 input variables, the RedEdge bands were unsurprisingly included; they have been found useful for discriminating vegetation types [83]. In particular, the RedEdge portion of a graminoid (the primary plant taxon in the Aulac marshes) canopy spectrum is largely affected by inundation [84], and so RedEdge bands may be especially useful for classifying the vegetation of intertidal salt marshes, where water levels regularly fluctuate. The other raw reflectance bands among the top 25 variables included the occasional Red, Green, and Blue (RGB) band, supporting the finding that RGB bands can be useful for classifying wetland vegetation [52,54,83]. Previous studies have accurately mapped coastal wetlands without RedEdge and NIR bands [52], and the minimum number and required types of bands necessary to accurately map salt marsh vegetation should be further investigated. If RGB bands are capable of accurately mapping habitats without RedEdge and NIR, a project’s cost could decrease, since RGB sensors are more readily available and affordable. The NIR band, which was not among the most important input variables in our classifications, has also been found to be less important for distinguishing wetland vegetation previously [54]; it is correlated with leaf thickness [85], and the graminoids dominating the Aulac marshes have very thin leaves. While there were differences in the number of important raw reflectance bands between classification approaches, this importance was more consistent between PB and OB classifications than for the other types of input variables (vegetation indices and textural features).
A major difference in the top 25 most important input variables between our PB and OB classifications was a greater occurrence of vegetation indices in the latter classification. Vegetation indices are now almost always used in classifications of coastal wetlands [37,86] and have also been used to accurately quantify vegetation biomass and canopy moisture [42,87]. As with the raw reflectance input variables, we expected the June vegetation indices to be less important than those extracted from the July and August images. Surprisingly, the vegetation indices among the most important variables were all extracted from June images in the PB classifications, although it is unclear why. In the OB classifications, however, vegetation indices extracted from the July and August images were more important than those extracted from the June ones. Our results suggest that segmenting an image into objects may increase the importance of vegetation indices and decrease the importance of textural features during the classification of salt marshes (discussed further in the next paragraph).
The sizeable difference in the importance of textural features between our PB and OB classifications suggests that texture is more important when analyzing an image using a smaller base unit of analysis (individual pixels) than it is when using a larger base unit of analysis (image objects) and supports the notion that texture is strongly influenced by spatial resolution [88]. Textural features in classification are useful in sites like salt marshes that have classes with little inter-class spectral variability and high within-class variability [27,56]. We found that the tonal mean calculation always ranked very high for our classifications, as also reported previously [83]. While textural features extracted from the imagery of each month were important, those extracted from the August images appeared most often among the top 25 input variables of our classifications. This may be because August is when vegetation is fully grown and variations in its spatial characteristics are elevated [54]. While there were differences in the input variables considered important for each classification, both classification approaches were similar in that variables from August prevailed. Overall, understanding the importance of input features relative to their acquisition times is useful information when streamlining methodologies for remotely monitoring wetlands, including those undergoing change.

4.3. Further Classification Considerations for Temporal Change, Both Seasonal and Annual

With our multi-temporal images (collected at monthly intervals during the growing season), we found that imagery acquired in August was more useful for distinguishing landcover classes than imagery acquired in June and July, and, in general, imagery acquired in July was more useful than imagery acquired in June. Previous studies have also found that variation in spectral characteristics and vegetation indices between coastal vegetation types is greatest later in the growing season when vegetation is fully grown, particularly when it is flowering [39,89], which typically occurs in August in Atlantic Canadian salt marshes [23]. Furthermore, our study and others [29,30] have determined that multi-temporal classification of coastal wetlands achieves higher validation accuracies than single-temporal classification for habitats with strong seasonality [87,90,91] like our north temperate salt marshes [23]. Note that these short-term studies used multi-temporal imagery acquired within a single year, whereas longer-term image analyses of coastal wetland change typically used images acquired at one time per year [56,92,93].
Beyond overall validation accuracy, other considerations come into play when selecting an appropriate remote sensing methodology for detecting temporal change in vegetative communities. Using multi-temporal images within a growing season may complicate the analysis of long-term annual change by introducing within-year variation and many images. The financial cost and effort required increase with the number of image acquisitions, since resources must be put towards additional ground-truthing and image processing. Therefore, the type of change that is of interest (within or between years) should be considered when determining the necessary frequency of image acquisitions per year. For assessing restoration dynamics, annual changes are often of greater interest than within-year changes [23,60]. Note, though, that early in a restoration project, multiple (monthly) monitoring times have been central to uncovering certain fast-changing dynamics [59]. Additionally, restoration occurring in sub-optimal or deteriorating conditions may need frequent monitoring for the quick implementation of adaptive management strategies [21]. Thus, the decision on whether to use multiple image acquisition times in a given year (and how many) may depend on the age and/or environmental conditions of the restoration project. Based on the results of our study, for long-term monitoring of salt marsh vegetation dynamics, we recommend acquiring one or two images annually, specifically when vegetation has fully grown and is flowering. Nevertheless, there may be situations where imagery from both early and late in a growing season is necessary. In the next section, we further discuss quantifying temporal change in salt marshes by comparing our method using change classes with other commonly used methods.

4.4. Assessment of Number of Landcover Classes, including Change Classes

The number and nature of landcover classes used during a classification greatly influence the resulting map. In our study, we used a relatively large number of salt marsh landcover classes (24 and 15) and achieved overall OOB and validation accuracies greater than 90%. Our classification of the restoration site (15 classes) achieved a higher validation accuracy than that of the reference site (24 classes). Many previous studies with similar or higher validation accuracies (88–95%) used fewer classes (five classes with 95% [56], five classes with 88% [52], and eight classes with 90% [28]). Studies experimenting with a similar number of classes (17) typically achieved lower accuracies than our study (57–86%) [31,35], and a classification that used 43 classes only achieved 58% [27]. While we did have many classes, separating the mixed vegetation classes into individual species classes could make them more spectrally homogeneous and potentially increase classification accuracy. However, this could also increase the classification error due to the mixed-pixel problem, and, in general, using more classes increases the probability of misclassification. Note that in our study, mixed vegetation classes helped address the small-scale (within a 49 cm² area) heterogeneous plant communities. We considered having more of them but lacked enough appropriate training and validation data because these mixed landcovers did not cover substantial areas within the sites. Previous studies mapping coastal wetlands have also used mixed vegetation classes to address the spatial heterogeneity of coastal wetlands and the mixed-pixel problem [35,40,46]. Other studies using very high spatial resolutions avoid the use of mixed classes [37,38,42,54] and rarely directly discuss how they address mixed pixels or their implications. As in previous studies, our classes with contrasting spectral properties, including our deep salt pool water, wood, and shallow salt pool water classes (Classes 8, 11, and 25, respectively), were more likely to be correctly identified by the classifiers (high validation PA, >95%) and be reliably mapped (high validation UA, >95%) [35,40]. On the other hand, the classes with lower UAs and/or PAs were those with spectral and textural properties that were more similar to those of other classes. Many of our change classes, including Classes 15, 19, 23, 24, 27, 28, and 29, achieved high (>90%) validation UAs and PAs because of their spectrally contrasting properties caused by their variation within the multi-temporal images. Overall, the objectives of a study, the spatial resolution of the imagery, and the prevalence of mixed pixels must be considered when determining the appropriate number and type of landcover classes for a classification. Additionally, the landcover classes should be determined at the beginning of a project, so that appropriate field validation data can be collected for each class. The number of landcover classes and the quality and quantity of field data greatly influence the result of a classification, which highlights the importance of establishing meaningful landcover classes and building strong training and validation datasets.
While we developed change classes within a single growing season (from an image set containing three acquisition times), such classes could be applied to an image set spanning multiple years. Representing change as individual classes has been used to assess temporal patterns in Eucalyptus forests in Brazil [55] and coastal areas, including salt and brackish marshes, in Texas, USA [56], but has never been used to monitor salt marsh restoration. An advantage of using change classes in a single classification is that the importance of variables from each image in the set can be extracted, providing insight into which acquisition time was most important. However, as the number of classes in a classification increases, coinciding with the amount of change observed and the number of images in the set, more processing time is required, and results are often harder to interpret. Instead of using change classes, change assessment is most often conducted as a post-classification technique [32,55,93], where change is more easily interpreted in the form of a confusion matrix. Post-classification techniques do not require the creation of change classes but do require classifications representing before and after change, which are then compared [46]. This method could be applied to single images and used to assess sequential within- and between-year differences, rather than classifying multiple images together. We do not recommend using change classes to assess within-year differences (as we did in our study) in combination with interannual post-classification analysis because it is unlikely that the same seasonal changes would occur in the same spatial locations in multiple years, and so change would be overestimated. If using multi-temporal images within and between years, we recommend that either every image in the time series should be classified individually and change assessed with post-classification analysis (both within and between years), or that a general map excluding change classes should be created for each year and then compared using post-classification change analysis [32].
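As a minimal sketch of the post-classification alternative described above, the following Python snippet cross-tabulates two single-date classified rasters into a from–to change matrix; the arrays and class codes are hypothetical stand-ins for real classified maps.

```python
import numpy as np

# Two classified rasters on the same grid (e.g., year 1 and year 2 maps);
# class codes 1-3 are illustrative.
before = np.array([[1, 1, 2], [2, 3, 3], [1, 2, 3]])
after  = np.array([[1, 2, 2], [2, 3, 1], [1, 3, 3]])

n_classes = 3
change = np.zeros((n_classes, n_classes), dtype=int)
for b, a in zip(before.ravel(), after.ravel()):
    change[b - 1, a - 1] += 1  # rows = before, columns = after

# Off-diagonal cells count from-to transitions; the diagonal is stability.
print(change)
```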

4.5. Challenges and Future Research in Remote Sensing of Salt Marshes

The process of selecting remote sensing and image analysis methods for environmental monitoring can be daunting. Each project comes with its own set of unique goals and conditions, and the learning curve for acquiring and processing images can be steep. Moreover, with the continuous development of new technology and methods, staying up to date adds to the complexity. Environmental managers need to be mindful of their project objectives, budget constraints, and knowledge limitations when choosing appropriate remote sensing methods. Consulting with remote sensing experts can simplify this process. In the initial project planning phase, understanding the required image spatial resolution and survey area size is central to determining whether satellite or drone imagery is more suitable. Subsequently, consideration must be given to the number of required images and the selection of appropriate remote sensing methods tailored to the desired project outcomes, ranging from simple visual assessments to complex classification models [46]. Some trial and error with various image processing methods may be necessary to optimize the approach for a specific environmental monitoring project. Additionally, considering local ecological knowledge of the landcovers and biotic communities at the sites is necessary to ensure that high-quality training and validation data are created and that meaningful objectives are pursued. For our study, we had this knowledge because we have been sampling the sites since 2010 (i.e., the beginning of the Aulac salt marsh restoration project) [23,59,60]. Overall, the quality of an environmental management project is enhanced through a cross-disciplinary approach, involving collaboration among experts in remote sensing, ecology, geology, sociology, and other relevant fields to broaden the collective knowledge base.
Addressing the challenges encountered in our study that may have influenced result quality is instructive. Initially, our field data acquisition was not intended to evaluate every individual landcover class. Instead, our sampling focused on assessing ecological changes in salt marsh restoration and reference sites, following the field methods outlined in Virgin et al. [60]. Consequently, we gathered numerous field data points for the most abundant vegetation landcovers but fewer points for landcovers around the site perimeters or within the large salt pool in the reference site (Site A). To overcome this challenge, we augmented training and validation data for these less sampled landcovers using our knowledge of the sites and aerial photographs, an approach previously carried out with success [40]. Additionally, the Global Positioning System (GPS) receiver we utilized had a spatial accuracy of 2–3 m, meaning that GPS points collected during our fieldwork did not always precisely represent the sampled area. This spatial inaccuracy was a notable challenge, especially in a salt marsh, where landcover varies at centimeter scales. Future studies needing ground-truthing should consider employing centimeter-accuracy RTK GPS units during field data collection. Despite these challenges, we maintain confidence in the accuracy of the training and validation points, as we have expert knowledge of the field sites through extensive annual sampling and could perform appropriate corrections. A further study component we could have conducted is testing more than one classifier (in addition to Random Forests) for our study system. We chose RF because it is known to perform well, based on our experience [36,94] and other studies in the literature [33,34,40,50,54]. However, a focused comparative analysis with other algorithms could have revealed a higher-performing option, as documented in other studies [52]. Another possible component (alluded to above) is evaluating the minimum number and types of spectral bands needed to accurately map salt marsh vegetation. Moreover, evaluating the inclusion of additional sensors and remote sensing platforms could have enhanced the comprehensiveness and informativeness of our study for guiding future remote sensing of salt marsh restoration projects (see below).
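As a hedged sketch of the comparative analysis suggested above, the following Python code evaluates a Random Forest against a second common classifier (here a support vector machine, our illustrative choice) with five-fold cross-validation; the synthetic dataset stands in for real image-derived variables and does not reflect our actual processing chain.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel input variables and landcover labels.
X, y = make_classification(n_samples=500, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)

for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```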
Since the integration of multispectral drone data into coastal environmental monitoring projects has become widespread, many future research efforts are needed to guide and extend their utility in coastal vegetation mapping, in addition to the ones mentioned above. These include investigating how incorporating more landcover classes, training data, and validation data improves or degrades mapping accuracy [38]. Moreover, the drone camera utilized in our study, the MicaSense Dual Camera System, has bands similar to those of Sentinel-2 [95]. This compatibility allows for the validation of Sentinel-2-based classified images using drone imagery [96]. The synergy of drone and satellite data facilitates the assessment of environmental changes at various scales and would be particularly valuable for habitats like salt marshes, which exhibit heterogeneous vegetation communities while covering extensive geographic areas. Our study solely used input variables derived from the reflectance data acquired with the MicaSense dual-camera system; future studies could benefit from incorporating DSMs, DTMs, and canopy height models derived from RTK GPS and Lidar data. These additional data sources have been valuable in various studies [35,58], especially considering the strong influence of elevation on salt marshes and its role in driving vegetation zonation in these habitats. Overall, there are many future research directions that, if followed, will further assist with the selection of appropriate remote sensing methods for mapping salt marshes.
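To make the drone–satellite compatibility point concrete, the sketch below pairs the MicaSense band centers from Table 3 with spectrally similar Sentinel-2 bands; this particular pairing is our illustrative assumption based on published band centers, not a lookup table from [95].

```python
# Approximate pairing of MicaSense Dual Camera System band centers (nm)
# with similar Sentinel-2 bands (assumed pairing, for illustration only).
micasense_to_sentinel2 = {
    444: "B1 (coastal aerosol, 443 nm)",
    475: "B2 (blue, 490 nm)",
    560: "B3 (green, 560 nm)",
    668: "B4 (red, 665 nm)",
    705: "B5 (red edge, 705 nm)",
    740: "B6 (red edge, 740 nm)",
    842: "B8 (NIR, 842 nm)",
}

for mica_nm, s2_band in micasense_to_sentinel2.items():
    print(f"MicaSense {mica_nm} nm -> Sentinel-2 {s2_band}")
```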

5. Conclusions

Our comparison of classification approaches showed that pixel-based Random Forest classifiers achieved higher classification and validation accuracies than object-based Random Forest classifiers for mapping vegetation in north temperate salt marsh sites. Our results likely depended in part on the small-sized plants at the sites; OB methods may be more appropriate for wetlands with larger-sized plants. Pixel-based approaches do not require the optimization and selection of input features for image segmentation, possibly explaining their higher performance in our case study, in addition to making them more user-friendly. We found that input variables extracted from the August images were most important in the classifications, suggesting that imagery should generally be acquired at times when vegetation is developed and flowering. Additionally, our results indicated that many image input variables, including raw reflectances, vegetation indices, and textural features, are valuable for achieving high classification accuracy and that the number of input variables can be increased by using multi-temporal image sets. Also, our results showed potential for monitoring classes that change when using multi-temporal images. The lessons learned from our study provide guidance for future monitoring projects of salt marshes in Atlantic Canada and could be applied to other geographical locations with plant sizes similar to those in the Aulac marshes. Monitoring vegetation dynamics during coastal wetland restoration is essential because of the threatened status of these ecosystems and the need to restore the important services they provide.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs16061049/s1, Table S1: Photographs of vegetation classes; Figure S1: Photographs of substratum landcover classes; Figure S2: Photographs of salt pool landcover classes; Figure S3: Map of spatial distribution of training and validation data in the reference site (Site A); Figure S4: Map of spatial distribution of training and validation data in the restoration site (Site B); Table S2: JM distance values for Reference W classes; Table S3: JM distance values for Restoration W classes; Table S4: OOB and validation accuracy for Reference W classifications; Table S5: OOB and validation accuracy for Restoration W classifications; Table S6: Variable importance for Reference W classifications; Table S7: Variable importance for Restoration W classifications.

Author Contributions

Conceptualization, G.S.N., B.L., A.L., M.A.B. and A.R.H.; methodology, G.S.N., B.L., A.L. and M.A.B.; software, A.L.; validation, G.S.N.; formal analysis, G.S.N.; investigation, G.S.N., B.L., A.L. and M.A.B.; resources, B.L., A.L. and M.A.B.; data curation, G.S.N.; writing—original draft preparation, G.S.N.; writing—review and editing, G.S.N., M.A.B., B.L. and A.R.H.; visualization, G.S.N.; supervision, A.L., B.L. and M.A.B.; project administration, G.S.N., B.L., A.L. and M.A.B.; funding acquisition, B.L., M.A.B., A.L. and G.S.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a MITACS ACCELERATE Fellowship in partnership with Ducks Unlimited Canada (IT25726 to G.N., M.B. and B.L.), the Natural Sciences and Engineering Research Council of Canada (Discovery grant RGPIN-2020-04106 to M.B.; CREATE program grant to B.L.), the New Brunswick Environmental Trust Fund (200133, 210235, and 220335 to A.L., G.N., M.B. and B.L.), the University of New Brunswick, and Lakehead University.

Data Availability Statement

The data presented in this study are not yet openly available but will be made available on Figshare upon publication.

Acknowledgments

We thank Swarna Naojee, Olivia Hanson, Jonathan Linihan, Alexa Stack Mills, and Jenna Watson for assisting in the field. We thank Nic McLellan from Ducks Unlimited Canada and Jeff Ollerhead from Mount Allison University for information and feedback and three anonymous reviewers for helpful comments.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. He, S.; Niu, Z.; Chen, Y.; Li, L.; Zhang, H. Global wetlands: Potential distribution, wetland loss, and status. Sci. Total Environ. 2017, 586, 319–327. [Google Scholar] [CrossRef]
  2. Davidson, N.C.; Fluet-Chouinard, E.; Finlayson, C.M. Global extent and distribution of wetlands: Trends and issues. Mar. Freshw. Res. 2018, 69, 620–627. [Google Scholar] [CrossRef]
  3. Owers, C.J.; Rogers, K.; Woodroffe, C.D. Spatial variation of above-ground carbon storage in temperate coastal wetlands. Estuar. Coast. Shelf Sci. 2018, 210, 55–67. [Google Scholar] [CrossRef]
  4. Gallant, K.; Withey, P.; Risk, D.; Van Kooten, G.C.; Spafford, L. Measurement and economic valuation of carbon sequestration in Nova Scotian wetlands. Ecol. Econ. 2020, 171, 106619. [Google Scholar] [CrossRef]
  5. Gyssels, G.; Poesen, J.; Bochet, E.; Li, Y. Impact of plant roots on the resistance of soils to erosion by water: A review. Prog. Phys. Geogr. Earth Environ. 2005, 29, 189–217. [Google Scholar] [CrossRef]
  6. Ford, H.; Garbutt, A.; Ladd, C.; Malarkey, J.; Skov, M.W. Soil stabilization linked to plant diversity and environmental context in coastal wetlands. J. Veg. Sci. 2016, 27, 259–268. [Google Scholar] [CrossRef] [PubMed]
  7. Kadlec, R.H. Wastewater treatment at the Houghton Lake wetland: Hydrology and water quality. Ecol. Eng. 2009, 35, 1287–1311. [Google Scholar] [CrossRef]
  8. Straub, J.N.; Gates, R.J.; Schultheis, R.D.; Yerkes, T.; Coluccy, J.M.; Stafford, J.D. Wetland food resources for spring-migrating ducks in the Upper Mississippi River and Great Lakes Region. J. Wildl. Manag. 2012, 76, 768–777. [Google Scholar] [CrossRef]
  9. Rasool, S.; Rasool, T.; Gani, K.M. Unlocking the potential of wetland biomass: Treatment approaches and sustainable resource management for enhanced utilization. Bioresour. Technol. Rep. 2023, 23, 101553. [Google Scholar] [CrossRef]
  10. Pedersen, E.; Weisner, S.E.B.; Johansson, M. Wetland areas’ direct contributions to residents’ well-being entitle them to high cultural ecosystem values. Sci. Total Environ. 2019, 646, 1315–1326. [Google Scholar] [CrossRef] [PubMed]
  11. Alikhani, S.; Nummi, P.; Ojala, A. Urban wetlands: A review on ecological and cultural values. Water 2021, 13, 3301. [Google Scholar] [CrossRef]
  12. Xu, X.; Chen, M.; Yang, G.; Jiang, B.; Zhang, J. Wetland ecosystem services research: A critical review. Glob. Ecol. Conserv. 2020, 22, e01027. [Google Scholar] [CrossRef]
  13. Lin, W.; Xu, D.; Guo, P.; Wang, D.; Li, L.; Gao, J. Exploring variations of ecosystem service value in Hangzhou Bay Wetland, Eastern China. Ecosyst. Serv. 2019, 37, 100944. [Google Scholar] [CrossRef]
  14. Balwan, W.K.; Kour, S. Wetland—An ecological boon for the environment. East Afr. Sch. J. Agric. Life Sci. 2021, 4, 38–48. [Google Scholar] [CrossRef]
  15. Zhang, W.; Ge, Z.-M.; Li, S.-H.; Tan, L.-S.; Zhou, K.; Li, Y.-L.; Xie, L.-N.; Dai, Z.-J. The role of seasonal vegetation properties in determining the wave attenuation capacity of coastal marshes: Implications for building natural defenses. Ecol. Eng. 2022, 175, 106494. [Google Scholar] [CrossRef]
  16. Ballut-Dajud, G.A.; Sandoval Herazo, L.C.; Fernández-Lambert, G.; Marín-Muñiz, J.L.; López Méndez, M.C.; Betanzo-Torres, E.A. Factors affecting wetland loss: A review. Land 2022, 11, 434. [Google Scholar] [CrossRef]
  17. Fluet-Chouinard, E.; Stocker, B.D.; Zhang, Z.; Malhotra, A.; Melton, J.R.; Poulter, B.; Kaplan, J.O.; Goldewijk, K.K.; Siebert, S.; Minayeva, T.; et al. Extensive global wetland loss over the past three centuries. Nature 2023, 614, 281–286. [Google Scholar] [CrossRef] [PubMed]
18. Romañach, S.S.; DeAngelis, D.L.; Koh, H.L.; Sulaiman, R.B.R.; Zhai, L. Conservation and restoration of mangroves: Global status, perspectives, and prognosis. Ocean Coast. Manag. 2018, 154, 72–82. [Google Scholar] [CrossRef]
  19. Erwin, K.L. Wetlands and global climate change: The role of wetland restoration in a changing world. Wetl. Ecol. Manag. 2009, 17, 71–84. [Google Scholar] [CrossRef]
  20. Humpenöder, F.; Karstens, K.; Lotze-Campen, H.; Leifeld, J.; Menichetti, L.; Barthelmes, A.; Popp, A. Peatland protection and restoration are key for climate change mitigation. Environ. Res. Lett. 2020, 15, 104093. [Google Scholar] [CrossRef]
  21. Waltham, N.J.; Alcott, C.; Barbeau, M.A.; Cebrian, J.; Connolly, R.M.; Deegan, L.A.; Dodds, K.; Goodridge Gaines, L.A.; Gilby, B.L.; Henderson, C.J.; et al. Tidal marsh restoration optimism in a changing climate and urbanizing seascape. Estuaries Coasts 2021, 44, 1681–1690. [Google Scholar] [CrossRef]
  22. Pickett, S.T.A.; Cadenasso, M.L.; Meiners, S.J. Ever since Clements: From succession to vegetation dynamics and understanding to intervention. Appl. Veg. Sci. 2009, 12, 9–21. [Google Scholar] [CrossRef]
  23. Norris, G.S.; Virgin, S.D.S.; Schneider, D.W.; McCoy, E.M.; Wilson, J.M.; Morrill, K.L.; Hayter, L.; Hicks, M.E.; Barbeau, M.A. Patch-level processes of vegetation underlying site-level restoration patterns in a megatidal salt marsh. Front. Ecol. Evol. 2022, 10, 1000075. [Google Scholar] [CrossRef]
  24. Mahdavi, S.; Salehi, B.; Granger, J.; Amani, M.; Brisco, B.; Huang, W. Remote Sensing for Wetland Classification: A Comprehensive Review. GIScience Remote Sens. 2018, 55, 623–658. [Google Scholar] [CrossRef]
  25. Mirmazloumi, S.M.; Moghimi, A.; Ranjgar, B.; Mohseni, F.; Ghorbanian, A.; Ahmadi, S.A.; Amani, M.; Brisco, B. Status and trends of wetland studies in Canada using remote sensing technology with a focus on wetland classification: A bibliographic analysis. Remote Sens. 2021, 13, 4025. [Google Scholar] [CrossRef]
  26. Neuenschwander, A.L.; Crawford, M.M.; Provancha, M.J. Mapping of coastal wetlands via hyperspectral AVIRIS data. In Proceedings of the IGARSS ‘98. Sensing and Managing the Environment. 1998 IEEE International Geoscience and Remote Sensing. Symposium Proceedings. (Cat. No.98CH36174), Seattle, WA, USA, 6–10 July 1998. [Google Scholar] [CrossRef]
  27. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote. Sens. 2006, 72, 799–811. [Google Scholar] [CrossRef]
  28. Correll, M.D.; Hantson, W.; Hodgman, T.P.; Cline, B.B.; Elphick, C.S.; Gregory Shriver, W.; Tymkiw, E.L.; Olsen, B.J. Fine-scale mapping of coastal plant communities in the Northeastern USA. Wetlands 2019, 39, 17–28. [Google Scholar] [CrossRef]
  29. Sun, C.; Li, J.; Liu, Y.; Liu, Y.; Liu, R. Plant species classification in salt marshes using phenological parameters derived from Sentinel-2 pixel-differential time-series. Remote Sens. Environ. 2021, 256, 112320. [Google Scholar] [CrossRef]
  30. Zhao, Y.; Feng, D.; Yu, L.; Wang, X.; Chen, Y.; Bai, Y.; Hernández, H.J.; Galleguillos, M.; Estades, C.; Biging, G.S.; et al. Detailed dynamic land cover mapping of Chile: Accuracy improvement by integrating multi-temporal data. Remote Sens. Environ. 2016, 183, 170–185. [Google Scholar] [CrossRef]
  31. Ramsey, E.W.; Laine, S.C. Comparison of Landsat thematic mapper and high resolution photography to identify change in complex coastal wetlands. J. Coast. Res. 1997, 13, 281–292. [Google Scholar]
  32. Camilleri, S.; De Giglio, M.; Stecchi, F.; Pérez-Hurtado, A. Land use and land cover change analysis in predominately man-made coastal wetlands: Towards a methodological framework. Wetl. Ecol. Manag. 2016, 25, 23–43. [Google Scholar] [CrossRef]
  33. Wang, X.; Gao, X.; Zhang, Y.; Fei, X.; Chen, Z.; Wang, J.; Zhang, Y.; Lu, X.; Zhao, H. Land-cover classification of coastal wetlands using the RF algorithm for Worldview-2 and Landsat 8 images. Remote Sens. 2019, 11, 1927. [Google Scholar] [CrossRef]
  34. Zhang, X.; Xu, J.; Chen, Y.; Xu, K.; Wang, D. Coastal wetland classification with GF-3 Polarimetric SAR imagery by using object-oriented Random Forest algorithm. Sensors 2021, 21, 3395. [Google Scholar] [CrossRef] [PubMed]
  35. Gonzalez-Perez, A.; Abd-Elrahman, A.; Wilkinson, B.; Johnson, D.J.; Carthy, R.R. Deep and machine learning image classification of coastal wetlands using unpiloted aircraft system multispectral images and LiDAR datasets. Remote Sens. 2022, 14, 3937. [Google Scholar] [CrossRef]
  36. Norris, G.S.; Leblon, B.; LaRocque, A.; Barbeau, M.A.; Hanson, A.R. Effect of textural features for landcover classification of UAV multispectral imagery of a salt marsh restoration site. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B3-2022, 951–958. [Google Scholar] [CrossRef]
  37. Abeysinghe, T.; Simic Milas, A.; Arend, K.; Hohman, B.; Reil, P.; Gregory, A.; Vázquez-Ortega, A. Mapping invasive Phragmites australis in the Old Woman Creek Estuary using UAV remote sensing and machine learning classifiers. Remote Sens. 2019, 11, 1380. [Google Scholar] [CrossRef]
38. Huang, Y.; Lu, C.; Jia, M.; Wang, Z.; Su, Y.; Su, Y. Plant species classification of coastal wetlands based on UAV images and object-oriented deep learning. Biodiv. Sci. 2023, 31, 22411. [Google Scholar] [CrossRef]
  39. Nardin, W.; Taddia, Y.; Quitadamo, M.; Vona, I.; Corbau, C.; Franchi, G.; Staver, L.W.; Pellegrinelli, A. Seasonality and characterization mapping of restored tidal marsh by NDVI imageries coupling UAVs and multispectral camera. Remote Sens. 2021, 13, 4207. [Google Scholar] [CrossRef]
  40. Durgan, S.D.; Zhang, C.; Duecaster, A.; Fourney, F.; Su, H. Unmanned aircraft system photogrammetry for mapping diverse vegetation species in a heterogeneous coastal wetland. Wetlands 2020, 40, 2621–2633. [Google Scholar] [CrossRef]
  41. Doughty, C.; Cavanaugh, K. Mapping coastal wetland biomass from high resolution Unmanned Aerial Vehicle (UAV) imagery. Remote Sens. 2019, 11, 540. [Google Scholar] [CrossRef]
  42. Janousek, C.N.; Thorne, K.M.; Takekawa, J.Y. Vertical zonation and niche breadth of tidal marsh plants along the Northeast Pacific Coast. Estuar. Coast. 2019, 42, 85–98. [Google Scholar] [CrossRef]
  43. Gallant, A. The challenges of remote monitoring of wetlands. Remote Sens. 2015, 7, 10938–10950. [Google Scholar] [CrossRef]
  44. Pande-Chhetri, R.; Abd-Elrahman, A.; Liu, T.; Morton, J.; Wilhelm, V.L. Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery. Eur. J. Remote Sens. 2017, 50, 564–576. [Google Scholar] [CrossRef]
  45. Rebelo, L.-M.; Finlayson, C.M.; Nagabhatla, N. Remote sensing and GIS for wetland inventory, mapping and change analysis. J. Environ. Manag. 2009, 90, 2144–2153. [Google Scholar] [CrossRef]
  46. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401. [Google Scholar] [CrossRef]
  47. Hussain, M.; Chen, D.; Cheng, A.; Wei, H.; Stanley, D. Change detection from remotely sensed images: From pixel-based to object-based approaches. ISPRS J. Photogramm. 2013, 80, 91–106. [Google Scholar] [CrossRef]
  48. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  49. Kotaridis, I.; Lazaridou, M. Remote sensing image segmentation advances: A meta-analysis. ISPRS J. Photogramm. 2021, 173, 309–322. [Google Scholar] [CrossRef]
  50. Martínez Prentice, R.; Villoslada Peciña, M.; Ward, R.D.; Bergamo, T.F.; Joyce, C.B.; Sepp, K. Machine learning classification and accuracy assessment from high-resolution images of coastal wetlands. Remote Sens. 2021, 13, 3669. [Google Scholar] [CrossRef]
  51. Windle, A.E.; Staver, L.W.; Elmore, A.J.; Scherer, S.; Keller, S.; Malmgren, B.; Silsbe, G.M. Multi-temporal high-resolution marsh vegetation mapping using unoccupied aircraft system remote sensing and machine learning. Front. Remote Sens. 2023, 4, 1140999. [Google Scholar] [CrossRef]
  52. Zheng, J.-Y.; Hao, Y.-Y.; Wang, Y.-C.; Zhou, S.-Q.; Wu, W.-B.; Yuan, Q.; Gao, Y.; Guo, H.-Q.; Cai, X.-X.; Zhao, B. Coastal wetland vegetation classification using pixel-based, object-based and deep learning methods based on RGB-UAV. Land 2022, 11, 2039. [Google Scholar] [CrossRef]
  53. Moffett, K.B.; Gorelick, S.M. Distinguishing wetland vegetation and channel features with object-based image segmentation. Int. J. Remote Sens. 2013, 34, 1332–1354. [Google Scholar] [CrossRef]
  54. Du, B.; Mao, D.; Wang, Z.; Qiu, Z.; Yan, H.; Feng, K.; Zhang, Z. Mapping wetland plant communities using unmanned aerial vehicle hyperspectral imagery by comparing object/pixel-based classifications combining multiple machine-learning algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8249–8258. [Google Scholar] [CrossRef]
  55. Soares, V.P.; Hoffer, R.M. Eucalyptus forest change classification using multi-date Landsat TM data. In Multispectral and Microwave Sensing of Forestry, Hydrology, and Natural Resources; SPIE: Bellingham, WA, USA, 1995; Volume 2314, pp. 281–291. [Google Scholar] [CrossRef]
56. Weismiller, R.A.; Kristof, S.J.; Scholz, D.K.; Auta, P.E.; Momin, S.A. Change detection in coastal zone environments. Photogramm. Eng. Remote Sens. 1977, 43, 1533–1539. [Google Scholar]
  57. Desplanque, C.; Mossman, D.J. Tides and their seminal impact on the geology, geography, history, and socio-economics of the Bay of Fundy, Eastern Canada. Atl. Geol. 2004, 40, 1–130. [Google Scholar] [CrossRef]
  58. Millard, K.; Redden, A.M.; Webster, T.; Stewart, H. Use of GIS and high resolution LiDAR in salt marsh restoration site suitability assessments in the Upper Bay of Fundy, Canada. Wetlands Ecol. Manag. 2013, 21, 243–262. [Google Scholar] [CrossRef]
  59. Boone, L.K.; Ollerhead, J.; Barbeau, M.A.; Beck, A.D.; Sanderson, B.G.; McLellan, N.R. Returning the tide to dikelands in a macrotidal and ice-influenced environment: Challenges and lessons learned. In Coastal Wetlands: Alteration and Remediation; Finkl, C.W., Makowski, C., Eds.; Coastal Research Library; Springer International Publishing: Cham, Switzerland, 2017; Volume 21, pp. 705–749. ISBN 978-3-319-56178-3. [Google Scholar]
  60. Virgin, S.D.S.; Beck, A.D.; Boone, L.K.; Dykstra, A.K.; Ollerhead, J.; Barbeau, M.A.; McLellan, N.R. A managed realignment in the Upper Bay of Fundy: Community dynamics during salt marsh restoration over 8 Years in a megatidal, ice-influenced environment. Ecol. Eng. 2020, 149, 105713. [Google Scholar] [CrossRef]
  61. Peterson, P.M.; Romaschenko, K.; Arrieta, Y.H.; Saarela, J.M. A Molecular phylogeny and new subgeneric classification of Sporobolus (Poaceae: Chloridoideae: Sporobolinae). Taxon 2014, 63, 1212–1243. [Google Scholar] [CrossRef]
  62. Bortolus, A.; Adam, P.; Adams, J.B.; Ainouche, M.L.; Ayres, D.; Bertness, M.D.; Bouma, T.J.; Bruno, J.F.; Caçador, I.; Carlton, J.T.; et al. Supporting Spartina: Interdisciplinary perspective shows Spartina as a distinct solid genus. Ecology 2019, 100, e02863. [Google Scholar] [CrossRef] [PubMed]
63. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA/GSFC Type III; NASA: Greenbelt, MD, USA, 1974; 371p. [Google Scholar]
  64. Villa, P.; Mousivand, A.; Bresciani, M. Aquatic vegetation indices assessment through radiative transfer modeling and linear mixture simulation. Int. J. Appl. Earth Obs. 2014, 30, 113–127. [Google Scholar] [CrossRef]
65. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 457–459. [Google Scholar] [CrossRef]
66. Gitelson, A.A.; Merzlyak, M.N. Spectral reflectance changes associated with autumn senescence of Aesculus hippocastanum L. and Acer platanoides L. leaves: Spectral features and relation to chlorophyll estimation. J. Plant Physiol. 1994, 143, 286–292. [Google Scholar] [CrossRef]
  67. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial color infrared photography for determining early in-season nitrogen requirements in corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
  68. Richardson, A.J.; Wiegand, C.L. Distinguishing vegetation from soil background information. Photogramm. Eng. Remote Sens. 1977, 43, 1541–1552. [Google Scholar]
  69. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Weisz, R. Aerial color infrared photography for determining late-season nitrogen requirements in corn. Agron. J. 2005, 97, 1443–1451. [Google Scholar] [CrossRef]
  70. Kimura, R.; Okada, S.; Miura, H.; Kamichika, M. Relationships among the leaf area index, moisture availability, and spectral reflectance in an upland rice field. Agric. Water Manag. 2004, 69, 83–100. [Google Scholar] [CrossRef]
  71. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  72. Datt, B. Visible/near infrared reflectance and chlorophyll content in Eucalyptus leaves. Int. J. Remote Sens. 1999, 20, 2741–2759. [Google Scholar] [CrossRef]
  73. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  74. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis: An Introduction, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2006; ISBN 978-3-540-25128-6. [Google Scholar]
  75. Louppe, G.; Wehenkel, L.; Sutera, A.; Geurts, P. Understanding variable importances in forests of randomized trees. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–8 December 2013. [Google Scholar]
  76. Fletcher, R. Using vegetation indices as input into Random Forest for soybean and weed classification. Am. J. Plant Sci. 2016, 07, 2186–2198. [Google Scholar] [CrossRef]
  77. Mesner, N.; Oštir, K. Investigating the impact of spatial and spectral resolution of satellite images on segmentation quality. J. Appl. Remote Sens. 2014, 8, 083696. [Google Scholar] [CrossRef]
  78. Tian, B.; Zhou, Y.; Zhang, Z.; Yuan, L. Analyzing the habitat suitability for migratory birds at the Chongming Dongtan Nature Reserve in Shanghai, China. Estuar. Coast. Shelf Sci. 2008, 80, 296–302. [Google Scholar] [CrossRef]
79. Kose, M.; Heinsoo, K.; Kaljund, K.; Tali, K. 20 years of Baltic Boreal coastal meadow restoration: Has it been long enough? Restor. Ecol. 2020, 29, e13266. [Google Scholar] [CrossRef]
  80. Hardisky, M.A.; Gross, M.F.; Klemas, V. Remote sensing of coastal wetlands. BioScience 1986, 36, 453–460. [Google Scholar] [CrossRef]
  81. Klemas, V. Remote sensing of coastal wetland vegetation and estuarine water properties. In Estuarine Processes; Elsevier: Amsterdam, The Netherlands, 1977; pp. 381–403. ISBN 978-0-12-751802-2. [Google Scholar]
  82. Sader, S.A.; Ahl, D.; Liou, W.S. Accuracy of Landsat-TM and GIS rule-based methods for forest wetland classification in Maine. Remote Sens. Environ. 1995, 53, 133–144. [Google Scholar] [CrossRef]
  83. Li, C.; Zhou, L.; Xu, W. Estimating aboveground biomass using Sentinel-2 MSI data and ensemble algorithms for grassland in the Shengjin Lake Wetland, China. Remote Sens. 2021, 13, 1595. [Google Scholar] [CrossRef]
  84. Turpie, K.R. Explaining the spectral red-edge features of inundated marsh vegetation. J. Coast. Res. 2013, 29, 1111–1117. [Google Scholar] [CrossRef]
  85. Streher, A.S.; Torres, R.d.S.; Morellato, L.P.C.; Silva, T.S.F. Accuracy and limitations for spectroscopic prediction of leaf traits in seasonally dry tropical environments. Remote Sens. Environ. 2020, 244, 111828. [Google Scholar] [CrossRef]
  86. Vuolo, F.; Neuwirth, M.; Immitzer, M.; Atzberger, C.; Ng, W.T. How much does multi-temporal Sentinel-2 data improve crop type classification? Int. J. Appl. Earth Obs. 2018, 72, 122–130. [Google Scholar] [CrossRef]
  87. Klemas, V. Remote sensing of coastal wetland biomass: An overview. J. Coast. Res. 2013, 290, 1016–1028. [Google Scholar] [CrossRef]
  88. Liu, L.; Chen, J.; Zhao, G.; Fieguth, P.; Chen, X.; Pietikainen, M. Texture classification in extreme scale variations using GANet. IEEE Trans. Image Process 2018, 28, 3910–3922. [Google Scholar] [CrossRef] [PubMed]
  89. Gao, Z.G.; Zhang, L.Q. Multi-seasonal spectral characteristics analysis of coastal salt marsh vegetation in Shanghai, China. Estuar. Coast. Shelf Sci. 2006, 69, 217–224. [Google Scholar] [CrossRef]
  90. Li, N.; Lu, D.; Wu, M.; Zhang, Y.; Lu, L. Coastal wetland classification with multiseasonal high-spatial resolution satellite imagery. Int. J. Remote Sens. 2018, 39, 8963–8983. [Google Scholar] [CrossRef]
91. Henits, L.; Jürgens, C.; Mucsi, L. Seasonal multitemporal land-cover classification and change detection analysis of Bochum, Germany, using multitemporal Landsat TM data. Int. J. Remote Sens. 2015, 37, 3439–3454. [Google Scholar] [CrossRef]
  92. Berberoğlu, S.; Akin, A.; Atkinson, P.M.; Curran, P.J. Utilizing image texture to detect land-cover change in Mediterranean Coastal Wetlands. Int. J. Remote Sens. 2010, 31, 2793–2815. [Google Scholar] [CrossRef]
  93. Kesikoglu, M.H.; Atasever, U.H.; Dadaser-Celik, F.; Ozkan, C. Performance of ANN, SVM, and MLH techniques for land use/cover change detection at Sultan Marshes wetland, Turkey. Water Sci. Technol. 2019, 80, 466–477. [Google Scholar] [CrossRef] [PubMed]
94. Jahncke, R.; Leblon, B.; Bush, P.; LaRocque, A. Mapping wetlands in Nova Scotia with multi-beam RADARSAT-2 Polarimetric SAR, optical satellite imagery, and Lidar data. Int. J. Appl. Earth Obs. 2018, 68, 139–156. [Google Scholar] [CrossRef]
  95. Jiang, J.; Johansen, K.; Tu, Y.-H.; McCabe, M.F. Multi-sensor and multi-platform consistency and interoperability between UAV, Planet CubeSat, Sentinel-2, and Landsat reflectance data. GIScience Remote Sens. 2022, 59, 936–958. [Google Scholar] [CrossRef]
  96. Thomson, E.R.; Spiegel, M.P.; Althuizen, I.H.J.; Bass, P.; Chen, S.; Chmurzynski, A.; Halbritter, A.H.; Henn, J.J.; Jónsdóttir, I.S.; Klanderud, K.; et al. Multiscale mapping of plant functional groups and plant traits in the high arctic using field spectroscopy, UAV imagery and Sentinel-2A data. Environ. Res. Lett. 2021, 16, 055006. [Google Scholar] [CrossRef]
Figure 1. Location of studied salt marshes in Aulac, New Brunswick, in a Google Earth™ image of New Brunswick (inset); details include restoration sites (outlined in yellow) and reference sites (red). The sites examined in our paper were A and B.
Figure 2. Photographs of (A) DJI Matrice V2 aerial drone and MicaSense Downwelling Light Sensor (DLS), (B) airborne drone showing the MicaSense Dual Camera System, (C) DJI Cendence remote controller, (D) DJI CrystalSky tablet with DJI Pilot software, and (E) Spectralon panel (RP04-1949202-OB).
Figure 3. Flowchart of method used for processing multispectral drone images.
Figure 4. Number of training and validation polygons used for each landcover class used in classifications of western (A) reference (Site A) and (B) restoration (Site B) sites in Aulac. See Supplementary Figures S3 and S4 for maps showing spatial distributions of training and validation data.
Figure 5. Classified images of the western reference site in Aulac (Site A) created using (A) pixel- and (B) object-based Random Forest classifiers applied to 414 input features from imagery acquired on 13 June, 12 July, and 10 August 2021 to map (C) 24 classes.
Figure 6. Classified images of the western restoration site in Aulac (Site B) created using (A) pixel- and (B) object-based Random Forest classifiers applied to 414 input features from imagery acquired on 13 June, 12 July, and 10 August 2021 to map (C) 15 classes.
Table 1. Names and descriptions of landcover classes used in the study, with corresponding class numbers. Classes 1–15 were used in the classification of the restoration site (Site B), and Classes 1–9 and 16–30 were used in the classification of the reference site (Site A). See Supplementary Table S1 for photographs of the vegetation classes and Supplementary Figures S1 and S2 for photographs of the other classes.

| Class Number | Name | Description |
|---|---|---|
| 1 | Bare mud exposed to air | Bay of Fundy mud beyond the seaward edges of the sites. |
| 2 | Compacted soil (dark) | Compacted soil along the seaward edges of the sites as a result of past dike construction. |
| 3 | Rocks/eroded shoreline pieces | Rocks washed up on shore. Also includes large chunks of shoreline that eroded from the edge of the sites. |
| 4 | Wrack | Dead grass and algae that accumulated into mats and washed onto the sites. |
| 5 | Spartina alterniflora (muddy) | Assemblage dominated by saltwater cordgrass (Spartina alterniflora), the low-elevation bioengineer species of salt marshes in the region, with blades covered with some tidal mud. |
| 6 | Spartina alterniflora (clean, dense) | Assemblage dominated by saltwater cordgrass (S. alterniflora) that is not muddy and is growing densely. |
| 7 | Spartina patens | Assemblage dominated by salt marsh hay (Spartina patens), the mid-elevation bioengineer species of salt marshes in the region. |
| 8 | Deep salt pool water | Deep water contained in salt pools (depressions in the marsh that retain water at low tide). |
| 9 | Bare mud exposed to air (June) → clean S. alterniflora growing in dense assemblages (July, August) | Bay of Fundy mud in the June imagery which is colonized by S. alterniflora in the July and August imagery. |
| 10 | Compacted soil (light) | Highly compacted soil that appears light in the imagery, likely due to high sand and/or salt content. |
| 11 | Wood | Woody debris that has been washed into the site or remnants of past dike construction. |
| 12 | Spartina pectinata | Freshwater cordgrass (Spartina pectinata) occupying high-elevation areas next to the dike. |
| 13 | S. alterniflora (clean, sparse) | Saltwater cordgrass (S. alterniflora) that is not muddy and is growing in sparse assemblages. |
| 14 | S. alterniflora → wrack | Areas of S. alterniflora in June and July that became covered in wrack by August. |
| 15 | Dike vegetation | Unidentified terrestrial plant species that grow on top of the high-elevation dike areas. |
| 16 | Triglochin maritima | Assemblage dominated by seaside arrowgrass (Triglochin maritima), a common salt marsh plant with fleshy dark-green stems. |
| 17 | Mixed mid-elevation vegetation (S. patens, Puccinellia, etc.) | Mixed assemblages of vegetation, including S. patens, Puccinellia maritima, Lysimachia maritima, Plantago maritima, Solidago sempervirens, Argentina anserina, and Limonium carolinianum. |
| 18 | Floating green algae | Green algae (Chlorophyta) floating on top of salt pool water. |
| 19 | Emerged salt pool mud (June) → shallow salt pool water (July, August) | Mud within salt pools that was exposed to the air in the June imagery and covered in water in the July and August imagery. Water level is variable in salt pools and controlled by evaporation, spring tides, and rain events. |
| 20 | Emerged salt pool mud (salty) | Mud within salt pools that was exposed to the air in the imagery of each month. |
| 21 | Submerged aquatic vegetation | Underwater Ruppia maritima and Chlorophyta in salt pools. |
| 22 | Deep salt pool water (June) → floating green algae (July, August) | Deep water contained in salt pools in the June imagery which becomes covered in floating green algae in the July and August imagery. |
| 23 | Deep salt pool water (June) → submerged aquatic vegetation (July, August) | Deep water contained in salt pools in the June imagery which becomes submerged aquatic vegetation in the July and August imagery. |
| 24 | Deep salt pool water (June, July) → submerged aquatic vegetation (August) | Deep water contained in salt pools in the June and July imagery which becomes submerged aquatic vegetation in the August imagery. |
| 25 | Shallow salt pool water | Shallow water contained in salt pools where the unvegetated pool bottom is visible. |
| 26 | Shallow salt pool water (June) → submerged aquatic vegetation (July, August) | Shallow water contained in salt pools in the June imagery which becomes submerged aquatic vegetation in the July and August imagery. |
| 27 | Floating green algae (June) → submerged aquatic vegetation (July) → shallow salt pool water (August) | Floating green algae (Chlorophyta) in the June imagery which becomes submerged aquatic vegetation in the July imagery and shallow salt pool water in the August imagery. |
| 28 | Floating green algae (June) → deep salt pool water (July, August) | Floating green algae (Chlorophyta) in the June imagery which becomes deep salt pool water in the July and August imagery. |
| 29 | Wrack (June) → vegetated areas of S. alterniflora and S. patens (July, August) | Wrack in the June imagery which washes away or becomes colonized by vegetation and appears as S. alterniflora and S. patens in the July and August imagery. |
| 30 | Mixed vegetation: S. alterniflora and S. patens | Mixed assemblages of S. alterniflora and S. patens. |
Table 2. Characteristics of multispectral drone images used in the study acquired in Aulac salt marshes in 2021.

| Month | Site | Start Time | Tidal Height (m) * | Cloud Cover | Solar Azimuth (°) | Solar Altitude (°) | Course Angle (°) | No. of Images |
|---|---|---|---|---|---|---|---|---|
| June | Reference | 12:00 | 6.7 | Cumulus | 139 | 63 | 230 | 8550 |
| June | Restoration | 12:33 | 7.9 | Cumulus | 155 | 66 | 230 | 7730 |
| July | Reference | 12:38 | 8.9 | Stratus | 155 | 64 | 230 | 7380 |
| July | Restoration | 13:05 | 9.9 | Stratus | 170 | 66 | 230 | 7560 |
| August | Reference | 10:43 | 5.5 | Stratus | 119 | 45 | 218 | 8180 |
| August | Restoration | 11:14 | 6.9 | Stratus | 128 | 50 | 218 | 7940 |

(*) Tidal height obtained from the nearby Pecks Point, New Brunswick, tidal station.
Table 3. Spectral characteristics of the ten bands acquired by the MicaSense Dual Camera System.

| Band Number | Band Name | Center Wavelength (nm) | Bandwidth (nm) |
|---|---|---|---|
| 1 | Coastal Blue 444 | 444 | 28 |
| 2 | Blue 475 | 475 | 32 |
| 3 | Green 531 | 531 | 14 |
| 4 | Green 560 | 560 | 27 |
| 5 | Red 650 | 650 | 16 |
| 6 | Red 668 | 668 | 14 |
| 7 | Red Edge 705 | 705 | 10 |
| 8 | Red Edge 717 | 717 | 12 |
| 9 | Red Edge 740 | 740 | 18 |
| 10 | NIR 842 | 842 | 57 |
Table 4. Vegetation indices calculated from the ten-band MicaSense imagery.

| Vegetation Index (VI) | Abbreviation | Formula | Reference |
|---|---|---|---|
| Normalized Difference VI | NDVI-1 | $(\mathrm{NIR}_{842}-\mathrm{Red}_{650})/(\mathrm{NIR}_{842}+\mathrm{Red}_{650})$ | [63] |
| | NDVI-2 | $(\mathrm{NIR}_{842}-\mathrm{Red}_{668})/(\mathrm{NIR}_{842}+\mathrm{Red}_{668})$ | |
| Normalized Difference Aquatic VI | NDAVI-1 | $(\mathrm{NIR}_{842}-\mathrm{Blue}_{444})/(\mathrm{NIR}_{842}+\mathrm{Blue}_{444})$ | [64] |
| | NDAVI-2 | $(\mathrm{NIR}_{842}-\mathrm{Blue}_{475})/(\mathrm{NIR}_{842}+\mathrm{Blue}_{475})$ | |
| Green Normalized Difference VI | GNDVI-1 | $(\mathrm{NIR}_{842}-\mathrm{Green}_{531})/(\mathrm{NIR}_{842}+\mathrm{Green}_{531})$ | [65] |
| | GNDVI-2 | $(\mathrm{NIR}_{842}-\mathrm{Green}_{560})/(\mathrm{NIR}_{842}+\mathrm{Green}_{560})$ | |
| Normalized Difference Red Edge VI | NDRE-1 | $(\mathrm{NIR}_{842}-\mathrm{RedEdge}_{705})/(\mathrm{NIR}_{842}+\mathrm{RedEdge}_{705})$ | [66] |
| | NDRE-2 | $(\mathrm{NIR}_{842}-\mathrm{RedEdge}_{717})/(\mathrm{NIR}_{842}+\mathrm{RedEdge}_{717})$ | |
| | NDRE-3 | $(\mathrm{NIR}_{842}-\mathrm{RedEdge}_{740})/(\mathrm{NIR}_{842}+\mathrm{RedEdge}_{740})$ | |
| Normalized Green VI | NG-1 | $\mathrm{Green}_{531}/(\mathrm{NIR}_{842}+\mathrm{Red}_{650}+\mathrm{Green}_{531})$ | [67] |
| | NG-2 | $\mathrm{Green}_{560}/(\mathrm{NIR}_{842}+\mathrm{Red}_{668}+\mathrm{Green}_{560})$ | |
| Difference VI | DVI-1 | $\mathrm{NIR}_{842}-\mathrm{Red}_{650}$ | [68] |
| | DVI-2 | $\mathrm{NIR}_{842}-\mathrm{Red}_{668}$ | |
| Green Difference VI | GDVI-1 | $\mathrm{NIR}_{842}-\mathrm{Green}_{531}$ | [69] |
| | GDVI-2 | $\mathrm{NIR}_{842}-\mathrm{Green}_{560}$ | |
| Normalized Red VI | NR-1 | $\mathrm{Red}_{650}/(\mathrm{NIR}_{842}+\mathrm{Red}_{650}+\mathrm{Green}_{531})$ | [67] |
| | NR-2 | $\mathrm{Red}_{668}/(\mathrm{NIR}_{842}+\mathrm{Red}_{668}+\mathrm{Green}_{560})$ | |
| Normalized Near Infrared VI | NNIR-1 | $\mathrm{NIR}_{842}/(\mathrm{NIR}_{842}+\mathrm{Red}_{650}+\mathrm{Green}_{531})$ | [67] |
| | NNIR-2 | $\mathrm{NIR}_{842}/(\mathrm{NIR}_{842}+\mathrm{Red}_{668}+\mathrm{Green}_{560})$ | |
| Green Ratio VI | GRVI-1 | $\mathrm{NIR}_{842}/\mathrm{Green}_{531}$ | [70] |
| | GRVI-2 | $\mathrm{NIR}_{842}/\mathrm{Green}_{560}$ | |
| Red Ratio VI | RVI-1 | $\mathrm{NIR}_{842}/\mathrm{Red}_{650}$ | [71] |
| | RVI-2 | $\mathrm{NIR}_{842}/\mathrm{Red}_{668}$ | |
| Red Edge Ratio VI | RERVI-1 | $\mathrm{NIR}_{842}/\mathrm{RedEdge}_{705}$ | [72] |
| | RERVI-2 | $\mathrm{NIR}_{842}/\mathrm{RedEdge}_{717}$ | |
| | RERVI-3 | $\mathrm{NIR}_{842}/\mathrm{RedEdge}_{740}$ | |
| Water Adjusted VI | WAVI-1 | $1.5\,(\mathrm{NIR}_{842}-\mathrm{Blue}_{444})/(\mathrm{NIR}_{842}+\mathrm{Blue}_{444}+0.5)$ | [64] |
| | WAVI-2 | $1.5\,(\mathrm{NIR}_{842}-\mathrm{Blue}_{475})/(\mathrm{NIR}_{842}+\mathrm{Blue}_{475}+0.5)$ | |
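As a minimal sketch of how the indices in Table 4 are derived from band reflectances, the Python snippet below computes NDVI-1 and WAVI-1 on small illustrative arrays (band names follow Table 3; the values are made up).

```python
import numpy as np

# Illustrative per-band reflectance arrays (fractions of unity).
nir842  = np.array([[0.42, 0.38], [0.45, 0.40]])
red650  = np.array([[0.08, 0.10], [0.07, 0.09]])
blue444 = np.array([[0.03, 0.04], [0.03, 0.05]])

ndvi_1 = (nir842 - red650) / (nir842 + red650)                # NDVI-1 [63]
wavi_1 = 1.5 * (nir842 - blue444) / (nir842 + blue444 + 0.5)  # WAVI-1 [64]

print(ndvi_1)
print(wavi_1)
```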
Table 5. Textural features calculated for each of the ten MicaSense bands (adapted from [73]).

| Textural Feature | Formula (*) |
|---|---|
| Homogeneity | $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \dfrac{P(i,j)}{1+(i-j)^2}$ |
| Contrast | $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)\,(i-j)^2$ |
| Dissimilarity | $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)\,\lvert i-j\rvert$ |
| Mean | $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} i\,P(i,j)$ |
| Standard deviation | $\left[\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)\,(i-\mu_i)^2\right]^{1/2}$ |
| Entropy | $-\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)\,\log_e P(i,j)$ |
| Angular second moment | $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)^2$ |
| Angular correlation | $\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \dfrac{P(i,j)\,(i-\mu_i)(j-\mu_j)}{\sigma_i\,\sigma_j}$ |
| GLDV angular second moment | $\sum_{k=0}^{N-1} v(k)^2$ |
| GLDV entropy | $-\sum_{k=0}^{N-1} v(k)\,\log_e v(k)$ |

(*) N = number of grey levels, P(i, j) = probability of grey tonal values i and j occurring in adjacent pixels in the original image within the window defining the neighborhood, i = digital number value of a target pixel, j = digital number value of its immediate neighbor, μ = mean tonal value, and σ = standard deviation of tonal values. v(k) is a vector of the normalized grey level differences, and k = |i − j|.
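The grey-level co-occurrence matrix (GLCM) features above can be computed with standard tooling; the sketch below uses scikit-image on a tiny 4-level test image (the image, distances, and angles are illustrative choices, not our processing parameters).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Tiny 4-grey-level test image standing in for one drone band.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=np.uint8)

# Normalized, symmetric GLCM for 1-pixel offsets at 0 degrees.
glcm = graycomatrix(img, distances=[1], angles=[0], levels=4,
                    symmetric=True, normed=True)

for prop in ("homogeneity", "contrast", "dissimilarity", "ASM", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])

# Entropy (Table 5) computed directly from the GLCM probabilities.
p = glcm[:, :, 0, 0]
print("entropy", -np.sum(p[p > 0] * np.log(p[p > 0])))
```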
Table 6. JM distances computed using reflectance values of PB training areas extracted from original reflectance images for 10 MicaSense bands acquired for Aulac reference (Site A) and restoration (Site B) sites in June, July, and August 2021. Landcover class pairs with JM distance values < 1.90 for each set of images are listed in ascending order. Class pairs involving change classes are omitted from this table. See Supplementary Tables S2 and S3 for values of all class pairs.

| Site | Month | Average JM Distance Value | Minimum JM Distance Value | Class Pairs with JM Distance Values < 1.90 |
|---|---|---|---|---|
| Reference (Site A) | June | 1.96 | 1.21 | 7 and 17 (mixed mid-elevation vegetation), 16 (Triglochin) and 30 (mixed S. alterniflora and S. patens), 17 and 30, 6 and 7, 6 and 17, 5 and 16, 7 and 30, 6 and 30, 5 and 30 |
| Reference (Site A) | July | 1.95 | 1.07 | 7 and 17, 6 and 7, 16 and 30, 17 and 30, 5 and 30, 7 and 30, 18 (floating green algae) and 21 (submerged aquatic vegetation), 6 and 17, 6 and 30, 5 and 16, 16 and 17 |
| Reference (Site A) | August | 1.96 | 1.21 | 7 and 17, 16 and 30, 17 and 30, 6 and 7, 5 and 17, 5 and 16, 7 and 30, 6 and 30, 5 and 30 |
| Restoration (Site B) | June | 1.94 | 1.15 | 6 (S. alterniflora clean, dense) and 13 (S. alterniflora clean, sparse), 5 (S. alterniflora muddy) and 13, 5 and 6, 12 (S. pectinata) and 15 (dike vegetation), 3 (rocks) and 10 (compacted soil light), 2 (compacted soil dark) and 3, 6 and 7 (S. patens), 7 and 13 |
| Restoration (Site B) | July | 1.97 | 1.58 | 7 and 12, 5 and 6, 12 and 15, 3 and 10, 6 and 13 |
| Restoration (Site B) | August | 1.97 | 1.72 | 12 and 15, 3 and 10, 7 and 12, 7 and 15, 6 and 13, 2 and 3 |
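For reference, the sketch below shows one common way to compute the JM distance between two classes modeled as multivariate Gaussians over their training reflectances (via the Bhattacharyya distance); the means and covariances here are illustrative, not values from our training areas.

```python
import numpy as np

def jm_distance(mu1, cov1, mu2, cov2):
    """Jeffries-Matusita distance between two Gaussian class models."""
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    # Bhattacharyya distance between the two distributions.
    b = diff @ np.linalg.solve(cov, diff) / 8.0 + 0.5 * np.log(
        np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return 2.0 * (1.0 - np.exp(-b))  # ranges from 0 (identical) to 2

mu1, mu2 = np.array([0.10, 0.40]), np.array([0.20, 0.50])  # illustrative
cov1 = cov2 = np.eye(2) * 0.01
print(jm_distance(mu1, cov1, mu2, cov2))
```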
Table 7. The reference site (Site A) out-of-bag classification accuracies (in %) computed by PB and OB Random Forest classifiers when applied to the 2021 multi-temporal image sets. See Supplementary Table S4 for full confusion matrices.

| Class Number | Class Name | PB UA | PB PA | OB UA | OB PA |
|---|---|---|---|---|---|
| 1 | Bare mud exposed to air | 100 | 100 | 100 | 100 |
| 2 | Compacted soil (dark) | 100 | 100 | 100 | 100 |
| 3 | Rocks/eroded shoreline pieces | 100 | 100 | 100 | 100 |
| 4 | Wrack | 100 | 100 | 100 | 100 |
| 5 | Spartina alterniflora (muddy) | 100 | 100 | 100 | 98.3 |
| 6 | Spartina alterniflora (clean, dense) | 100 | 100 | 100 | 100 |
| 7 | Spartina patens | 100 | 100 | 100 | 100 |
| 8 | Deep salt pool water | 100 | 100 | 100 | 100 |
| 9 | Bare mud exposed to air → clean, dense S. alterniflora | 100 | 100 | 100 | 100 |
| 16 | Triglochin maritima | 100 | 100 | 100 | 100 |
| 17 | Mixed mid-elevation vegetation (S. patens, Puccinellia, etc.) | 100 | 100 | 100 | 100 |
| 18 | Floating green algae (Chlorophyta) | 100 | 100 | 100 | 100 |
| 19 | Emerged salt pool mud → shallow salt pool water | 100 | 100 | 100 | 100 |
| 20 | Emerged salt pool mud (salty) | 100 | 100 | 100 | 100 |
| 21 | Submerged aquatic vegetation | 100 | 100 | 100 | 100 |
| 22 | Deep salt pool water → floating green algae | 100 | 100 | 100 | 100 |
| 23 | Deep salt pool water (June) → submerged aquatic vegetation (July, August) | 100 | 100 | 100 | 100 |
| 24 | Deep salt pool water (June, July) → submerged aquatic vegetation (August) | 100 | 100 | 100 | 100 |
| 25 | Shallow salt pool water | 100 | 100 | 100 | 100 |
| 26 | Shallow salt pool water → submerged aquatic vegetation | 100 | 100 | 100 | 100 |
| 27 | Floating green algae → submerged aquatic vegetation → shallow salt pool water | 100 | 100 | 100 | 100 |
| 28 | Floating green algae → deep salt pool water | 100 | 100 | 100 | 100 |
| 29 | Wrack → vegetated areas of S. alterniflora and S. patens | 100 | 100 | 100 | 100 |
| 30 | Mixed vegetation: S. alterniflora and S. patens | 100 | 100 | 98.1 | 100 |

Average accuracy: PB 100, OB 99.8. Overall accuracy: PB 100, OB 99.8. Kappa coefficient: PB 100, OB 99.8.
Table 8. The restoration site (Site B) out-of-bag classification accuracies (in %) computed by PB and OB Random Forest classifiers when applied to the 2021 multi-temporal image sets. See Supplementary Table S5 for full confusion matrices.

| Class Number | Class Name | PB UA | PB PA | OB UA | OB PA |
|---|---|---|---|---|---|
| 1 | Bare mud exposed to air | 99.9 | 100 | 100 | 100 |
| 2 | Compacted soil (dark) | 100 | 99.9 | 100 | 100 |
| 3 | Rocks/eroded shoreline pieces | 99.7 | 100 | 100 | 100 |
| 4 | Wrack | 100 | 99.9 | 100 | 100 |
| 5 | Spartina alterniflora (muddy) | 100 | 100 | 100 | 100 |
| 6 | Spartina alterniflora (clean, dense) | 100 | 100 | 100 | 100 |
| 7 | Spartina patens | 100 | 100 | 100 | 100 |
| 8 | Deep salt pool water | 100 | 100 | 100 | 100 |
| 9 | Bare mud → S. alterniflora (clean, dense) | 100 | 100 | 100 | 100 |
| 10 | Compacted soil (light) | 100 | 99.8 | 100 | 100 |
| 11 | Wood | 99.6 | 100 | 100 | 100 |
| 12 | Spartina pectinata | 99.6 | 100 | 100 | 95.0 |
| 13 | S. alterniflora (clean, sparse) | 100 | 100 | 100 | 100 |
| 14 | S. alterniflora → wrack | 100 | 100 | 100 | 100 |
| 15 | Dike vegetation | 100 | 99.8 | 97.2 | 100 |

Average accuracy: PB 99.9, OB 99.7. Overall accuracy: PB 99.9, OB 99.8. Kappa coefficient: PB 99.9, OB 99.8.
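The out-of-bag (OOB) accuracies in Tables 7 and 8 come from the samples each Random Forest tree leaves out of its bootstrap; the sketch below shows how this internal estimate is obtained in scikit-learn, with synthetic data in place of our image variables.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for training pixels (input variables and labels).
X, y = make_classification(n_samples=1000, n_features=30, n_classes=5,
                           n_informative=12, random_state=0)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

# Accuracy estimated on the out-of-bag samples, without a held-out set.
print(f"OOB accuracy: {rf.oob_score_:.1%}")
```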
Table 9. Top 25 input variables of the reference site (Site A) ranked according to the mean decrease in accuracy computed by PB and OB Random Forest classifications applied to the 2021 multi-temporal images. See Supplementary Table S6 for full rankings.

| Rank | PB | OB |
|---|---|---|
| 1 | RedEdge717_TextureMean_August | RedEdge740_TextureMean_July |
| 2 | Green560_TextureMean_July | Red668_August |
| 3 | Red668_TextureMean_August | Red668_TextureMean_August |
| 4 | RedEdge705_TextureMean_August | Green531_August |
| 5 | Green560_TextureMean_August | NIR842_July |
| 6 | Green560_TextureAngCorrelation_August | NNIR-1_July |
| 7 | Green531_TextureMean_August | Red650_TextureMean_August |
| 8 | Green531_TextureAngCorrelation_August | RedEdge705_TextureMean_August |
| 9 | Red650_TextureMean_August | RedEdge717_TextureMean_August |
| 10 | RedEdge717_August | Green560_August |
| 11 | NDVI.2_June | RedEdge717_August |
| 12 | RedEdge717_TextureAngCorrelation_June | NIR842_TextureMean_July |
| 13 | NR.2_June | Green560_TextureMean_August |
| 14 | RedEdge705_August | GNDVI-2_July |
| 15 | Green531_July | NG-2_July |
| 16 | Blue444_TextureMean_June | RedEdge705_August |
| 17 | NDAVI.1_June | RedEdge740_July |
| 18 | Red668_TextureMean_June | NDVI-2_June |
| 19 | RVI.2_June | Green531_TextureMean_August |
| 20 | Red668_August | Red668_July |
| 21 | Green531_TextureMean_July | RERVI-1_June |
| 22 | Green531_TextureAngCorrelation_July | NG-1_July |
| 23 | RedEdge740_TextureMean_July | Red650_August |
| 24 | Blue475_TextureMean_June | Blue475_August |
| 25 | RedEdge717_TextureAngCorrelation_July | NR-2_June |
Table 10. Top 25 input variables of the restoration site (Site B) ranked according to the mean decrease in accuracy computed by PB and OB Random Forest classifications applied to the 2021 multi-temporal images. See Supplementary Table S7 for full rankings.

| Rank | PB | OB |
|---|---|---|
| 1 | RedEdge740_TextureAngCorrelation_August | NR-2_August |
| 2 | Green531_TextureMean_July | Blue475_August |
| 3 | Green531_July | NDAVI-1_August |
| 4 | Green560_TextureMean_August | NR-1_August |
| 5 | NIR842_TextureMean_June | NIR842_TextureMean_June |
| 6 | GRVI.2_June | Red668_August |
| 7 | Red668_TextureMean_July | GNDVI-2_July |
| 8 | RedEdge717_TextureMean_August | RedEdge717_August |
| 9 | NIR842_TextureMean_August | NNIR-1_June |
| 10 | Blue444_TextureMean_August | Green560_TextureMean_August |
| 11 | Red650_TextureMean_June | NDRE-1_July |
| 12 | Blue444_TextureMean_July | GNDVI-1_August |
| 13 | RedEdge740_TextureMean_August | Blue475_TextureMean_August |
| 14 | NIR842_TextureMean_July | Green560_August |
| 15 | Green560_July | DVI-2_June |
| 16 | Red650_TextureMean_July | NDVI-1_August |
| 17 | NNIR.1_June | NNIR-2_August |
| 18 | Green531_TextureMean_August | Red650_August |
| 19 | Green531_TextureContrast_August | Green560_TextureMean_June |
| 20 | RedEdge717_August | Green531_TextureMean_August |
| 21 | Blue444_TextureAngCorrelation_August | GNDVI-2_August |
| 22 | RedEdge717_TextureSt.Dev_August | GRVI-1_July |
| 23 | Blue475_TextureMean_July | GDVI-1_August |
| 24 | Blue444_TextureAngCorrelation_July | RVI-1_June |
| 25 | RedEdge717_TextureDissimilarity_July | NDAVI-2_July |
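The mean decrease in accuracy ranking used in Tables 9 and 10 can be reproduced in spirit with permutation importance: each variable is shuffled and the resulting drop in accuracy is recorded. The sketch below uses scikit-learn with hypothetical variable names in place of our band, index, and texture features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           random_state=0)
names = [f"var_{i}" for i in range(X.shape[1])]  # hypothetical variable names

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)

# Report the five variables whose shuffling hurts accuracy the most.
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(names[i], round(result.importances_mean[i], 4))
```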
Table 11. Reference site (Site A) validation class accuracies (in %) computed by PB and OB Random Forest classifiers when applied to 2021 multi-temporal image sets. See Supplementary Table S4 for full confusion matrices.

| Class Number | Class Name | PB UA | PB PA | OB UA | OB PA |
|---|---|---|---|---|---|
| 1 | Bare mud exposed to air | 86.7 | 97.5 | 88.1 | 92.5 |
| 2 | Compacted soil (dark) | 100 | 86.7 | 100 | 80.0 |
| 3 | Rocks/eroded shoreline pieces | 80.0 | 80.0 | 75.0 | 90.0 |
| 4 | Wrack | 100 | 96.7 | 96.7 | 96.7 |
| 5 | Spartina alterniflora (muddy) | 98.2 | 93.3 | 96.6 | 95.0 |
| 6 | Spartina alterniflora (clean, dense) | 87.5 | 93.3 | 80.0 | 93.3 |
| 7 | Spartina patens | 93.2 | 91.7 | 94.6 | 88.3 |
| 8 | Deep salt pool water | 100 | 100 | 100 | 100 |
| 9 | Bare mud exposed to air → clean, dense S. alterniflora | 100 | 100 | 100 | 90.0 |
| 16 | Triglochin maritima | 75.7 | 93.3 | 83.9 | 86.7 |
| 17 | Mixed mid-elevation vegetation (S. patens, Puccinellia, etc.) | 96.2 | 83.3 | 92.3 | 80.0 |
| 18 | Floating green algae (Chlorophyta) | 95.0 | 95.0 | 95.0 | 95.0 |
| 19 | Emerged salt pool mud → shallow salt pool water | 100 | 93.3 | 100 | 93.3 |
| 20 | Emerged salt pool mud (salty) | 100 | 86.7 | 85.7 | 80.0 |
| 21 | Submerged aquatic vegetation | 93.3 | 93.3 | 92.9 | 86.7 |
| 22 | Deep salt pool water → floating green algae | 92.9 | 86.7 | 93.3 | 93.3 |
| 23 | Deep salt pool water (June) → submerged aquatic vegetation (July, August) | 100 | 100 | 93.8 | 100 |
| 24 | Deep salt pool water (June, July) → submerged aquatic vegetation (August) | 100 | 100 | 90.9 | 100 |
| 25 | Shallow salt pool water | 100 | 100 | 100 | 100 |
| 26 | Shallow salt pool water → submerged aquatic vegetation | 100 | 90.0 | 100 | 90.0 |
| 27 | Floating green algae → submerged aquatic vegetation → shallow salt pool water | 100 | 90.0 | 100 | 90.0 |
| 28 | Floating green algae → deep salt pool water | 90.0 | 100 | 90.9 | 100 |
| 29 | Wrack → vegetated areas of S. alterniflora and S. patens | 93.8 | 100 | 100 | 93.3 |
| 30 | Mixed vegetation: S. alterniflora and S. patens | 89.4 | 84.0 | 83.0 | 88.0 |

Average accuracy: PB 93.9, OB 92.4. Overall accuracy: PB 92.4, OB 91.1. Kappa coefficient: PB 91.9, OB 90.5.
Table 12. Restoration site (Site B) validation class accuracies (in %) computed by PB and OB Random Forest classifiers when applied to 2021 multi-temporal image sets. See Supplementary Table S5 for full confusion matrices.
| Class Number | Class Name | PB UA | PB PA | OB UA | OB PA |
|---|---|---|---|---|---|
| 1 | Bare mud exposed to air | 100 | 100 | 96.8 | 100 |
| 2 | Compacted soil (dark) | 100 | 96.7 | 100 | 86.7 |
| 3 | Rocks/eroded shoreline pieces | 100 | 100 | 100 | 100 |
| 4 | Wrack | 96.6 | 93.3 | 96.6 | 93.3 |
| 5 | Spartina alterniflora (muddy) | 100 | 100 | 90.9 | 100 |
| 6 | Spartina alterniflora (clean, dense) | 100 | 95.0 | 86.8 | 82.5 |
| 7 | Spartina patens | 97.0 | 91.4 | 93.9 | 88.6 |
| 8 | Deep salt pool water | 100 | 100 | 100 | 100 |
| 9 | Bare mud → S. alterniflora (clean, dense) | 100 | 86.7 | 92.9 | 86.7 |
| 10 | Compacted soil (light) | 92.7 | 95.0 | 97.4 | 95.0 |
| 11 | Wood | 95.2 | 100 | 100 | 100 |
| 12 | Spartina pectinata | 93.8 | 75.0 | 94.1 | 80.0 |
| 13 | S. alterniflora (clean, sparse) | 93.0 | 100 | 90.5 | 95.0 |
| 14 | S. alterniflora → wrack | 100 | 100 | 93.8 | 100 |
| 15 | Dike vegetation | 84.2 | 94.1 | 86.5 | 91.4 |

UA = user's accuracy; PA = producer's accuracy. Average class accuracy: PB = 95.5, OB = 94.0. Overall accuracy: PB = 95.2, OB = 93.2. Kappa coefficient: PB = 94.8, OB = 92.6.
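The accuracy figures in Tables 11 and 12 follow the standard confusion-matrix definitions. As a minimal sketch of how they can be derived (using a made-up 3 × 3 confusion matrix rather than the study's data, and expressing kappa on the same percentage-style scale as the tables), the per-class user's and producer's accuracies, overall accuracy, and kappa coefficient can be computed as follows.

```python
# Minimal sketch (not the authors' code): computing per-class user's (UA)
# and producer's (PA) accuracies, overall accuracy, and Cohen's kappa from
# a confusion matrix. The 3x3 matrix below is illustrative only.
import numpy as np

# Rows = classified (map) labels, columns = reference (validation) labels.
cm = np.array([[48,  2,  0],
               [ 3, 55,  2],
               [ 1,  1, 38]], dtype=float)

n = cm.sum()
row_totals = cm.sum(axis=1)   # pixels/objects assigned to each map class
col_totals = cm.sum(axis=0)   # reference samples in each class
diag = np.diag(cm)            # correctly classified samples per class

ua = 100 * diag / row_totals  # UA: correct / all classified as that class
pa = 100 * diag / col_totals  # PA: correct / all reference samples of class
overall = 100 * diag.sum() / n

# Cohen's kappa: observed agreement corrected for chance agreement.
expected = (row_totals * col_totals).sum() / n**2
kappa = 100 * ((diag.sum() / n - expected) / (1 - expected))

for i, (u, p) in enumerate(zip(ua, pa), start=1):
    print(f"Class {i}: UA = {u:.1f}%, PA = {p:.1f}%")
print(f"Overall accuracy = {overall:.1f}%, kappa = {kappa:.1f}")
```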