Next Article in Journal
Review: Renewable Energy in an Increasingly Uncertain Future
Next Article in Special Issue
Multi-Classifier Pipeline for Olive Groves Detection
Previous Article in Journal
A Novel Disturbance Rejection Control of Roll Channel for Small Air-to-Surface Missiles
Previous Article in Special Issue
Bedload Sediment Transport Estimation in Sand-Bed Rivers Comparing Traditional Methods and Surrogate Technologies
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

A Scalable Earth Observation Service to Map Land Cover in Geomorphological Complex Areas beyond the Dynamic World: An Application in Aosta Valley (NW Italy)

by
Tommaso Orusa
1,2,*,
Duke Cammareri
2 and
Enrico Borgogno Mondino
1
1
Department of Agricultural, Forest and Food Sciences (DISAFA), GEO4Agri DISAFA Lab, Università Degli Studi di Torino, Largo Paolo Braccini 2, 10095 Grugliasco, Italy
2
Earth Observation Valle d’Aosta—eoVdA, Località L’Île-Blonde, 5, 11020 Brissogne, Italy
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 390; https://doi.org/10.3390/app13010390
Submission received: 11 November 2022 / Revised: 15 December 2022 / Accepted: 23 December 2022 / Published: 28 December 2022
(This article belongs to the Special Issue Geomorphology in the Digital Era)

Abstract

:
Earth Observation services guarantee continuous land cover mapping and are becoming of great interest worldwide. The Google Earth Engine Dynamic World represents a planetary example. This work aims to develop a land cover mapping service in geomorphological complex areas in the Aosta Valley in NW Italy, according to the newest European EAGLE legend starting in the year 2020. Sentinel-2 data were processed in the Google Earth Engine, particularly the summer yearly median composite for each band and their standard deviation with multispectral indexes, which were used to perform a k-nearest neighbor classification. To better map some classes, a minimum distance classification involving NDVI and NDRE yearly filtered and regularized stacks were computed to map the agronomical classes. Furthermore, SAR Sentinel-1 SLC data were processed in the SNAP to map urban and water surfaces to improve optical classification. Additionally, deep learning and GIS updated datasets involving urban components were adopted beginning with an aerial orthophoto. GNSS ground truth data were used to define the training and the validation sets. In order to test the effectiveness of the implemented service and its methodology, the overall accuracy was compared to other approaches. A mixed hierarchical approach represented the best solution to effectively map geomorphological complex areas to overcome the remote sensing limitations. In conclusion, this service may help in the implementation of European and local policies concerning land cover surveys both at high spatial and temporal resolutions, empowering the technological transfer in alpine realities.

1. Introduction

Earth Observation (EO) data services are becoming very popular because of the significant increase in satellite missions and geospatial cloud-based platforms such as the Google Earth Engine and Microsoft Planetary [1,2]. New investments in the space economy have boosted the technological transfer in different fields opening new opportunities in terms of applied science [3,4,5]. It is worth noting that both the public and private sectors in alpine and rural areas are still far behind. Therefore, it is more crucial to fill this gap by realizing and exporting useful EO tools to strengthen the monitoring and study of the biophysical components of different territories and use public funds more efficiently and effectively [6,7,8]. This would permit mountainous areas to keep up, stay competitive and bring innovation even in apparently distant contexts. In particular, the Copernicus program, as well as many other scientific EO programs around the world, provide vast amounts of geographical datasets that may aid in achieving European and international technological transfer goals of to face considerable issues such as climate change, sustainable development and social inclusion worldwide [9,10,11].
Recently, Google announced its realization of the Dynamic World project. Dynamic World is a near real-time 10 m spatial resolution global land use land cover (LULC) dataset produced using deep learning and is freely available and openly licensed. It is the result of a partnership between Google and the World Resources Institute to produce a dynamic dataset of the physical material on the surface of the Earth. Dynamic World is intended to be used as a data product for users to add custom rules and assign final class values, producing derivative land cover maps. The main key innovation of Dynamic World is represented by near-real time image enabling the mapping of LULC every 5 days depending on the location and adopting Sentinel-2 top-of-atmosphere per-pixel probabilities across nine land cover classes with a 10 m GSD.
This EO service based on the Google Earth Engine is very powerful but regarding the accuracy in mountainous areas such as the Alps, it presents considerable criticalities. First, there is a strong confusion between the concept of land coverage and use (the first can be mapped by satellite, the second can only generally be used for certain uses such as mowing). Second, the system uses a probabilistic approach, not a deterministic one. Therefore, at a planning level, some problems can be encountered (e.g., a misleading biophysical component defined as an incorrect class for a forest or built up areas defined as water after snowmelt in alpine areas). Third, the Dynamic World training set distribution is almost completely absent in mountainous and alpine areas. Typically, these areas are the most complex to map for remote sensing. Furthermore, the classes are designed to map global changes at high resolution using fewer classes which do not answer to the local needs for land covers that adopt the new European EAGLE guidelines and that have local robust accuracies that are only obtainable with a continuous ground truth data validation set.
The EIONET Action Group on land monitoring in Europe (known as EAGLE group) is an open assembly of technical experts from different European Economic Area (EEA) Member States, mostly in their roles as national reference center (NRC) on LC. Currently, the development of the EAGLE concept and methodology is being funded by the EEA within the framework of the Copernicus program. In Italy, LC monitoring is performed by the Istituto Superiore per la Protezione e la Ricerca Ambientale (ISPRA). For their activities and policies, many user communities such as decision-makers, non-governmental organizations, European communities, scientists and researchers require various sorts of LC information [12]. LC data, for example, are used to assess the progress toward the UN Sustainable Development Goals (SDGs) targets [13], such as target 15.3 which relates to achieving land degradation neutrality (LDN) by 2030 [14]. Despite the importance of LC data in environmental monitoring and planning, the number of accessible national products is limited and their qualities are not always appropriate. The EAGLE idea was built on a clear difference between land cover and land use, and it is described in a matrix consisting of three descriptors: land cover components (LCC), land use attributes (LUA) and additional characteristics (LCH). The descriptors can be merged to develop unique categorization systems for various needs or to detect correspondences with existing classes while maintaining the independence of the three descriptors [15]. There are numerous algorithms for examining land cover, starting with the classification of satellite images. The most appropriate approach is determined by factors such as the type of data, class distribution, research interests and classifier interpretability, as well as the balance between the objectives and the available resources. In general, the automatic classification systems are more time consuming as the data dimension and volume grow and data interpretation might become problematic at times [16]. Supervised classifications may be used to analyze large amounts of data as they are based on selecting a sufficient number of training samples with known values [17], which are then used to predict unknown values in the testing data [18]. As a result, it’s critical to choose examples that fully depict the diversity of the studied territory’s characteristics.
Nowadays, the CORINE Land Cover (CLC) ensures a high level of thematic detail and a lengthy historical series, but it is limited in terms of geographical detail and updating frequency (https://land.copernicus.eu/pan-european/corine-land-cover, viewed on 6 November 2022). In recent years, the high-resolution layers (HRLs) made it possible to describe the principal land cover classes in high spatial detail while maintaining a multi-year update frequency. However, the low accuracies in the classification of alpine areas still persist [19]. At the same time, Copernicus national-scale products are still available with high thematic and spatial depth, including Urban Atlas, Riparian Zones, Coastal Zones and Natura 2000, but these are only for specific locations. The most recent official product is the CORINE LAND COVER v.2018 (CLC). However, this product does not permit a detailed mapping of the territory especially in alpine areas [20]. Nevertheless, in recent years, some institutions, academic centers and private enterprises have tried to overcome the spatial resolution issue by creating prototypal products at the national level by adopting Copernicus EO Data [21,22]. New evidence is represented by the 10 m land cover produced by ESRI on a global scale with 10 classes even if there are strong limitations and errors in the alpine area due to the absence of a local confusion matrix [23]. Another example linked to the Italian context is the prototypal LC of the whole Italian territory performed by the ISPRA for the year 2018. This LC proposes a methodology with the joint use of the optical, multispectral and radar data of Sentinel 1 and Sentinel 2 [12]. However, following the choice of the adopted input data and the need to map an entire territory, it has strong limits in mountainous areas. Compared to the validation set, these areas are low and are not adequate for mapping mountainous territories in detail as suggested by [19].
As previously mentioned, a robust local EO service only based on satellite remote sensing data can only map land cover (hereinafter called LC) with a higher accuracy.
In particular, only the definition of LC is fundamental, because in many existing classifications and legends it is confused with land use. LC is defined as the observed (bio)physical cover on the Earth’s surface. According to this definition, land covers include forests, agricultural areas, human settlements, glaciers, water and wetlands per the Directive 2007/02 of the European Commission [24,25]. When considering LC in a very pure and strict sense, it should be confined to describe vegetation and man-made features. Consequently, the areas where the surface consists of bare rock or bare soil are describing land itself rather than LC. Additionally, it is disputable whether water surfaces are considered land cover. However, in practice, the scientific community usually describes those aspects under the term LC as well as some agricultural types of cover such as orchards, vineyards and pastures.
The application of Sentinel-2, Sentinel-1 and PlanetScope in land cover mapping have rapidly spread since 2020 [26]. Earth Observation data applications from multi-platform sensors, in particular from Sentinel-2, Sentinel-1 and PlanetScope, are mainly focused on plain areas [27]. There is still a lack of local application mountainous area. Most of classification approaches are based on single one-shot classifications or combined approaches with two supervised classifications from optical and SAR data [28]. Others are focused on multimodal remote sensing data fusion from open-access and commercial EO data in cloud platform, such as the Google Earth Engine, and in deep learning classification (mainly focused on classifying a single geometrical land cover component such as urban areas) [29]. Nevertheless, a mixed hierarchical approach adopting different sensors and several classifications to improve the land cover quality on mountainous areas still continues to be minutely explored and exploited.
Under this scenario, the development of a local LC EO continuous service according to the new EAGLE guidelines is becoming of great interest to the public administration at the alpine level and beyond. In fact, at the national level in Italy, the Istituto Superiore Per la Protezione e Ricerca Ambientale (ISPRA) produced and updated the national land consumption map, as well as several national land use and LC maps [21,22], but they do not necessary fit with alpine needs. Conversely, other regional products are frequently produced with CLC and are not up to date [30,31] or, in the case of high resolution global updated products such as Dynamic World, they do not answer to the local needs in terms of the accuracies, methodologies followed and legends. Therefore, in order to fill the gap of a lack of a LC at high resolution and answering to the EAGLE requests, the Aosta Valley autonomous region charged the Regional Cartographic Office to develop a new map with a 10 m GSD. This body commissioned the regional public company INVA spa, in particular the GIS unit, to carry out this work. The intent was to create a static product and service capable of dynamically mapping the Aosta Valley territory according to the required needs.
Therefore, the principal aim of this work is to present the new Aosta Valley LC and create a scalable and economically sustainable local EO service capable of mapping LC according to the EAGLE guidelines. The EO service developed adopted SAR Sentinel-1; Sentinel-2; PlanetScope and updated GIS local datasets to overcome the common troubles that remote sensing (RS) has in mountainous areas due to the topography, weather condition and shadows, as well as uniform LC class distributions.

2. Study Area

The EO Geospatial service was developed in the Aosta Valley autonomous region in NW Italy. It is the smallest Italian region in terms of surface extent, located in the mid-west of the Alps. It is surrounded by the four highest mountain massifs in Italy: Mont Blanc, which is also the highest peak in Europe, the Cervino-Matterhorn (4478 m), Monte Rosa (4634 m) and Gran Paradiso (4061 m). The conformation of the entire regional territory is the result of the work of many glaciations [32,33]. Therefore, considering the mountainous topography of the whole Aosta Valley territory, a specific EO geospatial land cover procedure was developed based on the EAGLE guidelines (see Figure 1).

3. Materials

The development of the present EO geospatial local service is scalable to other geomorphological complex areas, such as other mountain territories, and is based on the following datasets:
-
Copernicus Sentinel-2A surface reflectance data to map the land cover components
-
Copernicus Sentinel-1A and B SLC and GRD data to map urban and water components, respectively
-
PlanetScope four band data to define a part of the training and validation set
-
GIS updated datasets such as GNSS ground truth data to define both the training set and the validation set.

3.1. Multispectral Optical Datasets

The Sentinel-2 (hereinafter called S2) mission is part of the European Copernicus programme. The satellite acquires multispectral optical data with a spatial resolution between 10–20 m as a function of the considered band. The temporal resolution is 5 days due to two twin satellites, S2A and S2B. The multispectral optical data were obtained and processed in the Google Earth Engine (GEE) referring to the COPERNICUS/S2_SR collection. Sentinel-2 is a high-resolution, broad-spectrum, multispectral optical mission that supports the Copernicus Land Monitoring Service, including monitoring vegetation, soil and water cover and observing inland waterways and coastal areas. Sentinel-2 L2 data were downloaded from the Copernicus SciHub (the official distribution portal of the Earth Observation data in question). The images were pre-processed in Sen2cor (the official tool released by the European Space Agency—ESA). The EO data S2 that was pre-processed in Sen2cor contained 12 spectral bands of UINT16 (see Table 1). The images were ortho-projected in WGS84 and were in ground reflectance rescaled in dimensionless values from 0 to 10,000 starting from the existing DN to calculate the ground reflectance by removing the atmospheric contribution. There are also three QA bands for each scene, one of which (QA60) is a bitmask band with cloud mask information. In GEE, clouds can be removed as an alternative to using pixels in QA quality using COPERNICUS/S2_CLOUD_PROBABILITY. In this case, the QA bands were used.
A yearly median composite imagery ranging from 1 May 2020 to 30 September 2020 without clouds and shadows was realized. The S2 data were used to create yearly harmonized and filtered NDVI and NDRE stacks with a 10 days step to map woody crops.
PlanetScope, as part of the private space program Planet acquired by Google with its ultra-high spatial resolution microsatellites, is increasingly becoming a reference reality in remote sensing activities due to the possibility of accessing the data free of charge for education and research purposes (https://www.planet.com/markets/education-and-research, last access on 6 November 2022). Starting with the daily data acquired by PlanetScope, a stack was created, including all the acquisitions in the reference period of study. With a self-developed GEE algorithm, a composite imagery was generated covering the same period as S2. This image was adopted as an extra product in the validation phase and in the definition of the training sets during a photo-interpretation phase. It is worth noting that the PlanetScope micro-satellites acquire multispectral optical data on a daily basis in four bands with a ground sample distance (GSD) of around 3 m with various levels of processing. In this case, geo-referenced and atmospheric calibrated products in surface reflectance were adopted. Considering that these data are not open-access and have a fee for use except in scientific purposes, they can be considered optional in the development of a fully free EO geospatial local service. However, they represent a useful tool during the suggested workflow.

3.2. Sentinel-1 SAR Dataset

The Sentinel-1 mission is part of the European Copernicus programme. The satellite acquires radar data with a spatial resolution between 5–40 m depending on the acquisition mode. The temporal resolution is 5 days due to two twin satellites, S1A and S1B.
The radar data were retrieved from the NASA Alaska Satellite Facility (ASF, https://asf.alaska.edu/, last accessed 7 November 2022) and processed in the SNAP v.8.0.0 [34] and the Google Earth Engine (GEE) [1,35].
The Sentinel-1 (hereinafter called S1) mission provides data from a dual-polarized C-band SAR (Synthetic Aperture Radar) instrument. The Google Earth Engine provided only the Sentinel-1 ground range detected (GRD) collection. Each scene was preprocessed with the Sentinel-1 Toolbox in the SNAP using the following steps: (1) thermal and other noise removal, (2) Speckle–Lee filter application, (3) radiometric calibration, (4) ground correction using DTM 10m VDA (normally SRTM 30m worldwide) and (5), the final corrected values for the ground were converted into decibels via log scaling (10 × log10 (x)).
The level-1 data were processed into either single look complex (SLC) and/or ground range detected (GRD) products. The SLC products preserved the phase information and were processed at the natural pixel spacing whereas the GRD products contained the detected amplitude and were multi-looked to reduce the impact of speckle. In particular. the level-1 SLC (IW) interferometric wide products (IW) were adopted [36].
The IW swath mode was the main acquisition mode over land and satisfied the majority of the service requirements (Richards 2009 [37]). As mentioned before, the SLC IW data were adopted by creating two separate datasets with the same orbit, frame and path of the scene in the study area. The two time series stacks, including all scenes ranging from 1 January 2020 to 31 December 2020 in ascending and descending mode, were considered. Those characteristics are reported in Table 2. As reported by [38], the main distortion in SAR data was the elevation displacement. In a radar image, the displacement was toward the sensor and became quite large when the sensor was nearly overhead. The displacement increased with a decreasing incidence angle. The characteristics resulting from the geometric relationship between the sensor and the terrain that were unique to radar imagery were foreshortening, layover and shadowing. The topographic features such as mountains and artificial targets such as tall buildings were displaced from their desired orthographic position. The effect was removed from an image through independent knowledge of the terrain profile.
The ascending and descending values were both processed in the SNAP v.8.0.0 and then imported into the GEE to create a mosaicked-median composite to reduce the geometric distortions in the slopes where, normally, a given acquisition mode occurs.

3.3. GIS Products and Ground Data

In this EO service, other datasets were also considered.
First, the digital terrain model (DTM) from the Aosta Valley autonomous region with a 2 m GSD was resampled in SAGA GIS with a nearest neighbor algorithm in order to perfectly overlay the Sentinel imagery. It is worth noting that the DTM was acquired with flight lidar sensors in 2008.
Second, the training set was defined as different ESRI shapefile polygons per each class in order to train the classifier. This dataset was defined from one side by object-based segmentation (OBIA) using red, green, blue, near-infrared, red edge and shortwave bands and considering the spectral signatures and the photo-interpretation analysis by adopting the ground truth data polygon (GTDP).
Third, the validation set was defined through the ESRI shapefile polygons to validate the classification. This dataset was obtained both through photo-interpretation and in-the-field GTDP. The validation was carried out in two phases: the first by calculating the confusion matrix by adopting the dataset obtained from S1 and S2 processing bands and finally by assessing the classification accuracy after merging each part.
It is worth noting that a Garmin 64S and Lemon GPS smartphone application developed by the Italian GeneGIS company were also used to define the GTDP.
As mentioned before, the collection of such data allowed us to populate both the training set and the validation set. In particular, a random GTDP selection was performed in SAGA GIS vers. 8.2.0. with an allocation of 70% of the GTDP to the training set and 30% of the GTDP to the validation set.
Finally, the Italian AGEA (Agency for Disbursements in Agriculture) yearly air flights imagery coupled with the Aosta Valley 2018 Orthophoto were used to perform deep learning on built-up areas and refine the final product with a minimum mapping unit of 100 m2 in order to keep the product coeval with the Sentinel datasets.
Generally, the tools adopted were the GEE [1], the SNAP vers. 8.0.0 to obtain and calibrate the data during the pre-processing phase, Orfeo Toolbox vers 8.0.0 [39,40], SAGA GIS vers.8.0.0 [41] to perform the classification during the processing phase and QGIS with GRASS and R v.3.0.1 [42,43,44] during the post-processing phase to prepare the final product.

4. Methods

4.1. Sentinel-2

The S2 data were obtained from the GEE. In particular, the collection COPERNICUS/S2_SR was used. A self-developed algorithm performed in the GEE was adopted to create the median composites. The S2 composite stack included bands, spectral indices and standard deviations. These input parameters are reported in Table 3. The S2 stack, including the DTM aspect and slope, was adopted as the input data during the classification while the S1 output layers served to better refine the urban and water classes. Each composite image was generated starting with the EO data available every 10 days for the period from 1 May 2020 to 30 September 2020 (t), i.e., the summer weather season, in order to correctly map the glacial surface of the territory falling within the ablation period and observe the vegetation during the phenological active season. It is worth noting that the generated composite images consisted of the median value for each pixel in the reference period t. For S2, we considered all the images that satisfied the condition in which each pixel had cloud cover equal to zero (the clouds and shadows were suitably masked and the pixel, if cloudy, was considered in the definition of the median value of the reflectance of each band). The S2 input data were reported in Table 3 as the input dataset for the k-nearest neighbor supervised classification considering all the classes. The input dataset was normalized.
The spectral indexes reported in Table 3 were calculated as follows using the S2 coefficient reported in (https://www.indexdatabase.de, last access 7 November 2022):
NDVI Normalized Difference Vegetation Index [45,46,47,48,49]
NDVI = NIR RED NIR + RED
BSI Bare Soil Index [50]
BSI = ( SWIR   1 + RED ) ( NIR + BLUE ) ( SWIR   1 + RED ) + ( NIR + BLUE )
NDWI Normalized Difference Water Index [51,52]
NDWI = NIR SWIR   1 NIR + SWIR   1
NDSI Normalized Difference Snow Index [53,54,55,56]
NDSI = NIR SWIR   1 NIR + SWIR   1
TCB (Tasseled Cap Brightness) [57,58,59,60]
( BLUE 0.3037 ) + ( GREEN 0.2793 + ( RED 0.4743 ) + ( NIR 0.5585 ) + ( SWIR 1 0.5082 ) + ( SWIR 2 0.1863 )
TCG (Tasseled Cap Greenness) [57,58,59,60]
( BLUE 0.2848 ) + ( GREEN 0.243 ) + ( RED 0.5436 ) + ( NIR 0.7243 ) + ( SWIR 1 0.0840 ) + ( SWIR 2 0.1800 )
TCW (Tasseled Cap Wetness) [57,58,59,60]
( ( BLUE 0.1509 ) + ( GREEN 0.1973 ) + ( RED 0.3279 ) + ( NIR 0.3406 ) + ( SWIR 1 0.7112 ) + ( SWIR 2 0.4572 ) )

4.2. Sentinel-1

S1 SAR images were used only to map urban and water components in addition to the optics. The other classes were mapped only with optical remote sensing due to the fact that SAR distortions in mountainous areas do not permit higher accuracy land cover mapping. Therefore, the data offered by optical remote sensing are the only data in alpine environments that are truly capable of offering consistent and reliable mapping, despite being bound to atmospheric conditions. However, due to the composite in land cover, it is possible for it to be overcome.
In order to create a mask for urban areas, as first step, pairs of S1 SLC images were downloaded from the NASA ASF. In particular, to achieve interferometry with an exact repeated coverage, only images derived from the same satellite sensor in the exact acquisition mode were used (ascending or descending see Table 2). Due to the low rate of urbanization in recent years in the Aosta Valley (in terms of an increase in built-up structures), the changes in the urban footprint observed within the last couple of years can be neglected if considering the spatial resolution of the Sentinel-1 SAR sensors (deep learning was performed to refine this). Therefore, we can consider the urban footprint as a constant value for all the Sentinel-1 images acquired within a single-year time frame. The use and interpretation of SAR imagery require a series of complex pre-processing procedures, which we ran on ESA’s SNAP v.8.0.0 software. Such procedures refer to the standard preprocessing commonly applied to Sentinel-1 products to derive interferometric coherence [61,62]. The interferometry was conducted only on those images pairs which had a perpendicular baseline possibly more of 130 m within the year (in the e.g., 2020) and a temporal baseline lower than 10 days. We reported the available adopted pairs from the ASF in Table 4.
In particular, we adopted the approach described by the ESA guidelines available in [63,64,65] by introducing a variation in the type of classification. In this case, the maximum likelihood was not chosen. Instead, random forest and batch processing were created to involve all the selected pairs. It is worth noting that co-registration and terrain-shadow correction were performed in the ESA SNAP v.8.0.0.0 toolbox. See more detail in Figure 2.
We used both that polarizations (VH and VV) on all the SAR input data, hence the output coherence image consisted of two separate raster files related to the different polarizations.
In terms of the processing procedure, we selected only the bursts that covered our study area (the Aosta Valley autonomous region) from the original product. In addition, we computed the coherence estimation using a range window size of 10 pixels. Finally, we employed the Range–Doppler terrain correction method, which used the 10 m Aosta Valley DTM implemented in the SNAP repository, selecting ED50-UTM 32 N (EPSG: 23032) as the projected reference system, and selecting an average output resolution of 10 m. The output coherence image consisted of two different bands, reporting interferometric coherence values (from 0 to 1) for the two polarizations (VH and VV). It is worth noting that the coherence between the two SAR images expressed the similarity of the radar reflection between them. Any changes in the complex reflectivity function of the scene were manifested as a decorrelation in the phase of the appropriate pixels between the two images.
Within this type of raster, it was possible to extract the urban footprint by applying supervised (and unsupervised) classification algorithms. In this case, a supervised classification was performed starting with the training set. Since we were interested in distinguishing only two different classes, i.e., urban and non-urban areas, we aggregated all non-urban land cover types into the same class (such as glaciers, lawn pastures, needle forests. etc.). A random forest classifier was performed in the SNAP. We specified the maximum number of decision trees in the RF classifier at 500 as the optimal value to achieve good noise removal and a homogeneous response [63,64,65]. Following the instructions in the ESA online material [66], we applied the interferometric coherence processing methodology outlined in the previous section to a set of S1 data obtained from January to December 2020.
The classification images produced from S1 imagery consisted of a discrete raster, with all the pixels classified into either “urban” or “non-urban” values (with values of 1 and 0, respectively) and water or “non-water”.

4.2.1. Water Mask

The calibration process was done following the [67] approach. Additionally, the normalized difference polarization index (NDPI) and the cross ratio (CR) were calculated to examine water and humid areas.
Four S1 stacks were created defining the GRD Sigma0 dB product, considering the ascending and descending modes for the VV and VH bands used in the NDPI and CR computations (see Table 1 in the materials section). In places where SAR geometrical distortion often impacts a portion of the imagery taken in the ascending or descending mode, these stacks of bands were finally trimmed using an aspect layer recovered by the 10 m DTM VDA. The angle of view and the aspect layer were taken into consideration beginning with the ancillary and metadata files during the clipping to exclude areas affected by significant distortions in both the ascending and descending mode as described by [38].
In order to fill the gaps left in each stack by the removal of the sections that were severely impacted by the distortions, the stacks were finally mosaicked. In the case of both the distortions, we evaluated those portions with a higher incidence angle in accordance with [38,62]. SAGA GIS was used for this task. The finished stack was then uploaded into the GEE to produce an annual SAR synthetic composite in order to compute the NDPI and CR. As indicated earlier, the SAR composite was employed to map the water component more accurately.
To assess the water area components, the following SAR bands and indexes (Table 5) were adopted after a pre-processing phase and the creation of a composite to reduce the SAR distortions.
NDPI and CR has been calculated as follows:
NDPI Normalized Difference Polarization Index [68]
NDPI = VH VV VH + VV
CR Cross ratio [68]
CR   = VH VV
As demonstrated by [69] in a complex morphological context, the SAT approach was more effective than the Otsu thresholding method. Therefore, these bands were included to map surface water areas through a robust stepwise automatic thresholding (SAT) approach [69]. The SAT approach consisted of the following steps. (1) SAR data was pre-processed to create a backscattering coefficient that was georeferenced with high resolution LiDAR-derived DEM (in this case the Aosta Valley DEM with 2 m step resampled at 10 m). (2) SAT for relief displacement and de-speckle filtering was used to reduce noise in the data. (3) The conversion to dB was performed in the SNAP vers.8.0.0. In fact, the strength of the radar signal reflected from a unit area on the corresponding point in the scene determined the pixel value of the SAR image. The backscatter coefficient β0 was calibrated and employed to convert the values from the digital number to the reflectivity of the surface objects. The target’s radar cross-section per surface unit with regard to the local incidence angle was parameter β0. After that, all SAR data were transformed from raw data to power units (decibels-dB). The de-speckle filter was used to eliminate the salt and pepper noise while keeping the edges and the textural structures prior to the data analysis due to the speckle effect created by the coherent radiation used by radar systems. A Speckle–Lee filter with a 5-pixel by 5-pixel window was adopted, resulting in a unique valley-hill pattern in the histogram that represented a better distinction between water and non-water surfaces. Additionally, a normalization between the incident angles were performed. In order to identify a proper threshold, a set of third-order polynomials was employed to fit the histogram in a manner of moving steps. The reason for this was that the third-order polynomial had a shape that best described the histogram of the backscattering coefficient after de-speckling and was easier to identify the turning points compared to the higher order polynomials.
Each pixel in the SAR image was identified as either land or water after the threshold was established, depending on whether its value was smaller or greater than the threshold. Through an iterative method that maximized between-class variations while simultaneously minimizing within-class variance, the threshold value was established. Finally, using the primary input pre-processed S1 GRD dataset and splitting the training set into water and non-water areas, a supervised classification (random forest) was carried out in the SNAP v.8.0.0 to improve the mapping of water areas.

4.2.2. Land Cover Legend Definition

The reference legend of the new Aosta Valley land cover and relative EO geospatial continuous service was agreed with the ISPRA and in particular the Land Remote Sensing Unit. The legend proposed perfectly reflected the new European guidelines defined by the EAGLE group. The EAGLE legend foresees more detailed levels at high resolution than those proposed in Dynamic World with a deterministic and probabilistic approach, allowing for detailed mapping of the various biomes at least at a European level. In particular, the new EAGLE legend moves away from the old Corine Land Cover which is tied to a mixture of cover and use similar to Dynamic World. In particular, given the characteristics of the mountainous areas, an expansion and more detailed definition for certain classes deemed of interest by local stakeholders was proposed. In Appendix A, the EAGLE–ISPRA legend was reported along with the agreements from the ISPRA for geomorphological complex areas such as the Aosta Valley region.

4.3. Training Set and Validation Set Definition

In order to better understand the spatial extent distribution of each class and determine the ideal number of training areas for each class in the training set, a K-means unsupervised classification with 15 classes was conducted after constructing the initial input dataset. Regarding the last criteria, there needed to be enough training pixels for each spectral class to enable accurate estimations of the components of the covariance matrix and the class conditional mean vector. The covariance matrix for an N-dimensional multispectral space was symmetric and has a size of N*N. Therefore, it required an estimation from the training data for 1/2N (N + 1) unique elements. It took at least N (N + 1) independent samples to keep the matrix from being singular. The good news was that each N-dimensional pixel vector contained N samples (one for each waveband). As a result, only (N + 1) independent training pixels were necessary. Since it was challenging to guarantee the independence of the pixels, more than the minimum amount was chosen. [37,70] advocate for using as many as 100 N training pixels per class, with 10 N being the lowest practical number. Therefore, a minimum of 250 polygons (containing a minimum of 5 pixels) were computed for this categorization, taking only the spectral bands and relative indices without the standard deviation (see Figure 3).
The object-based segmentation (OBS) approach was performed using the mean shift algorithm available on the Orfeo Toolbox software v.8.0.0 [71,72]. The OBS algorithms aimed at minimizing the spectral heterogeneity of the polygons by comparing the relative spectral properties of the neighboring pixels. The resulting segmentation vector layer (SVL) was generated according to a previously defined minimum mapping unit of 300 m2. In particular, the segmentation was performed with reference to the S2 bands performing a bilinear resampling on those without a native 10 GSD. Then, the images were segmented based on an internally homogeneous spectral response. The segments were then vectorized to generate the corresponding vector layer. During the segmentation, the required parameters were set to the values shown in Table 6. The SEG was then used to explore the internal features other than the spectral signatures, such as the recurrent radiometric patterns (texture) and the shape. Some of these polygons were then randomly extracted and others were created by analyzing the signatures of the entire stack to define the training areas, including GTDP.
As previously mentioned, the regions of interests (ROI) per each class were defined mostly on the field and partially by applying both a segmentation and a spectral signature-photo interpretation phase. Figure 3 depicts the distribution of the ROIs in the study area. Each ROI per class had a number of polygons up to 250. An overall of 4300 ROIs were defined and 70% of them were adopted as the training set 30% as the validation set. In the developed EO local service, the ROI detection and the relative changes through time were performed by coupling a self-developed semi-automatic technique. The fact that the technique required a manual check in case of an anomaly was linked to the reason that a simple probabilistic approach such as those developed in Dynamic World by Google did not permit the mapping of the real changes that happen with a high degree of accuracy, especially in alpine areas.
In the first phase a pixel-based analysis into each of the ROIs were performed in the GEE by analyzing the variance of each of the S2 bands. If the median value per each band received a variance value at the time t + 1 up to 1.5 of its previous variances at the time t (considering the same seasons, in the example the summer meteorological season) a second phase followed. In this phase, a photo-interpretation with different EO images such as PlanetScope was performed as well as the ground detection and a change in the ROI in the necessary training polygon class. This empirical formula was developed to analyze the specific case of the Aosta Valley autonomous region. It is worth noting that the S1 data showed an intrinsic time-phase decorrelation in the case of the SLC product and geomorphological effects due to the territory in SLC and GRD products. Therefore, the radar backscatter is not recommended to be considered in this procedure regarding the entire ROIs LC components.
n = λ S 2 1 σ λ ( t 0 ) 2 ( λ S 2 ) n = λ S 2 1 1.5   σ λ ( t 0 + 1 ) 2 ( λ S 2 )
where:
σλ(t0)2(λS2) is the sum of the variances of each of the S2 bands at the time t0.
σλ(t0+1)2(λS2) is sum of the variances of each of the S2 bands at the time t0 + 1.

4.4. Supervised Classification Algorithms

Starting with the S2 input dataset and the training set, the supervised classifications were performed in SAGA GIS vers. 8.0.0 and the confusion matrix was computed. Given the characteristics of the S2 input dataset and the analyzed alpine territory, the best performing algorithms adopted were the k-nearest neighbors classification-KMC and the minimum distance with pre-segmentation (SNIC) by applying a distance threshold of 50. The k-nearest neighbors (k-NN) is an algorithm used in pattern recognition for the classification of objects based on the characteristics of the objects in close proximity to the ones considered. It is a non-parametric classification method. In both cases, the input is the closest k training example in the feature space. The output depends on whether the k-NN is used for classification or regression. In the k-NN classification, the output is a membership in a class. An object is classified by a plurality vote of its neighbors, with the object assigned to the most common class among its k closest neighbors (k is a positive, typically small, integer). If k = 1, the object is simply assigned to the class of that single closest neighbor. In the k-NN regression, the output is the property value for the object. This value is the average of the closest neighboring k values. On the other hand, the minimum distance classifier is used to classify unknown image data to classes which minimize the distance between the image data and the class in the multi-feature space. The distance is defined as an index of the similarity so that the minimum distance is identical to the maximum similarity. Therefore, the minimum distance technique uses the mean vectors of each endmember and calculates the Euclidean distance from each unknown pixel to the mean vector for each class. All pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case some pixels may be unclassified if they do not meet the selected criteria. The classification was performed following a hierarchical approach described in the analysis section.

4.5. Deep Learning Using the Convolutional Neural Network (CNN)

Deep learning is a type of machine learning that relies on multiple layers of nonlinear processing for feature identification and pattern recognition, described in a model. Deep learning models can be used on different tools or by performing Python codes related to libraries such as PyTorch, Keras, TensorFlow, Onnx, Fats.ai etc. In this case, open-source libraries and Python scripts were integrated with ESRI ArcGIS Pro v.2.8 for object detection, object classification and image classification. In order to extract the building and roads, deep learning techniques using convolutional neural networks (CNNs) were adopted starting with the ortho-rectified images acquired by air flights over the Aosta Valley region. In particular, the two images regarding the AGEA (Agency for Disbursements in Agriculture) 2020 and the Aosta Valley 2018 orthophoto were used.
An inferencing process was performed to extract the roads and buildings. This phase was crucial because the information learned during the deep learning training process was put to work in detecting similar features in the datasets. ESRI ArcGIS Pro uses an external third-party framework and model definition file to run the inference geoprocessing tools. Therefore, the library and dependencies were appropriately installed. In this case, the two models provided by ESRI and edited accordingly considering the alpine areas were adopted. It is worth noting that the model definition files and (.dlpk) packages can be used multiple times as inputs for the geoprocessing tools, allowing for the assessment of multiple images over different locations and time periods using the same trained model.
The main settings adopted to perform CNN deep learning on ArcGIS Pro are reported in Table 7.
It is worth noting that filtering and threshold is normally not present in the parameter settings. In fact, to avoid deep filtering out features, in this case buildings and roads, with a surface less than 100 m square, a script was realized to include this command and perform this analysis during the building extraction phase.

5. Results and Discussion

The classification was performed following a hierarchical approach. First, a supervised k-nearest neighbor classification-KMC OpenCV considering all classes was performed. The KMC classification was carried out by normalizing the dataset due to the diversity of the input variables to make them homogeneous. The parameters adopted in the k-nearest neighbor classification-KMC (OpenCV) were a number of neighbors equal to 8, a training method classification and a type of Brute Force algorithm [41].
It is worth noting that the water and urban areas classified with SAR and urban deep learning were joined together with the urban and water classes that were mapped with the optical data to improve these classes especially in isolated mountain villages. This improved these classes by performing a semi-automatic GIS procedure. During this joining phase, a minimum mapping unit (mmu) of 100 m was considered. Therefore, only the pixels that have this mmu were mapped as urban while the other pixels that did not intersect with urban (both SAR/deep learning and optic multispectral) with less than 100 m were considered as classified by the optical data.
Since woody crops were particularly complex to discriminate (hereinafter called WC), performing only a KMC classification due to the single multispectral composite input dataset did not permit us to consider the whole phenological active season. A hierarchical classification approach was then implemented to try to overcome this issue. In particular, the developed EO service foresaw the first classification (considering the S2 main input dataset) with all the classes according to the new EAGLE land cover legend and a subsequent one with only WC class. In the end, the two classifications were subjected to a mosaicking process by first applying an overlap for WC. Then, the doubtful areas were corrected manually by photo-interpretation of composite PlanetScope imagery.
Regarding the WC class, a supervised minimum distance classification (MDC) was performed, including the following input datasets: a yearly cloud-shadow masked NDVI stack filtered (Savitzky-Golay) [73,74,75] and regularized at 10 days times-steps [76] on the GEE, and an annual stack of the NDRE index (normalized difference red-edge index for agriculture) following the same procedure of the NDVI stack [77]:
NDRE = NIR RE NIR + RE
NDVI composite Entropy [32,78]
H NDVI = i = 0 N 1 j = 0 N 1 NDVI i , j log ( NDVI i , j )
where NDVIi,j is the NDVI value at the i-th row and the j-th column in the local square window, measuring N pixels. For this study, a kernel window size of 10 × 10 pixels was adopted.
Using Rao’s Q Diversity index on the S2 NDVI composite [79], Rao’s Q is calculated using half the squared Euclidean distance. Therefore, the resulting index is [80]:
Q = d ij p i p j
where pi and pj are the proportion of the area for each category per the rows and columns in the pairwise distance dij.
The pattern analysis of the S2 NDVI composite used the following parameters: (a) dominance, (b) diversity, (c) relative richness and (d) fragmentation [81]. Then, the KMC data were mosaicked using as first overlap onto the MDC to refine only the WC class. The same was done considering urban and water masks mapped using the S1 data. As a last step, a simple filter was performed using a radius greater than 20 m. Furthermore, in the final classification, deep learning features considered in the urban and anthropic areas were included and the confusion matrix computed.
The scalable Earth Observation service to map land cover in geomorphological complex areas beyond the Dynamic World developed for the Aosta Valley region are reported in Figure 4.
The parameters reported in the confusion matrix [82] were: overall accuracy, errors of commission and omission, user and producer accuracy, sum users and sum producers and unclassified pixels (in this case each pixel were classified). The sum user indicates the number of pixels for each class in each row while the sum producer represents the number of pixels for each class in each column. The overall accuracy is calculated by summing the number of the correctly classified values and dividing it by the total number of values. Finally, the kappa coefficient measures the agreement between the classification and the truth values.
It is worth noting that a comparison with traditional methods was followed to prove the real effectiveness of the suggested approach and the developed EO services. Therefore, the k-coefficients were computed per each approach.
A traditional approach that adopted only the optic multispectral data was followed by performing a unique one-shot classification using KMC. A combined approach adopted a single KMC supervised classification with the optical data, considering all the classes and the two classifications involving only urban and water with random forest and SAT, respectively, including SAR data. Finally, the mixed hierarchical approach with the two optical supervised classifications (KMC + MD), the two SAR classifications (for urban and water respectively) and deep learning was described in this work.
The hierarchical approach improved the quality of the obtained classifications, as shown in Table 8.
The developed EO service represented a valuable to map land cover at high temporal and spatial resolutions. The combined application of S1 (only for a couple of classes) and S2 EO data coupling deep learning techniques boosted the classification of land cover components in geomorphological complex areas such as the Alps. It is worth noting that the S1 data were adopted only to better map urban and water areas due to misleading classifications that may occur due to the physical limitations of SAR in mountainous areas. Moreover, the S1 processing especially related to interferometry required high performance computing machines and would not permit a rapid land cover mapping. The developed EO service was considered to be scalable to other morphological complex realities, in particular the mountainous areas. The realized EO local service with free EO data and open-source tools, except from ESRI ArcGIS (that can be replaced by QGIS and Python script for deep learning), represented a possible workflow to perform ongoing territorial planning and management. The present EO service led to an important technology transfer in the Aosta Valley territory answering various requests at different levels (European, national and local). This EO service will streamline the implementation of local policies concerning land cover monitoring and assessment. In this regard, the Aosta Valley, similar to many Italian regions, needs to assign development funds to each municipality every year, which are largely based on the distribution and extension of the land cover components within its borders. The maps developed with the present EO service can be freely downloaded in an ESRI shapefile format or be requested in raster (.tif) from the official Aosta Valley geoportale, reachable at this link (last access 11 November 2022): https://geoportale.regione.vda.it/download/carta-copertura-suolo/. The land cover developed starting with the reference year 2020 is reported in Figure 5 with its confusion matrix in Figure 6.
The final confusion matrix was reported in the caption section below.

6. Conclusions

EO services regarding land cover mapping are crucial to monitor and assess land cover changes and propose useful sustainable management and planning policies. Free Copernicus data, offered by S1 and S2 missions as well as PlanetScope may play a great role in land cover mapping. Nevertheless, the exploitation of these kinds of EO data is well known in literature. However, there is still a lack in the development of robust services to map mountainous areas (such as the Alps) with a high level of accuracy according to the newest EAGLE guidelines. In this regard, this work has successfully explored a possible scalable and repeatable service for mountainous areas that predominantly uses optical data, but also use radar data for some components, aiming to compensate native SAR acquisition mode distortions by adopting a mixed hierarchical approach to map land cover. This geospatial service based on EO data may help with the implementation of European, global and local policies concerning land cover mapping both at high spatial and temporal resolutions to assess land cover changes due to anthropic pressure and climate change and pursue a sustainable development perspective, empowering the technological transfer in mountainous realities with a higher degree of detail beyond the GEE-based Dynamic World.

Author Contributions

Conceptualization, T.O.; methodology, T.O.; software: T.O. and D.C.; validation, T.O. and D.C.; formal analysis, T.O.; investigation, T.O.; resources, T.O.; data curation, T.O.; writing—original draft preparation, T.O.; writing—review and editing, T.O. and E.B.M.; visualization, T.O. and D.C.; supervision, E.B.M.; project administration, T.O.; funding acquisition, T.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The findings may be reachable at https://geoportale.regione.vda.it/download/carta-copertura-suolo/ (last accessed 27 December 2022).

Acknowledgments

Thanks to our colleagues at INVA spa and GEO4Agri DISAFA Laboratory and Luigi Perotti, as well as Annalisa Viani for the support in performing the following work. A remarkable thanks to Edoardo Cremonese for the great feedback regarding this product and Fabrizia Joly, both of the Environmental Protection Agency of Aosta Valley, and Luca Congedo, Michele Munafò and Ines Marinosci of the ISPRA Land Unit. A huge thanks to the Regione Autonoma Valle d’Aosta, the head of the INVA spa GIS area Davide Freppaz and the head of the Regional Cartographic Office Chantal Tréves of the Aosta Valley autonomous region for permitting the realization of this work. A final thanks to all INVA spa GIS colleagues and to Pierre Vuillermoz. Last but not least, thanks to everyone in the Aosta Valley who expressed positive or negative feedbacks about the present work that has pushed us to do better day after day and never give up.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

A legend description is described as follows.
RAVA LegendEAGLE—ISPRA LegendDescription
Urban and anthropic areas (111)Artificial abiotic surfacesSurfaces strongly influenced by anthropic activity and characterized by human settlements. These are areas in which structures are built without distinction for the intended use or are under construction, as well as roads, airports, railways, parking lots and any artifact capable of determining a permanent or semi-permanent loss of the soil resource, including caves and mines.
Shrubland and transitional woods
(324)
ShrublandNatural or natural-shaped surfaces. Areas characterized by arboreal species and generally sparse woods near grazing areas or areas with reduced herbaceous vegetation and rocks (such as rubble). These areas indicate the dynamics of the ecological forest succession following the abandonment of grazing areas and consequent expansion of forest areas or following disturbances to natural or anthropogenic disturbances to the forest.
Woody crops
(221)
Not defined
(considered separately vineyards and orchards)
Surfaces characterized by the presence of various cultivation systems, in particular orchards and vineyards. Surfaces influenced by human activity and agronomic practices.
Water surfaces
(512)
WaterNatural or natural-shaped surfaces. Areas characterized by the presence of bodies of water such as natural lakes of fluvial and/or glacial origin, artificial reservoirs and bodies of water in wetlands.
Water courses
(511)
WaterNatural or natural-shaped surfaces. Areas characterized by the presence of watercourses such as rivers, streams along runoff lines and slope impluviums.
Needle-leaved forests
(312)
Needle-leavedNatural or natural-shaped surfaces. Wooded areas characterized by a prevalent and widespread presence of coniferous trees on a given surface (larch, spruce, fir, pine, Douglas fir, etc.)
Broad-leaved forests
(311)
Broad-leaved forestsNatural or natural-shaped surfaces. Wooded areas characterized by a prevalent presence of broad-leaved trees on a given surface (oak, chestnut, ash, maple, linden, alder, birch, poplars, etc.)
Mixed forests and moors
(313)
Not definedNatural or natural-shaped surfaces. Wooded areas characterized by the presence of both broad-leaved and conifers with no evident prevalence and sometimes shrubs or the presence of heather (Erica spp. and Calluna vulgaris L.).
Permanent snow and ice
(335)
Permanent snow and iceNatural surfaces. Areas characterized by the presence of glaciers and glaciated surfaces such as seracs, icefalls and frozen or snow-covered surfaces such as snowfields in the considered observation period. It should be noted how the measurements carried out fall within the full ablation season and can, therefore, constitute a useful data on the perimeter in this sense. The rock glaciers entirely covered by debris and rocks are not included in this class, preferring to follow a criterion of spectral uniformity based on the typical characteristics of remote sensing with s1-s2 data in the context of the Copernicus Programme that are capable of investigating the surfaces, not the subsoil, as indicated in international scientific literature regarding both optical and sar remote sensing data of these missions. Therefore, we refer to the rock class.
Natural grasslands and alpine pastures
(321)
Defined as generic pasturesNatural or natural-shaped surfaces. Areas characterized by natural evolution or by pastoral management conditioning practices. These areas are characterized by the presence of medium-high altitude herbaceous species.
Lawn pastures
(231)
Defined as generic pasturesNatural-shaped surfaces. Areas characterized by herbaceous cover conditioned by pastoral and agronomic practices in this case mowing, haymaking and eventual irrigation. The areas can be characterized by both grazing and mowing.
Bare rocks
(332)
Consolidated surfacesNatural surfaces. Areas characterized by the presence of outcropping rocks and coherent non-vegetated soils.
Discontinuous herbaceous vegetation of medium-low altitude
(909)
Not defined
(only an unconsolidated class is present in a non-vegetated macro-class)
Natural or natural-shaped surfaces. Areas characterized by unconsolidated soils with continuous vegetation cover over time as they have reduced annual vegetation, xeric sparse vegetation or poorly managed grassing with little or no agronomic conditioning practices. This coverage also includes rock jumps provided with vegetation spots with occasional but not very powerful soils and extremely limited or absent vegetation.
Sparse herbaceous vegetation at high altitudes
(333)
Herbaceous vegetation permanenetNatural surfaces. Areas characterized by the presence of scarce but permanent vegetation that is difficult to graze given both the characteristics of the vegetation and, in some cases, the slope. These are high-altitude surfaces near rocks or natural grasslands and woods.
Alpine wetlands
(410)
Defined as generic wetlandsNatural surfaces. Areas characterized by the presence of wetlands at different altitudes such as swamps, peat bogs and vegetation typical of these areas. Only the stretches of water in correspondence with these areas return to the water bodies.

References

  1. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-Scale Geospatial Analysis for Everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  2. Lukacz, P.M. Data Capitalism, Microsoft’s Planetary Computer, and the Biodiversity Informatics Community. In Proceedings of the International Conference on Information, Virtual Event, 28 February–4 March 2022; pp. 355–369. [Google Scholar]
  3. Mutanga, O.; Kumar, L. Google Earth Engine Applications. Remote Sens. 2019, 11, 591. [Google Scholar] [CrossRef] [Green Version]
  4. Highfill, T.C.; MacDonald, A.C. Estimating the United States Space Economy Using Input-Output Frameworks. Space Policy 2022, 60, 101474. [Google Scholar] [CrossRef]
  5. Miraux, L. Environmental Limits to the Space Sector’s Growth. Sci. Total Environ. 2022, 806, 150862. [Google Scholar] [CrossRef]
  6. Andreatta, D.; Gianelle, D.; Scotton, M.; Vescovo, L.; Dalponte, M. Detection of Grassland Mowing Frequency Using Time Series of Vegetation Indices from Sentinel-2 Imagery. GISci. Remote Sens. 2022, 59, 481–500. [Google Scholar] [CrossRef]
  7. Orusa, T.; Borgogno Mondino, E. Exploring Short-Term Climate Change Effects on Rangelands and Broad-Leaved Forests by Free Satellite Data in Aosta Valley (Northwest Italy). Climate 2021, 9, 47. [Google Scholar] [CrossRef]
  8. Orusa, T.; Orusa, R.; Viani, A.; Carella, E.; Borgogno Mondino, E. Geomatics and EO Data to Support Wildlife Diseases Assessment at Landscape Level: A Pilot Experience to Map Infectious Keratoconjunctivitis in Chamois and Phenological Trends in Aosta Valley (NW Italy). Remote Sens. 2020, 12, 3542. [Google Scholar] [CrossRef]
  9. Feranec, J.; Soukup, T.; Hazeu, G.; Jaffrain, G. European Landscape Dynamics: CORINE Land Cover Data; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  10. Panagos, P.; Jones, A.; Van Liedekerke, M.; Orgiazzi, A.; Lugato, E.; Montanarella, L. JRC Support to the European Joint Programme for Soil (EJP SOIL); Technical Report by the Joint Research Centre (JRC), EUR 30450 EN, JRC122248; JRC: Ispra, Italy, 2020. [Google Scholar]
  11. Žlebir, S. Copernicus Earth Observation Programme. 40th COSPAR Sci. Assem. 2014, 40, A0–A1. [Google Scholar]
  12. De Fioravante, P.; Luti, T.; Cavalli, A.; Giuliani, C.; Dichicco, P.; Marchetti, M.; Chirici, G.; Congedo, L.; Munafò, M. Multispectral Sentinel-2 and SAR Sentinel-1 Integration for Automatic Land Cover Classification. Land 2021, 10, 611. [Google Scholar] [CrossRef]
  13. Anderson, K.; Ryan, B.; Sonntag, W.; Kavvada, A.; Friedl, L. Earth Observation in Service of the 2030 Agenda for Sustainable Development. Geo-Spat. Inf. Sci. 2017, 20, 77–96. [Google Scholar] [CrossRef]
  14. Wunder, S.; Kaphengst, T.; Frelih-Larsen, A. Implementing Land Degradation Neutrality (SDG 15.3) at National Level: General Approach, Indicator Selection and Experiences from Germany. In International Yearbook of Soil Law and Policy 2017; Springer: Berlin/Heidelberg, Germany, 2018; pp. 191–219. [Google Scholar]
  15. Arnold, S.; Kosztra, B.; Banko, G.; Milenov, P.; Smith, G.; Hazeu, G. Explanatory Content Documentation of the EAGLE Concept 2021; Version 3.1; EEA: Copenhagen, Denmark, 2021. [Google Scholar]
  16. Gómez, C.; White, J.C.; Wulder, M.A. Optical Remotely Sensed Time Series Data for Land Cover Classification: A Review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef]
  17. Zhu, Z.; Gallant, A.L.; Woodcock, C.E.; Pengra, B.; Olofsson, P.; Loveland, T.R.; Jin, S.; Dahal, D.; Yang, L.; Auch, R.F. Optimizing Selection of Training and Auxiliary Data for Operational Land Cover Classification for the LCMAP Initiative. ISPRS J. Photogramm. Remote Sens. 2016, 122, 206–221. [Google Scholar] [CrossRef] [Green Version]
  18. Holloway, J.; Helmstedt, K.J.; Mengersen, K.; Schmidt, M. A Decision Tree Approach for Spatially Interpolating Missing Land Cover Data and Classifying Satellite Images. Remote Sens. 2019, 11, 1796. [Google Scholar] [CrossRef] [Green Version]
  19. Aune-Lundberg, L.; Strand, G.-H. The Content and Accuracy of the CORINE Land Cover Dataset for Norway. Int. J. Appl. Earth Obs. Geoinf. 2021, 96, 102266. [Google Scholar] [CrossRef]
  20. Waser, L.T.; Schwarz, M. Comparison of Large-Area Land Cover Products with National Forest Inventories and CORINE Land Cover in the European Alps. Int. J. Appl. Earth Obs. Geoinf. 2006, 8, 196–207. [Google Scholar] [CrossRef]
  21. De Fioravante, P.; Strollo, A.; Assennato, F.; Marinosci, I.; Congedo, L.; Munafò, M. High Resolution Land Cover Integrating Copernicus Products: A 2012–2020 Map of Italy. Land 2021, 11, 35. [Google Scholar] [CrossRef]
  22. Congedo, L.; Sallustio, L.; Munafò, M.; Ottaviano, M.; Tonti, D.; Marchetti, M. Copernicus High-Resolution Layers for Land Cover Classification in Italy. J. Maps 2016, 12, 1195–1205. [Google Scholar] [CrossRef] [Green Version]
  23. Karra, K.; Kontgis, C.; Statman-Weil, Z.; Mazzariello, J.C.; Mathis, M.; Brumby, S.P. Global Land Use/Land Cover with Sentinel 2 and Deep Learning. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 4704–4707. [Google Scholar]
  24. Comber, A.; Fisher, P.; Wadsworth, R. What Is Land Cover? Environ. Plan. B Plan. Des. 2005, 32, 199–209. [Google Scholar] [CrossRef] [Green Version]
  25. Comber, A.J.; Wadsworth, R.; Fisher, P. Using Semantics to Clarify the Conceptual Confusion between Land Cover and Land Use: The Example of “Forest”. J. Land Use Sci. 2008, 3, 185–198. [Google Scholar] [CrossRef] [Green Version]
  26. Vizzari, M. PlanetScope, Sentinel-2, and Sentinel-1 Data Integration for Object-Based Land Cover Classification in Google Earth Engine. Remote Sens. 2022, 14, 2628. [Google Scholar] [CrossRef]
  27. Velastegui-Montoya, A.; Rivera-Torres, H.; Herrera-Matamoros, V.; Sadeck, L.; Quevedo, R.P. Application of Google Earth Engine for Land Cover Classification in Yasuni National Park, Ecuador. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 6376–6379. [Google Scholar]
  28. Huang, K.; Yang, G.; Yuan, Y.; Sun, W.; Meng, X.; Ge, Y. Optical and SAR Images Combined Mangrove Index Based on Multi-Feature Fusion. Sci. Remote Sens. 2022, 5, 100040. [Google Scholar] [CrossRef]
  29. Meng, X.; Liu, Q.; Shao, F.; Li, S. Spatio–Temporal–Spectral Collaborative Learning for Spatio–Temporal Fusion with Land Cover Changes. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5704116. [Google Scholar] [CrossRef]
  30. Büttner, G.; Feranec, J.; Jaffrain, G.; Mari, L.; Maucha, G.; Soukup, T. The CORINE Land Cover 2000 Project. EARSeL eProceedings 2004, 3, 331–346. [Google Scholar]
  31. Büttner, G. CORINE Land Cover and Land Cover Change Products. In Land Use and Land Cover Mapping in Europe; Springer: Berlin/Heidelberg, Germany, 2014; pp. 55–74. [Google Scholar]
  32. Carella, E.; Orusa, T.; Viani, A.; Meloni, D.; Borgogno-Mondino, E.; Orusa, R. An Integrated, Tentative Remote-Sensing Approach Based on NDVI Entropy to Model Canine Distemper Virus in Wildlife and to Prompt Science-Based Management Policies. Animals 2022, 12, 1049. [Google Scholar] [CrossRef]
  33. Orusa, T.; Mondino, E.B. Landsat 8 Thermal Data to Support Urban Management and Planning in the Climate Change Era: A Case Study in Torino Area, NW Italy. In Proceedings of the Remote Sensing Technologies and Applications in Urban Environments IV, International Society for Optics and Photonics, Strasbourg, France, 9–10 September 2019; Volume 11157. [Google Scholar]
  34. Weiß, T.; Fincke, T. SenSARP: A Pipeline to Pre-Process Sentinel-1 SLC Data by Using ESA SNAP Sentinel-1 Toolbox. J. Open Source Softw. 2022, 7, 3337. [Google Scholar] [CrossRef]
  35. Amani, M.; Ghorbanian, A.; Ahmadi, S.A.; Kakooei, M.; Moghimi, A.; Mirmazloumi, S.M.; Moghaddam, S.H.A.; Mahdavi, S.; Ghahremanloo, M.; Parsian, S.; et al. Google Earth Engine Cloud Computing Platform for Remote Sensing Big Data Applications: A Comprehensive Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5326–5350. [Google Scholar] [CrossRef]
  36. Braun, A. Retrieval of Digital Elevation Models from Sentinel-1 Radar Data–Open Applications, Techniques, and Limitations. Open Geosci. 2021, 13, 532–569. [Google Scholar] [CrossRef]
  37. Richards, J.A. Remote Sensing with Imaging Radar; Springer: Berlin/Heidelberg, Germany, 2009; Volume 1. [Google Scholar]
  38. Samuele, D.P.; Filippo, S.; Orusa, T.; Enrico, B.-M. Mapping SAR Geometric Distortions and Their Stability along Time: A New Tool in Google Earth Engine Based on Sentinel-1 Image Time Series. Int. J. Remote Sens. 2021, 42, 9126–9145. [Google Scholar] [CrossRef]
  39. Grizonnet, M.; Michel, J.; Poughon, V.; Inglada, J.; Savinaud, M.; Cresson, R. Orfeo ToolBox: Open Source Processing of Remote Sensing Images. Open Geospat. Data Softw. Stand. 2017, 2, 15. [Google Scholar] [CrossRef] [Green Version]
  40. Inglada, J.; Christophe, E. The Orfeo Toolbox Remote Sensing Image Processing Software. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 4, p. IV-733. [Google Scholar]
  41. Conrad, O.; Bechtel, B.; Bock, M.; Dietrich, H.; Fischer, E.; Gerlitz, L.; Wehberg, J.; Wichmann, V.; Böhner, J. System for Automated Geoscientific Analyses (SAGA) v. 2.1. 4. Geosci. Model Dev. 2015, 8, 1991–2007. [Google Scholar] [CrossRef] [Green Version]
  42. QGIS Development Team. QGIS Development Team. QGIS Geographic Information System. In A Free and Open Source Geographic Information System; QGIS: Rome, Italy, 2018. [Google Scholar]
  43. Neteler, M.; Bowman, M.H.; Landa, M.; Metz, M. GRASS GIS: A Multi-Purpose Open Source GIS. Environ. Model. Softw. 2012, 31, 124–130. [Google Scholar] [CrossRef] [Green Version]
  44. Neteler, M.; Mitasova, H. Open Source GIS: A GRASS GIS Approach; Springer Science & Business Media: Cham, Switzerland, 2013; Volume 689. [Google Scholar]
  45. Deering, D.W. Rangeland Reflectance Characteristics Measured by Aircraft and Spacecraftsensors; Texas A&M University: College Station, TX, USA, 1978. [Google Scholar]
  46. Deering, D. Measuring” Forage Production” of Grazing Units from Landsat MSS Data. In Proceedings of the Tenth International Symposium of Remote Sensing of the Envrionment, Ann Arbor, MI, USA, 6–10 October 1975; pp. 1169–1198. [Google Scholar]
  47. Rouse, J., Jr.; Haas, R.H.; Deering, D.; Schell, J.; Harlan, J.C. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural Vegetation; NASA: College Station, TX, USA, 1974. [Google Scholar]
  48. Rouse, J.; Haas, R.; Schell, J.; Deering, D. NASA SP-351. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third ERTS (Earth Resources Technology Satellite) Symposium, Washington, DC, USA, 10–14 December 1973; Volume 1, pp. 309–317. [Google Scholar]
  49. Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  50. Liu, H.; Zhan, Q.; Yang, C.; Wang, J. The Multi-Timescale Temporal Patterns and Dynamics of Land Surface Temperature Using Ensemble Empirical Mode Decomposition. Sci. Total Environ. 2019, 652, 243–255. [Google Scholar] [CrossRef] [PubMed]
  51. McFeeters, S.K. The Use of the Normalized Difference Water Index (NDWI) in the Delineation of Open Water Features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  52. Xu, H. Modification of Normalised Difference Water Index (NDWI) to Enhance Open Water Features in Remotely Sensed Imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  53. Valovcin, F. Snow/Cloud Discrimination; Air Force Geophysics Laboratory Rep; AFGL-TR-76-0174/ADA 032385; AFRL: Wright-Patterson Air Force Base, OH, USA, 1976. [Google Scholar]
  54. Valovcin, F.R. Spectral Radiance of Snow and Clouds in the near Infrared Spectral Region; AFRL: Wright-Patterson Air Force Base, OH, USA, 1978; Volume 78. [Google Scholar]
  55. Kyle, H.; Curran, R.; Barnes, W.; Escoe, D. A Cloud Physics Radiometer. In Proceedings of the 3rd Conference on Atmospheric Radiation, Berkeley, CA, USA, 28–30 June 1978; pp. 107–109. [Google Scholar]
  56. Bunting, J.T. Improved Cloud Detection Utilizing Defense Meteorological Satellite Program near Infrared Measurements; AFRL: Wright-Patterson Air Force Base, OH, USA, 1982. [Google Scholar]
  57. Jensen, J. Introductory Digital Image Processing—A Remote Sensing Perspective; New Jersey Prentice Hall: Englewood Cliffs, NJ, USA, 1986; 379p. [Google Scholar]
  58. Crist, E.P.; Cicone, R.C. Others Application of the Tasseled Cap Concept to Simulated Thematic Mapper Data. Photogramm. Eng. Remote Sens. 1984, 50, 343–352. [Google Scholar]
  59. Kauth, R.; Lambeck, P.; Richardson, W.; Thomas, G.; Pentland, A. Feature Extraction Applied to Agricultural Crops as Seen by Landsat. NASA. Johns. Space Cent. Proc. Tech. Sess. 1979, 1–2, 705–721. [Google Scholar]
  60. Huang, C.; Wylie, B.; Yang, L.; Homer, C.; Zylstra, G. Derivation of a Tasselled Cap Transformation Based on Landsat 7 At-Satellite Reflectance. Int. J. Remote Sens. 2002, 23, 1741–1748. [Google Scholar] [CrossRef]
  61. Kellndorfer, J.; Cartus, O.; Lavalle, M.; Magnard, C.; Milillo, P.; Oveisgharan, S.; Osmanoglu, B.; Rosen, P.A.; Wegmüller, U. Global Seasonal Sentinel-1 Interferometric Coherence and Backscatter Data Set. Sci. Data 2022, 9, 73. [Google Scholar] [CrossRef]
  62. Ohki, M.; Abe, T.; Tadono, T.; Shimada, M. Landslide Detection in Mountainous Forest Areas Using Polarimetry and Interferometric Coherence. Earth Planets Space 2020, 72, 67. [Google Scholar] [CrossRef]
  63. Sica, F.; Pulella, A.; Nannini, M.; Pinheiro, M.; Rizzoli, P. Repeat-Pass SAR Interferometry for Land Cover Classification: A Methodology Using Sentinel-1 Short-Time-Series. Remote Sens. Environ. 2019, 232, 111277. [Google Scholar] [CrossRef]
  64. Veci, L. Sentinel-1 Toolbox—TOPS Interferometry Tutorial; European Space Agency; Array Systems Computing Inc.: Toronto, ON, Canada, 2015. [Google Scholar]
  65. Veci, L.; Interferometry Tutorial. Array Systems. 2015. Available online: http://sentinel1.s3.amazonaws.com/docs/S1TBX%20Stripmap%20Interferometry%20with%20Sentinel-1%20Tutorial.pdf (accessed on 12 August 2017).
  66. Semenzato, A.; Pappalardo, S.E.; Codato, D.; Trivelloni, U.; De Zorzi, S.; Ferrari, S.; De Marchi, M.; Massironi, M. Mapping and Monitoring Urban Environment through Sentinel-1 SAR Data: A Case Study in the Veneto Region (Italy). ISPRS Int. J. Geo-Inf. 2020, 9, 375. [Google Scholar] [CrossRef]
  67. Filipponi, F. Sentinel-1 GRD Preprocessing Workflow. In Proceedings of the Multidisciplinary Digital Publishing Institute Proceedings, Online Event, 22 May–5 June 2019; Volume 18, p. 11. [Google Scholar]
  68. Knott, E.F.; Schaeffer, J.F.; Tulley, M.T. Radar Cross Section; SciTech Publishing: Sunnyvale, CA, USA, 2004. [Google Scholar]
  69. Zhang, W.; Hu, B.; Brown, G.S. Automatic Surface Water Mapping Using Polarimetric SAR Data for Long-Term Change Detection. Water 2020, 12, 872. [Google Scholar] [CrossRef] [Green Version]
  70. Davis, S.M.; Swain, P.H. Remote Sensing: The Quantitative Approach; McGraw-Hill International Book Company: New York, NY, USA, 1978. [Google Scholar]
  71. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A Review of Algorithms and Challenges from Remote Sensing Perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar]
  72. Stromann, O.; Nascetti, A.; Yousif, O.; Ban, Y. Dimensionality Reduction and Feature Selection for Object-Based Land Cover Classification Based on Sentinel-1 and Sentinel-2 Time Series Using Google Earth Engine. Remote Sens. 2020, 12, 76. [Google Scholar] [CrossRef]
  73. Press, W.H.; Teukolsky, S.A. Savitzky-Golay Smoothing Filters. Comput. Phys. 1990, 4, 669–672. [Google Scholar] [CrossRef]
  74. Chen, J.; Jönsson, P.; Tamura, M.; Gu, Z.; Matsushita, B.; Eklundh, L. A Simple Method for Reconstructing a High-Quality NDVI Time-Series Data Set Based on the Savitzky–Golay Filter. Remote Sens. Environ. 2004, 91, 332–344. [Google Scholar] [CrossRef]
  75. Schafer, R.W. What Is a Savitzky-Golay Filter? IEEE Signal Process. Mag. 2011, 28, 111–117. [Google Scholar] [CrossRef]
  76. Nguyen, M.D.; Baez-Villanueva, O.M.; Bui, D.D.; Nguyen, P.T.; Ribbe, L. Harmonization of Landsat and Sentinel 2 for Crop Monitoring in Drought Prone Areas: Case Studies of Ninh Thuan (Vietnam) and Bekaa (Lebanon). Remote Sens. 2020, 12, 281. [Google Scholar] [CrossRef] [Green Version]
  77. Barnes, E.; Clarke, T.; Richards, S.; Colaizzi, P.; Haberland, J.; Kostrzewski, M.; Waller, P.; Choi, C.; Riley, E.; Thompson, T.; et al. Coincident Detection of Crop Water Stress, Nitrogen Status and Canopy Density Using Ground Based Multispectral Data. In Proceedings of the Fifth International Conference on Precision Agriculture, Bloomington, MN, USA, 16–19 July 2000; Volume 1619. [Google Scholar]
  78. De Marinis, P.; De Petris, S.; Sarvia, F.; Manfron, G.; Momo, E.J.; Orusa, T.; Corvino, G.; Sali, G.; Borgogno, E.M. Supporting Pro-Poor Reforms of Agricultural Systems in Eastern DRC (Africa) with Remotely Sensed Data: A Possible Contribution of Spatial Entropy to Interpret Land Management Practices. Land 2021, 10, 1368. [Google Scholar] [CrossRef]
  79. Rocchini, D.; Marcantonio, M.; Ricotta, C. Measuring Rao’s Q Diversity Index from Remote Sensing: An Open Source Solution. Ecol. Indic. 2017, 72, 234–238. [Google Scholar] [CrossRef]
  80. Pavoine, S. Clarifying and Developing Analyses of Biodiversity: Towards a Generalisation of Current Approaches. Methods Ecol. Evol. 2012, 3, 509–518. [Google Scholar] [CrossRef]
  81. Borgogno-Mondino, E.; Farbo, A.; Novello, V.; Palma, L. de A Fast Regression-Based Approach to Map Water Status of Pomegranate Orchards with Sentinel 2 Data. Horticulturae 2022, 8, 759. [Google Scholar] [CrossRef]
  82. Foody, G.M. Status of Land Cover Classification Accuracy Assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
Figure 1. EO geospatial service area of interest. Aosta Valley autonomous region in NW Italy (ED50 UTM 32N) EPSG:23032.
Figure 1. EO geospatial service area of interest. Aosta Valley autonomous region in NW Italy (ED50 UTM 32N) EPSG:23032.
Applsci 13 00390 g001
Figure 2. Interferometry and classification in the ESA SNAP tool.
Figure 2. Interferometry and classification in the ESA SNAP tool.
Applsci 13 00390 g002
Figure 3. ROIs considering both the training and validation sets.
Figure 3. ROIs considering both the training and validation sets.
Applsci 13 00390 g003
Figure 4. Workflow processing to process the scalable Earth Observation service to map land cover in geomorphological complex areas.
Figure 4. Workflow processing to process the scalable Earth Observation service to map land cover in geomorphological complex areas.
Applsci 13 00390 g004
Figure 5. Aosta Valley Land Cover 2020.
Figure 5. Aosta Valley Land Cover 2020.
Applsci 13 00390 g005
Figure 6. Aosta valley land cover 2020 confusion matrix.
Figure 6. Aosta valley land cover 2020 confusion matrix.
Applsci 13 00390 g006
Table 1. A simple overview of the Sentinel-2 surface reflectance product collection composition in the Google Earth Engine.
Table 1. A simple overview of the Sentinel-2 surface reflectance product collection composition in the Google Earth Engine.
BandsDescriptionSpatial Resolution (m)
B1Aerosols60
B2Blue10
B3Green10
B4Red10
B5Red Edge 120
B6Red Edge 220
B7Red Edge 320
B8NIR10
B8ARed Edge 420
B9Water vapor60
B11SWIR 120
B12SWIR 220
SCLMask10
MSK_CLD_PRBCloud probability mask20
QA10-60Cloud mask10–60
Table 2. SAR stacks parameters criteria.
Table 2. SAR stacks parameters criteria.
Absolute Orbit NumberPolarizationFramePathFlight Direction
24,789VV+VH14688ASCENDING
24,417VV+VH44166DESCENDING
Table 3. S2 input datasets.
Table 3. S2 input datasets.
IDBands/IndexDescription
1“B2”Blue
2“B3”Green
3“B4”Red
4“B5”Vegetation Red Edge 1
5“B6”Vegetation Red Edge 2
6“B7”Vegetation Red Edge 3
7“B8”NIR
8“B8A”Vegetation Red Edge 4
9“B11”SWIR 1
10“B12”SWIR 2
11“B2_STD”Standard deviation Blue
12“B3_STD”Standard deviation Green
13“B4_STD”Standard deviation Red
14“B5_STD”Standard deviation Red Edge 1
15“B6_STD”Standard deviation Red Edge 2
16“B7_STD”Standard deviation Red Edge 3
17“B8_STD”Standard deviation NIR
18“B8A_STD”Standard deviation Red Edge 4
19“B11_STD”Standard deviation SWIR 1
20“B12_STD”Standard deviation SWIR 2
21“NDVI”Normalized Difference Vegetation Index
22“NDVI_STD”Standard deviation Normalized Difference Vegetation Index
23“BSI”Bare Soil Index
24“BSI_STD”Standard deviation Bare Soil Index
25“NDWI”Normalized Difference Water Index
26“NDWI_STD”Standard deviation Normalized Difference Water Index
27“NDSI”Normalized Difference Snow Index
28“NDSI_STD”Standard deviation Normalized Difference Snow Index
29“TCB”Tasseled Cap Brightness
30“TCB_STD”Standard deviation Tasseled Cap Brightness
31“TCG”Tasseled Cap Greenness
32“TCG_STD”Standard deviation Tasseled Cap Greenness
33“TCW”Tasseled Cap Wetness
34“TCW_STD”Standard deviation Tasseled Cap Wetness
43DTMDigital Terrain Model 10 m
44
45
Slope
Aspect
Terrain Slope
Terrain aspect
Table 4. SAR S1 images pairs with the distance baseline and days.
Table 4. SAR S1 images pairs with the distance baseline and days.
S1 Pairs Ascending Orbit
(Product n°, Baseline, Temproal Distance in Days between the Two Acqusitions)
S1A_IW_SLC__1SDV_20200430T172327_20200430T172354_032360_03BEE8_2356S1B_IW_SLC__1SDV_20200506T172238_20200506T172305_021464_028C15_773E136 m5
S1B_IW_SLC__1SDV_20200530T172240_20200530T172307_021814_029680_5539S1A_IW_SLC__1SDV_20200605T172329_20200605T172356_032885_03CF21_34AB152 m7
S1A_IW_SLC__1SDV_20200804T172333_20200804T172400_033760_03E9BC_E6ADS1B_IW_SLC__1SDV_20200810T172255_20200810T172322_022864_02B66E_1179152 m6
S1A_IW_SLC__1SDV_20200828T172334_20200828T172401_034110_03F5FE_8B79S1B_IW_SLC__1SDV_20200903T172253_20200903T172320_023214_02C15A_3F08162 m6
S1B_IW_SLC__1SDV_20200903T172253_20200903T172320_023214_02C15A_3F08S1A_IW_SLC__1SDV_20200909T172335_20200909T172402_034285_03FC20_A288159 m6
S1B_IW_SLC__1SDV_20201009T172254_20201009T172321_023739_02D1C8_57D8S1A_IW_SLC__1SDV_20201015T172336_20201015T172402_034810_040E9B_A403134 m6
S1B_IW_SLC__1SDV_20201114T172240_20201114T172307_024264_02E22F_E4D7S1A_IW_SLC__1SDV_20201120T172335_20201120T172402_035335_0420C3_E828144 m7
S1 Pairs Ascending orbit
S1A_IW_SLC__1SDV_20200112T053523_20200112T053550_030763_03871C_D73ES1B_IW_SLC__1SDV_20200118T053455_20200118T053522_019867_02592E_ADC0165 m5
S1B_IW_SLC__1SDV_20200211T053455_20200211T053522_020217_026479_497ES1A_IW_SLC__1SDV_20200217T053522_20200217T053548_031288_03996E_2722155 m7
S1A_IW_SLC__1SDV_20200324T053522_20200324T053549_031813_03ABB5_4955S1B_IW_SLC__1SDV_20200330T053455_20200330T053522_020917_027ABA_DC4C129 m5
S1B_IW_SLC__1SDV_20200505T053456_20200505T053523_021442_028B5C_A52FS1A_IW_SLC__1SDV_20200511T053523_20200511T053550_032513_03C3F4_2251138 m7
S1B_IW_SLC__1SDV_20200118T053455_20200118T053522_019867_02592E_ADC0S1A_IW_SLC__1SDV_20200124T053522_20200124T053549_030938_038D40_8123147 m7
Table 5. SAR Sentinel-1 GRD bands in water mapping.
Table 5. SAR Sentinel-1 GRD bands in water mapping.
S1 GRD
IDBands/IndexDescription
1“VVSingle co-polarization, vertical transmit/vertical receive
2“VH”Dual-band cross-polarization, vertical transmit/horizontal receive
3“VV_STD”Standard deviation single co-polarization, vertical transmit/vertical receive
4“VH_STD”Standard deviation dual-band cross-polarization, vertical transmit/horizontal receive
5“NDPI”Normalized Difference Polarization Index
6“NDPI_STD”Standard deviation Normalized Difference Polarization Index
7“CR”Cross ratio
8“CR_STD”Standard deviation cross ratio
Table 6. Segmentation settings in SAGA GIS.
Table 6. Segmentation settings in SAGA GIS.
Segmentation ParameterSettings
Spatial radius3 pixels
Range radius100 DN
Mode convergence threshold0.1
Maximum numerous of iterations200
Minimum region size3 pixels
Table 7. Deep learning CNN settings in ArcGIS Pro v.2.8.
Table 7. Deep learning CNN settings in ArcGIS Pro v.2.8.
PARAMETERSINPUT SETTINGS
Input RasterOrthophoto.ecw
Output Detected ObjectBuildings and Roads
Model Definition:Edited models from ESRI .dplk
Padding:32
Batch_size:16
Threshold0.9
Filtering threshold99.999
Return_bboxesFalse
Non-Maximum SuppressionOther parametersCheckedDefault
ENVIRONMENTSINPUT SETTINGS
Processing ExtentRaster extent
Processor Type
GPU id
Cell size
Parallel processing
GPU
Default
Raster native GSD
8
Table 8. Accuracies.
Table 8. Accuracies.
ApproachOverall AccuracyK-Coefficient
Traditional approach88%0.88
Combined approach89%0.89
Mixed Hierarchical approach97%0.97
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Orusa, T.; Cammareri, D.; Borgogno Mondino, E. A Scalable Earth Observation Service to Map Land Cover in Geomorphological Complex Areas beyond the Dynamic World: An Application in Aosta Valley (NW Italy). Appl. Sci. 2023, 13, 390. https://doi.org/10.3390/app13010390

AMA Style

Orusa T, Cammareri D, Borgogno Mondino E. A Scalable Earth Observation Service to Map Land Cover in Geomorphological Complex Areas beyond the Dynamic World: An Application in Aosta Valley (NW Italy). Applied Sciences. 2023; 13(1):390. https://doi.org/10.3390/app13010390

Chicago/Turabian Style

Orusa, Tommaso, Duke Cammareri, and Enrico Borgogno Mondino. 2023. "A Scalable Earth Observation Service to Map Land Cover in Geomorphological Complex Areas beyond the Dynamic World: An Application in Aosta Valley (NW Italy)" Applied Sciences 13, no. 1: 390. https://doi.org/10.3390/app13010390

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop