Article

Satellite–Derived Bathymetry in Shallow Waters: Evaluation of Gokturk-1 Satellite and a Novel Approach

1 Geomatics Engineering Program, Graduate School, Istanbul Technical University, Istanbul 34469, Turkey
2 Geomatics Engineering Department, Civil Engineering Faculty, Istanbul Technical University, Istanbul 34469, Turkey
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5220; https://doi.org/10.3390/rs15215220
Submission received: 13 September 2023 / Revised: 16 October 2023 / Accepted: 31 October 2023 / Published: 3 November 2023

Abstract

For more than 50 years, marine and remote sensing researchers have investigated methods of bathymetry extraction by means of active (altimetry) and passive (optics) satellite sensors. These methods are generally referred to as satellite-derived bathymetry (SDB). With advances in sensor capabilities and computational power and recognition by the International Hydrographic Organization (IHO), SDB has been more popular than ever in the last 10 years. Despite a significant increase in the number of studies on the topic, the performance of the method is still variable, mainly due to environmental factors, the quality of the sensor deliverables, the use of different algorithms, and the variability in parameterization. In this study, we investigated the capability of the Gokturk-1 satellite in SDB for the very first time at Horseshoe Island, Antarctica, using random forest and extreme gradient boosting machine learning-based regressors. All the images are atmospherically corrected by ATCOR, and only the top-performing algorithms are utilized. The bathymetry predictions made by employing Gokturk-1 imagery showed admissible results in accordance with the IHO standards. Furthermore, at the second stage, pixel brightness values calculated from Sentinel-2 MSI imagery via the tasseled cap transformation are introduced to the algorithms, which are applied to Sentinel-2, Landsat-8, and Gokturk-1 multispectral images. The results indicated that the bathymetric inversion performance of the Gokturk-1 satellite is in line with that of the Landsat-8 and Sentinel-2 satellites, while offering better spatial resolution. More importantly, the addition of the brightness value parameter significantly improves the root mean square error, mean absolute error, and coefficient of determination metrics and, consequently, the performance of the bathymetry extraction.

1. Introduction

Modern coastal zone planning and engineering projects require a crucial preliminary step: a high-resolution bathymetric survey. However, in the shallow nearshore areas, obtaining bathymetric information can be challenging due to the high costs, risks, reduced effectiveness due to lesser coverage, and logistical constraints associated with traditional survey techniques. Often, outdated or coarse soundings from published charts are relied upon. Ship-based acoustic measurements, utilizing single- or multi-beam echosounder systems, necessitate specific auxiliary equipment and trained personnel on board and are limited by the vessel’s draft and safe navigational areas. As an alternative to these methods, satellite-derived bathymetry (SDB) has emerged as a rapid, cost-effective, and high-quality solution for bathymetric surveys worldwide. Since the public availability of the first optical satellite images in the 1970s, SDB technology has evolved significantly and is now frequently employed to support hydrographers and other stakeholders in their work [1,2].
As part of the European Commission’s “4S” innovation project, an online questionnaire was conducted in early 2021 to assess the usage of shallow water survey techniques. Among the 213 stakeholders who participated, 28% identified the capability to map inaccessible locations as the primary reason for utilizing SDB technology. The survey also revealed other notable statistics: 22% mentioned the importance of accessing historic and recent information; 18% emphasized the need for better and more accurate data compared to existing sources; and 18% highlighted the advantage of reduced logistical efforts. Additionally, the survey indicated that SDB technology offers reduced dependency on weather conditions. These findings have significant implications for the application of SDB as it allows for independent mapping of inaccessible regions and facilitates the mapping of historical morphology without relying solely on local survey data (4S, 2021) [3].
Based on a chronological review of empirical studies on SDB, it can be observed that early investigations primarily utilized linear and logarithmic band-ratio-based algorithms [4,5,6]. However, in more recent times, there has been a shift towards the adoption of machine learning algorithms and deep learning-based approaches. These modern techniques have provided enhanced capabilities and improved accuracy in estimating bathymetry from satellite images. Support vector machine (SVM) [7,8], random forest (RF) [7,9,10,11], and extreme gradient boosting (XGBoost) [12,13] algorithms have proven to be favorable choices for SDB in the machine-learning domain. As for deep learning, the foundation was laid by Ceyhun and Yalcın [14], who initially introduced artificial neural networks (ANN) in the SDB procedure, and the number of studies adopting deep learning has been increasing ever since.
In the past decade, there has been a significant body of academic research dedicated to advancing the field of SDB. Researchers have explored various aspects of SDB, including methodological advancements, algorithm development, data fusion techniques, and validation approaches.
One notable study by Misra et al. [8] focused on the integration of high-resolution satellite imagery and machine learning algorithms for accurate bathymetric mapping. They proposed a hybrid approach that combined multispectral satellite imagery with a support vector regression algorithm, achieving promising results in shallow water depth estimation. Another study by Manessa et al. [9] investigated the potential of utilizing multispectral WorldView-2 satellite imagery for SDB. They developed an approach based on water column correction and a depth retrieval algorithm, demonstrating the feasibility of obtaining accurate bathymetric information from high-resolution satellite data.
The application of deep learning techniques in SDB has gained significant attention, as stated above. Ai et al. [15] explored the use of convolutional neural networks (CNNs) for bathymetric mapping using multispectral satellite imagery. Their study demonstrated the potential of CNNs in capturing complex spatial patterns and achieving improved accuracy in bathymetric predictions. Furthermore, the work of Mudiyanselage et al. [11] focused on the fusion of satellite imagery and airborne LiDAR data for SDB. They proposed a data fusion framework that integrated the strengths of both datasets to enhance the accuracy and resolution of bathymetric mapping in coastal areas.
The development of satellite-borne lidar, together with the datasets and tools attached to it, has also contributed to the advancement of SDB research. The study by Babbel et al. [16] presented a comprehensive analysis of the capabilities of ICESat-2 elevation retrievals from the ATLAS sensor in SDB procedures. Their analysis provided valuable insights for researchers and practitioners in selecting appropriate datasets and tools for SDB applications.
Efforts have been made to validate SDB results and assess their accuracy. A study by Xu et al. [17] focused on the validation of SDB using in situ measurements and high-resolution satellite imagery. They proposed a rigorous validation framework and demonstrated the reliability and accuracy of SDB models in various coastal environments, which can be utilized without the need of in situ data for training.
Additionally, the environmental parameters that affect the process have received growing attention. In this regard, Caballero and Stumpf [18,19] tried to eliminate the effects of turbidity and atmospheric interactions, while Caballero et al. [20] included chlorophyll concentration in their list of auxiliary parameters to monitor, which happens to be one of the few challenges faced during bathymetry extraction from multispectral images.
These academic studies conducted in the last ten years have significantly contributed to the field of SDB. They have advanced our understanding of the capabilities and limitations of SDB techniques, paving the way for improved applications in coastal zone management, marine navigation, and environmental studies. However, there is still a need to investigate the SDB inversion potential of new sensors, to introduce new derivative products aimed at boosting the performance of the methods, and to extend the coverage of analysis to rarely investigated and ecologically important coasts such as Antarctica. Very recently, Gulher and Alganci [2] presented a comprehensive analysis of the performance of various algorithm and atmospheric correction pairs in the approaches to Horseshoe Island using multispectral images of the Landsat-8 and Sentinel-2 constellations. According to their study, atmospheric corrections made by ATCOR, iCOR, and Acolite deliver reasonably close results, and the difference in performance is, for the most part, decided by the method used. Random forest and extreme gradient boosting (XGBoost) outshine the rest of the algorithms as top performers.
In this study, we examined the effectiveness of the multispectral images provided by the Gokturk-1 satellite in SDB for the first time and used the results from the above-mentioned study as a benchmark. Additionally, we newly introduced the use of pixel brightness values, acquired from Sentinel-2 MSI imagery by means of the tasseled cap transformation (TCT), into the bathymetry extraction algorithms and monitored the improvement via three metrics: RMSE, MAE, and R2.

2. Materials and Methods

2.1. Study Area and Data

Antarctica, renowned for its extreme cold and gusty winds, stands as a distinctive and delicate environment. It reigns as the coldest, driest, and windiest continent on our planet, with its landscape predominantly defined by vast expanses of ice and snow [21].
The expansive Antarctic continent is blanketed by a substantial ice sheet, averaging approximately 2100 m in thickness. This ice sheet serves as a critical component of the Earth’s climate system, harboring around 70% of the planet’s freshwater resources [22]. Despite the formidable conditions it presents, Antarctica supports a diverse array of flora and fauna, including penguins, seals, and various species of algae and bacteria. However, due to its isolation and exceedingly harsh environmental factors, the continent exhibits relatively limited biodiversity compared to other regions worldwide. Regrettably, Antarctica’s ecosystem faces an imminent threat from climate change, resulting in ice melting and the subsequent sea-level rise in recent years [23].
Considering the delicate and challenging environment the Antarctic region uniquely provides, along with the availability of the necessary data, the study area selected for our research encompasses the approaches to Horseshoe Island, home of the Turkish National Research Center. Level-2A (L2A) Gokturk-1 imagery, collected on 15 January 2018 at 08:38 (UTC + 3), is obtained through the Command of Reconnaissance Satellite Operations of Turkish Air Force. Descriptions concerning technical specifications of Gokturk-1 images [24] are given in Table 1. The Gokturk-1 satellite, developed by Türkiye, represents a significant milestone in the nation’s pursuit of advanced remote sensing capabilities. Launched on 5 December 2016, Gokturk-1 is Türkiye’s first high-resolution Earth observation satellite, designed to provide valuable imagery for various applications including environmental monitoring, disaster management, urban planning, and national security.
Along with Gokturk-1, Landsat-8 imagery (dated 19 February 2018) and Sentinel-2 imagery (dated 25 January 2019) were used for the analysis as a benchmark [2,24]. The Landsat-8 imagery was obtained in Level 1GT and the Sentinel-2 in Level 1C, both of which required further atmospheric correction.
The bathymetric data used in this study were acquired from the point cloud collected with the SONIC 2022-v multibeam echosounder (MBE) instrument, produced by the R2SONIC™ company based in Texas, USA, during the Turkish Antarctic Expedition-III to the region between 29 January and 6 March 2018. The data have a horizontal resolution finer than 1 m and are used for the training of the algorithms and the validation of the outcomes. For depths up to 400 m, the dataset has a maximum measurement error of less than 1 m horizontally and 1 cm vertically. Sea level measurements for tide correction were obtained from the Rothera Station and applied to the dataset before it was fed into the bathymetry extraction algorithms. The tide range in the area during data collection was 0.48 m to 1.97 m [25].
In this context, a total of 10,000 points were randomly selected, with 2500 distributed homogeneously over each 5-m interval of the 0–20 m depth range. Of these 10,000 points, 8000 were used for model training and 2000 were used for validation. During the analysis, the 0–5 m, 0–10 m, 0–15 m, and 0–20 m depth intervals were evaluated separately for a holistic evaluation. The study region and the bathymetric surface created using this dataset are presented in Figure 1.
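A minimal sketch of how such a depth-stratified sample and 80/20 train–validation split could be reproduced is given below; the input arrays (depths, features) and the use of scikit-learn's train_test_split are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Assumed inputs: 'depths' holds the tide-corrected MBE depths (m) and
# 'features' the co-located image-derived predictors for the same points.
rng = np.random.default_rng(45)
sampled_X, sampled_y = [], []

# 2500 randomly selected points per 5-m bin -> 10,000 points for 0-20 m
for d_min, d_max in [(0, 5), (5, 10), (10, 15), (15, 20)]:
    in_bin = np.flatnonzero((depths >= d_min) & (depths < d_max))
    picked = rng.choice(in_bin, size=2500, replace=False)
    sampled_X.append(features[picked])
    sampled_y.append(depths[picked])

X = np.concatenate(sampled_X)
y = np.concatenate(sampled_y)

# 8000 points for model training, 2000 for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=45)
```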

2.2. Methodology

This section presents the theoretical background of the processing stages and methods used for the SDB approach in this research. The process flowchart is shown in Figure 2.

2.2.1. Atmospheric Correction

Horseshoe Island possesses distinct attributes that make atmospheric correction a prominent consideration. Situated in an extreme latitude region, the area showcases deep and transparent oceanic waters, thawing ponds, pronounced contrasts resulting from luminous ice, and intricate surface compositions comprising snow layers atop sea ice, accompanied by the associated reflective influences from nearby surroundings.
The transfer of electromagnetic radiation through the atmosphere encounters significant distortions attributed to absorption by aerosols and gases, as well as scattering phenomena such as Mie and Rayleigh scattering. These effects hold particular importance over water regions. The radiance along the atmospheric path accounts for a substantial portion of the electromagnetic energy in oceans, amounting to 85%, while it rises to 94% in darker water bodies and falls to 60% in highly sediment-laden waters [26].
These challenges are further compounded in extreme latitudes and at large solar zenith angles, as the radiation pathway through the atmosphere becomes elongated [27]. Moreover, satellites with near-nadir viewing angles, such as Landsat and Sentinel 2, are susceptible to sun glint, which adds to the complexities. Another issue stemming from atmospheric transmission and radiation scattering is the adjacency effect, wherein the sensor records the combined radiation from the target pixel and the scattered radiation from adjacent surfaces, particularly when there is a stark contrast between the target pixel and its surroundings [28].
The atmospheric corrections of the multispectral images used in this research are performed using the ATCOR (atmospheric/topographic correction for satellite imagery) algorithm. ATCOR uses the MODTRAN 5 radiative transfer model, precomputed look-up tables (LUTs), and other atmospheric components derived from the image [29,30]. The MODTRAN 5-based LUTs encompass four distinct aerosol models, namely, rural, urban, marine, and desert. Furthermore, the water vapor (Wv) parameter is parameterized into six discrete values ranging from 0.75 cm to 4.11 cm, which vary according to the season. Ozone concentrations are sourced from a readily available database. The determination of the AOT (aerosol optical thickness) parameter involves the dense dark vegetation algorithm in combination with a user-defined visibility parameter. Users are free to select the desired aerosol model, while the Wv parameter is chosen through the atmospheric pre-corrected differential absorption algorithm. Within ATCOR, the adjacency effect is addressed by calculating the average reflectance based on the neighboring pixels.

2.2.2. Tasseled Cap Transformation

Optical bathymetry models utilize the impact of light’s exponential decay in water on the measured radiance values in remotely sensed images. The decrease in light intensity as it traverses water is governed by the exponential attenuation described by the Beer–Lambert law, which is directly employed by these models.
The Beer-Lambert law outlines the exponential attenuation of light in the water with Equation (1):
I = I0 × e^(−βD),
where “e” is the base of natural logarithms, “I” is the intensity of light at a certain depth, I0 is the intensity of light instantly after entering the water (i.e., light reflected from the water surface is discarded), “β” is the attenuation coefficient, and “D” is the distance that light travels through water. Due to this attenuation effect of the water, pixels in an image coinciding with the shallow water areas should appear brighter than those of the deep-water zone. This principle motivated us to feed the brightness value of the pixels of interest into the bathymetry extraction algorithms, together with the band reflectance values, in an attempt to increase the effectiveness of the algorithms. The brightness values are calculated via tasseled cap transformation (TCT), which is a valuable tool in remote sensing, enabling the extraction of key information from multispectral satellite imagery. The transformation ability of TCT to generate orthogonal components provides insights into the spectral characteristics of the land and/or sea surface, facilitating a wide range of remote sensing applications.
TCT is a widely used technique that provides a method by which to transform multispectral satellite imagery into a set of orthogonal components, known as brightness, greenness, and wetness. The TCT was initially introduced by Kauth and Thomas [31] as a means to extract meaningful information from satellite data for analyzing the growth of agricultural crops.
TCT operates by linearly combining the spectral bands of a multispectral image to create the three orthogonal components. Each component represents a specific aspect of the land surface’s reflectance characteristics. The brightness component captures the overall reflectance magnitude, representing the surface’s general level of brightness. It is typically linked to the amount of solar radiation received and reflected by a surface. It responds to changes in lighting conditions, rendering it useful for identifying areas with varying degrees of light and shadow. The greenness component is sensitive to vegetation vigor and density, highlighting variations in vegetation health and abundance. The wetness component detects variations in surface moisture content, such as water bodies, wetlands, or soil moisture levels.
This transformation has found numerous applications in remote sensing. One prominent use is in vegetation monitoring and analysis. By isolating the greenness component, researchers can assess vegetation vigor, biomass estimation, and detect changes in vegetation health over time [32,33]. The TCT has also been employed in land cover classification tasks. By considering the combination of brightness, greenness, and wetness components, researchers can discriminate between different land cover classes, such as forests, agricultural fields, urban areas, and water bodies [34,35]. The application of the TCT in environmental studies comprises the detection and monitoring of water bodies, the identification of wetlands, and the analysis of surface moisture conditions [36,37].
Along with the land sciences, marine sciences are also a good domain wherein the application of the TCT can extract valuable information about the coastal and marine environments. The areas in this context can be summarized as the detection of coastal features, water quality monitoring, coral reef health assessment, the identification of coastal pollution, and mangrove mapping. For instance, mangroves are examples of tropical vegetation specially adapted to grow in soft, moist soils, saltwater surroundings, and the cyclic inundation by tides. Due to their being periodically submerged by tides, a TCT-based approach is developed rather than using the well-established vegetation index, which performs well in land studies [38]. In our study, we utilize the TCT in the marine domain as well, but for a novel purpose: improving the SDB accuracy by assisting the reflectance values with pixel brightness values.
TCT for brightness can be calculated via Equation (2), where the numbers in front of the brackets represent the tasseled cap coefficients and the numbers within the brackets identify the wavelength interval (in nanometers) that is suitable for the specific term of the equation [31]:
Brightness = 0.3037 × [450:520] + 0.2793 × [520:600] + 0.4743 × [630:690] + 0.5585 × [760:900] + 0.5082 × [1150:1750] + 0.1863 × [2080:2350].
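As an illustration, the brightness layer could be computed from atmospherically corrected reflectance as in the sketch below; the mapping of the wavelength intervals in Equation (2) onto Sentinel-2 MSI bands B2, B3, B4, B8, B11, and B12 is an assumption made for this example, not a band assignment stated by the original coefficients.

```python
import numpy as np

# Brightness coefficients from Equation (2); the assignment of each wavelength
# interval to a Sentinel-2 band is an illustrative assumption.
TCT_BRIGHTNESS = {
    "B2": 0.3037,   # 450-520 nm (blue)
    "B3": 0.2793,   # 520-600 nm (green)
    "B4": 0.4743,   # 630-690 nm (red)
    "B8": 0.5585,   # 760-900 nm (NIR)
    "B11": 0.5082,  # ~1150-1750 nm (SWIR-1)
    "B12": 0.1863,  # 2080-2350 nm (SWIR-2)
}

def tct_brightness(bands):
    """Pixel brightness as the weighted sum of the reflectance bands.

    'bands' is a dict mapping band names to 2-D reflectance arrays."""
    return sum(coef * bands[name] for name, coef in TCT_BRIGHTNESS.items())
```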

2.2.3. Post Processing

Sun glint correction is the next step in the process; depending on the sea surface roughness, viewing angle, etc., at the moment of image capture, glint can dramatically affect the error margin, as pointed out in previous studies [39]. For this purpose, we adopted the method proposed by Hedley et al. [40]. This widely used method establishes, over a sample of optically deep pixels, a linear regression between each visible band and the near-infrared band; the glint contribution is then removed from every pixel in the visible bands by subtracting the regression slope multiplied by the difference between that pixel’s near-infrared reflectance and the minimum near-infrared reflectance of the sample.
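A compact sketch of this correction is given below, assuming the reflectance bands are available as NumPy arrays and that a sample of optically deep, glint-affected pixels (deep_mask) has been delineated beforehand; both assumptions are ours for illustration.

```python
import numpy as np

def hedley_deglint(vis, nir, deep_mask):
    """Sun glint removal following Hedley et al. (2005).

    vis, nir  : 2-D reflectance arrays of a visible band and the NIR band
    deep_mask : boolean mask of optically deep, glint-affected pixels used
                to fit the band-versus-NIR regression (assumed input)
    """
    # Slope of the linear regression of the visible band against NIR
    slope = np.polyfit(nir[deep_mask], vis[deep_mask], deg=1)[0]
    nir_min = nir[deep_mask].min()
    # Subtract the glint contribution estimated from the NIR signal
    return vis - slope * (nir - nir_min)
```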
In the subsequent stage of post-processing, the objective is to generate a land mask and execute the separation between land and sea. Through the analysis of near-infrared band BOA (bottom-of-atmosphere) values, a threshold value was determined for the purpose of distinguishing between land and sea. By employing this threshold value, a mask was generated to exclude terrestrial areas from the near-infrared band. Subsequently, this mask was applied to the visible bands, resulting in the acquisition of images exclusively encompassing water areas.
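The masking step itself reduces to a threshold test on the NIR BOA reflectance; the sketch below assumes the threshold has already been chosen from the scene and is therefore only illustrative.

```python
import numpy as np

def water_mask(nir_boa, threshold):
    """Boolean water mask: water absorbs strongly in the NIR, so water pixels
    fall below the (scene-specific, user-chosen) threshold."""
    return nir_boa < threshold

def mask_land(band, mask, fill=np.nan):
    """Keep only water pixels in a visible band; land pixels become 'fill'."""
    return np.where(mask, band, fill)
```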

2.2.4. Machine Learning Based SDB Algorithms

SDB has witnessed significant advancements with the integration of machine learning algorithms, such as random forest and XGBoost [2,10,41]. These methods have proven to be valuable tools for extracting bathymetric information from remotely sensed satellite imagery. Based on the findings of Gulher and Alganci [2], only random forest and XGBoost algorithms have been utilized in our study, with hyperparameter tuning remaining the same.
The RF algorithm is an ensemble learning method that combines multiple decision trees to create a robust prediction model [42]. It operates by generating a multitude of decision trees and aggregating their results to make predictions. A degree of randomness is introduced by selecting a random subset of features for each tree, mitigating the risk of overfitting and enhancing overall algorithm performance. Bootstrap aggregating (bagging) is employed as well, in order to create diverse datasets from the original data, facilitating the creation of distinct decision trees. Owing to these capabilities, RF has been utilized for a wide range of tasks, such as classification, regression, image recognition, anomaly detection, bioinformatics, and environmental sciences. In the context of SDB, random forest leverages a set of input variables derived from satellite imagery, including spectral bands, texture indices, and spatial features, to estimate bathymetric depths [9]. By capturing complex non-linear relationships between these input variables and bathymetric depths, random forest produces accurate and reliable predictions. Performing in similar areas of application as RF, XGBoost is another powerful machine learning algorithm employed in SDB [43]. It is based on gradient boosting, which sequentially adds weak learners to form a strong predictive model. Regularization and distributed computing are other components of XGBoost, which help to forestall overfitting, enhance model generalization, and enable the management of extensive datasets. It excels at handling large and complex datasets, making it well-suited for the analysis of high-resolution satellite imagery. By iteratively optimizing the model’s performance, XGBoost can effectively capture intricate relationships between the input variables and bathymetry.
Both random forest and XGBoost have demonstrated their effectiveness in SDB applications. They offer advantages such as the ability to handle multi-dimensional satellite imagery, accommodate diverse input features, and account for complex interactions between variables. These methods have shown promise in overcoming the challenges associated with traditional bathymetric survey techniques, offering rapid and cost-effective alternatives.
Furthermore, the integration of random forest and XGBoost with SDB has the potential to utilize ancillary data, such as water clarity, bottom type, and environmental parameters, to enhance the accuracy and reliability of the bathymetric estimates. The incorporation of such information can enable the models to account for additional factors that influence the water depth estimation process, which is essentially the second deliverable of this research.
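A minimal sketch of how the two regressors could be set up on a visible band-ratio predictor, with the TCT brightness stacked as an auxiliary feature, is given below; the hyperparameters follow Table 2, while the feature construction and array names are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

def make_features(band_a, band_b, brightness=None):
    """Logarithmic band-ratio predictor (cf. [6]) with the optional TCT
    brightness appended as an auxiliary feature column (assumed layout)."""
    log_ratio = np.log(band_a.ravel()) / np.log(band_b.ravel())
    cols = [log_ratio]
    if brightness is not None:
        cols.append(brightness.ravel())
    return np.column_stack(cols)

rf = RandomForestRegressor(n_estimators=100, min_samples_leaf=1, random_state=45)
xgb = XGBRegressor(objective="reg:squarederror", n_estimators=100, learning_rate=0.3)

# blue_train, green_train, brightness_train, depth_train are assumed inputs
X_train = make_features(blue_train, green_train, brightness_train)
rf.fit(X_train, depth_train)
xgb.fit(X_train, depth_train)

X_val = make_features(blue_val, green_val, brightness_val)
depth_pred_rf = rf.predict(X_val)
```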

2.2.5. Hyperparameter Tuning

The hyperparameters of the two models constructed for bathymetry extraction were optimized using the “Grid Search” approach, and the results are shown in Table 2.
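The paper reports only the selected values (Table 2), not the search space; the grid below is therefore an illustrative assumption of how such a grid search could be configured for the RF model with scikit-learn.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Illustrative search space; only the selected values in Table 2 are reported.
param_grid = {
    "n_estimators": [50, 100, 200],
    "min_samples_leaf": [1, 2, 4],
    "max_features": [1.0, "sqrt"],
}
search = GridSearchCV(
    RandomForestRegressor(random_state=45),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=5,
)
search.fit(X_train, y_train)   # the 8000-point training set
best_rf = search.best_estimator_
```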

3. Results

In this section, the bathymetry inversion results from ML-based SDB approaches were assessed using a point-based test dataset while considering the two main objectives of the research: (i) testing the capability of the Gokturk-1 satellite in comparison with the Landsat-8 and Sentinel-2 images; (ii) measuring the possible performance-boosting of the brightness parameter added to SDB models.
Initially, the bathymetry inversion is carried out via RF and XGBoost algorithms coupled with the designated dataset and band-pairs in the visible spectrum (BG: blue and green; GR: green and red; BR: blue and red). The accuracy assessment is performed with regard to the verification metrics for the depth ranges of 0–5, 0–10, 0–15, and 0–20 m, and the results are presented in Table 3, Table 4 and Table 5. As another evaluation, the compliance of the Gokturk-1 SDB results according to IHO “Category of Zone of Confidence” (CATZOC) Levels (Table 6) is also presented in Table 7.
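The per-interval assessment can be reproduced with standard scikit-learn metrics; the helper below is a sketch that assumes the validation depths and the corresponding model predictions are available as 1-D arrays.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def evaluate_by_interval(y_true, y_pred, max_depths=(5, 10, 15, 20)):
    """RMSE, MAE, and R2 for each cumulative depth interval (0-5 ... 0-20 m),
    mirroring the layout of Tables 3-5."""
    rows = {}
    for d in max_depths:
        sel = y_true <= d
        rows[f"0-{d} m"] = (
            np.sqrt(mean_squared_error(y_true[sel], y_pred[sel])),  # RMSE
            mean_absolute_error(y_true[sel], y_pred[sel]),           # MAE
            r2_score(y_true[sel], y_pred[sel]),                      # R2
        )
    return rows
```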
These findings show that the RF algorithm offered marginally superior outcomes with reduced RMSE and MAE values in most of the cases (Table 3 and Table 4, highlighted in blue). Sentinel-2 performed better in the 0–5 and 0–10 m depth intervals, while Landsat-8 performed marginally better in the 0–15 m depth interval and significantly better in the entire 0–20 m depth interval. For the 0–5 m interval, Gokturk-1’s performance was close to that of the other two, but for the remaining broader intervals, the RMSE and MAE significantly rose. The weak R2 metrics of Gokturk-1, when compared to other sensors, account for this low accuracy (Table 5). Gokturk-1 can provide C- and D-level bathymetric products, with the exception of the shallowest 0–5 m interval, according to CATZOC classification.
In the second experiment, distinct from the first run, the brightness values obtained from Sentinel-2 MSI through TCT were added as an auxiliary dataset to the ratios of reflectance values acquired from the different satellite images during the execution of the bathymetric inversion algorithms. The updated results were tested through the accuracy metrics RMSE, MAE, and R2, and the values across different depth intervals are provided in Table 8, Table 9 and Table 10, respectively. These results pointed out an accuracy improvement across all sensors, the most noticeable rise being for Gokturk-1, mainly in association with a great improvement in R2. Similarly, RF was marginally better than XGBoost in terms of RMSE and MAE (Table 8 and Table 9, highlighted in blue). As can be seen from Figure 3, the RMSE and MAE improvement was up to 40% for Landsat-8 and 50% for Gokturk-1, especially in narrower depth intervals. These improvements were also noted with the CATZOC classification findings for Gokturk-1, notably for the depth intervals of 0–10 and 0–15 m (Table 11).
Lastly, the sea-bottom topography for the AOI is created by means of bathymetry extraction carried out with the multispectral image obtained from the Gokturk-1 satellite and RF algorithm for depths up to 20 m. The surfaces created from the ground-truth data [25], bathymetry extraction, and the difference between these two are presented in Figure 4.

4. Discussion

Despite being able to provide multispectral images with a higher spatial resolution (2 m vs. 30 m for Landsat-8 and 10 m for Sentinel-2), the accuracy of the bathymetry extraction process with Gokturk-1 imagery falls behind that of Landsat-8 and Sentinel-2 in every category. The low R2 values achieved, compared to its counterparts, indicate a poorer representation of the surveyed data by the predicted data, at least linearly. Considering the fact that all three images used in this study are close in time, fed into the same atmospheric correction method, and trained with the same dataset, the difference observed in accuracy can be explained by the difference in sensor design. Gokturk-1 was initially designed for reconnaissance missions rather than environmental observations. Thus, the 11-bit radiometric resolution of Gokturk-1, as opposed to the 12-bit radiometric resolution of both Landsat-8 and Sentinel-2, is suspected to limit the SDB accuracy that its high spatial resolution could otherwise offer. The fact that the results obtained via Gokturk-1 are not as accurate as those of Landsat-8 and Sentinel-2 does not necessarily mean that Gokturk-1 is incapable of SDB applications; it is still a reliable source for scientific and cartographic purposes, especially for the first 5-m depth zone. The compliance evaluation of the results against the IHO standards clearly points that out. The top “Category of Zone of Confidence” (CATZOC) level of A1 is achieved for this depth range. The CATZOC level has an inverse relationship with depth and decreases as the depth increases, starting immediately after the 5-m threshold is passed.
The introduction of pixel brightness values to the bathymetry extraction procedure markedly improves every category put to the test. As given in Equation (2), calculating pixel brightness values necessitates the use of several bands that fall in predefined ranges of the electromagnetic spectrum. This calculation could have been undertaken by means of Sentinel-2 and Landsat-8 imagery, since they both offer the rich band sets required for this task. However, Sentinel-2 was selected for this research due to the higher spatial resolution it offers. Once pixel brightness values are calculated, they are fed into the bathymetry extraction algorithms as an auxiliary parameter along with the band reflectance values. This dataset is utilized while carrying out the bathymetry extraction procedure with Gokturk-1 and Landsat-8 multispectral imagery as well. For this reason, the addition of pixel brightness values into the SDB procedure enables cross-platform cooperation and benefits from it successfully.
By carrying out the SDB procedure, this time with pixel brightness values involved, we achieved improvement in all metrics, all algorithms, and all platforms, except for Landsat-8 in the 0–20 m depth range. This specific sector combines the coarser spatial resolution with the maximum depth range tested in this research. The most intuitive interpretation of the lack of improvement in this part of the process is therefore that these disadvantages accumulate, leaving the additional parameter unable to contribute positively. Apart from this, the 0–5 m depth range for Sentinel-2 shows no-to-minimal improvement in the metrics observed. This specific section was already the most accurate one across all platforms and all ranges. Thus, the reason for this occurrence could be that the outputs achieved for this sector were already at their best possible level, so there was little to no room to make the results any better. The contribution to the Landsat-8 and Gokturk-1 imagery for the same depth range, on the other hand, is quite impressive, since it raises their accuracy to a level at which it is on par with that of the Sentinel-2 imagery.
The addition of the auxiliary parameter has made another remarkable contribution, worthy of attention: it has dethroned Sentinel-2 imagery as the top performer in terms of accuracy for the depth range of 0–5 m. When the RF algorithm is fed with the auxiliary parameter, together with the logarithmic ratios of blue and red band reflectance values, the results obtained via Gokturk-1 satellite imagery are slightly better than those of Sentinel-2 and the best ever achieved for the region when RMSE and MAE metrics are considered.
The R2 values for Gokturk-1 imagery across all depth ranges are the clear winners and the top beneficiaries of this updated procedure wherein pixel brightness values are introduced. The effect is tremendous, with percentage increases of more than 100%. These increases are clear indicators of a better representation of the ground truth data in the predicted depth values.
In accordance with the improved accuracy and the better correlation achieved, the IHO-compliance of the results obtained through Gokturk-1 imagery is increased significantly. With this new setup, the CATZOC level of the 0–10 m depth range has improved to A2 from C, and the level of the 0–15 m depth range has improved to C from D, making the Gokturk-1 imagery a much more reliable source for nautical cartography.

5. Conclusions

Coastal regions around the world are of great importance for the well-being of our precious blue marble. This part of our seas and oceans is home to many sea creatures, yet it is more prone to human activity and contamination; thus, it should be monitored constantly. Since bathymetry is the foundation for almost all marine-related research, being able to obtain these data when and where necessary is an undeniable necessity. Apart from this scientific perspective, shallow coastal water zones pose the most danger to the safety of navigation. Since conducting bathymetric surveys in shallow coastal water zones via traditional means is neither practical nor safe, SDB is one of the few promising alternatives. Although research on SDB goes back more than 50 years, its efficiency is strongly dependent on the sensor, atmospheric correction, extraction algorithm, hyperparameter tuning, environmental conditions, etc.; thus, further investigation is still needed. Owing to its recognition as an official data source by the IHO [1], nautical chart producers also represent a growing part of the client portfolio requiring these data.
The study carried out by Gulher and Alganci [2] provided a comprehensive analysis of the performance of the prospective method and atmospheric correction pairs in the approaches to Horseshoe Island using the multispectral images of Landsat-8 and Sentinel-2 constellations. As the main objective of this research, we relied on their findings and used them as a benchmark in determining the performance of multispectral images of Gokturk-1 in SDB. All the images are ATCOR-corrected atmospherically, and only the previously identified top-performer algorithms—namely, RF and XGBoost—are utilized. As the second objective, we introduced pixel brightness values obtained from the Sentinel-2 imagery via TCT as an auxiliary dataset into the bathymetry extraction algorithms and monitored the contribution. This auxiliary dataset is utilized with multispectral images coming from all three available data sources (Landsat-8, Sentinel-2, and Gokturk-1), thus establishing cross-platform cooperation and fusion of data. The verification process is undertaken by monitoring three different metrics (RMSE, MAE, and R2) in addition to the IHO standards established to determine the quality level of bathymetric data.
The bathymetry predictions made by employing Gokturk-1 imagery showed IHO-compliant, admissible results, especially for the 0–5 m depth range. The accuracy drops as the depth range widens, while still maintaining compliance with IHO standards, albeit at a lower level. Despite being a reliable data source for SDB applications, Gokturk-1 imagery falls behind Landsat-8 and Sentinel-2 in terms of its overall SDB performance. Still, the much higher spatial resolution that it offers is a significant contribution to the SDB process, avoiding the step effect and boundary discontinuities and providing smoother edge geometries.
In almost every tested category, the incorporation of pixel brightness values into the bathymetry extraction procedure has revolutionized the approach. The only exceptions were the 0–20 m depth range for Landsat-8 imagery and the 0–5 m depth range for Sentinel-2 imagery. Moreover, the remarkable addition of the auxiliary parameter deserves attention as it allowed Gokturk-1 imagery to outperform Sentinel-2 imagery in terms of accuracy for the depth range of 0–5 m, thus establishing itself as the new top performer with an RMSE value of 0.43 and an MAE value of 0.29.
Despite all the findings provided and the insights gained, we should bear in mind that this research represents the initial evaluation of Gokturk-1 imagery in SDB applications. Further investigation is necessary in order to unleash its full potential. Coupling it with lidar data for training purposes, trying different atmospheric correction methods, and adopting a multi-temporal approach are a few examples to name here.

Author Contributions

Conceptualization, E.G. and U.A.; methodology, E.G. and U.A.; validation, E.G. and U.A.; formal analysis, E.G.; writing—original draft preparation, E.G. and U.A.; visualization, E.G. and U.A.; supervision, U.A.; project administration, U.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK), 1001 program, Project No: 121Y366.

Data Availability Statement

The Landsat 8 and Sentinel 2 images are freely available through USGS Earth Explorer and Copernicus Open Access Hub, respectively. The datasets and codes generated during this study will be available from the corresponding author upon reasonable request after the completion of the project.

Acknowledgments

This study was carried out under the auspices of the Presidency of the Republic of Turkey, supported by the Ministry of Industry and Technology, and coordinated by the TUBITAK MAM Polar Research Institute. The authors would like to thank the Turkish Naval Forces Office of Navigation, Hydrography and Oceanography (ONHO), for providing the bathymetry data.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. International Hydrographic Organization-IHO. Standards for Hydrographic Surveys of the International Hydrographic Organization, S-44 Edition 6.1.0; International Hydrographic Organization-IHO: Monaco, France, 2022; Available online: https://iho.int/uploads/user/pubs/standards/s-44/S-44_Edition_6.1.0.pdf (accessed on 2 November 2023).
  2. Gülher, E.; Alganci, U. Satellite-Derived Bathymetry Mapping on Horseshoe Island, Antarctic Peninsula, with Open-Source Satellite Images: Evaluation of Atmospheric Correction Methods and Empirical Models. Remote Sens. 2023, 15, 2568. [Google Scholar] [CrossRef]
  3. Hartmann, K.; Knauer, K.; Heege, T.; Mueller, A.; Berglund, J.; Paz von Friesen, C.; Marques, V.; Moura, A.; Filippone, M.; Adhiwijna, D.; et al. 4S, Preliminary Report on EO Innovations. H2020-SPACE-2020 Grant Agreement No 101004221. 2021. Available online: https://cordis.europa.eu/project/id/101004221/results (accessed on 2 November 2023).
  4. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef] [PubMed]
  5. Lyzenga, D.R. Shallow-water bathymetry using combined lidar and passive multispectral scanner data. Int. J. Remote Sens. 1985, 6, 115–125. [Google Scholar] [CrossRef]
  6. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 2003, 48 Pt 2, 547–556. [Google Scholar] [CrossRef]
  7. Duan, Z.; Chu, S.; Cheng, L.; Ji, C.; Li, M.; Shen, W. Satellite-derived bathymetry using Landsat-8 and Sentinel-2A images: Assessment of atmospheric correction algorithms and depth derivation models in shallow waters. Opt. Express 2022, 30, 3238–3261. [Google Scholar] [CrossRef]
  8. Misra, A.; Vojinovic, Z.; Ramakrishnan, B.; Luijendijk, A.; Ranasinghe, R. Shallow Water Bathymetry Mapping Using Support Vector Machine (SVM) Technique and Multispectral Imagery. Int. J. Remote Sens. 2018, 39, 4431–4450. [Google Scholar] [CrossRef]
  9. Manessa, M.D.M.; Kanno, A.; Sekine, M.; Haidar, M.; Yamamoto, K.; Imai, T.; Higuchi, T. Satellite-Derived Bathymetry Using Random Forest Algorithm and Worldview-2 Imagery. Geoplanning 2016, 3, 117. [Google Scholar] [CrossRef]
  10. Sagawa, T.; Yamashita, Y.; Okumura, T.; Yamanokuchi, T. Satellite derived bathymetry using machine learning and multi-temporal satellite images. Remote Sens. 2019, 11, 1155. [Google Scholar] [CrossRef]
  11. Mudiyanselage, S.S.J.D.; Abd-Elrahman, A.; Wilkinson, B.; Lecours, V. Satellite-derived bathymetry using machine learning and optimal Sentinel-2 imagery in South-West Florida coastal waters. GIScience Remote Sens. 2022, 59, 1143–1158. [Google Scholar] [CrossRef]
  12. Gafoor, F.A.; Al-Shehhi, M.R.; Cho, C.-S.; Ghedira, H. Gradient Boosting and Linear Regression for Estimating Coastal Bathymetry Based on Sentinel-2 Images. Remote Sens. 2022, 14, 5037. [Google Scholar] [CrossRef]
  13. Susa, T. Satellite Derived Bathymetry with Sentinel-2 Imagery: Comparing Traditional Techniques with Advanced Methods and Machine Learning Ensemble Models. Mar. Geod. 2022, 45, 435–461. [Google Scholar] [CrossRef]
  14. Ceyhun, Ö.; Yalçın, A. Remote sensing of water depths in shallow waters via artificial neural networks. Estuar. Coast. Shelf Sci. 2010, 89, 89–96. [Google Scholar] [CrossRef]
  15. Ai, B.; Wen, Z.; Wang, Z.; Wang, R.; Su, D.; Li, C.; Yang, F. Convolutional Neural Network to Retrieve Water Depth in Marine Shallow Water Area from Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 2888–2898. [Google Scholar] [CrossRef]
  16. Babbel, B.; Parrish, C.; Magruder, L. ICESat-2 Elevation Retrievals in Support of Satellite-Derived Bathymetry for Global Science Applications. Geophys. Res. Lett. 2021, 48, e2020GL090629. [Google Scholar] [CrossRef]
  17. Xu, Y.; Cao, B.; Deng, R.; Cao, B.; Liu, H.; Li, J. Bathymetry over Broad Geographic Areas using Optical High-spatial-resolution Satellite Remote Sensing without in-situ Data. Int. J. Appl. Earth Observ. Geoinf. 2023, 109, 103308. [Google Scholar] [CrossRef]
  18. Caballero, I.; Stumpf, R.P. Atmospheric correction for satellite-derived bathymetry in the Caribbean waters: From a single image to multi-temporal approaches using Sentinel-2A/B. Opt. Express. 2020, 28, 11742–11766. [Google Scholar] [CrossRef]
  19. Caballero, I.; Stumpf, R.P. Confronting Turbidity, the Major Challenge for Satellite-derived Coastal Bathymetry. Sci. Total Environ. 2020, 28, 11742–11766. [Google Scholar] [CrossRef]
  20. Caballero, I.; Stumpf, R.P.; Meredith, A. Preliminary Assessment of Turbidity and Chlorophyll Impact on Bathymetry Derived from Sentinel-2A and Sentinel-3A satellites in South Florida. Remote Sens. 2019, 11, 645. [Google Scholar] [CrossRef]
  21. Antarctic Weather. Available online: https://www.antarctica.gov.au/about-antarctica/weather-and-climate/weather/ (accessed on 5 January 2023).
  22. Pan, X.L.; Li, B.F.; Watanabe, Y.W. Intense Ocean Freshening from Melting Glacier Around the Antarctica during Early Twenty-first Century. Sci. Rep. 2022, 12, 383. [Google Scholar] [CrossRef]
  23. Adusumilli, S.; Fricker, H.A.; Medley, B.; Padman, L.; Siegfried, M.R. Interannual Variations in Meltwater Input to the Southern Ocean from Antarctic Ice Shelves. Nat. Geosci. 2020, 13, 616–620. [Google Scholar] [CrossRef]
  24. Arasan, G.; Yılmaz, A.; Fırat, O.; Avşar, E.; Güner, H.; Ayğan, K.; Yüce, D. Accuracy Assessments of Göktürk-1 Satellite Imagery. Int. J. Eng. Geosci. 2020, 5, 160–168. [Google Scholar] [CrossRef]
  25. Tükenmez, E.; Gülher, E.; Kaya, Ö.; Polat, H.F. Bathymetric Analysis of Lystad Bay, Horseshoe Island by Using High Resolution Multibeam Echosounder Data. J. Nav. Sci. Eng. 2022, 18, 281–303. [Google Scholar]
  26. IOCCG. Atmospheric Correction for Remotely-Sensed Ocean-Colour Products. 2010. Available online: http://www.ioccg.org/reports/report10.pdf (accessed on 1 May 2023).
  27. IOCCG. Ocean Colour Remote Sensing in Polar Seas. In Reports of the International Ocean-Colour Coordinating Group, No. 16; Babin, M., Arrigo, K., Bélanger, S., Forget, M.-H., Eds.; International Ocean-Colour Coordinating Group: Dartmouth, NS, Canada, 2015; p. 130. [Google Scholar]
  28. Sterckx, S.; Knaeps, S.; Kratzer, S.; Ruddick, K. SIMilarity Environment Correction (SIMEC) applied to MERIS Data over Inland and Coastal Waters. Remote Sens. Environ. 2015, 157, 96–110. [Google Scholar] [CrossRef]
  29. Richter, R.; Schläpfer, D. Atmospheric/Topographic Correction for Satellite Imagery: ATCOR-2/3 User Guide; DLR IB 565-01/17; ResearchGate: Wessling, Germany, 2017. [Google Scholar]
  30. Berk, A.; Anderson, G.; Acharya, P.; Bernstein, L.; Muratov, L.; Lee, J.; Fox, M.; Adler-Golden, S.; Chetwynd, J.; Hoke, M. MODTRANTM5: 2006 update. In Proceedings of the SPIE 6233, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, Orlando, FL, USA, 17 April 2006; Volume 6233. [Google Scholar]
  31. Kauth, R.J.; Thomas, G.S. The Tasselled-Cap—A Graphic Description of the Spectral-Temporal Development of Agricultural Crops as Seen by Landsat. In Proceedings of the Symposium on Machine Processing of Remotely Sensed Data, West Lafayette, IN, USA, 29 June–1 July 1976; pp. 41–51. [Google Scholar]
  32. Liu, Q.; Liu, G.; Huang, C.; Liu, S.; Zhao, J. A Tasseled Cap Transformation for Landsat 8 OLI TOA Reflectance Images. In Proceedings of the IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 541–544. [Google Scholar] [CrossRef]
  33. Liu, Q.; Guo, Y.; Liu, G.; Zhao, J. Classification of Landsat 8 OLI Image using Support Vector Machine with Tasseled Cap Transformation. In Proceedings of the 10th International Conference on Natural Computation (ICNC), Xiamen, China, 19–21 August 2014; pp. 665–669. [Google Scholar] [CrossRef]
  34. Moon, H.; Choi, T.; Kim, G.; Park, N.; Park, H.; Choi, J. Land Cover Classification of RapidEye Satellite Images Using Tesseled Cap Transformation (TCT). Korean J. Remote Sens. 2017, 33, 79–88. [Google Scholar] [CrossRef]
  35. Feng, Q.; Gong, J.; Liu, J.; Li, Y. Monitoring Cropland Dynamics of the Yellow River Delta based on Multi-Temporal Landsat Imagery over 1986 to 2015. Sustainability 2015, 7, 14834–14858. [Google Scholar] [CrossRef]
  36. Felton, B.R.; O’Neil, G.L.; Robertson, M.M.; Fitch, G.M.; Goodall, J.L. Using Random Forest Classification and Nationally Available Geospatial Data to Screen for Wetlands over Large Geographic Regions. Water 2019, 11, 1158. [Google Scholar] [CrossRef]
  37. Chen, C.; Chen, H.; Liang, J.; Huang, W.; Xu, W.; Li, B.; Wang, J. Extraction of Water Body Information from Remote Sensing Imagery While Considering Greenness and Wetness Based on Tasseled Cap Transformation. Remote Sens. 2022, 14, 3001. [Google Scholar] [CrossRef]
  38. Zhang, X.; Tian, Q. A Mangrove Recognition Index for Remote Sensing of Mangrove Forest from Space. Res. Commun. Curr. Sci. 2013, 105, 1149–1155. [Google Scholar]
  39. Goodman, J.A.; Lee, Z.; Ustin, S. Influence of atmospheric and sea-surface corrections on retrieval of bottom depth and reflectance using a semi-analytical model: A case study in Kaneohe Bay, Hawaii. Appl. Opt. 2008, 47, F1–F11. [Google Scholar] [CrossRef]
  40. Hedley, J.D.; Harborne, A.R.; Mumby, P.J. Technical note: Simple and robust removal of sun glint for mapping shallow-water benthos. Int. J. Remote Sens. 2005, 26, 2107–2112. [Google Scholar] [CrossRef]
  41. Yunus, A.P.; Dou, J.; Song, X.; Avtar, R. Improved Bathymetric Mapping of Coastal and Lake Environments using Sentinel-2 and Landsat-8 Images. Sensors 2019, 19, 2788. [Google Scholar] [CrossRef] [PubMed]
  42. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  43. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
Figure 1. (a) The AOI for the bathymetry prediction. (b) The data obtained by the MBE on top of the Gokturk-1 image.
Figure 2. Flowchart of bathymetry extraction methodology.
Figure 3. Percentage improvements of (a) RMSE and (b) MAE metrics introduced by the pixel brightness values.
Figure 4. The bathymetric surface for the 0–20 m depth range produced from (a) MBE data, (b) RF algorithm, and (c) the difference map of (a,b).
Table 1. Technical specifications of Gokturk-1 [24].

Orbit Type: Sun synchronous
Orbit Altitude: 681 km
Inclination Angle: 98.11°
Period: 98 min 11 s
Spot Size: 15 km × 15 km
Swath Width: 15 km
Strip Length: 780 km/14.300 km
Spatial Resolution: 0.5 m PAN, 2 m RGB
Radiometric Resolution: 11-bit
Horizontal Accuracy: 10 m (without GCP), 2 m (with GCP)
Vertical Accuracy: 20 m (without GCP), 3 m (with GCP)
Spectral Bands: PAN, RGB, NIR
Number of Orbits per Day: 14–15 (14.7)
Revisit Time (±5° incidence): 11 days
Revisit Time (±30° incidence): 2–3 days
Expected Lifetime: 7 years 3 months
Table 2. Hyperparameter set for ML-based models.

RF:
  bootstrap: True
  ccp_alpha: 0.0
  criterion: squared_error
  max_features: 1.0
  min_samples_leaf: 1
  min_samples_split: 2
  n_estimators: 100
  oob_score: False
  random_state: 45
  warm_start: False

XGBoost:
  objective: "reg:squarederror"
  base_score: 0.5
  booster: "gbtree"
  tree_method: "exact"
  colsample_bynode: 1
  colsample_bytree: 1
  learning_rate: 0.3
  n_estimators: 100
  num_parallel_tree: 1
  predictor: "auto"
Table 3. RMSE metrics for the RF and XGBoost algorithms across different depth intervals and different band combinations.

Sensor               | Landsat-8                | Sentinel-2               | Gokturk-1
Algorithm/Depth (m)  | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20
RF-BG                | 0.78  1.10  1.58  1.92   | 0.48  0.93  2.13  3.61   | 0.82  2.44  4.15  5.71
RF-GR                | 0.78  1.10  1.57  1.97   | 0.47  0.92  2.10  3.53   | 0.84  2.57  3.96  5.83
RF-BR                | 0.78  1.12  1.56  1.94   | 0.48  0.94  2.17  3.48   | 0.80  2.35  4.07  5.90
XGBoost-BG           | 0.78  1.10  1.71  2.28   | 0.49  1.25  2.55  4.09   | 0.93  2.24  3.72  5.05
XGBoost-GR           | 0.78  1.10  1.67  2.39   | 0.48  1.23  2.52  3.97   | 0.91  2.32  3.63  5.10
XGBoost-BR           | 0.78  1.12  1.70  2.39   | 0.48  1.32  2.58  3.99   | 0.92  2.16  3.62  5.16
Table 4. MAE metrics for the RF and XGBoost algorithms across different depth intervals and different band combinations.

Sensor               | Landsat-8                | Sentinel-2               | Gokturk-1
Algorithm/Depth (m)  | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20
RF-BG                | 0.58  0.82  1.13  1.35   | 0.34  0.58  1.27  2.37   | 0.55  1.85  3.26  4.53
RF-GR                | 0.58  0.81  1.11  1.38   | 0.34  0.58  1.26  2.35   | 0.57  1.97  3.14  4.66
RF-BR                | 0.58  0.81  1.13  1.37   | 0.34  0.60  1.26  2.36   | 0.53  1.79  3.19  4.69
XGBoost-BG           | 0.58  0.84  1.26  1.74   | 0.36  0.97  2.07  3.35   | 0.73  1.87  3.07  4.22
XGBoost-GR           | 0.58  0.82  1.26  1.83   | 0.35  0.93  1.98  3.24   | 0.73  1.94  3.02  4.26
XGBoost-BR           | 0.58  0.82  1.28  1.84   | 0.37  1.03  2.02  3.39   | 0.72  1.77  3.01  4.34
Table 5. R2 metrics for the RF and XGBoost algorithms across different depth intervals and different band combinations.

Sensor               | Landsat-8                | Sentinel-2               | Gokturk-1
Algorithm/Depth (m)  | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20
RF-BG                | 0.76  0.80  0.84  0.87   | 0.80  0.85  0.71  0.55   | 0.39  0.14  0.08  0.07
RF-GR                | 0.76  0.80  0.84  0.86   | 0.80  0.85  0.72  0.57   | 0.36  0.09  0.11  0.06
RF-BR                | 0.76  0.79  0.84  0.87   | 0.80  0.85  0.70  0.50   | 0.42  0.19  0.10  0.05
XGBoost-BG           | 0.76  0.80  0.80  0.80   | 0.79  0.74  0.58  0.58   | 0.22  0.14  0.08  0.11
XGBoost-GR           | 0.76  0.80  0.82  0.79   | 0.80  0.75  0.57  0.45   | 0.25  0.08  0.12  0.09
XGBoost-BR           | 0.76  0.79  0.81  0.80   | 0.79  0.70  0.57  0.45   | 0.24  0.20  0.13  0.07
Table 6. Accuracy requirements according to IHO “Category of Zone of Confidence” (CATZOC) Levels.

CATZOC Level | Depth (m) | Horizontal Accuracy (±m) | Vertical Accuracy (±m)
A1           | 5         | 5 + 5% depth             | 0.55
             | 10        |                          | 0.60
             | 15        |                          | 0.65
             | 20        |                          | 0.70
A2/B         | 5         | 20/50                    | 1.10
             | 10        |                          | 1.20
             | 15        |                          | 1.30
             | 20        |                          | 1.40
C            | 5         | 500                      | 2.30
             | 10        |                          | 2.50
             | 15        |                          | 2.80
             | 20        |                          | 3.00
D            | –         | Worse than C             | Worse than C
Table 7. CATZOC evaluation of the bathymetric predictions based on Gokturk-1 and MAE metrics.

Algorithm/Depth (m) | 0–5 | 0–10 | 0–15 | 0–20
RF-BG               | A1  | C    | D    | D
RF-GR               | A2  | C    | D    | D
RF-BR               | A1  | C    | D    | D
XGBoost-BG          | A2  | C    | D    | D
XGBoost-GR          | A2  | C    | D    | D
XGBoost-BR          | A2  | C    | D    | D
Table 8. RMSE metrics for the RF and XGBoost algorithms across different depth intervals and different band combinations with the addition of the pixel brightness values.

Sensor               | Landsat-8                | Sentinel-2               | Gokturk-1
Algorithm/Depth (m)  | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20
RF-BG                | 0.46  0.81  1.33  1.92   | 0.47  0.87  1.81  3.03   | 0.47  1.40  2.77  4.23
RF-GR                | 0.46  0.81  1.46  1.97   | 0.47  0.82  1.80  3.03   | 0.46  1.56  2.86  4.43
RF-BR                | 0.46  0.80  1.43  1.94   | 0.47  0.85  1.80  3.04   | 0.43  1.26  2.57  4.15
XGBoost-BG           | 0.46  0.87  1.55  2.28   | 0.47  1.06  2.19  3.63   | 0.49  1.37  2.66  4.05
XGBoost-GR           | 0.46  0.87  1.62  2.39   | 0.47  1.02  2.14  3.47   | 0.50  1.39  2.71  4.12
XGBoost-BR           | 0.46  0.87  1.62  2.39   | 0.47  1.03  2.23  3.65   | 0.49  1.28  2.53  4.00
Table 9. MAE metrics for the RF and XGBoost algorithms across different depth intervals and different band combinations with the addition of the pixel brightness values.

Sensor               | Landsat-8                | Sentinel-2               | Gokturk-1
Algorithm/Depth (m)  | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20
RF-BG                | 0.33  0.53  0.86  1.35   | 0.34  0.55  1.12  2.02   | 0.30  0.98  2.13  3.38
RF-GR                | 0.33  0.54  0.87  1.38   | 0.34  0.54  1.08  1.98   | 0.30  1.00  2.24  3.39
RF-BR                | 0.33  0.54  0.91  1.37   | 0.34  0.56  1.13  2.02   | 0.29  0.86  1.91  3.15
XGBoost-BG           | 0.33  0.61  1.10  1.74   | 0.34  0.77  1.69  2.87   | 0.35  1.04  2.12  3.31
XGBoost-GR           | 0.33  0.62  1.14  1.83   | 0.34  0.74  1.59  2.64   | 0.35  1.05  2.09  3.22
XGBoost-BR           | 0.33  0.62  1.14  1.84   | 0.34  0.74  1.66  2.90   | 0.35  1.00  2.00  3.25
Table 10. R2 metrics for the RF and XGBoost algorithms across different depth intervals and different band combinations with the addition of the pixel brightness values.

Sensor               | Landsat-8                | Sentinel-2               | Gokturk-1
Algorithm/Depth (m)  | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20   | 0–5   0–10  0–15  0–20
RF-BG                | 0.81  0.89  0.89  0.87   | 0.80  0.92  0.79  0.68   | 0.79  0.67  0.47  0.38
RF-GR                | 0.81  0.89  0.86  0.86   | 0.80  0.90  0.79  0.68   | 0.81  0.59  0.53  0.35
RF-BR                | 0.81  0.89  0.87  0.87   | 0.80  0.90  0.79  0.68   | 0.81  0.74  0.56  0.41
XGBoost-BG           | 0.81  0.87  0.84  0.80   | 0.80  0.81  0.69  0.58   | 0.79  0.70  0.54  0.42
XGBoost-GR           | 0.81  0.87  0.83  0.79   | 0.80  0.83  0.70  0.58   | 0.79  0.68  0.52  0.44
XGBoost-BR           | 0.81  0.87  0.83  0.80   | 0.80  0.82  0.68  0.54   | 0.79  0.72  0.56  0.42
Table 11. Re-evaluation of CATZOC for the bathymetric predictions based on Gokturk-1 and MAE metrics with the addition of the pixel brightness values.

Algorithm/Depth (m) | 0–5 | 0–10 | 0–15 | 0–20
RF-BG               | A1  | A2   | C    | D
RF-GR               | A1  | A2   | C    | D
RF-BR               | A1  | A2   | C    | D
XGBoost-BG          | A1  | A2   | C    | D
XGBoost-GR          | A1  | A2   | C    | D
XGBoost-BR          | A1  | A2   | C    | D
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
