Special Issue "Advancement in Undersea Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Ocean Remote Sensing".

Deadline for manuscript submissions: 31 October 2023 | Viewed by 9844

Special Issue Editors

Dr. Bing Ouyang
Harbor Branch Oceanographic Institute at Florida Atlantic University, Fort Pierce, FL, United States
Interests: underwater imaging applications; computer vision in underwater laser imaging applications; real-time environmental monitoring and events detection; application of electro-optic imaging numerical model and deconvolution technique in image enhancement and pulse resolution improvements
Dr. Fraser Dalgleish
L3Harris Technologies, Space & Airborne Systems, NASA Boulevard, Melbourne, FL 32919, USA
Interests: underwater imaging applications; computer vision in underwater laser imaging applications; real-time environmental monitoring and events detection; application of electro-optic imaging numerical model and deconvolution technique in image enhancement and pulse resolution improvements

Special Issue Information

Gaining a better understanding of the marine environment has been a primary aim of humanity dating back to ancient times. However, it is only over the last several decades, enabled by the ongoing microelectronics and computing revolution, that significant progress has been made in developing the platforms, sensors, and related technologies needed to overcome the opaque barrier between humans and the underwater world. Indeed, our desire to explore the ocean has recently spawned a plethora of advanced undersea remote sensing techniques and technologies that continue to grow rapidly, and this Special Issue focuses on compiling a balanced collection of papers that detail the most recent advancements in this area.

Submissions are invited for original research articles, review articles, and case studies that present new contributions to the advancement of underwater remote sensing. Theoretical and experimental contributions, original and review studies, and both industrial and university research are welcome.

The main topics of interest include, but are not limited to, the following:

  • Underwater robotics and platforms;
  • Underwater sonar technology;
  • Underwater optical and acoustical communications;
  • Underwater lidar sensors and imagers;
  • Underwater signal processing and image enhancements;
  • Underwater turbulence sensing;
  • Marine species detection and identification;
  • Aquaculture monitoring systems;
  • Machine learning for undersea remote sensing.

Dr. Bing Ouyang
Dr. Fraser Dalgleish 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Underwater robotics
  • Undersea remote sensing
  • Underwater lidar
  • Machine learning
  • Aquaculture
  • Marine species detection

Published Papers (9 papers)


Research

Article
DBFNet: A Dual-Branch Fusion Network for Underwater Image Enhancement
Remote Sens. 2023, 15(5), 1195; https://doi.org/10.3390/rs15051195 - 21 Feb 2023
Cited by 1 | Viewed by 825
Abstract
Due to the absorption and scattering of light propagating through water, underwater images inevitably suffer from severe degradation, such as color casts and loss of detail. Many existing deep learning-based methods have demonstrated superior performance for underwater image enhancement (UIE). However, accurate color correction and detail restoration still present considerable challenges for UIE. In this work, we develop a dual-branch fusion network, dubbed DBFNet, to eliminate the degradation of underwater images. We first design a triple-color channel separation learning branch (TCSLB), which balances the color distribution of underwater images by learning the independent features of the different channels of the RGB color space. Subsequently, we develop a wavelet domain learning branch (WDLB) and design a discrete wavelet transform-based attention residual dense module to fully exploit the wavelet domain information of the image to restore clear details. Finally, a dual attention-based selective fusion module (DASFM) is designed for the adaptive fusion of the latent features of the two branches, integrating both pleasing colors and diverse details. Extensive quantitative and qualitative evaluations on synthetic and real-world underwater datasets demonstrate that the proposed DBFNet significantly improves visual quality and outperforms the compared methods. Furthermore, ablation experiments demonstrate the effectiveness of each component of the DBFNet.
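As a rough illustration of the dual-branch idea described in this abstract, the PyTorch sketch below processes the three RGB channels independently in one branch, extracts features from a simple low-frequency band in the other, and fuses the two with an attention gate. The module sizes, the 2x2 averaging used as a stand-in for the Haar low-frequency band, and the fusion gate are assumptions for illustration, not the authors' TCSLB/WDLB/DASFM implementation.

```python
# Minimal dual-branch fusion sketch (assumed architecture, not the paper's DBFNet).
import torch
import torch.nn as nn

def haar_ll(x):
    """2x2 average of each block: a stand-in for the Haar low-frequency (LL) band."""
    return (x[:, :, 0::2, 0::2] + x[:, :, 0::2, 1::2]
            + x[:, :, 1::2, 0::2] + x[:, :, 1::2, 1::2]) / 4.0

class DualBranchSketch(nn.Module):
    def __init__(self, feat=16):
        super().__init__()
        # Color branch: each RGB channel is processed independently (spirit of the TCSLB).
        self.per_channel = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU()) for _ in range(3)]
        )
        # Wavelet-style branch: features from the low-frequency band (spirit of the WDLB).
        self.wavelet = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )
        # Attention-gated fusion of the two branches (spirit of the DASFM).
        self.gate = nn.Sequential(nn.Conv2d(4 * feat, 1, 1), nn.Sigmoid())
        self.out = nn.Conv2d(4 * feat, 3, 3, padding=1)

    def forward(self, x):
        color = torch.cat([m(x[:, i:i + 1]) for i, m in enumerate(self.per_channel)], dim=1)
        wave = self.wavelet(haar_ll(x))
        feats = torch.cat([color, wave], dim=1)
        return torch.sigmoid(self.out(self.gate(feats) * feats))

enhanced = DualBranchSketch()(torch.rand(1, 3, 64, 64))  # output: (1, 3, 64, 64)
```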

Article
A Texture Feature Removal Network for Sonar Image Classification and Detection
Remote Sens. 2023, 15(3), 616; https://doi.org/10.3390/rs15030616 - 20 Jan 2023
Cited by 2 | Viewed by 872
Abstract
Deep neural networks (DNNs) have been applied to sonar image target recognition tasks, but it is very difficult to obtain enough sonar images that contain a target; as a result, directly training a DNN on a small amount of data causes overfitting and other problems. Transfer learning is the most effective way to address such scenarios. However, there is a large domain gap between optical images and sonar images, and common transfer learning methods may not handle it effectively. In this paper, we propose a transfer learning method for sonar image classification and object detection called the texture feature removal network. We regard the texture features of an image as domain-specific features and narrow the domain gap by discarding them, which makes knowledge transfer easier. Our method can be easily embedded into other transfer learning methods, making it applicable to different scenarios. Experimental results show that our method is effective in side-scan sonar image classification tasks and forward-looking sonar image detection tasks. For side-scan sonar image classification, the classification accuracy of our method is enhanced by 4.5% in a supervised learning experiment, and for forward-looking sonar detection, the average precision (AP) is also significantly improved.
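The sketch below is not the authors' network; it only illustrates, under the assumption that fine texture is the dominant domain-specific cue, how a texture-suppressed (low-pass) version of optical and sonar images could be produced before transfer learning. The Gaussian filter and its sigma are placeholders.

```python
# Crude texture-suppression illustration (assumed preprocessing, not the paper's method).
import numpy as np
from scipy.ndimage import gaussian_filter

def strip_texture(img: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Return a structure-only version of a grayscale image in [0, 1]."""
    structure = gaussian_filter(img, sigma=sigma)  # low-frequency, domain-shared content
    # The residual (img - structure) carries the speckle/texture treated as domain-specific.
    return structure

optical = np.random.rand(128, 128)  # stand-in for a source-domain (optical) image
sonar = np.random.rand(128, 128)    # stand-in for a target-domain (sonar) image
src, tgt = strip_texture(optical), strip_texture(sonar)  # texture-suppressed pair for transfer
```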

Article
Underwater Hyperspectral Imaging System with Liquid Lenses
Remote Sens. 2023, 15(3), 544; https://doi.org/10.3390/rs15030544 - 17 Jan 2023
Viewed by 1069
Abstract
The underwater hyperspectral imager enables the detection and identification of targets on the seafloor by collecting high-resolution spectral images. In real operation, the distance between the hyperspectral imager and the targets cannot be kept constant due to factors such as platform motion and fluctuating terrain, resulting in unfocused images that degrade identification. In this paper, we developed a novel integrated underwater hyperspectral imaging system for deep-sea surveys and proposed an autofocus strategy based on liquid lens focusing transfer. The calibration tests provided a clear focus for hyperspectral transects and a global spectral resolution of less than 7 nm over the 400–800 nm spectral range. The prototype was used to obtain spectral and image information of manganese nodules and four other rocks in a laboratory environment. The classification of the five kinds of minerals was successfully realized using a support vector machine. We tested the UHI prototype in the deep sea and observed a Psychropotidae specimen on the sediment in the in situ hyperspectral images. The results show that the prototype developed here can accurately and stably obtain hyperspectral data and has potential applications for in situ deep-sea exploration.
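The mineral classification step mentioned above can be illustrated with a minimal scikit-learn pipeline that trains a support vector machine on per-pixel reflectance spectra. The synthetic spectra, band count, and labels below are placeholders, not the authors' data.

```python
# Minimal SVM classification sketch for per-pixel spectra (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bands = 60                      # e.g. 400-800 nm sampled at roughly 7 nm resolution
X = rng.random((500, n_bands))    # placeholder spectra (rows = pixels)
y = rng.integers(0, 5, size=500)  # placeholder labels: 5 mineral classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```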

Article
UIR-Net: A Simple and Effective Baseline for Underwater Image Restoration and Enhancement
Remote Sens. 2023, 15(1), 39; https://doi.org/10.3390/rs15010039 - 22 Dec 2022
Cited by 1 | Viewed by 945
Abstract
Because of the unique physical and chemical properties of water, obtaining high-quality underwater images directly is not easy. Hence, restoration and enhancement are indispensable steps in underwater image processing and have become research hotspots. Nevertheless, existing image-processing methods generally have high complexity and are difficult to deploy on underwater platforms with limited computing resources. To tackle this issue, this paper proposes a simple and effective baseline named UIR-Net that can restore and enhance underwater images simultaneously. The network uses a channel residual prior, extracting the channel of the image to be recovered as a prior, combined with a gradient strategy that reduces parameters and training time to make the operation more lightweight. The method improves color performance while maintaining the style and spatial texture of the content. Through experiments on three datasets (MSRB, MSIRB and UIEBD-Snow), we confirm that UIR-Net can recover clear underwater images from originals degraded by large particle impurities and ocean light spots. Compared to other state-of-the-art methods, UIR-Net recovers underwater images at similar or higher quality with a significantly lower number of parameters, which is valuable in real-world applications.
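For illustration only, the snippet below computes one simple channel-difference map often used as an underwater prior (the blue-green maximum minus the red channel) and stacks it onto the network input; the paper's channel residual prior may be defined differently.

```python
# Assumed channel-difference prior, stacked as an extra input channel (illustrative only).
import numpy as np

def channel_residual_prior(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 RGB array in [0, 1]; returns an H x W prior map."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.clip(np.maximum(g, b) - r, 0.0, 1.0)

img = np.random.rand(64, 64, 3)                           # placeholder underwater image
prior = channel_residual_prior(img)
net_input = np.concatenate([img, prior[..., None]], -1)   # H x W x 4 network input
```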

Article
Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target
Remote Sens. 2022, 14(24), 6260; https://doi.org/10.3390/rs14246260 - 10 Dec 2022
Viewed by 949
Abstract
With the development of sonar technology, sonar images have been widely used to detect targets. However, sonar images pose many challenges for object detection. For example, detectable targets are sparser in sonar data than in optical images, real underwater scanning experiments are complicated, and different types of sonar equipment produce inconsistent image styles due to their differing characteristics, which makes the data difficult to use in sonar object detection and recognition algorithms. To solve these problems, we propose a novel sonar image object-detection method based on style learning and random noise of various shapes. Sonar-style target sample images are generated through style transfer, which augments the insufficient sonar target images. By introducing noise of various shapes, including points, lines, and rectangles, the problems of mud and sand obstruction and mutilated targets in real environments are addressed, and the limited poses of sonar image targets are enriched by fusing multiple poses of optical image targets. In addition, a feature enhancement method is proposed to address the loss of key features when style transfer is applied to optical images directly. The experimental results show that our method achieves better precision.
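A minimal sketch of the random-shape noise augmentation described above: random points, lines, and rectangles are stamped onto a training image to mimic mud/sand obstruction and mutilated targets. The shape counts, sizes, and gray levels are assumptions.

```python
# Random-shape noise augmentation sketch (assumed parameters).
import cv2
import numpy as np

rng = np.random.default_rng(0)

def add_shape_noise(img: np.ndarray, n_shapes: int = 5) -> np.ndarray:
    """img: H x W uint8 grayscale sonar-style image; returns a copy with shape noise."""
    out = img.copy()
    h, w = out.shape
    for _ in range(n_shapes):
        kind = int(rng.integers(0, 3))
        val = int(rng.integers(0, 256))                       # random gray level
        x0, y0 = int(rng.integers(0, w)), int(rng.integers(0, h))
        x1, y1 = int(rng.integers(0, w)), int(rng.integers(0, h))
        if kind == 0:                                         # point (small filled circle)
            cv2.circle(out, (x0, y0), int(rng.integers(1, 4)), val, -1)
        elif kind == 1:                                       # line
            cv2.line(out, (x0, y0), (x1, y1), val, 2)
        else:                                                 # filled rectangle
            cv2.rectangle(out, (x0, y0), (x1, y1), val, -1)
    return out

noisy = add_shape_noise(np.full((128, 128), 128, dtype=np.uint8))
```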

Article
DP-ViT: A Dual-Path Vision Transformer for Real-Time Sonar Target Detection
Remote Sens. 2022, 14(22), 5807; https://doi.org/10.3390/rs14225807 - 17 Nov 2022
Cited by 2 | Viewed by 944
Abstract
Sonar imagery is the main way for underwater vehicles to obtain environmental information. Target detection in sonar images can distinguish multi-class targets in real time and accurately locate them, providing perception information for the decision-making system of an underwater vehicle. However, sonar image target detection faces many challenges, such as the variety of sonar types, complex and severe noise interference in the images, and scarce datasets. This paper proposes a sonar image target detection method based on a Dual-Path Vision Transformer network (DP-ViT) to accurately detect targets in forward-looking sonar and side-scan sonar. DP-ViT increases the receptive field by adding multiple scales to the patch embedding, enhances the feature-extraction ability of the model by using a Dual-Path Transformer Block, introduces Conv-Attention to reduce the number of training parameters, and finally uses Generalized Focal Loss to address the imbalance between positive and negative samples. The experimental results show that the proposed method outperforms other mainstream methods on both forward-looking sonar and side-scan sonar datasets, and it maintains good performance when noise is added.
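As a hedged sketch of one ingredient mentioned above, the PyTorch module below builds a multi-scale patch embedding: three strided convolutions with different kernel sizes produce aligned token grids that are concatenated before the transformer stages. The kernel sizes, stride, and embedding dimension are assumptions, not the paper's configuration.

```python
# Multi-scale patch embedding sketch (assumed configuration, not DP-ViT's exact design).
import torch
import torch.nn as nn

class MultiScalePatchEmbed(nn.Module):
    def __init__(self, in_ch=1, dim=96, patch=8):
        super().__init__()
        # Three parallel embeddings with growing receptive fields but the same stride,
        # so their token grids align and can be concatenated channel-wise.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, dim // 3, kernel_size=k, stride=patch, padding=(k - patch) // 2)
            for k in (8, 16, 24)
        ])

    def forward(self, x):                         # x: (B, 1, H, W) sonar image
        feats = [b(x) for b in self.branches]     # each (B, dim//3, H/8, W/8)
        tokens = torch.cat(feats, dim=1)          # (B, dim, H/8, W/8)
        return tokens.flatten(2).transpose(1, 2)  # (B, N, dim) token sequence

tokens = MultiScalePatchEmbed()(torch.rand(2, 1, 128, 128))  # shape (2, 256, 96)
```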

Article
A Technique to Navigate Autonomous Underwater Vehicles Using a Virtual Coordinate Reference Network during Inspection of Industrial Subsea Structures
Remote Sens. 2022, 14(20), 5123; https://doi.org/10.3390/rs14205123 - 13 Oct 2022
Cited by 1 | Viewed by 911
Abstract
Industrial subsea infrastructure inspections using autonomous underwater vehicles (AUVs) require high accuracy of AUV navigation relative to the objects being examined. In addition to traditional navigation tools based on inertial navigation systems and acoustic navigation equipment, technologies based on video information processing are also being actively developed. Visual odometry-based techniques can provide higher navigation accuracy for local maneuvering at short distances from objects. However, for long-distance AUV movements, such techniques typically accumulate errors when calculating the AUV movement trajectory. In this regard, the present article considers a navigation technique that increases the accuracy of AUV movements in the coordinate space of the inspected object by using a virtual coordinate reference network. Another aspect of the proposed method is the minimization of computational costs while the AUV moves along the inspection trajectory, achieved by referencing the AUV coordinates to object positions pre-calculated with an object recognition algorithm. Thus, the use of a network of virtual points for referencing the AUV to subsea objects is aimed at maintaining the required accuracy of AUV positioning during long-distance movement along the inspection trajectory while minimizing computational costs.
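The toy simulation below illustrates only the underlying drift-correction idea: visual odometry accumulates error over a long transit, and when the vehicle recognizes an object tied to a virtual reference point, its position estimate is re-anchored to that point's known coordinates. The reference layout and noise levels are invented.

```python
# Toy drift-correction illustration (invented reference layout and noise, not the paper's method).
import numpy as np

rng = np.random.default_rng(1)
references = {"node_A": np.array([50.0, 0.0]), "node_B": np.array([100.0, 0.0])}

true_pos = np.zeros(2)
est_pos = np.zeros(2)
for step in range(100):
    motion = np.array([1.0, 0.0])                 # commanded 1 m step along x
    true_pos += motion
    est_pos += motion + rng.normal(0, 0.05, 2)    # odometry drift accumulates
    for name, ref in references.items():
        if np.linalg.norm(true_pos - ref) < 1.0:  # object recognized near a reference point
            est_pos = ref.copy()                  # re-anchor the estimate to known coordinates
print("final drift (m):", np.linalg.norm(est_pos - true_pos))
```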

Article
An Efficient Method for Detection and Quantitation of Underwater Gas Leakage Based on a 300-kHz Multibeam Sonar
Remote Sens. 2022, 14(17), 4301; https://doi.org/10.3390/rs14174301 - 1 Sep 2022
Viewed by 1106
Abstract
In recent years, multibeam sonar has become the most effective and sensitive tool for detecting and quantifying underwater gas leakage and its rise through the water column. Motivated by recent research, this paper presents an efficient method for the detection and quantitation of gas leakage based on a 300-kHz multibeam sonar. In the proposed detection method, based on multibeam sonar water column images, not only the backscattering strength of the gas bubbles but also the size and aspect ratio of the gas plume are used to isolate interfering objects. The paper also presents a volume-scattering strength optimization model to estimate the gas flux. The bubble size distribution, volume, and flux of gas leaks are determined by matching the theoretical and measured values of the volume-scattering strength of the gas bubbles. The efficiency and effectiveness of the proposed method were verified by a case study at an artificial gas leakage site in the northern South China Sea. The results show that the leaking gas flux is approximately between 29.39 L/min and 56.43 L/min for bubble radii ranging from 1 mm to 12 mm. The estimated results are in good agreement with the recorded data (32–67 L/min) for gas leaks generated by an air compressor. The experimental results demonstrate that the proposed method can achieve effective and accurate detection and quantitation of gas leakages.
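Once a bubble-size distribution has been estimated (the paper obtains it by matching modelled and measured volume-scattering strength), the volumetric flux follows from summing bubble volumes released per unit time, as in the toy calculation below; the radii and release rate are placeholders, not measured values.

```python
# Toy flux arithmetic from an assumed bubble-size distribution (placeholder numbers).
import numpy as np

rng = np.random.default_rng(0)
radii_mm = rng.uniform(1.0, 12.0, size=2000)     # bubbles assumed released in one minute
radii_m = radii_mm / 1000.0
volumes_m3 = (4.0 / 3.0) * np.pi * radii_m ** 3  # volume of each spherical bubble
flux_l_per_min = volumes_m3.sum() * 1000.0       # m^3 per minute converted to L/min
print(f"estimated flux: {flux_l_per_min:.1f} L/min")
```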

Article
Imbalanced Underwater Acoustic Target Recognition with Trigonometric Loss and Attention Mechanism Convolutional Network
Remote Sens. 2022, 14(16), 4103; https://doi.org/10.3390/rs14164103 - 21 Aug 2022
Cited by 2 | Viewed by 1036
Abstract
A balanced dataset is generally beneficial to underwater acoustic target recognition; however, class distributions encountered in real scenes are usually imbalanced. To address this, a weighted cross-entropy loss function based on a trigonometric function is proposed. The proposed loss function is then applied in a multi-scale residual convolutional neural network embedded with an attention mechanism (named the MR-CNN-A network) for the recognition task. First, multi-scale convolution kernels are used to obtain multi-scale features. Then, an attention mechanism is used to fuse these multi-scale feature maps. Furthermore, a cos(x)-weighted cross-entropy loss function is used to deal with the class imbalance in underwater acoustic data: it adjusts the loss ratio of each sample by adjusting the loss interval of every mini-batch based on the cos(x) term, so that the total loss for each class is balanced. Two imbalanced underwater acoustic datasets, ShipsEar and a self-collected autonomous underwater vehicle dataset, are used to evaluate the proposed network. The experimental results show that the proposed network outperforms a support vector machine and a simple convolutional neural network. Compared with three other loss functions, the proposed loss function achieves better stability and adaptability. These results strongly demonstrate the validity of the proposed loss function and network.
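As an illustration of a trigonometrically weighted cross entropy, the sketch below weights each class by cos((pi/2) * f_c), where f_c is the class frequency in the batch, so rare classes are up-weighted; the paper's cos-based weighting operates on mini-batch loss intervals and differs in detail.

```python
# Cosine-weighted cross entropy sketch (assumed weighting, not the paper's exact formulation).
import math
import torch
import torch.nn.functional as F

def cos_weighted_ce(logits: torch.Tensor, targets: torch.Tensor, num_classes: int) -> torch.Tensor:
    counts = torch.bincount(targets, minlength=num_classes).float()
    freq = counts / counts.sum().clamp(min=1)
    # Rare classes (small freq) get weights near 1; dominant classes are down-weighted.
    weights = torch.cos(0.5 * math.pi * freq) + 1e-3
    return F.cross_entropy(logits, targets, weight=weights)

logits = torch.randn(32, 4)               # batch of 32, 4 placeholder ship classes
targets = torch.randint(0, 4, (32,))
loss = cos_weighted_ce(logits, targets, num_classes=4)
```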
