Topic Editors

Dr. Jingchun Zhou
College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China
Dr. Wenqi Ren
School of Cyber Science and Technology, Sun Yat-Sen University, Guangzhou 510275, China
Dr. Qiuping Jiang
School of Information Science and Engineering, Ningbo University, Ningbo 315211, China
Dr. Yan-Tsung Peng
Department of Computer Science, National Chengchi University, Taipei 116011, Taiwan

Applications and Development of Underwater Robotics and Underwater Vision Technology

Abstract submission deadline: 30 November 2024
Manuscript submission deadline: 31 January 2025
Viewed by 17756

Topic Information

Dear Colleagues,

In today’s world, the ocean is one of the most important areas for human exploration and development. Underwater vision, as a cross-disciplinary field related to underwater environments, has a wide range of applications in marine resource development, marine biology research, underwater detection and control, and other fields.

In terms of marine resource development, underwater vision technology is a valuable tool for offshore oil exploration and deep-sea mineral resource development. For example, high-precision underwater vision systems mounted on underwater robots can support oil exploration and production in deep-sea environments, and the same robots can be used to survey and exploit seabed mineral deposits, opening deep-sea resources to practical use.

Regarding marine biology research, underwater vision technology supports the observation and study of marine organisms. High-definition cameras carried by underwater robots can capture images of marine life in situ, and they also aid the study of deep-sea organisms, helping scientists understand the distribution of biological communities and ecosystems in deep-sea environments.

As for underwater detection and control, underwater vision technology plays a vital role in underwater target detection, underwater 3D modeling, and related tasks. For example, underwater robots equipped with sonars and cameras can detect and identify targets in complex underwater environments, and they can build 3D models of underwater scenes, providing a visualization tool for surveying and studying those environments.

Therefore, underwater vision has broad application prospects in the marine field and is of great significance to its development. To advance research in this area, we are editing this Topic on underwater vision and invite experts and scholars to share both their research results and the latest developments in the field.

We welcome submissions of papers related to the following areas:
  • Underwater robot vision systems;
  • Underwater image enhancement and processing techniques;
  • Underwater object detection and recognition;
  • Underwater 3D reconstruction techniques;
  • Underwater optical imaging and laser scanning technologies;
  • Underwater physical environment modeling and simulation;
  • Underwater acoustic imaging and sonar technologies;
  • Underwater communication and networking technologies.

Dr. Jingchun Zhou
Dr. Wenqi Ren
Dr. Qiuping Jiang
Dr. Yan-Tsung Peng
Topic Editors

Keywords

  • computer vision
  • image processing
  • underwater vision
  • underwater image enhancement/restoration
  • underwater robot
  • underwater imaging

Participating Journals

Journal of Marine Science and Engineering (jmse): Impact Factor 2.9, CiteScore 3.7, launched 2013, first decision (median) 15.4 days, APC CHF 2600
Machines (machines): Impact Factor 2.6, CiteScore 2.1, launched 2013, first decision (median) 15.6 days, APC CHF 2400
Remote Sensing (remotesensing): Impact Factor 5.0, CiteScore 7.9, launched 2009, first decision (median) 23 days, APC CHF 2700
Robotics (robotics): Impact Factor 3.7, CiteScore 5.9, launched 2012, first decision (median) 17.3 days, APC CHF 1800
Sensors (sensors): Impact Factor 3.9, CiteScore 6.8, launched 2001, first decision (median) 17 days, APC CHF 2600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (11 papers)

19 pages, 15815 KiB  
Article
A Statistical Evaluation of the Connection between Underwater Optical and Acoustic Images
by Rebeca Chinicz and Roee Diamant
Remote Sens. 2024, 16(4), 689; https://doi.org/10.3390/rs16040689 - 15 Feb 2024
Viewed by 602
Abstract
The use of Synthetic Aperture Sonar (SAS) in autonomous underwater vehicle (AUV) surveys has found applications in archaeological searches, underwater mine detection and wildlife monitoring. However, natural objects are easily confused with the target object, which leads to high false positive rates. To improve detection, the combination of SAS and optical images has recently attracted attention. While SAS data provides a large-scale survey, optical information can help contextualize it. This combination creates the need to match multimodal, optical–acoustic image pairs. The two images are not aligned, and are taken from different angles of view and at different times. As a result, challenges such as the different resolution, scaling and posture of the two sensors need to be overcome. In this research, motivated by the information gain when using both modalities, we turn to statistical exploration for feature analysis to investigate the relationship between the two modalities. In particular, we propose an entropic method for recognizing matching multimodal images of the same object and investigate the probabilistic dependency between the images of the two modalities based on their conditional probabilities. The results on a real dataset of SAS and optical images of the same and different objects on the seafloor confirm our assumption that the conditional probability of SAS images given an optical image differs from their marginal probability, and show a favorable trade-off between detection and false alarm rates that exceeds current benchmarks. For reproducibility, we share our database. Full article
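The core statistical idea here is that, for matching pairs, the SAS intensities are not independent of the optical intensities. As a rough illustration of that idea only, and not the authors' entropic estimator, the sketch below computes a histogram-based mutual information between two co-registered patches; the patch sizes, bin count, and variable names are assumptions.

```python
import numpy as np

def mutual_information(optical: np.ndarray, sas: np.ndarray, bins: int = 32) -> float:
    """Histogram-based mutual information between two co-registered image patches.

    A clearly positive value suggests the SAS intensities depend statistically on the
    optical intensities, i.e. p(sas | optical) differs from the marginal p(sas).
    """
    joint, _, _ = np.histogram2d(optical.ravel(), sas.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution p(o, s)
    px = pxy.sum(axis=1, keepdims=True)          # marginal p(o)
    py = pxy.sum(axis=0, keepdims=True)          # marginal p(s)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy usage with synthetic patches (real data would be matched optical/SAS crops).
rng = np.random.default_rng(0)
optical_patch = rng.random((64, 64))
sas_patch = 0.7 * optical_patch + 0.3 * rng.random((64, 64))  # correlated by construction
print(mutual_information(optical_patch, sas_patch))
```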

15 pages, 4491 KiB  
Article
G-Net: An Efficient Convolutional Network for Underwater Object Detection
by Xiaoyang Zhao, Zhuo Wang, Zhongchao Deng and Hongde Qin
J. Mar. Sci. Eng. 2024, 12(1), 116; https://doi.org/10.3390/jmse12010116 - 07 Jan 2024
Viewed by 1098
Abstract
Visual perception technology is of great significance for underwater robots carrying out seabed investigation and mariculture activities. Due to the complex underwater environment, it is often necessary to enhance underwater images when detecting underwater targets with optical sensors. Most traditional methods enhance the image first and then perform target detection; however, this two-stage approach greatly increases processing time in practical applications. To solve this problem, we propose a feature-enhanced target detection network, Global-Net (G-Net), which combines underwater image enhancement with target detection. Unlike the traditional approach of reconstructing an enhanced image before detection, G-Net integrates image enhancement and target detection. In addition, our feature map learning module (FML) can effectively extract defogging features. Test results in a real underwater environment show that G-Net not only improves the detection accuracy of underwater targets by about 5%, but also achieves high detection efficiency, which ensures the reliability of underwater robots in seabed investigation and aquaculture activities. Full article

19 pages, 5080 KiB  
Article
Underwater Object Detection in Marine Ranching Based on Improved YOLOv8
by Rong Jia, Bin Lv, Jie Chen, Hailin Liu, Lin Cao and Min Liu
J. Mar. Sci. Eng. 2024, 12(1), 55; https://doi.org/10.3390/jmse12010055 - 25 Dec 2023
Viewed by 1594
Abstract
Marine ranching aquaculture is of great significance for scientific aquaculture and for statistically surveying the types and density of living marine resources. However, underwater environments are complex, and marine organisms often appear as small and overlapping targets, which seriously affects detector performance. To overcome these issues, we improved the YOLOv8 detector. The InceptionNeXt block was used in the backbone to enhance the feature extraction capability of the network. Subsequently, a separate and enhanced attention module (SEAM) was added to the neck to enhance the detection of overlapping targets. Moreover, the normalized Wasserstein distance (NWD) loss was proportionally added to the original CIoU loss to improve the detection of small targets. Data augmentation methods were used during training to enhance the robustness of the network. The experimental results showed that the improved YOLOv8 achieved an mAP of 84.5%, an improvement of approximately 6.2% over the original YOLOv8, with no significant increase in the number of parameters or computations. This detector can be applied on seafloor observation platforms in marine ranching to perform real-time detection of marine organisms. Full article
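For readers unfamiliar with the NWD term mentioned above, the sketch below shows the commonly used closed form of the normalized Wasserstein distance between two boxes modeled as 2D Gaussians, and a proportional mix with a CIoU loss value computed elsewhere. The normalizing constant and mixing ratio are illustrative assumptions, not the values used in this paper.

```python
import numpy as np

def nwd(box1, box2, c: float = 12.8) -> float:
    """Normalized Wasserstein distance between two (cx, cy, w, h) boxes.

    Each box is modeled as a 2D Gaussian N([cx, cy], diag(w^2/4, h^2/4)); the squared
    Wasserstein-2 distance between the Gaussians then has a closed form. The constant c
    is dataset-dependent (an assumed value here).
    """
    cx1, cy1, w1, h1 = box1
    cx2, cy2, w2, h2 = box2
    wass_sq = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2 + ((w1 - w2) ** 2 + (h1 - h2) ** 2) / 4.0
    return float(np.exp(-np.sqrt(wass_sq) / c))

def mixed_box_loss(box_pred, box_gt, ciou_loss: float, ratio: float = 0.5) -> float:
    """Proportional mix of a CIoU loss term and an NWD loss term (ratio is illustrative)."""
    return (1.0 - ratio) * ciou_loss + ratio * (1.0 - nwd(box_pred, box_gt))

# Small boxes that barely overlap: IoU-based losses react sharply, NWD degrades smoothly.
print(mixed_box_loss((10, 10, 4, 4), (12, 11, 4, 4), ciou_loss=0.8))
```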

14 pages, 5305 KiB  
Article
New Insights into Sea Turtle Propulsion and Their Cost of Transport Point to a Potential New Generation of High-Efficient Underwater Drones for Ocean Exploration
by Nick van der Geest, Lorenzo Garcia, Roy Nates and Fraser Borrett
J. Mar. Sci. Eng. 2023, 11(10), 1944; https://doi.org/10.3390/jmse11101944 - 09 Oct 2023
Viewed by 1535
Abstract
Sea turtles gracefully navigate their marine environments by flapping their pectoral flippers in an elegant routine to produce the hydrodynamic forces required for locomotion. The propulsion of sea turtles has been shown to occur for approximately 30% of the limb beat, with the remaining 70% employing a drag-reducing glide. However, it is unknown how the sea turtle manipulates the flow during the propulsive stage. Answering this research question is a complicated process, especially when conducting laboratory tests on endangered animals, and the animal may not even swim with its regular routine while in captivity. In this work, we take advantage of our robotic sea turtle, internally known as Cornelia, to offer the first insights into the flow features during the sea turtle’s propulsion cycle, consisting of the downstroke and the sweep stroke. Comparing the flow features to the animal’s swim speed, flipper angle of attack, power consumption, thrust and lift production, we hypothesise how each of the flow features influences the animal’s propulsive efforts and cost of transport (COT). Our findings show that the sea turtle can produce extremely low COT values that point to the effectiveness of the sea turtle propulsive technique. Based on our findings, we extract valuable data that can potentially lead to turtle-inspired elements for high-efficiency underwater drones for long-term underwater missions. Full article
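The cost of transport quoted above is conventionally defined as power normalized by weight and speed; a minimal sketch with placeholder numbers, not measurements from this study, is shown below.

```python
def cost_of_transport(power_w: float, mass_kg: float, speed_m_s: float, g: float = 9.81) -> float:
    """Dimensionless cost of transport: COT = P / (m * g * v).

    Lower values mean less energy spent to move a unit of weight over a unit of distance.
    The numbers in the call below are placeholders, not data from the robotic turtle.
    """
    return power_w / (mass_kg * g * speed_m_s)

print(cost_of_transport(power_w=15.0, mass_kg=30.0, speed_m_s=0.5))  # about 0.10
```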

17 pages, 3271 KiB  
Article
Underwater Image Translation via Multi-Scale Generative Adversarial Network
by Dongmei Yang, Tianzi Zhang, Boquan Li, Menghao Li, Weijing Chen, Xiaoqing Li and Xingmei Wang
J. Mar. Sci. Eng. 2023, 11(10), 1929; https://doi.org/10.3390/jmse11101929 - 06 Oct 2023
Viewed by 744
Abstract
Underwater image translation assists in generating rare images for marine applications. However, such translation tasks remain challenging due to a lack of data, insufficient feature extraction ability, and the loss of content details. To address these issues, we propose a novel multi-scale image translation model based on style-independent discriminators and attention modules (SID-AM-MSITM), which learns the mapping relationship between two sets of unpaired images for translation. We introduce Convolutional Block Attention Modules (CBAM) to the generators and discriminators of SID-AM-MSITM to improve its feature extraction ability. Moreover, we construct style-independent discriminators so that the discrimination results of SID-AM-MSITM are not affected by image style and content details are retained. Through ablation and comparative experiments, we demonstrate that the attention modules and style-independent discriminators are reasonably introduced and that SID-AM-MSITM performs better than multiple baseline methods. Full article
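The CBAM blocks referred to above follow a widely published pattern of channel attention followed by spatial attention. The PyTorch sketch below reproduces that generic pattern only; the reduction ratio, kernel size, and placement inside SID-AM-MSITM are assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        attn = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(attn))

feats = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```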

15 pages, 4817 KiB  
Article
Underwater Geomagnetic Localization Based on Adaptive Fission Particle-Matching Technology
by Huapeng Yu, Ziyuan Li, Wentie Yang, Tongsheng Shen, Dalei Liang and Qinyuan He
J. Mar. Sci. Eng. 2023, 11(9), 1739; https://doi.org/10.3390/jmse11091739 - 04 Sep 2023
Cited by 1 | Viewed by 847
Abstract
The geomagnetic field constitutes a massive fingerprint database, and its unique structure provides potential position correction information. In recent years, particle filter technology has received more attention in the context of robot navigation. However, particle degradation and impoverishment have constrained navigation systems’ performance. This paper transforms particle filtering into a particle-matching positioning problem and proposes a geomagnetic localization method based on an adaptive fission particle filter. This method employs particle-filtering technology to construct a geomagnetic matching positioning model. Through adaptive particle fission and sampling, the problem of particle degradation and impoverishment in traditional particle filtering is solved, resulting in improved geomagnetic matching positioning accuracy. Finally, the proposed method was tested in a sea trial, and the results show that the proposed method has a lower positioning error than traditional particle-filtering and intelligent particle-filtering algorithms. Under geomagnetic map conditions, an average positioning accuracy of about 546.44 m is achieved. Full article
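As background for the abstract above, the sketch below shows one step of a plain bootstrap particle filter driven by a geomagnetic map lookup: propagate with dead reckoning, reweight by map agreement, and resample when the effective sample size collapses. The adaptive fission mechanism that is the paper's contribution is not reproduced, and the map function and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def geomagnetic_map(pos: np.ndarray) -> np.ndarray:
    """Placeholder magnetic intensity map (nT) over position [x, y]; stands in for real map data."""
    return 48000.0 + 5.0 * np.sin(pos[..., 0] / 500.0) + 3.0 * np.cos(pos[..., 1] / 700.0)

def pf_step(particles, weights, dr_step, measured_nT, meas_std=2.0, pos_std=10.0):
    """One bootstrap particle-filter update: propagate, weight by map fit, resample if needed."""
    particles = particles + dr_step + rng.normal(0.0, pos_std, particles.shape)  # dead reckoning + noise
    likelihood = np.exp(-0.5 * ((geomagnetic_map(particles) - measured_nT) / meas_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):      # low effective sample size
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights, weights @ particles           # weighted mean = position estimate

particles = rng.uniform([0, 0], [2000, 2000], size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights, estimate = pf_step(particles, weights, np.array([15.0, 5.0]), measured_nT=48003.0)
print(estimate)
```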

19 pages, 6136 KiB  
Article
Coupling Dilated Encoder–Decoder Network for Multi-Channel Airborne LiDAR Bathymetry Full-Waveform Denoising
by Bin Hu, Yiqiang Zhao, Guoqing Zhou, Jiaji He, Changlong Liu, Qiang Liu, Mao Ye and Yao Li
Remote Sens. 2023, 15(13), 3293; https://doi.org/10.3390/rs15133293 - 27 Jun 2023
Cited by 1 | Viewed by 909
Abstract
Multi-channel airborne full-waveform LiDAR is widely used for high-precision underwater depth measurement. However, the signal quality of full-waveform data is unstable due to the influence of background light, dark current noise, and the complex transmission process. Therefore, we propose a nonlocal encoder block (NLEB) based on spatial dilated convolution to optimize the feature extraction of adjacent frames. On this basis, a coupled denoising encoder–decoder network is proposed that takes advantage of the echo correlation in deep-water and shallow-water channels. Firstly, full waveforms from different channels are stacked together to form a two-dimensional tensor and input into the proposed network. Then, NLEB is used to extract local and nonlocal features from the 2D tensor. After fusing the features of the two channels, the reconstructed denoised data can be obtained by upsampling with a fully connected layer and deconvolution layer. Based on the measured data set, we constructed a noise–noisier data set, on which several denoising algorithms were compared. The results show that the proposed method improves the stability of denoising by using the inter-channel and multi-frame data correlation. Full article
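To make the "stack the channels into a 2D tensor, encode with dilated convolutions, decode back to waveforms" pipeline concrete, the toy PyTorch sketch below runs a two-channel frame-by-sample tensor through a small dilated encoder and a deconvolution decoder. It is a shape-level illustration only; it does not implement the NLEB or the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DilatedDenoiser(nn.Module):
    """Toy coupled encoder-decoder: dilated 2D convolutions over stacked (frames x samples)
    waveforms from the deep- and shallow-water channels, then a deconvolution back out."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=2, dilation=2), nn.ReLU(inplace=True),  # wider receptive field
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),    # downsample
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 2, kernel_size=3, padding=1),                                      # back to 2 channels
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# 8 adjacent frames of 512-sample waveforms from the two channels, batch of 4.
waves = torch.randn(4, 2, 8, 512)
print(DilatedDenoiser()(waves).shape)  # torch.Size([4, 2, 8, 512])
```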

21 pages, 61283 KiB  
Article
Two-Branch Underwater Image Enhancement and Original Resolution Information Optimization Strategy in Ocean Observation
by Dehuan Zhang, Wei Cao, Jingchun Zhou, Yan-Tsung Peng, Weishi Zhang and Zifan Lin
J. Mar. Sci. Eng. 2023, 11(7), 1285; https://doi.org/10.3390/jmse11071285 - 25 Jun 2023
Viewed by 956
Abstract
In complex marine environments, underwater images often suffer from color distortion, blur, and poor visibility. Existing underwater image enhancement methods predominantly rely on the U-net structure, which assigns the same weight to different resolution information. However, this approach lacks the ability to extract sufficient detailed information, resulting in problems such as blurred details and color distortion. We propose a two-branch underwater image enhancement method with an optimized original resolution information strategy to address this limitation. Our method comprises a feature enhancement subnetwork (FEnet) and an original resolution subnetwork (ORSnet). FEnet extracts multi-resolution information and utilizes an adaptive feature selection module to enhance global features in different dimensions. The enhanced features are then fed into ORSnet as complementary features, which extract local enhancement features at the original image scale to achieve semantically consistent and visually superior enhancement effects. Experimental results on the UIEB dataset demonstrate that our method achieves the best performance compared to the state-of-the-art methods. Furthermore, through comprehensive application testing, we have validated the superiority of our proposed method in feature extraction and enhancement compared to other end-to-end underwater image enhancement methods. Full article

13 pages, 2904 KiB  
Article
An Onboard Point Cloud Semantic Segmentation System for Robotic Platforms
by Fei Wang, Yujie Yang, Jingchun Zhou and Weishi Zhang
Machines 2023, 11(5), 571; https://doi.org/10.3390/machines11050571 - 22 May 2023
Viewed by 1309
Abstract
Point clouds represent an important way for robots to perceive their environments, and can be acquired by mobile robots with LiDAR sensors or underwater robots with sonar sensors. Hence, real-time semantic segmentation of point clouds with onboard edge devices is essential for robots to apprehend their surroundings. In this paper, we propose an onboard point cloud semantic segmentation system for robotic platforms to overcome the conflict between attaining high accuracy of segmentation results and the limited available computational resources of onboard devices. Our system takes a raw sequence of point clouds as input, and outputs semantic segmentation results for each frame as well as a reconstructed semantic map of the environment. At the core of our system are a transformer-based hierarchical feature extraction module and a fusion module. The two modules are implemented with sparse tensor technologies to speed up inference. The predictions are accumulated according to Bayes' rule to generate a global semantic map. Experimental results on the SemanticKITTI dataset show that our system achieves +2.2% mIoU and an 18× speed improvement compared with SOTA methods. Our system is able to process 2.2 M points per second on a Jetson AGX Xavier (NVIDIA, Santa Clara, CA, USA), demonstrating its applicability to various robotic platforms. Full article
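The Bayesian accumulation of per-frame predictions into a global semantic map can be pictured as a per-voxel recursive Bayes update of class probabilities. The sketch below shows that generic scheme with an assumed voxel size and class count; it is not the paper's implementation.

```python
import numpy as np
from collections import defaultdict

NUM_CLASSES = 20
VOXEL_SIZE = 0.2  # metres; an assumed resolution

# Global map: voxel index -> class probability vector (uniform prior until observed).
global_map = defaultdict(lambda: np.full(NUM_CLASSES, 1.0 / NUM_CLASSES))

def fuse_frame(points: np.ndarray, class_probs: np.ndarray) -> None:
    """Recursive Bayes update: multiply the stored posterior by the new softmax and renormalize."""
    voxels = np.floor(points / VOXEL_SIZE).astype(int)
    for voxel, probs in zip(map(tuple, voxels), class_probs):
        posterior = global_map[voxel] * probs
        global_map[voxel] = posterior / posterior.sum()

# Toy frame: 1000 points with random per-point softmax outputs from a segmentation network.
rng = np.random.default_rng(0)
pts = rng.uniform(-5, 5, size=(1000, 3))
probs = rng.dirichlet(np.ones(NUM_CLASSES), size=1000)
fuse_frame(pts, probs)
print(len(global_map), global_map[next(iter(global_map))].argmax())
```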

17 pages, 6569 KiB  
Article
An Improved YOLOv5s-Based Scheme for Target Detection in a Complex Underwater Environment
by Chenglong Hou, Zhiguang Guan, Ziyi Guo, Siqi Zhou and Mingxing Lin
J. Mar. Sci. Eng. 2023, 11(5), 1041; https://doi.org/10.3390/jmse11051041 - 13 May 2023
Viewed by 1476
Abstract
At present, sea cucumbers, sea urchins, and other seafood products have become increasingly significant in the seafood aquaculture industry. In traditional fishing operations, divers go underwater for fishing, and the complex underwater environment can cause harm to the divers’ bodies. Therefore, the use of underwater robots for seafood fishing has become a current trend. During the fishing process, underwater fishing robots rely on vision to accurately detect sea cucumbers and sea urchins. In this paper, an algorithm for the target detection of sea cucumbers and sea urchins in complex underwater environments is proposed based on an improved YOLOv5s. The following improvements are made to YOLOv5s: (1) To enhance the feature extraction ability of the model, the gnConv-based HorBlock module, acting as a self-attentive sublayer, is added to the backbone network. (2) To obtain the optimal hyperparameters of the model for underwater datasets, hyperparameter evolution based on a genetic algorithm is used. (3) The underwater dataset is extended using offline data augmentation. The dataset used in the experiments was created in a real underwater environment and contains 1536 images, randomly divided into training, validation, and test sets at a ratio of 7:2:1. The divided dataset is input to the improved YOLOv5s network for training. The experiments show that the mean average precision (mAP) of the algorithm is 94%, a rise of 4.5% over the original YOLOv5s. The detection time increases by 4.09 ms, which is acceptable given the accuracy improvement. Therefore, the improved YOLOv5s offers better detection accuracy at an acceptable speed in complex underwater environments, and can provide theoretical support for the underwater operations of underwater fishing robots. Full article
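Hyperparameter evolution of the kind mentioned in point (2) boils down to a mutate-evaluate-select loop in which each evaluation is a full training run. The sketch below shows that loop with a placeholder fitness function standing in for training; the hyperparameters, bounds, and mutation scale are illustrative assumptions, not the settings used in this paper.

```python
import random

random.seed(0)

# Initial hyperparameters and (min, max) mutation bounds -- illustrative values only.
hyp = {"lr0": 0.01, "momentum": 0.937, "mosaic": 1.0, "hsv_v": 0.4}
bounds = {"lr0": (1e-4, 0.1), "momentum": (0.6, 0.98), "mosaic": (0.0, 1.0), "hsv_v": (0.0, 0.9)}

def fitness(h: dict) -> float:
    """Placeholder for 'train the detector with h and return mAP' -- the expensive real step."""
    return 0.9 - abs(h["lr0"] - 0.012) * 10 - abs(h["momentum"] - 0.93)

def mutate(parent: dict, sigma: float = 0.2) -> dict:
    """Gaussian mutation of each gene, clipped to its bounds."""
    child = {}
    for k, v in parent.items():
        lo, hi = bounds[k]
        child[k] = min(hi, max(lo, v * (1.0 + random.gauss(0.0, sigma))))
    return child

best, best_fit = hyp, fitness(hyp)
for _ in range(50):                       # 50 generations of mutate -> evaluate -> keep the best
    child = mutate(best)
    f = fitness(child)
    if f > best_fit:
        best, best_fit = child, f
print(best_fit, best)
```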

28 pages, 19746 KiB  
Review
An Overview of Key SLAM Technologies for Underwater Scenes
by Xiaotian Wang, Xinnan Fan, Pengfei Shi, Jianjun Ni and Zhongkai Zhou
Remote Sens. 2023, 15(10), 2496; https://doi.org/10.3390/rs15102496 - 09 May 2023
Cited by 6 | Viewed by 4623
Abstract
Autonomous localization and navigation, as an essential research area in robotics, has a broad scope of applications in various scenarios. To widen the utilization environment and augment domain expertise, simultaneous localization and mapping (SLAM) in underwater environments has recently become a popular topic for researchers. This paper examines the key SLAM technologies for underwater vehicles and provides an in-depth discussion on the research background, existing methods, challenges, application domains, and future trends of underwater SLAM. It is not only a comprehensive literature review on underwater SLAM, but also a systematic introduction to the theoretical framework of underwater SLAM. The aim of this paper is to assist researchers in gaining a better understanding of the system structure and development status of underwater SLAM, and to provide a feasible approach to tackle the underwater SLAM problem. Full article