Article

Recognition of Underwater Materials of Bionic and Natural Fishes Based on Blue-Green Light Reflection
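by Heng Jiang, Cuicui Zhang, Renliang Huang, Wei Qi and Rongxin Su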

1 State Key Laboratory of Chemical Engineering, Tianjin Key Laboratory of Membrane Science and Desalination Technology, School of Chemical Engineering and Technology, Tianjin University, Tianjin 300072, China
2 School of Marine Science and Technology, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9600; https://doi.org/10.3390/s22249600
Submission received: 24 October 2022 / Revised: 23 November 2022 / Accepted: 2 December 2022 / Published: 7 December 2022
(This article belongs to the Section Environmental Sensing)

Abstract

Thanks to the advantages of low disturbance, good concealment and high mobility, bionic fishes have been developed by many countries as equipment for underwater observation and data collection. However, differentiating between true and bionic fishes has become a challenging task. Commonly used acoustic and optical technologies have difficulty differentiating bionic fishes from real ones due to their high similarity in shape, size, and camouflage ability. To solve this problem, this paper proposes a novel idea for bionic fish recognition based on blue-green light reflection, which is a powerful observation technique for underwater object detection. Blue-green light penetrates water well and can thus be used as a signal carrier to recognize bionic fishes with different surface materials. Three types of surface materials representing bionic fishes, namely titanium alloy, carbon fiber, and nylon, are investigated in this paper. We collected 1620 groups of blue-green light reflection data for these three kinds of materials and two real fishes. Three machine learning algorithms were then utilized for recognition. The recognition accuracy reaches 92.22%, which demonstrates the satisfactory performance of our method. To the best of our knowledge, this is the first work to investigate bionic fish recognition from the perspective of surface material difference using blue-green light reflection.

1. Introduction

Recently, bionic fishes have been developed by many institutions as equipment for underwater environmental monitoring and ocean observation [1,2,3], since they look like real fishes and thus are not easily detected. In 1994, David Barrett of MIT developed the world's first bionic robotic fish, the "RoboTuna" [4]. Later, the group proposed an updated version, the "RoboPike" [5], with improved mobility and stability. The University of Essex developed a bionic robotic fish named "G9" [6], which has been applied to marine pollution monitoring. In 1999, Beijing University of Aeronautics and Astronautics developed the first self-swimming bionic robot eel in China; they later improved it to create the "SPC-II" [7], which was used for underwater archeological exploration of the Zheng Chenggong ancient warship site. The rapid development of underwater bionic fishes has promoted studies of underwater target detection, fishery resource exploration, and so on. However, the detection and recognition of underwater bionic fishes has also become a challenging task due to their similarity to real fishes.
Acoustic "images" generated by active or passive sonars and visual images obtained from optical imaging systems are the two main types of media for underwater object detection. The former can usually detect objects from a long distance by determining the orientation, direction of motion, and structural information of targets based on sound transmission [8,9,10,11]. However, acoustic imaging methods are often criticized because they are easily disturbed by the marine environment, suffer from large target noise, and offer a low degree of automation. Several methods have been proposed to improve the quality of acoustic images. Yin et al. proposed a single-carrier multiuser receiver for underwater acoustic communications with strong multiple access interference (MAI), combining passive time reversal (PTR) with direct-adaptation-based turbo equalization [12]. Zhou and Yang proposed a denoising method to suppress noise interference in underwater acoustic signals for recognition [13]. Dunlop et al. used a multi-beam echosounder on a remotely operated vehicle to gather in situ information on abyssal benthopelagic assemblages and discern their distribution, behavior, and habitat associations [14]. Chen et al. proposed a network model (EfficientNet-S), adjusted by compound scaling from a baseline model, to study active sonar target recognition with small samples; in experiments on anechoic pool echoes, the proposed model achieved a recognition accuracy of more than 90% [15]. Wei et al. reviewed the literature of the previous decade on the use of imaging sonar in fish species identification, abundance estimation, length measurement and behavior analysis, as well as sonar imagery processing concerning fish, and identified three challenges: (1) the recognition of small fish forming dense aggregations; (2) species identification, which limits the use of sonar in species-specific studies; and (3) time-consuming massive data processing [16]. These studies benefit the application of sonar to underwater object recognition. However, acoustic images, which lack color and texture information, remain difficult to use for detecting small-volume targets. On the other hand, visual optical images with color and texture information have been used for the detection of underwater objects, and several machine-learning-based methods have been proposed to detect objects in optical images [17,18,19]. Liu and Liang proposed an enhancement method using background light estimation and improved adaptive transmission fusion [20]. Li et al. developed an underwater image enhancement method based on an underwater scene prior motivated convolutional neural network, called UWCNN, which can be used to extract objects from videos frame by frame [21]. Although optical images supply more abundant information than acoustic images, they are still challenged by several issues; for example, due to the physical properties of seawater, underwater optical images are dominated by blue and green colors with low contrast. Most reported methods recognize underwater targets by shape and structure, and to the best of our knowledge, there is no dedicated method for distinguishing real fishes from bionic fishes from the perspective of surface material difference. We aim to explore the feasibility of recognizing different materials underwater to provide a new approach to such recognition.
Blue-green light, the transmission window of sea water discovered by Duntley [22], has been acknowledged as an important optical signal transmission medium for underwater observation. It has many advantages over the traditional white-light-based optical medium: stronger penetration and weaker attenuation in sea water, which lead to better performance when recognizing the surface materials of different objects. Blue-green light has been widely used in many fields, such as underwater communication [23,24] and sensing [25]. However, no existing work has investigated its performance in the recognition of underwater bionic fishes from the perspective of surface material difference. In this paper, we utilize blue-green light to classify materials, providing a novel idea for the recognition of real and bionic fishes.
In this paper, the reflection characteristics defined by the reflection coefficient $R_C$ over the blue-green band of 470.32–570.72 nm are utilized to classify different materials (titanium alloy, carbon fiber and nylon) representing bionic fishes, as well as body parts of real fishes (the abdomen, side, and back of sea bass and Larimichthys crocea). We collected a dataset of 1620 groups of reflection coefficients $R_C$ under varied environmental conditions, with differences in light propagation distance and salinity. Three commonly used machine learning algorithms, namely logistic regression (LR), the back propagation (BP) neural network and the support vector machine (SVM), were then utilized for classification. To the best of our knowledge, this is the first work to investigate bionic fish recognition from the perspective of surface material difference based on blue-green light reflection.
The remainder of this paper is organized as follows. Section 2 describes the development of the whole system, including hardware construction, data collection, and data preprocessing. An evaluation of our method based on the experimental results is provided in Section 3. Finally, we conclude this paper with a description of future work in Section 4.

2. Methodology of Bionic Fish Recognition Using Blue-Green Light Reflection

We conducted a feasibility study of the recognition of underwater materials of bionic and natural fishes using blue-green light at short range. We designed a recognition system based on blue-green light reflection, exploiting the fact that different material surfaces have different reflection characteristics. We built optical hardware to collect reflectivity data of different materials under different water environments to construct a dataset for recognition. Three traditional machine learning algorithms were then utilized to verify the effectiveness of our method. The whole procedure is shown in Figure 1. Next, we introduce each part of the system in detail.

2.1. System Development and Data Collection

Optical data acquisition hardware was built to collect the light reflection data of different materials under different conditions in a 25 °C constant temperature chamber. The system is shown in Figure 2.
A tungsten-halogen light source (DH-2000-BAL, Ocean Optics, Inc., Dunedin, FL, USA) was used to provide stable light at wavelengths of 200–1000 nm. The light travels through an underwater Y-type optical fiber, which is inserted into the water, and is reflected by the surface of the material. The reflected light is transmitted to the spectrometer through the other branch of the fiber. Spectrometer data are recorded using the "SpectraSuite" software, whose reflection measurement mode takes the reflection data of a standard reference object (SPL-WS-1, Hangzhou SPL Photonics Co., Ltd.) as the benchmark; the reflection coefficient of the material is then generated automatically against this benchmark. We used a 30 cm cubic glass water tank as the container.
If we directly used the reflected light intensity to distinguish different materials, the quality of the collected data would be greatly affected by factors such as the measuring distance and the water environment. Therefore, we designed a method that eliminates the influence of these factors. We first obtain the reflected light intensity of the standard reference object, $I_{ref,std}$, and then the reflected light intensity $I_{ref}$ of the material to be measured under the same conditions. The reflection coefficient $R_C$ of the material is obtained by dividing $I_{ref}$ by $I_{ref,std}$, as shown in Equation (3). Since the reflectivity of the standard reference object $R_{std}$ in Equation (1) and the incident light intensity $I_{inc}$ remain stable, the reflection coefficient $R_C$ is proportional to the reflectivity $R$ of the material to be measured in Equation (2).

$$R_{std} = \frac{I_{ref,std}}{I_{inc}} \tag{1}$$

$$R = \frac{I_{ref}}{I_{inc}} \tag{2}$$

$$R_C = \frac{I_{ref}}{I_{ref,std}} = \frac{R}{R_{std}} \tag{3}$$
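As a minimal numerical sketch of Equations (1)–(3), the Python snippet below computes $R_C$ from two recorded spectra. The file names and the two-column (wavelength, intensity) layout are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def reflection_coefficient(i_ref, i_ref_std):
    """Equation (3): R_C = I_ref / I_ref,std, evaluated per wavelength.

    Since both spectra share the same incident intensity I_inc, R_C is
    proportional to the material reflectivity R of Equation (2).
    """
    return np.asarray(i_ref) / np.asarray(i_ref_std)

# Hypothetical two-column files: wavelength (nm), reflected intensity.
material = np.loadtxt("nylon_25mm_clean_water.txt")
standard = np.loadtxt("standard_reference_spl_ws_1.txt")

wavelengths = material[:, 0]
r_c = reflection_coefficient(material[:, 1], standard[:, 1])
```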
Using this hardware system, we collected data of real fishes and of the materials representing bionic fishes under different light propagation distances and different water conditions. Next, we discuss each measurement scenario in detail.

2.1.1. Data Collection on Real and Bionic Fishes of Different Materials

Different body parts (back, abdomen and side) of two real fishes (sea bass and Larimichthys crocea) were used in data collection. Additionally, three materials representing bionic fishes, namely titanium alloy, carbon fiber, and nylon, were selected for data collection. The two real fishes and the three materials representing bionic fishes are illustrated in Figure 1.

2.1.2. Data Collection under Different Distances from the Light Source

During data collection, we ensured that light was almost perpendicular to the surface of the target. The distance from the end of the optical fiber head to the surface of the material, i.e., the underwater propagation distance of the light, can be obtained using the scale on the slide. We performed experiments under the following three distances for each object to be measured: 25 mm, 35 mm and 45 mm, as shown in Figure 3.

2.1.3. Data Collection under Different Physical Water Environments

We performed experiments in two water environments: clean water and simulated seawater with 32‰ salinity. We averaged all the data for each material in these two water environments and found that the shape of $R_C$ remains stable, although there is a deviation along the longitudinal axis, as shown in Figure 4. This deviation arises because, to ensure recognition ability, we randomly selected 10 feature points on the surface of each material under each condition, repeating the measurement three times for each point. Although the relative reflected light intensities of the same material are highly similar, small differences still exist.

2.2. Bionic and Real Fish Recognition Based on Machine Learning

After data collection, all data were clipped to the range of 470.32–570.72 nm (the blue-green band); we then constructed a dataset comprising 1620 labeled groups of data for classification, as shown in Table 1. We utilized machine learning algorithms to recognize the different bionic and real fishes. Machine learning has recently received extensive attention thanks to advances in computer hardware and the rapidly improving ability to collect, store, transmit and process large volumes of data; it is well suited to finding the inherent laws of data to solve recognition and classification problems. In this paper, we utilize three commonly used machine learning algorithms, namely the support vector machine (SVM), the back propagation (BP) neural network and logistic regression (LR), to verify the effectiveness of our system and the developed dataset.
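As a sketch of how such a dataset might be assembled, assuming one txt file per measurement with (wavelength, $R_C$) columns and a per-class directory layout (both are assumptions; the authors do not publish their pipeline):

```python
import numpy as np
from pathlib import Path

LABELS = {"sea_bass": 0, "larimichthys_crocea": 1,
          "titanium_alloy": 2, "carbon_fiber": 3, "nylon": 4}
BAND = (470.32, 570.72)  # blue-green band, nm

def load_group(path):
    """Load one measurement and keep only the blue-green band."""
    data = np.loadtxt(path)                       # columns: wavelength, R_C
    mask = (data[:, 0] >= BAND[0]) & (data[:, 0] <= BAND[1])
    return data[mask, 1]                          # 222-point feature vector

X, y = [], []
for name, label in LABELS.items():
    for path in sorted(Path("dataset", name).glob("*.txt")):  # assumed layout
        X.append(load_group(path))
        y.append(label)
X, y = np.asarray(X), np.asarray(y)               # X: (1620, 222), y: (1620,)
```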
The support vector machine maps linearly inseparable data from a low-dimensional space into a higher-dimensional space [26,27,28]. It finds a hyperplane in the higher-dimensional space to separate the data, using the training samples to locate the decision boundary. Several kernel functions are integrated into SVM, making it more efficient and effective.
Logistic regression combines the attributes of the data linearly to obtain a prediction function, then uses a monotone differentiable function to link the true label of the classification task with the predicted value of the underlying linear model, rather than predicting categories directly. LR can therefore provide approximate probability predictions, which is useful for tasks that use probability to assist decision-making [29,30,31].
A back propagation neural network is a multilayer feedforward network trained by error back propagation: signals propagate forward from the input to the output to compute the loss, while the loss is propagated backward to adjust the weights and thresholds. The BP neural network comprises an input layer, hidden layers and an output layer, a structure loosely inspired by the neural structure of the human brain [32,33,34].
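The three classifiers can be set up in a few lines with scikit-learn. The sketch below is a minimal illustration rather than the authors' code; it reuses the X and y arrays from the dataset sketch above and the hyperparameters reported in Section 3.1:

```python
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hyperparameters as reported in Section 3.1; all else scikit-learn defaults.
models = {
    "SVM": SVC(C=5580, kernel="rbf"),
    "LR": LogisticRegression(solver="lbfgs", penalty="l2", max_iter=1000),
    "BP": MLPClassifier(hidden_layer_sizes=(110, 20), solver="sgd",
                        learning_rate_init=0.001, max_iter=2000),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross validation
    print(f"{name}: mean accuracy {scores.mean():.4f}")
```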

3. Experimental Results and Discussion

3.1. Experimental Setting

The experiments were performed on an AMD Ryzen 7 4800H (Advanced Micro Devices, Santa Clara, CA, USA) with Radeon Graphics and 16 GB of memory. Each data file was in txt form with 222 rows and 2 columns. We randomly selected 4/5 of the samples (1296 groups) as training data and the remaining 1/5 as test data. For verification, we used 5-fold cross validation.
The SVM algorithm has a hyperparameter C, which determines the regularization strength; a smaller C value corresponds to stronger regularization. In this work, we tried many values of C and found that the highest recognition accuracy was achieved at C = 5580. The kernel function we used was the radial basis function.
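A hedged sketch of this hyperparameter search, reusing the X and y arrays above; the value grid is illustrative, and the paper reports C = 5580 as the best value found:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

for C in np.logspace(0, 5, 11):                   # illustrative grid of C values
    acc = cross_val_score(SVC(C=C, kernel="rbf"), X, y, cv=5).mean()
    print(f"C = {C:9.1f}  mean accuracy = {acc:.4f}")
```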
Compared to SVM, the performance of LR is less sensitive to the hyperparameter C, as shown in Figure 5. For LR, the solver was set to "lbfgs" and the regularization term was L2. The structure of the BP neural network constructed in this paper is shown in Figure 6. The input contains 222 data points, which are normalized from [0, 500] to [0, 1]. The network is trained with stochastic gradient descent. There are 110 neurons in the first hidden layer and 20 neurons in the second hidden layer. We set the learning rate η to 0.001. The final output is mapped to five categories.
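The [0, 500] to [0, 1] normalization can be folded into a pipeline in front of the BP network; the input range is taken from the text above, while the pipeline wrapper itself is our own illustrative choice:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.neural_network import MLPClassifier

# Scale raw values from the stated [0, 500] range into [0, 1].
to_unit_range = FunctionTransformer(lambda X: X / 500.0)

bp_net = make_pipeline(
    to_unit_range,
    MLPClassifier(hidden_layer_sizes=(110, 20), solver="sgd",
                  learning_rate_init=0.001, max_iter=2000),
)
# bp_net.fit(X_train, y_train) then trains the 222-110-20-5 structure of Figure 6.
```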

3.2. Results and Analysis

The experimental results are shown in Table 2. The fit time denotes the time cost of one round of training in 5-fold cross validation. The recognition accuracy is defined as the ratio of the number of test samples that the model identifies correctly to the total number of test samples. We can see that SVM outperforms the other two algorithms in terms of both training time and accuracy. Its recognition accuracy reaches 92.22%, which demonstrates the effectiveness of our whole system. Its computational cost is also very low, since SVM requires less time for parameter training to learn the internal law of the data and readily finds the hyperplane that separates the classes. In contrast, a BP network needs to train many parameters, and LR is better suited to binary classification than to multi-class classification.
To demonstrate the effectiveness of our method more comprehensively, we drew confusion matrices based on a single recognition run (not 5-fold cross validation); each confusion matrix is normalized over the true conditions. As shown in Figure 7, the longitudinal axis represents the true value and the transverse axis the predicted value. If the data of the two real fishes are excluded, the recognition accuracy reaches 100% for the materials representing bionic fishes (titanium alloy, carbon fiber and nylon) with both SVM and LR, which validates our approach and dataset well. Other evaluation indicators, including precision, recall and F1-score, are listed in Table 3. Precision is the proportion of correctly predicted instances among all instances predicted to be positive; recall is the proportion of correctly predicted instances among all true positives; and the F1-score is twice the product of precision and recall divided by their sum. SVM still performs best under these additional indicators. We also tried a convolutional neural network, as shown in Figure S1; to adapt the data to it, we converted them to image form. However, the accuracy of the convolutional neural network on the test set was only 85.86% (Figure S2 shows the corresponding confusion matrix). Despite consuming far more training time and computing resources, this method showed no advantage over the traditional ones.
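The indicators in Table 3 and the row-normalized confusion matrices of Figure 7 can be reproduced with standard scikit-learn utilities; the 4/5 : 1/5 split mirrors Section 3.1, while the random seed and the class-name ordering are assumptions:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = SVC(C=5580, kernel="rbf").fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Confusion matrix normalized over the true conditions (rows), as in Figure 7.
print(confusion_matrix(y_test, y_pred, normalize="true"))

# Per-class precision, recall and F1-score, as in Table 3.
print(classification_report(
    y_test, y_pred,
    target_names=["sea bass", "Larimichthys crocea",
                  "titanium alloy", "carbon fiber", "nylon"]))
```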

4. Conclusions

In this work, we proposed a novel blue-green light reflection method to recognize different materials underwater, providing a new idea for underwater bionic fish recognition. Taking the reflected light intensity of the standard reference object as the benchmark, the reflection coefficients of different materials are proportional to their own reflectivities; as a result, the reflection coefficient data are only weakly affected by distance and water quality. The processed data lie between 470.32 nm and 570.72 nm and comprise 222 data points per measurement. We collected a total of 1620 groups of data in five categories (covering different parts of two real fishes and three kinds of bionic fish materials) under different environmental conditions, such as distance and salinity. The effectiveness of our method was then verified with three machine learning algorithms: support vector machine, logistic regression and back propagation neural network. Each algorithm achieved high accuracy, and the highest accuracy, 92.22%, was obtained by the support vector machine.
Matteoli et al. proposed a subspace-based approach to investigate underwater material discriminability. Synthetic test data were generated using a simulator: eight different objects (e.g., fiberglass, tin, neoprene, aluminum) were simulated as being submerged 10 m deep within the water column, and four of the eight objects were correctly recognized with an accuracy of more than 75% [35]. Liu et al. designed a method for matching the same underwater object in acoustic and optical images; the distance between the camera or sonar and the target object (e.g., shellfish, sea urchins, stone) was 60 cm, and the recognition accuracy was about 90% [36]. These comparisons show that existing methods with high recognition accuracy rely on targets whose shapes differ considerably, while methods that recognize targets over longer distances suffer from lower accuracy.
The recognition method based on the differences among underwater bionic fish surface materials in the blue-green band has great application prospects. When applied to underwater robots, it can effectively distinguish underwater targets with high similarity, which is of great significance for underwater target detection, fishery resource exploration, and other research fields. In the future, we would like to collect more data on real fishes and bionic fish materials and improve the hardware system by using lasers for better classification. We also plan to use more sophisticated deep learning methods to improve recognition accuracy.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s22249600/s1, Figure S1: The convolutional neural network structure we designed; Figure S2: Confusion matrix of the convolutional neural network on the test set.

Author Contributions

Conceptualization, R.S., R.H., W.Q. and C.Z.; methodology, R.S., C.Z. and H.J.; software, C.Z. and H.J.; validation, R.S., C.Z., H.J., R.H. and W.Q.; formal analysis, R.S., C.Z., R.H. and W.Q.; investigation, H.J.; resources, R.S., C.Z., R.H. and W.Q.; data curation, H.J.; writing—original draft preparation, H.J. and C.Z.; writing—review and editing, R.S.; visualization, H.J.; supervision, R.S.; project administration, R.S.; funding acquisition, R.S. and C.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Basic Discipline Strengthening Program Technical Field Foundation of China (2020JCJQJJ323) and the National Natural Science Foundation of China (41806116).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ryuh, Y.; Yang, G.; Liu, J.; Hu, H. A School of Robotic Fish for Mariculture Monitoring in the Sea Coast. J. Bionic Eng. 2015, 12, 37–46.
2. Chen, K.; Zhu, W.; Dou, L. Research on Mobile Water Quality Monitoring System Based on Underwater Bionic Robot Fish Platform. In Proceedings of the 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), Dalian, China, 25–27 August 2020; pp. 457–461.
3. Li, G.; Chen, X.; Zhou, F.; Liang, Y.; Xiao, Y.; Cao, X.; Zhang, Z.; Zhang, M.; Wu, B.; Yin, S.; et al. Self-Powered Soft Robot in the Mariana Trench. Nature 2021, 591, 66–71.
4. Barrett, D.; Grosenbaugh, M.; Triantafyllou, M. The Optimal Control of a Flexible Hull Robotic Undersea Vehicle Propelled by an Oscillating Foil. In Proceedings of the Symposium on Autonomous Underwater Vehicle Technology (AUV), Monterey, CA, USA, 2–6 June 1996; pp. 1–9.
5. Kumph, J.M. Maneuvering of a Robotic Pike. Master's Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000.
6. Hu, H.; Liu, J.; Dukes, I.; Francis, G. Design of 3D Swim Patterns for Autonomous Robotic Fish. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 2406–2411.
7. Liang, J.; Zou, D.; Wang, S.; Wang, Y. Trial Voyage of SPC-II Fish Robot. J. Beijing Univ. Aeronaut. Astronaut. 2005, 31, 709–713.
8. Wang, H.; Wang, B.; Wu, L.; Tang, Q. Multihydrophone Fusion Network for Modulation Recognition. Sensors 2022, 22, 3214.
9. Luo, X.; Feng, Y. An Underwater Acoustic Target Recognition Method Based on Restricted Boltzmann Machine. Sensors 2020, 20, 5399.
10. Yang, H.; Li, J.; Shen, S.; Xu, G. A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition. Sensors 2019, 19, 1104.
11. Lee, Y.; Choi, J.; Ko, N.Y.; Choi, H.-T. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images. Sensors 2017, 17, 1953.
12. Yin, J.-W.; Zhu, G.-J.; Han, X.; Ge, W.; Li, L.; Tian, Y.-N. Iterative Channel Estimation-Based Soft Successive Interference Cancellation for Multiuser Underwater Acoustic Communications. J. Acoust. Soc. Am. 2021, 150, 133–144.
13. Zhou, X.; Yang, K. A Denoising Representation Framework for Underwater Acoustic Signal Recognition. J. Acoust. Soc. Am. 2020, 147, 377–383.
14. Dunlop, K.M.; Benoit-Bird, K.J.; Waluk, C.M.; Henthorn, R.G. Ecological Insights into Abyssal Bentho-Pelagic Fish at 4000 m Depth Using a Multi-Beam Echosounder on a Remotely Operated Vehicle. Deep Sea Res. Part II Top. Stud. Oceanogr. 2020, 173, 104679.
15. Chen, Y.; Hong, L.; Shuo, P. Study on Small Samples Active Sonar Target Recognition Based on Deep Learning. J. Mar. Sci. Eng. 2022, 10, 1144.
16. Wei, Y.; Duan, Y.; An, D. Monitoring Fish Using Imaging Sonar: Capacity, Challenges and Future Perspective. Fish Fish. 2022, 23, 1347–1370.
17. Zhao, M.; Hu, C.; Wei, F.; Wang, K.; Wang, C.; Jiang, Y. Real-Time Underwater Image Recognition with FPGA Embedded System for Convolutional Neural Network. Sensors 2019, 19, 350.
18. Himri, K.; Ridao, P.; Gracias, N. Underwater Object Recognition Using Point-Features, Bayesian Estimation and Semantic Information. Sensors 2021, 21, 1807.
19. Lin, Y.-H.; Yu, C.-M.; Wu, C.-Y. Towards the Design and Implementation of an Image-Based Navigation System of an Autonomous Underwater Vehicle Combining a Color Recognition Technique and a Fuzzy Logic Controller. Sensors 2021, 21, 4053.
20. Liu, K.; Liang, Y. Enhancement of Underwater Optical Images Based on Background Light Estimation and Improved Adaptive Transmission Fusion. Opt. Express 2021, 29, 28307–28328.
21. Li, C.; Anwar, S.; Porikli, F. Underwater Scene Prior Inspired Deep Underwater Image and Video Enhancement. Pattern Recognit. 2020, 98, 107038.
22. Duntley, S.Q. Light in the Sea. J. Opt. Soc. Am. 1963, 53, 214–233.
23. Kong, M.; Lv, W.; Ali, T.; Sarwar, R.; Yu, C.; Qiu, Y.; Qu, F.; Xu, Z.; Han, J.; Xu, J. 10-m 9.51-Gb/s RGB Laser Diodes-Based WDM Underwater Wireless Optical Communication. Opt. Express 2017, 25, 20829–20834.
24. Liu, X.; Yi, S.; Zhou, X.; Fang, Z.; Qiu, Z.-J.; Hu, L.; Cong, C.; Zheng, L.; Liu, R.; Tian, P. 34.5 m Underwater Optical Wireless Communication with 2.70 Gbps Data Rate Based on a Green Laser Diode with NRZ-OOK Modulation. Opt. Express 2017, 25, 27937–27947.
25. Dong, L.; Li, N.; Xie, X.; Bao, C.; Li, X.; Li, D. A Fast Analysis Method for Blue-Green Laser Transmission through the Sea Surface. Sensors 2020, 20, 1758.
26. Chitambira, B.; Armour, S.; Wales, S.; Beach, M. Employing Ray-Tracing and Least-Squares Support Vector Machines for Localisation. Sensors 2018, 18, 4059.
27. Kang, M.; Shin, S.; Zhang, G.; Jung, J.; Kim, Y.T. Mental Stress Classification Based on a Support Vector Machine and Naive Bayes Using Electrocardiogram Signals. Sensors 2021, 21, 7916.
28. Yang, J.; Liu, L.; Zhang, L.; Li, G.; Sun, Z.; Song, H. Prediction of Marine Pycnocline Based on Kernel Support Vector Machine and Convex Optimization Technology. Sensors 2019, 19, 1562.
29. Bellacicco, M.; Vellucci, V.; Scardi, M.; Barbieux, M.; Marullo, S.; D'Ortenzio, F. Quantifying the Impact of Linear Regression Model in Deriving Bio-Optical Relationships: The Implications on Ocean Carbon Estimations. Sensors 2019, 19, 3032.
30. Aspuru, J.; Ochoa-Brust, A.; Félix, R.A.; Mata-López, W.; Mena, L.J.; Ostos, R.; Martínez-Peláez, R. Segmentation of the ECG Signal by Means of a Linear Regression Algorithm. Sensors 2019, 19, 775.
31. Chen, W.-Y.; Wang, M.; Fu, Z.-X. Railway Crossing Risk Area Detection Using Linear Regression and Terrain Drop Compensation Techniques. Sensors 2014, 14, 10578–10597.
32. Ye, J.; Jin, M.; Gong, G.; Shen, R.; Lu, H. PassTCN-PPLL: A Password Guessing Model Based on Probability Label Learning and Temporal Convolutional Neural Network. Sensors 2022, 22, 6484.
33. Pavićević, M.; Popović, T. Forecasting Day-Ahead Electricity Metrics with Artificial Neural Networks. Sensors 2022, 22, 1051.
34. Long, Y.; Wang, Z.; He, B.; Nie, T.; Zhang, X.; Fu, T. Partitionable High-Efficiency Multilayer Diffractive Optical Neural Network. Sensors 2022, 22, 7110.
35. Matteoli, S.; Corsini, G. Underwater Material Discriminability with Fluorescence Lidar in Unknown Environmental Conditions. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4599–4602.
36. Liu, J.; Li, B.; Guan, W.; Gong, S.; Liu, J.; Cui, J. A Scale-Adaptive Matching Algorithm for Underwater Acoustic and Optical Images. Sensors 2020, 20, 4226.
Figure 1. Schematic diagram of the whole process. The lower left shows the real fish species and the materials representing underwater bionic fishes that we selected. The upper left shows the data acquisition device we built: the underwater Y-type optical fiber connects the spectrometer and the tungsten-halogen light source, and the optical fiber head probing into the water is both the transmitting and receiving end of the light. The sliding platform holding the optical fiber head carries a scale from which the light propagation distance can be read. The dataset we built is shown on the upper right, with a total of 1620 groups of data recorded. The lower right shows the machine learning methods we selected to verify the validity of the dataset.
Figure 2. The hardware system developed for bionic fish recognition.
Figure 3. We chose nylon, one of the five materials, as a representative to show how distance affects the two data forms. (A) Light intensity $I_{ref}$ of nylon collected at different distances. (B) Reflection coefficient $R_C$ of nylon collected at different distances.
Figure 4. Average data ($R_C$) of various materials in clean water and in water with a salinity of 32‰; the material type is marked in each subgraph. (A) Larimichthys crocea. (B) Sea bass. (C) Nylon. (D) Titanium alloy. (E) Carbon fiber.
Figure 5. Recognition accuracy as a function of the hyperparameter C for SVM and LR, respectively.
Figure 6. The BP neural network constructed in this paper. The input data contain 222 data points. There are 110 neurons in the first hidden layer and 20 neurons in the second hidden layer. The final output is mapped to five categories.
Figure 7. Confusion matrices of the various methods. (A) SVM; (B) LR; (C) BP. The longitudinal axis is the true value and the transverse axis is the predicted value.
Table 1. Dataset composition (number of samples under different conditions).

Condition | Sea Bass | Larimichthys crocea | Titanium Alloy | Carbon Fiber | Nylon
25 mm distance in clean water | 90 | 90 | 30 | 30 | 30
35 mm distance in clean water | 90 | 90 | 30 | 30 | 30
45 mm distance in clean water | 90 | 90 | 30 | 30 | 30
Total in clean water | 270 | 270 | 90 | 90 | 90
Total in two water environments | 540 | 540 | 180 | 180 | 180
Table 2. Performance of each model in 5-fold cross validation.

Model | Total | Fit Time (s) | Test Accuracy (%)
SVM | 324 | 0.09 | 92.22
LR | 324 | 1.39 | 81.11
BP | 324 | 37.07 | 84.62
Table 3. Performance of each model in a certain recognition result.

Class | SVM Precision | SVM Recall | SVM F1-Score | LR Precision | LR Recall | LR F1-Score | BP Precision | BP Recall | BP F1-Score
Larimichthys crocea | 0.878 | 0.886 | 0.882 | 0.792 | 0.737 | 0.764 | 0.842 | 0.746 | 0.791
Sea bass | 0.863 | 0.854 | 0.858 | 0.716 | 0.771 | 0.742 | 0.792 | 0.833 | 0.812
Nylon | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
Carbon fiber | 1 | 1 | 1 | 1 | 1 | 1 | 0.774 | 0.750 | 0.762
Titanium alloy | 1 | 1 | 1 | 1 | 1 | 1 | 0.975 | 1 | 0.987
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
