Topic Editors

National Laboratory of Radar Signal Processing, School of Electronic Engineering, Xidian University, Xi’an 710071, China
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
State Key Laboratory of Millimeter Waves, School of Information Science and Engineering, Southeast University, Nanjing 210096, China
College of Communication and Information Engineering, Xi’an University of Science and Technology, Xi'an 710054, China

Information Sensing Technology for Intelligent/Driverless Vehicle

Abstract submission deadline
closed (30 September 2022)
Manuscript submission deadline
closed (31 December 2022)
Viewed by
75017

Topic Information

Dear Colleagues,

As the basis for vehicle positioning and path planning, the environmental perception system is an essential part of intelligent/driverless vehicles. It is used to obtain environmental information around the vehicle, including roads, obstacles, traffic signs, and the vital signs of the driver. In the past few years, environmental perception technology based on various vehicle-mounted sensors (camera, laser, millimeter-wave radar, and GPS/IMU) has made rapid progress. As research on automated and assisted driving advances, the information sensing technology of driverless cars has become a research hotspot, and the performance of vehicle-mounted sensors must therefore be improved to cope with the complex driving environments of daily life. In practice, however, many development issues remain: the technology is not yet mature, the instrumentation is not sufficiently advanced, and experimental environments do not reflect real driving conditions. All of these problems pose great challenges to traditional vehicle-mounted sensor systems and information perception technology. In general, this motivates the need for new environmental perception systems, signal processing methods, and even new types of sensors.

This topic is devoted to highlighting the most advanced studies in the technology, methodology, and applications of sensors mounted on intelligent/unmanned driving vehicles. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world and/or emerging problems, are welcome. The participating journals publish original papers, and from time to time invited review articles, in all areas related to sensors mounted on intelligent/unmanned driving vehicles, including, but not limited to, the following suggested topics:

  • Vehicle-mounted millimeter-wave radar technology;
  • Vehicle-mounted LiDAR technology;
  • Vehicle visual sensors;
  • High-precision positioning technology based on GPS/IMU;
  • Multi-sensor data fusion (MSDF);
  • New sensor systems mounted on intelligent/unmanned vehicles.

Dr. Shiyang Tang
Dr. Zhanye Chen
Dr. Yan Huang
Dr. Ping Guo
Topic Editors

Keywords

  • information sensing technology
  • intelligent/driverless vehicle
  • millimeter-wave radar
  • LiDAR
  • vehicle visual sensor

Participating Journals

Journal Name                     Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Remote Sensing (remotesensing)   5.0             7.9         2009            23 Days                   CHF 2700
Sensors (sensors)                3.9             6.8         2001            17 Days                   CHF 2600
Geomatics (geomatics)            -               -           2021            18.6 Days                 CHF 1000
Smart Cities (smartcities)       6.4             8.5         2018            20.2 Days                 CHF 2000
Vehicles (vehicles)              2.2             2.9         2019            22.2 Days                 CHF 1600

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (29 papers)

15 pages, 4087 KiB  
Article
Improving the Efficiency of 3D Monocular Object Detection and Tracking for Road and Railway Smart Mobility
by Alexandre Evain, Antoine Mauri, François Garnier, Messmer Kounouho, Redouane Khemmar, Madjid Haddad, Rémi Boutteau, Sébastien Breteche and Sofiane Ahmedali
Sensors 2023, 23(6), 3197; https://doi.org/10.3390/s23063197 - 16 Mar 2023
Cited by 1 | Viewed by 1458
Abstract
Three-dimensional (3D) real-time object detection and tracking is an important task for autonomous vehicles and for road and railway smart mobility, allowing them to analyze their environment for navigation and obstacle avoidance purposes. In this paper, we improve the efficiency of 3D monocular object detection by using dataset combination and knowledge distillation, and by creating a lightweight model. Firstly, we combine real and synthetic datasets to increase the diversity and richness of the training data. Then, we use knowledge distillation to transfer the knowledge from a large, pre-trained model to a smaller, lightweight model. Finally, we create a lightweight model by selecting a combination of width, depth, and resolution that reaches a target complexity and computation time. Our experiments showed that each method improves either the accuracy or the efficiency of our model with no significant drawbacks. Using all these approaches is especially useful for resource-constrained environments, such as self-driving cars and railway systems. Full article
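
The distillation step can be illustrated with a generic, framework-free sketch. This is not the paper's training pipeline; it only shows temperature-scaled knowledge distillation on made-up teacher/student logits, and the names and hyperparameters (T, alpha) are chosen purely for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T produces softer class probabilities.
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of (a) KL divergence between softened teacher/student distributions
    and (b) ordinary cross-entropy against the hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T ** 2) * kl + (1.0 - alpha) * ce

# Toy example: 4 samples, 3 classes.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 3)) * 3.0
student = rng.normal(size=(4, 3))
labels = np.array([0, 2, 1, 0])
print(distillation_loss(student, teacher, labels))
```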

39 pages, 19855 KiB  
Article
A Localization Algorithm Based on Global Descriptor and Dynamic Range Search
by Yongzhe Chen, Gang Wang, Wei Zhou, Tongzhou Zhang and Hao Zhang
Remote Sens. 2023, 15(5), 1190; https://doi.org/10.3390/rs15051190 - 21 Feb 2023
Cited by 1 | Viewed by 1575
Abstract
The map-based localization method is considered an effective supplement to localization in GNSS-denied environments. However, since the map is constituted by dispersed keyframes, it sometimes happens that the initial position of the unmanned ground vehicle (UGV) lies between the map keyframes or is not on the mapping trajectory. In both cases, it will be impossible to precisely estimate the pose of the vehicle directly via the relationship between the current frame and the map keyframes, leading to localization failure. In this regard, we propose a localization algorithm based on the global descriptor and dynamic range search (LA-GDADRS). Specifically, we first design a global descriptor, the shift and rotation invariant image (SRI), which improves rotation invariance and shift invariance by means of coordinate removal and de-centralization. Secondly, we design a global localization algorithm for shift and rotation invariant branch-and-bound scan matching (SRI-BBS). It first leverages SRI to obtain an approximate a priori position of the unmanned vehicle and then utilizes the similarity between the current frame SRI and the map keyframe SRIs to select a dynamic search range around the a priori position. Within the search range, we leverage the branch-and-bound scan matching algorithm to search for a more precise pose. This solves the problem that global localization tends to fail when the a priori position is imprecise. Moreover, we introduce a tightly coupled factor graph model and an HD map engine to achieve real-time position tracking and lane-level localization, respectively. Finally, we complete extensive ablation and comparative experiments to validate our methods on the benchmark dataset (KITTI) and in real campus scenarios. Extensive experimental results demonstrate that our algorithm matches the performance of mainstream localization algorithms. Full article
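
The SRI descriptor itself is not specified in this summary, but the general idea of a rotation-tolerant global descriptor for LiDAR scans can be sketched with a scan-context-style polar histogram and a brute-force circular shift search. This is a simplified stand-in, not the paper's SRI or SRI-BBS, and all bin counts are made up.

```python
import numpy as np

def polar_descriptor(points_xy, n_rings=20, n_sectors=60, max_range=80.0):
    """Bin 2D LiDAR points into a (ring, sector) grid; each cell stores the
    point count, giving a coarse global signature of the scan."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)
    keep = r < max_range
    ring = np.minimum((r[keep] / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta[keep] / (2 * np.pi) * n_sectors).astype(int), n_sectors - 1)
    desc = np.zeros((n_rings, n_sectors))
    np.add.at(desc, (ring, sector), 1.0)
    return desc

def match_score(d1, d2):
    """A rotation of the sensor shifts the sector axis, so try every circular
    column shift and keep the best normalized correlation."""
    best = -1.0
    for s in range(d2.shape[1]):
        d2s = np.roll(d2, s, axis=1)
        num = (d1 * d2s).sum()
        den = np.linalg.norm(d1) * np.linalg.norm(d2s) + 1e-12
        best = max(best, num / den)
    return best

rng = np.random.default_rng(1)
scan = rng.uniform(-60, 60, size=(5000, 2))
rot = np.pi / 3
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
print(match_score(polar_descriptor(scan), polar_descriptor(scan @ R.T)))  # close to 1
```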

19 pages, 8819 KiB  
Article
Implementation Method of Automotive Video SAR (ViSAR) Based on Sub-Aperture Spectrum Fusion
by Ping Guo, Fuen Wu, Shiyang Tang, Chenghao Jiang and Changjie Liu
Remote Sens. 2023, 15(2), 476; https://doi.org/10.3390/rs15020476 - 13 Jan 2023
Cited by 3 | Viewed by 1715
Abstract
The automotive synthetic aperture radar (SAR) can obtain two-dimensional (2-D) high-resolution images and has good robustness compared with other sensors. Generally, 2-D high resolution always conflicts with the real-time requirement in conventional SAR imaging. This article proposes an automotive video SAR (ViSAR) imaging technique based on sub-aperture spectrum fusion to address this issue. Firstly, the scene space-variation problem caused by the close observation distance in automotive SAR is analyzed. Moreover, the sub-aperture implementation method, frame rate, and resolution of automotive ViSAR are also introduced. Then, the improved Range Doppler algorithm (RDA) is used to focus the sub-aperture data. Finally, a sub-aperture stitching strategy is proposed to obtain a high-resolution frame image. Compared with available ViSAR imaging methods, the proposed method is more efficient, performs better, and is more appropriate for automotive ViSAR. Simulation results and real data from an automotive SAR validate the effectiveness of the proposed method. Full article
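
As a rough back-of-the-envelope illustration (not the paper's derivation), standard strip-map relations link the sub-aperture duration to azimuth resolution and frame rate. The numbers below (77 GHz carrier, 50 m slant range, 10 m/s platform speed, 50% sub-aperture overlap) are hypothetical.

```python
import numpy as np

def visar_params(wavelength, slant_range, velocity, t_sub, overlap=0.5):
    """Rough strip-map relations: the synthetic aperture formed during one
    sub-aperture of duration t_sub sets the azimuth resolution, and the
    non-overlapping part of consecutive sub-apertures sets the frame interval."""
    l_syn = velocity * t_sub                                # synthetic aperture length [m]
    az_res = wavelength * slant_range / (2.0 * l_syn)       # azimuth resolution [m]
    frame_interval = t_sub * (1.0 - overlap)                # time between frames [s]
    return az_res, 1.0 / frame_interval                     # (resolution, frame rate [Hz])

res, fps = visar_params(wavelength=3e8 / 77e9, slant_range=50.0,
                        velocity=10.0, t_sub=0.2, overlap=0.5)
print(f"azimuth resolution ~ {res:.3f} m, frame rate ~ {fps:.1f} Hz")
```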

26 pages, 39727 KiB  
Article
InTEn-LOAM: Intensity and Temporal Enhanced LiDAR Odometry and Mapping
by Shuaixin Li, Bin Tian, Xiaozhou Zhu, Jianjun Gui, Wen Yao and Guangyun Li
Remote Sens. 2023, 15(1), 242; https://doi.org/10.3390/rs15010242 - 31 Dec 2022
Cited by 3 | Viewed by 1926
Abstract
Traditional LiDAR odometry (LO) systems mainly leverage geometric information obtained from the traversed surroundings to register laser scans and estimate LiDAR ego-motion, while they may be unreliable in dynamic or degraded environments. This paper proposes InTEn-LOAM, a low-drift and robust LiDAR odometry and mapping method that fully exploits implicit information of laser sweeps (i.e., geometric, intensity and temporal characteristics). The specific content of this work includes method innovation and experimental verification. With respect to method innovation, we propose the cylindrical-image-based feature extraction scheme, which makes use of the characteristic of uniform spatial distribution of laser points to boost the adaptive extraction of various types of features, i.e., ground, beam, facade and reflector. We propose a novel intensity-based point registration algorithm and incorporate it into the LiDAR odometry, enabling the LO system to jointly estimate the LiDAR ego-motion using both geometric and intensity feature points. To eliminate the interference of dynamic objects, we propose a temporal-based dynamic object removal approach to filter them out of the resulting point map. Moreover, the local map is organized and downsampled using a temporal-related voxel grid filter to maintain the similarity between the current scan and the static local map. With respect to experimental verification, extensive tests are conducted on both simulated and real-world datasets. The results show that the proposed method achieves similar or better accuracy with respect to the state-of-the-art in normal driving scenarios and outperforms geometric-based LO in unstructured environments. Full article
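
A minimal sketch of the kind of cylindrical (range/intensity) image such a feature extractor starts from, assuming a generic spinning LiDAR. The resolution and field-of-view values are placeholders, and this is not the paper's exact projection.

```python
import numpy as np

def cylindrical_image(points, intensities, h_res_deg=0.2, v_fov=(-25.0, 3.0), v_bins=64):
    """Project (x, y, z) LiDAR points onto a cylinder: columns index azimuth,
    rows index elevation; each pixel keeps the nearest range and its intensity."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    az = np.degrees(np.arctan2(y, x)) + 180.0              # 0..360 deg
    el = np.degrees(np.arcsin(z / (rng + 1e-9)))
    h_bins = int(360.0 / h_res_deg)
    col = np.minimum((az / h_res_deg).astype(int), h_bins - 1)
    row = ((el - v_fov[0]) / (v_fov[1] - v_fov[0]) * v_bins).astype(int)
    valid = (row >= 0) & (row < v_bins)
    range_img = np.full((v_bins, h_bins), np.inf)
    inten_img = np.zeros((v_bins, h_bins))
    for r, c, d, i in zip(row[valid], col[valid], rng[valid], intensities[valid]):
        if d < range_img[r, c]:                            # keep the closest return per pixel
            range_img[r, c] = d
            inten_img[r, c] = i
    return range_img, inten_img

pts = np.random.default_rng(2).uniform(-30, 30, size=(2000, 3))
rimg, iimg = cylindrical_image(pts, np.random.rand(2000))
print(rimg.shape, np.isfinite(rimg).sum(), "pixels filled")
```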

19 pages, 6550 KiB  
Article
Anti-Noise 3D Object Detection of Multimodal Feature Attention Fusion Based on PV-RCNN
by Yuan Zhu, Ruidong Xu, Hao An, Chongben Tao and Ke Lu
Sensors 2023, 23(1), 233; https://doi.org/10.3390/s23010233 - 26 Dec 2022
Cited by 3 | Viewed by 2395
Abstract
3D object detection methods based on camera and LiDAR fusion are susceptible to environmental noise. Due to the mismatch of the physical characteristics of the two sensors, the feature vectors encoded by the feature layer are in different feature spaces. This leads to the problem of feature information deviation, which has an impact on detection performance. To address this problem, a point-guided feature abstraction method is first presented to fuse the camera and LiDAR. The extracted image features and point cloud features are aggregated to keypoints for enhancing information redundancy. Second, the proposed multimodal feature attention (MFA) mechanism is used to achieve adaptive fusion of point cloud features and image features with information from multiple feature spaces. Finally, a projection-based farthest point sampling (P-FPS) is proposed to downsample the raw point cloud, which can project more keypoints onto close objects and improve the sampling rate of the point-guided image features. The 3D bounding boxes of the objects are obtained by the region of interest (ROI) pooling layer and the fully connected layer. The proposed 3D object detection algorithm is evaluated on three different datasets, and it achieved better detection performance and robustness when the image and point cloud data contain rain noise. The test results on a physical test platform further validate the effectiveness of the algorithm. Full article
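
The paper's P-FPS biases sampling toward image-guided keypoints; as a baseline reference, standard farthest point sampling can be written in a few lines of NumPy (a generic implementation, not the proposed P-FPS).

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from the set already chosen,
    giving a spatially uniform subset of the cloud."""
    n = points.shape[0]
    rng = np.random.default_rng(seed)
    chosen = np.empty(k, dtype=int)
    chosen[0] = rng.integers(n)
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for i in range(1, k):
        chosen[i] = np.argmax(dist)                      # farthest from the current set
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[i]], axis=1))
    return chosen

cloud = np.random.default_rng(3).normal(size=(4096, 3))
idx = farthest_point_sampling(cloud, 128)
print(idx.shape, len(set(idx.tolist())), "unique samples")
```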

24 pages, 10865 KiB  
Article
LiDAR Odometry and Mapping Based on Neighborhood Information Constraints for Rugged Terrain
by Gang Wang, Xinyu Gao, Tongzhou Zhang, Qian Xu and Wei Zhou
Remote Sens. 2022, 14(20), 5229; https://doi.org/10.3390/rs14205229 - 19 Oct 2022
Cited by 2 | Viewed by 1620
Abstract
The simultaneous localization and mapping (SLAM) method estimates a vehicle's pose and builds maps from environmental information collected primarily through sensors such as LiDAR and cameras. Compared to camera-based SLAM, LiDAR-based SLAM is better suited to complicated environments and is not susceptible to weather and illumination, so it has increasingly become a hot topic in autonomous driving. However, there has been relatively little research on LiDAR-based SLAM algorithms in rugged scenes. The following two issues remain unsolved: on the one hand, the small overlap area of two adjacent point clouds results in insufficient valuable features that can be extracted; on the other hand, the conventional feature matching method does not take point cloud pitching into account, which frequently results in matching failure. Hence, a LiDAR SLAM algorithm based on neighborhood information constraints (LoNiC) for rugged terrain is proposed in this study. Firstly, we obtain feature points with surface information using the distribution of normal vector angles in the neighborhood and extract discriminative features through the local surface information of the point cloud, to improve the descriptive ability of feature points in rugged scenes. Secondly, we provide a multi-scale constraint description based on point cloud curvature, normal vector angle, and Euclidean distance to enhance the algorithm's discrimination of the differences between feature points and prevent mis-registration. Subsequently, in order to lessen the impact of the initial pose value on the precision of point cloud registration, we introduce a dynamic iteration factor to the registration process and modify the corresponding relationship of the matching point pairs by adjusting the distance and angle thresholds. Finally, evaluation on the KITTI and JLU campus datasets verifies that the proposed algorithm significantly improves the accuracy of mapping. Specifically, in rugged scenes, the mean relative translation error is 0.0173%, and the mean relative rotation error is 2.8744°/m, reaching the level of current state-of-the-art (SOTA) methods. Full article
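
The neighborhood normal information that drives the feature extraction can be illustrated with a plain PCA normal estimate over k-nearest neighbors. This is a generic sketch; the paper's normal-vector-angle distribution and multi-scale constraints are not reproduced.

```python
import numpy as np

def knn_normals(points, k=10):
    """Estimate a unit normal for each point as the eigenvector of its
    k-neighborhood covariance with the smallest eigenvalue (local plane fit)."""
    n = points.shape[0]
    # Brute-force pairwise distances; fine for small demo clouds.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbr = np.argsort(d2, axis=1)[:, :k]
    normals = np.zeros_like(points)
    for i in range(n):
        q = points[nbr[i]] - points[nbr[i]].mean(axis=0)
        cov = q.T @ q / k
        w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
        normals[i] = v[:, 0]                # smallest-eigenvalue direction
    return normals

# Noisy horizontal plane: normals should point roughly along +/- z.
rng = np.random.default_rng(4)
plane = np.c_[rng.uniform(-1, 1, (300, 2)), 0.01 * rng.normal(size=300)]
nrm = knn_normals(plane, k=12)
print(np.abs(nrm[:, 2]).mean())             # close to 1
```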

16 pages, 4339 KiB  
Article
An Innovative and Cost-Effective Traffic Information Collection Scheme Using the Wireless Sniffing Technique
by Wei-Hsun Lee, Teng-Jyun Liang and Hsuan-Chih Wang
Vehicles 2022, 4(4), 996-1011; https://doi.org/10.3390/vehicles4040054 - 30 Sep 2022
Viewed by 1526
Abstract
In recent years, the wireless sniffing technique (WST) has become an emerging technique for collecting real-time traffic information. The spatiotemporal variations in wireless signal collection from vehicles provide various types of traffic information, such as travel time, speed, traveling path, and vehicle turning proportion at an intersection, which can be widely used for traffic management applications. However, three problems challenge the applicability of the WST to traffic information collection: the transportation mode classification problem (TMP), lane identification problem (LIP), and multiple devices problem (MDP). In this paper, a WST-based intelligent traffic beacon (ITB) with machine learning methods, including SVM, KNN, and AP, is designed to solve these problems. Several field experiments are conducted to validate the proposed system: three sensor topologies (X-type, rectangle-type, and diamond-type topologies) with two wireless sniffing schemes (Bluetooth and Wi-Fi). Experiment results show that X-type has the best performance among all topologies. For sniffing schemes, Bluetooth outperforms Wi-Fi. With the proposed ITB solution, traffic information can be collected in a more cost-effective way. Full article

18 pages, 10057 KiB  
Article
A Coupled Visual and Inertial Measurement Units Method for Locating and Mapping in Coal Mine Tunnel
by Daixian Zhu, Kangkang Ji, Dong Wu and Shulin Liu
Sensors 2022, 22(19), 7437; https://doi.org/10.3390/s22197437 - 30 Sep 2022
Cited by 2 | Viewed by 1895
Abstract
Mobile robots moving fast or operating in scenes with poor lighting often lose visual feature tracking. In coal mine tunnels, the ground is often bumpy and the lighting is uneven, so a mobile robot moving through such a scene experiences severe jolting. Localization based on visual features is greatly affected by illumination and by the speed of camera movement. To solve the localization and mapping problem in environments similar to underground coal mine tunnels, we improve a localization and mapping algorithm based on a monocular camera and an Inertial Measurement Unit (IMU). A feature-matching method that combines point and line features is designed to improve the robustness of the algorithm in the presence of degraded scene structure and insufficient illumination. The tightly coupled method is used to establish visual feature constraints and IMU pre-integration constraints. A keyframe nonlinear optimization algorithm based on sliding windows is used to accomplish state estimation. Extensive simulations and practical environment verification show that the improved simultaneous localization and mapping (SLAM) system with monocular camera and IMU fusion can achieve accurate autonomous localization and map construction in scenes with insufficient light, such as coal mine tunnels. Full article

17 pages, 4788 KiB  
Article
Unifying Deep ConvNet and Semantic Edge Features for Loop Closure Detection
by Jie Jin, Jiale Bai, Yan Xu and Jiani Huang
Remote Sens. 2022, 14(19), 4885; https://doi.org/10.3390/rs14194885 - 30 Sep 2022
Cited by 1 | Viewed by 1367
Abstract
Loop closure detection is an important component of Simultaneous Localization and Mapping (SLAM). In this paper, a novel two-branch loop closure detection algorithm unifying deep Convolutional Neural Network (ConvNet) features and semantic edge features is proposed. In detail, we use one feature extraction module to extract both ConvNet and semantic edge features simultaneously. The deep ConvNet features are subjected to a Context Feature Enhancement (CFE) module in the global feature ranking branch to generate a representative global feature descriptor. Concurrently, to reduce the interference of dynamic features, the extracted semantic edge information of landmarks is encoded through the Vector of Locally Aggregated Descriptors (VLAD) framework in the semantic edge feature ranking branch to form semantic edge descriptors. Finally, semantic, visual, and geometric information is integrated by the similarity score fusion calculation. Extensive experiments on six public datasets show that the proposed approach can achieve competitive recall rates at 100% precision compared to other state-of-the-art methods. Full article
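
VLAD itself is a standard aggregation step; a minimal NumPy version over made-up local descriptors and a random codebook (in practice the codebook is learned, e.g., by k-means) looks like this:

```python
import numpy as np

def vlad(descriptors, centers):
    """VLAD: assign each local descriptor to its nearest center, accumulate the
    residuals (descriptor - center) per center, then power- and L2-normalize."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)
    k, dim = centers.shape
    v = np.zeros((k, dim))
    for c in range(k):
        if np.any(assign == c):
            v[c] = (descriptors[assign == c] - centers[c]).sum(axis=0)
    v = np.sign(v) * np.sqrt(np.abs(v))          # power normalization
    v = v.flatten()
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(5)
local_desc = rng.normal(size=(200, 16))          # e.g. edge-point descriptors of one image
codebook = rng.normal(size=(8, 16))              # visual words (normally learned by k-means)
print(vlad(local_desc, codebook).shape)          # (8 * 16,) = (128,)
```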

24 pages, 7357 KiB  
Article
Where Am I? SLAM for Mobile Machines on a Smart Working Site
by Yusheng Xiang, Dianzhao Li, Tianqing Su, Quan Zhou, Christine Brach, Samuel S. Mao and Marcus Geimer
Vehicles 2022, 4(2), 529-552; https://doi.org/10.3390/vehicles4020031 - 27 May 2022
Cited by 2 | Viewed by 1968
Abstract
The current optimization approaches of construction machinery are mainly based on internal sensors. However, a reasonable strategy is determined not only by intrinsic signals but also, very strongly, by environmental information, especially the terrain. Due to the dynamically changing construction site and the consequent absence of a high-definition map, Simultaneous Localization and Mapping (SLAM) offering terrain information for construction machines is still challenging. Current SLAM technologies proposed for mobile machines depend strongly on costly or computationally expensive sensors, such as RTK GPS and cameras, so commercial use is rare. In this study, we proposed an affordable SLAM method to create a multi-layer grid map of the construction site so that the machine can have the environmental information and be optimized accordingly. Concretely, after the machine passes over a grid cell, we obtain the local information and record it. Combined with positioning technology, we then create a map of the places of interest on the construction site. Based on our simulations in Gazebo, we showed that a suitable layout is the combination of one IMU and two differential GPS antennas using the unscented Kalman filter, which keeps the average distance error lower than 2 m and the mapping error lower than 1.3% in a harsh environment. As an outlook, our SLAM technology provides the cornerstone for many efficiency improvement approaches. Full article

15 pages, 4310 KiB  
Article
3D Object Detection Based on Attention and Multi-Scale Feature Fusion
by Minghui Liu, Jinming Ma, Qiuping Zheng, Yuchen Liu and Gang Shi
Sensors 2022, 22(10), 3935; https://doi.org/10.3390/s22103935 - 23 May 2022
Cited by 12 | Viewed by 2932
Abstract
Three-dimensional object detection in the point cloud can provide more accurate object data for autonomous driving. In this paper, we propose a method named MA-MFFC that uses an attention mechanism and a multi-scale feature fusion network with ConvNeXt module to improve the accuracy of object detection. The multi-attention (MA) module contains point-channel attention and voxel attention, which are used in voxelization and 3D backbone. By considering the point-wise and channel-wise, the attention mechanism enhances the information of key points in voxels, suppresses background point clouds in voxelization, and improves the robustness of the network. The voxel attention module is used in the 3D backbone to obtain more robust and discriminative voxel features. The MFFC module contains the multi-scale feature fusion network and the ConvNeXt module; the multi-scale feature fusion network can extract rich feature information and improve the detection accuracy, and the convolutional layer is replaced with the ConvNeXt module to enhance the feature extraction capability of the network. The experimental results show that the average accuracy is 64.60% for pedestrians and 80.92% for cyclists on the KITTI dataset, which is 1.33% and 2.1% higher, respectively, compared with the baseline network, enabling more accurate detection and localization of more difficult objects. Full article
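
The exact MA module is not specified here, but the channel-attention idea it builds on can be sketched as a squeeze-and-excitation-style reweighting of per-point voxel features; the bottleneck weights below are random stand-ins, not trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style reweighting: global-average-pool over points,
    pass through a small bottleneck MLP, and scale each channel of every point."""
    squeeze = features.mean(axis=0)                    # (C,)  global channel statistics
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0)) # (C,)  per-channel weights in (0, 1)
    return features * gate                             # broadcast over points

rng = np.random.default_rng(6)
C, reduction = 32, 4
voxel_points = rng.normal(size=(64, C))                # 64 points in one voxel, C channels
w1 = rng.normal(size=(C // reduction, C)) * 0.1        # bottleneck weights (random stand-ins)
w2 = rng.normal(size=(C, C // reduction)) * 0.1
out = channel_attention(voxel_points, w1, w2)
print(out.shape)
```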

13 pages, 1591 KiB  
Article
Research on Dual-Frequency Electromagnetic False Alarm Interference Effect of a Typical Radar
by Xue Du, Guanghui Wei, Kai Zhao, Hongze Zhao and Xuxu Lyu
Sensors 2022, 22(9), 3574; https://doi.org/10.3390/s22093574 - 07 May 2022
Cited by 2 | Viewed by 1474
Abstract
In order to master the position variation rule of radar false alarm signals under continuous wave (CW) electromagnetic interference and reveal the mechanism of CW interference on radar, a certain type of stepped-frequency radar is taken as the research object, and the imaging mechanism of radar CW electromagnetic interference false alarm signals is first analyzed theoretically from the perspective of time-frequency decoupling and receiver signal processing. Secondly, the electromagnetic interference injection method is used to test the single-frequency and dual-frequency electromagnetic interference effects on the tested equipment. The results show that under single-frequency CW electromagnetic interference, the sensitive bandwidth of the false alarm signal is about ±75 MHz, and the position of the false alarm signal changes irregularly. Under in-band dual-frequency CW electromagnetic interference, the position of the non-intermodulation false alarm signal is similar to that of the single-frequency case. However, the distance difference between the two non-intermodulation false alarm signals is regular. In addition, the positions of the second-order intermodulation false alarm signals of the tested radar are also regular, and they change with the second-order intermodulation frequency difference. Full article

26 pages, 8567 KiB  
Article
A SLAM System with Direct Velocity Estimation for Mechanical and Solid-State LiDARs
by Lu Jie, Zhi Jin, Jinping Wang, Letian Zhang and Xiaojun Tan
Remote Sens. 2022, 14(7), 1741; https://doi.org/10.3390/rs14071741 - 04 Apr 2022
Cited by 5 | Viewed by 3262
Abstract
Simultaneous localization and mapping (SLAM) is essential for intelligent robots operating in unknown environments. However, existing algorithms are typically developed for specific types of solid-state LiDARs, leading to weak feature representation abilities for new sensors. Moreover, LiDAR-based SLAM methods are limited by distortions caused by LiDAR ego motion. To address the above issues, this paper presents a versatile and velocity-aware LiDAR-based odometry and mapping (VLOM) system. A spherical projection-based feature extraction module is utilized to process the raw point cloud generated by various LiDARs, hence avoiding the time-consuming adaptation of various irregular scan patterns. The extracted features are grouped into higher-level clusters to filter out smaller objects and reduce false matching during feature association. Furthermore, bundle adjustment is adopted to jointly estimate the poses and velocities for multiple scans, effectively improving the velocity estimation accuracy and compensating for point cloud distortions. Experiments on publicly available datasets demonstrate the superiority of VLOM over other state-of-the-art LiDAR-based SLAM systems in terms of accuracy and robustness. Additionally, the satisfactory performance of VLOM on RS-LiDAR-M1, a newly released solid-state LiDAR, shows its applicability to a wide range of LiDARs. Full article

20 pages, 895 KiB  
Article
Multi-Target Localization of MIMO Radar with Widely Separated Antennas on Moving Platforms Based on Expectation Maximization Algorithm
by Jiaxin Lu, Feifeng Liu, Jingyi Sun, Yingjie Miao and Quanhua Liu
Remote Sens. 2022, 14(7), 1670; https://doi.org/10.3390/rs14071670 - 30 Mar 2022
Cited by 3 | Viewed by 1656
Abstract
This paper focuses on multi-target parameter estimation for multiple-input multiple-output (MIMO) radar with widely separated antennas on moving platforms. To handle the superimposed signals caused by multiple targets, the well-known expectation maximization (EM) algorithm is used in this paper. The target's radar cross-section (RCS) spatial variations, different path losses, and spatially non-white noise appear because of the widely separated antennas. These variables are collectively referred to as signal-to-noise ratio (SNR) fluctuations. To estimate the echo delay/Doppler shift and SNR, the Q function of the EM algorithm is extended. In addition, to reduce the computational complexity of the EM algorithm, gradient descent is used in the M-step of the EM algorithm. The modified EM algorithm is called the generalized adaptive EM (GAEM) algorithm. Then, a weighted iterative least squares (WILS) algorithm is used to jointly estimate the target positions and velocities based on the results of the GAEM algorithm. This paper also derives the Cramér-Rao bound (CRB) in such a non-ideal environment. Finally, extensive numerical simulations are carried out to validate the effectiveness of the proposed algorithm. Full article
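
The GAEM extensions (SNR estimation in the Q function, gradient-descent M-step) are specific to the paper, but the underlying E-step/M-step alternation is the textbook EM algorithm, shown here for a 1-D two-component Gaussian mixture.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Classic EM for a two-component 1-D Gaussian mixture:
    E-step computes responsibilities, M-step re-estimates weights/means/variances."""
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each component for every sample.
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood parameter updates.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var

rng = np.random.default_rng(7)
data = np.r_[rng.normal(-2, 0.5, 400), rng.normal(3, 1.0, 600)]
print(em_gmm_1d(data))   # weights ~ (0.4, 0.6), means ~ (-2, 3)
```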

18 pages, 14969 KiB  
Article
A Hierarchical Path Planning Approach with Multi-SARSA Based on Topological Map
by Shiguang Wen, Yufan Jiang, Ben Cui, Ke Gao and Fei Wang
Sensors 2022, 22(6), 2367; https://doi.org/10.3390/s22062367 - 18 Mar 2022
Cited by 11 | Viewed by 2298
Abstract
In this paper, a novel path planning algorithm with Reinforcement Learning is proposed based on the topological map. The proposed algorithm has a two-level structure. At the first level, the proposed method generates the topological area using a region dynamic growth algorithm based on the grid map. At the second level, the two-layer Multi-SARSA method is applied to find a near-optimal global planning path: the artificial potential field method is first used to initialize the first Q table for faster learning, and the second Q table is then initialized with the connected domain obtained from the topological map, which provides prior information. Combining the two makes the algorithm converge more easily. Simulation experiments for path planning have been executed. The results indicate that the method proposed in this paper can find the optimal path with a shorter path length, which demonstrates the effectiveness of the presented method. Full article
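
For reference, the tabular SARSA update that the two-layer Multi-SARSA scheme builds on can be written compactly. The grid world, rewards, and hyperparameters below are toy values, and neither the potential-field initialization nor the topological prior is included.

```python
import numpy as np

def sarsa_gridworld(size=5, episodes=500, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular SARSA on a size x size grid: start at (0, 0), goal at the opposite
    corner, reward -1 per step and +10 at the goal."""
    rng = np.random.default_rng(seed)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    q = np.zeros((size, size, 4))

    def policy(s):
        # Epsilon-greedy action selection over the Q table.
        return rng.integers(4) if rng.random() < eps else int(np.argmax(q[s]))

    for _ in range(episodes):
        s, a = (0, 0), policy((0, 0))
        while s != (size - 1, size - 1):
            ns = (min(max(s[0] + moves[a][0], 0), size - 1),
                  min(max(s[1] + moves[a][1], 0), size - 1))
            r = 10.0 if ns == (size - 1, size - 1) else -1.0
            na = policy(ns)
            # On-policy update: bootstrap with the action actually taken next.
            q[s][a] += alpha * (r + gamma * q[ns][na] - q[s][a])
            s, a = ns, na
    return q

q = sarsa_gridworld()
print(np.argmax(q[0, 0]))   # greedy first move learned from the start cell
```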

18 pages, 4149 KiB  
Article
Spatial Attention Frustum: A 3D Object Detection Method Focusing on Occluded Objects
by Xinglei He, Xiaohan Zhang, Yichun Wang, Hongzeng Ji, Xiuhui Duan and Fen Guo
Sensors 2022, 22(6), 2366; https://doi.org/10.3390/s22062366 - 18 Mar 2022
Viewed by 2221
Abstract
Achieving the accurate perception of occluded objects for autonomous vehicles is a challenging problem. Human vision can always quickly locate important object regions in complex external scenes, while other regions are only roughly analysed or ignored; this is known as the visual attention mechanism. However, the perception system of autonomous vehicles cannot know which part of the point cloud is in the region of interest. Therefore, it is meaningful to explore how to use the visual attention mechanism in the perception system of autonomous driving. In this paper, we propose the spatial attention frustum model to solve object occlusion in 3D object detection. The spatial attention frustum can suppress unimportant features and allocate limited neural computing resources to critical parts of the scene, thereby providing greater relevance and easier processing for higher-level perceptual reasoning tasks. To ensure that our method maintains good reasoning ability when faced with occluded objects with only a partial structure, we propose a local feature aggregation module to capture more complex local features of the point cloud. Finally, we discuss the projection constraint relationship between the 3D bounding box and the 2D bounding box and propose a joint anchor box projection loss function, which helps to improve the overall performance of our method. The results on the KITTI dataset show that our proposed method can effectively improve the detection accuracy of occluded objects. Our method achieves 89.46%, 79.91% and 75.53% detection accuracy in the easy, moderate, and hard difficulty levels of the car category, with a 6.97% performance improvement in the hard category, which has a high degree of occlusion. Our one-stage method does not rely on an additional refinement stage, yet its accuracy is comparable to that of two-stage methods. Full article
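
The frustum construction behind such methods can be sketched as selecting the 3D points whose pinhole projection falls inside a 2D detection box. The intrinsics and box below are hypothetical, and the attention weighting itself is not reproduced.

```python
import numpy as np

def frustum_points(points_cam, K, box2d):
    """Keep the 3D points (already in the camera frame) whose pinhole projection
    falls inside the 2D detection box [u_min, v_min, u_max, v_max]."""
    in_front = points_cam[:, 2] > 0.1                      # only points ahead of the camera
    p = points_cam[in_front]
    uv = (K @ p.T).T
    uv = uv[:, :2] / uv[:, 2:3]                            # perspective division
    u_min, v_min, u_max, v_max = box2d
    mask = (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & \
           (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
    return p[mask]

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])                            # hypothetical camera intrinsics
pts = np.random.default_rng(8).uniform([-20, -5, 0], [20, 5, 60], size=(5000, 3))
inside = frustum_points(pts, K, box2d=(600, 300, 700, 420))
print(inside.shape)
```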

18 pages, 14069 KiB  
Article
Development of a GPU-Accelerated NDT Localization Algorithm for GNSS-Denied Urban Areas
by Keon Woo Jang, Woo Jae Jeong and Yeonsik Kang
Sensors 2022, 22(5), 1913; https://doi.org/10.3390/s22051913 - 01 Mar 2022
Cited by 4 | Viewed by 2889
Abstract
There are numerous global navigation satellite system-denied regions in urban areas, where the localization of autonomous driving remains a challenge. To address this problem, a high-resolution light detection and ranging (LiDAR) sensor was recently developed. Various methods have been proposed to improve the accuracy of localization using precise distance measurements derived from LiDAR sensors. This study proposes an algorithm to accelerate the computational speed of LiDAR localization while maintaining the original accuracy of lightweight map-matching algorithms. To this end, first, a point cloud map was transformed into a normal distribution (ND) map. During this process, vector-based normal distribution transform, suitable for graphics processing unit (GPU) parallel processing, was used. In this study, we introduce an algorithm that enabled GPU parallel processing of an existing ND map-matching process. The performance of the proposed algorithm was verified using an open dataset and simulations. To verify the practical performance of the proposed algorithm, the real-time serial and parallel processing performances of the localization were compared using high-performance and embedded computers, respectively. The distance root-mean-square error and computational time of the proposed algorithm were compared. The algorithm increased the computational speed of the embedded computer almost 100-fold while maintaining high localization precision. Full article
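
The first step, turning a point cloud into an ND map of per-voxel Gaussians, can be sketched in NumPy. This is a serial CPU version for illustration; the paper's vector-based, GPU-parallel formulation is not reproduced.

```python
import numpy as np
from collections import defaultdict

def build_nd_map(points, cell=2.0, min_pts=5):
    """Normal-distributions map: for every occupied voxel keep the mean and
    covariance of its points, which NDT later matches a new scan against."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))
        cells[key].append(p)
    nd_map = {}
    for key, pts in cells.items():
        if len(pts) < min_pts:
            continue                                    # too few points for a stable covariance
        arr = np.asarray(pts)
        mean = arr.mean(axis=0)
        cov = np.cov(arr.T) + 1e-6 * np.eye(3)          # regularize near-singular cells
        nd_map[key] = (mean, cov)
    return nd_map

cloud = np.random.default_rng(9).uniform(-20, 20, size=(10000, 3))
nd = build_nd_map(cloud)
print(len(nd), "cells with Gaussian models")
```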

12 pages, 29144 KiB  
Article
Machine Vision-Based Method for Estimating Lateral Slope of Structured Roads
by Yunbing Yan and Haiwei Li
Sensors 2022, 22(5), 1867; https://doi.org/10.3390/s22051867 - 26 Feb 2022
Cited by 2 | Viewed by 2226
Abstract
Most studies on vehicle control and stability assume a known road lateral slope, while there are few studies on road lateral-slope estimation. In order to provide reliable slope parameters for subsequent studies, this paper presents a method of structured-road lateral-slope estimation based on machine vision. The relationship between the road lateral slope and the tangent slope of the lane line can be derived from the image-perspective principle; the coordinates of the pre-scan point are then obtained from the lane line, and the tangent slope of the lane line is used to obtain a more accurate estimate of the road lateral slope. In the implementation process, the lane-line feature information in front of the vehicle is obtained by machine vision, the lane-line function is fitted with an SCNN (Spatial CNN) algorithm, and the lateral slope is calculated using the estimation formula mentioned above. Finally, the road model and vehicle model are established in Prescan software for off-line simulation. The simulation results verify the effectiveness and accuracy of the method. Full article

11 pages, 445 KiB  
Technical Note
Target Localization Based on High Resolution Mode of MIMO Radar with Widely Separated Antennas
by Jiaxin Lu, Feifeng Liu, Hongjie Liu and Quanhua Liu
Remote Sens. 2022, 14(4), 902; https://doi.org/10.3390/rs14040902 - 14 Feb 2022
Cited by 3 | Viewed by 1651
Abstract
Coherent processing of multiple-input multiple-output (MIMO) radar with widely separated antennas has high resolution capability, but it also brings ambiguity in target localization. In view of the ambiguity problem, and unlike other signal processing sub-directions such as array configuration optimization or continuity of phase in space/time, this paper analyzes it at the information level, that is, a tracking method is adopted. First, by using the state equation and measurement equation, the echo data of multiple coherent processing intervals (CPI) are collected to improve the target localization accuracy as much as possible. Second, the non-coherent joint probability data association filter (JPDAF) is used to achieve stable tracking of spatially crossing targets without ambiguous measurements. Third, based on the tracking results of the non-coherent JPDAF, the ambiguity of the coherent measurement is resolved, that is, the coherent JPDAF is realized. By means of the non-coherent and coherent alternating JPDAF (NCCAF) algorithm, high-accuracy localization of multiple targets is achieved. Finally, numerical simulations are carried out to validate the effectiveness of the proposed NCCAF algorithm. Full article

12 pages, 32750 KiB  
Technical Note
Research of Distance-Intensity Imaging Algorithm for Pulsed LiDAR Based on Pulse Width Correction
by Shiyu Yan, Guohui Yang, Qingyan Li, Yue Wang and Chunhui Wang
Remote Sens. 2022, 14(3), 507; https://doi.org/10.3390/rs14030507 - 21 Jan 2022
Cited by 4 | Viewed by 2071
Abstract
Walk error has been problematic for pulsed LiDAR based on a single-threshold comparator. Traditionally, walk error must be suppressed by time discrimination methods with extremely complex electronic circuits and high costs. In this paper, we propose a compact and flexible method for reducing walk error and achieving distance-intensity imaging. A single-threshold comparator and a commercial time-to-digital converter chip are designed to measure the laser pulse's time of flight and pulse width. In order to obtain first-class measurement accuracy, we designed a specific pulse width correction method based on the Kalman filter to correct the laser recording time, significantly reducing the ranging walk error caused by echo intensity fluctuation. In addition, the pulse width obtained by our method, which is a record of the laser intensity, is conducive to target identification. The experimental results verified the plane point clouds of various targets obtained by the proposed method, with a plane flatness of less than 0.34. The novel contribution of the study is to provide a highly integrated and cost-effective solution for the realization of high-precision ranging and multi-dimensional detection by pulsed LiDAR. It is valuable for realizing multi-dimensional, high-performance, and low-cost LiDAR. Full article

15 pages, 3196 KiB  
Article
Vehicle Destination Prediction Using Bidirectional LSTM with Attention Mechanism
by Pietro Casabianca, Yu Zhang, Miguel Martínez-García and Jiafu Wan
Sensors 2021, 21(24), 8443; https://doi.org/10.3390/s21248443 - 17 Dec 2021
Cited by 6 | Viewed by 2793
Abstract
Satellite navigation has become ubiquitous for planning and tracking travel. Having access to a vehicle's position enables the prediction of its destination. This opens the possibility of various benefits, such as early warnings of potential hazards, route diversions to bypass traffic congestion, and optimizing fuel consumption for hybrid vehicles. Thus, reliably predicting destinations can bring benefits to the transportation industry. This paper investigates the use of deep learning methods for predicting a vehicle's destination based on its journey history. With this aim, Dense Neural Networks (DNNs), Long Short-Term Memory (LSTM) networks, Bidirectional LSTM (BiLSTM), and networks with and without attention mechanisms are tested. In particular, LSTM and BiLSTM models with attention mechanisms are commonly used for natural language processing and text-classification-related applications. This paper demonstrates the viability of these techniques in the automotive and associated industrial domain, aimed at generating industrial impact. The results on satellite navigation data show that the BiLSTM with an attention mechanism exhibits better destination prediction performance, achieving an average accuracy of 96% against the test set (4% higher than the average accuracy of the standard BiLSTM) and consistently outperforming the other models by maintaining robustness and stability during forecasting. Full article

16 pages, 4787 KiB  
Article
Position Estimation of Vehicle Based on Magnetic Marker: Time-Division Position Correction
by Yeun Sub Byun and Rag Gyo Jeong
Sensors 2021, 21(24), 8274; https://doi.org/10.3390/s21248274 - 10 Dec 2021
Viewed by 2582
Abstract
During the automatic driving of a vehicle, the vehicle's positional information is important for vehicle driving control. If fixed-point land markers such as magnetic markers are used, the vehicle's current position error can be calculated only when a marker is detected while driving, and this error can be used to correct the estimated position. Therefore, correction information is used irregularly and intermittently according to the installation intervals of the magnetic markers and the driving speed. If the detected errors are corrected all at once using the position correction method, discontinuity of the position information can occur. This problem causes instability in the vehicle's route guidance control because the position error fluctuates as the vehicle's speed increases. We devised a time-division position correction method that calculates the error using the absolute position of the magnetic marker, which is estimated when the magnetic marker is detected, along with the absolute position information from the magnetic marker database. Instead of correcting the error at once when the position and heading errors are corrected, the correction is performed by dividing the errors multiple times until the next magnetic marker is detected. This prevents sudden discontinuity of the vehicle position information, and the calculated correction amount is used without loss to obtain stable and continuous position information. We conducted driving tests to compare the performance of the proposed algorithm and conventional methods. We compared the continuity of the position information and the mean error and confirmed the superiority of the proposed method in these aspects. Full article
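
The core idea of the time-division correction, spreading a marker-derived error over the control cycles until the next marker, reduces to a few lines. This is a schematic sketch; the heading correction and the marker-database lookup are omitted.

```python
import numpy as np

def time_division_correction(position_error, steps):
    """Instead of applying the whole error at the marker instant, split it into
    equal increments applied at every control cycle until the next marker,
    so the reported position stays continuous."""
    increment = np.asarray(position_error, dtype=float) / steps
    return [increment.copy() for _ in range(steps)]

# Example: 0.30 m lateral, 0.10 m longitudinal error, 20 control cycles to the next marker.
corrections = time_division_correction([0.30, 0.10], steps=20)
applied = np.cumsum(corrections, axis=0)
print(applied[-1])   # after the last cycle the full error has been absorbed
```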

16 pages, 3098 KiB  
Technical Note
Compressed Sensing Imaging with Compensation of Motion Errors for MIMO Radar
by Haoran Li, Shuangxun Li, Zhi Li, Yongpeng Dai and Tian Jin
Remote Sens. 2021, 13(23), 4909; https://doi.org/10.3390/rs13234909 - 03 Dec 2021
Cited by 4 | Viewed by 1851
Abstract
Using a multiple-input multiple-output (MIMO) radar for environment sensing is gaining more attention in unmanned ground vehicles (UGVs). During the movement of the UGV, the position of the MIMO array will inevitably deviate from the ideal imaging position. Although compressed sensing (CS) imaging can provide high-resolution imaging results and reduce the complexity of the system, inaccurate MIMO array element positions will lead to defocused images. In this paper, a method is proposed to realize MIMO array motion error compensation and sparse imaging simultaneously. It utilizes a block coordinate descent (BCD) method, which iteratively estimates the motion errors of the transmitting and receiving elements while synchronously achieving autofocus imaging. The method accurately estimates and compensates for the motion errors of the transmitters and receivers, rather than approximating them as phase errors in the data. The validity of the proposed method is verified by simulations and measured experiments in a smoky environment. Full article

17 pages, 6413 KiB  
Article
A Distance Increment Smoothing Method and Its Application on the Detection of NLOS in the Cooperative Positioning
by Dongqing Zhao, Dongmin Wang, Minzhi Xiang, Jinfei Li, Chaoyong Yang, Letian Zhang and Linyang Li
Sensors 2021, 21(23), 8028; https://doi.org/10.3390/s21238028 - 01 Dec 2021
Viewed by 1647
Abstract
The wide use of cooperative missions using multiple unmanned platforms has made relative distance information an essential factor for cooperative positioning and formation control. Reducing the range error effectively in real time has become the main technical challenge. We present a new method to deal with ranging errors based on the distance increment (DI). The DI calculated by dead reckoning is used to smooth the DI obtained by the cooperative positioning, and the smoothed DI is then used to detect and estimate the non-line-of-sight (NLOS) error as well as to smooth the observed values containing random noise in the filtering process. Simulation and experimental results show that the relative accuracy of NLOS estimation is 8.17%, with the maximum random error reduced by 40.27%. The algorithm weakens the influence of NLOS and random errors on the measurement distance, thus improving the relative distance precision and enhancing the stability and reliability of cooperative positioning. Full article

24 pages, 24794 KiB  
Article
A Novel Anti-Drift Visual Object Tracking Algorithm Based on Sparse Response and Adaptive Spatial-Temporal Context-Aware
by Yinqiang Su, Jinghong Liu, Fang Xu, Xueming Zhang and Yujia Zuo
Remote Sens. 2021, 13(22), 4672; https://doi.org/10.3390/rs13224672 - 19 Nov 2021
Cited by 7 | Viewed by 2113
Abstract
Correlation filter (CF) based trackers have gained significant attention in the field of visual single-object tracking, owing to their favorable performance and high efficiency; however, existing trackers still suffer from model drift caused by boundary effects and filter degradation. In visual tracking, long-term occlusion and large appearance variations easily cause model degradation. To remedy these drawbacks, we propose a sparse adaptive spatial-temporal context-aware method that effectively avoids model drift. Specifically, a global context is explicitly incorporated into the correlation filter to mitigate boundary effects. Subsequently, an adaptive temporal regularization constraint is adopted in the filter training stage to avoid model degradation. Meanwhile, a sparse response constraint is introduced to reduce the risk of further model drift. Furthermore, we apply the alternating direction method of multipliers (ADMM) to derive a closed-form solution of the objective function with a low computational cost. In addition, an updating scheme based on the APCE-pool and Peak-pool is proposed to reveal the tracking condition and ensure that the target's appearance model is updated with high confidence. The Kalman filter is adopted to track the target when the appearance model is persistently unreliable and abnormality occurs. Finally, extensive experimental results on the OTB-2013, OTB-2015 and VOT2018 datasets show that our proposed tracker performs favorably against several state-of-the-art trackers. Full article

21 pages, 5253 KiB  
Article
An Overdispersed Black-Box Variational Bayesian–Kalman Filter with Inaccurate Noise Second-Order Statistics
by Lin Cao, Chuyuan Zhang, Zongmin Zhao, Dongfeng Wang, Kangning Du, Chong Fu and Jianfeng Gu
Sensors 2021, 21(22), 7673; https://doi.org/10.3390/s21227673 - 18 Nov 2021
Viewed by 1738
Abstract
To address the problems that the performance of filters derived from a hypothetical model declines, and the time cost of filters derived from a posterior model increases, when prior knowledge and the second-order statistics of the noise are uncertain, a new filter is proposed. In this paper, a Bayesian robust Kalman filter based on posterior noise statistics (KFPNS) is derived, and the recursive equations of this filter are very similar to those of the classical algorithm. Note that the posterior noise distributions are approximated by overdispersed black-box variational inference (O-BBVI). More precisely, we introduce an overdispersed distribution to push more probability density into the tails of the variational distribution and incorporate the idea of importance sampling into the two strategies of control variates and Rao–Blackwellization in order to reduce the variance of the estimators. As a result, the convergence process speeds up. From the simulations, we can observe that the proposed filter has good performance for models with uncertain noise. Moreover, we verify the proposed algorithm by using a practical multiple-input multiple-output (MIMO) radar system. Full article

27 pages, 8861 KiB  
Article
Adaptive Fast Non-Singular Terminal Sliding Mode Path Following Control for an Underactuated Unmanned Surface Vehicle with Uncertainties and Unknown Disturbances
by Yunsheng Fan, Bowen Liu, Guofeng Wang and Dongdong Mu
Sensors 2021, 21(22), 7454; https://doi.org/10.3390/s21227454 - 10 Nov 2021
Cited by 13 | Viewed by 2290
Abstract
This paper focuses on robust adaptive path following for an uncertain underactuated unmanned surface vehicle with a time-varying large sideslip angle and actuator saturation. An improved line-of-sight guidance law based on a reduced-order extended state observer is proposed to address the large sideslip angle that occurs in practical navigation. Next, a finite-time disturbance observer is designed by considering the parameter perturbations of the model and the unknown disturbances of the external environment as lumped disturbances. Then, an adaptive term is introduced into Fast Non-singular Terminal Sliding Mode Control to design the path following controllers. Finally, considering the saturation of the actuator, an auxiliary dynamic system is introduced. By selecting appropriate design parameters, all signals of the whole path-following closed-loop system are ultimately bounded. Real-time path-following control can be achieved by transferring data from shipborne sensors such as GPS, combined inertial guidance and an anemoclinograph to the Fast Non-singular Terminal Sliding Mode controller. Two comparative examples were carried out to demonstrate the validity of the proposed control approach. Full article

24 pages, 3297 KiB  
Review
Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review
by Ildar Rakhmatulin, Andreas Kamilaris and Christian Andreasen
Remote Sens. 2021, 13(21), 4486; https://doi.org/10.3390/rs13214486 - 08 Nov 2021
Cited by 47 | Viewed by 13430
Abstract
Automation, including machine learning technologies, is becoming increasingly crucial in agriculture to increase productivity. Machine vision is one of the most popular parts of machine learning and has been widely used where advanced automation and control are required. The trend has shifted from classical image processing and machine learning techniques to modern artificial intelligence (AI) and deep learning (DL) methods. Based on large training datasets and pre-trained models, DL-based methods have proven to be more accurate than previous traditional techniques. Machine vision has wide applications in agriculture, including the detection of weeds and pests in crops. Variation in lighting conditions, failures of transfer learning, and object occlusion constitute key challenges in this domain. Recently, DL has gained much attention due to its advantages in object detection, classification, and feature extraction. DL algorithms can automatically extract information from large amounts of data used to model complex problems and are, therefore, suitable for detecting and classifying weeds and crops. We present a systematic review of AI-based systems to detect weeds, emphasizing recent trends in DL. Various DL methods are discussed to clarify their overall potential, usefulness, and performance. This study indicates that several limitations obstruct the widespread adoption of AI/DL in commercial applications. Recommendations for overcoming these challenges are summarized. Full article

13 pages, 2171 KiB  
Article
A Robust and Fast Method for Sidescan Sonar Image Segmentation Based on Region Growing
by Xuyang Wang, Luyu Wang, Guolin Li and Xiang Xie
Sensors 2021, 21(21), 6960; https://doi.org/10.3390/s21216960 - 20 Oct 2021
Cited by 13 | Viewed by 2453
Abstract
For high-resolution sidescan sonar images, accurate and fast segmentation is crucial for underwater target detection and recognition. However, due to the low signal-to-noise ratio (SNR) and complex environmental noise of sonar, the existing methods with high accuracy and good robustness are mostly iterative methods with high complexity and poor real-time performance. For this purpose, a region growing based segmentation using the likelihood ratio testing method (RGLT) is proposed. This method obtains the seed points in the highlight and shadow regions by likelihood ratio testing based on the statistical probability distribution and then grows them according to a similarity criterion. The growth avoids processing the seabed reverberation regions, which account for the largest proportion of sonar images, thus greatly reducing segmentation time and improving segmentation accuracy. In addition, a pre-processing filtering method called standard deviation filtering (STDF) is proposed to improve the SNR and remove speckle noise. Experiments were conducted on three sonar databases, which showed that RGLT significantly improves quantitative metrics such as accuracy and speed, as well as the visual quality of the segmentation. The average accuracy and running time of the proposed segmentation method for 100 × 400 images are 95.90% and 0.44 s, respectively. Full article
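
Seeded region growing itself is a standard operation; a minimal NumPy sketch on a synthetic highlight patch is shown below. Seed selection by likelihood-ratio testing and the STDF pre-filter are not reproduced, and the tolerance value is arbitrary.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow a region from `seed` by adding 4-connected neighbors whose intensity
    differs from the running region mean by less than `tol`."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(img[nr, nc] - total / count) < tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask

# Synthetic "sonar" image: a bright 20 x 20 highlight on darker background noise.
rng = np.random.default_rng(10)
img = rng.normal(60, 3, size=(100, 100))
img[40:60, 40:60] += 80
print(region_grow(img, seed=(50, 50), tol=15.0).sum())   # roughly 400 pixels
```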
