Mobile Multi-Sensors in Positioning, Navigation, and Mapping Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (15 May 2021) | Viewed by 62723

Special Issue Editors


Dr. Sameh Nassar
Guest Editor
Mobile Multi-Sensor Systems Research Group, University of Calgary, Calgary, AB T2N 1N4, Canada
Interests: multisensor systems; signal processing; error modeling and optimal estimation

Prof. Dr. Aboelmagd Noureldin
Guest Editor
Department of Electrical and Computer Engineering, Royal Military College of Canada (RMCC), with cross-appointment at both the School of Computing and the Department of Electrical and Computer Engineering, Queen’s University, Kingston, ON K7L 3N6, Canada
Interests: inertial navigation; global navigation satellite systems; GPS; wireless location; navigation

Special Issue Information

Dear Colleagues,

Sensors have always been the core of any system used in positioning, navigation, and mapping. Mobile sensing in particular is the main component of such systems in land, airborne, and marine applications. In recent decades, it has become standard practice in these mobile systems to integrate different sensors that complement each other, thereby adding more capabilities to the overall system. These sensors include GNSS, inertial sensors (accelerometers and gyroscopes), magnetometers, compasses, odometers, vision-based sensors, LiDAR, scanners, etc. Although sensor integration has been implemented to improve overall system performance, it has introduced many challenges due to the added system complexity. This has led researchers to investigate several aspects such as sensor synchronization, data fusion, signal processing, sensor error models, integration schemes, and optimal estimation techniques. Moreover, with the advances in sensor technology, sensor costs are lower and their sizes are smaller. This has come at the price of larger sensor errors, which again has motivated researchers to investigate further approaches to overcome this issue.

Therefore, the main objective of this Special Issue is to feature current advances related to mobile multi-sensors in positioning, navigation, and mapping applications. Invited original research contributions can cover a wide range of topics, including but not limited to:

  • Sensor calibration and evaluation
  • Signal processing techniques
  • Sensor data fusion
  • Land, airborne, and marine applications
  • Sensor stochastic error models
  • Robotics
  • Autonomous driving
  • Optimal estimation techniques
  • Autonomous underwater vehicles (AUV)
  • Micro-electromechanical systems (MEMS)
  • Indoor positioning and navigation
  • Unmanned aerial vehicle (UAV) applications
  • Multi-sensor systems in challenging environments
  • Remote sensing applications
  • Pipeline surveying and monitoring
  • Vision-aided navigation

Dr. Sameh Nassar
Prof. Dr. Aboelmagd Noureldin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Mobile multi-sensors
  • Positioning, navigation and mapping
  • Optimal estimation
  • Data fusion
  • Signal processing
  • Error modeling

Published Papers (15 papers)


Research

22 pages, 6333 KiB  
Article
Fingerprint Feature Extraction for Indoor Localization
by Jehn-Ruey Jiang, Hanas Subakti and Hui-Sung Liang
Sensors 2021, 21(16), 5434; https://doi.org/10.3390/s21165434 - 12 Aug 2021
Cited by 16 | Viewed by 2754
Abstract
This paper proposes a fingerprint-based indoor localization method, named FPFE (fingerprint feature extraction), to locate a target device (TD) whose location is unknown. Bluetooth low energy (BLE) beacon nodes (BNs) are deployed in the localization area to emit beacon packets periodically. The received signal strength indication (RSSI) values of beacon packets sent by various BNs are measured at different reference points (RPs) and saved as RPs’ fingerprints in a database. For the purpose of localization, the TD also obtains its fingerprint by measuring the beacon packet RSSI values for various BNs. FPFE then applies either the autoencoder (AE) or principal component analysis (PCA) to extract fingerprint features. It then measures the similarity between the features of RPs and the TD with the Minkowski distance. Afterwards, k RPs associated with the k smallest Minkowski distances are selected to estimate the TD’s location. Experiments are conducted to evaluate the localization error of FPFE. The experimental results show that FPFE achieves an average error of 0.68 m, which is better than those of other related BLE fingerprint-based indoor localization methods. Full article
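As a loose illustration of the selection step at the end of this abstract, the sketch below picks the k reference points whose extracted features have the smallest Minkowski distance to the target device and averages their coordinates; the feature values, positions, and plain averaging are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def estimate_location(td_features, rp_features, rp_positions, k=3, p_order=2):
    """Sketch of the final localization step (feature extraction by AE/PCA is
    assumed to have been done already; all names and values are illustrative).

    td_features: feature vector of the target device (TD).
    rp_features: (n_rp, n_feat) feature matrix of the reference points (RPs).
    rp_positions: (n_rp, 2) known RP coordinates.
    The k RPs with the smallest Minkowski distance to the TD are selected and
    their coordinates are averaged here (the paper may combine them differently).
    """
    diffs = np.abs(rp_features - td_features) ** p_order
    dists = diffs.sum(axis=1) ** (1 / p_order)          # Minkowski distance
    nearest = np.argsort(dists)[:k]                     # k closest RPs
    return rp_positions[nearest].mean(axis=0)

# toy example: four RPs with 2-D extracted features and known positions
rp_feat = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
rp_pos = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 0.0], [4.0, 2.0]])
print(estimate_location(np.array([0.15, 0.85]), rp_feat, rp_pos, k=2))  # ~[0, 1]
```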

15 pages, 1165 KiB  
Article
Smartphone Location Recognition with Unknown Modes in Deep Feature Space
by Nati Daniel, Felix Goldberg and Itzik Klein
Sensors 2021, 21(14), 4807; https://doi.org/10.3390/s21144807 - 14 Jul 2021
Cited by 1 | Viewed by 2182
Abstract
Smartphone location recognition aims to identify the location of a smartphone on a user during specific actions such as talking or texting. This task is critical for accurate indoor navigation using pedestrian dead reckoning. Usually, for this task, a supervised network is trained on a set of defined user modes (smartphone locations) available during the training process. In such situations, when the user encounters an unknown mode, the classifier will be forced to identify it as one of the original modes it was trained on. Such classification errors will degrade the navigation solution accuracy. A solution for detecting unknown modes based on a probability threshold over the existing modes fails to work with this problem setup. Therefore, to identify unknown modes, two end-to-end ML-based approaches are derived utilizing only the smartphone’s accelerometer measurements. Results using six different datasets show the ability of the proposed approaches to classify unknown smartphone locations with an accuracy of 93.12%. The proposed approaches can be easily applied to any other classification problem containing unknown modes. Full article
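The probability-threshold baseline mentioned in this abstract can be sketched in a few lines; the snippet below (mode names, probabilities, and threshold are invented) illustrates only that baseline, not the paper's proposed end-to-end approaches.

```python
import numpy as np

def detect_unknown_mode(probabilities, threshold=0.6):
    """Baseline idea only (not the paper's method): treat a window as an
    'unknown' smartphone location whenever the classifier's highest class
    probability falls below a confidence threshold.

    probabilities: softmax outputs of shape (n_windows, n_known_modes).
    Returns -1 for unknown windows, otherwise the index of the predicted mode.
    """
    top = probabilities.max(axis=1)
    pred = probabilities.argmax(axis=1)
    return np.where(top < threshold, -1, pred)

# toy outputs for three accelerometer windows over modes [pocket, texting, talking]
p = np.array([[0.92, 0.05, 0.03],      # confidently 'pocket'
              [0.40, 0.35, 0.25],      # low confidence -> unknown mode
              [0.10, 0.85, 0.05]])     # confidently 'texting'
print(detect_unknown_mode(p))          # [ 0 -1  1]
```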

35 pages, 16703 KiB  
Article
Design and Implementation of an Enhanced Matched Filter for Sidelobe Reduction of Pulsed Linear Frequency Modulation Radar
by Ahmed Azouz, Ashraf Abosekeen, Sameh Nassar and Mohamed Hanafy
Sensors 2021, 21(11), 3835; https://doi.org/10.3390/s21113835 - 01 Jun 2021
Cited by 3 | Viewed by 9191
Abstract
Pulse compression techniques are commonly used in linear frequency modulated (LFM) waveforms to improve the signal-to-noise ratios (SNRs) and range resolutions of pulsed radars, whose detection capabilities are affected by the sidelobes. In this study, a sidelobe reduction filter (SRF) was designed and implemented using software defined radio (SDR). An enhanced matched filter (EMF) that combines a matched filter (MF) and an SRF is proposed and was implemented. In contrast to the current commonly used approaches, the mathematical model of the SRF frequency response is extracted without depending on any iteration methods or adaptive techniques, which results in increased efficiency and computational speed for the developed model. The performance of the proposed EMF was verified through the measurement of four metrics, including the peak sidelobe ratio (PSLR), the impulse response width (IRW), the mainlobe loss ratio (MLR), and the receiver operational characteristics (ROCs) at different SNRs. The ambiguity function was then used to characterize the Doppler effect on the designed EMF. In addition, the detection of single and multiple targets using the proposed EMF was performed, and the results showed that it overcame the masking problem due to its effective reduction of the sidelobes. Hence, the practical application of the EMF matches the performance analysis. Moreover, when implementing the EMF proposed in this paper, it outperformed the common MF, especially when detecting targets moving at low speeds and having small radar cross-sections (RCS), even under severe masking conditions. Full article
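For readers less familiar with pulse compression, the sketch below generates a baseband LFM chirp, applies a classical matched filter, and measures the peak sidelobe ratio (PSLR) that the proposed EMF is designed to improve; the waveform parameters are arbitrary, and the SRF/EMF itself is not reproduced.

```python
import numpy as np

# Hypothetical LFM (chirp) pulse parameters, not taken from the paper.
fs = 10e6          # sampling rate [Hz]
T = 20e-6          # pulse width [s]
B = 2e6            # sweep bandwidth [Hz]
t = np.arange(0, T, 1 / fs)
k = B / T                                   # chirp rate
pulse = np.exp(1j * np.pi * k * t**2)       # baseband LFM pulse

# Classical matched filter: correlate with a time-reversed, conjugated replica.
mf = np.conj(pulse[::-1])
compressed = np.convolve(pulse, mf)

# Peak sidelobe ratio (PSLR): strongest sidelobe relative to the mainlobe peak.
mag = np.abs(compressed)
peak_idx = int(np.argmax(mag))
halfwidth = int(fs / B)                     # roughly one compressed-range bin
sidelobes = np.delete(mag, np.arange(max(0, peak_idx - halfwidth),
                                     peak_idx + halfwidth + 1))
pslr_db = 20 * np.log10(sidelobes.max() / mag[peak_idx])
# roughly -13 dB for an unweighted LFM; the SRF/EMF aims to push this lower
print(f"PSLR of the plain matched filter: {pslr_db:.1f} dB")
```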

16 pages, 5009 KiB  
Article
LiDAR-Based Glass Detection for Improved Occupancy Grid Mapping
by Haileleol Tibebu, Jamie Roche, Varuna De Silva and Ahmet Kondoz
Sensors 2021, 21(7), 2263; https://doi.org/10.3390/s21072263 - 24 Mar 2021
Cited by 23 | Viewed by 8800
Abstract
Creating an accurate awareness of the environment using laser scanners is a major challenge in the robotics and auto industries. LiDAR (light detection and ranging) is a powerful laser scanner that provides a detailed map of the environment. However, efficient and accurate mapping of the environment is yet to be obtained, as most modern environments contain glass, which is invisible to LiDAR. In this paper, a method to effectively detect and localise glass using LiDAR sensors is proposed. This new approach is based on the variation of range measurements between neighbouring point clouds, using a two-step filter. The first filter examines the change in the standard deviation of neighbouring clouds. The second filter uses the change in distance and intensity between neighbouring pulses to refine the results from the first filter and estimate the glass profile width before updating the Cartesian coordinates and range measurements reported by the instrument. Test results demonstrate the detection and localisation of glass and the elimination of errors caused by glass in occupancy grid maps. This novel method detects frameless glass from a long range, does not depend on an intensity peak, and achieves an accuracy of 96.2%. Full article
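A loose reading of the two-step neighbourhood filter can be sketched as follows; the thresholds, window size, and toy scan are invented, and the code is only meant to convey the idea of combining range-variation and intensity cues, not the authors' exact filter.

```python
import numpy as np

def flag_glass_candidates(ranges, intensities, win=5,
                          std_thresh=0.5, range_jump=1.0, inten_jump=0.3):
    """Loose sketch of a two-step neighbourhood filter; thresholds are invented.

    Step 1: flag beams whose neighbourhood shows a large spread in range.
    Step 2: keep only beams where the beam-to-beam range and intensity also
            change abruptly, as typically happens at the edges of glass panes.
    """
    n = len(ranges)
    local_std = np.array([np.std(ranges[max(0, i - win):i + win + 1])
                          for i in range(n)])
    step1 = local_std > std_thresh

    d_range = np.abs(np.diff(ranges, prepend=ranges[0]))
    d_inten = np.abs(np.diff(intensities, prepend=intensities[0]))
    step2 = (d_range > range_jump) & (d_inten > inten_jump)
    return step1 & step2

# toy scan: a wall at 4 m with a "glass" gap through which the beam sees 9 m
scan = np.full(90, 4.0); scan[40:50] = 9.0
inten = np.full(90, 1.0); inten[40:50] = 0.2
print(np.nonzero(flag_glass_candidates(scan, inten))[0])   # [40 50]: gap edges
```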

32 pages, 11489 KiB  
Article
Smart Artificial Markers for Accurate Visual Mapping and Localization
by Luis E. Ortiz-Fernandez, Elizabeth V. Cabrera-Avila, Bruno M. F. da Silva and Luiz M. G. Gonçalves
Sensors 2021, 21(2), 625; https://doi.org/10.3390/s21020625 - 18 Jan 2021
Cited by 14 | Viewed by 5012
Abstract
Artificial marker mapping is a useful tool for fast camera localization estimation with a certain degree of accuracy in large indoor and outdoor environments. Nonetheless, the level of accuracy can still be enhanced to allow the creation of applications such as the new Visual Odometry and SLAM datasets, low-cost systems for robot detection and tracking, and pose estimation. In this work, we propose to improve the accuracy of map construction using artificial markers (mapping method) and camera localization within this map (localization method) by introducing a new type of artificial marker that we call the smart marker. A smart marker consists of a square fiducial planar marker and a pose measurement system (PMS) unit. With a set of smart markers distributed throughout the environment, the proposed mapping method estimates the markers’ poses from a set of calibrated images and orientation/distance measurements gathered from the PMS unit. After this, the proposed localization method can localize a monocular camera with the correct scale, directly benefiting from the improved accuracy of the mapping method. We conducted several experiments to evaluate the accuracy of the proposed methods. The results show that our approach decreases the Relative Positioning Error (RPE) by 85% in the mapping stage and Absolute Trajectory Error (ATE) by 50% for the camera localization stage in comparison with the state-of-the-art methods present in the literature. Full article

17 pages, 4887 KiB  
Article
A Self-Diagnosis Method for Detecting UAV Cyber Attacks Based on Analysis of Parameter Changes
by Elena Basan, Alexandr Basan, Alexey Nekrasov, Colin Fidge, Ján Gamec and Mária Gamcová
Sensors 2021, 21(2), 509; https://doi.org/10.3390/s21020509 - 13 Jan 2021
Cited by 17 | Viewed by 3252
Abstract
We consider how to protect Unmanned Aerial Vehicles (UAVs) from Global Positioning System (GPS) spoofing attacks to provide safe navigation. The Global Navigation Satellite System (GNSS) is widely used for locating drones and is by far the most popular navigation solution. This is because of the simplicity and relatively low cost of this technology, as well as the accuracy of the transmitted coordinates. Nevertheless, there are many security threats to GPS navigation. These are primarily related to the nature of the GPS signal, as an intruder can jam and spoof the GPS signal. We discuss methods of protection against this type of attack; we also developed an experimental test stand and conducted attack scenarios against a drone’s GPS system. Data from the UAV’s flight log were collected and analyzed in order to see the attack’s impact on sensor readings. From this we identify a new method for detecting UAV anomalies by analyzing changes in the internal parameters of the UAV. This self-diagnosis method allows a UAV to independently assess the presence of changes in its own subsystems indicative of cyber attacks. Full article
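As a generic illustration of self-diagnosis by parameter-change analysis (not the authors' algorithm), the sketch below flags flight-log parameters whose change rate in a suspect flight deviates strongly from a nominal baseline; the log fields, thresholds, and data are invented.

```python
import numpy as np

def flag_anomalous_parameters(baseline_log, test_log, z_thresh=3.0):
    """Generic parameter-change check (illustrative only, not the paper's method).

    baseline_log / test_log map a flight-log field name (e.g. GPS altitude or a
    satellite count) to a 1-D array of samples from a nominal and a suspect
    flight. A field is flagged when its mean change rate in the suspect flight
    lies more than z_thresh baseline standard deviations from the nominal one.
    """
    flagged = {}
    for name, nominal in baseline_log.items():
        base_rates = np.diff(nominal)
        mu, sigma = base_rates.mean(), base_rates.std() + 1e-9
        z = abs(np.diff(test_log[name]).mean() - mu) / sigma
        if z > z_thresh:
            flagged[name] = round(z, 1)
    return flagged

# toy logs: under spoofing the GPS altitude drifts while the barometer stays flat
rng = np.random.default_rng(0)
nominal = {"gps_alt": 50.0 + rng.normal(0, 0.1, 100),
           "baro_alt": 50.0 + rng.normal(0, 0.1, 100)}
suspect = {"gps_alt": 50.0 + np.cumsum(np.full(100, 1.0)),   # steady drift
           "baro_alt": 50.0 + rng.normal(0, 0.1, 100)}
print(flag_anomalous_parameters(nominal, suspect))           # only gps_alt flagged
```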

17 pages, 1260 KiB  
Article
Estimation and Analysis of GNSS Differential Code Biases (DCBs) Using a Multi-Spacing Software Receiver
by Ye Wang, Lin Zhao and Yang Gao
Sensors 2021, 21(2), 443; https://doi.org/10.3390/s21020443 - 10 Jan 2021
Cited by 3 | Viewed by 2253
Abstract
In the use of global navigation satellite systems (GNSS) to monitor ionosphere variations by estimating total electron content (TEC), differential code biases (DCBs) in GNSS measurements are a primary source of errors. Satellite DCBs are currently estimated and broadcast to users by the International GNSS Service (IGS) using a network of GNSS hardware receivers whose internal structure is fixed. We propose an approach for satellite DCB estimation using a multi-spacing GNSS software receiver to analyze the influence of the correlator spacing on satellite DCB estimates and to estimate satellite DCBs based on observations at different correlator spacings from the software receiver. This software receiver-based approach is called multi-spacing DCB (MSDCB) estimation. In the software receiver approach, GNSS observations with different correlator spacings can be generated from intermediate frequency datasets. Since each correlator spacing allows the software receiver to output observations like a local GNSS receiver station, GNSS observations from different correlator spacings constitute a network of GNSS receivers, which makes it possible to use a single software receiver to estimate satellite DCBs. By comparing the MSDCBs to the IGS DCB products, the results show that the proposed correlator-spacing-flexible software receiver is able to estimate satellite DCBs with greater flexibility and cost-effectiveness than the current hardware receiver-based DCB estimation approach. Full article

26 pages, 8694 KiB  
Article
Modules and Techniques for Motion Planning: An Industrial Perspective
by Stefano Quer and Luz Garcia
Sensors 2021, 21(2), 420; https://doi.org/10.3390/s21020420 - 09 Jan 2021
Viewed by 2675
Abstract
Research on autonomous cars has become one of the main research paths in the automotive industry, with many critical issues that remain to be explored while considering the overall methodology and its practical applicability. In this paper, we present an industrial experience in which we build a complete autonomous driving system, from the sensor units to the car control equipment, and we describe its adoption and testing phase in the field. We report how we organize data fusion and map manipulation to represent the required reality. We focus on the communication and synchronization issues between the data-fusion device and the path-planner, between the CPU and the GPU units, and among different CUDA kernels implementing the core local planner module. In these frameworks, we propose simple representation strategies and approximation techniques which guarantee almost no penalty in terms of accuracy and large savings in terms of memory occupation and memory transfer times. We show how we adopt a recent implementation on parallel many-core devices, such as CUDA-based GPGPUs, to reduce the computational burden of rapidly exploring random trees used to explore the state space along a given reference path. We report on our use of the controller and the vehicle simulator. We run experiments on several real scenarios, and we report the paths generated with the different settings, with their relative errors and computation times. We prove that our approach can generate reasonable paths on a multitude of standard maneuvers in real time. Full article
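Since the abstract builds on rapidly exploring random trees (RRT), a minimal single-threaded 2-D RRT is sketched below for orientation; the paper's CUDA-parallel planner biased along a reference path is not reproduced, and all parameters here are arbitrary.

```python
import math, random

def rrt(start, goal, is_free, x_range, y_range,
        step=0.5, goal_tol=0.5, max_iter=5000):
    """Minimal 2-D RRT sketch (illustrative only; no edge collision checks).

    start/goal: (x, y) tuples; is_free(p) returns True if point p is collision-free.
    Returns a list of waypoints from start towards goal, or None on failure.
    """
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        # sample a random point, occasionally biasing towards the goal
        rnd = goal if random.random() < 0.1 else (
            random.uniform(*x_range), random.uniform(*y_range))
        # find the nearest existing node and step towards the sample
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], rnd))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), rnd)
        if d == 0:
            continue
        new = (nx + step * (rnd[0] - nx) / d, ny + step * (rnd[1] - ny) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # walk back up the tree to recover the waypoint list
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# toy query: free space except a circular obstacle at (5, 5)
free = lambda p: math.dist(p, (5.0, 5.0)) > 2.0
print(rrt((0.0, 0.0), (10.0, 10.0), free, (0.0, 10.0), (0.0, 10.0)) is not None)
```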

16 pages, 6680 KiB  
Article
Shortest Path Algorithm in Dynamic Restricted Area Based on Unidirectional Road Network Model
by Haitao Wei, Shusheng Zhang and Xiaohui He
Sensors 2021, 21(1), 203; https://doi.org/10.3390/s21010203 - 30 Dec 2020
Cited by 8 | Viewed by 3057
Abstract
Accurate and fast path calculation is essential for applications such as vehicle navigation systems and transportation network routing. Although many shortest path algorithms for restricted search areas have been developed over the past ten years to speed up path queries, their performance and practicality still need to be improved. To address this problem, this paper proposes a new method of calculating statistical parameters based on a unidirectional road network model that is more in line with the real world, together with a path planning algorithm for dynamically restricted search areas that constructs virtual boundaries at a lower confidence level. We conducted a detailed experiment on the proposed algorithm with the real road network of Zhengzhou. As the experiment shows, compared with existing algorithms, the proposed algorithm improves search performance significantly while still guaranteeing the optimal path solution. Full article
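The idea of pruning the search to a restricted area can be sketched with a Dijkstra search that only expands nodes inside a virtual boundary; the elliptical boundary and toy network below are assumptions for illustration, whereas the paper derives its boundary from statistical parameters of a unidirectional road-network model.

```python
import heapq, math

def restricted_dijkstra(graph, coords, start, goal, stretch=1.3):
    """Sketch of shortest-path search inside a restricted area (illustrative only).

    graph: {node: [(neighbour, edge_length), ...]} for a directed road network.
    coords: {node: (x, y)}. Nodes are only expanded if they lie inside an
    ellipse whose foci are start and goal and whose size is 'stretch' times
    the straight-line start-goal distance.
    """
    d_sg = math.dist(coords[start], coords[goal])

    def inside(n):
        return (math.dist(coords[n], coords[start]) +
                math.dist(coords[n], coords[goal])) <= stretch * d_sg

    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            if not inside(v):
                continue            # prune nodes outside the restricted area
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

# tiny one-way network: A -> B -> D plus a detour A -> C -> D far off-axis
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 3), "D": (2, 0)}
graph = {"A": [("B", 1.0), ("C", 3.2)], "B": [("D", 1.0)], "C": [("D", 3.2)]}
print(restricted_dijkstra(graph, coords, "A", "D"))   # 2.0; C is pruned
```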

20 pages, 10963 KiB  
Article
An Integrated Positioning and Attitude Determination System for Immersed Tunnel Elements: A Simulation Study
by Guanqing Li, Lasse Klingbeil, Florian Zimmermann, Shengxiang Huang and Heiner Kuhlmann
Sensors 2020, 20(24), 7296; https://doi.org/10.3390/s20247296 - 18 Dec 2020
Cited by 1 | Viewed by 2020
Abstract
Immersed tunnel elements need to be exactly controlled during their immersion process. The position and attitude of the element should be determined quickly and accurately to navigate the element from the holding area to its final location in the tunnel trench. In this paper, a newly developed positioning and attitude determination system, integrating a 3-antenna Global Navigation Satellite System (GNSS) system, an inclinometer and a range-measurement system, is presented. The system is designed to provide the absolute position of both ends of the element with sufficient accuracy in real time. Special attention in the accuracy analysis is paid to the influence of GNSS multipath error and the sound speed profile. Simulations are conducted to illustrate the performance of the system in different scenarios. If both elements are very close, the accuracies of the system are better than 0.02 m in the directions perpendicular to and along the tunnel axis. Full article

19 pages, 6036 KiB  
Article
Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network
by Dengji Zhou, Guizhou Wang, Guojin He, Tengfei Long, Ranyu Yin, Zhaoming Zhang, Sibao Chen and Bin Luo
Sensors 2020, 20(24), 7241; https://doi.org/10.3390/s20247241 - 17 Dec 2020
Cited by 30 | Viewed by 3223
Abstract
Building extraction from high spatial resolution remote sensing images is a hot spot in the field of remote sensing applications and computer vision. This paper presents a semantic segmentation model, a supervised method named Pyramid Self-Attention Network (PISANet). Its structure is simple, because it contains only two parts: one is the backbone of the network, which is used to learn the local features (short-distance context information around each pixel) of buildings from the image; the other is the pyramid self-attention module, which is used to obtain the global features (long-distance context information with other pixels in the image) and the comprehensive features (including color, texture, geometric and high-level semantic features) of the buildings. The network is an end-to-end approach. In the training stage, the input is the remote sensing image and the corresponding label, and the output is a probability map (the probability that each pixel is or is not a building). In the prediction stage, the input is the remote sensing image, and the output is the building extraction result. The complexity of the network structure was reduced so that it is easy to implement. The proposed PISANet was tested on two datasets. The results show that the overall accuracy reached 94.50 and 96.15%, the intersection-over-union reached 77.45 and 87.97%, and the F1 index reached 87.27 and 93.55%, respectively. In experiments on different datasets, PISANet obtained high overall accuracy, a low error rate and improved integrity of individual buildings. Full article
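A plain (non-pyramid) self-attention operation over a feature map is sketched below to show how each pixel gathers long-distance context; random weights stand in for learned parameters, so this illustrates only the mechanism, not PISANet's pyramid self-attention module.

```python
import numpy as np

def self_attention_2d(feat, d_k=16, seed=0):
    """Plain self-attention over a C x H x W feature map (mechanism sketch only).

    Random projection matrices stand in for learned query/key/value weights.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w).T                    # (N, C) with N = H*W pixels
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((c, d_k)) / np.sqrt(c) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv                # queries, keys, values
    scores = q @ k.T / np.sqrt(d_k)                 # (N, N) pairwise affinities
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over all pixels
    out = attn @ v                                  # each pixel aggregates context
    return out.T.reshape(d_k, h, w)

context = self_attention_2d(np.random.rand(32, 16, 16))
print(context.shape)    # (16, 16, 16): d_k context channels per pixel
```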

15 pages, 2423 KiB  
Article
Texture Synthesis Repair of RealSense D435i Depth Images with Object-Oriented RGB Image Segmentation
by Longyu Zhang, Hao Xia and Yanyou Qiao
Sensors 2020, 20(23), 6725; https://doi.org/10.3390/s20236725 - 24 Nov 2020
Cited by 9 | Viewed by 3569
Abstract
A depth camera is a kind of sensor that can directly collect distance information between an object and the camera. The RealSense D435i is a low-cost depth camera that is currently in widespread use. When collecting data, an RGB image and a depth image are acquired simultaneously. The quality of the RGB image is good, whereas the depth image typically has many holes. In many applications that use depth images, these holes can lead to serious problems. In this study, a repair method for depth images is proposed. The depth image is repaired using a texture synthesis algorithm with the RGB image, which is segmented through a multi-scale object-oriented method. An object difference parameter is added to the process of selecting the best sample block. In contrast with previous methods, the experimental results show that the proposed method avoids the erroneous filling of holes, the edges of the filled holes are consistent with the edges of the RGB image, and the repair accuracy is better. The root mean square error, peak signal-to-noise ratio, and structural similarity index measure of the repaired depth images against the ground-truth image were better than those obtained by two other methods. We believe that repairing the depth image can improve the results of depth image applications. Full article
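A very reduced sketch of segmentation-guided depth hole filling is given below; it replaces each hole with the median depth of its RGB segment, which conveys the role of the object-oriented segmentation but is far simpler than the paper's texture-synthesis repair.

```python
import numpy as np

def fill_depth_holes(depth, segments):
    """Reduced sketch (not the paper's method): every invalid depth pixel (NaN)
    is filled with the median valid depth inside the same RGB segment.

    depth: (H, W) float array with NaN holes.
    segments: (H, W) integer labels from a segmentation of the registered RGB image.
    """
    filled = depth.copy()
    for label in np.unique(segments):
        mask = segments == label
        valid = mask & ~np.isnan(depth)
        if valid.any():
            filled[mask & np.isnan(depth)] = np.median(depth[valid])
    return filled

# toy frame: two segments at different depths, with one hole in each
depth = np.array([[1.0, np.nan, 1.1, 3.0],
                  [1.0, 1.2,    np.nan, 3.1]])
segments = np.array([[0, 0, 0, 1],
                     [0, 0, 1, 1]])
print(fill_depth_holes(depth, segments))   # holes become 1.05 and 3.05
```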

17 pages, 5736 KiB  
Article
Optical and Mass Flow Sensors for Aiding Vehicle Navigation in GNSS Denied Environment
by Mohamed Moussa, Shady Zahran, Mostafa Mostafa, Adel Moussa, Naser El-Sheimy and Mohamed Elhabiby
Sensors 2020, 20(22), 6567; https://doi.org/10.3390/s20226567 - 17 Nov 2020
Cited by 9 | Viewed by 2802
Abstract
Nowadays, autonomous vehicles have attracted a lot of research interest regarding navigation, perception of the surrounding environment, and control. Global Navigation Satellite System/Inertial Navigation System (GNSS/INS) integration is one of the significant components of any vehicle navigation system. However, GNSS has limitations in some operating scenarios, such as urban regions and indoor environments, where the GNSS signal suffers from multipath or outages. On the other hand, the INS standalone navigation solution degrades over time due to INS errors. Therefore, a modern vehicle navigation system depends on the integration of different sensors to aid the INS by mitigating its drift during GNSS signal outages. However, there are some challenges for the aiding sensors related to their high price, high computational costs, and environmental and weather effects. This paper proposes an integrated aiding navigation system for vehicles in an indoor environment (e.g., underground parking). The proposed system is based on the integration of optical flow and multiple mass flow sensors to aid the low-cost INS by providing the navigation extended Kalman filter (EKF) with forward velocity and change-of-heading updates to enhance the vehicle navigation. The optical flow is computed for frames taken with a consumer portable device (CPD) camera mounted in the upward-looking direction to avoid moving objects in front of the camera and to exploit the typical features of underground parking garages or tunnels, such as ducts and pipes. On the other hand, the multiple mass flow sensor measurements are modeled to provide forward velocity information. Moreover, a mass flow differential odometry is proposed, where the vehicle change of heading is estimated from the multiple mass flow sensor measurements. This integrated aiding system can be used for unmanned aerial vehicle (UAV) and land vehicle navigation. However, the experimental results presented here are for land vehicles, through the integration of the CPD with mass flow sensors to aid the navigation system. Full article
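The forward-velocity aiding mentioned above boils down to a Kalman measurement update; the sketch below shows a generic one-axis update (state, covariances, and noise values are invented), not the paper's full EKF navigation filter.

```python
import numpy as np

def velocity_update(x, P, v_meas, r_v=0.25):
    """Illustrative Kalman measurement update with a forward-velocity aid.

    State x = [position, velocity] along the forward axis, covariance P.
    v_meas is the velocity derived from an aiding sensor (e.g. optical or
    mass flow); r_v is its assumed measurement noise variance.
    """
    H = np.array([[0.0, 1.0]])                 # we observe velocity only
    S = H @ P @ H.T + r_v                      # innovation covariance
    K = P @ H.T / S                            # Kalman gain (2x1)
    innov = v_meas - (H @ x)[0]                # measured minus predicted velocity
    x = x + (K * innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# the INS prediction says 12 m/s but has drifted; the flow sensors say 10.2 m/s
x = np.array([100.0, 12.0])
P = np.diag([4.0, 2.0])
x, P = velocity_update(x, P, v_meas=10.2)
print(np.round(x, 2), np.round(np.diag(P), 2))   # velocity pulled towards 10.2
```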

17 pages, 8619 KiB  
Article
Multi-Constellation Software-Defined Receiver for Doppler Positioning with LEO Satellites
by Farzan Farhangian and René Landry, Jr.
Sensors 2020, 20(20), 5866; https://doi.org/10.3390/s20205866 - 16 Oct 2020
Cited by 47 | Viewed by 5413
Abstract
A Multi-Constellation Software-Defined Receiver (MC-SDR) is designed and implemented to extract Doppler measurements from the downlink signals of Low Earth Orbit (LEO) satellites, such as Orbcomm, Iridium-Next, Globalstar, Starlink, OneWeb, SpaceX, etc. Doppler positioning methods, as one of the main localization approaches, need a highly accurate receiver design to track the Doppler shift as a measurement for Extended Kalman Filter (EKF)-based positioning. In this paper, the designed receiver has been used to acquire and track the Doppler shifts of two different kinds of LEO constellations. The extracted Doppler shifts of one Iridium-Next satellite, as a burst-based simplex downlink signal, and two Orbcomm satellites, as continuous signals, are considered. Also, since the Two-Line Element (TLE) set is available for each satellite, the position and orbital elements of each satellite are known. Finally, the accuracy of the designed receiver is validated using an EKF-based stationary positioning algorithm with an adaptive measurement matrix. Satellite detection and Doppler tracking results are analyzed for each satellite. The positioning results for a stationary receiver showed an accuracy of about 132 m, which represents a 72% accuracy improvement compared to single-constellation positioning. Full article
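Doppler positioning relies on predicting the Doppler shift from the satellite state (known from the TLE) and a candidate receiver position; the textbook range-rate model below, with roughly LEO-like invented numbers, illustrates that measurement model rather than the paper's receiver or EKF.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]

def predicted_doppler(sat_pos, sat_vel, rcv_pos, f_carrier):
    """Predicted Doppler shift of a LEO downlink seen by a static receiver
    (textbook range-rate model; illustrative, not the paper's EKF).

    sat_pos, sat_vel: satellite ECEF position [m] and velocity [m/s].
    rcv_pos: receiver ECEF position [m]; f_carrier: downlink frequency [Hz].
    """
    los = sat_pos - rcv_pos
    los_unit = los / np.linalg.norm(los)
    range_rate = np.dot(sat_vel, los_unit)        # positive when range is growing
    return -range_rate / C * f_carrier            # approaching satellite -> +shift

# hypothetical numbers, roughly LEO-like (~7.5 km/s orbital speed, VHF downlink)
sat_pos = np.array([7.0e6, 0.0, 1.0e6])
sat_vel = np.array([0.0, 7.5e3, 0.0])
rcv_pos = np.array([6.371e6, 1.0e5, 0.0])
print(f"{predicted_doppler(sat_pos, sat_vel, rcv_pos, 137.5e6):.1f} Hz")
```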

20 pages, 8702 KiB  
Article
Pavement Crack Detection from Mobile Laser Scanning Point Clouds Using a Time Grid
by Mianqing Zhong, Lichun Sui, Zhihua Wang and Dongming Hu
Sensors 2020, 20(15), 4198; https://doi.org/10.3390/s20154198 - 28 Jul 2020
Cited by 26 | Viewed by 4916
Abstract
This paper presents a novel algorithm for detecting pavement cracks from mobile laser scanning (MLS) data. The algorithm losslessly transforms MLS data into a regular grid structure to adopt proven image-based methods of crack extraction. To address the lack of topology, this study assigns a two-dimensional index to each laser point depending on its scanning angle or acquisition time. Next, crack candidates are identified by integrating the differential intensity and height changes from their neighbors. Then, morphological filtering, a thinning algorithm, and Freeman codes serve for the extraction of the edges and skeletons of the crack curves. Going further than other studies, this work quantitatively evaluates crack shape parameters: crack direction, width, length, and area, from the extracted crack points. The F1 scores for the numbers of transverse, longitudinal, and oblique cracks correctly extracted from the test data reached 96.55%, 87.09%, and 81.48%, respectively. In addition, the average accuracy of the crack width and length exceeded 0.812 and 0.897. Experimental results demonstrate that the proposed approach is robust for detecting pavement cracks under complex road surface conditions. The proposed method is also promising for the extraction of other on-road objects. Full article
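The time-grid idea can be sketched as follows: each laser point is given a row index from its scan line (acquisition time) and a column index from its scanning angle, so image-style crack operators can be applied without resampling; field names, resolutions, and the toy data are assumptions, not the paper's implementation.

```python
import numpy as np

def build_time_grid(points, angle_res_deg=0.1):
    """Sketch of arranging MLS points on a regular 2-D grid without resampling
    (illustrative only; field names and resolutions are assumed).

    points: structured array with 'time' (scan-line timestamp), 'angle'
    (scanning angle in degrees) and 'intensity' per laser point.
    Row index = scan line (one scanner revolution, keyed by timestamp);
    column index = scanning-angle bin, so neighbouring cells are true
    neighbours on the road surface.
    """
    line_ids = {t: i for i, t in enumerate(np.unique(points['time']))}
    rows = np.array([line_ids[t] for t in points['time']])
    cols = np.round(points['angle'] / angle_res_deg).astype(int)
    cols -= cols.min()

    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    grid[rows, cols] = points['intensity']        # lossless: one cell per point
    return grid

# tiny synthetic example: 3 scan lines x 5 angle bins, one darker "crack" cell
pts = np.zeros(15, dtype=[('time', 'f8'), ('angle', 'f8'), ('intensity', 'f8')])
pts['time'] = np.repeat([0.00, 0.01, 0.02], 5)
pts['angle'] = np.tile(np.arange(5) * 0.1, 3)
pts['intensity'] = 0.8
pts['intensity'][7] = 0.2                          # crack candidate
print(build_time_grid(pts))
```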