Signals, Volume 3, Issue 2 (June 2022) – 15 articles

Cover Story: We propose an ensemble of convolutional neural networks and transformers to solve the semantic segmentation task. Diversity among models of the ensemble is enforced by adopting different loss functions and testing different data augmentations. The developed solution is assessed through an extensive empirical evaluation in five different scenarios: polyp detection, skin detection, leukocytes recognition, environmental microorganism detection, and butterfly recognition. The final model provides state-of-the-art results.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 3026 KiB  
Article
A Perspective on Information Optimality in a Neural Circuit and Other Biological Systems
by Robert Friedman
Signals 2022, 3(2), 410-427; https://doi.org/10.3390/signals3020025 - 20 Jun 2022
Cited by 6 | Viewed by 2058
Abstract
The nematode worm Caenorhabditis elegans has a relatively simple neural system for analysis of information transmission from sensory organ to muscle fiber. Consequently, this study includes an example of a neural circuit from the nematode worm, and a procedure is shown for measuring its information optimality by use of a logic gate model. This approach is useful where the assumptions are applicable for a neural circuit, and also for choosing between competing mathematical hypotheses that explain the function of a neural circuit. In this latter case, the logic gate model can estimate computational complexity and distinguish which of the mathematical models require fewer computations. In addition, the concept of information optimality is generalized to other biological systems, along with an extended discussion of its role in genetic-based pathways of organisms. Full article
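
To make the logic-gate idea concrete, here is a minimal, hypothetical Python sketch (not taken from the paper): two candidate Boolean-circuit hypotheses are checked for producing the same sensory-to-motor mapping, and the one needing fewer gate operations is preferred as the computationally simpler explanation.

```python
# Illustrative sketch only: a toy logic-gate model of a small sensory-to-motor
# circuit, used to count the Boolean operations each candidate hypothesis needs.
# The gate choices and circuit structure are hypothetical, not taken from the paper.
from itertools import product

def hypothesis_a(s1, s2, s3):
    """Candidate circuit A: two interneurons implemented as AND/OR gates."""
    inter1 = s1 and s2          # gate 1
    return inter1 or s3         # gate 2; 2 gate operations total

def hypothesis_b(s1, s2, s3):
    """Candidate circuit B: logically equivalent wiring that needs one more gate."""
    inter1 = s1 or s3           # gate 1
    inter2 = s2 or s3           # gate 2
    return inter1 and inter2    # gate 3; 3 gate operations total

# Both hypotheses realize the same input/output mapping if their truth tables
# agree; the one with fewer gates is then the computationally simpler model.
table_a = [hypothesis_a(*bits) for bits in product([0, 1], repeat=3)]
table_b = [hypothesis_b(*bits) for bits in product([0, 1], repeat=3)]
print("same mapping:", table_a == table_b, "| gate counts: A=2, B=3")
```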

14 pages, 588 KiB  
Article
Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept
by Ana S. Santos Cardoso, Rasmus L. Kæseler, Mads Jochumsen and Lotte N. S. Andreasen Struijk
Signals 2022, 3(2), 396-409; https://doi.org/10.3390/signals3020024 - 16 Jun 2022
Cited by 1 | Viewed by 1918
Abstract
Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, that render interfaces that rely on movement unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate the potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between both days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Implementing other brain signals may increase its speed. Full article
(This article belongs to the Special Issue Advancing Signal Processing and Analytics of EEG Signals)
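
As a rough illustration of the kind of control signal used above, the following sketch estimates relative parieto-occipital alpha-band (8–12 Hz) power with a Welch periodogram and thresholds it into an on/off command. The channel choice, threshold, and synthetic EEG are assumptions, not the authors' pipeline.

```python
# Hedged sketch: relative alpha-band power as a binary control signal.
import numpy as np
from scipy.signal import welch

def alpha_power(eeg_window, fs):
    """Relative 8-12 Hz power of one EEG window (e.g., an occipital channel)."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), 256))
    alpha = psd[(freqs >= 8) & (freqs <= 12)].sum()
    return alpha / psd.sum()

# Toy usage: "rest" windows carry a strong 10 Hz rhythm, "task" windows do not.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
rest = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
task = 0.5 * rng.standard_normal(t.size)

threshold = 0.3                             # assumed decision threshold
for name, win in [("rest", rest), ("task", task)]:
    p = alpha_power(win, fs)
    print(f"{name}: relative alpha power = {p:.2f} -> command {'on' if p > threshold else 'off'}")
```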

37 pages, 4200 KiB  
Review
A Survey on MIMO-OFDM Systems: Review of Recent Trends
by Houda Harkat, Paulo Monteiro, Atilio Gameiro, Fernando Guiomar and Hasmath Farhana Thariq Ahmed
Signals 2022, 3(2), 359-395; https://doi.org/10.3390/signals3020023 - 02 Jun 2022
Cited by 16 | Viewed by 6611
Abstract
MIMO-OFDM is a key technology and a strong candidate for 5G telecommunication systems. In the literature, there is no comprehensive survey that brings together all the points that need to be investigated concerning such systems. This in-depth review inspects and interprets the state of the art and addresses several research axes related to MIMO-OFDM systems. Two topics have received special attention: MIMO waveforms and MIMO-OFDM channel estimation. The existing MIMO hardware and software innovations, in addition to the MIMO-OFDM equalization techniques, are discussed concisely. In the literature, only a few authors have discussed the MIMO channel estimation and modeling problems for a variety of MIMO systems. However, to the best of our knowledge, no review paper has until now specifically discussed recent work on channel estimation and the equalization process for MIMO-OFDM systems. Hence, the current work focuses on analyzing the recently used algorithms in the field, which could be a rich reference for researchers. Moreover, some research perspectives are identified. Full article
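
For readers new to the channel-estimation topic the survey reviews, the sketch below shows the textbook least-squares (LS) pilot-aided estimate for a single antenna pair and one OFDM symbol. It is an illustrative baseline under assumed pilot spacing and channel length, not a method attributed to the paper.

```python
# Minimal sketch of LS pilot-aided channel estimation for one OFDM symbol on a
# single antenna pair; the pilot layout and channel model are illustrative assumptions.
import numpy as np

n_sub = 64                                  # subcarriers
pilot_idx = np.arange(0, n_sub, 8)          # comb-type pilots, every 8th tone
rng = np.random.default_rng(0)

h_time = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
H_true = np.fft.fft(h_time, n_sub)          # true frequency response

X = np.zeros(n_sub, dtype=complex)
X[pilot_idx] = 1.0                          # transmit known pilots (for clarity)
noise = (rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub)) * 0.05
Y = H_true * X + noise                      # received frequency-domain symbol

H_ls = Y[pilot_idx] / X[pilot_idx]          # LS estimate at pilot positions
H_hat = np.interp(np.arange(n_sub), pilot_idx, H_ls.real) \
        + 1j * np.interp(np.arange(n_sub), pilot_idx, H_ls.imag)

mse = np.mean(np.abs(H_hat - H_true) ** 2)
print(f"LS channel-estimation MSE: {mse:.4f}")
```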

18 pages, 759 KiB  
Article
An Empirical Study on Ensemble of Segmentation Approaches
by Loris Nanni, Alessandra Lumini, Andrea Loreggia, Alberto Formaggio and Daniela Cuza
Signals 2022, 3(2), 341-358; https://doi.org/10.3390/signals3020022 - 01 Jun 2022
Cited by 12 | Viewed by 2650
Abstract
Recognizing objects in images requires complex skills that involve knowledge about the context and the ability to identify the borders of the objects. In computer vision, this task is called semantic segmentation and it pertains to the classification of each pixel in an image. The task is of major importance in many real-life scenarios: in autonomous vehicles, it allows the identification of objects surrounding the vehicle; in medical diagnosis, it improves the ability to detect dangerous pathologies early and thus mitigates the risk of serious consequences. In this work, we propose a new ensemble method able to solve the semantic segmentation task. The model is based on convolutional neural networks (CNNs) and transformers. An ensemble uses many different models whose predictions are aggregated to form the output of the ensemble system. The performance and quality of the ensemble prediction are strongly connected with some factors; one of the most important is the diversity among individual models. In our approach, this is enforced by adopting different loss functions and testing different data augmentations. We developed the proposed method by combining DeepLabV3+, HarDNet-MSEG, and Pyramid Vision Transformers. The developed solution was then assessed through an extensive empirical evaluation in five different scenarios: polyp detection, skin detection, leukocytes recognition, environmental microorganism detection, and butterfly recognition. The model provides state-of-the-art results. Full article
(This article belongs to the Special Issue Advances in Image Processing and Pattern Recognition)
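
The fusion step of such an ensemble can be summarized in a few lines. The sketch below simply averages per-pixel foreground probabilities from several trained segmentation models and thresholds the mean; the stand-in models and threshold are assumptions, whereas the actual system combines DeepLabV3+, HarDNet-MSEG, and Pyramid Vision Transformer models trained with different losses and augmentations.

```python
# Sketch of the ensembling step only: average per-pixel foreground probabilities
# from several independently trained segmentation models and threshold the mean.
import numpy as np

def ensemble_masks(prob_maps, threshold=0.5):
    """prob_maps: list of (H, W) arrays of per-pixel foreground probabilities."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# Toy example with three "models" producing random probability maps.
rng = np.random.default_rng(1)
maps = [rng.random((256, 256)) for _ in range(3)]
fused = ensemble_masks(maps)
print(fused.shape, fused.dtype, fused.mean())
```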

15 pages, 16284 KiB  
Article
Antenna Boosters versus Flexible Printed Circuit Antennas for IoT Devices
by Jaume Anguera, Alejandro Fernández, Carles Puente, Aurora Andújar and Jaap Groot
Signals 2022, 3(2), 326-340; https://doi.org/10.3390/signals3020021 - 23 May 2022
Cited by 6 | Viewed by 2724
Abstract
Antennas should be small enough to fit in the limited space of IoT devices while, at the same time, providing multi-band operation across several bands and remaining stable when embedded in a device. In this regard, two different technologies are compared: the antenna booster and the flexible printed circuit antenna. The comparison is based on measured efficiency and shows that, although the antenna booster is more than fifty times smaller in area, it provides better efficiency across the frequency ranges of 698–960 MHz and 1710–2690 MHz on three different printed circuit boards (PCBs): a large PCB of 131 mm × 60 mm, a medium PCB of 95 mm × 42 mm, and a small PCB of 65 mm × 42 mm. Moreover, the flexible printed antenna depends on the mounting process, whereas the antenna booster does not. Full article
(This article belongs to the Special Issue Internet of Things for Smart Planet: Present and Future)

13 pages, 3185 KiB  
Article
Using Python for the Simulation of a Closed-Loop PI Controller for a Buck Converter
by Acacio M. R. Amaral and Antonio J. Marques Cardoso
Signals 2022, 3(2), 313-325; https://doi.org/10.3390/signals3020020 - 20 May 2022
Cited by 2 | Viewed by 3737
Abstract
This paper presents a Python-based simulation technique that can be used to predict the behavior of switch-mode non-isolated (SMNI) DC-DC converters operating in closed loop. The proposed technique can be implemented in an open-source numerical computation software, such as Scilab, Octave or Python, which makes it versatile and portable. The software used to implement the proposed technique is Python, since it is an open-source programming language, unlike MATLAB, which is one of the most-used programming and numeric computing platforms to simulate this type of system. The proposed technique requires the discretization of the equations that govern the open-loop operation of the converter, as well as the discretization of the transfer function of the controller. To simplify the implementation of the simulation technique, the code must be subdivided into different modules, which together form a package. The converter under analysis is a buck converter operating in continuous conduction mode (CCM). The proposed technique can be extended to any other SMNI DC-DC converter. The validation of the proposed technique is carried out by comparing its results with those obtained in LTspice. Full article
(This article belongs to the Topic Engineering Mathematics)
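
As a flavour of the simulation approach described above, here is a compact sketch (not the authors' code) that integrates an averaged CCM buck-converter model with forward-Euler steps and a discretized PI controller; the component values, gains, and time step are assumed for illustration.

```python
# Illustrative sketch, not the authors' code: forward-Euler simulation of an
# averaged buck converter in CCM with a discretized PI voltage controller.
import numpy as np

Vin, Vref = 24.0, 12.0            # input voltage, output set-point (V)
L, C, R = 100e-6, 220e-6, 10.0    # inductor (H), capacitor (F), load (ohm)
Kp, Ki = 0.05, 50.0               # PI gains (assumed)
dt, t_end = 1e-6, 20e-3           # integration step and simulated time (s)

n = int(t_end / dt)
iL = vC = integ = 0.0
vout = np.empty(n)

for k in range(n):
    err = Vref - vC
    integ += err * dt
    duty = min(max(Kp * err + Ki * integ, 0.0), 1.0)   # saturate duty cycle
    # averaged CCM buck model, forward-Euler discretization
    diL = (duty * Vin - vC) / L
    dvC = (iL - vC / R) / C
    iL += diL * dt
    vC += dvC * dt
    vout[k] = vC

print(f"final output voltage ~ {vout[-1]:.2f} V (set-point {Vref} V)")
```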

17 pages, 7296 KiB  
Article
COVID-19 Detection from Radiographs: Is Deep Learning Able to Handle the Crisis?
by Muhammad Saqib, Abbas Anwar, Saeed Anwar, Lars Petersson, Nabin Sharma and Michael Blumenstein
Signals 2022, 3(2), 296-312; https://doi.org/10.3390/signals3020019 - 11 May 2022
Cited by 7 | Viewed by 2255
Abstract
Deep learning in the last decade has been very successful in computer vision and machine learning applications. Deep learning networks provide state-of-the-art performance in almost all of the applications where they have been employed. In this review, we aim to summarize the essential deep learning techniques and then apply them to COVID-19, a highly contagious viral infection that wreaks havoc on everyone’s lives in various ways. According to the World Health Organization and scientists, more testing potentially helps contain the virus’s spread. The use of chest radiographs is one of the early screening tests for determining disease, as the infection affects the lungs severely. To detect the COVID-19 infection, this experimental survey investigates and automates the process of testing by employing state-of-the-art deep learning classifiers. Moreover, the viruses are of many types, such as influenza, hepatitis, and COVID. Here, our focus is on COVID-19. Therefore, we employ binary classification, where one class is COVID-19 while the other viral infection types are treated as non-COVID-19 in the radiographs. The classification task is challenging due to the limited number of scans available for COVID-19 and the minute variations in the viral infections. We aim to employ current state-of-the-art CNN architectures, compare their results, and determine whether deep learning algorithms can handle the crisis appropriately and accurately. We train and evaluate 34 models. We also provide the limitations and future direction. Full article
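
A typical baseline among the CNN classifiers such a survey evaluates is transfer learning from an ImageNet-pretrained backbone with a binary head. The sketch below shows one such setup in Keras; the directory layout, backbone choice, and hyperparameters are assumptions, not the exact models trained in the paper.

```python
# Sketch of one transfer-learning baseline of the kind the survey evaluates:
# a pretrained backbone with a new binary head (COVID-19 vs. non-COVID-19).
# The data directory, image size and hyperparameters are assumptions.
import tensorflow as tf

img_size = (224, 224)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "radiographs/train",                    # assumed folder layout: one class per subfolder
    image_size=img_size, batch_size=32, label_mode="binary")

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg", input_shape=img_size + (3,))
base.trainable = False                      # freeze backbone for the first stage

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),   # simple normalization (ResNet's own
                                            # preprocess_input could be used instead)
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```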

12 pages, 1807 KiB  
Article
Evolving Optimised Convolutional Neural Networks for Lung Cancer Classification
by Maximilian Achim Pfeffer and Sai Ho Ling
Signals 2022, 3(2), 284-295; https://doi.org/10.3390/signals3020018 - 05 May 2022
Cited by 8 | Viewed by 2589
Abstract
Detecting pulmonary nodules early significantly contributes to the treatment success of lung cancer. Several deep learning models for medical image analysis have been developed to help classify pulmonary nodules. The design of convolutional neural network (CNN) architectures, however, is still heavily reliant on human domain knowledge. Manually designing CNN architectures has been shown to limit the data’s utility by creating a co-dependency on the creator’s cognitive bias, which urges the development of smart CNN architecture design solutions. In this paper, an evolutionary algorithm is used to optimise the classification of pulmonary nodules with CNNs. The implementation of a genetic algorithm (GA) for CNN architecture design and hyperparameter optimisation is proposed, which approximates optimal solutions by implementing a range of bio-inspired mechanisms of natural selection and Darwinism. For comparison purposes, two manually designed deep learning models, FractalNet and Deep Local-Global Network, were trained. The results show an outstanding classification accuracy of the fittest GA-CNN (91.3%), which outperformed both manually designed models. The findings indicate that GAs pose advantageous solutions for diagnostic challenges, the development of which may be fully automated in the future using GAs to design and optimise CNN architectures for various clinical applications. Full article
(This article belongs to the Special Issue Deep Learning and Transfer Learning)
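
The GA loop itself can be sketched independently of the CNN training. In the outline below, each individual encodes a few architecture/hyperparameter genes, and a placeholder fitness stands in for "build the encoded CNN, train it, and return validation accuracy"; the gene space and operators are assumptions, not the paper's encoding.

```python
# Schematic of a genetic algorithm over CNN hyperparameters, in the spirit of the
# paper; the gene encoding, ranges and the placeholder fitness are assumptions.
import random

GENE_SPACE = {
    "n_conv_blocks": [2, 3, 4, 5],
    "filters":       [16, 32, 64, 128],
    "kernel_size":   [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout":       [0.0, 0.25, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in GENE_SPACE.items()}

def fitness(ind):
    # Placeholder: stands in for "train the encoded CNN and return val accuracy".
    return -abs(ind["n_conv_blocks"] - 4) - abs(ind["dropout"] - 0.25)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in GENE_SPACE}

def mutate(ind, rate=0.1):
    return {k: (random.choice(GENE_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

pop = [random_individual() for _ in range(20)]
for generation in range(10):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

print("fittest hyperparameters:", max(pop, key=fitness))
```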

18 pages, 3592 KiB  
Article
Activity Recognition Based on Millimeter-Wave Radar by Fusing Point Cloud and Range–Doppler Information
by Yuchen Huang, Wei Li, Zhiyang Dou, Wantong Zou, Anye Zhang and Zan Li
Signals 2022, 3(2), 266-283; https://doi.org/10.3390/signals3020017 - 02 May 2022
Cited by 17 | Viewed by 3847
Abstract
Millimeter-wave radar has demonstrated its high efficiency in complex environments in recent years, outperforming LiDAR and computer vision in human activity recognition in the presence of smoke, fog, and dust. In previous studies, researchers mostly analyzed either 2D (3D) point cloud or range–Doppler information from radar echo to extract activity features. In this paper, we propose a multi-model deep learning approach to fuse the features of both point clouds and range–Doppler for classifying six activities, i.e., boxing, jumping, squatting, walking, circling, and high-knee lifting, based on a millimeter-wave radar. We adopt a CNN–LSTM model to extract the time-serial features from point clouds and a CNN model to obtain the features from range–Doppler. Then we fuse the two features and input the fused feature into the fully connected layer for classification. We built a dataset based on a 3D millimeter-wave radar from 17 volunteers. The evaluation result based on the dataset shows that this method has higher accuracy than utilizing the two kinds of information separately, achieving a recognition accuracy of 97.26%, which is about 1% higher than other networks with only one kind of data as input. Full article
(This article belongs to the Special Issue Intelligent Wireless Sensing and Positioning)
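
The two-branch fusion described above can be expressed compactly with the Keras functional API: a CNN–LSTM branch over the point-cloud frame sequence, a CNN branch over the range–Doppler map, concatenation, and a softmax over the six activities. All tensor shapes and layer sizes below are assumptions.

```python
# Two-branch fusion sketch in the spirit of the paper; shapes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

n_frames, n_points, n_feat = 30, 64, 4     # frames, points per frame, (x, y, z, v)
rd_shape = (64, 64, 1)                     # range-Doppler map
n_classes = 6

# Branch 1: per-frame 1D conv over points, then LSTM across the frame sequence.
pc_in = layers.Input(shape=(n_frames, n_points, n_feat))
x = layers.TimeDistributed(layers.Conv1D(32, 3, activation="relu"))(pc_in)
x = layers.TimeDistributed(layers.GlobalMaxPooling1D())(x)
x = layers.LSTM(64)(x)

# Branch 2: 2D CNN over the range-Doppler map.
rd_in = layers.Input(shape=rd_shape)
y = layers.Conv2D(32, 3, activation="relu")(rd_in)
y = layers.MaxPooling2D()(y)
y = layers.Conv2D(64, 3, activation="relu")(y)
y = layers.GlobalAveragePooling2D()(y)

# Fuse the two feature vectors and classify.
fused = layers.concatenate([x, y])
h = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(n_classes, activation="softmax")(h)

model = tf.keras.Model(inputs=[pc_in, rd_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```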

17 pages, 3305 KiB  
Article
Personalized PPG Normalization Based on Subject Heartbeat in Resting State Condition
by Francesca Gasparini, Alessandra Grossi, Marta Giltri and Stefania Bandini
Signals 2022, 3(2), 249-265; https://doi.org/10.3390/signals3020016 - 18 Apr 2022
Cited by 3 | Viewed by 2938
Abstract
Physiological responses are currently widely used to recognize the affective state of subjects in real-life scenarios. However, these data are intrinsically subject-dependent, making machine learning techniques for data classification not easily applicable due to inter-subject variability. In this work, the reduction of inter-subject heterogeneity was considered in the case of Photoplethysmography (PPG), which was successfully used to detect stress and evaluate experienced cognitive load. To face the inter-subject heterogeneity, a novel personalized PPG normalization is herein proposed. A subject-normalized discrete domain where the PPG signals are properly re-scaled is introduced, considering the subject’s heartbeat frequency in resting state conditions. The effectiveness of the proposed normalization was evaluated in comparison to other normalization procedures in a binary classification task, where cognitive load and relaxed state were considered. The results obtained on two different datasets available in the literature confirmed that applying the proposed normalization strategy permitted increasing the classification performance. Full article
(This article belongs to the Special Issue Biosignals Processing and Analysis in Biomedicine)
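
The exact normalization is defined in the paper; the sketch below only illustrates the general idea of a subject-normalized domain, resampling each subject's PPG so that one resting heartbeat spans a fixed number of samples (using the resting heart rate) before a per-subject z-score. The target samples-per-beat value is an assumption.

```python
# Hedged sketch of the general idea only (the paper defines the exact procedure):
# re-scale a subject's PPG onto a subject-normalized discrete domain via a
# resampling factor tied to that subject's resting heartbeat frequency.
import numpy as np
from scipy.signal import resample

def personalized_normalize(ppg, fs, resting_hr_bpm, target_samples_per_beat=100):
    """Resample so one resting heartbeat spans a fixed number of samples, then z-score."""
    resting_hz = resting_hr_bpm / 60.0
    samples_per_beat = fs / resting_hz                      # current samples per beat
    factor = target_samples_per_beat / samples_per_beat     # subject-specific scale
    ppg_rescaled = resample(ppg, int(round(len(ppg) * factor)))
    return (ppg_rescaled - ppg_rescaled.mean()) / ppg_rescaled.std()

# Toy usage: a 64 Hz synthetic PPG-like signal and a 72 bpm resting heart rate.
fs = 64.0
t = np.arange(0, 30, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(personalized_normalize(ppg, fs, resting_hr_bpm=72).shape)
```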

14 pages, 1896 KiB  
Article
Applying and Comparing LSTM and ARIMA to Predict CO Levels for a Time-Series Measurements in a Port Area
by Evangelos D. Spyrou, Ioannis Tsoulos and Chrysostomos Stylios
Signals 2022, 3(2), 235-248; https://doi.org/10.3390/signals3020015 - 15 Apr 2022
Cited by 9 | Viewed by 2826
Abstract
Air pollution is a major problem in the everyday life of citizens, especially air pollution in the transport domain. Ships play a significant role in coastal air pollution, in conjunction with transport mobility in the broader area of ports. As such, ports should be monitored in order to assess air pollution levels and act accordingly. In this paper, we obtain CO values from environmental sensors that were installed in the broader area of the port of Igoumenitsa in Greece. Initially, we analysed the CO values and identified some extreme values in the dataset that indicated a potential event. Thereafter, we separated the dataset into 6-hour intervals and showed that there is an extremely high rise in certain hours. We transformed the dataset into a moving-average dataset, with the objective of reducing the extremely high values. We utilised a machine-learning algorithm, namely the univariate long short-term memory (LSTM) algorithm, to predict the time series collected from the port. We performed experiments using 100, 1000, and 7000 batches of data, and we report results on the model loss, the root-mean-square error, and the mean absolute error. We showed that, in the case with a batch number of 7000, the LSTM achieved a good prediction outcome. The proposed method was compared with the ARIMA model and the comparison results demonstrate the merit of the approach. Full article
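
A univariate LSTM forecaster of the kind described above fits in a few lines of Keras: windowed moving-average CO values predict the next value. The window length, network width, and training settings below are assumptions rather than the paper's configuration.

```python
# Sketch of a univariate LSTM forecaster: windowed moving-average CO values
# predict the next value. Settings are assumptions, not the paper's configuration.
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y            # LSTM expects (samples, steps, features)

# Toy stand-in for the moving-average CO series from the port sensors.
rng = np.random.default_rng(0)
co = np.convolve(rng.random(2000), np.ones(12) / 12, mode="valid")

X, y = make_windows(co)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X[:split], y[:split], validation_data=(X[split:], y[split:]),
          epochs=10, batch_size=64, verbose=0)
rmse = np.sqrt(model.evaluate(X[split:], y[split:], verbose=0)[0])
print(f"validation RMSE: {rmse:.4f}")
```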

26 pages, 8711 KiB  
Article
Skyfall: Signal Fusion from a Smartphone Falling from the Stratosphere
by Milton A. Garcés, Daniel Bowman, Cleat Zeiler, Anthony Christe, Tyler Yoshiyama, Brian Williams, Meritxell Colet, Samuel Takazawa and Sarah Popenhagen
Signals 2022, 3(2), 209-234; https://doi.org/10.3390/signals3020014 - 14 Apr 2022
Cited by 4 | Viewed by 2893
Abstract
A smartphone plummeted from a stratospheric height of 36 km, providing a near-real-time record of its rapid descent and ground impact. An app recorded and streamed useful internal multi-sensor data at high sample rates. Signal fusion with external and internal sensor systems permitted a more detailed reconstruction of the Skyfall chronology, including its descent speed, rotation rate, and impact deceleration. Our results reinforce the potential of smartphones as an agile and versatile geophysical data collection system for environmental and disaster monitoring IoT applications. We discuss mobile environmental sensing capabilities and present a flexible data model to record and stream signals of interest. The Skyfall case study can be used as a guide to smartphone signal processing methods that are transportable to other hardware platforms and operating systems. Full article
(This article belongs to the Special Issue Internet of Things for Smart Planet: Present and Future)

20 pages, 2777 KiB  
Article
XBeats: A Real-Time Electrocardiogram Monitoring and Analysis System
by Ahmed Badr, Abeer Badawi, Abdulmonem Rashwan and Khalid Elgazzar
Signals 2022, 3(2), 189-208; https://doi.org/10.3390/signals3020013 - 12 Apr 2022
Cited by 4 | Viewed by 3314
Abstract
This work presents XBeats, a novel platform for real-time electrocardiogram monitoring and analysis that uses edge computing and machine learning for early anomaly detection. The platform encompasses a data acquisition ECG patch with 12 leads to collect heart signals, perform on-chip processing, and transmit the data to healthcare providers in real time for further analysis. The ECG patch provides a dynamically configurable selection of the active ECG leads that can be transmitted to the backend monitoring system. The selection ranges from a single ECG lead to a complete 12-lead ECG testing configuration. XBeats implements a lightweight binary classifier for early anomaly detection to reduce the time to action should abnormal heart conditions occur. This initial detection phase is performed on the edge (i.e., the device paired with the patch) and alerts can be configured to notify designated healthcare providers. Further deep analysis can be performed on the full-fidelity 12-lead data sent to the backend. A fully functional prototype of XBeats has been implemented to demonstrate the feasibility and usability of the proposed system. Performance evaluation shows that XBeats can achieve up to 95.30% detection accuracy for abnormal conditions, while maintaining a high data acquisition rate of up to 441 samples per second. Moreover, the analytical results of the energy consumption profile show that the ECG patch provides up to 37 h of continuous 12-lead ECG streaming. Full article
(This article belongs to the Special Issue Internet of Things for Smart Planet: Present and Future)
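
The edge-side binary detector is, by design, lightweight. The sketch below illustrates one plausible shape for such a detector, cheap per-window statistics feeding a logistic-regression classifier, using synthetic stand-in data; the feature set and classifier are assumptions, not the XBeats implementation.

```python
# Sketch of a lightweight edge-side binary detector of the kind XBeats describes;
# the feature set and synthetic data are stand-ins, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def window_features(ecg_window):
    """Cheap per-window statistics suitable for on-edge computation."""
    diffs = np.diff(ecg_window)
    return [ecg_window.mean(), ecg_window.std(),
            ecg_window.max() - ecg_window.min(),
            np.mean(np.abs(diffs)), np.sqrt(np.mean(diffs ** 2))]

# Synthetic stand-in: "abnormal" windows get extra high-frequency content.
rng = np.random.default_rng(0)
normal = [np.sin(np.linspace(0, 8 * np.pi, 441)) + 0.05 * rng.standard_normal(441)
          for _ in range(300)]
abnormal = [w + 0.4 * rng.standard_normal(441) for w in normal]
X = np.array([window_features(w) for w in normal + abnormal])
y = np.array([0] * len(normal) + [1] * len(abnormal))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```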

15 pages, 454 KiB  
Article
Constructing Features Using a Hybrid Genetic Algorithm
by Ioannis G. Tsoulos
Signals 2022, 3(2), 174-188; https://doi.org/10.3390/signals3020012 - 06 Apr 2022
Viewed by 1870
Abstract
A hybrid procedure that incorporates grammatical evolution and a weight decaying technique is proposed here for various classification and regression problems. The proposed method has two main phases: the creation of features and the evaluation of these features. During the first phase, using grammatical evolution, new features are created as non-linear combinations of the original features of the datasets. In the second phase, based on the characteristics of the first phase, the original dataset is modified and a neural network trained with a genetic algorithm is applied to this dataset. The proposed method was applied to an extremely wide set of datasets from the relevant literature and the experimental results were compared with four other techniques. Full article
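
A heavily simplified stand-in for the two-phase idea is sketched below: randomly construct non-linear combinations of the original features, keep the set that cross-validates best, and evaluate a classifier on the transformed data. Grammatical evolution and the GA-trained neural network used in the paper are considerably more sophisticated; the expression set, dataset, and classifier here are assumptions.

```python
# Simplified stand-in for feature construction plus evaluation; not the paper's
# grammatical-evolution procedure. Expressions, dataset and classifier are assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def random_feature(X):
    """One candidate feature: a random non-linear combination of two columns."""
    i, j = rng.integers(0, X.shape[1], size=2)
    op = rng.choice(["mul", "diff", "sin_sum"])
    if op == "mul":
        return X[:, i] * X[:, j]
    if op == "diff":
        return X[:, i] - X[:, j]
    return np.sin(X[:, i] + X[:, j])

def score(features):
    return cross_val_score(LogisticRegression(max_iter=1000),
                           np.column_stack(features), y, cv=5).mean()

# Keep the best of a handful of randomly constructed three-feature sets.
best = max(([random_feature(X) for _ in range(3)] for _ in range(20)), key=score)
print(f"best constructed-feature CV accuracy: {score(best):.3f}")
```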

17 pages, 5126 KiB  
Article
Development of an Area Scan Step Length Measuring System Using a Polynomial Estimate of the Heel Cloud Point
by Nursyuhada Binti Haji Kadir, Joseph K. Muguro, Kojiro Matsushita, Senanayake Mudiyanselaga Namal Arosha Senanayake and Minoru Sasaki
Signals 2022, 3(2), 157-173; https://doi.org/10.3390/signals3020011 - 31 Mar 2022
Viewed by 1596
Abstract
Due to impaired mobility caused by aging, early detection and monitoring of gait parameters are very important to avoid substantial medical costs at a later age. For gait training and potential tele-monitoring applications outside clinical settings, low-cost yet highly reliable gait analysis systems are needed. This research proposes using a single LiDAR system to perform automatic gait analysis with polynomial fitting. The experimental setup for this study consists of two different walking speeds, fast walk and normal walk, along a 5-m straight line. Ten test subjects (mean age 28, SD 5.2) voluntarily participated in the study. We performed polynomial fitting to estimate the step length from the heel projection cloud point laser data as the subject walks forward and compared the values with the visual inspection method. The results showed that the visual inspection method is accurate up to 6 cm while the polynomial method achieves 8 cm in the worst case (fast walking). With the accuracy difference estimated to be at most 2 cm, the polynomial method provides reliable heel location estimation compared with observational gait analysis. The proposed method presents an accuracy improvement of 4% over the previously proposed dual-laser range sensor method, which reported 57.87 cm ± 10.48 cm, an error of 10%; our method reported ±0.0633 m, a 6% error, for normal walking. Full article
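
The polynomial-fitting idea can be illustrated with a toy example: fit a low-order polynomial to the noisy heel cloud points of each stance phase, take the fitted value as the heel position, and read step lengths as the distance between successive placements. The stance windows, polynomial degree, and synthetic data below are assumptions; the paper defines its own procedure.

```python
# Hedged sketch of the general idea (the paper defines its own procedure):
# estimate the heel position per stance phase from noisy points via a polynomial
# fit, then take step lengths as differences between successive placements.
import numpy as np

def heel_position(t, x, degree=2):
    """Estimate the heel's forward position over one stance from noisy points."""
    poly = np.poly1d(np.polyfit(t, x, degree))
    return float(poly(t).mean())            # average of the fitted curve

rng = np.random.default_rng(0)
step = 0.6                                   # true step length in metres
stances = []
for k in range(4):                           # four stance phases, 0.6 s each
    t = np.linspace(k, k + 0.6, 60)
    x = step * k + 0.02 * rng.standard_normal(t.size)   # noisy heel cloud points
    stances.append((t, x))

positions = [heel_position(t, x) for t, x in stances]
print("estimated step lengths:", np.round(np.diff(positions), 3))
```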
