Article

A Novel Hybrid NN-ABPE-Based Calibration Method for Improving Accuracy of Lateration Positioning System

by Milica Petrović 1,*,†, Maciej Ciężkowski 2,†, Sławomir Romaniuk 2, Adam Wolniakowski 2 and Zoran Miljković 1

1 Faculty of Mechanical Engineering, University of Belgrade, 11120 Belgrade, Serbia
2 Faculty of Electrical Engineering, Białystok University of Technology, 15-351 Białystok, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2021, 21(24), 8204; https://doi.org/10.3390/s21248204
Submission received: 3 November 2021 / Revised: 30 November 2021 / Accepted: 3 December 2021 / Published: 8 December 2021
(This article belongs to the Topic Artificial Intelligence in Sensors)

Abstract

Positioning systems based on the lateration method utilize distance measurements and the knowledge of the location of the beacons to estimate the position of the target object. Although most of the global positioning techniques rely on beacons whose locations are known a priori, miscellaneous factors and disturbances such as obstacles, reflections, signal propagation speed, the orientation of antennas, measurement offsets of the beacon hardware, electromagnetic noise, or delays can affect the measurement accuracy. In this paper, we propose a novel hybrid calibration method based on Neural Networks (NN) and Apparent Beacon Position Estimation (ABPE) to improve the accuracy of a lateration positioning system. The main idea of the proposed method is to use a two-step position correction pipeline that first performs the ABPE step to estimate the perceived positions of the beacons used in the standard position estimation algorithm and then corrects these initial estimates by filtering them with a multi-layer feed-forward neural network in the second step. In order to find an optimal neural network, 16 NN architectures with 10 learning algorithms and 12 different activation functions for hidden layers were implemented and tested in the MATLAB environment. The best training outcomes for the NNs were then employed in two real-world indoor scenarios: without and with obstacles. With the aim of validating the proposed methodology in a scenario where a fast set-up of the system is desired, we tested eight different uniform sampling patterns to establish the influence of the number of training samples on the accuracy of the system. The experimental results show that the proposed hybrid NN-ABPE method can achieve a high level of accuracy even in scenarios where only a small number of calibration reference points is measured.

1. Introduction

One of the founding principles in the Industry 4.0 paradigm is the emphasis on the autonomy of the agents participating in the technological process. In order to achieve the autonomy of the mobile agents, e.g., Automated Guided Vehicles (AGVs) or mobile robots, it is paramount to provide a source of reliable navigational data, such as the information about the position of the agent or the layout of the environment it is working in. The former is handled by employing various types of positioning systems.
Accurate, precise, and reliable navigational data are especially important in applications where a given AGV has to cooperate with another vehicle or object, e.g., warehouse inventory inspection [1], cargo carriage in a storehouse [2], or autonomous picking and palletizing [3]. Another example of an implementation of a reliable positioning system that is worth mentioning is customer navigation in a retail shop [4]. Demesure et al. [5] present a navigation approach for mobile agents in an AGV-based manufacturing system. Sprunk et al. [6] propose a complex navigational system for an omnidirectional robot to implement enhanced logistic technology in industrial environment applications.
Different absolute positioning systems exist that use miscellaneous operating principles, such as Time-Of-Flight (TOF), Time Difference Of Arrival (TDOA), Phase Of Arrival (POA), or Received Signal Strength Indicator (RSSI) of a signal [7,8]; however, these can be generally divided, according to the basic calculation principle, into those using lateration- and angulation-based techniques. The former utilizes the distance measurements between a set of reference points (often referred to as anchors or beacons) and the tracked object, while the latter relies on the angles measured between the object and the beacons. Nevertheless, measurements made in the real world are subject to noise coming from various sources. This noise, propagated through the position estimation algorithm, results in inaccurate estimates, thus limiting the usefulness of the system; therefore, it is essential to establish a procedure to mitigate the effect of the measurement noise on the output of the positioning system.
In our research, we utilize an Ultra Wide Band (UWB) technology-based positioning system due to its high applicability and current popularity for indoor applications. Moreover, the UWB rangefinder modules can be characterized by high accuracy (∼10 cm), the ability to propagate the signal through thin, non-metallic obstacles, and a satisfactory range in an indoor environment [9,10]; however, this technology, similarly to other TOF-based ranging technologies, is highly sensitive to Non-Line-Of-Sight (NLOS) measurements. A radio signal traveling through any medium that is not a vacuum has an extended propagation time due to the lower speed, which results in an overestimated distance measurement. Moreover, there can be different anchor-specific bias errors caused by antenna misalignment, clock bias, wear of electronic parts, etc. [11]. Having that in mind, in the last two decades extensive research has been carried out in the field of lateration positioning systems, primarily focused on the development of algorithms used for the minimization of the estimated positioning error.
Sinriech et al. [12] analyzed configuration sensitivity of landmark navigation methods to improve the accuracy of AGV-based material handling systems operating in an industrial environment. Experimental results, including simulations of the static and dynamic performance of the vehicle, indicate that the triangulation positioning system is sensitive to noisy data as well as to different landmark and vehicle configurations.
Loevsky and Shimshoni [13] proposed another efficient landmark-based system for indoor localization of mobile robots and AGVs: based on the efficient triangulation method and sensors for bearing measurements of different landmarks, the proposed localization system enables a mobile robot to be accurately localized in motion and eliminate misidentified landmarks.
Aksu et al. [14] proposed a neural-network-based method to estimate the location of Bluetooth-enabled devices. A multi-layer perceptron network model with a back-propagation learning algorithm was applied to predict the 2D coordinate location according to the Received Signal Strength Information (RSSI) collected by three Bluetooth USB Adapters (BUAs). Although the authors drew conclusions related to the effect of training sets inside and outside of the triangle formed by the three BUAs, they did not analyze the effects of different neural network architectures and learning algorithms on the estimation accuracy, nor how the number of training samples affects the set-up time.
Pelka et al. [15] developed an iterative algorithm, which was applied to determine the anchor position according to available distance measurements between anchors. Although the position problem is solved with the mean error of 0.62 m and without requirements for GPS data or prior knowledge, simulation results indicate that the precision of the distance measurements significantly affects the outcome of the algorithm.
Pierlot and Droogenbroeck [16] presented BeAMS, an example of a beacon-based system for angle measurement in mobile robotic applications. This low-power and flexible solution for robot positioning requires only one communication channel and is used for both angle measurement and beacon identification. The authors proposed the mechanical design of the sensor as well as a theoretical analysis of the errors of the measured angles. The proposed model was compared with simulated and real measurements, and the achieved final error is lower than 0.24°.
Meissner et al. [17] proposed an indoor positioning algorithm based on the UWB signal and the a priori given floor plan information. The robust and accurate indoor localization was achieved with a receiver that uses single and double reflections of the transmitted signal in the room walls. According to a priori known room geometry, the measured reflections of the transmitted signal are mapped to virtual anchors with known positions and further used to estimate the unknown position of the receiver. Although the authors proposed the scheme for mapping measurements to the virtual anchors, they did not investigate the influence of the number of calibration reference points to achieve a short set-up procedure time.
Soltani et al. [18] conducted research to improve the Cluster-based Movable Tag Localization (CMTL) [19]. They proposed a localization method based on a Radio Frequency Identification (RFID) system for the localization of resources and used neural networks to overcome the limitations of empirical weighted averaging formulas. The proposed method forms a grid of virtual reference tags within the selected cluster of real reference tags and uses neural networks to obtain the position of the target tag. However, the authors did not consider NN architectures with more than one hidden layer, nor the impact of different learning algorithms and activation functions on the localization accuracy of the target tag.
Another example of the optimization of a positioning system is the use of additional calibration modules with a different number of calibration units to improve the average-position error in a 3D real-time localization system [20]. Three localization methods were used in the proposed research. The optimal configuration of calibration units was obtained in a simulation and tested in two real-world experiments. A small number of calibration units provides the best improvement-to-cost ratio; however, the most significant improvement of the average-position error is in the Z (vertical) direction; therefore, the proposed system is not recommended for 2D lateration positioning systems.
In our previous work, we proposed a method for improving the accuracy of the static infrared (IR) triangulation positioning system and eliminating errors caused by signal disturbances (e.g., reflections and multipathing) and/or inaccurate determination of the position of the beacons [21]. The presented methodology uses beacon–receiver angles that are measured by the receiver being placed at the known reference points and thereafter estimates the apparent beacon positions. The main advantage of the proposed methodology is that the a priori information about the locations of the beacons is not required. In further research, we developed a calibration method for a lateration positioning system based on measuring beacon–receiver distances. According to measurements based on lateration and known reference positions of the receivers, the proposed method estimates unknown positions of the beacons (i.e., apparent positions) and compensates for the static errors [22].
Compared to the previously reported state-of-the-art methods, the major contributions of the paper can be summarized as follows:
  • A new hybrid procedure based on ABPE and NNs is used to correct the positioning system measurements;
  • Different neural network architectures are employed in order to find the optimally tuned parameters for the proposed calibration problem, i.e., 16 neural network architectures with 10 learning algorithms and 12 different activation functions for hidden layers are trained and validated in the MATLAB environment to learn and predict measured positions;
  • The performance of the novel hybrid NN-ABPE-based method in terms of both the set-up time and accuracy is compared to the state-of-the-art calibration methods, i.e., mapping with a distortion model, Bias and Scale Factor Estimation (BSFE), and Apparent Beacon Position Estimation (ABPE). Experimental results obtained in two different scenarios (environment with and without obstacles) confirmed the effectiveness of the proposed methodology to predict positioning system measurement errors in real-world situations.

2. Methods

Let us consider a lateration-based local positioning system, where the positions of beacons/anchors are known, and the position of the tag/receiver has to be determined. In order to find the receiver position, the beacon–receiver distance measurements ($d_i$ in Equation (1)) and the known beacon positions $(X_i, Y_i)$ are used to define the cost function described by Equation (1), which can be minimized with respect to the unknown receiver position $(x, y)$ with the Nonlinear Least Squares (NLS) method.
$$\mathop{\arg\min}_{(x,y) \in \mathbb{R}^2} \sum_{i=0}^{m} \left( d_i - \sqrt{(X_i - x)^2 + (Y_i - y)^2} \right)^2 \qquad (1)$$
More on the lateration method and NLS itself can be found in [23,24].
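As a concrete illustration of Equation (1), the sketch below estimates the receiver position with a generic nonlinear least-squares solver. It is not the authors' implementation (the paper's experiments were carried out in MATLAB); the beacon layout and distance values are made-up placeholders.

```python
# Illustrative sketch (not the paper's MATLAB code): estimate the receiver
# position (x, y) by minimizing the lateration cost of Equation (1) with a
# nonlinear least-squares solver. Beacon coordinates and distances are made up.
import numpy as np
from scipy.optimize import least_squares

beacons = np.array([[0.0, 0.0], [6.5, 0.0], [6.5, 5.5], [0.0, 5.5]])  # (X_i, Y_i)
d = np.array([2.9, 4.4, 5.1, 3.7])  # measured beacon-receiver distances d_i [m]

def residuals(r):
    x, y = r
    return d - np.hypot(beacons[:, 0] - x, beacons[:, 1] - y)

# Least-squares solution starting from the centre of the workspace
sol = least_squares(residuals, x0=np.array([3.0, 2.5]))
print("estimated receiver position:", sol.x)
```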

2.1. Position Correction in Positioning Systems

The position estimated directly through the Least Squares Method is prone to be inaccurate due to the influence of various disturbing factors. In order to improve the accuracy of the system, this initial estimate has to be corrected.
One of the common correction methods is to assume that the accuracy of the position estimate can be improved by applying a correction function to the original estimate (2):
$$r^* = C(r, F) \qquad (2)$$
where $r^* = [x^*, y^*]^T$ denotes the improved position estimate, $C$ is the correction function, $r = [x, y]^T$ is the original estimate, and $F$ is a set of correction function parameters. The selection of the correction function $C$ and the learning of its parameters $F$ is called the calibration procedure. Typically, the calibration involves the collection of a number of sample position measurements $R_m = [r_{m1}, r_{m2}, \ldots, r_{mn}]$ obtained through the positioning system and a matching number of ground-truth positions $R_r = [r_{r1}, r_{r2}, \ldots, r_{rn}]$ obtained through a reference system. The correction function parameters $F$ are then learned based on the relations between the sets $R_m$ and $R_r$. Once the calibration procedure is completed during the set-up of the system, subsequent position measurements can be corrected online during the system operation with little overhead. Because of the initial set-up cost associated with learning the correction function $C$, it is desirable to select a function that can be robustly trained on the least amount of training samples possible.

2.1.1. Distortion Model

A simple and common approach, derived from the image rectification procedure in vision [25,26], is to assume a distortion model in the form of a quadratic mapping between the original position estimate $r = [x, y]^T$ and the corrected estimate $r^* = [x^*, y^*]^T$, where lower-case $x$, $y$ and $x^*$, $y^*$ denote the original and corrected coordinates, respectively. Such a mapping can be defined as (3) and (4):
$$x^* = f_{11} x^2 + f_{12} x y + f_{13} y^2 + f_{14} x + f_{15} y + f_{16} \qquad (3)$$
$$y^* = f_{21} x^2 + f_{22} x y + f_{23} y^2 + f_{24} x + f_{25} y + f_{26} \qquad (4)$$
which can be conveniently expressed for the simultaneous transformation of multiple original estimates $R_{2 \times n} = [r_1, r_2, \ldots, r_n]$, where $r_i = [x_i, y_i]^T$, into corrected estimates $R^*_{2 \times n} = [r^*_1, r^*_2, \ldots, r^*_n]$ in the matrix form (5):
$$R^*_{2 \times n} = C(R_{2 \times n}, F_{2 \times 6}) = F_{2 \times 6} \cdot M_{6 \times n}(R) \qquad (5)$$
where the matrix $M_{6 \times n}(R)$ is constructed as (6):
$$M_{6 \times n}(R) = \begin{bmatrix} x_1^2 & x_2^2 & \cdots & x_n^2 \\ x_1 y_1 & x_2 y_2 & \cdots & x_n y_n \\ y_1^2 & y_2^2 & \cdots & y_n^2 \\ x_1 & x_2 & \cdots & x_n \\ y_1 & y_2 & \cdots & y_n \\ 1 & 1 & \cdots & 1 \end{bmatrix} \qquad (6)$$
The units of the $f_{ij}$ coefficients in the matrix $F$ are chosen such that the units in Equations (3) and (4) match. For example, the coefficient $f_{11}$, which is multiplied by the squared $x$ coordinate, has a unit of [m$^{-1}$]. In general, the coefficients $f_{ij}$ have the following units: [m$^{-1}$] for $j \in \{1, 2, 3\}$, [1] for $j \in \{4, 5\}$, and [m] for $j = 6$.
The quadratic distortion model involves 12 independent parameters that may be learned from n = 6 sample measurements and the matching reference positions using the following Equation (7):
$$F_{2 \times 6} = R_{r\,2 \times 6} \cdot M_{6 \times 6}(R_m)^{-1} \qquad (7)$$
It should be noted that the matrix $M$ may not always be invertible. Six measurements are required to obtain the exact solution, but typically a larger data set is collected in order to achieve better correction accuracy, and thus the $M$ matrix is no longer square. In that case, the $F$ matrix is obtained in a more general way that satisfies the least-squares relationship between $R_r$ and $F \cdot M(R_m)$, (8):
$$F_{2 \times 6} = R_{r\,2 \times n} \cdot M_{6 \times n}(R_m)^{+} \qquad (8)$$
where $M(R_m)^{+}$ denotes the Moore–Penrose pseudo-inverse of the matrix $M(R_m)$.
This method is referred to as DQM (Distortion Quadratic Model) in further text.
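The DQM calibration of Equations (5)–(8) reduces to building the monomial matrix $M$ and applying one pseudo-inverse. The sketch below illustrates this with synthetic data; it is an illustrative reimplementation rather than the code used in the paper, and the distortion applied to the reference points is an arbitrary assumption.

```python
# Illustrative sketch of DQM calibration (Equations (5)-(8)): learn the 2x6
# coefficient matrix F from measured and reference positions via the
# Moore-Penrose pseudo-inverse, then correct new estimates. Data are synthetic.
import numpy as np

def M(R):
    """Build the 6xn monomial matrix of Equation (6) from 2xn positions R."""
    x, y = R
    return np.vstack([x**2, x * y, y**2, x, y, np.ones_like(x)])

def fit_dqm(R_m, R_r):
    """F = R_r * M(R_m)^+  (Equation (8))."""
    return R_r @ np.linalg.pinv(M(R_m))

def correct(R, F):
    """R* = F * M(R)  (Equation (5))."""
    return F @ M(R)

rng = np.random.default_rng(0)
R_r = rng.uniform(0, 5, size=(2, 20))                               # ground-truth reference positions
R_m = 1.02 * R_r + 0.05 + 0.02 * rng.standard_normal((2, 20))       # distorted measurements

F = fit_dqm(R_m, R_r)
print("corrected first point:", correct(R_m[:, :1], F).ravel())
```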

2.1.2. Apparent Beacon Position Estimation

The most common approach to calculate the unknown receiver position in the lateration system, as was mentioned in the introduction to Section 2, is to utilize the measured receiver–beacon distances and known beacon positions in the local reference frame. This approach requires that the positions of the beacons are accurately measured. The beacon–receiver distance measurements are made over a radio channel and, unfortunately, are always error-prone. The sources of these errors may be the following: the orientation of the radio antennas, measurement offsets of the beacon hardware, electromagnetic noise, or objects in the environment interfering with the measurement. These measurement errors cause the beacons to be “seen” by the receiver in slightly different positions than they actually are.
We have developed a method called the Apparent Beacon Position Estimation (ABPE) method that calculates these apparent positions. The great advantage of this method (apart from improving the receiver position estimation) is that there is no necessity to provide a priori measurements of the beacon positions—the ABPE method finds them itself. The method requires as input the positions of a number of reference points in a given local reference frame and distances between the beacons and the receiver placed in reference points. The positions of the reference points should be measured with an additional measurement system (e.g., with a measuring tape), while beacon–receiver distances are measured via a lateration positioning system (in our case it was the UWB positioning system).
The ABPE method utilizes the distance measurements between the beacons placed in unknown positions $A_i$ and the receiver placed in known reference points $P_j$ in order to determine these unknown beacon positions $A_i$. The following algorithm estimates the apparent positions of the beacons, i.e., the positions at which the beacons are seen by the receiver.
In the lateration-based positioning systems, we can distinguish the following Equation (9):
$$s_{ij} = \sqrt{(X_i - x_j)^2 + (Y_i - y_j)^2} \qquad (9)$$
which describes the distance between the beacon $A_i = (X_i, Y_i)$ and the reference point $P_j = (x_j, y_j)$ (see Figure 1).
Since the measurements are affected by numerous disturbances, the distance calculated in Equation (9) will never be equal to the beacon–receiver distance measured in the real world when the receiver is placed in the reference point $P_j$. To find the best fit of the measurement data (distance measurements via UWB) to the distances calculated by Equation (9), the following cost function should be minimized with respect to the unknown beacon positions $A_i$ (10):
$$\mathop{\arg\min}_{(X_i, Y_i) \in \mathbb{R}^2} \sum_{j=1}^{n} \sum_{i=0}^{m} \left( d_{ij}^2 - s_{ij}^2 \right)^2 \qquad (10)$$
where $(X_i, Y_i)$ are the apparent positions of the beacons and $d_{ij}$ corresponds to the beacon–receiver distance measurements taken by the UWB receiver placed at the points $P_j$. We refer the reader to [22] for a more detailed description of the algorithm.
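A minimal sketch of the ABPE step is given below, assuming a generic least-squares solver; the reference-point grid, beacon layout, and the constant distance bias are synthetic placeholders rather than data from the paper.

```python
# Illustrative sketch of the ABPE step (Equation (10)): find apparent beacon
# positions that best explain the distances measured at known reference points.
# Reference points and distances below are synthetic placeholders.
import numpy as np
from scipy.optimize import least_squares

P = np.array([[1.0, 1.0], [5.0, 1.0], [5.0, 4.0], [1.0, 4.0], [3.0, 2.5]])  # P_j
true_beacons = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 5.0], [0.0, 5.0]])
# d_ij: distance between beacon i and reference point j (simulated with a bias)
d = np.hypot(true_beacons[:, None, 0] - P[None, :, 0],
             true_beacons[:, None, 1] - P[None, :, 1]) + 0.10

def residuals(a_flat):
    A = a_flat.reshape(-1, 2)                       # candidate apparent positions A_i
    s = np.hypot(A[:, None, 0] - P[None, :, 0],
                 A[:, None, 1] - P[None, :, 1])     # s_ij of Equation (9)
    return (d**2 - s**2).ravel()

sol = least_squares(residuals, x0=true_beacons.ravel() + 0.5)
print("apparent beacon positions:\n", sol.x.reshape(-1, 2))
```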

2.1.3. Bias and Scale Factor Estimation

The accuracy of the positions calculated by means of the lateration method strongly depends on the distance measurements, which are always affected by some noise. In the Bias and Scale Factor Estimation, it is assumed that the distance measurements are disturbed by some bias and scale factor that are different for each of the anchors. The aforementioned assumptions can be summarized in the following Equation (11) describing the real distance:
$$l = S d + B \qquad (11)$$
where $d$ is the measured anchor–tag distance, $S$ is an unknown scale factor, and $B$ is an unknown bias factor. From Equation (11) we can express the real distance between anchor $A_i$ and the tag placed at position $P_j$, (12):
$$l_{ij} = S_i d_{ij} + B_i \qquad (12)$$
where $d_{ij}$ is the measured distance between anchor $A_i$ and the tag placed in the reference point $P_j$. The parameters $S_i$, $B_i$ define, respectively, the scale and bias factors for beacon $i$. Moreover, the real distances can also be calculated utilizing the knowledge of the positions of the anchors and the tag (13):
$$l_{ij} = \sqrt{(X_i - x_j)^2 + (Y_i - y_j)^2} \qquad (13)$$
where $A_i = (X_i, Y_i)$ are the known coordinates of the anchors and $P_j = (x_j, y_j)$ are the known coordinates of the tag placed in the reference point. Next, we can utilize Equations (12) and (13) in order to find the unknown parameters $S_i$ and $B_i$ by minimizing the following cost function (14):
$$\mathop{\arg\min}_{(S_0, B_0, S_1, B_1, \ldots, S_m, B_m) \in \mathbb{R}^{2(m+1)}} \sum_{j=1}^{n} \sum_{i=0}^{m} \left( l_{ij}^2 - \left( S_i d_{ij} + B_i \right)^2 \right)^2 \qquad (14)$$
where $(S_0, B_0, S_1, B_1, \ldots, S_m, B_m)$ are the scale and bias factors of the measurements from the subsequent anchors and $l_{ij}$ are the real distances given by Equation (13). The Nelder–Mead method can be used to solve the presented minimization problem [27].
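The sketch below illustrates one way to implement BSFE with the Nelder–Mead method, interpreting the cost as matching the corrected measurements of Equation (12) to the geometric distances of Equation (13); the anchor geometry and the simulated scale/bias disturbance are assumptions made only for this example.

```python
# Illustrative sketch of BSFE: per-anchor scale S_i and bias B_i are found with
# the Nelder-Mead simplex method by matching the corrected measurements of
# Equation (12) to the geometric distances of Equation (13). Geometry and the
# simulated measurement disturbance below are made-up placeholders.
import numpy as np
from scipy.optimize import minimize

anchors = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 5.0], [0.0, 5.0]])        # A_i, known
P = np.array([[1.0, 1.0], [5.0, 1.0], [5.0, 4.0], [1.0, 4.0], [3.0, 2.5]])  # P_j, known
l = np.hypot(anchors[:, None, 0] - P[None, :, 0],
             anchors[:, None, 1] - P[None, :, 1])       # real distances l_ij, Eq. (13)
d = (l - 0.05) / 1.03                                   # simulated measured distances d_ij

def cost(params):
    S, B = params[0::2], params[1::2]                   # (S_0, B_0, ..., S_m, B_m)
    l_hat = S[:, None] * d + B[:, None]                 # corrected distances, Eq. (12)
    return np.sum((l**2 - l_hat**2)**2)                 # cost of Equation (14)

x0 = np.tile([1.0, 0.0], len(anchors))                  # start from S_i = 1, B_i = 0
res = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-8})
print("estimated (S_i, B_i) per anchor:\n", res.x.reshape(-1, 2))
```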

2.1.4. Neural Networks

Neural networks represent a soft-computing paradigm of artificial intelligence, defined as a connective model for reasoning based on an analogy with the biological neurological system. The connective model is composed of interconnected elements (i.e., neurons) that give the network the cognitive ability to learn and generalize acquired knowledge. The most widely used models are feed-forward neural networks with the back-propagation algorithm (BP neural networks), which find wide application in solving prediction, classification, and approximation problems [28,29].
Generally, the BP network consists of neurons grouped in layers. In addition to the input and output layers, the network can have one or more hidden layers. Neurons receive input signals, i.e., information from the environment or from other neurons, through connections with appropriate weight strengths. The input data presented to the neural network through the input layer can be defined as the matrix $I$ (15):
$$I = [\mathbf{x}_1, \ldots, \mathbf{x}_i, \ldots, \mathbf{x}_M], \qquad (15)$$
where $\mathbf{x}_i$ represents the $i$-th input vector, $i = 1, \ldots, M$. The weighted output of the $k$-th neuron in the $l$-th layer is defined as (16):
$$o_{ik}^{l} = \sum_{j=1}^{K_l} w_{kj}^{l} I_{ij}^{l} + \theta_{k}^{l}, \qquad (16)$$
where $K_l$ represents the total number of neurons in the previous layer, $j = 1, \ldots, K_l$; $l$ denotes the layer index, $l = 1, \ldots, L$; the weight strength between neurons $j$ and $k$ is defined as $w_{kj}^{l}$; and $\theta_{k}^{l}$ represents the bias value of the $k$-th neuron.
The output value of the $k$-th neuron in the $l$-th layer is calculated by applying the activation function $f_{ik}^{l}$ to the weighted output $o_{ik}^{l}$, (17):
$$O_{ik}^{l} = f_{ik}^{l} \left( o_{ik}^{l} \right) \qquad (17)$$
The output of the network is the output of its final $L$-th layer and is denoted as (18):
$$net(I, W, \Theta, \mathbf{f}) = O_{ik}^{L} \qquad (18)$$
where $I$ is the input data, $W$ is the matrix of the weights, $\Theta$ is the matrix of bias values, and $\mathbf{f}$ is the vector of activation functions for the consecutive layers.
The overall error between the actual and pre-defined output is calculated by Equation (19):
$$E_{pi} = \frac{1}{2} \sum_{k=1}^{K_L} \left( y_{ik} - O_{ik}^{L} \right)^2 \qquad (19)$$
The cognitive ability of the BP neural network is achieved through a supervised learning (training) process based on the gradient descent method, which modifies the weights between the neurons by applying different modification procedures. These iterative procedures can be formalized in various learning algorithms [30]. Therefore, after the overall error is calculated according to Equation (19), that error is propagated backwards through the network layers in order to modify the weights between the neurons. The learning process is performed with the goal of minimizing the error between the actual output and the pre-defined output of the network.
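To make the notation concrete, the following minimal sketch evaluates Equations (16)–(19) for a small two-layer network with random, untrained weights; it is illustrative only and not the MATLAB toolbox implementation used later in the paper.

```python
# Minimal sketch of the forward pass and error of Equations (15)-(19) for a
# feed-forward network with one hidden layer; weights here are random
# placeholders rather than trained values.
import numpy as np

def purelin(x):        # linear activation, as used later in the paper
    return x

def tansig(x):         # hyperbolic-tangent sigmoid activation
    return np.tanh(x)

rng = np.random.default_rng(1)
x_i = np.array([2.1, 3.4])                    # one input vector (estimated position)
W1, th1 = rng.standard_normal((9, 2)), rng.standard_normal(9)   # hidden layer, 9 neurons
W2, th2 = rng.standard_normal((2, 9)), rng.standard_normal(2)   # output layer, 2 neurons

o1 = W1 @ x_i + th1          # weighted outputs of hidden neurons, Equation (16)
O1 = tansig(o1)              # hidden-layer outputs, Equation (17)
o2 = W2 @ O1 + th2
O2 = purelin(o2)             # network output, Equation (18)

y_i = np.array([2.0, 3.5])                    # ground-truth target position
E = 0.5 * np.sum((y_i - O2) ** 2)             # error of Equation (19)
print("network output:", O2, " error:", E)
```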
Neural networks have attracted a significant amount of attention in the research community, particularly due to their wide range of applications and successful implementations in solving various complex problems. The main advantages of neural networks that make them suitable for solving such problems are related to their (i) capability of flexible nonlinear modeling between dependent and independent variables, (ii) strong adaptability, as well as their (iii) learning and massively parallel computing abilities [31].
This data-driven approach is developed based on the features presented by the data sets, which makes it suitable for processing fuzzy, nonlinear, and noise-containing data without the need to design any mathematical models. On the other hand, a very important characteristic of neural networks is their adaptive nature, where learning by example is very appealing in scenarios in which there is little or incomplete understanding of the problem to be solved, but experimental data are available. Finally, their high computational power is based on a densely interconnected large set of adaptive processing units forming a topological structure for distributed parallel information processing.
Neural networks provide significant success and benefits in solving processing problems that require real-time operation and interpretation of relationships between variables in multidimensional spaces. Furthermore, they have been used in scenarios where information about certain phenomena is noisy, partial, or unknown, and/or their connections are incomplete. As a result, they have been successfully applied to classification and regression [28] and widely used for solving many problems in the domain of intelligent material transport in the last decades [32,33]. In the following section, a novel hybrid calibration method based on BP neural networks and Apparent Beacon Position Estimation (ABPE) is proposed to improve the accuracy of a lateration positioning system.

2.1.5. Hybrid NN-ABPE Method

In order to improve the performance of the position calibration methods, the benefits of apparent beacon position estimation are combined with the advantages of neural networks in learning nonlinear mappings between experimentally acquired data. In this section, a novel hybrid calibration method developed for accurate prediction of the measured position and minimization of measurement errors is presented. It consists of two stages: (1) the offline calibration stage, in which input–output pairs for neural network training and for the ABPE method are collected, and (2) the online stage, where the trained methods are used to estimate the target object position. The diagram of the workflow of the hybrid NN-ABPE method is presented in Figure 2.
Let us assume that the positioning system consists of $m$ beacons with unknown positions $A_i = (X_i, Y_i)$, $i = 1, \ldots, m$.
Offline stage. The calibration stage starts with the equipment setup, where the beacons are placed in arbitrary positions surrounding the workspace. The next step is the data collection phase, where a pattern $P_j = (x_j, y_j)$, $j = 1, \ldots, n$, of $n$ reference points is assumed. The pattern $P$ is chosen such that it uniformly covers the workspace with a desired resolution. The receiver is subsequently placed at consecutive points in the pattern. At each of these positions, the beacon–receiver distances $d_{ij}$ are measured via the UWB positioning system. The distances $d_{ij}$ and the pattern $P$ are necessary as the input for the next stage of the algorithm.
The next step of the offline stage is to estimate the apparent beacon positions $\hat{A}_i = (\hat{X}_i, \hat{Y}_i)$, $i = 1, \ldots, m$, using the ABPE method (see Section 2.1.2). Based on these estimated beacon positions and the measured beacon–receiver distances $d_{ij}$, the estimated positions of the reference points $\hat{P}_j = (\hat{x}_j, \hat{y}_j)$, $j = 1, \ldots, n$, are calculated with the NLS solver (1). The estimated positions of the reference points $\hat{P}$ are further used for neural network training. The neural network input is the reference point positions estimated via NLS, $\hat{P}$, while the network is trained to output the ground-truth positions of the reference points $P$. This training process can be expressed as the problem of finding a weight matrix $W$, a bias matrix $\Theta$, and an activation function vector $\mathbf{f}$ such that (20):
$$net(\hat{P}, W, \Theta, \mathbf{f}) = P \qquad (20)$$
The NN architecture consists of two neurons in both the input and output layers, while the numbers of neurons and hidden layers are experimentally determined in Section 3.1. The outputs of the calibration stage are the estimated apparent beacon positions $\hat{A}_i = (\hat{X}_i, \hat{Y}_i)$, $i = 1, \ldots, m$, and the trained neural network represented as $N = (W, \Theta, \mathbf{f})$, where $W$ is the weight matrix, $\Theta$ is the bias matrix, and $\mathbf{f}$ is the vector of activation functions.
The offline calibration stage needs to be executed only once, when the positioning system is initially set up. The time required for this procedure depends mostly on the number of reference positions in the pattern $P$. The expected accuracy improvement also depends on the number of pattern samples.
Online stage. In the online stage, the distances $d_{ij}$ between the beacons and the receiver are measured, and the initial position estimate $r = (x, y)$ is provided by the NLS solver, where the beacon positions are set to the apparent positions $\hat{A}_i = (\hat{X}_i, \hat{Y}_i)$, $i = 1, \ldots, m$, obtained by ABPE in the calibration stage. This position estimate $r$ is further improved by setting it as the input of the neural network and acquiring the appropriate output. The NN used in this stage is the one with the best validation performance obtained during the training process in the offline stage. The output of the network, $r^* = (x^*, y^*) = net(r, W, \Theta, \mathbf{f})$, is the corrected estimate of the receiver position and is the final output of the hybrid method. By producing such an output, the proposed calibration method is able to predict a more accurate estimate of the receiver position while simultaneously mitigating the systematic error.
The online stage algorithm is integrated into the positioning system driver and is performed whenever a new position measurement is queried. The correction calculation adds a very minor overhead and thus can be implemented in a real-time system. The online stage does not require the involvement of the operator in terms of additional set-up.
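A structural sketch of the two-stage pipeline is given below. It reuses the ABPE and NLS formulations from Section 2.1 and substitutes scikit-learn's MLPRegressor for the MATLAB network (L-BFGS training instead of Levenberg–Marquardt); all positions and distances are simulated, so the numbers are placeholders rather than results from the paper.

```python
# Structural sketch of the hybrid NN-ABPE pipeline (offline calibration, then
# online correction). The ABPE and NLS steps follow Equations (1) and (10);
# scikit-learn's MLPRegressor stands in for the MATLAB toolbox network (it offers
# the L-BFGS solver rather than Levenberg-Marquardt). All data are simulated.
import numpy as np
from scipy.optimize import least_squares
from sklearn.neural_network import MLPRegressor

def nls_position(A, d, x0=(3.0, 2.5)):
    """Receiver position from apparent beacon positions A and distances d (Eq. (1))."""
    f = lambda r: d - np.hypot(A[:, 0] - r[0], A[:, 1] - r[1])
    return least_squares(f, x0=np.asarray(x0, float)).x

def abpe(P, D, A0):
    """Apparent beacon positions from reference points P and distances D (Eq. (10))."""
    def f(a):
        A = a.reshape(-1, 2)
        s = np.hypot(A[:, None, 0] - P[None, :, 0], A[:, None, 1] - P[None, :, 1])
        return (D**2 - s**2).ravel()
    return least_squares(f, x0=A0.ravel()).x.reshape(-1, 2)

# --- offline stage: simulate a reference grid, measure distances, run ABPE, train the NN ---
rng = np.random.default_rng(2)
beacons = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 5.0], [0.0, 5.0]])
gx, gy = np.meshgrid(np.arange(1.0, 5.5, 0.5), np.arange(1.0, 4.5, 0.5))
P = np.column_stack([gx.ravel(), gy.ravel()])                       # reference points P_j
D = np.hypot(beacons[:, None, 0] - P[None, :, 0],
             beacons[:, None, 1] - P[None, :, 1]) + 0.08 + 0.03 * rng.standard_normal((4, len(P)))
A_hat = abpe(P, D, beacons + 0.3)                                   # apparent beacon positions
P_hat = np.array([nls_position(A_hat, D[:, j]) for j in range(len(P))])
net = MLPRegressor(hidden_layer_sizes=(9,), activation="identity",
                   solver="lbfgs", max_iter=2000).fit(P_hat, P)     # learn P_hat -> P

# --- online stage: new distance measurement -> NLS -> NN correction ---
d_new = np.hypot(beacons[:, 0] - 2.2, beacons[:, 1] - 3.1) + 0.08
r = nls_position(A_hat, d_new)
r_star = net.predict(r.reshape(1, -1))[0]
print("initial estimate:", r, " corrected estimate:", r_star)
```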

3. Experimental Results

3.1. Experiment 1

The selection of the appropriate learning algorithm, the activation function of neurons, and neural network architecture (the number of layers and the number of neurons in each layer) are significant problems in neural network design. In order to test the performance of the proposed method, experiment 1 was performed for preliminary tuning of the parameters of the neural network. For this experiment, 158 data samples were collected using the UR5 robot-based high-precision reference system described in [34]. The neural networks are trained with the use of 10 different learning algorithms presented in Table 1.
Furthermore, 16 neural network architectures (one-layered, two-layered, three-layered, and four-layered architectures), with different numbers of neurons in each layer, are used in order to find the optimal neural network structure for the current calibration problem. Taking the one-layered architectures as an example, the number of neurons ranges from a minimum of 3 to a maximum of 15. Four two-layered architectures are tested, with the number of neurons in the first layer varied from 3 to 5 and in the second layer from 3 to 15. The number of neurons in the four three-layered architectures varies from 3 to 5 in the first layer, from 3 to 10 in the second layer, and from 3 to 15 in the third layer. Finally, four different four-layered architectures are also tested; for example, the network architecture represented as 5-5-10-15 means that there are 5 neurons in both the first and second hidden layers, 10 neurons in the third, and 15 neurons in the fourth hidden layer. The list of the 16 aforementioned architectures is shown in Table 2.
Another goal of this experiment was to test the effect of employing different activation functions of the neurons to the learning performance of the network. Table 3 shows 12 activation functions (‘logsig’, ‘tansig’, ‘softmax’, ‘radbas’, ‘compet’, ‘tribas’, ‘hardlim’, ‘hardlims’, ‘poslin’, ‘purelin’, ‘satlin’, ‘satlins’) used for tuning the neural networks.
Altogether, 10 learning algorithms are used, with 16 neural network architectures and 12 different activation functions for hidden layers. Therefore, to assess the performance of the proposed approach, the total number of tested neural networks is 10 × 16 × 12 = 1920 .
After the preliminary experimental tuning, the learning rate for all neural network architectures is set to 0.01. The training process is stopped when the root mean square error (RMSE) falls below $10^{-4}$ cm or when the maximum number of iterations (2000) is reached. The experimental runs were repeated 50 times in order to collect data for statistical analysis. The algorithms were developed and experimentally validated in the MATLAB environment running on a desktop computer with an AMD Ryzen 7 3.8 GHz processor and 8 GB of RAM. The input/output pairs (i.e., reference position/measured position) are divided in the following manner: 70% of the data were used for training, and 30% of the data were used for validation and testing. The accuracy of the network is measured with the RMSE, where a lower value of the error indicates better calibration performance.
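For illustration, a reduced version of this sweep could look as follows; scikit-learn's MLPRegressor again stands in for the MATLAB toolbox networks, only a handful of the 1920 combinations are shown, and the data are simulated rather than the 158 samples collected with the UR5 reference system.

```python
# Sketch of an experiment-1 style sweep: loop over candidate architectures and
# activations, train on 70% of the (measured, reference) pairs and report test
# RMSE. MLPRegressor is a stand-in for the MATLAB networks; data are simulated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
P_ref = rng.uniform(0, 5, size=(158, 2))                  # reference positions [m]
P_meas = 1.03 * P_ref + 0.07 + 0.02 * rng.standard_normal(P_ref.shape)  # measured positions

X_tr, X_te, y_tr, y_te = train_test_split(P_meas, P_ref, test_size=0.3, random_state=0)

for arch in [(3,), (5,), (3, 3), (5, 5, 10, 15)]:
    for act in ["identity", "tanh", "logistic"]:          # counterparts of purelin/tansig/logsig
        net = MLPRegressor(hidden_layer_sizes=arch, activation=act,
                           solver="lbfgs", max_iter=2000).fit(X_tr, y_tr)
        rmse = np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))
        print(f"arch={arch} act={act}: RMSE = {100 * rmse:.2f} cm")
```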
Table 3 shows the comparative best results for the 12 activation functions. These results are based on a series of trials in which we varied the network architecture, learning algorithm, and activation function. Detailed results of these calculations for the case of the ‘purelin’ activation function (lowest average RMSE of 0.99 cm) are shown in Table 4. As can be seen, the ‘logsig’, ‘tansig’, ‘softmax’, ‘radbas’, ‘purelin’, and ‘satlin’ activation functions achieve the lowest values of minimum RMSE, and therefore they were chosen to be used in the further experiments (2 and 3). Moreover, for four out of six of the best activation functions, the best RMSE value was achieved with the Levenberg–Marquardt back-propagation algorithm. Figure 3 shows box plots of RMSE after 50 independent trials and reveals that the minimum RMSE is achieved with the ‘purelin’ activation function.

3.2. Experiment 2

In the second experiment, the aim was to test the performance of the hybrid NN-ABPE method on real-world datasets. Therefore, experimental data were collected in real-world conditions, i.e., the measuring system was set up in a 6.5 × 5.5 m empty room. Four anchors were located in the positions indicated in Figure 4; measurement positions were taken at every vertex of a square grid with a spacing of 0.30 m.
The collected data were preprocessed by using the ABPE method and were used to train the neural networks. The goal of this experiment was to find the best neural network using the best activation functions obtained in experiment 1 (‘logsig’, ‘tansig’, ‘softmax’, ‘radbas’, ‘purelin’, ‘satlin’) and the best learning algorithm from experiment 1 (Levenberg–Marquardt back-propagation). The other parameters were set as follows: the learning rate was adopted as 0.01, and the stopping criteria for training were reaching an RMSE of $10^{-4}$ cm or reaching the maximum number of learning iterations (2000); all calculations were repeated 50 times.
In order to test the usefulness of our proposed method, it is necessary to test how robust the accuracy improvement is, depending on the number and the pattern of the training samples. Typically, uniform sampling patterns of varying sampling densities are used in real conditions. In this experiment, we defined eight sampling patterns, in which the sampling points were selected from the previously collected dataset so as to form uniform grid distributions with varying sampling densities. These sampling patterns, arranged from the highest density (number of samples) to the lowest, are presented in Figure 5. We have compared the accuracy improvement results of the state-of-the-art methods (DQM, ABPE, BSFE) and the different configurations of our proposed hybrid NN-ABPE method for these sampling patterns. The results are shown in Table 5. The improvement rate IR is computed as follows (21):
$$IR = \frac{RMSE_{BSFE} - RMSE_{NN\text{-}ABPE\ purelin}}{RMSE_{BSFE}} \times 100\% \qquad (21)$$
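For example, for pattern #2.6 in Table 5, IR = (7.89 − 4.59)/7.89 × 100% ≈ 41.83%.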
As can be seen, the results achieved with the proposed hybrid method indicate a noticeable improvement in calibration accuracy. For seven out of the eight tested sampling point distribution patterns, the NN-ABPE method with the ‘purelin’ activation function showed the best result with minimal RMSE. Moreover, the hybrid NN-ABPE method achieves better results when compared with the model based only on NNs (NN-RAW purelin). The overall improvement calculated by the improvement rate (IR) ranges from 5.22% (for pattern #2.1) to 41.83% (for pattern #2.6).
Figure 6 shows box plots of RMSE results after 50 repetitions with the hybrid NN-ABPE approach and a ‘purelin’ activation function. It can be concluded that even if there is a relatively small number of training samples (e.g., six training samples for pattern #2.6 and pattern #2.7, or five training samples for pattern #2.8), the proposed method achieves a high level of accuracy in scenarios where fast set-up time is an essential requirement.

3.3. Experiment 3

The third experiment was performed to test the hybrid NN-ABPE method in a real-world set-up including an obstacle. The data set was collected in the same laboratory environment with a single obstacle in the form of a metal cabinet (1 × 0.4 m) placed in the middle of the room. The experimental set-up consists of four anchors, located as shown in Figure 4, while the measurement positions were taken at every vertex of a square grid with a spacing of 0.3 m. The location of the obstacle in the environment is shown in Figure 4. The addition of the obstacle necessitates a change in the sampling pattern, as shown in Figure 7.
The goal of this experiment was to find the best NN architecture while using the best activation function and learning algorithm selected in experiment 1. The other parameters of the neural networks are set as follows: the learning rate is adopted as 0.01; the stopping criteria for training are achieving an RMSE of $10^{-4}$ cm or reaching the maximum number of learning iterations (2000). All calculations are repeated 50 times.
Table 6 shows the comparative results of the proposed hybrid NN-ABPE method and the other state-of-the-art calibration methods (DQM, ABPE, BSFE). The experimental results obtained with the proposed hybrid method indicate a noticeable improvement in calibration accuracy. For all eight patterns testing the different uniform distributions (i.e., densities of the positions taken for the training of the neural networks), the NN-ABPE method with the ‘purelin’ activation function showed the best result with minimal RMSE. The IR for all patterns is over 56%; the best improvement rate of 91% is reported for pattern #3.1, while the second-best improvement of 67% is recorded for pattern #3.8.
Box plots of the RMSE results after 50 repetitions achieved with the hybrid NN-ABPE approach and the ‘purelin’ activation function are presented in Figure 8. It can be concluded that even when an obstacle is placed in an indoor environment, the proposed method can obtain a high level of accuracy (RMSE of 1.90 cm) when all data samples are measured. Moreover, in scenarios where a fast set-up time is an essential requirement, the experimental results demonstrate that a relatively small number of training samples (i.e., 5–7 measured points) is sufficient to achieve a high level of accuracy; e.g., 10 training samples for pattern #3.5 lead to an RMSE of 7.70 cm, while 5 training samples for pattern #3.8 provide an RMSE of 8.05 cm.

4. Conclusions

In this paper, the authors propose and experimentally evaluate a novel quick set-up calibration method used for accurate compensation of positioning system measurement errors. The proposed hybrid approach is based on back-propagation neural networks trained on data estimated by the Apparent Beacon Position Estimation method (ABPE). The ABPE method determines the so-called apparent beacon positions, which are then used to estimate the receiver position by the NLS method.
Furthermore, the proposed learning mechanism based on neural networks is employed to predict the relation between the reference position and the position obtained through the ABPE method. Different neural network structures, learning algorithms, and activation functions are experimentally evaluated in order to find the optimal solution for the real-world implementation. For fine-tuning purposes, 16 neural network architectures with 10 learning algorithms and 12 different activation functions for hidden layers are investigated in the MATLAB environment (1920 networks are tested in total). The results from experiment 1 show that a neural network trained by the Levenberg–Marquardt algorithm with the ‘purelin’ activation function provides the overall best performance.
Moreover, in order to show the robustness of the proposed approach, the method is validated in two new experimental studies and compared with the state-of-the-art calibration methods, i.e., Distortion Quadratic Model (DQM), Bias and Scale Factor Estimation (BSFE), and Apparent Beacon Position Estimation (ABPE). The first real-world scenario foresees an indoor environment without obstacles, while the second one considers the measurement of reference positions in a laboratory environment with additional obstacles introduced.
Results from experiments 2 and 3 show that the proposed hybrid NN-ABPE method can predict the location of the target with a high level of accuracy (RMSE of 4.37 cm in experiment 2 with n = 15 samples and RMSE of 1.90 cm in experiment 3 with n = 189 samples). When it comes to the number of samples necessary to realize a fast set-up procedure, the achieved experimental results demonstrate the superiority of the proposed NN-ABPE method trained with Levenberg–Marquardt backpropagation algorithm and linear activation function ‘purelin’ over the state-of-the-art methods. For the lowest number of samples, it is necessary to have only five reference points, and the proposed method will predict the position of the target object with RMSE of 4.66 cm (experiment 2—without obstacles), which is a 33% improvement over raw results, and RMSE of 8.05 cm (experiment 3—with an added obstacle) with 68% improvement over the raw results.
It is worth noting that both the DQM and BSFE methods seem to give slightly better results than ABPE alone and thus might appear more suited to be the first stage of the hybrid method. However, it is important to observe that these methods do require a priori knowledge of the beacon positions. Since the ABPE method does not depend on that information, it is a much better candidate when a short set-up time is desired. Thus, we have constructed our hybrid method with the ABPE as the initial step.
Bearing in mind that the choice of the number and the pattern of the sampling points play an essential role in setting up the system, one of the future research directions might be oriented towards a new methodology for the optimal selection of the location of the reference points used in the calibration procedure.
It would also be prudent to establish the level of possible accuracy improvement using the proposed hybrid NN-ABPE method in the range of possible environments by performing multiple additional experiments in real-life industrial scenarios.

Author Contributions

Conceptualization, M.P.; methodology, M.P., M.C., S.R. and A.W.; software, M.P., M.C., S.R. and A.W.; validation, M.P., M.C., S.R. and A.W.; formal analysis, M.P., M.C., S.R. and A.W.; investigation, M.P., M.C., S.R. and A.W.; resources, M.P., M.C., S.R. and A.W.; data curation, M.P., M.C., S.R. and A.W.; writing—original draft preparation, M.P., M.C., S.R. and A.W.; writing—review and editing, M.P., M.C., S.R., A.W. and Z.M.; visualization, M.P., M.C., S.R. and A.W.; supervision, Z.M.; project administration, Z.M.; funding acquisition, Z.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research proposed in this paper was financed by: the Polish National Agency for Academic Exchange through the project: “Biologically inspired optimization algorithms for control and scheduling of intelligent robotic systems” grant No. PPN/ULM/2019/1/00354/U/00001, by the Ministry of Education, Science and Technological Development of the Serbian Government, under the contract No. 451-03-9/2021-14/200105, by the Science Fund of the Republic of Serbia, grant No. 6523109, AI—MISSION4.0, and by the Polish Ministry of Science and Higher Education in grant No. WZ/WE-IA/4/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kwon, W.; Park, J.H.; Lee, M.; Her, J.; Kim, S.H.; Seo, J.W. Robust Autonomous Navigation of Unmanned Aerial Vehicles (UAVs) for Warehouses’ Inventory Application. IEEE Robot. Autom. Lett. 2020, 5, 243–249. [Google Scholar] [CrossRef]
  2. Shi, D.; Mi, H.; Collins, E.G.; Wu, J. An Indoor Low-Cost and High-Accuracy Localization Approach for AGVs. IEEE Access 2020, 8, 50085–50090. [Google Scholar] [CrossRef]
  3. Krug, R.; Stoyanov, T.; Tincani, V.; Andreasson, H.; Mosberger, R.; Fantoni, G.; Lilienthal, A.J. The Next Step in Robot Commissioning: Autonomous Picking and Palletizing. IEEE Robot. Autom. Lett. 2016, 1, 546–553. [Google Scholar] [CrossRef] [Green Version]
  4. Kamei, K.; Ikeda, T.; Shiomi, M.; Kidokoro, H.; Utsumi, A.; Shinozawa, K.; Miyashita, T.; Hagita, N. Cooperative customer navigation between robots outside and inside a retail shop—An implementation on the ubiquitous market platform. Ann. Telecommun. Ann. Télécommun. 2012, 67, 329–340. [Google Scholar] [CrossRef]
  5. Demesure, G.; Defoort, M.; Bekrar, A.; Trentesaux, D.; Djemai, M. Navigation Scheme with Priority-Based Scheduling of Mobile Agents: Application to AGV-Based Flexible Manufacturing System. J. Intell. Robot. Syst. 2016, 82, 495–512. [Google Scholar] [CrossRef]
  6. Sprunk, C.; Lau, B.; Pfaff, P.; Burgard, W. An accurate and efficient navigation system for omnidirectional robots in industrial environments. Auton. Robot. 2017, 41, 473–493. [Google Scholar] [CrossRef]
  7. Papapostolou, A.; Chaouchi, H. Scene analysis indoor positioning enhancements. Ann. Télécommun. 2011, 66, 519–533. [Google Scholar] [CrossRef]
  8. Schindhelm, C.; Macwilliams, A. Overview of Indoor Positioning Technologies for Context Aware AAL Applications. In Ambient Assisted Living; Springer: Berlin/Heidelberg, Germany, 2011; pp. 273–291. [Google Scholar] [CrossRef]
  9. Alarifi, A.; Al-Salman, A.; Alsaleh, M.; Alnafessah, A.; Al-Hadhrami, S.; Al-Ammar, M.A.; Al-Khalifa, H.S. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances. Sensors 2016, 16, 707. [Google Scholar] [CrossRef]
  10. Farid, Z.; Nordin, R.; Ismail, M. Recent Advances in Wireless Indoor Localization Techniques and System. J. Comput. Netw. Commun. 2013, 2013, 185138. [Google Scholar] [CrossRef]
  11. Marano, S.; Gifford, W.; Wymeersch, H.; Win, M. NLOS Identification and Mitigation for Localization Based on UWB Experimental Data. Sel. Areas Commun. IEEE J. 2010, 28, 1026–1035. [Google Scholar] [CrossRef] [Green Version]
  12. Sinriech, D.; Shoval, S. Landmark configuration for absolute positioning of autonomous vehicles. IIE Trans. 2000, 32, 613–624. [Google Scholar] [CrossRef]
  13. Loevsky, I.; Shimshoni, I. Reliable and efficient landmark-based localization for mobile robots. Robot. Auton. Syst. 2010, 58, 520–528. [Google Scholar] [CrossRef]
  14. Aksu, A.; Kabara, J.; Spring, M.B. Reduction of location estimation error using neural networks. In Proceedings of the First ACM International Workshop on Mobile Entity Localization and Tracking in GPS-Less Environments, San Francisco, CA, USA, 19 September 2008; pp. 103–108. [Google Scholar]
  15. Pelka, M.; Goronzy, G.; Hellbrück, H. Iterative approach for anchor configuration of positioning systems. ICT Express 2016, 2, 1–4. [Google Scholar] [CrossRef] [Green Version]
  16. Pierlot, V.; Droogenbroeck, M. BeAMS: A Beacon-Based Angle Measurement Sensor for Mobile Robot Positioning. IEEE Trans. Robot. 2014, 30, 533–549. [Google Scholar] [CrossRef] [Green Version]
  17. Meissner, P.; Steiner, C.; Witrisal, K. UWB positioning with virtual anchors and floor plan information. In Proceedings of the 2010 7th Workshop on Positioning, Navigation and Communication, Dresden, Germany, 11–12 March 2010; pp. 150–156. [Google Scholar] [CrossRef]
  18. Soltani, M.; Motamedi, A.; Hammad, A. Enhancing Cluster-based RFID Tag Localization using artificial neural networks and virtual reference tags. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Montbeliard, France, 28–31 October 2013; Volume 54, pp. 1–10. [Google Scholar] [CrossRef]
  19. Motamedi, A.; Soltani, M.; Hammad, A. Localization of RFID-equipped assets during the operation phase of facilities. Adv. Eng. Inform. 2013, 27, 566–579. [Google Scholar] [CrossRef]
  20. Krapež, P.; Munih, M. Anchor Calibration for Real-Time-Measurement Localization Systems. IEEE Trans. Instrum. Meas. 2020, 69, 9907–9917. [Google Scholar] [CrossRef]
  21. Wolniakowski, A.; Ciężkowski, M. Improving the Measurement Accuracy of the Static IR Triangulation System Through Apparent Beacon Position Estimation. In Proceedings of the 2018 23rd International Conference on Methods & Models in Automation & Robotics (MMAR), Miedzyzdroje, Poland, 27–30 August 2018; pp. 597–602. [Google Scholar] [CrossRef]
  22. Ciężkowski, M.; Romaniuk, S.; Wolniakowski, A. Apparent beacon position estimation for accuracy improvement in lateration positioning system. Measurement 2020, 153, 107400. [Google Scholar] [CrossRef]
  23. Zekavat, R.; Buehrer, R.M. Handbook of Position Location: Theory, Practice and Advances; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 27. [Google Scholar]
  24. Dardari, D.; Closas, P.; Djuric, P. Indoor Tracking: Theory, Methods, and Technologies. Veh. Technol. IEEE Trans. 2015, 64, 1263–1278. [Google Scholar] [CrossRef] [Green Version]
  25. Hartley, R. Theory and Practice of Projective Rectification. Int. J. Comput. Vis. 1999, 35, 115–127. [Google Scholar] [CrossRef]
  26. Ronda, J.; Valdes, A. Geometrical Analysis of Polynomial Lens Distortion Models. J. Math. Imaging Vis. 2019, 61, 252–268. [Google Scholar] [CrossRef] [Green Version]
  27. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313. [Google Scholar] [CrossRef]
  28. Vuković, N.; Petrović, M.; Miljković, Z. A comprehensive experimental evaluation of orthogonal polynomial expanded random vector functional link neural networks for regression. Appl. Soft Comput. 2018, 70, 1083–1096. [Google Scholar] [CrossRef]
  29. Miljković, Z.; Petrović, M. Intelligent Manufacturing Systems—With Robotics and Artificial Intelligence Backgrounds, 1st ed.; Faculty of Mechanical Engineering, University of Belgrade: Belgrade, Serbia, 2021; 409p. [Google Scholar]
  30. Miljković, Z.; Aleksendrić, D. Artificial Neural Networks—Solved Examples with Theoretical Background, 2nd ed.; Faculty of Mechanical Engineering, University of Belgrade: Belgrade, Serbia, 2018; 225p. [Google Scholar]
  31. Wang, L.; Zeng, Y.; Chen, T. Back propagation neural network with adaptive differential evolution algorithm for time series forecasting. Expert Syst. Appl. 2015, 42, 855–863. [Google Scholar] [CrossRef]
  32. Petrović, M.; Miljković, Z.; Babić, B. Integration of process planning, scheduling, and mobile robot navigation based on TRIZ and multi-agent methodology. FME Trans. 2013, 41, 120–129. [Google Scholar]
  33. Petrović, M.; Miljković, Z.; Babić, B.; Vuković, N.; Čović, N. Towards a conceptual design of intelligent material transport using artificial intelligence. Stroj. Časopis Za Teor. Praksu Stroj. 2012, 54, 205–219. [Google Scholar]
  34. Petrović, M.; Wolniakowski, A.; Ciezkowski, M.; Romaniuk, S.; Miljković, Z. Neural Network-Based Calibration for Accuracy Improvement in Lateration Positioning System. In Proceedings of the 2020 International Conference Mechatronic Systems and Materials (MSM), Bialystok, Poland, 1–3 July 2020. [Google Scholar] [CrossRef]
Figure 1. The ABPE method based on the reference points.
Figure 2. Diagram of the workflow of the hybrid NN-ABPE method.
Figure 3. Testing RMSE for 12 different activation functions in experiment 1. Red lines show the median, the blue boxes encompass the 25th and the 75th percentiles, the whiskers represent the range and the plus signs indicate outliers.
Figure 4. Experimental site.
Figure 5. Measuring position patterns with different densities ρ_n and numbers of points n used for training of the neural networks in experiment 2.
Figure 6. RMSE for ‘purelin’ activation function and different densities in experiment 2. Red lines show the median, the blue boxes encompass the 25th and the 75th percentiles, the whiskers represent the range and the plus signs indicate outliers.
Figure 7. Measuring position patterns with different densities ρ_n and numbers of points n used for training of the neural networks in experiment 3.
Figure 8. RMSE for ‘purelin’ activation function and different patterns in experiment 3. Red lines show the median, the blue boxes encompass the 25th and the 75th percentiles, the whiskers represent the range and the plus signs indicate outliers.
Table 1. Summary of learning algorithms used for neural networks development.

No. | Learning Algorithm | Acronym
--- | --- | ---
1 | Levenberg–Marquardt back-propagation | LM
2 | Bayesian regularization | BR
3 | Resilient back-propagation | RP
4 | Scaled conjugate gradient back-propagation | SCG
5 | Gradient descent back-propagation | GD
6 | Gradient descent with momentum back-propagation | GDM
7 | Gradient descent with momentum and adaptive learning rule back-propagation | GDMA
8 | Powell–Beale conjugate gradient back-propagation | PB
9 | Fletcher–Powell conjugate gradient back-propagation | FP
10 | Polak–Ribiére conjugate gradient back-propagation | PR
Table 2. Neural network architectures.

No. | Architecture | No. | Architecture
--- | --- | --- | ---
1 | 3 | 9 | 3-3-3
2 | 5 | 10 | 5-5-5
3 | 10 | 11 | 3-5-10
4 | 15 | 12 | 5-10-15
5 | 3-3 | 13 | 3-3-3-3
6 | 5-5 | 14 | 5-5-5-5
7 | 5-10 | 15 | 3-3-10-10
8 | 3-15 | 16 | 5-5-10-15
Table 3. Best results for 12 activation functions in the experiment 1. Best six activation functions according to minimum RMSE are highlighted. Bold text indicates the best value in the column.

Activation Function | Arch | Alg | RMSE Max [cm] | RMSE Min [cm] | RMSE Median [cm] | RMSE Average [cm]
--- | --- | --- | --- | --- | --- | ---
logsig | 10 | 1 | 1.93 | 1.06 | 1.22 | 1.24
tansig | 6 | 1 | 1.31 | 0.82 | 1.14 | 1.13
softmax | 11 | 8 | 31.07 | 1.21 | 2.44 | 8.58
radbas | 6 | 1 | 6.43 | 0.95 | 1.35 | 1.58
compet | 3 | 3 | 36.77 | 15.46 | 26.42 | 26.63
tribas | 2 | 9 | 3.79 | 1.78 | 2.25 | 2.35
hardlim | 4 | 3 | 15.77 | 8.88 | 10.98 | 11.44
hardlims | 4 | 2 | 15.43 | 7.71 | 10.59 | 10.66
poslin | 15 | 7 | 47.41 | 3.08 | 25.70 | 19.11
purelin | 9 | 1 | 1.01 | 0.93 | 0.99 | 0.99
satlin | 7 | 2 | 1.43 | 0.96 | 1.14 | 1.16
satlins | 11 | 9 | 17.98 | 1.39 | 1.99 | 3.21
Table 4. Experiment 1 results for the ‘purelin’ activation function—best, average, and standard deviation for the testing set.
Arch | | LM [cm] | BR [cm] | RP [cm] | SCG [cm] | GD [cm] | GDM [cm] | GDMA [cm] | PB [cm] | FP [cm] | PR [cm]
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
3 | Best | 0.99 | 0.98 | 0.96 | 0.97 | 0.96 | 0.96 | 0.95 | 0.97 | 0.96 | 0.97
3 | Ave | 1.00 | 3.31 | 1.00 | 1.00 | 4.15 | 4.16 | 1.51 | 1.00 | 1.00 | 1.00
3 | Std | 0.01 | 3.29 | 0.03 | 0.04 | 5.78 | 5.81 | 3.34 | 0.02 | 0.02 | 0.04
5 | Best | 0.99 | 0.97 | 0.97 | 0.97 | 0.95 | 0.95 | 0.96 | 0.98 | 0.97 | 0.98
5 | Ave | 0.99 | 2.58 | 1.02 | 1.00 | 1.15 | 1.14 | 1.03 | 1.00 | 1.00 | 1.00
5 | Std | 0.01 | 2.95 | 0.04 | 0.03 | 0.37 | 0.36 | 0.05 | 0.02 | 0.01 | 0.01
10 | Best | 0.99 | 0.95 | 0.96 | 0.96 | 0.97 | 0.97 | 0.93 | 0.98 | 0.98 | 0.97
10 | Ave | 0.99 | 3.38 | 1.02 | 1.00 | 0.99 | 0.99 | 1.05 | 1.00 | 1.00 | 1.00
10 | Std | 0.00 | 3.97 | 0.06 | 0.04 | 0.02 | 0.02 | 0.10 | 0.02 | 0.04 | 0.03
15 | Best | 0.96 | 0.97 | 0.97 | 0.97 | 0.98 | 0.98 | 0.97 | 0.97 | 0.98 | 0.97
15 | Ave | 1.00 | 2.48 | 1.02 | 1.00 | 0.99 | 0.99 | 1.05 | 1.00 | 1.00 | 1.01
15 | Std | 0.02 | 2.91 | 0.04 | 0.02 | 0.02 | 0.02 | 0.08 | 0.03 | 0.03 | 0.04
3-3 | Best | 0.99 | 0.97 | 0.96 | 0.97 | 0.95 | 0.95 | 0.95 | 0.97 | 0.97 | 0.98
3-3 | Ave | 0.99 | 0.99 | 1.00 | 1.00 | 3.96 | 5.54 | 3.97 | 1.00 | 1.00 | 1.02
3-3 | Std | 0.01 | 0.01 | 0.03 | 0.02 | 6.87 | 9.42 | 9.00 | 0.02 | 0.03 | 0.05
5-5 | Best | 0.99 | 0.97 | 0.93 | 0.96 | 0.95 | 0.96 | 0.97 | 0.97 | 0.96 | 0.97
5-5 | Ave | 0.99 | 0.99 | 1.00 | 1.00 | 1.00 | 7.79 | 2.68 | 1.00 | 1.00 | 1.01
5-5 | Std | 0.00 | 0.01 | 0.04 | 0.04 | 0.02 | 16.33 | 8.23 | 0.02 | 0.02 | 0.05
5-10 | Best | 0.96 | 0.97 | 0.97 | 0.97 | 0.98 | 0.98 | 0.97 | 0.97 | 0.97 | 0.97
5-10 | Ave | 0.99 | 1.65 | 1.01 | 1.00 | 1.47 | 9.77 | 1.54 | 1.00 | 1.00 | 1.01
5-10 | Std | 0.01 | 4.68 | 0.04 | 0.01 | 3.36 | 17.28 | 3.61 | 0.03 | 0.03 | 0.03
3-15 | Best | 0.99 | 0.96 | 0.97 | 0.96 | 0.98 | 0.98 | 0.97 | 0.97 | 0.98 | 0.98
3-15 | Ave | 0.99 | 0.98 | 1.01 | 1.01 | 1.00 | 12.80 | 2.53 | 1.01 | 1.00 | 1.00
3-15 | Std | 0.01 | 0.01 | 0.05 | 0.05 | 0.02 | 19.22 | 6.17 | 0.04 | 0.02 | 0.02
3-3-3 | Best | 0.93 | 0.97 | 0.98 | 0.97 | 0.96 | 0.97 | 0.95 | 0.95 | 0.96 | 0.97
3-3-3 | Ave | 0.99 | 5.76 | 1.02 | 1.01 | 6.39 | 9.68 | 6.88 | 1.27 | 1.00 | 1.01
3-3-3 | Std | 0.01 | 10.34 | 0.04 | 0.03 | 9.12 | 12.53 | 12.01 | 1.93 | 0.02 | 0.03
5-5-5 | Best | 0.99 | 0.96 | 0.94 | 0.99 | 0.97 | 0.97 | 0.98 | 0.98 | 0.97 | 0.97
5-5-5 | Ave | 0.99 | 4.04 | 1.03 | 1.00 | 2.61 | 17.18 | 5.89 | 1.01 | 1.01 | 1.00
5-5-5 | Std | 0.01 | 8.35 | 0.07 | 0.02 | 6.51 | 22.91 | 12.88 | 0.04 | 0.04 | 0.02
3-5-10 | Best | 0.96 | 0.97 | 0.97 | 0.96 | 0.96 | 0.97 | 0.94 | 0.97 | 0.98 | 0.97
3-5-10 | Ave | 0.99 | 5.14 | 1.01 | 1.01 | 1.78 | 26.39 | 2.93 | 1.01 | 1.13 | 1.00
3-5-10 | Std | 0.01 | 10.49 | 0.03 | 0.04 | 4.21 | 25.65 | 7.61 | 0.04 | 0.86 | 0.03
5-10-15 | Best | 0.99 | 0.96 | 0.94 | 0.97 | 0.98 | 0.98 | 0.93 | 0.97 | 0.96 | 0.97
5-10-15 | Ave | 0.99 | 2.54 | 1.01 | 1.00 | 1.00 | 30.82 | 1.02 | 1.00 | 1.00 | 1.01
5-10-15 | Std | 0.01 | 6.19 | 0.04 | 0.02 | 0.02 | 27.45 | 0.06 | 0.01 | 0.03 | 0.04
3-3-3-3 | Best | 0.98 | 0.97 | 0.96 | 0.97 | 0.94 | 0.96 | 0.95 | 0.94 | 0.96 | 0.97
3-3-3-3 | Ave | 1.00 | 18.11 | 1.02 | 1.00 | 10.02 | 15.59 | 14.10 | 1.01 | 4.70 | 1.00
3-3-3-3 | Std | 0.02 | 13.87 | 0.04 | 0.04 | 13.75 | 15.33 | 16.02 | 0.04 | 7.76 | 0.03
5-5-5-5 | Best | 0.99 | 0.97 | 0.97 | 0.97 | 0.98 | 0.98 | 0.97 | 0.96 | 0.94 | 0.97
5-5-5-5 | Ave | 0.99 | 17.53 | 1.01 | 1.00 | 1.52 | 22.82 | 6.67 | 1.01 | 1.16 | 1.00
5-5-5-5 | Std | 0.01 | 13.93 | 0.04 | 0.02 | 3.66 | 24.46 | 15.89 | 0.04 | 0.87 | 0.03
3-3-10-10 | Best | 0.99 | 0.97 | 0.95 | 0.95 | 0.97 | 0.98 | 0.96 | 0.97 | 0.97 | 0.97
3-3-10-10 | Ave | 1.00 | 20.83 | 1.00 | 1.01 | 1.93 | 35.23 | 5.39 | 1.01 | 1.65 | 1.02
3-3-10-10 | Std | 0.01 | 13.54 | 0.03 | 0.03 | 6.56 | 27.02 | 10.59 | 0.05 | 3.11 | 0.10
5-5-10-15 | Best | 0.98 | 0.97 | 0.96 | 0.94 | 0.98 | 0.98 | 0.97 | 0.97 | 0.97 | 0.96
5-5-10-15 | Ave | 0.99 | 19.39 | 1.01 | 1.02 | 1.00 | 55.34 | 2.65 | 1.01 | 1.01 | 1.01
5-5-10-15 | Std | 0.00 | 13.07 | 0.03 | 0.06 | 0.02 | 75.14 | 8.13 | 0.04 | 0.04 | 0.06
Table 5. Position RMSE for the different patterns and methods in experiment 2—without obstacles (in [cm]). Best result for each of the patterns is presented in bold.

Pattern | n | Raw [cm] | DQM [cm] | ABPE [cm] | BSFE [cm] | NN-ABPE logsig [cm] | NN-ABPE tansig [cm] | NN-ABPE softmax [cm] | NN-ABPE radbas [cm] | NN-ABPE purelin [cm] | NN-RAW purelin [cm] | NN-ABPE satlin [cm] | IR [%]
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
#2.1 | 197 | 6.94 | 5.98 | 6.29 | 5.75 | 5.01 | 4.66 | 5.98 | 5.09 | 5.45 | 6.65 | 4.11 | 5.22
#2.2 | 50 | 6.94 | 6.18 | 6.38 | 5.74 | 5.33 | 5.29 | 5.41 | 5.16 | 4.38 | 4.10 | 4.96 | 23.69
#2.3 | 24 | 6.94 | 6.81 | 6.33 | 5.72 | 4.74 | 5.17 | 5.47 | 5.03 | 4.39 | 4.79 | 4.92 | 23.25
#2.4 | 15 | 6.94 | 6.68 | 6.72 | 6.15 | 5.49 | 5.61 | 5.38 | 5.76 | 4.37 | 5.06 | 4.95 | 28.94
#2.5 | 10 | 6.94 | 7.67 | 6.34 | 5.90 | 5.46 | 5.21 | 5.36 | 5.49 | 4.39 | 4.95 | 5.86 | 25.59
#2.6 | 6 | 6.94 | 10.91 | 9.60 | 7.89 | 7.90 | 7.33 | 7.23 | 10.42 | 4.59 | 4.73 | 8.39 | 41.83
#2.7 | 6 | 6.94 | 8.32 | 6.40 | 7.14 | 5.18 | 5.26 | 5.28 | 7.01 | 4.50 | 4.89 | 5.59 | 36.97
#2.8 | 5 | 6.94 | 8.09 | 9.39 | 6.01 | 9.02 | 9.56 | 8.45 | 38.30 | 4.66 | 4.75 | 11.93 | 22.46
Table 6. Position RMSE for the different patterns and methods in experiment 3—with obstacles (in [cm]). Best result for each of the patterns is presented in bold.

Pattern | n | Raw [cm] | DQM [cm] | ABPE [cm] | BSFE [cm] | NN-ABPE logsig [cm] | NN-ABPE tansig [cm] | NN-ABPE softmax [cm] | NN-ABPE radbas [cm] | NN-ABPE purelin [cm] | NN-ABPE satlin [cm] | IR [%]
--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | ---
#3.1 | 189 | 25.19 | 17.09 | 20.63 | 21.13 | 3.93 | 3.42 | 5.62 | 4.42 | 1.90 | 3.86 | 91.01
#3.2 | 48 | 25.19 | 17.84 | 21.51 | 21.26 | 15.91 | 13.76 | 15.86 | 17.84 | 9.08 | 13.21 | 57.29
#3.3 | 23 | 25.19 | 19.54 | 20.76 | 21.99 | 8.22 | 8.27 | 8.30 | 8.78 | 7.22 | 8.00 | 67.17
#3.4 | 14 | 25.19 | 20.18 | 21.40 | 22.39 | 10.74 | 10.14 | 10.56 | 10.96 | 7.67 | 9.87 | 65.74
#3.5 | 10 | 25.19 | 18.89 | 21.03 | 21.26 | 10.34 | 10.41 | 10.18 | 10.49 | 7.70 | 9.52 | 63.78
#3.6 | 6 | 25.19 | 21.29 | 26.36 | 22.32 | 30.00 | 32.66 | 28.74 | 56.05 | 9.74 | 29.20 | 56.36
#3.7 | 6 | 25.19 | 20.34 | 26.86 | 22.60 | 16.29 | 19.52 | 17.27 | 39.13 | 8.22 | 18.92 | 63.63
#3.8 | 5 | 25.19 | 23.74 | 24.32 | 24.62 | 11.32 | 28.08 | 15.14 | 62.87 | 8.05 | 14.42 | 67.30