Article

A Novel Methodology Based on a Deep Neural Network and Data Mining for Predicting the Segmental Voltage Drop in Automated Guided Vehicle Battery Cells

1 Department of Distributed Systems and Informatic Devices, Silesian University of Technology, 44-100 Gliwice, Poland
2 Department of Automated Control Systems, Lviv Polytechnic National University, 79000 Lviv, Ukraine
3 AIUT Sp. z o.o. (Ltd.), 44-109 Gliwice, Poland
* Author to whom correspondence should be addressed.
Electronics 2023, 12(22), 4636; https://doi.org/10.3390/electronics12224636
Submission received: 29 September 2023 / Revised: 8 November 2023 / Accepted: 10 November 2023 / Published: 13 November 2023
(This article belongs to the Section Electrical and Autonomous Vehicles)

Abstract:
AGVs are important elements of the Industry 4.0 automation process. The optimization of logistics transport in production environments depends on the economical use of battery power. In this study, we propose a novel method based on a deep neural network and data mining for predicting the segmental AGV battery voltage drop. The experiments were performed using data from the Formica 1 AGV of AIUT Ltd., Gliwice, Poland. The data were converted to a one-second resolution according to the OPC UA open standard. Pre-processing involved using an analysis of variance to detect any missing data. To do this, the standard deviation, variance, minimum and maximum values, range, and linear deviation were calculated for all of the permitted sigma values in one percent increments. Data with a sigma exceeding 1.5 were considered missing and replaced with a smoothed moving average. The correlation dependencies between the predicted signals were determined using the Pearson, Spearman, and Kendall correlation coefficients. Training, validation, and test sets were prepared by calculating additional parameters for each segment, including the count number, duration, delta voltage, quality, and initial segment voltage, which were classified into static and dynamic categories. Experiments with different numbers of neurons in the hidden layer were performed in order to select the best architecture. The length of the “time window” was also determined experimentally and was 12. The MAPE of the short-term forecast of seven segments and the medium-term forecast of nine segments were 0.09% and 0.18%, respectively. The duration of each study was up to 1.96 min.

1. Introduction

Automated guided vehicles (AGVs) play an important role in the development of the Industry 4.0 concept [1,2]. Due to the increasing complexity and diversity of their tasks, businesses have begun to use them for more than just logistical purposes [3,4]; they can also be used to implement intelligent production lines. However, the primary problems of AGV applications remain unchanged [5,6], namely:
  • Determining the vehicle requirements;
  • Determining the required number of AGVs;
  • Routing the vehicles;
  • Optimizing the guide track;
  • Minimizing the downtime;
  • Managing the battery.
Predicting a battery voltage drop and managing the battery as a whole are current research areas. Regardless of the type of battery, all AGVs have similar operational problems, namely, an insufficient operating time and limited energy reserves. For instance, the Formica 1 AGV, which was developed by AIUT Ltd., Gliwice, Poland [7,8], is allowed to discharge its battery down to 20%, after which it must be sent to be charged, as discharging the battery below 10% is prohibited due to its technological features. Therefore, predicting the battery cell voltage drop is essential for determining whether it is appropriate to send an AGV to perform a particular type of task [9,10,11,12] or whether it is necessary to immediately send it to be charged. This issue is especially pertinent for the Formica 1 when the battery cell voltage approaches 20% of its remaining charge.
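This 20%/10% rule can be expressed as a simple dispatch check. The following Python sketch is purely illustrative: the function names, the use of a scalar predicted drop, and the decision logic are assumptions for this example, not the article's actual implementation.

```python
# Hypothetical sketch of the dispatch rule: the battery may be discharged
# to 20%, at which point the AGV must be sent to charge; discharging
# below 10% is prohibited.  All names here are illustrative assumptions.

CHARGE_THRESHOLD = 20.0  # %; at or below this level, charging is mandatory
HARD_LIMIT = 10.0        # %; discharging below this level is prohibited


def can_accept_task(current_level: float, predicted_drop: float) -> bool:
    """True if the predicted drop keeps the cell at or above 20%."""
    return current_level - predicted_drop >= CHARGE_THRESHOLD


def dispatch(current_level: float, predicted_drop: float) -> str:
    """Decide whether to assign the task or send the AGV to the charger."""
    if current_level <= CHARGE_THRESHOLD:
        return "charge"  # already at the mandatory charging level
    if not can_accept_task(current_level, predicted_drop):
        return "charge"  # the task would push the cell below 20%
    return "assign_task"
```

For example, an AGV at 22% with a predicted 5% drop for the next task would be sent to charge rather than assigned the task.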
The path can consist of numerous segments that form a unit within which an AGV can transport up to 600 kg [7,8]. The route map is divided into a uniform grid, based on which the locations of the segments can be determined. In each segment, an AGV performs a specific type of task. If the battery is discharged to a state in which the AGV cannot drive to the charging point, especially with a load, serious difficulties can arise. Since the weight of the Formica 1 AGV itself is about 600 kg and the maximum weight that it can transport is also 600 kg, moving this AGV to a charging station in the event of a complete battery discharge is extremely problematic [13,14].
The approaches to saving the AGV battery power can be divided into the following categories:
  • Improving the battery charging and discharging processes;
  • Analyzing battery ageing and degradation and assessing the condition of its mechanisms;
  • Investigating the chemical processes within the batteries;
  • Combining the use of batteries and super capacitors;
  • Optimizing the logistics;
  • Improving the utilization of multiple AGVs and the use of their batteries as mobile energy storage;
  • Applying machine learning (ML) techniques to the aforementioned problems.
The goal of this study was to develop a new neural network-based methodology for predicting the battery cell voltage drop on each segment that is traveled by an AGV. The proposed approach uses ML [2,3,15,16] and data preprocessing techniques including modern information technologies that implement:
  • An OPC UA client [17,18,19,20];
  • Data normalization methods [21,22,23];
  • Algorithms for detecting and restoring any partially lost data [24,25,26,27];
  • Methods of correlation analysis [28,29,30];
  • Learning and forecasting using neural networks [31,32,33,34,35,36,37,38].
Most existing methods for predicting the voltage drop in AGV battery cells have a number of limitations. They have poor prediction accuracy for small datasets and are sensitive to noise, partially lost data, and seasonal changes. They take into account only a limited number of factors that affect battery discharge and, in particular, do not account for wear and degradation. The models that can work with large datasets and consider a significant number of parameters have a low level of performance, which makes it impossible to use them in real production conditions. Using the aggregation theory and the acquired results of short- and medium-term forecasts, it is possible to predict whether or not an AGV will be able to complete the task that has been assigned to it in the next aggregate. During the execution of a given task, while traveling from one stop point to another within the current aggregate, the drop in battery cell voltage in the subsequent aggregate can be predicted by considering the type of task that it will perform. After completing the current aggregate and considering the results of the next one, a decision can be made on whether to send the AGV to the charging station immediately or to begin work on the subsequent aggregate.
The main contributions of this article can be summarized as follows:
  • The newly developed neural network-based segmental AGV battery voltage drop prediction method identifies and recovers any partially lost data, establishes correlations between the parameters, and predicts the battery voltage drop within a single unit using a neural network;
  • On the example of the AGV Formica 1, the high efficiency of the proposed methodology for the medium- and short-term forecasting of the battery voltage drop is demonstrated;
  • An increase in the accuracy of predicting the voltage drop of batteries for different types of AGVs compared to existing methods is experimentally established.
The peculiarity of this work is that the research was performed on a dataset that described only one discharge cycle of an AGV battery. Therefore, the methods described in most studies, which rely on detecting a seasonal component, are not very useful here due to the large forecast errors that they produce, and the number of effective research tools is quite limited. The most commonly used methods for these types of forecasts are smoothing methods, autoregressive models, machine learning, and others. Machine learning methods are the most flexible. They involve several stages: data collection and analysis; data preparation; model selection; model training; forecasting and monitoring; and adjustment due to changing conditions. This is why neural networks with data pre-processing, which includes methods of statistical and variance analysis, smoothing, etc., were chosen for the research.
The rest of the article is structured as follows: Section 2 is a review and critical analysis of the existing approaches for predicting the voltage drop in battery cells for various AGVs. In Section 3, there is an overview of the Formica 1 AGV data acquisition and processing system, including a description of (1) the proposed technique for detecting and removing any spontaneous outliers in the signals using dispersion analysis; (2) analysis using the Pearson, Kendall, and Spearman correlation coefficients and data normalization to the range of 0 to 1; and (3) the developed data mining technique and the data preparation for neural network training. In Section 4, a deep neural network (DNN) prediction model is developed to implement the sector-specific short- and medium-term forecasts of AGV battery cell voltage drop using the “time windowing” technique. Graphs of the neural network learning process are also provided, and the influence of the number of neurons on the error rate in the hidden layer (MAPE and MAE) was studied. The learning speed of the neural network was also assessed. In Section 5, the results that were obtained using the developed approach are compared with those that were obtained using existing methods. Finally, Section 6 presents the conclusions and directions for future works.

2. State-of-the-Art

Equivalent circuit models are too simple and cannot capture the dynamics of the changes. Generally, the uneven charge distribution that is caused by the diffusion effect of an electrolyte in the presence of a charging current can be approximated by a first-order low-pass filter. In [39], the authors analyze why the structure of argyrodite containing copper ions contributes to its low thermal conductivity. To this end, they studied the electronic, phonon, and thermoelectric properties of a Cu7PS6 crystal, which were calculated at the DFT level, in the context of the density functional theory and Boltzmann transport theory. As a result, the authors found a strong nonlinearity of the vibrations of the copper atoms in Cu7PS6. The described process included charging/discharging at a rate of 0.1 C. Publication [40] is devoted to calculations of the steady-state geometric configuration and thermodynamic parameters of thin-film cadmium sulfide condensates. The authors found that the wurtzite modification provides a more stable crystal structure of cadmium sulfide at medium and high temperatures.
Publication [41] is devoted to increasing the operating time of AGVs by implementing a kinetic energy recovery system to store braking energy. In this paper, the authors analyze and compare different energy distribution systems using batteries and supercapacitor-coupled batteries. The comparison is based on the overall system characteristics and an initial Pareto efficiency/energy density analysis. This helps to optimize the design of the battery and supercapacitor, thereby saving up to 48% in volume and weight.
In [42], the authors proposed a system for assessing the level of the discharge of an AGV battery on a given route taking into account the available charging capacity. For this purpose, a data acquisition system was developed, obtaining the main characteristics of the battery by performing discharge tests at different speeds and load configurations. They also developed a mathematical model based on the extended Kalman filter method for online state of charge (SOC) estimation. A series of tests confirmed the effectiveness of this method.
The authors in [43] proposed a planning strategy with guaranteed convergence for AGV systems. In this strategy, AGVs interact based on their remaining operating time, including SOC and maintainability. This strategy is based on the assignment of alternative tasks to two interacting AGVs taking into account the status of the tasks that were completed in the previous stage of the process and the productivity of each assembly station. In addition, a new SOC estimator was developed that uses battery current and voltage measurements. The main advantage of the proposed method is the calculation of accurate SOC estimates, which is important for practical use.
Current research [44,45,46,47,48,49,50] analyzes the charging voltage curves to study the battery characteristics. The DC charging voltage curve can be used to analyze the mechanisms of battery aging and degradation, as well as to assess the condition of a battery. This approach often uses methods such as power gain analysis. Publication [51] is devoted to offline parameter identification and SOC estimation for new and used AGV battery packs for monitoring changes. The authors developed a method using genetic algorithm optimization to estimate the battery parameters for a first-order equivalent circuit model at different battery ages. Based on the optimized model, a smooth variable filtering strategy was analyzed to estimate the SOC. The authors confirmed the impact of AGV transportation and charging on production efficiency by testing the developed model for 12 months using real data from driving scenarios. The authors of [52] studied the degradation of lithium-ion (Li-ion) cells. They showed that Li-ion cells decompose through various physical and chemical mechanisms that occur after several cycles. This leads to a decrease in the capacity of the cell and an increase in its resistance. In addition, factors such as the electrode material, operating conditions, and battery temperature have a significant impact. Article [44] deals with the problem of predicting the service life of lithium-ion batteries and estimating the uncertainty that is associated with the forecast. The authors propose a hybrid method that combines empirical mode decomposition (EMD) and a particle filter (PF) for early shelf life prediction and uncertainty assessment. The method is validated on two publicly available lithium-ion battery degradation datasets from NASA Ames and the University of Maryland. To avoid overfitting, only the residual sequence that was obtained after the EMD decomposition of the original data is used as input to the PF.
To improve the utilization of AGVs, the authors of [45] proposed two strategies for the two-stage charging of AGVs that are equipped with lithium-ion batteries. These strategies are based on a heuristic algorithm that was developed to route an AGV to the nearest charging station or a charging station that operates with a minimal delay. Thus, taking into account the charging characteristics of the lithium-ion battery, the duration of each charge is reduced. Publication [46] is devoted to improving the charging and parking processes of AGV systems. To do this, the authors used the Trivial+ and Pearl Chain methods, as well as a method based on a generalized task. In addition, they proposed a combination of two types of vehicle availability rules. The proposed solution provides an efficient method for charging AGV batteries while reducing the traffic density in the AGV system. In [47], the authors investigated the relationship between AGV scheduling tasks, charging thresholds, and energy consumption. This made it possible to effectively address the issue of how AGV charging affects the scheduling of flexible production units using multiple AGVs.
Paper [48] describes a strategy for controlling an AGV device after optimizing its performance. The authors developed a model that simulates the energy approach to determine the predicted trajectory of an AGV over a three-hour period. Using the proposed model, the performance of an AGV can be evaluated using the SOC indicator in the proposed scenario. The authors of [49] developed a multifunctional AGV system (MAGS) and coordinated its management using the example of the containers in a terminal logistics system. The system is implemented using the RLS algorithm taking into account the AGV control method and multi-system coordinated control. As a result, the authors developed the design specifications for an electric AGV, which was then built in the port of Rotterdam.
Publication [50] explores the use of AGV batteries as portable energy storage devices to reduce peak loads at manufacturing enterprises. For this purpose, scenarios were created to determine the electricity costs of small and medium-sized enterprises. In addition, the number of charging stations required for each scenario was calculated to ensure that all of the AGV batteries are discharged during peak load periods. The cost results demonstrated the effectiveness of using AGV batteries as energy storage devices to reduce the peak load at production facilities. In [53], the authors presented a simulation model of production using AGVs. It describes the strategy of a manufacturing company to reduce the peak load using AGV batteries. The modeling results can be used to evaluate the abovementioned method in production.
A deep learning (DL) method for estimating the SOC of lithium-ion batteries in electric vehicles is described in [15]. The authors analyzed the design process using publicly available datasets and investigated the structural characteristics and applicability of different types of neural networks. Using a genetic algorithm, they optimized the process and were able to estimate the replenishment and waiting time more accurately. As a result, they identified the advantages and disadvantages of DL in assessing the SOC of lithium-ion batteries. The authors concluded that DL methods are superior to the mathematical models, because they better account for changes in battery state parameters. The authors of [43] propose a method for setting up and analyzing the charging voltage curve based on artificial neural networks (ANNs). The proposed method was compared with existing methods of polynomial curve selection and estimation. The results of the study were used to analyze the battery capacity in relation to the phase change reaction that occurred inside the battery. This method can be used in battery management systems for different types of batteries. The advanced model can be used in simulation, analysis, or online battery diagnostics modes.
In [54], the authors developed an ANN that estimates the SOC of a lithium-ion battery using measured voltage and current parameters. The network was trained using an R-RC model with a SOC/OCV relationship. The ANN was tested using control driving cycles. The authors of [55] developed a system for the short-term prediction of battery discharge using ML methods and data preprocessing. The proposed system was tested in real production conditions with an average absolute error of less than 1%. The authors of [56] developed a control system for lithium-ion batteries based on SOC prediction. They used a synthetic neural network system that was capable of maintaining the physical description of a battery by approximating the nonlinear dynamics of each component. The developed model also takes into account the ohmic effects, electrolyte diffusion, and uneven charge distribution within a cell. Tests have shown that the operating temperature and SOC at the input can negatively affect system performance. The authors of [57] demonstrated that an ANN that was trained on data on the cyclic behavior of a battery can determine the residual charge of a battery in one test cycle. In this case, the estimated forecast accuracy by the average absolute error was 6.46%.
Study [58] was devoted to the development of a controlled and automated transportation platform based on CMMs for the Industry 4.0 ecosystem. The authors described the design and implementation of a new differential path tracking controller for an AGV drive on a robotic platform that was designed to approximate general functionality with multiple CMM inputs and outputs. Thus, the proposed platform could cope with the unstructured and unmodeled disturbances and dynamics that are inherent in any environment in which an AGV operates.

3. Materials and Methods

In this paper, we present a new neural network method for predicting the AGV battery voltage drop segment by segment. It is based on AGV data collection and analysis methods, as well as on short- and medium-term neural network forecasting which was implemented with DL. The proposed method includes the following steps:
  • Collecting the AGV data;
  • Eliminating any spontaneous outliers in a signal;
  • Analyzing the data using Pearson’s correlation coefficient;
  • Data mining;
  • Training and testing the DNN prediction model.

3.1. Experimental Setup

The routing of the Formica 1 AGV is based on the AIUT plant map, which was created by the operator. On the map, all of the possible obstacles in the AGV’s path are indicated in brown such as, for example, the structural elements of buildings, furniture, workplaces, etc. A double blue line shows the route that the AGV will follow. The areas that the AGV passes through are indicated by small black dots. Stops are indicated by large black circles at the end of the blue route. At these stops, the AGV will perform a certain type of task. For example, unloading goods or performing high-precision work with the help of an installed manipulator.
Figure 1 shows the route of the AGV between stops, both empty and with a load of 600 kg. It was created using specialized Navitrol software v 6.4. The sections of the route where AGV traffic is prohibited are indicated in light brown. These are fixed obstacles in the AGV’s path, for example, walls, stationary workplaces where people are working, pallets, boxes that were in place at the time of mapping, and other obstacles. Blue lines indicate the path along which the AGV is allowed to move. The navigation points between which the AGV moves are indicated by large black circles with a brown vertical line. The small black dots are the segments into which the entire route is divided. The set of navigation points forms an aggregate, i.e., a logical unit within which the AGV will fully complete a task. For example, one aggregate is the transport of cargo from point A to point C, which consists of the following actions: loading the cargo at point A; driving and waiting in line to unload at point D; further transport of the cargo and unloading it at point C.
The route consists of numbered segments with red and green numbers. The segments are indicated in red when the AGV is moving forward and green when it is moving backward. These segments are assigned unique numbers when the route map is created and do not change until the route map is modified. The end stop is where the AGV stops and starts moving in the opposite direction. Therefore, the AGV follows the following sequence of stops: A, D, C, B, A, D, C, B, etc. On the sections of the route from A to D and from C to B, the AGV moves forward, and on the sections from A to D, from D to C, and from B to A, it moves backward. In addition, the route from A to D usually runs in both directions.
A uniform grid is superimposed on the AGV’s movement map and is divided into 10 × 10 sectors. As is shown in Figure 1, the length of the segments and the distribution of the segments between the sectors are uneven. For example, the route to point A consists of one segment, while the route between points D and B consists of five segments. The battery discharge rate is measured segment by segment, which means that the battery level and other diagnostic values are measured at the end of each segment as the AGV moves between stops.

3.2. Collecting and Analyzing the AGV Data

The AGV architecture is typical of a standard automatic control system. A standard programmable logic controller (PLC) controls the AGV platform, which is equipped with various terminals such as drives with controllers, electric drives, and HMI devices. Depending on a customer’s order, the transportation work that is performed by a Formica 1 AGV may vary. The route that the AGV automatically follows is created in advance using an external computer system. This is done manually by technicians or automatically with the help of specialized software (manufacturing execution system or warehouse management system), which provides full access to production process data in real time.
AIUT Ltd. (Gliwice, Poland) has developed a solution for monitoring and diagnosing industrial equipment that is based on the OPC UA standard [17,20], which facilitates access to all of the process data that are generated by immersive devices. This development uses PLCs as the source of input data for the OPC UA training server. The collected information is transmitted over a digital communication bus using the CAN protocol. Communication between the controller and the server is conducted via the TCP/IP protocol. After the connection is established, the OPC UA server transmits data to a passive TCP server.
Table 1 shows the options for using the Formica 1 AGV 6000 chassis with the corresponding parameters. It also provides the abbreviations of the parameter group fields, the size in bytes, and a description of the field function.
The OPC client can access the information in Table 1, which is provided by the OPC UA server using the online registration mechanism. To optimize the bandwidth, information is transmitted between the server and the client only when the data in the source changes. The OPC UA server stores the frames that are received from the AGV in an external database to enable multiple accesses to the collected information.

3.2.1. Collecting the AGV Data

Publications [17,20,55] provide a detailed description of the developed OPC UA ML server, which provides access to the data that are generated, processed, and used by ANNs for ML algorithms. Due to the immutability of the data descriptor and the way in which the information is presented, the server ensures a stable connection if the internal data structure of the ANN changes.
In the new version of the internal data structure, the weight of the cargo that is being transported by the vehicle and the percentage of the vehicle’s battery charge have been added as parameters. The following four additional parameters are stored in the [WS] WEIGHT STATUSES structure:
  • Weight statuses—front left strain gauge weight;
  • Weight statuses—front right strain gauge weight;
  • Weight statuses—rear left strain gauge weight;
  • Weight statuses—rear right strain gauge weight.
The weight is measured at each of the AGV’s four corners in order to determine whether it is evenly loaded. Therefore, if the AGV has a heavier load on one side, the weight of that side will be greater than the weight of the other side. The total mass of the goods being transported by the AGV is calculated as the sum of the four corner values described earlier.
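The corner-weight aggregation described above can be sketched as follows. The function names and the side-imbalance helper are illustrative assumptions for this example; only the four strain-gauge fields themselves come from the [WS] WEIGHT STATUSES structure.

```python
# Hypothetical sketch of the corner-weight aggregation: the total mass is
# the sum of the four strain-gauge readings, and an uneven load shows up
# as a difference between the side sums.

def total_load(front_left: float, front_right: float,
               rear_left: float, rear_right: float) -> float:
    """Total transported mass (kg) as the sum of the four corner readings."""
    return front_left + front_right + rear_left + rear_right


def side_imbalance(front_left: float, front_right: float,
                   rear_left: float, rear_right: float) -> float:
    """Left-minus-right load difference (kg); positive means left-heavy."""
    return (front_left + rear_left) - (front_right + rear_right)
```

For an evenly loaded 600 kg payload, each corner reads about 150 kg and the imbalance is zero.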
The battery value is added as a parameter in [ENS]—energy signal. This parameter describes the current state of the battery as a percentage. This parameter should be used to assess the degree of battery wear as a new battery holds a charge for an extended period, whereas an older battery has worse parameters and discharges more quickly.
The software implementation of the information collection system was written in the multiparadigm language Python. All of the historical data were read from the database and sent to port 5000. Using the OPC UA client protocol, the data were written to a file with the extension “.pkl” in the CSV (comma-separated values) format. The Pandas software 2.1.2 library was used to extract the required AGV parameters from the “.pkl” file.
In order to perform the neural network forecasting using the “time windowing” technique, the signals must have a uniform discreteness. The main advantages of using the “time window” technique for deep learning models with time series data include:
  • Reducing the number of model parameters by considering only a portion of the sequence, which also makes efficient use of computational resources;
  • Improving the prediction accuracy by incorporating more detailed information about the time series;
  • Enabling a more detailed analysis of the time series data to establish dependencies between them;
  • Working with incomplete and partially missing data;
  • Adapting to different lengths of time series;
  • Generalizing to different types of data.
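A minimal illustration of how a “time window” turns a uniformly sampled series into supervised training samples is given below; the window length of 12 matches the experimentally chosen value, while the function itself is a generic sketch rather than the authors' exact pipeline.

```python
import numpy as np

def make_windows(series, window=12):
    """Slice a 1-D series into (inputs, targets): each input is `window`
    consecutive samples, and the target is the sample that follows it."""
    x = np.asarray(series, dtype=float)
    inputs = np.stack([x[i:i + window] for i in range(len(x) - window)])
    targets = x[window:]
    return inputs, targets
```

A series of N samples yields N − 12 training pairs, each predicting one step ahead from the preceding 12 readings.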
The signals that are recorded in the database are partially lost, and therefore they need to be restored with a uniform sampling along the time axis. Firstly, signal recovery was performed according to the OPC UA protocol standard, under which, in order to save disk space and optimize communication channel bandwidth, data are recorded only when a parameter value changes. Therefore, in the first step, the signals were supplemented with values repeated at a one-second interval. Secondly, due to the large number of readings, a minute-by-minute averaging became necessary; to do this, the arithmetic mean of each parameter per minute was calculated. The signals “ActualSpeed_L” and “ActualSpeed_R” were rejected because they were not recorded in the database on the specified dates and therefore could not be used. Thirdly, using the MinMaxScaler function from scikit-learn, a free ML library for the Python programming language, all data were normalized to a range from 0 to 1.
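These three preparation steps can be sketched with Pandas on a toy frame. The column name and timestamps below are illustrative, and the min-max step is written out by hand so the example is self-contained; it is equivalent to applying MinMaxScaler to each column.

```python
import pandas as pd

# Toy frame imitating change-only OPC UA records (values and timestamps
# are illustrative, not actual AGV data).
raw = pd.DataFrame(
    {"BatteryVoltage": [48.2, 48.1, 47.9]},
    index=pd.to_datetime(["2023-05-01 10:00:00",
                          "2023-05-01 10:00:03",
                          "2023-05-01 10:01:30"]),
)

# 1) OPC UA stores a value only when it changes: repeat the last value
#    at a one-second resolution.
per_second = raw.resample("1s").ffill()

# 2) Minute-by-minute averaging (arithmetic mean of each parameter).
per_minute = per_second.resample("1min").mean()

# 3) Normalize every parameter to the range [0, 1] (hand-written
#    equivalent of MinMaxScaler).
scaled = (per_minute - per_minute.min()) / (per_minute.max() - per_minute.min())
```

The 90-second toy span expands to 91 one-second rows, collapses to two per-minute means, and ends up scaled into [0, 1].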

3.2.2. Eliminating Any Spontaneous Outliers in the Signals

The next step was to remove any spontaneous outliers in order to improve the quality of the ANN training, because after averaging, all of the outliers remain in the signals. The method that was used to detect and remove outliers was based on an analysis of variance. For each parameter in Table 1, outliers were found using two statistics: the window length and the number of standard deviations (sigma) from the mean value. The variance, mean, and “noise” (the difference between the original data and its exponential mean) were calculated. All of the signal values that exceeded the specified sigma were considered to be spontaneous outliers. If the noise value of a particular parameter exceeded the sigma threshold, it was replaced by the average value. Table A1 summarizes the results of the calculations, including the standard deviation, variance, minimum and maximum values, range, and linear deviation. These parameters were calculated for an optimal time window length of 12 and a sigma value of 1.5.
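A minimal sketch of this variance-based cleaning is shown below. It substitutes a simple rolling mean for the exponential mean mentioned above; the window of 12 and sigma of 1.5 follow the text, while the function itself is an illustrative assumption, not the authors' implementation.

```python
import pandas as pd

def remove_outliers(signal: pd.Series, window: int = 12,
                    sigma: float = 1.5) -> pd.Series:
    """Replace samples whose deviation from the rolling mean exceeds
    `sigma` rolling standard deviations with the rolling mean itself."""
    mean = signal.rolling(window, min_periods=1).mean()
    std = signal.rolling(window, min_periods=1).std().fillna(0.0)
    noise = (signal - mean).abs()  # the "noise" component of the signal
    # Keep samples within the sigma band; substitute the mean elsewhere.
    return signal.where(noise <= sigma * std, mean)
```

A single spike of 100 in an otherwise flat unit-valued signal is replaced by the local window mean, while the surrounding samples pass through unchanged.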
Figure 2 shows the results of using the proposed method to detect and eliminate spontaneous outliers in the AGV signal at σ = 1.5 and a time window length of 12. The signal example provided was captured when the AGV was transporting cargo, which was gradually unloaded along a predetermined route. In the case of loaded vehicles, outliers occur much more frequently than in the case of unloaded vehicles, so it is more realistic to study the results of the proposed method using such signals.
As is shown in Figure 2, most of the spontaneous outliers and signal deviations were eliminated by the proposed method. The signals became smoother, which improved the training accuracy of the ANN that was based on the processed signals. However, when analyzing the outliers using a variance analysis, a large number of consecutive missing data points could prevent the method from detecting outliers at this stage. In such cases, ML techniques can be used to detect anomalies in the presence of missing data, including autoencoders, random forests, deep learning, clustering methods, signal processing methods, etc. The best solution is to use an integrated approach that combines different anomaly detection methods to obtain the most accurate and reliable results.

3.2.3. Data Analysis

Pearson’s correlation coefficient is one of the simplest and most effective methods for determining linear correlation. Since an AGV is the source of a large amount of data, only parameters with a high correlation should be selected. For this purpose, we constructed a correlation matrix for the main AGV parameters, as shown in Table A2. The main diagonal of the matrix contains the correlation of each parameter with itself, which is always the highest, and was therefore excluded from the analysis. Correlation values from 0.5 to 1.0 and from −1.0 to −0.5 were considered high positive and negative correlations, respectively, and values from 0.3 to 0.5 and from −0.5 to −0.3 were considered moderate positive and negative correlations. Parameters with a weak correlation were not taken into account. In addition, parameters with a negative correlation were also not considered because they had been experimentally found to increase the error during the ANN training. Therefore, only parameters with a high or moderate positive correlation were used for the ANN training.
Spearman’s rank correlation coefficient is used to measure the degree of the statistical relationship between the ranks of two variables in a data set. It can be used to determine the extent to which variables change together, regardless of the specific functional relationship between them. Because Spearman’s coefficient operates on ranks rather than exact values, it is robust to outliers and anomalies, which made it appropriate for the dataset under study. It is also effective for comparing two variables measured on different scales.
The Kendall correlation coefficient was also used to measure the degree of the relationship between the ranks of two variables in the dataset. It is convenient because it is non-parametric and does not require the data to have a normal distribution; the data can contain outliers, and the variables can be ordinal or ranked. The results of calculating the Pearson, Spearman, and Kendall correlation coefficients for the primary AGV parameters are shown in Table A2.
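As an illustration, all three coefficients can be computed with pandas. The column names and values below are placeholders standing in for the AGV signals of Table A2, not the exact tags; the 0.3 selection threshold follows the text.

```python
import pandas as pd

# Illustrative frame standing in for the AGV parameters (hypothetical names).
df = pd.DataFrame({
    "battery_cell_voltage": [42680, 42600, 42510, 42400, 42310, 42200],
    "momentary_power":      [174,   210,   255,   290,   320,   353],
    "speed":                [0.05,  0.12,  0.20,  0.25,  0.30,  0.32],
})

# pandas computes all three coefficients used in the paper.
pearson = df.corr(method="pearson")
spearman = df.corr(method="spearman")
kendall = df.corr(method="kendall")

# Keep only high/moderate positive pairs (r >= 0.3), excluding the
# main diagonal, as done before the ANN training.
strong = (pearson >= 0.3) & (pearson < 1.0)
```

Since the voltage decreases monotonically while power and speed increase, the rank-based coefficients for those pairs come out exactly −1, while Pearson reflects the near-linear trend.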
However, the correlation only reveals the formal relationships and does not enable the causal relationships to be identified, i.e., if a positive or negative correlation is found between two variables, it does not make it possible to determine which variable is the cause and which is the effect. A correlation analysis usually assumes a normal distribution of data. Based on the histogram, it was found that the data had a normal distribution that was truncated on the right (Figure 3).
However, each of the correlation coefficients has its limitations. Pearson’s correlation coefficient is sensitive to outliers and does not consider the nonlinear relationship between variables. This correlation coefficient also has limitations for large samples. Kendall’s and Spearman’s correlation coefficients are limited to ordinal variables and are not suitable for quantitative variables. Moreover, Spearman’s correlation requires large samples for statistical significance.
Therefore, the following main statistical indicators were additionally calculated:
  • Count: 18,424.000000;
  • Mean: 46,914.052866;
  • Std: 1901.390125;
  • Min: 42,830.000000;
  • 25%: 45,360.000000;
  • 50%: 47,020.000000;
  • 75%: 48,520.000000;
  • Max: 50,200.000000.
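The eight indicators listed above are exactly what pandas reports via describe() for a numeric series; a minimal sketch with illustrative voltage values (the paper's actual figures come from its own dataset):

```python
import pandas as pd

# Illustrative battery-voltage sample; describe() returns the same
# eight statistics listed above: count, mean, std, min, 25%, 50%, 75%, max.
voltage = pd.Series([42830.0, 45360.0, 46900.0, 47020.0, 48520.0, 50200.0])
stats = voltage.describe()
```

The quartiles (25%, 50%, 75%) are interpolated from the sorted values, which is why they need not coincide with observed samples.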
After constructing the autocorrelation and partial autocorrelation functions and checking the signal for white noise, the following were found:
  • The signal was probably not white noise;
  • The values in the lags were correlated with each other;
  • The histogram resembled a Gaussian distribution;
  • The mean and variance changed over time.
Next, we tested the frequency distribution of the analyzed data using the Kolmogorov–Smirnov test, which compares the empirical distribution function of the data with a theoretical distribution function and estimates the difference between them. We obtained a Kolmogorov–Smirnov statistic of 0.359140341106519 and a p-value of 0.2534443706650547, which meant that there was insufficient statistical evidence to reject the null hypothesis. According to these results, it can be assumed that the sample corresponded to the theoretical distribution tested at the 0.05 significance level.
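The test maps directly onto scipy.stats.kstest. The sketch below uses synthetic data, so its statistic will differ from the 0.359 / p = 0.253 quoted above, which were obtained on the paper's own dataset; the loc/scale values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic sample standing in for the voltage data.
sample = rng.normal(loc=46914.0, scale=1901.0, size=200)

# Compare the empirical distribution with a fitted normal distribution.
ks_stat, p_value = stats.kstest(
    sample, "norm", args=(sample.mean(), sample.std(ddof=1))
)

# p > 0.05 -> insufficient evidence to reject the null hypothesis
# that the sample follows the theoretical (normal) distribution.
reject = p_value < 0.05
```

One caveat: estimating the normal parameters from the same sample makes the classical KS p-value conservative; the Lilliefors variant of the test corrects for this.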

3.3. Data Mining

The processed and selected signals were then used to train a DNN. To accomplish this, a data analysis was performed by calculating the following parameters for each segment:
  • Segment—number of the specified segment;
  • Duration—the average duration of the presence of the AGV in a given segment;
  • Samples—the average sample count for a given segment;
  • Voltage delta—the mathematical expectation of the battery voltage drop after one passage of a given segment;
  • Voltage delta variance—the variance of the battery voltage drop after one passage of a given segment;
  • Mass—the mass carried by the AGV on a given segment;
  • Start segment voltage—the averaged battery voltage at the beginning of the movement in a given segment.
Table A4 shows the numerical results of the analysis of the data that were added to the dictionary. During the analysis, 25 separate segments were selected as is shown in Figure 1. Based on these segments, the following new parameters were calculated: segment, sample count, duration, voltage delta, mass, and segment start voltage. The obtained parameters were grouped by the value of the “segment”. For each segment, the “sample number”, the average number of measurements for all of the parameters being studied, was calculated. Since the time spent by an AGV in a segment can vary, the length of the time of the AGV’s passage through a segment was calculated and used as the “duration” parameter. When an AGV entered a segment, the voltage on the battery cells was recorded as the “initial segment voltage” parameter. The difference between the voltage values of the battery cell at the input and output of a segment was stored in the “delta voltage” parameter. The mass parameter represented the weight of the AGV; its calculation method is described in the signal description section. All of the new parameters were divided into static and dynamic parameters. The static parameters included segment, sample count, duration, voltage delta, and mass. The dynamic parameter was the start segment voltage.
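The segment-wise aggregation described above can be sketched with a pandas groupby. The log below is a toy stand-in for the per-second processed signals, and the column and parameter names are illustrative, not the exact OPC UA tags.

```python
import pandas as pd

# Toy per-second log standing in for the processed AGV signals.
log = pd.DataFrame({
    "segment": [4, 4, 4, 12, 12, 12, 12],
    "voltage": [46920, 46910, 46905, 46905, 46890, 46880, 46870],
    "mass":    [200, 200, 200, 200, 200, 200, 200],
})

def summarize_segment(g):
    """Per-segment parameters analogous to those listed in the text."""
    return pd.Series({
        "sample_count": len(g),
        "duration": len(g),  # 1 s resolution -> duration in seconds
        "start_segment_voltage": g["voltage"].iloc[0],
        "voltage_delta": g["voltage"].iloc[0] - g["voltage"].iloc[-1],
        "mass": g["mass"].mean(),
    })

per_segment = log.groupby("segment").apply(summarize_segment)
```

In the real pipeline these per-segment rows would then be averaged over all passages of each segment before being added to the dictionary of Table A4.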
In order to improve the training of the ANN, the Python MinMaxScaler() function was used to normalize all of the parameters except the “segment” identifier to the range from 0 to 1. Based on the data that were obtained, training and test subsets for the ANN were prepared.
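The normalization step can be sketched as follows; the feature matrix is illustrative, and the segment identifier column is assumed to be kept aside, as the text describes.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Illustrative per-segment features (segment ids are handled separately).
# Columns: duration, start_segment_voltage, voltage_delta.
features = np.array([
    [3.0, 46920.0, 15.0],
    [4.0, 46905.0, 35.0],
    [5.0, 46870.0, 10.0],
])

scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(features)
```

MinMaxScaler rescales each column independently, so every feature spans exactly [0, 1] regardless of its original units; the fitted scaler can later invert the transform on the predictions.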

4. Modeling and Results

4.1. Selecting the Type of Artificial Neural Network

In this study, we used a hierarchical learning neural network, also known as a deep learning ANN. To perform short- and medium-term forecasts, a simple feed-forward neural network without feedback was developed: a signal is fed to the input layer, transmitted through the hidden layer to the output layer, and the results are obtained at the output layer. The proposed model included an input layer with 24 neurons, a hidden layer with 12 neurons, and an output layer with one neuron. The ANN was trained using the back-propagation method with gradient descent. This method was selected because it is simple to implement and tends to converge to good local optima. However, it is computationally expensive and cannot be used for online ANN retraining. The calculation speed can be improved using techniques such as mini-batching, i.e., computing the gradient on several training examples simultaneously, possibly across several machines.

4.2. DNN Prediction Model

The inputs to the neural network were the current time values of the parameters “segment”, “count”, “duration”, “voltage delta”, and “mass”. The resulting value at the output of the DNN was the “start segment voltage” for the next time moment. Therefore, the network accepted all of the signals for the current minute and produced the result for the next minute on the output layer, because the signal had a per-minute discreteness. 97% of the samples were used for the DNN training and 3% for the testing. Therefore, out of 5204 processed samples, the first 5052 were used to train the model, and the remaining 152 were used to evaluate the prediction accuracy of the constructed model.
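The time-window construction and the chronological 97/3 split can be sketched as follows; the window length of 12 follows the paper, while the discharge curve and sample count are synthetic.

```python
import numpy as np

def make_windows(series, window=12):
    """Build (X, y) pairs: `window` consecutive values predict the next one.
    The paper's experimentally chosen window length is 12."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X), np.array(y)

values = np.linspace(46920.0, 42830.0, 200)  # illustrative discharge curve
X, y = make_windows(values, window=12)

# 97% of the samples for training, 3% for testing, kept in time order
# (no shuffling, so the test set is strictly in the future).
split = int(len(X) * 0.97)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
```

Keeping the split chronological mirrors the paper's setup, where the first 5052 samples train the model and the last 152 evaluate it.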
To train the ANN, Keras, an open-source neural network library running on TensorFlow, was used. This library was selected because of its intuitive set of high-level abstractions, modularity, and extensible functionality. The “fit” method was used to train the model; to do this, the target data, batch size, and number of training epochs were specified. The linear activation function was used for the input and output layers, and the ReLU function was used for the hidden layer. The performance was evaluated by the mean squared error (MSE), which is the average of the squared differences between the predicted and actual observations. The Adam algorithm was used as the optimizer, with the mean absolute percentage error (MAPE) as the reported measure. MAPE is the average of the absolute differences between the predicted and actual observations, expressed as a percentage. The “evaluate” function was used to evaluate the model. The DNN structure that was used is shown in Figure 4. This is a typical standardized scheme that was built in Python using the keras.utils.plot_model function, which visualizes a neural network model in Keras.
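A minimal Keras sketch of the described setup (24-12-1 layers, linear/ReLU/linear activations, MSE loss, Adam optimizer, MAPE metric); the synthetic data and the tiny epoch/batch settings are for illustration only, not the paper's training configuration.

```python
import numpy as np
from tensorflow import keras

# 24-neuron linear input layer, 12-neuron ReLU hidden layer,
# 1-neuron linear output, as described in the text.
model = keras.Sequential([
    keras.layers.Input(shape=(24,)),
    keras.layers.Dense(24, activation="linear"),
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(1, activation="linear"),
])
model.compile(
    optimizer="adam",
    loss="mse",
    metrics=[keras.metrics.MeanAbsolutePercentageError()],
)

# Tiny synthetic fit/evaluate round-trip, just to show the API shape.
X = np.random.rand(32, 24).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
mse, mape = model.evaluate(X, y, verbose=0)
```

keras.utils.plot_model(model) would then render the layer diagram referred to as Figure 4.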
The composition of the DNN prediction model is shown in Figure 4. On the left, there are six ANN branches, each with 12 neurons in the input layer, a hidden layer consisting of a varying number of neurons (from 2 to 11), and one neuron in the output layer. The optimal number of neurons was determined experimentally based on a segmentation prediction (see Table A5). The middle block combined the outputs of the six branches into a single tensor along the final dimension using tf.keras.layers.concatenate; this was necessary for further processing in the DNN. The output block on the right was formed by two dense layers: the first contained the hidden-layer number of neurons and the ReLU activation function, and the second consisted of one neuron with a linear activation function. The input data of the developed model were the segment, readings, duration, delta voltage, mass, and initial voltage of the segment at time t, and the output was the initial voltage at the beginning of the segment at time t + 1.
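The six-branch structure can be sketched with the Keras functional API. The per-branch hidden sizes and the final hidden width of eight below are placeholders standing in for the experimentally tuned values in Table A5.

```python
import numpy as np
from tensorflow import keras

# One branch per input parameter: a 12-step time window feeds a small
# hidden layer (2..11 neurons, tuned experimentally) and a linear output.
branch_inputs, branch_outputs = [], []
for hidden in [2, 4, 6, 8, 10, 11]:  # placeholder branch sizes
    inp = keras.Input(shape=(12,))
    h = keras.layers.Dense(hidden, activation="relu")(inp)
    branch_outputs.append(keras.layers.Dense(1, activation="linear")(h))
    branch_inputs.append(inp)

# Merge the six branch outputs into one tensor along the last axis.
merged = keras.layers.concatenate(branch_outputs)
x = keras.layers.Dense(8, activation="relu")(merged)        # hidden block
prediction = keras.layers.Dense(1, activation="linear")(x)  # voltage at t+1

model = keras.Model(inputs=branch_inputs, outputs=prediction)

# Quick shape check: six windows of 12 values -> one prediction per sample.
pred = model.predict([np.random.rand(3, 12).astype("float32")] * 6, verbose=0)
```

Each of the six input tensors corresponds to one parameter's time window, and the single output is the start-segment voltage for the next time moment.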
The statistical outcomes analysis technique (SOAT) was used to test the effectiveness of the DNN prediction model in real-world conditions. The main steps of the SOAT are: collecting the data for the predictive model; dividing the data into training and test sets; building the predictive model; forecasting and evaluating the results of the forecasts; evaluating the accuracy using various metrics (MSE, MAPE, MAE); analyzing the results using graphs of the actual values and error distributions; and fitting and improving the model until satisfactory results are achieved.
Data preparation included uniform sampling using OPC UA, the detection and removal of partially missing data, and normalization to the range [0, 1]. We then established the correlations between the data by analyzing the Pearson, Spearman, and Kendall correlation coefficients. The entire dataset was then divided into subsets for training, validation, and testing; a test sample should comprise 20–30% of the population, and a validation sample 10–20%. Various hyperparameters of the DNN model were determined, such as the number of neurons, the number of hidden layers, the training time, and the activation function. A search range was defined for each hyperparameter: for the number of layers, values of 1, 2, or 3 could be selected; for the number of neurons in each layer, from 2 to 12; and for the training time, from 0.1 to 2 min. Their values were selected by testing and evaluating performance using the selected indicators. By varying these hyperparameters, different variants of the ANN model were created.
The model was trained on the training dataset, and the validation dataset was used to monitor the model's generalization. The training was stopped when the model reached the desired accuracy criteria or when the number of epochs reached the maximum. For each experiment, the prediction results were stored based on the data in Table A5. To determine the accuracy of the model, the predictions obtained with the DNN model were compared with the actual results obtained on the test dataset. Accuracy measures such as the mean absolute error (MAE), mean squared error (MSE), and mean absolute percentage error (MAPE) were calculated. These indicators were also compared with the established permissible limits of fractional voltage drops in the AGV battery cells. If the indicators met the established limits, it could be concluded that the model was effective. Otherwise, the necessary modifications could be made to improve the model and the SOAT iterated by adjusting the hyperparameters or including/excluding certain input variables. As the scale of the data increases, if the prediction error exceeds an acceptable limit, the ANN should be retrained. Moreover, as the scale of the data increases, the problem of overfitting may arise, in which the model overfits the training data and cannot effectively generalize to new data. To avoid this problem, it is necessary to retrain the ANN, which involves adding new training data to the existing model and re-training the model on these data. Experiments with an increased data scale are presented in Table A5. The experiments were conducted for different numbers of segments. Let us take a closer look at the result of one of these forecasts. The forecast was made for the longest AGV route from point B to point A, which consisted of nine segments. DNN training with the maximum allowable parameter values was performed based on 310 previous segments (ten more than in the previous studies).
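The three accuracy measures used in the SOAT evaluation step can be computed directly; a small self-contained sketch with hand-checkable values:

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MAE, MSE and MAPE, as used in the SOAT evaluation step."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))                      # mean absolute error
    mse = np.mean(err ** 2)                         # mean squared error
    mape = np.mean(np.abs(err / y_true)) * 100.0    # percentage error
    return mae, mse, mape

# Errors of 1 and 2 on true values 100 and 200:
# MAE = 1.5, MSE = 2.5, MAPE = 1.0%.
mae, mse, mape = forecast_metrics([100.0, 200.0], [101.0, 198.0])
```

Note that MAPE is undefined when a true value is zero, which is not an issue here since battery cell voltages are strictly positive.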
The entire dataset with the maximum possible number of segments was used to train and test the neural network. No significant changes were observed in the forecast accuracy or learning speed. It was not possible to test the DNN on more data because, according to labor protection rules, an AGV may not be operated for more than eight hours on premises where people work. Moreover, when an AGV stops, its battery must be charged according to the appropriate technological process.
The name of the class, as well as its type and description, are displayed in the first line. The mean absolute error was used to measure the loss of the neural network model. When the validation loss did not improve after five consecutive epochs, the learning procedure was terminated. During training, the model with the lowest validation loss was saved. The dynamics of the ANN training process are shown in Figure 5.
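In Keras, the stopping rule described here (terminate after five epochs without validation-loss improvement, keep the best model) is typically expressed with callbacks; a sketch, with the checkpoint filename as a placeholder:

```python
from tensorflow import keras

# Stop when val_loss has not improved for five consecutive epochs,
# and keep the weights of the best (lowest validation loss) model.
callbacks = [
    keras.callbacks.EarlyStopping(
        monitor="val_loss", patience=5, restore_best_weights=True),
    keras.callbacks.ModelCheckpoint(
        "best_model.keras",  # placeholder path
        monitor="val_loss", save_best_only=True),
]
```

These callbacks are then passed to model.fit(..., validation_data=..., callbacks=callbacks), which is what produces the training curves shown in Figure 5.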
The effect of the number of neurons in the hidden layer on the values of the learning index MSE, MAPE%, and MAE was also analyzed. Table A5 shows the results of the predictive analysis using the proposed method with a different number of segments. The network learning rate and the number of network iterations were also taken into account.
For clarity, Table 2 provides an example of the effects of predicting the different numbers of neurons in the hidden layer for medium-term predictions when the AGV passed through nine segments. Symbolic designations were used: TW1—DNN in the mode of segmental forecasting by using the “time windowing” technique based on the values that were obtained from an AGV without using data mining, TW2—DNN in the mode of segmental forecasting by using the “time windowing” technique based on the averaged data obtained using the proposed data mining, stat—statistically averaged values for each segment.
For example, the optimal model for nine segments consisted of an input layer of 72 neurons (12 neurons for each of the six parameters), a hidden layer of 36 neurons (six neurons for each of the six parameters), and an output layer with one neuron. The MSE, MAPE, and MAE errors for this TW1—DNN structure were 12.96, 0.01, and 3.07, respectively. This indicated that the model accuracy was highest with six neurons per parameter in the hidden layer. The model training took 1.06 min; the training time was longer, but it saved 0.90 s compared to the longest training time shown in Table A5. Taking the performance of the model into account, the number of neurons in the hidden layer of this model was set to eight, thereby sacrificing time for better performance, and the number of epochs and the batch size were set to 100 and 2, respectively.

5. Comparison and Discussion

In this section, we analyze the effectiveness and efficiency of the proposed network method for predicting the partial voltage drop in AGV batteries. To do so, we compared the prediction accuracy of two different types of neural networks with and without using the developed intelligent data mining method.
Using identical training sets, we compared the prediction results of two different neural network models, namely a DNN and a back propagation neural network. The following methods were considered:
  • OPC UA with a per-minute averaging;
  • With the elimination of spontaneous outliers in signals;
  • With the developed data mining methodology.
The accuracy was measured using software designed to segmentally predict the key parameters that affect the voltage drop in an AGV battery. The following Python libraries were used for this purpose: tensorflow, keras, sklearn, time, pandas, numpy, seaborn, and matplotlib. We produced short- and medium-term forecasts for segments 1 through 7 and 8 through 20, respectively. For example, the longest route was from stop A to stop B and consisted of 11 segments. Therefore, a medium-term forecast was sufficient to cover one task unit, i.e., the performance of a certain type of task while the AGV moved between stops. We estimated the MSE and MAE, as well as the MAPE in percentage terms. The results of the forecasting analysis using the proposed method with different numbers of segments are presented in Table A5. The highest accuracy was achieved using the developed intelligent data analysis method. Its TW1 accuracy, based on the MAE error for the short- and medium-term forecasts, was 3.26 at 2564.12 and 9.82 at 8780.56, respectively. As for the MAPE, the errors ranged from 0% to 0.09% and from 0.01% to 0.18% for the short- and medium-term forecasts, respectively. The training speed ranged from 0.16 min to 1.96 min, and the number of training vectors ranged from 2 to 11, respectively.
Figure 6 presents a comparison of the developed methodology in the prediction (TW1) and prediction (TW2) modes using the SARIMA, LSTM, RNN, AutoCTS, and statistical methods.
According to the SOAT, all of the traditional time series forecasting models were tested with the same hyperparameters, namely, the number of epochs was 100, and the number of training samples was 100, 250, 500, and 5052. The models were tested on the nine values of the momentary energy consumption parameter. This study also analyzed the results of predicting the segmental voltage drop in the battery cells of an automated guided vehicle using different prognostic models based on the SOAT hyperparameter metrics. The MAE (mean absolute error) is a metric that assesses how far the forecasts differ from the actual data using the average absolute error between the predicted and actual values. The tests were conducted for the following predictive models: AutoCTS (auto ARIMA); SARIMA (seasonal autoregressive integrated moving average); RNN (recurrent neural network); LSTM (long short-term memory); statistical models; and TW2 and TW1, the models proposed in this paper. The statistical method gave the best results for short training samples of up to 500 samples. However, on longer samples, for example, 5052 samples, the presented TW1 predictive model had a higher accuracy. Therefore, it is recommended for short- and medium-term forecasts. The worst accuracy on the short training samples was obtained using the long short-term memory-based predictive model. However, when the number of samples was increased, the accuracy of its forecasts also increased significantly and even exceeded that of the RNN.
The MSE (mean squared error) showed identical trends; this metric estimates how much the forecasts differ from the actual data based on the squared error between the forecasted and actual values. However, according to the MAPE (mean absolute percentage error), which estimates how much the forecasts differ from the real data in percentage terms using the average absolute percentage error between the predicted and actual values, the RNN had better results than the LSTM and the classical time series forecasting methods. The statistical methods, TW1, and TW2 gave significantly better results in terms of forecasting accuracy. Another criterion for assessing the quality of the forecasting models according to the SOAT was the model training time. The classical time series forecasting models had the longest training time; the neural network forecasting models and the statistical method had shorter training times. Therefore, taking into account the accuracy and speed of obtaining forecasts with the above analyzed models, it is recommended that the statistical models and the RNN-based neural network models be used for medium- and long-term forecasts.
These results can be explained by the peculiarities of the data collected from the AGV Formica 1. The analysis revealed a very weak seasonal component, so most classical time series forecasting methods had poor accuracy. The DNNs were also compared with other types of neural networks. The other, more well-known “powerful” neural networks, such as LSTM, had worse predictive results, as they are designed to detect the seasonal component, which was weakly expressed in this time series, as well as the variance and mean change over time. The RNN had the highest forecast accuracy among them, and on this basis, two DNN forecasting models (TW1 and TW2) were developed, which achieved a higher accuracy with a high learning rate than the simple RNN. According to the obtained results, the proposed method was the most accurate, with a MAPE of less than 1%, although it was slower; even so, its speed remained comparable to the other methods (up to 0.3 min). The RNN learned the fastest (less than 0.1 s), but its MAPE was 5%.
An important SOAT hyperparameter for evaluating a model’s performance is its training time, which is related to other hyperparameters, such as the number of ANN layers, the number of neurons in each layer, the training algorithm, the metric that is used to evaluate the accuracy, the number of input parameters, etc. An equally important hyperparameter is the hardware that is used to train the model.
The efficiency of the data processing time was analyzed by comparing the processing times of the software implementation of the developed methodology using a central processor without a graphics processing unit (GPU). The following hardware was used: an Intel(R) Core(TM) i7-5500U CPU at 2.40 GHz; 16.0 GB of installed RAM; and a 64-bit operating system on an x64-based processor. The results are presented in Table A5 in the column “Minimum learning speed”, which shows the data processing time of the software implementation.
The minimum recommended equipment was used for these studies, and all of the hyperparameters used were the minimum required; accordingly, the presented speed of the forecasting model was the lowest. When the model is used on more powerful hardware, its speed will be higher. For example, the speed of the forecasts with identical accuracy presented in Table A5 on a MacBook Pro (M2 Pro, 16 GB RAM, 16-core GPU) was six to seven times higher than on the minimum recommended hardware.
The practical deployment of technology that is based on a deep neural network and data mining to predict a segmental voltage drop in the battery cells of automated guided vehicles can be done in the following manner. This technology has been developed and tested on a real dataset that was obtained under production conditions. It contains only one complete discharge cycle of the AGV Formica 1 battery, which further complicates the construction of a predictive model. This cycle lasts about eight hours, i.e., a full working day at an enterprise where people and AGVs work in the same room (for example, AIUT, Gliwice, Poland). The dataset contained a variety of cases that sometimes led to an immediate increase in electricity consumption and, generally, to a faster discharge of the AGV battery. Given the features of the Formica 1 AGV, which weighs about 600 kg and can carry a maximum of 600 kg, the problem of a spike in electricity consumption in the event of an unpredictable stop that might be caused by various factors (for example, a person or a box, stick or other object in the way) is quite serious.
Most businesses that use AGVs can take full advantage of this development. To do this, data collection for the specific AGVs that are available needs to be set up. If the company has a large number of similar AGVs that perform the same tasks, then one DNN model can be used. However, due to the different operating modes of AGVs and the different states of their batteries, it is recommended that a DNN model be maintained for each AGV in order to improve the accuracy of the forecasts. If necessary, when more data are collected and the forecast accuracy is unsatisfactory, the model should be retrained to obtain a higher forecast accuracy with the most recent data. It is recommended that the predictive model be retrained on a server, which constitutes more powerful hardware, rather than on the AGV itself. In the testing mode, the trained model can be used locally on each AGV by transferring the parameters of the trained model from the server.

6. Conclusions

This paper presents a novel method that is based on deep learning neural networks to predict the voltage drop in AGV batteries during the segments of completing tasks. It involves several steps such as collecting AGV data, removing outliers, analyzing the data correlation, data mining, and training, validating, and testing the DNN prediction model. The proposed method was tested on the Formica 1 AGV routing system on an operator-generated AIUT factory map and demonstrated its ability to accurately predict the voltage drop in an AGV battery.
OPC UA was used to recover signals with a uniform discretization along the time axis in order to improve the prediction accuracy of neural network forecasting using the time window method. The Pearson, Kendall, and Spearman correlation coefficients were used to select the parameters with a high or moderate positive correlation for the ANN training, and the data analysis included calculating the various parameters for each segment and normalizing the data to the range from 0 to 1. Spontaneous outliers were excluded using an analysis of variance. The optimal sigma value of 1.5 was estimated using the standard deviation, variance, minimum and maximum values, range, and mean absolute linear deviation for all of the acceptable sigma values in increments of one-tenth. The length of the “time window” was also determined experimentally and was equal to 12. The MAPE of the short-term forecast for seven segments and the medium-term forecast for nine segments was 0.09% and 0.18%, respectively, with a longest training time of 1.96 min.
The proposed DNN model had a high accuracy in predicting the voltage at the start segment, and the use of the statistical SOAT method confirmed the model’s effectiveness in real conditions. To speed up the training time of ANNs, the software implementations of this technique can easily be parallelized for processing by multiple processors.
However, the developed method also has some limitations. The prediction accuracy decreased when there was a large amount of permanently missing data. When modifying the AGV route, it was necessary to train the samples according to the proposed data mining method and to retrain the DNN. Additional data processing and data analysis required additional resources. The use of the Pearson, Spearman, and Kendall correlation coefficients might be limited in detecting nonlinear relationships between the parameters. As the scale of the data increases, there might be a problem of overfitting the model, which can lead to inefficient forecasting of any new data.
The aim of further research should be to integrate other types of data in order to improve the accuracy in predicting the battery voltage drop and other power system parameters using the proposed DNN modeling structure and methodology. The focus of future research will be on more detailed detection and processing of partially lost data. For this purpose, the plan is to apply both traditional and modern methods of data imputation by determining the loss of statistical power. It is also planned to apply the method of multivariate imputation using chain equations for MAR missingness because this is the best approach for dealing with missing data [59]. When the analysis is based on probability or conditional probability, it is advisable to use adaptive two-phase plans [60].
It is also planned to use non-iterative neural networks to apply the proposed methodology online in real production conditions. This will increase the speed of retraining the neural network with satisfactory accuracy for production conditions.

Author Contributions

Conceptualization, R.C., O.P. and T.S.; methodology, O.P. and T.S.; software, O.P. and T.S.; validation, O.P. and T.S.; formal analysis, R.C. and M.M.; investigation, O.P. and T.S.; resources, M.D. and M.M.; data curation, M.D., O.P. and T.S.; writing—original draft preparation, R.C., O.P., T.S., M.M. and M.D.; writing—review and editing, R.C., O.P., T.S., M.M. and M.D.; visualization, O.P. and T.S.; supervision, R.C.; project administration, R.C.; funding acquisition, R.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Norway Grants, operated by the National Centre for Research and Development, under the project “Automated Guided Vehicles integrated with Collaborative Robots for Smart Industry Perspective” (Project Contract no.: NOR/POLNOR/CoBotAGV/0027/2019-00), and also by the Polish-Ukrainian grant “Automated Guided Vehicles integrated with Collaborative Robots—energy consumption models for logistics tasks planning” (02/110/ZZB22/1022). This work was supported by the Polish National Centre of Research and Development under the project “Hybrid systems of automated internal logistics supporting adaptive manufacturing” (grant agreement no. POIR.01.01.01-00-0460/19-01). The project was realized as Operation 1.1.1: “Industrial research and development work implemented by enterprises” of the Smart Growth Operational Program 2014–2020 and is co-financed by the European Regional Development Fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All of the data presented in this study are available in a publicly accessible repository at https://github.com/tsteclikatpolsl/CoBotAGV (accessed on 23 February 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. The summary results after eliminating any spontaneous outliers with an optimal sliding window length and sigma value.
B -> A Path through the Nine Segments: 056 020 048 052 016 044 012 036 004
Parameter | Origin Standard Deviation | Origin Dispersion | Origin Min Value | Origin Max Value | Origin Swing | Origin Mean Absolute Linear Deviation
Battery cell voltage | 291.2154 | 84,806.4533 | 41,670.0000 | 42,680.0000 | 1010.0000 | 251.8642
ActualSpeed_L | 174.0312 | 30,286.8838 | −241.0000 | 244.0000 | 485.0000 | 160.4683
ActualSpeed_R | 185.5523 | 34,429.6773 | −278.0000 | 282.0000 | 560.0000 | 173.3904
Current segment | 21.4530 | 460.2346 | 4.0000 | 63.0000 | 59.0000 | 19.7064
Speed | 0.2284 | 0.0522 | −0.3318 | 0.3204 | 0.6522 | 0.2137
Cumulative energy consumption | 153.2120 | 23,473.9367 | 887.0000 | 1420.0000 | 533.0000 | 132.8079
Momentary current consumption | 505.1302 | 255,156.5790 | 4090.0000 | 8270.0000 | 4180.0000 | 244.0466
Momentary energy consumption | 77.6645 | 6031.7775 | 703.0000 | 1377.0000 | 674.0000 | 36.2756
Momentary power consumption | 21.1794 | 448.5675 | 174.0000 | 353.0000 | 179.0000 | 10.2396
Mass | 0.0000 | 0.0000 | 200.0000 | 200.0000 | 0.0000 | 0.0000
C -> B Path through the Seven Segments: 041 009 019 055 051 023 059
Parameter | Origin Standard Deviation | Origin Dispersion | Origin Min Value | Origin Max Value | Origin Swing | Origin Mean Absolute Linear Deviation
Battery cell voltage | 291.2154 | 84,806.4533 | 41,670.0000 | 42,680.0000 | 1010.0000 | 251.8642
ActualSpeed_L | 174.0312 | 30,286.8838 | −241.0000 | 244.0000 | 485.0000 | 160.4683
ActualSpeed_R | 185.5523 | 34,429.6773 | −278.0000 | 282.0000 | 560.0000 | 173.3904
Current segment | 21.4530 | 460.2346 | 4.0000 | 63.0000 | 59.0000 | 19.7064
Speed | 0.2284 | 0.0522 | −0.3318 | 0.3204 | 0.6522 | 0.2137
Cumulative energy consumption | 153.2120 | 23,473.9367 | 887.0000 | 1420.0000 | 533.0000 | 132.8079
Momentary current consumption | 505.1302 | 255,156.5790 | 4090.0000 | 8270.0000 | 4180.0000 | 244.0466
Momentary energy consumption | 77.6645 | 6031.7775 | 703.0000 | 1377.0000 | 674.0000 | 36.2756
Momentary power consumption | 21.1794 | 448.5675 | 174.0000 | 353.0000 | 179.0000 | 10.2396
Mass | 0.0000 | 0.0000 | 200.0000 | 200.0000 | 0.0000 | 0.0000
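The outlier elimination summarized in Table A1 is based on the sigma rule described in the paper: samples deviating from the mean by more than the permitted sigma are replaced with a smoothed moving average. A minimal sketch, assuming a centered moving average and illustrative values (the window length and threshold below are examples, not the optimized ones):

```python
import numpy as np

def remove_outliers(signal, window=12, n_sigma=1.5):
    """Replace samples lying more than n_sigma standard deviations
    from the mean with a centered moving average of the signal."""
    signal = np.asarray(signal, dtype=float)
    mu, sigma = signal.mean(), signal.std()
    outliers = np.abs(signal - mu) > n_sigma * sigma
    # centered moving average used as the replacement value
    smoothed = np.convolve(signal, np.ones(window) / window, mode="same")
    cleaned = signal.copy()
    cleaned[outliers] = smoothed[outliers]
    return cleaned

# hypothetical cell voltages in mV with one spontaneous spike
voltage = np.array([41670, 41672, 41675, 42680, 41678, 41680], dtype=float)
cleaned = remove_outliers(voltage, window=3)
```

Only the spiked sample is touched; the remaining readings pass through unchanged, which matches the intent of removing "spontaneous" outliers rather than smoothing the whole series.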
Table A2. The results of calculating the Pearson, Spearman and Kendall correlation coefficients for the primary AGV parameters.
Parameter | Battery Cell Voltage | ActualSpeed_L | ActualSpeed_R | Speed | Cumulative Energy Consumption | Momentary Current Consumption | Momentary Energy Consumption | Momentary Power Consumption
B -> A Path through the nine segments: 056 020 048 052 016 044 012 036 004
Pearson
Battery cell voltage | 1.0000 | −0.0190 | −0.0187 | −0.0196 | −0.9957 | −0.1614 | −0.0555 | −0.0587
ActualSpeed_L | −0.0190 | 1.0000 | 0.9672 | 0.9907 | 0.0184 | −0.0060 | −0.0165 | −0.0082
ActualSpeed_R | −0.0187 | 0.9672 | 1.0000 | 0.9883 | 0.0186 | −0.0088 | −0.0199 | −0.0111
Speed | −0.0196 | 0.9907 | 0.9883 | 1.0000 | 0.0193 | −0.0072 | −0.0184 | −0.0096
Cumulative energy consumption | −0.9957 | 0.0184 | 0.0186 | 0.0193 | 1.0000 | 0.1434 | 0.0387 | 0.0411
Momentary current consumption | −0.1614 | −0.0060 | −0.0088 | −0.0072 | 0.1434 | 1.0000 | 0.8907 | 0.9942
Momentary energy consumption | −0.0555 | −0.0165 | −0.0199 | −0.0184 | 0.0387 | 0.8907 | 1.0000 | 0.8959
Momentary power consumption | −0.0587 | −0.0082 | −0.0111 | −0.0096 | 0.0411 | 0.9942 | 0.8959 | 1.0000
Spearman
Battery cell voltage | 1.0000 | −0.0146 | −0.0159 | −0.0156 | −0.9996 | −0.2043 | 0.0876 | 0.0549
ActualSpeed_L | −0.0146 | 1.0000 | 0.9114 | 0.9693 | 0.0154 | −0.0153 | −0.0410 | −0.0251
ActualSpeed_R | −0.0159 | 0.9114 | 1.0000 | 0.9527 | 0.0168 | −0.0312 | −0.0600 | −0.0400
Speed | −0.0156 | 0.9693 | 0.9527 | 1.0000 | 0.0165 | −0.0170 | −0.0474 | −0.0265
Cumulative energy consumption | −0.9996 | 0.0154 | 0.0168 | 0.0165 | 1.0000 | 0.1930 | −0.0990 | −0.0669
Momentary current consumption | −0.2043 | −0.0153 | −0.0312 | −0.0170 | 0.1930 | 1.0000 | 0.6642 | 0.9523
Momentary energy consumption | 0.0876 | −0.0410 | −0.0600 | −0.0474 | −0.0990 | 0.6642 | 1.0000 | 0.7081
Momentary power consumption | 0.0549 | −0.0251 | −0.0400 | −0.0265 | −0.0669 | 0.9523 | 0.7081 | 1.0000
Kendall
Battery cell voltage | 1.0000 | −0.0109 | −0.0115 | −0.0116 | −0.9895 | −0.1420 | 0.0605 | 0.0381
ActualSpeed_L | −0.0109 | 1.0000 | 0.7137 | 0.8447 | 0.0120 | −0.0117 | −0.0285 | −0.0180
ActualSpeed_R | −0.0115 | 0.7137 | 1.0000 | 0.7950 | 0.0127 | −0.0188 | −0.0385 | −0.0243
Speed | −0.0116 | 0.8447 | 0.7950 | 1.0000 | 0.0125 | −0.0098 | −0.0298 | −0.0160
Cumulative energy consumption | −0.9895 | 0.0120 | 0.0127 | 0.0125 | 1.0000 | 0.1331 | −0.0682 | −0.0461
Momentary current consumption | −0.1420 | −0.0117 | −0.0188 | −0.0098 | 0.1331 | 1.0000 | 0.5217 | 0.8525
Momentary energy consumption | 0.0605 | −0.0285 | −0.0385 | −0.0298 | −0.0682 | 0.5217 | 1.0000 | 0.5635
Momentary power consumption | 0.0381 | −0.0180 | −0.0243 | −0.0160 | −0.0461 | 0.8525 | 0.5635 | 1.0000
C -> B Path through the seven segments 041 009 019 055 051 023 059
Pearson
Battery cell voltage | 1.0000 | −0.0190 | −0.0187 | −0.0196 | −0.9957 | −0.1614 | −0.0555 | −0.0587
ActualSpeed_L | −0.0190 | 1.0000 | 0.9672 | 0.9907 | 0.0184 | −0.0060 | −0.0165 | −0.0082
ActualSpeed_R | −0.0187 | 0.9672 | 1.0000 | 0.9883 | 0.0186 | −0.0088 | −0.0199 | −0.0111
Speed | −0.0196 | 0.9907 | 0.9883 | 1.0000 | 0.0193 | −0.0072 | −0.0184 | −0.0096
Cumulative energy consumption | −0.9957 | 0.0184 | 0.0186 | 0.0193 | 1.0000 | 0.1434 | 0.0387 | 0.0411
Momentary current consumption | −0.1614 | −0.0060 | −0.0088 | −0.0072 | 0.1434 | 1.0000 | 0.8907 | 0.9942
Momentary energy consumption | −0.0555 | −0.0165 | −0.0199 | −0.0184 | 0.0387 | 0.8907 | 1.0000 | 0.8959
Momentary power consumption | −0.0587 | −0.0082 | −0.0111 | −0.0096 | 0.0411 | 0.9942 | 0.8959 | 1.0000
Spearman
Battery cell voltage | 1.0000 | −0.0146 | −0.0159 | −0.0156 | −0.9996 | −0.2043 | 0.0876 | 0.0549
ActualSpeed_L | −0.0146 | 1.0000 | 0.9114 | 0.9693 | 0.0154 | −0.0153 | −0.0410 | −0.0251
ActualSpeed_R | −0.0159 | 0.9114 | 1.0000 | 0.9527 | 0.0168 | −0.0312 | −0.0600 | −0.0400
Speed | −0.0156 | 0.9693 | 0.9527 | 1.0000 | 0.0165 | −0.0170 | −0.0474 | −0.0265
Cumulative energy consumption | −0.9996 | 0.0154 | 0.0168 | 0.0165 | 1.0000 | 0.1930 | −0.0990 | −0.0669
Momentary current consumption | −0.2043 | −0.0153 | −0.0312 | −0.0170 | 0.1930 | 1.0000 | 0.6642 | 0.9523
Momentary energy consumption | 0.0876 | −0.0410 | −0.0600 | −0.0474 | −0.0990 | 0.6642 | 1.0000 | 0.7081
Momentary power consumption | 0.0549 | −0.0251 | −0.0400 | −0.0265 | −0.0669 | 0.9523 | 0.7081 | 1.0000
Kendall
Battery cell voltage | 1.0000 | −0.0109 | −0.0115 | −0.0116 | −0.9895 | −0.1420 | 0.0605 | 0.0381
ActualSpeed_L | −0.0109 | 1.0000 | 0.7137 | 0.8447 | 0.0120 | −0.0117 | −0.0285 | −0.0180
ActualSpeed_R | −0.0115 | 0.7137 | 1.0000 | 0.7950 | 0.0127 | −0.0188 | −0.0385 | −0.0243
Speed | −0.0116 | 0.8447 | 0.7950 | 1.0000 | 0.0125 | −0.0098 | −0.0298 | −0.0160
Cumulative energy consumption | −0.9895 | 0.0120 | 0.0127 | 0.0125 | 1.0000 | 0.1331 | −0.0682 | −0.0461
Momentary current consumption | −0.1420 | −0.0117 | −0.0188 | −0.0098 | 0.1331 | 1.0000 | 0.5217 | 0.8525
Momentary energy consumption | 0.0605 | −0.0285 | −0.0385 | −0.0298 | −0.0682 | 0.5217 | 1.0000 | 0.5635
Momentary power consumption | 0.0381 | −0.0180 | −0.0243 | −0.0160 | −0.0461 | 0.8525 | 0.5635 | 1.0000
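Coefficient matrices like those in Table A2 can be computed for any set of logged signals with pandas. The sketch below uses synthetic stand-in signals (a linearly falling voltage, a linearly rising cumulative energy, and a random current), so only the shape of the computation matches the paper, not the values:

```python
import numpy as np
import pandas as pd

# hypothetical one-second AGV samples for three of the logged signals
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "battery_cell_voltage": 42680 - np.arange(500) * 2.0,   # mV, falling
    "cumulative_energy": 887 + np.arange(500) * 1.0,        # Wh, rising
    "momentary_current": rng.uniform(4090, 8270, 500),      # mA, noise
})

# the three coefficient matrices reported in Table A2
pearson = df.corr(method="pearson")
spearman = df.corr(method="spearman")
kendall = df.corr(method="kendall")
```

With these stand-ins, voltage and cumulative energy are perfectly anticorrelated (coefficient −1), mirroring the near −1 values the paper reports between the battery cell voltage and cumulative energy consumption.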
Table A3. AGV parameters.
Parameter | Type | Comments
Timestamp | Date and Time | DTL format by Siemens; set before the TCP frame was sent
Weight statuses—front left strain gauge weight | UINT | Payload on strain gauge in kg
Weight statuses—front right strain gauge weight | UINT | Payload on strain gauge in kg
Weight statuses—rear left strain gauge weight | UINT | Payload on strain gauge in kg
Weight statuses—rear right strain gauge weight | UINT | Payload on strain gauge in kg
SOC—State of Charge | UINT | Actual state of the battery in %
Battery cell voltage | UINT | Measured by the BMS or a voltage divider; mV
ActualSpeed_L | INT | Actual speed of the left motor
ActualSpeed_R | INT | Actual speed of the right motor
Current segment | UDInt | The ID of the segment on which the AGV is currently located
Momentary current consumption | UINT | Measured by the BMS or a Hall sensor at the load output of the battery; mA
Momentary energy consumption | UINT | Calculated by the PLC from momentary power consumption and time; Ws
Momentary power consumption | UINT | Calculated by the PLC from momentary current and voltage; W
Cumulative energy consumption | UINT | Calculated by the PLC from momentary energy consumption and time; Wh
Cumulative distance left | DINT | Traveled distance in mm, calculated from the encoder pulses
Cumulative distance right | DINT | Traveled distance in mm, calculated from the encoder pulses
Table A4. The dictionary for the test sample, built according to the sequence of segments along which the AGV moved.
Segment | Duration | Duration Variance | Samples Count | Voltage Delta | Voltage Delta Variance | Mass
B -> A Path through the nine segments: 056 020 048 052 016 044 012 036 004
42.0 | 26.500000 | 26.500000 | 56.500000 | −1.666667 | 5.527708 | 200.0
41.0 | 22.750000 | 22.750000 | 47.916667 | −10.000000 | 7.071068 | 200.0
9.0 | 11.000000 | 11.000000 | 24.583333 | 3.333333 | 4.714045 | 200.0
19.0 | 3.739130 | 3.739130 | 9.391304 | −0.869565 | 5.032973 | 200.0
56.0 | 24.200000 | 24.200000 | 52.228571 | −8.857143 | 12.135292 | 200.0
51.0 | 4.000000 | 4.000000 | 10.347826 | −0.434783 | 4.642208 | 200.0
23.0 | 9.000000 | 9.000000 | 20.250000 | −3.333333 | 9.428090 | 200.0
59.0 | 63.750000 | 63.750000 | 132.833333 | −5.000000 | 9.574271 | 200.0
20.0 | 9.083333 | 9.083333 | 20.333333 | 7.500000 | 5.951190 | 200.0
48.0 | 4.000000 | 4.000000 | 10.565217 | 0.869565 | 5.833221 | 200.0
52.0 | 5.000000 | 5.000000 | 12.869565 | −2.608696 | 5.289359 | 200.0
16.0 | 3.739130 | 3.739130 | 9.391304 | 1.739130 | 6.360321 | 200.0
44.0 | 12.166667 | 12.166667 | 26.750000 | −6.666667 | 6.236096 | 200.0
12.0 | 8.166667 | 8.166667 | 18.916667 | 2.500000 | 4.330127 | 200.0
36.0 | 10.000000 | 10.000000 | 22.166667 | −3.333333 | 4.714045 | 200.0
4.0 | 57.636364 | 57.636364 | 117.000000 | −9.090909 | 5.142595 | 200.0
7.0 | 54.181818 | 54.181818 | 112.909091 | −10.909091 | 6.680427 | 200.0
39.0 | 10.545455 | 10.545455 | 23.272727 | −1.818182 | 3.856946 | 200.0
15.0 | 8.272727 | 8.272727 | 19.636364 | 3.636364 | 4.810457 | 200.0
47.0 | 11.818182 | 11.818182 | 26.363636 | −6.363636 | 6.428243 | 200.0
27.0 | 10.181818 | 10.181818 | 22.454545 | −7.272727 | 6.165755 | 200.0
63.0 | 18.272727 | 18.272727 | 39.000000 | 5.454545 | 4.979296 | 200.0
60.0 | 14.909091 | 14.909091 | 29.636364 | −10.000000 | 6.030227 | 200.0
24.0 | 10.000000 | 10.000000 | 23.090909 | −1.818182 | 5.749596 | 200.0
10.0 | 16.454545 | 16.454545 | 35.181818 | 0.909091 | 6.680427 | 200.0
C -> B Path through the seven segments: 041 009 019 055 051 023 059
42.0 | 26.500000 | 26.500000 | 56.500000 | −1.666667 | 5.527708 | 200.0
41.0 | 22.750000 | 22.750000 | 47.916667 | −10.000000 | 7.071068 | 200.0
9.0 | 11.000000 | 11.000000 | 24.583333 | 3.333333 | 4.714045 | 200.0
19.0 | 3.739130 | 3.739130 | 9.391304 | −0.869565 | 5.032973 | 200.0
55.0 | 24.200000 | 24.200000 | 52.228571 | −8.857143 | 12.135292 | 200.0
51.0 | 4.000000 | 4.000000 | 10.347826 | −0.434783 | 4.642208 | 200.0
23.0 | 9.000000 | 9.000000 | 20.250000 | −3.333333 | 9.428090 | 200.0
59.0 | 63.750000 | 63.750000 | 132.833333 | −5.000000 | 9.574271 | 200.0
20.0 | 9.083333 | 9.083333 | 20.333333 | 7.500000 | 5.951190 | 200.0
48.0 | 4.000000 | 4.000000 | 10.565217 | 0.869565 | 5.833221 | 200.0
52.0 | 5.000000 | 5.000000 | 12.869565 | −2.608696 | 5.289359 | 200.0
16.0 | 3.739130 | 3.739130 | 9.391304 | 1.739130 | 6.360321 | 200.0
44.0 | 12.166667 | 12.166667 | 26.750000 | −6.666667 | 6.236096 | 200.0
12.0 | 8.166667 | 8.166667 | 18.916667 | 2.500000 | 4.330127 | 200.0
36.0 | 10.000000 | 10.000000 | 22.166667 | −3.333333 | 4.714045 | 200.0
4.0 | 57.636364 | 57.636364 | 117.000000 | −9.090909 | 5.142595 | 200.0
7.0 | 54.181818 | 54.181818 | 112.909091 | −10.909091 | 6.680427 | 200.0
39.0 | 10.545455 | 10.545455 | 23.272727 | −1.818182 | 3.856946 | 200.0
15.0 | 8.272727 | 8.272727 | 19.636364 | 3.636364 | 4.810457 | 200.0
47.0 | 11.818182 | 11.818182 | 26.363636 | −6.363636 | 6.428243 | 200.0
27.0 | 10.181818 | 10.181818 | 22.454545 | −7.272727 | 6.165755 | 200.0
63.0 | 18.272727 | 18.272727 | 39.000000 | 5.454545 | 4.979296 | 200.0
60.0 | 14.909091 | 14.909091 | 29.636364 | −10.000000 | 6.030227 | 200.0
24.0 | 10.000000 | 10.000000 | 23.090909 | −1.818182 | 5.749596 | 200.0
10.0 | 16.454545 | 16.454545 | 35.181818 | 0.909091 | 6.680427 | 200.0
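A per-segment dictionary such as Table A4 can be derived from the one-second log by grouping on the segment ID. The sketch below uses a hypothetical five-sample log; the column names and values are illustrative, not the study's data:

```python
import pandas as pd

# hypothetical one-second log: segment ID, cell voltage (mV), mass (kg)
log = pd.DataFrame({
    "segment": [42, 42, 42, 41, 41],
    "voltage": [41680.0, 41678.0, 41675.0, 41675.0, 41670.0],
    "mass":    [200.0] * 5,
})

def segment_features(g):
    """Aggregate one segment's samples into dictionary features."""
    return pd.Series({
        "duration": len(g),  # seconds, since the log is sampled at 1 Hz
        "samples_count": len(g),
        "voltage_delta": g["voltage"].iloc[-1] - g["voltage"].iloc[0],
        "voltage_delta_variance": g["voltage"].var(),
        "mass": g["mass"].iloc[0],
    })

# one row of features per segment, indexed by segment ID
dictionary = log.groupby("segment").apply(segment_features)
```

The grouped aggregation mirrors how the per-segment duration, sample count, and voltage delta columns of Table A4 relate to the underlying one-second samples.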
Table A5. Forecast analysis using the proposed methodology with a different number of segments.
Neurons in the Hidden Layers | MSE: TW1 | MSE: TW2 | MSE: Stat | MAPE (%): TW1 | MAPE (%): TW2 | MAPE (%): Stat | MAE: TW1 | MAE: TW2 | MAE: Stat | Training Speed (min) | Epoch Count | Batch Size | Learn Segment Count
B -> A Path through the nine segments: 056 020 048 052 016 044 012 036 004
2.02614.41266.58245.990.060.030.0425.5414.2714.691.39300.0100.02.0
3.049.38159.66245.990.010.030.045.7611.3714.691.27300.0100.02.0
4.0363.24310.57245.990.040.040.0415.0315.6214.691.06300.0100.02.0
5.0768.61427.56245.990.050.040.0421.4316.9414.690.91300.0100.02.0
6.012.9660.37245.990.010.020.043.076.3214.691.06300.0100.02.0
7.0238.461067.95245.990.030.060.0412.5525.8514.691.38300.0100.02.0
8.0787.75190.53245.990.050.030.0419.5611.8914.690.89300.0100.02.0
9.097.71165.73245.990.020.030.047.8410.9814.690.91300.0100.02.0
10.0108.95258.94245.990.020.040.049.2514.8414.690.89300.0100.02.0
11.0155.5886.65245.990.020.020.049.647.0414.690.94300.0100.02.0
2.0617.56633.53245.990.060.060.0423.3223.5514.690.65300.0100.03.
3.0700.39747.8245.990.060.060.0425.0725.8714.690.69300.0100.03.0
4.032.18144.7245.990.010.030.044.5510.5614.690.8300.0100.03.
5.01403.141463.64245.990.090.090.0436.5835.9914.690.69300.0100.03.0
6.0665.27844.29245.990.040.050.0417.122.7514.690.96300.0100.03.0
7.01059.0778.0245.990.060.060.0424.3323.4814.690.83300.0100.03.0
8.0296.36209.15245.990.030.030.0410.5810.9614.690.71300.0100.03.0
9.01586.48832.53245.990.070.060.0430.8623.5514.690.73300.0100.03.0
10.01556.021008.44245.990.060.060.0425.2323.9714.690.74300.0100.03.0
11.01401.98849.24245.990.080.050.0432.019.214.690.71300.0100.03.0
2.021.2957.81245.990.010.010.043.495.9414.690.5300.0100.04.0
3.01917.951345.89245.990.090.070.0437.3328.0714.690.66300.0100.04.0
4.0178.6122.42245.990.020.020.049.469.8514.690.59300.0100.04.0
5.01810.52881.79245.990.050.050.0421.9821.9214.690.57300.0100.04.0
6.01925.582263.18245.990.10.110.0443.244.2314.690.54300.0100.04.0
7.0120.76242.15245.990.020.030.049.2713.3914.690.71300.0100.04.0
8.069.1184.58245.990.010.030.046.1610.4714.690.46300.0100.04.0
9.01415.141213.32245.990.070.070.0428.3628.314.690.79300.0100.04.0
10.0555.01154.4245.990.050.060.0418.9426.6314.690.96300.0100.04.0
11.0140.44160.6245.990.030.020.0410.598.0814.690.45300.0100.04.0
2.042.83114.85245.990.010.020.045.18.5514.690.39300.0100.05.0
3.043.97120.82245.990.010.020.045.48.114.690.55300.0100.05.0
4.037.591.38245.990.010.020.045.426.3714.690.44300.0100.05.0
5.01174.05860.59245.990.060.070.0426.6827.3214.690.71300.0100.05.0
6.0104.67241.98245.990.020.030.048.4213.6314.690.46300.0100.05.0
7.0635.351117.42245.990.040.070.0418.1328.6314.690.41300.0100.05.0
8.01657.431377.55245.990.050.060.0420.0924.2614.690.54300.0100.05.0
9.01016.13898.24245.990.070.070.0428.5427.3914.690.4300.0100.05.0
10.03091.743533.54245.990.080.110.0435.3245.1614.690.56300.0100.05.0
11.0968.72353.06245.990.050.090.0422.5538.3514.690.47300.0100.05.0
2.091.52256.85245.990.020.030.047.9614.1414.690.39300.0100.06.0
3.0113.34602.76245.990.020.040.048.5316.6914.690.39300.0100.06.0
4.0792.71160.1245.990.050.080.0422.6831.3914.690.53300.0100.06.
5.0311.491036.91245.990.040.060.0416.6324.3214.690.44300.0100.06.0
6.01448.93631.0245.990.080.060.0434.2723.014.690.45300.0100.06.0
7.02067.422726.67245.990.10.120.0440.8450.014.690.52300.0100.06.0
8.0135.02224.72245.990.020.030.047.6411.8614.690.39300.0100.06.0
9.0431.25471.52245.990.040.040.0414.9616.0314.690.47300.0100.06.0
10.01729.53951.96245.990.080.060.0431.4326.9714.690.47300.0100.06.0
11.0199.01379.71245.990.030.040.0411.1615.8814.690.44300.0100.06.0
2.01053.521830.94245.990.070.090.0428.4138.1414.690.39300.0100.07.0
3.0810.4760.82245.990.050.050.0420.6620.4814.690.38300.0100.07.0
4.02312.122413.16245.990.110.110.0446.1847.5414.690.37300.0100.07.0
5.01403.271659.74245.990.080.090.0433.0236.8614.690.35300.0100.07.0
6.02204.23998.79245.990.090.060.0437.1226.6714.690.44300.0100.07.0
7.01157.11973.55245.990.070.070.0429.6129.4314.690.45300.0100.07.0
8.0110.391043.21245.990.020.060.048.6625.1614.690.39300.0100.07.0
9.064.52193.81245.990.010.030.046.2110.7714.690.36300.0100.07.0
10.0450.421959.15245.990.040.070.0415.4731.0914.690.41300.0100.07.0
11.08780.562300.54245.990.180.070.0474.6729.8314.690.36300.0100.07.0
2.0316.58996.08245.990.030.060.0411.9224.8214.690.25300.0100.08.0
3.01340.611536.67245.990.080.070.0432.3330.8514.690.25300.0100.08.0
4.01183.361006.46245.990.070.060.0431.2327.1214.690.26300.0100.08.0
5.093.79222.94245.990.020.030.048.2112.0914.690.36300.0100.08.0
6.0420.98980.4245.990.040.060.0417.4726.5914.690.36300.0100.08.0
7.0135.68263.85245.990.020.030.049.4514.6214.690.26300.0100.08.0
8.0818.561718.34245.990.060.080.0425.7633.2514.690.26300.0100.08.0
9.094.33769.7245.990.020.060.046.6923.5214.690.27300.0100.08.0
10.0662.0314.84245.990.040.040.0417.8715.8414.690.36300.0100.08.0
11.0742.79307.04245.990.050.040.0420.815.8114.690.27300.0100.08.0
2.0825.951440.6245.990.050.070.0421.6728.714.690.23300.0100.09.0
3.048.4869.68245.990.010.020.046.127.0214.690.21300.0100.09.0
4.03302.632400.13245.990.110.10.0446.4641.2214.690.36300.0100.09.0
5.0187.92430.18245.990.030.040.0413.0717.4314.690.24300.0100.09.0
6.0581.962558.95245.990.050.110.0422.1844.6514.690.27300.0100.09.0
7.045.64280.77245.990.010.030.045.613.2814.690.25300.0100.09.0
8.01844.141306.57245.990.090.070.0436.1430.8514.690.24300.0100.09.0
9.0519.89416.71245.990.040.040.0417.6117.7114.690.35300.0100.09.0
10.025.64165.3245.990.010.030.044.1310.8314.690.19300.0100.09.0
11.0781.17502.24245.990.040.050.0418.619.3114.690.27300.0100.09.0
2.0783.611054.46245.990.060.070.0425.4530.6614.690.18300.0100.010.0
3.02310.753464.63245.990.10.120.0442.8449.0714.690.23300.0100.010.0
4.04186.72536.53245.990.130.10.0452.2743.4714.690.24300.0100.010.0
5.0271.87950.11245.990.030.060.0412.3224.3314.690.23300.0100.010.0
6.0949.332538.05245.990.060.090.0425.1737.714.690.21300.0100.010.0
7.0514.17329.61245.990.040.040.0416.9116.314.690.29300.0100.010.0
8.0571.321411.69245.990.040.070.0415.5530.2114.690.23300.0100.010.0
9.01087.821120.72245.990.070.070.0428.1827.514.690.22300.0100.010.0
10.0225.421315.91245.990.030.070.0410.828.4214.690.36300.0100.010.0
11.02130.581057.31245.990.10.060.0442.9123.1914.690.22300.0100.010.0
2.0742.553332.57245.990.050.120.0422.5948.4514.690.36300.0100.011.0
3.03587.943133.55245.990.120.110.0449.9547.4814.690.19300.0100.011.0
4.01055.461217.52245.990.060.070.0425.3629.7214.690.16300.0100.011.0
5.05525.175024.16245.990.110.10.0445.4239.7314.690.4300.0100.011.0
6.0370.84751.9245.990.040.060.0417.1424.4814.690.36300.0100.011.0
7.0691.711847.75245.990.060.090.0423.3835.7314.690.24300.0100.011.0
8.04165.012609.9245.990.130.110.0452.2544.0114.690.22300.0100.011.0
9.0992.96657.46245.990.060.050.0424.4619.3414.690.21300.0100.011.0
10.060.34235.89245.990.010.030.045.9713.9314.690.22300.0100.011.0
11.0557.2424.99245.990.050.040.0421.0718.014.690.72300.0100.011.0
C -> B Path through the seven segments 041 009 019 055 051 023 059
2.011.3927.1465.60.010.010.022.763.836.330.89300.0100.02.0
3.0442.03484.6865.60.050.050.0219.4720.456.331.0300.0100.02.0
4.053.5771.4265.60.020.020.027.167.726.330.92300.0100.02.0
5.03.2615.3765.60.00.010.021.672.96.330.9300.0100.02.0
6.078.24140.2265.60.010.020.025.887.826.331.38300.0100.02.0
7.04.514.3565.60.00.00.021.71.786.330.89300.0100.02.0
8.019.5937.3565.60.010.010.023.715.346.330.84300.0100.02.0
9.026.2730.4665.60.010.010.024.484.836.331.03300.0100.02.0
10.055.3174.8665.60.020.020.026.757.356.330.92300.0100.02.0
11.06.9717.7365.60.010.010.022.213.436.330.99300.0100.02.0
2.033.3615.5765.60.010.010.025.013.516.330.63300.0100.03.0
3.030.7648.5365.60.010.010.025.025.666.330.55300.0100.03.0
4.030.5946.1265.60.010.010.025.315.736.330.73300.0100.03.0
5.024.4119.2765.60.010.010.023.53.336.330.83300.0100.03.0
6.0226.55240.9565.60.030.030.0214.2914.476.330.82300.0100.03.0
7.016.9329.1465.60.010.010.022.854.526.331.39300.0100.03.0
8.039.5951.4765.60.010.010.025.296.056.330.7300.0100.03.0
9.042.6768.5165.60.010.020.025.846.626.330.76300.0100.03.0
10.0249.51265.1965.60.030.030.0214.3214.426.330.67300.0100.03.0
11.020.093.9665.60.010.020.024.127.36.330.67300.0100.03.0
2.031.4439.865.60.010.010.024.465.286.330.7300.0100.04.0
3.06.4318.9565.60.00.010.021.973.176.330.49300.0100.04.0
4.0120.55142.2165.60.020.020.027.979.136.330.57300.0100.04.0
5.0163.02132.1765.60.020.020.029.839.816.331.02300.0100.04.0
6.017.926.3765.60.010.010.023.874.46.330.42300.0100.04.0
7.0237.69226.665.60.040.030.0215.2214.566.330.5300.0100.04.0
8.01126.4985.4865.60.060.050.0223.3220.896.330.56300.0100.04.0
9.0241.93349.1465.60.030.030.0212.0113.126.330.52300.0100.04.0
10.05.158.565.60.00.010.022.052.526.330.56300.0100.04.0
11.0145.7180.3665.60.030.030.0211.3211.96.330.54300.0100.04.0
2.012.8622.5565.60.010.010.022.974.116.330.42300.0100.05.0
3.0366.83363.2265.60.040.040.0218.5918.66.330.43300.0100.05.0
4.074.6191.3665.60.020.020.026.937.526.330.52300.0100.05.0
5.027.7132.7565.60.010.010.024.664.816.330.56300.0100.05.0
6.090.66153.5465.60.020.030.028.8411.676.330.54300.0100.05.0
7.024.3246.0465.60.010.010.023.94.986.330.65300.0100.05.0
8.0178.14161.2165.60.020.020.027.927.66.330.47300.0100.05.0
9.053.8245.7765.60.010.010.026.245.976.330.55300.0100.05.0
10.024.8333.7465.60.010.010.024.535.286.330.34300.0100.05.0
11.0118.69158.5165.60.020.030.028.5110.936.330.41300.0100.05.0
2.044.8159.065.60.010.020.025.596.716.330.54300.0100.06.0
3.063.5567.5865.60.020.020.026.327.556.330.7300.0100.06.0
4.073.9579.665.60.020.020.027.356.656.330.37300.0100.06.0
5.097.85174.6865.60.020.030.029.5612.596.330.36300.0100.06.0
6.085.4860.9865.60.020.010.027.645.936.330.42300.0100.06.0
7.0628.49617.0265.60.050.060.0221.6323.016.330.36300.0100.06.0
8.0823.88887.2165.60.050.060.0222.4323.716.330.35300.0100.06.0
9.038.54317.465.60.010.030.025.1412.726.330.35300.0100.06.0
10.01258.991276.9865.60.080.080.0232.0732.046.330.34300.0100.06.0
11.052.0391.5665.60.010.020.025.977.26.330.38300.0100.06.0
2.0463.89463.8965.60.050.050.0220.1720.176.330.58300.0100.07.0
3.01033.961040.8865.60.050.050.0219.6519.836.330.34300.0100.07.0
4.0603.48549.1165.60.050.040.0219.3318.436.330.34300.0100.07.0
5.034.8383.865.60.010.010.024.265.076.330.36300.0100.07.0
6.027.3839.3265.60.010.010.024.515.366.330.7300.0100.07.0
7.043.61144.0665.60.010.020.026.1510.36.330.43300.0100.07.0
8.012.1825.9565.60.010.010.022.323.286.330.33300.0100.07.0
9.086.78193.7265.60.020.030.027.0311.686.330.37300.0100.07.0
10.051.7137.4265.60.010.010.024.954.946.330.31300.0100.07.0
11.0101.9890.965.60.020.020.027.928.226.330.42300.0100.07.0
2.0191.15234.865.60.030.030.0212.4813.676.330.61300.0100.08.0
3.0107.21179.8965.60.020.030.029.1910.666.330.44300.0100.08.0
4.0517.17616.2265.60.040.050.0216.7618.986.330.51300.0100.08.0
5.01224.541297.6565.60.050.050.0220.3922.926.330.73300.0100.08.0
6.067.0181.5265.60.020.020.026.627.216.330.67300.0100.08.0
7.0671.841009.5265.60.040.050.0215.918.836.330.42300.0100.08.0
8.0453.71330.8665.60.040.030.0216.1614.26.330.36300.0100.08.0
9.016.6537.965.60.010.010.023.764.916.330.35300.0100.08.0
10.090.2792.5765.60.020.020.028.587.826.330.49300.0100.08.0
11.0367.761453.3765.60.040.070.0214.8528.816.330.56300.0100.08.0
2.060.0937.0565.60.020.010.026.565.146.330.42300.0100.09.0
3.0257.74308.4365.60.030.040.0213.7615.756.330.47300.0100.09.0
4.0294.66228.2965.60.030.030.0214.4513.296.330.47300.0100.09.0
5.012.0222.3365.60.010.010.022.663.426.330.46300.0100.09.0
6.071.9785.665.60.020.020.027.78.456.330.43300.0100.09.0
7.080.04107.6165.60.020.020.028.529.126.330.43300.0100.09.0
8.0533.59491.3965.60.040.050.0218.1319.536.330.44300.0100.09.0
9.0201.59209.5165.60.020.030.0210.3311.56.330.49300.0100.09.0
10.01315.271294.1465.60.070.070.0229.3929.096.330.44300.0100.09.0
11.0416.54739.665.60.030.040.0212.8715.466.330.4300.0100.09.0
2.030.9849.7465.60.010.010.024.555.596.330.71300.0100.010.0
3.0411.51260.3865.60.040.030.0217.9114.56.330.76300.0100.010.0
4.0233.84172.865.60.030.030.0213.3410.836.330.24300.0100.010.0
5.0124.07140.0465.60.020.020.028.4510.16.330.24300.0100.010.0
6.0857.03686.5465.60.050.050.0220.8322.446.330.25300.0100.010.0
7.0317.79163.5665.60.030.020.0212.1110.06.330.23300.0100.010.0
8.01717.491502.9965.60.090.080.0236.133.656.330.25300.0100.010.0
9.0882.161112.3265.60.060.070.0226.4627.416.330.27300.0100.010.0
10.0463.5262.4565.60.040.040.0217.9314.766.330.23300.0100.010.0
11.01429.013354.1765.60.070.130.0231.2153.426.330.23300.0100.010.0
2.0457.9858.6465.60.050.060.0219.0625.096.330.36300.0100.011.0
3.0686.0643.7765.60.050.050.0221.0221.486.330.23300.0100.011.0
4.0191.13171.0165.60.020.020.029.247.676.330.21300.0100.011.0
5.0164.59116.0865.60.030.020.0211.619.646.330.24300.0100.011.0
6.0521.33721.2865.60.030.040.0214.4917.526.330.19300.0100.011.0
7.0263.6212.765.60.030.030.0214.3513.046.330.36300.0100.011.0
8.0230.32393.3565.60.030.040.0214.2117.356.330.25300.0100.011.0
9.0463.63463.3465.60.040.040.0218.0216.626.330.21300.0100.011.0
10.0123.08133.8365.60.020.020.027.759.126.330.36300.0100.011.0
11.0464.33567.8465.60.030.040.0211.8916.026.330.25300.0100.011.0
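The MSE, MAPE, and MAE values reported in Table A5 follow the standard definitions of these forecast-error metrics. A small sketch with illustrative cell voltages (not the paper's data):

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """MSE, MAPE (in %) and MAE, as used to score the forecasts."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = np.mean(err ** 2)                        # mean squared error
    mape = np.mean(np.abs(err / actual)) * 100.0   # mean absolute % error
    mae = np.mean(np.abs(err))                     # mean absolute error
    return mse, mape, mae

# hypothetical cell voltages in mV
mse, mape, mae = forecast_metrics([42000, 41990, 41980],
                                  [42010, 41985, 41975])
```

Because the cell voltages are around 42,000 mV, even a multi-millivolt absolute error yields a MAPE of only hundredths of a percent, which is why the reported MAPE values (0.09% and 0.18%) are so small relative to the MSE and MAE columns.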

References

  1. Skarka, W. Model-Based Design and Optimization of Electric Vehicles. In Proceedings of the 25th ISPE Inc International Conference on Transdisciplinary Engineering Location, Univ Modena & Reggio Emilia, Modena, Italy, 3–6 July 2018; Transdisciplinary Engineering Methods for Social Innovation of Industry 4.0 Book Series: Advances in Transdisciplinary Engineering; IOS Press: Amsterdam, The Netherlands, 2018; pp. 566–575. [Google Scholar]
  2. Klein, P.; Bergmann, R. Generation of Complex Data for Ai-Based Predictive Maintenance Research with a Physical Factory Model. In Proceedings of the 16th International Conference on Informatics in Control, Automation and Robotics, Prague, Czech Republic, 29–31 July 2019; SCITEPRESS—Science and Technology Publications: Prague, Czech Republic, 2019; pp. 40–50. [Google Scholar]
  3. Kotsiopoulos, T.; Sarigiannidis, P.; Ioannidis, D.; Tzovaras, D. Machine Learning and Deep Learning in Smart Manufacturing: The Smart Grid Paradigm. Comput. Sci. Rev. 2021, 40, 100341. [Google Scholar] [CrossRef]
  4. Coleman, C.; Damodaran, S.; Deuel, E. Predictive Maintenance and the Smart Factory; Deloitte University Press: Toronto, ON, Canada, 2017. [Google Scholar]
  5. Pech, M.; Vrchota, J.; Bednář, J. Predictive Maintenance and Intelligent Sensors in Smart Factory: Review. Sensors 2021, 21, 1470. [Google Scholar] [CrossRef] [PubMed]
  6. Sharp, M.; Ak, R.; Hedberg, T. A Survey of the Advancing Use and Development of Machine Learning in Smart Manufacturing. J. Manuf. Syst. 2018, 48, 170–179. [Google Scholar] [CrossRef] [PubMed]
  7. Steclik, T.; Cupek, R.; Drewniak, M. Automatic Grouping of Production Data in Industry 4.0: The Use Case of Internal Logistics Systems Based on Automated Guided Vehicles. J. Comput. Sci. 2022, 62, 101693. [Google Scholar] [CrossRef]
  8. Cupek, R.; Drewniak, M.; Steclik, T. Data Preprocessing, Aggregation and Clustering for Agile Manufacturing Based on Automated Guided Vehicles. In Proceedings of the Computational Science—ICCS 2021, Krakow, Poland, 16–18 June 2021; Paszynski, M., Kranzlmüller, D., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 12745, pp. 458–470, ISBN 9783030779696. [Google Scholar]
  9. Hu, L.; Miao, Y.; Wu, G.; Hassan, M.M.; Humar, I. IRobot-Factory: An Intelligent Robot Factory Based on Cognitive Manufacturing and Edge Computing. Future Gener. Comput. Syst. 2019, 90, 569–577. [Google Scholar] [CrossRef]
  10. Krzhizhanovskaya, V.V.; Závodszky, G.; Lees, M.H.; Dongarra, J.J.; Sloot, P.M.A.; Brissos, S.; Teixeira, J. (Eds.) Computational Science—ICCS 2020: 20th International Conference, Amsterdam, The Netherlands, 3–5 June 2020, Proceedings, Part V; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; ISBN 9783030504267. [Google Scholar]
  11. Benecki, P.; Kostrzewa, D.; Grzesik, P.; Shubyn, B.; Mrozek, D. Forecasting of Energy Consumption for Anomaly Detection in Automated Guided Vehicles: Models and Feature Selection. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022; IEEE: Prague, Czech Republic, 2022; pp. 2073–2079. [Google Scholar]
  12. Niestrój, R.; Rogala, T.; Skarka, W. An Energy Consumption Model for Designing an AGV Energy Storage System with a PEMFC Stack. Energies 2020, 13, 3435. [Google Scholar] [CrossRef]
  13. Grzesik, P.; Benecki, P.; Kostrzewa, D.; Shubyn, B.; Mrozek, D. On-Edge Aggregation Strategies over Industrial Data Produced by Autonomous Guided Vehicles. In Proceedings of the Computational Science—ICCS 2022, Las Vegas, NV, USA, 14–16 December 2022; Groen, D., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; Volume 13353, pp. 458–471. ISBN 9783031087592. [Google Scholar]
  14. Wang, X.; Garg, S.; Lin, H.; Kaddoum, G.; Hu, J.; Alhamid, M.F. An Intelligent UAV Based Data Aggregation Algorithm for 5G—Enabled Internet of Things. Comput. Netw. 2021, 185, 107628. [Google Scholar] [CrossRef]
  15. Essien, A.; Giannetti, C. A Deep Learning Model for Smart Manufacturing Using Convolutional LSTM Neural Network Autoencoders. IEEE Trans. Ind. Inf. 2020, 16, 6069–6078. [Google Scholar] [CrossRef]
  16. Medykovskvi, M.; Pavliuk, O.; Sydorenko, R. Use of Machine Learning Technologies for the Electric Consumption Forecast. In Proceedings of the 2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT), Lviv, Ukraine, 11–14 September 2018; IEEE: Lviv, Ukraine, 2018; pp. 432–435. [Google Scholar]
  17. Cupek, R.; Ziębiński, A.; Drewniak, M.; Fojcik, M. Estimation of the Number of Energy Consumption Profiles in the Case of Discreet Multi-Variant Production. In Intelligent Information and Database Systems; Nguyen, N.T., Hoang, D.H., Hong, T.-P., Pham, H., Trawiński, B., Eds.; Springer International Publishing: Cham, Switzerland, 2018; Volume 10752, pp. 674–684. ISBN 9783319754192. [Google Scholar]
  18. Schleipen, M.; Gilani, S.-S.; Bischoff, T.; Pfrommer, J. OPC UA & Industrie 4.0—Enabling Technology with High Diversity and Variability. Procedia CIRP 2016, 57, 315–320. [Google Scholar] [CrossRef]
  19. Ioana, A.; Korodi, A. OPC UA Publish-Subscribe and VSOME/IP Notify-Subscribe Based Gateway Application in the Context of Car to Infrastructure Communication. Sensors 2020, 20, 4624. [Google Scholar] [CrossRef]
  20. Cupek, R.; Gólczyński, Ł.; Ziebinski, A. An OPC UA Machine Learning Server for Automated Guided Vehicle. In Computational Collective Intelligence; Nguyen, N.T., Chbeir, R., Exposito, E., Aniorté, P., Trawiński, B., Eds.; Springer International Publishing: Cham, Switzerland, 2019; Volume 11684, pp. 218–228. ISBN 9783030283735. [Google Scholar]
  21. Izonin, I.; Tkachenko, R.; Shakhovska, N.; Ilchyshyn, B.; Singh, K.K. A Two-Step Data Normalization Approach for Improving Classification Accuracy in the Medical Diagnosis Domain. Mathematics 2022, 10, 1942. [Google Scholar] [CrossRef]
  22. Tlebaldinova, A.; Denissova, N.; Baklanova, O.; Krak, I.; Györök, G. Normalization of Vehicle License Plate Images Based on Analyzing of Its Specific Features for Improving the Quality Recognition. Acta Polytech. Hung. 2020, 17, 193–206. [Google Scholar] [CrossRef]
  23. Vafaei, N.; Ribeiro, R.A.; Camarinha-Matos, L.M. Normalization Techniques for Multi-Criteria Decision Making: Analytical Hierarchy Process Case Study. In Technological Innovation for Cyber-Physical Systems; Camarinha-Matos, L.M., Falcão, A.J., Vafaei, N., Najdi, S., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 470, pp. 261–269. ISBN 9783319311647. [Google Scholar]
  24. Dick, J.; Ladosz, P.; Ben-Iwhiwhu, E.; Shimadzu, H.; Kinnell, P.; Pilly, P.K.; Kolouri, S.; Soltoggio, A. Detecting Changes and Avoiding Catastrophic Forgetting in Dynamic Partially Observable Environments. Front. Neurorobotics 2020, 14, 578675. [Google Scholar] [CrossRef] [PubMed]
  25. Pavliuk, O.; Kolesnyk, H. Machine-Learning Method for Analyzing and Predicting the Number of Hospitalizations of Children during the Fourth Wave of the COVID-19 Pandemic in the Lviv Region. J. Reliab. Intell. Environ. 2022, 9, 17–26. [Google Scholar] [CrossRef] [PubMed]
  26. D’Angelo, G.M. Missing Data Methods for Partial Correlations. J. Biom. Biostat. 2012, 3, 155. [Google Scholar] [CrossRef] [PubMed]
  27. Sterne, J.A.C.; White, I.R.; Carlin, J.B.; Spratt, M.; Royston, P.; Kenward, M.G.; Wood, A.M.; Carpenter, J.R. Multiple Imputation for Missing Data in Epidemiological and Clinical Research: Potential and Pitfalls. BMJ 2009, 338, b2393. [Google Scholar] [CrossRef] [PubMed]
  28. Hauke, J.; Kossowski, T. Comparison of Values of Pearson’s and Spearman’s Correlation Coefficients on the Same Sets of Data. Quaest. Geogr. 2011, 30, 87–93. [Google Scholar] [CrossRef]
  29. Schober, P.; Boer, C.; Schwarte, L.A. Correlation Coefficients: Appropriate Use and Interpretation. Anesth. Analg. 2018, 126, 1763–1768. [Google Scholar] [CrossRef]
  30. Saccenti, E.; Hendriks, M.H.W.B.; Smilde, A.K. Corruption of the Pearson Correlation Coefficient by Measurement Error and Its Estimation, Bias, and Correction under Different Error Models. Sci. Rep. 2020, 10, 438. [Google Scholar] [CrossRef]
  31. Zhang, G.P. Neural Networks for Time-Series Forecasting. In Handbook of Natural Computing; Rozenberg, G., Bäck, T., Kok, J.N., Eds.; Springer: Berlin, Heidelberg, 2012; pp. 461–477. ISBN 9783540929093. [Google Scholar]
  32. Lishner, I.; Shtub, A. Using an Artificial Neural Network for Improving the Prediction of Project Duration. Mathematics 2022, 10, 4189. [Google Scholar] [CrossRef]
  33. Chen, F. Deep Neural Network Model Forecasting for Financial and Economic Market. J. Math. 2022, 2022, e8146555. [Google Scholar] [CrossRef]
  34. Galkin, D.; Dudkina, T.; Mamedova, N. Forecasting Time Series Using Neural Networks on the Example of Primary Sales of a Pharmaceutical Company. SHS Web Conf. 2022, 141, 01014. [Google Scholar] [CrossRef]
  35. Li, X. Application of Neural Networks in Financial Time Series Forecasting Models. J. Funct. Spaces 2022, 2022, e7817264. [Google Scholar] [CrossRef]
  36. Li, X.; Zhao, H.; Deng, W. BFOD: Blockchain-based Privacy Protection and Security Sharing Scheme of Flight Operation Data. IEEE Internet Things J. 2023, 1. [Google Scholar] [CrossRef]
  37. Li, M.; Zhang, W.; Hu, B.; Kang, J.; Wang, Y.; Lu, S. Automatic Assessment of Depression and Anxiety through Encoding Pupil-wave from HCI in VR Scenes. ACM Trans. Multimed. Comput. Commun. Appl. 2023, 20, 1–22. [Google Scholar] [CrossRef]
  38. Zhang, Z.; Guo, D.; Zhou, S.; Zhang, J.; Lin, Y. Flight Trajectory Prediction Enabled by Time-Frequency Wavelet Transform. Nat. Commun. 2023, 14, 5258. [Google Scholar] [CrossRef] [PubMed]
  39. Andriyevsky, B.; Barchiy, I.E.; Studenyak, I.P.; Kashuba, A.I.; Piasecki, M. Electron, Phonon and Thermoelectric Properties of Cu7PS6 Crystal Calculated at DFT Level. Sci. Rep. 2021, 11, 19065. [Google Scholar] [CrossRef] [PubMed]
  40. Oleksyn, Z.; Naidych, B.; Chernikova, O.; Glowa, L.; Ogorodnik, Y.; Solovyov, M.; Vashchynskyi, V.; Yavorskyi, R.; Il’chuk, G. First-Principles Calculations of Stable Geometric Configuration and Thermodynamic Parameters of Cadmium Sulfide Thin-Film Condensates. Phys. Chem. Solid State 2021, 22, 568–576. [Google Scholar] [CrossRef]
  41. Hanschek, A.J.; Bouvier, Y.E.; Jesacher, E.; Grbović, P.J. Analysis and Comparison of Power Distribution System Topologies for Low-Voltage DC–DC Automated Guided Vehicle Applications. Energies 2022, 15, 2012. [Google Scholar] [CrossRef]
  42. Poskart, B.; Iskierka, G.; Krot, K.; Burduk, R.; Gwizdal, P.; Gola, A. Multi-Parameter Predictive Model of Mobile Robot’s Battery Discharge for Intelligent Mission Planning in Multi-Robot Systems. Sensors 2022, 22, 9861. [Google Scholar] [CrossRef]
  43. Mrugalska, B.; Stetter, R. Health-Aware Model-Predictive Control of a Cooperative AGV-Based Production System. Sensors 2019, 19, 532. [Google Scholar] [CrossRef] [PubMed]
  44. Meng, J.; Azib, T.; Yue, M. Early-stage end-of-life prediction of lithium-ion battery using empirical mode decomposition and particle filter. Proc. Inst. Mech. Eng. Part A J. Power Energy 2023, 237, 185–197. [Google Scholar] [CrossRef]
  45. Zhan, X.; Xu, L.; Zhang, J.; Li, A. Study on AGVs Battery Charging Strategy for Improving Utilization. Procedia CIRP 2019, 81, 558–563. [Google Scholar] [CrossRef]
  46. Selmair, M.; Maurer, T. Enhancing Charging & Parking Processes of AGV Systems: Progressive Theoretical Considerations. In Proceedings of the Twelfth International Conference on Advances in System Simulation, Porto, Portugal, 18–20 October 2020; pp. 7–14. [Google Scholar]
  47. Li, J.; Cheng, W.; Lai, K.K.; Ram, B. Multi-AGV Flexible Manufacturing Cell Scheduling Considering Charging. Mathematics 2022, 10, 3417. [Google Scholar] [CrossRef]
  48. Szpytko, J.; Hyla, P.; Salgado, Y. Autonomous vehicles energy based operation capacity planning. J. Mach. Eng. 2020, 20, 126–138. [Google Scholar] [CrossRef]
  49. Saputra, R.P.; Rijanto, E. Automatic Guided Vehicles System and Its Coordination Control for Containers Terminal Logistics Application. arXiv 2021, arXiv:2104.08331. [Google Scholar] [CrossRef]
  50. Yesilyurt, O.; Bauer, D.; Emde, A.; Sauer, A. Why Should the Automated Guided Vehicles’ Batteries Be Used in the Manufacturing Plants as an Energy Storage? E3S Web Conf. 2021, 231, 01004. [Google Scholar] [CrossRef]
  51. Ahmed, R.; Rahimifard, S.; Habibi, S. Offline Parameter Identification and SOC Estimation for New and Aged Electric Vehicles Batteries. In Proceedings of the 2019 IEEE Transportation Electrification Conference and Expo (ITEC), Detroit, MI, USA, 19–21 June 2019; IEEE: Detroit, MI, USA, 2019; pp. 1–6. [Google Scholar]
  52. Han, X.; Feng, X.; Ouyang, M.; Lu, L.; Li, J.; Zheng, Y.; Li, Z. A Comparative Study of Charging Voltage Curve Analysis and State of Health Estimation of Lithium-Ion Batteries in Electric Vehicle. Automot. Innov. 2019, 2, 263–275. [Google Scholar] [CrossRef]
  53. Yesilyurt, O.; Kurrle, M.; Schlereth, A.; Jäger, M.; Sauer, A. Modelling AGV Operation Simulation with Lithium Batteries in Manufacturing. AKWI 2022, 7. [Google Scholar] [CrossRef]
  54. Ismail, M.; Dlyma, R.; Elrakaybi, A.; Ahmed, R.; Habibi, S. Battery State of Charge Estimation Using an Artificial Neural Network. In Proceedings of the 2017 IEEE Transportation Electrification Conference and Expo (ITEC), Harbin, China, 7–10 August 2017; IEEE: Chicago, IL, USA, 2017; pp. 342–349. [Google Scholar]
  55. Pavliuk, O.; Steclik, T.; Biernacki, P. The forecast of the AGV battery discharging via the machine learning methods. In Proceedings of the 2022 IEEE International Conference on Big Data (IEEE BigData 2022), Osaka, Japan, 17–20 December 2022; IEEE: Osaka, Japan, 2022; pp. 6315–6324. [Google Scholar]
  56. Leonori, S.; Baldini, L.; Rizzi, A.; Frattale Mascioli, F.M. A Physically Inspired Equivalent Neural Network Circuit Model for SoC Estimation of Electrochemical Cells. Energies 2021, 14, 7386. [Google Scholar] [CrossRef]
  57. Hsu, C.-W.; Xiong, R.; Chen, N.-Y.; Li, J.; Tsou, N.-T. Deep Neural Network Battery Life and Voltage Prediction by Using Data of One Cycle Only. Appl. Energy 2022, 306, 118134. [Google Scholar] [CrossRef]
  58. Simon, J.; Hošovský, A.; Sárosi, J. Neural Network Driven Automated Guided Vehicle Platform Development for Industry 4.0 Environment. Teh. Vjesn. 2021, 28, 1936–1942. [Google Scholar] [CrossRef]
  59. Little, R.J.; Rubin, D.B. Statistical Analysis with Missing Data; John Wiley & Sons: Hoboken, NJ, USA, 2019; Volume 793. [Google Scholar] [CrossRef]
  60. Yang, C.; Diao, L.; Cook, R.J. Adaptive response-dependent two-phase designs: Some results on robustness and efficiency. Stat. Med. 2022, 41, 4403–4425. [Google Scholar] [CrossRef]
Figure 1. The route map of a Formica 1 AGV in the AIUT enterprise.
Figure 2. The results of using the proposed methodology for detecting and eliminating any spontaneous outliers in the AGV signals.
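The outlier handling illustrated in Figure 2 treats samples that deviate from the mean by more than 1.5 sigma as missing and replaces them with a smoothed moving average. A minimal sketch of this step is shown below; the function name `clean_signal` and the rolling-window length of 5 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import pandas as pd

def clean_signal(values, sigma_limit=1.5, window=5):
    """Replace samples deviating more than sigma_limit * std from the
    mean with a centered rolling-mean estimate (window is an assumed
    illustrative value)."""
    s = pd.Series(values, dtype=float)
    mu, sd = s.mean(), s.std()
    # Flag spontaneous outliers: deviation beyond the sigma threshold.
    outliers = (s - mu).abs() > sigma_limit * sd
    # Smoothed moving average used as the replacement signal.
    smoothed = s.rolling(window, center=True, min_periods=1).mean()
    # Keep original samples; substitute the smoothed value at outliers.
    return s.mask(outliers, smoothed)
```

A usage sketch: feeding a flat 1 V signal with a single 100 V spike through `clean_signal` flags only the spike and pulls it back toward the local average.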
Figure 3. Graphical representation of the battery cell voltage distribution in the form of bins.
Figure 4. The structure of the DNN prediction model.
Figure 5. The dynamics of the DNN learning procedure.
Figure 6. Comparison of the forecast results of the traditional time series forecasting models.
Table 1. Frame 6000 for all of the Formica 1 AGV fields.
Field    Size (Bytes)    Description
TS       12              Time stamp
WS       8               Weight signals
G1LDS    2               Group 1—left drive signals
G2RDS    2               Group 2—right drive signals
ODS      12              Odometry signals
ENS      16              Energy signals
NNS      26              Natural navigation signals
NNCF     20              Natural navigation command feedback
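The field layout of Table 1 can be sketched as a byte-level parser that slices one Frame 6000 record by the stated sizes (98 bytes in total). The internal encoding of each field (endianness, integer widths) is not given in the table, so this sketch only extracts the raw byte spans; the function name `split_frame` is an illustrative assumption.

```python
# Byte sizes taken from Table 1; field encodings are not specified
# there, so each field is returned as a raw byte slice.
FRAME_LAYOUT = [
    ("TS", 12), ("WS", 8), ("G1LDS", 2), ("G2RDS", 2),
    ("ODS", 12), ("ENS", 16), ("NNS", 26), ("NNCF", 20),
]

def split_frame(buf: bytes) -> dict:
    """Slice one Frame 6000 record into its fields by Table 1 sizes."""
    expected = sum(size for _, size in FRAME_LAYOUT)  # 98 bytes total
    if len(buf) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(buf)}")
    fields, offset = {}, 0
    for name, size in FRAME_LAYOUT:
        fields[name] = buf[offset:offset + size]
        offset += size
    return fields
```

For example, `split_frame(bytes(98))["NNS"]` yields the 26-byte natural navigation segment.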
Table 2. Prediction effects of the different numbers of neurons in the hidden layer.
Neurons in the Hidden Layer | MSE (TW1 / TW2 / Stat) | MAPE, % (TW1 / TW2 / Stat) | MAE (TW1 / TW2 / Stat) | Training Speed (min/Batch Size)

Path through the 9 segments
2  | 21.29 / 57.81 / 245.99   | 0.01 / 0.01 / 0.04 | 3.49 / 5.94 / 14.69   | 0.54
3  | 43.97 / 120.82 / 245.99  | 0.01 / 0.02 / 0.04 | 5.40 / 8.10 / 14.69   | 0.555
4  | 32.18 / 144.70 / 245.99  | 0.01 / 0.03 / 0.04 | 4.55 / 10.56 / 14.69  | 0.83
5  | 93.79 / 222.94 / 245.99  | 0.02 / 0.03 / 0.04 | 8.21 / 12.09 / 14.69  | 0.368
6  | 12.96 / 60.37 / 245.99   | 0.01 / 0.02 / 0.04 | 3.07 / 6.32 / 14.69   | 1.062
7  | 45.64 / 280.77 / 245.99  | 0.01 / 0.03 / 0.04 | 5.60 / 13.28 / 14.69  | 0.259
8  | 69.10 / 184.58 / 245.99  | 0.01 / 0.03 / 0.04 | 6.16 / 10.47 / 14.69  | 0.464
9  | 64.52 / 193.81 / 245.99  | 0.01 / 0.03 / 0.04 | 6.21 / 10.77 / 14.69  | 0.367
10 | 25.64 / 165.30 / 245.99  | 0.01 / 0.03 / 0.04 | 4.13 / 10.83 / 14.69  | 0.199
11 | 140.44 / 160.60 / 245.99 | 0.03 / 0.02 / 0.04 | 10.59 / 8.08 / 14.69  | 0.454
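Table 2 compares hidden-layer sizes by MSE, MAPE, and MAE. The sketch below shows the standard definitions of these three metrics, which the paper is assumed to use; the exact formulas applied by the authors are not reproduced in this table.

```python
import numpy as np

# Standard error metrics assumed for Table 2 (MSE, MAE, MAPE).
def mse(y_true, y_pred):
    """Mean squared error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes y_true contains no zeros."""
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean(np.abs((y_true - np.asarray(y_pred)) / y_true)) * 100.0)
```

For instance, a forecast of [110, 180] against true values [100, 200] gives a MAPE of 10%.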

Share and Cite

MDPI and ACS Style

Pavliuk, O.; Cupek, R.; Steclik, T.; Medykovskyy, M.; Drewniak, M. A Novel Methodology Based on a Deep Neural Network and Data Mining for Predicting the Segmental Voltage Drop in Automated Guided Vehicle Battery Cells. Electronics 2023, 12, 4636. https://doi.org/10.3390/electronics12224636


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
