Article

Design and Implementation of an Integrated Control System for Omnidirectional Mobile Robots in Industrial Logistics

School of Mechanical Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Republic of Korea
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 3184; https://doi.org/10.3390/s23063184
Submission received: 28 February 2023 / Revised: 14 March 2023 / Accepted: 14 March 2023 / Published: 16 March 2023
(This article belongs to the Special Issue Advanced Sensing and Control Technologies for Autonomous Robots)

Abstract

The integration of intelligent robots into industrial production processes has the potential to significantly enhance efficiency and reduce the burden on human workers. However, for such robots to operate effectively within human environments, they must possess an adequate understanding of their surroundings and be able to navigate through narrow aisles while avoiding both stationary and moving obstacles. In this research study, an omnidirectional autonomous mobile robot was designed to perform industrial logistics tasks in high-traffic, dynamic environments. A control system incorporating both high-level and low-level algorithms was developed, and a graphical interface was introduced for each control level. A highly efficient microcontroller, the myRIO, was utilized as the low-level computer to control the motors with an appropriate level of accuracy and robustness. Additionally, a Raspberry Pi 4, in conjunction with a remote PC, was utilized for high-level decision making, such as mapping the experimental environment, path planning, and localization, using multiple LiDAR sensors, an IMU, and odometry data generated by the wheel encoders. In terms of software, LabVIEW was employed for the low-level computer, and the Robot Operating System (ROS) was utilized for the design of the higher-level software architecture. The techniques proposed in this paper provide a solution for developing medium- and large-category omnidirectional mobile robots with autonomous navigation and mapping capabilities.

1. Introduction

The COVID-19 pandemic has presented the global community with a unique challenge, and the scientific community has been working diligently to protect human health and maintain societal and industrial progress. The field of robotics has played a crucial role in this context. The utilization of different types of robots has been a highly researched topic in the wake of the pandemic. In fact, a survey [1] conducted in 2020 found that over 3500 papers were published on the topic of robots in contagion scenarios. Furthermore, the most significant research keywords, based on 280 publications, were mapped, with “autonomous robot” being among the top keywords. During the pandemic, the world has witnessed the successful deployment of robotic nurses [2] in Hong Kong, delivery robots in the United States, and working robots in Japan and Korea. Additionally, a study published in 2020 [3] indicates that since the onset of the COVID-19 pandemic, consumers are willing to pay an extra 61.28% for robot delivery.
The widespread adoption of robots has broadened the spectrum of human–robot collaboration, leading to an improvement in task accuracy and proximity to human employees. Among the various types of robots, mobile robots have gained significant attention for both industrial and logistic uses. The incorporation of autonomous robots in large-scale factories and logistics centers has become a common practice for reducing the strain on human labor.
For many years, autonomous guided vehicles (AGVs) [4] have dominated the robot industry due to their efficiency in handling manufacturing processes and logistics tasks, such as picking, packing, and palletizing, along pre-defined pathways. However, their inflexibility in adjusting to route changes and limited ability to collaborate with other systems or human operators has led to the development of a more advanced technology: autonomous mobile robots (AMRs) [5]. These robots have the capability of decision-making and autonomous navigation, without being restricted to a pre-defined path.
Another crucial consideration in terms of the integration of robots into human environments is the requirement for a proper understanding of the surrounding environment to avoid obstacles and unexpected encounters with humans or other objects. In fast-growing industrial environments with high traffic and narrow hallways surrounded by various objects and people, omnidirectional mobile robots (OMRs) [6] may be a superior solution due to their ability to move in any direction. However, their overlooked lower-level control design may not be effective in handling continuously changing loads. Thus, advanced control design, even for the lower-level control, is necessary to ensure the effectiveness of OMRs in heavy logistics duties.
In this research study, a mobile robot design has been proposed featuring four Mecanum wheels driven by bridge motor drivers and controlled by a myRIO controller. The rotation speeds of these wheels control the forward, backward, and sideways movements, as well as the turning, of the robot. This research studied ideas from a range of prior projects and integrated them to create an improved, better-performing autonomous mobile robot.
The study aimed to develop a closed-loop feedback control system that incorporated both feedforward control and a disturbance observer (DOB) [7], together with a graphical interface. The upper computer software was designed to enable remote control and monitoring of the robot, as well as to provide user-friendly human–computer interaction.
Automatic navigation and mapping were performed using the Robot Operating System (ROS), which provides a Navigation Stack, or automatic navigation system. This method, applicable in 2D or 3D [8], integrates information from odometry, sensor data, and a goal pose to produce safe velocity commands. The Navigation Stack can generate the shortest path and avoid obstacles, even if those obstacles are not present in the map data.
In order to build a map of the environment, Simultaneous Localization and Mapping (SLAM) was utilized. The GMapping [9] package was employed for the robot, utilizing multiple LiDAR and odometry data streams and employing Rao–Blackwellized particle filtering to generate a highly accurate representation of the environment.

2. Designing Hardware Architecture

The design and construction of an autonomous robot involves a holistic consideration of both its mechanical and electrical components. This integrated approach is critical in ensuring that the robot functions optimally and efficiently in fulfilling its intended tasks. The developed robot was named “Motion Bot” and its mechanical and electrical components are thoroughly described in the subsequent sections of this paper. The comprehensive analysis of the mechanical and electrical components plays a critical role in illuminating the intricacies and interdependencies of the various elements that comprise the autonomous robot’s architecture.

2.1. Mechanical Components Design

The autonomous robot is designed with a lightweight aluminum body suitable for indoor environments. The design of the robot’s body was created using computer-aided design (CAD) software, which was utilized to perform simulations to calculate the load-bearing capacity of the robot. Upon successful design, the chassis was manufactured using a computer numerical control (CNC) machine. Figure 1 depicts the actual physical appearance of the robot.
Mobile robots equipped with holonomic drive systems possess the ability to move in any direction regardless of their current position and orientation. This feature, known as omnidirectionality, is highly sought after in the field of mobile robotics. Several types of omnidirectional wheels exist, each with its own distinct advantages and disadvantages. The most common omnidirectional drives are the Kiwi and Holonomic systems [10], which require a precise arrangement to achieve omnidirectional motion. However, these wheels are not suitable for climbing ramps and have a lower capacity (approximately 50%) [11] for multi-directional movement. In contrast, Mecanum wheels, invented by Bengt Ilon, are highly efficient for forward and reverse movements, as well as lateral movements. Mecanum wheels can be mounted in a conventional orientation, with lateral motion achieved through wheel velocity control.
In the current research, “Motion Bot” was equipped with four Mecanum wheels of 100 mm diameter, each with twelve internal rollers mounted at a 45-degree angle to the Y axis of the wheel. The wheels were connected to the main body frame via a suspension mechanism that provides surface contact conformity and reduces vibrations of the robot body.
Figure 2 presents a visual representation of the kinematic vector directions of the chassis, which incorporates the Mecanum wheel and its internal rollers. The procedure for determining the kinematics [12] of the system involves first calculating the inverse kinematics and then calculating the pseudo-inverse [13]. This was achieved using a Cartesian coordinate system, which facilitated the analysis of vectors and other relevant variables. The variables and their definitions are listed in Table 1, and a complete list of the symbols used in this article is given in Appendix A.
To derive the kinematic equation, first, the relation between wheel velocity and the vehicle velocity was studied:
$$\dot{X}_{Wi} = \dot{X}_r + \dot{\theta}_r \, L \cos\left(\frac{\pi}{2} + \alpha_i\right) \quad (1)$$
$$\dot{Y}_{Wi} = \dot{Y}_r + \dot{\theta}_r \, L \sin\left(\frac{\pi}{2} + \alpha_i\right) \quad (2)$$
$$\dot{\theta}_{Wi} = \dot{\theta}_r \quad (3)$$
Additionally, the relation between wheel velocity and the roller velocity was found:
$$\dot{X}_{Wi} = r \, \dot{\theta}_{Roller} \cos\left(\frac{\pi}{2} + \gamma_i\right) \quad (4)$$
$$\dot{Y}_{Wi} = r \, \dot{\theta}_{Roller} \sin\left(\frac{\pi}{2} + \gamma_i\right) + R \, \dot{\theta}_{Wheel} \quad (5)$$
$$\dot{\theta}_{Wi} = \dot{\theta}_{Rot} \quad (6)$$
Now, arranging Equations (1)–(3) in matrix form, it can be written that:
$$\begin{bmatrix} \dot{X}_{Wi} \\ \dot{Y}_{Wi} \\ \dot{\theta}_{Wi} \end{bmatrix} = \begin{bmatrix} 1 & 0 & L\cos\left(\frac{\pi}{2}+\alpha_i\right) \\ 0 & 1 & L\sin\left(\frac{\pi}{2}+\alpha_i\right) \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{X}_r \\ \dot{Y}_r \\ \dot{\theta}_r \end{bmatrix} \quad (7)$$
and arranging Equations (4)–(6) in matrix form:
$$\begin{bmatrix} \dot{X}_{Wi} \\ \dot{Y}_{Wi} \\ \dot{\theta}_{Wi} \end{bmatrix} = \begin{bmatrix} 0 & r\cos\left(\frac{\pi}{2}+\gamma_i\right) & 0 \\ R & r\sin\left(\frac{\pi}{2}+\gamma_i\right) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{\theta}_{Wheel} \\ \dot{\theta}_{Roller} \\ \dot{\theta}_{Rot} \end{bmatrix} \quad (8)$$
From Equations (7) and (8) it can be written:
$$\begin{bmatrix} \dot{\theta}_{Wheel,i} \\ \dot{\theta}_{Roller,i} \\ \dot{\theta}_{Rot,i} \end{bmatrix} = \begin{bmatrix} \frac{1}{R\tan\gamma_i} & \frac{1}{R} & \frac{L}{R}\left(\cos\alpha_i - \sin\alpha_i\cot\gamma_i\right) \\ -\frac{1}{r\sin\gamma_i} & 0 & \frac{L\sin\alpha_i}{r\sin\gamma_i} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{X}_r \\ \dot{Y}_r \\ \dot{\theta}_r \end{bmatrix} \quad (9)$$
Within the context of Equation (9), the angular velocity of the roller is not a focal point of consideration, and since the wheels are securely attached to the motors, there is no rotational velocity in the yaw direction. Hence, considering only the angular velocity of the wheel, the conclusion is:
$$\dot{\theta}_{Wheel,i} = \frac{1}{R\tan\gamma_i}\,\dot{X}_r + \frac{1}{R}\,\dot{Y}_r + \frac{L}{R}\left(\cos\alpha_i - \sin\alpha_i\cot\gamma_i\right)\dot{\theta}_r \quad (10)$$
Table 2 lists the wheel and roller angular parameters for each wheel of the experimental robot.
Substituting the $\alpha_i$ and $\gamma_i$ values into Equation (10), the equation can be rewritten as:
$$\begin{bmatrix} \dot{\theta}_{Wheel,1} \\ \dot{\theta}_{Wheel,2} \\ \dot{\theta}_{Wheel,3} \\ \dot{\theta}_{Wheel,4} \end{bmatrix} = \frac{1}{R} \begin{bmatrix} -1 & 1 & \left(L_l + L_w\right) \\ 1 & 1 & -\left(L_l + L_w\right) \\ -1 & 1 & -\left(L_l + L_w\right) \\ 1 & 1 & \left(L_l + L_w\right) \end{bmatrix} \begin{bmatrix} \dot{X}_r \\ \dot{Y}_r \\ \dot{\theta}_r \end{bmatrix} \quad (11)$$
Equation (11) is the inverse kinematics of the system; to find the forward kinematics, the pseudo-inverse of Equation (11) must be computed, giving:
$$\begin{bmatrix} \dot{X}_r \\ \dot{Y}_r \\ \dot{\theta}_r \end{bmatrix} = \frac{R}{4} \begin{bmatrix} -1 & 1 & -1 & 1 \\ 1 & 1 & 1 & 1 \\ \frac{1}{L_l+L_w} & -\frac{1}{L_l+L_w} & -\frac{1}{L_l+L_w} & \frac{1}{L_l+L_w} \end{bmatrix} \begin{bmatrix} \dot{\theta}_{Wheel,1} \\ \dot{\theta}_{Wheel,2} \\ \dot{\theta}_{Wheel,3} \\ \dot{\theta}_{Wheel,4} \end{bmatrix} \quad (12)$$
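For readers who want to experiment with these relations, the following NumPy sketch implements Equations (11) and (12). The geometry values are illustrative placeholders rather than the robot's actual dimensions, and the sign pattern follows the wheel and roller angles of Table 2.

```python
import numpy as np

# Geometry placeholders (not the actual robot dimensions).
R = 0.05           # wheel radius [m]
L_W = 0.30         # L_w: wheel offset along the x-axis [m]
L_L = 0.20         # L_l: wheel offset along the y-axis [m]

# Inverse kinematics matrix of Eq. (11): body twist -> wheel speeds.
# Row signs follow the alpha_i / gamma_i pattern of Table 2.
K_INV = (1.0 / R) * np.array([
    [-1.0, 1.0,  (L_L + L_W)],
    [ 1.0, 1.0, -(L_L + L_W)],
    [-1.0, 1.0, -(L_L + L_W)],
    [ 1.0, 1.0,  (L_L + L_W)],
])

def wheel_speeds(x_dot, y_dot, theta_dot):
    """Eq. (11): wheel angular velocities [rad/s] for a commanded twist."""
    return K_INV @ np.array([x_dot, y_dot, theta_dot])

def body_twist(omega_wheels):
    """Eq. (12): the pseudo-inverse recovers the body twist."""
    return np.linalg.pinv(K_INV) @ np.asarray(omega_wheels)

# Pure longitudinal command (Y_r is the longitudinal axis here):
w = wheel_speeds(0.0, 0.3, 0.0)
print(w)                  # all four wheels spin at the same rate
print(body_twist(w))      # recovers (0.0, 0.3, 0.0)
```

Note that `np.linalg.pinv(K_INV)` reproduces exactly the $\frac{R}{4}$ matrix of Equation (12).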

2.2. Hardware Connection and Configuration

For the experimental robot, the electrical components were divided into three classes: the decision-making and control components, the sensors, and the power system. Figure 3 shows the hardware connections of all mobile robot parts, where the remote PC is the upper computer base. The ROS Master is executed from here, sending all control instructions over a common Wi-Fi channel. The Raspberry Pi works as a second upper computer base that collects data from the LiDAR and camera sensors. The myRIO works as the main controller: it receives control instructions from the upper computer base through Wi-Fi to control the DC (direct current) motors through the bridge drivers, and it publishes the encoder data as a ROS node. For the power source of the robot, a 24 V battery with a BMS (Battery Management System) and a capacity of 12 Ah was used. A 24 V to 12 V DC-DC converter is used because the motors run at 24 V, whereas the myRIO and Raspberry Pi can accept a maximum supply of 12 V.
It is acknowledged that utilizing a single upper computer, such as the Raspberry Pi, for processing heavy data may result in a decrease in performance. To ensure efficient monitoring and prompt response, a remote PC is utilized in conjunction with the Raspberry Pi. Figure 4 shows the data flow within this connection, including the ROS topic names.

3. Designing Software Architecture

The software architecture design will concentrate on the creation of velocity control mechanisms for the motors, the mapping of the surrounding environment, and the implementation of an autonomous navigation system. The control architecture has been bifurcated into two sections for comprehensive elucidation. The first component deals with the velocity control, which is referred to as the lower-level control and is exclusively accountable for executing directives without any decision-making capacity. Conversely, the higher-level control imbues the robot with the capacity to perceive its environment, generate trajectories towards a designated target, and make adaptive choices for obstacle avoidance.

3.1. Lower-Level Control Software Design

The control design of a mobile robot can be approached from either a dynamic or a kinematic perspective. While the dynamic approach involves the calculation of the real-time system and is more complex, the kinematic approach, which consists of both the kinematic loop and dynamics loop, is simpler and can guarantee stability through proper tuning. This study adopts the kinematic approach for the control design and classifies it into four sections. The first section focuses on finding the system identification and establishing a nominal model, followed by the feedback control loop, along with the feedforward and disturbance observer, in the second section. The third section addresses the design of various trajectories to evaluate the control performance, and the final section analyzes the robustness of the closed-loop system. LabVIEW programming was utilized for the lower-level control, providing a Human Machine Interface (HMI) that allows for real-time adjustment of control parameters and the creation of trajectories for automated guided robots.

3.1.1. System Identification

Since the electrical components, such as motor resistance and inductance, are handled by the motor driver, we focus on the mechanical parts for system identification. The nominal model for each wheel was identified through this process, and Figure 5 shows a block diagram of it.
For the system identification [14] of the four wheels, a chirp sine signal sweeping 0–10 Hz was applied for 10 s. The PWM command was normalized to the range 0–1, and sine magnitudes of 0.7, 0.75, 0.8, and 0.85 were used. Figure 6 shows the Bode plot of the identified model.
As can be seen from the Bode plot, the magnitude drops by around 20 dB per decade of frequency, so we can be assured that the system model is first order [15] and that the mathematical form of the nominal model should be:
$$\frac{Output}{Input} = \frac{1}{J_n s + B_n} \quad (13)$$
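A rough illustration of this identification step is sketched below: a 0–10 Hz chirp excites a stand-in first-order plant, and $J_n$ and $B_n$ are fitted to the measured frequency response. The sampling rate, the H1 estimator, and the stand-in plant values are assumptions for the sketch, not the authors' exact procedure.

```python
import numpy as np
from scipy import signal
from scipy.optimize import least_squares

# Excite a stand-in first-order plant with the 0-10 Hz, 10 s chirp
# described above, then fit Jn and Bn of 1/(Jn*s + Bn) to the
# measured frequency response. fs and the plant values are assumptions.
fs = 1000.0
t = np.arange(0.0, 10.0, 1.0 / fs)
u = 0.8 * signal.chirp(t, f0=0.0, t1=10.0, f1=10.0)   # normalized PWM

plant = signal.TransferFunction([1.0], [0.0074, 0.4357])
_, y, _ = signal.lsim(plant, u, t)                    # "measured" velocity

# H1 frequency-response estimate over the excited band.
f, Puy = signal.csd(u, y, fs=fs, nperseg=4096)
_, Puu = signal.csd(u, u, fs=fs, nperseg=4096)
H = Puy / Puu
band = (f > 0.2) & (f < 10.0)
w = 2.0 * np.pi * f[band]

def residual(p):
    Jn, Bn = p
    return np.abs(1.0 / (1j * w * Jn + Bn)) - np.abs(H[band])

Jn, Bn = least_squares(residual, x0=[0.01, 0.5], bounds=(0, np.inf)).x
print("Jn = %.5f, Bn = %.5f" % (Jn, Bn))
```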

3.1.2. Control Design

The motor control method [4] used in this experiment was speed–voltage loop control. Voltage was considered equivalent to velocity, and a controller was designed for each individual motor. The inverse kinematics of Equation (11) converts a commanded vehicle velocity into individual wheel velocity references, and the forward kinematics of Equation (12) recovers the actual vehicle velocity from the wheel velocities measured by the motor encoders. Using the reference velocity and the actual velocity, a feedback control loop can be designed. Using the nominal model from Section 3.1.1, the feedback controller was designed through the pole-zero cancellation method [16]:
$$C_{fb} = \omega_{fb} J_n + \frac{\omega_{fb} B_n}{s} \quad \left(\text{here, } \omega_{fb} = 2\pi \times 2\ \text{Hz}\right) \quad (14)$$
To smooth the effect of load torque on the DC motor speed and to speed up the response, feedforward compensation was designed by taking the inverse of the nominal model and multiplying it by a low-pass filter. The feedforward controller for this robot was:
$$C_{ff} = \frac{J_n s + B_n}{\frac{s}{\omega_{ff}} + 1} \quad \left(\text{here, } \omega_{ff} = 2\pi \times 10\ \text{Hz}\right) \quad (15)$$
Even though the combination of feedback and feedforward control was adequate for operating under no-load conditions, there was a noticeable degradation in the control system's performance under varying loads. Furthermore, model uncertainty had to be considered. To mitigate this issue, a disturbance observer was incorporated; it addresses system disturbances as well as sensor noise, thereby enhancing the control system's performance. For the disturbance observer, the inverse of the nominal model was used with a Q filter:
$$Q(s) = \frac{\omega_Q^2}{s^2 + 2\zeta\omega_Q s + \omega_Q^2} \quad \left(\text{here, } \omega_Q = 2\pi \times 2\ \text{Hz}\right) \quad (16)$$
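The three controller pieces can be assembled as transfer functions, for example with the python-control package, as in the hedged sketch below. The damping ratio $\zeta$ of the Q filter is not stated in the text and is assumed to be 1 here; $J_n$ and $B_n$ are the identified values listed in Table 3.

```python
import numpy as np
import control as ct

# Nominal model and the controller pieces of Eqs. (14)-(16),
# built with python-control. zeta = 1 is an assumption.
Jn, Bn = 0.0073969, 0.43571
s = ct.tf('s')

Pn = ct.tf([1], [Jn, Bn])                   # nominal plant, Eq. (13)

w_fb = 2 * np.pi * 2
C_fb = (w_fb * (Jn * s + Bn)) / s           # PI via pole-zero cancellation

w_ff = 2 * np.pi * 10
C_ff = (Jn * s + Bn) / (s / w_ff + 1)       # inverse model + low-pass

w_q, zeta = 2 * np.pi * 2, 1.0
Q = ct.tf([w_q**2], [1, 2 * zeta * w_q, w_q**2])   # DOB Q-filter

# With the cancellation, the feedback loop is ideally w_fb/(s + w_fb):
T_fb = ct.feedback(Pn * C_fb, 1)
print(ct.dcgain(T_fb))                      # -> 1.0, no steady-state error
```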
Figure 7 shows a block diagram of the control algorithm, where $\dot{x}_r$, $\dot{y}_r$, and $\dot{\theta}_r$ are the linear x, linear y, and angular velocities of the robot; they are controlled with a feedforward and a feedback loop, along with a disturbance observer. The symbols used in Figure 7 and their meanings are listed in Table 3. A study using this kind of control algorithm was conducted by Mu-Tian Yan and Yau-Jung Shiu [17], and it established that such a control strategy is adequate for controlling motors.

3.1.3. Control Performance Test

The performance evaluation of the lower-level control was conducted using a trajectory similar to the one shown in Figure 8. The trajectory incorporated straight motion, arc cornering, and turning motion with varying velocity. Data collection was performed via USART (Universal Synchronous/Asynchronous Receiver/Transmitter) communication [18], and the results were plotted using MATLAB. The velocity data was calculated directly from the kinematics, while the position data was obtained by applying discrete-time integration to the velocity data.
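A minimal sketch of that integration step is shown below, assuming body-frame velocities sampled at a fixed period; it is one straightforward way to reproduce the position plots, not necessarily the authors' exact code.

```python
import numpy as np

def integrate_pose(vx, vy, wz, dt):
    """Discrete-time integration of body-frame velocities into a
    world-frame pose (simple Euler steps at a fixed period dt)."""
    x = y = th = 0.0
    path = []
    for vxi, vyi, wzi in zip(vx, vy, wz):
        # rotate the body-frame velocity into the world frame, then step
        x += (vxi * np.cos(th) - vyi * np.sin(th)) * dt
        y += (vxi * np.sin(th) + vyi * np.cos(th)) * dt
        th += wzi * dt
        path.append((x, y, th))
    return np.array(path)
```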
In Figure 9, the velocity plot and velocity error plot are shown for the guided trajectory. Here, $V_x$, $V_y$, and $\omega$ are the longitudinal, lateral, and angular velocities, respectively.
From the error plot, it can be clearly seen that the velocity error is below 0.05 m/s on average. There is some overshoot at certain points, but the overall system is stable, with no steady-state error.
Figure 10 displays a plot of the commanded position and the actual position, as calculated by the motor encoder. The plot demonstrates that the robot is capable of following the command effectively while traversing straight motion and cornering. However, a negligible error, due to overshoot, is observed during the turning motion. During the evaluation of the lower-level control, the possibility of wheel slip was not taken into account, as it is addressed during the design phase of the higher-level control.

3.1.4. Robust Performance Test

In this section, the robustness of the control system designed around the disturbance observer (DOB) is analyzed [19]. To analyze robustness, the system uncertainty was selected first: ±30% of the nominal model values for both inertia ($J_n$) and friction ($B_n$).
Next, the uncertainty weight was selected through the following equations. The generic model uncertainty is described as a complex norm-bounded multiplicative uncertainty:
$$P(s) = \left(1 + W_2(s)\Delta(s)\right)P_n(s), \quad \|\Delta(s)\|_{\infty} \le 1 \quad (17)$$
The weight $W_2(s)$ is selected so that:
$$\max_{P \in \mathcal{P}} \left| \frac{P(j\omega) - P_n(j\omega)}{P_n(j\omega)} \right| \le \left| W_2(j\omega) \right| \quad (18)$$
Here, a set of perturbed plant models P is obtained by varying the values of J and B within their variability ranges:
$$\mathcal{P} = \left\{ P(s) = \frac{1}{Js + B} \;\middle|\; J = J_n \pm 30\%,\; B = B_n \pm 30\% \right\} \quad (19)$$
The resulting relative model error is then:
$$\frac{P(j\omega) - P_n(j\omega)}{P_n(j\omega)} = \frac{\frac{1}{Js+B} - \frac{1}{J_n s + B_n}}{\frac{1}{J_n s + B_n}} = \frac{\left(J_n s + B_n\right) - \left(Js + B\right)}{Js + B} = \frac{\left(J_n - J\right)s + \left(B_n - B\right)}{Js + B} \quad (20)$$
Figure 11 shows the selection of uncertainty weight function and its bode plot.
Here, the uncertainty weight is selected as:
$$W_2 = K \, \frac{1 + \frac{s}{\omega_z}}{1 + \frac{s}{\omega_p}} \quad \left(\text{here, } \omega_z = 2\pi \times 8,\ \omega_p = 2\pi \times 6,\ K = 0.125\right) \quad (21)$$
The robust stability of the overall system follows $T = \frac{P_n C + Q}{1 + P_n C}$, which is shown in Figure 12 for feedback cutoff frequencies from 2 to 10 Hz.
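The robust stability condition can be checked numerically by evaluating $|W_2(j\omega)\,T(j\omega)| < 1$ over a frequency grid, as in the following NumPy sketch (again assuming $\zeta = 1$ for the Q filter).

```python
import numpy as np

# Check |W2(jw) * T(jw)| < 1 with T = (Pn*C + Q)/(1 + Pn*C).
# Plain NumPy evaluation on a frequency grid; zeta = 1 assumed.
Jn, Bn = 0.0073969, 0.43571
w_fb, w_q = 2 * np.pi * 2, 2 * np.pi * 2
K, wz, wp = 0.125, 2 * np.pi * 8, 2 * np.pi * 6

w = np.logspace(-1, 3, 500)
s = 1j * w

Pn = 1.0 / (Jn * s + Bn)
C = w_fb * (Jn * s + Bn) / s
Q = w_q**2 / (s**2 + 2 * w_q * s + w_q**2)
W2 = K * (1 + s / wz) / (1 + s / wp)

T = (Pn * C + Q) / (1 + Pn * C)
print("robustly stable:", bool(np.all(np.abs(W2 * T) < 1.0)))
```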

3.2. Higher-Level Control Software Design

The higher-level controller plays a crucial role in ensuring the efficient and safe operation of the robot by generating a reference path that avoids potential collisions. This is achieved through the creation of a map of the environment that localizes the robot within it. The software utilized by the upper computer is based on the Robot Operating System (ROS), which serves as a framework for programming hardware components such as motors, sensors, and drivers.
ROS supports multiple programming languages, including C++, Python, and Java, and allows for the use of multiple programming languages across multiple connected computers. Additionally, ROS is capable of executing multiple executables in parallel, allowing for both synchronous and asynchronous data exchange between them. These executables, referred to as ROS nodes, share data through ROS topics.
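As a concrete illustration of this publish/subscribe pattern, the minimal rospy node below subscribes to a Twist topic and republishes it; the topic names and the pass-through behavior are illustrative only.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

# Minimal rospy node: subscribe to a velocity topic and republish it.
def callback(msg, pub):
    rospy.loginfo("vx=%.2f vy=%.2f wz=%.2f",
                  msg.linear.x, msg.linear.y, msg.angular.z)
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("cmd_vel_relay")
    pub = rospy.Publisher("cmd_vel_out", Twist, queue_size=10)
    rospy.Subscriber("cmd_vel", Twist, callback, callback_args=pub)
    rospy.spin()
```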
ROS also provides graphical interfaces, such as RVIZ [20], in which all the sensor data and related values can be visualized in real time. ROS also comes with SLAM and Navigation Stack packages, which provide adequate processes to build an accurate map of the environment and navigate it safely. For designing the higher-level control software, the 'turtlebot3' [21] and 'Nox' [22] package structures were used, with the modifications needed for our experimental robot. Additionally, as three LiDAR sensors were installed, a LiDAR merger package was used to combine their scan data.

3.2.1. ROS Package Modification

For architecting the higher-level software, several suitable modifications were performed, the most notable of which was the odometry package modification. As the robot can also move in the lateral direction, a calculation was needed to account for this motion. Additionally, to use Mecanum wheels, the pose error due to the wheel slip ratio must be considered. To overcome this, the pose computed from the wheel encoder data was combined with an estimated odometry obtained through sensor fusion of the LiDAR, IMU, and encoder data.
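A hedged sketch of such a Mecanum odometry update is given below: encoder rates are mapped to a body twist with the forward kinematics of Equation (12), including the lateral component that differential-drive odometry lacks, and then integrated. The fusion with LiDAR and IMU data (e.g., via an extended Kalman filter) is beyond this sketch, and the geometry values are placeholders.

```python
import numpy as np

# Encoder rates -> body twist via the forward kinematics of Eq. (12),
# then Euler integration. Geometry values are placeholders.
R, LW, LL = 0.05, 0.30, 0.20
K_FWD = (R / 4.0) * np.array([
    [-1.0, 1.0, -1.0, 1.0],                    # lateral rate, X_r
    [ 1.0, 1.0,  1.0, 1.0],                    # longitudinal rate, Y_r
    [ 1.0 / (LL + LW), -1.0 / (LL + LW),
     -1.0 / (LL + LW),  1.0 / (LL + LW)],      # yaw rate
])

class MecanumOdometry:
    def __init__(self):
        self.x = self.y = self.th = 0.0

    def update(self, wheel_rates, dt):
        vx, vy, wz = K_FWD @ np.asarray(wheel_rates)   # body twist
        c, s = np.cos(self.th), np.sin(self.th)
        self.x += (vx * c - vy * s) * dt               # world-frame step
        self.y += (vx * s + vy * c) * dt
        self.th += wz * dt
        return self.x, self.y, self.th
```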

3.2.2. Connection of Higher and Lower Software

For this experiment, NI myRIO was used for the lower-level control and for collecting odometry data; it is programmed with NI LabVIEW software. LabVIEW provides an add-on named "ROS for LabVIEW," which can be downloaded from the VI Package store. However, as ROS mainly runs on Ubuntu (Linux) and LabVIEW mainly runs on Windows, several steps are needed to connect the two systems. The preconditions to connect ROS with LabVIEW are listed below (a short script after the list sketches how these checks can be automated):
  • All the Wi-Fi connections should be on the same network; in practice, all devices must share the same subnet (the first three octets of the IP address).
  • Host IP address should be added to both Ubuntu and Windows systems using Administrator’s access.
  • Accessibility of each device should be checked using the “ping” command.
  • The antivirus network protection should be off, or new protocols should be made for those IP addresses.
  • ROS Master IP address and ROS Host IP address should be set before running ROSCORE.
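The checklist above can be partially automated; the short script below (with placeholder IP addresses) sets the ROS environment variables for processes launched from it and pings each device.

```python
#!/usr/bin/env python
import os
import subprocess

# Reflects the checklist above: set the ROS environment (for processes
# launched from this script) and ping every device. IPs are placeholders.
MASTER_IP = "192.168.0.10"                   # remote PC running roscore
DEVICES = ["192.168.0.11", "192.168.0.12"]   # e.g., Raspberry Pi, myRIO

os.environ["ROS_MASTER_URI"] = "http://%s:11311" % MASTER_IP
os.environ["ROS_IP"] = "192.168.0.11"        # this machine's own address

for ip in [MASTER_IP] + DEVICES:
    ok = subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                         stdout=subprocess.DEVNULL) == 0
    print("%s reachable: %s" % (ip, ok))
```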
If Windows Firewall does not allow the ROS network to communicate with LabVIEW, a Windows Firewall rule should be created. The steps are:
  • Open Control Panel > System and Security > Windows Firewall > Advanced Settings
  • Right-click “Inbound Rules” and select “New Rule”
  • Assign the following properties to the new rule
    • Select “Custom Rule” under “Rule Type.”
    • Under the protocol and port for the protocol type, select “ICMPv4.”
    • Apply to all local and remote IP addresses in the range.
    • Under "Profile," check "Domain," "Private," and "Public."
    • Assign a name, such as “ICMPv4 rule for ROS communication,” and choose “Finish.”
After successful establishment of the ROS network, it is time to run ROS on the LabVIEW system. Figure 13 shows a simple VI that subscribes to the /cmd_vel topic and reads the Twist message of linear and angular velocity. Reading those messages from ROS, LabVIEW executes the linear and angular motion by running the motors through the myRIO device.
Before running the VI, we should double-click ROS_Topic_init.vi and correct the topic name and message type if needed. It is always best practice to verify the connection to the ROS Master from inside LabVIEW to ensure the node is working correctly; otherwise, errors can occur, and it becomes harder to reconnect.
The complete software architecture is divided into several tasks: receiving velocity commands through a node from the master computer; processing the input velocity through the control algorithms and generating the PWM and direction signals for the motor drivers; and, lastly, calculating the velocity of the robot from the encoder data and sending it to the ROS Master through another node. Figure 14 shows a LabVIEW program in which a subscriber node is created to receive the velocity command and a publisher node is created to publish the linear and angular velocity of the robot.

4. SLAM based on ROS

SLAM refers to the process of creating a map of an unknown environment while simultaneously determining the robot's location within it. This is achieved through the use of sensors, such as LiDAR sensors or GPS, together with wheel odometry. Performing localization and mapping simultaneously presents a significant challenge, akin to navigating and mapping a large, unknown house. SLAM algorithms depend on probabilistic models, which account for uncertainty and estimation processes. Researchers from diverse fields are actively exploring ways to improve the representation of both the environment and the robot's position. The advancement of various sensors has led to the widespread use of SLAM in applications including rescue operations, archaeology, and military and industrial contexts. One of the most widely used SLAM methods in the ROS framework is the GMapping algorithm. This method is based on the Rao–Blackwellized particle filter (RBPF) [23] and has proven to be highly effective in acquiring maps of unknown dynamic environments. Other popular SLAM algorithms, such as Hector SLAM and FastSLAM, have their own uses and capabilities, but GMapping stands out for its ability to fuse multiple sensor data sources using a Kalman filter [24] to achieve more accurate estimations.
To perform SLAM properly, four sets of data are required. First, the robot's position is needed in both the stationary and the moving condition; for this experiment, the initial position was provided to the robot. Second, the obstacles surrounding the robot must be sensed or measured; this was carried out using the LiDAR sensors. Third, an initial map of the environment is needed, which can be made while the robot is stationary. Fourth, a path along which the robot moves through the unknown environment is required; this was covered using odometry and IMU sensor data. However, as our robot is a medium-sized mobile robot, a single LiDAR sensor is not enough: if the LiDAR is installed only on the top, it cannot cover the area below it. To solve this problem, we installed three LiDAR sensors, as shown in Figure 15. One LiDAR on the top covers 360°, and the other two, at the front and back, each cover 180° from the bottom. Merging them together provides precise information about the surrounding obstacles.
To merge the three LiDAR data streams, a ROS package was created following papers [25,26] on multi-LiDAR sensor collaboration approaches. Figure 16 shows the algorithm used to merge the three scan streams and publish them as a single laser topic. In this algorithm, the ROS slave on the Raspberry Pi board collects the data from the three LiDAR sensors and publishes each as a node with its own topic name. The ROS Master, running on a laptop, then subscribes to those topics and synchronizes the data to create a point cloud. The clouds are merged using the point cloud library, the merged point cloud is published, and finally the point cloud data is converted back into laser scan data and published as the merged scan.
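A simplified rospy version of this pipeline is sketched below. It synchronizes the three scan topics, projects each to a point cloud, and republishes the concatenation; the topic names are assumptions, the scans must be transformed into a common frame with tf in a real system, and the conversion back to a LaserScan can be handled by the standard pointcloud_to_laserscan node.

```python
#!/usr/bin/env python
import rospy
import message_filters
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import LaserScan, PointCloud2
from laser_geometry import LaserProjection

# Synchronize three scan topics, project each to a point cloud, and
# republish the concatenation. In a real system the clouds must first
# be transformed into a common frame (tf).
proj = LaserProjection()
pub = None

def merge(scan_top, scan_front, scan_back):
    points = []
    for scan in (scan_top, scan_front, scan_back):
        cloud = proj.projectLaser(scan)
        points.extend(pc2.read_points(cloud, field_names=("x", "y", "z")))
    pub.publish(pc2.create_cloud_xyz32(scan_top.header, points))

if __name__ == "__main__":
    rospy.init_node("lidar_merger")
    pub = rospy.Publisher("merged_cloud", PointCloud2, queue_size=5)
    subs = [message_filters.Subscriber(name, LaserScan)
            for name in ("scan_top", "scan_front", "scan_back")]
    sync = message_filters.ApproximateTimeSynchronizer(subs, 10, slop=0.05)
    sync.registerCallback(merge)
    rospy.spin()
```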
Figure 17 shows the difference in SLAM performance between a single LiDAR and multiple LiDAR. In Figure 17b, a clearly better performance and a cleaner map of the environment can be seen using multiple LiDAR. Some errors are also visible; these were mainly generated by noise and can be reduced through further research and development. The green line in Figure 17b indicates the trajectory of the robot while building the map with the SLAM algorithm.

5. Navigation Based on ROS

The Navigation Stack is a highly advanced ROS software package capable of performing both localization and autonomous navigation along a planned trajectory. It comprises three sub-packages. The first is the Adaptive Monte Carlo Localization (AMCL) [27] module, which localizes the robot within a map using particle filters together with odometry and laser data. At the initial position, the upper computer has limited data with which to calculate the exact position of the robot, resulting in a large circular uncertainty area. However, as the robot moves, the particle cloud accumulates more data, allowing a more accurate calculation of the robot's position. Figure 18 shows an implementation of AMCL, where the red arrows show the possible positions of the robot within the map.
The second sub-package, the Map Server, is responsible for reading the map created by SLAM from disk storage and serving it as a topic named /map to the ROS master. The third sub-package, the Move Base package, is responsible for generating a secure and efficient path for autonomous navigation. This package reads various initial conditions, such as the robot’s footprint dimensions, obstacle range, and maximum and minimum linear and angular velocity, from YAML files. It then generates a path using algorithms [28] such as A-star, Rapidly-exploring Random Tree (RRT), or RRT Star, and various optimization techniques, such as Genetic Algorithm (GA), Artificial Intelligence (AI), and Particle Swarm Optimization (PSO).
Figure 19 shows the role of the different ROS Navigation Stack files [29]. An important thing to note here is that the cost map is divided into a global cost map and a local cost map: the global cost map contains overall information about the entire environment, and the local cost map contains information about the obstacles surrounding the robot. The global path planner is responsible for creating the main trajectory to reach the goal, while the local path planner is responsible for avoiding small obstacles by correcting the main trajectory generated by the global planner. The path planner algorithm used in this paper is adopted from a research study conducted by Liu Tianyu, Yan Ruixin, Wei Guangrui, and Sun Lei [29].
In order to perform autonomous navigation with the robot, modifications to the ROS navigation stack parameters were necessary to account for the specific dimensions and environment of the robot. A threshold of 350 mm was applied on the edge of obstacles to avoid collisions, and proper path planning was executed. In the event of new obstacles (e.g., a walking person) appearing in the path of the planned trajectory, which are not present in the global map, they are added to the local map and the move base package re-plans the path to reach the goal. Additionally, lateral path planning freedom was added by modifying various files in the ROS navigation stack.
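By way of illustration, a configuration fragment in the Navigation Stack's YAML format might look like the following; apart from the 0.35 m inflation radius mentioned above, the values are placeholders, not the authors' actual files. The holonomic_robot flag is one standard way to allow lateral velocity commands in the local planner.

```yaml
# Illustrative Navigation Stack fragments. Only the 0.35 m inflation
# radius comes from the text; everything else is a placeholder.

# costmap_common_params.yaml
footprint: [[0.45, 0.35], [0.45, -0.35], [-0.45, -0.35], [-0.45, 0.35]]
inflation_radius: 0.35      # 350 mm clearance around obstacle edges
obstacle_range: 2.5
raytrace_range: 3.0

# base_local_planner_params.yaml
TrajectoryPlannerROS:
  holonomic_robot: true     # permit lateral (y) velocity for Mecanum drive
  max_vel_x: 0.5
  max_vel_theta: 1.0
```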

6. Results and Discussion

In this section, the results and analyses are discussed to assess what was accomplished. Issues and observations encountered while testing the robot are mentioned, and further research goals are identified to take the robot to the next level. For clarity, this section is divided into two sub-sections: the first discusses the lower-level control performance, and the second discusses the higher-level control performance.

6.1. Lower-Level Control Results

From the experimental results in Figure 9, it can be observed that the lower-level controller performs with gratifying accuracy. In the position plot in Figure 10, the error was less than 0.05 m. Through the robustness analysis, we found that both the disturbance observer loop and the overall control loop stayed under the uncertainty weighting function magnitude curve shown in Figure 11 and Figure 12. Thus, theoretically, both loops are stable, which means that even if 30% more load than expected were added, the velocity performance of the mobile robot would remain stable. Additionally, the control system parameters are adjustable through a graphical interface, which makes the robot suitable for operating with a variable load.

6.2. Higher-Level Control Results

In the upper computer, the software architecture was adequate to perform successful SLAM, although many error lines can be found outside the boundary shown in Figure 20. These mainly occurred due to sensor noise and reflections from different light sources; further noise-reduction algorithms can be developed in future research. Additionally, the LiDAR sensor is less effective when its beam passes through glass or strikes a mirror. This phenomenon can be avoided by using more precise sensors or 3D camera sensors.
For the navigation architecture, the software successfully planned a path to the goal, avoiding all known and unknown obstacles; it can thus perform automatic navigation inside an indoor environment. Figure 21 shows the autonomous navigation performance of our mobile robot. To start navigation, the robot's current location is initialized through the ROS RVIZ interface and, after some iterations, the robot localizes itself accurately. Then, through the RVIZ interface, or by directly commanding the goal pose, autonomous navigation can be initiated.
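Commanding the goal pose directly can be done through the standard move_base action interface, as in the sketch below; the goal coordinates are placeholders.

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Send a goal pose straight to move_base instead of clicking in RViz.
# The coordinates below are placeholders.
rospy.init_node("send_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0   # face along +x

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("result state: %d", client.get_state())
```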
The ability to avoid sudden obstacles, such as a human or an unknown object, was also verified with the experimental mobile robot. Figure 22 shows that the robot creates the global path to reach the goal according to the global map. However, as soon as obstacles are detected on the path, the local path planner adjusts the global path to avoid the obstacle and reach the goal.
Path planning that considers lateral motion was also verified. Figure 23 shows a successful implementation of path planning in the linear X and Y directions, allowing the experimental mobile robot to take the shortest path to the goal.

7. Conclusions

The present study aimed to design and develop an omnidirectional mobile robot combining the characteristics of an autonomous mobile robot and an automated guided vehicle. The results obtained from the practical operation of the 'MotionBot' robot, as discussed in the previous sections, demonstrated the reliability and effectiveness of the proposed techniques. The focus of the study was on enhancing the lower-level control through feedback and feedforward controllers to reduce vibrations and increase stability at a low computational cost. Additionally, the robustness of the robot was considered, since it is expected to operate in different environments with different loads; a robustness study and analysis was conducted, and the results confirmed its adequacy.
To enhance the sensing capabilities of the robot, a fusion of three LiDAR data streams was executed to improve the accuracy of localization and positioning. The performance of a single LiDAR and multiple LiDAR using GMapping SLAM was evaluated to increase mapping accuracy in unknown environments. The robot successfully reached the goal point while avoiding obstacles in a dynamic environment, and a user-friendly GUI was developed using LabVIEW software. Future research could aim to reduce LiDAR noise, address the wheel slip ratio problem, and implement object recognition and tracking technologies. The utilization of OpenCV and TensorFlow could enable the robot to analyze objects, such as human bodies, and follow them using object-following algorithms. The potential for further improvement, leveraging the capabilities of the ROS platform, holds promise for the logistics and courier industries.

Author Contributions

A.N. and S.L. take the lead in writing papers, developing control algorithms, and conducting experiments. K.N. has reviewed the overall contents and supervised the control development. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the 2020 Yeungnam University Research Grant (No. 220A380108) and was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1C1C1011785).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this article:
AMR | Autonomous Mobile Robot
AGV | Automated Guided Vehicle
ROS | Robot Operating System
DOB | Disturbance Observer
CNC | Computer Numerical Control
IMU | Inertial Measurement Unit
CoM | Center of Mass
USART | Universal Synchronous/Asynchronous Receiver/Transmitter
SLAM | Simultaneous Localization and Mapping
LiDAR | Light Detection and Ranging
URDF | Unified Robot Description Format
DoF | Degree of Freedom
GPS | Global Positioning System
RBPF | Rao-Blackwellized Particle Filter
AMCL | Adaptive Monte Carlo Localization
YAML | YAML Ain't Markup Language
PSO | Particle Swarm Optimization
RRT | Rapidly-exploring Random Tree

Appendix A. Symbol and Definition

This appendix provides a list (Table A1) of all the symbols used in this paper with their definitions.
Table A1. List of symbols and their definitions.
Variable | Definition
$\dot{Y}_r$ | Instantaneous longitudinal velocity of the robot
$\dot{X}_r$ | Instantaneous lateral velocity of the robot
$\dot{\theta}_r$ | Angular velocity of the robot
$\dot{Y}_{Wi}$ | Instantaneous longitudinal velocity of the i-th wheel
$\dot{X}_{Wi}$ | Instantaneous lateral velocity of the i-th wheel
$\dot{\theta}_{Wheel}$ | Angular velocity of the wheel about the $X_{Wi}$ axis (pitch axis)
$\dot{\theta}_{Rot}$ | Angular velocity of the wheel about the $Z_{Wi}$ axis (yaw axis)
$\dot{\theta}_{Roller}$ | Angular velocity of the roller when it contacts the ground
$\gamma_i$ | Rotation angle between the i-th wheel frame and the roller frame
$\alpha_i$ | Angle between the robot main frame and the i-th wheel frame
$L_w$ | Distance between the robot coordinate frame and the i-th wheel along the x-axis
$L_l$ | Distance between the robot coordinate frame and the i-th wheel along the y-axis
$R$ | Wheel radius
$r$ | Roller radius
CoM | Center of mass of the robot
$J_n$ | Moment of inertia (0.0073969)
$B_n$ | Friction constant (0.43571)
$s$ | Laplace transform variable
$\omega_{fb}$ | Feedback bandwidth
$C_{fb}$ | Feedback controller
$C_{ff}$ | Feedforward controller
$\omega_{ff}$ | Feedforward bandwidth
$\zeta$ | Damping ratio
$\omega_Q$ | Q-filter bandwidth
$e_i(k)$ | Uncorrelated observation errors
$L_{pi}$ | Jacobian matrix of the observation model with respect to the landmarks
$L_v$ | Jacobian matrix of the observation model with respect to the robot odometry
$L_i$ | Observation matrix that relates to the sensor output
$z_i$ | The state vector $x(k)$ when observing the $i$-th landmark

References

  1. Wang, X.V.; Wang, L. A literature survey of the robotic technologies during the COVID-19 pandemic. J. Manuf. Syst. 2021, 60, 823–836. [Google Scholar] [CrossRef] [PubMed]
  2. Pani, A.; Mishra, S.; Golias, M.; Figliozzi, M. Evaluating public acceptance of autonomous delivery robots during COVID-19 pandemic. Transp. Res. Part Transp. Environ. 2020, 89, 102600. [Google Scholar] [CrossRef]
  3. Lozoya, C.; Marti, P.; Velasco, M.; Fuertes, J.M. Effective Real-Time Wireless Control of an Autonomous Guided Vehicle. In Proceedings of the 2007 IEEE International Symposium on Industrial Electronics, Vigo, Spain, 4–7 June 2007; IEEE: Vigo, Spain, 2007; pp. 2876–2881. [Google Scholar]
  4. Fragapane, G.; de Koster, R.; Sgarbossa, F.; Strandhagen, J.O. Planning and control of autonomous mobile robots for intralogistics: Literature review and research agenda. Eur. J. Oper. Res. 2021, 294, 405–426. [Google Scholar] [CrossRef]
  5. Dosoftei, C.-C.; Popovici, A.-T.; Sacaleanu, P.-R.; Gherghel, P.-M.; Budaciu, C. Hardware in the Loop Topology for an Omnidirectional Mobile Robot Using Matlab in a Robot Operating System Environment. Symmetry 2021, 13, 969. [Google Scholar] [CrossRef]
  6. Sariyildiz, E.; Oboe, R.; Ohnishi, K. Disturbance Observer-Based Robust Control and Its Applications: 35th Anniversary Overview. IEEE Trans. Ind. Electron. 2020, 67, 2042–2053. [Google Scholar] [CrossRef] [Green Version]
  7. Autonomous 2D SLAM and 3D Mapping of an Environment Using a Single 2D LIDAR and ROS | IEEE Conference Publication | IEEE Xplore. Available online: https://ieeexplore.ieee.org/document/8215333 (accessed on 8 February 2023).
  8. Balasuriya, B.L.E.A.; Chathuranga, B.A.H.; Jayasundara, B.H.M.D.; Napagoda, N.R.A.C.; Kumarawadu, S.P.; Chandima, D.P.; Jayasekara, A.G.B.P. Outdoor robot navigation using Gmapping based SLAM algorithm. In Proceedings of the 2016 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, 5–6 April 2016; pp. 403–408. [Google Scholar]
  9. Szayer, G. Kinematic and Dynamic Limits of Holonomic Mobile Robots. Ph.D. Thesis, Budapest University of Technology and Economics (Hungary), Budapest, Hungary, 2018. [Google Scholar]
  10. Kanjanawanishkul, K. Omnidirectional wheeled mobile robots: Wheel types and practical applications. Int. J. Adv. Mechatron. Syst. 2015, 6, 289. [Google Scholar] [CrossRef]
  11. Fahmizal; Kuo, C.-H. Trajectory and heading tracking of a mecanum wheeled robot using fuzzy logic control. In Proceedings of the 2016 International Conference on Instrumentation, Control and Automation (ICA), Bandung, Indonesia, 29–31 August 2016; pp. 54–59. [Google Scholar]
  12. Chevallereau, C.; Khalil, W. A new method for the solution of the inverse kinematics of redundant robots. In Proceedings of the 1988 IEEE International Conference on Robotics and Automation, Philadelphia, PA, USA, 24–29 April 1988; Volume 1, pp. 37–42. [Google Scholar]
  13. Tutunji, T.A. DC Motor Identification using Impulse Response Data. In Proceedings of the EUROCON 2005—The International Conference on “Computer as a Tool”, Belgrade, Serbia, 21–24 November 2005; IEEE: Belgrade, Serbia, 2005; pp. 1734–1736. [Google Scholar]
  14. Ljung, L.; Glover, K. Frequency domain versus time domain methods in system identification. Automatica 1981, 17, 71–86. [Google Scholar] [CrossRef]
  15. You, S.H.; Bonn, K.; Kim, D.S.; Kim, S.-K. Cascade-Type Pole-Zero Cancellation Output Voltage Regulator for DC/DC Boost Converters. Energies 2021, 14, 3824. [Google Scholar] [CrossRef]
  16. Yan, M.-T.; Shiu, Y.-J. Theory and application of a combined feedback–feedforward control and disturbance observer in linear motor drive wire-EDM machines. Int. J. Mach. Tools Manuf. 2008, 48, 388–401. [Google Scholar] [CrossRef]
  17. Burke, J.L.; Murphy, R.R. Human-robot interaction in USAR technical search: Two heads are better than one. In Proceedings of the RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No.04TH8759), Kurashiki, Japan, 22–22 September 2004; IEEE: Kurashiki, Japan, 2004; pp. 307–312. [Google Scholar]
  18. Agarwal, J.; Parmar, G.; Gupta, R. Application of Sine Cosine Algorithm in Optimal Control of DC Motor and Robustness Analysis. Wulfenia 2017, 24, 77–95. [Google Scholar]
  19. Kam, H.R.; Lee, S.-H.; Park, T.; Kim, C.-H. RViz: A toolkit for real domain data visualization. Telecommun. Syst. 2015, 60, 337–345. [Google Scholar] [CrossRef]
  20. Guizzo, E.; Ackerman, E. The TurtleBot3 Teacher [Resources_Hands On]. IEEE Spectr. 2017, 54, 19–20. [Google Scholar] [CrossRef]
  21. Joshi, R.; Bhaiya, D.; Purkayastha, A.; Patil, S.; Deshpande, A. Simultaneous Navigator for Autonomous Identification and Localization Robot. In Proceedings of the 2021 IEEE Region 10 Symposium (TENSYMP), Jeju, Republic of Korea, 23–25 August 2021; IEEE: Jeju, Republic of Korea, 2021; pp. 1–6. [Google Scholar]
  22. Tam, N.D. The Implementation of Particle Filter Method in ROS for Localization. Bachelor’s Thesis, Vietnamese-German University, Bến Cát, Vietnam, 2017. [Google Scholar]
  23. Sasiadek, J.Z.; Hartana, P. Sensor data fusion using Kalman filter. In Proceedings of the Third International Conference on Information Fusion, Paris, France, 10–13 July 2000; Volume 2. [Google Scholar]
  24. Gao, C.; Spletzer, J.R. On-line calibration of multiple LIDARs on a mobile vehicle platform. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 279–284. [Google Scholar]
  25. Ballardini, A.; Fontana, S.; Furlan, A.; Sorrenti, D. ira_laser_tools: A ROS LaserScan manipulation toolbox. arXiv 2014. [Google Scholar] [CrossRef]
  26. dos Reis, W.P.N.; da Silva, G.J.; Junior, O.M.; Vivaldini, K.C.T. An extended analysis on tuning the parameters of Adaptive Monte Carlo Localization ROS package in an automated guided vehicle. Int. J. Adv. Manuf. Technol. 2021, 117, 1975–1995. [Google Scholar] [CrossRef]
  27. Korkmaz, M.; Durdu, A. Comparison of optimal path planning algorithms. In Proceedings of the 2018 14th International Conference on Advanced Trends in Radioelecrtronics, Telecommunications and Computer Engineering (TCSET), Lviv-Slavske, Ukraine, 20–24 February 2018; pp. 255–258. [Google Scholar]
  28. Guimarães, R.L.; de Oliveira, A.S.; Fabro, J.A.; Becker, T.; Brenner, V.A. ROS Navigation: Concepts and Tutorial. In Robot Operating System (ROS): The Complete Reference (Volume 1); Koubaa, A., Ed.; Studies in Computational Intelligence; Springer International Publishing: Cham, Switzerland, 2016; pp. 121–160. ISBN 978-3-319-26054-9. [Google Scholar]
  29. Tianyu, L.; Ruixin, Y.; Guangrui, W.; Lei, S. Local Path Planning Algorithm for Blind-guiding Robot Based on Improved DWA Algorithm. In Proceedings of the 2019 Chinese Control And Decision Conference (CCDC), Nanchang, China, 3–5 June 2019; pp. 6169–6173. [Google Scholar]
Figure 1. Appearance of Motion Bot (Experimental Robot) (a) without robotic arm; (b) with robotic arm.
Figure 2. Direction of speed vector on Robot and Mecanum wheel.
Figure 3. Connection Diagram of Different electrical components.
Figure 4. Visualization of Data flow.
Figure 5. System Identification process block diagram.
Figure 6. Bode plot Diagram.
Figure 7. Block Diagram of control algorithm.
Figure 8. Trajectory of the experimental robot.
Figure 9. Velocity input vs. output plot.
Figure 10. Position input vs. output plot.
Figure 11. Uncertainty weight selection.
Figure 12. Robust stability for overall system.
Figure 13. ROS Programming with LabVIEW (subscriber to cmd_vel topic).
Figure 14. ROS Programming with LabVIEW (publisher and subscriber of different topic).
Figure 15. Position of LiDAR sensors and covering area.
Figure 16. Algorithm for merging 3 lidar scan data for mapping.
Figure 17. (a) SLAM performance with single Lidar; (b) SLAM performance with multiple Lidar and IMU.
Figure 18. Monte Carlo Localization.
Figure 19. ROS Navigation Stack parts and their roles.
Figure 20. SLAM performance with MotionBot.
Figure 21. ROS autonomous navigation with MotionBot.
Figure 22. (a) Global path planner without considering local obstacles; (b) Added and detected obstacles on the Global path; (c) Correction of Global path to avoid obstacles.
Figure 23. (a) Path planning in linear X direction; (b) path planning in linear Y direction; (c) path planning in both linear X and Y direction.
Table 1. Robot's kinematic model variables and definitions.
Variable | Definition
$\dot{Y}_r$ | Instantaneous longitudinal velocity of the robot
$\dot{X}_r$ | Instantaneous lateral velocity of the robot
$\dot{\theta}_r$ | Angular velocity of the robot
$\dot{Y}_{Wi}$ | Instantaneous longitudinal velocity of the i-th wheel
$\dot{X}_{Wi}$ | Instantaneous lateral velocity of the i-th wheel
$\dot{\theta}_{Wheel}$ | Angular velocity of the wheel about the $X_{Wi}$ axis (pitch axis)
$\dot{\theta}_{Rot}$ | Angular velocity of the wheel about the $Z_{Wi}$ axis (yaw axis)
$\dot{\theta}_{Roller}$ | Angular velocity of the roller when it contacts the ground
$\gamma_i$ | Rotation angle between the i-th wheel frame and the roller frame
$\alpha_i$ | Angle between the robot main frame and the i-th wheel frame
$L_w$ | Distance between the robot coordinate frame and the i-th wheel along the x-axis
$L_l$ | Distance between the robot coordinate frame and the i-th wheel along the y-axis
$R$ | Wheel radius
$r$ | Roller radius
CoM | Center of mass of the robot
Table 2. Wheel and roller angular parameters and their values.
Symbol | Wheel 1 | Wheel 2 | Wheel 3 | Wheel 4
$\alpha_i$ | $\pi/6$ | $5\pi/6$ | $7\pi/6$ | $11\pi/6$
$\gamma_i$ | $-\pi/4$ | $\pi/4$ | $-\pi/4$ | $\pi/4$
(Note: $L\cos\alpha_i = L_w$; $L\sin\alpha_i = L_l$.)
Table 3. Lower-level control system variables and definitions.
Variable | Definition
$J_n$ | Moment of inertia (0.0073969)
$B_n$ | Friction constant (0.43571)
$s$ | Laplace transform variable
$\omega_{fb}$ | Feedback bandwidth
$C_{fb}$ | Feedback controller
$C_{ff}$ | Feedforward controller
$\omega_{ff}$ | Feedforward bandwidth
$\zeta$ | Damping ratio
$\omega_Q$ | Q-filter bandwidth