Article

The Hybrid Position/Force Walking Robot Control Using Extenics Theory and Neutrosophic Logic Decision

by Ionel-Alexandru Gal, Alexandra-Cătălina Ciocîrlan and Luige Vlădăreanu *
Institute of Solid Mechanics of the Romanian Academy, 15 C. Mille, 010141 Bucharest, Romania
* Author to whom correspondence should be addressed.
Sensors 2022, 22(10), 3663; https://doi.org/10.3390/s22103663
Submission received: 18 March 2022 / Revised: 7 May 2022 / Accepted: 9 May 2022 / Published: 11 May 2022
(This article belongs to the Special Issue Advanced Intelligent Control in Robots)

Abstract

This paper presents a hybrid force/position control. We developed it for a hexapod walking robot built from multiple bipedal robot modules to increase its load capacity. The control method integrated Extenics theory with neutrosophic logic to obtain a two-stage decision-making algorithm. The first stage was an offline qualitative decision applying Extenics theory, and the second was a real-time decision process using neutrosophic logic and DSmT theory. The two-stage algorithm separated the control phases into a kinematic control method that used a PID regulator and a dynamic control method developed with the help of sliding mode control (SMC). By integrating both control methods, separated by a dynamic switching algorithm, we obtained a hybrid force/position control that took advantage of both kinematic and dynamic control properties to drive a mobile walking robot. The experimental and predicted results were in good agreement. They indicated that the proposed hybrid control is efficient in using the two-stage decision algorithm to drive the hexapod robot motors using kinematic and dynamic control methods. The experiment presents the robot's foot positioning error while walking. The results show how the switching method alters the system precision during the pendulum phase compared to the weight support phase, which can better compensate for the robot's dynamic parameters. The proposed switching algorithm directly influences the overall control precision, and we aimed to obtain a fast switch with a lower impact on the control parameters. The results show the error on all axes and break it down into walking stages to better understand the control behavior and precision.

1. Introduction

Worldwide, practical robot applications are diversifying more and more in the new world of robotics, automation, and artificial intelligence [1]. Researchers and engineers are working on developing solutions and solving problems for all kinds of robots. This research will enhance human motion and workspace investigation through different sensors and automate tasks for unattended robots [2]. Into this category falls every type of robot control method that can improve robot control and behavior, with or without autonomous capabilities [3]. For a robot to be capable of accomplishing a designated task, it must reject most of the uncertainties and disturbances within the work environment. The robot must also handle information from several sensors and fuse the data to reach a decision close to the truth value.
Most mobile robots combine kinematic and dynamic control methods to solve such a problem, each designated for certain joints in the robot structure. However, for a highly versatile robot structure, a hybrid position/force method is used. Although the technique is not new, having begun with Raibert and Craig [4], it still attracts attention in robot control research and continues to bring adaptability to the robots that use it. In recent years, different approaches have been researched. Zhang et al. [5] created a hybrid control method that could adjust the joint dynamic parameters online. The process allowed rough modeling of the robot parameters and left the fine-tuning to the control algorithm. A similar approach used a neural network to autocalibrate the control parameters [6]. Still, not all hybrid methods can use mechanical parameters, because they increase the system's complexity. For a robot with a simple structure, for which the kinematic or dynamic equations are easier to define, a classic approach to hybrid control can be easier to implement. Some examples include the hybrid method with impedance control [7] or even the backstepping method with a Hamilton controller [8]. Applications of the hybrid control method can be found in all types of robots, from a robot used for mechanical tests [9] to an upper limb rehabilitation robot [10]. Because of its versatility, the hybrid position/force control was chosen as the primary control method for the walking robot.
As we know, hybrid control combines a kinematic control method for joints that do not require compensation of weight and inertia and a dynamic control method that can handle these parameters and reject environmental disturbances. The kinematic approach is used for positioning control of the robot and the dynamic approach for the force and torque control. Consequently, a classic proportional-integral-derivative (PID) regulator was chosen for the position control and a sliding mode control (SMC) method [11] for the force and torque control. The main reason for using an SMC method was its robustness in the presence of external disturbances and uncertainties. Many scientists have used the control method to improve industrial robot trajectory [12], mobile robot trajectory in dynamic environments [13], n-link serial manipulator control [14], balance control of a two-wheel robot [15], and even airplane fuselage inspection [16]. SMC is not a perfect control method, and it has drawbacks, one of which is the chattering effect it can introduce. New research is published every year [17] on eliminating the chattering effect in a general manner or for a specific robot structure or purpose.
Using multiple regulators or control methods on the same robot structure can separate the robot joints statically into two categories, starting with the design of the control law. However, this is not desirable if one needs to build a versatile robot. Hence, a real-time decision method must determine the degree of freedom (joints) controlled by each method. A combination of techniques and control methods was thus selected. The first one, Extenics [18] or extension logic [19], entails defining the control parameters and robot properties or abilities and is used by scientists to configure problem-solving algorithms [18] and even design toys for children with special needs [20].
For the proposed robot structure, Extenics helped ease the process of organizing the parameters of each control method and provided the offline means of solving potential conflicts, uncertainties, or mismatching of sensor data and regulators.
Neural networks [21,22] were considered for the decision method, but the process of training the network was too extensive for the proposed robot. Another possibility was using swarm optimization [23] to predict what the robot needs in terms of control methods. This also overcomplicated the control system, and it should be handled in future work. For the presented robot, neutrosophic logic [24] and the Dezert–Smarandache Theory (DSmT) [25] were chosen. DSmT combined with Extenics is used to manage decision making [20] and provides excellent results by combining the mapping process of Extension logic with the sensor fusion of DSmT in uncertain and contradictory conditions. As an extension of fuzzy logic, neutrosophic logic and DSmT have been used by researchers to develop applications [26] for aviation parking [27], multi-UAV surveillance [28], obstacle avoidance in unknown environments [29], and environmental detection and estimation [30].
A different approach in designing the decision algorithm of a hybrid system is using time triggers or an event-driven mechanism [31] with an event generation mechanism [32] to ensure control of the system at the precise times of important defined events. This approach is safe for robots in a known environment, but it can fail or make inconsistent decisions for robots moving inside unknown and unstructured environments.
To develop a mobile walking robot, one can reference many highly advanced robots, some designed by renowned institutes [33], that use dynamic control methods to provide stability and error rejection for the control architecture. While efficient mathematical solutions [34] are desired for a control law to give it low computational requirements, these can be difficult to obtain when the robot model is complex. Moreover, the dynamic control of any robot must overcome external disturbances [35] and reject any influence from other sources, including within the sensor information.
Here, we propose and describe a mobile walking robot hybrid position/force control that can be used within a group of linked robots. The research aimed to obtain a control algorithm and method using both kinematic and dynamic control methods and an intelligent switching method between them. As a result, several experiments were conducted to improve the performance of the developed hybrid control, taking advantage of the extension set and neutrosophic logic. The extension set and neutrosophic logic were used to enhance the decision making required by the hybrid control, the first as an offline set of characteristics to extend the system's definition, and the second as an online switching mechanism that works with uncertain and contradictory information. The resulting hybrid control using a two-stage decision algorithm was a robot control method that took advantage of the best properties of the kinematic and dynamic control laws while the robot was fulfilling its tasks in uncertain environments. The data fusion provided by the neutrosophic theory in contradictory or uncertain conditions improved the decision switching mechanism, while the overall reference tracking of the robot did not decrease. The computational requirements of the proposed hybrid control were reduced because of the kinematic control method when the robot did not require dynamic compensation.
The paper is divided into six main sections. Section 2 provides a visual description of the robot used in the experiments. Section 3 presents the offline decision using the extension set, while Section 4 presents the decision method based on neutrosophic logic that takes advantage of DSmT [25]. The hybrid control is presented in Section 5 with an in-depth analysis of the kinematic (Section 5.1) and SMC dynamic (Section 5.2) control methods. Section 6 and Section 7 contain the conducted experiments and simulations with the obtained results. In the end, Section 8 presents this paper’s conclusions.

2. System Description

Figure 1 presents the robot structure. The robot was a hexapod [36], and its design was selected to avoid stability problems. Future research will consider the stability of a single bipedal mobile walking robot. As can be seen, the robot platform was divided into three modules, resulting in a modular robot that could be further extended or reconfigured. Figure 1b presents the kinematic structure of the hexapod robot leg. Each leg had three degrees of freedom, ensuring the 3D positioning of the foot.
In the robot structure, the leg segments were built from rectangular aluminum bars with height, width, and length dimensions hi × wi × li and mass mi, where the cross-section dimensions were hi = 3 cm and wi = 3 cm. Table 1 presents the robot dimensions used in the testing and simulations presented throughout the paper.
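As a quick illustration of how these cross-section dimensions enter the dynamic model, the sketch below evaluates the rectangular-bar inertia formulas used later in Equation (20) of Section 5.2; the segment lengths and masses are assumed placeholder values, since Table 1 is not reproduced here.
h = 0.03;  w = 0.03;            % bar cross-section [m], as given above
l = [0.10; 0.50; 0.50];         % assumed segment lengths [m] (placeholders for Table 1)
m = [0.40; 1.20; 1.20];         % assumed segment masses [kg] (placeholders for Table 1)
Ix = m/12 .* (w^2 + h^2);       % inertia about the bar's longitudinal axis
Iy = m/12 .* (l.^2 + h^2);
Iz = m/12 .* (l.^2 + w^2);
disp([Ix Iy Iz])                % one row per leg segment, values in kg*m^2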

3. Extenics Theory and Extension Set Applied to Robots

Extenics is a scientific field that uses modeling and formal methods to extend elements or physical objects. The models and methods are then used to solve contradictory problems that cannot be solved in their defined form and conditions [37,38].
Because contradictory problems are omnipresent in any field, Extenics aims to define a set of methods that allows solving contradictory issues using virtual simulation with the help of computers.
The central parts of extension theory are the base element theory, extension set theory, and extension logic [39].
The fundamental element used to describe objects in extension theory is defined as:
$$M = (O_m, c_m, v_m)$$
where M is the matter element, Om is the object, cm is the characteristic, and vm is the measure. If the object has only one characteristic, the matter element is one-dimensional. If the object has more characteristics, however, the multidimensional matter element is defined as:
$$M = \begin{bmatrix} O_m, & c_{m1}, & v_{m1} \\ & c_{m2}, & v_{m2} \\ & \vdots & \vdots \\ & c_{mn}, & v_{mn} \end{bmatrix} = (O_m, C_m, V_m).$$
If a hybrid control is used, then a multidimension matter element is used, described by many characteristics specific to the chosen control methods.
For a hybrid position/force control, the following matter element can be defined:
$$R_0 = \begin{bmatrix} \text{Robot Control}, & \text{Control Type}, & \text{Hybrid Control} \\ & \text{Overall Computation Speed}, & \text{Good} \\ & \text{Reference Tracking Speed}, & \text{Very Good} \\ & \text{Reference Tracking Error}, & \text{Very Good} \\ & \text{Inertia Compensation}, & \text{Good} \\ & \text{Disturbance Rejection}, & \text{Good} \end{bmatrix}.$$
By using Extenics principle 2.2 [40], which Ren et al. [18] used to design low carbon products, the object from Equation (3) can be extended and decomposed into two matter elements for which O1 and O2 are defined as kinematic control and dynamic control, respectively. The two objects have the same characteristics as the primary object (called “hybrid control”) but with different values:
$$R_1 = \begin{bmatrix} \text{Robot Control}, & \text{Control Type}, & \{\text{Kinematic Control, PID Control}\} \\ & \text{Overall Computation Speed}, & \text{Very Good} \\ & \text{Reference Tracking Speed}, & \text{Very Good} \\ & \text{Reference Tracking Error}, & \text{Good} \\ & \text{Inertia Compensation}, & \text{Very Poor} \\ & \text{Disturbance Rejection}, & \text{Poor} \end{bmatrix},$$
$$R_2 = \begin{bmatrix} \text{Robot Control}, & \text{Control Type}, & \{\text{Dynamic Control, PID Sliding Control}\} \\ & \text{Overall Computation Speed}, & \text{Average} \\ & \text{Reference Tracking Speed}, & \text{Good} \\ & \text{Reference Tracking Error}, & \text{Very Good} \\ & \text{Inertia Compensation}, & \text{Very Good} \\ & \text{Disturbance Rejection}, & \text{Very Good} \end{bmatrix}.$$
As can be seen, the two matter elements from Equations (4) and (5) describe the control types, the kinematic and the dynamic, briefly. The matter elements are customized for control types, but numerous other features can be added for which the matter characteristics are different according to the desired control type.
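For readers who prefer a data-structure view, one possible way (an illustration, not the authors' implementation) to encode the matter elements of Equations (4) and (5) so that the offline stage can compare characteristics programmatically is:
% Illustrative encoding of the matter elements R1 and R2 as MATLAB structs;
% the qualitative measures are stored as strings, exactly as they appear above.
R1.object          = 'Robot Control';
R1.controlType     = {'Kinematic Control', 'PID Control'};
R1.characteristics = {'Overall Computation Speed', 'Reference Tracking Speed', ...
                      'Reference Tracking Error', 'Inertia Compensation', 'Disturbance Rejection'};
R1.measures        = {'Very Good', 'Very Good', 'Good', 'Very Poor', 'Poor'};
R2.object          = 'Robot Control';
R2.controlType     = {'Dynamic Control', 'PID Sliding Control'};
R2.characteristics = R1.characteristics;      % same characteristics, different measures
R2.measures        = {'Average', 'Good', 'Very Good', 'Very Good', 'Very Good'};
% Example query: which control type compensates inertia better?
idx = strcmp(R1.characteristics, 'Inertia Compensation');
fprintf('Kinematic: %s, Dynamic: %s\n', R1.measures{idx}, R2.measures{idx});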
The contradiction between the two control types is found using the matter element characterization. The kinematic type has a better computational speed for a real-time controller. Still, when a robot is subject to inertial forces, it has worse positioning error and tracking speed. On the other hand, the dynamic control method takes into consideration the inertial forces that act on a robot and has a better tracking error. However, although the tracking error is better, the tracking speed is worse. The overall computational speed is greatly diminished, owing to the many calculations inside the control loop.
As a simple reference trajectory, an ideal trajectory of the foot (Figure 2) was used. When a robot foot has the role of support (the unbroken line in Figure 2), precise control is needed that considers the weight and inertia of the robot, so the robot’s position does not oscillate on the vertical axis during the support phase. Additionally, the joints of a robot leg must complete or partially support the overall robot weight, including the other legs in the advancing stage. The dotted line in Figure 2 represents the leg balance trajectory when a robot takes a step for which the positioning is not required to be precise but must be fast and smooth. During the second motion phase, the leg joints support only their leg weight.
Knowing the gross characterization of the two types of trajectories that a robot foot takes, the problem of choosing the control type is solved by checking the properties of the two matter elements defined by Equations (4) and (5).
Thereby, to control the robot during the uniform pull and weight support phase, the matter element R2 was chosen. Its properties provided better precision in tracking the position reference and considered the robot’s inertia to compensate for the robot’s weight and inertial motion forces.
When the robot foot must follow a curve in space during the advancing phase, the robot’s weight was not supported by it, so we used matter element R1. The R1 properties better corresponded to the robot motion criteria. The kinematic controller was used when the robot needed to make a forward or reverse motion that was not in direct contact with the support plane. High positioning precision was not required for the advancing movement, only a faster speed to position the foot as quickly as possible on the next support point.
One of the properties of the two matter elements is “overall computation speed,” which indicates how many mathematical operations are needed to compute the actual reference for each separate joint. Therefore, a kinematic controller is much more efficient in computational requirements, performing fewer mathematical operations than a dynamic one. Results have indicated that a kinematic controller supplies not only a better tracking speed but also minimal resource consumption.
When defining and separating the control type that is best for the job, a real-time switching method was required. Thus, with the help of neutrosophic logic, the robot leg phase was determined, based not on reference values but on sensor information. According to the data calculated by Extenics theory on which type of control law to use, a hybrid control could be obtained to resolve the transition problem between kinematic and dynamic control laws. As an offline result, it could be reiterated for future datasets to enhance control properties or add a second layer of details and properties.

4. Neutrosophic Logic in Robot Control

As defined in [41], neutrosophic logic is the foundation of neutrosophic mathematics. Neutrosophic logic works with neutrosophic sets that generalize fuzzy sets and describe neutrosophic elements. The elements are based on <A>, <anti A>, and <neutral A>, where <A> is an attribute, <anti A> is the opposite of the attribute, and <neutral A> is the neutral area between <A> and <anti A>.
In neutrosophic logic, every affirmation Af is T% true, I% undetermined (uncertain), or F% false. Therefore, we can say Af (T, I, F), where T, I, and F are standard or non-standard subsets of the non-standard interval $]^-0, 1^+[$ [41].
If U is the work universe and M is a set included in U, then one element x from U is written as x (T, I, F) according to set M and belongs to the same set in the following way: element x is t% true in set M; element x is i% undetermined in set M (either true or false); and element x is f% false in set M. The value of t varies in T, i varies in I, and f varies in F [42,43].
As described in the current paper, the robot control diagram presented used both kinematic and dynamic elements. At a specific time, the robot used only one of the control methods to maximize and optimize computing and motion speed or the positioning error. A precision element was needed to switch between the two control types. Using Extenics and extension theory, the contradictory elements were defined to separate the two control types between which the decision algorithm switched using neutrosophic theory.
The classic neutrosophic theory [25] chooses between the two control methods. The general equation is presented in Relation (6) and defines the generalized basic belief assignment:
$$m(C) = \sum_{\substack{A, B \in D^{\Theta} \\ A \cap B = C}} m_1(A)\, m_2(B), \quad \forall C \in D^{\Theta}$$
where $D^{\Theta}$ is the hyper-power set built on the frame $\Theta = \{\theta_1, \theta_2, \ldots, \theta_n\}$ of n exhaustive elements and $A, B \in 2^{\Theta}$. The basic belief assignment is $m(\cdot): 2^{\Theta} \rightarrow [0, 1]$, where $2^{\Theta} = \{\emptyset, \theta_1, \theta_2, \theta_3, \theta_1 \cup \theta_2, \theta_1 \cup \theta_3, \theta_2 \cup \theta_3, \theta_1 \cup \theta_2 \cup \theta_3\}$ when $\Theta = \{\theta_1, \theta_2, \theta_3\}$.
In the case of the presented robot, two belief assignments were assigned to the two observers. The two observers were the force and proximity sensors that must determine which type of control was required at one time.
The experimental data for the two observers are presented in Table 2. These values divided the sensors’ measurement interval in a decision percentage, where the force sensor was more likely to decide on a dynamic control (75%) than the proximity sensor (65%). The rate was reversed for the kinematic control and was the same for the uncertain interval.
These values meant that, by using Equation (6) and the data received from the sensors, it could be determined with a certain approximation whether the robot was in contact with the support surface, and a decision was made on whether to switch from one control type to another. The kinematic control type was used in the foot balancing phase and the dynamic control type in the support phase. The decision was made between the two contradictory objects, defined with the help of Extenics and extension theory.
Table 3 presents the cases in which the neutrosophic values m1(θD), m1(θC), m2(θD), m2(θC), m1(θD ∪ θC), or m2(θD ∪ θC) can be found in any combination for A and B corresponding to Equation (6), meaning that A ∩ B = C. The results were obtained using Equation (6) and represent the neutrosophic probabilistic values of truth (certainty of a valid value), falsity (certainty of a false value), uncertainty (the unknown state between two possible outcomes), and contradiction (two observers provide contradictory information with high certainty for both).
The values from Table 3 were combined to compute Equation (7).
$$\begin{aligned}
m(\emptyset) &= 0\\
m(\theta_D) &= m_1(\theta_D)\, m_2(\theta_D \cup \theta_C) + m_1(\theta_D \cup \theta_C)\, m_2(\theta_D) + m_1(\theta_D)\, m_2(\theta_D) = 0.5575\\
m(\theta_C) &= m_1(\theta_C)\, m_2(\theta_D \cup \theta_C) + m_1(\theta_D \cup \theta_C)\, m_2(\theta_C) + m_1(\theta_C)\, m_2(\theta_C) = 0.085\\
m(\theta_D \cup \theta_C) &= m_1(\theta_D \cup \theta_C)\, m_2(\theta_D \cup \theta_C) = 0.0025\\
m(\theta_D \cap \theta_C) &= m_1(\theta_D)\, m_2(\theta_C) + m_1(\theta_C)\, m_2(\theta_D) = 0.355
\end{aligned}$$
where m(θD) and m(θC) are the probabilistic values of certainty for choosing a certain control law, m(θD ∪ θC) is the probabilistic uncertainty value of the two sensors, and m(θD ∩ θC) is the probabilistic contradiction value between the two sensors. As a check, when all five values are added, their sum must be equal to 1 (100%): m(∅) + m(θD) + m(θC) + m(θD ∪ θC) + m(θD ∩ θC) = 1.
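To make the combination rule concrete, the sketch below reproduces the computation of Equation (7) for the two observers. The basic belief assignments used (0.75/0.20/0.05 for the force sensor and 0.65/0.30/0.05 for the proximity sensor) are inferred from the percentages quoted above and from the results of Equation (7); they should be read as an illustration of Table 2, not as a copy of it.
% DSmT combination of the two observers (Equation (6)) over the frame
% {thetaD (dynamic), thetaC (kinematic)}; bba order: [m(thetaD) m(thetaC) m(thetaD u thetaC)]
m1 = [0.75 0.20 0.05];   % force sensor (inferred values)
m2 = [0.65 0.30 0.05];   % proximity sensor (inferred values)
mD   = m1(1)*m2(3) + m1(3)*m2(1) + m1(1)*m2(1);   % m(thetaD)
mC   = m1(2)*m2(3) + m1(3)*m2(2) + m1(2)*m2(2);   % m(thetaC)
mDuC = m1(3)*m2(3);                               % m(thetaD u thetaC), uncertainty
mDnC = m1(1)*m2(2) + m1(2)*m2(1);                 % m(thetaD n thetaC), contradiction
fprintf('m(D)=%.4f  m(C)=%.4f  m(DuC)=%.4f  m(DnC)=%.4f  sum=%.4f\n', ...
        mD, mC, mDuC, mDnC, mD + mC + mDuC + mDnC);
% expected output: 0.5575, 0.0850, 0.0025, 0.3550, summing to 1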
Using the computed values, each observer’s decision (force and proximity sensor) had a certain probability that each of the two control systems required to control the robot.
Table 4 presents all cases presented in Figure 3a,b for the force sensor and proximity sensor where X and α were defined according to the sensor type.
For the last two cases in Table 4, C = θD ∪ θC or C = θD ∩ θC, the decision is affected by uncertainty or by a contradiction in the sensor values. In the case of uncertainty, the control type running at that time can be kept; in the case of contradiction between the sensor data, a decision must still be made on which control should be used. Because the contradiction can appear only under specific conditions, the initial decision was to handle it in the same way as uncertainty, keeping the control type already in use.
One exceptional, yet typical, case is the robot stepping on very uneven ground. The force sensor indicates that the foot is on the floor, but the proximity sensor does not provide the same conclusion, since it reads a value greater than the reference threshold. In this case, the decision should be to switch to the dynamic controller. On the other hand, if kinematic control is used and the foot is subject to external factors, the force sensor records high peak values over short periods, leading to a chattering effect. The algorithm therefore switched from kinematic to dynamic control in these uncertain or contradictory cases, but, to prevent additional chattering, only after the force sensor retained its contradictory value for a minimum Δt time interval. The time threshold provided a precise control law on uneven terrain and in contradictory cases between the input sensors and observers.
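One possible reading of this switching rule is sketched below; it is an assumption for illustration, not the authors' exact implementation: clear dominance of m(θD) or m(θC) switches immediately, uncertainty keeps the running law, and a contradiction switches to the dynamic law only after it has persisted for at least Δt seconds.
% Switching decision with a debounce on contradictory readings (illustrative sketch)
law = 'kinematic';  contrad_time = 0;  dt_min = 0.05;  dt = 1e-3;   % dt_min plays the role of the Δt threshold
mD = 0.5575;  mC = 0.085;  mDuC = 0.0025;  mDnC = 0.355;            % masses from Equation (7)
[~, idx] = max([mD, mC, mDuC, mDnC]);
if idx == 1
    law = 'dynamic';   contrad_time = 0;     % dynamic state dominates
elseif idx == 2
    law = 'kinematic'; contrad_time = 0;     % kinematic state dominates
elseif idx == 3
    contrad_time = 0;                        % uncertainty: keep the running law
else
    contrad_time = contrad_time + dt;        % contradiction: wait for persistence
    if contrad_time >= dt_min
        law = 'dynamic';
    end
end
disp(law)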
A supplementary condition was required in addition to the selected requirements for the control type. The condition was bound to the way the robot moves. Because the dynamic control was slower in compensating for high errors and was not needed at stationary points, the control law based on robot kinematics was chosen in those situations to save computing time.
One could argue that the switching control law is unnecessary and uses simple triggers that act as switching mechanisms. However, a simple control switch cannot decide between options when the information received is inaccurate, which is one of the main reasons the neutrosophic switching mechanism was chosen and used.

5. The Walking Robot Leg Control Architecture

To control a walking robot, one has to design a control law for each leg, and the control has many walking phases that depend directly on the reference signal of the foot. Therefore, the design of a general control law is needed to control foot position and the motor’s torque according to the computed reference and to use the sensor signal (force and proximity) for environmental interaction and detection.
Figure 4 presents the general control diagram for one leg of the walking robot. The graph contains a reference generation block to generate the foot trajectory using detailed data chosen to test the control law.
The reference generation was made in the operational space, and the data were converted to the joint space from the operational space by using inverse kinematics. An inverse kinematics algorithm based on the Jacobian transpose was used and is presented in Equation (8). Compared to other algorithms, it provides a reference speed for the leg joint motors and not the angular position reference.
$$\begin{aligned}
1.\ & \Delta e = e_{goal} - e_{real}\\
2.\ & JJ^Tde = J\, J^T\, \Delta e\\
3.\ & \alpha = \frac{\Delta e^T \cdot JJ^Tde}{JJ^Tde^T \cdot JJ^Tde}\\
4.\ & \Delta\theta = \left(\alpha\, J^T\, \Delta e\right)^T.
\end{aligned}$$
The speed reference value cannot be used to control the robot joints by the dynamic controller, because the dynamic controller needs the angular reference for all the degrees of freedom it controls; this is the reason why the angular reference values for each joint were also computed from the foot position. The equations are:
$$\begin{aligned}
q_1 &= \arctan\left(\frac{M_x}{M_y}\right)\\
q_2 &= 2\arctan\left(\frac{\sin q_2}{\sqrt{\sin^2 q_2 + \cos^2 q_2} + \cos q_2}\right)\\
q_3 &= \arctan\left(\frac{\sin q_3}{\cos q_3}\right)
\end{aligned}$$
where the sine and cosine values are given by
$$\begin{aligned}
\cos q_2 &= \frac{(M_z - l_1)(l_2 + l_3\cos q_3) + M_x\, l_3 \sin q_3}{(M_z - l_1)^2 + M_x^2 + M_y^2}, & \sin q_2 &= \sqrt{1 - \cos^2 q_2},\\
\cos q_3 &= \frac{(M_z - l_1)^2 + M_x^2 + M_y^2 - l_2^2 - l_3^2}{2\, l_2\, l_3}, & \sin q_3 &= \sqrt{1 - \cos^2 q_3}.
\end{aligned}$$
For Equation (9) to be valid and to condition the leg posture, additional conditions were added:
$$\begin{aligned}
1&: \text{if } M_y = 0 \text{ then } q_1 = 0\\
2&: q_1 \in \left(-\tfrac{\pi}{2}, \tfrac{\pi}{2}\right)\\
3&: q_3 \geq 0.
\end{aligned}$$
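A minimal numerical sketch of this closed-form solution is given below. It is an illustration under assumptions rather than the authors' code: q1 is taken with atan2 from the direct kinematics of the leg (Equation (14) in Section 5.1), the cos q2 expression uses the radial distance sqrt(Mx^2 + My^2), and the q3 >= 0 branch of Equation (11) is selected. The result is checked by substituting back into the direct kinematics.
l1 = 0.10;  l2 = 0.50;  l3 = 0.50;                 % assumed segment lengths [m]
qtrue = [0.20; 0.60; 0.80];                        % assumed joint angles [rad], q2 and q3 >= 0
% Direct kinematics of the leg (Equation (14) in Section 5.1)
fk = @(q) [cos(q(1))*(l2*sin(q(2)) + l3*sin(q(2)+q(3)));
           sin(q(1))*(l2*sin(q(2)) + l3*sin(q(2)+q(3)));
           l1 + l2*cos(q(2)) + l3*cos(q(2)+q(3))];
M = fk(qtrue);  Mx = M(1);  My = M(2);  Mz = M(3);
% Closed-form inverse kinematics (law-of-cosines step of Equation (10))
q1 = atan2(My, Mx);                                % base rotation from the foot projection
r2 = (Mz - l1)^2 + Mx^2 + My^2;                    % squared distance from joint 2 to the foot
c3 = (r2 - l2^2 - l3^2) / (2*l2*l3);   s3 = sqrt(1 - c3^2);   % q3 >= 0 branch
q3 = atan2(s3, c3);
c2 = ((Mz - l1)*(l2 + l3*c3) + sqrt(Mx^2 + My^2)*l3*s3) / r2;
s2 = sqrt(1 - c2^2);                                          % q2 >= 0 branch
q2 = atan2(s2, c2);
disp(norm(fk([q1; q2; q3]) - M))                   % round-trip error, should be close to zero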
The two sensors' data (proximity and force) were used as input signals for the neutrosophic decision block. Because generated information was used, the two sensors were simulated. The proximity sensor was therefore modeled as a function of the calculated distance from the foot to the support surface, considered a plane, to which a sinusoidal signal was added to reproduce the measurement error of the sensor. Regarding the force sensor, the foot–ground interaction was simulated using the system from Figure 5. The simulation was achieved with the help of a damper and a spring.
The equation used for contact modeling and for determining the ground reaction force was the classical one:
$$F_{tot} = -(k x + c\dot{x})$$
where k and c are the constants of the spring and damper, respectively.
Having the two parameters, reaction force and proximity distance, as inputs for the decision method, the two control methods are defined in the following sections.
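As a small illustration of the contact model of Equation (12), the sketch below (with assumed spring and damper constants, since the paper's values are not listed here) generates a ground reaction force only while the simulated foot is below the support plane.
% Penalty contact model of Equation (12), with assumed parameter values
k = 2e4;  c = 150;                       % assumed spring [N/m] and damper [N*s/m] constants
dt = 1e-3;  t = 0:dt:0.2;
z = -0.005 + 0.005*cos(2*pi*5*t);        % assumed vertical foot motion around the plane [m]
zdot = gradient(z, dt);
Fground = zeros(size(t));
pen = z < 0;                             % contact only while the foot is below the plane
Fground(pen) = -(k*z(pen) + c*zdot(pen));
Fground(Fground < 0) = 0;                % the ground can only push, never pull
plot(t, Fground);  xlabel('t [s]');  ylabel('F_{tot} [N]');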

5.1. The Kinematic Control Method

This method used the data provided by the computing algorithm of inverse kinematics (Figure 6) and fed the output to the PI (proportional-integral) regulator that drove the robot joint motors.
As previously described in the Extenics analysis, this control method requires fewer calculations. Still, the positioning error is not the best, because the inverse kinematics method does not consider the inertial forces that the robot experiences during the actual motion.
The main component of the controller is the Jacobian matrix:
$$J = \begin{bmatrix} -s_1(l_2 s_2 + l_3 s_{23}) & c_1(l_2 c_2 + l_3 c_{23}) & l_3 c_1 c_{23} \\ c_1(l_2 s_2 + l_3 s_{23}) & s_1(l_2 c_2 + l_3 c_{23}) & l_3 s_1 c_{23} \\ 0 & -l_2 s_2 - l_3 s_{23} & -l_3 s_{23} \end{bmatrix}$$
where si = sin(θi), ci = cos(θi), sij = sin(θi + θj), and cij = cos(θi + θj).
The matrix was computed from the direct kinematics equations and was used to find the foot position in the operational space:
$$M(x, y, z) = \begin{bmatrix} c_1(l_2 s_2 + l_3 s_{23}) \\ s_1(l_2 s_2 + l_3 s_{23}) \\ l_1 + l_2 c_2 + l_3 c_{23} \end{bmatrix}.$$
The entire kinematic control loop was based on the Jacobian matrix. First, the Jacobian was evaluated at the actual angular joint positions. Its transpose was then applied to the operational-space reference position error, and the positioning correction Δθ was obtained.
The positioning error was sent to two PI (proportional-integral) feedback control loops controlling the angular speed and the motor torque. Thereby, the torque control for each joint was obtained, and the switch from one controller to another (from the kinematic control to the dynamic one, and vice versa) was easier since both used torque to control the robot joints.
The transpose Jacobian method is not new and is based on using the transpose matrix of the Jacobian instead of the inverse matrix. Therefore, the joint speed correction $\Delta\dot{q}$ was computed using Equation (15):
$$\Delta\dot{q} = \alpha J^T e$$
for specific values of constant α.
The transpose-Jacobian-matrix-based algorithm presented in Equation (8) eliminated stability problems. The algorithm was also chosen because it had a higher computation speed than the control values of other algorithms, even if the computed values were not as precise as the inverse-Jacobian-matrix-based method [44].
Because the method of solving the inverse kinematics problem uses the Jacobian matrix, the final results are always formed by angular speeds that the robot joints must follow. Therefore, the control is suitable for PI and PID regulators and for controlling angular velocities. The downside is that the method cannot be used to compute a dynamic control reference since it needs a precise joint angular value. In contrast, if the values given by the Jacobian-based inverse kinematic problem are integrated, the result is not as accurate as is required.
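Since the positioning error is ultimately turned into a torque command through PI loops, one possible discrete-time reading of this chain is sketched below; it is an illustration with assumed gains and placeholder inputs, not the authors' Simulink implementation.
% Transpose-Jacobian speed reference (Equation (15)) feeding a PI angular-speed loop
Kp = 8;  Ki = 20;  dt = 1e-3;  alpha = 0.05;   % assumed gains, sample time, and step size
int_err = zeros(3,1);                          % integral of the speed error, one value per joint
% Placeholder inputs (in the real model these come from the Jacobian block,
% the reference generator, and the joint encoders):
J          = eye(3);                           % assumed Jacobian at the current pose
pos_err    = [0.02; 0.00; -0.01];              % operational-space position error [m]
omega_meas = zeros(3,1);                       % measured joint angular speeds [rad/s]
qd_dot  = alpha * J' * pos_err;                % Equation (15): joint speed reference
w_err   = qd_dot - omega_meas;                 % angular speed error
int_err = int_err + w_err * dt;
tau_cmd = Kp * w_err + Ki * int_err;           % torque command for the joint motors
disp(tau_cmd.')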

5.2. The Dynamic Control Method

The dynamic control method used the same reference data as the kinematic one. Nevertheless, it computed the torque reference of the motors considering kinematic parameters, the inertial ones provided by the inertia matrix, and the Coriolis and gravity force effects supplied by the Coriolis and gravity matrices.
Figure 7 presents the dynamic control diagram. The most critical control blocks are shown, including those that compute the inertial parameters and values used by the slide control block. The three control blocks that formed the dynamic controller from Figure 7 were the PID error controller, the fuzzy controller, and the slide control. The first block passed the positioning error through a PID controller so that the control method could consider the error variations. Using the PID error controller data, the fuzzy amplification was obtained through the membership functions presented in Figure 8a,b. The command torque for each motor joint could be calculated after computing the fuzzy gain, using the inertial data and the reference values [45].
All the control values were computed from the presented robot structure, characteristics, structural weights, and measurements.
For Figure 8a,b, the abbreviations are N = Negative, P = Positive, ZE = Zero, S = Small, M = Medium, B = Big, and V = Very. For the membership functions, Table 5 presents the values of the membership parameters from which the gain value Kfuzzy could be chosen. The membership functions that provided the gain were selected according to the values of the two parameters s and $\dot{s}$, where s represents the error passed through the PID error controller and $\dot{s}$ is its derivative. A constant gain for each case, which would lead to a standard step-type fuzzy controller, was not desired, so a function-based one was selected.
Using Table 5 data, the parabola in Figure 9 was considered for computing the Kfuzzy gain, according to the two inputs s and s ˙ . The parabola equation was computed from Equation (16):
$$y(x) = 2x^2 + 50,$$
and we modified it to introduce the fuzzy parameters:
$$K_{fuzzy}(s, \dot{s}) = 2(\dot{s} - 10s)^2 + 50.$$
Equation (17) now provides the Kfuzzy parameter in the dynamic control.
The sliding control was made with the help of the slide control block (Figure 7). The control type was inspired by Shafiei [11] and modified to match the robot kinematic structure used, a design with three degrees of freedom instead of the two used by Shafiei [11]. Following that, the dynamic equations that allowed the dynamic controller's development are presented.
The basic dynamic control equation was:
$$H(q)\ddot{q} + C(q, \dot{q})\dot{q} + G(q) + \tau_d = \tau.$$
From Equation (18), the signal for motor torque control was calculated. All the parameters from Equation (18) must be known. The values to be determined are the torque τ and the matrices H (inertial parameters), C (Coriolis and centrifugal forces), and G (gravity effect), which are given by the following equations, in which the joint-space angles θ are identical to the generalized coordinates q used below:
$$H = M = \begin{bmatrix} M_{11} & M_{12} & M_{13} \\ M_{21} & M_{22} & M_{23} \\ M_{31} & M_{32} & M_{33} \end{bmatrix}$$
where M is the inertial parameters matrix.
The inertial matrix parameters can be computed using the following:
$$\begin{aligned}
T(\theta, \dot{\theta}) &= \tfrac{1}{2}\dot{q}^T M \dot{q} = \tfrac{1}{2}\sum_{i,j} M_{ij}(q)\,\dot{q}_i \dot{q}_j \geq 0,\\
T(\theta, \dot{\theta}) &= \tfrac{1}{2} m_1(\dot{\bar{x}}_1^2 + \dot{\bar{y}}_1^2 + \dot{\bar{z}}_1^2) + \tfrac{1}{2} m_2(\dot{\bar{x}}_2^2 + \dot{\bar{y}}_2^2 + \dot{\bar{z}}_2^2) + \tfrac{1}{2} m_3(\dot{\bar{x}}_3^2 + \dot{\bar{y}}_3^2 + \dot{\bar{z}}_3^2)\\
&\quad + \tfrac{1}{2} I_{z1}\dot{\theta}_1^2 + \tfrac{1}{2} I_{z2}(\dot{\theta}_1 + \dot{\theta}_2)^2 + \tfrac{1}{2} I_{z3}(\dot{\theta}_1 + \dot{\theta}_2 + \dot{\theta}_3)^2,\\[4pt]
\bar{x}_1 &= 0, \quad \dot{\bar{x}}_1 = 0, \quad \bar{y}_1 = 0, \quad \dot{\bar{y}}_1 = 0, \quad \bar{z}_1 = r_1, \quad \dot{\bar{z}}_1 = 0,\\
\bar{x}_2 &= r_2 \sin q_2 \cos q_1, \quad \dot{\bar{x}}_2 = -r_2 \sin q_1 \sin q_2\, \dot{q}_1 + r_2 \cos q_1 \cos q_2\, \dot{q}_2,\\
\bar{y}_2 &= r_2 \sin q_2 \sin q_1, \quad \dot{\bar{y}}_2 = r_2 \cos q_1 \sin q_2\, \dot{q}_1 + r_2 \sin q_1 \cos q_2\, \dot{q}_2,\\
\bar{z}_2 &= l_1 + r_2 \cos q_2, \quad \dot{\bar{z}}_2 = -r_2 \sin q_2\, \dot{q}_2,\\
\bar{x}_3 &= \cos q_1 (l_2 \sin q_2 + r_3 \sin(q_2 + q_3)),\\
\bar{y}_3 &= \sin q_1 (l_2 \sin q_2 + r_3 \sin(q_2 + q_3)),\\
\bar{z}_3 &= l_1 + l_2 \cos q_2 + r_3 \cos(q_2 + q_3),\\
\dot{\bar{x}}_3 &= -\sin q_1 (l_2 \sin q_2 + r_3 \sin(q_2 + q_3))\dot{q}_1 + \cos q_1 (l_2 \cos q_2 + r_3 \cos(q_2 + q_3))\dot{q}_2 + r_3 \cos q_1 \cos(q_2 + q_3)\dot{q}_3,\\
\dot{\bar{y}}_3 &= \cos q_1 (l_2 \sin q_2 + r_3 \sin(q_2 + q_3))\dot{q}_1 + \sin q_1 (l_2 \cos q_2 + r_3 \cos(q_2 + q_3))\dot{q}_2 + r_3 \sin q_1 \cos(q_2 + q_3)\dot{q}_3,\\
\dot{\bar{z}}_3 &= -(l_2 \sin q_2 + r_3 \sin(q_2 + q_3))\dot{q}_2 - r_3 \sin(q_2 + q_3)\dot{q}_3,\\
I_{xi} &= \tfrac{m_i}{12}(w_i^2 + h_i^2), \quad I_{yi} = \tfrac{m_i}{12}(l_i^2 + h_i^2), \quad I_{zi} = \tfrac{m_i}{12}(l_i^2 + w_i^2)
\end{aligned}$$
where $\bar{x}_i, \bar{y}_i, \bar{z}_i$ are the coordinates of the center of mass of each leg element, $\dot{\bar{x}}_i, \dot{\bar{y}}_i, \dot{\bar{z}}_i$ are their first derivatives, and Ixi, Iyi, and Izi represent the inertia tensor components of each leg element.
Using these parameters, the inertia matrix elements were computed with the following equations:
$$\begin{aligned}
M_{11} &= r_2^2 m_2 + r_3^2 m_3 + l_2^2 m_3 + \tfrac{1}{12} l_1^2 m_1 + \tfrac{1}{12} w^2 m_1 + \tfrac{1}{12} l_2^2 m_2 + \tfrac{1}{12} w^2 m_2 + \tfrac{1}{12} l_3^2 m_3 + \tfrac{1}{12} w^2 m_3\\
&\quad + 2 l_2 r_3 m_3 \sin q_2 \sin(q_2 + q_3) - \cos^2(q_2)\,(l_2^2 m_3 + r_2^2 m_2) - r_3^2 m_3 \cos^2(q_2 + q_3),\\
M_{12} &= \tfrac{1}{6} l_2^2 m_2 + \tfrac{1}{6} w^2 m_2, \qquad M_{21} = \tfrac{1}{6} w^2 m_3 + \tfrac{1}{6} l_3^2 m_3,\\
M_{22} &= r_2^2 m_2 + l_2^2 m_3 + r_3^2 m_3 + \tfrac{1}{12} w^2 m_2 + \tfrac{1}{12} l_2^2 m_2 + \tfrac{1}{12} l_3^2 m_3 + \tfrac{1}{12} w^2 m_3\\
&\quad + 2 l_2 r_3 m_3 \left[\sin q_2 \sin(q_2 + q_3) + \cos q_2 \cos(q_2 + q_3)\right],\\
M_{23} &= 2 l_2 r_3 m_3 \left[\sin q_2 \sin(q_2 + q_3) + \cos q_2 \cos(q_2 + q_3)\right],\\
M_{31} &= \tfrac{1}{6} l_3^2 m_3, \qquad M_{32} = \tfrac{1}{6} w^2 m_3 + \tfrac{1}{6} l_3^2 m_3 + 2 r_3^2 m_3,\\
M_{33} &= r_3^2 m_3 + \tfrac{1}{12} l_3^2 m_3 + \tfrac{1}{12} w^2 m_3.
\end{aligned}$$
The Coriolis matrix was computed using the following equation:
$$C_{ij}(q, \dot{q}) = \sum_{k=1}^{3}\Gamma_{ijk}\,\dot{q}_k = \frac{1}{2}\sum_{k=1}^{3}\left(\frac{\partial M_{ij}}{\partial q_k} + \frac{\partial M_{ik}}{\partial q_j} - \frac{\partial M_{kj}}{\partial q_i}\right)\dot{q}_k$$
for which the Γijk parameters are:
$$\begin{aligned}
\Gamma_{111} &= \Gamma_{122} = \Gamma_{123} = \Gamma_{132} = \Gamma_{133} = \Gamma_{212} = \Gamma_{213} = \Gamma_{221} = \Gamma_{222} = \Gamma_{231}\\
&= \Gamma_{312} = \Gamma_{313} = \Gamma_{321} = \Gamma_{323} = \Gamma_{331} = \Gamma_{333} = 0,\\
\Gamma_{112} &= \big(2 l_2 m_3 r_3(\sin q_2 \cos(q_2+q_3) + \cos q_2 \sin(q_2+q_3)) + 2 l_2^2 m_3 \cos q_2 \sin q_2\\
&\quad + 2 m_2 r_2^2 \cos q_2 \sin q_2 + 2 m_3 r_3^2 \cos(q_2+q_3)\sin(q_2+q_3)\big)\dot{q}_2,\\
\Gamma_{113} &= \big(2 l_2 m_3 \sin q_2 \cos(q_2+q_3) + 2 m_3 r_3^2 \cos(q_2+q_3)\sin(q_2+q_3)\big)\dot{q}_3,\\
\Gamma_{121} &= \big(2 l_2 m_3 r_3(\sin q_2 \cos(q_2+q_3) + \cos q_2 \sin(q_2+q_3)) + 2 l_2^2 m_3 \cos q_2 \sin q_2\\
&\quad + 2 m_2 r_2^2 \cos q_2 \sin q_2 + 2 m_3 r_3^2 \cos(q_2+q_3)\sin(q_2+q_3)\big)\dot{q}_1,\\
\Gamma_{131} &= \big(2 l_2 m_3 \sin q_2 \cos(q_2+q_3) + 2 m_3 r_3^2 \cos(q_2+q_3)\sin(q_2+q_3)\big)\dot{q}_1,\\
\Gamma_{211} &= \big(2 l_2 m_3 r_3(\sin q_2 \cos(q_2+q_3) + \cos q_2 \sin(q_2+q_3)) + 2 l_2^2 m_3 \cos q_2 \sin q_2\\
&\quad + 2 m_2 r_2^2 \cos q_2 \sin q_2 + 2 m_3 r_3^2 \cos(q_2+q_3)\sin(q_2+q_3)\big)\dot{q}_1,\\
\Gamma_{223} &= 2 l_2 m_3 r_3\big(\sin q_2 \cos(q_2+q_3) - \cos q_2 \sin(q_2+q_3)\big)\dot{q}_3,\\
\Gamma_{232} &= 2 l_2 m_3 r_3\big(\sin q_2 \cos(q_2+q_3) - \cos q_2 \sin(q_2+q_3)\big)\dot{q}_2,\\
\Gamma_{233} &= 4 l_2 m_3 r_3\big(\sin q_2 \cos(q_2+q_3) - \cos q_2 \sin(q_2+q_3)\big)\dot{q}_3,\\
\Gamma_{311} &= \big(2 l_2 m_3 \sin q_2 \cos(q_2+q_3) + 2 m_3 r_3^2 \cos(q_2+q_3)\sin(q_2+q_3)\big)\dot{q}_1,\\
\Gamma_{322} &= 2 l_2 m_3 r_3\big(\sin q_2 \cos(q_2+q_3) - \cos q_2 \sin(q_2+q_3)\big)\dot{q}_2,\\
\Gamma_{332} &= 2 l_2 m_3 r_3\big(\sin q_2 \cos(q_2+q_3) - \cos q_2 \sin(q_2+q_3)\big)\dot{q}_2.
\end{aligned}$$
The last part of the dynamic equation is given by Equation (24), which computed the gravity effect matrix on the robot leg:
$$G(q) = N(q, \dot{q}) = \frac{\partial U}{\partial q} = \begin{bmatrix} \dfrac{\partial U}{\partial q_1} \\[4pt] \dfrac{\partial U}{\partial q_2} \\[4pt] \dfrac{\partial U}{\partial q_3} \end{bmatrix}$$
where
$$\begin{aligned}
U(q) &= m_1 g(\bar{x}_1 + \bar{y}_1 + \bar{z}_1) + m_2 g(\bar{x}_2 + \bar{y}_2 + \bar{z}_2) + m_3 g(\bar{x}_3 + \bar{y}_3 + \bar{z}_3)\\
&= m_1 r_1 g + m_2 g\big(l_1 + r_2 \cos q_2 + r_2 \sin q_2(\sin q_1 + \cos q_1)\big)\\
&\quad + m_3 g\big(l_1 + l_2 \cos q_2 + r_3 \cos(q_2 + q_3) + (\sin q_1 + \cos q_1)(l_2 \sin q_2 + r_3 \sin(q_2 + q_3))\big)
\end{aligned}$$
and its derivative components are:
$$\begin{aligned}
\frac{\partial U}{\partial q_1} &= m_2 r_2 g \sin q_2(\sin q_1 + \cos q_1) + m_3 g(\sin q_1 + \cos q_1)(l_2 \sin q_2 + r_3 \sin(q_2 + q_3)),\\
\frac{\partial U}{\partial q_2} &= m_2 g\big(-r_2 \sin q_2 + r_2 \cos q_2(\cos q_1 + \sin q_1)\big)\\
&\quad + m_3 g\big[-l_2 \sin q_2 - r_3 \sin(q_2 + q_3) + (\sin q_1 + \cos q_1)(l_2 \cos q_2 + r_3 \cos(q_2 + q_3))\big],\\
\frac{\partial U}{\partial q_3} &= m_3 g\big[-r_3 \sin(q_2 + q_3) + r_3 \cos(q_2 + q_3)(\sin q_1 + \cos q_1)\big].
\end{aligned}$$
Using the dynamic control equations, the PID sliding control can be developed, which computes the torque τ used in the joint motor torque control so that the position vector q can track the desired trajectory qd. The tracking error vector is defined as:
$$e = q_d - q.$$
Sliding mode control requires a sliding surface, given by Equation (28), which contains both a derivative term and an integral one:
$$s = \dot{e} + \lambda_1 e + \lambda_2 \int_0^t e\, dt$$
where the λi are positive diagonal matrices; it turns out that, for s = 0, a stable sliding surface is obtained (as shown by Shafiei in [11]). The dynamic robot equations can be written by using the sliding surface equation:
$$H\dot{s} = -Cs + f + \tau_d - \tau$$
where
$$f = H(\ddot{q}_d + \lambda_1 \dot{e} + \lambda_2 e) + C\left(\dot{q}_d + \lambda_1 e + \lambda_2 \int_0^t e\, dt\right) + G.$$
The control module input becomes:
$$\tau = \hat{f} + K_v s + K\,\mathrm{sgn}(s)$$
where
$$\hat{f} = \hat{H}(\ddot{q}_d + \lambda_1 \dot{e} + \lambda_2 e) + \hat{C}\left(\dot{q}_d + \lambda_1 e + \lambda_2 \int_0^t e\, dt\right) + \hat{G}.$$
Equation (32) represents the estimate of f, and $K_v s = K_v\dot{e} + K_v\lambda_1 e + K_v\lambda_2 \int_0^t e\, dt$ is the outer PID loop; Kv and K are positive diagonal matrices built so that the stability conditions are fulfilled and guaranteed. The sgn(s) function is the sign function. The estimation error is bounded as:
$$|\tilde{f}| = \left|\tilde{H}(\ddot{q}_d + \lambda_1 \dot{e} + \lambda_2 e) + \tilde{C}\left(\dot{q}_d + \lambda_1 e + \lambda_2 \int_0^t e\, dt\right) + \tilde{G}\right| \leq F$$
where $\tilde{f} = f - \hat{f}$, $\tilde{H} = H - \hat{H}$, $\tilde{C} = C - \hat{C}$, and $\tilde{G} = G - \hat{G}$. The vector F is:
$$F = \left|\tilde{H}(\ddot{q}_d + \lambda_1 \dot{e} + \lambda_2 e)\right| + \left|\tilde{C}\left(\dot{q}_d + \lambda_1 e + \lambda_2 \int_0^t e\, dt\right)\right| + \left|\tilde{G}\right|.$$
To control the system states ( e , e ˙ ) and to reach the sliding surface s = 0 in a limited time by staying on the surface, the control law should be formulated so that Condition (35) is fulfilled:
$$\frac{1}{2}\frac{d}{dt}\left[s^T H s\right] \leq -\eta\,(s^T s)^{\frac{1}{2}}, \quad \eta > 0.$$
Using the sign function in the control law produces high oscillations in the control torque, an undesired phenomenon called chattering. To overcome this drawback, a saturation function (36) was used for the discontinuous part of the control law:
$$sat\left(\frac{s}{\varphi}\right) = \begin{cases} 1, & s \geq \varphi \\[4pt] \dfrac{s}{\varphi}, & -\varphi < s < \varphi \\[4pt] -1, & s \leq -\varphi. \end{cases}$$
As a result, a layer ϕ around the sliding surface was obtained so that, when the robot foot trajectory was inside the layer, it remained there. The values of λ1, λ2, K, and Kv were adjusted to better position the mobile walking robot foot.
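To make the torque computation of Equations (27)-(36) easier to follow, the sketch below assembles one control step in MATLAB; the gains, states, and dynamic-model estimates are placeholder values standing in for the matrices of Equations (19)-(26), so it should be read as an illustration rather than the authors' implementation.
% One step of the PID sliding mode law, Equations (27)-(36), with assumed values
lambda1 = diag([8 8 8]);   lambda2 = diag([4 4 4]);    % assumed sliding-surface gains
Kv = diag([20 20 20]);     K = diag([5 5 5]);          % assumed outer-PID / switching gains
phi = 0.05;  dt = 1e-3;                                % boundary layer width, sample time
% Placeholder states and references (normally supplied by the sensors and the reference block)
q  = [0.10; 0.50; 0.70];   q_dot  = zeros(3,1);
qd = [0.20; 0.60; 0.80];   qd_dot = zeros(3,1);   qd_ddot = zeros(3,1);
eint = zeros(3,1);                                     % running integral of the tracking error
% Placeholder dynamic-model estimates (stand-ins for Equations (19)-(26))
H_hat = diag([0.8 0.6 0.3]);   C_hat = zeros(3);   G_hat = [0; 4.2; 1.5];
e    = qd - q;                                         % Equation (27)
edot = qd_dot - q_dot;
eint = eint + e*dt;
s    = edot + lambda1*e + lambda2*eint;                % Equation (28)
f_hat = H_hat*(qd_ddot + lambda1*edot + lambda2*e) ...
      + C_hat*(qd_dot  + lambda1*e    + lambda2*eint) + G_hat;   % Equation (32)
sat_s = max(min(s/phi, 1), -1);                        % saturation, Equation (36)
tau   = f_hat + Kv*s + K*sat_s;                        % Equation (31) with sat() replacing sgn()
disp(tau.')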

6. Hybrid Control Simulation

The simulation was built with the help of MATLAB Simulink software to test the proposed methods and control laws. Figure 10 presents the diagram of the main components of the hybrid controller. The reference generation block for the OXYZ axis in the Cartesian space is shown in Figure 11, and the constant generation block defining the walking robot is presented in Figure 12. All the values were sent to a reference system on the robot structure, illustrated in Figure 1b. The foot’s vertical position was at a distance of 1.1 m from the origin set on the robot platform, not the foot.
The three lines in Figure 11 represent the reference signals as follows: the top signal (green line) is the reference for the robot foot on the OZ axis, the trapezoidal signal (blue line) is the reference for the robot foot on the OX axis, and for the OY axis, a zero-value signal was used (purple line). These three datasets represent the Cartesian position of the robot foot for a complete cycle of a leg's walking step. The reference on the OX axis corresponds to the heading direction of the robot and has a trapezoidal shape because the foot moves relative to the robot platform.
Figure 13 shows the diagram corresponding to the sliding control method made in MATLAB Simulink in which all the described elements are found. With their help, the command signal for the three joint motors was calculated.
Algorithm 1 implements the kinematic control block. It computed the angular speeds using the Jacobian matrix and the formula from Equation (8), which provided the angular reference speed.
Algorithm 1 The kinematic control block
function [Mx, My, Mz, qd] = fcn(Ref_X, Ref_Y, Ref_Z, A1, L2, L3, q1, q2, q3)
% Kinematic control block: transpose-Jacobian inverse kinematics, Equation (8)
fi1 = q1; fi2 = q2; fi3 = q3;
% Jacobian matrix elements, Equation (13); angles are in degrees (sind/cosd)
J11 = -sind(fi1)*(L2*sind(fi2) + L3*sind(fi2 + fi3));
J12 =  L2*cosd(fi1)*cosd(fi2) + L3*cosd(fi1)*cosd(fi2 + fi3);
J13 =  L3*cosd(fi1)*cosd(fi2 + fi3);
J21 =  cosd(fi1)*(L2*sind(fi2) + L3*sind(fi2 + fi3));
J22 =  L2*sind(fi1)*cosd(fi2) + L3*sind(fi1)*cosd(fi2 + fi3);
J23 =  L3*sind(fi1)*cosd(fi2 + fi3);
J31 =  0;
J32 = -L2*sind(fi2) - L3*sind(fi2 + fi3);
J33 = -L3*sind(fi2 + fi3);
Jb  = [J11 J12 J13; J21 J22 J23; J31 J32 J33];
% Current foot position from the direct kinematics, Equation (14)
Mx = cosd(fi1)*(L2*sind(fi2) + L3*sind(fi2 + fi3));
My = sind(fi1)*(L2*sind(fi2) + L3*sind(fi2 + fi3));
Mz = L2*cosd(fi2) + L3*cosd(fi2 + fi3) + A1;
% Operational-space positioning error
M_err = [Ref_X - Mx; Ref_Y - My; Ref_Z - Mz];
% Transpose-Jacobian step of Equation (8); alpha follows step 3 of Equation (8)
JJTde = Jb*Jb'*M_err;
alpha = (M_err'*JJTde)/(JJTde'*JJTde);
qd    = alpha*Jb'*M_err;     % angular speed reference for the joint motors
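For reference, assuming the block's code above is saved as fcn.m on the MATLAB path, a call with placeholder leg dimensions and joint angles (Table 1 is not reproduced here, so these values are assumptions) would look as follows; the angles are in degrees because of the sind/cosd calls.
A1 = 0.10;  L2 = 0.50;  L3 = 0.50;            % assumed link lengths [m]
Ref_X = 0.40;  Ref_Y = 0.10;  Ref_Z = 0.45;   % operational-space reference [m]
q1 = 10;  q2 = 35;  q3 = 40;                  % current joint angles [deg]
[Mx, My, Mz, qd] = fcn(Ref_X, Ref_Y, Ref_Z, A1, L2, L3, q1, q2, q3);
disp([Mx My Mz]);   % current foot position
disp(qd.')          % angular speed reference sent to the PI loops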
Figure 14 and Figure 15 present the simulation diagrams for the two sensors used in determining which control law should be used at a particular moment in time according to the switching algorithm based on neutrosophic logic.
Using what was presented in Section 4 regarding the neutrosophic decision, the neutrosophic control switching block was implemented. It is illustrated in Figure 16 with its inputs and outputs. The two inputs already described are shown, bringing proximity and force information into the switching mechanism. In addition to these two, there was a third input called stable-state, and it provided the block with additional information. When the robot was homing or reached specific points, it was controlled only by the kinematic control law. The solution was chosen to save computing power and provide a higher speed for arriving at the initial position (homing phase).
The actual neutrosophic switching block followed the detailed conditions already described.

7. Experimental Results

Following the simulation results, we observed several things. One of them was that, to successfully simulate the control law, which is bounded by the interaction between the support surface and the robot, the decision and control methods needed different conditions. This case was observed during the support phase, in which the robot leg must hold the entire robot weight and carry out the forward robot motion. In real conditions, the force and proximity sensors would return values confirming contact with the support surface. In simulation, however, if the positioning control error placed the robot foot slightly above the support surface, the sensors could affect the control laws and the entire system. Consequently, the switching mechanism was built with the condition that the control law was switched only when there was permanent contact with the support surface. An example is the homing motion of the foot, for which the robot was controlled only through the kinematic control method.
Figure 17 presents the reference and position tracking for the robot foot in the operational space in Cartesian coordinates on the OX axis. The positioning error on the OX axis is shown in Figure 18. The movement represents the forward direction of motion for the robot and its legs. Thus, three steps are presented, for which the trapezoidal shape of the signal represents the forward and retreat motion relative to the robot platform. The movement was computed according to the reference system relative to the robot platform. Because the reference was considered in the robot’s operational space, the first coordinate system was selected at the point where the first joint of the robot was placed.
On average, the error on the OX axis was below 1 cm, but there were some spikes in the error signal. The high amplitude errors were due to the sudden change in the reference speed, which was used in controlling the angular velocity through the torque of the joint’s motor. The high amplitude errors were found at the points where the reference changed its path slope and control type. The error had a more continuous shape when the kinematic control was in place, and in the case of the dynamic control, the error tended to oscillate.
Figure 19 presents the robot foot's reference, positioning, and error signals on the OY axis. The reference value was zero, and the positioning error was less than 1 mm. As on the other two axes, spikes were found in the error signal at the moments when the control law changed.
Figure 20 presents the diagram for the reference and positioning signals of the robot foot on the OZ axis, which corresponds to the axis perpendicular to the support surface, meaning the vertical motion. The diagram presents the foot position during the leg's swing phase, in which the leg was raised on the vertical axis so it would not hit an obstacle or the support surface. It also shows the vertical trajectory of the foot in the support phase, for which the reference was zero. The foot followed a continuous and uniform reference value during the swing phase. In the support and moving-forward phase, a positioning error was observed. The error may have been due to the compensation of the platform weight at the moment the robot foot crossed the vertical gravity axis passing through the platform center. The positioning error then returned to zero on the OZ axis.
Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21 present, in the same time frame, the motion of the robot leg stepping three times to move the robot forward. The diagrams show all motion stages for the robot foot to complete a step. They present reference and tracking signals. The homing occurred in the first second of the simulation, and a high error was observed. In the time interval of [2–5 s], the leg moved on the vertical axis and forward, controlled by the kinematic control law to reach a new position for the foot. In the next second of the virtual experiment at [5–6 s], the control method reached the vertical reference position to allow the robot leg to support the robot’s weight. Between 6 and 9 s, the leg moved backward in relation to the robot platform and was controlled by the dynamic SMC method. A different error pattern is observed in Figure 18 and Figure 21, considering that the robot leg supported the weight at this stage. After this stage was completed, the next step continued in the same manner, excluding the homing sequence.
In Figure 21, a maximum error of 4 cm was found at the amplitude peaks and an average value of 1 cm. The high amplitude values appeared as described above when the control laws were switched, which is the subject of future work to stabilize the system at the switch. The error peaks were also due to the sudden shape-change of the reference signal. By changing from a curve signal to a straight line, the derivative part of the controller received an extremely high value, which in turn affected the control signal. The influence should be attenuated or removed entirely in future work.

8. Conclusions

To summarize and conclude the results, Figure 22 presents the robot foot trace in a 3D space together with the reference pattern. All three steps overlap in the same diagram. The coordinates are given in the Cartesian space. The first stage that can easily be identified is the homing curve, seen in the lower section of the image. The maximum error was found at the start or end of a step, where the reference system must be improved to avoid sharp changes of direction. For the simulation, a fixed Cartesian coordinate system was considered, with the origin placed in the first joint, where the robot top platform joins the robot leg. The shape of the horizontal motion was not uniform. The trajectory had minor errors in the range of millimeters and hundreds of micrometers when the control method was not changed, and we considered the shape of the foot trajectory close to the reference.
Overall, as presented, the dynamic controller was better at following a continuous reference than a simple positioning kinematic controller. Although the positioning was more precise when a dynamic-based controller was used, sudden changes were added in the reference value when changing from a linear trajectory to a half ellipse. Since these were the points where the decision algorithm should also switch the used control method, these points of interest became essential areas of disturbance in the system. We will dedicate our attention to mitigating the reference and switching effects in the reference-tracking algorithm in future work.
The dynamic controller tended to be slower than the kinematic controller in compensating for the disturbance, but it did not override the PI kinematic controller. Moreover, the PI kinematic controller oscillated around the reference value when the robot leg was subjected to exterior forces. The gravitational acceleration acted upon the entire leg when high gains were used inside the control loops to lower the reference tracking time. It resulted in a high tracking error on the vertical axis during the kinematic control. The reference tracking time was considered the time difference between the change to the kinematic control method and until the foot reached the target point, with an error small enough to consider that the position was reached. The target position was chosen near the support surface but far enough for the leg to lower its speed before hitting the surface.
A PI kinematic-based controller was used during the swing motion of the robot leg because high precision was not required to move the robot foot. Instead, a constant foot speed was needed to reach the point of contact with the support surface in a short time. In addition, the controller did not need to compensate for the gravity and inertial forces during the leg and robot motion, although neglecting them would have severe and undesired consequences in the support phase. Therefore, we concluded that a dynamic-based controller was required to compensate for all the inertial forces and to better track the reference.
Finally, the proposed hybrid control efficiently used the two control methods for the mobile walking robot leg. The biggest problem was found during the transition between the two control techniques.
The main conclusion of the paper concerns the decision algorithm. With a two-stage decision, the information that could be analyzed offline between simulations defined the outline of the critical decisions for each case or phase of the robot motion. The final algorithm distinguished between robot motion phases and rejected contradictory conditions at the online stage.
A consequence of the presented hybrid force/position control with its two-stage decision algorithm is that it can successfully be used and further developed for other types of robots or tasks.
Our future work will focus on studies for removing the high peaks of positioning error. At the same time, new, improved simulations are required to visualize the leg motion cycle.

Author Contributions

Conceptualization, I.-A.G. and L.V.; simulation, I.-A.G.; validation, I.-A.G., L.V. and A.-C.C.; investigation, I.-A.G. and L.V.; resources, I.-A.G. and A.-C.C.; writing—original draft preparation, I.-A.G.; writing—review and editing, I.-A.G., L.V. and A.-C.C.; supervision, L.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by the Institute of Solid Mechanics of the Romanian Academy and by an interacademic project of IMSAR–Yanshan University: “Joint Laboratory of Intelligent Rehabilitation Robot” (KY201501009), a collaborative research agreement between Yanshan University, China, and the Romanian Academy by IMSAR, RO.

Conflicts of Interest

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Glossary

θi: robot leg joint rotation
θref: robot joint angular position reference
θreal: robot joint angular position measured by angular sensors
Δθi: leg joint positioning error
ωi: joint angular speed
Δωi: joint angular speed error
ri: distance from the previous joint axis to the ith leg segment center of mass
li: length of the ith leg segment
mi: mass of the ith leg segment
Pref: position reference of the robot foot
Preal: real position of the robot foot
J: Jacobian matrix
Jij: Jacobian matrix element with coordinates i (row) and j (column)
Fsim: simulated ground reaction force
Freal: real force given by force sensors placed on the robot foot
Fground: ground reaction force
d: distance from the robot foot to the ground given by proximity sensors
si: sin(θi)
ci: cos(θi)
sij: sin(θi + θj)
cij: cos(θi + θj)
τP: joint torque computed by position control
τF: joint torque computed by force control
τ: joint torque selected by the decision algorithm
qi: leg joint used by the dynamic control method
s: sliding surface error in terms of SMC
SMC: sliding mode control
Om: extension theory object
cm: extension theory characteristic
vm: extension theory measure
m(C): the neutrosophic generalized basic belief assignment
DΘ: a hyper-power set
θC: neutrosophic kinematic state
θD: neutrosophic dynamic state
θD ∪ θC: neutrosophic uncertain state
θD ∩ θC: neutrosophic contradiction state

References

  1. Gao, Z.; Wanyama, T.; Singh, I.; Gadhrri, A.; Schmidt, R. From Industry 4.0 to Robotics 4.0-a Conceptual Framework for Collaborative and Intelligent Robotic Systems. Procedia Manuf. 2020, 46, 591–599. [Google Scholar] [CrossRef]
  2. Bragança, S.; Costa, E.; Castellucci, I.; Arezes, P.M. A Brief Overview of the Use of Collaborative Robots in Industry 4.0: Human Role and Safety. In Occupational and Environmental Safety and Health; Springer: Cham, Switzerland, 2019; Volume 202, pp. 641–650. [Google Scholar] [CrossRef]
  3. Yao, X.-Y.; Ding, H.-F.; Ge, M.-F. Task-space tracking control of multi-robot systems with disturbances and uncertainties rejection capability. Nonlinear Dyn. 2018, 92, 1649–1664. [Google Scholar] [CrossRef]
  4. Craig, J.J.; Raibert, M.H. A Systematic Method of Hybrid Position/Force Control of a Manipulator. In Proceedings of the COMPSAC 79, Computer Software and the IEEE Computer Society’s Third International Applications Conference, Chicago, IL, USA, 6–8 November 1979; IEEE: Piscataway, NJ, USA, 1979; pp. 446–451. [Google Scholar]
  5. Wu, J.; Zhang, B.; Wang, L.; Yu, G. An iterative learning method for realizing accurate dynamic feedforward control of an industrial hybrid robot. Sci. China Technol. Sci. 2021, 64, 1177–1188. [Google Scholar] [CrossRef]
  6. Liu, H.; Yan, Z.; Xiao, J. Pose error prediction and real-time compensation of a 5-DOF hybrid robot. Mech. Mach. Theory 2022, 170, 104737. [Google Scholar] [CrossRef]
  7. Huynh, B.-P.; Wu, C.-W.; Kuo, Y.-L. Force/Position Hybrid Control for a Hexa Robot Using Gradient Descent Iterative Learning Control Algorithm. IEEE Access 2019, 7, 72329–72342. [Google Scholar] [CrossRef]
  8. Sun, W.; Wu, Y.; Wang, L. Trajectory tracking of constrained robotic systems via a hybrid control strategy. Neurocomputing 2018, 330, 188–195. [Google Scholar] [CrossRef]
  9. Le Flohic, J.; Paccot, F.; Bouton, N.; Chanal, H. Application of Hybrid Force/Position Control on Parallel Machine for Mechanical Test. Mechatronics 2018, 49, 168–176. [Google Scholar] [CrossRef]
  10. Islam, M.R.; Assad-Uz-Zaman, M.; Al Zubayer Swapnil, A.; Ahmed, T.; Rahman, M.H. An Ergonomic Shoulder for Robot-Aided Rehabilitation with Hybrid Control. Microsyst. Technol. 2021, 27, 159–172. [Google Scholar] [CrossRef]
  11. Shafiei, S.E. Sliding Mode Control of Robot Manipulators via Intelligent Approaches; INTECH Open Access Publisher: London, UK, 2010; ISBN 953-307-099-4. [Google Scholar]
  12. Yang, X.; Liu, H.; Xiao, J.; Zhu, W.; Liu, Q.; Gong, G.; Huang, T. Continuous Friction Feedforward Sliding Mode Controller for a TriMule Hybrid Robot. IEEE/ASME Trans. Mechatron. 2018, 23, 1673–1683. [Google Scholar] [CrossRef]
  13. Xie, Y.; Zhang, X.; Meng, W.; Zheng, S.; Jiang, L.; Meng, J.; Wang, S. Coupled fractional-order sliding mode control and obstacle avoidance of a four-wheeled steerable mobile robot. ISA Trans. 2020, 108, 282–294. [Google Scholar] [CrossRef]
  14. Zaare, S.; Soltanpour, M.R. Adaptive fuzzy global coupled nonsingular fast terminal sliding mode control of n-rigid-link elastic-joint robot manipulators in presence of uncertainties. Mech. Syst. Signal. Process. 2021, 163, 108165. [Google Scholar] [CrossRef]
  15. Jmel, I.; Dimassi, H.; Hadj-Said, S.; M’Sahli, F. Adaptive Observer-Based Sliding Mode Control for a Two-Wheeled Self-Balancing Robot under Terrain Inclination and Disturbances. Math. Probl. Eng. 2021, 2021, 8853441. [Google Scholar] [CrossRef]
  16. Feng, X.; Wang, C. Robust Adaptive Terminal Sliding Mode Control of an Omnidirectional Mobile Robot for Aircraft Skin Inspection. Int. J. Control. Autom. Syst. 2020, 19, 1078–1088. [Google Scholar] [CrossRef]
  17. Cheng, X.; Liu, H.; Lu, W. Chattering-Suppressed Sliding Mode Control for Flexible-Joint Robot Manipulators. In Proceedings of the Actuators, Orlando, FL, USA, 20–24 June 2021; Multidisciplinary Digital Publishing Institute: Basel, Switzerland, 2021; Volume 10, p. 288. [Google Scholar]
  18. Ren, S.; Gui, F.; Zhao, Y.; Zhan, M.; Wang, W.; Zhou, J. An Extenics-Based Scheduled Configuration Methodology for Low-Carbon Product Design in Consideration of Contradictory Problem Solving. Sustainability 2021, 13, 5859. [Google Scholar] [CrossRef]
  19. Cai, W.; Yang, C.Y.; He, B. Preliminary Extension Logic; Automation Institute on Chinese Academy: Beijing, China, 2003. [Google Scholar]
  20. Wu, S. Research on Toy Design for Special Children Based on Sensory Integration Training, D-S Theory, and Extenics: Taking Physical Toys for ADHD Children as an Example. Sci. Program. 2022, 2022, 1395265. [Google Scholar] [CrossRef]
  21. Melinte, D.O.; Travediu, A.-M.; Dumitriu, D.N. Deep Convolutional Neural Networks Object Detector for Real-Time Waste Identification. Appl. Sci. 2020, 10, 7301. [Google Scholar] [CrossRef]
  22. Melinte, D.O.; Vladareanu, L. Facial Expressions Recognition for Human–Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer. Sensors 2020, 20, 2393. [Google Scholar] [CrossRef]
  23. Yan, H.; Wang, H.; Vladareanu, L.; Lin, M.; Vladareanu, V.; Li, Y. Detection of Participation and Training Task Difficulty Applied to the Multi-Sensor Systems of Rehabilitation Robots. Sensors 2019, 19, 4681. [Google Scholar] [CrossRef] [Green Version]
  24. Smarandache, F. A unifying field in logics: Neutrosophic logic. In Neutrosophy, Neutrosophic Probability, Set, and Logic; American Research Press: Rehoboth, MA, USA, 2002. [Google Scholar]
  25. Smarandache, F.; Dezert, J. An Introduction to DSmT; Smarandache, F., Dezert, J., Eds.; American Research Press: Rehoboth, MA, USA, 2009. [Google Scholar]
  26. Smarandache, F.; Vlădăreanu, L. Applications of Neutrosophic Logic to Robotics: An Introduction. In Proceedings of the 2011 IEEE International Conference on Granular Computing, Kaohsiung, Taiwan, 8–10 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 607–612. [Google Scholar]
  27. Saqlain, M.; Saeed, M. Fuzzy Logic Controller for Aviation Parking with 5G Communication Technology. In Intelligent and Fuzzy Techniques in Aviation 4.0; Springer: Berlin/Heidelberg, Germany, 2022; pp. 41–62. [Google Scholar]
  28. Li, H.; Xie, X.; Du, P.; Xi, J. Cooperative Object Recognition Method of Multi-UAVs Based on Decision Fusion. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 5424–5429. [Google Scholar]
  29. Chai, H.; Lv, S.; Fang, M. Obstacle Avoidance by DSmT for Mobile Robot in Unknown Environment. In Proceedings of the 2019 4th International Conference on Automation, Control and Robotics Engineering, Shenzhen, China, 19–21 July 2019; pp. 1–6. [Google Scholar]
  30. Yuan, S.; Guo, P.; Han, X.; Luan, F.; Zhang, F.; Liu, T.; Mao, H. DSmT-Based Ultrasonic Detection Model for Estimating Indoor Environment Contour. IEEE Trans. Instrum. Meas. 2019, 69, 4002–4014. [Google Scholar] [CrossRef]
  31. Liu, J.; Wei, L.; Cao, J.; Fei, S. Hybrid-Driven H∞ Filter Design for T–S Fuzzy Systems with Quantization. Nonlinear Anal. Hybrid Syst. 2019, 31, 135–152. [Google Scholar] [CrossRef]
  32. Aslam, M.S.; Qaisar, I.; Saleem, M.A. Quantized Event-triggered feedback control under fuzzy system with time-varying delay and Actuator fault. Nonlinear Anal. Hybrid. Syst. 2019, 35, 100823. [Google Scholar] [CrossRef]
  33. Bledt, G.; Powell, M.J.; Katz, B.; Di Carlo, J.; Wensing, P.M.; Kim, S. MIT Cheetah 3: Design and Control of a Robust, Dynamic Quadruped Robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2245–2252. [Google Scholar]
  34. Zhang, J.; Jin, L.; Yang, C. Distributed Cooperative Kinematic Control of Multiple Robotic Manipulators with an Improved Communication Efficiency. IEEE/ASME Trans. Mechatron. 2021, 27, 149–158. [Google Scholar] [CrossRef]
  35. Zhang, B.; Wu, J.; Wang, L.; Yu, Z. Accurate dynamic modeling and control parameters design of an industrial hybrid spray-painting robot. Robot. Comput. Manuf. 2019, 63, 101923. [Google Scholar] [CrossRef]
  36. Ion, I.; Vladareanu, L.; Simionescu, I.; Vasile, A. The Gait Analysis for Modular Walking Robot MERO Walks on the Slope. Nature 2008, 6, 8. [Google Scholar]
  37. Vladareanu, V.; Schiopu, P.; Deng, M.; Yu, H. Intelligent extended control of the walking robot motion. In Proceedings of the 2014 International Conference on Advanced Mechatronic Systems, Kumamoto, Japan, 10–12 August 2014; pp. 489–495. [Google Scholar] [CrossRef]
  38. Vladareanu, L.; Vladareanu, V.; Yu, H.; Mitroi, D.; Ciocîrlan, A.-C. Intelligent Control Interfaces Using Extenics Multidimensional Theory Applied on VIPRO Platforms for Developing the IT INDUSTRY 4.0 Concept. IFAC-Pap. 2019, 52, 922–927. [Google Scholar]
  39. Wen, C.; Chun-Yan, Y.; Bin, H. New Development of the Basic Theory of Extenics. Eng. Sci. 2003, 5, 80–87. [Google Scholar]
  40. Cai, W.; Yang, C.; Smarandache, F.; Vladareanu, L.; Li, Q.; Zou, G.; Zhao, Y.; Li, X. Extenics and Innovation Methods; CRC Press Leiden: Boca Raton, FL, USA, 2013; ISBN 1-306-50052-4. [Google Scholar]
  41. Smarandache, F. Foundations of Neutrosophic Logic and Set and Their Applications to Information Fusion, Tutorial. In Proceedings of the 17th International Conference on Information Fusion, Salamanca, Spain, 7 July 2014. [Google Scholar]
  42. Peng, J.-J.; Wang, J.-Q.; Wang, J.; Zhang, H.-Y.; Chen, X.-H. Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems. Int. J. Syst. Sci. 2014, 47, 2342–2358. [Google Scholar] [CrossRef]
  43. Broumi, S.; Bakali, A.; Bahnasse, A. Neutrosophic Sets: An Overview. 2018, pp. 403–434. Available online: https://books.google.co.kr/books?hl=zh-TW&lr=&id=pftuDwAAQBAJ&oi=fnd&pg=PA408&dq=Neutrosophic+Sets:+An+Overview&ots=Ietca2c5tz&sig=mMXdkKa8sePH9cQ8GRleW5XcM-E&redir_esc=y#v=onepage&q=Neutrosophic%20Sets%3A%20An%20Overview&f=false (accessed on 17 March 2022).
  44. Buss, S.R. Introduction to Inverse Kinematics with Jacobian Transpose, Pseudoinverse and Damped Least Squares Methods. IEEE J. Robot. Autom. 2004, 17, 16. [Google Scholar]
  45. Gal, A.; Vladareanu, L.; Munteanu, M.S.; Melinte, O. PID Sliding Motion Control by Using Fuzzy Adjustment. In Proceedings of the SISOM 2012 and Session of the Commission of Acoustics, Bucharest, Romania, 30–31 May 2012. [Google Scholar]
Figure 1. The robot structure: (a) the hexapod walking robot; (b) kinematic structure of the robot leg.
Figure 2. A simple trajectory for the robot foot.
Figure 3. Control-deciding graphs: (a) force sensor graph; (b) proximity sensor graph.
Figure 4. The general control diagram.
Figure 5. Interaction between the robot and ground surface.
Figure 6. The kinematic control diagram.
Figure 7. The dynamic control scheme.
Figure 8. Membership functions: (a) member function for input s; (b) member function for input ṡ.
Figure 9. Parameter Kfuzzy for s = 0 and ṡ between −20 and 20.
Figure 10. MATLAB Simulink diagram for the hybrid control.
Figure 11. Foot reference signals.
Figure 12. Constant values.
Figure 13. The sliding control diagram made in MATLAB Simulink.
Figure 14. Proximity sensor simulation.
Figure 15. Force sensor simulation.
Figure 16. Neutrosophic switching diagram.
Figure 17. Foot reference and position tracking on the OX axis.
Figure 18. Foot positioning error on the OX axis.
Figure 19. Foot tracking position on the OY axis.
Figure 20. Foot reference and position tracking on the OZ axis.
Figure 21. Foot positioning error on the OZ axis.
Figure 22. Foot trace in Cartesian coordinates.
Table 1. Robot values.
Dimension    Value
r1           0.05 [m]
l1           0.1 [m]
m1           0.25 [kg]
r2           0.25 [m]
l2           0.5 [m]
m2           1.25 [kg]
r3           0.35 [m]
l3           0.7 [m]
m3           1.76 [kg]
li and ri represent, respectively, leg segment dimensions and the distance from the joint axis to the center of mass for each leg segment, and mi is the segment's mass.
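To relate the Table 1 entries to the dynamic compensation discussed in the conclusions, the sketch below evaluates the gravity torque of a generic three-link serial chain from mi, ri, and li. The joint layout and sign conventions are assumptions made for illustration; the actual leg geometry is the one shown in Figure 1b.

```python
import numpy as np

# Sketch: gravity torque of a generic planar 3-link chain built from the
# Table 1 parameters. The joint layout is an assumption; it only shows how
# m_i, r_i, and l_i enter the dynamic model.

G = 9.81                                # gravitational acceleration [m/s^2]
m = np.array([0.25, 1.25, 1.76])        # segment masses [kg]
r = np.array([0.05, 0.25, 0.35])        # joint-to-COM distances [m]
l = np.array([0.10, 0.50, 0.70])        # segment lengths [m]

def gravity_torque(theta):
    """Gravity torque vector for absolute joint angles theta = (t1, t2, t3)."""
    t1, t2, t3 = theta
    c1, c12, c123 = np.cos(t1), np.cos(t1 + t2), np.cos(t1 + t2 + t3)
    g1 = G * (m[0]*r[0]*c1
              + m[1]*(l[0]*c1 + r[1]*c12)
              + m[2]*(l[0]*c1 + l[1]*c12 + r[2]*c123))
    g2 = G * (m[1]*r[1]*c12 + m[2]*(l[1]*c12 + r[2]*c123))
    g3 = G * m[2]*r[2]*c123
    return np.array([g1, g2, g3])

print(gravity_torque(np.radians([10.0, 30.0, -20.0])))
```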
Table 2. Observer experimental data.
θ              Force Sensor m1(θ)    Proximity Sensor m2(θ)
θD *           0.75                  0.65
θC **          0.2                   0.3
θD ∪ θC ***    0.05                  0.05
* θD = dynamic control; ** θC = kinematic control; *** θD ∪ θC is the indeterminate area.
Table 3. The experimental data after using Equation (6).
C = A ∩ B    m(C)
φ            0         -
θD           0.5575    Truth value for θD and falsity value for θC
θC           0.085     Truth value for θC and falsity value for θD
θD ∪ θC      0.0025    Uncertainty between θC and θD
θD ∩ θC      0.355     The contradiction between θC and θD
Table 4. Control probability.
C = A ∩ B    A (Force Sensor)    B (Proximity Sensor)    m(C)      Control Type
φ            φ                   φ                       0         Robot stopped
θD           θD                  θD ∪ θC                 0.5575    Dynamic control
             θD ∪ θC             θD
             θD                  θD
θC           θC                  θD ∪ θC                 0.085     Kinematic control
             θD ∪ θC             θC
             θC                  θC
θD ∪ θC      θD ∪ θC             θD ∪ θC                 0.0025    Uncertainty
θD ∩ θC      θD                  θC                      0.355     Contradiction
             θC                  θD
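The combined masses in Tables 3 and 4 follow from the classic DSm combination of the two sensor assignments of Table 2; the short sketch below reproduces them (the set labels and variable names are purely illustrative).

```python
from itertools import product

# Sketch: classic DSm combination of the two basic belief assignments of
# Table 2. "D" = dynamic control, "C" = kinematic control, "D u C" = the
# indeterminate (union) element, "D n C" = the contradiction element.

m1 = {"D": 0.75, "C": 0.20, "D u C": 0.05}   # force sensor, m1(θ)
m2 = {"D": 0.65, "C": 0.30, "D u C": 0.05}   # proximity sensor, m2(θ)

def intersect(a, b):
    """Intersection on the tiny frame {D, C} extended with 'D u C'."""
    if a == b:
        return a
    if a == "D u C":
        return b
    if b == "D u C":
        return a
    return "D n C"                           # D meets C -> contradiction

combined = {}
for (a, wa), (b, wb) in product(m1.items(), m2.items()):
    c = intersect(a, b)
    combined[c] = combined.get(c, 0.0) + wa * wb

print({k: round(v, 4) for k, v in combined.items()})
# -> {'D': 0.5575, 'C': 0.085, 'D u C': 0.0025, 'D n C': 0.355}, matching Tables 3 and 4
```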
Table 5. The output fuzzy gain computation.
s                    NB        NM             NS            Z        PS           PM           PB
ṡ                    s < −2    −2 ≤ s < −1    −1 ≤ s < 0    s = 0    0 < s ≤ 1    1 < s ≤ 2    2 < s
NB (ṡ < −10)         S         VS             S             M        B            VB           VB
N (−10 ≤ ṡ < 0)      M         S              VS            S        M            B            VB
Z (ṡ = 0)            B         M              S             VS       S            M            B
PS (0 < ṡ ≤ 10)      VB        B              M             S        VS           S            M
PB (10 < ṡ)          VB        VB             B             M        S            VS           S
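Read as a rule matrix, Table 5 maps the sliding surface value s and its derivative ṡ to a linguistic gain; a direct lookup sketch is given below. The partition boundaries are taken from the table, while the mapping from linguistic labels to numeric gains (handled by the membership functions of Figures 8 and 9) is left out.

```python
# Sketch: direct lookup of the Table 5 rule matrix.
# Columns follow the s partition (NB ... PB), rows the s_dot partition.

S_LABELS = ["NB", "NM", "NS", "Z", "PS", "PM", "PB"]

RULES = {                       # rows: s_dot label, columns: s label
    "NB": ["S",  "VS", "S",  "M",  "B",  "VB", "VB"],
    "N":  ["M",  "S",  "VS", "S",  "M",  "B",  "VB"],
    "Z":  ["B",  "M",  "S",  "VS", "S",  "M",  "B"],
    "PS": ["VB", "B",  "M",  "S",  "VS", "S",  "M"],
    "PB": ["VB", "VB", "B",  "M",  "S",  "VS", "S"],
}

def s_label(s):
    """Partition of s from the column headers of Table 5."""
    if s < -2:  return "NB"
    if s < -1:  return "NM"
    if s < 0:   return "NS"
    if s == 0:  return "Z"
    if s <= 1:  return "PS"
    if s <= 2:  return "PM"
    return "PB"

def sdot_label(sdot):
    """Partition of s_dot from the row headers of Table 5."""
    if sdot < -10: return "NB"
    if sdot < 0:   return "N"
    if sdot == 0:  return "Z"
    if sdot <= 10: return "PS"
    return "PB"

def fuzzy_gain_label(s, sdot):
    """Linguistic gain selected by the rule matrix for a given (s, s_dot)."""
    return RULES[sdot_label(sdot)][S_LABELS.index(s_label(s))]

print(fuzzy_gain_label(0.0, 0.0))   # -> 'VS', the centre cell of Table 5
```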
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
