Article

Design of a Gough–Stewart Platform Based on Visual Servoing Controller

School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(7), 2523; https://doi.org/10.3390/s22072523
Submission received: 9 February 2022 / Revised: 18 March 2022 / Accepted: 24 March 2022 / Published: 25 March 2022

Abstract

Designing a robot with the best possible accuracy is a long-standing goal in the robotics community. In order to create a Gough–Stewart platform whose accuracy is guaranteed for a dedicated controller, this paper describes an advanced optimal design methodology: control-based design. This method takes the positioning accuracy of the controller into account during the design process in order to obtain the optimal geometric parameters of the robot. Three types of visual servoing controllers are applied to control the motions of the Gough–Stewart platform: leg-direction-based visual servoing, line-based visual servoing, and image moment visual servoing. For each controller, the positioning error model accounting for the camera observation error, together with the controller singularities, is analyzed. Optimization problems are then formulated in order to obtain the optimal geometric parameters of the robot and the placement of the camera for each type of controller. Finally, co-simulations of the three optimized Gough–Stewart platforms are performed in order to test the positioning accuracy and the robustness with respect to manufacturing errors. It turns out that the control-based design methodology yields both the optimal design parameters of the robot and the best performance of the pair {robot + dedicated controller}.

1. Introduction

Parallel robots are increasingly attractive because they outperform classical serial robots in terms of speed, acceleration, payload, stiffness, and accuracy [1]. Nevertheless, the traditional control of parallel robots remains troublesome because of their highly nonlinear input/output relations.
A large number of studies have focused on the control of parallel robots [2]. Generally, the only way to ensure the high accuracy of a parallel robot with a model-based controller is to obtain a robot model that is as detailed as possible [3]. However, due to factors such as manufacturing and assembly errors, even detailed models still suffer from inaccuracy in practice. Therefore, more and more studies currently focus on finding alternative controllers that sidestep the complex kinematic architecture of the robot and reach a better positioning accuracy than the classical model-based controllers. Sensor-based control is an efficient approach that estimates the pose of the end-effector with external sensors [4,5,6]. Visual servoing is a sensor-based controller that takes one or several cameras as external sensors and closes the control loop with the vision information obtained from the camera. Visual servoing can be classified into two main groups: position-based visual servoing (PBVS) and image-based visual servoing (IBVS). PBVS directly controls the pose of the target with respect to the camera in Cartesian space [7]. IBVS aims at minimizing the error between the current and the desired image features directly in image space; it is more robust to calibration errors than PBVS and ensures that the target stays in the image plane, so that the target is not lost during servoing. Therefore, we propose to apply IBVS as the external sensor-based controller in this paper. With the development of image processing and image acquisition technology, a large number of studies have focused on controlling parallel robots with IBVS [5,8,9,10,11,12,13,14]. It has been proven that the end-effector pose can be estimated effectively through direct observation by vision [15] or through indirect observation [14,16,17,18]. In addition, the possible choices of image features for visual servoing of parallel robots are numerous, such as image moments [19,20] when the camera can observe the end-effector directly, or the observation of the robot legs when directly observing the end-effector is difficult (such as in machine tools) [9].
When a vision-based controller is applied to control a parallel robot, the positioning accuracy is one of the most important internal performances, and it is limited by the observation error on the image features [21]. The types and number of cameras used, together with the kinds of image features, all have an influence on the observation error [22]. In addition, the geometric parameters of the robot and the camera position also affect the positioning accuracy, since they change the interaction models [23,24]. One problem that should be mentioned is that the mapping between the image feature space and the Cartesian space is not free of singularities [25]; the existence of singularities of the interaction model has a great influence on the accuracy performance of the parallel robot [26]. Consequently, in order to ensure the best accuracy performance of the pair {robot + controller} throughout its workspace, the robot geometric parameters and the camera position should be optimized in advance.
The optimal design methodology of a robot aims at obtaining the geometric parameters that minimize a given objective under constraints. As shown in [27], when visual servoing is applied to the control of parallel robots, the controller singularities and the internal performance (especially the positioning accuracy) should be taken into account in advance; however, the visual servoing controller has never been considered in the optimal design process before. Therefore, in this work, a “control-based design” methodology considering the controller performance is developed: the positioning accuracy, together with the singularities of the corresponding controllers, is taken into account during the robot design process, in order to obtain the optimal geometric parameters of a Gough–Stewart platform dedicated to a given controller with the best accuracy performance and to avoid the instability issues that may appear in the control process. In this case, three types of vision-based controllers are considered:
  • Leg-direction-based visual servoing (LegBVS) [20];
  • Line-based visual servoing (LineBVS) [28];
  • Image-moment-based visual servoing (IMVS) of a feature mounted on the platform [19].
To the best of our knowledge, this is the first time that a spatial six-DOF parallel robot is designed with the optimal control-based design methodology. In addition, this is the first time that topological optimization is applied to the design of an image moment visual servoing controller.
This paper is organized as follows: Section 2 presents the robot architecture, the design requirements, and the specifications of the visual servoing controllers. The concept of visual servoing applied to the control of the Gough–Stewart platform is reviewed in Section 3. In Section 4, the controller accuracy performance (the error model relating the camera observation error to the positioning error of the robot) and the controller singularities, which lead to instability of the robot, are discussed. The optimal design procedure based on the visual servoing controllers is introduced and solved in Section 5. Then, in Section 6, the co-simulation between Simulink and ADAMS, together with the result analysis, is described. Finally, conclusions are drawn in Section 7.

2. Robot Architecture and Specification

In this paper, we optimize the geometry of the Gough–Stewart platform controlled with visual servoing in order to obtain an excellent performance of the pair {robot + controller}. The Gough–Stewart platform, also called a hexapod, is a parallel robot with six degrees of freedom (DOFs): the moving platform translates along, and rotates around, the three axes of space with respect to the fixed base [29]. The Gough–Stewart platform designed in this paper is a 6-UPS robot (Figure 1a). The moving platform of the robot is linked to the fixed base by six individual chains $B_iP_i$ ($i = 1, \dots, 6$). Each chain is connected to the base by a U joint located at $B_i$ ($i = 1, \dots, 6$), is attached to the end-effector by an S joint located at $P_i$ ($i = 1, \dots, 6$), and includes a prismatic actuator that changes the length of the link $B_iP_i$ ($i = 1, \dots, 6$) (Figure 1b).
The base and the moving platform of the considered Gough–Stewart platform are symmetric hexagons (Figure 1c). The radius of the circumcircle of the base is $r_b$, and the radius of the circumcircle of the moving platform is $r_a$. The angles are defined as $\angle B_1B_cB_2 = 2\alpha_1$, $\angle P_1P_cP_2 = 2\alpha_2$ and $\angle xP_cP_0 = \alpha_0$ (Figure 1c).
The complete workspace of the Gough–Stewart platform is a six-dimensional space: both the 3D location and the orientation of the moving platform must be considered. The definition of the orientation workspace is based on the Tilt and Torsion (T&T) angles proposed in [30]. The T&T angles are defined in two stages, a tilt and a torsion. As illustrated in Figure 1d, the frame first rotates about the base $z_i$-axis by an angle $\phi$, then about the new $y_j$-axis by an angle $\theta$, then about the $z_j$-axis by an angle $-\phi$, and finally about the new $z_k$-axis by the torsion angle $\sigma$. The expression of the rotation matrix of the T&T angles can be found in [30]. Based on the T&T angles, a 3D workspace subset named the maximum tilt workspace was developed in [31]. This workspace measure is defined as the set of positions that the center of the moving platform can attain with any direction of its z-axis making a tilt angle limited by a given value; in this way, the orientation workspace of the Gough–Stewart platform is kept symmetrical. The configuration of the Gough–Stewart platform can then be defined by the vector $x = [x_t, y_t, z_t, \phi, \theta, \sigma]$, where $[x_t, y_t, z_t]$ represents the 3D location of the center of the moving platform and $[\phi, \theta, \sigma]$ are the T&T angles.
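As an illustration only, a minimal Python sketch of the corresponding rotation matrix is given below; it assumes the usual T&T convention of [30], in which the tilt axis is defined by the azimuth $\phi$, the tilt amplitude is $\theta$, and the torsion $\sigma$ is applied about the resulting z-axis, i.e., $R = R_z(\phi)\,R_y(\theta)\,R_z(\sigma - \phi)$:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def tt_rotation(phi, theta, sigma):
    """Rotation matrix of the Tilt and Torsion angles, assuming the convention
    R = Rz(phi) * Ry(theta) * Rz(sigma - phi): a tilt of amplitude theta about
    an axis of azimuth phi, followed by a torsion sigma about the new z-axis."""
    return rot_z(phi) @ rot_y(theta) @ rot_z(sigma - phi)

# Example: a pose at the boundary of the specified orientation range
R = tt_rotation(np.pi / 2, np.pi / 12, np.pi / 12)
```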
The requirements that must be achieved by the Gough–Stewart platform are given in Table 1. They were fixed after discussion with some of our industrial partners. First of all, the maximum tilt workspace of the Gough–Stewart platform should cover a cube of side length $l_0 = 100$ mm, with the range of T&T angles $\phi \in (-\pi, \pi]$, $\theta \in [0, \pi/12]$ and $\sigma \in [0, \pi/12]$. Several performance requirements should be guaranteed throughout this workspace; this cube will therefore be called the regular dexterous workspace (RDW) of the robot [1].
Additionally, considering the reality (gain of place), the footprint of the robot must be as small as possible.
The optimized Gough–Stewart platform ought to satisfy all the following geometric and kinematic constraints throughout the RDW:
  • The RDW should be free of singularity (both of the Gough–Stewart platform and the visual servoing controllers applied in this case);
  • The robot positioning error should be lower than 1 mm;
  • The robot orientation error should be lower than 0.01 rad;
  • Some distances are constrained in order to avoid collisions or impractical designs: the distance $r_b$ between the origin of the base frame $O$ and the U joint positions $B_i$, the distance $r_a$ between the origin of the platform frame $P_c$ and the S joints $P_i$, the radius $R$ of the cross-section of the prismatic actuators $B_iP_i$ and, finally, the camera frame location (Figure 1b). These constraints will be further detailed in Section 5.
In order to reach the desired 1 mm positioning accuracy and 0.01 rad orientation accuracy specified in Table 1, we propose to apply visual servoing approaches. A single camera is chosen as the external sensor and is mounted on the ground in order to control the motions of the Gough–Stewart platform. The resolution of the camera is 1920 × 1200 pixels and its focal length is 10 mm. The most straightforward way is to observe some image features attached to the moving platform directly with the camera; however, in some cases, such as milling operations, it is difficult to observe the end-effector. Alternative features proposed in [20] are the cylindrical legs of the robot's prismatic actuators. Therefore, in this case, three types of classical visual servoing approaches (LegBVS [20], LineBVS [28], and IMVS [19]) will be tested.
The two first controllers take the image features extracted from the observation of robot legs, while the last one will be used to observe the platform directly. The optimal design parameters of the Gough–Stewart platform for each type of controller will be found and based on the analysis of the obtained results, the best pair {robot + controller} will be determined.
In addition, several comments should be made here. First, no dynamic criterion is mentioned in these specifications: for visual servoing, high-speed motion is not the purpose, except in a few specific scenarios [32,33]; therefore, only the geometric and kinematic performance of the robot will be considered. Moreover, a repeatability of 1 mm and an orientation accuracy of 0.01 rad could also be obtained with a standard encoder-based controller. However, this paper does not aim to prove that visual servoing achieves a better accuracy than standard encoder-based control; it aims to show that, when a robot is controlled with visual servoing (or any other type of sensor-based controller), the robot and the controller must be optimized simultaneously during the design process in order to guarantee the required accuracy.
In the next section, some brief recalls on visual servoing are given before presenting the optimization problem formulation.

3. Recalls on Visual Servoing

In this section, a brief review of visual servoing is presented. Then, we provide some recalls on the three approaches considered in this work [19,20,28].

3.1. Basics of Image-Based Visual Servoing

Image-based visual servoing is an external sensor-based control approach which uses the so-called interaction matrix $L$ [5] to relate the twist ${}^c\tau_c$ between the camera and the scene (in what follows, the superscript “c” denotes the camera frame) to the time derivative $\dot{s}$ of the vector $s$ of the visual primitives observed by the camera, through the relationship:
$\dot{s} = L(s, x)\,{}^c\tau_c \qquad (1)$
The components of $L$ are highly nonlinear functions of the image features $s$ and of the robot end-effector configuration $x$ in Cartesian space.
Based on (1), we can build a simple visual servoing error model:
$\Delta s = L(s, x)\,\Delta x \qquad (2)$
where $\Delta s$ represents a small error in the camera observation and $\Delta x$ is the corresponding positioning error of the robot end-effector in Cartesian space. As mentioned above, the components of the matrix $L$ are nonlinear functions depending on the variables $s$ and $x$; therefore, the matrix $L$ may exhibit singularities. The positioning error models and the singularities of the visual servoing controllers [19,20,28] will be further detailed in Section 4. We now provide some recalls about the features observed in the three different types of controllers [19,20,28].
In addition, based on the kinematic relationship, a classical controller that takes the image features $s$ as feedback can be proposed:
${}^c\tau_c = -\lambda L^+ e \qquad (3)$
in which the vector $e$ stacks the errors between the current image features $s$ and the desired ones $s^*$, i.e., $e = s - s^*$, $L^+$ is the pseudo-inverse of the matrix $L$, and $\lambda$ is a positive gain.
This expression can be transformed into a controller for the joint velocities:
$\dot{q} = -\lambda J_{pinv} L^+ e \qquad (4)$
where $J_{pinv}$ is the inverse Jacobian matrix of the robot, linking the end-effector twist to the actuator velocities, i.e., $J_{pinv}\,{}^c\tau_c = \dot{q}$.
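As an illustration of controller (3)–(4), a minimal sketch of one IBVS iteration is given below; the function and variable names are ours, and the interaction matrix and inverse Jacobian are assumed to be provided by the models discussed in the rest of the paper:

```python
import numpy as np

def ibvs_joint_velocity(s, s_des, L, J_pinv, lam=1.0):
    """One image-based visual servoing step.

    s, s_des : current and desired image feature vectors
    L        : interaction matrix L(s, x)
    J_pinv   : inverse Jacobian mapping the platform twist to actuator velocities
    lam      : positive control gain (lambda)
    """
    e = s - s_des                              # feature error e = s - s*
    tau_c = -lam * np.linalg.pinv(L) @ e       # camera/platform twist, Eq. (3)
    return J_pinv @ tau_c                      # joint velocity command, Eq. (4)
```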

3.2. Recalls on Leg-Direction-Based Visual Servoing and Line-Based Visual Servoing

The legs of parallel robots are usually slim cylindrical rods. The features that can be extracted from their observation are the leg directions ${}^c\underline{u}_i$ (Figure 2) [20] and the lines $\mathcal{L}_i$ passing through the cylinder axis of link $i$, expressed by their Plücker coordinates $({}^c\underline{u}_i, {}^c\underline{h}_i)$ (see the definition in [11] and Figure 2).
For leg-direction-based visual servoing, the twist of the robot end-effector ${}^c\tau_c$ is related to the leg direction velocity by:
${}^c\dot{\underline{u}}_i = M_{u_i}^T\,{}^c\tau_c \qquad (5)$
where $M_{u_i}^T$ is the interaction matrix of leg $i$.
For line-based visual servoing, the kinematic model aims at finding the relationship between the time variation of the Plücker coordinates $({}^c\underline{u}_i, {}^c\underline{h}_i)$ of the robot legs and the twist of its platform [9]:
$\begin{bmatrix} {}^c\dot{\underline{u}}_i \\ {}^c\dot{\underline{h}}_i \end{bmatrix} = M_{uh_i}^T\,{}^c\tau_c \qquad (6)$
where $M_{uh_i}^T$ is the interaction matrix of leg $i$ for this type of observation.
In the image plane, the contours of these cylindrical links are projected onto lines $\ell_i^{(k)}$ ($k = 1, 2$), defined as the intersections of the image plane with the planes $\mathcal{S}_i^{(k)}$ of normal ${}^c n_i^{(k)}$ passing through the camera frame origin $C$ and tangent to the observed cylinder (see Figure 2). From the coordinates of the intersection points between the lines $\ell_i^{(k)}$ and the image plane boundary, together with the position of the camera, we can compute the normal vector ${}^c n_i^{(k)}$ of the plane $\mathcal{S}_i^{(k)}$. Therefore, for each cylindrical robot leg, the normal vectors ${}^c n_i^{(k)}$ are the information that can be extracted from the camera observation. If $n$ cylinders are observed, the vector $s$ of the observed features is defined as $s = [{}^c n_1^{(1)T}\; {}^c n_1^{(2)T}\; \cdots\; {}^c n_n^{(1)T}\; {}^c n_n^{(2)T}]^T$. Moreover, as can be seen in Figure 2, observing only part of a robot leg is enough to obtain the intersection points of its edges with the image plane boundary; a view of the entire leg is not required.
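As an illustration of how a normal vector ${}^c n_i^{(k)}$ can be recovered from the two boundary points of a projected leg edge, a minimal sketch is given below; the intrinsic matrix $K$ and its calibration are assumptions of this sketch and are not specified in the paper:

```python
import numpy as np

def interpretation_plane_normal(p1_px, p2_px, K):
    """Unit normal of the plane passing through the camera origin and an observed
    image line, computed from two pixel points of that line.

    p1_px, p2_px : pixel coordinates of the two intersections of the line with
                   the image boundary
    K            : 3x3 camera intrinsic matrix (assumed known from calibration)
    """
    K_inv = np.linalg.inv(K)
    r1 = K_inv @ np.array([p1_px[0], p1_px[1], 1.0])   # viewing ray through p1
    r2 = K_inv @ np.array([p2_px[0], p2_px[1], 1.0])   # viewing ray through p2
    n = np.cross(r1, r2)                               # normal of the plane spanned by the two rays
    return n / np.linalg.norm(n)
```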
Therefore, for leg-direction-based visual servoing, the relationship between the time derivative of ${}^c\underline{u}_i$ and the time derivative of the image features $[{}^c n_i^{(1)T}\; {}^c n_i^{(2)T}]^T$ is:
${}^c\dot{\underline{u}}_i = J_{u_i}\begin{bmatrix} {}^c\dot{\underline{n}}_i^{(1)} \\ {}^c\dot{\underline{n}}_i^{(2)} \end{bmatrix} \qquad (7)$
where $J_{u_i}$ transforms the time derivative of $({}^c\underline{n}_i^{(1)}, {}^c\underline{n}_i^{(2)})$ into the leg orientation velocity [28].
For line-based visual servoing, similarly to leg-direction-based visual servoing, we obtain:
${}^c\dot{\underline{u}}_i = J_{u_i}\begin{bmatrix} {}^c\dot{\underline{n}}_i^{(1)} \\ {}^c\dot{\underline{n}}_i^{(2)} \end{bmatrix}, \qquad {}^c\dot{\underline{h}}_i = J_{h_i}\begin{bmatrix} {}^c\dot{\underline{n}}_i^{(1)} \\ {}^c\dot{\underline{n}}_i^{(2)} \end{bmatrix} \qquad (8)$
where $J_{u_i}$ and $J_{h_i}$ transform the time derivative of $({}^c\underline{n}_i^{(1)}, {}^c\underline{n}_i^{(2)})$ into the time derivatives of the vectors $({}^c\underline{u}_i, {}^c\underline{h}_i)$.
To fully control the six DOFs of the Gough–Stewart platform, observing a minimum of three independent legs is necessary. Therefore, when using LegBVS, the end-effector twist ${}^c\tau_c$ is obtained from:
$M_u^T\,{}^c\tau_c = J_u\,{}^c\dot{\underline{n}} \qquad (9)$
The matrix $M_u^T$ is obtained by stacking the matrices $M_{u_i}^T$ of the $k$ observed legs ($k = 3, \dots, 6$); the way of computing $M_u^T$ is presented in Appendix A. $J_u$ is a block-diagonal matrix containing the matrices $J_{u_i}$. By using the pseudo-inverse $M_u^{T+}$ of the matrix $M_u^T$, we have:
${}^c\tau_c = M_u^{T+} J_u\,{}^c\dot{\underline{n}} = L_u^{T+}\,{}^c\dot{\underline{n}} \qquad (10)$
For line-based visual servoing, the end-effector twist ${}^c\tau_c$ can be obtained from:
$M_{uh}^T\,{}^c\tau_c = \begin{bmatrix} J_u \\ J_h \end{bmatrix}{}^c\dot{\underline{n}} \qquad (11)$
The matrix $M_{uh}^T$ is obtained by stacking the matrices $M_{uh_i}^T$ of the $k$ observed legs ($k = 3, \dots, 6$); the way of computing $M_{uh}^T$ is presented in Appendix A. $J_u$ and $J_h$ are block-diagonal matrices containing the matrices $J_{u_i}$ and $J_{h_i}$. Then, by using the pseudo-inverse $M_{uh}^{T+}$ of the matrix $M_{uh}^T$, we have:
${}^c\tau_c = M_{uh}^{T+}\begin{bmatrix} J_u \\ J_h \end{bmatrix}{}^c\dot{\underline{n}} = L_{uh}^{T+}\,{}^c\dot{\underline{n}} \qquad (12)$

3.3. Image Moment Visual Servoing

IMVS differs from the two previous approaches: it is based on the observation of a target $\mathcal{T}$ mounted on the moving platform of the robot (Figure 3). The image moments are extracted from the image plane through the camera observation [19]. The target $\mathcal{T}$ can be a dense object defined by a set of closed contours or a discrete set of $m$ image points [34]. We denote by $\mathcal{U}$ the projection of the target onto the image plane. The image moment $m_{wt}$ of $\mathcal{U}$ of order $w + t$ is defined by:
$m_{wt} = \iint_{\mathcal{U}} u^w v^t \, du\, dv \qquad (13)$
where $u$ and $v$ are the coordinates in the image plane of any point belonging to the surface $\mathcal{U}$. The interaction matrix associated with any moment is provided in [19].
For a Gough–Stewart platform with six DOFs, a set of six independent moments should be selected as the image features. In this work, $\mathcal{T}$ is a discrete model composed of three points ($A_1$, $A_2$, $A_3$) (Figure 4). Selecting proper image features is always a complex problem: we need to find six combinations of moments to control the six DOFs of the robot. Ideally, the selected visual features allow a decoupled control scheme, i.e., each controlled DOF is associated with only one visual feature, which provides a large domain of convergence, a good behavior of the visual features, and an adequate camera trajectory. However, no such combination of image moments has been found so far. In this paper, the objective is to obtain a sparse interaction matrix that changes slowly around the desired position, and the selection of image moments is the same as in [34]. It has been proven that the coordinates $x_g$, $y_g$ of the center of gravity and the area $a = m_{00}$ of the object are classical image features sufficient to control the three translational DOFs. In addition, in order to control the rotational DOFs, we use the object orientation $\alpha$ and two moment invariants $c_1$ and $c_2$ (see definitions in [34]), which have been proven to be invariant to translation and to 2D rotation. In conclusion, for the image moment visual servoing applied in this case, the image feature vector is $m = [x_g \; y_g \; a \; \alpha \; c_1 \; c_2]^T$. Then we have:
$\dot{m} = L_m\,{}^c\tau_c \qquad (14)$
where $\dot{m} = [\dot{x}_g\;\dot{y}_g\;\dot{a}\;\dot{\alpha}\;\dot{c}_1\;\dot{c}_2]^T$ is the time derivative of the six observed image features and $L_m = [L_{x_g}\;L_{y_g}\;L_a\;L_\alpha\;L_{c_1}\;L_{c_2}]^T$ is the interaction matrix related to this set of image moments [19,34].
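For illustration, a minimal sketch of how the first four features $[x_g, y_g, a, \alpha]$ can be computed for a discrete point target is given below; the exact definitions of $a$, $c_1$ and $c_2$ for a discrete target follow [34] and are not reproduced here, so this is only a partial sketch rather than the implementation used in the paper:

```python
import numpy as np

def moment_features(points_px):
    """Partial set of moment features [x_g, y_g, a, alpha] for a discrete point
    target (the invariants c1 and c2 of [34] are omitted here)."""
    u, v = points_px[:, 0], points_px[:, 1]
    m00 = float(len(points_px))                        # zero-order moment of a discrete set
    xg, yg = np.sum(u) / m00, np.sum(v) / m00          # center of gravity
    mu20 = np.sum((u - xg) ** 2)                       # centered second-order moments
    mu02 = np.sum((v - yg) ** 2)
    mu11 = np.sum((u - xg) * (v - yg))
    alpha = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # object orientation
    return np.array([xg, yg, m00, alpha])

# Example: pixel projections of the three points A1, A2, A3
features = moment_features(np.array([[960.0, 400.0], [700.0, 800.0], [1200.0, 810.0]]))
```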
In the model used to estimate the robot platform configuration from the image features $m = [x_g\;y_g\;a\;\alpha\;c_1\;c_2]^T$, the coordinates of the three points ($A_1$, $A_2$, $A_3$) (Figure 4) are involved, as well as the camera pose. The values of these parameters will be optimized later during the design optimization process.
It should be noted that, even though the robot geometric parameters do not appear explicitly in the interaction model of the image moment visual servoing controller, they still influence its performance: the location of the robot workspace is defined by the robot geometric parameters, and if the workspace is far from the camera, the accuracy will be worse than if it is close to the camera. Accordingly, the robot geometric parameters still need to be optimized in order to optimize the overall robot accuracy.
In the next section, we deal with the computation of some performance indices of the visual servoing controller.

4. Controller Performance

Concerning the requirements of positioning accuracy for the robot design, two types of controller performance will be defined and considered:
  • The presence of (or even the proximity to) controller singularities: the singularities of the interaction matrices impact both the positioning accuracy and the controller stability [4];
  • The positioning error, which results from the camera observation error and from the interaction model of the corresponding visual servoing controller.
Then, in this section, singularities of the corresponding controllers and the positioning error models are described.

4.1. Controller Singularities

It was established in [35] that a rank deficiency of the interaction matrix $L$ leads to a visual servoing controller singularity. In this section, based on the study of the controllers defined in Section 3, we show the conditions of rank deficiency of the corresponding interaction matrices.

4.1.1. Leg-Based Visual Servoing Singularities

The singularity problem of the mapping between the space of the observed image features and the Cartesian space has a great influence on the accuracy of visual servoing. Thanks to the work of [36], a tool named the “hidden robot” was developed in order to simplify the study of controller singularities when visual servoing is applied to the control of a parallel robot. It reduces the study of the complex singularities of the interaction matrix to the study of the singularities of a virtual parallel robot hidden in the controller. The main idea of the “hidden robot” is to find the virtual actuators that correspond to the observed image features. For example, when we apply leg-direction-based visual servoing to control the Gough–Stewart platform, we choose the unit vector $\underline{u}_i$ as the image feature. A unit vector in space can be parameterized by two independent coordinates (see Figure 5), which can be the angles defined by the U joint rotations. Therefore, the displacement of the U joint can be measured through the vector $\underline{u}_i$; as a result, the U joint is the virtual actuator (in other words, the “hidden robot”) associated with the vector $\underline{u}_i$. From (4), we see that the visual servoing can meet numerical issues if the matrix $L^T$ is rank deficient, in which case a null error vector $e$ can lead to a non-null platform twist ${}^c\tau_c$, or if the matrix $L^{T+}$ is rank deficient, in which case the controller may meet a local minimum, i.e., the error $e$ is not zero but the twist ${}^c\tau_c$ is zero. The interaction matrix $L$ involved in the controller gives the value of $\dot{s}$ as a function of ${}^c\tau_c$; therefore, $L^T$ can be seen as the inverse Jacobian matrix of the hidden robot (and $L^{T+}$ as its Jacobian matrix). Then, $L^T$ is rank deficient only when the corresponding hidden robot reaches Type 2 singularity loci, and $L^{T+}$ is rank deficient only when the corresponding hidden robot reaches Type 1 singularity loci. Therefore, the hidden robot helps simplify the analysis of the interaction matrix singularities by reducing the problem to the singularity analysis of another robot.
In [37], the problem of LegBVS controller singularities for the control of the Gough–Stewart platform has been presented in detail. The Gough–Stewart platform consists of six $U\underline{P}S$ legs. The corresponding hidden robot of the $U\underline{P}S$ leg is made of $\underline{U}PS$ legs. Since $\underline{U}PS$ legs have two degrees of actuation, observing only three legs is enough to fully control the Gough–Stewart platform when using leg direction observation [36].
The singular configurations of 3-$\underline{U}PS$-like robots have been deeply studied in [38,39]. Type 2 singularities appear when the planes $\mathcal{P}_1$, $\mathcal{P}_2$, $\mathcal{P}_3$ (whose normal directions are defined by the vectors $\underline{u}_1$, $\underline{u}_2$, $\underline{u}_3$) and the plane $\mathcal{P}_4$ (passing through the points $P_1$, $P_2$, $P_3$ in Figure 6) intersect in one point, which can be at infinity (Figure 6).
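Numerically, this geometric condition can be checked through the rank of the stacked homogeneous plane equations: the four planes share a common (possibly infinite) point exactly when the $4 \times 4$ coefficient matrix loses rank. The sketch below assumes, for illustration only, that each plane $\mathcal{P}_i$ is anchored at the corresponding platform point $P_i$:

```python
import numpy as np

def plane(n, q):
    """Homogeneous coefficients [n, -n.q] of the plane with normal n through point q."""
    n, q = np.asarray(n, float), np.asarray(q, float)
    return np.append(n, -n @ q)

def type2_proximity(u1, u2, u3, P1, P2, P3):
    """Smallest singular value of the stacked plane equations; a value close to
    zero means the four planes (nearly) intersect in one point, i.e. the hidden
    3-UPS robot is close to a Type 2 singularity."""
    P1, P2, P3 = np.asarray(P1, float), np.asarray(P2, float), np.asarray(P3, float)
    n4 = np.cross(P2 - P1, P3 - P1)                    # normal of the platform plane P4
    M = np.vstack([plane(u1, P1), plane(u2, P2), plane(u3, P3), plane(n4, P1)])
    return np.linalg.svd(M, compute_uv=False)[-1]
```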
Singularities of LineBVS applied to the control of the Gough–Stewart platform have never been studied before. The idea of the hidden robot is to find which virtual actuators correspond to the features observed in visual servoing. For LineBVS, the Plücker coordinates of a line $\mathcal{L}_i$ are taken as the observed image features; such a line can be defined from the fact that a 3D point and a 3D orientation define a unique 3D line [11]. Therefore, we should find the virtual actuators corresponding to the 3D line $\mathcal{L}_i$.
As seen in Figure 7, $B_i$ is the 3D point and $\underline{u}_i$ the unit vector, and $\mathcal{L}_i$ ($i = 1, 2, \dots, 6$) is the 3D line they define. The active $\underline{U}$ joint is the virtual actuator that makes the vector $\underline{u}_i$ move. In general, an actuated $PPP$ chain should be added before the leg links so that the point $B_i$ can move in space. Therefore, for a $U\underline{P}S$ leg, the corresponding hidden robot leg when using line-based visual servoing is a $\underline{PPP}\,\underline{U}PS$ leg (Figure 7). However, in the case of a Gough–Stewart platform, all the U joints are fixed on the base, which means that the points $B_i$ are fixed in space. The actuated $PPP$ chain is then no longer needed, and the 3D lines $\mathcal{L}_i$ passing through the robot links can be defined by the vectors $\underline{u}_i$ alone. Therefore, the hidden robot of the Gough–Stewart platform is the same as when applying leg-direction-based visual servoing, namely the 3-$\underline{U}PS$ robot, which means that these two visual servoing controllers share the same controller singularity conditions. We therefore expect that, in terms of controller performance, LegBVS and LineBVS are equivalent (which will be verified in Section 5.4).

4.1.2. Image Moment Visual Servoing Singularities

For IMVS, a controller singularity appears when the matrix $L_m$ is rank deficient. The expression of $L_m$ is rather complex, and it is difficult to find the rank-deficiency conditions analytically. We therefore define a criterion of “proximity” to controller singularities. A list of indices that could be adapted to the analysis of robot singularities was presented in [40]. In this case, we take the inverse conditioning of the interaction matrix as the controller singularity index, in order to estimate the numerical stability of the interaction matrix $L_m$.
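This index is simply the ratio between the smallest and the largest singular values of the interaction matrix, as in the short sketch below:

```python
import numpy as np

def inverse_conditioning(L_m):
    """Inverse condition number of the interaction matrix: ratio of its smallest
    to its largest singular value (0 = singular, 1 = perfectly conditioned)."""
    sv = np.linalg.svd(L_m, compute_uv=False)
    return sv[-1] / sv[0]
```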

4.2. Positioning Accuracy Model

4.2.1. Observation Errors in the Leg-Based Visual Servoing

The positioning error models obtained when observing the robot links in the leg-based visual servoing approaches have been presented in detail in [22,24]. The positioning error comes from the camera observation error on the image features (for LegBVS, the features are the leg directions; for LineBVS, the leg Plücker coordinates). When the camera observes the robot links, the link edges are projected onto the image plane as lines $\ell_i^{(k)}$ (Figure 3), which are then pixelized (Figure 8). We assume that the error of estimation of the lines $\ell_i^{(k)}$ is due to a random shift of $\pm 0.5$ px of the pixels corresponding to the intersections of $\ell_i^{(k)}$ with the image plane boundary (points $P_{ik}^{(1)}$ and $P_{ik}^{(2)}$ in Figure 8).
As presented in Section 3, the image features used in the leg-based visual servoing are the vectors ${}^c\underline{n}_i^{(k)}$ (characterizing the lines $\ell_i^{(k)}$). Thus, we can find the mapping relating the time derivatives of the vectors ${}^c\underline{n}_i^{(k)}$ to the time derivatives of $p_{ik}^{(1)}$ and $p_{ik}^{(2)}$:
${}^c\dot{\underline{n}}_i^{(k)} = J_{n_{ik}}\begin{bmatrix} \dot{p}_{ik}^{(1)} \\ \dot{p}_{ik}^{(2)} \end{bmatrix} \qquad (15)$
Then we get the error model,
$\Delta{}^c\underline{n}_i^{(k)} = J_{n_{ik}}\begin{bmatrix} \Delta p_{ik}^{(1)} \\ \Delta p_{ik}^{(2)} \end{bmatrix} \qquad (16)$
where $\Delta{}^c\underline{n}_i^{(k)}$, $\Delta p_{ik}^{(1)}$ and $\Delta p_{ik}^{(2)}$ are the small variations of the vectors ${}^c\underline{n}_i^{(k)}$, $p_{ik}^{(1)}$ and $p_{ik}^{(2)}$, respectively. Based on the controllers presented in Section 3, for the image features $s = [{}^c n_1^{(1)T}\; {}^c n_1^{(2)T}\; \cdots\; {}^c n_n^{(1)T}\; {}^c n_n^{(2)T}]^T$, we have
$\Delta s = J_n\,\Delta p \qquad (17)$
where $\Delta s$ stands for the small variation of the image features $s$, and $\Delta p$ stacks all the errors $\Delta p_{ik}^{(1)}$ and $\Delta p_{ik}^{(2)}$.
In this case, the camera observation noise is set to $\pm 0.5$ pixel, which is a typical noise level for cameras. Thus, every component of the vectors $\Delta p_{ik}^{(1)}$ and $\Delta p_{ik}^{(2)}$ can take the value $+0.5$ or $-0.5$. With the help of Equations (2) and (17), the observation error model for LegBVS and LineBVS can be written under the generic form:
$\Delta x = L_P\,\Delta p \qquad (18)$
where $L_P = L^+ J_n$.

4.2.2. Observation Errors in the Image Moment Visual Servoing

Image moments are calculated from the coordinates of the points belonging to the projection of the observed object onto the image plane. Let $(x_{1p}, y_{1p})$, $(x_{2p}, y_{2p})$ and $(x_{3p}, y_{3p})$ be the pixel coordinates of the projections of the three points $A_1$, $A_2$, $A_3$ (Figure 9). Then we have
$\dfrac{\partial m}{\partial t} = \dfrac{\partial m}{\partial Q}\,\dfrac{\partial Q}{\partial t} = S\,\dfrac{\partial Q}{\partial t} \qquad (19)$
where $Q = [x_{1p}\; x_{2p}\; x_{3p}\; y_{1p}\; y_{2p}\; y_{3p}]^T$ and $S$ is the matrix that transforms the time derivatives of the pixel coordinates of the projected points into the time derivatives of the set of image moments $m$.
Thus, Equation (14) can be written in the form
$\tau = L_m^+\,\dot{m} = L_m^+ S\,\dot{Q} \qquad (20)$
We estimate the error on each component of $Q$ to be $\pm 0.5$ px (see Figure 9) for the location of each point projected onto the image plane. The error model of the image moment visual servoing controller can then be written in the form:
$\Delta x = L_m^+ S\,\Delta Q \qquad (21)$

4.2.3. Positioning Accuracy

For the Gough–Stewart platform, we have $\Delta x = [\Delta t_x\;\Delta t_y\;\Delta t_z\;\Delta w_x\;\Delta w_y\;\Delta w_z]^T$, where $[\Delta t_x\;\Delta t_y\;\Delta t_z]$ are the translation errors along the three axes and $[\Delta w_x\;\Delta w_y\;\Delta w_z]$ are the rotation errors about the three axes. The positioning error is then defined as in [30]:
$E_t = \sqrt{\Delta t_x^2 + \Delta t_y^2 + \Delta t_z^2} \qquad (22)$
and the orientation error is defined as
$E_w = \sqrt{\Delta w_x^2 + \Delta w_y^2 + \Delta w_z^2} \qquad (23)$
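Since the error models (18) and (21) are linear in the observation error, one way to evaluate the worst case is to enumerate the corners of the $\pm 0.5$ px hyper-polyhedron, as in the brute-force sketch below; this enumeration grows as $2^n$ and is shown purely as an illustration for a small number of observed features:

```python
import numpy as np
from itertools import product

def worst_case_errors(L_P, noise=0.5):
    """Maximal positioning error E_t and orientation error E_w over all corners
    of the observation-error hyper-polyhedron, with L_P the linear map from the
    stacked pixel errors to the pose error (Eq. (18)); cost is 2**n corners."""
    n = L_P.shape[1]
    E_t_max, E_w_max = 0.0, 0.0
    for signs in product((-noise, noise), repeat=n):
        dx = L_P @ np.array(signs)
        E_t_max = max(E_t_max, np.linalg.norm(dx[:3]))   # translation part
        E_w_max = max(E_w_max, np.linalg.norm(dx[3:]))   # rotation part
    return E_t_max, E_w_max
```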
In the next section, the optimal design problem for the Gough–Stewart platform will be formulated.

5. Optimal Design Procedure

In this section, we describe the design procedure developed to obtain the optimal parameters of the Gough–Stewart platform together with the parameters of the controllers.

5.1. Design Variables

Robot design parameters: As presented in Section 2, the Gough–Stewart platform is defined by the following geometric parameters: $r_a$, $r_b$, $\alpha_0$, $\alpha_1$, $\alpha_2$ (Figure 1c), with $r_a = P_cP_i$, $r_b = OB_i$, $\alpha_0 = \angle xP_cP_0$, $\alpha_1 = \frac{1}{2}\angle B_1B_cB_2$ and $\alpha_2 = \frac{1}{2}\angle P_1P_cP_2$. All these parameters affect the size of the robot workspace and its physical performance, as well as the controller performance. In addition, when LegBVS and LineBVS are applied, the radius of the cylindrical distal links of the Gough–Stewart platform also influences the positioning accuracy [22]; thus, the radius of the cylindrical distal links $P_iB_i$ ($i = 1, 2, \dots, 6$), denoted as $R$ (see Figure 3), is a decision variable of the optimization process. When image moment visual servoing is applied, the coordinates of the discrete three-point model $[x_1\; y_1\; x_2\; y_2\; x_3\; y_3]$ (expressed in the moving platform frame $xOy$), which define the configuration of the model (Figure 4), affect the controller interaction model. They must therefore be optimized when dealing with image moment visual servoing.
Controller design parameters: The configuration of the camera is normally parameterized by six independent parameters, and it affects the controller interaction model. In order to observe the robot (both the robot legs and the end-effector) in a symmetrical way:
  • The camera frame orientation is set parallel to the robot fixed frame;
  • The camera origin is constrained to stay on a vertical line passing through $O$ (the $(x_c, y_c)$ coordinates of the camera frame origin are set to $(0, 0)$).
Additionally, some other variables used in the optimal design process need to be defined: $L$ is the length of the prismatic actuators $B_iP_i$ ($L = B_iP_i$, $i = 1, 2, \dots, 6$) and $l_0$ is the side length of the cubic RDW (see Table 1) [1].
Design variables: Based on the explanations above, two different sets of design variables (grouped in a vector $y$), depending on the type of controller, are defined:
  • For the leg-based controllers,
    $y = [r_a, r_b, \alpha_0, \alpha_1, \alpha_2, z_c, R]^T$;
  • For the moment-based controller,
    $y = [r_a, r_b, \alpha_0, \alpha_1, \alpha_2, z_c, x_1, x_2, x_3, y_1, y_2, y_3]^T$.

5.2. Objective Function

As mentioned in Section 2, the robot should be as compact as possible. The footprint of the Gough–Stewart platform is evaluated by the radius $r_b$ of its base; therefore, the optimization problem is formulated so as to minimize the value of $r_b$.

5.3. Constraints

The constraints provided in Section 2 are reviewed here. Throughout the RDW, the following geometric and kinematic constraints must be satisfied:
  • The RDW should be free of singularities (of both the robot and the controller). Singularities of the controllers are detailed in Section 4.1; in this case, we use the inverse condition number of the interaction matrix $L$, denoted as $\kappa^{-1}(L)$. In the RDW, we want to have
    $\kappa^{-1}(L) > 10^{-3} \qquad (24)$
    The “mechanical” singularities of the Gough–Stewart platform are different. This problem is complex and has been studied for decades [1,41,42,43,44]. In [45,46], a kinetostatic approach accounting for the force transmission was proposed to determine the singularity-free zones of a parallel robot: when the pressure angle is close to 90 degrees, the parallel robot is close to a singular configuration. Therefore, we compute the pressure angles $\beta = [\beta_1, \dots, \beta_6]^T$ for all six legs of the Gough–Stewart platform [45,46]. In the RDW, we want to have
    $\beta_i < 80^\circ, \quad i = 1, \dots, 6 \qquad (25)$
  • The robot positioning error ought to be lower than 1 mm and the orientation error lower than 0.01 rad. The positioning error model is defined in Section 4.2. Since the error models are linear with respect to the observation error, the maximal positioning error $E_t^{\max} = \max E_t$ and the maximal orientation error $E_w^{\max} = \max E_w$ of the robot are found at one of the corners of the hyper-polyhedron defining the observation errors [47]. The accuracy constraint can be formulated as:
    $E_t^{\max} \le 1\ \text{mm}, \qquad E_w^{\max} \le 0.01\ \text{rad} \qquad (26)$
  • The three discrete points $A_1$, $A_2$, $A_3$ should lie within the moving platform of the Gough–Stewart platform;
  • The end-effector should be within the field of view of the camera: all the observed robot distal legs must be visible when using leg-based visual servoing, and the three points $A_1$, $A_2$, $A_3$ must be visible when using image moment visual servoing;
  • Several distances or angles are constrained in order to avoid collisions or impractical designs. The corresponding bounds are:
    $0.4\ \text{m} \le L \le 0.76\ \text{m}, \quad 0.1\ \text{m} \le r_a \le 0.3\ \text{m} \le r_b \le 0.5\ \text{m}, \quad r_a < 0.9\, r_b, \quad -\pi/6 \le \alpha_0 \le \pi/6, \quad 0 \le \alpha_1 \le \pi/9, \quad 0 \le \alpha_2 \le \pi/4, \quad 0.2\ \text{m} \le z_c \le 0.3\ \text{m}, \quad 0.01\ \text{m} \le R \le 0.03\ \text{m} \qquad (27)$
The aforementioned RDW, throughout which all the constraints (24)–(27) must be satisfied, should cover a cube of side length $l_0 = 100$ mm, with the range of T&T angles $\phi \in (-\pi, \pi]$, $\theta \in [0, \pi/12]$ and $\sigma \in [0, \pi/12]$. The algorithm for computing the size of the Largest Regular Dexterous Workspace (LRDW) is detailed in [27] and is adapted here to obtain the cubic LRDW within the RDW of the manipulator for a given decision variable vector $y$.
We denote by $l_{LRDW}$ the side length of the cubic LRDW whose T&T angle ranges are $\phi \in (-\pi, \pi]$, $\theta \in [0, \pi/12]$ and $\sigma \in [0, \pi/12]$. Since all constraints (24)–(27) are then necessarily satisfied throughout the LRDW, a single constraint is enough to replace all the others:
$l_{LRDW} \ge 0.1\ \text{m} \qquad (28)$

5.4. Problem Formulation and Optimization Results

For designing a compact Gough–Stewart platform with the detailed specifications given in Table 1, the following optimization problem is formulated:
$\min_{y}\; r_b \quad \text{subject to} \quad l_{LRDW} \ge 100\ \text{mm} \qquad (29)$
where the definition of y is given in Section 5.1.
As introduced in Section 3, observing three legs is enough to fully control the Gough–Stewart platform when the leg-based visual servoing controllers are applied. In this case, as a matter of comparison, we optimize the geometric parameters of the Gough–Stewart platform when observing only three legs ([Case 1]: observing robot links $B_1P_1$, $B_3P_3$, $B_5P_5$) and when observing all six legs ([Case 2]: observing robot links $B_1P_1$, $B_2P_2$, $B_3P_3$, $B_4P_4$, $B_5P_5$, $B_6P_6$) for the leg-based visual servoing controllers.
The optimization procedure presented above is then applied to the design of the Gough–Stewart platform for each of the three controllers defined in Section 3. These optimization problems have been solved by means of the ‘active-set’ algorithm implemented in the MATLAB fmincon function. A multistart strategy, with random initial points generated by a genetic algorithm, was also used in order to increase the chances of reaching the global minimum. The optimal design results are given in Table 2 and illustrated in Figure 10, Figure 11 and Figure 12.
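For illustration, a minimal sketch of an analogous multistart constrained optimization setup is given below; it uses Python's scipy.optimize (SLSQP) instead of the MATLAB fmincon ‘active-set’ algorithm actually used here, lrdw_side_length is a hypothetical placeholder for the LRDW evaluation of [27], and the bounds follow the ranges of constraint (27) as an assumption:

```python
import numpy as np
from scipy.optimize import minimize

def lrdw_side_length(y):
    """Hypothetical placeholder for the LRDW computation of [27]: it should return
    the side length (in m) of the largest cubic RDW satisfying constraints (24)-(26)
    for the design vector y. A constant stub keeps the sketch runnable."""
    return 0.12

def footprint(y):
    # y = [r_a, r_b, alpha_0, alpha_1, alpha_2, z_c, R]; the objective is the base radius r_b
    return y[1]

bounds = [(0.1, 0.3), (0.3, 0.5), (-np.pi / 6, np.pi / 6), (0.0, np.pi / 9),
          (0.0, np.pi / 4), (0.2, 0.3), (0.01, 0.03)]
cons = [{"type": "ineq", "fun": lambda y: lrdw_side_length(y) - 0.1},   # l_LRDW >= 0.1 m
        {"type": "ineq", "fun": lambda y: 0.9 * y[1] - y[0]}]           # r_a < 0.9 r_b

rng = np.random.default_rng(0)
best = None
for _ in range(20):                       # multistart with random initial points
    y0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
    res = minimize(footprint, y0, method="SLSQP", bounds=bounds, constraints=cons)
    if res.success and (best is None or res.fun < best.fun):
        best = res
```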
From the optimization results we see that, in terms of footprint, the Gough–Stewart platforms designed for LegBVS, LineBVS, and IMVS are close to each other and the differences are almost negligible. In particular, for the robots designed for the leg-based visual servoing controllers, the geometric parameters are identical under the same observation conditions (Case 1 and Case 2). This result confirms the hypothesis proposed in Section 4.1.1: since the points $B_i$ are fixed, their coordinates are constant, and the time derivatives of $\underline{h}_i$ and $\underline{u}_i$ are linearly dependent, which means that LegBVS and LineBVS share the same controller performance.
In the next section, we will perform co-simulations with ADAMS and Simulink to test the robot accuracy performance.

6. Results Cross-Validations through Simulations

6.1. Simulation Method

In order to validate the optimization results and to test the robot accuracy performance, co-simulations are performed within a connected ADAMS–Simulink environment (Figure 13). Five Gough–Stewart platform models with the optimal geometric parameters obtained from the optimal design process (one model per optimized design) are created in the ADAMS software.
Real-time data (block “Data acquisition”) of the ADAMS simulator are extracted:
  • For LegBVS and LineBVS, we extract the coordinates of the points $P_i$ and $B_i$ (Figure 1b);
  • For IMVS, the coordinates of the three points $A_1$, $A_2$ and $A_3$ (Figure 4) are extracted.
The scheme of the co-simulation is illustrated in Figure 13. The simulation frequency is set to 200 Hz. The real-time data of the mechanical model are the output of ADAMS and are sent to Simulink. In Simulink, a model of the camera is created: the real-time data are projected onto the pixel plane by the camera model in order to rebuild the image features, and the $\pm 0.5$ pixel random noise corresponding to the observation errors presented in Section 4 is added in the pixel plane. The noisy image features then become the feedback of the control loop, which generates the velocity command. The velocity command is the input of ADAMS and is used to control the motion of the robot mechanical model.
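A minimal sketch of the camera feedback used in such a loop (pinhole projection of the extracted 3D points plus the $\pm 0.5$ px noise) is given below; the pixel pitch is an assumed value, since only the resolution and the focal length are specified in the paper:

```python
import numpy as np

def project_to_pixels(points_c, f=0.010, pixel_pitch=5.9e-6, img_size=(1920, 1200)):
    """Pinhole projection of 3D points (expressed in the camera frame, in metres)
    to pixel coordinates, with +/-0.5 px uniform noise reproducing the simulated
    observation error; the points are assumed to lie in front of the camera."""
    pts = np.asarray(points_c, dtype=float)
    u = f * pts[:, 0] / pts[:, 2] / pixel_pitch + img_size[0] / 2.0
    v = f * pts[:, 1] / pts[:, 2] / pixel_pitch + img_size[1] / 2.0
    noise = np.random.uniform(-0.5, 0.5, size=(len(pts), 2))
    return np.column_stack([u, v]) + noise
```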
The RDW of the Gough–Stewart platform is a cube of side length 100 mm, and the orientation workspace is set based on the T&T angles $\phi \in (-\pi, \pi]$, $\theta \in [0, \pi/12]$, $\sigma \in [0, \pi/12]$. A home position $T_1$ and nine desired positions (including $T_1$) within the LRDW are defined in Table 3 with respect to the center of the LRDW. For each position, three orientations are defined in terms of $[\phi\;\theta\;\sigma]^T$: Pose 1 $[0, 0, 0]^T$, Pose 2 $[\pi/2, \pi/12, \pi/12]^T$, and Pose 3 $[-\pi/2, \pi/12, \pi/12]^T$. Therefore, for each robot, a total of 27 desired poses are selected in the co-simulations.
Each robot is then driven from its home pose to the desired poses with its dedicated controller. All the positioning and orientation accuracies are recorded during the co-simulation.
Additionally, in order to test the robustness of the accuracy with respect to geometry errors, the same co-simulations were run with errors added to the models. The error-added models are defined as follows: a random error is added to the location of each joint $B_i$ on the base of the robot, and the distance between the accurate joint $B_i$ and the perturbed joint $B_i'$, denoted as $l_{B_iB_i'}$, is set to $l_{B_iB_i'} = 0.1 \times r_b$ (see the red parts of Figure 10, Figure 11 and Figure 12).
In the next step, the designed robot prototypes were controlled with a controller different from the one they were dedicated to during the design process, in order to verify the original purpose of the control-based design. In what follows, for brevity, only the results of LineBVS applied to the robot designed for the image moments, and of IMVS applied to the robot designed for LineBVS, will be given.
Results are shown and analyzed in the next subsection.

6.2. Simulation Results

In this section, we denote as:
  • [Case A]: the Gough–Stewart platform optimized for LineBVS ([Model 1]) in [Case 1] and the corresponding error-added robot mechanical model ([Model 2]), controlled with their dedicated controller; robot links $B_iP_i$ ($i = 1, 3, 5$) are observed;
  • [Case B]: the Gough–Stewart platform optimized for LineBVS ([Model 3]) in [Case 2] and the corresponding error-added robot mechanical model ([Model 4]), controlled with their dedicated controller; all six robot links $B_iP_i$ ($i = 1, 2, \dots, 6$) are observed;
  • [Case C]: the Gough–Stewart platform optimized for IMVS ([Model 5]) and the corresponding error-added robot mechanical model ([Model 6]), controlled with their dedicated controller;
  • [Case D]: the Gough–Stewart platform optimized for IMVS ([Model 5]) controlled with LineBVS; all six links $B_iP_i$ ($i = 1, 2, \dots, 6$) are observed;
  • [Case E]: the Gough–Stewart platform optimized for LineBVS ([Model 3]) controlled with IMVS.
Since we have shown that LegBVS and LineBVS have the same control performance for the Gough–Stewart platform, and that the geometric parameters of the robots designed for these two controllers are identical under the same observation conditions, we only perform the co-simulations for the robots controlled with LineBVS. Each simulation was run for five seconds and the positioning error was recorded. The simulation results show that the robot converges in around 0.5 s; the moving platform then oscillates around the desired pose due to the simulated observation noise. For all the simulated motions from the home position to the desired poses in Case A to Case E, the maximal positioning and orientation errors over time are recorded: for point $T_{kj}$ ($k = 1, \dots, 9$ for the position, $j = 1, 2, 3$ for the pose) simulated in case $\alpha$ ($\alpha = A, \dots, E$), the maximal positioning error is denoted as $\delta p_{kj}^\alpha$ and the maximal orientation error as $\delta o_{kj}^\alpha$. All the results are summarized in Table 4: for each case and each model, the max, min, and mean values of the positioning error $\delta p_{kj}^\alpha$ and of the orientation error $\delta o_{kj}^\alpha$ obtained for $k = 1, \dots, 9$ and $j = 1, 2, 3$ are shown.
Studying the results, we see that robot Model 5 in Case C leads to the minimal positioning and orientation errors. For the robots in Case A and Case B, the mean value is very close to the requested value of 1 mm; however, for some points in the workspace the error is slightly above this limit (maximal error of 1.24 mm in both cases). In fact, the positioning accuracy model applied during the optimal design process (Section 5) to estimate the controller performance was really simple, which is the source of the inaccuracies in the positioning error estimation. Even with this simplistic model, however, the maximal robot positioning error (1.24 mm) is only slightly above the 1 mm threshold, while the mean values stay close to 1 mm. Additionally, the measured orientation errors obtained in all cases are far lower than the requested 0.01 rad. The results obtained with the models including geometry errors are similar to those obtained with the accurate models, which proves the robustness of the accuracy when applying the visual servoing controllers.
We then study the results of [Case D] and [Case E], which are the most important ones. For the Gough–Stewart platform optimized for IMVS but controlled with LineBVS, the mean error is far above the requested value of 1 mm, and the maximal error even grows to 1.56 mm. For the Gough–Stewart platform optimized for LineBVS but controlled with IMVS, the mean error is 1.39 mm and the maximal error grows to 1.47 mm. These positioning errors are larger than the requested 1 mm and worse than the results of [Case B] and [Case C]. These results confirm that it is necessary to optimize a robot for its dedicated controller; in other words, the control-based design of the pair {robot + controller} helps ensure the vision-based control accuracy.
Another observation, which is perhaps the most interesting, is that the discrete three-point model obtained from the optimal design for IMVS forms a triangle (Triangle 1) that is not a regular triangle. In order to study why such a configuration is obtained, we created a discrete three-point model whose configuration is a regular triangle (Triangle 2). The coordinates of its three points $A_1^r$, $A_2^r$, $A_3^r$ (with respect to the moving platform frame $xOy$) are (0, 0.222) m, (0.192, −0.111) m, and (−0.192, −0.111) m. The same noise was added to the projections of the points in pixels in order to observe the variation of the set of image moments $m$. For Triangle 1 and Triangle 2, the variations of the image moments $[x_g, y_g, a]$ are almost the same. However, the differences for the image moments $[\alpha, c_1, c_2]$ are huge: for Triangle 1, the variations of $[\alpha, c_1, c_2]$ are [0.01, 0.08, 0.08], while for Triangle 2 they are [1.6, 20, 400] (the results for $\alpha$ are illustrated in Figure 14 and Figure 15). In addition, we performed the same co-simulation as for Model 5 in Case C, but with the target changed to the new discrete three-point model (Triangle 2) in IMVS. The simulation results show that the maximal positioning error reaches 1.6 mm and the maximal orientation error reaches $6.0 \times 10^{-4}$ rad, while the corresponding results are 0.63 mm and $4.3 \times 10^{-4}$ rad when observing Triangle 1. These results prove that the configuration of the discrete point model influences the observation of the image moments and affects the controller accuracy. As a result, it is necessary to perform a topology optimization of the configuration of the observed target during the design process.

7. Conclusions

In the work presented above, a novel advanced optimal design methodology, “control-based design”, is applied in order to design a Gough–Stewart robot with the best accuracy performance of the pair {robot + controller}. We have shown that the controller performance (accuracy, singularities) is affected by the robot geometric design parameters; thus, during the design process of a robot, it is necessary to find the optimized geometric parameters that allow the best performance of the pair {robot + controller}.
Three classical types of visual servoing controllers, LegBVS, LineBVS, and IMVS, were applied to the Gough–Stewart platform. Positioning error models considering the camera observation error were developed based on the study of these three controllers. In addition, the singularities of these controllers were analyzed in order to avoid instability issues. Next, the design optimization problem for obtaining the optimal geometric parameters and the camera placement of the Gough–Stewart platform was formulated for each type of controller. Then, co-simulations between ADAMS and Simulink were performed for the Gough–Stewart platforms optimized for the three controllers. The results showed that the robots designed for these three visual servoing controllers have a similar size (the robots designed for LegBVS and LineBVS share the same size), and that the robot designed for IMVS has a better positioning accuracy than the two robots optimized for LegBVS and LineBVS. In particular, the co-simulation results show that when one controller is applied to a robot designed for another one, the positioning accuracy is no longer guaranteed, confirming the importance of the control-based design approach. In the future, experimental work on real prototypes is necessary in order to verify the simulation results.

Author Contributions

Conceptualization, M.Z. and C.H.; data curation, M.Z. and D.G.; methodology, M.Z. and S.S.; validation, M.Z. and S.S.; formal analysis, M.Z. and D.G.; writing—original draft preparation, M.Z. and C.H.; writing—review and editing, D.G.; supervision, M.Z.; project administration, C.H.; funding acquisition, D.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61803285, 62001332); the National Defense Pre-Research Foundation of China (H04W201018).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We acknowledge the contribution of Jun Qi and Chuan Lu for their help in software, investigation, resources, and visualization.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Leg-Based Visual Servoing Interaction Matrix

Appendix A.1. Leg-Direction-Based Visual Servoing Interaction Matrix

The leg-direction-based visual servoing kinematics aims at finding the relationship between the time variation of the unit vectors $\underline{u}_i$ of the robot legs and the twist of its platform.
The leg direction $\underline{u}_i$, extracted from the observation of the robot leg $B_iP_i$ (Figure 1b), is selected as the feature used for visual servoing. We have
$\underline{u}_i = (P_i^G - B_i^G)/L_i \qquad (A1)$
where $L_i$ is the length of the prismatic actuator $B_iP_i$, and $P_i^G$ and $B_i^G$ are the coordinates expressed in the global frame. $P_i^G$ ($i = 1, \dots, 6$) can be obtained from:
$P_i^G = T + R(\phi, \theta, \sigma)\,P_i^L, \quad i = 1, \dots, 6 \qquad (A2)$
where $T$ and $R$ denote the position of the center of the mobile frame and its rotation matrix in the global frame, and $P_i^L$ denotes the coordinates of the point $P_i$ in the local frame.
From Equation (A2), we have the vision-based kinematics of the Gough–Stewart platform expressed in the global frame:
$L_i\,\underline{u}_i = T + R(\phi, \theta, \sigma)\,P_i^L - B_i^G \qquad (A3)$
$\dot{\underline{u}}_i = \dfrac{1}{L_i}\,\dfrac{d}{dt}\left(P_i - B_i\right) - \dfrac{\dot{L}_i}{L_i}\,\underline{u}_i \qquad (A4)$
By inserting the interaction matrix associated to a 3D point [16], we get
$\dot{\underline{u}}_i = \dfrac{1}{L_i}\left[\,I_3 \;\; -[B_i]_{\times}\,\right]\tau - \dfrac{\dot{L}_i}{L_i}\,\underline{u}_i \qquad (A5)$
where $[\,\cdot\,]_{\times}$ denotes the antisymmetric matrix associated with the cross product [16]. With the help of the inverse Jacobian matrix, we can obtain the relationship between each $\dot{\underline{u}}_i$ and $\tau$:
$\dot{\underline{u}}_i = M_{u_i}^T\,\tau \qquad (A6)$
$M_{u_i}^T = \dfrac{1}{L_i}\left(I_3 - \underline{u}_i\,\underline{u}_i^T\right)\left[\,I_3 \;\; -[B_i]_{\times}\,\right] \qquad (A7)$
Then, the interaction matrix $M_u^T$ can be obtained by stacking the matrices $M_{u_i}^T$ of the $k$ observed legs ($k = 3, 4, 5, 6$).
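A minimal sketch of this stacking, following (A6) and (A7) as written above (the sign conventions are those of the reconstruction given here and may differ from the original implementation), is:

```python
import numpy as np

def skew(v):
    """Antisymmetric matrix associated with the cross product."""
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def leg_interaction_matrix(B, P):
    """M_ui^T for one observed leg, with u_i = (P_i - B_i)/L_i, following (A7)."""
    B, P = np.asarray(B, float), np.asarray(P, float)
    L = np.linalg.norm(P - B)
    u = (P - B) / L
    return (np.eye(3) - np.outer(u, u)) @ np.hstack([np.eye(3), -skew(B)]) / L

def stacked_interaction_matrix(Bs, Ps):
    """Stack the M_ui^T of the k observed legs (k = 3..6) to form M_u^T."""
    return np.vstack([leg_interaction_matrix(B, P) for B, P in zip(Bs, Ps)])
```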

Appendix A.2. Line-Based Visual Servoing Interaction Matrix

The line-based visual servoing kinematics aims at finding the relationship between the time variation of the Plücker coordinates $(\underline{u}_i, \underline{h}_i)$ of the robot legs and the twist of its platform.
From the definition of the Plücker coordinates, we have $\underline{h}_i = D \times \underline{u}_i$, where $D$ is the position of any point on the line passing through the axis of the cylindrical leg. Then we have
$\dot{\underline{h}}_i = \dot{D} \times \underline{u}_i + D \times \dot{\underline{u}}_i \qquad (A8)$
For the Gough–Stewart platform, the U joints $B_i$ ($i = 1, 2, \dots, 6$) (Figure 1b) are all fixed on the base platform, so Equation (A8) can be written in the form
$\dot{\underline{h}}_i = \dot{B}_i \times \underline{u}_i + B_i \times \dot{\underline{u}}_i = B_i \times \dot{\underline{u}}_i \qquad (A9)$
With the help of Equation (A6), Equation (A9) can be written in the matrix form
$\dot{\underline{h}}_i = [B_i]_{\times}\, M_{u_i}^T\,\tau = M_{h_i}^T\,\tau \qquad (A10)$
where $M_{h_i}^T$ is the interaction matrix related to the vector $\underline{h}_i$.
Therefore, for a line L i , we have
$\begin{bmatrix} \dot{\underline{u}}_i \\ \dot{\underline{h}}_i \end{bmatrix} = \begin{bmatrix} M_{u_i}^T \\ M_{h_i}^T \end{bmatrix}\tau = M_{uh_i}^T\,\tau \qquad (A11)$
Then, the interaction matrix $M_{uh}^T$ can be obtained by stacking the matrices $M_{uh_i}^T$ of the $k$ observed legs ($k = 3, 4, 5, 6$).

References

1. Merlet, J.P. Parallel Robots; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006; Volume 128.
2. Merlet, J.P. 2012. Available online: http://www-sop.inria.fr/members/Jean-Pierre.Merlet/merlet.html (accessed on 9 June 2021).
3. Huang, H.; Yang, C.; Chen, C.P. Optimal robot-environment interaction under broad fuzzy neural adaptive control. IEEE Trans. Cybern. 2020, 51, 3824–3835.
4. Chaumette, F.; Hutchinson, S.; Corke, P. Visual servoing. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2016; pp. 841–866.
5. Chaumette, F.; Hutchinson, S. Visual servo control. I. Basic approaches. IEEE Robot. Autom. Mag. 2006, 13, 82–90.
6. Yang, C.; Wu, H.; Li, Z.; He, W.; Wang, N.; Su, C.Y. Mind Control of A Robotic Arm with Visual Fusion Technology. IEEE Trans. Ind. Inform. 2017, 14, 3822–3830.
7. Li, P.; Shu, T.; Xie, W.F.; Tian, W. Dynamic Visual Servoing of A 6-RSS Parallel Robot Based on Optical CMM. J. Intell. Robot. Syst. 2021, 102, 40.
8. Dallej, T.; Andreff, N.; Mezouar, Y.; Martinet, P. 3D pose visual servoing relieves parallel robot control from joint sensing. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 4291–4296.
9. Andreff, N.; Dallej, T.; Martinet, P. Image-based visual servoing of a Gough–Stewart parallel manipulator using leg observations. Int. J. Robot. Res. 2007, 26, 677–687.
10. Hamel, T.; Mahony, R. Visual servoing of an under-actuated dynamic rigid-body system: An image-based approach. IEEE Trans. Robot. Autom. 2002, 18, 187–198.
11. Andreff, N.; Espiau, B.; Horaud, R. Visual servoing from lines. Int. J. Robot. Res. 2002, 21, 679–699.
12. Wu, B.; Zhong, J.; Yang, C. A Visual-Based Gesture Prediction Framework Applied in Social Robots. IEEE/CAA J. Autom. Sin. 2021, 9, 510–519.
13. Lu, Z.; Wang, N.; Yang, C. A Constrained DMPs Framework for Robot Skills Learning and Generalization from Human Demonstrations. IEEE/ASME Trans. Mechatron. 2021, 26, 3265–3275.
14. Peng, G.; Chen, C.; Yang, C. Neural Networks Enhanced Optimal Admittance Control of Robot-Environment Interaction Using Reinforcement Learning. IEEE Trans. Neural Netw. Learn. Syst. 2021.
15. Merckel, L.; Nishida, T. Multi-interfaces approach to situated knowledge management for complex instruments: First step toward industrial deployment. AI Soc. 2010, 25, 211–223.
16. Martinet, P.; Gallice, J.; Khadraoui, D. Vision based control law using 3d visual features. In Proceedings of the World Automation Congress, WAC'96, Robotics and Manufacturing Systems, Montpellier, France, 28–30 May 1996; pp. 497–502.
17. Marchand, É.; Chaumette, F. Virtual Visual Servoing: A framework for real-time augmented reality. Comput. Graph. Forum 2002, 21, 289–297.
18. Yang, C.; Peng, G.; Li, Y.; Cui, R.; Li, Z. Neural Networks Enhanced Adaptive Admittance Control of Optimized Robot-Environment Interaction. IEEE Trans. Cybern. 2018, 49, 2568–2579.
19. Chaumette, F. Image moments: A general and useful set of features for visual servoing. IEEE Trans. Robot. 2004, 20, 713–723.
20. Andreff, N.; Marchadier, A.; Martinet, P. Vision-based control of a Gough-Stewart parallel mechanism using legs observation. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 2535–2540.
21. Kaci, L.; Briot, S.; Boudaud, C.; Martinet, P. Control-based Design of a Five-bar Mechanism. In New Trends in Mechanism and Machine Science; Springer: Berlin/Heidelberg, Germany, 2017; pp. 303–311.
22. Kaci, L.; Boudaud, C.; Briot, S.; Martinet, P. Elastostatic Modelling of a Wooden Parallel Robot. In Computational Kinematics; Springer: Berlin/Heidelberg, Germany, 2018; pp. 53–61.
23. Briot, S.; Martinet, P.; Rosenzveig, V. The hidden robot: An efficient concept contributing to the analysis of the controllability of parallel robots in advanced visual servoing techniques. IEEE Trans. Robot. 2015, 31, 1337–1352.
24. Zhu, M.; Chriette, A.; Briot, S. Control-based Design of a DELTA robot. In Proceedings of the Symposium on Robot Design, Dynamics and Control, Sapporo, Japan, 20–24 September 2020; pp. 204–212.
25. Michel, H.; Rives, P. Singularities in the Determination of the Situation of a Robot Effector from the Perspective View of 3 Points. Ph.D. Thesis, INRIA Sophia Antipolis, Biot, France, 1993.
26. Pascual-Escudero, B.; Nayak, A.; Briot, S.; Kermorgant, O.; Martinet, P.; El Din, M.S.; Chaumette, F. Complete Singularity Analysis for the Perspective-Four-Point Problem. Int. J. Comput. Vis. 2021, 129, 1217–1237.
27. Germain, C.; Caro, S.; Briot, S.; Wenger, P. Optimal Design of the IRSBot-2 Based on an Optimized Test Trajectory. In Proceedings of the ASME International Design Engineering Technical Conferences & Computers & Information in Engineering Conference, Portland, OR, USA, 4–7 August 2013.
28. Vignolo, A.; Briot, S.; Philippe, M.; Chen, C. Comparative analysis of two types of leg-observation-based visual servoing approaches for the control of the five-bar mechanism. In Proceedings of the 2014 Australasian Conference on Robotics and Automation (ACRA 2014), Melbourne, Australia, 2–4 December 2014.
29. Stewart, D. A platform with six degrees of freedom. Proc. Inst. Mech. Eng. 1965, 180, 371–386.
30. Bonev, I.A.; Ryu, J. Orientation workspace analysis of 6-DOF parallel manipulators. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers, Las Vegas, NV, USA, 12–16 September 1999; Volume 19715, pp. 281–288.
31. Blaise, J.; Bonev, I.; Monsarrat, B.; Briot, S.; Lambert, J.M.; Perron, C. Kinematic characterisation of hexapods for industry. Ind. Robot. Int. J. 2010, 37, 79–88.
32. Fusco, F.; Kermorgant, O.; Martinet, P. Integrating Features Acceleration in Visual Predictive Control. IEEE Robot. Autom. Lett. 2020, 5, 5197–5204.
33. Dahmouche, R.; Andreff, N.; Mezouar, Y.; Ait-Aider, O.; Martinet, P. Dynamic visual servoing from sequential regions of interest acquisition. Int. J. Robot. Res. 2012, 31, 520–537.
34. Tahri, O.; Chaumette, F. Point-based and region-based image moments for visual servoing of planar objects. IEEE Trans. Robot. 2005, 21, 1116–1127.
35. Hutchinson, S.; Hager, G.D.; Corke, P.I. A tutorial on visual servo control. IEEE Trans. Robot. Autom. 1996, 12, 651–670.
36. Briot, S.; Martinet, P. Minimal representation for the control of Gough-Stewart platforms via leg observation considering a hidden robot model. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 6–10 May 2013; pp. 4653–4658.
37. Briot, S.; Rosenzveig, V.; Martinet, P.; Özgür, E.; Bouton, N. Minimal representation for the control of parallel robots via leg observation considering a hidden robot model. Mech. Mach. Theory 2016, 106, 115–147.
38. Ben-Horin, P.; Shoham, M. Singularity analysis of a class of parallel robots based on Grassmann–Cayley algebra. Mech. Mach. Theory 2006, 41, 958–970.
39. Caro, S.; Moroz, G.; Gayral, T.; Chablat, D.; Chen, C. Singularity analysis of a six-dof parallel manipulator using grassmann-cayley algebra and groebner bases. In Brain, Body and Machine; Springer: Berlin/Heidelberg, Germany, 2010; pp. 341–352.
40. Merlet, J.P. Jacobian, manipulability, condition number, and accuracy of parallel robots. J. Mech. Des. 2006, 128, 199–206.
41. Fichter, E.F. A Stewart platform-based manipulator: General theory and practical construction. Int. J. Robot. Res. 1986, 5, 157–182.
42. Ben-Horin, P.; Shoham, M. Application of Grassmann–Cayley algebra to geometrical interpretation of parallel robot singularities. Int. J. Robot. Res. 2009, 28, 127–141.
43. St-Onge, B.M.; Gosselin, C.M. Singularity analysis and representation of the general Gough-Stewart platform. Int. J. Robot. Res. 2000, 19, 271–288.
44. Hunt, K.H. Kinematic Geometry of Mechanisms; Oxford University Press: New York, NY, USA, 1978; Volume 7.
45. Arakelian, V.; Briot, S.; Glazunov, V. Improvement of functional performance of spatial parallel manipulators using mechanisms of variable structure. In Proceedings of the 12th IFToMM World Congress, Besançon, France, 17–21 June 2007.
46. Arakelian, V.; Briot, S.; Glazunov, V. Increase of singularity-free zones in the workspace of parallel manipulators using mechanisms of variable structure. Mech. Mach. Theory 2008, 43, 1129–1140.
47. Briot, S.; Pashkevich, A.; Chablat, D. Optimal Technology-Oriented Design of Parallel Robots for High-Speed Machining Applications. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–7 May 2010.
Figure 1. Schematics of the Gough–Stewart platform. (a) CAD of the Gough–Stewart platform with its regular dexterous workspace; (b) Schematics of the Gough–Stewart platform architecture; (c) Gough–Stewart platform geometric design parameters; (d) Tilt and torsion angles.
Figure 2. Projection of a cylinder in the image plane and the image from the camera observation.
Figure 3. A camera observing the robot legs.
Figure 4. Discrete model composed of three points for image moment visual servoing.
Figure 5. A unit vector $\underline{u}_i$ in space and its parameterization.
Figure 6. Example of a Type 2 singularity for a 3-$\underline{U}$PS robot: the platform gets an uncontrollable rotation around $P_1 P_2$ [37].
Figure 7. Corresponding hidden robot leg when the line $\mathcal{L}_i$ in space is observed.
Figure 8. Error for the observation of a line.
Figure 9. Error on the three points discrete model.
Figure 10. Gough–Stewart platform optimized using LineBVS [Case 1].
Figure 11. Gough–Stewart platform optimized using LineBVS [Case 2].
Figure 12. Gough–Stewart platform optimized using image moment visual servoing.
Figure 13. Co-simulation control scheme of the Gough–Stewart platform.
Figure 14. Variation of the image moment α for Triangle 1.
Figure 15. Variation of the image moment α for Triangle 2.
Table 1. Requirements of the Gough–Stewart platform.
Cube RDW size (side length of the cube) $l_0$ | ≥ 100 mm
Tilt and torsion angles | $\phi \in (-\pi, \pi]$, $\theta \in [0, \pi/12]$, $\sigma \in [0, \pi/12]$
Positioning accuracy in RDW | ≤ 1 mm
Orientation accuracy in RDW | ≤ 0.01 rad
No singularity in RDW | of the controller; of the robot
Constraints on geom. param. | will be provided in Section 5
Table 2. Design parameters and value of the objective function for the chosen controller.
Parameter | LegBVS [Case 1] | LineBVS [Case 1] | LegBVS [Case 2] | LineBVS [Case 2] | IMVS
$r_a$ [m] | 0.2054 | 0.2054 | 0.1402 | 0.1402 | 0.1600
$r_b$ [m] | 0.3000 | 0.3000 | 0.3000 | 0.3000 | 0.3000
$\alpha_0$ [rad] | −0.4256 | −0.4256 | −0.4243 | −0.4243 | 0.2668
$\alpha_1$ [rad] | 0.2318 | 0.2318 | 0.2298 | 0.2298 | 0.1986
$\alpha_2$ [rad] | 0.1424 | 0.1424 | 0.6927 | 0.6927 | 0.2406
$z_c$ [m] | −0.0523 | −0.0523 | −0.0551 | −0.0551 | 0.1204
$R$ [m] | 0.0197 | 0.0197 | 0.0199 | 0.0199 | N/A
$x_1$ [m] | N/A | N/A | N/A | N/A | −0.1311
$x_2$ [m] | N/A | N/A | N/A | N/A | 0.1303
$x_3$ [m] | N/A | N/A | N/A | N/A | −0.1044
$y_1$ [m] | N/A | N/A | N/A | N/A | −0.0870
$y_2$ [m] | N/A | N/A | N/A | N/A | −0.0839
$y_3$ [m] | N/A | N/A | N/A | N/A | 0.0976
Table 3. Coordinates of the test points parameterized with respect to the center of the LRDW.
Point | Coordinate [m] | Point | Coordinate [m] | Point | Coordinate [m]
$T_1$ | (0, 0, 0) | $T_4$ | (−0.05, −0.05, 0.05) | $T_7$ | (0.05, 0.05, −0.05)
$T_2$ | (0.05, 0.05, 0.05) | $T_5$ | (−0.05, 0.05, 0.05) | $T_8$ | (0.05, −0.05, −0.05)
$T_3$ | (0.05, −0.05, 0.05) | $T_6$ | (−0.05, 0.05, −0.05) | $T_9$ | (−0.05, −0.05, −0.05)
Table 4. Results of co-simulation in terms of end-effector accuracy: min, max, standard deviation, and mean values for the error recorded on the tested 24 points.
Case | Max Positioning Error [mm] | Min Positioning Error [mm] | Mean Positioning Error [mm] | Max Orientation Error [rad] | Min Orientation Error [rad] | Mean Orientation Error [rad]
A ([Model 1]) | 1.24 | 0.94 | 1.03 | 4.5 × 10⁻⁴ | 2.2 × 10⁻⁴ | 3.1 × 10⁻⁴
A ([Model 2]) | 1.23 | 0.96 | 1.05 | 4.5 × 10⁻⁴ | 2.0 × 10⁻⁴ | 3.5 × 10⁻⁴
B ([Model 3]) | 1.12 | 0.91 | 0.99 | 4.0 × 10⁻⁴ | 1.9 × 10⁻⁴ | 2.8 × 10⁻⁴
B ([Model 4]) | 1.24 | 0.99 | 1.01 | 4.0 × 10⁻⁴ | 2.1 × 10⁻⁴ | 3.0 × 10⁻⁴
C ([Model 5]) | 0.63 | 0.28 | 0.38 | 4.3 × 10⁻⁴ | 2.2 × 10⁻⁴ | 2.8 × 10⁻⁴
C ([Model 6]) | 0.66 | 0.29 | 0.42 | 4.6 × 10⁻⁴ | 2.6 × 10⁻⁴ | 3.0 × 10⁻⁴
D ([Model 5]) | 1.56 | 1.37 | 1.44 | 5.0 × 10⁻⁴ | 3.3 × 10⁻⁴ | 4.4 × 10⁻⁴
E ([Model 5]) | 1.47 | 1.21 | 1.39 | 4.3 × 10⁻⁴ | 2.3 × 10⁻⁴ | 3.4 × 10⁻⁴
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

