Article

Control Strategy for Direct Teaching of Non-Mechanical Remote Center Motion of Surgical Assistant Robot with Force/Torque Sensor

School of Mechanical Engineering, Pusan National University, Pusan 46241, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(9), 4279; https://doi.org/10.3390/app11094279
Submission received: 15 April 2021 / Revised: 29 April 2021 / Accepted: 6 May 2021 / Published: 9 May 2021
(This article belongs to the Special Issue Advances in Bio-Inspired Robots)

Featured Application

Surgical robot.

Abstract

This paper presents a control strategy that secures both precision and manipulation sensitivity of remote center motion with direct teaching for a surgical assistant robot. Remote center motion is an essential function in conventional laparoscopic surgery, and direct teaching is the most intuitive way for a surgeon to manipulate a robot. The surgical assistant robot must maintain the position of the insertion port in three-dimensional space during four-degree-of-freedom motion (pan, tilt, spin, and forward/backward). In addition, the robot should move smoothly when guided by hand during surgery. In this study, a six-degree-of-freedom collaborative robot performs a cone-shaped trajectory with pan and tilt motions of the end-effector while keeping the position of the remote center. Instead of a bulky, mechanically constrained remote-center-motion mechanism, a conventional collaborative robot is used to mimic the wrist movement of a scrub nurse. A force/torque sensor attached between the robot and the end-effector estimates the surgeon's intention. A direct teaching control strategy based on position control is applied to guarantee precise maintenance of the remote center position. A motion generation algorithm converts the force/torque sensor values into motion commands. The parameters of the motion generation algorithm are optimized so that the robot can be operated with uniform sensitivity in all directions. The precision of remote center motion and the torque required for direct teaching are analyzed through pan and tilt motion experiments.

1. Introduction

Robotic surgery is the latest trend in minimally invasive surgery. The Da Vinci System (Intuitive Surgical Inc., Sunnyvale, CA, USA) is well known as the world's most successful surgical robot. Because the Da Vinci System is operated from a remote-control console, the operator is necessarily separated from the operating table. However, many surgeons want to be able to intervene freely and cope flexibly with unexpected situations during robotic surgery. Since surgical robots are extremely expensive, only a small number of tertiary hospitals have them, and they are reserved for major surgery once the overall procedures, such as installation and recovery of the robot during surgery, are taken into account. Therefore, considering the entire medical field, including primary hospitals, a robot is needed that can assist laparoscopic surgery lasting on the order of two hours. In addition, medical personnel suffer fatigue from surgery due to a shortage of medical assistants, so the demand for surgical assistant robots, as shown in Figure 1a, is increasing.
In this study, we developed an entry-level surgical assistant robot with an end-effector attached to a six-joint collaborative robot [1]. The robot works with the surgeon at the operating table by holding a laparoscopic camera and retracting internal tissue with forceps. To evaluate the validity and usability of the prototype design, animal experiments were conducted using pigs, as shown in Figure 1b. The most basic function of a laparoscopic surgery assistant robot is remote center motion (RCM), and direct teaching allows the operator to manipulate the robot intuitively without additional training. The stability of the RCM and the sensitivity of direct teaching are important performance indicators from the surgeon's point of view.
Rotation about an imaginary fixed point outside the mechanism is called remote center motion. The RCM is essential for maintaining the position of the insertion port while the surgical robot inserts a surgical tool into the human body through a trocar and operates it. Medical robots use a variety of mechanisms for RCM [2]. Mechanically constraining the remote center, so that it is maintained in any movement of the robot, is widely used for its high safety. Many surgical robots, including the Da Vinci, maintain a remote center through decoupled motion of a Z-bar-shaped parallelogram link structure [3,4]. Some robots maintain the remote center by using circular guides [5,6]. Microsurgical robots, such as those for eye surgery, use parallel mechanisms with closed linkages [7]. However, a mechanically constrained system is not suitable for a surgical assistant robot that works in the same space as the operator, because it consists of parallel and support mechanisms and is likely to collide with the surgeon when placed at the operating table. To minimize the size of the robot, there is also a non-mechanical method that maintains a virtual remote center through control of an articulated robot [8].
With the recent development of collaborative robots, various direct teaching methods have been studied. The most widely used hand-guiding method for low-cost collaborative robots is sensorless current control [9,10]. However, since it responds only when the applied torque is large enough to overcome the reduction ratio and friction of the gearbox, it does not meet the requirements of the surgical site; moreover, because it is not position-based control, its remote-center maintenance performance is unreliable. Systems that attach a torque sensor to each rotating joint allow sensitive and precise operation, but they are mainly found on expensive collaborative robots [11,12]. Another method is to attach a force/torque sensor to the tip of a collaborative robot and teach it directly based on that sensor [13,14]. The disadvantage of this approach is that the robot can be guided smoothly only by grasping its tip, not the intermediate links.
In this study, a six-degree-of-freedom force/torque sensor is attached at the tip of the collaborative robot to connect the end-effector. A non-mechanical remote center is maintained through control, and the force and torque the user applies by grasping the end-effector are measured and used for direct teaching. The values generated by the force/torque sensor are converted into control commands and fed into position control, so the remote center can be held rigidly; however, the motion generation algorithm must be designed carefully to create a sensitive feel. Through an experiment in which a user directly teaches the robot by grasping its end-effector, we evaluate how well it follows commands and how well it maintains the remote center position.

2. Non-mechanical Remote Center Motion

2.1. System Configuration for Remote Center Motion

In general laparoscopic surgery, the surgical instrument requires four-degree-of-freedom motion: Pan (Roll), Tilt (Pitch), and Spin (Yaw) rotations plus linear movement (Forward/Backward), as shown in Figure 2a. In this study, a two-degree-of-freedom end-effector is mounted on a six-degree-of-freedom collaborative robot, and a virtual remote center is defined by a control algorithm to achieve RCM. The six-degree-of-freedom collaborative robot realizes the Pan and Tilt rotations of the four-degree-of-freedom RCM, forming a conical trajectory while maintaining the remote center in three-dimensional space. Surgical assistance is performed in such a small space that the scrub nurse has to hold the endoscopic camera right beside the surgeon. This robot, which maintains a non-mechanical remote center, mimics the movement of a human wrist, as shown in Figure 2a. A compact device can thus grasp surgical tools in a narrow space, as a human would, and minimize collisions when collaborating with surgeons; however, the robustness of the controller is very important for safe surgery. As the two-degree-of-freedom motion of the end-effector can be controlled independently, this paper focuses on the Pan and Tilt RCM control of the collaborative robot.

2.2. Kinematic Modeling

The kinematic model of the robot is shown in Figure 2b. A fixed frame {0} is attached to the robot base and a moving frame {e} to the remote center point; the rotation angle of each joint is qi and the link parameters are ri. The kinematic model is derived using homogeneous transformation matrices between the moving frames attached to each joint, as shown in Equation (1). The forward kinematics of the manipulator is obtained by the sequential product of the transformation matrices, as shown in Equation (2). The coordinates of the remote center point in three-dimensional space, (xRC, yRC, zRC), are extracted from the fourth column of the matrix T0e, as shown in Equation (3), and the pan and tilt angles of the end-effector are calculated from the rotation matrix elements of T0e, as shown in Equation (4). The link parameters of the robot manipulator used in this study are listed in Table 1.
T_{01} = \begin{bmatrix} \cos q_1 & -\sin q_1 & 0 & 0 \\ \sin q_1 & \cos q_1 & 0 & 0 \\ 0 & 0 & 1 & r_1 \\ 0 & 0 & 0 & 1 \end{bmatrix},\;
T_{12} = \begin{bmatrix} \cos q_2 & 0 & \sin q_2 & 0 \\ 0 & 1 & 0 & r_2 \\ -\sin q_2 & 0 & \cos q_2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\;
T_{23} = \begin{bmatrix} \cos q_3 & 0 & \sin q_3 & 0 \\ 0 & 1 & 0 & 0 \\ -\sin q_3 & 0 & \cos q_3 & r_3 \\ 0 & 0 & 0 & 1 \end{bmatrix},
T_{34} = \begin{bmatrix} \cos q_4 & 0 & \sin q_4 & 0 \\ 0 & 1 & 0 & r_4 \\ -\sin q_4 & 0 & \cos q_4 & r_5 \\ 0 & 0 & 0 & 1 \end{bmatrix},\;
T_{45} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos q_5 & -\sin q_5 & r_6 \\ 0 & \sin q_5 & \cos q_5 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\;
T_{56} = \begin{bmatrix} \cos q_6 & -\sin q_6 & 0 & r_6 \\ \sin q_6 & \cos q_6 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\;
T_{6e} = \begin{bmatrix} 1 & 0 & 0 & r_8 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & r_7 + r_9 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \quad (1)
T_{0e} = T_{01} T_{12} T_{23} T_{34} T_{45} T_{56} T_{6e} = \begin{bmatrix} R_{11} & R_{12} & R_{13} & p_1 \\ R_{21} & R_{22} & R_{23} & p_2 \\ R_{31} & R_{32} & R_{33} & p_3 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \quad (2)
x_{RC} = p_1, \quad y_{RC} = p_2, \quad z_{RC} = p_3. \quad (3)
\mathrm{Pan} = \arctan\!\left( \frac{R_{32}}{R_{33}} \right), \quad \mathrm{Tilt} = \arctan\!\left( \frac{-R_{31}}{\sqrt{R_{32}^2 + R_{33}^2}} \right). \quad (4)
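As a concrete sketch, Equations (1)-(4) translate into a few lines of NumPy. This is a reading of the printed matrices with standard rotation signs restored; note that r6 appears in both T45 and T56 as printed, so the exact offsets should be treated as illustrative:

```python
import numpy as np

def _ht(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def rx(q): c, s = np.cos(q), np.sin(q); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def ry(q): c, s = np.cos(q), np.sin(q); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rz(q): c, s = np.cos(q), np.sin(q); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Link parameters r1..r9 from Table 1 [mm]
r = [88.5, 151.3, 403.0, 146.5, 359.0, 99.5, 83.5, 193.4, 126.0]

def forward_kinematics(q):
    """T_0e per Equations (1)-(2), multiplying the per-joint transforms in order."""
    Ts = [_ht(rz(q[0]), [0, 0, r[0]]),
          _ht(ry(q[1]), [0, r[1], 0]),
          _ht(ry(q[2]), [0, 0, r[2]]),
          _ht(ry(q[3]), [0, r[3], r[4]]),
          _ht(rx(q[4]), [0, r[5], 0]),
          _ht(rz(q[5]), [r[5], 0, 0]),       # r6 offset as printed in Equation (1)
          _ht(np.eye(3), [r[7], 0, r[6] + r[8]])]
    T = np.eye(4)
    for Ti in Ts:
        T = T @ Ti
    return T

def pan_tilt(T):
    """Equation (4): Pan/Tilt angles from the rotation part of T_0e."""
    R = T[:3, :3]
    pan = np.arctan2(R[2, 1], R[2, 2])
    tilt = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    return pan, tilt
```

At the zero configuration the remote center sits at (r6 + r8, r2 + r4 + r6, r1 + r3 + r5 + r7 + r9) with zero pan and tilt, which is a quick sanity check on the chain.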
The Jacobian matrix of the differential kinematics is required to find an inverse kinematic solution. The Jacobian matrix can be derived through chain rules as shown in Equation (5), where zi and oi are vectors that describe the rotation axis and the origin of the ith moving frame, respectively [15]. Finally, an inverse kinematic solution for the position and orientation of the remote center can be obtained through the pseudoinverse matrix of the Jacobian matrix as shown in Equation (6).
J = \begin{bmatrix} z_1 \times (o_e - o_1) & z_2 \times (o_e - o_2) & z_3 \times (o_e - o_3) & z_4 \times (o_e - o_4) & z_5 \times (o_e - o_5) & z_6 \times (o_e - o_6) \\ z_1 & z_2 & z_3 & z_4 & z_5 & z_6 \end{bmatrix} \quad (5)
\dot{q} = J^{+} \dot{x} \quad (6)
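Equations (5) and (6) translate almost directly into code. The sketch below assumes the joint axes z_i and frame origins o_i have already been extracted from the transforms of Equation (2):

```python
import numpy as np

def geometric_jacobian(z_axes, origins, o_e):
    """Equation (5): each column stacks the linear part z_i x (o_e - o_i)
    over the angular part z_i for one revolute joint."""
    cols = [np.concatenate((np.cross(z, o_e - o), z))
            for z, o in zip(z_axes, origins)]
    return np.column_stack(cols)

def ik_velocity(J, x_dot):
    """Equation (6): joint rates from a task-space rate via the pseudoinverse."""
    return np.linalg.pinv(J) @ x_dot
```

For a single joint with axis z = (0, 0, 1) at the origin and tip at (1, 0, 0), the column is (0, 1, 0, 0, 0, 1), which matches the textbook screw of a revolute joint.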

3. Control Strategy for Direct Teaching of Remote Center Motion

A robot manipulator is controlled based on position control to maintain a solid remote center even while the posture of the end-effector is operated with direct teaching. A force/torque sensor mounted between the tip of the collaborative robot and the end-effector is used as the input device for direct teaching, and pan and tilt motions are realized about the remote center position according to the user's intention. A button is present on the end-effector of the surgical assistant robot; while the button is pressed, the force and moment applied to the end-effector drive the robot manually. The direct teaching control of RCM is designed based on position control, as shown in Figure 3. In a general position control algorithm for articulated robots, only the control-input part is modified for direct teaching. The predefined remote center position and the end-effector's posture are entered into the six-degree-of-freedom robot as the control input. The position control solves the inverse kinematics with feedback of the actual position and orientation through the forward kinematics, forming closed-loop inverse kinematics. The position control is designed by differential kinematics with PID control and back-calculation [16], and each control gain is adjusted by optimization [17]. Through a motion generation algorithm based on impedance control, the commanded orientation of the end-effector is calculated from the values generated in the force/torque sensor by direct teaching.
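The closed-loop inverse kinematics structure can be sketched on a toy two-link planar arm. The link lengths, the proportional-only task-space gain, and the 2-DOF arm itself are assumptions for illustration, not the paper's 6-DOF controller:

```python
import numpy as np

L1, L2 = 0.4, 0.3  # hypothetical link lengths [m]

def fk(q):
    """Tip position of a 2-link planar arm (stands in for the robot's FK)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    """Analytic Jacobian of fk."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def clik(q, x_des, kp=20.0, dt=0.001, steps=2000):
    """Closed-loop IK: the task-space error from FK feedback drives a
    pseudoinverse-Jacobian update, mirroring the loop of Figure 3."""
    for _ in range(steps):
        e = x_des - fk(q)                              # feedback via forward kinematics
        q = q + np.linalg.pinv(jac(q)) @ (kp * e) * dt
    return q
```

With kp*dt = 0.02, the task-space error contracts by roughly 2% per 1 ms step, which is the mechanism that keeps the tracked pose locked to the commanded one while commands are integrated.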

3.1. Motion Generation Algorithm

When designing end-effector hand-guiding control algorithms for cooperative robots using force/torque sensors, the control model is considered a damper-mass mechanical system [18]. The torque due to the gravity force is compensated, and the robot is controlled in the velocity domain [19]. It is also important to define a proper dead zone and threshold for the values measured by the force/torque sensor [20]. Finally, the relation between the actuating variables of the input device and the robot velocity command value is designed by a motion generation algorithm [21].
The position vector from the sensor to the center of mass of the end-effector and the orientation difference between the robot base coordinate system and the sensor coordinate system are shown in Figure 4. The transformation matrix from the 6th frame to the sensor frame allows us to obtain the transformation matrix from the base to the sensor, as shown in Equations (7) and (8). Because the sensor coordinate system varies with the robot's posture, the values measured by the six-degree-of-freedom force/torque sensor are converted using an adjoint matrix, as shown in Equation (9) [22]. Here f and τ are force and torque vectors, and the subscripts 0 and s denote the robot base frame {0} and the sensor frame {s}, respectively. R0s is the rotation matrix of the sensor frame in the base coordinate system, and p̂0s is the skew-symmetric matrix generated from the position vector of the sensor frame in the base coordinate system.
T_{6s} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & r_7 \\ 0 & 0 & 0 & 1 \end{bmatrix}. \quad (7)
T_{0s} = T_{01} T_{12} T_{23} T_{34} T_{45} T_{56} T_{6s} = \begin{bmatrix} R_{0s} & p_{0s} \\ 0 & 1 \end{bmatrix}. \quad (8)
\begin{bmatrix} f_0 \\ \tau_0 \end{bmatrix} = \begin{bmatrix} R_{0s} & 0 \\ \hat{p}_{0s} R_{0s} & R_{0s} \end{bmatrix} \begin{bmatrix} f_s \\ \tau_s \end{bmatrix}. \quad (9)
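Equation (9) is the standard wrench transformation; a minimal NumPy sketch, where R_0s and p_0s are assumed to come from Equation (8):

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix of a 3-vector, so that skew(p) @ v == cross(p, v)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def wrench_to_base(R_0s, p_0s, f_s, tau_s):
    """Equation (9): map a wrench measured in the sensor frame {s} to the base {0}."""
    f_0 = R_0s @ f_s
    tau_0 = skew(p_0s) @ R_0s @ f_s + R_0s @ tau_s
    return f_0, tau_0
```

For example, a pure x-force measured 1 m above the base (identity rotation) maps to the same force plus a torque p x f about the y-axis.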
Gravity-induced torque is measured on the sensor as the position of the center of gravity changes depending on the posture. It can be obtained in the robot base coordinate as shown in Equation (10). The position vector, xcm, from the sensor to the center of mass of the end-effector is [106 mm, 0 mm, −12 mm]T. The mass, m, is 1.01 kg, and the gravitational acceleration, g, is 9.8 m/s2. Finally, the input torque is derived as shown in Equation (11) as a result of compensating the torque due to gravity.
x_g = R_{0s} x_{cm}, \quad \tau_g = x_g \times \begin{bmatrix} 0 & 0 & -mg \end{bmatrix}^T. \quad (10)
\tau_{in} = \tau_0 - \tau_g. \quad (11)
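Equations (10) and (11) use only quantities given in the text. A minimal sketch, taking gravity along the negative z-axis of the base frame (an assumption of this reading):

```python
import numpy as np

# Values from the paper: end-effector mass and its center of mass in the sensor frame.
m, g = 1.01, 9.8                         # [kg], [m/s^2]
x_cm = np.array([0.106, 0.0, -0.012])    # [m], i.e. [106 mm, 0 mm, -12 mm]

def gravity_torque(R_0s):
    """Equation (10): torque induced on the sensor by the end-effector's weight,
    with the center of mass rotated into the base frame first."""
    x_g = R_0s @ x_cm
    return np.cross(x_g, np.array([0.0, 0.0, -m * g]))

def teaching_torque(tau_0, R_0s):
    """Equation (11): gravity-compensated input torque used for direct teaching."""
    return tau_0 - gravity_torque(R_0s)
```

At the identity orientation this yields a torque of about m*g*0.106 ≈ 1.05 N·m about the y-axis, which the compensation removes before the command generation stage.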
The command generation function defines an appropriate dead band [−a, a] so that the robot is not overly sensitive to small changes in input torque, and generates command values by multiplying the input torque by proportional constants k1 and k2, as shown in Figure 5. In this study, the proportional constants are designed asymmetrically for fine tuning of the direct-teaching sensitivity. A limit [−b, b] on the output value is also defined so that the command does not change suddenly under a large input torque. The command value is added to the current orientation of the end-effector calculated by the forward kinematics, so the motion change from direct teaching is applied in real time. Because a command value is added every 1 ms control period, the command value acts as the velocity of the end-effector. Based on the damper-mass system model, the input of the command generation function is torque and the output is motion velocity. The sensitivity of operation can be adjusted by changing the dead zone and the proportional constants; when determining the proportional constants, the target torque, the target velocity, and the drive-speed limits of the system should be considered together.
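The command generation function of Figure 5 can be sketched as follows. Equation (12) implies that the slope acts on the torque in excess of the dead band a, so that the limit b is reached at 1.5 N·m; that reading is an assumption of this sketch:

```python
def command_velocity(tau_in, a=0.2, b=1e-4, k1=7.7e-5, k2=7.7e-5):
    """Dead-band, asymmetric-gain, saturated map from input torque [N·m] to an
    orientation velocity command [rad/ms], per Figure 5 and Equation (12)."""
    if abs(tau_in) <= a:
        return 0.0                        # dead band: ignore small torques
    if tau_in > 0:
        v = k1 * (tau_in - a)             # positive branch, slope k1
    else:
        v = k2 * (tau_in + a)             # negative branch, slope k2
    return max(-b, min(b, v))             # saturate to [-b, b]
```

Each returned value is added to the current orientation every 1 ms control period, so it plays the role of an angular velocity command; retuning k1 and k2 independently gives the asymmetric sensitivity described above.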

3.2. Optimized Isotropic Sensitivity of Direct Teaching

The initial values of the command generation function are roughly determined from the characteristics of the system and then adjusted by an optimization method through a pre-test, since it is difficult to quantify the operating sensitivity of direct teaching. In general, modeling of friction terms is required when designing impedance control algorithms. However, the friction model identified for each joint often differs from the friction in the assembled manipulator. Although an observer is sometimes designed to estimate the friction, it increases the computational load of the controller. In this work, we use a simpler, more practical approach: tuning the command function through a pre-test. The initial parameter values of the command generation function are defined by considering the mechanical and electrical characteristics of the fabricated prototype and the desired operating sensitivity. Assuming that the dead-zone bound a is 0.2 N·m and the maximum speed b is 1 × 10−4 rad/ms, the slopes k1 and k2 are determined as shown in Equation (12) so that the maximum speed is reached at 1.5 N·m of torque.
k_{desired} = \frac{1 \times 10^{-4}}{1.5 - 0.2} = 7.7 \times 10^{-5}. \quad (12)
After that, the velocity of pan/tilt motion and the reaction torque generated during arbitrary direct teaching are measured in a pre-test. The actual torque required for a given motion velocity is shown by contours of 18,466 data points in Figure 6a. The actual sensitivity of the robot differs from the intended sensitivity in both amplitude and isotropy. An objective function is defined as the error norm between the actual slope and the desired slope, as shown in Equation (13), and the slopes of the command generation function are tuned by an optimization algorithm to minimize it.
\tau_{opt} = \begin{cases} \tau_{mea}/\alpha, & \tau_{mea} > 0 \\ \tau_{mea}/\beta, & \tau_{mea} < 0 \end{cases}, \quad
obj(\alpha_{pan}, \beta_{pan}, \alpha_{tilt}, \beta_{tilt}) = \mathrm{norm}\!\left( \frac{\sqrt{\omega_{pan}^2 + \omega_{tilt}^2}}{\sqrt{\tau_{opt,pan}^2 + \tau_{opt,tilt}^2}} - k_{desired} \right) / n, \quad (13)
where τmea and τopt are the measured and optimized torques, respectively, α and β are tuning parameters multiplied by each slope ki, ω is motion velocity, and n is the number of data points. As a result, the error is reduced and the slopes are tuned as shown in Equation (14) and Table 2, giving the result shown in Figure 6b. Figure 6 shows the results of a verification test, in which the error norm decreased to 1.32 × 10−4 from the pre-test result.
k_1 = \alpha k_{initial}, \quad k_2 = \beta k_{initial}. \quad (14)
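The objective of Equation (13) can be evaluated directly from the logged pre-test samples. The sketch below assumes arrays of per-sample torques and velocities (variable names are ours); any off-the-shelf optimizer, e.g. scipy.optimize.minimize over (αpan, βpan, αtilt, βtilt), could then minimize it:

```python
import numpy as np

K_DESIRED = 7.7e-5  # desired slope from Equation (12)

def objective(params, tau_pan, tau_tilt, w_pan, w_tilt, k_des=K_DESIRED):
    """Equation (13): deviation of the measured torque-to-velocity sensitivity
    from the desired slope, after rescaling torques by the tuning parameters
    (alpha for positive torques, beta for negative ones)."""
    a_pan, b_pan, a_tilt, b_tilt = params
    tp = np.where(tau_pan > 0, tau_pan / a_pan, tau_pan / b_pan)
    tt = np.where(tau_tilt > 0, tau_tilt / a_tilt, tau_tilt / b_tilt)
    ratio = np.hypot(w_pan, w_tilt) / np.hypot(tp, tt)
    return np.linalg.norm(ratio - k_des) / len(ratio)
```

If the logged velocities were exactly k_desired times the torques, the objective would vanish at α = β = 1, and any rescaling of the slopes would increase it, which is the behavior the optimizer exploits.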

4. Experiments

The end-effector holding an endoscope camera is attached to the collaborative robot, and the direct teaching performance of RCM is tested as shown in Figure 7. The collaborative robot consists of driver-integrated motors (SMT-DA-series, LS Mecapion Co., Ltd., Daegu, Korea), joint modules with Harmonic drives (MR-series, SBB Tech Co., Ltd., Gyeonggi, Korea), and a six-degree-of-freedom force/torque sensor (RFT80-6A01, ROBOTOUS INC., Gyeonggi, Korea). When force and torque are applied to the end-effector while the teaching button is pressed, hand-guiding is performed according to the control method described above. Pan motion, Tilt motion, and mixed motion are performed manually, and the sensitivity of manipulation is investigated. The precision of the RCM is evaluated using an optical tracking system (SAKD1, DigiTrack Co., Ltd., Daegu, Korea). The orientation error is estimated through the forward kinematics based on the encoder values of the robot joints.

4.1. Pan Motion

When the end-effector is held and Pan motion is taught, torque is generated on the force/torque sensor as shown in Figure 8a, producing the orientation command shown as the dotted line in Figure 8b, which the robot follows as shown by the solid line. Pan motion can be operated with a torque of 1.10 N·m or less. During direct teaching, the remote center position error remained within 2.26 mm, with an average error of 1.52 ± 0.29 mm, as shown in Figure 8c. The orientation error relative to the generated command stays within 0.27°, as shown in Figure 8d.

4.2. Tilt Motion

In the case of Tilt motion, the robot can be manipulated with a torque within 0.68 N·m, as shown in Figure 9a, producing the trajectory shown in Figure 9b. As shown in Figure 9c, the remote center position error is within a maximum of 3.89 mm, with an average of 1.81 ± 1.14 mm. The tilt motion follows the command within 0.18° error, as shown in Figure 9d. Due to the kinematic characteristics of articulated robots, the remote center error of Tilt motion is greater than that of Pan motion: joint angle errors are amplified into position errors in proportion to the distance between the remote center and the driving joint axis. Pan motion is produced mainly by joints 1, 5, and 6, and the distance between the remote center and the joints is relatively short, especially for the dominant joint 5. Tilt motion is driven mainly by joints 2, 3, and 4; its remote center position error increases compared to Pan motion because the position error at the end of the driving joints is magnified by the link lengths of the robot.

4.3. Pan-Tilt Mixed Motion

Holding the end-effector, Pan and Tilt motions are mixed to create a cone-shaped trajectory about the remote center. The motion along the trajectory shown in Figure 10b is controlled within a maximum torque of 1.20 N·m, as shown in Figure 10a. As shown in Figure 10c, a remote center position error of at most 4.08 mm, with an average of 1.80 ± 1.03 mm, is maintained for approximately 3 min of continuous operation. Orientation errors for the mixed motion are within 0.70°. The torques about the x-axis and y-axis repeat evenly with similar magnitudes during the cone-shaped rotational motion, confirming isotropic operating sensitivity.
Several surgeons evaluated the performance of this surgical assistant robot through dry-lab testing and animal experiments; they judged an RCM accuracy below 5 mm to be sufficient for 12 mm-diameter trocars but suggested that the operating sensitivity of direct teaching should be slightly improved.

5. Conclusions

This study presents a control strategy for remote center motion and direct teaching that allows a surgical assistant robot to mimic the movements of a scrub nurse's wrist. Using the proposed control method, the surgical assistant robot can be operated manually, as if the surgeon held and guided the hand of a nurse holding the endoscope during surgery. A mechanically constrained RCM mechanism is too bulky for collaboration with the surgeon at the operating table. Therefore, a conventional collaborative robot is used as the platform of the surgical assistant robot, and the precision of the RCM and the sensitivity of direct teaching are obtained simultaneously through the design of the control algorithm. The position control achieves sensitive direct teaching by generating a posture command from the measured force/torque sensor values while rigidly maintaining the remote center position. The parameters of the command generation function are tuned through a pre-test and optimization for isotropic operating sensitivity. By applying a torque within 1.20 N·m to the robot, the Pan and Tilt movements of the surgical assistant robot can be hand-guided with a remote center position error within a maximum of 4.08 mm and an average of 1.80 ± 1.03 mm. To achieve smoother and more sensitive operation, a more sophisticated command generation function will be designed, and advanced techniques such as reinforcement learning will be applied to optimize its parameters.

Author Contributions

Conceptualization, S.J.; methodology, S.J.; software, S.J.; validation, M.K. and Y.Z.; formal analysis, M.K.; data curation, M.K.; writing—original draft preparation, M.K.; writing—review and editing, S.J.; visualization, M.K.; supervision, S.J.; project administration, S.J.; funding acquisition, S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. NRF-2018R1C1B5043640).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

This research was conducted using equipment from Erop Co., Ltd. (Daegu, Korea).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kim, M.; Jin, S. Study on direct teaching algorithm for remote center motion of surgical assistant robot using force/torque sensor. J. Korea Robot. Soc. 2020, 15, 309–315.
  2. Kuo, C.H.; Dai, J.S.; Dasgupta, P. Kinematic design considerations for minimally invasive surgical robots: An overview. Int. J. Med. Robot. Comput. Assist. Surg. 2012, 8, 127–145.
  3. Trochimczuk, R. Comparative analysis of RCM mechanisms based on parallelogram used in surgical robots for laparoscopic minimally invasive surgery. J. Theor. Appl. Mech. 2020, 58, 911–925.
  4. Pan, B.; Fu, Y.; Niu, G.; Xu, D. Optimization and design of remote center motion mechanism of minimally invasive surgical robotics. In Proceedings of the 11th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2014), Kuala Lumpur, Malaysia, 12–15 November 2014.
  5. Shim, S.; Lee, S.; Ji, D.; Choi, H.; Hong, J. Trigonometric ratio-based remote center of motion mechanism for bone drilling. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018.
  6. Zhu, Y.; Zhang, F. A novel remote center of motion parallel manipulator for minimally invasive celiac surgery. Int. J. Res. Eng. Sci. (IJRES) 2015, 3, 15–19.
  7. Zhang, Z.; Yu, H.; Du, Z. Design and kinematic analysis of a parallel robot with remote center of motion for minimally invasive surgery. In Proceedings of the 2015 IEEE International Conference on Mechatronics and Automation (ICMA), Beijing, China, 2–5 August 2015.
  8. Dombre, E.; Michelin, M.; Pierrot, F.; Poignet, P.; Bidaud, P.; Morel, G.; Ortmaier, T.; Sallé, D.; Zemiti, N.; Gravez, P.; et al. MARGE Project: Design, modeling and control of assistive devices for minimally invasive surgery. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI 2004), Saint-Malo, France, 26–29 September 2004.
  9. Lee, S.D.; Ahn, K.H.; Song, J.B. Torque control based sensorless hand guiding for direct robot teaching. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016.
  10. Ahn, K.H.; Song, J.B. Cartesian space direct teaching for intuitive teaching of a sensorless collaborative robot. J. Korea Robot. Soc. 2019, 14, 311–317.
  11. Albu-Schäffer, A.; Haddadin, S.; Ott, C.; Stemmer, A.; Wimböck, T.; Hirzinger, G. The DLR lightweight robot: Design and control concepts for robots in human environments. Ind. Robot Int. J. Robot. Res. Appl. 2007, 34, 376–385.
  12. Park, C.; Kyung, J.H.; Do, H.M.; Choi, T. Development of direct teaching robot system. In Proceedings of the 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI 2011), Incheon, Korea, 23–26 November 2011.
  13. Ahn, J.H.; Kang, S.; Cho, C.; Hwang, J.; Suh, M. Design of robot direct-teaching tool and its application to path generation for die induction hardening. In Proceedings of the 2002 International Conference on Control, Automation and Systems (ICCAS), Jeonbuk, Korea, 16–19 October 2002.
  14. Kim, H.J.; Back, J.H.; Song, J.B. Direct teaching and playback algorithm for peg-in-hole task using impedance control. J. Inst. Control Robot. Syst. 2009, 15, 538–542.
  15. Spong, M.W.; Hutchinson, S.; Vidyasagar, M. Robot Modeling and Control, 1st ed.; John Wiley & Sons, Inc.: New York, NY, USA, 2006; pp. 113–130.
  16. Jin, S.; Lee, S.K.; Lee, J.; Han, S. Kinematic model and real-time path generator for a wire-driven surgical robot arm with articulated joint structure. Appl. Sci. 2019, 9, 4114.
  17. Jin, S.; Han, S. Gain optimization of kinematic control for wire-driven surgical robot with layered joint structure considering actuation velocity bound. J. Korea Robot. Soc. 2020, 15, 212–220.
  18. Safeea, M.; Bearee, R.; Neto, P. End-Effector precise hand-guiding for collaborative robots. In ROBOT 2017: Third Iberian Robotics Conference; Springer: Cham, Switzerland, 2017; Volume 694, pp. 595–605.
  19. Massa, D.; Callegari, M.; Cristalli, C. Manual guidance for industrial robot programming. Ind. Robot 2015, 42, 457–465.
  20. Rodamilans, G.B.; Villani, E.; Trabasso, L.G.; Oliveira, W.R.; Suterio, R. A comparison of industrial robots interface: Force guidance system and teach pendant operation. Ind. Robot 2016, 43, 552–562.
  21. Fujii, M.; Murakami, H.; Sonehara, M. Study on application of a human-robot collaborative system using hand-guiding in a production line. IHI Eng. Rev. 2016, 49, 24–29.
  22. Murray, R.M.; Li, Z.; Sastry, S.S. A Mathematical Introduction to Robotic Manipulation, 1st ed.; CRC Press: Boca Raton, FL, USA, 1994; pp. 61–69.
Figure 1. Introduction of the surgical assistant robot: (a) conceptual figure; (b) animal experiment.
Figure 2. Kinematic model of the surgical assistant robot: (a) remote center motion; (b) configuration of joints and links.
Figure 3. Block diagram of RCM control with motion generation algorithm.
Figure 4. End-effector and sensor with moving frame and position vector of the center of mass.
Figure 5. Relationship between input torque and robot velocity command.
Figure 6. Contour of required torque for motion velocity: (a) pre-test; (b) optimized result.
Figure 7. Experiments with a prototype of a surgical assistant robot.
Figure 8. Experimental results in pan motion: (a) measured torque; (b) trajectory of desired and actual orientation; (c) position error of remote center; (d) orientation error of motion.
Figure 9. Experimental results in Tilt motion: (a) measured torque; (b) trajectory of desired and actual orientation; (c) position error of remote center; (d) orientation error of motion.
Figure 10. Experimental results in cone-shaped motion: (a) measured torque; (b) trajectory of desired and actual orientation; (c) position error of remote center; (d) orientation error of motion.
Table 1. Link parameters of the robot manipulator.

r1 = 88.5 mm     r2 = 151.3 mm    r3 = 403.0 mm
r4 = 146.5 mm    r5 = 359.0 mm    r6 = 99.5 mm
r7 = 83.5 mm     r8 = 193.4 mm    r9 = 126.0 mm
Table 2. Parameters of the command generation function.

Parameter          Pre-Test                     Optimized Result
a [N·m]            0.2                          0.2
b [rad/ms]         1 × 10−4                     1 × 10−4
k1 (Pan / Tilt)    7.7 × 10−5 / 7.7 × 10−5      11.6 × 10−5 / 10.5 × 10−5
k2 (Pan / Tilt)    7.7 × 10−5 / 7.7 × 10−5      9.9 × 10−5 / 10.5 × 10−5
Error norm         2.01 × 10−4                  1.28 × 10−4

Share and Cite

Kim, M.; Zhang, Y.; Jin, S. Control Strategy for Direct Teaching of Non-Mechanical Remote Center Motion of Surgical Assistant Robot with Force/Torque Sensor. Appl. Sci. 2021, 11, 4279. https://doi.org/10.3390/app11094279
