Article

Experimental Study on Tele-Manipulation Assistance Technique Using a Touch Screen for Underwater Cable Maintenance Tasks

Korea Institute of Robotics and Technology Convergence, Jigok-Ro 39, Nam-Gu, Pohang 37666, Korea
*
Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2021, 9(5), 483; https://doi.org/10.3390/jmse9050483
Submission received: 2 March 2021 / Revised: 7 April 2021 / Accepted: 27 April 2021 / Published: 30 April 2021
(This article belongs to the Section Ocean Engineering)

Abstract

In underwater environments that are inaccessible to humans, many intervention tasks are performed by robotic systems such as underwater manipulators. These systems are commonly tele-operated from support ships, and the operation tends to be inefficient because of limited underwater information and complex operation methods. In this paper, an assistance technique for tele-manipulation is investigated and evaluated experimentally. The key idea behind the assistance technique is to operate the manipulator by touching several points on the camera images. To implement this idea, a position estimation technique utilizing the touch inputs is investigated. The assistance technique is simple but significantly increases the temporal efficiency of tele-manipulation for underwater tasks. Using URI-T, a cable burying ROV (Remotely Operated Vehicle) developed in Korea, the performance of the proposed assistance technique is verified. The underwater cable gripping task, one of the cable maintenance tasks carried out by the cable burying ROV, is employed for the performance evaluation, and the experimental results are analyzed statistically. The results show that the assistance technique can improve the efficiency of tele-manipulation considerably in comparison with the conventional tele-operation method.

1. Introduction

In underwater environments that are inaccessible to humans, many tasks are performed by underwater robots [1,2]. These tasks include the installation and repair of underwater cables, e.g., High Voltage Direct Current (HVDC) cables between the mainland and islands [3], underwater communication cables [4], and underwater cables for ocean energy developments such as offshore wind turbines [5]. When installed, the cables are commonly buried below the seabed for protection. For repair, the underwater cable is cut and recovered to the ship [6]. Tasks involving underwater cables are usually performed in the deep sea, beyond the reach of human divers, and require a large amount of power. Thus, large underwater robots designed for heavy-duty tasks are used.
URI-T is a heavy-duty Remotely Operated Vehicle (ROV) developed in South Korea for underwater cable burying and maintenance tasks [7]. As shown in Figure 1, URI-T has cable detection systems and water-jetting systems for cable burying tasks. Equipped with manipulators and tools, URI-T is also used for cable maintenance tasks: cable cutting and cable gripping for recovery. Like other underwater ROVs, URI-T performs these tasks via tele-operation from the ship.
Performing underwater tasks, especially manipulation tasks, by tele-operation tends to be inefficient for several reasons. One is the lack of underwater information: the operators have to understand the underwater situation from several camera images and sensors. The viewpoints of the cameras are limited, and the cameras only provide 2D images, so the operators have to imagine the 3D situation from this limited information [2,8,9]. Another reason is the complexity of the operation. Commonly, underwater manipulators are tele-operated with joint-level commands: using the commanding device, the operators have to generate a command for every joint motion of the manipulator. Thus, skilled operators are required to perform underwater tasks by tele-operation.
Several studies have attempted to improve the efficiency of underwater manipulation [10,11]. Some have approached the problem through autonomy: autonomous manipulation by recognizing the object using various sensors. These works focused on how to obtain the position of the object automatically in underwater environments, e.g., sonar-based techniques [12,13], vision-based approaches [14,15], deep learning with vision images [16], and laser-scanner-based methods [17,18]. Autonomy-based schemes can make the operation more convenient; however, they may not be well accepted by operators, because underwater tasks demand high reliability and autonomy may involve risks of malfunction [19]. Other works have aimed to improve the efficiency of tele-operation through guidance techniques or perceptual enhancements: augmented reality [20,21] or virtual reality [22] to improve the visual feedback or to guide the manipulation with virtual information, virtual guidance techniques to avoid collisions with the environment [23], real-time collision detection algorithms to improve human perception in underwater environments with poor visibility [24], and shared tele-operation techniques using model-based learning [25,26].
In this paper, an assistance technique to improve the efficiency of underwater manipulation is studied. The main idea behind the proposed technique is to tele-operate the manipulator by touching several points on the camera images via a touch screen, which helps alleviate the mental burden on the operators. The proposed technique is distinguished from the autonomy of previous works in that the initial information for the assistance is provided reliably by the operators. It focuses on assisting the gross motions of the manipulator, such as approaching the object, while conventional tele-operation is still used for dexterous motions such as handling objects. The main design issues of the proposed assistance technique can be categorized as follows:
  • Object position estimation using inputs via touch screen, and
  • Control structure for assisted tele-operation utilizing touch based position estimation.
Regarding the position estimation, a six degree-of-freedom (DOF) position estimation technique utilizing several inputs on the camera images via the touch screen is studied. The touching process relieves operators of the mental load of commanding every joint of the manipulator. Moreover, the reliability of the estimated object positions is easy to guarantee because the initial information for the estimation is given by the operators. Thus, the proposed method can reduce the risk of malfunction that may be involved in automatic estimation techniques. To implement the assistance technique, an appropriate control structure is also investigated. The control structure involves switching the controller between the touch-screen-based operation and conventional tele-operation.
The performance of the proposed assistance technique is verified through experimental studies on underwater cable maintenance tasks using URI-T. Through cable gripping experiments and statistical analysis, the assistance technique is compared with conventional tele-operation.

2. Cable Maintenance Using URI-T

URI-T is a heavy-duty ROV for underwater cable burial and cable maintenance. As shown in Figure 1, URI-T is equipped with two foldable water-jetting arms and cable detection sensors, TSS350 and TSS440, for burying underwater cables and verifying their burial states [27]. With two 300 HP water pumps, URI-T is designed to bury cables to a depth of 3.0 m below the seabed at water depths of up to 2500 m.
URI-T is also equipped with manipulators and tools for cable maintenance tasks, as shown in Figure 2. Cable maintenance tasks are performed using the manipulators with appropriate tools. When performing cable gripping tasks, for example, the operators deploy the gripping tool on the cable using the manipulator and then execute the gripping tool to grip the cable. Figure 2 also shows a couple of snapshots of cable maintenance experiments using URI-T in sea trials.
When performing cable maintenance tasks, the manipulators and the tools are tele-operated from the operating room on the ship. The tele-operation is a significant burden for the operators because they have to recognize the underwater situation from the limited information of cameras and sensors. As shown in Figure 3, URI-T carries several cameras; however, all camera views are restricted to the forward direction, and it is not possible to monitor from the side or rear because all cameras must be installed on the robot itself. Moreover, using the commanding device, the operators have to generate every joint command to tele-operate the manipulator. As a result, tele-operation demands considerable concentration from the operators; it tends to be inefficient and requires well-experienced operators.
For efficient tele-operation, an assistance technique using a touch screen is studied in this paper. The purpose of the assistance technique is to alleviate the burden on the operators by providing a tele-operating method that requires only touching several points on the camera images.

3. Touch Screen Based Estimation of an Object Position

In this section, a technique for estimating object positions from touch screen inputs is presented. Utilizing several touch inputs provided by operators on the camera images, the technique estimates both the translation and the orientation of the object. The technique is derived under a couple of assumptions:
  • two cameras providing different viewpoints to each other for the manipulation are installed in the ROV,
  • a touch screen is available as a commanding device for the operators.
These assumptions are practically reasonable because several cameras are commonly installed in ROVs for monitoring the underwater situation, and touch screens are now widely used as command devices for operating ROVs. In the case of URI-T, there are 12 cameras on the ROV, including two that provide stereo views for the manipulation, and two touch screens in the operating room: one for the main operator and the other for the co-operator.

3.1. Touch Screen Inputs Acquisition

For the position estimation, six points in total are gathered from the two camera images (three points in each image). The number of touched points is designed to be as small as possible while still allowing estimation of both the translation and the orientation of the object. When one point is given in each camera image, the translation of a point in 3D space can be estimated; when three points are given, two direction vectors of the orientation can be determined. Since the direction vectors of a rotation matrix are orthogonal, the last direction vector can be determined from the other two. As shown in Figure 4, three touch inputs are utilized from each camera image:
  • a point at the center of the gripping position on the object, $^{I_i}P_{T_i}$,
  • a point lying on the approach direction of the object, $^{I_i}P_{X_i}$, and
  • a point lying on the normal direction of the object, $^{I_i}P_{Y_i}$,
where the superscript $I_i$ signifies the camera image coordinate frame and $i = 1, 2$ the camera number. Note that each touched point carries 2D position information w.r.t. the camera image coordinates, $I_i$.
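As an illustration, the six touch inputs above could be collected in a small structure such as the following Python sketch; the class, method names, and pixel values are hypothetical, since the paper does not specify an implementation.

```python
import numpy as np

class TouchInputs:
    """Hypothetical container for the six touch inputs: for each camera
    image i (i = 1, 2), three labelled 2D pixel points are stored."""

    def __init__(self):
        # keys: camera index; values: dict of the three labelled points
        self.points = {1: {}, 2: {}}

    def add(self, cam, label, px, py):
        """Record a touched pixel; label 'T', 'X', or 'Y' as in Figure 4."""
        assert cam in (1, 2) and label in ("T", "X", "Y")
        self.points[cam][label] = np.array([px, py], dtype=float)

    def complete(self):
        """True once all six points (three per camera) have been given."""
        return all(len(self.points[c]) == 3 for c in (1, 2))

inputs = TouchInputs()
inputs.add(1, "T", 320, 240)
inputs.add(1, "X", 400, 250)
inputs.add(1, "Y", 330, 180)
inputs.add(2, "T", 300, 260)
inputs.add(2, "X", 370, 255)
inputs.add(2, "Y", 310, 200)
print(inputs.complete())  # → True
```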

3.2. Position Estimation of the Object

Using the touched point information, a position estimation technique covering both translation and orientation is derived, based on the kinematic relationships among the manipulator, the cameras, and the object. Figure 5, which illustrates the translation estimation method, shows the coordinate systems involved: the camera image coordinates $I_i$, the camera focus coordinates $C_i$, and the world coordinate $W$.
As a preprocessing step, the 2D information of the touched points is transformed into 3D information in the world coordinate. Given the focal lengths of the camera, each touched point $^{I_i}P = [p_x, p_y]^T$ can be represented as $^{C_i}P = [p_x/f_x, p_y/f_y, 1]^T$ in the camera focus coordinate, where $f_x$ and $f_y$ denote the focal lengths. By a coordinate transform [28], the touched point w.r.t. the world coordinate is obtained as follows:
$$ P = {}^{W}_{C_i}R \, {}^{C_i}P + P_{C_i}, $$
where ${}^{W}_{C_i}R$ denotes the rotation matrix of each camera focus coordinate, $C_i$, w.r.t. the world coordinate, $W$, and $P_{C_i}$ the position vector of each camera focus coordinate w.r.t. the world coordinate.
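A minimal numerical sketch of this preprocessing step, assuming known focal lengths and camera extrinsics (the function name and arguments are illustrative):

```python
import numpy as np

def touch_to_world(p_img, fx, fy, R_wc, p_c):
    """Map a touched pixel [px, py] to a 3D point on the unit-depth plane
    of the camera, expressed in the world frame. R_wc is the rotation of
    the camera-focus frame w.r.t. the world frame; p_c is the camera
    position in the world frame."""
    px, py = p_img
    p_cam = np.array([px / fx, py / fy, 1.0])  # point in camera-focus frame
    return R_wc @ p_cam + p_c                  # transform to world frame
```

For example, with an identity camera rotation at the world origin and $f_x = f_y = 500$, the pixel $[100, 50]$ maps to $[0.2, 0.1, 1.0]$.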
For the translation estimation of the object, the points $P_{T_i}$ and $P_{C_i}$ are utilized. As shown in Figure 5, one can determine the lines passing through the points $P_{T_i}$ and $P_{C_i}$; by finding the nearest point between the two lines, the translation of the object, $T^*$, is obtained. The process to obtain the translation is given in Figure 6, in which the blocks are defined in Appendix A. Refer to [29] for the detailed procedure.
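The nearest-point computation behind Figure 6 can be sketched as follows. This is an illustrative reconstruction (the paper defers the exact procedure to [29]) that returns the midpoint of the common perpendicular between the two viewing rays:

```python
import numpy as np

def nearest_point_between_lines(a0, d0, a1, d1):
    """Midpoint of the common perpendicular between the lines
    a0 + s*d0 and a1 + t*d1 (directions must not be parallel);
    used here as the translation estimate T*."""
    d0 = d0 / np.linalg.norm(d0)
    d1 = d1 / np.linalg.norm(d1)
    # perpendicularity of the connecting segment to both directions
    # gives a 2x2 linear system in (s, t)
    A = np.array([[d0 @ d0, -(d0 @ d1)],
                  [d0 @ d1, -(d1 @ d1)]])
    b = np.array([(a1 - a0) @ d0, (a1 - a0) @ d1])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((a0 + s * d0) + (a1 + t * d1))

# example: ray 1 along x through the origin, ray 2 along y through (0, 1, 1)
p = nearest_point_between_lines(np.zeros(3), np.array([1.0, 0, 0]),
                                np.array([0.0, 1, 1]), np.array([0.0, 1, 0]))
```

Here the closest points are the origin on the first line and $(0, 0, 1)$ on the second, so the estimate is $(0, 0, 0.5)$.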
The orientation of the object is estimated utilizing all three touch inputs of Figure 4. In detail, the points $P_{T_i}$ and $P_{X_i}$ are utilized to find the approach vector of the rotation matrix, and $P_{T_i}$ and $P_{Y_i}$ the normal vector. The sliding vector is then found by taking the cross product of the two aforementioned vectors.
As depicted in Figure 7, the approach vector, $n_X$, is determined as follows. First, the two planes spanned by the lines $\overline{P_{X_i} P_{C_i}}$ and $\overline{P_{T_i} P_{C_i}}$ $(i = 1, 2)$ are obtained. Note that, from Figure 4, the points $P_{X_i}$ and $P_{T_i}$ lie on the approach direction, so both planes contain the approach direction. Second, the approach vector is determined as the direction of the line common to the two planes. The definitions of the blocks in Figure 7 are given in Appendix A; refer to [29] for the detailed procedure. By replacing $P_{X_i}$ with $P_{Y_i}$ and following the same procedure, one can find the normal vector, $n_Y$.
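Under the same assumptions, the two-plane construction can be sketched as below; this is a hedged reconstruction, not the authors' exact code. Each plane's normal is the cross product of its two spanning rays, and the common-line direction is the cross product of the two normals:

```python
import numpy as np

def common_line_direction(p_t1, p_x1, c1, p_t2, p_x2, c2):
    """Direction of the line shared by the two planes, each spanned by
    the rays C_i -> P_Ti and C_i -> P_Xi (i = 1, 2); a sketch of how the
    approach vector n_X could be obtained. All points in world frame."""
    n1 = np.cross(p_t1 - c1, p_x1 - c1)  # normal of plane from camera 1
    n2 = np.cross(p_t2 - c2, p_x2 - c2)  # normal of plane from camera 2
    d = np.cross(n1, n2)                 # direction of the common line
    return d / np.linalg.norm(d)

# example: the z = 0 and y = 0 planes intersect along the x axis
d = common_line_direction(np.array([1.0, 0, 0]), np.array([0.0, 1, 0]),
                          np.zeros(3),
                          np.array([1.0, 0, 0]), np.array([0.0, 0, 1]),
                          np.zeros(3))
```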
The sliding vector of the rotation matrix is determined by taking the cross product of the approach vector and the normal vector. Before taking the cross product, a couple of practical treatments are applied to the vectors $n_X$ and $n_Y$. The first adjusts the directions of the vectors: the approach vector must point from $P_{T_i}$ toward $P_{X_i}$, and the normal vector from $P_{T_i}$ toward $P_{Y_i}$. The adjustment is accomplished by the following signum operation:
$$ n_X = n_X \, \mathrm{sgn}\!\left( n_X \cdot \left( P_{X_1} - P_{T_1} \right) \right), \qquad n_Y = n_Y \, \mathrm{sgn}\!\left( n_Y \cdot \left( P_{Y_1} - P_{T_1} \right) \right), $$
where $\mathrm{sgn}(\cdot)$ denotes the signum function. The second treatment guarantees the orthogonality between $n_X$ and $n_Y$. We assume that the larger the angle between the two planes used to obtain a vector, the more reliable that vector is. If, according to this assumption, the approach vector is more reliable than the normal vector, the vectors are modified as follows:
$$ n_X = n_X, \qquad n_Y = \frac{ n_Y - \left( n_X \cdot n_Y \right) n_X }{ \left\| \, n_Y - \left( n_X \cdot n_Y \right) n_X \, \right\| }, $$
and vice versa. Finally, the rotation matrix as a result of the orientation estimation is determined as follows:
$$ {}^{W}_{T}R = \left[ \, n_X, \; n_Y, \; n_X \times n_Y \, \right]. $$
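The sign adjustment, orthogonalization, and rotation-matrix assembly above can be combined into a short sketch. Here $n_X$ is assumed to be the more reliable (unit) vector, and the point arguments are the world-frame touch points from camera 1; the function name is illustrative:

```python
import numpy as np

def build_rotation(n_x, n_y, p_t, p_x, p_y):
    """Assemble the estimated rotation matrix from the raw approach and
    normal directions: flip signs so the vectors point from P_T toward
    P_X and P_Y, orthogonalize the (assumed less reliable) normal vector
    against the approach vector, then take the cross product to obtain
    the sliding vector."""
    n_x = n_x * np.sign(n_x @ (p_x - p_t))   # direction adjustment
    n_y = n_y * np.sign(n_y @ (p_y - p_t))
    n_y = n_y - (n_x @ n_y) * n_x            # remove component along n_x
    n_y = n_y / np.linalg.norm(n_y)          # renormalize
    return np.column_stack([n_x, n_y, np.cross(n_x, n_y)])

R = build_rotation(np.array([1.0, 0, 0]),
                   np.array([0.2, 1.0, 0]) / np.linalg.norm([0.2, 1.0, 0]),
                   np.zeros(3), np.array([1.0, 0, 0]), np.array([0.0, 1, 0]))
```

In this example the slightly skewed normal vector is corrected to $[0, 1, 0]$, so the result is the identity rotation.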

3.3. Performance Evaluation of the Proposed Estimation Technique

In this subsection, the performance of the proposed position estimation technique is verified experimentally. The experiment is designed to estimate the positions of three points whose true positions are known in advance. The experimental setup, including the true positions of the points, is depicted in Figure 8. Each of the three points in Figure 8 is estimated 15 times for the performance evaluation.
Figure 9 shows the experimental results; the average absolute errors (AAE) and the maximum absolute errors (MAE) for each axis are arranged in Table 1. As shown in Figure 9 and Table 1, the translation errors for each axis are under 71 mm, and the orientation errors for each axis are under 28 deg. This accuracy may not be sufficient for dexterous manipulation; for gross manipulation, however, it is sufficient to move the manipulator into the vicinity of the object.

4. Control Structure for the Assisted Tele-Operation

4.1. Control Structure with the Proposed Assistance Technique

In Figure 10, the control structure for tele-operation with the proposed assistance technique is illustrated. The structure contains a couple of mode-switching inputs. The mode selection input switches the control mode: using it, the operator can select, during tele-operating tasks, between the conventional tele-operation mode and the proposed assistance mode. In the tele-operation mode, the operator tele-operates every joint of the manipulator using the commanding device, whereas in the assistance mode, the operator handles the manipulator through the touch screen. The control structure in Figure 10 contains another mode-switching input, the tool possession selection, which determines an appropriate offset distance to prevent collision; its design is explained in the next subsection.
As illustrated in Figure 10, the controller for the assistance mode includes the motion planner, which generates a smooth motion trajectory to the object; the inverse kinematics, which transforms the trajectory from Cartesian space to joint space; and the joint motion controller, a low-level controller for joint position tracking of the manipulator.
The proposed assistance technique is implemented on an industrial PC with a touch screen. The touch inputs can thus be gathered by touching the screen directly or with a conventional input device such as a mouse.

4.2. Considering Offset Distance for Approaching the Object

If the manipulator moves to the exact object position, the end-effector will collide with the object. To prevent the collision, an adequate offset distance between the end-effector and the object must be taken into account. As depicted by the offset setting in Figure 10, the offset distance should be selected according to whether the manipulator possesses a tool. When the manipulator does not possess a tool, the offset distance can be set simply as a safe distance between the end-effector and the object. When the manipulator possesses a tool, the length of the tool must also be considered when determining the offset distance. As a result, the offset distance is designed as follows:
$$ T_L = \begin{cases} T_O, & \text{when the manipulator does not possess a tool,} \\ T_O + T_{tool}, & \text{when the manipulator possesses a tool,} \end{cases} $$
where $T_O$ denotes the offset distance between the end-effector and the object, and $T_{tool}$ the length of the tool. The final goal position in Cartesian space is then determined as follows:
$$ T^{*}_{goal} = T^{*} + {}^{W}_{T}R \, T_L. $$
The offset vector is set as $T_L = [T_L, 0, 0]^T$, directed along the negative approach direction in Figure 4.
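The offset logic above can be sketched as follows; the sign convention (backing off along the negative approach axis) and the default values are assumptions for illustration, with the default $T_O$ matching the 150 mm used in the experiments of Section 5:

```python
import numpy as np

def goal_position(T_star, R_wt, with_tool, T_O=0.15, T_tool=0.0):
    """Final Cartesian goal: back off from the estimated object position
    T* along the approach axis by the offset distance, adding the tool
    length when a tool is possessed. Units: meters."""
    T_L = T_O + (T_tool if with_tool else 0.0)
    offset = np.array([-T_L, 0.0, 0.0])  # along the negative approach axis
    return T_star + R_wt @ offset

# example: identity orientation, object estimated 1 m ahead
g_no_tool = goal_position(np.array([1.0, 0, 0]), np.eye(3), with_tool=False)
g_tool    = goal_position(np.array([1.0, 0, 0]), np.eye(3),
                          with_tool=True, T_tool=0.3)
```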

4.3. Controller Design for the Assistance Technique

The motion planner in Figure 10 uses a 5th-order polynomial trajectory to generate a smooth path from the current position to the goal position in (6) [28]. For an appropriate moving speed, the time duration of the trajectory is set to $\Delta t = \max\left( \left| x_{goal} - x_{start} \right| \right) / \bar{v}$, where $\bar{v}$ denotes a user-designed average velocity limit. The inverse kinematics in Figure 10 is designed to prevent violation of the manipulator's joint limits: due to the nonlinear relationship between Cartesian space motion and joint space motion, a Cartesian trajectory can violate the joint limits of the manipulator in either position or velocity. The weighted damped least squares (WDLS) method is utilized for the inverse kinematics [30,31,32,33]. The Cartesian velocity of the manipulator is related to the joint velocity as follows [30]:
$$ \dot{x} = J_W \dot{\Theta}_W, \quad \text{where} \quad J_W \equiv J W^{-1/2} \quad \text{and} \quad \dot{\Theta}_W \equiv W^{1/2} \dot{\Theta}, $$
and $\dot{x} \in \mathbb{R}^{n}$ denotes the Cartesian velocity of the end-effector; $\dot{\Theta} \in \mathbb{R}^{m}$ the joint velocity vector; $J \in \mathbb{R}^{n \times m}$ the Jacobian matrix; $W \in \mathbb{R}^{m \times m}$ the positive definite weight matrix; $J_W$ the weighted Jacobian matrix; and $\dot{\Theta}_W$ the weighted joint velocity vector. The inverse solution of (7) is obtained as follows [30]:
$$ \dot{\Theta}_W = J_W^{+} \dot{x} + \left( I - J_W^{+} J_W \right) \dot{q}_0, \quad \text{where} \quad J_W^{+} \equiv J_W^{T} \left( J_W J_W^{T} + \lambda^2 I \right)^{-1}, $$
and $\dot{q}_0$ denotes the negative gradient of a cost function, $h(\Theta)$, for optimizing the null space of the weighted Jacobian, and $\lambda$ a damping parameter. From (8) and (7), the kinematic solution of the WDLS is obtained as follows:
$$ \dot{\Theta} = W^{-1/2} \dot{\Theta}_W. $$
The weight matrix, $W$, is utilized to avoid the joint limits [33]. Both the position limit and the velocity limit of each joint are taken into account, so the weight matrix is designed as $W = W_{PL} W_{VL}$, where $W_{PL} \in \mathbb{R}^{m \times m}$ denotes the weight matrix for the joint position limits, and $W_{VL} \in \mathbb{R}^{m \times m}$ that for the joint velocity limits. In addition, $\lambda$ and $h(\Theta)$ are utilized to mitigate other problems such as kinematic singularity. Refer to [30] for the detailed design procedure of the inverse kinematics. For the joint motion controller, the controllers built into the manipulator are utilized: the manipulator of URI-T has joint velocity controllers, which we modified to work as joint position controllers by adding external proportional feedback of the joint position errors.
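A compact sketch of one WDLS velocity-resolution step following the relations above is given below; the diagonal weight matrix (typical for joint-limit weighting) and the parameter values are assumptions, not the authors' exact settings:

```python
import numpy as np

def wdls_step(J, x_dot, w_diag, lam=0.05, q0_dot=None):
    """One weighted damped least-squares step: weight the Jacobian,
    compute the damped pseudo-inverse, optionally add a null-space
    velocity, and map back to joint space. w_diag is the diagonal of
    the positive-definite weight matrix W; lam is the damping
    parameter; q0_dot an optional null-space optimization velocity."""
    n, m = J.shape
    w_inv_sqrt = 1.0 / np.sqrt(w_diag)
    J_w = J * w_inv_sqrt                       # J_W = J W^(-1/2), diagonal W
    # damped pseudo-inverse J_W^+ = J_W^T (J_W J_W^T + lam^2 I)^-1
    J_w_pinv = J_w.T @ np.linalg.inv(J_w @ J_w.T + lam**2 * np.eye(n))
    theta_w_dot = J_w_pinv @ x_dot
    if q0_dot is not None:                     # null-space projection term
        theta_w_dot += (np.eye(m) - J_w_pinv @ J_w) @ q0_dot
    return w_inv_sqrt * theta_w_dot            # joint velocity W^(-1/2) Θ̇_W
```

With a small damping value and unit weights, the solution approaches the ordinary pseudo-inverse; increasing `lam` trades tracking accuracy for robustness near singularities.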

5. Experimental Study

5.1. Task

The performance of the assistance technique is evaluated in comparison with conventional tele-operation through experiments using URI-T. The evaluation task is the cable gripping task, one of the cable maintenance tasks performed using URI-T. In the cable gripping task, the operator places the gripping tool using the manipulator and grips the cable tightly so that it can be towed to the repair site. The procedure of the gripping task consists of the following three steps:
  • Approaching: Moving the manipulator to the gripping tool in the bucket,
  • Seizing: Seizing the handle of the gripping tool with the jaw of the manipulator,
  • Displacing: Moving the gripping tool to the cable and executing the tool to grip the cable.
A detailed description of the task is arranged in Table 2. As described there, the assistance technique is applied only to the gross motion of the manipulator; for dexterous motions such as seizing the gripping tool, conventional tele-operation is still used to prevent accidents such as collision.

5.2. Experimental Setup

As illustrated in Figure 11, the equipment of URI-T is used for the experiment: a 7-function manipulator (UW3, KnR Systems), a gripping tool, and two cameras providing different viewpoints for the manipulation. The offset, $T_O$ in (6), is set to 150 mm to provide an appropriate safety margin against the position estimation errors shown in Figure 9 and Table 1; $T_L$ in (6) is designed based on the true length of the tool. To separate the steps of the task, the operations of the jaw and the gripping tool are utilized. For example, the approaching and seizing steps are separated by the opening of the manipulator's jaw; the seizing and displacing steps by the jaw closing; and the end of the displacing step by the closing of the gripping tool on the cable. These operations are appropriate step boundaries because they take a very short time, just toggling switches.

5.3. Experimental Method

Through this human-subject experiment, the performance of the proposed assistance technique is compared with that of conventional tele-operation. The performance index is the time duration of each step of the task. To minimize learning effects during the experiments, experts with extensive experience in tele-operating the manipulator were recruited as subjects; two subjects participated. One is an operator of URI-T with more than three years of experience, including several field evaluations and underwater construction operations using URI-T. The other is an engineer developing control algorithms at the company that manufactures the manipulator of URI-T. Both are males in their late 20s and are very familiar with operating the manipulator. Each subject performed six sets of tests, yielding twelve test results in total, which were analyzed statistically. A paired t-test was conducted to determine whether the differences in time duration were statistically significant, with an alpha level of 0.05.
The experiment was performed in an in-lab environment, which may differ slightly from the underwater situation; however, the other experimental conditions were designed to match the real application. The equipment and operating system of URI-T for underwater tasks were used, and the experimental scenario was designed to resemble the underwater cable maintenance task. Regarding the touch input method, a mouse was used instead of directly touching the screen; the subjects preferred the mouse because it is easier to indicate a point accurately with it than by touching the screen. Two movie clips of the experiments are available as Supplementary Materials, linked at the end of the paper: one showing the proposed assistance technique and the other the conventional tele-operation. Refer to the movie clips for further understanding of the experimental method and environment.
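The paired t-test described above can be reproduced with standard tools. The sketch below uses SciPy with made-up time values purely for illustration; the actual measurements appear in Table 3:

```python
# Illustration of the paired t-test used for the analysis (alpha = 0.05).
# The times below are fabricated examples, NOT the paper's data.
from scipy import stats

conventional = [210.0, 195.0, 230.0, 205.0, 220.0, 215.0]  # seconds
assisted     = [165.0, 160.0, 180.0, 158.0, 172.0, 170.0]  # seconds

t_stat, p_value = stats.ttest_rel(conventional, assisted)
significant = p_value < 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {significant}")
```

`ttest_rel` pairs the measurements set by set, which matches the within-subject design of the experiment (the same subject performs both modes).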

5.4. Results

Figure 12 and Table 3 show the experimental results. Table 3 reveals that, in the approaching step, the operation time with the proposed assistance technique decreased by 18.62% on average compared with conventional tele-operation. The result is statistically significant ($p < 0.05$), implying that the assistance algorithm improves the temporal efficiency of the tele-operation. In the seizing step, the average time was reduced by 41.69%, also with statistical significance. Note that in both modes (conventional and assisted tele-operation), the seizing step itself is performed by the conventional tele-operation method. It is quite interesting that the seizing step in the assisted mode takes less time than in the conventional mode. This is because, in the assisted mode, the manipulator is placed in a better position to seize the gripping tool when the approaching step finishes. In conventional tele-operation, the operator judges the end of the approaching step from camera images alone, so the final manipulator positions vary between test sets, and additional time is needed to adjust the position before seizing the tool. In the displacing step, the average time decreased by 19.64% with statistical significance. Note that the displacing step mixes the assistance mode (moving the gripping tool) with the tele-operation mode (delicate positioning of the gripping tool); the results show that the assistance technique is still effective in increasing temporal efficiency. In summary, the total task duration with the assistance technique decreased by 22.41% compared with conventional tele-operation.

6. Conclusions

In this paper, an assistance technique using a touch screen for underwater manipulation tasks is presented. The technique provides an easy way to operate the manipulator by simply touching several points on the camera images. It involves a position estimator for objects, covering both translation and orientation, that utilizes the touched information. Because the point information is fed by operators via the touch screen, reliable estimation results are guaranteed; reliability is one of the most important issues in underwater tasks. An appropriate control structure for the assistance is also discussed: by switching between the assistance mode and the conventional tele-operation mode, the operator can perform manipulation tasks efficiently. The validity of the assistance technique is evaluated experimentally with a cable gripping task using URI-T, a cable burying ROV. The experimental results show that the proposed technique improves temporal efficiency by around 20% compared with the conventional tele-operation method.
As future work, the proposed assistance technique will be integrated into the operating system of URI-T. The technique will then be verified through experimental studies in underwater environments such as water tanks. Finally, it will be applied to underwater construction operations, helping the operators of URI-T tele-operate manipulation tasks and improving operational efficiency.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/jmse9050483/s1, Video S1: Experimental results of the proposed assistance technique and the conventional tele-operation.

Author Contributions

Conceptualization, G.R.C.; methodology, G.R.C.; software, G.K. and H.K.; validation, G.K., M.-J.L., and M.-G.K.; formal analysis, G.R.C.; investigation, G.K. and M.-J.L.; resources, J.-H.L.; data curation, G.R.C.; writing—original draft preparation, G.R.C.; writing—review and editing, G.R.C.; visualization, G.R.C.; supervision, J.-H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the project titled ‘Maritime equipment research result application promotion project,’ funded by the Ministry of Oceans and Fisheries (MOF) and Korea Institute of Marine Science&Technology Promotion (KIMST), Korea (PJT20190396, Field Demonstration and Industrialization of Heavy Duty ROV Technology).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HVDC — High Voltage Direct Current
DOF — Degree Of Freedom
ROV — Remotely Operated Vehicle
WDLS — Weighted Damped Least Squares

Appendix A. Descriptions of the Blocks

The detailed descriptions of the blocks in Figure 6 and Figure 7 are provided as follows. In Figure A1, Figure A2, Figure A3 and Figure A4, $\alpha$ and $\beta$ denote independent scalar variables, and $T = [x, y, z]^T$ a translation vector.
Figure A1. Obtaining a line equation passing through two given points.
Figure A2. Obtaining the nearest point between the two given lines.
Figure A3. Obtaining a common surface spanning two given lines.
Figure A4. Obtaining the direction vector of the common line belonging to the two given surfaces.
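The four geometric operations in Figures A1–A4 can be sketched in code. The following is a minimal illustration only, not the authors' implementation; the function names and the point-plus-unit-direction representation of lines (p = p0 + αv) are assumptions for the sketch.

```python
import numpy as np

def line_through_points(p1, p2):
    """Figure A1: line through two points, as a point and a unit direction."""
    v = p2 - p1
    return p1, v / np.linalg.norm(v)

def nearest_point_between_lines(p1, v1, p2, v2):
    """Figure A2: midpoint of the shortest segment between two (possibly skew) lines."""
    # Solve for alpha, beta such that the segment between the two closest
    # points is perpendicular to both direction vectors.
    a = np.array([[v1 @ v1, -(v1 @ v2)],
                  [v1 @ v2, -(v2 @ v2)]])
    b = np.array([(p2 - p1) @ v1, (p2 - p1) @ v2])
    alpha, beta = np.linalg.solve(a, b)
    return 0.5 * ((p1 + alpha * v1) + (p2 + beta * v2))

def common_plane(p, v1, v2):
    """Figure A3: plane spanned by two line directions, as a point and a unit normal."""
    n = np.cross(v1, v2)
    return p, n / np.linalg.norm(n)

def common_line_direction(n1, n2):
    """Figure A4: direction of the intersection line of two planes (cross of normals)."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)
```

For example, for the line through the origin along x and the line through (0, 1, 0) along z, `nearest_point_between_lines` returns (0, 0.5, 0), the midpoint of the shortest connecting segment.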

References

  1. Christ, R.D.; Wernli, R.L., Sr. The ROV Manual: A User Guide for Remotely Operated Vehicles; Butterworth-Heinemann: Oxford, UK, 2014.
  2. Yuh, J. Design and control of autonomous underwater robots: A survey. Auton. Robot. 2000, 8, 7–24.
  3. Sherwood, J.; Chidgey, S.; Crockett, P.; Gwyther, D.; Ho, P.; Stewart, S.; Strong, D.; Whitely, B.; Williams, A. Installation and operational effects of a HVDC submarine cable in a continental shelf setting: Bass Strait, Australia. J. Ocean Eng. Sci. 2016, 1, 337–353.
  4. Takaaki, O. Recent status and trends in optical submarine cable systems. NEC Tech. J. 2010, 5, 4–7.
  5. Ng, C.; Ran, L. Offshore Wind Farms: Technologies, Design and Operation; Woodhead Publishing: Cambridge, UK, 2016.
  6. Kordahi, M.E.; Gleason, R.F.; Chien, T.M. Installation and maintenance technology for undersea cable systems. AT&T Tech. J. 1995, 74, 60–74.
  7. Kim, M.G.; Kang, H.; Lee, M.J.; Cho, G.R.; Li, J.H.; Yoon, T.S.; Ju, J.; Kwak, H.W. Study for Operation Method of Underwater Cable and Pipeline Burying ROV Trencher using Barge and Its Application in Real Construction. J. Ocean Eng. Technol. 2020, 34, 361–370.
  8. Dhanak, M.R.; Xiros, N.I. Springer Handbook of Ocean Engineering; Springer: Berlin/Heidelberg, Germany, 2016.
  9. Hildebrandt, M.; Albiez, J.; Kirchner, F. Computer-based control of deep-sea manipulators. In Proceedings of the OCEANS 2008 MTS/IEEE Kobe Techno-Ocean, Kobe, Japan, 8–11 April 2008; pp. 1–6.
  10. Petillot, Y.R.; Antonelli, G.; Casalino, G.; Ferreira, F. Underwater robots: From remotely operated vehicles to intervention-autonomous underwater vehicles. IEEE Robot. Autom. Mag. 2019, 26, 94–101.
  11. Sivčev, S.; Coleman, J.; Omerdić, E.; Dooly, G.; Toal, D. Underwater manipulators: A review. Ocean Eng. 2018, 163, 431–450.
  12. Marani, G.; Choi, S.K.; Yuh, J. Underwater autonomous manipulation for intervention missions AUVs. Ocean Eng. 2009, 36, 15–23.
  13. Marani, G.; Choi, S.K. Underwater target localization. IEEE Robot. Autom. Mag. 2010, 17, 64–70.
  14. Prats, M.; Garcia, J.; Wirth, S.; Ribas, D.; Sanz, P.; Ridao, P.; Gracias, N.; Oliver, G. Multipurpose autonomous underwater intervention: A systems integration perspective. In Proceedings of the 2012 20th Mediterranean Conference on Control & Automation (MED), Barcelona, Spain, 3–6 July 2012; pp. 1379–1384.
  15. García, J.; Fernández, J.; Sanz, P.; Marín, R. Increasing autonomy within underwater intervention scenarios: The user interface approach. In Proceedings of the 2010 IEEE International Systems Conference, San Diego, CA, USA, 5–8 April 2010; pp. 71–75.
  16. Martin-Abadal, M.; Piñar-Molina, M.; Martorell-Torres, A.; Oliver-Codina, G.; Gonzalez-Cid, Y. Underwater Pipe and Valve 3D Recognition Using Deep Learning Segmentation. J. Mar. Sci. Eng. 2020, 9, 5.
  17. Palomer, A.; Ridao, P.; Youakim, D.; Ribas, D.; Forest, J.; Petillot, Y. 3D laser scanner for underwater manipulation. Sensors 2018, 18, 1086.
  18. Himri, K.; Pi, R.; Ridao, P.; Gracias, N.; Palomer, A.; Palomeras, N. Object Recognition and Pose Estimation using Laser scans For Advanced Underwater Manipulation. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV), Porto, Portugal, 6–9 November 2018; pp. 1–6.
  19. Zghyer, R.; Ostnes, R.; Halse, K.H. Is full-autonomy the way to go towards maximizing the ocean potentials? TransNav Int. J. Mar. Navig. Saf. Sea Transp. 2019, 13, 33–42.
  20. Cárdenas, E.F.; Dutra, M.S. An augmented reality application to assist teleoperation of underwater manipulators. IEEE Lat. Am. Trans. 2016, 14, 863–869.
  21. Laranjeira, M.; Arnaubec, A.; Brignone, L.; Dune, C.; Opderbecke, J. 3D Perception and Augmented Reality Developments in Underwater Robotics for Ocean Sciences. Curr. Robot. Rep. 2020, 1, 123–130.
  22. de la Cruz, M.; Casañ, G.; Sanz, P.; Marín, R. Preliminary Work on a Virtual Reality Interface for the Guidance of Underwater Robots. Robotics 2020, 9, 81.
  23. Lee, K.H.; Pruks, V.; Ryu, J.H. Development of shared autonomy and virtual guidance generation system for human interactive teleoperation. In Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, 28 June–1 July 2017; pp. 457–461.
  24. Sivčev, S.; Rossi, M.; Coleman, J.; Omerdić, E.; Dooly, G.; Toal, D. Collision detection for underwater ROV manipulator systems. Sensors 2018, 18, 1117.
  25. Tanwani, A.K.; Calinon, S. A generative model for intention recognition and manipulation assistance in teleoperation. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 43–50.
  26. Xi, B.; Wang, S.; Ye, X.; Cai, Y.; Lu, T.; Wang, R. A robotic shared control teleoperation method based on learning from demonstrations. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419857428.
  27. Li, J.H.; Lee, M.J.; Kang, H.; Kim, M.G.; Cho, G.R. Design, Performance Evaluation and Field Test of a Water Jet Tool for ROV Trencher. J. Mar. Sci. Eng. 2021, 9, 296.
  28. Craig, J.J. Introduction to Robotics: Mechanics and Control; Addison-Wesley: Boston, MA, USA, 1989.
  29. Cho, G.R.; Ki, H.; Li, J.H.; Lee, M.; Jee, S.C. Assisted Teleoperation for Underwater Manipulation utilizing Touch Screen Inputs. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 26–30 June 2018; pp. 67–72.
  30. Cho, G.R.; Lee, M.J.; Kim, M.G.; Li, J.H. Inverse kinematics for autonomous underwater manipulations using weighted damped least squares. In Proceedings of the 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Korea, 28 June–1 July 2017; pp. 765–770.
  31. Deo, A.S.; Walker, I.D. Overview of damped least-squares methods for inverse kinematics of robot manipulators. J. Intell. Robot. Syst. 1995, 14, 43–68.
  32. Buss, S.R. Introduction to inverse kinematics with jacobian transpose, pseudoinverse and damped least squares methods. IEEE J. Robot. Autom. 2004, 17, 16.
  33. Chan, T.F.; Dubey, R.V. A weighted least-norm solution based scheme for avoiding joint limits for redundant joint manipulators. IEEE Trans. Robot. Autom. 1995, 11, 286–292.
Figure 1. URI-T, an underwater cable burying ROV.
Figure 2. Manipulators and tools for cable maintenance.
Figure 3. Operating room for URI-T.
Figure 4. Manipulators and tools for cable maintenance.
Figure 5. Translation estimation of object.
Figure 6. Process of translation estimation.
Figure 7. Estimation process for the approach vector of an orientation matrix.
Figure 8. Setup for position estimation experiments.
Figure 9. Position estimation errors.
Figure 10. Overall control structure for assisted tele-operation.
Figure 11. Experimental setup.
Figure 12. Box plots of experimental time taken for each step: (a) approaching step, (b) seizing step, (c) displacing step, (d) total time for all steps.
Table 1. The average absolute errors (AAE) and the max. absolute errors (MAE) of the position estimation experiments.

                Point 1          Point 2          Point 3
axis            AAE     MAE      AAE     MAE      AAE     MAE
x (mm)          36.7    65.6     27.8    60.9     27.0    70.6
y (mm)          11.5    30.9     6.7     18.6     7.6     17.5
z (mm)          23.1    45.3     21.2    42.8     28.4    69.5
roll (deg)      11.3    27.5     2.3     7.2      9.5     19.9
pitch (deg)     1.9     6.6      8.3     20.7     4.6     11.4
yaw (deg)       0.8     2.2      0.5     1.1      0.9     1.9
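Note that MAE in Table 1 denotes the maximum (not mean) absolute error. The two metrics are presumably computed per axis over the signed estimation errors; a minimal sketch (function names illustrative, not from the paper):

```python
def average_absolute_error(errors):
    """AAE: mean of the absolute values of the signed errors."""
    return sum(abs(e) for e in errors) / len(errors)

def max_absolute_error(errors):
    """MAE in the sense of Table 1: largest absolute error observed."""
    return max(abs(e) for e in errors)
```

For instance, the error samples [1.0, -3.0, 2.0] give an AAE of 2.0 and an MAE of 3.0.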
Table 2. Detailed description of the task. Control modes are listed as (conventional tele-op. / assisted tele-op.).

#0 Initial posture: Manipulator is in the initial posture (same posture as in Figure 11); the jaw is closed. (- / -)
#1 Approaching:
    Moving the manipulator to the gripping tool (tele-operation / assistance)
    Opening the jaw (tele-operation / tele-operation)
#2 Seizing:
    Delicate positioning of the manipulator to seize the gripping tool (tele-operation / tele-operation)
    Closing the jaw (seizing the tool) (tele-operation / tele-operation)
#3 Displacing:
    Displacing the gripping tool to the cable (tele-operation / assistance)
    Delicate positioning of the gripping tool on the cable (tele-operation / tele-operation)
    Closing the gripping tool (gripping the cable) (tele-operation / tele-operation)
Table 3. Experimental results: averaged time and p-value.

                               Approaching   Seizing   Displacing   Total
➀ conventional tele-op. (s)    32.41         17.93     80.58        130.33
➁ assisted tele-op. (s)        26.38         10.46     64.75        101.59
(➀–➁)/➀ × 100 (%)              18.62         41.69     19.64        22.41
p-value                        0.019         0.004     0.008        0.002
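This excerpt does not state which statistical test produced the p-values in Table 3. As an illustration only, a two-sample comparison of per-trial completion times could be run with Welch's t-test; the sample arrays below are hypothetical, not the paper's raw data.

```python
from scipy import stats

# Hypothetical per-trial total completion times in seconds (illustrative only).
conventional = [128.4, 135.2, 122.9, 140.1, 125.0]
assisted = [99.7, 104.3, 98.1, 107.5, 96.2]

# Welch's t-test: compares the two means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(conventional, assisted, equal_var=False)

# Relative improvement in the form reported in Table 3: (conv - assisted) / conv x 100.
improvement = (1.0 - sum(assisted) / sum(conventional)) * 100.0
```

With these illustrative samples the improvement comes out near the 22% reported for the total time, and the p-value falls well below the 0.05 significance threshold.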
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Cho, G.R.; Ki, G.; Lee, M.-J.; Kang, H.; Kim, M.-G.; Li, J.-H. Experimental Study on Tele-Manipulation Assistance Technique Using a Touch Screen for Underwater Cable Maintenance Tasks. J. Mar. Sci. Eng. 2021, 9, 483. https://doi.org/10.3390/jmse9050483


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
