Article

Distributed Dynamic Predictive Control for Multi-AUV Target Searching and Hunting in Unknown Environments

Institute of Ocean Installations and Control Technology, College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(5), 366; https://doi.org/10.3390/machines10050366
Submission received: 25 March 2022 / Revised: 5 May 2022 / Accepted: 6 May 2022 / Published: 11 May 2022

Abstract

Research and development of the ocean has been gaining in popularity in recent years, and target searching and hunting in unknown marine environments has become a pressing problem. To solve this problem, a distributed dynamic predictive control (DDPC) algorithm based on the idea of predictive control is proposed. The task-environment region information and the AUV state-update input are obtained by predicting the state of the multi-AUV system and making online task-optimization decisions, which lock the search area for the following moment. Once a moving target is found during the search, the AUVs conduct a distributed hunt based on the theory of potential points, which solves the problem of reasonably distributing the potential points during the hunting process and allows the hunting formation to be established rapidly. Compared with other methods, the simulation results show that the algorithm exhibits high efficiency and adaptability.

1. Introduction

It is always a challenging task to explore and develop the enormous, complex, and hazardous marine environment, and the autonomous underwater vehicle (AUV) is currently the best technical means to address this challenge, as it is an underwater platform with good concealment, flexible underwater motion, economic applicability, and other high-tech characteristics [1]. The reconnaissance efficiency of a single AUV is low because of realistic constraints such as limited energy and restricted communication; therefore, multi-AUV collaborative operation is required. In order to accomplish underwater tasks better, AUVs need the capabilities of adaptive searching, decision-making, and dynamic target hunting. Therefore, how to utilize limited resources rationally and coordinate AUVs to complete target searching and hunting are the most critical problems.
For the problem of multi-agent-system target searches in an unknown environment, extensive research has been carried out [2]. Liu et al. established the distributed multi-AUV collaborative search system (DMACSS) and proposed the autonomous collaborative search-learning algorithm (ACSLA) integrated into the DMACSS. The test results demonstrate that the DMACSS runs stably and that the search accuracy and efficiency of ACSLA outperform other search methods, thus better realizing the cooperation between AUVs and allowing the DMACSS to find targets more accurately and faster [3]. A fuzzy-based bio-inspired neural network approach was proposed by Sun for multi-AUV target searching, which can effectively plan search paths; moreover, a fuzzy algorithm was introduced into the bio-inspired neural network to make the obstacle-avoidance trajectory of the AUV smoother. Simulation results show that the proposed algorithm can control a multi-AUV system to complete multi-target search tasks with higher search efficiency and adaptability [4]. For multiple AUVs, Yao et al. proposed a bidirectional negotiation with biased min-consensus (BN-BMC) algorithm to determine the allocation and prioritization of the sub-regions to be searched and utilized an adaptive oval–spiral coverage (AOSC) strategy to plan the coverage path within each sub-region; a Dubins curve satisfying the AUV kinematic constraints is taken as the transition path between sub-regions [5]. Yue et al. presented a combined option and reinforcement learning algorithm for target search in unknown environments, which improves the dynamic characteristics through reinforcement learning so as to handle real-time tasks [6]. However, this kind of method adapts poorly to an unknown working environment, and its real-time performance and target search efficiency are reduced. Luo et al. put forward a bio-inspired neural network algorithm for unknown environments [7]. The weight transfer attenuation process between neurons is constructed in the environment so as to build a real-time map and realize the robot's search in an unknown two-dimensional environment; however, due to the limited sonar detection range, this method can only be applied to target searches in local unknown environments. An improved PSO-based approach was proposed by Cai et al. for cooperative robot target searching in unknown environments, which used the potential field function as the fitness function of the particle swarm optimization (PSO) algorithm [8]. The unknown areas are divided into different search levels and the collaborative rules are redefined. Dadgar et al. proposed a distributed method based on PSO to overcome robot workspace limitations so that the PSO-based algorithm could achieve the overall optimum under a global mechanism [9]. A local PSO algorithm based on information fusion and sharing was proposed by Saadaoui et al. for collaborative target search [10]. In addition, a probabilistic map and a deterministic map were established for searching a location based on the map data; finally, the path was planned by the PSO algorithm for further confirmation of the target. A reinforcement learning algorithm was applied to the snake game by Wu et al., simulating the process of an unmanned aerial vehicle (UAV) searching for targets in an unknown area as a snake searching for fruit [11]. Ivić et al. proposed a search algorithm based on dynamical-system ergodic theory for the specific case of MH370, combining optimal search theory with ergodic theory [12].
For the problem of hunting a dynamic target, Wang et al. came up with a method that lets robots form a pursuit team through negotiation, after which the team carries out cooperative pursuit according to a moving multi-target cooperative pursuit algorithm [13]. Wang et al. proposed an optimal capture strategy based on potential points, and the ant colony algorithm was used to realize capture and collision avoidance during the AUV task [14]. Taking port security issues into account, Meng et al. proposed a predictive planning interception (PPI) algorithm for dynamic targets [15]. The underwater vehicle predicts the target position by tracking the target, and the path to the target is planned in advance by the artificial potential field method for interception; however, due to the randomness and uncontrollability of moving targets, it is often difficult to achieve such advanced interception. Cao et al. used polynomial fitting to dynamically update the sampling points to find the navigation rules of the dynamic target, and then a reinforcement learning algorithm was used to find the shortest path to capture the target [16].
In conclusion, in view of the various interference factors in the unknown underwater environment and the limitations of the AUV's own search capacity and energy expenditure, the distributed dynamic predictive control (DDPC) search method is proposed. For the problem of hunting dynamic targets, a dynamic distribution method is proposed to distribute the potential points around the target reasonably, which improves the accuracy and rapidity of hunting.
The structure of this article is as follows: Section 2 establishes the mathematical models; the searching and hunting process and the DDPC algorithm are presented in Section 3; Section 4 describes the hunting algorithm; the details of the simulation setup and results are given in Section 5; and finally, the conclusions and future work are summarized in Section 6.

2. Mathematical Modeling

2.1. The Environment Model

This paper divides the region into two levels [17]. By dividing the search area into grids, information such as the traversal status, search results, and area coverage can be better obtained during task execution. The size of a first-level sub-region is related to the distance the AUV moves over one predicted time step, while a second-level sub-region is consistent with the effective detection range of the AUV sonar. The state information structure contained in each level of region and how it is updated are discussed in detail in Section 3.2.1. As shown in Figure 1, the bright red box represents a first-level sub-region and the dark red box represents a second-level sub-region; the blue numbers are first-level sub-region numbers and the black numbers are second-level sub-region numbers.
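To make this two-level bookkeeping concrete, the following minimal sketch (not taken from the paper's implementation) stores the first-level fields A, B, T, P and the second-level fields W, Z, P defined in Section 3.2.1; the grid size and the initial certainty value of 0.5 are assumptions made for illustration.

```python
# A minimal sketch (not the authors' code) of the two-level region bookkeeping;
# field names mirror the symbols A, B, T, P and W, Z, P of Section 3.2.1.
from dataclasses import dataclass, field

@dataclass
class SecondLevelCell:            # sized to the effective sonar footprint
    W: int = 0                    # 0: not traversed, 1: traversed
    Z: float = 0.0                # 0: not locked, 0.5: locked but not arrived, 1: locked and arrived
    P: float = 0.5                # certainty of target presence (assumed to start at 0.5)

@dataclass
class FirstLevelCell:             # sized to one predicted AUV step
    A: int = 0                    # 0: unallocated, 1: allocated and locked by an AUV
    B: float = 0.0                # traversal state in {0, 0.5, 1}
    T: float = 0.0                # effective traversal status (sum of four 0.25 contributions)
    P: float = 0.5                # certainty of target presence (assumed to start at 0.5)
    sub: list = field(default_factory=lambda: [[SecondLevelCell() for _ in range(2)]
                                               for _ in range(2)])

# an Nx-by-My grid of first-level sub-regions covering the task area (sizes assumed)
Nx, My = 10, 10
grid = [[FirstLevelCell() for _ in range(My)] for _ in range(Nx)]
print(grid[0][0].sub[1][1].P)
```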

2.2. The AUV Kinematic Model

According to the definition in reference [18], two reference coordinate systems are adopted, namely the earth-fixed reference frame and the body-fixed reference frame, which involve the velocity vector $V = [u, v, w]^T$, the angular velocity vector $[p, q, r]^T$, and the attitude angle vector $\eta = [\phi, \theta, \varphi]^T$, as shown in Figure 2.
The AUV kinematic model is as follows:
$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = R(\Theta) \begin{bmatrix} u \\ v \\ w \end{bmatrix} \tag{1}$$

$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\varphi} \end{bmatrix} = T_{\Theta}(\Theta) \begin{bmatrix} p \\ q \\ r \end{bmatrix} \tag{2}$$

$$R(\Theta) = \begin{bmatrix} \cos\varphi\cos\theta & \cos\varphi\sin\theta\sin\phi - \sin\varphi\cos\phi & \sin\varphi\sin\phi + \cos\varphi\sin\theta\cos\phi \\ \sin\varphi\cos\theta & \cos\varphi\cos\phi + \sin\varphi\sin\theta\sin\phi & \sin\varphi\sin\theta\cos\phi - \cos\varphi\sin\phi \\ -\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi \end{bmatrix} \tag{3}$$

$$T_{\Theta}(\Theta) = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix}, \quad \theta \ne \frac{\pi}{2} \tag{4}$$
In a two-dimensional environment, the kinematic model of the AUV is usually simplified as:
$$\begin{cases} \dot{x} = u\cos\varphi \\ \dot{y} = u\sin\varphi \\ \dot{\varphi} = r \end{cases} \tag{5}$$
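For simulation, Equation (5) is typically integrated with a simple forward-Euler step. The sketch below is one possible discretization, with an assumed step length dt; it is not the paper's implementation.

```python
import math

def auv_step(x, y, psi, u, r, dt=1.0):
    """One forward-Euler step of the planar kinematics of Equation (5).

    x, y: position [m]; psi: heading [rad]; u: surge speed [m/s];
    r: yaw rate [rad/s]; dt: step length [s] (assumed value).
    """
    return (x + u * math.cos(psi) * dt,
            y + u * math.sin(psi) * dt,
            psi + r * dt)

# example: 4 m/s surge speed with a gentle turn
print(auv_step(0.0, 0.0, 0.0, 4.0, math.pi / 40))
```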

2.3. Forward-Looking Sonar Model

The mathematical model is established according to the real working principle of sonar [19]. The detection radius of the sonar is $R$, the horizontal open angle is $\alpha$, and the vertical (depth) detection angle is $\beta$. An array statistic matrix is established according to the open-angle range of the sonar, and whether there is a target in the visible range is judged through the array elements in the matrix. The model can be represented simply as in Figure 3.
The mathematical model between the object and sonar is established. In a two-dimensional environment, the targets for which information is available should meet the following requirements:
$$|y_t| \le \sqrt{x_t^2 + y_t^2}\,\sin\frac{\alpha}{2}, \quad \sqrt{x_t^2 + y_t^2} \le R \tag{6}$$
where $(x_t, y_t)$ is obtained as follows:
$$\begin{cases} x_t = x - x_0 \\ y_t = y - y_0 \end{cases} \tag{7}$$
where $(x, y)$ represents the position coordinates of the object and $(x_0, y_0)$ is the coordinate of the sonar in the body coordinate system.
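A possible check of the visibility condition of Equations (6) and (7) is sketched below; the rotation of a world-frame target into the body frame and the extra forward-half-plane test are added assumptions, since Equation (6) is stated directly in sonar coordinates.

```python
import math

def in_sonar_sector(target_xy, sonar_xy, heading, R=150.0, alpha=math.radians(120)):
    """Visibility test of Equations (6)-(7).

    target_xy and sonar_xy are world-frame positions; the relative vector is
    rotated into the body frame using the AUV heading (an added step), and a
    forward-half-plane check is included, which Equation (6) leaves implicit.
    """
    dx, dy = target_xy[0] - sonar_xy[0], target_xy[1] - sonar_xy[1]
    xt = math.cos(heading) * dx + math.sin(heading) * dy    # relative coordinates, Eq. (7)
    yt = -math.sin(heading) * dx + math.cos(heading) * dy
    rng = math.hypot(xt, yt)
    return rng <= R and xt > 0 and abs(yt) <= rng * math.sin(alpha / 2)

print(in_sonar_sector((120.0, 30.0), (0.0, 0.0), 0.0))   # target inside the sector
```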
Forward-looking sonar is prone to interference by external noise in the process of collecting underwater information, which affects the detected data. Therefore, interference noise is added to the sonar model to simulate the attenuation of the visual field and obstruction of obstacles, which are specifically described as follows:
$$y_{xq} = h + l_n \zeta, \quad (l_n \le L) \tag{8}$$
where $y_{xq}$ is the environmental characteristic information detected by the sonar, $L$ is the effective detection range of the sonar, $h$ is the sonar detection function under noise-free conditions, $l_n$ is the distance between the detected target and the sonar at time $n$, and $\zeta$ denotes the nonlinear interference. When $l_n > L$ or the object to be detected is blocked by obstacles, the environmental characteristic information cannot be detected.

2.4. Target Model

It is assumed that all targets are regarded as particles. The model of a static target is shown as:
$$\begin{bmatrix} x_{n+1} \\ \dot{x}_{n+1} \\ y_{n+1} \\ \dot{y}_{n+1} \end{bmatrix} = \begin{bmatrix} x_n \\ 0 \\ y_n \\ 0 \end{bmatrix} \tag{9}$$
where $x_{n+1}$ and $\dot{x}_{n+1}$ respectively represent the coordinate and velocity of the target in the X-axis direction at time $n+1$, and $y_{n+1}$ and $\dot{y}_{n+1}$ respectively represent the coordinate and velocity in the Y-axis direction at time $n+1$. $\dot{x}_{n+1}$ and $\dot{y}_{n+1}$ are identically 0 when the target is static.
It is assumed that dynamic targets move at the same depth and perform uniform circular motion [20]. The discrete time equation of dynamic targets can be described as follows:
$$X_n = \begin{bmatrix} x_n & \dot{x}_n & y_n & \dot{y}_n \end{bmatrix}^T \tag{10}$$

$$X_{n+1} = \begin{bmatrix} 1 & \dfrac{\sin\omega T}{\omega} & 0 & -\dfrac{1-\cos\omega T}{\omega} \\ 0 & \cos\omega T & 0 & -\sin\omega T \\ 0 & \dfrac{1-\cos\omega T}{\omega} & 1 & \dfrac{\sin\omega T}{\omega} \\ 0 & \sin\omega T & 0 & \cos\omega T \end{bmatrix} X_n + \begin{bmatrix} T^2/2 & 0 \\ T & 0 \\ 0 & T^2/2 \\ 0 & T \end{bmatrix} \begin{bmatrix} w_{x,n} \\ w_{y,n} \end{bmatrix} \tag{11}$$
where $\omega$ and $T$ represent the turning angular velocity and the sampling time, respectively.
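The constant-turn update of Equation (11) can be evaluated directly as a matrix-vector product. The sketch below restores the conventional signs of the transition matrix and treats the turn rate, sampling time, noise level, and initial target speed as illustrative assumptions, not values from the paper.

```python
import numpy as np

def ct_step(X, omega=0.05, T=1.0, sigma_a=0.0, rng=None):
    """One step of the constant-turn model of Equation (11).

    X = [x, x_dot, y, y_dot]; the signs follow the standard constant-turn
    transition matrix; omega, T, sigma_a are illustrative values.
    """
    s, c = np.sin(omega * T), np.cos(omega * T)
    F = np.array([[1.0, s / omega,         0.0, -(1.0 - c) / omega],
                  [0.0, c,                 0.0, -s],
                  [0.0, (1.0 - c) / omega, 1.0, s / omega],
                  [0.0, s,                 0.0, c]])
    G = np.array([[T**2 / 2, 0.0],
                  [T,        0.0],
                  [0.0, T**2 / 2],
                  [0.0,        T]])
    rng = rng or np.random.default_rng(0)
    w = sigma_a * rng.standard_normal(2)      # process noise [w_x, w_y]
    return F @ X + G @ w

# dynamic target 1 from Table 2, with an assumed initial velocity
X0 = np.array([460.2, 0.0, 343.5, 1.0])
print(ct_step(X0))
```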

2.5. Multiple AUV Communication Content

In order to realize multi-AUV cooperative searching and hunting, this paper analyzes the information interaction content of AUV systems. The communication contents are shown in Table 1, which are the AUV state information, target information and sub-region state information.
Each AUV periodically broadcasts its information so that every AUV can obtain the information content of the global environment, thus realizing collaborative searching and hunting.

3. The Distributed Dynamic Predictive Control Algorithm

3.1. The Searching and Hunting Process

According to the task requirements, the target search problem for AUVs can be described as follows: $N_T$ static and moving targets, $N_P$ random obstacles, and $N_V$ AUVs are randomly distributed in the mission area. The multiple AUVs are required to be cooperatively controlled so as to search for as many unknown targets as possible at low search cost and within limited time [21].
In the target search process, the AUVs mark the locations of static targets; for a moving target, the first AUV that discovers it is confirmed as the organizer, and a hunting message is then sent to the other AUVs according to the distance between each AUV and the target. After receiving the message, the AUVs first sail toward the predetermined hunting positions around the target and then narrow down to the required formation to hunt the dynamic target. When an AUV encounters obstacles during the search mission, it switches from search mode to obstacle-avoidance mode; after successfully avoiding the obstacle, it continues to perform the search task. The search flowchart is shown in Figure 4.

3.2. The Distributed Dynamic Predictive Control Algorithm

The core of the DDPC algorithm is that each AUV predicts the change of the environment after obtaining prior information through communication and then decides its next action based on the prediction. If the AUVs can obtain the states of the mission environment area, the multi-AUV system, and the targets for a period of time in the future when executing target search decisions, the entire system adapts better to the unknown underwater environment, which is the original purpose of the DDPC algorithm. Figure 5 shows the task execution process of the AUV system using the DDPC algorithm.
(1)
AUV and environment state feedback: The system feeds back the state-information changes of the current AUV actuator and of the task environment model, and this feedback information is used as the input of the system state prediction;
(2)
System state prediction: The states for the next $N$ steps are dynamically predicted from the feedback information, and the predicted state at the current time $n$ is obtained. The predicted state is represented by $X(n) = \{x(n+1|n), x(n+2|n), \dots, x(n+N|n)\}$;
(3)
Online task optimization decision: Based on distributed dynamic prediction combined with optimization methods, the algorithm makes online decisions, confirming the actuator state input information and the area state information, which are $U(n) = \{u(n|n), \dots, u(n+N-1|n)\}$ and $O(n) = \{o(n|n), \dots, o(n+N-1|n)\}$. Then $u(n) = u(n|n)$ and $o(n) = o(n|n)$ are taken as the state inputs;
(4)
State updates for the AUV and task area: The states of the actuator and of the entire environmental area are updated through the decision input to obtain $M(n)$ and $Y(n)$, respectively, and finally the AUV system is controlled to perform the collaborative target search, as sketched in the loop below.
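A schematic, heavily simplified version of this four-step cycle is sketched below; the stub functions stand in for the prediction, optimization, and update models of Sections 3.2.1-3.4 and are not the paper's implementation.

```python
import math

# stub models standing in for Sections 3.2.1-3.4; placeholders only
def predict_states(x, o, N):                  # (2) repeat the current state N times
    return [x] * N

def optimise_decision(X_pred, o, N):          # (3) keep heading, 4 m/s speed
    return [(4.0, 0.0)] * N, [o] * N

def update_auv(x, u):                         # (4a) apply one speed/heading-rate input
    (px, py, psi), (v, r) = x, u
    return (px + v * math.cos(psi + r), py + v * math.sin(psi + r), psi + r)

def update_region(o, o_new):                  # (4b) adopt the decided region state
    return o_new

def ddpc_cycle(auv_state, region_state, N=10):
    x0, o0 = auv_state, region_state                    # (1) feedback of current states
    X_pred = predict_states(x0, o0, N)                  # (2) N-step state prediction
    U_seq, O_seq = optimise_decision(X_pred, o0, N)     # (3) online optimisation decision
    u_now, o_now = U_seq[0], O_seq[0]                   # (4) apply only the first inputs
    return update_auv(x0, u_now), update_region(o0, o_now)

print(ddpc_cycle((0.0, 0.0, 0.0), {}))
```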

3.2.1. Task Area State

The environment region is divided into $N_x \times M_y$ discrete first-level task sub-regions, and each first-level sub-region contains a state information structure, as follows:
$$(S_x, S_y) \quad (S_x \in \{1, 2, \dots, N_x\},\ S_y \in \{1, 2, \dots, M_y\}) \tag{12}$$
$$M_{1xy}(n) = \{A_{xy}(n), B_{xy}(n), T_{xy}(n), P_{xy}(n)\} \tag{13}$$
where $A_{xy}(n) \in \{0, 1\}$ describes the allocation state of the first-level sub-region $(S_x, S_y)$ under dynamic prediction: $A_{xy}(n) = 0$ indicates that the first-level sub-region has not been allocated by prediction, and $A_{xy}(n) = 1$ indicates that the first-level sub-region has been allocated and is locked by an AUV. When the number of regions that have not been predictively allocated is less than the number of AUVs, the unlocking action does not proceed. $B_{xy}(n)$ is the traversal state of the sub-region at time $n$, which is defined as follows:
$$B_{xy}(n) = \begin{cases} 0 & (d_{xy}(n) > r_{xy}) \\ 0.5 & (d_{xy}(n) = r_{xy}) \\ 1 & (d_{xy}(n) < r_{xy}) \end{cases} \tag{14}$$
where $d_{xy}(n)$ denotes the distance between the AUV and the allocated sub-region, $r_{xy} = \alpha \min(L_{S_x}, L_{S_y})$ $(\alpha \in (0,1))$ is the distance measure of the traversal degree of the sub-region, $L_{S_x}$ and $L_{S_y}$ are the length and width of the first-level sub-region, and $\alpha$ is a dynamic regulator. $T_{xy}(n)$ represents the status value of the current first-level sub-region effectively traversed by the AUV, which is defined as:
$$T_{xy}(n) = \sum_{k=1}^{4} T_{xy,k}(n + N - 1), \quad T_{xy,k} \in \{0, 0.25\} \tag{15}$$
$P_{xy}(n) \in [0, 1]$ represents the degree of certainty of the target presence information in the current first-level sub-region:
$$P_{xy}(n) = P_{xy}(l_n) \tag{16}$$
Since the degree of certainty of the target existence information is obtained by observing the environment with sonar, the AUV is required to continuously conduct the search in the allocated area to update the target existence probability in the area. The update equation is as follows:
$$P_{xy}(n+1) = \begin{cases} \tau P_{xy}(l_n) & (l_{\min} < l_n \le l_{\max},\ q(n) = 0) \\ P_{xy}(l_n) & (l_n \le l_{\min},\ q(n) = 1) \\ 0 & (l_n > R) \end{cases} \tag{17}$$
where $\tau \in [0, 1]$ is the dynamic coefficient of the certainty degree of the target existence and $q(n)$ is a binary variable: $q(n) = 1$ represents that the target has been found, and $q(n) = 0$ represents the absence of a detection.
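A direct reading of Equation (17) is sketched below; the values of tau, l_min, and l_max are illustrative assumptions, and q follows the convention of Equation (27), i.e., 1 when the target is found.

```python
def update_presence(P, l_n, q, tau=0.9, l_min=30.0, l_max=150.0, R=150.0):
    """Target-presence certainty update of Equation (17).

    q follows Equation (27): 1 when the target is found, 0 otherwise;
    tau, l_min, l_max are illustrative values, not taken from the paper.
    """
    if l_n > R:
        return 0.0                 # outside the sonar range: no information kept
    if l_n <= l_min and q == 1:
        return P                   # target confirmed at close range: certainty kept
    if l_min < l_n <= l_max and q == 0:
        return tau * P             # area searched, nothing found: certainty decays
    return P                       # all other cases left unchanged

print(update_presence(0.6, l_n=80.0, q=0))    # mid-range, nothing found: decays by tau
```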
The first-level dynamic predictive search state information set of the AUV is defined as:
$$O_{1xy}(n) = \{M_{1xy}(n)\ |\ x \in \{1, 2, \dots, N_x\},\ y \in \{1, 2, \dots, M_y\}\} \tag{18}$$
The first-level sub-region $(S_x, S_y)$ $(x \in \{1, \dots, N_x\},\ y \in \{1, \dots, M_y\})$ is further divided into $L_i \times L_j$ second-level sub-regions. Each second-level sub-region $(b_i, b_j)$ $(i, j \in \{1, 2\})$ has an information structure $M_{2ij}(n) = \{W_{ij}(n), Z_{ij}(n), P_{ij}(n)\}$, where $W_{ij}(n) \in \{0, 1\}$ represents the state of the second-level sub-region being traversed by an AUV at time $n$: $W_{ij}(n) = 0$ represents a region that has not been traversed by any AUV, and $W_{ij}(n) = 1$ represents the opposite, meaning no other AUV needs to traverse it again. $Z_{ij}(n) \in \{0, 0.5, 1\}$ indicates the three states of the second-level sub-region at time $n$: not locked, locked but not arrived, and locked and arrived. $Z_{ij}(n)$ is expressed as follows:
$$Z_{ij}(n) = \begin{cases} 0 & (W_{ij}(n) = 0) \\ H_{ij}(n) & (W_{ij}(n) = 1) \end{cases} \tag{19}$$
where $H_{ij}(n)$ is the effective traversal state of the second-level sub-region at time $n$, which is defined as:
$$H_{ij}(n) = \begin{cases} 0.5 & (d_{ij}(n) > r_{ij}) \\ 1 & (d_{ij}(n) \le r_{ij}) \end{cases} \tag{20}$$
where $d_{ij}(n)$ represents the distance from the AUV to the center of the predictively allocated sub-region, $r_{ij} = \beta \max(D_{b_i}, D_{b_j})$ $(\beta \in (0,1))$ represents the distance measure of the traversal degree of the second-level sub-region, $D_{b_i}$ and $D_{b_j}$ represent the length and width of the second-level sub-region, and $\beta$ is the dynamic adjustment factor.
Similar to the update method for the certainty of target presence information in the first-level sub-region, the update equation of the currently locked second-level sub-region is defined as:
$$P_{ij}(n+1) = \begin{cases} \tau P_{ij}(l_n) & (l_{\min} < l_n \le l_{\max},\ q(n) = 0) \\ P_{ij}(l_n) & (l_n \le l_{\min},\ q(n) = 1) \\ 0 & (l_n > R) \end{cases} \tag{21}$$
The parameter settings of Equation (21) are consistent with those of Equation (17). The dynamic prediction search states of all the second-level sub-regions in a first-level sub-region are expressed as:
$$O_{2ij}(n) = \{M_{2ij}(n)\ |\ i \in \{1, 2\},\ j \in \{1, 2\}\} \tag{22}$$
The dynamic predictive search states of the entire task environment region can be expressed as:
$$O(n) = \bigcup_{x=1}^{N_x}\bigcup_{y=1}^{M_y} O_{1xy}(n) \cup \bigcup_{i=1}^{2}\bigcup_{j=1}^{2} O_{2ij}(n) \tag{23}$$

3.2.2. The AUV State

Assuming that the AUVs execute the target search task at a certain depth in a horizontal plane, the state of each AUV is denoted as $X_{ai}(n) = [p_{si}(n), \psi_i(n)]$, where $p_{si}(n)$ represents the position of the $i$-th AUV with coordinates $(x_{ai}(n), y_{ai}(n))$, and $\psi_i(n)$ represents the heading angle of the AUV. The optimal decision input of the AUV is $u_{ai}(n) = [v_{ci}(n), r_i(n)]$, where $v_{ci}(n)$ is the sailing speed of the AUV at time $n$ and $r_i(n)$ is the heading deflection angle of the AUV; thus, the state equation $Q_i$ of the AUV is:
$$\begin{bmatrix} p_{si}(n+1) \\ \psi_i(n+1) \end{bmatrix} = \begin{bmatrix} p_{si}(n) + f_a(v_{ci}(n), r_i(n), \psi_i(n)) \\ \psi_i(n) + r_i(n) \end{bmatrix} \tag{24}$$
According to the decision input of the AUV, the sailing distance over a future period of time is calculated, and this distance is mapped onto the coordinate axes according to the AUV heading angle at the current time to obtain the increment $(\Delta x, \Delta y)$. The specific calculation is shown in the following formula:
$$f_a: \begin{cases} \Delta x = v_{ci}(n)\,\Delta t\,\cos(\psi_i(n) + r_i(n)) \\ \Delta y = v_{ci}(n)\,\Delta t\,\sin(\psi_i(n) + r_i(n)) \end{cases} \tag{25}$$
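Equations (24) and (25) can be rolled forward over a candidate decision sequence to obtain the predicted track used in the DDPC prediction step. The sketch below assumes a unit step length and constant inputs purely for illustration.

```python
import math

def predict_auv(ps, psi, inputs, dt=1.0):
    """Roll Equations (24)-(25) forward over a decision sequence [(v_c, r), ...];
    the step length dt is an assumed value."""
    (x, y), traj = ps, []
    for v_c, r in inputs:
        x += v_c * dt * math.cos(psi + r)     # f_a: position increment, Equation (25)
        y += v_c * dt * math.sin(psi + r)
        psi += r                              # heading update, Equation (24)
        traj.append(((x, y), psi))
    return traj

# predicted 10-step track for a constant 4 m/s speed and a small turn rate
print(predict_auv((0.0, 0.0), 0.0, [(4.0, math.pi / 40)] * 10)[-1])
```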

3.3. The Function of Decision-Making

The purpose of the multi-AUV cooperative target search is to find targets and determine target information as much as possible within a certain mission area. Therefore, the following requirements need to be met:
(1)
Reduce the cost of multi-AUV cooperative target search;
(2)
Improve the determination degree of target information in the task area;
(3)
Allocate search area reasonably.
$J(O(n), X(n), U(n))$ describes the comprehensive revenue of the multi-AUV system [22]; it is a multi-objective synthesis function that needs to satisfy several conditions to yield the final optimal solution.
This function comprehensively considers the regional target discovery revenue $J_T$, the environment target search revenue $J_S$, the execution cost $C_v$, and the predicted allocation revenue $J_P$ of sub-regions in the task environment.
(1)
Regional target discovery revenue
The regional target discovery revenue of target searching is related to the degree of certainty of the target existence information in the first-level and second-level sub-regions allocated by the distributed predictive control method. The specific regional target discovery revenue $J_T$ is defined as:
$$J_T(n) = \sum_{k=1}^{N_v} \sum_{(S_x, S_y) \in N_{xy}} \left(1 - q_{S_xS_y}(n)\right) p^k_{S_xS_y}(n) \tag{26}$$
where $p^k_{S_xS_y}(n)$ represents the target existence probability of the first-level sub-region in which the $k$-th AUV is located, which is related to the position of the AUV and the traversal state of the current sub-region. $q_{S_xS_y}(n)$ is a binary variable representing whether the target has been found; the target is considered found when the deterministic probability of target existence is greater than the threshold $\gamma_p$. The specific definition of $q_{S_xS_y}(n)$ is expressed as:
$$q^k_{S_xS_y}(n) = \begin{cases} 1 & p^k_{S_xS_y}(n) \ge \gamma_p \\ 0 & \text{otherwise} \end{cases} \tag{27}$$
(2)
Environment target search revenue
The environment target search revenue is defined to be associated with the reduction of target information uncertainty in the sub-regions within the effective sensor detection range of the $k$-th AUV. The concept of target information entropy is introduced to describe this revenue, which is specifically defined as:
$$J_S(n) = \sum_{k=1}^{N_v} \sum_{(S_x, S_y) \in N_{xy}} \left(H^k_{S_xS_y}(n) - H^k_{S_xS_y}(n+1)\right) \tag{28}$$
The information entropy $H^k(n)$ is expressed as:

$$H^k(n) = -\log_2(p(n)) \tag{29}$$
(3)
Execution cost
The execution cost of the multi-AUV system represents the comprehensive consumption during the target search process, which is generally expressed as the time or energy consumed by an AUV travelling from its current position to the predictively allocated area. Here, an estimation over the next $N$ steps is performed, with the specific expression as follows:
$$C_v(n) = \sum_{k=1}^{N_v} \sum_{m=1}^{N} \frac{\left\| X_{ak}(n+m) - X_{ak}(n+m-1) \right\|}{v_{ck}(n+m-1)} \tag{30}$$
(4)
Sub-region predicted allocation revenue
The sub-region predicted allocation revenue takes into account the change of the global environmental information caused by the change of sub-region information at time $n$. The algorithm can improve the certainty of regional target information during prediction, thereby reducing the uncertainty of the entire environment. The specific definition is expressed as:
$$J_P(n) = \sum_{k=1}^{N_v} \sum_{m=1}^{N} \sum_{(S_x, S_y) \in N_{xy}} \left(O^k_{S_xS_y}(n+m) - O^k_{S_xS_y}(n+m-1)\right) \tag{31}$$
where $O^k_{S_xS_y}(n)$ represents the dynamic prediction search state of the first-level sub-region in the task area.
To sum up, given the AUV states and the search state of the task area, after the multi-AUV cooperative target search system adopts the control input of the online task optimization decision, the optimization objective function of the entire system $J(O(n), X(n), U(n))$ is defined as:
$$J(O(n), X(n), U(n)) = \omega_1 J_T + \omega_2 J_S + \omega_3 J_P - \omega_4 C_v \tag{32}$$
where $X(n)$ represents the state of the AUVs, $O(n)$ represents the search state of the task area, $U(n)$ is the control input, and $0 \le \omega_i \le 1$ $(i = 1, 2, 3, 4)$ are the weight coefficients; different weight coefficients reflect the degree of performance preference of the system and should be adjusted appropriately according to specific task requirements. In addition, as the above revenues have different dimensions, it is necessary to normalize them before the summation.
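One way (assumed here, since the paper only requires normalization before summation) to evaluate Equation (32) over a set of candidate decisions is to min-max normalize each revenue across the candidates and then take the weighted sum; the weight values below are illustrative.

```python
import numpy as np

def scores(J_T, J_S, J_P, C_v, weights=(0.3, 0.3, 0.2, 0.2)):
    """Equation (32) over a set of candidate decisions; each revenue is
    min-max normalised across the candidates (an assumed scheme), and the
    weight values are illustrative."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    w1, w2, w3, w4 = weights
    return w1 * norm(J_T) + w2 * norm(J_S) + w3 * norm(J_P) - w4 * norm(C_v)

# three candidate decisions: the best one maximises the combined revenue
J = scores(J_T=[2.5, 1.0, 0.2], J_S=[1.2, 2.0, 0.5],
           J_P=[0.8, 0.3, 0.9], C_v=[3.0, 1.5, 4.0])
print(J, int(J.argmax()))
```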

3.4. System-State Prediction and Online Optimization Decision-Making

(1)
System-state prediction based on rolling optimization
Rolling optimization can be used as the solution method for the multi-AUV cooperative target search objective function under distributed dynamic predictive control. Using the state equations and objective functions, a rolling optimization model for the multi-AUV system with $N$-step prediction is established: the system state and control input at time $n+m$ are dynamically predicted at time $n$. Over this period, the overall performance index of the system is denoted as:
$$J(O(n), X(n), U(n)) \triangleq \sum_{m=1}^{N} J(O(n+m|n), X(n+m|n), U(n+m|n)) \tag{33}$$
The rolling model of task optimization decisions for the multi-AUV system at time n is obtained as follows:
$$[O^*(n), U^*(n)] = \arg\max_{U(n),\,O(n)} J(O(n), X(n), U(n)) \tag{34}$$
Finally, the state equation obtained according to the solution of the rolling model is expressed as:
$$X_a(n|n) = X_a(n), \quad O(n|n) = O(n) \tag{35}$$
$$\begin{aligned} X(n+m+1|n) &= f(x(n+m+1|n), u(n+m+1|n)), & m &= 0, 1, \dots, N-1 \\ O(n+m+1|n) &= \phi(O_{1xy}(n+m+1|n), O_{2ij}(n+m+1|n)), & m &= 0, 1, \dots, N-1 \end{aligned} \tag{36}$$
At each rolling time step, the state input sequences of the AUV state space and the regional information structure are obtained through online optimization and decision-making, namely $U(n) = \{u(n|n), \dots, u(n+N-1|n)\}$ and $O(n) = \{o(n|n), \dots, o(n+N-1|n)\}$, respectively. Then, $u(n) = u(n|n)$ in the optimization sequence is used as the state input of the actuator at the current moment, and $o(n) = o(n|n)$ is used as the state input of the task area, thereby changing the future decision inputs of the actuator.
By optimizing all performance indicators of the system, including the state of the task environment area, the predictive control, and the decision-making optimization input, the optimal decision sequence of the entire system can be obtained. The rolling optimization model based on a finite time window transforms an infinite-time-domain optimization problem into a series of finite-time-domain optimization problems; therefore, it is very suitable for solving the state input online and dynamically.
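A toy receding-horizon solver in the spirit of Equation (34) is sketched below: it enumerates a small discrete set of N-step input sequences, scores each with a caller-supplied objective, and applies only the first input before re-optimizing. Exhaustive enumeration is chosen only for clarity; the paper does not prescribe a particular optimizer.

```python
import itertools
import math

def rolling_decision(x0, evaluate_J, N=5,
                     speeds=(4.0,), turns=(-math.pi / 20, 0.0, math.pi / 20)):
    """Toy receding-horizon solver: enumerate N-step input sequences over a
    small discrete decision set, score each with a caller-supplied objective,
    and return only the first input of the best sequence (re-optimised at the
    next step)."""
    candidates = [(v, r) for v in speeds for r in turns]
    best_J, best_seq = -math.inf, None
    for seq in itertools.product(candidates, repeat=N):
        J = evaluate_J(x0, seq)                  # J(O, X, U) evaluated over the horizon
        if J > best_J:
            best_J, best_seq = J, seq
    return best_seq[0]                           # apply u(n|n) only

# dummy objective that simply prefers straight motion
print(rolling_decision((0.0, 0.0, 0.0), lambda x, seq: -sum(abs(r) for _, r in seq)))
```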
(2)
The online task-optimization decision
In Formula (36), the solution of the cooperative target search mode of the multi-AUV system through the rolling optimization model is generated by a centralized solution method, which requires unified modeling of all the actuators and the determination of a central solution node of the system. This node gathers the state information of the whole multi-AUV system $X(n) = [X_{a1}, X_{a2}, \dots, X_{aN_v}]^T$ and the task environment area state information $O(n) = [O_{a1}, O_{a2}, \dots, O_{aN_v}]^T$ and solves the optimal task decisions and sub-region state update information for all members of the system. Such a centralized solution is very computationally intensive and time-consuming for a large and complex multi-AUV system; therefore, it limits the scale of the multi-AUV system and the decision-making and control capabilities of the whole system.
When the AUVs perform the target search task in their respective task sub-regions, they are decoupled from each other, that is, they exist independently in the global task environment; the only interrelated factors are the state information of the task sub-regions and the communication between the AUVs. Therefore, for a system in such an independent state, the global state information of the agents can be obtained through the communication network between the AUVs and the exchange of task sub-region state information based on the distributed dynamic predictive control method, so as to achieve the purpose of the multi-AUV system performing the target search cooperatively. The structure of the DDPC method for the AUV system is shown in Figure 6.
The whole system is decoupled into independent small systems on the basis of distributed dynamic prediction. Supposing that the state equation of the $k$-th AUV is denoted as $f_k$, the whole system is expressed as follows:
$$f(x_a(n), u_a(n)) = [f_1(x_{a1}(n), u_{a1}(n)), f_2(x_{a2}(n), u_{a2}(n)), \dots, f_{N_v}(x_{aN_v}(n), u_{aN_v}(n))] \tag{37}$$
With the task region state information of the $k$-th AUV denoted as $O_k$, the whole system is:
$$O(n) = [O_1(n), O_2(n), \dots, O_{N_v}(n)] \tag{38}$$
Then, the optimization objective function of the entire multi-AUV system can be decomposed into the optimization objective functions of the $N_v$ individual AUVs, with the specific form as follows:
$$J(O(n), X(n), U(n)) = \sum_{k=1}^{N_v} \lambda_k J_k(O_k(n), X_k(n), U_k(n), \tilde{O}_k(n), \tilde{X}_k(n), \tilde{U}_k(n)) \tag{39}$$
where $J_k$ represents the optimization objective function of the $k$-th AUV; $\lambda_k$ is its weight coefficient; $O_k(n)$ denotes the state change of the task region caused by the AUV at time $n$; $X_k(n)$ and $U_k(n)$ represent the dynamic prediction state and the optimization decision input of the AUV, respectively; $\tilde{O}_k(n)$ represents the influence of the other AUVs in the system on the environment state; $\tilde{X}_k(n)$ represents the dynamic prediction states of the other AUVs; and $\tilde{U}_k(n)$ represents their optimal decision inputs. The specific representations are shown below:
$$\begin{aligned} O_k(n) &= \{o_k(n|n), \dots, o_k(n+N-1|n)\} \\ X_k(n) &= \{x_{ak}(n|n), \dots, x_{ak}(n+N-1|n)\} \\ U_k(n) &= \{u_{ak}(n|n), \dots, u_{ak}(n+N-1|n)\} \\ \tilde{O}_k(n) &= \{O_l(n)\ |\ l \ne k,\ l = 1, 2, \dots, N_v\} \\ \tilde{X}_k(n) &= \{X_{al}(n)\ |\ l \ne k,\ l = 1, 2, \dots, N_v\} \\ \tilde{U}_k(n) &= \{u_{al}(n)\ |\ l \ne k,\ l = 1, 2, \dots, N_v\} \end{aligned} \tag{40}$$
The global optimization problem of the target search system can thus be decomposed into $N_v$ local finite-time-domain problems. Solving for each AUV separately, the rolling optimization model of the $k$-th AUV is shown as follows:
$$[O_k^*(n), U_k^*(n)] = \arg\max_{U_k(n),\,O_k(n)} J_k(O_k(n), X_{ak}(n), U_{ak}(n)) \tag{41}$$
Then, according to the solving conditions of the rolling model, the optimization decision input and the sub-region state information of each AUV are obtained, as shown in the following formula:
$$\begin{aligned} O_k(n+m+1|n) &= \phi_k(O^k_{1xy}(n+m+1|n), O^k_{2ij}(n+m+1|n)), & m &= 0, 1, \dots, N-1 \\ X_{ak}(n+m+1|n) &= f_{ak}(x_{ak}(n+m+1|n), u_{ak}(n+m+1|n)), & m &= 0, 1, \dots, N-1 \end{aligned} \tag{42}$$
Finally, the state and decision variables of each optimization subsystem are obtained as follows:
$$X_{ak}(n|n) = X_{ak}(n), \quad O_k(n|n) = O_k(n) \tag{43}$$
It can be seen that the solution of the local optimization problem also involves the states of the other AUV subsystems, their decision variables, and the state changes of the sub-regions, so the obtained solution is based on the AUV cooperation mechanism: the states of the other members of the system and the effects of their decision-making information are obtained through communication, so the state and decision input of the $k$-th AUV are only associated with its current local state. The cooperative target search problem of the whole multi-AUV system thus becomes the optimization problem of each independent AUV together with the update problem of the sub-region state information, which greatly reduces the optimization scale of the whole system.
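The decomposition of Equations (39)-(43) can be summarized as each AUV solving its own local problem with the broadcast information of the others treated as fixed. The sketch below illustrates that per-AUV step; the message format and the solve_local placeholder are assumptions, not the paper's code.

```python
def local_decision(k, own_state, region_view, neighbour_msgs, solve_local):
    """Per-AUV step of the decomposed problem of Equations (39)-(43): AUV k
    solves only its local rolling optimisation, treating the broadcast states,
    decisions, and sub-region updates of the other members as fixed inputs.
    The message format and solve_local are assumptions for illustration."""
    others_states = {m["id"]: m["state"] for m in neighbour_msgs if m["id"] != k}
    others_inputs = {m["id"]: m["input"] for m in neighbour_msgs if m["id"] != k}
    for m in neighbour_msgs:                      # merge reported sub-region updates
        region_view.update(m.get("region_updates", {}))
    return solve_local(own_state, region_view, others_states, others_inputs)

# each AUV runs the same routine independently; only broadcast messages are shared
msgs = [{"id": 2, "state": (100.0, 50.0, 0.0), "input": (4.0, 0.0),
         "region_updates": {(3, 4): 1}}]
print(local_decision(1, (0.0, 0.0, 0.0), {}, msgs,
                     lambda s, rv, so, io: (4.0, 0.0)))
```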

4. The Hunting Algorithm

4.1. Hunting Formation

Considering the problems of the hunting task, this paper proposes a dynamically distributed hunting method suitable for forming and transforming the hunting formation. The time the AUV takes to adjust its heading is taken into account as the time consumed to form the hunting formation when analyzing the hunting conditions.
Suppose that there are three hunting executors with the target (the red AUV in Figure 7) as the center of the hunting formation. According to the hunting critical diagram, the minimum value of the ratio of the speed of a hunting executor to that of the target is obtained as follows:
$$\min\left(\frac{v_{ai}}{v_D}\right) = \frac{\sqrt{3}}{2} \tag{44}$$
where $v_{ai}$ represents the speed of a hunting executor, which is the same for all AUVs, and $v_D$ represents the speed of the moving target. By reasonable extrapolation, the general formula for the speed required to form the hunting formation of the multi-AUV system is as follows:
$$\min\left(\frac{v_{ai}}{v_D}\right) = \sin(\alpha) \tag{45}$$
where $i$ is the serial number of the hunting AUV. According to Formula (45), it can be concluded that the greater the number of AUVs involved in hunting, the smaller the minimum speed ratio required.
In this paper, the method of shrinking the hunting circle is adopted to form an effective hunting formation around a moving target with a fixed number of AUVs. By analyzing this method, the relation between the field angle $\theta$ of the target (as shown in Figure 8), the distance $l$ between the executor and the target, and the effective detection radius $r$ is expressed as:
$$\sin(\theta) = \frac{r}{l} \tag{46}$$
As the AUV approaches the target to be hunted, the distance $l$ decreases while the effective detection radius $r$ is fixed, so $\theta$ increases and the probability of the target escaping is reduced.
When the AUVs reach the effective range for changing into a hunting formation, each moves to its respective hunting point, as shown in Figure 9.

4.2. Formation of the Hunting Potential Point

In order to ensure the rapidity and effectiveness of hunting, it is necessary to develop an appropriate method for forming the hunting potential points [9]. Assuming that the position of the moving target is $D(x_D, y_D)$ and its speed and heading angle are $v_D$ and $\phi$, respectively, the coordinates of the hunting points are as follows:
$$\begin{cases} x_i = x_D + l\cos\left(\phi + (i-1)\dfrac{2\pi}{n}\right) \\ y_i = y_D + l\sin\left(\phi + (i-1)\dfrac{2\pi}{n}\right) \end{cases} \tag{47}$$
where $l$ is the radius of the virtual hunting circle and $n$ represents the number of hunting AUVs. The arc length between adjacent hunting potential points is $L$.
Assume that the maximum radius of a hunting potential point is $r$ and that the maximum radius of the target is $r_D$. The safe distance between the hunting executors is set as $S$, and $L$ and $l$ should satisfy the following requirements:
$$2r + \lambda_2 S \le L, \quad r_D + r + \lambda_1 S \le l \tag{48}$$
where $S > r$, $S > r_D$, and $\lambda_1 \ge 1$, $\lambda_2 \ge 1$ are the adjustment coefficients of the safety distance.
Assuming that there are $n$ hunting executors, the circumference of the virtual hunting circle is $nL = 2\pi l$, and the relation between the number of hunting executors and the parameters mentioned above is obtained by solving the above inequalities for the minimum hunting circle radius:
$$n = \frac{2\pi(r + r_D + \lambda_1 S)}{2r + \lambda_2 S} \tag{49}$$
After the safety distance coefficients are fixed according to the actual situation, the size of an effective hunting circle depends on the potential-point radius of the executors and the radius of the target. Thus, as the number of hunting executors increases, the radius of the virtual hunting circle also increases.
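Equations (47) and (49) translate directly into a short routine for placing the hunting potential points and estimating how many hunters a given geometry requires; the numerical radii and safety distance below are illustrative, and the result of Equation (49) is rounded up to an integer here.

```python
import math

def hunting_points(x_D, y_D, phi, l, n):
    """Potential hunting points of Equation (47), evenly spaced on the virtual
    hunting circle of radius l around the target, offset by its heading phi."""
    return [(x_D + l * math.cos(phi + (i - 1) * 2 * math.pi / n),
             y_D + l * math.sin(phi + (i - 1) * 2 * math.pi / n))
            for i in range(1, n + 1)]

def min_hunters(r, r_D, S, lam1=1.0, lam2=1.0):
    """Number of hunters from Equation (49), rounded up to an integer here."""
    return math.ceil(2 * math.pi * (r + r_D + lam1 * S) / (2 * r + lam2 * S))

# illustrative values (not from the paper): executor radius 5 m, target radius 3 m, safety 10 m
print(min_hunters(5.0, 3.0, 10.0))
print(hunting_points(460.2, 343.5, 0.0, l=18.0, n=3))
```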

4.3. The Task Assignment of the Hunting Formation

In the process of the target search, when a dynamic target appears in the sonar field of view, the hunting task mechanism is triggered. In this paper, a triangular formation is adopted, as shown in Figure 10.
Here, $D$ is the origin of the polar coordinates, $\theta_i$ is the polar angle, and the heading direction of the target is taken as the polar axis. The polar angle of the current position of each hunter is calculated, and the hunters are then sorted from smallest to largest polar angle to form the set $P$. After sorting, the elements in $P$ and $T$ correspond sequentially to achieve the optimal assignment of tasks, as follows:
$$P = \{P_1, P_2, P_3, \dots, P_n\}, \quad T = \{T_1, T_2, T_3\} \tag{50}$$
where $T$ is the set of potential hunting positions.
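A possible implementation of this polar-angle assignment is sketched below; the modulo-2π wrapping of angles and the example positions are assumptions made for illustration.

```python
import math

def assign_hunters(target_xy, target_heading, hunter_positions, n_points=3):
    """Task assignment of Equation (50): compute each hunter's polar angle about
    the target with the target heading as the polar axis, sort P from smallest
    to largest, and pair the hunters with the potential points T in that order."""
    x_D, y_D = target_xy

    def polar_angle(p):
        # wrap into [0, 2*pi); the wrapping convention is an assumption
        return (math.atan2(p[1] - y_D, p[0] - x_D) - target_heading) % (2 * math.pi)

    order = sorted(range(len(hunter_positions)),
                   key=lambda i: polar_angle(hunter_positions[i]))
    # hunter order[j] is assigned potential point T_{j+1}
    return {order[j]: j + 1 for j in range(min(n_points, len(order)))}

hunters = [(480.0, 360.0), (440.0, 330.0), (470.0, 320.0)]   # example positions
print(assign_hunters((460.2, 343.5), 0.0, hunters))
```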
A virtual hunting circle and its hunting potential points are formed immediately by the AUV that finds the target, and this information is sent to the other AUVs in the environment. Each AUV, with its own decision-making mechanism, decides whether to participate in the hunt and sends its answer back to the organizer. This creates a joint contractual relationship between the hunting members; in this way, the task assignment and role switching of the AUVs can be described as the process of the hunting contract becoming valid or invalid. The specific decision-making mechanism requirements are as follows:
(1)
If an AUV fails to reach a predetermined position within the time limit after it has been identified as a hunting actuator, the contract becomes invalid and the role is changed;
(2)
If the required cooperative hunting executors do not all reach the corresponding potential point within the time limit, the contract is re-established;
(3)
After the target is destroyed, the contract becomes invalid immediately, and the initiator of the hunt sends a stand-down message to the other executors in the team for role switching;
(4)
When the initiator gives up, a message is sent to the other executors about the success of the chase.
The following provisions shall be made in the assignment of tasks to decide whether or not to join the hunting contract:
(1)
Once an executor affirms its commitment to hunt a target, all of its other mission roles are waived while the contract is in effect;
(2)
All AUVs are required to exchange information before the hunting contract becomes effective. The role switch is abandoned when an AUV that is about to sign the hunting contract confirms that the team does not need it.

5. Simulation

5.1. Search Algorithm Verification

By comparing the search results with those of the random method and the scan-line method, this paper verifies the high efficiency of the DDPC search algorithm. The experiments were run on a computer with an AMD Ryzen 5 4600H CPU at 3.00 GHz and 16.00 GB of RAM.
Three AUVs were set in the simulation environment, along with 30 static targets and obstacles with random shapes and positions. The simulation compared the final target search results of each algorithm over 1000 time steps, and each target was marked once it was found by an AUV. The final experimental results of the three methods are displayed below. Figure 11 shows the target search method proposed in this paper, with four targets still left unfound after the deadline. Figure 12 shows the results of the random search method: each AUV sailed randomly in the environment, and 16 static targets were found within the specified time. Figure 13 shows that the scan-line method failed to cover the whole search environment smoothly and efficiently, and 22 static targets were found within the specified time.
After the program runs for 1000 time steps, two indicators are counted to measure the target search performance of the different methods:
(1)
Regional coverage;
(2)
Average number of found targets.
These indexes respectively describe the ratio of the region searched by the AUVs within a certain task time to the whole environment, and the number of targets found by the AUVs. The specific formula is shown in (51).
$$\text{Regional coverage} = \frac{\sum_{i=1}^{N_V} \text{AUV}_i \text{ sonar detection coverage}}{\text{Task area}} \tag{51}$$
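In practice the coverage ratio of Equation (51) is often approximated on an occupancy grid, as in the sketch below; the grid resolution and the assumption that a cell counts as covered once it falls inside any sonar footprint are simplifications, not the paper's exact computation.

```python
import numpy as np

def regional_coverage(visited):
    """Equation (51) on a Boolean occupancy grid: a cell counts as covered once
    it has fallen inside any AUV's sonar footprint (a grid approximation)."""
    visited = np.asarray(visited, dtype=bool)
    return visited.sum() / visited.size

# 100 x 100 cells over a 2000 m x 2000 m area (20 m cells, assumed resolution)
grid = np.zeros((100, 100), dtype=bool)
grid[:40, :] = True              # suppose the AUVs have swept 40% of the rows
print(f"coverage = {regional_coverage(grid):.2f}")
```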
Figure 14 indicates that the method proposed in this paper covers almost all of the task area within the specified task time, followed by the scan-line method. Due to its search pattern, the scan-line method covers every region it passes through, and if the task time were long enough it could achieve full area coverage. In the random search method, however, the heading of each AUV is random at every time step, so the same area may be searched repeatedly, or an AUV may arrive at a sub-area and then leave quickly, and the coverage rate therefore decreases. The statistics show that the coverage rate of the random search method is indeed the lowest.
In order to explore the influence of prediction on the algorithm, we compared the effect of different numbers of prediction steps on the average computation time per step and on the target detection rate after 1500 search steps with 30 static targets, as shown in Figure 15. According to the data in the figure, we chose a prediction horizon of 10 steps.

5.2. Hunting Algorithm Verification

In this section, four AUVs and two dynamic targets were set up in the environment to prove the validity of the hunting algorithm. At first, the AUVs searched in different areas, as shown in Figure 16. Figure 17 shows that AUV 1 found a target and organized two other AUVs to go to the hunting potential points while the fourth AUV was still searching; as can be seen in the figure, AUV 1 was already at its hunting potential point, while the other two AUVs were still heading toward their assigned positions. The hunting formation was then created, as shown in Figure 18. Figure 19 shows that after the dynamic target was destroyed, the formation disbanded and the search mission continued.
As can be seen from Figure 16, Figure 17, Figure 18 and Figure 19, the hunting algorithm was tested with a few AUVs in a simple environment; the next section shows that it works in complex environments as well.

5.3. Cooperative Searching and Hunting Simulation

In order to prove the feasibility and effectiveness of the searching and hunting method proposed in this paper, the simulation environment was set to be a two-dimensional area of 2000 m × 2000 m, and the AUVs were set to perform the target search and hunting task at a fixed depth in the horizontal plane. The speed of each AUV was set at a constant 4 m/s while searching and was accelerated to 5 m/s when executing the hunting mission; the maximum turning angular velocity was $\bar{r}_a = \pi/20$ rad/s. The detection performance parameters of the forward-looking sonar were $V_e = 0.8$, $V_{none} = 0.2$, $R = 150$ m, and $\alpha = 120°$. Global communication was assumed, obstacles with different shapes were set, and 40 static targets with random positions and 2 dynamic targets with different tracks were placed in the environment, as shown in Table 2 and Table 3. Six AUVs were launched at the positions given in Table 4 to perform the target search task with a specified running time of T = 2000 steps. The operation scenario of the target search is shown in the figures below:
In the simulation experiment, snapshots of the multi-AUV task execution at different times were selected. As can be seen from the figures, the six AUVs marked static targets when they found them and maintained the formation to hunt and destroy the dynamic targets.
Figure 20, Figure 21, Figure 22, Figure 23 and Figure 24 show the process of the two dynamic targets being hunted and destroyed. Figure 21 shows that AUV 1 finds dynamic target 1 and then organizes two other AUVs to hunt according to the target hunting algorithm. AUV 3 and AUV 5 are performing a target search at this time; when they become hunting executors, they abandon the current search task and accelerate toward their hunting potential points. After moving target 1 is hunted, AUV 5 finds dynamic target 2, and AUV 3 and AUV 4, which are not performing a hunting task at this time, become members of the new team and hunt the moving target. When encountering obstacles, an AUV switches to obstacle-avoidance mode and does not return to hunting mode until obstacles no longer appear in the view of its forward-looking sonar. Each team eventually forms a triangular formation and maintains it for a while before the dynamic target is finally destroyed, as shown in Figure 22 and Figure 23. After the hunting formation is disbanded, the target search is carried out again until the time limit is reached or all the targets have been found. Figure 24 shows the final result. Therefore, the effectiveness of the method is demonstrated, and dynamic targets can be successfully hunted.

6. Conclusions

For the first time, dynamic prediction and online optimization decision-making are conducted based on the environmental region state and the AUV state to solve the problem of multi-AUV cooperative searching and hunting. The algorithm divides the large-scale unknown environment faced by the AUVs into two levels of search sub-regions and establishes a mathematical model. Based on distributed search theory, the AUV state model and the regional state information update mechanism are introduced. The predicted region state information and the AUV input state are obtained through a time-window rolling optimization model, and the online optimization decision function is used to solve for the region and AUV state update inputs; finally, the goal of the multi-AUV collaborative target search is realized. When an AUV finds a dynamic target, it hunts and destroys it. Combined with the traditional hunting organization method, a dynamic distribution hunting method is proposed to reasonably allocate the hunting potential points around the moving target so that the AUVs can form a hunting formation more quickly. Finally, the simulation verification of the multi-AUV cooperative target searching and hunting is given, which proves the effectiveness of the method.
Because the actual unknown underwater environment is more complex, there are still many problems and deficiencies in the research content that need to be improved in the future, including the following:
(1)
Communication delay and loss of information;
(2)
Complex groups of dynamic obstacles;
(3)
Dynamic targets with multiple motion states;
(4)
Application in the 3D underwater environment.

Author Contributions

Conceptualization, J.L. and C.L.; methodology, J.L.; software, J.L.; validation, J.L. and C.L.; formal analysis, J.L.; investigation, J.L. and C.L.; resources, J.L.; data curation, H.Z.; writing—original draft preparation, J.L.; writing—review and editing, J.L. and C.L.; visualization, H.Z. and C.L.; supervision, J.L.; project administration, J.L.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by the National Natural Science Foundation of China, grant nos. 5217110503 and 51809060, and the Research Fund from Science and Technology on Underwater Vehicle Technology, grant no. JCKYS2021SXJQR-09.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wu, B.; Saigol, Z.; Han, X.; Lane, D. System Identification and Controller Design of a Novel Autonomous Underwater Vehicle. Machines 2021, 9, 109. [Google Scholar] [CrossRef]
  2. Li, J.; Zhai, X.; Xu, J.; Li, C. Target Search Algorithm for AUV Based on Real-Time Perception Maps in Unknown Environment. Machines 2021, 9, 147. [Google Scholar] [CrossRef]
  3. Liu, Y.; Wang, M.; Su, Z.; Luo, J.; Xie, S.; Peng, Y.; Pu, H.; Xie, J.; Zhou, R. Multi-AUVs Cooperative Target Search Based on Autonomous Cooperative Search Learning Algorithm. J. Mar. Sci. Eng. 2020, 8, 843. [Google Scholar] [CrossRef]
  4. Sun, A.L.; Cao, X. A Fuzzy-Based Bio-Inspired Neural Network Approach for Target Search by Multiple Autonomous Underwater Vehicles in Underwater Environments. Intell. Autom. Soft Comput. 2021, 27, 551–564. [Google Scholar] [CrossRef]
  5. Yao, P.; Qiu, L.Y. AUV path planning for coverage search of static target in ocean environment. Ocean Eng. 2021, 241, 110050. [Google Scholar] [CrossRef]
  6. Yue, W.; Guan, X. A Novel Searching Method Using Reinforcement Learning Scheme for Multi-UAVs in Unknown Environments. Appl. Sci. 2019, 9, 4964. [Google Scholar] [CrossRef] [Green Version]
  7. Luo, C.; Yang, X. A bioinspired neural network for real-time concurrent map building and complete coverage robot navigation in unknown environments. IEEE Trans. Neural Netw. 2008, 19, 1279–1298. [Google Scholar] [CrossRef]
  8. Cai, Y.; Yang, S. An improved PSO-based approach with dynamic parameter tuning for cooperative multi-robot target searching in complex unknown environments. Int. J. Control 2013, 86, 1720–1732. [Google Scholar] [CrossRef]
  9. Dadgar, M.; Jafari, S. A PSO-based multi-robot cooperation method for target searching in unknown environments. Neurocomputing 2016, 177, 62–74. [Google Scholar] [CrossRef]
  10. Saadaoui, H.; Bouanani, F.E. Information Sharing Based on Local PSO for UAVs Cooperative Search of Unmoved Targets. In Proceedings of the International Conference on Advanced Communication Technologies and Networking, Marrakech, Morocco, 2–4 April 2018. [Google Scholar]
  11. Wu, C.; Ju, B. UAV Autonomous Target Search Based on Deep Reinforcement Learning in Complex Disaster Scene. IEEE Access 2019, 7, 117227–117245. [Google Scholar] [CrossRef]
  12. Ivić, S.; Crnković, B.; Arbabi, H.; Loire, S.; Clary, P.; Mezić, I. Search strategy in a complex and dynamic environment: The MH370 case. Sci. Rep. 2020, 10, 19640. [Google Scholar] [CrossRef] [PubMed]
  13. Wang, Y.; Hong, B. Cooperative Multiple Mobile Targets Capturing Algorithm for Robot Troops. J. Xi’an Jiaotong Univ. 2003, 37, 573–576. [Google Scholar]
  14. Wang, H.; Wei, X. Research on Methods of Region Searching and Cooperative Hunting for Autonomous Underwater Vehicles. Shipbuild. China 2010, 51, 117–125. [Google Scholar]
  15. Meng, X.; Sun, B. Harbour protection: Moving invasion target interception for multi-AUV based on prediction planning interception method. Ocean Eng. 2021, 219, 108268. [Google Scholar] [CrossRef]
  16. Cao, X.; Xu, X.Y. Hunting Algorithm for Multi-AUV Based on Dynamic Prediction of Target Trajectory in 3D Underwater Environment. IEEE Access 2020, 8, 138529–138538. [Google Scholar] [CrossRef]
  17. Kapoutsis, A.; Chatzichristofis, S. DARP: Divide Areas Algorithm for Optimal Multi-Robot Coverage Path Planning. J. Intell. Robot. Syst. 2017, 86, 663–680. [Google Scholar] [CrossRef] [Green Version]
  18. Shojaei, K.; Arefi, M. On the neuro-adaptive feedback linearising control of underactuated autonomous underwater vehicles in three-dimensional space. IET Control Theory A 2015, 9, 1264–1273. [Google Scholar] [CrossRef]
  19. Zhao, S.; Lu, T. Automatic object detection for AUV navigation using imaging sonar within confined environments. In Proceedings of the IEEE Conference on Industrial Electronics & Applications, Xi’an, China, 25–27 May 2009. [Google Scholar]
  20. Zhou, G.; Wu, L. Constant turn model for statically fused converted measurement Kalman filters. Signal Process. 2015, 108, 400–411. [Google Scholar] [CrossRef]
  21. Zeigler, B.P. High autonomy systems: Concepts and models. In Proceedings of the Simulation and Planning in High Autonomy Systems, Tucson, AZ, USA, 26–27 March 1990. [Google Scholar]
  22. Wu, L.; Niu, Y.; Zhu, H. Modeling and characterizing of unmanned aerial vehicles autonomy. In Proceedings of the 9th World Congress on Intelligent Control and Automation, Jinan, China, 6–9 July 2010. [Google Scholar]
Figure 1. Environment map.
Figure 2. Earth-fixed reference coordinate system and body-fixed reference coordinate system.
Figure 3. Simplified sonar model.
Figure 4. Search flowchart.
Figure 5. DDPC algorithm.
Figure 6. Structure of DDPC algorithm.
Figure 7. Critical condition analysis diagram of hunting.
Figure 8. Reduction of encirclement.
Figure 9. The formation and maintenance of the encirclement.
Figure 10. Distribution of potential points in the process of hunting.
Figure 11. DDPC search.
Figure 12. Random search.
Figure 13. Scan line search.
Figure 14. Regional coverage.
Figure 15. The influence of different prediction steps on the performance of the algorithm.
Figure 16. T = 95 (steps).
Figure 17. T = 139 (steps).
Figure 18. T = 315 (steps).
Figure 19. T = 567 (steps).
Figure 20. Initialization of simulation environment.
Figure 21. T = 345 (steps).
Figure 22. T = 711 (steps).
Figure 23. T = 1358 (steps).
Figure 24. Final experimental result.
Table 1. Communication in the multi-AUV system.

Information Type                Information Content
AUV state information           Coordinate; velocity; course
Static target information       Current time; serial number of AUV; detected information
Dynamic target information      Current time; hunting state; serial number of AUV
Sub-region state information    First-level region state at time t; second-level region state at time t
Table 2. Dynamic target position.

Serial Number    Position X    Position Y
1                460.2         343.5
2                1448.0        626
Table 3. Static target position.

Position X   Position Y   Position X   Position Y   Position X   Position Y   Position X   Position Y
895.1        1458.5       471.6        145.2        1138.9       1771.7       1793.0       1718.9
1181.1       1383.0       755.1        122.6        1369.6       1798.1       1876.2       1088.7
1101.1       1017.0       913.9        201.9        1231.6       1356.6       1904.5       583.0
889.4        866.0        946.1        579.2        1267.5       209.4        1044.4       1424.5
717.3        1334.0       592.6        707.5        1426.3       205.6        738.2        1617.0
611.5        1122.6       450.8        1518.9       1656.9       216.9        1545.4       190.5
484.8        915.0        320.4        1839.6       1770.3       575.4        855.3        601.9
352.5        590.5        129.5        1703.8       1776.0       1009.4       473.5        715.1
218.3        473.5        172.9        1292.5       1762.8       1258.5       101.1        549.1
280.7        232.0        97.4         918.9        1505.7       1500         607.7        111.3
Table 4. AUV position.

Serial Number    Position X    Position Y
1                165.4         239.6
2                235.3         1760.4
3                1110.6        1084.9
4                1475.4        1798.1
5                1464.1        1226.4
6                1832.7        1081.1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
