Article

Multi-Granularity Mission Negotiation for a Decentralized Remote Sensing Satellite Cluster

Xuelei Deng, Yunfeng Dong and Shucong Xie
School of Astronautics, Beihang University, Beijing 100191, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(21), 3595; https://doi.org/10.3390/rs12213595
Submission received: 2 October 2020 / Revised: 26 October 2020 / Accepted: 29 October 2020 / Published: 2 November 2020

Abstract

Satellite remote sensing is developing toward micro-satellite clusters, which brings new challenges to mission assignment and planning. Multi-agent systems (MAS) are commonly used for this problem, but the time delay caused by communication and computation is rarely considered. To address this, we propose a neural-network-based multi-granularity negotiation method under a decentralized architecture. First, we divide negotiation into three levels of granularity that work in different modes. Second, we train a neural network that helps each satellite select the best level in real time. In experiments, we compared satellites working at the three fixed levels of granularity with satellites using the multi-granularity decision. The multi-granularity method obtained a lower cost-effectiveness ratio, which shows that the proposed method is practical.

1. Introduction

Satellite remote sensing aims to obtain information about the Earth’s surface. It has been widely used in geography, Earth science, meteorology, the military, etc. [1]. In traditional imaging missions, a single low Earth orbit satellite is often used to take images of multiple targets multiple times. However, the increasing number of missions and higher time resolution requirements call for more satellite members [2]. Besides, different missions require different kinds of payloads, and one satellite can hardly carry multiple payloads; even when it can, multiple payloads can hardly work together at the same time within the narrow imaging time window. Therefore, current remote sensing missions rely more and more on large satellite clusters, which brings new challenges to mission assignment and planning [3].
A multi-agent system (MAS) is the main solution to the distributed autonomy problem for multiple satellites. Agent technology was first used in Deep Space 1 (DS-1) [4,5]. Although the MAS applied in DS-1 is a single-satellite system, with its success MAS started to be widely used in satellite cluster task planning. Campbell [6,7,8] summarized the work of predecessors and defined four different working modes of a satellite cluster: the traditional mode, the top-down mode, the centralized mode, and the distributed mode. The traditional mode and the top-down mode were used in early satellite clusters. For instance, He [9] proposed an edge computing framework including a central node and several edge nodes, using a constructive heuristic algorithm based on the density of residual tasks. Although these algorithms have achieved good results, the central node (ground station or central master satellite) bears a heavy workload.
Current research mainly focuses on the latter two modes: the centralized mode and the distributed mode. Cheng [10] developed a satellite cluster negotiation and mission assignment method using a contract net in the centralized mode, and proposed three negotiation models: acquaintance-trust-based announcing bidding, adaptive bidding with swarm intelligence, and a multi-attribute-decision-based fuzzy evaluation bidding method. Wang [11] discussed the cooperative multicenter problem in satellite imaging scheduling, proposed a cooperative co-evolutionary algorithm, and then proposed a novel fixed-length binary encoding mechanism for mission assignment. Iacopino [12] designed a self-organizing multi-agent structure belonging to the distributed mode; the system can adapt to changes in the problem and can synchronize the satellites’ plans in order to avoid duplication. In order to improve cluster consistency, Zhao [13] designed a distributed specified-time consensus protocol for multi-agent systems with general linear dynamics on a directed graph. Buzzi [14] proposed an agent-based framework to study the self-organization of the satellite cluster, which can establish a temporary alliance to change the modularity of the system.
Since a remote sensing mission is usually divided into multiple stages (discovery, identification, confirmation, tracking, and monitoring) and one stage can further be divided into multiple steps (collection, transmission, processing, etc.), a micro-satellite cannot finish a mission all by itself. Current research has therefore begun to focus on the cooperation of satellite clusters in finishing one complex mission instead of merely using one satellite. Globus [15] used multiple variants of the genetic algorithm, hill climbing, simulated annealing, squeaky wheel optimization, and iteration to realize the coordination of multiple satellites and multiple observations of the same target. Araguz [16] designed autonomous operational schemes for Earth-observing swarms of nano-satellites, in which all satellites cooperate to observe multiple targets. Gallud [17] presented an agent-based simulation framework in which satellites work together to perform a set of observational tasks; the framework supported allocating targets to satellites and collective sensing of a target by multiple satellites. He [18] considered three important aspects of planning (the stored task sequence, the action sequence, and the satellite status) to deal with scientific event discovery, satellite faults, cloud obscuration, and emergencies.
The main problem of the remote sensing mission lies in the conflict between resource consumption and mission accomplishment, and researchers have considered many factors. Nag [19] proposed a modular framework that combines orbital mechanics, attitude control, and scheduling optimization for CubeSats. Wang [20] discussed the impact of clouds in terms of joint probabilities and established a sample average approximation (SAA) model accordingly; Wang [21] also proposed a branch-and-cut algorithm based on lazy constraint generation. Although these kinds of limited onboard resources have been considered, the influence of time delay on negotiation has received little attention. The time delay is mainly caused by limited communication ability and computational resources. When a satellite keeps tracking a target, it has to perform a continuous attitude maneuver, which affects the pointing accuracy of its antenna and decreases the communication quality. Besides, tracking a target produces a large number of images, whose processing is heavy work for a single micro-satellite. A common way is to transmit those images to other micro-satellites; however, image transmission occupies a large amount of communication bandwidth, so less bandwidth is left for negotiation. Lastly, the image processing itself is computationally expensive, so a micro-satellite can hardly negotiate with others while processing images. In conclusion, a large time delay may delay the negotiation and may finally lead to negotiation failure.
In this paper, we propose a neural-network-based multi-granularity negotiation method under a decentralized architecture, belonging to the distributed mode defined in [6,7,8]. Firstly, we propose a multi-granularity negotiation model. The negotiation process is divided into three levels of granularity: low-level, middle-level, and high-level. At each granularity, satellites transmit different kinds of information and use different negotiation processes and assignment algorithms. Secondly, we establish a neural network to intelligently select the appropriate level of granularity. The neural network outputs the success probability of negotiation at each level of granularity, and a simple decision rule based on these probabilities is then used to select the level in real time.
The remainder of the paper is organized as follows. In Section 2, the modeling process of the multi-granularity negotiation method is introduced considering three aspects: mission preprocesses for all levels of granularity, multi-granularity partitioning of the multi-agent negotiation model, and intelligent real-time level selection of negotiation granularity. In Section 3, experiments to verify this method in different conditions are described. The proposed method is discussed in Section 4. Finally, the conclusions of the study are given in Section 5.

2. Proposed Methods

To complete negotiation with limited communication resources, a multi-granularity negotiation method is developed. Firstly, the traditional multi-agent negotiation model is divided into three levels of granularity: low-level, middle-level, and high-level. A mission selection algorithm is designed for each level of granularity, and a consensus algorithm is used to realize the decentralization of the satellite cluster; different negotiation processes and consensus mechanisms are designed to simplify the negotiation at each level. Secondly, samples covering different target numbers, bandwidths, and transmit hop numbers are collected to train a neural network. Finally, the best neural network is used onboard to select the best granularity. The flowchart of the multi-granularity negotiation method proposed in this paper is illustrated in Figure 1.

2.1. Mission Preprocess for All Levels of Granularity

2.1.1. Time Window Calculation

Suppose that there are $n$ targets; the set of all targets can be defined as $T$:
$$T = \{ t_1, \ldots, t_i, \ldots, t_n \}, \quad i \in \mathbb{N}^+, \; 1 \le i \le n,$$
where $t_i$ expresses the information of the $i$th target, including latitude, longitude, height, and motion characteristics. Motion characteristics contain course, angular velocity, speed, and acceleration. All these motion characteristics can be unknown.
The calculation of the time window is to find the mission start time $T_i^S$ and mission end time $T_i^E$ such that, for all $t \in [T_i^S, T_i^E]$, two constraints are satisfied: the optical visibility constraint and the geometric visibility constraint. The optical visibility constraint for the $i$th target, $C_o^i(t)$, can be expressed as:
$$C_o^i(t) = \begin{cases} 1 & \mathbf{R}_i \cdot \mathbf{R}_{Sun} > 0 \\ 0 & \mathbf{R}_i \cdot \mathbf{R}_{Sun} \le 0 \end{cases},$$
where $\mathbf{R}_i$ is the position vector of $t_i$ in the inertial coordinate frame, which can be calculated by integrating with time, and $\mathbf{R}_{Sun}$ is the position vector of the Sun in the inertial coordinate frame. The geometric visibility constraint for the $i$th target, $C_g^i(t)$, can be expressed as:
$$C_g^i(t) = \begin{cases} 1 & \alpha \le \alpha_{max} \;\text{and}\; \theta \le \theta_{max} \\ 0 & \text{else} \end{cases},$$
where $\alpha$ is the angle between the target’s position vector $\mathbf{R}_i$ and its projection vector $\hat{\mathbf{R}}_i$ in the orbital plane of the satellite, which can be expressed as:
$$\alpha = \arccos\left(\frac{\mathbf{R}_i \cdot \hat{\mathbf{R}}_i}{|\mathbf{R}_i| \cdot |\hat{\mathbf{R}}_i|}\right).$$
$\alpha_{max}$ is the maximum value of $\alpha$, expressed as:
$$\alpha_{max} = \arccos\left(\frac{|\mathbf{R}_i|}{|\mathbf{R}|}\right),$$
where $\mathbf{R}$ is the satellite’s position vector in the inertial coordinate frame. $\theta_{max}$ in Equation (3) is the maximum attitude maneuver angle of the satellite. The desired attitude maneuver angle $\theta$ is expressed as:
$$\theta = \arctan\left(\frac{|\mathbf{R}_i| \cdot \sin(\alpha)}{|\mathbf{R}| - |\mathbf{R}_i| \cdot \cos(\alpha)}\right).$$
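As a concrete illustration of the two visibility checks above, the following minimal Python sketch evaluates them for given position vectors. It is a sketch under assumptions: the orbital-plane normal is taken from the satellite position and velocity vectors, the example vectors are illustrative, and only the 45° maneuver limit is taken from Table 4.

```python
import numpy as np

def optical_visibility(r_target, r_sun):
    """1 when the target is sunlit, i.e., R_i . R_Sun > 0 (optical constraint)."""
    return 1 if np.dot(r_target, r_sun) > 0 else 0

def geometric_visibility(r_target, r_sat, v_sat, theta_max_deg):
    """1 when the target is within alpha_max and the maneuver limit (geometric constraint)."""
    n = np.cross(r_sat, v_sat)
    n = n / np.linalg.norm(n)                       # orbital-plane normal
    r_proj = r_target - np.dot(r_target, n) * n     # projection of R_i onto the orbital plane
    alpha = np.arccos(np.dot(r_target, r_proj) /
                      (np.linalg.norm(r_target) * np.linalg.norm(r_proj)))
    alpha_max = np.arccos(np.linalg.norm(r_target) / np.linalg.norm(r_sat))
    theta = np.arctan2(np.linalg.norm(r_target) * np.sin(alpha),
                       np.linalg.norm(r_sat) - np.linalg.norm(r_target) * np.cos(alpha))
    return 1 if (alpha <= alpha_max and theta <= np.radians(theta_max_deg)) else 0

# Illustrative vectors (metres): satellite on a 6878 km circular orbit, target near the equator.
r_sat = np.array([6878e3, 0.0, 0.0])
v_sat = np.array([0.0, 7612.0, 0.0])
r_tgt = 6378e3 * np.array([np.cos(np.radians(2)), 0.02, np.sin(np.radians(2))])
r_sun = np.array([1.496e11, 0.0, 0.0])
print(optical_visibility(r_tgt, r_sun), geometric_visibility(r_tgt, r_sat, v_sat, 45.0))
```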

2.1.2. Profit Calculation

One remote sensing mission can be divided into five stages: discovery, identification, confirmation, tracking, and monitoring. Different initial profits $p_j^{ini}$ are defined as constants for each stage of the mission in this paper, as shown in Table 1.
Mission profit $P_{i,j}$ can be expressed as:
$$P_{i,j} = A_t^i \times A_p^i \times p_j^{ini},$$
where $i$ is the index of the target and $j$ is the index of the stage, $j \in \mathbb{N}^+$, $1 \le j \le 5$. $A_t^i$ is a constant coefficient varying with the type of the target, and $A_p^i$ is a constant coefficient varying with the position of the target. In this model, the profit depends only on the targets.

2.1.3. Cost Calculation

Mission cost consists of three parts: time cost, memory cost, and power cost. The time cost $c_t$ is defined as:
$$c_t = \begin{cases} 10 \cdot \log_{10}\dfrac{T_i^S}{100} & T_i^S > 100 \\ 0 & T_i^S \le 100 \end{cases},$$
where $T_i^S$ is the start time of the time window of target $i$.
The mission duration $t_d^{i,j}$ corresponds to the $i$th target and $j$th stage of the mission, expressed as:
$$t_d^{i,j} = \begin{cases} 2 & j \le 2 \\ T_i^E - T_i^S & j > 2 \end{cases}.$$
Memory cost per second is considered a constant corresponding to the camera, written as $\mathrm{MEM}$. The memory cost $c_m$, which changes with mission duration, can be expressed as
$$c_m = \mathrm{MEM} \times t_d^{i,j}.$$
Similarly, the average satellite power is also considered a constant, $\mathrm{POW}$. The power cost $c_p$ also changes with mission duration:
$$c_p = \mathrm{POW} \times t_d^{i,j}.$$
The total cost for the $i$th target during its $j$th stage is the sum of these three costs: $c_{i,j} = c_t + c_m + c_p$.
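A compact sketch of the profit and cost models above is given below. The stage profits come from Table 1; the coefficients A_t and A_p and the MEM and POW constants are illustrative placeholders, since the paper treats them as constants without listing their values.

```python
import math

P_INI = {1: 0.25, 2: 0.10, 3: 0.15, 4: 0.25, 5: 0.25}   # Table 1 initial profits per stage

def profit(stage, a_t=1.0, a_p=1.0):
    """Mission profit P_ij = A_t * A_p * p_j_ini."""
    return a_t * a_p * P_INI[stage]

def cost(stage, t_start, t_end, mem=0.1, pow_=0.2):
    """Total cost c_ij = c_t + c_m + c_p."""
    c_t = 10 * math.log10(t_start / 100) if t_start > 100 else 0.0   # time cost
    t_d = 2.0 if stage <= 2 else (t_end - t_start)                   # mission duration
    return c_t + mem * t_d + pow_ * t_d                              # memory and power costs

# Example: a confirmation-stage mission (j = 3) whose window opens at t = 300 s and lasts 120 s.
print(profit(3), cost(3, t_start=300.0, t_end=420.0))
```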

2.2. Multi-Granularity Partition of the Multi-Agent Negotiation Model

The framework in this paper is decentralized. In blockchain technology, there are several consensus algorithms, including RAFT [22], PBFT [23], DPOS [24], RIPPLE [24], POW [25], and POS [26]. We applied RAFT, PBFT, and RIPPLE in this paper. In RAFT, there are three roles: leader, candidate, and follower. At the start, all nodes in the network are followers. Some nodes then become candidates and initiate a vote. After voting, only one leader is elected; the leader manages the network by synchronizing logs and connects all followers by heartbeat, while followers only replicate the log. Differently from RAFT, PBFT introduces a result verification mechanism: it can tolerate Byzantine faults by using the processes of REQUEST, PRE-PREPARE, PREPARE, COMMIT, and REPLY. In RIPPLE, multiple leaders are elected, and the consensus process is divided into two parts: consensus within the leader group and global consensus.
There are three roles of satellites in this system: leader node (Leader), temporary leader node (TLeader), and follower node (Follower). The Leader is elected to organize the negotiation; it periodically processes the mission requests from Followers and releases negotiation results to them. The Leader establishes a leader group consisting of itself and the TLeaders. They first reach an agreement within the group by a group vote; after being verified by the TLeaders, the negotiation result is sent to the Followers. Followers originate missions themselves and negotiate with each other under the Leader’s control.

2.2.1. Low-Level Granularity

In low-level granularity, the satellites are allowed to negotiate only once to save communication resources. The negotiation process can be described by the following steps (the TLeaders’ assistance process is ignored in this granularity and is introduced in high-level granularity), shown in Figure 2:
  • Step 1: Preprocessing the missions. The Follower first preprocesses the missions by calculating the time window, profit, and cost.
  • Step 2: Select a set of desired mission stages. The desired mission stages are selected by a decision algorithm in the Follower.
  • Step 3: Transmit the mission set. The set of desired mission stages $MS_{desire}$ and their costs are broadcast, and the Leader records the mission set.
  • Step 4: Leader process. The Leader finishes the mission assignment using a greedy algorithm (see the sketch after this list). According to the profit calculation model, the profit of a mission stage is the same regardless of which satellite executes it, so each mission stage is assigned to the satellite bidding the lowest cost.
  • Step 5: Announce temporary result. The Leader announces a temporary result to all Followers.
  • Step 6: Result check and vote. The Follower logs and checks the result. If the assigned missions $MS_{actual}$ are all in the desired mission set it submitted, the Follower votes in favor; otherwise it votes against. The vote result can be expressed as:
    $$V = \begin{cases} 1 & MS_{actual} \subseteq MS_{desire} \\ 0 & \text{else} \end{cases}.$$
  • Step 7: Result release. The Leader records the selection from the Follower who votes in favor. Then the Leader releases the final negotiation result.
  • Step 8: Execute. The Follower checks the result again, then picks the intersection of the set of assigned missions $MS_{actual}$ and the set of desired mission stages $MS_{desire}$, and executes the mission stages in the intersection.
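To make Step 4 concrete, the sketch below shows one way the Leader’s greedy assignment can be implemented: because the profit of a stage does not depend on the executing satellite, each requested stage simply goes to the cheapest bidder. The bid format and satellite names are illustrative assumptions, not data from the paper.

```python
from collections import defaultdict

def greedy_assign(bids):
    """bids: iterable of (follower_id, (target_id, stage), cost)."""
    cheapest = {}
    for follower, mission, cost in bids:
        if mission not in cheapest or cost < cheapest[mission][1]:
            cheapest[mission] = (follower, cost)          # keep the lowest-cost bidder
    assignment = defaultdict(list)
    for mission, (follower, _) in cheapest.items():
        assignment[follower].append(mission)
    return dict(assignment)

# Two Followers bid on stage 3 of target 1; only sat6 bids on stage 1 of target 2.
bids = [("sat6", (1, 3), 4.2), ("sat7", (1, 3), 3.1), ("sat6", (2, 1), 0.8)]
print(greedy_assign(bids))   # {'sat7': [(1, 3)], 'sat6': [(2, 1)]}
```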
Under this process, each Follower needs to decide its desired mission set by itself. This becomes a group knapsack problem with a time constraint, which can be described as:
$$\max \sum_{i=1}^{n}\sum_{j=1}^{5} P_{i,j}\, s_{i,j} \quad \mathrm{s.t.} \quad \sum_{j=1}^{5} s_{i,j} < 2 \;\; \forall i, \quad \text{and} \quad \forall i, i',\; T_i^S > T_{i'}^E \;\text{or}\; T_{i'}^S > T_i^E,$$
where $s_{i,j}$ is the mission selection result: $s_{i,j} = 1$ if the $j$th stage of the $i$th target is selected, and $s_{i,j} = 0$ otherwise. Both $i$ and $i'$ are target IDs in the selected mission set, so the time windows of any two selected targets must not overlap. Dynamic programming is chosen to solve the problem.
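Because at most one stage per target may be kept and the windows of the selected targets must not overlap, the Follower-side problem can be viewed as weighted interval scheduling over targets. The dynamic program below is a minimal sketch of that view; the candidate tuple format is an assumption for illustration.

```python
import bisect

def select_missions(candidates):
    """candidates: list of (target_id, stage, t_start, t_end, profit)."""
    best = {}                                   # most profitable stage per target
    for tid, stage, ts, te, p in candidates:
        if tid not in best or p > best[tid][4]:
            best[tid] = (tid, stage, ts, te, p)
    jobs = sorted(best.values(), key=lambda j: j[3])       # order targets by window end
    ends = [j[3] for j in jobs]
    dp = [(0.0, [])]                                       # dp[k] = best over first k targets
    for k, (tid, stage, ts, te, p) in enumerate(jobs):
        prev = bisect.bisect_left(ends, ts, 0, k)          # targets whose window ends before ts
        skip = dp[k]                                       # leave this target unselected
        take = (dp[prev][0] + p, dp[prev][1] + [(tid, stage)])
        dp.append(take if take[0] > skip[0] else skip)
    return dp[-1]

cands = [(1, 3, 100, 220, 0.15), (2, 1, 250, 400, 0.25), (3, 4, 150, 260, 0.25)]
print(select_missions(cands))   # (0.4, [(1, 3), (2, 1)]) for these windows
```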
In low-level granularity, communication resources are the most constrained. Due to its low overhead, RAFT was selected as the consensus algorithm at this level, with its parameters adjusted for use in space. The modifications include:
The heartbeat interval is lengthened. Due to the time-varying characteristics of satellite networks, connections may be unstable, and frequent heartbeat broadcasts waste bandwidth. In addition, network delay may cause the connection between the Leader and Followers to be misjudged. A longer interval provides a larger margin.
The election interval is lengthened. By the principle of RAFT, an election happens when the Leader becomes unreachable, so the election interval is tied to the heartbeat interval. Therefore, the election interval is lengthened as well.
The negotiation result is not released in the heartbeat message. With a longer heartbeat interval, releasing results in heartbeats would delay mission responses, so the negotiation-result release packet is designed to be independent. It is usually transmitted periodically, and it also supports emergency transmission to improve real-time performance for emergency missions.
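The sketch below collects these space-oriented RAFT adjustments into one illustrative configuration object. The numerical values are assumptions for the sketch; the paper only states that both intervals are lengthened and that results are released in a separate packet that also supports emergency transmission.

```python
from dataclasses import dataclass

@dataclass
class SpaceRaftConfig:
    heartbeat_interval_s: float = 30.0      # lengthened heartbeat to save bandwidth
    election_timeout_s: float = 120.0       # election interval scaled with the heartbeat
    result_release_period_s: float = 60.0   # results released in their own periodic packet
    allow_emergency_release: bool = True    # immediate release for emergency missions

print(SpaceRaftConfig())
```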

2.2.2. Middle-Level Granularity Model

Middle-level granularity, corresponding to a normal communication environment, is designed to balance bandwidth usage against negotiation quality.
Similar to low-level granularity, the satellites are divided into one Leader and several Followers. The process can be divided into the following steps shown in Figure 3:
  • Step 1: Preprocessing the missions. Same as Step 1 in low-level granularity.
  • Step 2: Select a set of mission stages. Differently from Step 2 in low-level granularity, the set of mission stages contains both desired and candidate mission stages.
  • Step 3: Transmit the mission set. Differently from Step 3 in low-level granularity, the broadcast mission stages include their execution costs and time windows, and the Leader records the set.
  • Step 4: Leader process. Same as Step 4 in low-level granularity.
  • Step 5: Announce temporary result. Same as Step 5 in low-level granularity
  • Step 6: Result check and vote. A Follower working in middle-level granularity is responsible for its own mission set: the assigned missions $MS_{actual}$ must all be in the submitted mission set, the total cost of the assigned set must not exceed the satellite’s capability $c_{max}$, and the assigned set must contain enough missions. The Follower votes in favor if these constraints are satisfied; otherwise it votes against (a small sketch of this check follows the list). The vote result $V$ is expressed as:
    $$V = \begin{cases} 1 & MS_{actual} \subseteq MS_{desire} \;\text{and}\; c_{max}/2 \le \sum\limits_{(i,j) \in MS_{actual}} c_{i,j} \le c_{max} \\ 0 & \text{else} \end{cases}.$$
  • Step 7: Vote process. If all Followers vote in favor, or the number of failed negotiation rounds becomes too large, the Leader stops the negotiation and goes to Step 8. If not, the Leader adjusts the set of mission stages, still using a greedy algorithm, and then goes to Step 5.
  • Step 8: Result release. The Leader releases the final negotiation result.
  • Step 9: Execute. Same as the Step 8 in low-level granularity.
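The small sketch referenced in Step 6 follows: a Follower accepts an assignment only if every assigned stage was offered and the total assigned cost lies between c_max/2 and c_max. The data layout is an illustrative assumption.

```python
def middle_level_vote(assigned, offered_costs, c_max):
    """assigned: iterable of mission ids; offered_costs: {mission_id: cost}."""
    if not all(m in offered_costs for m in assigned):
        return 0                                           # an unoffered stage was assigned
    total = sum(offered_costs[m] for m in assigned)
    return 1 if c_max / 2 <= total <= c_max else 0         # enough missions, within capability

offers = {(1, 3): 2.5, (2, 1): 1.0, (4, 2): 3.0}
print(middle_level_vote([(1, 3), (2, 1)], offers, c_max=6.0))   # 1 (total cost 3.5)
```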
In this process, the Follower also chooses some spare mission stages, so the time constraint no longer applies. The problem becomes a group knapsack problem, which can be described as:
$$\max \sum_{i=1}^{n}\sum_{j=1}^{5} P_{i,j}\, s_{i,j} \quad \mathrm{s.t.} \quad \sum_{j=1}^{5} s_{i,j} < 2 \;\; \forall i.$$
Dynamic programming is still chosen to solve the problem.
In middle-level granularity, the RAFT algorithm is still valid and is used in the same way as in low-level granularity.

2.2.3. High-Level Granularity Model

High-level granularity is designed to obtain a better negotiation effect in a good communication environment. Differently from the other two levels of granularity, the satellites are divided into a Leader group and several Followers, and the Leader group consists of a Leader and TLeaders. The process can be divided into the following steps, shown in Figure 4:
  • Step 1: Apply to join the Leader group. There exists a single Leader in the system, which serves as the Leader for all levels of granularity. In order to increase the fault tolerance rate, a Follower with a direct connection (only one hop to the Leader) applies to join the Leader group; after being authorized by the Leader, it becomes a TLeader.
  • Step 2: Preprocessing the missions. Same as before.
  • Step 3: Select a set of mission stages. The set of mission stages, which contains all the supported mission stages, is selected by a decision algorithm in the Followers and TLeaders.
  • Step 4: Transmit the mission set. These mission stages are broadcast, including their execution cost and time window, and both the Leader and TLeaders record the set.
  • Step 5: Leader Process. The Leader and TLeaders assign missions using a greedy algorithm, considering an extra constraint of the maximum cost of each Follower.
  • Step 6: Announce temporary result. The Leader first announces a temporary result within the Leader group, which votes on the result. If most TLeaders confirm it, the temporary result is announced to the Followers; if not, the group goes back to Step 5 to process again.
  • Step 7: Result check and vote. Same as before.
  • Step 8: Vote process. If all Followers vote in favor, or the number of failed negotiation rounds becomes too large, the Leader stops the negotiation and goes to Step 9. If not, the Leader adjusts the set of mission stages, still using a greedy algorithm, and then goes to Step 6.
  • Step 9: Result release. Same as Step 6, the Leader releases the final negotiation result first in the leader group. After being confirmed, the negotiation results are released to all nodes.
  • Step 10: Execute. Same as before.
In this process, the nodes choose as many mission stages as possible: all mission stages whose cost-effectiveness ratio satisfies the threshold are chosen, described as:
$$s_{i,j} = \begin{cases} 1 & c_{i,j}/P_{i,j} \le CER_{thd} \\ 0 & c_{i,j}/P_{i,j} > CER_{thd} \end{cases}.$$
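A short sketch of this threshold rule is given below; the threshold value and the mission list are illustrative assumptions.

```python
def select_by_cer(missions, cer_threshold=10.0):
    """missions: list of (target_id, stage, cost, profit); keep stages with c/P <= threshold."""
    return [(tid, j) for tid, j, c, p in missions if c / p <= cer_threshold]

missions = [(1, 3, 1.2, 0.15), (2, 1, 2.0, 0.25), (3, 5, 9.0, 0.25)]
print(select_by_cer(missions))   # [(1, 3), (2, 1)] under the assumed threshold
```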
In the high-level granularity, RIPPLE and PBFT algorithms are combined to improve system stability and negotiation accuracy. The modifications include:
Leader group member selection method. In the RIPPLE algorithm, the leader group members are selected from all Followers. Considering the communication environment, here every Follower satisfying the direct-connection condition of Step 1 applies to join the group.
The PBFT verification mechanism is introduced into the leader group. The PBFT verification mechanism can effectively tolerate node failures and disturbances, which suits satellites well given their limited computational resources. Verification within the leader group reduces the possibility of releasing erroneous information; in other words, the leader group sacrifices its own communication bandwidth and computational resources for the nodes working at lower granularity levels.

2.3. Real-Time Level Selection of Negotiation Granularity

The satellite needs to select an appropriate level of granularity. Selecting a higher level brings larger data transmissions and a more complex negotiation process, which may cause negotiation failure; selecting a lower level improves the success rate of negotiation but may yield poorer negotiation results.
Level selection is also difficult because the satellites’ selections interact and are unknown to each other. We collected a large number of samples relating selections to negotiation effectiveness and trained a neural network to describe the relationship between the communication environment and the success probability of negotiation at each level of granularity. The satellite can then make level-selection decisions onboard in real time based on these probabilities.
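One plausible form of this simple onboard decision is sketched below: predict the three success probabilities from the current network status and target count, and pick the granularity with the highest value. The `predict_success_probs` stub is a hypothetical stand-in for the trained network.

```python
LEVELS = ("low", "middle", "high")

def predict_success_probs(bandwidth_left, hops_to_leader, n_targets):
    # Placeholder: a deployed system would evaluate the trained network here.
    return (0.9, 0.7, 0.4)

def choose_granularity(bandwidth_left, hops_to_leader, n_targets):
    probs = predict_success_probs(bandwidth_left, hops_to_leader, n_targets)
    return max(zip(LEVELS, probs), key=lambda lp: lp[1])[0]

print(choose_granularity(bandwidth_left=0.25, hops_to_leader=2, n_targets=60))   # "low"
```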

2.3.1. Neural Network Description

The transmission of negotiation information is related to the network status and the size of the information, which are therefore selected as the input parameters of the neural network. In this paper, the network status is described by two parameters: the remaining communication bandwidth and the hop number to the Leader. The size of the information is described by the number of targets $n$. The outputs of the neural network are the success probabilities of negotiation at each level of granularity, recorded as $P_L$, $P_M$, and $P_H$.
Suppose there are $N_L$ layers, all fully connected. The $k$th layer has $N_n^k$ nodes, all connected to the $(k-1)$th layer’s nodes with parameters $w_k$ and $b_k$. The activation function is optimized during training. The mean squared error (MSE) was chosen to evaluate network performance:
$$MSE = \frac{1}{M}\sum_{i=1}^{M} \left\| y_i - \hat{y}_i \right\|_2^2,$$
where $y_i$ is the vector of true values from the sample and $\hat{y}_i$ is the vector of predicted values output by the neural network.
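For illustration, the sketch below runs a fully connected forward pass with the paper’s three inputs and three outputs and evaluates the MSE defined above. The weights are random placeholders rather than trained parameters, and radbas is used as the example hidden activation.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(32, 3)), rng.normal(size=32)    # hidden layer with 32 nodes
w2, b2 = rng.normal(size=(3, 32)), rng.normal(size=3)     # linear (purelin) output layer

def radbas(x):
    return np.exp(-x ** 2)                                 # radial-basis transfer function

def forward(x):
    return w2 @ radbas(w1 @ x + b1) + b2                   # [P_L, P_M, P_H] before training

def mse(y_true, y_pred):
    d = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.mean(np.sum(d ** 2, axis=1)))

x = np.array([0.3, 2.0, 60.0])    # remaining bandwidth, hops to the Leader, target count
print(forward(x), mse([[0.9, 0.7, 0.4]], [forward(x)]))
```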

2.3.2. Neural Network Training

The neural network training is divided into two steps: finding the best activation function and determining the network structure. In the first step, a two-layer model is used as the initial structure, with the output layer’s activation function set to purelin; 14 activation functions and different hidden-layer node numbers are traversed to find the best three activation functions. In the second step, three-layer structures are traversed, with the node number of each layer chosen from 4, 6, 8, 16, 32, 64, 128, and 256. By comparing the MSE, the best network structure is found.
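The two-step search can be organized as in the sketch below. `train_and_evaluate` is a hypothetical helper standing in for training a network with the given hidden layers and returning its MSE; the loop structure, not the training code, is the point of the sketch.

```python
from itertools import product

ACTIVATIONS = ["elliotsig", "hardlim", "hardlims", "logsig", "netinv", "poslin", "purelin",
               "radbas", "radbasn", "satlin", "satlins", "softmax", "tansig", "tribas"]
NODE_COUNTS = [4, 6, 8, 16, 32, 64, 128, 256]

def train_and_evaluate(hidden_layers):
    """Hypothetical placeholder: train a network with these (activation, nodes) layers, return MSE."""
    return 0.0

# Step 1: two-layer networks (one hidden layer plus purelin output); keep the best activations.
step1 = sorted((train_and_evaluate([(act, n)]), act, n)
               for act, n in product(ACTIVATIONS, NODE_COUNTS))
best_acts = {act for _, act, _ in step1[:3]}

# Step 2: three-layer networks (two hidden layers) built from the best activations.
step2 = sorted((train_and_evaluate([(a1, n1), (a2, n2)]), a1, n1, a2, n2)
               for a1 in best_acts for n1 in NODE_COUNTS
               for a2 in best_acts for n2 in NODE_COUNTS)
print(step2[0])   # lowest-MSE three-layer configuration
```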

3. Experiments and Analyses

3.1. Experimental Data

3.1.1. Targets and Mission Status

We randomly generated a set of targets and missions. Each target’s initial position was between 2° N and 2° S. The target’s motion was also random, with course angle $\psi_i$ between −180° and 180° and velocity $v_i$ between 0 and 30 knots. The mission status, defined by the numbers shown in Table 2, was random between 0 and 5.
Finally, we obtained the set of targets and mission information shown in Table 3; the remaining targets are listed in Appendix A (Table A1).

3.1.2. Imaging Satellite

Satellite parameters are shown in Table 4. There are ten identical satellites in the scene, with a phase interval of −5°.
At the beginning of the simulation, mission 1 was assigned to satellite 4. Therefore, satellite 4 was executing a tracking mission stage, which affected the negotiation.

3.2. Neural Network Training

3.2.1. Sample Collection

We traversed the 3 levels of granularity for satellites 6 to 10, the bandwidth usages shown in Table 5, and the target numbers shown in Table 6.
In total, we collected $3^5 \times 7 \times 8 = 13{,}608$ samples. Negotiation results were obtained from the Leader node and converted into success probabilities for each level of granularity as the output of the neural network. The remaining communication bandwidth $B_w$ and the number of targets $n$ are inputs of the simulation, and the hop number to the Leader node of each satellite, $H_n$, can be read from the simulation result.
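The traversal behind this sample count can be written as below: every combination of a granularity level for each of satellites 6 to 10, a bandwidth-usage group from Table 5, and a target-number group from Table 6.

```python
from itertools import product

LEVELS = ("low", "middle", "high")
BANDWIDTH_USAGE = (0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90)   # Table 5
TARGET_NUMBERS = (30, 40, 50, 60, 70, 80, 90, 100)             # Table 6

runs = list(product(product(LEVELS, repeat=5), BANDWIDTH_USAGE, TARGET_NUMBERS))
print(len(runs))   # 13608 simulation runs
```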

3.2.2. Training Result

We chose 14 different activation functions (elliotsig, hardlim, hardlims, logsig, netinv, poslin, purelin, radbas, radbasn, satlin, satlins, softmax, tansig, and tribas) for the first layer; the activation function of the second layer was purelin. After training, the error distributions of the best ten neural networks for the first step in Section 2.3.2 are shown in Figure 5, and their parameters are shown in Table 7.
The best three activation functions (radbas, softmax, and satlin) were found. The convergence process of the best neural network is shown in Figure 6.
Then we used these three activation functions and the three-layer structure to find the best neural network. After training, the error distributions of the best ten neural networks for the second step are shown in Figure 7, and their parameters are shown in Table 8.
The training convergence process of the best neural network is shown in Figure 8.

3.3. Experimental Results

All parameter combinations in Tables 5 and 6 were used to verify the neural-network-based multi-granularity negotiation method. We collected and analyzed the profits and costs of the negotiation results. The profit is shown in Figure 9 and the cost-effectiveness ratio in Figure 10.

4. Discussion

Resources onboard are limited, which means that cluster resources such as communication and computation may face conflicting demands from the mission and from negotiation; this easily causes negotiation failure. Therefore, a method is needed to balance the gains from negotiation against the negotiation delay. We proposed a method in which negotiation is divided into three levels of granularity with different working modes. As the results show, the three levels of granularity perform differently, which means the division is effective and each level suits a different situation. Moreover, the satellite selects the best level onboard in real time and achieves better performance than any of the three fixed levels, and it can do so without consuming many resources; the cost-effectiveness ratio of the satellite cluster is thereby improved. It is worth noting that the three levels of granularity were designed by the authors based on our task planning algorithm, so the number and mode of the granularity levels may differ for other task planning algorithms.
We used machine learning to train a neural network. As the training results show, the MSE of the neural network reaches 0.002, which is not especially good for the training itself; this may be caused by the small set of 112 distinct sample types, which are the statistical result of the 13,608 samples. However, this error is acceptable in the application, because the probability estimate does not require high accuracy. Nonlinear fitting and small-sample learning may obtain better results, but would likely not change the selection of granularity.

5. Conclusions

Limited communication and computational resources onboard are significant factors for negotiation but have not received enough attention. Cooperation usually exists in a micro-satellite cluster, which means the effect of the time delay caused by communication and computation cannot be neglected. In this paper, a neural-network-based multi-granularity negotiation method under a decentralized architecture is proposed and shown to be effective. The advantages of the proposed method are summarized as follows:
  • The three levels of granularity we designed work for different situations. We combine the advantages of the levels and use the best level of granularity according to the situation, which brings better profits and an improved cost-effectiveness ratio.
  • Complex situation analysis for granularity selection would itself introduce time delay; therefore, a neural network is trained to select the granularity in real time.
  • The satellite framework is decentralized, which means it is suitable for a large satellite cluster containing failed nodes and malicious nodes.

Author Contributions

Conceptualization, Y.D. and X.D.; methodology, X.D. and Y.D.; software, X.D. and S.X.; validation, X.D., S.X. and Y.D.; formal analysis, X.D. and S.X.; investigation, X.D.; resources, Y.D.; data curation, X.D.; writing, X.D.; visualization, X.D.; supervision, Y.D.; project administration, X.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (number 2016YFB0501102).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Targets and mission information (continuation of Table 3).
Target ID | Position | Course Angle (°) | Velocity (knots) | Mission Status
146.78 E, 0.16 N13 3.23
2135.27 E, 1.05 S78 16.31
3148.65 E, 1.62 N−51 8.60
4105.82 E, 1.14 N−133.97 24.122
522.83 W, 1.58 S−100.56 29.780
6170.52 W, 1.71 N10.64 2.111
7117.21 W, 1.63 N39.89 11.313
8155.38 W, 0.74 N−71.28 7.253
997.38 E, 0.82 S−145.84 6.680
10126.06 E, 1.98 S2.46 9.550
11105.34 W, 0.4 N−25.63 13.694
1277.51 E, 0.86 S168.43 26.081
13135.86 E, 0.35 S106.17 25.443
142.22 E, 1.64 N−67.71 22.444
1570.67 W, 0.56 N−16.66 22.431
1617.6 E, 0.68 N−76.47 27.613
17126.7 E, 0.43 S90.43 11.532
18112.02 E, 0.3 S−23.47 3.071
1941.83 E, 0.19 N−173.1 14.923
2095.34 E, 0.67 S153.94 12.872
2179.91 E, 1.81 S−33.18 14.691
2293.55 W, 0.25 S151.94 8.313
2367.62 E, 1.14 N−23.34 0.763
24137.7 W, 1.35 N151.97 6.764
252.81 E, 0.24 N−32.75 13.523
26129 E, 0.07 N116.76 21.592
2743.86 E, 0.48 S−85.93 21.444
2836.48 W, 1.31 N41.81 6.894
2997.03 E, 1.09 S29.79 8.492
30113.56 W, 0.49 N−169.45 17.230
31133.4 E, 1.63 S10.05 22.343
3219.59 W, 1.05 S53.3 20.343
338.16 W, 0.71 S93.46 1.854
341.83 W, 0.59 N−162.03 27.423
3512.1 E, 1.21 S−75.38 6.975
3638.86 W, 1.61 N−144.91 16.061
37162.48 E, 0.09 S−10.36 1.363
3819.94 W, 0.49 N−66.05 1.893
39135.58 E, 0.59 S5.57 4.113
40148.4 E, 0.92 S−152.6 3.020
4166.06 W, 1.62 S−0.55 17.063
4227.49 W, 1.47 N178.75 23.13
43152.13 E, 1.88 S82.93 14.511
44116.76 E, 0.43 N−160.38 29.134
4535.72 E, 0.95 S−47.87 21.91
46130.59 W, 0.77 N−72.46 3.64
47153.51 E, 0.68 S−179.4 17.784
48112.79 E, 1.31 N−151.89 14.865
49163.11 E, 1.76 S84.84 12.454
50126.49 E, 0.94 N87.96 18.173
5160.67 W, 1.37 S135.86 18.852
5222.05 E, 1.45 N−88.89 14.61
5370.15 W, 1.28 N10.91 13.852
54102.21 E, 0.24 N62.43 24.971
5522.49 W, 0.06 S92.65 14.143
5673.21 E, 1.43 N112.86 3.762
5772.99 E, 1.56 S−150.73 7.810
5897.73 E, 0.07 N−160.3 24.272
59120.5 E, 1.01 N−101.32 18.423
6083.49 E, 0.89 N−143.76 22.461
6142.29 E, 0.54 S−165.52 22.54
62119.69 E, 1.59 N−49.59 13.763
63137.64 E, 0.16 N109.7 20.110
6444.04 E, 0.9 N145.48 17.543
6534.77 E, 1.94 N93.66 10.24
6616.9 E, 1.71 S−172.48 9.421
67166.71 E, 1.4 N54.01 26.031
688.43 W, 1.62 S66.33 16.092
699.69 E, 1.17 S−136.29 6.590
7025.3 E, 1.55 N133.45 21.394
71162.65 W, 0.72 S45.93 22.265
7271.92 E, 0.75 S−48.13 18.142
73126.34 W, 1.3 S−59.5 11.534
74129.3 E, 1.21 S96.87 12.911
75148.87 E, 0.6 N−75.96 28.334
76175.76 E, 1.89 S−7.35 21.82
7751.35 W, 1.51 N−96.73 5.560
7882.54 W, 1.26 S149.41 19.722
7988.86 W, 0.78 S−19.87 19.762
80143.25 E, 1.43 S−21.42 15.941
8134.11 E, 0.25 S111.23 22.744
8266.3 E, 1.43 N125.25 14.131
8388.45 W, 1.37 N−42.87 18.763
84166.84 W, 0.1 S17.65 1.043
85114.67 W, 1.96 N29.31 20.075
86137.9 W, 1.33 S−119.95 24.722
87135.84 W, 0.78 S−91.32 2.484
88130.55 E, 1.93 S154.1 2.585
8965.21 W, 0.01 N21.85 13.071
90155.26 W, 1.65 N−112.8 29.732
91129.67 W, 1.78 N−36.53 7.344
9299.27 W, 0.41 S52.87 15.023
9333.13 E, 1.37 N−139.99 6.011
94168.98 W, 0.99 N170.18 7.952
9574.4 W, 1.4 N19.14 9.082
96137.41 E, 0.36 N165.85 20.564
9762.26 E, 1.99 S83.51 16.072
98177.51 W, 1.49 N−125.73 9.442
9934.16 W, 0.31 S−102.79 7.265
100173.14 W, 1.55 N167.35 22.785

References

  1. Van der Werf, G.R.; Randerson, J.T.; Giglio, L.; Collatz, G.J.; Mu, M.; Kasibhatla, P.S.; Morton, D.C.; DeFries, R.S.; Jin, Y.; van Leeuwen, T.T. Global fire emissions and the contribution of deforestation, savanna, forest, agricultural, and peat fires. Atmos. Chem. Phys. 2010, 10, 11707–11735. [Google Scholar] [CrossRef] [Green Version]
  2. Opgenoorth, H.J.; Lockwood, M.; Alcaydé, D.; Donovan, E.; Engebretson, M.J.; Van Eyken, A.P.; Kauristie, K.; Lester, M.; Moen, J.; Waterman, J.; et al. Coordinated ground-based, low altitude satellite and Cluster observations on global and local scales during a transient post-noon sector excursion of the magnetospheric cusp. Ann. Geophys. 2001, 19, 1367–1398. [Google Scholar] [CrossRef]
  3. Lockwood, M.; Opgenoorth, H.; Van Eyken, A.P.; Fazakerley, A.; Bosqued, J.M.; Denig, W.; Wild, J.A.; Cully, C.; Greenwald, R.; Lu, G.; et al. Coordinated Cluster, ground-based instrumentation and low-altitude satellite observations of transient poleward-moving events in the ionosphere and in the tail lobe. Ann. Geophys. 2001, 19, 1589–1612. [Google Scholar] [CrossRef] [Green Version]
  4. Rabideau, G.; Knight, R.; Chien, S.; Fukunaga, A.; Govindjee, A. Iterative Repair Planning for Spacecraft Operations in the ASPEN System. ISAIRAS 1999, 440, 99. [Google Scholar]
  5. Rayman, M.D.; Varghese, P.; Lehman, D.H.; Livesay, L.L. Results from the Deep Space 1 Technology Validation Mission. ACTA Astronaut. 2000, 47, 475–487. [Google Scholar] [CrossRef]
  6. Campbell, M.; Schetter, T. Comparison of multiple agent-based organizations for satellite constellations. J. Spacecr. Rockets 2002, 39, 274–283. [Google Scholar] [CrossRef]
  7. Schetter, T.; Campbell, M.; Surka, D. Agent Systems, Mobile Agents, and Applications. In Multiple Agent-Based Autonomy for Satellite Constellations; Elsevier Science Publishers Ltd.: Cambridge, UK, 2003. [Google Scholar]
  8. Schetter, T.; Campbell, M.; Surka, D. Multiple agent-based autonomy for satellite constellations. Artif. Intell. 2003, 145, 147–180. [Google Scholar] [CrossRef] [Green Version]
  9. He, Y.; Chen, Y.; Lu, J.; Chen, C.; Wu, G. Scheduling multiple agile earth observation satellites with an edge computing framework and a constructive heuristic algorithm. J. Syst. Archit. 2019, 95, 205–208. [Google Scholar] [CrossRef]
  10. Si-wei, C.; Jing, C.; Lin-Cheng, S. A MAS-based negotiation model and its application to multiple observation collaborative satellite planning. In Proceedings of the 2010 The 2nd International Conference on Computer and Automation Engineering (ICCAE), Singapore, 26–28 February 2010; pp. 205–208. [Google Scholar]
  11. Chong, W.; Ning, J.; Jun, L.; Jun, W.; Hao, C. Cooperative Co-evolutionary Algorithm in Satellite Imaging Scheduling of Cooperative Multiple Centers. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010. [Google Scholar]
  12. Iacopino, C.; Palmer, P.; Brewer, A.; Policella, N.; Donati, A. EO Constellation MPS based on ant colony optimization algorithms. In Proceedings of the International Conference on Recent Advances in Space Technologies, Istanbul, Turkey, 12–14 June 2013; pp. 159–164. [Google Scholar]
  13. Zhao, Y.; Liu, Y.; Wen, G.; Ren, W.; Chen, G. Designing Distributed Specified-Time Consensus Protocols for Linear Multiagent Systems Over Directed Graphs. IEEE Trans. Autom. Control 2019, 64, 2945–2952. [Google Scholar] [CrossRef]
  14. Buzzi, P.G.; Selva, D.; Hitomi, N.; Blackwell, W.J. Assessment of constellation designs for earth observation: Application to the TROPICS mission. Acta Astronaut. 2019, 161, 166–182. [Google Scholar] [CrossRef] [Green Version]
  15. Globus, A.; Crawford, J.; Lohn, J.; Pryor, A. A comparison of techniques for scheduling earth observing satellites. In Proceedings of the Conference on Nineteenth National Conference on Artificial Intelligence, San Jose, CA, USA, 25–29 July 2004. [Google Scholar]
  16. Araguz, C.; Closa, M.; Bou-Balust, E.; Alarcon, E. A Design-Oriented Characterization Framework for Decentralized, Distributed, Autonomous Systems: The Nano-Satellite Swarm Case. In Proceedings of the 2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, 26–29 May 2019. [Google Scholar]
  17. Gallud, X.; Selva, D. Agent-based simulation framework and consensus algorithm for observing systems with adaptive modularity. Syst. Eng. 2018, 21, 432–454. [Google Scholar] [CrossRef]
  18. He, L.; Li, G.; Xin, L.; Chen, Y. An autonomous multi-sensor satellite system based on multi-agent blackboard model. Eksploat. Niezawodn. Maint. Reliab. 2017, 19, 447–458. [Google Scholar] [CrossRef]
  19. Nag, S.; Li, A.S.; Merrick, J.H. Scheduling algorithms for rapid imaging using agile Cubesat constellations. Adv. Space Res. 2018, 61, 891–913. [Google Scholar] [CrossRef]
  20. Wang, J.; Demeulemeester, E.; Qiu, D. A pure proactive scheduling algorithm for multiple earth observation satellites under uncertainties of clouds. Comput. Oper. Res. 2016, 74, 1–13. [Google Scholar] [CrossRef]
  21. Wang, J.; Demeulemeester, E.; Hu, X.; Wu, G. Expectation and SAA Models and Algorithms for Scheduling of Multiple Earth Observation Satellites Under the Impact of Clouds. IEEE Syst. J. 2020, 1–12. [Google Scholar] [CrossRef]
  22. Ongaro, D.; Ousterhout, J. In search of an understandable consensus algorithm. USENIX Annu. Tech. Conf. 2014, 14, 305–319. [Google Scholar]
  23. Castro, M.; Liskov, B. Practical Byzantine fault tolerance. OSDI 1999, 99, 173–186. [Google Scholar]
  24. Dhillon, V.; Metcalf, D.; Hooper, M. Recent Developments in Blockchain. Blockchain Enabled Appl. 2017, 11, 151–181. [Google Scholar]
  25. Jakobsson, M.; Juels, A. Proofs of Work and Bread Pudding Protocols. In Secure Information Networks; Springer: Boston, MA, USA, 1999; pp. 258–272. [Google Scholar]
  26. Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. White Paper, 2008. [Google Scholar]
Figure 1. The flowchart of the multi-granularity negotiation method.
Figure 2. Low-level granularity negotiation process.
Figure 3. Middle-level granularity negotiation process.
Figure 4. High-level granularity negotiation process.
Figure 5. The error distribution of the best ten neural networks under the two-layer structure.
Figure 6. Training convergence process of the best neural network under the two-layer structure.
Figure 7. The error distributions of the best ten neural networks under the three-layer structure.
Figure 8. Training convergence process of the best neural network under the three-layer structure.
Figure 9. Profit of each mode.
Figure 10. Cost-effectiveness ratio of each mode.
Table 1. Initial profits $p_j^{ini}$ for each stage of one mission.
Stage | Discovery | Identification | Confirmation | Tracking | Monitoring
Profit | 0.25 | 0.1 | 0.15 | 0.25 | 0.25
Table 2. Definition of the mission status.
Mission Status ID | Mission Status
0 | not discovered
1 | discovered
2 | identified
3 | confirmed
4 | tracking
5 | monitoring
Table 3. Targets and mission information (first three targets).
Target ID | Position | Course Angle (°) | Velocity (knots) | Mission Status
1 | 46.78 E, 0.16 N | 13 | 3.2 | 3
2 | 135.27 E, 1.05 S | 78 | 16.3 | 1
3 | 148.65 E, 1.62 N | −51 | 8.6 | 0
Table 4. Parameters for satellite 1.
Category | Parameter | Value
Orbit Parameter | Semimajor Axis (m) | 6,878,000
 | Eccentricity | 0
 | Inclination (°) | 0
 | RAAN (°) | 0
 | Argument of Perigee (°) | 0
 | True Anomaly (°) | 230
Attitude Parameter | Agile | Y
 | Max. Maneuver Angle | 45
 | Angular Velocity | 1
 | Angular Acceleration | 0.1
 | Maneuver Mode | Trapezoid Method
Payload Parameter | Imaging Width (km) | 4
 | Image Size per Frame (Mb) | 6.4
Communication Parameter | Frequency Band | S
 | Antenna Number | 4
 | Bandwidth (M) | 10
 | Modulation Mode | PSK
 | Spread Spectrum Mode | CDMA
Table 5. Simulation values of bandwidth usage.
Group ID | 1 | 2 | 3 | 4 | 5 | 6 | 7
Bandwidth usage | 60% | 65% | 70% | 75% | 80% | 85% | 90%
Table 6. Simulation values of target number.
Group ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Target number | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100
Table 7. Parameters of the best ten neural networks under the two-layer structure.
Network ID | First Layer Activation Function | First Layer Nodes Number | Second Layer Activation Function | Second Layer Nodes Number
1 | radbas | 32 | purelin | 3
2 | softmax | 32 | purelin | 3
3 | satlin | 32 | purelin | 3
4 | softmax | 256 | purelin | 3
5 | satlin | 64 | purelin | 3
6 | radbas | 32 | purelin | 3
7 | poslin | 32 | purelin | 3
8 | tansig | 32 | purelin | 3
9 | logsig | 32 | purelin | 3
10 | radbasn | 32 | purelin | 3
Table 8. Parameters of the best ten neural networks under the three-layer structure.
Network ID | First Layer Activation Function | First Layer Nodes Number | Second Layer Activation Function | Second Layer Nodes Number | Third Layer Activation Function | Third Layer Nodes Number
1 | satlin | 8 | softmax | 16 | purelin | 3
2 | satlin | 128 | radbas | 16 | purelin | 3
3 | satlin | 16 | radbas | 64 | purelin | 3
4 | satlin | 16 | softmax | 64 | purelin | 3
5 | satlin | 64 | softmax | 256 | purelin | 3
6 | softmax | 32 | radbas | 64 | purelin | 3
7 | softmax | 32 | satlin | 16 | purelin | 3
8 | satlin | 32 | satlin | 32 | purelin | 3
9 | satlin | 64 | softmax | 128 | purelin | 3
10 | softmax | 64 | softmax | 256 | purelin | 3