Article

A Real-Time Effectiveness Evaluation Method for Remote Sensing Satellite Clusters on Moving Targets

School of Astronautics, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(8), 2993; https://doi.org/10.3390/s22082993
Submission received: 5 March 2022 / Revised: 7 April 2022 / Accepted: 11 April 2022 / Published: 13 April 2022

Abstract

Recently, remote sensing satellites have become increasingly important in the Earth observation field as their temporal, spatial, and spectral resolutions have improved. Consequently, the quantitative evaluation of remote sensing satellites has received considerable attention. Quantitative evaluation is conventionally based on simulation, which suffers from a speed-accuracy trade-off. In this paper, a real-time evaluation model architecture for remote sensing satellite clusters is proposed. Firstly, a multi-physical field coupling simulation model of the satellite cluster observing moving targets is established. It accounts for on-board resource constraints as well as the effects of imaging uncertainty on the observation results. Secondly, a moving target observation indicator system is developed, which reflects the satellite cluster’s actual effectiveness in orbit. Meanwhile, an indicator screening method based on correlation analysis is proposed to improve the independence of the indicator system. Thirdly, a neural network is designed and trained for stakeholders to realize rapid evaluation. Different network structures and parameters are studied comprehensively to determine the optimized neural network model. Finally, the experiments show that the proposed neural network evaluation model can generate real-time, high-quality evaluation results, substantiating the validity of the proposed approach.

1. Introduction

Remote sensing satellites obtain information from the Earth’s surface through optical or microwave payloads [1]. They are widely used in many fields, including agriculture, forestry, ocean, meteorology, and military applications [2]. With the growing emphasis on space and the continuing advance of science and technology, the number of remote sensing satellites has expanded significantly in recent years. Meanwhile, the temporal, spatial, and spectral resolutions of satellites have also been gradually enhanced [3]. Satellite clusters, such as Planet Labs, Gaofen, and Jilin-1, have been formed as a result of these advancements [4,5]. Remote sensing satellite clusters typically consist of numerous wide-swath satellites and high-resolution satellites, which cooperate to complete census and detailed survey tasks for various targets. Nevertheless, constructing such satellite clusters is costly because of the many processes involved, including design, manufacture, testing, launch, and management. Consequently, the effectiveness of satellite clusters has to be evaluated accurately for better decision making [6]. With the emerging drawbacks of qualitative evaluation methods, quantitative evaluation methods have received considerable attention in recent years [7,8].
The four main branches of the current quantitative effectiveness evaluation methods are the analytical method, experimental statistical method, multi-index synthesis method, and simulation method [9]. Although the analytical method is simple and efficient, it has difficulty in solving certain complex evaluation problems. Zhao [10] studied the effectiveness equivalence algorithm of the weapon system and proposed an approximate analytical expression of the weight coefficient. Next, despite the high reliability of the experimental statistical method, it requires real data, which is normally scarce and difficult to obtain. Xiong [11] used more than ten years of in-orbit data to evaluate NASA EOS Terra and Aqua MODIS on-orbit performance. The multi-index synthesis method has a good hierarchy and a wide range of applications. However, the weights of indices are usually affected by subjectiveness. Zheng [12] evaluated three typical remote sensing tasks using the analytic hierarchical process (AHP). Chen [13] constructed an evaluation indicator system to describe the target feature detection ability of satellites, where the weight parameters are obtained from the expert evaluation. Liu [14] utilized the AHP and the availability dependability capability (ADC) model to evaluate the comprehensive effectiveness of the earth observation satellite cluster. While the simulation method can consider more factors of complex systems and environments, it also requires the model to be highly accurate. Zhang [4] built a satellite observation effectiveness evaluation model based on the availability capacity profitability (ACP) framework. Tang [15] used the Satellite Tool Kit (STK) and C++ to build a nanosatellite constellation model and evaluated its military effectiveness. As one of the four branches mentioned above, the simulation method can compute the evaluation results presented in various conditions, solving the evaluation problem of the complex remote sensing satellite cluster effectiveness with good applicability [9].
The main challenge of using the simulation method to evaluate the remote sensing satellite cluster is the lack of model accuracy [9]. The satellite cluster is a complex and highly comprehensive system in which multiple physical fields, including mechanics, electricity, thermodynamics, optics, and magnetism, are coupled [16,17]. Remote sensing satellite clusters are usually used to observe three typical types of targets: point targets, regional targets, and moving targets [18]. Establishing the observation scene of moving targets is more complex than that of the others, since additional aspects must be considered. Moving targets have time sensitivity and position uncertainty, requiring multi-satellite cooperation to discover, identify, confirm, and track them sequentially. The uncertainty of the observation results is mostly driven by factors including resolution, light, cloud, and climate [18]. Traditional simulation evaluation modeling only considers a few factors, namely the satellite orbit, attitude, and optical payload visibility model [4,9]. It lacks resource constraint models, such as power and on-board storage, and does not consider the impact of imaging quality on the mission status. Hence, to describe the complex coupling relationships of a remote sensing satellite cluster, a high-fidelity simulation model must be established [19].
Regarding the construction of indicator systems, most remote sensing satellite evaluation indicators only consider the temporal resolution (e.g., the revisit time and coverage time) and spatial resolution (e.g., the Ground Sampling Distance (GSD) and percentage of area coverage) [9,12]. Even though the aforementioned indicators can reflect the satellite’s availability to observe the target under ideal conditions, they cannot reflect the effectiveness of the satellite’s actual mission process. Moreover, an indicator system is generally proposed by experts, and the indicators in such systems may be mutually correlated, thereby violating the principles of hierarchy and independence [20].
Furthermore, a high-fidelity simulation evaluation model requires heavy computation, especially when the cluster contains numerous satellites [21]. As a consequence, the time-consuming simulation limits the iteration speed of the design stage and the decision-making speed of the use stage [22]. A common approach to overcoming this problem is to establish a machine learning model. Machine learning technology has lately been applied to the theoretical studies of aerospace missions [23], including aircraft design [20,22], mission planning [1,24], and attitude control [25,26]. In those cases, machine learning has demonstrated its capability of learning complicated functions and responding quickly.
In this paper, by applying machine learning to the effectiveness evaluation of remote sensing satellite clusters on moving targets, a real-time evaluation model architecture is proposed. Firstly, we establish a multi-physical field coupling simulation model of the satellite cluster, which considers the repercussions of multiple satellite resource constraints and the uncertainty of imaging quality on the observation results. Secondly, we develop an indicator system to evaluate the effectiveness of satellite cluster observations on moving targets. A set of independent indicators is then filtered out through correlation analysis. Thirdly, neural networks are trained with high-fidelity simulation evaluation data. Neural networks of different hidden layers, neurons, and activation functions are trained to determine the optimized model, which can output effectiveness evaluation results in real time.
The remainder of the paper is organized as follows. In Section 2, the architecture of the real-time effectiveness evaluation model is presented. In Section 3, the high-fidelity model construction method is explained. In Section 4, the construction and screening process of the evaluation indicator system is provided. In Section 5, the neural network training method is presented. In Section 6, the experiments and results are described. In Section 7, we discuss the validity of the method proposed and conclude the paper.

2. Architecture of the Real-Time Satellite Cluster Effectiveness Evaluation Model

The scene of the satellite cluster observing the moving targets is illustrated in Figure 1. A satellite cluster may include different satellites, such as wide-swath satellites and high-resolution satellites. A variety of satellites in the satellite cluster collaborate to conduct the census and comprehensive investigation of the target. Wide-swath satellites are utilized to search for and discover targets in a specified area. High-resolution satellites are intended for identification, confirmation, and status tracking of discovered targets.
To evaluate the moving target’s observation effectiveness of the remote sensing satellite cluster, a real-time evaluation model architecture is designed, as shown in Figure 2.
The multi-physical field coupling simulation model of remote sensing satellite clusters is described in Section 3. The model is defined as
$Y = f_{Simulation}(X_{All}, t),$
where $Y$ is the simulation result data and $t$ is the simulation time. $X_{All}$ consists of the simulation condition parameters, which can be expressed as
$X_{All} = [X_{ADCS}, X_{POWER}, X_{TTC\&DT}, X_{PAYLOAD}, X_{Space}, X_{Target}]^{T}.$
The simulation condition parameters $X_{All}$ mainly consist of six parts: the attitude determination and control subsystem (ADCS) parameters $X_{ADCS}$, the power subsystem (POWER) parameters $X_{POWER}$, the telemetry/telecommand and data transmission subsystem (TTC&DT) parameters $X_{TTC\&DT}$, the payload subsystem parameters $X_{PAYLOAD}$, the task environmental parameters $X_{Space}$, and the target parameters $X_{Target}$.
The moving target observation effectiveness indicator system of the satellite cluster is established in Section 4, which can be expressed as
$Z = [z_{1}, \ldots, z_{i}, \ldots, z_{n}]^{T}.$
The effectiveness evaluation model based on the simulation result data can be expressed as
$Z_{Simulation} = [z_{1,Simulation}, \ldots, z_{i,Simulation}, \ldots, z_{n,Simulation}]^{T} = f_{SCEE}^{Simulation}(Y).$
In Section 5, the neural network evaluation model is trained, which can be expressed as
$Z_{Learning} = [z_{1,Learning}, \ldots, z_{i,Learning}, \ldots, z_{n,Learning}]^{T} = f_{SCEE}^{Learning,Stakeholders,i}(X_{Stakeholders,i}),$
where $X_{Stakeholders,i} \subseteq X_{All}$ contains the factors with which the $i$th stakeholder is concerned, and $f_{SCEE}^{Learning,Stakeholders,i}$ is the corresponding neural network evaluation model. For different stakeholders, such as the designers, manufacturers, and users, $X_{Stakeholders,i}$ contains different elements. On the one hand, the primary concern of the designers and manufacturers is the composition and structure of the remote sensing satellite cluster. On the other hand, the fundamental interest of the users is to what extent different task environments and target parameters impact the overall system efficiency.
With the rapid forward propagation of the neural network, the calculation of the satellite cluster effectiveness indicators can be greatly accelerated. The neural network model error of the $i$th indicator can be expressed as
$error_{i} = \left| \dfrac{z_{i,Learning} - z_{i,Simulation}}{z_{i,Simulation}} \right|.$
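To make the interfaces of this architecture concrete, the following Python sketch outlines the three mappings defined above and the relative error used to validate the surrogate. All function and variable names (f_simulation, f_scee_simulation, and so on) are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def f_simulation(x_all: dict, t: float) -> dict:
    """High-fidelity multi-physical-field simulation (Section 3): maps the
    condition parameters X_All and time t to the raw result data Y.
    Left as a stub here; the real model is the coupled simulation."""
    raise NotImplementedError

def f_scee_simulation(y: dict) -> np.ndarray:
    """Indicator evaluation (Section 4): maps simulation result data Y to the
    effectiveness indicator vector Z_Simulation."""
    raise NotImplementedError

def f_scee_learning(x_stakeholder: np.ndarray, network) -> np.ndarray:
    """Fast surrogate: a trained neural network mapping stakeholder inputs
    X_Stakeholders,i directly to the indicator vector Z_Learning."""
    return network.predict(x_stakeholder.reshape(1, -1))[0]

def relative_error(z_learning: np.ndarray, z_simulation: np.ndarray) -> np.ndarray:
    """Per-indicator relative error |z_Learning - z_Simulation| / |z_Simulation|."""
    return np.abs(z_learning - z_simulation) / np.abs(z_simulation)
```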

3. Establishing the Multi-Physical Field Coupling Model of the Remote Sensing Satellite Cluster

In this section, the function $f_{Simulation}$ that calculates $Y$ from $X_{All}$ and $t$ is established. The function is realized via numerical simulation based on a high-fidelity model, which must be multi-physical-field coupled. The multi-physical coupling relationships are illustrated in Figure 3.
The forces and moments of celestial bodies, such as Earth, the Sun, and the Moon, are coupled with the satellites’ orbits and attitudes. The ADCS controls the satellite’s position and attitude. They have coupling relationships with the solar array attitude of the POWER, the communication link of the TTC&DT, and the range of the payload’s field of view. Sunlight and the solar array within the POWER are coupled. The remaining battery capacity of POWER is mutually coupled with the power consumed by other subsystems. TTC&DT is coupled with other subsystems by sending and receiving commands and data. The payload’s coverage capability is coupled with the satellite’s orbit and attitude. At the same time, it is also bound by resources, such as power, storage, and data transmission. The payload imaging quality is affected by factors, such as resolution, climate environment, cloudiness level, and lighting conditions, which in turn affect the target detection probability.
Typical simulation evaluation modeling only considers a few factors, namely the satellite orbit, attitude, and optical payload visibility model. It lacks resource constraint models, such as power and on-board storage, and does not consider the impact of imaging quality on the mission status. On the basis of the existing models, this paper establishes three supplementary models: (1) the power subsystem model, (2) the optical payload’s target detection probability model, and (3) the satellite cluster mission allocation model that considers the whole satellite’s resource constraints.

3.1. The Power Subsystem Model

The power subsystem model includes the device power consumption model, the solar array power supply model, and the battery charge and discharge model.
(1) The device power consumption model.
The total power consumption $P_{Device}$ is calculated as follows:
$P_{Device} = \sum_{i=1}^{N_{Device}} S_{Device}^{i} P_{D}^{i},$
where $i$ denotes the $i$th device, $i \in \mathbb{N}^{+}$, and $1 \leq i \leq N_{Device}$. $N_{Device}$ is the total number of devices. The state variable $S_{Device}^{i}$ is set to 1 when the device consumes electrical energy; otherwise, it is set to 0. $P_{D}^{i}$ is the power that the $i$th device consumes.
The on–off states of the devices on board are defined as $S_{Device} = \{S_{Device}^{1}, \ldots, S_{Device}^{i}, \ldots, S_{Device}^{N_{Device}}\}$, and they can be obtained by
$S_{Device} = f_{DS}(X_{PowerManage}, X_{TC}, J(E_{0})),$
where $X_{TC}$ is the power management command from the Earth station and $X_{PowerManage}$ is the management command from the on-board computer.
Denote $t$ as the current time and $\Delta t$ as the simulation time step. The power management command at $t + \Delta t$ can be obtained from
$X_{PowerManage}(t + \Delta t) = f_{PM}(S_{Device}(t), P_{Device}(t), P_{Wing}(t), C_{Battery}(t)).$
The power consumption $P_{D}^{i}$ of each device is coupled with the other subsystems, and it can be expressed as
$P_{D}^{i} = f_{DP}(X_{D}^{i}, X_{Device\_i}^{Coupling}, J(E_{0})),$
where $X_{D}^{i}$ is the state parameter of the $i$th device itself and $X_{Device\_i}^{Coupling}$ represents the state parameters affected by other subsystems, including the temperature, data transmission load, and data processing load.
(2) The solar array power-supply model.
The output power of the solar array is calculated as follows:
$P_{Wing} = C_{Shadow} X_{Wing} S_{0} A_{C}\, \eta \cos\theta_{SW}\, (\beta_{P} \Delta T + 1),$
where $C_{Shadow}$ represents whether the satellite is in the Earth’s shadow, $X_{Wing}$ is the comprehensive coefficient of the solar array, $S_{0}$ is the solar constant, $A_{C}$ is the area of the solar array, $\eta$ is the photoelectric conversion efficiency of the solar array, $\theta_{SW}$ is the angle between the normal direction of the solar panel and the direction of sunlight, $\beta_{P}$ is the solar array power temperature coefficient, and $\Delta T$ is the difference between the standard temperature and the current working temperature of the solar array.
$C_{Shadow}$ is calculated according to the Sun vector $R_{Sun}$, the satellite position vector $R_{Sate}$, and the angle $\theta_{SS}$ between the two vectors; the formulas are as follows:
$\theta_{SS} = \arccos\left(\dfrac{R_{Sun} \cdot R_{Sate}}{|R_{Sun}||R_{Sate}|}\right),$
$C_{Shadow} = \begin{cases} 0, & \text{if } \theta_{SS} > \frac{\pi}{2} \text{ and } |R_{Sate}|\sin\theta_{SS} < R_{e} \\ 1, & \text{else} \end{cases},$
where $R_{e}$ is the Earth’s radius; $C_{Shadow} = 0$ indicates that the satellite is in the Earth’s shadow and the solar array produces no power.
$\theta_{SW}$ can be obtained from the Sun vector $R_{Sun}$ and the normal vector $W$ of the solar panel, which can be expressed as
$\theta_{SW} = \arccos\left(\dfrac{R_{Sun} \cdot W}{|R_{Sun}||W|}\right).$
The temperature difference $\Delta T$ is
$\Delta T = T_{W} - T_{O},$
where $T_{W}$ is the current working temperature and $T_{O}$ is the standard temperature of the solar array.
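As a minimal illustration of the solar array power-supply model above, the sketch below evaluates the eclipse factor, the Sun incidence angle, and the temperature correction for one time step. The vectors and coefficient values are placeholders supplied by the caller, and the Earth radius constant is an assumption of this sketch.

```python
import numpy as np

R_EARTH = 6.371e6  # Earth's radius R_e in metres (assumed value for this sketch)

def solar_array_power(r_sun, r_sat, w_normal, x_wing, s0, area, eta, beta_p, t_work, t_std):
    """Output power P_Wing of the solar array for one simulation step."""
    r_sun, r_sat, w = (np.asarray(v, dtype=float) for v in (r_sun, r_sat, w_normal))

    # Angle between the Sun vector and the satellite position vector
    theta_ss = np.arccos(np.dot(r_sun, r_sat) / (np.linalg.norm(r_sun) * np.linalg.norm(r_sat)))
    # Cylindrical Earth-shadow test: the array produces no power in eclipse
    in_shadow = theta_ss > np.pi / 2 and np.linalg.norm(r_sat) * np.sin(theta_ss) < R_EARTH
    c_shadow = 0.0 if in_shadow else 1.0

    # Angle between the panel normal and the Sun direction
    theta_sw = np.arccos(np.dot(r_sun, w) / (np.linalg.norm(r_sun) * np.linalg.norm(w)))
    delta_t = t_work - t_std  # deviation from the standard working temperature

    return c_shadow * x_wing * s0 * area * eta * np.cos(theta_sw) * (beta_p * delta_t + 1.0)
```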
(3) The battery charge and discharge model.
When the output power of the solar array exceeds the sum of total device power consumption and battery charge power, the regulator consumes the surplus power. The surplus power can be calculated by
$P_{Loss} = \begin{cases} P_{Wing} - P_{Device} - P_{MaxCharge}, & P_{Wing} - P_{Device} > P_{MaxCharge} \\ 0, & \text{else} \end{cases},$
where $P_{MaxCharge}$ is the maximum charging power of the battery and $P_{Loss}$ is the power consumed by the regulator.
The battery is charged only if the output power of the solar array is greater than that consumed by the on-board devices and the current battery capacity is below its maximum value, in which case $P_{Battery} > 0$. Conversely, the battery is discharged if the output power of the solar array is less than that consumed by the on-board devices and the current battery capacity is above its minimum value, in which case $P_{Battery} < 0$. The battery power is defined as
$P_{Battery} = \begin{cases} P_{Wing} - P_{Device} - P_{Loss}, & C_{MinBattery} < C_{Battery} < C_{MaxBattery} \\ 0, & \text{else} \end{cases},$
where $C_{MinBattery}$ is the minimum capacity of the battery and $C_{MaxBattery}$ is the maximum capacity.
The battery capacity at $t + \Delta t$ can be expressed as
$C_{Battery}(t + \Delta t) = \begin{cases} C_{Battery}(t) + \Delta t\, k_{Charge} P_{Battery}(t), & P_{Battery} > 0 \\ C_{Battery}(t) + \Delta t\, k_{Discharge} P_{Battery}(t), & P_{Battery} < 0 \end{cases},$
where $k_{Charge}$ is the battery charging coefficient and $k_{Discharge}$ is the battery discharging coefficient.
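The battery bookkeeping above reduces to a simple per-step update. The following sketch implements the regulator loss, the charge/discharge decision, and the capacity update; parameter names mirror the symbols in the equations, and all values are user-supplied.

```python
def battery_step(p_wing, p_device, c_battery, dt,
                 p_max_charge, c_min, c_max, k_charge, k_discharge):
    """One simulation step of the battery charge/discharge model.
    Returns (new_capacity, p_battery, p_loss)."""
    surplus = p_wing - p_device
    # The regulator dissipates any surplus beyond the maximum charging power
    p_loss = surplus - p_max_charge if surplus > p_max_charge else 0.0
    # The battery exchanges power only while its capacity is strictly inside its limits
    p_battery = surplus - p_loss if c_min < c_battery < c_max else 0.0
    # Charging and discharging use different efficiency coefficients
    if p_battery > 0:
        c_new = c_battery + dt * k_charge * p_battery
    elif p_battery < 0:
        c_new = c_battery + dt * k_discharge * p_battery
    else:
        c_new = c_battery
    return c_new, p_battery, p_loss
```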

3.2. The Optical Payload Model

The optical payload model consists of two parts: the optical coverage model and the target detection probability model. The coverage state $S_{i,j}^{Cover}$ can be obtained from the optical coverage model [9], where $i$ is the target number and $j$ is the task number of the $i$th target. The target detection probability is affected by factors including the satellite imaging resolution, target size, light condition, cloud level, and visibility level. The detailed model is described as follows:
The resolution of the satellite optical payload is
$GSD = \dfrac{d}{f}\,|R_{st}^{i}|,$
where $d$ is the pixel size, $f$ is the focal length, and $R_{st}^{i}$ is the position vector from the satellite to the target in the inertial frame.
The 2-D criterion (number of cycles) $N$ is generated by
$N = \dfrac{d_{t}}{GSD},$
$d_{t} = \sqrt{L_{t} W_{t}},$
where $d_{t}$ is the feature size of the target, $L_{t}$ is the length of the target, and $W_{t}$ is the width of the target.
The target static detection probability is defined as [27]
$P_{Static} = P(N) = \dfrac{(N/N_{50})^{\,2.7 + 0.7(N/N_{50})}}{1 + (N/N_{50})^{\,2.7 + 0.7(N/N_{50})}},$
where $N_{50}$ is the number of cycles corresponding to a 50% detection probability. $N_{50}$ is set to 0.75, 3, 6, and 1.5 for the discovery, identification, confirmation, and tracking tasks, respectively.
The solar altitude angle is calculated as
$H = \left|\arccos\left(\dfrac{R_{Sun} \cdot R_{Target}}{|R_{Sun}||R_{Target}|}\right)\right|.$
The light factor is defined as
$f_{s} = \begin{cases} 1, & H \geq 30^{\circ} \\ 0.04(H - 5), & 5^{\circ} \leq H < 30^{\circ} \\ 0, & H < 5^{\circ} \end{cases}.$
The cloud factor is generated by [28]
$f_{c} = 1 - \dfrac{(N_{c} - 1)^{2}}{81},$
where $N_{c}$ is the cloud level, $N_{c} \in \mathbb{N}^{+}$, $1 \leq N_{c} \leq 10$.
The visibility factor is generated by [28]
$f_{v} = e^{-\frac{(N_{v} - 1)^{4}}{1024}},$
where $N_{v}$ is the visibility level, $N_{v} \in \mathbb{N}^{+}$, $1 \leq N_{v} \leq 10$.
The target detection probability of a single observation can be derived as the product of the static detection probability and the other three factors; it can be expressed as
$P_{i,j} = P_{Static}\, f_{s}\, f_{c}\, f_{v},$
where $i$ is the target number and $j$ is the task number of the $i$th target.
According to the target detection probability, a single observation task may either succeed or fail; thus, the simulation results may differ even under the same simulation conditions. The task status is defined as follows:
$S_{i,j}^{Task} = \begin{cases} 1, & \text{if } S_{i,j}^{Cover} = 1 \text{ and } P_{i,j} \geq random(0,1) \\ 0, & \text{else} \end{cases}.$
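The single-observation detection model above can be collected into one routine. The sketch below follows the stated formulas; the square-root feature size and the scalar treatment of the satellite-to-target range are reading assumptions, and units must be kept consistent by the caller.

```python
import math
import random

def detection_probability(pixel_size, focal_length, range_to_target,
                          target_length, target_width, n50,
                          sun_altitude_deg, cloud_level, visibility_level):
    """Single-observation detection probability P_ij = P_Static * f_s * f_c * f_v."""
    gsd = pixel_size / focal_length * range_to_target        # ground sampling distance
    d_t = math.sqrt(target_length * target_width)            # target feature size
    n = d_t / gsd                                             # number of resolvable cycles
    exponent = 2.7 + 0.7 * (n / n50)
    p_static = (n / n50) ** exponent / (1.0 + (n / n50) ** exponent)

    h = sun_altitude_deg
    f_s = 1.0 if h >= 30.0 else (0.04 * (h - 5.0) if h >= 5.0 else 0.0)  # light factor
    f_c = 1.0 - (cloud_level - 1) ** 2 / 81.0                            # cloud factor
    f_v = math.exp(-((visibility_level - 1) ** 4) / 1024.0)              # visibility factor
    return p_static * f_s * f_c * f_v

def task_status(covered: bool, p_ij: float) -> int:
    """Bernoulli draw for the task status S_ij given coverage and detection probability."""
    return 1 if covered and p_ij >= random.random() else 0
```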
Finally, the status and the end time of each task are recorded, including the following items:
(1) The time series of discovery tasks is recorded as $T_{i,j}^{Discovery}$, where $1 \leq j \leq N_{i}^{Discovery}$ and $N_{i}^{Discovery}$ is the total discovery task number of the $i$th target.
(2) The time series of identification and confirmation tasks is recorded as $T_{i,j}^{Identification}$, where $1 \leq j \leq N_{i}^{Identification}$ and $N_{i}^{Identification}$ is the total identification and confirmation task number of the $i$th target.
(3) The time series of successful tracking tasks is recorded as $[T_{i,j}^{TrackingStart}, T_{i,j}^{TrackingEnd}]$, where $1 \leq j \leq N_{i}^{Tracking}$ and $N_{i}^{Tracking}$ is the total tracking task number of the $i$th target.
(4) The time series of losing the targets is recorded as $T_{i,k}^{Lost}$, where $1 \leq k \leq N_{i}^{Lost}$ and $N_{i}^{Lost}$ is the total lost count of the $i$th target. $T_{i,k}^{Lost}$ is recorded when the simulation starts or when the tasks of target $i$ fail twice in a row.

3.3. The Satellite Cluster Mission Allocation Model

The moving target observation mission can be divided into four tasks, which are discovery, identification, confirmation, and tracking. The discovery task refers to the scanning and discovery of regional targets when the potential area of the target is known, but the specific location is not. The identification and confirmation tasks refer to the identification of the target type and model when the approximate target location has been found. The tracking task refers to the continuous tracking of the target when the specific model of the target has been confirmed. The discovery task is generally completed by satellites with a low resolution and a wide field of view. As for the identification, confirmation, and tracking tasks, they are typically accomplished by satellites with a high resolution but a narrow field of view.
(1) High-resolution satellites mission allocation model.
The mission allocation steps for the identification, confirmation, and tracking tasks are designed as shown in Figure 4, where $i$ is the target index, $1 \leq i \leq N_{Target}$, and $N_{Target}$ is the total number of targets; $j$ is the satellite index, $1 \leq j \leq N_{Sate}$, and $N_{Sate}$ is the total number of satellites.
Step 1: obtain the current planning time $t$ and the planning period $\Delta t_{Plan}$, then catalog the discovered targets as
$Target = \{Target_{1}, \ldots, Target_{i}, \ldots, Target_{n}\},$
where $Target_{i}$ represents the set of the $i$th target’s state information, which comprises the target index; the latest observation state; the latest observation time; the latest identified type and model; and the latest positioning longitude, latitude, speed, and course angle.
Step 2: within a planning cycle $\Delta t_{Plan}$, the positions of the targets are determined by their latest observed positions and speeds.
Step 3: the observation time window of all satellites for all targets during the planning period $\Delta t_{Plan}$ is calculated. The observation time window of satellite $j$ on target $i$ is defined as $[T_{i,j}^{WindowStart}, T_{i,j}^{WindowEnd}]$ [1].
Step 4: the mission profit for each target is presented as
$Profit_{i} = \begin{cases} 1, & \text{Discovery state} \\ 0.8, & \text{Identification state} \\ 0.5, & \text{Confirmation state} \\ k(t - t_{i}), & \text{Tracking state} \end{cases},$
where $t$ is the current time, $t_{i}$ is the last observed time of the $i$th target, and $k$ is the time coefficient. The value of $k$ is set to 0.001.
Step 5: the total observation cost of each satellite to the observable target is calculated, including the time cost, power consumption cost, and storage consumption cost.
The time cost is
$ct_{i,j} = T_{i,j}^{WindowStart} - t.$
The power consumption cost is
$cp_{i,j} = (P_{Imaging} + k P_{DT})(T_{i,j}^{WindowEnd} - T_{i,j}^{WindowStart}) + P_{Maneuver}(T_{i,j}^{ManeuverEnd} - T_{i,j}^{ManeuverStart}),$
where $P_{Imaging}$, $P_{DT}$, and $P_{Maneuver}$ are the average powers of the whole satellite during the imaging process, the data transmission process, and the attitude maneuver process, respectively. $k$ is the comprehensive coefficient denoting the ratio of the remote sensing payload data rate to the data transmission rate. $T_{i,j}^{ManeuverStart}$ and $T_{i,j}^{ManeuverEnd}$ are the start and end times of the attitude maneuver, respectively, which can be obtained from the start time of the time window, the expected angle of the attitude maneuver, the maximum angular velocity and angular acceleration of the satellite attitude maneuver, and the stabilization time of the attitude maneuver [1].
The storage consumption cost is defined as
$cm_{i,j} = MEM_{j}\,(T_{i,j}^{WindowEnd} - T_{i,j}^{WindowStart}),$
where $MEM_{j}$ is the remote sensing payload data rate.
The total observation cost is the product of the above three costs, which is expressed as
$c_{i,j} = ct_{i,j}\, cp_{i,j}\, cm_{i,j}.$
Step 6: allocate the missions to the satellites. The most profitable mission among the remaining missions is chosen and allocated to the satellite that can fulfill it at the lowest cost while satisfying the attitude maneuver, on-board storage, and power supply constraints. The allocation process loops until all the missions have been assigned.
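Steps 4-6 amount to a greedy profit-first, cost-second assignment. The sketch below illustrates one reading of that loop; the windows, profits, costs, and feasibility check are assumed to be precomputed by Steps 3-5, and the data structures are illustrative rather than the authors' implementation.

```python
def allocate_missions(target_ids, satellite_ids, windows, profits, costs, feasible):
    """Greedy mission allocation for the high-resolution satellites.

    windows  -- set of (target, satellite) pairs with a visibility window (Step 3)
    profits  -- dict: target -> mission profit (Step 4)
    costs    -- dict: (target, satellite) -> total observation cost (Step 5)
    feasible -- callable(target, satellite) checking maneuver/storage/power constraints
    """
    assignments = {}
    remaining = set(target_ids)
    while remaining:
        # Pick the most profitable remaining mission
        target = max(remaining, key=lambda tgt: profits[tgt])
        # Candidate satellites: visible window and resource constraints satisfied
        candidates = [s for s in satellite_ids
                      if (target, s) in windows and feasible(target, s)]
        if candidates:
            assignments[target] = min(candidates, key=lambda s: costs[(target, s)])
        remaining.discard(target)   # drop the mission whether or not it was assignable
    return assignments
```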
(2) Wide-swath satellites mission allocation model.
The allocation process of the discovery stage is similar to that of the identification, confirmation, and tracking stages mentioned above, with two main differences.
The first difference is the target cataloging. As shown in Figure 5, the potential emergence area of the target is meshed by a 1° × 1° grid of longitude and latitude, and the center of each grid cell is regarded as a “target”. The square boundaries of the potential area are determined as follows: the target is located at the left boundary of the potential area at time $t_{0}$, and the right boundary is obtained by assuming the target moves away from the left boundary at its maximum speed of 30 knots (15.43 m/s).
The second difference is the observation profit of the grid target. The profit is calculated as follows:
$Profit = t - t_{i},$
where $t$ is the current time and $t_{i}$ is the last observed time of the $i$th grid cell.
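For the discovery stage, the grid cataloging can be sketched as below. The 30-knot drift of the eastern boundary follows the description above; the rough conversion of 1° of longitude to about 111 km is an assumption added here purely for illustration.

```python
def grid_targets(lon_left_deg, lat_min_deg, lat_max_deg, t0, t_now, v_max_mps=15.43):
    """Return the centres of the 1-degree by 1-degree grid cells covering the
    potential emergence area, each treated as a pseudo-target for discovery."""
    # Eastern boundary drifts with the assumed 30-knot (15.43 m/s) maximum speed
    drift_deg = v_max_mps * (t_now - t0) / 111_000.0   # ~111 km per degree (approximation)
    lon_right_deg = lon_left_deg + drift_deg

    centres = []
    lon = lon_left_deg
    while lon < lon_right_deg:
        lat = lat_min_deg
        while lat < lat_max_deg:
            centres.append((lon + 0.5, lat + 0.5))   # centre of the 1x1 degree cell
            lat += 1.0
        lon += 1.0
    return centres
```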

4. The Effectiveness Evaluation Indicator System

In this section, we propose an evaluation indicator system of moving target observation performance to judge the effectiveness of satellite cluster observation tasks. The evaluation indicator system is screened through correlation analysis to form an independent indicator set. The following describes the methods for developing and filtering the effectiveness evaluation indicator system.

4.1. The Construction of the Evaluation Indicator System

The moving target observation effectiveness indicator system considers three abilities: search and discovery, identification and confirmation, and continuous tracking. The details of the effectiveness evaluation indicator system are shown in Figure 6.
The concept definition and mathematical description of the effectiveness indicators are as follows:
Denote $i$ as the target index with $1 \leq i \leq N_{Target}$, $N_{Target}$ as the total number of targets, $T_{Total}$ as the total simulation time, and $T_{Orbit}$ as the orbit period of the satellite cluster.
1. The search and discovery ability.
The target discovery probability $P_{Discovery}$ is defined as the average probability that the targets are rediscovered within a single orbit period from the moment they are lost. The discovery response time $T_{Discovery}$ is defined as the average time taken to rediscover a lost target. The calculation procedures are as follows:
Search for the discovery time immediately after each loss time $T_{i,k}^{Lost}$, and record the time combination as $[T_{i,k}^{Lost}, T_{i,k}^{Discovery}]$. The total number of discovery tasks $N_{Discovery}$ completed within a single orbit period is generated by
$N_{Discovery} = \sum_{i=1}^{N_{Target}} N_{i}^{Lost}, \quad T_{i,k}^{Discovery} - T_{i,k}^{Lost} \leq T_{Orbit},$
where $k$ denotes the $k$th loss of the $i$th target, $N_{i}^{Lost}$ is the total lost number of the $i$th target, and $1 \leq k \leq N_{i}^{Lost}$; only the losses satisfying the time condition contribute to the sum.
$P_{Discovery}$ is computed by
$P_{Discovery} = \dfrac{N_{Discovery}}{\sum_{i=1}^{N_{Target}} N_{i}^{Lost}}.$
$T_{Discovery}$ is computed by
$T_{Discovery} = \dfrac{\sum_{i=1}^{N_{Target}} \sum_{k=1}^{N_{i}^{Lost}} \left(T_{i,k}^{Discovery} - T_{i,k}^{Lost}\right)}{\sum_{i=1}^{N_{Target}} N_{i}^{Lost}}.$
2. The identification and confirmation ability.
The target identification probability $P_{Identification}$ is defined as the average probability of identifying and confirming targets within a single orbit period after the targets are discovered. The identification response time $T_{Identification}$ is defined as the average time taken from target discovery to identification. The calculation procedures are as follows:
Search for the identification time immediately after each discovery time $T_{i,j}^{Discovery}$, and record the time combination as $[T_{i,j}^{Discovery}, T_{i,j}^{Identification}]$. The total number of successful confirmation tasks completed within a single orbit period after the targets are discovered is produced by
$N_{Identification} = \sum_{i=1}^{N_{Target}} N_{i}^{Discovery}, \quad T_{i,j}^{Identification} - T_{i,j}^{Discovery} \leq T_{Orbit},$
where $j$ denotes the $j$th discovery of the $i$th target, $N_{i}^{Discovery}$ is the total discovery number of the $i$th target, and $1 \leq j \leq N_{i}^{Discovery}$; only the discoveries satisfying the time condition contribute to the sum.
$P_{Identification}$ is computed by
$P_{Identification} = \dfrac{N_{Identification}}{\sum_{i=1}^{N_{Target}} N_{i}^{Discovery}}.$
$T_{Identification}$ is computed by
$T_{Identification} = \dfrac{\sum_{i=1}^{N_{Target}} \sum_{j=1}^{N_{i}^{Discovery}} \left(T_{i,j}^{Identification} - T_{i,j}^{Discovery}\right)}{\sum_{i=1}^{N_{Target}} N_{i}^{Discovery}}.$
3. The continuous tracking ability.
The tracking time percentage $A_{Tracking}$ is defined as the average ratio of the total target tracking time to the total runtime:
$A_{Tracking} = \dfrac{\sum_{i=1}^{N_{Target}} \sum_{j=1}^{N_{i}^{Tracking}} \left(T_{i,j}^{TrackingEnd} - T_{i,j}^{TrackingStart}\right)}{N_{Target}\, T_{Total}},$
where $j$ denotes the $j$th tracking task of the $i$th target, $N_{i}^{Tracking}$ is the total tracking number of the $i$th target, and $1 \leq j \leq N_{i}^{Tracking}$.
The average tracking interval $T_{Tracking}$ is defined as the average interval between two consecutive tracking tasks:
$T_{Tracking} = \dfrac{\sum_{i=1}^{N_{Target}} \sum_{j=1}^{N_{i}^{Tracking}-1} \left(T_{i,j+1}^{TrackingStart} - T_{i,j}^{TrackingEnd}\right)}{\sum_{i=1}^{N_{Target}} N_{i}^{Tracking}}.$
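As an illustration of how the search-and-discovery indicators are obtained from the recorded time series, the sketch below pairs each loss with the next discovery and accumulates the counts and response times. The event bookkeeping (the lost_times and discovery_times dictionaries) is assumed to come from the Section 3 simulation, and averaging only over rediscovered losses is a simplification of this sketch.

```python
def discovery_indicators(lost_times, discovery_times, t_orbit):
    """Discovery probability and mean discovery response time.

    lost_times      -- dict: target id -> sorted list of loss times T_Lost
    discovery_times -- dict: target id -> sorted list of discovery times T_Discovery
    """
    n_lost, n_rediscovered, responses = 0, 0, []
    for target, losses in lost_times.items():
        for t_lost in losses:
            n_lost += 1
            later = [t for t in discovery_times.get(target, []) if t >= t_lost]
            if later:
                dt = later[0] - t_lost          # time to the first rediscovery
                responses.append(dt)
                if dt <= t_orbit:               # rediscovered within one orbit period
                    n_rediscovered += 1
    p_discovery = n_rediscovered / n_lost if n_lost else 0.0
    t_discovery = sum(responses) / len(responses) if responses else float("inf")
    return p_discovery, t_discovery
```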

4.2. Evaluation Indicator Screening

The correlation of the indicator system $Z = [z_{1}, \ldots, z_{n}]^{T}$ is analyzed, and the indicators with strong correlations are then screened and eliminated. This results in an indicator system that satisfies the principles of hierarchy and independence. The steps for evaluation indicator screening are as follows:
Step 1: calculate the correlation coefficients between the parameters and build the coefficient matrix as
$A = [a_{ij}]_{n \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$
where $a_{ij}$ is the correlation coefficient of indicators $i$ and $j$; the values on the main diagonal are 1.
Step 2: screen out the highly correlated indicators. If $|a_{ij}| > 0.5$, then indicators $i$ and $j$ are considered to be highly correlated.
Step 3: calculate the sum of the linear correlation coefficients of indicators $i$ and $j$ with the other indicators:
$C_{i} = \sum_{k=1,\; k \neq i,\; k \neq j}^{n} |a_{ik}|,$
$C_{j} = \sum_{k=1,\; k \neq i,\; k \neq j}^{n} |a_{jk}|.$
Step 4: compare the values of C i and C j . Remove the indicator with the larger value and keep the indicator with the smaller value.
Step 5: continue Steps 2 to 4, until the indicator system is screened and formed.
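The screening procedure in Steps 1-5 can be implemented directly on the sample matrix. The following sketch repeatedly removes the member of the most strongly correlated pair that has the larger total correlation with the remaining indicators; processing the strongest pair first is one reasonable reading of the loop in Step 5, not necessarily the authors' exact ordering.

```python
import numpy as np

def screen_indicators(samples: np.ndarray, threshold: float = 0.5):
    """Correlation-based screening: `samples` has one column per indicator;
    returns the column indices of the indicators that are kept."""
    kept = list(range(samples.shape[1]))
    while len(kept) > 1:
        a = np.corrcoef(samples[:, kept], rowvar=False)
        off_diag = np.triu(np.abs(a), k=1)          # upper triangle, diagonal excluded
        i, j = np.unravel_index(np.argmax(off_diag), off_diag.shape)
        if off_diag[i, j] <= threshold:             # no strongly correlated pair left
            break
        # Sum of |correlation| with the other remaining indicators (Steps 3-4)
        others = [k for k in range(len(kept)) if k not in (i, j)]
        c_i = np.abs(a[i, others]).sum()
        c_j = np.abs(a[j, others]).sum()
        kept.pop(i if c_i > c_j else j)             # remove the more redundant indicator
    return kept
```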

5. Neural Network Evaluation Model Training

The refining of simulation model granularity enhances the simulation accuracy, but diminishes the simulation efficiency. Hence, it is challenging to meet the efficiency requirement of both the iterative optimization at the designer end and the real-time decision making at the user end. To provide a solution to this problem, a backpropagation (BP) neural network model is designed and trained for user stakeholders to realize the rapid evaluation of satellite cluster effectiveness. This section introduces sample generation and neural network training in detail.

5.1. Sample Creation

Each sample has an input and an output. The sample input is defined as
$X = [t_{0}, Long, Lat, N_{c}, N_{v}, N_{Target}]^{T},$
where $t_{0}$ is the task start time, $Long$ and $Lat$ are the longitude and latitude of the center of the initial area, $N_{c}$ is the cloud level, $N_{v}$ is the visibility level, and $N_{Target}$ is the total target number.
The sample output is the effectiveness indicator vector, which can be expressed as follows:
$Z = [z_{1}, \ldots, z_{n}]^{T}.$
A single simulation run has high uncertainty because the target initial parameters are random and detection is probabilistic. Therefore, in order to generate reliable samples, each simulation case is performed multiple times to obtain the statistical values of the effectiveness indicators.
Denote $N_{Sample}$ as the total sample number and $N_{Condition}$ as the designated number of simulations per sample. The complete sample set $S = [S_{1}, \ldots, S_{n}]^{T}$ is generated after $N_{Sample} \times N_{Condition}$ simulations. $S_{i}$ is the matrix corresponding to the $i$th effectiveness indicator of the sample set, which has $N_{Sample}$ rows and seven columns.
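A Monte Carlo loop ties the sample definition together: draw a random input X, repeat the simulation N_Condition times with random target initial states, and keep the averaged indicators as the label. The parameter ranges below (taken loosely from the experiment in Section 6) and the two callbacks are illustrative placeholders, not the authors' exact settings.

```python
import numpy as np

def generate_samples(n_sample, n_condition, run_simulation, evaluate_indicators, seed=0):
    """Monte Carlo sample creation: each input X is paired with the indicator
    vector averaged over n_condition repeated simulations with random target
    initial states. The two callbacks stand in for the Section 3 simulation
    and the Section 4 indicator evaluation."""
    rng = np.random.default_rng(seed)
    inputs, outputs = [], []
    for _ in range(n_sample):
        x = {
            "t0": rng.uniform(0.0, 86400.0),         # task start time within one day [s]
            "lon": rng.uniform(130.0, 150.0),        # area centre longitude [deg E]
            "lat": rng.uniform(10.0, 30.0),          # area centre latitude [deg N]
            "cloud": int(rng.integers(1, 11)),       # cloud level, 1-10
            "visibility": int(rng.integers(1, 11)),  # visibility level, 1-10
            "n_target": int(rng.integers(1, 11)),    # number of moving targets
        }
        z_runs = [evaluate_indicators(run_simulation(x)) for _ in range(n_condition)]
        inputs.append(x)
        outputs.append(np.mean(z_runs, axis=0))      # statistical value over the repeats
    return inputs, np.asarray(outputs)
```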

5.2. Network Training

The neural network training process is divided into three different stages, which are the sample set division, network training, and performance testing, as shown in Figure 7.
To begin, the sample sets $S_{i}$ are first normalized to $[-1, 1]$ and then split into a training set $S_{i}^{Train}$ and a test set $S_{i}^{Test}$. Denote $\sigma \in [0, 1]$ as the fraction of the sample training set. $S_{i}^{Train}$ has $\sigma N_{Sample}$ rows, whilst $S_{i}^{Test}$ has $(1 - \sigma) N_{Sample}$ rows.
Next, deep neural networks are trained in two steps, starting with the search for the best activation function, followed by the selection of the optimal network parameters. In the first step, a single hidden layer neural network traverses a set of activation functions to find several optimal activation functions. Those activation functions are softmax, tansig, logsig, elliotsig, poslin, purelin, radbas, satlin, satlins, and tribas. In the second step, a multi-hidden-layer neural network is implemented to determine the best combination of network structures and parameters. The neural network traverses the various combinations of network structures and parameters, including the number of hidden layers, the number of neurons in each layer, and the activation functions found in the first step.
Finally, the neural networks are tested on the training set and the test set. Denote $Z_{i}^{Train}$ and $Z_{i}^{Test}$ as the outputs of the neural network on the training and test sets, respectively. The mean squared error (MSE) of the $i$th effectiveness indicator can be expressed as
$MSE_{i}^{Train} = \dfrac{1}{\sigma N_{Sample}} \sum_{j=1}^{\sigma N_{Sample}} \left(Z_{i,j}^{Train} - \hat{Z}_{i,j}^{Train}\right)^{2}.$
The average errors of the training and test sets are defined as
$\bar{e}_{i}^{Train} = \dfrac{1}{\sigma N_{Sample}} \sum_{j=1}^{\sigma N_{Sample}} error_{i,j}^{Train},$
$\bar{e}_{i}^{Test} = \dfrac{1}{(1-\sigma) N_{Sample}} \sum_{j=1}^{(1-\sigma) N_{Sample}} error_{i,j}^{Test}.$
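The two-step search described above can be emulated with any neural network library. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the MATLAB-style networks in the paper: only a subset of the listed activation functions has direct counterparts ('tanh' for tansig, 'logistic' for logsig, 'relu' for poslin), and the layer sizes shown are illustrative, not the optimized configurations reported in Section 6.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def train_and_select(x, z, sigma=0.8, seed=0):
    """Search over activation functions and hidden-layer layouts for one
    effectiveness indicator and keep the network with the lowest test MSE.
    x: (n_samples, n_features) inputs, z: (n_samples,) indicator values,
    both assumed to be already normalized to [-1, 1]."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(sigma * len(x))
    train_idx, test_idx = idx[:n_train], idx[n_train:]

    best_net, best_mse = None, np.inf
    for activation in ("tanh", "logistic", "relu"):        # tansig / logsig / poslin analogues
        for hidden in ((60,), (80, 40), (120, 80, 40)):    # illustrative layer layouts
            net = MLPRegressor(hidden_layer_sizes=hidden, activation=activation,
                               max_iter=5000, random_state=seed)
            net.fit(x[train_idx], z[train_idx])
            mse = mean_squared_error(z[test_idx], net.predict(x[test_idx]))
            if mse < best_mse:
                best_net, best_mse = net, mse
    return best_net, best_mse
```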

6. Experiments and Results

In this section, the ship target observation scene is selected for the experiments. Firstly, we compared the proposed model with the models described in references [6,9] to investigate the influence of the high-fidelity model on the effectiveness evaluation results. Secondly, we used the proposed evaluation indicator system to calculate the effectiveness of the satellite cluster, and we employed correlation analysis to filter the evaluation indicator system; the resulting indicator set is highly hierarchical and independent. Finally, we trained neural networks for effectiveness evaluation. To find the optimal network structure and parameters, numerous combinations of structures and parameters were traversed. As a result, the neural networks can output the effectiveness indicators instantaneously.

6.1. The Comparison of Different Simulation Model Granularities

The remote sensing satellite cluster is designed as a Walker constellation in a sun-synchronous orbit, composed of wide-swath satellites and high-resolution satellites. Each orbital plane of the cluster contains ten satellites with an interval phase of 36°. The wide-swath satellites occupy the first and sixth positions of each orbital plane, whereas the high-resolution satellites occupy the other eight positions. The satellite cluster configuration is shown in Figure 8.
The key parameters of the satellite cluster are shown in Table 1.
The mission area for this experiment is restricted to the Pacific, from 130° E 10° N to 150° E 30° N. The ships are initialized within a random region of 2° × 2°. The positions, velocities, and course angles are also arbitrarily selected. The ship size used here is 155.3 m × 20.4 m. The simulation starts at 0:00 a.m. on one day in 2021 and lasts for 21,600 s. The randomly generated environmental parameters used for the model comparison are shown in Table 2.
The initial information of ten arbitrary ships is presented in Table 3.
We built three simulation models of different granularities, based on refs. [6,9] and on our proposed model, labeled granularity 1, granularity 2, and granularity 3, respectively. Their model elements are presented in Table 4. Granularity 1 only considers the satellite’s orbit, attitude, and optical payload coverage model. On top of granularity 1, granularity 2 not only includes a data transmission model, but also considers the impact of weather uncertainty. Compared with the first two granularities, granularity 3 additionally takes into consideration the constraints of the satellite power subsystem, as well as the influence of resolution, climatic conditions, and cloudiness level on the imaging detection probability.
The simulation process of the satellite cluster observing the moving target is shown in Figure 9. The wide-swath satellites and the high-resolution satellites completed the tasks of discovering, identifying, confirming, and tracking the ship in turn.
The calculated effectiveness evaluation indicators of the three model granularities are presented in Table 5 and Figure 10a. The effectiveness values for granularity 1 and granularity 2 are higher than those of the proposed granularity 3. Comparing granularity 1 with granularity 3, the $\bar{T}_{Identification}$ and $\bar{T}_{Tracking}$ errors of granularity 1 are above 75%, and its error of $\bar{A}_{Tracking}$ is 7 times larger. As for granularity 2, although its errors relative to granularity 3 are lower than those of granularity 1, they are still large: the error of $\bar{T}_{Identification}$ is 37% and that of $\bar{A}_{Tracking}$ is 64%. The main reason behind these errors is that the coarse-grained models ignore the power constraint and lack the imaging detection probability model, causing them to differ significantly from the true model. Therefore, the calculated indicators are unrealistically high and could not be achieved by the actual satellites in operation. The limitations of coarse-grained models and the necessity of fine-grained models in effectiveness evaluation problems are thus demonstrated.
The improvement in model fidelity leads to a decrease in computational efficiency. The simulations of the three model granularities were executed on a computer with Windows 10 and an Intel i7-9700 @ 3.0 GHz CPU. The time taken to perform a single simulation under the three granularities was recorded, as shown in Figure 10b. From the statistical data, it is noticeable that the time consumed by granularity 3 is 1.67 times longer than that of granularity 1 and 1.21 times longer than that of granularity 2.

6.2. Sample Creation

A large quantity of sample data is required both for screening the indicator system and for training the neural network. In accordance with the stakeholders’ requirements, sample inputs are randomly scattered over the mission and environmental parameters. The ranges of the sample input task parameters are shown in Table 6.
We randomly generated 1000 samples as the inputs. For each sample, a set of 20 random simulation parameter combinations was created. The parameters included the position, the velocity between 0 and 10 m/s, and the course angle between 0° and 360°. After 1000 × 20 = 20,000 simulations, 1000 samples were produced. Subsequently, the effectiveness indicator calculations were carried out on the samples. The first ten samples are shown in Table 7.

6.3. Evaluation Indicator Screening

The initial evaluation indicator system of the moving target observation is
$Z = [z_{1}, \ldots, z_{n}]^{T} = [\bar{P}_{Discovery}, \bar{T}_{Discovery}, \bar{P}_{Identification}, \bar{T}_{Identification}, \bar{A}_{Tracking}, \bar{T}_{Tracking}]^{T}.$
We computed the correlation coefficients between the indicators. The correlation matrix A is presented below.
$A = \begin{bmatrix} 1 & -0.8059 & 0.3061 & 0.0346 & 0.3439 & 0.2159 \\ -0.8059 & 1 & 0.2010 & 0.2326 & 0.3792 & 0.4441 \\ 0.3061 & 0.2010 & 1 & 0.6342 & 0.1377 & 0.0857 \\ 0.0346 & 0.2326 & 0.6342 & 1 & 0.3775 & 0.2925 \\ 0.3439 & 0.3792 & 0.1377 & 0.3775 & 1 & 0.3685 \\ 0.2159 & 0.4441 & 0.0857 & 0.2925 & 0.3685 & 1 \end{bmatrix}$
The correlation coefficient of the discovery probability $\bar{P}_{Discovery}$ and the discovery response time $\bar{T}_{Discovery}$ is −0.8059, indicating a strong linear correlation between these two indicators. The sum of the correlation coefficients between $\bar{P}_{Discovery}$ and the other indicators is
$C_{1} = 0.3061 + 0.0346 + 0.3439 + 0.2159 = 0.9005.$
The sum of the correlation coefficients between $\bar{T}_{Discovery}$ and the other indicators is
$C_{2} = 0.2010 + 0.2326 + 0.3792 + 0.4441 = 1.2569.$
From the comparison of $C_{1}$ and $C_{2}$, the discovery response time $\bar{T}_{Discovery}$, which has the larger value, is removed.
Likewise, the correlation coefficient of the identification probability $\bar{P}_{Identification}$ and the identification response time $\bar{T}_{Identification}$ is 0.6342, which means that these two indicators are strongly correlated. The sum of the correlation coefficients between $\bar{P}_{Identification}$ and the other remaining indicators is
$C_{3} = 0.3061 + 0.1377 + 0.0857 = 0.5295.$
The sum of the correlation coefficients between $\bar{T}_{Identification}$ and the other remaining indicators is
$C_{4} = 0.0346 + 0.3775 + 0.2925 = 0.7046.$
From the comparison of $C_{3}$ and $C_{4}$, the identification response time $\bar{T}_{Identification}$ is deleted.
As a result, the evaluation indicator system contains four indicators: the discovery probability $\bar{P}_{Discovery}$, the identification probability $\bar{P}_{Identification}$, the tracking time percentage $\bar{A}_{Tracking}$, and the average tracking interval $\bar{T}_{Tracking}$.

6.4. Neural Network Training

A total of 1000 samples were randomly shuffled and divided into a training set of 800 samples and a test set of 200 samples. A single hidden layer network structure was used in the traversal of activation functions mentioned in Section 5.2. Additionally, the neuron number of the layer ranged from 20 to 100. After training, the ten optimal networks are shown in Table 8.
The five best activation functions were found, which are softmax, poslin, satlin, tansig, and logsig. Then, the multi-layer network structure and parameters were tested, including the number of hidden layers (2/3/4), the number of neurons in each layer (20–200), and the five best activation functions. The best neural network of each effectiveness indicator is summarized in Table 9.
The training and test results are illustrated in Figure 11, Figure 12, Figure 13 and Figure 14. The first panel demonstrates the training convergence process of the best neural network. It is obvious that only the neural network for discovery probability has an MSE value of above 0.02, while the MSE values for the other three neural networks are all below 0.01, demonstrating the efficacy of neural network training. The second panel shows the evaluation accuracy of the ten optimal neural networks compared to the simulation evaluation samples. The third panel indicates the performance of the networks on the test set. The results show that the majority of the errors are below 10% and the maximum error is below 20%. Therefore, the validity and generalization ability of the proposed neural network model are verified.
The time consumption of the effectiveness evaluation generated by the simulation method and by the neural network model is shown in Figure 15. A single simulation takes about 0.404 h on average, so obtaining one sample through 20 simulations takes roughly 8.07 h. By comparison, the network can output the evaluation indicators in real time.

7. Discussion and Conclusions

Remote sensing satellite clusters are usually composed of wide-swath satellites and high-resolution satellites. They are playing an increasingly important role in the remote sensing field owing to their ability to complete census and detailed survey tasks for various targets. Due to the high cost of development and operation, remote sensing satellite clusters require quantitative effectiveness evaluation to support decision making throughout their entire life cycle. The effectiveness evaluation is usually based on simulation, but there is a conflict between accuracy and speed. Thus, the main goal of this paper was to present an architecture that achieves real-time, high-quality effectiveness evaluation of remote sensing satellite clusters. The significant advantages of the architecture are as follows:
  • The simulation model in the architecture is multi-physical-field coupled. It accounts for on-board resource constraints as well as the effect of imaging uncertainty on the observation results. Compared with the proposed model granularity, the traditional coarse-grained models have a maximum error of more than 60%, which confirms the necessity of the proposed granularity.
  • A moving target observation effectiveness indicator system of the satellite clusters is established. In comparison with current indicator systems that contain accessibility indicators, such as coverage and resolution, our proposed indicators better reflect the effectiveness of the operation process. Moreover, we screened the indicator system through model-based quantitative analysis. This not only reduces the redundancy of the indicator system developed by the domain experts, but also increases the indicator system’s hierarchy and independence.
  • The neural network model can be trained in the architecture to evaluate the effectiveness with real-time computation. The architecture supports finding the best network structure and parameters, including the number of hidden layers, as well as the number of neurons and the activation function in each layer. The results indicate that the neural network model not only achieves high accuracy on the training set, but also generalizes well on the test set. The conflict between accuracy and speed is therefore resolved.
The suggested method is applicable not only to the effectiveness evaluation of remote sensing satellite clusters, but also to single satellites and other types of satellites. However, our method has no obvious advantage over a coarse-grained model when only low accuracy is required; in that case, a coarse-grained model is preferable because of its high computational efficiency, whereas the sample creation method described here demands considerable computational resources.
Future research will be undertaken in two aspects: increasing the sample size and improving the sample quality. This could be useful in reducing the computational resources needed for neural network training and enhancing the accuracy of the neural network. Regarding the sample size, due to the limitation of computing resources, only 1000 samples were created for our network training process. The error of the networks might be reduced if more training samples were generated. In the future, the simulation model can be further optimized to acquire as many samples as possible with the same computational resources. For example, the relationship between effectiveness and model granularity can be thoroughly investigated, so as to refine the granularity of the important parts and reduce the granularity of the less important ones. In terms of sample quality, the noise in the samples causes errors in the network evaluation results. Improving the training effect by preprocessing the samples is a worthwhile direction for future research.

Author Contributions

Conceptualization, Y.D. and Z.L.; methodology, Z.L. and Y.D.; software, Z.L. and P.L.; validation, Z.L. and H.L.; formal analysis, Z.L. and P.L.; investigation, Z.L. and H.L.; resources, Z.L. and H.L.; data curation, Z.L. and P.L.; writing, Z.L. and Y.L.; supervision, Y.D. and H.L.; project administration, Z.L.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

This work was partially supported by the Key Laboratory of Spacecraft Design Optimization and Dynamic Simulation Technologies, Ministry of Education.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, X.; Dong, Y.; Xie, S. Multi-Granularity Mission Negotiation for a Decentralized Remote Sensing Satellite Cluster. Remote Sens. 2020, 12, 3595.
  2. Lu, C.; Ren, F. Study on Autonomous Mission Management Method for Remote Sensing Satellites. In Proceedings of the International Conference on Wireless and Satellite Systems, Harbin, China, 12–13 January 2019; pp. 401–411.
  3. Xiao, Y.; Zhang, S.; Yang, P.; You, M.; Huang, J. A two-stage flow-shop scheme for the multi-satellite observation and data-downlink scheduling problem considering weather uncertainties. Reliab. Eng. Syst. Saf. 2019, 188, 263–275.
  4. Weaver, O.A.; Kerekes, J.P. The role of large constellations of small satellites in emergency response situations. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 4200–4203.
  5. Wang, Y.; Wang, T.; Zhang, G.; Cheng, Q.; Wu, J.Q. Small target tracking in satellite videos using background compensation. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7010–7021.
  6. Zhang, S.; Xiao, Y.; Yang, P.; Liu, Y.; Chang, W.; Zhou, S. An Effectiveness Evaluation Model for Satellite Observation and Data-Downlink Scheduling Considering Weather Uncertainties. Remote Sens. 2019, 11, 1621.
  7. MacCalman, A.; Kwak, H.; McDonald, M.; Upton, S. Capturing experimental design insights in support of the model-based system engineering approach. Procedia Comput. Sci. 2015, 44, 315–324.
  8. Henderson, K.; Salado, A. Value and benefits of model-based systems engineering (MBSE): Evidence from the literature. Syst. Eng. 2021, 24, 51–66.
  9. Li, H.; Li, D.; Li, Y. A multi-index assessment method for evaluating coverage effectiveness of remote sensing satellite. Chin. J. Aeronaut. 2018, 31, 2023–2033.
  10. Zhao, Z.; Huang, K.; Ni, Z. Model and arithmetic of effectiveness equivalence of weapon system. J. Syst. Simul. 2008, 5, 1103–1106.
  11. Xiong, X.; Chiang, K.; Sun, J.; Barnes, W.L.; Guenther, B.; Salomonson, V.V. NASA EOS Terra and Aqua MODIS on-orbit performance. Adv. Space Res. 2009, 43, 413–422.
  12. Zheng, Z.; Li, Q.; Fu, K. Evaluation Model of Remote Sensing Satellites Cooperative Observation Capability. Remote Sens. 2021, 13, 1717.
  13. Chen, S.; Wang, Q. Research on evaluation method of target characteristics detection capability of optical imaging satellite. In Proceedings of the AOPC 2019: Space Optics, Telescopes, and Instrumentation, International Society for Optics and Photonics, Beijing, China, 7–9 July 2019; Volume 11341, p. 113411T.
  14. Liu, C.; Lei, X.; Zhu, G. Effectiveness evaluation for earth observation satellite system based on analytic hierarchy process and ADC model. In Proceedings of the 31st Chinese Control Conference, Hefei, China, 25–27 July 2012; pp. 2851–2854.
  15. Tang, Y.; Yu, X. The effectiveness evaluation of nano-satellites used in military operations. In Proceedings of the 2013 International Conference on Mechatronic Sciences, Electric Engineering and Computer (MEC), Shenyang, China, 20–22 December 2013; pp. 3012–3016.
  16. Madni, A.M.; Sievers, M. Model-based systems engineering: Motivation, current status, and research opportunities. Syst. Eng. 2018, 21, 172–190.
  17. White, C.J.; Mesmer, B.L. Research needs in systems engineering: Report from a University of Alabama in Huntsville workshop. Syst. Eng. 2020, 23, 154–164.
  18. Yao, S.; Chang, X.; Cheng, Y.; Jin, S.; Zuo, D. Detection of moving ships in sequences of remote sensing images. ISPRS Int. J. Geo-Inf. 2017, 6, 334.
  19. Li, J.; Dong, Y.; Xu, M.; Li, H. Genetic Programming Method for Satellite System Topology and Parameter Optimization. Int. J. Aerosp. Eng. 2020, 2020, 6673848.
  20. Liu, F. Research on Evaluation Method for Mission Effectiveness of Remote Sensing Satellite System; National Space Science Center, CAS: Beijing, China, 2017. (In Chinese)
  21. Tahkola, M.; Keränen, J.; Sedov, D.; Far, M.F.; Kortelainen, J. Surrogate modeling of electrical machine torque using artificial neural networks. IEEE Access 2020, 8, 220027–220045.
  22. Tao, J.; Sun, G. Application of deep learning based multi-fidelity surrogate model to robust aerodynamic design optimization. Aerosp. Sci. Technol. 2019, 92, 722–737.
  23. Gulfidan, G.; Beklen, H.; Arga, K.Y. Artificial Intelligence as Accelerator for Genomic Medicine and Planetary Health. OMICS 2021, 25, 745–749.
  24. Li, H.; Gao, Q.; Dong, Y.; Deng, Y. Spacecraft Relative Trajectory Planning Based on Meta Learning. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 3118–3131.
  25. Li, H.; Dong, Y.; Li, P. Real-Time Optimal Approach and Capture of ENVISAT Based on Neural Networks. Int. J. Aerosp. Eng. 2020, 2020, 8165147.
  26. Li, P.; Dong, Y.; Li, H. Staring Imaging Real-Time Optimal Control Based on Neural Network. Int. J. Aerosp. Eng. 2020, 2020, 8822223.
  27. Driggers, R.G.; Cox, P.G.; Kelley, M. National imagery interpretation rating system and the probabilities of detection, recognition, and identification. Opt. Eng. 1997, 36, 1952–1959.
  28. Cao, Y.; Feng, S.; Guan, Q.; Zhang, Y.; Wang, Z.; Bai, H. Modeling and Simulation of Spacecraft Military Applications; National Defence Industry Press: Beijing, China, 2010; pp. 87–93. (In Chinese)
Figure 1. The scene of the satellite cluster observing the moving targets.
Figure 2. The real-time satellite cluster effectiveness evaluation model architecture.
Figure 3. Multi-physical field coupling model of the satellite cluster.
Figure 4. High-resolution satellites mission allocation model.
Figure 5. Regional grid target of the discovery task.
Figure 6. Effectiveness evaluation indicator system.
Figure 7. The training process of the neural network.
Figure 8. The satellite cluster configuration.
Figure 9. Moving targets' observation missions.
Figure 10. The comparison results of the effectiveness evaluation and time consumption for different simulation model granularities. (a) Evaluation results comparison of the three different simulation model granularities. (b) Time consumption comparison of the three different simulation model granularities.
Figure 11. Training and test results of the discovery probability. (a) Training convergence process of the best neural network. (b) Errors of the ten best networks on the training set. (c) Errors of the ten best networks on the test set.
Figure 12. Training and test results of the identification probability. (a) Training convergence process of the best neural network. (b) Errors of the ten best networks on the training set. (c) Errors of the ten best networks on the test set.
Figure 13. Training and test results of the tracking time percentage. (a) Training convergence process of the best neural network. (b) Errors of the ten best networks on the training set. (c) Errors of the ten best networks on the test set.
Figure 14. Training and test results of the average tracking interval. (a) Training convergence process of the best neural network. (b) Errors of the ten best networks on the training set. (c) Errors of the ten best networks on the test set.
Figure 15. Time consumption of the simulation method and the neural network model.
Table 1. Parameters of the satellite cluster.

| Category | Parameter | Value |
|---|---|---|
| Orbit parameter | Satellite number | 100 |
| | Orbital plane number | 10 |
| | Phase factor | 1 |
| | Orbit altitude (km) | 650 |
| | Eccentricity | 0 |
| | Inclination (°) | 98 |
| Attitude parameter | Maximum maneuver angle (°) | ±45 |
| | Maximum maneuver angular velocity (°/s) | 1 |
| | Maximum maneuver angular acceleration (°/s²) | 0.1 |
| | Attitude stabilization time (s) | 10 |
| Power parameter | Solar array output current (A) | 15 |
| | Average power in normal mode (W) | 200 |
| | Average power in attitude maneuver mode (W) | 450 |
| | Average power in imaging mode (W) | 400 |
| | Average power in data transmission mode (W) | 270 |
| Payload parameter | Payload type | Optical |
| | Imaging width of wide-swath satellites (km) | 113.74 |
| | Ground resolution of wide-swath satellites (m) | 9.48 |
| | Imaging width of high-resolution satellites (km) | 22.69 |
| | Ground resolution of high-resolution satellites (m) | 1.89 |
| | Payload data rate (Gbps) | 22.69 |
| | Solid-state memory capacity (GB) | 500 |
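The orbit rows of Table 1 (100 satellites, 10 planes, phase factor 1, 650 km altitude, e = 0, i = 98°) fix the cluster geometry up to the plane and phase assignment. As an illustrative sketch only — it assumes an evenly spaced Walker-style layout, which the table does not state explicitly — the per-satellite RAAN and argument-of-latitude offsets could be generated as follows; `OrbitConfig` and `walker_elements` are hypothetical names introduced here, not part of the paper's model.

```python
from dataclasses import dataclass

@dataclass
class OrbitConfig:
    """Cluster-level orbit parameters taken from Table 1."""
    n_sats: int = 100          # total satellites
    n_planes: int = 10         # orbital planes
    phase_factor: int = 1      # relative phasing between adjacent planes
    altitude_km: float = 650.0
    eccentricity: float = 0.0
    inclination_deg: float = 98.0

def walker_elements(cfg: OrbitConfig):
    """Hypothetical helper: evenly spaced Walker-style RAAN / phase angles.

    Returns (plane_index, sat_index, raan_deg, arg_lat_deg) tuples under the
    assumption of a uniform Walker-delta layout.
    """
    sats_per_plane = cfg.n_sats // cfg.n_planes
    elements = []
    for p in range(cfg.n_planes):
        raan = 360.0 * p / cfg.n_planes
        for s in range(sats_per_plane):
            # In-plane spacing plus the inter-plane shift set by the phase factor.
            arg_lat = (360.0 * s / sats_per_plane
                       + 360.0 * cfg.phase_factor * p / cfg.n_sats) % 360.0
            elements.append((p, s, raan, arg_lat))
    return elements

if __name__ == "__main__":
    print(walker_elements(OrbitConfig())[:3])
    # e.g. [(0, 0, 0.0, 0.0), (0, 1, 0.0, 36.0), (0, 2, 0.0, 72.0)]
```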
Table 2. Environmental parameters for the model comparison.

| Parameter | Symbol | Value |
|---|---|---|
| Simulation start time (day) | t_0 | 1 |
| Longitude of initial area center (°) | Long | 139.26 |
| Latitude of initial area center (°) | Lat | 16.44 |
| Cloud level | N_c | 4 |
| Visibility level | N_v | 4 |
| Target number | N_Target | 10 |
Table 3. The initial information of ten arbitrary ships.

| Target ID | Position | Velocity (m/s) | Course Angle (°) |
|---|---|---|---|
| 1 | 139.98° E, 17.83° N | 3.34 | 36.49 |
| 2 | 139.53° E, 17.08° N | 6.82 | 254.03 |
| 3 | 139.74° E, 16.33° N | 6.79 | 160.27 |
| 4 | 140.25° E, 16.67° N | 6.59 | 255.95 |
| 5 | 138.31° E, 17.63° N | 1.97 | 314.51 |
| 6 | 140.00° E, 17.54° N | 1.39 | 350.93 |
| 7 | 138.86° E, 17.62° N | 8.42 | 44.62 |
| 8 | 139.84° E, 17.37° N | 3.75 | 68.55 |
| 9 | 139.66° E, 17.40° N | 3.88 | 182.86 |
| 10 | 139.32° E, 17.91° N | 2.44 | 55.22 |
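Each row of Table 3 gives an initial longitude/latitude, a speed, and a course angle, which is all that is needed to dead-reckon a ship forward in time. The snippet below is a simplified illustration only (constant speed and course, course assumed to be measured clockwise from true north, local flat-Earth approximation), not the target motion model used in the paper; `propagate_ship` is a hypothetical helper.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius

def propagate_ship(lon_deg, lat_deg, speed_mps, course_deg, dt_s):
    """Dead-reckon a ship position forward by dt_s seconds.

    Assumes constant speed and course, with course measured clockwise from
    north, and uses a local flat-Earth approximation that is adequate for the
    few-kilometre displacements implied by the speeds in Table 3.
    """
    d = speed_mps * dt_s                          # distance travelled (m)
    dn = d * math.cos(math.radians(course_deg))   # northward component
    de = d * math.sin(math.radians(course_deg))   # eastward component
    dlat = math.degrees(dn / EARTH_RADIUS_M)
    dlon = math.degrees(de / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lon_deg + dlon, lat_deg + dlat

# Example: ship 1 from Table 3 after one hour.
print(propagate_ship(139.98, 17.83, 3.34, 36.49, 3600.0))
```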
Table 4. Comparison of the three different simulation model granularities (columns: Granularity ID, Orbit, Attitude, Power, Data Transmission, Payload Coverage, Detection Probability; rows list granularities 1–3, with one model component of granularity 2 marked "Partially").
Table 5. Evaluation results of the three different granularities.

| Granularity ID | P̄_Discovery | T̄_Discovery (s) | P̄_Identification | T̄_Identification (s) | Ā_Tracking | T̄_Tracking (s) |
|---|---|---|---|---|---|---|
| 1 | 100.00% | 2453.20 | 100.00% | 574.90 | 16.89% | 691.02 |
| 2 | 91.50% | 2720.13 | 92.15% | 1711.13 | 3.14% | 2697.77 |
| 3 | 90.50% | 2958.47 | 79.43% | 2719.96 | 1.92% | 3072.04 |
Table 6. The range of the sample input task parameters.

| Parameter | Symbol | Value Range |
|---|---|---|
| Simulation start time (day) | t_0 | [1, 365] |
| Longitude of initial area center (°) | Long | [130, 150] |
| Latitude of initial area center (°) | Lat | [10, 30] |
| Cloud level | N_c | [1, 6] |
| Visibility level | N_v | [1, 6] |
| Target number | N_Target | [5, 20] |
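The ranges in Table 6 define the input space from which training samples such as those in Table 7 are drawn. As a minimal illustration — assuming plain independent uniform sampling, whereas the paper's actual sampling plan is not restated here — task-parameter vectors could be generated as follows:

```python
import random

# Ranges taken from Table 6; integer-valued parameters are sampled as integers.
RANGES = {
    "t0_day":     (1, 365),
    "long_deg":   (130.0, 150.0),
    "lat_deg":    (10.0, 30.0),
    "cloud":      (1, 6),
    "visibility": (1, 6),
    "n_targets":  (5, 20),
}

def sample_task(rng: random.Random) -> dict:
    """Draw one task-parameter vector uniformly within the Table 6 ranges."""
    return {
        "t0_day":     rng.randint(*RANGES["t0_day"]),
        "long_deg":   rng.uniform(*RANGES["long_deg"]),
        "lat_deg":    rng.uniform(*RANGES["lat_deg"]),
        "cloud":      rng.randint(*RANGES["cloud"]),
        "visibility": rng.randint(*RANGES["visibility"]),
        "n_targets":  rng.randint(*RANGES["n_targets"]),
    }

rng = random.Random(42)
for sample in (sample_task(rng) for _ in range(5)):
    print(sample)
```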
Table 7. The input and output of the first ten samples. The six columns after the sample ID are the sample inputs; the remaining six columns are the sample outputs.

| Sample ID | t_0 (day) | Long (°) | Lat (°) | N_c | N_v | N_Target | P̄_Discovery | T̄_Discovery (s) | P̄_Identification | T̄_Identification (s) | Ā_Tracking | T̄_Tracking (s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 293 | 139.31 | 17.14 | 2 | 6 | 10 | 71.67% | 4449.52 | 58.18% | 4212.33 | 0.86% | 3731.66 |
| 2 | 296 | 141.87 | 24.55 | 1 | 3 | 11 | 52.27% | 6100.98 | 63.36% | 2390.60 | 2.32% | 2508.34 |
| 3 | 355 | 134.53 | 18.91 | 3 | 2 | 8 | 100.00% | 5562.00 | 83.18% | 1842.85 | 1.68% | 3070.28 |
| 4 | 284 | 132.30 | 21.73 | 5 | 5 | 17 | 63.73% | 6325.37 | 42.28% | 3463.24 | 0.46% | 4046.58 |
| 5 | 319 | 148.87 | 14.88 | 5 | 4 | 20 | 84.58% | 3592.70 | 51.87% | 3382.25 | 0.79% | 4615.19 |
| 6 | 355 | 137.12 | 22.54 | 3 | 1 | 6 | 93.05% | 5599.29 | 100.00% | 1491.61 | 2.09% | 3026.74 |
| 7 | 163 | 142.18 | 14.39 | 1 | 2 | 7 | 100.00% | 5388.48 | 100.00% | 777.75 | 2.54% | 3027.65 |
| 8 | 141 | 137.75 | 20.09 | 4 | 6 | 15 | 28.89% | 7854.62 | 43.04% | 3196.58 | 0.65% | 2904.51 |
| 9 | 128 | 145.41 | 19.13 | 1 | 4 | 10 | 95.00% | 3522.58 | 81.77% | 2095.00 | 1.69% | 3697.10 |
| 10 | 237 | 139.67 | 17.64 | 6 | 3 | 12 | 0.00% | 8956.60 | 57.84% | 1831.08 | 1.02% | 2522.32 |
Table 8. The training result of the activation function traversal.

| Network ID | P̄_Discovery Activation | Neurons | P̄_Identification Activation | Neurons | Ā_Tracking Activation | Neurons | T̄_Tracking Activation | Neurons |
|---|---|---|---|---|---|---|---|---|
| 1 | softmax | 40 | softmax | 20 | softmax | 80 | softmax | 20 |
| 2 | elliotsig | 20 | softmax | 40 | softmax | 40 | tansig | 40 |
| 3 | satlin | 40 | softmax | 80 | softmax | 100 | softmax | 60 |
| 4 | poslin | 20 | softmax | 100 | softmax | 60 | logsig | 20 |
| 5 | softmax | 80 | logsig | 20 | satlins | 20 | satlins | 20 |
| 6 | purelin | 80 | logsig | 40 | softmax | 20 | logsig | 60 |
| 7 | purelin | 40 | softmax | 60 | logsig | 40 | softmax | 40 |
| 8 | purelin | 60 | satlin | 20 | purelin | 20 | softmax | 100 |
| 9 | logsig | 40 | tansig | 60 | purelin | 40 | elliotsig | 20 |
| 10 | purelin | 20 | satlins | 20 | purelin | 60 | poslin | 20 |
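The transfer-function names in Tables 8 and 9 (tansig, logsig, purelin, satlin, poslin, satlins, elliotsig, softmax) follow MATLAB Deep Learning Toolbox naming, which suggests the traversal was run with that toolbox. Purely as a non-authoritative sketch of the same idea in Python — using scikit-learn's MLPRegressor, which only offers close equivalents of four of those activations, and synthetic data standing in for the Table 7 samples — a single-hidden-layer traversal that keeps the ten best configurations could look like this:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic placeholder data: 6 normalized task parameters -> 1 indicator.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))
y = X @ rng.uniform(size=6) + 0.05 * rng.normal(size=500)

# Rough scikit-learn counterparts of tansig, logsig, purelin, poslin.
ACTIVATIONS = ["tanh", "logistic", "identity", "relu"]
NEURONS = [20, 40, 60, 80, 100]

results = []
for act in ACTIVATIONS:
    for n in NEURONS:
        net = MLPRegressor(hidden_layer_sizes=(n,), activation=act,
                           max_iter=2000, random_state=0)
        net.fit(X[:400], y[:400])
        mse = mean_squared_error(y[400:], net.predict(X[400:]))
        results.append((mse, act, n))

# Keep the ten best configurations, mirroring the layout of Table 8.
for mse, act, n in sorted(results)[:10]:
    print(f"{act:>8s}  {n:3d} neurons  MSE={mse:.4f}")
```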
Table 9. The best neural network of each effectiveness indicator.

| Network | Hidden Layer Number | Activation Function | Neuron Number | MSE |
|---|---|---|---|---|
| P̄_Discovery | 3 | [softmax, softmax, softmax] | [100, 100, 100] | 0.0240 |
| P̄_Identification | 2 | [softmax, softmax] | [160, 160] | 0.0027 |
| Ā_Tracking | 4 | [softmax, softmax, softmax, softmax] | [200, 200, 200, 200] | 0.0091 |
| T̄_Tracking | 4 | [softmax, softmax, softmax, softmax] | [60, 60, 60, 60] | 0.0017 |
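Table 9 lists the network finally selected for each indicator; for the identification probability, the selected surrogate has two hidden layers of 160 units, each with a softmax activation. The PyTorch sketch below reproduces only that topology and one MSE training step. It is a reconstruction under stated assumptions (six normalized task-parameter inputs as in Tables 6 and 7, one scalar output per indicator, softmax applied across each hidden layer's units), not the authors' implementation.

```python
import torch
from torch import nn

class IdentificationSurrogate(nn.Module):
    """Two hidden layers of 160 softmax units, per the P̄_Identification row of Table 9."""
    def __init__(self, n_inputs: int = 6, hidden: int = 160):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden), nn.Softmax(dim=-1),  # hidden layer 1
            nn.Linear(hidden, hidden),   nn.Softmax(dim=-1),  # hidden layer 2
            nn.Linear(hidden, 1),                              # predicted indicator
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = IdentificationSurrogate()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy gradient step on random data, standing in for the Table 7 samples.
x = torch.rand(32, 6)
y = torch.rand(32, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.4f}")
```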
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
