Article

Space Target Material Identification Based on Graph Convolutional Neural Network

1 Institute of Artificial Intelligence, Beihang University, Beijing 100091, China
2 Key Laboratory of Precision Opto-Mechatronics Technology, Ministry of Education, Beijing 100091, China
3 School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
4 Changguang Satellite Technology Co., Ltd., Changchun 130000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(7), 1937; https://doi.org/10.3390/rs15071937
Submission received: 25 February 2023 / Revised: 31 March 2023 / Accepted: 2 April 2023 / Published: 4 April 2023

Abstract

Under complex illumination conditions, the spectral data distributions of a given material appear inconsistent in the hyperspectral images of the space target, making it difficult to achieve accurate material identification using only spectral features and local spatial features. Aiming at this problem, a material identification method based on an improved graph convolutional neural network is proposed. Superpixel segmentation is conducted on the hyperspectral images to build the multiscale joint topological graph of the space target global structure. Based on this, topological graphs containing the global spatial features and spectral features of each pixel are generated, and the pixel neighborhoods containing the local spatial features and spectral features are collected to form material identification datasets that include both global and local information. Then, the graph convolutional neural network (GCN) and the three-dimensional convolutional neural network (3-D CNN) are combined into one model using strategies of addition, element-wise multiplication, or concatenation, and the model is trained on the datasets to fuse and learn the three features. For both the simulated data and the measured data, the overall accuracy of the proposed method can be kept at 85–90%, and the kappa coefficients remain around 0.8. This shows that the proposed method can improve the material identification performance under complex illumination conditions with high accuracy and strong robustness.

Graphical Abstract

1. Introduction

Space target recognition is of great significance in the protection of the space environment as well as in the safe and sustainable development and utilization of space resources [1]. In a variety of space observation methods, spectra contain a large amount of information in the wavelength dimension, and they can provide a reliable basis for the identification of the space target surface materials, thereby meeting the needs for space security and determining space target behavior intention accurately [2,3]. Therefore, the development of spectra-based space target material identification research is an important topic in space target recognition.
Researchers hoped to identify materials by differences in spectral reflectance and have developed a series of methods based on spectral features, including comparative analysis, statistical learning, and deep learning. Comparative analysis methods design specific identification procedures by directly comparing the features of the measurements with prior information. Abercromby et al. [4] used a short-wave infrared telescope to acquire spectral observations of two satellites and found an absorption characteristic that coincided with the absorption characteristic of the C-H bond in solar cell material; the authors inferred that the two satellites had solar cells. Vananti et al. [5] proposed a preliminary classification using three different classes based purely on the shape and appearance of the spectra.
Statistical learning methods refer to the establishment of probabilistic and statistical models based on large amounts of data, which are then utilized to predict and analyze the data. Abercromby [6] and Deng [2] measured the visible spectra of geosynchronous orbit targets and made use of the partial least squares (PLS) method for material identification with high accuracy. Nie [7] proposed a method based on Tucker decomposition (TD). Velez-Reyes et al. [8,9] applied SVD-based column subset selection (SVDSS) and constrained non-negative matrix factorization (CNMF) to identify simulated models.
Among the statistical learning methods, deep learning methods can mine the deep correlation of data and are suitable for processing massive hyperspectral data. Li et al. [10] applied the extreme learning machine (ELM), and Liu [11] applied the back-propagation neural network (BPNN); they measured the hyperspectral curves of the material samples under the solar simulator illumination in the experimental darkroom and demonstrated the good performances of these methods. Then, as the convolutional neural network (CNN) showed advantages in feature extraction, researchers turned their attention to the CNN [12] and proposed a variety of CNN-based identification methods. Deng et al. [13] and Gazak et al. [14] adopted a one-dimensional (1-D) CNN to deal with 1-D measured data, in which only spectral features played a role.
However, with the development of research on the influencing factors of material identification [15,16,17,18], it has been found that the applicability domain of methods using only spectral features may be narrow. These methods require consistent spectral features of the same material, but, because the space target materials are non-Lambertian objects, the spectra of the same material can become different when the target is exposed to complex illumination conditions consisting of sunlight and earthshine [19]. Many researchers have experimentally proved this phenomenon and discussed its properties. Bédard et al. [20] measured the photometric and spectral bidirectional reflectance distribution function (BRDF) of small Canadian satellites in a controlled environment and found that the spectral reflectance varied rapidly even with small changes in the light–object–sensor geometry. Then, Bédard et al. [21] measured the red/blue color ratios of geostationary satellites and proposed that spectral reflectance was dependent on the angle of the incident sunlight, the angle of measuring, and the inherent characteristics of the material. Augustine et al. [22] collected glint spectra of the geostationary satellite WildBlue-1 and found obvious differences in the relative intensities and profiles in the glint spectra between the two nights of collection. This supported the point that solar illumination angles played a major role in observed spectral features.
To solve this problem, the researchers subsequently proposed the use of a two-dimensional (2-D) CNN and a three-dimensional (3-D) CNN to extract spatial–spectral features of the space target for identification. Perez et al. [23] and Chen et al. [24] obtained hyperspectral images with obvious spatial features through semi-physical simulations or computer simulations and extracted the spatial–spectral features using Residual Network (ResNet), a regional-based CNN (R-CNN), and other CNN-based models. These methods reached high accuracies, but the spatial features applied are local and sensitive to imaging conditions, which may lead to poor performance in other application scenarios with different illumination conditions.
Therefore, to increase the accuracy and robustness of material identification, a method based on the improved graph convolutional neural network is proposed for the material identification problem under complex illumination conditions. The main contributions of the proposed method are presented as follows:
  • The global spatial features of the space target are introduced, and they refer to the position of a pixel in the global structure of the space target and the connection relationship with each component. The global spatial features are strongly associated with the materials.
  • A multiscale, joint global-structure topological-graph-building method is designed. The topological graphs generated from the superpixel segmentation results at different scales are joined together, highlighting the size differences between different components and improving the identification accuracy of materials on the small components.
  • The graph convolutional neural network (GCN) is introduced to learn the global spatial–spectral features of the space target and combined with a 3-D CNN that can learn local spatial–spectral features. They work together to improve the identification performance under complex illumination conditions.
The rest of this article is organized as follows. Section 2 describes the related work and our motivation. Section 3 describes the details of the proposed method. Section 4 describes the experimental data, the experimental results of the proposed method, and some common methods. Section 5 discusses the results, deficiencies, and possible improvements of the proposed method. Finally, Section 6 describes the conclusion and our future work.

2. Related Work

In this section, we will describe the application scenarios of the proposed method and introduce the relevant prior works related to complex illumination conditions. Then, the current development state of GCN will be introduced.

2.1. Complex Illumination Conditions of the Space Target

Early spectroscopic measurements of the space targets were made through ground-based telescopes [2,4,5,6], and the data obtained in this way were a single spectrum, so the material identification methods can only rely on spectral features. At present, researchers are gradually transitioning from ground-based detection to space-based detection [3,23,24], and the close detection distance in space-based detection enables hyperspectral images to show relatively rich spatial features. Existing material identification methods using spatial features are usually designed under ideal imaging scenarios, in which the detector and space target are relatively stationary during the measurement of hyperspectral images (e.g., the detector is executing a hovering operation [25] on the space target with three-axis stabilization). Our work is also conducted on this premise.
Sunlight [26] and earthshine [27,28] together constitute the illumination conditions of the space target, as shown in Figure 1. As the space target materials are non-Lambertian objects, the light reflected by the surface can generally be divided into a specular reflection component and a diffuse reflection component [29]. Because the specular component of sunlight can cause an image to be overexposed and unusable, in usable images the spectrometer receives the diffuse component of sunlight from each surface of the target.
As shown in Figure 2, commonly used materials for the space target exhibit the Fresnel effect, i.e., as the incidence angle of light on the surface increases, the specular component tends to become stronger and the diffuse component weaker [30]. Therefore, when both sunlight and earthshine are incident on a surface at the same time, the greater the incidence angle of the sunlight, the more prominent the influence of the earthshine. In addition, mutual occlusion between the target components may cause spectral inconsistencies of the same material.
In summary, in various illumination conditions, affected by the volume, structure, and attitude of the space target, spectral data distributions are sensitive to the direction of sunlight in the local coordinate system of the space target, and the spectral features of the same material may be inconsistent across the surfaces with different orientations. The local coordinate system is set as shown in Figure 3, where the origin is set at the center of the space target model body.
To verify the above contents, we placed a space target scale-down model and an imaging spectrometer (440–780 nm) in our laboratory and measured the hyperspectral images of the model under different illumination conditions by adjusting the direction of solar simulation light. The results show that there are indeed inconsistencies in the material spectra under complex illumination conditions, which are specifically expressed as:
  • Under the same illumination conditions, the spectral features of the same material can be very different, e.g., the gold mylar spectra on two surfaces with different orientations shown in Figure 4.
  • Under the same illumination conditions, the spectral features of different materials can be very similar, e.g., the spectra of the solar cell, antenna, and cooling area shown in Figure 4.
  • Under different illumination conditions, the spectral features of the same material can change dramatically, e.g., the gold mylar spectra in the two images shown in Figure 5.
Because the spectral data distributions of the space target are no longer consistent, and because the local spatial features have poor robustness to changes in detection distance, illumination conditions, and the target's attitude, the spectral features and local spatial features are no longer suitable as the only basis for material identification. We regard the structural features of the space target as the connection relationships of its components, where a connection relationship between two components means that they are spatially adjacent. The distribution of different materials on the space target is closely related to these structural features. Although the global spatial features reflect the structural features to different degrees under different relative attitudes of the target, they change more slowly and more regularly with the relative attitude than the spectral features and local spatial features do. Therefore, we decide to introduce the global spatial features of the space target as a powerful basis for material identification.

2.2. Graph Convolutional Neural Network

In order to achieve accurate and stable identification of the space target materials under complex illumination conditions, we introduce the GCN to learn the global spatial features. Recently, the GCN has been widely used in hyperspectral image classification [31] and achieved great success.
In hyperspectral image classification, early deep learning methods, such as the deep belief network (DBN) [32], the recurrent neural network (RNN) [33], and the 1-D CNN [34,35], used only the spectral features to effectively improve the classification performance, and the subsequently developed 2-D CNN [36,37] and 3-D CNN [38,39,40,41,42,43] used the local spatial–spectral features. However, researchers then noticed that fixed-size convolution kernels make the extracted local spatial–spectral features lack adaptability to changing scenarios, so some researchers designed application strategies for the convolution kernels [44], while others turned to the GCN [45], hoping to solve the problem by applying global spatial features through the topological graph.
The topological graph $G$ is an irregular data structure consisting of nodes, $V$, and edges, $E$. During computing, a topological graph $G = \{A, X\}$ is represented as a set of an adjacency matrix $A \in \mathbb{R}^{N_v \times N_v}$ and a node feature matrix $X \in \mathbb{R}^{N_v \times N_f}$, where $N_v$ is the number of nodes and $N_f$ is the length of the node features. The GCN can learn from topological graphs, and the basic formulas of the GCN processing the topological graph $G = \{A, X\}$ are presented as:

$$d_{mm} = \sum_{n=0}^{N_v - 1} a_{mn} \tag{1}$$

$$X' = \sigma(\tilde{A} X W) \tag{2}$$

where $\sigma(\cdot)$ is an activation function, $Ch$ is the number of channels, $d_{mm} \in D$ and $a_{mn} \in A$, $D$ is the node degree matrix, $\tilde{A}$ is the normalized adjacency matrix, $\tilde{A} X$ is computed to propagate node features based on the node connection relationships and the normalized weights of $\tilde{A}$, and $W \in \mathbb{R}^{N_f \times Ch}$ is a trainable weight matrix.
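As a concrete reference for Equations (1) and (2), the following minimal NumPy sketch performs one propagation step; the symmetric normalization and ReLU activation are assumptions for illustration (the paper's own layer variant is given later in Equations (11) and (12)).

```python
import numpy as np

def gcn_layer(A, X, W, sigma=lambda z: np.maximum(z, 0)):
    """One GCN propagation step X' = sigma(A_norm X W), Eqs. (1)-(2).

    A : (Nv, Nv) adjacency matrix,  X : (Nv, Nf) node features,
    W : (Nf, Ch) trainable weights, sigma : activation (ReLU by default).
    """
    d = A.sum(axis=1)                                  # Eq. (1): node degrees d_mm
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-8)))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt               # normalized adjacency (symmetric variant assumed)
    return sigma(A_norm @ X @ W)                       # Eq. (2): propagate and transform

# Example: 5 nodes, 4-dimensional features, 3 output channels
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).random((5, 4))
print(gcn_layer(A, X, np.random.default_rng(1).random((4, 3))).shape)  # (5, 3)
```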
The initial solution for applying the GCN to hyperspectral images was to treat each pixel as a node [46], but this brought an excessive amount of computation. So, the researchers turned to building topological graphs through image segmentation [47,48,49,50].
In addition, researchers have proposed a number of strategies to improve the GCN-based models. For instance, different types of features have been fused by integrating different networks (e.g., CNN+GCN) [49,51,52]; multiscale features have been learned through the construction of topological graphs of different scales or the application of graph attention (GAT) [48,53]; the lack of data labeling has been addressed by self-correlated learning and self-supervised learning [54,55]; and auto-regressive moving average (ARMA) filters have been utilized to avoid over-smoothing of the GCN [56]. Appropriately adjusting the structure and strategy of the model according to the application scenarios and needs has effectively improved performance.

3. Methodology

The proposed material identification method, based on an improved graph convolutional neural network, is shown in Figure 6. First, we implement superpixel segmentation on the hyperspectral images and produce a global-structure topological graph with global spatial features. Next, the material identification datasets for each pixel are created based on the global-structure topological graphs and pixel neighborhoods, and they are divided into the training set and test sets.
Then, the GCN and 3-D CNN are combined into one model, and it is trained by the training set. The model learns the fusion features of the global spatial features, local spatial features, and spectral features to identify the material of each pixel. Finally, the test sets are used to evaluate the performance of the proposed method.

3.1. The Topological-Graph-Building Method of the Space Target Global Structure

A global-structure topological-graph-building method for the space target, based on superpixel segmentation, is designed to extract the global spatial features. First, we need to perform image segmentation, dividing pixels belonging to a given material on the same surface into a cluster. Due to the large dispersion of the space target spectral data, pixel-by-pixel segmentation methods (e.g., threshold segmentation) may be ineffective, so the superpixel segmentation method is adopted, with distance and spectral features used together as the basis for segmentation. The unsupervised segmentation algorithm Simple Linear Iterative Clustering (SLIC) [57] can obtain relatively regular superpixels, control the number of superpixels, and offers a good overall balance of efficiency and accuracy. Its simplicity and controllability facilitate the effective extraction of the global spatial features, so it is employed to perform the image segmentation step.

SLIC is executed on the hyperspectral image $I \in \mathbb{R}^{M \times N \times C}$, where $M \times N$ is the number of pixels in the image and $C$ is the number of bands. SLIC first sets an initial number of cluster centers, $K_0$, and then distributes the cluster centers evenly on the image; next, it completes the segmentation through multiple iterations to obtain $q_0$ superpixels $S_{0,i} = \{l_{0,0}, l_{0,1}, \ldots, l_{0,p_{0,i}-1}\}$, $0 \le i < q_0$. Each $S_{0,i}$ consists of $p_{0,i}$ pixels $l \in \mathbb{R}^{C \times 1}$ that are close to each other in terms of the distance $D$, where $K_0$ is the main parameter affecting $p_{0,i}$: the larger $K_0$ is, the smaller $p_{0,i}$ is.
The distance $D_{ij}$ between two pixels $l_i$ and $l_j$ in SLIC is measured by Equations (3)–(5):

$$d_l = \frac{1}{C}\left[(l_j^0 - l_i^0)^2 + (l_j^1 - l_i^1)^2 + \cdots + (l_j^{C-1} - l_i^{C-1})^2\right] \tag{3}$$

$$d_s = \frac{1}{2}\left[(x_j - x_i)^2 + (y_j - y_i)^2\right] \tag{4}$$

$$D_{ij} = \varepsilon_l d_l + \frac{2K_0}{MN} d_s \tag{5}$$

where $l^0, l^1, \ldots, l^{C-1}$ are the grayscale values of the spectral bands; the spectral distance $d_l$ is the mean squared error of $l^0, l^1, \ldots, l^{C-1}$; $x, y$ are the location coordinates of pixels in the images; the spatial distance $d_s$ is the mean squared error of $x, y$; and the distance scale factor $\varepsilon_l$ is utilized to adjust the emphasis on the spectral distance and the spatial distance. The spectral and spatial distances are both considered during segmentation.
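To make the distance measure concrete, the following is a minimal NumPy sketch of Equations (3)–(5); the function name, array layout, and the default value of the scale factor eps_l are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def slic_distance(l_i, l_j, xy_i, xy_j, K0, M, N, eps_l=1.0):
    """Spectral-spatial distance between two pixels, following Eqs. (3)-(5).

    l_i, l_j : (C,) arrays of band grayscale values
    xy_i, xy_j : (2,) arrays of pixel coordinates (x, y)
    K0 : initial number of cluster centers; M, N : image height and width
    eps_l : spectral distance scale factor (assumed value)
    """
    d_l = np.mean((l_j - l_i) ** 2)                   # Eq. (3): mean squared spectral difference
    d_s = 0.5 * np.sum((xy_j - xy_i) ** 2)            # Eq. (4): half squared spatial distance
    return eps_l * d_l + (2.0 * K0 / (M * N)) * d_s   # Eq. (5): weighted combination

# Example: two pixels from a 100x100 image with 35 bands and K0 = 50 seeds
rng = np.random.default_rng(0)
d = slic_distance(rng.random(35), rng.random(35),
                  np.array([10.0, 12.0]), np.array([11.0, 15.0]),
                  K0=50, M=100, N=100)
print(d)
```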
After segmentation, a global-structure topological graph $G_0 = \{A_0, X_0\}$ is built according to $S_{0,i}$: each $S_{0,i}$ is set as a node, and the superpixel average spectrum $\bar{l}_{0,i}$ is set as the features of the node. Then, we search for superpixels with adjacency: a line is drawn on the image to connect the center points of two superpixels, $S_{0,m}$ and $S_{0,n}$, and it will pass through multiple pixels; if the proportion of these pixels belonging to $S_{0,m}$ and $S_{0,n}$ exceeds a threshold (e.g., 0.6), the two superpixels are considered adjacent, and their nodes are connected. In this way, the spectral information of the components and the structural information of the space target are fully reflected in the global-structure topological graph $G_0$.
To achieve spatial feature extraction of small component structures, SLIC is executed twice with the initial numbers of cluster centers $K_0$ and $K_1$ ($K_1 > K_0$) to obtain the multi-level topological graphs $G_0 = \{A_0, X_0\}$ and $G_1 = \{A_1, X_1\}$. The spectral information and connection relationships of the large components are reflected in $G_0$, and those of the small components are reflected in $G_1$.

We combine $G_0$ and $G_1$; then, superpixels with an overlapping relationship are searched for in turn, and their nodes are connected. Specifically, if the cluster center $l(\bar{x}_{1,j}, \bar{y}_{1,j})$ of $S_{1,j}$ belongs to $S_{0,i}$, i.e., $l(\bar{x}_{1,j}, \bar{y}_{1,j}) \in S_{0,i}$, then $S_{1,j}$ and $S_{0,i}$ have an overlapping relationship, meaning that the small component represented by $S_{1,j}$ lies on top of the large component represented by $S_{0,i}$, and their nodes are connected; thus, the multiscale, joint global-structure topological graph $G_2 = \{A_2, X_2\}$ is generated.
The building process of the multiscale, joint global-structure topological graph is given in Algorithm 1. In this process, the complexity of building the graphs is much lower than that of creating the superpixels, so the computational complexity mainly depends on SLIC, i.e., $O(M \times N)$, which is linear in the number of pixels in $I$.
Algorithm 1 Multiscale, Joint Global-Structure Topological Graph
Input:
   (1) Dataset: the hyperspectral image $I \in \mathbb{R}^{M \times N \times C}$;
   (2) Parameters: the initial numbers of cluster centers $K_0$ and $K_1$ ($K_1 > K_0$).
Procedure:
   (1) Execute SLIC on $I$ with $K_0$ and Equations (3)–(5) and obtain superpixels $S_{0,i}$, $0 \le i < q_0$;
   (2) Build the first global-structure topological graph $G_0 = \{A_0, X_0\}$;
   (3) Execute SLIC on $I$ with $K_1$ and Equations (3)–(5) and obtain superpixels $S_{1,j}$, $0 \le j < q_1$;
   (4) Build the second global-structure topological graph $G_1 = \{A_1, X_1\}$;
   (5) Combine $G_0$ and $G_1$ to obtain $A_2$ and $X_2$;
   (6) Connect $S_{1,j}$ and $S_{0,i}$ that have an overlapping relationship;
   (7) Build the multiscale, joint global-structure topological graph $G_2 = \{A_2, X_2\}$.
Output:
   The multiscale, joint global-structure topological graph $G_2 = \{A_2, X_2\}$.
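A minimal Python sketch of Algorithm 1 is given below, assuming a recent scikit-image (its slic stands in for the spectral–spatial SLIC of Equations (3)–(5)) and a simplified adjacency test based on shared superpixel borders instead of the center-line rule with the 0.6 threshold; all function and variable names are illustrative.

```python
import numpy as np
from skimage.segmentation import slic  # assumes scikit-image >= 0.19

def build_graph(labels, image):
    """Build one global-structure topological graph from a superpixel label map.
    Returns (A, X): adjacency matrix and node features (superpixel mean spectra)."""
    n = labels.max() + 1
    X = np.stack([image[labels == k].mean(axis=0) for k in range(n)])
    A = np.zeros((n, n), dtype=np.uint8)
    # Simplified adjacency: superpixels sharing a horizontal/vertical border are connected
    # (the paper instead draws a line between superpixel centers and applies a 0.6 threshold).
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            A[a, b] = A[b, a] = 1
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            A[a, b] = A[b, a] = 1
    return A, X

def multiscale_joint_graph(image, K0=50, K1=200):
    """Algorithm 1 sketch: two SLIC scales joined by overlap relationships."""
    lab0 = slic(image, n_segments=K0, compactness=0.1, channel_axis=-1, start_label=0)
    lab1 = slic(image, n_segments=K1, compactness=0.1, channel_axis=-1, start_label=0)
    A0, X0 = build_graph(lab0, image)
    A1, X1 = build_graph(lab1, image)
    q0, q1 = X0.shape[0], X1.shape[0]
    A2 = np.zeros((q0 + q1, q0 + q1), dtype=np.uint8)
    A2[:q0, :q0], A2[q0:, q0:] = A0, A1
    # Overlap rule: connect a fine superpixel to the coarse superpixel containing its centroid
    for j in range(q1):
        ys, xs = np.nonzero(lab1 == j)
        cy, cx = int(ys.mean()), int(xs.mean())
        A2[q0 + j, lab0[cy, cx]] = A2[lab0[cy, cx], q0 + j] = 1
    X2 = np.vstack([X0, X1])
    return A2, X2

# Example on a random stand-in for a hyperspectral image (100 x 100 pixels, 35 bands)
A2, X2 = multiscale_joint_graph(np.random.default_rng(0).random((100, 100, 35)))
print(A2.shape, X2.shape)
```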
After $G_2$ is built, a topological graph $G_l = \{A_l, X_l\}$ for each pixel $l$ is generated according to $G_2$. First, $l$ is independently set as a superpixel $S_l$; next, the nodes of $S_l$ and $S_{0,i}$ are connected if they are adjacent, and the nodes of $S_l$ and $S_{1,j}$ are connected under the same condition. The topological graph $G_l = \{A_l, X_l\}$ contains the spectral information of the pixel and reflects its global spatial features, such as its connection relationships with components of different scales and its location on the space target.

Then, a fixed-size neighborhood $Ne_l \in \mathbb{R}^{d_1 \times d_2 \times C}$ of each pixel is taken, where $d_1 \times d_2$ is the size of the region centered on pixel $l$. $Ne_l$ contains the spectral information and local spatial features of the pixels. $G_l$ and $Ne_l$ of each pixel are collected to create the material identification datasets, $Ds$, which can be represented as:

$$Ds = \{G_{l_0}, Ne_{l_0}; G_{l_1}, Ne_{l_1}; \ldots; G_{l_{M \times N}}, Ne_{l_{M \times N}}\} \tag{6}$$

At last, $Ds$ is divided into training sets and test sets. Throughout the process of creating $Ds$ for $I$, the computational complexity is also $O(M \times N)$.

3.2. Identification Method Based on Fusion of GCN and 3-D CNN

The GCN is utilized to learn the spatial features of the space target global structure and is integrated with the spectral features and spatial features of the pixel neighborhood learned by the 3-D CNN, breaking the bottleneck of a single model and solving the problem of material identification difficulties caused by inconsistent distributions of spectral data. The specific structure of the identification method is shown in Figure 7. The topological graph $G_l$ and the fixed-size neighborhood $Ne_l$ of a pixel are input to the identification method, and a class confidence vector $Y^{Class} \in \mathbb{R}^{1 \times Cl}$ is output, where $Cl$ is the total number of classes, counting the background and all material classes.

In the 3-D CNN, $Ne_l$ is input to the 3-D convolutional layer $f_{C3}(\cdot)$ and the 3-D max-pooling layer $f_{MP3}(\cdot)$ repeatedly to learn the local spectral–spatial features. $N_{C3}$ convolution kernels $W_{C3,kn} \in \mathbb{R}^{L_{k1}^{C3} \times L_{k2}^{C3} \times L_{k3}^{C3}}$ ($0 \le kn < N_{C3}$) slide for convolution on $Ne_l$ in $f_{C3}(\cdot)$, and then windows slide for max pooling on its output in $f_{MP3}(\cdot)$. The formulas are as follows:

$$Ne_{kn}^{C3} = \sigma[f_{C3,kn}(Ne_l)] \tag{7}$$

$$Ne_{kn}^{P} = f_{MP3}(Ne_{kn}^{C3}) \tag{8}$$

where $\sigma(\cdot)$ is the activation function ReLU, $Ne_{kn}^{P} \in Ne^{P}$, and $Ne^{P} \in \mathbb{R}^{L_1^P \times L_2^P \times L_3^P \times N_{C3}}$. $Ne^{P}$ is then input to $f_{C3}(\cdot)$ and $f_{MP3}(\cdot)$ several more times, and $Ne^{CNN} \in \mathbb{R}^{L_1^C \times L_2^C \times L_3^C \times N_{C3}}$ is output. For the 3-D CNN, once the dimensions of the input and the parameters of the layers are determined, the number of calculations is determined. Therefore, the computational complexity of the 3-D CNN for an image $I \in \mathbb{R}^{M \times N \times C}$ is $O(M \times N)$.
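As an illustration of this branch, here is a minimal PyTorch sketch of repeated 3-D convolution and 3-D max pooling on a pixel neighborhood; the number of layers, kernel sizes, and channel counts are assumptions for illustration and are not the values listed in Table 2.

```python
import torch
import torch.nn as nn

class Cnn3dBranch(nn.Module):
    """3-D CNN branch: repeated Conv3d + ReLU + MaxPool3d on a pixel neighborhood Ne_l."""
    def __init__(self, n_kernels=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, n_kernels, kernel_size=(3, 3, 7), padding=(1, 1, 3)),  # f_C3, Eq. (7)
            nn.ReLU(),
            nn.MaxPool3d((1, 1, 2)),                                            # f_MP3, Eq. (8)
            nn.Conv3d(n_kernels, n_kernels, kernel_size=(3, 3, 7), padding=(1, 1, 3)),
            nn.ReLU(),
            nn.MaxPool3d((1, 1, 2)),
        )

    def forward(self, ne):          # ne: (batch, 1, d1, d2, C)
        return self.features(ne)    # Ne^CNN: (batch, n_kernels, d1, d2, reduced band dimension)

# Example: a 7x7 neighborhood with 35 spectral bands
out = Cnn3dBranch()(torch.randn(2, 1, 7, 7, 35))
print(out.shape)  # torch.Size([2, 8, 7, 7, 8])
```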
In the GCN, $G_l = \{A_l, X_l\}$ is taken as the input. First, $X_l = (l_0, l_1, \ldots, l_{q_0+q_1})^T \in \mathbb{R}^{(q_0+q_1+1) \times C}$ is input to the one-dimensional (1-D) convolutional layers $f_{C1}(\cdot)$ and the 1-D max-pooling layers $f_{MP1}(\cdot)$ for extraction of the spectral features and a reduction in data dimension. A convolution kernel $W_{C1} \in \mathbb{R}^{1 \times L_k^{C1}}$ slides for convolution on $l_m \in X_l$ in $f_{C1}(\cdot)$, and then a window slides for max pooling on its output in $f_{MP1}(\cdot)$. The formulas are as follows:

$$l_m^{C1} = \sigma[f_{C1}(l_m)] \tag{9}$$

$$l_m^{s} = f_{MP1}(l_m^{C1}) \tag{10}$$

where $l_m^s \in X_l^s$ and $X_l^s = (l_0^s, l_1^s, \ldots, l_{q_0+q_1}^s)^T \in \mathbb{R}^{(q_0+q_1+1) \times L_s}$. Through $f_{C1}(\cdot)$ and $f_{MP1}(\cdot)$, the spectral features are extracted, and the number of elements in the node feature matrix $X_l^s$ is reduced to $L_s/C$ of the original $X_l$. Meanwhile, the number of weights and the computational complexity of the next layer decrease by the same proportion.

Next, $X_l^s$ and $A_l$ are input to the graph convolutional layer $f_G(\cdot)$ repeatedly to learn the global spectral–spatial features in the topological graph $G_l$. The calculation formulas for $f_G(\cdot)$ are expressed as:

$$\tilde{A} = D^{-1/2} A_l D^{-1/2} \tag{11}$$

$$X' = \sigma(\tilde{A} X_l^s W_1 + X_l^s W_2 + b) \tag{12}$$

where $Ch$ is the number of channels, $D$ is the node degree matrix, and $\tilde{A}$ is the symmetric normalized adjacency matrix. $\tilde{A} X_l^s$ is computed to propagate node features based on the node connection relationships and the normalized weights of $\tilde{A}$. Then, $\tilde{A} X_l^s$ and $X_l^s$ are weighted and fused with two trainable weight matrices $W_1 \in \mathbb{R}^{L_s \times Ch}$, $W_2 \in \mathbb{R}^{L_s \times Ch}$ and a trainable bias matrix $b \in \mathbb{R}^{(q_0+q_1+1) \times Ch}$. Finally, a new node feature matrix $X' \in \mathbb{R}^{(q_0+q_1+1) \times Ch}$ is output. $X'$ and $A_l$ are input to $f_G(\cdot)$ several more times, and $X^{GCN} \in \mathbb{R}^{(q_0+q_1+1) \times Ch}$ is output. For the first $f_G(\cdot)$, the computational complexity is $O((N_e + N_v) \times L_s \times Ch)$ [45], and for the subsequent $f_G(\cdot)$, the computational complexity is $O((N_e + N_v) \times Ch^2)$, where $N_e$ and $N_v$ are the numbers of edges and nodes in $G_l$. Although the scale of $G_l$ differs between images, the numbers of edges and nodes in $G_l$ are usually much smaller than the number of pixels in an image; therefore, the computational complexity of the GCN for an image can also be taken as $O(M \times N)$.
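The graph convolutional layer of Equations (11) and (12) can be sketched in PyTorch as follows; the class name, weight initialization, and ReLU activation are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """Graph convolutional layer f_G implementing Eqs. (11)-(12)."""
    def __init__(self, in_dim, out_dim, num_nodes):
        super().__init__()
        self.W1 = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)  # weights for propagated features
        self.W2 = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)  # weights for each node's own features
        self.b = nn.Parameter(torch.zeros(num_nodes, out_dim))       # trainable bias matrix

    def forward(self, A, X):
        # Eq. (11): symmetric normalization of the adjacency matrix
        d = A.sum(dim=1)
        D_inv_sqrt = torch.diag(d.clamp(min=1e-8).pow(-0.5))
        A_norm = D_inv_sqrt @ A @ D_inv_sqrt
        # Eq. (12): propagate neighbor features, add the self term and the bias
        return torch.relu(A_norm @ X @ self.W1 + X @ self.W2 + self.b)

# Example: a graph with 12 nodes and 24-dimensional node features
A = (torch.rand(12, 12) > 0.7).float()
A = ((A + A.T) > 0).float()
X = torch.randn(12, 24)
out = GraphConvLayer(24, 16, num_nodes=12)(A, X)
print(out.shape)  # torch.Size([12, 16])
```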
$Ne^{CNN}$, the output of the 3-D CNN, is input to the flattened layer $f_F(\cdot)$ and the fully connected layer $f_D(\cdot)$ with $Ch$ nodes. $X^{GCN}$, the output of the GCN, is input to the global average pooling layer $f_{GAP}(\cdot)$. Then, they are fused using the fusion strategies of addition (-A), element-wise multiplication (-M), or concatenation (-C) [46]. The processes can be formulated as:

$$Y^{A} = f_{GAP}(X^{GCN}) + \sigma[f_F(Ne^{CNN}) W^D] \tag{13}$$

$$Y^{M} = f_{GAP}(X^{GCN}) \otimes \sigma[f_F(Ne^{CNN}) W^D] \tag{14}$$

$$Y^{C} = f_{GAP}(X^{GCN}) \oplus \sigma[f_F(Ne^{CNN}) W^D] \tag{15}$$

where $W^D$ is a trainable weight matrix in $f_D(\cdot)$, and $\otimes$ and $\oplus$ represent element-wise multiplication and concatenation, respectively. The function of $W^D$ is to adjust the contribution ratio of the features extracted by the 3-D CNN and the GCN to the material identification.
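A compact PyTorch sketch of the three fusion strategies of Equations (13)–(15), together with the softmax head of Equation (16) that is described next, is shown below; the feature dimensions and layer names are assumptions for illustration.

```python
import torch
import torch.nn as nn

Ch, Cl = 16, 5                       # assumed channel count and number of classes
f_D = nn.Linear(8 * 7 * 7 * 8, Ch)   # f_D: fully connected layer on the flattened 3-D CNN output

def fuse(x_gcn, ne_cnn, mode="A"):
    g = x_gcn.mean(dim=1)                                  # f_GAP: global average pooling over nodes
    c = torch.relu(f_D(ne_cnn.flatten(start_dim=1)))       # f_F then f_D with ReLU
    if mode == "A":                                        # Eq. (13): addition
        return g + c
    if mode == "M":                                        # Eq. (14): element-wise multiplication
        return g * c
    return torch.cat([g, c], dim=1)                        # Eq. (15): concatenation

f_S = nn.Linear(Ch, Cl)              # f_S: output layer (use 2*Ch input features for the -C strategy)
y_class = torch.softmax(f_S(fuse(torch.randn(2, 12, Ch), torch.randn(2, 8, 7, 7, 8))), dim=1)  # Eq. (16)
print(y_class.shape)  # torch.Size([2, 5])
```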
At last, $Y^F$ (i.e., $Y^A$, $Y^M$, or $Y^C$) is input to the fully connected layer $f_S(\cdot)$ with $Cl$ nodes and the activation function $\mathrm{Softmax}(\cdot)$, and the formula can be written as:

$$Y^{Class} = \mathrm{Softmax}(Y^F W^S) \tag{16}$$

where $W^S$ is a trainable weight matrix in $f_S(\cdot)$, $y_i^{Class} \in Y^{Class}$, and $\sum_{i=0}^{Cl-1} y_i^{Class} = 1$. If $y_k^{Class} \ge y_i^{Class}$ ($0 \le i < Cl$), the result of the material identification is output as follows:

$$\mathrm{result} = \begin{cases} \text{background}, & k = 0 \\ k\text{-th material}, & 0 < k < Cl \end{cases} \tag{17}$$

and the construction of the identification methods is complete. According to the three fusion strategies, the identification methods are distinguished as the proposed method-A, the proposed method-M, and the proposed method-C. For the entire process of material identification for an image, we can conclude that the computational complexity is $O(M \times N)$, i.e., linear in the number of pixels in the image; therefore, there is no risk of a sudden explosion of the calculation volume.
After that, the model parameters are trained multiple times using the training sets and the cross-entropy loss function. To improve the accuracy while retaining the training speed, a decayed learning rate with exponential decay [58] is applied, and the learning rate is gradually reduced during training as follows:

$$Lr = 0.001 \times 0.95^{Ct} \tag{18}$$

where $Lr$ is the learning rate and $Ct$ is the cycle training time. Early stopping [59] is applied as the stopping strategy of the training to avoid overfitting of the model parameters.
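A minimal PyTorch sketch of this training setup is shown below, with a dummy linear model and random tensors standing in for the fused GCN + 3-D CNN model and the material identification datasets; the optimizer choice and the early-stopping patience are assumptions.

```python
import torch
import torch.nn as nn

# Dummy stand-ins so the sketch runs end to end (replace with the fused model and real datasets)
model = nn.Linear(20, 5)
train_x, train_y = torch.randn(64, 20), torch.randint(0, 5, (64,))
val_x, val_y = torch.randn(32, 20), torch.randint(0, 5, (32,))

opt = torch.optim.Adam(model.parameters(), lr=0.001)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)  # Eq. (18): Lr = 0.001 * 0.95^Ct
loss_fn = nn.CrossEntropyLoss()                                  # cross-entropy training loss

patience, best_val, wait = 10, float("inf"), 0
for epoch in range(200):
    model.train()
    opt.zero_grad()
    loss = loss_fn(model(train_x), train_y)
    loss.backward()
    opt.step()
    sched.step()                                                 # decay the learning rate each training cycle

    with torch.no_grad():
        val_loss = loss_fn(model(val_x), val_y).item()
    if val_loss < best_val - 1e-4:                               # early stopping [59] on the validation loss
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break
```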
After training, the material class prediction for the test set is conducted using the proposed method-A, -M, and -C. Then, the identification performance is demonstrated, evaluated, and analyzed.

3.3. Data Quality Assessment

We chose spectral separability to describe the degree of data inconsistency. The higher the spectral separability, the lower the data inconsistency, and the less difficult it is to identify materials. The ratio of the inter-class distance to the intra-class distance is calculated to describe the spectral separability  J  of the materials in a dataset. The formulas are presented as:
$$m^{(i)} = \frac{1}{N_i} \sum_{j=0}^{N_i - 1} X_j^{(i)} \tag{19}$$

$$m = \frac{1}{M} \sum_{i=0}^{M-1} m^{(i)} \tag{20}$$

$$P_i = \frac{N_i}{\sum_{k=0}^{M-1} N_k} \tag{21}$$

$$S_B = \sum_{i=0}^{M-1} P_i (m^{(i)} - m)(m^{(i)} - m)^T \tag{22}$$

$$S_W = \sum_{i=0}^{M-1} P_i \frac{1}{N_i} \sum_{j=0}^{N_i - 1} (X_j^{(i)} - m^{(i)})(X_j^{(i)} - m^{(i)})^T \tag{23}$$

$$J = \frac{\mathrm{tr}(S_B)}{\mathrm{tr}(S_W)} \tag{24}$$

where $M$ is the total number of materials in the dataset, $N_i$ is the total number of spectra for the $i$-th material, and $X_j^{(i)} \in \mathbb{R}^{1 \times C}$ is the $j$-th spectrum of the $i$-th material.
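For reference, the following NumPy sketch computes the separability $J$ of Equations (19)–(24); the input layout (a list of per-material spectra arrays) is an assumed convention.

```python
import numpy as np

def spectral_separability(material_spectra):
    """Compute the separability J = tr(S_B) / tr(S_W), Eqs. (19)-(24).

    material_spectra: list of arrays, one per material, each of shape (N_i, C).
    """
    means = [Xi.mean(axis=0) for Xi in material_spectra]          # Eq. (19): per-material mean m^(i)
    m = np.mean(means, axis=0)                                    # Eq. (20): overall mean m
    N_total = sum(Xi.shape[0] for Xi in material_spectra)
    C = material_spectra[0].shape[1]
    S_B = np.zeros((C, C))
    S_W = np.zeros((C, C))
    for Xi, mi in zip(material_spectra, means):
        Pi = Xi.shape[0] / N_total                                # Eq. (21): class proportion P_i
        d = (mi - m)[:, None]
        S_B += Pi * d @ d.T                                       # Eq. (22): between-class scatter
        Xc = Xi - mi
        S_W += Pi * (Xc.T @ Xc) / Xi.shape[0]                     # Eq. (23): within-class scatter
    return np.trace(S_B) / np.trace(S_W)                          # Eq. (24): ratio of traces

# Example: three materials with 35-band spectra
rng = np.random.default_rng(1)
data = [rng.normal(mu, 0.1, size=(100, 35)) for mu in (0.2, 0.5, 0.8)]
print(spectral_separability(data))
```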

4. Results

4.1. Experimental Data

The experimental data include simulated data and measured data. The simulated data were space target spectral images created through computer simulation, and the measured data were acquired by measuring the satellite scale-down model in the laboratory. The satellite scale-down model is a simulation model whose materials are different from the real materials of the space targets, but it can be used to verify the effectiveness of the identification methods.
The simulated data and the measured data are used to evaluate the identification performance of the proposed method (-A, -M, and -C). In this field, researchers often use their own datasets, and these datasets have different qualities and different imaging conditions; at the same time, there is a lack of benchmarks to uniformly compare the performances of all methods. Therefore, we applied several representative methods of different types as comparative methods, including the 3-D CNN [12,13,14,23,24], the CNMF [9], and the TD [7], wherein the proposed method and the 3-D CNN are fully supervised, the CNMF is semi-supervised, and the TD is unsupervised.

4.1.1. Simulated Data

We generated hyperspectral simulated data based on high-fidelity panchromatic simulated images of the “Tango” spacecraft released by the ESA, as shown in Figure 8. During the computer simulation, a unique reflectance spectrum was set for each material, and changes in the illumination conditions were simulated by adjusting the proportion of sunlight and earthshine in the incident light. The simulated model of the space target with different attitudes was labeled with material classes, and a spectrum was added to each pixel according to its label and the set illumination conditions. Although different incidence spectra were used in different images, the incident spectra were the same for different surfaces of the target within an image. Then, the images were down-sampled to form simulated hyperspectral images with different illumination conditions and different spatial resolutions.
The band range of the simulated data is from 440 nm to 780 nm, and the spectral resolution is 10 nm. The material classes in the space target simulated images are solar cell, gold mylar, and antenna. The normalized spectra of these materials are shown in Figure 9, where the spectral features of the three materials are different from each other.

4.1.2. Measured Data

During the laboratory measurement, incandescent lamps were chosen to simulate earthshine, and a tungsten halogen lamp was aimed at the satellite model to simulate sunlight, as shown in Figure 10 and Figure 11. The direction from the tungsten halogen lamp to the satellite model and the attitude of the satellite model together determined the angle of incidence of the tungsten halogen lamp, simulating the complex illumination situation of the space target in the space environment.
The position and orientation of the tungsten halogen lamp and the attitude of the model were changed to simulate the changes in the spectral data distributions produced by different shooting times, attitudes of the space target, and other conditions, as shown in Figure 12. Then, the measured hyperspectral images of different illumination conditions and different spatial resolutions were obtained by adjusting the spatial resolution of the measuring instrument.
The measured data band range is from 440 nm to 780 nm, and the spectral resolution is 10 nm. The material classes in the measured data are solar cell, gold mylar, antenna, and cooling area. The normalized spectra of these materials are shown in Figure 13, where the spectral features of the gold mylar on the vertical incidence surface and the oblique incidence surface of the tungsten halogen lamp are quite different, and the spectral features of the other materials are similar.

4.2. Analysis of Identification Results

The F1-measure [60], overall accuracy (OA), average accuracy (AA), and kappa coefficient [61] are applied as evaluation indexes. The F1-measure is calculated for each material.
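These indexes can be computed directly from the predicted and reference label maps; a short scikit-learn sketch with hypothetical label arrays is shown below.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

# Hypothetical flattened label maps: 0 = background, 1..4 = material classes
y_true = np.random.default_rng(2).integers(0, 5, size=10000)
y_pred = y_true.copy()
y_pred[:800] = (y_pred[:800] + 1) % 5          # inject some errors for illustration

oa = (y_true == y_pred).mean()                 # overall accuracy (OA)
cm = confusion_matrix(y_true, y_pred)
aa = (np.diag(cm) / cm.sum(axis=1)).mean()     # average accuracy (AA): mean of per-class recalls
kappa = cohen_kappa_score(y_true, y_pred)      # kappa coefficient
f1_per_class = f1_score(y_true, y_pred, average=None)  # F1-measure for each class

print(f"OA={oa:.3f}, AA={aa:.3f}, kappa={kappa:.3f}")
```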

4.2.1. Simulated Data

Five simulated hyperspectral images were generated; two of them were chosen for training, and three of them for testing. We divided the images into four datasets, which are the training set (T0), the test set 1 (T1), the test set 2 (T2), and the test set 3 (T3). Among these, the imaging conditions of T1 were the same as those of T0; the illumination conditions of T2 were changed compared with T0; and the spatial resolution of T3 was decreased compared to T0. Additionally, the parameter describing the illumination conditions is the ratio of the sunlight irradiance to the earthshine irradiance in the incident light, and the parameters of the simulated data are shown in Table 1.
The dimensions of the input data, the dimensions of the output data, and the parameters chosen in the proposed method are shown in Table 2. The material identification results of the simulated data are shown in Figure 14. Since the results of the proposed methods (-A, -M, and -C) are similar, and the proposed method-A performs best, only its results are shown.
In the simulated data, as shown in Table 3 and Figure 15, the identification performances of the solar cell and gold mylar in the proposed method and the comparative methods are always good, and the performance of the antenna in the proposed method is better than that in the comparative methods. Further, the results of the proposed method-A are better than the proposed method-M and the proposed method-C.
As shown in Table 4 and Figure 16, the OAs of the proposed method and the comparative methods are close, but the AA and the kappa coefficient of the proposed method are basically higher than the comparative methods. This shows that the proposed methods, especially the proposed method-A, outperform the comparative methods on the identification performance of the material with small samples.

4.2.2. Measured Data

Four kinds of data were measured, including the training set (T0), the test set 1 (T1), the test set 2 (T2), and the test set 3 (T3). Among them, the imaging conditions of T1 were the same as those of T0; the illumination conditions of T2 were changed compared with T0; and the spatial resolution of T3 was decreased compared with T0. Additionally, the parameter describing the illumination conditions is the incidence direction of the tungsten halogen lamp in the local coordinate system of the space target model, and the parameters of the measured data are shown in Table 5.
The dimensions of the input data, the dimensions of the output data, and the parameters chosen in the proposed method are shown in Table 2. The material identification results of the measured data are shown in Figure 17. For the same reason as the simulated data, among the proposed methods (-A, -M, and -C), only the results of the proposed method-A are shown.
In the measured data, as shown in Table 6 and Figure 18, the identification performances of the solar cell and gold mylar in the proposed method are always good and better than in the CNMF and the TD. In the 3-D CNN, the change in the illumination conditions and the reduction in the spatial resolution led to a significant decrease in the identification performances of the solar cell and gold mylar. The performances of the antenna and cooling area in the proposed method-A are better than those in the comparative methods. Additionally, the change in the illumination conditions and the reduction in the spatial resolution led to the reduction in the performances of the antenna and cooling area in the proposed method-M and the proposed method-C.
As shown in Table 7 and Figure 19, the performances in terms of the OA, AA, and kappa coefficient on the measured data also show the advantage of the proposed method in identifying materials with small samples. The kappa coefficient of the proposed method stays around 0.8. The performances of the CNMF and the TD are poor, and their kappa coefficients are always below 0.6. The performance of the 3-D CNN is close to the proposed method on T0 and T1, but the changes in the illumination conditions and the reduction in the spatial resolution lead to a significant decrease in its performance on T2 and T3. This shows that the prediction results of the proposed methods are substantially consistent with the labels and that the proposed methods, especially the proposed method-A, are more resistant to changes in the imaging conditions than the comparative methods.

4.3. Data Quality Assessment Results

We calculated the spectral separability of the two experimental data using Equations (19)–(24). The calculation results are shown in Table 8.

5. Discussion

5.1. The Influence Analysis of Illumination Conditions on the Data Distributions

When measuring, the model attitude is adjusted to change the incidence angle of the tungsten halogen lamp, and the hyperspectral images are taken while the incidence relative direction is close to the negative X-axis, the positive Y-axis, and the negative Z-axis. The spectra of each material in the measured data are clustered by the k-means clustering algorithm, and the average grayscale values of the clusters are calculated. The grayscale values of the materials in different incidence directions and different spatial resolutions are shown in Figure 20.
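As an illustration of this analysis step, the following short scikit-learn sketch clusters the spectra of one material with k-means and reports the cluster means; the number of clusters and the data layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
material_spectra = rng.random((500, 35))            # hypothetical spectra of one material, 35 bands

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(material_spectra)
for k in range(km.n_clusters):
    cluster = material_spectra[km.labels_ == k]
    print(k, cluster.shape[0], cluster.mean())      # cluster index, size, and average grayscale value
```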
From Figure 20, it is found that the gold mylar and solar cell on the space target model cover a wide area and high sample amounts, which can better reflect the influence of the incidence direction on the data distributions. The average spectra of the left solar cell, the right solar cell, and the gold mylar on the different surfaces of the model body in the datasets with different incidence directions are calculated and normalized, as shown in Figure 21, Figure 22, Figure 23 and Figure 24.
It is observed that the spectral data distributions of the left solar cell and the right solar cell are relatively consistent, and there are differences in the spectra of the gold mylar on different surfaces. This is because the solar cells are always oriented in the same direction, while the gold mylar faces different directions on different surfaces. On the different surfaces of the model body, the incidence angles of the gold mylar are different, thus the problem of inconsistent distributions of the spectral data is clearly observed.
Additionally, because the distributions of the solar cell and gold mylar on the space target model are symmetrical along the X-axis, the spectral features are similar when the incidence direction is close to the YOZ plane, as shown in Figure 22, Figure 23 and Figure 24. When the incidence relative direction is close to the negative X-axis, they change significantly, as shown in Figure 21. This indicates that the incidence direction of sunlight has a great influence on the spectral data distributions of hyperspectral images, and it is closely related to the structural features of the space target too.

5.2. Comparison of the Results between the Experimental Datasets

The simulated data are designed with simplified illumination and reflection models and contain hyperspectral images under ideal imaging conditions. Therefore, in the simulated data, the spectral data distributions of the same material in the same set are consistent. On the contrary, the illumination conditions and material reflection characteristics of the measured data are complex. According to the discussion in Section 5.1, in the measured data, the spectral data distributions of the same material in the same set are inconsistent.
As shown in the calculation results in Table 8, the separability of the simulated data is significantly greater than that of the measured data. Many methods can achieve good results on data with high separability, but only a method that performs equally well on data with low separability is really suitable for space target material identification.
The kappa coefficients of the proposed method-A, the 3-D CNN, the CNMF, and the TD on the simulated data and the measured data are compared, as shown in Figure 16 and Figure 19. In the simulated data, the kappa coefficients of the four methods are all high, and there is a downward trend from the proposed method-A to the 3-D CNN to the CNMF to the TD. This shows that, under ideal conditions, these methods have good results, and the proposed method-A has only a slight advantage.
Compared with the simulated data, the kappa coefficients of each method in the measured data have decreased, and the proposed method-A has the smallest decrease. This indicates that the proposed method-A can achieve good identification performance under complex illumination conditions and has obvious advantages over the comparative methods.
In the extraction of the global spatial features of the space target, the global-structure topological graph is built by segmenting the hyperspectral images, and it is insensitive to the changes in the data distributions caused by spatial resolution and illumination conditions. By learning the global spatial features of the space target, the proposed method-A achieves high robustness and can better meet the requirements of space target material identification under complex illumination conditions and spatial resolution changes.

5.3. Deficiencies and Improvements under Non-Ideal Imaging Conditions

The proposed method has the ability to handle the material identification of the space target with three-axis stabilization under complex illumination conditions when executing hovering operations. However, in fact, the imaging conditions of the space targets cannot always be so ideal. We list the main factors that may affect the material identification performance during imaging and propose possible strategies to address these issues.

5.3.1. Image Degradation

Image degradation exists in the hyperspectral imaging system, resulting in the loss of spatial and spectral features of the space target, thereby reducing the accuracy of material identification. Our team has researched and developed staring imaging spectrometers based on an acousto-optic tunable filter (AOTF) and measured the linear diffusion function and spectral response function of the spectrometers [62]. Therefore, we plan to design image restoration methods using the degradation function of the spectrometer as prior knowledge.

5.3.2. Background Distractions

The background types for space target images include a space background, earth background, and limb background [63]. In addition to the space background with uniform low grayscale values, both the earth background and limb background have complex texture features and change frequently with the atmospheric environment. As the example image shows in Figure 25, it is conceivable that, without processing of the background, the earth background and limb background will interfere with the representation of the target structural features when building the space target topological graph.
Therefore, complete background removal is one of the requirements to ensure the performance of the proposed method. Additionally, because the grayscale values and texture features of the earth background and limb background are so variable, it is difficult to cover the backgrounds in all images with one method.
We recommend contrastive learning [64] to solve this problem. As shown in Figure 25, the pixels of the space target are set as positive samples, and the other pixels are set as negative samples. The model is trained on each image to enlarge the feature distance between positive and negative samples, making them easy to distinguish so that the background pixels can be removed accurately.

5.3.3. Rapid Changes in Imaging Conditions

In some detection scenarios, imaging conditions may be in continuous high-speed changes, which cannot be ignored in hyperspectral imaging. For example, in rendezvous and proximity operations (RPO) [65], the imaging distance gradually decreases; in fly-around operations [66], the direction of imaging is constantly changed; and in accompanying flights [67], the rapid rotation of the space target leads to a constant change in its relative attitude to the detector.
In the above scenarios, the data acquired by scanning imaging spectrometers may become quite difficult to interpret. Additionally, the staring imaging spectrometer can intuitively acquire the spatial features of the target in each frame, as shown in Figure 26, so the structural features and spectral features can be extracted by operating on the image space for identification.
When processing staring hyperspectral images, we consider improving on the proposed method. On the one hand, when building topological graphs, every single frame image requires separate superpixel segmentation and topological graph building; then, all of the topological graphs will be connected selectively and merged together. On the other hand, the self-attention mechanism will be added to the GCN to enhance its ability to capture long-range interactions (LRI) [68] in graphs and learn the changes in structural features. In addition, we believe that it is possible to identify the motion status of the space target by the changes in structural features.

6. Conclusions

Under complex illumination conditions, the spectral features of materials in hyperspectral images become inconsistent. These phenomena are specifically expressed as obvious spectral feature differences of the same material, high spectral feature similarities of different materials, and intense changing of spectral features under different illumination conditions. Therefore, it is difficult to achieve accurate material identification only via spectral features and local spatial features.
Aiming at this problem, a material identification method based on an improved graph convolutional neural network is proposed. Superpixel segmentation is conducted on the hyperspectral images to build the multiscale, joint topological graph of the space target global structure. Based on this topological graph, material identification datasets including the topological graphs and neighborhoods of each pixel are obtained. These datasets contain the global spatial features, the local spatial features, and the spectral features. The proposed network model of the GCN and the 3-D CNN fusing with the strategies of -A, -M, or -C is constructed. Additionally, it is trained to learn the best weights of the three features. The simulated data and the measured data are used to demonstrate the performance of the proposed methods (-A, -M, and -C), and the 3-D CNN, the CNMF, and the TD are chosen as comparative methods. The overall accuracy of the proposed methods can be kept at 85–90%, and their kappa coefficients remain around 0.8, among which the proposed method-A has the best overall performance. Moreover, in the simulated data, the proposed methods and the comparative methods all perform well; in the measured data, the proposed methods perform more strongly than the comparative methods. This indicates that the proposed methods can improve the material identification performance under complex illumination conditions with high accuracy and strong robustness.
In future work, we plan to measure more hyperspectral images of the space target under different imaging conditions and verify the performances of the proposed method and other advanced methods on these images, determining the state-of-the-art (SOTA) methods and making the proposed method more convincing. Additionally, we will research the material identification of the space target under non-ideal imaging conditions to further improve the applicable scope and robustness of the proposed method and will conduct research on space target status recognition and pose estimation.

Author Contributions

Conceptualization, N.L.; methodology, N.L.; software, C.G.; investigation, Y.M.; resources, H.Z.; data curation, C.G.; writing—original draft, C.G.; writing—review & editing, N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Foundation Grant 6230103 and the National Natural Science Foundation of China under Grant 61975004.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mu, J.; Hao, X.; Zhu, W.; Li, S. Review and Prospect of Intelligent Perception for Non-cooperative Targets. Chin. Space Sci. Technol. 2021, 41, 1–16. [Google Scholar] [CrossRef]
  2. Deng, S.; Liu, C.; Tan, Y. Research on Spectral Measurement Technology and Surface Material Analysis of Space Target. Spectrosc. Spectr. Anal. 2021, 41, 3299–3306. [Google Scholar] [CrossRef]
  3. Liu, Y.; Zhao, H.; Zhong, X. The Combined Computational Spectral Imaging Method of Space-based Targets. Spacecr. Recovery Remote Sens. 2021, 42, 74–81. [Google Scholar]
  4. Abercromby, K.; Okada, J.; Guyote, M.; Hamada, K.; Barker, E. Comparisons of Ground Truth and Remote Spectral Measurements of FORMOSAT and ANDE Spacecraft. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 12–15 September 2007. [Google Scholar]
  5. Vananti, A.; Schildknecht, T.; Krag, H. Reflectance spectroscopy characterization of space debris. Adv. Space Res. 2017, 59, 2488–2500. [Google Scholar] [CrossRef] [Green Version]
  6. Abercromby, K.J.; Rapp, J.; Bedard, D.; Seitzer, P.; Cardona, T.; Cowardin, H.; Barker, E.; Lederer, S. Comparisons of Constrained Least Squares Model Versus Human-in-the-Loop for Spectral Unmixing to Determine Material Type of GEO Debris. In Proceedings of the 6th European Conference on Space Debris, Darmstadt, Germany, 22–25 April 2013; Volume 723, p. 22. [Google Scholar]
  7. Nie, B.; Yang, L.; Zhao, F.; Zhou, J.; Jing, J. Space Object Material Identification Method of Hyperspectral Imaging Based on Tucker Decomposition. Adv. Space Res. 2021, 67, 2031–2043. [Google Scholar] [CrossRef]
  8. Velez-Reyes, M.; Yi, J. Hyperspectral Unmixing for Remote Sensing of Unresolved Objects. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 19–22 September 2023; pp. 573–580. [Google Scholar]
  9. Yi, J.; Velez-Reyes, M.; Erives, H. Studying the Potential of Hyperspectral Unmixing for Extracting Composition of Unresolved Space Objects using Simulation Models. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 14–17 September 2021; pp. 603–614. [Google Scholar]
  10. Li, P.; Li, Z.; Xu, C.; Fang, Y.; Zhang, F. Research on Space Object’s Materials Multi-Color Photometry Identification Based on the Extreme Learning Machine Algorithm. Spectrosc. Spectr. Anal. 2019, 39, 363–369. [Google Scholar]
  11. Liu, H. The Technology of Spectral Recognition Based on Statistical Machine Learning. Master’s Thesis, Changchun University of Science and Technology, Changchun, China, 2017. [Google Scholar]
  12. Liu, H.; Li, Z.; Shi, J.; Xin, M.; Cai, H.; Gao, X.; Tan, Y. Study on Classification and Recognition of Materials Based on Convolutional Neural Network. Laser Infrared 2017, 47, 1024–1028. [Google Scholar]
  13. Deng, S.; Liu, C.; Tan, Y.; Liu, D.; Zhang, N.; Kang, Z.; Li, Z.; Fan, C.; Jiang, C.; Lu, Z. A Combination of Multiple Deep Learning Methods Applied to Small-Sample Space Objects Classification. Spectrosc. Spectr. Anal. 2022, 42, 609–615. [Google Scholar]
  14. Gazak, Z.J.; Swindle, R.; McQuaid, I.; Fletcher, J. Exploiting Spatial Information in Raw Spectroscopic Imagery using Convolutional Neural Networks. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 15–18 September 2020; pp. 395–403. [Google Scholar]
  15. Vananti, A.; Schildknecht, T.; Krag, H.; Erd, C. Preliminary Results from Reflectance Spectroscopy Observations of Space Debris in GEO. Fifth European Conference on Space Debris. Proc. Esa Spec. Publ. 2009, 672, 41. [Google Scholar]
  16. Cowardin, H.; Seitzer, P.; Abercromby, K.; Barker, E.; Schildknecht, T. Characterization of Orbital Debris Photometric Properties Derived from Laboratory-Based Measurements. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 15–18 September 2020; p. 66. [Google Scholar]
  17. Bédard, D.; Lévesque, M. Analysis of the CanX-1 Engineering Model Spectral Reflectance Measurements. J. Spacecr. Rocket. 2014, 51, 1492–1504. [Google Scholar] [CrossRef]
  18. Bédard, D.; Wade, G.A.; Abercromby, K. Laboratory Characterization of Homogeneous Spacecraft Materials. J. Spacecr. Rocket. 2015, 52, 1038–1056. [Google Scholar] [CrossRef]
  19. Sun, C.; Yuan, Y.; Lu, Q. Modeling and Verification of Space-Based Optical Scattering Characteristics of Space Objects. Acta Opt. Sin. 2019, 39, 354–360. [Google Scholar]
  20. Bédard, D.; Lévesque, M.; Wallace, B. Measurement of the photometric and spectral BRDF of small Canadian satellites in a controlled environment. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 13–16 September 2011; pp. 1–10. [Google Scholar]
  21. Bédard, D.; Wade, G.; Monin, D.; Scott, R. Spectrometric characterization of geostationary satellites. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 11–14 September 2012; pp. 1–11. [Google Scholar]
  22. Augustine, J.; Eli, Q.; Francis, K. Simultaneous Glint Spectral Signatures of Geosynchronous Satellites from Multiple Telescopes. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 11–14 September 2018; pp. 1–10. [Google Scholar]
  23. Perez, M.D.; Musallam, M.A.; Garcia, A.; Ghorbel, E.; Ismaeil, K.A.; Aouada, D.; Henaff, P.L. Detection and Identification of On-Orbit Objects Using Machine Learning. In Proceedings of the 8th European Conference on Space Debris, Darmstadt, Germany, 20–23 April 2021; Volume 8, pp. 1–10. [Google Scholar]
  24. Chen, Y.; Gao, J.; Zhang, K. R-CNN-Based Satellite Components Detection in Optical Images. Int. J. Aerosp. Eng. 2020, 2020, 8816187. [Google Scholar] [CrossRef]
  25. Liu, J.; Li, H.; Zhang, Y.; Zhou, J.; Lu, L.; Li, F. Robust Adaptive Relative Position and Attitude Control for Noncooperative Spacecraft Hovering under Coupled Uncertain Dynamics. Math. Probl. Eng. 2019, 2019, 8678473. [Google Scholar] [CrossRef] [Green Version]
26. Meftah, M.; Damé, L.; Bolsée, D.; Hauchecorne, A.; Pereira, N.; Sluse, D.; Cessateur, G.; Irbah, A.; Bureau, J.; Weber, M.; et al. SOLAR-ISS: A new reference spectrum based on SOLAR/SOLSPEC observations. Astron. Astrophys. 2018, 611, A1. [Google Scholar] [CrossRef]
  27. Guo, X. Study of Spectral Radiation and Scattering Characteristic of Background and Target. Ph.D. Thesis, Xidian University, Xi’an, China, 2018. [Google Scholar]
  28. Yan, P.; Ma, C.; She, W. Influence of Earth’s Reflective Radiation on Space Target for Space Based Imaging. Acta Phys. Sin. 2015, 64, 1–8. [Google Scholar] [CrossRef]
  29. Zou, Y.; Zhang, L.; Zhang, J.; Li, B.; Lv, X. Developmental Trends in the Application and Measurement of the Bidirectional Reflection Distribution Function. Sensors 2022, 22, 1739. [Google Scholar] [CrossRef]
  30. Liu, C.; Li, Z.; Xu, C. A Modified Phong Model for Fresnel Reflection Phenomenon of Commonly Used Materials for Space Targets. Laser Optoelectron. Prog. 2017, 54, 446–454. [Google Scholar]
31. Li, J.; Huang, X.; Tu, L. WHU-OHS: A benchmark dataset for large-scale Hyperspectral Image classification. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 103022. [Google Scholar] [CrossRef]
  32. Li, T.; Zhang, J.; Zhang, Y. Classification of hyperspectral image based on deep belief networks. In Proceedings of the 2014 IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 5132–5136. [Google Scholar]
  33. Mou, L.; Ghamisi, P.; Zhu, X. Deep recurrent neural networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655. [Google Scholar] [CrossRef] [Green Version]
  34. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619. [Google Scholar] [CrossRef] [Green Version]
  35. Li, J.; Du, Q.; Xi, B.; Li, Y. Hyperspectral Image Classification Via Sample Expansion for Convolutional Neural Network. In Proceedings of the 2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 23–26 September 2018; pp. 1–5. [Google Scholar] [CrossRef]
  36. Luo, Y.; Zou, J.; Yao, C.; Zhao, X.; Li, T.; Bai, G. HSI-CNN: A Novel Convolution Neural Network for Hyperspectral Image. In Proceedings of the International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–17 July 2018; pp. 464–469. [Google Scholar] [CrossRef] [Green Version]
  37. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative Adversarial Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  38. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef] [Green Version]
  39. Zou, L.; Zhu, X.; Wu, C.; Liu, Y.; Qu, L. Spectral–Spatial Exploration for Hyperspectral Image Classification via the Fusion of Fully Convolutional Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 659–674. [Google Scholar] [CrossRef]
  40. Wang, C.; Bai, X.; Zhou, L.; Zhou, J. Hyperspectral Image Classification Based on Non-Local Neural Networks. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 584–587. [Google Scholar] [CrossRef]
  41. Kanthi, M.; Sarma, T.H.; Bindu, C.S. A 3d-Deep CNN Based Feature Extraction and Hyperspectral Image Classification. In Proceedings of the 2020 IEEE India Geoscience and Remote Sensing Symposium (InGARSS), Virtual, 2–4 December 2020; pp. 229–232. [Google Scholar] [CrossRef]
  42. Zhang, H.; Chen, Y.; He, X.; Shen, X. Boosting CNN for Hyperspectral Image Classification. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3673–3676. [Google Scholar] [CrossRef]
  43. Xu, Z.; Yu, H.; Zheng, K.; Gao, L.; Song, M. A Novel Classification Framework for Hyperspectral Image Classification Based on Multiscale Spectral-Spatial Convolutional Network. In Proceedings of the 2021 11th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 24–26 March 2021; pp. 1–5. [Google Scholar] [CrossRef]
  44. Feng, J.; Wu, X.; Shang, R.; Sui, C.; Li, J.; Jiao, L.; Zhang, X. Attention multibranch convolutional neural network for hyperspectral image classification based on adaptive region search. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5054–5070. [Google Scholar] [CrossRef]
45. Kipf, T.; Welling, M. Semi-Supervised Classification with Graph Convolutional Networks. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017. [Google Scholar]
  46. Qin, A.; Shang, Z.; Tian, J.; Wang, Y.; Zhang, T.; Tang, Y.Y. Spectral–Spatial Graph Convolutional Networks for Semisupervised Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 241–245. [Google Scholar] [CrossRef]
  47. Wan, S.; Gong, C.; Zhong, P.; Pan, S.; Li, G.; Yang, J. Hyperspectral image classification with context-aware dynamic graph convolutional network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 597–612. [Google Scholar] [CrossRef]
  48. Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multiscale Dynamic Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3162–3177. [Google Scholar] [CrossRef] [Green Version]
  49. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978. [Google Scholar] [CrossRef]
  50. Zhang, M.; Luo, H.; Song, W.; Mei, H.; Su, C. Spectral-Spatial Offset Graph Convolutional Networks for Hyperspectral Image Classification. Remote Sens. 2021, 13, 4342. [Google Scholar] [CrossRef]
  51. Ding, Y.; Zhang, Z.; Zhao, X.; Hong, D.; Cai, W.; Yu, C.; Yang, N.; Cai, W. Multi-feature fusion: Graph neural network and CNN combining for hyperspectral image classification. Neurocomputing 2022, 501, 246–257. [Google Scholar] [CrossRef]
52. Ding, Y.; Zhang, Z.; Zhao, X.; Cai, W.; He, F.; Cai, Y.; Cai, W.W. Deep hybrid: Multi-graph neural network collaboration for hyperspectral image classification. Def. Technol. 2022. [Google Scholar] [CrossRef]
  53. Zhang, Z.; Ding, Y.; Zhao, X.; Siye, L.; Yang, N.; Cai, Y.; Zhan, Y. Multireceptive field: An adaptive path aggregation graph neural framework for hyperspectral image classification. Expert Syst. Appl. 2023, 217, 119508. [Google Scholar] [CrossRef]
  54. Ding, Y.; Zhang, Z.; Zhao, X.; Cai, W.; Yang, N.; Hu, H.; Huang, X.; Cao, Y.; Cai, W. Unsupervised Self-Correlated Learning Smoothy Enhanced Locality Preserving Graph Convolution Embedding Clustering for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5536716. [Google Scholar] [CrossRef]
  55. Ding, Y.; Zhang, Z.; Zhao, X.; Cai, Y.; Li, S.; Deng, B.; Cai, W. Self-Supervised Locality Preserving Low-Pass Graph Convolutional Embedding for Large-Scale Hyperspectral Image Clustering. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5536016. [Google Scholar] [CrossRef]
  56. Ding, Y.; Zhao, X.; Zhang, Z.; Cai, W.; Yang, N.; Zhan, Y. Semi-Supervised Locality Preserving Dense Graph Neural Network with ARMA Filters and Context-Aware Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5511812. [Google Scholar] [CrossRef]
  57. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [Green Version]
  58. Smith, L.N. Cyclical Learning Rates for Training Neural Networks. In Proceedings of the 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–7 January 2017; pp. 464–472. [Google Scholar] [CrossRef] [Green Version]
  59. Prechelt, L. Early Stopping—But When? In Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7700. [Google Scholar] [CrossRef]
  60. Sammut, C.; Webb, G.I. F1-Measure. In Encyclopedia of Machine Learning and Data Mining; Springer: Boston, MA, USA, 2017; p. 16. [Google Scholar] [CrossRef]
  61. Thompson, W.; Walter, S. A reappraisal of the kappa coefficient. J. Clin. Epidemiol. 1988, 41, 949–958. [Google Scholar] [CrossRef]
  62. Xu, Z.; Zhao, H.; Jia, G. Influence of The AOTF Rear Cut Angle on Spectral Image Quality. Infrared Laser Eng. 2022, 51, 373–379. [Google Scholar]
  63. Chen, X.; Wan, M.; Xu, Y.; Qian, W.; Chen, Q.; Gu, G. Infrared Remote Sensing Imaging Simulation Method for Earth’s Limb Scene. Infrared Laser Eng. 2022, 51, 24–31. [Google Scholar]
64. Xie, J.; Xiang, J.; Chen, J.; Hou, X.; Zhao, X.; Shen, L. C2AM: Contrastive learning of Class-agnostic Activation Map for Weakly Supervised Object Localization and Semantic Segmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 979–988. [Google Scholar] [CrossRef]
  65. Borelli, G.; Gaias, G.; Colombo, C. Rendezvous and Proximity Operations Design of An Active Debris Removal Service to A Large Constellation Fleet. Acta Astronaut. 2023, 205, 33–46. [Google Scholar] [CrossRef]
  66. Zhang, R.; Han, C.; Rao, Y.I.; Yin, J. Spacecraft Fast Fly-Around Formations Design Using the Bi-Teardrop Configuration. J. Guid. Control. Dyn. 2018, 41, 1542–1555. [Google Scholar] [CrossRef]
  67. Li, Y.; Wang, B.; Liu, C. Long-term Accompanying Flight Control of Satellites with Low Fuel Consumption. In Proceedings of the 2016 Chinese Control and Decision Conference (CCDC), Yinchuan, China, 28–30 May 2016; pp. 360–363. [Google Scholar] [CrossRef]
  68. Dwivedi, V.P.; Rampášek, L.; Galkin, M.; Parviz, A.; Wolf, G.; Luu, A.; Beaini, D. Long Range Graph Benchmark. arXiv 2022, arXiv:2206.08164. [Google Scholar]
Figure 1. Schematic of the space target attitude and illumination.
Figure 2. Schematic of the Fresnel effect. When the incidence angle of light on the surface is larger, the specular reflection component is stronger, and the diffuse reflection component is weaker.
Figure 3. Local coordinate system of the space target model.
Figure 4. Normalized materials’ spectra in the hyperspectral image of the space target model.
Figure 5. Normalized spectra of gold mylar in the images with different illumination conditions: (a) the first image; (b) the second image.
Figure 6. An overview of the proposed method.
Figure 7. Flowchart for building the topological graph of the space target global structure.
Figure 8. Examples of the simulated data: (a) example 1; (b) example 2.
Figure 9. Examples of normalized materials’ spectra in the simulated data.
Figure 10. Laboratory measurement scene.
Figure 11. Spectra of the incandescent lamp and the tungsten halogen lamp.
Figure 12. Examples of the measured data: (a) example 1; (b) example 2.
Figure 13. Examples of normalized materials’ spectra in the measured data.
Figure 14. Examples of material identification results using the simulated data: (a) example 1; (b) example 2.
Figure 15. F1-measure chart of the simulated data.
Figure 16. Identification performance chart of the simulated data (OA-AA-Kappa).
Figure 17. Examples of material identification results using the measured data: (a) example 1; (b) example 2.
Figure 18. F1-measure chart of the measured data.
Figure 19. Identification performance chart of the measured data (OA-AA-Kappa).
Figure 20. Distributions of grayscale values of different materials in different incidence directions and spatial resolutions: (a) high spatial resolution, negative X-axis; (b) high spatial resolution, positive Y-axis; (c) high spatial resolution, negative Z-axis; (d) low spatial resolution, positive Y-axis.
Figure 21. Spectral data distributions of high spatial resolution and negative X-axis incidence: (a) left and right solar cells; (b) gold mylar on different surfaces.
Figure 22. Spectral data distributions of high spatial resolution and positive Y-axis incidence: (a) left and right solar cells; (b) gold mylar on different surfaces.
Figure 23. Spectral data distributions of high spatial resolution and negative Z-axis incidence: (a) left and right solar cells; (b) gold mylar on different surfaces.
Figure 24. Spectral data distributions of low spatial resolution and positive Y-axis incidence: (a) left and right solar cells; (b) gold mylar on different surfaces.
Figure 25. Increasing the differences between target features and background features through contrastive learning.
Figure 26. Schematic of staring hyperspectral images with decreasing imaging distance.
Table 1. Simulation data parameters.
|  | T0 | T1 | T2 | T3 |
| --- | --- | --- | --- | --- |
| Illumination conditions | 10:1 | 10:1 | 20:3 | 10:1 |
| Spatial resolution (cm/pixel) | 3.2 | 3.2 | 3.2 | 6.4 |
Table 2. Dimensions and parameters of the proposed method on the simulated and measured data.

Inputs
- Simulated data: (1) the hyperspectral image $I \in \mathbb{R}^{150 \times 240 \times 90}$; (2) the topological graphs $G_l = \{ A_l, X_l \}$ with $A_l \in \mathbb{R}^{q \times q}$, $X_l \in \mathbb{R}^{q \times 90}$, $16 \le q \le 22$; (3) the fixed-size neighborhoods $\mathrm{Ne}_l \in \mathbb{R}^{5 \times 5 \times 90}$.
- Measured data: (1) the hyperspectral image $I \in \mathbb{R}^{100 \times 220 \times 90}$; (2) the topological graphs $G_l = \{ A_l, X_l \}$ with $A_l \in \mathbb{R}^{q \times q}$, $X_l \in \mathbb{R}^{q \times 90}$, $23 \le q \le 42$; (3) the fixed-size neighborhoods $\mathrm{Ne}_l \in \mathbb{R}^{5 \times 5 \times 90}$.

SLIC
- Simulated data: the initial numbers of cluster centers $K_0 = 30$, $K_1 = 60$; the distance scale factor $\varepsilon_l = 9/40$.
- Measured data: the initial numbers of cluster centers $K_0 = 27$, $K_1 = 55$; the distance scale factor $\varepsilon_l = 9/40$.

The network (identical for both datasets unless noted)
1. The 3-D convolutional layer $f_{C3}(\cdot)$ with $W_{C3,kn} \in \mathbb{R}^{3 \times 3 \times 3}$ ($0 \le kn < 16$) and step sizes 1, 1, and 1;
2. The 3-D max-pooling layer $f_{MP3}(\cdot)$ with windows of size $1 \times 1 \times 3$ and step sizes 1, 1, and 3;
3. The 1-D convolutional layers $f_{C1}(\cdot)$ with $W_{C1} \in \mathbb{R}^{1 \times 3}$ and step size 1;
4. The 1-D max-pooling layers $f_{MP1}(\cdot)$ with a window of size 3 and step size 3;
5. The graph convolutional layer $f_{G}(\cdot)$ with the number of channels $Ch = 32$;
6. The fully connected layer $f_{D}(\cdot)$ with the number of nodes $Ch = 32$;
7. The fully connected layer $f_{S}(\cdot)$ with the number of nodes $Cl = 4$ (simulated data) or $Cl = 5$ (measured data).

Output
- Simulated data: the class confidence vector $Y_{\mathrm{Class}} \in \mathbb{R}^{1 \times 4}$.
- Measured data: the class confidence vector $Y_{\mathrm{Class}} \in \mathbb{R}^{1 \times 5}$.
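To make the dimensions above concrete, the following PyTorch-style sketch assembles one plausible version of the combined model: a 3-D CNN branch over the 5 × 5 × 90 pixel neighborhoods, a graph convolutional branch over the superpixel topological graph, and fusion of the two feature vectors by addition (presumably the "-A" variant in the result tables). The module layout, padding choices, normalized-adjacency handling, and the pixel-to-node index mapping are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a 3-D CNN + GCN fusion classifier following the layer
# sizes listed in Table 2. Illustrative only: fusion uses element-wise
# addition, and the flattened CNN feature size is resolved lazily at run time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConv(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, h):
        # a_hat: (q, q) normalized adjacency; h: (q, in_dim) node features
        return F.relu(a_hat @ self.weight(h))


class FusionNet(nn.Module):
    def __init__(self, bands=90, n_classes=4, ch=32):
        super().__init__()
        # 3-D CNN branch on 5 x 5 x bands neighborhoods (Table 2, items 1-4)
        self.conv3d = nn.Conv3d(1, 16, kernel_size=3, stride=1, padding=1)
        self.pool3d = nn.MaxPool3d(kernel_size=(1, 1, 3), stride=(1, 1, 3))
        self.conv1d = nn.Conv1d(1, 1, kernel_size=3, stride=1, padding=1)
        self.pool1d = nn.MaxPool1d(kernel_size=3, stride=3)
        # Graph branch on the superpixel topological graph (item 5)
        self.gcn = GraphConv(bands, ch)
        # Fusion head (items 6-7)
        self.fc_d = nn.LazyLinear(ch)          # f_D with Ch = 32 nodes
        self.fc_s = nn.Linear(ch, n_classes)   # f_S producing Cl class scores

    def forward(self, neighborhood, a_hat, node_feats, node_idx):
        # neighborhood: (B, 1, 5, 5, bands) local cube around each pixel
        x = self.pool3d(F.relu(self.conv3d(neighborhood)))
        x = x.flatten(1).unsqueeze(1)          # -> (B, 1, L) for the 1-D stage
        x = self.pool1d(F.relu(self.conv1d(x)))
        x = self.fc_d(x.flatten(1))            # (B, ch) local spectral-spatial features
        # node_idx maps each pixel in the batch to its superpixel node
        g = self.gcn(a_hat, node_feats)[node_idx]   # (B, ch) global structural features
        fused = x + g                          # "-A": fusion by addition
        return F.softmax(self.fc_s(fused), dim=-1)
```

The "-M" and "-C" variants reported in the tables would swap the addition in the fusion step for element-wise multiplication or concatenation, as described in the abstract.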
Table 3. F1-measure (%) table of the simulated data.
| Class | The proposed method -A (T0/T1/T2/T3) | The proposed method -M (T0/T1/T2/T3) | The proposed method -C (T0/T1/T2/T3) |
| --- | --- | --- | --- |
| Solar cell | 97/96/97/93 | 95/93/94/89 | 97/95/95/91 |
| Gold mylar | 97/99/94/82 | 96/98/86/69 | 97/99/93/78 |
| Antenna | 81/78/80/66 | 67/65/55/40 | 81/73/41/85 |

| Class | 3-D CNN (T0/T1/T2/T3) | CNMF (T0/T1/T2/T3) | TD (T0/T1/T2/T3) |
| --- | --- | --- | --- |
| Solar cell | 93/89/94/93 | 95/88/95/94 | 90/86/90/88 |
| Gold mylar | 96/98/90/82 | 92/97/94/88 | 77/87/78/78 |
| Antenna | 25/20/26/34 | 12/20/17/0 | 0/0/0/0 |
Table 4. Identification performance table of the simulated data (OA-AA-KAPPA).
| Method | OA (%) (T0/T1/T2/T3) | AA (%) (T0/T1/T2/T3) | Kappa (T0/T1/T2/T3) |
| --- | --- | --- | --- |
| The proposed method -A | 96.8/97.7/95.9/88.7 | 90.7/91.7/89.0/80.7 | 0.940/0.945/0.908/0.759 |
| The proposed method -M | 94.2/95.8/91.2/79.9 | 88.7/91.3/77.0/67.3 | 0.893/0.902/0.794/0.589 |
| The proposed method -C | 96.1/97.2/93.0/87.1 | 93.3/92.7/73.0/80.3 | 0.928/0.934/0.840/0.717 |
| 3-D CNN | 92.0/93.9/92.5/86.9 | 70.1/69.3/70.7/68.0 | 0.849/0.850/0.826/0.737 |
| CNMF | 92.1/91.3/93.1/89.4 | 66.3/64.7/65.0/65.0 | 0.819/0.799/0.840/0.773 |
| TD | 84.4/80.2/84.6/83.0 | 56.3/58.7/57.0/56.3 | 0.632/0.628/0.641/0.633 |
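For reference, every score in Tables 3 and 4 (per-class F1-measure, OA, AA, and the kappa coefficient) can be derived from one confusion matrix over the labeled pixels. The sketch below is a standard way to compute them; the function name, the exclusion of background pixels, and the use of mean per-class recall for AA are assumptions rather than details taken from the paper.

```python
# Illustrative computation of the metrics reported in Tables 3, 4, 6, and 7:
# per-class F1-measure, overall accuracy (OA), average accuracy (AA), and
# Cohen's kappa, all derived from a confusion matrix of labeled pixels.
import numpy as np


def evaluate(y_true, y_pred, n_classes):
    """y_true, y_pred: 1-D integer arrays over the labeled (non-background) pixels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)                 # rows: truth, columns: prediction

    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)
    recall = tp / np.maximum(cm.sum(axis=1), 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)

    total = cm.sum()
    oa = tp.sum() / total                              # overall accuracy
    aa = recall.mean()                                 # average (per-class) accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return f1, oa, aa, kappa
```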
Table 5. Measured data parameters.
|  | T0 | T1 | T2 | T3 |
| --- | --- | --- | --- | --- |
| Illumination conditions | +Y-axis; −Z-axis | −Z-axis | −X-axis | +Y-axis |
| Spatial resolution (cm/pixel) | 5.4 | 5.4 | 5.4 | 10.8 |
Table 6. F1-measure (%) table of the measured data.
| Class | The proposed method -A (T0/T1/T2/T3) | The proposed method -M (T0/T1/T2/T3) | The proposed method -C (T0/T1/T2/T3) |
| --- | --- | --- | --- |
| Solar cell | 98/98/95/96 | 97/96/91/88 | 97/96/86/95 |
| Gold mylar | 89/86/89/90 | 93/90/92/89 | 93/91/88/92 |
| Antenna | 79/64/66/76 | 83/55/54/38 | 79/57/0/46 |
| Cooling area | 67/53/42/40 | 61/42/2/28 | 63/40/0/29 |

| Class | 3-D CNN (T0/T1/T2/T3) | CNMF (T0/T1/T2/T3) | TD (T0/T1/T2/T3) |
| --- | --- | --- | --- |
| Solar cell | 94/97/65/42 | 84/66/60/82 | 91/87/88/75 |
| Gold mylar | 92/90/67/70 | 63/63/59/71 | 72/67/70/69 |
| Antenna | 14/17/0/3 | 2/7/12/1 | 0/0/0/0 |
| Cooling area | 41/22/18/11 | 25/18/13/6 | 19/0/12/0 |
Table 7. Identification performance table of the measured data (OA-AA-KAPPA).
| Method | OA (%) (T0/T1/T2/T3) | AA (%) (T0/T1/T2/T3) | Kappa (T0/T1/T2/T3) |
| --- | --- | --- | --- |
| The proposed method -A | 89.9/87.7/87.4/89.4 | 86.3/81.3/76.8/72.8 | 0.837/0.799/0.791/0.817 |
| The proposed method -M | 92.2/89.3/87.6/84.3 | 92.8/79.0/58.0/71.5 | 0.864/0.813/0.774/0.722 |
| The proposed method -C | 92.8/90.5/85.0/90.3 | 91.8/74.3/44.8/71.0 | 0.875/0.830/0.715/0.825 |
| 3-D CNN | 86.5/86.3/58.9/55.5 | 59.5/56.5/50.5/40.3 | 0.773/0.767/0.350/0.242 |
| CNMF | 71.8/56.3/53.8/69.4 | 49.5/40.8/37.0/41.5 | 0.500/0.331/0.232/0.492 |
| TD | 74.8/74.2/69.0/69.2 | 49.5/38.5/47.3/37.3 | 0.582/0.536/0.515/0.428 |
Table 8. The spectral separability of the materials.
| Data | $\mathrm{tr}(S_B)$ | $\mathrm{tr}(S_W)$ | $J$ |
| --- | --- | --- | --- |
| Simulated data | 7.7473 | 1.6508 | 4.6930 |
| Measured data | 0.7777 | 2.4307 | 0.3199 |
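Table 8 reads as a Fisher-style separability criterion: the larger $J = \mathrm{tr}(S_B)/\mathrm{tr}(S_W)$, the more separable the material spectra, which matches the simulated data ($J \approx 4.69$) being easier to classify than the measured data ($J \approx 0.32$). The sketch below computes these quantities under the common assumption that $S_B$ and $S_W$ are the class-prior-weighted between-class and within-class scatter matrices of the pixel spectra; the exact definition used by the authors is not restated in this excerpt.

```python
# Sketch of the spectral separability criterion summarized in Table 8,
# assuming Fisher-style scatter matrices weighted by the class priors.
import numpy as np


def separability(spectra, labels):
    """spectra: (N, bands) pixel spectra; labels: (N,) integer material labels."""
    n, bands = spectra.shape
    mean_all = spectra.mean(axis=0)
    s_b = np.zeros((bands, bands))
    s_w = np.zeros((bands, bands))
    for c in np.unique(labels):
        xc = spectra[labels == c]
        prior = len(xc) / n
        d = (xc.mean(axis=0) - mean_all)[:, None]
        s_b += prior * (d @ d.T)                              # between-class scatter
        s_w += prior * np.cov(xc, rowvar=False, bias=True)    # within-class scatter
    return np.trace(s_b), np.trace(s_w), np.trace(s_b) / np.trace(s_w)
```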