Article

Graph Convolutional Network Using Adaptive Neighborhood Laplacian Matrix for Hyperspectral Images with Application to Rice Seed Image Classification

Jairo Orozco 1, Vidya Manian 1, Estefania Alfaro 1, Harkamal Walia 2 and Balpreet K. Dhatt 2
1 University of Puerto Rico at Mayaguez, Mayagüez, PR 00681, USA
2 University of Nebraska-Lincoln, Lincoln, NE 68583, USA
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(7), 3515; https://doi.org/10.3390/s23073515
Submission received: 25 February 2023 / Revised: 21 March 2023 / Accepted: 25 March 2023 / Published: 27 March 2023

Abstract

Graph convolutional neural network architectures combine feature extraction and convolutional layers for hyperspectral image classification. An adaptive neighborhood aggregation method based on statistical variance, integrating the spatial information along with the spectral signature of the pixels, is proposed for improving graph convolutional network classification of hyperspectral images. The spatial-spectral information is integrated into the adjacency matrix and processed by a single-layer graph convolutional network. The algorithm employs an adaptive neighborhood selection criterion conditioned on the class to which the pixel belongs. Compared to fixed window-based feature extraction, this method proves effective in capturing the spectral and spatial features with variable pixel neighborhood sizes. Experimental results on the Indian Pines, Houston University, and Botswana Hyperion hyperspectral image datasets show that the proposed AN-GCN significantly improves classification accuracy; for example, the overall accuracy for the Houston University data increases from 81.71% (MiniGCN) to 97.88% (AN-GCN). Furthermore, the AN-GCN can classify hyperspectral images of rice seeds exposed to high day and night temperatures, proving its efficacy in discriminating seeds subjected to increased ambient temperature treatments.

1. Introduction

Hyperspectral images (HSIs) contain rich spectral and spatial information useful for material identification. Recently, many methods based on deep learning tools have been developed, increasing the classification accuracy and segmentation precision achievable on these images [1]. One of these approaches combines graphs with deep learning. Many non-Euclidean data structures can be represented in the form of graphs [2,3]. Sophisticated sensing and imaging technologies are now available for the acquisition of complex image datasets; the resulting images have a higher information content requiring non-Euclidean space representations, so combining graph theory with deep learning has proved effective for processing them. Building on convolutional and deep networks, the concept of Graph Convolutional Networks (GCNs) has been developed and applied to images [4]. An image with a regular domain (a regular grid in Euclidean space) is represented as a graph, where each pixel is a node and edges are the connections between adjacent nodes [5]. GCNs are used to model long-range spatial relationships in HSIs where a CNN fails [6]. One of the most important steps in developing a GCN is the generation of the adjacency matrix and the derivation of the Laplacian [7,8,9]. The adjacency matrix of an undirected graph represents the relationship between vertices, in this case pixels. Qin et al. [10] proposed a GCN method that markedly improved HSI classification accuracy by multiplying the spatial distance between nodes into an adjacency matrix containing spectral signature information. Mou et al. [11] built the graph by using correlation to measure the similarity among pixels, thereby identifying pixels belonging to the same category, and constructed a two-layer GCN that improved the classification results. A drawback of GCNs is the computational cost of constructing the adjacency matrix. To address this problem, the authors in [12] constructed the adjacency matrix from pixels within a patch containing rich local spatial information instead of from all the pixels in the image. Another important work in reducing the computational cost is MiniGCN, proposed by Hong et al. [6], which allows training large-scale GCNs with small batches; its adjacency matrix is constructed using K-nearest neighbors, taking the 10 nearest neighbors of the central pixel into consideration. To represent an HSI as a graph, many authors build the adjacency matrix using k-nearest neighbors [13,14] because the method is easy to understand and apply, but it does not sufficiently capture the spectral characteristics of hyperspectral data [15]. The superpixel approach [16,17] creates the graph from superpixel regions to reduce the graph size.
Some authors have implemented neighbor selection using a threshold, for example in hyperspectral unmixing applications [18]. Others have implemented adaptive shapes [19] in classification methods that do not involve GCNs. In this work, we propose a novel way of adaptively creating the adjacency matrix based on a neighbor selection approach called AN-GCN. The general idea is to iterate over each pixel, aggregating neighbors that belong to the same class as the central pixel based on a variance measure, to create the adjacency matrix. The algorithm selects a different number of neighbors for each pixel, thereby adapting to the spatial variability of the class to which the pixel belongs, with the aim of improving the classification and resolving the uncertainty that arises at border pixels. The construction of the adjacency matrix is a crucial step for a GCN to succeed in HSI classification [13], and an optimal graph representation of the HSI as the adjacency matrix impacts the classification result significantly. The performance of AN-GCN is compared with GCN methods that do not combine other machine learning stages in the architecture. Recently, hyperspectral images have been used in agriculture for the analysis of rice seeds [20,21,22], because accurate, non-destructive phenotype measurement of the seeds helps in evaluating seed quality and contributes to improving agricultural production [23]. Another reason for using hyperspectral images for rice seed classification is to save labor and time, since these processes are conventionally done manually by expert inspectors [20,24]. So far, hyperspectral rice seed images have been classified only with conventional machine learning methods; there is no report of using GCN-based methods. To test the effectiveness of the AN-GCN method in classifying hyperspectral images of rice seeds, less than 10% of the rice image data is used for training, and the remainder is used for testing and validation. The main contributions of this work are (1) a novel way of computing the adjacency matrix using adaptive spatial neighborhood aggregation to improve the performance of GCNs in HSI classification; and (2) classification of rice seed HSIs grown under high temperatures using the AN-GCN approach.

2. Materials and Methods

This section presents the AN-GCN method, based on the construction of the adjacency matrix by adaptive neighbor selection. The goal is to characterize the homogeneity between pixels for better discrimination between classes: pixels with a high degree of homogeneity most likely belong to the same class. A variance metric is used to measure homogeneity [25]:
$S^2 = \frac{\sum_i (X_i - \bar{X})^2}{n - 1}$,  (1)
where $X_i$ are the pixel intensity values per band belonging to a radius R, and n is the number of bands in each pixel.
A radius R is selected based on unit distances in spatial coordinates, as shown in the scheme of Figure 1. If R = 1, the algorithm selects only the four closest neighbors; a larger radius selects more neighbors. The initial radius is set to R = 100. To select homogeneous regions, the variance is computed over the area covered by radius R. The threshold values shown in Table 1 are applied to this variance for the different HSIs, shrinking the area covered by R so that only pixels from homogeneous neighborhoods are selected.
If the variance is larger than the threshold value, the coverage radius is decreased until the desired variance threshold is reached. If the variance is smaller than the threshold, the selected neighbors meet the criterion and the pixels in the neighborhood belong to the same class, resulting in successful class discrimination.
The threshold value for each HSI is determined by calculating the average variance of a set of pixels known to belong to the same class; a single threshold is then chosen for all classes. The threshold value differs for each HSI. Neighbor selection is applied to each pixel of the HSI, so each pixel has a different neighborhood depending on the homogeneity of the region surrounding it. Once the neighborhood for each center pixel is selected, the adjacency matrix $A_{ad}$ is built using the radial basis function (RBF):
$A_{i,j} = \exp\left(-\lVert x_i - x_j \rVert^2 / \sigma^2\right)$,  (2)
where $x_i$ and $x_j$ are the per-band pixel intensity vectors of pixels i and j, and $\sigma^2$ is a control parameter.
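To make the adaptive selection and the RBF weighting concrete, the following is a minimal NumPy sketch, not the authors' code. It assumes the unit distances of Figure 1 are city-block distances, so that R = 1 covers exactly the four closest neighbors, and it pools the sample variance over all bands of the covered pixels; the function and variable names are illustrative.

```python
import numpy as np

def adaptive_neighbors(hsi, row, col, threshold, r_init=100):
    """Shrink the radius around (row, col) until the spectral variance of the
    covered pixels falls below the homogeneity threshold (Equation (1))."""
    h, w, _ = hsi.shape
    r = r_init
    while r > 1:
        rr, cc = np.ogrid[:h, :w]
        mask = np.abs(rr - row) + np.abs(cc - col) <= r  # city-block (unit) distances
        patch = hsi[mask]                                # (num_pixels, bands)
        if patch.var(ddof=1) <= threshold:               # sample variance, n - 1 denominator
            break
        r -= 1
    rr, cc = np.ogrid[:h, :w]
    mask = np.abs(rr - row) + np.abs(cc - col) <= r
    return np.argwhere(mask)                             # neighbor coordinates at the final radius

def rbf_weight(x_i, x_j, sigma2=1.0):
    """Edge weight between two pixel spectra, Equation (2)."""
    return np.exp(-np.sum((x_i - x_j) ** 2) / sigma2)
```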
After the construction of the adjacency matrix, the GCN algorithm is applied. The Laplacian matrix computed from the adaptive neighborhood adjacency matrix measures how much the function value changes at each node (the graph gradient):
$L_{ad} = I_n - D^{-1/2} A_{ad} D^{-1/2}$,  (3)
where D is a diagonal matrix of node degrees and $A_{ad}$ is the adjacency matrix. The normalized Laplacian matrix is positive semi-definite, so it can be diagonalized by the Fourier basis U; Equation (3) can therefore be written as $L_{ad} = U \Lambda U^T$, where U is the matrix of eigenvectors and $\Lambda$ is the diagonal matrix of eigenvalues $\lambda$. The eigenvectors satisfy the orthonormality property $U U^T = I$. The graph Fourier transform of a graph signal X is defined as $\mathcal{F}(X) = U^T X$, and its inverse as $\mathcal{F}^{-1}(\hat{X}) = U \hat{X}$, where X is the feature matrix of all nodes of the graph. The graph Fourier transform projects the input graph signal onto an orthonormal space whose basis is determined by the eigenvectors of the normalized graph Laplacian [5]. In signal processing terms, the graph convolution of X with a filter g is defined as
$X \ast_G g = \mathcal{F}^{-1}(\mathcal{F}(X) \odot \mathcal{F}(g))$,  (4)
where $\odot$ is the element-wise product. Defining the filter as $g_\theta = \mathrm{diag}(U^T g)$, the convolution in Equation (4) simplifies to
$X \ast_G g_\theta = U g_\theta U^T X$,  (5)
Due to the computational complexity of the eigenvector decomposition in Equation (5), and because the filter $g_\theta$ is not localized (it may take in nodes far away from the central node), Hammond et al. [26] approximate $g_\theta$ by a Kth-order truncated expansion of Chebyshev polynomials. The ChebNet kernel is defined as
$g_\theta = \sum_{i=0}^{K} \theta_i T_i(\hat{\Lambda})$,  (6)
where i indexes the neighborhood order starting from the smallest, $\theta_i$ are the Chebyshev coefficients, K is the largest neighborhood order, $T_i$ is the Chebyshev polynomial of order i, and $\hat{\Lambda} = 2\Lambda/\lambda_{max} - I$. Substituting the filter of Equation (6) into the convolution of Equation (5) gives
$X \ast_G g_\theta = U \left( \sum_{i=0}^{K} \theta_i T_i(\hat{\Lambda}) \right) U^T X$,  (7)
$X \ast_G g_\theta = \sum_{i=0}^{K} \theta_i \, U T_i(\hat{\Lambda}) U^T X$,  (8)
Since $U T_i(\hat{\Lambda}) U^T = T_i(\hat{L})$, Equation (8) can be written as
$X \ast_G g_\theta = \sum_{i=0}^{K} \theta_i T_i(\hat{L}) X$,  (9)
where $\hat{L} = 2L/\lambda_{max} - I$. Taking K = 1 and approximating the largest eigenvalue as $\lambda_{max} = 2$ [27], Equation (9) can be written as
$X \ast_G g_\theta = (\theta_0 T_0(\hat{L}) + \theta_1 T_1(\hat{L})) X$,  (10)
where $T_0(\hat{L}) = I$ and $T_1(\hat{L}) = \hat{L}$, so Equation (10) becomes
$X \ast_G g_\theta = (\theta_0 + \theta_1 \hat{L}) X$,  (11)
To avoid over-fitting, GCN assumes a single parameter $\theta = \theta_0 = -\theta_1$. With $\lambda_{max} = 2$ we have $\hat{L} = L - I$, so substituting Equation (3) for L in Equation (11) yields
$X \ast_G g_\theta = \theta \left( I + D^{-1/2} A_{ad} D^{-1/2} \right) X$,  (12)
A renormalization proposed by [27] replaces $I + D^{-1/2} A_{ad} D^{-1/2}$ with $\hat{D}^{-1/2} \hat{A}_{ad} \hat{D}^{-1/2}$, where $\hat{A}_{ad} = A_{ad} + I$ and $\hat{D}_{ii} = \sum_j (\hat{A}_{ad})_{ij}$. The GCN uses a propagation rule to update the weights in the hidden layer iteratively until the output of the GCN converges to the target classes. The propagation rule is
$\hat{D}^{-1/2} \hat{A}_{ad} \hat{D}^{-1/2} X \Theta$,  (13)
where $\Theta$ is a matrix of filter parameters (weights). Most articles on GCNs write the propagation rule as
$H^{(l+1)} = \sigma\left( \hat{D}^{-1/2} \hat{A}_{ad} \hat{D}^{-1/2} H^{(l)} W^{(l)} + b^{(l)} \right)$,  (14)
where $H^{(l)}$ denotes the feature output of the lth layer (the layer input), $\sigma$ is the activation function, and $W^{(l)}$ and $b^{(l)}$ are the learned weights and biases.
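For concreteness, below is a minimal NumPy sketch of one propagation step of Equation (14) with the renormalization trick. It is not the authors' implementation; the ReLU activation, random weights, and variable names are illustrative assumptions.

```python
import numpy as np

def gcn_layer(a_ad, h, w, b):
    """One GCN propagation step, Equation (14), with the renormalization trick."""
    a_hat = a_ad + np.eye(a_ad.shape[0])          # A_hat = A_ad + I (add self-loops)
    d_hat = a_hat.sum(axis=1)                     # D_hat_ii = sum_j A_hat_ij
    d_inv_sqrt = np.diag(d_hat ** -0.5)
    support = d_inv_sqrt @ a_hat @ d_inv_sqrt     # normalized adjacency
    return np.maximum(support @ h @ w + b, 0.0)   # sigma = ReLU

# Tiny usage example: 5 nodes, 3 input features, 4 hidden units.
rng = np.random.default_rng(0)
a = (rng.random((5, 5)) > 0.5).astype(float)
a = np.triu(a, 1); a = a + a.T                    # symmetric, zero diagonal
h0 = rng.random((5, 3))
w0 = rng.random((3, 4)); b0 = np.zeros(4)
print(gcn_layer(a, h0, w0, b0).shape)             # (5, 4)
```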
To reduce the computational cost of AN-GCN, the batch processing of MiniGCN [6] is followed. Batches of pixels are extracted as in CNNs, and a subgraph adjacency matrix $A_{ad}^{b}$ is constructed for each batch from the full adjacency matrix. The propagation rule of Equation (14) for each subgraph is
$\hat{H}_{b_i}^{(l+1)} = \sigma\left( \hat{D}_{b_i}^{-1/2} \hat{A}_{ad,b_i} \hat{D}_{b_i}^{-1/2} H_{b_i}^{(l)} W_{b_i}^{(l)} + b_{b_i}^{(l)} \right)$,  (15)
where $b_i$ denotes the ith batch of pixels (subgraph) used for network training. The final output of the propagation rule is a vector that concatenates all the subgraph outputs:
$\hat{H}^{(l+1)} = \left[ H_{b_1}^{(l+1)}, H_{b_2}^{(l+1)}, H_{b_3}^{(l+1)}, \ldots, H_{b_N}^{(l+1)} \right]$,  (16)
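Under the assumption that each batch's subgraph is obtained by slicing the full adjacency matrix at the batch indices (sequential batches are used here only for simplicity), the mini-batch scheme of Equations (15) and (16) might be sketched as follows, reusing the gcn_layer function from the sketch above:

```python
import numpy as np

def minibatch_gcn(a_ad, h, w, b, batch_size=100):
    """Apply the layer batch-by-batch (Equation (15)) and concatenate (Equation (16))."""
    n = a_ad.shape[0]
    outputs = []
    for start in range(0, n, batch_size):
        idx = np.arange(start, min(start + batch_size, n))
        a_sub = a_ad[np.ix_(idx, idx)]            # subgraph adjacency for this batch
        outputs.append(gcn_layer(a_sub, h[idx], w, b))
    return np.concatenate(outputs, axis=0)        # H^(l+1) = [H_b1, H_b2, ...]
```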
The pseudocode for creating the adjacency matrix for AN-GCN is given in Algorithm 1, and the pseudocode for AN-GCN itself is given in Algorithm 2. Figure 2 shows the architecture of AN-GCN. Batch normalization is applied before the GCN layer.
Algorithm 1 Adjacency Matrix for AN-GCN
1: Input: Original HSI, ground truth
2: for each pixel in the HSI do
3:     Set a radius R
4:     Compute the homogeneity of the pixels inside R using the variance (Equation (1))
5:     while variance > threshold value do
6:         Decrease R until the desired threshold is met
7:         if R < 1 then set the minimum possible radius to 1
8:         end if
9:     end while
10:    Compute the weights of the pixels inside R that meet the criterion using Equation (2)
11: end for
12: Construct the adjacency matrix
13: Compute the Laplacian matrix
A graphical user interface is developed for constructing the image and ground truth mosaics for the pixel-based classification of rice seed HSIs. The hyperspectral rice seed images are calibrated using the workflow illustrated in Figure 3a. The input is a rice seed hypercube, a three-dimensional array.
Algorithm 2 Pseudocode of AN-GCN for HSI classification
1: Input: Original HSI, Laplacian matrix L, labels, epochs = 200, batch size = 100
2: Initialize the W and b parameters
3: for each batch of pixels do
4:     $\hat{H}_b^{(l+1)} = \sigma\left( \hat{D}_b^{-1/2} \hat{A}_b \hat{D}_b^{-1/2} H_b^{(l)} W_b^{(l)} + b_b^{(l)} \right)$
5:     Return $\hat{H}_b^{(l+1)}$ and the loss
6:     Apply softmax
7:     Optimize the loss function using Adam
8:     Update the parameters
9: end for
10: Output: Predicted label for each pixel using argmax
The seed images are calibrated as $I_c = (I - I_d)/(I_w - I_d)$, where $I_c$ represents the calibrated image, $I_d$ is the dark reference, and $I_w$ is the white reference acquired by the sensor; this flat-field form is consistent with the Shafer model of Equation (17). After calibration, the image is segmented in two steps. First, the Otsu thresholding algorithm is applied to separate the seeds from the background. Second, the segmentation is refined using a Gaussian filter with $\sigma = 0.7$, and labeling is performed using 2-connectivity.
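A minimal sketch of this calibration and two-step segmentation follows, assuming scikit-image is available; the epsilon guard, the band-averaging step, and the 0.5 re-binarization cutoff are illustrative choices, not taken from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.measure import label

def calibrate(raw, dark, white, eps=1e-8):
    """Flat-field calibration: I_c = (I - I_d) / (I_w - I_d)."""
    return (raw - dark) / (white - dark + eps)

def segment_seeds(calibrated):
    """Otsu threshold on a mean-intensity image, then Gaussian smoothing
    and connected-component labeling with 2-connectivity."""
    mean_img = calibrated.mean(axis=2)            # collapse bands to one plane
    mask = mean_img > threshold_otsu(mean_img)    # seeds vs. background
    smooth = gaussian(mask.astype(float), sigma=0.7) > 0.5
    return label(smooth, connectivity=2)          # 0 = background, 1..N = seeds
```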
The "ground truth" block generates the labels for the seeds and the background. In addition, a "crop image" block is available to select a region of interest (ROI) from an input calibrated or segmented image. Once the images are cropped, a parallel block creates the mosaics for training and classification with the GCN architecture. A Hyperspectral Seed Application, implemented in Python with the PyQt5 libraries, has been developed to perform the above processes; its GUI is illustrated in Figure 3b. The GUI is operated in the following manner. The user uploads the HSI of the seed and the white and dark references in the provided widgets. There are buttons for calibrating and saving the image. The user can assign categorical labels to the seed classes, based on the temperatures the seeds are exposed to or on different seed varieties. Once the integer labels are selected, the images are labeled and saved. Another useful function of the GUI is cropping an HSI: as HSIs are large, the user can crop an image by specifying the row and column coordinates enclosing the seed. Finally, a mosaic with seed images of different categories is created by concatenating the individual images horizontally or vertically. A button is provided for visualizing the images at any stage.

Datasets

Four hyperspectral image datasets are used to test the proposed AN-GCN method. The training set is generated by randomly taking pixels from each class; class 0, corresponding to the background, is not considered for training. Once the pixels for training and testing are selected, the rows and columns of the adjacency matrix corresponding to those pixels are also selected, as sketched below. Table 2 gives the specifications of each dataset.
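As a sketch of this sampling step, assuming the ground truth has been flattened to a label vector aligned with the rows of the adjacency matrix and using a placeholder per-class count, the selection might look as follows:

```python
import numpy as np

def sample_training_indices(labels, per_class=50, seed=0):
    """Randomly pick training pixels per class, skipping class 0 (background)."""
    rng = np.random.default_rng(seed)
    train = []
    for c in np.unique(labels):
        if c == 0:                                 # background is not trained on
            continue
        idx = np.flatnonzero(labels == c)
        take = min(per_class, len(idx))
        train.append(rng.choice(idx, size=take, replace=False))
    return np.concatenate(train)

# The adjacency submatrix for the selected pixels would then be:
# a_train = a_ad[np.ix_(train_idx, train_idx)]
```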
(1) Indian Pines dataset: This scene was acquired over the Indian Pines test site in north-western Indiana with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Each band contains 145 × 145 pixels, with a total of 224 bands. The scene contains 16 classes, among them crops, vegetation, smaller roads, highways, and low-density housing; they are named in Table 3 along with the training and testing sets used for the AN-GCN. The spectrum of a pixel from each of the 16 classes is shown in Figure 4. Twenty noisy bands affected by water absorption are removed, leaving a total of 200 bands.
(2) University of Houston dataset: This hyperspectral dataset was acquired with an ITRES-CASI sensor. It contains 144 spectral bands between 380 and 1050 nm, a spatial domain of 349 × 1905 pixels per band, and a spatial resolution of 2.5 m. The data cover the University of Houston campus, with land cover and urban regions in a total of 15 classes, shown in Table 4 along with the training and testing sets used for the AN-GCN.
(3) Botswana dataset: This scene was acquired over the Okavango Delta, Botswana, in 2001–2004 by the Hyperion sensor on EO-1. It contains 242 bands between 400 and 2500 nm. After denoising, 145 bands remain [10–55, 82–97, 102–119, 134–164, 187–220]. Each band contains 1476 × 256 pixels with a spatial resolution of 30 m.
(4) Rice seeds dataset: Hyperspectral images of rice seeds grown under high day/night temperature environments and a control environment were taken with a high-performance line-scan imaging spectrograph (Micro-Hyperspec® Imaging Sensors, extended VNIR version) [23]. The sensor covers the spectral range from 600 to 1700 nm. The dataset contains 268 bands and 150 × 900 pixels per band. There are four temperature treatments plus a control, shown in Table 5.
The rice seed images are calibrated using the workflow outlined in Figure 5, which is composed of five stages. The first stage reads the input image and selects a region of interest (ROI) containing rice samples. Once the ROI is obtained, an initial segmentation based on histogram selection is performed: the rice seeds are thresholded from the background if their intensities $x_h$ lie within the interval $50 < x_h \leq 100$.
The initial segmentation extracts the rice seeds in the regions of interest. However, some isolated pixels belonging to the background class remain within the rice seed region. To remove them, a Gaussian filter with a standard deviation of 0.7 is applied to the cropped image. In addition, similar regions are connected using connected-component labeling with k-connectivity; here, a 3 × 3 kernel with 2-connectivity is used to connect disconnected regions and assign a label. Once the segmentation is refined, the label assignment stage assigns labels to each pixel using the tuple (0, class number), where 0 represents the background.
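These refinement steps might be sketched as follows, assuming a single-band ROI whose intensities match the 50–100 interval above; the 0.5 re-binarization cutoff and all names are illustrative:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.measure import label

def preprocess_rice(roi_band):
    """Interval thresholding (50 < x <= 100), Gaussian cleanup (sigma = 0.7),
    and connected-component labeling with 2-connectivity."""
    mask = (roi_band > 50) & (roi_band <= 100)               # initial histogram segmentation
    smooth = gaussian(mask.astype(float), sigma=0.7) > 0.5   # remove isolated pixels
    return label(smooth, connectivity=2)                     # one label per connected seed region

def assign_labels(components, class_number):
    """Ground truth via the tuple (0, class number): background stays 0."""
    return np.where(components > 0, class_number, 0)
```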
Two types of classification are performed on the rice seed HSIs. The first classifies the treatments at a fixed exposure time; the class numbers are assigned as follows: 1 for HDT, 2 for HDNT1, 3 for HDNT2, 4 for HNT, and 5 for Control. The second classifies the different exposure times within the same treatment; in this case, the classes are assigned as follows: 1 for 168 h, 2 for 180 h, 3 for 204 h, 4 for 216 h, 5 for 228 h, and 6 for 240 h.
The images are calibrated using the Shafer model [28] described in Equation (17), where $I_{c\lambda}$ is the calibrated reflectance value at a predetermined wavelength, $I_\lambda$ is the measured reflectance, W is the white reference obtained from the calibration of a Teflon tile, and B is the black reference:
$I_{c\lambda} = \frac{I_\lambda - B}{W - B}$,  (17)
The rice seed HSI preprocessing workflow is performed for the five classes of images. The last stage is the mosaic generator, which stitches the five groundtruth images and the five calibrated HSIs into two mosaics, respectively. These two mosaics are then input to the GCN for classification.
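As a sketch, the mosaic generation reduces to concatenating the calibrated cubes and their ground truth maps, assuming all crops share the same height and band count; the axis choice is illustrative:

```python
import numpy as np

def build_mosaics(cubes, gts, axis=1):
    """Stitch calibrated HSIs (H x W x B) and ground truths (H x W) side by side."""
    hsi_mosaic = np.concatenate(cubes, axis=axis)  # one cube per treatment class
    gt_mosaic = np.concatenate(gts, axis=axis)
    return hsi_mosaic, gt_mosaic
```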

3. Results

The performance of the AN-GCN method is evaluated by comparing its classification results with pure GCN-based results reported by other authors in the literature, especially MiniGCN. Three metrics are used for this comparison: overall accuracy (OA), average accuracy (AA), and Kappa score. The improvements obtained with AN-GCN are evident in the tables. To further test the effectiveness of AN-GCN, several tests with fixed numbers of nearest neighbors are compared against the proposed adaptive neighbors.
To demonstrate that the adaptive neighbor selection method aggregates pixels belonging to the same class, the Laplacian matrix used for training on the Botswana scene is plotted. This scene is chosen because its matrix is small compared to those of the other HSIs, so the subgraphs belonging to the classes can be plotted. Figure 6 shows a portion of the matrix in which the subgraphs (pixels) belonging to the same class can be visualized; for example, pixels 20, 21, 29, 3, and 8 belong to class 7, corresponding to the Hippo grass class.
A test is carried out to show that the construction of the adjacency matrix influences the final classification. For this test, the Indian Pines scene is used as a reference, and the adjacency matrix is built using fixed numbers of neighbors: first the 4 nearest neighbors of each pixel, then 8, 12, and 20 neighbors; finally, the matrix is built with the proposed method. Each matrix is then used in the GCN classification model with the same training parameters shown in Algorithm 2, and the resulting OA, AA, and Kappa scores for the different numbers of neighbors are shown in Table 6.
For the construction of the adjacency matrix for the Indian Pines scene, a variance threshold of 0.16 is used for the selection of neighbors. The classification results are shown in Table 7; a perfect classification is obtained for the wheat, stone-steel-towers, alfalfa, grass-pasture-mowed, and oats classes. Figure 7 shows the Indian Pines classification map, where the different pixel classification results can be seen.
The results obtained for Houston University are reported in Table 8. The AN-GCN obtains the best OA, AA, and Kappa score values compared to the other reported GCN methods. A perfect classification is obtained for the healthy grass, synthetic grass, soil, water, tennis court, and running track classes.
Figure 8 shows the classification map produced by the AN-GCN method for the Houston University scene: Figure 8a is the groundtruth and Figure 8b is the classification map using the proposed model.
Table 9 shows the training and testing sets used for the Botswana HSI and reports the classification accuracies obtained for this dataset. For the construction of the adjacency matrix, a threshold of 0.026 is used, as shown in Table 1. The results show that AN-GCN obtains the highest accuracy values for 13 of the 14 classes that make up the Botswana scene. The highest reported classification accuracies of AA (99.22%), OA (99.11%), and Kappa score (0.9904) are obtained with the AN-GCN method.
For the rice seed HSIs, the classification results are shown in Table 10 and Table 11. The first table corresponds to the classification results by treatment for a specific time; the exposure times of 204 and 228 h had the best OA, AA, and Kappa score values, and 240 h had the lowest.
Table 11 corresponds to the classification results for the different exposure times for the same treatment. The HDT treatment obtained the best OA, AA, and Kappa score classification values.
Figure 9 shows the classification map for the different classes at a specific time and Figure 10 shows the classification map for the different exposure times for the same treatment.

4. Discussion

4.1. Indian Pines

The results obtained for the classification of the Indian Pines scene are reported in Table 7. A considerable improvement in classification accuracy using AN-GCN can be seen when comparing with other GCN methods, especially the reported MiniGCN methods [6,29,30]: the AN-GCN method improves the accuracy by more than 13 percentage points, affirming the importance of adjacency matrix creation. Incorporating neighborhood information adaptively results in better discrimination between classes, especially in the boundary regions between classes. The classification map is shown in Figure 7; for visual comparison, the Indian Pines groundtruth is provided in Figure 7a and the AN-GCN classification map in Figure 7b. Compared to the groundtruth, some pixels of the Woods class (blue) are misclassified as Buildings-grass-trees-drives (yellow) because these classes have similar spectral signatures. Overall, however, the AN-GCN classification map closely resembles the groundtruth, with minimal misclassification errors.
Table 6 gives the results obtained for the test samples from the Indian Pines scene, showing that adaptive spatial neighbor selection performs better than fixed neighborhood selection. Another check that adaptive neighbors yield better class discrimination is to plot the training Laplacian matrix: Figure 6 shows a subgraph of the Laplacian matrix for the Botswana dataset, in which subgraphs grouping pixels of the same class are observed. This shows the effectiveness of the adaptive neighbors in aggregating same-class pixels and keeping different-class pixels apart, thereby improving the training of the GCN and the final classification results of the HSI.

4.2. Houston University

The training and testing sets and the classification accuracies obtained for the Houston University dataset are reported in Table 8. The results show an increase in classification accuracy with respect to the other GCN-based methods. The minimum per-class accuracy for the AN-GCN method is 90.88%, corresponding to the Parking lot 2 class, showing the method's consistency compared to the other methods. One of the classes with the lowest classification accuracy across almost all methods is the Commercial class, a behavior also reported in [16]; with the AN-GCN method, however, this class reaches 97.06%, showing its superior performance. Figure 8a,b show the groundtruth and the AN-GCN classification map, respectively. Given the improved classification performance, misclassified pixels are difficult to spot; however, a detailed look at the right side of the classification map reveals a few missed pixels.

4.3. Botswana

The classification results for the Botswana scene are given in Table 9, along with the numbers of training and testing pixels used. Botswana shows outstanding results with the AN-GCN method, which attains higher overall accuracy, average accuracy, and Kappa score than the other methods. Few classification results using GCNs have been reported for the Botswana dataset; nonetheless, the AN-GCN method classifies it efficiently.

4.4. Rice Seeds

The GCN architecture performs well in classifying the rice seed HSIs from the different temperature treatments, as well as the subclasses of six temperature exposure durations. It integrates spatial and spectral information adaptively, performing a pixel-based classification and obtaining good performance for all four temperature treatments as well as for the subclasses of varying exposure duration.
The classification results in Table 11 are lower than those in Table 10. This is expected, since in this case the different exposure durations come from the same treatment; even so, the proposed method is able to discriminate the exposure times within a treatment, showing that the rice seed undergoes changes as the exposure time to high temperature increases. The rice seed HSIs from the highest day/night temperature of 36/32 °C give the lowest accuracy, indicating that higher temperatures alter the seed's spectral-spatial characteristics drastically, making the classes hard to discriminate.
Comparing these results with those reported in [23], the authors of that work obtained a classification accuracy of 97.5% for only two classes using a 3D CNN with 80% of the data for training, whereas the AN-GCN method classifies four treatment classes and six exposure-duration classes using only 10% of the data for training, with satisfactory classification results.

5. Conclusions

This paper presents a new method for neighborhood aggregation based on statistics, which improves the graph representation of high-spectral-dimensional datasets such as hyperspectral images. The method performs better than recent state-of-the-art GCN implementations for hyperspectral image classification. It adapts to the spatial variability of each pixel's neighborhood and hence acts as a significant measure of discriminability in localized regions. Classification accuracy increases of about 4% are obtained on the University of Houston and Botswana datasets, and of about 1% on the Indian Pines dataset. The AN-GCN method successfully characterizes the intrinsic spatial-spectral properties of rice seeds grown under higher-than-normal temperatures and classifies hyperspectral images of these seeds with high precision using less than 10% of the data for training, placing AN-GCN as a preferable method for agricultural applications compared to other CNN-based methods.

Author Contributions

Conceptualization, J.O., V.M., E.A. and H.W.; methodology, J.O., V.M. and E.A.; software, J.O. and E.A.; validation, J.O. and E.A.; formal analysis, J.O.; investigation, V.M. and J.O.; resources, B.K.D., H.W. and V.M.; data curation, B.K.D., J.O. and E.A.; writing—original draft preparation, J.O. and V.M.; writing—review and editing, V.M.; visualization, J.O. and E.A.; supervision, V.M. and H.W.; project administration, V.M. and H.W.; funding acquisition, H.W. and V.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Foundation (NSF), grant number 1736192.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Acknowledgments

This work was funded and supported by the National Science Foundation (Grant No. 1736192) under a supplement award (H.W. and V.M.). The authors would like to thank doctoral students Sergio David Manzanarez, David Tatis Posada, and Victor Diaz Martinez of UPRM for their help in coding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gogineni, R.; Chaturvedi, A. Hyperspectral image classification. In Processing and Analysis of Hyperspectral Data; IntechOpen: London, UK, 2019.
2. Asif, N.; Sarker, Y.; Chakrabortty, R.; Ryan, M.; Ahamed, M.; Saha, D.; Badal, F.; Das, S.; Ali, M.; Moyeen, S.; et al. Graph Neural Network: A Comprehensive Review on Non-Euclidean Space. IEEE Access 2021, 9, 60588–60606.
3. Miller, B.; Bliss, N.; Wolfe, P. Toward signal processing theory for graphs and non-Euclidean data. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010; pp. 5414–5417.
4. Ren, H.; Lu, W.; Xiao, Y.; Chang, X.; Wang, X.; Dong, Z.; Fang, D. Graph convolutional networks in language and vision: A survey. Knowl.-Based Syst. 2022, 251, 109250.
5. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P. A Comprehensive Survey on Graph Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24.
6. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978.
7. Ma, F.; Gao, F.; Sun, J.; Zhou, H.; Hussain, A. Attention Graph Convolution Network for Image Segmentation in Big SAR Imagery Data. Remote Sens. 2019, 11, 2586.
8. Wan, S.; Gong, C.; Zhong, P.; Du, B.; Zhang, L.; Yang, J. Multi-scale Dynamic Graph Convolutional Network for Hyperspectral Image Classification. arXiv 2019, arXiv:1905.06133.
9. Jia, S.; Jiang, S.; Zhang, S.; Xu, M.; Jia, X. Graph-in-Graph Convolutional Network for Hyperspectral Image Classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15.
10. Qin, A.; Shang, Z.; Tian, J.; Wang, Y.; Zhang, T.; Tang, Y. Spectral–Spatial Graph Convolutional Networks for Semisupervised Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 241–245.
11. Mou, L.; Lu, X.; Li, X.; Zhu, X. Nonlocal Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8246–8257.
12. Ma, Z.; Jiang, Z.; Zhang, H. Hyperspectral Image Classification Using Feature Fusion Hypergraph Convolution Neural Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
13. Pu, S.; Wu, Y.; Sun, X.; Sun, X. Hyperspectral Image Classification with Localized Graph Convolutional Filtering. Remote Sens. 2021, 13, 526.
14. Liu, B.; Gao, K.; Yu, A.; Guo, W.; Wang, R.; Zuo, X. Semisupervised graph convolutional network for hyperspectral image classification. J. Appl. Remote Sens. 2020, 14, 026516.
15. Bai, J.; Ding, B.; Xiao, Z.; Jiao, L.; Chen, H.; Regan, A. Hyperspectral Image Classification Based on Deep Attention Graph Convolutional Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16.
16. Wan, S.; Gong, C.; Zhong, P.; Pan, S.; Li, G.; Yang, J. Hyperspectral Image Classification With Context-Aware Dynamic Graph Convolutional Network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 597–612.
17. Huang, Y.; Zhou, X.; Xi, B.; Li, J.; Kang, J.; Tang, S.; Chen, Z.; Hong, W. Diverse-Region Hyperspectral Image Classification via Superpixelwise Graph Convolution Technique. Remote Sens. 2022, 14, 2907.
18. Li, Z.; Chen, J.; Rahardja, S. Graph construction for hyperspectral data unmixing. In Hyperspectral Imaging in Agriculture, Food and Environment; IntechOpen: London, UK, 2018.
19. Hu, Y.; An, R.; Wang, B.; Xing, F.; Ju, F. Shape adaptive neighborhood information-based semi-supervised learning for hyperspectral image classification. Remote Sens. 2020, 12, 2976.
20. Fabiyi, S.; Vu, H.; Tachtatzis, C.; Murray, P.; Harle, D.; Dao, T.; Andonovic, I.; Ren, J.; Marshall, S. Varietal Classification of Rice Seeds Using RGB and Hyperspectral Images. IEEE Access 2020, 8, 22493–22505.
21. Thu Hong, P.; Thanh Hai, T.; Lan, L.; Hoang, V.; Hai, V.; Nguyen, T. Comparative Study on Vision Based Rice Seed Varieties Identification. In Proceedings of the 2015 Seventh International Conference on Knowledge and Systems Engineering (KSE), Ho Chi Minh City, Vietnam, 8–10 October 2015; pp. 377–382.
22. Liu, Z.; Cheng, F.; Ying, Y.; Rao, X. Identification of rice seed varieties using neural network. J. Zhejiang Univ. Sci. B 2005, 6, 1095–1100.
23. Gao, T.; Chandran, A.; Paul, P.; Walia, H.; Yu, H. HyperSeed: An End-to-End Method to Process Hyperspectral Images of Seeds. Sensors 2021, 21, 8184.
24. Lu, B.; Dao, P.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659.
25. Johnson, B.; Xie, Z. Unsupervised image segmentation evaluation and refinement using a multi-scale approach. ISPRS J. Photogramm. Remote Sens. 2011, 66, 473–483.
26. Hammond, D.; Vandergheynst, P.; Gribonval, R. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal. 2011, 30, 129–150.
27. Kipf, T.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
28. Polder, G.; Heijden, G.; Keizer, L.; Young, I. Calibration and Characterisation of Imaging Spectrographs. J. Near Infrared Spectrosc. 2003, 11, 193–210.
29. Zhang, C.; Wang, J.; Yao, K. Global Random Graph Convolution Network for Hyperspectral Image Classification. Remote Sens. 2021, 13, 2285.
30. Zhang, M.; Luo, H.; Song, W.; Mei, H.; Su, C. Spectral-Spatial Offset Graph Convolutional Networks for Hyperspectral Image Classification. Remote Sens. 2021, 13, 4342.
31. Wan, S.; Pan, S.; Zhong, P.; Chang, X.; Yang, J.; Gong, C. Dual Interactive Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
32. Chen, R.; Li, G.; Dai, C. DRGCN: Dual Residual Graph Convolutional Network for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
Figure 1. Scheme of neighbor selection using a variance radius.
Figure 2. AN-GCN architecture.
Figure 3. (a) Workflow for HSI seed image and ground truth mosaic construction; (b) HSI seed calibration application.
Figure 4. Spectrum of each material in the Indian Pines hyperspectral image.
Figure 5. Workflow for preprocessing of rice seed hyperspectral images.
Figure 6. Subgraphs of the Laplacian matrix from the Botswana dataset.
Figure 7. Classification map for the Indian Pines dataset. (a) Groundtruth map. (b) AN-GCN.
Figure 8. Classification map for the Houston University scene. (a) Groundtruth map. (b) AN-GCN.
Figure 9. Classification map of rice seed HSIs from different hours of exposure for each temperature treatment.
Figure 10. Classification maps for the rice HSI datasets for different temperature treatments: (a) HDT, (b) HDNT1, (c) HNT, (d) HDNT2.
Table 1. Threshold values for selecting neighborhood pixels for each HSI.

| Scene | Threshold Value |
|---|---|
| Indian Pines | 0.16 |
| University of Houston | 0.25 |
| Botswana | 0.026 |
| Rice seeds | 0.16 |
Table 2. Hyperspectral dataset specifications.

| Scene | Spatial Size (Pixels) | Spatial Resolution | Spectral Size (Bands) | Spectral Range (nm) | Sensor |
|---|---|---|---|---|---|
| Indian Pines | 145 × 145 | 20 m | 200 | 400–2500 | AVIRIS |
| Houston University | 349 × 1905 | 2.5 m | 144 | 380–1050 | ITRES-CASI |
| Botswana | 1476 × 256 | 30 m | 145 | 400–2500 | Hyperion EO-1 |
| Rice seed | 150 × 900 | 1100–1600 pixels | 268 | 600–1700 | Micro-Hyperspec® Imaging |
Table 3. Number of training and testing samples for the different classes in the Indian Pines dataset.

| Class No. | Class Name | Training | Testing |
|---|---|---|---|
| 1 | Corn Notill | 50 | 1384 |
| 2 | Corn Mintill | 50 | 784 |
| 3 | Corn | 50 | 184 |
| 4 | Grass Pasture | 50 | 447 |
| 5 | Grass Trees | 50 | 697 |
| 6 | Hay Windrowed | 50 | 439 |
| 7 | Soybean Notill | 50 | 918 |
| 8 | Soybean Mintill | 50 | 2418 |
| 9 | Soybean Clean | 50 | 564 |
| 10 | Wheat | 50 | 162 |
| 11 | Woods | 50 | 1244 |
| 12 | Buildings Grass Trees Drives | 50 | 330 |
| 13 | Stone Steel Towers | 50 | 45 |
| 14 | Alfalfa | 15 | 39 |
| 15 | Grass Pasture Mowed | 15 | 11 |
| 16 | Oats | 15 | 5 |
| | Total | 695 | 9671 |
Table 4. Number of training and testing samples for the different classes in the Houston University dataset.

| Class No. | Class Name | Training | Testing |
|---|---|---|---|
| 1 | Healthy grass | 198 | 1053 |
| 2 | Stressed grass | 190 | 1064 |
| 3 | Synthetic grass | 192 | 505 |
| 4 | Tree | 188 | 1056 |
| 5 | Soil | 186 | 1056 |
| 6 | Water | 182 | 143 |
| 7 | Residential | 196 | 1072 |
| 8 | Commercial | 191 | 1053 |
| 9 | Road | 193 | 1059 |
| 10 | Highway | 191 | 1036 |
| 11 | Railway | 181 | 1054 |
| 12 | Parking lot 1 | 192 | 1041 |
| 13 | Parking lot 2 | 184 | 285 |
| 14 | Tennis court | 181 | 247 |
| 15 | Running track | 187 | 473 |
| | Total | 2832 | 12,197 |
Table 5. Temperature treatments for the rice seed HSI dataset.

| Treatment Class | Day/Night Temperature (°C) |
|---|---|
| Control | 28/23 |
| HDNT2 (high day and night temperature 2) | 36/28 |
| HDNT1 (high day and night temperature 1) | 36/32 |
| HDT (high day temperature) | 36/23 |
| HNT (high night temperature) | 30/28 |
Table 6. Comparison of classification using different K-nearest neighbors in the Indian Pines dataset.

| K-Nearest Neighbors | OA (%) | AA (%) | Kappa Score |
|---|---|---|---|
| k = 4 | 82.71 | 87.12 | 0.8003 |
| k = 8 | 79.14 | 85.53 | 0.7568 |
| k = 12 | 78.91 | 83.49 | 0.7555 |
| k = 20 | 75.47 | 76.62 | 0.7121 |
| Adaptive neighborhood | 88.36 | 91.13 | 0.8453 |
Table 7. Classification performance (%) of various GCN methods for the Indian Pines dataset.

| Class No. | MiniGCN [29] | GCN [27] | MiniGCN [30] | GCN [12] | Non-Local GCN [11] | MiniGCN [6] | AN-GCN |
|---|---|---|---|---|---|---|---|
| 1 | 79.12 ± 7.04 | 56.71 ± 4.42 | 68.07 | 53.54 | 89.03 | 72.54 | 85.04 |
| 2 | 56.13 ± 6.46 | 51.50 ± 2.56 | 53.97 | 53.01 | 100.00 | 55.99 | 81.76 |
| 3 | 22.16 ± 16.37 | 84.64 ± 3.16 | 66.84 | 87.77 | 93.51 | 92.93 | 95.65 |
| 4 | 91.80 ± 1.10 | 83.71 ± 3.20 | 77.37 | 90.89 | 94.12 | 92.62 | 88.59 |
| 5 | 98.68 ± 0.69 | 94.03 ± 2.11 | 93.38 | 87.95 | 98.18 | 94.98 | 96.84 |
| 6 | 99.64 ± 0.36 | 96.61 ± 1.86 | 98.36 | 97.97 | 78.78 | 98.63 | 99.54 |
| 7 | 75.57 ± 5.67 | 77.47 ± 1.24 | 69.52 | 53.81 | 99.38 | 64.71 | 91.94 |
| 8 | 81.29 ± 5.56 | 56.56 ± 1.53 | 63.04 | 54.99 | 94.94 | 68.78 | 81.39 |
| 9 | 57.35 ± 4.07 | 58.29 ± 6.58 | 64.64 | 38.28 | 97.27 | 69.33 | 90.07 |
| 10 | 60.00 ± 37.42 | 100 ± 0.00 | 98.06 | 98.05 | 100.00 | 98.77 | 100.00 |
| 11 | 93.93 ± 2.04 | 80.03 ± 3.93 | 86.17 | 84.58 | 97.44 | 87.78 | 94.61 |
| 12 | 56.67 ± 8.12 | 69.55 ± 6.66 | 69.64 | 65.80 | 100.00 | 50.00 | 90.61 |
| 13 | 98.41 ± 0.00 | – | 90.70 | 97.85 | 100.00 | 100.00 | 100.00 |
| 14 | 95.00 ± 2.80 | – | 17.57 | 91.30 | 83.09 | 48.72 | 100.00 |
| 15 | 92.31 ± 0.00 | – | 100.00 | 85.71 | 88.24 | 72.73 | 100.00 |
| 16 | 100 ± 0.00 | – | 80.00 | 100.00 | 86.70 | 80.00 | 100.00 |
| OA (%) | 80.19 ± 0.57 | 69.24 ± 1.56 | 71.33 | 65.97 | 87.92 | 75.11 | 88.51 |
| AA (%) | 72.70 ± 3.76 | 80.93 ± 1.71 | 74.83 | 77.54 | 93.79 | 78.03 | 93.50 |
| Kappa | 0.7631 ± 0.065 | 65.27 ± 1.80 | 67.42 | 0.6184 | 0.8625 | 0.7164 | 0.8692 |
Table 8. Classification performance (%) of various GCN methods for the Houston University dataset.

| Class No. | MiniGCN [31] | DIGCN [31] | DRGCN [32] | GCN [27] | CAD-GCN [16] | MiniGCN [6] | AN-GCN |
|---|---|---|---|---|---|---|---|
| 1 | 94.85 ± 3.58 | 93.07 ± 2.73 | 82.8 | 88.16 ± 1.90 | 94.45 ± 3.49 | 98.39 | 100.00 |
| 2 | 98.35 ± 1.71 | 94.17 ± 2.93 | 93.38 | 97.20 ± 0.48 | 96.43 ± 2.83 | 92.11 | 98.34 |
| 3 | 98.09 ± 1.74 | 95.00 ± 1.68 | 98.95 | 97.91 ± 0.13 | 95.17 ± 4.11 | 99.6 | 100.00 |
| 4 | 95.60 ± 2.13 | 90.47 ± 4.09 | 90.03 | 96.55 ± 0.41 | 94.82 ± 2.38 | 96.78 | 99.62 |
| 5 | 98.64 ± 0.72 | 100.00 ± 0.00 | 97.02 | 89.79 ± 0.71 | 98.91 ± 1.51 | 97.73 | 100.00 |
| 6 | 96.58 ± 1.80 | 94.10 ± 3.86 | 98.3 | 98.21 ± 1.15 | 97.48 ± 3.48 | 95.1 | 100.00 |
| 7 | 76.05 ± 1.53 | 96.06 ± 2.80 | 88.77 | 73.67 ± 1.94 | 91.58 ± 3.16 | 57.28 | 95.94 |
| 8 | 77.28 ± 3.75 | 73.36 ± 5.63 | 80.06 | 65.71 ± 4.64 | 74.63 ± 4.82 | 68.09 | 97.06 |
| 9 | 78.98 ± 2.24 | 94.33 ± 3.33 | 94.18 | 70.27 ± 3.03 | 86.75 ± 3.58 | 53.92 | 91.76 |
| 10 | 82.92 ± 3.80 | 88.76 ± 7.63 | 99.66 | 74.71 ± 2.32 | 94.24 ± 3.34 | 77.41 | 99.23 |
| 11 | 70.07 ± 3.69 | 90.68 ± 4.32 | 97.42 | 75.36 ± 2.37 | 94.65 ± 2.73 | 84.91 | 97.97 |
| 12 | 85.87 ± 3.99 | 87.08 ± 4.25 | 91.93 | 79.29 ± 4.80 | 89.55 ± 1.93 | 77.23 | 97.98 |
| 13 | 80.93 ± 2.57 | 92.79 ± 4.34 | 84.51 | 12.09 ± 2.68 | 96.80 ± 3.68 | 50.88 | 90.88 |
| 14 | 97.73 ± 1.87 | 100.00 ± 0.00 | 100 | 86.03 ± 3.31 | 100 ± 0.00 | 98.38 | 100.00 |
| 15 | 99.04 ± 0.76 | 97.90 ± 1.62 | 95.07 | 95.29 ± 1.67 | 98.02 ± 1.42 | 98.52 | 100.00 |
| OA (%) | 87.00 ± 0.71 | 91.72 ± 0.64 | 92.15 | 80.35 ± 0.61 | 92.51 ± 0.73 | 81.71 | 97.88 |
| AA (%) | 88.73 ± 0.58 | 92.52 ± 0.16 | 92.8 | 80.02 ± 0.46 | 93.57 ± 0.60 | 83.09 | 97.92 |
| Kappa | 0.8594 ± 0.077 | 0.9103 ± 0.069 | 0.9151 | 0.7872 ± 0.066 | 0.9189 ± 0.078 | 0.8018 | 0.9770 |
Table 9. Number of training and testing samples for the different classes and classification performance (%) of various GCN methods for the Botswana dataset.

| Class Name | Training | Testing | Class No. | GCN [12] | S2GCN [10] | AN-GCN |
|---|---|---|---|---|---|---|
| Water | 30 | 250 | 1 | 100.00 | 100.00 | 100.00 |
| Hippo grass | 30 | 81 | 2 | 98.02 | 100.00 | 100.00 |
| Floodplain grasses 1 | 30 | 226 | 3 | 98.01 | 100.00 | 100.00 |
| Floodplain grasses 2 | 30 | 190 | 4 | 97.67 | 100.00 | 98.38 |
| Reeds | 30 | 244 | 5 | 80.67 | 89.97 | 93.72 |
| Riparian | 30 | 244 | 6 | 65.43 | 92.97 | 98.74 |
| Firescar | 30 | 234 | 7 | 96.14 | 96.60 | 100.00 |
| Island interior | 30 | 178 | 8 | 98.03 | 92.59 | 100.00 |
| Acacia woodlands | 30 | 289 | 9 | 80.25 | 100.00 | 100.00 |
| Acacia shrublands | 30 | 223 | 10 | 94.76 | 77.78 | 98.17 |
| Acacia grasslands | 30 | 280 | 11 | 86.89 | 100.00 | 100.00 |
| Short mopane | 30 | 156 | 12 | 86.74 | 85.11 | 100.00 |
| Mixed mopane | 30 | 243 | 13 | 91.42 | 100.00 | 100.00 |
| Exposed soils | 30 | 70 | 14 | 82.11 | 100.00 | 100.00 |
| Total | 420 | 2908 | AA (%) | 89.22 | 94.45 | 99.22 |
| | | | OA (%) | 89.72 | 95.36 | 99.11 |
| | | | Kappa | 0.8745 | 0.9399 | 0.9904 |
Table 10. Classification performance for the rice HSI datasets for different temperature treatments.

| Class No. | Treatment | 168 h | 180 h | 204 h | 216 h | 228 h | 240 h |
|---|---|---|---|---|---|---|---|
| 1 | HDT | 0.95 | 0.99 | 0.97 | 1.00 | 0.98 | 0.98 |
| 2 | HDNT1 | 0.95 | 0.95 | 0.98 | 0.86 | 0.93 | 0.97 |
| 3 | HDNT2 | 0.81 | 0.86 | 0.94 | 0.94 | 0.98 | 0.90 |
| 4 | HNT | 0.95 | 0.81 | 0.96 | 0.89 | 0.98 | 0.65 |
| 5 | Control | 0.98 | 0.99 | 0.97 | 1.00 | 0.95 | 0.94 |
| | OA | 0.93 | 0.92 | 0.96 | 0.93 | 0.96 | 0.91 |
| | AA | 0.93 | 0.92 | 0.96 | 0.94 | 0.96 | 0.89 |
| | Kappa score | 0.92 | 0.90 | 0.95 | 0.91 | 0.95 | 0.89 |
Table 11. Classification performance for the rice HSI datasets for different exposure times.

| Class No. | Exposure Time (Hours) | HDT | HDNT1 | HDNT2 | HNT |
|---|---|---|---|---|---|
| 1 | 168 | 0.99 | 0.74 | 0.83 | 0.92 |
| 2 | 180 | 0.98 | 0.84 | 0.90 | 0.96 |
| 3 | 204 | 0.98 | 0.97 | 0.89 | 0.89 |
| 4 | 216 | 0.95 | 0.71 | 0.84 | 0.79 |
| 5 | 228 | 0.88 | 0.74 | 0.88 | 0.98 |
| 6 | 240 | 0.96 | 0.97 | 0.97 | 0.95 |
| | OA | 0.96 | 0.81 | 0.88 | 0.91 |
| | AA | 0.96 | 0.83 | 0.88 | 0.91 |
| | Kappa score | 0.96 | 0.77 | 0.86 | 0.89 |

