Article

A Band Selection Approach for Hyperspectral Image Based on a Modified Hybrid Rice Optimization Algorithm

1 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
2 Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362000, China
3 Key Laboratory of Intelligent Computing and Information Processing, Quanzhou 362000, China
4 High Performance Computing Academician Workstation of Sanya University, Sanya 572022, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(7), 1293; https://doi.org/10.3390/sym14071293
Submission received: 31 May 2022 / Revised: 9 June 2022 / Accepted: 17 June 2022 / Published: 22 June 2022

Abstract: Hyperspectral image (HSI) analysis has become one of the most active topics in the field of remote sensing, as it can provide powerful assistance for sensing a larger-scale environment. Nevertheless, the large number of highly correlated and redundant bands in HSI data poses a massive challenge for image recognition and classification. Hybrid Rice Optimization (HRO) is a novel meta-heuristic whose population is divided into three groups with an approximately equal number of individuals according to self-equilibrium and symmetry, and it has been successfully applied to band selection. However, primary HRO has limitations in the local search for better solutions, which may cause it to overlook a promising solution. Therefore, a modified HRO (MHRO) based on an opposition-based-learning (OBL) strategy and differential evolution (DE) operators is proposed for band selection in this paper. Firstly, OBL is adopted in the initialization phase of MHRO to increase the diversity of the population. Then, the exploitation ability is enhanced by embedding DE operators into the search process at each iteration. Experimental results verify that the proposed method is superior in both classification accuracy and the number of selected bands compared to the other algorithms considered in the paper.

1. Introduction

Recently, hyperspectral remote sensing has been broadly and successfully applied in urban planning [1], precision agriculture [2], environmental monitoring [3] and other fields, driven by the constantly increasing spectral resolution of sensors. Hyperspectral remote sensing combines spectral features with spatial images so that ground objects can be accurately identified and detected, which provides strong technical support for ground feature extraction [4]. However, hyperspectral images (HSIs) acquired in hundreds of narrow and contiguous bands from the visible to the infrared region of the electromagnetic spectrum are characterized by a high-dimensional space and a large number of spectral bands [5], which makes the processing and analysis of HSI a challenging task. Therefore, dimensionality reduction becomes a crucial task for hyperspectral data analysis [6].
Feature extraction and feature selection are two typical dimension-reduction methods. In feature extraction, the original hyperspectral datasets are transformed into a low-dimensional and less-redundant feature space using techniques such as independent component analysis (ICA) [7], principal component analysis (PCA) [8], and local linear embedding (LLE) [9]. Although these methods can extract valuable features from HSI datasets, they often lose the physical information of the original data during data compression [10]. In contrast, feature selection chooses the most informative feature subset while preserving the physical meaning of the original data, which makes it an important and popular method for reducing dimensions [11]. In traditional filter methods, the feature subset is built independently of the classifier or classification algorithm and can be evaluated based on different measures such as distance, correlation and information measures [12], while wrapper methods use the classifier model to estimate feature subsets. Although filter methods are computationally simple and fast [13], they are generally less accurate than wrapper methods because they are not guided by classifiers [14]. In general, feature-selection methods can be divided into supervised and unsupervised according to the availability of sample labels [15]. Unsupervised methods can select a subset of bands without class labels, but they tend to be unstable and biased due to the lack of prior information [16]. In comparison, supervised methods tend to obtain better feature-selection results with the assistance of class labels.
Supervised feature-selection methods include three search strategies: exhaustive search, sequential search, and random search [10]. Exhaustive search requires enumerating all possible combinations of features [17], which results in unacceptable time complexity for HSI. Sequential search includes sequential forward search (SFS), sequential backward search (SBS), and sequential floating forward search (SFFS) [18]. These methods require heavy computation, tend to get stuck in local optima, and struggle to perform well in the presence of strongly correlated bands in HSI [19]. By contrast, random search introduces randomness into the search process to escape local optima and deliver promising results with higher efficiency. Recently, a number of nature-inspired stochastic search algorithms have been extensively utilized for feature selection owing to their strong search ability in large-scale spaces [20]. These include the genetic algorithm (GA) [21], differential evolution (DE) algorithm [22], particle swarm optimization (PSO) [23], gray wolf optimizer (GWO) [24], cuckoo search (CS) algorithm [25], artificial bee colony (ABC) algorithm [26] and whale optimization algorithm (WOA) [27], all of which may perform well on feature-selection problems.
For HSI band selection, Nagasubramanian et al. [28] used GA to select the optimal subset of bands and a support vector machine (SVM) to classify infected and healthy samples. Additionally, the classification accuracy was replaced by the F1-score to alleviate the skewness caused by unbalanced datasets. The results showed that the bands chosen by this approach were more informative than RGB images. Xie et al. [29] proposed a band selection method based on the ABC algorithm and enhanced subspace decomposition for HSI classification. Subspace decomposition was realized by computing the relevance between adjacent bands, and the ABC algorithm was guided by enhanced subspace decomposition and maximum entropy to optimize the combination of selected bands, which provided high classification accuracy compared with six related techniques. Wang et al. [30] proposed a wrapper feature-selection approach based on an improved ant lion optimizer (ALO) and wavelet SVM to reduce the dimension of HSI. Lévy flight was used to help ALO jump out of local optima, and the wavelet SVM was introduced to improve the stability of the classification results. The results showed that the proposed method can provide satisfactory classification accuracy with fewer bands. Subsequently, Wang et al. [31] designed a new band selection method that uses a chaos operation to set corresponding indices for the top three gray wolves in GWO to improve its optimization ability, and experimental results demonstrated that this approach can obtain a suitable band subset and achieve superior classification accuracy. Kavitha and Jenifa [32] used the discrete wavelet transform with eight taps and four taps to extract the important features, applied the PSO algorithm to search for the optimal band subsets, and utilized SVM as a classifier to classify HSI effectively. Medjahed et al. [33] introduced a novel band selection framework based on the binary CS algorithm. The experiment compared the optimization ability of CS under two different objective functions and proved that it could obtain better results than related approaches while using only a few instances for training. Su et al. [34] proposed a modified firefly algorithm (FA) that deals with the band selection problem by minimizing the objective function, and it delivered better results than SFS and PSO. In essence, band selection is an NP-hard problem: as the number of bands increases, the above algorithms may suffer from premature convergence and even stagnation of optimization.
Hybrid rice optimization (HRO) [35] is a newly proposed nature-inspired algorithm that has been successfully applied to image processing and the knapsack problem because of its simple structure and strong optimization ability. For example, Liu et al. [36] presented an image segmentation method that uses HRO to find the fittest multi-level thresholds with Renyi's entropy as the fitness function, and experiments proved that HRO prevailed over six other commonly used evolutionary algorithms on most metrics. Shu et al. [37] designed two different hybrid models for the complex large-scale 0–1 knapsack problem by combining an improved HRO with a binary ant colony algorithm in novel ways, which achieved better performance on datasets of different sizes. In addition, Ye et al. [38] regarded band selection as a combinatorial optimization problem and employed binary HRO to select the optimal band set for HSI, which obtained good results in classification precision and execution efficiency. Although the HRO algorithm has helped to acquire satisfactory results, primary HRO exploits the current best solution inadequately during each search step.
Recently, the DE algorithm has been successfully combined with other swarm intelligence algorithms to solve diverse optimization problems. Tubishat et al. [39] employed evolutionary operators from the DE algorithm to help each whale seek better positions and improve the local search capability of WOA for feature selection in sentiment analysis. Jadon et al. [40] hybridized the DE algorithm with the ABC algorithm to enhance convergence and the balance between exploration and exploitation. Houssein et al. [41] hybridized the adaptive guided DE algorithm with the slime mould algorithm for combinatorial optimization problems, which verified that evolutionary operators can boost the local search capability of swarm agents. Hence, a modified HRO (MHRO) based on an opposition-based-learning (OBL) strategy and differential evolution (DE) operators is proposed in this paper to overcome the disadvantages of standard HRO. The main contributions of this paper are summarized as follows:
(1)
OBL strategy is introduced to enhance the diversity of the initial population and accelerate the convergence of MHRO;
(2)
DE operators are embedded into the search process of MHRO to enhance the local exploitation ability;
(3)
The MHRO algorithm is applied in band selection and its performance is demonstrated on standard HSI datasets.
The remainder of the paper is organized as follows: Section 2 briefly gives a fundamental overview of the related techniques and the standard HRO algorithm. The methodology and the specific workflow of the proposed band selection approach are introduced in Section 3. Section 4 presents the experimental results and comparative studies. Finally, conclusions and future work are summarized in the last section.

2. Background

2.1. Overview of Hybrid Rice Optimization Algorithm

The HRO algorithm is a meta-heuristic that simulates the breeding process of three-line hybrid rice. At each iteration, the rice seed population is sorted by fitness from superior to inferior and divided into three sub-populations. According to self-equilibrium and symmetry, each sub-population is assigned an equal number of individuals. The individuals in the top third of the fitness ranking are selected into the maintainer line, the bottom third form the sterile line, and the remaining individuals belong to the restorer line. The algorithm consists of three stages: hybridization, selfing, and renewal.

2.1.1. Hybridization

Hybridization is performed to renew the rice seed genes in the sterile line. Two rice seeds used to reconstruct one new individual are randomly chosen from the maintainer line and the sterile line, respectively. If the new rice seed is superior to the current one, the current rice seed is replaced by the new one. The new gene produced by hybridization is given in Equation (1).

$$X_{new(i)}^{k} = \frac{r_1 X_{s,r}^{k} + r_2 X_{m,r}^{k}}{r_1 + r_2} \tag{1}$$

where $X_{new(i)}^{k}$ represents the new k-th gene of the i-th rice seed in the sterile line, $X_{s,r}^{k}$ is the k-th gene of an individual randomly selected from the sterile line, $X_{m,r}^{k}$ is the k-th gene of an individual randomly selected from the maintainer line, and $r_1$ and $r_2$ are random numbers in $[-1, 1]$.

2.1.2. Selfing

Selfing optimizes the gene sequences of rice seeds in the restorer line, which makes the rice seeds gradually approach the best one; the update is given in Equation (2).

$$X_{new(i)} = rand(0,1) \cdot (X_{best} - X_{j,r}) + X_i \tag{2}$$

where $X_{new(i)}$ is the new individual produced by selfing of the i-th restorer, $X_{best}$ represents the current optimal solution, and $X_{j,r}$ is the j-th individual randomly selected from the restorer line ($i \neq j$). If the new individual is superior to the old one, the old individual is replaced and the current selfing counter $t_i$ is reset to 0; otherwise, $t_i = t_i + 1$.

2.1.3. Renewal

This stage is a reset operation for rice seeds in the restorer line that have not been updated for $t_{max}$ consecutive times (i.e., the maximum selfing time has been reached); the renewal strategy is given in Equation (3).

$$X_{new(i)} = X_i + rand(0,1) \cdot (R_{max} - R_{min}) + R_{min} \tag{3}$$

where $X_{new(i)}$ is the new individual produced by renewal of the i-th restorer, and $R_{max}$ and $R_{min}$ are the upper and lower bounds of the search space, respectively.
In summary, the flow of HRO is described in Algorithm 1.
Algorithm 1. Pseudo-Code of HRO
1: Input: the predefined parameters of HRO
2: Output: the global best solution and its fitness function value
3: Initialize the rice seed population randomly
4: Initialize t_i = 0, k = 0
5: While ( k < maximum number of iterations)
6: Calculate the fitness function for each rice seed
7: Divide the rice seeds into three lines
8: for each rice seed in the sterile line
9:  Randomly select corresponding rice seeds in the sterile line and in the maintainer line
10:  The new gene is obtained by Equation (1)
11:  if the new rice seed is better
12:   Update the current rice seed
13:  end if
14: end for
15: for each rice seed in the restorer line
16:  if t_i < t_max
17:   The new rice seed is obtained by Equation (2)
18:   if the new rice seed is better
19:    Update the rice seed
20:    t_i = 0
21:   else
22:    t_i = t_i + 1
23:   end if
24:  else
25:   The rice seed is renewed by Equation (3)
26:  end if
27: end for
28:  k = k + 1
29: end while
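To make the three stages concrete, the following Python sketch mirrors Algorithm 1 in a continuous search space. It is a minimal illustration under stated assumptions, not the authors' reference implementation: the small epsilon guarding the denominator of Equation (1), the bound clipping, the counter reset after renewal, and a population size divisible by three (so the lines stay equal) are our additions.

```python
import numpy as np

def hro(fitness, dim, pop_size=30, max_iter=100, t_max=10,
        r_min=-1.0, r_max=1.0, seed=0):
    """Minimize `fitness` with the three-line HRO scheme of Algorithm 1."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(r_min, r_max, (pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    t = np.zeros(pop_size, dtype=int)                 # selfing counters t_i
    for _ in range(max_iter):
        order = np.argsort(fit)                       # best (smallest) first
        pop, fit, t = pop[order], fit[order], t[order]
        third = pop_size // 3
        maintainer = np.arange(third)                 # top third
        restorer = np.arange(third, 2 * third)        # middle third
        sterile = np.arange(2 * third, pop_size)      # bottom third
        best = pop[0].copy()
        for i in sterile:                             # hybridization, Eq. (1)
            r1, r2 = rng.uniform(-1, 1, 2)
            new = np.clip((r1 * pop[rng.choice(sterile)]
                           + r2 * pop[rng.choice(maintainer)])
                          / (r1 + r2 + 1e-12), r_min, r_max)
            new_fit = fitness(new)
            if new_fit < fit[i]:
                pop[i], fit[i] = new, new_fit
        for i in restorer:
            if t[i] < t_max:                          # selfing, Eq. (2)
                j = rng.choice(restorer[restorer != i])
                new = np.clip(rng.uniform() * (best - pop[j]) + pop[i],
                              r_min, r_max)
                new_fit = fitness(new)
                if new_fit < fit[i]:
                    pop[i], fit[i], t[i] = new, new_fit, 0
                else:
                    t[i] += 1
            else:                                     # renewal, Eq. (3)
                pop[i] = np.clip(pop[i] + rng.uniform() * (r_max - r_min)
                                 + r_min, r_min, r_max)
                fit[i], t[i] = fitness(pop[i]), 0
    return pop[np.argmin(fit)], fit.min()
```

For example, `hro(lambda x: np.sum(x**2), dim=10)` minimizes the sphere function with this scheme.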

2.2. The Binary Coding

In general, data can be divided into two types: continuous and discrete. The basic HRO is designed for optimization problems with a continuous search space. However, band selection for HSI is a discrete optimization problem, which is difficult to solve with the standard HRO. Under binary coding, each individual in HRO is represented by a binary string whose elements are limited to 0 or 1. To solve the band selection problem, the continuous value of each candidate solution in the rice seed population must therefore be mapped to a binary value of 0 or 1. Accordingly, a sigmoid function is used to perform this transformation in the paper, as given in Equations (4) and (5).
$$S(x) = \frac{1}{1 + e^{-x}} \tag{4}$$

$$X_i^k = \begin{cases} 1, & S(X_i^k) > 0.5 \\ 0, & \text{else} \end{cases} \tag{5}$$

where $x$ is a real number and $X_i^k$ represents the k-th gene of the i-th new rice seed.
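As a small illustration, a sketch of this mapping (the function name is ours) might look as follows:

```python
import numpy as np

def binarize(x):
    """Map a real-valued gene vector to a 0/1 band mask via Eqs. (4) and (5)."""
    s = 1.0 / (1.0 + np.exp(-x))        # sigmoid S(x), Equation (4)
    return (s > 0.5).astype(int)        # Equation (5): 1 when S(x) > 0.5, else 0

print(binarize(np.array([1.2, -0.7, 0.0])))   # -> [1 0 0]
```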

2.3. The Opposition Based Learning

Generally, a good initial position of the population individuals can accelerate the convergence of the algorithm. If the initial guess is far from the position of the unknown optimal solution, the algorithm converges more slowly. The OBL strategy considers both the current solution and its opposite solution to improve the diversity of the population. The OBL method then selects the fittest solutions among all initial solutions as the initial population, which can effectively broaden the search space of the algorithm.
Definition 1.
Let $x$ be a real number between $lb$ and $ub$. The opposite number $\tilde{x}$ of $x$ is calculated as in Equation (6).

$$\tilde{x} = lb + ub - x \tag{6}$$

where $lb$ and $ub$ are the lower and upper bounds of the search space, respectively. Similarly, the opposite number can also be defined in multidimensional space.
Definition 2.
Let $x = \{x_1, x_2, \ldots, x_D\}$ be a point in D-dimensional space, where $x_i \in [lb_i, ub_i]$. The opposite point $\tilde{x} = \{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D\}$ is defined in Equation (7).

$$\tilde{x}_i = lb_i + ub_i - x_i \tag{7}$$
Definition 3.
Let $x = \{x_1, x_2, \ldots, x_D\}$ be a bit string in D-dimensional space, where each $x_i$ is 0 or 1. The incomplete opposite point $\tilde{x} = \{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_D\}$ can be expressed as in Equation (8).

$$\tilde{x}_i = \begin{cases} 1 - x_i, & rand(0,1) > r \\ x_i, & \text{else} \end{cases} \tag{8}$$

where $r \in [0, 1]$ represents the proportion of components taking the opposite value; $r = 0.5$ is used in this paper.
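A compact Python sketch of Equations (7) and (8) (function names are ours) could read:

```python
import numpy as np

def opposite_point(x, lb, ub):
    """Equation (7): component-wise opposite of a point in a box-bounded space."""
    return lb + ub - x

def incomplete_opposite(x, r=0.5, rng=None):
    """Equation (8): flip each bit of a 0/1 string when rand(0,1) > r."""
    if rng is None:
        rng = np.random.default_rng()
    flip = rng.random(x.shape) > r
    return np.where(flip, 1 - x, x)
```

With r = 0.5, each bit is flipped with probability 0.5 on average, so the opposite string explores a markedly different region of the binary search space.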

2.4. Differential Evolution

The DE algorithm, which includes mutation, crossover and selection operators, is a simple yet promising method for solving optimization problems.

2.4.1. Mutation

The aim of this step is to form a new vector from three distinct target vectors randomly selected from the population. In each iteration, a mutant vector $V_i$ ($i = 1, \ldots, N$, where $N$ is the population size) is generated using Equation (9).

$$V_i = X_{r1} + F \cdot (X_{r2} - X_{r3}) \tag{9}$$

where $X_{r1}$, $X_{r2}$ and $X_{r3}$ are mutually distinct solution vectors randomly selected from the population and $F$ is the mutation factor.

2.4.2. Crossover

After the mutation operation, the mutant vector $V_i$ is crossed over with its corresponding target vector $X_i$. The crossover process is defined in Equation (10).

$$U_i^j = \begin{cases} V_i^j, & \text{if } rand(0,1) \le CR \text{ or } j = j_{rand} \\ X_i^j, & \text{else} \end{cases} \tag{10}$$

where $j = 1, \ldots, D$ and $D$ represents the dimension of the problem, $CR$ is the crossover rate, and $j_{rand}$ is a randomly chosen integer within $[1, D]$.

2.4.3. Selection

The selection process evaluates the fitness of the target vector $X_i$ and the trial vector $U_i$ obtained after the crossover operator, and the better vector survives into the next generation. The selection strategy is given in Equation (11).

$$X_i = \begin{cases} U_i, & \text{if } f(U_i) < f(X_i) \\ X_i, & \text{else} \end{cases} \tag{11}$$

where $f(U_i)$ and $f(X_i)$ are the fitness values of the vectors $U_i$ and $X_i$, respectively.
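Putting the three operators together, one generation of the classic DE/rand/1/bin scheme can be sketched as follows; the crossover rate CR = 0.9 is illustrative, not a value from the paper:

```python
import numpy as np

def de_generation(pop, fit, fitness, CR=0.9, rng=None):
    """One DE generation following Equations (9)-(11) (minimization)."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(candidates, 3, replace=False)
        F = rng.uniform(0, 1)                   # random F (Section 3.3); a fixed F is also common
        v = pop[r1] + F * (pop[r2] - pop[r3])   # mutation, Equation (9)
        mask = rng.random(d) <= CR
        mask[rng.integers(d)] = True            # the j_rand gene always comes from v
        u = np.where(mask, v, pop[i])           # crossover, Equation (10)
        fu = fitness(u)
        if fu < fit[i]:                         # greedy selection, Equation (11)
            pop[i], fit[i] = u, fu
    return pop, fit
```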

3. The Proposed Band Selection Method

To overcome the disadvantages of the primary HRO algorithm, two strategies are used to enhance the performance of HRO for handling the band selection problem. The main steps of the proposed technique are described in the following subsections.

3.1. The Coding Scheme

The key to handling the band selection issue is an appropriate mapping between the problem solution and the algorithm coding. For band selection of HSI, each band has two candidate states, selected or not selected, which is naturally represented by binary coding. In HRO, each gene bit is represented by "1" or "0", where "1" means that the corresponding band is selected and will be utilized for training, and "0" means that the corresponding band is not chosen. Supposing that an HSI contains ten bands and the binary coding of MHRO is "1001100101", the 1st, 4th, 5th, 8th and 10th bands will be selected for the subsequent classification task.
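Decoding such a string into band indices is a one-liner; the following snippet (variable names are ours) reproduces the ten-band example above:

```python
import numpy as np

mask = np.array([1, 0, 0, 1, 1, 0, 0, 1, 0, 1])   # the example string "1001100101"
selected = np.flatnonzero(mask)                    # 0-based indices: [0 3 4 7 9]
print(selected + 1)                                # 1-based band numbers: [ 1  4  5  8 10]
# X_subset = X[:, selected]   # keep only the chosen bands of a pixel matrix X
```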

3.2. The Objective Function

The proposed band selection method minimizes the fitness (objective) function with the MHRO algorithm. The main purpose of this method is to select the most informative subset from the original bands so as to maximize the classification accuracy. Accordingly, SVM is adopted to conduct the classification on the HSI datasets, and the classification accuracy is used as part of the objective function. In a band selection technique, classification accuracy is an important metric, but reducing the number of redundant bands is also one of the most crucial goals. Therefore, the objective function shown in Equation (12) is utilized in the paper.

$$Fitness = \alpha \cdot (1 - \text{OA}) + (1 - \alpha) \cdot \frac{n_s}{n_c} \tag{12}$$

where $Fitness$ denotes the fitness value and OA represents the overall classification accuracy, whose definition is given in Appendix A.1. Note that $n_c$ is the total number of bands and $n_s$ is the number of selected bands. $\alpha$ is a weight factor that balances classification accuracy against the number of selected bands; $\alpha = 0.99$ is used in this paper.
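A hedged sketch of Equation (12), assuming scikit-learn's SVC as the SVM classifier (the paper does not state the SVM implementation or its hyperparameters), might look as follows:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def band_fitness(mask, X_train, y_train, X_test, y_test, alpha=0.99):
    """Equation (12): weighted sum of (1 - OA) and the fraction of selected bands."""
    selected = np.flatnonzero(mask)
    if selected.size == 0:            # an empty band subset is invalid; penalize it
        return 1.0
    clf = SVC().fit(X_train[:, selected], y_train)
    oa = accuracy_score(y_test, clf.predict(X_test[:, selected]))
    return alpha * (1.0 - oa) + (1.0 - alpha) * selected.size / mask.size
```

With alpha = 0.99, a 1% gain in OA changes the fitness by about as much as the entire band-count term, so accuracy dominates the trade-off, as intended.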

3.3. The Implementation of the Proposed MHRO

The proposed band selection method is easy to implement, and its idea is to choose the optimal band subset with satisfactory classification results. The two improvements contained in the proposed MHRO are presented in Figure 1. The first is to adopt OBL in the initialization stage, with the aim of improving population diversity. The second is the combination of DE operators with the binary HRO algorithm, which improves the local search ability of the algorithm. The main procedure of these strategies in MHRO is described as follows:
OBL: In the population initialization stage, the position of each rice seed is randomly generated in the specified space. Then, a new population is formed by generating the corresponding opposite individual for each rice seed in the initial population using the OBL mechanism. Next, the individuals in the initial and new populations are sorted according to their fitness values, and the top N individuals are selected into the final population. The main steps of initializing the population by OBL are as follows (a code sketch is given after this list):
(1)
Initialize the location of each rice seed randomly. Let $X_i = \{x_i^1, x_i^2, \ldots, x_i^j, \ldots, x_i^D\}$ be the i-th rice seed in the initial population X, where $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, D$; N denotes the population size and D represents the dimension of the problem;
(2)
A new population OX is obtained by applying Equation (8) to each rice seed in the population X;
(3)
The N fittest individuals are chosen from the set $\{X \cup OX\}$ to constitute the new initial population of the MHRO algorithm.
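The three steps can be sketched compactly (names are ours; the fitness function is assumed to follow Equation (12)):

```python
import numpy as np

def obl_initialize(fitness, N, D, rng=None):
    """Steps (1)-(3): random 0/1 seeds plus their Eq. (8) opposites, keep the N fittest."""
    if rng is None:
        rng = np.random.default_rng()
    X = rng.integers(0, 2, (N, D))                      # step (1): random binary seeds
    OX = np.where(rng.random((N, D)) > 0.5, 1 - X, X)   # step (2): Eq. (8) with r = 0.5
    union = np.vstack([X, OX])                          # step (3): rank X union OX ...
    fit = np.array([fitness(x) for x in union])
    keep = np.argsort(fit)[:N]                          # ... and keep the N fittest
    return union[keep], fit[keep]
```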
DE operators: In HRO, only individuals in the sterile and restorer lines are updated, while the maintainer line is ignored, which reduces the search performance of the algorithm on high-dimensional band selection. Therefore, the DE evolution operators of Equations (9)–(11) are applied to the gene sequences of each rice seed in the maintainer line to find better rice seeds. To reduce the probability of falling into a local optimum, the mutation factor F in Equation (9) is set to a random number between 0 and 1, and $X_{r1}$, $X_{r2}$ and $X_{r3}$ are individuals randomly selected from the maintainer line. If the fitness value of the newly generated trial solution is better than that of the current individual, the current individual is replaced; otherwise, it is kept.
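The following sketch shows one way to realize this maintainer-line update. How exactly the crossover of Equation (10) interleaves with the sigmoid mapping of Equations (4) and (5) is not spelled out in the paper, so this ordering (mutate, cross over, then binarize) and the CR value are our assumptions; it reuses binarize from the sketch in Section 2.2.

```python
import numpy as np

def update_maintainer(pop, fit, maintainer, fitness, CR=0.9, rng=None):
    """DE-based update of maintainer-line rice seeds (needs >= 4 maintainers)."""
    if rng is None:
        rng = np.random.default_rng()
    d = pop.shape[1]
    for i in maintainer:
        others = [j for j in maintainer if j != i]
        r1, r2, r3 = rng.choice(others, 3, replace=False)
        F = rng.uniform(0, 1)                    # random mutation factor, per the text
        v = pop[r1] + F * (pop[r2] - pop[r3])    # Equation (9) on maintainer seeds
        mask = rng.random(d) <= CR
        mask[rng.integers(d)] = True
        u = binarize(np.where(mask, v, pop[i]))  # Eq. (10), then Eqs. (4)-(5)
        fu = fitness(u)
        if fu < fit[i]:                          # greedy replacement, Equation (11)
            pop[i], fit[i] = u, fu
```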

4. Experimental Results and Discussions

In this section, two sets of experiments are carried out to compare the proposed MHRO approach with multiple swarm intelligence algorithms and filter-based feature-selection methods. All the algorithms in these experiments are implemented in Python 3.9 and run on a PC with an Intel(R) Core(TM) i7-10700 @ 2.9 GHz CPU and 16 GB memory under the Windows 10 operating system.

4.1. Datasets Description

Five public HSI datasets are employed to evaluate the performance of the proposed band selection method, including the Kennedy Space Center (KSC), Botswana, Indian Pines, Salinas and Pavia University datasets.
KSC: KSC was captured by the AVIRIS sensor over the Kennedy Space Center in Florida. The KSC image is 512 × 614 pixels, covering the spectrum from 0.4 to 2.5 µm. The data contain 224 bands and have a spatial resolution of 18 m. Noisy bands with a low signal-to-noise ratio or water absorption are dropped, and the remaining 176 bands are used for analysis. Figure 2 shows the false-color image of the KSC dataset and the corresponding ground truth image. Details of the category information are listed in Table 1.
Botswana: The Botswana image, acquired over the Okavango Delta, Botswana on 31 May 2001 by the NASA EO-1 satellite, is of size 1476 × 256 pixels. EO-1 acquires data in 242 bands covering the 0.4 µm to 2.5 µm portion of the spectrum in 10 nm windows. Noisy bands are discarded, and the remaining 145 bands are taken as candidate features. The false-color image of the Botswana dataset and the corresponding ground truth image are illustrated in Figure 3. The specific category information is listed in Table 2.
Indian Pines: The Indian Pines dataset was gathered by the AVIRIS sensor in northwestern Indiana. The acquired image consists of 145 × 145 pixels and 224 original spectral bands spanning 0.4 µm to 2.5 µm. The total number of bands is reduced to 200 by removing bands covering the region of water absorption. Figure 4 shows the false-color image of the Indian Pines dataset and the corresponding ground truth image. The specific category information is given in Table 3.
Salinas: The Salinas dataset shown in Figure 5 was obtained by the AVIRIS sensor over Salinas Valley. The HSI comprises 512 × 217 pixels and 224 bands in the spectral range 0.4–2.5 µm. In this scene, 204 bands are retained after discarding 20 water absorption bands. The detailed category information of the Salinas dataset is given in Table 4.
Pavia University: The last dataset was collected over Pavia University in 2002. The Pavia University image is 610 × 340 pixels, with 103 spectral bands. This image was taken by the ROSIS sensor over the wavelength range of 0.43 µm to 0.86 µm. The false-color image of the Pavia University dataset and the corresponding ground truth image are illustrated in Figure 6. The concrete category information is listed in Table 5.

4.2. Parameter Settings

Appropriate parameter settings can improve the optimization ability of an algorithm. In the following experiments, the proposed band selection technique is compared with GA [28], PSO [32], CS [33], FA [34] and HRO [38]. The corresponding parameter settings of each algorithm are listed in Table 6. For a fair comparison, all of the algorithms adopt binary coding, and the corresponding band subset is used as the input to SVM for classification. Each algorithm has an initial population size of 20 and a maximum of 30 iterations. For all HSI datasets, 20% of the samples are randomly selected as training data, and the remaining 80% are used as testing data. All algorithms are run independently 10 times for each case, and the average results are recorded.

4.3. Experiments for Different Optimization Algorithms

In this section, five benchmark datasets are used to test the performance of band selection based on MHRO. The overall classification accuracy (OA), kappa coefficient (defined in Appendix A.2), number of selected bands and fitness function of each algorithm are used as evaluation indicators, as shown in Table 7 and Table 8. It can be seen from Table 7 that the optimization ability of the proposed method is clearly superior to the GA, PSO, CS, FA and HRO algorithms in terms of OA and kappa coefficients. This indicates that the DE operators employed by the individuals in the maintainer line are likely to enhance the exploitation ability of MHRO in local search. For the Indian Pines dataset, the OA of MHRO is 6.6% higher than GA, 4.89% higher than CS, and 6.17% higher than FA. In addition, the kappa coefficients of MHRO on the Botswana, Salinas and Pavia University datasets are all over 0.94, which shows that the classification results are basically consistent with the real category labels.
According to Table 8, the number of bands selected by HRO and MHRO is significantly smaller than those of GA, PSO, CS and FA. The average number of bands selected by FA across the five datasets is 88, nearly 1.8 times that of MHRO. About 79% of the highly correlated and redundant bands of the KSC dataset are removed by MHRO, leaving an average of only 37 bands while retaining satisfactory classification accuracy. Moreover, MHRO selects slightly larger band subsets than HRO on the Indian Pines and Salinas datasets, which results from the fact that MHRO, when trading off high classification accuracy against a smaller number of bands, prioritizes accuracy at each iteration. With regard to fitness, the proposed method has a better fitness value than the other five algorithms, and its standard deviation does not exceed 0.003 on any dataset, which verifies that MHRO fluctuates only slightly on the HSI datasets. More importantly, the standard deviation of the fitness value of MHRO is only 0.0004 on the Salinas and Pavia University datasets, showing its stability across independent runs. Figure 7 depicts the variation of average fitness with the number of iterations for all the algorithms on the five datasets.
As shown in Figure 7, the initial fitness value of MHRO is lower than that of HRO on all datasets, which proves that the OBL strategy used in the initialization stage can enhance population diversity and provide more high-quality solutions. As the number of iterations increases, the iteration curves of GA, CS and FA gradually flatten, while MHRO keeps a downward trend. This shows that the DE operators help to improve the exploration and exploitation ability, and also implies that MHRO has strong potential to find better solutions. As a result, MHRO has excellent optimization capability on HSI datasets and can obtain an optimal band subset with satisfactory classification accuracy.

4.4. Experiments for Other Related Techniques

To further verify the reliability of the proposed algorithm, MHRO is compared with two common feature-selection methods, Joint Mutual Information (JMI) and its improvement, Joint Mutual Information Maximization (JMIM) [42], on the five datasets. The experiments are conducted with the number of bands varying from 10% to 30% of the total number of bands in each dataset. The classification accuracy for each class, OA and kappa coefficients are recorded in Table 9, Table 10, Table 11, Table 12 and Table 13.
Table 9 shows the OA and kappa values obtained on the KSC dataset under different band subsets. It is clear that the proposed MHRO has the highest OA compared with the other filter techniques. In particular, the OA of MHRO is 6.5% higher than JMI and 1.8% higher than JMIM with 10% of the total number of bands. In addition, only the OA and kappa coefficient of JMI fall below 90% under the different band subsets, while those of MHRO both exceed 92%. In brief, this verifies that the proposed band selection approach has good practicability on the KSC dataset.
Table 10 reports the OA and kappa values obtained on the Botswana dataset under different band subsets. The OA of MHRO is satisfactory, exceeding 95% when more than 20% of the bands are used. More importantly, with 30% of the total number of bands, the classification accuracy of MHRO is superior to JMI and JMIM in 12 categories. Moreover, the kappa coefficients of MHRO are approximately 0.02–0.04, 0.02–0.03 and 0.028–0.032 higher than those of the other approaches at 10%, 20% and 30% of the total number of bands, respectively. In sum, it is an effective band selection technique for the Botswana dataset.
The OA and kappa values obtained on the Indian Pines dataset under different band subsets are given in Table 11. The OA of the proposed MHRO significantly outperforms JMI, with a difference of over 14% and even up to 26%. Further, with 10% of the total number of bands, the accuracy achieved by MHRO reaches 100% on class number 1, versus just 55.56% for JMI. It is worth noting that the category named Oats is difficult to classify correctly using JMI and JMIM, whereas MHRO correctly distinguishes more than 86% of its samples. Therefore, MHRO is a more promising technique than the filter feature-selection methods on the Indian Pines dataset.
Table 12 reports the OA and kappa values obtained on the Salinas dataset under different band subsets. The proposed band selection approach still obtains the best classification accuracy when the selected number of bands does not exceed 20%, whereas the OA of JMI and JMIM is below 87% with 10% of the total number of bands. More than 98% of the samples are correctly distinguished in 13 categories by MHRO, and the corresponding kappa coefficient exceeds 0.92, which implies that the precisions are basically consistent with the real category labels. In short, the proposed MHRO has strong optimization ability on the Salinas dataset.
According to Table 13, the classification accuracy obtained by the proposed MHRO is better than JMI and JMIM in most categories. With 10% of the total number of bands, the OA is below 84% using JMI and below 90% using JMIM. The kappa value of MHRO reaches 0.91, 0.92 and 0.93 for the different numbers of bands, respectively, which is 0.09–0.15 higher than JMI. In conclusion, the proposed MHRO is a robust and feasible feature-selection approach for the Pavia University dataset.

5. Discussion

As shown in Table 7 and Table 8, the experimental results of MHRO on all datasets are better than those of the other swarm intelligence algorithms. Only on Indian Pines and Salinas does HRO select slightly smaller band subsets than MHRO. It can be seen from Figure 7 that the fitness values of GA, CS and FA show only slight fluctuations throughout the iterations, indicating that they easily fall into local optima in the early stage. In contrast, the fitness value of MHRO is always the lowest and keeps decreasing, which implies that MHRO has strong optimization ability and is superior to GA [28], PSO [32], CS [33], FA [34] and HRO [38].
According to Table 9, Table 10, Table 11, Table 12 and Table 13, MHRO achieves higher accuracy than the filter techniques JMI and JMIM [42] under different band subsets. This can be explained by the fact that the filter methods use mutual information to select feature subsets independently of the classifier, while the wrapper approach MHRO proposed in the paper computes the fitness function from the accuracy of the classifier and the number of selected bands.

6. Conclusions

Band selection is a crucial step for removing highly correlated bands and improving the classification accuracy of HSI. In the paper, a band selection approach based on MHRO is proposed, whose basic idea is to obtain the fittest band combination. Experimental results are compared with commonly used feature-selection approaches optimized by GA, PSO, CS, FA and standard HRO on five datasets. In general, the proposed MHRO has excellent optimization capability and is able to achieve the highest classification accuracy with fewer bands. Moreover, its OA and kappa coefficient are clearly higher than those of the related feature-selection techniques JMI and JMIM, which proves that the classification results are basically consistent with the real category labels. As a result, the proposed band selection technique has good robustness and practicability for HSI datasets. Future work will investigate other swarm intelligence algorithms and combine multiple optimization strategies and spatial information to further improve the performance of band selection. In addition, it is worthwhile to formulate different assessment criteria to solve the multi-objective optimization problem of feature selection on large-scale datasets.

Author Contributions

Conceptualization, Z.Y. and M.W.; methodology, W.C. and S.L.; software, W.C. and S.L.; validation, Z.Y. and K.L.; formal analysis, W.C.; investigation, W.C. and S.L.; resources, Z.Y.; data curation, W.C., K.L. and M.W.; writing—original draft preparation, Z.Y. and W.C.; writing—review and editing, Z.Y. and W.Z.; visualization, W.C.; supervision, Z.Y. and K.L.; project administration, Z.Y.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 61502155, 61772180, funded by Fujian Provincial Key Laboratory of Data Intensive Computing and Key Laboratory of Intelligent Computing and Information Processing, Fujian No. BD201801.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The public data in section “Datasets description” are available at https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 25 March 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Overall classification accuracy (OA) and the kappa coefficient are commonly used in HSI classification; they are described in detail as follows.

Appendix A.1. Overall Classification Accuracy

OA is the ratio between the number of samples correctly classified by the classifier or classification algorithm and the total number of samples; its mathematical expression is given in Equation (A1).

$$\text{OA} = \frac{\sum_{i=1}^{N_c} C_{ii}}{\sum_{j=1}^{N_c} \sum_{i=1}^{N_c} C_{ij}} \tag{A1}$$

where $N_c$ is the number of classes, $C_{ii}$ represents the number of samples correctly classified to class $i$, and $C_{ij}$ denotes the number of samples of the i-th class assigned to the j-th category.

Appendix A.2. Kappa Coefficient

The kappa coefficient is a statistical indicator that measures the agreement between the final classification results and the ground-truth map, and its value lies in the range $[-1, 1]$. The closer the kappa coefficient is to 1, the better the classification result. The kappa coefficient is given in Equation (A2).

$$\text{Kappa} = \frac{N_s \sum_{i=1}^{N_c} C_{ii} - \sum_{i=1}^{N_c} C_{i+} C_{+i}}{N_s^2 - \sum_{i=1}^{N_c} C_{i+} C_{+i}} \tag{A2}$$

where $N_s$ is the number of samples, $C_{i+}$ represents the total number of samples belonging to class $i$ (the i-th row sum of the confusion matrix), and $C_{+i}$ denotes the total number of samples predicted as class $i$ (the i-th column sum).
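Both metrics follow directly from the confusion matrix; a small sketch (the function name is ours):

```python
import numpy as np

def oa_and_kappa(C):
    """OA (Equation (A1)) and kappa (Equation (A2)) from a confusion matrix C,
    where C[i, j] counts samples of true class i assigned to class j."""
    C = np.asarray(C, dtype=float)
    n_s = C.sum()
    oa = np.trace(C) / n_s                          # Equation (A1)
    chance = (C.sum(axis=1) * C.sum(axis=0)).sum()  # sum_i C_{i+} * C_{+i}
    kappa = (n_s * np.trace(C) - chance) / (n_s**2 - chance)
    return oa, kappa
```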

References

  1. Weber, C.; Aguejdad, R.; Briottet, X.; Avala, J.; Fabre, S.; Demuynck, J.; Zenou, E.; Deville, Y.; Karoui, M.S.; Benhalouche, F.Z.; et al. Hyperspectral Imagery for Environmental Urban Planning. In Proceedings of the IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 1628–1631. [Google Scholar]
  2. Sethy, P.K.; Pandey, C.; Sahu, Y.K.; Behera, S.K. Hyperspectral imagery applications for precision agriculture-a systemic survey. Multimed. Tools Appl. 2022, 81, 3005–3038. [Google Scholar] [CrossRef]
  3. Yi, L.; Chen, J.M.; Zhang, G.; Xu, X.; Guo, W. Seamless Mosaicking of UAV-Based Push-Broom Hyperspectral Images for Environment Monitoring. Remote Sens. 2021, 13, 4720. [Google Scholar] [CrossRef]
  4. Wang, Z.; Tian, S. Ground object information extraction from hyperspectral remote sensing images using deep learning algorithm. Microprocess. Microsyst. 2021, 87, 104394. [Google Scholar] [CrossRef]
  5. Xie, W.; Li, Y.; Lei, J.; Yang, J.; Li, J.; Jia, X.; Li, Z. Unsupervised spectral mapping and feature selection for hyperspectral anomaly detection. Neural Netw. 2020, 132, 144–154. [Google Scholar] [CrossRef]
  6. Sawant, S.; Manoharan, P. Hyperspectral band selection based on metaheuristic optimization approach. Infrared Phys. Technol. 2020, 107, 103295. [Google Scholar] [CrossRef]
  7. Li, R.; Zhang, H.; Chen, Z.; Yu, N.; Kong, W.; Li, T.; Wang, E.; Wu, X.; Liu, Y. Denoising method of ground-penetrating radar signal based on independent component analysis with multifractal spectrum. Measurement 2022, 192, 110886. [Google Scholar] [CrossRef]
  8. Xie, S. Feature extraction of auto insurance size of loss data using functional principal component analysis. Expert Syst. Appl. 2022, 198, 116780. [Google Scholar] [CrossRef]
  9. Liu, Q.; He, H.; Liu, Y.; Qu, X. Local linear embedding algorithm of mutual neighborhood based on multi-information fusion metric. Measurement 2021, 186, 110239. [Google Scholar] [CrossRef]
  10. Ding, X.; Li, H.; Yang, J.; Dale, P.; Chen, X.; Jiang, C.; Zhang, S. An improved ant colony algorithm for optimized band selection of hyperspectral remotely sensed imagery. IEEE Access 2020, 8, 25789–25799. [Google Scholar] [CrossRef]
  11. Zhang, A.; Ma, P.; Liu, S.; Sun, G.; Huang, H.; Zabalza, J.; Wang, Z.; Lin, C. Hyperspectral band selection using crossover-based gravitational search algorithm. IET Image Processing 2019, 13, 280–286. [Google Scholar] [CrossRef] [Green Version]
  12. Ambusaidi, M.A.; He, X.; Nanda, P.; Tan, Z. Building an intrusion detection system using a filter-based feature selection algorithm. IEEE Trans. Comput. 2016, 65, 2986–2998. [Google Scholar] [CrossRef] [Green Version]
  13. Wah, Y.B.; Ibrahim, N.; Hamid, H.A.; Abdul-Rahman, S.; Fong, S. Feature Selection Methods: Case of Filter and Wrapper Approaches for Maximising Classification Accuracy. Pertanika J. Sci. Technol. 2018, 26, 329–340. [Google Scholar]
  14. Ghosh, M.; Guha, R.; Sarkar, R.; Abraham, A. A wrapper-filter feature selection technique based on ant colony optimization. Neural Comput. Appl. 2020, 32, 7839–7857. [Google Scholar] [CrossRef]
  15. Wang, J.; Tang, C.; Li, Z.; Liu, X.; Zhang, W.; Zhu, E.; Wang, L. Hyperspectral band selection via region-aware latent features fusion based clustering. Inf. Fusion 2022, 79, 162–173. [Google Scholar] [CrossRef]
  16. Shi, J.; Zhang, X.; Liu, X.; Lei, Y.; Jeon, G. Multicriteria semi-supervised hyperspectral band selection based on evolutionary multitask optimization. Knowl.-Based Syst. 2022, 240, 107934. [Google Scholar] [CrossRef]
  17. Bhadra, T.; Bandyopadhyay, S. Supervised feature selection using integration of densest subgraph finding with floating forward–backward search. Inf. Sci. 2021, 566, 1–18. [Google Scholar] [CrossRef]
  18. Li, A.D.; Xue, B.; Zhang, M. Improved binary particle swarm optimization for feature selection with new initialization and search space reduction strategies. Appl. Soft Comput. 2021, 106, 107302. [Google Scholar] [CrossRef]
  19. Ghosh, A.; Datta, A.; Ghosh, S. Self-adaptive differential evolution for feature selection in hyperspectral image data. Appl. Soft Comput. 2013, 13, 1969–1977. [Google Scholar] [CrossRef]
  20. Turky, A.; Sabar, N.R.; Dunstall, S.; Song, A. Hyper-heuristic local search for combinatorial optimisation problems. Knowl.-Based Syst. 2020, 205, 106264. [Google Scholar] [CrossRef]
  21. Faris, H.; Ala’M, A.Z.; Heidari, A.A.; Aljarah, I.; Mafarja, M.; Hassonah, M.A.; Fujita, H. An intelligent system for spam detection and identification of the most relevant features based on evolutionary random weight networks. Inf. Fusion 2019, 48, 67–83. [Google Scholar] [CrossRef]
  22. Hancer, E. Differential evolution for feature selection: A fuzzy wrapper–filter approach. Soft Comput. 2019, 23, 5233–5248. [Google Scholar] [CrossRef]
  23. Amoozegar, M.; Minaei-Bidgoli, B. Optimizing multi-objective PSO based feature selection method using a feature elitism mechanism. Expert Syst. Appl. 2018, 113, 499–514. [Google Scholar] [CrossRef]
  24. Al-Tashi, Q.; Md, R.H.; Abdulkadir, S.J.; Mirjalili, S.; Alhussian, H. A review of grey wolf optimizer-based feature selection methods for classification. Evol. Mach. Learn. Tech. 2020, 273–286. [Google Scholar] [CrossRef]
  25. Aziz, M.A.E.; Hassanien, A.E. Modified cuckoo search algorithm with rough sets for feature selection. Neural Comput. Appl. 2018, 29, 925–934. [Google Scholar] [CrossRef]
  26. Hancer, E.; Xue, B.; Zhang, M.; Karaboga, D.; Akay, B. Pareto front feature selection based on artificial bee colony optimization. Inf. Sci. 2018, 422, 462–479. [Google Scholar] [CrossRef]
  27. Mafarja, M.; Mirjalili, S. Whale optimization approaches for wrapper feature selection. Appl. Soft Comput. 2018, 62, 441–453. [Google Scholar] [CrossRef]
  28. Nagasubramanian, K.; Jones, S.; Sarkar, S.; Singh, A.K.; Singh, A.; Ganapathysubramanian, B. Hyperspectral band selection using genetic algorithm and support vector machines for early identification of charcoal rot disease in soybean stems. Plant Methods 2018, 14, 86. [Google Scholar] [CrossRef]
  29. Xie, F.; Li, F.; Lei, C.; Yang, J.; Zhang, Y. Unsupervised band selection based on artificial bee colony algorithm for hyperspectral image classification. Appl. Soft Comput. 2019, 75, 428–440. [Google Scholar] [CrossRef]
  30. Wang, M.; Wu, C.; Wang, L.; Xiang, D.; Huang, X. A feature selection approach for hyperspectral image based on modified ant lion optimizer. Knowl.-Based Syst. 2019, 168, 39–48. [Google Scholar] [CrossRef]
  31. Wang, M.; Liu, W.; Chen, M.; Huang, X.; Han, W. A band selection approach based on a modified gray wolf optimizer and weight updating of bands for hyperspectral image. Appl. Soft Comput. 2021, 112, 107805. [Google Scholar] [CrossRef]
  32. Kavitha, K.; Jenifa, W. Feature Selection Method for Classifying Hyper Spectral Image Based on Particle Swarm Optimization. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 119–123. [Google Scholar]
  33. Medjahed, S.A.; Saadi, T.A.; Benyettou, A.; Ouali, M. Binary cuckoo search algorithm for band selection in hyperspectral image classification. IAENG Int. J. Comput. Sci. 2015, 42, 183–191. [Google Scholar]
  34. Su, H.; Yong, B.; Du, Q. Hyperspectral band selection using improved firefly algorithm. IEEE Geosci. Remote Sens. Lett. 2015, 13, 68–72. [Google Scholar] [CrossRef]
  35. Ye, Z.; Ma, L.; Chen, H. A hybrid rice optimization algorithm. In Proceedings of the 2016 11th International Conference on Computer Science & Education (ICCSE); Institute of Electrical and Electronics Engineers (IEEE), Nagoya, Japan, 23–25 August 2016; pp. 169–174. [Google Scholar]
  36. Liu, W.; Huang, Y.; Ye, Z.; Cai, W.; Yang, S.; Cheng, X.; Frank, I. Renyi’s entropy based multilevel thresholding using a novel meta-heuristics algorithm. Appl. Sci. 2020, 10, 3225. [Google Scholar] [CrossRef]
  37. Shu, Z.; Ye, Z.; Zong, X.; Liu, S.; Zhang, D.; Wang, C.; Wang, M. A modified hybrid rice optimization algorithm for solving 0-1 knapsack problem. Appl. Intell. 2021, 52, 5751–5769. [Google Scholar] [CrossRef]
  38. Ye, Z.; Liu, S.; Zong, X.; Shu, Z.; Xia, X. A Band Selection Method for Hyperspectral Image Based on Binary Coded Hybrid Rice Optimization Algorithm. In Proceedings of the 2021 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Cracow, Poland, 22–25 September 2021; Volume 1, pp. 596–600. [Google Scholar]
  39. Tubishat, M.; Abushariah, M.A.M.; Idris, N.; Aljarah, I. Improved whale optimization algorithm for feature selection in Arabic sentiment analysis. Appl. Intell. 2019, 49, 1688–1707. [Google Scholar] [CrossRef]
  40. Jadon, S.S.; Tiwari, R.; Sharma, H.; Bansal, J.C. Hybrid artificial bee colony algorithm with differential evolution. Appl. Soft Comput. 2017, 58, 11–24. [Google Scholar] [CrossRef]
  41. Houssein, E.H.; Mahdy, M.A.; Blondin, M.J.; Shebl, D.; Mohamed, W.M. Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst. Appl. 2021, 174, 114689. [Google Scholar] [CrossRef]
  42. Bennasar, M.; Hicks, Y.; Setchi, R. Feature selection using joint mutual information maximisation. Expert Syst. Appl. 2015, 42, 8520–8532. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The overall flowchart of MHRO.
Figure 2. (a) KSC HSI. (b) Ground truth.
Figure 3. (a) Botswana HSI. (b) Ground truth.
Figure 4. (a) Indian Pines HSI. (b) Ground truth.
Figure 5. (a) Salinas HSI. (b) Ground truth.
Figure 6. (a) Pavia University HSI. (b) Ground truth.
Figure 7. The changing process of average fitness on five datasets.
Table 1. Detailed category information in the KSC dataset.

Class Number | Class Name | Number of Samples
1 | Scrub | 761
2 | Willow swamp | 243
3 | Cabbage palm hammock | 256
4 | Cabbage palm/Oak hammock | 252
5 | Slash pine | 161
6 | Oak/Broadleaf hammock | 229
7 | Hardwood swamp | 105
8 | Graminoid marsh | 431
9 | Spartina marsh | 520
10 | Cattail marsh | 404
11 | Salt marsh | 419
12 | Mud flats | 503
13 | Water | 927
Total | | 5211
Table 2. Detailed category information in the Botswana dataset.

Class Number | Class Name | Number of Samples
1 | Water | 270
2 | Hippo grass | 101
3 | Floodplain grasses 1 | 251
4 | Floodplain grasses 2 | 215
5 | Reeds | 269
6 | Riparian | 269
7 | Firescar | 259
8 | Island interior | 203
9 | Acacia woodlands | 314
10 | Acacia shrublands | 248
11 | Acacia grasslands | 305
12 | Short mopane | 181
13 | Mixed mopane | 268
14 | Exposed soils | 95
Total | | 3248
Table 3. Detailed category information in the Indian Pines dataset.

Class Number | Class Name | Number of Samples
1 | Alfalfa | 46
2 | Corn-notill | 1428
3 | Corn-mintill | 830
4 | Corn | 237
5 | Grass-pasture | 483
6 | Grass-trees | 730
7 | Grass-pasture-mowed | 28
8 | Hay-windrowed | 478
9 | Oats | 20
10 | Soybean-notill | 972
11 | Soybean-mintill | 2455
12 | Soybean-clean | 593
13 | Wheat | 205
14 | Woods | 1265
15 | Buildings-Grass-Trees-Drives | 386
16 | Stone-Steel-Towers | 93
Total | | 10,249
Table 4. Detailed category information in the Salinas dataset.

Class Number | Class Name | Number of Samples
1 | Brocoli_green_weeds_1 | 2009
2 | Brocoli_green_weeds_2 | 3726
3 | Fallow | 1976
4 | Fallow_rough_plow | 1394
5 | Fallow_smooth | 2678
6 | Stubble | 3959
7 | Celery | 3579
8 | Grapes_untrained | 11,271
9 | Soil_vinyard_develop | 6203
10 | Corn_senesced_green_weeds | 3278
11 | Lettuce_romaine_4wk | 1068
12 | Lettuce_romaine_5wk | 1927
13 | Lettuce_romaine_6wk | 916
14 | Lettuce_romaine_7wk | 1070
15 | Vinyard_untrained | 7268
16 | Vinyard_vertical_trellis | 1807
Total | | 54,129
Table 5. Detailed category information in the Pavia University dataset.

Class Number | Class Name | Number of Samples
1 | Asphalt | 6631
2 | Meadows | 18,649
3 | Gravel | 2099
4 | Trees | 3064
5 | Painted metal sheets | 1345
6 | Bare soil | 5029
7 | Bitumen | 1330
8 | Self-Blocking Bricks | 3682
9 | Shadows | 947
Total | | 42,776
Table 6. Parameter settings of each algorithm.

Algorithm | Parameters | Value
GA | Crossover rate CR | 0.8
GA | Mutation rate CM | 0.01
PSO | Acceleration coefficients c1, c2 | 2
PSO | Minimum inertia weight ω_min | 0.2
PSO | Maximum inertia weight ω_max | 0.9
CS | Detection probability p_a | 0.25
CS | Lévy flight parameter β | 1.5
FA | Absorption coefficient γ | 1
FA | Initial attraction β_0 | 1
FA | Randomization parameter α | 0.5
HRO, MHRO | Maximum selfing time t_max | 10
Table 7. OA and kappa coefficient of the six algorithms.

Dataset | Metric | GA | PSO | CS | FA | HRO | MHRO
KSC | OA (%) | 92.84 ± 0.14 | 93.24 ± 0.1 | 92.92 ± 0.07 | 92.83 ± 0.06 | 93.34 ± 0.19 | 93.60 ± 0.10
KSC | Kappa | 0.9201 ± 0.0016 | 0.9246 ± 0.0012 | 0.9211 ± 0.0007 | 0.9200 ± 0.0007 | 0.9257 ± 0.0021 | 0.9287 ± 0.0011
Botswana | OA (%) | 94.69 ± 0.14 | 95.49 ± 0.23 | 94.79 ± 0.07 | 94.74 ± 0.09 | 95.63 ± 0.22 | 95.96 ± 0.10
Botswana | Kappa | 0.9425 ± 0.0015 | 0.9511 ± 0.0024 | 0.9436 ± 0.0007 | 0.9431 ± 0.0010 | 0.9526 ± 0.0024 | 0.9562 ± 0.0010
Indian Pines | OA (%) | 82.65 ± 0.85 | 88.75 ± 0.50 | 84.36 ± 0.62 | 83.08 ± 0.42 | 88.43 ± 0.27 | 89.25 ± 0.32
Indian Pines | Kappa | 0.8018 ± 0.0098 | 0.8718 ± 0.0057 | 0.8215 ± 0.0072 | 0.8068 ± 0.0049 | 0.8682 ± 0.0031 | 0.8776 ± 0.0036
Salinas | OA (%) | 94.39 ± 0.03 | 94.64 ± 0.07 | 94.49 ± 0.05 | 94.49 ± 0.03 | 94.59 ± 0.06 | 94.73 ± 0.04
Salinas | Kappa | 0.9375 ± 0.0004 | 0.9403 ± 0.0008 | 0.9386 ± 0.0006 | 0.9386 ± 0.0003 | 0.9397 ± 0.0007 | 0.9413 ± 0.0005
Pavia University | OA (%) | 95.19 ± 0.09 | 95.46 ± 0.04 | 95.30 ± 0.05 | 95.34 ± 0.07 | 95.41 ± 0.04 | 95.53 ± 0.06
Pavia University | Kappa | 0.9360 ± 0.0012 | 0.9396 ± 0.0005 | 0.9376 ± 0.0007 | 0.9380 ± 0.0010 | 0.9390 ± 0.0006 | 0.9406 ± 0.0008
Table 8. The number of selected bands and fitness value of the six algorithms.

Dataset | Metric | GA | PSO | CS | FA | HRO | MHRO
KSC | Num | 81.3 | 65.5 | 82 | 89.7 | 41 | 37.5
KSC | Fitness | 0.0756 ± 0.0015 | 0.0707 ± 0.0008 | 0.0747 ± 0.0006 | 0.0761 ± 0.0007 | 0.0683 ± 0.0020 | 0.0655 ± 0.0009
Botswana | Num | 70.5 | 56.2 | 72.4 | 77.6 | 37.9 | 35.3
Botswana | Fitness | 0.0574 ± 0.0016 | 0.0486 ± 0.0025 | 0.0566 ± 0.0008 | 0.0574 ± 0.0006 | 0.0459 ± 0.0020 | 0.0425 ± 0.0010
Indian Pines | Num | 96.5 | 69.4 | 89.1 | 106.4 | 41.1 | 44.1
Indian Pines | Fitness | 0.1766 ± 0.0085 | 0.1149 ± 0.0051 | 0.1593 ± 0.0063 | 0.1728 ± 0.0042 | 0.1166 ± 0.0027 | 0.1086 ± 0.0030
Salinas | Num | 99.6 | 91.8 | 102.8 | 109.5 | 82.6 | 85.8
Salinas | Fitness | 0.0604 ± 0.0003 | 0.0575 ± 0.0009 | 0.0596 ± 0.0005 | 0.0599 ± 0.0002 | 0.0576 ± 0.0004 | 0.0564 ± 0.0004
Pavia University | Num | 51.1 | 46.2 | 52.9 | 56.6 | 42.7 | 41.9
Pavia University | Fitness | 0.0526 ± 0.0008 | 0.0495 ± 0.0004 | 0.0516 ± 0.0004 | 0.0517 ± 0.0003 | 0.0496 ± 0.0003 | 0.0484 ± 0.0004
Table 9. The classification results for the KSC dataset via different numbers of bands.

Class Number | 10% JMI | 10% JMIM | 10% MHRO | 20% JMI | 20% JMIM | 20% MHRO | 30% JMI | 30% JMIM | 30% MHRO
1 | 92.5750 | 93.4319 | 92.5350 | 94.1935 | 93.9145 | 93.9490 | 94.4805 | 93.3116 | 93.5024
2 | 92.8144 | 92.6554 | 90.0000 | 93.7500 | 93.2203 | 91.5344 | 93.7107 | 93.8889 | 91.9786
3 | 65.2985 | 91.2821 | 87.0466 | 77.8723 | 87.4419 | 89.9471 | 76.6520 | 87.7934 | 87.8307
4 | 58.9286 | 61.4504 | 73.0233 | 70.2564 | 63.6000 | 73.3032 | 66.5116 | 63.0522 | 74.0741
5 | 76.1364 | 78.0220 | 79.1304 | 79.6117 | 81.3187 | 82.7957 | 85.1852 | 81.4433 | 80.7692
6 | 76.5432 | 60.3659 | 78.9474 | 79.1667 | 68.1818 | 75.7225 | 75.2688 | 67.5497 | 80.2395
7 | 64.6552 | 71.6981 | 81.7073 | 65.2893 | 72.6415 | 88.8889 | 64.1667 | 75.2475 | 85.3659
8 | 84.6939 | 89.0141 | 93.6599 | 90.0990 | 88.5559 | 90.7609 | 89.0625 | 88.1081 | 91.1602
9 | 77.3469 | 93.9252 | 94.0909 | 83.5470 | 95.2038 | 94.3052 | 87.1681 | 96.5602 | 94.5205
10 | 89.1374 | 100.00 | 97.0297 | 95.8333 | 100.00 | 96.0784 | 95.2381 | 100.00 | 96.4286
11 | 89.6739 | 99.6997 | 99.7050 | 91.1932 | 99.3976 | 99.4030 | 91.1932 | 99.1018 | 99.7015
12 | 98.0609 | 97.6982 | 98.5000 | 96.6752 | 98.9848 | 98.7406 | 97.4026 | 98.4925 | 98.0000
13 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
OA (%) | 86.6635 | 91.3408 | 93.1638 | 89.9736 | 92.0844 | 93.3557 | 89.9496 | 92.1084 | 93.3797
Kappa | 0.8514 | 0.9036 | 0.9238 | 0.8883 | 0.9119 | 0.9259 | 0.8881 | 0.9121 | 0.9262
Table 10. The classification results for the Botswana dataset via different numbers of bands.

Class Number | 10% JMI | 10% JMIM | 10% MHRO | 20% JMI | 20% JMIM | 20% MHRO | 30% JMI | 30% JMIM | 30% MHRO
1 | 100.00 | 99.5392 | 100.00 | 99.5392 | 99.5392 | 100.00 | 99.5392 | 99.5392 | 100.00
2 | 98.7500 | 98.7654 | 97.4359 | 100.00 | 100.00 | 98.7342 | 100.00 | 100.00 | 96.2963
3 | 98.0296 | 95.6522 | 99.0050 | 98.0100 | 97.5610 | 99.5050 | 98.0488 | 97.5728 | 99.0148
4 | 92.7374 | 94.9721 | 95.0276 | 92.2222 | 97.0930 | 96.6292 | 91.7582 | 94.3503 | 96.6480
5 | 85.2941 | 89.9471 | 88.7324 | 83.9623 | 90.0000 | 89.5735 | 88.4422 | 88.0597 | 90.3846
6 | 75.7576 | 76.9953 | 83.1050 | 80.9278 | 79.8122 | 88.6792 | 80.2885 | 80.4762 | 87.9070
7 | 100.00 | 99.5098 | 99.5074 | 100.00 | 100.00 | 100.00 | 99.5050 | 99.5000 | 100.00
8 | 94.8718 | 97.5155 | 96.3636 | 96.2025 | 99.3548 | 94.7368 | 98.7261 | 98.7421 | 96.4286
9 | 81.2977 | 86.3469 | 94.7154 | 86.5672 | 87.0370 | 96.4844 | 87.5472 | 88.3895 | 96.0784
10 | 83.5616 | 83.1776 | 91.3265 | 86.3014 | 87.0192 | 92.0213 | 88.6792 | 88.2075 | 94.2708
11 | 95.3191 | 94.7826 | 96.1373 | 97.0085 | 94.1423 | 94.1423 | 95.7627 | 96.1207 | 96.6102
12 | 93.8776 | 94.5946 | 91.2162 | 95.2381 | 93.3333 | 94.2857 | 91.4474 | 93.8776 | 94.3262
13 | 88.8889 | 95.2607 | 97.0588 | 93.5484 | 93.0556 | 96.1905 | 92.9245 | 92.2018 | 97.1154
14 | 97.3333 | 100.00 | 97.5000 | 98.6301 | 100.00 | 98.7342 | 98.6301 | 98.6486 | 98.7342
OA (%) | 90.9196 | 92.4586 | 94.5748 | 92.6510 | 93.2282 | 95.4598 | 92.9588 | 93.2282 | 95.8830
Kappa | 0.9016 | 0.9183 | 0.9412 | 0.9204 | 0.9266 | 0.9508 | 0.9237 | 0.9266 | 0.9554
Table 11. The classification results for the Indian Pines dataset via different numbers of bands.

Class Number | 10% JMI | 10% JMIM | 10% MHRO | 20% JMI | 20% JMIM | 20% MHRO | 30% JMI | 30% JMIM | 30% MHRO
1 | 55.5556 | 91.6667 | 100.00 | 76.1905 | 100.00 | 100.00 | 72.7273 | 92.8571 | 100.00
2 | 43.4932 | 80.5106 | 89.8605 | 58.5537 | 84.8908 | 89.2364 | 58.6438 | 85.1363 | 89.6140
3 | 26.0976 | 79.1209 | 86.7621 | 51.7355 | 84.2975 | 85.4430 | 57.3574 | 85.3135 | 85.8034
4 | 36.3636 | 76.3006 | 76.6169 | 56.0510 | 75.2747 | 73.7864 | 49.2754 | 75.7396 | 77.2021
5 | 68.4807 | 90.3226 | 94.9735 | 86.1461 | 89.5141 | 93.6869 | 89.2857 | 88.5496 | 92.1182
6 | 79.3939 | 87.2699 | 92.6350 | 83.4609 | 88.4013 | 94.6667 | 84.1079 | 88.1620 | 94.6932
7 | 0.0000 | 95.2381 | 95.4545 | 64.5161 | 90.9091 | 88.0000 | 73.0769 | 90.9091 | 100.00
8 | 79.7357 | 95.6853 | 97.2152 | 94.1919 | 96.6667 | 98.7113 | 89.3671 | 97.1503 | 98.7147
9 | 0.0000 | 62.5000 | 90.0000 | 0.0000 | 66.6667 | 87.5000 | 20.0000 | 71.4286 | 86.6667
10 | 57.6493 | 82.2102 | 78.0652 | 65.2226 | 85.1316 | 79.3143 | 74.5739 | 84.9490 | 80.9187
11 | 56.7493 | 83.2692 | 86.5589 | 73.9841 | 84.9903 | 88.6812 | 75.5808 | 86.6535 | 89.0834
12 | 34.8485 | 79.4769 | 86.0324 | 52.9968 | 84.2536 | 87.8351 | 59.7531 | 84.4622 | 87.5510
13 | 78.5714 | 96.2025 | 94.8276 | 96.7105 | 98.0892 | 96.5116 | 99.3506 | 98.0892 | 98.8095
14 | 89.2120 | 92.4027 | 92.5785 | 93.6770 | 93.6416 | 93.9189 | 93.2740 | 94.5259 | 93.1232
15 | 60.8696 | 77.9661 | 73.4615 | 71.1297 | 78.5088 | 79.9228 | 69.7479 | 76.3780 | 78.4000
16 | 98.4615 | 98.5507 | 98.6111 | 98.4848 | 97.1014 | 98.3607 | 98.0392 | 98.5075 | 98.3607
OA (%) | 61.0366 | 84.7927 | 87.8780 | 73.1463 | 87.0244 | 88.9024 | 74.5366 | 87.5244 | 89.2317
Kappa | 0.5475 | 0.8260 | 0.8618 | 0.6926 | 0.8517 | 0.8737 | 0.7084 | 0.8575 | 0.8774
Table 12. The classification results for the Salinas dataset via different numbers of bands.

Class Number | 10% JMI | 10% JMIM | 10% MHRO | 20% JMI | 20% JMIM | 20% MHRO | 30% JMI | 30% JMIM | 30% MHRO
1 | 99.8101 | 99.8101 | 100.00 | 99.7498 | 99.9373 | 100.00 | 99.9374 | 100.00 | 100.00
2 | 99.0017 | 99.0017 | 99.8321 | 99.5983 | 99.5654 | 99.7985 | 99.6317 | 99.8993 | 99.7650
3 | 87.9785 | 87.9785 | 96.1353 | 92.8571 | 93.1847 | 99.3769 | 95.1703 | 97.5686 | 99.5633
4 | 98.4889 | 98.4889 | 99.3789 | 98.6655 | 98.7533 | 99.2902 | 99.1063 | 98.9295 | 99.3783
5 | 88.9628 | 88.9628 | 99.5342 | 95.4922 | 95.3606 | 99.5818 | 98.4870 | 98.9686 | 99.6281
6 | 99.6211 | 99.6211 | 99.9373 | 99.9050 | 99.8735 | 99.9373 | 99.7472 | 99.7788 | 99.9373
7 | 99.3338 | 99.3338 | 99.9644 | 99.8252 | 99.7556 | 99.9288 | 99.9650 | 99.8602 | 99.9644
8 | 72.4820 | 72.4820 | 80.7262 | 77.6299 | 78.1479 | 83.2298 | 79.0656 | 81.4202 | 84.3931
9 | 93.7717 | 93.7717 | 99.4389 | 96.2108 | 96.3827 | 99.5787 | 97.1542 | 98.1235 | 99.5986
10 | 92.6040 | 92.6040 | 97.7493 | 95.2437 | 95.1362 | 98.3116 | 95.6164 | 95.9938 | 98.3109
11 | 79.8193 | 79.8193 | 98.9260 | 86.9677 | 87.5635 | 99.6433 | 88.6499 | 87.1111 | 99.4055
12 | 95.8543 | 95.8543 | 98.7630 | 96.9046 | 96.9562 | 99.0209 | 96.8434 | 97.4603 | 99.2801
13 | 95.7333 | 95.7333 | 99.3151 | 96.6622 | 98.5034 | 99.8626 | 97.8408 | 98.3762 | 99.8623
14 | 96.7153 | 96.7153 | 98.5849 | 97.8417 | 98.0000 | 98.8249 | 97.6387 | 98.2332 | 98.5899
15 | 66.6223 | 66.6223 | 84.2755 | 75.7640 | 75.5780 | 84.2284 | 77.6699 | 80.8717 | 84.7408
16 | 98.2244 | 98.2244 | 99.5830 | 98.8842 | 98.8137 | 99.2388 | 99.1028 | 99.1701 | 99.3070
OA (%) | 86.9573 | 86.9573 | 93.2662 | 90.3589 | 90.5159 | 94.0467 | 91.3195 | 92.4834 | 94.3816
Kappa | 0.8543 | 0.8543 | 0.9249 | 0.8924 | 0.8942 | 0.9336 | 0.9032 | 0.9162 | 0.9374
Table 13. The classification results for the Pavia University dataset via different numbers of bands.

Class Number | 10% JMI | 10% JMIM | 10% MHRO | 20% JMI | 20% JMIM | 20% MHRO | 30% JMI | 30% JMIM | 30% MHRO
1 | 86.7749 | 91.5299 | 94.2054 | 89.3364 | 93.0755 | 95.4459 | 91.6985 | 94.4086 | 95.5572
2 | 81.0733 | 90.0861 | 95.0046 | 83.7405 | 93.8672 | 96.2513 | 87.2954 | 95.9982 | 96.6656
3 | 69.8551 | 78.1069 | 87.9648 | 71.1213 | 82.0709 | 89.7384 | 76.1051 | 83.6127 | 89.7283
4 | 92.3801 | 96.3346 | 95.7699 | 92.0854 | 96.5762 | 97.3862 | 93.7824 | 97.6637 | 97.7580
5 | 99.7180 | 99.9065 | 99.4550 | 99.4403 | 99.9064 | 99.3642 | 99.5323 | 100.00 | 99.6357
6 | 89.9167 | 92.0507 | 93.0775 | 89.3754 | 91.2009 | 95.0250 | 88.6102 | 94.7454 | 95.7552
7 | 73.9530 | 79.6693 | 86.4476 | 77.1178 | 83.2370 | 89.2323 | 84.4828 | 85.8252 | 88.9437
8 | 79.6457 | 80.0737 | 84.4270 | 80.5669 | 82.9960 | 84.8779 | 82.3899 | 86.5560 | 86.2015
9 | 99.8681 | 99.8682 | 99.8682 | 99.8681 | 99.8681 | 99.8682 | 99.8681 | 99.8681 | 99.8681
OA (%) | 83.1887 | 89.6701 | 93.4426 | 85.2138 | 92.1452 | 94.6992 | 88.1155 | 94.2082 | 95.1316
Kappa | 0.7677 | 0.8607 | 0.9126 | 0.7976 | 0.8951 | 0.9295 | 0.8390 | 0.9229 | 0.9353
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ye, Z.; Cai, W.; Liu, S.; Liu, K.; Wang, M.; Zhou, W. A Band Selection Approach for Hyperspectral Image Based on a Modified Hybrid Rice Optimization Algorithm. Symmetry 2022, 14, 1293. https://doi.org/10.3390/sym14071293

AMA Style

Ye Z, Cai W, Liu S, Liu K, Wang M, Zhou W. A Band Selection Approach for Hyperspectral Image Based on a Modified Hybrid Rice Optimization Algorithm. Symmetry. 2022; 14(7):1293. https://doi.org/10.3390/sym14071293

Chicago/Turabian Style

Ye, Zhiwei, Wenhui Cai, Shiqin Liu, Kainan Liu, Mingwei Wang, and Wen Zhou. 2022. "A Band Selection Approach for Hyperspectral Image Based on a Modified Hybrid Rice Optimization Algorithm" Symmetry 14, no. 7: 1293. https://doi.org/10.3390/sym14071293
