Article

Kernel Minimum Noise Fraction Transformation-Based Background Separation Model for Hyperspectral Anomaly Detection

1 Key Laboratory of Space Active Opto-Electronics Technology, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Department of Remote Sensing and Photogrammetry, Finnish Geospatial Research Institute, 02430 Kirkkonummi, Finland
4 Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
5 Research Center for Intelligent Sensing Systems, Zhejiang Laboratory, Hangzhou 311100, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(20), 5157; https://doi.org/10.3390/rs14205157
Submission received: 20 September 2022 / Revised: 7 October 2022 / Accepted: 13 October 2022 / Published: 15 October 2022
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

A significant challenge in methods for anomaly detection (AD) in hyperspectral images (HSIs) is determining how to construct an efficient representation for anomalies and background information. Considering the high-order structures of HSIs and the estimation of anomalies and background information in AD, this article proposes a kernel minimum noise fraction transformation-based background separation model (KMNF-BSM) to separate the anomalies and background information. First, spectral-domain KMNF transformation is performed on the original hyperspectral data to fully mine the high-order correlation between spectral bands. Then, a BSM that combines the outlier removal, the iteration strategy, and the Reed–Xiaoli detector (RXD) is proposed to obtain accurate anomalous and background pixel sets based on the extracted features. Finally, the anomalous and background pixel sets are used as input for anomaly detectors to improve the background suppression and anomaly detection capabilities. Experiments on several HSIs with different spatial and spectral resolutions over different scenes are performed. The results demonstrate that the KMNF-BSM-based algorithms have better target detectability and background suppressibility than other state-of-the-art algorithms.

1. Introduction

With the characteristic of high spectral resolution, hyperspectral images (HSIs) reveal an enormous number of details about the spectral features of the Earth’s surface and have unique advantages in various applications such as spectral unmixing, classification, and anomaly detection (AD) [1,2,3,4,5,6]. Among these applications, AD is usually treated as detecting anomalies by referring to a background model and has attracted much attention because of its importance in civilian and military applications [7,8,9,10,11]. It usually possesses the following characteristics: (1) there is no prior spectral information about the anomalies or background; (2) anomalies are different from the background in terms of spectral signatures; and (3) anomalies are small objects that occupy a relatively small part of the image [12,13,14].
Many AD methods have been proposed in recent years. Currently, the statistical background model and the geometrical background model are the two most widely used strategies in AD algorithms [7,15,16]. Several AD algorithms, such as the Reed–Xiaoli detector (RXD) [17], kernel RX detector (KRXD) [18], and cluster-based anomaly detection (CBAD) method [19], detect anomalies via a statistical background model. The most widely used RXD employs a multivariate Gaussian model to represent the background information. It computes the Mahalanobis distance between the test pixel and the mean vector of its neighboring background and detects anomalies by suppressing the background through the inverse of the covariance matrix. The KRXD, a non-linear version of the RX detector, uses a Gaussian radial basis kernel function to map the initial data into a high-dimensional feature space, where the background is estimated and suppressed. Unlike RXD and KRXD, the CBAD employs categorical information for AD: it first utilizes spectral information to segment the entire image into different classes and then detects anomalies within each class [7,15]. Instead of utilizing statistical models to estimate the background, the geometrical background model assumes that background pixels can be approximately characterized by a set of main spectra or bases extracted from the original data, while anomalies cannot [7]. AD methods such as subspace-based RX (SSRX) [20] and the selective kernel principal component analysis (KPCA)-based detector [21] detect anomalies via a geometrical background model. The SSRX detector estimates the background features by choosing representative eigenvectors of the original data covariance matrix, while the selective KPCA-based detector exploits the maximum average local singularity to select specific kernel principal components as the background features [15].
However, none of the above AD methods completely eliminates the influence of anomalies when estimating the background, which may lead to unsatisfactory detector performance because of the deviation between the estimated and true background information [22].
A significant challenge in hyperspectral AD methods is determining how to construct an efficient estimation of background information, and this topic has received extensive attention and in-depth research [23,24,25]. The background estimation methods in anomaly detectors can be categorized into three groups: outlier-removal-based, statistical-strategy-based, and iteration-strategy-based methods. The outlier-removal-based methods, such as the collaborative-representation-based detector (CRD) combined with principal component analysis (PCAroCRD) [26], the nonparametric background refinement methods based on local density [27], and the collaborative-representation-based method with outlier removal anomaly detector (CRBORAD) [28], aim to remove suspected anomaly pixels from the original data. The statistical-strategy-based methods, such as the probabilistic anomaly detection (PAD) method [29], use statistical information to estimate the divergence between the probability distributions of anomalies and background, separating anomalies from the background. The iteration-strategy-based methods, such as blocked adaptive computationally efficient outlier nominators (BACON) [30] and locally adaptive iterative RXD (LAIRXD) [31], utilize iterative background information update strategies to eliminate anomaly interference in the background estimation. Nevertheless, for HSIs with complex scenes, the outlier-removal-based methods often miss anomalies similar to the background, and it is difficult for the statistical-strategy-based methods to extract the statistical information of the anomalies. Compared with the outlier-removal-based and statistical-strategy-based methods, the iteration-strategy-based methods are more time-consuming [22]. In order to combine the advantages of different background estimation strategies, Yan Zhang et al.
[22] proposed a background-purification-based (BPB) framework for background estimation that combines the iteration strategy with outlier removal. Furthermore, BPB-based algorithms (such as the BPB RXD (BPB-RXD) and the BPB orthogonal subspace projection anomaly detector (BPB-OSPAD)) were proposed under this framework. The experimental results show that BPB-based algorithms perform better than other AD methods, such as the RXD, KRXD, OSPAD, PAD, CRD, BACON, and PCAroCRD. However, while those methods have made some progress in background estimation, the unfavorable effects of high data dimensionality and the high-order structures of HSIs must still be considered. The high-order correlation between spectral bands leads to only slight spectral variation between most adjacent bands and to high redundancy in HSIs. These high-order structures reduce the detection ability of the detectors and generate unnecessary computational pressure [21]. As an effective way to address this problem, feature extraction has become a critical step in HSI processing tasks [32]. It reduces the computational complexity of processing HSIs and helps solve the statistical ill-conditioning problem [33]. Among feature extraction methods, kernel minimum noise fraction (KMNF) transformation [34] is widely used in various applications because of its advantages in mining the high-order correlation of HSIs.
To take account of the high-order structures of HSIs and construct an efficient representation for anomalies and the background, a KMNF-based background separation model (KMNF-BSM) is proposed in this article. First, KMNF transformation is performed on the original hyperspectral data to fully mine the high-order correlation between spectral bands and extract informative and discriminative features between anomalies and background. Then, a BSM that combines the outlier removal, the iteration strategy, and the RXD is proposed to obtain accurate anomalous and background pixel sets based on the extracted features. Finally, the anomalous and background pixel sets are used as input for two unsupervised anomaly detectors (RXD and low probability detector (LPD)) and two supervised detectors (orthogonal subspace projection anomaly detector (OSPAD) and constrained energy minimization anomaly detector (CEMAD)) to improve their background suppression and anomaly detection capabilities. Experiments on several HSIs with different spatial and spectral resolutions over different scenes are performed. The results demonstrate that the KMNF-BSM-based algorithms have better target detectability and background suppressibility than other state-of-the-art algorithms.
The main contributions of this article are summarized as follows:
(1)
To employ an effective feature extraction method before estimating anomalies and background, this article assesses the detection performance of KMNF and other feature extraction methods (including linear discriminant analysis (LDA), PCA, minimum noise fraction (MNF), optimized MNF (OMNF), factor analysis (FA), KPCA, optimized KMNF (OKMNF), locality preserving projections (LPP), and locally linear embedding (LLE)) in AD for HSIs. The experimental results show the effectiveness and robustness of KMNF transformation in feature extraction for AD, which provides a reference for the application of feature extraction in AD.
(2)
Considering the high-order correlation between spectral bands in HSIs and the role of anomaly and background estimation in AD, a KMNF-BSM aiming at separating anomalies and background efficiently is proposed in this article. It can obtain accurate background and anomalous pixel sets, and it has significant anti-noise ability and adaptability to HSIs with different spatial and spectral resolutions, which offers a new solution for high-precision AD.
(3)
The proposed method achieves autonomous hyperspectral AD without pre-processing or post-processing procedures. It can accurately reconstruct the background and separate anomalies automatically, which solves the problem of low detection accuracy when the prior knowledge is insufficient for the supervised detection methods.
The remainder of this article is organized as follows: In Section 2, the proposed method is described in detail. Section 3 shows the experimental results on the San Diego dataset, the Airport–Beach–Urban dataset, and the Xiong’an dataset, followed by a detailed analysis and discussion of the results in Section 4. Finally, Section 5 presents the conclusions of this article.

2. Proposed Methods

The overall framework of the KMNF-BSM method is shown in Figure 1. In this schematic, an HSI is first input to KMNF transformation to extract a representative band subset. Then, the extracted bands are input into the BSM, which combines the outlier removal, the iteration strategy, and the RXD to separate the anomalous pixel set and the background pixel set. Finally, the anomalous pixel set and background pixel set are used as prior knowledge for RXD, LPD, OSPAD, and CEMAD, and the AD results are obtained.
In this section, the KMNF transformation for feature extraction, the BSM proposed in this article, and the KMNF-BSM-based anomaly detectors are introduced in detail.

2.1. KMNF Transformation for Feature Extraction

Because of atmospheric effects and instrumental noise, HSIs often suffer degradation by various types of noise. For optical images, the signal and noise are considered independent [34]:
$$x_i = x_{S_i} + x_{N_i}$$
where $x_i$ is the pixel vector at location $i$, $x_{S_i}$ is the signal contained in $x_i$, and $x_{N_i}$ is the noise contained in $x_i$.
The covariance matrix of the image can be written as the sum of the signal covariance matrix and the noise covariance matrix:
$$\mathrm{cov}(X) = \mathrm{cov}(X_S) + \mathrm{cov}(X_N)$$
where the HSI contains $n$ pixels and $b$ bands, $X$ is the image matrix with $n$ rows and $b$ columns, $\mathrm{cov}(X)$ is the covariance matrix of the image, and $\mathrm{cov}(X_S)$ and $\mathrm{cov}(X_N)$ are the signal and noise covariance matrices, respectively.
Regarding $\bar{x}_t$ as the average of the $t$-th band, the matrix $X_{mean}$ with $n$ rows and $b$ columns can be obtained, which is expressed as follows:
$$X_{mean} = \begin{bmatrix} \bar{x}_1 & \bar{x}_2 & \cdots & \bar{x}_b \\ \vdots & \vdots & & \vdots \\ \bar{x}_1 & \bar{x}_2 & \cdots & \bar{x}_b \end{bmatrix}$$
Then, the centered matrix $X_{center}$ can be obtained:
$$X_{center} = X - X_{mean}$$
The covariance matrix of the image, $\mathrm{cov}(X)$, can then be written as
$$\mathrm{cov}(X) = X_{center}^T X_{center} / (n-1)$$
Similarly, $\mathrm{cov}(X_S)$ and $\mathrm{cov}(X_N)$ can be obtained.
The noise fraction $NF$ is defined as the ratio of the noise variance to the image variance:
$$NF = \frac{a^T \mathrm{cov}(X_N)\, a}{a^T \mathrm{cov}(X)\, a} = \frac{a^T X_{N,center}^T X_{N,center}\, a}{a^T X_{center}^T X_{center}\, a}$$
where $a$ is the eigenvector matrix of $NF$. Its reciprocal is
$$1/NF = \frac{a^T \mathrm{cov}(X)\, a}{a^T \mathrm{cov}(X_N)\, a} = \frac{a^T X_{center}^T X_{center}\, a}{a^T X_{N,center}^T X_{N,center}\, a}$$
In order to obtain new components ordered by image quality after KMNF, we should minimize $NF$ or, equivalently, maximize $1/NF$.
By reparametrizing with $a = Z^T b$, a kernelizable form of $1/NF$ is obtained:
$$1/NF = \frac{b^T Z Z^T Z Z^T b}{b^T Z Z_N^T Z_N Z^T b}$$
For the kernelization of $1/NF$, a non-linear mapping $\Phi : x \mapsto \Phi(x)$ is introduced to transform the initial data into a higher-dimensional feature space. After the non-linear mapping, the kernelized $1/NF$ can be written as follows:
$$1/\kappa NF = \frac{b^T K^2 b}{b^T K_N K_N^T b}$$
where $\Phi(Z)$ is the mapped matrix of $Z$, $\Phi(Z_N)$ is the mapped matrix of $Z_N$, $K = \Phi(Z)\Phi(Z)^T$, and $K_N = \Phi(Z)\Phi(Z_N)^T$.
Mathematically, the maximization of $1/\kappa NF$ can be solved as a Rayleigh quotient maximization; the process is written as follows:
$$K^2 b = \lambda K_N K_N^T b$$
$$K^2 b = \lambda (K_N K_N^T)^{1/2} (K_N K_N^T)^{1/2} b$$
$$(K_N K_N^T)^{-1/2} K^2 (K_N K_N^T)^{-1/2} \left[(K_N K_N^T)^{1/2} b\right] = \lambda \left[(K_N K_N^T)^{1/2} b\right]$$
where $\lambda$ is an eigenvalue of $(K_N K_N^T)^{-1/2} K^2 (K_N K_N^T)^{-1/2}$ and $(K_N K_N^T)^{1/2} b$ is the corresponding eigenvector. Then, $b$ can be obtained from this eigenproblem.
The feature extraction result after KMNF can be obtained by
$$\Upsilon = \Phi(Z)\, a = \Phi(Z) \Phi(Z)^T b = K b$$
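As a concrete illustration, the eigenproblem above can be prototyped in a few lines of NumPy. This is a minimal sketch under stated assumptions, not the authors' implementation: the RBF kernel, the neighbor-differencing noise estimate, the ridge term for numerical stability, and all function names are illustrative choices not prescribed by the text.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # K[i, j] = exp(-gamma * ||A_i - B_j||^2), an illustrative kernel choice
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmnf_features(X, n_components=3, gamma=0.1):
    """Simplified KMNF sketch; X is an (n_pixels, n_bands) matrix."""
    Z = X - X.mean(axis=0)               # centered data
    Zn = Z[1:] - Z[:-1]                  # crude noise estimate by differencing
    Zn = np.vstack([Zn, Zn[-1:]])        # pad to keep n rows
    K = rbf_kernel(Z, Z, gamma)          # K   = Phi(Z) Phi(Z)^T
    Kn = rbf_kernel(Z, Zn, gamma)        # K_N = Phi(Z) Phi(Z_N)^T
    # Regularized (K_N K_N^T)^(-1/2) via its eigendecomposition
    M = Kn @ Kn.T + 1e-6 * np.eye(len(Z))
    w, U = np.linalg.eigh(M)
    M_inv_half = U @ np.diag(w ** -0.5) @ U.T
    # Symmetric eigenproblem: M^(-1/2) K^2 M^(-1/2) v = lambda v
    lam, V = np.linalg.eigh(M_inv_half @ K @ K @ M_inv_half)
    b = M_inv_half @ V[:, ::-1][:, :n_components]   # largest lambda first
    return K @ b                          # Upsilon = K b

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
feats = kmnf_features(X, n_components=3)
```

In practice the kernel matrices are n × n in the number of pixels, so real implementations subsample or approximate the kernel; this sketch is for small toy data only.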

2.2. Background Separation Model (BSM)

For HSIs, AD aims to locate anomalous pixels whose spectral signatures differ significantly from the background [35]. The background pixels are highly similar to the mean vector of the entire image, while the anomalous pixels are not [22]. This article utilizes the similarity between the mean vector and each pixel in the HSI as a criterion for separating suspected anomalous and background pixels. Spectral matching is applied to measure the similarity among spectra. Commonly used similarity measures include the spectral angle (SA), spectral information divergence (SID), spectral correlation angle (SCA), spectral gradient angle (SGA), and normalized Euclidean distance (NED). Related works suggest that NED outperforms SA, SCA, and SGA and is roughly equivalent to SID [36]. Therefore, this article adopts NED as the standard similarity measure. The NED between vector $m$ and vector $n$ is expressed as follows:
$$e(m, n) = \sqrt{\sum_{i=1}^{k} (m_i - n_i)^2}$$
$$N_m = m / \bar{m}$$
$$N_n = n / \bar{n}$$
$$NED(m, n) = e(N_m, N_n)$$
where $m$ and $n$ are $k$-dimensional spectra, $e(m, n)$ is the Euclidean distance between $m$ and $n$, $N_m$ and $N_n$ are the normalized vectors of $m$ and $n$, $\bar{m}$ and $\bar{n}$ are the expected values of vectors $m$ and $n$, and $NED(m, n)$ is the NED between vector $m$ and vector $n$.
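A minimal NumPy sketch of the NED defined above (the function name and the test spectra are illustrative). Because each spectrum is divided by its own mean before the Euclidean distance is taken, two spectra with the same shape but different scales give an NED of approximately 0:

```python
import numpy as np

def ned(m, n):
    """Normalized Euclidean distance between two spectra m and n."""
    Nm = m / m.mean()                     # N_m = m / mean(m)
    Nn = n / n.mean()                     # N_n = n / mean(n)
    return np.sqrt(((Nm - Nn) ** 2).sum())  # e(N_m, N_n)

m = np.array([0.2, 0.4, 0.6])
n = np.array([0.1, 0.2, 0.3])  # same shape as m, half the scale: NED ~ 0
print(ned(m, n))
```

This scale invariance is why NED compares spectral shape rather than overall brightness, which suits the background-similarity criterion used here.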
The NED set between the mean vector and all pixels is calculated using the definition above. The minimum possible NED value of two spectra is 0 (when the two spectra are matched), and the maximum possible value is 1. The outlier-removal step is based on a range that represents the variation between the pixels most similar to the background and the pixels most different from it. The range is defined as
$$Range = NED_{Max} - NED_{Min}$$
where $NED_{Max}$ and $NED_{Min}$ are the maximum and minimum values of the NED set, respectively.
The anomalous pixels have low similarity to the mean vector. Therefore, the pixels corresponding to smaller values in the NED set can be considered background pixels to be separated. The threshold is defined as
$$Threshold = NED_{Median} + a \cdot Range$$
where $NED_{Median}$ is the median value of the NED set and $a$ is an adjustable constant.
To separate the anomalous and the background pixels more effectively, an iteration strategy that repeats the separation process is employed in this article. Let $Range_n$ be the range value of the NED set in the $n$-th iteration. As the iterations proceed, $Range_n$ gradually decreases until it reaches a stable value. Therefore, the parameter $t_n = Range_{n-1} - Range_n$ can be used as the condition to terminate the iteration.
The flowchart of the BSM is shown in Figure 2, where $t_{threshold}$ is the threshold used to terminate the iteration process.
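The separation loop described above (per-pixel NED to the current background mean, threshold at the median plus $a \cdot Range$, iterate until the range stabilizes) can be sketched as follows. The synthetic spectra, the iteration cap, and all names are illustrative assumptions, not the authors' code:

```python
import numpy as np

def bsm_separate(X, a=0.01, t_threshold=1e-5, max_iter=100):
    """BSM sketch: X is (n_pixels, n_bands); returns a background mask."""
    def ned_to(mean_vec, pixels):
        # NED between the mean vector and every pixel (mean-normalized)
        Nm = mean_vec / mean_vec.mean()
        Np = pixels / pixels.mean(axis=1, keepdims=True)
        return np.sqrt(((Np - Nm) ** 2).sum(axis=1))

    bg_mask = np.ones(len(X), dtype=bool)
    prev_range = None
    for _ in range(max_iter):
        d = ned_to(X[bg_mask].mean(axis=0), X)
        range_n = d.max() - d.min()
        # Stop once t_n = Range_{n-1} - Range_n falls below t_threshold
        if prev_range is not None and abs(prev_range - range_n) < t_threshold:
            break
        prev_range = range_n
        # Keep pixels below Threshold = median + a * Range as background
        bg_mask = d <= np.median(d) + a * range_n
    return bg_mask                       # True = background, False = anomaly

rng = np.random.default_rng(1)
bg = 0.5 + rng.normal(0.0, 0.02, size=(95, 5))          # flat spectra
anom = np.tile([0.1, 0.9, 0.1, 0.9, 0.1], (5, 1)) \
       + rng.normal(0.0, 0.02, size=(5, 5))             # different shape
mask = bsm_separate(np.vstack([bg, anom]))
```

Because the threshold sits just above the median of the NED set, each pass keeps only the most background-like pixels, so the estimated background mean purifies over iterations.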

2.3. KMNF-BSM-Based Anomaly Detectors

To assess the performance of KMNF-BSM, the anomalous and background pixel sets are used as input for two unsupervised anomaly detectors (RXD and LPD) and two supervised detectors (OSPAD and CEMAD) to improve their background suppression and anomaly detection capabilities. The KMNF-BSM-based RXD (KMNF-BSM-RXD) algorithm, the KMNF-BSM-based LPD (KMNF-BSM-LPD) algorithm, the KMNF-BSM-based OSPAD (KMNF-BSM-OSPAD) algorithm, and the KMNF-BSM-based CEMAD (KMNF-BSM-CEMAD) algorithm are proposed and described in detail as follows.

2.3.1. KMNF-BSM-RXD Algorithm

RXD, a constant-false-alarm-rate detection method, assumes that the data space is whitened and Gaussian-distributed and then sets up two hypotheses [17]:
$$H_0 : x = n$$
$$H_1 : x = a s + n$$
where $H_0$ represents the absence of targets, $H_1$ represents the presence of targets, $x$ is the pixel spectral vector, $n$ is the vector representing background information, $a$ is a coefficient, and $s$ is the target spectral vector.
Assuming that the data have the same covariance and different mean values under the two hypotheses [37], the RXD is defined as follows:
$$\sigma_{RXD} = (x - \mu_b)^T C_b^{-1} (x - \mu_b) \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{RXD}$$
where $\mu_b$ is the mean vector of the image, $C_b$ is the background covariance matrix estimated from the whole image, and $\eta_{RXD}$ is the decision threshold of RXD.
After separating the anomalous and the background pixels, the anomalous pixel set and the background pixel set are obtained. The RXD can then be redefined as follows:
$$\sigma_{KBR} = (x_{KMNF} - \mu_{BER})^T C_{BER}^{-1} (x_{KMNF} - \mu_{BER}) \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{KBR}$$
where $x_{KMNF}$ is the pixel spectral vector after KMNF transformation, $\mu_{BER}$ is the mean vector of the background pixel set, $C_{BER}$ is the background covariance matrix estimated from the background pixel set, and $\eta_{KBR}$ is the decision threshold of the KMNF-BSM-RX detector.
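A sketch of the redefined detector: the Mahalanobis score is computed for every pixel, but the mean and covariance come only from the BSM background set. The synthetic data, the ridge term, and the function name are illustrative assumptions:

```python
import numpy as np

def kmnf_bsm_rxd(X_kmnf, bg_mask):
    """sigma_KBR: Mahalanobis distance of each pixel to the background
    statistics estimated from the background pixel set only."""
    B = X_kmnf[bg_mask]
    mu = B.mean(axis=0)                               # mu_BER
    C = np.cov(B, rowvar=False) + 1e-6 * np.eye(B.shape[1])  # C_BER + ridge
    C_inv = np.linalg.inv(C)
    diff = X_kmnf - mu
    # Per-pixel quadratic form diff^T C^-1 diff
    return np.einsum('ij,jk,ik->i', diff, C_inv, diff)

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
X[:3] += 6.0                          # three synthetic anomalies
mask = np.ones(100, dtype=bool)
mask[:3] = False                      # BSM flagged them as anomalous
scores = kmnf_bsm_rxd(X, mask)
```

Excluding the flagged pixels from the covariance estimate is exactly what keeps the anomalies from inflating `C` and masking themselves.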

2.3.2. KMNF-BSM-LPD Algorithm

The LPD detector suppresses background information by applying orthogonal subspace projection to the detection process [38]. The first step is the construction of the orthogonal projection operator $P$, which can be expressed as follows:
$$P = I - V V^+$$
where $V = [v_1, v_2, \ldots, v_q]$ contains the $q$ principal components of the HSI that represent the background information, and $V^+ = (V^T V)^{-1} V^T$. The LPD detector is then expressed as
$$\sigma_{LPD} = \hat{s}^T P x \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{LPD}$$
where $\hat{s} = [1, 1, \ldots, 1]^T$ is the estimation vector of the target, $x$ is the pixel spectral vector, and $\eta_{LPD}$ is the decision threshold of the LPD detector.
After separating the anomalous and the background pixels, the anomalous pixel set and the background pixel set are obtained. The orthogonal projection operator $P_{BER}$ is then reconstructed as follows:
$$P_{BER} = I - V_{BER} V_{BER}^+$$
where $V_{BER} = [v_1, v_2, \ldots, v_q]$ contains the $q$ principal components that represent the background pixel set, and $V_{BER}^+ = (V_{BER}^T V_{BER})^{-1} V_{BER}^T$. The KMNF-BSM-LPD detector is expressed as
$$\sigma_{KBL} = \hat{s}^T P_{BER}\, x_{KMNF} \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{KBL}$$
where $\hat{s} = [1, 1, \ldots, 1]^T$ is the estimation vector of the target, $x_{KMNF}$ is the pixel spectral vector after KMNF transformation, and $\eta_{KBL}$ is the decision threshold of the KMNF-BSM-LPD detector.
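A sketch of the reconstructed projector and score. Computing the background principal components via an SVD of the centered background set is one common choice; that choice, the function name, and the synthetic data are illustrative assumptions:

```python
import numpy as np

def kmnf_bsm_lpd(X_kmnf, bg_mask, q=2):
    """sigma_KBL: project each pixel with P = I - V V^+ built from the
    top-q principal components of the background pixel set, then
    correlate with s_hat = all-ones."""
    B = X_kmnf[bg_mask] - X_kmnf[bg_mask].mean(axis=0)
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    V = Vt[:q].T                                    # (n_features, q)
    P = np.eye(V.shape[0]) - V @ np.linalg.pinv(V)  # P_BER = I - V V^+
    s_hat = np.ones(V.shape[0])                     # estimation vector
    return X_kmnf @ (P.T @ s_hat)                   # s_hat^T P x per pixel

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 5))
mask = np.ones(80, dtype=bool)
scores = kmnf_bsm_lpd(X, mask, q=2)
```

Since the columns of `V` are orthonormal here, `V V^+` reduces to `V V^T`, and `P` annihilates any component lying in the background subspace.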

2.3.3. KMNF-BSM-OSPAD Algorithm

The OSP detector can reduce the influence of background information and effectively suppress various noises in HSIs [39]. It is based on the linear mixing model
$$x = s \alpha_s + U \alpha_U + n$$
where $x$ is the pixel spectral vector, $s$ is the target spectral vector, $\alpha_s$ is the abundance of $s$, $U$ is the spectral matrix of the background pixels, $\alpha_U$ is the abundance vector of $U$, and $n$ is the noise term.
The OSP projects $x$ onto the subspace orthogonal to $U$ via the projection operator
$$P_U^{\perp} = I - U U^+$$
where $U^+ = (U^T U)^{-1} U^T$.
Then, the orthogonal projection of $x$ can be expressed as follows:
$$P_U^{\perp} x = P_U^{\perp} (s \alpha_s + U \alpha_U + n) = P_U^{\perp} s \alpha_s + P_U^{\perp} n$$
The projected signal is passed through a filter $w$ to obtain
$$w^T P_U^{\perp} x = w^T (P_U^{\perp} s \alpha_s + P_U^{\perp} n)$$
The signal-to-noise ratio (SNR) can be expressed as
$$R_{SNR} = \frac{(w^T P_U^{\perp} s \alpha_s)^2}{E\big[(w^T P_U^{\perp} n)^2\big]} = \frac{w^T P_U^{\perp} s\, \alpha_s^2\, s^T (P_U^{\perp})^T w}{w^T P_U^{\perp} E[n n^T] (P_U^{\perp})^T w} = \frac{\alpha_s^2}{\sigma^2} \cdot \frac{w^T P_U^{\perp} s s^T (P_U^{\perp})^T w}{w^T P_U^{\perp} (P_U^{\perp})^T w}$$
where $E[n n^T]$ is the expected value of the noise term and $\sigma^2$ is the noise variance.
Then, the detector can be obtained by solving the maximum SNR problem. The OSP detector is expressed as follows:
$$\sigma_{OSP} = x^T P_U^{\perp} x \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{OSP}$$
where $\eta_{OSP}$ is the decision threshold of the OSP detector.
After separating the anomalous and the background pixels, the anomalous pixel set and the background pixel set are obtained. The KMNF-BSM-OSP is based on the linear mixing model
$$x_{KMNF} = s_{BER} \alpha_{s_{BER}} + U_{BER} \alpha_{U_{BER}} + n_{BER}$$
where $x_{KMNF}$ is the pixel spectral vector after KMNF transformation, $s_{BER}$ is the mean vector of the target pixel set, $\alpha_{s_{BER}}$ is the abundance of $s_{BER}$, $U_{BER}$ is the spectral matrix of the background pixel set, $\alpha_{U_{BER}}$ is the abundance vector of $U_{BER}$, and $n_{BER}$ is the noise term.
The KMNF-BSM-OSP uses the projection operator
$$P_{U_{BER}}^{\perp} = I - U_{BER} U_{BER}^+$$
where $U_{BER}^+ = (U_{BER}^T U_{BER})^{-1} U_{BER}^T$.
The KMNF-BSM-OSP detector is expressed as follows:
$$\sigma_{KBO} = x_{KMNF}^T P_{U_{BER}}^{\perp}\, x_{KMNF} \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{KBO}$$
where $\eta_{KBO}$ is the decision threshold of the KMNF-BSM-OSP detector.
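A sketch of the OSP scoring step. For brevity it assumes a small set of known background endmembers as the columns of $U$ (in KMNF-BSM-OSP these would come from the separated background pixel set); the function name and the synthetic spectra are illustrative:

```python
import numpy as np

def osp_scores(X, U):
    """OSP anomaly score x^T P x with P = I - U U^+, where the columns
    of U are representative background spectra (illustrative stand-in
    for the separated background set)."""
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)   # annihilates span(U)
    return np.einsum('ij,jk,ik->i', X, P, X)

rng = np.random.default_rng(4)
b1 = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])        # background endmember 1
b2 = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])        # background endmember 2
s  = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])        # anomalous signature
mix = rng.uniform(size=(60, 1))
bg_pixels = mix * b1 + (1 - mix) * b2                # noise-free mixtures
X = np.vstack([bg_pixels, s[None, :]])               # last pixel is the anomaly
scores = osp_scores(X, np.column_stack([b1, b2]))
```

Because every background pixel lies exactly in the span of `b1` and `b2`, `P` annihilates it and its score is essentially zero, while the anomalous signature retains a large residual.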

2.3.4. KMNF-BSM-CEMAD Algorithm

The CEM detector highlights the target by extracting signals of interest and attenuating other signals. It designs a finite impulse response (FIR) linear filter that minimizes the average output energy [40]. The FIR linear filter and its output constraint can be expressed as
$$w = [w_1, w_2, \ldots, w_b]^T, \qquad s^T w = 1$$
where $b$ is the number of bands and $s$ is the target spectral vector. The output of this filter is expressed as follows:
$$y_i = w^T x_i = x_i^T w$$
where $x_i$ is the pixel spectral vector at location $i$ and $y_i$ is the output of the FIR linear filter.
The average output energy of the HSI after the FIR linear filter is
$$\frac{1}{N} \sum_{i=1}^{N} y_i^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i^T w)^T (x_i^T w) = w^T \Big( \frac{1}{N} \sum_{i=1}^{N} x_i x_i^T \Big) w = w^T R w$$
where $N$ is the number of pixels and $R$ is the global autocorrelation matrix of the HSI.
The design of the filter can thus be transformed into the following constrained minimization problem:
$$\min_w\ w^T R w \quad \text{subject to} \quad s^T w = 1$$
By applying the Lagrange multiplier method, the CEM operator is obtained:
$$w = \frac{R^{-1} s}{s^T R^{-1} s}$$
Then, the CEM detector can be expressed as follows:
$$\sigma_{CEM} = x^T w \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{CEM}$$
where $\eta_{CEM}$ is the decision threshold of the CEM detector.
After separating the anomalous and the background pixels, the anomalous pixel set and the background pixel set are obtained. The FIR linear filter of the KMNF-BSM-CEM anomaly detector can be expressed as follows:
$$w_{BER} = \frac{R_{BER}^{-1} s_{BER}}{s_{BER}^T R_{BER}^{-1} s_{BER}}$$
where $R_{BER}$ is the autocorrelation matrix estimated from the background pixel set and $s_{BER}$ is the mean vector of the target pixel set.
Then, the KMNF-BSM-CEM detector can be expressed as follows:
$$\sigma_{KBC} = x_{KMNF}^T w_{BER} \ \underset{H_0}{\overset{H_1}{\gtrless}}\ \eta_{KBC}$$
where $\eta_{KBC}$ is the decision threshold of the KMNF-BSM-CEM anomaly detector.
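A sketch of the CEM filter and detector (the synthetic data, the ridge term, and all names are illustrative; in KMNF-BSM-CEM, $R$ and $s$ would be estimated from the separated background and anomalous pixel sets). Note that a pixel equal to the target signature scores exactly 1 by the constraint $s^T w = 1$:

```python
import numpy as np

def cem_scores(X, s):
    """CEM detector: w = R^-1 s / (s^T R^-1 s), sigma = x^T w per pixel."""
    R = X.T @ X / len(X)                          # sample autocorrelation
    R_inv_s = np.linalg.solve(R + 1e-8 * np.eye(len(s)), s)  # ridge for safety
    w = R_inv_s / (s @ R_inv_s)                   # enforces s^T w = 1
    return X @ w

rng = np.random.default_rng(5)
bg = rng.uniform(0.2, 0.4, size=(99, 6))          # background pixels
s = np.array([1.0, 0.1, 1.0, 0.1, 1.0, 0.1])      # target signature
X = np.vstack([bg, s[None, :]])
scores = cem_scores(X, s)
```

Minimizing $w^T R w$ under the unit-response constraint drives the background outputs toward zero while the target response is pinned at 1, which is why thresholding the scores separates the two.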

3. Results

In this section, the experimental results of the proposed methods are described. In Section 3.1, the three real HSI datasets with different spatial and spectral resolutions over different scenes (including the airport scene, the beach scene, the urban scene, and the vegetation scene) employed in the experiments are introduced. The KMNF-BSM-based algorithms involve several parameters that can be adjusted in practical applications. In Section 3.2, the parameter settings and the evaluation criteria of the proposed methods are described. Four experiments are designed to evaluate the background suppression and anomaly detection capabilities of the KMNF-BSM-based algorithms. Before estimating anomalies and background, an effective feature extraction method is needed to mine the high-order correlation between spectral bands and extract informative and discriminative features between anomalies and background. The first experiment is designed to identify such a method and provides a reference for the application of feature extraction in AD: it assesses the detection performance of KMNF and other feature extraction methods (including LDA, PCA, MNF, OMNF, FA, KPCA, OKMNF, LPP, and LLE) in AD for HSIs, and the test results are shown in Section 3.3. To evaluate the background suppression and anomaly detection capabilities of the KMNF-BSM-based algorithms, the remaining three experiments compare the proposed methods with other state-of-the-art algorithms (including BPB-CEMAD, BPB-LPD, BPB-OSPAD, BPB-RXD, abundance- and dictionary-based low-rank decomposition (ADLR) [41], low-rank and sparse representation (LRASR) [42], anomaly detection via integration of feature extraction and background purification (FEBPAD) [43], and the kernel isolation forest-based hyperspectral anomaly detection method (KIFD) [44]) on HSIs over different scenes, HSIs under different noise levels, and HSIs with different spatial and spectral resolutions.
The results are given in Section 3.4.1, Section 3.4.2, and Section 3.4.3, respectively.

3.1. Input Data

3.1.1. San Diego Dataset

The San Diego dataset was captured by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the San Diego airport area, CA, USA. The HSI data initially have 224 spectral bands. In the experiments, we used 189 bands after removing the noisy bands affected by water absorption (1–6, 33–35, 97, 107–113, 153–166, and 221–224). The spatial resolution is 3.5 m/pixel, the spatial size of the data is 64 × 64 pixels, and three planes, comprising 134 pixels, are regarded as the anomalies to be detected [45]. The visualization image, its ground truth map, and the spectral curves of the main ground objects in this scene are shown in Figure 3a.

3.1.2. Airport–Beach–Urban Dataset

The Airport–Beach–Urban (ABU) dataset was collected by the AVIRIS sensor. Among these images, the beach scene and the urban scene are used in the experiments. The beach scene image, captured over Cat Island, has 128 × 128 pixels with corresponding references, and 19 pixels are chosen as the anomalies to be detected; the urban scene image, captured over the Texas coast, has 100 × 100 pixels, and 67 pixels are chosen as the anomalies to be detected. The spatial resolution of both images is 17.2 m/pixel, and in the experiments, we used 188 bands for the beach scene and 204 bands for the urban scene [46]. The visualization image of the beach scene, its ground truth map, and the spectral curves of the main ground objects are shown in Figure 3b; those of the urban scene are shown in Figure 3c.

3.1.3. Xiong’an Dataset

The Xiong’an dataset was captured over Xiong’an New District, Hebei Province, China, by the airborne multi-modular imaging spectrometer (AMMIS), a next-generation Chinese airborne hyperspectral sensor. The spatial resolution of this dataset is 0.5 m/pixel. In the experiments, 250 spectral bands covering the wavelength range 0.4–1.0 µm were used [47,48,49,50]. The dataset consists of 512 × 512 pixels and ten different vegetation types; the spectrum of elm, covering 144 pixels, is embedded in this image as the anomaly to be detected. The visualization image, its ground truth map, and the spectral curves of the main ground objects in this scene are shown in Figure 3d. As Figure 3d shows, the difference between the anomaly and background spectra is subtle, which makes this dataset well suited to assessing the validity of the proposed method in complex scenes.

3.2. Experimental Settings

The KMNF-BSM-based algorithms involve several parameters that can be adjusted in practical applications. In this section, the parameter settings and the evaluation criteria of the proposed methods are introduced.

3.2.1. Parameter Settings

For all experiments, each detector is applied as a global anomaly detector. The value of $a$ affects the result of every iteration in the BSM, but its impact weakens after several iterations. To remove the influence of this parameter choice on the optimized algorithms, this article sets $a$ to 0.01 for all experiments. Analysis of the BSM shows that the value of $t_{threshold}$ is related to the accuracy of separating the anomalous and background pixel sets. The value of $t_{threshold}$ in the KMNF-BSM-based methods is set to 0.00001, a value close to 0.

3.2.2. Evaluation Criteria

The 3D receiver operating characteristic (3D ROC) curve and its three 2D projections, the 2D ROC curves of $(P_D, P_F)$, $(P_D, t)$, and $(P_F, t)$, are visualized to assess the performance of the detectors. Moreover, the areas under these curves, AUC$(D,F)$, AUC$(D,t)$, and AUC$(F,t)$, together with the AUC values of target detectability (AUC$_{TD}$), background suppressibility (AUC$_{BS}$), target detection in background (AUC$_{TD\text{-}BS}$), overall detection probability (AUC$_{ODP}$), overall detection (AUC$_{OD}$), and signal-to-noise probability ratio (AUC$_{SNPR}$), are selected as evaluation criteria to quantitatively describe the detection performance of the detectors. Among them, AUC$(D,F)$ and AUC$(D,t)$ evaluate the effectiveness and detection probability of a detector, while AUC$(F,t)$ describes its background suppressibility. A detector with higher AUC$(D,F)$, AUC$(D,t)$, AUC$_{TD}$, AUC$_{BS}$, AUC$_{TD\text{-}BS}$, AUC$_{ODP}$, AUC$_{OD}$, and AUC$_{SNPR}$ values and a lower AUC$(F,t)$ value shows better performance. The calculation formulas of all the above evaluation criteria can be found in the literature [51].
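The threshold-sweep construction of these curves can be sketched as follows. The helper names, the 501-point sweep, and the closed-form combinations in the comments are assumptions based on common usage of the 3D ROC framework; the authoritative definitions of the combined criteria are given in [51]:

```python
import numpy as np

def trap(y, x):
    # Trapezoidal integration of y over x (x must be non-decreasing)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def roc_aucs(scores, labels):
    """Sweep a threshold t over normalized scores, trace P_D(t) and
    P_F(t), and integrate the three 2D ROC curves plus common
    combinations (TD = DF + Dt, BS = DF - Ft, OD = DF + Dt - Ft,
    SNPR = Dt / Ft)."""
    s = (scores - scores.min()) / (np.ptp(scores) + 1e-12)
    ts = np.linspace(0, 1, 501)
    pd = np.array([(s[labels == 1] >= t).mean() for t in ts])
    pf = np.array([(s[labels == 0] >= t).mean() for t in ts])
    auc_df = trap(pd[::-1], pf[::-1])   # AUC(D,F): pf increases as t falls
    auc_dt = trap(pd, ts)               # AUC(D,t)
    auc_ft = trap(pf, ts)               # AUC(F,t)
    return {"DF": auc_df, "Dt": auc_dt, "Ft": auc_ft,
            "TD": auc_df + auc_dt, "BS": auc_df - auc_ft,
            "OD": auc_df + auc_dt - auc_ft,
            "SNPR": auc_dt / (auc_ft + 1e-12)}

# Toy scores: 90 background pixels at 0.1, 10 anomalies at 0.9
scores = np.concatenate([np.full(90, 0.1), np.full(10, 0.9)])
labels = np.concatenate([np.zeros(90), np.ones(10)])
aucs = roc_aucs(scores, labels)
```

For this perfectly separated toy case, AUC$(D,F)$ approaches 1 while AUC$(F,t)$ stays near 0, which is the signature of a detector with both high detectability and strong background suppression.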

3.3. Experimental Results for Feature Extraction

This section assesses the performance of KMNF transformation and nine other feature extraction methods (LDA, PCA, MNF, OMNF, FA, KPCA, OKMNF, LPP, and LLE) in AD for HSIs with different spatial and spectral resolutions over different scenes. The features extracted by each method are used as input to RXD, LPD, OSPAD, and CEMAD, and the AUC(D, F) values and their average over the four detectors are employed to assess detection capability. The standard deviation, which reflects the dispersion of the AUC(D, F) values, represents the adaptability of a feature extraction method across anomaly detectors. In the experiment, “None” denotes using the raw data as the input of the anomaly detectors; these results serve as a baseline for evaluating the impact of the different feature extraction methods on AD. The detection capability and adaptability of the different feature extraction methods in AD for HSIs over different scenes are shown in Table 1.
As Table 1 shows, KMNF transformation improves the average AUC(D, F) value by up to 0.0924, 0.0044, 0.0046, and 0.0088 for the HSIs over the airport, beach, urban, and vegetation scenes, respectively, and the dispersion of its AUC values is small compared with that of the other methods. This indicates that, compared with the raw data and the other nine feature extraction methods, the band subset extracted by KMNF transformation contains more informative and discriminative information between anomalies and background. Therefore, this article applies KMNF transformation to feature extraction for subsequent processing.
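The average and standard-deviation columns of Table 1 can be reproduced directly from the per-detector AUC(D, F) values; a minimal sketch (variable names are ours; NumPy's default population standard deviation, ddof=0, matches the reported figures):

```python
import numpy as np

# AUC(D, F) of KMNF-extracted features on the airport scene for the four
# detectors, in the order CEMAD, LPD, OSPAD, RXD (from Table 1).
kmnf_airport = np.array([0.877646, 0.932705, 0.929443, 0.981390])

average = kmnf_airport.mean()    # ≈ 0.930296, as reported in Table 1
dispersion = kmnf_airport.std()  # ≈ 0.036705 (population std, ddof=0)
```

A smaller dispersion means the extracted features work comparably well across all four detectors, which is how Table 1 measures adaptability.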

3.4. Experimental Results for KMNF-BSM-Based Methods

In this section, to prove the validity of the KMNF-BSM, the anomalous and background pixel sets obtained by KMNF-BSM are used as input for two unsupervised anomaly detectors (RXD and LPD) and two supervised detectors (OSPAD and CEMAD). The detection capability of KMNF-BSM in AD for HSIs over different scenes, under different noise levels, and with different spatial and spectral resolutions is assessed; the results are shown in Section 3.4.1, Section 3.4.2, and Section 3.4.3, respectively.

3.4.1. KMNF-BSM-Based Methods for HSIs over Different Scenes

To evaluate the adaptability of KMNF-BSM to HSIs over different scenes, the HSIs over the airport, beach, urban, and vegetation scenes are used in this experiment; the detection results are shown in Figure 4, Figure 5, Figure 6 and Figure 7. As mentioned above, anomalies are small objects that occupy a relatively small part of the image. To make the anomalous pixels clearly visible, the reference maps are shown as grayscale images, while the detection maps use a cool–warm color scale to demonstrate the target detectability and background suppressibility of the detectors.
The detection maps of the different detectors for the airport scene are shown in Figure 4. These maps show that the KMNF-BSM-based methods perform best. CEMAD, RXD, BPB-CEMAD, BPB-RXD, ADLR, FEBPAD, and LRASR obtain acceptable background suppression while missing certain anomalous pixels. LPD, OSPAD, BPB-LPD, BPB-OSPAD, and KIFD detect some anomalous pixels while failing to suppress some background areas, such as the parking apron. KMNF-BSM-CEMAD and KMNF-BSM-OSPAD highlight the anomalies against the background, and KMNF-BSM-LPD and KMNF-BSM-RXD suppress more of the background.
The visual detection results of the different methods for the beach scene are depicted in Figure 5. Similar to the results for the airport scene, LPD, OSPAD, BPB-LPD, BPB-OSPAD, BPB-RXD, KIFD, KMNF-BSM-LPD, and KMNF-BSM-OSPAD demonstrate poor background suppression, generating high false alarm rates (FARs). Compared with CEMAD, RXD, BPB-CEMAD, ADLR, FEBPAD, LRASR, and KMNF-BSM-RXD, the KMNF-BSM-CEMAD successfully suppresses the background and effectively detects the anomalies. The results suggest that the KMNF-BSM-RXD outperforms BPB-RXD and is relatively similar to RXD.
The detection results of each method for the urban scene are shown in Figure 6. It is obvious that the KMNF-BSM-based method performs the best. KMNF-BSM-CEMAD and KMNF-BSM-RXD highlight more of the anomalies than other compared methods.
In the detection maps of the vegetation scene in Figure 7, it can be seen that CEMAD, FEBPAD, KIFD, LRASR, KMNF-BSM-CEMAD, and KMNF-BSM-OSPAD have more uniform backgrounds than other methods because the anomalous pixels occupy a tiny part of the entire image. The KMNF-BSM-RXD is relatively similar to RXD and BPB-RXD.
To quantitatively assess the results, the 3D ROC curves and the three 2D ROC curves derived from them are shown for each method in Figure 8, and the AUC values calculated from the three 2D ROC curves for the HSIs over the airport, beach, urban, and vegetation scenes are shown in Table 2.
As shown in Figure 8, the KMNF-BSM-based methods obtain the best ROC curves for the airport scene, the urban scene, and the vegetation scene. Their 2D ROC curves of (P_D, P_F) and (P_D, t) are closest to the top-left corner among the compared methods, and their 2D ROC curves of (P_F, t) are closest to the bottom-right corner, indicating better performance of the KMNF-BSM-based methods.
To further assess the detection performance of each method, the AUC values calculated from the three 2D ROC curves for the HSIs over the airport, beach, urban, and vegetation scenes are shown in Table 2. For the urban and vegetation scenes, not all results of the KMNF-BSM-based methods are the best: the AUC(D, t) and AUC_TD of ADLR in the urban scene and of FEBPAD in the vegetation scene outperform those of the KMNF-BSM-based methods, but their AUC(F, t) values are much worse, and the KMNF-BSM-based methods outperform the other algorithms on all remaining evaluation criteria. In general, the results confirm that the proposed method performs well across different scenes, demonstrating better adaptability and generalization ability.

3.4.2. KMNF-BSM-Based Methods for HSIs under Different Noise Levels

To assess the anti-noise ability of the KMNF-BSM-based methods, zero-mean Gaussian noise with standard deviation σ set to 0.025, 0.050, 0.075, and 0.100 is added to each band of the HSIs. The AUC(D, F) values of the HSIs under different noise levels are shown in Table 3.
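This noise model can be sketched as follows (the function name, the (rows, cols, bands) cube layout, and the assumption of reflectance-scaled data are ours):

```python
import numpy as np

def add_band_noise(hsi, sigma, seed=0):
    """Add zero-mean Gaussian noise with standard deviation sigma to every
    band of an HSI cube shaped (rows, cols, bands)."""
    rng = np.random.default_rng(seed)
    return hsi + rng.normal(loc=0.0, scale=sigma, size=hsi.shape)

# The four noise levels used in the experiment would be applied as, e.g.:
# for sigma in (0.025, 0.050, 0.075, 0.100):
#     noisy = add_band_noise(cube, sigma)
```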
From Table 3, it can be observed that the proposed methods achieve the best AUC(D, F) values, demonstrating strong anti-noise performance.

3.4.3. KMNF-BSM-Based Methods for HSIs with Different Spatial and Spectral Resolutions

Because of its high spectral and spatial resolutions, the Xiong’an dataset is chosen to conduct this experiment to evaluate the adaptability of KMNF-BSM to HSIs with different spatial and spectral resolutions. HSIs with different spatial resolutions are obtained after pixel merging on the Xiong’an dataset, and HSIs with different spectral resolutions are obtained after band merging on the Xiong’an dataset.
The Xiong’an dataset itself is treated as Spatial1 and Spectral1; 4 adjacent pixels are averaged to obtain Spatial2 and 16 adjacent pixels are averaged to obtain Spatial3, while two, four, and eight adjacent bands are averaged to obtain Spectral2, Spectral3, and Spectral4, respectively. The spatial resolutions of Spatial1, Spatial2, and Spatial3 are 0.5 m/pixel, 1 m/pixel, and 2 m/pixel, respectively, and the spectral resolutions of Spectral1, Spectral2, Spectral3, and Spectral4 are 2.4 nm, 4.8 nm, 9.6 nm, and 19.2 nm, respectively. The AUC(D, F) values of the HSIs with different spatial and spectral resolutions are shown in Table 4 and Table 5.
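The pixel and band merging described above can be sketched as block averaging (the function names and the (rows, cols, bands) cube layout are ours):

```python
import numpy as np

def merge_spatial(hsi, f):
    """Average non-overlapping f x f pixel blocks of an HSI cube shaped
    (rows, cols, bands); f = 2 halves the spatial resolution."""
    r, c, b = hsi.shape
    return (hsi[:r - r % f, :c - c % f, :]
            .reshape(r // f, f, c // f, f, b)
            .mean(axis=(1, 3)))

def merge_spectral(hsi, f):
    """Average groups of f adjacent bands; f = 2 halves the spectral
    resolution (e.g., 2.4 nm to 4.8 nm)."""
    r, c, b = hsi.shape
    return hsi[:, :, :b - b % f].reshape(r, c, b // f, f).mean(axis=3)
```

For example, merge_spatial(cube, 2) averages 4 adjacent pixels (Spatial1 to Spatial2) and merge_spatial(cube, 4) averages 16 (Spatial1 to Spatial3), while merge_spectral(cube, 2) averages two adjacent bands (Spectral1 to Spectral2).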
The AUC(D, F) values for HSIs with different spatial resolutions show that the KMNF-BSM-based methods achieve better detection performance than the other compared methods. The results for HSIs with different spectral resolutions suggest that KMNF-BSM-CEMAD outperforms the other methods and is roughly equivalent to CEMAD. These results verify that the proposed methods have relatively better adaptability to HSIs with different spatial and spectral resolutions.

4. Discussion

In this article, a novel anomalous and background pixel set separation method is proposed. To prove the validity of the proposed method, the anomalous and background pixel sets obtained by KMNF-BSM are used as input for two unsupervised anomaly detectors (RXD and LPD) and two supervised detectors (OSPAD and CEMAD), and four experiments are designed to evaluate the detection capability of KMNF-BSM in AD for HSIs over different scenes, under different noise levels, and with different spatial resolutions and different spectral resolutions. In this section, the experimental results shown in Section 3 are discussed.
The first experiment is designed to provide a reference for the application of feature extraction in AD. In this experiment, the detection performance of KMNF and other feature extraction methods (including LDA, PCA, MNF, OMNF, FA, KPCA, OKMNF, LPP, and LLE) in AD for HSIs are assessed, and the test results are shown in Section 3.3. The results suggest that compared with the initial data and the other nine feature extraction methods, the band subset extracted by KMNF transformation contains more informative and discriminative information between anomalies and background. Therefore, it is helpful to employ KMNF transformation for subsequent processing.
In order to evaluate the background suppression and anomaly detection capabilities of KMNF-BSM-based algorithms for HSIs over different scenes, the second experiment tests the performance of proposed methods and other state-of-the-art algorithms (including BPB-CEMAD, BPB-LPD, BPB-OSPAD, BPB-RXD, ADLR, LRASR, FEBPAD, and KIFD) for HSIs over different scenes. The experimental results obtained for the airport scene, the beach scene, the urban scene, and the vegetation scene are shown in Section 3.4.1. The results confirm that the proposed method has excellent performance for different scenes, demonstrating better adaptability to different scenes and generalization ability.
The third experiment is designed to assess the anti-noise ability of the KMNF-BSM-based methods: zero-mean Gaussian noise with standard deviation σ set to 0.025, 0.050, 0.075, and 0.100 is added to each band of the HSIs, and the performance of the proposed methods and the other state-of-the-art algorithms is tested. The experimental results, shown in Section 3.4.2, suggest that the proposed methods achieve the best AUC(D, F) values and perform well in terms of anti-noise performance.
To evaluate the adaptability of the proposed methods to HSIs with different spatial and spectral resolutions, HSIs with different spatial resolutions are obtained by pixel merging and HSIs with different spectral resolutions by band merging on the Xiong’an dataset, and the last experiment tests the performance of the proposed methods and the other state-of-the-art algorithms on these images. The results show that the proposed methods have relatively better adaptability to HSIs with different spatial and spectral resolutions; for different spectral resolutions, KMNF-BSM-CEMAD outperforms the other methods and is roughly equivalent to CEMAD. It is well known that spectral resolution has a significant impact on resolving the spectral details of ground objects. The method proposed in this paper is based on feature extraction and a spectral-matching-based BSM; for images with complex backgrounds, degraded spectral resolution affects its ability to separate background and anomalies. Compared with the supervised method using prior knowledge, the proposed method did not obtain ideal results when processing HSIs with lower spectral resolutions.
The above analysis confirms that the proposed method has relatively better adaptability to different scenes, better anti-noise performance, and better adaptability to HSIs with different spatial and spectral resolutions than the other compared methods. However, each method has its limitations, and it is impossible for the proposed method to obtain satisfactory results in all cases. For the airport, beach, urban, and vegetation scenes, the proportions of anomalous pixels in the entire image are 3.27%, 0.12%, 0.67%, and 0.05%, respectively. When the abnormal pixels occupy a small part of the entire image, the anomalies have only a slight influence on background estimation, and the deviation between the estimated and true backgrounds is minor; in this case, the results obtained by the proposed methods may be unsatisfactory. Because of changes in sunlight, atmospheric transmission, sensor noise, and other factors, the reflectivity of targets in HSIs is not uniquely determined, and the spectra of the same substance may differ (the phenomenon of “same substance with different spectra” commonly exists in hyperspectral remote sensing). Therefore, supervised detection methods suffer from low detection accuracy when prior knowledge is insufficient. The method proposed in this article aims at constructing an efficient representation of anomalies and background information; it can accurately reconstruct the background and separate anomalies automatically, which solves the problem of low detection accuracy faced by supervised detection methods when prior knowledge is insufficient. However, when the target spectrum is unique, the supervised detectors will obtain excellent results, and the proposed method may be inapplicable.

5. Conclusions

Constructing an efficient representation of anomalies and background is one of the critical steps in AD. In this article, a novel anomalous and background pixel set separation method that considers the high-order structures of HSIs is presented. The experimental results demonstrate that the proposed method has better adaptability to HSIs over different scenes, under different noise levels, and with different spatial and spectral resolutions. The results can be summarized as follows:
(1)
Taking the high-order correlation between spectral bands in HSIs into account, the detection ability of various feature extraction methods (including LDA, PCA, MNF, OMNF, FA, KPCA, OKMNF, LPP, LLE, and KMNF) in AD is evaluated in this article. The results illustrate that the KMNF transformation is more effective and robust in feature extraction for AD than other methods, providing a reference for further research on feature extraction in AD.
(2)
When the abnormal pixels occupy a small portion of the entire image, the anomalies have little influence on background estimation and the deviation between the estimated and true backgrounds is minor; in this case, the results obtained by the proposed methods may be unsatisfactory.
(3)
Aiming to separate anomalies and background efficiently, a BSM that combines the outlier removal, the iteration strategy, and the RXD is proposed in this article. The results show that the KMNF-BSM has significant anti-noise ability and adaptability to HSIs with different spatial and spectral resolutions.
In conclusion, the results show that the KMNF-BSM has excellent performance in separating anomalies and background information, and using its output as the input of a detector can effectively improve detection capability. Moreover, the KMNF-BSM-based methods achieve autonomous hyperspectral anomaly detection without pre-processing or post-processing procedures, which can provide a reference for subsequent research on the automatic realization of other methods. However, the proposed method may be unsatisfactory when processing HSIs with quite complex backgrounds and lower spectral resolutions, HSIs whose abnormal pixels occupy a small portion of the entire image, and HSIs with a unique anomalous spectrum; these cases will be the focus of our future work.

Author Contributions

Conceptualization, T.X. and Y.W.; methodology, T.X.; software, T.X. and H.X.; validation, T.X. and Y.W.; formal analysis, T.X.; investigation, T.X.; resources, Y.W.; data curation, T.X.; writing—original draft preparation, T.X.; writing—review and editing, T.X. and J.J.; visualization, C.Z. and X.D.; supervision, Y.W.; project administration, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Civil Aerospace Project (No. D040102).

Data Availability Statement

Xiong’an dataset used in this article can be downloaded from http://www.hrs-cas.com/a/share/shujuchanpin/2019/0501/1049.html, accessed on 20 December 2021.

Acknowledgments

We are grateful to the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, which provided the Xiong’an hyperspectral dataset for this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Du, B.; Zhang, L. Random-Selection-Based Anomaly Detector for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1578–1589.
2. Shaw, G.; Manolakis, D. Signal processing for hyperspectral image exploitation. IEEE Signal Process. Mag. 2002, 19, 12–16.
3. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28.
4. Zhang, Y.; Du, B.; Zhang, L.; Wang, S. A Low-Rank and Sparse Matrix Decomposition-Based Mahalanobis Distance Method for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1376–1389.
5. Sun, W.; Ren, K.; Meng, X.; Yang, G.; Xiao, C.; Peng, J.; Huang, J. MLR-DBPFN: A Multi-Scale Low Rank Deep Back Projection Fusion Network for Anti-Noise Hyperspectral and Multispectral Image Fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
6. Sun, W.; Yang, G.; Peng, J.; Meng, X.; He, K.; Li, W.; Li, H.; Du, Q. A Multiscale Spectral Features Graph Fusion Method for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
7. Kang, X.; Zhang, X.; Li, S.; Li, K.; Li, J.; Benediktsson, J.A. Hyperspectral Anomaly Detection with Attribute and Edge-Preserving Filters. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5600–5611.
8. Manolakis, D.; Shaw, G. Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag. 2002, 19, 29–43.
9. Stein, D.W.J.; Beaven, S.G.; Hoff, L.E.; Winter, E.M.; Schaum, A.P.; Stocker, A.D. Anomaly detection from hyperspectral imagery. IEEE Signal Process. Mag. 2002, 19, 58–69.
10. Chang, C.I.; Chiang, S.S. Anomaly detection and classification for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2002, 40, 1314–1325.
11. Sun, X.; Zhang, B.; Zhuang, L.; Gao, H.; Sun, X.; Ni, L. Anomaly Detection Based on Tree Topology for Hyperspectral Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, early access.
12. Fan, G.; Ma, Y.; Mei, X.; Fan, F.; Huang, J.; Ma, J. Hyperspectral Anomaly Detection with Robust Graph Autoencoders. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
13. Rao, W.; Gao, L.; Qu, Y.; Sun, X.; Zhang, B.; Chanussot, J. Siamese Transformer Network for Hyperspectral Image Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19.
14. Rao, W.; Qu, Y.; Gao, L.; Sun, X.; Wu, Y.; Zhang, B. Transferable network with Siamese architecture for anomaly detection in hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2022, 106, 102669.
15. Zhao, R.; Du, B.; Zhang, L.; Zhang, L. A robust background regression-based score estimation algorithm for hyperspectral anomaly detection. ISPRS J. Photogramm. Remote Sens. 2016, 122, 126–144.
16. Matteoli, S.; Veracini, T.; Diani, M.; Corsini, G. Models and Methods for Automated Background Density Estimation in Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2837–2852.
17. Reed, I.S.; Yu, X. Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Signal Process. 1990, 38, 1760–1770.
18. Kwon, H.; Nasrabadi, N.M. Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 388–397.
19. Carlotto, M.J. A cluster-based approach for detecting man-made objects and changes in imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 374–387.
20. Schaum, A.P. Hyperspectral anomaly detection beyond RX. SPIE Proc. 2007, 6565, 1–13.
21. Gu, Y.; Liu, Y.; Zhang, Y. A Selective KPCA Algorithm Based on High-Order Statistics for Anomaly Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2008, 5, 43–47.
22. Zhang, Y.; Fan, Y.; Xu, M. A Background-Purification-Based Framework for Anomaly Target Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1238–1242.
23. Zhong, J.; Xie, W.; Li, Y.; Lei, J.; Du, Q. Characterization of Background-Anomaly Separability with Generative Adversarial Network for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 6017–6028.
24. Wang, S.; Wang, X.; Zhang, L.; Zhong, Y. Auto-AD: Autonomous Hyperspectral Anomaly Detection Network Based on Fully Convolutional Autoencoder. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
25. Sun, X.; Qu, Y.; Gao, L.; Sun, X.; Qi, H.; Zhang, B.; Shen, T. Target Detection Through Tree-Structured Encoding for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4233–4249.
26. Su, H.; Wu, Z.; Du, Q.; Du, P. Hyperspectral Anomaly Detection Using Collaborative Representation with Outlier Removal. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 5029–5038.
27. Zhao, C.; Wang, X.; Yao, X.; Tian, M. A background refinement method based on local density for hyperspectral anomaly detection. J. Cent. South Univ. 2018, 25, 84–94.
28. Vafadar, M.; Ghassemian, H. Anomaly Detection of Hyperspectral Imagery Using Modified Collaborative Representation. IEEE Geosci. Remote Sens. Lett. 2018, 15, 577–581.
29. Gao, L.; Guo, Q.; Plaza, A.; Li, J.; Zhang, B. Probabilistic anomaly detector for remotely sensed hyperspectral data. J. Appl. Remote Sens. 2014, 8, 083538.
30. Billor, N.; Hadi, A.S.; Velleman, P.F. BACON: Blocked adaptive computationally efficient outlier nominators. Comput. Stat. Data Anal. 2000, 34, 279–298.
31. Taitano, Y.P.; Geier, B.A.; Bauer, K.W., Jr. A Locally Adaptable Iterative RX Detector. EURASIP J. Adv. Signal Process. 2010, 2010, 341908.
32. Sun, X.; Qu, Y.; Gao, L.; Sun, X.; Qi, H.; Zhang, B.; Shen, T. Ensemble-Based Information Retrieval with Mass Estimation for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–23.
33. Dong, Y.; Du, B.; Zhang, L.; Zhang, L. Dimensionality Reduction and Classification of Hyperspectral Images Using Ensemble Discriminative Local Metric Learning. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2509–2524.
34. Nielsen, A.A. Kernel Maximum Autocorrelation Factor and Minimum Noise Fraction Transformations. IEEE Trans. Image Process. 2011, 20, 612–624.
35. Xie, W.; Fan, S.; Qu, J.; Wu, X.; Lu, Y.; Du, Q. Spectral Distribution-Aware Estimation Network for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
36. Robila, S.A.; Gershman, A. Spectral Matching Accuracy in Processing Hyperspectral Data. In Proceedings of the International Symposium on Signals Circuits and Systems, Iasi, Romania, 14–15 July 2005.
37. Xie, W.; Li, Y.; Lei, J.; Yang, J.; Chang, C.; Li, Z. Hyperspectral Band Selection for Spectral–Spatial Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3426–3436.
38. Harsanyi, J.C. Detection and Classification of Subpixel Spectral Signatures in Hyperspectral Image Sequences. Ph.D. Thesis, Department of Electronic & Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA, 1993.
39. Harsanyi, J.C.; Chang, C.-I. Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach. IEEE Trans. Geosci. Remote Sens. 1994, 32, 779–785.
40. Chang, C.-I.; Heinz, D.C. Constrained subpixel target detection for remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1144–1159.
41. Qu, Y.; Wang, W.; Guo, R.; Ayhan, B.; Kwan, C.; Vance, S.; Qi, H. Hyperspectral Anomaly Detection Through Spectral Unmixing and Dictionary-Based Low-Rank Decomposition. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4391–4405.
42. Xu, Y.; Wu, Z.; Li, J.; Plaza, A.; Wei, Z. Anomaly Detection in Hyperspectral Images Based on Low-Rank and Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1990–2000.
43. Ma, Y.; Fan, G.; Jin, Q.; Huang, J.; Mei, X.; Ma, J. Hyperspectral Anomaly Detection via Integration of Feature Extraction and Background Purification. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1436–1440.
44. Li, S.; Zhang, K.; Duan, P.; Kang, X. Hyperspectral Anomaly Detection with Kernel Isolation Forest. IEEE Trans. Geosci. Remote Sens. 2020, 58, 319–329.
45. Zou, Z.; Shi, Z. Hierarchical Suppression Method for Hyperspectral Target Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 330–342.
46. Li, Z.; Zhang, Y. Hyperspectral Anomaly Detection via Image Super-Resolution Processing and Spatial Correlation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2307–2320.
47. Jia, J.; Wang, Y.; Cheng, X.; Yuan, L.; Zhao, D.; Ye, Q.; Zhuang, X.; Shu, R.; Wang, J. Destriping Algorithms Based on Statistics and Spatial Filtering for Visible-to-Thermal Infrared Pushbroom Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4077–4091.
48. Jia, J.; Zheng, X.; Guo, S.; Wang, Y.; Chen, J. Removing Stripe Noise Based on Improved Statistics for Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
49. Xue, T.; Wang, Y.; Chen, Y.; Jia, J.; Wen, M.; Guo, R.; Wu, T.; Deng, X. Mixed Noise Estimation Model for Optimized Kernel Minimum Noise Fraction Transformation in Hyperspectral Image Dimensionality Reduction. Remote Sens. 2021, 13, 2607.
50. Jia, J.; Chen, J.; Zheng, X.; Wang, Y.; Guo, S.; Sun, H.; Jiang, C.; Karjalainen, M.; Karila, K.; Duan, Z.; et al. Tradeoffs in the Spatial and Spectral Resolution of Airborne Hyperspectral Imaging Systems: A Crop Identification Case Study. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
51. Chang, C.I. An Effective Evaluation Tool for Hyperspectral Target Detection: 3D Receiver Operating Characteristic Curve Analysis. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5131–5153.
Figure 1. Framework of KMNF-BSM-based anomaly detection.
Figure 2. The flowchart of BSM.
Figure 3. The visualization image, its ground truth map, and the spectral curves of the main ground objects in (a) the airport scene, (b) the beach scene, (c) the urban scene, and (d) the vegetation scene.
Figure 4. The detection maps of different detectors for the airport scene.
Figure 5. The detection maps of different detectors for the beach scene.
Figure 6. The detection maps of different detectors for the urban scene.
Figure 7. The detection maps of different detectors for the vegetation scene.
Figure 8. The 3D ROC curves along with their generated three 2D ROC curves of each method for (a) the airport scene, (b) the beach scene, (c) the urban scene, and (d) the vegetation scene.
Table 1. The detection capability and adaptability of different feature extraction methods in AD for HSIs over different scenes.
| Methods | CEMAD | LPD | OSPAD | RXD | Average | Standard Deviation |
|---|---|---|---|---|---|---|
| *airport scene* | | | | | | |
| None | 0.857013 | 0.811844 | 0.772991 | 0.957333 | 0.849795 | 0.068840 |
| FA | 0.705287 | 0.677984 | 0.599139 | 0.984278 | 0.741672 | 0.145390 |
| KPCA | 0.729342 | 0.867161 | 0.779185 | 0.975777 | 0.837866 | 0.093673 |
| LDA | 0.620843 | 0.664756 | 0.816110 | 0.978925 | 0.770158 | 0.140623 |
| LLE | 0.701738 | 0.576013 | 0.843780 | 0.977550 | 0.774770 | 0.150599 |
| LPP | 0.729904 | 0.681332 | 0.764991 | 0.976652 | 0.788220 | 0.112774 |
| MNF | 0.715573 | 0.628473 | 0.978940 | 0.983811 | 0.826699 | 0.157721 |
| OKMNF | 0.542965 | 0.983694 | 0.517016 | 0.983252 | 0.756732 | 0.226927 |
| OMNF | 0.579210 | 0.501744 | 0.989940 | 0.984292 | 0.763797 | 0.225002 |
| PCA | 0.602021 | 0.650877 | 0.569776 | 0.970020 | 0.698174 | 0.159584 |
| KMNF | 0.877646 | 0.932705 | 0.929443 | 0.981390 | 0.930296 | 0.036705 |
| *beach scene* | | | | | | |
| None | 0.819380 | 0.910611 | 0.909312 | 0.976638 | 0.903985 | 0.055921 |
| FA | 0.921028 | 0.552103 | 0.967202 | 0.980385 | 0.855180 | 0.176364 |
| KPCA | 0.902115 | 0.963578 | 0.945545 | 0.953594 | 0.941208 | 0.023457 |
| LDA | 0.907913 | 0.737984 | 0.724724 | 0.961316 | 0.832984 | 0.103475 |
| LLE | 0.943337 | 0.927826 | 0.932625 | 0.950857 | 0.938661 | 0.009006 |
| LPP | 0.981773 | 0.920338 | 0.933602 | 0.931036 | 0.941687 | 0.023672 |
| MNF | 0.943531 | 0.829997 | 0.622944 | 0.966967 | 0.840860 | 0.136060 |
| OKMNF | 0.738756 | 0.950395 | 0.937530 | 0.942007 | 0.892172 | 0.088695 |
| OMNF | 0.837243 | 0.702620 | 0.707177 | 0.966893 | 0.803483 | 0.108732 |
| PCA | 0.884658 | 0.724257 | 0.922945 | 0.955794 | 0.871914 | 0.088889 |
| KMNF | 0.945130 | 0.920347 | 0.939727 | 0.977237 | 0.945610 | 0.020453 |
| *urban scene* | | | | | | |
| None | 0.667807 | 0.979568 | 0.985286 | 0.949350 | 0.895503 | 0.132167 |
| FA | 0.981016 | 0.635529 | 0.560354 | 0.989976 | 0.791719 | 0.195617 |
| KPCA | 0.982866 | 0.537097 | 0.972700 | 0.990535 | 0.870800 | 0.192767 |
| LDA | 0.860412 | 0.956259 | 0.986726 | 0.990576 | 0.948493 | 0.052563 |
| LLE | 0.992927 | 0.960108 | 0.799670 | 0.991776 | 0.936120 | 0.079873 |
| LPP | 0.905048 | 0.983349 | 0.984986 | 0.991155 | 0.966135 | 0.035388 |
| MNF | 0.981934 | 0.822225 | 0.989893 | 0.991946 | 0.946500 | 0.071847 |
| OKMNF | 0.981582 | 0.986586 | 0.904523 | 0.991493 | 0.966046 | 0.035693 |
| OMNF | 0.888050 | 0.680471 | 0.991122 | 0.991974 | 0.887904 | 0.126997 |
| PCA | 0.983820 | 0.949867 | 0.975597 | 0.980033 | 0.972329 | 0.013291 |
| KMNF | 0.946366 | 0.983100 | 0.986795 | 0.991328 | 0.976897 | 0.017866 |
| *vegetation scene* | | | | | | |
| None | 0.999992 | 0.599149 | 0.592237 | 0.644218 | 0.708881 | 0.169246 |
| FA | 0.999816 | 0.848157 | 0.706734 | 0.951344 | 0.876513 | 0.123557 |
| KPCA | 0.998871 | 0.516278 | 0.655223 | 0.998518 | 0.792223 | 0.161827 |
| LDA | 0.993730 | 0.687711 | 0.641945 | 1.000000 | 0.830847 | 0.153623 |
| LLE | 0.996232 | 0.603171 | 0.649694 | 0.999624 | 0.812180 | 0.194088 |
| LPP | 0.998282 | 0.547374 | 0.658815 | 0.999549 | 0.801005 | 0.184973 |
| MNF | 0.999816 | 0.531912 | 0.853890 | 0.951344 | 0.834241 | 0.142195 |
| OKMNF | 0.991259 | 0.710232 | 0.627005 | 1.000000 | 0.832124 | 0.146672 |
| OMNF | 0.999800 | 0.671591 | 0.920110 | 0.957493 | 0.887249 | 0.164683 |
| PCA | 0.984193 | 0.522218 | 0.734254 | 1.000000 | 0.810166 | 0.189983 |
| KMNF | 0.999706 | 0.842263 | 0.767805 | 0.974503 | 0.896069 | 0.148757 |
Table 2. The AUC values calculated from the three 2D ROC curves for HSIs over different scenes.
| Methods | AUC(D,F) | AUC(D,t) | AUC(F,t) | AUC_TD | AUC_BS | AUC_TDBS | AUC_ODP | AUC_OD | AUC_SNPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| *airport scene* |  |  |  |  |  |  |  |  |  |
| CEMAD | 0.85701 | 0.23338 | 0.11346 | 1.09039 | 0.74355 | 0.11992 | 1.11992 | 0.97693 | 2.05690 |
| LPD | 0.81184 | 0.36868 | 0.23769 | 1.18053 | 0.57416 | 0.13100 | 1.13100 | 0.94284 | 1.55114 |
| OSPAD | 0.77299 | 0.47182 | 0.37399 | 1.24481 | 0.39900 | 0.09783 | 1.09783 | 0.87082 | 1.26158 |
| RXD | 0.95735 | 0.12820 | 0.03944 | 1.08554 | 0.91790 | 0.08875 | 1.08875 | 1.04610 | 3.25005 |
| BPB-CEMAD | 0.85768 | 0.23599 | 0.11299 | 1.09367 | 0.74469 | 0.12300 | 1.12300 | 0.98068 | 2.08856 |
| BPB-LPD | 0.88211 | 0.46088 | 0.17553 | 1.34300 | 0.70658 | 0.28535 | 1.28535 | 1.16746 | 2.62561 |
| BPB-OSPAD | 0.90440 | 0.65398 | 0.34849 | 1.55838 | 0.55591 | 0.30549 | 1.30549 | 1.20989 | 1.87662 |
| BPB-RXD | 0.95743 | 0.12831 | 0.03949 | 1.08575 | 0.91794 | 0.08882 | 1.08882 | 1.04626 | 3.24911 |
| ADLR | 0.60125 | 0.09063 | 0.03612 | 0.69187 | 0.56513 | 0.05451 | 1.05451 | 0.65576 | 2.50925 |
| FEBPAD | 0.97401 | 0.12430 | 0.30069 | 1.09831 | 0.67332 | −0.17640 | 0.82360 | 0.79761 | 0.41336 |
| KIFD | 0.80077 | 0.38660 | 0.22219 | 1.18737 | 0.57858 | 0.16441 | 1.16441 | 0.96518 | 1.73992 |
| LRASR | 0.95722 | 0.36527 | 0.10914 | 1.32249 | 0.84808 | 0.25613 | 1.25613 | 1.21335 | 3.34679 |
| KMNF-BSM-CEMAD | 0.98811 | 0.51859 | 0.11299 | 1.50670 | 0.87512 | 0.40560 | 1.40560 | 1.39371 | 4.58971 |
| KMNF-BSM-LPD | 0.97164 | 0.48301 | 0.06069 | 1.45466 | 0.91095 | 0.42232 | 1.42232 | 1.39396 | 7.95840 |
| KMNF-BSM-OSPAD | 0.94197 | 0.73273 | 0.33245 | 1.67470 | 0.60952 | 0.40028 | 1.40028 | 1.34225 | 2.20401 |
| KMNF-BSM-RXD | 0.98797 | 0.16302 | 0.00783 | 1.15098 | 0.98014 | 0.15519 | 1.15519 | 1.14315 | 20.8249 |
| *beach scene* |  |  |  |  |  |  |  |  |  |
| CEMAD | 0.81938 | 0.25592 | 0.17467 | 1.07530 | 0.64471 | 0.08125 | 1.08125 | 0.90063 | 1.46516 |
| LPD | 0.91061 | 0.28447 | 0.08998 | 1.19508 | 0.82064 | 0.19449 | 1.19449 | 1.10510 | 3.16160 |
| OSPAD | 0.90931 | 0.25479 | 0.05393 | 1.16410 | 0.85539 | 0.20087 | 1.20087 | 1.11018 | 4.72476 |
| RXD | 0.97664 | 0.18987 | 0.01218 | 1.16650 | 0.96446 | 0.17768 | 1.17768 | 1.15432 | 15.5857 |
| BPB-CEMAD | 0.71298 | 0.20172 | 0.21102 | 0.91470 | 0.50196 | −0.00930 | 0.99070 | 0.70368 | 0.95592 |
| BPB-LPD | 0.91889 | 0.26124 | 0.05088 | 1.18012 | 0.86801 | 0.21036 | 1.21036 | 1.12925 | 5.13484 |
| BPB-OSPAD | 0.94013 | 0.26540 | 0.04675 | 1.20553 | 0.89338 | 0.21865 | 1.21865 | 1.15877 | 5.67660 |
| BPB-RXD | 0.94783 | 0.11600 | 0.02073 | 1.06383 | 0.92710 | 0.09527 | 1.09527 | 1.04310 | 5.59472 |
| ADLR | 0.95980 | 0.25317 | 0.19913 | 1.21296 | 0.76067 | 0.05404 | 1.05404 | 1.01383 | 1.27138 |
| FEBPAD | 0.96310 | 0.06436 | 0.01504 | 1.02745 | 0.94805 | 0.04932 | 1.04932 | 1.01241 | 4.27862 |
| KIFD | 0.96029 | 0.22135 | 0.14216 | 1.18164 | 0.81813 | 0.07919 | 1.07919 | 1.03948 | 1.55701 |
| LRASR | 0.96518 | 0.20119 | 0.01063 | 1.16637 | 0.95455 | 0.19056 | 1.19056 | 1.15574 | 18.9302 |
| KMNF-BSM-CEMAD | 0.97923 | 0.26375 | 0.06463 | 1.24298 | 0.91460 | 0.19911 | 1.19911 | 1.17834 | 4.08078 |
| KMNF-BSM-LPD | 0.95503 | 0.30032 | 0.04065 | 1.25535 | 0.91438 | 0.25967 | 1.25967 | 1.21470 | 7.38800 |
| KMNF-BSM-OSPAD | 0.94628 | 0.27827 | 0.04302 | 1.22454 | 0.90326 | 0.23525 | 1.23525 | 1.18152 | 6.46832 |
| KMNF-BSM-RXD | 0.97844 | 0.25519 | 0.00896 | 1.23363 | 0.96948 | 0.24623 | 1.24623 | 1.22467 | 28.4683 |
| *urban scene* |  |  |  |  |  |  |  |  |  |
| CEMAD | 0.66781 | 0.21341 | 0.15409 | 0.88122 | 0.51372 | 0.05933 | 1.05933 | 0.72713 | 1.38501 |
| LPD | 0.97957 | 0.62279 | 0.25043 | 1.60236 | 0.72914 | 0.37236 | 1.37236 | 1.35193 | 2.48687 |
| OSPAD | 0.98529 | 0.63414 | 0.20286 | 1.61943 | 0.78243 | 0.43128 | 1.43128 | 1.41657 | 3.12600 |
| RXD | 0.94935 | 0.28452 | 0.10287 | 1.23387 | 0.84649 | 0.18165 | 1.18165 | 1.13100 | 2.76595 |
| BPB-CEMAD | 0.66781 | 0.21341 | 0.15409 | 0.88122 | 0.51372 | 0.05933 | 1.05933 | 0.72713 | 1.38501 |
| BPB-LPD | 0.97946 | 0.64885 | 0.25042 | 1.62831 | 0.72904 | 0.39843 | 1.39843 | 1.37789 | 2.59107 |
| BPB-OSPAD | 0.98418 | 0.62330 | 0.22271 | 1.60749 | 0.76147 | 0.40059 | 1.40059 | 1.38477 | 2.79871 |
| BPB-RXD | 0.99055 | 0.29761 | 0.05551 | 1.28816 | 0.93504 | 0.24210 | 1.24210 | 1.23265 | 5.36121 |
| ADLR | 0.98638 | 0.97468 | 0.99819 | 1.96106 | −0.01181 | −0.02351 | 0.97649 | 0.96287 | 0.97645 |
| FEBPAD | 0.98957 | 0.36379 | 0.20587 | 1.35336 | 0.78371 | 0.15792 | 1.15792 | 1.14749 | 1.76711 |
| KIFD | 0.92620 | 0.56301 | 0.18608 | 1.48921 | 0.74012 | 0.37693 | 1.37693 | 1.30313 | 3.02565 |
| LRASR | 0.92157 | 0.25150 | 0.11350 | 1.17308 | 0.80807 | 0.13801 | 1.13801 | 1.05958 | 2.21595 |
| KMNF-BSM-CEMAD | 0.99568 | 0.56736 | 0.11788 | 1.56304 | 0.87780 | 0.44948 | 1.44948 | 1.44516 | 4.81306 |
| KMNF-BSM-LPD | 0.98702 | 0.64887 | 0.15471 | 1.63588 | 0.83230 | 0.49415 | 1.49415 | 1.48117 | 4.19399 |
| KMNF-BSM-OSPAD | 0.98554 | 0.63988 | 0.14765 | 1.62542 | 0.83789 | 0.49223 | 1.49223 | 1.47777 | 4.33370 |
| KMNF-BSM-RXD | 0.99455 | 0.31125 | 0.01637 | 1.30580 | 0.97818 | 0.29488 | 1.29488 | 1.28943 | 19.0112 |
| *vegetation scene* |  |  |  |  |  |  |  |  |  |
| CEMAD | 0.99999 | 0.00951 | 0.00328 | 1.00950 | 0.99671 | 0.00622 | 1.00622 | 1.00622 | 2.89552 |
| LPD | 0.59915 | 0.00153 | 0.00172 | 0.60068 | 0.59743 | −0.00019 | 0.99981 | 0.59896 | 0.88773 |
| OSPAD | 0.59224 | 0.00287 | 0.00531 | 0.59511 | 0.58693 | −0.00244 | 0.99756 | 0.58979 | 0.54010 |
| RXD | 0.64422 | 0.00070 | 0.00873 | 0.64492 | 0.63549 | −0.00803 | 0.99197 | 0.63619 | 0.08040 |
| BPB-CEMAD | 0.82903 | 0.00395 | 0.00466 | 0.83299 | 0.82437 | −0.00071 | 0.99929 | 0.82832 | 0.84752 |
| BPB-LPD | 0.65499 | 0.00224 | 0.00371 | 0.65723 | 0.65128 | −0.00147 | 0.99853 | 0.65352 | 0.60475 |
| BPB-OSPAD | 0.62705 | 0.00317 | 0.00355 | 0.63022 | 0.62350 | −0.00038 | 0.99962 | 0.62667 | 0.89308 |
| BPB-RXD | 0.64449 | 0.00070 | 0.00140 | 0.64519 | 0.64309 | −0.00070 | 0.99930 | 0.64379 | 0.50215 |
| ADLR | 0.89237 | 0.67298 | 0.67413 | 1.56535 | 0.21824 | −0.00114 | 0.99886 | 0.89122 | 0.99830 |
| FEBPAD | 0.76463 | 0.88256 | 0.95604 | 1.64719 | −0.19141 | −0.07348 | 0.92652 | 0.69115 | 0.92314 |
| KIFD | 0.56599 | 0.07298 | 0.07842 | 0.63897 | 0.48757 | −0.00543 | 0.99457 | 0.56056 | 0.93072 |
| LRASR | 0.53350 | 0.15576 | 0.17173 | 0.68927 | 0.36177 | −0.01597 | 0.98403 | 0.51753 | 0.90701 |
| KMNF-BSM-CEMAD | 0.99999 | 0.01000 | 0.00272 | 1.00999 | 0.99728 | 0.00728 | 1.00728 | 1.00728 | 3.68004 |
| KMNF-BSM-LPD | 0.76146 | 0.00340 | 0.00147 | 0.76487 | 0.75999 | 0.00193 | 1.00193 | 0.76339 | 2.30868 |
| KMNF-BSM-OSPAD | 0.99236 | 0.00739 | 0.00310 | 0.99975 | 0.98927 | 0.00429 | 1.00429 | 0.99665 | 2.38457 |
| KMNF-BSM-RXD | 0.65128 | 0.00932 | 0.00140 | 0.66060 | 0.64988 | 0.00793 | 1.00793 | 0.65921 | 6.67837 |
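Each derived column in Table 2 is an arithmetic combination of the three base AUCs, which is consistent with the standard 3D ROC analysis: AUC_TD = AUC(D,F) + AUC(D,t), AUC_BS = AUC(D,F) − AUC(F,t), AUC_TDBS = AUC(D,t) − AUC(F,t), AUC_ODP = 1 + AUC_TDBS, AUC_OD = AUC(D,F) + AUC(D,t) − AUC(F,t), and AUC_SNPR = AUC(D,t) / AUC(F,t). A minimal sketch (the function name is ours) that reproduces the tabulated rows:

```python
def derived_aucs(auc_df, auc_dt, auc_ft):
    """Combine the three base AUCs from the 3D ROC analysis into the
    derived metrics reported in Table 2."""
    return {
        "TD":   auc_df + auc_dt,           # target detectability
        "BS":   auc_df - auc_ft,           # background suppressibility
        "TDBS": auc_dt - auc_ft,           # joint TD/BS measure
        "ODP":  1.0 + auc_dt - auc_ft,     # overall detection probability
        "OD":   auc_df + auc_dt - auc_ft,  # overall detection
        "SNPR": auc_dt / auc_ft,           # signal-to-noise-probability ratio
    }
```

For example, feeding in the CEMAD airport-scene triple (0.85701, 0.23338, 0.11346) returns TD = 1.09039, BS = 0.74355, and SNPR ≈ 2.0569, matching the table.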
Table 3. The AUC(D,F) values of HSIs under different noise levels.
| Methods | σ = 0.025 | σ = 0.050 | σ = 0.075 | σ = 0.100 |
| --- | --- | --- | --- | --- |
| *airport scene* |  |  |  |  |
| CEMAD | 0.767827 | 0.790570 | 0.759692 | 0.759671 |
| LPD | 0.606174 | 0.832911 | 0.543871 | 0.820496 |
| OSPAD | 0.669310 | 0.730488 | 0.765806 | 0.702719 |
| RXD | 0.884361 | 0.816857 | 0.775624 | 0.756052 |
| BPB-CEMAD | 0.762927 | 0.790233 | 0.757526 | 0.759521 |
| BPB-LPD | 0.748734 | 0.793912 | 0.880478 | 0.585944 |
| BPB-OSPAD | 0.846208 | 0.780548 | 0.774155 | 0.761130 |
| BPB-RXD | 0.879588 | 0.818812 | 0.778605 | 0.761802 |
| ADLR | 0.909785 | 0.910524 | 0.901263 | 0.901627 |
| FEBPAD | 0.986912 | 0.870969 | 0.945351 | 0.811967 |
| KIFD | 0.500462 | 0.519817 | 0.510212 | 0.579373 |
| LRASR | 0.763437 | 0.615707 | 0.549777 | 0.517181 |
| KMNF-BSM-CEMAD | 0.989317 | 0.843223 | 0.984707 | 0.982325 |
| KMNF-BSM-LPD | 0.810241 | 0.845787 | 0.946299 | 0.927393 |
| KMNF-BSM-OSPAD | 0.940265 | 0.857312 | 0.833355 | 0.783766 |
| KMNF-BSM-RXD | 0.981749 | 0.976846 | 0.962301 | 0.942672 |
| *beach scene* |  |  |  |  |
| CEMAD | 0.733025 | 0.692576 | 0.672832 | 0.650229 |
| LPD | 0.932182 | 0.855841 | 0.819975 | 0.756692 |
| OSPAD | 0.934462 | 0.806198 | 0.721337 | 0.651162 |
| RXD | 0.822426 | 0.596543 | 0.590686 | 0.620596 |
| BPB-CEMAD | 0.725357 | 0.692724 | 0.673263 | 0.649206 |
| BPB-LPD | 0.659594 | 0.604609 | 0.863936 | 0.508946 |
| BPB-OSPAD | 0.937215 | 0.914567 | 0.886446 | 0.853600 |
| BPB-RXD | 0.834081 | 0.514230 | 0.683375 | 0.587055 |
| ADLR | 0.923657 | 0.928683 | 0.916493 | 0.912059 |
| FEBPAD | 0.934440 | 0.932733 | 0.926787 | 0.931915 |
| KIFD | 0.942347 | 0.929327 | 0.930814 | 0.934508 |
| LRASR | 0.936923 | 0.933985 | 0.903125 | 0.867168 |
| KMNF-BSM-CEMAD | 0.932935 | 0.840578 | 0.866664 | 0.911052 |
| KMNF-BSM-LPD | 0.943535 | 0.919491 | 0.929542 | 0.891736 |
| KMNF-BSM-OSPAD | 0.938193 | 0.934819 | 0.936318 | 0.936855 |
| KMNF-BSM-RXD | 0.897480 | 0.658520 | 0.874630 | 0.889253 |
| *urban scene* |  |  |  |  |
| CEMAD | 0.879689 | 0.890308 | 0.889516 | 0.871628 |
| LPD | 0.986192 | 0.974899 | 0.915606 | 0.927822 |
| OSPAD | 0.981546 | 0.900978 | 0.975862 | 0.760433 |
| RXD | 0.952765 | 0.941403 | 0.934829 | 0.929170 |
| BPB-CEMAD | 0.883609 | 0.890197 | 0.889019 | 0.871261 |
| BPB-LPD | 0.739033 | 0.500289 | 0.908795 | 0.536912 |
| BPB-OSPAD | 0.985485 | 0.949995 | 0.983059 | 0.971710 |
| BPB-RXD | 0.953275 | 0.941261 | 0.934651 | 0.929149 |
| ADLR | 0.903738 | 0.915074 | 0.906458 | 0.904263 |
| FEBPAD | 0.946501 | 0.982272 | 0.929108 | 0.938624 |
| KIFD | 0.970845 | 0.966343 | 0.976179 | 0.976014 |
| LRASR | 0.923020 | 0.707483 | 0.782628 | 0.800433 |
| KMNF-BSM-CEMAD | 0.982947 | 0.986891 | 0.985046 | 0.988811 |
| KMNF-BSM-LPD | 0.990061 | 0.986862 | 0.938982 | 0.984113 |
| KMNF-BSM-OSPAD | 0.986395 | 0.987543 | 0.983330 | 0.987581 |
| KMNF-BSM-RXD | 0.974875 | 0.975066 | 0.978667 | 0.968654 |
| *vegetation scene* |  |  |  |  |
| CEMAD | 0.610996 | 0.545534 | 0.523083 | 0.522589 |
| LPD | 0.521942 | 0.523286 | 0.532123 | 0.513940 |
| OSPAD | 0.591975 | 0.590166 | 0.571438 | 0.595121 |
| RXD | 0.534127 | 0.525701 | 0.524149 | 0.518445 |
| BPB-CEMAD | 0.611782 | 0.545564 | 0.522680 | 0.522766 |
| BPB-LPD | 0.503018 | 0.513404 | 0.510923 | 0.503790 |
| BPB-OSPAD | 0.591451 | 0.519143 | 0.559367 | 0.609736 |
| BPB-RXD | 0.534932 | 0.526477 | 0.524221 | 0.518689 |
| ADLR | 0.613593 | 0.614239 | 0.602584 | 0.601359 |
| FEBPAD | 0.625441 | 0.617946 | 0.584923 | 0.573634 |
| KIFD | 0.513805 | 0.577906 | 0.560019 | 0.510556 |
| LRASR | 0.533473 | 0.510411 | 0.507596 | 0.513850 |
| KMNF-BSM-CEMAD | 0.888332 | 0.619983 | 0.590621 | 0.531612 |
| KMNF-BSM-LPD | 0.584435 | 0.636560 | 0.713470 | 0.565725 |
| KMNF-BSM-OSPAD | 0.660492 | 0.692482 | 0.690609 | 0.666776 |
| KMNF-BSM-RXD | 0.563361 | 0.705072 | 0.672863 | 0.596751 |
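Table 3 evaluates robustness as zero-mean Gaussian noise of standard deviation σ is injected into the data. A minimal sketch of one common way to add such noise to a min-max-normalized cube; the normalization choice and function name are our assumptions for illustration, not necessarily the paper's exact degradation procedure:

```python
import numpy as np

def add_gaussian_noise(cube, sigma, seed=0):
    """Add zero-mean Gaussian noise of standard deviation sigma to a
    min-max normalized hyperspectral cube (rows x cols x bands)."""
    rng = np.random.default_rng(seed)
    c = (cube - cube.min()) / (cube.max() - cube.min())  # normalize to [0, 1]
    return c + rng.normal(0.0, sigma, size=c.shape)      # additive noise per pixel/band
```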
Table 4. The AUC(D,F) values of HSIs with different spatial resolutions.
| Methods | Spatial1 | Spatial2 | Spatial3 |
| --- | --- | --- | --- |
| CEMAD | 0.999992 | 0.986571 | 0.957254 |
| LPD | 0.599149 | 0.543385 | 0.605616 |
| OSPAD | 0.592237 | 0.633003 | 0.599736 |
| RXD | 0.644218 | 0.550181 | 0.624800 |
| BPB-CEMAD | 0.829034 | 0.986659 | 0.957718 |
| BPB-LPD | 0.654985 | 0.636896 | 0.559173 |
| BPB-OSPAD | 0.627050 | 0.601565 | 0.532161 |
| BPB-RXD | 0.644485 | 0.549245 | 0.624319 |
| ADLR | 0.892366 | 0.856324 | 0.802396 |
| FEBPAD | 0.764630 | 0.704130 | 0.613324 |
| KIFD | 0.565988 | 0.511737 | 0.942766 |
| LRASR | 0.533504 | 0.548252 | 0.556380 |
| KMNF-BSM-CEMAD | 0.999992 | 0.997604 | 0.963849 |
| KMNF-BSM-LPD | 0.761462 | 0.639630 | 0.610452 |
| KMNF-BSM-OSPAD | 0.992362 | 0.717830 | 0.603370 |
| KMNF-BSM-RXD | 0.651286 | 0.626007 | 0.715264 |
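Table 4 compares detection at progressively coarser spatial resolutions (Spatial1–Spatial3). One generic way to simulate a coarser ground sampling distance is to average non-overlapping pixel blocks in each band; the sketch below illustrates that technique under our own assumptions and is not necessarily the resampling scheme used in the paper:

```python
import numpy as np

def downsample_spatial(cube, factor):
    """Coarsen spatial resolution by averaging non-overlapping
    factor x factor pixel blocks in each band of a rows x cols x bands cube."""
    r, c, b = cube.shape
    r2, c2 = r // factor * factor, c // factor * factor  # crop to a multiple of factor
    v = cube[:r2, :c2].reshape(r2 // factor, factor, c2 // factor, factor, b)
    return v.mean(axis=(1, 3))  # average within each spatial block
```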
Table 5. The AUC(D,F) values of HSIs with different spectral resolutions.
| Methods | Spectral1 | Spectral2 | Spectral3 |
| --- | --- | --- | --- |
| CEMAD | 0.999992 | 0.999996 | 0.999992 |
| LPD | 0.599149 | 0.756851 | 0.561840 |
| OSPAD | 0.592237 | 0.633389 | 0.797157 |
| RXD | 0.644218 | 0.652756 | 0.578054 |
| BPB-CEMAD | 0.829034 | 0.999996 | 0.999992 |
| BPB-LPD | 0.654985 | 0.793947 | 0.515565 |
| BPB-OSPAD | 0.627050 | 0.542546 | 0.544691 |
| BPB-RXD | 0.644485 | 0.652439 | 0.578836 |
| ADLR | 0.892366 | 0.865326 | 0.886534 |
| FEBPAD | 0.764630 | 0.695951 | 0.710523 |
| KIFD | 0.565988 | 0.556626 | 0.652714 |
| LRASR | 0.533504 | 0.569881 | 0.588985 |
| KMNF-BSM-CEMAD | 0.999992 | 0.999996 | 0.999996 |
| KMNF-BSM-LPD | 0.761462 | 0.982546 | 0.701198 |
| KMNF-BSM-OSPAD | 0.992362 | 0.650748 | 0.626065 |
| KMNF-BSM-RXD | 0.651286 | 0.734263 | 0.703591 |
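Table 5 likewise varies spectral resolution (Spectral1–Spectral3). A generic way to simulate coarser spectral resolution is to bin groups of adjacent bands by averaging; the sketch below illustrates that technique under our own assumptions, not necessarily the paper's exact band-aggregation scheme:

```python
import numpy as np

def downsample_spectral(cube, factor):
    """Coarsen spectral resolution by averaging groups of `factor`
    adjacent bands of a rows x cols x bands cube."""
    r, c, b = cube.shape
    b2 = b // factor * factor  # crop band count to a multiple of factor
    return cube[:, :, :b2].reshape(r, c, b2 // factor, factor).mean(axis=3)
```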
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite


Xue, T.; Jia, J.; Xie, H.; Zhang, C.; Deng, X.; Wang, Y. Kernel Minimum Noise Fraction Transformation-Based Background Separation Model for Hyperspectral Anomaly Detection. Remote Sens. 2022, 14, 5157. https://doi.org/10.3390/rs14205157
