# Hyperspectral Feature Extraction Using Sparse and Smooth Low-Rank Analysis


## Abstract


## 1. Introduction

#### Notation

## 2. Hyperspectral Modeling and Sparse and Smooth Low-Rank Analysis

#### 2.1. Initialization

#### 2.2. **F**-Step

#### 2.3. **S**-Step

#### 2.4. **V**-Step

**Algorithm 1:** SSLRA.

## 3. Experimental Results

#### 3.1. Datasets

#### 3.1.1. Trento

#### 3.1.2. Houston

#### 3.2. Performance of SSLRA with Respect to Tuning Parameters

#### 3.3. Performance of SSLRA Compared to OTVCA

#### 3.3.1. Comparisons with Respect to the Tuning Parameter

#### 3.3.2. Comparisons with Respect to the Number of Features

#### 3.3.3. Comparisons with Respect to the Number of Training Samples

#### 3.3.4. Visual Comparisons of Extracted Features

Figure 7 shows the smooth components (**F**) extracted by SSLRA from the Houston dataset. As can be seen, the effect of shadow removal is apparent in components 4, 8, 10, and 15. The comparison shows that the sparse structures present in the components extracted by OTVCA are absent from the smooth components extracted by SSLRA, and the SSLRA features contain more homogeneous regions than the OTVCA ones. Figure 8 demonstrates this more clearly: it shows a portion of feature 2 extracted by SSLRA next to the corresponding OTVCA component, together with the sparse structures extracted by SSLRA. Sparse structures left in the components decrease the classification accuracies, since they are rarely included in the regions of interest and therefore lack class labels. By separating the sparse structures from the smooth ones, SSLRA increases the classification accuracy and provides homogeneous class regions in the final classification map.
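As a rough illustration of this separation idea (not the authors' SSLRA solver), the following toy sketch splits a single 2-D feature band into a smooth part and a sparse residual: a uniform filter stands in for the TV-regularized smoothing, and soft-thresholding promotes sparsity in the residual. All function names and parameter values here are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def split_smooth_sparse(band, size=5, thresh=0.1):
    """Toy split of a 2-D feature band into a smooth part and a sparse
    residual. The uniform filter is a stand-in for TV-regularized
    smoothing; soft-thresholding keeps only large, isolated deviations."""
    smooth = uniform_filter(band, size=size)       # spatially smooth part
    residual = band - smooth
    # soft-threshold: small deviations -> 0, isolated outliers survive
    sparse = np.sign(residual) * np.maximum(np.abs(residual) - thresh, 0.0)
    return smooth, sparse

band = np.zeros((32, 32))
band[8:24, 8:24] = 1.0          # homogeneous region (keeps a label in practice)
band[4, 4] = 5.0                # isolated "sparse" structure (often unlabeled)
smooth, sparse = split_smooth_sparse(band)
print(sparse[4, 4] > 1.0)       # the outlier ends up in the sparse part
```

The point of the sketch is the one made in the text: once the isolated structures are moved into the sparse part, the smooth part consists of homogeneous regions that classify more consistently.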

#### 3.4. Performance of SSLRA with Respect to the State-of-the-Art

#### 3.5. Discussion

## 4. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Total Variation Norm

## References

- Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in Hyperspectral Image and Signal Processing: A Comprehensive Overview of the State of the Art. IEEE Geosci. Remote Sens. Mag. **2017**, 5, 37–78.
- Landgrebe, D. Signal Theory Methods in Multispectral Remote Sensing; Wiley Series in Remote Sensing and Image Processing; Wiley: Hoboken, NJ, USA, 2005.
- Ghamisi, P.; Benediktsson, J.A. Feature Selection Based on Hybridization of Genetic Algorithm and Particle Swarm Optimization. IEEE Geosci. Remote Sens. Lett. **2015**, 12, 309–313.
- Jia, X.; Kuo, B.C.; Crawford, M. Feature Mining for Hyperspectral Image Classification. Proc. IEEE **2013**, 101, 676–697.
- Benediktsson, J.A.; Ghamisi, P. Spectral-Spatial Classification of Hyperspectral Remote Sensing Images; Artech House Publishers: Norwood, MA, USA, 2015.
- Fukunaga, K. Introduction to Statistical Pattern Recognition; Computer Science and Scientific Computing; Elsevier Science: New York, NY, USA, 1990.
- Lee, C.; Landgrebe, D. Feature extraction based on decision boundaries. IEEE Trans. Pattern Anal. Mach. Intell. **1993**, 15, 388–400.
- Kuo, B.C.; Landgrebe, D. Nonparametric weighted feature extraction for classification. IEEE Trans. Geosci. Remote Sens. **2004**, 42, 1096–1105.
- Du, Q.; Chang, C.I. A linear constrained distance-based discriminant analysis for hyperspectral image classification. Pattern Recognit. **2001**, 34, 361–373.
- Du, Q. Modified Fisher's Linear Discriminant Analysis for Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. **2007**, 4, 503–507.
- Zhang, L.; Zhang, L.; Tao, D.; Huang, X. Tensor Discriminative Locality Alignment for Hyperspectral Image Spectral-Spatial Feature Extraction. IEEE Trans. Geosci. Remote Sens. **2013**, 51, 242–256.
- Li, W.; Prasad, S.; Fowler, J.E.; Bruce, L.M. Locality-preserving dimensionality reduction and classification for hyperspectral image analysis. IEEE Trans. Geosci. Remote Sens. **2012**, 50, 1185–1198.
- Sugiyama, M. Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. J. Mach. Learn. Res. **2007**, 8, 1027–1061.
- Zhou, Y.; Peng, J.; Chen, C.L.P. Dimension Reduction Using Spatial and Spectral Regularized Local Discriminant Embedding for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 1082–1095.
- Xue, Z.; Du, P.; Li, J.; Su, H. Simultaneous sparse graph embedding for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 6114–6133.
- Jolliffe, I. Principal Component Analysis; Springer Series in Statistics; Springer: Berlin/Heidelberg, Germany, 2002.
- Green, A.; Berman, M.; Switzer, P.; Craig, M. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans. Geosci. Remote Sens. **1988**, 26, 65–74.
- Lee, J.; Woodyatt, A.; Berman, M. Enhancement of high spectral resolution remote-sensing data by a noise-adjusted principal components transform. IEEE Trans. Geosci. Remote Sens. **1990**, 28, 295–304.
- Hyvärinen, A.; Karhunen, J.; Oja, E. Independent Component Analysis; Adaptive and Learning Systems for Signal Processing, Communications and Control Series; Wiley: Hoboken, NJ, USA, 2001.
- Villa, A.; Benediktsson, J.; Chanussot, J.; Jutten, C. Hyperspectral Image Classification With Independent Component Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. **2011**, 49, 4865–4876.
- Lee, D.D.; Seung, H.S. Algorithms for Non-Negative Matrix Factorization; NIPS; MIT Press: Cambridge, MA, USA, 2000; pp. 556–562.
- Lin, B.; Tao, G.; Kai, D. Using non-negative matrix factorization with projected gradient for hyperspectral images feature extraction. In Proceedings of the 2013 IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), Melbourne, Australia, 19–21 June 2013; pp. 516–519.
- Sigurdsson, J.; Ulfarsson, M.; Sveinsson, J. Total variation and $\ell_q$ based hyperspectral unmixing for feature extraction and classification. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015.
- Sigurdsson, J.; Ulfarsson, M.; Sveinsson, J. Hyperspectral unmixing with $\ell_q$ regularization. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 6793–6806.
- Ma, L.; Crawford, M.; Tian, J. Local Manifold Learning-Based k-Nearest-Neighbor for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. **2010**, 48, 4099–4109.
- Fang, Y.; Li, H.; Ma, Y.; Liang, K.; Hu, Y.; Zhang, S.; Wang, H. Dimensionality Reduction of Hyperspectral Images Based on Robust Spatial Information Using Locally Linear Embedding. IEEE Geosci. Remote Sens. Lett. **2014**, 11, 1712–1716.
- He, X.; Cai, D.; Yan, S.; Zhang, H.J. Neighborhood preserving embedding. In Proceedings of the Tenth IEEE International Conference on Computer Vision, Beijing, China, 17–21 October 2005; Volume 2, pp. 1208–1213.
- He, X.; Niyogi, P. Locality Preserving Projections. In Advances in Neural Information Processing Systems; Thrun, S., Saul, L., Scholkopf, B., Eds.; MIT Press: Cambridge, MA, USA, 2003.
- Zhang, T.; Yang, J.; Zhao, D.; Ge, X. Linear local tangent space alignment and application to face recognition. Neurocomputing **2007**, 70, 1547–1553.
- Fong, M. Dimension Reduction on Hyperspectral Images; Technical Report; University of California: Los Angeles, CA, USA, 2007.
- Huang, H.Y.; Kuo, B.C. Double Nearest Proportion Feature Extraction for Hyperspectral-Image Classification. IEEE Trans. Geosci. Remote Sens. **2010**, 48, 4034–4046.
- Deng, Y.J.; Li, H.C.; Pan, L.; Shao, L.Y.; Du, Q.; Emery, W.J. Modified Tensor Locality Preserving Projection for Dimensionality Reduction of Hyperspectral Images. IEEE Geosci. Remote Sens. Lett. **2018**, 15, 277–281.
- Yan, S.; Xu, D.; Zhang, B.; Zhang, H.J.; Yang, Q.; Lin, S. Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. IEEE Trans. Pattern Anal. Mach. Intell. **2007**, 29, 40–51.
- Ly, N.H.; Du, Q.; Fowler, J. Sparse Graph-Based Discriminant Analysis for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 3872–3884.
- Pan, L.; Li, H.C.; Li, W.; Chen, X.D.; Wu, G.N.; Du, Q. Discriminant Analysis of Hyperspectral Imagery Using Fast Kernel Sparse and Low-Rank Graph. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 6085–6098.
- Gastal, E.S.L.; Oliveira, M.M. Domain Transform for Edge-aware Image and Video Processing. ACM Trans. Graph. **2011**, 30, 69.
- Kang, X.; Li, S.; Benediktsson, J.A. Feature Extraction of Hyperspectral Images With Image Fusion and Recursive Filtering. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 3742–3752.
- Sun, W.; Yang, G.; Du, B.; Zhang, L.; Zhang, L. A Sparse and Low-Rank Near-Isometric Linear Embedding Method for Feature Extraction in Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 4032–4046.
- Rasti, B.; Sveinsson, J.R.; Ulfarsson, M.O. Total Variation Based Hyperspectral Feature Extraction. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 4644–4647.
- Rasti, B.; Sveinsson, J.; Ulfarsson, M. Wavelet-Based Sparse Reduced-Rank Regression for Hyperspectral Image Restoration. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 6688–6698.
- Rasti, B. Sparse Hyperspectral Image Modeling and Restoration. Ph.D. Thesis, Department of Electrical and Computer Engineering, University of Iceland, Reykjavik, Iceland, 2014.
- Rasti, B.; Ulfarsson, M.O.; Sveinsson, J.R. Hyperspectral Feature Extraction Using Total Variation Component Analysis. IEEE Trans. Geosci. Remote Sens. **2016**, 54, 6976–6985.
- Rasti, B.; Ulfarsson, M.; Sveinsson, J. Hyperspectral Subspace Identification Using SURE. IEEE Geosci. Remote Sens. Lett. **2015**, 12, 2481–2485.
- Bioucas-Dias, J.; Nascimento, J. Hyperspectral Subspace Identification. IEEE Trans. Geosci. Remote Sens. **2008**, 46, 2435–2445.
- Bertsekas, D. Nonlinear Programming; Athena Scientific: Belmont, MA, USA, 1995.
- Luenberger, D. Linear and Nonlinear Programming, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2008.
- Tseng, P. Convergence of a block coordinate descent method for nondifferentiable minimization. J. Optim. Theory Appl. **2001**, 109, 475–494.
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D **1992**, 60, 259–268.
- Goldstein, T.; Osher, S. The Split Bregman Method for $\ell_1$-Regularized Problems. SIAM J. Imaging Sci. **2009**, 2, 323–343.
- Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. **1992**, 55, 293–318.
- Zou, H.; Hastie, T.; Tibshirani, R. Sparse Principal Component Analysis. J. Comput. Graph. Stat. **2006**, 15, 265–286.
- He, X.F.; Niyogi, P. Locality Preserving Projections; MIT Press: Cambridge, MA, USA, 2004; pp. 153–160.
- Sugiyama, M.; Ide, T.; Nakajima, S.; Sese, J. Semi-supervised local Fisher discriminant analysis for dimensionality reduction. Mach. Learn. **2010**, 78, 35–61.
- Liao, W.; Pizurica, A.; Scheunders, P.; Philips, W.; Pi, Y. Semi-Supervised Local Discriminant Analysis for Feature Extraction in Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. **2013**, 51, 184–198.
- Luo, R.; Liao, W.; Huang, X.; Pi, Y.; Philips, W. Feature Extraction of Hyperspectral Images with Semi-Supervised Graph Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2016**, 9, 4389–4399.

**Figure 1.** Trento (from top to bottom): a color composite representation of the hyperspectral data using bands 40, 20, and 10 as R, G, and B, respectively; training samples; test samples; and the corresponding color bar.

**Figure 2.** Houston (from left to right): a color composite representation of the hyperspectral data using bands 70, 50, and 20 as R, G, and B, respectively; training samples; test samples; and the corresponding color bar.

**Figure 3.** OA as a function of the tuning parameters $T_1$ and $T_2$, obtained by applying the RF and SVM classifiers to the features extracted from the University of Houston dataset.

**Figure 4.** OA as a function of the tuning parameter $T$, obtained by applying the RF and SVM classifiers to the features extracted from the University of Houston dataset.

**Figure 5.** OA as a function of $r$, obtained by applying the RF and SVM classifiers to the features extracted from the University of Houston dataset.

**Figure 6.** OA as a function of the number of training samples, obtained by applying the RF and SVM classifiers to the features extracted from the University of Houston dataset.

**Figure 7.** Houston components extracted using OTVCA and SSLRA (the smooth components (**F**)). From top to bottom: components 1, 2, 4, 8, 10, and 15.

**Figure 9.** Classification maps obtained by applying the SVM and RF classifiers to the features extracted from the Houston hyperspectral dataset.

**Figure 10.** Classification maps obtained by applying the SVM and RF classifiers to the features extracted from the Trento hyperspectral dataset.

**Figure 11.** The cost function and stopping criterion values of SSLRA applied to Houston and Trento.

**Figure 12.** The cost function values using the spectral eigenvectors and a random orthogonal matrix for the initialization of SSLRA, applied to Houston and Trento.

| Sym. | Definition |
|---|---|
| $x_{i}$ | the $i$th entry of the vector $\mathbf{x}$ |
| $x_{ij}$ | the $(i,j)$th entry of the matrix $\mathbf{X}$ |
| $\mathbf{x}_{(i)}$ | the $i$th column of the matrix $\mathbf{X}$ |
| $\mathbf{x}_{j}^{T}$ | the $j$th row of the matrix $\mathbf{X}$ |
| $\Vert\mathbf{x}\Vert_{1}$ | $l_{1}$-norm of the vector $\mathbf{x}$: $\sum_{i}{\mid}x_{i}{\mid}$ |
| $\Vert\mathbf{x}\Vert_{2}$ | $l_{2}$-norm of the vector $\mathbf{x}$: $\sqrt{\sum_{i}x_{i}^{2}}$ |
| $\Vert\mathbf{X}\Vert_{1}$ | $l_{1}$-norm of the matrix $\mathbf{X}$: $\sum_{i,j}{\mid}x_{ij}{\mid}$ |
| $\Vert\mathbf{X}\Vert_{F}$ | Frobenius norm of the matrix $\mathbf{X}$: $\sqrt{\sum_{i,j}x_{ij}^{2}}$ |
| $\hat{\mathbf{X}}$ | the estimate of the variable $\mathbf{X}$ |
| $\Vert\mathbf{x}\Vert_{TV}$ | total variation norm (explained in Appendix A) |
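The vector and matrix norms above can be checked numerically. A minimal NumPy sketch follows; the TV norm here uses an assumed anisotropic, first-order-difference variant, while the paper's exact definition is the one given in Appendix A.

```python
import numpy as np

x = np.array([3.0, -4.0])
X = np.array([[1.0, -2.0], [0.0, 2.0]])

l1 = np.sum(np.abs(x))           # ||x||_1 = |3| + |-4| = 7
l2 = np.sqrt(np.sum(x**2))       # ||x||_2 = sqrt(9 + 16) = 5
L1 = np.sum(np.abs(X))           # ||X||_1 = 1 + 2 + 0 + 2 = 5
fro = np.linalg.norm(X, 'fro')   # ||X||_F = sqrt(1 + 4 + 0 + 4) = 3

def tv_norm(img):
    """Anisotropic TV norm of a 2-D array: sum of absolute first-order
    differences along rows and columns (assumed variant)."""
    dy = np.abs(np.diff(img, axis=0)).sum()
    dx = np.abs(np.diff(img, axis=1)).sum()
    return dy + dx

print(l1, l2, L1, fro)                               # 7.0 5.0 5.0 3.0
print(tv_norm(np.array([[0.0, 1.0], [0.0, 1.0]])))   # 2.0
```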

| No. | Name | Training Samples | Test Samples |
|---|---|---|---|
| 1 | Apple Tree | 129 | 3905 |
| 2 | Building | 125 | 2778 |
| 3 | Ground | 105 | 374 |
| 4 | Wood | 154 | 8969 |
| 5 | Vineyard | 184 | 10,317 |
| 6 | Road | 122 | 3252 |
| | Total | 819 | 29,595 |

| No. | Name | Training Samples | Test Samples |
|---|---|---|---|
| 1 | Grass Healthy | 198 | 1053 |
| 2 | Grass Stressed | 190 | 1064 |
| 3 | Grass Synthetic | 192 | 505 |
| 4 | Tree | 188 | 1056 |
| 5 | Soil | 186 | 1056 |
| 6 | Water | 182 | 143 |
| 7 | Residential | 196 | 1072 |
| 8 | Commercial | 191 | 1053 |
| 9 | Road | 193 | 1059 |
| 10 | Highway | 191 | 1036 |
| 11 | Railway | 181 | 1054 |
| 12 | Parking Lot 1 | 192 | 1041 |
| 13 | Parking Lot 2 | 184 | 285 |
| 14 | Tennis Court | 181 | 247 |
| 15 | Running Track | 187 | 473 |
| | Total | 2832 | 12,197 |

**Table 4.** Classification accuracies obtained by applying SVM to the features extracted from the Houston hyperspectral dataset. The highest accuracy in each row is shown in bold.

| Cl. # | HSI | PCA | MNF | DAFE | NWFE | SELD | OTVCA$_{T=0.2}$ | OTVCA$_{T=0.4}$ | SSLRA$_{T=0.2}$ | SSLRA$_{T=0.4}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8348 | 0.8158 | 0.8167 | 0.8224 | 0.8243 | 0.8338 | 0.8367 | 0.8367 | 0.8262 | 0.8243 |
| 2 | 0.9643 | 0.9445 | 0.9511 | 0.9699 | 0.9690 | 0.9765 | 0.9699 | 0.9727 | 0.9868 | 0.9868 |
| 3 | 0.9980 | 0.9980 | 0.9980 | 1.0000 | 0.9980 | 1.0000 | 0.9980 | 0.9980 | 1.0000 | 1.0000 |
| 4 | 0.9877 | 0.9716 | 0.9830 | 0.9782 | 0.9678 | 0.9934 | 0.9820 | 0.9877 | 0.9744 | 0.9413 |
| 5 | 0.9811 | 0.9839 | 0.9848 | 0.9792 | 0.9867 | 0.9659 | 0.9839 | 0.9877 | 1.0000 | 1.0000 |
| 6 | 0.9510 | 0.9510 | 0.9650 | 0.9930 | 0.9860 | 0.9930 | 0.9720 | 0.9650 | 0.9510 | 0.9510 |
| 7 | 0.8909 | 0.8582 | 0.8405 | 0.8200 | 0.8591 | 0.8843 | 0.8284 | 0.8741 | 0.8563 | 0.8330 |
| 8 | 0.4587 | 0.6144 | 0.5556 | 0.4311 | 0.5508 | 0.4701 | 0.5878 | 0.5176 | 0.8015 | 0.8642 |
| 9 | 0.8253 | 0.7753 | 0.7885 | 0.5826 | 0.8225 | 0.6922 | 0.7762 | 0.7941 | 0.8555 | 0.8612 |
| 10 | 0.8320 | 0.7008 | 0.8678 | 0.7500 | 0.7905 | 0.7017 | 0.6728 | 0.7915 | 0.9431 | 0.9681 |
| 11 | 0.8387 | 0.8416 | 0.8121 | 0.7135 | 0.9127 | 0.8425 | 0.8330 | 0.8463 | 0.9753 | 0.9099 |
| 12 | 0.7099 | 0.7320 | 0.7810 | 0.5437 | 0.7992 | 0.6667 | 0.8415 | 0.8357 | 0.9001 | 0.8146 |
| 13 | 0.7053 | 0.7018 | 0.6842 | 0.5895 | 0.7018 | 0.6807 | 0.7228 | 0.7298 | 0.7930 | 0.8105 |
| 14 | 1.0000 | 1.0000 | 1.0000 | 0.9919 | 0.9960 | 0.9960 | 0.9960 | 1.0000 | 1.0000 | 1.0000 |
| 15 | 0.9746 | 0.9641 | 0.9619 | 0.9852 | 0.9810 | 0.9767 | 0.9683 | 0.9683 | 0.9979 | 1.0000 |
| AA | 0.8635 | 0.8569 | 0.8660 | 0.8100 | 0.8764 | 0.8449 | 0.8646 | 0.8737 | 0.9241 | 0.9177 |
| OA | 0.8469 | 0.8391 | 0.8509 | 0.7818 | 0.8611 | 0.8215 | 0.8463 | 0.8578 | 0.9183 | 0.9088 |
| $\kappa$ | 0.8340 | 0.8253 | 0.8382 | 0.7632 | 0.8492 | 0.8063 | 0.8332 | 0.8457 | 0.9113 | 0.9010 |
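The AA, OA, and $\kappa$ rows follow the standard confusion-matrix definitions. A minimal sketch of these metrics, using a hypothetical 3-class confusion matrix whose values are made up purely for illustration:

```python
import numpy as np

def classification_scores(C):
    """Overall accuracy (OA), average accuracy (AA), and Cohen's kappa
    from a confusion matrix C (rows: reference, columns: prediction)."""
    n = C.sum()
    oa = np.trace(C) / n                               # fraction labeled correctly
    aa = np.mean(np.diag(C) / C.sum(axis=1))           # mean per-class accuracy
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                       # accuracy beyond chance
    return oa, aa, kappa

# hypothetical 3-class confusion matrix (illustration only)
C = np.array([[50, 2, 3],
              [5, 40, 5],
              [2, 3, 45]])
oa, aa, kappa = classification_scores(C)
print(round(oa, 4), round(aa, 4), round(kappa, 4))
```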

**Table 5.** Classification accuracies obtained by applying RF to the features extracted from the Houston hyperspectral dataset. The highest accuracy in each row is shown in bold.

| Cl. # | HSI | PCA | MNF | DAFE | NWFE | SELD | OTVCA$_{T=0.2}$ | OTVCA$_{T=0.4}$ | SSLRA$_{T=0.2}$ | SSLRA$_{T=0.4}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8338 | 0.8395 | 0.8566 | 0.8291 | 0.8215 | 0.8215 | 0.8367 | 0.8443 | 0.7683 | 0.8091 |
| 2 | 0.9840 | 0.9840 | 0.9859 | 0.9746 | 0.9774 | 0.9831 | 0.9915 | 0.9699 | 1.0000 | 1.0000 |
| 3 | 0.9802 | 0.9960 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9960 | 0.9980 |
| 4 | 0.9754 | 0.9593 | 0.9659 | 0.9555 | 0.9706 | 0.9688 | 0.9915 | 0.9025 | 0.9924 | 0.9669 |
| 5 | 0.9640 | 0.9839 | 0.9811 | 0.9508 | 0.9839 | 0.9754 | 1.0000 | 0.9991 | 0.9972 | 1.0000 |
| 6 | 0.9720 | 0.9930 | 0.9930 | 0.9231 | 0.9930 | 0.9930 | 1.0000 | 0.9580 | 1.0000 | 0.9580 |
| 7 | 0.8209 | 0.8909 | 0.9123 | 0.8004 | 0.9151 | 0.8806 | 0.9104 | 0.8881 | 0.9188 | 0.9198 |
| 8 | 0.4065 | 0.6068 | 0.6610 | 0.8196 | 0.6296 | 0.7816 | 0.7018 | 0.8110 | 0.8015 | 0.8338 |
| 9 | 0.6969 | 0.8499 | 0.8121 | 0.6081 | 0.8546 | 0.7460 | 0.8791 | 0.9216 | 0.8971 | 0.9330 |
| 10 | 0.5763 | 0.6766 | 0.7017 | 0.4672 | 0.8185 | 0.6274 | 0.6921 | 0.9266 | 0.5512 | 0.8050 |
| 11 | 0.7609 | 0.9127 | 0.9393 | 0.7078 | 0.9194 | 0.8577 | 0.8340 | 0.7590 | 0.9592 | 0.8700 |
| 12 | 0.4938 | 0.7099 | 0.8482 | 0.6321 | 0.8386 | 0.6052 | 0.9222 | 0.8703 | 0.9366 | 0.9107 |
| 13 | 0.6140 | 0.7754 | 0.7930 | 0.6526 | 0.7895 | 0.6667 | 0.8386 | 0.8281 | 0.6491 | 0.6526 |
| 14 | 0.9960 | 0.9919 | 0.9960 | 0.9879 | 1.0000 | 0.9879 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 15 | 0.9767 | 0.9767 | 0.9746 | 0.9831 | 0.9767 | 0.9767 | 0.9789 | 0.9746 | 0.9937 | 0.9894 |
| AA | 0.8034 | 0.8764 | 0.8947 | 0.8194 | 0.8992 | 0.8581 | 0.9051 | 0.9102 | 0.8974 | 0.9098 |
| OA | 0.7747 | 0.8569 | 0.8790 | 0.7959 | 0.8846 | 0.8402 | 0.8886 | 0.8988 | 0.8902 | 0.9089 |
| $\kappa$ | 0.7563 | 0.8449 | 0.8688 | 0.7785 | 0.8749 | 0.8266 | 0.8792 | 0.8904 | 0.8808 | 0.9011 |

**Table 6.** Classification accuracies obtained by applying SVM to the features extracted from the Trento hyperspectral dataset. The highest accuracy in each row is shown in bold.

| Cl. # | HSI | PCA | MNF | DAFE | NWFE | SELD | OTVCA$_{T=0.2}$ | OTVCA$_{T=0.4}$ | SSLRA$_{T=0.2}$ | SSLRA$_{T=0.4}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8809 | 0.9004 | 0.9465 | 0.7982 | 0.9286 | 0.8866 | 0.8863 | 0.9106 | 0.9782 | 0.9944 |
| 2 | 0.8197 | 0.8535 | 0.9068 | 0.7412 | 0.8967 | 0.8179 | 0.8726 | 0.8769 | 0.9312 | 0.9230 |
| 3 | 0.9733 | 0.9786 | 0.9733 | 0.9492 | 0.9572 | 0.9332 | 0.9679 | 0.9813 | 0.9439 | 0.9733 |
| 4 | 0.9691 | 0.9604 | 0.9709 | 0.8956 | 0.9699 | 0.9679 | 0.9652 | 0.9611 | 0.9803 | 0.9871 |
| 5 | 0.7697 | 0.7518 | 0.7863 | 0.7087 | 0.7552 | 0.6571 | 0.8000 | 0.8558 | 0.8539 | 0.8082 |
| 6 | 0.6701 | 0.6461 | 0.7333 | 0.6946 | 0.6737 | 0.6016 | 0.6638 | 0.6628 | 0.6225 | 0.6252 |
| AA | 0.8471 | 0.8485 | 0.8862 | 0.7979 | 0.8635 | 0.8107 | 0.8593 | 0.8748 | 0.8850 | 0.8852 |
| OA | 0.8423 | 0.8367 | 0.8722 | 0.7823 | 0.8512 | 0.7953 | 0.8567 | 0.8788 | 0.8934 | 0.8814 |
| $\kappa$ | 0.7916 | 0.7847 | 0.8315 | 0.7136 | 0.8038 | 0.7300 | 0.8098 | 0.8386 | 0.8595 | 0.8442 |

**Table 7.** Classification accuracies obtained by applying RF to the features extracted from the Trento hyperspectral dataset. The highest accuracy in each row is shown in bold.

| Cl. # | HSI | PCA | MNF | DAFE | NWFE | SELD | OTVCA$_{T=0.2}$ | OTVCA$_{T=0.4}$ | SSLRA$_{T=0.2}$ | SSLRA$_{T=0.4}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.8576 | 0.8318 | 0.9088 | 0.7588 | 0.8522 | 0.8615 | 0.7218 | 0.8878 | 0.9496 | 0.9785 |
| 2 | 0.8542 | 0.8884 | 0.8913 | 0.7135 | 0.9237 | 0.8762 | 0.7218 | 0.9068 | 0.9456 | 0.9190 |
| 3 | 0.9652 | 0.9305 | 0.9599 | 0.9545 | 0.9652 | 0.9652 | 0.7899 | 0.9786 | 0.9920 | 0.9893 |
| 4 | 0.9566 | 0.9189 | 0.9687 | 0.8845 | 0.9427 | 0.9478 | 0.7484 | 0.9404 | 0.9783 | 0.9881 |
| 5 | 0.8001 | 0.7596 | 0.7456 | 0.7323 | 0.8111 | 0.7553 | 0.7054 | 0.6676 | 0.9824 | 0.9737 |
| 6 | 0.6396 | 0.6065 | 0.7054 | 0.7307 | 0.6216 | 0.6209 | 0.7866 | 0.5672 | 0.6599 | 0.6281 |
| AA | 0.8456 | 0.8226 | 0.8633 | 0.7957 | 0.8528 | 0.8378 | 0.7453 | 0.8247 | 0.9179 | 0.9128 |
| OA | 0.8461 | 0.8163 | 0.8477 | 0.7831 | 0.8496 | 0.8283 | 0.8923 | 0.7962 | 0.9399 | 0.9379 |
| $\kappa$ | 0.7955 | 0.7567 | 0.7985 | 0.7136 | 0.8002 | 0.7731 | 0.9245 | 0.7292 | 0.9190 | 0.9168 |

**Table 8.** CPU processing times (in seconds) for the different techniques applied to the Trento and Houston datasets.

| | PCA | MNF | DAFE | NWFE | SELD | OTVCA | SSLRA |
|---|---|---|---|---|---|---|---|
| Trento | 0.10 | 0.37 | 0.07 | 6.53 | 1.17 | 19.93 | 22.43 |
| Houston | 0.63 | 7.53 | 0.04 | 253.86 | 2.64 | 360.44 | 376.92 |

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Rasti, B.; Ghamisi, P.; Ulfarsson, M.O.
Hyperspectral Feature Extraction Using Sparse and Smooth Low-Rank Analysis. *Remote Sens.* **2019**, *11*, 121.
https://doi.org/10.3390/rs11020121
