# Deep Self-Learning Network for Adaptive Pansharpening


## Abstract


## 1. Introduction

- We propose CNN-1 for PSF estimation, which directly estimates the blur kernel of the MSI without requiring its higher-resolution counterpart.
- We develop an edge-detection-based algorithm for unsupervised image registration that resolves pixel shifts, ensuring that the MSI and PAN overlap at the pixel level.
- We construct the training set using the estimated PSF and present CNN-2 to learn end-to-end pansharpening, which enables the model to adaptively learn the mapping for any dataset.

## 2. Proposed Method

#### 2.1. General Architecture

#### 2.2. PSF Estimation

**Algorithm 1.** The CNN-1-based blur estimation procedure of the proposed method.

Require:

- $\mathbf{P}$: PAN;
- $\mathbf{X}$: MSI;
- $\mathbf{\sigma}\in \{{\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{n}\}$: a vector of predefined blur-kernel parameters;
- the designed CNN-1.

Blur Estimation:

- Slice $\mathbf{P}$ into patches ${\mathbf{P}}^{\prime}$ of size $33\times 33$.
- Use $\mathbf{\sigma}\in \{{\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{n}\}$ to generate the down-sampled versions ${\mathbf{P}}_{d}^{\prime}\in \{{\mathbf{P}}_{{d}_{1}}^{\prime},{\mathbf{P}}_{{d}_{2}}^{\prime},\dots ,{\mathbf{P}}_{{d}_{n}}^{\prime}\}$.
- Train CNN-1 on ${\mathbf{P}}_{d}^{\prime}$ and $\mathbf{\sigma}$.
- Slice each band of $\mathbf{X}$ into patches ${\mathbf{X}}^{\prime}$ of size $33\times 33$.
- Use the trained CNN-1 to predict ${\sigma}^{\prime}$ for ${\mathbf{X}}^{\prime}$.
- Apply *majority voting* to the per-patch results ${\sigma}^{\prime}$ and output the final prediction $\widehat{\sigma}$.

Ensure:

- $\widehat{\sigma}$: the estimated blur-kernel parameter of $\mathbf{X}$.
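The degradation-and-voting pipeline of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch assuming a Gaussian PSF parameterized by $\sigma$ and a resolution ratio of 4; the CNN-1 classifier itself, which maps each degraded patch to a $\sigma$ label, is left abstract.

```python
import numpy as np
from collections import Counter
from scipy.ndimage import gaussian_filter

def slice_patches(img, size=33):
    """Slice an image into non-overlapping size x size patches."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def degrade(patch, sigma, ratio=4):
    """Blur a PAN patch with a Gaussian PSF of width sigma, then downsample."""
    return gaussian_filter(patch, sigma)[::ratio, ::ratio]

def make_training_set(pan, sigmas, ratio=4):
    """Labelled (degraded patch, sigma index) pairs for training CNN-1."""
    xs, ys = [], []
    for label, s in enumerate(sigmas):
        for p in slice_patches(pan):
            xs.append(degrade(p, s, ratio))
            ys.append(label)
    return np.stack(xs), np.array(ys)

def majority_vote(patch_labels, sigmas):
    """Fuse CNN-1's per-patch predictions into the final estimate sigma-hat."""
    return sigmas[Counter(patch_labels).most_common(1)[0][0]]
```

Once CNN-1 has been trained on `make_training_set(pan, sigmas)`, its per-patch predictions on the MSI patches are fused with `majority_vote` to give $\widehat{\sigma}$.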

#### 2.3. Edge-Detection-Based Image Registration

**Algorithm 2.** The edge-detection-based image registration procedure of the proposed method.

Require:

- $\mathbf{X}$: MSI;
- $\mathbf{P}$: PAN;
- $\widehat{\sigma}$: estimated PSF of the MSI.

Edge-Detection-Based Registration:

- Shift $\mathbf{P}$ by $q\in \{1,2,\dots ,q\}$ pixels in the horizontal and vertical directions and their combinations, obtaining ${\mathbf{P}}_{q}$.
- Downscale ${\mathbf{P}}_{q}$ to ${\mathbf{P}}_{{q}_{d}}^{\widehat{\sigma}}$ using $\widehat{\sigma}$.
- Compute the edge maps $E\left({\mathbf{P}}_{{q}_{d}}^{\widehat{\sigma}}\right)$ and $E\left(\mathbf{X}\right)$ of ${\mathbf{P}}_{{q}_{d}}^{\widehat{\sigma}}$ and each band of $\mathbf{X}$ using the Canny algorithm.
- Compute the pixel-wise ${L}_{1}$ distance ${\mathcal{D}}_{q}$ between $E\left({\mathbf{P}}_{{q}_{d}}^{\widehat{\sigma}}\right)$ and $E\left(\mathbf{X}\right)$.
- Select the directions and the corresponding shift $\tilde{q}$ that minimize $\mathcal{D}$.
- Apply $\tilde{q}$ and its directions to $\mathbf{P}$ and output the registered images.

Ensure:

- The registered $\mathbf{P}$ and $\mathbf{X}$.
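A compact sketch of this shift search is given below. To stay dependency-light it substitutes a simple gradient-magnitude edge map for the paper's Canny detector and uses circular shifts; the Gaussian PSF, the ratio of 4, the `max_shift` range, and the 0.1 threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_map(img, thresh=0.1):
    """Binary edge map from the gradient magnitude (stand-in for Canny)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(float)

def register(pan, msi_band, sigma_hat, ratio=4, max_shift=3):
    """Try integer shifts of PAN, degrade each candidate with the estimated
    PSF, and keep the shift whose edge map is closest (L1) to the MSI's."""
    e_msi = edge_map(msi_band)
    best, best_d = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = np.roll(pan, (dy, dx), axis=(0, 1))
            degraded = gaussian_filter(cand, sigma_hat)[::ratio, ::ratio]
            d = np.abs(edge_map(degraded) - e_msi).sum()
            if d < best_d:
                best, best_d = (dy, dx), d
    return best
```

The returned shift is then applied to $\mathbf{P}$ before constructing the training set.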

#### 2.4. Adaptive Pansharpening

**Algorithm 3.** The adaptive pansharpening procedure of the proposed method.

Require:

- $\mathbf{X}$: registered MSI;
- $\mathbf{P}$: registered PAN;
- $\widehat{\sigma}$: estimated PSF of the MSI;
- the designed CNN-2.

Adaptive Pansharpening:

- Downscale $\mathbf{P}$ and $\mathbf{X}$ to ${\mathbf{P}}_{d}^{\widehat{\sigma}}$ and ${\mathbf{X}}_{d}^{\widehat{\sigma}}$, respectively, using $\widehat{\sigma}$.
- Train the pansharpening CNN-2 on ${\mathbf{P}}_{d}^{\widehat{\sigma}}$ and the bicubic-interpolated ${\mathbf{X}}_{d}^{\widehat{\sigma}}$, with $\mathbf{X}$ as the target, using the MSE loss function.
- Feed $\mathbf{P}$ and the interpolated $\mathbf{X}$ into the trained CNN-2 to obtain the final prediction ${\widehat{\mathbf{X}}}_{h}$.

Ensure:

- ${\widehat{\mathbf{X}}}_{h}$: the predicted HRMSI.
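The reduced-scale (self-learning) training-pair construction above can be sketched as follows, assuming a Gaussian PSF, a resolution ratio of 4, and an MSI stored as a `(bands, H, W)` array; the CNN-2 architecture and the MSE training loop themselves are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def make_training_pair(pan, msi, sigma_hat, ratio=4):
    """Inputs are the PSF-degraded PAN and the bicubic-upsampled, PSF-degraded
    MSI; the registered full-scale MSI itself is the supervision target."""
    # Degrade both images with the estimated PSF and downsample by `ratio`.
    pan_d = gaussian_filter(pan, sigma_hat)[::ratio, ::ratio]
    msi_d = [gaussian_filter(b, sigma_hat)[::ratio, ::ratio] for b in msi]
    # Bicubic interpolation (order=3) back onto the degraded-PAN grid.
    msi_up = np.stack([zoom(b, ratio, order=3) for b in msi_d])
    inputs = np.concatenate([pan_d[None], msi_up], axis=0)
    return inputs, msi  # train CNN-2: inputs -> msi under an MSE loss
```

At test time the same network is applied at full scale, taking $\mathbf{P}$ and the interpolated $\mathbf{X}$ as inputs.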

## 3. Experiments

#### 3.1. Data Description

- GF-2 is a high-resolution optical Earth-observation satellite developed independently by China. It carries two PAN/MSI cameras with spatial resolutions of 1 m and 4 m, respectively. The MSI has four spectral channels: the blue, green, red, and near-infrared (NIR) bands. The data used in this paper cover the urban area of Guangzhou, China, and were collected on 4 November 2016. Figure 5a,d depict the PAN and MSI of these data, respectively.
- GF-1 carries two PAN/MSI cameras with spatial resolutions of 2 m and 8 m, respectively, and four additional MSI cameras with 16 m resolution. The 8 m MSI comprises four spectral bands: blue, green, red, and NIR. In this paper, we adopt the 2 m PAN and the 8 m MSI as the experimental data; the study region is Guangzhou, China. The data were acquired on 24 October 2015. The PAN and MSI of these data are displayed in Figure 5b,e, respectively.
- JL-1A was developed independently by China and launched in 2015. The satellite provides a PAN at 0.72 m and an MSI at 2.88 m resolution. The MSI has three optical bands: blue, green, and red. The data in this paper cover the region of Qi’ao Island in Zhuhai, China, and were collected on 3 January 2017. Figure 5c,f show the PAN and MSI of these data, respectively.

#### 3.2. Experimental Setup

#### 3.3. Experimental Results

#### 3.3.1. Comparison with Pansharpening Methods

#### 3.3.2. Effectiveness of PSF Estimation Method

#### 3.3.3. Comparison with Image Registration Methods

## 4. Discussion

#### 4.1. Impacts of Image Patch Size

#### 4.2. Impacts of Training Epochs

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Qu, Y.; Qi, H.; Kwan, C. Unsupervised sparse Dirichlet-net for hyperspectral image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2511–2520.
2. He, Z.; Li, J.; Liu, K.; Liu, L.; Tao, H. Kernel Low-Rank Multitask Learning in Variational Mode Decomposition Domain for Multi-/Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. **2018**, 56, 4193–4208.
3. Chen, B.; Huang, B.; Xu, B. Multi-source remotely sensed data fusion for improving land cover classification. ISPRS J. Photogramm. Remote Sens. **2017**, 124, 27–39.
4. Matteoli, S.; Diani, M.; Corsini, G. Automatic target recognition within anomalous regions of interest in hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2018**, 11, 1056–1069.
5. Murray, N.J.; Keith, D.A.; Simpson, D.; Wilshire, J.H.; Lucas, R.M. Remap: An online remote sensing application for land cover classification and monitoring. Methods Ecol. Evol. **2018**, 9, 2019–2027.
6. Liu, Z.; Li, G.; Mercier, G.; He, Y.; Pan, Q. Change detection in heterogeneous remote sensing images via homogeneous pixel transformation. IEEE Trans. Image Process. **2018**, 27, 1822–1834.
7. Shahdoosti, H.R.; Ghassemian, H. Combining the spectral PCA and spatial PCA fusion methods by an optimal filter. Inf. Fusion **2016**, 27, 150–160.
8. Carper, W.; Lillesand, T.; Kiefer, R. The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. **1990**, 56, 459–467.
9. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. **2007**, 45, 3230–3239.
10. Maglione, P.; Parente, C.; Vallario, A. Pan-sharpening Worldview-2: IHS, Brovey and Zhang methods in comparison. Int. J. Eng. Technol. **2016**, 8, 673–679.
11. Zhang, Y.; De Backer, S.; Scheunders, P. Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens. **2009**, 47, 3834–3843.
12. Shensa, M.J. The discrete wavelet transform: Wedding the à trous and Mallat algorithms. IEEE Trans. Signal Process. **1992**, 40, 2464–2482.
13. Burt, P.; Adelson, E. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. **1983**, 31, 532–540.
14. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. **2005**, 14, 2091–2106.
15. Liao, W.; Huang, X.; Coillie, F.V.; Gautama, S.; Pizurica, A.; Philips, W.; Liu, H.; Zhu, T.; Shimoni, M.; Moser, G.; et al. Processing of Multiresolution Thermal Hyperspectral and Digital Color Data: Outcome of the 2014 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2015**, 8, 2984–2996.
16. Li, Z.; Jing, Z.; Yang, X.; Sun, S. Color transfer based remote sensing image fusion using non-separable wavelet frame transform. Pattern Recognit. Lett. **2005**, 26, 2006–2014.
17. Loncan, L.; De Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M.; et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. **2015**, 3, 27–46.
18. Khademi, G.; Ghassemian, H. Incorporating an Adaptive Image Prior Model Into Bayesian Fusion of Multispectral and Panchromatic Images. IEEE Geosci. Remote Sens. Lett. **2018**, 15, 917–921.
19. Huang, B.; Song, H.; Cui, H.; Peng, J.; Xu, Z. Spatial and spectral image fusion using sparse matrix factorization. IEEE Trans. Geosci. Remote Sens. **2014**, 52, 1693–1704.
20. Guo, M.; Zhang, H.; Li, J.; Zhang, L.; Shen, H. An online coupled dictionary learning approach for remote sensing image fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2014**, 7, 1284–1294.
21. Simões, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 3373–3388.
22. Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep learning based feature selection for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. **2015**, 12, 2321–2325.
23. Chen, Y.; Tai, Y.; Liu, X.; Shen, C.; Yang, J. FSRNet: End-to-end learning face super-resolution with facial priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 2492–2501.
24. Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral image superresolution by transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2017**, 10, 1963–1974.
25. Schroff, F.; Kalenichenko, D.; Philbin, J. FaceNet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 815–823.
26. Dian, R.; Li, S.; Guo, A.; Fang, L. Deep hyperspectral image sharpening. IEEE Trans. Neural Netw. Learn. Syst. **2018**, 29, 5345–5355.
27. Huang, W.; Xiao, L.; Wei, Z.; Liu, H.; Tang, S. A new pan-sharpening method with deep neural networks. IEEE Geosci. Remote Sens. Lett. **2015**, 12, 1037–1041.
28. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. **2016**, 38, 295–307.
29. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 5–12 September 2014; pp. 184–199.
30. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by convolutional neural networks. Remote Sens. **2016**, 8, 594.
31. Rao, Y.; He, L.; Zhu, J. A residual convolutional neural network for pan-sharpening. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 19–21 May 2017; pp. 1–4.
32. Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the Accuracy of Multispectral Image Pansharpening by Learning a Deep Residual Network. IEEE Geosci. Remote Sens. Lett. **2017**, 14, 1795–1799.
33. Song, H.; Liu, Q.; Wang, G.; Hang, R.; Huang, B. Spatiotemporal satellite image fusion using deep convolutional neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. **2018**, 11, 821–829.
34. Liu, X.; Wang, Y.; Liu, Q. PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 873–877.
35. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. Multispectral and Hyperspectral Image Fusion Using a 3-D-Convolutional Neural Network. IEEE Geosci. Remote Sens. Lett. **2017**, 14, 639–643.
36. Liu, X.; Wang, Y.; Liu, Q. Remote Sensing Image Fusion Based on Two-stream Fusion Network. In Proceedings of the 2018 International Conference on Multimedia Modeling, Bangkok, Thailand, 3–17 November 2018; pp. 428–439.
37. Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A deep network architecture for pan-sharpening. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1753–1761.
38. Xing, Y.; Wang, M.; Yang, S.; Jiao, L. Pan-sharpening via deep metric learning. ISPRS J. Photogramm. Remote Sens. **2018**, 145, 165–183.
39. Azarang, A.; Ghassemian, H. A new pansharpening method using multi resolution analysis framework and deep neural networks. In Proceedings of the International Conference on Pattern Recognition and Image Analysis (IPRIA), Shahrekord, Iran, 19–20 April 2017; pp. 1–6.
40. Zhong, J.; Yang, B.; Huang, G.; Zhong, F.; Chen, Z. Remote sensing image fusion with convolutional neural network. Sens. Imaging **2016**, 17, 1–16.
41. Lanaras, C.; Bioucas-Dias, J.; Galliani, S.; Baltsavias, E.; Schindler, K. Super-resolution of Sentinel-2 images: Learning a globally applicable deep neural network. ISPRS J. Photogramm. Remote Sens. **2018**, 146, 305–319.
42. Shocher, A.; Cohen, N.; Irani, M. “Zero-shot” Super-Resolution using Deep Internal Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3118–3126.
43. Ballester, C.; Caselles, V.; Igual, L.; Verdera, J.; Rougé, B. A Variational Model for P + XS Image Fusion. Int. J. Comput. Vis. **2006**, 69, 43–58.
44. Aiazzi, B.; Alparone, L.; Garzelli, A.; Santurri, L. Blind correction of local misalignments between multispectral and panchromatic images. IEEE Geosci. Remote Sens. Lett. **2018**, 15, 1625–1629.
45. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
46. Dahl, G.E.; Sainath, T.N.; Hinton, G.E. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 8609–8613.
47. Liu, W.; Wen, Y.; Yu, Z.; Yang, M.M. Large-margin softmax loss for convolutional neural networks. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; pp. 507–516.
48. Chen, F.; Ma, J. An empirical identification method of Gaussian blur parameter for image deblurring. IEEE Trans. Signal Process. **2009**, 57, 2467–2478.
49. Ding, L.; Goshtasby, A. On the Canny edge detector. Pattern Recognit. **2001**, 34, 721–725.
50. Kwan, C.; Choi, J.H.; Chan, S.H.; Zhou, J.; Budavari, B. A super-resolution and fusion approach to enhancing hyperspectral images. Remote Sens. **2018**, 10, 1416.
51. Chan, S.H.; Wang, X.; Elgendy, O.A. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging **2017**, 3, 84–98.
52. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 7–10 December 2009; pp. 1033–1041.
53. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and pan imagery. Photogramm. Eng. Remote Sens. **2006**, 72, 591–596.
54. King, R.L.; Wang, J. A wavelet based algorithm for pan sharpening Landsat 7 imagery. In Proceedings of the 2001 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Sydney, Australia, 9–13 July 2001; Volume 2, pp. 849–851.
55. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. **2012**, 50, 528–537.
56. Wei, Y.; Yuan, Q. Deep residual learning for remote sensed imagery pansharpening. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 19–21 May 2017; pp. 1–4.
57. Michaeli, T.; Irani, M. Nonparametric blind super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 945–952.
58. Shao, W.Z.; Elad, M. Simple, accurate, and robust nonparametric blind super-resolution. In Proceedings of the International Conference on Image and Graphics, Tianjin, China, 13–15 August 2015; pp. 333–348.

**Figure 3.** Blur kernels with different $\sigma $, and the blurred results obtained by applying the corresponding kernels to the GF-2 satellite image. The kernel support is fixed at 21 × 21.
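For reference, a 21 × 21 Gaussian kernel like those shown in Figure 3 can be generated as follows (a minimal sketch; the normalization to unit sum is our assumption):

```python
import numpy as np

def gaussian_kernel(sigma, size=21):
    """Isotropic Gaussian blur kernel of fixed size, normalized to sum to 1."""
    ax = np.arange(size) - size // 2           # offsets from the kernel center
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```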

**Figure 5.** Three remotely sensed images used in the experiments. (**a**–**c**) depict the panchromatic (PAN) images; the multispectral images (MSI), with the red, green, and blue channels, are displayed in (**d**–**f**), respectively.

Abbreviation | Description | Reference
---|---|---
PCA | Principal component analysis | [17]
GS | Gram–Schmidt algorithm | [9]
MTF-GLP | Generalized Laplacian pyramid algorithm | [53]
Wavelet | Wavelet transform | [54]
CNMF | Coupled non-negative matrix factorization | [55]
PNN | Pansharpening by convolutional neural networks | [30]
DRPNN | Deep residual learning for pansharpening | [56]
PANNET | Pansharpening by deep network | [37]
DSL | The proposed deep self-learning for pansharpening | -

**Table 2.** Quantitative results of the comparison pansharpening methods for the GF-2 dataset. Bold values indicate the best result for a column.

Methods | RMSE | PSNR | SSIM | SAM | CC | ERGAS | Execution Time
---|---|---|---|---|---|---|---
PCA | 0.042 | 27.55 | 0.94 | 3.04 | 0.92 | 3.32 | 1.31 s (CPU)
GS | 0.039 | 28.22 | 0.95 | 2.88 | 0.94 | 3.08 | 0.86 s (CPU)
MTF-GLP | 0.030 | 30.43 | 0.96 | 2.06 | 0.97 | 2.37 | 0.95 s (CPU)
Wavelet | 0.048 | 26.44 | 0.92 | 3.03 | 0.89 | 3.84 | 1.22 s (CPU)
CNMF | 0.032 | 29.80 | 0.96 | 2.86 | 0.95 | 2.51 | 11.47 s (CPU)
PNN | 0.043 | 27.20 | 0.92 | 2.53 | 0.93 | 3.50 | 1277.50 s (GPU)
DRPNN | 0.051 | 25.93 | 0.90 | 3.41 | 0.92 | 4.10 | 2082.92 s (GPU)
PANNET | 0.044 | 27.11 | 0.94 | 2.89 | 0.91 | 3.55 | 1614.02 s (GPU)
DSL | 0.029 | 30.88 | 0.96 | 2.07 | 0.97 | 2.31 | 1506.83 s (GPU)

**Table 3.** Quantitative results of the comparison pansharpening methods for the GF-1 dataset. Bold values indicate the best result for a column.

Methods | RMSE | PSNR | SSIM | SAM | CC | ERGAS | Execution Time
---|---|---|---|---|---|---|---
PCA | 0.030 | 30.51 | 0.91 | 4.00 | 0.87 | 2.84 | 3.67 s (CPU)
GS | 0.025 | 32.11 | 0.95 | 3.09 | 0.88 | 2.68 | 1.22 s (CPU)
MTF-GLP | 0.018 | 35.06 | 0.96 | 1.92 | 0.94 | 1.90 | 1.05 s (CPU)
Wavelet | 0.027 | 31.45 | 0.91 | 3.53 | 0.85 | 2.80 | 1.73 s (CPU)
CNMF | 0.019 | 34.65 | 0.96 | 2.66 | 0.92 | 2.00 | 11.34 s (CPU)
PNN | 0.017 | 35.25 | 0.96 | 2.21 | 0.95 | 1.80 | 860.30 s (GPU)
DRPNN | 0.020 | 34.10 | 0.95 | 2.51 | 0.94 | 2.10 | 996.49 s (GPU)
PANNET | 0.022 | 33.35 | 0.95 | 2.91 | 0.94 | 2.28 | 1829.33 s (GPU)
DSL | 0.016 | 35.99 | 0.97 | 1.93 | 0.95 | 1.67 | 946.95 s (GPU)

**Table 4.** Quantitative results of the comparison pansharpening methods for the JL-1A dataset. Bold values indicate the best result for a column.

Methods | RMSE | PSNR | SSIM | SAM | CC | ERGAS | Execution Time
---|---|---|---|---|---|---|---
PCA | 0.039 | 28.22 | 0.93 | 3.04 | 0.62 | 3.71 | 0.47 s (CPU)
GS | 0.038 | 28.33 | 0.93 | 2.99 | 0.63 | 3.66 | 0.59 s (CPU)
MTF-GLP | 0.014 | 36.81 | 0.97 | 1.06 | 0.95 | 1.37 | 1.46 s (CPU)
Wavelet | 0.018 | 34.46 | 0.96 | 1.08 | 0.98 | 1.80 | 0.94 s (CPU)
CNMF | 0.028 | 30.96 | 0.96 | 1.95 | 0.77 | 2.63 | 8.23 s (CPU)
PNN | 0.016 | 35.65 | 0.97 | 1.17 | 0.94 | 1.56 | 790.09 s (GPU)
DRPNN | 0.016 | 36.02 | 0.97 | 1.08 | 0.94 | 1.50 | 1452.06 s (GPU)
PANNET | 0.016 | 35.95 | 0.97 | 0.99 | 0.94 | 1.52 | 1516.01 s (GPU)
DSL | 0.014 | 37.11 | 0.97 | 1.07 | 0.95 | 1.31 | 976.61 s (GPU)

Data | GF-2 | GF-1 | JL-1A
---|---|---|---
OA | 92.86% | 90.84% | 56.17%

Data | Kernel | PSNR | SSIM | SAM | CC | ERGAS
---|---|---|---|---|---|---
GF-2 | PSF estimation | 30.88 | 0.96 | 2.07 | 0.97 | 2.31
GF-2 | Default kernel | 26.88 | 0.92 | 2.80 | 0.93 | 3.66
GF-1 | PSF estimation | 35.99 | 0.97 | 1.93 | 0.95 | 1.67
GF-1 | Default kernel | 34.14 | 0.95 | 2.45 | 0.94 | 2.07
JL-1A | PSF estimation | 37.11 | 0.97 | 1.07 | 0.95 | 1.31
JL-1A | Default kernel | 35.92 | 0.97 | 1.01 | 0.94 | 1.51

**Table 7.** The comparison results of the DSL with different local registration methods for the GF-2 dataset.

Methods | PSNR | SSIM | SAM | CC | ERGAS
---|---|---|---|---|---
Proposed | 30.88 | 0.96 | 2.07 | 0.97 | 2.31
Aiazzi’s | 30.04 | 0.96 | 2.27 | 0.96 | 2.58
Without registration | 26.00 | 0.91 | 2.61 | 0.89 | 4.03

**Table 8.** The comparison results of the DSL with different local registration methods for the GF-1 dataset.

Methods | PSNR | SSIM | SAM | CC | ERGAS
---|---|---|---|---|---
Proposed | 35.99 | 0.97 | 1.93 | 0.95 | 1.67
Aiazzi’s | 33.89 | 0.95 | 2.62 | 0.94 | 1.99
Without registration | 32.49 | 0.93 | 2.55 | 0.96 | 2.35

**Table 9.** The comparison results of the DSL with different local registration methods for the JL-1A dataset.

Methods | PSNR | SSIM | SAM | CC | ERGAS
---|---|---|---|---|---
Proposed | 37.11 | 0.97 | 1.07 | 0.95 | 1.31
Aiazzi’s | 22.67 | 0.78 | 4.97 | 0.61 | 6.97
Without registration | 36.42 | 0.97 | 1.00 | 0.95 | 1.43

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Hu, J.; He, Z.; Wu, J.
Deep Self-Learning Network for Adaptive Pansharpening. *Remote Sens.* **2019**, *11*, 2395.
https://doi.org/10.3390/rs11202395
