Article

Linear and Non-Linear Models for Remotely-Sensed Hyperspectral Image Visualization

Electronics and Computers Department, Transilvania University of Braşov, 500036 Braşov, Romania
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(15), 2479; https://doi.org/10.3390/rs12152479
Submission received: 29 June 2020 / Revised: 24 July 2020 / Accepted: 30 July 2020 / Published: 2 August 2020
(This article belongs to the Special Issue Advances in Hyperspectral Data Exploitation)

Abstract

The visualization of hyperspectral images still constitutes an open question and may have an important impact on subsequent analysis tasks. The existing techniques fall mainly into the following categories: band selection, PCA-based approaches, linear approaches, approaches based on digital image processing techniques and machine/deep learning methods. In this article, we propose the usage of a linear model for color formation, to emulate the image acquisition process of a digital color camera. We show how the choice of spectral sensitivity curves impacts the visualization of hyperspectral images as RGB color images. In addition, we propose a non-linear model based on an artificial neural network. We objectively assess the impact and the intrinsic quality of the hyperspectral image visualization from the point of view of the amount of information and complexity: (i) in order to objectively quantify the amount of information present in the image, we use the color entropy as a metric; (ii) for the evaluation of the complexity of the scene we employ the color fractal dimension, as an indication of the detail and texture characteristics of the image. For comparison, we use several state-of-the-art visualization techniques. We present experimental results on visualization using both the linear and non-linear color formation models, in comparison with four other methods, and report on the superiority of the proposed non-linear model.

1. Introduction

Hyperspectral imaging captures high-resolution spectral information covering the visible and the infrared wavelength spectra, and thus can provide a high-level understanding of the land cover objects [1]. It is used in a wide variety of applications, such as agriculture [2,3], forest management [4,5], geology [6,7] and military/defense applications [8,9]. Human interaction with hyperspectral images is very important for image interpretation and analysis as the visualization is very often the first step in an image analysis chain [10]. However, displaying a hyperspectral image poses the problem of reducing the large number of bands to just three color RGB channels in order for it to be rendered on a monitor, with the information being meaningful from a human point of view. In order to address this problem, a series of hyperspectral image visualization techniques have been developed, which can be included in the following broad categories: band selection, PCA-based approaches, linear approaches, approaches based on digital image processing techniques and machine/deep learning methods.
Band selection methods consist of a mechanism of picking three spectral channels from the hyperspectral image and mapping them as the red, green and blue channels in the color composite. Commercial geospatial image analysis software products such as ENVI [11] offer the possibility to visualize a hyperspectral image by manually selecting the three channels to be displayed. More complex unsupervised band selection approaches have been developed, based on the one-bit transform (1BT) [12], normalized information (NI) [13], linear prediction (LP) or the minimum endmember abundance covariance (MEAC) [14].
Another family of hyperspectral visualization techniques consists of methods that use principal component analysis (PCA) for dimension reduction of the data. A straightforward visualization technique is to map a set of three principal components (usually the first three) to the R, G and B channels of the color image [15]. Other methods use PCA as part of a more complex approach. For instance, the method presented in [16] is an interactive visualization technique based on PCA, followed by convex optimization. The authors of [17] obtain the color composite by fusing the spectral bands with saliency maps obtained before and after applying PCA. In [1], the image is first decomposed into two different layers (base and detail) through edge-preserving filtering; dimension reduction is achieved through PCA applied on the base layer and a weighted averaging-based fusion on the detail layer, with the final result being a combination of the two layers.
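As an illustration of the simplest member of this family, the sketch below maps the first three principal components of a hyperspectral cube to the R, G and B channels; the per-channel min-max stretch is an illustrative choice and may differ from the exact normalization used in [15].

```python
import numpy as np

def pca_to_rgb(cube):
    """Map the first three principal components of a hyperspectral cube
    (H x W x B) to the R, G and B channels of a color composite."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)
    x -= x.mean(axis=0)                              # center the spectra
    # eigenvectors of the band covariance matrix, sorted by decreasing variance
    vals, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    top3 = vecs[:, np.argsort(vals)[::-1][:3]]
    rgb = (x @ top3).reshape(h, w, 3)
    rgb -= rgb.min(axis=(0, 1))                      # per-channel min-max stretch
    rgb /= rgb.max(axis=(0, 1))
    return rgb
```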
In the case of the linear method described in [18,19], the values of each output color channel are computed as projections of the hyperspectral pixel values on a vector basis. Examples of such bases include one consisting of a stretched version of the CIE 1964 color matching functions (CMFs), a constant-luma disc basis or an unwrapped cosine basis.
A set of hyperspectral image visualization approaches are based on digital image processing techniques. In [20], dimension reduction is achieved using multidimensional scaling, followed by detail enhancement using a Laplacian pyramid. The approach presented in [21] uses band averaging in order to reduce the number of bands to 9; a decolorization algorithm is then applied on groups of three adjacent channels, which produces the final color image. The technique described in [22] is based on t-distributed stochastic neighbor embedding (t-SNE) and bilateral filtering. The method in [23] is also based on bilateral filtering, together with high dynamic range (HDR) processing techniques, while in [24] a pairwise-distances-analysis-driven visualization technique is described.
Machine/deep learning-based methods used for hyperspectral image visualization generally rely on a geographically-matched RGB image, either obtained through band selection or captured by a color image sensor. Approaches include constrained manifold learning [25], a method based on self-organizing maps [26], a moving least squares framework [10], a technique based on a multichannel pulse-coupled neural network [27] or methods based on convolutional neural networks (CNNs) [28,29].
In this paper, our goal is to produce natural-looking visualization results (i.e., depicting colors close to the real ones in the scene) with the highest possible amount of information and complexity. We propose the usage of a linear color formation model, built on a model widely used in colorimetry that relies on spectral sensitivity curves. We study the impact on visualization of the choice of spectral sensitivity curves and the amount of overlap between them, which induces correlation between the three color channels used for visualization. Besides Gaussian functions, we use the spectral sensitivity functions of digital camera sensors, the main idea being to emulate the result of capturing the scene with a consumer-grade digital camera sensor instead of a hyperspectral one. Alternatively, we also developed a non-linear visualization method based on an artificial neural network, trained using the spectral signatures of a 24-sample color checker often used in colorimetry. By using the proposed approaches, we address the following question: what is the impact of the choice of visualization technique on the amount of information and complexity of a scene? The amount of information in a hyperspectral image should be preserved as much as possible after visualization. The entropy is often used to measure the amount of information contained by a signal [30] and is one of the metrics used for the objective assessment of the visualization result [10,21,31]. The complexity of a scene is related to the preservation of texture and object characteristics in the process of visualization. The color fractal dimension is a multi-scale measure capable of globally assessing the complexity of a color image, which can be useful to evaluate both the amount of detail and the object-level content in the image. We perform both a qualitative and a quantitative evaluation (using color entropy and color fractal dimension) of the described techniques in comparison with four other state-of-the-art methods, employing five widely used hyperspectral test images.
The rest of the paper is organized as follows: Section 2 presents the five hyperspectral images used in our experiments, the proposed approaches (both linear and non-linear) and the two measures embraced for the objective evaluation of their performance; Section 3 presents the experimental results; Section 4 discusses various aspects related to the proposed approaches, as well as possible further investigation paths; and Section 5 presents our conclusions.

2. Data and Methods

In this section we briefly describe the five hyperspectral images used in our experiments, the linear and non-linear models proposed and used to visualize the respective hyperspectral images, as well as the two quality metrics deployed to objectively evaluate the experimental results—the color entropy and the color fractal dimension.

2.1. Hyperspectral Images

The hyperspectral images used in our experiments are Pavia University, Pavia Centre, Indian Pines, SalinasA and Cuprite [32]. The first two were acquired by the ROSIS-3 sensor [33], while the other three were acquired by the AVIRIS sensor [34]. Figure 1 depicts RGB representations of the five test images.
Pavia University (Figure 1a) is a 610 × 340 image, with a resolution of 1.3 m. The image has 103 bands in the 430–860 nm range. According to the provided ground truth, the scene contains 9 materials, both natural and man-made. Pavia Centre (Figure 1b) is a 1096 × 715, 102-band image with the same characteristics as Pavia University. In both cases, the 10th, 31st and 46th bands were used for generating the RGB representations [25].
The third test image, Indian Pines (Figure 1c), is a 145 × 145 image, having 224 spectral reflectance bands in the 400–2500 nm range with a 20 m resolution. The water absorption bands were removed, resulting in a total of 200 bands. The image contains 16 classes, mostly vegetation/crops.
SalinasA (Figure 1d) is an 86 × 83 sub-scene of the Salinas image. After removing the water absorption bands, the image has 204 spectral reflectance bands in the 400–2500 nm range, with a spatial resolution of 3.7 m. This image exhibits 6 types of agricultural crops.
The fifth image, Cuprite (Figure 1e), is of size 512 × 614, with 188 spectral reflectance bands in the 400–2500 nm range remaining after removing noisy and water absorption channels. This image contains 14 types of minerals.
For the last three images, the RGB representations were generated by selecting the 6th, 17th, and 36th bands [25].

2.2. Linear Color Formation Model

Considering the formation process of an RGB image, we embraced a linear model given by Equation (1) [35]. In colorimetry, the linear model is used as a standard model for color formation, but usually the XYZ coordinates of colors are used as an intermediate step before computing the final RGB color coordinates [36]. In the embraced approach, for a pixel at any position (x, y) in the resulting RGB color image, the scalar value on each channel of the RGB triplet is computed as the integral of the product between the spectral reflectance R(λ) of the (x, y) point in the real scene, the power spectral distribution L(λ) of the illuminant and the spectral sensitivity C(λ) of the imaging sensor:
I_k(x, y) = \int_{\lambda_{min}}^{\lambda_{max}} C_k(\lambda) \, L(\lambda) \, R_{(x,y)}(\lambda) \, d\lambda, \quad k = R, G, B \qquad (1)
For the spectral sensitivity curves of the imaging sensor one can use theoretical or ideal curves, in order to simulate the image formation process. An alternative would be to use the actual sensitivity curves of a specific sensor, which can be measured according to the approach proposed in [35].
The illuminant can also be characterized, either by considering a standard illuminant or by measuring the real one by means of spectrophotometry. In colorimetry, a D65 illuminant is very often preferred, as it corresponds to bright summer daylight. For remotely-sensed images, one may know the illuminant as the direct sunlight incident on the Earth's surface, as the position of the sun with respect to the position of the satellite is known. The use of the illuminant in the model from Equation (1) represents merely an unbalanced weighting of the three sensitivities, favoring the blue channel (lower wavelengths) over green and red. The classical D65 illuminant is depicted in Figure 2, in support of this statement. However, in this article we assume that the illuminant is constant across all wavelengths, as we are mostly interested in the effect of the image sensor sensitivity curves on the visualization process. Thus, the influence of the illuminant L(λ) in Equation (1) is basically null and it can be removed from the integral. Consequently, the equation reduces to the following:
I_k(x, y) = \int_{\lambda_{min}}^{\lambda_{max}} C_k(\lambda) \, R_{(x,y)}(\lambda) \, d\lambda, \quad k = R, G, B \qquad (2)
This is the linear model we consider for the experimental results presented in Section 3. In order to apply Equation (2) to a hyperspectral image, we extract from it only the bands corresponding to the range [λ_min, λ_max] covered by the sensitivity curves, which corresponds to the visible spectrum. This is the main difference between the proposed model and the linear model presented in [18], which uses all of the bands of the hyperspectral image, with the weighting functions stretched in order to cover the entire range of wavelengths of the hyperspectral image. Since both the sensitivity functions and the reflectances are discrete, the pixel values of the hyperspectral image are interpolated in order to match the wavelengths and number of values of the sensitivity functions.
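For clarity, the discrete form of Equation (2) can be sketched as follows in NumPy; the wavelength grids, the interpolation step and the final normalization are illustrative assumptions rather than the exact implementation used for our experiments.

```python
import numpy as np

def linear_rgb(cube, band_wl, sens_wl, sens):
    """Render a hyperspectral cube (H x W x B) as RGB via Equation (2).

    band_wl : wavelengths (nm) of the B image bands
    sens_wl : wavelengths (nm) at which the sensitivity curves are sampled
    sens    : 3 x len(sens_wl) array holding the R, G and B sensitivities
    """
    h, w, _ = cube.shape
    # keep only the bands inside the range covered by the sensitivity curves
    keep = (band_wl >= sens_wl.min()) & (band_wl <= sens_wl.max())
    spectra = cube[:, :, keep].reshape(-1, keep.sum())
    # interpolate each pixel spectrum onto the sensitivity wavelength grid
    resampled = np.array([np.interp(sens_wl, band_wl[keep], s) for s in spectra])
    # discrete version of the integral: a weighted sum over wavelength
    rgb = (resampled @ sens.T).reshape(h, w, 3)
    return rgb / rgb.max()        # scale to [0, 1] for display
```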
Given the embraced linear model and sensitivity functions, our study is limited to the visible spectrum. The extension beyond the visible range could be done either by (i) stretching the sensitivity functions [18] or (ii) adding a fourth color channel, given that one of the latest trends in color display technologies is to add a fourth channel (such as a yellow channel) besides the RGB primaries [37]. However, both approaches would lead to unnatural-looking visualization results, which is not the goal of this study.

2.3. Spectral Sensitivity Functions

As the main objective of visualization is very often the interpretation of the image by humans, we start by considering the spectral sensitivity of the human visual system, which is actually the paradigm for RGB-based color image acquisition and display systems. Figure 3 presents the spectral sensitivities of the human cone cells in the retina, based on the data from [38]. The spectral sensitivity describes the relative response of a detector as a function of the wavelength of the incoming light and underlies the detection of color. The sensitivities are labeled in three categories, depending on the wavelength of their peak: short (S), medium (M) and long (L). The cone cells of the S group are called β cells, and their range corresponds to the perception of the blue color. Similarly, the range of the M group (γ cells) corresponds to green and that of the L group (ρ cells) corresponds to red.
The RGB color digital cameras are characterized by their sensor spectral sensitivity functions, which define the performance of the respective system. The sensor sensitivity functions for consumer-grade cameras have a similar shape to the spectral sensitivities of human cone cells, since the aim of these products is to capture a representation of the scene that is as accurate as possible from the point of view of human perception. The five digital camera sensor spectral sensitivity functions used in our experiments, taken from [35], are presented in Figure 4.
Starting from the spectral sensitivities of the Canon 5D camera sensor, for our experiments we modeled a set of spectral sensitivities consisting of three Gaussian functions with means equal to the wavelengths of the three peaks in Figure 4a and with increasing standard deviation. The functions are depicted in Figure 5. Figure 5a depicts Gaussian functions with a standard deviation of 0, which are basically unit impulses. In this case, the linear model is reduced to a band selection approach (BS). The standard deviation is gradually increased in the next graphs, resulting in an increasing degree of overlap between the three functions: no overlap (NOL), small overlap (SOL), medium overlap (MOL) and high overlap (HOL). In this way, we emulate various levels of correlation between the three RGB color channels of the considered sensor model, from zero correlation, corresponding to a complete separation between the color channels for an ideal imaging sensor, to high overlap, corresponding to a low-performance imaging sensor.
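A minimal sketch for generating such Gaussian sensitivity curves is given below; the peak wavelengths are illustrative placeholders, not the measured peaks of the Canon 5D curves, and the per-curve normalization is an assumption.

```python
import numpy as np

def gaussian_sensitivities(wl, peaks=(600.0, 530.0, 460.0), sigma=20.0):
    """Three Gaussian sensitivity curves (R, G, B) sampled at wavelengths wl.
    sigma = 0 degenerates to unit impulses, i.e., plain band selection (BS)."""
    curves = []
    for mu in peaks:
        if sigma == 0:
            c = np.zeros_like(wl, dtype=float)
            c[np.argmin(np.abs(wl - mu))] = 1.0      # unit impulse at the peak
        else:
            c = np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
        curves.append(c / c.sum())                   # each curve sums to one
    return np.array(curves)

# example: curves over the visible range, with a wider overlap between channels
wl = np.arange(400.0, 701.0, 5.0)
sens = gaussian_sensitivities(wl, sigma=40.0)
```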

2.4. Non-Linear Color Formation Model

The non-linear color formation model that we propose is based on an Artificial Neural Network (ANN) [39], with the input feature vector consisting of a spectral reflectance curve and the output being the corresponding RGB value. The architecture of the fully connected 5-layer network is depicted in Figure 6. The network uses the Exponential Linear Unit (ELU) [40] as an activation function instead of the more standard Rectified Linear Unit (ReLU), in order to overcome the problem of having a multitude of deactivated neurons (also referred to as “dying neurons” [41]). The implementation was done using the PyTorch library [42].
For the supervised training of the ANN, we chose to use a standard set of 24 colors widely used in colorimetry: the McBeth color chart [43], depicted in Figure 7. In Figure 8 we show the spectral reflectance curves of the color patches for each row in the McBeth color chart, with their original designations in the legend of the plots. For each color, the RGB triplet is known and we used the measurements provided by [44]. The wavelength range covered by the reflectance curves is 380–780 nm. The reason for choosing this standard McBeth color set is twofold: (i) the spectral reflectance curves of the colors are specified regardless of the illuminant, therefore they can be used as references both in ideal and real conditions; and (ii) this particular color set was determined independently from the domain of remote sensing, thus it can be seen as a neutral set of colors compared to existing data sets of material spectral signatures, such as the ASTER spectral library [45]. In addition, the chosen color set does not require defining a mapping between the spectral curves and the corresponding RGB colors, as the latter are already provided. The training of the ANN is done via the classical backpropagation algorithm, with the mean squared error (MSE) as the cost function and Adam as the optimizer.
As in the case of the linear model, only the bands covered by the spectral reflectance curves of the McBeth color set are used from the hyperspectral image. Concretely, the common range between the Pavia University image and the McBeth curves is 430–780 nm. This range is covered by 83 bands of the image and 71 values of the spectral reflectance curves. The 83 bands of the image are reduced to 71 through interpolation, such as to match the McBeth spectral reflectance curves, giving the size of the input feature vector in Figure 6.
After training with the 24 reflectance curves, the network is applied on a pixel-by-pixel basis; thus, for each pixel in the input image (a vector of 71 values in the case of Pavia University), the 3 output values (R, G and B) are obtained and placed in the corresponding position in the visualization result.
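A minimal PyTorch sketch of the described setup is given below; the hidden layer widths, learning rate and number of epochs are illustrative assumptions, as only the 5-layer fully connected architecture, the ELU activation, the MSE loss and the Adam optimizer are fixed above.

```python
import torch
import torch.nn as nn

class SpectrumToRGB(nn.Module):
    """Fully connected network mapping a reflectance spectrum to an RGB triplet."""
    def __init__(self, n_bands=71):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ELU(),
            nn.Linear(64, 32), nn.ELU(),
            nn.Linear(32, 16), nn.ELU(),
            nn.Linear(16, 8), nn.ELU(),
            nn.Linear(8, 3),
        )

    def forward(self, x):
        return self.net(x)

# placeholders standing in for the 24 McBeth reflectance curves and RGB triplets
refl = torch.rand(24, 71)
rgb = torch.rand(24, 3)

model = SpectrumToRGB(n_bands=71)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(2000):                 # supervised training via backpropagation
    optimizer.zero_grad()
    loss = loss_fn(model(refl), rgb)
    loss.backward()
    optimizer.step()

# after training, the network is applied pixel by pixel on the image spectra
```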

2.5. Quality Metrics

A commonly used objective quality metric for hyperspectral image visualization is the entropy, which is a measure of the degree of information preservation in the resulting image [1]. The most common definition of entropy is the Shannon entropy (see Equation (3)), which measures the average level of information present in a signal with N quantization levels [30].
H = -\sum_{i=1}^{N} p_i \log_2 p_i \qquad (3)
where p_i represents the probability of finding a certain level in the signal (or color i in a given color set, in the context of color images). From the Shannon definition, various other definitions were developed: the Rényi entropy (as a generalization), the Hartley entropy, the collision entropy and the min-entropy, or the Kolmogorov entropy, which is another generic definition of entropy [46]. The original Shannon entropy was embraced by Haralick as one of his thirteen features proposed for texture characterization [47]. In our experiments, we use the extension of the entropy to color images from [48].
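One possible reading of such a color entropy, counting the relative frequencies of the distinct RGB colors of the image, is sketched below; the exact definition from [48] may differ.

```python
import numpy as np

def color_entropy(rgb8):
    """Shannon entropy (Equation (3)) over the distinct colors of an 8-bit RGB image."""
    # encode every RGB triplet as a single 24-bit integer code
    r, g, b = (rgb8[..., i].astype(np.int64) for i in range(3))
    codes = (r << 16) | (g << 8) | b
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()               # probability of each color
    return float(-(p * np.log2(p)).sum())
```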
Additionally, we use the fractal dimension from fractal geometry [49] to assess the complexity of the color images resulting from the process of hyperspectral image visualization. The fractal dimension, also called similarity dimension, is a measure of the variations, irregularities or wiggliness of a fractal object [50]. This multi-scale measure is often used in practice for the discrimination between various signals or patterns exhibiting fractal properties, such as textures [51]. In [52] the fractal dimension was linked to the visual complexity of a color image, more specifically to the perceived beauty of visual art. Consequently, we use it in this article both to objectively assess the color image content at multiple scales and to gauge the appeal of the visualization from a human perception point of view.
The theoretical fractal dimension is the Hausdorff dimension [53], which lies in the interval [E, E+1], where E is the topological dimension of the object (thus, for gray-scale images the fractal dimension lies between 2 and 3). Because it was defined for continuous objects, equivalent fractal dimension estimators were defined and used in practice: the probability measure [54,55], the Minkowski or box-counting dimension [53], the δ-parallel body method [56], the gliding box-counting algorithm [57], etc. The estimation of the fractal dimension was extended to the color image domain, with approaches such as the marginal color analysis [58] or the fully vectorial probabilistic box-counting [59]. More recent attempts at defining the fractal dimension for color images also exist [60,61]. For an RGB color image, the estimated color fractal dimension should lie in the interval [2, 5] [59].
In our experiments, we used the probabilistic box-counting approach defined for color images in [59] for the estimation of the fractal dimension of the visualization results. The classical box-counting method consists of covering the image with grids at different scales and counting the number of boxes that cover the image pixels in each grid. The fractal dimension FD is then computed as [62]:
FD = \lim_{r \to 0} \frac{\log N_r}{\log (1/r)} \qquad (4)
where N_r is the number of boxes and r is the scale.
FD is defined and computed for binary and grayscale images (considering the z = f(x, y) image model, where z is the luminance and x and y are the spatial coordinates). The extension of FD to color images, the color fractal dimension (CFD), is defined by considering the color image as a surface in a 5-dimensional hyperspace (R, G, B, x, y) [59] and 5D hyper-boxes instead of 3D regular ones. For the experimental results presented in Section 3, the stable CFD estimator proposed in [63] was used, which minimizes the variance of the nine regression line estimators used in the process of fractal dimension estimation. See [64] for reference color fractal images and the Matlab implementation of the baseline CFD estimation approach.
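To illustrate the principle of Equation (4), a minimal sketch of the classical box-counting estimate for a binary image is given below; it is only the baseline method, not the 5D probabilistic color estimator of [59,63] used for our results.

```python
import numpy as np

def box_counting_fd(mask, sizes=(2, 4, 8, 16, 32, 64)):
    """Classical box-counting estimate of Equation (4) for a binary image.
    The CFD used in the paper extends the counting to 5D (R, G, B, x, y)
    hyper-boxes [59]; this sketch only illustrates the baseline idea."""
    counts = []
    for r in sizes:
        h = (mask.shape[0] // r) * r
        w = (mask.shape[1] // r) * r
        blocks = mask[:h, :w].reshape(h // r, r, w // r, r)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))  # occupied boxes
    # the slope of log N_r versus log(1/r) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```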

3. Experimental Results

Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 depict the visualization results for the five hyperspectral test images presented in Section 2. Each figure is organized as follows: on the top row, the results obtained with the proposed linear approach using the Gaussian functions (Figure 5); on the middle row, the results obtained with the linear approach using camera spectral sensitivity functions (Figure 4); on the bottom row, the results obtained using the proposed ANN approach (Section 2.4), the approach based on the PCA to RGB mapping [15], the linear approach based on the stretched color matching functions (CMF) [18] and two recent approaches, constrained manifold learning (CML) [25] and decolorization-based hyperspectral visualization (DHV) [21].
For the Gaussian approaches, it can be noticed that, as the degree of overlap between the three functions increases, the visualization results tend to come closer to grayscale images, as expected. In the case of the camera functions, the difference between the results is not significant, showing that the choice of a particular camera model over another does not have a large impact on the visualization results. Moreover, there is no significant difference in the visualization results between the two cases of the proposed linear approach. The proposed ANN approach obtains satisfactory results in terms of both color and contrast, while the other depicted methods, particularly PCA and DHV, do not tend to give natural-looking results.
The corresponding values of the color entropy H and color fractal dimension CFD are given in Table 1 and Table 2. One may note that, for the set of Linear Gaussian approaches, both the color entropy and the color fractal dimension are maximal for band selection, with one exception for the SalinasA image, and they both decrease as the correlation between the three Gaussian functions increases, since the color content tends to gray-scale and thus the complexity diminishes. For the set of proposed Linear Camera approaches, the two quality measures have similar values; basically, there is no noticeable difference between the visualization results. For both the Linear Gaussian and Linear Camera approaches, the two quality measures exhibit relatively modest values, which indicates that the visualization result neither contains the most information nor is the most complex. The highest amount of information, measured through the color entropy, is obtained using the proposed non-linear ANN approach for the Pavia University and Pavia Centre images, the PCA approach for the Indian Pines and Cuprite images, and DHV for the SalinasA image. For the three latter images, the proposed ANN-based non-linear approach obtains the third (Indian Pines, Cuprite) and second (SalinasA) best visualization from the point of view of entropy. The highest complexity, measured through the color fractal dimension, is obtained when the hyperspectral images are visualized using the non-linear approach based on the ANN, with the exception of the Cuprite image, in which case the PCA approach proves to be superior. The main advantage of the ANN method is that basically any out-of-the-box artificial neural network model can be used, by changing only the input layer in order to match the hyperspectral image under analysis. Table 3 lists, for each visualization method, the independent data used in addition to the hyperspectral images. In the case of the CML approach, the geographically-matched RGB image was obtained through band selection from the original image; the images used are depicted in Figure 1, while the specific bands chosen are listed in Section 2.1.

4. Discussion

First of all, other measures can be considered for the assessment of the complexity of color images, such as the Naive Complexity Measure [65]. For the evaluation of the information present in a color image, one could use the Pearson correlation coefficient between the color channels of the resulting RGB color image [63] as an indication of the overlap between the information on the three RGB color channels. In the presence of a reference or ground truth, similarity indexes such as the Structural Similarity Index Measure (SSIM) [66] can be used. Nevertheless, the ultimate criteria for the evaluation of the performance of hyperspectral image visualization approaches are dictated by the specific application and its objectives.
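As a small illustration of the channel-correlation idea mentioned above, a sketch computing the three pairwise Pearson coefficients of an RGB image is given below; it is not the exact procedure from [63].

```python
import numpy as np

def channel_correlations(rgb):
    """Pairwise Pearson correlation coefficients between the R, G and B channels."""
    flat = rgb.reshape(-1, 3).astype(np.float64)
    c = np.corrcoef(flat, rowvar=False)          # 3 x 3 correlation matrix
    return {"RG": c[0, 1], "RB": c[0, 2], "GB": c[1, 2]}
```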
The best experimental results were obtained using the proposed non-linear ANN-based model, despite the extremely reduced training set of only 24 spectral reflectance curves and the corresponding RGB triplets. One should investigate the effects of increasing the size of the training set, in order to assess and reduce the overfitting [67] which may occur in our experiments. Extending the training set implies producing more color references, characterized both by their hyperspectral signatures (e.g., measured with a spectrophotometer) and their RGB triplets (e.g., obtained with a calibrated digital color image acquisition system). The non-linear model itself could be developed further by considering the wavelengths outside the visible range and taking into account the possibility to display the image with more than 3 color channels, including various choices for the mapping between the hyperspectral signatures and RGB triplets.
The linear models used to obtain the experimental results can be useful in understanding both the capabilities and limitations of current or new imaging sensors. The full characterization of the imaging sensors is mandatory in order to predict the imaging process outcome.

5. Conclusions

In this article, we proposed the usage of a linear model for color formation based on spectral sensitivity curves in order to visualize hyperspectral images by rendering them as RGB color images. We deployed both Gaussian and real digital camera sensitivity curves and showed that, as the correlation between the RGB color channels increases, similar to the overlap of the curves for both the human visual system and commercially-available digital cameras, the resulting color images tend toward gray-scale and exhibit both a smaller amount of information and a lower complexity. We also proposed a non-linear color formation model based on an artificial neural network, which was trained with the colors of the McBeth color chart widely used in colorimetry. The training was supervised, as the 24 colors of the McBeth chart are specified both by their spectral reflectance curves and their RGB triplets. Given their construction, both the proposed linear and non-linear approaches generate color images with natural colors.
For the objective assessment of the quality of the hyperspectral image visualization results, we deployed the widely-used measure of entropy, as it is an indicator of the amount of information contained by a signal. We also proposed the usage of the fractal dimension, which is a multi-scale measure usually employed to assess the complexity of color images, but also their beauty and appeal according to some studies. The fractal dimension is an indicator of the amount of details present in the image along multiple analysis scales.
In our experiments, we compared the proposed approaches with four other visualization techniques, using five remotely-sensed hyperspectral images. In the case of the Gaussian functions, our results show that, as the degree of overlap between the functions increases, the visualization results come closer to a grayscale image. With regard to the camera sensitivity functions, we show that the specific choice of a camera model does not have a significant impact on the visualization result. Our experiments also show that the proposed non-linear model achieves the best visualization results from the point of view of the complexity of the resulting color images. We envisage further developments by investigating the possible overfitting occurring in the case of the ANN approach, extending the approach beyond the visible range and using a fourth color channel. We underline that, for the choice of the most appropriate visualization technique, one may need to consider three important aspects: the naturalness of the resulting colors, the amount of information present in the resulting color image and its complexity along multiple scales.

Author Contributions

Idea and methodology, M.I. and R.-M.C.; software, M.M., C.H. and R.-M.C.; investigation, M.I. and R.-M.C.; writing—original draft preparation, R.-M.C., M.M. and M.I.; writing—review and editing, R.-M.C.; supervision, M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kang, X.; Duan, P.; Li, S. Hyperspectral image visualization with edge-preserving filtering and principal component analysis. Inf. Fusion 2020, 57, 130–143.
2. Teke, M.; Deveci, H.S.; Haliloğlu, O.; Gürbüz, S.Z.; Sakarya, U. A short survey of hyperspectral remote sensing applications in agriculture. In Proceedings of the IEEE 2013 6th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 12–14 June 2013; pp. 171–176.
3. Reshma, S.; Veni, S. Comparative analysis of classification techniques for crop classification using airborne hyperspectral data. In Proceedings of the IEEE 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, India, 22–24 March 2017; pp. 2272–2276.
4. Piiroinen, R.; Heiskanen, J.; Maeda, E.; Viinikka, A.; Pellikka, P. Classification of tree species in a diverse African agroforestry landscape using imaging spectroscopy and laser scanning. Remote Sens. 2017, 9, 875.
5. Fricker, G.A.; Ventura, J.D.; Wolf, J.A.; North, M.P.; Davis, F.W.; Franklin, J. A convolutional neural network classifier identifies tree species in mixed-conifer forest from hyperspectral imagery. Remote Sens. 2019, 11, 2326.
6. Dumke, I.; Nornes, S.M.; Purser, A.; Marcon, Y.; Ludvigsen, M.; Ellefmo, S.L.; Johnsen, G.; Søreide, F. First hyperspectral imaging survey of the deep seafloor: High-resolution mapping of manganese nodules. Remote Sens. Environ. 2018, 209, 19–30.
7. Acosta, I.C.C.; Khodadadzadeh, M.; Tusa, L.; Ghamisi, P.; Gloaguen, R. A machine learning framework for drill-core mineral mapping using hyperspectral and high-resolution mineralogical data fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4829–4842.
8. Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117.
9. El-Sharkawy, Y.H.; Elbasuney, S. Hyperspectral imaging: A new prospective for remote recognition of explosive materials. Remote Sens. Appl. Soc. Environ. 2019, 13, 31–38.
10. Liao, D.; Chen, S.; Qian, Y. Visualization of Hyperspectral Images Using Moving Least Squares. In Proceedings of the IEEE 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2851–2856.
11. ENVI. Available online: https://www.harrisgeospatial.com/Software-Technology/ENVI/ (accessed on 23 July 2020).
12. Demir, B.; Celebi, A.; Erturk, S. A low-complexity approach for the color display of hyperspectral remote-sensing images using one-bit-transform-based band selection. IEEE Trans. Geosci. Remote Sens. 2008, 47, 97–105.
13. Le Moan, S.; Mansouri, A.; Voisin, Y.; Hardeberg, J.Y. A constrained band selection method based on information measures for spectral image color visualization. IEEE Trans. Geosci. Remote Sens. 2011, 49, 5104–5115.
14. Su, H.; Du, Q.; Du, P. Hyperspectral imagery visualization using band selection. In Proceedings of the IEEE 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–4.
15. Tyo, J.S.; Konsolakis, A.; Diersen, D.I.; Olsen, R.C. Principal-components-based display strategy for spectral imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41, 708–718.
16. Cui, M.; Razdan, A.; Hu, J.; Wonka, P. Interactive hyperspectral image visualization using convex optimization. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1673–1684.
17. Khan, H.A.; Khan, M.M.; Khurshid, K.; Chanussot, J. Saliency based visualization of hyper-spectral images. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 1096–1099.
18. Jacobson, N.P.; Gupta, M.R. Design goals and solutions for display of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2684–2692.
19. Jacobson, N.P.; Gupta, M.R.; Cole, J.B. Linear fusion of image sets for display. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3277–3288.
20. Fang, J.; Qian, Y. Local detail enhanced hyperspectral image visualization. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 1092–1095.
21. Kang, X.; Duan, P.; Li, S.; Benediktsson, J.A. Decolorization-based hyperspectral image visualization. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4346–4360.
22. Zhang, B.; Yu, X. Hyperspectral image visualization using t-distributed stochastic neighbor embedding. In MIPPR 2015: Remote Sensing Image Processing, Geographic Information Systems, and Other Applications; Liu, J., Sun, H., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2015; Volume 9815, pp. 14–21.
23. Ertürk, S.; Süer, S.; Koç, H. A high-dynamic-range-based approach for the display of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2001–2004.
24. Long, Y.; Li, H.C.; Celik, T.; Longbotham, N.; Emery, W.J. Pairwise-Distance-Analysis-Driven Dimensionality Reduction Model with Double Mappings for Hyperspectral Image Visualization. Remote Sens. 2015, 7, 7785–7808.
25. Liao, D.; Qian, Y.; Tang, Y.Y. Constrained manifold learning for hyperspectral imagery visualization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1213–1226.
26. Jordan, J.; Angelopoulou, E. Hyperspectral image visualization with a 3-D self-organizing map. In Proceedings of the 2013 5th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, FL, USA, 26–28 June 2013; pp. 1–4.
27. Duan, P.; Kang, X.; Li, S.; Ghamisi, P. Multichannel pulse-coupled neural network-based hyperspectral image visualization. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2444–2456.
28. Duan, P.; Kang, X.; Li, S. Convolutional Neural Network for Natural Color Visualization of Hyperspectral Images. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3372–3375.
29. Tang, R.; Liu, H.; Wei, J.; Tang, W. Supervised learning with convolutional neural networks for hyperspectral visualization. Remote Sens. Lett. 2020, 11, 363–372.
30. Shannon, C. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–623.
31. Amankwah, A. A Multivariate Gradient and Mutual Information Measure Method for Hyperspectral Image Visualization. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5001–5004.
32. Computational Intelligence Group of the University of the Basque Country (UPV/EHU). Hyperspectral Remote Sensing Scenes. 2014. Available online: http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 23 July 2020).
33. Buschner, R.; Doerffer, R.; van der Piepen, H. Imaging Spectrometer ROSIS. In Laser/Optoelektronik in der Technik / Laser/Optoelectronics in Engineering; Waidelich, W., Ed.; Springer: Berlin/Heidelberg, Germany, 1990; pp. 368–373.
34. Vane, G.; Green, R.O.; Chrien, T.G.; Enmark, H.T.; Hansen, E.G.; Porter, W.M. The airborne visible/infrared imaging spectrometer (AVIRIS). Remote Sens. Environ. 1993, 44, 127–143.
35. Gu, J.; Jiang, J.; Susstrunk, S.; Liu, D. What is the Space of Spectral Sensitivity Functions for Digital Color Cameras? In WACV ’13, Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Clearwater Beach, FL, USA, 15–17 January 2013; IEEE Computer Society: Washington, DC, USA, 2013; pp. 168–179.
36. CIE Standard Colorimetric System. In Colorimetry; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2006; Chapter 3; pp. 63–114.
37. Kishino, K.; Sakakibara, N.; Narita, K.; Oto, T. Two-dimensional multicolor (RGBY) integrated nanocolumn micro-LEDs as a fundamental technology of micro-LED display. Appl. Phys. Express 2019, 13, 014003.
38. Stockman, A.; Sharpe, L.T. The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype. Vis. Res. 2000, 40, 1711–1737.
39. Haykin, S.S. Neural Networks and Learning Machines, 3rd ed.; Pearson Education: Upper Saddle River, NJ, USA, 2009.
40. Clevert, D.; Unterthiner, T.; Hochreiter, S. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). In Proceedings of the 4th International Conference on Learning Representations, ICLR, San Juan, Puerto Rico, 2–4 May 2016.
41. Trottier, L.; Giguère, P.; Chaib-draa, B. Parametric exponential linear unit for deep convolutional neural networks. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 207–214.
42. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems; NIPS: San Diego, CA, USA, 2019; pp. 8026–8037.
43. McCamy, C.S.; Marcus, H.; Davidson, J.G. A Color-Rendition Chart. J. Appl. Photogr. Eng. 1976, 2, 95–99.
44. Pascale, D. RGB Coordinates of the Macbeth ColorChecker; The BabelColor Company: Montreal, QC, Canada, 2006.
45. Baldridge, A.; Hook, S.; Grove, C.; Rivera, G. The ASTER spectral library version 2.0. Remote Sens. Environ. 2009, 113, 711–715.
46. Pham, T.D. The Kolmogorov-Sinai Entropy in the Setting of Fuzzy Sets for Image Texture Analysis and Classification. Pattern Recognit. 2016, 53, 229–237.
47. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
48. Ivanovici, M.; Richard, N. Entropy versus fractal complexity for computer-generated color fractal images. In Proceedings of the 4th CIE Expert Symposium on Colour and Visual Appearance, Prague, Czech Republic, 6–7 September 2016.
49. Mandelbrot, B. The Fractal Geometry of Nature; W.H. Freeman and Co.: New York, NY, USA, 1982.
50. Peitgen, H.; Saupe, D. The Science of Fractal Images; Springer: Berlin, Germany, 1988.
51. Chen, W.; Yuan, S.; Hsiao, H.; Hsieh, C. Algorithms to estimating fractal dimension of textured images. IEEE Int. Conf. Acoust. Speech Signal Process. ICASSP 2001, 3, 1541–1544.
52. Forsythe, A.; Nadal, M.; Sheehy, N.; Cela-Conde, C.; Sawey, M. Predicting beauty: Fractal dimension and visual complexity in art. Br. J. Psychol. 2011, 102, 49–70.
53. Falconer, K. Fractal Geometry, Mathematical Foundations and Applications; John Wiley and Sons: Hoboken, NJ, USA, 1990.
54. Voss, R. Random Fractals: Characterization and measurement. Scaling Phenom. Disord. Syst. 1986, 10, 51–61.
55. Keller, J.; Chen, S. Texture Description and segmentation through Fractal Geometry. Comput. Vis. Graph. Image Process. 1989, 45, 150–166.
56. Maragos, P.; Sun, F. Measuring the fractal dimension of signals: Morphological covers and iterative optimization. IEEE Trans. Signal Process. 1993, 41, 108–121.
57. Allain, C.; Cloitre, M. Characterizing the lacunarity of random and deterministic fractal sets. Phys. Rev. A 1991, 44, 3552–3558.
58. Manousaki, A.; Manios, A.; Tsompanaki, E.; Tosca, A. Use of color texture in determining the nature of melanocytic skin lesions—A qualitative and quantitative approach. Comput. Biol. Med. 2006, 36, 416–427.
59. Ivanovici, M.; Richard, N. Fractal Dimension of Colour Fractal Images. IEEE Trans. Image Process. 2011, 20, 227–235.
60. Zhao, X.; Wang, X. Fractal Dimension Estimation of RGB Color Images Using Maximum Color Distance. Fractals 2016, 24, 1650040.
61. Nayak, S.R.; Mishra, J.; Khandual, A.; Palai, G. Fractal dimension of RGB color images. Optik 2018, 162, 196–205.
62. Li, J.; Du, Q.; Sun, C. An improved box-counting method for image fractal dimension estimation. Pattern Recognit. 2009, 42, 2460–2469.
63. Ivanovici, M. Fractal Dimension of Color Fractal Images with Correlated Color Components. IEEE Trans. Image Process. 2020.
64. Ivanovici, M. Color Fractal Images with Independent RGB Color Components. 2019. Available online: https://ieee-dataport.org/open-access/color-fractal-images-independent-rgb-color-components (accessed on 30 July 2020).
65. Ivanovici, M.; Richard, N. A Naive Complexity Measure for color texture images. In Proceedings of the 2017 International Symposium on Signals, Circuits and Systems (ISSCS), Iasi, Romania, 13–14 July 2017; pp. 1–4.
66. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
67. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
Figure 1. RGB representations of the five hyperspectral images used in our experiments. Top row: images acquired by the ROSIS-3 sensor; bottom row: images acquired by the AVIRIS sensor.
Figure 2. The D65 illuminant.
Figure 3. Spectral sensitivities of human cone cells.
Figure 4. Spectral sensitivity functions for 5 digital cameras.
Figure 5. Gaussian spectral sensitivity functions based on the functions of the Canon 5D camera from Figure 4a.
Figure 6. Architecture of the ANN.
Figure 7. The McBeth color chart.
Figure 8. Spectral reflectance curves of the color patches in each row of the McBeth color chart.
Figure 9. Experimental results on the Pavia University image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 10. Experimental results on the Pavia Centre image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 11. Experimental results on the Indian Pines image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 12. Experimental results on the SalinasA image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Figure 13. Experimental results on the Cuprite image. (a) BS. (b) NOL. (c) SOL. (d) MOL. (e) HOL. (f) Canon 5D. (g) Canon 1D. (h) Hasselblad H2. (i) Nikon D3X. (j) Nikon D50. (k) ANN. (l) PCA [15]. (m) CMF [18]. (n) CML [25]. (o) DHV [21].
Table 1. Entropy and fractal dimension for the visualization results in Figure 9 and Figure 10. The values in bold represent the highest values for the respective image.

Method                         Pavia University        Pavia Centre
                               H        CFD            H        CFD
Linear Gaussian (proposed)
  BS                           13.22    2.41           13.48    2.44
  NOL                          12.81    2.39           13.08    2.39
  SOL                          12.68    2.38           12.96    2.38
  MOL                          12.45    2.37           12.70    2.37
  HOL                          10.79    2.35           10.82    2.36
Linear Camera (proposed)
  Canon 5D                     12.03    2.37           12.10    2.36
  Canon 1D                     12.23    2.37           12.31    2.37
  Hasselblad H2                12.25    2.38           12.25    2.37
  Nikon D3X                    12.30    2.38           12.34    2.37
  Nikon D50                    12.45    2.38           12.49    2.38
ANN (proposed)                 15.38    3.02           15.28    2.87
PCA [15]                       15.20    2.84           14.58    2.75
CMF [18]                       14.98    2.51           15.11    2.63
CML [25]                       12.79    2.80           12.94    2.63
DHV [21]                       15.31    2.84           15.21    2.77
Table 2. Entropy and fractal dimension for the visualization results in Figure 11, Figure 12 and Figure 13. The values in bold represent the highest values for the respective image.

Method                         Indian Pines        SalinasA            Cuprite
                               H       CFD         H       CFD         H       CFD
Linear Gaussian (proposed)
  BS                           13.22   2.41        12.01   2.38        13.71   2.84
  NOL                          11.76   2.51        11.29   2.29        13.31   2.80
  SOL                          11.53   2.56        11.16   2.28        13.19   2.79
  MOL                          11.31   2.46        10.97   2.27        12.84   2.79
  HOL                          10.29   2.42        9.84    2.36        10.99   2.73
Linear Camera (proposed)
  Canon 5D                     12.03   2.37        11.06   2.43        12.13   2.76
  Canon 1D                     11.28   2.46        10.51   2.26        12.32   2.77
  Hasselblad H2                11.40   2.46        10.47   2.25        12.36   2.76
  Nikon D3X                    11.34   2.45        10.52   2.25        12.35   2.77
  Nikon D50                    11.50   2.46        10.69   2.27        12.56   2.78
ANN (proposed)                 13.89   3.24        11.50   2.75        15.76   3.00
PCA [15]                       14.24   3.19        11.04   2.38        17.40   3.37
CMF [18]                       12.51   3.17        10.36   1.86        8.00    2.77
CML [25]                       12.81   2.68        11.10   2.14        13.66   2.67
DHV [21]                       14.13   3.03        12.29   2.44        16.22   3.06
Table 3. Independent data used by the methods under comparison.

Method             Independent Data
Linear Gaussian    Gaussian sensitivity functions (Figure 5)
Linear Camera      Camera sensitivity functions (Figure 4)
ANN                McBeth spectral reflectance curves (Figure 8)
PCA [15]           none
CMF [18]           Stretched CIE 1964 color matching functions
CML [25]           Geographically-matched RGB image (Figure 1)
DHV [21]           none

Share and Cite

Coliban, R.-M.; Marincaş, M.; Hatfaludi, C.; Ivanovici, M. Linear and Non-Linear Models for Remotely-Sensed Hyperspectral Image Visualization. Remote Sens. 2020, 12, 2479. https://doi.org/10.3390/rs12152479
