Article

Entropy as a High-Level Feature for XAI-Based Early Plant Stress Detection

Department of Mathematical Software and Supercomputing Technologies, Lobachevsky University, 603950 Nizhny Novgorod, Russia
* Author to whom correspondence should be addressed.
Entropy 2022, 24(11), 1597; https://doi.org/10.3390/e24111597
Submission received: 19 August 2022 / Revised: 17 October 2022 / Accepted: 26 October 2022 / Published: 3 November 2022

Abstract

This article is devoted to searching for high-level explainable features that can remain explainable for a wide class of objects or phenomena and become an integral part of explainable AI (XAI). The present study involved a 25-day experiment on early diagnosis of wheat stress using drought stress as an example. The state of the plants was periodically monitored via thermal infrared (TIR) and hyperspectral image (HSI) cameras. A single-layer perceptron (SLP)-based classifier was used as the main instrument in the XAI study. To provide explainability of the SLP input, the direct HSI was replaced by images of six popular vegetation indices and three HSI channels (R630, G550, and B480; referred to as indices), along with the TIR image. Furthermore, in the explainability analysis, each of the 10 images was replaced by its 6 statistical features: min, max, mean, std, max–min, and the entropy. For the SLP output explainability, seven output neurons corresponding to the key states of the plants were chosen. The inner layer of the SLP was constructed using 15 neurons, including 10 corresponding to the indices and 5 reserved neurons. The classification possibilities of all 60 features and 10 indices of the SLP classifier were studied. Study result: Entropy is the earliest high-level stress feature for all indices; entropy and an entropy-like feature (max–min) paired with one of the other statistical features can provide, for most indices, 100% accuracy (or near 100%), serving as an integral part of XAI.

1. Introduction

Explainability of artificial intelligence (AI) results is increasingly considered a necessary property. An arsenal of universal methods for increasing the explainability and interpretability of neural network solutions has been developed. This process was stimulated by early articles exploring the explainability and interpretability of AI predictions, such as [1,2]. The most effective solutions to this problem were identified as analysis of variance or sensitivity analysis, which is well known in mathematics and used in traditional areas of science and technology [3]; Fisher's fundamental criterion for separability in high-dimensional space [4]; and linear discriminant analysis [5].
Subsequently, additional methods were proposed, which remain popular today, such as LIME [6], based on the idea of local linear separability, as well as learning important features through propagating activation differences (DeepLIFT) [7,8]. The most popular method, SHAP, generalized a number of previous methods, and the DeepSHAP method based on DeepLIFT [9] was also introduced. An approach [10] developing the idea of hierarchical interpretability based on a multilevel feature pyramid also proved to be methodically useful for AI applications. Selvaraju, Cogswell, et al. [11] also made a significant contribution to the development of a visual approach to the explainability of deep learning model results.
An important direction for XAI is the estimation of the effective local dimensionality of the feature space. Dimensionality estimation is a classical problem solved by applying the PCA method in a local area of the n-dimensional space under study. To date, methods have been proposed to estimate the global topology of the n-dimensional feature space based on graph models that are interesting for XAI, such as [12]. A library was made available, providing the Scikit-learn API to evaluate global and local intrinsic dimensionality, as well as a possibility of working with benchmark datasets published in the literature [13].
A series of works devoted to explainability in graph neural networks (GNNs) has appeared, as reviewed in [14]. According to [12], software can be used to extract real-life data for GNN training. GNNExplainer was introduced in 2019 [15] as the first general, model-agnostic approach to GNN-based models, although without a global understanding of predictions. In 2020, PGExplainer [16] was proposed, providing a global understanding of predictions made by GNNs. PGMExplainer was also introduced in 2020 [17]; it is based on perturbation of the original graph to eliminate unimportant variables from input–output data, employing explanation Bayesian networks as the last step. GraphMask was introduced in 2021 [18]; it is similar to PGExplainer and can provide a global understanding of a trained GNN model, in addition to providing relevant paths by using a different mask for each layer.
In the field of explainable AI (XAI), the review by Linardatos et al. [19] deserves special attention; the authors systematized the results of the development of XAI to date. We agree with the assertion made by Linardatos et al. [19] that aspects of explainable AI remain to be explored, with considerable potential to unlock in the coming years, which has motivated us to further pursue the development of XAI.
Of considerable interest in the field of XAI is the search for explainable features that are simple and fundamental enough to remain explainable for a wide class of objects and phenomena. Our interest in such explainable features was inspired by a recent article [20] on the zero-shot learning method based on a high-level feature vector. Such a vector ensures the minimization of the dataset required for training, as well as the transfer of training results from one dataset to another, in which some of the categories are missing (or not visible). As an example of high-level features, the article considers high-dimensional spectral curves of HSI pixels.
In the present study, we searched for high-level features for the detection of the stress state of wheat plants. With the detection of plant stress as the application task, we employed vegetation indices that are widely used in smart agriculture. These indices are calculated from the original data of multispectral images (MSI), and especially from hyperspectral images (HSI), owing to their high frequency resolution. The most popular vegetation index is NDVI (normalized difference vegetation index); many of its analogs, such as GNDVI, GCL, SIPI, and GI, have been used in previous research [21]. The use of TIR sensors is the method most commonly applied by biologists for the early detection of stress, i.e., early enough to eliminate stress without crop loss. TIR sensors are able to detect plant stress at an early stage based on a slight increase in leaf temperature (by 0.2 °C). TIR leaf images, like HSI channels, are grayscale images.
Artificial intelligence methods, mostly deep learning methods, are widely applied in smart farming [22,23,24]. However, the property of currently applied AI models that is most in demand is the explainability of decisions, which is the main property of explainable AI (XAI). In the field of XAI, approaches have been developed that turn the problem of data dimensionality into an exploitable feature [4], offering easy-to-train decision correctors that can be additionally trained during operation [7].
A successful attempt to create a simple, easily configurable, and efficient XAI network was described in [25]. An XAI-based classifier and regressor, which are simple and easily configurable as part of the user task, were built based on a single-layer perceptron (SLP). However, the decision was largely tied to a specific experiment involving plant drought in the presence of a reference (control group).
Dao et al. [26] conducted a study on early diagnosis of plant drought stress based on HSI data by means of classical ML, including a multilayer perceptron (MLP), under conditions considerably overlapping with those of our application task; therefore, we compare the results of the present study with those reported by Dao et al.

2. Materials and Methods

2.1. Materials

We conducted an experiment to monitor the drought stress of wheat plants for 25 days under biolab conditions, recording the state of the plants from a distance of 1 m every 2–3 days [25]. Plants were observed in 3 boxes of 30 pots with 15–20 plants in each pot; the 15 pots on the left side of each box were watered, and the 15 pots on the right side were not watered. The state of the plants was regularly recorded during the experiment at an angle of 90° to the surface using three cameras (sensors): a Specim IQ hyperspectral (HSI) camera (range: 400–1000 nm; spectral resolution: 7 nm; channels: 204; 512 × 512 pix), a Testo 885-2 thermal infrared (TIR) camera (320 × 240 pix), and a high-resolution RGB camera (5184 × 3456 pix). The total image volume was 72.2 GB, mainly comprising HSIs. TIR sensors were chosen to directly record the leaf temperature, an increase in which is the earliest feature of a stress condition. HSIs were used primarily as a source for multiple vegetation indices that control the presence and the state of the green mass.
The differences between non-irrigated and irrigated plants in temperature (according to TIR images) and water loss (%, via plant weighing) were recorded on the 1st, 3rd, 6th, 8th, 10th, 12th, 14th, 16th, 19th, 22nd, and 25th days of the experiment. The following key events and changes in the state of the plants were recorded and compared with those of the control plants: (1) an increase in the average temperature of the plants by 0.2 degrees after 5 days and (2) the beginning of water loss by the plants after 11 days (about 8% of the water volume). The former is the earliest evidence of drought stress, which occurs without water loss or visible changes in the green mass. Detection of plant stress before the onset of water loss is the criterion of “early” detection. After 18 days or somewhat later, we observed a depletion of the plants’ compensatory function, manifested as a break in the line of monotonic temperature increase.
At the end of the experiment, we compiled data that can be considered a time series, although we opted to consider the problem time-context-free.

2.2. The Use of the Entropy and Max–Min Features as Universal, Explainable, and High-Level Features

All HSI-based indices and TIR images were collected as grayscale images, which could be characterized at the preprocessing stage by their histogram with 4 standard statistical features {max, min, mean, std}, supplemented with max–min and entropy.
The idea of using entropy and max–min as universal high-level attributes is based on the fact that entropy is an objective and universal measure of changes in the internal state of complex natural objects and their sets. We considered the important or key states observed in the plants under abiotic drought stress during the experimental period. The following key states can be observed: (1) the initial, essentially homogeneous state of the plants (1st day); (2) if the plants have not yet fully formed by the 1st day and continue to actively grow and bush, a resulting slight increase in homogeneity, as in our case (3rd day); (3) an increase in state diversity due to uneven entry into a state of stress resulting from heterogeneous soil moisture (6th day); (4) a non-uniform entry into the state of real loss of moisture by the body of the plant and the beginning of drying (12th–19th days); (5) a predominance of withered plants and a further reduction in the diversity of states (25th day). An example of a parallel monotonic process over a certain key range is an increase in plant temperature or an increase in the red leaf color component.
Entropy, as an objective reflection of an object’s state, can be easily calculated for plants using the histogram of the image pixels belonging to a plant as the object of interest. The histogram width (max–min), together with the entropy, can also be used as an explainable feature of the plant state.
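To make this concrete, the following minimal sketch (ours, not taken from the authors’ codebase) computes all six statistical features, including entropy, from the masked pixel values of an index image; the bin count and the base-2 logarithm are our assumptions.

```python
import numpy as np

def histogram_features(pixels: np.ndarray, bins: int = 256) -> dict:
    """Six per-image statistical features used as classifier inputs.

    `pixels` is a 1-D array of grayscale values belonging to the
    plant mask (background already excluded).
    """
    hist, _ = np.histogram(pixels, bins=bins)
    p = hist / hist.sum()                     # empirical pixel-value distribution
    p = p[p > 0]                              # convention: 0 * log(0) = 0
    entropy = float(-np.sum(p * np.log2(p)))  # Shannon entropy, in bits
    return {
        "min": float(pixels.min()),
        "max": float(pixels.max()),
        "mean": float(pixels.mean()),
        "std": float(pixels.std()),
        "max-min": float(pixels.max() - pixels.min()),  # histogram width
        "entropy": entropy,
    }
```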
Graphs of the dependence of the entropy and max–min features on the key days (see Figure A1, Appendix A) indicate a close and, at the same time, explainable connection between their values and each key state of the plants for all 10 indices. This means that the “high-level” property of entropy and max–min is confirmed as well.

2.3. XAI-Based Classifier Description

In the construction of the XAI-based classifier and regressor, which are simple and easily configurable as part of the user task, we followed the example described in [27], implementing the idea of [28]; however, we used the ‘Backyard Dog’ function as the main function of the network, which we implemented via an SLP (Figure 1).
To determine N, first consider a separate thermal IR channel (TIR) and a series of indices that can be calculated from HSI channel images and are commonly used in smart agriculture: NDVI, GNDVI, GCL, SIPI, and GI [20]. In addition, consider the capabilities of the 3 visible HSI channels (R630, G550, and B480) as analogs of the red, green, and blue channels of a regular color image. The red, green, and blue channels are of interest for smart farming owing to their high resolution and low price and can be captured using an RGB camera. However, it is generally accepted that they cannot provide sufficient accuracy for the early diagnosis of plant stress. The NDblue index is specially constructed for plant mask building. Hereafter, all 10 objects are collectively referred to as indices. As a result, I (=10) indices were accepted for the study, and the number of neurons in the inner layer was chosen as N = I + A, where A is the number of additional neurons reserved to increase accuracy and decrease calculation time. As a result, the maximum number of inputs (M) and the maximum number of weights (Nw) are expressed as:
M = I × h = 10 × 6 = 60, Nw = M × N + N × K = 60 × 15 + 15 × 7 = 1005,(1)
where h = 6 is the number of statistical features per index and K = 7 is the number of output classes (key states).
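For illustration, a minimal PyTorch sketch of an SLP with these dimensions is given below. The ReLU activation and the omission of bias terms (which makes the trainable weight count match Nw = 1005) are our assumptions rather than details specified in the text; the Kaiming He initialization follows Section 2.4.

```python
import torch.nn as nn

M, N, K = 60, 15, 7  # inputs (10 indices x 6 features), inner neurons, key states

# SLP classifier: one inner layer of N neurons,
# giving M*N + N*K = 60*15 + 15*7 = 1005 trainable weights.
slp = nn.Sequential(
    nn.Linear(M, N, bias=False),
    nn.ReLU(),
    nn.Linear(N, K, bias=False),
)

# Starting weights are regenerated for each training trial (Kaiming He [29]).
for layer in slp:
    if isinstance(layer, nn.Linear):
        nn.init.kaiming_normal_(layer.weight)
```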
Our SLP classifier should provide early detection of plant stress in the absence of a clearly defined standard of a stress-free plant in the detected area.
An important condition for the successful application of computational experiments is the correct separation of plant pixels from the background under conditions of the changing state of the plant, soil, and other background objects. The plant mask can be built on both the traditionally used NDVI and other indices that are sensitive to chlorophyll. In connection with the study of the channels of a color image, it is possible to build a mask on the values of the visible range or even on the RGB values. Therefore, mNDblue was used as a base index, which is intended for high-resolution plant leaf images [20]:
mNDblue = −(ρ_λ − ρ_450)/(ρ_850 + ρ_450), λ ∈ {530, 570, 675, 730},(2)
In the present study, we used mNDblue with two modifications: (1) we used λ = 550 for compatibility with the G550 hyperspectral channel, and (2) we changed the normalization of Formula (2) so that a constant threshold value (Th) could be used for all plant states in the masking process. The result, NDblue, is expressed as:
NDblue = (G550 − B450)/(max(G550 − B450) − min(G550 − B450)),(3)
or
NDGB = (G − B)/(max(G − B) − min(G − B)),(4)
where G and B are RGB channels.
Formula (3) is for HSI, and Formula (4) is for the visible range. Computational experiments were carried out to select the plant mask formation index between the traditional NDVI and NDblue, as well as to select the threshold value. The results for one of the days are shown in Figure 2. Based on the experimental results, the NDblue index (3) was chosen, and a threshold of Th > 0.1 was set for the plant mask.
The NDblue variant with a threshold of 0.1 was chosen as the most suitable index, preserving the integrity of the plant without capturing background pixels or losing pixels inside the leaf when the state of the plant changes. This selection was verified on all key days. Figure 2 shows day 25, when the most visually noticeable changes occurred. The normalization adopted in Formula (3) ensures that the threshold is constant for all days.
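A minimal sketch of this mask construction following Formula (3) is shown below; the array names, the float conversion, and the boolean mask output are our assumptions.

```python
import numpy as np

def ndblue_mask(g550: np.ndarray, b450: np.ndarray, th: float = 0.1) -> np.ndarray:
    """Plant mask from the NDblue index (Formula (3)).

    g550, b450: grayscale HSI channel images as float arrays.
    The range normalization keeps the threshold Th constant across days.
    """
    diff = g550.astype(float) - b450.astype(float)
    ndblue = diff / (diff.max() - diff.min())
    return ndblue > th  # boolean plant mask: True = plant pixel
```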

2.4. Exploration Methods

Using the SLP classifier (Figure 1b), in the present study, we aimed to solve the following tasks: (1) determine which components of the input feature vector are the best for early plant stress detection and sufficient for SLP classifier training; (2) determine the role of the high-level features of entropy and max–min in early stress detection; (3) determine whether the construction of an XAI SLP classifier is adequate to solve the problem.
To achieve these goals, we studied early plant stress detection with respect to all 60 features in the following order: (1) for each feature separately; (2) for the feature pairs within each index; (3) for each index separately using all 6 features; (4) for combinations of indices, excluding TIR, in pairs, triplets, fours, and fives (the enumeration is sketched below). Each of the tasks was solved for two cases—a short monitoring range (12 days) and a long monitoring range (25 days)—to investigate the differences in stress detection in these two time ranges for smart agriculture applications and to explore their possible co-application.
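As an illustration, the combinations examined in steps (2) and (4) can be enumerated as follows; this sketch reflects our reading of the protocol, and the name lists are assumptions.

```python
from itertools import combinations

FEATURES = ["min", "max", "mean", "std", "max-min", "entropy"]
INDICES = ["TIR", "NDVI", "GNDVI", "GCL", "SIPI", "GI",
           "RED", "GREEN", "BLUE", "NDblue"]

# Step (2): all feature pairs within one index -> C(6, 2) = 15 runs per index
feature_pairs = list(combinations(FEATURES, 2))

# Step (4): index combinations excluding TIR, in groups of 2 to 5
hsi_indices = [i for i in INDICES if i != "TIR"]
index_combos = [c for r in range(2, 6) for c in combinations(hsi_indices, r)]
```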
We found that it was possible to reduce the share of data used to train the classifier to 10–20% of the plant mask area without losing classification quality.
The accuracy calculation was organized as an average accuracy calculation over 50 or 100 trials of the stress detection training procedure, generating starting weights for each trial according to the Kaiming He initialization [29].
Before calculating our 6 classification features from the histograms, we executed a denoising preprocessing procedure (sketched below). To this end, we excluded a few percentiles of pixel values from the top and bottom of the histogram. For the noisier TIR images, the exclusion of 5 percentiles from the top and 1 percentile from the bottom is sufficient, whereas for HSI-based indices, 1 percentile needs to be removed from both sides to eliminate the main part of the noise and increase the robustness of the features. A special XAI neural network tool was constructed to distinguish and study the 7 key stress states of plants, from no stress to deep stress.
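A sketch of this percentile-based denoising, under the assumption that it is applied directly to the masked pixel sample before the features are computed:

```python
import numpy as np

def denoise_pixels(pixels: np.ndarray, lower: float, upper: float) -> np.ndarray:
    """Drop pixel values below the `lower` and above the `100 - upper` percentiles.

    TIR images: lower=1, upper=5 (noisier at the top of the histogram);
    HSI-based indices: lower=1, upper=1.
    """
    lo, hi = np.percentile(pixels, [lower, 100.0 - upper])
    return pixels[(pixels >= lo) & (pixels <= hi)]
```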

3. Results

3.1. Significance of Each Feature from the Complete Feature Vector

The significance of each feature for each of the key days is shown in the form of confusion matrices in Figure 3 for the 12-day period and in Figure 4 for the period of days 1–25.
For the short period (12 days, Figure 3), TIR shows the best diagonality for two features (std and max–min), GCL shows the best diagonality for three features (max, min, and max–min), Green shows the best diagonality for two features (max and max–min), and GI shows the best diagonality for two features (min and entropy). A similar diagonal pattern persists for the period from 1 to 25 days. The green channel (550 nm) was included in Figure 3 and Figure 4 as the center of the visible range. However, the best indices, according to the sum of short- and long-period accuracy, are Blue and GI (Table 1). GCL and TIR are ranked third and fourth, respectively. Table 1 shows the average plant stress classification accuracy over 100 trials for each of the 6 features for all 10 indices. As a rule, the classification ability of a feature for the full period decreases relative to that for the short period. The maximum values for each of the indices are shown in bold for both periods.
For TIR, the maximum classification accuracy falls on the std feature in both the short and long periods. For NDVI, Blue (480 nm), and GCL, the maximum classification accuracy is associated with the max feature. For Red (630 nm) and GI, it is associated with the min feature. For the GNDVI and NDblue indices, it is associated with the max–min feature, and for the Green index, with the mean feature.

3.2. Significance of Feature Pairs within Each Index Separately and of the Entropy or Max–Min Presence in a Pair

Figure 5 shows the average accuracy of classification of the key stress states achieved for paired combinations of features within each index, indicating the significance of paired combinations for stress detection in each of the 12-day (above the gray diagonal) and 25-day (below the diagonal) periods. The most interesting results for 6 (TIR, GCL, Blue, GI, GNDVI, and Green) of the 10 indices are shown. For comparison, the gray diagonal separating the short and long periods shows the accuracy for the most significant single feature.
The best pairs (in the range of 0.95 to 1.0) within each index that include the entropy or max–min features are indicated by a green background in Figure 5 (dark green for the 25-day period and light green for the 12-day period). Similar pairs without the participation of the entropy or max–min features are indicated by yellow in the upper triangular part and orange in the lower triangular part.
The TIR index has the greatest number of such “green” pairs. The other indices also contain a sufficient number of such pairs for practical use.
SLP classifier training was executed each time using only one of the 10 indices and different pairs of the six features. Many paired combinations of features resulted in a classification accuracy equal to or very close to 1.00, with more such cases in the 12-day observational period. The highest average value for pairs within the NDVI is 0.96. Thus, using the 12-day observation period, at least one pair of features providing early detection of plant stress with an accuracy of 1.00 can be found for any index except NDVI. Using the 25-day observation period, an accuracy of 1.00 can be achieved with only 5 of the 10 indices (Figure 5).
Next, we investigated the proportion of entropy and max–min features in the pairs required to ensure maximum accuracy. Table 2 shows accuracy values for the 12- and 25-day periods sorted in descending order of their sums.
Table 2 also shows the training time, which characterizes the convergence rate for the training process. If the sums of accuracies are equal, then the table is sorted by the sum of training times. The last row was added to Table 2 specifically to show the best combination of features for NDVI (for comparison with the results reported in [26]). Training for all feature combinations listed in the table was completed in less than 1 s.
More than 60% of the pairs listed in Table 2 include entropy or max–min. Moreover, within a pair, the use of max–min is almost always preferable, replacing std.
The specific unique and high-level role of entropy is demonstrated in the last column of Figure A1 in Appendix A, which shows the entropy graphs for all 10 indices. The same behavior is observed for all indices, as demonstrated by the three start points of the graphs, which characterize the earliest features of plant stress in our experiment. This result supports the universality of entropy as a feature for any index or object, such as the gray-level co-occurrence matrix (GLCM) [30].

3.3. Significance of Excluding and Including All Six Features within Each Index and Using a Complete Feature Vector for All Indices Excluding or Including TIR

Figure 6 shows the confusion matrices for the classification of 5 key days (12-day range) and 7 key days (25-day range) using all 6 features within each of the 10 indices. Only four examples, for the TIR, GCL, Red, and GI indices, are shown. The matrices demonstrate a sufficient level of diagonality, with errors present but small.
The average accuracies for the classification of key stress states achieved after training using all 6 features within each of the 10 indices are shown separately in Table 3. The table includes the accuracies for the 12- and 25-day observation periods, as well as the training time in seconds for each case.
SLP classifier training using all six features instead of pairs resulted in decreased detection accuracy. In some cases, such as NDVI, the decrease is substantial: from 0.97 for pairs of features to 0.77 when using all six features.
Training the SLP classifier immediately on the full vector of features and using the index combinations made it possible to achieve the maximum classification accuracy for all key days for both the 12- and 25-day monitoring ranges (Table 4).
Here, we investigated the possibility of reducing the number of indices necessary for robust classification under TIR exclusion conditions. Table 4 shows that: (1) our SLP classifier can classify all seven stress states of plants with an accuracy of 1 by combining two, three, four, or five indices; (2) the combination of indices results in a shorter training time than the use of each index separately; (3) a shorter training time is required for the same combination of indices but for different day ranges when the accuracy is 1 than when a lower accuracy is achieved; and (4) no obvious increase in training time is associated with an increase in the number of indices in the combination.

4. Discussion

We compared the results of our experiments with those reported in [26]. The main difference observed was that the plants were in different stages of vegetation at the start of the experiment, with a lag in the development of the crown (in our case, approximately 3–4 days) but with the same start point in terms of real water losses. Table 5 shows a comparison of our experimental results with those reported in [26]; this comparison highlights two different approaches, classical ML and XAI, with feature selection, combination, and study using XAI methods.
The following questions were resolved in the present study:
(1)
An SLP classifier was built. The classifier structure was adjusted in terms of the number of neurons (N) used in the inner layer (according to the number of indices used), the length of the feature vector (M = m × N, where m = 1, 2, …, 6), and the number of detected states (key days; K).
(2)
The classification accuracy of key days was determined individually for each of the 60 possible features and for their paired combinations within an index. Combinations that provide an accuracy of 1 (or near 1), as well as the number of such combinations for each of the indices, were determined, and the leading indices were established.
(3)
We established that the involvement of the full feature vector does not necessarily result in the maximum accuracy, and a set of at least two features and a few neurons in the inner layer is required to provide a solution to the problem.
(4)
We recommend using entropy as the main feature, along with the max–min feature, which determines the number of states for which the entropy is calculated.
(5)
It is important to investigate not only the contribution (sensitivity) of individual features but also that of combinations of their minimal numbers inside indices. For example, NDVI achieved the worst performance in the index ranking when using all six features (Acc. = 0.77 for 25 days; see Table 3), but the best pair of NDVI features provided an Acc. = 0.97 for the 25-day range (see Table 2).
(6)
The use of statistical features of the index image instead of the image itself as the SLP input, together with the use of formalized key states as the output, ensures the explainability of the SLP classifier as a whole and its high accuracy, making it a valuable XAI research tool.
(7)
Increasing the number of indices increases the robustness of the solution. Combinations of up to five indices may be required to ensure the robustness of solutions for smart farming applications.
The exploration software was implemented in Python 3.8.6. For the preprocessing and visualization of TIR images and HSIs, the PySptools, Scikit-image, NumPy, Pandas, Matplotlib, and Cv2 libraries were used. The PyTorch and Scikit-learn libraries were used for neural network model creation, training, and quality estimation. An Intel Core i3-8130U computer (2.2 GHz, 4 cores, 4 GB RAM) was used as the hardware.

5. Conclusions

The results of the study are as follows:
(1)
Entropy can be used as a universal high-level explainable feature for classification, in particular for early detection of plant stress. The histogram of the single-channel image pixels belonging to any object of interest is the source for the entropy calculation.
(2)
The histogram width, determined as max–min, can also be used as a high-level explainable feature.
(3)
The entropy and max–min features, in combination with the other histogram statistical features, should be used as the priority high-level input parameters of XAI neural networks (excluding the pair ‘max–min, std’, owing to their high correlation).
XAI networks using feature pairs involving high-level features such as entropy and max–min can be used for the following applications:
To replace the use of a complete set of statistical features of HSI-based indices;
To eliminate the need for thermal IR sensors for the early detection of plant stress;
To significantly reduce the requirements for sensors used in smart farming;
To eliminate the need for large datasets, energy, computational, and time resources for neural network training; and
For one-trial correction of AI systems [27].
In this work, the SLP classifier training time was reduced to 0.13–1.0 s.
Additional studies of our XAI approach, including studies on the influence of noise and the robustness of plant stress detection, are planned in the future.

Author Contributions

Conceptualization and methodology, V.T.; software and validation, I.M. and M.L.; formal analysis, V.T.; investigation, E.V. and A.G.; data curation, M.L.; writing—original draft preparation, M.L.; writing—review and editing, V.T.; visualization, M.L. and I.M.; funding acquisition, V.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Higher Education of the Russian Federation (agreement number 075-15-2020-808).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used herein were from a 25-day experiment on wheat drought, including records of the state of plants in the control and experimental groups collected every 2–3 days using three types of sensors (HSI, thermal IR, and RGB). The data occupy 72.2 GB and can be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Plots of feature values on key days for all indices.

References

1. Štrumbelj, E.; Kononenko, I. Explaining prediction models and individual predictions with feature contributions. Knowl. Inf. Syst. 2013, 41, 647–665.
2. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. In Proceedings of the International Conference on Learning Representations, Banff, AB, Canada, 14–16 April 2014.
3. Wei, P.; Lu, Z.; Song, J. Variable importance analysis: A comprehensive review. Reliab. Eng. Syst. Saf. 2015, 142, 399–432.
4. Gorban, A.N.; Makarov, V.A.; Tyukin, I.Y. High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality. Entropy 2020, 22, 82.
5. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer Series in Statistics: New York, NY, USA, 2001; 764p.
6. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
7. Shrikumar, A.; Greenside, P.; Kundaje, A. Learning Important Features Through Propagating Activation Differences. arXiv 2017, arXiv:1704.02685. Available online: https://arxiv.org/abs/1704.02685 (accessed on 10 July 2022).
8. Shrikumar, A.; Greenside, P.; Shcherbina, A.; Kundaje, A. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. arXiv 2016, arXiv:1605.01713. Available online: https://arxiv.org/abs/1605.01713 (accessed on 10 July 2022).
9. Lundberg, S.M.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the 31st Conference on Neural Information Processing Systems (NeurIPS 2017), Long Beach, CA, USA, 4–9 December 2017. Available online: https://proceedings.neurips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf (accessed on 10 July 2022).
10. Zhao, Q.; Sheng, T.; Wang, Y.; Tang, Z.; Chen, Y.; Cai, L.; Ling, H. M2Det: A Single-Shot Object Detector Based on Multi-Level Feature Pyramid Network. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 9259–9266.
11. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2020, 128, 336–359.
12. Albergante, L.; Mirkes, E.; Bac, J.; Chen, H.; Martin, A.; Faure, L.; Barillot, E.; Pinello, L.; Gorban, A.; Zinovyev, A. Robust and scalable learning of complex intrinsic dataset geometry via ElPiGraph. Entropy 2020, 22, 296.
13. Bac, J.; Mirkes, E.M.; Gorban, A.N.; Tyukin, I.; Zinovyev, A. Scikit-Dimension: A Python Package for Intrinsic Dimension Estimation. Entropy 2021, 23, 1368.
14. Li, P.; Yang, Y.; Pagnucco, M.; Song, Y. Explainability in Graph Neural Networks: An Experimental Survey. arXiv 2022, arXiv:2203.09258.
15. Ying, R.; Bourgeois, D.; You, J.; Zitnik, M.; Leskovec, J. GNNExplainer: Generating explanations for graph neural networks. NeurIPS 2019, 1, 1–13.
16. Luo, D.; Cheng, W.; Xu, D.; Yu, W.; Zong, B.; Chen, H.; Zhang, X. Parameterized explainer for graph neural network. NeurIPS 2020, 33, 19620–19631.
17. Vu, M.; Thai, M.T. PGM-Explainer: Probabilistic graphical model explanations for graph neural networks. In Proceedings of the NeurIPS 2020, Vancouver, BC, Canada, 6 December 2020. Available online: https://arxiv.org/abs/2010.05788 (accessed on 10 July 2022).
18. Schlichtkrull, M.S.; De Cao, N.; Titov, I. Interpreting graph neural networks for NLP with differentiable edge masking. In Proceedings of the ICLR, Virtual Event, Austria, 3–7 May 2021.
19. Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy 2021, 23, 18.
20. Pan, E.; Ma, Y.; Fan, F.; Mei, X.; Huang, J. Hyperspectral Image Classification across Different Datasets: A Generalization to Unseen Categories. Remote Sens. 2021, 13, 1672.
21. Dausset, J. Vegetation Indices for Chlorophyll (CI–MTCI–NDRE–ND705–ND550–mNDblue). Hiphen Blog. Available online: https://www.hiphen-plant.com/vegetation-indices-chlorophyll/3612/ (accessed on 10 July 2022).
22. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2019, 2, 1–12.
23. Talaviya, T.; Shah, D.; Patel, N.; Yagnik, H.; Shah, M. Implementation of artificial intelligence in agriculture for optimisation of irrigation and application of pesticides and herbicides. Artif. Intell. Agric. 2020, 4, 58–73.
24. Pathan, M.; Patel, N.; Yagnik, H.; Shah, M. Artificial cognition for applications in smart agriculture: A comprehensive review. Artif. Intell. Agric. 2020, 4, 81–95.
25. Maximova, I.; Vasiliev, E.; Getmanskaya, A.; Kior, D.; Sukhov, V.; Vodeneev, V.; Turlapov, V. Study of XAI-capabilities for early diagnosis of plant drought. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021.
26. Dao, P.D.; He, Y.; Proctor, C. Plant drought impact detection using ultra-high spatial resolution hyperspectral images and machine learning. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102364.
27. Gorban, A.N.; Burton, R.; Romanenko, I.; Tyukin, I.Y. One-trial correction of legacy AI systems and stochastic separation theorems. Inf. Sci. 2019, 484, 237–254.
28. Gorban, A.N.; Mirkes, E.M.; Tyukin, I.Y. How Deep Should be the Depth of Convolutional Neural Networks: A Backyard Dog Case Study. Cogn. Comput. 2020, 12, 388–397.
29. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the International Conference on Computer Vision (ICCV), Las Condes, Chile, 11–18 December 2015.
30. Haralick, R.M. Pattern recognition with measurement space and spatial clustering for multiple images. Proc. IEEE 1969, 57, 654–665.
Figure 1. Comparison of the designs of two neural networks: (a) the ‘backyard dog’ network from a given legacy network (courtesy of the authors [28] Figure 5); (b) the structure of our SLP classifier as an example of the ‘backyard dog’ net implementation for independent use.
Figure 2. Upper row—day 25 images defining the mask, from left to right: 1st—normalized NDblue index in pseudocolor scale for a threshold of 0.1; 2nd and 3rd—normalized NDVI index in pseudocolor for thresholds of 0.15 and 0.25, respectively. Bottom row—images generated by applying the upper row masks to three HSI channels (R630, G550, and B480).
Figure 3. SLP classifier training with one scalar feature. Period: 12 days. D-00—key days with their numbers: 01, 03, 06, 08, 12. Confusion matrices for each scalar feature. The following features are shown in the columns: max, min, std, max–min, and entropy; the most interesting 4 of 10 indices (TIR, GCL, Green, and GI) are shown in the rows.
Figure 4. SLP classifier training with one scalar feature. Period: 25 days. D-00—key days with their numbers: 01, 03, 06, 08, 12, 19, 25. Confusion matrices for each scalar feature. The following features are shown in the columns: max, min, std, max–min, and entropy; the most interesting 4 of 10 indices (TIR, GCL, Green, and GI) are shown in the rows.
Figure 5. Average accuracy values for the classification of key stress states achieved after training using pairs of features. The accuracy values in the triangle above the gray diagonal correspond to the 12-day period, whereas those under the gray diagonal correspond to the 25-day period.
Figure 6. Confusion matrices for the key days after training using all six features per index. Results are shown for four indices (TIR, GCL, Red, and GI). Confusion matrix: true classes are shown on the y axis, and predicted classes are shown on the x axis. D-00 are the key days. The first row corresponds to the 12-day observation period, and the second row corresponds to the 25-day observation period.
Table 1. Significance of each feature for SLP classification in terms of accuracy values.
| Index/Feature | Accuracy, 12 Days | Accuracy, 25 Days | Index/Feature | Accuracy, 12 Days | Accuracy, 25 Days |
| TIR/max | 0.70 | 0.67 | RED/max | 0.70 | 0.79 |
| TIR/min | 0.76 | 0.70 | RED/min | 0.87 | 0.82 |
| TIR/mean | 0.76 | 0.67 | RED/mean | 0.71 | 0.71 |
| TIR/std | 0.94 | 0.84 | RED/std | 0.60 | 0.54 |
| TIR/max–min | 0.88 | 0.80 | RED/max–min | 0.63 | 0.71 |
| TIR/entropy | 0.69 | 0.49 | RED/entropy | 0.76 | 0.59 |
| NDVI/max | 0.79 | 0.71 | GCL/max | 0.93 | 0.86 |
| NDVI/min | 0.61 | 0.50 | GCL/min | 0.97 | 0.81 |
| NDVI/mean | 0.80 | 0.68 | GCL/mean | 0.84 | 0.74 |
| NDVI/std | 0.69 | 0.65 | GCL/std | 0.52 | 0.52 |
| NDVI/max–min | 0.55 | 0.47 | GCL/max–min | 0.89 | 0.82 |
| NDVI/entropy | 0.55 | 0.57 | GCL/entropy | 0.61 | 0.56 |
| GNDVI/max | 0.81 | 0.72 | SIPI/max | 0.80 | 0.74 |
| GNDVI/min | 0.91 | 0.72 | SIPI/min | 0.83 | 0.84 |
| GNDVI/mean | 0.82 | 0.70 | SIPI/mean | 0.70 | 0.66 |
| GNDVI/std | 0.44 | 0.37 | SIPI/std | 0.67 | 0.66 |
| GNDVI/max–min | 0.95 | 0.76 | SIPI/max–min | 0.80 | 0.72 |
| GNDVI/entropy | 0.69 | 0.55 | SIPI/entropy | 0.61 | 0.55 |
| BLUE/max | 0.92 | 0.91 | NDblue/max | 0.91 | 0.71 |
| BLUE/min | 0.62 | 0.64 | NDblue/min | 0.58 | 0.51 |
| BLUE/mean | 0.79 | 0.73 | NDblue/mean | 0.60 | 0.59 |
| BLUE/std | 0.82 | 0.70 | NDblue/std | 0.80 | 0.60 |
| BLUE/max–min | 0.80 | 0.79 | NDblue/max–min | 0.93 | 0.74 |
| BLUE/entropy | 0.78 | 0.60 | NDblue/entropy | 0.58 | 0.42 |
| GREEN/max | 0.87 | 0.73 | GI/max | 0.70 | 0.67 |
| GREEN/min | 0.75 | 0.69 | GI/min | 0.90 | 0.90 |
| GREEN/mean | 0.86 | 0.80 | GI/mean | 0.81 | 0.79 |
| GREEN/std | 0.85 | 0.75 | GI/std | 0.76 | 0.66 |
| GREEN/max–min | 0.85 | 0.76 | GI/max–min | 0.80 | 0.61 |
| GREEN/entropy | 0.36 | 0.38 | GI/entropy | 0.88 | 0.79 |
Table 2. Average accuracy values after training using feature pairs for 12- and 25-day periods.
| Combination of Features | Accuracy, 12 Days | Training Time (12 Days), s | Accuracy, 25 Days | Training Time (25 Days), s |
| GI/max, min | 1 | 0.19 | 1 | 0.31 |
| TIR/max–min, entropy | 1 | 0.20 | 1 | 0.34 |
| GCL/min, max–min | 1 | 0.23 | 1 | 0.32 |
| GI/min, max–min | 1 | 0.21 | 1 | 0.35 |
| GI/max, max–min | 1 | 0.21 | 1 | 0.36 |
| GCL/max, min | 1 | 0.29 | 1 | 0.32 |
| TIR/std, max–min | 1 | 0.19 | 1 | 0.46 |
| GI/mean, max–min | 1 | 0.23 | 1 | 0.43 |
| GCL/max, max–min | 1 | 0.31 | 1 | 0.38 |
| GCL/mean, max–min | 1 | 0.37 | 1 | 0.44 |
| TIR/max, min | 1 | 0.41 | 1 | 0.43 |
| TIR/max, max–min | 1 | 0.43 | 1 | 0.43 |
| TIR/min, mean | 1 | 0.41 | 1 | 0.48 |
| TIR/min, max–min | 1 | 0.40 | 1 | 0.50 |
| GNDVI/max, min | 1 | 0.42 | 1 | 0.57 |
| GNDVI/max, max–min | 1 | 0.42 | 1 | 0.58 |
| GNDVI/min, max–min | 1 | 0.41 | 1 | 0.65 |
| TIR/mean, std | 1 | 0.44 | 1 | 0.74 |
| TIR/max, std | 1 | 0.59 | 1 | 0.62 |
| GREEN/min, std | 1 | 0.55 | 1 | 0.79 |
| TIR/std, entropy | 1 | 0.38 | 0.99 | 0.60 |
| NDVI/max, mean | 0.96 | 0.80 | 0.97 | 0.87 |
Table 3. Average classification accuracy after training using each index separately with six features and the training time for each case.
| Index | Accuracy, 12 Days | Training Time (12 Days), s | Accuracy, 25 Days | Training Time (25 Days), s |
| TIR | 1 | 0.39 | 1 | 0.40 |
| GCL | 0.99 | 0.21 | 0.98 | 0.34 |
| RED | 0.98 | 0.63 | 0.99 | 0.81 |
| GI | 0.99 | 0.23 | 0.97 | 0.33 |
| BLUE | 0.97 | 0.66 | 0.95 | 0.67 |
| GNDVI | 0.96 | 0.35 | 0.93 | 0.54 |
| NDblue | 0.91 | 0.86 | 0.87 | 0.91 |
| GREEN | 0.82 | 0.74 | 0.95 | 0.89 |
| SIPI | 0.85 | 0.58 | 0.87 | 0.87 |
| NDVI | 0.79 | 0.49 | 0.77 | 0.58 |
Table 4. The average accuracy after training via the index combinations and the training time.
| Combination of Indices | Accuracy (12 Days) | Training Time (12 Days), s | Accuracy (25 Days) | Training Time (25 Days), s |
| GCL, GI | 1 | 0.15 | 1 | 0.17 |
| BLUE, RED | 1 | 0.41 | 1 | 0.49 |
| GNDVI, GI | 1 | 0.13 | 0.99 | 0.30 |
| GNDVI, RED | 1 | 0.34 | 0.99 | 0.53 |
| GNDVI, NDblue | 0.97 | 0.55 | 1 | 0.54 |
| GREEN, GCL | 0.93 | 0.49 | 1 | 0.42 |
| GNDVI, GREEN | 0.90 | 0.67 | 1 | 0.49 |
| GNDVI, RED, GI | 1 | 0.15 | 1 | 0.19 |
| GNDVI, BLUE, GI | 1 | 0.15 | 1 | 0.21 |
| GNDVI, SIPI, GI | 1 | 0.15 | 1 | 0.25 |
| GNDVI, GREEN, GI | 1 | 0.27 | 1 | 0.38 |
| GREEN, RED, GI | 1 | 0.48 | 1 | 0.45 |
| GNDVI, NDblue, GI | 0.99 | 0.53 | 1 | 0.39 |
| GNDVI, BLUE, NDblue | 0.99 | 0.60 | 1 | 0.51 |
| GNDVI, GREEN, NDblue | 0.97 | 0.46 | 1 | 0.38 |
| GNDVI, GREEN, RED, SIPI | 0.99 | 0.46 | 1 | 0.55 |
| GREEN, RED, SIPI, NDblue | 1 | 0.52 | 0.99 | 0.67 |
| GREEN, RED, GCL, NDblue | 0.99 | 0.37 | 0.99 | 0.30 |
Table 5. The results of two different approaches, classical ML and XAI, for comparison.
| Results of [26] | Our Results |
| A total of three of nine index images (CIRed-edge, mSR705, and SR [21,26]) indicated significant differences after 3 days of water treatment. Most of the indices were sensitive to drought-induced change after 6 days of the water treatment, with the exception of NDVI, ARI, and CCRI. | We used six statistical features instead of the image for each of 10 indices. We considered individual stress detection possibilities for each of 60 features and their pairs for two time ranges (12 and 25 days). All indices were able to detect plant stress states in both intervals with an accuracy of 1 or near 1 (0.96). Entropy detected the earliest changes for all used indices, including the NDVI. |
| Indices cannot be used individually for quality diagnostics on all days. The best result (although not ideal) was achieved by a mixture of indices. | Indices can be employed for quality diagnostics on all days, using some feature pairs. The application of all six features is less productive than the use of pairs. Using a mixture of 2–5 indices guarantees an accuracy of 1. |
| A near-ideal result was achieved by MLP with the use of average HSI signature curves and their derivatives, designated as DNN-Full and DNN-Deriv(atives) at the input. | SLP (a practical case of MLP) is sufficient for plant stress classification with an accuracy of 1. The employed properties of the HSI signature derivatives are practically equal to the properties of NDVI. |
| Owing to the use of hyperspectra as features, it was necessary to train the classifier for each plant type and possible irrigation condition. | Using the entropy and max–min features in a pair with another suitable statistical feature, we obtained the simplest and most robust XAI classifier, independent of plant type, irrigation conditions, temperature, and other external conditions. |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
