Article

RiceDRA-Net: Precise Identification of Rice Leaf Diseases with Complex Backgrounds Using a Res-Attention Mechanism

Jialiang Peng, Yi Wang, Ping Jiang, Ruofan Zhang and Hailin Chen
1 College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
2 College of Mechanical and Electrical Engineering, Hunan Agricultural University, Changsha 410128, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 4928; https://doi.org/10.3390/app13084928
Submission received: 25 March 2023 / Revised: 8 April 2023 / Accepted: 12 April 2023 / Published: 14 April 2023

Abstract

In this study, computer vision applicable to traditional agriculture was used to achieve accurate identification of rice leaf diseases with complex backgrounds. The researchers developed the RiceDRA-Net deep residual network model and used it to identify four different rice leaf diseases. The rice leaf disease test set with complex backgrounds was named the CBG-Dataset, and a new single-background rice leaf disease test set, the SBG-Dataset, was constructed from the original dataset. The Res-Attention module uses 3 × 3 convolutional kernels and denser connections than other attention mechanisms in order to reduce information loss. The experimental results showed that RiceDRA-Net achieved a recognition accuracy of 99.71% on the SBG-Dataset test set and 97.86% on the CBG-Dataset test set. Compared with its accuracy on the SBG-Dataset, the test accuracy of RiceDRA-Net on the CBG-Dataset decreased by only 1.85%, the smallest drop among all models compared in the experiments. This illustrates that RiceDRA-Net is able to accurately recognize rice leaf diseases with complex backgrounds. RiceDRA-Net was very effective for some categories, even reaching 100% precision, indicating that the proposed model is accurate and efficient in identifying rice field diseases. The evaluation results also showed that RiceDRA-Net achieved high recall, F1 scores, and clean confusion matrices in both settings, demonstrating its strong robustness and stability.

1. Introduction

Rice is one of the major cereal crops cultivated in China and plays a significant role in its agricultural industry [1]. However, rice diseases can have a significant impact on grain production. Traditional methods of identifying these diseases require trained personnel to observe plants in the field, which is time-consuming and laborious. Often, by the time a disease is detected, it has progressed to a severe level, resulting in a loss of yield, time, and money [2].
To address this issue, we proposed a deep learning model with a deep residual architecture that could identify rice leaf diseases with a complex context in rice fields. Our proposed model, the Rice Dense Residual Attention Net (RiceDRA-Net), has a denser residual structure and integrates a new attention mechanism, the Residual Attention Mechanism (Res-Attention), to improve disease recognition. With this model, we aimed to achieve a high rate of rice disease recognition, with the Res-Attention mechanism accurately identifying the disease locations in rice leaves.
Studies have shown that traditional machine learning methods can achieve high accuracy in plant disease identification tasks. For example, Jiang F et al. [3] conducted a study in which a deep-learning- and SVM-based rice disease identification model reached an average correct recognition rate of 96.8%, a higher accuracy than that of a traditional back-propagation neural network model. In another study, Govardhan M et al. [4] used random forest to diagnose tomato plant diseases and developed a disease identification system with an overall accuracy of 95%. Ramesh S et al. [5] trained a random forest on datasets of diseased and healthy leaves to classify diseased and healthy images. Ahmed K et al. [6] used machine learning techniques to detect rice leaf diseases and achieved an accuracy of more than 97% on the test dataset. Sethy P K et al. [7] used a support vector machine approach for rice disease identification and obtained good results. Although these machine-learning-based models showed good results, they still face some challenges in practical applications.
Several previous studies have investigated the use of deep learning for identifying plant diseases. For instance, Mohanty S P et al. [8] proposed using deep learning for image-based plant disease detection. Edna Chebet Too et al. [9] conducted a comparative study on fine-tuning deep learning models for plant disease recognition, adapting and comparing different deep learning models for plant diseases. Sk Mahmudul Hassan et al. [10] proposed a novel deep learning model based on inception layers and residual connections, in which depth-separable convolution was used to reduce the number of parameters. Ahila Priyadharshini R et al. [11] proposed a deep convolutional neural network (CNN) for identifying maize leaf diseases, which achieved an accuracy of 97.89%. Similarly, Hassan S M et al. [12] used CNN and transfer learning methods to identify plant leaf diseases and achieved better recognition results in their experiments. In Shin J et al.'s study [13], a deep learning method was used for strawberry leaf powdery mildew detection using RGB-based images, resulting in 98% classification accuracy. Zhong Y et al. [14] conducted a study on deep learning in apple leaf disease recognition and achieved a recognition rate of 96% on a database of collected apple leaf disease images. Junde Chen et al. [15] introduced transfer learning to plant disease recognition, using networks pre-trained on large labeled datasets, such as ImageNet, to initialize weights rather than randomly initializing weights and training from scratch. Hareem Kibriya et al. [16] explored the identification and classification of plant diseases in leaf images using deep learning (DL) and machine learning (ML) algorithms. Atila Ü et al. [17] studied plant leaf disease classification using the EfficientNet deep learning model. Mishra A M et al. [18] achieved good results using a deep convolutional neural network to estimate weed density in a soybean crop in smart agriculture. Kaur P et al. [19] used a hybrid convolutional neural network to recognize leaf diseases by applying feature reduction. However, most of these studies focused solely on identifying the presence of diseases without accurately locating the diseases in plants. The proposed Res-Attention mechanism can better solve this problem and improve the accuracy of disease identification. Additionally, some studies have investigated the use of attention mechanisms for object recognition in deep learning. For example, Hu et al. [20] proposed the squeeze-and-excitation (SE) block, which uses attention mechanisms to enhance feature maps. Similarly, Bhujel A et al. [21] proposed a lightweight attention-based convolutional neural network for tomato leaf disease classification to improve the accuracy of image classification. In Zhao Y et al.'s study [22], an attention mechanism was embedded into a residual network for plant disease severity detection. Inspired by these studies, we integrated the Res-Attention mechanism into RiceDRA-Net, thus improving the model's ability to identify rice leaf diseases.
Several studies have investigated the use of deep learning for identifying rice diseases specifically. For example, Ghosal S et al. [23] proposed using a CNN and transfer learning to classify rice leaf diseases. They created a small dataset of rice diseases and used transfer learning to develop a deep learning model that achieved an accuracy of 92.46%. Similarly, Swathika R et al. [24] used a convolutional neural network to identify rice diseases, training it on 3500 images of healthy and diseased rice leaves and achieving an accuracy of nearly 70%. Rahman C R et al. [25] proposed a two-stage small CNN architecture for rice pest and disease identification and detection. Zhou G et al. [26] proposed a fast disease detection method for rice based on the fusion of FCM-KM and Faster R-CNN. Su N T et al. [27] used deep learning techniques targeting mobile devices for rice leaf disease classification and achieved an accuracy of 81.87% on the training data and 81.25% on the validation data. Archana K S et al. [28] proposed a new method to improve the computational and classification performance of rice disease identification. Patil R R et al. [29] proposed Rice-Fusion, a multimodal data fusion framework for rice disease diagnosis. However, these studies all used relatively simple CNN architectures and did not explore the use of attention mechanisms. Therefore, using RiceDRA-Net with the Res-Attention mechanism can be highly effective in improving the recognition of rice leaf diseases.
None of the models for recognizing rice leaf diseases discussed above have employed attention mechanisms to enhance accuracy. Furthermore, most have focused solely on identifying the presence of disease without precisely locating it in the plant. Moreover, the models proposed previously are typically trained in a single context and do not guarantee reliable recognition in more complex contexts. To address these shortcomings, we have developed a novel approach that offers three key contributions:
  • The novel RiceDRA-Net with a denser residual structure and a Res-Attention mechanism was proposed, which effectively improves the accuracy, robustness, and disease localization capabilities of the model for identifying rice leaf diseases.
  • A new single-background rice leaf disease dataset, SBG-Dataset, was constructed.
  • RiceDRA-Net has better recognition capabilities in rice leaf disease identification with a complex background compared with other classical models.
The rest of the paper is organized as follows: Section 2 presents the details of our dataset and network architecture. In Section 3, we present all of the experimental results and discuss their analysis. Finally, we conclude the paper in Section 4.

2. Materials and Methods

2.1. Dataset

The dataset used in this study was sourced from a publicly available rice disease dataset on the internet. The dataset covers four different rice diseases, namely bacterial blight, rice blast, brown spot, and rice tungro virus disease, and was selected because its images have complex backgrounds. It contains a total of 5932 rice disease images, comprising 4153 training samples and 1779 test samples. The distribution of the number of disease images is shown in Table 1. We gathered the 1779 test images, which contain rice leaf disease images with complex backgrounds, into a test dataset named the Complex Background test Dataset (CBG-Dataset). The images in the CBG-Dataset are shown in Figure 1. In this study, we used image processing techniques to investigate the relationship between the recognition of rice leaf diseases by different models and the complexity of their backgrounds. We removed the complex background from the CBG-Dataset to obtain a rice leaf disease test dataset with a single background, which we named the Single Background test Dataset (SBG-Dataset). The pictures of rice leaf diseases in the SBG-Dataset are shown in Figure 2. The number of disease pictures is therefore the same in the CBG-Dataset and SBG-Dataset; they differ only in having a complex background and a single background, respectively.

2.2. Image Processing

Since the images in the CBG-Dataset came from a public dataset, they were of varying sizes, including 256 × 256 and 300 × 300 pixels. To ensure consistent image sizes for input into the network, we resized all of the images to 224 × 224 pixels. Additionally, we programmatically applied data augmentation techniques, such as random flipping and rotation at random angles, to some samples [30] to increase the amount of data in both datasets equally.
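To make the pipeline concrete, a minimal preprocessing and augmentation sketch using torchvision is given below (the paper states that Python and PyTorch were used; the flip probability and rotation range are illustrative assumptions, not the authors' exact settings):

```python
from torchvision import transforms

# Resize everything to the network's 224 x 224 input and apply the described
# augmentations; flip probability and rotation range are illustrative values.
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),            # unify 256 x 256 / 300 x 300 inputs
    transforms.RandomHorizontalFlip(p=0.5),   # random flipping
    transforms.RandomRotation(degrees=30),    # rotation at a random angle
    transforms.ToTensor(),
])
```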

2.3. Deep Learning Networks

2.3.1. Deep Residual Network

The emergence of deep residual networks has greatly improved the representational and learning abilities of deep learning models, and they have become a popular research direction in the field of image classification [31]. While performing the feature extraction of an ordinary deep convolutional neural network, a deep residual network is able to use its residual structure to mitigate the information loss incurred during convolution. This greatly improves the accuracy of rice disease recognition.
The DenseNet network model, a deep residual model proposed at CVPR in 2017 by Huang G et al. [32], uses dense connectivity, in which all layers can access the feature maps of their preceding layers, thus encouraging feature reuse. As a direct result, the model is more compact and less prone to overfitting. In addition, each individual layer receives direct supervision from the loss function via a shortcut path, which provides implicit deep supervision [33].

2.3.2. Dense Connection

In order to transmit the maximum information flow during image recognition, DenseNet uses a different connection pattern from earlier networks: dense connectivity. This means that each layer in DenseNet is connected to all of the previous layers, ensuring that maximum information flow is preserved as signals pass through the network; each layer receives additional input from all preceding layers and passes its own feature maps on to all subsequent layers:
$$x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$$

where $x_l$ represents the feature output of layer $l$; $x_0$ through $x_{l-1}$ are the outputs of all layers before layer $l$, so that layer $l$ receives input from all previous layers; $[x_0, x_1, \ldots, x_{l-1}]$ represents the concatenation of the feature maps output by all layers preceding layer $l$; and $H_l$ represents the composite function that implements the joint operation on the concatenated features, comprising a batch normalization (BN) layer, a ReLU activation layer, and a (3 × 3) convolutional layer.
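As an illustration, the composite function $H_l$ and the concatenation in the equation above could be sketched in PyTorch as follows (a minimal sketch; growth_rate is the standard DenseNet hyperparameter, not a value taken from the paper):

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One composite function H_l (BN -> ReLU -> 3x3 Conv) applied to the
    concatenation of all preceding feature maps; `growth_rate` is the standard
    DenseNet hyperparameter for the number of channels each layer adds."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.h = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate,
                      kernel_size=3, padding=1, bias=False),
        )

    def forward(self, features):             # features: list [x_0, ..., x_{l-1}]
        return self.h(torch.cat(features, dim=1))   # x_l = H_l([x_0, ..., x_{l-1}])
```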
The DenseNet structure is shown in Figure 3. The network structure of DenseNet-121 is shown in Table 2.

2.3.3. Transition

The transition module is introduced in DenseNet to perform down-sampling. It mainly performs two operations: a convolution that compresses the model, reducing the number of channels and the number of parameters passed to the next dense module, followed by a (2 × 2) average pooling layer.
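A transition layer of this kind might look as follows in PyTorch (a sketch following the standard DenseNet design; the 1 × 1 kernel for the compression convolution is an assumption based on that design):

```python
import torch.nn as nn

class Transition(nn.Module):
    """DenseNet-style transition: a 1x1 convolution compresses the channels and
    a 2x2 average pooling halves the spatial resolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        return self.block(x)
```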

2.3.4. Attentional Mechanisms

When we process an image, we want the convolutional neural network to pay attention to the areas of the image that positively affect the predicted outcome, rather than attending to everything equally. An attention mechanism allows the convolutional neural network to adaptively adjust what it pays attention to. Attention mechanisms have been used in computer vision and natural language processing, and have been widely applied in sequential models based on recurrent neural networks and long short-term memory (LSTM) [34]. Attention mechanisms are generally divided into spatial attention mechanisms and channel attention mechanisms; this paper used the CBAM module, which combines both. The CBAM module is a simple and effective attention module for feedforward convolutional neural networks [35], and the structure of the model is shown in Figure 4.
Firstly, an intermediate feature map of the process is provided; then, different attention weights are assigned sequentially along two different dimensions of the spatial attention module and the channel attention module. Then, the original feature map is multiplied by the inferred attention weights to obtain the adaptive adjustment. The two sub-modules of the CBAM module, the spatial attention module and the channel attention module are shown in Figure 5.

The Channel Attention Mechanism Module

The channel attention mechanism first takes the input feature map $F \in \mathbb{R}^{C \times H \times W}$ and passes it through global max pooling and global average pooling to obtain two $C \times 1 \times 1$ feature descriptors. The resulting descriptors then pass through a shared network consisting of a two-layer multilayer perceptron (MLP), in which the first hidden layer has $C/r$ neurons, where $r$ is the reduction ratio, and the second layer has $C$ neurons. Afterwards, the output features of the shared network are summed element-wise, and the channel attention mechanism outputs the final feature vector $M_C(F)$. The formula of the channel attention mechanism is shown in Equation (1):

$$M_C(F) = \sigma(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))) = \sigma(W_1(W_0(F^C_{avg})) + W_1(W_0(F^C_{max}))) \tag{1}$$

where $\sigma$ denotes the sigmoid function; $W_0 \in \mathbb{R}^{C/r \times C}$ and $W_1 \in \mathbb{R}^{C \times C/r}$ are the MLP weights, which are shared for both inputs; and $W_0$ is followed by the ReLU activation function.
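The channel attention branch of Equation (1) can be sketched in PyTorch as follows (a CBAM-style illustration; the reduction ratio r = 16 is the common default rather than a value reported in the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """CBAM-style channel attention (Equation (1)): a shared two-layer MLP
    (implemented as 1x1 convolutions) applied to global average- and
    max-pooled descriptors, summed, then passed through a sigmoid."""
    def __init__(self, channels, r=16):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared weights W_0, W_1
            nn.Conv2d(channels, channels // r, 1, bias=False),
            nn.ReLU(inplace=True),                      # ReLU follows W_0
            nn.Conv2d(channels // r, channels, 1, bias=False),
        )

    def forward(self, x):                               # x: (B, C, H, W)
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))     # MLP(AvgPool(F))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))      # MLP(MaxPool(F))
        return torch.sigmoid(avg + mx)                  # M_C(F): (B, C, 1, 1)
```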

Spatial Attention Mechanism Module

The spatial attention mechanism takes the feature vectors produced by the preceding channel attention module as its input. The input features are first subjected to a max pooling operation and an average pooling operation along the channel axis to obtain two feature maps, $F^S_{max} \in \mathbb{R}^{1 \times H \times W}$ and $F^S_{avg} \in \mathbb{R}^{1 \times H \times W}$, respectively. The max-pooled and average-pooled features are then concatenated along the channel dimension. Afterwards, the concatenated features are reduced to one channel by a (7 × 7) convolution. Finally, a sigmoid function yields the spatial attention map $M_S(F) \in \mathbb{R}^{H \times W}$. The equation of the spatial attention mechanism is shown in Equation (2):

$$M_S(F) = \sigma(f^{7 \times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])) = \sigma(f^{7 \times 7}([F^S_{avg}; F^S_{max}])) \tag{2}$$

where $\sigma$ denotes the sigmoid function and $f^{7 \times 7}$ denotes a convolution operation [36] with a kernel of size (7 × 7).
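Correspondingly, the spatial attention branch of Equation (2) can be sketched as follows (kernel size 7 as in the original CBAM; Section 3.4 later adopts a 3 × 3 kernel for RiceDRA-Net):

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention (Equation (2)): channel-wise average and
    max maps are concatenated and reduced to one channel by a convolution,
    then passed through a sigmoid."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):                                # x: (B, C, H, W)
        avg = torch.mean(x, dim=1, keepdim=True)         # F_avg^S: (B, 1, H, W)
        mx, _ = torch.max(x, dim=1, keepdim=True)        # F_max^S: (B, 1, H, W)
        cat = torch.cat([avg, mx], dim=1)                # channel concatenation
        return torch.sigmoid(self.conv(cat))             # M_S(F): (B, 1, H, W)
```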

2.4. Rice Leaf Disease Identification Model

2.4.1. Res-Attention

In this study, we developed a new attention mechanism called the Res-Attention module based on the CBAM module. The Res-Attention module aims to reduce information loss during transmission by adding a residual structure. In the Res-Attention module, a portion of the information is retained when the feature map enters the channel attention model. Since the spatial attention model is connected after the channel attention model, the previously retained portion of information is fused with the information output from the spatial attention module. Finally, the fused feature information is output from the Res-Attention module. The structure of the Res-Attention module is shown in Figure 6.
The network flow diagram of the Res-Attention module is shown in Figure 7, which gives a more intuitive view of the transmission process of the feature map in the Res-Attention module.
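Based on this description and the ChannelAttention and SpatialAttention sketches above, one plausible reading of the Res-Attention module is the following; the exact fusion wiring is specified in Figure 6, so this should be read as an interpretation rather than the authors' code:

```python
import torch.nn as nn

class ResAttention(nn.Module):
    """A plausible sketch of the Res-Attention module: the input is retained
    as a shortcut when entering the channel attention branch and fused with
    the output of the spatial attention branch (the exact wiring is given in
    Figure 6, so this is an interpretation, not the authors' code)."""
    def __init__(self, channels, r=16, kernel_size=3):  # 3x3 kernel per Section 3.4
        super().__init__()
        self.ca = ChannelAttention(channels, r)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        identity = x                     # information retained at module entry
        out = x * self.ca(x)             # channel attention reweighting
        out = out * self.sa(out)         # spatial attention reweighting
        return out + identity            # fuse the retained information
```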

2.4.2. RiceDRA-Net

In this study, we developed the RiceDRA-Net network model by adding the Res-Attention module to the DenseNet-121 network model in order to improve its accuracy in rice disease identification. The RiceDRA-Net model consists of four Dense Block modules, four Res-Attention modules, and three Transition Layers, stacked in an interleaved sequence. Each Res-Attention module was placed after a Dense Block module; the first three Res-Attention modules were each followed by a Transition module, and the last Res-Attention module was connected directly to the Classification Layer. Finally, we changed the output features of the Classification Layer to 4, corresponding to the four categories of rice diseases we identified.
In the RiceDRA-Net network model, the feature maps from each Dense Block are passed directly to the Res-Attention module, where they flow through the Channel Attention Model and then the Spatial Attention Model. The structure of the RiceDRA-Net network is shown in Figure 8.
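Putting the pieces together, the described assembly could be sketched on top of torchvision's DenseNet-121 as follows; whether the published implementation reuses torchvision in this way is an assumption, and the channel widths follow Table 2:

```python
import torch.nn as nn
from torchvision.models import densenet121

def build_ricedra_net(num_classes=4):
    """Sketch of the described assembly on top of torchvision's DenseNet-121:
    a Res-Attention module follows each Dense Block (the first three before a
    Transition, the last before the Classification Layer), and the classifier
    outputs 4 classes. Channel widths follow Table 2."""
    net = densenet121(weights=None)
    for name, ch in [("denseblock1", 256), ("denseblock2", 512),
                     ("denseblock3", 1024), ("denseblock4", 1024)]:
        block = getattr(net.features, name)
        setattr(net.features, name, nn.Sequential(block, ResAttention(ch)))
    net.classifier = nn.Linear(net.classifier.in_features, num_classes)
    return net
```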

3. Experimental Results and Discussion

3.1. Experimental Platform

This experiment used Ubuntu 20.04.4 LTS (64-bit) as the operating system (Canonical Ltd., London, UK) and an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20 GHz with 32 GB of RAM (Intel, Santa Clara, CA, USA). The GPU was an NVIDIA Tesla T4 with 16 GB of video memory (Nvidia, Santa Clara, CA, USA). The programming language was Python, and the PyTorch deep learning framework was used.

3.2. Experimental Design

In this study, we conducted several comparative experiments. Firstly, we chose the SBG-Dataset as the test dataset and conducted comparative experiments on the selection of hyperparameters. These included the selection of optimizer, learning rate, and convolutional kernel size in the network to identify the most suitable experimental hyperparameters for the improved model. Finally, we evaluated six different experimental models on two test datasets, the SBG-Dataset and CBG-Dataset, respectively. We compared RiceDRA-Net in this study with other classical models for rice leaf disease recognition and evaluated the recognition effects of different models on rice leaf diseases with complex and single backgrounds.

3.3. Evaluation Indicators

In this study, we used precision, recall, accuracy, and the F1 score as evaluation metrics, using their values to evaluate the model in a comprehensive manner. Precision, recall, accuracy, and the F1 score were calculated as follows:

$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{3}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{4}$$

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN} \tag{5}$$

$$F_1 = \frac{2TP}{2TP + FP + FN} \tag{6}$$
where $TP$ is the number of true positive samples, $TN$ is the number of true negative samples, $FP$ is the number of false positive samples, and $FN$ is the number of false negative samples. Accuracy is the ratio of the number of correctly predicted samples to the total number of samples used in the model experiments. Precision is the ratio of the number of correctly predicted positive samples to the total number of samples predicted as positive, and recall is the ratio of the number of correctly predicted positive samples to the total number of actual positive samples. The F1 score is a comprehensive evaluation index that combines precision and recall as their harmonic mean.
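These definitions translate directly into code; the sketch below computes all four metrics from per-class confusion counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Direct implementation of Equations (3)-(6) from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, accuracy, f1
```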
We chose the cross-entropy loss function as the loss function for our experiments. The cross-entropy loss function is commonly used for classification problems: it measures the difference between the model output and the true label, making it a preferred loss function for training classification models. It is widely used in deep learning because of its good performance, its ability to directly optimize classification probabilities, and its ability to handle class imbalance. The cross-entropy loss function is shown in Equation (7), where $M$, $P_{i,c}$, and $y_{i,c}$ represent the number of classes, the predicted probability, and the ground truth, respectively, for a specific image:
$$\mathrm{Loss} = -\sum_{c=1}^{M} y_{i,c} \log(P_{i,c}) \tag{7}$$
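In PyTorch, this loss is available as nn.CrossEntropyLoss, which applies Equation (7) to raw logits; a minimal usage sketch (batch size and tensors below are illustrative):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                 # applies Eq. (7) to raw logits
logits = torch.randn(8, 4, requires_grad=True)    # batch of 8, M = 4 disease classes
labels = torch.randint(0, 4, (8,))                # ground-truth class indices
loss = criterion(logits, labels)                  # mean cross-entropy over the batch
loss.backward()                                   # gradients for the optimizer step
```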

3.4. Selection of Hyperparameters

The choice of optimizer and learning rate can improve the training of neural networks, making them faster and better at achieving the desired results. In this experiment, we combined four different optimizers (SGD, Adam, Adagrad, and AdaDelta) with five different learning rates (lr): 0.1, 0.05, 0.01, 0.005, and 0.001. We conducted a comparison experiment to find the most suitable optimizer and learning rate, testing on the SBG-Dataset and training for 30 epochs. The comparison results for the different optimizers and learning rates are shown in Table 3 and Table 4.
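The grid search itself amounts to a double loop over optimizers and learning rates, as in the sketch below (train_one_epoch and evaluate are hypothetical placeholders for the paper's training and SBG-Dataset evaluation routines, which are not given):

```python
import torch

optimizers = {
    "SGD": torch.optim.SGD, "Adam": torch.optim.Adam,
    "Adagrad": torch.optim.Adagrad, "AdaDelta": torch.optim.Adadelta,
}
learning_rates = (0.1, 0.05, 0.01, 0.005, 0.001)

for opt_name, opt_cls in optimizers.items():
    for lr in learning_rates:
        model = build_ricedra_net(num_classes=4)          # fresh model per run
        optimizer = opt_cls(model.parameters(), lr=lr)
        for epoch in range(30):                           # 30 epochs per setting
            train_one_epoch(model, optimizer)             # placeholder routine
        accuracy, total_loss = evaluate(model)            # on the SBG-Dataset
        print(opt_name, lr, accuracy, total_loss)
```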
According to Table 3 and Table 4, we can see that the highest accuracy was obtained by using the Adagrad optimizer with a learning rate of 0.01, which resulted in an accuracy of 99.71% and the lowest total loss value on the test set. Therefore, we can conclude that the optimal combination of optimizer and learning rate for this study’s model was Adagrad with a learning rate of 0.01.
The spatial attention mechanism in the Res-Attention module is impacted by the choice of its convolutional kernel size, and for this experiment, we compared the impact of using convolutional kernels of (3 × 3) or (7 × 7) on the recognition accuracy of the model. We used the Adagrad optimizer with a learning rate of 0.01, ran the experiment for 30 epochs, and tested the model on the SBG-Dataset.
Based on Table 5, we observed that modifying the size of the convolutional kernel of the spatial attention mechanism in the Res-Attention module to (3 × 3) was more suitable for identifying rice disease infestations in this network.

3.5. Comparison with Different Classical Models

Currently, five models are often used in image recognition and classification research: AlexNet [37], Vgg [38], ResNet [39], MobileNet [40], and DenseNet-121; DenseNet-121 is also the benchmark model of RiceDRA-Net. Consequently, in order to ensure the objectivity of the experiments, we compared the model of this study with these classical models. Among them, both the Vgg and ResNet families include variants of different network sizes. Considering the size of the dataset and the experimental hardware, it was not appropriate to use too large a network, so we used Vgg-19 and ResNet-101 as the comparison models. We compared the model in this study with the five classical models for rice leaf disease recognition, setting the epoch count to 30 and using the Adagrad optimizer with a 0.01 learning rate. We applied the trained models to the two test datasets, the SBG-Dataset and the CBG-Dataset, respectively. The experimental results are shown in Table 6 and Table 7, and Figure 9.
As can be seen from Table 6 and Figure 9, the test results on the rice leaf disease dataset with single backgrounds showed that the accuracy of the model in this study improved by 15.34%, 6.35%, 5.06%, and 2.08% compared with AlexNet, Vgg-19, MobileNet, and ResNet-101, respectively, and by 1.85% compared with its benchmark model, DenseNet-121. As shown in Figure 9, this model achieved the highest accuracy (99.71%) and also led the other models in precision, recall, and the F1 score. Furthermore, it can be observed from Figure 9 that this experimental model converged faster than the other classical models and did not overfit. We can also see from Table 7 and Figure 9 that RiceDRA-Net produced good experimental results on the rice leaf disease test dataset with complex backgrounds, with a 97.86% recognition accuracy on the test set, much higher than the other five models, and it also performed very well in terms of precision, recall, and the F1 score.
Therefore, RiceDRA-Net had the highest recognition accuracy of the six models for rice leaf disease recognition with both single and complex backgrounds. In addition, we also compared the total loss values of the six different models on the two test sets, as shown in Figure 10. We can see very intuitively that the RiceDRA-Net proposed in this paper had the lowest total loss value of the six models on the test sets, and its loss value decreased the fastest, regardless of whether the background was single or complex.
We compared the recognition accuracy of the six models on the two rice leaf disease test datasets, as shown in Figure 11. It can be observed that the recognition accuracy of RiceDRA-Net was 99.71% on the SBG-Dataset and 97.86% on the CBG-Dataset. The rice leaf disease recognition accuracy of RiceDRA-Net with a complex background thus decreased by only 1.85% compared with a single background, whereas the accuracy of AlexNet decreased by 8.49%, MobileNet by 9.16%, Vgg-19 by 9.16%, ResNet-101 by 7.53%, and DenseNet-121 by 5.34%. The experimental results show that complex backgrounds had the least impact on RiceDRA-Net: its accuracy for rice leaf disease recognition with complex backgrounds decreased the least among the six models. The experiments demonstrate that RiceDRA-Net has a better recognition effect for rice leaf diseases with complex backgrounds.
We used the six different models with the two test sets, the SBG-Dataset and CBG-Dataset, to test their accuracy for the four different disease categories in the rice leaf disease dataset, as shown in Figure 12. This allowed a more intuitive analysis of the recognition effect of the models on the different rice leaf diseases. From the figure, it can be seen that RiceDRA-Net outperformed the other five reference models in recognizing the four different rice leaf diseases, with both a single background and a complex background. As can be seen from Figure 12, RiceDRA-Net achieved 100% precision for three diseases, blast, brown spot, and Tungro, with a single background. Although RiceDRA-Net had lower precision than MobileNet for bacterial blight with a complex background, it had better precision than the other five models for the remaining three diseases. This indicates that RiceDRA-Net has strong robustness, stability, and high accuracy. Furthermore, the evaluation results in Table 8 show that RiceDRA-Net had significantly higher recall and F1 scores for the four different rice leaf diseases than the five reference models on both test sets, indicating that RiceDRA-Net has good recall ability and F1 scores and is less influenced by complex backgrounds. Consequently, it is suitable for rice leaf disease identification with a complex background.
Finally, we produced confusion matrix plots for the six models on the two test sets and analyzed them, as shown in Figure 13 and Figure 14.
In these confusion matrices, the horizontal coordinate represents the true category, the vertical coordinate represents the predicted category, and the shade of color in each square represents the quantity, with darker colors representing higher quantities. We can see from Figure 13 that the precision of the model under study for the three disease categories of blast, brown spot, and Tungro with a single background reached 100%. In addition, the precision for bacterial blight reached 98.9%; five blast samples were misclassified as bacterial blight. These two disease categories, bacterial blight and blast, are somewhat similar in their disease characteristics, which may cause misclassification. As can be seen from Figure 14, RiceDRA-Net with a complex background had the highest precision for all of blast, brown spot, and Tungro, with the precision for Tungro reaching 100%. The experimental results showed that the recall and F1 score of RiceDRA-Net for rice leaf disease identification with both single and complex backgrounds were higher than those of the five comparison models. It can be seen that RiceDRA-Net outperformed the classical models in the identification of the different categories of rice leaf diseases. Overall, the RiceDRA-Net model studied in this paper had a good recognition effect and is able to meet the requirements of rice leaf disease recognition in general production.
In classification tasks, PR curves are a common performance evaluation method that can be used to measure the precision and recall of a classifier, visualizing the trade-off between the two. The average PR curve combines the PR curves of multiple categories into a single graph by averaging them, balancing the performance of each category and providing an overall assessment. The area under the curve (AUC) is a commonly used performance metric, with higher AUC values indicating better classifier performance. We compared the average PR curves of the six different models on the CBG-Dataset, as shown in Figure 15. The experiments showed that the RiceDRA-Net model proposed in this study had the highest AUC value, indicating that RiceDRA-Net exhibited the best trade-off between precision and recall. Its curve was very steep, indicating that the model was able to obtain higher precision while maintaining high recall.
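A macro-averaged PR-AUC of this kind can be computed with scikit-learn as in the sketch below (an assumption: the paper does not state which library it used):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

def average_pr_auc(y_true, y_score):
    """Macro-average of per-class PR-curve areas; y_true is one-hot over the
    four disease classes, y_score the model's softmax outputs."""
    aucs = []
    for c in range(y_true.shape[1]):
        precision, recall, _ = precision_recall_curve(y_true[:, c], y_score[:, c])
        aucs.append(auc(recall, precision))    # area under one class's PR curve
    return float(np.mean(aucs))
```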
The computational complexity of a convolutional neural network can be characterized by the number of parameters and the computational volume of the model. Specifically, the number of parameters is the total number of weight parameters across all layers of the model that have parameters, while the computational volume is the number of floating-point operations (FLOPs) required for one forward inference. We compared the FLOPs and parameter counts of the six experimental models, and the results are shown in Table 9. From Table 9, we can see that RiceDRA-Net's FLOPs were higher only than those of AlexNet and MobileNet, which are an early, structurally simple network and a lightweight network, respectively, and whose recognition accuracy is far inferior to that of RiceDRA-Net. Compared with VGG-19 and ResNet-101, the FLOPs of RiceDRA-Net were much lower. Compared with the benchmark model DenseNet-121, RiceDRA-Net improved the accuracy of rice leaf disease recognition without changing the number of floating-point operations. It can also be observed from Table 9 that the number of parameters in RiceDRA-Net was higher only than that of MobileNet, a lightweight model whose recognition accuracy for rice leaf diseases is much lower. It is worth noting that, compared with the benchmark model DenseNet-121, RiceDRA-Net reduced the number of parameters while improving the accuracy of the model. In summary, RiceDRA-Net performs well relative to the other models in terms of both parameter count and computational cost.
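The parameter count is straightforward to reproduce, while FLOPs are usually obtained from a profiler; the sketch below shows both (thop is named only as an example tool, an assumption since the paper does not name its own):

```python
import torch

def count_parameters_m(model):
    """Trainable parameter count in millions (the 'Parameters' column of Table 9)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# FLOPs for one 224 x 224 forward pass are usually measured with a profiler,
# e.g. the thop package (an assumption; the paper does not name its tool):
# from thop import profile
# flops, _ = profile(model, inputs=(torch.randn(1, 3, 224, 224),))
```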

3.6. Heat Map Comparison

As the Res-Attention module is integrated into RiceDRA-Net, Grad-CAM was adopted to extract heat maps of the disease pictures after the Res-Attention module, visualizing the features extracted by the convolutional neural network. This experiment extracted the feature maps output by the last Res-Attention module in RiceDRA-Net to better show which parts of the sample pictures the model focused on; RiceDRA-Net was tested on the CBG-Dataset. The heat maps are shown in Figure 16.
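A minimal Grad-CAM sketch over a chosen layer is given below; it follows the standard Grad-CAM recipe (gradient-weighted activations) rather than the authors' exact code, and the hook-based bookkeeping is an implementation choice:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, layer):
    """Minimal Grad-CAM: weight the chosen layer's activations by the spatially
    averaged gradients of the target class score, apply ReLU, and upsample.
    For RiceDRA-Net, `layer` would be the last Res-Attention module."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    model.zero_grad()
    model(image.unsqueeze(0))[0, target_class].backward()   # class score gradient
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)     # GAP of the gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1))          # weighted activations
    return F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                         mode="bilinear", align_corners=False)[0, 0]
```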
We used three disease images, bacterial blight, blast, and brown spot, for these experiments. We simultaneously extracted the last convolutional layer from each of the five classical models, AlexNet, Vgg-19, MobileNet, ResNet-101, and DenseNet-121, and compared their heat maps with those of our proposed model. It can be seen from Figure 16 that each of the five classical models was able to identify the exact disease location for some specific disease, but was not very accurate for the other diseases. For example, ResNet-101 located bacterial blight accurately, but not blast or brown spot; DenseNet-121 was accurate for bacterial blight and brown spot, but not for blast. RiceDRA-Net, however, was very accurate at locating all of the diseases in the rice leaves, which contributed to its better recognition effect. This also demonstrates that adding the Res-Attention module to the dense residual network enables more accurate disease identification.

4. Conclusions

In this study, we proposed a deep learning model called RiceDRA-Net for identifying rice leaf diseases against complex rice field backgrounds. We incorporated the Res-Attention module into the deep residual structure of DenseNet-121 to form RiceDRA-Net, making the residual connections denser and more suitable for identifying rice leaf diseases in rice fields. We also constructed a test dataset, the SBG-Dataset, consisting of rice leaf disease images with a single background, and experimented with the effects of different optimizers, learning rates, and convolutional kernel sizes in Res-Attention on the recognition performance of the model.
RiceDRA-Net extracts image features while retaining as much information as possible that may be lost during the convolution process. This greatly improves the accuracy in identifying rice leaf diseases with complex rice backgrounds. We compared RiceDRA-Net with various classical models on two datasets and found that RiceDRA-Net achieved higher accuracy in identifying rice leaf diseases with complex rice backgrounds and accurately locating the diseases in rice leaves.
In conclusion, our study presents a novel deep learning model, RiceDRA-Net, that performs well in identifying rice leaf diseases with complex rice backgrounds. The Res-Attention module incorporated into the model improves the ability of the model to identify rice leaf diseases and retain important information. The SBG-Dataset is a useful dataset to further study rice leaf diseases. Our results show that RiceDRA-Net outperforms classical models in identifying rice leaf diseases and locating diseases in rice leaves. These findings have important implications for the development of effective tools to diagnose and manage rice leaf diseases in rice fields.
The study’s main limitation was the small dataset size, which restricted the comparative analysis of the models. This may affect the reliability and generalizability of the study’s findings. To address this, future research could focus on collecting additional data. Furthermore, data augmentation techniques could be explored to expand the dataset, which would increase its diversity and size. Additionally, employing more lightweight algorithms could reduce the dependence on a large volume of data.

Author Contributions

Conceptualization, J.P. and Y.W.; methodology, J.P. and Y.W.; software, R.Z.; validation, J.P., R.Z. and H.C.; data curation, H.C.; writing—original draft preparation, J.P.; writing—review and editing, Y.W.; visualization, P.J.; funding acquisition and supervision, Y.W. and P.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China under the subproject “Research and System Development of Navigation Technology for Harvesting Machine of Special Economic Crops” (No. 2022YFD2002001), within the key program “Engineering Science and Comprehensive Interdisciplinary Research”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y.; Wang, H.; Peng, Z. Rice diseases detection and classification using attention based neural network and bayesian optimization. Expert Syst. Appl. 2021, 178, 114770.
  2. Azizi, M.M.F.; Lau, H.Y. Advanced diagnostic approaches developed for the global menace of rice diseases: A review. Can. J. Plant Pathol. 2022, 44, 627–651.
  3. Jiang, F.; Lu, Y.; Chen, Y.; Cai, D.; Li, G. Image recognition of four rice leaf diseases based on deep learning and support vector machine. Comput. Electron. Agric. 2020, 179, 105824.
  4. Govardhan, M.; Veena, M. Diagnosis of tomato plant diseases using random forest. In Proceedings of the 2019 Global Conference for Advancement in Technology (GCAT), Bangalore, India, 18–20 October 2019; pp. 1–5.
  5. Ramesh, S.; Hebbar, R.; Niveditha, M.; Pooja, R.; Shashank, N.; Vinod, P. Plant disease detection using machine learning. In Proceedings of the 2018 International Conference on Design Innovations for 3Cs Compute Communicate Control (ICDI3C), Bangalore, India, 25–28 April 2018; pp. 41–45.
  6. Ahmed, K.; Shahidi, T.R.; Alam, S.M.I.; Momen, S. Rice leaf disease detection using machine learning techniques. In Proceedings of the 2019 International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh, 24–25 December 2019; pp. 1–5.
  7. Sethy, P.K.; Barpanda, N.K.; Rath, A.K.; Behera, S.K. Deep feature based rice leaf disease identification using support vector machine. Comput. Electron. Agric. 2020, 175, 105527.
  8. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419.
  9. Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279.
  10. Hassan, S.M.; Maji, A.K. Plant Disease Identification Using a Novel Convolutional Neural Network. IEEE Access 2022, 10, 5390–5401.
  11. Priyadharshini, R.A.; Arivazhagan, S.; Arun, M.; Mirnalini, A. Maize leaf disease classification using deep convolutional neural networks. Neural Comput. Appl. 2019, 31, 8887–8895.
  12. Hassan, S.; Maji, A.; Jasiński, M.; Leonowicz, Z.; Jasińska, E. Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning Approach. Electronics 2021, 10, 1388.
  13. Shin, J.; Chang, Y.K.; Heung, B.; Nguyen-Quang, T.; Price, G.W.; Al-Mallahi, A. A deep learning approach for RGB image-based powdery mildew disease detection on strawberry leaves. Comput. Electron. Agric. 2021, 183, 106042.
  14. Zhong, Y.; Zhao, M. Research on deep learning in apple leaf disease recognition. Comput. Electron. Agric. 2019, 168, 105146.
  15. Chen, J.; Chen, J.; Zhang, D.; Sun, Y.; Nanehkaran, Y. Using deep transfer learning for image-based plant disease identification. Comput. Electron. Agric. 2020, 173, 105393.
  16. Kibriya, H.; Abdullah, I.; Nasrullah, A. Plant Disease Identification and Classification Using Convolutional Neural Network and SVM. In Proceedings of the 2021 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 13–14 December 2021; pp. 264–268.
  17. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182.
  18. Mishra, A.M.; Harnal, S.; Gautam, V.; Tiwari, R.; Upadhyay, S. Weed density estimation in soya bean crop using deep convolutional neural networks in smart agriculture. J. Plant Dis. Prot. 2022, 129, 593–604.
  19. Kaur, P.; Harnal, S.; Tiwari, R.; Upadhyay, S.; Bhatia, S.; Mashat, A.; Alabdali, A.M. Recognition of Leaf Disease Using Hybrid Convolutional Neural Network by Applying Feature Reduction. Sensors 2022, 22, 575.
  20. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
  21. Bhujel, A.; Kim, N.-E.; Arulmozhi, E.; Basak, J.K.; Kim, H.-T. A lightweight attention-based convolutional neural network for tomato leaf disease classification. Agriculture 2022, 12, 228.
  22. Zhao, Y.; Chen, J.; Xu, X.; Lei, J.; Zhou, W. SEV-Net: Residual network embedded with attention mechanism for plant disease severity detection. Concurr. Comput. Pract. Exp. 2021, 33, e6161.
  23. Ghosal, S.; Sarkar, K. Rice leaf diseases classification using CNN with transfer learning. In Proceedings of the 2020 IEEE Calcutta Conference (CALCON), Kolkata, India, 28–29 February 2020; pp. 230–236.
  24. Swathika, R.; Srinidhi, S.; Radha, N.; Sowmya, K. Disease identification in paddy leaves using CNN based deep learning. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 4–6 February 2021; pp. 1004–1008.
  25. Rahman, C.R.; Arko, P.S.; Ali, M.E.; Khan, M.A.I.; Apon, S.H.; Nowrin, F.; Wasif, A. Identification and recognition of rice diseases and pests using convolutional neural networks. Biosyst. Eng. 2020, 194, 112–120.
  26. Zhou, G.; Zhang, W.; Chen, A.; He, M.; Ma, X. Rapid detection of rice disease based on FCM-KM and faster R-CNN fusion. IEEE Access 2019, 7, 143190–143206.
  27. Su, N.T.; Hung, P.D.; Vinh, B.T.; Diep, V.T. Rice leaf disease classification using deep learning and target for mobile devices. In Proceedings of the International Conference on Emerging Technologies and Intelligent Systems (ICETIS 2021), Volume 1; Springer: Berlin/Heidelberg, Germany, 2022; pp. 136–148.
  28. Archana, K.S.; Srinivasan, S.; Bharathi, S.P.; Balamurugan, R.; Prabakar, T.N.; Britto, A.S.F. A novel method to improve computational and classification performance of rice plant disease identification. J. Supercomput. 2022, 78, 8925–8945.
  29. Patil, R.R.; Kumar, S. Rice-Fusion: A multimodality data fusion framework for rice disease diagnosis. IEEE Access 2022, 10, 5207–5222.
  30. Ismail, A.; Ahmad, S.A.; Che Soh, A.; Hassan, K.; Harith, H.H. Improving convolutional neural network (CNN) architecture (miniVGGNet) with batch normalization and learning rate decay factor for image classification. Int. J. Integr. Eng. 2019, 11.
  31. Oyewola, D.O.; Dada, E.G.; Misra, S.; Damaševičius, R. Detecting cassava mosaic disease using a deep residual convolutional neural network with distinct block processing. PeerJ Comput. Sci. 2021, 7, e352.
  32. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
  33. Zhu, Y.; Newsam, S. DenseNet for dense flow. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 790–794.
  34. Fukui, H.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Attention branch network: Learning of attention mechanism for visual explanation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10705–10714.
  35. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
  36. Ma, R.; Wang, J.; Zhao, W.; Guo, H.; Dai, D.; Yun, Y.; Li, L.; Hao, F.; Bai, J.; Ma, D. Identification of Maize Seed Varieties Using MobileNetV2 with Improved Attention Mechanism CBAM. Agriculture 2022, 13, 11.
  37. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  38. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  39. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
  40. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
Figure 1. CBG-Dataset images: (a) Bacterial blight; (b) Blast; (c) Brown spot; (d) Tungro.
Figure 2. SBG-Dataset images: (a) Bacterial blight; (b) Blast; (c) Brown spot; (d) Tungro.
Figure 3. Structure of DenseNet for achieving maximum information flow during image recognition.
Figure 4. Structure of the CBAM module for implementing both spatial and channel attention mechanisms in feedforward convolutional neural networks.
Figure 5. Structure of the spatial attention module and the channel attention module in the CBAM module for adaptive adjustment of feature maps.
Figure 6. Structure of the Res-Attention module for reducing information loss during transmission.
Figure 7. Network flow diagram of the feature map transmission process in the Res-Attention module.
Figure 8. Structure of the RiceDRA-Net network model for rice leaf disease identification.
Figure 9. Comparison of various classical models in a single context.
Figure 10. Comparison of the total loss values of the six different models on the test sets.
Figure 11. Comparison of model accuracy under the two datasets.
Figure 12. Comparison of accuracy rates of different models for each disease type.
Figure 13. Confusion matrices generated with the SBG-Dataset.
Figure 14. Confusion matrices generated with the CBG-Dataset.
Figure 15. Average PR curves of six models using the CBG-Dataset.
Figure 16. Heat maps of disease pictures.
Table 1. Number of rice disease datasets.
Categories of Rice Leaf Diseases | Bacterial Blight | Blast | Brown Spot | Tungro | Training Samples | Test Samples
Quantity | 1584 | 1440 | 1600 | 1308 | 4153 | 1779
Table 2. DenseNet-121 network structure.
Network Layer | Kernel Size | Stride | Output
Input | - | - | (a, 3, 224, 224)
Conv2d | 7 | 2 | (a, 64, 112, 112)
Dense Block (1) | - | - | (a, 256, 112, 112)
Transition (1) | - | - | (a, 128, 56, 56)
Dense Block (2) | - | - | (a, 512, 56, 56)
Transition (2) | - | - | (a, 256, 28, 28)
Dense Block (3) | - | - | (a, 1024, 28, 28)
Transition (3) | - | - | (a, 512, 14, 14)
Dense Block (4) | - | - | (a, 1024, 14, 14)
AvgPool | 1 | 1 | (a, 1024, 1, 1)
Classification | - | - | (1024, class_num)
'a' represents the number of 3D tensors.
Table 3. Recognition accuracy (%) of RiceDRA-Net on the SBG-Dataset under different optimization algorithms and learning rates.
Optimization Algorithm | lr = 0.1 | lr = 0.05 | lr = 0.01 | lr = 0.005 | lr = 0.001
SGD | 98.11 | 98.90 | 99.23 | 98.56 | 98.88
Adam | 97.97 | 98.85 | 99.53 | 99.55 | 99.66
Adagrad | 95.25 | 97.26 | 99.71 | 99.68 | 99.21
AdaDelta | 85.44 | 87.82 | 99.65 | 85.32 | 90.56
Table 4. Total loss values of RiceDRA-Net on the SBG-Dataset under different optimization algorithms and learning rates.
Optimization Algorithm | lr = 0.1 | lr = 0.05 | lr = 0.01 | lr = 0.005 | lr = 0.001
SGD | 0.2563 | 0.1578 | 0.1716 | 5.885 | 4.552
Adam | 0.7856 | 0.5684 | 0.4387 | 0.4251 | 0.559
Adagrad | 1.589 | 1.256 | 0.0356 | 0.3758 | 0.4505
AdaDelta | 15.37 | 15.48 | 0.4453 | 26.48 | 18.25
Table 5. Comparison of different convolution kernel sizes.
Kernel Size | Accuracy (%) | Precision (%) | Recall (%) | F1 Score
(3 × 3) | 99.71 | 99.72 | 99.70 | 0.996
(7 × 7) | 99.49 | 99.51 | 99.47 | 0.994
Table 6. Comparison of various classical models under the SBG-Dataset.
Model | Accuracy (%) | Precision (%) | Recall (%) | F1 Score
AlexNet | 84.37 | 84.65 | 84.62 | 0.846
Vgg-19 | 93.36 | 93.67 | 93.30 | 0.934
MobileNet | 94.65 | 95.07 | 94.57 | 0.948
ResNet-101 | 97.63 | 97.67 | 97.82 | 0.977
DenseNet-121 | 97.86 | 97.87 | 97.87 | 0.978
RiceDRA-Net | 99.71 | 99.72 | 99.70 | 0.996
Table 7. Comparison of various classical models under the CBG-Dataset.
Model | Accuracy (%) | Precision (%) | Recall (%) | F1 Score
AlexNet | 75.88 | 77.87 | 75.95 | 0.768
Vgg-19 | 84.20 | 85.57 | 83.57 | 0.845
MobileNet | 85.49 | 87.90 | 85.67 | 0.867
ResNet-101 | 90.10 | 90.82 | 90.25 | 0.905
DenseNet-121 | 92.52 | 92.75 | 92.82 | 0.927
RiceDRA-Net | 97.86 | 97.95 | 97.87 | 0.979
Table 8. Comparison of multiple evaluation indicators of the different models on each disease type. The first three metric columns are measured on the SBG-Dataset, the last three on the CBG-Dataset.
Model | Type Names | SBG Precision (%) | SBG Recall (%) | SBG F1 Score | CBG Precision (%) | CBG Recall (%) | CBG F1 Score
AlexNet | Bacterial blight | 91.1 | 71.4 | 0.800 | 64.0 | 80.4 | 0.712
 | Blast | 80.5 | 72.7 | 0.764 | 84.9 | 37.7 | 0.522
 | Brown spot | 82.7 | 97.5 | 0.895 | 82.1 | 87.7 | 0.848
 | Tungro | 84.3 | 96.9 | 0.902 | 80.5 | 98.0 | 0.883
Vgg-19 | Bacterial blight | 89.1 | 97.9 | 0.933 | 75.2 | 93.9 | 0.835
 | Blast | 90.6 | 87.3 | 0.889 | 78.3 | 73.6 | 0.758
 | Brown spot | 96.8 | 93.1 | 0.949 | 93.0 | 91.5 | 0.922
 | Tungro | 98.2 | 94.9 | 0.965 | 95.8 | 75.3 | 0.843
MobileNet | Bacterial blight | 89.6 | 97.9 | 0.936 | 72.6 | 98.3 | 0.835
 | Blast | 95.3 | 89.4 | 0.923 | 93.2 | 67.1 | 0.780
 | Brown spot | 95.4 | 94.2 | 0.948 | 97.7 | 78.8 | 0.872
 | Tungro | 100.0 | 97.2 | 0.986 | 88.1 | 98.5 | 0.930
ResNet-101 | Bacterial blight | 96.1 | 99.6 | 0.978 | 97.2 | 79.2 | 0.872
 | Blast | 94.9 | 100.0 | 0.974 | 78.6 | 87.7 | 0.829
 | Brown spot | 100.0 | 91.7 | 0.957 | 89.3 | 95.8 | 0.924
 | Tungro | 100.0 | 100.0 | 1.000 | 98.2 | 99.0 | 0.985
DenseNet-121 | Bacterial blight | 98.5 | 98.3 | 0.984 | 95.2 | 91.4 | 0.932
 | Blast | 97.6 | 95.1 | 0.968 | 86.1 | 91.7 | 0.888
 | Brown spot | 96.7 | 98.1 | 0.974 | 91.0 | 88.5 | 0.897
 | Tungro | 98.7 | 100.0 | 0.993 | 98.7 | 99.7 | 0.991
RiceDRA-Net | Bacterial blight | 98.9 | 100.0 | 0.994 | 96.5 | 97.5 | 0.969
 | Blast | 100.0 | 98.8 | 0.994 | 96.1 | 95.8 | 0.959
 | Brown spot | 100.0 | 100.0 | 1.000 | 99.2 | 99.0 | 0.990
 | Tungro | 100.0 | 100.0 | 1.000 | 100.0 | 99.2 | 0.995
Table 9. Comparison of computational complexity.
Model | FLOPs (G) | Parameters (M)
AlexNet | 0.71 | 61.10
Vgg-19 | 19.63 | 143.67
MobileNet | 0.33 | 3.50
ResNet-101 | 7.87 | 44.55
DenseNet-121 | 2.90 | 7.98
RiceDRA-Net | 2.90 | 7.56
