Proceeding Paper

Identification of Pest Attack on Corn Crops Using Machine Learning Techniques †

1 School of Agriculture Engineering and Food Sciences, Shandong University of Technology, Zibo 255000, China
2 Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan 64200, Pakistan
* Author to whom correspondence should be addressed.
Presented at the 4th International Electronic Conference on Applied Sciences, 27 October–10 November 2023; Available online: https://asec2023.sciforum.net/.
Eng. Proc. 2023, 56(1), 183; https://doi.org/10.3390/ASEC2023-15953
Published: 9 November 2023
(This article belongs to the Proceedings of The 4th International Electronic Conference on Applied Sciences)

Abstract

The agriculture sector plays a vital role in feeding a growing population and contributes significantly to national economies. One of the main challenges in agriculture is the prevention and early detection of pest attacks on crops. Farmers spend considerable time and money detecting pests and diseases, often by inspecting plant leaves for signs of infestation. Late detection of pest attacks and improper pesticide application can damage plants and compromise food quality. This problem can be addressed with artificial intelligence, machine learning, and accurate image classification systems. In recent years, machine learning has made notable advances in image recognition and classification. Hence, in this research article, we used convolutional neural network (CNN)-based models, Cov2D and VGG-16, to identify pest attacks. Our experiments used a personal dataset of 7000 images of pest-attacked leaf samples from different positions on maize plants, categorized into two classes. The Google Colab environment, designed for cloud computing and machine learning, was used for experimentation and implementation.

1. Introduction

In recent years, artificial intelligence and machine learning have grown rapidly in the agriculture sector, playing a crucial role in increasing productivity and contributing to economies worldwide. Machine learning is no longer used only by IT professionals; it is applied across industries by many different specialists [1]. For example, it is used for face detection [2,3], in online marketing for stock prediction [4], for breast cancer detection in the medical field [5], and for lung disease prediction [6]. In agriculture, it is used for image processing and detection; classification of diseases, pests, and wheat spikes; crop yield estimation [7]; crop identification [8]; plant counting; insect attack identification; and solving other crop-related problems. Such applications make it easier to detect, recognize, and classify pests [9], insects, and diseases [10,11].
Maize, a staple food after wheat and rice, is infested every year by the fall armyworm (Spodoptera frugiperda), a highly polyphagous pest of the Noctuidae family. Fall armyworms reportedly attack over 350 host plants across 76 families [12]. At the global scale, crop damage by fall armyworms includes losses in maize (Zea mays L.) of 19.5–41.1% [13], sorghum (Sorghum bicolor (L.) Moench) [14], rice (Oryza sativa) of 24.6–40.9% [14], soybean (Glycine max (L.) Merr.) of 11.0–32.4% [15], cotton (Gossypium hirsutum L.), barley (Hordeum vulgare L.) [16], wheat (Triticum aestivum L.) of 10.1–28.1% [17], and potato (Solanum tuberosum L.) of 8.1–21.0% [18], with graminaceous plants being preferred [12,19].
No dataset of maize fall armyworm pests is available for training, testing, and validation of the models. The results from this research will later be used for a variable rate spraying system and real-time detection in the management of maize fields.
Our previous research [20] performed deep-learning-based detection of weeds in wheat crops, covering the detection and classification of one type of weed. Compared with other advanced techniques, our models offer the following benefits. (1) They consider the possibility of a plant being affected by different weeds at the same time in the same sample. (2) They use images taken with different device cameras at several resolutions. (3) They can easily deal with different lighting conditions, weed sizes, and backgrounds. (4) They can support a variable rate spraying system or real-time spot detection in the field without expensive, high-end technology.
The three main objectives of the study were as follows.
  • A real pest dataset was to be developed, containing images of maize crop leaves collected from the Shandong University of Technology research farm in Zibo, Shandong Province, China.
  • This dataset was to contain images of one type of pest, collected under different weather conditions and at different time intervals.
  • The state-of-the-art Cov2D and VGG16 models were to be used for maize leaf pest detection, classification, and identification.

Literature Review

Table 1 presents a literature review of studies that applied deep learning and machine learning detection models in recent years.

2. Materials and Methods

The detection method consists of two parts: an agricultural pest dataset collected from the field and a deep convolutional neural network (DCNN) for detection. Pest dataset collection involves capturing, labeling, and separating the pest images. The pest detection model is composed of four key components: multicategory classification, pest identification, a pest feature extraction network, and a region proposal network for pest objects.

2.1. Pest Image Acquisition

A total of 5500 images were used for detection, of which 2750 were collected from different points in Shandong, China. Images of maize crops and pest-attacked leaves at different growth stages were taken under different weather conditions, such as daylight, evening, and cloudy conditions. In this process, we ensured that the pest dataset was of sufficient quantity and accuracy to facilitate later data processing and analysis. Dataset images with a pixel resolution of 416 × 416 were taken with a Logitech C920 Pro HD webcam with a resolution of 1080 × 2400 (FHD+) pixels (Full HD 1080p/30 fps; HD 720p/30 fps). Some pictures were also taken with hands or other body parts visible in the background. The original pest dataset pictures from different locations within one research area are summarized in Table 2 and Figure 1. Pictures of healthy leaves were captured on a pest-affected farm.

2.2. Data Labeling

Data labeling was carried out with professional labeling techniques using IrfanView 4.66. Pest location coordinates for both classes of the pest dataset were saved as annotation files in YOLO format. The number of labeled samples corresponded to the number of bounding boxes annotated in each image, and each image could contain multiple labels, depending on the number of pests present in the crop [20].
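Each YOLO-format label line stores a class index followed by the box center and size, normalized by the image dimensions. As an illustrative sketch (not the authors' actual tooling; the class index 0 for fall armyworm is an assumption), a pixel-coordinate bounding box can be converted like this:

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel-coordinate box (xmin, ymin, xmax, ymax)
    to YOLO format: (x_center, y_center, width, height),
    all normalized to [0, 1] by the image dimensions."""
    xmin, ymin, xmax, ymax = box
    x_c = (xmin + xmax) / 2 / img_w
    y_c = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h

# One line per labeled pest; class index 0 (fall armyworm) is hypothetical.
x_c, y_c, w, h = to_yolo((100, 150, 200, 250), 416, 416)
label_line = f"0 {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"
```

This is also why the label count per image equals the number of annotated bounding boxes: each pest contributes one such line.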

2.3. Image Preprocessing

The dataset images were resized to 416 × 416 pixels using IrfanView to shorten training time and improve the training efficiency of the deep neural network models. Rectangular bounding boxes were manually annotated on all pest-attacked leaves using the labeling tools. A total of 5500 images were labeled for training the deep learning models; after annotation, the images were divided into training and testing sets containing 5500 and 1000 pest-attack image samples, respectively.

2.4. Image Detection

Detecting the pest type and species is important for improving crop yield and protecting the crop from pest attacks, so we used the Cov2D and VGG16 models under the TensorFlow framework to obtain more accurate results.

2.5. Data Splitting

To evaluate the models' ability to produce the intended results, we followed the standard practice of dividing the images into a training set, a validation set, and a testing set, in proportions of 70%, 20%, and 10%, respectively. This was done to optimize the detection models: the training set leverages most of the data, while the validation set guides model tuning and improves overall performance. The dataset division thus played a vital role in achieving better and more accurate overall results for both models in detecting fall armyworm pests.

2.6. Hyperparameters of the Models

Table 3 shows the hyperparameters of our models.

2.7. Network Architecture Model

In this study, we used pretrained models, selected for accuracy, for maize crop pest-attack identification. The details of the model architectures are listed in Table 4. Each model has the same filter size, but the other hyperparameters differ and play a very important role in model accuracy. The filter size is central to extracting specific features from the feature maps, and the feature maps depend on the specific filter values. We used the pretrained networks with their original combination of convolution layers and filter sizes.
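The trainable-parameter counts reported in Table 4 follow from standard convolution arithmetic: each layer contributes (filter height × filter width × input channels + 1 bias) × output filters parameters. A minimal helper illustrating this (the example channel counts are assumptions, not taken from the paper's layers):

```python
def conv2d_params(kernel, in_ch, out_ch):
    """Trainable parameters of a 2D convolution layer:
    one (kh x kw x in_ch) kernel per output filter, plus one bias each."""
    kh, kw = kernel
    return (kh * kw * in_ch + 1) * out_ch

# e.g., a 3 x 3 convolution mapping 3 input channels to 64 filters
n = conv2d_params((3, 3), 3, 64)  # (3*3*3 + 1) * 64 = 1792
```

Summing this quantity over all layers (plus the dense layers) gives totals of the kind shown in the Trainable parameters row of Table 4.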

2.8. VGG-16 and Cov2D Tuning Details

In this study, we used two CNN models to identify pest attacks on maize leaves. The input image size for both networks is 416 × 416. In VGG-16, the first two layers have 64 channels with a 3 × 3 filter size and a stride of 2; the next two layers have 256 channels with 3 × 3 filters, followed by max pooling with a stride of 2. After the pooling layer come two convolution layers with 256 channels, then two sets of three convolution layers with pooling layers, all using 3 × 3 filters. The architectures of both models are shown in Figure 2.
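The spatial sizes in these architectures follow from standard convolution arithmetic: output size = floor((input + 2 × padding − filter) / stride) + 1. A small sketch, assuming a padding of 1 for the 3 × 3, stride-2 convolutions (the padding choice is an assumption; the paper does not state it):

```python
def conv_out_size(size, kernel=3, stride=2, pad=1):
    """Output spatial size of a convolution or pooling layer:
    floor((size + 2*pad - kernel) / stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out_size(416)                           # 3x3 conv, stride 2: 416 -> 208
s = conv_out_size(s, kernel=2, stride=2, pad=0)  # 2x2 max pool: 208 -> 104
```

Chaining this function layer by layer reproduces the shrinking feature-map sizes that the filter and stride settings in Table 4 imply.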

2.9. Evaluation Metrics

To evaluate the model performance, we utilized the average precision for each division and the mean average precision. The precision, recall and F1 score were calculated by the following equations:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × (Precision × Recall) / (Precision + Recall)
TP represents the number of true positives, FP denotes the number of false positives, and the FN value corresponds to the number of false negatives.
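The three equations above translate directly into code; this sketch works from raw TP/FP/FN counts (the example counts are hypothetical):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from counts of
    true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts: 90 correct detections, 10 false alarms, 30 misses
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)  # 0.9, 0.75, ~0.818
```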

3. Results and Discussion

3.1. Experimental/Hardware Setup

For this study, we used an Intel(R) Core(TM) i5-10210U CPU @ 1.60 GHz (up to 2.11 GHz) with 8 GB RAM for training and validating our models. Furthermore, an AMD Ryzen Threadripper 3970X 32-core processor (3.69 GHz) was used for training the VGG-16 and Cov2D models. The development environment was PyCharm, with Python 3.8 as the programming language. The main framework for our models was TensorFlow, along with a recent version of OpenCV.

3.2. Evaluation of Results

3.2.1. VGG16 Model Performance

This part of the study employed state-of-the-art deep learning, using the VGG16 and Cov2D models for the detection of plant pests, specifically the fall armyworm. The fall armyworm dataset is not publicly available; we collected it and used it to train the two models with transfer learning. This primary dataset has not been used in any other models.
To ensure proper evaluation, we divided the fall armyworm dataset into three parts: training, testing, and validation samples. Specifically, 80% of the dataset was used for training, 10% for validation, and 20% for testing the pretrained VGG-16 and Cov2D models. Both models were run for 10 epochs; in our pretrained models, accuracy increased and loss decreased after the first epoch. The training curves are shown in Figure 3, and the corresponding accuracy results in Table 5, which also lists test accuracy, test loss, training accuracy, validation accuracy, and training loss. These results show that the pretrained models performed effectively in detecting plant pests.

3.2.2. Cov2D Model Performance

Using the same dataset and the same division as for the VGG16 model above, we evaluated the Cov2D model; this ensured a fair comparison between the two models. The confusion matrix in Table 6 shows the performance of the Cov2D model on the test set, with each row and column index representing a class, specifically the fall armyworm. The results show that the Cov2D model correctly detected the fall armyworm pest, with impressive average training and testing accuracies of 99.9% and 98.5%, respectively. The training loss was 0.0627%, while the testing loss (0.3832%) was slightly higher, as shown in Table 6.
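These accuracies can be read off a confusion matrix as the trace (correct predictions) divided by the total number of samples. A sketch with hypothetical counts (not the paper's actual matrix):

```python
def accuracy(confusion):
    """Accuracy from a square confusion matrix:
    sum of the diagonal (correct) over the sum of all entries."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# hypothetical two-class test matrix: rows = true class, cols = predicted
cm = [[985, 15],   # pest: 985 correct, 15 misclassified as healthy
      [10, 990]]   # healthy: 990 correct, 10 false alarms
acc = accuracy(cm)  # 1975 / 2000 = 0.9875
```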

3.3. Comparison with Other Work

In this section, various models are compared with the state-of-the-art convolutional neural network (CNN) models introduced in [32,33]. Those authors used an online dataset, with 70% of the images for training and the remaining 30% for testing. For the different models listed in Table 7 [34,35], the momentum parameter, base learning rate, and dropout layers were set according to [20]. This study therefore conducted a comparative analysis between different fine-tuned models and our proposed models, presented in Table 7; the models were carefully selected to ensure remarkable accuracy for both testing and training, as presented in Table 4, Table 5, and Figure 4.

4. Conclusions

In this research article, we conducted a successful analysis of two different deep learning models under the TensorFlow framework to identify suitable models for detecting the fall armyworm pest in summer maize leaves. A dataset of 7000 images was collected from the Shandong University of Technology research farm in Zibo, Shandong Province, China.
The evaluation of state-of-the-art convolutional neural networks (CNNs) was based on various metrics, including training accuracy, testing accuracy, training loss, testing loss, recall, and F1-score. Our proposed models, VGG-16 and Cov2D, were compared against other well-known models (AlexNet, ResNet, Inception-V4, YoloV3-Tiny, YoloV4-Tiny, and YoloV5s/m/l), as shown in Table 7. VGG-16 and Cov2D achieved outstanding accuracy results.
One key advantage of the VGG-16 and Cov2D models was their ease of training and testing; their trainable parameters are shown in Table 4. Additionally, Table 3 shows that the hyperparameters were the same for both models except for the dropout rate. Hence, the VGG-16 model is more suitable for a leaf detection system when a new pest is to be added to the model. The detection accuracy and loss of the selected models are presented in Table 5 and Table 6.
Future research will evaluate the proposed models on larger and more varied datasets to assess their generalization and robustness. Additionally, we plan to train and test the models on larger datasets for real-time detection in a spot spraying system, improving the efficiency and accuracy of pest management in the agricultural field, increasing production, and reducing pest control costs.

Author Contributions

Conceptualization, S.I.U.H. and Y.L.; methodology, A.R.; software, S.I.U.H.; validation, S.I.U.H., Y.L. and S.W.; formal analysis, S.W.; investigation, S.W.; resources, S.I.U.H.; data curation, S.I.U.H.; writing—original draft preparation, S.I.U.H.; writing—review and editing, Y.L.; visualization, Y.L.; supervision, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data cannot be shared openly but are available on request from authors.

Acknowledgments

First, thanks to Yubin Lan for his guidance and correction of the paper. Second, thanks to all co-authors for their time and support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Seelam, V.; kumar Penugonda, A.; Kalyan, B.P.; Priya, M.B.; Prakash, M.D. Smart Attendance Using Deep Learning and Computer Vision. Mater. Today Proc. 2021, 46, 4091–4094. [Google Scholar] [CrossRef]
  2. Chen, J.-C.; Patel, V.M.; Chellappa, R. Unconstrained Face Verification Using Deep Cnn Features. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; IEEE: Toulouse, France, 2016. [Google Scholar]
  3. AbdAlmageed, W.; Wu, Y.; Rawls, S.; Harel, S.; Hassner, T.; Masi, I.; Choi, J.; Lekust, J.; Kim, J.; Natarajan, P. Face Recognition Using Deep Multi-Pose Representations. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; IEEE: Toulouse, France, 2016. [Google Scholar]
  4. Hoseinzade, E.; Haratizadeh, S. CNNpred: CNN-Based Stock Market Prediction Using a Diverse Set of Variables. Expert Syst. Appl. 2019, 129, 273–285. [Google Scholar] [CrossRef]
  5. Al Rahhal, M.M.; Bazi, Y.; Al Zuair, M.; Othman, E.; BenJdira, B. Convolutional Neural Networks for Electrocardiogram Classification. J. Med. Biol. Eng. 2018, 38, 1014–1025. [Google Scholar] [CrossRef]
  6. Zhao, X.; Qi, S.; Zhang, B.; Ma, H.; Qian, W.; Yao, Y.; Sun, J. Deep CNN Models for Pulmonary Nodule Classification: Model Modification, Model Integration, and Transfer Learning. J. X-ray Sci. Technol. 2019, 27, 615–629. [Google Scholar] [CrossRef]
  7. Rahnemoonfar, M.; Sheppard, C. Deep Count: Fruit Counting Based on Deep Simulated Learning. Sensors 2017, 17, 905. [Google Scholar] [CrossRef] [PubMed]
  8. Lee, S.H.; Chan, C.S.; Wilkin, P.; Remagnino, P. Deep-Plant: Plant Identification with Convolutional Neural Networks. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; IEEE: Toulouse, France, 2015; pp. 452–456. [Google Scholar]
  9. Hall, D.; McCool, C.; Dayoub, F.; Sunderhauf, N.; Upcroft, B. Evaluation of Features for Leaf Classification in Challenging Conditions. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 5–9 January 2015; IEEE: Toulouse, France, 2015; pp. 797–804. [Google Scholar]
  10. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep Learning in Agriculture: A Survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef]
  11. Dyrmann, M.; Karstoft, H.; Midtiby, H.S. Plant Species Classification Using Deep Convolutional Neural Network. Biosyst. Eng. 2016, 151, 72–80. [Google Scholar] [CrossRef]
  12. Montezano, D.G.; Sosa-Gómez, D.R.; Specht, A.; Roque-Specht, V.F.; Sousa-Silva, J.C.; de Paula-Moraes, S.; Peterson, J.A.; Hunt, T.E. Host Plants of Spodoptera frugiperda (Lepidoptera: Noctuidae) in the Americas. Afr. Entomol. 2018, 26, 286–300. [Google Scholar] [CrossRef]
  13. Savary, S.; Willocquet, L.; Pethybridge, S.J.; Esker, P.; McRoberts, N.; Nelson, A. The Global Burden of Pathogens and Pests on Major Food Crops. Nat. Ecol. Evol. 2019, 3, 430–439. [Google Scholar] [CrossRef]
  14. Marenco, R.J.; Foster, R.E.; Sanchez, C.A. Sweet Corn Response to Fall Armyworm (Lepidoptera: Noctuidae) Damage during Vegetative Growth. J. Econ. Entomol. 1992, 85, 1285–1292. [Google Scholar] [CrossRef]
  15. de Freitas Bueno, R.C.O.; de Freitas Bueno, A.; Moscardi, F.; Postali Parra, J.R.; Hoffmann-Campo, C.B. Lepidopteran Larva Consumption of Soybean Foliage: Basis for Developing Multiple-species Economic Thresholds for Pest Management Decisions. Pest Manag. Sci. 2011, 67, 170–174. [Google Scholar] [CrossRef] [PubMed]
  16. Hardke, J.T.; Lorenz III, G.M.; Leonard, B.R. Fall Armyworm (Lepidoptera: Noctuidae) Ecology in Southeastern Cotton. J. Integr. Pest Manag. 2015, 6, 10. [Google Scholar] [CrossRef]
  17. Yang, X.; Sun, X.; Zhao, S.; Li, J.; Chi, X.; Jiang, Y.; Wu, K. Population Occurrence, Spatial Distribution and Sampling Technique of Fall Armyworm Spodoptera frugiperda in Wheat Fields. Plant Prot. 2020, 46, 23. [Google Scholar]
  18. Gebretsadik, K.G.; Liu, Y.; Yin, Y.; Zhao, X.; Li, X.; Chen, F.; Zhang, Y.; Chen, J.; Chen, A. Population Growth of Fall Armyworm, Spodoptera frugiperda Fed on Cereal and Pulse Host Plants Cultivated in Yunnan Province, China. Plants 2023, 12, 950. [Google Scholar] [CrossRef] [PubMed]
  19. Malo, M.; Hore, J. The Emerging Menace of Fall Armyworm (Spodoptera frugiperda JE Smith) in Maize: A Call for Attention and Action. J. Entomol. Zool. Stud. 2020, 8, 455–465. [Google Scholar]
  20. Haq, S.I.U.; Tahir, M.N.; Lan, Y. Weed Detection in Wheat Crops Using Image Analysis and Artificial Intelligence (AI). Appl. Sci. 2023, 13, 8840. [Google Scholar] [CrossRef]
  21. Shijie, J.; Peiyi, J.; Siping, H. Automatic Detection of Tomato Diseases and Pests Based on Leaf Images. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017; IEEE: Toulouse, France, 2017; pp. 2510–2537. [Google Scholar]
  22. Dey, B.; Haque, M.M.U.; Khatun, R.; Ahmed, R. Comparative Performance of Four CNN-Based Deep Learning Variants in Detecting Hispa Pest, Two Fungal Diseases, and NPK Deficiency Symptoms of Rice (Oryza sativa). Comput. Electron. Agric. 2022, 202, 107340. [Google Scholar] [CrossRef]
  23. Burhan, S.A.; Minhas, S.; Tariq, A.; Hassan, M.N. Comparative Study of Deep Learning Algorithms for Disease and Pest Detection in Rice Crops. In Proceedings of the 2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Bucharest, Romania, 25–27 June 2020; IEEE: Toulouse, France, 2020; pp. 1–5. [Google Scholar]
  24. Waheed, H.; Zafar, N.; Akram, W.; Manzoor, A.; Gani, A.; Islam, S.U. Deep Learning Based Disease, Pest Pattern and Nutritional Deficiency Detection System for “Zingiberaceae” Crop. Agriculture 2022, 12, 742. [Google Scholar] [CrossRef]
  25. Rahman, C.R.; Arko, P.S.; Ali, M.E.; Khan, M.A.I.; Apon, S.H.; Nowrin, F.; Wasif, A. Identification and Recognition of Rice Diseases and Pests Using Convolutional Neural Networks. Biosyst. Eng. 2020, 194, 112–120. [Google Scholar] [CrossRef]
  26. Hadipour-Rokni, R.; Asli-Ardeh, E.A.; Jahanbakhshi, A.; Sabzi, S. Intelligent Detection of Citrus Fruit Pests Using Machine Vision System and Convolutional Neural Network through Transfer Learning Technique. Comput. Biol. Med. 2023, 155, 106611. [Google Scholar] [CrossRef]
  27. Türkoğlu, M.; Hanbay, D. Plant Disease and Pest Detection Using Deep Learning-Based Features. Turk. J. Electr. Eng. Comput. Sci. 2019, 27, 1636–1651. [Google Scholar] [CrossRef]
  28. Tetila, E.C.; Machado, B.B.; Astolfi, G.; de Souza Belete, N.A.; Amorim, W.P.; Roel, A.R.; Pistori, H. Detection and Classification of Soybean Pests Using Deep Learning with UAV Images. Comput. Electron. Agric. 2020, 179, 105836. [Google Scholar] [CrossRef]
  29. Li, Y.; Wang, H.; Dang, L.M.; Sadeghi-Niaraki, A.; Moon, H. Crop Pest Recognition in Natural Scenes Using Convolutional Neural Networks. Comput. Electron. Agric. 2020, 169, 105174. [Google Scholar] [CrossRef]
  30. Meena, S.D.; Susank, M.; Guttula, T.; Chandana, S.H.; Sheela, J. Crop Yield Improvement with Weeds, Pest and Disease Detection. Procedia Comput. Sci. 2023, 218, 2369–2382. [Google Scholar] [CrossRef]
  31. Cleetus, L.; Raji Sukumar, A.; Hemalatha, N. Computational Prediction of Disease Detection and Insect Identification Using Xception Model. bioRxiv 2021. [Google Scholar] [CrossRef]
  32. Rangarajan, A.K.; Purushothaman, R.; Ramesh, A. Tomato Crop Disease Classification Using Pre-Trained Deep Learning Algorithm. Procedia Comput. Sci. 2018, 133, 1040–1047. [Google Scholar] [CrossRef]
  33. Kaushik, M.; Prakash, P.; Ajay, R.; Veni, S. Tomato Leaf Disease Detection Using Convolutional Neural Network with Data Augmentation. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; IEEE: Toulouse, France, 2020; pp. 1125–1132. [Google Scholar]
  34. Eunice, J.; Popescu, D.E.; Chowdary, M.K.; Hemanth, J. Deep Learning-Based Leaf Disease Detection in Crops Using Images for Agricultural Applications. Agronomy 2022, 12, 2395. [Google Scholar]
  35. Khalid, S.; Oqaibi, H.M.; Aqib, M.; Hafeez, Y. Small Pests Detection in Field Crops Using Deep Learning Object Detection. Sustainability 2023, 15, 6815. [Google Scholar] [CrossRef]
Figure 1. Sample images of dataset: (A) pest-fall armyworm; (B) healthy leaf of a maize crop during different growth stages.
Figure 2. VGG16 and Cov2D model architecture.
Figure 3. Performance analysis of the VGG-16 model using the primary dataset: (A) model detection accuracy and (B) training and testing accuracy.
Figure 4. Performance analysis of the Cov2D model using the primary dataset: (A) model detection accuracy and (B) training and testing accuracy.
Table 1. Deep learning and machine learning used in various studies.
S.No | Year | Journal Name | Model | Purpose | Pest/Disease | Classes | Accuracy % | Crop Name | References
1 | 2017 | IEEE | VGG16 + SVM | Classification | Pest/Disease | 10 | – | Tomato | [21]
2 | 2022 | Computers & Electronics in Agriculture | VGG16, VGG19, ResNet50, InceptionV3 | Classification | Pest/Disease/NPK deficiency | 7 | – | Rice | [22]
3 | 2020 | IEEE | VGG16, VGG19, ResNet50, ResNet50v2, ResNet101v2 | Classification | Pest/Disease | 4 | – | Rice | [23]
4 | 2022 | Agriculture | ANN, CNN, VGG-16, MobileNetV2 | Classification | Pest/Disease/Nutrient deficiency | 2 | – | Zingiberaceae | [24]
5 | 2020 | Biosystems Engineering | VGG16, InceptionV3, MobileNetV2, NasNet Mobile, SqueezeNet v1.1, Simple CNN | Identification/Recognition | Pest/Disease | 9 | – | Rice | [25]
6 | 2023 | Computers in Biology and Medicine | AlexNet, VGG-16, GoogleNet, ResNet-50 | Detection | Pest | 3 | – | Citrus fruit | [26]
7 | 2019 | Turkish Journal of Electrical Engineering and Computer Sciences | AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, Inceptionv3, InceptionResNetv2, ResNet50, ResNet101 | Detection | Pest/Disease | 8 | – | Not given | [27]
8 | 2020 | Computers & Electronics in Agriculture | Inception-v3, ResNet-50, VGG-16, VGG-19, Xception | Detection and classification | Pests | 13 | 93.82, 91.87, 91.80, 91.33, 90.52 | Soybean | [28]
10 | 2020 | Computers & Electronics in Agriculture | VGG-16, VGG-19, ResNet50, ResNet152, GoogLeNet | Recognition | Pest | 10 | 98.91 | Not given | [29]
11 | 2023 | Procedia Computer Science | DenseNet201, Hyperparameter Search 2D layer, MobileNet, VGG16, InceptionV3 | Detection | Weeds, Pest, and Disease | 9 | 87.85, 91.85, 78.71, 99.62, 71.07 | Not given | [30]
12 | 2021 | bioRxiv | CNN, VGG16, Inception V3, Xception | Identification | Disease/Insect | 10 | 24.28, 71.74, 77.19, 77.90, 82.83, 82.11, 80.33, 82.89 | Tomato | [31]
Table 2. The unhealthy images and healthy images of the maize crop dataset.
Network Models | Pest-Attacked Pictures | Healthy Pictures
VGG-16 | 2750 | 2000
Cov2D | 2750 | 2000
Table 3. Hyperparameters of the VGG-16 and Cov2D models.
Hyperparameter | Cov2D | VGG-16
Activation | Sigmoid | Sigmoid
Dropout | 0.4 | 0.8
Optimizer | Adam | Adam
Learning rate | 0.001 | 0.001
Loss | Binary_crossentropy | Binary_crossentropy
Metrics | Accuracy | Accuracy
Epochs | 10 | 10
Table 4. VGG-16 and Cov2D model pretrained network architecture.
Network Model | VGG-16 | Cov2D
Total layers | 16 | 64
Max pool layers | 5 | 5
Dense layers | 3 | 1
Drop-out layers | 0.8 | 0.4
Flatten layers | 1 | 1
Filter size | 3 × 3 | 3 × 3
Stride | 2 × 2 | 2 × 2
Trainable parameters | 25,089 | 790,337
Table 5. Comparative analysis performance of the VGG-16 model.
Network Model | Training Accuracy (%) | Training Loss (%) | Test Accuracy (%) | Test Loss (%)
VGG-16 | 99.9 | 0.0240 | 99.9 | 0.0526
Table 6. Comparative analysis performance of the Cov2D-based CNN model.
Network Model | Training Accuracy (%) | Training Loss (%) | Test Accuracy (%) | Test Loss (%)
Cov2D | 99.9 | 0.0627 | 98.5 | 0.3832
Table 7. Different CNN model metrics.
Dataset Used | Pre-Trained Models | Multi-Classes | Accuracy % | References
Plant-Village | AlexNet | 7 | 98.8 | [32]
Plant-Village | ResNet50 | 6 | 97.1 | [33]
Plant-Village | Inception-V4 | 38 | 97.59 | [34]
Captured from field | YoloV3-Tiny | 3 | 33.2 | [35]
Captured from field | YoloV4-Tiny | 3 | 45.0 | [35]
Captured from field | YoloV5s | 1 | 59 | [20]
Captured from field | YoloV5l | 1 | 67 | [20]
Captured from field | YoloV5m | 1 | 84 | [20]
Collected from field | VGG-16 | 2 | 99.9 | Our Work
Collected from field | Cov2D | 2 | 99.9 | Our Work

Haq, S.I.U.; Raza, A.; Lan, Y.; Wang, S. Identification of Pest Attack on Corn Crops Using Machine Learning Techniques. Eng. Proc. 2023, 56, 183. https://doi.org/10.3390/ASEC2023-15953
