Article

A Novel Technique for Classifying Bird Damage to Rapeseed Plants Based on a Deep Learning Algorithm

by Ali Mirzazadeh 1,*, Afshin Azizi 1, Yousef Abbaspour-Gilandeh 2, José Luis Hernández-Hernández 3,*, Mario Hernández-Hernández 4 and Iván Gallardo-Bernal 5

1 Department of Agricultural Engineering and Technology, Faculty of Agriculture and Natural Resources (Moghan), University of Mohaghegh Ardabili, Ardabil 56131-56491, Iran
2 Department of Biosystems Engineering, Faculty of Agriculture and Natural Resources, University of Mohaghegh Ardabili, Ardabil 56199-11367, Iran
3 Tecnológico Nacional de México/Campus Chilpancingo, Chilpancingo 39070, Guerrero, Mexico
4 Faculty of Engineering, Autonomous University of Guerrero, Chilpancingo 39087, Guerrero, Mexico
5 Higher School of Government and Public Management, Autonomous University of Guerrero, Chilpancingo 39087, Guerrero, Mexico
* Authors to whom correspondence should be addressed.
Agronomy 2021, 11(11), 2364; https://doi.org/10.3390/agronomy11112364
Submission received: 15 October 2021 / Revised: 8 November 2021 / Accepted: 18 November 2021 / Published: 22 November 2021
(This article belongs to the Special Issue Remote Sensing in Agriculture)

Abstract

Estimation of crop damage plays a vital role in the management of fields in the agriculture sector. An accurate measure of damage provides key guidance to support agricultural decision-making systems. The objective of this study was to propose a novel technique for classifying damaged crops based on state-of-the-art deep learning algorithms. To this end, a dataset of rapeseed field images was gathered from the field after bird attacks. The dataset consisted of three classes: undamaged, partially damaged, and fully damaged crops. Vgg16 and ResNet50, two pre-trained deep convolutional neural networks, were used to classify these classes. The overall classification accuracy reached 93.7% and 98.2% for the Vgg16 and ResNet50 algorithms, respectively. The results indicated that deep neural networks have a high ability to distinguish and categorize different image-based datasets of rapeseed. The findings also revealed the great potential of deep learning-based models to classify other damaged crops.

1. Introduction

Bird damage to agricultural and horticultural products causes serious problems, especially in high-value crops [1,2]. Damaged crops are often a source of contamination, as they attract insects and help propagate diseases, causing further economic losses to farmers. Among different crops, rapeseed is particularly prone to bird damage and is a cause of concern for farmers [3].
Rapeseed (Brassica napus) is a major crop in the Iranian Agricultural Ministry’s programs for increasing oil production [4]. For this reason, more than 80 thousand ha in Iran have been assigned to this crop, with the north of Ardabil Province (the Moghan Plain) being an important region in this regard [4]. Despite planned measures to support rapeseed cultivation, such as guaranteed crop purchasing by the government, support for manufacturers and buyers of this crop, and subsidized agricultural inputs for farmers, various factors have restricted the area under cultivation and should be considered by policymakers, planners, and experts. According to reports from local farmers and authorities, field observations, and local news, bird damage is one of the major limiting factors from initial growth to the beginning of stemming (the rosette stage).
Every year, when the cold season arrives, in addition to native birds such as cuckoos and sparrows, the Moghan region hosts thousands of migratory birds of different species that travel mainly from the cold Siberian and Russian regions for overwintering [5]. After a temporary stay of several months, they return to their regions of origin. Among these birds, Tetrax tetrax (the little bustard, named mazmak in the local vernacular), which travels annually from Russia and Kazakhstan over the north of Ardabil Province, selects the Moghan Plain for its temporary stay. Tetrax tetrax is a wild bird species that lives in pastures and grasslands and in Europe has adapted its habitat to pastures and arable fields [6]. The bird is currently globally threatened and is classified in Europe as SPEC 2 (a species with a poor conservation status) [6,7]. Overwintering of these birds is often associated with damage to rapeseed crops in the region [8]. Because of the cold weather and restricted food supplies, along with their large population, the birds feed on broad-leaved plants of the region, especially rapeseed. This is observed directly from the droppings and feathers left in the fields, which clearly indicate the damage these birds cause to the rapeseed crop. Based on field observations, the crop is damaged by these birds between the four-leaf and rosette stages; in other words, this occurs between autumn and winter, before the plant reaches the tillering stage [8].
Identifying and classifying crop damage directly impacts future prevention strategies. In order to stop, or at least reduce, bird damage, different approaches have been suggested by researchers, such as crop improvement, breeding, and the use of sparrows and hunter-gatherers [7,8]. However, as Tetrax tetrax is an endangered species, hunting it is not permissible, and chasing the birds away with other bird species may not be feasible [7]. European Union support for the species is another reason [1]. Surveys conducted between 2002 and 2009 indicated a 17% decline in the Tetrax tetrax male population due to illegal hunting, collisions with power lines, and other anthropogenic causes [9].
Studies have shown that appropriate planting techniques can reduce the damage caused by birds to growing rapeseed bushes [8]. Field research has indicated that the major damage to rapeseed crops in different regions of Iran, from the cultivation stage to the end of the rosette stage, is predominantly caused by migratory birds such as Tetrax tetrax and cuckoos [8]. Bird damage occurs from the vegetative stage of rapeseed, wheat, and barley to the end of the rosette stage, through eating the seeds, uprooting the plant bushes, and then eating the young leaves of the plant [10,11]. However, this damage does not necessarily represent a severe reduction in crop yields [5,12]. In Essex, east of London (UK), for instance, about 81% of lettuce plants in the fall were damaged by cuckoos, but the crop still yielded acceptable production. References [5,12] reported that three zones were identified in Iran for the overwintering of Tetrax tetrax, including the Moghan Plain in northwest Iran, Turkmen Sahra in the southeast corner of the Caspian Sea, and the Sarakhs Plain in northeast Iran near the Turkmenistan border. Population surveys of this migratory bird during 2005–2009 showed that over 10,050 individuals wintered in these habitats in Iran and returned to their regions of origin (mainly Russia and Kazakhstan) after the weather warmed [8].
A research group studied the activity of common cranes in arable fields, as an example of large grazing birds, and their impacts on the agricultural sector [13]. They surveyed different factors affecting crop damage in order to develop preventive strategies and reported that appropriate conservation-based practices are required to reduce crop damage. In another study, the damage caused by monk parakeets to crops was evaluated [14]. In this research, various agricultural and horticultural products such as tomato, corn, and red plum were analyzed regarding the crop damage caused by the parakeets. This work was done manually by counting the number of damaged crops and extrapolating to the total area. Time consumption and the inability to use the data in real-time applications are the main limitations of this method of crop damage estimation.
In similar studies, bird abundance and damage to crop fields have been analyzed with respect to habitat features and their economic impacts on field yields [15,16]. These works reported that some bird species can cause significant damage to spring crops, where severe damage to freshly planted crops can lead to the loss of the entire crop and therefore require replanting. Accordingly, managers should pay special attention to a series of local factors, such as the geometrical features of the fields, to prevent bird damage to crops.
A few studies have used image and non-image data for crop protection and for predicting probable damage causes such as frost and hail, which are very damaging to most crops [17,18]. Such studies, which use common methods of image processing and analysis, face two main challenges despite their relative accuracy. First, they rely on manual feature extraction, so the required information must be obtained by the users rather than automatically. Second, they generalize poorly, as their performance is not robust enough to be reliable on new data across a variety of applications. Thus, smarter methods are needed. In particular, there is a gap regarding the identification and classification of rapeseed damage using nondestructive, computer-based methods. Accordingly, the current study tries to fill this gap with a strong method that requires no direct interference from human operators: deep transfer learning [19].

Deep Learning

Deep learning is a major domain of machine learning and artificial intelligence which has been used extensively in a variety of fields due to its outstanding performance [20]. Deep learning models are often designed and implemented using neural networks, where the number of hidden layers determines the depth of the network. These deep neural networks have achieved major breakthroughs in image classification [21,22]. It is remarkable that traditional machine learning methods are outperformed by deep learning by a significant margin in domains such as computer vision, natural language processing, and time series analysis [20]. Due to their high ability to extract increasingly complex features, deep learning-based models work best for unstructured data such as images, text, and sensor data [23]. In other words, complex features are detected in the subsequent layers of the deep models and used in the classification procedure. The entire procedure is done automatically, with no direct interference by a human operator, which is the great advantage of deep neural networks [24].
There are several deep learning architectures, of which convolutional neural networks (CNNs) are the most famous [20]. CNNs have great potential in the area of computer vision, especially when implemented on modern powerful graphics processing units (GPUs) [25]. Studies have shown that using GPUs significantly improves the recognition rate on many vision datasets such as MNIST and CIFAR10 [26]. Due to the extensive applications of deep learning models in different domains, they have also entered the agricultural sector. Comprehensive surveys explaining in more detail the use of deep learning techniques for solving agricultural and food chain challenges have been conducted by [27,28,29]. It is worth noting that deep learning models are data-driven, and their main limitation is the need for large volumes of input data for training in order to work correctly [20,23]. However, methods such as active learning and transfer learning have been able to address this challenge to a good extent in recent years [20,23], and comparisons of these models with other methods have been performed regarding performance.
Several studies have been conducted on the applications of deep learning in identifying and classifying damaged plants. In [30], crop diseases and pests were identified using deep learning and a fuzzy system. These researchers claimed their method had advantages such as better robustness, generalization, and acceptable accuracy. In another study, different deep network models for classifying soybean pest images were evaluated [31]. The results showed that very good accuracy can be achieved with transfer learning and fine-tuning techniques. The performance of these models was also reported to be significantly superior to other feature extraction methods such as SIFT and SURF.
The combination of a CNN and learning vector quantization (LVQ) was investigated to detect and classify plant leaf disease [32]. In this study, color characteristics were extracted by a modeled CNN, and the resulting information was fed as input to an LVQ. Their experimental results showed satisfactory performance in identifying and classifying four types of tomato leaf diseases. In another study, plant diseases were identified with various machine learning techniques, including deep convolutional neural networks [33]. They achieved 95% accuracy, which represented better performance than earlier similar studies. In yet another study, generative adversarial networks (GANs) were used to compensate for the lack of data on disease and plant pest images, and the classification task was performed with a CNN; the results were reported as satisfactory [34].
Considering the cases mentioned above and to overcome the current limitations, the objective of the current research was to present a new method for classifying images of rapeseed crops damaged by Tetrax tetrax birds, based on two deep neural networks, Vgg16 and ResNet50, using a transfer learning strategy.

2. Materials and Methods

2.1. Location and Description of the Study Area

All imaging operations were conducted in 2019 at Moghan Agro-Industrial & Livestock Company (MAIL Co., 39°31′35″ N 47°57′24″ E, at an altitude of 46 m above sea level) on the Moghan Plain, Ardabil, Iran (Figure 1). Several rapeseed fields were chosen as the target crop to provide the required image dataset. The damage to the rapeseed crop involves leaves partially to completely eaten by the birds. Accordingly, to determine the category of rapeseed damage, all fields were divided into three groups, including undamaged, partially damaged (damage to some parts of the plant leaves), and fully damaged (leaves almost completely eaten), based on the advice of agricultural experts and insurers. Then, to count the number of plants per unit area and the number of leaves per plant, a 0.5 m × 0.5 m wooden frame was used. This frame was placed randomly at different points in each field, away from the borders; the results are reported in Table 1. In the next step (after image acquisition), the captured images were labeled according to these values. Since the current study uses supervised learning, the label of each image is required for training the deep neural network.

2.2. Dataset Preparation

In order to feed the deep neural networks for classifying damaged rapeseed crops, image data were used. To provide the dataset, a 10-megapixel digital camera (W3-Fujifilm, Fuji, Minato City, Tokyo, Japan) equipped with a 12 mm focal length lens was used under real field conditions in terms of lighting. On the day of image capture, the weather was sunny and the temperature was 15 °C; the wind speed was only 3 km/h, so it was barely felt. The camera was mounted on an aluminum platform to capture images, with the distance between the camera holder and the ground set at 80 cm. This distance was optimal for image acquisition in the current research, as the pixels of these images were not affected by geometric distortion. Accordingly, no geometrical corrections were performed on the RGB images, and they were directly used for feeding the designed network. The resolution of this image dataset was 3784 × 2536 pixels. The platform built to hold the camera is displayed in Figure 2, and was moved manually across the field. A sample of the dataset used in this study is shown in Figure 3. To obtain more images and improve the generalization power of the classification model, data augmentation was used, as sketched below. To perform this, rotation by 180° and translation in both x and y directions were applied to the input images to increase their number artificially.
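The paper does not report an implementation of this augmentation step; the following is a minimal sketch of how it could be done with torchvision transforms, assuming a 180° rotation and small x/y shifts (the 10% translation range is an illustrative assumption, as the magnitude is not specified above).

```python
import torchvision.transforms as T
from PIL import Image

# Augmentations described above: rotation by 180 degrees and x/y translation.
augmentations = [
    T.RandomRotation(degrees=(180, 180)),             # rotate by exactly 180 degrees
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # shift up to 10% along x and y (assumed)
]

def augment(image_path):
    """Return artificially generated variants of a single field image."""
    img = Image.open(image_path).convert("RGB")
    return [aug(img) for aug in augmentations]
```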

2.3. Classification Model

It is remarkable that providing massive amounts of data for many real-world applications (as in the current research) is often very hard or impossible [23]. Thus, a transfer learning technique was used to overcome this challenge; indeed, transfer learning was used because the number of images was limited. In this regard, pre-trained networks, especially convolutional neural networks (CNNs), famous models with many applications in computer vision tasks, were applied. Vgg16 and ResNet50 are deep CNNs trained on approximately 1.2 million images from ImageNet [35]. These deep neural networks were used via transfer learning to avoid overfitting. Both networks are capable of classifying 1000 object categories into their specific classes [36,37]. The input images to Vgg16 and ResNet50 are 224 × 224 pixels in size, but the number of convolutional layers is 13 and 50, respectively [36].
After the input images enter the network, CNNs identify lower-level features such as edges and spots, medium-level features such as corners and texture, and higher-level features, which form the overall shape of objects within the scene [24]. In other words, as the network goes deeper, more complex features are extracted, so that performance is often high [20]. Other specifications of Vgg16, such as the number and size of kernels, the type of activation function, and the pooling and batch normalization (BN) layers, are reported in Table 2. An activation function is a nonlinear function that takes the weighted sum of all the inputs from previous layers and then generates and passes an output value to the next layers, controlling the outputs of the neural network [38]. Batch normalization is a technique that normalizes the input of the activation functions in a hidden layer [39]. It speeds up training, enables higher learning rates, and reduces overfitting. A convolutional layer is a layer of a deep neural network in which a filter passes along an input matrix and extracts the required features. A kernel (filter) is a matrix smaller than the input matrix which is used to extract certain features from it; the quality of the features and the network performance depend on the optimal values of these kernels [39]. Pooling is an operation used to reduce the size of the feature matrices created by earlier convolutional layers [23]. The architecture of Vgg16 is shown as a sample in Figure 4, and one of its convolutional blocks is sketched below. Residual networks, of which ResNet50 is a member and which contain more hidden layers, won the ImageNet challenge in 2015 in the image classification task. The main difference between ResNet50 and Vgg16 is the existence of skip connections, which connect the input of a convolution block to its output [36,37]. This trick is often useful in multi-class classification. The specifications of ResNet50 are described in more detail in [37].
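As an illustration of the layer types just defined, the following is a minimal PyTorch sketch of a single convolutional block as listed in Table 2 (3 × 3 convolution with stride 1, batch normalization, ReLU, and an optional 2 × 2 max-pooling layer); the helper name and padding choice are illustrative assumptions, not the authors' code.

```python
import torch.nn as nn

def vgg_block(in_channels, out_channels, pool=False):
    """One Vgg16-style block from Table 2: 3x3 convolution (stride 1),
    batch normalization, ReLU, and optionally 2x2 max pooling."""
    layers = [
        nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU(inplace=True),
    ]
    if pool:
        layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)
```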
In the training procedure, hyperparameters, the meta-settings that control the behavior of the learning model, were adjusted. The optimal values for these hyperparameters were tuned after several rounds of trial and error. The hyperparameters used in Vgg16 and ResNet50 for classifying the rapeseed images are reported in Table 3. It should be noted that the parameters (weights and biases) of the frozen layers were not changed, as the networks had previously been trained on the huge ImageNet dataset. Due to the difference between the source (ImageNet) and target datasets, the first 55% of the layers of Vgg16 were frozen, keeping their weights and biases, and only the remaining 45% of the parameters were updated in the backpropagation phase. These ratios were obtained by trial and error as well as by considering the similarity between the source dataset of these pre-trained networks (ImageNet) and the target dataset (the dataset of this study). The percentage of frozen layers for ResNet50 was set to 45%. The dataset was divided into 80% for training, 10% for validation, and 10% for the test set, which numbered 296, 37, and 37 images, respectively. A sketch of this transfer learning setup is given below.
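A minimal PyTorch sketch of the transfer-learning setup described above for Vgg16, using the hyperparameters from Table 3 (learning rate 0.002, L2 regularization 0.005, learning rate drop period 20 and factor 0.7); expressing the freezing as a fraction of parameter tensors and the helper name are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # undamaged, partially damaged, fully damaged

def build_vgg16(freeze_fraction=0.55, lr=0.002, weight_decay=0.005):
    """ImageNet-pretrained Vgg16 with the earliest layers frozen and a
    new 3-class head for rapeseed damage classification."""
    model = models.vgg16(pretrained=True)
    params = list(model.parameters())
    cutoff = int(len(params) * freeze_fraction)
    for p in params[:cutoff]:
        p.requires_grad = False  # keep ImageNet weights in the early layers
    # Replace the 1000-class ImageNet head with a 3-class head.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad],
        lr=lr, weight_decay=weight_decay)
    # Learning rate drop period and factor from Table 3.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.7)
    return model, optimizer, scheduler
```

The same recipe would apply to ResNet50 by loading models.resnet50(pretrained=True), replacing model.fc, and substituting the ResNet50 values from Table 3.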

2.4. Performance Validation

In classification tasks, a common tool for evaluating performance is the confusion matrix, from which different measures can be extracted [40]. In the current study, accuracy (correctly classified image rate), precision (positive predictive value), and recall (true positive rate) were computed as the most conventional metrics to validate the presented classification models. They are defined as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
where true positive (TP) and true negative (TN) denote correctly classified positive and negative samples, respectively, false negative (FN) denotes positive samples incorrectly classified as negative, and false positive (FP) denotes negative samples incorrectly classified as positive [41]. Also, the K-fold cross-validation technique was used to improve the robustness of the CNNs through more stable training. K was chosen as 10, so that each image in the entire dataset was used in the test set once and used to train the models nine times. A sketch of how these metrics can be computed from a confusion matrix is given below.
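The following is a small sketch of how these metrics can be computed from a confusion matrix; it assumes rows are predicted classes and columns are actual classes (the orientation that reproduces the Vgg16 values reported in Table 5 from the Vgg16 matrix in Table 4).

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy, precision, and recall per class from a confusion matrix
    whose rows are predicted classes and columns are actual classes."""
    total = cm.sum()
    metrics = {}
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fp = cm[c, :].sum() - tp  # predicted as class c but actually another class
        fn = cm[:, c].sum() - tp  # actually class c but predicted as another class
        tn = total - tp - fp - fn
        metrics[c] = {
            "accuracy": (tp + tn) / total,
            "precision": tp / (tp + fp),
            "recall": tp / (tp + fn),
        }
    return metrics

# Vgg16 confusion matrix from Table 4 (undamaged, partially damaged, fully damaged).
vgg16_cm = np.array([[36, 1, 0],
                     [1, 32, 4],
                     [0, 1, 36]])
print(per_class_metrics(vgg16_cm))  # class 0: ~98.2 / 97.3 / 97.3 percent
```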

2.5. Conventional Image Processing

In order to evaluate the results of the deep learning-based models more thoroughly, a method based on conventional image processing (IP) was implemented. This IP method involved extracting color features to determine the amount of greenness. Since the plants appear green in the images, the color feature (especially green) contains potentially valuable information that can play an important role in classifying and selecting the right decision boundary. For this reason, greenness was examined. To do this, the RGB color space was first normalized to reduce the effect of brightness variations on the color of given pixels [42]. Then, this color space was decomposed, and the amount of greenness was calculated from the green channel. The obtained values were given to a linear discriminant analysis model, a linear classifier, to classify the images. The confusion matrix was obtained to measure the efficiency of the model by calculating the classification criteria introduced in the previous section. A sketch of this baseline is given below.
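A sketch of this baseline as we read it: the mean normalized green channel is used as a single greenness feature and fed to a linear discriminant analysis classifier; the exact feature definition and the function names are assumptions, since the text does not give them.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def greenness(image_rgb):
    """Mean normalized-green value of an RGB image (H x W x 3 array)."""
    rgb = image_rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-6       # avoid division by zero on dark pixels
    g_norm = rgb[:, :, 1] / total        # normalized green channel
    return float(g_norm.mean())

def train_greenness_lda(images, labels):
    """Fit an LDA classifier on one greenness feature per image.
    labels: 0 = undamaged, 1 = partially damaged, 2 = fully damaged."""
    X = np.array([[greenness(img)] for img in images])
    return LinearDiscriminantAnalysis().fit(X, labels)
```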

2.6. Implementation Requirements

Regarding the software requirements, Vgg16 and ResNet50 were implemented in the Python 3.8.5 programming language using the open-source PyTorch 1.9.0 library developed by Facebook’s AI research laboratory. The corresponding hardware included a system equipped with an Intel Core i7 3.6 GHz processor, 8 GB of RAM, and an NVIDIA (CA, USA) GeForce GTX 1060 GPU. It is worth noting that using a GPU significantly expedites the computations compared to a CPU alone.

3. Results

Deep CNNs perform their work automatically by extracting and engineering various representations, from lower- to higher-level features. The result of Vgg16 in detecting edges of rapeseed leaves and stems is illustrated in Figure 5. Based on this figure, it can be inferred that edges, corners, and other important sites inside the images, which are good locations for extracting valuable information, were captured in good agreement with the content of the images used in this study. Another point from this figure is that, given this behavior of the CNN (Vgg16), the transfer learning technique was the right approach.
The confusion matrix values for evaluating the performance of the deep models presented in this study are shown in Table 4. The overall accuracy of Vgg16 and ResNet50 reached 93.7% and 98.2%, respectively. The highest accuracy of both models was obtained for the first (undamaged) and third (fully damaged) classes. Indeed, the second class (partially damaged) has some similarities to classes 1 and 3, causing a few misclassifications in this class. ResNet50 was able to predict all images in classes 1 and 3 correctly, with only two errors in the second class.
To visualize the performance of Vgg16 and ResNet50 based on the confusion matrix, their bar plots are shown in Figure 6. In the plot, the horizontal axis is probability, because the output layer of the CNNs used (Vgg16 and ResNet50) assigns a probability vector to the classes, so that the probability of the appropriate class determines its prediction rate. The bar graph shows two bars for each class: one for the corresponding class, with a large probability, and another for one of the other two classes into which samples were misclassified. As can be observed from this illustration, the undamaged and fully damaged classes were predicted by the two classifiers more accurately than the partially damaged class, although the performance of ResNet50 was nearly perfect compared to Vgg16. Another point is that, for ResNet50, the p-value of the partially damaged class is below the 5% significance level, which means that the classification of the images is sensitive to this class but not to the other classes (undamaged and fully damaged rapeseed), while for the Vgg16 algorithm, the p-value of the partially damaged class is larger than the significance level, supporting that the image samples come from different distributions.
Table 5 reports the results of classifying the three rapeseed classes in terms of soundness using both Vgg16 and ResNet50 as deep learning-based models and the conventional image processing method. These values were obtained from the confusion matrix using the defined metrics of accuracy, precision, and recall. The table indicates that the best performance belonged to ResNet50, with an overall accuracy of 98.2%. As mentioned earlier, CNNs perform their recognition task by extracting a wide range of features, so ResNet50, with its larger number of convolutional layers (compared to Vgg16), found features that are linearly separable in the feature space. Since the Softmax function in the last fully connected layer of both models acts as a linear classifier, it was able to classify the three classes with a very high accuracy in ResNet50 compared to Vgg16. The results of the conventional image processing (IP) method are also shown in Table 5. The overall accuracy of the IP method is significantly lower than that of the deep learning models, and the values of the three classification criteria were also very different for all classes except the first category.

4. Discussion

ResNet50 produced the best rapeseed image classification performance, for two major reasons. One is that its number of hidden layers is high (50), enabling this model to learn and extract more complex features and use them in the classification task. In other words, by learning more features in deeper models, the machine’s experience increases, so that the performance of the network also improves significantly. This is in agreement with the results obtained in similar studies [20,24]. The second reason is the skip connection, which is present in ResNet50 but not in Vgg16 (a minimal example is sketched below). Skip connections mitigate the problem of vanishing gradients when updating weights; indeed, they allow gradients to flow from earlier layers in the network to later layers. This is also in line with a similar study [43]. Thus, they pass information from the down-sampling layers to the up-sampling layers. Also, in this study, the learning rate was the most effective hyperparameter and played a decisive role in the convergence of the convolutional neural networks.
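For illustration only, the following is a minimal PyTorch residual block showing the skip connection discussed above, in which the block input is added to its output so gradients can flow directly back to earlier layers; it is a simplified sketch, not the exact ResNet50 bottleneck design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Simplified residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection adds the input back in
```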
Comparing the results of this study with [32], in which deep neural networks were used to classify similar image data, both models used in this study (Vgg16 and ResNet50) had better performance in terms of accuracy. Compared to [33], only ResNet50 achieved higher accuracy, while Vgg16 provided lower performance (93.7%). The results of this study are also comparable to similar studies that reported satisfactory classification accuracy [44,45,46,47]. Comparison with these studies once again demonstrates the effectiveness and power of pre-trained deep neural networks and transfer learning techniques in classifying different image datasets.
Considering the results of the image processing method (Table 5), this method performed well in predicting the undamaged rapeseed class but faced a major problem in the partially damaged and fully damaged classes. The reason is that the greenness index is dominant in the undamaged class and is visible in almost all of its green images, whereas in the partially damaged and fully damaged classes the values of this index were close to each other, contrary to what can be seen in the images. This means that the distribution of plants in the partially damaged class images was almost uniform, but in the fully damaged class this distribution was not uniform and was often concentrated on one side of the image. As a result, the amount of greenness in both classes was almost equal, but its location within the images differed, so that in many cases the linear discriminant analysis model confused the two classes. CNNs, on the other hand, automatically detect all the necessary features such as color, texture, and shape (at different levels of complexity) and use them for the classification task. This is exactly why the deep learning approach excels.
Since the images were taken in the field, lighting conditions had a direct impact on image quality and consequently on the classification results when features were extracted manually. In other words, in the conventional image processing method, any undesired variation in pixel intensity affects the feature extraction procedure, so the classification accuracy is not very high. In the deep neural network method, by contrast, all these variations are handled by the network, which extracts all the features necessary for convergence, and the accuracy it delivers is high.
Unlike common methods in computer vision tasks [48], which often require an additional stage called image registration to avoid distortion, the models used in this study were independent of it. The close-up images also helped in this regard because of their short distance from the ground. Meanwhile, the automation of the presented models enables them to deal with different input images, albeit with some undesired signals such as noise and blurring. A further point is that all of the steps involved in processing the rapeseed images, including preprocessing and feature extraction as well as classification into the appropriate categories, were performed without any interaction from the operator. This provides new insights into the use of smart algorithms in the real world, especially the automation of agricultural operations.

5. Conclusions

In this study, two classifiers, Vgg16 and ResNet50, were used to classify rapeseed crop images showing undamaged, partially damaged, and fully damaged plants, with the damage caused by Tetrax tetrax. Considering the use of models pre-trained on ImageNet as a massive dataset and the application of a transfer learning technique, the results were satisfactory. The obtained results indicated that the methods used in this study have strong potential to be extended to quantifying and classifying other damaged crops at different stages of plant growth. Providing the results of this research to the experts involved in the area leads to precise estimation of the damage; consequently, farmers can receive compensation from insurance companies for the damage caused to their crops. Since the matter is a point of contention, precise estimation can be very helpful. Further, the methods suggested here can effectively help insurance experts make the right decisions in crop damage estimation according to the type of damage. This, in turn, results in choosing preventative strategies and avoiding hunting the birds or other destructive approaches, so that different species of these birds can be protected. This can also have economic benefits. Indeed, at least four groups can benefit from the findings of this study: (i) agricultural agencies, because of the economic benefits of savings; (ii) agricultural experts, for developing rapeseed and other similar crops; (iii) farmers, because of compensation for crop damage; and (iv) environmental activists, for the protection of the birds and other animal species.
One of the limitations of this study was the difficulty of collecting image data manually; hence, the time-consuming and relatively difficult data preparation was a challenge this study faced. As a suggestion for further research, drone imagery could be used to compute the damaged area of the fields. To perform this, the classification task should be performed at the pixel scale; however, a high volume of computation and greater model complexity can be expected in that case.

Author Contributions

Conceptualization, A.M. and J.L.H.-H.; methodology, A.M. and A.A.; software, A.A.; investigation, A.M., A.A. and J.L.H.-H.; resources, Y.A.-G. and I.G.-B.; data curation, A.M. and A.A.; writing—original draft preparation, Y.A.-G.; writing—review and editing, Y.A.-G., J.L.H.-H., M.H.-H. and I.G.-B.; funding acquisition, A.M., Y.A.-G., M.H.-H. and I.G.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank Moghan Agro-Industrial & Livestock Company (MAIL Co.) staff for their contribution to this project. Authors also sincerely thank especially Faculty of Agriculture and Natural Resources (Moghan) and the University of Mohaghegh Ardabili for their assistance in this research.

Conflicts of Interest

The authors declare no conflict of interest. As stated above, this research received no external funding.

References

  1. Lindell, C.A.; Steensma, K.M.; Curtis, P.D.; Boulanger, J.R.; Carroll, J.E.; Burrows, C.; Linz, G.M. Proportions of bird damage in tree fruits are higher in low-fruit-abundance contexts. Crop Prot. 2016, 90, 40–48. [Google Scholar] [CrossRef] [Green Version]
  2. Bhusal, S.; Khanal, K.; Karkee, M.; Steensma, K.; Taylor, M.E. Unmanned aerial systems (UAS) for mitigating bird damage in wine grapes. In Proceedings of the 14th International Conference on Precision Agriculture, Montreal, QC, Canada, 24–27 June 2018. [Google Scholar]
  3. Bomford, M.; Sinclair, R. Australian research on bird pests: Impact, management and future directions. Emu 2002, 102, 29–45. [Google Scholar] [CrossRef]
  4. FAO. FAOSTAT Data Base; Food and Agriculture Organization of the United Nations: Rome, Italy, 2017. [Google Scholar]
  5. Sehhatisabet, M.E.; Abdi, F.; Ashoori, A.; Khaleghizadeh, A.; Khani, A.; Rabiei, K.; Shakiba, M. Preliminary assessment of distribution and population size of overwintering Little Bustards Tetrax tetrax in Iran. Bird Conserv. Int. 2012, 109, 123–132. [Google Scholar]
  6. Suárez-Seoane, S.; de la Morena, E.L.G.; Prieto, M.B.M.; Osborne, P.E.; de Juana, E. Maximum entropy niche-based modelling of seasonal changes in little bustard (Tetrax tetrax) distribution. Ecol. Modell. 2008, 219, 17–29. [Google Scholar] [CrossRef]
  7. Iñigo, A.; Barov, B. Action Plan for the Little Bustard Tetrax tetrax in the European Union; SEO|BirdLife BirdLife International European Commission: London, UK, 2010. [Google Scholar]
  8. Khaleghizadeh, A.; Khormali, S.; Taghizadeh, M. Effects of agronomic methods on reducing bird damage to rapeseed. Ir. Res. Inst. Plant Prot. 2015, 12. (In Persian) [Google Scholar] [CrossRef]
  9. Ponjoan, A.; Bota, G.; Mañosa, S. Sisó Tetrax tetrax. In Atles dels Ocells Nidificants de Catalunya; Herrando, S., Brotons, L., Estrada, J., Guallar, S., Anton, M., Eds.; Institut Català d’Ornitologia/Lynx Edicions: Bellaterra, Spain, 2011; pp. 242–243. [Google Scholar]
  10. Halse, S.A.; Trevenen, H.J. Damage to cereal crops by larks in north-western Iraq. Ann. Appl. Biol. 1986, 108, 423–430. [Google Scholar] [CrossRef]
  11. Green, R.E. Food selection by skylarks: The effect of a pesticide on grazing preferences. Bird Probl. Agric. 1980, 180–187. [Google Scholar]
  12. Edgar, W.H.; Isaacson, A.J. Observations on skylark damage to sugar beet and lettuce seedlings in East Anglia. Ann. Appl. Biol. 1974, 76, 335–337. [Google Scholar] [CrossRef]
  13. Nilsson, L.; Bunnefeld, N.; Persson, J.; Månsson, J. Large grazing birds and agriculture—predicting field use of common cranes and implications for crop damage prevention. Agric. Ecosyst. Environ. 2016, 219, 163–170. [Google Scholar] [CrossRef] [Green Version]
  14. Senar, J.C.; Domènech, J.; Arroyo, L.; Torre, I.; Gordo, O. An evaluation of monk parakeet damage to crops in the metropolitan area of Barcelona. Anim. Biodiv. Conserv. 2016, 39, 141–145. [Google Scholar] [CrossRef]
  15. Canavelli, S.B.; Branch, L.C.; Cavallero, P.; González, C.; Zaccagnini, M.E. Multi-level analysis of bird abundance and damage to crop fields. Agric. Ecosyst. Environ. 2014, 197, 128–136. [Google Scholar] [CrossRef]
  16. Shwiff, S.A.; Ernest, K.L.; Degroot, S.L.; Anderson, A.M.; Shwiff, S.S. The Economic Impact of Blackbird Damage to Crops; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  17. Robinson, C.; Mort, N. A neural network system for the protection of citrus crops from frost damage. Comput. Electron. Agric. 1997, 16, 177–187. [Google Scholar] [CrossRef]
  18. Zhou, J.; Pavek, M.J.; Shelton, S.C.; Holden, Z.J.; Sankaran, S. Aerial multispectral imaging for crop hail damage assessment in potato. Comput. Electron. Agric. 2016, 127, 406–412. [Google Scholar] [CrossRef] [Green Version]
  19. Zappone, A.; Di Renzo, M.; Debbah, M. Wireless networks design in the era of deep learning: Model-based, AI-based, or both? IEEE Trans. Commun. 2019, 67, 7331–7376. [Google Scholar] [CrossRef] [Green Version]
  20. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
  21. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  22. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional neural networks. In ECCV 2014: Computer Vision–ECCV 2014, Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014. [Google Scholar]
  23. Pattanayak, S. Pro Deep Learning with TensorFlow; Apress: Bangalore, Karnataka, India, 2017. [Google Scholar]
  24. Azizi, A.; Abbaspour-Gilandeh, Y.; Vannier, E.; Dusséaux, R.; Mseri-Gundoshmian, T.; Moghaddam, H.A. Semantic segmentation: A modern approach for identifying soil clods in precision farming. Biosyst. Eng. 2020, 196, 172–182. [Google Scholar] [CrossRef]
  25. Ciregan, D.; Meier, U.; Schmidhuber, J. Multi-column deep neural networks for image classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3642–3649. [Google Scholar]
  26. Hamid, M.S.; Abd Manap, N.; Hamzah, R.A.; Kadmin, A.F. Stereo Matching Algorithm based on Deep Learning: A Survey. J. King Saud Univ. Comput. Inf. Sci. 2020, in press. [Google Scholar] [CrossRef]
  27. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  28. Christiansen, P.; Nielsen, L.N.; Steen, K.A.; Jørgensen, R.N.; Karstoft, H. DeepAnomaly: Combining background subtraction and deep learning for detecting obstacles and anomalies in an agricultural field. Sensors 2016, 16, 1904. [Google Scholar] [CrossRef] [Green Version]
  29. Steen, K.A.; Christiansen, P.; Karstoft, H.; Jørgensen, R.N. Using deep learning to challenge safety standard for highly autonomous machines in agriculture. J. Imaging 2016, 2, 6. [Google Scholar] [CrossRef] [Green Version]
  30. Fan, T.; Xu, J. Image classification of crop diseases and pests based on deep learning and fuzzy system. Int. J. Data Warehous. Min. 2000, 16, 34–47. [Google Scholar] [CrossRef]
  31. Tetila, E.C.; Machado, B.B.; Astolfi, G.; de Souza Belete, N.A.; Amorim, W.P.; Roel, A.R.; Pistori, H. Detection and classification of soybean pests using deep learning with UAV images. Comput. Electron. Agric. 2020, 179, 105836. [Google Scholar] [CrossRef]
  32. Sardogan, M.; Tuncer, A.; Ozen, Y. Plant leaf disease detection and classification based on CNN with LVQ algorithm. In Proceedings of the 2018 3rd International Conference on Computer Science and Engineering (UBMK), Sarajevo, Herzegovina, 20–23 September 2018; pp. 382–385. [Google Scholar]
  33. Trivedi, J.; Shamnani, Y.; Gajjar, R. Plant leaf disease detection using machine learning. In Proceedings of the International Conference on Emerging Technology Trends in Electronics Communication and Networking, Surat, India, 7–8 February 2020; pp. 267–276. [Google Scholar]
  34. Gandhi, R.; Nimbalkar, S.; Yelamanchili, N.; Ponkshe, S. Plant disease detection using CNNs and GANs as an augmentative approach. In Proceedings of the 2018 IEEE International Conference on Innovative Research and Development, Bangkok, Thailand, 11–12 May 2018; pp. 1–5. [Google Scholar]
  35. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. Imagenet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  36. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  37. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  38. Nwankpa, C.; Ijomah, W.; Gachagan, A.; Marshall, S. Activation Functions: Comparison of trends in Practice and Research for Deep Learning. arXiv 2018, arXiv:1811.03378. [Google Scholar]
  39. Luo, P.; Wang, X.; Shao, W.; Peng, Z. Understanding regularization in batch normalization. arXiv 2018, arXiv:1809.00846, 1. [Google Scholar]
  40. Stehman, S.V. Selecting and interpreting measures of thematic classification accuracy. Remote Sens. Environ. 1997, 62, 77–89. [Google Scholar] [CrossRef]
  41. Khalil, M.; Ayad, H.; Adib, A. Performance evaluation of feature extraction techniques in MR-Brain image classification system. Procedia Comput. Sci. 2018, 127, 218–225. [Google Scholar] [CrossRef]
  42. Azizi, A.; Abbaspour-Gilandeh, Y.; Nooshyar, M.; Afkari-Sayah, A. Identifying potato varieties using machine vision and artificial neural networks. Int. J. Food Prop. 2016, 19, 618–635. [Google Scholar] [CrossRef]
  43. Azizi, A.; Gilandeh, Y.A.; Mesri-Gundoshmian, T.; Saleh-Bigdeli, A.A.; Moghaddam, H.A. Classification of soil aggregates: A novel approach based on deep learning. Soil Tillage Res. 2020, 199, 104586. [Google Scholar] [CrossRef]
  44. Jadhav, S.B.; Udupi, V.R.; Patil, S.B. Convolutional neural networks for leaf image-based plant disease classification. IAES Int. J. Artif. Intell. 2019, 8, 328–341. [Google Scholar] [CrossRef]
  45. Nachtigall, L.G.; Araujo, R.M.; Nachtigall, G.R. Classification of Apple Tree Disorders Using Convolutional Neural Networks. In Proceedings of the 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI), San Jose, CA, USA, 6–8 November 2016; pp. 472–476. [Google Scholar] [CrossRef] [Green Version]
  46. Agarwal, M.; Sinha, A.; Gupta, S.K.; Mishra, D.; Mishra, R. Potato crop disease classification using convolutional neural network. In Proceedings of the Smart Systems and IoT: Innovations in Computing, Jaipur, India, 18–20 January 2019; pp. 391–400. [Google Scholar]
  47. Alencastre-Miranda, M.; Johnson, R.M.; Krebs, H.I. Convolutional neural networks and transfer learning for quality inspection of different sugarcane varieties. IEEE Trans. Ind. Inform. 2020, 17, 787–794. [Google Scholar] [CrossRef]
  48. Ajdadi, F.R.; Abbaspour-Gilandeh, Y.A.; Mollazade, K.; Hasanzadeh, R.P. Application of machine vision for classification of soil aggregate size. Soil Tillage Res. 2016, 162, 8–17. [Google Scholar] [CrossRef]
Figure 1. Location of oilseed rape host lands for Tetrax tetrax birds, MAIL Co, Ardabil, Iran.
Figure 2. Platform for holding the camera and image acquisition.
Figure 3. Some samples of rapeseed crop images in three categories; (a) undamaged, (b) partially damaged and (c) fully damaged crop.
Figure 4. Architecture of Vgg16 for classification of damaged rapeseed crop.
Figure 5. Extraction of initial information of input images in Vgg16. (a) original images (b) corresponding output of the first layer from 90th filter of Vgg16.
Figure 6. Illustration of performance of the two classifiers presented based on confusion matrix. (a) Vgg16 and (b) ResNet50.
Table 1. The number of rapeseed plants and their leaves per image for the classification task.

                                Undamaged    Partially Damaged    Fully Damaged
# of plants inside the frame    14–18        14–18                less than 14
# of leaves per plant           9–10         2–8                  less than 2
Table 2. Characteristics of Vgg16 to classify rapeseed crop in terms of health.

Convolutional Layer    Parameters
1                      3 × 3 Conv. 64, Stride 1, BN, ReLU
2                      3 × 3 Conv. 64, Stride 1, BN, ReLU, 2 × 2 Max-pool
3                      3 × 3 Conv. 128, Stride 1, BN, ReLU
4                      3 × 3 Conv. 128, Stride 1, BN, ReLU, 2 × 2 Max-pool
5                      3 × 3 Conv. 256, Stride 1, BN, ReLU
6                      3 × 3 Conv. 256, Stride 1, BN, ReLU
7                      3 × 3 Conv. 256, Stride 1, BN, ReLU, 2 × 2 Max-pool
8                      3 × 3 Conv. 512, Stride 1, BN, ReLU
9                      3 × 3 Conv. 512, Stride 1, BN, ReLU
10                     3 × 3 Conv. 512, Stride 1, BN, ReLU, 2 × 2 Max-pool
11                     3 × 3 Conv. 512, Stride 1, BN, ReLU
12                     3 × 3 Conv. 512, Stride 1, BN, ReLU
13                     3 × 3 Conv. 512, Stride 1, BN, ReLU, 2 × 2 Max-pool
Table 3. Hyper-parameters used for training and monitoring the Vgg16 and ResNet50 to classify rapeseed crop images.

Deep Neural Network   Learning Rate   # Epochs   Optimizer Type   Mini-Batch Size   Learning Rate Drop Period   Learning Rate Drop Factor   L2 Regularization
Vgg16                 0.002           100        Adam             4                 20                          0.7                         0.005
ResNet50              0.001           100        Adam             8                 8                           0.6                         0.0002
Table 4. The values of the confusion matrices of the DNNs and the conventional image processing method for classification of damaged rapeseed crop for the test set. Columns give the actual classes (U = undamaged, P = partially damaged, F = fully damaged).

                       Vgg16               ResNet50            Conventional IP
                       U     P     F       U     P     F       U     P     F
Undamaged              36    1     0       37    0     0       35    2     0
Partially damaged      1     32    4       0     35    2       2     24    11
Fully damaged          0     1     36      0     0     37      0     9     28
Table 5. Results of classification of rapeseed images based on different metrics obtained from the confusion matrices of the two deep convolutional neural networks and the conventional IP method for the test set. The values are in percent.

                       Vgg16                           ResNet50                        Conventional IP Method
Class                  Accuracy  Precision  Recall     Accuracy  Precision  Recall     Accuracy  Precision  Recall
Undamaged              98.2      97.3       97.3       100       100        100        96.4      94.6       94.6
Partially damaged      93.7      86.5       94.1       98.2      94.6       100        78.4      64.9       68.6
Fully damaged          95.5      97.3       90         98.2      100        94.9       82.0      75.7       71.8
Overall accuracy       93.7                            98.2                            78.4
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
