Article

Monitoring System for Leucoptera malifoliella (O. Costa, 1836) and Its Damage Based on Artificial Neural Networks

1 Department of Agricultural Zoology, Faculty of Agriculture, University of Zagreb, Svetošimunska cesta 25, 10 000 Zagreb, Croatia
2 Department of Computer Engineering and Automation, Faculty of Electrical Engineering, Computer Science and Information Technology, Josip Juraj Strossmayer University of Osijek, Kneza Trpimira 2B, 31 000 Osijek, Croatia
3 Department of Pomology, Faculty of Agriculture, University of Zagreb, Svetošimunska cesta 25, 10 000 Zagreb, Croatia
4 Department of Plant Nutrition, Faculty of Agriculture, University of Zagreb, Svetošimunska cesta 25, 10 000 Zagreb, Croatia
5 Department of Ecology, Agriculture and Aquaculture, University of Zadar, Trg Kneza Višeslava 9, 23 000 Zadar, Croatia
* Author to whom correspondence should be addressed.
Agriculture 2023, 13(1), 67; https://doi.org/10.3390/agriculture13010067
Submission received: 30 November 2022 / Revised: 20 December 2022 / Accepted: 21 December 2022 / Published: 26 December 2022

Abstract

The pear leaf blister moth is a significant pest in apple orchards. It causes damage to apple leaves by forming circular mines. Its control depends on monitoring two events: the flight of the first generation and the development of mines up to 2 mm in size. Therefore, the aim of this study was to develop two models using artificial neural networks (ANNs) and two monitoring devices with cameras for the early detection of L. malifoliella (Pest Monitoring Device) and its mines on apple leaves (Vegetation Monitoring Device). To train the ANNs, 400 photos were collected and processed, yielding 4700 annotations of L. malifoliella and 1880 annotations of mines. The results were evaluated using a confusion matrix. The accuracy of the model for the Pest Monitoring Device (camera in trap) was over 98%, while the accuracy of the model for the Vegetation Monitoring Device (camera for damage) was over 94%; all other model parameters were also satisfactory. This comprehensive system allows reliable real-time monitoring of pests and their damage, leading to targeted pest control, reduction in pesticide residues, and a lower ecological footprint. Furthermore, it could be adapted for monitoring other Lepidopteran pests in crop production.


1. Introduction

The pear leaf blister moth (Leucoptera malifoliella (O. Costa, 1836)) (Lepidoptera: Lyonetiidae) is one of the most important economic pests in apple production [1]. It occurs in orchards in Europe and Asia [2]. It is a multivoltine species [3] that, due to climate change, is becoming more common and reaching larger populations [4]. L. malifoliella is a typical physiological pest: its larvae penetrate the leaf and feed on the mesophyll of the leaf tissue, leaving the epidermis untouched and forming circular mines. One mine has an average size of 0.88–1.04 cm2 and represents a loss of 3.4–4.6% of the leaf surface. A higher number of mines (more than 40 per leaf) causes premature defoliation during August or early September [1]. Early defoliation has a negative effect on bud differentiation, while a severe leaf infestation directly affects the size, yield, and quality of apple fruits [5]. Because of the pest's small dimensions and its hiding behavior, the overwintering population is noticed too late, which is the main reason why damage in apple orchards is not detected in time [6]. Chemical treatments applied too late give correspondingly poor results [5].
For successful control of L. malifoliella, the flight of the first generation and the beginning of its oviposition need to be monitored, as well as the embryonic development of larvae, their penetration into the leaf, and the initial development of mines up to 2 mm in diameter [5]. Synthetic pheromones have proven useful for monitoring L. malifoliella [4]. The most favorable time for insecticide treatment is the occurrence of mines of the first generation. The critical number of mines per leaf is considered to be two to three; such an infestation enables the further development of L. malifoliella, leading to damage of an economically significant scale [7,8]. Considering that mines are difficult to notice due to their small dimensions, more precise and faster monitoring methods are necessary in order to react in time and prevent significant damage.
Decision thresholds based on pest capture are the basis of integrated pest management (IPM) and are used to optimize the timing of insecticide treatment. However, IPM requires frequent visits to inaccessible orchards, and checking traps on a weekly basis can lead to late interventions and ineffective control [9]. Due to climate change, agricultural systems are exposed to increasing pressures, and significant changes have been recorded in pest phenology [10]. Changes in air temperature directly affect population dynamics and relationships with natural enemies, and increase pest reproduction, resulting in a greater number of generations and, consequently, greater damage. Considering the unpredictability of pest occurrence and the impracticality of existing monitoring methods, it is crucial to develop more sophisticated monitoring methods [11]. Therefore, automatic pest monitoring systems have recently been intensively developed [12,13]. Automatic pest monitoring is particularly important in crops grown over large areas, such as apple.
Apple (Malus domestica Borkh., 1803) is one of the most economically important fruit crops worldwide, whose fruits are consumed fresh or processed throughout the year [14]. It is grown on an area of 4.6 million hectares, and global production in 2020 was 86.4 million tons [15]. The apple is the most commonly consumed fruit in Croatia and accounts for 36% of the country's total fruit production [16]. Despite its importance, pest management, as well as early pest monitoring methods, in apple production are mostly outdated and unreliable. Scientists have therefore developed automatic systems for monitoring apple pests [17,18], mostly for the most dangerous one, the codling moth (Cydia pomonella (Linnaeus, 1758)) (Lepidoptera: Tortricidae) [19,20]. L. malifoliella is a significant apple pest as well, and due to its morphology (small dimensions of all developmental stages), its detection is difficult using classical monitoring methods. There is therefore a need to develop and deploy automatic systems for monitoring this pest as well.
Deep learning is increasingly used for the automatic and timely detection of insect pests from photos [21]. Such works can serve as a basis for developing automatic systems to monitor other important pests and their damage. For example, Sabanci et al. [22] used two different deep learning architectures for detecting sunn pest-damaged wheat grains and achieved high classification accuracies of 98.50% and 99.50%. Zhong et al. [23] used neural networks to detect flying insects, with a detection accuracy of over 92%. Moreover, Sütő [24] developed a smart trap using a deep learning-based method for insect counting and proposed solutions to problems such as "lack of data" and "false insect detection". El Massi et al. [25] used a neural network and a support vector machine as classifiers for detecting and classifying damage caused by leaf miners, with more than 91% accuracy. Grünig et al. [26] presented a deep learning-based system for monitoring damage by L. malifoliella; their network categorized different damage classes well from 52,322 leaf photos taken under standardized conditions as well as in the field. However, for such a system to be fully effective in the timely detection of pests, it is critical that it monitors the number of adult individuals in addition to leaf damage, as this allows an earlier response and damage prevention.
In addition, Ding and Taylor [19] used deep learning techniques to develop a fully automated codling moth monitoring system, training artificial neural networks to recognize adult codling moths on 177 collected red-green-blue (RGB) photos. The model was effective, but a larger data set would be needed for the system to be even more reliable and successful in detection. Albanese et al. [27] developed a smart trap for monitoring the codling moth using various deep learning algorithms. The advantage of this system is that photo processing is performed in the trap, where the large image data are reduced to a small message: only the detection results are sent to the external server, and the limited energy available in the field is used optimally. However, such systems also need to be developed for monitoring L. malifoliella, so that they can be practically used under field conditions.
Most deep learning models are based on artificial neural networks (ANNs) [28,29], which have recently been applied in various fields, including agriculture [30]. In light of this, many researchers have adopted ANN-based detection methods for pest monitoring in agriculture [21,31]. Besides insect detection, artificial neural networks and their variants have been shown to be the most effective method for object detection and recognition [32].
Considering that there are no particularly effective, ecologically friendly control measures for L. malifoliella and that its management relies mostly on chemical control [2], it is extremely important to use monitoring methods that are as precise as possible, in order to limit chemical insecticides to targeted, and thus effective, applications.
Therefore, this paper makes two main contributions. The first is the development of models using artificial neural networks for the early detection of L. malifoliella and its damage on apple leaves that are accurate, precise, fast, and require minimal pre-processing of data. The second is the development of a Pest Monitoring Device (PMD) for monitoring L. malifoliella individuals and a Vegetation Monitoring Device (VMD) for detecting its damage. This system is based on detecting pests and their damage from photos taken with a monitoring device in the field. In these devices, data processing is performed on-the-node, which enables lower energy consumption, and thus a longer lifetime of the entire system, as well as less need for human intervention. Automatic pest monitoring is still in its infancy, and this system is an innovative solution for faster and more reliable pest monitoring.
The hypothesis of this study is that artificial neural networks and the proposed monitoring devices are a reliable tool for monitoring the pear leaf blister moth individuals and their damage if the detection accuracy is more than 90% compared to visual inspection by an expert entomologist.

2. Materials and Methods

The study is divided into four phases, which are shown in Figure 1. The first phase involves photo acquisition for training the ANNs. In the second phase, the data were processed, and the photos labelled. In the third phase, the ANNs were trained in order to build analytical models for the automatic detection of L. malifoliella and its damage. In the final, fourth phase, monitoring devices were built to implement the developed models for monitoring pests and damage (Figure 1).

2.1. Data Acquisition

Photo collection was set up and structured to obtain enough photos for all defined observation classes, so that two EfficientDet object detection (ANN) models could be trained [33]. Exactly two ANN models were used in this work, each identifying a separate target: 1. the pest, L. malifoliella; and 2. the damage caused by this pest (leaf mines). The number of observable classes depends on the quality of the photos taken, the accuracy of the detection methods, and the economic significance of the classes. Before the decision was made to observe a certain class, it was examined how well the characteristics of that class could be seen under different weather conditions.
In the period between April and September 2022, 400 photos for each model (photos of adhesive pads with adult L. malifoliella and photos of mines on apple leaves) were taken in apple orchards in Zagreb County, Croatia (Petrovina Turopoljska, Mičevec, and Staro Čiče). Petrovina Turopoljska and Mičevec are orchards that use IPM strategies, whereas Staro Čiče is an orchard with organic production. Adults of L. malifoliella were caught on adhesive pads in traditional Delta traps baited with pheromone lures (Csalomon®) designed for this species. Traps were inspected, adhesive pads were changed, and photos were taken on a weekly basis.
An RGB camera was mounted in a polycarbonate housing that mimics the housing of the Pest Monitoring Device (PMD) and was used for data acquisition, in order to achieve shooting conditions similar to those of photos later taken by the PMD. The adhesive pads were transferred from the Delta traps to the housing. Photos of adult L. malifoliella and apple leaves were taken manually in the field in a real production situation under different lighting conditions (sunny, cloudy, etc.) in order to create the models for automatic pest detection. The RGB cameras for both models were connected to a Raspberry Pi 4 single-board computer (SBC), from which the camera is controlled. Photos of adult L. malifoliella from the Delta pheromone trap were taken in parallel with the collection of photos of vegetation, both on a weekly basis. Photos of the central part of the apple tree were taken from a distance of 50 cm to detect mines on the leaves.
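As an illustration of this acquisition step, the sketch below captures one timestamped full-resolution still on a Raspberry Pi SBC. The paper does not name the camera software; the picamera2 library and the file-naming scheme are assumptions for illustration.

```python
from datetime import datetime
from picamera2 import Picamera2

picam2 = Picamera2()
# Full-resolution still configuration for the 4056 x 3040 HQ sensor
config = picam2.create_still_configuration(main={"size": (4056, 3040)})
picam2.configure(config)
picam2.start()

# One timestamped photo per visit, matching the weekly collection scheme
filename = datetime.now().strftime("trap_%Y%m%d_%H%M%S.jpg")
picam2.capture_file(filename)
picam2.stop()
```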

2.2. Data Preparation

The collected photos, both of the adhesive pads with pests and of vegetation, were processed with the program LabelImg. Entomological experts identified L. malifoliella adults (Figure 2) and their damage on apple leaves (Figure 3) and marked them manually with bounding boxes.
For establishing a model for detecting L. malifoliella adults in the PMD, three classes were defined and annotated: MINER (L. malifoliella adults), INSECT (other insects), and OTHER (other objects, e.g., remains of leaves, branches, etc.). For the model for detecting mines on apple leaves, two classes, MINES (damage caused by L. malifoliella) and OTHER (other objects, e.g., healthy leaves, nutrient deficiency on leaves, fruit coloration, etc.), were taken out of a 32-class model that is still under development. Several less important subclasses exist for the development of both models, all of which were assigned to the class OTHER to better distinguish them from the important classes listed above. The 1880 annotations of the class MINES (damage) and 4700 annotations of the class MINER (pests) were used for training the ANNs. The annotation format is PascalVOC, an XML structure. The images from the PMD consist of the segments shown in Figure 4, while the images from the VMD consist of the segments shown in Figure 5.
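For reference, a PascalVOC annotation file stores one <object> element per bounding box. A minimal sketch of reading such a file is shown below; the file name is hypothetical, while the class names follow the definitions above.

```python
import xml.etree.ElementTree as ET

def read_annotations(xml_path):
    """Return (class_name, xmin, ymin, xmax, ymax) for each labelled box."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text  # e.g. "MINER", "INSECT", "OTHER"
        bb = obj.find("bndbox")
        boxes.append((name,
                      int(bb.find("xmin").text), int(bb.find("ymin").text),
                      int(bb.find("xmax").text), int(bb.find("ymax").text)))
    return boxes

# Example: list all annotations of one trap photo
for name, *coords in read_annotations("trap_photo_001.xml"):
    print(name, coords)
```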

2.3. Creating Analytical Models for Automatic Detection of L. malifoliella and Its Damage

Automatic object detection was performed by an artificial intelligence algorithm. The annotated images were used to create two ANNs. ANNs are computer processing systems inspired by biological nervous systems. They are mostly made up of a large number of connected computing nodes called neurons, which work together in a distributed way to learn from inputs and improve the final output [32].
Training an ANN to build an analytical model starts from the raw images, from which important features, such as edges and blobs, are extracted together with the bounding boxes of the objects (L. malifoliella adults and their damage). During image processing, the images were adapted to the purpose by applying annotations and modeling concepts. A data set of 400 images was used for training each ANN. Images were rotated by 0, 90, 180, and 270°, and copies mirrored on the horizontal and vertical axes were made, so that 12 variants were obtained from each original image. Each variant (original size 4000 × 3900 pixels) was then cut into 30 smaller images (640 × 640 pixels). This gave 144,000 images for training the ANN for pest detection and 144,000 images for training the ANN for damage detection.
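The counts above (12 variants × 30 tiles × 400 originals = 144,000 images) can be reproduced with a simple augmentation and tiling routine. The sketch below uses Pillow; since the exact crop offsets are not specified in the paper, an even 6 × 5 grid is assumed.

```python
from PIL import Image

def augment(img):
    """12 variants: 4 rotations x {original, horizontal flip, vertical flip}."""
    variants = []
    for angle in (0, 90, 180, 270):
        rot = img.rotate(angle, expand=True)
        variants += [rot,
                     rot.transpose(Image.Transpose.FLIP_LEFT_RIGHT),
                     rot.transpose(Image.Transpose.FLIP_TOP_BOTTOM)]
    return variants

def tile(img, size=640, cols=6, rows=5):
    """Cut cols x rows = 30 tiles of size x size pixels on an even grid."""
    w, h = img.size
    xs = [round(i * (w - size) / (cols - 1)) for i in range(cols)]
    ys = [round(j * (h - size) / (rows - 1)) for j in range(rows)]
    return [img.crop((x, y, x + size, y + size)) for y in ys for x in xs]

original = Image.open("orchard_photo.jpg")  # e.g. 4000 x 3900 pixels
tiles = [t for variant in augment(original) for t in tile(variant)]
print(len(tiles))  # 12 x 30 = 360 training images per original photo
```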
The images were randomly divided into three sets: the training set, the validation set, and the test set. For training, 80–95% of the images were used, while the remaining 5–20% were used for validation. Validation is the first step of the model's quality examination and is carried out during the model's creation [34]. For the validation phase, basic statistical elements that define the precision of object detection were used: Average Recall (AR) and Average Precision (AP), statistical indicators recorded during the model's creation and validation at the end of each epoch, and final statistical indicators of the entire model and the classes included in the model.
Checking the validation accuracy during training allows for early termination and avoidance of overfitting [35]. Learning loss is a measure of how well a deep learning model fits the training data; it evaluates the error of the model on the training set, the portion of the data set used to initially train the model. The learning loss was calculated as the sum of the errors over the training samples [36]. Validation loss is the corresponding metric on the validation set, the subset of the data used to evaluate the performance of the model; it was calculated by accumulating the errors over the validation samples [36].
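Early termination of this kind is typically implemented as a callback that watches the validation loss. The snippet below is an illustrative Keras setup for the mechanism described above, not the authors' exact code (their training runs through the TensorFlow Lite Model Maker; see below).

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",         # validation loss checked after every epoch
    patience=5,                 # stop after 5 epochs without improvement
    restore_best_weights=True,  # keep the weights from the best epoch
)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```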
For the test data set, it was important to select the most complicated images that the model had never seen before and to test the model on this data set. The test results were statistically evaluated using the confusion matrix. A confusion matrix compares predicted and actual values and indicates the accuracy of the model. It consists of four categories (TP—"true positive", TN—"true negative", FP—"false positive", and FN—"false negative") [37]. The confusion matrix was used to calculate accuracy, precision, sensitivity (recall), and the F1 score. The F1 score is the harmonic mean of precision and recall. Recall indicates the number of miners correctly identified as positive relative to the total number of miners in the image [38]. All of the aforementioned metrics were calculated using the equations of Aslan et al. [39], given in Equations (1)–(4). All metrics were calculated using correspondences across the entire data set and do not represent averages across individual images [19].
Accuracy = (TP + TN) / (TP + FP + TN + FN) × 100        (1)
Precision = TP / (TP + FP)        (2)
Recall = TP / (TP + FN)        (3)
F1 score = 2TP / (2TP + FP + FN)        (4)
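Equations (1)–(4) translate directly into a small helper. With the MINER test counts reported in Section 3.2 (TP = 54, FP = 0, FN = 19), the function below reproduces the reported precision of 1.0, recall of 0.74, and F1 score of 0.85; the TN count (here a placeholder) depends on the remaining non-target detections.

```python
def detection_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn) * 100  # Equation (1), in %
    precision = tp / (tp + fp)                        # Equation (2)
    recall = tp / (tp + fn)                           # Equation (3)
    f1 = 2 * tp / (2 * tp + fp + fn)                  # Equation (4)
    return accuracy, precision, recall, f1

_, precision, recall, f1 = detection_metrics(tp=54, tn=0, fp=0, fn=19)
print(round(precision, 2), round(recall, 2), round(f1, 2))  # 1.0 0.74 0.85
```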
The analytical models themselves were developed through a series of algorithm tests and parameter changes to obtain the best performance for the model's intended use. The Python 3.6 programming language was chosen, together with the TensorFlow artificial intelligence library. Python makes it easy to write scripts quickly [40], and TensorFlow provides all the functions needed to build models. For the production application, the TensorFlow Lite platform was used, which means that the analytical models can run on "end" devices with low power consumption, such as an SBC [41]. Taking into account the use of analytical models on "end" devices with the TensorFlow Lite application, the quality of detection was tested on different analytical model concepts, such as SSD (Single-Shot Detector) MobileNet in different versions, SSD ResNet in different versions, and EfficientDet-Lite in different versions. Finally, the network structure of EfficientDet-Lite 4 showed the best detection quality. EfficientDet is a new family of object detectors that are more accurate and use fewer resources than the previous state of the art [42].
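As a sketch of this workflow, training an EfficientDet-Lite 4 detector from PascalVOC annotations with the TensorFlow Lite Model Maker might look as follows. The library and model family match those named above, but the directory layout and training-call details are assumptions, not the authors' exact pipeline.

```python
from tflite_model_maker import object_detector

label_map = {1: "MINER", 2: "INSECT", 3: "OTHER"}

# PascalVOC annotations produced with LabelImg (Section 2.2)
train_data = object_detector.DataLoader.from_pascal_voc(
    images_dir="images/train", annotations_dir="annotations/train",
    label_map=label_map)
val_data = object_detector.DataLoader.from_pascal_voc(
    images_dir="images/val", annotations_dir="annotations/val",
    label_map=label_map)

spec = object_detector.EfficientDetLite4Spec()  # EfficientDet-Lite 4
model = object_detector.create(train_data, model_spec=spec,
                               validation_data=val_data, epochs=50)
model.export(export_dir="model/")  # TFLite model deployable on the SBC
```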
Emerging technologies, such as computer vision, require accurate object detection but have limited processing resources, requirements that many high-accuracy detectors do not meet. Real-world object detection solutions run on different platforms with different resources. "Scalable and Efficient Object Detection" (EfficientDet), an accurate object detector that uses few resources, was introduced by the authors in [33]. EfficientDet is nine times smaller and uses less computation than preceding state-of-the-art detectors. As its backbone for image feature extraction, it uses EfficientNet (Figure 6) [33]. The weighted bi-directional feature pyramid network (BiFPN) weights the input features according to their resolution, which helps the network learn how important they are. The feature network then takes multiple levels of features from the input and outputs fused features that capture the most important aspects of the image. All regular convolutions are replaced with depth-wise separable convolutions. Finally, a class/box network uses the fused features to predict the class and location of each object. The EfficientDet architecture can have a different number of layers in the BiFPN and class/box networks depending on how much processing power is available [43].
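The substitution of regular convolutions with depth-wise separable ones is a main source of EfficientDet's savings. The illustrative Keras comparison below (filter counts are arbitrary) shows that both layer types produce the same output shape while the separable variant needs far fewer parameters.

```python
import tensorflow as tf

regular = tf.keras.layers.Conv2D(64, 3, padding="same")
separable = tf.keras.layers.SeparableConv2D(64, 3, padding="same")

x = tf.random.normal((1, 64, 64, 32))  # dummy feature map, 32 channels
print(regular(x).shape, separable(x).shape)           # identical output shapes
print(regular.count_params(), separable.count_params())  # 18496 vs 2400
```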
Two separate ANNs were trained to identify L. malifoliella (MINER) and the mines caused by this pest (MINES). Objects in the RGB images (adult L. malifoliella and mines on leaves) were used to create analytical models according to the structure of the corresponding object detection classes. The analytical models were divided into groups depending on their use; for example, the PMD has an analytical model for insects (L. malifoliella) and the VMD has an analytical model for mines. Different algorithm concepts were used to develop the analytical models depending on their intended use (e.g., the required quality of detection, the required speed of detection, the amount of energy available for detection, etc.).
Once a model is created, it must be tested and adjusted, and production parameters must be tuned. The analytical models are constantly being improved. Regardless of the purpose or version of the camera control server, a mechanism was implemented to update the analytical models automatically. Computers specifically set up for this purpose were used to store data and create the analytical models; a computer with a special hardware configuration was acquired so that models requiring more processing power could be created, and a separate virtual server was set up for the appropriate storage of image material.

2.4. The Pest Monitoring Device (PMD) and Vegetation Monitoring Device (VMD)

The Pest Monitoring Device (PMD) for pest monitoring (Figure 7a) was housed in a milky-white polycarbonate shell. The housing contains an RGB camera, a temperature sensor for the battery and electronics, an adhesive pad, a pheromone lure, and a power supply system with a battery and a solar panel. In the Vegetation Monitoring Device (VMD), on the other hand, the RGB camera was placed in a separate housing (Figure 7b). The software for both cameras allows users to use a remote monitoring service with image transmission and processing. One PMD is sufficient for 1 to 3 ha of orchard. The devices (including the associated models) were designed to be used throughout the growing season and are resistant to weather conditions thanks to their external structure.

2.4.1. Trap Housing

During the building process, particular attention was paid to protecting the housing from rain and sunlight. Because the devices were attached to stakes in the orchard, special mounting stands had to be made. The trap, which was made specifically for several sorts of insects, also protects the camera within. The rectangular trap features two entrances on its two opposing sides. The openings are 16 cm wide and 10 cm high, while the housing is 25 cm long, 24 cm high, and 16 cm wide. The antenna for the 4G and 5G networks is on top of the housing, along with the solar panel.

2.4.2. Camera

The camera design took into account the features of the orchard, especially during the growing season, as well as how the camera would be used and the conditions in which it would operate. The cameras were built around single-board computers (SBCs) from the Raspberry Pi Foundation. They use the RPi HQ sensor, which allows different M12 lenses to be fitted depending on the camera's purpose. The lens specifications for the PMD are 75° HFOV, 1/2.3″ format, and 3.9 mm focal length, while those for the VMD are 28° HFOV, 1/2.5″ format, and 12 mm focal length. The resolution used was 12.3 megapixels (4056 × 3040 pixels). A SIM card was included in each camera to enable 4G network communication. The temperature of the battery and electronics can be monitored via the temperature sensors.

2.4.3. Battery

The power supply system consists of three Panasonic NCR18650B rechargeable lithium-ion batteries connected in series, with a capacity of 3350 mAh, a maximum current of 5 A, and a voltage range of 10.8 to 12.6 V. The Battery Management System (BMS) ensures that charging and discharging work well so that the battery lasts as long as possible. A solar panel recharges the batteries during the day, so the system runs autonomously and has a long lifespan without human assistance.

3. Results

Two artificial neural networks or analytical models were established, one for the automatic detection of L. malifoliella adults within the developed PMD and one for the automatic detection of mines on apple leaves within the VMD.

3.1. Validation Phase

The validation parameters and final statistical indicators of the models and the included classes are shown in Table 1. The Average Precision (AP) for the class MINER was 0.69 in the model for the PMD, while the AP for the class MINES was 0.62 in the model for the VMD (Table 1). The AP of the whole model for the VMD was considerably lower than the AP of the MINES class, due to other classes in the model that were still in the training process.
Statistical indicators during the model’s creation and validation at the end of each epoch are shown in Figure 8. The EfficientDet-Lite model uses 50 epochs by default (Figure 8), which means it went through the training data set 50 times. Additionally, validation loss was measured after each epoch (Figure 8). This indicates whether the model requires additional adjustments.
In the case of the model for the VMD, the validation loss was greater than the learning loss (Figure 8), indicating underfitting of the model. Underfitting occurs when the model cannot accurately model the learning data and, therefore, produces large errors [45]. This result suggests that the model for the VMD needs to be improved with a larger data set for future use of the other classes (not mentioned in this paper), but our focus was on the class MINES. For the model for the PMD, both the learning and validation losses decreased and stabilized at a certain point (Figure 8), indicating an optimal fit (without overfitting or underfitting), so no further adjustments to the model are needed.

3.2. Test Phase

Because the models must perform well in practice, their quality was checked by testing after the overall creation. The data set used for model testing consisted of the 30 most complicated photos for each ANN. These photos were not used to train or validate the two proposed ANNs. Examples of automatic counts on the test photos (results of the model) are shown in Figure 9 and Figure 10.
The model was tested by comparing the automatic counts (Figure 9 and Figure 10) to counts provided by an expert entomologist. The results were processed and shown using a confusion matrix (Figure 11 and Figure 12).
In this case, the automatic counts given by the proposed detection algorithm were similar to the manual counts given by an expert entomologist. Figure 11, which represents the confusion matrix of the model for the PMD, shows that the numbers of objects in the classes INSECT and OTHER were high (>400).
However, there were no false positive (FP) detections, which points to the high precision of the model for the PMD (1.0) (Table 2), and the data set was big enough to build a highly accurate model (Table 2). A high number of true positives (TP) was detected (54), and 19 objects were marked as false negatives (FN) (Figure 11). The overall model accuracy was 98.03%, and the detection accuracy of the MINER class was 98.13% (Table 2).
In the model for the PMD, the precision value was 1.0, which indicates a completely precise model for detecting L. malifoliella (Table 2), while in the model for the VMD, the precision value was 0.94 (Table 3). The recall was 0.74 in the model for the PMD and 0.91 in the model for the VMD. The F1 score was 0.85 for detecting L. malifoliella adults in the model for the PMD (Table 2) and 0.92 in the model for the VMD (Table 3). All aforementioned metrics (accuracy, precision, recall, and F1 score) are shown in Table 2 and Table 3.
In the model for the VMD, 115 FP detections were recorded (Figure 12), while the number of FN detections was 175. However, there was a high number of annotations and TP detections (1741 mines) (Figure 12). Finally, the overall accuracy of the model was 94.59%, as was the detection accuracy for the class MINES (Table 3).
The detection algorithm (model) starts working as soon as an image is captured. On-site data processing was performed with a storage limit of 15 MB. The execution of the proposed method takes six minutes on average. Due to the built-in on-site detection mechanism, the detection result is sent with as few bytes as possible, which makes the system suitable for rural areas. Larger models require more storage space and more time to run, making them difficult to use; the optimization phase therefore aims to find the optimum by reducing the model size while minimizing the loss of accuracy or performance, allowing faster evaluation. The proposed implementation applies optimization both during the training phase and before the evaluation of the trained model.
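On-the-node detection of this kind typically runs the exported model through the TensorFlow Lite interpreter. The sketch below illustrates such an inference step; the model file name, input preparation, and score threshold are assumptions, and the order of the output tensors varies between exported models.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model/model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outputs = interpreter.get_output_details()

# A captured image resized/tiled to the model input, e.g. (1, 640, 640, 3)
image = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(outputs[0]["index"])  # per-detection scores
count = int(np.sum(scores > 0.5))  # only this small count leaves the device
print(count)
```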

4. Discussion

In this study, while testing the model for the PMD, there were no false positive (FP) detections, which points to the high precision of the model. In contrast, Preti et al. [20] developed a smart trap for monitoring the codling moth (C. pomonella) (Lepidoptera: Tortricidae) that had a high number of FP detections, accounting for 90.7% of the automatic counts. The majority of FP detections were shadows on the adhesive pads, lures, and flies. The low precision caused by the high number of FP detections suggested that adjustments to the detection algorithm were required. They point out that the reason is a small data set, and Du et al. [46] showed with a theoretical calculation that the error of ANN algorithms is correlated with the data set size.
Moreover, in the model for the PMD, several detected objects were marked as false negatives (FN), mostly due to the change in color and decomposition of the insects over time. Ding and Taylor [19] analyzed errors caused by different factors and emphasized that many errors are related to time: the same insect pest can have different wing poses and decay conditions over time, and decaying insects can make the originally transparent adhesive pad dirty, reducing the contrast between the insects and the background. Errors caused by time-related factors could be largely avoided in real production systems if the adhesive pads in the smart trap were changed approximately once every ten days, in order to avoid insect decomposition and dirt accumulation, and thus a higher number of misdetections.
Detection accuracy relates the counts detected by the algorithm to the counts made manually by an entomologist. An accuracy equal to 100% means that the number of objects marked and counted automatically by the ANN matches the number of L. malifoliella adults on the adhesive pad. An accuracy lower than 100% means that there were L. malifoliella adults that the model did not recognize. An accuracy greater than 100% means that non-target insects or other items were counted as the miner, so that automatic identification overestimated the true occurrence of miners, as was the case in Preti et al. [20]. In our case, there were no FP detections in the model for the PMD, whereas the accuracy of the model for the VMD was not distorted by FP detections because the number of FN detections was similar (Table 3). The precision parameter can range between 0 and 1. Precision values close to 1 mean that the occurrence of false positives is very low, and therefore, the total automatic detection corresponded to the correct detections of L. malifoliella adults (showing that the algorithm is precise and does not mark non-target insects or other items) [20].
In the model for the VMD, several FP and FN detections were recorded (Figure 12). The FP detections mostly arose because parts of branches or dry leaves were misidentified as mines, while most of the missed (FN) detections occurred at the edges of the image, where the mines were only partially visible. Even though there were more FP and FN detections than in the model for the PMD, and the overall accuracy and the other parameters (Table 3) were slightly lower, the results are satisfactory and the model is usable in practice, because there was a high number of annotations and TP detections.
When compared to other successful works on detecting Lepidopteran apple pests using ANNs, both models demonstrated the potential to work in practice. The accuracy of the considered models is higher than 90% [27,47], as it is in this work; therefore, the models proposed here can be considered efficient for practical use. Suárez et al. [47] used the TensorFlow library and the programming language Python to build a model for detecting codling moths, with an overall accuracy of 94.8%. Similar results were obtained by Albanese et al. [27], who, using different deep learning algorithms in their trap, achieved an accuracy of 95.1–97.9% in detecting codling moths. Grünig et al. [26] created an image classification model for detecting damage on apple leaves, using neural networks to categorize different damage classes from photos taken under standardized and field conditions; their model was 93.1% accurate at detecting damage caused by L. malifoliella, which makes it suitable for automatic damage detection. However, no model is available for the automatic detection of this pest in a trap, nor has such an automatic system been developed; the works by Grünig et al. [26] and El Massi et al. [25] only classify damage caused by this pest. For such a system to be fully effective in controlling pests in a timely manner, it is critical that it monitors the number of adult individuals in addition to leaf damage, so that intervention can occur earlier and damage (mines) can be prevented.
The proposed model is accurate, precise, fast, and requires minimal preprocessing of data. In addition, the Pest Monitoring Device (PMD) for monitoring L. malifoliella adults and the Vegetation Monitoring Device (VMD) for detecting its damage on apple leaves are both portable, independent devices that require no additional infrastructure. Most of the time, they operate in sleep mode, which means they use less energy, last longer, and need less human intervention. A distinctive feature of this system is that the output of each device is the number of detected objects of a certain class, not the whole image. Because the output is a small amount of data, it is sent quickly over the mobile network. The detection results are sent to the web portal, where all further analysis and reporting are performed. Several smart traps for monitoring apple pests are available on the market [48,49], but there is no available trap, nor a comprehensive system, for the automatic monitoring of L. malifoliella or its damage.
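To illustrate how small such a transmission can be, the sketch below posts per-class counts as a JSON payload of a few dozen bytes. The endpoint URL, device identifier, and schema are assumptions; the paper specifies only that a compact detection result is sent to the web portal.

```python
import json
import urllib.request

# Per-class counts produced on the device; a few dozen bytes in total
payload = json.dumps({"device": "PMD-01", "MINER": 3, "INSECT": 12}).encode()

req = urllib.request.Request(
    "https://example.org/api/detections",  # hypothetical portal endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 on success
```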
Therefore, in this work, a comprehensive system consisting of two object detection models and accompanying devices for pest and vegetation damage monitoring (the PMD and VMD) was developed to obtain complete information in the orchard about L. malifoliella occurrence, to react in a timely manner, and prevent the occurrence of economically important damage. This proposed system contributes to the improvement of automatic pest monitoring and, thus, to its wider application. The use of this system allows for targeted pest control, thus reducing the use of pesticides, decreasing the negative impact on the environment, and ultimately allowing for higher quality and more profitable apple production.

5. Conclusions

The developed models showed high accuracy in detecting L. malifoliella (>98%) and its damage (>94%) compared to visual inspection by an expert entomologist. Therefore, the established hypothesis is accepted, and the proposed system is an effective and reliable tool for monitoring the pear leaf blister moth and its damage. The system is also operative under field conditions. Automatic monitoring systems are still in their infancy, and to date, there are no research results on the automatic monitoring of L. malifoliella individuals. The proposed solution can provide comprehensive detection results (pest and damage monitoring) directly from apple orchards to enable site-specific management and sustainable apple production. Further research is recommended to extend this system to the detection of other important apple pests as well as other Lepidopteran pests in crop production. In precision agriculture, innovative solutions, such as automatic monitoring systems, are a key element of decision support systems; they are therefore expected to become the gold standard for pest monitoring in the future.

Author Contributions

Conceptualization, D.Č., I.P.Ž. and I.A.; methodology, D.Č., I.P.Ž. and I.A.; software, I.A.; validation, I.P.Ž., I.A., D.L. and T.K.; formal analysis, D.Č. and I.A.; investigation, D.Č., I.M., A.M.A., R.V. and A.V.; resources, I.P.Ž., D.L. and T.K.; data curation, D.Č., I.P.Ž. and I.A.; writing—original draft preparation, D.Č., I.P.Ž. and I.A.; writing—review and editing, I.P.Ž., I.A., D.L. and T.K.; visualization, I.A.; supervision, I.P.Ž. and I.A.; project administration, I.P.Ž., D.L. and T.K.; funding acquisition, I.P.Ž. All authors have read and agreed to the published version of the manuscript.

Funding

The publication was supported by the Open Access Publication Fund of the University of Zagreb Faculty of Agriculture and by the European Regional Development Fund through the project AgriART comprehensive management system in the field of precision agriculture (KK.01.2.1.02.0290).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository. The data presented in this study are openly available in [repository name e.g., FigShare] at [doi], reference number [reference number].

Acknowledgments

The authors thank the European Union, which supported the project AgriART comprehensive management system in the field of precision agriculture (KK.01.2.1.02.0290) through the European Regional Development Fund within the Operational Programme Competitiveness and Cohesion (OPCC) 2014–2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Baufeld, P.; Freier, B. Artificial injury experiments on the damaging effect of Leucoptera malifoliella on apple trees. Entomol. Exp. Appl. 1991, 61, 201–209. [Google Scholar] [CrossRef]
  2. CABI.org. 2022. Available online: https://www.cabi.org/isc/datasheet/30492 (accessed on 10 October 2022).
  3. Maceljski, M. Poljoprivredna Entomologija, 2nd ed.; Zrinski d.d.: Čakovec, Croatia, 2008. [Google Scholar]
  4. Francke, W.; Franke, S.; Toth, M.; Szöcs, G.; Guerin, P.; Arn, H. Identification of 5, 9-dimethylheptadecane as a sex pheromone of the moth Leucoptera scitella. Naturwissenschaften 1987, 74, 143–144. [Google Scholar] [CrossRef]
  5. Šubić, M. Mogućnosti i ograničenja suzbijanja moljca kružnih mina (Leucoptera malifoliella Costa) (Lepidoptera: Lionetiidae) u Međimurju. Glas. Biljne Zaštite 2015, 15, 195–206. [Google Scholar]
  6. Rovesti, L.; Deseö, K.V. Effectiveness of neem seed kernel extract against Leucoptera malifoliella Costa (Lep., Lyonetiidae). J. App. Entomol. 1991, 111, 231–236. [Google Scholar] [CrossRef]
  7. Ciglar, I. Integrirana zaštita voćnjaka i vinograda; Zrinski d.d.: Čakovec, Croatia, 1998. [Google Scholar]
  8. Ciglar, I. Ispitivanje djelotvornosti nekih insekticida na lisne minere. Agron. Glas. Glas. Hrvat. Agron. Društva 1972, 36, 663–670. [Google Scholar]
  9. Barzman, M.; Bàrberi, P.; Birch, A.N.E.; Boonekamp, P.; Dachbrodt-Saaydeh, S.; Graf, B.; Hommel, B.; Jensen, J.E.; Kiss, J.; Kudsk, P.; et al. Eight principles of integrated pest management. Agron. Sustain. Dev. 2015, 35, 1199–1215. [Google Scholar] [CrossRef] [Green Version]
  10. Sharma, H.C. Climate change effects on insects: Implications for crop protection and food security. J. Crop Improv. 2014, 28, 229–259. [Google Scholar] [CrossRef]
  11. Skendžić, S.; Zovko, M.; Živković, I.P.; Lešić, V.; Lemić, D. The impact of climate change on agricultural insect pests. Insects 2021, 12, 440. [Google Scholar] [CrossRef]
  12. Cardim Ferreira Lima, M.; Damascena de Almeida Leandro, M.E.; Valero, C.; Pereira Coronel, L.C.; Gonçalves Bazzo, C.O. Automatic detection and monitoring of insect pests—A review. Agriculture 2020, 10, 161. [Google Scholar] [CrossRef]
  13. Wang, R.; Li, R.; Chen, T.; Zhang, J.; Xie, C.; Qiu, K.; Liu, H. An automatic system for pest recognition and forecasting. Pest Manag. Sci. 2022, 78, 711–721. [Google Scholar] [CrossRef]
  14. Brzica, K. Voćarstvo za svakog, 6th ed.; ITP Naprijed: Zagreb, Croatia, 1991. [Google Scholar]
  15. FAO STAT. Food and Agriculture Organization of the United Nations. 2022. Available online: https://www.fao.org/faostat/en/#home (accessed on 10 October 2022).
  16. Cerjak, M.; Vrhovec, R.; Vojvodić, M.; Mesić, Ž. Analiza hrvatskog tržišta jabuka. In Proceedings of the 43rd Croatian and 3rd International Symposium on Agriculture, Opatija, Croatia, 14–18 February 2011; p. 314. [Google Scholar]
  17. Čirjak, D.; Miklečić, I.; Lemić, D.; Kos, T.; Pajač Živković, I. Automatic Pest Monitoring Systems in Apple Production under Changing Climatic Conditions. Horticulturae 2022, 8, 520. [Google Scholar] [CrossRef]
  18. Boniecki, P.; Zaborowicz, M.; Pilarska, A.; Piekarska-Boniecka, H. Identification process of selected graphic features apple tree pests by neural models type MLP, RBF and DNN. Agriculture 2020, 10, 218. [Google Scholar] [CrossRef]
  19. Ding, W.; Taylor, G. Automatic moth detection from trap images for pest management. Comput. Electron. Agric. 2016, 123, 17–28. [Google Scholar] [CrossRef] [Green Version]
  20. Preti, M.; Moretti, C.; Scarton, G.; Giannotta, G.; Angeli, S. Developing a smart trap prototype equipped with camera for tortricid pests remote monitoring. Bull. Insectol. 2021, 74, 147–160. [Google Scholar]
  21. Xia, D.; Chen, P.; Wang, B.; Zhang, J.; Xie, C. Insect detection and classification based on an improved convolutional neural network. Sensors 2018, 18, 4169. [Google Scholar] [CrossRef] [Green Version]
  22. Sabanci, K.; Aslan, M.F.; Ropelewska, E.; Unlersen, M.F.; Durdu, A. A Novel Convolutional-Recurrent Hybrid Network for Sunn Pest–Damaged Wheat Grain Detection. Food Anal. Methods 2022, 15, 1748–1760. [Google Scholar] [CrossRef]
  23. Zhong, Y.; Gao, J.; Lei, Q.; Zhou, Y. A vision-based counting and recognition system for flying insects in intelligent agriculture. Sensors 2018, 18, 1489. [Google Scholar] [CrossRef] [Green Version]
  24. Sütő, J. Embedded system-based sticky paper trap with deep learning-based insect-counting algorithm. Electronics 2021, 10, 1754. [Google Scholar] [CrossRef]
  25. El Massi, I.; Es-Saady, Y.; El Yassa, M.; Mammass, D.; Benazoun, A. Automatic recognition of the damages and symptoms on plant leaves using parallel combination of two classifiers. In Proceedings of the 2016 13th International Conference on Computer Graphics, Imaging and Visualization, Beni Mellal, Morocco, 29 March–1 April 2016; pp. 131–136. [Google Scholar] [CrossRef]
  26. Grünig, M.; Razavi, E.; Calanca, P.; Mazzi, D.; Wegner, J.D.; Pellissier, L. Applying deep neural networks to predict incidence and phenology of plant pests and diseases. Ecosphere 2021, 12, 16. [Google Scholar] [CrossRef]
  27. Albanese, A.; Nardello, M.; Brunelli, D. Automated Pest Detection with DNN on the Edge for Precision Agriculture. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 458–467. [Google Scholar] [CrossRef]
  28. Wu, J. Introduction to Convolutional Neural Networks; National Key Lab for Novel Software Technology, Nanjing University: Nanjing, China, 2017; Volume 5, p. 495. [Google Scholar]
  29. Jakhar, D.; Kaur, I. Artificial intelligence, machine learning and deep learning: Definitions and differences. Clin. Exp. Dermatol. 2020, 45, 131–132. [Google Scholar] [CrossRef]
  30. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef]
  31. Wang, J.; Li, Y.; Feng, H.; Ren, L.; Du, X.; Wu, J. Common pests image recognition based on deep convolutional neural network. Comput. Electron. Agric. 2020, 179, 105834. [Google Scholar] [CrossRef]
  32. O’Shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458. [Google Scholar]
  33. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. arXiv 2019. [Google Scholar] [CrossRef]
  34. Datatron Blog. Available online: https://datatron.com/what-is-model-validation-and-why-is-it-important/ (accessed on 13 December 2022).
  35. Tensorflow.org. Available online: https://www.tensorflow.org/lite/models/modify/model_maker/object_detection#run_object_detection_and_show_the_detection_results (accessed on 9 November 2022).
  36. Baeldung. Available online: https://www.baeldung.com/cs/training-validation-loss-deep-learning (accessed on 10 November 2022).
  37. Kulkarni, A.; Chong, D.; Batarseh, F.A. Foundations of data imbalance and solutions for a data democracy. Data Democr. 2020, 83–106. [Google Scholar] [CrossRef]
  38. PaperspaceBlog. Available online: https://blog.paperspace.com/deep-learning-metrics-precision-recall-accuracy/ (accessed on 8 November 2022).
  39. Aslan, M.F.; Sabanci, K.; Durdu, A. A CNN-based novel solution for determining the survival status of heart failure patients with clinical record data: Numeric to image. Biomed. Signal Process. Control 2021, 68, 102716. [Google Scholar] [CrossRef]
  40. Be A Python Dev. Available online: https://beapython.dev/2019/12/23/writing-your-first-python-script/ (accessed on 13 December 2022).
  41. Tensorflow.org. Available online: https://www.tensorflow.org/lite (accessed on 14 December 2022).
  42. Towards Data Science. Available online: https://towardsdatascience.com/googles-efficientdet-an-overview-8d010fa15860 (accessed on 14 December 2022).
  43. Towards Data Science. Available online: https://towardsdatascience.com/a-thorough-breakdown-of-efficientdet-for-object-detection-dc6a15788b73 (accessed on 14 December 2022).
  44. COCO. Common Objects in Context. Available online: https://cocodataset.org/#detection-eval (accessed on 8 November 2022).
  45. Jabbar, H.; Khan, R.Z. Methods to avoid over-fitting and under-fitting in supervised machine learning (comparative study). Comput. Sci. Commun. Instrum. Devices 2015, 70, 163–172. [Google Scholar] [CrossRef]
  46. Du, S.S.; Wang, Y.; Zhai, X.; Balakrishnan, S.; Salakhutdinov, R.R.; Singh, A. How many samples are needed to estimate a convolutional neural network? Adv. Neural. Inf. Process Syst. 2018, 31, 1–11. [Google Scholar] [CrossRef]
  47. Suárez, A.; Molina, R.S.; Ramponi, G.; Petrino, R.; Bollati, L.; Sequeiros, D. Pest detection and classification to reduce pesticide use in fruit crops based on deep neural networks and image processing. In Proceedings of the 2021 XIX Workshop on Information Processing and Control (RPIC), San Juan, Argentina, 3–5 November 2021; pp. 1–6. [Google Scholar] [CrossRef]
  48. Trapview 2022. Available online: https://trapview.com/project/better-earning-apple/ (accessed on 13 November 2022).
  49. Semios 2022. Available online: https://semios.com/our-hardware/automated-camera-traps/ (accessed on 13 November 2022).
Figure 1. The methodology of the study.
Figure 2. Annotated class MINER (L. malifoliella adults) marked with yellow bounding boxes, class OTHER (other objects) marked with purple bounding boxes, and class INSECT (other insects) marked with blue bounding boxes in the LabelImg program, from images of adhesive pads.
Figure 3. Annotated class MINES (damage caused by L. malifoliella)—marked with pink bounding boxes, class OTHER (other objects, healthy leaves, etc.) marked with light blue bounding boxes in the LabelImg program from images of vegetation.
Figure 4. Examples of typical pest trap image segments with sizes of 55 × 90 pixels and 90 × 90 pixels include L. malifoliella adults (a,b); other insects (c); pheromone lure for L. malifoliella (d); other objects on adhesive pads (dirt, remains of leaves, branches, etc.) (e).
Figure 5. Examples of typical vegetation image segments with sizes of 200 × 200 pixels including damage caused by L. malifoliella (a–c); other objects (d).
Figure 6. The structure of the EfficientDet neural network, where the number of layers changes depending on how much processing power is available [33].
Figure 7. Pest Monitoring Device (a); Vegetation Monitoring Device (b).
Figure 8. Process of learning and validation of the models through epochs (PMD – Pest Monitoring Device, VMD – Vegetation Monitoring Device).
Figure 9. Example of the automatic counts of the model for the Pest Monitoring Device (PMD). Red bounding boxes – detected class MINER (L. malifoliella adults), green bounding boxes – detected classes OTHER (other objects) and INSECT (other insects).
Figure 10. Example of the automatic counts of the model for the Vegetation Monitoring Device (VMD). Green bounding boxes – detected classes (MINES, OTHER).
Figure 11. Confusion matrix of the detection algorithm for the Pest Monitoring Device (PMD).
Figure 12. Confusion matrix of the detection algorithm for the Vegetation Monitoring Device (VMD).
Table 1. Validation parameters of the models for the PMD and VMD.

| Parameter | Model for PMD | Model for VMD |
|-----------|---------------|---------------|
| AP        | 0.6630534     | 0.467365      |
| AP50      | 0.9308306     | 0.6163057     |
| AP75      | 0.79948294    | 0.538607      |
| APs       | 0.41702107    | 0.38043988    |
| APm       | 0.64587986    | 0.3874695     |
| APl       | 0.57356155    | 0.50087154    |
| ARmax1    | 0.3027751     | 0.5197058     |
| ARmax10   | 0.70718616    | 0.6650187     |
| ARmax100  | 0.75332874    | 0.6722461     |
| ARs       | 0.5896679     | 0.68923765    |
| ARm       | 0.73534316    | 0.57283384    |
| ARl       | 0.6281324     | 0.6942948     |
| AP_MINER  | 0.69061697    | /             |
| AP_MINES  | /             | 0.62485766    |

* 'AP'—% AP at IoU = 0.50:0.05:0.95 (primary challenge metric); 'AP50'—% AP at IoU = 0.50 (PASCAL VOC metric); 'AP75'—% AP at IoU = 0.75 (strict metric); 'APs'—% AP for small objects: area < 32²; 'APm'—% AP for medium objects: 32² < area < 96²; 'APl'—% AP for large objects: area > 96²; 'ARmax1'—% AR given 1 detection per image; 'ARmax10'—% AR given 10 detections per image; 'ARmax100'—% AR given 100 detections per image; 'ARs'—% AR for small objects: area < 32²; 'ARm'—% AR for medium objects: 32² < area < 96²; 'ARl'—% AR for large objects: area > 96²; 'AP_MINER'—average precision of the class 'MINER'; 'AP_MINES'—average precision of the class 'MINES' [44].
Table 2. Metrics per class in the model for the Pest Monitoring Device (PMD).

| Class  | (n) Truth | (n) Classified | Accuracy | Precision | Recall | F1 Score |
|--------|-----------|----------------|----------|-----------|--------|----------|
| MINER  | 73        | 54             | 98.13%   | 1.0       | 0.74   | 0.85     |
| INSECT | 491       | 509            | 98.03%   | 0.96      | 1.0    | 0.98     |
| OTHER  | 453       | 454            | 99.9%    | 1.0       | 1.0    | 1.0      |
Table 3. Metrics per class in the model for the Vegetation Monitoring Device (VMD).

| Class | (n) Truth | (n) Classified | Accuracy | Precision | Recall | F1 Score |
|-------|-----------|----------------|----------|-----------|--------|----------|
| MINES | 1916      | 1856           | 94.59%   | 0.94      | 0.91   | 0.92     |
| OTHER | 3446      | 3506           | 94.59%   | 0.95      | 0.97   | 0.96     |
