Article

Automatic System for the Detection of Defects on Olive Fruits in an Oil Mill

by Pablo Cano Marchal *,†, Silvia Satorres Martínez, Juan Gómez Ortega and Javier Gámez García
Robotics, Automation and Computer Vision Group, Campus Las Lagunillas, s/n, University of Jaén, 23071 Jaén, Spain
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2021, 11(17), 8167; https://doi.org/10.3390/app11178167
Submission received: 29 July 2021 / Revised: 17 August 2021 / Accepted: 27 August 2021 / Published: 3 September 2021
(This article belongs to the Topic Applied Computer Vision and Pattern Recognition)

Abstract:
The ripeness and sanitary state of olive fruits are key factors in the final quality of the virgin olive oil (VOO) obtained. Since even a small number of damaged fruits may significantly impact the final quality of the produced VOO, the inspection of the olives in the oil mill reception area or in the first stages of the production process is of great interest. This paper proposes and validates an automatic defect detection system that utilizes infrared images, acquired under regular operating conditions of an olive oil mill, for the detection of defects on individual fruits. First, the image processing algorithm extracts the fruits based on the iterative application of the active contour technique assisted with mathematical morphology operations. Second, defect detection is performed on the segmented olives using a decision tree based on region descriptors. The final assessment of the algorithm suggests that it works effectively with a high detection rate, which makes it suitable for the VOO industry.

1. Introduction

The production of Virgin Olive Oil (VOO) is a food transformation activity that encompasses the extraction of the oil out of the raw incoming olives using exclusively mechanical means. The ripeness and sanitary state of these olives are a key factor in the final quality of the VOO obtained [1,2], as the technological variables of the process have only limited authority to influence the final features of the produced VOO, and can only preserve the potential quality offered by the fruit. Therefore, from a practical point of view, the properties of the incoming olives set an upper bound on the VOO quality, as the process cannot compensate for poor olive conditions. Under these circumstances, a proper characterization of the incoming olives emerges as a key step when the objective is to produce a consistent quality level of the final product. There is a well-known trade-off between quality and extraction yield [3,4], so it is important to only employ a high quality configuration of the process for those olives that could actually provide such quality, since using that configuration for other types of olives would simply result in a diminished extraction yield without the desired higher quality.
Separating the olives according to whether they were collected from the tree canopy or from the ground is a widespread practice in the VOO production industry. When this classification is carried out, olive growers are typically paid according to this factor, with olives coming from the tree receiving a higher remuneration. The classification of the batches between olives coming from the trees or from the ground is usually carried out by operators of the reception area of the factory. Consequently, these assessments can be affected by the subjectivity and the fatigue of the operators. Another important drawback of the manual inspection of the batches is the fact that the operator typically needs to perform some other actions in the reception area, so the inspection is carried out only on a subset of the received olives.
The different remuneration for the olives coming from the tree canopy and coming from the ground provides an economic motivation for growers to try to present ground olives as canopy olives, or to mix small amounts of ground olives in a batch of canopy olives. Sometimes, a preliminary cleaning process is carried out in the orchards in order to complicate the classification of ground olives as such.
Besides these considerations, canopy olives may have different quality levels, depending on whether they have suffered plagues or hail. The detection of these olives is of great interest, since even a small number of these may significantly impact the final quality of the produced VOO.
Different aspects of the determination of the quality level of olives using computer vision have been addressed in previous scientific publications. The first contributions focused on the detection of bruises on table olives and their classification into different classes according to the presence of these defects [5,6]. These works reported that the best results were obtained using artificial neural networks (ANN). A subsequent work [7] introduced the use of the Lab and HSV color spaces, and showed their applicability for the classification of table olives into different classes according to the type of defect present in them. In turn, [8] proposed using an ANN for the prediction of the ripeness of olives bound for the production of virgin olive oil (VOO). All of these contributions employ images taken in structured environments that avoided the superposition of olives and enhanced the difference between the olives and the background.
The first contributions that try to segment olives in unstructured conditions were [9,10]. These works, which are very similar, tackle the segmentation of olives from the olive branch, using the correlation of the image with an olive prototype and the application of the Chan-Vese algorithm in the surrounding area of the seeds detected by the correlation algorithm.
The use of cameras with spectra other than the visible band started with [11,12], where the authors show the ability of infrared cameras to capture defects in the olives that are not perceptible in the visible band. In these contributions, the layout of the olives in the images is similar to that found on the conveyor belts of the reception area of an olive oil mill; however, these contributions do not detail how to perform the olive segmentation automatically under these conditions.
In turn, in [13], the authors propose the classification of complete olive batches, assigning the same class to all the olives in the image. This work compares the results obtained using Fisher Linear Discriminant and ANN, reporting success rates above 90% in both cases, and slightly better for ANN. The authors continued the batch classification approach in [14], where the images employed were taken from the conveyor belt in the reception area of an olive oil mill. In this contribution, the authors employ a methodology similar to their previous paper and report success rates over 95% in all the cases.
The same batch analysis approach, without individually segmenting olives, can be found in [15], where olive images were used to predict the quality level of the VOO obtained from them. In this work, the features used were Haralick descriptors computed from the whole image removing the background. The prediction models were built using partial least squares regression and offered good results for different VOO quality parameters.
The individual olive segmentation approach can be found again in [16], where the authors employ structured images to estimate the size and weight of olives. Ref. [17] is a continuation of that work, with less strict requirements regarding the disposition of the olives in the images, but still avoiding the superposition of olives for the segmentation. A third related article by the same authors is [18], where the objective is to segment olives directly from the olive branch. For this, the authors employ convolutional neural networks (CNN) and compare the results obtained using different CNN architectures described in the literature, reporting sensitivity values around 80% and precision values between 84.4% and 88.8%.
A different approach for the evaluation of the properties of olives in the groves can be found in [19]. In this work, the authors propose a method consisting of sampling the olives and presenting them in structured conditions to obtain color, ripeness and detection of bruises.
Finally, two works that employ computer vision on olives with objectives other than the determination of their quality are [20], where the cultivar variety of the olives is determined based on images from their pit; and [21], where the position of table olives is detected using ANN for the pit-removal processing. Other interesting applications of computer vision for quality control in other industries can be found in [22,23].
The main technological challenge that this paper addresses is the use of images of the olives acquired under regular operating conditions of an olive oil mill for the detection of defects on individual olives. Accordingly, this work proposes and validates an automatic defect detection system based on the extraction of information from olive images acquired with an infrared camera installed in the reception yard of the mill. The main contributions of this work are:
  • Development of a non-invasive industrial monitoring system that allows us to capture data without interfering with the regular production process of the mill.
  • Development of an iterative process that employs active contours combined with mathematical morphology for the detection of individual olives after the washing process, using images that were acquired during the regular operation of the mill.
  • Development of a segmentation algorithm for the detection of defects in olives.
The rest of the paper is organized as follows: Section 2 presents the equipment design, image acquisition and image processing, while Section 3 introduces and discusses the obtained results. Finally, Section 4 presents the conclusions of the work.

2. Materials and Methods

The following sections detail the equipment design, the image acquisition, and the image processing and classification steps.

2.1. Equipment Design and Image Acquisition

The first step for using images acquired in industrial conditions for the determination of defects in olives is to design an acquisition system capable of obtaining good quality images, coping with the challenges imposed by these conditions. On the one hand, the speed of the conveyor imposes the use of very short acquisition times, which implies demanding requirements for the lighting system; on the other, the height of the olives in the conveyor varies in a sufficiently broad range as to influence the required depth of field and the focal distance of the lenses.
For this, an image acquisition system was designed for installation on the conveyors typically employed in the olive oil mill yard for the movement of the olives in the reception, washing and storage processes prior to milling. The acquisition system was composed of a lighting system, a camera, electronic components to manage the acquisition of the images and their transfer to external servers, and mechanical elements to protect the different components and fix the system to the conveyor. For this work, an infrared Dalsa Genie NANO M2590-NIR and a visible-spectrum Nano C242 with 25 mm optics, along with a 500 W stroboscopic halogen lighting system, were employed. These components were protected by a chamber designed in collaboration with ISR, a spin-off company from the University of Jaén. The chamber contained a holder for the camera that located it at approximately 50 cm from the conveyor and prevented its movement, providing a spatial resolution of 10 pixel/mm. Figure 1 shows the system installed on the conveyor of the factory.
The image acquisition system can be located before or after the fruit cleaning process, since conveyors are used in both stages. The decision to situate it before or after this cleaning process is influenced by the conveyor configuration of the factory. Depending on this configuration, it may be possible to store the fruit in hoppers according to its quality level. In many factories, the storage hopper is determined by the reception line employed, without the possibility of changing it afterwards; however, some other factory configurations do allow choosing the storage hopper independently of the reception line. In this case, the information obtained from the images could be used to define which hopper should store the olives, which would recommend installing the system before the switch point where the olives are actually routed to one hopper or another. If there is no such possibility, then the detection of defects can be used to penalize the grower that provides a batch with a sub-standard quality level, which would leave the question of whether to install the system before or after the cleaning process open to other considerations.
For this work, the images employed were acquired at the olive oil factory Industria San Pedro, S.L., located in Jaén, during the 2020–21 season. Since the system was not to be used for deciding which hopper the olives should be routed to, considerations related to the simplicity of installation motivated locating the system on the conveyor that moves the fruit from the washer to the storage hopper. This way, an average of 200 daily images, taken every 10 s and covering the reception of several batches of olives, was acquired from 11 January until 26 January. Figure 2 shows some examples of the images acquired using the cameras with different spectra installed on the conveyor. As can be seen in Figure 2a, the infrared camera is the better choice for detecting damaged olives. The fruits damaged by insects show stains with a lower gray level than the rest of the surface of the fruit. These stains would not be visible in the images of the color camera shown in Figure 2b.

2.2. Image Processing

Figure 3 shows the general block diagram designed for processing the infrared images acquired after the cleaning of the fruit. As can be observed, the process is structured in two main blocks: olive detection and defect detection. The average computation time of the complete algorithm was around 100 s for each image.
The olive detection block is composed of a preprocessing stage and two segmentation steps. From a global point of view, this block is based on the iterative application of the active contours technique [24] assisted with mathematical morphology operations. The type of images to be analysed renders classical segmentation techniques inapplicable. Thus, the applied techniques must be robust to the presence of noise and other spurious elements, and must be able to extract objects even when the boundary between them and the background is not clear.
The active contour technique allows us to segment objects based on models that use a priori information about the shape of these objects. If the model is adequate, the presence of false positives or negatives is expected to be low. These models are the result of the preprocessing stage, where two types of masks are obtained: global and adaptive. The active contour segmentation is carried out in two steps, using a global mask for the first and an adaptive mask for the second. The mathematical morphology operations are a required step to adjust the models between steps.
The detection of the defects is performed individually, evaluating each of the objects or olive contours found in the previous stage. The intersection of the results obtained using the global and adaptive segmentation techniques makes it possible to detect the defects of the olives, minimizing the false positives produced mainly by glows, changes in the intensity level of the olive borders or occlusions with other olives. The extracted regions of each olive are evaluated in the classification stage to decide whether they are considered defective or not. The classification is performed using a decision tree that evaluates descriptors of the regions such as circularity, elongation and area.

2.2.1. Detection of Olives

Before the iterative application of the active contours technique for the detection of the olives, it is necessary to have the models that will be adjusted to the image data. These models are obtained in the preprocessing step using classical segmentation procedures based on global and locally adaptive thresholding. The obtainment of these models and of the olive contours is presented next.
  • Obtainment of the models
    The global thresholding method applied for obtaining the global model or mask is that of Kapur [25], which employs the entropy of the histogram for the computation of the optimal threshold. The method considers that the optimal threshold $U_{opt}$ has been obtained when the sum of the entropy of the background $H_f(T)$ and the entropy of the object $H_{obj}(T)$ is maximized, i.e.,
    $$U_{opt} = \arg\max_T \left[ H_{obj}(T) + H_f(T) \right].$$
    The entropies are defined as:
    $$H_{obj}(T) = -\sum_{g=T+1}^{G} p_{obj}(g) \cdot \log p_{obj}(g), \qquad H_f(T) = -\sum_{g=0}^{T} p_f(g) \cdot \log p_f(g),$$
    where $p_{obj}(g)$ and $p_f(g)$ correspond to the probability mass functions of the object and background pixels, defined over $T+1 \le g \le G$ and $0 \le g \le T$, respectively, with $g$ a specific gray level, $G$ the maximum gray level and $T$ the threshold chosen to segment the image.
    The adaptive thresholding method employed for obtaining the adaptive model or mask is that of Bernsen [26]. This method selects a different segmentation threshold for each pixel of the image, according to a local feature computed over a neighbourhood area (w) centered on the pixel. The local feature employed by this method is the mean of the highest and lowest pixel values within the neighbourhood. This way, the optimal threshold for each pixel of the image is obtained as:
    $$U_{opt}(i,j) = 0.5 \cdot \left\{ \max_{w}\left[ I(i+m, j+n) \right] + \min_{w}\left[ I(i+m, j+n) \right] \right\},$$
    where $I(i,j)$ is the intensity level of the pixel in the original image, while $m$ and $n$ range, respectively, over the rows and columns of $w$, which defines the neighbourhood area. The results obtained with this method depend entirely on the size of this neighbourhood. Bernsen established the following correspondence between the appropriate dimension of the neighbourhood area and the contrast between defects and background:
    $$w = 31, \qquad C = I_{max}(x,y) - I_{min}(x,y) \ge 15,$$
    with $I_{max}$ and $I_{min}$ the maximum and minimum gray levels of the original image, respectively. These values have been used in this article.
  • Detection of the contours
    As commented above, the detection of the contours of the olives is based on the application of the active contours, or snakes, method. This method starts with a contour that is relatively close to the final solution, which has been called the model above, and this contour evolves towards a local minimum of an energy functional. Mathematically, a snake is defined as a parametric curve $r(s) = (x(s), y(s))$, with $s \in (0, 1)$. The energy functional can be represented as
    $$E^{*}_{snake}(r) = \int_0^1 E_{internal}(r(s))\, ds + \int_0^1 E_{external}(r(s))\, ds,$$
    where $E_{internal}$ is the energy of the curve that defines the capacity of the contour to adapt itself to the frontier of interest and $E_{external}$ is the energy that guides the snake towards this frontier. The energy of the curve $E_{internal}$ is composed of the sum of the elastic energy ($E_{elastic}$) and the rigidity energy ($E_{rigidity}$), being:
    $$E_{internal}(r(s)) = \frac{\vartheta_1 \left\| r'(s) \right\|^2 + \vartheta_2 \left\| r''(s) \right\|^2}{2},$$
    where, by adjusting the parameters $\vartheta_1$ and $\vartheta_2$, it is possible to control the relative weight of $E_{elastic}$ and $E_{rigidity}$.
    In turn, $E_{external}$ is composed of the sum of the image energy ($E_{image}$) and the restrictive energy ($E_{restrictive}$), being:
    $$E_{image} = \vartheta_{lin} \cdot E_{lin} + \vartheta_{bor} \cdot E_{bor} + \vartheta_{ter} \cdot E_{ter},$$
    where $\vartheta_{lin}$, $\vartheta_{bor}$ and $\vartheta_{ter}$ are the weights related to the line ($E_{lin}$), border ($E_{bor}$) and termination ($E_{ter}$) energies, respectively. Depending on the setting of these parameters, the behaviour of the snake in the image can be modified according to the following guidelines:
    - Line energy: if the image contains lines, the contour will be attracted to them:
      $$E_{lin}(r(s)) = I(x(s), y(s)),$$
      with $I$ denoting the intensity of the image.
    - Border energy: its purpose is to find borders in the image:
      $$E_{bor}(r(s)) = \left| \left( \nabla^2 h_\sigma(x(s), y(s)) \right) * I(x(s), y(s)) \right|^2,$$
      where $h_\sigma$ is a Gaussian function with a standard deviation of $\sigma$ and $*$ denotes convolution.
    - Termination energy: allows the detection of line endings and corners:
      $$E_{ter}(r(s)) = \frac{D^{2}_{n_2} C(x(s), y(s))}{D_{n_1} C(x(s), y(s))},$$
      where $C(x(s), y(s))$ is the smoothed version of the image, $D^{2}_{n_2}$ is the second directional derivative along the unit vector $n_2$, and $D_{n_1}$ is the directional derivative along $n_1$. The vectors $n_1$ and $n_2$ define the directions parallel and perpendicular to the gradient, respectively.
The second term of the external energy, i.e., the restrictive energy ($E_{restrictive}$), can be expressed as:
$$E_{restrictive} = -k \cdot (p_1 - p_2)^2 + \frac{k}{(p_1 - p_2)^2},$$
where $k$ is a constant and $p_1$ and $p_2$ are points of the snake and the image, respectively.
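As an illustration of the internal energy term, the following Python sketch evaluates the discretized elastic and rigidity energies of a closed polygonal snake, approximating r′(s) and r″(s) with finite differences. This is a minimal sketch of the formula above, not the authors' implementation:

```python
def internal_energy(snake, theta1=1.0, theta2=1.0):
    """Discrete internal energy of a closed snake given as a list of (x, y) points.

    First differences approximate r'(s) (elasticity); second differences
    approximate r''(s) (rigidity). The total follows
    E_internal = (theta1 * |r'|^2 + theta2 * |r''|^2) / 2, summed over points.
    """
    n = len(snake)
    energy = 0.0
    for i in range(n):
        x0, y0 = snake[i - 1]            # previous point (wraps around)
        x1, y1 = snake[i]                # current point
        x2, y2 = snake[(i + 1) % n]      # next point (wraps around)
        dx, dy = x2 - x1, y2 - y1                        # ~ r'(s)
        d2x, d2y = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # ~ r''(s)
        energy += 0.5 * (theta1 * (dx * dx + dy * dy)
                         + theta2 * (d2x * d2x + d2y * d2y))
    return energy
```

Raising `theta2` relative to `theta1` penalizes corners more strongly, so the converged contour stays smoother, which matches the role of the rigidity weight described above.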
Figure 4 shows an example of the image processing obtained in each of the main stages of the method for the detection of the olives, as detailed in Figure 3. Figure 4a shows the original infrared image, while Figure 4b depicts the result of the preprocessing stage, which is the mask to be used for the application of active contours in the different levels. The result of the step 1 segmentation, which employs the global masks, is shown in magenta in Figure 4c. As can be seen, the contours of the olives are perfectly defined even though three of these olives are very close together. The result of the step 2 segmentation, which uses the adaptive masks, is shown in green in Figure 4c. In this case, the mask detects the protrusions of the conveyor and the active contours adapt themselves to these masks. The final result of the composition of both contours, including the decision of whether each region should be considered valid, is shown in Figure 4d. This decision is based on the elongation of the contour ($elong$), obtained from:
$$elong = \frac{l_2}{l_1},$$
where $l_2$ and $l_1$ represent, respectively, the height and the width of the minimum bounding box of the contour.
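To make the two mask-generation steps of the preprocessing stage concrete, the following pure-Python sketch implements Kapur's entropy-based global threshold and Bernsen's local threshold. It is an illustration of the formulas above, not the authors' code, and the histogram and image used below are synthetic:

```python
import math

def kapur_threshold(hist):
    """Kapur's entropy threshold: maximize H_obj(T) + H_f(T).

    hist is a list of pixel counts indexed by gray level.
    """
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(len(hist) - 1):
        p_f = sum(p[:t + 1])        # background probability mass
        p_obj = 1.0 - p_f           # object probability mass
        if p_f <= 0.0 or p_obj <= 0.0:
            continue
        h_f = -sum(q / p_f * math.log(q / p_f) for q in p[:t + 1] if q > 0)
        h_obj = -sum(q / p_obj * math.log(q / p_obj) for q in p[t + 1:] if q > 0)
        if h_f + h_obj > best_h:
            best_h, best_t = h_f + h_obj, t
    return best_t

def bernsen_threshold(img, i, j, w=31):
    """Bernsen's local threshold at pixel (i, j): mean of the maximum and
    minimum intensities inside a w x w window, clipped at the image border."""
    half = w // 2
    vals = [img[m][n]
            for m in range(max(0, i - half), min(len(img), i + half + 1))
            for n in range(max(0, j - half), min(len(img[0]), j + half + 1))]
    return 0.5 * (max(vals) + min(vals))
```

The global mask uses one Kapur threshold for the whole image, whereas the adaptive mask applies `bernsen_threshold` per pixel; in practice one would compute these with an image library rather than nested Python loops.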

2.2.2. Detection of Defects

The detection of defects is based on an iterative process that sequentially applies the segmentation algorithms previously presented (Kapur and Bernsen) to each of the olive contours found in the previous step. Figure 5 shows the detection of defects in the image previously used to illustrate the detection of olives. In order to avoid glows in the olives and enhance potential defects, a preprocessing step, consisting of the inversion of the pixel intensity values with respect to the maximum possible gray level (255), is required. This way, given an image $I$ defined in the interval $[0, 255]$, the inverted image $I_{INV}$ is obtained with the operation $I_{INV} = 255 - I$. The bottom-left area of Figure 5 shows the image after this transformation.
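The inversion step follows directly from the definition; a one-line sketch, assuming the image is stored as a list of rows of 8-bit intensities:

```python
def invert(img):
    """I_INV = 255 - I: dark defect stains become bright, glows become dark."""
    return [[255 - v for v in row] for row in img]
```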
The extracted features of the regions obtained after applying the segmentation procedures are area, elongation and circularity (cir), obtained as:
$$cir = \frac{4 \cdot \pi \cdot area}{perimeter^2}.$$
In general, the defects that identify an olive as defective are expected to be small, circular and with a gray level in the inverted image that is larger than that of the rest of the pixels on the olive surface. With these features, and applying the decision tree of Figure 6, it is possible to discern between acceptable and defective regions. The area, elongation and circularity thresholds were set heuristically after the analysis of 50% of the images acquired on the first day. Elongation and circularity, since they are ratios, are quite insensitive to small variations in the distance between the camera and the conveyor, which might affect the spatial resolution of the system. The area, on the other hand, is directly impacted by these variations. However, the design of the chamber includes a holder for the camera that prevents its movement. Installing the system at a different distance from the conveyor would require a recalibration of the system and the definition of new thresholds. An olive is considered acceptable if there is no defective area in its contour. Figure 5 shows the result of the classification, where only one defective olive is found.
In addition, the percentage of defective olives is also given, obtained from:
$$KO_{area} = \frac{defective\ area}{total\ area} \cdot 100.$$
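A sketch of the region classification and the defective-area percentage follows. The descriptor thresholds here are placeholders, since the paper sets them heuristically from the first day's images and does not report the exact values:

```python
import math

def circularity(area, perimeter):
    """cir = 4 * pi * area / perimeter^2 (equals 1.0 for a perfect circle)."""
    return 4.0 * math.pi * area / perimeter ** 2

def is_defective_region(area, elong, cir,
                        max_area=200.0, max_elong=2.0, min_cir=0.6):
    """Decision rule in the spirit of Figure 6: defects are expected to be
    small, roughly circular regions. Thresholds are illustrative only."""
    return area <= max_area and elong <= max_elong and cir >= min_cir

def ko_area(defective_area, total_area):
    """KO_area: percentage of defective area over the total olive area."""
    return 100.0 * defective_area / total_area
```

An olive would then be flagged as defective as soon as one of its regions satisfies `is_defective_region`, matching the per-olive rule stated above.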

3. Results and Discussion

In this section, the details of the image processing algorithm performance, along with their discussion, are presented in two parts. In the first part, the evaluation procedure designed for the image processing algorithm is described. In the second part, the results of the inspections using the former procedure on images acquired during the 2020–2021 harvest period are presented and analysed.

3.1. Evaluation Procedure for the Image Processing Algorithm

The image processing goal is to detect the percentage of defective olives from the images acquired using the infrared camera (Nano M2590-NIR) placed over the conveyor belt after the washing stage. For this, olives are segmented and then every segmented region is independently processed in order to extract defects. Figure 7 and Figure 8 show the results of the olive segmentation step for two different types of images. As can be seen, the quantity of olives on the conveyor belt has an influence on the segmentation results. When the conveyor is not completely full (Figure 7a), olives are successfully extracted even though they touch each other. The application of the two segmentation steps presented in the previous section manages to extract all the fruits, along with part of the conveyor belt (Figure 7b,c). The regions that belong to the conveyor belt have a significantly different shape than those from the fruits. These regions can be removed from the final result using the region elongation as a decision feature (Figure 7d).
When the conveyor belt is completely full, some fruits are included in the same region (Figure 8d). This issue mainly occurs when the image is not properly lighted (left side of the conveyor belt) or when there is too much overlap among fruits. Despite this drawback, most of the fruits can be analysed, as they are extracted from the background. Figure 8b shows the preprocessing stage, while Figure 8c depicts the result of the two level segmentation process.
The results of the olive inspection procedure for the former examples are shown in Figure 9. The regions extracted from the segmented olives are labelled as defective or non-defective by the classification algorithm detailed in Figure 6. A segmented olive is labelled as defective when at least one region inside it is detected as defective.
In order to assess the algorithm performance, a set of five images per day was manually labelled, using a graphic editor, with the assistance of the oil mill expert. Figure 10 shows an example of these images, which were utilized as ground-truth. Then, the same set of images was analysed by the image processing algorithm, and the results of the region comparisons were categorized and annotated according to the following definitions:
  • TP (true positive): olives or set of olives labelled and detected as defective.
  • FN (false negative): olives or set of olives labelled as defective but classified as non-defective.
  • FP (false positive): olives or set of olives labelled as non-defective but classified as defective.
  • TN (true negative): olives or set of olives labelled and detected as non-defective.
This way, true positive and false positive rates (TPR, FPR) were computed as:
$$TPR = \frac{TP}{TP + FN} \cdot 100; \qquad FPR = \frac{FP}{FP + TN} \cdot 100.$$
Thus, the detection rate is given by the TPR, while the FPR provides the rate of non-defective olives wrongly categorized as defective.
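Both rates follow directly from the confusion counts; a small sketch, with counts that are purely illustrative rather than taken from the paper's data:

```python
def detection_rates(tp, fn, fp, tn):
    """True positive rate (detection rate) and false positive rate, in %."""
    tpr = 100.0 * tp / (tp + fn)
    fpr = 100.0 * fp / (fp + tn)
    return tpr, fpr

# Illustrative counts: 8 of 10 defective olives detected,
# 1 of 10 non-defective olives wrongly flagged.
tpr, fpr = detection_rates(tp=8, fn=2, fp=1, tn=9)
```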

3.2. Results of the Image Processing Algorithm

As mentioned previously, the validity of the image processing algorithm was tested through ground-truth image comparison. The values of the measures proposed to evaluate the algorithm performance are included in Table 1.
The first column of Table 1 shows the harvesting date. Three example days within the assessed period (from 11 to 26 January 2021) were chosen for the discussion of the obtained results. In addition, data from the whole period are also presented. The second column classifies images, according to the number of olives or sets of olives extracted from the background, into the following three levels: low, medium and high. Images containing fewer than 30 regions are labelled as low; images whose number of regions is between 30 and 100 are considered medium; finally, if an image contains more than 100 regions, it is labelled as high. The last two columns give the detection rate (TPR) and the rate of wrongly classified olives (FPR). In all cases, the TPR and FPR rates are the average over the set of images analysed by the expert on the corresponding harvest date (5 images per day).
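The occupancy labelling described above maps directly to a small helper, using the stated boundaries (below 30; 30 to 100; above 100):

```python
def occupancy_level(n_regions):
    """Label an image by the number of regions extracted from the background."""
    if n_regions < 30:
        return "low"
    if n_regions <= 100:
        return "medium"
    return "high"
```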
As expected, the quantity of fruits on the conveyor belt has an influence on the algorithm performance. Detection rate is always higher for images labelled as low, while there is no significant difference between images designated as medium or high. The TPR is more reliable if olives are not overlapped, as the whole fruit can be analysed.
Olives can also be wrongly classified if several of them are included in the same region and disturbances, such as the olive edge or part of the background visible among them, appear. This issue increases the number of FP, i.e., olives labelled as non-defective but classified as defective, which directly raises the FPR. Images labelled as medium or high are more likely to suffer this effect, but it can also occur in images labelled as low. An example of this is shown in Table 1: on both harvesting dates, 15 and 26 January, the FPR for images considered low is slightly higher than for those labelled high.
The conveyor belt can also be a source of mistakes, and images considered as low are prone to this type of error. Figure 11 shows an example of these two types of false positive sources: the conveyor belt and the overlap among several olives, both labelled as defects.
The final assessment of the image processing algorithm was done by analysing a total of 80 images. As shown in the last row of Table 1, the TPR was 81.22% and the FPR was 11.51%. The detection rate is expected to be higher than that achieved in a manual inspection, since the current inspection process is performed by sampling a subset of olives from each batch, with the frequency of inspection and the number of olives to be inspected based on the oil-mill expert's indications. The capacity to analyse the whole batch of incoming olives is already a substantial improvement on the current practice. Although TPR and FPR targets are not yet established for this industrial sector, the results achieved during the assessment are quite close to what is normally demanded for industrial applications in the automotive sector (TPR > 85%, FPR < 10%) [27].
Finally, Figure 12 shows the results of the inspection of several images acquired on different dates during the evaluation period. The general condition of the olives did not worsen over this period, as evidenced by the fact that the "ko area" data did not follow a clear upward trend. A plausible explanation is that the temporal span of the analysis is relatively short compared to the timescale needed to appreciate a general evolution in the overall state of the olives.
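The absence of a clear upward trend can be checked with a simple least-squares slope. The sketch below assumes the per-date "ko area" values are available as a list; the numbers are hypothetical, not the values from the study.

```python
def slope(y):
    """Least-squares slope of y against its index (one sample per date)."""
    n = len(y)
    mx = (n - 1) / 2          # mean of indices 0..n-1
    my = sum(y) / n           # mean of the observed values
    num = sum((x - mx) * (v - my) for x, v in enumerate(y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

ko_area = [4.1, 3.8, 4.4, 4.0, 3.9]   # hypothetical "ko area" percentages
print(abs(slope(ko_area)) < 0.05)      # True: no clear upward trend
```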

4. Conclusions

This paper was concerned with the automated detection of defects on olive fruits for the production of VOO. The early detection of fruits affected by pests or hail is of great interest, since even a small number of them may significantly impact the final quality of the produced VOO. As most of these defects are not perceptible in the visible band, this work has proposed a non-invasive industrial monitoring system based on an infrared camera. The image acquisition system was placed over the conveyor belt after the cleaning process, without interfering with the regular productive process of the mill. Although images were acquired under controlled conditions, they were not free from glows or from shadows caused by overlapping fruits.
To extract information from this type of image, a two-stage image processing algorithm was designed. First, olives were segmented from the conveyor belt by an iterative application of the active contour technique assisted with mathematical morphology operations. Second, defect detection was performed by individually evaluating each of the segmented regions extracted in the previous stage.
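To make the second stage concrete, the following minimal sketch classifies a segmented region with a hand-written decision tree over region descriptors (area, circularity, elongation), in the spirit of the classifier described above. The descriptor set matches the paper, but the thresholds and the helper names are illustrative assumptions, not the values used in the study.

```python
import math

def circularity(area, perimeter):
    # 1.0 for a perfect circle, smaller for irregular contours
    return 4 * math.pi * area / perimeter ** 2

def classify_region(area, perimeter, elongation,
                    min_area=50, min_circ=0.6, max_elong=2.5):
    # Thresholds are hypothetical, for illustration only.
    if area < min_area:
        return "noise"        # too small to be an olive
    if circularity(area, perimeter) < min_circ:
        return "defective"    # irregular contour suggests damage
    if elongation > max_elong:
        return "defective"    # overly elongated region
    return "sound"

# A near-circular region (circle of radius 10) with mild elongation:
print(classify_region(area=314.16, perimeter=62.83, elongation=1.2))  # sound
```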
The proposal was validated by comparing the automated inspections against manual inspections, taken as the ground truth. Over more than a fortnight of January 2021, a total of 5 images per day were manually inspected and then compared with the automated results. Two rates were computed for this comparison: the detection rate, given by the True Positive Rate (TPR), and the proportion of wrongly classified olives, estimated by the False Positive Rate (FPR). The results underscore the robustness and accuracy of the algorithm, as they are in line with what is demanded in industry.
Because of this, the presented solution is a promising starting point for inspecting most of the incoming fruits without interfering with the normal daily functioning of the oil mill. It represents a considerable improvement over the current inspection process, which is based on a sampling scheme carried out by the oil mill expert.
The image acquisition period of 10 s was chosen to reduce the number of files while still obtaining a representative sample of olives for a given number of images. However, both the camera and the stroboscopic halogen lamp used allow the capture rate to be increased significantly, making it possible to capture images of the olives being processed in the factory almost continuously. On the other hand, the average processing time of 100 s per image is currently too high for real-time decision making. This computation time does, however, allow a quality assurance process to be performed on the olive batches, enabling the detection of faulty batches and the subsequent actions regarding the remuneration of the farmer.
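A back-of-the-envelope calculation (an illustration, not an analysis from the paper) shows what closing this gap would take: with one image acquired every 10 s and roughly 100 s of processing per image, about ten images would be processed in parallel to keep up with the acquisition rate.

```python
import math

acquisition_period_s = 10   # one image acquired every 10 s
processing_time_s = 100     # average processing time per image

# Number of parallel workers needed so processing keeps pace
# with acquisition (assuming the work parallelises per image).
workers_needed = math.ceil(processing_time_s / acquisition_period_s)
print(workers_needed)  # 10
```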
Further research effort will be devoted to optimizing the implementation of the different steps in order to reduce this computation time, as well as to exploring the enhancement of the olive segmentation step with deep learning methods instead of traditional image processing algorithms. In addition, the number of image acquisitions is expected to increase, as the industrial monitoring system will be installed for the whole of next season.

Author Contributions

Conceptualization, P.C.M., S.S.M., J.G.O. and J.G.G.; formal analysis, P.C.M. and S.S.M.; funding acquisition, J.G.O. and J.G.G.; investigation, P.C.M. and S.S.M.; writing—original draft, P.C.M. and S.S.M.; writing—review and editing, P.C.M., S.S.M., J.G.O. and J.G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Spanish Ministry of Science and Innovation under the project PID2019-110291RB-I00.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to confidentiality agreements with Almazara San Pedro.

Acknowledgments

The authors would like to thank Almazara San Pedro, S.L. for their collaboration in this work.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Barranco Navero, D.; Fernandez Escobar, R.; Rallo Romero, L. El Cultivo del Olivo, 7th ed.; Mundi-Prensa: Madrid, Spain, 2008; pp. 657–728.
2. Genovese, A.; Caporaso, N.; Sacchi, R. Flavor chemistry of virgin olive oil: An overview. Appl. Sci. 2021, 11, 1639.
3. Cano Marchal, P.; García, J.G.; Ortega, J.G. Application of Fuzzy Cognitive Maps and Run-to-Run Control to a Decision Support System for Global Set-Point Determination. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2256–2267.
4. Tamborrino, A.; Urbani, S.; Servili, M.; Romaniello, R.; Perone, C.; Leone, A. Pulsed electric fields for the treatment of olive pastes in the oil extraction process. Appl. Sci. 2020, 10, 114.
5. Diaz, R.; Faus, G.; Blasco, M.; Blasco, J.; Molto, E. The application of a fast algorithm for the classification of olives by machine vision. Food Res. Int. 2000, 33, 305–309.
6. Diaz, R.; Gil, L.; Serrano, C.; Blasco, M.; Moltó, E.; Blasco, J. Comparison of three algorithms in the classification of table olives by means of computer vision. J. Food Eng. 2004, 61, 101–107.
7. Riquelme, M.T.; Barreiro, P.; Ruiz-Altisent, M.; Valero, C. Olive classification according to external damage using image analysis. J. Food Eng. 2008, 87, 371–379.
8. Furferi, R.; Governi, L.; Volpe, Y. ANN-based method for olive Ripening Index automatic prediction. J. Food Eng. 2010, 101, 318–328.
9. Gatica, C.; Best, S.; Ceroni, J.; Lefranc, G. A New Method for Olive Fruits Recognition; Springer: Berlin/Heidelberg, Germany, 2011; Volume 7042, pp. 646–653.
10. Gatica, G.; Best, S.; Ceroni, J.; Lefranc, G. Olive fruits recognition using neural networks. Procedia Comput. Sci. 2013, 17, 412–419.
11. Guzmán, E.; Baeten, V.; Pierna, J.A.F.; García-Mesa, J.A. Infrared machine vision system for the automatic detection of olive fruit quality. Talanta 2013, 116, 894–898.
12. Guzmán, E.; Baeten, V.; Pierna, J.A.F.; García-Mesa, J.A. Using a Visible Vision System for On-Line Determination of Quality Parameters of Olive Fruits. Food Nutr. Sci. 2013, 4, 90–98.
13. Puerto, D.; Gila, D.; García, J.; Ortega, J. Sorting Olive Batches for the Milling Process Using Image Processing. Sensors 2015, 15, 15738–15754.
14. Aguilera Puerto, D.; Cáceres Moreno, Ó.; Martínez Gila, D.M.; Gómez Ortega, J.; Gámez García, J. Online system for the identification and classification of olive fruits for the olive oil production process. J. Food Meas. Charact. 2019, 13, 716–727.
15. Navarro Soto, J.; Satorres Martínez, S.; Martínez Gila, D.; Gómez Ortega, J.; Gámez García, J. Fast and Reliable Determination of Virgin Olive Oil Quality by Fruit Inspection Using Computer Vision. Sensors 2018, 18, 3826.
16. Ponce, J.M.; Aquino, A.; Millán, B.; Andújar, J.M. Olive-fruit mass and size estimation using image analysis and feature modeling. Sensors 2018, 18, 2930.
17. Ponce, J.M.; Aquino, A.; Millán, B.; Andújar, J.M. Automatic Counting and Individual Size and Mass Estimation of Olive-Fruits Through Computer Vision Techniques. IEEE Access 2019, 7, 59451–59465.
18. Aquino, A.; Ponce, J.M.; Andújar, J.M. Identification of olive fruit, in intensive olive orchards, by means of its morphological structure using convolutional neural networks. Comput. Electron. Agric. 2020, 176, 105616.
19. Sola-Guirado, R.R.; Bayano-Tejero, S.; Aragón-Rodríguez, F.; Bernardi, B.; Benalia, S.; Castro-García, S. A smart system for the automatic evaluation of green olives visual quality in the field. Comput. Electron. Agric. 2020, 179, 1–10.
20. Satorres Martínez, S.; Martínez Gila, D.; Beyaz, A.; Gómez Ortega, J.; Gámez García, J. A computer vision approach based on endocarp features for the identification of olive cultivars. Comput. Electron. Agric. 2018, 154, 341–346.
21. de Jódar Lázaro, M.; Luna, A.M.; Lucas Pascual, A.; Martínez, J.M.M.; Canales, A.R.; Madueño Luna, J.M.; Segovia, M.J.; Sánchez, M.B. Deep learning in olive pitting machines by computer vision. Comput. Electron. Agric. 2020, 171, 105304.
22. Parkot, K.; Sioma, A. Development of an automated quality control system of confectionery using a vision system. In Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments; Proceedings of SPIE, The International Society for Optical Engineering; SPIE: Bellingham, WA, USA, 2018; Volume 1080817, p. 200.
23. Nitka, A.; Sioma, A. Design of an automated rice grain sorting system using a vision system. In Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments; Proceedings of SPIE, The International Society for Optical Engineering; SPIE: Bellingham, WA, USA, 2018; Volume 1080816, p. 198.
24. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277.
25. Kapur, J.N.; Sahoo, P.K.; Wong, A.K. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285.
26. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 220.
27. Automotive Industry Action Group. Measurement System Analysis, 4th ed.; AIAG: Southfield, MI, USA, 2010.
Figure 1. Industrial monitoring system installed on the conveyor belt after the olive cleaning process.
Figure 2. Images acquired on the conveyor after the cleaning process. (a) Infrared camera (Nano M2590-NIR). (b) Visible band camera (Nano C242).
Figure 3. Image processing block diagram.
Figure 4. Detection of the contour of the olives. (a) Infrared image. (b) Global mask (magenta) and adaptive mask (green). (c) Active contours obtained with each mask. (d) Final active contours.
Figure 5. Example of the detection of defects in olives.
Figure 6. Classification algorithm based on a decision tree that evaluates region descriptors such as area, circularity and elongation.
Figure 7. Example of olive segmentation (conveyor belt not completely full). (a) Acquired image. (b) Result of the preprocessing step: global thresholding mask in magenta and adaptive thresholding mask in green. (c) Level one (magenta) and level two (green) segmentation. (d) Final segmentation result.
Figure 8. Example of olive segmentation (conveyor belt completely full). (a) Acquired image. (b) Result of the preprocessing step: global thresholding mask in magenta and adaptive thresholding mask in green. (c) Level one (magenta) and level two (green) segmentation. (d) Final segmentation result.
Figure 9. Defect detection. (a) Detection of defective olives in Figure 7. (b) Detection of defective olives in Figure 8.
Figure 10. Example of the ground truth (including a zoomed view) for the evaluation of the image processing algorithm.
Figure 11. Example of false positives in an image with a low number of olives or sets of olives. The top shows false positives caused by glows on the conveyor belt; the bottom shows false positives caused by the overlap of olives.
Figure 12. Example of defect detection for images labelled as low acquired during the evaluation period.
Table 1. Performance of the image processing algorithm, calculated by comparing the automated and manual inspections (ground truth). Results are expressed in terms of True Positive Rate (TPR) and False Positive Rate (FPR).

Harvesting Date          Number of Olives or Sets of Olives    TPR (%)    FPR (%)
11/01/2021               low                                     85.71       6.67
                         medium                                  79.55      11.39
                         high                                    88.00      14.38
15/01/2021               low                                     93.33      10.81
                         medium                                  87.50      13.46
                         high                                    76.47       9.46
26/01/2021               low                                     88.89      10.34
                         medium                                  73.08      16.24
                         high                                    72.73       9.83
11/01/2021–26/01/2021    low                                     90.32       9.88
                         medium                                  79.07      16.24
                         high                                    79.69       9.83
11/01/2021–26/01/2021    global                                  81.22      11.51