Article

A Method of Segmenting Apples Based on Gray-Centered RGB Color Space

1 College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China
2 School of Computer Science and Technology, Baoji University of Arts and Science, Baoji 721016, China
3 Shaanxi Key Laboratory of Apple, Yangling 712100, China
4 Apple Mechanized Research Base, Yangling 712100, China
5 State Key Laboratory of Soil Erosion and Dryland Farming on Loess Plateau, Yangling 712100, China
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(6), 1211; https://doi.org/10.3390/rs13061211
Submission received: 23 February 2021 / Revised: 15 March 2021 / Accepted: 16 March 2021 / Published: 23 March 2021

Abstract:
In recent years, many agriculture-related problems have been evaluated with the integration of artificial intelligence techniques and remote sensing systems. The rapid and accurate identification of apple targets in an illuminated and unstructured natural orchard is still a key challenge for the picking robot’s vision system. In this paper, by combining local image features and color information, we propose a pixel patch segmentation method based on gray-centered red–green–blue (RGB) color space to address this issue. Different from the existing methods, this method presents a novel color feature selection method that accounts for the influence of illumination and shadow in apple images. By exploring both color features and local variation in apple images, the proposed method could effectively distinguish the apple fruit pixels from other pixels. Compared with the classical segmentation methods and conventional clustering algorithms as well as the popular deep-learning segmentation algorithms, the proposed method can segment apple images more accurately and effectively. The proposed method was tested on 180 apple images. It offered an average accuracy rate of 99.26%, recall rate of 98.69%, false positive rate of 0.06%, and false negative rate of 1.44%. Experimental results demonstrate the outstanding performance of the proposed method.

Graphical Abstract

1. Introduction

Apple picking is a labor- and time-intensive task. To save labor and promote the automation of agricultural production, apple-picking robots are being developed to replace manual fruit picking. In the past few years, several researchers have extensively studied fruit- and vegetable-picking robots [1,2,3,4,5]. In a natural, unstructured environment, sunlight and the obstruction of branches and leaves produce light and dark spots on the apple’s surface [6]. These spots are a special kind of noise: they distort the information of the target area in the image, complicate image processing, increase the difficulty of recognition and segmentation, and hinder the precise positioning of the fruit target and the execution of the picking task [7].
In existing fruit-harvesting robot vision systems, the influence of sunlight is reduced either by changing the imaging conditions before capture or by optimizing the image after capture. Before acquisition, optical filters on camera lenses or large shades with artificial auxiliary light sources are applied to improve the imaging conditions [1]. Such manual intervention and auxiliary lighting can simplify subsequent processing by producing light-independent images. However, the practical utility of this approach is limited: the frequent replacement of the filter’s color absorber and the power consumption of the auxiliary light source increase the system’s cost, which hinders its promotion and application [8]. Moreover, most fruit and vegetable images must be captured in natural light, and under very poor lighting conditions, segmentation without an auxiliary light source is not ideal in terms of robustness and processing speed [9,10,11].
Another solution is to optimize the image-processing algorithm to reduce or eliminate the influence of light (and shadow) on the target area of the image. Ji et al. [12] first selected a batch of pixels that might belong to apples, used the Euclidean distance between pixels in RGB space as a similarity measure, and then combined color and texture features with a region-growing method and a support vector machine (SVM) to complete the apple segmentation. Lü et al. [13] used the red-green (R-G) color difference combined with the Otsu dynamic threshold method to realize fast segmentation of an apple. Liu et al. [14] proposed a nighttime apple image segmentation method based on pixel color and location: a back-propagation neural network classified each pixel from its RGB and HSI values, and the apple was then re-segmented according to the relative position and color difference of the pixels around the segmented fruit. Lv et al. [15] proposed a red-blue (R-B) color difference map segmentation method, which extracts the main colors in the image, reconstructs the image, and segments the original image according to a threshold to obtain the apple target. Although these algorithms segment well, they cannot adapt to varying illumination [16]; their performance under the alternating lighting and shadows of a natural orchard remains to be verified.
Besides the above segmentation methods based on chromatic aberration and thresholds, segmentation algorithms that account for the effect of illumination and shadows have also been proposed. Tu et al. [17] applied the principle of illumination invariance to the recognition of apple targets: a median filter first removed noise from the image, an illumination-invariant graph was then extracted from the processed color image to eliminate the influence of variation in light, and finally the Otsu threshold segmentation method extracted the target fruit. However, the work does not discuss the algorithm’s effect on apple targets affected by shadow. Huang and He [18] segmented apple targets from the background using the fuzzy 2-partition entropy algorithm in Lab color space, with an exhaustive search for the optimal segmentation threshold; however, this method only works under specific light conditions. Song et al. [19] used the illumination-invariant image principle to obtain an illumination-invariant version of the shadowed apple image, extracted the red component of the original image, added it to the illumination-invariant image, and performed adaptive threshold segmentation on the sum to remove shadows. However, shadows in the image background had a large impact on the segmentation results, leading to a high false segmentation rate. Song et al. [20] proposed a method for removing surface shadows using fuzzy set theory: the image is viewed as a fuzzy matrix, and a membership function de-blurs the image to enhance its quality and minimize the influence of shadow on apple segmentation. The applicability of this method needs further research. Sun et al. [6] combined the GrabCut model, based on an improved visual attention mechanism, with the normalized cut (Ncut) algorithm to identify green apples in an orchard under varying illumination, and achieved good segmentation accuracy by removing the influence of shadows and lighting. However, the images were processed pixel-wise, ignoring the inherent spatial information between pixels, which is not ideal for images from a natural orchard environment [21].
Superpixel segmentation fully considers the spatial relationship between adjacent pixels, effectively avoiding the segmentation errors caused by single-pixel mutations. Liu et al. [16] divided the entire image into several superpixel units, extracted the color and texture features of the superpixels, and used an SVM to classify the superpixels and segment the apples; however, the running speed was slow. Xu et al. [22] combined grouped pixels and edge probability maps to generate an apple image composed of superpixel blocks, removing the effect of shadows by re-illumination. This method can effectively remove shadows from the apple’s surface in the image, but the granularity of the divided superpixels ultimately limits the segmentation accuracy. Xie et al. [23] proposed sparse representation and dictionary learning methods for the classification of hyperspectral images, using pixel blocks to improve classification accuracy. However, dictionary learning is used to represent images with complex structures, and the learning process is time consuming and labor intensive.
Recently, deep convolutional neural networks (DCNNs) have come to dominate many fields of computer vision, such as image recognition [24] and object detection [25]. DCNNs also dominate image semantic segmentation, for example the fully convolutional network (FCN) [26] and DaSNet-v2, an improved deep neural network that performs detection and instance segmentation of fruits [27]. However, these methods require large labelled training datasets and substantial computing power before reliable results can be obtained. Therefore, new segmentation methods in color space, based on the apple’s characteristics, are needed so that apples can be identified in real time in the natural scene of an orchard.
The segmentation task becomes easier when the difference between the fruit and the background is large. However, the interference of natural light and other factors in the orchard environment reduces this difference and increases the difficulty of identification [28]. In this study, we propose a method to segment apple images based on gray-centered RGB color space. In this space, we present a novel color feature selection method that accounts for the influence of halation and shadow in apple images. By exploring both color features and local variations in apple images, we propose an efficient patch-based segmentation algorithm that generalizes the K-means clustering algorithm. Extensive experiments demonstrate the effectiveness and superiority of the proposed method.

2. Materials and Methods

2.1. Apple Image Acquisition

The apple variety tested in this study was Fuji, the most popular variety in China. Experiments were carried out at the Baishui apple experimental demonstration station of Northwest A&F University, located in Baishui County, Shaanxi Province (109°E, 35.12°N), in the transitional zone between the Guanzhong Plain and the Northern Shaanxi Plateau.
The image data used in this paper were collected from the apple orchards under both cloudy and sunny weather conditions, with all apples at maturity. Images were captured between 8 a.m. and 5 p.m. so as to include both weak-light and strong-light environments. A database of 300 apple images taken under natural light in the orchard was built, and 180 images were randomly selected to test the performance of the algorithm. Of these 180 images, 60 had shadows on the apples to varying degrees, 60 had halation to varying degrees (at the edge or in the interior of the apple), and 60 had both shadows and halation to varying degrees. Images were acquired with a Canon PowerShot G16 camera and an Intel RealSense D435 depth camera at a resolution of 4000 × 3000 pixels, shot from a distance of 30–60 cm, and saved in JPEG format as 24-bit RGB color images. The proposed algorithms were evaluated using MATLAB (R2018b, The MathWorks, Inc., Natick, MA, USA), and the computations were performed on an Intel Core i9-9880H CPU with 8 GB of memory.

2.2. Gray-Centered RGB Color Space

In this paper, we work in gray-centered RGB color space (i.e., the origin of the RGB color space is placed at the center of the color cube). For 24-bit color images, the translation is achieved by simply subtracting (127.5, 127.5, 127.5) from each pixel value in RGB space. By doing so, all pixels along the same direction from mid-gray have the same hue [29]. This translation shifts every pixel of the apple image by the same offset, half the side length of the color cube along each axis, forming a new coordinate system with mid-gray as the origin. The conversion of the space coordinate system is shown in Figure 1.
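As a concrete illustration, the translation and its inverse can be written as two small helper functions. The following is a minimal NumPy sketch (the paper’s experiments used MATLAB; the function names here are our own illustrative choices):

```python
import numpy as np

def to_gray_centered(img_rgb):
    """Shift a 24-bit RGB image so that mid-gray (127.5, 127.5, 127.5)
    becomes the origin of the color cube."""
    return img_rgb.astype(np.float64) - 127.5

def from_gray_centered(img_gc):
    """Shift a gray-centered image back to the ordinary 24-bit RGB cube."""
    return np.clip(img_gc + 127.5, 0.0, 255.0).astype(np.uint8)
```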

2.3. Color Features Extraction

2.3.1. Quaternion

Quaternion algebra, the first discovered hypercomplex algebra, is a mathematical tool to realize the reconstruction of a three-dimensional color image signal [30,31,32]. The quaternion-based method imitates human perception of the visual environment and processes RGB channel information in parallel [33].
A quaternion has four parts and can be written as $q = a + bi + cj + dk$, where $a, b, c, d \in \mathbb{R}$ and the imaginary units satisfy $i^2 = j^2 = k^2 = -1$. For an apple image, the RGB color triple is represented as a purely imaginary (pure) quaternion, written as $U = Ri + Gj + Bk$ [34,35,36].
We assume that two pure quaternions P and Q are multiplied together as shown in Equation (1).
$$P Q = (P_1 i + P_2 j + P_3 k)(Q_1 i + Q_2 j + Q_3 k) = -P_1 Q_1 - P_2 Q_2 - P_3 Q_3 + (P_2 Q_3 - P_3 Q_2)\,i + (P_3 Q_1 - P_1 Q_3)\,j + (P_1 Q_2 - P_2 Q_1)\,k = -P \cdot Q + P \times Q \tag{1}$$
If Q = v is a unit pure quaternion, P can be decomposed into parallel and perpendicular components about v as shown in Equation (2).
$$\left| P_{\parallel v} \right| = \left| P \cdot v \right| = |P| \cos\theta, \qquad \left| P_{\perp v} \right| = \left| P \times v \right| = |P| \sin\theta \tag{2}$$
where $P_{\parallel v}$ and $P_{\perp v}$ denote the components of $P$ parallel and perpendicular to $v$, respectively, and $\theta$ is the angle between $P$ and $v$.
Let $C$ denote the chosen color of interest (COI), normalized as $C = C/|C|$ so that it is a unit pure quaternion. Given a pure quaternion $U$ and the unit pure quaternion $C$, $U$ may be decomposed into components parallel and perpendicular to $C$, as shown in Equation (3):
$$U_{\parallel C} = \frac{1}{2}\left( U - C U C \right), \qquad U_{\perp C} = \frac{1}{2}\left( U + C U C \right) \tag{3}$$
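To make Equation (3) concrete, the sketch below implements the Hamilton product for quaternions stored as [w, x, y, z] arrays and uses it to split a gray-centered image into components parallel and perpendicular to a chosen axis. This is a minimal NumPy sketch under our own naming assumptions (qmul, decompose), not the authors’ implementation; the final assertion checks that the two components sum back to the original image:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternion arrays with components [w, x, y, z].
    pw, px, py, pz = np.moveaxis(p, -1, 0)
    qw, qx, qy, qz = np.moveaxis(q, -1, 0)
    return np.stack([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ], axis=-1)

def decompose(U_rgb, axis_rgb):
    """Split gray-centered RGB pixels into components parallel and
    perpendicular to the unit pure quaternion C, per Equation (3)."""
    c = np.asarray(axis_rgb, dtype=np.float64)
    c /= np.linalg.norm(c)                                     # C = C / |C|
    h, w, _ = U_rgb.shape
    U = np.concatenate([np.zeros((h, w, 1)), U_rgb], axis=-1)  # pure quaternion U
    C = np.broadcast_to(np.concatenate([[0.0], c]), U.shape)   # pure quaternion C
    CUC = qmul(qmul(C, U), C)
    U_par, U_perp = 0.5 * (U - CUC), 0.5 * (U + CUC)           # Equation (3)
    return U_par[..., 1:], U_perp[..., 1:]                     # drop zero scalar part

rng = np.random.default_rng(0)
img = rng.uniform(-127.5, 127.5, size=(4, 4, 3))   # toy gray-centered image
par, perp = decompose(img, (1.0, 1.0, 1.0))        # "grayscale" direction
assert np.allclose(par + perp, img)                # decomposition is exact
```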

2.3.2. Color Features Decomposition of the Apple Image

Using the quaternion algebra and the COI characteristic, vector decompositions of an image that are parallel and perpendicular to the “grayscale” direction (1, 1, 1) are obtained. The work is carried out in the gray-centered RGB color space in which the origin of the RGB color space is shifted to the center of the color cube. The image is first switched from the original RGB color space to gray-centered RGB color space. Thus, all the pixels along the same direction from the mid-gray have a similar hue. Subsequently, the algebraic operations of the quaternion are used to complete the vector decomposition of the image. Finally, all pixels are shifted back to the origin of RGB color space. The decomposition diagram is shown in Figure 2.
For apple images, the components resolved parallel to the “grayscale” direction (1, 1, 1) carry the intensity (shading) of the image, while those resolved perpendicular to the “grayscale” direction (1, 1, 1) carry the color information. Figure 3 shows the decomposition results of four images from the apple dataset based on quaternion algebra. These four images represent four kinds of apple images: (a) without any intense light irradiation and without any shadow, (b) with shadows but without intense light irradiation, (c) with intense light irradiation but without shadows, and (d) with both shadows and intense light irradiation.
In this paper, apple image features are obtained from the gray-centered color space. In this space, the vector decompositions of the apple image parallel and perpendicular to the gray direction are still color images. The components perpendicular to the “grayscale” direction (1, 1, 1) mainly carry the images’ color; in them, the difference between the apple and the background is more prominent, which weakens the influence of shadow and halation on the apple surface to a certain extent. Therefore, in this paper, we search for features in these perpendicular components.

2.3.3. Choice of COI and Features

In the gray-centered RGB space, the red vector (127.5, −127.5, −127.5) is selected as the COI in this study, since red is the prominent color of an apple. As mentioned above, in gray-centered RGB color space all pixels along this direction have the same hue. Although red is the dominant color, apples at the mature stage also carry other color components, so the COI alone does not represent all apple pixels. Therefore, in the perpendicular-component space, the angle between the COI and the vector from the gray center to each pixel indicates how close that pixel is to the apple color. In this paper, the cosine of this angle is used as the feature for the whole apple image, and we define pixels whose vectors lie within 30° of the COI as belonging to the apple. All pixels within this range constitute the apple feature constructed in this paper.
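A minimal sketch of this feature is given below. It projects each gray-centered pixel onto the plane perpendicular to the gray axis, takes the cosine of the angle to the COI, and thresholds at cos 30°; the default COI value is the perpendicular red vector discussed in Section 4.1. The function name and the exact handling of near-gray pixels are our own assumptions:

```python
import numpy as np

def coi_cosine_feature(img_rgb, coi=(170.0, -85.0, -85.0), max_angle_deg=30.0):
    """Cosine of the angle between each pixel's chromatic component and the
    COI, plus a mask of pixels within `max_angle_deg` of the COI."""
    gray_axis = np.ones(3) / np.sqrt(3.0)
    g = img_rgb.astype(np.float64) - 127.5                # gray-centered pixels
    g_perp = g - (g @ gray_axis)[..., None] * gray_axis   # chromatic component
    c = np.asarray(coi, dtype=np.float64)
    c /= np.linalg.norm(c)
    norms = np.maximum(np.linalg.norm(g_perp, axis=-1), 1e-12)
    cos_t = (g_perp @ c) / norms                          # feature per pixel
    mask = cos_t >= np.cos(np.deg2rad(max_angle_deg))     # within the 30° cone
    return cos_t, mask
```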

2.4. A Patch-Based Feature Segmentation Algorithm

In the 1960s, MacQueen proposed the classic K-means clustering algorithm [37]. It is an unsupervised method whose main idea is to divide objects into $K$ clusters based on their distance from the cluster centers. If $x_k$ are the data samples and $c_i$ are the cluster centers, the clustering energy is expressed as shown in Equation (4):
$$E = \sum_{i=1}^{K} \sum_{x_k \in c_i} \left\| x_k - c_i \right\|^2 \tag{4}$$
With each iteration, the energy in Equation (4) is minimized.
In image processing, the data samples are pixels, which can be divided into $K$ clusters so as to minimize Equation (4). However, this calculation ignores the spatial relationship between pixels in the image, leading to poor segmentation results on complex images. In particular, the local variations of apple images clearly cannot be described effectively by pixel-based methods. In this study, the proposed clustering segmentation model, based on pixel patches, is as shown in Equation (5):
$$\min_{I_i,\, C_{ij}} \left\{ \sum_{i=1}^{N} \sum_{x \in \Omega} \sum_{j} \left\| R_{m_j} f_j(x) - C_{ij} \right\|^2 I_i(x) \right\} \quad \text{s.t.} \quad \text{I.}\ \sum_{i=1}^{N} I_i(x) = 1; \quad \text{II.}\ I_i(x) = 0 \text{ or } 1,\ i = 1, 2, \ldots, N \tag{5}$$
where $f_j(x)$ is the $j$th color feature of the original apple image $f(x)$, $R_{m_j} f_j(x)$ is the vectorized $m_j \times m_j$ patch of $f_j$ centered at $x$, $C_{ij}$ is the clustering center, and $I_i(x)$ is the label function, whose value can be 0 or 1.
As can be seen from Equation (5), both color features and local contents in apple images are all considered in our model, which makes it robust to halation, shadows, and local variations in apple images.
The superscript $(k)$ represents the $k$th iteration. The iterative solving process can be divided into two steps.
In the first step, $I_i$ in Equation (5) is fixed and $C_{ij}$ is updated; the optimization problem to be solved is given by Equation (6):
$$\min_{C_{ij}} \left\{ \sum_{i=1}^{N} \sum_{x \in \Omega} \sum_{j} \left\| R_{m_j} f_j(x) - C_{ij} \right\|^2 I_i^{(k)}(x) \right\} \tag{6}$$
By differentiating, we obtain Equation (7):
$$C_{ij}^{(k+1)} = \frac{\sum_{x \in \Omega} R_{m_j} f_j(x)\, I_i^{(k)}(x)}{\sum_{x \in \Omega} I_i^{(k)}(x)}, \qquad i = 1{:}N,\ j = 1{:}3 \tag{7}$$
Next, in the second step, $C_{ij}$ in Equation (5) is fixed and $I_i$ is updated, resulting in the optimization problem given by Equation (8):
$$\min_{I_i} \left\{ \sum_{i=1}^{N} \sum_{x \in \Omega} \sum_{j} \left\| R_{m_j} f_j(x) - C_{ij}^{(k+1)} \right\|^2 I_i(x) \right\} \quad \text{s.t.} \quad \text{I.}\ \sum_{i=1}^{N} I_i(x) = 1; \quad \text{II.}\ I_i(x) = 0 \text{ or } 1,\ i = 1, 2, \ldots, N \tag{8}$$
This results in Equation (9).
$$I_i^{(k+1)}(x) = \begin{cases} 1, & i = i_{\min}(x) \\ 0, & i \neq i_{\min}(x) \end{cases}, \qquad i_{\min}(x) = \operatorname*{argmin}_i\, r_i^{(k+1)}(x), \qquad r_i^{(k+1)}(x) = \sum_{j} \left\| R_{m_j} f_j(x) - C_{ij}^{(k+1)} \right\|^2, \quad i = 1{:}N,\ j = 1{:}3 \tag{9}$$
The overall procedure of the segmentation model is as follows (Algorithm 1).
Algorithm 1: Patch-based K-means clustering segmentation
Input: original apple image $f$; number of segmentation regions $N$
Initialization: randomly initialize $I_i(x)$, $i = 1{:}N$; set $k = 0$
Iteration: calculate $C_{ij}^{(k+1)}$ according to Equation (7);
calculate $I_i^{(k+1)}(x)$ according to Equation (9);
$k = k + 1$
Until $I_i^{(k+1)}(x) = I_i^{(k)}(x)$
Output: $I_i$ $(i = 1, 2, \ldots, N)$
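The sketch below is one way to realize Algorithm 1 in NumPy, representing each pixel by the 3 × 3 patches of its feature channels. The helper names and the empty-cluster re-seeding are our own assumptions, not part of the paper:

```python
import numpy as np

def extract_patches(feat, m):
    """Vectorized m x m patch around every pixel of a 2-D feature map,
    with reflect padding at the borders; output shape is (h*w, m*m)."""
    pad = m // 2
    f = np.pad(feat, pad, mode="reflect")
    h, w = feat.shape
    shifts = [f[r:r + h, c:c + w] for r in range(m) for c in range(m)]
    return np.stack(shifts, axis=-1).reshape(h * w, m * m)

def patch_kmeans(features, n_clusters=2, m=3, max_iter=100, seed=0):
    """Alternate the updates of Equations (7) and (9) until the labels
    I_i stop changing; `features` is a list of 2-D feature maps f_j."""
    X = np.concatenate([extract_patches(f, m) for f in features], axis=1)
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=X.shape[0])     # random init of I_i
    for _ in range(max_iter):
        # Equation (7): each center is the mean patch of its cluster.
        centers = np.stack([
            X[labels == i].mean(axis=0) if np.any(labels == i)
            else X[rng.integers(len(X))]                   # re-seed empty cluster
            for i in range(n_clusters)])
        # Equation (9): assign every pixel to its nearest patch center.
        dists = np.stack([((X - centers[i]) ** 2).sum(axis=1)
                          for i in range(n_clusters)], axis=1)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):             # convergence test
            break
        labels = new_labels
    return labels.reshape(features[0].shape)
```

For the apple images, `features` would hold the cosine COI feature from Section 2.3.3 (and any additional channels), with `n_clusters=2` and `m=3` as used in Section 3.1.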

2.5. Criteria Methods

The segmentation algorithm is evaluated by comparing the segmented image with ground truth, pixel by pixel [38]. Using LabelMe, the apple target area in each test image is marked as ground truth [39].
For this binary classification, the actual and predicted results are compared. Figure 4 shows the confusion matrix composed of four cases, each corresponding to a different result: (1) true positives (TP): the number of pixels correctly segmented as belonging to the apple; (2) false negatives (FN): the number of pixels belonging to the apple that are incorrectly segmented as background; (3) false positives (FP): the number of pixels belonging to the background that are incorrectly classified as apple; (4) true negatives (TN): the number of pixels correctly segmented as belonging to the background.
The recall rate, precision rate, false positive rate (FPR), and false negative rate (FNR) can be obtained from the confusion matrix and used to evaluate the proposed algorithm’s segmentation performance [40]. In Equations (10) and (11), recall and precision measure the ability of the algorithm to identify the apple correctly. In Equation (12), FPR gives the percentage of pixels that belong to the background but are classified as the target. In Equation (13), FNR gives the percentage of pixels belonging to the target that are incorrectly classified as background.
$$\text{Recall} = \frac{TP}{TP + FN} \tag{10}$$
$$\text{Precision} = \frac{TP}{TP + FP} \tag{11}$$
$$FPR = \frac{FP}{FP + TN} \tag{12}$$
$$FNR = \frac{FN}{TP + FN} \tag{13}$$
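A direct reading of Equations (10)–(13) as code, for boolean masks of identical shape (a small sketch; the function name is our own):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Recall, precision, FPR, and FNR (Equations (10)-(13)) computed from
    a predicted apple mask and a LabelMe ground-truth mask."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)      # apple pixels correctly segmented
    fp = np.sum(pred & ~truth)     # background pixels labeled as apple
    fn = np.sum(~pred & truth)     # apple pixels labeled as background
    tn = np.sum(~pred & ~truth)    # background pixels correctly segmented
    return {"recall": tp / (tp + fn), "precision": tp / (tp + fp),
            "FPR": fp / (fp + tn), "FNR": fn / (tp + fn)}
```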

3. Experimental Results and Analysis

The apple test images used in this paper were collected during the ripening season. To test the performance of the proposed algorithm over a comprehensive range of conditions, a test set of 180 images with varying degrees of halation and shadows was selected.

3.1. Visualization of Segmentation Results

The test data shown in Figure 5a have light and heavy shadows on the surface of the apple. The test data shown in Figure 5b have small and large areas that are strongly illuminated. The test data shown in Figure 5c have both shadows and light to varying degrees.
The original images and the segmentation outcomes are shown in Figure 5. Segmentation uses the 3 × 3 pixel patch clustering in the gray-centered color space proposed in this paper, with the number of clusters set to 2: one for the apples and one for the background (branches and leaves, sky, ground). The blue line marks the edge of the apple, and the segmentation result is a binary (black-and-white) image. The results show that most of the background area has been removed.

3.2. Comparison and Quantitative Analysis of the Results of Segmentation

In this subsection, we present a quantitative analysis of the proposed method and compare it with existing segmentation methods. The images in the dataset used in this article have 12 million pixels (4000 × 3000) each. To reduce the processing time, each original image is downscaled using bilinear interpolation, which does not affect the clustering result; reducing the number of pixels to 750,000 (1000 × 750) saves processing time and improves the practicality of the algorithm. We compare our method with the fuzzy 2-partition entropy method [18], which, like ours, is based on color feature selection and segments images by thresholding the color histogram in Lab space. We also compare it with the superpixel-based fast fuzzy C-means clustering method (SFFCM) for image segmentation [41], which defines a multiscale morphological gradient reconstruction operation to obtain a superpixel image with accurate contours and uses the color histogram as the fuzzy clustering objective function to achieve binary segmentation of color images; this method performs well on images with normal lighting and shadows. In addition, we compare our method with the popular deep learning-based mask regions with convolutional neural network (Mask R-CNN) algorithm, a verified state-of-the-art method [42].
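The downscaling step mentioned above can be done with OpenCV’s bilinear interpolation, for example (a sketch; the file name is hypothetical):

```python
import cv2

img = cv2.imread("apple.jpg")          # 4000 x 3000 original image
# dsize is (width, height); INTER_LINEAR is bilinear interpolation.
small = cv2.resize(img, (1000, 750), interpolation=cv2.INTER_LINEAR)
```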
Four apple images with varying degrees of halation and shadows are shown in the first row of Figure 6. The second row shows the segmentation results of the fuzzy 2-partition entropy method: when strong light falls on the apple’s surface and the background (branches, leaves) is bright, or when the apple’s surface is shadowed and the background is dark, similar parts are easily merged into one class. The third row shows that the superpixel-based fast fuzzy C-means clustering method provides good results for images with normal lighting and shadows, but cannot completely segment images with strong highlights and deep shadow areas. The fourth row shows that the mask R-CNN algorithm also produces outstanding segmentation results, but it requires a large amount of training data, whereas our approach requires none; if branches and leaves are absent from its training data, they may be wrongly segmented as apple regions when they do appear. The fifth row in Figure 6 shows the segmentation results of the proposed algorithm. As the proposed method is designed around the apple’s characteristics in the gray-centered color space, it is more robust to different degrees of shadow and halation, and a complete apple target can be obtained.
To quantitatively compare the algorithm proposed in this paper with those in the literature, the fruit area in each test image is manually marked with LabelMe and recorded as ground truth. The segmented image and the labeled image are compared pixel by pixel, and each pixel is counted as a true positive (apple correctly detected), false positive (background wrongly labeled as apple), false negative (apple wrongly labeled as background), or true negative (background correctly detected). From the resulting confusion matrix, the recall, precision, FNR, and FPR are calculated for each algorithm, and the results are given in Table 1.
Figure 7 compares the average computation time of the four segmentation methods. The proposed algorithm is only slightly slower than the superpixel-based fast fuzzy C-means method while achieving higher accuracy. The average time required to process an image was 1.37 s, which indicates that the proposed algorithm can be implemented in real time.
The average recall, precision, FPR, and FNR of the proposed algorithm are 98.69%, 99.26%, 0.06%, and 1.44%, respectively. The average recall, precision, FPR, and FNR of the threshold segmentation algorithm based on fuzzy 2-partition entropy are 87.75%, 84.87%, 9.36%, and 12.44%, respectively. These parameters for the superpixel-based fast fuzzy C-means clustering are 94.34%, 96.87%, 1.37%, and 2.97%, respectively. Finally, these parameters for the mask R-CNN instance segmentation algorithm are 97.02%, 98.16%, 0.47%, and 2.54%, respectively. Therefore, the average value of recall and precision of the proposed algorithm have improved by 10.94% and 14.39%, respectively, and the average value of FPR and FNR have decreased by 9.30% and 11.00%, respectively, when compared to the parameters obtained using the threshold segmentation algorithm based on fuzzy 2-partition entropy. The average value of recall and precision of the proposed algorithm have improved by 4.35% and 2.39%, respectively, and the average value of FPR and FNR have decreased by 1.31% and 1.53%, respectively, as compared to the algorithm based on superpixel-based fast fuzzy C-means clustering. The average value of recall and precision of the proposed algorithm have improved by 1.67% and 1.10%, respectively, and the average value of FPR and FNR have decreased by 0.41% and 1.10%, respectively, as compared to the mask R-CNN instance segmentation algorithm.

3.3. Double and Multi-Fruit Split Results

To develop an eye-in-hand visual servo picking robot, a shooting distance of 30–40 cm was used to capture images containing a single apple target. However, apple growth is complicated in unstructured orchards, and shots from a distance of 40–60 cm are unavoidable, resulting in images with multiple fruit targets. Figure 8 shows the recognition results of the proposed algorithm for multiple targets; Figure 8a–c show the segmentation results for images with multiple fruits.

4. Discussion

4.1. Segmentation Result and Analysis with Different COI

Based on the selection of COI, different targets can be segmented as shown in Figure 9. The feature of apples can be extracted if COI is pointed towards red from gray center (170.03, −84.99, −84.99). The feature of soil in the image can be extracted if COI is pointed towards yellow from gray center (85.01, 85.01, −170.01). The feature of leaves in the image can be extracted if COI is pointed towards green from gray center (−170.01, 85.01, 85.01), and the feature of sky in the image can be extracted if COI is pointed towards blue from gray center (−84.99, −84.99, 170.03).
However, apples on a tree differ in position and growth, so ripening apples show different degrees of red during the coloring process. Using only the COI that points from the gray center to red therefore does not yield high recognition accuracy. As mentioned in Section 2.3.3, a large number of experiments were performed on apple images with different degrees of red, and it was found that pixels whose vectors lie within 30° of the selected COI belong to the apple (Figure 9). The OBC region in Figure 9 covers apples with different degrees of red color.
The feature maps of leaves, sky, and soil are shown in Figure 10; based on these feature maps, the corresponding targets can be segmented as well.
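As a usage sketch, the coi_cosine_feature helper from Section 2.3.3 (our own illustrative function, not the authors’ code) can be pointed at each of the COI vectors listed above to obtain per-target masks:

```python
import numpy as np

rng = np.random.default_rng(1)
img_rgb = rng.integers(0, 256, size=(8, 8, 3))   # toy stand-in for an orchard image

# Gray-centered COI vectors from Section 4.1 (perpendicular components).
COIS = {
    "apple": (170.03, -84.99, -84.99),
    "soil":  (85.01, 85.01, -170.01),
    "leaf":  (-170.01, 85.01, 85.01),
    "sky":   (-84.99, -84.99, 170.03),
}
masks = {name: coi_cosine_feature(img_rgb, coi=coi)[1] for name, coi in COIS.items()}
```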

4.2. Further Research Perspectives

In recent years, deep learning is being widely adopted for many computer vision tasks, such as image classification [43], segmentation [44], and object detection [45].
Deep learning can adapt to variance within the working scene, making it a promising approach for many vision tasks [46,47,48]. Deep learning models can be easily deployed and applied after training with a large amount of data on a highly configured computer, achieving high accuracy [49,50,51]. However, the relationship between datasets, network architectures, and generalization capability is still being explored by a large number of researchers [52,53,54]. For a given data object, such as apple images, the amount and nature of the data required and the network architecture that will generalize acceptably are still unclear, and their interpretability needs further research. Deep learning techniques are tied to the labelled input data and may not work for crop and apple types that were not represented in the training dataset [55].
Another mainstream approach is the model-based image segmentation method used in this paper. By studying the color characteristics and local variations of apple images and constructing features for the segmented targets, a model-based algorithm is proposed for segmenting images of ripe apples. This approach makes the segmentation of apples easy to interpret and comprehend. In addition, the model-based approach is not computationally expensive, making it possible to iterate quickly and to try different techniques in less time. However, traditional image algorithms can only solve certain scenario-specific, manually definable, designable, and understandable image tasks [56]. Because the method relies on a color assumption and is not trained, it cannot capture target classes whose color resembles the background (e.g., green apple varieties); this is the drawback of the color assumption in the model-based method.
In this paper, a new class of a priori information is given using the model-based approach, thereby improving the results. Compared to the deep learning methods, the model-based approach used in this work can be easily interpreted and is flexible with existing datasets. However, deep learning and model-based approaches have their own advantages and disadvantages. It would be interesting to explore a combination of both to make a more reasonable interpretation and to further generalize the deep learning methods. Using the a priori conditions given in this paper, combining model-based and deep learning could be a potential topic for future research.

5. Conclusions

In this study, a patch-based segmentation algorithm in gray-centered color space was proposed, realizing stable apple segmentation in a natural apple orchard environment (shadow, light, and background). Specifically, quaternion algebra was used to obtain the parallel/perpendicular decomposition of the apple image. Then, a vector (COI) pointing from the gray center to red was selected in the resulting perpendicular image, and the cosine of the angle between each pixel’s vector from the gray center and the COI, which represents the color information of the image, was used as the decision condition for extracting the apple’s features. Finally, both the color features and the local contents of the apple images were used to segment the target area.
Compared with traditional clustering algorithms and deep learning instance segmentation algorithms, the proposed algorithm has several characteristics: (i) the chromatic content of a pixel lies in the plane perpendicular to the gray direction of the gray-centered RGB color space, and pixels lying along the same direction from the gray center have equal chromaticity; (ii) the perpendicular decomposition better reflects the color information of the original image; and (iii) the proposed patch-based segmentation algorithm maintains the geometrical shape of the segmented target well and significantly reduces segmentation error.
The experimental results showed that the recall and precision of the proposed segmentation method were 10.94% and 14.39%, respectively, higher than that of the modified threshold segmentation method, while FPR and FNR were 9.30% and 11.00% lower. Compared with the recall and precision of the modified clustering algorithm, their value increased by 4.35% and 2.39%, respectively, and FPR and FNR decreased by 1.31% and 1.53%, respectively. Finally, compared with the deep learning segmentation algorithm, the recall and precision of the proposed segmentation method were 1.67% and 1.10% higher, and FPR and FNR were 0.41% and 1.10% lower.
As shown in the experiments above, our method can deal with the problems of illumination and shadow. Future research will focus on applying the proposed method to specific targets in more complex environments. Our method may also be used to segment other fruit targets, such as bananas, grapes, and pears. Future work includes the segmentation of unmanned aerial vehicle (UAV)-based remote sensing images using the methods of this research; however, color feature selection and the characteristics of different images need to be studied more delicately.

Author Contributions

P.F. developed the experimental plan, carried out the data analysis, and wrote the text. G.L. contributed to the development of the algorithm, programming, and writing. P.G. and X.L. helped in obtaining the images of apples. B.Y. contributed to the original draft preparation. Z.L. reviewed and edited the draft. F.Y. provided significant contributions to this development as the lead. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the science and technology projects in Shaanxi Province Development and Application of Key Equipment for Orchard Mechanization and Intelligence (Grant No. 2020zdzx03-04-01) and the National Natural Science Foundation of China (No.61971005).

Institutional Review Board Statement

The study in the paper did not involve humans or animals.

Informed Consent Statement

The study in the paper did not involve humans or animals.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We thank the anonymous reviewers for their critical comments and suggestions, which improved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, H.; Chen, L.; Ma, Z.; Chen, M.; Zhong, Y.; Deng, F.; Li, M. Computer vision-based high-quality tea automatic plucking robot using Delta parallel manipulator. Comput. Electron. Agric. 2021, 181, 105946. [Google Scholar] [CrossRef]
  2. Li, Y.; Li, M.; Jiangtao, Q.; Zhou, D.; Zou, Z.; Liu, K. Detection of typical obstacles in orchards based on deep convolutional neural network. Comput. Electron. Agric. 2021, 181, 105932. [Google Scholar] [CrossRef]
  3. Zhang, Z.; Kayacan, E.; Thompson, B.; Chowdhary, G. High precision control and deep learning-based corn stand counting algorithms for agricultural robot. Auton. Robot. 2020, 44, 1289–1302. [Google Scholar] [CrossRef]
  4. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  5. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. J. Field Robot. 2020, 37, 202–224. [Google Scholar] [CrossRef] [Green Version]
  6. Sun, S.; Jiang, M.; He, D.; Long, Y.; Song, H. Recognition of green apples in an orchard environment by combining the GrabCut model and Ncut algorithm. Biosyst. Eng. 2019, 187, 201–213. [Google Scholar] [CrossRef]
  7. Wu, G.; Li, B.; Zhu, Q.; Huang, M.; Guo, Y. Using color and 3D geometry features to segment fruit point cloud and improve fruit recognition accuracy. Comput. Electron. Agric. 2020, 174, 105475. [Google Scholar] [CrossRef]
  8. Vidoni, R.; Bietresato, M.; Gasparetto, A.; Mazzetto, F. Evaluation and stability comparison of different vehicle configurations for robotic agricultural operations on side-slopes. Biosyst. Eng. 2015, 129, 197–211. [Google Scholar] [CrossRef]
  9. De-An, Z.; Jidong, L.; Ji, W.; Ying, Z.; Yu, C. Design and control of an apple harvesting robot. Biosyst. Eng. 2011, 110, 112–122. [Google Scholar] [CrossRef]
  10. Bulanon, D.; Kataoka, T. Fruit detection system and an end effector for robotic harvesting of Fuji apples. Agric. Eng. Int. CIGR J. 2010, 12, 203–210. [Google Scholar]
  11. Mao, S.; Li, Y.; Ma, Y.; Zhang, B.; Zhou, J.; Wang, K. Automatic cucumber recognition algorithm for harvesting robots in the natural environment using deep learning and multi-feature fusion. Comput. Electron. Agric. 2020, 170, 105254. [Google Scholar] [CrossRef]
  12. Ji, W.; Zhao, D.; Cheng, F.; Xu, B.; Zhang, Y.; Wang, J. Automatic recognition vision system guided for apple harvesting robot. Comput. Electr. Eng. 2012, 38, 1186–1195. [Google Scholar] [CrossRef]
  13. Lü, J.; Zhao, D.; Ji, W. Fast tracing recognition method of target fruit for apple harvesting robot. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2014, 45, 65–72. [Google Scholar] [CrossRef]
  14. Liu, X.; Zhao, D.; Jia, W.; Ruan, C.; Tang, S.; Shen, T. A method of segmenting apples at night based on color and position information. Comput. Electron. Agric. 2016, 122, 118–123. [Google Scholar] [CrossRef]
  15. Lv, J.; Wang, F.; Xu, L.; Ma, Z.; Yang, B. A segmentation method of bagged green apple image. Sci. Hortic. 2019, 246, 411–417. [Google Scholar] [CrossRef]
  16. Xiaoyang, L.; Dean, Z.; Weikuan, J.; Chengzhi, R.; Wei, J.I. Fruits Segmentation Method Based on Superpixel Features for Apple Harvesting Robot. Trans. Chin. Soc. Agric. Mach. 2019, 50, 22–30. [Google Scholar] [CrossRef]
  17. Tu, J.; Liu, C.; Li, Y.; Zhou, J.; Yuan, J. Apple recognition method based on illumination invariant graph. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2010, 26, 26–31. [Google Scholar] [CrossRef]
  18. Huang, L.; He, D. Apple Recognition in Natural Tree Canopy based on Fuzzy 2-partition Entropy. Int. J. Digit. Content Technol. Appl. 2013, 7, 107–115. [Google Scholar] [CrossRef]
  19. Song, H.; Qu, W.; Wang, D.; Yu, X.; He, D. Shadow removal method of apples based on illumination invariant image. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2014, 30, 168–176. [Google Scholar] [CrossRef]
  20. Song, H.; Zhang, W.; Zhang, X.; Zou, R. Shadow removal method of apples based on fuzzy set theory. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2014, 30, 135–141. [Google Scholar] [CrossRef]
  21. Jia, W.; Mou, S.; Wang, J.; Liu, X.; Zheng, Y.; Lian, J.; Zhao, D. Fruit recognition based on pulse coupled neural network and genetic Elman algorithm application in apple harvesting robot. Int. J. Adv. Robot. Syst. 2020, 17, 1729881419897473. [Google Scholar] [CrossRef]
  22. Xu, W.; Chen, H.; Su, Q.; Ji, C.; Xu, W.; Memon, D.-M.S.; Zhou, J. Shadow detection and removal in apple image segmentation under natural light conditions using an ultrametric contour map. Biosyst. Eng. 2019, 184, 142–154. [Google Scholar] [CrossRef]
  23. Xie, M.; Ji, Z.; Zhang, G.; Wang, T.; Sun, Q. Mutually Exclusive-KSVD: Learning a Discriminative Dictionary for Hyperspectral Image Classification. Neurocomputing 2018, 315, 177–189. [Google Scholar] [CrossRef]
  24. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Stefanović, D. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification. Comput. Intell. Neurosci. 2016, 2016, 1–11. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
  26. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. Arxiv 2014, 79, 474–478. [Google Scholar] [CrossRef]
  27. Kang, H.; Zhou, H.; Wang, X.; Chen, C. Real-Time Fruit Recognition and Grasping Estimation for Robotic Apple Harvesting. Sensors 2020, 20, 5670. [Google Scholar] [CrossRef]
  28. Jiao, Y.; Luo, R.; Li, Q.; Deng, X.; Yin, X.; Ruan, C.; Jia, W. Detection and Localization of Overlapped Fruits Application in an Apple Harvesting Robot. Electronics 2020, 9, 1023. [Google Scholar] [CrossRef]
  29. Li, Y.; Feng, X.; Wang, W. Color-Dependent Diffusion Equations Based on Quaternion Algebra. Chin. J. Electron. 2012, 21, 277–282. [Google Scholar] [CrossRef]
  30. Zhang, D.; Guo, Z.; Wang, G.; Jiang, T. Algebraic techniques for least squares problems in commutative quaternionic theory. Math. Methods Appl. Sci. 2020, 43, 3513–3523. [Google Scholar] [CrossRef]
  31. Marques-Bonham, S.; Chanyal, B.; Matzner, R. Yang–Mills-like field theories built on division quaternion and octonion algebras. Eur. Phys. J. Plus 2020, 135, 1–34. [Google Scholar] [CrossRef]
  32. Zhang, X.; Xia, J.; Tan, X.; Zhou, X.; Wang, T. PolSAR Image Classification via Learned Superpixels and QCNN Integrating Color Features. Remote Sens. 2019, 11, 1831. [Google Scholar] [CrossRef] [Green Version]
  33. Jia, Z.; Ng, M.; Guangjing, S. Robust quaternion matrix completion with applications to image inpainting. Numer. Linear Algebra Appl. 2019, 26, 26. [Google Scholar] [CrossRef]
  34. Evans, C.; Sangwine, S.; Ell, T. Hypercomplex color-sensitive smoothing filters. In Proceedings of the 2000 International Conference on Image Processing (Cat. No.00CH37101), Vancouver, BC, Canada, 10–13 September 2000; Volume 1, pp. 541–544. [Google Scholar] [CrossRef]
  35. Ell, T.; Sangwine, S. Hypercomplex Fourier Transforms of Color Images. IEEE Trans. Image Process. 2007, 16, 22–35. [Google Scholar] [CrossRef] [PubMed]
  36. Shi, L.; Funt, B. Quaternion color texture segmentation. Comput. Vis. Image Underst. 2007, 107, 88–96. [Google Scholar] [CrossRef]
  37. MacQueen, J. Some Methods for Classification and Analysis of MultiVariate Observations, 1st ed.; Le Cam, L., Neyman, J., Eds.; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 281–297. [Google Scholar]
  38. Liu, F.-Q.; Wang, Z.-Y. Automatic “Ground Truth” Annotation and Industrial Workpiece Dataset Generation for Deep Learning. Int. J. Autom. Comput. 2020, 17, 1–12. [Google Scholar] [CrossRef]
  39. Li, X.; Chang, D.; Ma, Z.; Tan, Z.-H.; Xue, J.-H.; Cao, J.; Guo, J. Deep InterBoost networks for small-sample image classification. Neurocomputing 2020. [Google Scholar] [CrossRef]
  40. Vahidi, H.; Klinkenberg, B.; Johnson, B.A.; Moskal, L.M.; Yan, W. Mapping the Individual Trees in Urban Orchards by Incorporating Volunteered Geographic Information and Very High Resolution Optical Remotely Sensed Data: A Template Matching-Based Approach. Remote Sens. 2018, 10, 1134. [Google Scholar] [CrossRef] [Green Version]
  41. Lei, T.; Jia, X.; Zhang, Y.; Liu, S.; Meng, H.; Nandi, A.K. Superpixel-Based Fast Fuzzy C-Means Clustering for Color Image Segmentation. IEEE Trans. Fuzzy Syst. 2019, 27, 1753–1766. [Google Scholar] [CrossRef] [Green Version]
  42. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  43. Hammam, A.A.; Soliman, M.M.; Hassanien, A.E. Real-time multiple spatiotemporal action localization and prediction approach using deep learning. Neural Netw. 2020, 128, 331–344. [Google Scholar] [CrossRef]
  44. Jia, W.; Tian, Y.; Luo, R.; Zhang, Z.; Lian, J.; Zheng, Y. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot. Comput. Electron. Agric. 2020, 172, 105380. [Google Scholar] [CrossRef]
  45. Yuan, Y.; Chu, J.; Leng, L.; Miao, J.; Kim, B.-G. A scale-adaptive object-tracking algorithm with occlusion detection. EURASIP J. Image Video Process. 2020, 2020, 1–15. [Google Scholar] [CrossRef] [Green Version]
  46. Yamaguchi, T.; Tanaka, Y.; Imachi, Y.; Yamashita, M.; Katsura, K. Feasibility of Combining Deep Learning and RGB Images Obtained by Unmanned Aerial Vehicle for Leaf Area Index Estimation in Rice. Remote Sens. 2021, 13, 84. [Google Scholar] [CrossRef]
  47. Krahe, C.; Bräunche, A.; Jacob, A.; Stricker, N.; Lanza, G. Deep Learning for Automated Product Design. Procedia CIRP 2020, 91, 3–8. [Google Scholar] [CrossRef]
  48. Wu, M.; Yin, X.; Li, Q.; Zhang, J.; Feng, X.; Cao, Q.; Shen, H. Learning deep networks with crowdsourcing for relevance evaluation. EURASIP J. Wirel. Commun. Netw. 2020, 2020, 1–11. [Google Scholar] [CrossRef]
  49. Rizwan-i-Haque, I.; Neubert, J. Deep learning approaches to biomedical image segmentation. Inform. Med. Unlocked 2020, 18, 100297. [Google Scholar] [CrossRef]
  50. Amanullah, M.A.; Ariyaluran Habeeb, R.A.; Nasaruddin, F.; Gani, A.; Ahmed, E.; Nainar, A.; Akim, N.; Imran, M. Deep learning and big data technologies for IoT security. Comput. Commun. 2020, 151, 495–517. [Google Scholar] [CrossRef]
  51. Karanam, S.; Srinivas, Y.; Krishna, M. Study on image processing using deep learning techniques. Mater. Today Proc. 2020, 44, 2093–2109. [Google Scholar] [CrossRef]
  52. Arora, S.; Du, S.; Hu, W.; Li, Z.; Wang, R. Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks 2019. Available online: https://ui.adsabs.harvard.edu/abs/2019arXiv190108584A (accessed on 21 July 2020).
  53. Du, S.; Zhai, X.; Poczos, B.; Singh, A. Gradient Descent Provably Optimizes Over-parameterized Neural Networks 2018. Available online: https://ui.adsabs.harvard.edu/abs/2018arXiv181002054D (accessed on 5 July 2020).
  54. Neyshabur, B.; Li, Z.; Bhojanapalli, S.; LeCun, Y.; Srebro, N. Towards Understanding the Role of Over-Parametrization in Generalization of Neural Networks 2018. Available online: https://ui.adsabs.harvard.edu/abs/2018arXiv180512076N (accessed on 13 September 2020).
  55. Riehle, D.; Reiser, D.; Griepentrog, H.W. Robust index-based semantic plant/background segmentation for RGB- images. Comput. Electron. Agric. 2020, 169, 105201. [Google Scholar] [CrossRef]
  56. Karabağ, C.; Verhoeven, J.; Miller, N.; Reyes-Aldasoro, C. Texture Segmentation: An Objective Comparison between Traditional and Deep-Learning Methodologies; University of London: London, UK, 2019. [Google Scholar] [CrossRef]
Figure 1. Conversion of RGB color space to gray-centered RGB color space: the origin at O in the RGB color space coordinate system is moved to O′ to form a new RGB color space coordinate system.
Figure 2. Vector decomposition of the image based on quaternion algebra. (a) Parallel components to (1, 1, 1). (b) Perpendicular components to (1, 1, 1).
Figure 3. Results of quaternion decomposition for four images of the apple dataset: (a) without any intense light irradiation and without any shadow, (b) with shadows but without any intense light irradiation, (c) with intense light irradiation but without any shadow, (d) with both shadows and intense light irradiation.
Figure 4. Confusion matrix composed of four cases (TP, FP, FN, and TN).
Figure 5. Visualizing the segmentation results: (a) light and heavy shadows, (b) strongly illuminated, (c) both shadows and light to varying degrees.
Figure 6. Experimental comparison of different methods: fuzzy 2-partition entropy, SFFCM, mask R-CNN, and the proposed algorithm.
Figure 7. Computational efficiency of different methods; comparison between fuzzy 2-partition entropy, SFFCM, mask R-CNN, and the proposed algorithm.
Figure 8. Segmentation results for apple images: (a) double-fruit, (b) triple-fruit, and (c) multiple fruit.
Figure 9. The choice of different COI (red apple, soil, sky, and leaf) under the angle of view perpendicular to the gray center in the gray-centered color space.
Figure 10. Feature maps of leaves, sky, and soil under specific COI.
Table 1. Average Result Based on Database.

Method | Method Source | Recall | Precision | FPR | FNR
Fuzzy 2-partition entropy | Fuzzy 2-partition entropy | 87.75% | 84.87% | 9.36% | 12.44%
Fuzzy C-means | Superpixel-based fast fuzzy C-means clustering | 94.34% | 96.87% | 1.37% | 2.97%
Deep learning | Mask R-CNN | 97.02% | 98.16% | 0.47% | 2.54%
Proposed algorithm | | 98.69% | 99.26% | 0.06% | 1.44%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Fan, P.; Lang, G.; Yan, B.; Lei, X.; Guo, P.; Liu, Z.; Yang, F. A Method of Segmenting Apples Based on Gray-Centered RGB Color Space. Remote Sens. 2021, 13, 1211. https://doi.org/10.3390/rs13061211
