Systematic Review

Systematic Review of Aggregation Functions Applied to Image Edge Detection

Miqueias Amorim, Gracaliz Dimuro, Eduardo Borges, Bruno L. Dalmazo, Cedric Marco-Detchart, Giancarlo Lucca and Humberto Bustince
Centro de Ciências Computacionais (C3), Universidade Federal do Rio Grande, Av. Itália km 08, Campus Carreiros, Rio Grande 96201-900, Brazil
Valencian Research Institute for Artificial Intelligence (VRAIN), Universitat Politècnica de València (UPV), Camino de Vera s/n, 46022 Valencia, Spain
Department Estadistica, Informatica y Matematicas, Universidad Publica de Navarra, 31006 Pamplona, Spain
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2023, 12(4), 330;
Submission received: 10 January 2023 / Revised: 6 March 2023 / Accepted: 23 March 2023 / Published: 28 March 2023


Edge detection is a crucial process in numerous stages of computer vision. This field of study has recently gained momentum due to its importance in various applications. Uncertainty, among other characteristics of images, makes it difficult to accurately determine the edges of objects. Furthermore, even the definition of an edge is vague, as an edge can be considered as the maximum boundary between two regions with different properties. Given the advancement of research in image discontinuity detection, especially using aggregation and pre-aggregation functions, and the lack of systematic literature reviews on this topic, this paper aims to gather and synthesize the current state of the art. To achieve this, we present a systematic review of the literature, which selected 24 papers filtered from 428 articles found in computer science databases over the last seven years. It was possible to synthesize important related information, which was grouped into three approaches: (i) based on both multiple descriptor extraction and data aggregation, (ii) based on both the aggregation of distance functions and fuzzy C-means, and (iii) based on fuzzy theory, namely type-2 fuzzy and neutrosophic sets. In conclusion, this review identifies promising gaps that can be explored in future work.

1. Introduction

One of the most common approaches for detecting discontinuities in images is edge detection [1]. An edge is defined as the maximum boundary between two regions with different properties; that is, the boundary between two objects or object faces in an image [2]. This research field represents an important task in several steps of computer vision and image processing, such as [3,4,5,6]: object detection [7], pattern recognition [8,9,10,11,12], photo-sketching [13], image retrieval [14,15,16,17], face recognition [18,19,20], corner detection [21,22,23], road detection [24], and target tracking [25,26,27,28].
In recent years, the study of edge detection has gained momentum due to its importance in various applications, such as autonomous cars [29,30]; augmented reality [31,32]; image colorization [33,34,35]; and medical image processing [36,37,38]. Either because of the discretization intrinsic to the digital capture process or because of some subsequent quantization process, the edge or faces of the objects show a small smoothing around the actual boundary of the regions.
Uncertainty, among other characteristics of images, makes it difficult to determine the edges of objects accurately. To accomplish this goal, different methods have been proposed throughout history, from methods based on partial derivatives, such as Sobel [39], LoG [40], and Canny [41] in the 1970s and 1980s, to methods based on convolutional neural networks in recent years [42].
Considering that the human visual system has been a source of inspiration for computer vision researchers, recent studies considering the usage of aggregation and pre-aggregation [43] functions have been applied to simulate the behavior of the information flow in the visual cortex in the task of edge detection and image segmentation, as well as to assist in the decision making of other algorithms with the same objective (see, e.g., [44,45,46,47,48,49,50,51,52]).
Aggregation functions are a type of function with certain special properties, which aim to combine or fuse several inputs into a single value that is able to represent the set of inputs. More precisely, such a function must respect two main properties, namely (i) boundary conditions and (ii) monotonicity. The relaxation of the monotonicity condition has been the focus of recent investigations. The notion of weak monotonicity was introduced by Beliakov et al. [53] and extended to directional monotonicity by Bustince et al. [54]. After that, the concept of pre-aggregation functions [55] was introduced; these are functions that respect the same boundary conditions as an aggregation function but are only required to be directionally increasing.
Given the advancement of research in image discontinuity detection, especially using clustering, aggregation, and pre-aggregation functions, and since there are no systematic literature reviews on this topic, this paper aims to gather and synthesize the current state of the art of this research field. In this sense, the question that is intended to be answered is:
What edge detection methods are based on aggregation and pre-aggregation functions?
The main objectives of the study are to:
Fill this gap in the literature of systematic reviews on the topic;
Summarize the existing technology regarding methods that make use of those functions in digital image processing, more specifically regarding edge detection;
Identify the gaps in the detection approach using aggregation or pre-aggregation functions, proposing themes for future work.
This paper is organized, in addition to this introduction, as follows. In Section 2 (Preliminaries), the most widespread edge detection methods to date are presented chronologically. Section 3 (Materials and Methods) presents the search terms, where inclusion, delimitation, and exclusion criteria are discussed, as well as the number of papers found and admitted for review. Then, in Section 4 (Results and Discussion), we revisit the aforementioned research question, summarizing the methods reviewed and performing a qualitative assessment that compares the main techniques and approaches. Finally, Section 5 presents the conclusions.

2. Preliminaries

In order to provide a better understanding of the techniques presented in this review, this section gives a rigorous presentation of aggregation and pre-aggregation functions (Section 2.1) and a summary of the classical and most widespread edge detection techniques of recent years. This rationale is based on four major review articles published in the last three years [42,49,56,57]. The following is a summary grouped by approach: gradient-based methods (Section 2.2), region-based segmentation methods (Section 2.3), methods based on machine learning (Section 2.4), and fuzzy-logic-based methods (Section 2.5).

2.1. Aggregation and Pre-Aggregation Functions

An aggregation function [58] is an $n$-ary function $f : [0,1]^n \to [0,1]$ with the following properties: (A1) boundary conditions: $f(0,0,\ldots,0) = 0$ and $f(1,1,\ldots,1) = 1$; (A2) increasingness: $X \leq Y$ implies that $f(X) \leq f(Y)$, for all $X, Y \in [0,1]^n$.
An $n$-ary function $P : [0,1]^n \to [0,1]$ is said to be a pre-aggregation function [55] if it satisfies the following conditions: (PA1) boundary conditions as defined in (A1); (PA2) directional increasingness [55]: there is a real vector $\vec{r} \in \mathbb{R}^n$ ($\vec{r} \neq \vec{0}$) for which $P$ is $\vec{r}$-increasing; that is, for all $\mathbf{x} \in [0,1]^n$ and $c > 0$ such that $\mathbf{x} + c\vec{r} \in [0,1]^n$, it holds that $P(\mathbf{x} + c\vec{r}) \geq P(\mathbf{x})$.
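These two definitions lend themselves to a direct numerical sanity check. The sketch below (illustrative function and parameter names, a coarse grid rather than a proof) verifies the boundary conditions and $\vec{r}$-increasingness for the arithmetic mean, which satisfies both:

```python
import itertools
import numpy as np

def check_boundary(f, n):
    """(A1): f(0,...,0) == 0 and f(1,...,1) == 1."""
    return np.isclose(f(np.zeros(n)), 0.0) and np.isclose(f(np.ones(n)), 1.0)

def check_r_increasing(f, n, r, steps=6, c=0.1):
    """(PA2): f(x + c*r) >= f(x) whenever both points lie in [0,1]^n.
    Brute-force check on a coarse grid -- a sanity test, not a proof."""
    r = np.asarray(r, dtype=float)
    for x in itertools.product(np.linspace(0.0, 1.0, steps), repeat=n):
        x = np.asarray(x)
        y = x + c * r
        if np.all((y >= 0.0) & (y <= 1.0)) and f(y) < f(x) - 1e-12:
            return False
    return True

mean = lambda v: float(np.mean(v))
print(check_boundary(mean, 3), check_r_increasing(mean, 3, (1.0, 1.0, 1.0)))
```

An aggregation function passes the directional check for every non-negative direction $\vec{r}$; a pre-aggregation function is only guaranteed to pass it for some $\vec{r}$.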

2.2. Gradient-Based Methods

A digital image is a discrete representation of the variation in light in the real world. Thus, each numerical value carried by the pixel represents the intensity of light or color at that point. The contours of objects in the image can be interpreted as a transition zone between these intensities. Very intense transitions have a much higher chance of being an edge than too smooth transitions. It should be noted that the first edge detectors that appeared in the literature took this point into account based on the gradient of the image [59].
There are two approaches toward gradient-based detection: (i) based on first-order derivatives and (ii) based on second-order derivatives. The most well-known gradient-based detection methods are: (i) first-order fixed operations; (ii) first-order oriented operations that use the maximum energy of the orientation; and (iii) two-direction operations. Table 1 summarizes the first-order methods, which consist of three fundamental steps:
Computation of the magnitude of the gradient $|\nabla I|$ and its orientation $\eta$, using a $3 \times 3$ kernel (Table 1) with first-order vertical and horizontal derivative filters such as a steerable Gaussian, oriented anisotropic Gaussian kernels, or a combination of two half-Gaussian kernels;
Non-maximum suppression operation for thinner edges: selection of pixels with a local maximum gradient magnitude along the direction η of the gradient, which is perpendicular to the orientation edge;
Determination of the thresholds of the fine contours to obtain the contour map.
These steps are described visually in Figure 1.
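The three steps above can be sketched in code. The Sobel kernels and the quantization of $\eta$ to four directions are standard illustrative choices; a production detector would add smoothing and hysteresis thresholding:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_gradient(img):
    """Step 1: gradient magnitude and orientation from 3x3 Sobel kernels."""
    h, w = img.shape
    gx, gy = np.zeros((h, w)), np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(SOBEL_X * patch)
            gy[i, j] = np.sum(SOBEL_Y * patch)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def non_max_suppression(mag, eta):
    """Step 2: keep only local maxima of the magnitude along the gradient
    direction eta, quantized to 0, 45, 90, and 135 degrees."""
    out = np.zeros_like(mag)
    ang = np.rad2deg(eta) % 180
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:           # horizontal gradient
                nbrs = (mag[i, j - 1], mag[i, j + 1])
            elif a < 67.5:                        # 45 degrees
                nbrs = (mag[i - 1, j + 1], mag[i + 1, j - 1])
            elif a < 112.5:                       # vertical gradient
                nbrs = (mag[i - 1, j], mag[i + 1, j])
            else:                                 # 135 degrees
                nbrs = (mag[i - 1, j - 1], mag[i + 1, j + 1])
            if mag[i, j] >= max(nbrs):
                out[i, j] = mag[i, j]
    return out

def threshold(thin, t):
    """Step 3: binarize the thinned gradient map into a contour map."""
    return (thin >= t).astype(np.uint8)
```

On a vertical step image, only the pixels along the transition survive the suppression and thresholding.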
Considering the gradient-based methods [49], in Figure 2, we provide a timeline showing the main related works and their order of publication. Taking this chronological order into account, in the following, we discuss the main articles.
The Canny detector [41], which is one of the most widely used methods, was proposed considering a Gaussian smoothing followed by a gradient operation and finally thresholding. In [42], the authors discussed other first-order derivative-based methods inspired by Canny, such as D-ISEF [61], color-boundary [62], and FDAG [63].
The detector proposed by Sobel et al. [39] calculates the gradient value at each pixel position in the image using a fixed operator. In turn, Prewitt [64] proposed a technique similar to Sobel's, which, however, has no adjustment coefficients, varies only by a constant value, and calculates the magnitude of the gradient together with the image orientations. In the study by Roberts [65], the derivative is calculated using the root of the difference between diagonally adjacent pixels as follows:
$$Z_{i,j} = \sqrt{(y_{i,j} - y_{i+1,j+1})^2 + (y_{i+1,j} - y_{i,j+1})^2},$$
$$y_{i,j} = \sqrt{x_{i,j}},$$
focusing on invariant properties that the edges exhibit.
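A minimal sketch of this operator on a grayscale array (the square root of the intensities follows the definition of $y_{i,j}$ above; the output map is one pixel smaller in each dimension):

```python
import numpy as np

def roberts(x):
    """Roberts cross: square root of intensities, then the root of the
    squared diagonal differences; the output is one pixel smaller."""
    y = np.sqrt(np.asarray(x, dtype=float))      # y_{i,j} = sqrt(x_{i,j})
    dz = (y[:-1, :-1] - y[1:, 1:]) ** 2          # main-diagonal difference
    da = (y[1:, :-1] - y[:-1, 1:]) ** 2          # anti-diagonal difference
    return np.sqrt(dz + da)                      # Z_{i,j}
```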
The LoG filter [40], or Laplacian of Gaussian, is based on Gaussian filtering followed by a Laplacian operation as follows:

$$\nabla^2 f(x,y) = \frac{\partial^2 f(x,y)}{\partial x^2} + \frac{\partial^2 f(x,y)}{\partial y^2}.$$

This approach considers a second-order operator, which seeks to identify the maximum and minimum points of the variation in intensities so that, by finding the points that cross zero, the edge candidates in the image are found. The Gaussian filtering step is important because the Laplacian is a second-order operator and is therefore extremely sensitive to noise.
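The LoG pipeline can be sketched as below: separable Gaussian smoothing, a five-point discrete Laplacian, and zero-crossing detection (the kernel radius and the crossing rule are illustrative choices):

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """Normalized 1-D Gaussian kernel with radius ~3*sigma."""
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian filtering: 1-D convolution over rows, then columns."""
    k = gaussian_kernel_1d(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, k, mode="same")

def laplacian(img):
    """Five-point discrete Laplacian (zero at the border)."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return out

def zero_crossings(lap, eps=1e-6):
    """Edge candidates: sign changes between horizontal or vertical neighbors."""
    h = lap[:, :-1] * lap[:, 1:] < -eps
    v = lap[:-1, :] * lap[1:, :] < -eps
    out = np.zeros(lap.shape, dtype=bool)
    out[:, :-1] |= h
    out[:-1, :] |= v
    return out
```

On a smoothed step edge, the Laplacian is positive on one side and negative on the other, so the zero crossing localizes the boundary between the two regions.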

2.3. Region-Based Segmentation Methods

In these methods, regions are segmented in various forms, such as clustering methods and automated thresholding. The edges are detected as the border of these regions. It is possible to leverage the regions formed by the textures in high-complexity images as an enabler for the process of detecting the edges between them [56].
In general, the region-based segmentation approach outperforms edge-based methods. Other methods, such as those based on frequency-domain filtering and statistical methods, are used both in region-based segmentation and for direct edge detection. With the emergence of texture descriptors and other local information, such as the brightness, texture, and color gradient, the probabilistic contour (Pb) method [42,56,66] has become one of the most representative approaches in this type of detector.

2.4. Methods Based on Machine Learning

By combining the Pb method with a logistic regression, it was possible to develop another kind of edge detection model [42]. The method has been extended over the years in many ways, bringing, for example, the multi-scale probabilistic contour method and multi-scale spectral clustering.
More recently, new techniques based on machine learning have emerged, particularly those based on Convolutional Neural Networks (CNNs). In [42], the authors present other machine-learning-based methods, such as Holistically Nested Edge Detection, better known as HED [67], proposed to improve the performance of convolutional neural networks, which then inspired the methods CEDN [68], RCF [69], and LPCB [70], among others. Later, the BDCN [71] method emerged, proposing detection at different scales. Currently, techniques such as FINED [72], EDTER [73], and PiDiNet [74] have been proposed to deal with edge detection without the need for a voluminous database for model training.

2.5. Fuzzy-Logic-Based Methods

A fuzzy inference system modeling for efficiently extracting edges in noisy images, called the Russo method, was proposed in [75]. From this point, the application of fuzzy set theory [76] in edge detection increased, justified by the fuzzy nature of object edges in a digital image, which makes fuzzy set theory suitable for solving such problems.
Detectors based on the junction of techniques such as divergence and Fuzzy Entropy minimization (FED) [77], and detectors based on morphological gradient and type-2 fuzzy logic (Type-2 Color) [78] have appeared in the literature. Hybrid techniques and fuzzy versions of neural networks have been created, improving the performance of other approaches and, in some cases, decreasing the computational complexity.
Hybrid methods [57] that combine neural networks with type-1, type-2, and type-3 fuzzy logic (the neuro-fuzzy-1, neuro-fuzzy-2, and neuro-fuzzy-3 approaches, respectively), where the higher types are extensions of fuzzy logic with additional degrees of associated uncertainty, can also be identified in the literature. Fuzzy logic has likewise been used to improve classical methods, such as Canny (C&I-TYPE2), and in a hybrid method that combines Sobel, type-1 fuzzy logic, and an interval type-2 fuzzy system (T2FLS). For a better understanding of fuzzy-logic-based methods, Figure 3 shows a timeline presenting, in chronological order, the papers previously discussed. Figure 4 presents an attempt to broadly represent a schema for fuzzy-logic-based methods.

3. Materials and Methods

This section presents the methodology used in this research. We first introduce the concept of a systematic literature review and the methodology used to answer the research problems.

3.1. Systematic Literature Review Scope

A systematic literature review is a form of secondary study that aims to identify, evaluate, and interpret all available relevant research on a specific research problem, topic, or phenomenon of interest [79]. Among the many reasons to engage in a systematic literature review, the most common are: (i) to summarize the existing evidence regarding a treatment or technology; (ii) to identify gaps in current research with the goal of suggesting areas for future investigation; (iii) to provide a body of information that appropriately positions new research activities; and (iv) to reduce or attempt to eliminate research bias.
In this context, this research seeks to summarize the existing technology regarding methods that make use of aggregation functions for digital image processing, more specifically regarding edge detection. Another important contribution was the identification of gaps in the state of the art.

3.2. Definition of Criteria, Search in Indexing Databases, and Obtaining Primary Research

In order to answer the research problems, searches were conducted in two large databases widely used in the computer science field: Scopus (SC) (available at, accessed on 1 November 2022) and Web Of Science (WS) (available at, accessed on 2 November 2022). The inclusion terms used to answer which edge detection methods use clustering and pre-aggregation functions were the terms:
“preaggregation” OR “pre-aggregation” OR “aggregation” OR “fusion” OR “Sugeno” OR “Type-2” appearing in the abstract; OR
The terms of item (i) joined with the terms “Fuzzy” and “Logic” concatenated with “Type-2”, with the exception of “Sugeno”, appearing in the title;
The terms of items (i) and (ii) in conjunction with the terms “image” AND “segmentation” OR “image” AND “processing” OR “edge” AND “detection” appearing in the abstract.
To conduct the search, delimitation criteria were used, such as articles written in English, published from 2016 onward, that have referenced the work of “Canny” or authors who have worked with “canny” and that were articles or reviews in the area of computing. This search returned 428 results, which were reduced by the exclusion criteria.
Initially, all papers whose titles demonstrated a lack of relevance to this review were eliminated, as were all papers flagged as retracted by the journal. An analysis of the abstracts was then performed, observing the objectives of each study and eliminating all articles outside the scope of this review. Finally, the methodology of the papers was analyzed in order to check whether they presented any serious technical problem that would justify discarding them even before a full reading.
At the end of the process, 17 articles remained. These remaining articles were used to find other relevant works. We analyzed the sections of related works, also searching using the keywords of such papers, whether from the same research groups or not, to find papers that were somehow composed of the development of the concept or the history of what was developed in the remaining articles, thus considering the presentation of what had already been completed and had somehow been missed in the initial search.
In total, 7 works were added and are marked in the second column of Table 2. An overview of the systematic review performed is available in Figure 5. It is possible to observe that, given the search question, the inclusion terms are considered, where ABS() and TITLE() represent searches of the abstracts and titles of the papers, respectively.
In terms of delimitation, PUBYEAR, REFAUTH, and REF are, respectively, the year of publication of the paper, papers that cite a specific author, and papers that present a specific term in their references. Regarding the number of occurrences, INC denotes the number of papers found through the inclusion and delimitation terms. Then, filtering was performed by title and abstract, along with the exclusion of articles, where EXC represents the number of articles excluded and REM the number remaining.

4. Results and Discussion

After filtering the papers using the inclusion and exclusion criteria already discussed, 24 papers remained to be reviewed, listed in Table 2. Among them, 19 papers are specifically dedicated to edge detection, either using multiple descriptor extraction and aggregation or based on fuzzy set theory, namely type-2 fuzzy and neutrosophic sets [44,47,80,81,82,83,84,85,86,87,88,89,90], or are works that use clustering and pre-aggregation functions, which are dedicated to region segmentation but whose characteristics can be extended to the task of edge detection [46,48,51,52,91,92].
Table 2. Found and included papers.

Found Papers | Papers Included | Total Number of Reviewed Papers

4.1. Summary of Methods

The methods of segmentation or edge detection found, based on aggregation and pre-aggregation functions, can be divided into three groups: (i) based on the aggregation of distance functions and fuzzy C-means (FCM), (ii) based on multiple descriptor extraction and aggregation, and (iii) based on fuzzy set theory, namely type-2 fuzzy and neutrosophic sets. Table 3 lists the papers included in each group. Next, for each approach, we present a discussion of the related papers.

4.1.1. Multiple Descriptors Extraction and Aggregation

In [44,80,81,82,93], the authors present an edge detection method inspired by the way the human visual system works. The central idea of this approach lies in the integration of several types of global or local information features, such as brightness, color, and the relationship between these descriptors. In these works, the authors model descriptors from the color and luminance channels that scan the image pixel by pixel and, in turn, generate auxiliary images that mimic the way information is processed in the retina and visual cortex, respectively. At the end, the images are aggregated by a function that considers, besides the objective responses of each descriptor, a direction vector obtained from the difference between the direction of each pixel in each of the channels. In this sense, each descriptor has its own capacity to represent visual clues or primitive shapes, which, when combined, delineate the location of the positive edge points of the objects in the image with more confidence.
This same principle was explored in [44], where the image feature extraction step aims to simulate the responses of the ganglion cells present in the eye, and the fusion of the information is the process that occurs in the visual cortex. In this case, the feature extraction step was carried out through the discrete difference between pixels in various directions, which, in short, is a type of first-order gradient detection. The aggregation of the different extracted features was carried out by using a function known as the Choquet integral [99] and its generalizations [43], which is a kind of pre-aggregation function [55].
The authors point out that these functions are best suited to the nonlinear profile of modeling how the prefrontal cortex acts in fusing visual stimuli. The Choquet integral generalized by a t-norm T [100], which is called the C T -integral [55], is defined below. Note that, if the t-norm used is the product, the standard Choquet integral is obtained.
Definition 1 
($C_T$-integral). Let $m : 2^N \to [0,1]$ be a fuzzy measure and $T : [0,1]^2 \to [0,1]$ be a t-norm. Taking the Choquet integral as a basis, the $C_T$-integral is defined as the function $C_m^T : [0,1]^n \to [0,n]$, given, for all $\mathbf{x} \in [0,1]^n$, by:
$$C_m^T(\mathbf{x}) = \sum_{i=1}^{n} T\left(x_{(i)} - x_{(i-1)},\ m(A_{(i)})\right),$$
where $(x_{(1)}, \ldots, x_{(n)})$ is an increasing permutation of the input $\mathbf{x}$; that is, $x_{(1)} \leq \ldots \leq x_{(n)}$, with the convention that $x_{(0)} = 0$, and $A_{(i)} = \{(i), \ldots, (n)\}$ is the subset of indices of the $n - i + 1$ largest components of $\mathbf{x}$.
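Definition 1 can be implemented directly, as sketched below; the cardinality-based fuzzy measure and the input values are illustrative choices. With the product t-norm, the expression recovers the standard discrete Choquet integral, which for this symmetric measure coincides with the arithmetic mean:

```python
import numpy as np

def ct_integral(x, m, T):
    """C_m^T(x) = sum_i T(x_(i) - x_(i-1), m(A_(i))), with m a fuzzy measure
    given as a function on index subsets and T a t-norm."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    order = np.argsort(x)                    # increasing permutation (1),...,(n)
    xs = np.concatenate(([0.0], x[order]))   # convention: x_(0) = 0
    total = 0.0
    for i in range(1, n + 1):
        A_i = frozenset(int(k) for k in order[i - 1:])  # n-i+1 largest components
        total += T(xs[i] - xs[i - 1], m(A_i))
    return total

m_card = lambda A: len(A) / 3          # symmetric measure |A|/n, here for n = 3
t_prod = lambda a, b: a * b            # product t-norm -> standard Choquet integral
t_min = min                            # minimum t-norm -> another C_T-integral

print(ct_integral([0.2, 0.9, 0.5], m_card, t_prod))  # equals the mean of the input
```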
The researchers sought to simulate the behavior of the X (visual information for the near plane) and Y (visual information for the focus plane) ganglion cells; however, according to studies on the physiology of the visual system, there are also W (visual information from the far plane) ganglion cells, which play an essential role in processing visual information.
Taking into account the importance of the W information, an edge detection method that additionally considers modeling for the W cells, called DXYW, was proposed [82]. In this model, three separate sub-models simulating the X, Y, and W channels are designed to process grayscale images and disparity maps, the latter being responsible for the W visual information. Furthermore, a depth-guided multi-channel fusion method is introduced to integrate the visual responses from the different channels. The responses from each channel are fused by a weighted average function, with the contributions of the channels combined through a Hadamard product. The authors report that DXYW achieves competitive detection performance, better maintaining the integrity of object contours and suppressing background texture.
The use of depth information was also explored by [83] for the detection of region boundaries. According to the authors, detecting the border of salient regions of an image in an accurate way and distinguishing between objects is an extremely difficult task, especially when one has complex backgrounds or when the objects in the main plane and in the background have low contrast between each other. Therefore, depth information can provide visual clues to locate and distinguish these objects.
In both works [83,84], the model was obtained through a convolutional neural network, which, in the case of the latter, was justified by the attempt to simulate the human subjective visual attention guidance. Similarly, in [45], authors used the Choquet integral, embedded in the form of a neural network, to integrate the results of the segmentations generated by multiple deep neural network models that used the same basis, but considering, in the training stage, the manual segmentation of different experts. Manual annotations in medical image segmentation tasks usually present a certain inter-observer variability, even when performed by experts.
To increase the robustness of the segmentation system, the Choquet integral is proposed as a replacement of the fusion by majority vote. This same process could be used to segment only the border region, or edges of the image.
The importance of contrast for describing edges in the image becomes even clearer in the work of [85], where they proposed a local contrast function based on aggregation functions and fuzzy implications—what they call salient region detection—which is essentially a detection, or at least partial segmentation, of the front object of the image. The concept of local contrast as a measure of the variation with respect to the membership degrees of the elements in a specified region of a fuzzy relation was presented, along with a new construction method using the concept of consensus degrees among the input values. The concept of total contrast was also discussed as the aggregation of the local contrast values.
The segmentations of the front object regions of the image consist of thresholding from the detection of the highest contrast regions. To achieve this, they converted the image from RGB color space to CIELAB space and then each individual channel was fuzzified. After that, through a defined m × n window, the image was subjected to the local contrast process. Finally, all of the channels were merged and the processed image was automatically thresholded using Otsu’s method.
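The final step of that pipeline, Otsu's automatic thresholding, can be sketched with a standard implementation for values normalized to $[0,1]$ (the fuzzification and local contrast stages are method-specific and omitted here):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # weight of the background class
    mu = np.cumsum(p * centers)        # cumulative mean
    mu_t = mu[-1]                      # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return edges[np.argmax(between) + 1]   # upper edge of the best split bin

vals = np.array([0.1] * 50 + [0.9] * 50)
print(otsu_threshold(vals))            # a threshold separating the two modes
```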
In addition to the information already presented as important for characterizing image edges, such as contrast and depth, a method inspired by the focus-selective nature of the human visual system was introduced in [96]. When faced with a progressive blurring of a scene, which naturally diminishes the sharpness of its tonal transitions, the visual system deals with this problem by automatically compensating for the relevance of some edges relative to the rest of the image.
For that, two classes of self-adaptive vector fusion operators were proposed, together with the parameter variation of a Gaussian smoothing filter followed by gradient extraction on each of the images generated by the previous process. Then, these intermediate images were fused into one image, which was binarized to obtain the final edge image. One of the fusion functions developed was the ordered weighted gradient fusion operator, given, for every tuple of gradient vectors $D \in (\mathbb{R}^2)^n$, by:
$$\phi_h(D) = \left( \sum_{i=1}^{n} h_i \tilde{D}_{i,x},\ \sum_{i=1}^{n} h_i \tilde{D}_{i,y} \right),$$
where $h = (h_1, \ldots, h_n)$, with $\sum_{i=1}^{n} h_i = 1$, is a weighting vector, and $\tilde{D}_i$ is the vector in $D$ with the $i$-th greatest magnitude, whose horizontal and vertical components are $\tilde{D}_{i,x}$ and $\tilde{D}_{i,y}$, respectively. The use of a weighting vector $h$ allows the importance of the gradients to be fused to vary. This operator is strongly inspired by Ordered Weighted Averaging (OWA) operators [101]. To see how varying the blur with a Gaussian filter relates to the way our visual system compensates for the relevance of edges, note that a high level of detail should force the detection of small or insignificant objects, whereas a low one should lead to the extraction of only the most relevant silhouettes. In other words, a decrease in the level of detail should force an edge detection method to avoid objects that are too small or are part of a background texture. Aggregating these variations in detail should strengthen the most relevant transitions and weaken the less relevant ones by offsetting the boundaries between regions, thus improving edge detection performance.
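The ordered weighted gradient fusion operator admits a compact sketch (function name illustrative):

```python
import numpy as np

def owa_gradient_fusion(D, h):
    """phi_h(D): sort the gradient vectors by decreasing magnitude and take
    the h-weighted sum of their horizontal and vertical components."""
    D = np.asarray(D, dtype=float)               # shape (n, 2): rows (D_x, D_y)
    h = np.asarray(h, dtype=float)
    assert np.isclose(h.sum(), 1.0), "h must be a weighting vector"
    order = np.argsort(-np.hypot(D[:, 0], D[:, 1]))  # greatest magnitude first
    return h @ D[order]                          # (sum h_i D~_ix, sum h_i D~_iy)

D = [[3.0, 4.0], [0.0, 1.0], [1.0, 0.0]]
print(owa_gradient_fusion(D, [1.0, 0.0, 0.0]))   # keeps only the strongest gradient
```

Choosing $h$ concentrated on the first components keeps only the strongest gradients, mirroring the "low level of detail" regime described above.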
An interesting result on the performance of applying information fusion methods for edge detection can be found in [80], where the authors discussed why the edge detection approach in combination with aggregation methods, or information fusion, is so important, and demonstrated that it performs better than other detection methods using information theory.
According to their research, fusion techniques aim to reduce information uncertainty and to minimize redundancy, improving reliability and maximizing information relevant to an objective task. The same image can arrive at different fusion results because of the fusion objectives and the priority of relevant information for a given task. The authors presented an unsupervised edge detection model as an optimization problem. In this sense, the final detection result was produced by the consensus between results provided by different detectors or by the same algorithm with varying parameters.
Considering that information aggregation has the fundamental role of reducing uncertainty, improving reliability, and maximizing relevant information for the edge detection task, in [87], the authors proposed a methodology for aggregating the HSV (hue, saturation, and value) channels in order to improve the detection of the edges given by the Sobel and Canny methods.
The idea of this method is to select, through clustering, features that contribute to the desired detection. In [97], an edge detection algorithm based on a Bonferroni-mean-type pre-aggregation operator (BM) is presented, with greater emphasis given to image feature extraction. A comprehensive comparative study was conducted to evaluate the results obtained with the proposed edge detection algorithm against other well-known and widely used edge detectors in the literature, demonstrating the superiority of the proposed method regarding accuracy and edge continuity. Additionally, it was shown that the correct selection of the channels to fuse makes all the difference in the result of edge detection.
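That work builds its pre-aggregation operator on the Bonferroni mean family; for context, the classical Bonferroni mean $BM^{p,q}$ can be sketched as follows (this is the standard form, not the exact operator of [97]):

```python
import numpy as np

def bonferroni_mean(x, p=1.0, q=1.0):
    """Classical Bonferroni mean:
    BM^{p,q}(x) = ( 1/(n(n-1)) * sum_{i != j} x_i^p x_j^q )^(1/(p+q))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(x[i] ** p * x[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))

print(bonferroni_mean([0.2, 0.9, 0.5]))  # lies between the min and max inputs
```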
In [95], the authors proposed two methods. The first consists of separating the RGB channels, followed by a directional edge detection using the Sobel method, one horizontal and one vertical, and then the concatenation of the layers to obtain a first edge detection, which then undergoes a transformation to YCbCr space. This is followed by the application of the Canny detector to the R and Y channels and, afterwards, their fusion. The second method simply consists of applying Canny to the G and B channels, followed by a fusion of these processed channels to generate the final edge detection. The two fused detection results were then compared, and it was concluded that the second method produced visually better results for the dataset used. Despite this, the authors neither explained what type of fusion function they used nor provided metrics for a quantitative evaluation of their results.
The authors in [81] presented an algorithm called Improved Wavelet Modulus Maxima Degree of Polarization (IWMMDP), which is based on the fusion of two edge detection results, namely (i) the result obtained from the light intensity and (ii) the result obtained from the degree of polarization. Initially, the image is pre-processed with a noise suppression filter called Non-Local Means denoising (NLM); then, the modulus of the gradient vector and the amplitude of the variable angle of the intensities are calculated, followed by non-maxima suppression (SNM) and a thresholding process. Finally, edges are selected and connected through another thresholding process, and then aggregated with the edges obtained by thresholding the degree of polarization. In the experiments, this technique presented a better-defined trace and better connectivity than the classical methods. Denoting the edge pixel of the image produced by light intensity detection as E_A(i, j) and that of the polarization degree image, which considers the four directions 0°, 45°, 90°, and 135°, as E_B(i, j), the edge fusion is given by:
E(i, j) = α_1 E_A(i, j) + α_2 E_B(i, j),   α_1 = α_2 = 0.5
where α_1 and α_2 are weights, both set to 0.5 in the experiments.
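The weighted-average fusion rule above can be sketched as follows; the arrays are toy data, not outputs of the IWMMDP pipeline:

```python
import numpy as np

def fuse_edges(e_a, e_b, alpha1=0.5, alpha2=0.5):
    """Weighted-average fusion of two edge maps:
    E(i,j) = alpha1 * E_A(i,j) + alpha2 * E_B(i,j)."""
    e_a = np.asarray(e_a, dtype=float)
    e_b = np.asarray(e_b, dtype=float)
    return alpha1 * e_a + alpha2 * e_b

# Toy 2x2 edge maps standing in for the intensity- and
# polarization-based detections.
edge_intensity = np.array([[0.0, 1.0], [0.2, 0.8]])
edge_polarization = np.array([[0.4, 0.6], [0.0, 1.0]])
fused = fuse_edges(edge_intensity, edge_polarization)
```

With equal weights, each fused pixel is simply the mean of the two detections at that position.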
The construction of an image feature extraction function was proposed in [47] using an ordered directionally monotone (ODM) function. The proposed method consists of building a feature map of the image based on the neighborhood information. The information taken from the neighborhood values is fused using the ordered aggregation function, whose main feature is the ability to consider different directions of variation in the intensity growth. Finally, this method was used in combination with different edge detectors, such as Canny, gravitational, and fuzzy morphology. The experiments showed that the use of ODM functions performs competitively with respect to classical methods. Moreover, the approach that determines a consensus image from the application of different methods resulted in a performance superior to those tested.
Another method using local and neighborhood descriptors can be found in [98]. The method consists of first converting the image to grayscale, applying Gaussian smoothing, and then performing edge detection with the VLEP descriptor, a flexible circular edge detection descriptor that characterizes the local spatial structure of the different direction information. The direction of the edge is measured along the straight line passing through two zero points of the VLEP descriptor. This descriptor varies in all directions but depends on the radius and neighborhood parameters. The final gradient is given by the square root of the sum of the squared detections over all directions. Finally, a binarization is performed to obtain the final image. A fusion function is applied when several descriptors with different parameters are used. The experimental results show that the method obtained a better performance in some aspects, such as continuity, smoothness, refinement, and localization accuracy.
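The root-sum-of-squares combination of directional responses described above can be sketched as follows; the toy arrays stand in for actual VLEP outputs:

```python
import numpy as np

def combine_directional_responses(responses):
    """Fuse per-direction edge responses into one gradient magnitude:
    G = sqrt(sum_d R_d^2)."""
    stack = np.stack([np.asarray(r, dtype=float) for r in responses])
    return np.sqrt((stack ** 2).sum(axis=0))

# Toy responses in two directions (e.g., horizontal and vertical).
r_h = np.array([[3.0, 0.0]])
r_v = np.array([[4.0, 1.0]])
grad = combine_directional_responses([r_h, r_v])  # -> [[5.0, 1.0]]
```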
A new method for multimodal medical image fusion based on Salient features for Fusion Discrimination (SFD) and fuzzy logic was proposed in [86]. First, the SFD method is employed to extract two saliency features for fusion discrimination. Then, two new fusion decision maps, called an incomplete fusion map and supplementary fusion map, are constructed from salient features. In this step, the supplementary map is constructed by two different fuzzy logic systems. The supplementary and incomplete maps are then combined to build an initial fusion map. The final fusion map is obtained by processing the initial fusion map with a Gaussian filter. Finally, a weighted average approach is adopted to create the final fused image.

4.1.2. Based on the Aggregation of Distance Functions and FCM

The methods based on the aggregation of distance functions and Fuzzy C-Means (FCM) [46,48,51,52,91] are established on the importance of distance functions as a criterion for making decisions about the belonging of an element to a certain group or segment in clustering algorithms.
All of the works presented here used the same base algorithm (FCM) in their experimental tests, which is one of their main limitations. The FCM segmentation method is a data clustering technique that allows each data point to belong to multiple clusters simultaneously, with certain degrees of membership. The algorithm is based on minimizing the following objective function [102]:
J_m = Σ_{i=1}^{D} Σ_{j=1}^{N} μ_{i,j}^m d_{i,j}^2,
  • D is the number of data points;
  • N is the number of clusters;
  • m is a fuzzy partition matrix exponent for controlling the degree of fuzzy overlap, with m > 1. Fuzzy overlap refers to how fuzzy the boundaries between clusters are; that is, the number of data points that have significant membership in more than one cluster;
  • μ_{i,j} is the degree of membership of x_i in the jth cluster. For a given data point x_i, the sum of the membership values for all clusters is one;
  • d_{i,j}^2 is a measure of distance, which, in general, can vary according to the proposed approach, but is classically given by ||x_i − c_j||_A^2, where ||·||_A is a norm.
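A minimal FCM sketch based on the objective above, using the classical Euclidean distance and the standard alternating update of centers and memberships; the toy 1-D data and parameter values are illustrative, not drawn from any reviewed paper:

```python
import numpy as np

def fcm(x, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means sketch: alternately update cluster centers
    and memberships so as to decrease J_m = sum_i sum_j u_ij^m * d_ij^2.
    x: (D, F) data matrix. Returns memberships U (D, N) and centers C (N, F)."""
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        um = u ** m
        c = um.T @ x / um.sum(axis=0)[:, None]   # fuzzy-weighted cluster centers
        d = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=2) + 1e-12
        u = d ** (-2.0 / (m - 1.0))              # classical membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, c

# Two well-separated 1-D groups; FCM should place one center near each.
data = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
u, centers = fcm(data, n_clusters=2)
```

The exponent m controls the fuzzy overlap: values close to 1 give nearly crisp memberships, while larger values spread each point's membership across clusters.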
A summary of methods based on the aggregation of distance functions or fuzzy measurements can be found in Table 4. The first column contains the reference, the second the name of the aggregation function, then the used fuzzy measure, and finally the adopted clustering method.
In general, the aggregation of different distance functions in combination with FCM allows for the segmentation of important regions of the image, which, in turn, can be used for edge determination (see Section 2.3). Next, for each of the papers, considering the distance-based approach, the main differences between the studies are pointed out.
In [91], the authors presented a review of the Ordered Weighted Average (OWA) function, which is a type of aggregation operator. OWA is first used to aggregate two absolute distance functions that consider the following:
The intensity distance d_S between pixels p_1 and p_2, given by:
d_S(p_1, p_2) = (1/255) |s_2 − s_1|
The distance d_N between the average pixel intensities of the eight-connected neighborhoods, calculated as
d_N(p_1, p_2) = (1/255) |n_2 − n_1|.
As a result, they obtained the distance function, defined below, where α is an adjustment parameter:
d_{S,N;α}(p_1, p_2) = α d_S(p_1, p_2) + (1 − α) d_N(p_1, p_2).
Three other functions analogous to Equation (8) were proposed for the intensity of each channel of the RGB color space, resulting in the following distance function:
d(p_1, p_2) = (α/255) |c_2^1 − c_1^1| + (β/255) |c_2^2 − c_1^2| + ((1 − α − β)/255) |c_2^3 − c_1^3|
where |c_2^1 − c_1^1|, |c_2^2 − c_1^2|, and |c_2^3 − c_1^3| are, respectively, the minimum, average, and maximum difference in each individual RGB channel of the pixels p_1 and p_2, and α and β are arbitrary parameters.
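The convex-combination distance above can be sketched as follows; the pixel triples and the values of α and β are illustrative:

```python
def d_abs(a, b):
    """Normalized absolute difference of two 8-bit values."""
    return abs(a - b) / 255.0

def d_convex(p1, p2, alpha, beta):
    """Convex combination of three per-channel differences:
    d = alpha*d1 + beta*d2 + (1 - alpha - beta)*d3."""
    d1, d2, d3 = (d_abs(a, b) for a, b in zip(p1, p2))
    return alpha * d1 + beta * d2 + (1 - alpha - beta) * d3

# Toy pixels given as channel triples.
p1, p2 = (10, 20, 30), (20, 40, 90)
dist = d_convex(p1, p2, alpha=0.5, beta=0.3)
```

Since the weights sum to one and each channel distance lies in [0, 1], the aggregated distance also stays in [0, 1].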
In [52], the authors used the Aggregation Operator Of Convex Combination (AOOCC), also with FCM as the clustering method, taking as descriptors the difference between the gray levels in each color channel and varying a parameter α to fit the results. The aggregated distance is defined as:
d_{α_r, α_g, α_b}((r_1, g_1, b_1), (r_2, g_2, b_2)) = Σ_{i ∈ {r,g,b}} α_i d_i((r_1, g_1, b_1), (r_2, g_2, b_2))
d_r((r_1, g_1, b_1), (r_2, g_2, b_2)) = (1/255) |r_1 − r_2|
d_g((r_1, g_1, b_1), (r_2, g_2, b_2)) = (1/255) |g_1 − g_2|
d_b((r_1, g_1, b_1), (r_2, g_2, b_2)) = (1/255) |b_1 − b_2|
Descriptors motivated by the Local Binary Pattern (LBP) family, widely used in the literature for texture analysis, or by its variation, the Shift Local Binary Pattern (SLBP), provide another means of constructing distance functions [51]. To this end, the authors introduced two extended aggregation functions, the Extended Powers Product (EPP) and the Extended Weighted Arithmetic Mean of Powers (EWAMP), which were used to construct the proposed distance functions. The aggregated distance function is then defined by:
d_{λ,ω}(p_1, p_2) = λ_1 d_1^{ω_1}(r_1, r_2) + λ_2 d_1^{ω_2}(g_1, g_2) + λ_3 d_1^{ω_3}(b_1, b_2) + λ_4 d_2^{ω_4}(s_1, s_2) + λ_5 d_2^{ω_5}(q_1, q_2)
where ω and λ are fitting parameters, and d_1 and d_2 are distance functions given by:
d_1(c_1, c_2) = (1/255) |c_1 − c_2|
d_2(t_1, t_2) = |t_1 − t_2|
The authors concluded that, with an appropriate selection of parameters, the aggregated distance function produces good results when applied to images, and that the similarity descriptor between the pixels and their neighbors also contributes significantly to the quality of the segmentation.
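The weighted mean of powered distances underlying this construction can be sketched as follows; the descriptor values s and q are hypothetical placeholders, and the parameter choices are illustrative:

```python
def d1(c1, c2):
    """Normalized channel distance."""
    return abs(c1 - c2) / 255.0

def d2(t1, t2):
    """Descriptor distance."""
    return abs(t1 - t2)

def aggregated_distance(p1, p2, lam, omega):
    """EWAMP-style weighted mean of powered distances:
    sum_k lam_k * d_k^omega_k, over three channels (r, g, b)
    and two descriptor values (s, q)."""
    r1, g1, b1, s1, q1 = p1
    r2, g2, b2, s2, q2 = p2
    parts = [d1(r1, r2), d1(g1, g2), d1(b1, b2), d2(s1, s2), d2(q1, q2)]
    return sum(l * d ** w for l, d, w in zip(lam, parts, omega))

# Only the red channel differs, so only the first term contributes.
p1 = (0, 0, 0, 0.2, 0.5)
p2 = (255, 0, 0, 0.2, 0.5)
dist = aggregated_distance(p1, p2, lam=[0.2] * 5, omega=[1.0] * 5)  # -> 0.2
```

The exponents ω sharpen or flatten each component's contribution before the weighted average is taken.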
The application of three aggregation functions, OWA, Powers Product (PP), and Weighted Arithmetic Mean of Powers (WAMP), was proposed in [46]. The constructed distance function is given by:
d^α_{ω^[5], λ^[5]}(p_{i,j}, p_{k,n}) = ω_{5,1} d_1^{λ_{5,1}}(r_{i,j}, r_{k,n}) + ω_{5,2} d_1^{λ_{5,2}}(g_{i,j}, g_{k,n}) + ω_{5,3} d_1^{λ_{5,3}}(b_{i,j}, b_{k,n}) + ω_{5,4} d_2^{λ_{5,4}}(S(p_{i,j}), S(p_{k,n})) + ω_{5,5} d_2^{λ_{5,5}}(IC_α(p_{i,j}), IC_α(p_{k,n}))
where, as descriptors, one has the pixel saturation S given by
S(p_{i,j}) = (1/(255√3)) √(r_{i,j}^2 + g_{i,j}^2 + b_{i,j}^2) ∈ [0, 1],
the neighborhood average NAC given by
NAC(p_{i,j}) = (1/3) (NAC^{(R)}(p_{i,j}) + NAC^{(G)}(p_{i,j}) + NAC^{(B)}(p_{i,j}))
NAC^{(c)}(p_{i,j}) = (1/8) (Σ_{k=j−1}^{j+1} c_{i−1,k} + c_{i,j−1} + c_{i,j+1} + Σ_{k=j−1}^{j+1} c_{i+1,k})
and a neighborhood-oriented similarity descriptor IC_α^{(c)} given by
IC_α^{(c)}(p_{i,j}) = Σ_{m=1}^{8} I^{(c)}_{p_{i,j};α}(m)
I^{(c)}_{p_{i,j};α}(m) = 1 if |c_{i,j} − c_{i,j,m}| ≤ α, with m ∈ {1, …, 8}; 0 if |c_{i,j} − c_{i,j,m}| > α,
and α is a chosen threshold.
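Two of these descriptors can be sketched as follows. This is a reconstruction under assumptions: the saturation normalization is taken as 1/(255√3) so that S lies in [0, 1], and IC_α is treated as a count of the eight neighbors whose value differs from the center by at most α; the 3×3 image is toy data:

```python
import numpy as np

def saturation(r, g, b):
    """Pixel saturation descriptor S = sqrt(r^2 + g^2 + b^2) / (255*sqrt(3)),
    normalized to [0, 1] (assumed normalization)."""
    return np.sqrt(r ** 2 + g ** 2 + b ** 2) / (255 * np.sqrt(3))

def ic_alpha(channel, i, j, alpha):
    """Similarity descriptor IC_alpha: number of the 8 neighbors whose value
    differs from the center pixel by at most alpha."""
    center = int(channel[i, j])
    count = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and abs(int(channel[i + di, j + dj]) - center) <= alpha:
                count += 1
    return count

img = np.array([[10, 10, 10],
                [10, 12, 200],
                [10, 10, 10]])
s = saturation(255, 255, 255)             # -> 1.0 (pure white)
n_similar = ic_alpha(img, 1, 1, alpha=5)  # 7 of 8 neighbors within 5 of 12
```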
In [48], the authors used, as an aggregation function, an instance of the Generalized Quasi-Arithmetic Mean (GQ-AM), considering as attributes the differences of the components in the color space and the value of a pixel descriptor that incorporates the spatial relations with each neighbor.
The composition of the measurements was performed with the product t-norm, and the metrics generated for the experiments were incorporated into the FCM as the distance measure between every two observed pixels. The measurements took into account the local binary pattern descriptor (LBP), C_ω = τ_ω · t_ω, where τ and t are given by:
τ(F_i, F_j, K) = Π_{l ∈ {R,G,B}} ((F_i(l) + F_j(l))/2 + K) / (max(F_i(l), F_j(l)) + K)
t(D_i, D_j, K) = K / (K + |D_i − D_j|),
where F i are the normalized color components, D i is a similarity descriptor, and K is a fitting coefficient.
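Assuming the reconstructed forms above (a product over the three channels for τ, composed with t by the product t-norm), the two measures can be sketched as follows; the channel and descriptor values are illustrative:

```python
def tau(f_i, f_j, k):
    """Color-similarity measure: product over channels of
    ((F_i(l) + F_j(l))/2 + K) / (max(F_i(l), F_j(l)) + K).
    Each factor equals 1 when the two channels agree."""
    out = 1.0
    for a, b in zip(f_i, f_j):
        out *= ((a + b) / 2 + k) / (max(a, b) + k)
    return out

def t_measure(d_i, d_j, k):
    """Descriptor-similarity measure K / (K + |D_i - D_j|)."""
    return k / (k + abs(d_i - d_j))

# Identical pixels: both factors are 1, so C = tau * t = 1.
c = tau((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), k=0.1) * t_measure(3, 3, k=0.1)
```

The fitting coefficient K softens both measures: larger K makes them less sensitive to small channel or descriptor differences.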

4.1.3. Based on Fuzzy Set Theory: Type-2 Fuzzy and Neutrosophic Set

The representation of different feature classes of an image (pixel level, local level, and global level) through triangular interval type-2 fuzzy numbers was performed in [88]. The operations on the images and the edge detection were performed by aggregating these numbers with a triangular interval type-2 fuzzy aggregation function. The upper (UMF_M̄(x)) and lower (LMF_M̄(x)) membership functions of a triangular interval type-2 fuzzy set, called a TIT2FS, are represented by a triangular fuzzy number M̄ = ⟨[l̲_M, l̄_M], C_M, [r̲_M, r̄_M]⟩, where:
LMF_M̄(x) = (x − l̄_M)/(C_M − l̄_M) if l̄_M ≤ x < C_M; 1 if x = C_M; (x − r̲_M)/(C_M − r̲_M) if C_M ≤ x < r̲_M; 0 otherwise
UMF_M̄(x) = (x − l̲_M)/(C_M − l̲_M) if l̲_M ≤ x < C_M; 1 if x = C_M; (x − r̄_M)/(C_M − r̄_M) if C_M ≤ x < r̄_M; 0 otherwise
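Both membership functions reduce to an ordinary triangular membership evaluated on different bounds: the LMF uses the inner bounds (l̄_M, r̲_M) and the UMF the outer bounds (l̲_M, r̄_M). A sketch, with illustrative TIT2FS bounds:

```python
def triangular_mf(x, left, center, right):
    """Triangular membership value; called with the inner bounds it gives
    the LMF and with the outer bounds the UMF of a triangular interval
    type-2 fuzzy number."""
    if left < x < center:
        return (x - left) / (center - left)    # rising slope
    if x == center:
        return 1.0
    if center < x < right:
        return (x - right) / (center - right)  # falling slope
    return 0.0

# Illustrative TIT2FS M = <[1, 2], 5, [8, 9]>:
# LMF uses (2, 5, 8), UMF uses (1, 5, 9).
lmf = triangular_mf(3.5, 2, 5, 8)  # -> 0.5
umf = triangular_mf(3.0, 1, 5, 9)  # -> 0.5
```

For every x, LMF_M̄(x) ≤ UMF_M̄(x); the band between them is the footprint of uncertainty of the type-2 set.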
The authors developed an edge detection application for a three-dimensional image of a patient's brain, reconstructed from a sequence of DICOM images, which is pre-processed with two directional gradient filters; after the aggregation of the triangular numbers, the method performs the edge detection. However, the authors did not make it totally clear how the fuzzy triangular numbers are generated from the image information to finally perform the aggregation and edge detection.
A very similar contribution of a fuzzy edge detection method was presented in a later work [89], where the computation of the image gradient was performed using a t-norm. In short, the minimum and maximum were chosen for the purpose of morphological operations. The mathematical properties of the aggregation operations for representing all morphological operations using the Triangular Interval Type-2 Fuzzy Yager Weighted Geometric (TIT2FYWG) and the Triangular Interval Type-2 Fuzzy Yager Weighted Arithmetic (TIT2FYWA) were deduced. These properties represent the components of the image processing.
Edge detection was performed for a DICOM image by converting it into a two-dimensional grayscale image. The architecture of the method consists of defining the triangular norms, defining the properties of the aggregation operation using type-2 interval fuzzy triangular numbers, converting the DICOM image into a two-dimensional grayscale image, applying the gradient using the triangular norms, obtaining the image segmentation, and applying the edge detection using type-2 fuzzy logic.
Another type-2-fuzzy-logic-based edge detection method was presented in [90]. The general process consists of obtaining the image gradients in four directions: horizontal, vertical, and the two diagonals. After this, a generalization of the type-2 fuzzy Sugeno integral is used to integrate the four image gradients. In this second step, the integral establishes the criteria for determining which gradient level obtained from the image belongs to an edge; this is calculated by assigning different type-2 fuzzy densities, and the fuzzy gradients are aggregated using the meet and join operations.
Gradient integration using the Sugeno integral provides a methodology for achieving more robust edge detection, especially when one is working with blurred images. The experimental analyses were performed using real and synthetic images, and the accuracy was quantified using the Pratt’s Figure of Merit. The evaluated results show that the proposed edge detection method outperforms existing methods.
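The full type-2 construction in [90] does not lend itself to a short sketch, but the underlying mechanism can be illustrated with a type-1 Sugeno integral. This simplified stand-in assumes a possibility (max-based) fuzzy measure built from per-direction densities; the gradient magnitudes and densities below are hypothetical:

```python
def sugeno_integral(values, densities):
    """Discrete Sugeno integral S = max_i min(x_(i), g(A_i)), with values
    sorted in decreasing order and g the possibility measure induced by
    the given densities (g(A) = max of the densities in A)."""
    pairs = sorted(zip(values, densities), key=lambda p: -p[0])
    best, g = 0.0, 0.0
    for x, dens in pairs:
        g = max(g, dens)           # measure of the set of the top-i values
        best = max(best, min(x, g))
    return best

# Four directional gradient magnitudes (h, v, two diagonals) in [0, 1]
# with hypothetical importance densities.
grads = [0.9, 0.4, 0.7, 0.2]
dens = [0.3, 0.8, 0.6, 0.5]
edge_strength = sugeno_integral(grads, dens)  # -> 0.6
```

A pixel gets a high aggregated edge strength only when some strong gradient direction is also considered important by the measure, which is the qualitative behavior exploited in [90].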
Adopting neutrosophic set theory, the paper [94] developed a generalization of aggregation operators. In this study, grayscale images were first transformed into the Linguistic Neutrosophic Cubic Set domain using three fuzzy membership degrees per channel, and this information was then aggregated using the proposed operators: the Generalized Linguistic Neutrosophic Cubic Weighted Average (GLNCWA) and the Generalized Linguistic Neutrosophic Cubic Weighted Geometric (GLNCWG). Varying the aggregation operators enables different processing operations on the image, including edge detection.
According to the authors, these methods removed the noise in the tested images, which would justify the use of this approach. To demonstrate their claim, they considered several types of noise, such as Gaussian, Poisson, and speckle. The authors compared the computational efficiency of the proposed method with existing methods, and although the results show that the approach consumes less memory and is faster than the compared methods, the research did not consider a data set presenting variable conditions of light, shadow, shapes, and textures, which would be needed to reach a sufficiently sound conclusion regarding the applicability of the method.
In summary, we can divide the methods presented here into three groups:
Multiple descriptors extraction and aggregation;
Based on the aggregation of distance functions and FCM;
Based on fuzzy set theory: type-2 fuzzy and neutrosophic sets.
The first group considers methods inspired by the way that the human visual system works. The idea is the integration of different global or local features, where each descriptor has its own capacity to represent the visual cues or primitive shapes that compose the edges of the composite shapes of the objects present in the image. In this sense, the aggregation functions model the behavior of the information processed in the cortex, and the descriptors play the role of the stimuli generated by the ganglion cells.
Each author proposes a different type of aggregation or pre-aggregation function to simulate the behavior of information fusion, and there is no consensus on this. Although there are no studies demonstrating which type or family of aggregation or pre-aggregation functions would be best suited for modeling feature fusion for edge detection purposes, some researchers directly argue that Choquet integrals would serve this purpose better, since they would better model the non-linear behavior of the problem. Another possible reason for the Choquet integral's performance is its use of a weighting vector that allows the importance of the information to be fused to vary, as do some other functions presented here. It is noteworthy that only averaging functions, whether arithmetic-weighted or geometric-weighted, were used in all of the papers.
Regarding the descriptors, most of the works considered modeling the X- and Y-type cells, i.e., the visual information close to the image plane and the focal-plane information. Given the biological and performance justifications for also modeling the visual information far from the plane, i.e., the W-type cells, it can be concluded that the use of depth data, such as disparity maps and laser-sensor depth maps, is fundamental, especially when working with images with complex backgrounds or low contrast. The importance of contrast as a descriptor for image edges becomes clear when one observes the features used by the researchers.
The edge detection approach in combination with aggregation methods, or information fusion techniques, is important because it reduces information uncertainty, minimizes redundancy, improves reliability, and maximizes the information relevant to a task. It enables consensus detection, via aggregation, between different detectors or the same detector with varying parameters. Beyond this, aggregation functions can also act as information extractors, so that the image features themselves are identified through aggregation. Works that link extraction and detection with aggregation functions have not been found.
In summary, we can compile the results of this approach:
These are methods that take direct inspiration from the way that the human visual system works, seeking to simulate, through feature extraction or the determination of local or global descriptors, the visual stimuli and the processing of information for the recognition of some pattern through aggregation or pre-aggregation functions.
Functions that make use of weighting vectors and that enable a better modeling of nonlinear behavior, such as the Choquet integral, would perform better.
Depth information is essential, especially when working with images with complex backgrounds or low contrast, the latter being an important feature in the performance of the models.
Combining detection methods with aggregation methods or information fusion makes it possible to reduce information uncertainty and minimize redundancy, improve reliability, and maximize the information relevant to a task.
Works that link extraction and detection via aggregation functions have not been found.
The second set of techniques (based on the aggregation of distance functions and FCM) is based on the importance of distance functions as a criterion for decision making regarding the belonging of an element to a certain group or segment in clustering algorithms. Through a set of distance functions or fuzzy measures, and using a set of aggregation or pre-aggregation functions, it is possible to build new distance functions that can be used as a parameter of clustering algorithms such as FCM. After this segmentation, it is possible to apply a morphological operation or classical detection to obtain the contours of the main objects of the image. One of the main limitations of these works is that only experiments using FCM were performed, even though the approach theoretically allows for implementation in any clustering algorithm that, in general, uses a distance metric as a stopping or starting criterion.
In summary, this approach has the following characteristics:
They are based on the importance of distance functions as decision criteria in data clustering algorithms;
Aggregation and pre-aggregation functions are used to construct new distance functions from others that represent some dimension relevant to the problem;
The papers are limited to a single clustering algorithm and do not clarify how the approach performs with other clustering methods.
The third group of techniques (based on fuzzy set theory: type-2 fuzzy and neutrosophic sets) performs an interpretation of the image in terms of fuzzy logic. The gray levels of the image are treated as a variation on a fuzzy set. This allows for the direct use of aggregation functions. Overall, this technique has proven to be quite robust and superior to those compared in their respective works, particularly with respect to noise sensitivity.

5. Conclusions

Edge detection represents an important task in several steps of computer vision and image processing, ranging from object detection to image retrieval. Studies in this area of research have been gaining momentum in recent years due to its importance in applications such as autonomous cars and augmented reality. The difficulties inherent to the task have required a continuous improvement of techniques and modeling.
Some approaches applied to solve this task considered using aggregation and pre-aggregation functions. Due to the lack of works that summarize and show the gaps in the area of edge detection by functions of this type, this paper developed a systematic review of the literature, which aimed to answer:
“Which edge detection methods use aggregation and pre-aggregation functions?”
Considering a review of 24 papers filtered from 428 articles found in computer databases in the last seven years, it was possible to synthesize important information that was grouped into three approaches, which support the answering of the research question. Precisely, the groups are:
Multiple descriptors extraction and aggregation;
Based on the aggregation of distance functions and FCM;
Based on fuzzy set theory: type-2 fuzzy and neutrosophic sets.
In summary, through this systematic literature review research, it was possible to:
Fill an existing gap in the literature of systematic reviews on edge detection using aggregation functions;
Summarize the existing technology regarding methods that make use of these functions in edge detection;
Identify the gaps in the detection approach using aggregation or pre-aggregation functions, proposing topics for future work and fulfilling the initial objectives of the research.
As future works, some paths have emerged from this work. It is proposed to perform a quantitative analysis of the methods, comparing the metrics provided by researchers who used the same databases for validation, as well as replicating the methods that did not present metrics with the same datasets in order to answer which type of aggregation or pre-aggregation functions presents a better performance in the task of edge detection.
Another suggestion is the development of works that aim to fill the gaps presented here, such as:
The application of the method of constructing distance functions via aggregation functions in different data clustering techniques, such as DBSCAN, region growing, K-means, and others;
Applying non-average aggregation functions for edge detection;
The development of further work involving the modeling of W ganglion cells, i.e., working with depth information, such as that provided by LiDAR and similar sensors;
Exploring the combination of aggregation functions, both in feature extraction and in information fusion;
The direct use of classical detectors in an ensemble fused by aggregation functions, also taking into account the fusion of descriptors and other visual cues, thus ensuring the participation of primitive shapes and the influence of contrast, color, and depth information, among others discussed here.

Author Contributions

Conceptualization, M.A., E.B., C.M.-D., and G.L.; methodology, M.A., E.B., C.M.-D., and G.L.; formal analysis, M.A.; investigation, M.A.; resources, M.A.; data curation, M.A.; writing—original draft preparation, M.A.; writing—review and editing, E.B., G.L., B.L.D., C.M.-D., and G.D.; supervision, G.D., E.B., G.L., and C.M.-D.; project administration, G.D. and H.B.; funding acquisition, G.D. and E.B. All authors have read and agreed to the published version of the manuscript.


Funding

This research was partially funded by (1) the Spanish Ministry of Science (TIN2016-77356-P, PID2019-108392GB-I00 AEI/10.13039/501100011033); (2) MCIN/AEI/10.13039/501100011033 (grant PID2021-123673OB-C31); (3) "ERDF A way of making Europe", Consellería d'Innovació, Universitats, Ciencia i Societat Digital from Comunitat Valenciana (APOSTD/2021/227) through the European Social Fund (Investing In Your Future); (4) the Research Services of Universitat Politècnica de València (PAID-PD-22); (5) FAPERGS/Brazil (Proc. 19/2551-0001279-9, 19/2551-0001660, 23/2551-0000126-8); and (6) CNPq/Brazil (Proc. 301618/2019-4, 305805/2021-5).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

This work was partially supported by grant PID2021-123673OB-C31 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", by Consellería d'Innovació, Universitats, Ciencia i Societat Digital from Comunitat Valenciana (APOSTD/2021/227) through the European Social Fund (Investing In Your Future), by a grant from the Research Services of Universitat Politècnica de València (PAID-PD-22), by FAPERGS/Brazil (Proc. 19/2551-0001279-9, 19/2551-0001660), by CNPq/Brazil (301618/2019-4, 305805/2021-5), and by the Programa de Apoio à Fixação de Jovens Doutores no Brasil (23/2551-0000126-8).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.


  1. Suresh, K.; Srinivasa, P. Various Image Segmentation Algorithms: A Survey. In Smart Intelligent Computing and Applications; Springer: Singapore, 2019; Volume 105, pp. 233–239. [Google Scholar]
  2. Martin, D.R. An Empirical Approach to Grouping and Segmentation; University of California: Berkeley, CA, USA, 2002. [Google Scholar]
  3. Zhang, K.; Zhang, L.; Lam, K.M.; Zhang, D. A Level Set Approach to Image Segmentation With Intensity Inhomogeneity. IEEE Trans. Cybern. 2016, 46, 546–557. [Google Scholar] [CrossRef] [PubMed]
  4. Wei, Y.; Liang, X.; Chen, Y.; Shen, X.; Cheng, M.M.; Feng, J.; Zhao, Y.; Yan, S. STC: A Simple to Complex Framework for Weakly-Supervised Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2314–2320. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Arbelaez, P.; Pont-Tuset, J.; Barron, J.; Marques, F.; Malik, J. Multiscale Combinatorial Grouping. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 328–335. [Google Scholar]
  6. Wang, X.F.; Huang, D.S.; Xu, H. An efficient local Chan–Vese model for image segmentation. Pattern Recognit. 2010, 43, 603–618. [Google Scholar] [CrossRef]
  7. Yang, M.H.; Kriegman, D.; Ahuja, N. Detecting faces in images: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 34–58. [Google Scholar] [CrossRef] [Green Version]
  8. Shotton, J.; Blake, A.; Cipolla, R. Multiscale Categorical Object Recognition Using Contour Fragments. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1270–1281. [Google Scholar] [CrossRef]
  9. Mohan, K.; Seal, A.; Krejcar, O.; Yazidi, A. Facial Expression Recognition Using Local Gravitational Force Descriptor-Based Deep Convolution Neural Networks. IEEE Trans. Instrum. Meas. 2021, 70, 1–12. [Google Scholar] [CrossRef]
  10. Olson, C.; Huttenlocher, D. Automatic target recognition by matching oriented edge pixels. IEEE Trans. Image Process. 1997, 6, 103–113. [Google Scholar] [CrossRef] [Green Version]
  11. Vu, N.S.; Caplier, A. Enhanced Patterns of Oriented Edge Magnitudes for Face Recognition and Image Matching. IEEE Trans. Image Process. 2012, 21, 1352–1365. [Google Scholar]
  12. Drolia, U.; Guo, K.; Tan, J.; Gandhi, R.; Narasimhan, P. Cachier: Edge-Caching for Recognition Applications. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017; pp. 276–286. [Google Scholar]
  13. Li, M.; Lin, Z.; Mech, R.; Yumer, E.; Ramanan, D. Photo-Sketching: Inferring Contour Drawings From Images. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; pp. 1403–1412. [Google Scholar]
  14. Pavithra, L.; Sharmila, T.S. An efficient framework for image retrieval using color, texture and edge features. Comput. Electr. Eng. 2018, 70, 580–593. [Google Scholar] [CrossRef]
  15. Gordo, A.; Almazán, J.; Revaud, J.; Larlus, D. Deep Image Retrieval: Learning Global Representations for Image Search. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 241–257. [Google Scholar]
16. Lin, K.; Yang, H.F.; Hsiao, J.H.; Chen, C.S. Deep learning of binary hash codes for fast image retrieval. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 7–12 June 2015; pp. 27–35.
17. Radenovic, F.; Iscen, A.; Tolias, G.; Avrithis, Y.; Chum, O. Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5706–5715.
18. Zhao, Z.Q.; Huang, D.S.; Sun, B.Y. Human face recognition based on multi-features using neural networks committee. Pattern Recognit. Lett. 2004, 25, 1351–1358.
19. Chen, W.S.; Yuen, P.; Huang, J.; Dai, D.Q. Kernel Machine-Based One-Parameter Regularized Fisher Discriminant Method for Face Recognition. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2005, 35, 659–669.
20. Li, B.; Zheng, C.H.; Huang, D.S. Locally linear discriminant embedding: An efficient method for face recognition. Pattern Recognit. 2008, 41, 3813–3821.
21. Zhang, W.; Wang, F.; Zhu, L.; Zhou, Z. Corner detection using Gabor filters. IET Image Process. 2014, 8, 639–646.
22. Zhang, W.C.; Shui, P.L. Contour-based corner detection via angle difference of principal directions of anisotropic Gaussian directional derivatives. Pattern Recognit. 2015, 48, 2785–2797.
23. Zhang, W.; Sun, C.; Breckon, T.; Alshammari, N. Discrete Curvature Representations for Noise Robust Image Corner Detection. IEEE Trans. Image Process. 2019, 28, 4444–4459.
24. Dollar, P.; Tu, Z.; Belongie, S. Supervised Learning of Edges and Object Boundaries. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Volume 2 (CVPR’06), Washington, DC, USA, 17–22 June 2006; Volume 2, pp. 1964–1971.
25. Chi, Z.; Li, H.; Lu, H.; Yang, M.H. Dual Deep Network for Visual Tracking. IEEE Trans. Image Process. 2017, 26, 2005–2015.
26. Leal-Taixé, L.; Canton-Ferrer, C.; Schindler, K. Learning by Tracking: Siamese CNN for Robust Target Association. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 418–425.
27. Xu, Y.; Brownjohn, J.; Kong, D. A non-contact vision-based system for multipoint displacement monitoring in a cable-stayed footbridge. Struct. Control Health Monit. 2018, 25, e2155.
28. Ojha, S.; Sakhare, S. Image processing techniques for object tracking in video surveillance: A survey. In Proceedings of the 2015 International Conference on Pervasive Computing (ICPC), Pune, India, 8–10 January 2015; pp. 1–6.
29. Muthalagu, R.; Bolimera, A.; Kalaichelvi, V. Lane detection technique based on perspective transformation and histogram analysis for self-driving cars. Comput. Electr. Eng. 2020, 85, 106653.
30. Abi Zeid Daou, R.; El Samarani, F.; Yaacoub, C.; Moreau, X. Fractional Derivatives for Edge Detection: Application to Road Obstacles. In Smart Cities Performability, Cognition, & Security; Springer International Publishing: Cham, Switzerland, 2020; pp. 115–137.
31. Orhei, C.; Vert, S.; Vasiu, R. A Novel Edge Detection Operator for Identifying Buildings in Augmented Reality Applications. In Information and Software Technologies; Springer International Publishing: Cham, Switzerland, 2020; Volume 1283, pp. 208–219.
32. Kühne, G.; Richter, S.; Beier, M. Motion-based segmentation and contour-based classification of video objects. In Proceedings of the 9th ACM International Conference on Multimedia—MULTIMEDIA ’01, Ottawa, ON, Canada, 30 September–5 October 2001; p. 41.
33. Huang, Y.C.; Tung, Y.S.; Chen, J.C.; Wang, S.W.; Wu, J.L. An adaptive edge detection based colorization algorithm and its applications. In Proceedings of the 13th Annual ACM International Conference on Multimedia—MULTIMEDIA ’05, Singapore, 6–11 November 2005; p. 351.
34. Sun, T.H.; Lai, C.H.; Wong, S.K.; Wang, Y.S. Adversarial Colorization of Icons Based on Contour and Color Conditions. In Proceedings of the 27th ACM International Conference on Multimedia—MM ’19, Nice, France, 21–25 October 2019; pp. 683–691.
35. Wharton, E.J.; Panetta, K.; Agaian, S.S. Logarithmic edge detection with applications. In Proceedings of the 2007 IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 3346–3351.
36. Orujov, F.; Maskeliūnas, R.; Damaševičius, R.; Wei, W. Fuzzy based image edge detection algorithm for blood vessel detection in retinal images. Appl. Soft Comput. 2020, 94, 106452.
37. Tariq Jamal, A.; Ben Ishak, A.; Abdel-Khalek, S. Tumor edge detection in mammography images using quantum and machine learning approaches. Neural Comput. Appl. 2021, 33, 7773–7784.
38. Qiu, B.; Guo, J.; Kraeima, J.; Glas, H.H.; Zhang, W.; Borra, R.J.H.; Witjes, M.J.H.; van Ooijen, P.M.A. Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography. J. Pers. Med. 2021, 11, 492.
39. Sobel, I.; Feldman, G. A 3 × 3 isotropic gradient operator for image processing. In Pattern Classification and Scene Analysis; Duda, R., Hart, P., Eds.; John Wiley and Sons: Hoboken, NJ, USA, 1973; pp. 271–272.
40. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1980, 207, 187–217.
41. Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698.
42. Jing, J.; Liu, S.; Wang, G.; Zhang, W.; Sun, C. Recent advances on image edge detection: A comprehensive review. Neurocomputing 2022, 503, 259–271.
43. Dimuro, G.P.; Fernández, J.; Bedregal, B.; Mesiar, R.; Sanz, J.; Lucca, G.; Bustince, H. The state-of-art of the generalizations of the Choquet integral: From aggregation and pre-aggregation to ordered directionally monotone functions. Inf. Fusion 2020, 57, 27–43.
44. Marco-Detchart, C.; Lucca, G.; Lopez-Molina, C.; De Miguel, L.; Pereira Dimuro, G.; Bustince, H. Neuro-inspired edge feature fusion using Choquet integrals. Inf. Sci. 2021, 581, 740–754.
45. Qiu, H.; Su, P.; Jiang, S.; Yue, X.; Zhao, Y.; Liu, J. Learning from Human Uncertainty by Choquet Integral for Optic Disc Segmentation. In Proceedings of the ACM International Conference Proceeding Series, Macau, China, 13–15 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 7–12.
46. Pap, E.; Nedović, L.; Ralević, N. Image Fuzzy Segmentation Using Aggregated Distance Functions and Pixel Descriptors. Stud. Comput. Intell. 2021, 973, 255–273.
47. Marco-Detchart, C.; Dimuro, G.; Sesma-Sara, M.; Castillo-Lopez, A.; Fernandez, J.; Bustince, H. Consensus image feature extraction with ordered directionally monotone functions. Commun. Comput. Inf. Sci. 2018, 831, 155–166.
48. Ralević, N.; Delić, M.; Nedović, L. Aggregation of fuzzy metrics and its application in image segmentation. Iran. J. Fuzzy Syst. 2022, 19, 19–37.
49. Aggarwal, P.; Mittal, H.; Samanta, P.; Dhruv, B. Review of Segmentation Techniques on Multi-Dimensional Images. In Proceedings of the 2018 International Conference on Power Energy, Environment and Intelligent Control (PEEIC 2018), Greater Noida, India, 13–14 April 2018; pp. 268–273.
50. Dimuro, G.; Bustince, H.; Fernandez, J.; Sanz, J.; Lucca, G.; Bedregal, B. On the definition of the concept of pre-t-conorms. In Proceedings of the IEEE International Conference on Fuzzy Systems, Naples, Italy, 9–12 July 2017.
51. Delić, M.; Nedović, L.; Pap, E. Extended power-based aggregation of distance functions and application in image segmentation. Inf. Sci. 2019, 494, 155–173.
52. Nedovic, L.; Ralevic, N.M.; Pavkov, I. Aggregated distance functions and their application in image processing. Soft Comput. 2018, 22, 4723–4739.
53. Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Springer: Berlin, Germany, 2007.
54. Bustince, H.; Mesiar, R.; Kolesárová, A.; Dimuro, G.; Fernandez, J.; Diaz, I.; Montes, S. On some classes of directionally monotone functions. Fuzzy Sets Syst. 2020, 386, 161–178.
55. Lucca, G.; Sanz, J.A.; Dimuro, G.P.; Bedregal, B.; Mesiar, R.; Kolesárová, A.; Bustince, H. Preaggregation Functions: Construction and an Application. IEEE Trans. Fuzzy Syst. 2016, 24, 260–272.
56. Mubashar, M.; Khan, N.; Sajid, A.; Javed, M.; Hassan, N. Have We Solved Edge Detection? A Review of State-of-the-Art Datasets and DNN-Based Techniques. IEEE Access 2022, 10, 70541–70552.
57. Ghosh, C.; Majumder, S.; Ray, S.; Datta, S.; Mandal, S.N. Different EDGE Detection Techniques: A Review. In Electronic Systems and Intelligent Computing; Mallick, P.K., Meher, P., Majumder, A., Das, S.K., Eds.; Springer Singapore: Singapore, 2020; pp. 885–898.
58. Grabisch, M.; Marichal, J.; Mesiar, R.; Pap, E. Aggregation Functions; Cambridge University Press: Cambridge, UK, 2009.
59. Agrawal, A.; Bhogal, R.K. A Review—Edge Detection Techniques in Dental Images. In Proceedings of the International Conference on ISMAC in Computational Vision and Bio-Engineering (ISMAC-CVB), Palladam, India, 16–17 May 2018; Pandian, D., Fernando, X., Baig, Z., Shi, F., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 1359–1378.
60. Magnier, B.; Abdulrahman, H.; Montesinos, P. A review of supervised edge detection evaluation methods and an objective comparison of filtering gradient computations using hysteresis thresholds. J. Imaging 2018, 6, 74.
61. McIlhagga, W. The Canny Edge Detector Revisited. Int. J. Comput. Vis. 2010, 91, 251–261.
62. Yang, K.; Gao, S.; Li, C.; Li, Y. Efficient Color Boundary Detection with Color-Opponent Mechanisms. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013.
63. Zhang, W.; Zhao, Y.; Breckon, T.P.; Chen, L. Noise robust image edge detection based upon the automatic anisotropic Gaussian kernels. Pattern Recognit. 2017, 63, 193–205.
64. Prewitt, J.M. Object enhancement and extraction. Pict. Process. Psychopictorics 1970, 10, 15–19.
65. Roberts, L.G. Machine Perception of Three-Dimensional Solids; Outstanding Dissertations in the Computer Sciences; Garland Publishing: New York, NY, USA, 1980.
66. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916.
67. Xie, S.; Tu, Z. Holistically-Nested Edge Detection. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
68. Yang, J.; Price, B.; Cohen, S.; Lee, H.; Yang, M.H. Object Contour Detection with a Fully Convolutional Encoder-Decoder Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
69. Liu, Y.; Cheng, M.M.; Hu, X.; Bian, J.W.; Zhang, L.; Bai, X.; Tang, J. Richer Convolutional Features for Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1939–1946.
70. Deng, R.; Shen, C.; Liu, S.; Wang, H.; Liu, X. Learning to Predict Crisp Boundaries. In Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; pp. 570–586.
71. He, J.; Zhang, S.; Yang, M.; Shan, Y.; Huang, T. BDCN: Bi-Directional Cascade Network for Perceptual Edge Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 100–113.
72. Wibisono, J.K.; Hang, H.M. FINED: Fast Inference Network for Edge Detection. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021.
73. Pu, M.; Huang, Y.; Liu, Y.; Guan, Q.; Ling, H. EDTER: Edge Detection with Transformer. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022.
74. Su, Z.; Liu, W.; Yu, Z.; Hu, D.; Liao, Q.; Tian, Q.; Pietikainen, M.; Liu, L. Pixel Difference Networks for Efficient Edge Detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021.
75. Russo, F. Edge detection in noisy images using fuzzy reasoning. IEEE Trans. Instrum. Meas. 1998, 47, 1102–1105.
76. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353.
77. Versaci, M.; Morabito, F.C. Image Edge Detection: A New Approach Based on Fuzzy Entropy and Fuzzy Divergence. Int. J. Fuzzy Syst. 2021, 23, 918–936.
78. Melin, P.; Gonzalez, C.I.; Castro, J.R.; Mendoza, O.; Castillo, O. Edge-Detection Method for Image Processing Based on Generalized Type-2 Fuzzy Logic. IEEE Trans. Fuzzy Syst. 2014, 22, 1515–1525.
79. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE-2007-01; Keele University and University of Durham: Durham, UK, 2007.
80. Zhang, Y.; Wang, H.; Zhou, H.; Deng, P. A mixture model for image boundary detection fusion. IEICE Trans. Inf. Syst. 2018, E101D, 1159–1166.
81. Gu, Y.; Lv, J.; Bo, J.; Zhao, B.; Zheng, K.; Zhao, Y.; Tao, J.; Qin, Y.; Wang, W.; Liang, J. An Improved Wavelet Modulus Algorithm Based on Fusion of Light Intensity and Degree of Polarization. Appl. Sci. 2022, 12, 3558.
82. Lin, C.; Wang, Q.; Wan, S. DXYW: A depth-guided multi-channel edge detection model. Signal Image Video Process. 2023, 17, 481–489.
83. Ge, Y.; Zhang, C.; Wang, K.; Liu, Z.; Bi, H. WGI-Net: A weighted group integration network for RGB-D salient object detection. Comput. Vis. Media 2021, 7, 115–125.
84. Fang, A.; Zhao, X.; Zhang, Y. Cross-modal image fusion guided by subjective visual attention. Neurocomputing 2020, 414, 333–345.
85. Bentkowska, U.; Kepski, M.; Mrukowicz, M.; Pekala, B. New fuzzy local contrast measures: Definitions, evaluation and comparison. In Proceedings of the IEEE International Conference on Fuzzy Systems, Glasgow, UK, 19–24 July 2020; pp. 1–8.
86. Yang, Y.; Wu, J.; Huang, S.; Fang, Y.; Lin, P.; Que, Y. Multimodal Medical Image Fusion Based on Fuzzy Discrimination with Structural Patch Decomposition. IEEE J. Biomed. Health Inform. 2019, 23, 1647–1660.
87. Flores-Vidal, P.; Gómez, D.; Castro, J.; Montero, J. New Aggregation Approaches with HSV to Color Edge Detection. Int. J. Comput. Intell. Syst. 2022, 15, 78.
88. Nagarajan, D.; Lathamaheswari, M.; Kavikumar, J.; Hamzha, A. A Type-2 Fuzzy in image extraction for DICOM image. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 351–362.
89. Nagarajan, D.; Lathamaheswari, M.; Sujatha, R.; Kavikumar, J. Edge Detection on DICOM Image using Triangular Norms in Type-2 Fuzzy. Int. J. Adv. Comput. Sci. Appl. 2018, 9, 462–475.
90. Martinez, G.E.; Gonzalez, I.C.; Mendoza, O.; Melin, P. General Type-2 Fuzzy Sugeno Integral for Edge Detection. J. Imaging 2019, 8, 71.
91. Nedović, L.; Delić, M.; Ralević, N.M. OWA aggregated distance functions and their application in image segmentation. In Proceedings of the IEEE 16th International Symposium on Intelligent Systems and Informatics (SISY 2018), Subotica, Serbia, 13–15 September 2018; pp. 311–316.
92. Ralević, N.; Nedović, L.; Krstanović, L.; Ilić, V.; Dragić, D. Color Image Segmentation Using Distance Functions Based on Aggregation of Pixels Colors. In Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation; Kahraman, C., Cebi, S., Cevik Onar, S., Oztaysi, B., Tolga, A.C., Sari, I.U., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 717–724.
93. Li, F.; Lin, C.; Zhang, Q.; Wang, R. A Biologically Inspired Contour Detection Model Based on Multiple Visual Channels and Multi-Hierarchical Visual Information. IEEE Access 2020, 8, 15410–15422.
94. Kaur, G.; Garg, H. A new method for image processing using generalized linguistic neutrosophic cubic aggregation operator. Complex Intell. Syst. 2022, 8, 4911–4937.
95. Gudipalli, A.; Mandava, S.; Sudheer, P.; Saravanan, M. Hybrid colour infrared image edge detection using RGB-YCbCr image fusion. Int. J. Adv. Sci. Technol. 2019, 28, 101–108.
96. Lopez-Molina, C.; Montero, J.; Bustince, H.; De Baets, B. Self-adapting weighted operators for multiscale gradient fusion. Inf. Fusion 2018, 44, 136–146.
97. Hait, S.; Mesiar, R.; Gupta, P.; Guha, D.; Chakraborty, D. The Bonferroni mean-type pre-aggregation operators construction and generalization: Application to edge detection. Inf. Fusion 2022, 80, 226–240.
98. Wang, Y.; Zhang, N.; Yan, H.; Zuo, M.; Liu, C. Using Local Edge Pattern Descriptors for Edge Detection. Int. J. Pattern Recognit. Artif. Intell. 2018, 32, 1850006.
99. Choquet, G. Theory of Capacities. Ann. l’Inst. Fourier 1954, 5, 131–295.
100. Klement, E.P.; Mesiar, R.; Pap, E. Triangular Norms; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2000.
101. Yager, R.R. On Ordered Weighted Averaging Aggregation Operators in Multicriteria Decisionmaking. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190.
102. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Springer: Berlin/Heidelberg, Germany, 1981.
Figure 1. Schema of the gradient-based methods.
Figure 2. Chronology of detection methods without using aggregation functions or fuzzy methods [39,40,41,61,62,63,64,65,66,67,68,69,70,71,72,73,74].
Figure 3. Chronology of fuzzy methods presented in this paper [57,75,77,78].
Figure 4. Schema of fuzzy-based methods.
Figure 5. Terms of inclusion, exclusion, and number of selected papers.
Table 1. The gradient magnitude and orientation calculation for a scalar image I, where I θ represents the derivative of the image using the first-order filter in orientation θ in radians [60].
Operator Type | Fixed Operator | Filter-Oriented | Half-Gaussian Kernels
Magnitude of gradient | $|\nabla I| = \sqrt{I_0^2 + I_{\pi/2}^2}$ | $|\nabla I| = \max_{\theta \in [0,\pi[} |I_\theta|$ | $|\nabla I| = \max_{\theta \in [0,2\pi[} |I_\theta| - \min_{\theta \in [0,2\pi[} |I_\theta|$
Gradient direction | $\eta = \arctan\left(I_{\pi/2}/I_0\right)$ | $\eta = \arg\max_{\theta \in [0,\pi[} |I_\theta| + \frac{\pi}{2}$ | $\eta = \left(\arg\max_{\theta \in [0,2\pi[} I_\theta + \arg\min_{\theta \in [0,2\pi[} I_\theta\right)/2$
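The fixed-operator column of Table 1 can be illustrated with a short sketch. This is an illustration only, not code from any reviewed paper; it uses central differences as the first-order derivative filter, although a Sobel or Prewitt mask [39,64] could be substituted.

```python
import numpy as np

def gradient_fixed_operator(img):
    """Fixed-operator gradient of a scalar image (Table 1, first column).

    I_0 is the derivative at theta = 0 (horizontal), I_{pi/2} the derivative
    at theta = pi/2 (vertical); central differences serve as the filter.
    """
    img = np.asarray(img, dtype=float)
    i0 = np.zeros_like(img)      # derivative in orientation theta = 0
    ipi2 = np.zeros_like(img)    # derivative in orientation theta = pi/2
    i0[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    ipi2[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    magnitude = np.sqrt(i0**2 + ipi2**2)       # |grad I| = sqrt(I_0^2 + I_{pi/2}^2)
    direction = np.arctan2(ipi2, i0)           # eta = arctan(I_{pi/2} / I_0)
    return magnitude, direction

# A vertical step edge: the magnitude peaks at the transition columns.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag, ang = gradient_fixed_operator(img)
```

Filter-oriented and half-Gaussian variants would instead sweep a bank of oriented derivative filters over $\theta$ and take the max (and min) responses per pixel.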
Table 3. Summary of papers by method.
Multiple Descriptors Extraction and Aggregation | Based on Aggregation of Distance Functions and FCM | Based on Fuzzy Theory: Type-2 Fuzzy and Neutrosophic Set
Table 4. Edge detection using clustering and distance functions or fuzzy measurements.
Ref. | Aggregation | Measures | Clustering Algorithm
[91] | OWA | $d_S(p_1, p_2) = \frac{1}{255}|s_2 - s_1|$ and $d_N(p_1, p_2) = \frac{1}{255}|n_2 - n_1|$ ¹ |
[48] | GQ-AM | $C_\omega = \tau_\omega \cdot t_\omega$ ² |
[51] | EPP and EWAMP | $d_{\lambda,\omega}(p_1, p_2)$ ³ | FCM
[46] | OWA, PP, and WAMP | $d_{\alpha,\omega[5],\lambda[5]}(p_{i,j}, p_{k,n})$ ³ |
[52] | AOOCC | $d_{\alpha_r,\alpha_g,\alpha_b}((r_1,g_1,b_1),(r_2,g_2,b_2))$; $d_r((r_1,g_1,b_1),(r_2,g_2,b_2))$; $d_g((r_1,g_1,b_1),(r_2,g_2,b_2))$; and $d_b((r_1,g_1,b_1),(r_2,g_2,b_2))$ ⁴ |
¹ $s_{i,j}$ represents the gray level of the pixel and $n_{i,j}$ the average gray level in its eight-connected neighborhood; ² $\tau$ and $t$ are described in Equations (25) and (26), where $F_i$ are the normalized color components, $D_i$ is a similarity descriptor, and $K$ a coefficient of adjustment; ³ Equation (16), where $\omega$ and $\lambda$ are adjustment parameters; ⁴ Equations (12)–(15).
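As a concrete illustration of the OWA operator [101] underlying several entries of Table 4, the sketch below aggregates two normalized pixel distances. The distance values and weight vectors are invented for the example; the reviewed methods tune the weights per application.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging (Yager, 1988): sort the inputs in
    descending order, then take the dot product with fixed weights
    summing to 1. The weights attach to *ranks*, not to sources."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    if abs(w.sum() - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    return float(np.dot(v, w))

# Aggregating two normalized pixel dissimilarities, e.g. a gray-level
# distance d_S and a neighborhood distance d_N (notation from Table 4):
d_s, d_n = 0.8, 0.2
mean_like = owa([d_s, d_n], [0.5, 0.5])   # arithmetic mean: 0.5
max_like = owa([d_s, d_n], [1.0, 0.0])    # maximum: 0.8
min_like = owa([d_s, d_n], [0.0, 1.0])    # minimum: 0.2
```

Choosing the weight vector thus interpolates between min, mean, and max behavior, which is what lets these methods trade off strict versus lenient evidence of a boundary before feeding the aggregated distance to a clustering step such as FCM [102].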
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Amorim, M.; Dimuro, G.; Borges, E.; Dalmazo, B.L.; Marco-Detchart, C.; Lucca, G.; Bustince, H. Systematic Review of Aggregation Functions Applied to Image Edge Detection. Axioms 2023, 12, 330.

