Article

On Edge Detection Algorithms for Water-Repellent Images of Insulators Taking into Account Efficient Approaches

School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(7), 1418; https://doi.org/10.3390/sym15071418
Submission received: 19 June 2023 / Revised: 7 July 2023 / Accepted: 9 July 2023 / Published: 14 July 2023

Abstract

Computer vision has become an essential interdisciplinary field that aims to extract valuable information from digital images or videos. To develop novel concepts in this area, researchers have employed powerful tools from both pure and applied mathematics. Recently, the use of fractional differential equations has gained popularity in practical applications. Moreover, symmetry is a critical concept in digital image processing that can significantly improve edge detection. Incorporating symmetry-based techniques, such as the Hough transform and the Gabor filter, can enhance the accuracy and robustness of edge detection algorithms. Additionally, convolutional neural networks (CNNs) can leverage symmetry for image edge detection by identifying symmetrical patterns, improving accuracy. As a result, symmetry offers promising applications in enhancing image analysis tasks and improving edge detection accuracy. This article focuses on one of the practical aspects of research in computer vision, namely, edge determination in image segmentation for water-repellent images of insulators. The article proposes two general structures for creating fractional masks, which are then calculated using the Atangana–Baleanu–Caputo fractional integral. Numerical simulations are utilized to showcase the performance and effectiveness of the suggested designs. The simulations’ outcomes reveal that the fractional masks proposed in the study exhibit superior accuracy and efficiency compared to various widely used masks documented in the literature. This is a significant achievement of this study, as it introduces new masks that have not been previously used in edge detection algorithms for water-repellent images of insulators. In addition, the computational cost of the suggested fractional masks is equivalent to that of traditional masks.
The novel structures employed in this article can serve as suitable and efficient alternative masks for detecting image edges as opposed to the commonly used traditional kernels. Finally, this article sheds light on the potential of fractional differential equations in computer vision research and the benefits of developing new approaches to improve edge detection.

1. Introduction

In recent years, there has been a notable surge in utilizing symmetry for mathematical modeling and analysis of significant real-world problems. Symmetry is a concept that has been utilized in science and technology for centuries. It refers to the property of an object or system that remains invariant under certain transformations, such as reflection, rotation, or translation. Symmetry can be found in various natural phenomena, and it has been used to develop many technological applications. One of the most well-known applications of symmetry is in crystallography. The study of crystals involves analyzing their symmetrical properties to understand their atomic structure and chemical composition. The discovery of X-ray crystallography in the early 20th century enabled scientists to analyze the internal structure of crystals with unprecedented detail. By studying the symmetrical arrangement of atoms within a crystal, scientists can determine its physical and chemical properties, including its strength, electrical conductivity, and optical behavior. Another important application of symmetry is in the field of physics. Many fundamental laws of nature are based on symmetries. For example, the principle of conservation of energy is based on the fact that physical laws are invariant under time translation. Similarly, the principle of conservation of momentum is based on the fact that physical laws are invariant under spatial translation. The concept of symmetry has also contributed significantly to the development of modern particle physics, where symmetries are used to describe the interactions between subatomic particles. In engineering, symmetry is often used to achieve balance and stability in structures. For example, in civil engineering, symmetric designs are used to distribute loads evenly across buildings, bridges, and other structures. Similarly, in mechanical engineering, symmetric designs are used to achieve balanced movement in machinery and vehicles. 
Symmetry is also increasingly being used in computer science and artificial intelligence. The use of symmetry can help reduce the complexity of algorithms, making them more efficient and easier to implement. Some machine learning algorithms use symmetry to identify patterns in data and make predictions about future outcomes.
Image processing refers to the manipulation or analysis of digital images using algorithms and mathematical operations. Image processing comprises several operations. For example, image denoising is the process of reducing or removing unwanted noise from a digital image, resulting in a cleaner and visually more appealing picture [1,2,3]. Image segmentation is the process of dividing an image into multiple distinct regions or objects based on their visual characteristics, enabling more precise analysis and understanding of the image content [4]. Image inpainting refers to the restoration or filling in of missing or damaged parts of an image using surrounding information, resulting in a visually coherent and complete representation of the original image [5]. Image compression is the technique of reducing the file size of an image while preserving its visual quality by removing redundant or nonessential data, making it more efficient for storage and transmission purposes [6]. Image reconstruction refers to the process of recovering a high-quality image from incomplete or degraded input data, often achieved through advanced algorithms and techniques that fill in missing information or enhance the image based on available data [7,8]. Also, video signal processing involves manipulating and enhancing video signals to improve their quality, correct errors, adjust color and brightness levels, apply special effects, compress the data for efficient storage or transmission, and perform other operations to enhance the visual experience of videos [9]. Stereo image processing is a technique that involves analyzing and manipulating a pair of stereo images taken from slightly different perspectives to extract depth information, perform 3D reconstruction, create visual effects, or generate immersive experiences by simulating the perception of depth in the human visual system [10]. 
Also, optimization techniques play a crucial role in image processing by enhancing image quality, reducing noise, and improving computational efficiency through methods like image denoising algorithms or optimal parameter selection for various filters [11,12,13,14]. Image captioning is the task of automatically generating textual descriptions that accurately capture the content and context of an image, enabling better accessibility, understanding, and utilization of visual information for various applications such as assistive technologies, image indexing, and content recommendation systems [15]. Image matching, also known as image registration or image alignment, is the process of finding correspondences between two or more images to determine their spatial relationship or similarity. It involves identifying common features, key points, or patterns in the images and aligning them based on these matches, enabling tasks such as image stitching, object recognition, motion tracking, or image retrieval [16,17,18]. Mirror detection is the process of identifying and locating mirrors in an image or a scene. It involves the use of computer vision techniques to analyze visual information and identify objects that resemble mirrors based on their properties [19]. Machine learning is a field of artificial intelligence (AI) that focuses on the development of algorithms and models that enable computers to learn from and make predictions or decisions based on data, without being explicitly programmed. It involves creating mathematical models and algorithms that automatically improve their performance through experience [20]. Deep learning is a subset of machine learning that focuses on training artificial neural networks with multiple layers (deep architectures) to learn and make predictions from complex patterns in data. It is inspired by the structure and function of the human brain’s neural networks [21,22,23]. 
Finally, depth estimation is the process of inferring the depth information (distance) of objects within a scene from a given image or a sequence of images [24,25,26].
In recent years, image processing has made significant progress due to advancements in machine learning and deep learning algorithms. These improvements have resulted in a wide range of applications in different fields. For instance, image processing in intelligent transportation systems is utilized for tasks such as vehicle detection and tracking, license plate recognition, and traffic analysis [27,28,29]. These systems also enable incident detection, real-time monitoring, and efficient management of road networks to enhance safety and optimize traffic flow [30,31,32]. In cluster analysis, image processing can be applied to segment and group similar objects or regions within images, allowing for tasks such as object recognition, traffic sign detection, vehicle classification, and road condition assessment in intelligent transportation systems [33]. Image processing techniques are also employed in optical design to simulate and analyze the performance of optical systems, enabling tasks such as image quality assessment, distortion correction, and evaluation of system parameters, ultimately aiding in the optimization and enhancement of optical designs for a variety of applications, including imaging, microscopy, and spectroscopy [34,35]. In the context of transmitting signals, image-processing techniques can be applied to design and analyze bandpass filters for signal conditioning. These filters are used to selectively pass a specific frequency range of the transmit signal while attenuating frequencies outside the desired band, facilitating efficient transmission and reception of signals in applications such as wireless communication systems, radar, and audio processing [36,37,38,39]. In emotion recognition, image-processing techniques can be utilized to analyze facial expressions and extract relevant features such as facial landmarks, texture patterns, and color variations [40,41,42].
These features are then processed using machine learning algorithms to classify and recognize different emotions, enabling applications in fields like human-computer interaction, affective computing, and psychological research [43,44]. In pattern analysis, image-processing techniques can be employed to extract meaningful patterns and features from images or visual data. These techniques involve tasks such as image segmentation, feature extraction, and classification, allowing for the identification and characterization of complex patterns in various domains such as object recognition, medical imaging, handwriting recognition, and quality control. By leveraging image-processing algorithms, pattern analysis enables the automated interpretation and understanding of visual information in diverse applications [45,46,47,48]. In object detection, image-processing techniques play a crucial role in identifying and localizing objects within images or video streams. These techniques involve the use of algorithms such as convolutional neural networks (CNNs), feature extraction, and bounding box regression to detect and classify objects of interest based on visual cues. Object detection finds applications in autonomous driving, surveillance systems, robotics, and many other fields where real-time identification and localization of objects are required for decision-making and analysis [49,50,51,52]. In classification algorithms, image-processing techniques can be utilized to preprocess and extract relevant features from images or visual data. These techniques involve tasks such as image resizing, color normalization, feature extraction (e.g., using methods like the histogram of oriented gradients or deep-learning-based feature extraction), and dimensionality reduction. 
The processed image features are then used as input to various classification algorithms, such as support vector machines (SVMs), decision trees, random forests, or deep neural networks, to classify the input into predefined categories or classes. This enables applications in diverse fields, including image recognition, medical diagnosis, object recognition, and sentiment analysis [53,54]. In feature extraction, image-processing techniques are employed to transform raw data, such as images or signals, into a set of representative features that capture relevant information. These techniques encompass methods like edge detection, texture analysis, shape descriptors, and color histograms, which extract meaningful characteristics from the data [55,56,57]. Feature extraction plays a crucial role in various applications, such as image recognition, object detection, biometrics, and data compression, as it enables the reduction in data dimensionality while retaining key discriminative information for subsequent analysis and decision-making processes [58,59,60]. More diverse applications of this research field can be further found in [61,62,63,64,65,66].
Edge detection is a fundamental task in image processing that involves identifying and extracting the boundaries of objects or regions of interest in an image [67,68,69]. The process of edge detection plays a crucial role in many computer vision tasks such as object recognition, image segmentation, and feature extraction [70,71,72]. Symmetry can be leveraged to enhance edge detection by exploiting the inherent symmetrical properties of images. Specifically, symmetry-based techniques have been developed to improve the accuracy and robustness of edge detection algorithms. One approach to leveraging symmetry for edge detection is the Hough transform. This mathematical technique can identify lines or curves within an image by detecting symmetrical patterns. For instance, if an image contains parallel lines, the Hough transform can detect the symmetry between them and use this information for more accurate edge detection. Another technique to use symmetry in edge detection is through the Gabor filter. This type of linear filter identifies edges and other features by analyzing the local frequency and orientation of the image’s spatial structure. The detection of symmetrical patterns within this structure can enhance the detection of edges using the Gabor filter. Deep learning techniques, specifically convolutional neural networks (CNNs), can also use symmetry for image edge detection. CNNs can learn and identify underlying symmetries within an image to optimize edge detection accuracy. For example, a CNN can identify symmetrical regions within an image and leverage that information to improve edge detection.
Some well-known edge detection techniques are
  • Sobel Operator: One of the most commonly used edge detection algorithms is the Sobel operator. The Sobel operator is a gradient-based method that calculates the intensity gradient of an image at each pixel using a small convolution kernel. The Sobel operator has two kernels, one for horizontal edges and one for vertical edges, which are convolved with the image to produce two separate gradient images. The final edge map is obtained by combining these two gradient images.
  • Canny Edge Detector: The Canny edge detector is another popular method used for edge detection. The Canny edge detector is a multi-stage algorithm that involves smoothing the image to reduce noise, calculating the gradient magnitude and orientation, applying nonmaximum suppression to thin the edges, and finally thresholding to identify strong edges.
  • Laplacian Operator: The Laplacian operator is another gradient-based edge detection algorithm that calculates the second derivative of an image to detect edges. The Laplacian operator is more sensitive to noise than other edge detection algorithms, so it is often used in combination with other methods to improve edge detection performance.
  • Marr–Hildreth Edge Detector: The Marr–Hildreth edge detector uses a Laplacian of the Gaussian (LoG) filter to detect edges. The LoG filter is used to smooth the image and highlight edges, and then zero crossings are detected to identify edges.
  • Prewitt Operator: The Prewitt operator is another gradient-based edge detection algorithm that calculates the intensity gradient of an image using a 3 × 3 convolution kernel. The Prewitt operator is similar to the Sobel operator, but it has a simpler kernel and is less computationally expensive.
  • Roberts Cross Operator: The Roberts cross operator is a simple edge detection algorithm that uses two kernels to detect edges in the horizontal and vertical directions. The Roberts cross operator is less sensitive to noise than other edge detection algorithms, but it is also less accurate.
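Since the list above closes with the Roberts cross operator, the simplest of these techniques, a minimal Python sketch may clarify how such gradient kernels are applied in practice (the function name and the toy step-edge image are illustrative only, not part of any cited method):

```python
import numpy as np

def roberts_cross(img):
    """Apply the Roberts cross operator to a 2-D grayscale array.

    The two 2x2 kernels respond to diagonal intensity changes; the
    edge map is the sum of their absolute responses over each window.
    """
    kx = np.array([[1.0, 0.0], [0.0, -1.0]])
    ky = np.array([[0.0, 1.0], [-1.0, 0.0]])
    h, w = img.shape
    out = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            patch = img[i:i + 2, j:j + 2]
            out[i, j] = abs(np.sum(patch * kx)) + abs(np.sum(patch * ky))
    return out

# A vertical step edge: responses concentrate at the intensity jump.
step = np.zeros((5, 5))
step[:, 3:] = 1.0
edges = roberts_cross(step)
```

On this toy image, the response is nonzero only in the column straddling the intensity jump, which is exactly the localization behavior described above.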
Moreover, symmetry can be a useful tool in edge detection techniques, particularly when trying to identify the presence of lines or shapes in an image. One application of symmetry in edge detection is the use of the Canny edge detection algorithm. The Canny edge detection algorithm works by identifying edges in an image based on changes in intensity values across neighboring pixels. By using symmetry as a guiding principle, the algorithm can be made more robust and accurate in its identification of edges. Specifically, the Canny algorithm uses the principle of symmetry to identify and suppress edges that are not likely to be part of the true object boundary. The algorithm achieves this by performing two rounds of convolution with a Gaussian filter, followed by computation of the gradient magnitude and direction. In the first round of convolution, the filter is applied in both the x and y directions, while in the second round, it is only applied in the x direction. This ensures that the resulting edge map is symmetric and reduces the number of false edges detected. Detecting tampered digital images is vital due to the widespread use of tools for image manipulation. To address this, a new algorithm combining the faster R-CNN model with edge detection was proposed in [73]. The algorithm extracts tampering features from symmetrical ResNet101 networks and uses an RoI pooling layer to fuse features and classify tampering in the fully connected layer. Bilinear interpolation is used instead of RoI max pooling, and a region proposal network (RPN) locates forgery regions. Experiments on three datasets show that this algorithm is more effective than existing ones. A new edge detection algorithm is proposed in [74] for multi-view SAR images using a GAN network model. This overcomes the low accuracy of Canny-based methods. The GAN network generates symmetric difference nuclear SAR image data, which are used to construct an edge detection model for any direction.
Post-processing eliminates nonedges, and the Hough transform calculates edge direction. Experimental results show 93.8% accuracy, with 96.85% correct edge detection and 97.08% detection within three-pixel widths, demonstrating high accuracy for kernel SAR images. A deconvolution model that uses the Gram matrix to calculate filter response correlation and adjusts parameters to learn salient area patterns using shape templates has been proposed in [75]. It also estimates unknown blur kernels using image prior knowledge and gradient-domain algorithms. The model is robust, insensitive to noise, and overcomes the water ripple effect. The work in [75] also studies hydrophobic indicator function methods and improves the contrast of the hydrophobic image by extracting the B channel component. Connected-domain wave processing is used to filter water droplets, and “hole filling” is used to eliminate reflection problems caused by uneven illumination. The work of [76] proposes a computational method for edge detection based on precise and comprehensive goals. The approach defines detection and localization criteria for different types of edges and presents mathematical forms for these criteria. The study concludes that there is a natural uncertainty principle between detection and localization performance, leading to the development of an optimal single-operator shape for detecting edges at any scale. The proposed operator has a simple implementation using maxima in the gradient magnitude of a Gaussian-smoothed image. Ref. [77] presents a method for edge detection in digital images that uses the morphological gradient and fuzzy logic. The authors improved a basic method for edge detection by applying fuzzy logic, eliminating the need for filtering the image. Simulation results showed that the images obtained with fuzzy logic were better than those obtained using only the morphological gradient method.
The interval type-2 fuzzy inference system (IT2FIS) achieved the best results due to its ability to model uncertainty in gradient values and gray ranges. The membership function parameters were obtained directly from the images, making the proposed method applicable to images with different gray scales.
This article delves into the significance of symmetry in image processing and its various applications in edge detection. The paper sheds light on how symmetry aids in comprehending intricate systems and phenomena by revealing the fundamental governing principles. It concludes by underscoring the unresolved issues and hurdles in the exploration of symmetry and presents potential avenues for further investigation. The primary objective of this paper is to apply a novel set of approaches that efficiently employ Atangana–Baleanu–Caputo fractional operators with noninteger orders. These techniques can be used to mitigate salt-and-pepper noise from digital images, and we believe that they have great potential in the realm of image denoising. In order to accomplish this goal, our study provides a comprehensive analysis of the effectiveness and possible applications of these novel techniques. It is important to note that the approach used in this study has not been explored for edge detection algorithms in water-repellent images of insulators in the previous literature. Our results demonstrate that this fresh approach exhibits considerable potential as a feasible solution for developing efficient window masks in the field of edge detection. Through our research, we provide a detailed explanation of how this technique works and its superiority over other existing methods. Our experimental analysis shows that our proposed method is highly efficient in removing salt-and-pepper noise from digital images. This demonstrates the effectiveness of our approach and its potential use in real-world applications. Additionally, we conduct a comparative analysis of our employed technique with other well-known edge-detecting methods, and our findings indicate its superiority over other currently available approaches.
The article is structured as follows. In Section 2, the basic definitions related to fractional differential calculus will be reviewed to provide a necessary foundation for understanding the concepts discussed in the rest of the article. In the third section, different filter structures used in image processing will be presented and their effectiveness will be analyzed. This section provides a comprehensive overview of the various methods available for removing noise from digital images. The main results of the article are also presented in Section 4 where we focus on the discretization techniques for the fractional integral operator of the Atangana–Baleanu–Caputo type. In this section, several approaches for the numerical approximation of this operator are employed to construct edge-detector window masks. In Section 5, we present the results of numerical simulations and comparisons between our employed techniques and some well-known masks. These results will demonstrate the effectiveness of the proposed approach and its superiority over other existing methods. The article’s final section will provide a summary of the key findings and conclusions.

2. A Brief Overview of Some Essential Concepts in Fractional Calculus

Within this section, a concise overview of several fundamental concepts in fractional calculus is provided.
The fractional derivative of the Liouville–Caputo sense is given by [78]
$$ {}^{LC}\mathcal{D}^{\nu} P(\tau) = \frac{1}{\Gamma(1-\nu)} \int_0^{\tau} (\tau-\sigma)^{-\nu}\, \dot{P}(\sigma)\, d\sigma, \quad 0 < \nu \leq 1. $$
The Caputo–Fabrizio derivative [79] is given by
$$ {}^{CF}\mathcal{D}^{\nu} P(\tau) = \frac{(2-\nu)A(\nu)}{2(1-\nu)} \int_0^{\tau} \exp\!\left( -\frac{\nu(\tau-\sigma)}{1-\nu} \right) \dot{P}(\sigma)\, d\sigma, \quad 0 < \nu < 1, $$
where $A(\nu) = \nu/(2-\nu)$.
The Atangana–Baleanu–Caputo derivative is given by [80]
$$ {}^{ABC}\mathcal{D}^{\nu} P(\tau) = \frac{M(\nu)}{1-\nu} \int_0^{\tau} E_{\nu}\!\left( -\frac{\nu(\tau-\sigma)^{\nu}}{1-\nu} \right) \dot{P}(\sigma)\, d\sigma, \quad 0 < \nu \leq 1, $$
where $E_{\nu}(\cdot)$ is the Mittag–Leffler function, given by $E_{\nu}(\tau) = \sum_{i=0}^{\infty} \frac{\tau^{i}}{\Gamma(\nu i + 1)}$, and $M(\nu) = 1 - \nu + \nu/\Gamma(\nu)$ is the associated normalization function.
Also, the definition corresponding to the integral for this operator is as follows:
$$ {}^{ABC}I^{\nu} P(\tau) = \frac{1-\nu}{M(\nu)}\, P(\tau) + \frac{\nu}{\Gamma(\nu)\, M(\nu)} \int_0^{\tau} P(\sigma)\, (\tau-\sigma)^{\nu-1}\, d\sigma. $$
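To fix ideas, the Atangana–Baleanu–Caputo integral above can be evaluated numerically directly from its definition. The Python routine below is an illustrative sketch (the function name, the substitution used to remove the weakly singular kernel, and the trapezoidal rule are our own choices, not a method from the cited references):

```python
import numpy as np
from math import gamma

def abc_integral(P, tau, nu, n=2000):
    """Approximate the ABC fractional integral of order nu at tau:

        I^nu P(tau) = (1-nu)/M(nu) * P(tau)
                    + nu/(Gamma(nu) M(nu)) * int_0^tau P(s)(tau-s)^(nu-1) ds,

    with M(nu) = 1 - nu + nu/Gamma(nu).  The weakly singular kernel is
    removed by the substitution t = (tau - s)^nu, after which the
    integrand is smooth and the trapezoidal rule converges quickly.
    """
    M = 1.0 - nu + nu / gamma(nu)
    t = np.linspace(0.0, tau**nu, n)
    vals = P(tau - t**(1.0 / nu))          # smooth transformed integrand
    dt = t[1] - t[0]
    integral = (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1]) * dt / nu
    return (1.0 - nu) / M * P(tau) + nu / (gamma(nu) * M) * integral
```

For $P(\tau)=\tau$ the integral term has the closed form $\nu\,\tau^{\nu+1}/(\Gamma(\nu+2)M(\nu))$, which can be used to check the approximation.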
The application of fractional calculus in mathematical modeling and the applied sciences has proven to be highly valuable and impactful. In mathematical modeling, fractional calculus offers a powerful tool for capturing complex dynamics characterized by memory effects and nonlocal behavior [81,82,83,84,85,86,87].

3. Some Popular Mask-Based Techniques Used for Image Processing

3.1. Window Mask Applications in Image Denoising

Many fractional masks in the literature are based on the discretized forms of fractional operators. For example, let us consider
$$ I^{\nu} P(\tau) \approx \omega_0 P(\tau) + \omega_1 P(\tau - h) + \omega_2 P(\tau - 2h) + \omega_3 P(\tau - 3h) + \cdots, $$
where $h$ is the time-step length, and the initial coefficients of the expansion corresponding to the fractional operator are denoted by the $\omega_i$'s. This idea can be further extended to higher dimensions as
$$ {}_{x}I^{\nu} P(x,y) \approx \omega_0 P(x,y) + \omega_1 P(x - h_x, y) + \omega_2 P(x - 2h_x, y) + \omega_3 P(x - 3h_x, y) + \cdots, $$
$$ {}_{y}I^{\nu} P(x,y) \approx \omega_0 P(x,y) + \omega_1 P(x, y - h_y) + \omega_2 P(x, y - 2h_y) + \omega_3 P(x, y - 3h_y) + \cdots. $$
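For concreteness, one common way to obtain such expansion coefficients is the Grünwald–Letnikov binomial series of the fractional integral, $(1-z)^{-\nu} = \sum_i \omega_i z^i$. The recursion below is an illustrative sketch of that choice only; it is not necessarily the coefficient set used elsewhere in this paper:

```python
import numpy as np

def gl_integral_weights(nu, n):
    """First n coefficients of the Grunwald-Letnikov expansion of a
    fractional integral of order nu, from (1-z)^(-nu) = sum_i w_i z^i:

        w_0 = 1,  w_i = w_{i-1} * (nu + i - 1) / i.

    These serve as one concrete instance of the abstract omega_i in the
    truncated expansion I^nu P(tau) ~ sum_i w_i P(tau - i*h).
    """
    w = np.empty(n)
    w[0] = 1.0
    for i in range(1, n):
        w[i] = w[i - 1] * (nu + i - 1) / i
    return w
```

A quick sanity check: for $\nu = 1$ the recursion yields $\omega_i = 1$ for all $i$, recovering the rectangle rule for ordinary integration.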
The symmetrical coefficients that have been acquired can be employed to create masks for various image-processing applications. For $h_x = h_y = 1$, these masks can be designed in a variety of configurations with differing dimensions, including the examples below.
Very recently, these ideas have been used to construct the following window masks in image denoising [88]:
  • The following 3 × 3 symmetrical window mask is used as a fractional integral mask:
    $$ \Lambda_3 = [\lambda_{i,j}^{3}] := \begin{pmatrix} \omega_1 & \omega_1 & \omega_1 \\ \omega_1 & 8\omega_0 & \omega_1 \\ \omega_1 & \omega_1 & \omega_1 \end{pmatrix}. $$
  • Also, the fractional integral mask of size 5 × 5 is suggested as
    $$ \Lambda_5 = [\lambda_{i,j}^{5}] := \begin{pmatrix} \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 \\ \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 \\ \omega_2 & \omega_1 & 8\omega_0 & \omega_1 & \omega_2 \\ \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 \\ \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 \end{pmatrix}. $$
  • The structure of this mask, considering the dimensions 7 × 7 , becomes as follows:
    $$ \Lambda_7 = [\lambda_{i,j}^{7}] := \begin{pmatrix} \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 \\ \omega_3 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_3 \\ \omega_3 & \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 & \omega_3 \\ \omega_3 & \omega_2 & \omega_1 & 8\omega_0 & \omega_1 & \omega_2 & \omega_3 \\ \omega_3 & \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 & \omega_3 \\ \omega_3 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_3 \\ \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 \end{pmatrix}. $$
  • The structure of this mask, considering the dimensions 9 × 9 , becomes as follows:
    $$ \Lambda_9 = [\lambda_{i,j}^{9}] := \begin{pmatrix} \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 \\ \omega_4 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_4 \\ \omega_4 & \omega_3 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_3 & \omega_4 \\ \omega_4 & \omega_3 & \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 & \omega_3 & \omega_4 \\ \omega_4 & \omega_3 & \omega_2 & \omega_1 & 8\omega_0 & \omega_1 & \omega_2 & \omega_3 & \omega_4 \\ \omega_4 & \omega_3 & \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 & \omega_3 & \omega_4 \\ \omega_4 & \omega_3 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_3 & \omega_4 \\ \omega_4 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_4 \\ \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 \end{pmatrix}. $$
  • Finally, for the dimensions 11 × 11, this mask takes the following symmetric form:
    $$ \Lambda_{11} = [\lambda_{i,j}^{11}] := \begin{pmatrix} \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 \\ \omega_5 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_2 & \omega_1 & 8\omega_0 & \omega_1 & \omega_2 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_2 & \omega_1 & \omega_1 & \omega_1 & \omega_2 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_2 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_3 & \omega_4 & \omega_5 \\ \omega_5 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_4 & \omega_5 \\ \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 & \omega_5 \end{pmatrix}. $$
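The concentric-ring layout shared by all five masks above can be generated programmatically for any size. The helper below is an illustrative sketch (the function name and the use of the Chebyshev distance to index the rings are our own device):

```python
import numpy as np

def ring_mask(omega):
    """Build the symmetric (2m+1)x(2m+1) window whose k-th concentric
    ring (Chebyshev distance k from the centre) holds omega[k], with
    the centre cell set to 8*omega[0], matching the masks' layout.
    """
    m = len(omega) - 1
    n = 2 * m + 1
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            k = max(abs(i - m), abs(j - m))   # ring index of cell (i, j)
            M[i, j] = omega[k]
    M[m, m] = 8 * omega[0]                    # special centre weight
    return M
```

With placeholder values `omega = [1.0, 2.0, 3.0]` this reproduces the 5 × 5 ring pattern, and the result is symmetric under both transposition and reflection, as required.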

3.2. Window Mask Applications in Edge Detection

Edge detection is a fundamental task in image processing, and one of the most widely used filters for this purpose is the Prewitt operator. This method relies on approximating the first-order derivative using central differences and produces results by convolving an image with two specific kernels. In recent years, there has been significant interest in optimizing edge detection techniques, particularly with regard to increasing accuracy and reducing computational complexity. Despite this ongoing research, the Prewitt operator remains a popular choice due to its simplicity and effectiveness. In this paper, we examine the Prewitt operator in both directions using the following window masks:
$$ H_x = \begin{pmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{pmatrix}, \qquad H_y = \begin{pmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}. $$
The Prewitt operator also has some limitations. One disadvantage is that it can produce weaker edge responses compared to other more advanced edge detection methods, particularly around edges that are not aligned with the x- or y-axis of the image. Another limitation is the inability to detect edges at angles that do not align with the kernel orientations used in the filter. This can lead to missed edges or inaccurate edge detections in certain images. Finally, like many traditional edge detection methods, the Prewitt operator is sensitive to noise and can produce false edge responses in noisy images. Despite these limitations, the Prewitt operator remains a popular choice for edge detection due to its simplicity and effectiveness in many applications.
The Sobel operator is a commonly used filter that utilizes central finite differences and gives more weight to the pixels closer to the center of the mask compared to the Prewitt operator, resulting in more accurate edge detection. To achieve this, the Sobel operator employs specific convolution kernels which differ from those used in the Prewitt operator. Although the Sobel operator is often compared to the Prewitt operator, each has unique strengths and weaknesses that make them suitable for different applications. In this paper, we delve into the Sobel operator and explore its effectiveness as an edge detection tool using the following two convolution kernels:
$$ H_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad H_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}. $$
While the Sobel operator is generally effective at detecting edges in images, it does have some limitations. One disadvantage is its sensitivity to noise, which can result in false edge detections. Additionally, the Sobel operator’s kernel size is fixed, making it difficult to adapt to images with varying resolutions or varying edge widths. Finally, the Sobel operator may not perform well in cases where edges are curved or occur at angles that do not align with the mask orientations used in the filter. Despite these drawbacks, the Sobel operator remains a widely used and effective tool for edge detection in many image-processing applications.
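A short Python sketch (with illustrative names and a toy step-edge image) shows how the two Sobel kernels are combined into a single edge response:

```python
import numpy as np

def convolve_valid(img, kernel):
    """2-D 'valid'-mode convolution (kernel flipped, as in true convolution)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img):
    """Combine the two directional responses into a single edge map."""
    gx = convolve_valid(img, SOBEL_X)
    gy = convolve_valid(img, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)
```

Applied to a vertical step edge, the horizontal kernel responds strongly in the columns straddling the jump while the vertical kernel stays silent, so the combined map localizes the edge as expected.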
Over the past few years, fractional differential operators have been increasingly utilized in image processing to achieve significant advancements in areas such as texture enhancement, noise reduction, and edge analysis. The effectiveness of these operators in enhancing image quality has been demonstrated through numerous impressive results. In the domain of image processing, there is a fundamental formula that plays a significant role in expanding fractional differential operators. This formula is essential for performing various operations such as edge detection and image enhancement. It involves breaking down an image into its constituent parts and applying fractional differential operators to these parts. The resulting output provides valuable information about the underlying structures present in the image, which can be further utilized for various applications, including computer vision, medical imaging, and satellite imagery analysis. In this paper, we delve into the use of fractional differential operators in depth, exploring their capabilities and limitations for improving various aspects of image processing. Here again, we assume that we have
$$I_{x}^{\nu}P(x,y)\approx\ell_{0}P(x,y)+\ell_{1}P(x-1,y)+\ell_{2}P(x-2,y)+\ell_{3}P(x-3,y)+\cdots,$$
$$I_{y}^{\nu}P(x,y)\approx\ell_{0}P(x,y)+\ell_{1}P(x,y-1)+\ell_{2}P(x,y-2)+\ell_{3}P(x,y-3)+\cdots.$$
By using the coefficients of these expansions, two different types of masks can be identified:
  • Type I
    $$H_{x}=\begin{pmatrix}\ell_{0}&0&-\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ \ell_{2}&0&-\ell_{2}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ 0&0&0\\ -\ell_{0}&-\ell_{1}&-\ell_{2}\end{pmatrix}.\tag{10}$$
  • Type II
    $$H_{x}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ \ell_{3}&0&-\ell_{3}\\ -\ell_{2}&-\ell_{1}&-\ell_{0}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{2}&\ell_{3}&\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ -\ell_{0}&-\ell_{3}&-\ell_{2}\end{pmatrix}.\tag{11}$$
The construction of these two kernels involves utilizing adjacent pixels in the vertical, horizontal, and diagonal directions around the central pixel. This critical feature makes them an exceptional tool for capturing intricate image details, including edges and texture. This ability to extract fine image details using these kernels is due to the specific pattern in which they are designed, which enables them to capture and accentuate the subtle variations in image intensity. As a result, these kernels find widespread application in various image-processing tasks, including feature extraction, object recognition, and image segmentation.
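A small sketch of how such 3 × 3 windows can be assembled from four coefficients ℓ0, …, ℓ3. The antisymmetric sign placement (opposite neighbours of the central pixel receive opposite signs) is the usual convention for derivative masks and is an assumption of this sketch.

```python
import numpy as np

def type1_masks(l0, l1, l2):
    """Type I pair: coefficients fill one side of the central column of H_x,
    mirrored with opposite sign; H_y is the transpose of H_x."""
    Hx = np.array([[l0, 0.0, -l0],
                   [l1, 0.0, -l1],
                   [l2, 0.0, -l2]])
    return Hx, Hx.T

def type2_masks(l0, l1, l2, l3):
    """Type II pair: coefficients wound around the central pixel so that the
    masks also involve the diagonal neighbours."""
    Hx = np.array([[l0, l1, l2],
                   [l3, 0.0, -l3],
                   [-l2, -l1, -l0]])
    Hy = np.array([[l2, l3, l0],
                   [l1, 0.0, -l1],
                   [-l0, -l3, -l2]])
    return Hx, Hy
```

Both Type II masks are antisymmetric under a 180-degree rotation, which is what makes opposite neighbours contribute with opposite signs to the directional response.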
Once the kernels are created, it is common practice to utilize their absolute values for approximating the gradient moduli as
$$|H|\approx\big|P(x,y)*H_{x}\big|+\big|P(x,y)*H_{y}\big|,$$
where P ( x , y ) represents the pixel value of the image, and ∗ denotes the image convolution operator. This approach involves computing the magnitude of changes in image intensity across its various regions by applying the kernels to these regions and taking the absolute value of the resulting output. This technique of using absolute values to approximate gradient moduli finds widespread application in various image-processing tasks, including edge detection, motion estimation, and texture analysis. It enables the identification of significant local variations in image intensity that are crucial for detecting edges and differentiating between various image patterns. Moreover, the use of such an approach allows for efficient and accurate computation of gradient moduli, which can be further utilized for performing more complex image-processing tasks, such as feature extraction and object recognition.

4. Some Recent Fractional Masks in Edge Detecting

Due to the nonlocal nature of the fractional operator (4), its numerical approximation can be quite challenging. Various methods are available for approximating fractional operators, and each has its strengths and weaknesses depending on the specific application. One common approach is to use a finite difference method, in which the operator is approximated using discrete sample points. Several such approaches to the discretization of fractional derivatives are presented in [89]; we review them in the rest of this section.
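To make the finite-difference idea concrete before turning to the ABC-based schemes, here is a minimal one-dimensional sketch that approximates a Grünwald–Letnikov fractional derivative by a weighted sum of samples. The recursive weight generation is a standard trick to avoid Gamma-function poles; the function is purely illustrative and is not one of the masks derived below.

```python
def gl_fractional_derivative(f, nu, h=1.0):
    """Approximate the order-nu Grunwald-Letnikov derivative at the last
    sample of the sequence f, using step size h.

    The binomial weights w_k = (-1)^k C(nu, k) are built by the recursion
    w_{k+1} = w_k * (k - nu) / (k + 1), which avoids Gamma-function poles
    at non-positive integers.
    """
    acc, w = 0.0, 1.0
    for k in range(len(f)):
        acc += w * f[len(f) - 1 - k]  # weight the k-th step into the past
        w *= (k - nu) / (k + 1)       # next binomial weight
    return acc / h**nu
```

For integer orders the weights truncate: at ν = 1 the sum collapses to the ordinary backward difference, and at ν = 0 it returns the sample itself.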

4.1. The Results of the Grünwald–Letnikov (GL) Approach

First, consider the following fractional integral operator
$$I_{ABC}^{\nu}P(\tau)=\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{\Gamma(\nu)M(\nu)}\int_{0}^{\tau}P(\sigma)\,(\tau-\sigma)^{\nu-1}\,d\sigma.\tag{13}$$
The integral in this definition can be discretized as follows [90]:
$$I_{GL}^{\nu}P(\tau)\approx\int_{0}^{\tau}\frac{P(\sigma)}{(\tau-\sigma)^{1-\nu}}\,d\sigma=\lim_{\hbar\to 0}\hbar^{\nu}\left[P(\tau)+\nu P(\tau-\hbar)+\frac{\nu(\nu-1)}{2}P(\tau-2\hbar)+\cdots+\frac{\Gamma(\nu+1)}{k!\,\Gamma(\nu-k+1)}P(\tau-k\hbar)+\cdots\right].\tag{14}$$
By inserting $\hbar=1$ in Equation (14) and then substituting the derived expression into Equation (13), we obtain
$$I_{GL}^{\nu}P(\tau)\approx\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{M(\nu)}\left[P(\tau)+\nu P(\tau-1)+\frac{\nu(\nu-1)}{2}P(\tau-2)+\frac{\nu(1-\nu^{2})}{6}P(\tau-3)+\cdots\right].\tag{15}$$
By performing some simplifications in Equation (15), in both x and y directions, we obtain
$${}_{x}I_{GL}^{\nu}P(x,y)\approx\frac{1}{M(\nu)}P(x,y)+\frac{\nu^{2}}{M(\nu)}P(x-1,y)+\frac{\nu^{3}-\nu^{2}}{2M(\nu)}P(x-2,y)+\frac{\nu^{2}-\nu^{4}}{6M(\nu)}P(x-3,y)+\cdots,\tag{16}$$
$${}_{y}I_{GL}^{\nu}P(x,y)\approx\frac{1}{M(\nu)}P(x,y)+\frac{\nu^{2}}{M(\nu)}P(x,y-1)+\frac{\nu^{3}-\nu^{2}}{2M(\nu)}P(x,y-2)+\frac{\nu^{2}-\nu^{4}}{6M(\nu)}P(x,y-3)+\cdots.\tag{17}$$
Thus, the coefficients required are established in the following manner:
$$\ell_{0}=\frac{1}{M(\nu)},\qquad \ell_{1}=\frac{\nu^{2}}{M(\nu)},\qquad \ell_{2}=\frac{\nu^{3}-\nu^{2}}{2M(\nu)},\qquad \ell_{3}=\frac{\nu^{2}-\nu^{4}}{6M(\nu)}.\tag{18}$$
As we know, Equations (10) and (11) provide a general structure for analyzing images. By inserting the coefficients obtained in Equation (18) into these templates, we arrive at two new masks that help identify the edges of an image.
  • Fractional window mask of GL1:
    $$H_{x}=\begin{pmatrix}\ell_{0}&0&-\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ \ell_{2}&0&-\ell_{2}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ 0&0&0\\ -\ell_{0}&-\ell_{1}&-\ell_{2}\end{pmatrix},\tag{19}$$
    where $\ell_{0}$, $\ell_{1}$, and $\ell_{2}$ are the coefficients given in Equation (18).
  • Fractional window mask of GL2:
    $$H_{x}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ \ell_{3}&0&-\ell_{3}\\ -\ell_{2}&-\ell_{1}&-\ell_{0}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{2}&\ell_{3}&\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ -\ell_{0}&-\ell_{3}&-\ell_{2}\end{pmatrix},\tag{20}$$
    where $\ell_{0},\ldots,\ell_{3}$ are the coefficients given in Equation (18).
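The GL coefficients of Equation (18) are straightforward to transcribe into code. Since the normalization function M(ν) is not fixed at this point, the sketch below accepts it as a callable and defaults to M ≡ 1, which is an illustrative assumption only.

```python
def gl_coefficients(nu, M=lambda nu: 1.0):
    """Mask coefficients ell_0..ell_3 of the GL-based scheme (Equation (18)).

    M is the ABC normalization function; the default M == 1 is an
    illustrative assumption, not a choice made by the paper here.
    """
    m = M(nu)
    return (1.0 / m,
            nu**2 / m,
            (nu**3 - nu**2) / (2.0 * m),
            (nu**2 - nu**4) / (6.0 * m))
```

Substituting the returned values into the Type I and Type II templates of Equations (10) and (11) reproduces the GL1 and GL2 windows.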

4.2. Toufik–Atangana's Method-Based Fractional Window Masks (TA)

An alternative approach involves approximating the function $P(\tau)$ through interpolation on the interval $[\tau_{k},\tau_{k+1}]$ as
$$P(\tau)\approx\frac{P(\tau_{k})}{\hbar}\,(\tau-\tau_{k-1})-\frac{P(\tau_{k-1})}{\hbar}\,(\tau-\tau_{k}).\tag{21}$$
When we set τ = τ n in the ABC-fractional integral formula (4), we obtain
$$I_{TA}^{\nu}P(\tau_{n})=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\nu}{\Gamma(\nu)M(\nu)}\int_{0}^{\tau_{n+1}}P(\tau)\,(\tau_{n+1}-\tau)^{\nu-1}\,d\tau=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\nu}{\Gamma(\nu)M(\nu)}\sum_{k=0}^{n}\int_{\tau_{k}}^{\tau_{k+1}}P(\tau)\,(\tau_{n+1}-\tau)^{\nu-1}\,d\tau.\tag{22}$$
Using Equation (21) in Equation (22) and performing some necessary calculations, we obtain [91]
$$I_{TA}^{\nu}P(\tau_{n})=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\nu\hbar^{\nu}}{M(\nu)\Gamma(\nu+2)}\sum_{s=0}^{n}\Big[P(\tau_{s})\big((n-s+1)^{\nu}(n-s+2+\nu)-(n-s)^{\nu}(n-s+2+2\nu)\big)-P(\tau_{s-1})\big((n-s+1)^{\nu+1}-(n-s)^{\nu}(n-s+1+\nu)\big)\Big].\tag{23}$$
Therefore, it is possible to express Equation (23) in an alternative form as follows:
$$I_{TA}^{\nu}P(\tau_{n})=\frac{(1-\nu)\Gamma(\nu+2)+\nu\hbar^{\nu}(\nu+2)}{M(\nu)\Gamma(\nu+2)}P(\tau_{n})+\frac{\nu\hbar^{\nu}\big((\nu+3)2^{\nu}-2\nu-4\big)}{M(\nu)\Gamma(\nu+2)}P(\tau_{n-1})+\frac{\nu\hbar^{\nu}\big((\nu+4)3^{\nu}-(2\nu+5)2^{\nu}+\nu+2\big)}{M(\nu)\Gamma(\nu+2)}P(\tau_{n-2})+\frac{\nu\hbar^{\nu}\big((\nu+5)4^{\nu}-(2\nu+6)3^{\nu}+(\nu+3)2^{\nu}\big)}{M(\nu)\Gamma(\nu+2)}P(\tau_{n-3})+\cdots.\tag{24}$$
In particular, for $\hbar=1$ in Equation (24), we have
$${}_{x}I_{TA}^{\nu}P(x,y)\approx\frac{(1-\nu)\Gamma(\nu+2)+\nu^{2}+2\nu}{M(\nu)\Gamma(\nu+2)}P(x,y)+\frac{(\nu^{2}+3\nu)2^{\nu}-2\nu^{2}-4\nu}{M(\nu)\Gamma(\nu+2)}P(x-1,y)+\frac{(\nu^{2}+4\nu)3^{\nu}-(2\nu^{2}+5\nu)2^{\nu}+\nu^{2}+2\nu}{M(\nu)\Gamma(\nu+2)}P(x-2,y)+\frac{(\nu^{2}+5\nu)4^{\nu}-(2\nu^{2}+6\nu)3^{\nu}+(\nu^{2}+3\nu)2^{\nu}}{M(\nu)\Gamma(\nu+2)}P(x-3,y)+\cdots,\tag{25}$$
$${}_{y}I_{TA}^{\nu}P(x,y)\approx\frac{(1-\nu)\Gamma(\nu+2)+\nu^{2}+2\nu}{M(\nu)\Gamma(\nu+2)}P(x,y)+\frac{(\nu^{2}+3\nu)2^{\nu}-2\nu^{2}-4\nu}{M(\nu)\Gamma(\nu+2)}P(x,y-1)+\frac{(\nu^{2}+4\nu)3^{\nu}-(2\nu^{2}+5\nu)2^{\nu}+\nu^{2}+2\nu}{M(\nu)\Gamma(\nu+2)}P(x,y-2)+\frac{(\nu^{2}+5\nu)4^{\nu}-(2\nu^{2}+6\nu)3^{\nu}+(\nu^{2}+3\nu)2^{\nu}}{M(\nu)\Gamma(\nu+2)}P(x,y-3)+\cdots.\tag{26}$$
Hence, the required coefficients are established in the following manner:
$$\ell_{0}=\frac{(1-\nu)\Gamma(\nu+2)+\nu^{2}+2\nu}{M(\nu)\Gamma(\nu+2)},\qquad \ell_{1}=\frac{(\nu^{2}+3\nu)2^{\nu}-2\nu^{2}-4\nu}{M(\nu)\Gamma(\nu+2)},\qquad \ell_{2}=\frac{(\nu^{2}+4\nu)3^{\nu}-(2\nu^{2}+5\nu)2^{\nu}+\nu^{2}+2\nu}{M(\nu)\Gamma(\nu+2)},\qquad \ell_{3}=\frac{(\nu^{2}+5\nu)4^{\nu}-(2\nu^{2}+6\nu)3^{\nu}+(\nu^{2}+3\nu)2^{\nu}}{M(\nu)\Gamma(\nu+2)}.\tag{27}$$
In this way, the following two fractional windows are achieved.
  • Fractional window mask of TA1:
    $$H_{x}=\begin{pmatrix}\ell_{0}&0&-\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ \ell_{2}&0&-\ell_{2}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ 0&0&0\\ -\ell_{0}&-\ell_{1}&-\ell_{2}\end{pmatrix},\tag{28}$$
    where $\ell_{0}$, $\ell_{1}$, and $\ell_{2}$ are the coefficients given in Equation (27).
  • Fractional window mask of TA2:
    $$H_{x}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ \ell_{3}&0&-\ell_{3}\\ -\ell_{2}&-\ell_{1}&-\ell_{0}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{2}&\ell_{3}&\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ -\ell_{0}&-\ell_{3}&-\ell_{2}\end{pmatrix},\tag{29}$$
    where $\ell_{0},\ldots,\ell_{3}$ are the coefficients given in Equation (27).
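The TA coefficients of Equation (27) can be transcribed in the same direct way. The normalization function M(ν) is again taken as a caller-supplied callable, with the default M ≡ 1 as an illustrative assumption only.

```python
from math import gamma

def ta_coefficients(nu, M=lambda nu: 1.0):
    """Mask coefficients ell_0..ell_3 of the TA-based scheme (Equation (27)).

    The default normalization M == 1 is an illustrative assumption.
    """
    d = M(nu) * gamma(nu + 2)
    l0 = ((1 - nu) * gamma(nu + 2) + nu**2 + 2 * nu) / d
    l1 = ((nu**2 + 3 * nu) * 2**nu - 2 * nu**2 - 4 * nu) / d
    l2 = ((nu**2 + 4 * nu) * 3**nu - (2 * nu**2 + 5 * nu) * 2**nu
          + nu**2 + 2 * nu) / d
    l3 = ((nu**2 + 5 * nu) * 4**nu - (2 * nu**2 + 6 * nu) * 3**nu
          + (nu**2 + 3 * nu) * 2**nu) / d
    return l0, l1, l2, l3
```

These values populate the Type I and Type II windows of Equations (10) and (11) to give the TA1 and TA2 masks.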

4.3. Euler’s Method-Based Fractional Window Masks (Eu)

Another possible process for the discretization of the fractional integral operator (4) at $\tau=\tau_{n}$ is proposed as follows [92]:
$$I_{Eu}^{\nu}P(\tau_{n})=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\nu\hbar^{\nu}}{M(\nu)\Gamma(\nu+1)}\sum_{s=0}^{n-1}\tau_{n,s}\,P(\tau_{s}),\tag{30}$$
where
$$\tau_{n,s}=(n-s)^{\nu}-(n-s-1)^{\nu}.\tag{31}$$
Therefore, Equation (30) is reformulated as
$$I_{Eu}^{\nu}P(\tau_{n})=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\nu\hbar^{\nu}}{M(\nu)\Gamma(\nu+1)}P(\tau_{n-1})+\frac{\nu\hbar^{\nu}(2^{\nu}-1)}{M(\nu)\Gamma(\nu+1)}P(\tau_{n-2})+\frac{\nu\hbar^{\nu}(3^{\nu}-2^{\nu})}{M(\nu)\Gamma(\nu+1)}P(\tau_{n-3})+\cdots.\tag{32}$$
Applying the same idea to the x and y directions, it follows that
$${}_{x}I_{Eu}^{\nu}P(x,y)\approx\frac{1-\nu}{M(\nu)}P(x,y)+\frac{\nu}{M(\nu)\Gamma(\nu+1)}P(x-1,y)+\frac{\nu(2^{\nu}-1)}{M(\nu)\Gamma(\nu+1)}P(x-2,y)+\frac{\nu(3^{\nu}-2^{\nu})}{M(\nu)\Gamma(\nu+1)}P(x-3,y)+\cdots,\tag{33}$$
$${}_{y}I_{Eu}^{\nu}P(x,y)\approx\frac{1-\nu}{M(\nu)}P(x,y)+\frac{\nu}{M(\nu)\Gamma(\nu+1)}P(x,y-1)+\frac{\nu(2^{\nu}-1)}{M(\nu)\Gamma(\nu+1)}P(x,y-2)+\frac{\nu(3^{\nu}-2^{\nu})}{M(\nu)\Gamma(\nu+1)}P(x,y-3)+\cdots.\tag{34}$$
Hence, the required coefficients are established in the following manner:
$$\ell_{0}=\frac{1-\nu}{M(\nu)},\qquad \ell_{1}=\frac{\nu}{M(\nu)\Gamma(\nu+1)},\qquad \ell_{2}=\frac{\nu(2^{\nu}-1)}{M(\nu)\Gamma(\nu+1)},\qquad \ell_{3}=\frac{\nu(3^{\nu}-2^{\nu})}{M(\nu)\Gamma(\nu+1)}.\tag{35}$$
In this way, the following two fractional windows are achieved.
  • Fractional window mask of Eu1
    $$H_{x}=\begin{pmatrix}\ell_{0}&0&-\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ \ell_{2}&0&-\ell_{2}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ 0&0&0\\ -\ell_{0}&-\ell_{1}&-\ell_{2}\end{pmatrix},\tag{36}$$
    where $\ell_{0}$, $\ell_{1}$, and $\ell_{2}$ are the coefficients given in Equation (35).
  • Fractional window mask of Eu2
    $$H_{x}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ \ell_{3}&0&-\ell_{3}\\ -\ell_{2}&-\ell_{1}&-\ell_{0}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{2}&\ell_{3}&\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ -\ell_{0}&-\ell_{3}&-\ell_{2}\end{pmatrix},\tag{37}$$
    where $\ell_{0},\ldots,\ell_{3}$ are the coefficients given in Equation (35).
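The Euler-method coefficients of Equation (35) admit the same direct transcription, again with the unspecified normalization function passed in as a callable (default M ≡ 1, an illustrative assumption).

```python
from math import gamma

def eu_coefficients(nu, M=lambda nu: 1.0):
    """Mask coefficients ell_0..ell_3 of the Euler-based scheme (Equation (35)).

    The default normalization M == 1 is an illustrative assumption.
    """
    g = M(nu) * gamma(nu + 1)
    return ((1 - nu) / M(nu),
            nu / g,
            nu * (2**nu - 1) / g,
            nu * (3**nu - 2**nu) / g)
```

Inserted into the Type I and Type II templates of Equations (10) and (11), these coefficients yield the Eu1 and Eu2 windows.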

4.4. The Middle-Point-Approach-Based Fractional Window Masks (MP)

In this part, let us reconsider the following fractional integral operator:
$$I_{ABC}^{\nu}P(\tau)=\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{\Gamma(\nu)M(\nu)}\int_{0}^{\tau}\frac{P(\omega)}{(\tau-\omega)^{1-\nu}}\,d\omega.\tag{38}$$
Substituting the variable $\sigma=\tau-\omega$ into the integral in Equation (38), we obtain
$$I_{ABC}^{\nu}P(\tau)=\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{\Gamma(\nu)M(\nu)}\int_{0}^{\tau}\frac{P(\tau-\sigma)}{\sigma^{1-\nu}}\,d\sigma.\tag{39}$$
The integral in Equation (39) can be partitioned as follows:
$$I_{ABC}^{\nu}P(\tau)=\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{\Gamma(\nu)M(\nu)}\sum_{s=0}^{n-1}\int_{\tau_{s}}^{\tau_{s+1}}\frac{P(\tau-\sigma)}{\sigma^{1-\nu}}\,d\sigma.\tag{40}$$
Here, the integrals in Equation (40) are approximated through the following formula:
$$\int_{\tau_{s}}^{\tau_{s+1}}\frac{P(\tau-\sigma)}{\sigma^{1-\nu}}\,d\sigma\approx\frac{P(\tau-\tau_{s})+P(\tau-\tau_{s+1})}{2}\int_{\tau_{s}}^{\tau_{s+1}}\frac{d\sigma}{\sigma^{1-\nu}}.\tag{41}$$
Taking Equation (41) into account in Equation (40) yields
$$I_{MP}^{\nu}P(\tau)=\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{\Gamma(\nu)M(\nu)}\sum_{s=0}^{n-1}\frac{P(\tau-\tau_{s})+P(\tau-\tau_{s+1})}{2}\int_{\tau_{s}}^{\tau_{s+1}}\frac{d\sigma}{\sigma^{1-\nu}}=\frac{1-\nu}{M(\nu)}P(\tau)+\frac{\nu}{\Gamma(\nu)M(\nu)}\sum_{s=0}^{n-1}\frac{P(\tau-\tau_{s})+P(\tau-\tau_{s+1})}{2\nu}\big(\tau_{s+1}^{\nu}-\tau_{s}^{\nu}\big).\tag{42}$$
By substituting $\tau=\tau_{n}=n\hbar$ into Equation (40), the following discrete form is obtained:
$$I_{MP}^{\nu}P(\tau_{n})=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\nu}{\Gamma(\nu)M(\nu)}\sum_{s=0}^{n-1}\frac{P(\tau_{n}-\tau_{s})+P(\tau_{n}-\tau_{s+1})}{2\nu}\,\hbar^{\nu}\big((s+1)^{\nu}-s^{\nu}\big)=\frac{1-\nu}{M(\nu)}P(\tau_{n})+\frac{\hbar^{\nu}}{\Gamma(\nu)M(\nu)}\sum_{s=0}^{n-1}\frac{P(\tau_{n-s})+P(\tau_{n-s-1})}{2}\big((s+1)^{\nu}-s^{\nu}\big).\tag{43}$$
Then, after some basic algebraic calculations, Equation (43) can be converted into the following equation:
$$I_{MP}^{\nu}P(\tau_{n})=\left(\frac{1-\nu}{M(\nu)}+\frac{\hbar^{\nu}}{2M(\nu)\Gamma(\nu)}\right)P(\tau_{n})+\frac{\hbar^{\nu}\,2^{\nu}}{2M(\nu)\Gamma(\nu)}P(\tau_{n-1})+\frac{\hbar^{\nu}(3^{\nu}-1)}{2M(\nu)\Gamma(\nu)}P(\tau_{n-2})+\frac{\hbar^{\nu}(4^{\nu}-2^{\nu})}{2M(\nu)\Gamma(\nu)}P(\tau_{n-3})+\cdots.\tag{44}$$
We can express the related equations for the x and y directions as follows:
$${}_{x}I_{MP}^{\nu}P(x,y)\approx\frac{2\Gamma(\nu)(1-\nu)+1}{2M(\nu)\Gamma(\nu)}P(x,y)+\frac{2^{\nu-1}}{M(\nu)\Gamma(\nu)}P(x-1,y)+\frac{3^{\nu}-1}{2M(\nu)\Gamma(\nu)}P(x-2,y)+\frac{4^{\nu}-2^{\nu}}{2M(\nu)\Gamma(\nu)}P(x-3,y)+\cdots,\tag{45}$$
$${}_{y}I_{MP}^{\nu}P(x,y)\approx\frac{2\Gamma(\nu)(1-\nu)+1}{2M(\nu)\Gamma(\nu)}P(x,y)+\frac{2^{\nu-1}}{M(\nu)\Gamma(\nu)}P(x,y-1)+\frac{3^{\nu}-1}{2M(\nu)\Gamma(\nu)}P(x,y-2)+\frac{4^{\nu}-2^{\nu}}{2M(\nu)\Gamma(\nu)}P(x,y-3)+\cdots.\tag{46}$$
Hence, the required coefficients are established in the following manner:
$$\ell_{0}=\frac{2\Gamma(\nu)(1-\nu)+1}{2M(\nu)\Gamma(\nu)},\qquad \ell_{1}=\frac{2^{\nu-1}}{M(\nu)\Gamma(\nu)},\qquad \ell_{2}=\frac{3^{\nu}-1}{2M(\nu)\Gamma(\nu)},\qquad \ell_{3}=\frac{4^{\nu}-2^{\nu}}{2M(\nu)\Gamma(\nu)}.\tag{47}$$
Thus, the following two fractional window masks are derived:
  • Fractional window mask of MP1
    $$H_{x}=\begin{pmatrix}\ell_{0}&0&-\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ \ell_{2}&0&-\ell_{2}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ 0&0&0\\ -\ell_{0}&-\ell_{1}&-\ell_{2}\end{pmatrix},\tag{48}$$
    where $\ell_{0}$, $\ell_{1}$, and $\ell_{2}$ are the coefficients given in Equation (47).
  • Fractional window mask of MP2
    $$H_{x}=\begin{pmatrix}\ell_{0}&\ell_{1}&\ell_{2}\\ \ell_{3}&0&-\ell_{3}\\ -\ell_{2}&-\ell_{1}&-\ell_{0}\end{pmatrix},\qquad H_{y}=\begin{pmatrix}\ell_{2}&\ell_{3}&\ell_{0}\\ \ell_{1}&0&-\ell_{1}\\ -\ell_{0}&-\ell_{3}&-\ell_{2}\end{pmatrix},\tag{49}$$
    where $\ell_{0},\ldots,\ell_{3}$ are the coefficients given in Equation (47).
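The middle-point coefficients of Equation (47) can likewise be transcribed directly, with the normalization function again supplied by the caller (default M ≡ 1, an illustrative assumption).

```python
from math import gamma

def mp_coefficients(nu, M=lambda nu: 1.0):
    """Mask coefficients ell_0..ell_3 of the middle-point scheme (Equation (47)).

    The default normalization M == 1 is an illustrative assumption.
    """
    d = 2.0 * M(nu) * gamma(nu)
    return ((2.0 * gamma(nu) * (1 - nu) + 1.0) / d,
            2.0**nu / d,
            (3.0**nu - 1.0) / d,
            (4.0**nu - 2.0**nu) / d)
```

At ν = 1 the four coefficients reduce to (1/2, 1, 1, 1), which is a quick sanity check on the transcription.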

5. Numerical Implementations

To obtain the numerical results in this paper, we have used various algorithms, including the Canny mask (CM), Prewitt mask (PM), Sobel mask (SM), GL1 in Equation (19), GL2 in Equation (20), TA1 in Equation (28), TA2 in Equation (29), Eu1 in Equation (36), Eu2 in Equation (37), MP1 in Equation (48), and MP2 in Equation (49), for edge detection in the Sample 1–4 images. The fundamental aspect of the structures described in this article is the presence of the parameter ν in the configuration of the window masks. Figure 1, Figure 2, Figure 3 and Figure 4 examine the impact of this parameter on two factors, namely the peak signal-to-noise ratio (PSNR) and ENTROPY. In these plots, the value of the PSNR index is calculated using the following formula:
$$PSNR=10\log_{10}\!\left(\frac{255\times 255}{MSE}\right),\tag{50}$$
where
$$MSE=\frac{1}{m\times n}\sum_{s=1}^{n}\sum_{r=1}^{m}\big(P^{*}(r,s)-P(r,s)\big)^{2}.\tag{51}$$
A higher PSNR value indicates that the image is closer to the original image in terms of visual quality and fidelity.
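Equations (50) and (51) translate directly into code; the sketch below assumes 8-bit images with peak value 255, as in the formula, and reports identical images as infinite PSNR.

```python
import numpy as np

def psnr(reference, processed):
    """Peak signal-to-noise ratio for 8-bit images (Equations (50)-(51))."""
    reference = np.asarray(reference, dtype=float)
    processed = np.asarray(processed, dtype=float)
    mse = np.mean((reference - processed) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(255.0**2 / mse)
```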
On the other hand, the formula for determining ENTROPY is as follows [89]:
$$ENTROPY=-\sum_{i=0}^{L-1}P(i)\log_{2}P(i),\tag{52}$$
where L is the number of intensity levels, and P ( i ) is the probability of occurrence of intensity level i. Entropy is a statistical measure of randomness or uncertainty in a signal or data set. In the context of image processing, entropy can be used as a measure of the amount of information contained in an image. In Figure 5, Figure 6, Figure 7 and Figure 8, we demonstrate the performance of these algorithms up to the attainment of the highest PSNR, alongside the corresponding value of the parameter ν at that point. According to Table 1, the MP1 algorithm gives the best performance for Sample 1, which can also be seen in Figure 5. The table also indicates that the TA2 algorithm exhibits superior performance for Sample 2, as can be observed in Figure 6. The MP2 algorithm displays superior performance for Sample 3, as indicated by Table 1 and depicted in Figure 7. Finally, as per the data presented in Table 1, it is evident that the TA1 algorithm performs best for Sample 4. This observation is also supported by the findings depicted in Figure 8.
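Equation (52) can be computed from the grey-level histogram; the convention 0 · log2(0) = 0 is handled by discarding empty bins.

```python
import numpy as np

def image_entropy(image, levels=256):
    """Shannon entropy of the grey-level histogram (Equation (52))."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # empty bins contribute 0 * log2(0) := 0
    return float(-np.sum(p * np.log2(p)))
```

For example, an image split evenly between two grey levels has entropy exactly one bit, while a constant image has entropy zero.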
Furthermore, the window masks evaluated in this article exhibit significantly higher accuracy and efficiency than the widely used Canny, Prewitt, and Sobel masks for all analyzed sample images.

6. Conclusions

Edge detection is an important task in image processing that is used in many computer vision applications. Several edge detection algorithms are available, each with its strengths and weaknesses, and the choice of algorithm depends on the specific application and the characteristics of the input image. Evaluating the performance of edge detection algorithms requires careful consideration of several metrics, and the choice of metric depends on the specific application and the desired performance characteristics. Fractional differential calculus, which involves the use of fractional derivatives and integrals, holds significant importance in various fields related to image processing. In this study, we delved into designing novel edge detectors by utilizing the fractional integral in the Atangana–Baleanu–Caputo sense. The aim is to leverage the benefits of fractional calculus, particularly the order in the corresponding definitions, to overcome the drawbacks associated with conventional methods such as Canny, Prewitt, and Sobel. We conducted empirical experiments to demonstrate the superior performance of the new fractional kernels proposed in this study in terms of improving edge information and preserving image quality. These kernels can be deemed among the most promising alternatives for enhancing edge information in images. In addition, we assert that the novel fractional kernels offer more precise details compared to conventional methods, resulting in an enhanced comprehension of the underlying patterns present in the image data. It is important to note that the computational expense of each of the new fractional masks remains the same as that of traditional masks, making them efficient and feasible. Moving forward, we emphasize the need to explore the optimal value of the fractional order ν to further enhance the effectiveness of the proposed method.
Overall, this study offers valuable insights into the potential of using fractional differential calculus in the field of image processing. By leveraging its unique properties and advantages, researchers can devise innovative approaches to tackle the challenges associated with conventional methods and achieve more accurate and reliable results.

Author Contributions

All authors contributed equally and significantly to writing this article. All authors read and approved the final manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare that there are no conflict of interest regarding the publication of this paper.

References

  1. Li, R.; Zhang, H.; Chen, Z.; Yu, N.; Kong, W.; Li, T.; Wang, E.; Wu, X.; Liu, Y. Denoising method of ground-penetrating radar signal based on independent component analysis with multifractal spectrum. Measurement 2022, 192, 110886. [Google Scholar]
  2. Liu, F.; Zhao, X.; Zhu, Z.; Zhai, Z.; Liu, Y. Dual-microphone active noise cancellation paved with Doppler assimilation for TADS. Mech. Syst. Signal Process. 2023, 184, 109727. [Google Scholar] [CrossRef]
  3. Ghanbari, B.; Atangana, A. A new application of fractional Atangana–Baleanu derivatives: Designing ABC-fractional masks in image processing. Phys. A Stat. Mech. Its Appl. 2020, 542, 123516. [Google Scholar] [CrossRef]
  4. Zhu, H.; Xue, M.; Wang, Y.; Yuan, G.; Li, X. Fast visual tracking with siamese oriented region proposal network. IEEE Signal Process. Lett. 2022, 29, 1437–1441. [Google Scholar] [CrossRef]
  5. Liu, R.; Wang, X.; Lu, H.; Wu, Z.; Fan, Q.; Li, S.; Jin, X. SCCGAN: Style and characters inpainting based on CGAN. Mob. Netw. Appl. 2021, 26, 3–12. [Google Scholar] [CrossRef]
  6. Liu, Q.; Yuan, H.; Hamzaoui, R.; Su, H.; Hou, J.; Yang, H. Reduced reference perceptual quality model with application to rate control for video-based point cloud compression. IEEE Trans. Image Process. 2021, 30, 6623–6636. [Google Scholar] [CrossRef]
  7. Sheng, H.; Wang, S.; Yang, D.; Cong, R.; Cui, Z.; Chen, R. Cross-View Recurrence-based Self-Supervised Super-Resolution of Light Field. IEEE Trans. Circuits Syst. Video Technol. 2023. [Google Scholar] [CrossRef]
  8. Zhu, W.; Chen, J.; Sun, Q.; Li, Z.; Tan, W.; Wei, Y. Reconstructing of High-Spatial-Resolution Three-Dimensional Electron Density by Ingesting SAR-Derived VTEC Into IRI Model. IEEE Geosci. Remote Sens. Lett. 2022, 19, 4508305. [Google Scholar] [CrossRef]
  9. Zhou, X.; Sun, K.; Wang, J.; Zhao, J.; Feng, C.; Yang, Y.; Zhou, W. Computer Vision Enabled Building Digital Twin Using Building Information Model. IEEE Trans. Ind. Inform. 2022, 19, 2684–2692. [Google Scholar] [CrossRef]
  10. Zhuo, Z.; Du, L.; Lu, X.; Chen, J.; Cao, Z. Smoothed Lv distribution based three-dimensional imaging for spinning space debris. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5113813. [Google Scholar] [CrossRef]
  11. Li, B.; Tan, Y.; Wu, A.G.; Duan, G.R. A distributionally robust optimization based method for stochastic model predictive control. IEEE Trans. Autom. Control 2021, 67, 5762–5776. [Google Scholar] [CrossRef]
  12. Xie, X.; Huang, L.; Marson, S.M.; Wei, G. Emergency response process for sudden rainstorm and flooding: Scenario deduction and Bayesian network analysis using evidence theory and knowledge meta-theory. Nat. Hazards 2023, 117, 3307–3329. [Google Scholar] [CrossRef]
  13. Xie, X.; Xie, B.; Cheng, J.; Chu, Q.; Dooling, T. A simple Monte Carlo method for estimating the chance of a cyclone impact. Nat. Hazards 2021, 107, 2573–2582. [Google Scholar] [CrossRef]
  14. Zhang, J.; Zhu, C.; Zheng, L.; Xu, K. ROSEFusion: Random optimization for online dense reconstruction under fast camera motion. ACM Trans. Graph. 2021, 40, 56. [Google Scholar] [CrossRef]
  15. Wang, Y.; Xu, N.; Liu, A.A.; Li, W.; Zhang, Y. High-order interaction learning for image captioning. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 4417–4430. [Google Scholar] [CrossRef]
  16. Wang, Y.; Su, Y.; Li, W.; Xiao, J.; Li, X.; Liu, A.A. Dual-path Rare Content Enhancement Network for Image and Text Matching. IEEE Trans. Circuits Syst. Video Technol. 2023. [Google Scholar] [CrossRef]
  17. Zhou, L.; Ye, Y.; Tang, T.; Nan, K.; Qin, Y. Robust matching for SAR and optical images using multiscale convolutional gradient features. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4017605. [Google Scholar] [CrossRef]
  18. Deng, X.; Liu, E.; Li, S.; Duan, Y.; Xu, M. Interpretable Multi-modal Image Registration Network Based on Disentangled Convolutional Sparse Coding. IEEE Trans. Image Process. 2023, 32, 1078–1091. [Google Scholar] [CrossRef]
  19. Tan, X.; Lin, J.; Xu, K.; Chen, P.; Ma, L.; Lau, R.W. Mirror detection with the visual chirality cue. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 3492–3504. [Google Scholar] [CrossRef]
  20. Wang, S.; Hu, X.; Sun, J.; Liu, J. Hyperspectral Anomaly Detection Using Ensemble and Robust Collaborative Representation. Inf. Sci. 2023, 624, 748–760. [Google Scholar] [CrossRef]
  21. Lin, Z.; Wang, H.; Li, S. Pavement anomaly detection based on transformer and self-supervised learning. Autom. Constr. 2022, 143, 104544. [Google Scholar] [CrossRef]
  22. Zhang, J.; Peng, S.; Gao, Y.; Zhang, Z.; Hong, Q. APMSA: Adversarial Perturbation Against Model Stealing Attacks. IEEE Trans. Inf. Forensics Secur. 2023, 18, 1667–1679. [Google Scholar] [CrossRef]
  23. Zhou, G.; Song, B.; Liang, P.; Xu, J.; Yue, T. Voids filling of DEM with multiattention generative adversarial network model. Remote Sens. 2021, 14, 1206. [Google Scholar] [CrossRef]
  24. Ban, Y.; Liu, M.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Depth estimation method for monocular camera defocus images in microscopic scenes. Electronics 2022, 11, 2012. [Google Scholar] [CrossRef]
  25. Fu, C.; Yuan, H.; Xu, H.; Zhang, H.; Shen, L. TMSO-Net: Texture adaptive multi-scale observation for light field image depth estimation. J. Vis. Commun. Image Represent. 2023, 90, 103731. [Google Scholar] [CrossRef]
  26. Li, B.; Zhang, M.; Rong, Y.; Han, Z. Transceiver optimization for wireless powered time-division duplex MU-MIMO systems: Non-robust and robust designs. IEEE Trans. Wirel. Commun. 2021, 21, 4594–4607. [Google Scholar] [CrossRef]
  27. Ma, X.; Dong, Z.; Quan, W.; Dong, Y.; Tan, Y. Real-time assessment of asphalt pavement moduli and traffic loads using monitoring data from Built-in Sensors: Optimal sensor placement and identification algorithm. Mech. Syst. Signal Process. 2023, 187, 109930. [Google Scholar] [CrossRef]
  28. Zhang, X.; Wen, S.; Yan, L.; Feng, J.; Xia, Y. A hybrid-convolution spatial–temporal recurrent network for traffic flow prediction. Comput. J. 2022. [Google Scholar] [CrossRef]
  29. Han, Y.; Wang, B.; Guan, T.; Tian, D.; Yang, G.; Wei, W.; Tang, H.; Chuah, J.H. Research on road environmental sense method of intelligent vehicle based on tracking check. IEEE Trans. Intell. Transp. Syst. 2022, 24, 1261–1275. [Google Scholar] [CrossRef]
  30. Chen, J.; Wang, Q.; Cheng, H.H.; Peng, W.; Xu, W. A Review of Vision-Based Traffic Semantic Understanding in ITSs. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19954–19979. [Google Scholar] [CrossRef]
  31. Chen, J.; Xu, M.; Xu, W.; Li, D.; Peng, W.; Xu, H. A Flow Feedback Traffic Prediction Based on Visual Quantified Features. IEEE Trans. Intell. Transp. Syst. 2023. [Google Scholar] [CrossRef]
  32. Chen, J.; Wang, Q.; Peng, W.; Xu, H.; Li, X.; Xu, W. Disparity-Based Multiscale Fusion Network for Transportation Detection. IEEE Trans. Intell. Transp. Syst. 2022, 23, 18855–18863. [Google Scholar] [CrossRef]
  33. Fan, W.; Yang, L.; Bouguila, N. Unsupervised grouped axial data modeling via hierarchical Bayesian nonparametric models with Watson distributions. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 9654–9668. [Google Scholar] [CrossRef] [PubMed]
  34. Dai, B.; Zhang, B.; Niu, Z.; Feng, Y.; Liu, Y.; Fan, Y. A novel ultrawideband branch waveguide coupler with low amplitude imbalance. IEEE Trans. Microw. Theory Tech. 2022, 70, 3838–3846. [Google Scholar] [CrossRef]
  35. Zhang, Y.; Shao, Z.; Zhang, J.; Wu, B.; Zhou, L. The effect of image enhancement on influencer’s product recommendation effectiveness: The roles of perceived influencer authenticity and post type. J. Res. Interact. Mark. 2023. [Google Scholar] [CrossRef]
  36. Feng, Y.; Zhang, B.; Liu, Y.; Niu, Z.; Fan, Y.; Chen, X. A D-band manifold triplexer with high isolation utilizing novel waveguide dual-mode filters. IEEE Trans. Terahertz Sci. Technol. 2021, 12, 678–681. [Google Scholar] [CrossRef]
  37. Xu, K.D.; Guo, Y.J.; Liu, Y.; Deng, X.; Chen, Q.; Ma, Z. 60-GHz compact dual-mode on-chip bandpass filter using GaAs technology. IEEE Electron. Device Lett. 2021, 42, 1120–1123. [Google Scholar] [CrossRef]
  38. Liu, Y.; Wang, K.; Liu, L.; Lan, H.; Lin, L. Tcgl: Temporal contrastive graph for self-supervised video representation learning. IEEE Trans. Image Process. 2022, 31, 1978–1993. [Google Scholar] [CrossRef]
  39. Cheng, L.; Yin, F.; Theodoridis, S.; Chatzis, S.; Chang, T.H. Rethinking Bayesian learning for data analysis: The art of prior and inference in sparsity-aware modeling. IEEE Signal Process. Mag. 2022, 39, 18–52. [Google Scholar] [CrossRef]
  40. Nie, W.; Bao, Y.; Zhao, Y.; Liu, A. Long Dialogue Emotion Detection Based on Commonsense Knowledge Graph Guidance. IEEE Trans. Multimed. 2023. [Google Scholar] [CrossRef]
  41. Zhou, X.; Zhang, L. SA-FPN: An effective feature pyramid network for crowded human detection. Appl. Intell. 2022, 52, 12556–12568. [Google Scholar] [CrossRef]
  42. Xie, X.; Jin, X.; Wei, G.; Chang, C.T. Monitoring and early warning of SMEs’ shutdown risk under the impact of global pandemic shock. Systems 2023, 11, 260. [Google Scholar] [CrossRef]
  43. Liu, H.; Li, J.; Meng, X.; Zhou, B.; Fang, G.; Spencer, B.F. Discrimination Between Dry and Water Ices by Full Polarimetric Radar: Implications for China’s First Martian Exploration. IEEE Trans. Geosci. Remote Sens. 2022, 61, 5100111. [Google Scholar] [CrossRef]
  44. Liu, H.; Yuan, H.; Liu, Q.; Hou, J.; Zeng, H.; Kwong, S. A Hybrid Compression Framework for Color Attributes of Static 3D Point Clouds. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1564–1577. [Google Scholar] [CrossRef]
  45. Guan, Z.; Jing, J.; Deng, X.; Xu, M.; Jiang, L.; Zhang, Z.; Li, Y. DeepMIH: Deep invertible network for multiple image hiding. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 372–390. [Google Scholar] [CrossRef]
  46. Tian, H.; Huang, N.; Niu, Z.; Qin, Y.; Pei, J.; Wang, J. Mapping winter crops in China with multi-source satellite imagery and phenology-based algorithm. Remote Sens. 2019, 11, 820. [Google Scholar] [CrossRef] [Green Version]
  47. Tian, H.; Pei, J.; Huang, J.; Li, X.; Wang, J.; Zhou, B.; Qin, Y.; Wang, L. Garlic and Winter Wheat Identification Based on Active and Passive Satellite Imagery and the Google Earth Engine in Northern China. Remote Sens. 2020, 12, 3539. [Google Scholar] [CrossRef]
  48. Zhuang, Y.; Chen, S.; Jiang, N.; Hu, H. An Effective WSSENet-Based Similarity Retrieval Method of Large Lung CT Image Databases. KSII Trans. Internet Inf. Syst. 2022, 16, 2359–2376. [Google Scholar]
  49. Xu, J.; Zhang, X.; Park, S.H.; Guo, K. The alleviation of perceptual blindness during driving in urban areas guided by saccades recommendation. IEEE Trans. Intell. Transp. Syst. 2022, 23, 16386–16396. [Google Scholar] [CrossRef]
  50. Xu, J.; Park, S.H.; Zhang, X.; Hu, J. The improvement of road driving safety guided by visual inattentional blindness. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4972–4981. [Google Scholar] [CrossRef]
  51. Xiong, S.; Li, B.; Zhu, S. DCGNN: A single-stage 3D object detection network based on density clustering and graph neural network. Complex Intell. Syst. 2023, 9, 3399–3408. [Google Scholar] [CrossRef]
  52. Cheng, D.; Chen, L.; Lv, C.; Guo, L.; Kou, Q. Light-Guided and Cross-Fusion U-Net for Anti-Illumination Image Super-Resolution. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8436–8449. [Google Scholar] [CrossRef]
  53. Zhou, G.; Zhang, R.; Huang, S. Generalized buffering algorithm. IEEE Access 2021, 9, 27140–27157. [Google Scholar] [CrossRef]
  54. Yang, M.; Wang, H.; Hu, K.; Yin, G.; Wei, Z. IA-Net: An Inception–Attention-Module-Based Network for Classifying Underwater Images From Others. IEEE J. Ocean. Eng. 2022, 47, 704–717. [Google Scholar] [CrossRef]
  55. Zhong, Q.; Han, S.; Shi, K.; Zhong, S.; Kwon, O.M. Co-design of adaptive memory event-triggered mechanism and aperiodic intermittent controller for nonlinear networked control systems. IEEE Trans. Circuits Syst. II Express Briefs 2022, 69, 4979–4983. [Google Scholar] [CrossRef]
  56. Xiong, Z.; Zeng, M.; Zhang, X.; Zhu, S.; Xu, F.; Zhao, X.; Wu, Y.; Li, X. Social similarity routing algorithm based on socially aware networks in the big data environment. J. Signal Process. Syst. 2022, 94, 1253–1267. [Google Scholar]
  57. Zhou, G.; Wang, Q.; Huang, Y.; Tian, J.; Li, H.; Wang, Y. True2 Orthoimage Map Generation. Remote Sens. 2022, 14, 4396. [Google Scholar] [CrossRef]
  58. Cong, R.; Sheng, H.; Yang, D.; Cui, Z.; Chen, R. Exploiting Spatial and Angular Correlations With Deep Efficient Transformers for Light Field Image Super-Resolution. IEEE Trans. Multimed. 2023. [Google Scholar] [CrossRef]
  59. Wang, S.; Sheng, H.; Yang, D.; Zhang, Y.; Wu, Y.; Wang, S. Extendable multiple nodes recurrent tracking framework with RTU++. IEEE Trans. Image Process. 2022, 31, 5257–5271. [Google Scholar] [CrossRef]
  60. Yang, D.; Zhu, T.; Wang, S.; Wang, S.; Xiong, Z. LFRSNet: A robust light field semantic segmentation network combining contextual and geometric features. Front. Environ. Sci. 2022, 10, 996513. [Google Scholar] [CrossRef]
  61. Yan, A.; Li, Z.; Cui, J.; Huang, Z.; Ni, T.; Girard, P.; Wen, X. LDAVPM: A latch design and algorithm-based verification protected against multiple-node-upsets in harsh radiation environments. IEEE Trans.-Comput.-Aided Des. Integr. Circuits Syst. 2022, 42, 2069–2073. [Google Scholar] [CrossRef]
  62. Cheng, B.; Zhu, D.; Zhao, S.; Chen, J. Situation-aware IoT service coordination using the event-driven SOA paradigm. IEEE Trans. Netw. Serv. Manag. 2016, 13, 349–361. [Google Scholar] [CrossRef]
  63. Wang, J.; Tian, J.; Zhang, X.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Control of time delay force feedback teleoperation system with finite time convergence. Front. Neurorobot. 2022, 16, 877069. [Google Scholar] [CrossRef] [PubMed]
  64. Gu, Q.; Tian, J.; Yang, B.; Liu, M.; Gu, B.; Yin, Z.; Yin, L.; Zheng, W. A novel architecture of a six degrees of freedom parallel platform. Electronics 2023, 12, 1774. [Google Scholar] [CrossRef]
  65. Chen, Y.; Chen, Z.; Guo, D.; Zhao, Z.; Lin, T.; Zhang, C. Underground space use of urban built-up areas in the central city of Nanjing: Insight based on a dynamic population distribution. Undergr. Space 2022, 7, 748–766. [Google Scholar] [CrossRef]
  66. Guo, F.; Zhou, W.; Lu, Q.; Zhang, C. Path extension similarity link prediction method based on matrix algebra in directed networks. Comput. Commun. 2022, 187, 83–92. [Google Scholar] [CrossRef]
  67. Li, J.; Deng, Y.; Sun, W.; Li, W.; Li, R.; Li, Q.; Liu, Z. Resource orchestration of cloud-edge–based smart grid fault detection. ACM Trans. Sens. Netw. (TOSN) 2022, 18, 1–26. [Google Scholar] [CrossRef]
  68. Wang, S.; Sheng, H.; Zhang, Y.; Yang, D.; Shen, J.; Chen, R. Blockchain-empowered distributed multi-camera multi-target tracking in edge computing. IEEE Trans. Ind. Inform. 2023. [Google Scholar] [CrossRef]
  69. Wang, Y.; Han, X.; Jin, S. MAP based modeling method and performance study of a task offloading scheme with time-correlated traffic and VM repair in MEC systems. Wirel. Netw. 2023, 29, 47–68. [Google Scholar] [CrossRef]
  70. Dai, X.; Xiao, Z.; Jiang, H.; Lui, J.C.S. UAV-Assisted Task Offloading in Vehicular Edge Computing Networks. IEEE Trans. Mob. Comput. 2023. [Google Scholar] [CrossRef]
  71. Zong, C.; Wan, Z. Container ship cell guide accuracy check technology based on improved 3D point cloud instance segmentation. Brodogradnja 2022, 73, 23–35. [Google Scholar] [CrossRef]
  72. Xiong, Z.; Li, X.; Zhang, X.; Deng, M.; Xu, F.; Zhou, B.; Zeng, M. A Comprehensive Confirmation-based Selfish Node Detection Algorithm for Socially Aware Networks. J. Signal Process. Syst. 2023, 1–19. [Google Scholar] [CrossRef]
  73. Wei, X.; Wu, Y.; Dong, F.; Zhang, J.; Sun, S. Developing an image manipulation detection algorithm based on edge detection and faster r-cnn. Symmetry 2019, 11, 1223. [Google Scholar] [CrossRef] [Green Version]
  74. Zhang, Z.; Liu, Y.; Liu, T.; Li, Y.; Ye, W. Edge detection algorithm of a symmetric difference kernel SAR image based on the GAN network model. Symmetry 2019, 11, 557. [Google Scholar] [CrossRef] [Green Version]
  75. Wang, D.; Ma, L. Insulator Hydrophobic Image Edge Detection Algorithm considering Deconvolution and Deblurring Algorithm. Math. Probl. Eng. 2022, 2022, 1871079. [Google Scholar] [CrossRef]
  76. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
  77. Melin, P.; Mendoza, O.; Castillo, O. An Improved Method for Edge Detection Based on Interval Type-2 Fuzzy Logic. Expert Syst. Appl. 2010, 37, 8527–8535. [Google Scholar] [CrossRef]
  78. Samko, G.; Kilbas, A.A.; Marichev, O.I. Fractional Integrals and Derivatives: Theory and Applications; Gordon & Breach: Yverdon, Switzerland, 1993. [Google Scholar]
  79. Caputo, M.; Fabrizio, M. A new definition of fractional derivative without singular kernal. Prog. Fract. Differ. Appl. 2015, 1, 73–85. [Google Scholar]
  80. Atangana, A.; Baleanu, D. New fractional derivatives with non-local and non-singular kernel: Theory and application to heat transfer model. Therm. Sci. 2016, 20, 763–769. [Google Scholar] [CrossRef] [Green Version]
  81. Baleanu, D.; Jajarmi, A.; Mohammadi, H.; Rezapour, S. A new study on the mathematical modeling of human liver with Caputo–Fabrizio fractional derivative. Chaos Solitons Fractals 2020, 134, 109705. [Google Scholar] [CrossRef]
  82. Defterli, O.; Baleanu, D.; Jajarmi, A.; Sajjadi, S.S.; Alshaikh, N.; Asad, J. Fractional treatment: An accelerated mass-spring system. Rom. Rep. Phys. 2022, 74, 1–3. [Google Scholar]
  83. Baleanu, D.; Hasanabadi, M.; Vaziri, A.M.; Jajarmi, A. A new intervention strategy for an HIV/AIDS transmission by a general fractional modeling and an optimal control approach. Chaos Solitons Fractals 2023, 167, 113078. [Google Scholar] [CrossRef]
  84. Ghanbari, B.; Gómez-Aguilar, J.F. New exact optical soliton solutions for nonlinear Schrödinger equation with second-order spatio-temporal dispersion involving M-derivative. Mod. Phys. Lett. B 2019, 33, 1950235. [Google Scholar] [CrossRef] [Green Version]
  85. Ghanbari, B. Chaotic behaviors of the prevalence of an infectious disease in a prey and predator system using fractional derivatives. Math. Methods Appl. Sci. 2021, 44, 9998–10013. [Google Scholar] [CrossRef]
  86. Ghanbari, B. Abundant exact solutions to a generalized nonlinear Schrödinger equation with local fractional derivative. Math. Methods Appl. Sci. 2021, 44, 8759–8774. [Google Scholar] [CrossRef]
  87. Ghanbari, B. On approximate solutions for a fractional prey–predator model involving the Atangana–Baleanu derivative. Adv. Differ. Equations 2020, 2020, 679. [Google Scholar] [CrossRef]
  88. Wang, M.; Wang, S.; Ju, X.; Wang, Y. Image Denoising Method Relying on Iterative Adaptive Weight-Mean Filtering. Symmetry 2023, 15, 1181. [Google Scholar] [CrossRef]
  89. Ghanbari, B.; Atangana, A. Some new edge detecting techniques based on fractional derivatives with non-local and non-singular kernels. Adv. Differ. Equations 2020, 2020, 435. [Google Scholar] [CrossRef]
  90. Podlubny, I. Fractional Dfferential Equations, Vol. 198 of Mathematics in Science and Engineering; Academic Press: Cambridge, MA, USA, 1999. [Google Scholar]
  91. Toufik, M.; Atangana, A. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models. Eur. Phys. J. Plus 2017, 132, 444. [Google Scholar] [CrossRef]
  92. Li, C.; Zeng, F. The finite difference methods for fractional ordinary differential equations. Num. Funct. Anal. Opt 2013, 34, 149–179. [Google Scholar] [CrossRef]
Figure 1. The highest PSNR achieved using different techniques for the image of Sample 1.
Figure 2. The highest PSNR achieved using different techniques for the image of Sample 2.
Figure 3. The highest PSNR achieved using different techniques for the image of Sample 3.
Figure 4. The highest PSNR achieved using different techniques for the image of Sample 4.
Figure 5. The highest PSNR achieved using different techniques for the image of Sample 1.
Figure 6. The highest PSNR achieved using different techniques for the image of Sample 2.
Figure 7. The highest PSNR achieved using different techniques for the image of Sample 3.
Figure 8. The highest PSNR achieved using different techniques for the image of Sample 4.
Table 1. Evaluating the highest PSNRs achieved through various algorithms.

| Sample No. | CM | PM | SM | GL1 | GL2 | TA1 | TA2 | Eu1 | Eu2 | MP1 | MP2 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | −39.58 | 11.56 | 11.29 | 11.57 | 11.45 | 11.55 | 11.54 | 11.54 | 11.52 | 11.57 | 11.55 |
| 2 | −40.62 | 8.37 | 8.50 | 8.63 | 8.64 | 8.63 | 8.68 | 8.64 | 8.65 | 8.64 | 8.67 |
| 3 | −43.74 | 6.26 | 6.49 | 6.67 | 6.67 | 6.67 | 6.68 | 6.65 | 6.67 | 6.67 | 6.68 |
| 4 | −40.48 | 10.84 | 11.28 | 11.36 | 11.35 | 11.37 | 11.31 | 11.29 | 11.31 | 11.36 | 11.30 |
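The figures and Table 1 compare methods by their peak signal-to-noise ratio (PSNR). As context for these values, the sketch below shows the standard PSNR formula, 10 · log10(MAX² / MSE), assuming 8-bit grayscale images (MAX = 255); this is the conventional definition, not code taken from the article.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two equally sized images.

    Assumes 8-bit intensity range by default (peak = 255). Returns
    infinity when the images are identical, since the MSE is zero.
    """
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    # Mean squared error over all pixels.
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A higher PSNR indicates an edge map closer to the reference, which is why the tabulated fractional masks (TA1/TA2, Eu1/Eu2, MP1/MP2) with values near the top of each row are read as the better performers.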

Ding, Y.; Nan, X. On Edge Detection Algorithms for Water-Repellent Images of Insulators Taking into Account Efficient Approaches. Symmetry 2023, 15, 1418. https://doi.org/10.3390/sym15071418