
Utilizing Generative Adversarial Networks Using a Category of Fuzzy-Based Structural Similarity Indices for Constructing Datasets in Meteorology

by Bahram Farhadinia 1,2,*, Mohammad Reza Ahangari 2,3 and Aghileh Heydari 2,3

1 Department of Mathematics, Faculty of Engineering Sciences, Quchan University of Technology, Quchan P.O. Box 94771-67335, Iran
2 School of Computing and Information Systems, The University of Melbourne, Parkville, VIC 3010, Australia
3 Department of Mathematics, Faculty of Basic Science, Payame Noor University (PNU), Tehran P.O. Box 19395-4697, Iran
* Author to whom correspondence should be addressed.

Mathematics 2024, 12(6), 797; https://doi.org/10.3390/math12060797
Submission received: 3 February 2024 / Revised: 4 March 2024 / Accepted: 5 March 2024 / Published: 8 March 2024
(This article belongs to the Special Issue Fuzzy Set Theory and Its Application to Machine Learning)

Abstract

Machine learning and image processing are closely related fields that have undergone major development and application in recent years. Machine learning algorithms are being used to develop sophisticated techniques for analyzing and interpreting images, such as object detection, image classification, and image segmentation. One important aspect of image processing is the ability to compare and measure the similarity between images by quantifying it through features such as contrast, luminance, and structure. In general, the flexibility of a similarity measure enables fine-tuning of the comparison process to achieve the desired outcomes. However, existing similarity measures are not flexible enough to address diverse and comprehensive practical requirements. To this end, we utilize triangular norms (t-norms) to construct an inclusive class of similarity measures in this article. As is well known, each t-norm possesses distinctive attributes that allow for novel interpretations of image similarities. The proposed class of t-norm-based structural similarity measures offers numerous options for decisionmakers to consider various issues and interpret results more broadly in line with their objectives. In the Experiments section, the proposed method is applied to grayscale and binarized images and to a specific experiment related to meteorology. The diverse case studies presented confirm the efficiency and key features of the t-norm-based structural similarity.

1. Introduction

Image quality measures play important roles in various image processing applications. There are two main classes of objective quality or distortion assessment approaches.
The first class consists of mathematical measures such as the widely used mean squared error (MSE) [1], peak signal-to-noise ratio (PSNR) [2], root mean squared error (RMSE) [3], mean absolute error (MAE) [4], and signal-to-noise ratio (SNR) [5], which have been adopted because of their simplicity and low computational complexity [2].
The second class of measurement methods takes into account the characteristics of the human visual system (HVS) [6]. Based on the properties and cognitive mechanisms of the human visual system, HVS-based image quality assessment can be classified into bionics and engineering methods [7]. The bionics method, referred to as the bottom-up approach, entails creating a protocol for evaluating image quality by modeling the characteristics of the human visual system, comprising multi-channel decomposition [8], the contrast sensitivity function [9], the masking function [10], and the just-noticeable difference function [11]. Additionally, such measures are intended to be unaffected by viewing conditions and individual observers. Although it is recognized that viewing conditions can significantly impact human perception of image quality, these conditions are often variable and require specific data that are generally unavailable to the image analysis system [12]. In [12], Wang et al. proposed a universal image quality index; universal means that the quality measurement approach is independent of the images being tested, viewing conditions, and individual observers. Moreover, it should apply to diverse image processing applications and enable meaningful comparisons across different types of image distortions.
Continuing this line of work, Wang et al. introduced the concept of Structural SIMilarity (SSIM) [13], an enhanced version of the universal image quality index proposed in [12]. The SSIM index measures the similarity between two images and serves as a quality metric for one of the compared images under the assumption that the other image is of perfect quality.
There are three terms, namely luminance (l), contrast (c), and structure (s), defined between samples of a and b, from which an SSIM is constructed. These terms are as follows:

$$l(a,b) = \frac{2\mu_a \mu_b + c_1}{\mu_a^2 + \mu_b^2 + c_1}, \tag{1}$$

$$c(a,b) = \frac{2\sigma_a \sigma_b + c_2}{\sigma_a^2 + \sigma_b^2 + c_2}, \tag{2}$$

$$s(a,b) = \frac{\sigma_{ab} + c_3}{\sigma_a \sigma_b + c_3}, \tag{3}$$

where $\mu_a$ and $\mu_b$ represent the pixel sample means of images a and b, respectively. The constant $c_1$ is included to prevent instability when $\mu_a^2 + \mu_b^2$ is extremely close to zero. The variances of images a and b are denoted $\sigma_a^2$ and $\sigma_b^2$, respectively, and the covariance between images a and b is $\sigma_{ab}$. Suggested values for the constants $c_1$, $c_2$, and $c_3$ are provided in [13] and other articles.
In summary, SSIM is a weighted combination of the comparative measures l, c, and s, defined as

$$SSIM(a,b) = l(a,b)^\alpha \cdot c(a,b)^\beta \cdot s(a,b)^\gamma, \tag{4}$$

where $\alpha$, $\beta$, and $\gamma$ are weights; setting all of them to 1 yields the usual form of the SSIM formula.
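To make these definitions concrete, the following minimal Python sketch computes l, c, and s for two equal-shape patches and combines them into the weighted SSIM of (4). The constants follow the common choices reported in [13] (k1 = 0.01, k2 = 0.03, dynamic range L = 255, c3 = c2/2); this is an illustration, not the authors' implementation.

```python
import numpy as np

K1, K2, L = 0.01, 0.03, 255            # common choices from [13]
C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
C3 = C2 / 2

def ssim_components(a, b):
    """Luminance, contrast, and structure terms (1)-(3) for two equal-shape patches."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mu_a, mu_b = a.mean(), b.mean()
    sig_a, sig_b = a.std(), b.std()
    cov_ab = ((a - mu_a) * (b - mu_b)).mean()
    l = (2 * mu_a * mu_b + C1) / (mu_a ** 2 + mu_b ** 2 + C1)
    c = (2 * sig_a * sig_b + C2) / (sig_a ** 2 + sig_b ** 2 + C2)
    s = (cov_ab + C3) / (sig_a * sig_b + C3)
    return l, c, s

def ssim(a, b, alpha=1.0, beta=1.0, gamma=1.0):
    """Weighted SSIM of (4); alpha = beta = gamma = 1 gives the usual form."""
    l, c, s = ssim_components(a, b)
    return (l ** alpha) * (c ** beta) * (s ** gamma)
```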
The above definition of the SSIM quality index provides a comprehensive framework for developing and evaluating perceptual quality measures. This includes its connections to human visual neurobiology and perception, as well as the direct validation of the index against human subject ratings. SSIM quickly gained popularity within the image processing community and also found widespread adoption in the television and social media industries [14,15].
Furthermore, the SSIM is utilized in various fields including image compression [16], image restoration [17], and pattern recognition [18]. In image compression, some data are intentionally discarded to save storage space, and MSE is commonly employed in such compression methods. However, research has revealed that using SSIM instead of MSE yields superior results for uncompressed images. In the realm of image restoration, the Wiener filter [19] is employed, and its design is based on MSE. Nonetheless, using a variant of SSIM, particularly statistical structural similarity (Stat-SSIM), produces more visually appealing outcomes [20]. In pattern recognition, SSIM can be utilized to identify patterns since it emulates certain aspects of human perception. When dealing with challenges like image scaling, translation, and rotation, it is recommended to use Complex Wavelet Structural Similarity (CW-SSIM) as it is unaffected by these changes and can be directly applied through pattern matching without requiring any training samples. Additionally, in data-driven pattern recognition approaches, where ample training data are available, it is suggested to employ CW-SSIM according to [7] for enhanced performance.
Despite the efficiency and good results of SSIM in [13], this similarity measure is still limited by its almost fixed structure. This structure restricts researchers to a specific model of similarity analysis that SSIM deals with.
The main innovation proposed in this paper is to introduce a flexible structure for SSIM using t-norms [21]. By combining these features with the inherent power of SSIM, new possibilities emerge for analyzing the similarity between images, signals, and more. Moreover, the variety of t-norms enables researchers to explore new aspects of similarity. In addition to the latter superiority, in different scenarios, the appropriate model can be used to achieve improved and adaptive results based on the specific problem at hand.
In the remaining part of this contribution, we introduce in Section 2 a class of similarity measures based on the concept of t-norms. We raise three basic questions and answer them to explain why we chose to construct the similarity measure using t-norms. We then prove the similarity properties of the proposed t-norm-based similarity measures with the help of a fundamental theorem. In Section 3, we present the comparison results and corresponding analysis, which demonstrate the practicality and applicability of the proposed t-norm-based similarity measures. In the first part of the experiment, we assess the effectiveness of the proposed method for measuring the similarity of grayscale images, comparing one image with itself and two different images in the form of case studies. In the second part, similarly, we examine the similarity measure values for binarized images produced using the Sugeno integral, the Choquet integral, and the Bradley method. In the last experiment, we utilize a generative adversarial network (GAN) [22,23,24] to investigate cloud cover. We generate images and assess the similarity between the generated images and the original ones. Needless to say, similar outcomes can be extended to larger and more practical experiments without loss of generality of the method.

2. T-Norm-Based Structural Similarity ($SSIM_T$)

One of the most widely used measures of similarity is that proposed by Wang et al. [13], known as SSIM. However, this similarity measure is not flexible enough, even with the ability to be weighted. This inflexibility limits the examination of similarity between images to a narrow scope. To address this issue, we introduce a new structural similarity measure called $SSIM_T$, based on t-norms. The inclusion of t-norms in the proposed similarity definition allows for a fresh perspective in examining the similarity between images, extending the aspects of analysis and increasing efficiency in decisionmaking.

2.1. T-Norms

A triangular norm, also known as a t-norm, is a type of binary operation used in mathematics. T-norms are applied in two main areas: probabilistic metric spaces [25] and multi-valued logic [26]. In probabilistic metric spaces, t-norms are used to extend the concept of intersection in a lattice; in logic, they generalize the concept of conjunction.
A t-norm is a function $T: [0,1] \times [0,1] \to [0,1]$ that satisfies the following properties [21]:
  • Commutativity property: $T(a,b) = T(b,a)$;
  • Monotonicity property: $T(a,b) \le T(c,d)$ if $a \le c$ and $b \le d$;
  • Associativity property: $T(a, T(b,c)) = T(T(a,b), c)$;
  • Identity property: $T(a,1) = a$.
T-norms aim to extend classical two-valued logic by introducing intermediary truth values between 1 (truth) and 0 (falsity) to represent varying degrees of truth in propositions. Some of the more commonly used t-norms for any a and b in $[0,1]$ are [27,28]:
(1) Product t-norm:
$$T_P(a,b) = a \cdot b. \tag{5}$$
(2) Lukasiewicz t-norm:
$$T_L(a,b) = \max(0,\, a + b - 1). \tag{6}$$
(3) Drastic product t-norm:
$$T_D(a,b) = \begin{cases} b, & \text{if } a = 1,\\ a, & \text{if } b = 1,\\ 0, & \text{otherwise.} \end{cases} \tag{7}$$
(4) Nilpotent minimum t-norm:
$$T_N(a,b) = \begin{cases} \min(a,b), & \text{if } a + b > 1,\\ 0, & \text{otherwise.} \end{cases} \tag{8}$$
(5) Hamacher product t-norm:
$$T_H(a,b) = \begin{cases} 0, & \text{if } a = b = 0,\\ \dfrac{ab}{a + b - ab}, & \text{otherwise.} \end{cases} \tag{9}$$
(6) Dubois–Prade t-norm:
$$T_{DP}(a,b) = \frac{ab}{\max(a, b, \alpha)}, \quad \alpha > 0. \tag{10}$$
(7) Yager t-norm:
$$T_Y(a,b) = \max\bigl\{0,\; 1 - \bigl((1-a)^\alpha + (1-b)^\alpha\bigr)^{1/\alpha}\bigr\}, \quad \alpha > 0. \tag{11}$$
(8) Sugeno–Weber t-norm:
$$T_{SW}(a,b) = \max\Bigl\{0,\; \frac{a + b - 1 + \alpha ab}{1 + \alpha}\Bigr\}, \quad \alpha \ge -1. \tag{12}$$
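The eight t-norms above translate directly into code. The sketch below is one plain-Python reading of (5)–(12) for scalar inputs in [0,1]; the default parameter values for the Dubois–Prade, Yager, and Sugeno–Weber families are illustrative assumptions, not values taken from the paper.

```python
def t_product(a, b):
    return a * b                                      # (5)

def t_lukasiewicz(a, b):
    return max(0.0, a + b - 1.0)                      # (6)

def t_drastic(a, b):                                  # (7)
    if a == 1.0:
        return b
    if b == 1.0:
        return a
    return 0.0

def t_nilpotent_min(a, b):
    return min(a, b) if a + b > 1.0 else 0.0          # (8)

def t_hamacher(a, b):                                 # (9)
    if a == 0.0 and b == 0.0:
        return 0.0
    return a * b / (a + b - a * b)

def t_dubois_prade(a, b, alpha=0.5):                  # (10), alpha > 0
    return a * b / max(a, b, alpha)

def t_yager(a, b, alpha=2.0):                         # (11), alpha > 0
    return max(0.0, 1.0 - ((1 - a) ** alpha + (1 - b) ** alpha) ** (1.0 / alpha))

def t_sugeno_weber(a, b, alpha=1.0):                  # (12), alpha > -1 here
    return max(0.0, (a + b - 1.0 + alpha * a * b) / (1.0 + alpha))
```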
The product, Lukasiewicz, drastic product, nilpotent minimum, and Hamacher product t-norms are mathematical operations used in fuzzy set theory to model different types of intersection (AND) operations between sets or values. Before going more in depth, let us compare these t-norms from different viewpoints.

2.2. When Might Each of the T-Norms Be Properly Used?

The concept of t-norms comes into play when we are dealing with the broader realm of fuzzy logic, which deals with reasoning that is approximate rather than fixed and exact. Indeed, t-norms (triangular norms) are a critical element used for this type of logic to model the conjunction (AND) operation in a fuzzy setting. The choice of a specific t-norm impacts the behavior of the conjunction operation, allowing for a tailored approach to intersection depending on the circumstances and requirements [29,30].
  • When integrating the concept of intersection in terms that mirror the multiplication operation in classic arithmetic, the product t-norm is applied. Its primary characteristic is that it yields substantial outputs only when both inputs are significant themselves. In this framework, a strong result is a product of two strong factors, aligning closely with the multiplication principle such that a small change in either input affects the output product significantly, and a zero in any input drives the product to zero, thus showing its suitability for scenarios requiring a robust and conjunctive interplay between operands.
  • Contrastingly, the Lukasiewicz t-norm is a softer alternative and a continuous counterpart to the standard logical “AND”. This t-norm follows a more relaxed policy towards the intersection, accommodating some overlap between inputs without necessitating each one to be particularly strong to yield a non-trivial outcome. This t-norm is valuable when intersections should not be strictly exclusive, accommodating scenarios where a soft logical connection is preferred.
  • For situations that call for an extreme outcome, where the result is non-zero only when at least one of the inputs equals "1", the drastic t-norm is utilized. This is the most extreme form of the intersection operation, mirroring the strictness of classical logic's "AND": it returns one operand only when the other is unity and collapses to zero otherwise, making it suitable for cases where a significant intersection is indispensable.
  • In scenarios where the intersection operation is expected to be less rigid than the minimum operation, the nilpotent t-norm comes into action. It introduces a soft transition that allows for the inputs to slightly overlap, providing a graceful handling of the intersection that is not as demanding, thus providing flexibility in applications where an intermediate level of strictness is the goal.
  • The Hamacher t-norm is particularly intriguing when a certain degree of control over the individual contribution of each operand to the intersection is desired. This t-norm introduces an adjustable parameter, which permits the fine-tuning of how the inputs influence the intersection. This flexibility makes it a valuable tool in contexts requiring precise calibration of the intersection’s sensitivity to each operand.
In summary, the choice of t-norm is largely contingent on the intersection behavior that is needed for an application. Each t-norm provides a different approach to fuzzy conjunction, offering a spectrum of strictness levels and operand interactions, allowing for specificity in modeling logical “AND” operations within the scope of fuzzy logic.

2.3. When Might Each of the T-Norms Properly Refer to the Intersection between Pixel Values in Two Images?

Selecting an appropriate t-norm is a critical aspect of image analysis tasks that revolve around measuring similarities between images with granularity and precision. These t-norms are mathematical tools that help us to interpret and define the degree of conjunction or overlap between pixel values in images, thereby quantifying resemblance in a way that is aligned with the specific requirements of the task at hand [28,30].
  • When the objective is to draw a direct correlation between high pixel values in corresponding locations across two images, the product t-norm is employed. Its use in image processing is akin to a strict multiplication of pixel values where only pixels that are intense and coincident in both images contribute heavily to the similarity score. This means that, for images with high luminosity or color intensity in the same areas, the product t-norm will yield high similarity values, thereby indicating a strong resemblance.
  • In situations where a certain tolerance for imprecision is acceptable, perhaps to account for variations caused by noise or compression artifacts, the Lukasiewicz t-norm provides the necessary latitude. By accommodating slight differences in pixel values, the Lukasiewicz t-norm represents a more lenient approach to assessing similarity, allowing for overlap up to a certain degree without demanding exactitude in pixel value correspondence.
  • Alternatively, the drastic t-norm offers the most rigid comparison, suitable for applications where only perfect or near-perfect matches are of interest. With the drastic t-norm, images are compared under a binary lens. If the corresponding pixel values are not both high, the outcome rapidly defaults to zero, thus indicating complete dissimilarity, making it suitable for applications where a binary outcome is preferable.
  • For scenarios requiring a balance between flexibility and strictness, the nilpotent t-norm is introduced. This particular t-norm relaxes the constraints of the comparison to a moderate extent, providing a middle-ground approach that discerns similarity without being as severe as the minimum operation; hence, it is adept for nuanced similarity measurements where some leeway is beneficial.
  • The Hamacher t-norm stands out with its parametric control, allowing analysts to fine-tune the sensitivity of the similarity assessment. By manipulating the underlying parameter, one can dictate the level of influence each pixel value has on the cumulative similarity measure, thus endowing the user with the flexibility to calibrate the comparison precisely to the demands of the specific image analysis task.
In conclusion, the adaptability afforded by these different t-norms makes them potent tools within the realm of image processing, each suited to a particular nuance of similarity measurement. They offer a spectrum of operational behaviors from highly stringent to more permissive intersections of pixel values, ultimately contributing to the sophisticated and nuanced analysis of images.

2.4. Based on Which Specific Characteristics and Requirements Do We Need to Choose the Most Suitable T-Norm?

In the area of image quality assessment (IQA), selecting the appropriate t-norm is crucial as it can significantly influence the evaluation process. Image quality is not solely about sharpness or contrast; it encompasses a variety of attributes that are perceptually important to human viewers or specifically relevant to the task for which the image is used. The chosen t-norm affects how these attributes are aggregated and interpreted to provide a single quality score [27,29].
  • When the quality criteria prioritize the significance of extremely high-quality pixels, or when images have predominantly binary or bimodal distributions of pixel values (representing areas of significant contrast or edge detection scenarios), the drastic t-norm could indeed be the t-norm of choice. Its stringent nature emphasizes the presence of high-quality pixels and discounts all others. This t-norm can be particularly useful where perfection is non-negotiable, such as in high-precision manufacturing or satellite imagery, where a single pixel can represent a significant area; sharply separating high-quality areas from the rest can be pivotal in detecting critical features within the IQA process.
  • If a more comprehensive and balanced image quality assessment is sought, accounting for the overall distribution of pixel quality across the image, then the product t-norm would be more fitting. The product t-norm ensures that each pixel contributes to the IQA proportionate to its quality.
  • The Lukasiewicz t-norm, being more forgiving than the product t-norm, allows for a balanced approach in which minor imperfections do not disproportionately affect the overall quality metric. In cases where the IQA process must be flexible yet robust against variations and outliers, such as assessing image quality under variable lighting conditions, the Lukasiewicz t-norm presents itself as a viable option: it does not penalize minor deviations heavily and therefore accommodates natural variability.
  • The Hamacher t-norm is a strong candidate, as its adjustable sensitivity parameter allows the assessment to be fine-tuned to the desired resilience against outliers and the intersection behavior to be adapted to nuanced quality requirements.
  • Parameter flexibility is a vital factor in complex IQA tasks where the assessment criteria may not be linear or may require a specific balance between different aspects of image quality. The nilpotent t-norm allows for customization by adjusting its parameters, offering a tailored assessment of quality that can distinguish between minor and major quality deficits.
In conclusion, IQA is a multi-faceted task, and the selection of a suitable t-norm should be consonant with the specificities of the assessment criteria, sensitivity requirements, parameter flexibility needs, and the nature of the image data being evaluated. Tailoring the t-norm to fit these aspects ensures that the IQA remains relevant and effective for the task and the type of images under consideration.
Fuzzy logic depends on expert knowledge, particularly for choosing tools such as t-norms. The presence of knowledgeable experts is crucial for the appropriate utilization of t-norms; conversely, limited access to such experts can pose obstacles when applying fuzzy logic and t-norms in image processing.

2.5. $SSIM_T$

If we consider the original image x and the coded image y, then, by considering the luminance (l), contrast (c), and structure (s) components and by keeping the associativity property of the t-norm T in mind, we introduce

$$SSIM_T(x,y) := T\bigl(l(x,y),\, T(c(x,y), s(x,y))\bigr). \tag{13}$$
Theorem 1.
The mapping $SSIM_T$ given by (13) defines a similarity index; that is,
  • ($SSIM_T$1) $0 \le SSIM_T(x,y) \le 1$;
  • ($SSIM_T$2) $SSIM_T(x,y) = SSIM_T(y,x)$;
  • ($SSIM_T$3) $SSIM_T(x,y) = 1$ if $x = y$.
Proof.
All the functionals $l(x,y)$, $c(x,y)$, and $s(x,y)$ satisfy the normalization, commutativity, and identity properties referred to in ($SSIM_T$1)–($SSIM_T$3).

Proof of ($SSIM_T$1): We know that $l(x,y), c(x,y), s(x,y) \in [0,1]$. Since $T: [0,1] \times [0,1] \to [0,1]$, it follows easily that $SSIM_T(x,y) \in [0,1]$.

Proof of ($SSIM_T$2): Since $l(x,y)$, $c(x,y)$, and $s(x,y)$ are symmetric indices,

$$l(x,y) = l(y,x), \quad c(x,y) = c(y,x), \quad s(x,y) = s(y,x).$$

This implies that

$$T(c(x,y), s(x,y)) = T(c(y,x), s(y,x)),$$

and hence

$$SSIM_T(x,y) = T\bigl(l(x,y), T(c(x,y), s(x,y))\bigr) = T\bigl(l(y,x), T(c(y,x), s(y,x))\bigr) = SSIM_T(y,x).$$

Proof of ($SSIM_T$3): If $x = y$, then the identity property of the indices $l$, $c$, and $s$ gives

$$l(x,x) = c(x,x) = s(x,x) = 1.$$

Then, by the identity property of the t-norm T,

$$T(c(x,x), s(x,x)) = T(1,1) = 1,$$

which leads to

$$SSIM_T(x,x) = T\bigl(l(x,x), T(c(x,x), s(x,x))\bigr) = T(1, T(1,1)) = T(1,1) = 1. \qquad \square$$
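A quick numerical spot-check of Theorem 1 is straightforward: sampling random component triples confirms ($SSIM_T$1) and ($SSIM_T$3) for a chosen t-norm (the product t-norm is used here for illustration), while ($SSIM_T$2) follows directly from the symmetry of l, c, and s.

```python
import random

def ssim_t_from_components(l, c, s, T):
    """SSIM_T of (13), given precomputed components and a t-norm T."""
    return T(l, T(c, s))

T = lambda a, b: a * b                   # product t-norm as a sample choice
for _ in range(10_000):
    l, c, s = (random.random() for _ in range(3))
    v = ssim_t_from_components(l, c, s, T)
    assert 0.0 <= v <= 1.0               # (SSIM_T 1): value stays in [0, 1]
assert ssim_t_from_components(1.0, 1.0, 1.0, T) == 1.0   # (SSIM_T 3)
```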
It should be mentioned that Wang et al.'s SSIM is commonly used in the weighted form (4). In this contribution, we have explored the use of l, c, and s in their basic form; if necessary, the results can be extended to the analogous weighted form of Wang et al.'s SSIM.
Now, we construct a new class of $SSIM_T$ indices using the product, Lukasiewicz, drastic product, nilpotent minimum, Hamacher product, Dubois–Prade, Yager, and Sugeno–Weber t-norms.
  • Product t-norm-based SSIM:
    $$SSIM_{T_P}(a,b,c) = T_P(a, T_P(b,c)) = a \cdot T_P(b,c) = a \cdot (b \cdot c).$$
  • Lukasiewicz t-norm-based SSIM:
    $$SSIM_{T_L}(a,b,c) = T_L(a, T_L(b,c)) = \max\{0,\, a + \max\{0,\, b + c - 1\} - 1\}.$$
  • Drastic product t-norm-based SSIM:
    $$SSIM_{T_D}(a,b,c) = T_D(a, T_D(b,c)) = \begin{cases} T_D(b,c), & \text{if } a = 1,\\ a, & \text{if } T_D(b,c) = 1,\\ 0, & \text{otherwise.} \end{cases}$$
  • Nilpotent minimum t-norm-based SSIM:
    $$SSIM_{T_N}(a,b,c) = T_N(a, T_N(b,c)) = \begin{cases} \min(a, T_N(b,c)), & \text{if } a + T_N(b,c) > 1,\\ 0, & \text{otherwise.} \end{cases}$$
  • Hamacher product-based SSIM:
    $$SSIM_{T_H}(a,b,c) = T_H(a, T_H(b,c)) = \begin{cases} 0, & \text{if } a = T_H(b,c) = 0,\\ \dfrac{a\, T_H(b,c)}{a + T_H(b,c) - a\, T_H(b,c)}, & \text{otherwise.} \end{cases}$$
  • Dubois–Prade t-norm-based SSIM:
    $$SSIM_{T_{DP}}(a,b,c) = T_{DP}(a, T_{DP}(b,c)) = \frac{a\, T_{DP}(b,c)}{\max(a, T_{DP}(b,c), \alpha)}, \quad \alpha > 0.$$
  • Yager t-norm-based SSIM:
    $$SSIM_{T_Y}(a,b,c) = T_Y(a, T_Y(b,c)) = \max\bigl\{0,\; 1 - \bigl((1-a)^\alpha + (1 - T_Y(b,c))^\alpha\bigr)^{1/\alpha}\bigr\}, \quad \alpha > 0.$$
  • Sugeno–Weber t-norm-based SSIM:
    $$SSIM_{T_{SW}}(a,b,c) = T_{SW}(a, T_{SW}(b,c)) = \max\Bigl\{0,\; \frac{a + T_{SW}(b,c) - 1 + \alpha\, a\, T_{SW}(b,c)}{1 + \alpha}\Bigr\}, \quad \alpha \ge -1.$$
Note that, for simplicity, we have refrained from expanding these formulas further. Substituting the t-norms into the proposed $SSIM_T$ leads to long and complicated expressions which, if included here, might cause confusion for the reader; therefore, only the symbolic form of the relations is presented. During the programming and implementation of the proposed method, all the relations were applied in their fully expanded form.
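In code, by contrast, the composition stays compact regardless of the chosen t-norm. A sketch, reusing ssim_components from Section 1 and the t-norm functions from Section 2.1, and assuming l, c, and s have already been mapped into [0,1]:

```python
def ssim_t(x, y, t_norm):
    """t-norm-based structural similarity SSIM_T(x, y) = T(l, T(c, s))."""
    l, c, s = ssim_components(x, y)
    return t_norm(l, t_norm(c, s))

# Evaluating several variants on the same (hypothetical) image pair:
# for name, T in [("product", t_product), ("Lukasiewicz", t_lukasiewicz),
#                 ("Hamacher", t_hamacher), ("nilpotent min", t_nilpotent_min)]:
#     print(name, ssim_t(img_a, img_b, T))
```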

3. Experiments and Empirical Results

In this section, we implement different types of t-norm-based similarity measures in different case studies to show the applicability of each t-norm-based similarity measure and how they can work.
By keeping in mind that each t-norm has unique properties, we can show that these properties can affect the output values of proposed t-norm-based similarity measures when we implement them in different situations.
It should be noted that the structure of $SSIM_{T_P}$ is very similar to Wang et al.'s SSIM, but with a small difference, so the results of Wang's similarity measure can be expected to differ from those of $SSIM_{T_P}$. For comparability with the other t-norm-based SSIMs discussed in this section, all the corresponding results are presented below.
Let us now consider the theoretical aspect of Wang's similarity measure, which is presented in the following form:

$$SSIM(x,y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}. \tag{14}$$
Remark 1.
Notice that, in previous and related articles, Wang's similarity measure SSIM has been calculated without considering the terms l, c, and s separately. This is evident from (14), where the constant $c_3$ does not appear directly. Indeed, this is the main difference between calculating Wang's similarity and the proposed t-norm-based similarity measure $SSIM_{T_P}$.
Furthermore, for the other t-norm-based similarity measures proposed in this contribution, this type of simplification is not possible due to their different constructions: some combine l, c, and s additively and some multiplicatively, as in the Lukasiewicz, Hamacher product, nilpotent minimum, drastic product, Yager, Dubois–Prade, and Sugeno–Weber cases.
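The disappearance of $c_3$ from (14) follows from the common choice $c_3 = c_2/2$ in [13]. With $\alpha = \beta = \gamma = 1$, a short derivation shows how the contrast and structure terms collapse:

$$c(x,y)\, s(x,y) = \frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \cdot \frac{\sigma_{xy} + c_2/2}{\sigma_x\sigma_y + c_2/2} = \frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \cdot \frac{2\sigma_{xy} + c_2}{2\sigma_x\sigma_y + c_2} = \frac{2\sigma_{xy} + c_2}{\sigma_x^2 + \sigma_y^2 + c_2},$$

so multiplying by $l(x,y)$ recovers (14) with no $c_3$ remaining; no such cancellation is available for the other t-norm-based compositions.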
The last three t-norms, provided by (10)–(12), bear a notable resemblance to the Lukasiewicz and Hamacher t-norms in terms of structure. To avoid unnecessary complications, as well as to reduce computational load and table size, detailed outcomes for these t-norms are omitted here. However, it is important to point out that the results observed for these t-norms were akin to those obtained for the comparable t-norms and were found to be satisfactory within their respective contexts.
To express the comparisons more clearly, we divide this section into three parts corresponding to different scenarios.
In the first part of comparison, we analyze the similarity of S S I M T and Wang et al.’s SSIM in two scenarios simultaneously (see Figure 1 and Figure 2). In the first scenario, a comparison is conducted through mean aggregation between two identical images and two distinct images. In the second scenario, a comparison is conducted for two identical images and two distinct images using the minimum and maximum aggregations. In both scenarios, the output of other t-norm-based similarity measures is presented.
In the second part of the comparison, we binarize the images from the original image column of Figure 3 using three methods. We then compare the similarity by first checking an arbitrary image with itself. Next, we compare two images with the same original image but different binarization methods. Finally, we compare two images that have different original images and binarization methods.
In the third part of the comparison, we use color cloud cover images from the Flickr website [31] (accessed on 1 November 2023). We then use a GAN to train on and generate similar images (see Figure 4 and Figure 5), and compare the produced images with the original images in Figure 6 to assess their similarity. This method can be utilized to build a new low-cost dataset or enhance resolution for decisionmaking and weather forecasting purposes.

3.1. First Part: Comparison of Grayscale Images

Figure 1 shows the images used in this section: random 255 × 255 pixel images in grayscale format.
In Table 1, we compare the results of various t-norm-based similarity measures for Figure 1a,b. The Identical column shows the results for Figure 1a compared with itself, while the Distinct column displays the values for comparing Figure 1a,b.
Figure 1. Figures (a–d) illustrate random grayscale images [32,33].
The Identical column of Table 1 shows that the outputs of different similarity measures for two identical images are 1, as theoretically expected. The Distinct column of Table 1 reports the outputs of different similarity measures for Figure 1a,b. To determine the t-norm-based SSIMs, we calculate l, c, and s separately and then compute their average aggregation.
In Table 1, for several of the t-norms, the output values of the t-norm-based SSIMs for two distinct images are very close to the value obtained from Wang et al.'s SSIM. The Hamacher t-norm has a unique way of calculating outcomes, leading to different results. The variable sensitivity of the Hamacher t-norm makes it useful for specific tasks like texture analysis and stereoscopic image matching, where analysts can adjust sensitivity for each pixel; this allows for detailed comparisons that can pick up on subtle textural differences or depth cues in images [34].
There is indeed a difference between the output values of Wang et al.'s SSIM and those of the Hamacher t-norm in Table 1, and of other t-norms in the later tables. However, utilizing the Hamacher and other t-norms is worthwhile because of the aforementioned features, which are not present in Wang et al.'s SSIM.
In the following, we clarify the experimental aspect of Wang et al.'s SSIM. Wang et al. perform the similarity calculation for a 255 × 255 pixel image, denoted here as IMAGE. In Figure 2, we have divided IMAGE into smaller odd-dimensioned 7 × 7 squares.
Figure 2 shows the pixel-mean of each square, which is calculated and placed in an array of the form $IMAGE = [M_1, M_2, M_3, \ldots]$. Then, using a Gaussian or uniform filter [13], these values are mapped to $[0,1]$. Computing l, c, and s as inputs to Wang's similarity measure then yields its value.
In the same manner, we perform this procedure to calculate $SSIM_{T_P}$, but with the distinction that we provide l, c, and s to the system separately using pixel-minimum and pixel-maximum aggregation: for each of the three values l, c, and s, we allow the two options of minimum and maximum, giving a total of eight modes:

$$(l, c, s) \in \{(\max, \max, \max),\, (\max, \max, \min),\, \ldots,\, (\min, \min, \min)\}.$$

For clarification, the second triple above means that we calculate the max value for l and c and the min value for s. The resulting outputs are then used as the inputs for calculating the t-norm-based SSIM.
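The following sketch mirrors this procedure: it collects l, c, and s over 7 × 7 squares and then enumerates the eight min/max aggregation modes for a chosen t-norm. It reuses ssim_components and the t-norm functions from the earlier sketches; the non-overlapping tiling is a simplifying assumption.

```python
import itertools
import numpy as np

def windowed_components(a, b, win=7):
    """Per-window l, c, s over non-overlapping win x win squares of two images."""
    ls, cs, ss = [], [], []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            l, c, s = ssim_components(a[i:i + win, j:j + win],
                                      b[i:i + win, j:j + win])
            ls.append(l); cs.append(c); ss.append(s)
    return np.array(ls), np.array(cs), np.array(ss)

def eight_mode_table(a, b, t_norm, win=7):
    """SSIM_T under all eight (min/max) aggregation modes of (l, c, s)."""
    ls, cs, ss = windowed_components(a, b, win)
    agg = {"max": np.max, "min": np.min}
    return {(ml, mc, ms): t_norm(agg[ml](ls), t_norm(agg[mc](cs), agg[ms](ss)))
            for ml, mc, ms in itertools.product(("max", "min"), repeat=3)}
```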
Figure 2. The preprocessing stages of Wang et al.'s SSIM and the direction of the sliding window, from left to right and top to bottom.
In order to have a wide viewpoint of different proposed similarity measures, we consider the similarity outputs for two distinct images in Figure 1a,b, which are summarized with respect to eight modes in Table 2.
Note that Wang et al.'s SSIM is calculated for only one mode, and it equals 0.38.
Variations in appearance, structure, and texture between the images in Figure 1a,b lead the nilpotent t-norm to output lower values. The nilpotent t-norm is well suited to situations without a clear yes-or-no answer; for instance, it can help in combining details from different images to create a better overall picture, or in finding similar images rather than an exact match [35].
In Table 3, to further the investigation, we show the outputs of proposed t-norm-based SSIMs for two distinct images of Figure 1c,d. In this case, Wang et al.’s SSIM is only calculated for one mode and it equals 0.49.
In Figure 1c,d, the presence of an animal is a common feature. When all the components of the SSIM (l, c, and s) are aggregated with max, the t-norm-based SSIMs yield a high similarity value close to 0.9 due to the comparable color and texture of the images. However, SSIM is sensitive to variations in these components, and different combinations of l, c, and s significantly alter the resulting score. Using the Lukasiewicz t-norm in the SSIM computation makes the differences between images more noticeable. This is helpful in applications where there may be variations in lighting or partial occlusions, such as in surveillance systems for object recognition: the t-norm allows for comparison of images with similar characteristics, ensuring that the system does not fail to identify objects due to minor differences [36].

3.2. Second Part: Comparison of Binary Images

In this case, we first binarize the images used in [37], resized to 255 × 255 pixels. We then employ the three methods and two thresholds used in [37]. Two of the methods are fuzzy-based, namely the Sugeno integral and the Choquet integral, and the third, the Bradley technique, is non-fuzzy. The fuzzy methods are applied with an automatic Otsu threshold of about 0.48 and a fixed manual threshold of 2. The resulting images are displayed in Figure 3.
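The fuzzy Sugeno- and Choquet-integral binarizations follow [37] and are too involved to reproduce here, but the two non-fuzzy ingredients are easy to illustrate. Below is a sketch of global Otsu thresholding and of a Bradley-style adaptive threshold built on an integral image; the 25-pixel window and 15% sensitivity are typical defaults for Bradley's method, not values taken from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_binarize(gray):
    """Global Otsu binarization of a 2-D grayscale array."""
    return gray > threshold_otsu(gray)

def bradley_binarize(gray, win=25, t=0.15):
    """Bradley-style adaptive thresholding: a pixel is foreground when it
    exceeds (1 - t) times the mean of its win x win neighbourhood."""
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(0).cumsum(1)            # integral image
    h, w = gray.shape
    window_sum = (ii[win:win + h, win:win + w] + ii[:h, :w]
                  - ii[win:win + h, :w] - ii[:h, win:win + w])
    local_mean = window_sum / (win * win)
    return gray > local_mean * (1.0 - t)
```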
Figure 3. Binarized images used in [37].
The first column of Figure 3 displays the original colored images, and the second column represents the ground truth (GT). The third column refers to the adaptive Choquet method with an automatic Otsu threshold of approximately 0.48 (ACH-TH = 0.49), and the fourth column shows adaptive Choquet with a fixed manual threshold of 2 (ACH-TH = 2). The fifth and sixth columns depict the Sugeno method with an automatically detected Otsu threshold of approximately 0.48 (S-TH = 0.49) and a fixed manual threshold of 2 (S-TH = 2). Finally, the last column displays the output of Bradley's method.
From Figure 3, we observe that the binarized images using the Sugeno method are visually more similar to their original form. The images generated by the Choquet integral show less similarity to the original images. The images generated by Bradley show the lowest similarity to the original images.
Moreover, Table 4 demonstrates the application of the t-norm-based similarity measures to two different images. The Identical column of Table 4 shows the results obtained for a binarized TREE image (Choquet with a threshold of 2) compared with itself, and the Distinct column presents the values for two distinct images, chosen from the adaptive Choquet and grayscale images of a TREE in Figure 3.
The first row of Table 4 displays the results of Wang et al.'s SSIM for binarized images. As previously mentioned, the output of $SSIM_{T_P}$ closely resembles Wang's output due to their similar structures, which is confirmed by the identical values in the second row.
Using the product t-norm is important for precision in fields like medical imaging and quality control. It helps to accurately align scans for diagnosis and detect even the smallest deviations from a standard. The high correlation score it provides for similar pixels ensures that only the closest alignments are considered significant, reducing false positives [38].
Using drastic t-norm metrics for binary classification tasks can be very useful, especially in security settings like biometric systems. It helps to determine the presence or absence of a specific feature with certainty, ensuring a high pixel value match for confirming identity accurately [39].
In Table 5, for a broader comparison, the Identical column reports the comparison of the BIRD image generated by the Sugeno integral with a threshold of 0.48 against itself. For the Distinct column, we use that BIRD image together with the DOLL image generated by the adaptive Choquet integral with a threshold of 2 as the inputs of the t-norm-based similarity measures.
The first row of Table 5 confirms the previous satisfactory result: $SSIM_{T_P}$ closely resembles Wang's output due to their similar structures.

3.3. Third Part: Comparison of GAN-Generated Images

Meteorology is a field that heavily relies on the analysis of vast amounts of data to make accurate predictions about weather patterns and climate changes. Machine learning has emerged as a powerful tool in this domain, allowing meteorologists to develop more sophisticated models that can process and interpret complex atmospheric data. By leveraging this technology, researchers have been able to improve the accuracy of weather forecasts, better understand climate phenomena, and develop early warning systems for severe weather events [22,23,24].
In this part, we will delve into a specific application of machine learning, fuzzy logic, and GANs in meteorology and image processing, highlighting the challenges and opportunities associated with building new datasets using GANs and the implications for training more robust and accurate machine learning models.
Furthermore, the use of GANs in building new datasets has opened up new possibilities for training machine learning models in meteorology and image processing. By generating synthetic data that closely resemble real-world observations, GANs have enabled researchers to overcome data scarcity and improve the robustness of their models. This has significant implications for the development of more accurate and reliable prediction systems, as well as for advancing our understanding of complex meteorological phenomena [40].
The SSIM [41] and cosine similarity [42], in particular, have become valuable tools for identifying similarities between images in meteorological research lately. Here, we demonstrate the effectiveness of the proposed t-norm-based SSIMs by comparing the similarity of these images with the original images. It is worth noting that, based on the diverse results provided by the proposed t-norm-based SSIMs, the decisionmaker can guide the image generation model, highlight a certain part of it, or direct the conclusion model to the direction of their interest.
The GAN structure of this contribution has two convolutional neural networks, configured as follows:
  • Generator: 4 hidden layers; input: a vector of random noise; activation function: LeakyReLU; last layer: a hyperbolic tangent function to obtain values in the range −1 to 1.
  • Discriminator: 5 hidden layers; input: an image produced by the generator; last layer: a dropout layer to prevent the model from over-fitting, followed by a sigmoid activation function returning the probability of an image being real or fake.
  • Training: Adam optimizer; learning rate: 0.0002; momentum term: 0.5; batch size: 128; epochs: 300; loss function: binary cross-entropy for both the generator and the discriminator.
A sketch of this configuration is given below.
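As an illustration only, the following minimal PyTorch sketch realizes the configuration above. The layer widths, kernel sizes, batch normalization, latent dimension of 100, and dropout rate of 0.3 are assumptions chosen to match the stated depths and the 64 × 64 × 3 images; the paper does not specify them.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """4 hidden transposed-conv layers; noise vector in, 64 x 64 x 3 image out."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),   # tanh: outputs in [-1, 1]
        )

    def forward(self, z):                  # z: (batch, z_dim)
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    """5 conv layers; ends with dropout and a sigmoid real/fake probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 512, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(512, 1, 4, 1, 0),
            nn.Flatten(), nn.Dropout(0.3), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, 3, 64, 64)
        return self.net(x)

G, D = Generator(), Discriminator()
criterion = nn.BCELoss()                   # binary cross-entropy for both networks
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
batch_size, epochs = 128, 300              # as stated above
```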
The structures of both the generator and discriminator are illustrated in Figure 4 and Figure 5, as shown in [22].
Figure 4. The structure of the proposed generator.
Figure 5. The structure of the proposed discriminator.
Reference [31] contains color images with a size of 512 × 512 pixels. Due to the high computational load of working with these images, we initially converted a set of 2000 images to 64 × 64 pixels to evaluate the efficiency of the proposed method. In this case, we restricted ourselves to a limited number of photos; interested readers can explore the same example with a larger sample size to obtain results with higher similarity. We used 80% of the images for training and the remaining 20% for testing. The left side of Figure 6 (a) displays some of the images chosen for this test from [31], and the right side (b) shows the GAN-generated images.
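The preprocessing amounts to a resize and a random 80/20 split. A sketch, where the folder names and the JPEG extension are hypothetical:

```python
import random
from pathlib import Path
from PIL import Image

SRC, DST = Path("clouds_512"), Path("clouds_64")   # hypothetical folder names
DST.mkdir(exist_ok=True)

paths = sorted(SRC.glob("*.jpg"))[:2000]           # limit to 2000 images
for p in paths:
    Image.open(p).convert("RGB").resize((64, 64)).save(DST / p.name)

random.seed(0)
random.shuffle(paths)
cut = int(0.8 * len(paths))                        # 80% train / 20% test
train_files, test_files = paths[:cut], paths[cut:]
```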
Figure 6. The left side (a) displays the original images and the right side (b) shows the images generated by the GAN.
In the first part of this experiment, we compared the first image of the first row of the original images (side (a) of Figure 6) with the corresponding generated image (side (b)) in color mode. Table 6 displays the results obtained for the t-norm-based SSIMs; using larger datasets in future work may yield results with higher similarity. As in previous cases, Wang et al.'s SSIM is calculated for only one mode and equals 0.43, while the cosine similarity measure is 0.29.
Note that, in this experiment, we utilized a GAN to generate cloud cover images similar to the original ones and then measured their similarity using the proposed t-norm-based SSIMs. The first row of Table 6 shows the high similarity values obtained for the different t-norms under max aggregation, and row 5 shows values closely matching Wang et al.'s SSIM. The proposed t-norm-based SSIMs offer more flexibility than Wang et al.'s SSIM, allowing decisionmakers to assess various aspects of image similarity based on the unique properties of each t-norm and to interpret the results from new perspectives.
In the second part of this experiment, we utilized the Sugeno integral with automatic Otsu threshold to produce binary images. The images employed in this part are identical to those used in the first part. The left side of Figure 7 displays the binarized output of the original image and the right side shows the binarized version of the generated image.
In Table 7, the values in the Value column correspond to the binary images shown in Figure 7. As in previous instances, Wang et al.’s SSIM is calculated for a single mode here, resulting in a value of 0.62, while the cosine similarity measure is 0.31. Table 7 demonstrates the reliability and accuracy of the proposed t-norm-based SSIMs compared to common similarity measures such as Wang et al.’s SSIM and cosine similarity.
In this paper, we examined the inherent changes to the SSIM similarity measure after adding t-norms. Each t-norm has specific applications and helps SSIM to explore new aspects of image similarity interpretation. To evaluate the effectiveness of the proposed method, we applied it to color, binarized, and grayscale images. Other metrics, such as the Fréchet Inception Distance (FID) [43] and the Matthews correlation coefficient (MCC) or F1 score [37], are also available for this purpose. The key distinction between these methods and our approach is that they only assess the output values and measure the degree of closeness to the results of other methods, which is not our objective. Nevertheless, these comparison methods can be considered for future research.

4. Conclusions

Having flexible decisionmaking policies leads to faster and more appropriate decisions in various situations. In this paper, we introduced a new framework based on eight different t-norms to construct various SSIMs, allowing for flexibility in decisionmaking and new perspectives. The experimental results demonstrate that the proposed methods, applied to grayscale, binarized, and color cloud cover images, effectively address the similarity between images and offer new perspectives for interpreting the similarity values. Furthermore, we compared various image types, including GAN-generated images, highlighting the influence of selecting different t-norms on the resulting similarity metrics.
In future work, we aim to assess the quality of produced images using not only the minimum and maximum operators but also other aggregation operators, enabling us to convert images into vector form more effectively.

Author Contributions

Conceptualization, B.F.; methodology, B.F. and M.R.A.; software, M.R.A.; validation, B.F., M.R.A. and A.H.; formal analysis, A.H.; investigation, B.F., M.R.A. and A.H.; resources, B.F. and M.R.A.; data curation, M.R.A.; writing—original draft preparation, M.R.A.; writing—review and editing, B.F. and A.H.; visualization, M.R.A.; supervision, B.F. and A.H.; project administration, B.F. and A.H.; funding acquisition, B.F. and A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kollem, S.; Reddy, K.R.; Rao, D.S. A novel diffusivity function-based image denoising for MRI medical images. Multimed. Tools Appl. 2023, 82, 32057–32089. [Google Scholar] [CrossRef]
  2. Prodan, M.; Vlăsceanu, G.V.; Boiangiu, C.A. Comprehensive evaluation of metrics for image resemblance. J. Inf. Syst. Oper. Manag. 2023, 17, 161–185. [Google Scholar]
  3. Zhu, F.; Li, J.; Zhu, B.; Li, H.; Liu, G. Uav remote sensing image stitching via improved vgg16 siamese feature extraction network. Expert Syst. Appl. 2023, 229, 120525. [Google Scholar] [CrossRef]
  4. Yang, R.; Zheng, C.; Wang, L.; Zhao, Y.; Fu, Z.; Dai, Q. MAE-BG: Dual-stream boundary optimization for remote sensing image semantic segmentation. Geocarto Int. 2023, 38, 2190622. [Google Scholar] [CrossRef]
  5. Thakre, S.; Karan, V.; Kanjarla, A.K. Quantification of similarity and physical awareness of microstructures generated via generative models. Comput. Mater. Sci. 2023, 221, 112074. [Google Scholar] [CrossRef]
  6. Jin, J.; Xue, Y.; Zhang, X.; Meng, L.; Zhao, Y.; Lin, W. HVS-inspired signal degradation network for just noticeable difference estimation. arXiv 2022, arXiv:2208.07583. [Google Scholar]
  7. Gao, Y.; Rehman, A.; Wang, Z. CW-SSIM based image classification. In Proceedings of the 2011 18th IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 September 2011; pp. 1249–1252. [Google Scholar]
  8. Wan, D.; Jiang, X.; Shen, Q. Blind Quality Assessment of Stereoscopic Images Considering Binocular Perception based on Shearlet Decomposition. IEEE Access 2023, 11, 96387–96400. [Google Scholar] [CrossRef]
  9. Chen, F.; Fu, H.; Yu, H.; Chu, Y. Using HVS Dual-Pathway and Contrast Sensitivity to Blindly Assess Image Quality. Sensors 2023, 23, 4974. [Google Scholar] [CrossRef]
  10. Zhang, A.X.; Wang, Y.G.; Tang, W.; Li, L.; Kwong, S. A Spatial–Temporal Video Quality Assessment Method via Comprehensive HVS Simulation. IEEE Trans. Cybern. 2023, 1–14. [Google Scholar] [CrossRef]
  11. Huang, L.; Zhang, R.; Wang, M. Just Noticeable Difference Estimation for Screen Content Images: A Content Uncertainty-guided Approach. In Proceedings of the 2023 IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, 10–14 July 2023; Volume 10, pp. 372–377. [Google Scholar]
  12. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  13. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  14. Rehman, A.; Zeng, K.; Wang, Z. Display device-adapted video quality-of-experience assessment. In Human Vision and Electronic Imaging XX; Springer: Berlin/Heidelberg, Germany, 2015; Volume 9394, pp. 27–37. [Google Scholar]
  15. Wang, Z.; Li, Q. Information content weighting for perceptual image quality assessment. IEEE Trans. Image Process. 2010, 20, 1185–1198. [Google Scholar] [CrossRef]
  16. Zerva, M.C.; Christou, V.; Giannakeas, N.; Tzallas, A.T.; Kondi, L.P. An improved medical image compression method based on wavelet difference reduction. IEEE Access 2023, 11, 18026–18037. [Google Scholar] [CrossRef]
  17. Mou, C.; Wang, Q.; Zhang, J. Deep generalized unfolding networks for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–23 June 2022; pp. 17399–17410. [Google Scholar]
  18. Wei, W.; Yan, J.; Wu, X.; Wang, C.; Zhang, G. CSI fingerprinting for device-free localization: Phase calibration and SSIM-based augmentation. IEEE Wirel. Commun. Lett. 2022, 11, 1137–1141. [Google Scholar] [CrossRef]
  19. Sneha, M.R.; Manju, B.R. Performance Analysis of Wiener filter in Restoration of Covid-19 Chest X-Ray Images, Ultrasound Images and Mammograms. In Proceedings of the 2022 IEEE World Conference on Applied Intelligence and Computing (AIC), Sonbhadra, India, 17–19 June 2022; pp. 326–331. [Google Scholar]
  20. Wang, Z.; Bovik, A.C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
  21. Wang, H.; Yang, B.; Li, W. Some properties of fuzzy t-norm and vague t-norm. arXiv 2022, arXiv:2205.09231. [Google Scholar]
  22. Farhadinia, B.; Ahangari, M.R.; Heydari, A.; Datta, A. A generalized optimization-based generative adversarial network. Expert Syst. Appl. 2024, 6, 123413. [Google Scholar] [CrossRef]
  23. Kim, Y.; Ratnam, J.V.; Doi, T.; Morioka, Y.; Behera, S.; Tsuzuki, A.; Minakawa, N.; Sweijd, N.; Kruger, P.; Maharaj, R.; et al. Malaria Predictions based on seasonal climate forecasts in South Africa: A time series distributed lag nonlinear model. Sci. Rep. 2019, 9, 17882, Erratum in Sci. Rep. 2020, 10, 2229. [Google Scholar] [CrossRef]
  24. Jasper-Tönnies, A.; Hellmers, S.; Einfalt, T.; Strehz, A.; Fröhle, P. Ensembles of radar nowcasts and COSMO-DE-EPS for urban flood management. Water Sci. Technol. 2018, 2017, 27–35. [Google Scholar] [CrossRef] [PubMed]
  25. Jebril, I.H.; Datta, S.K.; Sarkar, R.; Biswas, N. Common fixed point theorem in probabilistic metric space using Lukasiecz t-norm and product t-norm. J. Stat. Appl. Probab. 2021, 10, 6635–66399. [Google Scholar]
  26. Wu, C.W. On Rearrangement Inequalities for Triangular Norms and Co-norms in Multi-valued Logic. Log. Universalis 2023, 117, 331–346. [Google Scholar] [CrossRef]
  27. Rasuli, R. Fuzzy ideals of BCI-algebras with respect to t-norm. Math. Anal. Contemp. Appl. 2023, 5, 39–59. [Google Scholar]
  28. Hamidirad, F.; Molai, A.A. System of bipolar max-drastic product fuzzy relation equations with a drastic negation. Int. J. Nonlinear Anal. Appl. 2022, 13, 2095–2107. [Google Scholar]
  29. Polkowski, L.T. Finitely and Infinitely Valued Logics. In Logic: Reference Book for Computer Scientists: The 2nd Revised, Modified, and Enlarged Edition of “Logics for Computer and Data Sciences, and Artificial Intelligence”; Springer: Cham, Switzerland, 2023; pp. 281–332. [Google Scholar]
  30. Van Krieken, E.; Acar, E.; van Harmelen, F. Analyzing differentiable fuzzy logic operators. Artif. Intell. 2022, 302, 103602. [Google Scholar] [CrossRef]
  31. Available online: https://www.flickr.com/search/?text=Cloud (accessed on 1 November 2023).
  32. Khalitov, R.; Yu, T.; Cheng, L.; Yang, Z. Sparse Factorization of Large Square Matrices. arXiv 2021, arXiv:2109.08184. [Google Scholar]
  33. Bangug, C.M.; Fajardo, A.C.; Medina, R.P. A modified median filtering algorithm (MMFA). Measurement 2019, 3, 5. [Google Scholar]
  34. Huerga, C.; Morcillo, A.; Alejo, L.; Marín, A.; Obesso, A.; Travaglio, D.; Bayón, J.; Rodriguez, D.; Coronado, M. Role of correlated noise in textural features extraction. Phys. Med. 2021, 91, 87–98. [Google Scholar] [CrossRef]
  35. Wallis, T.S.; Bethge, M.; Wichmann, F.A. Testing models of peripheral encoding using metamerism in an oddity paradigm. J. Vis. 2016, 16, 4. [Google Scholar] [CrossRef]
  36. Bowen, F.; Hu, J.; Du, E.Y. A Multistage Approach for Image Registration. IEEE Trans Cybern. 2016, 46, 2119–2131. [Google Scholar] [CrossRef] [PubMed]
  37. Bardozzo, F.; De La Osa, B.; Horanska, L.; Fumanal-Idocin, J.; Priscoli, d.M.; Troiano, L.; Tagliaferri, R.; Fernandez, J.; Bustince, H. Sugeno integral generalization applied to improve adaptive image binarization. Inf. Fusion 2021, 68, 37–45. [Google Scholar] [CrossRef]
  38. Redelings, B. Erasing errors due to alignment ambiguity when estimating positive selection. Mol. Biol. Evol. 2014, 31, 1979–1993. [Google Scholar] [CrossRef] [PubMed]
  39. Wang, Y.; Shi, D.; Zhou, W. Convolutional neural network approach based on multimodal biometric system with fusion of face and finger vein features. Sensors 2022, 22, 6039. [Google Scholar] [CrossRef]
  40. Murad, T.; Ali, S.; Patterson, M. Exploring the Potential of GANs in Biological Sequence Analysis. Biology 2023, 12, 854. [Google Scholar] [CrossRef]
  41. Kellerhals, S.A.; De Leeuw, F.; Rivero, R.C. Cloud nowcasting with structure-preserving convolutional gated recurrent units. Atmosphere 2022, 13, 1632. [Google Scholar] [CrossRef]
  42. Kirişci, M. New cosine similarity and distance measures for Fermatean fuzzy sets and TOPSIS approach. Knowl. Inf. Syst. 2023, 65, 855–868. [Google Scholar] [CrossRef] [PubMed]
  43. Azour, L.; Hu, Y.; Ko, J.P.; Chen, B.; Knoll, F.; Alpert, J.B.; Brusca-Augello, G.; Mason, D.M.; Wickstrom, M.L.; Kwon, Y.J.F.; et al. Deep learning denoising of low-dose computed tomography chest images: A quantitative and qualitative image analysis. J. Comput. Assist. Tomogr. 2023, 47, 212–219. [Google Scholar] [CrossRef] [PubMed]
Figure 7. The left side is the binarized output of the original image and the right side is the binarized output of the generated image.
Table 1. The output of various t-norm-based SSIMs using Figure 1a and itself and Figure 1a,b.

| Similarity         | Identical | Distinct |
|--------------------|-----------|----------|
| Wang et al.'s SSIM | 1         | 0.38     |
| $SSIM_{T_P}$       | 1         | 0.38     |
| $SSIM_{T_L}$       | 1         | 0        |
| $SSIM_{T_H}$       | 1         | 0.32     |
| $SSIM_{T_D}$       | 1         | 0        |
| $SSIM_{T_N}$       | 1         | 0.43     |
Table 2. Output of proposed t-norm-based SSIMs for two distinct images: Figure 1a,b.

| Mode | $SSIM_{T_P}$ | $SSIM_{T_L}$ | $SSIM_{T_H}$ | $SSIM_{T_N}$ | $SSIM_{T_D}$ |
|------|--------------|--------------|--------------|--------------|--------------|
| 1    | 0.55         | 0.47         | 0.6          | 1            | 0            |
| 2    | 0.14         | 0            | 0.19         | 1            | 0            |
| 3    | 0.09         | 0            | 0.12         | 0.4          | 0            |
| 4    | 0.02         | 0            | 0.08         | 0.4          | 0            |
| 5    | 0.36         | 0.15         | 0.45         | 1            | 0            |
| 6    | 0.09         | 0            | 1            | 0.18         | 0            |
| 7    | 0.06         | 0            | 0.4          | 0.11         | 0            |
| 8    | 0.02         | 0            | 0.4          | 0.08         | 0            |
Table 3. Output of proposed t-norm-based SSIMs for two distinct images: Figure 1c,d.

| Mode | $SSIM_{T_P}$ | $SSIM_{T_L}$ | $SSIM_{T_H}$ | $SSIM_{T_N}$ | $SSIM_{T_D}$ |
|------|--------------|--------------|--------------|--------------|--------------|
| 1    | 0.9          | 0.9          | 0.9          | 0.96         | 0            |
| 2    | 0.37         | 0.33         | 0.39         | 0.4          | 0            |
| 3    | 0.028        | 0.23         | 0.29         | 0.3          | 0            |
| 4    | 0.12         | 0            | 0.2          | 0            | 0            |
| 5    | 0.63         | 0.61         | 0.65         | 0.69         | 0            |
| 6    | 0.26         | 0.05         | 0.33         | 0.4          | 0            |
| 7    | 0.2          | 0            | 0.26         | 0            | 0            |
| 8    | 0.8          | 0            | 0.19         | 0            | 0            |
Table 4. Output of proposed t-norm-based SSIMs for an identical image and two distinct images of Figure 3.

| Similarity         | Identical | Distinct |
|--------------------|-----------|----------|
| Wang et al.'s SSIM | 1         | 0.7088   |
| $SSIM_{T_P}$       | 1         | 0.7088   |
| $SSIM_{T_L}$       | 1         | 0.6802   |
| $SSIM_{T_H}$       | 1         | 0.7070   |
| $SSIM_{T_D}$       | 1         | 0        |
| $SSIM_{T_N}$       | 1         | 0.7380   |
Table 5. Output of proposed t-norm-based SSIMs for an identical image and two distinct images from Figure 3.

| Similarity         | Identical | Distinct |
|--------------------|-----------|----------|
| Wang et al.'s SSIM | 1         | 0.431    |
| $SSIM_{T_P}$       | 1         | 0.431    |
| $SSIM_{T_L}$       | 1         | 0.3004   |
| $SSIM_{T_H}$       | 1         | 0.4922   |
| $SSIM_{T_D}$       | 1         | 0        |
| $SSIM_{T_N}$       | 1         | 0.5918   |
Table 6. Output of proposed t-norm-based SSIMs for the original and GAN-generated versions of the first image of the first row of Figure 6.

| Mode | $SSIM_{T_P}$ | $SSIM_{T_L}$ | $SSIM_{T_H}$ | $SSIM_{T_N}$ | $SSIM_{T_D}$ |
|------|--------------|--------------|--------------|--------------|--------------|
| 1    | 0.71         | 0.68         | 0.72         | 0.82         | 0            |
| 2    | 0.0          | 0.0          | 0.0          | 0.0          | 0            |
| 3    | 0.04         | 0.0          | 0.05         | 0.0          | 0            |
| 4    | 0.00         | 0.0          | 0.08         | 0.0          | 0            |
| 5    | 0.47         | 0.36         | 0.53         | 0.67         | 0            |
| 6    | 0.00         | 0.0          | 0.0          | 0.0          | 0            |
| 7    | 0.002        | 0.0          | 0.05         | 0.0          | 0            |
| 8    | 0.00         | 0.0          | 0.07         | 0.0          | 0            |
Table 7. Output of proposed t-norm-based SSIMs for the binarized original and generated images of Figure 7.

| Similarity    | Value |
|---------------|-------|
| $SSIM_{T_P}$  | 0.69  |
| $SSIM_{T_L}$  | 0.67  |
| $SSIM_{T_H}$  | 0.70  |
| $SSIM_{T_D}$  | 0     |
| $SSIM_{T_N}$  | 0.75  |