Article

SAR Image Segmentation by Efficient Fuzzy C-Means Framework with Adaptive Generalized Likelihood Ratio Nonlocal Spatial Information Embedded

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 101408, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(7), 1621; https://doi.org/10.3390/rs14071621
Submission received: 4 March 2022 / Revised: 24 March 2022 / Accepted: 25 March 2022 / Published: 28 March 2022
(This article belongs to the Topic Computational Intelligence in Remote Sensing)

Abstract

The existence of multiplicative noise in synthetic aperture radar (SAR) images makes SAR segmentation by fuzzy c-means (FCM) a challenging task. To cope with speckle noise, we first propose an unsupervised FCM with embedded log-transformed Bayesian non-local spatial information (LBNL_FCM). This non-local information is measured by a modified Bayesian similarity metric, which is derived by applying the log-transformed SAR distribution to Bayesian theory. Afterwards, to avoid the undesirable characteristics of the log-transformed Bayesian similarity metric, we construct the patch similarity metric as the continued product of the corresponding pixel similarities measured by the generalized likelihood ratio (GLR). An alternative unsupervised FCM framework named GLR_FCM is then proposed. In both frameworks, an adaptive factor based on the local intensity entropy is employed to balance the original and non-local spatial information. Additionally, membership degree smoothing and the majority voting idea are integrated as supplementary local information to optimize segmentation. In experiments on simulated SAR images, both frameworks achieve segmentation accuracy of over 97%. On real SAR images, both unsupervised FCM segmentation frameworks work well in terms of region consistency and edge preservation.

1. Introduction

Segmentation is a fundamental problem in SAR image analysis and applications. The primary purpose of segmentation is to partition the image into non-intersecting, consistent regions that are homogeneous [1]. Due to coherent speckle noise, which can be modeled as a strong multiplicative noise, SAR image segmentation is recognized as a complex task. So far, many SAR image segmentation methods have been proposed to cope with the effect of speckle noise, such as threshold-based methods [2], edge-based methods [3], region-based methods [4,5,6,7,8,9], clustering methods [10,11,12,13], Markov random field methods [3,14], level set methods [15], graph-based methods [16,17], and deep-learning-based methods [18,19,20,21]. Among these, clustering is commonly used in segmentation tasks due to its effectiveness and stability. The fuzzy c-means (FCM) algorithm [22] is a classical clustering algorithm and has been extensively used to segment images. Unlike hard clustering strategies, FCM is a soft clustering algorithm that allocates a membership degree to every category for each pixel. FCM can achieve good results on noise-free images. However, the standard FCM is noise-sensitive and lacks robustness because it does not consider any spatial information. Thus, many modified algorithms have been proposed to enhance the effectiveness and robustness of the standard FCM against noise.
Ahmed et al. [23] incorporated a spatial neighborhood term into the objective function of FCM, producing BCFCM. BCFCM can modify the label of the center pixel by the neighborhood-weighted distance and enhance the robustness to noise; however, it is time-consuming. To reduce the complexity, Chen and Zhang [24] replaced the spatial neighborhood term with a mean-filtered and a median-filtered image, respectively, called FCM_S1 and FCM_S2. Because these two images are available in advance, the time complexity is greatly reduced. Besides, kernel methods were embedded into FCM_S1 and FCM_S2 to explore the non-Euclidean structure of the data, and two kernelized versions, KFCM_S1 and KFCM_S2, were derived. Szilagyi et al. [25] proposed the enhanced FCM, named EnFCM, which executes clustering on a gray-level histogram rather than on pixels to reduce the computational cost considerably. Afterwards, the fast generalized FCM (FGFCM) was proposed by Cai et al. [26]. In FGFCM, a new factor $S_{ij}$ is used to measure the local (both spatial and gray) similarity instead of $\alpha$ in EnFCM. The original image and its local spatial and gray-level neighborhood are used to construct a non-linearly weighted sum image, and the clustering process is then executed on the gray-level histogram of the summed image, so the computational load is very light. It is noteworthy that all the aforementioned algorithms need parameters for balancing noise immunity and edge preservation. To avoid the parameter selection, Krinidis and Chatzis [27] introduced a new factor, $G_{ki}$, incorporating local spatial and gray information into the objective function in a fuzzy way and proposed a new FCM named FLICM. This algorithm completely avoids the selection of parameters and is relatively independent of the type of noise.
However, when an image is contaminated with powerful noise, the local information may also be contaminated and unreliable. In fact, for a given pixel, many pixels with a similar neighborhood structural configuration exist elsewhere in the image [28], so exploring a larger space and incorporating non-local spatial information is necessary. Wang et al. [29] proposed a modified FCM incorporating both local and non-local spatial information. Zhu et al. [30] introduced a novel membership constraint and constructed a new objective function, named GIFP_FCM. Afterwards, Zhao et al. [31] incorporated non-local information into the objective functions of the standard FCM and GIFP_FCM, respectively, and proposed two improved FCMs: an FCM with non-local spatial information (FCM_NLS) [31] and a novel FCM with a non-local adaptive spatial constraint term (FCM_NLASC) [32].
While the improved FCMs listed above work well on simulated, natural, and MR images, none of them consider the statistical characteristics of SAR images. Consequently, the above-mentioned methods cannot guarantee satisfactory segmentation results on SAR images. To solve this problem, Feng et al. [33] proposed a robust non-local FCM with edge preservation (NLEP_FCM). In this algorithm, a modified ratio distance is defined to measure patch similarity for SAR images, a sum image is constructed, and the edges are rectified on the summed image. Ji and Wang [34] defined an adaptive binary-weight NL-means and adopted an adaptive filtering-degree parameter to balance noise removal and detail preservation; besides, a fuzzy between-cluster variation term was embedded into the objective function, and a new FCM named NS_FCM was proposed. However, NS_FCM applies the Euclidean distance, which is not suitable for SAR images. Wan et al. [35] directly considered the statistical distribution of SAR images and derived a patch-similarity metric for SAR images based on Bayes theory. However, the assumption of additive Gaussian noise in the Bayes equation is not considered. Therefore, it is still a challenge to segment SAR images effectively.
In this paper, we incorporate non-local spatial information into the objective function of FCM and propose two improved FCMs for segmenting SAR images effectively. In [36], an implicit assumption under which the NL-means can emerge from the Bayesian approach is that the image is affected by additive Gaussian noise. Hence, we first apply the logarithmic transformation to convert the SAR multiplicative model into an additive model and then apply the Bayesian formula to derive a modified patch similarity metric. We then incorporate the non-local spatial information obtained by this new similarity metric into FCM and propose a more robust FCM named LBNL_FCM. Afterward, this Bayesian-theory-based similarity metric is analyzed, and three undesirable properties that are incompatible with human intuition are identified, even though LBNL_FCM yields satisfactory regional consistency. In order to avoid these undesirable distance characteristics, a statistical test method called the generalized likelihood ratio (GLR) is introduced. The generalized likelihood ratio was applied to SAR images in the study of Deledalle et al. [37] and was proven to possess good distance properties. However, unlike the logarithmic summation form in [37], we construct the patch similarity as the continued product of the similarities of corresponding pixels by combining the SAR statistical distribution. This continued-product GLR-based similarity metric is used to generate an additional image that is insensitive to speckle noise. The auxiliary image is then added into the objective function of FCM as the non-local spatial information term, and we propose GLR_FCM. Besides, an adaptive factor based on local intensity entropy is utilized to balance the original image and the non-local spatial information. Eventually, a simple membership degree smoothing and majority voting are adopted in LBNL_FCM and GLR_FCM to compensate for local spatial information; the basic idea is that the membership degree of a pixel should be influenced by its neighborhood pixels. Experiments will demonstrate that LBNL_FCM achieves a better result in region consistency than previous algorithms, while GLR_FCM avoids the decay parameter selection and achieves a good balance between region consistency and edge preservation.
The main contributions are as follows:
(1)
A robust unsupervised FCM framework incorporating adaptive Bayesian non-local spatial information is proposed. This non-local spatial information is measured by the log-transformed Bayesian metric, which is derived by applying the log-transformed SAR distribution to Bayesian theory.
(2)
To avoid undesirable properties of the log-transformed Bayesian metric, we construct the similarity between patches as the continued product of corresponding pixel similarity measured by the generalized likelihood ratio. An alternative unsupervised FCM framework is then proposed, named GLR_FCM.
(3)
An adaptive factor is employed to balance the original and non-local spatial information. Besides, a simple membership degree smoothing is adopted to provide the local spatial information iteratively.
The rest of this paper is organized as follows. In Section 2, the relevant theories are described in detail. Section 3 presents the experimental results and parameters analysis. In Section 4, the qualitative evaluations of results are discussed. The conclusion is provided in Section 5.

2. Materials and Methods

2.1. Theoretical Background

2.1.1. The Standard FCM

Fuzzy c-means clustering, proposed by Bezdek [38], is based on fuzzy set theory. The standard FCM segments the image $X$ into $c$ clusters by iteratively minimizing an objective function. The objective function of the FCM algorithm is
$$\min J_m(U,V)=\sum_{k=1}^{c}\sum_{i=1}^{N}u_{ki}^{m}\,\|x_i-v_k\|^2 \tag{1}$$
where $X=\{x_1,x_2,\ldots,x_N\}$ denotes an image with $N$ pixels, $m$ is the fuzzy weighting exponent, usually set to 2, $c$ is the number of clusters, and $v_k$ is the $k$th cluster center. $u_{ki}$ represents the membership degree of the $i$th pixel belonging to the $k$th cluster, satisfying $u_{ki}\in[0,1]$ and $\sum_{k=1}^{c}u_{ki}=1$.
We can minimize Equation (1) by the Lagrange multiplier method. The $u_{ki}$ and $v_k$ can then be updated by
$$u_{ki}=\frac{1}{\sum_{j=1}^{c}\left(\dfrac{\|x_i-v_k\|^2}{\|x_i-v_j\|^2}\right)^{\frac{1}{m-1}}} \tag{2}$$
$$v_k=\frac{\sum_{i=1}^{N}u_{ki}^{m}x_i}{\sum_{i=1}^{N}u_{ki}^{m}} \tag{3}$$
When the objective function reaches the minimum, we can convert the membership degree U into a segmentation result by assigning each pixel a class possessing the largest membership degree.
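To make the update equations concrete, the following is a minimal NumPy sketch of the standard FCM iteration (Equations (1)–(3)); the function name and the flattened 1-D intensity input are illustrative choices, not part of the original algorithm description.

```python
import numpy as np

def fcm(x, c=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal standard FCM on a flattened intensity array x (Equations (1)-(3))."""
    rng = np.random.default_rng(seed)
    x = x.astype(np.float64).ravel()
    N = x.size
    u = rng.random((c, N))
    u /= u.sum(axis=0, keepdims=True)                 # columns of U sum to one
    for _ in range(max_iter):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)                 # cluster-center update, Equation (3)
        d2 = (x[None, :] - v[:, None]) ** 2 + 1e-12   # squared distances |x_i - v_k|^2
        inv = d2 ** (-1.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=0, keepdims=True)  # membership update, Equation (2)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    labels = u.argmax(axis=0)                         # hard labels from the fuzzy partition
    return u, v, labels
```

For an image, `x` would be `img.ravel()`, and `labels.reshape(img.shape)` gives the segmentation map.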

2.1.2. Nonlocal Means Method

Many algorithms have demonstrated the effectiveness of local information for the segmentation of low-noise images. However, the local information may be disturbed and unreliable when the noise is severe. In addition to local information, for a particular pixel, many pixels with a similar neighborhood configuration [28] exist over the entire image. We call this nonlocal spatial information. More specifically, for the $i$th pixel in image $X$, its non-local spatial information $\tilde{x}_i$ can be calculated by the following formula
$$\tilde{x}_i=\sum_{j\in W_i^r}w_{ij}x_j \tag{4}$$
where $W_i^r$ denotes the non-local search window of radius $r$ centered at the $i$th pixel, and $w_{ij}\,(j\in W_i^r)$ represents the normalized weight coefficient depending on the similarity of the patches centered at the $i$th and $j$th pixels, i.e., $v_s(N_i)$ and $v_s(N_j)$. The similarity weight $w_{ij}$ can be defined as
$$w_{ij}=\frac{1}{Z_i}\exp\!\left(-\frac{\|v_s(N_i)-v_s(N_j)\|_{2,\sigma}^{2}}{h^2}\right) \tag{5}$$
where $h$ controls the smoothing degree, $Z_i=\sum_{j\in W_i^r}\exp\!\left(-\frac{\|v_s(N_i)-v_s(N_j)\|_{2,\sigma}^{2}}{h^2}\right)$ is the normalization constant, $v_s(N_i)=\{x_k,\,k\in N_i\}$ indicates the vectorized patch at pixel $i$, $N_i$ is the local neighborhood of size $s\times s$ at pixel $i$, and $\|v_s(N_i)-v_s(N_j)\|_{2,\sigma}^{2}$ denotes the Euclidean distance between patches $v_s(N_i)$ and $v_s(N_j)$.
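As a reference point for the SAR-specific metrics derived later, here is a small sketch of the classical NL-means estimate of Equations (4) and (5), assuming a 2-D grayscale array; a plain (unweighted) Euclidean patch distance is used in place of the Gaussian-weighted one, and the function name and window parameters are illustrative.

```python
import numpy as np

def nlm_estimate(img, i, j, patch=1, search=5, h=10.0):
    """Non-local estimate of pixel (i, j) following Equations (4) and (5).

    `patch` and `search` are the half-widths of the local patch N_i and of the
    search window W_i^r, respectively."""
    H, W = img.shape
    p = img[i - patch:i + patch + 1, j - patch:j + patch + 1]
    num, Z = 0.0, 0.0
    for y in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for x in range(max(patch, j - search), min(W - patch, j + search + 1)):
            q = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
            w = np.exp(-np.sum((p - q) ** 2) / h ** 2)   # un-normalized weight, Equation (5)
            num += w * img[y, x]
            Z += w
    return num / Z                                       # normalization by Z_i, Equation (4)
```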

2.1.3. Nonlocal Spatial Information Based on Bayesian Approach

Kervrann et al. [36] claim that the NL-means filter can also emerge from a Bayesian formulation, and the Bayesian estimator $\hat{u}_s(N_i)$ of the vectorized patch centered at the $i$th pixel can be written as
$$\hat{u}_s(N_i)=\frac{\sum_{j\in W_i^r}v_s(N_j)\,p\!\left(v_s(N_i)\,|\,v_s(N_j)\right)}{\sum_{j\in W_i^r}p\!\left(v_s(N_i)\,|\,v_s(N_j)\right)} \tag{6}$$
where $W_i^r$ denotes the non-local spatial information search window centered at pixel $i$ with size $r\times r$, $v_s(N_i)$ is the observed vectorized patch centered at pixel $i$, and the set $\{v_s(N_1),\ldots,v_s(N_{r^2})\}$ contains the observed patch samples in $W_i^r$. Once we know $p(v_s(N_i)\,|\,v_s(N_j))$, we can calculate the Bayesian estimator $\hat{u}_s(N_i)$.
In [36], a usual additive noise model is considered, i.e., $v(x_i)=u(x_i)+n(x_i)$, where $v(x_i)$ is the grayscale value of pixel $i$ in the observed image, $u(x_i)$ is the grayscale value of pixel $i$ in the noise-free image, and $n(x_i)$ is additive Gaussian white noise. The likelihood can be factorized as
$$p\!\left(v_s(N_i)\,|\,v_s(N_j)\right)=\prod_{k=1}^{s^2}p\!\left(x_i^k\,|\,x_j^k\right) \tag{7}$$
Due to the additive Gaussian noise model being considered, $v_s(N_i)\,|\,v_s(N_j)$ follows a multivariate normal distribution. Thus, the Bayesian estimator $\hat{u}_s(N_i)$ is analogous to NL-means (Equation (4)) in form, and we can get
$$p\!\left(v_s(N_i)\,|\,v_s(N_j)\right)=\prod_{k=1}^{s^2}p\!\left(x_i^k\,|\,x_j^k\right)\propto\exp\!\left(-\frac{\|v_s(N_i)-v_s(N_j)\|^2}{h^2}\right) \tag{8}$$

2.2. The Modified FCM Based on Log-Transformed Bayesian Nonlocal Spatial Information

The initial NL-means can emerge from the Bayesian approach on the premise that the image is disturbed by additive Gaussian noise. Different from the work of Wan et al. [35], which directly considers the Nakagami–Rayleigh distribution, we first utilize the logarithmic transformation to convert the multiplicative speckle noise model into an additive model. Then the Bayesian approach (Equation (8)) is applied to the log-transformed distribution to derive a new similarity metric for SAR images. This is a reasonable treatment: Xie et al. [39] proved that, for the amplitude SAR image, the PDF of the log-transformed distribution is statistically very close to a Gaussian PDF. Therefore, image analysis methods designed for Gaussian noise can work equally well on the log-transformed amplitude SAR image.
Considering the multiplicative noise model, which can be described as
$$X=R_X\cdot n_X \tag{9}$$
where $X$ represents the observed image, $R_X$ is the noise-free amplitude image and is equal to $R^{\frac{1}{2}}$, $R$ is the radar cross section, and $n_X$ is the speckle noise. Under the assumption of fully developed speckle [40], the PDF of the $L$-look amplitude SAR image obeys the Nakagami–Rayleigh distribution [41], represented as
$$p(X\,|\,R)=\frac{2L^{L}}{\Gamma(L)\,R^{L}}\,X^{2L-1}\exp\!\left(-\frac{LX^2}{R}\right) \tag{10}$$
where $\Gamma(\cdot)$ is the Gamma function; then the log transformation converts Equation (9) into
$$\bar{X}=\bar{R}_X+\bar{n}_X \tag{11}$$
where $\bar{X}=\ln X$, $\bar{R}_X=\ln R_X$, and $\bar{n}_X=\ln n_X$. Since the logarithmic transformation is monotonic, the PDF of $\bar{X}$ is
$$p_{\bar{X}}(\bar{X}\,|\,R)=\frac{2}{\Gamma(L)}\left(\frac{L}{R}\right)^{L}\exp\!\left(-\frac{L\exp(2\bar{X})}{R}\right)\exp(2L\bar{X}) \tag{12}$$
Then, applying Equation (12) to the Bayesian formulation, we obtain
$$\begin{aligned}
P\!\left(v_s(N_i)\,|\,v_s(N_j)\right)&=\prod_{k=1}^{s^2}p\!\left(x_i^k\,|\,x_j^k\right)=\prod_{k=1}^{s^2}\frac{2}{\Gamma(L)}\left(\frac{L}{x_j^k}\right)^{L}\exp\!\left(-\frac{L\,e^{2x_i^k}}{x_j^k}\right)\exp\!\left(2Lx_i^k\right)\\
&=\left(\frac{2}{\Gamma(L)}\right)^{s^2}L^{Ls^2}\exp\!\left(-L\sum_{k=1}^{s^2}\left[\ln x_j^k+\frac{\exp(2x_i^k)}{x_j^k}-2x_i^k\right]\right)\\
&\propto\exp\!\left(-\frac{\sum_{k=1}^{s^2}\left[\ln x_j^k+\frac{\exp(2x_i^k)}{x_j^k}-2x_i^k\right]}{h^2}\right)
\end{aligned} \tag{13}$$
where $s^2$ denotes the number of pixels in the patches $v_s(N_i)$ and $v_s(N_j)$, $x_i^k$ is the $k$th pixel in the patch centered at the $i$th pixel, and $h^2=\frac{1}{L}$ is the decay parameter of the filter. A new patch similarity metric based on the Bayesian approach and the log-transformed statistical distribution of SAR is thus derived. So far, $\|v_s(N_i)-v_s(N_j)\|^2$ in Equation (5) can be replaced by
$$\bar{D}_s\!\left(v_s(N_i),v_s(N_j)\right)=\sum_{k=1}^{s^2}\left[\ln x_j^k+\frac{\exp(2x_i^k)}{x_j^k}-2x_i^k\right] \tag{14}$$
Hence, the weight $w_{ij}$ between patches $v_s(N_i)$ and $v_s(N_j)$ can be calculated by
$$w_{ij}=\frac{1}{Z_i}\exp\!\left(-\frac{\bar{D}_s\!\left(v_s(N_i),v_s(N_j)\right)}{h^2}\right) \tag{15}$$
Equation (15) can be applied to Equation (4); thus, an additional auxiliary image $\tilde{I}$, which is insensitive to speckle noise, can be obtained. Incorporating $\tilde{I}$ into the standard FCM as the non-local spatial information term, a new robust FCM based on the log-transformed Bayesian non-local information (LBNL_FCM) is obtained. Its objective function is as follows
$$\min J_m(U,V)=\sum_{k=1}^{c}\sum_{i=1}^{N}u_{ki}^{m}\,\|x_i-v_k\|^2+\sum_{k=1}^{c}\sum_{i=1}^{N}\eta_i\,u_{ki}^{m}\,\|\tilde{x}_i-v_k\|^2\quad\text{s.t.}\;\sum_{k=1}^{c}u_{ki}=1,\;0\le u_{ki}\le 1,\;0\le\sum_{i=1}^{N}u_{ki}\le N \tag{16}$$
Minimizing Equation (16) by using the Lagrange multiplier method, the membership degree $u_{ki}$ and the cluster center $v_k$ can be updated by
$$u_{ki}=\frac{1}{\sum_{j=1}^{c}\left(\dfrac{\|x_i-v_k\|^2+\eta_i\|\tilde{x}_i-v_k\|^2}{\|x_i-v_j\|^2+\eta_i\|\tilde{x}_i-v_j\|^2}\right)^{\frac{1}{m-1}}} \tag{17}$$
$$v_k=\frac{\sum_{i=1}^{N}\left(u_{ki}^{m}x_i+\eta_i u_{ki}^{m}\tilde{x}_i\right)}{\sum_{i=1}^{N}\left(u_{ki}^{m}+\eta_i u_{ki}^{m}\right)} \tag{18}$$
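The construction of the LBNL non-local term can be sketched as follows. This is a simplified illustration, not the authors' reference implementation: following Equation (13), the center patch enters through its log-amplitudes $x_i^k$ while the candidate patch values $x_j^k$ take the place of the backscatter parameter (this asymmetry is exactly what is revisited in Section 2.3); the helper names and window half-widths are hypothetical.

```python
import numpy as np

def lbnl_patch_distance(p_i_log, p_j):
    """Log-transformed Bayesian patch distance D̄_s, Equation (14).

    p_i_log : log-amplitude values of the patch centered at pixel i (Equation (11));
    p_j     : amplitude values of the candidate patch, entering in place of the
              backscatter parameter as in Equation (13)."""
    return np.sum(np.log(p_j) + np.exp(2.0 * p_i_log) / p_j - 2.0 * p_i_log)

def lbnl_nonlocal_value(img, i, j, patch=1, search=4, L=1.0):
    """Non-local estimate x̃_i of pixel (i, j) via Equations (4), (14) and (15)."""
    h2 = 1.0 / L                                   # decay parameter h^2 = 1/L
    H, W = img.shape
    img_log = np.log(img + 1e-12)
    p_i = img_log[i - patch:i + patch + 1, j - patch:j + patch + 1]
    num, Z = 0.0, 0.0
    for y in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for x in range(max(patch, j - search), min(W - patch, j + search + 1)):
            p_j = img[y - patch:y + patch + 1, x - patch:x + patch + 1] + 1e-12
            w = np.exp(-lbnl_patch_distance(p_i, p_j) / h2)   # Equation (15)
            num += w * img[y, x]
            Z += w
    return num / Z                                 # Equation (4)
```

Computing this estimate for every pixel yields the auxiliary image $\tilde{I}$ used in Equation (16); the LBNL_FCM updates (17) and (18) then differ from the standard FCM only by the $\eta_i$-weighted $\tilde{x}_i$ terms.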

2.3. Some Problems on Patch Similarity Metric by Bayesian Theory

In the last section, we log-transformed the amplitude SAR image and combined it with the Bayesian equation to derive a new similarity metric. This new patch metric satisfies the assumptions in [36], and the non-local spatial information can be appropriately measured. However, three problems remain.
Problem 1: In Equation (15), a decay parameter $h$ is always needed to calculate the weights of the non-local spatial information. In most cases, it is difficult to obtain a satisfactory value.
Problem 2: The logarithmic transformation is a homomorphic (nonlinear) transformation, which converts multiplicative noise into additive noise but also reduces the contrast of the SAR image; the original statistical distribution is changed.
Problem 3: In experiments, LBNL_FCM effectively suppresses speckle noise and achieves the best region consistency. However, this similarity metric has three characteristics that do not match what one would intuitively expect from a distance. Here, we list the three properties that Deledalle et al. [37] used for the assessment of a similarity metric.
Property 1
(Symmetry). A good similarity metric should be invariant to an exchange of its arguments.
$$\mathcal{S}(z_1,z_2)=\mathcal{S}(z_2,z_1)\quad\text{for all }z_1,z_2 \tag{19}$$
Property 2
(Self-Similarity Maximum). For a good similarity metric, the similarity of a value with itself should be maximal.
$$\mathcal{S}(z_1,z_1)\ge\mathcal{S}(z_1,z_2)\quad\text{for all }z_1,z_2 \tag{20}$$
Property 3
(Self-Similarity Equality). For a good similarity metric, the self-similarity should not depend on the value itself.
$$\mathcal{S}(z_1,z_1)=\mathcal{S}(z_2,z_2)\quad\text{for all }z_1,z_2 \tag{21}$$
where $\mathcal{S}(\cdot,\cdot)$ denotes a generic similarity measure.
To further illustrate, we consider $x=[1,2,3,4,5]$ and $y=[1,2,3,4,5]$. We set $x_i^k=x$ and $x_j^k=y$, substitute them into Equation (14), and obtain the similarity matrix.
Figure 1 shows the resulting similarity matrix. From the green square we can see that Property 1 is not satisfied; from the orange square we can see that Property 2 is not satisfied; from the purple square we can see that Property 3 is not satisfied. The problems discussed above encourage us to find a better similarity metric, even though the Bayesian similarity metric is good at keeping region consistency in segmentation. Fortunately, Deledalle et al. [37] proposed that the similarity of patches can be measured by a statistical test and proved that the generalized likelihood ratio satisfies the properties used to evaluate a similarity metric.
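The violation of the three properties can be checked numerically. The snippet below evaluates one summand of Equation (14) on the same grid of values used for Figure 1, i.e., a per-pixel check under the assumption of a single-pixel patch; since Equation (14) is a distance, Property 2 corresponds to the self-distance being minimal. All three printed checks come out False.

```python
import numpy as np

# Per-pixel term of Equation (14) evaluated on x, y in {1,...,5}, as in Figure 1.
vals = np.arange(1.0, 6.0)
D = np.array([[np.log(y) + np.exp(2.0 * x) / y - 2.0 * x for y in vals] for x in vals])

print("Property 1 (symmetry):       ", np.allclose(D, D.T))                     # False
print("Property 2 (self-dist. min.):", bool(np.all(np.diag(D)[:, None] <= D)))  # False
print("Property 3 (self-dist. eq.): ", np.allclose(np.diag(D), np.diag(D)[0]))  # False
```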

2.4. The New FCM Based on Generalized Likelihood Ratio

The generalized likelihood ratio test is defined as the ratio of the maximum value of the likelihood function under the constrained hypothesis to its maximum value without the constraint. The basic idea is that, if the parameter constraint imposed on the model is valid, adding it should not lead to a significant decrease in the maximum value of the likelihood function. Considering the Nakagami–Rayleigh distribution, for a pair of patches $(v_s(N_i),v_s(N_j))$ in a SAR image, we can define the likelihood ratio (LR)
$$\psi_{LR}\!\left(v_s(N_i),v_s(N_j)\right)=\frac{p\!\left(v_s(N_i),v_s(N_j);R_i=R_0,R_j=R_0;\varkappa_0\right)}{p\!\left(v_s(N_i),v_s(N_j);R_i=R_1,R_j=R_2;\varkappa_1\right)} \tag{22}$$
where $\varkappa_0$ and $\varkappa_1$ represent two hypotheses, defined as
$$\begin{aligned}
\varkappa_0&:\;R_i=R_j=R_0\quad(\text{Null Hypothesis})\\
\varkappa_1&:\;R_i=R_1,\;R_j=R_2,\;R_1\neq R_2\quad(\text{Alternative Hypothesis})
\end{aligned} \tag{23}$$
$v_s(N_i)$ is the patch centered at pixel $i$, and $v_s(N_j)$ denotes the non-local patch centered at pixel $j$. The hypothesis parameters $R_i$ and $R_j$ denote the noise-free backscatter values of the center pixels $i$ and $j$, respectively. Hypothesis $\varkappa_0$ imposes a parametric constraint on the statistical distribution, namely that the two patches $(v_s(N_i),v_s(N_j))$ come from the same distribution and thus have the same backscatter value, formalized as $R_i=R_j=R_0$. Hypothesis $\varkappa_1$ places no constraint on the statistical distributions of $v_s(N_i)$ and $v_s(N_j)$, formalized as $R_i\neq R_j$. For the sake of mathematical simplicity, we choose the parameters as
$$\begin{aligned}
R_0&=\arg\max_{R_0\in\Theta}\;p\!\left(v_s(N_i),v_s(N_j);R_i=R_j=R_0;\varkappa_0\right)\\
(R_1,R_2)&=\arg\max_{R_1,R_2\in\Theta}\;p\!\left(v_s(N_i),v_s(N_j);R_i=R_1,R_j=R_2;\varkappa_1\right)
\end{aligned} \tag{24}$$
Thus, Equation (22) becomes the generalized likelihood ratio (GLR), defined as
$$\psi_{GLR}\!\left(v_s(N_i),v_s(N_j)\right)=\frac{\sup_{R_0}p\!\left(v_s(N_i),v_s(N_j);R_i=R_j=R_0\right)}{\sup_{R_1,R_2}p\!\left(v_s(N_i),v_s(N_j);R_i=R_1,R_j=R_2,R_1\neq R_2\right)} \tag{25}$$
where $0<\psi_{GLR}(v_s(N_i),v_s(N_j))\le 1$; the larger $\psi_{GLR}(v_s(N_i),v_s(N_j))$ is, the larger the probability that hypothesis $\varkappa_0$ holds and the more inclined we are to accept $\varkappa_0$. This also means that there is a higher probability that the two patches $v_s(N_i)$ and $v_s(N_j)$ come from the same distribution. Thus, we can use the GLR to measure the similarity between two patches.
Unlike the Deledalle [37] approach, we construct the patch similarity as the continued product of corresponding pixel similarity. Next, we will give a detailed derivation.
Now, we assume that $v_s(N_i)$ and $v_s(N_j)$ are independent of each other and that the corresponding pixels within the patches are independent. Thus, the similarity between $v_s(N_i)$ and $v_s(N_j)$ can be calculated by
$$\psi_{GLR}\!\left(v_s(N_i),v_s(N_j)\right)=\prod_{k=1}^{N}\xi_{GLR}\!\left(x_i^k,x_j^k\right) \tag{26}$$
where $N=s^2$ is the number of pixels in the patch, and $\xi_{GLR}(x_i^k,x_j^k)$ is defined as
$$\xi_{GLR}\!\left(x_i^k,x_j^k\right)=\frac{\sup_{R_0}p\!\left(x_i^k,x_j^k;R_1=R_2=R_0\right)}{\sup_{R_1,R_2}p\!\left(x_i^k,x_j^k;R_i=R_1,R_j=R_2,R_1\neq R_2\right)}=\frac{\sup_{R_0}\left[p\!\left(x_i^k,x_j^k;R_1=R_2=R_0\right)\right]}{\left[\sup_{R_1}p\!\left(x_i^k;R_i=R_1\right)\right]\left[\sup_{R_2}p\!\left(x_j^k;R_j=R_2\right)\right]} \tag{27}$$
Here $x_i^k$ and $x_j^k$ denote the $k$th pixels of the two patches, and $R_0$, $R_1$, $R_2$ denote noise-free backscatter values. To obtain the maximum likelihood value $\sup_{R_0}p(x_i^k,x_j^k;R_i=R_j=R_0)$, we need the joint probability
$$p\!\left(x_i^k,x_j^k;R_i=R_j=R_0\right)=p\!\left(x_i^k;R_0\right)p\!\left(x_j^k;R_0\right)=\left(\frac{2}{\Gamma(L)}\right)^{2}\left(\frac{L}{R_0}\right)^{2L}\left(x_i^k x_j^k\right)^{2L-1}\exp\!\left(-\frac{L}{R_0}\left[(x_i^k)^2+(x_j^k)^2\right]\right) \tag{28}$$
To obtain the maximum likelihood estimator $\hat{R}_0$ of $R_0$, we construct the likelihood function
$$L(R_0)=\prod_{m=1}^{M}p\!\left(x_i^{k,m};R_0\right)p\!\left(x_j^{k,m};R_0\right)=\prod_{m=1}^{M}\left(\frac{2}{\Gamma(L)}\right)^{2}\left(\frac{L}{R_0}\right)^{2L}\left(x_i^{k,m}x_j^{k,m}\right)^{2L-1}\exp\!\left(-\frac{L}{R_0}\left[(x_i^{k,m})^2+(x_j^{k,m})^2\right]\right) \tag{29}$$
Then, taking the logarithm of $L(R_0)$ and differentiating,
$$\frac{\partial\ln L(R_0)}{\partial R_0}=\frac{\partial}{\partial R_0}\left\{\sum_{m=1}^{M}\left[\ln\frac{4L^{2L}}{\Gamma^2(L)}-2L\ln R_0+(2L-1)\ln\!\left(x_i^{k,m}x_j^{k,m}\right)-\frac{L}{R_0}\left((x_i^{k,m})^2+(x_j^{k,m})^2\right)\right]\right\}=-\frac{2LM}{R_0}+\frac{L}{R_0^2}\sum_{m=1}^{M}\left[(x_i^{k,m})^2+(x_j^{k,m})^2\right] \tag{30}$$
Setting $\frac{\partial\ln L(R_0)}{\partial R_0}=0$, we get
$$\hat{R}_0=\frac{1}{2M}\sum_{m=1}^{M}\left[(x_i^{k,m})^2+(x_j^{k,m})^2\right] \tag{31}$$
Considering that there is only one available observation for each pixel in the patch, i.e., $M=1$, we get
$$\hat{R}_0=\frac{1}{2}\left[(x_i^k)^2+(x_j^k)^2\right] \tag{32}$$
With the same derivation process as above, we can obtain the maximum likelihood estimators $\hat{R}_1$ and $\hat{R}_2$ of $R_1$ and $R_2$:
$$\hat{R}_1=(x_i^k)^2,\qquad \hat{R}_2=(x_j^k)^2 \tag{33}$$
Now, replacing $R_0$, $R_1$, $R_2$ with the maximum likelihood estimators $\hat{R}_0$, $\hat{R}_1$, and $\hat{R}_2$ in Equation (27), we get the similarity between corresponding pixels
$$\xi_{GLR}\!\left(x_i^k,x_j^k\right)=\frac{\sup_{R_0}p\!\left(x_i^k,x_j^k;R_1=R_2=R_0\right)}{\sup_{R_1,R_2}p\!\left(x_i^k,x_j^k;R_i=R_1,R_j=R_2,R_1\neq R_2\right)}=\frac{\dfrac{4L^{2L}}{\Gamma^{2}(L)}\left[\dfrac{(x_i^k)^2+(x_j^k)^2}{2}\right]^{-2L}\left(x_i^k x_j^k\right)^{2L-1}\exp(-2L)}{\dfrac{2L^{L}}{\Gamma(L)}(x_i^k)^{-2L}(x_i^k)^{2L-1}\exp(-L)\cdot\dfrac{2L^{L}}{\Gamma(L)}(x_j^k)^{-2L}(x_j^k)^{2L-1}\exp(-L)} \tag{34}$$
After simplifying Equation (34), we get
$$\xi_{GLR}\!\left(x_i^k,x_j^k\right)=\left[\frac{2x_i^k x_j^k}{(x_i^k)^2+(x_j^k)^2}\right]^{2L} \tag{35}$$
Equation (35) measures the similarity between corresponding pixels within two patches. Figure 2a shows the similarity $\xi_{GLR}(x_i^k,x_j^k)$ for $x_i^k=[1,2,3,4,5]$ and $x_j^k=[1,2,3,4,5]$. From Figure 2a we can see that Properties 1 and 3 are satisfied. Figure 2b shows the curve of the similarity $\xi_{GLR}(x_i^k,x_j^k)$ when $x_i^k$ is fixed at 1 and $x_j^k=[1,2,\ldots,10]$. The maximum of $\xi_{GLR}(x_i^k,x_j^k)$ is obtained when $x_i^k=x_j^k=1$, and $\xi_{GLR}(x_i^k,x_j^k)$ gradually decreases as the two values move apart. Thus, Property 2 is also verified.
Therefore, by putting Equation (35) into Equation (26), a patch similarity metric based on GLR can be derived as follows
$$\psi_{GLR}\!\left(v_s(N_i),v_s(N_j)\right)=\prod_{k=1}^{N}\xi_{GLR}\!\left(x_i^k,x_j^k\right)=\prod_{k=1}^{N}\left[\frac{2x_i^k x_j^k}{(x_i^k)^2+(x_j^k)^2}\right]^{2L} \tag{36}$$
We can then use this GLR-based similarity metric (Equation (36)) to obtain the weight of each patch in the non-local search window centered at pixel $i$. The recovered amplitude of pixel $i$ in the SAR image can be calculated as follows
$$\tilde{x}_i=\sum_{j\in W_i^r}\psi_{GLR}\!\left(v_s(N_i),v_s(N_j)\right)x_j \tag{37}$$
where $\tilde{x}_i$ is the estimator of the $i$th pixel and $\psi_{GLR}(v_s(N_i),v_s(N_j))$ is the weight between patches $v_s(N_i)$ and $v_s(N_j)$. After visiting all pixels in the SAR image, we can construct an auxiliary image $\tilde{I}=\{\tilde{x}_1,\tilde{x}_2,\ldots,\tilde{x}_N\}$. Then $\tilde{I}$ is added into the objective function of the standard FCM as the non-local spatial information term, and we obtain GLR_FCM:
$$\min J_m(U,V)=\sum_{k=1}^{c}\sum_{i=1}^{N}u_{ki}^{m}\,\|x_i-v_k\|^2+\sum_{k=1}^{c}\sum_{i=1}^{N}\eta_i\,u_{ki}^{m}\,\|\tilde{x}_i-v_k\|^2\quad\text{s.t.}\;\sum_{k=1}^{c}u_{ki}=1,\;0\le u_{ki}\le 1,\;0\le\sum_{i=1}^{N}u_{ki}\le N \tag{38}$$
By minimizing Equation (38) using the Lagrange multiplier method, the membership degree $u_{ki}$ and cluster center $v_k$ can be updated by
$$u_{ki}=\frac{1}{\sum_{j=1}^{c}\left(\dfrac{\|x_i-v_k\|^2+\eta_i\|\tilde{x}_i-v_k\|^2}{\|x_i-v_j\|^2+\eta_i\|\tilde{x}_i-v_j\|^2}\right)^{\frac{1}{m-1}}} \tag{39}$$
$$v_k=\frac{\sum_{i=1}^{N}\left(u_{ki}^{m}x_i+\eta_i u_{ki}^{m}\tilde{x}_i\right)}{\sum_{i=1}^{N}\left(u_{ki}^{m}+\eta_i u_{ki}^{m}\right)} \tag{40}$$
In the objective functions of LBNL_FCM and GLR_FCM, an adaptive factor $\eta_i$ based on the local intensity entropy is introduced to balance the original detail information and the non-local spatial information. $\eta_i$ is defined as
$$\eta_i=\alpha\times\frac{\exp(\max E_i)-\exp(E_i)}{\exp(\max E_i)-1},\qquad \alpha=\mathrm{Med}\{\sigma_1,\sigma_2,\ldots,\sigma_i,\ldots,\sigma_N\} \tag{41}$$
where $E_i=-\sum_{j=1}^{k}p_j\log(p_j)$ denotes the information entropy of the local histogram at the $i$th pixel, $k$ is the number of quantized gray levels, $\sigma_i$ denotes the local variance at the $i$th pixel, $\mathrm{Med}$ indicates the median operation, and $N$ is the total number of pixels.
In Equation (41), $\eta_i$ is determined by the local intensity entropy $E_i$. In a homogeneous region, the amplitude values tend to be similar and $E_i$ is small; hence, a large weight $\eta_i$ is assigned to the non-local spatial information. Conversely, at the edges, where the local entropy $E_i$ is relatively large, $\eta_i$ receives a small value and the original SAR information is given more consideration.
Figure 3a–e show original SAR image slices, and Figure 3f–j show the corresponding $\eta_i$ maps. The dark values near the edges indicate that $\eta_i$ is small at the edges and relatively large in the homogeneous regions. Thus, the original image information and the non-local spatial information can be dynamically balanced and adjusted.
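A compact sketch of how the GLR-based non-local term and the adaptive factor could be computed is given below. It follows Equations (35)–(37) and (41); the weight normalization in `glr_nonlocal_value` is an assumption added here for numerical convenience (Equation (37) is written without it), and all function names and loop-based implementations are illustrative rather than the authors' code.

```python
import numpy as np

def glr_pixel_similarity(a, b, L=1.0):
    """Pixel-level GLR similarity ξ_GLR, Equation (35); the epsilon guards zero amplitudes."""
    return (2.0 * a * b / (a ** 2 + b ** 2 + 1e-12)) ** (2.0 * L)

def glr_patch_similarity(p, q, L=1.0):
    """Patch similarity ψ_GLR as the continued product of pixel similarities, Equation (36)."""
    return np.prod(glr_pixel_similarity(p, q, L))

def glr_nonlocal_value(img, i, j, patch=1, search=11, L=1.0):
    """Recovered amplitude x̃_i of pixel (i, j), Equation (37), with normalized weights."""
    H, W = img.shape
    p = img[i - patch:i + patch + 1, j - patch:j + patch + 1]
    num, Z = 0.0, 0.0
    for y in range(max(patch, i - search), min(H - patch, i + search + 1)):
        for x in range(max(patch, j - search), min(W - patch, j + search + 1)):
            q = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
            w = glr_patch_similarity(p, q, L)
            num += w * img[y, x]
            Z += w
    return num / Z

def adaptive_factor(img, win=3, bins=16):
    """Adaptive balance factor η_i from the local intensity entropy, Equation (41)."""
    H, W = img.shape
    pad = win // 2
    quant = np.floor((img - img.min()) / (np.ptp(img) + 1e-12) * (bins - 1)).astype(int)
    pad_q = np.pad(quant, pad, mode="reflect")
    pad_i = np.pad(img.astype(np.float64), pad, mode="reflect")
    ent = np.zeros((H, W))
    var = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            bq = pad_q[i:i + win, j:j + win]
            bi = pad_i[i:i + win, j:j + win]
            p = np.bincount(bq.ravel(), minlength=bins) / bq.size
            p = p[p > 0]
            ent[i, j] = -np.sum(p * np.log(p))    # local histogram entropy E_i
            var[i, j] = bi.var()                  # local variance σ_i
    alpha = np.median(var)                        # α = Med{σ_1, ..., σ_N}
    return alpha * (np.exp(ent.max()) - np.exp(ent)) / (np.exp(ent.max()) - 1.0 + 1e-12)
```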

2.5. The Membership Degree Smoothing and Label Correction

In addition to non-local spatial information, local spatial information is also useful: the class of a pixel should be influenced by the surrounding pixels. Thus, we add membership degree smoothing into the iteration process. For the $i$th pixel in the SAR image, we sum the membership vectors of the neighborhood pixels to obtain a weight vector $\phi_i=[\phi_{1i},\phi_{2i},\ldots,\phi_{ci}]$, and $\phi_i$ is applied as a weight to the membership vector of the $i$th pixel. We then obtain the new membership degree $u_i'$ for the $i$th pixel:
$$\phi_{ki}=\sum_{j\in N_i}u_{kj},\qquad u_i'=u_i\odot\phi_i \tag{42}$$
where $\odot$ denotes the elementwise product, $N_i$ is the neighborhood of the $i$th pixel, $u_i$ is the membership vector before smoothing, and $u_i'$ is the weighted membership vector. Figure 4 shows the calculation process.
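As a concrete reading of Equation (42), the sketch below smooths each class's membership map with a box sum over the neighborhood and multiplies it elementwise into the current memberships; the re-normalization at the end is an assumption added here so that the memberships of each pixel still sum to one, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_membership(U, H, W, win=5):
    """Membership degree smoothing, Equation (42), for a (c, H*W) membership matrix U."""
    c = U.shape[0]
    U_img = U.reshape(c, H, W)
    # φ_ki: sum of u_kj over the win x win neighborhood N_i (box sum = mean * window area).
    phi = uniform_filter(U_img, size=(1, win, win), mode="reflect") * (win * win)
    U_new = U_img * phi                                   # u_i' = u_i ⊙ φ_i
    U_new /= U_new.sum(axis=0, keepdims=True) + 1e-12     # re-normalize per pixel (added here)
    return U_new.reshape(c, -1)
```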
Besides, label correction has been used as a homogeneous-region smoothing technique in SAR segmentation in [42] and has been shown to be effective in correcting erroneous class labels. Hence, we adopt a simple method to correct erroneous pixel labels: the framework uses a majority voting strategy to revise erroneous pixel labels upon completion of the iteration. Specifically, a fixed-scale window slides over the image, and the class label with the largest count in the sliding window becomes the final class of the central pixel. Figure 5 shows the shared framework of LBNL_FCM and GLR_FCM.
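The label correction step can be sketched as a straightforward sliding-window majority vote; the window size of 5 × 5 matches the setting in Section 3.1, and the function name is illustrative.

```python
import numpy as np

def majority_vote_correction(labels, win=5):
    """Revise each pixel's class to the most frequent label in a win x win window.

    `labels` is an integer class map of shape (H, W)."""
    H, W = labels.shape
    pad = win // 2
    padded = np.pad(labels, pad, mode="reflect")
    out = np.empty_like(labels)
    for i in range(H):
        for j in range(W):
            block = padded[i:i + win, j:j + win]
            out[i, j] = np.bincount(block.ravel()).argmax()   # majority class in the window
    return out
```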

3. Experiments and Results

In this section, we apply LBNL_FCM and GLR_FCM to simulated SAR images and real SAR images to illustrate the effectiveness of the proposed algorithms. The segmentation results are evaluated qualitatively and quantitatively. Several popular improved FCM algorithms are used as baselines to illustrate the advantages of the proposed algorithms in edge preservation and region consistency. These methods are FCM [22], FCM_S1 and FCM_S2 [24], KFCM_S1 and KFCM_S2 [24], EnFCM [25], FGFCM [26], FCM_NLS [31], NS_FCM [34], and RFCM_BNL [35]. Note that, for real SAR images, we focus more on visual inspection because it is difficult to obtain their ground truth. The experiment images are taken from four different SAR sensors: AIRSAR, ALOS PolSAR, TerraSAR-X, and GF-3.

3.1. Experimental Setting

For all algorithms, the parameters are selected as follows: the stopping threshold $\delta=10^{-5}$, the maximum number of iterations $T=200$, and the membership exponent $m=2$. We set $\alpha=5$ for FCM_S1, FCM_S2, KFCM_S1, KFCM_S2, EnFCM, and FCM_NLS. According to [34], we set $\alpha=6$ for NS_FCM. $\lambda_s$ and $\lambda_g$ in FGFCM are set to 2 and 7, respectively. For NS_FCM and FCM_NLS, the local neighborhood size is $5\times5$, and the non-local search windows are set to $11\times11$ and $15\times15$, respectively. For RFCM_BNL, LBNL_FCM, and GLR_FCM, the local neighborhood window is set to $3\times3$, and the non-local search windows are set to $15\times15$, $9\times9$, and $23\times23$, respectively. For LBNL_FCM and GLR_FCM, the membership degree smoothing and label correction windows are set to $5\times5$. When calculating $\eta_i$ in LBNL_FCM and GLR_FCM, the gray level is quantized into 16 bins, i.e., $k=16$.

3.2. Evaluation Indicators

Evaluating results is a key step in measuring the effectiveness of the algorithms. In this paper, the effectiveness of the proposed and reference algorithms is assessed from both objective and subjective aspects. Moreover, we concentrate on two crucial aspects of the segmentation results: Compactness and separation. Whether it is a visual inspection by human eyes or a quantitative evaluation, a good segmentation algorithm should make the intra-class dissimilarity as small as possible and the inter-class variability as large as possible, i.e., corresponding to compactness and separation, respectively. Table 1 shows several assessment indicators that we intend to use to quantitatively evaluate these two properties, whose efficacy was proved in [43].

3.3. Segmentation Results on Simulated SAR Images

We can obtain accurate ground truth for simulated SAR images, so we use segmentation accuracy to evaluate the segmentation performance. In addition, five numerical evaluation indexes are computed. The segmentation accuracy is defined as the number of correctly segmented pixels divided by the total number of pixels, and the formula is as follows:
$$SA=\frac{\sum_{k=1}^{c}\left|A_k\cap C_k\right|}{\sum_{j=1}^{c}\left|C_j\right|} \tag{43}$$
where $c$ represents the number of segmentation classes, $C_k$ denotes the set of pixels belonging to the $k$th class in the ground truth, $A_k$ denotes the set of pixels assigned to the $k$th class in the segmentation result, $|A_k\cap C_k|$ is the number of correctly segmented pixels of the $k$th class, and $\sum_{j=1}^{c}|C_j|$ corresponds to the total number of pixels.
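Under the assumption that the predicted class indices have already been matched to the ground-truth classes (for an unsupervised clustering the label permutation must be resolved first, e.g. by matching cluster means), Equation (43) reduces to the following one-liner.

```python
import numpy as np

def segmentation_accuracy(pred, gt):
    """Segmentation accuracy SA, Equation (43): correctly segmented pixels / total pixels."""
    return np.count_nonzero(pred == gt) / gt.size
```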

3.3.1. Experiment 1: Testing on the First Simulated SAR Image

The first experiment is carried out on a one-look simulated SAR image of $250\times200$ pixels, shown in Figure 6a. This simulated SAR image includes five classes with intensity values of 10, 50, 100, 150, and 200. Its gray and color ground truths are shown in Figure 6b,c.
The experimental results of the proposed and comparison algorithms are shown in Figure 7. It can be seen that the original FCM has the worst result in regional consistency, with many noise points present. FCM_S1 and FCM_S2 enhance the segmentation result by adding local information, and the kernel versions KFCM_S1 and KFCM_S2 obtain further improvements. Nevertheless, there are still plenty of noise pixels. The reason is that the local neighborhood information on SAR images is contaminated by noise; the reliability of the local spatial information is severely weakened, which ultimately leads to the failure of segmentation.
FCM_NLS and NS_FCM in Figure 7h,i consider the non-local information. However, their non-local spatial information is measured by the Euclidean distance, which is inappropriate for SAR images, so they still have significant misclassification problems. RFCM_BNL takes the characteristics of SAR images into account and therefore achieves a relatively good result in terms of regional coherence; however, a large number of isolated pixels remain near the edges. The result of LBNL_FCM presents better edge continuity and cleaner homogeneous regions than that of RFCM_BNL. Nevertheless, the Bayesian-based FCM algorithms are not the best in terms of edge preservation, as shown in Figure 7j,k: there is serious misclassification at the edges, i.e., the region between the green region and the blue region is assigned to the yellow class. In Figure 7l, GLR_FCM achieves the best visual result in terms of regional consistency and edge preservation; it effectively eliminates the false classes that RFCM_BNL and LBNL_FCM produce at the edges and leaves almost no isolated noise pixels.
Table 2 displays the SA (%) and execution time of each algorithm. We see that the kernel methods improve the results, and that non-local information is more useful for SAR image segmentation than local information. Because the statistical properties of SAR images are considered, higher segmentation accuracy is obtained by RFCM_BNL, LBNL_FCM, and GLR_FCM. Moreover, GLR_FCM obtains the best segmentation accuracy of 99.16%, consistent with the visualization in Figure 7. The algorithms based on non-local information have higher time consumption because every pixel is visited when computing the auxiliary image.
Table 3 shows the quantitative evaluation for the first simulated SAR image. $V_{PC}$ and $V_{MPC}$ express the fuzziness of the partition result; the larger the value, the better the partition. In contrast, the minima of $V_{PE}$ and $V_{MPE}$ imply the optimal result. $V_{FS}$ describes the compactness and separation, and the best partition corresponds to the minimum $V_{FS}$. Apart from the optimal value obtained by NS_FCM on $V_{FS}$, LBNL_FCM and GLR_FCM obtain the best values on the other criteria.
Figure 8 provides the change curve of the objective function. We can see that the objective function of LBNL_FCM descends fastest and obtains the minimum value. The objective function of GLR_FCM decreases at a similar speed to that of LBNL_FCM. Moreover, a relatively small value of the objective function is obtained.

3.3.2. Experiment 2: Testing on the Second Simulated SAR Image

The second simulated SAR image is composed of $283\times283$ pixels and includes five classes with amplitude values set to 0, 64, 128, 192, and 255. Figure 9a–c show the original simulated SAR image and the ground truth. Figure 9d–o show the segmentation results of each algorithm.
Visually, the result of FCM (Figure 9d) has plenty of noise points. In Figure 9e–j, due to the integration of local spatial information, the isolated speckle pixels are significantly suppressed. The result of FCM_NLS obtains better region consistency in the red and green classes; however, some blocks of the other categories are still not properly classified. The results of NS_FCM and RFCM_BNL yield good regional coherence and smoothed edges, but there are still serious classification mistakes on the periphery of different regions. In contrast, LBNL_FCM and GLR_FCM obtain relatively satisfactory segmentation results: isolated pixels and blocks of speckle noise are practically non-existent in the homogeneous regions. In terms of structural information, GLR_FCM preserves the continuity and smoothness of the edges, even when crossing regions with similar magnitude values. The edges can be well discriminated, as shown in Figure 9o; only slightly blurred edges exist at the nodes where three regions are adjacent.
A conclusion similar to that of the first experiment can be obtained from Table 4. In SAR images, non-local spatial information is more robust to speckle noise than local information. Thus, the FCMs with non-local information terms obtain relatively good segmentation accuracies above 96%. However, they are time consuming because the auxiliary image is calculated in advance.
The quantitative evaluation indicators of each algorithm are recorded in Table 5. GLR_FCM obtains the optimal values on $V_{PC}$, $V_{PE}$, $V_{MPC}$, and $V_{MPE}$ and significantly outperforms the other algorithms. LBNL_FCM has relatively good indicators. On $V_{FS}$, EnFCM obtains the minimum value of $6.27\times10^{9}$.
Figure 10 shows the objective function curves on the second simulated SAR image. It can be seen that RFCM_BNL, LBNL_FCM, and GLR_FCM consider the statistical properties of SAR images, so their objective functions decrease fastest and need only two iterations to converge. In addition, at convergence, GLR_FCM has the minimum value of the objective function, which also implies the best segmentation performance.

3.4. Segmentation Results on Real SAR Images

Experiments on simulated SAR images only illustrate the validity and feasibility of the algorithms. Therefore, we test the practicality of the proposed algorithms on real SAR images taken by different sensors. It is difficult to obtain ground truth for real SAR images; thus, the segmentation results are assessed mainly by visual inspection.

3.4.1. Experiment 1: Experiment on the First Real SAR Image

The first experiment was carried out on an L-band, HH-polarized, SAR image with 2 m spatial resolution taken by AIRSAR in the Flevoland area of the Netherlands, as shown in Figure 11a. This area includes roughly four crop types, and the amplitudes are bright, dark, darker, and black. Figure 12 shows the segmentation results.
The auxiliary images of LBNL_FCM and GLR_FCM are shown in Figure 11b,c. The auxiliary image used by LBNL_FCM (Figure 11b) strongly suppresses the noise and has a strong smoothing ability. The auxiliary image used by GLR_FCM (Figure 11c) reduces speckle noise while retaining the structural information. However, slight texture noise remains inside the homogeneous regions, which can be easily attenuated or removed by local information such as membership smoothing.
The worst result is provided by FCM (Figure 12a), which almost fails when processing SAR images. FCM_S1, FCM_S2, and the kernel methods suppress the noise to some extent, but the results are still not very desirable. EnFCM and FGFCM enhance the consistency of the segmented regions compared with the previous methods by incorporating local information and using the histogram as the segmentation object. However, as shown in Figure 12f,g, the darker region is misclassified as the dark class. NS_FCM has better region coherence than FCM_NLS, but there is still severe misclassification in regions with similar intensity. Among these methods, RFCM_BNL, LBNL_FCM, and GLR_FCM obtain relatively satisfactory results. Visually, the segmentation results almost correctly reflect the region information of the original image, and the large regions that are misclassified by other algorithms are correctly classified. However, the edges in RFCM_BNL and LBNL_FCM are not satisfactory, as shown in Figure 12j,k: a third class may appear in the middle of two adjacent regions. The result of GLR_FCM effectively overcomes this problem thanks to its suitable similarity properties. Besides, most of the structure information is preserved in Figure 12l, and a good balance between regional homogeneity and edge preservation is achieved.

3.4.2. Experiment 2: Experiment on the Second Real SAR Image

An L-band, HH-polarized SAR image taken by AIRSAR is selected in this experiment. Figure 13a presents the original image. This area contains four kinds of crops shown as bright, gray, dark, and black. Figure 13a shows that the region with the brightest magnitude suffers from speckle noise. There is a gradual change in amplitude value.
The segmentation results of each algorithm are shown in Figure 13. The results of NS_FCM and FCM_NLS (Figure 13i,j) are relatively clean and accurate; however, there are many misclassified categories at the intersections of different regions. The segmentation results of RFCM_BNL and LBNL_FCM eliminate the isolated pixels and obtain good region conformity, but both are prone to misclassification at the edges. The segmentation result of GLR_FCM is cleaner, and the serious misclassification at the edges is weakened. Some small-scale regions can also be segmented; for example, the roads that appear black are correctly segmented. However, where the noise is strong, GLR_FCM tends to produce isolated patches when combined with label correction. Figure 14 shows local detail maps of the four FCMs with non-local spatial information; the misclassification at the edges is significantly reduced for GLR_FCM.

3.4.3. Experiment 3: Experiment on the Third Real SAR Image

The third experiment is performed on a TerraSAR-X image shown in Figure 15a, which has 5 m spatial resolution, HH polarization, X-band strip imaging mode, and $402\times381$ pixels. The SAR image covers an area of farmland near the border of Saxony, Germany, and includes four categories. Some buildings show high amplitude values, and roads show low amplitude values. These unfavorable factors make it difficult to segment this SAR image.
The partition results of each algorithm are provided in Figure 15b–m. Obviously, the results of FCM, FCM_S1, FCM_S2, and the kernel versions are not satisfactory: because of the effect of speckle noise, many misclassified pixels, blocks, and regions exist. EnFCM and FGFCM correct the labels of the middle area, which is misclassified into the highlighted category in Figure 15c–f. However, the gray and darker pixels are substantially confused. FCM_NLS and NS_FCM, with their non-local information, reduce the misclassification, but some isolated noise remains due to the unsuitable Euclidean distance.
Moreover, RFCM_BNL (Figure 15k) obtains good region conformity, but a small portion of the darker areas is still segmented into the black class. The result of LBNL_FCM significantly weakens the influence of speckle noise, and the best smoothing effect is obtained. GLR_FCM (Figure 15m) is able to balance speckle noise suppression and edge preservation; the region consistency is guaranteed without damaging the structure information.

3.4.4. Experiment 4: Experiment on the Fourth Real SAR Image

The fourth experiment uses a 3 m spatial resolution, $222\times516$ pixel, HH-polarized SAR image acquired by GF-3 in strip imaging mode; the area is located near Daxing Airport in Beijing. The original image is shown in Figure 16a. Buildings, land, and runways are included in this SAR image; they appear in magnitude as highlighted, dark, and black, respectively. Some small areas, such as lakes, also appear black.
Figure 16b–m display the experimental results. As can be seen from Figure 16b, FCM is sensitive to the speckle noise. FCM_S1 and FCM_S2 slightly improve the results, and the kernel versions further enhance the separability and homogeneity; however, some speckle blocks are not removed. Due to the complexity of this SAR image, EnFCM, FGFCM, FCM_NLS, and NS_FCM can barely segment it correctly. Specifically, EnFCM and FGFCM cannot distinguish the ground and the lake, and in the results of FCM_NLS and NS_FCM, the building area and the ground mix into the same category. This illustrates that the Euclidean distance is unreliable for SAR images. Because the distribution of SAR is considered, RFCM_BNL, LBNL_FCM, and GLR_FCM obtain relatively satisfactory results. The two Bayesian-based FCMs slightly outperform GLR_FCM in terms of region consistency, but they are poor in edge localization. In terms of preserving structure information, GLR_FCM surpasses all the other algorithms; in Figure 16m, the contour of the lake is segmented explicitly.

3.5. Sensitivity Analysis to Speckle Noise

In this section, we evaluate the sensitivity of the proposed frameworks to noise intensity by adding different levels of speckle noise to Figure 6a. The SA (%) of the different algorithms on images with eight numbers of looks is shown in Figure 17a. The SA (%) of most methods improves as the speckle noise weakens. GLR_FCM obtains the best SA (%), which always exceeds 97%, and its stability to different noise intensities can be observed. LBNL_FCM obtains a relatively good SA (%) and is stable across the numbers of looks. The SA (%) of some algorithms fluctuates significantly with the number of looks; the gap between their best and worst SA (%) exceeds 60%. A partial enlarged view can be seen in Figure 17b.

3.6. Parameters Analysis and Selection

The non-local search window size w × w and the square neighborhood size r × r are two crucial parameters related to the non-local spatial information. In this section, we investigate the optimal parameters on two simulated SAR images (Figure 6a and Figure 9a) for LBNL_FCM and GLR_FCM.
On the first simulated SAR image (Figure 6a), we set the non-local search window $w=[5,7,9,11,13,15,17,19,21,23]$ and the local neighborhood patch $r=[3,5,7,9,11]$. The SA (%) of LBNL_FCM and GLR_FCM on the first simulated SAR image is shown in Figure 18a,b, respectively. From Figure 18a, the SA curve of LBNL_FCM decreases rapidly for an arbitrary $r$ value once $w$ exceeds 9. One reason is that the logarithmic transformation reduces the contrast of the image amplitude; as the window $w$ expands, more pixels are included in the calculation of the non-local information, and the weight of the reliable pixels decreases. The SA (%) curve of GLR_FCM can be seen in Figure 18b: the curve for $r=3$ is always higher than the others, and the accuracy reaches its optimum at $w=23$. Therefore, within the parameter range above, the optimal value of $r$ is 3. On the first simulated SAR image, we set $r=3$, $w=9$ for LBNL_FCM and $r=3$, $w=23$ for GLR_FCM. Figure 19 shows the SA curves of LBNL_FCM and GLR_FCM on the second simulated SAR image, where similar phenomena can be observed: the curve with local neighborhood size $r=3$ is always more accurate than the others.

3.7. Computational Complexity Analysis

The computational complexity of the aforementioned algorithms is given in Table 6, where $N$ is the total number of pixels, $c$ denotes the number of clustering centers, $T$ represents the number of iterations, $w$ is the size of the window, $r$ is the size of the non-local search window, $s$ is the size of the neighborhood, $W$ denotes the sliding window for calculating the factor $\eta_i$, and $Q$ corresponds to the number of gray levels.
The computational complexity of the proposed frameworks LBNL_FCM and GLR_FCM consists of three parts. The first part, $O(N\times r^2\times s^2)$, is contributed by the calculation of the non-local spatial information, which is performed before the iterative process. The second part, $O(N\times W^2)$, comes from the calculation of the factor $\eta_i$. The third part, $O(N\times c\times T)$, comes from the iteration process. To sum up, the total computational complexity of LBNL_FCM and GLR_FCM is $O(N\times r^2\times s^2+N\times W^2+N\times c\times T)$.

4. Discussion

In the previous experiments, the effectiveness and robustness of both frameworks were verified. On the simulated SAR images, both algorithms obtain high segmentation accuracy (always exceeding 97%), and unsupervised assessment indicators such as $V_{PC}$ and $V_{PE}$ also indicate that the fuzziness of the clustering results is reduced. On the real SAR images, LBNL_FCM shows the best region consistency compared with the previous algorithms. However, like FCM_NLS, NS_FCM, and RFCM_BNL, artifacts appear at the edges. Besides the fact that amplitude values tend to blur near edges, this is also related to the contrast reduction introduced by the log-transformed Bayesian metric. Compared with FCM_NLS and NS_FCM, the results of GLR_FCM show satisfactory region uniformity, with no isolated pixels. Compared with RFCM_BNL and LBNL_FCM, GLR_FCM preserves image details and the edges are properly delineated. The main reason is that the similarity metric constructed as the continued product of generalized likelihood ratios has a ratio form, which makes it easy to assign a small contribution weight to patches whose amplitude values are dissimilar to that of the central pixel; this implies that the patches involved in reconstructing the real amplitude of the central pixel in Equation (37) are trustworthy. Another feature of the proposed unsupervised FCM frameworks is that the non-local spatial information can be adaptively adjusted, whereas in most previous methods the relevant parameter is empirically set to a constant. Consequently, edge-blurring artifacts are greatly reduced in GLR_FCM.
In addition to the methods involved in this article, there are many methods combining FCM with machine learning. For instance, MFCCM, proposed by Balakrishnan et al. [48], fuses deep learning features into clustering and produces satisfactory fuzzy clustering results; however, its high computational complexity is a significant disadvantage. Besides, a semi-supervised method combining CNN and IFCM [49] provides a more in-depth understanding and representation of the data features, although it requires a large amount of training data. Compared to such deep learning models, our proposed unsupervised FCM frameworks can deliver segmentation results quickly and efficiently. However, due to the lack of feature extraction and feature expression, the image data cannot be understood in depth.
In the parameter analysis, we confirm that $r=3$ is an optimal value for the neighborhood size when comparing patches. However, we found that the optimal size of the non-local search window for GLR_FCM is $w=23$, which is different from the optimal value $w=15$ used by other algorithms, such as FCM_NLS, NS_FCM, and RFCM_BNL. We speculate that, because GLR_FCM strongly suppresses dissimilar patches, more reliable patches can be obtained by expanding the scope of the search window.
In this paper, an empirical statistical distribution (Nakagami–Rayleigh) is utilized to describe SAR images. This dedicated model is appropriate for the homogeneous regions of SAR images; in other scenarios, such as mountainous or urban areas, the statistical properties may not be expressed correctly. Besides, the relatively high computational complexity is a limitation of the proposed method. In Section 3.7, the computational complexity was listed as $O(N\times r^2\times s^2+N\times W^2+N\times c\times T)$. From the loss function curves shown in Figure 8 and Figure 10, we can observe that the iterations converge very quickly. Therefore, in practical applications, the computational cost mainly comes from the calculation of the non-local spatial information. In addition, appropriately reducing the number of iterations can also improve efficiency without reducing accuracy.

5. Conclusions

To suppress the effect of speckle noise on SAR image segmentation by clustering algorithms, we propose two unsupervised FCM frameworks incorporating a non-local spatial information term, named LBNL_FCM and GLR_FCM, respectively. The non-local spatial information in LBNL_FCM and GLR_FCM is obtained by combining the statistical properties of SAR images with the Bayesian method and the generalized likelihood ratio method, respectively, so that speckle noise can be suppressed. In both frameworks, a simple membership smoothing strategy complements the local information, allowing the membership of a pixel to be iteratively adjusted towards the most probable class in its local neighborhood. Besides, we add a balance factor to adaptively control the effect of the non-local spatial information at the edges, so as to reduce the artifacts caused by blurred edges. On the synthetic SAR images, both unsupervised FCM frameworks obtain 99% segmentation accuracy. Several unsupervised evaluation indicators also indicate that LBNL_FCM and GLR_FCM reduce the fuzziness of the resulting clusters ($V_{PC}=0.9855$, $V_{PE}=0.0260$). Experiments on real SAR images show that LBNL_FCM achieves the best region consistency, while GLR_FCM balances noise removal with preserving image detail and reducing edge-blurring artifacts.
In future research, we will consider combining unsupervised FCM with the characteristics of deep learning to explore intelligent clustering computing.

Author Contributions

Conceptualization, J.Z.; methodology, J.Z.; software, J.Z.; validation, J.Z.; formal analysis, J.Z.; investigation, J.Z.; resources, J.Z.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z., F.W. and H.Y.; visualization, J.Z.; supervision, J.Z.; project administration, F.W. and H.Y.; funding acquisition, F.W. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the reviewers for their valuable suggestions and comments. We also thank the production team for revising the format of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rahmani, M.; Akbarizadeh, G. Unsupervised feature learning based on sparse coding and spectral clustering for segmentation of synthetic aperture radar images. IET Comput. Vis. 2015, 9, 629–638. [Google Scholar] [CrossRef]
  2. Jiao, S.; Li, X.; Lu, X. An Improved Ostu Method for Image Segmentation. In Proceedings of the 2006 8th International Conference on Signal Processing, Guilin, China, 16–20 November 2006; Volume 2. [Google Scholar] [CrossRef]
  3. Yu, Q.; Clausi, D.A. IRGS: Image Segmentation Using Edge Penalties and Region Growing. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 2126–2139. [Google Scholar] [CrossRef] [PubMed]
  4. Carvalho, E.A.; Ushizima, D.M.; Medeiros, F.N.; Martins, C.I.O.; Marques, R.C.; Oliveira, I.N. SAR imagery segmentation by statistical region growing and hierarchical merging. Digit. Signal Process. 2010, 20, 1365–1378. [Google Scholar] [CrossRef] [Green Version]
  5. Xiang, D.; Zhang, F.; Zhang, W.; Tang, T.; Guan, D.; Zhang, L.; Su, Y. Fast Pixel-Superpixel Region Merging for SAR Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 9319–9335. [Google Scholar] [CrossRef]
  6. Yu, H.; Zhang, X.; Wang, S.; Hou, B. Context-Based Hierarchical Unequal Merging for SAR Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 995–1009. [Google Scholar] [CrossRef]
  7. Wang, M.; Dong, Z.; Cheng, Y.; Li, D. Optimal segmentation of high-resolution remote sensing image by combining superpixels with the minimum spanning tree. IEEE Trans. Geosci. Remote Sens. 2017, 56, 228–238. [Google Scholar] [CrossRef]
  8. Ma, F.; Zhang, F.; Xiang, D.; Yin, Q.; Zhou, Y. Fast Task-Specific Region Merging for SAR Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  9. Zhang, W.; Xiang, D.; Su, Y. Fast Multiscale Superpixel Segmentation for SAR Imagery. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  10. Zhang, X.; Jiao, L.; Liu, F.; Bo, L.; Gong, M. Spectral Clustering Ensemble Applied to SAR Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2126–2136. [Google Scholar] [CrossRef] [Green Version]
  11. Mukhopadhaya, S.; Kumar, A.; Stein, A. FCM Approach of Similarity and Dissimilarity Measures with α-Cut for Handling Mixed Pixels. Remote Sens. 2018, 10, 1707. [Google Scholar] [CrossRef] [Green Version]
  12. Xu, Y.; Chen, R.; Li, Y.; Zhang, P.; Yang, J.; Zhao, X.; Liu, M.; Wu, D. Multispectral image segmentation based on a fuzzy clustering algorithm combined with Tsallis entropy and a gaussian mixture model. Remote Sens. 2019, 11, 2772. [Google Scholar] [CrossRef] [Green Version]
  13. Madhu, A.; Kumar, A.; Jia, P. Exploring Fuzzy Local Spatial Information Algorithms for Remote Sensing Image Classification. Remote Sens. 2021, 13, 4163. [Google Scholar] [CrossRef]
  14. Xia, G.S.; He, C.; Sun, H. Integration of synthetic aperture radar image segmentation method using Markov random field on region adjacency graph. IET Radar Sonar Navig. 2007, 1, 348–353. [Google Scholar] [CrossRef]
  15. Shuai, Y.; Sun, H.; Xu, G. SAR Image Segmentation Based on Level Set With Stationary Global Minimum. IEEE Geosci. Remote Sens. Lett. 2008, 5, 644–648. [Google Scholar] [CrossRef]
  16. Bao, L.; Lv, X.; Yao, J. Water extraction in SAR Images using features analysis and dual-threshold graph cut model. Remote Sens. 2021, 13, 3465. [Google Scholar] [CrossRef]
  17. Luo, F.; Zou, Z.; Liu, J.; Lin, Z. Dimensionality reduction and classification of hyperspectral image via multi-structure unified discriminative embedding. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5517916. [Google Scholar] [CrossRef]
  18. Ma, F.; Gao, F.; Sun, J.; Zhou, H.; Hussain, A. Weakly supervised segmentation of SAR imagery using superpixel and hierarchically adversarial CRF. Remote Sens. 2019, 11, 512. [Google Scholar] [CrossRef] [Green Version]
  19. Wang, C.; Pei, J.; Wang, Z.; Huang, Y.; Wu, J.; Yang, H.; Yang, J. When Deep Learning Meets Multi-Task Learning in SAR ATR: Simultaneous Target Recognition and Segmentation. Remote Sens. 2020, 12, 3863. [Google Scholar] [CrossRef]
  20. Colin, A.; Fablet, R.; Tandeo, P.; Husson, R.; Peureux, C.; Longépé, N.; Mouche, A. Semantic Segmentation of Metoceanic Processes Using SAR Observations and Deep Learning. Remote Sens. 2022, 14, 851. [Google Scholar] [CrossRef]
  21. Zhang, R.; Chen, J.; Feng, L.; Li, S.; Yang, W.; Guo, D. A Refined Pyramid Scene Parsing Network for Polarimetric SAR Image Semantic Segmentation in Agricultural Areas. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  22. Bezdek, J.C.; Ehrlich, R.; Full, W. FCM: The fuzzy c-means clustering algorithm. Comput. Geosci. 1984, 10, 191–203. [Google Scholar] [CrossRef]
  23. Ahmed, M.N.; Yamany, S.M.; Mohamed, N.; Farag, A.A.; Moriarty, T. A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 2002, 21, 193–199. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, S.; Zhang, D. Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2004, 34, 1907–1916. [Google Scholar] [CrossRef] [Green Version]
  25. Szilagyi, L.; Benyo, Z.; Szilágyi, S.M.; Adam, H. MR brain image segmentation using an enhanced fuzzy c-means algorithm. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE Cat. No. 03CH37439), Cancun, Mexico, 17–21 September 2003; Volume 1, pp. 724–726. [Google Scholar]
  26. Cai, W.; Chen, S.; Zhang, D. Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation. Pattern Recognit. 2007, 40, 825–838. [Google Scholar] [CrossRef] [Green Version]
  27. Krinidis, S.; Chatzis, V. A robust fuzzy local information C-means clustering algorithm. IEEE Trans. Image Process. 2010, 19, 1328–1337. [Google Scholar] [CrossRef] [PubMed]
  28. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
  29. Wang, J.; Kong, J.; Lu, Y.; Qi, M.; Zhang, B. A modified FCM algorithm for MRI brain image segmentation using both local and non-local spatial constraints. Comput. Med. Imaging Graph. 2008, 32, 685–698. [Google Scholar] [CrossRef]
  30. Zhu, L.; Chung, F.L.; Wang, S. Generalized fuzzy c-means clustering algorithm with improved fuzzy partitions. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 578–591. [Google Scholar]
  31. Zhao, F.; Jiao, L.; Liu, H. Fuzzy c-means clustering with non local spatial information for noisy image segmentation. Front. Comput. Sci. China 2011, 5, 45–56. [Google Scholar] [CrossRef]
  32. Zhao, F.; Jiao, L.; Liu, H.; Gao, X. A novel fuzzy clustering algorithm with non local adaptive spatial constraint for image segmentation. Signal Process. 2011, 91, 988–999. [Google Scholar] [CrossRef]
  33. Feng, J.; Jiao, L.; Zhang, X.; Gong, M.; Sun, T. Robust non-local fuzzy c-means algorithm with edge preservation for SAR image segmentation. Signal Process. 2013, 93, 487–499. [Google Scholar] [CrossRef]
  34. Ji, J.; Wang, K.L. A robust nonlocal fuzzy clustering algorithm with between-cluster separation measure for SAR image segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4929–4936. [Google Scholar] [CrossRef]
  35. Wan, L.; Zhang, T.; Xiang, Y.; You, H. A robust fuzzy c-means algorithm based on Bayesian nonlocal spatial information for SAR image segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 896–906. [Google Scholar] [CrossRef]
  36. Kervrann, C.; Boulanger, J.; Coupé, P. Bayesian non-local means filter, image redundancy and adaptive dictionaries for noise removal. In International Conference on Scale Space and Variational Methods in Computer Vision; Springer: Berlin/Heidelberg, Germany, 2007; pp. 520–532. [Google Scholar]
  37. Deledalle, C.A.; Denis, L.; Tupin, F. How to compare noisy patches? Patch similarity beyond Gaussian noise. Int. J. Comput. Vis. 2012, 99, 86–102. [Google Scholar] [CrossRef] [Green Version]
  38. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Springer: Berlin/Heidelberg, Germany, 1981. [Google Scholar]
  39. Xie, H.; Pierce, L.; Ulaby, F. Statistical properties of logarithmically transformed speckle. IEEE Trans. Geosci. Remote Sens. 2002, 40, 721–727. [Google Scholar] [CrossRef]
  40. Goodman, J.W. Some fundamental properties of speckle. JOSA 1976, 66, 1145–1150. [Google Scholar] [CrossRef]
  41. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Raleigh, NC, USA, 2004. [Google Scholar]
  42. Shang, R.; Lin, J.; Jiao, L.; Li, Y. SAR Image Segmentation Using Region Smoothing and Label Correction. Remote Sens. 2020, 12, 803. [Google Scholar] [CrossRef] [Green Version]
  43. Liu, Y.; Zhang, X.; Chen, J.; Chao, H. A validity index for fuzzy clustering based on bipartite modularity. J. Electr. Comput. Eng. 2019, 2019, 2719617. [Google Scholar] [CrossRef] [Green Version]
  44. Bezdek, J.C. Numerical taxonomy with fuzzy sets. J. Math. Biol. 1974, 1, 57–71. [Google Scholar] [CrossRef]
  45. Bezdek, J.C. Cluster Validity with Fuzzy Sets; Taylor & Francis: Abingdon, UK, 1973. [Google Scholar]
  46. Dave, R.N. Validating fuzzy partitions obtained through c-shells clustering. Pattern Recognit. Lett. 1996, 17, 613–623. [Google Scholar] [CrossRef]
  47. Fukuyama, Y. A new method of choosing the number of clusters for the fuzzy c-mean method. In Proceedings of the 5th Fuzzy Systems Symposium, Kobe, Japan, 3 June 1989; pp. 247–250. [Google Scholar]
  48. Balakrishnan, N.; Rajendran, A.; Palanivel, K. Meticulous fuzzy convolution C means for optimized big data analytics: Adaptation towards deep learning. Int. J. Mach. Learn. Cybern. 2019, 10, 3575–3586. [Google Scholar] [CrossRef]
  49. Wang, Y.; Han, M.; Wu, Y. Semi-supervised Fault Diagnosis Model Based on Improved Fuzzy C-means Clustering and Convolutional Neural Network. In Proceedings of the IOP Conference Series: Materials Science and Engineering, Shaanxi, China, 8–11 October 2020; IOP Publishing; Volume 1043, p. 052043. [Google Scholar]
Figure 1. The similarity matrix between x and y. The elements marked green, orange and purple are sampled to illustrate the unsatisfied properties of the log-transformed Bayesian distance.
Figure 2. The similarity value ξ G L R ( x i k , x j k ) based on GLR. (a) The X and Y axes indicate values of x i k and x j k . (b) The similarity when x i k = 1 and x j k are taken from 1 to 10.
Figure 3. The results of dynamically balanced factor η i . (ae) Five sample SAR image slices. (fj) η i maps of (ae), respectively.
Figure 4. The calculation process of weight vector ϕ i for three classes. In this example, the neighborhood size is specified as 5 × 5 .
Figure 5. The framework of the proposed segmentation algorithm GLR_FCM; the framework of LBNL_FCM is similar.
Figure 6. The simulated SAR image and ground truth. (a) Simulated SAR image; (b) ground truth with gray; (c) ground truth with color.
Figure 7. The segmentation results on simulated SAR image. (a) FCM. (b) FCM_S1. (c) FCM_S2. (d) KFCM_S1. (e) KFCM_S2. (f) EnFCM. (g) FGFCM. (h) FCM_NLS. (i) NS_FCM. (j) RFCM_BNL. (k) LBNL_FCM. (l) GLR_FCM.
Figure 8. The objective function change curve of each algorithm on the first simulated SAR image.
Figure 9. The segmentation results on the second simulated SAR image. (a) Original image. (b) Ground truth (gray). (c) Ground truth (color). (d) FCM. (e) FCM_S1. (f) FCM_S2. (g) KFCM_S1. (h) KFCM_S2. (i) EnFCM. (j) FGFCM. (k) FCM_NLS. (l) NS_FCM. (m) RFCM_BNL. (n) LBNL_FCM. (o) GLR_FCM.
Figure 10. The objective function curve of each algorithm on the second simulated SAR image.
Figure 11. The first real SAR image and auxiliary images. (a) Original real SAR image; (b) The auxiliary image of LBNL_FCM; (c) The auxiliary image of GLR_FCM.
Figure 12. The segmentation result of each algorithm on the real SAR image. (a) FCM. (b) FCM_S1. (c) FCM_S2. (d) KFCM_S1. (e) KFCM_S2. (f) EnFCM. (g) FGFCM. (h) FCM_NLS. (i) NS_FCM. (j) RFCM_BNL. (k) LBNL_FCM. (l) GLR_FCM.
Figure 13. The segmentation results of each algorithm on the real SAR image. (a) Original image. (b) FCM. (c) FCM_S1. (d) FCM_S2. (e) KFCM_S1. (f) KFCM_S2. (g) EnFCM. (h) FGFCM. (i) FCM_NLS. (j) NS_FCM. (k) RFCM_BNL. (l) LBNL_FCM. (m) GLR_FCM.
Figure 14. Local detail maps of segmentation results. (a) The result of NS_FCM. (b) The result of RFCM_BNL. (c) The result of LBNL_FCM. (d) The result of GLR_FCM.
Figure 15. The segmentation results on real image 4. (a) Original image. (b) FCM. (c) FCM_S1. (d) FCM_S2. (e) KFCM_S1. (f) KFCM_S2. (g) EnFCM. (h) FGFCM. (i) FCM_NLS. (j) NS_FCM. (k) RFCM_BNL. (l) LBNL_FCM. (m) GLR_FCM.
Figure 16. Segmentation results of each algorithm on GaoFen-3 SAR Image. (a) Original image. (b) FCM. (c) FCM_S1. (d) FCM_S2. (e) KFCM_S1. (f) KFCM_S2. (g) EnFCM. (h) FGFCM. (i) FCM_NLS. (j) NS_FCM. (k) RFCM_BNL. (l) LBNL_FCM. (m) GLR_FCM.
Figure 17. The segmentation accuracy of different algorithms tested on the first simulated SAR image with added speckle noise of different numbers of looks. (a) SA curves of different methods. (b) Partial enlarged view of (a).
Figure 18. The SA (%) of LBNL_FCM and GLR_FCM carried out on the first simulated SAR image with different sizes of search window w × w and different sizes of local neighborhood r × r . (a) LBNL_FCM; (b) GLR_FCM.
Figure 19. The SA (%) of LBNL_FCM and GLR_FCM carried out on the second simulated SAR image with different sizes of search window w × w and different sizes of local neighborhood r × r . (a) LBNL_FCM; (b) GLR_FCM.
Table 1. The quantitative evaluation indicators used in the simulated SAR image experiments.

- PC (Partition Coefficient) [44]: $V_{PC} = \frac{1}{N}\sum_{c=1}^{C}\sum_{i=1}^{N} u_{ci}^{2}$. The larger the PC value, the better the partition result.
- PE (Partition Entropy) [45]: $V_{PE} = -\frac{1}{N}\sum_{c=1}^{C}\sum_{i=1}^{N} u_{ci}\log(u_{ci})$. The smaller the PE value, the better the partition result.
- MPC (Modified PC) [46]: $V_{MPC} = \frac{C \cdot V_{PC} - 1}{C - 1}$. The MPC eliminates the dependency on C; the larger the MPC, the better the partition result.
- MPE (Modified PE) [46]: $V_{MPE} = \frac{N \cdot V_{PE}}{N - C}$. Similarly, the smaller the MPE, the better the partition result.
- FS (Fukuyama–Sugeno Index) [47]: $V_{FS} = J_m(U,V) - K_m(U,V) = J_m - \sum_{i=1}^{N}\sum_{c=1}^{C} u_{ci}^{m}\,\lVert v_c - \tilde{v} \rVert^{2}$, where $\tilde{v} = \frac{1}{N}\sum_{i=1}^{N} x_i$. The first term indicates compactness and the second term indicates separation; the minimum FS implies the optimal partition.
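For convenience, the indices of Table 1 can be evaluated directly from the membership matrix and the cluster centres. The sketch below follows the formulas as written above; the fuzzifier value m = 2, the small epsilon guard, and the function name are our assumptions.

```python
import numpy as np

def validity_indices(U, X, V, m=2.0):
    # U: (c, N) memberships, X: (N, d) samples, V: (c, d) cluster centres.
    c, N = U.shape
    pc = (U ** 2).sum() / N                                   # V_PC
    pe = -(U * np.log(U + 1e-12)).sum() / N                   # V_PE (epsilon guards zeros)
    mpc = (c * pc - 1) / (c - 1)                              # V_MPC
    mpe = N * pe / (N - c)                                    # V_MPE
    d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)       # (c, N) squared distances
    jm = (U ** m * d2).sum()                                  # compactness J_m
    v_bar = X.mean(axis=0)
    km = (U ** m * ((V - v_bar) ** 2).sum(-1)[:, None]).sum() # separation K_m
    fs = jm - km                                              # V_FS
    return pc, pe, mpc, mpe, fs
```

Passing the final membership matrix, the pixel features, and the centres of any of the compared algorithms to such a routine yields the kind of entries reported in Tables 3 and 5.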
Table 2. SA (%) and execution time (s) on the first simulated SAR image.

Method   | SA (%) | Time (s) | Method    | SA (%) | Time (s)
FCM      | 60.58  | 2.16     | FGFCM     | 94.65  | 5.64
FCM_S1   | 90.49  | 1.11     | FCM_NLS   | 83.61  | 7.27
FCM_S2   | 90.49  | 1.46     | NS_FCM    | 95.03  | 7.77
KFCM_S1  | 92.66  | 1.27     | RFCM_BNL  | 97.29  | 10.88
KFCM_S2  | 91.42  | 1.20     | LBNL_FCM  | 97.64  | 12.11
EnFCM    | 90.63  | 1.85     | GLR_FCM   | 99.16  | 17.73
Table 3. Quantitative evaluation on the first simulated SAR image.

Method    | V_PC   | V_PE   | V_MPC  | V_MPE  | V_FS
FCM       | 0.7994 | 0.3995 | 0.7492 | 0.3995 | 3.12 × 10^8
FCM_S1    | 0.7203 | 0.5581 | 0.6504 | 0.5582 | 1.36 × 10^8
FCM_S2    | 0.7350 | 0.5347 | 0.6688 | 0.5347 | 1.78 × 10^8
KFCM_S1   | 0.6783 | 0.6623 | 0.5978 | 0.6624 | 1.01 × 10^8
KFCM_S2   | 0.6861 | 0.6537 | 0.6076 | 0.6537 | 1.39 × 10^8
EnFCM     | 0.8518 | 0.3031 | 0.8147 | 0.3031 | 1.56 × 10^8
FGFCM     | 0.8750 | 0.2595 | 0.8438 | 0.2595 | 2.33 × 10^8
FCM_NLS   | 0.7175 | 0.5892 | 0.6469 | 0.5893 | 1.70 × 10^8
NS_FCM    | 0.6932 | 0.6342 | 0.6165 | 0.6342 | 9.03 × 10^7
RFCM_BNL  | 0.8069 | 0.4165 | 0.7587 | 0.4165 | 1.34 × 10^8
LBNL_FCM  | 0.9609 | 0.0792 | 0.9511 | 0.0792 | 1.54 × 10^8
GLR_FCM   | 0.9855 | 0.0260 | 0.9819 | 0.0260 | 1.78 × 10^8
Table 4. SA (%) and execution time (s) on the second simulated SAR image.

Method   | SA (%) | Time (s) | Method    | SA (%) | Time (s)
FCM      | 73.82  | 3.47     | FGFCM     | 97.88  | 9.36
FCM_S1   | 95.83  | 1.16     | FCM_NLS   | 95.03  | 8.85
FCM_S2   | 96.55  | 1.27     | NS_FCM    | 96.10  | 9.59
KFCM_S1  | 96.36  | 1.02     | RFCM_BNL  | 98.66  | 16.58
KFCM_S2  | 96.94  | 1.22     | LBNL_FCM  | 98.82  | 16.83
EnFCM    | 95.88  | 2.03     | GLR_FCM   | 99.86  | 18.45
Table 5. Quantitative evaluation of the second simulated SAR image.

Method    | V_PC   | V_PE   | V_MPC  | V_MPE  | V_FS
FCM       | 0.8354 | 0.3298 | 0.7943 | 0.3298 | 6.28 × 10^8
FCM_S1    | 0.8204 | 0.3667 | 0.7755 | 0.3667 | 4.76 × 10^8
FCM_S2    | 0.8298 | 0.3524 | 0.7872 | 0.3524 | 5.34 × 10^8
KFCM_S1   | 0.7880 | 0.4492 | 0.7351 | 0.4492 | 4.34 × 10^8
KFCM_S2   | 0.7923 | 0.4448 | 0.7404 | 0.4448 | 4.86 × 10^8
EnFCM     | 0.9060 | 0.1971 | 0.8825 | 0.1971 | 6.27 × 10^9
FGFCM     | 0.9307 | 0.1511 | 0.9134 | 0.1511 | 5.66 × 10^8
FCM_NLS   | 0.8171 | 0.3890 | 0.7714 | 0.3890 | 4.90 × 10^8
NS_FCM    | 0.8085 | 0.4102 | 0.7607 | 0.4103 | 4.26 × 10^8
RFCM_BNL  | 0.8939 | 0.2414 | 0.8674 | 0.2414 | 4.79 × 10^8
LBNL_FCM  | 0.9882 | 0.0208 | 0.9852 | 0.0208 | 4.99 × 10^8
GLR_FCM   | 0.9972 | 0.0051 | 0.9965 | 0.0051 | 5.24 × 10^8
Table 6. The computational complexity of algorithms used in this study.

Method   | Computational Complexity   | Method    | Computational Complexity
FCM      | O(N × c × T)               | FGFCM     | O(N × w^2 + Q × c × T)
FCM_S1   | O(N × w^2 + N × c × T)     | FCM_NLS   | O(N × r^2 × s^2 + N × c × T)
FCM_S2   | O(N × w^2 + N × c × T)     | NS_FCM    | O(N × r^2 × s^2 + N × c × T)
KFCM_S1  | O(N × w^2 + N × c × T)     | RFCM_BNL  | O(N × r^2 × s^2 + N × W^2 + N × c × T)
KFCM_S2  | O(N × w^2 + N × c × T)     | LBNL_FCM  | O(N × r^2 × s^2 + N × W^2 + N × c × T)
EnFCM    | O(N × w^2 + Q × c × T)     | GLR_FCM   | O(N × r^2 × s^2 + N × W^2 + N × c × T)