Article

Change Detection Based on Fusion Difference Image and Multi-Scale Morphological Reconstruction for SAR Images

1 School of Physics and Electronic Information Technology, Yunnan Normal University, Kunming 650500, China
2 National Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
3 School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3604; https://doi.org/10.3390/rs14153604
Submission received: 30 June 2022 / Revised: 20 July 2022 / Accepted: 26 July 2022 / Published: 27 July 2022
(This article belongs to the Special Issue Radar High-Speed Target Detection, Tracking, Imaging and Recognition)

Abstract

Synthetic aperture radar (SAR) image-change detection is widely used in fields such as environmental monitoring and ecological monitoring. However, heavy speckle noise and insufficient use of image information often make change-detection results inaccurate. Thus, we propose a SAR image-change-detection method based on a multiplicative fusion difference image (DI), saliency detection (SD), multi-scale morphological reconstruction (MSMR), and fuzzy c-means (FCM) clustering. Firstly, a new fusion DI is generated by multiplying the ratio (R) operator, based on the pixel-wise ratio of the images before and after the change, with the mean-ratio (MR) operator, based on the ratio of neighborhood mean values. The new DI operator, the ratio–mean ratio (RMR), enlarges the contrast between changed and unchanged areas. Secondly, saliency detection is applied to the DI, which facilitates the subsequent sub-area processing. Thirdly, we propose an improved FCM change-detection method based on MSMR. The proposed method has high computational efficiency and makes full use of the neighborhood information obtained by morphological reconstruction. Six real SAR data sets are used in different experiments to demonstrate the effectiveness of the proposed saliency ratio–mean ratio with multi-scale morphological reconstruction fuzzy c-means (SRMR-MSMRFCM) method. Finally, four classical noise-sensitive methods are applied to the proposed DI to demonstrate its strong denoising and detail-preserving ability.

1. Introduction

The goal of synthetic aperture radar (SAR) image-change detection is to generate a change map that describes the changes between two or more images of the same scene acquired at different times [1,2,3]. As SAR can image at any time of day and in all weather conditions [4,5], it is widely used in environmental monitoring, ecological monitoring, urban development research, agricultural and forestry monitoring, natural disaster assessment, and other fields [6,7,8,9].
The process of SAR image-change detection generally includes three parts: image preprocessing, difference image (DI) generation, and difference image analysis [10,11].
The first step mainly includes image registration and image denoising. Due to the large amount of speckle noise in SAR images, many denoising methods are used to improve the detection effect, such as Lee filtering [12], Frost filtering [13], non-local means (NLM) [14], and speckle reducing anisotropic diffusion (SRAD) [15].
DI generation is an important step in change detection. DI operators fall into two types. The first type is pixel-based. Dekker proposed the ratio (R) method [16], which reduces noise but exaggerates the degree of change in low-gray-level areas. Bazi et al. proposed the log-ratio (LR) method [17], which transforms multiplicative noise into additive noise and is thus more amenable to subsequent denoising. The second type is based on neighborhood information such as the mean, median, local variance, or weighted spatial distance. Inglada et al. proposed the mean-ratio (MR) method [18]. The MR operator can reduce image noise, but it also reduces the contrast of the changed area. Zheng et al. proposed a new operator formed by the weighted fusion of the subtraction operator and the LR operator [19]. Gong et al. proposed the neighborhood-based ratio (NR) method [20]. The NR operator makes full use of the spatial information of the image, but it enlarges the gray level at edges. Zhang et al. proposed a method based on super-pixel segmentation [21], which makes better use of neighborhood information while maintaining image contours. Wang et al. proposed a DI-generation method based on the coefficient of variation and physical proximity [22]. Zhuang et al. proposed a DI-generation method based on the adaptive generalized likelihood ratio test (AGLRT) [23], which greatly suppresses image noise. Jia et al. fused the subtraction DI and the ratio DI by multi-scale wavelet fusion [24].
The last step is DI analysis. Thresholding and clustering are the most common unsupervised methods. The Kittler–Illingworth (KI) method, the Otsu method, and the expectation-maximization (EM) method are widely used in change detection [25,26,27]. Clustering methods mainly include K-means clustering [28] and fuzzy c-means (FCM) clustering [29].
FCM is the most widely used clustering method. However, FCM does not use neighborhood information and is therefore very sensitive to noise, so the final segmentation is often unsatisfactory. Ahmed et al. proposed the bias-corrected fuzzy c-means (FCM_S) method [30]. To address the low efficiency of FCM_S, Chen et al. proposed the improved FCM_S1 and FCM_S2 [31]. Both methods use neighborhood information, but many experimental parameters need to be tuned to obtain optimal results. Gong et al. used an improved fuzzy local information c-means (FLICM) method in SAR image-change detection [32]. This method has few parameters, but its convergence is slow. Mu et al. proposed a fuzzy c-means clustering method based on the Gaussian kernel (KFCM) [33]. This algorithm improves the extraction of image features but is demanding in the selection of the initial clustering centers. Wang et al. proposed the fuzzy adaptive local and region-level information c-means (FALRCM) method [34], which uses neighborhood information adaptively and is robust to noise. Lei et al. proposed the fast and robust fuzzy c-means (FRFCM) method [35], which introduced morphological reconstruction into fuzzy clustering. In deep learning, FCM is often used for image pre-classification. Gong et al. used FCM to pre-classify the DI and obtain reliable changed and unchanged samples, which were then used to train deep neural networks (DNN) [36].
Machine learning and deep learning are now widely used in change detection. Gao et al. combined the neighborhood-based ratio and the extreme learning machine (NR-ELM) [37], and Cui et al. proposed an unsupervised SAR change-detection method based on stochastic subspace ensemble learning, which combines training samples generated by two DIs [38]. Ma et al. proposed a method based on multi-grained cascade forest and multi-scale fusion [39]. Gao et al. proposed the convolutional wavelet neural network (CWNN) [40] and the principal component analysis network (PCANet) [41]. Wang et al. proposed a multi-scale average pooling (MSAP) network to exploit the change information in the noisy difference image [42]. Qu et al. proposed a dual-domain network (DDNet) [43], which introduced the discrete cosine transform (DCT) into the network. However, machine learning and deep learning methods usually require long running times and depend on the accuracy of the labels.
Existing DI methods use only single-pixel or neighborhood information. Moreover, clustering becomes computationally complex when neighborhood information is incorporated directly. The proposed saliency ratio–mean ratio with multi-scale morphological reconstruction fuzzy c-means (SRMR-MSMRFCM) has the following advantages. The R operator and the MR operator are both used to generate the fusion DI, so the advantages of a single-pixel operator and a neighborhood operator are combined. Saliency detection effectively separates the changed and unchanged areas, which lays the foundation for the subsequent sub-regional processing. Moreover, neighborhood information is introduced through morphological reconstruction rather than by directly adding fuzzy factors, which simplifies the computation.
The remainder of this paper is organized as follows. The proposed method is described in detail in Section 2. The experimental results on six data sets are presented in Section 3. Parameter analysis is given in Section 4. The conclusion and directions for further research are presented in Section 5. The main contributions of this paper are as follows.
(1)
A new difference image generation method is proposed, in which the R method and the MR method are combined by multiplication. While preserving the details of the image, the features of the changed areas are effectively enlarged, and the features of the unchanged areas are suppressed.
(2)
Saliency detection is used to obtain the changed and unchanged areas of the image. Large-size structuring elements are used to remove noise in the unchanged area. In the changed area, multi-scale morphological reconstruction can not only maintain the details of the image but also effectively remove the noise.
(3)
FCM, K-means, Otsu, and manual thresholding are all very sensitive to noise. Even though these methods are applied within the proposed framework, the proposed method decreases the influence of noise and better preserves the details of the changed areas.

2. Materials and Methods

In this part, we introduce the proposed SAR image-change detection method in detail. This method can be divided into the following steps: difference image generation, saliency detection, sub-regional morphological reconstruction, and output detection results.
Firstly, the R operator and the MR operator are used to generate DIs, and the two images are fused into a new DI by multiplication. Secondly, saliency detection is applied to the DI, and the Otsu method is used to obtain the binary saliency image. Thirdly, according to the saliency image, the image is reconstructed by sub-regional morphology. Finally, FCM is used to output the change-detection result. Figure 1 is the flow chart of the proposed method. In the proposed method, the new DI operator effectively increases the contrast between the changed and unchanged areas and improves the accuracy of saliency detection. Saliency detection is the basis of the sub-regional morphological reconstruction; the combination of the two not only removes noise but also preserves image details. By introducing the morphological reconstruction image information into FCM, FCM also becomes strongly robust to noise. The details of DI generation, saliency detection, multi-scale morphological reconstruction, and FCM are given in Section 2.1, Section 2.2, Section 2.3 and Section 2.4, respectively.

2.1. Generation of Difference Image

The ratio method reduces the influence of multiplicative noise and increases the contrast of the changed area. However, a large amount of the additive noise generated by this method remains. Firstly, the normalized ratio method is used to obtain the initial DI. Compared with the original ratio method, this method reduces the weight of the change difference for low-gray-level pixels, while the weight for high-gray-level pixels is almost unchanged, which improves the accuracy. Assuming that T1 and T2 are SAR images of the same area at different times, the difference image of T1 and T2 can be obtained by Equation (1).
$$X_{d1}(x,y) = \frac{\max\{T_1(x,y),\,T_2(x,y)\} - \min\{T_1(x,y),\,T_2(x,y)\}}{\max\{T_1(x,y),\,T_2(x,y)\} + \min\{T_1(x,y),\,T_2(x,y)\}} \quad (1)$$
The MR method takes the neighborhood mean of the corresponding pixels and then calculates the ratio. It is strongly robust to speckle noise. The MR method can be calculated by Equation (2).
$$X_{d2}(x,y) = 1 - \min\left\{\frac{\mu_1(x,y)}{\mu_2(x,y)},\,\frac{\mu_2(x,y)}{\mu_1(x,y)}\right\} \quad (2)$$
where $\mu_1(x,y)$ and $\mu_2(x,y)$ are the means of the gray values of all pixels in the 3 × 3 neighborhood window centered on coordinate $(x,y)$ in images T1 and T2, respectively.
In this paper, the R operator and the MR operator are multiplied and normalized to form a new DI operator. The ratio–mean ratio (RMR) DI can be calculated by Equation (3). The normalized image is shown by Equation (4).
$$X_{d3}(x,y) = \frac{\max\{T_1(x,y),\,T_2(x,y)\} - \min\{T_1(x,y),\,T_2(x,y)\}}{\max\{T_1(x,y),\,T_2(x,y)\} + \min\{T_1(x,y),\,T_2(x,y)\}} \times \left(1 - \min\left\{\frac{\mu_1(x,y)}{\mu_2(x,y)},\,\frac{\mu_2(x,y)}{\mu_1(x,y)}\right\}\right) \quad (3)$$
$$X_{d4}(x,y) = \frac{X_{d3}(x,y) - \min(X_{d3})}{\max(X_{d3}) - \min(X_{d3})} \quad (4)$$
When a single pixel and its neighborhood both change greatly, the gray value of the new operator remains large after normalization. When a single pixel and its neighborhood change little, the gray value of the new operator is very small after normalization. Therefore, the contrast between the changed and unchanged areas is improved. In addition, through the multiplication operation, the proposed RMR method reduces the false negatives caused by the R operator and the false positives caused by the MR operator.
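To make the construction concrete, the sketch below computes the RMR DI of Equations (1)–(4) in Python. The 3 × 3 neighborhood mean is realized with `scipy.ndimage.uniform_filter`, and the small `eps` guard against division by zero is our own implementation assumption, not part of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rmr_difference_image(t1, t2, eps=1e-10):
    """Fused ratio-mean-ratio (RMR) DI, a sketch of Equations (1)-(4).

    t1, t2: co-registered SAR intensity images as float arrays.
    eps guards against division by zero (an implementation choice).
    """
    t1 = np.asarray(t1, dtype=np.float64)
    t2 = np.asarray(t2, dtype=np.float64)

    # Equation (1): normalized ratio operator (single-pixel term).
    hi, lo = np.maximum(t1, t2), np.minimum(t1, t2)
    xd1 = (hi - lo) / (hi + lo + eps)

    # Equation (2): mean-ratio operator over a 3x3 neighborhood.
    mu1 = uniform_filter(t1, size=3)
    mu2 = uniform_filter(t2, size=3)
    xd2 = 1.0 - np.minimum(mu1 / (mu2 + eps), mu2 / (mu1 + eps))

    # Equation (3): multiplicative fusion of the two operators.
    xd3 = xd1 * xd2

    # Equation (4): min-max normalization to [0, 1].
    return (xd3 - xd3.min()) / (xd3.max() - xd3.min() + eps)
```

With T1 and T2 loaded as float arrays, `rmr_difference_image(T1, T2)` yields the normalized DI that is passed to the saliency-detection step described next.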

2.2. Saliency Detection

Saliency detection (SD) is well suited to SAR image-change detection. In [44,45], saliency detection was applied to select training samples. The Context-Aware (CA) saliency method was proposed by Goferman et al. [46]. The authors argued that a saliency image should contain not only the target but also the background area near the target, so that the salient information of the image is better described. Under this assumption, the changed area must lie in the salient area. In this paper, the RMR DI is used for saliency detection. The principle of CA is as follows. The RMR DI is converted from RGB space to Lab space, and the distance between DI patches is obtained by the following.
$$d(p_i, p_j) = \frac{d_{color}(p_i, p_j)}{1 + c \cdot d_{position}(p_i, p_j)} \quad (5)$$
where $d_{color}(p_i, p_j)$ is the Euclidean color distance between patches $p_i$ and $p_j$, and $d_{position}(p_i, p_j)$ is the Euclidean spatial distance. Therefore, the distance between $p_i$ and $p_j$ is proportional to the color distance and inversely proportional to the spatial distance. Here, c is set to 3. Then, the saliency value can be calculated according to the distance.
$$S_i^r = 1 - \exp\left\{-\frac{1}{K}\sum_{k=1}^{K} d(p_i, p_k)\right\} \quad (6)$$
where $S_i^r$ is the saliency value. At scale $r$, the image is segmented into $K$ areas. The greater the dissimilarity calculated for an area, the greater the saliency value of the pixels in that area. Here, K is set to 64. In order to enhance the contrast between salient and non-salient areas, the above calculation is extended to multiple scales. When a pixel has a large saliency value at multiple scales, it is considered part of the salient area we are looking for. Therefore, the mean saliency value is introduced, which is calculated by the following.
$$\bar{S}_i = \frac{1}{M}\sum_{r \in R} S_i^r \quad (7)$$
where $R = \{r_1, \ldots, r_M\}$ is the set of scales, and $\bar{S}_i$ is the average saliency value of area $i$ over these scales. Here, R is set to $\{100\%, 80\%, 50\%, 30\%\}$. The most attended local areas at each scale are then extracted from the saliency image: a pixel is considered attended at the current scale if its saliency value exceeds a certain threshold. The saliency value is refined as
$$\hat{S}_i = \bar{S}_i \left(1 - d_{foci}(i)\right) \quad (8)$$
where $d_{foci}(i)$ is the distance between area $i$ and the nearest attended area. Assuming that the image obeys a two-dimensional Gaussian distribution, the final saliency value $S$ of the image can be calculated as
$$S = \hat{S}_i \cdot G \quad (9)$$
where G is a two-dimensional Gaussian distribution centered at the image center. After the saliency image is obtained, it is binarized in order to reflect the changed and unchanged areas more intuitively. The Otsu method [26] can effectively distinguish the background from the target, so it is used to segment the saliency image, yielding the binary saliency image SB. In SB, the white part is the approximate changed area, and the black part is the approximate unchanged area. Through SB, we extract the corresponding changed area $f_{c1}$ and unchanged area $f_{u1}$ from the DI.
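The full CA detector involves multiple scales, the focus-of-attention refinement of Equation (8), and the Gaussian prior of Equation (9); the sketch below illustrates only the core single-scale dissimilarity of Equations (5) and (6). The patch size, stride, and the use of gray-level patches in place of Lab color features are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def ca_saliency_single_scale(img, patch=7, stride=8, K=64, c=3.0):
    """Single-scale CA dissimilarity, a sketch of Equations (5)-(6).

    Returns one saliency value per patch (not a dense per-pixel map);
    img is a 2-D float array such as the RMR DI.
    """
    h, w = img.shape
    feats, pos = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            feats.append(img[y:y + patch, x:x + patch].ravel())
            pos.append((y / h, x / w))          # normalized coordinates
    feats = np.asarray(feats, dtype=np.float64)
    pos = np.asarray(pos, dtype=np.float64)

    # Pairwise patch ("color") and position distances.
    d_col = cdist(feats, feats)
    d_pos = cdist(pos, pos)
    d = d_col / (1.0 + c * d_pos)               # Equation (5)

    # Equation (6): mean distance to the K most similar patches
    # (column 0 of the sorted rows is the zero self-distance).
    d_sorted = np.sort(d, axis=1)[:, 1:K + 1]
    return 1.0 - np.exp(-d_sorted.mean(axis=1))
```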

2.3. Morphological Reconstruction

After the images of the two areas have been obtained, the preconditions for sub-regional processing are met, but some noise still remains in the image. The FCM algorithm does not use neighborhood information and is therefore not robust to noise, so the change-detection results in many scenes are unsatisfactory. Morphological reconstruction [35] can reduce the noise of a noisy image while preserving the target contour for FCM. Morphological reconstruction is built on two basic operations: dilation and erosion. Suppose f is the original image and P is the structuring element. The two basic morphological operators can be written as
$$(f \oplus P)(x,y) = \sup\{f(x+a,\,y+b) \mid (a,b) \in D_P\},\ (x,y) \in D_f \quad (10)$$
$$(f \ominus P)(x,y) = \inf\{f(x+a,\,y+b) \mid (a,b) \in D_P\},\ (x,y) \in D_f \quad (11)$$
where $(f \oplus P)(x,y)$ and $(f \ominus P)(x,y)$ are the dilation and erosion of f at pixel $(x,y)$, respectively, $D_P$ denotes the domain of P, and $D_f$ denotes the domain of f.
By combining the morphological dilation and erosion operators, reconstruction operators with strong filtering ability can be obtained, such as the morphological opening and closing operations. They can be written as
$$(f \circ P)(x,y) = ((f \ominus P) \oplus P)(x,y) \quad (12)$$
$$(f \bullet P)(x,y) = ((f \oplus P) \ominus P)(x,y) \quad (13)$$
where $(f \circ P)(x,y)$ denotes the opening operation, and $(f \bullet P)(x,y)$ denotes the closing operation. The opening operation reduces image noise and removes outliers, while the closing operation fills small cracks in the image without changing the position and size of the image blocks. Therefore, using the opening and closing operations alternately removes noise from the image while keeping the image essentially unchanged. In this paper, morphological filtering of the image is achieved through the opening and closing operations, as shown in Equation (14).
$$F(x,y) = ((f \circ P) \bullet P)(x,y) \quad (14)$$
where $F(x,y)$ is the value of pixel $(x,y)$ after morphological reconstruction.
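A minimal sketch of the open-then-close filter of Equation (14), assuming `scikit-image` for grayscale morphology with a disk-shaped structuring element:

```python
from skimage.morphology import disk, opening, closing

def morphological_filter(f, radius=1):
    """Open-then-close filtering of Equation (14): grayscale opening
    removes bright outliers, and closing fills small dark cracks.
    `disk(radius)` is the round structuring element P."""
    p = disk(radius)
    return closing(opening(f, p), p)
```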
In the above operations, the radius of the structuring element is 1. In [35], morphological reconstruction is applied only to the original image. However, such an operation is too coarse for change detection. Therefore, the changed-area image is decomposed into three scales, and better results are obtained by morphologically reconstructing all three, as shown in Equations (15)–(17).
$$f_{c2}(x,y) = ((f_{c1} \circ n_1 P) \bullet n_1 P)(x,y) \quad (15)$$
$$f_{c3}(x,y) = ((f_{c1/2} \circ n_1 P) \bullet n_1 P)(x,y) \quad (16)$$
$$f_{c4}(x,y) = ((f_{c1/4} \circ n_1 P) \bullet n_1 P)(x,y) \quad (17)$$
where $f_{c1}$ is the original-scale image, $f_{c1/2}$ is the 1/2-scale image, $f_{c1/4}$ is the 1/4-scale image, and $n_1$ is the size of the structuring element. Different weight coefficients are used to fuse the three images. The final changed-area image can then be written as
$$f_c(x,y) = \alpha f_{c2}(x,y) + \beta f_{c3}(x,y) + \gamma f_{c4}(x,y) \quad (18)$$
For unchanged areas, it is more appropriate to use a larger structuring element for morphological reconstruction on the original-scale image. It is written as
$$f_u(x,y) = ((f_{u1} \circ n_2 P) \bullet n_2 P)(x,y) \quad (19)$$
Finally, $f_c$ and $f_u$ are summed to obtain the final morphological reconstruction image $\xi$.
$$\xi(x,y) = f_c(x,y) + f_u(x,y) \quad (20)$$
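Putting Equations (15)–(20) together, the following sketch performs the sub-regional multi-scale reconstruction. Upsampling the 1/2- and 1/4-scale results back to full size before the weighted fusion is our assumption, since the paper does not state how the scales are aligned; the default weights follow the Ottawa values reported in Section 4.1.

```python
from skimage.morphology import disk, opening, closing
from skimage.transform import resize

def msmr(f_c1, f_u1, n1=1, n2=4, weights=(0.57, 0.32, 0.08)):
    """Sketch of Equations (15)-(20). f_c1 / f_u1 are the DI restricted
    to the changed / unchanged areas given by the binary saliency image
    (assumed zero outside each area, so the final sum recombines them).
    """
    def open_close(img, r):
        p = disk(r)
        return closing(opening(img, p), p)

    h, w = f_c1.shape
    alpha, beta, gamma = weights

    # Equations (15)-(17): reconstruct the changed area at three scales.
    f_c2 = open_close(f_c1, n1)
    f_c3 = open_close(resize(f_c1, (h // 2, w // 2)), n1)
    f_c4 = open_close(resize(f_c1, (h // 4, w // 4)), n1)

    # Equation (18): weighted fusion (coarse scales upsampled first).
    f_c = (alpha * f_c2
           + beta * resize(f_c3, (h, w))
           + gamma * resize(f_c4, (h, w)))

    # Equation (19): large structuring element on the unchanged area.
    f_u = open_close(f_u1, n2)

    # Equation (20): combine the two areas.
    return f_c + f_u
```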

2.4. Fuzzy C-Means Clustering

FCM is a classical change-detection method. It finds a fuzzy clustering of the given data by minimizing an objective function, which can be calculated by Equation (21).
$$J_m = \sum_{l=1}^{q}\sum_{k=1}^{c}(u_{kl})^m \|y_l - v_k\|^2 = \sum_{l=1}^{q}\sum_{k=1}^{c}(u_{kl})^m d_{kl}^2 \quad (21)$$
where $Y = (y_1, y_2, \ldots, y_q)$ denotes the set of data samples, $V = (v_1, v_2, \ldots, v_c)$ denotes the cluster centers, $U = [u_{kl}]_{q \times c}$ is the membership matrix of the samples, $u_{kl} \in [0,1]$ is the degree of membership of sample $l$ in class $k$, $\|y_l - v_k\|^2$ is the squared Euclidean distance between the k-th cluster center and the l-th sample, and $m \in (1, \infty)$ is a weighting exponent. We introduce the morphologically reconstructed image into the clustering. Therefore, the objective function of the multi-scale morphological reconstruction fuzzy c-means (MSMRFCM) clustering algorithm can be written as
$$J_m = \sum_{l=1}^{q}\sum_{k=1}^{c}\chi_l (u_{kl})^m \|\xi_l - v_k\|^2 \quad (22)$$
where $\chi_l$ denotes the number of pixels with the l-th gray level, $\xi$ denotes the image after morphological reconstruction, $\xi_l$ is the l-th gray level, and $\|\xi_l - v_k\|^2$ is the squared Euclidean distance between the k-th cluster center and the l-th gray level. The optimal $u_{kl}$ and $v_k$ can be obtained by the Lagrange multiplier method, as follows.
$$u_{kl} = \frac{\|\xi_l - v_k\|^{-2/(m-1)}}{\sum_{j=1}^{c}\|\xi_l - v_j\|^{-2/(m-1)}} \quad (23)$$
$$v_k = \frac{\sum_{l=1}^{q}\chi_l (u_{kl})^m \xi_l}{\sum_{l=1}^{q}\chi_l (u_{kl})^m} \quad (24)$$
Each pixel is assigned to the corresponding class according to the final membership degree, and the change detection result is generated.
Many improved FCM algorithms, such as FLICM, introduce spatial local information into the objective function, which greatly increases computational complexity. By introducing morphological reconstruction into FCM, we not only reduce the computation of the algorithm but also gain good robustness to different kinds of noise. In addition, the multi-scale images better capture the complete features of the image. Finally, the running time of the algorithm is greatly reduced by operating on the gray histogram instead of pixel by pixel.
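A sketch of the histogram-based update of Equations (22)–(24) is shown below. The input `xi` is assumed to be the reconstructed image normalized to [0, 1] and quantized to 256 gray levels, which is an implementation choice; clustering runs over the gray levels weighted by their pixel counts, which is where the speed-up over pixel-wise FCM comes from.

```python
import numpy as np

def msmr_fcm(xi, c=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Histogram-based FCM sketch of Equations (22)-(24)."""
    xi = np.asarray(xi, dtype=np.float64)
    levels = np.round(xi * 255).astype(np.int64).ravel()
    grays, chi = np.unique(levels, return_counts=True)  # xi_l, chi_l
    grays = grays.astype(np.float64)
    chi = chi.astype(np.float64)

    rng = np.random.default_rng(seed)
    v = rng.uniform(grays.min(), grays.max(), size=c)   # cluster centers
    for _ in range(n_iter):
        d2 = (grays[None, :] - v[:, None]) ** 2 + 1e-12  # (c, q) distances
        u = d2 ** (-1.0 / (m - 1))                       # Equation (23)
        u /= u.sum(axis=0, keepdims=True)
        w = chi[None, :] * u ** m                        # Equation (24)
        v_new = (w * grays[None, :]).sum(axis=1) / w.sum(axis=1)
        if np.abs(v_new - v).max() < tol:
            v = v_new
            break
        v = v_new

    # Final memberships, then per-pixel labels via a gray-level lookup.
    d2 = (grays[None, :] - v[:, None]) ** 2 + 1e-12
    u = d2 ** (-1.0 / (m - 1))
    u /= u.sum(axis=0, keepdims=True)
    labels_per_level = u.argmax(axis=0)
    idx = np.searchsorted(grays, levels.astype(np.float64))
    return labels_per_level[idx].reshape(xi.shape), v
```

Because the update loops over q gray levels rather than all pixels, the cost per iteration is independent of the image size, which matches the running-time argument above.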

3. Experimental Results

3.1. Data Set

In this section, six real SAR image data sets are used to demonstrate the superiority of the method. Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 show the SAR data sets. Figure 2 is the Bern data set, acquired in April 1999 and May 1999 by ERS-2; the image size and resolution are 301 × 301 pixels and 20 m, respectively. Figure 3 is the Ottawa data set, acquired in May 1997 and August 1997 by Radarsat-1; the image size and resolution are 290 × 350 pixels and 12 m, respectively. Figure 4 is the Farmland data set, acquired in June 2008 and June 2009 by Radarsat-2; the image size and resolution are 306 × 291 pixels and 8 m, respectively. Figure 5 is the Coastline data set, acquired in June 2008 and June 2009 by Radarsat-2; the image size and resolution are 450 × 280 pixels and 8 m, respectively. Figure 6 is the Inland Water data set, acquired in June 2008 and June 2009 by Radarsat-2; the image size and resolution are 291 × 444 pixels and 8 m, respectively. Figure 7 is the Bangladesh data set, acquired in April 2007 and July 2007 by Envisat; the image size and resolution are 300 × 300 pixels and 10 m, respectively. A description of these data sets is given in Table 1.

3.2. Evaluation Criterion

In order to evaluate the change-detection results more objectively, we give five quantitative evaluation criteria: the number of false negatives (FN), the number of false positives (FP), the number of overall errors (OE, the sum of FP and FN), the percentage correct classification (PCC, the ratio of correctly detected pixels to the total number of pixels) [47], and the Kappa coefficient (KC, the similarity between the detection result and the ground truth) [48]. The actual numbers of changed and unchanged pixels are $N_c$ and $N_u$, respectively. These indicators are calculated as follows.
$$OE = FP + FN \quad (25)$$
$$PCC = \frac{N_u + N_c - FP - FN}{N_u + N_c} \times 100\% \quad (26)$$
$$KC = \frac{PCC - PRE}{1 - PRE} \quad (27)$$
$$PRE = \frac{(N_c - FN + FP) \cdot N_c + (N_u - FP + FN) \cdot N_u}{(N_c + N_u)^2} \quad (28)$$
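These criteria translate directly into code; the sketch below assumes binary NumPy arrays for the detection result and the ground truth.

```python
import numpy as np

def evaluate(result, truth):
    """Equations (25)-(28): FP, FN, OE, PCC, and KC from a binary
    change map `result` and the binary ground truth `truth`."""
    result = np.asarray(result, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    n_c = int(truth.sum())             # actual changed pixels
    n_u = int((~truth).sum())          # actual unchanged pixels
    fp = int((result & ~truth).sum())  # unchanged detected as changed
    fn = int((~result & truth).sum())  # changed detected as unchanged

    oe = fp + fn                                         # Equation (25)
    pcc = (n_u + n_c - fp - fn) / (n_u + n_c)            # Equation (26)
    pre = ((n_c - fn + fp) * n_c + (n_u - fp + fn) * n_u) \
          / (n_c + n_u) ** 2                             # Equation (28)
    kc = (pcc - pre) / (1 - pre)                         # Equation (27)
    return {"FP": fp, "FN": fn, "OE": oe, "PCC": pcc, "KC": kc}
```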

3.3. DI Analysis

For DI analysis, we will take the Ottawa data set as an example. The images before and after change, saliency images, DIs, and change-detection results are shown in Figure 8. Figure 8a,b are the images before and after change, respectively. Figure 8c,d are the saliency image and binary image obtained by RMR DI, respectively. Figure 8e–h are R DI, MR DI, RMR DI, and SRMR-MSMR DI, respectively. Figure 8i–l are the change-detection results of these DIs, respectively. The evaluation indicators are shown in Table 2.
It can be seen from the binary saliency image that the changed area of the image is not continuous. The binary image gives the approximate changed area, which is larger than that of the ground truth; therefore, it lacks many image details but reduces missed detections. There is a lot of speckle noise in the R DI, which makes its detection result poor. The calculation of the R DI involves only a ratio operation, so it is very sensitive to noise. There are a large number of isolated pixels in its final change map: the FP value reaches 1631, and the FN value reaches 1287, which is the worst. Due to the mean filtering in the MR DI, the changed area becomes significantly larger, and the corresponding change map has the most FP pixels, reaching 2323. Thanks to the multiplication operation, the noise in the unchanged area is reduced in the RMR DI; the KC value of RMR reaches 94.87%, far higher than those of R and MR. In the binary saliency image, the area within the green rectangle is the changed area. Compared with RMR, SRMR-MSMR successfully eliminates the surrounding misdetected pixels, so the changed area is completely preserved. The area within the red rectangle is the unchanged area; SRMR-MSMR completely eliminates these misdetected pixels. It can be seen that small structuring elements are used for the changed area, which maintains image details while eliminating noise, and large structuring elements are used for the unchanged area, which eliminates noise completely. This is thanks to the correct guidance that saliency detection provides for the sub-regional processing.

3.4. Change-Detection Results and Analysis

Six SAR image data sets and the change-detection results obtained by various methods are shown in Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14. The change-detection-result evaluation is shown in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8. Eight methods are used as comparison methods for the proposed SRMR-MSMRFCM, which are FCM [29], FLICM [32], PCA-KMeans [28], PCANet [41], CWNN [40], MSAPNet [42], robust unsupervised small-area change detection (RUSACD) [21], and DDNet [43].
The change-detection results and evaluation indicators of the Bern data set are shown in Figure 9 and Table 3, respectively. The change maps generated by FCM, FLICM, and PCA-KMeans contain many isolated pixels. Besides, the changed areas of FCM and DDNet are not continuous, resulting in large FN values, while the changed areas of FLICM and RUSACD are too large, resulting in large FP values. The detection result of PCA-KMeans is fairly good, but there is too much noise. Inside the red rectangle, PCANet misses a large part of the changed area, so its detection effect is poor. The result of CWNN is similar, though slightly better. Visually, our method and MSAPNet achieve the best detection results: our method has a slight advantage in FN, while MSAPNet has a slight advantage in FP. There are no isolated pixels in our change map, and the changed area is kept complete. In terms of evaluation criteria, the KC value of the proposed SRMR-MSMRFCM is improved by 6.75%, 2.80%, 0.93%, 13.46%, 2.39%, 0.11%, 5.10%, and 2.69% over FCM, FLICM, PCA-KMeans, PCANet, CWNN, MSAPNet, RUSACD, and DDNet, respectively. Therefore, the proposed method has clear advantages in both visual and quantitative comparisons.
The change-detection results and evaluation indicators of the Ottawa data set are shown in Figure 10 and Table 4, respectively. Similar to the Bern data set, isolated pixels reduce the detection accuracy of FCM and FLICM. The edges of PCANet are not smooth. Although the change map of CWNN is very smooth, many image details are lost and some small changed areas are not detected, so its edges remain poor. MSAPNet has a large number of FN pixels, while PCA-KMeans and RUSACD show the opposite behavior. For this data set, the performance of DDNet is unremarkable, as neither its FP nor its FN value is outstanding. The proposed SRMR-MSMRFCM achieves the best detection result, effectively preserving the small changed areas while removing the isolated pixels. In terms of evaluation criteria, the KC value of the proposed SRMR-MSMRFCM is improved by 6.95%, 1.20%, 5.12%, 2.72%, 1.94%, 5.90%, 2.03%, and 2.04% over FCM, FLICM, PCA-KMeans, PCANet, CWNN, MSAPNet, RUSACD, and DDNet, respectively. Visually and metrically, the proposed method strikes a balance between FP and FN.
The change-detection results and evaluation indicators of the Farmland data set are shown in Figure 11 and Table 5, respectively. Since there is a large amount of noise in the original images, FCM, FLICM, and PCA-KMeans mistake noise for changed areas, resulting in large FP values. There is almost no noise and there are no isolated pixels in the maps of PCANet and RUSACD, and the general changed and unchanged areas are detected; however, their results are not ideal because too many areas are missed. CWNN and DDNet achieve better results in the changed area, but produce some false-alarm areas. Our method achieves excellent results in both the changed and unchanged areas. MSAPNet performs similarly to ours, but still misses some changed areas. It can be seen that our method is highly robust when the original images are heavily corrupted by noise. In terms of evaluation criteria, the KC value of the proposed SRMR-MSMRFCM is improved by 20.96%, 8.59%, 7.98%, 6.58%, 2.42%, 0.48%, 4.37%, and 2.95% over FCM, FLICM, PCA-KMeans, PCANet, CWNN, MSAPNet, RUSACD, and DDNet, respectively. Therefore, the proposed method recovers the changed areas with as little loss of information as possible.
The change-detection results and evaluation indicators of the Coastline data set are shown in Figure 12 and Table 6, respectively. For this data set, the detection results of FCM and PCA-KMeans are very poor: the FP value exceeds 30,000, and the falsely detected pixels spread over almost the whole image. The results of PCANet, CWNN, and MSAPNet are better, but there are still many block-shaped false detections. FLICM performs very well on this data set; its FP value is only 903, so its map contains very little noise. RUSACD, DDNet, and our method all achieve excellent detection accuracy. To the naked eye, the change maps of the three are almost identical to the ground truth. However, in the circular changed area, RUSACD has a few FP pixels and DDNet has a few FN pixels. From the evaluation criteria, our method outperforms those two methods in both FP and FN. This is due to the fact that the RMR DI enlarges the gray levels of the changed area and reduces those of the unchanged area, and the MSMR algorithm effectively suppresses the noise. In terms of evaluation criteria, the KC value of the proposed SRMR-MSMRFCM is improved by 86.40%, 20.98%, 87.99%, 81.00%, 78.33%, 62.94%, 3.35%, and 4.75% over FCM, FLICM, PCA-KMeans, PCANet, CWNN, MSAPNet, RUSACD, and DDNet, respectively. For this data set, our method achieves much better results than the other methods, which proves its robustness.
The change-detection results and evaluation indicators of the Inland Water data set are shown in Figure 13 and Table 7, respectively. FCM, FLICM, PCA-KMeans, and DDNet produce many FP pixels, while PCANet and RUSACD have a large number of missed detections. CWNN and our method each have their own advantages. In the red rectangle, CWNN has a large number of false-alarm areas, while ours has none. CWNN leads in FN by 418 pixels, and our method leads in FP by 612 pixels. Our PCC value is higher than that of CWNN by 0.15%, and our KC value is only 0.01% lower. MSAPNet achieves the best PCC and KC values. Although we do not achieve the best results on this data set, the noise in our change map is completely removed.
The change-detection results and evaluation indicators of the Bangladesh data set are shown in Figure 14 and Table 8, respectively. Obviously, the FP values on this data set are negligible, but there are a large number of missed detections, as can be seen inside the red rectangle. All methods except RUSACD have FN values of around 4000. RUSACD and the proposed SRMR-MSMRFCM successfully detect more changed areas; moreover, the proposed method leads RUSACD by 59 and 495 pixels in the FP and FN values, respectively. Thus, the proposed method effectively preserves the image details. In terms of evaluation criteria, the KC value of the proposed SRMR-MSMRFCM is improved by 13.78%, 6.62%, 9.59%, 11.32%, 7.88%, 12.70%, 2.72%, and 6.90% over FCM, FLICM, PCA-KMeans, PCANet, CWNN, MSAPNet, RUSACD, and DDNet, respectively. It can be seen that the proposed method effectively reduces the missed detections.
For six real SAR image-change detection data sets, the proposed SRMR-MSMRFCM method achieves the best results for five of them. Obviously, the results of our method are much better than those of the classical methods, such as FCM, FLICM, and PCA-KMeans. Therefore, our analysis mainly focuses on the comparison with advanced deep learning methods and the mechanism of the methods.
Firstly, the multiplication operator effectively increases the contrast between the changed and unchanged areas. The R operator tends to increase the FN value of the detection result, while the MR operator tends to increase the FP value; the causes are, respectively, the pixel-wise ratio operation and the mean filtering of the neighborhood. In the FP areas of the MR DI, the corresponding pixels have low gray values in the R DI, so the R operator suppresses the FP pixels of the MR operator after the multiplication. Similarly, in the FN areas of the R DI, the corresponding pixels have higher gray values in the MR DI, so the MR operator suppresses the FN pixels of the R operator after the multiplication. Besides, compared with fusion by weighted summation, multiplication-based fusion amplifies the change characteristics of the image. For data sets less affected by noise, such as the Bern and Bangladesh data sets, the comprehensive performance of the proposed method is much better than that of the deep learning methods thanks to the RMR DI. The proposed method detects the changed areas more completely, while most deep learning methods miss many changed areas. The reason is that these deep learning methods use the LR DI to obtain labels, which omits many changed-class pixel labels; the neural network therefore cannot fully learn the features of the changed areas, resulting in high FN values.
Secondly, saliency detection and large-size structuring elements completely remove noise in the unchanged areas. For the six data sets, there is almost no isolated noise in the results of the proposed method. The CA method comprehensively considers distance, mean value, and multi-scale information, so it detects the changed area of the image completely. Since the prior information of the two-dimensional Gaussian distribution is introduced, the final saliency image better reflects the change information of the image. Large-size structuring elements have strong denoising ability but destroy image details; however, since they are applied only to the unchanged areas, this disadvantage does not matter in practice. The advantage of this design shows on the data sets heavily affected by noise, such as the Farmland, Coastline, and Inland Water data sets: there are no isolated pixels in the unchanged areas for the proposed method, while the other methods have more or fewer of them.
Thirdly, the multi-scale images of the changed areas enrich the image features and improve the detection accuracy. After the preceding steps, acceptable results can already be obtained from a single-scale image; however, better results require extending the method to multiple scales. Fusing the multi-scale images with appropriate proportions not only preserves details but also reduces image noise, and the small-size structuring elements play the same role here. The advantages of this design also show on the data sets heavily affected by noise, such as the Farmland, Coastline, and Inland Water data sets: the changed areas in the results are complete and smooth.
In order to prove the universality of the proposed method, four classification methods are used as the final detection step: manual thresholding [49], Otsu, K-means, and FCM. Neighborhood information and complex operations are not used in these methods, so they are suitable for universality tests. The results on the Ottawa data set are shown in Figure 15 and Table 9. Obviously, for this data set, very similar and excellent detection results are obtained no matter which method is used. For all experiments, the PCC and KC values exceed 98.80% and 95.50%, respectively, which are higher than those of the comparison methods in Section 3.4. This proves that the method has good universality.

4. Discussion

4.1. Discussion of Weight Coefficient

For parameter analysis, we will take the Ottawa data set as an example.
In order to prove the feasibility of MSMR, we designed five groups of experiments. The experimental setup and the results on the Ottawa data set are shown in Figure 16 and Table 10. In experiment A, only the 1/4-scale image is used; likewise, only the 1/2-scale image and the original image are used in experiments B and C, respectively. In experiment D, the images of all three scales each receive a weight of 1/3. The parameters of experiment E are those yielding the best results. As can be seen from the figure, in experiment A, when only the 1/4-scale image is used, the detection map loses many details; the FP value is 744, and the KC value is only 89.89%. In experiment B, when only the 1/2-scale image is used, the image details are enriched and the accuracy improves slightly. In experiment C, when only the original-scale image is used, PCC and KC reach 98.60% and 94.60%, respectively, which are the second-best results. In experiment D, when the three scales are weighted equally, the detection accuracy improves markedly, and KC reaches 94.43%. However, this value is slightly lower than that of experiment C, which uses only the original image; the reason is the inappropriate proportions of the three scales. The original image contains rich details, so it should receive the largest weight. The smaller the scale of an image, the more information is lost and the less noise remains, so smaller-scale images should receive smaller weights. In experiment E, when the original image, the 1/2-scale image, and the 1/4-scale image receive weights of 0.57, 0.32, and 0.08, respectively, we reach the highest PCC and KC values of 98.85% and 95.69%, respectively.
Figure 17 shows the results for all data sets. For single-scale images, the original scale achieves the best results, the 1/2-scale is second best, and the 1/4-scale is worst. The results of experiment C are better than those of experiment D, except on the Farmland data set. This proves that a multi-scale fusion with inappropriate proportions is not as good as the original image alone. The best results are all obtained in experiment E. For the Bern, Farmland, Coastline, Inland Water, and Bangladesh data sets, ($\alpha$, $\beta$, $\gamma$) are (0.5, 0.4, 0.1), (0.6, 0.3, 0.1), (0.38, 0.31, 0.29), (0.4, 0.33, 0.27), and (0.64, 0.32, 0.04), respectively.

4.2. Discussion of Structuring Elements

In morphological reconstruction, the size of structuring elements is very important to the final result of image processing. This parameter will be analyzed by taking the Coastline data set as an example. The change-detection results and evaluation indicators of different-size structuring elements are shown in Figure 18 and Table 11, respectively. It can be seen that when the radius of the structuring element is 1, there are some isolated FP pixels in the image. When the radius is 2, there are very few FP pixels in the red rectangle. When the radii are 3 and 4, there are no isolated FP pixels in the image. The final detection results are almost the same as the ground truth. When the radius is 5, the changed area in the red rectangle is missed, and there are a lot of FP pixels in the green rectangle. When the radius is 6, the detection result is even worse, as the changed area in the blue rectangle is missed. It can be seen that if the structuring element is too small, the isolated noise may not be completely removed. If the structuring element is too large, both FP and FN pixels can seriously reduce the accuracy of detection. Therefore, it is necessary to select the appropriate size of structuring elements according to the noise level of the image. For the Coastline data set, the image noise is very serious, so the structuring element with radius 4 is chosen. For the Bern, Ottawa, Farmland, Inland Water, and Bangladesh data sets, the best radii are 1, 1, 3, 4, and 1, respectively.

5. Conclusions

In this paper, a new DI-generation method, RMR, is constructed by fusing the R DI and the MR DI through multiplication. Experiments show that this method makes better use of the information of single pixels and neighborhood pixels, and therefore achieves excellent detection results on different kinds of SAR images. In addition, this paper proposes the MSMRFCM clustering change-detection method based on multi-scale morphological reconstruction, with saliency detection used to process the image sub-regionally. Experiments show that the method is robust to noise and maintains image details. However, the method still has some shortcomings: four parameters need to be adjusted. The size of the structuring elements is easily tuned to the best effect, but although the adjustment of the three scale coefficients follows a regular pattern, it still takes some time. In future research, we will try to make the parameter tuning adaptive or simpler without sacrificing too much accuracy. In addition, there is still much room for progress in the design of the filters.

Author Contributions

Methodology, J.X.; validation, J.X.; software, Z.W. and Y.S.; writing—original draft preparation, J.X.; writing—review and editing, J.X. and Z.X.; supervision, Z.X. and G.L.; suggestions, P.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 61801419 and the Natural Science Foundation of Yunnan Province under Grant Nos. 2019FD114 and 202201AT070027.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401. [Google Scholar] [CrossRef]
  2. Kit, O.; Lüdeke, M. Automated detection of slum area change in Hyderabad, India using multitemporal satellite imagery. Int. Soc. Photogramm. Remote Sens. 2013, 83, 130–137. [Google Scholar] [CrossRef] [Green Version]
  3. Dong, H.; Ma, W.; Wu, Y.; Gong, M.; Jiao, L. Local Descriptor Learning for Change Detection in Synthetic Aperture Radar Images via Convolutional Neural Networks. IEEE Access 2018, 7, 15389–15403. [Google Scholar] [CrossRef]
  4. Huang, Y.; Chen, Z.; Wen, C.; Li, J.; Xia, G.X.; Hong, W. An Efficient Radio Frequency Interference Mitigation Algorithm in Real Synthetic Aperture Radar Data. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  5. Chen, Z.; Zeng, Z.; Huang, Y.; Wan, J.; Tan, X. SAR Raw Data Simulation for Fluctuant Terrain: A New Shadow Judgment Method and Simulation Result Evaluation Framework. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar]
  6. Mas, J.-F. Monitoring land-cover changes: A comparison of change detection techniques. Int. J. Remote Sens. 1999, 20, 139–152. [Google Scholar] [CrossRef]
  7. Moser, G.; Serpico, S. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982. [Google Scholar] [CrossRef]
  8. Yan, L.; Xia, W.; Zhao, Z.; Wang, Y. A Novel Approach to Unsupervised Change Detection Based on Hybrid Spectral Difference. Remote Sens. 2018, 10, 841. [Google Scholar] [CrossRef] [Green Version]
  9. Liu, W.; Yang, J.; Zhao, J.; Yang, L. A Novel Method of Unsupervised Change Detection Using Multi-Temporal PolSAR Images. Remote Sens. 2017, 9, 1135. [Google Scholar] [CrossRef] [Green Version]
  10. Gong, M.; Zhang, P.; Su, L.; Liu, J. Coupled dictionary learning for change detection from multisource data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7077–7091. [Google Scholar] [CrossRef]
  11. Jia, L.; Zhang, T.; Fang, J.; Dong, F. Multiple Kernel Graph Cut for SAR Image Change Detection. Remote Sens. 2021, 13, 725. [Google Scholar] [CrossRef]
  12. Lee, J.-S. Speckle suppression and analysis for synthetic aperture radar images. Proc. SPIE 1986, 25, 636–643. [Google Scholar] [CrossRef]
  13. Frost, V.S.; Stiles, J.A.; Shanmugan, K.S.; Holtzman, J.C. A model for radar images and its application to adaptive digital filtering of multiplicative noise. IEEE Trans. Pattern Anal. Mach. Intell. 1982, 4, 157–166. [Google Scholar] [CrossRef]
  14. Farhadiani, R.; Homayouni, S.; Safari, A. Hybrid SAR speckle reduction using complex wavelet shrinkage and non-local PCA-based filtering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1489–1496. [Google Scholar] [CrossRef] [Green Version]
  15. Yu, Y.; Acton, S.T. Speckle reducing anisotropic diffusion. IEEE Trans. Image Processing 2002, 11, 1260–1270. [Google Scholar]
  16. Dekker, R.J. Speckle filtering in satellite SAR change detection imagery. Int. J. Remote Sens. 1998, 19, 1133–1146. [Google Scholar] [CrossRef]
  17. Bazi, Y.; Bruzzone, L.; Melgani, F. Automatic identification of the number and values of decision thresholds in the log-ratio image for change detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2006, 3, 349–353. [Google Scholar]
  18. Inglada, J.; Mercier, G. A New Statistical Similarity Measure for Change Detection in Multitemporal SAR Images and Its Extension to Multiscale Change Analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1432–1445. [Google Scholar] [CrossRef] [Green Version]
  19. Zheng, Y.; Zhang, X.; Hou, B.; Liu, G. Using Combined Difference Image and k -Means Clustering for SAR Image Change Detection. IEEE Geosci. Remote Sens. Lett. 2013, 11, 691–695. [Google Scholar] [CrossRef]
  20. Gong, M.; Cao, Y.; Wu, Q. A Neighborhood-Based Ratio Approach for Change Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2012, 9, 307–311. [Google Scholar] [CrossRef]
  21. Zhang, X.; Su, H.; Zhang, C.; Gu, X.; Tan, X.; Atkinson, P.M. Robust unsupervised small area change detection from SAR imagery using deep learning. ISPRS-J. Photogramm. Remote Sens. 2021, 173, 79–94. [Google Scholar] [CrossRef]
  22. Liu, R.; Wang, R.; Huang, J.; Li, J.; Jiao, L. Change Detection in SAR Images Using Multiobjective Optimization and Ensemble Strategy. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1585–1589. [Google Scholar] [CrossRef]
  23. Zhuang, H.; Tan, Z.; Deng, K.; Yao, G. Adaptive Generalized Likelihood Ratio Test for Change Detection in SAR Images. IEEE Geosci. Remote Sens. Lett. 2020, 17, 416–420. [Google Scholar] [CrossRef]
  24. Jia, L.; Li, M.; Zhang, P.; Wu, Y.; An, L.; Song, W. Remote-Sensing Image Change Detection with Fusion of Multiple Wavelet Kernels. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2016, 9, 3405–3418. [Google Scholar] [CrossRef]
  25. Bazi, Y.; Bruzzone, L.; Melgani, F. An unsupervised approach based on the generalized Gaussian model to automatic change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens. 2005, 43, 874–887. [Google Scholar] [CrossRef] [Green Version]
  26. Otsu, N. Threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  27. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. 1977, 39, 1–38. [Google Scholar]
  28. Celik, T. Unsupervised change detection in satellite images using principal component analysis and κ-means clustering. IEEE Geosci. Remote Sens. Lett. 2009, 6, 772–776. [Google Scholar] [CrossRef]
  29. Ghosh, A.; Mishra, N.S.; Ghosh, S. Fuzzy clustering algorithms for unsupervised change detection in remote sensing images. Inf. Sci. 2011, 181, 699–715. [Google Scholar] [CrossRef]
  30. Ahmed, M.N.; Yamany, S.M.; Mohamed, N.; Farag, A.A.; Moriarty, T. A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging 2002, 21, 193–199. [Google Scholar] [CrossRef]
  31. Chen, S.; Zhang, D. Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure. IEEE Trans. Syst. Man Cybern. 2004, 34, 1907–1916. [Google Scholar] [CrossRef] [Green Version]
  32. Gong, M.; Zhou, Z.; Ma, J. Change detection in synthetic aperture radar images based on image fusion and fuzzy clustering. IEEE Trans. Image Processing 2012, 21, 2141–2151. [Google Scholar] [CrossRef]
  33. Mu, C.; Huo, L.; Liu, Y.; Liu, R.; Jiao, L. Change Detection for Remote Sensing Images Based on Wavelet Fusion and PCA-Kernel Fuzzy Clustering. Acta Electronica Sinica. 2015, 43, 1375–1381. [Google Scholar]
  34. Wang, Q.; Wang, X.; Fang, C.; Jiao, J. Fuzzy image clustering incorporating local and area-level information with median memberships. Appl. Soft Comput. 2021, 105, 107245. [Google Scholar] [CrossRef]
  35. Lei, T.; Jia, X.; Zhang, Y.; He, L.; Meng, H.; Nandi, A.K. Significantly Fast and Robust Fuzzy C-Means Clustering Algorithm Based on Morphological Reconstruction and Membership Filtering. IEEE Trans. Fuzzy Syst. 2018, 26, 3027–3041. [Google Scholar] [CrossRef] [Green Version]
  36. Gong, M.; Zhao, J.; Liu, J.; Miao, Q.; Jiao, L. Change detection in synthetic aperture radar images based on deep neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 125–138. [Google Scholar] [CrossRef]
  37. Gao, F.; Dong, J.; Li, B. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine. J. Appl. Remote. Sens. 2016, 10, 046019. [Google Scholar] [CrossRef]
  38. Cui, B.; Zhang, Y.; Yan, L.; Wei, J.; Wu, H. An Unsupervised SAR Change Detection Method Based on Stochastic Subspace Ensemble Learning. Remote Sens. 2019, 11, 1314. [Google Scholar] [CrossRef] [Green Version]
  39. Ma, W.; Yang, H.; Wu, Y.; Xiong, Y.; Hu, T.; Jiao, L.; Hou, B. Change Detection Based on Multi-Grained Cascade Forest and Multi-Scale Fusion for SAR Images. Remote Sens. 2019, 11, 142. [Google Scholar] [CrossRef] [Green Version]
  40. Gao, F.; Wang, X.; Gao, Y.; Dong, J.; Wang, S. Sea Ice Change Detection in SAR Images Based on Convolutional-Wavelet Neural Networks. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1240–1244. [Google Scholar] [CrossRef]
  41. Gao, F.; Dong, J.; Li, B.; Xu, Q. Automatic change detection in synthetic aperture radar images based on PCANet. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1792–1796. [Google Scholar] [CrossRef]
  42. Wang, R.; Ding, F.; Chen, J.W.; Liu, B.; Zhang, J.; Jiao, L. SAR Image Change Detection Method via a Pyramid Pooling Convolutional Neural Network. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 312–315. [Google Scholar]
  43. Qu, X.; Gao, F.; Dong, J.; Du, Q.; Li, H.C. Change Detection in Synthetic Aperture Radar Images Using a Dual-Domain Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  44. Li, M.; Li, M.; Zhang, P.; Wu, Y.; Song, W.; An, L. SAR Image Change Detection Using PCANet Guided by Saliency Detection. IEEE Geosci. Remote Sens. Lett. 2019, 16, 402–406. [Google Scholar] [CrossRef]
  45. Geng, J.; Ma, X.; Zhou, X.; Wang, H. Saliency-Guided Deep Neural Networks for SAR Image Change Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7365–7377. [Google Scholar] [CrossRef]
  46. Goferman, S.; Zelnik-Manor, L.; Tal, A. Context-aware saliency detection. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1915–1926. [Google Scholar] [CrossRef] [Green Version]
  47. Wang, J.; Wang, Y.; Liu, H. Hybrid Variability Aware Network (HVANet): A Self-Supervised Deep Framework for Label-Free SAR Image Change Detection. Remote Sens. 2022, 14, 734. [Google Scholar] [CrossRef]
  48. Shu, Y.; Li, W.; Yang, M.; Cheng, P.; Han, S. Patch-Based Change Detection Method for SAR Images with Label Updating Strategy. Remote Sens. 2021, 13, 1236. [Google Scholar] [CrossRef]
  49. Rignot, E.J.; Van Zyl, J.J. Change detection techniques for ERS-1 SAR data. IEEE Trans. Geosci. Remote Sens. 1993, 31, 896–906. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flowchart of the proposed method.
Figure 2. Bern data set. (a) Image obtained in April 1999; (b) image obtained in May 1999; (c) the ground truth.
Figure 3. Ottawa data set. (a) Image obtained in May 1997; (b) image obtained in August 1997; (c) the ground truth.
Figure 4. Farmland data set. (a) Image obtained in June 2008; (b) image obtained in June 2009; (c) the ground truth.
Figure 5. Coastline data set. (a) Image obtained in June 2008; (b) image obtained in June 2009; (c) the ground truth.
Figure 6. Inland Water data set. (a) Image obtained in June 2008; (b) image obtained in June 2009; (c) the ground truth.
Figure 7. Bangladesh data set. (a) Image obtained in April 2007; (b) image obtained in July 2007; (c) the ground truth.
Figure 8. Images of the Ottawa data set. (a) Before; (b) after; (c) saliency image; (d) binary saliency image; (e) R; (f) MR; (g) RMR; (h) SRMR-MSMR; (i) result of R; (j) result of MR; (k) result of RMR; (l) result of SRMR-MSMR.
Figure 9. Change-detection results of the Bern data set. (a) FCM; (b) FLICM; (c) PCA-KMeans; (d) PCANet; (e) CWNN; (f) MSAPNet; (g) RUSACD; (h) DDNet; (i) SRMR-MSMRFCM; (j) the ground truth.
Figure 10. Change-detection results of the Ottawa data set. (a) FCM; (b) FLICM; (c) PCA-KMeans; (d) PCANet; (e) CWNN; (f) MSAPNet; (g) RUSACD; (h) DDNet; (i) SRMR-MSMRFCM; (j) the ground truth.
Figure 11. Change-detection results of the Farmland data set. (a) FCM; (b) FLICM; (c) PCA-KMeans; (d) PCANet; (e) CWNN; (f) MSAPNet; (g) RUSACD; (h) DDNet; (i) SRMR-MSMRFCM; (j) the ground truth.
Figure 12. Change-detection results of the Coastline data set. (a) FCM; (b) FLICM; (c) PCA-KMeans; (d) PCANet; (e) CWNN; (f) MSAPNet; (g) RUSACD; (h) DDNet; (i) SRMR-MSMRFCM; (j) the ground truth.
Figure 13. Change-detection results of the Inland Water data set. (a) FCM; (b) FLICM; (c) PCA-KMeans; (d) PCANet; (e) CWNN; (f) MSAPNet; (g) RUSACD; (h) DDNet; (i) SRMR-MSMRFCM; (j) the ground truth.
Figure 14. Change-detection results of the Bangladesh data set. (a) FCM; (b) FLICM; (c) PCA-KMeans; (d) PCANet; (e) CWNN; (f) MSAPNet; (g) RUSACD; (h) DDNet; (i) SRMR-MSMRFCM; (j) the ground truth.
Figure 15. Change-detection results of the Ottawa data set with different analysis methods. (a) Threshold; (b) Otsu; (c) KMeans; (d) FCM.
Figure 16. Multi-scale DIs and change-detection results of the Ottawa data set. (a) Original DI; (b) 1/2-scale DI; (c) 1/4-scale DI; (d) result of setting A; (e) result of setting B; (f) result of setting C; (g) result of setting D; (h) result of setting E (settings A–E are defined in Table 10).
Figure 17. PCC and KC of all data sets. (a) PCC; (b) KC.
Figure 18. Change-detection results of different-size structuring elements of the Coastline data set. (a) Radius = 1; (b) radius = 2; (c) radius = 3; (d) radius = 4; (e) radius = 5; (f) radius = 6.
Table 1. The six data sets used in the experiments.

| Place | Pre-Date | Post-Date | Size | Satellite | Resolution |
|---|---|---|---|---|---|
| Bern | 1999.04 | 1999.05 | 301 × 301 | ERS-2 | 20 m |
| Ottawa | 1997.05 | 1997.08 | 290 × 350 | Radarsat-1 | 12 m |
| Farmland | 2008.06 | 2009.06 | 306 × 291 | Radarsat-2 | 8 m |
| Coastline | 2008.06 | 2009.06 | 450 × 280 | Radarsat-2 | 8 m |
| Inland Water | 2008.06 | 2009.06 | 291 × 444 | Radarsat-2 | 8 m |
| Bangladesh | 2007.04 | 2007.07 | 300 × 300 | Envisat | 10 m |
Table 2. Change-detection evaluation indicators for the Ottawa data set with different DIs.

| DI | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| R [16] | 1631 | 1287 | 2918 | 97.13 | 89.29 |
| MR [18] | 2323 | 193 | 2516 | 97.52 | 91.66 |
| RMR | 427 | 941 | 1368 | 98.65 | 94.87 |
| MSMR | 670 | 499 | 1169 | 98.85 | 95.69 |
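For reference, the indicators in Tables 2–11 follow standard definitions: FP and FN are the numbers of false alarms and missed changes, OE = FP + FN is the overall error, PCC is the percentage of correctly classified pixels, and KC is the kappa coefficient between the change map and the ground truth. A minimal sketch, assuming binary NumPy arrays with 1 = changed and 0 = unchanged (the function name is illustrative, not from the paper):

```python
import numpy as np

def change_detection_indicators(result, truth):
    """FP, FN, OE, PCC, and KC for a binary change map vs. the ground truth."""
    n = truth.size
    tp = np.sum((result == 1) & (truth == 1))  # correctly detected changes
    tn = np.sum((result == 0) & (truth == 0))  # correctly detected non-changes
    fp = np.sum((result == 1) & (truth == 0))  # false alarms
    fn = np.sum((result == 0) & (truth == 1))  # missed changes
    oe = fp + fn                               # overall error
    pcc = (tp + tn) / n                        # percentage correct classification
    # Chance-agreement term of the kappa coefficient
    pre = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n ** 2)
    kc = (pcc - pre) / (1 - pre)
    return fp, fn, oe, 100 * pcc, 100 * kc
```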
Table 3. Change-detection evaluation indicators for the Bern data set with different methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| FCM [29] | 83 | 310 | 393 | 99.57 | 80.92 |
| FLICM [32] | 301 | 77 | 378 | 99.58 | 84.87 |
| PCA-KMeans [28] | 158 | 146 | 304 | 99.66 | 86.74 |
| PCANet [41] | 25 | 455 | 480 | 99.47 | 74.21 |
| CWNN [40] | 85 | 230 | 315 | 99.65 | 85.28 |
| MSAPNet [42] | 148 | 140 | 288 | 99.68 | 87.56 |
| RUSACD [21] | 307 | 122 | 429 | 99.53 | 82.57 |
| DDNet [43] | 71 | 246 | 317 | 99.65 | 84.98 |
| SRMR-MSMRFCM | 163 | 123 | 286 | 99.68 | 87.67 |
Table 4. Change-detection evaluation indicators for the Ottawa data set with different methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| FCM [29] | 802 | 2139 | 2941 | 97.10 | 88.74 |
| FLICM [32] | 839 | 657 | 1496 | 98.53 | 94.49 |
| PCA-KMeans [28] | 970 | 1541 | 2511 | 97.53 | 90.57 |
| PCANet [41] | 871 | 1021 | 1892 | 98.14 | 92.97 |
| CWNN [40] | 1291 | 434 | 1725 | 98.30 | 93.75 |
| MSAPNet [42] | 262 | 2351 | 2613 | 97.43 | 89.79 |
| RUSACD [21] | 1468 | 295 | 1763 | 98.26 | 93.66 |
| DDNet [43] | 693 | 1010 | 1703 | 98.32 | 93.65 |
| SRMR-MSMRFCM | 670 | 499 | 1169 | 98.85 | 95.69 |
Table 5. Change-detection evaluation indicators for the Farmland data set with different methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| FCM [29] | 2472 | 880 | 3352 | 96.23 | 70.39 |
| FLICM [32] | 1381 | 467 | 1848 | 97.92 | 82.76 |
| PCA-KMeans [28] | 1293 | 476 | 1769 | 98.01 | 83.37 |
| PCANet [41] | 25 | 1312 | 1337 | 98.50 | 84.77 |
| CWNN [40] | 324 | 734 | 1058 | 98.81 | 88.93 |
| MSAPNet [42] | 179 | 686 | 865 | 98.94 | 90.87 |
| RUSACD [21] | 124 | 1060 | 1184 | 98.67 | 86.98 |
| DDNet [43] | 231 | 855 | 1086 | 98.78 | 88.40 |
| SRMR-MSMRFCM | 102 | 709 | 811 | 99.01 | 91.35 |
Table 6. Change-detection evaluation indicators for the Coastline data set with different methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| FCM [29] | 30,397 | 60 | 30,457 | 75.83 | 5.87 |
| FLICM [32] | 903 | 87 | 990 | 99.21 | 71.43 |
| PCA-KMeans [28] | 39,583 | 24 | 39,607 | 68.57 | 4.28 |
| PCANet [41] | 17,879 | 6 | 17,885 | 85.81 | 11.27 |
| CWNN [40] | 13,954 | 51 | 14,005 | 88.88 | 13.94 |
| MSAPNet [42] | 5794 | 58 | 5862 | 95.36 | 29.33 |
| RUSACD [21] | 115 | 174 | 289 | 99.77 | 88.92 |
| DDNet [43] | 144 | 184 | 328 | 99.74 | 87.52 |
| SRMR-MSMRFCM | 32 | 164 | 196 | 99.84 | 92.27 |
Table 7. Change-detection evaluation indicators for the Inland Water data set with different methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| FCM [29] | 3268 | 543 | 3811 | 97.05 | 64.63 |
| FLICM [32] | 1654 | 798 | 2452 | 98.10 | 72.84 |
| PCA-KMeans [28] | 1354 | 603 | 1957 | 98.49 | 78.09 |
| PCANet [41] | 622 | 1770 | 2392 | 98.15 | 71.01 |
| CWNN [40] | 1333 | 494 | 1827 | 98.59 | 79.73 |
| MSAPNet [42] | 939 | 669 | 1608 | 98.76 | 81.04 |
| RUSACD [21] | 729 | 1114 | 1843 | 98.57 | 76.58 |
| DDNet [43] | 1334 | 576 | 1910 | 98.52 | 78.63 |
| SRMR-MSMRFCM | 721 | 912 | 1633 | 98.74 | 79.72 |
Table 8. Change-detection evaluation indicators for the Bangladesh data set with different methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| FCM [29] | 4 | 4957 | 4961 | 94.49 | 74.27 |
| FLICM [32] | 21 | 3722 | 3743 | 95.84 | 81.43 |
| PCA-KMeans [28] | 308 | 4031 | 4339 | 95.18 | 78.46 |
| PCANet [41] | 5 | 4548 | 4553 | 94.94 | 76.73 |
| CWNN [40] | 19 | 3947 | 3966 | 95.59 | 80.17 |
| MSAPNet [42] | 2 | 4781 | 4783 | 94.69 | 75.35 |
| RUSACD [21] | 198 | 2886 | 3064 | 96.60 | 85.33 |
| DDNet [43] | 19 | 3774 | 3793 | 95.79 | 81.15 |
| SRMR-MSMRFCM | 139 | 2391 | 2530 | 97.19 | 88.05 |
Table 9. Change-detection results of the Ottawa data set with different analysis methods.

| Method | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| Threshold [47] | 576 | 636 | 1212 | 98.81 | 95.51 |
| Otsu [26] | 626 | 566 | 1192 | 98.83 | 95.60 |
| KMeans [28] | 663 | 512 | 1175 | 98.84 | 95.67 |
| FCM [29] | 670 | 499 | 1169 | 98.85 | 95.69 |
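Table 9 isolates the final analysis step: the same DI is binarized with a fixed threshold, Otsu, k-means, and FCM, and the four results are close (KC between 95.51% and 95.69%), indicating that most of the discrimination is already done by the DI. A minimal sketch of the Otsu and k-means variants, assuming scikit-image and scikit-learn are available (the fixed-threshold and FCM variants follow the same pattern; function names are illustrative):

```python
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

def binarize_otsu(di):
    """Binarize a difference image with Otsu's global threshold."""
    return (di > threshold_otsu(di)).astype(np.uint8)

def binarize_kmeans(di):
    """Binarize a difference image by 2-class k-means on pixel intensities."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(di.reshape(-1, 1))
    labels = labels.reshape(di.shape)
    # The cluster with the higher mean DI value is taken as "changed"
    if di[labels == 1].mean() < di[labels == 0].mean():
        labels = 1 - labels
    return labels.astype(np.uint8)
```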
Table 10. Experimental parameter settings and change-detection results for the Ottawa data set.

| Setting | α | β | γ | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|---|---|---|
| A | 0 | 0 | 1 | 744 | 1907 | 2651 | 97.39 | 89.89 |
| B | 0 | 1 | 0 | 62 | 1849 | 1911 | 98.12 | 92.59 |
| C | 1 | 0 | 0 | 210 | 1212 | 1422 | 98.60 | 94.60 |
| D | 1/3 | 1/3 | 1/3 | 314 | 1157 | 1474 | 98.55 | 94.43 |
| E | 0.57 | 0.32 | 0.08 | 670 | 499 | 1169 | 98.85 | 95.69 |
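The pattern in rows A–E suggests that α, β, and γ weight the original, 1/2-scale, and 1/4-scale DIs of Figure 16 (A–C each use a single scale, D weights them equally, and E is the tuned setting whose FP/FN/OE match the SRMR-MSMRFCM rows above). A minimal sketch of such a weighted multi-scale fusion, under the assumption that the coarse-scale maps are simply resized back to full resolution before combination; this resize-based pipeline is an illustration, not the paper's exact MSMR formulation:

```python
from skimage.transform import resize

def fuse_multiscale(di, alpha=0.57, beta=0.32, gamma=0.08):
    """Weighted fusion of a DI with its 1/2- and 1/4-scale versions.

    Assumption: alpha, beta, gamma weight the original, 1/2-scale, and
    1/4-scale difference images, as suggested by Table 10.
    """
    h, w = di.shape
    di_half = resize(di, (h // 2, w // 2), anti_aliasing=True)
    di_quarter = resize(di, (h // 4, w // 4), anti_aliasing=True)
    # Bring the coarse scales back to the original resolution
    up_half = resize(di_half, (h, w))
    up_quarter = resize(di_quarter, (h, w))
    return alpha * di + beta * up_half + gamma * up_quarter
```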
Table 11. Change-detection evaluation indicators for different-size structuring elements on the Coastline data set.

| Radius | FP | FN | OE | PCC (%) | KC (%) |
|---|---|---|---|---|---|
| 1 | 178 | 196 | 374 | 99.70 | 85.88 |
| 2 | 63 | 182 | 245 | 99.81 | 90.39 |
| 3 | 29 | 176 | 205 | 99.84 | 91.88 |
| 4 | 32 | 164 | 196 | 99.84 | 92.27 |
| 5 | 176 | 307 | 483 | 99.62 | 80.98 |
| 6 | 239 | 729 | 968 | 99.23 | 55.75 |
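Table 11 shows the sensitivity to the structuring-element size on the Coastline data set: small disks leave residual false alarms, very large ones erase genuine changed pixels (FN jumps at radius 5–6), and radius = 4 yields the best KC. A minimal sketch of grayscale opening- and closing-by-reconstruction with a disk structuring element, using scikit-image; this is one plausible building block of MSMR, not the paper's full multi-scale procedure:

```python
from skimage.morphology import disk, erosion, dilation, reconstruction

def open_close_by_reconstruction(di, radius=4):
    """Opening- then closing-by-reconstruction of a grayscale DI."""
    se = disk(radius)
    # Opening by reconstruction: erode, then geodesically dilate under the DI;
    # removes bright speckle smaller than the disk while preserving edges.
    opened = reconstruction(erosion(di, se), di, method='dilation')
    # Closing by reconstruction: dilate, then geodesically erode above it;
    # fills small dark holes inside changed regions.
    closed = reconstruction(dilation(opened, se), opened, method='erosion')
    return closed
```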