Article

A Watermarking Scheme for Color Image Using Quaternion Discrete Fourier Transform and Tensor Decomposition

1 School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
2 Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, China
3 Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
* Authors to whom correspondence should be addressed.
Appl. Sci. 2021, 11(11), 5006; https://doi.org/10.3390/app11115006
Submission received: 25 April 2021 / Revised: 15 May 2021 / Accepted: 17 May 2021 / Published: 28 May 2021
(This article belongs to the Special Issue Research on Multimedia Systems)

Abstract

To protect the copyright of color images, a color image watermarking scheme based on quaternion discrete Fourier transform (QDFT) and tensor decomposition (TD) is presented. Specifically, the cover image is partitioned into non-overlapping blocks, and QDFT is performed on each image block. The three imaginary frequency components of the QDFT are then used to construct a third-order tensor, which is decomposed by Tucker decomposition to generate a core tensor. Finally, an improved odd–even quantization technique is employed to embed a watermark in the core tensor. Moreover, pseudo-Zernike moments and a multiple output least squares support vector regression (MLS–SVR) network model are used for geometric distortion correction in the watermark extraction stage. The scheme utilizes the inherent correlations among the three RGB channels of a color image and spreads the watermark over all three channels. The experimental results indicate that the proposed scheme has better fidelity and stronger robustness against common image-processing and geometric attacks, and can effectively resist color channel exchange attacks. Compared with existing schemes, the presented scheme achieves better performance.

1. Introduction

The modification of digital multimedia content has become easier, especially for images, so the issue of image copyright protection has attracted increasing attention. Image watermarking technology aims to provide a reliable way to alleviate this intellectual property management problem. A robust watermarking method can protect the copyright of an image and has two basic characteristics, namely robustness and fidelity. Since these two characteristics are contradictory, a good robust watermarking method must balance robustness against fidelity.
Robust watermarking technology is divided into spatial-domain and frequency-domain methods. Compared with the spatial domain, frequency-domain watermarking can achieve much greater robustness without a large amount of image distortion. Therefore, the present study focuses on image watermarking schemes in the frequency domain.
Many frequency-domain techniques have been presented for robust watermarking, such as the discrete wavelet transform [1,2], the discrete Fourier transform [3], the discrete cosine transform [4,5], the quaternion discrete Fourier transform [6,7,8], and the quaternion Hadamard transform [9].
Guan et al. [10] proposed a watermarking method that embeds a watermark into two-level DCT coefficients. Li et al. [11] developed a robust watermarking scheme in the wavelet domain. Because these two single-transform methods do not exploit the inherent correlations in the frequency domain, hybrid-transform watermarking schemes can achieve better robustness and fidelity, and many image-watermarking techniques combining several transforms have been proposed [12,13,14]. In [12], a new method was presented that hybridizes SVD and the integer wavelet transform to embed a watermark. Rastegar et al. [13] proposed a mixed watermarking method based on SVD and the finite Radon transform (FRAT). Lai and Tsai [14] suggested an image-watermarking method that blends the discrete wavelet transform and singular value decomposition, embedding a watermark in the singular values of the host image's DWT sub-bands.
From the above discussion, most image watermarking methods embed a watermark in a gray image or a single channel. With the wide application of color images, watermarking schemes for color images have been proposed [2,6,8,15,16,17,18,19,20,21]. Chou and Liu [2] proposed a color-image watermarking algorithm based on the wavelet transform and significant differences, embedding the maximum amount of watermark information under imperceptible distortion. Chen et al. [6] modulated at least one component of the QDFT coefficients and propagated the watermark to two or three RGB color channels, using the characteristics of QDFT to avoid watermark energy loss. A color image-watermarking algorithm combining QDFT, LS-SVM, and pseudo-Zernike moments was proposed by Wang et al. [8]; in that scheme, the quaternion Fourier transform allows the watermark energy to be propagated to all channels simultaneously to improve robustness. Ma et al. [15] developed a local quaternion Fourier transform for color image watermarking; the method used the properties of the quaternion Fourier transform to improve watermark invisibility and employed an invariant feature transform to resist geometric attacks. Rouis et al. [16] proposed a method for image tampering detection whose underlying hashing process is based on the estimation of the image gradient, and compared its performance with a QDFT-based method. Yang et al. [17] introduced a robust digital watermarking algorithm with geometric correction using quaternion exponential moments. Li et al. [18] developed a color image-watermarking method based on QDFT and quaternion QR decomposition: the host image is decomposed by QDFT and quaternion QR, and a high-entropy block of the scalar part of the quaternion QR matrix is selected to embed the watermark.
Over the last decade, various image-watermarking schemes based on tensor decomposition have been proposed [19,20,21]. Tensor decomposition can maintain the internal structure of a digital image and avoids the loss of important image information. Xu et al. [19] proposed a blind watermarking scheme for color images in the tensor domain; the scheme effectively considers the overall characteristics of color images and propagates the watermark information to the three channels of the color image through tensor decomposition. Feng et al. [20] used Tucker decomposition to decompose the luminance component and then used adaptive lattice quantization index modulation to embed the watermark in the tensor domain. Fang et al. [21] offered a watermarking scheme based on Tucker decomposition that transforms a multi-spectral image and embeds the watermark into an element of the last frontal slice of the core tensor.
Among the above methods, some embed a watermark in a single transform domain [6,8,15,17,19,20]. In addition, the method in [2] does not take efficient account of the correlation of the frequency components. The scheme in [18] chooses high-entropy blocks to embed the watermark, but such blocks are unstable, which makes the watermark more vulnerable to attack. In short, none of these methods takes full advantage of the three-dimensional (3D) imaginary components of QDFT, and the above methods suffer from watermark energy loss [6].
Based on [18,19], the present paper proposes a hybrid-transform color image watermarking scheme based on QDFT and tensor decomposition. The scheme considers all of the color image channels to improve attack resistance, further decentralizes the distribution of the watermark, and thereby enhances robustness. Furthermore, an appropriate embedding strength is used so that the two conflicting factors, robustness and fidelity, are both satisfied. The main contributions of the paper are as follows:
  • This paper blends QDFT with tensor decomposition (TD) and processes the color image as a whole to embed a watermark.
  • The proposed scheme synchronously spreads the watermark over the three RGB channels and thereby enhances robustness.
  • This paper proves the correlation of the three imaginary components of QDFT and uses these components to construct a tensor.
The rest of this paper is organized as follows. The relevant techniques are described in Section 2. The watermark embedding and extraction processes are presented in Section 3. The experimental results are provided in Section 4. Finally, the paper is summarized in Section 5.

2. Relevant Techniques

In this section, tensor decomposition, quaternion discrete Fourier transform, pseudo-Zernike moments, and multiple output LS-SVR are introduced.

2.1. Tensor Decomposition (TD)

Due to the application requirements of high-order data, tensor decomposition (TD) is used as a tool to analyse high-order data. TD is a high-order extension of matrix decomposition in multi-linear algebra and is an efficient technique used in many fields [22,23]. CANDECOMP/PARAFAC (CP) and Tucker decomposition are two common ways to implement tensor decomposition; the well-known Tucker decomposition is selected in this paper.
Tucker decomposition can be considered a higher-order extension of the matrix singular value decomposition (SVD). It was introduced by Tucker [24] and has been successfully applied to data dimensionality reduction, feature extraction, tensor subspace learning, face image recognition [25], data compression, image quality evaluation [26], noise reduction [27], and data analysis [28]. In the present paper, Tucker decomposition is used to construct the watermark embedding domain.
When a third-order tensor $T \in \mathbb{R}^{M \times N \times O}$ is decomposed by Tucker decomposition, three orthogonal factor matrices $U_1 \in \mathbb{R}^{M \times P}$, $U_2 \in \mathbb{R}^{N \times Q}$, $U_3 \in \mathbb{R}^{O \times R}$ and a core tensor $K \in \mathbb{R}^{P \times Q \times R}$ are obtained [24]. Figure 1 shows the Tucker decomposition of a third-order tensor T.
Each element in the core tensor K represents the degree of interaction between different slices. The Tucker decomposition [22] is defined in Equation (1).
$T \approx K \times_1 U_1 \times_2 U_2 \times_3 U_3 = [\![ K; U_1, U_2, U_3 ]\!]. \quad (1)$
For each element of the original tensor T, the Tucker decomposition [22] is expressed in Equation (2).
$T \approx \sum_{p=1}^{P} \sum_{q=1}^{Q} \sum_{r=1}^{R} k_{pqr}\, u_p^{(1)} \circ u_q^{(2)} \circ u_r^{(3)}. \quad (2)$
where P, Q, and R are the numbers of column vectors of the factor matrices $U_1$, $U_2$, and $U_3$, respectively, and are generally less than or equal to M, N, and O. The symbol '$\circ$' denotes the outer product between two vectors, and '$[\![\ \cdot\ ]\!]$' is the concise representation of the Tucker decomposition given in [22]. The core tensor K has the same order as the tensor T, and it is expressed in Equation (3).
$K = T \times_1 U_1^{\mathsf{T}} \times_2 U_2^{\mathsf{T}} \times_3 U_3^{\mathsf{T}}. \quad (3)$
K has full orthogonality; that is, any two slices of the core tensor K are orthogonal to each other, and the inner product between the two slices is zero.
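To make the decomposition concrete, the following is a minimal NumPy sketch of Tucker decomposition computed via the truncated higher-order SVD (HOSVD). The paper relies on Tucker decomposition [22,24] but does not specify a particular algorithm, so the helper names (unfold, fold, mode_dot, tucker_hosvd) and the HOSVD choice are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a target tensor shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_dot(T, U, mode):
    """Mode-n product T x_n U."""
    shape = T.shape[:mode] + (U.shape[0],) + T.shape[mode + 1:]
    return fold(U @ unfold(T, mode), mode, shape)

def tucker_hosvd(T, ranks):
    """Return a core tensor K and orthogonal factors (U1, U2, U3) such that
    T ~ K x_1 U1 x_2 U2 x_3 U3 (Eq. (1)), with K = T x_n Un^T (Eq. (3))."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    K = T
    for n, Un in enumerate(U):
        K = mode_dot(K, Un.T, n)
    return K, U

# Sanity check: with full ranks the decomposition reconstructs T exactly.
T = np.random.rand(8, 8, 3)
K, (U1, U2, U3) = tucker_hosvd(T, (8, 8, 3))
T_rec = mode_dot(mode_dot(mode_dot(K, U1, 0), U2, 1), U3, 2)
print(np.max(np.abs(T - T_rec)))  # ~1e-15
```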

2.2. Quaternion Discrete Fourier Transform (QDFT)

The quaternion, introduced by Hamilton [29], is a generalization of the complex number. A quaternion [30] can be regarded as a kind of hyper-complex number with one real part and three imaginary parts, and is defined as follows:
ϕ = α + β i + γ j + δ k .
where α , β , γ , and δ are real numbers, i, j, and k are imaginary operators with the following properties:
$i^2 = j^2 = k^2 = i \cdot j \cdot k = -1.$
where '·' denotes the quaternion product, with $i \cdot j = k$, $j \cdot k = i$, $k \cdot i = j$, $j \cdot i = -k$, $k \cdot j = -i$, and $i \cdot k = -j$.
Sangwine [30] was the first to demonstrate formulations of the quaternion discrete Fourier transform (QDFT). Because quaternion multiplication is not commutative, the QDFT is divided into three types, namely the left-side transform $F^L$, the right-side transform $F^R$ [8], and the hybrid transform $F^{LR}$ [30]. The left-side transform $F^L(\lambda, \upsilon)$ has the following form:
$F^L(\lambda, \upsilon) = \frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} e^{-\theta 2\pi \left( \frac{x\lambda}{X} + \frac{y\upsilon}{Y} \right)} f(x, y). \quad (6)$
where f(x, y) is a color image of size X × Y represented in quaternion form as in Equation (8). The inverse QDFT (IQDFT) [8] is defined by
$f(x, y) = \frac{1}{\sqrt{XY}} \sum_{\lambda=0}^{X-1} \sum_{\upsilon=0}^{Y-1} e^{\theta 2\pi \left( \frac{x\lambda}{X} + \frac{y\upsilon}{Y} \right)} F^L(\lambda, \upsilon). \quad (7)$
In these definitions, the quaternion operator is generalized: θ is any unit pure quaternion, with $\theta^2 = -1$. The operators i, j, and k are special cases of θ; in this paper, $\theta = (i + j + k)/\sqrt{3}$.
Color image pixels have three components: R, G, and B. Thus, they can be represented in quaternion form using a pure quaternion. For example, the pixel at coordinates (x, y) in a color image can be represented as follows:
$f(x, y) = R(x, y)\, i + G(x, y)\, j + B(x, y)\, k. \quad (8)$
where R ( x , y ) is the red component, and G ( x , y ) and B ( x , y ) are the green and blue components of a color image, respectively.
Using Equations (6) and (8), we obtain $A(\lambda, \upsilon)$ as the real component and $C(\lambda, \upsilon)$, $D(\lambda, \upsilon)$, and $E(\lambda, \upsilon)$ as the three imaginary components, as shown in Equation (9).
$F^L(\lambda, \upsilon) = A(\lambda, \upsilon) + C(\lambda, \upsilon)\, i + D(\lambda, \upsilon)\, j + E(\lambda, \upsilon)\, k. \quad (9)$
The inverse QDFT (IQDFT) can then be represented as follows:
$f(x, y) = (F^{L})^{-1} = A_{IQDFT} + C_{IQDFT} + D_{IQDFT} + E_{IQDFT}.$
where $P_{IQDFT}$ denotes the real inverse quaternion discrete Fourier transform of the array P, and $(F^{L})^{-1}$ denotes the IQDFT of $F^{L}(\lambda, \upsilon)$.
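To make the transform concrete, below is a minimal NumPy sketch of the left-side QDFT and its inverse for an RGB block with θ = (i + j + k)/√3, obtained by expanding Equations (6) and (8) into three standard complex 2-D DFTs; the resulting component expressions are consistent with those derived later in Section 3.1. The function names (qdft, iqdft) and the 1/√(XY) normalization are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def qdft(rgb):
    """Left-side QDFT of an X x Y x 3 block with theta = (i+j+k)/sqrt(3).
    Returns the real part A and the imaginary parts C, D, E of F^L (Eq. (9)),
    computed from three ordinary complex 2-D DFTs of the channels."""
    X, Y = rgb.shape[:2]
    s = np.sqrt(X * Y)
    Fr, Fg, Fb = (np.fft.fft2(rgb[..., c]) / s for c in range(3))
    r3 = np.sqrt(3.0)
    A = -(Fr.imag + Fg.imag + Fb.imag) / r3
    C = Fr.real + (Fb.imag - Fg.imag) / r3
    D = Fg.real + (Fr.imag - Fb.imag) / r3
    E = Fb.real + (Fg.imag - Fr.imag) / r3
    return A, C, D, E

def iqdft(A, C, D, E):
    """Inverse QDFT; returns the reconstructed R, G, B channels."""
    X, Y = A.shape
    s = np.sqrt(X * Y)
    r3 = np.sqrt(3.0)
    # H_P = (1/sqrt(XY)) * sum_{lambda,upsilon} P * exp(+i*2*pi*(x*lambda/X + y*upsilon/Y))
    Ha, Hc, Hd, He = (np.fft.ifft2(P) * s for P in (A, C, D, E))
    R = Hc.real + (Ha.imag - Hd.imag + He.imag) / r3
    G = Hd.real + (Ha.imag + Hc.imag - He.imag) / r3
    B = He.real + (Ha.imag - Hc.imag + Hd.imag) / r3
    return R, G, B

# Round-trip check on a random 8x8x3 block.
block = np.random.rand(8, 8, 3)
rec = np.dstack(iqdft(*qdft(block)))
print(np.max(np.abs(rec - block)))  # ~1e-15
```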

2.3. Pseudo-Zernike Moment

Pseudo-Zernike moments [31] are very effective orthogonal rotation-invariant moments and are robust image feature descriptors. They have several characteristics: (1) Low redundancy of information expression: since the basis of the pseudo-Zernike moment is an orthogonal polynomial, the extracted features are guaranteed to have small correlation and redundancy. (2) Effective information expression: it has been proven that the set of pseudo-Zernike moments provides a compact, fixed-length, and computationally efficient representation of image content, and only a small fixed number of pseudo-Zernike moments need to be stored to effectively characterize an image. (3) Multilevel representation of information: pseudo-Zernike moments effectively represent the contour of an image; the low-order and middle-order moments describe the overall shape of an image, while the high-order moments describe its details. The pseudo-Zernike moments [32] of order n with repetition m for a two-dimensional continuous function f(x, y) are expressed as follows:
$P_{nm} = \frac{n+1}{\pi} \iint_{x^2 + y^2 \le 1} f(x, y)\, V_{nm}^{*}(x, y)\, dx\, dy = \frac{n+1}{\pi} \int_{0}^{2\pi} \int_{0}^{1} f(p, \mu)\, V_{nm}^{*}(p, \mu)\, dp\, d\mu.$
where $V_{nm}^{*}(x, y)$ is the complex conjugate of $V_{nm}(x, y)$, n is a non-negative integer, and m is an integer such that $|m| \le n$. The variables x and y satisfy $x^2 + y^2 \le 1$, with $p = \sqrt{x^2 + y^2}$ and $\mu = \tan^{-1}(y/x)$. The pseudo-Zernike polynomials [32] $V_{nm}(x, y)$ of order n with repetition m are expressed as follows:
$V_{nm}(x, y) = R_{nm}(x, y)\, e^{jm\mu}.$
where $j = \sqrt{-1}$. The pseudo-Zernike radial polynomial [32] $R_{nm}(x, y)$ is defined as follows:
$R_{nm}(x, y) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s}\, (n-s)!\, (x^2 + y^2)^{(n-2s)/2}}{s!\, \left( \frac{n+|m|}{2} - s \right)!\, \left( \frac{n-|m|}{2} - s \right)!}.$
When f(x, y) is an image of size N × N, the pseudo-Zernike moments [33] are defined as follows:
$P_{nm} = \frac{n+1}{\lambda} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x, y)\, R_{nm}(x, y)\, e^{-jm\mu} = \frac{n+1}{\lambda} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x, y)\, V_{nm}^{*}(p, \mu).$
where λ is the number of pixels in an image that are mapped into the unit circle.
$p = \frac{\sqrt{(2x - N + 1)^2 + (N - 1 - 2y)^2}}{N}.$
$\mu = \tan^{-1}\left( \frac{N - 1 - 2y}{2x - N + 1} \right).$
Figure 2 shows the information expression of pseudo-Zernike moments for an image. It can be seen from the figure that the low-order moments of pseudo-Zernike moments can be used to construct the contour of the image.
Considering global geometric distortions, we select six low-order pseudo-Zernike moments $Z_{n,m}$, namely $Z_{0,0}$, $Z_{2,2}$, $Z_{4,4}$, $Z_{8,8}$, $Z_{9,9}$, and $Z_{11,11}$, to reflect the global information of a digital image. These pseudo-Zernike moments are used as parameters to correct geometric attacks in the watermark extraction process.

2.4. Multiple Output LS-SVR

Xu et al. [34] proposed the MLS–SVR network. Multiple-output regression aims to learn the mapping from a multiple-input feature space to a multiple-output space. Although the standard formulation of least squares support vector regression (LS-SVR) is practically useful, it cannot handle multiple-output situations; multiple independent LS-SVRs are usually trained instead, thereby ignoring the potentially nonlinear cross-correlations between different outputs. To solve this problem, Xu et al. [34] used a multi-task learning method to propose a new machine learning network. The multiple-output function $\Psi(\chi)$ is
$\Psi(\chi) = \Phi\!\left( \sum_{i=1}^{m} \sum_{j=1}^{l} \tau_{i,j} K(\chi, \chi_j),\, 1,\, m \right) + \frac{m}{\lambda} \sum_{j=1}^{l} \tau_{j} K(\chi, \chi_j) + b^{T}.$
where χ is the sample, τ is the Lagrange multiplier, $K(\chi, \chi_j)$ is the kernel function, b is a parameter of the model with $b \in \mathbb{R}$, m is the number of output parameters, l is the number of b, $\lambda \in \mathbb{R}^{+}$ is a positive real regularization parameter, and $\Phi(\cdot)$ is the replicate-matrix function (repmat): B = repmat(A, n) returns an array containing n copies of A in the row and column dimensions, so the size of B is size(A)*n when A is a matrix.
In this paper, the above machine learning model is used for geometric correction. The inputs of the model are six low-order pseudo-Zernike moment features [31], and the outputs are the parameters of the geometric distortion.
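The MLS–SVR formulation above is specific to [34]. As a rough illustration of the role this regressor plays in the scheme (mapping six pseudo-Zernike features to rotation, translation, and scaling parameters), the following sketch uses plain multi-output kernel ridge regression with an RBF kernel as a stand-in; the class name, the kernel width gamma, the regularizer lam, and the toy data are all assumptions, and this is not the MLS–SVR of the equation above.

```python
import numpy as np

class RBFKernelRidge:
    """Multi-output kernel ridge regression (stand-in for MLS-SVR [34])."""
    def __init__(self, gamma=0.1, lam=1e-3):
        self.gamma, self.lam = gamma, lam

    def _kernel(self, A, B):
        # RBF kernel matrix between the rows of A and B.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, Y):
        self.X = X
        K = self._kernel(X, X)
        # Closed-form solution; Y may have several output columns.
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), Y)
        return self

    def predict(self, X_new):
        return self._kernel(X_new, self.X) @ self.alpha

# Toy usage: 114 samples of 6 moment features -> 3 distortion parameters.
X = np.random.rand(114, 6)
Y = np.random.rand(114, 3)
model = RBFKernelRidge().fit(X, Y)
print(model.predict(X[:2]).shape)  # (2, 3)
```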

3. Watermarking in Tensor Domain

To enhance the robustness of the color image watermarking scheme, this paper blends QDFT and TD to embed a watermark. QDFT considers the correlation among the color image channels. Tensor decomposition fully utilizes the correlation among the frequency components, and the watermark is further scattered over the frequency components by the decomposition, so tensor decomposition improves the robustness of the watermarking scheme. The scheme utilizes the overall characteristics of the three RGB channels, which provides better embedding performance than using a single channel or each channel separately; therefore, the scheme is well suited to color image watermarking.
QDFT can process the three channels of the color image as a whole instead of processing them individually, thus avoiding unnecessary distortion and utilizing the inherent correlations among the three channels of the color image. The three imaginary components C, D, and E also have a strong correlation; hence, these three components can be used to construct a tensor T. Figure 10 shows the three imaginary components C, D, and E.
Tucker decomposition can maintain the internal structural relationship of an image. The core tensor obtained by Tucker decomposition represents the main properties of each slice of the original tensor and reflects the correlation among the slices. The core tensor K is a compressed version of the original tensor T. Figure 3 shows the Tucker decomposition flowchart.
We use the method in [19] to embed the watermark in the core tensor K. The maximum value of the core tensor is located in the upper-left corner, at position K(1, 1, 1), as shown in Figure 3. This position remains robust when the image experiences various attacks; therefore, we modify the K(1, 1, 1) coefficient to embed the watermark. The three slices of the core tensor K are shown in Figure 4, where the brighter parts correspond to larger magnitudes. It can be clearly seen that K(1, 1, 1) is larger than the other positions.
The above content briefly introduces the proposed watermarking scheme. The rest of this section covers three topics: the correlation analysis among the three imaginary components of QDFT, the watermark embedding procedure, and the watermark extraction procedure.

3.1. Correlation Analysis among Components of QDFT

A color image decomposed by QDFT yields four frequency components: a real component A and three imaginary components C, D, and E. The three imaginary frequency components have a strong correlation, and this part proves that correlation.
The relationship among the three imaginary frequency components is first analysed theoretically. Most images have a close correlation among the three channels of the RGB color space. The color channels are derived from the same physical model, which means that images not only have similarity among adjacent pixels but also a close correlation among the color channels of each pixel [35,36]. If any channel of the color image (red, green, or blue) replaces another channel, for example using the combination (red, green, green), the reconstructed image is still clear and no blur distortion occurs. This fully shows that a color image has similarities among adjacent pixels and that the three channels of each pixel are closely correlated. Furthermore, the differences between pairs of color channels are almost the same or very close; the results are shown in Figure 5.
C, D, and E each combine the red, green, and blue channels with different coefficients. Substituting Equation (8) into Equation (6) gives:
$F^L(\lambda, \upsilon) = i\,\frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} e^{-\theta 2\pi \left( \frac{x\lambda}{X} + \frac{y\upsilon}{Y} \right)} R(x, y) + j\,\frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} e^{-\theta 2\pi \left( \frac{x\lambda}{X} + \frac{y\upsilon}{Y} \right)} G(x, y) + k\,\frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} e^{-\theta 2\pi \left( \frac{x\lambda}{X} + \frac{y\upsilon}{Y} \right)} B(x, y).$
$F^L(\lambda, \upsilon) = i\left[ \Theta(F_r(\lambda, \upsilon)) + \theta\, \mathrm{I}(F_r(\lambda, \upsilon)) \right] + j\left[ \Theta(F_g(\lambda, \upsilon)) + \theta\, \mathrm{I}(F_g(\lambda, \upsilon)) \right] + k\left[ \Theta(F_b(\lambda, \upsilon)) + \theta\, \mathrm{I}(F_b(\lambda, \upsilon)) \right] = A(\lambda, \upsilon) + C(\lambda, \upsilon)\, i + D(\lambda, \upsilon)\, j + E(\lambda, \upsilon)\, k.$
$A(\lambda, \upsilon) = \frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \frac{1}{\sqrt{3}} \sin(2\pi u) \left( R(x, y) + G(x, y) + B(x, y) \right).$
$C(\lambda, \upsilon) = \frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \left( R(x, y) \cos(2\pi u) + \frac{1}{\sqrt{3}} G(x, y) \sin(2\pi u) - \frac{1}{\sqrt{3}} B(x, y) \sin(2\pi u) \right).$
$D(\lambda, \upsilon) = \frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \left( -\frac{1}{\sqrt{3}} R(x, y) \sin(2\pi u) + G(x, y) \cos(2\pi u) + \frac{1}{\sqrt{3}} B(x, y) \sin(2\pi u) \right).$
$E(\lambda, \upsilon) = \frac{1}{\sqrt{XY}} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \left( \frac{1}{\sqrt{3}} R(x, y) \sin(2\pi u) - \frac{1}{\sqrt{3}} G(x, y) \sin(2\pi u) + B(x, y) \cos(2\pi u) \right).$
where $u = \frac{x\lambda}{X} + \frac{y\upsilon}{Y}$, $\Theta(a + bi) = a$, $\mathrm{I}(a + bi) = b$, $F_r$, $F_g$, and $F_b$ are the standard complex DFTs of the R, G, and B channels, and i, j, and k are orthogonal to each other.
On the other hand, the correlation of the three imaginary components is also demonstrated by their data distribution characteristics. We randomly select an image block $I_r$ of size 16 × 16 in Lena; Table 1 shows the statistical characteristics of the RGB color space and the QDFT frequency space for $I_r$. After QDFT is applied to the image block $I_r$, the distributions of C, D, and E are similar, as shown in Figure 6, where C(:, r), D(:, r), and E(:, r) denote the r-th column of the C, D, and E components, respectively, with 1 ≤ r ≤ 16. It can be found from Table 1 that the maximum value of C is 57,738, and in Figure 6a the maximum value of the first column is also 57,738; D and E can be analysed similarly from Table 1. Furthermore, the results indicate that the correlation among C, D, and E does not change with the size of the image.
All of the above analysis shows that the three imaginary components C, D, and E have a strong correlation, so we can construct a tensor using C, D, and E.

3.2. Procedures of Watermark Embedding

This part mainly introduces the specific process of embedding. Figure 7 shows a flowchart of watermark embedding. The embedding process of watermark information is as follows.
Step 1: Obtain a color image $I_o$ with dimensions X × Y × 3 and divide $I_o$ into non-overlapping blocks of size 8 × 8 × 3. The number of blocks is (X × Y)/(8 × 8).
Step 2: Construct a pure quaternion $f(x, y) = R(x, y) i + G(x, y) j + B(x, y) k$ using the RGB channels of each 8 × 8 × 3 color image block, and perform QDFT on each block to obtain $A(\lambda, \upsilon)$, $C(\lambda, \upsilon)$, $D(\lambda, \upsilon)$, and $E(\lambda, \upsilon)$ by Equation (6).
Step 3 : Use the three Fourier frequency components C ( λ , υ ) , D ( λ , υ ) , and E ( λ , υ ) of each block to construct a third-order tensor T.
Step 4: Apply Tucker decomposition to each tensor T to obtain a core tensor K; the number of core tensors is (X × Y)/(8 × 8).
Step 5: Perform logistic mapping on all the core tensor K blocks. One bit of the watermark $w_o$ is embedded in K(1, 1, 1) of each core tensor, and the odd–even quantization embedding technique is defined as follows:
if K(1, 1, 1) > 0, let $\eta = \mathrm{round}(K(1, 1, 1)/Q)$ and
$K(1, 1, 1) = \begin{cases} K(1, 1, 1), & \text{if } w \ne \mathrm{mod}(\eta, 2), \\ \eta \times Q + 0.6 \times Q, & \text{if } w = \mathrm{mod}(\eta, 2); \end{cases}$
otherwise, let $K(1, 1, 1) = -1 \times K(1, 1, 1)$, $\eta = \mathrm{round}(K(1, 1, 1)/Q)$ and
$K(1, 1, 1) = \begin{cases} -K(1, 1, 1), & \text{if } w \ne \mathrm{mod}(\eta, 2), \\ -(\eta \times Q - 0.6 \times Q), & \text{if } w = \mathrm{mod}(\eta, 2). \end{cases}$
where Q is the quantization step, that is, the watermark embedding strength, r o u n d ( ) is the rounding operation, and m o d ( ) is the modulo operation.
The value of K(1, 1, 1) can be positive or negative. If the traditional odd–even quantization rule is used, the error rate is relatively high: when K(1, 1, 1) < 0, the traditional rule is $K(1, 1, 1) = -1 \times (\eta \times Q - 0.5 \times Q)$, and an error can occur when extracting the watermark. For example, let K(1, 1, 1) = −1000, Q = 23, and w = 1; then $\eta = \mathrm{round}(1000/23) = 43$ and $K(1, 1, 1) = -1 \times (43 \times 23 - 0.5 \times 23) = -977.5$. When extracting the watermark, η = 43, $\mathrm{mod}(\eta, 2) = 1$, and w = 0, which is inconsistent with the embedded bit w = 1. Hence, this paper replaces 0.5 with 0.6 to avoid this error.
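A minimal sketch of this improved odd–even quantization rule, together with the matching parity extraction used in Section 3.3, is given below; MATLAB-style rounding (halves away from zero) and the helper names are assumptions for illustration.

```python
import numpy as np

def round_half_up(x):
    """MATLAB-style rounding: halves are rounded away from zero."""
    return int(np.floor(abs(x) + 0.5)) * (1 if x >= 0 else -1)

def embed_bit(k111, w, Q):
    """Improved odd-even quantization of K(1,1,1): 0.6*Q instead of 0.5*Q,
    so that negative coefficients keep the intended parity after rounding."""
    sign = 1.0 if k111 > 0 else -1.0
    eta = round_half_up(abs(k111) / Q)
    if w != eta % 2:                       # parity already carries the bit
        return k111
    if sign > 0:
        return eta * Q + 0.6 * Q           # re-quantizes to parity eta + 1
    return -(eta * Q - 0.6 * Q)            # re-quantizes to parity eta - 1

def extract_bit(k111, Q):
    """Parity extraction: even eta -> bit 1, odd eta -> bit 0."""
    eta = round_half_up(abs(k111) / Q)
    return 1 if eta % 2 == 0 else 0

# The counter-example above: with 0.6*Q the embedded bit survives extraction.
print(extract_bit(embed_bit(-1000.0, 1, 23), 23))  # -> 1
```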
Step 6: Perform inverse logistic mapping on all the core tensor K blocks containing the watermark, and then reconstruct the tensor T using Equation (1).
Step 7: Obtain the three imaginary components $C(\lambda, \upsilon)$, $D(\lambda, \upsilon)$, and $E(\lambda, \upsilon)$ from T by taking its frontal slices, as in Figure 7, and then construct $F(\lambda, \upsilon) = A(\lambda, \upsilon) + C(\lambda, \upsilon) i + D(\lambda, \upsilon) j + E(\lambda, \upsilon) k$.
Step 8: Perform the inverse QDFT on $F(\lambda, \upsilon)$ using Equation (7) to obtain $f(x, y) = R(x, y) i + G(x, y) j + B(x, y) k$. Finally, construct the watermarked color image using R(x, y), G(x, y), and B(x, y), that is, the three RGB channels with the watermark.
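Putting the steps together, the following sketch embeds one watermark bit per 8 × 8 × 3 block, reusing the qdft/iqdft, tucker_hosvd/mode_dot, and embed_bit helpers sketched earlier in this paper. The logistic mapping of the core-tensor blocks (Steps 5 and 6) and any key handling are omitted, so this is an illustrative outline of the embedding procedure, not the authors' implementation.

```python
import numpy as np

def embed_watermark(img, wbits, Q):
    """Embed one bit per 8x8x3 block; assumes X and Y are divisible by 8
    and len(wbits) == (X*Y)/(8*8)."""
    out = img.astype(float).copy()
    X, Y = img.shape[:2]
    idx = 0
    for bx in range(0, X, 8):
        for by in range(0, Y, 8):
            block = out[bx:bx + 8, by:by + 8, :]
            A, C, D, E = qdft(block)                                   # Step 2
            T = np.dstack([C, D, E])                                   # Step 3
            K, (U1, U2, U3) = tucker_hosvd(T, (8, 8, 3))               # Step 4
            K[0, 0, 0] = embed_bit(K[0, 0, 0], wbits[idx], Q)          # Step 5
            Tm = mode_dot(mode_dot(mode_dot(K, U1, 0), U2, 1), U3, 2)  # Step 6
            Cm, Dm, Em = Tm[:, :, 0], Tm[:, :, 1], Tm[:, :, 2]         # Step 7
            out[bx:bx + 8, by:by + 8, :] = np.dstack(iqdft(A, Cm, Dm, Em))  # Step 8
            idx += 1
    return out
```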

3.3. Procedures of Watermark Extraction

This part mainly introduces the specific procedure of watermark extraction, as shown in Figure 8. The watermarked image is geometrically rectified before the watermark is extracted; this geometric correction improves the watermark extraction accuracy, as shown in Table 8. The extraction process is as follows.
Step 1: Obtain the pseudo-Zernike moments of the watermarked image $I_w$ of size X × Y × 3, feed the six low-order moment features into the trained MLS–SVR machine learning network to correct the geometric distortion, and obtain the corrected watermarked image $I_w$.
Step 2: Divide the corrected watermarked image $I_w$ into blocks of size 8 × 8 × 3; the number of blocks is (X × Y)/(8 × 8).
Step 3 : Construct a pure quaternion f w ( x , y ) = R w ( x , y ) i + G w ( x , y ) j + B w ( x , y ) k using the three RGB channels of the color image block. We can obtain a real component A w ( λ , υ ) and three imaginary components C w ( λ , υ ) , D w ( λ , υ ) , and E w ( λ , υ ) of each color block by QDFT.
Step 4 : Construct a third-order tensor T w with dimensions of 8 × 8 × 3 using C w ( λ , υ ) , D w ( λ , υ ) , and E w ( λ , υ ) of each color block.
Step 5: Apply Tucker decomposition to $T_w$ to obtain the core tensor $K_w$.
Step 6: Perform logistic mapping on all core tensor $K_w$ blocks; then the odd–even quantization technique is used to extract one watermark bit from position $K_w(1, 1, 1)$ of each $K_w$. The specific extraction rules are as follows:
$K_w(1, 1, 1) = |K_w(1, 1, 1)|, \quad \eta = \mathrm{round}(K_w(1, 1, 1)/Q),$
$w = \begin{cases} 1, & \text{if } \mathrm{mod}(\eta, 2) = 0, \\ 0, & \text{if } \mathrm{mod}(\eta, 2) = 1. \end{cases}$
where $|\cdot|$ denotes the absolute value function.
Step 7 : Obtain complete watermark w e through the odd–even quantization rule.
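Correspondingly, a sketch of the block-wise extraction (Steps 2 to 7) is shown below, again reusing the qdft, tucker_hosvd, and extract_bit helpers from the earlier sketches. The geometric correction of Step 1 and the logistic mapping are assumed to have been applied already, so this is only an illustrative outline.

```python
import numpy as np

def extract_watermark(img, Q):
    """Extract one bit per 8x8x3 block from a (corrected) watermarked image."""
    X, Y = img.shape[:2]
    bits = []
    for bx in range(0, X, 8):
        for by in range(0, Y, 8):
            A, C, D, E = qdft(img[bx:bx + 8, by:by + 8, :].astype(float))
            K, _ = tucker_hosvd(np.dstack([C, D, E]), (8, 8, 3))
            bits.append(extract_bit(K[0, 0, 0], Q))
    return np.array(bits)
```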

4. Experimental Results and Discussions

This paper uses the peak signal-to-noise ratio (PSNR) [37], normalized correlation coefficient (NC) [9], and bit error rate (BER) [19] to evaluate the imperceptibility and robustness of the watermarking scheme. PSNR is used to describe fidelity, and NC is used to describe watermarking robustness. MSE [19] is the mean square error of the data, which is expressed below:
$MSE = \frac{1}{X \times Y} \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} \left( I_o(x, y) - I_w(x, y) \right)^2.$
The P S N R is defined as follows:
$PSNR = 10 \log_{10} \frac{255^2}{MSE}.$
where $I_o(x, y)$ is the host image and $I_w(x, y)$ is the watermarked image. In addition, the bit error rate (BER) and normalized correlation (NC) are used to evaluate the robustness of the watermark; BER and NC are defined as follows:
$BER = \frac{\sum_{h=1}^{H} \sum_{g=1}^{G} w_e(h, g) \oplus w_o(h, g)}{H \times G}.$
$NC = \frac{\sum_{h=1}^{H} \sum_{g=1}^{G} w_e(h, g) \times w_o(h, g)}{\sqrt{\sum_{h=1}^{H} \sum_{g=1}^{G} w_e(h, g)^2}\, \sqrt{\sum_{h=1}^{H} \sum_{g=1}^{G} w_o(h, g)^2}}.$
where $w_o(h, g)$ is the original watermark, $w_e(h, g)$ is the extracted watermark, and H × G is the size of the watermark.
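A short NumPy sketch of these three metrics (assuming 8-bit images and binary watermarks stored as 0/1 arrays) is given below for reference.

```python
import numpy as np

def psnr(original, marked):
    """Peak signal-to-noise ratio for 8-bit images."""
    mse = np.mean((original.astype(float) - marked.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def ber(w_extracted, w_original):
    """Bit error rate: fraction of differing watermark bits."""
    return np.mean(w_extracted != w_original)

def nc(w_extracted, w_original):
    """Normalized correlation between extracted and original watermarks."""
    num = np.sum(w_extracted * w_original)
    den = np.sqrt(np.sum(w_extracted ** 2)) * np.sqrt(np.sum(w_original ** 2))
    return num / den
```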
This section illustrates the performance of the scheme through a series of experiments; only representative experimental results are given herein. The five parts are: the QDFT and inverse QDFT transforms, the geometric expression of pseudo-Zernike moments, the optimal watermark strength, the comparison with existing schemes, and the forecasting performance of the MLS–SVR network.

4.1. QDFT Analysis

A color image can be transformed into four real-valued components A, C, D, and E using the QDFT. Figure 9 shows a 24-bit color image and its red, green, and blue channels. The results of applying the quaternion discrete Fourier transform to the 24-bit color image are shown in Figure 10.
After the inverse QDFT using Equation (7), $A_{IQDFT}$ is negligible and can be approximately regarded as 0; this result also conforms to Equation (8). When the input is a pure quaternion, the result of the IQDFT can also be approximately regarded as a pure quaternion. When reconstructing the image, $C_{IQDFT}$, $D_{IQDFT}$, and $E_{IQDFT}$ serve as the red R′, green G′, and blue B′ channels, respectively. The experiment verifies that the difference between the reconstructed image and the original image is approximately $10^{-10}$; this difference is very small, so the image is almost completely restored. The reconstructed red channel R′, green channel G′, and blue channel B′ are shown in Figure 11.

4.2. Geometric Characteristics of Pseudo-Zernike Moments

An image I of size 512 × 512 is selected to compute the pseudo-Zernike moment features [31]. The two parameters n and m of the pseudo-Zernike moments are the order and repetition of the orthogonal polynomial. The values of (n, m) are (0, 0), (2, 2), (4, 4), (8, 8), (9, 9), and (11, 11). Three kinds of attacks are performed on the image I: translation, scaling, and rotation. Specifically, the pseudo-Zernike moment features of five images are shown in Figure 12: the original image $I_o$, the image shifted twenty pixels to the left ($I_{20}$), a two-times magnified image ($I_2$), the image rotated thirty degrees counter-clockwise ($I_{30}$), and the image rotated thirty degrees clockwise ($I_{-30}$).
When the image is subjected to different geometric attacks, the differences in pseudo-Zernike moments are relatively obvious. Hence, pseudo-Zernike moments can remarkably represent the global geometric features of the image.

4.3. Choose Watermark Embedding Strength

To balance robustness and fidelity, this part discusses the embedding strength Q. We set the watermark embedding strength Q (10, 1, 500).
Figure 13 shows that as the value of Q increases, the PSNR decreases and the NC increases, indicating that the robustness of the watermark is improved whereas the image quality deteriorates. When the value of Q reaches 410, NC is close to 1, and the watermark can be completely extracted when no attack is applied. To balance robustness and fidelity, Q = 1160 is chosen, giving PSNR = 40.413. Figure 14 shows the PSNR of the eight watermarked images, namely "Lena", "Castle", "Baboon", "Barbara", "Boats", "Fruit", "Airplane", and "Houses", together with the watermark $w_o$.
It can be seen from Figure 14 that the PSNR is larger than 40 dB, which indicates that our scheme has good fidelity.
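A sweep over Q of the kind described in this part could be scripted as follows, reusing the embed_watermark, extract_watermark, psnr, and ber helpers sketched earlier; image loading and the attack simulations are omitted, and the names are illustrative.

```python
def sweep_strength(img, wbits, q_values):
    """PSNR of the watermarked image and attack-free extraction BER
    for each candidate embedding strength Q."""
    results = []
    for Q in q_values:
        marked = embed_watermark(img, wbits, Q)
        results.append((Q, psnr(img, marked),
                        ber(extract_watermark(marked, Q), wbits)))
    return results

# e.g. sweep_strength(host, wbits, range(10, 501, 10)) for a 512x512x3 host
# image and a 64x64 = 4096-bit watermark.
```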

4.4. Comparison with Existing Schemes

To further describe the performance of the proposed color image watermarking scheme, we compare it with existing schemes [2,6,8,18,19]. The results are shown in Table 2, Table 3 and Table 4. Because the QDFT and TD hybrid transform allows the watermark energy to propagate synchronously in the three color image channels, the watermark can still be extracted when one channel is replaced by another channel of the color image. Hence, we test the effect of re-composing the RGB channels, which is regarded as a special attack in this paper; the specific experimental results are shown in Table 5. Beyond that, this part also conducts attack experiments with attack types including noise, filtering, geometric, compression, and blurring attacks. The proposed scheme is very robust against noise, filtering, compression, blurring, and geometric attacks, and effectively resists color channel exchange attacks.

4.5. Forecasting Performance of MLS–SVR

To train the MLS–SVR model, we use the six low-order pseudo-Zernike moment features as the input parameters [38,39] and the scaling, rotation, and translation parameters of the geometrically attacked image as the output parameters. The experiment includes 114 training samples and 30 test samples. The training prediction errors for scaling, rotation, and translation are 0.0069, 0.0052, and 0.0066, respectively. Table 6 shows the pseudo-Zernike moments of five random images from the training samples. The forecasting results of the MLS–SVR are shown in Table 7.
The experimental results show that the prediction accuracy of the MLS–SVR network remains relatively high. The corrected watermarked image improves the accuracy of watermark extraction. When the watermarked image is subjected to rotation, translation, and scaling attacks and then corrected, the watermark extraction bit error rates are as shown in Table 8. It can be seen from Table 8 that the BER is very small, which indicates that the watermark can be almost completely extracted after correction.

5. Conclusions

In this paper, we propose a color image watermarking scheme based on QDFT and TD. In our scheme, the watermark is not embedded directly in the QDFT coefficients but rather in an element of the TD domain. The scheme fully considers the overall characteristics of a color image and fully utilizes the correlation of the QDFT components to construct the tensor. The hybrid QDFT and TD transform provides better performance than a single transform, has better fidelity, and is more appropriate for color images. The hybrid transform allows the watermark energy to propagate synchronously to the three RGB channels rather than to one channel; hence, the robustness of the watermarking scheme is greatly improved, and higher-precision color image information can be maintained. Beyond that, this paper uses the MLS–SVR network and pseudo-Zernike moment features to rectify geometric attacks and improve the accuracy of extraction. Moreover, after analyzing the characteristics of the rounding operation, this paper provides an improved odd–even quantization embedding rule, which improves the accuracy of watermark extraction. Our scheme is also resistant to a specific attack: when one channel is substituted by another channel, the watermark can still be almost completely extracted. However, because the paper divides the RGB channels into 8 × 8 × 3 blocks, the scheme cannot resist cropping attacks, and image processing operations [40,41] could affect the accuracy of watermark extraction.
In future work, we hope to make the scheme resistant to cropping attacks and to use fuzzy image preprocessing to further improve accuracy.

Author Contributions

Conceptualization, R.B. and S.Z.; methodology, R.B.; software, R.B.; validation, R.B., S.Z. and L.L.; formal analysis, R.B.; investigation, R.B.; resources, R.B.; data curation, R.B.; writing—original draft preparation, R.B.; writing—review and editing, C.-C.C.; visualization, R.B.; supervision, R.B.; project administration, S.Z.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by National Natural Science Foundation of China (No. 61370218, No. 61971247), Public Welfare Technology and Industry Project of Zhejiang Provincial Science Technology Department (No. LGG19F0-2016).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Thank you to the reviewers who reviewed this paper and the MDPI editor who edited it professionally.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
QDFT: Quaternion discrete Fourier transform
TD: Tensor decomposition
MLS–SVR: Multiple output least squares support vector regression
DCT: Discrete cosine transform
SVD: Singular value decomposition
DWT: Discrete wavelet transformation
FRAT: Finite radon transform
PSNR: Peak signal to noise ratio
NC: Normalized correlation coefficient
BER: Bit error rate
QR: Quadrature rectangle decomposition

References

  1. Tan, Y.; Qin, J.; Xiang, X.; Ma, W.; Pan, W.; Xiong, N.N. A Robust Watermarking Scheme in YCbCr Color Space Based on Channel Coding. IEEE Access 2019, 7, 25026–25036. [Google Scholar]
  2. Chou, C.; Liu, K. A Perceptually Tuned Watermarking Scheme for Color Images. IEEE Trans. Image Process. 2010, 19, 2966–2982. [Google Scholar] [CrossRef]
  3. Tsui, T.K.; Zhang, X.; Androutsos, D. Color Image Watermarking Using the Spatio-Chromatic Fourier Transform. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing, Toulouse, France, 14–19 May 2006. [Google Scholar]
  4. Loan, N.A.; Hurrah, N.N.; Parah, S.A.; Lee, J.W.; Sheikh, J.A.; Bhat, G.M. Secure and Robust Digital Image Watermarking Using Coefficient Differencing and Chaotic Encryption. IEEE Access 2018, 6, 19876–19897. [Google Scholar] [CrossRef]
  5. Barni, M.; Bartolini, F.; Piva, A. Multichannel watermarking of color images. IEEE Trans. Circuits Syst. Video Technol. 2002, 12, 142–156. [Google Scholar] [CrossRef]
  6. Chen, B.; Coatrieux, G.; Chen, G.; Sun, X.; Coatrieux, J.L.; Shu, H. Full 4-D quaternion discrete Fourier transform based watermarking for color images. Digit. Signal Process. 2014, 28, 106–119. [Google Scholar] [CrossRef] [Green Version]
  7. Wang, C.; Wang, X.; Zhang, C.; Xia, Z. Geometric correction based color image watermarking using fuzzy least squares support vector machine and Bessel K form distribution. Signal Process. 2017, 134, 197–208. [Google Scholar] [CrossRef]
  8. Wang, X.; Wang, C.; Yang, H.; Niu, P. A robust blind color image watermarking in quaternion Fourier transform domain. J. Syst. Softw. 2013, 86, 255–277. [Google Scholar] [CrossRef]
  9. Li, J.; U, C.Y.; Gupta, B.; Ren, X. Color image watermarking scheme based on quaternion Hadamard transform and Schur decomposition. Multimedia Tools Appl. 2018, 77, 4545–4561. [Google Scholar] [CrossRef]
  10. Guan, H.; Zeng, Z.; Liu, J.; Zhang, S. A novel robust digital image watermarking algorithm based on two-level DCT. In Proceedings of the 2014 International Conference on Information Science, Electronics and Electrical Engineering, Sapporo City, Japan, 26–28 April 2014; pp. 1804–1809. [Google Scholar]
  11. Li, C.; Song, X.; Liu, Z. A Robust Watermarking Scheme Based on Maximum Wavelet Coefficient Modification and Optimal Threshold Technique. J. Electr. Comput. Eng. 2015, 2015, 370615. [Google Scholar] [CrossRef] [Green Version]
  12. Makbol, N.M.; Khoo, B.E.; Rassem, T.H.; Loukhaoukha, K. A new reliable optimized image watermarking scheme based on the integer wavelet transform and singular value decomposition for copyright protection. Inf. Sci. 2017, 417, 381–400. [Google Scholar] [CrossRef]
  13. Rastegar, S.; Namazi, F.; Yaghmaie, K.; Aliabadian, A. Hybrid watermarking algorithm based on Singular Value Decomposition and Radon transform. Int. J. Electron. Commun. 2011, 65, 658–663. [Google Scholar] [CrossRef]
  14. Lai, C.; Tsai, C. Digital image watermarking using discrete wavelet transform and singular value decomposition. IEEE Trans. Instrum. Meas. 2010, 59, 3060–3063. [Google Scholar] [CrossRef]
  15. Ma, X.; Xu, Y.; Song, L.; Yang, X. Color image watermarking using local quaternion Fourier spectral analysis. In Proceedings of the IEEE International Conference on Multimedia and Expo, Chengdu, China, 14–18 July 2014; pp. 233–236. [Google Scholar]
  16. Rouis, K.; Gomez-Krämer, P.; Coustaty, M. Local Geometry Analysis For Image Tampering Detection. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 2551–2555. [Google Scholar]
  17. Yang, H.; Wang, Y.Z.P.; Wang, X.; Wang, C. A geometric correction based robust color image watermarking scheme using quaternion Exponent moments. Optik 2014, 125, 4456–4469. [Google Scholar] [CrossRef]
  18. Li, M.; Yuan, X.; Chen, H.; Li, J. Quaternion Discrete Fourier Transform-Based Color Image Watermarking Method Using Qurternion QR Decomposition. IEEE Access 2020, 8, 72308–72315. [Google Scholar] [CrossRef]
  19. Xu, H.; Jiang, G.; Yu, M.; Luo, T. A Color Image Watermarking Based on Tensor Analysis. IEEE Access 2018, 6, 51500–51514. [Google Scholar] [CrossRef]
  20. Feng, B.; Lu, W.; Sun, W.; Huang, J.; Shi, Y. Robust image watermarking based on Tucker decomposition and Adaptive-Lattice Quantization Index Modulation. Signal Process. Image Commun. 2016, 41, 1–14. [Google Scholar] [CrossRef]
  21. Hai, F.; Quan, Z.; Kaijia, L. Robust Watermarking Scheme for Multispectral Images Using Discrete Wavelet Transform and Tucker Decomposition. J. Comput. 2013, 8, 2844–2850. [Google Scholar]
  22. Kolda, T.G.; Bader, B.W. Tensor Decompositions and Applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  23. Cao, X.; Wei, X.; Han, Y.; Lin, D. Robust face clustering via tensor decomposition. IEEE Trans. Cybern. 2015, 45, 2546–2557. [Google Scholar] [CrossRef] [PubMed]
  24. Tucker, L.R. Implications of factor analysis of three-way matrices for measurement of change. In Problems in Measuring Change; Harris, C.W., Ed.; University of Wisconsin Press: Madison, WI, USA, 1963; pp. 122–137. [Google Scholar]
  25. Michal, R.P.; Snasel, Z.L. Recognition of Face Images with Noise Based on Tucker Decomposition. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 2649–2653. [Google Scholar]
  26. Cheng, C.; Wang, H. Quality assessment for color images with Tucker decomposition. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 1489–1492. [Google Scholar]
  27. Yazdi, A.K.M.; Asli, A.Z. Noise reduction of hyperspectral images using kernel non-negative tucker decomposition. IEEE J. Sel. Top. Signal Process. 2011, 5, 487–493. [Google Scholar]
  28. Li, L.; Boulware, D. High-order tensor decomposition for large-scale data analysis. In Proceedings of the 2015 IEEE International Congress on Big Data, New York, NY, USA, 27 June–2 July 2015; pp. 665–668. [Google Scholar]
  29. Moxey, C.; Sangwine, S.; Ell, T. Color-grayscale image registration using hypercomplex phase correlation. In Proceedings of the 2002 IEEE International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; Volume 3, pp. 247–250. [Google Scholar]
  30. Ell, T.A.; Sangwine, S.J. Hypercomplex Fourier Transforms of Color Images. IEEE Trans. Image Process. 2007, 16, 22–35. [Google Scholar] [CrossRef] [PubMed]
  31. Teague, M. Image analysis via the general theory of moments. J. Opt. Soc. Am. 1980, 70, 920–930. [Google Scholar] [CrossRef]
  32. Yap, P.; Jiang, X.; Kot, A.C. Two-Dimensional Polar Harmonic Transforms for Invariant Image Representation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1259–1270. [Google Scholar]
  33. Teh, C.; Chin, R. On Image Analysis by the Method of Moments. IEEE Trans. Pattern Anal. Mach. Intell. 1988, 10, 496–513. [Google Scholar] [CrossRef]
  34. Xu, S.; An, X.; Qiao, X.; Zhu, L.; Li, L. Multi-output least-squares support vector regression machines. Pattern Recognit. Lett. 2013, 34, 1078–1084. [Google Scholar] [CrossRef]
  35. Comon, P. Independent component analysis, A new concept? Signal Process. 1994, 36, 287–314. [Google Scholar] [CrossRef]
  36. Kim, S.; Cho, N.I. Hierarchical Prediction and Context Adaptive Coding for Lossless Color Image Compression. IEEE Trans. Image Process. 2014, 23, 445–449. [Google Scholar] [CrossRef] [PubMed]
  37. Chang, C.; Li, C.; Shi, Y. Privacy-Aware Reversible Watermarking in Cloud Computing Environments. IEEE Access 2018, 6, 70720–70733. [Google Scholar] [CrossRef]
  38. Chang, C. Adversarial learning for invertible steganography. IEEE Access 2020, 8, 198425–198435. [Google Scholar] [CrossRef]
  39. Chang, C. Neural Reversible Steganography with Long Short-Term Memory. Secur. Commun. Netw. 2021, 2021, 5580272. [Google Scholar] [CrossRef]
  40. Caponetti, L.; Castellano, G. Fuzzy Image Processing; Springer International Publishing: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  41. Chang, C.C. Cryptospace Invertible Steganography with Conditional Generative Adversarial Networks. Secur. Commun. Netw. 2021, 2021, 5538720. [Google Scholar] [CrossRef]
Figure 1. Tucker decomposition: a third-order tensor T decomposed by Tucker decomposition can obtain three orthogonal factor matrices U 1 , U 2 , U 3 , and a core tensor K.
Figure 2. Information expression of pseudo-Zernike moments for images. (a–d) are the original images; (â–d̂) are the reconstructed images of (a–d) using pseudo-Zernike moments, respectively.
Figure 3. Flowchart for Tucker decomposition.
Figure 4. Tensor’s slices map.
Figure 5. RGB channels combination and the difference between channels.
Figure 6. Distribution of the three imaginary components. (a) The value distribution of component C for the 16 × 16 image block $I_r$. (b) The value distribution of component D for the 16 × 16 image block $I_r$. (c) The value distribution of component E for the 16 × 16 image block $I_r$.
Figure 7. Flowchart for watermark embedding.
Figure 8. Flowchart for extracting watermark.
Figure 9. The color image Lena and its red channel, green channel, and blue channel.
Figure 10. The QDFT results of the real component A, the imaginary component C, the imaginary component D, and the imaginary component E.
Figure 11. The IQDFT results of the real component A I Q D F T , red channel R , green channel G , and blue channel B .
Figure 12. Pseudo-Zernike moments expression.
Figure 13. Distribution of PSNR and NC.
Figure 14. PSNR of watermarked images.
Table 1. Distribution of different space components.
Different Space | Component | Max Value | Min Value | Standard Deviation | Mean Value
RGB color space | Red | 230 | 221 | 1.8845 | 225.5391
RGB color space | Green | 141 | 119 | 3.9441 | 132.3320
RGB color space | Blue | 133 | 94 | 8.4950 | 113.4961
QDFT frequency space | C | 57,738 | −308.2789 | 3608.9226 |
QDFT frequency space | D | 33,877 | −303.9132 | 2118.4137 |
QDFT frequency space | E | 29,055 | −190.0051 | 1817.4125 |
Table 2. Comparison of proposed scheme with existing schemes in the field of imperceptibility.
Schemes | DWT [2] | QDFT [6] | QDFT [8] | QDFT + QQR [18] | Tensor [19] | QDFT + Tensor
PSNR | 40.02 | 37.717 | 40.24 | 40 | 39 | 40.413
Table 3. BER under the image processing attacks and geometric attacks.
Schemes | Cropping | Low-Pass Filtering | Noise Adding | Median Filtering
DWT [2] | 0.0687 | 0.041 | 0.2895 | 0.0592
QDFT + Tensor | 0.04 | 0.0148 | 0.2307 | 0.0546

Schemes | Histogram Equalization | Average Filtering | Gaussian Noise | Median Filtering | Salt Peppers | JPEG (50) | Gaussian Filtering
QDFT [8] | 0.0103 | 0.0237 | 0.0522 | 0.0134 | 0.0146 | 0.0283 | 0
QDFT + Tensor | 0.01015 | 0.0208 | 0.0515 | 0.0122 | 0.011 | 0.06 | 0.0113

Schemes | Motion Blur | Average Filtering | Gaussian Noise | Median Filtering | Salt Peppers | JPEG (70) | Gaussian Blur
Tensor [19] | 0.103 | 0.0945 | 0.1912 | 0.0122 | 0.2532 | 0.2021 | 0.0557
QDFT + Tensor | 0.0376 | 0.0934 | 0.0386 | 0.0549 | 0.011 | 0.0596 | 0.0098
Table 4. NC under the image processing attacks.
Schemes | Gaussian Filter | Motion Blur | Average Filtering | Gaussian Noise | Median Filtering | Salt Peppers | JPEG (60)
QDFT [6] | 0.963 | 0.986 | 0.94 | 0.948 | 0.955 | 0.916 | 0.932
QDFT + Tensor | 0.8487 | 0.9865 | 0.9443 | 0.9555 | 0.9552 | 0.9187 | 0.9412

Schemes | Gaussian Filter | Average Filtering | Gaussian Noise | Median Filtering | Salt Peppers | JPEG (90)
QDFT + QQR [18] | 0.9473 | 0.9766 | 0.8789 | 0.8811 | 0.9396 | 0.99955
QDFT + Tensor | 0.9487 | 0.9843 | 0.9535 | 0.9052 | 0.9487 | 0.9806
Table 5. Attack performance.
Attacks | NC | Extraction Watermark
Median filter (3 × 3) | 0.9831 | (image)
Salt peppers (0.01) | 0.8187 | (image)
JPEG (90) | 0.9806 | (image)
Motion blur (0.01) | 0.9805 | (image)
Gaussian noise (0.01) | 0.9522 | (image)
Gaussian filter (3 × 3) | 0.8487 | (image)
Average filter (3 × 3) | 0.9443 | (image)

Channel Combination | NC | Extraction Watermark
(combination 1, shown as image) | 0.9531 | (image)
(combination 2, shown as image) | 0.9687 | (image)
(combination 3, shown as image) | 0.9692 | (image)
(combination 4, shown as image) | 0.9499 | (image)
(combination 5, shown as image) | 0.9427 | (image)
(combination 6, shown as image) | 0.9478 | (image)
Table 6. The low-order pseudo-Zernike moments among different images.
Images | (0,0) | (2,2) | (4,4) | (8,8) | (9,9) | (11,11)
Image 1 | 118.470609 | 10.6300580 | 76.520395 | 10.98115 | 9.578474 | 7.771317
Image 2 | 108.780323 | 3.0475596 | 65.046498 | 19.59498 | 10.36659 | 6.043941
Image 3 | 108.975487 | 4.1593135 | 95.991258 | 19.21224 | 11.81801 | 5.619767
Image 4 | 112.288917 | 4.1593135 | 95.991258 | 11.92792 | 11.70199 | 5.490806
Image 5 | 83.898996 | 123.2802994 | 78.259456 | 7.213874 | 8.284153 | 1.593389
Table 7. The MLS–SVR prediction performance for test image.
Rotation | 11 | 16 | 21 | 24 | 25 | 33 | 38 | 45
Prediction | 12.9 | 16.09 | 21.87 | 24.65 | 24.44 | 32.36 | 38.82 | 45.16
Translation | 15 | 16 | 17 | 18 | 24 | 27 | 46 | 56
Prediction | 13.39 | 16.45 | 16.22 | 17.43 | 24.28 | 28.01 | 44.71 | 55.95
Scaling | 0.2 | 0.5 | 0.9 | 1.2 | 1.5 | 1.6 | 1.8 | 2
Prediction | 0.18 | 0.51 | 1.11 | 1.31 | 1.42 | 1.49 | 1.77 | 1.89
Table 8. The BER under geometric transformation correction.
Geometric Attacks | Rotation | Translation | Scaling
Average BER after correction | 0.0902 | 0.01221 | 0.0364
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
