Article

Vortex Beam Transmission Compensation in Atmospheric Turbulence Using CycleGAN

1 School of Electronic Engineering, Xidian University, Xi'an 710071, China
2 School of Computer Science, Xi'an Shiyou University, Xi'an 710065, China
3 School of Physics, Xidian University, Xi'an 710071, China
* Author to whom correspondence should be addressed.
Photonics 2023, 10(11), 1182; https://doi.org/10.3390/photonics10111182
Submission received: 30 August 2023 / Revised: 20 October 2023 / Accepted: 22 October 2023 / Published: 24 October 2023

Abstract

To improve the robustness of vortex beam transmission and detection under atmospheric turbulence and to guarantee accurate recognition of orbital angular momentum (OAM), we present an end-to-end dynamic compensation technique for vortex beams using an improved cycle-consistent generative adversarial network (CycleGAN). This approach transforms the problem of vortex beam distortion compensation into one of image translation. The Pix2pix and CycleGAN models were extended with a structural similarity loss function to constrain turbulence distortion compensation in luminance, contrast and structure. Experiments were designed to evaluate the compensation performance with subjective and objective indicators. The simulation results demonstrate that the compensated OAM intensity map is very similar to that of the target OAM beam, and the mean structural similarity is close to 1. The recognition accuracy of the OAM is improved by 4.4% compared with no distortion compensation, demonstrating that the improved CycleGAN-based compensation scheme can guarantee excellent detection accuracy without reconstructing the wavefront, thereby saving optical hardware. The method can be implemented in real-time optical communications in atmospheric turbulence environments.

1. Introduction

It is well known that an electromagnetic wave carries energy, momentum and angular momentum. The angular momentum has a spin part associated with polarization and an orbital part associated with the spatial distribution of the field. Existing information modulation in communications is carried out mainly in the time, frequency and polarization domains; vortex beams carrying orbital angular momentum (OAM) provide a new dimension for communication [1,2,3,4,5]. The OAM has an infinite number of eigenmodes, theoretically allowing the construction of an infinite-dimensional Hilbert space, which greatly increases the capacity of communication systems [6]. OAM multiplexing can greatly broaden the bandwidth of wireless communication and thus improve spectrum utilization [7].
Unfortunately, OAM suffers heavy distortion from atmospheric turbulence (AT) when used in free-space optical communication [8,9,10,11,12]. The AT disturbs the wavefront phase, damages the orthogonality among OAM modes and further degrades the communication performance and detection accuracy. To improve the robustness of the vortex beam communication and detection system against AT distortion and to ensure subsequent high-precision OAM mode identification, it is necessary to dynamically compensate the distorted beam.
Vortex beam turbulence distortion compensation methods are mainly divided into those with and without a probe beam and wavefront sensor. In 2007, a Gaussian-mutation genetic algorithm was proposed to correct the wavefront phase of the vortex beam. Skipping reconstruction of the wavefront phase, the scheme directly controls the deformable mirror and iteratively searches for the optimal mirror surface shape for wavefront correction [13]. In [14], a wavefront correction method based on the Shack–Hartmann wavefront sensor was proposed, which improved the purity of the fundamental-mode beam at the receiving end and the accuracy of OAM mode identification. A wavefront correction technique based on the simulated annealing algorithm was proposed in [15], where Zernike polynomials were used to represent the wavefront phase. Ren et al. used the Gerchberg–Saxton (GS) phase retrieval algorithm for wavefront correction at the receiving end of a vortex optical communication system [16]. They also launched a fundamental-mode Gaussian beam as a probe at the transmitting end for the wavefront correction problem [17]. A vortex beam wavefront correction technique based on stochastic parallel gradient descent was investigated in [18]. Fu et al. proposed a wavefront correction technique based on the GS phase retrieval algorithm in the framework of a vortex optical communication system with probe beams, which performs well for probe-beam, single-mode and superposed vortex beam correction [19].
The abovementioned vortex beam wavefront correction methods are almost all based on adaptive optics systems without wavefront detection. Their control algorithms have no learning or memory capability and may fall into local optima. With the development of machine learning, deep learning-based vortex beam wavefront correction has attracted more and more attention. Deep learning-based wavefront compensation does not require a Shack–Hartmann wavefront sensor and is therefore not subject to turbulence intensity constraints. A joint AT detection and adaptive demodulation technique based on the convolutional neural network (CNN) was proposed for OAM-based free-space optical communication [20]. Tian et al. mapped the distributed features learned from the OAM light intensity distribution to the sample label space with a CNN; the Zernike coefficients of orders 2–400 are output and converted into the control signal of the spatial light modulator for correction [21]. Based on the CNN, Liu et al. proposed a scheme to predict the turbulent phase screen by training the mapping between the intensity distribution of the vortex beam and the turbulent phase, which has strong generalization performance [22]. In [23], a CNN-based wavefront reconstruction technique was developed that trains the mapping between the vortex beam intensity map and the first 20 orders of Zernike coefficients. A six-layer CNN model achieving high recognition accuracy for Laguerre–Gaussian (LG) beams under different ATs was designed in [24]. LG beam OAM-mode detection by CNN was studied experimentally in [25]. Zhou et al. realized the identification of OAM in AT based on ShuffleNet V2 for single and multiplexed positive OAM modes [26]. In [27], a vortex phase modulation method was introduced to recognize both integer and fractional topological charges even under strong turbulence and long transmission distances.
The core of existing deep learning-based wavefront correction methods is the wavefront reconstruction algorithm: based on a probe beam, the wavefront is reconstructed by training the mapping between the probe beam intensity map and the turbulent Zernike coefficients. However, CNN-based wavefront reconstruction discards much of the high-frequency information, cannot be applied to non-circular aperture reconstruction and has limited reconstruction capability. As a hotspot in machine learning, the generative adversarial network (GAN) adopts adversarial training to generate more realistic samples [28,29,30], and GANs can be utilized in adaptive optics compensation to implement image-to-image translation. Based on the conditional GAN, Pix2pix trains on paired data with an added L1 loss function to realize domain adaptation and is a classical image translation algorithm [31]. In [32], CycleGAN was proposed, which only requires data from two domains without the strict correspondence between them that Pix2pix demands, making CycleGAN more widely applicable. To address the shortcomings of existing deep learning-based vortex beam wavefront correction techniques, which require probe beams and numerous optical hardware resources and must reconstruct the wavefront before correction, this paper proposes an end-to-end turbulence distortion compensation technique based on an improved CycleGAN that directly outputs the compensated OAM light intensity map.
In this paper, a vortex beam distortion compensation technique in AT based on improved CycleGAN is proposed from the perspective of image processing for OAM communication. The structural similarity (SSIM) loss function term is added to the Pix2pix and CycleGAN loss function to realize end-to-end AT distortion compensation in Section 2. The experimental results on the mean structural similarity (MSSIM), vortex beam intensity distributions and difference with and without compensation, and OAM state detection accuracy are presented in Section 3. Section 4 is devoted to the conclusion.

2. Atmospheric Turbulence Compensation Scheme

2.1. Atmospheric Turbulence Distortion Theory

Due to temperature gradients, AT causes the atmospheric refractive index to be randomly distributed, which leads to scintillation and phase fluctuation of a vortex beam passing through the turbulence. The existing refractive-index spatial power spectrum functions mainly include the Kolmogorov spectrum [33], the Tatarskii spectrum [34], the Hill–Andrews spectrum [35], etc. Based on the 2/3 power law in the inertial range, the Kolmogorov spectrum can be expressed as follows:
$$\phi_n(\kappa) = 0.033\, C_n^2\, \kappa^{-11/3}, \qquad 1/L_0 \ll \kappa \ll 1/l_0$$
where $C_n^2$ is the atmospheric refractive-index structure constant, which measures the turbulence intensity: the larger its value, the more severely the beam is distorted by the turbulence. Moreover, $\kappa$ is the spatial wavenumber, and $L_0$ and $l_0$ are the outer and inner scales of the turbulence, respectively.
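As a concrete illustration, the spectrum above is straightforward to evaluate numerically. The following sketch (the function name, signature and example values are ours, not from the paper) returns $\phi_n(\kappa)$ inside the inertial range:

```python
import numpy as np

def kolmogorov_spectrum(kappa, cn2):
    """Kolmogorov refractive-index power spectrum
    phi_n(kappa) = 0.033 * Cn^2 * kappa^(-11/3),
    valid in the inertial range 1/L0 << kappa << 1/l0."""
    kappa = np.asarray(kappa, dtype=float)
    return 0.033 * cn2 * kappa ** (-11.0 / 3.0)

# Example: medium turbulence, Cn^2 = 1e-14 m^(-2/3)
values = kolmogorov_spectrum([10.0, 100.0, 1000.0], cn2=1e-14)
```

The steep $\kappa^{-11/3}$ roll-off means most of the turbulent phase power sits at low spatial frequencies, which is why the large-scale wavefront tilt dominates the distortion.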
When simulating the influence of AT distortion, the perturbation of the vortex beam by a single phase screen can be used to model the phase change of the vortex beam after passing through a turbulent channel over a certain distance, as shown in Figure 1. The whole transmission process is equivalent to a series of phase screens separated by a certain distance [36]. There are two main methods to generate a random phase screen: the Zernike polynomial expansion method and the power spectrum inversion method [37,38].
The specific steps for describing the random phase disturbance in a Cartesian coordinate system are as follows: first, generate a complex random number matrix with zero mean and unit variance; filter it with the phase power spectrum function $\phi_\varphi(\kappa)$ derived from the Kolmogorov spectrum; and finally map the filtered spectrum to the spatial domain with the fast Fourier transform (FFT) to obtain the turbulent phase screen. The relationship between the phase power spectrum function $\phi_\varphi(\kappa)$ and the refractive-index power spectrum function $\phi_n(\kappa)$ is as follows:
$$\phi_\varphi(\kappa) = 2\pi k_0^2\, \Delta z\, \phi_n(\kappa)$$
where $\Delta z$ represents the interval between the phase screens, $k_0 = 2\pi/\lambda$ is the optical wavenumber and $\lambda$ is the wavelength of the light beam. The variance of the frequency-domain components of the phase screen is as follows:
$$\sigma^2(\kappa) = \left(\frac{2\pi}{N \Delta x}\right)^2 \phi_\varphi(\kappa)$$
where $N$ represents the grid number of the complex random matrix, and $\Delta x$ represents the grid spacing. The turbulent phase screen is then obtained with the fast Fourier transform:
$$\varphi(x, y) = \mathrm{FFT}\left(C\, \sigma(\kappa)\right)$$
where $C$ is the $N \times N$ complex random matrix.
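The three steps above can be sketched in a few lines of numpy. This is our own illustration, not the authors' code: normalization conventions for spectral phase screens vary between implementations, so the scale factors and the fixed seed should be treated as assumptions.

```python
import numpy as np

def turbulence_phase_screen(N, dx, cn2, dz, wavelength, rng=None):
    """One Kolmogorov phase screen via the power-spectrum inversion method
    (sketch following the equations above)."""
    rng = np.random.default_rng(0) if rng is None else rng
    k0 = 2.0 * np.pi / wavelength
    # Angular spatial-frequency grid kappa (rad/m)
    f = np.fft.fftfreq(N, d=dx) * 2.0 * np.pi
    kx, ky = np.meshgrid(f, f)
    kappa = np.hypot(kx, ky)
    kappa[0, 0] = 2.0 * np.pi / (N * dx)          # avoid division by zero at DC
    # Phase power spectrum: phi_phi = 2*pi*k0^2*dz * phi_n(kappa)
    phi_n = 0.033 * cn2 * kappa ** (-11.0 / 3.0)
    phi_phi = 2.0 * np.pi * k0 ** 2 * dz * phi_n
    # Standard deviation of each spectral mode: sigma^2 = (2*pi/(N*dx))^2 * phi_phi
    sigma = (2.0 * np.pi / (N * dx)) * np.sqrt(phi_phi)
    # Unit-variance complex Gaussian matrix C, filtered and mapped to space
    C = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
    screen = np.fft.ifft2(C * sigma) * N ** 2     # FFT mapping to the spatial domain
    return screen.real

screen = turbulence_phase_screen(N=128, dx=0.01, cn2=3e-13, dz=400.0,
                                 wavelength=632.8e-9)
```

Multiplying such a screen into the field as $\exp(i\varphi)$ at regular intervals reproduces the multi-screen channel model of Figure 1.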

2.2. Vortex Beam Distortion Compensation Based on Pix2pix

Different from traditional optical communication systems, vortex beam communication loads information into different OAM states for transmission. Here, OAM mode recognition based on deep learning is realized by classifying the two-dimensional OAM light intensity maps. Without predicting the complete wavefront phase, it is sufficient to generate the complete OAM light intensity map for subsequent mode identification. This provides the theoretical basis for learning, via GAN training, the mapping between the distorted and the target OAM beam intensity maps in this paper. The Pix2pix and CycleGAN generators adopt the U-Net [39] architecture, which is well suited to the phase recovery task, as shown in Figure 2.
The framework of end-to-end distortion compensation based on Pix2pix is shown in Figure 3. After the receiver captures the distorted OAM light intensity map, it is fed into the Pix2pix network, and the compensated OAM light intensity map is directly output for subsequent state detection tasks. Each sub-layer of the Pix2pix generator and discriminator consists of three parts: a convolution layer, a batch normalization layer and an activation function layer. The convolution kernel size is 4 × 4. The generator contains 16 network layers in total and adopts a U-Net-like structure. Layers E1 to E8 down-sample the input OAM light intensity features through convolution, with Leaky ReLU as the interlayer activation function. Layers D8 to D1 up-sample the input features through deconvolution, with ReLU as the interlayer activation function. The corresponding down-sampling E layers and up-sampling D layers use skip connections to transfer the high-frequency information learned in the encoder to the corresponding decoder layer, preventing the loss of important high-frequency image information during down-sampling. The compensated OAM light intensity map output by the generator is normalized and sent to the discriminator, where its authenticity is judged against the target OAM light intensity map. The discriminator contains 5 network layers, with Leaky ReLU as the interlayer activation function.
During training, the size of the OAM light intensity map is uniformly set to 256 × 256 × 3, the input image is flipped horizontally with a certain probability and the Xavier method is used to initialize the network weights. The input of the E1 layer is 256 × 256 × 3; after the "convolution-batch normalization-activation" operation, the output feature map becomes 128 × 128 × 3. From layer E2 to E8, the spatial size of the feature map halves after each layer, so the output feature map of the E8 layer is 1 × 1 × 256 and is fed into the D8 layer. From layer D8 to D1, the spatial size of the feature map doubles after each layer; finally, the output of the D1 layer is 256 × 256 × 3, restoring the original image size. The OAM light intensity map generated by the generator is input into the discriminator. The first four layers of the discriminator extract features, and then a convolution layer with a 4 × 4 kernel down-samples the feature map to a resolution of 1 × 1. The final output vector, of size 1 × 1 × 2, represents the discrimination result. The Adam optimizer is used during training with the momentum term set to 0.9. The learning rates of the generator and discriminator are both set to 0.002, and the regularization weight in the generator objective is set to 100. A total of 50 epochs were trained with a batch size of 2.
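The layer-size bookkeeping above follows from standard convolution arithmetic. The short sketch below (our own illustration, not the authors' code) checks that a 4 × 4 kernel with stride 2 and padding 1 halves the spatial size in each encoder layer E1–E8 and that the matching transposed convolution doubles it in each decoder layer D8–D1:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of a strided convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of the matching transposed convolution."""
    return (size - 1) * stride - 2 * pad + kernel

# Encoder E1..E8: 256 -> 1, halving at every layer
enc = [256]
for _ in range(8):
    enc.append(conv_out(enc[-1]))

# Decoder D8..D1: 1 -> 256, doubling at every layer
dec = [1]
for _ in range(8):
    dec.append(deconv_out(dec[-1]))
```

The two pyramids mirror each other exactly, which is what makes the E-to-D skip connections dimensionally consistent.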
Traditionally, the performance of AT distortion compensation is evaluated by the Strehl ratio, mean square error and peak-to-valley values. The purpose of vortex beam AT compensation is to provide a high-quality light intensity distribution for subsequent OAM state detection. Therefore, the loss function of Pix2pix is modified here: in addition to Pix2pix's traditional conditional GAN loss function and the $L_1$ loss function, an SSIM loss function term is introduced. The two intrinsic parts of the Pix2pix loss function evaluate the difference between the generated and real data distributions at the pixel level, whereas the SSIM loss function assesses the image quality of the generated vortex beam in terms of luminance, contrast and structure [40]. The mean structural similarity (MSSIM) is adopted to measure the image quality after compensation.
SSIM is an evaluation criterion used to measure the similarity between two images; here, the two images are the intensity distributions of the undistorted target vortex beam and the distorted vortex beam. SSIM comprises three indicators, luminance $l(x, y)$, contrast $c(x, y)$ and structure $s(x, y)$, which are expressed as follows [40]:
$$l(x, y) = \frac{2\mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}$$
$$c(x, y) = \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}$$
$$s(x, y) = \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3}$$
where $c_1 = (k_1 L)^2$, $c_2 = (k_2 L)^2$ and $c_3 = c_2/2$. Here, $L$ represents the range of image pixel values, and $k_1$ and $k_2$ are 0.01 and 0.03, respectively. Moreover, $x$ and $y$ represent two image signals; $\mu_x$ and $\mu_y$ are the average pixel values of images $x$ and $y$; $\sigma_x$ and $\sigma_y$ are the standard deviations of the pixel values of images $x$ and $y$, respectively; and $\sigma_{xy}$ is the covariance of the pixel values of images $x$ and $y$. Assuming that the image signal is discrete, these parameters are calculated as follows:
$$\mu_x = \frac{1}{MN} \sum_{j=1}^{M} \sum_{i=1}^{N} x(i, j)$$
$$\sigma_x^2 = \frac{1}{MN - 1} \sum_{j=1}^{M} \sum_{i=1}^{N} \left(x(i, j) - \mu_x\right)^2$$
$$\sigma_{xy} = \frac{1}{MN - 1} \sum_{j=1}^{M} \sum_{i=1}^{N} \left(x(i, j) - \mu_x\right)\left(y(i, j) - \mu_y\right)$$
Combining the three components as $\mathrm{SSIM}(x, y) = l(x, y)^{\alpha}\, c(x, y)^{\beta}\, s(x, y)^{\gamma}$, where $\alpha$, $\beta$ and $\gamma$ weight the three features and are usually taken as $\alpha = \beta = \gamma = 1$, the SSIM index can be written as follows [40]:
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$
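A direct numerical transcription of the closed-form SSIM above could look as follows. This is our own sketch, not the authors' code: it uses global image statistics, whereas the MSSIM variant averages SSIM over local windows; the function name and defaults are assumptions.

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """SSIM index of two images per the closed-form expression above
    (alpha = beta = gamma = 1), computed from global image statistics."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)          # unbiased, 1/(MN-1)
    cov_xy = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)
    num = (2.0 * mu_x * mu_y + c1) * (2.0 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den
```

Identical images give SSIM = 1, so the loss term 1 − SSIM(y, G(x)) introduced below vanishes exactly when the generator reproduces the target intensity map.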
The value of SSIM lies between 0 and 1; the closer it is to 1, the better the turbulence distortion compensation and the higher the image quality of the compensated vortex beam intensity distribution. The SSIM loss term can be written as follows:
$$L_{\mathrm{SSIM}}(G) = \mathbb{E}_{x, y \sim p_{\mathrm{data}}(x, y)}\left[1 - \mathrm{SSIM}\left(y, G(x)\right)\right]$$
The final total loss function consists of three parts: the traditional conditional GAN loss, the $L_1$ loss and the SSIM loss:
$$L = \arg\min_G \max_D L_{\mathrm{cGAN}}(G, D) + \alpha L_{L_1}(G) + \beta L_{\mathrm{SSIM}}(G)$$
where $\alpha$ is the weight of the generator $L_1$ loss term, and $\beta$ is the weight of the generator SSIM loss term.

2.3. Vortex Beam Distortion Compensation Based on Improved CycleGAN

The principle of end-to-end distortion compensation based on CycleGAN is shown in Figure 4. The structures of the CycleGAN generator and discriminator, the parameters of each layer and the training settings are the same as for Pix2pix. For vortex beam AT distortion compensation, image x of the distortion domain X is the distorted OAM light intensity map captured by the receiver, and image y of the target domain Y is the undistorted target OAM light intensity map. During training, the distorted OAM light intensity map x first passes through generator G from domain X to domain Y and then through generator F from domain Y back to domain X, yielding the reconstructed OAM light intensity map F(G(x)) in the distortion domain; the cycle-consistency constraint makes this reconstruction as close to the original image x as possible. The same holds for the migration from the target domain Y to the distortion domain X, so that the reconstructed target OAM light intensity map G(F(y)) is as close as possible to the original image y; this reconstruction loss is based on the $L_1$ loss. The discriminators of the two unidirectional GANs are used to discriminate the reconstructed images from the original images, respectively.
As with the Pix2pix model in Section 2.2, an SSIM loss function term is added to the generator loss of CycleGAN, yielding the improved CycleGAN. Since CycleGAN consists of two mirror-symmetric classical GANs, an SSIM loss term is added to the generator loss function for each of the two directions:
$$L_{\mathrm{SSIM}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[1 - \mathrm{SSIM}\left(x, F(G(x))\right)\right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[1 - \mathrm{SSIM}\left(y, G(F(y))\right)\right]$$
For vortex beam distortion compensation, the total loss function of the improved CycleGAN comprises the losses of the two classical GANs, the bidirectional cycle-consistency ($L_1$) loss and the SSIM loss, written as follows:
$$L = \arg\min_{G, F} \max_{D_X, D_Y} L_{\mathrm{GAN}}(G, D_Y, X, Y) + L_{\mathrm{GAN}}(F, D_X, Y, X) + \alpha L_{\mathrm{cycle}}(G, F) + \beta L_{\mathrm{SSIM}}(G, F)$$
where $\alpha$ denotes the weight of the bidirectional cycle-consistency loss term, and $\beta$ the weight of the bidirectional SSIM loss term.
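The two-direction SSIM term can be sketched as a plain function over mini-batches. Here `G`, `F` and `ssim` are placeholders for the forward/backward generators and any SSIM implementation; this is our own illustration of the equation, not the authors' training code.

```python
import numpy as np

def ssim_cycle_loss(G, F, batch_x, batch_y, ssim):
    """L_SSIM(G, F) from the equation above: penalize the structural
    dissimilarity between each image and its round-trip reconstruction."""
    term_x = np.mean([1.0 - ssim(x, F(G(x))) for x in batch_x])  # X -> Y -> X
    term_y = np.mean([1.0 - ssim(y, G(F(y))) for y in batch_y])  # Y -> X -> Y
    return term_x + term_y
```

With ideal generators the round trip is the identity and the loss is zero; during training, the term pushes F(G(x)) toward x in luminance, contrast and structure rather than merely pixel by pixel.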

3. Results and Discussions

3.1. Evaluation Index of Turbulence Distortion Compensation

The core of previous wavefront correction schemes is the wavefront reconstruction algorithm, which adopts quality evaluation indicators such as the Strehl ratio, mean square error and peak-to-valley values to measure the performance of the wavefront correction system and the fluctuation of the wavefront reconstruction error. We utilize the improved CycleGAN to directly train the end-to-end mapping between the light intensity distributions of the vortex beam with and without compensation. This skips the wavefront reconstruction and the dynamic compensation by optical devices, and turns the vortex beam turbulence distortion compensation problem into an image restoration problem. Combined with image translation studies, the effect of turbulence distortion compensation can therefore be evaluated with image restoration metrics. Thus, the MSSIM is adopted here to evaluate the compensation scheme.
The turbulence distortion compensation evaluation indices used in this paper can be divided into subjective and objective evaluation. Subjective evaluation refers to the human eye's impression of the compensated light intensity distribution, that is, a subjective judgment of the similarity between the light intensity distributions of the compensated vortex beam and the target vortex beam. However, whether in scientific research or industrial practice, objective criteria are still needed. Thus, the MSSIM, the OAM state detection probability distribution at the receiving end before and after compensation, and the OAM state detection accuracy with and without compensation were selected to evaluate the compensation performance of the proposed scheme.

3.2. Simulation Dataset Construction

The experimental data are generated by physical simulation, and an OAM state set with a unified image style (petal distribution) is selected. We selected the following 10 superposed OAM states: {−1, 1}, {−1, 2}, {−2, 2}, {−2, 3}, {−3, 3}, {−3, 4}, {−4, 4}, {−4, 5}, {−5, 5} and {−6, 6}, as shown in Figure 5. The distorted OAM light intensity map captured at the receiving end and the corresponding undistorted target OAM light intensity map form an image pair. The wavelength is 0.6328 μm, and the beam waist radius is 0.3 m. The AT distortion compensation experiments were carried out in medium- and strong-turbulence environments, respectively.
In medium turbulence, the atmospheric refractive-index structure constant $C_n^2$ ranges from $1.0 \times 10^{-14}\ \mathrm{m}^{-2/3}$ to $5.5 \times 10^{-14}\ \mathrm{m}^{-2/3}$ at intervals of $5.0 \times 10^{-15}\ \mathrm{m}^{-2/3}$. In strong turbulence, $C_n^2$ ranges from $1.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$ to $5.5 \times 10^{-13}\ \mathrm{m}^{-2/3}$ at intervals of $5.0 \times 10^{-14}\ \mathrm{m}^{-2/3}$. When simulating the effects of AT distortion, ten turbulence phase screens at a fixed interval are obtained by dividing the transmission distance into ten equal parts based on the power spectrum inversion method. Under the different ATs, 200 light intensity maps are generated for each OAM state. The data set of medium- and strong-turbulence mixed OAM states contains 20,000 samples and is divided into a training and a test set at a ratio of 8:2.
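For reference, the turbulence grids and the 8:2 split above can be reproduced in a few lines (the variable names are ours):

```python
import numpy as np

# Cn^2 grids (m^(-2/3)): 10 levels each for medium and strong turbulence
cn2_medium = np.linspace(1.0e-14, 5.5e-14, 10)   # step 5.0e-15
cn2_strong = np.linspace(1.0e-13, 5.5e-13, 10)   # step 5.0e-14

# 20,000 mixed-turbulence samples, split 8:2 into training and test sets
n_samples = 20_000
n_train = int(0.8 * n_samples)
n_test = n_samples - n_train
```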

3.3. Analysis of Simulation Results

The MSSIM was used to evaluate the compensation performance of vortex beam turbulence distortion compensation based on Pix2pix, CycleGAN, improved Pix2pix and improved CycleGAN. Here, improved Pix2pix and improved CycleGAN denote the networks with the added SSIM loss term, marked as Pix2pix + SSIM_loss and CycleGAN + SSIM_loss, respectively. In both turbulent environments, as shown in Figure 6, the MSSIM increases during training and settles above 90%, indicating that the intensity distribution of the compensated vortex beam is close to that of the target vortex beam. Compared with Pix2pix, CycleGAN achieves a higher MSSIM for the compensated intensity distribution in strong turbulence, because CycleGAN uses two mirror-symmetric unidirectional GANs and is better suited to data that are not paired pixel by pixel. Moreover, the MSSIM of the Pix2pix + SSIM_loss and CycleGAN + SSIM_loss methods improved noticeably after adding the SSIM loss term, especially in strong turbulence. As shown in Table 1, the MSSIM of the CycleGAN + SSIM_loss compensation method shows a 1% and 2.8% improvement over that of Pix2pix in medium and strong AT, respectively.
The effect of vortex beam AT compensation based on CycleGAN is shown in Figure 7 for a transmission distance of 4000 m and $C_n^2 = 3.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$ in strong turbulence. The first column is the intensity distribution of the vortex beam captured at the receiving end, and the second column is the intensity distribution of the target vortex beam (ground truth). The third and fourth columns are the intensity distributions of the vortex beam compensated by CycleGAN and improved CycleGAN (CycleGAN + SSIM_Loss), respectively. It can be observed that the intensity distribution of the compensated vortex beam is very close to that of the target vortex beam. In addition, when the SSIM loss term is added to the CycleGAN loss function, namely CycleGAN + SSIM_Loss, the intensity distribution of the compensated vortex beam is even closer to the ground truth and exhibits sharper edges in the petal-like intensity distribution, which is very important for the subsequent OAM state detection.
The differences between the intensity distributions of the compensated vortex beam and the target vortex beam are shown in Figure 8. The turbulence parameters are the same as in Figure 7, and the OAM states are {−1, 1}, {−1, 2}, {−3, 3} and {−6, 6}. The first column is the intensity distribution of the vortex beam captured at the receiving end, and the second column is the intensity distribution of the target vortex beam (ground truth). Meanwhile, the third and fourth columns are the differences of CycleGAN and CycleGAN + SSIM_Loss from the ground truth, respectively. Notably, the more the pixels of the difference image tend toward black, the closer the two images are and the better the distortion compensation. Therefore, the improved CycleGAN (CycleGAN + SSIM_Loss) has good compensation performance for subsequent OAM communication and detection.
AT adds a perturbation phase to the vortex beam during transmission, causing phase distortion and intermodal crosstalk, which reduces mode purity. In addition to comparing the intensity distribution of the vortex beam before and after compensation, the effect of distortion compensation can also be tested by comparing the OAM probability distribution of the detected vortex beam before and after compensation. Under three atmospheric refractive-index structure constants, namely $1.0 \times 10^{-14}$, $3.0 \times 10^{-14}$ and $5.0 \times 10^{-14}\ \mathrm{m}^{-2/3}$, the distortion compensation model trained in medium turbulence is selected for compensation. Meanwhile, the structure constants $1.0 \times 10^{-13}$, $3.0 \times 10^{-13}$ and $5.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$ are selected for the strong-turbulence experiment.
The OAM state detection network is a basic CNN. The data set is constructed from the light intensity distributions of the 10 superposed vortex beams {−1, 1}, {−1, 2}, {−2, 2}, {−2, 3}, {−3, 3}, {−3, 4}, {−4, 4}, {−4, 5}, {−5, 5} and {−6, 6}. For a transmission distance of 4000 m, the OAM state detection model is trained under six different turbulence intensities ($C_n^2$: $1.0 \times 10^{-14}$, $3.0 \times 10^{-14}$, $5.0 \times 10^{-14}$, $1.0 \times 10^{-13}$, $3.0 \times 10^{-13}$ and $5.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$). Under each turbulence intensity, 2000 light intensity distributions are simulated for each OAM state. The data set contains a total of 20,000 vortex beam light intensity distributions and is divided into a training set and a test set in the ratio of 8:2.
The transmitter emits 5000 vortex beams with OAM = {−4, 4}, and the CCD camera at the receiving end captures the light intensity distributions of the vortex beams. The OAM state probability distributions before and after compensation based on the proposed end-to-end turbulence distortion compensation scheme are shown in Figure 9 for two different $C_n^2$ scenarios at a 4000 m transmission distance. It can be seen from Figure 9a that when $C_n^2 = 3.0 \times 10^{-14}\ \mathrm{m}^{-2/3}$, the probability of OAM = {−4, 4} before and after compensation is above 96.5%; although it increases slightly after compensation, the difference is not significant. In the strong-turbulence environment with $C_n^2 = 3.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$ shown in Figure 9b, the probability of OAM = {−4, 4} increased by 4.6% compared with no compensation, and it increased by a further 1.9% after adding the SSIM loss function. Moreover, the comparison between Figure 9a,b shows that the probability of OAM = {−4, 4} is significantly reduced in strong turbulence with $C_n^2 = 3.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$, and the dispersion of the OAM state is more serious, which also demonstrates the necessity of distortion compensation for communication and detection in strong turbulence.
The comparison of the OAM state detection accuracy before and after compensation is shown in Figure 10. The detection accuracy of the OAM state was obviously improved under the different turbulence intensities after compensation. Especially in strong turbulence, when $C_n^2 = 5.0 \times 10^{-13}\ \mathrm{m}^{-2/3}$, the detection accuracy of the OAM state is only 85.5% without compensation, but it can be improved to 89.1% after CycleGAN compensation. Using CycleGAN + SSIM_loss compensation, we obtain a further 0.8% increase in the detection accuracy. Since the experiments in this paper are based on simulation data, the compensation effect is expected to be even more evident in a real vortex optical communication system.

4. Conclusions

This paper proposes an end-to-end vortex beam turbulence distortion compensation method based on the improved CycleGAN, skipping wavefront reconstruction and directly training the mapping between the light intensity distributions of the distorted vortex beam and the undistorted target vortex beam. The presence or absence of turbulence distortion is regarded as two image styles, turning the vortex beam turbulence distortion compensation problem into an image-to-image translation problem without the need for probe beams or wavefront reconstruction, which saves optical hardware and improves the compensation performance. The frameworks of Pix2pix and CycleGAN were analyzed, and the SSIM loss function was introduced to assess image quality in terms of luminance, contrast and structure. The feasibility and performance of the proposed distortion compensation scheme were verified in two respects: the comparison and difference of light intensity images before and after compensation, and the MSSIM and OAM state recognition accuracy with and without compensation. The results show that the intensity distribution of the vortex beam compensated by the improved CycleGAN is very close to that of the undistorted target vortex beam, and the MSSIM is close to 1. In addition, the detection accuracy of OAM using CycleGAN + SSIM_loss is improved by 4.4% compared with that before compensation in strong turbulence. This research provides a potential basis for vortex optical communication applications in interference environments.

Author Contributions

Conceptualization and methodology, T.Q.; software, T.Q. and Y.Z.; validation, T.Q. and Y.Z.; formal analysis, Y.Z. and J.W.; editing, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62071359 and 62271381) and in part by the Postdoctoral Science Foundation in Shaanxi Province and the Fundamental Research Funds for the Central Universities (ZYTS23138).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Turbulent-phase screen model for vortex beam transmission.
Figure 2. Vortex beam distortion compensation system based on GAN.
Figure 3. Schematic diagram of vortex beam distortion compensation based on Pix2pix.
Figure 4. Schematic diagram of vortex beam distortion compensation based on CycleGAN.
Figure 5. Ten superimposed states for OAM light intensity maps.
Figure 6. Variation of MSSIM with training processes under different turbulence intensities.
Figure 7. Intensity distribution of vortex beam after compensation by improved CycleGAN.
Figure 8. Difference between the intensity distributions of the vortex beam compensated by the improved CycleGAN and the undistorted target vortex beam.
Figure 9. OAM state probability distribution before and after compensation: (a) C_n^2 = 3.0 × 10^(−14) m^(−2/3) and (b) C_n^2 = 3.0 × 10^(−13) m^(−2/3).
Figure 10. Comparison of OAM state detection accuracy before and after compensation.
Table 1. Comparison of the compensation performance based on various methods.

Distortion Compensation Network | MSSIM (Medium Turbulence) | MSSIM (Strong Turbulence)
--------------------------------|---------------------------|--------------------------
Pix2pix                         | 0.944                     | 0.903
Pix2pix + SSIM_Loss             | 0.949                     | 0.922
CycleGAN                        | 0.952                     | 0.918
CycleGAN + SSIM_Loss            | 0.954                     | 0.931
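The MSSIM scores in Table 1 are obtained by averaging local SSIM values over image windows. A simplified sketch of that evaluation metric follows; it uses non-overlapping 8×8 patches rather than the standard 11×11 Gaussian sliding window, so the window size and patch scheme are assumptions, not the paper's exact setup:

```python
import numpy as np

def mssim(x, y, win=8, c1=0.01**2, c2=0.03**2):
    """Mean SSIM over non-overlapping win x win patches: a simplified
    stand-in for the sliding-window MSSIM reported in Table 1."""
    scores = []
    h, w = x.shape
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            a = x[i:i + win, j:j + win]
            b = y[i:i + win, j:j + win]
            ma, mb = a.mean(), b.mean()
            cov = ((a - ma) * (b - mb)).mean()
            num = (2 * ma * mb + c1) * (2 * cov + c2)
            den = (ma**2 + mb**2 + c1) * (a.var() + b.var() + c2)
            scores.append(num / den)
    return float(np.mean(scores))

# A compensated map identical to the target scores 1; any residual
# distortion lowers the score, matching the trend in Table 1.
target = np.linspace(0.0, 1.0, 256).reshape(16, 16)
perfect = mssim(target, target)
distorted = mssim(target, 1.0 - target)
```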