Technical Note

Hadamard–Viterbi Joint Soft Decoding for MFSK Underwater Acoustic Communications

1 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
2 Key Laboratory of Ocean Acoustics and Sensing, Northwestern Polytechnical University, Ministry of Industry and Information Technology, Xi’an 710072, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 6038; https://doi.org/10.3390/rs14236038
Submission received: 13 October 2022 / Revised: 19 November 2022 / Accepted: 23 November 2022 / Published: 29 November 2022
(This article belongs to the Special Issue Underwater Communication and Networking)

Abstract

Multiple Frequency-Shift Keying (MFSK) has been widely used for underwater acoustic communications due to its low complexity and channel robustness. However, traditional MFSK suffers from a low bit rate compared with coherent acoustic communication. To increase the bit rate, this study designs a new MFSK scheme based on the concept of orthogonal frequency division multiplexing (OFDM). We also adopt channel-concatenated coding to resist multipath interference and design the corresponding iterative joint decoding. The channel-concatenated coding consists of a Hadamard code and a convolutional code. Correspondingly, the iterative joint decoding uses a Hadamard–Viterbi joint soft decoding framework with a newly designed branch metric that exploits the Hadamard structure. As important preprocessing steps for the received signal, frame synchronization and Doppler compensation are also described in detail. Simulations and experiments are conducted to show the effectiveness of the proposed MFSK underwater acoustic communications.

1. Introduction

Different from coherent communications [1,2], Multiple Frequency-Shift Keying (MFSK) can tolerate significant Doppler or delay spreads with appropriate parameter selection and forward error correction. Thus, MFSK has been widely used in underwater acoustic communications [3,4]. Mitigating large amounts of Doppler and delay spread is a significant challenge for wireless acoustic communications in the underwater environment [5]; in underwater acoustic channels (UAC), both delay and Doppler spreading are pronounced [6]. A long delay spread with little Doppler spreading is usually mitigated with a relatively long MFSK symbol period; however, this sacrifices the symbol rate.
Forward error correction has been widely used in environments with time delay and Doppler spread; for example, nonbinary LDPC codes were used for frequency-hopped MFSK (FH-MFSK) underwater acoustic communication [7], turbo-coded FH/MFSK was investigated in a shallow water acoustic channel [8], and a convolutional code was used to mitigate multipath interference in an underwater acoustic channel [9]. Reed–Solomon (RS) block codes and convolutional codes were used for dealing with phase ambiguity in reception [10].
Chaotic modulation MFSK (CMFSK) has been used in confidential underwater acoustic communication [11], and CMFSK showed a similar bit error rate (BER) performance to conventional digital modulations under multipath interference. The BER performance of FH-MFSK was analyzed in [12], which showed a higher BER than in radio frequency communications because the envelope amplitude statistics of FSK signals in underwater acoustics do not follow the Rayleigh or Rician distribution. Spread spectrum techniques have been used to offer low-data-rate acoustic links; M-ary chirp spread spectrum modulation (MCSS) was used for medium-range underwater acoustic (UWA) communication [13]. A parallel combinatory multicarrier Frequency-Hopping Spread Spectrum (PC/MC-FHSS) scheme combined with a Chirp-z Transform (CZT) method was proposed to resist fading and narrow-band interference in UWA communications [14].
In dynamic oceans, the acoustic channel can vary on the scale of hundreds of milliseconds or less, since the channel coherence time can be that short [15]. The depth dependence of the signal can be studied using probe signals received on an array; e.g., measured channel impulse responses (CIRs) can be obtained by using Linear Frequency Modulated (LFM) signals [16]. The designed waveforms determine the detection performance [17].
Different from FH-MFSK, orthogonal frequency division multiplexing (OFDM) has attracted extensive attention in UWA communications due to its high transmission rate and spectrum efficiency [18]. However, OFDM requires orthogonality among the subcarrier waveforms, which makes it sensitive to Doppler and phase noise [19].
Considering the advantages of the MFSK and OFDM techniques, we aim to find a tradeoff between robustness and transmission rate, with the goal of a UWA communication system that achieves a zero BER. The subcarrier allocation method of OFDM is used for the MFSK. In order to ensure reliable transmission, channel-concatenated codes consisting of a Hadamard code and a convolutional code are used at the transmitter. At the receiver, a Hadamard-based branch metric is computed within the Viterbi decoding framework of this MFSK UWA communication system. In addition, this study describes the frame synchronization and Doppler compensation in detail. Finally, a sea trial was performed in a shallow sea channel with a large multipath spread to confirm the performance of the proposed method.

2. Brief Introduction to Orthogonal Frequency Division and MFSK

The basic idea of MFSK technology is to use signals of different frequencies to represent M different symbols, e.g., the transmitted signal is expressed as
$$S_i(t) = A \cos(\omega_i t), \quad 0 \le t \le T, \quad i = 0, \ldots, M - 1,$$
where $A$ denotes the amplitude, $\omega_i$ is the angular frequency of the $i$-th carrier, $T$ is the symbol width, and $M$ is the number of carrier frequencies.
In order to maximize the use of the transmitted energy and the frequency efficiency, the $M$ carrier frequencies are chosen to be orthogonal, i.e., we assume each symbol contains $N$ samples, with $\omega_i t = 2\pi f_i t$ and $f_i = i/T$. The $m$-th sample time is $t_m = m \cdot T / N$. Then,
$$\omega_i t_m = \frac{2 \pi i m}{N},$$
where $m$ is an integer.
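To make the orthogonality condition concrete, the following short Python sketch (not from the paper) samples a few tones with $f_i = i/T$ at $N$ points per symbol and verifies numerically that distinct tones are orthogonal over one symbol; the values of $N$ and the number of tones are illustrative only.

```python
import numpy as np

N = 64                          # samples per symbol (illustrative)
m = np.arange(N)                # sample index, so t_m = m*T/N
tones = np.array([np.cos(2 * np.pi * i * m / N) for i in range(8)])

gram = tones @ tones.T          # pairwise inner products over one symbol
off_diag = gram - np.diag(np.diag(gram))
print(np.allclose(off_diag, 0)) # True: distinct tones are orthogonal
print(np.diag(gram))            # N for i = 0, N/2 otherwise
```

This is exactly the property that later allows one IFFT bin to carry one MFSK tone.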
The UAC is dynamic and challenging for reliable, high-speed communications; it is therefore necessary to consider the factors that seriously affect the use of MFSK in UWA communications, such as the multipath and Doppler effects.

2.1. Influence of the UAC Multipath Effect on MFSK

A long delay spread with little Doppler spreading is usually mitigated by a relatively long MFSK symbol period. However, one cannot then recover the transmission rate simply by increasing M, because the usable band of the UAC is limited. Therefore, multipath seriously constrains the transmission rate of the MFSK system.
In this study, we adopted channel coding to reduce the communication BER. In addition, frequency diversity technology was adopted, i.e., each bit is transmitted by multiple frequency points.

2.2. Influence of the Doppler Effect on MFSK

If the Doppler spread is large while the time delay spread is small, it can be mitigated with a shorter symbol period. In the time domain, the Doppler effect expands or compresses the signal; when the transmitted signal is long, the accumulated expansion or compression can even exceed the width of one symbol.
In order to minimize the influence of the Doppler effect, we designed the following system. The system works in frame-by-frame mode to ensure that the expansion and contraction of each frame is far less than one symbol. In addition, Doppler compensation measures were adopted.

3. Transmitter and Receiver

In this study, the overall system structure is shown in Figure 1. At the transmitter, the input information was encoded by a convolutional code and then processed by interleaving and Hadamard coding. A random phase was used to reduce the peak-to-average ratio because we adopted the inverse fast Fourier transform (IFFT) for MFSK modulation. A synchronization signal was then added, and the data stream was generated. The data were then transmitted through the underwater acoustic channel.
At the receiver, a matched filter was used for frame synchronization, followed by Doppler compensation via resampling. The data stream was then demodulated by the FFT. Hadamard decoding and deinterleaving were applied before the Viterbi decoding. The deinterleaved output provided a soft-decision metric, which improved the performance of the joint decoding.
Considering that underwater acoustic communications are susceptible to systematic and random noise, we used forward error correction (FEC) to detect and correct a limited number of errors in the transmitted data without retransmission. Error-correcting codes for FEC can be broadly categorized into two types, namely, block codes and convolutional codes. The main differences between these codes are listed in Table 1.
Typical linear block codes include Hamming codes, Walsh–Hadamard codes, cyclic codes, low-density parity-check (LDPC) codes, and Reed–Solomon codes; typical convolutional-type codes include turbo codes and trellis codes. Although turbo codes and LDPC codes have good performance, their use comes at the cost of implementation complexity. Turbo codes obtain a high coding gain only with a long interleaver and many iterations, and LDPC codes only with a long code length; the resulting long time delay limits the application of such long codes in underwater acoustic communications. Thus, concatenated coding is often used in this case. Considering that frequency diversity can effectively resist the frequency-selective fading of underwater acoustic channels, and that the orthogonality of the Hadamard matrix facilitates distinguishing different codewords, we adopted the Hadamard code as the inner code and the convolutional code as the outer code. The details of the concatenated coding are as follows.

3.1. Hadamard Encoding and Decoding

The Hadamard matrix has the excellent property that its rows are mutually orthogonal, and we used it to construct the Hadamard codes. The Hadamard matrix $\mathbf{H}_n$ consists of elements $\{+1, -1\}$ and has size $n \times n$. It has the property
$$\mathbf{H}_n \mathbf{H}_n^{T} = n \mathbf{I},$$
where $\mathbf{I}$ is an identity matrix of size $n \times n$. Any two rows of the Hadamard matrix differ in $n/2$ positions. One row consists of elements that are all $+1$ (all $-1$ in the complement matrix). We selected the other $2(n-1)$ rows of the Hadamard matrix and its complement to construct the Hadamard codes; from these $2(n-1)$ rows we selected $2^k$ rows to generate the Hadamard code $H(n, k)$, i.e., $k$ information bits are encoded into $n$ bits by $H(n, k)$.
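As a concrete illustration, the sketch below builds an order-20 Hadamard matrix with the Paley-I construction (q = 19 is prime and q ≡ 3 mod 4) and then forms a 32-row codebook from the rows of the matrix and its complement, as described above. The Paley construction and the particular 32 rows kept here are our own illustrative choices; the paper does not state how its H(20, 5) was generated.

```python
import numpy as np

def paley_hadamard(q):
    """Order-(q+1) Hadamard matrix for a prime q with q % 4 == 3."""
    chi = np.zeros(q, dtype=int)                      # quadratic character mod q
    chi[1:] = [1 if pow(a, (q - 1) // 2, q) == 1 else -1 for a in range(1, q)]
    Q = np.array([[chi[(j - i) % q] for j in range(q)] for i in range(q)])
    S = np.zeros((q + 1, q + 1), dtype=int)           # skew part of the construction
    S[0, 1:] = 1
    S[1:, 0] = -1
    S[1:, 1:] = Q
    return np.eye(q + 1, dtype=int) + S               # satisfies H @ H.T = (q+1) I

H20 = paley_hadamard(19)
assert np.array_equal(H20 @ H20.T, 20 * np.eye(20, dtype=int))

# Candidate codewords: rows of H20 and of its complement -H20, excluding the
# all +1 row and the all -1 row, which leaves 2*(20-1) = 38 rows; keep 2^5 = 32.
pool = np.vstack([H20[1:], -H20[1:]])                 # row 0 of H20 is the all +1 row
codebook = pool[:32]                                  # illustrative choice of 32 rows
codebook01 = (codebook + 1) // 2                      # map {+1, -1} -> {1, 0} as in Figure 2
print(codebook01.shape)                               # (32, 20); 10 ones per row
```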
We adopted soft-decision decoding for the Hadamard code. First, the squared signal amplitude of each subchannel was recorded as a floating-point number. Then, the soft decision values were calculated, and the maximum soft decision value was selected as the final decoding result. We assumed that the fading and the additive white Gaussian noise of each carrier frequency were independent, and the following soft decision was obtained:
$$M_i = \sum_{j=1}^{n} h_{i,j} \, |y_j|^2, \quad i = 1, \ldots, 2^k,$$
where $M_i$ is the soft decision value of the $i$-th Hadamard codeword, $h_{i,j}$ is the $j$-th bit of the $i$-th Hadamard codeword, and $|y_j|^2$ is the squared amplitude of the $j$-th carrier signal. In the joint decoding process, we adopted the maximum soft decision value $M_i$ as the branch metric in the Viterbi decoding. In this study, we set $H(n_H = 20, k_H = 5)$. The $\{+1, 0\}$ element distribution of the Hadamard code matrix $H(20, 5)$ is shown in Figure 2, where a black square represents 1 and a white square represents 0. The number of zeros and ones in each row is the same.
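A minimal sketch of this soft decision, assuming the 32 × 20 {1, 0} codebook (`codebook01`) built in the previous sketch: the per-subchannel energies are correlated against every codeword and the largest metric wins.

```python
import numpy as np

def hadamard_soft_decode(y, codebook01):
    """y: 20 complex subchannel values; codebook01: 32 x 20 {1,0} codewords."""
    energy = np.abs(y) ** 2                  # |y_j|^2, recorded as floating point
    metrics = codebook01 @ energy            # M_i = sum_j h_{i,j} |y_j|^2
    return int(np.argmax(metrics)), metrics

# toy usage: transmit codeword 7 as on/off subchannel energies plus noise
rng = np.random.default_rng(0)
tx = codebook01[7].astype(float)
rx = tx + 0.2 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))
idx, _ = hadamard_soft_decode(rx, codebook01)
print(idx)                                   # 7, with high probability
```

In the joint decoder described later, these per-codeword metrics are not hard-decided here but are passed on as branch metrics.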

3.2. Convolution Encoding and Viterbi Decoding

The convolutional code $C(n, k, L)$ is a type of error-correcting code that generates parity symbols by sliding a generator over the data stream, where $n$, $k$, and $L$ denote the number of output bits, the number of input bits, and the constraint length, respectively. The constraint length specifies the delay of the input bit stream through the encoder. The sliding nature of convolutional codes facilitates trellis decoding using a time-invariant trellis, whose nodes are ordered into horizontal (time) slices, with each node at each time connected to at least one node at the next time. Time-invariant trellis decoding allows maximum-likelihood soft-decision decoding with reasonable complexity. The number of states of $C(n, k, L)$ is $N_{st} = 2^{k(L-1)}$. In this study, we adopted $C(n_c = 10, k_c = 5, L_c = 2)$. The $\{+1, 0\}$ distribution of the convolutional code generator is shown in Figure 3.
For convolutional encoding, if $n$, $k$, $L$, and the generator are given, then the next-state matrix (also called the state transition matrix) $\mathbf{S}_c$ and the output matrix $\mathbf{O}_c$ can be computed. We adopted a decimal representation for $\mathbf{S}_c$ and $\mathbf{O}_c$ in this study. These parameters and matrices were used in the following encoding and decoding process.
The convolutional encoding steps were as follows. We initialized the current state as $S_{t_n = 0} = 0$, where $t_n = 0$ denotes the initial iteration time. We assumed the binary information bits $\mathbf{b} = \{b_i\}, i = 1, \ldots, k$, were given; we then flipped the bit order and converted the bits to a decimal digit $d$. The decimal output of the convolutional encoder was
$$d_o = \mathbf{O}_c(S + 1, d + 1).$$
The expression $(S + 1, d + 1)$ follows the MATLAB convention, because matrix rows and columns cannot be indexed by 0 in that case. The next state was then updated as
$$S_{t_n + 1} = \mathbf{S}_c(S + 1, d + 1).$$
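The sketch below illustrates this table-driven encoding step with 0-based indexing, so `O_c[s, d]` plays the role of the MATLAB-style $\mathbf{O}_c(S+1, d+1)$. Since the generator of the paper's $C(10, 5, 2)$ code (Figure 3) is not reproduced here, the tables are built for the common rate-1/2, constraint-length-3 code with octal generators (7, 5), purely to keep the example self-contained.

```python
import numpy as np

K, L = 1, 3                            # input bits per step, constraint length
GENS = [0b111, 0b101]                  # generators (7, 5) in octal -- an assumed stand-in
N_STATES = 2 ** (K * (L - 1))          # number of states, 2^{k(L-1)} = 4 here

def build_tables():
    S_c = np.zeros((N_STATES, 2 ** K), dtype=int)   # next-state (state transition) matrix
    O_c = np.zeros((N_STATES, 2 ** K), dtype=int)   # output matrix, decimal representation
    for s in range(N_STATES):
        for d in range(2 ** K):
            reg = (d << (L - 1)) | s                # shift register: new bit + old state
            out = 0
            for g in GENS:
                out = (out << 1) | (bin(reg & g).count("1") & 1)
            S_c[s, d] = reg >> 1                    # next state drops the oldest bit
            O_c[s, d] = out
    return S_c, O_c

def conv_encode(bits, S_c, O_c):
    s, outputs = 0, []                              # start in (and later flush back to) state 0
    for d in bits:
        outputs.append(int(O_c[s, d]))              # d_o = O_c[S, d]
        s = int(S_c[s, d])                          # S_{t+1} = S_c[S, d]
    return outputs, s

S_c, O_c = build_tables()
out, final_state = conv_encode([1, 0, 1, 1, 0, 0], S_c, O_c)  # two tail zeros return to state 0
print(out, final_state)                                       # [3, 2, 0, 1, 1, 3] 0
```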
The Viterbi algorithm has found universal application in decoding convolutional codes, and it is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states. We used it as the joint decoding framework.

3.3. Concatenated Encoding and Signal Composition

The input bit stream was $\mathbf{b}_{in} = [\mathbf{b}_i, \mathbf{0}_k]$, where $\mathbf{0}_k$ is an all-zero vector of length $k$; $\mathbf{0}_k$ ensured that the subsequent decoding returned to the 0 state. After being processed by the convolutional encoder, the length of the bit stream was double that of the input. We denote the symbol $\mathbf{b}_{co}$ after the convolutional encoder as
$$\mathbf{b}_{co} = C(10, 5, 2)(\mathbf{b}_{in}).$$
The encoded symbol $\mathbf{b}_{co}$ was interleaved by an interleaver, which spreads burst errors so that they appear to the convolutional decoder as random errors. The result is denoted by $\mathbf{b}_{cintl}$. We selected groups of $k$ elements from $\mathbf{b}_{cintl}$ as the row indexes of the Hadamard matrix and obtained the encoding
$$\mathbf{H}_{out} = H(20, 5)(\mathbf{b}_{cintl} + 1, :)^{T}.$$
The matrix $\mathbf{H}_{out}$ was converted into a column vector via $\mathbf{h} = \mathrm{vec}(\mathbf{H}_{out})$.
After convolutional encoding, interleaving, and Hadamard encoding, the information was modulated via MFSK. In order to maximize the frequency efficiency, we adopted the IFFT to allocate subcarriers. To reduce the peak-to-average ratio, we applied a random phase $\boldsymbol{\phi}$ to the signal, as
$$\mathbf{h}_{\phi} = \mathbf{h} \odot \exp(j \boldsymbol{\phi}).$$
Then, the MFSK signal was
$$\mathbf{x}_{\mathrm{MFSK}} = N_{\mathrm{ft}} \, \mathrm{IFFT}(\mathbf{h}_{\phi}),$$
where $N_{\mathrm{ft}}$ is the number of IFFT points and $\odot$ denotes element-wise multiplication.
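A minimal sketch of this modulation step, assuming one MFSK symbol of 240 {1, 0} Hadamard chips, an IFFT size of 4096 (as in Tables 2 and 3), and an arbitrary contiguous block of data subcarriers; the actual subcarrier mapping is not specified here, and a real transmit waveform would additionally require Hermitian-symmetric bin loading, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ft = 4096                                     # IFFT size (Tables 2-4 list Nfft = 4096)
h = rng.integers(0, 2, size=240).astype(float)  # one symbol's 240 {1,0} Hadamard chips
subcarriers = 200 + np.arange(240)              # assumed contiguous data bins

phi = rng.uniform(0.0, 2 * np.pi, size=h.size)  # random phase per chip
h_phi = h * np.exp(1j * phi)                    # h_phi = h .* exp(j*phi)

spectrum = np.zeros(N_ft, dtype=complex)
spectrum[subcarriers] = h_phi                   # on/off tones on the assigned bins
x_mfsk = N_ft * np.fft.ifft(spectrum)           # x_MFSK = N_ft * IFFT(h_phi)

papr_db = 10 * np.log10(np.max(np.abs(x_mfsk) ** 2) / np.mean(np.abs(x_mfsk) ** 2))
print(round(papr_db, 1))                        # the random phases keep the PAPR moderate
```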
The acoustic signal $\mathbf{x}_{\mathrm{MFSK}}$ was then transmitted through the water. The multipath and Doppler of the UAC were important factors in the design of the transmitted signal. Thus, a hyperbolic frequency modulation (HFM) signal was used as the head signal $x_{\mathrm{HFM}}$ for synchronization and Doppler compensation. The HFM signal was
$$x_{\mathrm{HFM}}(t) = A \cos\!\left[ 2 \pi k \log\!\left( 1 - \frac{t}{t_0} \right) \right], \quad -\frac{T_s}{2} < t < \frac{T_s}{2},$$
where
$$k = \frac{T_s f_2 f_1}{f_2 - f_1}, \qquad t_0 = \frac{T_s (f_2 + f_1)}{2 (f_2 - f_1)},$$
$T_s$ denotes the time length of the HFM signal, and $f_2$ and $f_1$ denote the maximum and minimum frequencies of the HFM signal, respectively.
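The following sketch generates the HFM waveform defined above with the parameters of Tables 3 and 4 (20 ms duration, 96 kHz sampling) and checks that its instantaneous frequency sweeps from $f_1$ to $f_2$; the 18–24 kHz band is our assumption, consistent with the 21 kHz center frequency and 6 kHz bandwidth listed there.

```python
import numpy as np

fs = 96_000                       # sampling rate (Hz), Table 3
Ts = 0.020                        # HFM duration (s), Table 3
f1, f2 = 18_000.0, 24_000.0       # assumed min/max frequencies (Hz)

k = Ts * f2 * f1 / (f2 - f1)
t0 = Ts * (f2 + f1) / (2 * (f2 - f1))
t = np.arange(-Ts / 2, Ts / 2, 1 / fs)
x_hfm = np.cos(2 * np.pi * k * np.log(1 - t / t0))

# the instantaneous frequency |d(phase)/dt| / (2*pi) should run from ~f1 to ~f2
phase = 2 * np.pi * k * np.log(1 - t / t0)
f_inst = np.abs(np.diff(phase)) * fs / (2 * np.pi)
print(int(round(f_inst[0])), int(round(f_inst[-1])))   # ~18000  ~24000
```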

3.4. Frame Synchronization and Doppler Compensation

The receiver has no prior information about the position of the frame synchronization signal. Therefore, frame synchronization detection was based on the maximum of a correlation computation: the correlation peak was searched to determine the position of the frame synchronization signal within the received signal. The frame synchronization detection steps were as follows.
First, we set the number of FFT points as
$$N_{\mathrm{fft}} = 2^{\lceil \log_2 (2 L_{\mathrm{HFM}}) \rceil},$$
where $\lceil \cdot \rceil$ denotes rounding up and $L_{\mathrm{HFM}}$ denotes the length of the HFM signal in samples; e.g., if $L_{\mathrm{HFM}} = 100$, then $N_{\mathrm{fft}} = 256$.
Second, in order to generate an $N_{\mathrm{fft}}$-point frequency-domain representation of the HFM signal, we time-reversed the original HFM signal and zero-padded it, as
$$s_{\mathrm{HFM}}(n) = \begin{cases} x_{\mathrm{HFM}}(L_{\mathrm{HFM}} - 1 - n), & 0 \le n \le L_{\mathrm{HFM}} - 1, \\ 0, & L_{\mathrm{HFM}} \le n \le N_{\mathrm{fft}} - 1, \end{cases}$$
where $s_{\mathrm{HFM}}(n)$ denotes the reversed signal $x_{\mathrm{HFM}}(n)$. Then, we obtained the frequency-domain representation
$$S_{\mathrm{HFM}}(k) = \sum_{n=0}^{N_{\mathrm{fft}} - 1} \frac{s_{\mathrm{HFM}}(n)}{\| \mathbf{s}_{\mathrm{HFM}} \|} \exp(-j 2 \pi n k / N_{\mathrm{fft}}).$$
Third, we adopted a segmental interception of the received signal $\mathbf{y}$ (of length $L_y$) to build the signal segment for the search, denoted by $\mathbf{y}_r = [\mathbf{y}_2, \mathbf{y}_1]$, where $\mathbf{y}_1 = \mathbf{y}(i : i + L_{\mathrm{diff}} - 1)$, $L_{\mathrm{diff}} = N_{\mathrm{fft}} - L_{\mathrm{HFM}}$, and $\mathbf{y}_2 = \mathbf{y}_1(\mathrm{end} - L_{\mathrm{HFM}} + 1 : \mathrm{end})$. The data location point $i$ advances by $L_{\mathrm{diff}}$ at each step. The frequency-domain representation of $\mathbf{y}_r$ is then
$$Y_r(k) = \sum_{n=0}^{N_{\mathrm{fft}} - 1} \frac{y_r(n)}{\| \mathbf{y}_r \|} \exp(-j 2 \pi n k / N_{\mathrm{fft}}).$$
In fact, the correlation result $z(n)$ can be regarded as the inverse Fourier transform of the product of $Y_r(k)$ and $S_{\mathrm{HFM}}(k)$; thus,
$$z(n) = \sum_{k=0}^{N_{\mathrm{fft}} - 1} Y_r(k) \, S_{\mathrm{HFM}}(k) \exp(j 2 \pi n k / N_{\mathrm{fft}}).$$
One can then select the maximum value $A_p$ of $z(n)$ and its location $P_m$. The start point of the frame is $P_m - L_{\mathrm{HFM}}$. If this start point is negative, or if $A_p < T_h$, where $T_h$ is a threshold for weak-signal detection, then the data location point moves forward as $i = i + L_{\mathrm{diff}}$ and the search continues.
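A simplified sketch of this synchronization search: instead of stepping through the received signal in $N_{\mathrm{fft}}$-point segments as above, it matched-filters the whole buffer at once in the frequency domain, and the location of the correlation peak plays the role of $P_m$; the threshold and test signal are illustrative.

```python
import numpy as np

def detect_hfm(y, x_hfm, threshold=0.5):
    """Return the sample index where the HFM replica best aligns with y, or None."""
    n = len(y) + len(x_hfm) - 1
    N_fft = 2 ** int(np.ceil(np.log2(n)))               # round up to a power of two
    Y = np.fft.fft(y, N_fft)
    S = np.fft.fft(x_hfm, N_fft)
    z = np.fft.ifft(Y * np.conj(S)).real                # cross-correlation via the FFT
    p_m = int(np.argmax(z[: len(y)]))
    peak = z[p_m] / (np.linalg.norm(y) * np.linalg.norm(x_hfm) + 1e-12)
    return p_m if peak > threshold else None            # reject weak correlations

# toy usage: bury the HFM replica from the previous sketch at sample 5000 in noise
rng = np.random.default_rng(2)
y = 0.1 * rng.standard_normal(20_000)
y[5000:5000 + len(x_hfm)] += x_hfm
print(detect_hfm(y, x_hfm))                              # 5000
```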
After frame synchronization, the receiver also performed Doppler estimation and compensation, MFSK demodulation, and joint decoding. Let the packet lengths of the received and transmitted signals be $T_{\mathrm{rp}}$ and $T_{\mathrm{tp}}$, respectively; the Doppler factor was then estimated as
$$\Delta = \frac{T_{\mathrm{rp}}}{T_{\mathrm{tp}}} - 1.$$
It is worth noting that $T_{\mathrm{rp}}$ is critical to the accuracy of the Doppler estimate. The locations of the two HFM signals in the received signal are found from the maxima of the correlation between the HFM replica and the received signal.
Thus, the resampling rate $f_{\mathrm{rs}}$ is
$$f_{\mathrm{rs}} = (1 + \Delta) f_s,$$
where $f_s$ is the original sampling rate.
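A hedged sketch of this Doppler estimation and compensation, reusing the `detect_hfm` helper above: two HFM markers are assumed to bracket the packet, their spacing gives $T_{\mathrm{rp}}$, and the signal is then resampled by $1/(1+\Delta)$; the rational approximation of the resampling ratio and the nominal packet length are implementation choices, not the paper's.

```python
from fractions import Fraction
import numpy as np
from scipy.signal import resample_poly

def doppler_compensate(y, x_hfm, T_tp, fs):
    """Estimate the Doppler factor from two HFM markers and resample y to undo it.
    Assumes both markers are actually detected (detect_hfm does not return None)."""
    p_head = detect_hfm(y, x_hfm)
    p_tail = p_head + len(x_hfm) + detect_hfm(y[p_head + len(x_hfm):], x_hfm)
    T_rp = (p_tail - p_head) / fs                    # measured spacing of the two markers
    delta = T_rp / T_tp - 1.0                        # Doppler factor, Delta = T_rp/T_tp - 1
    ratio = Fraction(1.0 / (1.0 + delta)).limit_denominator(10_000)
    y_rs = resample_poly(y[p_head:], ratio.numerator, ratio.denominator)
    return y_rs, delta
```

Resampling by $1/(1+\Delta)$ is equivalent to re-reading the received samples at the rate $f_{\mathrm{rs}} = (1+\Delta) f_s$ and interpolating back onto the original sampling grid.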

3.5. Demodulation and Joint Decoding

Then, the signal was demodulated as
$$\mathbf{y}_{\mathrm{MFSK}} = \frac{1}{N_{\mathrm{ft}}} \, \mathrm{FFT}(\mathbf{x}_{\mathrm{MFSK}}).$$
The demodulated MFSK signal was selected by its subcarrier indexes $I_{\mathrm{subcarrier}}$ and then deinterleaved, resulting in
$$\mathbf{y}_{\mathrm{deintl}} = \mathrm{Deintl}\!\left[ \left| \mathbf{y}_{\mathrm{MFSK}}(I_{\mathrm{subcarrier}}) \right|^2 \right].$$
The result $\mathbf{y}_{\mathrm{deintl}}$, which was the input of the joint decoder, was reshaped into a matrix $\mathbf{Y}$ of size $L_g \times T_f$, where $L_g = n_H n_c / k_c$ and $T_f$ is the time length of $\mathbf{Y}$ per frame.
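A brief sketch of this demodulation path for one MFSK symbol: FFT, subcarrier selection, energy, de-interleaving, and reshaping into the columns of $\mathbf{Y}$. The subcarrier indexes and the interleaver permutation are assumptions (they must simply match the transmitter), and the column-major reshape is likewise an illustrative choice.

```python
import numpy as np

N_ft = 4096
n_H, n_c, k_c = 20, 10, 5
L_g = n_H * n_c // k_c                     # 40 soft values per trellis step
subcarriers = 200 + np.arange(240)         # must match the modulator's bins (assumed)

rng = np.random.default_rng(3)
perm = rng.permutation(240)                # assumed interleaver permutation shared with the TX

def demod_symbol(rx_symbol):
    """rx_symbol: N_ft time-domain samples of one MFSK symbol (CP already removed)."""
    y_mfsk = np.fft.fft(rx_symbol) / N_ft              # y_MFSK = (1/N_ft) * FFT(.)
    energies = np.abs(y_mfsk[subcarriers]) ** 2        # per-subcarrier energies |.|^2
    y_deintl = energies[np.argsort(perm)]              # undo the interleaver permutation
    return y_deintl.reshape(L_g, -1, order="F")        # columns become trellis steps of Y
```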
The pseudo code of the joint decoding algorithm is displayed in a MATLAB-like style in Algorithm 1. Bold uppercase letters denote matrices, and bold lowercase letters denote vectors. $\mathbf{0}_{N_{st} \times (T_f + 1)}$ denotes an all-zero matrix of size $N_{st} \times (T_f + 1)$, where $N_{st}$ is the number of states and $T_f$ is the data length of a frame. de2bi converts decimal numbers to their binary representations. $\mathbf{O}_c$ and $\mathbf{S}_c$ denote the output matrix and the state transition matrix, respectively, and $\mathbf{H}$ denotes the Hadamard matrix.
Algorithm 1 The pseudo code of the joint decoding algorithm
Input: received signal $\mathbf{Y}$, $\mathbf{H}$, $\mathbf{O}_c$, $\mathbf{S}_c$
Output: decoded bits $\hat{\mathbf{x}}$
 Initialization: branch metric matrix $\mathbf{M}_b = \mathbf{0}_{N_{st} \times (T_f + 1)}$;
 final state matrix $\mathbf{S}_f = \mathbf{0}_{N_{st} \times (T_f + 1)}$;
 previous state vector $\mathbf{s}_p = \mathbf{0}_{N_{st} \times 1}$;
 next state vector $\mathbf{s}_n = \mathbf{0}_{N_{st} \times 1}$; $t = 1$.
while $t \le T_f$ do
    Initialization: path metric $\mathbf{m}_p^{t} = -\boldsymbol{\infty}$, new metric $\mathbf{m}_n = -\boldsymbol{\infty}$, both of size $N_{st} \times 1$; $i = 1$.
    while $i \le N_{st}$ do
       $\mathbf{b}_o = \mathrm{de2bi}[\mathbf{O}_c(i, :)]^T$; convert decimal to binary.
       branch metric: $\mathbf{m}_b^t = D_H(\mathbf{Y}(:, t), \mathbf{b}_o, \mathbf{H}, n_c, k_c)$;
       $\mathbf{m}_n(\mathbf{S}_c(i, :)^T + 1) = \mathbf{M}_b(i, t) + \mathbf{m}_b^t$;
       $[\mathbf{m}_p^{t+1}, \mathbf{I}_{m_p}^{t+1}] = \max[\mathbf{m}_p^{t}, \mathbf{m}_n]$;
       $\mathbf{s}_p(\mathbf{S}_c(i, :)^T + 1) = i - 1$;
       $\mathbf{s}_n = [\mathbf{s}_n, \mathbf{s}_p](\mathbf{I}_{m_p}^{t+1})$;
       $i = i + 1$;
    end while
    $\mathbf{S}_f(:, t + 1) = \mathbf{s}_n$;
    $\mathbf{M}_b(:, t + 1) = \mathbf{m}_p$;
    $t = t + 1$;
end while
 Initialization (traceback): signal estimation $\hat{\mathbf{x}} = \mathbf{0}_{N_{st} \times T_f}$, $i = 0$, $t = 1$;
while $t \le T_f$ do
    current state: $s_c = \mathbf{S}_f(i + 1, T_f - t + 2)$;
    $\hat{\mathbf{x}}(:, T_f - t + 1) = \mathrm{flip}\{\mathrm{de2bi}[\mathbf{S}_c(s_c + 1, :) == i, \, k_c]\}$;
    $i = s_c$; $t = t + 1$;
end while
 Obtain the signal estimation $\hat{\mathbf{x}}$.
The pseudo code of the Hadamard distance (branch metric) is displayed in Algorithm 2, where $n_c$ denotes the output length and $k_c$ the input data length when encoding with the Hadamard matrix $\mathbf{H}$; the number of columns of $\mathbf{H}$ is denoted by $n_H$. A small numerical sketch of this branch metric is given after the listing.
Algorithm 2 The pseudo code of the Hadamard distance (branch metric), i.e., $\mathbf{m}_b^t = D_H(\mathbf{Y}(:, t), \mathbf{b}_o, \mathbf{H}, n_c, k_c)$
Input: received signal $\mathbf{Y}$, $\mathbf{H}$, $\mathbf{b}_o$, $n_c$, $k_c$
Output: branch metric $\mathbf{m}_b^t$
 convert the bits $\mathbf{b}_o$ to $k_H$-ary symbols $\mathbf{s}_o$;
 Hadamard encoding: $\mathbf{H}_{out} = \mathbf{H}(\mathbf{s}_o + 1, :)^T$;
 reshape $\mathbf{H}_{out}$ to size $[n_H n_c / k_c] \times N_{st}$;
 obtain the branch metric $\mathbf{m}_b^t = \mathbf{Y}(:, t)^T \mathbf{H}_{out}$.
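The short sketch below illustrates the branch metric of Algorithm 2 for a single branch (the paper's version evaluates all $N_{st}$ states at once): the candidate coded bits are grouped into $k_H$-bit symbols, re-encoded with the {1, 0} codebook from the earlier Hadamard sketch (`codebook01`), and correlated against the received soft values for that trellis step.

```python
import numpy as np

def hadamard_branch_metric(y_col, branch_bits, codebook01, k_H=5):
    """y_col: the n_H*n_c/k_c soft values Y(:, t); branch_bits: one branch's n_c coded bits."""
    groups = np.asarray(branch_bits).reshape(-1, k_H)       # n_c/k_c groups of k_H bits
    idx = groups @ (2 ** np.arange(k_H - 1, -1, -1))        # each group indexes a codeword
    h_out = codebook01[idx].reshape(-1)                     # re-encoded {1,0} chips
    return float(y_col @ h_out)                             # m_b^t = Y(:, t)^T H_out

# toy check: the metric favors the branch whose coded bits were actually sent
rng = np.random.default_rng(4)
sent = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])             # n_c = 10 candidate coded bits
idx = sent.reshape(-1, 5) @ (2 ** np.arange(4, -1, -1))
y_col = codebook01[idx].reshape(-1) + 0.2 * rng.standard_normal(40)
other = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])            # some competing branch
print(hadamard_branch_metric(y_col, sent, codebook01) >
      hadamard_branch_metric(y_col, other, codebook01))     # True
```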
The parameters of the Hadamard code and the convolutional code are shown in Table 2.

3.6. Data Frame Format

In this study, we adopted the data frame format shown in Figure 4. The HFM signal was used for synchronization. A guard interval (GI) was inserted between the synchronization signal and the symbols to reduce the interference with the data segment when the receiver correlates against the synchronization signal. Each MFSK symbol was composed of a cyclic prefix (CP) and an information data symbol. The CP was added to resist the inter-symbol interference and inter-subcarrier interference caused by multipath channels. The length of the CP should be greater than the maximum multipath delay, so that the delay spread of the previous symbol falls within the CP and does not affect the demodulation of the current symbol.
We assumed that each frame contained 20 MFSK symbols, each MFSK symbol contained eight Hadamard symbols, and each Hadamard symbol carried 20 bits but only 5 bits of information. Thus, each frame carried 20 × 8 × 5 = 800 bits at the Hadamard input. However, the convolutional code rate was 1/2 with $k_c$ redundant tail bits; thus, the number of information bits per frame was $400 - k_c$.

4. Simulations

In order to simulate the actual sea trial environment, we set the parameters as shown in Table 3. Considering that the water depth was shallow, we modeled the channel as an iso-velocity channel. We used BELLHOP to generate the eigenrays shown in Figure 5a, the CIR shown in Figure 5b, and the sound propagation energy loss shown in Figure 5c. The CIR was used to simulate the actual channel, and the filtered signal was taken as the received signal. Additive white Gaussian noise (AWGN) was used for the channel coding performance tests.
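A hedged sketch of this simulation step: the transmitted waveform is convolved with a channel impulse response and AWGN is added at a chosen SNR. In the paper the CIR comes from BELLHOP; the three-tap CIR and the placeholder waveform below are stand-ins only.

```python
import numpy as np

rng = np.random.default_rng(5)

def apply_channel(tx, cir, snr_db):
    """Filter tx with a channel impulse response and add white Gaussian noise at snr_db."""
    rx = np.convolve(tx, cir)                              # multipath filtering
    sigma = np.sqrt(np.mean(rx ** 2) / 10 ** (snr_db / 10))
    return rx + sigma * rng.standard_normal(rx.size)

fs = 96_000
cir = np.zeros(480)                                        # ~5 ms of delay spread at 96 kHz
cir[[0, 192, 470]] = [1.0, 0.35, 0.2]                      # illustrative tap delays and gains
tx = rng.standard_normal(fs // 10)                         # placeholder 0.1 s waveform
rx = apply_channel(tx, cir, snr_db=10)
```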
In order to test the BER performance of the concatenated coding and its decoding, we used convolutional codes with hard and soft decoding as well as Hadamard decoding for comparison. The transmitted signal was passed through the simulated channel. For the comparison convolutional code, the constraint length was set to seven, the code generator was expressed in octal as [171, 133], the traceback depth was set to 32, and the coding rate was 1/2. We adopted quadrature amplitude modulation (QAM) with M = 4 for modulation and demodulation. For soft Viterbi decoding of the convolutional code, the QAM demodulator outputs were calculated using the approximate log-likelihood ratio algorithm.
The results are shown in Figure 6, where $E_b$ is the signal energy per data bit, i.e., the squared signal amplitude divided by twice the user bit rate, and $N_0$ is the noise power spectral density, i.e., the noise power in a 1 Hz bandwidth. One can see that the proposed concatenated coding had the lowest BER compared with the counterparts, including the Hadamard code and the convolutional code with hard and soft decisions.

5. At-Sea Experiment

In this section, the at-sea experiment conducted to test the performance of the concatenated coding of MFSK underwater acoustic communications is described. The experiment location was Wuyuan Bay in Xiamen, China. The environmental parameters are shown in Table 4. The overall system structure of the transmitter and receiver is shown in Figure 7. The center frequency of the filter circuit was 30 kHz, the bandwidth was 25 kHz, the passband gain was 15 dB, the maximum attenuation was −3 dB, the passband ripple was 0.02 dB, and the stopband attenuation was −20 dB.
We estimated the channel impulse response (CIR) shown in Figure 8 from the transmitted and received signals. The multipath structure clearly varied over time, and the variation in the CIR resulted in a 2 Hz Doppler spread.
Different from the traditional MFSK method, we took M = 240 in this study, because we took 12 rows from the Hadamard matrix, transposed them, and vectorized the result. Since $n_H = 20$, the number of data subcarriers was $M = 12 \times 20 = 240$.
The transmitted information was a binary "Lena" image of size 128 × 128. The received signal is shown in Figure 9; the received SNR was around 10 dB. The transmitted image and the one reconstructed at the receiver are shown in Figure 10. The BER at the receiver was 0 in this case. Furthermore, when we added AWGN to the received signal to test the robustness of the proposed communication framework to noise, the BER remained 0 down to an SNR of 0 dB.

6. Conclusions

This study combined Hadamard–convolutional concatenated coding and joint decoding with a CP, Doppler compensation, interleaving, and frame synchronization to form a complete and robust underwater acoustic communication system. Different from the traditional MFSK system, we designed a new MFSK scheme based on the concept of OFDM, i.e., with M = 240. The channel-concatenated coding consisted of a Hadamard code and a convolutional code. Correspondingly, the iterative joint decoding used the Hadamard–Viterbi joint soft decoding framework, whose soft decision mode was based on a newly designed branch metric that exploits the Hadamard structure. Comparisons among several channel coding methods confirmed the superiority of the joint channel-concatenated coding, and simulations and at-sea experiments confirmed the effectiveness of the proposed communication framework.

Author Contributions

F.-Y.W. proposed the framework of the algorithm and wrote the manuscript; T.T. performed the systems and the analysis of the results. B.-X.S. and Y.-C.S. participated in design of the research. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Project No. 62171369, 61701405).

Data Availability Statement

Please contact author for data requests.

Acknowledgments

The authors are grateful to the anonymous reviewers and acknowledge the research funding support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Jiang, W.; Yang, X.; Tong, F.; Yang, Y.; Zhou, T. A Low-Complexity Underwater Acoustic Coherent Communication System for Small AUV. Remote Sens. 2022, 14, 3405.
2. Hu, X.; Wang, Y.; Wu, F.Y.; Huang, A. A mixing regularization parameter IPNLMS for underwater acoustic MIMO channel estimation. AEU-Int. J. Electron. Commun. 2022, 155, 154366.
3. Cai, X.; Wan, L.; Huang, Y.; Zhou, S.; Shi, Z. Further results on multicarrier MFSK based underwater acoustic communications. Phys. Commun. 2016, 18, 15–27.
4. Wu, Y.; Zhu, M.; Zhang, L.; Li, X.; Yang, B. Underwater acoustic communication system for 4500 m manned submersible: Modulation methods and design consideration. Tech. Acoust. 2015, 34, 244–247.
5. Cao, X.L.; Jiang, W.H.; Tong, F. Time reversal MFSK acoustic communication in underwater channel with large multipath spread. Ocean Eng. 2018, 152, 203–209.
6. Wu, F.Y.; Yang, K.; Tong, F.; Tian, T. Compressed sensing of delay and Doppler spreading in underwater acoustic channels. IEEE Access 2018, 6, 36031–36038.
7. Fan, W.W.; Li, B.; Zhang, Y.W.; Sun, D.J. Research of FH-MFSK underwater acoustic communication based on non-binary LDPC codes. Appl. Mech. Mater. 2014, 519–520, 945–952.
8. Yue, L.; Yang, X.L.; Wang, M.Z.; Fan, S.H. Performance of turbo coded FH/MFSK in shallow water acoustic channel. In Proceedings of the 2012 IEEE 11th International Conference on Signal Processing, Beijing, China, 21–25 October 2012; Volume 2, pp. 1401–1405.
9. Park, J.; Seo, C.; Park, K.C.; Yoon, J.R. Effectiveness of convolutional code in multipath underwater acoustic channel. Jpn. J. Appl. Phys. 2013, 52, 07HG01.
10. Goalic, A.; Trubuil, J.; Beuzelin, N. Channel coding for underwater acoustic communication system. In Proceedings of the OCEANS 2006, Singapore, 16–19 May 2006; pp. 1–4.
11. Shu, X.; Wang, H.; Yang, X.; Wang, J. Chaotic modulations and performance analysis for digital underwater acoustic communications. Appl. Acoust. 2016, 105, 200–208.
12. Yang, W.B.; Yang, T. M-ary frequency shift keying communications over an underwater acoustic channel: Performance comparison of data with models. J. Acoust. Soc. Am. 2006, 120, 2694–2701.
13. He, C.; Ran, M.; Meng, Q.; Huang, J. Underwater acoustic communications using M-ary chirp-DPSK modulation. In Proceedings of the IEEE 10th International Conference on Signal Processing, Beijing, China, 24–28 October 2010; pp. 1544–1547.
14. Xu, F.; Zhan, C.; Xie, Y.; Wang, D. Performance of CZT-assisted parallel combinatory multicarrier Frequency-Hopping Spread Spectrum over shallow underwater acoustic channels. Ocean Eng. 2015, 110, 116–125.
15. Yang, T. Properties of underwater acoustic communication channels in shallow water. J. Acoust. Soc. Am. 2012, 131, 129–145.
16. Sun, Q.; Wu, F.Y.; Yang, K.; Ma, Y. Estimation of multipath delay-Doppler parameters from moving LFM signals in shallow water. Ocean Eng. 2021, 232, 109125.
17. Han, Y.; Zheng, C.; Sun, D. Signal design for underwater acoustic positioning systems based on orthogonal waveforms. Ocean Eng. 2016, 117, 15–21.
18. Li, B.; Zheng, S.; Tong, F. Bit-error rate based Doppler estimation for shallow water acoustic OFDM communication. Ocean Eng. 2019, 182, 203–210.
19. Qiao, G.; Babar, Z.; Ma, L.; Liu, S.; Wu, J. MIMO-OFDM underwater acoustic communication systems—A review. Phys. Commun. 2017, 23, 56–64.
Figure 1. The block diagrams of the transmitter and the receiver.
Figure 2. The {+1, 0} element distribution of the Hadamard code matrix H(20, 5), where a black block denotes +1.
Figure 3. The {+1, 0} distribution of the convolutional code generator, where a black block denotes +1.
Figure 4. The data frame format of MFSK communications.
Figure 5. The simulated underwater channel environment: (a) eigenrays, (b) CIR, and (c) the sound propagation energy loss.
Figure 6. Bit error rate performance comparisons of multiple decoding methods in multipath channels.
Figure 7. The overall system structure of the transmitter and receiver.
Figure 8. The channel impulse response of the Wuyuan Bay experiment.
Figure 9. The received signal: (a) time domain and (b) time frequency domain.
Figure 10. Comparison of the transmitted figure and the reconstructed figure.
Table 1. The main differences between linear block codes and convolutional codes.
Linear Block Codes | Convolutional Codes
In linear block codes, the information bits are immediately followed by the parity bits. | In convolutional codes, the information bits are not followed directly by the parity bits; instead, the parity is spread along the sequence.
Encoding of the current block is independent of previous blocks, so the encoder has no memory element; the output depends only on the present message bits. | Encoding of the current output depends on the previous state and past elements, so the encoder has a memory element for storing previous state information.
The hardware of block codes is more complex, and the encoding process is somewhat more difficult. | The hardware of convolutional codes is simpler, and the encoding process is easy.
Table 2. The parameter settings of the convolutional code and the Hadamard code.
Parameter Setting | Value
Convolutional code output bits n_c | 10
Convolutional code input bits k_c | 5
Generator matrix size | 5 × 10
Output matrix size | 32 × 32
Next-state matrix size | 32 × 32
Hadamard code output bits n_H | 20
Hadamard code input bits k_H | 5
Hadamard matrix size | 32 × 20
Nfft | 4096
Table 3. The parameter settings in the simulations.
Parameter Setting | Value
Source depth | 2 m
Receiver depth | 2 m
Transmission distance | 1 km
Central frequency | 21 kHz
Doppler shift | 20 Hz
Sampling frequency | 96 kHz
Bandwidth | 6 kHz
Time of HFM | 20 ms
Time of CP | 40 ms
Time of GI | 100 ms
Nfft | 4096
Table 4. The environmental parameters.
Parameter | Value
Source depth | 2 m
Receiver depth | 2 m
Transmission distance | 1 km
Central frequency | 21 kHz
Sampling frequency | 96 kHz
Bandwidth | 6 kHz
Time of HFM | 20 ms
Time of CP | 40 ms
Time of GI | 100 ms
Nfft | 4096