Article

Minimum Variance Distortionless Response—Hanbury Brown and Twiss Sound Source Localization

Hubei Key Laboratory of Modern Manufacturing Quantity Engineering, School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(10), 6013; https://doi.org/10.3390/app13106013
Submission received: 14 April 2023 / Revised: 6 May 2023 / Accepted: 11 May 2023 / Published: 13 May 2023
(This article belongs to the Special Issue Advances in Design and Signal Processing of Sensors)

Abstract

Sound source target localization is an extremely useful technique that is currently utilized in many fields. The Hanbury Brown and Twiss (HBT) interference target localization method based on sound fields is not accurate enough for localization at low signal-to-noise ratios (below 0 dB). To address this problem, this paper introduces Minimum Variance Distortionless Response (MVDR) beamforming and proposes a new MVDR-HBT algorithm. Specifically, for narrowband signals, the inverse of the correlation matrix of the sound signal is calculated, and a guiding vector is constructed to compute the MVDR direction weights. These direction weights are then used to weight the correlation function of the HBT algorithm. Subsequently, the MVDR-HBT algorithm is extended from narrowband signals to broadband signals. As a result, the directivity of the HBT algorithm is optimized for wide- and narrowband signals, resulting in improved localization accuracy. Finally, the target localization accuracy of the MVDR-HBT algorithm is analyzed through simulation and localization experiments. The results show that the MVDR-HBT algorithm can accurately determine the direction of a sound source, with localization errors at different positions that are smaller than those produced by HBT. The localization performance of MVDR-HBT is considerably better than that of HBT, further verifying the simulation results. This study provides a new idea for target localization within an acoustic propagation medium (air).

1. Introduction

In recent years, with the continuous development of sound source localization technology and artificial intelligence, microphone array-based sound source localization technology [1] has found important applications in industrial inspection [2,3,4], military detection [5,6,7,8], human–computer interaction [9,10], video conferencing systems [11,12], and other fields. However, in the face of increasingly complex real-world environments, there is a higher demand for sound source localization algorithms to accurately locate sound sources under low signal-to-noise ratios (SNRs). Sound source localization under complex acoustic conditions puts greater emphasis on the anti-interference capabilities of sound source localization algorithms, and the accuracy of sound source localization is an important indicator of the performance of such algorithms. Currently, traditional sound source localization methods under low SNR include high-resolution spectral estimation [13], time delay estimation based on time difference of arrival (TDOA) [14,15], and controlled beamforming [16].
The high-resolution spectrum estimation method calculates the correlation matrix of each element to obtain subspaces under different parameters, thereby obtaining the orientation information of the sound source. Veerendra Dakulagi [17] modified the classical MUSIC algorithm by absorbing the Jordon normalization matrix in the covariance matrix to reconstruct the data. With this modification, even coherent sources can be accurately estimated in low-SNR environments. However, due to the short duration and smooth nature of the sound signal, this method struggles to meet the high-resolution requirements for estimation accuracy and has limited practicality. The time delay difference localization method [18,19] is computationally efficient and easy to implement, but the accuracy of the algorithm is highly dependent on the topology of the microphone array, and the accuracy of time delay estimation affects the position estimation. The controllable beamforming method [20,21,22,23], with the SRP (steered response power) algorithm being the most famous example [24,25], weights and sums the outputs of the array elements while steering the array beam over candidate directions; the direction in which the expected signal achieves its maximum output power is taken as the source direction, thus achieving sound source localization. However, this method has a large computational cost and is not suitable for real-time localization.
After considering the advantages and disadvantages of traditional positioning methods, it is clear that a single traditional algorithm is no longer suitable for complex positioning requirements. Therefore, in 2019, Liu et al. [26] introduced the HBT [27] interference principle from optics into the acoustic field and proposed a positioning method based on HBT interference in the acoustic field. This method analyzes the coherence of the signal to eliminate the influence of noise on passive target positioning, and it calculates the normalized correlation function to describe the coherence of the field, thus achieving accurate positioning of weak target sound sources. This method combines the advantages of time-delay estimation and controllable beamforming, solving some of the problems associated with traditional positioning methods. However, the positioning accuracy of this method still needs to be further improved under low-SNR conditions. Based on the HBT principle and combining it with the MVDR algorithm [28,29], this paper achieves accurate positioning of sound source targets under low-SNR conditions without changing the geometry of the array.
This paper first focuses on narrowband signals and calculates the inverse of the autocorrelation matrix of the preprocessed sound signal. It then constructs the directional vector to calculate MVDR directional weights, which are used to weight the correlation function of HBT, optimizing the directionality of the HBT algorithm and thus improving the positioning accuracy. The MVDR-HBT algorithm is then extended from narrowband signals to broadband signals, enhancing the directionality of the HBT algorithm in broadband signals and further improving the positioning accuracy. This paper proposes a positioning method that combines MVDR with the HBT interference target positioning method based on the sound field, aiming to solve the problem of inaccurate positioning of HBT in low-SNR situations.
The remaining sections of this paper are organized as follows: Section 2 provides a detailed explanation of the MVDR-HBT method for wideband and narrowband sound source localization. In Section 3, we present the simulation results of the MVDR-HBT algorithm under varying sound source parameters such as SNR, sound source position, and frequency, and we compare them with those of the HBT algorithm. We analyze the localization performance of both methods. In Section 4, we further validate the effectiveness and feasibility of the proposed method through a sound source detection experiment, where sound source parameters such as SNR, sound source position, and frequency are varied. Finally, conclusions are presented in Section 5.

2. MVDR-HBT Localization Principle

To further enhance the positioning accuracy of the sound field HBT in low-SNR situations, this paper proposes an MVDR-HBT sound source positioning algorithm that combines MVDR with the sound field HBT. Firstly, the MVDR beamforming algorithm is applied to the sound field HBT. The MVDR-HBT algorithm is then derived, and to ensure its generality, it is extended from narrowband signals to wideband signals, with a theoretical study of the wideband MVDR-HBT. The MVDR-HBT positioning diagram is shown in Figure 1.

2.1. Principle of Narrowband Sound Source Localization Using MVDR-HBT

Assuming that the expected signal received by the microphone array is $\mathbf{p}_s$, the interference signal is $\mathbf{p}_i$, and the noise signal is $\mathbf{p}_n$, the beamforming receiver receives the signal $\mathbf{p} = \mathbf{p}_s + \mathbf{p}_i + \mathbf{p}_n$, which is a coupling of the expected signal, the interference signal, and the noise signal. $\mathbf{p}_s$ is the target signal that we expect to receive and process, $\mathbf{p}_i$ comes from external interference to the signal, and $\mathbf{p}_n$ is caused by the inherent uncertainty of the signal itself. Therefore, the beamformer output is given by
$$y = \mathbf{w}^{H}\mathbf{p} = \mathbf{w}^{H}\left(\mathbf{p}_s + \mathbf{p}_i + \mathbf{p}_n\right) = y_s + y_i + y_n \qquad (1)$$
where $y_s$ represents the desired signal component, $y_i$ represents the interference component, $y_n$ represents the noise component, $\mathbf{w} = \left[w_1, w_2, \ldots, w_n\right]^{T}$ represents the weight vector, $T$ denotes the transpose, and $H$ denotes the conjugate transpose. By combining the $\mathbf{p}_i$ and $\mathbf{p}_n$ terms in Equation (1) into $\mathbf{p}_{in}$, Equation (2) can be obtained:
$$y = \mathbf{w}^{H}\left(\mathbf{p}_s + \mathbf{p}_{in}\right) = y_s + y_{in} \qquad (2)$$
Based on Equation (2), the power of interference and noise signals can be obtained as shown below:
$$E\left[\left|y_{in}\right|^{2}\right] = E\left[\left|\mathbf{w}^{H}\mathbf{p}_{in}\right|^{2}\right] = \mathbf{w}^{H}E\left[\mathbf{p}_{in}\mathbf{p}_{in}^{H}\right]\mathbf{w} = \mathbf{w}^{H}\mathbf{R}_{in}\mathbf{w} \qquad (3)$$
where $\mathbf{R}_{in} = E\left[\mathbf{p}_{in}\mathbf{p}_{in}^{H}\right]$ represents the covariance matrix of the interference and noise signals.
Under the constraint $\mathbf{w}^{H}\mathbf{a}(\theta_s) = 1$, in order to minimize the interference and noise, we establish the objective optimization equation as follows:
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{R}_{in}\mathbf{w} \quad \text{subject to} \quad \mathbf{w}^{H}\mathbf{a}(\theta_s) = 1 \qquad (4)$$
where $\mathbf{a}(\theta_s)$ refers to the steering vector of the target signal.
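For concreteness, a steering vector can be written out explicitly. The following is a minimal Python/NumPy sketch for a far-field uniform linear array; the element spacing, frequency, and speed of sound used as defaults are assumptions taken from the simulation setup later in the paper, and the far-field form is a simplification of the near-field geometry actually used for localization.

```python
import numpy as np

def ula_steering_vector(theta_deg, n_mics=16, d=0.043, f=600.0, c=343.0):
    """Far-field steering vector a(theta) of a uniform linear array.

    theta_deg is measured from the array broadside, d is the element spacing (m),
    f the narrowband frequency (Hz), and c the speed of sound (m/s).
    """
    delays = np.arange(n_mics) * d * np.sin(np.deg2rad(theta_deg)) / c
    return np.exp(-1j * 2 * np.pi * f * delays)   # shape (n_mics,)
```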
The essence of MVDR is to solve for the weight coefficients of each array element. By using the Lagrange multiplier method to solve for the optimal value of the weight vector, the following expression can be obtained:
$$f(\mathbf{w}, \lambda) = \mathbf{w}^{H}\mathbf{R}_{in}\mathbf{w} + \lambda\left(\mathbf{w}^{H}\mathbf{a}(\theta_s) - 1\right) + \lambda^{*}\left(\mathbf{a}^{H}(\theta_s)\mathbf{w} - 1\right) \qquad (5)$$
Taking the derivative of Equation (5) with respect to $\mathbf{w}$ and setting it equal to zero, we obtain $\mathbf{w}^{H} = \lambda\,\mathbf{a}^{H}(\theta_s)\,\mathbf{R}_{in}^{-1}$. Substituting this into the constraint $\mathbf{w}^{H}\mathbf{a}(\theta_s) = 1$ of Equation (4), we have
$$\lambda = \left(\mathbf{a}^{H}(\theta_s)\,\mathbf{R}_{in}^{-1}\,\mathbf{a}(\theta_s)\right)^{-1} \qquad (6)$$
Combining with Equation (6), the MVDR weight vector can be obtained as follows:
$$\mathbf{w}_{MVDR} = \frac{\mathbf{R}_{in}^{-1}\,\mathbf{a}(\theta_s)}{\mathbf{a}^{H}(\theta_s)\,\mathbf{R}_{in}^{-1}\,\mathbf{a}(\theta_s)} \qquad (7)$$
To keep the desired signal distortion-free while making the interference and noise output as small as possible, the covariance matrix of the interference-plus-noise signal is needed. However, under actual working conditions, the signal received by the beamformer is a coupling of the desired, interference, and noise signals, and it is difficult to separate the interference signal from the noise signal; as a result, the interference-plus-noise covariance matrix cannot be obtained. In this paper, therefore, it is replaced by the covariance matrix of the received signal, $\mathbf{R}_{x}$, and the direction of the desired signal is assumed to be $\theta_s$, so that the objective optimization equation becomes
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{R}_{x}\mathbf{w} \quad \text{subject to} \quad \mathbf{w}^{H}\mathbf{a}(\theta_s) = 1 \qquad (8)$$
Assume that the desired signal is $s(t)$ with direction $\theta_s$, and that the received signal contains $L$ interfering signals with directions $\theta_i$, $i = 1, 2, \ldots, L$. The noise component of the received signal is modeled as Gaussian white noise and is denoted by $n(t)$. The covariance matrix of the received signal can then be represented by Equation (9):
$$\mathbf{R}_{x} = E\left[\mathbf{p}\mathbf{p}^{H}\right] = \sigma_{s}^{2}\,\mathbf{a}(\theta_s)\mathbf{a}^{H}(\theta_s) + \sum_{i=1}^{L}\sigma_{i}^{2}\,\mathbf{a}(\theta_i)\mathbf{a}^{H}(\theta_i) + \sigma_{n}^{2}\mathbf{I} \qquad (9)$$
Replacing the interference-plus-noise covariance matrix $\mathbf{R}_{in}$ in Equation (7) with the received-signal covariance matrix $\mathbf{R}_{x}$, we obtain the weight vector of the new MVDR beamformer as follows:
$$\mathbf{w}_{MVDR}(\theta) = \frac{\mathbf{R}_{x}^{-1}\,\mathbf{a}(\theta)}{\mathbf{a}^{H}(\theta)\,\mathbf{R}_{x}^{-1}\,\mathbf{a}(\theta)} \qquad (10)$$
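As an illustration of Equation (10), a minimal Python/NumPy sketch is given below; the sample-covariance estimate and the small diagonal loading term are our assumptions for numerical stability and are not described in the paper.

```python
import numpy as np

def mvdr_weights(snapshots, steering, loading=1e-3):
    """MVDR weight vector w(theta) = R^-1 a / (a^H R^-1 a), Equation (10).

    snapshots: (n_mics, n_snapshots) complex array of received data.
    steering:  (n_mics,) steering vector a(theta) for the look direction.
    loading:   diagonal loading factor (an assumption, for numerical stability).
    """
    n_mics, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap                   # sample covariance R_x
    R += loading * np.trace(R).real / n_mics * np.eye(n_mics)     # diagonal loading
    R_inv_a = np.linalg.solve(R, steering)                        # R_x^-1 a(theta)
    return R_inv_a / (steering.conj() @ R_inv_a)                  # Equation (10)
```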
The signal representation after MVDR weighting is as follows:
$$\mathbf{p}(\theta) = \mathbf{w}_{MVDR}(\theta)\,\mathbf{p} \qquad (11)$$
When arbitrarily selecting two microphones, there is coherence between the weighted signals of the two microphones. When the relative delay between the two beams of sound waves is zero, the coherence between the two signals is maximized, and the value of the correlation function is maximized, which corresponds to the location of the sound source. The maximum correlation function is expressed as Formula (12):
$$C_{\max} = \frac{\left\langle p_{i}(\theta, t)\cdot p_{j}(\theta, t + \Delta T)\right\rangle}{\left\langle p_{i}(\theta, t)\right\rangle\cdot\left\langle p_{j}(\theta, t)\right\rangle} \qquad (12)$$
where $\langle\cdot\rangle$ denotes the time average, $p_{i}$ denotes the weighted signal received by the $i$-th microphone, $p_{j}$ denotes the weighted signal received by the $j$-th microphone, and $\Delta T$ represents the time delay. We take $\Delta T = \frac{r_{1} - r_{2}}{v}$, where $r_{1}$ and $r_{2}$ denote the respective distances from the two microphones to the sound source $S$, and $v$ is the speed of sound in the propagation medium.
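The narrowband procedure of Equations (10)–(12) can be summarized in the following sketch, which scans a grid of candidate positions and returns the correlation map whose peak indicates the source. The analytic-signal representation, the diagonal loading, the choice of microphone pair, and the power-normalized form of the correlation are our assumptions about one plausible realization rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import hilbert

def mvdr_hbt_map(x, mic_xy, grid_xy, f, fs, c=343.0, pair=(0, 15)):
    """Narrowband MVDR-HBT correlation map over candidate source positions.

    x: (n_mics, n_samples) real microphone signals dominated by frequency f;
    mic_xy: (n_mics, 2) microphone coordinates; grid_xy: (n_pts, 2) candidates;
    pair: the two microphones (i, j) used in Equation (12) - an assumption.
    """
    xa = hilbert(x, axis=1)                                   # analytic signals
    n_mics = x.shape[0]
    R = xa @ xa.conj().T / xa.shape[1]                        # sample covariance R_x
    R += 1e-3 * np.trace(R).real / n_mics * np.eye(n_mics)    # diagonal loading (assumption)
    i, j = pair
    out = np.empty(len(grid_xy))
    for g, src in enumerate(grid_xy):
        r = np.linalg.norm(mic_xy - src, axis=1)              # mic-to-source distances
        a = np.exp(-1j * 2 * np.pi * f * r / c)               # near-field steering vector
        Ria = np.linalg.solve(R, a)
        w = Ria / (a.conj() @ Ria)                            # Equation (10)
        p = w[:, None].conj() * xa                            # per-microphone weighted signals
        lag = int(round((r[i] - r[j]) / c * fs))              # delay dT in samples
        pj = np.roll(p[j], -lag)                              # p_j(theta, t + dT)
        num = np.abs(np.mean(p[i] * pj.conj()))
        den = np.sqrt(np.mean(np.abs(p[i])**2) * np.mean(np.abs(pj)**2))
        out[g] = num / den                                    # normalized correlation
    return out                                                # peak marks the source position
```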

2.2. Principle of Broadband Sound Source Localization Using MVDR-HBT

A flowchart of MVDR-HBT broadband signal localization is shown in Figure 2.
From the above narrowband principle, it can be concluded that by performing the FFT on the broadband signal $\mathbf{p}_{b}$, we can obtain
$$\mathrm{FFT}\left(p_{bi}\right) = \left[p_{bi}(f_{0}),\ p_{bi}(f_{1}),\ \ldots,\ p_{bi}(f_{n})\right] \qquad (13)$$
The term $\mathrm{FFT}(\cdot)$ represents the $N$-point fast Fourier transform (FFT), with $n = N/2$.
After frequency division processing, the broadband signal is converted to a set of narrowband signals, and the value of $\mathbf{w}_{MVDR}(\theta, f_{j})$ is calculated for each frequency component:
$$\mathbf{w}_{MVDR}(\theta, f_{j}) = \frac{\mathbf{R}_{x}^{-1}(f_{j})\,\mathbf{a}(\theta, f_{j})}{\mathbf{a}^{H}(\theta, f_{j})\,\mathbf{R}_{x}^{-1}(f_{j})\,\mathbf{a}(\theta, f_{j})} \qquad (14)$$
Based on Equation (14), we can obtain the MVDR-weighted microphone signals $\mathbf{p}_{b}(f_{j})$ corresponding to each sub-band frequency:
$$\mathbf{p}_{b}(f_{j}) = \mathbf{w}_{MVDR}(\theta, f_{j}) \odot \mathrm{IFFT}\left(\mathbf{p}_{b}(f_{j})\right) \qquad (15)$$
where $\mathrm{IFFT}(\cdot)$ denotes the inverse fast Fourier transform and $\odot$ represents the Hadamard product, defined as element-wise multiplication of corresponding positions.
Taking the weighted signals of each sub-band frequency of every microphone and averaging them, we obtain
$$p_{bi}^{*} = E\left[\mathbf{p}_{b}(f_{j})\right] \qquad (16)$$
Finally, we substitute the result into Equation (12) to obtain
$$C_{\max} = \frac{\left\langle p_{bi}(\theta, t)\cdot p_{bj}(\theta, t + \Delta T)\right\rangle}{\left\langle p_{bi}(\theta, t)\right\rangle\cdot\left\langle p_{bj}(\theta, t)\right\rangle} \qquad (17)$$
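A rough sketch of the broadband chain in Equations (13)–(16) is shown below; the bin-by-bin covariance estimate, the diagonal loading, and the way sub-bands are isolated and averaged are our assumptions about one plausible realization, not the authors' code.

```python
import numpy as np

def mvdr_hbt_broadband_weighting(x, steering_per_bin, loading=1e-3):
    """Per-bin MVDR weighting and sub-band averaging (Equations (13)-(16), as we read them).

    x:                (n_mics, N) real broadband microphone signals.
    steering_per_bin: (N//2, n_mics) steering vectors a(theta, f_j) for the
                      positive-frequency bins of the N-point FFT.
    Returns the sub-band-averaged weighted signals, one row per microphone.
    """
    n_mics, N = x.shape
    X = np.fft.fft(x, axis=1)                          # Equation (13): FFT per microphone
    acc = np.zeros((n_mics, N), dtype=complex)
    n_bins = N // 2
    for j in range(1, n_bins):                         # loop over sub-band frequencies f_j
        a = steering_per_bin[j]
        Xj = np.zeros_like(X)
        Xj[:, j] = X[:, j]                             # isolate bin j and its mirror bin
        Xj[:, N - j] = X[:, N - j]
        xj = np.fft.ifft(Xj, axis=1)                   # narrowband component in time domain
        R = xj @ xj.conj().T / N                       # covariance R_x(f_j) of this sub-band
        R += loading * np.trace(R).real / n_mics * np.eye(n_mics)   # loading (assumption)
        Ria = np.linalg.solve(R, a)
        w = Ria / (a.conj() @ Ria)                     # Equation (14): per-bin MVDR weights
        acc += w[:, None].conj() * xj                  # Equation (15): element-wise weighting
    return acc / (n_bins - 1)                          # Equation (16): average over sub-bands
```

The averaged signals can then be substituted into the correlation of Equation (17), for example with the pairing and normalization used in the narrowband sketch above.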

3. Simulation Analysis

To verify the feasibility of the algorithm, a simulation analysis was carried out using Matlab software, and the positioning performance of the algorithm under a low SNR was evaluated and compared with the positioning performance of HBT. In addition, the effects of different sound source positions and frequencies on the positioning performance were also investigated.
Figure 3 shows the microphone array structure. Two linear microphone arrays were used for sound source localization. The initial parameters of the simulation were as follows: the microphone coordinates of array I were ((i − 1) × d, 0), and the microphone coordinates of array II were (10 + (i − 1) × d, 0), where i = 1, 2, 3, …, 16; D = 10 m was the distance between the two arrays, and d = 0.043 m was the distance between adjacent microphones. The sound source signal was a 600 Hz single-frequency signal.
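For reference, the simulated geometry and source signal described above can be written down directly; the sampling rate and signal duration below are assumptions, as the paper does not state them.

```python
import numpy as np

fs, f0, dur = 48_000, 600.0, 0.5            # sampling rate and duration (assumptions)
d, D, n = 0.043, 10.0, 16                   # element spacing, array separation, elements per array
array1 = np.column_stack([np.arange(n) * d, np.zeros(n)])       # array I:  ((i-1)*d, 0)
array2 = np.column_stack([D + np.arange(n) * d, np.zeros(n)])   # array II: (10+(i-1)*d, 0)
mic_xy = np.vstack([array1, array2])        # 32 microphones in total
t = np.arange(int(fs * dur)) / fs
source = np.sin(2 * np.pi * f0 * t)         # 600 Hz single-frequency source signal
```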

3.1. SNR

In this study, simulations were conducted for both HBT and MVDR-HBT under SNR values of 0 dB, −5 dB, and −10 dB. The simulation results of the HBT and MVDR-HBT algorithms for an SNR of −10 dB and the sound source location (5, 5) are shown in Figure 4. HBT generated position coordinates of (5.9, 4.9) at the maximum value of the correlation function, giving a relative error of localization of 12.81%. MVDR-HBT generated position coordinates of (5.2, 5), giving a relative error of localization of 2.83%, indicating that MVDR-HBT produced a more accurate location at a low SNR. Subsequently, MVDR-HBT was used to perform localization at −5 dB and 0 dB. These localization results were then compared with those of HBT, as summarized in Table 1. With decreasing SNR, the localization error and relative error of both MVDR-HBT and HBT increased. However, compared with HBT, the relative error percentage of MVDR-HBT was reduced by 1.41%, 4.59%, and 9.98% when the SNR was 0 dB, −5 dB, and −10 dB, respectively. These results indicate that MVDR-HBT has a better localization effect than HBT at low SNR and produces higher localization accuracy.
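The relative positioning errors quoted above are consistent with dividing the Euclidean norm of the coordinate error by the distance of the true source position from the array origin (our reading; the paper does not state the formula explicitly). For example, at −10 dB:

$$\frac{\left\|(5.9,\,4.9)-(5,\,5)\right\|}{\left\|(5,\,5)\right\|}=\frac{\sqrt{0.9^{2}+0.1^{2}}}{\sqrt{5^{2}+5^{2}}}\approx\frac{0.906}{7.071}\approx 12.81\%,\qquad \frac{\left\|(5.2,\,5)-(5,\,5)\right\|}{\left\|(5,\,5)\right\|}=\frac{0.2}{7.071}\approx 2.83\%.$$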

3.2. Distance to Sound Source Target

The simulation parameters were set as before, with the only difference being a change in the position of the sound source. The simulation results obtained using MVDR-HBT for different source positions at an SNR of −5 dB are shown in Figure 5; from these, the influence of different positions on the localization accuracy can be further studied. As can be seen from Figure 5, the localization results of MVDR-HBT at positions (5, 5) and (8, 10) were (5.1, 5) and (8.2, 10), with relative errors of localization of 1.41% and 1.56%, respectively, indicating that MVDR-HBT can still accurately locate the position at low SNR. These results were compared with the localization results of HBT, summarized in Table 2. It can be inferred from Table 2 that, with increasing distance, the localization error gradually increased under low-SNR conditions below 0 dB. In comparison, MVDR-HBT maintained good localization performance under these conditions. Therefore, the MVDR-HBT algorithm exhibits higher localization accuracy when compared with HBT. In brief, MVDR-HBT can accurately localize a sound source target in a low-SNR environment.

3.3. Frequency

To investigate the influence of frequency on the localization results, single-frequency and multi-frequency signals were selected to conduct simulation analysis using MVDR-HBT. The simulation results for the sound source position (8, 10) at 600 Hz and at 600 Hz, 700 Hz, 800 Hz are shown in Figure 6, and the effect of different frequencies on the localization accuracy of MVDR-HBT was further studied. The results are summarized in Table 3 and compared with the localization results of HBT. As shown in Table 3, at an SNR of −5 dB, the localization accuracy of MVDR-HBT became more precise with an increase in spectral width, as the sound source signal contained more information. Although the localization accuracy of HBT for multi-frequency signals improved compared to that for single-frequency signals in low-SNR situations, there was still a gap between HBT and MVDR-HBT in terms of localization error.

4. The Sound Source Detection Experiment

For an outdoor experiment, we utilized a 16-element linear microphone array. When a sound signal reaches the microphone array, there are time or phase differences between the signals on different microphones. By calculating the time or phase differences between the microphones, the direction of the signal can be inferred. However, due to the limitations of the array topology, it is difficult to accurately locate the sound source coordinates when locating far-field sound sources, as only the direction of the signal can be inferred. Therefore, in this outdoor experiment, the directional deviation between the real signal incidence direction and the calculated signal incidence direction of the localization algorithm was used to evaluate the algorithm’s localization performance.
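The real incident directions and distances reported in Tables 4–6 are consistent with measuring the angle from the array normal, i.e., $\theta = \arctan(x/y)$ and $r = \sqrt{x^{2}+y^{2}}$ for a source at $(x, y)$ relative to the array origin (our reading of the tabulated values). For example:

$$\theta_{(2.5,\,2)}=\arctan\frac{2.5}{2}\approx 51.34^{\circ},\qquad \theta_{(5,\,5)}=\arctan\frac{5}{5}=45^{\circ},\qquad \theta_{(6,\,8)}=\arctan\frac{6}{8}\approx 36.87^{\circ}.$$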
An experimental platform based on a microphone array was constructed, and to reduce the interference of reverberation on the experiment, the experiment was conducted in an open space, as shown in Figure 7. MVDR-HBT and HBT were tested under different sound source parameters, and the localization performance of the two algorithms in a real environment was compared and analyzed.

4.1. SNR

To verify the ability of MVDR-HBT to accurately locate sound sources in low-SNR real environments, white Gaussian noise was added to the original sound signal to simulate a low-SNR environment, with SNR values of 0 dB and −5 dB. In the positioning experiments, sound sources located at coordinates (5, 5) and (2.5, 2) were tested, with the experiment parameters set as follows: the sound source signal was a piece of music ranging from 200 Hz to 2000 Hz, with a sampling frequency of 192 kHz; the microphone array had a uniform linear structure, with microphone coordinates of (0, 0), (0.129, 0), and (0.258, 0). The experimental results of MVDR-HBT for sound source coordinates (5, 5) at SNR values of 0 dB and −5 dB are shown in Figure 8, and the experimental results of MVDR-HBT and HBT for different SNR values are summarized in Table 4. In Figure 8, the bright striped area represents the direction of the sound source. The same applies to Figure 9 and Figure 10. As shown in Figure 8 and Table 4, when the SNR values were −5 dB and 0 dB, the bright striped areas fluctuated, and the fluctuations were more obvious at −5 dB than at 0 dB, but the sound source direction could still be distinguished. After adding white Gaussian noise, the performance of both algorithms decreased to some extent, but the angle deviation of MVDR-HBT was smaller than that of HBT at both sound source coordinates (5, 5) and (2.5, 2), indicating that the algorithm proposed in this paper has better positioning performance.
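A standard way to add white Gaussian noise at a prescribed SNR, which is how we assume the noisy test conditions were generated (the paper does not give the procedure), is:

```python
import numpy as np

def add_awgn(x, snr_db, rng=np.random.default_rng(0)):
    """Return x plus white Gaussian noise scaled to give the requested SNR in dB."""
    p_signal = np.mean(x ** 2)                     # average signal power
    p_noise = p_signal / 10 ** (snr_db / 10.0)     # noise power for the target SNR
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

# e.g. noisy = add_awgn(music_segment, snr_db=-5)  # the -5 dB test condition
```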

4.2. Distance to Sound Source Target

To investigate the impact of distance between sound sources on the localization performance, experiments were conducted to verify different target sound source positions with coordinates of (2.5, 2), (5, 3), (5, 5), and (6, 8). The experimental results obtained using MVDR-HBT and HBT for the sound source coordinates of (2.5, 2) are shown in Figure 9 and compared with the localization results of HBT, summarized in Table 5. Figure 9 and Table 5 reveal that MVDR-HBT could accurately identify the direction of the sound source at (2.5, 2), and the bright stripes were more concentrated compared to those from HBT. As the distance gradually increased, the angle deviation of both algorithms also increased, and the angle deviation of MVDR-HBT was smaller than that of HBT. When the sound source coordinates were (2.5, 2), the angle deviation of MVDR-HBT was only 0.77°, while that of HBT was 1.48°. When the sound source coordinates were (6, 8), which was the farthest distance from the microphone array, the angle deviations of both MVDR-HBT and HBT were the largest. This experiment mainly aimed to locate far-field target sound sources, and the angular deviation of MVDR-HBT was 3.17° at this time, which was smaller than that of HBT.

4.3. Frequency

In practical environments, most sound signals are multi-frequency. To investigate the effect of frequency on the localization performance, experiments were conducted on sound signals of different frequencies. The experimental results of MVDR-HBT for sound source coordinates (5, 5) on single-frequency and multi-frequency sound signals are shown in Figure 10 and were compared with the localization results of HBT, summarized in Table 6. It can be observed from Figure 10 and Table 6 that as the spectral width of the sound signal increases, the signal contains more information, resulting in more concentrated bright stripes and more accurate localization performance for MVDR-HBT. Although HBT exhibited improved localization accuracy for multi-frequency signals compared to single-frequency signals, there was still a gap in angle deviation compared to MVDR-HBT.

5. Summary and Conclusions

In this paper, we introduced a sound source localization method, called MVDR-HBT, that combines MVDR and HBT interference based on the sound field for target localization. We analyzed the feasibility of this method from a theoretical perspective and conducted simulations and experiments to evaluate its performance. The simulation results showed that the method achieved precise localization of a sound source target in a remote, low-SNR environment, and the localization accuracy obtained using MVDR-HBT was significantly better than that obtained using the HBT method. This confirmed the effectiveness and superiority of the proposed method in low-SNR environments. In addition, we experimentally verified the localization performance and anti-interference ability of the proposed algorithm in real environments. Our results demonstrate the potential for application of this algorithm in more complex environments. In short, this research into high-precision sound source localization in complex environments has significant implications.

Author Contributions

Conceptualization, M.L.; methodology, S.Q.; validation, M.L.; formal analysis, X.Z.; investigation, S.Q. and X.Z.; data curation, X.Z.; writing—original draft preparation, S.Q.; writing—review and editing, S.Q. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China [Grant No. 51805154], Hubei Provincial Natural Science Foundation of China [2022CFB473], and Green Industry Technology Leading Project of Hubei University of Technology [XJ2021004901].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Some or all of the data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liaquat, M.U.; Munawar, H.S.; Rahman, A.; Qadir, Z.; Kouzani, A.Z.; Mahmud, M.A.P. Localization of sound sources: A systematic review. Energies 2021, 14, 3910. [Google Scholar] [CrossRef]
  2. Moravec, M.; Badida, M.; Mikušová, N.; Sobotová, L.; Švajlenka, J.; Dzuro, T. Proposed options for noise reduction from a wastewater treatment plant: Case study. Sustainability 2021, 13, 2409. [Google Scholar] [CrossRef]
  3. Fiebig, W.; Dąbrowski, D. Use of acoustic camera for noise sources localization and noise reduction in the industrial plant. Arch. Acoust. 2020, 45, 111–117. [Google Scholar]
  4. Yang, X.; Xing, H.; Ji, X. Sound source omnidirectional positioning calibration method based on microphone observation angle. Complexity 2018, 2018, 2317853. [Google Scholar] [CrossRef]
  5. Miao, F.; Yang, D.; Wen, J.; Lian, X. Moving sound source localization based on triangulation method. J. Sound Vib. 2016, 385, 93–103. [Google Scholar] [CrossRef]
  6. Thomas, B.; Hunter, A. Coherence-induced bias reduction in synthetic aperture sonar along-track micronavigation. IEEE J. Ocean. Eng. 2021, 47, 162–178. [Google Scholar] [CrossRef]
  7. Fayad, Y.; Wang, C.; Cao, Q. Temporal-spatial subspaces modern combination method for 2D-DOA estimation in MIMO radar. J. Syst. Eng. Electron. 2017, 28, 697–702. [Google Scholar]
  8. Jin, B.; Xu, X.; Zhu, Y.; Zhang, T.; Fei, Q. Single-source aided semi-autonomous passive location for correcting the position of an underwater vehicle. IEEE Sens. J. 2019, 19, 3267–3275. [Google Scholar] [CrossRef]
  9. Liu, G.; Yuan, S.; Wu, J.; Zhang, R. A sound source localization method based on microphone array for mobile robot. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018; pp. 1621–1625. [Google Scholar]
  10. Chen, G.; Xu, Y. A sound source localization device based on rectangular pyramid structure for mobile robot. J. Sens. 2019, 2019, 4639850. [Google Scholar] [CrossRef]
  11. Ximing, D.; Wenzhong, L.; Peng, L.; Mingru, G.; Fufu, W. Speaker tracking based on microphone cross arry in the smart conference system. In Proceedings of the 2014 IEEE International Conference on Consumer Electronics-China, Shenzhen, China, 9–13 April 2014; pp. 1–4. [Google Scholar]
  12. Asp, F.; Jakobsson, A.-M.; Berninger, E. The effect of simulated unilateral hearing loss on horizontal sound localization accuracy and recognition of speech in spatially separate competing speech. Hear. Res. 2018, 357, 54–63. [Google Scholar] [CrossRef]
  13. Gierlich, R. Joint estimation of spatial and motional radar target parameters by multidimensional spectral analysis. In Proceedings of the 2015 16th International Radar Symposium (IRS), Dresden, Germany, 24–26 June 2015; pp. 95–101. [Google Scholar]
  14. Rahman, S.; Arifianto, D.; Dhanardono, T. Localization of underwater moving sound source based on time delay estimation using hydrophone array. J. Phys. Conf. Ser. 2016, 776, 012075. [Google Scholar] [CrossRef]
  15. Xu, J.; Gao, C.; Liu, H.; Yuan, X.; Dong, Y.; Liu, L. Sound Source Localization of Firearms Based on TDOA Optimization Algorithm. In Proceedings of the 2022 Global Reliability and Prognostics and Health Management (PHM-Yantai), Yantai, China, 13–16 October 2022; pp. 1–5. [Google Scholar]
  16. Shi, W.; Li, Y.; Zhao, L.; Liu, X. Controllable sparse antenna array for adaptive beamforming. IEEE Access 2019, 7, 6412–6423. [Google Scholar] [CrossRef]
  17. Lafta, N.A.; Hreshee, S.S. Modified Multiple Signal Classification Algorithm for WSNs Localization. In Proceedings of the 2020 3rd International Conference on Engineering Technology and its Applications (IICETA), Najaf, Iraq, 6–7 September 2020; pp. 56–61. [Google Scholar]
  18. Chang, H.; Li, W. Correction-based diffusion LMS algorithms for secure distributed estimation under attacks. Digit. Signal Process. 2020, 102, 102735. [Google Scholar] [CrossRef]
  19. Knapp, C.; Carter, G. The generalized correlation method for estimation of time delay. IEEE Trans. Acoust. Speech Signal Process. 1976, 24, 320–327. [Google Scholar] [CrossRef]
  20. Mu, W.; Qu, W.; Liu, G.; Zou, Z.; Liu, P. A study on beamforming location method with improved search strategy. Appl. Acoust. 2017, 36, 298–304. [Google Scholar]
  21. Dam, H.Q.H.; Nordholm, S. Source separation employing beamforming and SRP-PHAT localization in three-speaker room environments. Vietnam J. Comput. Sci. 2017, 4, 161–170. [Google Scholar] [CrossRef]
  22. Salvati, D.; Drioli, C.; Foresti, G.L. Sensitivity-based region selection in the steered response power algorithm. Signal Process. 2018, 153, 1–10. [Google Scholar] [CrossRef]
  23. Liu, M.; Hu, J.; Zeng, Q.; Jian, Z.; Nie, L. Sound source localization based on multi-channel cross-correlation weighted beamforming. Micromachines 2022, 13, 1010. [Google Scholar] [CrossRef] [PubMed]
  24. Liu, D.; Cai, X.; Yu, D.; Qiao, Z.; Dong, H.; Wu, M. Sound Source Localization Methods Based on Lagrange-Galerkin Spherical Grid. In Proceedings of the 2021 IEEE International Conference on Electrical Engineering and Mechatronics Technology (ICEEMT), Qingdao, China, 2–4 July 2021; pp. 665–670. [Google Scholar]
  25. Hahn, W.; Tretter, S. Optimum processing for delay-vector estimation in passive signal arrays. IEEE Trans. Inf. Theory 1973, 19, 608–614. [Google Scholar] [CrossRef]
  26. Liu, M.; Nie, L.; Li, S.; Jia, W. Passive positioning of sound target based on HBT interference. AIP Adv. 2019, 9, 105120. [Google Scholar] [CrossRef]
  27. Brown, R.H.; Twiss, R.Q. Interferometry of the intensity fluctuations in light. II. An experimental test of the theory for partially coherent light. Proc. R. Soc. Lond. Ser. A Math. Phys. Sci. 1958, 243, 291–319. [Google Scholar]
  28. Capon, J. High-resolution frequency-wavenumber spectrum analysis. Proc. IEEE 1969, 57, 1408–1418. [Google Scholar] [CrossRef]
  29. Wolfel, M.; McDonough, J. Minimum variance distortionless response spectral estimation. IEEE Signal Process. Mag. 2005, 22, 117–126. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of MVDR-HBT sound field positioning.
Figure 2. Broadband Signal Localization Process Flow Diagram of MVDR-HBT.
Figure 3. Microphone array structure.
Figure 4. Simulation results for a sound source at (5, 5): (a) −10 dB HBT; (b) −10 dB MVDR-HBT.
Figure 5. Simulation results for sound sources at (a) (5, 5); (b) (8, 10).
Figure 6. Simulation Results of MVDR-HBT from Single-Frequency to Multi-Frequency: (a) 600 Hz; (b) 600 Hz, 700 Hz, 800 Hz.
Figure 7. Experimental Scene.
Figure 8. Experimental Results of MVDR-HBT at SNR values of (a) −5 dB and (b) 0 dB.
Figure 9. Experimental results of MVDR-HBT and HBT at (2.5, 2): (a) HBT; (b) MVDR-HBT.
Figure 10. Experimental Results of MVDR-HBT for Single-Frequency to Multi-Frequency: (a) 600 Hz; (b) 200~2000 Hz.
Table 1. Results of simulation of sound targets with different SNRs.

| SNR (dB) | Method | Position (m) | Results (m) | Errors (m) | Relative Positioning Error (%) | Percentage Decrease in Error (%) |
| −10 | HBT | (5, 5) | (5.9, 4.9) | (0.9, −0.1) | 12.81 | 9.98 |
| −10 | MVDR-HBT | (5, 5) | (5.2, 5) | (0.2, 0) | 2.83 | |
| −5 | HBT | (5, 5) | (5.3, 4.7) | (0.3, −0.3) | 6.00 | 4.59 |
| −5 | MVDR-HBT | (5, 5) | (5.1, 5) | (0.1, 0) | 1.41 | |
| 0 | HBT | (5, 5) | (5.1, 5) | (0.1, 0) | 1.41 | 1.41 |
| 0 | MVDR-HBT | (5, 5) | (5, 5) | (0, 0) | 0 | |
Table 2. Results of the simulation for the same sound target at different positions.

| SNR (dB) | Method | Position (m) | Results (m) | Errors (m) | Relative Positioning Error (%) | Percentage Decrease in Error (%) |
| −10 | HBT | (5, 5) | (5.9, 4.9) | (0.9, −0.1) | 12.81 | 9.98 |
| −10 | MVDR-HBT | (5, 5) | (5.2, 5) | (0.2, 0) | 2.83 | |
| −10 | HBT | (8, 10) | (7.6, 10.6) | (−0.4, 0.6) | 5.63 | 3.88 |
| −10 | MVDR-HBT | (8, 10) | (8.1, 9.8) | (0.1, −0.2) | 1.75 | |
| −5 | HBT | (5, 5) | (5.3, 4.7) | (0.3, −0.3) | 6.00 | 4.59 |
| −5 | MVDR-HBT | (5, 5) | (5.1, 5) | (0.1, 0) | 1.41 | |
| −5 | HBT | (8, 10) | (7.6, 9.6) | (−0.4, −0.4) | 4.42 | 2.86 |
| −5 | MVDR-HBT | (8, 10) | (8.2, 10) | (0.2, 0) | 1.56 | |
Table 3. Comparison of simulation results of MVDR-HBT and HBT from a single frequency to multiple frequencies.

| SNR (dB) | Frequency (Hz) | Method | Position (m) | Results (m) | Errors (m) | Relative Positioning Error (%) |
| −5 | 600 | HBT | (8, 10) | (7.6, 9.6) | (−0.4, −0.4) | 4.42 |
| −5 | 600 | MVDR-HBT | (8, 10) | (8.2, 10) | (0.2, 0) | 1.56 |
| −5 | 600, 700 | HBT | (8, 10) | (7.8, 9.5) | (−0.2, −0.5) | 4.21 |
| −5 | 600, 700 | MVDR-HBT | (8, 10) | (8.1, 10.1) | (0.1, 0.1) | 1.10 |
| −5 | 600, 700, 800 | HBT | (8, 10) | (8.1, 10.3) | (0.1, 0.3) | 2.47 |
| −5 | 600, 700, 800 | MVDR-HBT | (8, 10) | (8, 10) | (0, 0) | 0 |
Table 4. Experimental Results Comparison of MVDR-HBT and HBT at Different SNRs.

| SNR (dB) | Method | Position (m) | Distance (m) | Real Incident Direction (°) | Estimated Direction of Arrival (°) | Angular Deviation (°) |
| 0 | MVDR-HBT | (2.5, 2) | 3.20 | 51.34 | 52.43 | 1.09 |
| 0 | HBT | (2.5, 2) | 3.20 | 51.34 | 53.62 | 2.28 |
| 0 | MVDR-HBT | (5, 5) | 7.07 | 45.0 | 48.24 | 3.24 |
| 0 | HBT | (5, 5) | 7.07 | 45.0 | 48.95 | 3.95 |
| −5 | MVDR-HBT | (2.5, 2) | 3.20 | 51.34 | 53.47 | 2.13 |
| −5 | HBT | (2.5, 2) | 3.20 | 51.34 | 54.63 | 3.29 |
| −5 | MVDR-HBT | (5, 5) | 7.07 | 45.0 | 48.74 | 3.74 |
| −5 | HBT | (5, 5) | 7.07 | 45.0 | 49.47 | 4.47 |
Table 5. Experimental Results Comparison of MVDR-HBT and HBT at Different Locations.

| Frequency (Hz) | Method | Position (m) | Distance (m) | Real Incident Direction (°) | Estimated Direction of Arrival (°) | Angular Deviation (°) |
| 200~2000 | MVDR-HBT | (2.5, 2) | 3.20 | 51.34 | 50.57 | 0.77 |
| 200~2000 | HBT | (2.5, 2) | 3.20 | 51.34 | 52.82 | 1.48 |
| 200~2000 | MVDR-HBT | (5, 3) | 5.83 | 59.04 | 61.33 | 2.29 |
| 200~2000 | HBT | (5, 3) | 5.83 | 59.04 | 55.85 | 3.19 |
| 200~2000 | MVDR-HBT | (5, 5) | 7.07 | 45.0 | 47.78 | 2.78 |
| 200~2000 | HBT | (5, 5) | 7.07 | 45.0 | 48.43 | 3.43 |
| 200~2000 | MVDR-HBT | (6, 8) | 10 | 36.87 | 33.70 | 3.17 |
| 200~2000 | HBT | (6, 8) | 10 | 36.87 | 32.91 | 3.96 |
Table 6. Comparison of MVDR-HBT and HBT in Single-Frequency to Multi-Frequency Experiments.

| Frequency (Hz) | Method | Position (m) | Distance (m) | Real Incident Direction (°) | Estimated Direction of Arrival (°) | Angular Deviation (°) |
| 600 | MVDR-HBT | (2.5, 2) | 3.20 | 51.34 | 53.13 | 1.79 |
| 600 | HBT | (2.5, 2) | 3.20 | 51.34 | 53.75 | 2.41 |
| 600 | MVDR-HBT | (5, 5) | 7.07 | 45.0 | 48.58 | 3.58 |
| 600 | HBT | (5, 5) | 7.07 | 45.0 | 49.39 | 4.39 |
| 200~2000 | MVDR-HBT | (2.5, 2) | 3.20 | 51.34 | 50.57 | 0.77 |
| 200~2000 | HBT | (2.5, 2) | 3.20 | 51.34 | 52.82 | 1.48 |
| 200~2000 | MVDR-HBT | (5, 5) | 7.07 | 45.0 | 47.78 | 2.78 |
| 200~2000 | HBT | (5, 5) | 7.07 | 45.0 | 48.43 | 3.43 |
