Article

Dual-Driven Solver for Reconstructing the Point Sources of Elastic Wave Based on Far-Field Data

1 School of Mathematics and Statistics, Changchun University of Science and Technology, Changchun 130022, China
2 Experiment Centre of Mathematics, Changchun University of Science and Technology, Changchun 130022, China
* Author to whom correspondence should be addressed.
Universe 2023, 9(3), 148; https://doi.org/10.3390/universe9030148
Submission received: 18 January 2023 / Revised: 7 March 2023 / Accepted: 9 March 2023 / Published: 12 March 2023
(This article belongs to the Section Mathematical Physics)

Abstract

Aiming at the inverse source problem of elastic waves, a dual-driven solver is considered to reconstruct point sources. In this way, the number, location, and magnitude of the point sources can be reconstructed from far-field measurement data. The solver is composed of a data-driven module and a physical-driven module, coupled through a loss function. The losses of the data-driven module and the physical-driven module are both the driving force of the solver's evolution. The solver takes the far-field data as the input and the number, location, and magnitude of the point sources as the output, and is trained by the Adam algorithm. Numerical experiments show that this method is effective for reconstructing multiple sources.

1. Introduction

The inverse source problem is widely used in scientific fields and engineering applications, such as environmental pollution, medical diagnosis, and seismic monitoring [1,2,3]. In this paper, we establish a dual-driven solver with data-driven and physical-driven modules. The solver can be used to reconstruct the number, location and magnitude of the point sources in the elastic wave field.
The inverse source problem for elastic waves in isotropic homogeneous media is described as follows. Suppose that $\Omega \subset \mathbb{R}^2$ denotes a simply connected bounded domain with $C^2$ boundary $\Gamma = \partial\Omega$. Let the elastic wave field be described by a radiation field $u \in C^2(\mathbb{R}^2)$. Then the propagation of the elastic wave from the source term $S(x)$ is governed by the Lamé system:

$$\mu \Delta u + (\lambda + \mu)\nabla\nabla\cdot u + \omega^2 u = S(x), \qquad x \in \mathbb{R}^2, \tag{1}$$
where λ , μ are known as the Lamé constants, satisfying μ > 0 , λ + 2 μ > 0 , and ω > 0 is the angular frequency of the elastic wave. Physically, an elastic wave field has the following decomposition form:
$$u = u_p + u_s,$$

where $u_p$ and $u_s$ are the P-wavefield and S-wavefield, respectively, satisfying the Kupradze–Sommerfeld radiation conditions

$$\lim_{r \to \infty} r^{1/2}\left(\frac{\partial u_p}{\partial r} - i k_p u_p\right) = 0, \qquad \lim_{r \to \infty} r^{1/2}\left(\frac{\partial u_s}{\partial r} - i k_s u_s\right) = 0. \tag{2}$$

Here, $r = |x|$, and $k_p = \omega/\sqrt{\lambda + 2\mu}$ and $k_s = \omega/\sqrt{\mu}$ are the wave numbers of the P-wave and S-wave, respectively. The solution $u$ to Equations (1) and (2) can be written as

$$u(x) = \int_{\Omega} G(\omega, x, y) S(y)\, dy,$$
where $G(\omega, x, y)$ is the Green tensor corresponding to the Navier equation [4],

$$G(\omega, x, y) = \frac{i}{4\mu} H_0^{(1)}\!\left(k_s |x-y|\right) I + \frac{i}{4\omega^2} \nabla_x \nabla_x^{\top} \left[ H_0^{(1)}\!\left(k_s |x-y|\right) - H_0^{(1)}\!\left(k_p |x-y|\right) \right], \qquad x, y \in \mathbb{R}^2.$$

Here, $I$ is the $2 \times 2$ identity matrix and $H_0^{(1)}$ is the Hankel function of the first kind and order zero.
Assume that the source term $S(x)$ in Equation (1) consists of a finite number of well-separated point sources, which can be expressed as

$$S(x) = \sum_{j=1}^{N} p_j \,\delta\!\left(x - z_j\right), \qquad N \in \mathbb{N}^{+}, \quad z_j \in \Omega,$$
where $\delta$ denotes the Dirac delta distribution, $p_j \in \mathbb{R}^2$ represents the magnitude of the $j$th source, $z_j \in \mathbb{R}^2$ represents the location of the $j$th source, $j = 1, 2, \ldots, N$, and $N$ is the number of sources. In particular, the scattering field $u(x; z, p)$ associated with the locations $z = \left(z_1, z_2, \ldots, z_N\right)$ and magnitudes $p = \left(p_1, p_2, \ldots, p_N\right)$ has the following asymptotic expansion [5]:

$$u(x; z, p) = \frac{e^{i k_p |x|}}{\sqrt{|x|}}\, u_{p,\infty}(\hat{x}; z, p) + \frac{e^{i k_s |x|}}{\sqrt{|x|}}\, u_{s,\infty}(\hat{x}; z, p) + O\!\left(|x|^{-3/2}\right), \qquad |x| \to +\infty, \tag{3}$$

where

$$u_{p,\infty}(\hat{x}; z, p) = \frac{e^{i\pi/4}}{(\lambda + 2\mu)\sqrt{8\pi k_p}} \sum_{j=1}^{N} \hat{x}\hat{x}^{\top} e^{-i k_p \hat{x} \cdot z_j}\, p_j, \tag{4}$$

$$u_{s,\infty}(\hat{x}; z, p) = \frac{e^{i\pi/4}}{\mu\sqrt{8\pi k_s}} \sum_{j=1}^{N} e^{-i k_s \hat{x} \cdot z_j} \left(I - \hat{x}\hat{x}^{\top}\right) p_j. \tag{5}$$
Here, $\lambda, \mu$ are the Lamé constants, $\hat{x} = x/|x|$ is the observation direction, $i = \sqrt{-1}$ is the imaginary unit, $u_{p,\infty}(\hat{x}; z, p) = \left(u_p^1, u_p^2, \ldots, u_p^M\right) \in \mathbb{C}^M$ and $u_{s,\infty}(\hat{x}; z, p) = \left(u_s^1, u_s^2, \ldots, u_s^M\right) \in \mathbb{C}^M$ are the far-field patterns of $u_p(x)$ and $u_s(x)$, respectively, and $M$ is the number of observation directions. For a given observation direction $\hat{x}$, the far-field data $u_{p,\infty}(\hat{x}; z, p)$ and $u_{s,\infty}(\hat{x}; z, p)$ corresponding to the locations $z$ and magnitudes $p$ can be calculated by Equations (1)–(5). Our goal is to use the measured far-field data $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$ to reconstruct the number, location, and magnitude $(z, p)$ of the point sources.
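As a sketch of the forward map just described, the far-field data of Equations (4) and (5) can be evaluated directly; `far_field`, its argument names, and the choice of equispaced directions on the unit circle are our illustrative assumptions, not code from the paper.

```python
import numpy as np

def far_field(z, p, lam=1.0, mu=1.0, omega=1.0, M=10):
    """Hypothetical sketch of the forward map (z, p) -> (u_p_inf, u_s_inf)
    following Equations (4)-(5), with one consistent sign convention."""
    kp = omega / np.sqrt(lam + 2.0 * mu)          # P-wave number
    ks = omega / np.sqrt(mu)                      # S-wave number
    cp = np.exp(1j * np.pi / 4) / ((lam + 2.0 * mu) * np.sqrt(8.0 * np.pi * kp))
    cs = np.exp(1j * np.pi / 4) / (mu * np.sqrt(8.0 * np.pi * ks))
    up = np.zeros((M, 2), dtype=complex)
    us = np.zeros((M, 2), dtype=complex)
    for i in range(M):                            # M directions on the unit circle
        t = 2.0 * np.pi * i / M
        xhat = np.array([np.cos(t), np.sin(t)])
        P = np.outer(xhat, xhat)                  # projector x̂ x̂ᵀ
        for zj, pj in zip(z, p):
            up[i] += cp * np.exp(-1j * kp * (xhat @ zj)) * (P @ pj)
            us[i] += cs * np.exp(-1j * ks * (xhat @ zj)) * ((np.eye(2) - P) @ pj)
    return up, us
```

A quick sanity check of the formulas: the P-wave far field is parallel to the observation direction (the projector $\hat{x}\hat{x}^{\top}$), while the S-wave far field is orthogonal to it.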
In recent studies of the inverse source problem, Li, Schotland, and Yang [6] provided a model for the acoustic modulation of the current density and the material parameters, which is used to formulate the inverse source problem. Liimatainen and Lin studied the inverse source problem associated with semilinear elliptic equations in Ref. [7]. Two imaging algorithms were developed for reconstructing a sound-soft cavity and its excitation sources from total-field data; more information can be found in Ref. [8]. In Ref. [9], Jiang et al. proposed modifying the existing quasi-boundary value methods to recover the source term and the initial value simultaneously. Based on a study of the singularity of the Laplace transform of the boundary trace of the solution of the time-fractional diffusion equation, Janno and Kian [10] studied the inverse source problem for the time-fractional diffusion equation. In Ref. [11], Chaikovskii and Zhang solved the inverse source problem by an asymptotic expansion regularization algorithm in the three-dimensional case. Taking the Cauchy problem for the Beltrami-like equation associated with an analytic map as a basis, Omogbhe and Sadiq [12] provided a reconstruction method for the full (or partial) linearly anisotropic source. Jing et al. [13] proposed an algorithm combining adjoint-pulse and regularization methods to identify the spatiotemporal information of a point source.
Scholars have done a lot of research on the reconstruction of the number, location, and magnitude of sources. Ohe gave a real-time reconstruction method for multiple moving point/dipole sources using the algebraic relationship between source parameters and observation data in Ref. [14]. The method of fundamental solutions [15,16] is a meshless method that expands the solution in terms of fundamental solutions. Chen et al. [17] proposed a modified method of fundamental solutions that extends the solution using the time convolution of the Green's function and a signal function; they numerically simulated the three-dimensional time-dependent inverse source problem and considered the reconstruction of multiple stationary point sources and of a moving point source. In addition to the above methods, there are also direct methods for solving the inverse source problem. The Fourier method expands the source function in a Fourier series and establishes a correspondence between multi-frequency data and the Fourier coefficients, from which an approximate source function is obtained [18,19]. The sampling method probes the sampling region by constructing an indicator: when the sampling point is near the location of a source, the indicator attains a maximum, and substituting the maximizer into the indicator function yields the strength of the point sources [20,21,22]. In fact, these methods are related; for example, for the reconstruction of a moving point source, the authors of Ref. [17] showed that the modified method of fundamental solutions reduces to a simple sampling method at each time step. We also refer the interested readers to Refs. [22,23,24,25] and the references therein for a further introduction to various inverse source problems.
In recent years, as neural networks have a strong self-learning ability for dealing with complex systems, some scholars have also tried to use neural network methods to solve inverse problems and have obtained promising results. For obstacle scattering problems, Gao et al. [26] established a fully connected neural network to recover a scattering object from (possibly) finite-aperture radar cross-section data. Based on the idea of long short-term memory neural networks, Yin, Yang, and Liu [27] proposed a two-layer sequence-to-sequence neural network to effectively solve the inverse problem with limited-aperture phaseless data. Sampling methods combined with deep neural networks can be used to solve the inverse scattering problem of determining the geometry of penetrable objects [28]. By utilizing a linear sampling method, Meng et al. [29] obtained prior information on the shape of the obstacle; they then constructed a shape-parameter inversion model using neural network and gating ideas, and finally rebuilt the obstacle shape from the far-field information together with the prior information on the obstacle shape. Beyond obstacle scattering problems, Bao et al. [30] proposed a weak adversarial network method to numerically solve a class of inverse problems, including impedance tomography and dynamic impedance layer scanning. Li and Hu [31] presented a neural network method to solve Cauchy inverse problems. Zhang et al. [32] designed two neural network models to identify and predict the trajectory of a moving point source by measuring the corresponding wave field. Khoo and Ying constructed a neural network architecture called SwitchNet to solve wave-equation-based inverse scattering problems in Ref. [33]. Yao et al. [34] used an adversarial neural network approach, which can be applied to inverse problems with multiple parameters.

For more studies on solving electromagnetic scattering problems by neural network methods, please refer to Refs. [35,36,37].
The neural network method is data-driven: when solving a problem, data are needed to update the weights. The powerful nonlinear mapping ability of the network performs well in solving inverse problems, as illustrated by the numerical experiments in Refs. [27,29,32]. On the other hand, these works do not employ the physical system while utilizing the neural network to solve the inverse problem, and therefore cannot reflect the Lamé system (1) concealed in the training data. We therefore consider adding the physical system to the neural network and constructing a dual-driven solver driven by both data and the physical system. The authors of Refs. [38,39] put forward the idea of combining physical information and neural networks; for more studies, see Refs. [40,41,42].
In this paper, the DDS (dual-driven solver) is established; it consists of two modules, one data-driven and one physical-driven. The data-driven module is a neural network that takes far-field data as the input and the information of the point sources as the output. The physical-driven module substitutes the information of the point sources calculated by the neural network into the physical system to simulate the corresponding far-field data. The dual-driven solver primarily uses the data-driven module to solve and the physical-driven module to judge. By weighting and summing the losses of these two modules, the driving force for the evolution of the solver is obtained. Finally, the Adam optimization algorithm is used to update the neural network to improve the accuracy of the reconstruction of the point sources. Our method has two characteristics. First, the solver retains the original characteristics of the neural network: it is effective and easy to implement. Second, the introduction of the physical-driven module embeds into the loss function the Lamé system relating the elastic wave far field to the location and magnitude of the sources, which constrains the reconstruction results of the data-driven part.
The rest of this article is arranged as follows. In Section 2, we give the construction of the DDS through a detailed description of the structural framework, the data-driven module, the physical-driven module, and the definition of the loss function of the DDS, with the reconstruction algorithm at the end of Section 2. In Section 3, we first conduct performance experiments on the proposed DDS; subsequently, we apply the DDS to the inverse source problem to verify the effectiveness and robustness of the proposed method. In Section 4, the paper is concluded with some relevant discussions.

2. Construction of the DDS

Considering the inverse source problem for an elastic wave, we propose a dual-driven method that uses data-driven and physical-driven modules. The method uses the measured far-field data $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$ to reconstruct the number, location, and magnitude $(z, p)$ of the point sources. To this end, we design a dual-driven solver composed of a data-driven module and a physical-driven module. The variables of the two modules affect and excite each other, and jointly drive the update of the parameters of the solver.
The data-driven module uses data to learn the mapping between the far-field data $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$ and the information $(z, p)$ of the point sources, and uses far-field data to calculate an approximation of $(z, p)$. The physical-driven module encodes the physical relationship between the information of the point sources and the far-field data; it then calculates the corresponding far-field data $\left(\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}), \hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p})\right)$ from this physical relationship and the approximate information of the point sources. To some extent, the accuracy of the solver is reflected by the losses arising from the approximation of the point sources in the data-driven module and of the far-field data in the physical-driven module. We take the weighted sum of the two losses as the loss function of the dual-driven solver, and use an optimization algorithm to back-propagate and train the neural network parameters.
For ease of description, the following notation is given.
Note 1: $\left\{\hat{x}_i\right\}_{i=1}^{M}$ represents a discrete set of observation directions, where $M \in \mathbb{N}^{+}$ is the number of observation points. Given the number, location, and magnitude of the sources, the observed far-field data are

$$\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right) = \left(u^{(1)}, u^{(2)}, \ldots, u^{(2M)}\right), \tag{6}$$

where

$$u^{(i)} = \begin{cases} u_{p,\infty}\left(\hat{x}_i; z, p\right), & i = 1, 2, \ldots, M, \\ u_{s,\infty}\left(\hat{x}_{i-M}; z, p\right), & i = M+1, M+2, \ldots, 2M. \end{cases}$$
Note 2: Assume that the location of the $j$th point source is $z_j \in \mathbb{R}^2$, $j = 1, 2, \ldots, N$, and the corresponding magnitude is $p_j \in \mathbb{R}^2$. The information parameter of the point sources is denoted as

$$(z, p) = \left(c^{(1)}, c^{(2)}, \ldots, c^{(2N)}\right), \tag{7}$$

where

$$c^{(j)} = \begin{cases} z_j, & j = 1, 2, \ldots, N, \\ p_{j-N}, & j = N+1, N+2, \ldots, 2N, \end{cases}$$

and $N$ is the number of sources.

2.1. Architecture of the DDS

The structural framework of the dual-driven solver is shown in Figure 1. The training dataset is composed of the information parameter and the corresponding far-field data. The far-field data $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$ is substituted into the data-driven module to calculate the approximation $(\hat{z}, \hat{p})$ of the information parameter. We compare it with the true value $(z, p)$ of the information parameter to obtain the loss $L_{NN}$ of the data-driven module.
The approximation $(\hat{z}, \hat{p})$ of the information parameter should satisfy the Lamé system; substituting it into the physical-driven module yields the corresponding approximate far-field data $\left(\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}), \hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p})\right)$. Comparing these with the input of the data-driven module $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$, the loss $L_M$ of the physical-driven module can be obtained.
We weight and sum the losses of the two modules. The loss function of the dual-driven solver can be written as

$$L := (1 - \alpha) L_{NN} + \alpha L_M, \tag{8}$$

where $\alpha$ is the contribution coefficient of the physical-driven module, $0 \leq \alpha < 1$. When $\alpha = 0$, the dual-driven solver degenerates into a data-driven solver. In this way, the data-driven module and the physical-driven module are coupled together to build a dual-driven solver for the inverse source problem.
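As a minimal illustration of this weighted coupling, the combination can be written in one line; `dds_loss` is a hypothetical helper name, and the admissible range of α follows the text.

```python
def dds_loss(l_nn, l_m, alpha):
    """Weighted DDS loss (1 - alpha) * L_NN + alpha * L_M, with the
    contribution coefficient alpha in [0, 1); alpha = 0 degenerates to a
    purely data-driven solver."""
    if not 0.0 <= alpha < 1.0:
        raise ValueError("alpha must satisfy 0 <= alpha < 1")
    return (1.0 - alpha) * l_nn + alpha * l_m
```

For example, with `alpha = 0` only the data-driven loss survives, while `alpha = 0.5` averages the two module losses.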

2.2. Data-Driven Module

In this section, we build a recurrent neural network as the data-driven module based on the gated recurrent unit (GRU). The purpose is to build a sequence-to-sequence neural network to reconstruct the information parameter of the point sources.

The neural network is a two-layer recurrent neural network. It takes the far-field data $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$ as the input, the information parameter $(\hat{z}, \hat{p})$ as the output, and the GRU as the basic computing unit, and is used for the reconstruction of the information parameter $(z, p)$. Its structure is shown in Figure 2, where each rectangle represents a GRU unit; the structure of the unit is shown in Figure 3.
Given the input $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$ and the initial state $h^{(0)}$, the first layer of GRU units computes the far-field feature $h$:

$$h^{(t)} = GRU\left(c^{(t)}, h^{(t-1)}\right),$$

where $h^{(t)} \in \mathbb{R}^m$ and $h^{(t-1)} \in \mathbb{R}^m$ represent the features of the $t$th and $(t-1)$th components of the far-field data, respectively; when $t = 0$, $h^{(0)}$ contains no far-field feature. The calculation process of the GRU unit is as follows:
(1) The reset gate $r^{(t)}$ determines how the input information $c^{(t)}$ is combined with the previous feature $h^{(t-1)}$,

$$r^{(t)} = S\!\left(W_r\left[c^{(t)}, h^{(t-1)}\right]\right); \tag{9}$$

at this point, the candidate feature $\tilde{h}^{(t)}$ is

$$\tilde{h}^{(t)} = g\!\left(W_{\tilde{h}}\left[r^{(t)} \otimes h^{(t-1)}, c^{(t)}\right]\right). \tag{10}$$

(2) The update gate $z^{(t)}$ determines the information to be retained by the current feature $h^{(t)}$ from the historical feature $h^{(t-1)}$, and the new information to be added from the candidate feature $\tilde{h}^{(t)}$,

$$z^{(t)} = S\!\left(W_z\left[c^{(t)}, h^{(t-1)}\right]\right), \tag{11}$$

and computes the intermediate feature

$$h^{(t)} = \left(1 - z^{(t)}\right) \otimes h^{(t-1)} + z^{(t)} \otimes \tilde{h}^{(t)}. \tag{12}$$
The intermediate feature $h^{(t)}$ is the feature extracted by the first hidden layer of the neural network. To further improve the accuracy of the solver, the hidden layer adds another GRU layer for feature extraction. Inputting $c^{(t)}$ and the intermediate feature $h^{(t)}$ into the GRU unit (9)–(12) yields the final feature

$$H^{(t)} = GRU\left(H^{(t-1)}, h^{(t)}\right).$$
The output $(\hat{z}, \hat{p}) = \left(\hat{c}^{(1)}, \hat{c}^{(2)}, \ldots, \hat{c}^{(2N)}\right)$ of the module can be expressed as

$$(\hat{z}, \hat{p}) = \sigma\left(W_o H\right),$$

where $S(x) = \frac{1}{1 + e^{-x}}$ is the sigmoid activation function, $g(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$ is the hyperbolic tangent activation function, $\sigma$ can be any activation function, $W_r, W_{\tilde{h}}, W_z, W_o$ are the weights of the reset gate, the intermediate state, the update gate, and the output layer, respectively, $\otimes$ denotes element-wise matrix multiplication, and $[\cdot\,, \cdot]$ denotes matrix splicing (concatenation).
Here, the loss generated by the data-driven module can be represented via $(\hat{z}, \hat{p})$ and $(z, p)$ as

$$L_{NN} = \frac{1}{2}\left\|(z, p) - (\hat{z}, \hat{p})\right\|^2 = \frac{1}{2N}\sum_{j=1}^{2N}\left(c^{(j)} - \hat{c}^{(j)}\right)^2. \tag{13}$$
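The gate computations (9)–(12) amount to a few lines per time step. The sketch below is our own illustration (`gru_step` and the weight shapes are assumptions; each weight matrix acts on the concatenation of its two bracketed inputs):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(c_t, h_prev, W_r, W_z, W_h):
    """One GRU update following Equations (9)-(12): c_t is the current
    far-field component, h_prev the previous feature (hypothetical shapes)."""
    x = np.concatenate([c_t, h_prev])
    r = sigmoid(W_r @ x)                                        # reset gate, eq. (9)
    z = sigmoid(W_z @ x)                                        # update gate, eq. (11)
    h_tilde = np.tanh(W_h @ np.concatenate([r * h_prev, c_t]))  # candidate, eq. (10)
    return (1.0 - z) * h_prev + z * h_tilde                     # new feature, eq. (12)
```

Stacking a second call of `gru_step` on the intermediate features would mimic the two-layer structure described above.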

2.3. Physical-Driven Module

In the previous section, we considered the loss between the data $(\hat{z}, \hat{p})$ and $(z, p)$. In this section, we consider the relationship between the far-field data and the information parameter of the source. From a physical point of view, the output $(\hat{z}, \hat{p})$ of the data-driven module is the reconstructed information parameter, so it should satisfy the Lamé system. Substituting $(\hat{z}, \hat{p})$ into the Lamé system yields the corresponding far-field data $\left(\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}), \hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p})\right)$; the loss between the reconstructed far-field data and the real far-field data is then considered. In this way, we can evaluate the reconstruction results of the solver from both the data-driven and the physical-driven aspects.
The physical model satisfied by the elastic wave field $u$ is the Lamé system:

$$\mu \Delta u + (\lambda + \mu)\nabla\nabla\cdot u + \omega^2 u = \sum_{j=1}^{N} p_j \,\delta\!\left(x - z_j\right), \qquad x \in \mathbb{R}^2, \tag{14}$$
where $\lambda, \mu$ are the Lamé constants, satisfying $\mu > 0$, $\lambda + 2\mu > 0$, $\omega > 0$ is the angular frequency of the elastic wave, $\delta$ is the Dirac distribution, $p_j$ is the magnitude of the $j$th source, $z_j$ is the location of the $j$th source, and $N$ is the number of point sources. Following the discussion in the Introduction, we study the correspondence between the information parameter of the source and the far field, and use Formulas (4) and (5) in place of the Lamé system.
We use the data-driven module to calculate $(\hat{z}, \hat{p})$ and substitute it into Formulas (4) and (5). The far-field data $\left(\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}), \hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p})\right)$ can be expressed as

$$\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}) = \frac{e^{i\pi/4}}{(\lambda + 2\mu)\sqrt{8\pi k_p}} \sum_{j=1}^{N} \hat{x}\hat{x}^{\top} e^{-i k_p \hat{x} \cdot \hat{z}_j}\, \hat{p}_j, \tag{15}$$

$$\hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p}) = \frac{e^{i\pi/4}}{\mu\sqrt{8\pi k_s}} \sum_{j=1}^{N} e^{-i k_s \hat{x} \cdot \hat{z}_j} \left(I - \hat{x}\hat{x}^{\top}\right) \hat{p}_j, \tag{16}$$

where $\hat{z} = \left(\hat{z}_1, \hat{z}_2, \ldots, \hat{z}_N\right)$, $\hat{z}_j \in \mathbb{R}^2$ represents the reconstructed location of the $j$th source, $\hat{p} = \left(\hat{p}_1, \hat{p}_2, \ldots, \hat{p}_N\right)$, $\hat{p}_j \in \mathbb{R}^2$ represents the reconstructed magnitude of the $j$th source, $j = 1, 2, \ldots, N$, $I$ is the identity matrix, $\hat{x} = x/|x|$ is the observation direction, and $i$ is the imaginary unit.
At this point, the loss generated by the physical-driven module, with $(\hat{z}, \hat{p})$ as the input and $\left(\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}), \hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p})\right)$ as the output, can be expressed as

$$L_M = \frac{1}{2}\left\|\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right) - \left(\hat{u}_{p,\infty}(\hat{x}; \hat{z}, \hat{p}), \hat{u}_{s,\infty}(\hat{x}; \hat{z}, \hat{p})\right)\right\|^2 = \frac{1}{2M}\sum_{i=1}^{2M}\left(u^{(i)} - \hat{u}^{(i)}\right)^2. \tag{17}$$
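A direct sketch of this mismatch, treating the $2M$ far-field entries as a flat complex vector (the helper name is ours, and we use the squared modulus for complex entries):

```python
import numpy as np

def physical_loss(u_obs, u_hat):
    """Physical-driven loss L_M: half the mean squared modulus of the
    difference between measured and re-simulated far-field entries."""
    d = np.asarray(u_obs) - np.asarray(u_hat)
    return 0.5 * float(np.mean(np.abs(d) ** 2))
```

A perfect reconstruction gives a zero loss; any residual mismatch contributes quadratically, entry by entry.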

2.4. Loss Function of the DDS

In this section, we define the form of the loss function for the dual-driven solver.

Based on the loss (13) of the data-driven module and the loss (17) of the physical-driven module, the loss function of the dual-driven solver can be defined as

$$L = (1 - \alpha) L_{NN} + \alpha L_M, \tag{18}$$

where $L_{NN}$ is the loss of the data-driven part, $L_M$ is the loss of the physical-driven part, and $\alpha$ is the contribution coefficient of the physical-driven module, $0 \leq \alpha < 1$. When $\alpha = 0$, the dual-driven solver is a two-layer GRU neural network driven only by data. Through the definition of the loss function (18), the loss of the physical-driven module is added to the loss function used to train the neural network; it directly affects the optimization of the network weights and acts as a regularizer in the DDS.
For the DDS, the Adam algorithm is used to update the weights in the data-driven module and thereby update the solver. $W$ denotes any of the weights $W_r, W_{\tilde{h}}, W_z, W_o$ in the solver. The weight update rule is as follows:

$$W^{l} = W^{l-1} - \eta \,\frac{\hat{m}^{l}}{\sqrt{\hat{v}^{l}} + \xi}, \tag{19}$$

where $W^0$ is the random initial weight, $W^l$ is the value of the parameter $W$ at the $l$th iteration, $l = 1, 2, \ldots$, $\xi$ is a small constant added to maintain numerical stability, $\eta$ is the learning rate, and $\hat{m}^l$ and $\hat{v}^l$ are the bias corrections of $m^l$ and $v^l$:

$$\hat{m}^{l} = \frac{m^{l}}{1 - \beta_1^{\,l}}, \qquad \hat{v}^{l} = \frac{v^{l}}{1 - \beta_2^{\,l}}, \tag{20}$$

where $\beta_1$ and $\beta_2$ are constants controlling the exponential decay. $m^l$ is the exponential moving average of the gradient, obtained from the first-order moment of the gradient; $v^l$ is obtained from the second-order moment of the gradient. The updates of $m^l$ and $v^l$ are

$$m^{l} = \beta_1 m^{l-1} + \left(1 - \beta_1\right) g^{l}, \qquad v^{l} = \beta_2 v^{l-1} + \left(1 - \beta_2\right)\left(g^{l}\right)^2, \qquad g^{l} = \frac{\partial L^{l}}{\partial W^{l-1}}, \tag{21}$$

where $L^l$ is the value of the loss function $L$ at the $l$th iteration, and $g^l$ is the gradient matrix obtained by differentiating the loss function $L$ with respect to the weight $W$.
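These update rules reduce to a few lines per iteration; the sketch below keeps the notation of the text (η, β₁, β₂, ξ) and assumes the gradient `g` is supplied externally by back-propagation.

```python
import numpy as np

def adam_step(W, m, v, g, l, eta=1e-3, beta1=0.9, beta2=0.999, xi=1e-8):
    """One Adam iteration: moment updates, bias corrections, weight update.
    l is the 1-based iteration counter."""
    m = beta1 * m + (1.0 - beta1) * g             # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * g ** 2        # second-moment estimate
    m_hat = m / (1.0 - beta1 ** l)                # bias corrections
    v_hat = v / (1.0 - beta2 ** l)
    W = W - eta * m_hat / (np.sqrt(v_hat) + xi)   # weight update
    return W, m, v
```

The bias corrections matter mainly in the first iterations, when the moving averages are still dominated by their zero initialization.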
Finally, we give the reconstruction scheme in the following Algorithm 1.
Algorithm 1 A numerical method for reconstructing the source from far-field data
Step 1
Given the wave numbers $k_p$, $k_s$ and the information parameter $(z, p)$, calculate the corresponding far-field data $\left(u_{p,\infty}(\hat{x}; z, p), u_{s,\infty}(\hat{x}; z, p)\right)$;
Step 2
Enter the far-field data u p , ( x ^ ; z , p ) , u s , ( x ^ ; z , p ) into the DDS;
Step 3
The data-driven module reconstructs the parameter $(\hat{z}, \hat{p})$ and calculates the module loss (13);
Step 4
Enter ( z ^ , p ^ ) into the physical-driven module to get ( u ^ p , ( x ^ ; z ^ , p ^ ) , u ^ s , ( x ^ ; z ^ , p ^ ) ) and calculate module loss (17);
Step 5
Calculate the DDS loss (18) using the module losses from Step 3 and Step 4;
Step 6
Check whether $L < \varepsilon$ holds or the maximum number of iterations is reached: if yes, continue to Step 7; if no, use the Adam algorithm to update the weights and return to Step 2;
Step 7
DDS outputs parameter ( z ^ , p ^ ) .
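Algorithm 1 can be summarized as a training loop. In this schematic, `net.predict`, `net.update`, and `forward` are hypothetical stand-ins for the data-driven module, its Adam update, and the physical-driven module, respectively:

```python
def run_dds(train_pairs, net, forward, alpha=0.5, eps=1e-4, max_iter=400):
    """Schematic of Algorithm 1: alternate the data-driven prediction and the
    physical-driven re-simulation; stop when the DDS loss drops below eps or
    the iteration budget is spent."""
    for _ in range(max_iter):                          # Steps 2-6
        loss = 0.0
        for u_obs, zp_true in train_pairs:
            zp_hat = net.predict(u_obs)                # Step 3: data-driven module
            u_hat = forward(zp_hat)                    # Step 4: physical-driven module
            l_nn = sum((a - b) ** 2 for a, b in zip(zp_true, zp_hat)) / (2 * len(zp_true))
            l_m = sum(abs(a - b) ** 2 for a, b in zip(u_obs, u_hat)) / (2 * len(u_obs))
            loss += (1 - alpha) * l_nn + alpha * l_m   # Step 5: DDS loss
        if loss < eps:                                 # Step 6: stopping test
            break
        net.update(loss)                               # Adam step (stub here)
    return net.predict(train_pairs[0][0])              # Step 7: output
```

The loop structure, not the stub implementations, is the point: the physical-driven re-simulation sits inside every training iteration, so its loss co-determines each weight update.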

3. Numerical Experiments

Through the numerical experiments, this section shows that the constructed DDS can effectively reconstruct the location and magnitude of the source. In addition, several two-dimensional and three-dimensional numerical experiments are used to illustrate the effectiveness and robustness of DDS.
In all numerical examples, we consider $\Omega = [-6, 6]^d$, $d = 2, 3$, and Lamé constants $\lambda = 1$, $\mu = 1$. For the two-dimensional cases, we select a circle with radius $R = 8$ as the measurement curve $\Gamma$ and evenly distribute 10 measurement points counterclockwise from the $x$-axis on $\Gamma$. For the three-dimensional cases, we choose 100 uniformly distributed measurement directions on the sphere $\Gamma$ with radius $R = 8$.
Experiment 1.
Selection of hyperparameters of the data-driven module in DDS.
In this experiment, we consider the value of the maximum number of iterations in the neural network. Figure 4 shows the curve of the test loss as a function of the number of iterations. It can be seen from the figure that for $0 <$ Iterations $< 300$, the test loss decreases significantly as the number of iterations increases; for $300 <$ Iterations $< 400$, the test loss decreases only slightly; and beyond 400 iterations, the test loss hardly changes. We therefore set the maximum number of iterations to 400, which gives good results without wasting computational cost.
Some parameters of the data-driven module in the DDS are listed in Table 1. More details on the parameter settings can be found in Refs. [27,29,32].
Experiment 2.
Reconstruction of the location and magnitude of the single source.
In this experiment, we consider far-field data to reconstruct the location and magnitude of a single source in two-dimensional and three-dimensional cases, respectively. The calculation results are shown in Figure 5.
Figure 5 shows the reconstruction results for the location and magnitude of a single source in 2D and 3D. Comparing two adjacent dots of different colors in the figure, we find that their locations and sizes are basically the same, which shows that the solver can reconstruct the location and magnitude of the source in both 2D and 3D. In order to assess the accuracy quantitatively, we give the information parameter of the reconstructed source and the corresponding relative errors in Table 2 and Table 3, respectively.

As can be seen from Table 2 and Table 3, there are discrepancies between the reconstructed location and magnitude parameters and the actual values, but they remain close, indicating that the solver can invert the location and magnitude information of the point source simultaneously and with consistent accuracy.

In the two-dimensional and three-dimensional reconstruction experiments, the relative errors of each group are different. The main reasons are the random initialization of the weights in the neural network and the stochastic gradient descent in the optimization algorithm, which give the data-driven module a certain degree of randomness. At the same time, Table 1 shows that the maximum number of iterations of the solver is 400: the solver terminates and outputs the reconstruction results once 400 iterations are reached. Further iterations might yield better reconstruction results, but the solver only outputs the inversion results when the iteration terminates. For these reasons, each reconstruction result differs and the error varies.
The least-squares method is widely used in subsurface scattering imaging, so we carry out a comparative experiment. Using least-squares [43] to reconstruct the single source with the same amount of data, the results are shown in Table 4. Comparing Table 2 and Table 4, it can be seen that the reconstruction quality of the DDS is better than that of least-squares.
From Experiment 2, it can be seen that the solver can solve the inverse source problem in both 2D and 3D, and the solution process raised no particular difficulties. Therefore, the subsequent experiments are considered in 2D only.
Experiment 3.
Reconstruction of the number of multi-sources.
In this experiment, our goal is to reconstruct the number of point sources. The information parameter of the sources is shown in Table 5. In our method, the number of information parameters must be fixed before the reconstruction, that is, the number of point sources must be known. In the case of an unknown number, we first shift $\Omega$ and $\Gamma$ in a certain direction so that the neighbourhood $(0 - \xi, 0 + \xi)$, $\xi < 1$, of the origin is not contained in $\Omega$. Secondly, we assume the number of sources to be $Q$, with $Q > N$. Then we use the solver to reconstruct the locations of the sources. If the reconstructed locations of $q$ point sources fall in $(0 - \xi, 0 + \xi)$, those $q$ point sources do not exist, and the number of sources is $N = Q - q$.
Table 5 shows the reconstruction of the number of two point sources. Since the number of sources is unknown in advance, we assume that there are three sources. From the reconstruction results in Table 5, we can see that the reconstructed location parameter of point source $S_3$ is $(0.000, 0.000) \in (0 - \xi, 0 + \xi)$, indicating that point source $S_3$ does not exist. This means that the number of sources is two, which is consistent with the actual number of point sources. Therefore, this method of reconstructing the number of sources is feasible. At the same time, the accuracy of the reconstructed location parameters can be seen from the relative errors.
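The counting rule used in this experiment is easily expressed in code; `count_sources` is a hypothetical helper that discards reconstructed locations falling in the excluded neighbourhood of the origin (here taken as a square of half-width ξ, one concrete reading of the criterion).

```python
def count_sources(z_hat, xi=0.5):
    """Given Q reconstructed locations z_hat (with Q > N assumed), drop those
    inside (-xi, xi) x (-xi, xi): the remainder are the N genuine sources."""
    genuine = [zj for zj in z_hat if max(abs(zj[0]), abs(zj[1])) >= xi]
    return len(genuine), genuine
```

For the scenario of Table 5, a spurious reconstruction at the origin would be filtered out and the count would drop from the assumed Q = 3 to N = 2.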
Experiment 4.
Reconstruction of the location and magnitude of multi-sources.
This experiment considers reconstructing the location and magnitude of multi-sources in different locations when the number of sources is known. In Figure 6, we rebuild three and seven point sources, respectively.
In Figure 6, the reconstructed red dot has a good coverage of the real blue dot, which means that the reconstruction results are good. In order to accurately see the results of the reconstruction, Table 6 and Table 7 give the number, location, and magnitude parameter information of real and reconstructed sources. Comparing the relative error range of Table 6 and Table 7, we can see that the increase in the number of point sources does not affect the accuracy of the results.
In Figure 7, we present the waveform plots generated by the true and reconstructed scattering fields from three point sources. The circular ring represents the PML layer.
Experiment 5.
Reconstruction of the point sources under different noise levels.
In this experiment, different levels of noise are added to the measurement data of the point sources to test the stability of the solver. We add some random disturbances to the data, and the noise data is expressed as
$$\left[ u_p^{\infty}(\hat{x}; z, \mathbf{p}),\ u_s^{\infty}(\hat{x}; z, \mathbf{p}) \right]^{\varepsilon} = (1 + \varepsilon \gamma) \left[ u_p^{\infty}(\hat{x}; z, \mathbf{p}),\ u_s^{\infty}(\hat{x}; z, \mathbf{p}) \right],$$
where ε represents the noise level and γ is a random number drawn from the uniform distribution U ( − 1 , 1 ) . We add 1%, 5%, 10%, and 20% noise, respectively, to the training dataset.
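A minimal sketch of this noise model, assuming the far-field samples are stored in a NumPy array; the shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(far_field, eps):
    # gamma ~ U(-1, 1), drawn independently for every far-field sample,
    # applied multiplicatively: (1 + eps * gamma) * data
    gamma = rng.uniform(-1.0, 1.0, size=far_field.shape)
    return (1.0 + eps * gamma) * far_field

u = np.ones((2, 16))           # stand-in for the [u_p, u_s] far-field data
u_noisy = add_noise(u, 0.05)   # 5% noise level
```

Because |γ| ≤ 1, each sample is perturbed by at most ε in relative terms, which matches how the noise level is interpreted in Tables 8 and 9.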
Table 8 shows the reconstruction results of one point source with location (3, 2) and magnitude (5, −7) at different noise levels. From it, we can plainly observe that the reconstruction results gradually deteriorate and the relative error increases as the noise level rises. This can also be verified in Figure 8: the distance between the reconstructed location and the exact location increases with the noise intensity. Adding less than 5% noise has no discernible effect on the results, whereas once the noise level rises above 10%, the reconstructed location deviates significantly from the true one.
Figure 9 shows the reconstruction of seven point sources at different noise levels and confirms the observations above. In addition, we find that as the noise level increases, large dots are reconstructed noticeably better than small dots. This may be because the observation points capture too little information from sources with small magnitudes. To quantify the error of each point, Table 9 reports the relative error of each point at different noise levels, together with the average error over this set of points.
Experiment 6.
Reconstruction of point sources under finite observation apertures.
In practical applications, full-aperture measurement of far-field data is often unavailable, and data can only be collected at a limited number of observation points; that is, the observation geometry is only partial. This experiment considers reconstructing the location and magnitude of a single source and of multi-sources under finite observation apertures, which enables a thorough evaluation of the solver's stability. We select the observation aperture ranges [ 0 , 9 π / 5 ] , [ 0 , 7 π / 5 ] , [ 0 , π / 2 ] , and [ 0 , 3 π / 5 ] , with corresponding numbers of observation points M = 9 , M = 7 , M = 5 , and M = 3 .
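Under the assumption that the M observation directions are spread evenly over each aperture (the exact sampling scheme is not stated here), they can be generated as unit vectors on the circle:

```python
import numpy as np

def observation_directions(theta_max, M):
    # M unit directions x_hat = (cos t, sin t) with t in [0, theta_max]
    thetas = np.linspace(0.0, theta_max, M)
    return np.column_stack((np.cos(thetas), np.sin(thetas)))

x_hat = observation_directions(9 * np.pi / 5, 9)  # widest aperture used here
```

Shrinking theta_max (and M with it) reproduces the limited-aperture settings of Figures 10 and 11.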
The reconstructions of a single source and of multi-sources over the different observation aperture ranges are shown in Figure 10 and Figure 11, respectively. They clearly show that as the observation aperture shrinks, the reconstruction quality gradually deteriorates. In the reconstruction of multi-sources, reducing the aperture to [ 0 , 7 π / 5 ] has only a small impact on the results. With an aperture of [ 0 , π / 2 ] , the observation points capture more information about nearby sources, so points on the near side of the observation points are reconstructed better than those on the far side. With an aperture of [ 0 , 3 π / 5 ] , the information collected by the observation points is severely insufficient, and the reconstructed locations deviate substantially from the true ones.

4. Conclusions

In this paper, aiming at the simultaneous reconstruction of the number, location, and magnitude parameters of an elastic source, we constructed a dual-driven solver based on data-driven and physical-driven modules. Our method performs well for the reconstruction of a single source and of multi-sources with an unknown number. It does not require an analytic solution; the forward model serves only as a way to acquire data. We selected a relatively idealized problem setting to facilitate verification of the inversion algorithm. More complex cases require further study, such as the time-domain problem, the phaseless case, and the case where the source term contains both a moving point source and a dipole source.

Author Contributions

Software, P.M.; writing—review and editing, P.M.; data curation, Y.C.; writing—original draft preparation, Y.C.; methodology, W.Y.; funding acquisition, W.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 12271207), the Jilin Natural Science Foundation (Grant No. 20220101040JC), the Jilin Provincial Science and Technology Program (Grant No. YDZJ202201ZYTS585), and the Jilin Industrial Technology Research and Development Project (No. 2022C047-2).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Hongyu Liu for the many helpful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barragan, A.; Preston, C.; Alvarez, A.; Bera, T.; Qin, Y.; Weinand, M.; Kasoff, W.; Witte, R.S. Acoustoelectric imaging of deep dipoles in a human head phantom for guiding treatment of epilepsy. J. Neural Eng. 2020, 17, 056040.
  2. Gao, Y.; Li, P.; Yang, Y. On an inverse source problem for the Biot equations in electro-seismic imaging. Inverse Probl. 2019, 35, 095009.
  3. Xue, Y.; Zhai, Z.J. Inverse identification of multiple outdoor pollutant sources with a mobile sensor. In Proceedings of the Building Simulation; Springer: Berlin/Heidelberg, Germany, 2017; Volume 10, pp. 255–263.
  4. Ammari, H.; Bretin, E.; Garnier, J.; Kang, H.; Lee, H.; Wahab, A. Mathematical Methods in Elasticity Imaging; Princeton University Press: Princeton, NJ, USA, 2015; Volume 52, pp. 5–26.
  5. Wang, X.; Guo, Y.; Bousba, S. Direct imaging for the moment tensor point sources of elastic waves. J. Comput. Phys. 2022, 448, 110731.
  6. Li, W.; Schotland, J.C.; Yang, Y.; Zhong, Y. Inverse Source Problem for Acoustically-Modulated Electromagnetic Waves. arXiv 2022, arXiv:2202.11888.
  7. Liimatainen, T.; Lin, Y.H. Uniqueness results for inverse source problems of semilinear elliptic equations. arXiv 2022, arXiv:2204.11774.
  8. Zhang, D.; Guo, Y.; Wang, Y.; Chang, Y. Co-inversion of a scattering cavity and its internal sources: Uniqueness, decoupling and imaging. arXiv 2022, arXiv:2207.06133.
  9. Jiang, Y.; Liu, J.; Wang, X.S. A direct parallel-in-time quasi-boundary value method for inverse space-dependent source problems. J. Comput. Appl. Math. 2023, 423, 114958.
  10. Janno, J.; Kian, Y. Inverse source problem with a posteriori boundary measurement for fractional diffusion equations. arXiv 2022, arXiv:2207.06468.
  11. Chaikovskii, D.; Zhang, Y. Solving forward and inverse problems in a non-linear 3D PDE via an asymptotic expansion based approach. arXiv 2022, arXiv:2210.05220.
  12. Omogbhe, D.; Sadiq, K. An inverse source problem for linearly anisotropic radiative sources in absorbing and scattering medium. arXiv 2022, arXiv:2211.00535.
  13. Jing, Y.; Li, F.; Gu, Z.; Tang, S. Identifying spatiotemporal information of the point pollutant source indoors based on the adjoint-regularization method. In Proceedings of the Building Simulation; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–14.
  14. Ohe, T. Real-time reconstruction of moving point/dipole wave sources from boundary measurements. Inverse Probl. Sci. Eng. 2020, 28, 1057–1102.
  15. Chen, B.; Sun, Y.; Zhuang, Z. Method of fundamental solutions for a Cauchy problem of the Laplace equation in a half-plane. Bound. Value Probl. 2019, 2019, 34.
  16. Sun, Y. Modified method of fundamental solutions for the Cauchy problem connected with the Laplace equation. Int. J. Comput. Math. 2014, 91, 2185–2198.
  17. Chen, B.; Guo, Y.; Ma, F.; Sun, Y. Numerical schemes to reconstruct three-dimensional time-dependent point sources of acoustic waves. Inverse Probl. 2020, 36, 075009.
  18. Wang, X.; Zhu, J.; Song, M.; Wu, W. Fourier method for reconstructing elastic body force from the coupled-wave field. Inverse Probl. Imaging 2022, 16, 325.
  19. Zhang, D.; Guo, Y. Fourier method for solving the multi-frequency inverse source problem for the Helmholtz equation. Inverse Probl. 2015, 31, 035007.
  20. Bousba, S.; Guo, Y.; Wang, X.; Li, L. Identifying multipolar acoustic sources by the direct sampling method. Appl. Anal. 2020, 99, 856–879.
  21. Guo, Y.; Monk, P.; Colton, D. Toward a time domain approach to the linear sampling method. Inverse Probl. 2013, 29, 095016.
  22. Wang, X.; Guo, Y.; Li, J.; Liu, H. Mathematical design of a novel input/instruction device using a moving acoustic emitter. Inverse Probl. 2017, 33, 105009.
  23. Li, J.; Helin, T.; Li, P. Inverse random source problems for time-harmonic acoustic and elastic waves. Commun. Partial Differ. Equ. 2020, 45, 1335–1380.
  24. Liu, H.; Uhlmann, G. Determining both sound speed and internal source in thermo- and photo-acoustic tomography. Inverse Probl. 2015, 31, 105005.
  25. Zhang, D.; Guo, Y.; Li, J.; Liu, H. Retrieval of acoustic sources from multi-frequency phaseless data. Inverse Probl. 2018, 34, 094001.
  26. Gao, Y.; Liu, H.; Wang, X.; Zhang, K. On an artificial neural network for inverse scattering problems. J. Comput. Phys. 2022, 448, 110771.
  27. Yin, W.; Yang, W.; Liu, H. A neural network scheme for recovering scattering obstacles with limited phaseless far-field data. J. Comput. Phys. 2020, 417, 109594.
  28. Le, T.; Nguyen, D.L.; Nguyen, V.; Truong, T. Sampling type method combined with deep learning for inverse scattering with one incident wave. arXiv 2022, arXiv:2207.10011.
  29. Meng, P.; Su, L.; Yin, W.; Zhang, S. Solving a kind of inverse scattering problem of acoustic waves based on linear sampling method and neural network. Alex. Eng. J. 2020, 59, 1451–1462.
  30. Bao, G.; Ye, X.; Zang, Y.; Zhou, H. Numerical solution of inverse problems by weak adversarial networks. Inverse Probl. 2020, 36, 115003.
  31. Li, Y.; Hu, X. Artificial neural network approximations of Cauchy inverse problem for linear PDEs. Appl. Math. Comput. 2022, 414, 126678.
  32. Zhang, P.; Meng, P.; Yin, W.; Liu, H. A neural network method for time-dependent inverse source problem with limited-aperture data. J. Comput. Appl. Math. 2023, 421, 114842.
  33. Khoo, Y.; Ying, L. SwitchNet: A neural network model for forward and inverse scattering problems. SIAM J. Sci. Comput. 2019, 41, A3182–A3201.
  34. Yao, J.; Warner, M.; Wang, Y. Regularization of anisotropic full-waveform inversion with multiple parameters by adversarial neural networks. Geophysics 2023, 88, R95–R103.
  35. Li, H.; Chen, L.; Qiu, J. Convolutional neural networks for multifrequency electromagnetic inverse problems. IEEE Antennas Wirel. Propag. Lett. 2021, 20, 1424–1428.
  36. Li, L.; Wang, L.G.; Teixeira, F.L.; Liu, C.; Nehorai, A.; Cui, T.J. DeepNIS: Deep neural network for nonlinear electromagnetic inverse scattering. IEEE Trans. Antennas Propag. 2018, 67, 1819–1825.
  37. Xu, K.; Zhang, C.; Ye, X.; Song, R. Fast full-wave electromagnetic inverse scattering based on scalable cascaded convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11.
  38. Pang, G.; Lu, L.; Karniadakis, G.E. fPINNs: Fractional physics-informed neural networks. SIAM J. Sci. Comput. 2019, 41, A2603–A2626.
  39. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
  40. Guo, R.; Lin, Z.; Shan, T.; Song, X.; Li, M.; Yang, F.; Xu, S.; Abubakar, A. Physics embedded deep neural network for solving full-wave inverse scattering problems. IEEE Trans. Antennas Propag. 2021, 70, 6148–6159.
  41. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028.
  42. Liu, Z.; Roy, M.; Prasad, D.K.; Agarwal, K. Physics-guided loss functions improve deep learning performance in inverse scattering. IEEE Trans. Comput. Imaging 2022, 8, 236–245.
  43. Shoja, A.; van der Neut, J.; Wapenaar, K. Target-oriented least-squares reverse-time migration using Marchenko double-focusing: Reducing the artefacts caused by overburden multiples. Geophys. J. Int. 2023, 233, 13–32.
Figure 1. The structural framework of the dual-driven solver.
Figure 2. The schematic diagram of the GRU neural network structure.
Figure 3. The schematic diagram of the GRU gate control unit structure.
Figure 4. The test loss function varies with the number of iterations.
Figure 5. The reconstructed location and magnitude of a single source. The size of a point represents the modulus of the magnitude of the source, the blue points represent the true source information, and the red points represent the reconstructed source information. Each subplot shows the reconstruction result for a different point. (a–d) show the reconstruction of the location and magnitude of the source in 2D, and (e–h) show the reconstruction in 3D.
Figure 6. Reconstructed effect plots of the location and magnitude of multi-sources, where the number of multi-sources is three in (a) and the number of multi-sources is seven in (b).
Figure 7. The waveform plots of the scattered field from three point sources, where (a) is the true field and (b) is the reconstructed field.
Figure 8. Reconstruction of one point source at different noise levels.
Figure 9. The reconstructions of the multi-sources with different noise levels. Here, (a–d) represent ε = 1 % , ε = 5 % , ε = 10 % , and ε = 20 % , respectively.
Figure 10. The reconstructions of the multi-sources with observation apertures [ 0 , 9 π / 5 ] , [ 0 , 7 π / 5 ] , [ 0 , π / 2 ] , and [ 0 , 3 π / 5 ] . They correspond to (a–d), respectively.
Figure 11. The reconstructions of the multi-sources with different observation apertures. Here, (a–d) represent [ 0 , 9 π / 5 ] , [ 0 , 7 π / 5 ] , [ 0 , π / 2 ] , and [ 0 , 3 π / 5 ] , respectively.
Table 1. Parameter setting.
Parameter | Value
Number of neural network layers | 2
Number of GRU neurons | 128
Maximum number of iterations | 400
Learning rate | 10⁻³
Batch size | 64
Table 2. Reconstruction of four single sources from far-field data in 2D.
Exact Location | Exact Magnitude | Reconstructed Location | Reconstructed Magnitude | Location Error (‰) | Magnitude Error (‰)
(3, 2) | (5, −7) | (2.987, 2.001) | (4.982, −7.006) | 2.4 | 2.3
(−3, −2) | (−1, 4) | (−3.045, −2.016) | (−0.993, 4.034) | 11.5 | 7.8
(−4, 1) | (−3, −2) | (−4.018, 1.014) | (−3.010, −2.004) | 9.3 | 2.5
(0, −4) | (4, 3) | (0.003, −4.022) | (3.999, 3.009) | 2.8 | 1.7
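The relative error is not given an explicit formula in this section; the tabulated values are consistent with the mean of the component-wise relative errors, as the following check against the first two location rows of Table 2 suggests (the function name is ours):

```python
def mean_componentwise_rel_err(exact, recon):
    # average of |r_i - e_i| / |e_i| over the components of the parameter
    return sum(abs(r - e) / abs(e) for e, r in zip(exact, recon)) / len(exact)

# First two location rows of Table 2, expressed in permille:
e1 = round(1000 * mean_componentwise_rel_err((3, 2), (2.987, 2.001)), 1)
e2 = round(1000 * mean_componentwise_rel_err((-3, -2), (-3.045, -2.016)), 1)
print(e1, e2)  # -> 2.4 11.5
```

This interpretation is an inference from the table data, not a stated definition from the paper.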
Table 3. Reconstruction of four single sources from far-field data in 3D.
Exact Location | Exact Magnitude | Reconstructed Location | Reconstructed Magnitude | Location Error (‰) | Magnitude Error (‰)
(1, 1, 1) | (−3, 7, −2) | (0.987, 1.001, 0.996) | (−2.982, 7.003, −2.010) | 6.0 | 3.8
(0, −1, −3) | (5, −1, 6) | (0.045, −1.016, −3.003) | (4.975, −0.993, 6.034) | 5.7 | 5.9
(−4, 2, 1) | (−4, −3, −2) | (−4.018, 2.001, 1.014) | (−3.983, −3.010, −2.004) | 6.3 | 3.2
(−5, 0, −4) | (1, 3, 2) | (−5.008, 0.003, −4.022) | (0.997, 2.999, 2.009) | 2.4 | 2.6
Table 4. Using the least-squares method to reconstruct four single sources from far-field data in 2D.
Exact Location | Exact Magnitude | Reconstructed Location | Reconstructed Magnitude | Location Error (%) | Magnitude Error (%)
(3, 2) | (5, −7) | (2.911, 1.776) | (5.329, −7.322) | 2.0 | 5.6
(−3, −2) | (−1, 4) | (−2.705, −2.191) | (−0.920, 3.944) | 9.6 | 4.7
(−4, 1) | (−3, −2) | (−4.003, 0.991) | (−3.433, −2.185) | 0.8 | 11.8
(0, −4) | (4, 3) | (−0.293, −4.055) | (3.835, 2.799) | 0.7 | 5.4
Table 5. Reconstruction of the number of sources.
Source | Exact Location | Reconstructed Location | Relative Error (‰)
S1 | (7, 12) | (6.999, 11.991) | 0.4
S2 | (5, 6) | (5.000, 6.008) | 0.7
S3 | Null | (0.000, 0.000) | 0
Table 6. Reconstruction of the location and magnitude of three point sources.
Source | Exact Location | Exact Magnitude | Reconstructed Location | Reconstructed Magnitude | Location Error (‰) | Magnitude Error (‰)
S1 | (2, 2) | (−5, 2) | (2.004, 1.998) | (−4.984, 1.997) | 1.5 | 2.4
S2 | (−3, 1) | (2, 4) | (−3.007, 1.018) | (1.984, 4.002) | 10.2 | 4.3
S3 | (−4, −3) | (−2, −5) | (−4.006, −3.005) | (−2.004, −4.943) | 1.5 | 7.7
Table 7. Reconstruction of the location and magnitude of seven point sources.
Source | Exact Location | Exact Magnitude | Reconstructed Location | Reconstructed Magnitude | Location Error (‰) | Magnitude Error (‰)
S1 | (1, 2) | (−4, 6) | (1.003, 2.005) | (−3.974, 5.979) | 2.8 | 5.0
S2 | (−3, 1) | (−2, 4) | (−2.990, 0.998) | (−1.997, 3.994) | 1.2 | 1.5
S3 | (−4, −3) | (−5, −5) | (−4.007, −2.999) | (−4.992, −4.987) | 1.0 | 2.1
S4 | (−2, 4) | (3, 4) | (−2.003, 3.997) | (3.001, 4.013) | 1.1 | 1.3
S5 | (4, −2) | (−1, −3) | (4.009, −1.986) | (−0.990, −3.013) | 4.6 | 7.2
S6 | (3, 3) | (1, 2) | (3.013, 3.007) | (0.993, 2.015) | 3.3 | 7.3
S7 | (−1, −5) | (−7, 1) | (−1.006, −4.972) | (−6.945, 0.991) | 5.8 | 8.4
Table 8. Reconstruction of a point source with location (3, 2) and magnitude (5, −7) at different noise levels.
Noise Level ε | Reconstructed Location | Reconstructed Magnitude | Relative Error (%)
1% | (3.013, 1.990) | (4.987, −7.013) | 0.4
5% | (3.104, 2.079) | (5.025, −6.909) | 2.3
10% | (3.244, 2.139) | (4.775, −6.875) | 5.4
20% | (3.234, 2.332) | (4.478, −6.559) | 10.3
Table 9. Relative error of points at different noise levels.
Relative Error (%) | ε = 1% | ε = 5% | ε = 10% | ε = 20%
S1 | 1.8 | 1.9 | 12.2 | 14.2
S2 | 0.7 | 2.1 | 10.4 | 9.4
S3 | 1.6 | 1.3 | 0.5 | 2.4
S4 | 0.5 | 7.0 | 3.7 | 4.7
S5 | 1.4 | 3.9 | 3.8 | 9.8
S6 | 0.8 | 3.3 | 3.2 | 7.0
S7 | 0.4 | 0.5 | 5.8 | 5.4
Average error | 1.0 | 2.9 | 5.7 | 7.6