Review

Advancements in Numerical Methods for Forward and Inverse Problems in Functional near Infra-Red Spectroscopy: A Review

1 Department of Fundamental and Applied Sciences, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
2 Centre for Intelligent Signal and Imaging Research, Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
3 Centre of Research in Enhanced Oil Recovery, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Perak, Malaysia
* Author to whom correspondence should be addressed.
Axioms 2023, 12(4), 326; https://doi.org/10.3390/axioms12040326
Submission received: 15 February 2023 / Revised: 16 March 2023 / Accepted: 18 March 2023 / Published: 28 March 2023
(This article belongs to the Special Issue Mathematical Methods in the Applied Sciences)

Abstract

In the field of biomedical image reconstruction, functional near infra-red spectroscopy (fNIRs) is a promising technology that uses near infra-red light for non-invasive imaging and reconstruction. Reconstructing an image requires solving both a forward and an inverse problem in order to recover the optical properties of the medium from the measured boundary data. Researchers use a variety of numerical methods to solve both the forward and the inverse problem. This review presents the latest advances in numerical methods for solving the forward and inverse problems in fNIRs. The physical interpretation of the forward problem is described, followed by an explanation of the state-of-the-art numerical methods and a description of the available toolboxes. A more in-depth discussion of the numerical solution approaches for the inverse problem in fNIRs is also provided.

1. Introduction

Neuroscientists have proposed several imaging modalities to comprehend and study the anatomical and functional aspects of the human brain. Magnetic resonance imaging (MRI), computerized tomography (CT), magnetoencephalography (MEG), electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and functional near-infrared spectroscopy (fNIRs) are some of the most well-known imaging methods. fNIRs is a relatively recent non-invasive neuroimaging technology that uses near infrared light at wavelengths between 650 and 900 nanometers to evaluate the optical characteristics of brain tissue. In the near-infrared part of the electromagnetic spectrum, the most important optical absorbers are the oxygenated (HbO) and deoxygenated (HbR) hemoglobin found in brain tissue.
The location of the source and detector, as well as the equipment used, affect NIR light measurements. Depending on the arrangement of the source and detector probes, the measurement of NIR light is regarded as a transmission or a reflectance measurement. Transmission can be measured by placing the source and detector on opposite sides of the tissue, provided enough NIR light passes through; however, this is only feasible for thin biological tissues such as hands and arms. For reflectance measurements, the source and detector probes are typically arranged on the same side of the tissue. Currently, three techniques are used to measure how light propagates through tissue: time-domain (TD), frequency-domain (FD), and continuous wave (CW) (Figure 1) [1,2,3,4,5].
TD systems illuminate tissues with extremely brief light pulses, which are broadened and attenuated as they travel through the tissue. Detectors in time-resolved devices capture the temporal distribution of photons as they leave the tissue, and the optical properties of the tissue can be inferred from the shape and size of this distribution [3]. In FD systems, the incident light is amplitude-modulated at a frequency between tens and hundreds of megahertz, and both the change in amplitude and the phase shift with respect to the incident signal are measured. Using both data types, it is possible to obtain unique information about the optical properties of tissues, such as the absorption and scattering coefficients [4]. The simplest and least expensive approach is the CW mode. It uses a light source that is either modulated at a frequency lower than a few tens of hertz or has a constant amplitude, and it measures only the amplitude attenuation of the light after it has traversed the tissue. Attenuation effects due to light scattering and absorption therefore cannot be separated. It does, however, have the highest signal-to-noise ratio, and it is the most common modality [5].
In fNIRs, the scalp is covered with an optode montage, a spatially distributed arrangement of sources and detectors that emit and detect near-infrared light. HbO and HbR in brain tissue are the two most prominent optical absorbers in the near-infrared range of the electromagnetic spectrum. As a result, variations in hemoglobin concentration ([HbO] and [HbR]) at a single location can be derived from differences in optical density (OD) detected at two or more wavelengths. It is common practice to use a modified version of the Beer-Lambert Law (mBLL) when calculating these changes.
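To make the mBLL calculation concrete, the following minimal sketch (Python/NumPy) converts optical-density changes measured at two wavelengths into changes in [HbO] and [HbR]. The extinction coefficients, differential pathlength factors, and source-detector separation below are illustrative placeholders, not values prescribed by this review.
```python
import numpy as np

# Modified Beer-Lambert Law (mBLL) sketch: recover concentration changes from
# optical-density changes at two wavelengths. All numbers are placeholders.
E = np.array([[1.4866, 3.8437],     # extinction coefficients [1/(mM*cm)],
              [2.5264, 1.7986]])    # rows: wavelengths; columns: [HbO, HbR]
dpf = np.array([6.0, 6.0])          # differential pathlength factor per wavelength
d = 3.0                             # source-detector separation [cm]

def mbll(delta_od):
    """delta_od: change in optical density at the two wavelengths."""
    L = d * dpf                                     # effective pathlength per wavelength
    # Solve delta_od = (E * L) @ delta_c for delta_c in the least-squares sense
    delta_c, *_ = np.linalg.lstsq(E * L[:, None], delta_od, rcond=None)
    return delta_c                                  # [delta HbO, delta HbR] in mM

print(mbll(np.array([0.01, 0.02])))
```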
Mathematically, the procedure of image reconstruction entails recovering the optical properties from the experimentally measured boundary data and can be thought of as consisting of two parts: developing a forward model of light propagation and obtaining an inverse solution to the forward problem (Figure 2). The forward problem estimates the boundary data at the detector positions from the distribution of the optical properties inside the object; this amounts to estimating the sensitivity matrix relating absorption changes at each location in the head to the measurements, or predicting the optical flux density at the detectors from a geometric model with optical parameters and the source-detector configuration. The inverse problem is based on the same general equation as the forward problem, but the goal is to recover the vector of intracranial changes that explains the vector of observed scalp values, given a specific sensitivity matrix.
This review is intended to convey the basic ideas behind the forward and inverse problems in fNIRs. For researchers who are new to the subject, it provides insight into the most up-to-date methods for tackling the problem and the types of toolboxes currently in use. It is also meant to give the reader a thorough understanding of the best ways to solve the inverse problem in fNIRs.
The paper is organized as follows: it begins with the fundamental concepts of modeling light transport through biological tissue as a forward problem, which are discussed in detail. The available methods and toolboxes applied to simulate the forward problem are thoroughly investigated. The review also includes an in-depth discussion of the inverse problem and a detailed explanation of the various available image reconstruction methods. In addition, the paper offers a comparison of several algorithms as well as conclusions and recommendations.

2. Mathematical Modeling of Light Transport in Biological Tissue as Forward Problem

The radiative transport equation (RTE), which is based on conservation of energy as light moves through a volume element of an absorbing and scattering medium, accurately describes how light propagates through biological tissue. The RTE in the TD is expressed as [8,9]
$$\left[\frac{1}{c(r)}\frac{\partial}{\partial t} + \Omega\cdot\nabla + \mu_a(r) + \mu_s(r)\right] I(r,\Omega,t) = \mu_s(r)\int_{4\pi} P(r,\Omega\cdot\Omega')\, I(r,\Omega',t)\, d\Omega' + q(r,\Omega,t)$$
where $I(r,\Omega,t)$ is the energy radiance (light intensity) as a function of position $r = (x, y, z)$, angular direction $\Omega$ (defined by zenith and azimuth angles), and time $t$. The absorption and scattering coefficients are represented by $\mu_a(r)$ and $\mu_s(r)$, respectively. The velocity of light in the turbid medium is denoted by $c(r)$, and the light source by $q(r,\Omega,t)$. Moreover, $P(r,\Omega\cdot\Omega')$ is the scattering phase function, which gives the probability that a photon travelling in direction $\Omega'$ is scattered into direction $\Omega$ during a scattering event. $P$ is normalized to unity:
$$\int_{4\pi} P(r,\Omega\cdot\Omega')\, d\Omega' = \int_{-1}^{1} P(\cos\theta)\, d(\cos\theta) = 1$$
As the scattering phase function, the Henyey–Greenstein function is widely used:
$$P(r,\Omega\cdot\Omega') = \frac{1}{4\pi}\,\frac{1 - g(r)^2}{\left[1 + g(r)^2 - 2 g(r)\,\Omega\cdot\Omega'\right]^{3/2}}$$
where $g(r)$ denotes the anisotropy factor, which ranges from $-1$ (full backscattering) to $+1$ (full forward scattering), with $0$ corresponding to isotropic scattering.
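For illustration, the short sketch below evaluates the Henyey–Greenstein phase function and draws scattering angles from it using the inverse-CDF formula commonly used in Monte Carlo codes; the anisotropy value g = 0.9 is an illustrative assumption.
```python
import numpy as np

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function evaluated at cos(theta)."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * cos_theta) ** 1.5)

def hg_sample(g, rng):
    """Draw cos(theta) from the HG distribution (inverse-CDF sampling)."""
    if abs(g) < 1e-6:
        return 2 * rng.random() - 1                 # isotropic limit
    tmp = (1 - g**2) / (1 - g + 2 * g * rng.random())
    return (1 + g**2 - tmp**2) / (2 * g)

rng = np.random.default_rng(0)
samples = np.array([hg_sample(0.9, rng) for _ in range(10000)])
print(hg_phase(1.0, 0.9), samples.mean())           # mean cosine should approach g = 0.9
```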
A numerical solution of the RTE is challenging since it is an integro-differential equation, and the computational complexity of such solutions is exceedingly high. The diffusion equation (DE), on the other hand, assumes that the radiance in an optically thick, multiply scattering medium is almost entirely isotropic; the DE is obtained from the RTE through the diffusion approximation. The TD form of the DE is:
$$\frac{1}{c(r)}\frac{\partial}{\partial t}\Phi(r,t) - \nabla\cdot\kappa(r)\nabla\Phi(r,t) + \mu_a(r)\Phi(r,t) = q(r,t)$$
where $\Phi(r,t)$ denotes the fluence rate, $\Phi(r,t) = \int_{4\pi} I(r,\Omega,t)\, d\Omega$; $\kappa(r)$ is the diffusion coefficient, $\kappa(r) = 1/\{3[\mu_a(r) + (1-g)\mu_s(r)]\}$; $q(r,t)$ is the light source, $q(r,t) = \int_{4\pi} q(r,\Omega,t)\, d\Omega$; and the reduced scattering coefficient is defined as $\mu_s' = (1-g)\mu_s$.
Similarly, the DE for the FD and CW cases is given as follows [10]:
$$\frac{i\omega}{c(r)}\Phi(r,\omega) - \nabla\cdot\kappa(r)\nabla\Phi(r,\omega) + \mu_a(r)\Phi(r,\omega) = q(r,\omega)$$
The fluence rate at modulation frequency $\omega$ due to the light source $q(r,\omega)$ in the medium is denoted by $\Phi(r,\omega)$. In an FD instrument the frequency $\omega \neq 0$, whereas in a CW instrument the frequency is equal to zero. In the fNIRs context, the DA equation is generally nonlinear, so it is linearized as described in [11] and the Rytov approximation is then applied [12]. When performing functional brain imaging, the absorption coefficient is assumed to be proportional to the hemoglobin change, whereas the scattering coefficient is assumed to be constant. Under these assumptions, the Rytov approximation can be formulated as [13]
$$y = A x$$
where $A$ denotes the sensitivity matrix determined by the absorption distribution within the brain, $y$ is the difference in the log-ratio of the optical density recorded before and after the blood-flow change, and $x$ denotes the change in the absorption coefficient.

3. Methods for Forward Model Simulation

The methods used to solve the forward problem are discussed in this section. The forward problem, in general, considers the modeling of light propagation from the sources to the sensors across the head. The solutions to this problem can be divided into three categories: (i) analytical techniques, (ii) numerical techniques, and (iii) stochastic techniques.

3.1. Analytical Methods

The term “Green’s function approach” generally refers to the analytical method. When the source is represented as a spatial and temporal delta function, the solution can be expressed using Green’s functions. First, the Green’s functions themselves must be determined; more general solutions can then be constructed from them. In homogeneous media, the convolution of these Green’s functions with the source term yields the full fluence-rate solution, which is simple to compute.
The most basic analytical solution of the TD-DE, for an infinite homogeneous medium, is given by [14]
$$\phi(r,t) = \frac{c}{(4\pi D c t)^{3/2}}\, \exp\left(-\frac{r^2}{4 D c t} - \mu_a c t\right)$$
where $r$ is the distance from a point impulse source at the origin. The authors of [15] first used the mirror-image source method to find analytical solutions of the TD-DE for semi-infinite and slab media with a zero boundary condition. Pulsed laser sources (TD systems) that are sufficiently close to an impulse can then be handled with convolution methods [16].
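A minimal numerical sketch of this infinite-medium solution is given below; the optical properties, units, and source-detector distance are illustrative assumptions.
```python
import numpy as np

def fluence_td_infinite(r, t, mu_a=0.01, mu_sp=1.0, c=214.0):
    """Fluence rate of a point impulse source in an infinite homogeneous medium.
    r: distance from the source [mm]; t: time [ns];
    mu_a, mu_sp: absorption and reduced scattering coefficients [1/mm];
    c: speed of light in the medium [mm/ns]. Default values are illustrative."""
    D = 1.0 / (3.0 * (mu_a + mu_sp))                # diffusion coefficient [mm]
    return (c / (4.0 * np.pi * D * c * t) ** 1.5
            * np.exp(-r**2 / (4.0 * D * c * t) - mu_a * c * t))

# Temporal point-spread function 20 mm from the source
t = np.linspace(0.05, 5.0, 200)                     # ns
tpsf = fluence_td_infinite(20.0, t)
print(t[np.argmax(tpsf)])                           # time of the peak fluence
```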
Even today, the Green’s function approach is most commonly used to find solutions of the DE in regular geometries [15,17]. For instance, the researchers in [17] derived solutions for an infinite cylinder containing an infinite line source, and also used the Green’s function method to solve the DE for a point source in several regular geometries. In addition, the authors of [18] solved the DE for concentric spheres using a series expansion method. In a separate piece of work, the authors of [19] solved the DE in the CW, frequency, and time domains using the Green’s function approach with extended boundary conditions for a multiple-layered finite cylinder. In addition, researchers [20] provided a CW solution for a point source that made use of extrapolated boundary conditions in cylindrical coordinates. Finally, by employing a number of different integral transformations, Liemert and Kienle derived specific solutions of the DE [21] for a homogeneous turbid medium with a point source.
In recent research, Erkol et al. [22] derived analytical solutions of the DE in two and three dimensions for the steady-state CW case in cylindrical media. In this case, a Dirac function with different strengths is used to model the light source, and an integral method is used to obtain the Green’s function for the Robin boundary condition. The method is highly adaptable, allowing any boundary condition to be implemented (i.e., it is not limited to the Robin boundary condition), and it is also applicable to other regular geometries, such as spheres. Because finding solutions of the DE at the boundary is the primary focus of their study, this method is well suited for determining the DOI in homogeneous or nearly homogeneous media.
In theory, analytical solutions could be a direct and accurate way to model light transport, but the complexity of biological tissue makes them hard to derive. Analytical solutions of the RTE and the DA are fast to evaluate, but they can only be used for certain specific geometries with nearly homogeneous interiors; therefore, numerical methods are usually used to solve the RTE and DA models. The critical constraint on their applicability is that the solutions are only available for simple homogeneous geometries [17], which induces severe modeling errors by providing a poor approximation [23]. In some cases, such as slab media, these solutions have been obtained for the time-domain DE [24].

3.2. Numerical Methods

In diffuse optical imaging, numerical methods are widely used because they can simulate light propagation through realistic, complex geometries and different types of media. Numerical solutions of the forward model are obtained from the governing partial differential equation, which can be solved in a variety of ways, including the finite difference method (FDM), the finite volume method (FVM), the boundary element method (BEM), and the finite element method (FEM).

3.2.1. Finite Difference Method

In the finite difference method, the medium is discretized on a regular grid, and complex shapes are approximated around the points inside the grid; points outside the required shape are assigned absorption values in the thousands. It has been demonstrated that this method produces more accurate results than other approaches such as Monte Carlo and analytic solutions [25]. However, because the FEM is so simple to use when dealing with complex geometries, the FDM is rarely used in DOT applications. It has nevertheless been used to determine the distribution of light in the human brain and the cranium of a rat [26,27]. For additional details of its use in the literature, see the following studies [28,29,30].
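As a concrete illustration of the finite-difference idea, the sketch below discretizes the CW diffusion equation on a small 2D grid with a homogeneous medium, a central point source, and zero-fluence boundaries; the grid size, optical properties, and boundary treatment are illustrative simplifications of what production FDM solvers do.
```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

# CW diffusion equation, -kappa * Laplacian(Phi) + mu_a * Phi = q, on a 2D grid
# with Phi = 0 on the boundary (a simplification of realistic boundary conditions).
n, h = 41, 1.0                       # grid points per side, spacing [mm]
mu_a, mu_sp = 0.01, 1.0              # absorption, reduced scattering [1/mm]
kappa = 1.0 / (3.0 * (mu_a + mu_sp))

N = n * n
A = lil_matrix((N, N))
q = np.zeros(N)
idx = lambda i, j: i * n + j

for i in range(n):
    for j in range(n):
        k = idx(i, j)
        if i in (0, n - 1) or j in (0, n - 1):
            A[k, k] = 1.0                            # Dirichlet boundary: Phi = 0
            continue
        A[k, k] = 4.0 * kappa / h**2 + mu_a          # 5-point Laplacian stencil
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            A[k, idx(i + di, j + dj)] = -kappa / h**2

q[idx(n // 2, n // 2)] = 1.0                          # point source at the centre
phi = spsolve(A.tocsr(), q).reshape(n, n)
print(phi[n // 2, n // 2], phi[n // 2, n // 2 + 10])  # fluence at and away from the source
```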

3.2.2. Finite Volume Method

Like the finite element and finite difference methods, the FVM calculates values at discrete points of a meshed geometry. In the FVM (in a cell-centred formulation), each element is known as a control volume (CV): a distinct region of space over which the PDE is integrated. In this integration, the volumetric sources and the surface fluxes flowing into and out of the CV are evaluated, and Gauss’ theorem is used to relate the volume integrals of divergence terms to surface integrals. Surface derivatives are approximated with interpolation functions, much as in the FDM. The name of the method comes from the fact that each node in the mesh is associated with a small volume of the domain.
The primary advantage of this method over the FDM is that it does not require structured grids, so the effort of converting a given mesh into a structured numerical grid can be avoided entirely. As with the FDM, the approximation leads to a discrete solution; however, the variables are usually located at the centres of the cells rather than at the nodal points (although face-centred formulations also exist). Values of field variables at locations other than the storage locations (such as vertices) are obtained by interpolation.
The finite volume technique is widely used in optical tomography reconstructions [31,32] because of its low resource requirements. It is, however, time-consuming to run [33], despite having the high level of mesh flexibility needed for modelling complex shapes.

3.2.3. Boundary Element Method

The BEM has evolved into a viable alternative mathematical technique over the last twenty years because it requires only surface discretisation and is hence less computationally expensive. Like the FDM and FEM, the BEM calculates values at discrete points to solve PDEs; its simplicity derives from the fact that it meshes only the boundary of the body rather than the full domain. In DOT, the BEM uses Green’s second identity to describe the field via its integral on the surface, i.e., the photon density and fluxes. In large-scale geometries [34,35,36,37,38,39] it outperforms the FEM in terms of performance, but it cannot accurately predict light propagation in complicated heterogeneous domains. This is attributed to the complex nature of the boundaries encountered between tissue interfaces. Hybrid or coupled BEM-FEM methods have also been employed, showing that, compared to analytical solutions, the meshing task can be made easier and the size of the problem reduced while the model’s correctness is preserved.
The BEM has the advantage over the FEM that the domain of interest does not have to be subdivided into elements; only its boundary must be described. As a result, meshing effort is reduced and the system matrices are smaller. However, the BEM also has disadvantages relative to the FEM: the BEM matrices are fully populated with complex, frequency-dependent coefficients, which reduces the solution’s efficiency, and singularities may occur in the solution, which must be avoided [6].

3.2.4. Finite Element Method

In optical imaging applications with irregular boundaries [40,41,42,43,44,45,46], the FEM is one of the most common ways to solve the DE. The FEM is a mathematical method for approximating boundary values and producing absorption spectra and optical flux for a given distribution of absorption and diffusion coefficients. The method employs a collection of basis functions defined on a mesh, also known as interpolation functions, to convert the PDE into a system of equations in a finite-dimensional space [41]. Because of its ability to handle irregular geometries [47], it has been utilized to solve both the RTE and DE models [41,48,49]. Owing to its ease in handling complex geometries and modeling boundary effects, the FEM is more versatile than other methods, including the finite difference method. The FEM is a variational method that uses a family of finite-dimensional basis functions to approximate the solution.
Researchers favour the FEM because it uses a piecewise representation of the solution in terms of chosen basis functions. The computational domain is divided into smaller regions called “finite elements”, and the solution on each element is built from the basis functions. The typical procedure for obtaining the discrete equations is to restate the conservation equation in weak form, write the field variables in terms of the basis functions, multiply the equation by appropriate test functions, and integrate over an element. Because the FEM solution is expressed in terms of specific basis functions, much more is known about the solution than with the FDM or FVM. This can be a double-edged sword, because the selection of basis functions is critical and boundary conditions may be more difficult to formulate. Again, a system of equations (usually for nodal values) is obtained and must be solved to obtain the solution.
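The sketch below assembles and solves a 1D finite-element system for the CW diffusion equation using linear hat basis functions. It is a minimal illustration of the assembly-and-solve procedure described above, with illustrative parameter values, and is not a substitute for packages such as NIRFAST or TOAST++.
```python
import numpy as np

# 1D FEM for -d/dx(kappa dPhi/dx) + mu_a Phi = q with linear elements, Phi = 0 at both ends.
n_el, L = 50, 50.0                  # number of elements, domain length [mm]
h = L / n_el
mu_a, mu_sp = 0.01, 1.0             # [1/mm]
kappa = 1.0 / (3.0 * (mu_a + mu_sp))

n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))    # global system matrix
f = np.zeros(n_nodes)               # load (source) vector

# Element matrices for linear basis functions: stiffness (kappa term) and mass (mu_a term)
ke = kappa / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
me = mu_a * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])

for e in range(n_el):
    nodes = [e, e + 1]
    K[np.ix_(nodes, nodes)] += ke + me              # assemble element contributions

f[n_nodes // 2] = 1.0                               # point source at the central node

for b in (0, n_nodes - 1):                          # enforce Dirichlet boundaries
    K[b, :] = 0.0
    K[b, b] = 1.0
    f[b] = 0.0

phi = np.linalg.solve(K, f)
print(phi[n_nodes // 2])                            # fluence at the source node
```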

3.3. Stochastic Methods

The Monte Carlo (MC) simulation is the most widely used stochastic approach for modeling photon transport through tissue, and it is used with random-walk or Markov-chain models to provide the best results. MC models simulate the propagation of a photon or photon packet across a medium by tracing the photon’s passage through the medium and modeling each event the photon encounters sequentially. More than two decades ago, it became a standard method for simulating light transport in tissues because of its versatility and rigour in dealing with turbid media with complicated structures.
The MC method entails the following steps. In the first step, the three-dimensional tissue geometry is divided into voxels representing the various tissue types. In the second step, optical properties such as scattering and absorption are assigned to each voxel. In the third step, a photon is “injected” at a specific location on the surface of this geometry. In the fourth step, the photon is moved through the tissue via probabilistic scattering and absorption. Steps 3–4 are repeated hundreds of thousands or even millions of times to estimate the fluence (photon weight) deposited in, and the path travelled through, each tissue type [50].
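A minimal illustration of steps 1–4 is sketched below: photon packets are launched into a homogeneous medium, moved with exponentially distributed step lengths, scattered isotropically, and attenuated by weight reduction. The photon count, optical properties, and the omission of anisotropic scattering, refractive-index mismatch, and realistic geometry are all simplifications that full codes such as MCML or MCX handle properly.
```python
import numpy as np

rng = np.random.default_rng(0)
mu_a, mu_s = 0.1, 1.0                 # absorption and scattering coefficients [1/mm]
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
n_photons, n_bins, dz = 5000, 60, 1.0
absorbed = np.zeros(n_bins)           # weight deposited per depth bin

for _ in range(n_photons):
    pos = np.zeros(3)
    direc = np.array([0.0, 0.0, 1.0])               # launched straight into the tissue (+z)
    w = 1.0
    while w > 1e-3:
        step = -np.log(rng.random()) / mu_t          # exponentially distributed free path
        pos = pos + step * direc
        if pos[2] < 0:                               # photon escaped through the surface
            break
        b = min(int(pos[2] / dz), n_bins - 1)
        absorbed[b] += w * (1 - albedo)              # deposit the absorbed fraction
        w *= albedo
        cos_t = 2 * rng.random() - 1                 # isotropic scattering: new direction
        phi_ang = 2 * np.pi * rng.random()
        sin_t = np.sqrt(1 - cos_t**2)
        direc = np.array([sin_t * np.cos(phi_ang), sin_t * np.sin(phi_ang), cos_t])

print(absorbed[:5] / n_photons)       # average absorbed weight versus depth
```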
Interest in using MC to calculate the forward model for optical tomography has resurfaced in recent years, thanks to the combination of efficient MC formulations with improved processing capacity and geometrical complexity [51,52].

4. Types of Toolboxes for Forward Model Simulation

A wide variety of software packages and toolboxes is currently in use to simulate the forward problem. Some of them are listed and explained in greater detail below.

4.1. MCML

Researchers [50] first introduced the programming tool known as MC simulation for light propagation in multi-layered tissue (MCML) in planar geometry; due to its user-friendliness, it is still widely used today. The multilayer model is greatly simplified: the simulation geometry is set by the number of layers and the thickness of each layer, and each layer represents a homogeneous part of the simulated medium. The MCML simulation code is written in ANSI C, a standard programming language. Figure 3 shows the main steps of the MCML simulation process, which are explained in [53].

4.2. NIRFAST

The Near-infrared Frequency-domain Absorption and Scattering Tomography (NIRFAST) program is an FEM-based package introduced in 2009 [54] and offered free of charge. The package provides many MATLAB .m files and executables, which the user can customise to incorporate the programme into their measurement apparatus (Figure 4; for details, see [54]). NIRFAST requires that a finite element mesh be provided before any simulation can be started; supplying this mesh, in either 2D or 3D format, is the user’s responsibility, as NIRFAST cannot generate a mesh on its own. The DE is converted into a set of linear equations that are solved on the finite element grid, and the fluence rate is represented at each node of the mesh.
NIRFAST has been shown to work well for geometries with a single boundary condition, especially the modified Robin (Type III) condition in which air is assumed to surround the simulation region (as implemented in NIRFAST). NIRFAST has been developed for 2D and 3D and is widely used for FEM analysis in forward modelling with image reconstruction. It is available for free via the following URL: http://newton.ex.ac.uk/research/biomedical/hd/NIRFAST.html (accessed on 5 October 2022).

4.3. TOAST++

To tackle DOT’s forward and inverse problems, Martin Schweiger and Simon R. Arridge [55] developed an efficient open-source software framework that many researchers use. Originally built in C++, it was later extended into a toolbox that includes a set of MATLAB routines and Python code. This software suite contains libraries for sparse-matrix computation, finite-element and alternative numerical modeling, nonlinear inversion, MATLAB and Python bindings, and visualization tools (see Figure 5). The toolbox offers parallel matrix assembly and solver capabilities for distributed and shared memory architectures and graphics-processor platforms, enabling scalability across these architectures. In this way, researchers can quickly design analysis tools without first developing the low-level sparse-matrix and finite-element subroutines.

4.4. MCX/MMC

Qianqian Fang created the open-source MC simulators Monte Carlo eXtreme (MCX) and Mesh-based Monte Carlo (MMC) in 2009 [56]. These simulators are two of the most advanced Monte Carlo programs available today, and researchers use them to simulate light propagation as photons traverse complex biological tissues [56,57]. The first versions of MCX and MMC were distributed as binary executables. Because of MATLAB’s popularity among academic researchers, MEX variants such as MCXLAB, MMCLAB, and voxel-based MC (vMC) have been developed to make them more user-friendly for scientists. These open-source MC programs are essential resources for academics and students interested in modeling light interaction in tissue and understanding the fundamental theory [58,59]. Figure 6 depicts the basic steps of the MCX simulation process for the forward problem.

4.5. ValoMC

Based on the MC method, Leino et al. [60] created ValoMC, an open-source program. The software package computes quantities such as the photon fluence within the computational domain and the photon flux at the domain boundary. It is a useful tool for researchers because it can simulate complex measurement geometries with various light sources, intensity-modulated light, and spatially varying optical parameter distributions. The MATLAB interface (The MathWorks Inc., Natick, MA, USA) is designed to be easy to use and to let users set up and solve problems quickly. The simulation code is written in C++ and is parallelized using the OpenMP library. The software can be downloaded from https://inverselight.github.io/ValoMC/ (accessed on 5 October 2022).
Many approaches to solving the forward problem have been reported in recent years; Table 1 provides an overview of these methods.

5. Inverse Problem

In image reconstruction, the inverse problem consists of locating the changes in absorption along the paths of the diffusely propagating light, using the relationship between the scalp measurements and the law of propagation. To solve the image reconstruction problem, the forward model must be inverted, which in the presence of noise can be written as the linear underdetermined inverse problem
$$y = A x + \gamma$$
where $\gamma$ is the noise present in the data and $A$ is the Jacobian/sensitivity matrix.
The Jacobian matrix relates the sensitivity of the light intensity measured on the surface of the head to the optical properties of the head itself. The image reconstruction problem requires the direct inversion of the Jacobian/sensitivity matrix, which makes it a highly underdetermined and ill-posed problem. Because of the ill-conditioning of the system, regularization techniques must be employed to obtain a reliable solution. Several image reconstruction methods for the solution of inverse problems have been developed in the literature. Two fundamental methodologies, regularization-based methods and Bayesian estimation methods, have dominated the literature for a long time.

6. Methods for Inverse Problem Solution

The various methods employed to solve the inverse problem (Equation (2)) are explained in further detail in this section. Among these methods are back projection, singular value decomposition (SVD), truncated singular value decomposition (tSVD), least squares QR decomposition (LSQR), regularized least squares QR decomposition (rLSQR), the minimum norm estimate (MNE), the weighted minimum norm estimate (WMNE), low-resolution electromagnetic tomography (LORETA), the L1-norm, hierarchical Bayesian (HB) estimation as a MAP estimate, expectation-maximization (EM), maximum entropy on the mean (MEM), and Bayesian model averaging (BMA).
The basic formulation of these inverse methods for the solution of fNIRs is given in this section, and the methods are described in terms of their mathematical form. The performance of the inverse methods, according to the previously published literature, is then discussed. The comparison is made using a variety of parameters, including sparsity, spatial resolution, localization error, image quality, root mean square error, and quantitative and qualitative reconstruction, among others.

6.1. Back Projection

Back projection is the inverse technique of projection. While projection aims to extract data from an image, back projection seeks to recover the image from the data calculated during the projection process. As a result, the back-projection process takes as input the results matrix returned by the projection process, as well as any data related to the projection process that may be useful in completing the reconstruction. The BP method in image reconstruction is straightforward and consists of back-projecting the boundary measurements through the sensitivity matrix in the following manner [61,62]:
$$x_{BP} = A^T y$$
This method assumes that the sensitivity matrix is orthogonal in a broad sense (for example, that its transpose is an estimate of its pseudo-inverse), which is not always the case. Nevertheless, this method has been employed in the literature [62,63,64], even though it typically overestimates the amplitude when multiple measurements are taken simultaneously.
Back projection compares favourably with iterative methods because it produces images faster and with less processing power. However, the lack of quantitative information can make it difficult to estimate blood oxygenation or to use the approach for screening applications such as breast mammography, even though most diagnostic imaging techniques used today, such as MRI and CT, rely on qualitative information to make important diagnoses, such as finding tumors and their locations. Back projection is also computationally efficient, but it has a low spatial resolution, which makes it hard to distinguish multiple light-absorbing objects.

6.2. Singular Value Decomposition (SVD) and Truncated Singular Value Decomposition (tSVD)

The SVD and its hybrid version, the tSVD, seek the pseudo-inverse of the sensitivity matrix while discarding the smallest singular values, which cause numerical instability (the resulting solution retains the main contributions of the sensitivity matrix) [65].
Considering $U_i$ and $V_i$ to be the i-th column vectors of $U$ and $V$, respectively, the SVD decomposes $A$ into a sum of rank-one matrices:
$$A = \sum_{i=1}^{n} \sigma_i U_i V_i^T$$
where $U$ and $V$ have orthonormal columns and the $\sigma_i$ are the nonnegative singular values (in descending order). Multiplying the inverted form by the boundary measurements gives the solution:
$$x_{SVD/tSVD} = \sum_{i=1}^{n} \frac{U_i^T y}{\sigma_i}\, V_i$$
As reported in the literature, Gupta, Saurabh, et al. [66] compared the SVD method with the Levenberg–Marquardt method; SVD is computationally efficient and was applied to experimental data. Furthermore, prior information was used in conjunction with SVD by Zhan, Yuxuan, et al. [67] to significantly reduce the crosstalk between the retrieved parameters. On the other hand, the tSVD solution is known for producing blurry reconstructed images [65].
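The sketch below illustrates SVD/tSVD reconstruction with NumPy. The sensitivity matrix is a random stand-in, since a real A would come from a forward model, and the truncation level is an illustrative choice.
```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(64, 400))                    # measurements x voxels (underdetermined)
x_true = np.zeros(400); x_true[180:185] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=64)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tsvd_reconstruct(k):
    """Keep only the k largest singular values (k = rank of A gives the plain SVD solution)."""
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

x_tsvd = tsvd_reconstruct(k=30)
print(np.linalg.norm(A @ x_tsvd - y))             # data misfit of the truncated solution
```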

6.3. Least Square by QR Decomposition (LSQR) and Regularized LSQR (rLSQR)

The LSQR method of Paige and Saunders [68] and its hybrid version, the rLSQR method, are both based on Tikhonov regularization, with the rLSQR adding a further regularizing term. The mathematical formulation of LSQR/rLSQR is given by [69]:
$$x_{LSQR/rLSQR} = \arg\min \left\{ \|y - A x\|^2 + \alpha \|x - x_{initial}\|^2 \right\}$$
As opposed to the previous techniques, this one does not require that the matrix $A$ be stored explicitly; only one matrix-vector product with $A$ and one with $A^T$ are needed per iteration.
The LSQR was presented by Prakash, Jaya, and Phaneendra K. Yalavarthy [70] in comparison with the regularized minimum residual method (MRM); the LSQR method outperforms the MRM method in terms of computational time, number of iterations, and image quality, and was applied to experimental data obtained from gelatine phantoms. Furthermore, C. B. Shaw et al. [71] demonstrated the computational efficiency and effectiveness of the LSQR using a simulated blood-vascular phantom experiment. Both quantitative and qualitative reconstructions benefit from the LSQR technique. In addition, hybrid algorithms that incorporate variations and modifications of least squares image reconstruction have been developed and used in the literature [72,73,74].
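A minimal LSQR sketch using SciPy is shown below. The damp parameter plays the role of the regularization weight, although it penalizes the norm of x toward zero rather than toward an initial estimate, which is a simplification of the rLSQR formulation above; the matrix is again a random stand-in for the Jacobian.
```python
import numpy as np
from scipy.sparse.linalg import lsqr

# LSQR solves min ||y - A x||^2 + damp^2 ||x||^2 using only products with A and A^T.
rng = np.random.default_rng(2)
A = rng.normal(size=(64, 400))
x_true = np.concatenate([np.zeros(200), np.ones(5), np.zeros(195)])
y = A @ x_true + 0.01 * rng.normal(size=64)

x_lsqr = lsqr(A, y, damp=0.1, iter_lim=200)[0]    # damp acts as the regularization weight
print(np.linalg.norm(A @ x_lsqr - y))
```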

6.4. Minimum Norm Estimate (MNE) and Weighted Minimum Norm Estimate (WMNE)

The MNE is the most common inverse method. It was originally created to solve the inverse problem of MEG, and the minimum-norm solution is used to localize the EEG source. The mathematical formulation of the MNE is:
$$x_{MNE} = \arg\min \left\{ \|y - A x\|^2 + \alpha \|x\|^2 \right\}$$
Similarly, the WMNE can be written as follows:
$$x_{WMNE} = \arg\min \left\{ \|y - A x\|^2 + \alpha \|W x\|^2 \right\}$$
Like the tSVD, the MNE solution is known for producing scattered and blurry reconstruction images [75].
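Both estimates have closed-form solutions, sketched below. The choice of W as the column norms of A (a common depth-compensation weighting) is an illustrative assumption rather than the specific weighting used in the cited studies.
```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(64, 400))        # random stand-in for the sensitivity matrix
y = rng.normal(size=64)
alpha = 1.0
m = A.shape[0]

# MNE: x = A^T (A A^T + alpha I)^{-1} y  (dual form, convenient when A is underdetermined)
x_mne = A.T @ np.linalg.solve(A @ A.T + alpha * np.eye(m), y)

# WMNE: minimize ||y - A x||^2 + alpha ||W x||^2 with a diagonal sensitivity weighting
w = np.linalg.norm(A, axis=0)
x_wmne = np.linalg.solve(A.T @ A + alpha * np.diag(w**2), A.T @ y)

print(np.linalg.norm(x_mne), np.linalg.norm(x_wmne))
```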

6.5. Low-Resolution Electromagnetic Tomography (LORETA)

LORETA was initially created and used to locate EEG sources by Pascual-Marqui et al. [76]. It has been used as a regularization method for fNIRs; like the MNE method it relies on an L2-norm formulation, but it incorporates the Laplacian operator [11]. The mathematical formula for LORETA is as follows:
$$x_{LORETA} = \arg\min \left\{ \|y - A x\|^2 + \alpha \|L x\|^2 \right\}$$
It can be interpreted as a weighted form of the MNE solution that aims for maximum smoothness across space. Despite this, it continues to generate results with a large spatial extent.

6.6. L1-Norm

The $L_1$-norm method was developed and applied for the EEG/MEG localization problem. Its mathematical formulation is:
$$x_{L_1} = \arg\min \left\{ \|y - A x\|^2 + \alpha \|x\|_1 \right\}$$
The $L_1$-norm method has been demonstrated to have improved noise tolerance and enhanced convergence properties. It has also been shown to yield sparser solutions than other linear estimation approaches.
According to Habermehl, Christina, et al. [65], the L1-norm delivers the best results on experimental data (a gelatine cylindrical phantom that simulates breast tissue) compared with L0, L2, tSVD, and wMNE; it also demonstrates that incorporating a sparse algorithm into the procedure has the potential to improve accuracy. Meanwhile, including sparsity via lp-norm minimization (0 < p < 1), as presented by Prakash, Jaya, et al., holds promise for improving image quality compared with the L0-norm method [77,78]. The results of a numerical experiment conducted by S. Okawa et al. [79] demonstrate that lp sparsity regularisation improves spatial resolution and describe how the reconstructed region is affected by the value of $p$: a lower $p$-value yields a more highly localized target.
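One common way to solve the L1-regularized problem is the iterative shrinkage-thresholding algorithm (ISTA), sketched below; ISTA is used here purely for illustration and is not necessarily the solver employed in the cited studies, and the step size, regularization weight, and iteration count are illustrative.
```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(64, 400))                    # random stand-in for the sensitivity matrix
x_true = np.zeros(400); x_true[100:104] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=64)

alpha = 0.5
L_step = np.linalg.norm(A, 2) ** 2                # squared spectral norm (step-size control)
x = np.zeros(400)
for _ in range(500):
    grad = A.T @ (A @ x - y)                      # half-gradient of the data term
    z = x - grad / L_step
    # soft-thresholding (proximal step for the L1 penalty)
    x = np.sign(z) * np.maximum(np.abs(z) - alpha / (2 * L_step), 0.0)

print(np.count_nonzero(np.abs(x) > 1e-3))         # size of the recovered sparse support
```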
Another image reconstruction approach is Bayesian estimation, which relies on a probabilistic model of the observations and constraints, namely the likelihood function and the prior distribution.

6.7. Hierarchical Bayesian as MAP Estimate

The HB approach was initially developed and applied for the MEG localization problem [80]. In this method, the observation and regularization are described as a hierarchical probabilistic model. The HB estimation method uses an ARD prior to introduce a regularization parameter at each voxel position, which controls the degree of penalty. The basic formulation of the HB method for fNIRs is presented here (see [81] for detailed information).
i.
Considering the measurement noise $\gamma$ as a Gaussian distribution $N(0, \nu)$, the forward problem is modeled probabilistically as
$$P(y|x) \sim N(Ax, \nu)$$
where $\nu$ is the covariance matrix.
ii.
Assume the data prior distribution and the likelihood function are $\log P(x|y)$ and $\log P(x|C)$, respectively.
iii.
Computation of the posterior distribution of the unknown as
$$x_{MAP} = \arg\max \left[ \log P(x|y) + \log P(x|C) \right]$$
where $C$ is the anatomical prior image. The joint probability can be factorized as
$$P(x, y, \theta, \vartheta) = P(y|x)\, P(x|\theta, \vartheta)\, P(\theta)\, P(\vartheta)$$
iv.
By applying the variational Bayesian (VB) method, the posterior can be written in terms of the variational free energy
$$F\left(Q(x, \theta, \vartheta)\right) = \int Q(x, \theta, \vartheta)\, \log\frac{P(x, y, \theta, \vartheta)}{Q(x, \theta, \vartheta)}\, d\theta\, d\vartheta\, dx$$
with
$$Q(x, \theta, \vartheta) = Q(x)\, Q(\theta)\, Q(\vartheta)$$
The image is reconstructed by maximizing the free energy, which corresponds to applying Bayes’ rule to obtain the posterior distribution.
In contrast to more traditional regularization methods, the idea of using Bayesian regularization to solve the fNIRs problem is relatively recent. In a Bayesian paradigm, where all unknowns are treated as random variables, the prior density expresses what is believed about the solution before the data are considered; the prior therefore plays a role similar to the penalty term in conventional regularization. The traditional Tikhonov regularized solution and the Bayesian maximum a posteriori (MAP) estimate have a well-established relationship, with classes of penalty functions and priors favoring similar types of solutions [82].
The HB algorithm for fNIRs has been proposed and used to render the estimated blood-flow changes in the cortex and scalp less noisy and smoother, with phantoms used to test the performance and improve the accuracy of depth and spatial resolution [83]. Recent research by Shimokawa, Takeaki, et al. [83] provides the HB method with an ARD prior for fNIRs, together with a two-step procedure: sensitivity-normalized Tikhonov regularisation is used in the first step to obtain a preliminary estimate, which is then refined in the second step by the hierarchical Bayesian estimation method. Furthermore, in another study, T. Shimokawa et al. [84] combine the HB method with a Laplacian smoothness prior and spatially variant Tikhonov regularization; two-layer phantom experiments and MRI-based head-model simulations were carried out. Based on those results, the proposed algorithm estimates the smooth, superficial activity in the scalp while also recovering the deep, localized activity in the cortical region. T. Aihara et al. also used the HB method to estimate spontaneous changes in cortical hemodynamics [85], in contrast to the task-related changes discussed in [86] for fMRI data. P. Hiltunen et al. [87] used Bayesian and EM methods, as well as Tikhonov regularisation, in another study; with a Bayesian technique and a Gaussian prior, estimates of both the spatial distribution and the physical parameters can be obtained concurrently. The proposed algorithm improves the contrast of the reconstructed images with a high degree of spatial precision.

6.8. Expectation-Maximization (EM)

The expectation-maximization (EM) method in the fNIRs context was developed and employed by Cao et al. [88], and the mathematical description of the EM method is as follows.
By incorporating the missing data and maximising the complete penalised log-likelihood, the maximum penalised likelihood estimator (MPLE) is obtained:
$$x_{EM} = \arg\max \left[ -\frac{\|y - A x\|^2}{2\delta^2} - \alpha \|x\|_1 \right]$$
The EM procedure generates a sequence of approximations $x^k$ by alternating the two steps below until a stopping criterion is fulfilled.
E-step: given the observed data $y$ and the current estimate $\mu^k$, the conditional expectation of the complete log-likelihood is computed as
$$x^k = \mu^k + \frac{\beta^2}{\delta^2} A^T \left( y - A \mu^k \right)$$
M-step: Update the estimated value of x k
$$x^{k+1} = \arg\max_{\mu} \left[ -\frac{\|\mu - x^k\|^2}{2\delta^2} - \alpha \|\mu\|_1 \right]$$
This equation can be solved separately for each element $x_l^{k+1}$ as
$$x_l^{k+1} = \arg\max_{\mu_l} \left[ -\frac{\mu_l^2 - 2\mu_l x_l^k}{2\delta^2} - \alpha |\mu_l| \right]$$
where $x_l$ is the l-th element; this one-dimensional problem can be solved with the soft-thresholding technique.

6.9. Maximum Entropy on the Mean (MEM)

The MEM method was first introduced by [89] and has since been utilised and rigorously analysed in the context of EEG/MEG source imaging research [90,91]. MEM is not a new statistical method in the traditional sense, but rather a stochastic approach that leads to deterministic methods when some discretization step tends toward zero. Cai et al. [92] recently employed and evaluated the MEM approach for solving the inverse problem of fNIRs reconstruction.
Consider $x$ as a random variable with the probability measure $dP(x) = P(x)\,dx$; then the unique solution $dP^*(x)$ is obtained as
$$dP^*(x) = \arg\max_{dP(x) \in C_m} S_\nu\left(dP(x)\right)$$
where $S_\nu(dP(x))$ is the Kullback–Leibler divergence, or $\nu$-entropy, of $dP(x)$, defined as
$$S_\nu\left(dP(x)\right) = -\int_x \log\left(\frac{dP(x)}{d\nu(x)}\right) dP(x) = -\int_x f(x)\, \log f(x)\, d\nu(x)$$
where $d\nu(x)$ is the prior distribution. The MEM solution is then obtained from the gradient of the free energy $F_\nu$ as follows:
$$x_{MEM} = \left. \nabla_\xi F_\nu(\xi) \right|_{\xi = A^T \lambda^*}$$
where $\lambda^* = \arg\max_\lambda D(\lambda)$, with the cost function $D(\lambda) = \lambda^T y - F_\nu\left(A^T \lambda\right) - \frac{1}{2} \lambda^T \nu \nu^T \lambda$.

6.10. Bayesian Model Averaging

The fundamental concept of BMA theory, initially developed and applied to MEG/EEG, is a mixture of Bayesian hierarchical models that can be used to estimate highly parameterized models [93]. Using Bayesian inference (BI) assumptions based on the given model or data (the prior probability distribution), BMA can be used to construct the posterior distribution of the quantities of interest [94]. The basic mathematical description of BMA for fNIRs image-based model reconstruction is given below (see [11] for additional information).
i.
Consider the basic assumption of the Bayesian formulation of the given problem as a normal probability density function,
$$p(y|x, \varphi) = N(Ax, \varphi)$$
where $\varphi$ represents the unknown hyperparameters [11].
ii.
The estimation of the parameters, as the first level of inference, is described using Bayes’ theorem by the posterior probability density function
$$p(x|y, \varphi, H_k) = \frac{p(y|x, \varphi, H_k)\, p(x|\varphi, H_k)}{\int p(y|x, \varphi, H_k)\, p(x|\varphi, H_k)\, dx}$$
where $H_k$ represents the k-th model considered for the given problem.
iii.
The estimation of the hyperparameters, as the second level of inference, is described by the posterior probability density function
$$p(\varphi|y, H_k) = \frac{p(y|\varphi, H_k)\, p(\varphi|H_k)}{\int p(y|\varphi, H_k)\, p(\varphi|H_k)\, d\varphi}$$
iv.
The estimation of the model, as the third level of inference, is described by the posterior probability density function
$$p(H_k|y) = \frac{p(y|H_k)\, p(H_k)}{\sum_k p(y|H_k)\, p(H_k)}$$
v.
Lastly, marginalizing over the first, second, and third levels of inference gives the posterior pdf
$$p(x|y) = \sum_{\text{all } H_k} p(x|y, H_k)\, p(H_k|y)$$
This procedure considers all possible solutions for every model (using the first and second levels of inference) and averages them, weighted by each model’s posterior model probability (PMP).
Furthermore, J. Tremblay et al. [11] applied BMA to fNIRs, and their results show that BMA performs better in terms of localization error, ROI, and RMSE.

7. Conclusions

fNIRs is a practical approach due to its portability, minimal interference from magnetic and electrical fields, hyperscanning capability, ease of use with neonates, and the fact that it requires no ongoing maintenance. As a result of the rapid development of fNIRs devices and analysis toolboxes and the reliability of its findings in various fields, the fNIRs approach can be considered a versatile and promising instrument. In fNIRs, the image reconstruction problem is divided into two parts: the model used to predict light distribution in tissue (the forward problem) and the method used to estimate the optical properties of the tissue domain (the inverse problem). In order to achieve correct image reconstruction, it is essential to perform accurate forward-model simulation and to develop methods that address the inverse problem.
Concerning the solution of the forward problem, many researchers have used and presented a wide range of methods and toolboxes. The FEM and MC are the two most advanced forward-model simulation technologies today. Various toolboxes have been built and put into operation to improve the accuracy and efficiency of forward-model simulations. Regarding forward models, NIRFAST for the FEM and MCX for the MC method are the most often used and actively developed software packages to date.
For the solution of the inverse problem, methods such as back projection, SVD, tSVD, LSQR, rLSQR, MNE, WMNE, LORETA, the $L_1$-norm, HB as a MAP estimate, EM, MEM, and BMA have been employed. According to the research, when considering inverse methods it is vital to account for factors such as computational time, localization ability, localization error, energy error, system complexity, resolution, and image quality, among others. According to the research reviewed above, when numerous measurements are collected at the same time, the back-projection method overestimates the amplitude. The SVD, tSVD, LSQR, and rLSQR methods are all efficient in terms of computational resources. The L1-norm and lp regularisation approaches, on the other hand, have been found to produce sparser solutions than the other inverse methods.
Incorporating priors into the inverse approach improves image quality and spatial accuracy. For this reason, the HB method has been employed in the literature and has produced satisfactory outcomes. Based on prior information, the EM method for fNIRs has been used to increase image quality and resolution by incorporating sparsity regularisation. Furthermore, in terms of localization error, ROI, and RMSE, the BMA produces better results. Recently, the MEM method has been applied to fNIRs and has proven to be more accurate and robust than both the MNE and wMNE.
Considering the preceding, it is evident that, while the methods employed thus far have produced satisfactory results, the improvement of inverse-problem solutions is ongoing. It may therefore be possible to use an inverse method that incorporates a sparse algorithm and prior information to further improve image quality and reduce localization error.

Author Contributions

Conceptualization, A.H. and I.F.; methodology, A.H.; software, M.Z.; validation, M.S.M., I.F. and T.B.T.; formal analysis, T.B.T.; investigation, I.F.; resources, I.F.; writing—original draft preparation, A.H.; writing—review and editing, A.H.; visualization, M.Z.; supervision, I.F.; funding acquisition, I.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

This research was funded by PETRONAS through YUTP grant (015LC0-372).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ferrari, M.; Mottola, L.; Quaresima, V. Principles, techniques, and limitations of near infrared spectroscopy. Can. J. Appl. Physiol. 2004, 29, 463–487. [Google Scholar] [CrossRef] [PubMed]
  2. Wolf, M.; Ferrari, M.; Quaresima, V. Progress of near-infrared spectroscopy and topography for brain and muscle clinical applications. J. Biomed. Opt. 2007, 12, 062104. [Google Scholar] [PubMed]
  3. Bérubé-Lauzière, Y.; Crotti, M.; Boucher, S.; Ettehadi, S.; Pichette, J.; Rech, I. Prospects on time-domain diffuse optical tomography based on time-correlated single photon counting for small animal imaging. J. Spectrosc. 2016, 2016, 1947613. [Google Scholar] [CrossRef]
  4. Lo, P.A.; Chiang, H.K. Three-dimensional fluorescence diffuse optical tomography using the adaptive spatial prior approach. J. Med. Biol. Eng. 2019, 39, 827–834. [Google Scholar] [CrossRef]
  5. Applegate, M.; Istfan, R.; Spink, S.; Tank, A.; Roblyer, D. Recent advances in high speed diffuse optical imaging in biomedicine. APL Photonics 2020, 5, 040802. [Google Scholar] [CrossRef]
  6. Pellicer, A.; del Carmen Bravo, M. Near-infrared spectroscopy: A methodology-focused review. In Seminars in Fetal and Neonatal Medicine; Elsevier: Amsterdam, The Netherlands, 2011; Volume 16, pp. 42–49. [Google Scholar]
  7. Rahim, R.A.; Chen, L.L.; San, C.K.; Rahiman, M.H.F.; Fea, P.J. Multiple fan-beam optical tomography: Modelling techniques. Sensors 2009, 9, 8562–8578. [Google Scholar] [CrossRef]
  8. Klose, A.D.; Netz, U.; Beuthan, J.; Hielscher, A.H. Optical tomography using the time-independent equation of radiative transfer—Part 1: Forward model. J. Quant. Spectrosc. Radiat. Transf. 2002, 72, 691–713. [Google Scholar] [CrossRef]
  9. Hoshi, Y.; Yamada, Y. Overview of diffuse optical tomography and its clinical applications. J. Biomed. Opt. 2016, 21, 091312. [Google Scholar] [CrossRef]
  10. Arridge, S.R. Optical tomography in medical imaging. Inverse Probl. 1999, 15, R41. [Google Scholar] [CrossRef]
  11. Tremblay, J.; Martínez-Montes, E.; Vannasing, P.; Nguyen, D.K.; Sawan, M.; Lepore, F.; Gallagher, A. Comparison of source localization techniques in diffuse optical tomography for fNIRS application using a realistic head model. Biomed. Opt. Express 2018, 9, 2994–3016. [Google Scholar] [CrossRef]
  12. Madsen, S.J. Optical Methods and Instrumentation in Brain Imaging and Therapy; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  13. Kavuri, V.C.; Lin, Z.-J.; Tian, F.; Liu, H. Sparsity enhanced spatial resolution and depth localization in diffuse optical tomography. Biomed. Opt. Express 2012, 3, 943–957. [Google Scholar] [CrossRef]
  14. Yamada, Y.; Suzuki, H.; Yamashita, Y. Time-domain near-infrared spectroscopy and imaging: A review. Appl. Sci. 2019, 9, 1127. [Google Scholar] [CrossRef]
  15. Patterson, M.S.; Chance, B.; Wilson, B.C. Time resolved reflectance and transmittance for the noninvasive measurement of tissue optical properties. Appl. Opt. 1989, 28, 2331–2336. [Google Scholar] [CrossRef] [PubMed]
  16. Arridge, S.R.; Hebden, J.C. Optical imaging in medicine: II. Modelling and reconstruction. Phys. Med. Biol. 1997, 42, 841. [Google Scholar] [CrossRef] [PubMed]
  17. Arridge, S.R.; Cope, M.; Delpy, D. The theoretical basis for the determination of optical pathlengths in tissue: Temporal and frequency analysis. Phys. Med. Biol. 1992, 37, 1531. [Google Scholar] [CrossRef] [PubMed]
  18. Sikora, J.; Zacharopoulos, A.; Douiri, A.; Schweiger, M.; Horesh, L.; Arridge, S.R.; Ripoll, J. Diffuse photon propagation in multilayered geometries. Phys. Med. Biol. 2006, 51, 497. [Google Scholar] [CrossRef]
  19. Liemert, A.; Kienle, A. Light diffusion in N-layered turbid media: Frequency and time domains. J. Biomed. Opt. 2010, 15, 025002. [Google Scholar] [CrossRef]
  20. Zhang, A.; Piao, D.; Bunting, C.F.; Pogue, B.W. Photon diffusion in a homogeneous medium bounded externally or internally by an infinitely long circular cylindrical applicator. I. Steady-state theory. JOSA A 2010, 27, 648–662. [Google Scholar] [CrossRef]
  21. Liemert, A.; Kienle, A. Light diffusion in a turbid cylinder. I. Homogeneous case. Opt. Express 2010, 18, 9456–9473. [Google Scholar] [CrossRef]
  22. Erkol, H.; Nouizi, F.; Unlu, M.; Gulsen, G. An extended analytical approach for diffuse optical imaging. Phys. Med. Biol. 2015, 60, 5103. [Google Scholar] [CrossRef]
  23. Arridge, S.R. Methods in diffuse optical imaging. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2011, 369, 4558–4576. [Google Scholar] [CrossRef] [PubMed]
  24. Martelli, F.; Sassaroli, A.; Yamada, Y.; Zaccanti, G. Analytical approximate solutions of the time-domain diffusion equation in layered slabs. JOSA A 2002, 19, 71–80. [Google Scholar] [CrossRef] [PubMed]
  25. Pogue, B.; Patterson, M.; Jiang, H.; Paulsen, K. Initial assessment of a simple system for frequency domain diffuse optical tomography. Phys. Med. Biol. 1995, 40, 1709. [Google Scholar] [CrossRef]
  26. Hielscher, A.H.; Alcouffe, R.E.; Barbour, R.L. Comparison of finite-difference transport and diffusion calculations for photon migration in homogeneous and heterogeneous tissues. Phys. Med. Biol. 1998, 43, 1285. [Google Scholar] [CrossRef] [PubMed]
  27. Hielscher, A.H.; Klose, A.D.; Hanson, K.M. Gradient-based iterative image reconstruction scheme for time-resolved optical tomography. IEEE Trans. Med. Imaging 1999, 18, 262–271. [Google Scholar] [CrossRef]
  28. Tanifuji, T.; Chiba, N.; Hijikata, M. FDTD (finite difference time domain) analysis of optical pulse responses in biological tissues for spectroscopic diffused optical tomography. In Proceedings of the Technical Digest. CLEO/Pacific Rim 2001. 4th Pacific Rim Conference on Lasers and Electro-Optics (Cat. No. 01TH8557), Chiba, Japan, 15–19 July 2001; p. TuD3_5. [Google Scholar]
  29. Tanifuji, T.; Hijikata, M. Finite difference time domain (FDTD) analysis of optical pulse responses in biological tissues for spectroscopic diffused optical tomography. IEEE Trans. Med. Imaging 2002, 21, 181–184. [Google Scholar] [CrossRef]
  30. Ichitsubo, K.; Tanifuji, T. Time-resolved noninvasive optical parameters determination in three-dimensional biological tissue using finite difference time domain analysis with nonuniform grids for diffusion equations. In Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, Shanghai, China, 17–18 January 2006; pp. 3133–3136. [Google Scholar]
  31. Ren, K.; Abdoulaev, G.S.; Bal, G.; Hielscher, A.H. Algorithm for solving the equation of radiative transfer in the frequency domain. Opt. Lett. 2004, 29, 578–580. [Google Scholar] [CrossRef]
  32. Montejo, L.D.; Kim, H.-K.K.; Hielscher, A.H. A finite-volume algorithm for modeling light transport with the time-independent simplified spherical harmonics approximation to the equation of radiative transfer. In Proceedings of the Optical Tomography and Spectroscopy of Tissue IX, San Francisco, CA, USA, 22–27 January 2011; p. 78960J. [Google Scholar]
  33. Soloviev, V.Y.; D’Andrea, C.; Mohan, P.S.; Valentini, G.; Cubeddu, R.; Arridge, S.R. Fluorescence lifetime optical tomography with discontinuous Galerkin discretisation scheme. Biomed. Opt. Express 2010, 1, 998–1013. [Google Scholar] [CrossRef]
  34. Zacharopoulos, A.D.; Arridge, S.R.; Dorn, O.; Kolehmainen, V.; Sikora, J. Three-dimensional reconstruction of shape and piecewise constant region values for optical tomography using spherical harmonic parametrization and a boundary element method. Inverse Probl. 2006, 22, 1509. [Google Scholar] [CrossRef]
  35. Grzywacz, T.; Sikora, J.; Wojtowicz, S. Substructuring methods for 3-D BEM multilayered model for diffuse optical tomography problems. IEEE Trans. Magn. 2008, 44, 1374–1377. [Google Scholar] [CrossRef]
  36. Srinivasan, S.; Ghadyani, H.R.; Pogue, B.W.; Paulsen, K.D. A coupled finite element-boundary element method for modeling Diffusion equation in 3D multi-modality optical imaging. Biomed. Opt. Express 2010, 1, 398–413. [Google Scholar] [CrossRef]
  37. Srinivasan, S.; Carpenter, C.M.; Ghadyani, H.R.; Taka, S.J.; Kaufman, P.A.; Wells, W.A.; Pogue, B.W.; Paulsen, K.D. Image guided near-infrared spectroscopy of breast tissue in vivo using boundary element method. J. Biomed. Opt. 2010, 15, 061703. [Google Scholar] [CrossRef] [PubMed]
  38. Elisee, J.P.; Gibson, A.; Arridge, S. Combination of boundary element method and finite element method in diffuse optical tomography. IEEE Trans. Biomed. Eng. 2010, 57, 2737–2745. [Google Scholar] [CrossRef]
  39. Xie, W.; Deng, Y.; Lian, L.; Wang, K.; Luo, Z.; Gong, H. Boundary element method for diffuse optical tomography. In Proceedings of the 2013 Seventh International Conference on Image and Graphics, Qingdao, China, 26–28 July 2013; pp. 5–8. [Google Scholar]
40. Arridge, S.; Schweiger, M.; Hiraoka, M.; Delpy, D. A finite element approach for modelling photon transport in tissue. Med. Phys. 1993, 20, 299–309. [Google Scholar] [CrossRef] [PubMed]
  41. Schweiger, M.; Arridge, S.R.; Delpy, D.T. Application of the finite-element method for the forward and inverse models in optical tomography. J. Math. Imaging Vis. 1993, 3, 263–283. [Google Scholar] [CrossRef]
  42. Jiang, H.; Paulsen, K.D. Finite-element-based higher order diffusion approximation of light propagation in tissues. In Proceedings of the Optical Tomography, Photon Migration, and Spectroscopy of Tissue and Model Media: Theory, Human Studies, and Instrumentation, San Jose, CA, USA, 1–28 February 1995; pp. 608–614. [Google Scholar]
  43. Schweiger, M.; Arridge, S.; Hiraoka, M.; Delpy, D. The finite element method for the propagation of light in scattering media: Boundary and source conditions. Med. Phys. 1995, 22, 1779–1792. [Google Scholar] [CrossRef] [PubMed]
  44. Gao, F.; Niu, H.; Zhao, H.; Zhang, H. The forward and inverse models in time-resolved optical tomography imaging and their finite-element method solutions. Image Vis. Comput. 1998, 16, 703–712. [Google Scholar] [CrossRef]
  45. Jiang, H. Frequency-domain fluorescent diffusion tomography: A finite-element-based algorithm and simulations. Appl. Opt. 1998, 37, 5337–5343. [Google Scholar] [CrossRef]
  46. Klose, A.D.; Hielscher, A.H. Iterative reconstruction scheme for optical tomography based on the equation of radiative transfer. Med. Phys. 1999, 26, 1698–1707. [Google Scholar] [CrossRef]
  47. Dehghani, H.; Srinivasan, S.; Pogue, B.W.; Gibson, A. Numerical modelling and image reconstruction in diffuse optical tomography. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2009, 367, 3073–3093. [Google Scholar] [CrossRef]
  48. Paulsen, K.D.; Jiang, H. Spatially varying optical property reconstruction using a finite element diffusion equation approximation. Med. Phys. 1995, 22, 691–701. [Google Scholar] [CrossRef]
  49. Gao, H.; Zhao, H. A fast-forward solver of radiative transfer equation. Transp. Theory Stat. Phys. 2009, 38, 149–192. [Google Scholar] [CrossRef]
  50. Wang, L.; Jacques, S.L.; Zheng, L. MCML—Monte Carlo modeling of light transport in multi-layered tissues. Comput. Methods Programs Biomed. 1995, 47, 131–146. [Google Scholar] [CrossRef] [PubMed]
  51. Chen, J.; Intes, X. Comparison of Monte Carlo methods for fluorescence molecular tomography—Computational efficiency. Med. Phys. 2011, 38, 5788–5798. [Google Scholar] [CrossRef]
  52. Chen, J.; Intes, X. Time-gated perturbation Monte Carlo for whole body functional imaging in small animals. Opt. Express 2009, 17, 19566–19579. [Google Scholar] [CrossRef]
  53. Periyasamy, V.; Pramanik, M. Advances in Monte Carlo simulation for light propagation in tissue. IEEE Rev. Biomed. Eng. 2017, 10, 122–135. [Google Scholar] [CrossRef]
  54. Dehghani, H.; Eames, M.E.; Yalavarthy, P.K.; Davis, S.C.; Srinivasan, S.; Carpenter, C.M.; Pogue, B.W.; Paulsen, K.D. Near infrared optical tomography using NIRFAST: Algorithm for numerical model and image reconstruction. Commun. Numer. Methods Eng. 2009, 25, 711–732. [Google Scholar] [CrossRef]
  55. Schweiger, M.; Arridge, S.R. The Toast++ software suite for forward and inverse modeling in optical tomography. J. Biomed. Opt. 2014, 19, 040801. [Google Scholar] [CrossRef]
  56. Fang, Q.; Boas, D.A. Monte Carlo simulation of photon migration in 3D turbid media accelerated by graphics processing units. Opt. Express 2009, 17, 20178–20190. [Google Scholar] [CrossRef] [PubMed]
  57. Fang, Q. Mesh-based Monte Carlo method using fast ray-tracing in Plücker coordinates. Biomed. Opt. Express 2010, 1, 165–175. [Google Scholar] [CrossRef] [PubMed]
  58. Yu, L.; Nina-Paravecino, F.; Kaeli, D.R.; Fang, Q. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms. J. Biomed. Opt. 2018, 23, 010504. [Google Scholar] [CrossRef] [PubMed]
  59. Yan, S.; Fang, Q. Hybrid mesh and voxel based Monte Carlo algorithm for accurate and efficient photon transport modeling in complex bio-tissues. Biomed. Opt. Express 2020, 11, 6262–6270. [Google Scholar] [CrossRef] [PubMed]
  60. Leino, A.A.; Pulkkinen, A.; Tarvainen, T. ValoMC: A Monte Carlo software and MATLAB toolbox for simulating light transport in biological tissue. OSA Contin. 2019, 2, 957–972. [Google Scholar] [CrossRef]
  61. Walker, S.A.; Fantini, S.; Gratton, E. Image reconstruction by backprojection from frequency-domain optical measurements in highly scattering media. Appl. Opt. 1997, 36, 170–179. [Google Scholar] [CrossRef] [PubMed]
  62. Boas, D.; Chen, K.; Grebert, D.; Franceschini, M. Improving the diffuse optical imaging spatial resolution of the cerebral hemodynamic response to brain activation in humans. Opt. Lett. 2004, 29, 1506–1508. [Google Scholar] [CrossRef] [PubMed]
  63. Zhai, Y.; Cummer, S.A. Fast tomographic reconstruction strategy for diffuse optical tomography. Opt. Express 2009, 17, 5285–5297. [Google Scholar] [CrossRef] [PubMed]
  64. Das, T.; Dileep, B.; Dutta, P.K. Generalized curved beam back-projection method for near-infrared imaging using banana function. Appl. Opt. 2018, 57, 1838–1848. [Google Scholar] [CrossRef]
  65. Habermehl, C.; Steinbrink, J.M.; Müller, K.-R.; Haufe, S. Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography. J. Biomed. Opt. 2014, 19, 096006. [Google Scholar] [CrossRef]
  66. Gupta, S.; Yalavarthy, P.K.; Roy, D.; Piao, D.; Vasu, R.M. Singular value decomposition based computationally efficient algorithm for rapid dynamic near-infrared diffuse optical tomography. Med. Phys. 2009, 36, 5559–5567. [Google Scholar] [CrossRef]
  67. Zhan, Y.; Eggebrecht, A.T.; Culver, J.P.; Dehghani, H. Singular value decomposition based regularization prior to spectral mixing improves crosstalk in dynamic imaging using spectral diffuse optical tomography. Biomed. Opt. Express 2012, 3, 2036–2049. [Google Scholar] [CrossRef]
  68. Paige, C.C.; Saunders, M.A. LSQR: An algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. TOMS 1982, 8, 43–71. [Google Scholar] [CrossRef]
  69. Hussain, A.; Faye, I.; Muthuvalu, M.S.; Boon, T.T. Least Square QR Decomposition Method for Solving the Inverse Problem in Functional Near Infra-Red Spectroscopy. In Proceedings of the 2021 IEEE 19th Student Conference on Research and Development (SCOReD), Kota Kinabalu, Malaysia, 23–25 November 2021; pp. 362–366. [Google Scholar]
  70. Prakash, J.; Yalavarthy, P.K. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography. Med. Phys. 2013, 40, 033101. [Google Scholar] [CrossRef] [PubMed]
  71. Shaw, C.B.; Prakash, J.; Pramanik, M.; Yalavarthy, P.K. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography. J. Biomed. Opt. 2013, 18, 080501. [Google Scholar] [CrossRef] [PubMed]
  72. Yalavarthy, P.K.; Pogue, B.W.; Dehghani, H.; Paulsen, K.D. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography. Med. Phys. 2007, 34, 2085–2098. [Google Scholar] [CrossRef] [PubMed]
  73. Yalavarthy, P.K.; Pogue, B.W.; Dehghani, H.; Carpenter, C.M.; Jiang, S.; Paulsen, K.D. Structural information within regularization matrices improves near infrared diffuse optical tomography. Opt. Express 2007, 15, 8043–8058. [Google Scholar] [CrossRef] [PubMed]
  74. Yalavarthy, P.K.; Lynch, D.R.; Pogue, B.W.; Dehghani, H.; Paulsen, K.D. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems. Med. Phys. 2008, 35, 1682–1697. [Google Scholar] [CrossRef]
  75. Haufe, S.; Nikulin, V.V.; Ziehe, A.; Müller, K.-R.; Nolte, G. Combining sparsity and rotational invariance in EEG/MEG source reconstruction. NeuroImage 2008, 42, 726–738. [Google Scholar] [CrossRef]
  76. Pascual-Marqui, R.D.; Michel, C.M.; Lehmann, D. Low resolution electromagnetic tomography: A new method for localizing electrical activity in the brain. Int. J. Psychophysiol. 1994, 18, 49–65. [Google Scholar] [CrossRef]
  77. Prakash, J.; Shaw, C.B.; Manjappa, R.; Kanhirodan, R.; Yalavarthy, P.K. Sparse recovery methods hold promise for diffuse optical tomographic image reconstruction. IEEE J. Sel. Top. Quantum Electron. 2013, 20, 74–82. [Google Scholar] [CrossRef]
78. Shaw, C.B.; Yalavarthy, P.K. Performance evaluation of typical approximation algorithms for nonconvex ℓp-minimization in diffuse optical tomography. JOSA A 2014, 31, 852–862. [Google Scholar] [CrossRef]
  79. Okawa, S.; Hoshi, Y.; Yamada, Y. Improvement of image quality of time-domain diffuse optical tomography with lp sparsity regularization. Biomed. Opt. Express 2011, 2, 3334–3348. [Google Scholar] [CrossRef] [PubMed]
  80. Sato, M.-A.; Yoshioka, T.; Kajihara, S.; Toyama, K.; Goda, N.; Doya, K.; Kawato, M. Hierarchical Bayesian estimation for MEG inverse problem. NeuroImage 2004, 23, 806–826. [Google Scholar] [CrossRef]
81. Guven, M.; Yazici, B.; Intes, X.; Chance, B. Hierarchical Bayesian algorithm for diffuse optical tomography. In Proceedings of the 34th Applied Imagery and Pattern Recognition Workshop (AIPR’05), Washington, DC, USA, 19 October–21 December 2005; pp. 6–145. [Google Scholar]
  82. Calvetti, D.; Pragliola, M.; Somersalo, E.; Strang, A. Sparse reconstructions from few noisy data: Analysis of hierarchical Bayesian models with generalized gamma hyperpriors. Inverse Probl. 2020, 36, 025010. [Google Scholar] [CrossRef]
  83. Shimokawa, T.; Kosaka, T.; Yamashita, O.; Hiroe, N.; Amita, T.; Inoue, Y.; Sato, M.A. Hierarchical Bayesian estimation improves depth accuracy and spatial resolution of diffuse optical tomography. Opt. Express 2012, 20, 20427–20446. [Google Scholar] [CrossRef] [PubMed]
  84. Shimokawa, T.; Kosaka, T.; Yamashita, O.; Hiroe, N.; Amita, T.; Inoue, Y.; Sato, M.A. Extended hierarchical Bayesian diffuse optical tomography for removing scalp artifact. Biomed. Opt. Express 2013, 4, 2411–2432. [Google Scholar] [CrossRef] [PubMed]
85. Aihara, T.; Shimokawa, T.; Ogawa, T.; Okada, Y.; Ishikawa, A.; Inoue, Y.; Yamashita, O. Resting-state functional connectivity estimated with hierarchical Bayesian diffuse optical tomography. Front. Neurosci. 2020, 14, 32. [Google Scholar] [CrossRef]
  86. Yamashita, O.; Shimokawa, T.; Aisu, R.; Amita, T.; Inoue, Y.; Sato, M.-A. Multi-subject and multi-task experimental validation of the hierarchical Bayesian diffuse optical tomography algorithm. Neuroimage 2016, 135, 287–299. [Google Scholar] [CrossRef]
  87. Hiltunen, P.; Prince, S.; Arridge, S. A combined reconstruction–classification method for diffuse optical tomography. Phys. Med. Biol. 2009, 54, 6457. [Google Scholar] [CrossRef]
  88. Cao, N.; Nehorai, A.; Jacob, M. Image reconstruction for diffuse optical tomography using sparsity regularization and expectation-maximization algorithm. Opt. Express 2007, 15, 13695–13708. [Google Scholar] [CrossRef]
  89. Amblard, C.; Lapalme, E.; Lina, J.-M. Biomagnetic source detection by maximum entropy and graphical models. IEEE Trans. Biomed. Eng. 2004, 51, 427–442. [Google Scholar] [CrossRef]
  90. Grova, C.; Daunizeau, J.; Lina, J.-M.; Bénar, C.G.; Benali, H.; Gotman, J. Evaluation of EEG localization methods using realistic simulations of interictal spikes. Neuroimage 2006, 29, 734–753. [Google Scholar] [CrossRef] [PubMed]
91. Chowdhury, R.A.; Lina, J.M.; Kobayashi, E.; Grova, C. MEG source localization of spatially extended generators of epileptic activity: Comparing entropic and hierarchical Bayesian approaches. PLoS ONE 2013, 8, e55969. [Google Scholar] [CrossRef] [PubMed]
  92. Cai, Z.; Machado, A.; Chowdhury, R.A.; Spilkin, A.; Vincent, T.; Aydin, Ü.; Pellegrino, G.; Lina, J.-M.; Grova, C. Diffuse optical reconstructions of fNIRS data using Maximum Entropy on the Mean. bioRxiv 2021, 23, 2021-02. [Google Scholar] [CrossRef]
  93. Trujillo-Barreto, N.J.; Aubert-Vázquez, E.; Valdés-Sosa, P.A. Bayesian model averaging in EEG/MEG imaging. NeuroImage 2004, 21, 1300–1319. [Google Scholar] [CrossRef] [PubMed]
  94. Fragoso, T.M.; Bertoli, W.; Louzada, F. Bayesian model averaging: A systematic review and conceptual classification. Int. Stat. Rev. 2018, 86, 1–28. [Google Scholar] [CrossRef]
Figure 1. Visual representation of case (a) time domain, case (b) frequency domain, and case (c) continuous wave techniques (adapted from Ref. [6]).
Figure 2. A graphical representation of the forward and inverse problems (adapted from Ref. [7]).
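As a concrete, minimal illustration of the inverse step sketched in Figure 2, the snippet below solves the linearized problem y = J·Δμ in the damped least-squares sense, min‖JΔμ − y‖² + λ²‖Δμ‖², using SciPy's LSQR, in the spirit of the regularization and LSQR literature cited in this review [66,68,70,72]. The Jacobian J and measurement vector y are random stand-ins rather than outputs of an actual fNIRs forward model, so the sketch shows only the algebraic structure of the reconstruction step.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Stand-in linearized model: y = J @ delta_mu + noise, where J is the
# sensitivity (Jacobian) matrix mapping absorption perturbations in the
# voxel grid to changes in the boundary measurements.
rng = np.random.default_rng(0)
n_meas, n_vox = 128, 2000                      # under-determined, as in DOT/fNIRs
J = rng.normal(size=(n_meas, n_vox))
delta_mu_true = np.zeros(n_vox)
delta_mu_true[rng.choice(n_vox, 5, replace=False)] = 0.01   # sparse absorption changes
y = J @ delta_mu_true + 1e-3 * rng.normal(size=n_meas)

# Damped LSQR minimizes ||J x - y||^2 + damp^2 ||x||^2 without forming J^T J,
# which is what makes it attractive for large, under-determined DOT problems.
damp = 0.1                                      # Tikhonov regularization parameter
delta_mu_est = lsqr(J, y, damp=damp)[0]

rel_err = np.linalg.norm(delta_mu_est - delta_mu_true) / np.linalg.norm(delta_mu_true)
print(f"relative reconstruction error: {rel_err:.2f}")
```

In practice the damping parameter is chosen by criteria such as the L-curve or generalized cross-validation, and LSQR-based schemes can automate that choice [70].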
Figure 3. Fundamental steps of MCML technique (adapted from Ref. [53]).
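The launch–hop–drop–spin–roulette cycle outlined in Figure 3 is the core of MCML-type Monte Carlo forward solvers [50,53]. The sketch below is a deliberately simplified version of that cycle for a semi-infinite homogeneous medium with isotropic scattering; it omits the Henyey–Greenstein phase function, refractive-index mismatch at the surface, and the layered geometry handled by MCML, so it illustrates the control flow rather than reproducing the published implementation. The optical coefficients and photon count are illustrative assumptions.

```python
import numpy as np

def absorbed_fraction(n_photons=2000, mua=0.05, mus=1.0, seed=1):
    """Simplified MCML-style photon loop: launch, hop, drop, spin, roulette."""
    rng = np.random.default_rng(seed)
    mut = mua + mus                          # total interaction coefficient (1/mm)
    absorbed = 0.0
    for _ in range(n_photons):
        pos = np.zeros(3)                    # launch at the surface, pointing inward
        direction = np.array([0.0, 0.0, 1.0])
        weight = 1.0
        while True:
            step = -np.log(rng.random()) / mut      # hop: sample the free path length
            pos = pos + step * direction
            if pos[2] < 0.0:                        # photon escaped through the surface
                break
            absorbed += weight * mua / mut          # drop: deposit part of the weight
            weight *= mus / mut
            if weight < 1e-3:                       # Russian roulette for low weights
                if rng.random() > 0.1:
                    break
                weight /= 0.1
            # spin: isotropic scattering direction (MCML samples Henyey-Greenstein)
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            phi = 2.0 * np.pi * rng.random()
            direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return absorbed / n_photons

print("fraction of launched energy absorbed:", absorbed_fraction())
```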
Figure 4. Fundamental steps of the NIRFAST technique for the forward problem (adapted from Ref. [54]).
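Figure 4 refers to NIRFAST's finite-element solution of the diffusion approximation [54]. To make the structure of the resulting forward linear system concrete, the sketch below assembles and solves a finite-difference (not finite-element, and not NIRFAST) discretization of the continuous-wave diffusion equation −∇·(κ∇Φ) + μaΦ = q0 on a small 2D grid with a zero-fluence boundary. The grid size, optical properties, and boundary treatment are illustrative assumptions only; practical solvers use a Robin boundary condition on unstructured meshes.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

# Small 2D grid with homogeneous optical properties (units: mm and 1/mm).
n, h = 40, 1.0                       # 40 x 40 nodes, 1 mm spacing
mua, musp = 0.01, 1.0                # absorption and reduced scattering
kappa = 1.0 / (3.0 * (mua + musp))   # diffusion coefficient

N = n * n
A = lil_matrix((N, N))
b = np.zeros(N)
index = lambda i, j: i * n + j

for i in range(n):
    for j in range(n):
        k = index(i, j)
        A[k, k] = 4.0 * kappa / h**2 + mua          # 5-point Laplacian + absorption
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[k, index(ii, jj)] = -kappa / h**2
            # Missing neighbours imply a zero-fluence (Dirichlet) boundary,
            # a crude stand-in for the Robin condition used in practice.

b[index(n // 2, 1)] = 1.0            # isotropic point source one node inside the boundary
phi = spsolve(csr_matrix(A), b)      # CW fluence at every grid node
print("peak fluence:", phi.reshape(n, n).max())
```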
Figure 5. Libraries used by the Toast++ toolkit for forward model simulation (adapted from Ref. [55]).
Figure 6. Basic steps of MCX technique (adapted from Ref. [56]).
Table 1. Details of the numerical methods and the toolboxes/software used to simulate the forward problem in fNIRs measurements.
| References | Forward Simulation Method | Simulation Software/Toolbox | Data Type |
| --- | --- | --- | --- |
| B. W. Pogue et al., 1995 [61] | FDM | N/A | N/A |
| M. A. Ansari et al., 2014 [62] | BEM | N/A | N/A |
| Dehghani, Hamid, et al., 2009 [54] | FEM | NIRFAST | Breast model data |
| Yalavarthy, Phaneendra K., et al., 2007–2008 [7,63,64] | FEM | N/A | Phantom |
| Brigadoi, Sabrina, et al. [65] | FEM | Toast++ | Real resting-state data |
| Chiarelli, Antonio M., et al., 2016 [66] | FEM | NIRFAST | Phantom |
| Lu, Wenqi, Daniel Lighter, and Iain B. Styles, 2018 [67] | FEM | NIRFAST | Realistic simulation data |
| Machado, A., et al., 2018 [68] | MC | MCX | Realistic simulation data |
| Yu, Leiming, et al., 2018 [58] | MC | MCX | Phantom |
| Jiang, Jingjing, et al., 2020 [69] | MC and FEM | MCX and Toast++ | Silicon phantom experiment |
| Fu, Xiaoxue, and John E. Richards, 2021 [70] | MC | MCX | Realistic simulation data |
| Cai, Zhengchen, et al., 2021 [71] | MC | MCX | Realistic |
| Mazumder, Dibbyan, et al., 2021 [72] | MC | MCX | Realistic simulation data |
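Most of the Monte Carlo entries in Table 1 rely on MCX [56,58]. As a rough illustration of how such a forward simulation is configured, the sketch below uses pmcx, the Python binding of MCX, with an MCXLAB-style configuration: photon count, a voxelized volume, per-label optical properties, source position and direction, and time gates. The call pattern and field names are based on the toolbox's documented examples and may differ between versions; a CUDA-capable GPU is required to actually execute it, and the chosen volume, properties, and source settings are illustrative assumptions.

```python
import numpy as np
import pmcx   # Python binding of Monte Carlo eXtreme (MCX); requires a CUDA GPU

# A 60 x 60 x 60 voxel volume (1 mm voxels), every voxel labelled as tissue type 1.
vol = np.ones((60, 60, 60), dtype='uint8')

res = pmcx.run(
    nphoton=1_000_000,                        # number of photon packets to launch
    vol=vol,
    # One row per tissue label: [mua (1/mm), mus (1/mm), g, n];
    # row 0 describes the background medium outside the volume.
    prop=np.array([[0.0, 0.0, 1.0, 1.0],
                   [0.005, 1.0, 0.01, 1.37]]),
    srcpos=[30, 30, 0],                       # pencil source on the top surface
    srcdir=[0, 0, 1],                         # pointing into the volume
    tstart=0, tend=5e-9, tstep=5e-9,          # a single 5 ns time gate
)

# res['flux'] holds the time-resolved fluence distribution on the voxel grid,
# the forward-model quantity that reconstruction algorithms compare against
# boundary measurements.
print(res['flux'].shape)
```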