Article

Radar-Based Microwave Breast Imaging Using Neurocomputational Models

by
Mustafa Berkan Bicer
Electrical and Electronics Engineering Department, Engineering Faculty, Tarsus University, 33400 Mersin, Turkey
Diagnostics 2023, 13(5), 930; https://doi.org/10.3390/diagnostics13050930
Submission received: 18 January 2023 / Revised: 21 February 2023 / Accepted: 27 February 2023 / Published: 1 March 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

In this study, neurocomputational models are proposed for the acquisition of radar-based microwave images of breast tumors using deep neural networks (DNNs) and convolutional neural networks (CNNs). The circular synthetic aperture radar (CSAR) technique for radar-based microwave imaging (MWI) was utilized to generate 1000 numerical simulations for randomly generated scenarios. The scenarios contain information such as the number, size, and location of tumors for each simulation. A dataset of 1000 distinct complex-valued simulations based on these scenarios was then built. Subsequently, a real-valued DNN (RV-DNN) with five hidden layers, a real-valued CNN (RV-CNN) with seven convolutional layers, and a real-valued combined model (RV-MWINet) consisting of CNN and U-Net sub-models were built and trained to generate the radar-based microwave images. While the proposed RV-DNN, RV-CNN, and RV-MWINet models are real-valued, the MWINet model was also restructured with complex-valued layers (CV-MWINet), resulting in a total of four models. For the RV-DNN model, the training and test errors in terms of mean squared error (MSE) are 103.400 and 96.395, respectively, whereas for the RV-CNN model, the training and test errors are 45.283 and 153.818. Because the RV-MWINet model includes a U-Net sub-model, the accuracy metric is analyzed instead. The proposed RV-MWINet model has training and testing accuracies of 0.9135 and 0.8635, whereas the CV-MWINet model has training and testing accuracies of 0.991 and 1.000, respectively. The peak signal-to-noise ratio (PSNR), universal quality index (UQI), and structural similarity index (SSIM) metrics were also evaluated for the images generated by the proposed neurocomputational models. The generated images demonstrate that the proposed neurocomputational models can be successfully utilized for radar-based microwave imaging, especially breast imaging.

1. Introduction

In the health care industry, the diagnosis and treatment of diseases have become increasingly reliant on rapidly advancing technology. Currently, cardiovascular diseases are the leading cause of death, followed by cancer in second place [1,2]. Although cancer is a non-communicable disease with various types, breast cancer is the most prevalent form of cancer among women [1,3]. Although breast cancer can be discovered reasonably quickly and easily thanks to advances in medical imaging technologies, if it is not diagnosed at an early stage, it can progress to later stages and be fatal. Early detection is also crucial because breast cancer might metastasize and spread to other tissues, resulting in the development of additional malignancies. Although a variety of modalities are used to identify breast cancer at an early stage, X-ray mammography is the most frequently utilized primary modality [3]. However, the drawbacks of X-ray mammography include the use of ionizing X-rays for imaging, low mobility, low sensitivity, and painful compression of the breast tissue between two plates. In addition, X-ray mammography, which is significantly more effective in detecting benign tumors, may necessitate an additional biopsy to detect malignant tumors [3]. Ultrasonography (USG), which is used as an adjunct to X-ray mammography, utilizes sound waves at frequencies inaudible to the human ear for imaging. Since sound waves do not penetrate deeply into the human body, pressure must be applied to the body with the probe and a matching medium must be used for good imaging, even though these waves convey information about the breast tissue. In addition, when a mass is found by USG, a biopsy should be performed to obtain further information about it.
Magnetic resonance imaging (MRI) has become an alternative to X-ray mammography and USG by generating images based on the principle of magnetic resonance. Imaging is performed by measuring the response of the patient's body, placed in a magnetic field, to applied radiofrequency waves; this technique has a higher sensitivity than the other modalities but a lower specificity. In addition, MRI has drawbacks such as high cost, a longer imaging procedure, and an uncomfortable measurement process. These drawbacks of primary modalities such as X-ray mammography, USG, and MRI have motivated researchers to develop alternative techniques. Microwave imaging (MWI) is an alternative imaging modality that utilizes low-frequency, low-power electromagnetic waves and has been intensively researched. In MWI, electromagnetic waves in the non-ionizing microwave frequency band are generated and used to illuminate the breast tissue through antennas. MWI offers significant advantages over conventional modalities thanks to specially designed measurement instruments that can provide a more comfortable examination. In addition, electromagnetic waves can be generated with cost-effective and mobile MWI devices, and systems that are easily transportable to regions where mobility is required can be constructed. Researchers in the field of MWI have conducted numerous studies, particularly concerning the operating frequency and imaging methods [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Li et al. [20] proposed a CNN-based model for solving non-linear inverse electromagnetic problems with deep learning (DL) models. The images were obtained by collecting the scattered electromagnetic fields from the illuminated target and applying these collected fields to the DL model.
The authors [20] discuss the theory underlying the relationship between DL models and non-linear inverse electromagnetic problems, and demonstrate the performance of their approach using the Modified National Institute of Standards and Technology (MNIST) dataset. Barrachina et al. [25] proposed the use of complex-valued and real-valued U-Net models for semantic segmentation in polarimetric synthetic aperture radar (PolSAR) images. Jing et al. [26] presented a complex-valued CNN model for near-field millimeter-wave imaging. The proposed model consists of fully convolutional layers and enhances the input image data. Experimental measurements were conducted at 34.5 GHz, and the performance of the model was demonstrated using the measurement results. Yadav et al. [27] developed a microwave tomography (MWT) approach based on neural networks for use in industrial microwave drying systems. The authors intended to determine the distribution of moisture in an industrial drying system using this method. In their study, experiments were performed utilizing a linear MWT array to determine the distribution of moisture in the Hephaistos microwave oven system. Wang et al. [5] proposed a compressed-sensing (CS)-based convolutional neural network (CSR-Net) model for microwave sparse reconstruction. The authors validated the performance of their proposed model on various simulated and measured data, and also performed three-dimensional imaging using the results of the model, which was developed using complex-valued data. Ambrosanio et al. [28] proposed a deep neural network model for breast imaging. The model estimates the dielectric constant and tissue conductivity using the scattered electric field matrix as input data. The performance of the model, which comprises 3 layers with 2000 nodes each, is compared to the cross-correlated contrast source inversion (CC-CSI) [29] and adaptive multi-threshold iterative shrinkage thresholding algorithm (AMTISTA) [30] techniques.
Dey et al. [21] presented an approach for breast lesion localization in microwave imaging utilizing pulse-coupled neural networks (PCNN). The authors of reference [21] obtained 61 breast images of 35 individuals using a matching-liquid-free microwave imaging system operating between 1 GHz and 9 GHz, and a malignant finding (MF) performance of 81.82% was achieved. Shao et al. [31] developed an auto-encoder-based DL algorithm that transforms 4 GHz data received from a 24 × 24 antenna array into 128 × 128 images. The performance of the model was evaluated by comparing the images with those of the distorted-Born iterative method (DBIM) and the phase confocal method (PCM). The developed model [31] utilizes the complex input data as a two-dimensional image of amplitude and phase. Chiu et al. [32] examined the U-Net and object-attentional super-resolution network (OASRN) models for electromagnetic imaging. Using a setup of 32 transmitting and 32 receiving antennas, scattered field measurements were carried out with the addition of Gaussian noise. The authors [32] concluded that the OASRN model is superior to the U-Net model based on a comparison of the obtained images and results. Khoshdel et al. [22] developed a DL-based model for three-dimensional breast imaging. Three-dimensional CSI images are applied as the input to the proposed U-Net-based DL model, and a three-dimensional dielectric map is generated as the output. It has been demonstrated that the U-Net model, which enhances the CSI images applied to the input, produces superior results compared to the CSI method [22]. Qin et al. [23] developed a DL-based breast imaging model using microwave and ultrasonic data. The proposed model [23] takes ultrasound and microwave data as input, combines them, and applies convolutional layers. The output of the model is divided into two branches to provide the segmentation result and regression results, such as the dielectric constant.
Considering the studies in the literature [22,33,34,35,36], it can be seen that the application of DL models in medical imaging systems is increasing. DL models produce faster and higher-quality results than conventional imaging techniques, and they are becoming more popular in imaging systems.
In this study, four models utilizing deep neural networks and convolutional neural networks are proposed for the generation of monostatic radar-based microwave images from backscattered electric field data acquired according to the CSAR principle. The images generated by the models are compared to those obtained by a matching pursuit-based (MP-based) [19,37] algorithm, and the performances of the models are discussed.
The highlights of this study are as follows:
  • In this study, conventional imaging was carried out utilizing CSAR-based numerical data and an MP-based algorithm.
  • For imaging, both the matching-pursuit-based method and the neurocomputational models utilized raw, unprocessed real-valued and complex-valued numerical data. Computed or measured scattered electric field data can therefore be applied directly to the models without preprocessing.
  • RV-DNN and RV-CNN models are proposed, followed by two combined neurocomputational models (RV-MWINet and CV-MWINet) that combine the proposed CNN structure with a U-Net structure. The images generated by the proposed models are compared to those generated by the matching-pursuit algorithm. The study demonstrates that the processing and generation speeds of the proposed models are faster than those of conventional imaging techniques, and that the resulting images are of higher quality.
  • A total of 12 measurements were taken in the range of 1 GHz to 10 GHz using the measurement setup, by placing a metal screw in fine sand and a tumor phantom in a healthy breast phantom. In order to train the CV-MWINet model, the measurement data were added to the dataset obtained from simulated data. The performance of the proposed model on both simulated and measured data is also discussed.

2. The Forward Problem Based on the Circular Synthetic Aperture Radar (CSAR) Principle

The simulation data used in this study were generated based on the monostatic circular synthetic aperture radar (CSAR) principle [38], and the simulation data acquisition setup is illustrated in Figure 1. In this method, a transceiver antenna is rotated at certain intervals on a concentric circle with a stationary object in the imaging domain (Ω) with a dielectric distribution ε(r), and collects backscattered electric field data from this domain. This method assumes that the imaging domain is entirely encompassed by the radiation pattern of the antenna. Thus, the electric field measurements backscattered from the imaging domain contain information about the target object. The backscattered electric field data obtained in accordance with the structure depicted in Figure 1 comprise information regarding skin and tumors.
According to the CSAR concept, the backscattered electric field in the frequency domain can be expressed as [37]
E_s(f, ϕ) = A_0 e^(−j4πf√(ε_r μ_r) R(ϕ)/c),  (1)
where A_0, f, ε_r, μ_r, c, and R(ϕ) denote the amplitude of the electric field, the frequency, the relative permittivity, the relative magnetic permeability, the phase velocity of the wave, and the Euclidean distance function between the scatterer and the antenna, respectively. For most common materials, μ_r is considered to be 1. For the sake of simplicity, the imaging domain is considered to be homogeneous, and the tumor and skin are assumed to be discrete perfect scatterers. The angle-dependent Euclidean distance in Equation (1) is calculated by Equation (2) [38].
R(ϕ) = √( |x_a − R_0 cos(ϕ)|² + |y_a − R_0 sin(ϕ)|² )  (2)
As shown by the equation, the distance is calculated using the difference between the antenna position and the projection of the scatterers on the axis. The single transceiver antenna in the imaging system collects the backscattered electric field data from the imaging domain by positioning itself at the measurement positions shown in Figure 1 at predetermined intervals. For each measurement point, the backscattered electric field data from all scattering points within the imaging domain is collected to yield the overall electric field data. This procedure is repeated for all measurement points, resulting in 360-degree data coverage of the imaging region. The measured data may contain information regarding the maximum range (Rm), which can be determined using Equation (3) [38].
R_m = N Δr,  (3)
where N represents the number of frequencies and Δr represents the range resolution, which is calculated using Equation (4) [38].
Δr = c / (2NΔf)  (4)
Δf in Equation (4) represents the frequency step of the measurement system, so that NΔf corresponds to the swept bandwidth. The parameters and values specified in Table 1 were utilized to acquire the total backscattered electric field data from the imaging plane.
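As a quick numerical check of Equations (3) and (4), the range parameters can be computed for an assumed stepped-frequency sweep; the 1–10 GHz band and N = 301 frequency points used below are illustrative assumptions, not necessarily the values of Table 1.

```python
# Illustrative evaluation of Eqs. (3)-(4); the 1-10 GHz sweep and
# N = 301 frequency points are assumptions for this sketch.
C = 3e8                           # speed of light in free space (m/s)
N = 301                           # number of frequency points
delta_f = 9e9 / (N - 1)           # frequency step over the assumed sweep
delta_r = C / (2 * N * delta_f)   # range resolution, Eq. (4)
r_max = N * delta_r               # maximum range, Eq. (3); equals C / (2 * delta_f)
```

With these assumed values, the range resolution is about 1.7 cm and the maximum unambiguous range is 5 m, which illustrates why a wide swept bandwidth is needed to resolve sub-centimeter tumors.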
Using Equations (1)–(4), between one and three tumor scatterers with diameters between 0.2 cm and 0.9 cm and random positions and shapes in the imaging domain were generated, and numerical data for these scatterers were computed. Consequently, a complex-valued backscattered electric field dataset for 1000 scenarios was created. Each sample consists of backscattered electric field data of size (301 × 90), so the dataset has dimensions (1000, 301, 90) in total (number of samples, number of frequencies, number of angles).
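The forward model of Equations (1) and (2) and the dataset layout described above can be sketched as follows. This is a minimal illustration, not the simulation code of the study: the antenna radius, scatterer amplitudes, sweep limits, exponent sign convention, and the reduced sample count are all assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def backscattered_field(freqs, angles, scatterers, r0=0.1, a0=1.0,
                        eps_r=1.0, mu_r=1.0):
    """Sketch of the CSAR forward model of Eqs. (1)-(2): responses of
    discrete perfect scatterers summed over all (frequency, angle) pairs.
    r0 (antenna circle radius), a0, and the medium parameters are
    illustrative assumptions."""
    es = np.zeros((len(freqs), len(angles)), dtype=complex)
    for xa, ya in scatterers:
        for j, phi in enumerate(angles):
            # Eq. (2): distance between scatterer and antenna position
            r = np.hypot(xa - r0 * np.cos(phi), ya - r0 * np.sin(phi))
            # Eq. (1); the exponent sign follows the usual e^{-j...} convention
            es[:, j] += a0 * np.exp(-1j * 4 * np.pi * freqs
                                    * np.sqrt(eps_r * mu_r) * r / C)
    return es

# Dataset in the shape used in the text: (samples, 301, 90); only a few
# samples are generated here (the study uses 1000).
rng = np.random.default_rng(0)
freqs = np.linspace(1e9, 10e9, 301)                    # assumed sweep
angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
dataset = np.stack([
    backscattered_field(freqs, angles,
                        rng.uniform(-0.05, 0.05, size=(rng.integers(1, 4), 2)))
    for _ in range(4)
])
```

Each sample superposes the responses of one to three randomly placed point scatterers, mirroring the scenario generation described in the text.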

3. Phantom Fabrication and Measurement

In this study, measurements were carried out to be used for model training. To obtain the measurement data, phantoms of both healthy and tumor tissues were fabricated using methods similar to those described by Ortega-Palacios et al. [39]. Figure 2 depicts the images of the phantom fabrication, dielectric constant measurements, and microwave imaging measurement setup.
Using a dielectric probe, the dielectric constants of the phantoms were measured between 1 GHz and 10 GHz, as shown in Figure 2a. Figure 2b depicts the measurement setup, which was placed in a large, empty space. An ultra-wideband (UWB) horn antenna was employed during the measurements. For the sake of simplicity, the object was rotated rather than the antenna in the measurement setup. The computer-controlled turntable was rotated in 4-degree steps, and the scattering parameter (S11) was measured at a total of 90 angles covering 360 degrees. Figure 3 depicts the dielectric constant measurement graph of the phantoms fabricated as shown in Figure 2a.
When analyzing the dielectric constants presented in Figure 3 for the healthy phantom and the tumor phantom, a dielectric contrast of 4 to 6 is observed. A total of 12 measurements were performed, including 7 obtained by placing metal screws at 7 distinct locations in the fine sand and 5 obtained by placing the tumor phantom at 5 points on the healthy phantom. The measurement data were added to the dataset used to train the deep learning model along with the simulation data.

4. Microwave Imaging (MWI) Using Deep Learning (DL) Models

The similarities between DL models and non-linear electromagnetic scattering are initially discussed in this section. Then, the use of three real-valued and one complex-valued DL approaches is explained. These are the real-valued deep neural network-based (RV-DNN) model, the real-valued convolutional network-based (RV-CNN) model, and the combined real-valued and complex-valued DL models consisting of CNN and U-Net-based sub-models (RV-MWINet and CV-MWINet).

4.1. Similarities between DL and Non-Linear Electromagnetic Scattering

The relationship between DL and non-linear electromagnetic scattering, as established by Li et al. [20], is considered in this study. For the configuration depicted in Figure 1, the total electric field E^(n)(r), where E_i^(n)(r) is the total incident electric field and E_s^(n) is the total scattered electric field, can be calculated using Equation (5) [20].
E^(n)(r) = E_i^(n)(r) + E_s^(n) = E_i^(n)(r) + k_0² ∫_Ω (i/4) H_0^(1)(|r − r′|) χ(r′) E^(n)(r′) dr′  (5)
The parameters n, k_0, H_0^(1), and χ represent the scattering index, the wavenumber of the background medium, the first-kind zeroth-order Hankel function, and the contrast function, respectively. r = (x, y) and r′ = (x′, y′) indicate the field and source positions, respectively, with r, r′ ∈ Ω. In computational imaging, the imaging region surrounded by the antennas, whose content is unknown, is regarded as being divided into pixels. The values of the pixels provide information related to the contrast values. Consequently, the value of the scattered electric field to be used in the imaging process is computed using Equation (6) [20].
E_s^(n) = G_d E^(n) χ  (6)
E^(n) − E_inc^(n) = G_s E^(n) χ  (7)
The Green's function operators are represented by G_d and G_s in Equations (6) and (7). Iteratively applying Equations (6) and (7) yields the expression given in Equation (8) for the (t+1)-th stage of the contrast function [20].
χ^(t+1) = arg min_χ [ Σ_n ‖δE_s^(n) − J_(t)^(n) δχ‖₂² + ℛ(χ) ]  (8)
In Equation (8), δE_s^(n) and δχ are defined as δE_s^(n) ≜ E_s^(n) − E_s^(n)(χ^(t)) and δχ ≜ χ − χ^(t). The (t) indices in the expressions denote the value at the t-th iteration. J_(t)^(n) represents the Jacobian matrix of E_s^(n) with respect to χ^(t). ℛ(χ), which denotes the regularization term in Equation (8), is defined as shown in Equation (9) for simplicity [20].
ℛ(χ) = ‖Dχ‖₁  (9)
In Equation (9), the parameter D describes a sparsifying transform such as a wavelet transform. The contrast function at iteration t + 1 can then be defined as in Equation (10) [20].
χ^(t+1) = D^H S{ Dχ^(t) + D[ Σ_n (J_(t)^(n))^H J_(t)^(n) ]⁻¹ Σ_n (J_(t)^(n))^H δE_s^(n) }  (10)
S{·} and the superscript H in Equation (10) denote the element-wise soft-threshold operator and the conjugate transpose, respectively. Equation (10) can be rearranged as Equations (11) and (12) to illustrate the connection between neural networks and non-linear electromagnetic scattering [20].
Dχ^(t+1) = S{ Dχ^(t) + D[ Σ_n (J_(t)^(n))^H J_(t)^(n) ]⁻¹ Σ_n A_(t)^(n) δE_s^(n) } = S{ Dχ^(t) + D[ Σ_n (J_(t)^(n))^H J_(t)^(n) ]⁻¹ Σ_n A_(t)^(n) (E_s^(n) − G_d E_(t)^(n) χ^(t)) }  (11)
Dχ^(t+1) = S{ P^(t) χ^(t) + b^(t) }  (12)
The parameters P^(t) and b^(t) in Equation (12) are given in Equations (13) and (14) [20].
P^(t) ≜ D − D[ Σ_n (J_(t)^(n))^H J_(t)^(n) ]⁻¹ Σ_n A_(t)^(n) G_d E_(t)^(n)  (13)
b^(t) ≜ D[ Σ_n (J_(t)^(n))^H J_(t)^(n) ]⁻¹ Σ_n A_(t)^(n) E_s^(n)  (14)
Equation (12) is structurally comparable to the definition of a fully connected neural network: the parameters P and b correspond to the weights and bias values of fully connected NNs, and the indices (t) of these parameters correspond to the network layers. This similarity demonstrates that DL models are applicable to non-linear electromagnetic scattering problems.
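The analogy of Equation (12) to a fully connected layer can be made concrete with a short sketch, in which the soft-threshold S{·} plays the role of the activation function; the shapes and the threshold value below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(u, tau):
    """Element-wise soft-threshold S{.} appearing in Eqs. (10)-(12)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def unrolled_iteration(chi, P, b, tau=0.1):
    """One step of Eq. (12): S{P chi + b} has the same structure as a
    fully connected layer with weights P, bias b, and S{.} acting as
    the activation; the iteration index plays the role of the layer
    index. P, b, and tau here are stand-ins, not derived from Eqs.
    (13)-(14)."""
    return soft_threshold(P @ chi + b, tau)
```

Unrolling T iterations of this update therefore yields a T-layer network whose weights can be learned from data instead of computed from the Jacobians.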

4.2. Deep Neural Network-Based (DNN-Based) Imaging

Given that the dataset built through numerical computations in this study contains backscattered electric field data with complex values, the real-valued DNN (RV-DNN) model is constructed to handle the absolute values of the complex data. Figure 4 illustrates the representative architecture of the proposed RV-DNN model.
The input values supplied to the model are passed directly to the first hidden layer by the input elements depicted in Figure 4. The output of each element in the hidden layers and the output layer is computed using Equation (15).
h_i = σ( Σ_{j=1}^{N} W_ij x_j + b_i^h )  (15)
In Equation (15), the parameters h_i, N, W_ij, x_j, and b_i^h represent the output value of the element, the number of inputs to the element, the weight coefficients at the input of the element, the values at the input of the element, and the bias value, respectively. The parameter σ represents the activation function, and the rectified linear unit (ReLU) activation function used in this study is given in Equation (16).
σ(u) = max(0, u)  (16)
Each of the 1000 samples in the dataset comprises the magnitude values of the complex-valued backscattered electric field data with a size of (301 × 90). The RV-DNN model was constructed to handle real-valued input and output data in one dimension. Thus, the two-dimensional input and output data were transformed into one-dimensional vectors, and the model was trained using these vectors. The model, which is designed with an input layer consisting of 27,090 elements, contains a total of 5 hidden layers, each with 128 elements. The output layer of the model comprises 16,384 elements, as the size of the image to be generated by the model is 128 × 128. The settings chosen for the training phase of the model include the ReLU function as the activation function, the Adam algorithm as the optimization technique, 1000 epochs, a batch size of 32, and the mean squared error (MSE) as the loss to minimize. The 10-fold cross-validation method was applied to evaluate the performance of the model. In addition to cross-validation, the model was trained with 90% of the data and tested with the remaining 10%. To compare the performance of the models considered in the study, images were also obtained using the traditional MP-based imaging algorithm.
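The forward pass of Equations (15) and (16) with the layer sizes stated above (27,090 inputs, five hidden layers of 128 units, and a 16,384-element output) can be sketched in NumPy; the random weights below stand in for trained parameters, and applying ReLU on the output layer as well is an assumption of this sketch.

```python
import numpy as np

def relu(u):
    """Eq. (16): sigma(u) = max(0, u)."""
    return np.maximum(0.0, u)

def rv_dnn_forward(x, weights, biases):
    """Eq. (15) applied layer by layer:
    h_i = sigma(sum_j W_ij x_j + b_i^h)."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
    return h

# Layer sizes from the text; random values stand in for trained parameters.
rng = np.random.default_rng(0)
sizes = [301 * 90, 128, 128, 128, 128, 128, 128 * 128]
weights = [rng.standard_normal((m, n)) * 0.01 for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
image_vector = rv_dnn_forward(rng.standard_normal(301 * 90), weights, biases)
```

The resulting 16,384-element vector is then reshaped to a 128 × 128 image, as described for the imaging step.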

4.3. Convolutional Neural Networks-Based (CNN-Based) Imaging

In this study, a sequential real-valued CNN (RV-CNN) model for imaging from the backscattered electric field data is proposed. In the convolution process, the filtered output data are obtained by convolving the input data with a filter, also known as the kernel matrix. The filtering allows various attributes of the data to be extracted. The convolution of the input data x with the four-dimensional filter f is calculated using Equation (17) [40]. The output x^(l+1) derived from Equation (17) belongs to ℝ^(H_(l+1) × W_(l+1) × D_(l+1)).
x^(l+1)_(i^(l+1), j^(l+1), d^(l+1)) = ρ( Σ_{i=0}^{H} Σ_{j=0}^{W} Σ_{d^l=0}^{D^l} f_(i,j,d^l,d) × x^l_(i^(l+1)+i, j^(l+1)+j, d^l) )  (17)
In the equation, x^l represents the input of the l-th layer, while x^(l+1) represents the output of this layer, as well as the input of the (l + 1)-th layer. f represents the kernel of size H × W × D^l × D, while ρ is the activation function. The remaining parameters are defined as H_(l+1) = H_l − H + 1, W_(l+1) = W_l − W + 1, and D_(l+1) = D. H × W represents the spatial span of each kernel, whereas D indicates the total number of kernels. The RV-CNN model developed in this study also employs the ReLU activation function of Equation (16). The proposed RV-CNN model is shown in Figure 5.
The model shown in Figure 5 contains 7 convolutional and 3 fully connected layers. Details of the properties of the layers are given in Table 2.
As with the proposed RV-DNN model, the RV-CNN model is designed to output the one-dimensional dielectric map vector. To train the model, 1000 input samples consisting of (301 × 90) backscattered electric field values were utilized. At the output of the model, 1000 one-dimensional dielectric map vectors of length 16,384 were obtained through training. The output vector is reshaped into a two-dimensional form during the imaging step. The proposed RV-CNN model was trained using the ReLU function as the activation function, the Adam algorithm as the optimization algorithm, 2000 epochs, a batch size of 32, and the mean squared error (MSE) as the loss to minimize. Similar to the RV-DNN model, a 10-fold cross-validation approach was used to evaluate the performance of the model.
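The "valid" convolution of Equation (17), with its output size H_(l+1) = H_l − H + 1 and W_(l+1) = W_l − W + 1 and the ReLU of Equation (16) as ρ, can be sketched directly in NumPy; the loop-based form below is for clarity, not efficiency, and all shapes are illustrative.

```python
import numpy as np

def conv2d_valid(x, kernels):
    """'Valid' convolution of Eq. (17) followed by the ReLU of Eq. (16).
    x has shape (H_l, W_l, D_l); kernels has shape (H, W, D_l, D); the
    output has shape (H_l - H + 1, W_l - W + 1, D)."""
    H, W, Dl, D = kernels.shape
    Hl, Wl, _ = x.shape
    out = np.zeros((Hl - H + 1, Wl - W + 1, D))
    for i in range(Hl - H + 1):
        for j in range(Wl - W + 1):
            patch = x[i:i + H, j:j + W, :]  # (H, W, D_l) window
            # Triple sum of Eq. (17), evaluated for all D kernels at once
            out[i, j, :] = np.tensordot(patch, kernels,
                                        axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(0.0, out)             # ReLU activation (rho)
```

Stacking seven such layers, followed by three fully connected layers as in Table 2, gives the overall structure of the RV-CNN model.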

4.4. U-Net-Based Combined Neurocomputational Imaging Model

In this study, two neurocomputational models, named MWINet, are proposed for use in microwave imaging by combining the proposed CNN model with the U-Net-based model. For this purpose, a U-Net-based model extends the sequential CNN model. The proposed model utilizes raw scattered electric field data as the input and generates a one-dimensional microwave image. The structure of the proposed MWINet model is given in Figure 6.
As seen in Figure 6, the CNN structure in the initial layers of the proposed MWINet model provides general imaging, while the U-Net section is responsible for image cleaning and structural clarification of the tumors. For the purposes of this study, the layers of the model depicted in Figure 6 were constructed as the RV-MWINet model with real-valued layers and the CV-MWINet model with complex-valued layers.
To train the RV-MWINet model, 1000 input samples consisting of (301 × 90) backscattered electric field values were utilized. At the output of the model, 1000 one-dimensional dielectric map vectors of length 16,384 were obtained through training. The output data used to train the model were converted to binary values. The output vector is reshaped into a two-dimensional form during the imaging step.
For the proposed RV-MWINet model, the real-valued ReLU function was chosen as the activation function for the inner layers, and the sigmoid activation function was chosen for the output layer. In the layers of the CV-MWINet model, however, the Cartesian ReLU (CReLU) activation function given in Equation (18) is used, while the amplitude of the complex sigmoid function given in Equation (19) is used in the output layer.
σ_CReLU = max(0, x) + j max(0, y)  (18)
σ_CSigmoid = 1/(1 + e^(−x)) + j (1/(1 + e^(−y)))  (19)
In Equations (18) and (19), the parameters x and y represent the real and imaginary components of the input data, respectively. The optimization algorithm selected was Adam, with 500 epochs, a batch size of 32, and accuracy as the metric to be maximized. Similar to the proposed RV-DNN and RV-CNN models, a 10-fold cross-validation approach was used to evaluate the performance of the model. While 1000 real-valued data were used to train and evaluate the performance of the RV-MWINet model, 12 measurement data were added to the data used to train and analyze the performance of the CV-MWINet model.
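The complex-valued activations of Equations (18) and (19) can be sketched as follows; the sign convention in the sigmoid exponents and the element-wise NumPy formulation are assumptions of this sketch.

```python
import numpy as np

def crelu(z):
    """Cartesian ReLU of Eq. (18): ReLU applied independently to the
    real part x and imaginary part y of the complex input."""
    return np.maximum(0.0, z.real) + 1j * np.maximum(0.0, z.imag)

def complex_sigmoid_magnitude(z):
    """Output activation built from Eq. (19): the sigmoid is applied to
    the real and imaginary parts separately, and the amplitude of the
    resulting complex value is used, as described in the text."""
    s = 1.0 / (1.0 + np.exp(-z.real)) + 1j / (1.0 + np.exp(-z.imag))
    return np.abs(s)
```

Taking the amplitude at the output maps the complex activations to the real-valued binary image range expected by the training targets.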

4.5. Evaluation Metrics

In this study, the accuracy (ACC), mean squared error (MSE), peak signal-to-noise ratio (PSNR), universal quality index (UQI), and structural similarity (SSIM) metrics were utilized to examine the images generated by the proposed neurocomputational models. The MSE metric is given in Equation (20).
MSE = (1/(mn)) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [x(i, j) − y(i, j)]²  (20)
The variables x and y in the equation represent the input and output images of size m × n. Although MSE is a significant metric in regression problems, it is more typical to utilize the well-known PSNR, UQI, and SSIM metrics to visually analyze images. Equation (21) is utilized to calculate the PSNR measure.
PSNR = 10 log₁₀(MI² / MSE)  (21)
The parameter MI in the equation represents the maximum possible pixel value. In addition to PSNR, the UQI and SSIM metrics given in Equations (27) and (28) provide significant information about the generated images. The values of the variables used in Equations (27) and (28) are calculated by Equations (22)–(26).
μ_x = (1/N) Σ_{i=1}^{N} x_i  (22)
μ_y = (1/N) Σ_{i=1}^{N} y_i  (23)
σ_x² = (1/(N−1)) Σ_{i=1}^{N} (x_i − μ_x)²  (24)
σ_y² = (1/(N−1)) Σ_{i=1}^{N} (y_i − μ_y)²  (25)
σ_xy = (1/(N−1)) Σ_{i=1}^{N} (x_i − μ_x)(y_i − μ_y)  (26)
UQI = 4σ_xy μ_x μ_y / [(σ_x² + σ_y²)(μ_x² + μ_y²)]  (27)
In Equation (27), the dynamic range of the UQI value is [−1, 1]. The optimal value is 1, which is achieved only when the two images are identical. In the equations, μ represents the mean, σ² the variance, and σ_xy the covariance. In fact, the UQI is the basis of the SSIM calculation. The SSIM metric is calculated by Equation (28).
SSIM(x, y) = [(2μ_x μ_y + C₁)(2σ_xy + C₂)] / [(μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂)]  (28)
Comparing Equations (28) and (27), it can be observed that the difference between them is due to the C₁ and C₂ coefficients: the UQI value is obtained when C₁ and C₂ in the SSIM equation are both set to 0.
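The metrics of Equations (20)–(28) can be sketched compactly in NumPy; the global (non-windowed) SSIM computation and the constants c1 and c2 below are illustrative assumptions, since a windowed SSIM with standard constants is more common in practice.

```python
import numpy as np

def mse(x, y):
    """Eq. (20): mean squared error between images x and y."""
    return np.mean((x - y) ** 2)

def psnr(x, y, max_i=1.0):
    """Eq. (21): peak signal-to-noise ratio; max_i is the maximum
    possible pixel value MI."""
    return 10.0 * np.log10(max_i ** 2 / mse(x, y))

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Eq. (28), evaluated globally over the images; c1 and c2 are
    illustrative defaults."""
    mu_x, mu_y = x.mean(), y.mean()                       # Eqs. (22)-(23)
    var_x, var_y = x.var(ddof=1), y.var(ddof=1)           # Eqs. (24)-(25)
    cov = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)  # Eq. (26)
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def uqi(x, y):
    """Eq. (27): UQI is the SSIM of Eq. (28) with c1 = c2 = 0."""
    return ssim(x, y, c1=0.0, c2=0.0)
```

Both UQI and SSIM reach their optimal value of 1 only for identical images, consistent with the text.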

5. Numerical Results and Discussion

In this study, 1000 complex-valued backscattered electric field data were generated numerically using the setup in Figure 1 and the parameters and values in Table 1. The magnitudes of these data were used to create a dataset for the real-valued neurocomputational models, while another dataset was generated for the CV-MWINet model using the original complex values along with 12 measured samples. To improve the generalizability of the proposed models, the number of samples was kept as large as possible. The input data have the dimensions (1000, 301, 90, 1), whereas the output data originally have the dimensions (1000, 512, 512, 1). To simplify training and testing of the models, the output images were resized to dimensions of (1000, 128, 128, 1). The 10-fold cross-validation method was used for the performance evaluation of the proposed neurocomputational models. Although different numbers of epochs were used to train the models, 1000 epochs and a batch size of 32 were chosen in the 10-fold cross-validation process for all four models. Table 3 provides a comparison of the evaluation results obtained through 10-fold cross-validation using the training data. The values in Table 3 are expressed as the mean value ± the standard deviation.
MSE and SSIM metrics are presented in Table 3 for the proposed RV-DNN and RV-CNN models, while ACC and SSIM metrics are presented for the MWINet models. This is because the proposed RV-DNN and RV-CNN models use float-valued output images for training, whereas the MWINet models use binary-valued output images. On examining the data in Table 3, it can be seen that the RV-DNN model has a higher MSE error than the RV-CNN model, while the SSIM metrics are greater for the RV-DNN model than for the RV-CNN model. A comparison of the 10-fold cross-validation results of the MWINet models with those of the other models indicates that the MWINet models have superior training performance. Table 4 presents a comparison of the 10-fold cross-validation performance of the proposed neurocomputational models using test data.
In terms of MSE error, the RV-CNN model outperforms the RV-DNN model, although the SSIM values are comparable. The MWINet models produce superior outcomes compared to the proposed RV-DNN and RV-CNN models. After the 10-fold cross-validation, the dataset was shuffled, and the neurocomputational models were then trained using 90% of the data. The remaining data were used for both validation and testing. Figure 7 illustrates the change in the MSE metric during the training and validation of the RV-DNN model.
In Figure 7, the MSE error for the training data begins at a high value and rapidly decreases below 200 in the early epochs. After the 20th epoch, however, the rate of error reduction slows, and the error follows a monotonic downward trend over the remaining epochs. The validation MSE error, on the other hand, stays roughly flat at about 200, albeit with minor ripples. The MSE errors of the proposed RV-DNN model are 103.40007 and 96.39562 for training and testing, whereas the SSIM metrics are 0.92424 and 0.93020, respectively. Figure 8 depicts the change in the MSE metric during the training and validation of the proposed RV-CNN model.
The MSE metric shows a steep fall in the initial epochs and a monotonic reduction in the subsequent epochs, as depicted in Figure 8. In comparison to the proposed RV-DNN model, the RV-CNN model exhibits a smaller gap between the training and validation errors; the normalization layers used in the model help to keep the validation error close to the training error. The MSE errors of the proposed RV-CNN model are 45.283 and 153.818 for training and testing, whereas the SSIM metrics are 0.91000 and 0.92300, respectively.
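As a minimal sketch of why normalization layers have this regularizing effect, the following illustrates what a batch-normalization layer computes: activations are standardized over the batch axis and then rescaled. The function name and toy activations are assumptions for illustration, not the model's actual implementation:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch axis, then scale and shift.
    Simplified training-time view of a batch-normalization layer
    (running statistics for inference are omitted)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(42)
activations = rng.normal(loc=5.0, scale=3.0, size=(32, 10))  # batch of 32
normalized = batch_norm(activations)
```

Because each mini-batch is re-centered and re-scaled, the distribution of activations seen by later layers stays stable, which tends to narrow the gap between training and validation error curves such as those in Figure 8.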
Figure 9 depicts the change in the accuracy metric of the RV-MWINet model during training and validation.
Figure 9 illustrates a slower rise in training accuracy compared to validation accuracy. Owing to the chosen batch size and the large number of local minima in the solution space, the accuracy curves contain numerous ripples. Because of the design of the RV-MWINet model, both the CNN structure in the first model layers and the U-Net-based layers are trained simultaneously. Since image generation and improvement are performed concurrently, an increase in the number of ripples throughout training and validation is to be expected. The MSE, SSIM, and accuracy metrics for the training phase of the proposed RV-MWINet model are 0.00083, 0.99996, and 0.91139, while the same metrics for the testing process are 0.00467, 0.99957, and 0.86359. To account for the effect of the phase component of the complex-valued backscattered electric field data, each layer of the MWINet model in Figure 6 was replaced with a complex-valued layer to construct the CV-MWINet model. Figure 10 illustrates the evolution of the accuracy metrics of the proposed CV-MWINet model for training and validation over 500 epochs.
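A minimal sketch of a complex-valued layer of the kind described is given below, shown as a dense layer for brevity (CV-MWINet itself uses complex-valued convolutional and U-Net layers); all names and shapes here are illustrative assumptions. The key point is that the complex product keeps the phase information that a magnitude-only, real-valued layer discards:

```python
import numpy as np

def complex_dense(x, w_real, w_imag, b=0):
    """Complex-valued dense layer: (W_r + i W_i) applied to (x_r + i x_i).
    Frameworks typically realize this with four real matrix products."""
    w = w_real + 1j * w_imag
    return x @ w + b

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))  # complex inputs
w_r = rng.normal(size=(3, 2))
w_i = rng.normal(size=(3, 2))
out = complex_dense(x, w_r, w_i)

# Equivalent real-arithmetic form used by complex-valued layers:
out_real = x.real @ w_r - x.imag @ w_i
out_imag = x.real @ w_i + x.imag @ w_r
```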
During the training of the CV-MWINet model, the complex average cross-entropy (CACE) loss function given by Equation (29) was utilized, and the model weights from the iteration with the most effective weight distribution were retained.
$$ \mathrm{Loss}_{\mathrm{ACE}} = \frac{1}{2}\left[ \mathrm{Loss}_{\mathrm{CCE}}\big(\operatorname{Re}(y_{\mathrm{pred}}),\, y_{\mathrm{true}}\big) + \mathrm{Loss}_{\mathrm{CCE}}\big(\operatorname{Im}(y_{\mathrm{pred}}),\, y_{\mathrm{true}}\big) \right] \quad (29) $$
In Equation (29), ACE and CCE denote average cross-entropy and categorical cross-entropy, respectively. The proposed CV-MWINet model was trained for 500 epochs with a batch size of 32 and achieved a training accuracy of 0.991 and a validation accuracy of 1.000.
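Equation (29) can be sketched in NumPy as follows, assuming a standard categorical cross-entropy over the last axis; the helper names and toy values are illustrative, not the training code:

```python
import numpy as np

def cce(pred, true, eps=1e-7):
    """Categorical cross-entropy averaged over samples."""
    pred = np.clip(pred, eps, 1.0)
    return -np.mean(np.sum(true * np.log(pred), axis=-1))

def cace_loss(y_pred_complex, y_true):
    """Complex average cross-entropy of Equation (29): the mean of the
    categorical cross-entropies of the real and imaginary parts of the
    complex-valued prediction against the same real-valued target."""
    return 0.5 * (cce(np.real(y_pred_complex), y_true)
                  + cce(np.imag(y_pred_complex), y_true))

# Toy example: two samples with two classes each.
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9 + 0.8j, 0.1 + 0.2j],
                   [0.2 + 0.1j, 0.8 + 0.9j]])
loss = cace_loss(y_pred, y_true)
```

The loss is zero only when both the real and the imaginary parts of the prediction match the target, which pushes the network toward outputs whose phase carries no spurious content.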
To compare the performance of the proposed models, the RV-DNN, RV-CNN, and MWINet models were employed to generate images from data samples. In addition, images were generated from the same data using the conventional matching-pursuit (MP)-based MWI technique. Figure 11 depicts the images generated for randomly selected training data samples. The ground truth images are depicted in Figure 11a,g,m.
Figure 11b,h,n depict radar-based images generated by the MP-based method for data containing one tumor, two tumors, and three tumors, respectively. When the MP-based images are evaluated, it can be seen that even though the backscattered electric field data contain information on a relatively small scatterer, the MP-based method can make this scatterer appear larger than it actually is. In the case involving a single tumor, the image obtained from the RV-DNN model provides limited information regarding the position of the tumor. Although the RV-CNN model produces a clearer image of the same tumor, the RV-MWINet model produces the most accurate image. In the case involving two tumors, the MP-based algorithm generated a substantially enlarged image of the smaller tumor. In the images generated by the proposed RV-DNN and RV-CNN models, the small tumor is not visible. In this scenario, the RV-MWINet model delivers the most accurate representation of the ground truth. Figure 11k demonstrates that the RV-MWINet model is able to image relatively small tumors. In the scenario involving three tumors, one tumor is positioned far away, while the other two are located quite close to one another. In this case, the MP-based algorithm treats the two nearby tumors as a single tumor, as shown in Figure 11n. The RV-DNN model does not distinguish the two adjacent tumors well, and the resulting image is quite noisy. The image generated by the proposed RV-CNN model is superior to those generated by the conventional method and the RV-DNN model, but it also contains noise. Figure 11q depicts the image generated by the RV-MWINet model, which is the most similar to the ground truth. Images obtained with CV-MWINet are given in Figure 11f,l,r; when these images are analyzed, it can be observed that they are identical to the ground truth images.
It can be stated that processing the complex-valued input in complex-valued layers, without discarding the imaginary component of the data, enhances the image quality at the output of the CV-MWINet. Similar to Figure 11, Figure 12 shows the images generated by the MP-based algorithm, the RV-DNN model, the RV-CNN model, and the MWINet models for test data samples. In the case of a single tumor, the location of the tumor can be detected, albeit imprecisely, from the blurry images obtained with the MP-based algorithm, the RV-DNN model, and the RV-CNN model. As seen in Figure 12e, RV-MWINet provided the cleanest and finest image for this case. In the scenario with two tumors, the MP-based method generates a rather large tumor image for the small tumor, as depicted in Figure 12h. This image also shows that the MP-based algorithm depicts the tumor as large enough to extend beyond the skin. The RV-DNN-based image in this scenario is quite noisy, so only the position of the major tumor is recognizable. Although its image is also noisy, the RV-CNN model generates a better image than the MP-based algorithm and the RV-DNN model. In contrast, the MWINet models generated the most precise results in these scenarios. In all test scenarios, the CV-MWINet model achieves the best results compared to the other models, while the RV-MWINet model produces results comparable to those of the CV-MWINet model. It can be said that the use of complex-valued data improves the performance of the model.
In the final scenario with two large tumors and one small tumor, the MP-based algorithm presents two adjacent tumors as if they were a single tumor. The small tumor is not visible in the image generated by the RV-DNN model. In this case, the RV-CNN model displays two adjacent tumors as a single tumor. However, as shown in Figure 12q,r, the RV-MWINet and CV-MWINet models accurately predicted the location and size of the three tumors in this scenario.
To analyze the performance of the models on a real-world problem following the simulation studies, a metal screw was placed in fine sand, a tumor phantom was placed in a healthy phantom, and measurement data were collected using a horn antenna and an Agilent vector network analyzer in accordance with the monostatic CSAR principle. Placing metal in fine sand allows the analysis of the effects of a perfect electric conductor (PEC) material in a homogeneously distributed environment, whereas the tumor phantom placed within a healthy phantom simulates a realistic patient. In the measurement scenarios presented in Table 5, the scatterers were placed at a specific distance from the center of the imaging domain and at a 45-degree angle to the x-axis.
Figure 13 depicts the images generated by the CV-MWINet model using data collected from measurements of the scenarios involving a metal screw in fine sand.
Although only a small number of measurement results were used to train CV-MWINet, the performance of the model on measurement data was remarkably precise. Figure 14 illustrates images generated by the CV-MWINet model from measurement data with the tumor phantom located within the healthy phantom. In the scenarios utilizing phantoms, where the radius of the tumor phantom is around 2 cm, the tumors in the images are correspondingly large. Figure 14 illustrates that in scenarios #5, #7, and #8, the images obtained from the model closely match the ground truth images, whereas in scenario #6, the image derived from the CV-MWINet model depicts two adjacent tumors when there should be only one. One of the main reasons for this inaccuracy is the small number of measurement samples in the dataset used to train the model; increasing the number of measurement samples can be expected to yield more precise results.
Table 6 provides PSNR, UQI, and SSIM metrics for the entire dataset in addition to simulation data for the models proposed in this study.
Analyzing the numerical metrics in Table 6 reveals that the neurocomputational models proposed in this study produce images of higher quality than the conventional technique. Even though the metrics of the proposed RV-DNN and RV-CNN models are comparable, the RV-CNN model noticeably outperforms the RV-DNN model when the images themselves are examined. The images and metrics of the MWINet models demonstrate that these models generate exceptionally high-quality microwave images. Although their training takes longer, deep learning models are well known to generate images quickly in the testing phase, whereas traditional algorithms can require long image generation times. The times required to generate the conventional images depicted in Figure 11 and Figure 12 are listed in Table 7, based on the mesh sizes employed by the MP-based method utilized in this study.
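For reference, the PSNR and UQI metrics reported in Table 6 can be computed from their standard definitions as follows; this is an illustrative sketch with placeholder images, not the exact evaluation code used in the study:

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(max_val ** 2 / mse)

def uqi(x, y):
    """Universal quality index (Wang & Bovik): combines correlation,
    luminance, and contrast similarity; equals 1 for identical images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(1)
truth = rng.random((128, 128))                       # placeholder ground truth
noisy = np.clip(truth + rng.normal(scale=0.01, size=truth.shape), 0, 1)
```

With a small additive noise of this kind, the PSNR lands around 40 dB, while a perfect reconstruction drives UQI to 1, which is why the near-perfect CV-MWINet outputs produce the extreme PSNR and unit SSIM/UQI values reported in Table 6.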
As shown in Table 7, imaging was carried out using an MP-based technique with 9061 and 16,105 mesh points. The imaging time required by the MP-based approach is not dependent on the number of tumors but is heavily reliant on the number of mesh points. The neurocomputational models proposed in this study can generate images of superior quality in less time than conventional techniques.

6. Conclusions

In this study, three distinct neurocomputational models based on DNNs, CNNs, and U-Net are presented for radar-based microwave imaging using raw backscattered electric field data. The proposed models are trained and tested using backscattered electric field data collected via the CSAR concept. In the training and testing phases, the RV-DNN model yields MSE errors of 103.40007 and 96.39562, whereas the RV-CNN model yields errors of 45.283 and 153.818 for the same data. Similarly, the RV-DNN model produced images with SSIMs of 0.92424 and 0.93020 in the training and testing phases, while the RV-CNN model produced images with SSIMs of 0.91000 and 0.92300. The MSE, SSIM, and accuracy metrics for the training phase of the proposed RV-MWINet model are 0.00083, 0.99996, and 0.91139, while the same metrics for the testing process are 0.00467, 0.99957, and 0.86359. For the CV-MWINet model using complex-valued data, the training PSNR, UQI, and SSIM values were 209.09540, 0.96754, and 1.00000, respectively, while the corresponding test values were 209.46525, 0.96995, and 1.00000. The images generated by the neurocomputational models are compared to those obtained by the MP-based algorithm; analysis of the obtained images demonstrates that the proposed neurocomputational models generate more effective results.

Funding

This research was funded by The Scientific and Technological Research Council of Turkey (TUBITAK) with project number 122E093.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The author declares no conflict of interest.

Figure 1. Simulation setup for two-dimensional breast tumor imaging (The red arcs from the antenna to the imaging field represent the propagating wave, the gray arrows the scattered field, and the red arrows the backscattered field).
Figure 2. (a) Phantom fabrication, dielectric constant measurement of (b) healthy and (c) tumor phantoms, and (d) microwave measurement setup for two-dimensional breast tumor imaging.
Figure 3. Dielectric constants of the fabricated phantoms between 1 GHz and 10 GHz.
Figure 4. The proposed RV-DNN model for microwave medical imaging.
Figure 5. The proposed RV-CNN model for microwave medical imaging (Input data is shown in red color).
Figure 6. The proposed MWINet model for microwave medical imaging (Input data is shown in red color).
Figure 7. Mean squared error (MSE) curves for training and validation phases of the proposed RV-DNN model.
Figure 8. Mean squared error (MSE) curves for training and validation phases of the proposed RV-CNN model.
Figure 9. Accuracy curves for training and validation phases of the proposed RV-MWINet model.
Figure 10. Accuracy curves for training and validation phases of the proposed CV-MWINet model.
Figure 11. Comparison of samples of microwave images generated by the proposed neurocomputational models for train data ((a,g,m) are ground truth images).
Figure 12. Comparison of samples of microwave images generated by the proposed neurocomputational models for test data ((a,g,m) are ground truth images).
Figure 13. Comparison of samples of microwave images generated by the proposed CV-MWINet model for measurement data (metal screw in fine sand).
Figure 14. Comparison of samples of microwave images generated by the proposed CV-MWINet model for measurement data (tumor phantom in healthy phantom).
Table 1. Values for simulation parameters.

Parameter | Value
Start Frequency (GHz) | 1
Stop Frequency (GHz) | 10
Frequency Count | 301
Skin Radius (cm) | 7
Gap Between Skin and Antenna (cm) | 2
Number of Tumor Scatterers | 1–3
Radius Range of Tumor Scatterers (cm) | 0.2–0.9
Rotation Angle Increment (°) | 4
Table 2. Properties of the proposed CNN-based model layers.

Layer | Output Shape | Number of Parameters
Convolution 2D | (299, 89, 32) | 288
Batch Normalization | (299, 89, 32) | 128
Convolution 2D | (297, 86, 32) | 9216
Batch Normalization | (297, 86, 32) | 128
Maximum Pooling 2D | (99, 28, 32) | -
Convolution 2D | (97, 26, 64) | 18,432
Batch Normalization | (97, 26, 64) | 256
Convolution 2D | (95, 24, 64) | 36,864
Batch Normalization | (95, 24, 64) | 256
Maximum Pooling 2D | (31, 8, 64) | -
Convolution 2D | (29, 6, 128) | 73,728
Batch Normalization | (29, 6, 128) | 512
Convolution 2D | (27, 4, 128) | 147,456
Batch Normalization | (27, 4, 128) | 512
Convolution 2D | (25, 2, 128) | 147,456
Batch Normalization | (25, 2, 128) | 512
Flatten | 6400 | -
Fully Connected #1 | 2048 | 13,107,200
Batch Normalization | 2048 | 8192
Fully Connected #2 | 2048 | 4,196,352
Fully Connected #3 | 16,384 | 33,570,816
Table 3. Performance metrics of the proposed neurocomputational models for 10-fold cross-validation using train data.

Fold | RV-DNN MSE | RV-DNN SSIM | RV-CNN MSE | RV-CNN SSIM | RV-MWINet ACC | RV-MWINet SSIM | CV-MWINet ACC | CV-MWINet SSIM
Fold #1 | 97.784 ± 45.153 | 0.918 ± 0.031 | 62.731 ± 33.540 | 0.897 ± 0.051 | 0.999 ± 0.001 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #2 | 100.917 ± 47.951 | 0.925 ± 0.029 | 75.192 ± 42.959 | 0.893 ± 0.052 | 0.988 ± 0.005 | 0.998 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #3 | 102.443 ± 48.706 | 0.922 ± 0.030 | 65.007 ± 41.774 | 0.888 ± 0.054 | 0.998 ± 0.002 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #4 | 92.100 ± 43.208 | 0.925 ± 0.029 | 74.251 ± 48.942 | 0.886 ± 0.053 | 0.994 ± 0.004 | 0.999 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #5 | 101.076 ± 47.054 | 0.924 ± 0.031 | 61.865 ± 38.730 | 0.887 ± 0.054 | 0.998 ± 0.001 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #6 | 92.854 ± 41.111 | 0.924 ± 0.031 | 79.385 ± 47.123 | 0.890 ± 0.058 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #7 | 98.932 ± 46.810 | 0.924 ± 0.030 | 65.930 ± 49.868 | 0.891 ± 0.056 | 0.999 ± 0.001 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #8 | 98.564 ± 45.503 | 0.925 ± 0.030 | 61.795 ± 39.846 | 0.890 ± 0.052 | 0.999 ± 0.001 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #9 | 102.653 ± 49.134 | 0.921 ± 0.031 | 61.114 ± 43.199 | 0.892 ± 0.053 | 0.994 ± 0.004 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Fold #10 | 93.116 ± 43.122 | 0.927 ± 0.030 | 71.750 ± 58.493 | 0.888 ± 0.057 | 0.996 ± 0.003 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Average | 98.044 ± 45.775 | 0.924 ± 0.030 | 67.902 ± 44.447 | 0.890 ± 0.054 | 0.997 ± 0.002 | 1.000 ± 0.000 | 1.000 ± 0.000 | 1.000 ± 0.000
Table 4. Performance metrics of the proposed neurocomputational models for 10-fold cross-validation using test data.

Fold | RV-DNN MSE | RV-DNN SSIM | RV-CNN MSE | RV-CNN SSIM | RV-MWINet ACC | RV-MWINet SSIM | CV-MWINet ACC | CV-MWINet SSIM
Fold #1 | 185.183 ± 124.598 | 0.914 ± 0.030 | 157.868 ± 108.600 | 0.915 ± 0.030 | 0.995 ± 0.004 | 1.000 ± 0.000 | 0.992 ± 0.005 | 0.999 ± 0.001
Fold #2 | 207.658 ± 136.553 | 0.912 ± 0.033 | 162.565 ± 107.290 | 0.910 ± 0.033 | 0.987 ± 0.005 | 0.998 ± 0.000 | 0.993 ± 0.005 | 0.999 ± 0.001
Fold #3 | 195.671 ± 132.356 | 0.911 ± 0.032 | 156.713 ± 109.397 | 0.910 ± 0.027 | 0.993 ± 0.004 | 0.999 ± 0.001 | 0.993 ± 0.004 | 0.999 ± 0.001
Fold #4 | 200.928 ± 137.845 | 0.919 ± 0.027 | 163.380 ± 109.841 | 0.906 ± 0.032 | 0.992 ± 0.005 | 0.999 ± 0.001 | 0.993 ± 0.004 | 0.999 ± 0.001
Fold #5 | 181.389 ± 119.239 | 0.916 ± 0.031 | 152.357 ± 107.160 | 0.915 ± 0.028 | 0.993 ± 0.004 | 0.999 ± 0.000 | 0.993 ± 0.005 | 0.999 ± 0.001
Fold #6 | 216.705 ± 140.831 | 0.912 ± 0.030 | 178.969 ± 122.073 | 0.909 ± 0.033 | 0.993 ± 0.005 | 0.999 ± 0.001 | 0.993 ± 0.005 | 0.999 ± 0.001
Fold #7 | 202.940 ± 135.807 | 0.915 ± 0.029 | 168.076 ± 117.404 | 0.910 ± 0.032 | 0.993 ± 0.004 | 0.999 ± 0.001 | 0.994 ± 0.004 | 0.999 ± 0.000
Fold #8 | 187.140 ± 126.529 | 0.914 ± 0.024 | 150.096 ± 112.963 | 0.911 ± 0.030 | 0.995 ± 0.004 | 1.000 ± 0.000 | 0.994 ± 0.004 | 0.999 ± 0.000
Fold #9 | 198.709 ± 135.104 | 0.914 ± 0.028 | 164.587 ± 112.925 | 0.913 ± 0.028 | 0.991 ± 0.006 | 0.999 ± 0.001 | 0.992 ± 0.005 | 0.999 ± 0.001
Fold #10 | 197.682 ± 123.131 | 0.916 ± 0.029 | 166.285 ± 111.977 | 0.910 ± 0.029 | 0.993 ± 0.004 | 0.999 ± 0.001 | 0.993 ± 0.005 | 0.999 ± 0.001
Average | 197.401 ± 131.199 | 0.914 ± 0.030 | 162.089 ± 111.963 | 0.911 ± 0.030 | 0.993 ± 0.005 | 0.999 ± 0.001 | 0.993 ± 0.004 | 0.999 ± 0.001
Table 5. Scenarios used in the measurement.

Scenario | Materials | Distance from the Center (cm)
#1 | Metal screw in fine sand | 0
#2 | Metal screw in fine sand | 2
#3 | Metal screw in fine sand | 4
#4 | Metal screw in fine sand | 6
#5 | Tumor phantom in healthy phantom | 0
#6 | Tumor phantom in healthy phantom | 2
#7 | Tumor phantom in healthy phantom | 4
#8 | Tumor phantom in healthy phantom | 5.5
Table 6. Performance metrics of the proposed neurocomputational models for the images given in Figure 11 and Figure 12.

Metric | Model | Train 1 Tumor | Train 2 Tumors | Train 3 Tumors | Test 1 Tumor | Test 2 Tumors | Test 3 Tumors | All Train Set (Avg ± Std) | All Test Set (Avg ± Std)
PSNR (dB) | MP-Based Algorithm | 25.87656 | 24.14579 | 22.83756 | 19.78088 | 23.12839 | 22.51522 | – | –
PSNR (dB) | RV-DNN Model | 23.0948 | 22.02725 | 17.7845 | 23.76579 | 18.924 | 21.27754 | 20.37510 ± 2.89746 | 20.52958 ± 2.93180
PSNR (dB) | RV-CNN Model | 23.62187 | 22.1329 | 19.95046 | 21.513 | 20.54329 | 19.71726 | 21.22355 ± 2.27647 | 21.38717 ± 2.62633
PSNR (dB) | RV-MWINet Model | 42.35213 | 34.39235 | 34.32751 | 37.00931 | 35.94595 | 34.00697 | 34.68058 ± 3.24353 | 34.57853 ± 3.53797
PSNR (dB) | CV-MWINet Model | 217.02188 | 207.71069 | 209.097967 | 210.92949 | 207.84857 | 206.52970 | 209.09540 ± 3.56411 | 209.46525 ± 3.59434
UQI | MP-Based Algorithm | 0.914 | 0.92553 | 0.90138 | 0.82 | 0.91924 | 0.89136 | – | –
UQI | RV-DNN Model | 0.74023 | 0.73941 | 0.71659 | 0.74554 | 0.72334 | 0.73738 | 0.72929 ± 0.01239 | 0.72974 ± 0.1239
UQI | RV-CNN Model | 0.74212 | 0.73895 | 0.72828 | 0.73449 | 0.73098 | 0.72887 | 0.73380 ± 0.00854 | 0.73426 ± 0.00957
UQI | RV-MWINet Model | 0.9995 | 0.99792 | 0.99783 | 0.9986 | 0.99842 | 0.99758 | 0.99759 ± 0.00172 | 0.99750 ± 0.00211
UQI | CV-MWINet Model | 0.99118 | 0.967916 | 0.966361 | 0.98312 | 0.96825 | 0.95368 | 0.96754 ± 0.01632 | 0.96995 ± 0.01479
SSIM | MP-Based Algorithm | 0.82675 | 0.84876 | 0.80792 | 0.67093 | 0.83687 | 0.78471 | – | –
SSIM | RV-DNN Model | 0.75538 | 0.74624 | 0.7142 | 0.75802 | 0.7257 | 0.73583 | 0.73705 ± 0.02006 | 0.73754 ± 0.01913
SSIM | RV-CNN Model | 0.75643 | 0.72977 | 0.725 | 0.74177 | 0.7018 | 0.72572 | 0.73220 ± 0.01953 | 0.73457 ± 0.02198
SSIM | RV-MWINet Model | 0.99878 | 0.99291 | 0.99312 | 0.99642 | 0.99473 | 0.99159 | 0.99295 ± 0.00396 | 0.99302 ± 0.00419
SSIM | CV-MWINet Model | 1.00000 | 1.00000 | 1.00000 | 1.00000 | 1.00000 | 1.00000 | 1.00000 ± 0.00000 | 1.00000 ± 0.00000
– Not available.
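The PSNR and UQI entries in Table 6 follow their standard definitions. A minimal sketch for images normalized to [0, 1], using the global (single-window) form of Wang and Bovik's universal quality index rather than the sliding-window original; SSIM is usually computed with a library such as scikit-image and is omitted here:

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def uqi(ref, img):
    # Global universal quality index; the original definition averages
    # this quantity over a sliding window.
    x, y = ref.astype(float).ravel(), img.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

# Hypothetical reference image and a noisy reconstruction of it.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
rec = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
```

For identical images both metrics reach their ideal values (infinite PSNR, UQI of 1), which matches the perfect SSIM scores the CV-MWINet model reports above.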
Table 7. Image generation time with the MP-based algorithm.

| Data | Tumors | 9061 Points | 16,105 Points |
|---|---|---|---|
| Train Data | 1 Tumor | 189.96657 s | 385.11506 s |
| Train Data | 2 Tumors | 186.09587 s | 391.25924 s |
| Train Data | 3 Tumors | 184.26689 s | 337.86427 s |
| Test Data | 1 Tumor | 180.65272 s | 386.98390 s |
| Test Data | 2 Tumors | 185.19212 s | 370.29391 s |
| Test Data | 3 Tumors | 184.13824 s | 376.40420 s |
| Avgs. ± Stds. | | 185.05210 ± 3.03536 s | 374.6534 ± 19.55980 s |
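Wall-clock generation times like those in Table 7 can be collected with a simple timing harness. A minimal sketch in which `reconstruct` is a placeholder for any image-generation callable, not the paper's actual implementation:

```python
import time

def time_reconstruction(reconstruct, data, repeats=3):
    # Returns the mean wall-clock time in seconds over a few repeats,
    # using the monotonic high-resolution perf_counter clock.
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        reconstruct(data)
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)

# Example with a dummy workload standing in for the MP-based algorithm.
mean_seconds = time_reconstruction(lambda d: sorted(d), list(range(10000)))
```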