Article

2D/3D Multimode Medical Image Alignment Based on Spatial Histograms

Yuxi Ban, Yang Wang, Shan Liu, Bo Yang, Mingzhe Liu, Lirong Yin and Wenfeng Zheng

1 School of Automation, University of Electronic Science and Technology of China, Chengdu 610054, China
2 College of Computer Science and Cyber Security, Chengdu University of Technology, Chengdu 610059, China
3 Department of Geography and Anthropology, Louisiana State University, Baton Rouge, LA 70803, USA
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8261; https://doi.org/10.3390/app12168261
Submission received: 30 July 2022 / Revised: 15 August 2022 / Accepted: 16 August 2022 / Published: 18 August 2022

Abstract

The key to image-guided surgery (IGS) technology is finding the transformation relationship between preoperative 3D images and intraoperative 2D images, namely, 2D/3D image registration. A feature-based 2D/3D medical image registration algorithm is investigated in this study. We use a two-dimensional weighted spatial histogram of gradient directions to extract statistical features, overcoming the limitations of the existing gradient-direction-histogram algorithm and expanding its applicable scenarios while preserving accuracy. The proposed algorithm was tested on CT and synthetic X-ray images and compared with existing algorithms. The results show that the proposed algorithm improves accuracy and efficiency and reduces sensitivity to the initial value.

1. Introduction

Image-guided surgery, which involves computer vision, biomedicine, imaging, automatic control, and other disciplines, is an interdisciplinary research direction [1]. Through the comprehensive application of multiple types of medical image information [2], it supports preoperative diagnosis [3,4,5,6,7], disease analysis [8,9], planning of the surgical path, intraoperative localization of the lesion, real-time tracking of surgical instruments [10], and adjustment of the spatial position of surgical instruments, enabling accurate diagnosis and precise treatment [11]. This technology offers many benefits for patients, such as reducing surgical trauma, speeding up recovery, and shortening hospital stays while lowering costs. The accurate image information in image-guided surgery is obtained by integrating preoperative and intraoperative images with navigation technology [12,13]. Usually, high-resolution three-dimensional scanning methods such as MRI, CT, and PET are used to image the anatomical region of interest [14,15]. These high-resolution images carry rich spatial information and can better reflect human tissue structure and physiological information; however, their acquisition time is long, which is ill-suited to the intraoperative environment. The data used during the operation are two-dimensional ultrasound, X-ray, and optical images [16]. These two-dimensional images offer fast imaging and low radiation, suiting the operating environment, but their resolution is low, making it difficult to obtain accurate and complete lesion location and texture information. Therefore, a three-dimensional image is needed to supply the higher-dimensional information. In image-guided surgery, the preoperative and intraoperative images are mapped into the same coordinate system by comparing corresponding information in the same tissue or organ, keeping the anatomical structure consistent [2]. Registration of preoperative and intraoperative data, together with surgical instrument tracking, provides surgeons with the instrument's current position relative to the planned trajectory, nearby vulnerable structures, and the final target in image-guided minimally invasive surgery. In interventional radiology, registering pre-intervention images with X-ray or ultrasound images makes tools such as catheters and needles visible, significantly improving navigation accuracy. In image-guided endoscopic surgery, 3D virtual images of anatomy and pathology are generated from preoperative images and registered to real-time endoscopic images; through augmented reality visualization, anatomical structures hidden under tissue can be displayed. In external-beam radiotherapy, registering the planning CT with daily pre-treatment images achieves accurate patient positioning, which is essential for delivering the targeted dose precisely while sparing healthy critical tissues.
Image-guided surgery's success largely depends on the registration accuracy of preoperative and intraoperative image data, i.e., 2D/3D registration [17,18,19,20]. Image registration was developed to integrate multisource image information. Its main purpose is to find the spatial transformation that maps two or more images of the same anatomy into alignment, so as to obtain the maximum image information [11,21,22,23]. These images can come from different times, different imaging equipment, or different viewing angles. After registration, their spatial pose and texture information are consistent. The images that undergo spatial transformation during registration are called moving images [24,25,26], while the untransformed standard images are called reference images. The spatial transformation can be linear or nonlinear.
Researchers have proposed a method called intensity distance [27,28]. Its essence is to sum the Euclidean distances of all pixels with the same gray value, so it combines three kinds of image information: gray level, pixel coordinates, and pixel count, increasing reliability compared with registration based on gray-level information alone. Others [4] have proposed 2D/3D registration methods combining machine learning and geometric transformation. The projection space of the traditional projection algorithm contains three translation and three rotation parameters [29], and its high dimensionality dramatically affects the timeliness of registration. Ghafurian et al. [30] observed that 2D/3D registration must search a complex solution space, leading to heavy computation, and therefore proposed a spatial parameter-decoupling method. According to the dimensional differences between the images, registration can be divided into 2D/2D, 2D/3D, 3D/3D, and time-series registration [31], each with different uses. This paper mainly studies 2D/3D registration in image-guided surgery.
This paper mainly proposes a 2D/3D registration algorithm based on a spatial histogram. The algorithm introduces a spatial histogram into the 2D/3D registration problem, and develops a weighted spatial histogram of gradient directions for registration, improving the registration accuracy and convergence range of translation transformation.

2. Materials and Methods

The algorithms in this paper are implemented with the open-source Insight Toolkit (ITK) [32]. ITK is an open-source toolkit for medical imaging research, mainly used for medical image registration and segmentation, and it also includes many image-processing algorithms, such as medical image filtering and statistical analysis of image data. Developing medical image registration with ITK requires configuring a complex build environment, which is not described here.
The DICOM sequence obtained from the human brain model’s CT scan was used as a 3D floating image in the registration experiment. In addition, the digitally reconstructed radiograph (DRR) generated by projection rendering under specific CT parameters was used as a 2D reference image to simulate a real X-ray image. The size of the CT image was 512 × 512 × 283, the voxel spacing was 0.7813 × 0.7813 × 1.0, and the unit was mm. The projected image size was 512 × 512, the pixel spacing was 0.5 × 0.5, and the unit was mm. The 3D screenshot of the CT image is shown in Figure 1, and the 3D model rendered by the CT image is shown in Figure 2.
The CT images were resampled to reduce the computational load of the experiments. The sampled image data are shown in Table 1.
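For concreteness, the following is a minimal sketch of this preprocessing step using SimpleITK's Python bindings (an assumption; the paper uses the C++ ITK toolkit, and the DICOM directory name here is hypothetical). The target grid follows Table 1.

```python
import SimpleITK as sitk

# Read the CT DICOM series as a 3D volume ("ct_dicom_dir" is a hypothetical path).
reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_dicom_dir"))
ct = reader.Execute()   # 512 x 512 x 283 voxels, spacing 0.7813 x 0.7813 x 1.0 mm

# Resample onto the coarser grid of Table 1 (200 x 200 x 142 at 2 mm spacing).
resampler = sitk.ResampleImageFilter()
resampler.SetOutputSpacing((2.0, 2.0, 2.0))
resampler.SetSize((200, 200, 142))
resampler.SetOutputOrigin(ct.GetOrigin())
resampler.SetOutputDirection(ct.GetDirection())
resampler.SetInterpolator(sitk.sitkLinear)
ct_resampled = resampler.Execute(ct)
```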
In the experiments, the pixel range of all 2D images was linearly mapped to 0–255 through Formula (1):

$$outputPixel = \frac{(inputPixel - inpMin) \times 255}{inpMax - inpMin}$$    (1)
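A direct numpy transcription of Formula (1) might look as follows (illustrative only; the function and variable names are our own):

```python
import numpy as np

def map_to_uint8(img: np.ndarray) -> np.ndarray:
    """Linearly map pixel values to 0-255 as in Formula (1)."""
    inp_min, inp_max = float(img.min()), float(img.max())
    out = (img - inp_min) * 255.0 / (inp_max - inp_min)
    return out.astype(np.uint8)
```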
The weighted histogram of gradient directions (WHGD) is a simple image histogram that considers only the gradient information in the image [33]. It takes the gradient direction as the basis for dividing histogram bins, making it sensitive to rotation transformation. Like other image histograms, however, it completely ignores the image's spatial information, making it insensitive to translation transformation; to complete a registration it must be assisted by other registration algorithms, and because of the complexity of parameter decoupling, the accuracy of the translation parameters is difficult to guarantee. Adding spatial information to the gradient direction histogram restores sensitivity to translation transformation and helps overcome these defects. Therefore, this paper proposes using a spatial histogram, instead of a simple image histogram, to collect the gradient feature statistics of the image.
For a discrete function $f: x \mapsto v$, the simple histogram of $f$ is $h_f: v \mapsto \mathbb{N}$, where $\mathbb{N}$ is the set of nonnegative integers and $h_f(v)$ is the number (frequency) of elements $x$ with value $v$. This describes a simple zero-order-moment histogram that discards all information about the domain, i.e., the spatial layout. Birchfield and Rangarajan [34] proposed using higher-order moments containing spatial information to make up for the defects of the simple histogram, and named the new histogram a spatial histogram, or spatiogram. A spatial histogram captures both the frequency information of the function's elements and the spatial domain information of the function. When the spatial histogram is computed from the second moment of the image, each pixel's contribution in space is described by the mean and covariance of the pixel positions.
For a 2D image $I$, its value at pixel coordinates $(x, y)$ is $v$, where $v$ may be the original pixel value or a value obtained after image preprocessing. Dividing the histogram intervals according to the value $v$, the second-order spatial histogram of image $I$ can be expressed by Formula (2):
$$h_I^{(2)}(b) = \langle n_b, \mu_b, \Sigma_b \rangle, \quad b = 1, \ldots, B$$    (2)

$$h_I^{(0)}(b) = n_b, \quad b = 1, \ldots, B$$    (3)
where $B$ is the number of bins in the spatial histogram, $n_b$ is the number of pixels belonging to bin $b$, and $\mu_b$ and $\Sigma_b$ are the mean and covariance of the coordinates of the pixels in bin $b$, respectively. Formula (3) is the simple histogram of an image. Comparing Formula (2) with Formula (3), the spatial histogram extracts the mean and covariance of the pixel coordinates in addition to the frequency information, which is beneficial when calculating the similarity between two histograms. The similarity between two spatial histograms $h$ and $h'$ can be expressed as a weighted sum, as shown in Formula (4):
$$\rho(h, h') = \sum_{b=1}^{B} \psi_b \, \rho_n(n_b, n'_b)$$    (4)
where $\rho_n$ is a distance measure comparing bin $b$ of the histograms $h$ and $h'$, and $\psi_b$ is a weighting coefficient. When calculating the similarity between simple histograms, $\psi_b = 1$ and $\rho(h, h')$ degenerates to the common histogram similarity calculation. For the second-order spatial histogram, $\psi_b$ is determined by the product of two probability densities related to the coordinate means and covariances: the Gaussian $N(\mu'_b, \Sigma'_b)$ evaluated at $\mu_b$, and the Gaussian $N(\mu_b, \Sigma_b)$ evaluated at $\mu'_b$. $\psi_b$ is given in Formula (5):
$$\psi_b = \eta \, \exp\left\{ -\frac{1}{2} (\mu_b - \mu'_b)^T \hat{\Sigma}_b^{-1} (\mu_b - \mu'_b) \right\}$$    (5)

$$\hat{\Sigma}_b^{-1} = \Sigma_b^{-1} + (\Sigma'_b)^{-1}$$    (6)

$$\eta = \frac{1}{2\pi |\Sigma_b|^{1/2}} \cdot \frac{1}{2\pi |\Sigma'_b|^{1/2}}$$    (7)
where $\eta$ is the Gaussian normalization constant defined in Formula (7). The exponential part of Formula (5) combines the squared Mahalanobis distances between $\mu_b$ and $\mu'_b$, measured under $\Sigma_b$ and under $\Sigma'_b$.
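As an illustration, a numpy sketch of the weight of Formulas (5)–(7) could look as follows, assuming 2 × 2 coordinate covariances that are invertible (degenerate bins would need regularization; all names are our own):

```python
import numpy as np

def gaussian_weight(mu_b, cov_b, mu_bp, cov_bp):
    """psi_b of Formulas (5)-(7): Gaussian similarity of the coordinate mean and
    covariance of one bin across two spatial histograms (2D coordinates assumed)."""
    sigma_hat_inv = np.linalg.inv(cov_b) + np.linalg.inv(cov_bp)      # Formula (6)
    eta = (1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov_b)))
           * 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov_bp))))    # Formula (7)
    diff = mu_b - mu_bp
    return eta * np.exp(-0.5 * diff @ sigma_hat_inv @ diff)           # Formula (5)
```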
A spatial histogram can be viewed as a geometric model that bridges the gap between the plain histogram and more specific transformation models (such as translation, rotation, affine, projective, or B-spline models). Like simple histograms, spatial histograms are computationally efficient and are compared between corresponding image blocks. Unlike a simple histogram, however, a spatial histogram retains some geometric information between pixels; more specifically, it captures the global positional information of the pixels.
To address the problems of the registration algorithm based on the gradient direction histogram, we propose using a spatial histogram instead of a simple histogram to extract the statistical characteristics of the image's gradient direction and gradient amplitude information. Specifically, when constructing the weighted histograms of the reference image and the DRR image, the coordinate mean and coordinate covariance of each bin of the weighted histogram of gradient directions are also calculated. The new statistical histogram is called the weighted spatial histogram of gradient directions (WSHGD), expressed in Formula (8):
$$h(d) = \langle n_d, \mu_d, \Sigma_d \rangle, \quad d = 0, 1, \ldots, 359$$    (8)

$$n_d = \sum_{(i,j) \in GD_d} GM(i,j)$$    (9)

$$GD_d = \{ (i,j) \mid GD(i,j) = d \}$$    (10)
The WSHGD still takes the gradient direction as the basis for dividing the histogram intervals; every 1 degree is one bin, for a total of 360 bins. According to Formula (10), every pixel $(i,j)$ of the image is assigned to its corresponding bin, where $\mu_d$ and $\Sigma_d$ are the coordinate mean vector and coordinate covariance matrix of the pixels in bin $d$, respectively; $GM(i,j)$ is the gradient amplitude at pixel $(i,j)$, with $GM(i,j) \in [0, 1]$; and $GD(i,j)$ is the gradient direction at pixel $(i,j)$, with $GD(i,j) \in [0, 360)$. The height $n_d$ of bin $d$ equals, as shown in Formula (9), the sum of the gradient amplitudes of all pixels in the bin.
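A possible numpy sketch of WSHGD construction per Formulas (8)–(10) is shown below. The text does not state whether the coordinate moments should be amplitude-weighted, so plain coordinate statistics are used here, and a small ridge keeps the covariances invertible (both choices are our assumptions):

```python
import numpy as np

def wshgd(image: np.ndarray):
    """Weighted spatial histogram of gradient directions, per Formulas (8)-(10):
    returns bin heights n_d, coordinate means mu_d, and coordinate covariances."""
    gy, gx = np.gradient(image.astype(np.float64))
    gm = np.hypot(gx, gy)
    gm /= gm.max() + 1e-12                        # GM(i, j) normalized into [0, 1]
    gd = np.degrees(np.arctan2(gy, gx)) % 360.0   # GD(i, j) in [0, 360)
    bins = gd.astype(int)                         # one bin per degree (Formula (10))

    n = np.zeros(360)
    mu = np.zeros((360, 2))
    cov = np.tile(np.eye(2), (360, 1, 1))
    ii, jj = np.indices(image.shape)
    coords = np.stack([ii.ravel(), jj.ravel()], 1).astype(np.float64)
    labels, weights = bins.ravel(), gm.ravel()
    for d in range(360):
        mask = labels == d
        if not mask.any():
            continue
        n[d] = weights[mask].sum()                # Formula (9): amplitude-weighted height
        pts = coords[mask]
        mu[d] = pts.mean(axis=0)
        if len(pts) > 1:                          # ridge keeps covariances invertible
            cov[d] = np.cov(pts, rowvar=False) + 1e-6 * np.eye(2)
    return n, mu, cov
```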
When the weighted spatial histogram of gradient directions is used for 2D/3D registration [35], only the WSHGDs of the reference image and the DRR image need to be constructed; the similarity between the two WSHGDs is then calculated by the weighted distance measure. The weighted distance measure is the objective function of the registration process, defined in Formula (11):
$$\rho(h_R, h_D) = \sum_{d=0}^{359} \psi_d \, \rho_n(n_{Rd}, n_{Dd})$$    (11)

$$\psi_d = \eta \, \exp\left\{ -\frac{1}{2} (\mu_{Rd} - \mu_{Dd})^T \hat{\Sigma}_d^{-1} (\mu_{Rd} - \mu_{Dd}) \right\}$$    (12)
where $h_R$ and $h_D$ denote the WSHGDs of the reference image $I_R$ and the DRR image $I_D$, respectively; $n_{Rd}$ and $n_{Dd}$ are the $d$-th bins of $h_R$ and $h_D$; $\mu_{Rd}$ and $\mu_{Dd}$ are the coordinate means corresponding to $n_{Rd}$ and $n_{Dd}$; $\Sigma_{Rd}$ and $\Sigma_{Dd}$ are the corresponding coordinate covariances; $\eta$ is the Gaussian normalization constant; and $\rho_n$ is a commonly used distance measure. The coordinate mean $\mu_d$ and coordinate covariance $\Sigma_d$ contained in the WSHGD are spatial information: when the image is translated, the coordinates of its pixels change accordingly, so the weighted sum reflects the change caused by translation transformation, while the histogram's sensitivity to rotation is also increased. In 2D/3D registration based on the WSHGD, because the WSHGD is sensitive to both translation and rotation transformations, the translation and rotation parameters can be optimized at the same time, without the assistance of other registration algorithms. This makes the registration process more straightforward and the algorithm more robust. Moreover, introducing pixel coordinate information overcomes the limitation of the WHGD algorithm that the image's foreground must sit against a larger background, because all of the transformation parameters are optimized together.
The 2D/3D registration optimization based on WSHGD features can be expressed as in Formula (13). Formula (14) represents the mapping from the reference image $I_R$ to its WSHGD features, while Formula (15) represents the mapping to the WSHGD features of the DRR image $I_D$, which is obtained from the 3D floating image $I_M$ through spatial transformation and DRR projection.
$$T_g = \arg\min_T \rho(h_R, h_D) = \arg\min_T \sum_{d=0}^{359} \psi_d \, \rho_n(n_{Rd}, n_{Dd})$$    (13)

$$I_R \mapsto h_R = \langle n_{Rd}, \mu_{Rd}, \Sigma_{Rd} \rangle$$    (14)

$$P(T(I_M)) = I_D \mapsto h_D = \langle n_{Dd}, \mu_{Dd}, \Sigma_{Dd} \rangle$$    (15)

3. Results

Firstly, the CT image undergoes a rigid-body transformation according to the initial spatial transformation parameters [8,16,36]. Then, the CT image is projected to generate a DRR image. Next, the WSHGDs of the DRR image and the reference image are extracted. Finally, the distance measure between the two WSHGDs is calculated. Taking the distance measure as the objective function, the Powell–Brent optimization algorithm optimizes the spatial transformation parameters until the iteration stop condition is reached. In our tests, the Manhattan distance and the J-divergence distance performed best as distance measures. The Manhattan distance, also known as the city block distance, is given in Formula (16), and the J-divergence distance in Formula (17):
$$CBD(h_R, h_D) = \sum_{\text{all } d} |n_{Rd} - n_{Dd}|$$    (16)

$$JCD(h_R, h_D) = \sum_{\text{all } d} |n_{Rd} - n_{Dd}| \, \ln\frac{a_d}{b_d}$$    (17)
where $a_d = \max\{n_{Rd}, n_{Dd}\}$ and $b_d = \min\{n_{Rd}, n_{Dd}\}$. In this experiment, the sum of the Manhattan distance and the J-divergence distance is used as the distance measure; the function $\rho_n(n_{Rd}, n_{Dd})$ is defined in Formula (18):
$$\rho_n(n_{Rd}, n_{Dd}) = CBD(h_R, h_D) + JCD(h_R, h_D) = \sum_{\text{all } d} |n_{Rd} - n_{Dd}| \left(1 + \ln\frac{a_d}{b_d}\right)$$    (18)
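Putting Formulas (11), (12), and (18) together, a sketch of the objective could look as follows. It reads Formula (18) per bin inside the weighted sum of Formula (11) and skips empty bins to avoid a zero denominator (both readings are our assumptions), and it reuses gaussian_weight from the earlier sketch:

```python
import numpy as np

def weighted_distance(nR, muR, covR, nD, muD, covD):
    """Objective of Formulas (11)-(12), with the per-bin city-block plus
    J-divergence distance of Formula (18); empty bins are skipped."""
    total = 0.0
    for d in range(360):
        a_d, b_d = max(nR[d], nD[d]), min(nR[d], nD[d])
        if b_d <= 0.0:                 # avoid ln(a_d / 0) on empty bins
            continue
        rho_n = abs(nR[d] - nD[d]) * (1.0 + np.log(a_d / b_d))
        psi_d = gaussian_weight(muR[d], covR[d], muD[d], covD[d])   # Formula (12)
        total += psi_d * rho_n
    return total
```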
Two statistical histogram models, the WSHGD and the WHGD (the latter serving as the control group), were used to perform 2D/3D registration between CT and DRR images of the skull model, to verify the effectiveness and accuracy of the WSHGD. The Powell algorithm was used as the optimization algorithm, and rigid-body transformation was used as the spatial transformation model. The one-dimensional search accuracy of the Powell optimization algorithm was set to 0.01, the overall iterative accuracy to 0.001, and the maximum number of iterations to 1000. The rigid-body transformation parameters were arranged in the order $(\alpha, \beta, \theta, t_x, t_y, t_z)$: the first three parameters are the rotations about the X-, Y-, and Z-axes, and the last three are the translations along the X-, Y-, and Z-axes.
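The optimization loop could then be wired up as below. The paper uses ITK's Powell–Brent optimizer; this sketch shows equivalent tolerances with SciPy's Powell method, and render_drr, ct_volume, and ref_feats are hypothetical placeholders for the ITK transform-and-projection step and the precomputed reference features:

```python
import numpy as np
from scipy.optimize import minimize

def render_drr(volume, params):
    """Placeholder for the rigid transform + DRR projection step
    (hypothetical; the real implementation uses ITK's projection rendering)."""
    raise NotImplementedError

def objective(params, volume, ref_feats):
    nR, muR, covR = ref_feats
    drr = render_drr(volume, params)
    nD, muD, covD = wshgd(drr)          # features of the moving DRR
    return weighted_distance(nR, muR, covR, nD, muD, covD)

# Powell settings mirroring the paper: 1D search accuracy 0.01, overall
# tolerance 0.001, at most 1000 iterations; x0 = (alpha, beta, theta, tx, ty, tz).
# result = minimize(objective, x0=np.zeros(6), args=(ct_volume, ref_feats),
#                   method="Powell",
#                   options={"xtol": 0.01, "ftol": 0.001, "maxiter": 1000})
```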
Firstly, three groups of experiments were conducted for qualitative analysis. The ground-truth pose of the reference image was set to $(20, 20, 20, 10, 10, 10)$ in the first group, $(10, 10, 10, 10, 10, 10)$ in the second group, and $(10, 10, 10, 20, 20, 20)$ in the third group. In all experiments, the initial value for optimization was $(0, 0, 0, 0, 0, 0)$; the DRR image obtained by CT projection at this initial value is shown in Figure 3. Owing to the increased resolution of the DRR image, the skull information is displayed more clearly.
The results of the first group of experiments are shown in Figure 4, the second in Figure 5, and the third in Figure 6. In each figure, the first column shows the reference image, the second column shows the DRR image generated by 3D CT projection after registration, and the third column shows the difference between the reference image and the registered DRR image. The first row corresponds to registration based on WSHGD features, and the second row to registration based on WHGD features.
The registration results of the WHGD and the WSHGD at different initial points were then analyzed quantitatively to verify the effectiveness of WSHGD features in 2D/3D registration. For the ground-truth point $(0, 0, 0, 0, 0, 0)$ of the rigid-body transformation, the three rotation parameters of the initial value were sampled within $\pm 60$ degrees in steps of 10 degrees, and the three translation parameters within $\pm 40$ mm in steps of 5 mm. This sampling space ensures that the registration requirements of the WHGD can still be met after the spatial transformation. For the qualitative analysis, this paper examines the difference images after registration: the smoother the difference image, the smaller the difference between the registration result and the reference image.
Furthermore, the mean absolute error (MAE), mean error (ME), and standard deviation of the error (SDE) were used as evaluation indices [35,37]. The experimental results are shown in Table 2. To more intuitively reflect the differences between the two methods in terms of mean error and standard deviation of the error, the registration results were visualized, as shown in Figure 7, where blue represents rotational error and red represents translational error.
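For reference, the evaluation indices can be computed as below, under the assumption that errors are collected over repeated registration trials and pooled per parameter group (rotation or translation):

```python
import numpy as np

def error_stats(estimated: np.ndarray, truth: np.ndarray):
    """MAE, ME, and SDE of registration errors; 'estimated' and 'truth' hold one
    row per trial and one column per pose parameter (rotation or translation)."""
    err = estimated - truth
    return np.abs(err).mean(), err.mean(), err.std(ddof=1)
```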

4. Discussion

Comparing the difference images of the three groups of experiments, the difference images of the WHGD were smoother in all three experiments. The difference images of the WSHGD were smooth in Experiments 2 and 3, but showed a significant deviation in Experiment 1. Therefore, the WHGD features are more stable in the registration process; the WSHGD features cannot guarantee consistent registration accuracy, but they can still achieve successful registration.
As shown in Table 2, the parameter-decoupling registration method based on WHGD features performs better than the proposed WSHGD in terms of MAE and ME, reflecting higher registration accuracy for the WHGD features. As shown in Figure 7, the standard deviation of the WSHGD method is larger than that of the WHGD method, indicating that WHGD-based registration is more stable; WSHGD registration occasionally shows a larger deviation, consistent with the qualitative analysis. However, in terms of MAE, the registration accuracy of the two methods does not differ greatly, and CT/X-ray registration based on WSHGD features is achievable within an acceptable accuracy range. When the sampling space of the translation parameters of the initial point is expanded, the DRR image's foreground can exceed the image's field of view. The WHGD method then struggles to maintain consistent registration accuracy and sometimes fails to register at all, whereas the WSHGD method can still achieve registration; its accuracy is reduced, but it remains within the acceptable range.
Based on the above analysis, introducing the mean and covariance of the pixel coordinates through a spatial histogram preserves sensitivity to rotation transformation while enabling synchronous optimization of the translation and rotation parameters; this sacrifices a certain amount of precision to expand the convergence range of the translation transformation. By contrast, the weighted histogram of gradient directions is suitable only when the image's foreground remains small within the field of view, which is one of its limitations.

5. Conclusions

This paper introduces the concepts of the weighted histogram of gradient directions and the second-order spatial histogram, and analyzes the advantages and limitations of the weighted histogram of gradient directions in 2D/3D registration. The weighted histogram of gradient directions is sensitive to rotation and scaling but insensitive to translation, which leads to a complex registration process and a small convergence range for the translation parameters, so the algorithm has certain limitations. To solve these problems, this paper introduces a spatial histogram into the registration process and proposes a 2D/3D registration algorithm based on a weighted spatial histogram of gradient directions.
The algorithm uses the weighted spatial histogram of gradient directions to extract the statistical characteristics of the image. The pixels' positional information is added while retaining the weighted histogram of gradient directions' sensitivity to rotation. By weighting the distance measure of the weighted histogram of gradient directions with the coordinate mean and covariance, the spatial information and the gradient information are effectively combined.
Our experimental results show that the weighted spatial histogram of gradient directions has high sensitivity to both translation and rotation. As a result, the convergence range of the algorithm is larger, and registration can still succeed when the tissue structure in the image's foreground exceeds the image's field of view.
There are still some aspects that can be improved and expanded. The experimental data used in this paper were two-dimensional simulated X-ray images and three-dimensional CT images. Their imaging principles are similar, but many types of medical images are used in clinical practice. Moreover, there is a significant difference between some types of images in terms of imaging principles [38]. Therefore, the 2D/3D registration method based on DRR is not always applicable.
Deep learning technology is developing rapidly [39], and has strong performance in feature extraction. However, there are few studies on 2D/3D registration based on deep learning. Therefore, combining deep learning technology with 2D/3D registration should be a research focus in the future.

Author Contributions

Conceptualization, W.Z., B.Y. and S.L.; methodology, B.Y. and L.Y.; software, Y.W.; validation, S.L.; formal analysis, Y.W. and L.Y.; investigation, B.Y.; resources, Y.B. and S.L.; data curation, Y.W.; writing—original draft preparation, Y.B. and L.Y.; writing—review and editing, Y.B., M.L., S.L. and L.Y.; visualization, Y.W. and L.Y.; supervision, B.Y.; project administration, W.Z.; funding acquisition, W.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Supported by the Sichuan Science and Technology Program (Grant: 2021YFQ0003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: [https://data.kitware.com/#collection/57b5c9e58d777f126827f5a1] accessed on 23 February 2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Cleary, K.; Peters, T.M. Image-guided interventions: Technology review and clinical applications. Annu. Rev. Biomed. Eng. 2010, 12, 119–142. [Google Scholar] [CrossRef] [PubMed]
  2. Steininger, P.; Neuner, M.; Fritscher, K.; Sedlmayer, F.; Deutschmann, H. A novel class of machine-learning-driven real-time 2D/3D tracking methods: Texture model registration (TMR). In Proceedings of the Medical Imaging 2011: Visualization, Image-Guided Procedures, and Modeling: International Society for Optics and Photonics, Lake Buena Vista, FL, USA, 12–17 February 2011; p. 79640G. [Google Scholar]
  3. Sahandifar, P.; Kleiven, S. Influence of nonlinear soft tissue modeling on the external and internal forces during lateral hip impacts. J. Mech. Behav. Biomed. Mater. 2021, 124, 104743. [Google Scholar] [CrossRef] [PubMed]
  4. Tang, Y.; Liu, S.; Deng, Y.; Zhang, Y.; Yin, L.; Zheng, W. Construction of force haptic reappearance system based on Geomagic Touch haptic device. Comput. Methods Programs Biomed. 2020, 190, 105344. [Google Scholar] [CrossRef] [PubMed]
  5. Yang, B.; Liu, C.; Zheng, W.; Liu, S. Motion prediction via online instantaneous frequency estimation for vision-based beating heart tracking. Inf. Fusion 2017, 35, 58–67. [Google Scholar] [CrossRef]
  6. Yang, B.; Liu, C.; Huang, K.; Zheng, W. A triangular radial cubic spline deformation model for efficient 3D beating heart tracking. Signal Image Video Process. 2017, 11, 1329–1336. [Google Scholar] [CrossRef]
  7. Jin, J.Y.; Ryu, S.; Faber, K.; Mikkelsen, T.; Chen, Q.; Li, S.; Movsas, B. 2D/3D image fusion for accurate target localization and evaluation of a mask based stereotactic system in fractionated stereotactic radiotherapy of cranial lesions. Med. Phys. 2006, 33, 4557–4566. [Google Scholar] [CrossRef]
  8. Yang, B.; Liu, C.; Zheng, W.; Liu, S.; Huang, K. Reconstructing a 3D heart surface with stereo-endoscope by learning eigen-shapes. Biomed. Opt. Express 2018, 9, 6222–6236. [Google Scholar] [CrossRef]
  9. Xu, C.; Yang, B.; Guo, F.; Zheng, W.; Poignet, P. Sparse-view CBCT reconstruction via weighted Schatten p-norm minimization. Opt. Express 2020, 28, 35469–35482. [Google Scholar] [CrossRef]
  10. Zhou, Y.; Zheng, W.; Shen, Z. A New Algorithm for Distributed Control Problem with Shortest-Distance Constraints. Math. Probl. Eng. 2016, 2016, 1604824. [Google Scholar] [CrossRef]
  11. Alam, F.; Rahman, S.U.; Ullah, S.; Gulati, K. Medical image registration in image guided surgery: Issues, challenges and research opportunities. Biocybern. Biomed. Eng. 2018, 38, 71–89. [Google Scholar] [CrossRef]
  12. Chi, C.; Du, Y.; Ye, J.; Kou, D.; Qiu, J.; Wang, J.; Tian, J.; Chen, X. Intraoperative imaging-guided cancer surgery: From current fluorescence molecular imaging methods to future multi-modality imaging technology. Theranostics 2014, 4, 1072. [Google Scholar] [CrossRef] [PubMed]
  13. Alam, F.; Rahman, S.U.; Khusro, S.; Ullah, S.; Khalil, A. Evaluation of medical image registration techniques based on nature and domain of the transformation. J. Med. Imaging Radiat. Sci. 2016, 47, 178–193. [Google Scholar] [CrossRef] [PubMed]
  14. Fu, D.; Kuduvalli, G. A fast, accurate, and automatic 2D–3D image registration for image-guided cranial radiosurgery. Med. Phys. 2008, 35, 2180–2194. [Google Scholar] [CrossRef] [PubMed]
  15. Alam, F.; Rahman, S.U.; Khalil, A. An investigation towards issues and challenges in medical image registration. J. Postgrad. Med. Inst. 2017, 31, 224–233. [Google Scholar]
  16. Otake, Y.; Armand, M.; Armiger, R.S.; Kutzer, M.D.; Basafa, E.; Kazanzides, P.; Taylor, R.H. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: Incorporation of fiducial-based C-arm tracking and GPU-acceleration. IEEE Trans. Med. Imaging 2011, 31, 948–962. [Google Scholar] [CrossRef]
  17. Birkfellner, W.; Stock, M.; Figl, M.; Gendrin, C.; Hummel, J.; Dong, S.; Kettenbach, J.; Georg, D.; Bergmann, H. Stochastic rank correlation: A robust merit function for 2D/3D registration of image data obtained at different energies. Med. Phys. 2009, 36, 3420–3428. [Google Scholar] [CrossRef]
  18. Markelj, P.; Tomaževič, D.; Pernuš, F.; Likar, B. Optimizing bone extraction in MR images for 3D/2D gradient based registration of MR and X-ray images. In Proceedings of the Medical Imaging 2007: Image Processing: International Society for Optics and Photonics, San Diego, CA, USA, 17–22 February 2007; p. 651224. [Google Scholar]
  19. Miao, S.; Wang, Z.J.; Liao, R. A CNN regression approach for real-time 2D/3D registration. IEEE Trans. Med. Imaging 2016, 35, 1352–1363. [Google Scholar] [CrossRef]
  20. Zheng, J.; Miao, S.; Wang, Z.J.; Liao, R. Pairwise domain adaptation module for CNN-based 2-D/3-D registration. J. Med. Imaging 2018, 5, 021204. [Google Scholar] [CrossRef]
  21. Tang, T.S.; Ellis, R.E.; Fichtinger, G. Fiducial registration from a single X-ray image: A new technique for fluoroscopic guidance and radiotherapy. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Pittsburgh, PA, USA, 11–14 October 2000; Springer: Berlin/Heidelberg, Germany, 2000; pp. 502–511. [Google Scholar]
  22. Zheng, J.; Miao, S.; Liao, R. Learning CNNS with pairwise domain adaption for real-time 6dof ultrasound transducer detection and tracking from x-ray images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada, 10–14 September 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 646–654. [Google Scholar]
  23. Zheng, G.; Zhang, X.; Jonić, S.; Thévenaz, P.; Unser, M.; Nolte, L.-P. Point similarity measures based on MRF modeling of difference images for spline-based 2D-3D rigid registration of X-ray fluoroscopy to CT images. In Proceedings of the International Workshop on Biomedical Image Registration, Utrecht, The Netherlands, 9–11 July 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 186–194. [Google Scholar]
  24. Pan, H.; Zhou, C.; Zhu, Q.; Zheng, D. A fast registration from 3D CT images to 2D X-ray images. In Proceedings of the 2018 IEEE 3rd International Conference on Big Data Analysis (ICBDA): IEEE, Shanghai, China, 9–12 March 2018; pp. 351–355. [Google Scholar]
  25. Russakoff, D.B.; Rohlfing, T.; Mori, K.; Rueckert, D.; Ho, A.; Adler, J.R.; Maurer, C. Fast generation of digitally reconstructed radiographs using attenuation fields with application to 2D-3D image registration. IEEE Trans. Med. Imaging 2005, 24, 1441–1454. [Google Scholar] [CrossRef]
  26. Tomazevic, D.; Likar, B.; Slivnik, T.; Pernus, F. 3-D/2-D registration of CT and MR to X-ray images. IEEE Trans. Med. Imaging 2003, 22, 1407–1416. [Google Scholar] [CrossRef]
  27. Shao, Z.; Han, J.; Liang, W.; Tan, J.; Guan, Y. Robust and fast initialization for intensity-based 2D/3D registration. Adv. Mech. Eng. 2014, 6, 989254. [Google Scholar] [CrossRef]
  28. Aouadi, S.; Sarry, L. Accurate and precise 2D–3D registration based on X-ray intensity. Comput. Vis. Image Underst. 2008, 110, 134–151. [Google Scholar] [CrossRef]
  29. Tomazevic, D.; Likar, B.; Pernus, F. 3-D/2-D registration by integrating 2-D information in 3-D. IEEE Trans. Med. Imaging 2005, 25, 17–27. [Google Scholar] [CrossRef]
  30. Ghafurian, S.; Hacihaliloglu, I.; Metaxas, D.N.; Tan, V.; Li, K. A computationally efficient 3D/2D registration method based on image gradient direction probability density function. Neurocomputing 2017, 229, 100–108. [Google Scholar] [CrossRef]
  31. Toth, D.; Miao, S.; Kurzendorfer, T.; Rinaldi, C.A.; Liao, R.; Mansi, T.; Rhode, K.; Mountney, P. 3D/2D model-to-image registration by imitation learning for cardiac procedures. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 1141–1149. [Google Scholar] [CrossRef] [PubMed]
  32. Johnson, H.; McCormick, M.; Ibanez, L. The ITK Software Guide Book 2: Design and Functionality-Volume 2; Kitware Inc.: Clifton Park, NY, USA, 2015. [Google Scholar]
  33. Li, H.; Wang, J.; Han, C. Image Mosaic and Hybrid Fusion Algorithm Based on Pyramid Decomposition. In Proceedings of the 2020 International Conference on Virtual Reality and Visualization (ICVRV), Galinhas, Brazil, 13–14 November 2020. [Google Scholar]
  34. Birchfield, S.T.; Rangarajan, S. Spatiograms versus histograms for region-based tracking. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05): IEEE, San Diego, CA, USA, 20–25 June 2005; pp. 1158–1163. [Google Scholar]
  35. Zhang, Z.; Liu, Y.; Tian, J.; Liu, S.; Yang, B.; Xiang, L.; Yin, L.; Zheng, W. Study on Reconstruction and Feature Tracking of Silicone Heart 3D Surface. Sensors 2021, 21, 7570. [Google Scholar] [CrossRef] [PubMed]
  36. Jiao, P.; Qin, A.; Zhao, W.; Ouyang, J.; Zhang, M.; Fan, J.; Zhong, S.; Li, J. 2D/3D registration system based on single X-ray image and CT data. J. Med. Biomech. 2010, 6, E460–E464. [Google Scholar]
  37. Li, Y.; Zheng, W.; Liu, X.; Mou, Y.; Yin, L.; Yang, B. Research and improvement of feature detection algorithm based on FAST. Rend. Lincei Sci. Fis. Nat. 2021, 32, 775–789. [Google Scholar] [CrossRef]
  38. Deng, Y.; Tang, Y.; Yang, B.; Zheng, W.; Liu, S.; Liu, C. A Review of Bilateral Teleoperation Control Strategies with Soft Environment. In Proceedings of the 2021 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM): IEEE, Chongqing, China, 3–5 July 2021; pp. 459–464. [Google Scholar]
  39. Wang, Y.; Tian, J.; Liu, Y.; Yang, B.; Liu, S.; Yin, L.; Zheng, W. Adaptive Neural Network Control of Time Delay Teleoperation System Based on Model Approximation. Sensors 2021, 21, 7443. [Google Scholar] [CrossRef]
Figure 1. Three-dimensional CT image of the brain model.
Figure 2. CT rendering of the brain model.
Figure 3. DRR image at the initial value.
Figure 4. Results of Experiment 1: (a) reference image 1; (b) DRR image registered based on the WSHGD; (c) difference image after registration based on the WSHGD; (d) reference image 1; (e) DRR image registered based on the WHGD; (f) difference image after registration based on the WHGD.
Figure 5. Results of Experiment 2: (a) reference image 2; (b) DRR image registered based on the WSHGD; (c) difference image after registration based on the WSHGD; (d) reference image 2; (e) DRR image registered based on the WHGD; (f) difference image after registration based on the WHGD.
Figure 6. Results of Experiment 3: (a) reference image 3; (b) DRR image registered based on the WSHGD; (c) difference image after registration based on the WSHGD; (d) reference image 3; (e) DRR image registered based on the WHGD; (f) difference image after registration based on the WHGD.
Figure 7. Registration error and mean values of the weighted histogram of gradient directions (WHGD) and the weighted spatial histogram of gradient directions (WSHGD).
Table 1. Related parameters of image registration.

Data | Size | Spacing/mm | Pixel Range
CT image | 200 × 200 × 142 | 2 × 2 × 2 | −1,024~2,976
Analog X-ray image (DRR) | 300 × 300 | 1 × 1 | 0~255
Table 2. Registration results of the WSHGD and WHGD.

Method | Index | Rotation/° | Translation/mm
WHGD | MAE | 0.483 | 0.503
WHGD | SDE | 0.670 | 0.237
WHGD | ME | −0.093 | −0.417
WSHGD | MAE | 0.563 | 0.877
WSHGD | SDE | 1.593 | 1.027
WSHGD | ME | −0.193 | −0.632