Article

3D Multi-Modality Medical Imaging: Combining Anatomical and Infrared Thermal Images for 3D Reconstruction

by Mauren Abreu de Souza *, Daoana Carolaine Alka Cordeiro, Jonathan de Oliveira, Mateus Ferro Antunes de Oliveira and Beatriz Leandro Bonafini

Programa de Pós-Graduação em Tecnologia em Saúde (PPGTS), Pontifícia Universidade Católica do Paraná, Curitiba 80215-901, Brazil

* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1610; https://doi.org/10.3390/s23031610
Submission received: 6 December 2022 / Revised: 17 January 2023 / Accepted: 25 January 2023 / Published: 1 February 2023
(This article belongs to the Special Issue 3D Sensing and Imaging for Biomedical Investigations)

Abstract
Medical thermography provides an overview of the human body with two-dimensional (2D) information that assists in the identification of temperature changes, based on the analysis of surface distribution. However, this approach lacks spatial depth information, which can be recovered by adding multiple images or three-dimensional (3D) systems. Therefore, the methodology applied in this paper generates a 3D point cloud (from thermal infrared images), a 3D geometry model (from CT images), and the segmented inner anatomical structures. The following computational processing was employed: Structure from Motion (SfM), image registration, and alignment (affine transformation) between the 3D models obtained, in order to combine and unify them. This paper presents the 3D reconstruction and visualization of the respective geometry of the neck/bust and the inner anatomical structures (thyroid, trachea, veins, and arteries). Additionally, it shows the whole 3D thermal geometry in different anatomical sections (i.e., coronal, sagittal, and axial), allowing it to be further examined by a medical team and improving pathological assessments. The generation of 3D thermal anatomy models allows for a combined visualization, i.e., functional and anatomical images of the neck region, achieving encouraging results. These 3D models correlate the inner and outer regions, which could improve biomedical applications and future diagnoses based on such a methodology.

1. Introduction

Infrared thermography (IRT) has a wide range of applications, from engineering to biomedicine [1,2,3,4]. In the medical field, assessing the amount of infrared radiation emitted by the human body can reveal normal or abnormal temperature distribution patterns. Medical thermography has the advantage of being contactless, enabling the visualization of temperature variations using thermal infrared cameras. In this scenario, it is possible to identify regions with high vascular activity, infection, tumor tissue, and muscle tension. For example, infrared analysis identifies thermal variations, such as heat, redness, burning, local swelling, and high or low blood circulation in the examined body regions [4].
Computer systems and algorithms for medical diagnosis are becoming common, since they support further analysis, enable more accurate diagnoses, and are less susceptible to errors. Doi [5] defines diagnostic support systems, such as CAD (Computer-Aided Diagnosis), as a set of methodologies that predict the findings of lesions and abnormalities in images, describing patterns capable of supporting better diagnoses.
Most medical thermography research relies on two-dimensional (2D) acquisitions, since only the surface temperature distribution is obtained from the thermal images and no spatial depth information is captured by a single 2D infrared imaging modality. On the other hand, depth information can be recovered through photogrammetry, using multiple images obtained at different positions/angles, which enables the calculation of a 3D point cloud. This is based on stereo vision: changes in camera positioning produce parallax in space, which leads to 3D spatial information [6]. Computer vision techniques enable such procedures, covering image acquisition, processing, and analysis [7]. In addition to visible cameras, there are also three-dimensional (3D) scanning or photogrammetric systems (either commercial or customized), which can generate 3D models of objects and body parts, adding crucial geometric details for image analysis and recognition, especially in biomedical applications.
The literature contains several references on the generation of 3D thermal imaging models, mainly focused on 3D thermography (i.e., presenting only the outer 3D thermal shell of interest, without inner information) for different applications, which are worth mentioning. For example, Cabrelles et al. [8] showed the reconstruction of an archaeological monument in Petra (Jordan), generating 3D models (using both visible and thermal cameras). Yang et al. [9] presented customized equipment containing two smartphones and a low-cost thermal camera to acquire visual and thermal images. They also used the Structure from Motion (SfM) methodology, and the results presented were focused on the 3D thermal model reconstruction of buildings, facades, and concrete samples. In another application, in the textile industry, Domina et al. [10] employed both 3D scanning systems and thermal images to map 3D body scans of subjects together with their thermal variability, focusing on finding women's thermal patterns. More recently, Campione et al. [11] presented a 3D thermal imaging system for cultural heritage applications, merging data from a generic thermal camera and a laser scanner.
Focusing now on biomedical applications, several approaches have used 3D thermography for diagnosing the diabetic foot, such as Aksenov et al. [12], Ju et al. [13], Barone et al. [14], and Parker et al. [15], among others. For example, Barone et al. [14] presented the integration of a customized 3D scanner (for geometrical data acquisition) together with thermal image collection, focusing mainly on evaluating ulceration in diabetic foot disease. Parker et al. [15] also presented a thermal stereoscope system, employing three digital SLR cameras, an infrared camera, and a projected structured light pattern; the application was foot surface reconstruction as well.
Since there are several possible configurations for obtaining 3D thermograms, such as how many and which cameras (visible and thermal) are used, or whether 3D scanning systems (custom-made or commercial) are included, Chernov et al. [16] presented an overview of some of these systems. Additionally, they introduced their own system, which comprises two units consisting of an off-the-shelf depth camera rigidly mounted onto a FLIR thermal camera for biomedical applications. Cao et al. [17] introduced a robust, mobile, and real-time 3D thermographic reconstruction method through depth and thermal sensor fusion. In this configuration, their device consisted of a thermal camera and an RGB-D sensor, which enables the generation of 3D thermal models. They applied a customized method, called thermal-guided iterative closest point (T-ICP), which was compared to other established ICP algorithms. From our research group, Krefer et al. [18] presented a Structure from Motion (SfM) methodology in which a decoupled system was employed, since it used 3D geometry (from a commercial 3D scanner as input) and infrared images acquired at different moments. To prove the concept, the method was applied to a test object and to some volunteers as well.
More recently, Bader [19] also presented image processing and 3D reconstruction using the SfM method. His system consisted of two cameras in the visible spectral range and two thermal cameras. Van Doremalen et al. [20] presented a system composed of a high-resolution medical 3D imaging system—a 3D scanner (Vectra XT)—aligned with three smartphone-based thermal infrared cameras. The purpose was the generation of 3D thermographs for inflammation detection in diabetic foot disease in a hospital environment. Mancilla et al. [21] employed a solution based on the SfM and Multi-View Stereo approach to generate 3D thermal models, also focused on diagnosing the diabetic foot. Schramm et al. [22] presented 3D thermograms and some advantages compared to other existing systems. Schmoll et al. [23] also presented 3D thermography, describing the fusion of geometry and temperature sensors, but for heat dissipation applications.
Regarding thermal simulations, we provide only a brief overview. For instance, Ledwon et al. [24] proposed a novel Convolutional Neural Network (CNN) concept for thermal tomography, which reconstructed the heat distribution from planar thermal images and used a synthetic design to validate the model on a setup with a single heated object. Another recent study, by Paz et al. [25], also explored thermal reconstruction on a 3D model based on the segmentation of magnetic resonance images (MRI), to prove the concept of inner temperature propagation for a specific patient case; it was applied to detect changes in thyroid metabolism through finite element analysis using ANSYS software. However, such thermal simulation and thermal tomography approaches are not the focus of the research proposed here.
Additionally, the literature analysis shows a variety of customized systems, ranging from decoupled to combined systems. Integrated systems, which have infrared and visible cameras, provide a much more straightforward data combination, as both sensing devices are triggered together. On the other hand, the decoupled approach allows the reuse of previously acquired data obtained at different moments or by additional modalities, for example, anatomical medical images (such as MRI, CT, etc.). However, this is only possible if the data acquisition maintains similar conditions (primarily the spatial positioning and configuration of the body region to be examined), which is also a limitation for biomedical applications.
The aforementioned references are not exhaustive, since the focus here is not a deep review but a demonstration of awareness of this topic. Based on the applications from this brief literature review, it is concluded that there is still space for further development. Additionally, none of these previous papers presented the inclusion of anatomical images. Therefore, the approach presented in this paper targets medical applications, including the usual anatomical images (which are clinically approved and employed in routine use by the medical team).
On the topic of the 3D reconstruction of medical/anatomical images in DICOM format, there are commercial software packages that perform such processing. The standard medical DICOM files acquired through computerized tomography (CT) or magnetic resonance imaging (MRI) are reconstructed based on segmentation tools applied to the body structures under analysis. Examples include the MIMICS® and 3D Slicer® software [26,27]. Such an approach enables the 3D object model to be realistic and compatible with the real organ, resulting in accurate volume and geometry. For example, Shin et al. [27] presented the importance of the 3D segmentation and reconstruction process, employing MIMICS®, and showed how effective the segmentation and 3D reconstruction were. Additionally, Kaur and Jindal [28] used the same methodology for processing medical images.
Still regarding the 3D reconstruction of objects/bodies by processing a set of 2D anatomical images, especially when merging and combining different medical imaging modalities (such as CT, MRI, and IRT), some studies are worth mentioning. For instance, Souza et al. [29,30] first presented the original 3D THERMO-SCAN methodology, using the SfM technique with a sequence of infrared images. Additionally, a case study was presented for dentistry applications [30]. In these references, three imaging modalities were included: 2D infrared images, a 3D scanned model, and the DICOM imaging slices. As a result, a 3D thermal outer shell model combined with internal anatomy slices was obtained. Additionally, Schollemann et al. [31] presented the reconstruction and visualization of 3D mouse models showing internal anatomical information (based on DICOM CT images) together with the 3D thermal outer shell, also forming a multimodal imaging approach. These previous studies employed customized visualization software, implemented primarily for such purposes.
Within this established background, it is worth mentioning that these previous references [18,29,30,31] differ in some characteristics from the research presented in this current paper. For example, they did not include the visualization of the inner structures presented here within well-segmented and delimited anatomy. Additionally, the 3D visualization tool originally employed in [18,29,30,31] was changed: in this research, we employed the MeVisLab® software (version 3.6.1), which provides many more options and support for 3D visualization, as well as being open-source.
Therefore, there is space for further developments in this research field, allowing for the processing and generation of 3D thermal models that incorporate infrared images attached to the 3D geometry (inner and outer shell), focused on a medical case study (neck/thyroid region). The proposed methodology performs correlations based on the superficial thermal changes, indicating the high activity of an organ/body region and the corresponding connection with geometric structures (such as the segmented inner and outer anatomy), which are all unified and visualized using the MeVisLab® software.
Thus, the purpose of this research is to provide a unified 3D thermal anatomical model, containing data from different medical imaging sources and their computational processing. The image processing and registration allow the medical team to perform further correlations between the superficial temperature changes obtained through infrared images and the corresponding geometry of interest (anatomical structures). The contribution of this paper is the generation of a unique 3D thermal model—called the 3D THERMO-SCAN methodology [29]. This approach is based on the image registration and alignment between different imaging modalities (functional infrared and anatomical CT images), combined with the segmented anatomical structures of interest incorporated into the model.
This paper is organized as follows: Section 2 describes the background information, presenting Structure from Motion (SfM) and the affine coordinate transformation. Section 3 presents the methodology—covering the imaging data acquisition (both infrared and anatomical images), the anatomy thresholding, the 3D image reconstruction, the 3D thermal geometry, the 3D registration, and the final 3D THERMO-SCAN model. Section 4 presents the results and their analyses, including the 3D visualization of the models obtained, the segmentation results, the generation of the 3D thermal model, and the complete visualization of the final 3D THERMO-SCAN model. Lastly, Section 5 and Section 6 present the discussion and conclusions, respectively.

2. Background Information

A brief description of the concepts employed in this research is appropriate. We present the process behind the Structure from Motion (SfM) approach and image registration, mainly focusing on the affine transformation (applied between 3D models obtained from different imaging systems).

2.1. Structure from Motion (SfM)

Structure from Motion (SfM) is the process of 3D reconstruction from a series of images taken from different viewpoints, which has been employed over the years in computer vision, 3D scanning, augmented reality, among others [32,33]. This method commonly starts with feature extraction, matching, and geometric verification, which serves as the foundation for a 3D reconstruction stage that incrementally registers new images, performs the triangulation of the same scene points, filters outliers, and refines its reconstruction via bundle adjustment (BA) [33].
Regarding the SfM processing, this research employed the open-source VisualSFM GUI application, initially developed by Wu [34,35], which consists of a group of techniques to generate a 3D point cloud from a compilation of 2D images collected around the whole imaged object. The steps used are described below.
  • Compute Missing Matches (CMM)—establishes the correspondences that make the images converge from separate two-dimensional planes into a common three-dimensional space corresponding to the 3D object. Through this CMM process, the VisualSFM® software interprets the set of 2D images and relates them to a 3D object. Identifying the common features within all the images provides enough information to distinguish and detect the same mutual points.
  • Sparse Reconstruction—based on the estimation of point (coordinate) positioning, which is automated and performed by applying the SIFT (Scale Invariant Feature Transform) method [36]. Next, the features are combined to create a pool of coordinates in space, characterizing a point cloud. However, after generating the cloud, VisualSFM® can erroneously estimate the positions of the cameras in space, causing failures in the resulting cloud, so the third step (BA) is necessary.
  • Bundle Adjustment (BA)—corresponds to the camera positioning optimization in photogrammetry [32]. The algorithm is responsible for refining the 3D points into a geometric scene (i.e., into a 3D coordinate system). Additionally, the data obtained from the stereo-image pairs are approximated to the absolute coordinate values, optimizing the solution and decreasing errors in the point cloud generation.
  • Dense Reconstruction—generates a completely dense and unified point cloud. Additionally, to refine the point cloud, there is another function, called Find More Points, which enables the inclusion of additional points (coordinates) to provide a much denser point cloud.
To evaluate the correlation between pairs of images—which justifies the generation of a 3D model from 2D images—VisualSFM® employs the SIFT (Scale Invariant Feature Transform) algorithm. This algorithm matches common points between images and validates them using the metric of smallest Euclidean distance between descriptors. In addition, matches that are not consistent are filtered out through RANSAC (Random Sample Consensus) [18].
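As an illustration of this matching stage, the following minimal sketch (Python with OpenCV) pairs SIFT features between two consecutive intensified thermal frames and filters inconsistent matches with RANSAC. It is a stand-in for what VisualSFM® performs internally, not the authors' implementation; the file names and thresholds are assumptions.

```python
import cv2
import numpy as np

# Load two consecutive (intensified) thermal frames as grayscale images.
# The file names are hypothetical placeholders.
img1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors by smallest Euclidean (L2) distance, keeping the two
# nearest neighbours so Lowe's ratio test can reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]

# Filter the surviving matches with RANSAC on the epipolar constraint.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(f"{len(inliers)} consistent matches out of {len(good)}")
```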

2.2. Affine Transformation

Image registration is fundamental, since it allows the mapping and transformation of two different images into a common coordinate system, through which it is possible to align and visualize both 3D models together [37,38].
The transformation process is required when images are acquired through different sensors, resolutions, or positions. Each registration method is formed by a set of equations, which transform one image’s coordinates into the other image’s coordinates [39].
There are different coordinate transformations, such as identity, rigid, affine, and non-rigid, the main difference between them being the nature of the transformation. Rigid registration aims to find the six degrees of freedom of the transformation shown in Equation (1), which maps any point in the source image to the corresponding point in the destination image [40]. Equation (2) shows the affine transformation model, an extension with twelve degrees of freedom that also allows shear and scaling.
$$
T(x,y,z)=
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
=
\begin{pmatrix}
\cos\beta\cos\gamma & \cos\alpha\sin\gamma+\sin\alpha\sin\beta\cos\gamma & \sin\alpha\sin\gamma-\cos\alpha\sin\beta\cos\gamma & t_x \\
-\cos\beta\sin\gamma & \cos\alpha\cos\gamma-\sin\alpha\sin\beta\sin\gamma & \sin\alpha\cos\gamma+\cos\alpha\sin\beta\sin\gamma & t_y \\
\sin\beta & -\sin\alpha\cos\beta & \cos\alpha\cos\beta & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
\tag{1}
$$

$$
T(x,y,z)=
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
=
\begin{pmatrix}
a_{00} & a_{01} & a_{02} & a_{03} \\
a_{10} & a_{11} & a_{12} & a_{13} \\
a_{20} & a_{21} & a_{22} & a_{23} \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
\tag{2}
$$
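To make Equation (2) concrete, the short sketch below (Python/NumPy) builds a 4×4 homogeneous transform—composed here, purely for illustration, of a uniform scale, a rotation about the Z axis, and a translation—and applies it to a 3D point cloud. All numeric values are placeholders.

```python
import numpy as np

# Illustrative affine transform: uniform scale s, rotation about Z by theta,
# and translation t, arranged as the 4x4 homogeneous matrix of Equation (2).
s, theta = 1.0, np.deg2rad(10.0)
t = np.array([5.0, -2.0, 1.0])
T = np.eye(4)
T[:3, :3] = s * np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
T[:3, 3] = t

def apply_affine(T, points):
    """Map an (N, 3) point cloud into the target coordinate system."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

cloud = np.random.rand(100, 3)   # stand-in for a thermal point cloud
aligned = apply_affine(T, cloud)
```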

3. Material and Methods

The methodology applied in this research is summarized in the schematic diagram in Figure 1, contributing to an overall understanding of the proposed research. The central processing steps and their corresponding tools (software) are: (1) From the DICOM anatomy images, a 3D model of the anatomical inner and outer structures is processed with the MIMICS software. (2) From the 2D infrared images, a 3D point cloud is obtained using the MATLAB® and VisualSFM® software. (3) Then, the alignment between the 3D point cloud and the 3D model is performed, generating a 3D thermal model/shell (based on the projection of the corresponding thermal texture onto the 3D CT geometry) using the MeshLab software. (4) Finally, all the 3D structures inside the 3D thermal geometry, the inner and outer models, and the DICOM images are visualized together and simultaneously with the MeVisLab® software.

3.1. Data Imaging Acquisition

For the imaging acquisition, data are obtained from two different modalities: infrared (functional) images and computerized tomography (CT) anatomical images, which are detailed below.

3.1.1. Infrared Images

For the infrared imaging acquisition, an acclimatization period of about 15 min is necessary, and the body region to be evaluated must be undressed. Thermal images were acquired with a FLIR thermal camera (model A325). It is worth mentioning that, unlike most modern cameras, this camera provides only infrared images and not visible images. The camera was positioned on a tripod, and the volunteers were placed on a swivel chair, where a complete video was collected going from the left to the right side (covering approximately 180°).
The acquisition of several thermal images of the neck region (obtained in video format, SEQ) made it possible to evaluate and analyze these images for clinical assessment. The manipulation of these images allows the proper conversion to other formats that are more appropriate for the subsequent computational processing.
The infrared data contain temperature information; however, the displayed range is only useful for representation purposes and clinical evaluation (initially defined in FLIR Tools and implemented in MATLAB).
After collecting the images, the FLIR Tools software is used for initial manipulation of the data file. The obtained video is saved in the "SEQ" file format and converted to a "CSV" file (using FLIR Tools). With this configuration, the temperature of each pixel of the thermal image is saved in a text file (CSV). Then, using MATLAB, the "CSV" file is converted to the "MAT" format to facilitate further processing in MATLAB as well (Section 3.3.1).
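A minimal sketch of this conversion step is shown below in Python (an alternative route to the MATLAB one described above). The file names, and the assumption that each CSV holds a single frame as a plain semicolon-separated grid of temperatures, are placeholders for the actual FLIR Tools export settings.

```python
import numpy as np
from scipy.io import savemat

# Read one exported frame: a pixel grid of temperatures in degrees Celsius.
# Adjust the delimiter (and skip any header rows) to match the FLIR export.
frame = np.genfromtxt("frame_001.csv", delimiter=";")

# Save as a MAT file so the temperature matrix can be loaded directly
# into MATLAB for the pre-processing of Section 3.3.1.
savemat("frame_001.mat", {"temperature": frame})
```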

3.1.2. Anatomy Images

Complementing the clinical protocol, patients underwent the acquisition of anatomical images (either magnetic resonance imaging (MRI) or computerized tomography (CT)). To illustrate, Figure 2 shows some of the individual anatomical imaging planes (axial, coronal, and sagittal) obtained through the CT system. The CT equipment used in this research was a GE Optima CT660 with 64 channels, which allows 40 mm acquisitions, providing isotropic images with 0.35 mm spatial resolution. Additionally, for this CT acquisition, an iodine contrast agent is injected into the patient, so the thyroid gland becomes more evident in the images, since the grayscale is better differentiated among the different anatomical structures.
The purpose of the DICOM anatomical images is to provide inner information about the anatomy structures, which will be segmented (Section 3.2.1), to complement the generation of the 3D geometry model, as seen in Figure 3.

3.2. Anatomy Thresholding and 3D Imaging Reconstruction

3.2.1. Segmentation of the Neck Region and Inner Structures

After the anatomical image data collection from CT (Section 3.1.2), segmentation of the neck region (external geometry) and the inner structures was performed using the MIMICS Innovation Suite® 17.0 software. The focus of this study is the assessment of the thyroid gland; for this reason, the following structures are segmented: thyroid, trachea, veins, arteries, and bust/neck.
The MIMICS software allows the visualization of the DICOM file and the corresponding 3D reconstruction of these images [41]. The DICOM image set is a file standard that unifies and organizes the format of images obtained in such anatomical exams [42]. These images are imported into the software through the New Project option, which allows the visualization of each DICOM slice in the sagittal, coronal, and axial planes, where it is possible to navigate between them, as shown in Figure 4.
Through the axial imaging slices, a mask is created with the Threshold tool, demarcating the edges of each structure in all the apparent layers. After the delimitations, the software automatically fills in the gaps between the DICOM slices, thus forming a solid 3D object. Fault correction was performed using the Multiple Slice Edit tool, where removing or adding points of interest is possible. Figure 5 illustrates all the structures of interest with the mask and the complete 3D reconstruction.
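For readers without access to MIMICS, the sketch below reproduces the essence of this thresholding step in Python with pydicom and NumPy. The directory name and the Hounsfield-unit window are placeholders, since the actual limits were tuned interactively in the software.

```python
import glob
import numpy as np
import pydicom

# Read the axial CT series and convert raw values to Hounsfield units (HU).
files = sorted(glob.glob("ct_series/*.dcm"))
slices = [pydicom.dcmread(f) for f in files]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array * float(s.RescaleSlope)
                   + float(s.RescaleIntercept) for s in slices])

# Binary threshold mask for a structure of interest. The HU window below
# is only a placeholder for the interactively chosen MIMICS threshold.
lo, hi = 70, 200
mask = (volume >= lo) & (volume <= hi)
print(f"segmented voxels: {mask.sum()}")
```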
After finishing the segmentation, the 3D model is exported to the Blender software (version 3.3), where the surface of the organs is smoothed, allowing their appearance to be as close as possible to the actual structures.
It is noteworthy that the tools used in the Blender software are in the Sculpting mode, where the Smooth option performs the smoothing of the objects, and the Fill and Clay options fill in gaps present in the 3D object/model. All anatomy structures (thyroid, trachea, veins, arteries, and bust) undergo the same process.

3.3. 3D Thermal Geometry

3.3.1. Infrared Imaging Pre-Processing

Afterward, the infrared images (frames) extracted from the thermal video undergo a pre-processing step (using FLIR Tools 6.4 and MATLAB). To illustrate, Figure 6 shows three samples (different frames) of the infrared data, represented in the Rainbow HC palette, which are initially visualized for clinical assessment, using the FLIR Tools® software.
For the image treatment, the following pre-processing steps were performed: (1) import of the CSV (Comma Separated Values) file previously generated in FLIR Tools®; (2) generation of the frames (2D images); (3) image intensification—a step characterized by a non-bijective transformation of the thermal representation in the image, which changes the color palette according to the previously recorded temperature; and (4) conversion of the files to a known image format (i.e., Portable Network Graphics—PNG).
The image intensification process (step (3)) follows the same approach employed by Krefer et al. [18], in which a customized colormap is applied. This transformation changes the intensity levels of the thermal images (normalized in grayscale) and applies black-and-white (B/W) enhancements. It is performed to increase the number of regions with high-contrast texture and, consequently, the number of feature points detected. This customized intensification is necessary because thermal infrared images of human skin do not have as many details as their corresponding visible images. Considering this, and the fact that the SfM methodology was not initially designed for thermal images, such processing is essential for the proper performance of SfM on such infrared images.
In this way, it is possible to improve the visual quality of the images to facilitate the generation of the 3D point cloud (SfM—Section 2.1). Figure 7 illustrates the resulting visualization of this intensity transformation and summarizes the whole process, which is as follows: the most appropriate thermal palette is chosen, in this case the RainHi color palette (represented by (A), originally chosen using FLIR Tools). Next, the images are converted to a grayscale palette (B), which still presents low contrast. Then, thermal intensification (C) demonstrates an intermediate stage, where improvements are already perceived. The final step of this intensification presents stronger thermal differences, based on black-and-white (B/W) enhancements (D), showing the whole image with much stronger contrast. These transformations are performed in MATLAB.
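A minimal sketch of such an intensification step is given below (Python with OpenCV), assuming the per-pixel temperature matrices from Section 3.1.1 as input. CLAHE is used here only as an illustrative contrast enhancement; the actual method of Krefer et al. [18] applies a customized colormap with B/W enhancements instead.

```python
import cv2
import numpy as np

def intensify(temperature):
    """Contrast-enhance one thermal frame so SfM finds more features."""
    # Normalize the temperature matrix to an 8-bit grayscale image.
    gray = cv2.normalize(temperature, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
    # Stretch local contrast; an illustrative substitute for the
    # customized colormap and B/W enhancements of Krefer et al. [18].
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

# Hypothetical file names: one temperature matrix in, one PNG frame out.
frame = np.load("frame_001.npy")
cv2.imwrite("frame_001.png", intensify(frame))
```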

3.3.2. SfM (Structure from Motion): 3D Point Cloud Generation

After pre-processing, the SfM (Structure from Motion) algorithm [43,44] is applied to generate a 3D point cloud from the thermal images. This step uses several images collected sequentially, allowing the selection of all frames. Applying the epipolar geometry shown in Figure 8a makes it possible to identify the same point in the two images [32]. Thus, the greater the correlation of the features between the images, the greater the number of corresponding points between each pair of images. Consequently, better 3D results are obtained, also denoted by Figure 8b.
In this research, the open-source VisualSFM GUI application was employed, allowing the generation of a 3D point cloud of the examined patient, with the 2D thermal images as input.
To generate the dense point cloud in 3D space from the infrared images, the VisualSFM® application is applied in the following order: (1) Compute Missing Matches (CMM); (2) Sparse Reconstruction—based on the estimation of point (coordinate) positioning, following the SIFT (Scale Invariant Feature Transform) method; (3) Bundle Adjustment (BA); and (4) Dense Reconstruction—all previously described in Section 2.1.
The 3D point cloud is obtained as part of the SfM process, performed using the VisualSFM® software. This way, it is possible to map both the object being imaged (photographed) and the camera positions [29,30]. In addition, we perform a camera calibration process to guarantee metric accuracy; information on the camera's intrinsic parameters is also included (such as focal length, principal point, and radial/tangential distortions).
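The sketch below illustrates how such intrinsic parameters are commonly organized and used to undistort detected feature points (Python with OpenCV). All numeric values are placeholders; the real ones come from the calibration of the thermal camera.

```python
import numpy as np
import cv2

# Pinhole intrinsics: focal lengths (fx, fy) and principal point (cx, cy),
# in pixels, plus radial (k1, k2, k3) and tangential (p1, p2) distortion.
fx, fy, cx, cy = 600.0, 600.0, 160.0, 120.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.001, 0.001, 0.0])  # k1, k2, p1, p2, k3

# Undistort feature points (pixel coordinates) before the SfM stage;
# P=K maps the corrected points back into pixel coordinates.
pts = np.array([[[200.0, 150.0]], [[50.0, 90.0]]], dtype=np.float32)
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
```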

3.4. 3D Registration: Between 3D Thermal Model and Anatomy Images

Following the SfM methodology, it is necessary to merge the point cloud (obtained from the thermal images) with the 3D outer geometry (obtained from the reconstruction of anatomical images, in this case CT images). A 3D registration is then performed through a manual alignment (affine registration) between the point cloud and the 3D geometry, using tools from the MeshLab® software.
Thus, Figure 9 demonstrates the alignment process between the point cloud and the 3D geometry, which are finally presented in the same coordinate system and on the same scale.
The flowchart shown in Figure 10 illustrates the methodology used, in which the thermal images (c) represent the data input to the VisualSFM® software. The point cloud (b) is generated from the features detected in the thermal images (c). After this procedure, the 3D shell (a) and the point cloud (b) are aligned, and the thermal images are projected onto the 3D outer surface, generating the 3D thermal shell (d).
Right after the alignment, the thermal image projection follows (as represented in Figure 11), visualized with its inherent texture/thermal differences, generating what is called the 3D thermal shell/geometry. Detailing Figure 11, the texturing process is demonstrated on the previously segmented bust, in different imaging positions/views (frontal and lateral sides). The steps of the texturing and projection process are: (1) representation of the bust (3D model) without thermal texturing; (2) projection of the infrared images onto the 3D model (showing transparency); (3) visualization of the projection with all the infrared images directly on the model; and (4) finally, generation of the 3D thermal shell.
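Although the alignment described above was performed manually in MeshLab®, the underlying computation can be sketched as a least-squares fit of the affine model of Equation (2) to manually paired landmarks, as below. The landmark coordinates are hypothetical.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine transform mapping src onto dst.

    src, dst: (N, 3) arrays of paired landmarks, N >= 4.
    Returns the 4x4 homogeneous matrix of Equation (2).
    """
    n = src.shape[0]
    homogeneous = np.hstack([src, np.ones((n, 1))])       # (N, 4)
    # Solve H (4x3) such that homogeneous @ H approximates dst.
    H, *_ = np.linalg.lstsq(homogeneous, dst, rcond=None)
    T = np.eye(4)
    T[:3, :] = H.T                                        # rows [a_ij | t_i]
    return T

# Hypothetical landmark pairs: point-cloud coordinates vs. CT-mesh coordinates.
cloud_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
mesh_pts = np.array([[5, -2, 1], [6, -2, 1], [5, -1, 1], [5, -2, 2]], float)
T = fit_affine(cloud_pts, mesh_pts)   # here recovers a pure translation
```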

3.5. 3D THERMO-SCAN Complete Visualization

By using the interface of MeVisLab® (version 3.6.1) [45], it is possible to generate a complete 3D model, which we call the 3D THERMO-SCAN model, through the unification of the following models: (A) the 3D model obtained from the anatomical CT images (from Section 3.2; Figure 12A), combined with (B) the 3D thermal outer model (from Section 3.3; Figure 12B). This final 3D model allows a combined visualization of the independent models, leading to Figure 12C. Therefore, this research delivers the combined 3D visualization of several medical images in a common coordinate system. The new thermal and anatomical 3D model was saved in a 3D format (Polygon File Format (PLY) or Stanford Triangle Format (STL)) and then imported into the MeVisLab® interface for visualization purposes.
Therefore, the MeVisLab® software can add more than one 3D object/model into the same coordinate system, even if they are in different file formats. However, to visualize the thermal shell with internal content (i.e., internal tissues and organs), it is necessary to include the DICOM image file that corresponds to the anatomical images.
The following data are included for the complete 3D model visualization: segmented 3D structures (thyroid, trachea, veins, and arteries), DICOM image slices, and the 3D outer thermal shell. These data are all joined into a unique coordinate system, which is merged/combined to allow complete 3D visualization. This fusion visualization takes place through programming modules presented in MeVisLab® software, which performs the fusion of the various imaging modalities (3D thermal model geometry, the DICOM anatomy images, and the segmented inner structures). Figure 13 illustrates the programmed modules to allow the 3D visualization of the complete 3D model.
These modules are connected to one another, and each performs a distinct function. The blue modules in the lower left corner import the DICOM files. Looking from left to right, the following modules import the 3D thermal (infrared) shell and render the object.
The rightmost blue and green pairs (Load and Renderer) import the previously generated segmentations, which, in this case, are the thyroid, trachea, veins, and arteries. Finally, the modules described above are coupled together in a visualization module (View3D). Thus, the final complete 3D model comprises the 3D thermal shell, the anatomical images (DICOM), and the segmented inner structures (thyroid, trachea, veins, and arteries).

4. Results

4.1. Segmentation Results

Through CT images, it was possible to identify the thyroid gland and other structures in the anatomical sections (i.e., coronal, sagittal, and axial), as illustrated in Figure 14, evidencing the anatomical segmentation of the analyzed region. The 3D thyroid selection and reconstruction are highlighted in green in this image.
Figure 15 demonstrates one of the axial anatomical slices, illustrating the mask’s delimitation based on the thyroid gland’s thresholding in the correspondent slice. After including the mask in all the DICOM slices that show the thyroid, the software performed a filling between the slices, thus forming a 3D object with the entire gland’s geometry.
Usually, anatomical images contain noise, which can generate flaws in the reconstructed 3D model. Such noise can be attributed to various causes, such as the type of image (CT, MRI, etc.); additional accessories, such as orthodontic retainers, dental implants, and amalgams from dental restorations; movements of the patient/volunteer; among others. Due to these imperfections, it is necessary to improve and smooth the 3D model, which is transferred to the 3-Matic Medical software. Additionally, for a final 3D modeling treatment, the 3D model is also smoothed in the Blender software (version 3.3). The other anatomy structures were also segmented and smoothed with this process to correct possible failures. The results obtained from the thyroid, trachea, veins, and arteries are also shown in Figure 16.
With all the inner anatomy structures segmented, the complete 3D model is obtained, for later fusion with the 3D thermal outer shell. The result showing the 3D reconstruction and visualization of all the structures combined, including the inner structures and the external geometry of the neck, is shown in Figure 17.
To illustrate the anatomical region using only the anatomical slices (in DICOM format), Figure 18 shows the three cross-sections obtained through the CT images (coronal, sagittal, and axial), visualized in MeVisLab.

4.2. Generation of 3D Thermal Model: 3D Registration and Alignment

Since the 3D model of the outer geometry is obtained from an anatomical (CT) image, it is considered a gold-standard geometry. Then, after the infrared images are coupled and aligned with the 3D shell, the results for the 3D thermal shell (including the thermal texture) are obtained and evaluated. After the alignment, the images are projected and represented by visualizing the inherent texture and thermal variations, generating the 3D thermal shell, as shown in Figure 11 (column 4).

4.3. Generation of 3D THERMO-SCAN Model: Complete 3D Visualization

The final visualization of the complete 3D model obtained using the MeVisLab® software demonstrates the fusion and registration between the 3D thermal shell and the DICOM data (i.e., the CT anatomical images), presented in Figure 19.
Therefore, this 3D THERMO-SCAN methodology, originally developed by our research group, enables a 3D visualization using a specific interface for this purpose, the MeVisLab®. This interface allows 3D visualization, including several imaging modalities: infrared texture outer shell, 3D inner geometries, and anatomical images (DICOM files)—all of which are registered and combined into a unique visualization.
Inside the complete 3D model, a set of segmented structures is presented, where it is possible to observe the following: thyroid (pink), trachea (yellow), and veins and arteries (blue) (as seen in Figure 20). A video overview illustrating these results can be found in the Supplementary Information (Video S1). These inner colors were chosen to highlight the anatomical region under analysis. These views are obtained through the View3D module, with the cuts controlled through the SoClipPlane module in the MeVisLab® software.
The main advantage of MeVisLab® is that it offers a complete toolset, including a variety of digital image processing methods, as well as libraries capable of processing DICOM format files (i.e., anatomical images). By means of connectable modules, algorithms and applications are developed through networks of functional units, in which a module represents a method to be applied to the input data (image). Figure 13 represents the network of modules employed in this research. Furthermore, networks can be encapsulated in macro modules to be reused in other algorithms or as part of customized applications [46].
Additionally, by using the software MeVisLab®, the generated 3D THERMO-SCAN model is filled (incorporated) internally with the segmented structures, enabling visualization of the reconstructed structures from different types of files. In this case, it was possible to add more than one 3D object, which will be visualized together in the same coordinate system (such as the 3D thermal outer shell/geometry, the DICOM imaging slices, and the segmented structures, e.g., thyroid, trachea, veins, and arteries).
This visualization interface (MeVisLab) has the differential for the joint visualization of the external layer (that is, the thermal shell with the superimposed texture), with the anatomical structures (internally) already previously segmented (through CT images/slices), and with the inclusion of the DICOM anatomy imaging slices itself.
Additionally, for the case study presented here, seven positions spread over each 3D model (both the 3D thermal model and the 3D geometry) were independently selected, as indicated in Figure 21. Then, the displacement error of these spatial positions (XYZ) was calculated from their Euclidean distances (as presented in Table 1). An average error of 1.77 mm was obtained, which is considered reasonable, especially given the size of the subject being imaged (covering the head, neck, shoulders, and torso). According to Vasavada et al. [47], the average neck width is about 106 mm and the head width about 148 mm (for female adults averaging 1690 mm in height and 66 kg in body weight).
These measurements were inspired by the previous research of Krefer et al. [18], in which an average error of 4.58 mm was obtained for the four subjects imaged; when using a test object, 1.41 mm was obtained. Therefore, this initial quantification demonstrates the efficacy of the proposed methodology, which could be extended to other medical applications related to infection, inflammation, and different pathologies (which present larger temperature variations), supporting accurate diagnoses and follow-ups.
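The displacement error reported in Table 1 reduces to the mean Euclidean distance between paired landmark positions; a minimal sketch is given below, with illustrative coordinates rather than the actual measured values.

```python
import numpy as np

# Corresponding landmark positions (in mm) picked independently on the
# 3D thermal model and on the CT geometry; the values are illustrative.
thermal_pts = np.array([[10.2, 34.1, 5.0],
                        [48.9, 12.3, 7.7],
                        [25.0, 60.4, 3.1]])
ct_pts = np.array([[11.0, 33.5, 4.2],
                   [49.5, 13.0, 8.4],
                   [24.1, 61.0, 2.6]])

# Per-landmark displacement error: Euclidean distance between each pair.
errors = np.linalg.norm(thermal_pts - ct_pts, axis=1)
print(f"per-point errors (mm): {np.round(errors, 2)}")
print(f"average error (mm): {errors.mean():.2f}")
```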

5. Discussion

This research describes the generation of complete 3D THERMO-SCAN models (i.e., a customized methodology), including the imaging modalities originally presented: functional (infrared) and anatomical (CT) images. The results were obtained by using both imaging modalities and their corresponding computational processing, together with visualization tools (the SfM methodology, affine alignment, and the MIMICS, MeshLab, and MeVisLab software). The process consists of several steps, covering the image acquisition, the segmentation of the CT anatomical images, and the processing of the infrared functional images, to combine and unify them into a single, unique 3D model.
The segmentation of anatomical images (DICOM) is one of the first stages of image processing. There are automatic segmentation methods, but the most modern ones allow user interaction with the software, making it possible to define what should be extracted from each DICOM slice [48].
According to Grady [48], an interactive segmentation algorithm must have four qualities: fast processing, fast editing, the ability to segment the image with sufficient user interaction, and an intuitive segmentation process. The segmentation must be performed in software where these manipulations are possible, as the 3D structures will be highlighted when the 3D thermal geometry shell and the anatomical images are joined together. For this reason, the MIMICS software was chosen.
After the anatomical segmentation, the SfM methodology employs the infrared images to generate a 3D point cloud, based on the calculation of the intrinsic 3D positioning coordinates (X, Y, Z). This mapping/modeling of the body (volunteer) not only generates the point cloud of the object but also recovers the positions of the cameras in space. The generation of the 3D thermal outer shell requires an alignment because two different imaging modalities are involved: CT images (anatomical modality) and infrared images (functional modality), which are acquired by different equipment at different moments. That is why it is crucial to perform such an alignment between the different imaging modalities.
The initial 3D geometry obtained is a hollow model, showing only the outer shell (Figure 12A), which contains no internal organs or structures, nor any thermal differences and color palette variations. However, with the inclusion of the DICOM images (Figure 12), the inner anatomical structure information is obtained. For these reasons, the alignment is also necessary, i.e., to keep everything together in a common coordinate system (which is the DICOM imaging system). Infrared thermal images, on the other hand, are located in a different coordinate system. So, after the infrared images are projected onto the 3D shell, the 3D thermal model is finally generated (Figure 12B), already located in the DICOM system, unifying it all.
In this research, we proposed a methodology similar to that of Krefer et al. [18], which computes 3D thermal models by employing pose estimation via Structure from Motion (SfM). However, in their work, only the generation of the 3D thermal outer shell was shown. Schollemann et al. [31] pointed to an additional approach to generate 3D anatomical thermal models based on multi-modality imaging, combining outer and inner information from both infrared images and CT data, but in [31], no segmented inner anatomical structures were presented. Therefore, this is the contribution of this research: a 3D thermal anatomy model with the visualization of segmented inner organs inside it, together with the individual anatomical imaging slices.
In this paper, the final visualization and fusion of the different medical imaging modalities is performed using MeVisLab. The use of this software is justified by several recent studies in the medical field, such as De Buck et al. [49], Hernandez et al. [50], Liu et al. [51], Chen et al. [52], Regnstrand et al. [53], and Egger et al. [54]. Additionally, MeVisLab offers further processing tools for developers, including customized programming in C++ and Python.
Therefore, this paper brings the inclusion of multi-modality imaging, such as infrared thermography, anatomical CT DICOM images, and the corresponding segmented anatomical regions of interest. All these data, including the complete 3D thermal models, are combined and visualized in a common coordinate system to be presented altogether.

6. Conclusions

This paper presents the generation of complete 3D thermal anatomy models based on the methodology known as 3D THERMO-SCAN, which includes different imaging modalities: infrared thermal and anatomical CT images. For the processing, different computational tools were employed: MATLAB, MIMICS, VisualSFM, MeshLab, and MeVisLab—enabling the 3D reconstruction and visualization of both imaging modalities in a unique and unified 3D model, combining functional and anatomical images altogether.
This research showed results for the thyroid and neck region, demonstrating that combined functional and anatomical images have great relevance for medicine, in relation to diagnosing and monitoring pathologies over time. This methodology also has great potential for further expansion, as it could be applied to other organs (parts) of the human body.
Within the proposed case study, an evaluation of the 3D thermal geometry was performed, achieving an average error of 1.77 mm (when compared with the CT geometry itself). The methodology was therefore considered reasonable, regarding the close-range imaging acquisition of a whole human head/bust/neck and the combination of different imaging modalities (i.e., infrared thermal images and CT medical data) acquired at different moments. This solution involves a decoupled acquisition, mainly due to the limitation on including external interference in clinical CT or MRI acquisitions—as it is neither allowed nor feasible to include infrared acquisitions during a CT/MRI scanning process.
This represents a valuable solution, especially when data cannot be collected simultaneously. Such solutions may support the evolution of medicine, allowing the inclusion of additional exams (using different technologies and modalities), such as upcoming imaging modalities. For that reason, with the combination and fusion of complementary imaging methods, there is always the need for further processing, in order to enable applications in different medical fields and to achieve fast and accurate diagnoses and follow-ups.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23031610/s1. Video S1: Overview of the results.

Author Contributions

Conceptualization, M.A.d.S.; methodology, M.A.d.S., D.C.A.C., and M.F.A.d.O.; software, J.d.O. and M.F.A.d.O.; investigation, M.A.d.S., D.C.A.C., and M.F.A.d.O.; resources, M.A.d.S.; data curation, M.F.A.d.O. and M.A.d.S.; writing—original draft preparation, M.A.d.S., D.C.A.C., J.d.O., M.F.A.d.O., and B.L.B.; writing—review and editing, M.A.d.S., D.C.A.C., J.d.O., M.F.A.d.O., and B.L.B.; supervision, M.A.d.S.; project administration, M.A.d.S.; funding acquisition, M.A.d.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior)—Program PROBRAL grant number 23038.002643/2018-01, and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico)—UNIVERSAL grant number 432038/2016-7 from Brazil.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the policy of the Ethics Committee, from Pontifícia Universidade Católica do Paraná (PUCPR) Curitiba, Brazil.

Informed Consent Statement

Written informed consent was obtained from the subject involved in this study, prior to undertaking the imaging acquisition.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kylili, A.; Fokaides, P.A.; Christou, P.; Kalogirou, S.A. Infrared thermography (IRT) applications for building diagnostics: A review. Appl. Energy 2014, 134, 531–549. [Google Scholar] [CrossRef]
  2. Frodella, W.; Lazzeri, G.; Moretti, S.; Keizer, J.; Verheijen, F.G.A. Applying Infrared Thermography to Soil Surface Temperature Monitoring: Case Study of a High-Resolution 48 h Survey in a Vineyard (Anadia, Portugal). Sensors 2020, 20, 2444. [Google Scholar] [CrossRef] [PubMed]
  3. Rekant, S.I.; Lyons, M.A.; Pacheco, J.M.; Arzt, J.; Rodriguez, L.L. Veterinary applications of infrared thermography. Am. J. Vet. Res. 2016, 77, 98–107. [Google Scholar] [CrossRef] [PubMed]
  4. Jasti, N.; Bista, S.; Bhargav, H.; Sinha, S.; Gupta, S.; Chaturvedi, S.K.; Gangadhar, B.N. Medical Applications of Infrared Thermography: A Narrative Review. J. Stem Cells Hauppauge 2019, 14, 35–53. [Google Scholar]
  5. Doi, K. Computer-Aided Diagnosis in Medical Imaging: Historical Review, Current Status and Future potential. Comput. Med. Imaging Graph. 2007, 31, 198–211. [Google Scholar] [CrossRef]
  6. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging; Walter de Gruyter: Berlin, Germany, 2020. [Google Scholar]
  7. Foster, S.; Halbstein, D. Integrating 3D Modeling, Photogrammetry and Design; Springer: London, UK, 2014. [Google Scholar]
  8. Cabrelles, M.; Galcerá, S.; Navarro, S.; Lerma, J.L.; Akasheh, T.; Haddad, N. Integration of 3D Laser Scanning, Photogrammetry and Thermography to Record Architectural Monuments. In Proceedings of the 22nd CIPA Symposium, Kyoto, Japan, 11–15 October 2009. [Google Scholar]
  9. Yang, M.-D.; Su, T.-C.; Lin, H.-Y. Fusion of Infrared Thermal Image and Visible Image for 3D Thermal Model Reconstruction Using Smartphone Sensors. Sensors 2018, 18, 2003. [Google Scholar] [CrossRef]
  10. Domina, T.; Kinnicutt, P.; Macgillivray, M. Thermal Pattern Variations Analysed using 2D/3D Mapping Techniques among Females. J. Text. Appar. Technol. Manag. 2011, 7, 1–15. [Google Scholar]
  11. Campione, I.; Lucchi, F.; Santopuoli, N.; Seccia, L. 3D Thermal Imaging System with Decoupled Acquisition for Industrial and Cultural Heritage Applications. Appl. Sci. 2020, 10, 828. [Google Scholar] [CrossRef]
  12. Aksenov, P.; Clark, I.; Grant, D.; Inman, A.; Vartikovski, L.; Nebel, J.-C. 3D thermography for quantification of heat generation resulting from inflammation. In Proceedings of the 8th 3D Modelling Symposium, Paris, France, 3–7 July 2003. [Google Scholar]
  13. Ju, X.; Nebel, J.-C.; Siebert, J.P. 3D Thermography Imaging Standardization Technique for Inflammation Diagnosis. In Proceedings of the Photonics Asia 2004, Beijing, China, 8–12 November 2004; pp. 266–273. [Google Scholar]
  14. Barone, S.; Paoli, A.; Razionale, A.V. A biomedical application combining visible and thermal 3D imaging. In Proceedings of the XVIII Congreso internactional de Ingenieria Grafica, Barcelona, Spain, 31 May–2 June 2006; pp. 1–9. [Google Scholar]
  15. Parker, M.D.; Taberner, A.J.; Nielsen, P.M. A Thermal Stereoscope for Surface Reconstruction of The Diabetic Foot. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, 30 August–3 September 2011; pp. 306–309. [Google Scholar]
  16. Chernov, G.; Chernov, V.; Barboza Flores, M. 3D Dynamic Thermography System for Biomedical Applications. In Application of Infrared to Biomedical Sciences; Ng, E.Y., Etehadtavakol, M., Eds.; Springer: Singapore, 2017; pp. 517–545. [Google Scholar]
  17. Cao, Y.; Xu, B.; Ye, Z.; Yang, J.; Cao, Y.; Tisse, C.-L.; Li, X. Depth and Thermal Sensor Fusion to Enhance 3D Thermographic Reconstruction. Opt. Express 2018, 26, 8179. [Google Scholar] [CrossRef]
  18. Krefer, A.G.; Lie, M.M.I.; Borba, G.B.; Gamba, H.R.; Lavarda, M.D.; Abreu de Souza, M. A method for generating 3D thermal models with decoupled acquisition. Comput. Methods Programs Biomed. 2017, 151, 79–90. [Google Scholar] [CrossRef]
  19. Bader, C. 3D Thermal Imaging: Acquisition and Surface Reconstruction. Master Thesis, ICP Institute of Computational Physics, ZHAW Zurich University of Applied Sciences School of Engineering, Zurich, Switzerland, 2019. [Google Scholar]
  20. van Doremalen, R.F.M.; van Netten, J.J.; van Baal, J.G.; Vollenbroek-Hutten, M.M.R.; van der Heijden, F. Infrared 3D Thermography for Inflammation Detection in Diabetic Foot Disease: A Proof of Concept. J. Diabetes Sci. Technol. 2020, 14, 46–54. [Google Scholar] [CrossRef]
  21. Mancilla, R.B.; Phan, B. Anatomical 3D Modeling Using IR Sensors and Radiometric Processing Based on Structure from Motion: Towards a Tool for the Diabetic Foot Diagnosis. Sensors 2021, 21, 3918. [Google Scholar] [CrossRef]
  22. Schramm, S.; Osterhold, P.; Schmoll, R.; Kroll, A. Combining modern 3D reconstruction and thermal imaging: Generation of large-scale 3D thermograms in real-time. Quant. InfraRed Thermogr. J. 2022, 19, 295–311. [Google Scholar] [CrossRef]
  23. Schmoll, R.; Schramm, S.; Breitenstein, T.; Kroll, A. Method and Experimental Investigation of Surface Heat Dissipation Measurement Using 3D Thermography. J. Sens. Sens. Syst 2022, 11, 41–49. [Google Scholar] [CrossRef]
  24. Ledwon, D.; Sage, A.; Juszczyk, J.; Rudzki, M.; Badura, P. Tomographic reconstruction from planar thermal imaging using convolutional neural network. Sci. Rep. 2022, 12, 2347. [Google Scholar] [CrossRef]
  25. Paz, A.A.C.; De Souza, M.A.; Brock, P.W.; Ferreira Mercuri, E.G. Finite element analysis to predict temperature distribution in the human neck with abnormal thyroid: A proof of concept. Comput. Methods Programs Biomed. 2022, 227, 107234. [Google Scholar]
  26. Chang, C.; Chung, P.; Hong, Y.; Tseng, C. A Neural Network for Thyroid Segmentation and Volume Estimation in CT Images. IEEE Comput. Intell. Mag. 2011, 6, 43–55. [Google Scholar] [CrossRef]
  27. Shin, D.S.; Lee, S.; Park, H.S.; Lee, S.-B.; Chung, M.S. Segmentation and surface reconstruction of a cadaver heart on Mimics software. Folia Morphol. 2014, 74, 372–377. [Google Scholar] [CrossRef]
  28. Kaur, J.; Jindal, A. Comparison of Thyroid Segmentation Algorithms in Ultrasound and Scintigraphy Images. Int. J. Comput. Appl. 2012, 50, 24–27. [Google Scholar] [CrossRef]
  29. Souza, M.A.; de Borba, G.B.; Krefer, A.G.; Gamba, H.R. 3D THERMO-SCAN—Multi-modality image registration. In Proceedings of the 2016 SAI Computing Conference (SAI), London, UK, 13–15 July 2016; pp. 302–306. [Google Scholar]
  30. Souza, M.A.; de Krefer, A.G.; Borba, G.B.; E Silva, G.J.V.; Franco, A.P.G.O.; Gamba, H.R. Generation of 3D thermal models for dentistry applications. In Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 1397–1400. [Google Scholar]
  31. Scholleman, F.; Barbosa Pereira, C.; Rosenhain, S.; Follmann, A.; Gremse, F.; Kiessling, F.; Czaplik, M.; Abreu de Souza, M. An Anatomical Thermal 3D Model in Preclinical Research: Combining CT and Thermal Images. Sensors 2021, 21, 1200. [Google Scholar] [CrossRef]
  32. Schönberger, L.; Frahm, J.-M. Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113. [Google Scholar]
  33. Özyeşil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. Acta Numer. 2017, 26, 305–364. [Google Scholar] [CrossRef]
  34. Wu, C. Towards linear-time incremental structure from motion, In Proceedings of the International Conference on 3D Vision. Seattle, WA, USA, 29 June–1 July 2013; pp. 127–134. [Google Scholar]
  35. Vacca, G. Overview Begin of Open Source Software for Close Range Photogrammetry. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, XLII-4/W14, 239–245. [Google Scholar] [CrossRef]
  36. Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  37. Maintz, J.B.A.; Viergever, M.A. A Survey of Medical Image Registration. Med. Image Anal. 1998, 2, 1–36. [Google Scholar] [CrossRef] [PubMed]
  38. Viergever, M.A.; Maintz, J.B.; Klein, S.; Murphy, K.; Staring, M.; Pluim, J.P. A survey of medical image registration—under review. Med. Image Anal. 2016, 33, 140–144. [Google Scholar] [CrossRef]
  39. Van Den Elsen, P.A.; Pol, E.D.; Viergever, M.A. Medical image matching–A review with classification. IEEE Eng. Med. Biol. Mag. 1993, 12, 26–39. [Google Scholar] [CrossRef]
  40. Rueckert, D.; Frangi, A.F.; Schnabel, J.A. Automatic Construction of 3D Statistical Deformation Models Using Non-Rigid Registration. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2001. MICCAI 2001; Lecture Notes in Computer Science; Niessen, W.J., Viergever, M.A., Eds.; Springer: Berlin, Heidelberg, 2001; Volume 2208. [Google Scholar]
  41. Unteregger, F.; Thommen, J.; Honegger, F.; Potthast, S.; Zwicky, S.; Storck, C. How Age and Frequency Impact the Thyroid Cartilages of Professional Singers. J Voice 2019, 33, 284–289. [Google Scholar] [CrossRef]
  42. Jin, C.; He, Z.Z.; Yang, Y.; Liu, J. MRI-based three-dimensional thermal physiological characterization of thyroid gland of human body. Med. Eng. Phys. 2014, 36, 16–25. [Google Scholar] [CrossRef]
  43. Szeliski, R. Computer Vision: Algorithms and Applications; Springer: London, UK, 2010; p. 812. [Google Scholar]
  44. Furukawa, Y.; Ponce, J. Accurate, Dense and Robust Multi-View Stereopsis. Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376. [Google Scholar] [CrossRef]
  45. Wallner, J.; Hochegger, K.; Chen, X.; Mischak, I.; Reinbacher, K.; Pau, M.; Zrnc, T.; Schwenzer-Zimmerer, K.; Zemann, W.; Schmalstieg, D.; et al. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action. PLoS ONE 2018, 13, e0196378. [Google Scholar] [CrossRef]
  46. Heckel, F.; Schwier, M.; Peitgen, H.-O. Object-oriented application development with MeVisLab and Python. In Proceedings of the Informatik 2009–Im Focus das Leben, Lübeck, Germany, 28 September–2 October 2009. [Google Scholar]
  47. Vasavada, A.N.; Danaraj, J.; Siegmund, G.P. Head and neck anthropometry, vertebral geometry and neck strength in height-matched men and women. J. Biomech. 2008, 41, 114–121. [Google Scholar] [CrossRef]
  48. Grady, L. Random Walks for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1768–1783. [Google Scholar] [CrossRef]
  49. De Buck, S.; Van De Bruaene, A.; Budts, W.; Suetens, P. MeVisLab-OpenVR prototyping platform for virtual reality medical applications. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 2065–2069. [Google Scholar] [CrossRef]
  50. Hernández, A.C.; Shilo, A.; Péry, P.; Raidou, R. Calvariam: Visual Educational Resources for Maxillofacial Surgery. Eurographics Workshop on Visual Computing for Biology and Medicine, Vienna, Austria, 22–23 September 2022. [Google Scholar]
  51. Lui, K.; Liu, H.; Wang, H.; Yang, X.; Huang, D.; Zhou, X.; Gao, Y.; Shen, Y. An application framework of 3D assessment image registration accuracy and untouched surface area in canal instrumentation laboratory research with micro-computed tomography. Clin. Oral Investig. 2023, 27, 715–725. Available online: https://doi.org/10.1007/s00784-022-04819-w (accessed on 3 December 2022). [CrossRef]
  52. Chen, M.; Wang, H.; Tsauo, C.; Huang, D.; Zhou, X.; He, J.; Gao, Y. Micro-computed tomography analysis of root canal morphology and thickness of crown and root of mandibular incisors in Chinese population. Clin. Oral Investig. 2022, 26, 901–910. [Google Scholar] [CrossRef]
  53. Regnstrand, T.; Ezeldeen, M.; Shujaat, S.; Ayidh Alqahtani, K.; Benchimol, D.; Jacobs, R. Three-dimensional quantification of the relationship between the upper first molar and maxillary sinus. Clin. Exp. Dent. Res. 2022, 8, 750–756. [Google Scholar] [CrossRef]
  54. Egger, J.; Gall, M.; Tax, A.; Ücal, M.; Zefferer, U.; Li, X.; von Campe, G.; Schäfer, U.; Schmalstieg, D.; Chen, X. Interactive reconstructions of cranial 3D implants under MeVisLab as an alternative to commercial planning software. PLoS ONE 2017, 12, e0172694. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of the proposed method.
Figure 2. Thyroid shown in differentiated grayscale in the (a) axial, (b) sagittal, and (c) coronal planes.
Figure 3. Representation of the DICOM anatomical imaging slices, illustrating the generation of the 3D anatomy model and the corresponding segmented 3D model.
Figure 4. MIMICS software interface, showing the anatomical planes (sagittal, coronal, and axial).
Figure 5. Visualization of the different masks (representing the different anatomical structures) and the complete segmented 3D model.
Figure 6. Thermal images visualized in FLIR Tools® at three different angles.
Figure 7. Representation of the image intensification process (step 3).
Figure 8. Illustration of the SfM technique: (a) representation of epipolar geometry; (b) result obtained from one of the mapped volunteers, showing the point cloud and the positioning of the cameras.
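For readers who wish to experiment with the epipolar-geometry step illustrated in (a), the sketch below recovers a sparse two-view point cloud in Python with OpenCV. It is a minimal illustration only: the image file names and the intrinsic matrix K are hypothetical placeholders, and the actual pipeline runs incremental SfM [32,34] over many infrared views.

```python
import cv2
import numpy as np

# Hypothetical inputs: two overlapping infrared captures and an assumed
# intrinsic matrix K (a real SfM run estimates/refines the intrinsics).
img1 = cv2.imread("ir_view_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("ir_view_2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

# SIFT keypoints and descriptors [36], matched with Lowe's ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Epipolar geometry: essential matrix, relative camera pose, triangulation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (points4d[:3] / points4d[3]).T   # Nx3 sparse point cloud
```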
Figure 9. Alignment visualization: (a) pre-alignment; (b) during alignment; (c) post-alignment.
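The alignment shown above can be prototyped with an off-the-shelf registration routine. The following sketch uses Open3D's point-to-point ICP as a stand-in refinement step; Open3D is not part of the paper's toolchain, and the file names, millimetre units, and 5.0 mm correspondence threshold are assumptions.

```python
import numpy as np
import open3d as o3d

# Hypothetical files: the SfM thermal cloud and the CT-derived bust surface.
source = o3d.io.read_point_cloud("sfm_thermal_cloud.ply")
target = o3d.io.read_point_cloud("ct_bust_surface.ply")

# Point-to-point ICP from an initial guess (identity here; in practice a
# coarse landmark-based alignment would come first, as in Figure 9a).
result = o3d.pipelines.registration.registration_icp(
    source, target, 5.0, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness, "inlier RMSE (mm):", result.inlier_rmse)

source.transform(result.transformation)  # bring the cloud into CT coordinates
```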
Figure 10. Flowchart showing the fusion and alignment of the imaging modalities: (a) 3D shell; (b) point cloud; and (c) thermal images, allowing the generation of the (d) 3D thermal shell.
Figure 11. Texturing process steps: (1) segmented 3D geometry (bust), without texture; (2) projection of the infrared images onto the 3D model (whole bust), shown with transparency; (3) incorporation of the infrared images onto the 3D model; (4) generation of the 3D thermal shell. The steps are shown from different positioning angles: (A) right side, (B) frontal, and (C) left side of the volunteer.
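Conceptually, step (2) projects each infrared image onto the mesh through a pinhole camera model. The function below is a simplified per-vertex sampling sketch (plain NumPy, hypothetical inputs): it ignores occlusion beyond a crude in-front-of-camera check, whereas the full texturing stage blends several calibrated views.

```python
import numpy as np

def project_thermal_to_vertices(vertices, K, R, t, thermal_img):
    """Assign each 3D vertex (Nx3) the thermal pixel it projects to.

    K, R, t are the camera intrinsics and pose, assumed to come from the
    SfM stage; thermal_img is a 2D array of temperature values.
    """
    cam = R @ vertices.T + t.reshape(3, 1)   # camera coordinates, 3xN
    uv = K @ cam                             # homogeneous pixel coordinates
    uv = (uv[:2] / uv[2]).T                  # Nx2 pixel coordinates
    h, w = thermal_img.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    visible = cam[2] > 0                     # crude visibility test only
    temps = np.full(len(vertices), np.nan)
    temps[visible] = thermal_img[v[visible], u[visible]]
    return temps
```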
Figure 12. (A) 3D anatomy model generated from the 3D reconstruction (from the CT DICOM images). (B) 3D thermal model (obtained from the SfM methodology and the subsequent projection of the thermal images onto the 3D outer bust model). (C) Combined registered 3D complete model, represented in the common coordinate system.
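The common coordinate system in (C) is reached through an affine transformation between the two models. A minimal least-squares version of that idea, estimated from corresponding anatomical landmarks, could look as follows; the landmark coordinates here are synthetic and serve only to verify the code.

```python
import numpy as np

def affine_landmark_transform(P, Q):
    """Least-squares 3D affine transform mapping landmark set P onto Q.

    Solves Q ≈ [P | 1] @ M for the 4x3 matrix M (rotation, scale, shear,
    translation); needs at least four non-coplanar point correspondences.
    """
    Ph = np.hstack([P, np.ones((len(P), 1))])
    M, *_ = np.linalg.lstsq(Ph, Q, rcond=None)
    return M   # apply with: np.hstack([X, ones]) @ M

# Synthetic check: Q is an anisotropically scaled and translated copy of P.
P = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 5]])
Q = P @ np.diag([1.02, 0.98, 1.0]) + np.array([2.0, -1.0, 3.0])
M = affine_landmark_transform(P, Q)
print(np.allclose(np.hstack([P, np.ones((5, 1))]) @ M, Q))  # True
```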
Figure 13. Representation of the network of modules used in MeVisLab® to generate the complete 3D visualization.
Figure 14. Illustration of a set of CT images visualized in the MIMICS software, allowing the segmentation and 3D reconstruction of the thyroid geometry in the neck region.
Figure 15. A highlight of an axial CT image, showing the delimitation of the segmentation mask (in green) applied over the structure to be segmented (in this case, the thyroid gland).
Figure 16. Representation of the segmented inner anatomical structures: (a) thyroid (before smoothing), (b) thyroid (after smoothing), (c) trachea, and (d) veins and arteries.
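Segmentation in this work was performed interactively in MIMICS. Purely as an open-source approximation of the same threshold-mask-then-surface workflow, a sketch with pydicom and scikit-image is shown below; the folder name and the Hounsfield window are placeholders that would need tuning per structure (thyroid, trachea, vessels).

```python
import glob
import numpy as np
import pydicom
from skimage import measure, morphology

# Load the CT series into a 3D volume, slices sorted by z position,
# converted to Hounsfield units via the DICOM rescale tags.
slices = [pydicom.dcmread(f) for f in glob.glob("ct_neck/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
volume = np.stack([s.pixel_array * s.RescaleSlope + s.RescaleIntercept
                   for s in slices])

# Threshold mask (placeholder HU window for soft tissue) plus cleanup.
mask = (volume > 70) & (volume < 200)
mask = morphology.remove_small_objects(mask, min_size=500)

# Surface extraction, analogous to the STL export of the segmented mask;
# passing spacing=(slice thickness, pixel spacing) would give true mm.
verts, faces, normals, _ = measure.marching_cubes(mask.astype(np.uint8),
                                                  level=0.5)
```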
Figure 17. Segmentation results: (a) 3D geometry of the bust/neck (outer region only); (b) combined 3D model, showing all inner and outer anatomical structures.
Figure 18. Anatomy images (DICOM) with the corresponding 3D visualization in MeVisLab. Special attention is given to the segmented thyroid region (represented in red) in the three views: (a) coronal, (b) sagittal, and (c) axial.
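MeVisLab renders these three orthogonal views natively. For readers without it, a rough matplotlib stand-in that displays the mid coronal, sagittal, and axial slices of a CT volume, with an optional segmentation overlay in red, might look like this; the (z, y, x) axis ordering of the volume array is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_orthogonal_views(volume, mask=None):
    """Show mid coronal/sagittal/axial slices; overlay mask in warm colors."""
    z, y, x = (s // 2 for s in volume.shape)   # assumes (z, y, x) ordering
    views = {"coronal": volume[:, y, :],
             "sagittal": volume[:, :, x],
             "axial": volume[z]}
    overlays = None if mask is None else [mask[:, y, :], mask[:, :, x], mask[z]]
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for i, (ax, (name, img)) in enumerate(zip(axes, views.items())):
        ax.imshow(img, cmap="gray")
        if overlays is not None:
            ax.imshow(np.ma.masked_where(overlays[i] == 0, overlays[i]),
                      cmap="autumn", alpha=0.6)
        ax.set_title(name)
        ax.axis("off")
    plt.show()
```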
Figure 19. Results visualized after the fusion of the 3D thermal shell (external) with the DICOM data (internal).
Figure 20. 3D visualization of the complete model, showing the structures in different sections/slices: (a) axial, (b) coronal, and (c) sagittal. A video overview illustrating these results can be found in the Supplementary Material (Video S1).
Figure 21. Positioning markers representing the seven points selected for quantification.
Table 1. Displacement error for the positioning markers between the 3D geometry and the 3D thermal model (single-volunteer case study).

Average error: 1.77 mm | Standard deviation: 1.06 mm | Minimum error: 0.28 mm | Maximum error: 3.39 mm
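The statistics in Table 1 reduce to Euclidean distances between the seven corresponding markers on the two models. A short sketch of that computation is given below; the marker files are hypothetical, and the sample (n−1) standard deviation is an assumption about how the reported value was obtained.

```python
import numpy as np

# Hypothetical marker coordinates (mm): seven corresponding points picked
# on the 3D geometry and on the 3D thermal model, one point per row.
markers_geometry = np.loadtxt("markers_geometry.txt")  # shape (7, 3)
markers_thermal = np.loadtxt("markers_thermal.txt")    # shape (7, 3)

# Per-marker Euclidean displacement, then the summary statistics of Table 1.
errors = np.linalg.norm(markers_geometry - markers_thermal, axis=1)
print(f"average {errors.mean():.2f} mm, std {errors.std(ddof=1):.2f} mm, "
      f"min {errors.min():.2f} mm, max {errors.max():.2f} mm")
```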