Article

COVI3D: Automatic COVID-19 CT Image-Based Classification and Visualization Platform Utilizing Virtual and Augmented Reality Technologies

Samir Benbelkacem, Adel Oulefki, Sos Agaian, Nadia Zenati-Henda, Thaweesak Trongtirakul, Djamel Aouam, Mostefa Masmoudi and Mohamed Zemmouri

1 Robotics and Industrial Automation Division, Centre de Développement des Technologies Avancées (CDTA), Algiers 16081, Algeria
2 Department of Computer Science, College of Staten Island, 2800 Victory Blvd, Staten Island, NY 10314, USA
3 Faculty of Industrial Education, Rajamangala University of Technology Phra Nakhon, 399 Samsen Rd., Vachira Phayaban, Bangkok 10300, Thailand
4 Department of Computer Science and Information Technology, Kasdi Merbah University of Ouargla, Ouargla 30000, Algeria
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(3), 649; https://doi.org/10.3390/diagnostics12030649
Submission received: 7 February 2022 / Revised: 28 February 2022 / Accepted: 2 March 2022 / Published: 7 March 2022
(This article belongs to the Special Issue The Role of CT in 2019 Novel Coronavirus Pneumonia (COVID-19))

Abstract
Many recent studies have shown the effectiveness of using augmented reality (AR) and virtual reality (VR) in biomedical image analysis; however, they do not automate the COVID-19 severity classification process. Additionally, despite the high potential of CT-scan imagery for COVID-19 research and clinical use (including two common tasks in lung image analysis: segmentation and classification of infection regions), publicly available datasets are still missing from the care system for Algerian patients. This article proposes an automatic VR and AR platform for severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) pandemic data analysis, classification, and visualization that addresses the above-mentioned challenges by (1) utilizing a novel automatic CT image segmentation and localization system to deliver critical information about the shapes and volumes of infected lungs, (2) elaborating a volume measurement and lung voxel-based classification procedure, and (3) developing a user-friendly three-dimensional AR and VR interface. The work also centered on patient questionnaires and qualitative feedback from medical staff, which led to advances in scalability and higher levels of engagement and evaluation. Extensive computer simulations on CT image classification show better efficiency than state-of-the-art methods on a COVID-19 dataset of 500 Algerian patients. The developed system has been used by medical professionals for better and faster diagnosis of the disease and for providing a more accurate and effective treatment plan using real-time data and patient information.

1. Introduction

Variants of COVID-19 have been reported ubiquitously world-wide, causing more infections and spreading faster than any previously known form of the virus [1]. This raises the urgent need for developing effective and safe COVID-19 vaccines [2]. While COVID-19 presents with a variety of symptoms, accurate diagnostic methods remain relevant for slowing the spread of SARS-CoV-2 despite the emergence of COVID-19 vaccines [3,4,5]. Reverse-transcription polymerase chain reaction (RT-PCR) is a commonly used protocol for the detection and quantification of virus infections [6]. However, it is time-consuming and may produce both false-negative (FN) and false-positive (FP) results [7]. Since COVID-19 causes lung complications, such as pneumonia, computed tomography (CT) scanning is the most frequently used diagnostic tool [8,9,10]. In Algeria, CT scans have been widely used and have shown good clinical diagnostic value [11]. Both private clinics and public hospital healthcare systems mostly use CT-scan imagery to measure the severity of COVID-19, due to the lack of diagnostic kits and the higher false-prediction rate of RT-PCR [12]. Several computer vision applications have emerged to address different aspects of the fight against the propagation of COVID-19 [13], including segmentation and severity classification methods [14,15,16].
Since COVID-19 lesions inside patient lungs vary in shape and size, their localization is still challenging. Segmentation is a vital step that allows lesion identification and localization and separates the lesions from the lung and bronchopulmonary systems. Lung-region-based methods separate the whole lung (including lobes) from other regions in CT images [17,18], while lung-lesion-based methods split the infected regions from healthy lung regions [19,20,21].
On the other hand, quantification and classification of severity could provide radiologists with relevant assistance for prioritizing patients so that truly serious cases receive care first. Several works have utilized deep learning methods to address the quantification of COVID-19. Shen [22] designed a system that supports radiologists in identifying the patient's COVID-19 severity degree. The authors of [23] introduced four metrics to evaluate COVID-19-related lesions using chest CT imagery. The work in [16] quantified infection by calculating the ratio of lesion pixels to lung pixels. Pu [24] showed how CT images can be used to quantify COVID-19 progression and severity in an automated way. Sun [25] developed an extreme gradient boosting (XGBoost) machine learning model for estimating COVID-19 severity by integrating multi-omics data. Finally, Huang [26] described the longitudinal evolution of severity using deep learning methods with CT imagery, classifying patients into four severity levels and grades (critical, severe, moderate, and mild). These studies showed good performance for severity classification. However, the proposed models failed to take into account small lung regions and regions surrounding the vessels, and they often utilized small datasets for training and testing.
Despite great advances in segmentation and severity classification methods for COVID-19 diagnosis, there is still room for additional research on these issues. One of the current challenges is to introduce a three-dimensional framework for advanced COVID-19 classification and lesion visualization using cutting-edge technologies such as virtual reality (VR) and augmented reality (AR). In the last decade, researchers in biomedical areas have expressed their interest in VR and AR as novel technologies for better treatments and healthcare information systems (HIS) [27,28,29,30]. This is particularly due to recent developments in camera technology and the processing power of computer hardware. VR and AR could be an alternative to standard 2D technologies for the 3D visualization of biomedical images [31,32]. They may serve as non-destructive diagnostic methods for better analyzing, locating, and measuring the volume of infected regions [33]. VR immerses users in an entirely digital environment, while AR overlays 3D models on the real world. The ongoing COVID-19 pandemic shows an increased need for VR and AR [34,35]. The work presented in [36,37] describes several VR applications related to the pandemic. These cutting-edge technologies could help contain the pandemic and could also be useful within the healthcare community.
In this paper, an automated platform for COVID-19 CT-scan imagery data analysis, classification, and 3D visualization using both virtual and augmented reality technologies is reported in detail in the following sections. The main contributions of this paper include the following:
  • A new accurate double logarithmic entropy KL2 algorithm for segmentation and localization in CT images.
  • A novel lesion/lung voxel-based measurement method for quantifying infection.
  • A new combined VR/AR 3D visualization system with a user-friendly interface.
  • A COVI3D platform that implements the segmentation, classification, and virtual/augmented reality algorithms.
This work used a dataset from 500 patients (22,400 CT slices) for tests and validation. The remainder of this paper is structured as follows: Section 2 reports the proposed methods, including the segmentation, 3D classification as well as approaches related to VR and AR visualization and interaction. Section 3 presents experiments and evaluation results. Section 4 and Section 5 provide the discussion and conclusions of the paper.

2. Materials and Methods

This section presents a detailed overview of the proposed COVI3D system. It consists of three main modules: the segmentation module (SM), the 3D reconstruction and classification module (3DRCM), and the virtual/augmented reality rendering and interaction module (VAR2IM), as shown in Figure 1. Firstly, the SM module takes CT-scan images as input and produces segmented lesions. Secondly, the 3DRCM module applies the marching cubes and volume rendering algorithms to generate 3D mesh models, to which we then apply the voxel-based classification procedure. Lastly, the VAR2IM module provides a volumetric display of the three-dimensional lungs, including lesions.

2.1. KL2-Entropy Based Recognition and Segmentation Algorithm

Since the input computed tomography images contain noise, we pre-processed the data with the algorithm proposed in [38]. Once an image was enhanced, we applied a new double logarithmic Kapur entropy (KL2) method that partitions a CT image into pneumonia and common regions. The proposed method combines double logarithmic entropy (L2_Entropy) with Kapur's entropy (K_Entropy) [39]. L2-entropy is defined as:
$$T_L = \arg\max_t \left( s_1(t) + s_2(t) \right)$$
where:
$$s_1(t) = \left( h_l(t) + 1 \right)^{\gamma \cos\left( \log\left( \log\left( h_l(t) + 1 \right) \right) \right)}$$
and:
$$s_2(t) = \left( h_u(t) + 1 \right)^{\gamma \cos\left( \log\left( \log\left( h_u(t) + 1 \right) \right) \right)}$$
$$h_l(t) = \sum_{s} \sum_{t=1}^{n} H(s, t)$$
$$h_u(t) = \sum_{s} \sum_{t=n+1}^{L} H(s, t)$$
$$H(s, t) = \operatorname{card}\left( P_{i,j} \right)$$
where $P_{i,j}$ represents a paired image between a given image $X_{i,j}$ and its denoised version; $H(s, t)$ denotes a matched histogram on a 2D luminance plane, with $x_0 \le s, t \le x_{L-1}$.
K_Entropy is an unsupervised thresholding technique that selects optimal thresholds based on the entropy of segmented histograms [39]. The objective function of Kapur's entropy can be defined as:
$$T_K = \arg\max_t K(t)$$
where:
$$K(t) = H_0 + H_1 + \cdots + H_n$$
$$H_0 = -\sum_{t=0}^{t_1 - 1} \frac{p(t)}{c_0} \log \frac{p(t)}{c_0}, \qquad c_0 = \sum_{t=0}^{t_1 - 1} p(t)$$
$$H_1 = -\sum_{t=t_1}^{t_2 - 1} \frac{p(t)}{c_1} \log \frac{p(t)}{c_1}, \qquad c_1 = \sum_{t=t_1}^{t_2 - 1} p(t)$$
$$H_n = -\sum_{t=t_n}^{L - 1} \frac{p(t)}{c_n} \log \frac{p(t)}{c_n}, \qquad c_n = \sum_{t=t_n}^{L - 1} p(t)$$
where $H_0, H_1, \ldots, H_n$ represent the entropy values for the thresholds $\{t_0, t_1, \ldots, t_n\}$, $p(t)$ denotes the probability density function of the image, and $c_n$ is the cumulative density function of region $n$.
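For concreteness, the following is a minimal NumPy sketch of the single-threshold case of Kapur's method (the multi-threshold form above follows the same pattern); the function name, the 8-bit histogram range, and the epsilon guard are our assumptions, not the platform's implementation.

```python
import numpy as np

def kapur_threshold(image, n_bins=256):
    """Single-threshold Kapur entropy: scan every candidate threshold t and
    keep the one maximizing the sum of the entropies of the two histogram
    regions [0, t) and [t, L). Assumes an 8-bit grayscale image."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    p = hist.astype(np.float64) / hist.sum()   # probability density p(t)
    eps = 1e-12                                # guard against log(0)

    best_t, best_entropy = 0, -np.inf
    for t in range(1, n_bins):
        c0, c1 = p[:t].sum(), p[t:].sum()      # cumulative densities c_0, c_1
        if c0 < eps or c1 < eps:
            continue
        h0 = -np.sum((p[:t] / c0) * np.log(p[:t] / c0 + eps))
        h1 = -np.sum((p[t:] / c1) * np.log(p[t:] / c1 + eps))
        if h0 + h1 > best_entropy:
            best_entropy, best_t = h0 + h1, t
    return best_t
```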
KL2-entropy combines Kapur's threshold and the double logarithmic threshold:
$$T = \arg\max_t \left[ \lambda \cdot T_K + T_L (1 - \lambda) \cdot \frac{T_K \cdot T_L}{\max(T_K, T_L)} \right]$$
where $0 \le \lambda \le 1$. $T_K$ and $T_L$ denote Kapur's threshold and the double logarithmic threshold, respectively, and $\lambda$ represents a threshold weight. From the defined region threshold $T$, a local mask of sub-regions was generated as shown in the following equation:
$$m(i, j, k) = \begin{cases} x_{L-1} & \text{if } \operatorname{card}(R_{i,j,k}) \le T \\ x_0 & \text{if } \operatorname{card}(R_{i,j,k}) > T \end{cases}$$
where $R_{i,j,k}$ denotes the set of pixels in sub-region $k$, $T$ is the threshold, and $x_0$ and $x_{L-1}$ represent the first and last grayscale levels of the image.
To generate the global mask of sub-regions, we rely on:
$$M(i, j) = \max_k \, m(i, j, k)$$
where $m(i, j, k)$ represents the local mask of sub-region $k$, $(i, j)$ ranges over the image dimensions, and $k$ is the sub-region index.
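A sketch of how the threshold fusion and the two-level masking could look in code is given below. It follows our reading of the reconstructed equations above; the block size, the interpretation of card(R) as a count of above-threshold pixels, and all names are assumptions rather than the platform's implementation.

```python
import numpy as np

def kl2_threshold(t_kapur, t_l2, lam=0.5):
    # Fusion of T_K and T_L with weight lam (0 <= lam <= 1), mirroring the
    # reconstructed combination rule above; treat the exact form as a sketch.
    return lam * t_kapur + t_l2 * (1.0 - lam) * (t_kapur * t_l2) / max(t_kapur, t_l2)

def global_mask(image, threshold, block=32, x0=0, x_last=255):
    """Build local sub-region masks m(i,j,k) and fuse them into the global
    mask M(i,j) = max_k m(i,j,k). card(R) is taken here as the number of
    above-threshold pixels in the sub-region (an assumption)."""
    mask = np.zeros_like(image)
    h, w = image.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            sub = image[i:i + block, j:j + block]
            card = np.count_nonzero(sub >= threshold)
            value = x_last if card <= threshold else x0
            mask[i:i + block, j:j + block] = np.maximum(
                mask[i:i + block, j:j + block], value)
    return mask
```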

2.2. Three-Dimensional Reconstruction and Classification

The surface models of the segmented 2D images were used to generate data volumes by applying the iso-surface extraction marching cubes [40] and data volume-rendering [41] algorithms. An iso-surface is the set of points with a given iso-value in the volume data:
$$\left\{ (x, y, z) \in \mathbb{R}^3 : f(x, y, z) = k \right\}$$
where $(x, y, z)$ represents the grid position and $k \in \mathbb{R}$ represents an iso-value.
The iso-surface extraction process generates triangular meshes that approximate the desired surface. The marching cubes scheme is essentially based on the divide-and-conquer technique [40], meaning that the volume data is processed voxel by voxel (with voxels represented by cubes) [42]. For data volume rendering, we used the ray-casting 3D reconstruction algorithm [41]. For each pixel of the screen, the corresponding ray passes through voxels that have been given opacity and color values, thus forming a 3D model that can describe the internal information. Each voxel has two characteristic values [41]: $c(x_i)$, a shade calculated from a reflection model using the local gradient, and $\alpha(x_i)$, an opacity derived from tissue types of known CT values.
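As an aside, the iso-surface extraction step can be reproduced with scikit-image's marching cubes implementation; a minimal sketch, in which the input file, iso-value, and voxel spacing are assumptions:

```python
import numpy as np
from skimage import measure

# Hypothetical segmented CT volume: nonzero voxels mark lung/lesion tissue.
volume = np.load("segmented_ct_volume.npy")        # assumed (Z, Y, X) array

# Extract the iso-surface f(x, y, z) = k as a triangular mesh; `level` plays
# the role of the iso-value k, `spacing` of the CT voxel size (assumed here).
verts, faces, normals, values = measure.marching_cubes(
    volume, level=0.5, spacing=(1.0, 0.7, 0.7))

print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```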
For each voxel along a ray, the standard transparency formula can be defined as:
$$C_{out} = C_{in} \left( 1 - \alpha(x_i) \right) + c(x_i) \cdot \alpha(x_i)$$
where $C_{out}$ describes the outgoing intensity and color for voxel $x_i$ along the ray, and $C_{in}$ is the incoming intensity for that voxel.
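A minimal sketch of this compositing rule along a single ray, assuming per-voxel shades $c(x_i)$ and opacities $\alpha(x_i)$ have already been assigned:

```python
def composite_ray(shades, opacities):
    """Back-to-front compositing: C_out = C_in * (1 - a_i) + c_i * a_i.
    `shades` and `opacities` are ordered from the voxel nearest the screen
    to the farthest, so we iterate in reverse (far to near)."""
    c_out = 0.0
    for c, a in zip(reversed(shades), reversed(opacities)):
        c_out = c_out * (1.0 - a) + c * a
    return c_out

# Toy example with three made-up samples along one ray:
print(composite_ray([0.9, 0.5, 0.2], [0.1, 0.4, 0.8]))
```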
As voxels represent values on regular grids in three-dimensional space, they can be used to refine the infection classification and to help radiologists discriminate between different severity levels across the dataset.
Let x i denote a voxel of the 3D infected lung model and n the number of voxels of the model. The total lung volume is given as:
$$V = \sum_{i=1}^{n} x_i$$
To quantify the severity, we propose splitting the 3D model into two 3D sub-models. The first sub-model includes the lesion only, while the second sub-model contains lung and bronchi 3D information (see Figure 2). For each sub-model, we calculate the number of voxels as follows:
$$V_{inf} = \sum_{i=1}^{n} x_{i\_lesion}$$
$$V_{lung} = \sum_{i=1}^{n} x_{i\_lung}$$
where $x_{i\_lesion}$ and $x_{i\_lung}$ represent a lesion voxel and a lung-bronchi voxel, respectively.
Then, we calculate the percentage of infection (PI) as the ratio of the infected volume $V_{inf}$ to the entire lung-bronchi volume $V_{lung}$:
$$PI = \frac{V_{inf}}{V_{lung}}$$
Figure 2 shows the classification procedure on a 3D infected lung. For better 3D visualization, red was used for lesion voxels, while grey was associated with lung-bronchi voxels. The 3D model is classified as COVID-19 positive when it contains at least one lesion (red) voxel; otherwise, it is considered clean (only grey voxels).
From the value of PI, the patient’s severity was classified (see Figure 2). We applied the approach proposed in [26] to define four classes of severity: mild, moderate, severe, and critical infection.
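The voxel counting and PI-based classification can be sketched as follows; the label encoding and the PI cutoffs separating the four classes are illustrative placeholders, since the exact cutoffs are not stated here:

```python
import numpy as np

def percentage_of_infection(labels):
    """PI = V_inf / V_lung from a labeled voxel grid, where (by assumption)
    1 marks lesion voxels (red) and 2 marks lung-bronchi voxels (grey)."""
    v_inf = np.count_nonzero(labels == 1)
    v_lung = np.count_nonzero(labels == 2)
    return v_inf / v_lung

def severity_class(pi, cutoffs=(0.05, 0.25, 0.50)):
    # Hypothetical cutoffs mapping PI to the four classes of [26].
    mild, moderate, severe = cutoffs
    if pi < mild:
        return "mild"
    if pi < moderate:
        return "moderate"
    return "severe" if pi < severe else "critical"

# Worked example with the voxel counts reported in Section 3.2:
pi = 273_585 / 6_899_553
# prints "PI = 3.97% -> mild" (the paper truncates the ratio to 3.96%)
print(f"PI = {pi:.2%} -> {severity_class(pi)}")
```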

2.3. Virtual and Augmented Reality Visualization and Interaction for Diagnostic Aid

In this sub-section, we integrate both virtual and augmented reality with CT-scan imagery to generate a three-dimensional, realistic display of COVID-19 lesions. Recent research in biomedical applications has shown the usefulness of game engines for 3D medical image visualization [32,43]. Unity3D is one of the most prominent game engine platforms (www.unity3d.com, accessed on 16 December 2021) for building 3D medical images and simulating real-world environments [43]. In our case, we used Unity3D to render the 3D models.
The dataset contains 3D reconstructed models of infected lungs. Unity3D supports several 3D formats, including OBJ and FBX. We first converted the 3D reconstructed data into 3D mesh models using the Blender software, applying decimation to reduce the number of vertices and facets of the 3D meshes. Then, we transformed the refined models into FBX assets loaded into Unity3D as game components for both the AR and VR environments.
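This Blender step can be scripted in Python through the bpy module; a minimal sketch of the decimate-and-export pipeline, in which the file paths and the decimation ratio are assumptions (the importer call targets Blender 3.x):

```python
import bpy

# Import a reconstructed lung mesh (Blender 3.x OBJ importer).
bpy.ops.wm.obj_import(filepath="reconstructed_lung.obj")
obj = bpy.context.selected_objects[0]
bpy.context.view_layer.objects.active = obj

# Decimate: reduce vertices/facets while preserving the overall shape.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.3                       # keep roughly 30% of the faces (assumed)
bpy.ops.object.modifier_apply(modifier=mod.name)

# Export the refined model as FBX, which Unity3D imports directly.
bpy.ops.export_scene.fbx(filepath="refined_lung.fbx", use_selection=True)
```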

2.3.1. 3D Visualization

In terms of the VR interface, Unity3D encompasses game objects that manage lights, cameras, 3D models, and elements of the environment. The most important game objects are the 3D lung models imported in FBX format from Blender. To complete the VR diagnostic office, optional elements can be imported from Unity's Asset Store and/or dedicated software. VR visualization is performed through a computer monitor connected to a head-mounted display (HMD), either the standalone Oculus Quest 2 or the Oculus Rift S [44]. The Oculus SDK was used to manage both 3D visualization and head and hand movement.
In terms of the AR interface, we integrated the Vuforia SDK (https://developer.vuforia.com/, accessed on 16 December 2021) to recognize the scene and display augmented COVID-19 lungs in the radiologists' working environment. Vuforia supports several Unity3D functions, such as image and video processing, tracking, video rendering, and code and user-defined targets. For AR visualization, we utilized Microsoft Surface tablets and smartphones, both equipped with a video camera, to process spatial information in three dimensions. We also used an AR head display with an integrated mono or stereo camera.

2.3.2. 3D Interaction

We implemented a 3D interaction algorithm through two data-gloves in order to manipulate 3D models in both VR and AR. Two phases were addressed: (1) access the 3D lung and (2) manipulate the 3D lung.
Access the 3D lung: we used the zoom-in technique [45] to reach and bring closer distant 3D lungs by zooming into the working environment.
Manipulate the 3D lung: first, we defined a virtual hand model that reproduces the movements of the real hand. Then, we calculated the incidence coordinates between two virtual fingers (index and thumb) and a 3D lung. For each position of the two virtual fingers, we updated the 3D lung coordinates to match the hand movements.
Let the encompassing lung volume be $E \subset \mathbb{R}^3$, let the index and thumb of the virtual hand be $index \subset h$ and $thumb \subset t$ with $h \times t \subset \mathbb{R}^3 \times \mathbb{R}^3$, and let the lung volume be $V \subset \mathbb{R}^3$.
We computed the incidence coordinate between the index and the encompassing lung volume $E$ as $(x_{d_{i,j}}, y_{d_{i,j}}, z_{d_{i,j}}) = h_{i,j} \cap E$, and the incidence coordinate between the thumb and $E$ as $(x_{t_{k,l}}, y_{t_{k,l}}, z_{t_{k,l}}) = t_{k,l} \cap E$.
We then calculated the new coordinates of the two incidence points, $h_{i+1,j+1}$ and $t_{k+1,l+1}$. Let $\Theta_{\tilde{\imath}}$ and $T_{\tilde{\imath}}$ be, respectively, the rotation matrix and translation vector of lung $i$ along the x, y, and z axes.
The manipulation function $M_{lung} = (\Theta_{\tilde{\imath}}, T_{\tilde{\imath}})$ was calculated through the following Algorithm 1:
Algorithm 1: 3D manipulation algorithm
  Input: index coordinate h_i, thumb coordinate t_k, encompassing lung volume coordinates e
  Output: lung matrices (rotation matrix Θ_ĩ, translation vector T_ĩ)
  h_i^T ← coordinateFromIndex(x_d_i, y_d_i, z_d_i)
  t_k^T ← coordinateFromThumb(x_t_k, y_t_k, z_t_k)
  e_j^T ← coordinateFromCube(x_c_j, y_c_j, z_c_j)
  e_l^T ← coordinateFromCube(x_c_l, y_c_l, z_c_l)
  while (h_i^T ≠ const) and (t_k^T ≠ const) do
    if (h_i^T = e_j^T) and (t_k^T = e_l^T) then
      h_{i,j}^T ← IncidenceIndexCube(h_i^T)
      t_{k,l}^T ← IncidenceThumbCube(t_k^T)
      if (h_{i+1,j+1}^T ≠ h_{i,j}^T) and (t_{k+1,l+1}^T ≠ t_{k,l}^T) then
        Θ_ĩ ← (h_{i+1,j+1}^T − t_{k+1,l+1}^T) × (h_{i,j}^T − t_{k,l}^T)^{-1}
        T_ĩ ← h_{i+1,j+1}^T − (h_{i+1,j+1}^T − t_{k+1,l+1}^T) × (h_{i,j}^T − t_{k,l}^T)^{-1} × h_{i,j}
      end if
    end if
  end while
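To make the idea concrete, the following NumPy sketch derives a rotation matrix and translation vector for the grabbed lung from consecutive index/thumb positions. It captures the spirit of Algorithm 1 rather than its exact update rule; all names and the midpoint/Rodrigues formulation are ours.

```python
import numpy as np

def grab_transform(index_prev, thumb_prev, index_curr, thumb_curr):
    """Simplified two-finger grab: the rotation aligns the previous
    finger-to-finger vector with the current one (Rodrigues' formula);
    the translation follows the pinch midpoint. Inputs are 3D points."""
    a = index_prev - thumb_prev
    b = index_curr - thumb_curr
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)

    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.linalg.norm(v) < 1e-9:       # parallel (or antiparallel) fingers:
        rot = np.eye(3)                # no well-defined axis, keep identity
    else:
        vx = np.array([[0.0, -v[2], v[1]],
                       [v[2], 0.0, -v[0]],
                       [-v[1], v[0], 0.0]])
        rot = np.eye(3) + vx + vx @ vx / (1.0 + c)
    trans = (index_curr + thumb_curr) / 2.0 - (index_prev + thumb_prev) / 2.0
    return rot, trans
```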

3. Experiments and Summary Results

This section shows the results obtained from the proposed classification and visualization techniques applied to CT imagery from 500 patients with a confirmed positive COVID-19 test. These COVID-19 data were provided by the EL-BAYANE Radiology and Medical Imaging Center and labeled by a medical expert. Furthermore, we provide summary subjective results of the VR and AR visualization and interaction.

3.1. Database

Numerous COVID-19 public datasets are available; however, few of them come from North African countries. Figure 3 shows the statistics of the collected database. In order to obtain high-quality labeled data, we asked two experienced radiologists to tag the maximum number of lesions in the CT imagery.

3.2. Three-Dimensional Reconstruction and Classification Results

The marching cubes algorithm [40] was applied to the segmented images in order to extract polygonal meshes from the iso-surfaces of the 3D pathological structures (converting CT voxels to meshes). The meshes were corrected, and consistent models were generated. The voxel scaling effect, due to the segmentation process in anisotropic CT data, was smoothed, and generated artifacts were removed using modifiers such as Relax and TurboSmooth to avoid loss in model volume. The resulting digital 3D reconstructed models can be imported into various digital modeling software packages, enabling good representation, interpretation, and analysis of multi-variant lung involvement with different levels of severity.
We calculated the percentage of infection as the ratio of lesion volume to lung volume based on the computed numbers of voxels. For example, for a patient with lesion and lung volumes of 273,585 and 6,899,553 voxels, respectively, the ratio was 3.96%, which corresponds to the mild severity degree.
We further compared the proposed method with an existing classification approach [16] using the same four quality metrics, as illustrated in Table 1; we considered the average score over multiple lung region infections. Accuracy and specificity were higher with the proposed method, while sensitivity and precision were slightly lower, which may be related to the way the radiologists labeled the dataset.
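For reference, the four quality metrics in Table 1 follow directly from confusion-matrix counts; a generic sketch (the per-severity-class averaging used in the comparison is assumed):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity, and specificity from
    true/false positive and negative counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, precision, sensitivity, specificity
```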

3.3. Virtual Reality, Augmented Reality Visualization and Interaction Results

Figure 4 shows a radiologist visualizing three COVID-19 severity situations (mild, moderate, or critical infection) with the VR interface through multi-view access, using an Oculus Rift S head-mounted display (HMD). The experimenter can even navigate inside the 3D lungs and see more details of the 3D lesion texture using touch controllers, such as the developed data-gloves or Oculus Touch.
The radiologist is capable of making appropriate and quick decisions regarding the three patients. For example, the first patient (see Figure 4a) could be asked to take a treatment and isolate at home, while the third patient (see Figure 4c) would be admitted to the hospital's emergency department.
For the AR viewer, we obtained a similar situation, but the 3D models were directly aligned with the patient's body (see Figure 5). We used a dedicated AR HMD with an integrated video camera and the developed data-gloves to process spatial information in the radiologist's real environment and provide multi-view augmented reality visualization of three patients with different stages of infection (see Figure 5a–c). The radiologist's movements, through head and hand tracking, were translated into the 3D working environment, making the experience more realistic.

3.3.1. Experimental Evaluation

We performed a subjective evaluation with more than one hundred COVID-19-infected patients, comparing the proposed visualizations with conventional CT-scan visualizations.

3.3.2. Setup

The EL-BAYANE Radiology and Medical Imaging Center provided the COVI3D platform with an anonymized digital patient database containing CT imagery and patient information, such as diagnostic reports and history; the platform itself is generalizable beyond the EL-BAYANE Center. COVI3D ran on an MSI workstation with an Intel i7-9750H CPU at 2.60 GHz, 32 GB of memory, and an NVIDIA GeForce RTX 2060 graphics card. This workstation was connected to an Oculus Rift S and Vuzix Wrap 1200 glasses for the VR and AR experiences, respectively. A light version of the proposed user interfaces was implemented on Microsoft Surface tablets with an Intel Pentium 4415Y CPU at 1.61 GHz, 4 GB of memory, and an Intel HD Graphics 615 graphics card.

3.3.3. Subjects

Two main categories of participants were involved in the experiments: (1) recovered patients and (2) medical staff, composed of radiologists/doctors (students, residents and experts) and nurses.

3.3.4. Procedure

(1) Recovered patients (n = 150) were enrolled in this study; their mean age was 64.38 ± 7.86 years. We evaluated their awareness of the disease severity and its impact on their ability to protect themselves after recovery. The participants were asked to access the diagnosis results through 2D images and through 3D models visualized in VR. Each participant had enough time to become familiar with the VR application and the CT images. (2) For the medical staff, we measured the usefulness of VR and AR as diagnostic aids. Eighteen subjects (n = 18), with a mean age of 48.38 ± 9.32 years, experienced both the VR and AR applications and provided their feedback. The subjects were instructed to use the Oculus VR display, Vuzix AR glasses, tablets, and smartphones for two 8-minute sessions to become familiar with the 3D interfaces.
Once the experimenters completed their trials, we asked them to fill out two survey questionnaires individually. They responded using a 7-point Likert scale: (1) Strongly Disagree, (2) Disagree, (3) Somewhat Disagree, (4) Fair, (5) Somewhat Agree, (6) Agree, (7) Strongly Agree. Table 2 shows patient questionnaire (PQ), while Table 3 presents the medical staff questionnaire (MSQ).
Results: We analyzed patient and medical staff responses with repeated-measures ANOVAs. Table 4 and Figure 6 show the average responses of PQ regarding 2D images and 3D models and on display devices. Table 5 provides the average responses of MSQ and the differences among residents, experts and students.
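Such a repeated-measures ANOVA can be run, for example, with statsmodels; the sketch below uses long-format data with hypothetical column names and made-up scores, not the study's actual responses:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One Likert score per participant per display condition (toy data).
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "display":     ["Oculus", "Tablet", "Smartphone"] * 3,
    "score":       [6, 5, 4, 7, 5, 5, 6, 4, 4],
})

# Does the display condition affect the rating, within subjects?
res = AnovaRM(df, depvar="score", subject="participant", within=["display"]).fit()
print(res)
```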
Patient survey: 37 of the 150 patients completed the survey. Overall, the VR models performed better than the 2D CT-scan images (see Table 4). Patients reported better comprehension of the disease using 3D models compared to 2D images, and the same held for lesion size and location. They also reported a better understanding of the treatment plan (6.11/7 vs. 5.03/7). Finally, they reported a greater awareness of the disease severity using VR models compared to CT images.
The 37 participants reported on the usefulness of the three display devices (see Figure 6), with results for the Oculus Rift ranging from 5.95 to 6.447/7, the smartphone from 4.008 to 5.05/7, and the tablet from 4.544 to 5.293/7. They found the Oculus Rift more helpful than the tablet for understanding the COVID-19 volume and its distribution inside the lung (6.447 ± 1.043 vs. 4.764 ± 1.988, p = 0.03). Moreover, the participants noted that the Oculus Rift was more valuable than the other devices for becoming conscious of the severity of the disease (6.317 ± 1.302 vs. 5.05 ± 2.345 vs. 5.293 ± 1.435, p < 0.04). They provided similar opinions on most questions.
Medical staff survey: among the 18 participants evaluating the system, there were seven medical students, five radiology residents, three experts, and three nurses. Most participants (see Table 5) considered the VR and AR experiences enjoyable (91.32% of responses positive) and agreed that the assessment of complex cases was more comprehensive (90.68% positive) and could be performed quickly (81.28% positive). They noticed that the AR models conveyed the true scale of lesion volumes, as well as some pertinent anatomical structures. Participants reported being able to classify severity cases efficiently and to prioritize patients with serious cases to receive treatment first.
More than 62% positively rated the didactic potential of VR and AR for clinical use and for training residents and medical students. Experts reported potential for clinical applications and training. Nevertheless, three participants had difficulty becoming familiar with the AR glasses, and two participants complained of motion sickness.

4. Discussion

The evaluation results obtained through VR and AR showed the usefulness of 3D displays for understanding disease severity. The recovered patients participating in the experiments revealed the potential of VR models for lesion recognition and localization compared to 2D CT images. Medical staff found VR and AR enjoyable and easy to use, as they could diagnose complex cases better and faster. Moreover, experts pointed out the potential for clinical use of these novel technologies, and residents may benefit from them to take part in the diagnosis. Published studies also show growing interest in using VR and AR to train residents [46,47].
Among the weaknesses of the study were the motion sickness and individual incompatibility that VR and AR may cause. Appropriate devices with low latency and high resolution could reduce these effects, and the discomfort could also be minimized through repeated use of such technologies.

5. Conclusions

Over the last decade, virtual reality and augmented reality have represented a breakthrough for healthcare professionals, offering 3D models that realistically describe a patient's internal structure. With the ongoing COVID-19 pandemic and its variants, the use of these technologies has become highly recommended.
The main contribution of this study is an original approach that efficiently brings together segmentation, classification, and virtual and augmented reality with computerized tomography images for COVID-19 diagnostic aid. We provided a powerful automatic platform for visually assessing and classifying the 3D internal lung structure of COVID-19-infected patients, enhancing classical 2D image-based diagnostics. Moreover, the platform is able to classify the severity level of COVID-19 based on the percentage of infection (PI), validated on a dataset of 500 patients. Medical staff reported the usefulness of the proposed COVI3D platform in diagnostics, since it provides a better interpretation and analysis of radiological results, and the platform could be a serious alternative for treatment planning and training. Doctors expressed their ability to analyze 3D COVID-19 models from different perspectives and with depth perception, which is not feasible with 2D screens; the 3D models allow better visualization of the results with a realistic view and an accurate scale. Thus, the platform offers the opportunity to save time and money while enabling the study of the internal textures of infected lungs. Furthermore, the proposed system could also be a relevant solution for helping recovered patients be aware of the disease severity and protect themselves and those around them.
In the future, we intend to deploy the COVI3D platform in public hospitals as well as private clinics to make it easier for doctors and radiologists to review CT scans on a regular basis. To this end, we will continue to advance the development of automatic segmentation, classification, and 3D visualization approaches that can be used in a variety of clinical circumstances. In this work, we focused only on CT medical images, but further research into magnetic resonance compatibility is desirable. COVI3D should also be tested with a larger number of medical personnel in a variety of settings and across ethnicities (Asian, European, etc.). We also plan to extend the platform to support coronavirus mutations and to evaluate the robustness of our classification approach against the various variants. Finally, we expect to quantify and visualize post-COVID-19 symptoms, such as pulmonary embolism following recovery. In the next decade, we believe that medical professionals, residents, and students could widely use VR and AR as these technologies become available to the general public.

Author Contributions

S.A., S.B. and A.O., methodology; S.B., A.O. and T.T., validation; S.B., A.O. and N.Z.-H., formal analysis, investigation; S.A., data generation; S.B., A.O. and M.Z., writing—original draft preparation; S.B. and A.O., writing—review and editing; D.A. and M.M., project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, due to the lack of a Review Board and an Ethics Committee.

Informed Consent Statement

Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

Data supporting reported results can be found at the Centre de Développement des Technologies Avancées (CDTA), Robotics and Industrial Automation Division, Algiers, Algeria.

Acknowledgments

The authors would like to thank Moussa ABDOU from CDTA, who graciously recorded the AR application, and Mohamed Lamine ABDELLI, who generously provided the open-sourced image database from the EL-BAYANE Radiology and Medical Imaging Center. The authors would also like to thank the staff of the Centre Médico-Social (CMS-CDTA), Amira Hamou, Kamal Ait Saada, and Azeddine Yeddou, for their contributions to the questionnaire survey. Thanks to Rajamangala University of Technology Phra Nakhon (RMUTP) for supporting some programming facilities.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
COVID-19: Coronavirus Disease 2019
PI: Percentage of Infection
AR: Augmented Reality
VR: Virtual Reality
COVID-SVAR: COVID-19 Segmentation, Virtual and Augmented Reality dataset
HMD: Head-Mounted Display
PQ: Patient Questionnaire
MSQ: Medical Staff Questionnaire

References

  1. Puligedda, R.D.; Al-Saleem, F.H.; Wirblich, C.; Kattala, C.D.; Jovic, M.; Geiszler, L.; Devabhaktuni, H.; Feuerstein, G.Z.; Schnell, M.J.; Sack, M.; et al. A strategy to detect emerging non-delta SARS-CoV-2 variants with a monoclonal antibody specific for the N501 spike residue. Diagnostics 2021, 11, 2092.
  2. Pomara, C.; Sessa, F.; Ciaccio, M.; Dieli, F.; Esposito, M.; Giammanco, G.; Garozzo, S.; Giarratano, A.; Prati, D.; Rappa, F.; et al. COVID-19 vaccine and death: Causality algorithm according to the WHO eligibility diagnosis. Diagnostics 2021, 11, 955.
  3. Strizova, Z.; Smetanova, J.; Bartunkova, J.; Milota, T. Principles and challenges in anti-COVID-19 vaccine development. Int. Arch. Allergy Immunol. 2021, 182, 339–349.
  4. Haque, A.; Pant, A.B. Efforts at COVID-19 vaccine development: Challenges and successes. Vaccines 2020, 8, 739.
  5. Li, Y.; Tenchov, R.; Smoot, J.; Liu, C.; Watkins, S.; Zhou, Q. A comprehensive review of the global efforts on COVID-19 vaccine development. ACS Cent. Sci. 2021, 7, 512–533.
  6. Renzoni, A.; Perez, F.; Ngo Nsoga, M.T.; Yerly, S.; Boehm, E.; Gayet-Ageron, A.; Kaiser, L.; Schibler, M. Analytical evaluation of visby medical RT-PCR portable device for rapid detection of SARS-CoV-2. Diagnostics 2021, 11, 813.
  7. Tahamtan, A.; Ardebili, A. Real-time RT-PCR in COVID-19 detection: Issues affecting the results. Expert Rev. Mol. Diagn. 2020, 20, 453–454.
  8. Oulefki, A.; Agaian, S.; Trongtirakul, T.; Benbelkacem, S.; Aouam, D.; Zenati-Henda, N.; Abdelli, M.-L. Virtual reality visualization for computerized COVID-19 lesion segmentation and interpretation. Biomed. Signal Process. Control 2022, 73, 103371.
  9. Bollineni, V.R.; Nieboer, K.H.; Döring, S.; Buls, N.; de Mey, J. The role of CT imaging for management of COVID-19 in epidemic area: Early experience from a University Hospital. Insights Imaging 2021, 12, 10.
  10. Gülgösteren, S.; Aloglu, M.; Akgündüz, B.; Gökçek, A.; Atikcan, Ş. Characteristics of thoracic CT findings in differentiating COVID-19 pneumonia from non-COVID-19 viral pneumonia. J. Health Care Res. 2021, 2, 110.
  11. Benbelkacem, S.; Oulefki, A.; Agaian, S.S.; Trongtirakul, T.; Aouam, D.; Zenati-Henda, N.; Amara, K. Lung infection region quantification, recognition, and virtual reality rendering of CT scan of COVID-19. In Multimodal Image Exploitation and Learning 2021; International Society for Optics and Photonics: Bellingham, WA, USA, 2021; Volume 11734, p. 117340I.
  12. Fujioka, T.; Takahashi, M.; Mori, M.; Tsuchiya, J.; Yamaga, E.; Horii, T.; Yamada, H.; Kimura, M.; Kimura, K.; Kitazume, Y.; et al. Evaluation of the usefulness of CO-RADS for chest CT in patients suspected of having COVID-19. Diagnostics 2020, 10, 608.
  13. Wang, S.; Dong, D.; Li, L.; Li, H.; Bai, Y.; Hu, Y.; Huang, Y.; Yu, X.; Liu, S.; Qiu, X.; et al. A deep learning radiomics model to identify poor outcome in COVID-19 patients with underlying health conditions: A multicenter study. IEEE J. Biomed. Health Inform. 2021, 25, 2353–2362.
  14. Wang, Z.; Liu, Q.; Dou, Q. Contrastive cross-site learning with redesigned net for COVID-19 CT classification. IEEE J. Biomed. Health Inform. 2020, 24, 2806–2813.
  15. Yang, Z.; Zhao, L.; Wu, S.; Chen, C.Y.-C. Lung lesion localization of COVID-19 from chest CT image: A novel weakly supervised learning method. IEEE J. Biomed. Health Inform. 2021, 25, 1864–1872.
  16. Qiblawey, Y.; Tahir, A.; Chowdhury, M.; Khandakar, A.; Kiranyaz, S.; Rahman, T.; Ibtehaz, N.; Mahmud, S.; Maadeed, S.; Musharavati, F.; et al. Detection and severity classification of COVID-19 in CT images using deep learning. Diagnostics 2021, 11, 893.
  17. Akbari, Y.; Hassen, H.; Al-Maadeed, S.; Zughaier, S.M. COVID-19 lesion segmentation using lung CT scan images: Comparative study based on active contour models. Appl. Sci. 2021, 11, 8039.
  18. Wang, B.; Jin, S.; Yan, Q.; Xu, H.; Luo, C.; Wei, L.; Zhao, W.; Hou, X.; Ma, W.; Xu, Z.; et al. AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system. Appl. Soft Comput. 2021, 98, 106897.
  19. Shan, F.; Gao, Y.; Wang, J.; Shi, W.; Shi, N.; Han, M.; Xue, Z.; Shen, D.; Shi, Y. Lung infection quantification of COVID-19 in CT images with deep learning. arXiv 2020, arXiv:2003.04655.
  20. Fan, D.-P.; Zhou, T.; Ji, G.-P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-Net: Automatic COVID-19 lung infection segmentation from CT images. IEEE Trans. Med. Imaging 2020, 39, 2626–2637.
  21. Ma, J.; Nie, Z.; Wang, C.; Dong, G.; Zhu, Q.; He, J.; Gui, L.; Yang, X. Active contour regularized semi-supervised learning for COVID-19 CT infection segmentation with limited annotations. Phys. Med. Biol. 2020, 65, 225034.
  22. Shen, C.; Yu, N.; Cai, S.; Zhou, J.; Sheng, J.; Liu, K.; Zhou, H.; Guo, Y.; Niu, G. Quantitative computed tomography analysis for stratifying the severity of coronavirus disease 2019. J. Pharm. Anal. 2020, 10, 123–129.
  23. Chaganti, S.; Grenier, P.; Balachandran, A.; Chabin, G.; Cohen, S.; Flohr, T.; Georgescu, B.; Grbic, S.; Liu, S.; Mellot, F.; et al. Automated quantification of CT patterns associated with COVID-19 from chest CT. Radiol. Artif. Intell. 2020, 2, e200048.
  24. Pu, J.; Leader, J.K.; Bandos, A.; Ke, S.; Wang, J.; Shi, J.; Du, P.; Guo, Y.; Wenzel, S.E.; Fuhrman, C.R.; et al. Automated quantification of COVID-19 severity and progression using chest CT images. Eur. Radiol. 2021, 31, 436–446.
  25. Sun, C.; Bai, Y.; Chen, D.; He, L.; Zhu, J.; Ding, X.; Luo, L.; Ren, Y.; Xing, H.; Jin, X.; et al. Accurate classification of COVID-19 patients with different severity via machine learning. Clin. Transl. Med. 2021, 11, e323.
  26. Huang, L.; Han, R.; Ai, T.; Yu, P.; Kang, H.; Tao, Q.; Xia, L. Serial quantitative chest CT assessment of COVID-19: A deep learning approach. Radiol. Cardiothorac. Imaging 2020, 2, e200075.
  27. McCann, R.A.; Armstrong, C.M.; Skopp, N.A.; Edwards-Stewart, A.; Smolenski, D.J.; June, J.D.; Metzger-Abamukong, M.; Reger, G.M. Virtual reality exposure therapy for the treatment of anxiety disorders: An evaluation of research quality. J. Anxiety Disord. 2014, 28, 625–631.
  28. Moreno, A.; Wall, K.J.; Thangavelu, K.; Craven, L.; Ward, E.; Dissanayaka, N.N. A systematic review of the use of virtual reality and its effects on cognition in individuals with neurocognitive disorders. Alzheimer's Dement. Transl. Res. Clin. Interv. 2019, 5, 834–850.
  29. Aouam, D.; Zenati-Henda, N.; Benbelkacem, S.; Hamitouche, C. An interactive VR system for anatomy training. In Mixed Reality and Three-Dimensional Computer Graphics; IntechOpen: London, UK, 2020.
  30. Touel, S.; Mekkadem, M.; Kenoui, M.; Benbelkacem, S. Collocated learning experience within collaborative augmented environment (anatomy course). In Proceedings of the 5th International Conference on Electrical Engineering, Boumerdes, Algeria, 29–31 October 2017; pp. 1–5.
  31. Yeung, A.W.K.; Tosevska, A.; Klager, E.; Eibensteiner, F.; Laxar, D.; Stoyanov, J.; Glisic, M.; Zeiner, S.; Kulnik, S.T.; Crutzen, R.; et al. Virtual and augmented reality applications in medicine: Analysis of the scientific literature. J. Med. Internet Res. 2021, 23, e25499.
  32. González Izard, S.; Sánchez Torres, R.; Alonso Plaza, Ó.; Juanes Méndez, J.A.; García-Peñalvo, F.J. Nextmed: Automatic imaging segmentation, 3D reconstruction, and 3D model visualization platform using augmented and virtual reality. Sensors 2020, 20, 2962.
  33. Pareek, T.G.; Mehta, U.; Gupta, A. A survey: Virtual reality model for medical diagnosis. Biomed. Pharmacol. J. 2018, 11, 2091–2100.
  34. Asadzadeh, A.; Samad-Soltani, T.; Rezaei-Hachesu, P. Applications of virtual and augmented reality in infectious disease epidemics with a focus on the COVID-19 outbreak. Inform. Med. Unlocked 2021, 24, 100579.
  35. Shaikh, N.F.; Kunjir, A.; Shaikh, J.; Mahalle, P.N. COVID-19 Public Health Measures: An Augmented Reality Perspective; CRC Press: Boca Raton, FL, USA, 2021.
  36. Singh, R.P.; Javaid, M.; Kataria, R.; Tyagi, M.; Haleem, A.; Suman, R. Significant applications of virtual reality for COVID-19 pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. 2020, 14, 661–664.
  37. Mantovani, E.; Zucchella, C.; Bottiroli, S.; Federico, A.; Giugno, R.; Sandrini, G.; Chiamulera, C.; Tamburin, S. Telemedicine and virtual reality for cognitive rehabilitation: A roadmap for the COVID-19 pandemic. Front. Neurol. 2020, 11, 926.
  38. Oulefki, A.; Agaian, S.; Trongtirakul, T.; Laouar, A.K. Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images. Pattern Recognit. 2020, 114, 107747.
  39. Kapur, T.; Grimson, W.E.L.; Wells, W.M., III; Kikinis, R. Segmentation of brain tissue from magnetic resonance images. Med. Image Anal. 1996, 1, 109–127.
  40. Lorensen, W.E.; Cline, H.E. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH Comput. Graph. 1987, 21, 163–169.
  41. Levoy, M. Display of surfaces from volume data. IEEE Comput. Graph. Appl. 1988, 8, 29–37.
  42. Cirne, M.V.M.; Pedrini, H. Marching cubes technique for volumetric visualization accelerated with graphics processing units. J. Braz. Comput. Soc. 2013, 19, 223–233.
  43. Wheeler, G.; Deng, S.; Toussaint, N.; Pushparajah, K.; Schnabel, J.A.; Simpson, J.M.; Gomez, A. Virtual interaction and visualization of 3D medical imaging data with VTK and Unity. Healthc. Technol. Lett. 2018, 5, 148–153.
  44. Chessa, M.; Maiello, G.; Borsari, A.; Bex, P.J. The perceptual quality of the Oculus Rift for immersive virtual reality. Hum.-Comput. Interact. 2019, 34, 51–82.
  45. Bellarbi, A.; Otmane, S.; Zenati, N.; Belghit, H. Design and evaluation of zoom-based 3D interaction technique for augmented reality. In Proceedings of the Virtual Reality International Conference—Laval Virtual, Online, 22 March 2017; pp. 1–4.
  46. Kowalewski, K.-F.; Hendrie, J.; Schmidt, M.W.; Garrow, C.; Bruckner, T.; Proctor, T.; Paul, S.; Adigüzel, D.; Bodenstedt, S.; Erben, A.; et al. Development and validation of a sensor- and expert model-based training system for laparoscopic surgery: The iSurgeon. Surg. Endosc. 2017, 31, 2155–2165.
  47. Graafland, M.; Schraagen, J.M.; Schijven, M.P. Systematic review of serious games for medical education and surgical skills training. Br. J. Surg. 2012, 99, 1322–1330.
Figure 1. The proposed framework of lesion segmentation, classification and virtual/augmented reality rendering and diagnosis of COVID-19.
Figure 2. Proposed approach for severity classification.
Figure 3. COVID-SVAR data statistics.
Figure 4. Virtual reality viewer with different stages of disease severity: (a) mild, (b) moderate and (c) critical infection.
Figure 5. Augmented reality viewer with different stages of disease severity in different patients: (a) mild, (b) moderate and (c) critical infection.
Figure 6. Responses to the PQ questionnaire using the three display methods (error bars indicate the standard deviation).
Table 1. Quantitative evaluation of severity classification. Bold font indicates the best result obtained for each experiment.

Classification of Severity | Accuracy | Precision | Sensitivity | Specificity
Ratio of pixels [16] | 0.973 ± 0.02 | 0.858 ± 0.05 | 0.869 ± 0.05 | 0.974 ± 0.02
Proposed | 0.996 ± 0.00 | 0.796 ± 0.07 | 0.815 ± 0.06 | 0.997 ± 0.03
Table 2. Patient questionnaire (PQ).

Topic | Question
SPQ1: Understanding of disease | How do you rate your comprehension of your COVID-19 disease? (1: Not at all–7: Very well)
SPQ2: Disease awareness | I understand how big my COVID-lesion volume is. (1: Not at all–7: Very much)
SPQ3: Disease location | I can understand my COVID lesion location. (1: Not at all–7: Very well)
SPQ4: Treatment plan awareness | I can understand the reasons my doctor provided the treatment plan. (1: Not at all–7: Very much)
SPQ5: Satisfaction | I feel good about the treatment plan. (1: Not at all–7: Very much)
SPQ6: 3D model analysis | The 3D model helps me to learn about COVID-19 infection. (1: Not at all–7: Very much)
SPQ7: COVID gravity awareness | The 3D model helps me understand the complications of COVID propagation. (1: Not at all–7: Very much)
Table 3. Medical staff questionnaire (MSQ).

Topic | Question
MSQ1: Comfort | Was the VR & AR experience pleasant? (1: Not at all–7: Very well)
MSQ2: Usefulness (severity classification) | Is the evaluation of complex cases better with VR & AR compared to a standard display? (1: Not at all–7: Very much)
MSQ3: Fastness (severity classification) | Is the evaluation of complex cases faster? (1: Not at all–7: Very well)
MSQ4: Training efficiency (1) | How did you rate the ability for student training? (1: Not at all–7: Very much)
MSQ5: Training efficiency (2) | How did you rate the ability for resident training? (1: Not at all–7: Very much)
MSQ6: Practical use | How did you rate the ability for clinical use? (1: Very low–7: Very high)
Table 4. Survey responses for understanding of COVID-19 disease using CT-scan imagery versus VR models.

 | CT Images | VR Models
Comprehension of disease | 4.670 ± 0.678 | 6.250 ± 0.494
Lesion size | 3.231 ± 0.762 | 6.193 ± 0.672
Lesion location | 3.769 ± 0.525 | 6.613 ± 0.239
Comfort level | 4.931 ± 0.438 | 6.108 ± 0.219
Awareness of the disease gravity | 4.296 ± 0.397 | 6.201 ± 0.264
Table 5. Responses with median answers on the 7-point Likert scale.

Questions | Experts (n = 3) | Residents (n = 4) | Medical Students (n = 7) | Nurses (n = 4)
Comfort | 5.56 | 6.13 | 6.43 | 6.36
Usefulness (severity classification) | 5.18 | 6.49 | 6.58 | 6.28
Fastness (severity classification) | 5.10 | 6.02 | 6.24 | 6.10
Training efficiency (1) | 6.29 | 6.57 | 6.71 | 6.51
Training efficiency (2) | 6.12 | 6.21 | 6.32 | 6.32
Practical use | 6.39 | 6.50 | 6.61 | 6.23
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
