Article

Rapid On-Site AI-Assisted Grading for Lung Surgery Based on Optical Coherence Tomography

by Hung-Chang Liu, Miao-Hui Lin, Wei-Chin Chang, Rui-Cheng Zeng, Yi-Min Wang and Chia-Wei Sun
1 Section of Thoracic Surgery, Mackay Memorial Hospital, Taipei City 10449, Taiwan
2 Intensive Care Unit, Mackay Memorial Hospital, Taipei City 10449, Taiwan
3 Department of Medicine, Mackay Medical College, New Taipei City 25245, Taiwan
4 Department of Optometry, Mackay Junior College of Medicine, Nursing, and Management, Taipei City 11260, Taiwan
5 Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
6 Department of Pathology, Mackay Memorial Hospital, New Taipei City 25160, Taiwan
7 Department of Pathology, Taipei Medical University Hospital, Taipei City 11030, Taiwan
8 Department of Pathology, School of Medicine, College of Medicine, Taipei Medical University, Taipei City 11030, Taiwan
9 Institute of Biomedical Engineering, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
10 Medical Device Innovation and Translation Center, National Yang Ming Chiao Tung University, Taipei City 11259, Taiwan
* Author to whom correspondence should be addressed.
Cancers 2023, 15(22), 5388; https://doi.org/10.3390/cancers15225388
Submission received: 3 October 2023 / Revised: 2 November 2023 / Accepted: 8 November 2023 / Published: 13 November 2023
(This article belongs to the Special Issue Applications of Machine and Deep Learning in Thoracic Malignancies)

Simple Summary

In early-stage lung cancer surgery, determining the extent of resection relies on microscopic examination of frozen sections (FSs), especially when the histology is unknown preoperatively. While optical coherence tomography (OCT) holds promise for instant lung cancer diagnosis, grading tumors with OCT remains challenging. Our study proposes an interactive human–machine interface (HMI) that integrates a mobile OCT system, deep learning, and attention mechanisms. The interactive HMI can mark lesion locations on real-time images and perform tumor grading, aiding clinical decisions. In a trial with twelve preoperatively indeterminate adenocarcinoma patients who underwent thoracoscopic resection, the results of the presented HMI system outperformed frozen sections, achieving an 84.9% overall accuracy compared to FSs’ 20%, showcasing the HMI’s potential for rapid diagnostics and improved patient outcomes.

Abstract

The determination of resection extent traditionally relies on the microscopic invasiveness of frozen sections (FSs) and is crucial in surgery for early lung cancer with preoperatively unknown histology. While previous research has shown the value of optical coherence tomography (OCT) for instant lung cancer diagnosis, tumor grading through OCT remains challenging. Therefore, this study proposes an interactive human–machine interface (HMI) that integrates a mobile OCT system, deep learning algorithms, and attention mechanisms. The system is designed to mark the lesion’s location on the image and perform tumor grading in real time, potentially facilitating clinical decision making. Twelve patients with a preoperatively unknown tumor but a final diagnosis of adenocarcinoma underwent thoracoscopic resection, and the artificial intelligence (AI)-based system described above was used to measure fresh specimens. Results were compared to FSs, with permanent pathologic reports as the benchmark. Current results show better differentiating power among minimally invasive adenocarcinoma (MIA), invasive adenocarcinoma (IA), and normal tissue, with an overall accuracy of 84.9%, compared to 20% for FSs. Additionally, the sensitivity and specificity were 89% and 82.7% for MIA and 94% and 80.6% for IA, respectively. The results suggest that this AI system can potentially produce rapid and efficient diagnoses and ultimately improve patient outcomes.

Graphical Abstract

1. Introduction

According to statistics from the American Cancer Society (ACS) in 2023 [1], lung cancer ranks second in incidence among all cancers affecting both men and women and has the highest mortality rate. Nevertheless, most patients present at an advanced stage because the early diagnosis of lung cancer has historically been difficult owing to its subtle symptoms. A retrospective study found that lung cancer mortality correlated significantly with the time interval from diagnosis to the start of treatment [2]. Meanwhile, with the advancement of imaging technology, more early-stage lung cancers with preoperatively indeterminate histology are being found, even incidentally [3]. These early-stage and small lung tumors make rapid on-site intraoperative diagnoses crucial, as the histological grade influences surgical strategies [4] and indirectly affects disease survival.
Presently, the gold standard of microscopic diagnosis is the formalin-fixed paraffin-embedded (FFPE) section, for which samples are extracted intraoperatively. Although the microscopic resolution of FFPE enables a definite diagnosis, FFPE processing takes a couple of days to yield results and is thus not applicable for instant diagnosis. As a substitute for FFPE, a frozen section (FS) provides acceptable classification power for lung tumors, and the FS has long been the standard and only intraoperative diagnostic tool [5]. Unfortunately, owing to the intrinsic limitations of FSs, their accuracy for grading lung cancer is constrained and remains a matter of debate [6,7,8]. A microscopic examination of intact tissue morphology is needed to achieve high differentiation power; however, according to the reviewed literature, frozen sections still carry a certain risk of misleading surgical procedures, with an accuracy of 37–95% [4,7]. Developing an alternative or assistive method for intraoperative histologic diagnosis is therefore needed. Optical coherence tomography (OCT), a non-contact, non-invasive, and non-radiative modality based on the principle of low-coherence interferometry, provides real-time, depth-resolved images with micrometer-scale resolution. Previous research verified that OCT is a potentially promising tool for assisting lung tumor surgery [9,10]: OCT could distinguish intraoperatively between cancerous and normal tissue on fresh ex vivo specimens [9] and could provide both qualitative and additional quantitative analyses of lung tumors [10]. OCT can acquire images through instant scanning within seconds to minutes, and the resulting digitalized data lend themselves to easier and better-optimized differential diagnosis because they can be processed with artificial intelligence (AI).
AI has recently shown outstanding growth: continuous processing of incoming data by AI increases sensitivity and reduces mistakes. OCT imaging combined with AI has also emerged for various types of cancer [10,11,12,13,14,15,16,17,18,19]. For example, in breast cancer, a deep neural network (DNN) was used to perform real-time margin assessment in lumpectomy surgery, using AI to identify the edge of breast cancer [10]. One particularly interesting article uses reverse active learning to diagnose breast cancer OCT images [18]. However, the contribution of this new technology to the diagnosis of cancer-related lung tumors still needs further study. Accordingly, the current study was designed to explore the potential of combining OCT and AI technology for tumor diagnosis.

2. Materials and Methods

This clinical trial for lung tumors did not entail direct human body intervention; all tissue specimens remained confined within the hospital premises. The study was approved by the Institutional Review Board (IRB) of Mackay Memorial Hospital and focused exclusively on primary lung tumor patients aged 20 to 80, with an explicit exclusion criterion for individuals with metastatic carcinoma or who had previously undergone targeted therapy, systemic radiation therapy, or chemotherapy. The OCT cart was systematically deployed during surgical procedures to replicate clinical scenarios in the operating room. Ex vivo tissue samples excised during the operations were then used as the subjects of this research.
During the initial phase, the surgical team promptly provided intraoperative lesion and normal tissue samples. The normal tissue was taken at least 20 mm away from the maximal visible extent of the lesion. These specimens served as the foundation for establishing a comprehensive database. Each piece of tissue was approximately 4 mm × 4 mm × 4 mm and underwent rapid three-dimensional OCT scanning with image reconstruction; the scanning area was 2.4 mm × 2.4 mm. Any specimen could be scanned multiple times at different angles to increase sample diversity. Data acquisition took several minutes per specimen. At the same time, the surgeon sent a small piece of the lesion specimen to the pathology department for an intraoperative FS, and a preliminary diagnosis by the pathologist took about 20 min. The final pathologic report, serving as the definitive benchmark diagnosis, was available about a week later. Regarding the pathologic diagnosis, adenocarcinoma in situ (AIS) is defined as a small adenocarcinoma (≤3 cm) with a pure lepidic growth pattern lacking any stromal, lymphovascular, pleural, or alveolar space invasion or necrosis. Minimally invasive adenocarcinoma (MIA) is defined as an adenocarcinoma (≤3 cm) with a predominant lepidic pattern and a ≤5 mm invasive component (such as acinar, papillary, solid, or micropapillary patterns), whereas invasive adenocarcinoma (IA) is defined as an adenocarcinoma with an invasive component measuring >5 mm in its greatest dimension.
As previously published [20], a customized SD-OCT system was employed in this research. Figure 1 illustrates the experimental setup, a single-mode fiber-based unbalanced Michelson interferometer. The light source is a broadband superluminescent diode (SLD) with an average output power of 11 mW (cBLMD-S-371-HP2-SM-OI, Superlum, Carrigtohill, Ireland), centered at 840 nm with a full width at half maximum (FWHM) of 51.2 nm, yielding a theoretically calculated axial resolution of 6.06 μm. The lateral resolution, defined by the spot size at the focal plane, is approximately 10 μm (in air). The A-line scanning rate is 20 kHz, and the spectral interference signals are acquired by a commercially available spectrometer (Cobra-800-880, Wasatch Photonics, Logan, UT, USA) and then converted from analog to digital (A/D) on a personal computer.
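For readers who wish to verify the quoted figure, the theoretical axial resolution of an SD-OCT system with a Gaussian-shaped source follows from δz = (2 ln 2/π)·λ₀²/Δλ. The following minimal Python check (illustrative only, not the authors’ code) uses the source parameters stated above:

```python
import numpy as np

# Theoretical axial resolution of an SD-OCT system with a Gaussian source:
#   dz = (2 * ln(2) / pi) * lambda0**2 / delta_lambda
lambda0 = 840e-9        # m, SLD center wavelength (from the text)
delta_lambda = 51.2e-9  # m, FWHM bandwidth (from the text)

dz = (2 * np.log(2) / np.pi) * lambda0**2 / delta_lambda
print(f"axial resolution ~ {dz * 1e6:.2f} um")  # ~6.1 um, consistent with the reported 6.06 um
```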
In the fiber pathway, the only essential component is a 50/50 coupler (C). The light emitted by the SLD first traverses this coupler, after which the resultant twin beams propagate toward the reference and sample arms. In the reference arm, the beam is collimated by a fiber collimator (FC2) (F280APC-850, Thorlabs Inc., Lafayette, CO, USA), passes through an adjustable neutral density filter (ND-filter) and an achromatic lens (L1) (AC254-030-B-ML, Thorlabs Inc., Lafayette, CO, USA), and is then reflected by a silver-coated mirror (M). In the sample arm, the beam traverses a pair of precisely controlled galvanometric scanning mirrors (G1 and G2) (6220H Series, Cambridge Technology Inc., Peachtree Corners, GA, USA). The beams returning from the reference and sample arms converge at C, generating interference signals that are introduced into the spectrometer, which comprises a transmission grating and a linear line scan camera with 2048 pixels. The acquired data are transmitted to the PC via a Camera Link connection. The hardware control flow was implemented in a self-built LabVIEW program (LabVIEW 2016, National Instruments, Austin, TX, USA), and a data acquisition device (DAQ) (USB 6343, National Instruments Inc., Austin, TX, USA) governs the precise movement of G1 and G2 to generate two-dimensional (2D) or three-dimensional (3D) scan waveforms.
The practical frame rate is 20 frames per second (fps). Under shot-noise-limited conditions, the theoretically calculated sensitivity of this architecture is 110.3 dB. With an ND-filter attenuation of 50 dB, the measured signal-to-noise ratio (SNR) is 40 dB, indicating a current sensitivity of 90 dB. The scanning range covers 2.4 mm along both axes. G1 carries out 2000-line scans, each B-scan lasting 0.1 s, while G2 operates at 0.025 Hz, so a single C-scan volume takes 40 s. Consequently, 400 B-scan images (2D) were combined to construct a volumetric set (3D). As confirmed by the surgical team, volumetric measurements were conducted at two distinct tissue sites: the lesion and normal tissue.
Overall processing was performed in Python v3.8 with CUDA GPU acceleration on a high-performance Windows-based computer with 16.0 GB of RAM, an Intel Core i5-7500 CPU, and an NVIDIA GeForce GTX1660 GPU. The flowchart is shown in Figure 2. All raw data first underwent k-linearity calibration and window cropping. To mitigate speckle noise, despeckled images were then generated by averaging seven adjacent B-scans after translational registration. These images were resized to 128 pixels (depth) × 218 pixels (width), corresponding to an actual scan range of 1.4 mm (depth) × 2.4 mm (width), and intensity values were rescaled to the range 0–1 before being imported into the convolutional neural network for training. Furthermore, data augmentation was rigorously implemented with random combinations of width and height shifting, shearing, zooming, and horizontal flipping. Notably, to safeguard the integrity of the morphological features in the OCT images, the shearing and zooming parameters were set to a conservative ratio of 0.1.
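The paragraph above fully determines the preprocessing and augmentation settings; the sketch below illustrates them under stated assumptions (the despeckle/resize helpers and the use of OpenCV and Keras are our own illustrative choices, not the authors’ code):

```python
import numpy as np
import cv2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def despeckle(bscans, center):
    """Average 7 adjacent, pre-registered B-scans around index `center` to suppress speckle."""
    window = bscans[max(center - 3, 0):center + 4]
    return window.mean(axis=0)

def resize_and_normalize(bscan):
    """Resize to 128 (depth) x 218 (width) pixels and rescale intensities to [0, 1]."""
    resized = cv2.resize(bscan, (218, 128))  # cv2.resize takes (width, height)
    lo, hi = resized.min(), resized.max()
    return (resized - lo) / (hi - lo + 1e-12)

# Augmentation as described: random shifts, shearing, zooming, and horizontal flips.
# The text fixes only the shear/zoom ratio at 0.1; the shift ranges here are illustrative.
augmenter = ImageDataGenerator(
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)
```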
The current study’s neural network architecture follows our prior publication [21], an attention-mechanism-based ResNet model for classifying brain tumor OCT images, making it a pertinent candidate for our lung tumor dataset. The attention ResNet comprises 14 layers featuring six optimized residual blocks, as elucidated in Figure 3a. Notably, attention mechanisms have been integrated into the final residual blocks, incorporating filter sizes of 32 and 64; this augmentation ensures the attention mechanism remains fully engaged even when capturing rudimentary features. Furthermore, a supplementary attention path, depicted in Figure 3b, has been introduced. This design permits the attention mechanism to attenuate gracefully during training if it struggles to discern more pertinent features, and the network captures superior features when the alpha (α) value exceeds zero. During training, a batch size of 16 images was employed, and the stochastic gradient descent (SGD) optimizer was selected with a learning rate of 0.0001 and a momentum of 0.9. Leaky ReLU was used as the activation function, with its negative slope set to 0.01 to prevent a purely linear input–output relationship. An additional L2 regularization penalty (weight 0.05) was used to avoid overfitting and make the loss function smoother. Categorical cross-entropy, one of the most commonly used loss functions, served as the optimization criterion, and training concluded when the validation loss ceased to decrease for 15 consecutive epochs.
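For orientation, the sketch below reproduces the stated training configuration (SGD with learning rate 0.0001 and momentum 0.9, Leaky ReLU with slope 0.01, L2 weight 0.05, batch size 16, categorical cross-entropy, early stopping after 15 stagnant epochs) around a generic residual block with a simple gated attention path. It is a minimal stand-in for illustration only; the authors’ exact 14-layer attention ResNet is specified in [21] and Figure 3:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

L2 = regularizers.l2(0.05)  # L2 penalty used to curb overfitting and smooth the loss

def residual_attention_block(x, filters):
    """Generic residual block with a sigmoid-gated attention path (illustrative only)."""
    shortcut = layers.Conv2D(filters, 1, padding="same", kernel_regularizer=L2)(x)
    y = layers.Conv2D(filters, 3, padding="same", kernel_regularizer=L2)(x)
    y = layers.LeakyReLU(alpha=0.01)(y)
    y = layers.Conv2D(filters, 3, padding="same", kernel_regularizer=L2)(y)
    # Attention path: a sigmoid mask re-weights the residual features and can
    # learn to attenuate if no more-pertinent features are found.
    mask = layers.Conv2D(filters, 1, activation="sigmoid", kernel_regularizer=L2)(y)
    y = layers.Multiply()([y, mask])
    out = layers.Add()([shortcut, y])
    return layers.LeakyReLU(alpha=0.01)(out)

inputs = tf.keras.Input(shape=(128, 218, 1))        # despeckled, resized B-scan
x = residual_attention_block(inputs, 32)
x = layers.MaxPooling2D()(x)
x = residual_attention_block(x, 64)                 # filter sizes 32 and 64, as in the text
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # MIA / IA / normal tissue
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=15)
# model.fit(train_images, train_labels, batch_size=16,
#           validation_data=(val_images, val_labels), callbacks=[early_stop])
```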
T-distributed stochastic neighbor embedding (t-SNE), a non-linear machine learning dimensionality reduction method proposed by Laurens van der Maaten and Geoffrey Hinton in 2008 [22], can maintain local structure during dimensionality reduction. It models pairwise similarities with a Gaussian distribution in the high-dimensional space and a t-distribution in the low-dimensional embedding, and it minimizes the Kullback–Leibler divergence (KLD) between the two probability density functions via gradient descent. In addition, gradient-weighted class activation mapping (grad-CAM), an innovation by R. R. Selvaraju [23], is applied to visualize the image regions that drive the classification decisions. The current study also uses Python v3.8, a cloud-based database to back up collected patient data and associated information, and Matplotlib’s mouse-responsive functionality to reveal the OCT characteristics of individual data points within the t-SNE image.
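As a concrete illustration of this visualization step, the snippet below embeds network features with t-SNE (perplexity 35, as used later in the Results); the random placeholder arrays stand in for penultimate-layer activations and are not the study’s data:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholders for penultimate-layer activations and class labels (0 = IA, 1 = MIA, 2 = NOR).
features = np.random.rand(300, 64)
labels = np.random.randint(0, 3, size=300)

# t-SNE: Gaussian similarities in high dimension, t-distributed in 2D,
# fitted by minimizing the KL divergence between the two.
embedding = TSNE(n_components=2, perplexity=35, random_state=0).fit_transform(features)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="tab10", s=8)
plt.title("t-SNE of learned OCT features (perplexity = 35)")
plt.show()
```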

3. Results

Twelve patients with a preoperatively indeterminate tumor underwent thoracoscopic resection; five were permanently diagnosed with minimally invasive adenocarcinoma (MIA) and seven with invasive adenocarcinoma (IA). The specimens were extracted during the scheduled operations, and any specimen could be scanned multiple times with OCT to enlarge the dataset. The recruitment details are listed in Table 1. The data were divided into a training dataset and a testing dataset; all data separations were based on individual patients so that the model generalizes to unseen OCT images, as shown in Table 2.
The overall accuracy derived from the confusion matrix stands at an impressive 84.9%, with individual class accuracy exceeding 80% for each category, demonstrating an acceptable classification capability. Figure 4 shows the confusion matrix from our model, including the number of pictures (Figure 4a) and normalized probability (Figure 4b). Specifically, for MIA, the sensitivity and specificity are 89% and 82.7%, respectively, while for IA, they are 94% and 80.6%.
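For clarity, the one-vs-rest sensitivity and specificity quoted here derive directly from the 3 × 3 confusion matrix; a minimal computation is sketched below (the matrix counts are placeholders, not the exact numbers in Figure 4a):

```python
import numpy as np

def sensitivity_specificity(cm, cls):
    """One-vs-rest sensitivity/specificity for class `cls`; rows are true labels,
    columns are predictions."""
    tp = cm[cls, cls]
    fn = cm[cls].sum() - tp
    fp = cm[:, cls].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

cm = np.array([[90, 5, 5],   # placeholder counts: rows/cols ordered IA, MIA, NOR
               [4, 89, 7],
               [10, 8, 82]])
for cls, name in enumerate(["IA", "MIA", "NOR"]):
    sens, spec = sensitivity_specificity(cm, cls)
    print(f"{name}: sensitivity {sens:.1%}, specificity {spec:.1%}")
print(f"overall accuracy {np.trace(cm) / cm.sum():.1%}")
```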
Figure 5 shows the trends across epochs. We plot the learning curves of the training process: Figure 5a shows the accuracy of the training (red line) and testing (green line) data. The accuracy of the test data fluctuated significantly at first, then stabilized and converged from approximately epoch 150. Furthermore, the loss curve of the test data (blue) is slightly higher than that of the training data (yellow) and plateaus gradually over the epochs, highlighting the stability of our model. To better understand the model performance, the receiver operating characteristic (ROC) curves of IA and MIA were calculated from the testing data, as shown in Figure 5b,c. The areas under the curves (AUCs) were 0.99 and 0.96, showing excellent differentiation power for the targeted tumors. The AUC reflects the model’s comprehensive performance across different thresholds, indicating whether it can distinguish between positive and negative classes while maintaining sensitivity and specificity at various operating points.
Subsequently, we harnessed the capabilities of t-SNE in conjunction with OCT images and grad-CAM to ascertain whether the model authentically fixates on the correct features instead of background information (Figure 6). Within the t-SNE interface, the posteriorly situated lighter-colored data points represent the training dataset, while the anterior darker-colored data points signify the test dataset. Data points of matching hues on the t-SNE image denote the same category, with a t-SNE perplexity parameter set to 35. The t-SNE conclusively reveals that data points of similar colors coalesce into distinct clusters, with only minor instances of overlapping with other categories. This affirms that our model has undergone robust training and can effectively discriminate between the three types. Misclassified data points will be discussed further in subsequent sections.
Moreover, exploring the analysis of grad-CAM heatmaps, we examined the intricate features extracted by our model within each histology-graded distinct category. In IA images, we observe a phenomenon characterized by interrupted attenuation with discontinuous reflection (Figure 6a). This feature is absent in MIA images. Remarkably, MIA images show relatively formless homogeneity (Figure 6b). In the normal tissue images (NOR), the model focuses on the tissue’s superficial regions, which show irregular and dense spots (Figure 6c). However, the attenuation discontinuity in NOR appears randomly.
In addition to the reasonably good visual results obtained from t-SNE images, we have also implemented an interactive t-SNE interface. When individual data points within the interface are clicked on, the corresponding OCT image data are promptly retrieved alongside a window presenting pertinent patient information. In a separate interface, patient details such as name, chart number, gender, sample number, volume number, and the number of B-scans are presented, in addition to the neural network’s classification probabilities and the grad-CAM image. This feature affords us the immediate capability to scrutinize and analyze our dataset, with predictions for the respective data points provided beneath this window together with the associated probabilities and CAM images. Figure 7 vividly illustrates the interactive outcomes attained by selecting MIA (Figure 7a) and IA (Figure 7b) data points. These images demonstrate the neural network’s adeptness at delivering precise predictions for both MIA and IA cases, with the CAM images highlighting the relevant features of interest rather than fixating on extraneous background information.

4. Discussion

Both diagnosis and treatment are crucial in real-world medical practice. The lung is a deep-seated organ combining solid and luminal structures. Specialists managing lung tumors, especially small lesions that grow extremely slowly, face a combined procedure of instant diagnostic judgment between benign and malignant lesions and subsequent surgical treatment. Surgery is considered the standard management for early-stage non-small-cell lung cancer, as small and early tumors in the pulmonary parenchyma are hard to access through bronchoscopy for diagnosis [24]. Rapid on-site accurate diagnoses are therefore eagerly needed; they provide essential data for subsequent surgical strategies and are especially imperative in pulmonary oncology. Currently, a definite microscopic diagnosis of a tumor can only be supplied by pathologic procedures days after the surgery, leaving room for improvement in timeliness. As a result, instant biopsy through FS reports becomes essential for approaching histological data; it is performed by the on-duty pathologist, who takes slices at one location to determine a diagnosis. However, incorrect results are common, particularly for small lesions.
Compared with FSs, OCT provides continuous slice images over a range of about 2.4 mm × 2.4 mm. Generally, surgeons can distinguish the tumor area with the naked eye; however, the existence and detailed content of the invasive regions must still be confirmed by the final pathology. If the resected specimen for OCT scanning does not contain enough of the invasive component, the model may produce misclassifications.
The development of lung adenocarcinoma is believed to proceed sequentially through atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), MIA, and overt IA [25,26]. Cancer cells sometimes take a long time to grow, even up to several years. Improvements in the healthcare environment and in people’s medical vigilance mean that more and more clinical cases of early-stage cancer are being detected.
According to literature statistics, the five-year survival rate of early-stage cancer with immediate treatment, especially before the MIA stage, can reach 100% [27]. An important issue is the trade-off between a resection extent adequate for tumor clearance and maximal preservation of lung volume for better pulmonary function; both clearance and preservation are crucial. Surgical treatment guided by histological grade therefore plays an essential role. Generally, the histologic MIA status of a lung tumor is considered the cut-off point for the extent of surgical resection: for MIA or less, limited, so-called sublobar resection is adequate and recommended [28], whereas for IA, extensive resection by lobectomy or more is often needed for sufficient cancer clearance [24,28,29,30,31]. The current study shows better discrimination capability for OCT integrated with AI (OCT-AI) compared to traditional FSs. In addition, time saving is another strength of the current electromechanical system (EMS): OCT-AI can deliver results within tens of seconds to a few minutes, while an FS usually needs at least 30 min. Such an efficient AI-integrated EMS therefore has the potential to become an optional tool for rapid on-site diagnosis of preoperatively indeterminate tumors.
Although the current model achieves an accuracy rate of approximately 80%, misclassifications still exist, as with other tools. By scrutinizing the features contributing to misclassifications in the CAM images, we can gain insights into the sources of error (Figure 8). Figure 8a illustrates a scenario in which MIA is erroneously classified as IA, while Figure 8b depicts IA being misclassified as MIA. In Figure 8a, the model principally directs attention to areas characterized by discontinuous attenuation (red arrow of Figure 8a), a hallmark feature of IA in OCT images, as mentioned before. In Figure 8b, the model fixates on relatively uniform regions (blue arrow of Figure 8b), resulting in a misclassification as MIA. Notably, there are also structureless areas near the surface (blue arrow of Figure 8a), indicative of MIA features, yet the model simultaneously focuses on the regions of discontinuous attenuation below. Similarly, some dense spots (red arrow of Figure 8b), a feature of IA, exist in Figure 8b. We infer that the simultaneous co-existence of IA and MIA features leads to such misclassifications. However, these mistakes are rare, with probabilities of around 4% (misclassification of IA) and 11% (misclassification of MIA).
Tumor spread through air spaces (STAS) is prevalent in routine pathologic findings [32]. In the current study, normal tissue specimens taken around 20 mm from the lesion, or at least one lesion diameter away, were provided for OCT scanning as a calibrated benchmark. Resection margins of at least 20 mm are commonly recommended for suspected malignancy and were the previously accepted safety margin [33,34]; the tumor margin distance is a primary concern for local control, which is why this sampling distance was chosen. STAS is challenging to interpret in a frozen section specimen, and better accuracy is needed [35,36]. On the other hand, as shown in the confusion matrix in Figure 4, normal tissue images can sometimes be erroneously categorized as IA. Such confusions are observed across all categories but occur mostly with IA specimens; quantitatively, IA and NOR as determined by the OCT-AI system show a particular possibility of mixed discrimination. However, a strength of OCT-AI lies in its capability for self-improving accuracy through extensive data collection. STAS in lung cancer has been reported to be strongly associated with poor survival [29,36]. Thus, from the standpoint of treatment effectiveness, wide excision by lobectomy would be suggested if the OCT-AI system reads a small tumor’s specimens as IA.
In this era of abundant AI-based software, several approaches could be considered suitable for the current study, such as support vector machines (SVMs) and artificial neural networks (ANNs), including feedforward neural networks (FNNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs). In this study, a CNN was chosen, utilizing convolutional layers to learn features from data automatically [21]. We did not use SVMs, FNNs, or RNNs because of their drawbacks of requiring manual feature extraction and lacking automatic learning of spatial image features. An attention-based ResNet model built on a CNN was selected owing to its good classification performance for lung cancer: this combination reduces the need for extensive domain knowledge and experience and is less demanding in terms of raw data and parameter tuning. CNNs are typically used in tasks related to computer vision, image classification, and image recognition and are particularly suitable for managing spatial data.
The AI training process is like a complex matrix, and it is difficult to understand the learning details and interpretation basis intuitively. Therefore, it is necessary to use visualization tools to perform dimensional reduction. Standard methods are principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). PCA is a linear dimensionality reduction technique that may not capture complex, non-linear relationships in the data as effectively as t-SNE. Others, such as locally linear embedding (LLE), aim to preserve local linear relationships between data points. Isomap is suitable for data with intrinsic non-linear structures but may be sensitive to noise and require careful parameter tuning. In this study, t-SNE was employed for model visualization.
t-SNE, a non-linear machine learning dimensionality reduction method proposed by Laurens van der Maaten and Geoffrey Hinton in 2008 [22], was therefore adopted. Its main idea is to model high-dimensional similarities with a Gaussian distribution and low-dimensional similarities with a t-distribution, and then to minimize the Kullback–Leibler divergence (KLD) between the two probability density functions by gradient descent. In the current research, t-SNE was judiciously applied to scrutinize the distributions of data emanating from diverse samples and elucidate their inherent properties. On the other hand, gradient-weighted class activation mapping (grad-CAM) was systematically employed for comprehensive model evaluation. Grad-CAM, an innovation pioneered by R. R. Selvaraju [23], maps the regions within an image that significantly influence the model’s classification decisions; the activation map is derived from the gradients of the output with respect to a given input image. In our specific application, grad-CAM was harnessed to visualize the salient features prioritized by the trained model.
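For reference, a standard grad-CAM computation in the spirit of [23] can be written as below; the model variable and layer name are illustrative assumptions, not the authors’ implementation:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Standard grad-CAM: weight the chosen conv layer's feature maps by the
    gradient of the top class score, then average, rectify, and normalize."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_idx = tf.argmax(preds[0])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))             # pooled gradients per channel
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-12)).numpy()      # heatmap in [0, 1]

# heatmap = grad_cam(model, oct_bscan, "last_conv")  # "last_conv" is a hypothetical layer name
```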
Validating the model’s training through result visualization is of paramount importance. However, even with excellent training outcomes, practical utility for clinical healthcare professionals often hinges on the involvement of engineers. Therefore, an interactive human–machine interface (HMI) was designed to make the user interface easily understood and to provide clinical personnel with real-time patient information, an assessment of symptom severity, and an approximate lesion location [37,38,39,40]. Such an HMI enables clinicians to formulate surgical strategies more quickly, ultimately improving patient outcomes.
The system’s interface was created using Python v3.8, together with a cloud-based database for backing up collected patient data and associated information, which can be modified and expanded in the patient data section. After importing the dataset into the trained neural network, the classification probabilities, t-SNE coordinates, and grad-CAM images were obtained. The Matplotlib plotting library within Python provides an object-oriented application programming interface (API) for embedding graphics into the application interface. Thanks to Matplotlib’s mouse-responsive functionality, clicking on an individual point within the t-SNE image unveils its OCT image. In a separate interface, patient details such as name, chart number, gender, sample number, volume number, and the number of B-scans are presented, in addition to the neural network’s classification probabilities and grad-CAM image. This integrated interface is engineered to provide a holistic depiction of the final classification results.
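A minimal sketch of this click-to-inspect behavior with Matplotlib’s pick events is shown below; the placeholder arrays stand in for the t-SNE coordinates, OCT images, and patient records handled by the actual HMI:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholders for t-SNE coordinates, matching OCT B-scans, and per-point metadata.
embedding = np.random.rand(100, 2)
oct_images = np.random.rand(100, 128, 218)
records = [f"patient info / class probabilities for point {i}" for i in range(100)]

fig, ax = plt.subplots()
ax.scatter(embedding[:, 0], embedding[:, 1], s=10, picker=5)  # 5-point pick radius
ax.set_title("Click a t-SNE point to inspect its OCT image")

def on_pick(event):
    idx = event.ind[0]  # index of the clicked data point
    detail, (ax_img, ax_txt) = plt.subplots(1, 2, figsize=(8, 3))
    ax_img.imshow(oct_images[idx], cmap="gray")
    ax_img.set_axis_off()
    ax_txt.text(0.0, 0.5, records[idx], va="center")
    ax_txt.set_axis_off()
    detail.show()

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```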
The interactive HMI provides a friendly and straightforward operating interface at the clinical terminal, whether a classification is correct or incorrect. In addition, the quantitative probabilities can offer suggestive data and alternative diagnostic guidance to surgeons. Figure 9 shows two misjudgments on the interactive HMI. The model predicts three probabilities, for MIA, IA, and NOR, and the final prediction is the largest of the three. Although misclassifications, as also exhibited by other diagnostic tools, existed in some cases, the OCT-AI system displays the probabilities of the tumor categories on the left side of the interactive HMI for the clinicians to weigh in their decisions. Figure 9a illustrates NOR tissue from an IA patient erroneously categorized as IA, with probabilities of IA: 66.04% and NOR: 33.96%. Figure 9b shows the model assigning reasonably similar probabilities to both IA and MIA, presumably because MIA and IA features coexist in the specimen. This interface design empowers physicians to access detailed probability information about misclassifications promptly, helping them make well-informed decisions during surgical procedures.

5. Conclusions

This study represents the first real-time clinical investigation of lung cancer tissue using AI-integrated OCT for grading classification. Our customized neural network model can accurately classify normal tissue, MIA, and IA. Additionally, we have developed an interactive HMI that allows clinicians to assess deep and detailed tumor information, including digital data from OCT scans and, most importantly, visualized images and predictive probability assessments, simply by clicking on data points in t-SNE images. Despite the current model’s high accuracy, some misclassifications still exist, leaving room for improvement. In the future, the authors aim to gather more data to enhance the model’s reliability; incorporate features such as in situ cancer, precancerous lesions, and benign conditions; and achieve a more comprehensive and nuanced classification system. In summary, the proposed method has the potential to enhance traditional pathological examination, improve diagnostic efficiency, and help ensure appropriate treatment and better outcomes for patients.

Author Contributions

Conceptualization, H.-C.L. and C.-W.S.; methodology, H.-C.L. and R.-C.Z.; software, M.-H.L. and R.-C.Z.; validation, W.-C.C., M.-H.L. and R.-C.Z.; formal analysis, H.-C.L. and M.-H.L.; investigation, H.-C.L. and C.-W.S.; resources, H.-C.L. and W.-C.C.; data curation, M.-H.L. and R.-C.Z.; writing—original draft preparation, M.-H.L.; writing—review and editing, H.-C.L.; visualization, M.-H.L.; supervision, C.-W.S. and Y.-M.W.; project administration, H.-C.L., W.-C.C., Y.-M.W. and C.-W.S.; funding acquisition, H.-C.L., W.-C.C., Y.-M.W. and C.-W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the interdisciplinary academic research corporation of National Yang Ming Chiao Tung University and Mackay Memorial Hospital (MMH-CT-10808, MMH-CT-10906, MMH-CT-11001, and MMH-CT-11104) and the National Science and Technology Council (Grant Nos. 109-2221-E-009-018-MY3, 111-2221-E-A49-047-MY3, 111-2221-E-195-002, 111-2221-E075-002, 111-2314-B-561-001, 111-2321-B-A49-003, and 111-2314-B-A49-078).

Institutional Review Board Statement

The study was approved by the Institutional Review Board (IRB) of Mackay Memorial Hospital, number 18MMHIS084.

Informed Consent Statement

Informed consent was obtained from patients, parents, and legal guardians.

Data Availability Statement

The data underlying the results presented in this paper are not publicly available according to the protection of human research participants.

Acknowledgments

The authors of this work would like to thank the Lung Cancer Foundation in Memory of Kwang-Shun Lu.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Siegel, R.L.; Miller, K.D.; Wagle, N.S.; Jemal, A. Cancer statistics, 2023. CA Cancer J. Clin. 2023, 73, 17–48.
  2. Tsai, C.H.; Kung, P.T.; Kuo, W.Y.; Tsai, W.C. Effect of time interval from diagnosis to treatment for non-small cell lung cancer on survival: A national cohort study in Taiwan. BMJ Open 2020, 10, e034351.
  3. Scholten, E.T.; de Jong, P.A.; de Hoop, B.; van Klaveren, R.; van Amelsvoort-van de Vorst, S.; Oudkerk, M.; Vliegenthart, R.; Koning, H.J.; Aalst, C.M.; Vernhout, R.M.; et al. Towards a close computed tomography monitoring approach for screen detected subsolid pulmonary nodules? Eur. Respir. J. 2015, 45, 765–773.
  4. Yeh, Y.C.; Nitadori, J.I.; Kadota, K.; Yoshizawa, A.; Rekhtman, N.; Moreira, A.L.; Sima, C.S.; Rusch, V.W.; Adusumilli, P.S.; Travis, W.D. Using frozen section to identify histological patterns in stage I lung adenocarcinoma of ≤ 3 cm: Accuracy and interobserver agreement. Histopathology 2015, 66, 922–938.
  5. Xiang, Z.; Zhang, J.; Zhao, J.; Shao, J.; Zhao, L.; Zhang, Y.; Qin, G.; Xing, J.; Han, Y.; Yu, K. An effective inflation treatment for frozen section diagnosis of small-sized lesions of the lung. J. Thorac. Dis. 2020, 12, 1488.
  6. Xu, X.; Chung, J.H.; Jheon, S.; Sung, S.W.; Lee, C.T.; Lee, J.H.; Choe, G. The accuracy of frozen section diagnosis of pulmonary nodules: Evaluation of inflation method during intraoperative pathology consultation with cryosection. J. Thorac. Oncol. 2010, 5, 39–44.
  7. Liu, S.; Wang, R.; Zhang, Y.; Li, Y.; Cheng, C.; Pan, Y.; Xiang, J.; Zhang, Y.; Chen, H.; Sun, Y. Precise diagnosis of intraoperative frozen section is an effective method to guide resection strategy for peripheral small-sized lung adenocarcinoma. J. Clin. Oncol. 2016, 34, 307–313.
  8. Takahashi, Y.; Kuroda, H.; Oya, Y.; Matsutani, N.; Matsushita, H.; Kawamura, M. Challenges for real-time intraoperative diagnosis of high risk histology in lung adenocarcinoma: A necessity for sublobar resection. Thorac. Cancer 2019, 10, 1663–1668.
  9. Lin, M.H.; Liu, H.C.; Hsiao, T.Y.; Ting, C.H.; Sun, C.W. A bedside feasibility study with optical coherence tomography for real-time tumor-located of lung cancer. Health Technol. 2021, 5, 2.
  10. Triki, A.R.; Blaschko, M.B.; Jung, Y.M.; Song, S.; Han, H.J.; Kim, S.I.; Joo, C. Intraoperative margin assessment of human breast tissue in optical coherence tomography images using deep neural networks. Comput. Med. Imaging Graph. 2018, 69, 21–32.
  11. Kansal, S.; Goel, S.; Bhattacharya, J.; Srivastava, V. Generative adversarial network–convolution neural network based breast cancer classification using optical coherence tomographic images. Laser Phys. 2020, 30, 115601.
  12. Mojahed, D.; Ha, R.S.; Chang, P.; Gan, Y.; Yao, X.; Angelini, B.; Hibshoosh, H.; Taback, B.; Hendon, C.P. Fully automated postlumpectomy breast margin assessment utilizing convolutional neural network based optical coherence tomography image classification method. Acad. Radiol. 2020, 27, e81–e86.
  13. Moiseev, A.; Snopova, L.; Kuznetsov, S.; Buyanova, N.; Elagin, V.; Sirotkina, M.; Kiseleva, E.; Matveev, L.; Zaitsev, V.; Feldchtein, F. Pixel classification method in optical coherence tomography for tumor segmentation and its complementary usage with OCT microangiography. J. Biophotonics 2018, 11, e201700072.
  14. Wan, S.; Lee, H.C.; Huang, X.; Xu, T.; Xu, T.; Zeng, X.; Zhang, Z.; Sheikine, Y.; Connolly, J.L.; Fujimoto, J.G.; et al. Integrated local binary pattern texture features for classification of breast tissue imaged by optical coherence microscopy. Med. Image Anal. 2017, 38, 104–116.
  15. Lu, W.; Tong, Y.; Yu, Y.; Xing, Y.; Chen, C.; Shen, Y. Deep learning-based automated classification of multi-categorical abnormalities from optical coherence tomography images. Transl. Vis. Sci. Technol. 2018, 7, 41.
  16. Wang, H.; Won, D.; Yoon, S.W. A deep separable neural network for human tissue identification in three-dimensional optical coherence tomography images. IISE Trans. Healthc. Syst. Eng. 2019, 9, 250–271.
  17. Ma, Y.; Xu, T.; Huang, X.; Wang, X.; Li, C.; Jerwick, J.; Ning, Y.; Zeng, X.; Wang, B.; Wang, Y.; et al. Computer-aided diagnosis of label-free 3-D optical coherence microscopy images of human cervical tissue. IEEE Trans. Biomed. Eng. 2019, 66, 2447–2456.
  18. Singla, N.; Dubey, K.; Srivastava, V. Automated assessment of breast cancer margin in optical coherence tomography images via pretrained convolutional neural network. J. Biophotonics 2019, 12, e201800255.
  19. Li, Y.Q.; Chiu, K.S.; Liu, X.R.; Hsiao, T.Y.; Zhao, G.; Li, S.J.; Sun, C.W. Polarization-Sensitive Optical Coherence Tomography for Brain Tumor Characterization. IEEE J. Sel. Top. Quantum Electron. 2019, 25, 1–7.
  20. Liu, H.C.; Lin, M.H.; Ting, C.H.; Wang, Y.M.; Sun, C.W. Intraoperative application of optical coherence tomography for lung tumor. J. Biophotonics 2023, 16, e202200344.
  21. Hsu, S.P.; Hsiao, T.Y.; Pai, L.C.; Sun, C.W. Differentiation of primary central nervous system lymphoma from glioblastoma using optical coherence tomography based on attention ResNet. Neurophotonics 2022, 9, 015005.
  22. van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
  23. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why did you say that? arXiv 2016, arXiv:1611.07450.
  24. Raman, V.; Yang, C.F.J.; Deng, J.Z.; D’Amico, T.A. Surgical treatment for early stage non-small cell lung cancer. J. Thorac. Dis. 2018, 10, S898–S904.
  25. Noguchi, M. Stepwise progression of pulmonary adenocarcinoma—Clinical and molecular implications. Cancer Metastasis Rev. 2010, 29, 15–21.
  26. Inamura, K. Clinicopathological characteristics and mutations driving development of early lung adenocarcinoma: Tumor initiation and progression. Int. J. Mol. Sci. 2018, 19, 1259.
  27. Masai, K.; Sakurai, H.; Sukeda, A.; Suzuki, S.; Asakura, K.; Nakagawa, K.; Asamura, H.; Watanabe, S.I.; Motoi, N.; Hiraoka, N. Prognostic impact of margin distance and tumor spread through air spaces in limited resection for primary lung cancer. J. Thorac. Oncol. 2017, 12, 1788–1797.
  28. Altorki, N.; Wang, X.; Kozono, D.; Watt, C.; Landrenau, R.; Wigle, D.; Port, J.; Jones, D.R.; Conti, M.; Ashrafi, A.S.; et al. Lobar or sublobar resection for peripheral stage IA non–small-cell lung cancer. N. Engl. J. Med. 2023, 388, 489–498.
  29. Zhu, E.; Xie, H.; Dai, C.; Zhang, L.; Huang, Y.; Dong, Z.; Guo, J.; Su, H.; Ren, Y.; Shi, P.; et al. Intraoperatively measured tumor size and frozen section results should be considered jointly to predict the final pathology for lung adenocarcinoma. Mod. Pathol. 2018, 31, 1391–1399.
  30. Zhang, Y.; Ma, X.; Shen, X.; Wang, S.; Li, Y.; Hu, H.; Chen, H. Surgery for pre-and minimally invasive lung adenocarcinoma. J. Thorac. Cardiovasc. Surg. 2022, 163, 456–464.
  31. Saji, H.; Okada, M.; Tsuboi, M.; Nakajima, R.; Suzuki, K.; Aokage, K.; Aoki, T.; Okami, J.; Yoshino, I.; Ito, H.; et al. Segmentectomy versus lobectomy in small-sized peripheral non-small-cell lung cancer (JCOG0802/WJOG4607L): A multicentre, open-label, phase 3, randomised, controlled, non-inferiority trial. Lancet 2022, 399, 1607–1617.
  32. Zhou, F.; Villalba, J.A.; Sayo, T.M.S.; Narula, N.; Pass, H.; Mino-Kenudson, M.; Moreira, A.L. Assessment of the feasibility of frozen sections for the detection of spread through air spaces (STAS) in pulmonary adenocarcinoma. Mod. Pathol. 2022, 35, 210–217.
  33. Kodama, K.; Doi, O.; Higashiyama, M.; Yokouchi, H. Intentional limited resection for selected patients with T1 N0 M0 non-small-cell lung cancer: A single-institution study. J. Thorac. Cardiovasc. Surg. 1997, 114, 347–353.
  34. Sawabata, N.; Ohta, M.; Matsumura, A.; Nakagawa, K.; Hirano, H.; Maeda, H.; Matsuda, H.; Thoracic Surgery Study Group of Osaka University. Optimal distance of malignant negative margin in excision of nonsmall cell lung cancer: A multicenter prospective study. Ann. Thorac. Surg. 2004, 77, 415–420.
  35. Mukhopadhyay, S.; Sudarshan, M. Spread through airspaces (STAS) on frozens: Too much, too soon. Mod. Pathol. 2022, 35, 140–141.
  36. Shih, A.R.; Mino-Kenudson, M. Updates on spread through air spaces (STAS) in lung cancer. Histopathology 2020, 77, 173–180.
  37. Kumar, N.; Lee, S.C. Human-machine interface in smart factory: A systematic literature review. Technol. Forecast. Soc. Chang. 2022, 174, 121284.
  38. Ma, S.; Wang, X.; Li, P.; Yao, N.; Xiao, J.; Liu, H.; Zhang, Z.; Yu, L.; Tao, G.; Li, X.; et al. Optical micro/nano fibers enabled smart textiles for human–machine interface. Adv. Fiber Mater. 2022, 4, 1108–1117.
  39. Tao, K.; Chen, Z.; Yu, J.; Zeng, H.; Wu, J.; Wu, Z.; Jia, Q.; Li, P.; Fu, Y.; Chang, H.; et al. Ultra-sensitive, deformable, and transparent triboelectric tactile sensor based on micro-pyramid patterned ionic hydrogel for interactive human–machine interfaces. Adv. Sci. 2022, 9, 2104168.
  40. Le, X.; Shi, Q.; Sun, Z.; Xie, J.; Lee, C. Noncontact human–machine interface using complementary information fusion based on MEMS and triboelectric sensors. Adv. Sci. 2022, 9, 2201056.
Figure 1. Schematic diagram of the SD-OCT. Red solid lines are the fiber path, black dotted lines are the electric path, and the colored areas are near-infrared in free space. SLD-LS, superluminescent diode light source; SMF, single-mode fiber; C, coupler; FC1 and FC2, fiber collimator; ND-filter, neutral density filter; L1 and L2, achromat lens; M, mirror; S, sample platform; G1 and G2, galvano scanners; DAQ, data acquisition (NI-6343, National Instruments); PSU, power supply; A/D converter, analog-to-digital converter.
Figure 2. The overall process of the research. The clinically obtained tissues are measured in real time. The data are then reconstructed, pre-processed, and imported into the neural network to obtain the tissue classification results. Finally, the interactive human–machine interface (HMI) is used to verify the correctness of the results.
Figure 3. Attention ResNet model architecture diagram [21]. (a) Neural network ResNet design and (b) attention mechanism added to improve training results. Yellow arrow: residual path. Green arrow: attention path.
Figure 4. Confusion matrix from the testing data of our model. (a) Number of pictures; (b) normalized probability. The sensitivities and specificities were 89% and 82.7% for MIA and 94% and 80.6% for IA, leading to an overall accuracy of 84.9%.
Figure 5. Learning curve and ROC curve. (a) is the learning curve of training and testing data, including their accuracy and loss with respect to epochs. The ROC curve of (b) invasive adenocarcinoma (IA) and (c) minimally invasive adenocarcinoma (MIA). The areas under the curve (AUCs) are 0.99 and 0.96, respectively.
Figure 6. Model performance visualization t-SNE diagram: (a) IA, (b) MIA, and (c) NOR OCT images of lung tissue observed from a subjective point of view (top) paired with the grad-CAM diagrams (bottom) that the model focuses on. White arrows show that NOR tissue has an abundance of dense light spots.
Figure 7. Deployment of interactive HMI. Results of (a) MIA and (b) IA show that our model performs well.
Figure 8. Image of neural network misclassification. (a) shows MIA being misjudged as IA, and (b) shows IA being misjudged as MIA. Red arrows mark the IA feature, while blue arrows point to the MIA feature.
Figure 9. Misjudgments on interactive HMI. The model predicts three probabilities of MIA, IA, and NOR; the final prediction is the largest. (a) NOR from an IA patient erroneously categorized as IA. (b) Misclassification between IA and MIA. OCT-AI system provided the probabilities of tumor categories.
Table 1. Recruitment information.
Patient | Specimens (Tumor/Normal) | OCT Volumes | Permanent Diagnosis | Frozen Section | Data Splitting
A | 4 | 5 | IA * | IA | Training
  | 2 | 4 | | | Training
B | 3 | 3 | IA | IA | Training
  | 2 | 2 | | | Training
C | 3 | 3 | IA | IA | Training
  | 2 | 2 | | | Training
D | 2 | 2 | IA | IA | Testing
  | 2 | 2 | | | Testing
E | 2 | 2 | IA | IA | Training
F | 2 | 4 | IA | IA | Testing
G | 2 | 3 | IA | IA | Training
H | 1 | 1 | MIA ** | IA | Training
  | 1 | 1 | | | Training
I | 2 | 4 | MIA | MIA | Testing
  | 1 | 2 | | | Training
J | 3 | 3 | MIA | AIS *** | Training
  | 2 | 2 | | | Training
K | 1 | 1 | MIA | AIS | Training
  | 1 | 1 | | | Testing
L | 5 | 9 | MIA | IA | Training
  | 2 | 2 | | | Training
* IA, invasive adenocarcinoma; ** MIA, minimally invasive adenocarcinoma; *** AIS, adenocarcinoma in situ.
Table 2. Data separation. For each diagnosis class, the columns list the number of OCT volumes (C-scan groups) and the total number of OCT frames (B-scans) in the training and testing datasets.
Diagnosis | Training OCT Volumes | Training OCT Frames | Testing OCT Volumes | Testing OCT Frames
IA | 16 | 12,120 | 6 | 3569
MIA | 14 | 12,013 | 4 | 3069
NOR 1 | 14 | 13,225 | 4 | 3349
Total | 44 | 37,358 | 14 | 9987
1 NOR, normal lung tissue.
