Article

Quality Assurance of Chest X-ray Images with a Combination of Deep Learning Methods

Daisuke Oura, Shinpe Sato, Yuto Honma, Shiho Kuwajima and Hiroyuki Sugimori
1 Department of Radiology, Otaru General Hospital, Otaru 047-0152, Japan
2 Graduate School of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
3 Faculty of Health Sciences, Hokkaido University, Sapporo 060-0812, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(4), 2067; https://doi.org/10.3390/app13042067
Submission received: 4 January 2023 / Revised: 2 February 2023 / Accepted: 3 February 2023 / Published: 5 February 2023
(This article belongs to the Special Issue AI Technology in Medical Image Analysis)

Abstract

Background: Chest X-ray (CXR) imaging is the most common radiographic examination; however, no automatic quality assurance (QA) system using deep learning (DL) has been established for CXR. This study aimed to construct a DL-based QA system and assess its usefulness. Methods: Datasets were created using over 23,000 images from Chest-14 and clinical images. The QA system consisted of three classification models and one regression model. The classification method was used for the correction of image orientation and left–right reversal and for estimating the patient's position, such as standing, sitting, and lying. The regression method was used for the correction of the image angle. ResNet-50, VGG-16, and an original convolutional neural network (CNN) were compared under fivefold cross-validation. The overall accuracy of the QA system was tested using clinical images, and its mean correction time was measured. Results: ResNet-50 demonstrated higher performance in the classification tasks, while the original CNN was preferred in the regression task. The orientation, angle, and left–right reversal were fully corrected in all images. Moreover, the patient's position was estimated with 96% accuracy. The mean correction time was approximately 0.4 s. Conclusion: The DL-based QA system quickly and accurately corrected CXR images.

1. Introduction

Chest X-ray (CXR) imaging has been used since the earliest days of diagnostic imaging, and despite the development of imaging equipment such as computed tomography (CT) and magnetic resonance imaging (MRI), it is still used for screening for cardiovascular and pulmonary diseases. The cardiothoracic ratio (CTR) measured on CXR images is an important index for evaluating heart diseases [1,2,3]. CTR is usually calculated as the ratio of the largest transverse heart dimension to the largest transverse chest dimension in a CXR obtained with posterior irradiation. CXR with posterior irradiation avoids enlargement of the cardiac shadow because, given the geometry of the X-ray system, the heart lies close to the detector and geometric magnification is minimized. Lung water is another important index for assessing heart and lung diseases; it may increase secondary to elevated intravascular hydrostatic pressure in cardiogenic edema resulting from heart failure, increased permeability of pulmonary microvessels due to acute lung injury or acute respiratory distress syndrome, and other causes [4,5]. Since the beginning of the coronavirus disease-2019 (COVID-19) pandemic, CXR imaging has played a crucial role in the evaluation of lung conditions: it is easily accessible, involves low radiation exposure as a primary screening tool, and offers a low-cost assessment of pneumonia, one of the typical symptoms of COVID-19 [6,7,8]. The importance of the CXR examination in daily medicine is beyond doubt. However, both CTR and the distribution of lung water change with the patient's position during CXR imaging. For example, if the patient is able to walk independently, CXR imaging is performed in the standing position; therefore, if lung water is detected, it is concentrated in the lower lung field owing to the gravitational effect. In contrast, in a patient who has difficulty standing, such as a critically ill patient, CXR imaging is performed in the supine position, and any lung water is therefore distributed over the entire dorsal side of the lungs. In other words, the patient's position is an important factor in the accurate assessment of CXR images in clinical practice. Therefore, the patient's position should be reported to readers immediately after CXR imaging.
However, most of these tasks depend on manual operations by radiological technologists. For example, technologists add a marker showing the patient's position, such as "standing", "sitting", or "lying", to the free space of the CXR image immediately after imaging. Although these markers allow readers, radiologists, and referring physicians to identify the imaging conditions at a glance across a large number of clinical images, human error may occur at some point in this process, and a misunderstood patient position can cause serious incidents in clinical practice [9]. For instance, imaging findings may be misinterpreted when the patient's position is misunderstood. Moreover, a CXR may be output with left-right reversal when the irradiation direction set on the X-ray console does not match the actual one, for example, when the console parameters for the standing position are used for imaging in another position. Despite the great efforts of radiological technologists, such errors are encountered quite often in clinical practice. Therefore, an automatic, highly accurate system that requires no manual operation is desired.
Artificial intelligence (AI) has been applied to the automatic assessment of medical images in recent years [10,11]. A large amount of image data is needed to construct a high-accuracy model in machine learning (ML) and deep learning (DL). From this viewpoint, CXR, a traditional and common examination, may be well suited to attempts to apply ML and DL methods [12]. Meanwhile, Marcus et al. argue that replacing human workers with AI is difficult when viewed through the core frameworks of human knowledge: time, space, causality, and basic knowledge of physical objects, humans, and their interactions [13]. Hence, the medical field has been trying to implement AI specialized for specific tasks. We planned to construct DL models focused on the quality assurance (QA) of CXR.
Previous studies reported the correction of radiographs using AI [14,15]. We hypothesized that a QA system for CXR could be developed by combining DL models. The classification method can estimate the orientation of the CXR and the patient's position, and the regression method can assess the angle of the CXR. Therefore, we attempted to construct an automatic QA system combining the classification and regression methods. We expected the system to correct the orientation, angle, and left-right reversal of the CXR and to estimate the patient's position. Finally, the QA system adds a marker indicating the patient's position to the free space of the CXR. In other words, the manual work of technologists is completely replaced by automation in the QA system. If the QA system offers high accuracy in the management of CXR, it can play an important role in preventing serious medical accidents and can be one of the most effective techniques for improving the overall quality of medical procedures and contributing to patient safety. This study aimed to construct a DL-based QA system for CXR and assess its usefulness.

2. Materials and Methods

This study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the local Institutional Review Board of the Otaru General Hospital (04–022). All the patients or their families provided oral or written informed consent to participate in this study.

2.1. Concept of a QA System for CXR

The workflow outline of the QA system is shown in Figure 1, and the datasets are summarized in Table 1. The QA system consisted of four models: (1) correction of orientation, (2) correction of angle, (3) correction of left–right reversal, and (4) judgment of the patient's position. The regression method was used for the correction of the angle, and the other corrections were performed using the classification method.
In this QA system, each image is programmed to undergo correction at each step according to the judgment of each DL model. Finally, the QA system adds a marker to the free space of the CXR to inform readers of the patient's position according to the judgment of the DL model. Markers were added at the upper-right corner of the images.
The QA system consisted of four steps. Images were corrected in their orientation, angle, and left–right reversal, then the patient’s position was estimated through the QA system. The QA system was constructed by combining the classification method and the regression method. QA, quality assurance.
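For illustration, the four-step flow can be sketched in Python as follows. This is a minimal sketch, not the authors' in-house MATLAB implementation; the model objects and their predict methods are hypothetical stand-ins for the trained CNNs.

```python
# Minimal sketch of the four-step QA pipeline (hypothetical model objects;
# the authors' actual implementation was in-house MATLAB software).
import numpy as np
from scipy.ndimage import rotate

def run_qa_pipeline(image, orientation_model, angle_model,
                    reversal_model, position_model):
    """Correct a CXR step by step, then estimate the patient's position."""
    # Step 1: classify the coarse orientation (0/90/180/270 degrees) and undo it.
    k = orientation_model.predict(image)      # class index: number of 90-degree turns
    image = np.rot90(image, k=-k)

    # Step 2: regress the residual fine angle and rotate it out.
    angle = angle_model.predict(image)        # degrees, roughly -25 to 25
    image = rotate(image, -angle, reshape=False)

    # Step 3: detect a horizontal (left-right) flip and undo it.
    if reversal_model.predict(image):         # True if the image is flipped
        image = np.fliplr(image)

    # Step 4: estimate the patient's position for the marker.
    position = position_model.predict(image)  # 'standing', 'sitting', or 'lying'
    return image, position
```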

2.2. Datasets and Preprocessing of Images

Chest-14 data [16,17,18] and clinical images were used for this study. Only the patient's position was evaluated using the clinical data of Otaru General Hospital; the other datasets were constructed from Chest-14. To focus on adult patients, data of children aged <15 years were carefully excluded from Chest-14. Inappropriate images were also excluded from Chest-14 based on a visual review by four radiological technologists. Two radiological technologists confirmed that no subject had left-right organ reversal. All images were converted into 224 × 224 matrices in Joint Photographic Experts Group (JPEG) format for input to the CNNs.
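A minimal Python sketch of this preprocessing step follows; the bilinear interpolation and the PNG-to-JPEG file layout are assumptions, as the authors' MATLAB pipeline is not described at that level of detail.

```python
# Convert radiographs to 224 x 224 JPEGs, as described in Section 2.2.
# Interpolation mode and directory layout are illustrative assumptions.
from pathlib import Path
from PIL import Image

def preprocess(src_dir: str, dst_dir: str, size: int = 224) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.png")):  # Chest-14 distributes PNGs
        img = Image.open(path).convert("L")           # single-channel grayscale
        img = img.resize((size, size), Image.BILINEAR)
        img.save(out / (path.stem + ".jpg"), "JPEG")
```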

2.3. Dataset for the Orientation Correction

Through the above processing, a total of 20,000 images were prepared for the orientation correction model. To avoid training on similar images, all images were selected from the first examination of each patient in Chest-14. The images were divided into 16,000 training images and 4000 test images. The dataset included 1/4 each of 0°, 90°, 180°, and 270° rotations, and half of the images were flipped horizontally.
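As an illustration, one way to generate a labelled sample under the stated scheme (1/4 per rotation class, half of the images flipped) is the following sketch; the exact sampling procedure used by the authors is not specified.

```python
# Build one labelled sample for the orientation classifier (Section 2.3).
import random
import numpy as np

def make_orientation_sample(image: np.ndarray):
    k = random.randrange(4)        # 0..3 -> 0, 90, 180, 270 degree rotations
    sample = np.rot90(image, k=k)
    if random.random() < 0.5:      # half of the images are flipped horizontally
        sample = np.fliplr(sample)
    return sample, k               # k serves as the four-class label
```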

2.4. Dataset for the Angle Correction

In the same way as the dataset for the orientation correction, a total of 20,000 images were prepared from the first examination of each patient in Chest-14. Half of the images were flipped horizontally. All images were randomly rotated between −25 and 25 degrees by computer processing. The images were divided into 16,000 training images and 4000 testing images.
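A corresponding sketch for the angle-regression samples, where the randomly drawn angle itself serves as the continuous training target (the rotation routine and fill behavior are assumptions):

```python
# Build one labelled sample for the angle-regression model (Section 2.4).
import random
import numpy as np
from scipy.ndimage import rotate

def make_angle_sample(image: np.ndarray):
    if random.random() < 0.5:                      # half of the images are flipped
        image = np.fliplr(image)
    angle = random.uniform(-25.0, 25.0)            # continuous target in degrees
    rotated = rotate(image, angle, reshape=False)  # keep the 224 x 224 matrix
    return rotated, angle
```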

2.5. Dataset for the Left–Right Reversal Correction

A total of 20,000 images were prepared from the first examination of each patient in Chest-14. Half of the images were flipped horizontally by computer processing, and the images were divided into 16,000 training images and 4000 testing images.
Horizontal flipping of chest radiographs can occur due to incorrect parameter settings on the X-ray console. For example, if a chest radiograph is taken in the sitting or lying position using the console's parameters for the standing position, the resulting image may be flipped horizontally.

2.6. Dataset for the Judgment of the Patient’s Position

Clinical images obtained at Otaru General Hospital were used for the dataset for the judgment of the patient's position. A total of 3000 images were prepared for each position: standing, sitting, and lying. For each position, 2600 images were used for training and 400 for testing. Thus, a total of 7800 training images and 1200 testing images were prepared.

2.7. Dataset for Overall Test

The QA system was constructed by combining the models trained on the above datasets. To assess the overall quality of the QA system, a total of 120 clinical images, comprising 40 images of each position, were prepared. A random left–right inversion was applied to 30% of the images, and all images were randomly rotated between 0 and −359 degrees. The dataset therefore contained a mixture of images in various states. In pre-processing of the clinical images, the existing markers indicating the patient's position in the upper-left space of the CXR were masked with a black cover.

2.8. Training for Creating Models

The software for building the DL method was developed in-house with MATLAB (The MathWorks, Inc., Natick, MA, USA), and a desktop computer with an NVIDIA GeForce RTX 3080 graphics card (NVIDIA Corporation, Santa Clara, CA, USA) was used. Three convolutional neural networks (CNNs) available in MATLAB were used for image training: VGG-16 (Visual Geometry Group) [19], ResNet-50 (Residual Neural Network) [20], and an original CNN. The original CNN was a simple network constructed by reducing the convolution layers of VGG-16; its architecture is shown in Figure 2. The classification method was applied to all tasks except the angle correction, for which the regression method was applied. In the regression method, the final layer was replaced with a regression layer with an output size of 1, and a dropout layer was added in front of the regression layer. The optimizer was Adam.
The loss function was cross-entropy for the classification method and half-mean squared error for the regression method. Fivefold cross-validation was performed for all training. The following hyperparameters were used: maximum number of training epochs, 30; initial learning rate, 0.0001; learning rate drop period, 5; and learning rate drop factor, 0.2. Early stopping was used with a validation patience of 10, and the batch size was 16. In the classification method, the model with the highest area under the curve (AUC) was selected as the best model; in the regression method, the model with the lowest mean absolute error (MAE) was selected.
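As a concrete illustration of the regression setup, a minimal PyTorch sketch with the stated hyperparameters is shown below. The authors trained their models with in-house MATLAB software, so this translation, the dropout rate of 0.5, and the use of nn.MSELoss (which differs from MATLAB's half-mean squared error only by a constant factor of 2) are assumptions.

```python
# Sketch of the regression variant: a ResNet-50 backbone whose final layer
# is replaced by dropout plus a single-output regression head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)
in_features = model.fc.in_features                  # 2048 for ResNet-50
model.fc = nn.Sequential(
    nn.Dropout(p=0.5),                              # dropout rate is an assumption
    nn.Linear(in_features, 1),                      # regression layer, output size 1
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial LR 0.0001
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=5, gamma=0.2)              # drop period 5, factor 0.2
criterion = nn.MSELoss()   # MATLAB's half-MSE differs only by a factor of 2

for epoch in range(30):                             # maximum of 30 epochs
    # ... iterate over mini-batches of 16 images, compute
    #     criterion(model(x), y), backpropagate, and step the optimizer ...
    # Early stopping: abort once the validation loss has not improved
    # for 10 consecutive validation checks (validation patience 10).
    scheduler.step()
```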

2.9. Overall Test

We performed the overall test to evaluate the accuracy of our QA system. The QA system corrected the orientation, left-right reversal, and angle, then estimated the patient's position. Finally, a marker was added to the free space of the image according to the patient's position. The image correction was confirmed by visual evaluation by two radiological technologists with 20 and 5 years of experience. Precision, recall, and overall accuracy in estimating the patient's position were calculated, and the mean processing time of the QA system was measured. The dataset comprised the 120 clinical images described above.
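A short sketch of how these metrics can be computed with scikit-learn; the label vectors below are placeholders, not the study's data.

```python
# Per-class precision/recall and overall accuracy for the position estimate.
from sklearn.metrics import accuracy_score, precision_score, recall_score

labels = ["standing", "sitting", "lying"]
y_true = ["standing", "sitting", "lying", "sitting"]   # placeholder ground truth
y_pred = ["standing", "sitting", "lying", "lying"]     # placeholder predictions

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred, labels=labels, average=None)
recall = recall_score(y_true, y_pred, labels=labels, average=None)
print(accuracy, dict(zip(labels, precision)), dict(zip(labels, recall)))
```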

2.10. Statistical Analysis

All continuous variables are shown as means ± standard deviations, regardless of the datasets. The Steel–Dwass test was used to compare the performance of the CNNs after the Kruskal–Wallis test. A p-value < 0.05 was considered statistically significant. R version 4.1.1 was used for all statistical analyses and figure creation.
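For reference, the omnibus step can be sketched as follows. The authors used R, and SciPy does not provide the Steel–Dwass post-hoc test, so only the Kruskal–Wallis stage is shown, with placeholder fold scores.

```python
# Kruskal-Wallis omnibus test across the three CNNs' fold-wise scores.
# The values are placeholders; the Steel-Dwass post-hoc was run in R.
from scipy.stats import kruskal

resnet50_acc = [0.99, 0.98, 0.99, 0.99, 0.98]   # placeholder fold accuracies
vgg16_acc = [0.97, 0.96, 0.97, 0.96, 0.97]
original_acc = [0.95, 0.94, 0.95, 0.96, 0.94]

statistic, p_value = kruskal(resnet50_acc, vgg16_acc, original_acc)
print(f"H = {statistic:.3f}, p = {p_value:.4f}")  # significant if p < 0.05
```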

3. Results

3.1. Training for the Created Models

The accuracy and AUC are compared in Figure 3. In the classification method, ResNet-50 showed a tendency toward higher accuracy and AUC in all corrections. In the regression method, the original CNN showed a significantly lower MAE in degrees and a higher correlation coefficient among the three CNNs. The performance of the best models is shown in Table 2.

3.2. Overall Test

Examples of before-and-after correction using the QA system are shown in Figure 4, and the confusion matrix of the overall test is shown in Figure 5. All images were corrected to the normal orientation from various states, and markers indicating the patient's position at the time of CXR imaging were automatically added at the upper-right corner of the images. The overall accuracy was 95.8%, and the correction time was 0.39 ± 0.074 s. All images in the standing position were correctly classified, while a few images were confused between the sitting and lying positions.

4. Discussion

We developed a QA system for CXR imaging using DL methods. The orientation, angle, and left–right reversal of the CXR were corrected completely within approximately 0.4 s. In addition, the patient's position, namely standing, sitting, or lying, was estimated with 96% accuracy.
Fonseca et al. achieved 99.4% accuracy in the orientation correction of pediatric CXR using a machine learning method [14]. In contrast, we achieved 100% accuracy in the correction of orientation, angle, and left-right reversal of CXR, suggesting the advantage of using CNNs for the correction of CXR. Hržić et al. reported highly accurate correction of radiographs of various body parts using CNNs, achieving 99.3% accuracy with VGG-16 and 0.02 s of processing on a GPU [15]. Meanwhile, we proposed a specific QA system with 100% geometric correction for CXR. This result suggests that it might be better to construct a QA system for each body part. The processing time will not be a major problem for routine clinical use, because under 1 s is clearly faster than manual operation by technologists.
Overall, the main advantage of our study compared with the above studies is that we combined four simple models to construct the QA system. Moreover, we also estimated the patient's position, which is directly related to the imaging findings. We believe that our study proposes a novel, comprehensive QA system for CXR based on the DL method.
The constructed QA system combined classification and regression using CNNs. ResNet-50 demonstrated better performance in the classification tasks: orientation correction, left-right reversal correction, and patient position estimation. We previously confirmed the good compatibility of ResNet-50 with the classification of medical images in magnetic resonance imaging [21], and the same trend was confirmed in this study.
On the contrary, the original CNN, which has a simple layer structure, showed significantly higher performance with the regression method for correcting the angle of CXR images. We consider that the original CNN could avoid overtraining compared with the other CNNs in the regression of angle correction; ResNet-50 might have too many layers for the angle correction task. The accuracy of the entire QA system can be assured by using the appropriate model for each task. In the detection of pneumonia in CXR using DL, Szepesi et al. achieved extremely high accuracy (97%) using a modified VGG-16, which strongly suggests that VGG-16-like CNNs are compatible with CXR and can exhibit high performance [22]. They also showed the efficacy of dropout in CNNs, and our CNNs may be refined according to their method.
The QA system should be built into the X-ray console system: the CXR is corrected immediately, and the patient's position marker is assigned. Radiological technologists then confirm the corrected CXR and send it to the picture archiving and communication system.
Recently, medical errors have become a serious concern in clinical practice. In the United States, medical errors claim more lives in hospitals than motor vehicle accidents, breast cancer, or AIDS [10]. According to the Swiss cheese model and Heinrich's law, many minor incidents are hidden behind serious medical accidents. Therefore, preventing minor incidents is directly related to preventing major accidents.
Under these circumstances, artificial intelligence has been employed to develop automatic QA systems in the medical field. Claessens et al. summarized the current status of AI-based QA systems in radiotherapy, a field in which various AI technologies have been applied, for example, auto-segmentation, image registration, auto-planning, CT image generation, patient QA, and machine QA. They suggested the importance of the correct use of case-specific and routine QA in clinical practice [23]. Chan et al. explored machine learning approaches, highlighting specific applications in machine and patient-specific QA, and introduced the clinical usefulness and limitations of machine learning in QA systems [24]. They argued for the necessity of a sanity check and a second check before the clinical use of AI-based QA systems and suggested that the limitations of both the data and the ML models should be addressed [24]. Thus, we believe that total reliance on AI-based QA systems should be avoided, especially in radiography, where AI-based QA systems are less developed than in radiotherapy. In this sense, our QA system for CXR will work as a powerful tool for double-checking together with radiological technologists.
A large number of radiographs are taken in daily practice; therefore, a QA system for radiographs is sure to make a significant contribution to clinical practice. However, most studies of radiographs using AI technology have aimed at diagnosis using the classification method or at estimating quantitative values using the regression method. We considered that these techniques should be applied to the construction of a QA system for radiographs. To our knowledge, no study has reported an automatic QA system for CXR imaging that automatically corrects the orientation and angle and estimates the patient's position. Hence, we attempted to develop a QA system for CXR, the most common radiograph in clinical practice.
One of the reasons CXR was selected was that a large dataset, Chest-14, which contains over one hundred thousand CXR images, was available, and many clinical images were also stored on our institution's server. To create a DL model with high accuracy, a large dataset is necessary. In addition, the CXR has essentially one composition pattern, with the chest in the center of the image, and the variation between images is extremely small compared with other body parts such as the extremities, so we did not need to prepare images of many patterns. This works as an advantage in the preparation of datasets and in the model training process. Therefore, CXR was suitable for a novel attempt to construct a QA system based on AI.
We excluded repeat images of the same patients from Chest-14 to avoid training on similar images. Many images became unusable, but this process was necessary to avoid overfitting to specific images. The dataset appears to have been of sufficient size for this task, because complete correction was achieved for the orientation, angle, and left-right reversal. Chest-14 does not provide detailed information about the patient's position at the time of imaging, such as standing, sitting, or lying. Therefore, we used the clinical images of our institution as the dataset for training the estimation of the patient's position, even though this dataset was small compared with the others. As a result, we obtained 96% accuracy in the estimation of the patient's position. We believe that the dataset size was sufficient for this study.
Meanwhile, a few images were confused between the sitting and lying positions in this system. This is because strict positioning management was not enforced. In addition, in both the sitting and lying positions, X-rays are irradiated from the front side of the patient; therefore, CXR images in the two positions may become very similar. From the clinical perspective, the degree of body raising is adjusted according to the patient's respiratory status. We think that the blurring of the boundary between the sitting and lying positions is, to some extent, unavoidable.
On the other hand, the standing position was completely classified; thus, the direction of X-ray irradiation was fully distinguished. The direction of X-ray irradiation is directly related to CXR findings, such as the distribution of pleural effusions, air inclusions, and the measurement of CTR. Therefore, an accurate estimation of the irradiation direction is an important factor in CXR management in clinical practice. Moreover, the orientation, image angle, and left-right reversal were fully corrected by the QA system, and the processing time is clearly faster than manual operation. These facts demonstrate the usefulness of the QA system.
This study has several limitations. First, it was performed at a single institution. Further investigation is desired to evaluate the generalization performance of the QA system, and larger datasets, including clinical images from other institutions, should be used for model training. Moreover, we excluded images of children under 15 years from the datasets. This processing may lead to a statistical bias compared with the actual clinical setting, although it is essential for achieving a high-accuracy model for CXR in adult patients [12].
Second, all clinical images were prepared retrospectively. Therefore, a black mask is present in the corner of each image to hide the previous marker corresponding to the patient's position. This mask might work as a landmark for correcting the orientation, angle, and left-right reversal, although the correction models were created from Chest-14 data, which do not have such a mask. Further investigation using unmasked CXR images is warranted. Third, all evaluations were performed using a GPU machine. A portable X-ray console PC without a GPU may prolong the processing time, so a clinical trial should be attempted using a daily-use X-ray console system.
Fourth, this study aimed to construct a specific AI for the QA of CXR. Hence, this system cannot fully replace human work, given the differences between AI models and human cognition [13]. Double-checking should therefore still be enforced by both the QA system and the technologists.

5. Conclusions

This study showed that deep learning models can perform quality assurance for CXR. The QA system was constructed by combining the classification method and the regression method: the regression method was used for the correction of the angle, and the other corrections were performed using the classification method. The orientation, angle, and left-right reversal were fully corrected. Finally, the QA system added the marker indicating the patient's position at the time of CXR imaging, namely standing, sitting, or lying. The patient's position was estimated with 96% accuracy, and the correction processing took approximately 0.4 s. This system will make a large contribution to clinical practice.

Author Contributions

Conceptualization, D.O.; methodology, D.O.; software, D.O.; validation, S.S., Y.H. and S.K.; investigation, S.S., Y.H. and S.K.; writing—original draft preparation, D.O.; writing—review and editing, H.S.; visualization, D.O.; supervision, H.S.; project administration, D.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the local Institutional Review Board of the Otaru General Hospital (04–022). All the patients or their families provided oral or written informed consent to participate in this study.

Informed Consent Statement

Written or oral consent was obtained from all subjects involved in the study.

Data Availability Statement

We used the Chest-14 dataset (https://nihcc.app.box.com/v/ChestXray-NIHCC, accessed on 4 January 2023). The clinical images are not available.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
AUC: Area under the curve
CNN: Convolutional neural network
COVID-19: Coronavirus disease-2019
CT: Computed tomography
CTR: Cardiothoracic ratio
CXR: Chest X-ray
DL: Deep learning
JPEG: Joint Photographic Experts Group
MAE: Mean absolute error
ML: Machine learning
MRI: Magnetic resonance imaging
QA: Quality assurance

References

  1. Loomba, R.S.; Shah, P.H.; Nijhawan, K.; Aggarwal, S.; Arora, R. Cardiothoracic ratio for prediction of left ventricular dilation: A systematic review and pooled analysis. Future Cardiol. 2015, 11, 171–175. [Google Scholar] [CrossRef] [PubMed]
  2. Truszkiewicz, K.; Poręba, M.; Poręba, R.; Gać, P. Radiological Cardiothoracic Ratio as a Potential Predictor of Right Ventricular Enlargement in Patients with Suspected Pulmonary Embolism Due to COVID-19. J. Clin. Med. Res. 2021, 10, 5703. [Google Scholar] [CrossRef] [PubMed]
  3. Yotsueda, R.; Taniguchi, M.; Tanaka, S.; Eriguchi, M.; Fujisaki, K.; Torisu, K.; Masutani, K.; Hirakata, H.; Kitazono, T.; Tsuruya, K. Cardiothoracic Ratio and All-Cause Mortality and Cardiovascular Disease Events in Hemodialysis Patients: The Q-Cohort Study. Am. J. Kidney Dis. 2017, 70, 84–92. [Google Scholar] [CrossRef]
  4. Cardinale, L.; Priola, A.M.; Moretti, F.; Volpicelli, G. Effectiveness of chest radiography, lung ultrasound and thoracic computed tomography in the diagnosis of congestive heart failure. World J. Radiol. 2014, 6, 230–237. [Google Scholar] [CrossRef] [PubMed]
  5. Gupta, R.K.; Newbould, R.D.; Matthews, P.M. Methods of Measuring Lung Water. Pediatr. Crit. Care Med. 2012, 13, 209–215. [Google Scholar] [CrossRef]
  6. Liu, T.Y.; Rai, A.; Ditkofsky, N.; Deva, D.P.; Dowdell, T.R.; Ackery, A.D.; Mathur, S. Cost benefit analysis of portable chest radiography through glass: Initial experience at a tertiary care centre during COVID-19 pandemic. J. Med. Imaging Radiat. Sci. 2021, 52, 186–190. [Google Scholar] [CrossRef]
  7. Rorat, M.; Zińczuk, A.; Szymański, W.; Simon, K.; Guziński, M. Usefulness of a portable chest radiograph in the initial diagnosis of coronavirus disease 2019. Pol. Arch. Intern. Med. 2020, 130, 906–909. [Google Scholar]
  8. Saez de Gordoa, E.; Portella, A.; Escudero-Fernández, J.M.; Andreu Soriano, J. Usefulness of chest X-rays for detecting COVID-19 pneumonia during the SARS-CoV-2 pandemic. Radiologia 2022, 64, 310–316. [Google Scholar] [CrossRef]
  9. Pescarini, L.; Inches, I. Systematic approach to human error in radiology. Radiol. Med. 2006, 111, 252–267. [Google Scholar] [CrossRef]
  10. Kora, P.; Ooi, C.P.; Faust, O.; Raghavendra, U.; Gudigar, A.; Chan, W.Y.; Meenakshi, K.; Swaraja, K.; Plawiak, P.; Rajendra Acharya, U. Transfer Learning Techniques for Medical Image Analysis: A Review. Biocybern. Biomed. Eng. 2022, 42, 79–107. [Google Scholar] [CrossRef]
  11. Mondal, M.R.H.; Bharati, S.; Podder, P. Diagnosis of COVID-19 Using Machine Learning and Deep Learning: A Review. Curr. Med. Imaging Rev. 2021, 17, 1403–1418. [Google Scholar] [CrossRef]
  12. Roccetti, M.; Delnevo, G.; Casini, L.; Cappiello, G. Is Bigger Always Better? A Controversial Journey to the Center of Machine Learning Design, with Uses and Misuses of Big Data for Predicting Water Meter Failures. J. Big Data 2019, 6, 1–23. [CrossRef]
  13. Marcus, G.; Davis, E. Insights for AI from the Human Mind. Commun. ACM 2020, 64, 38–41. [Google Scholar] [CrossRef]
  14. Fonseca, A.; Vieira, G.S.; Felix, J.; Freire Sobrinho, P.; Silva, A.V.P.; Soares, F. Automatic Orientation Identification of Pediatric Chest X-Rays. In Proceedings of the 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 13–17 July 2020. [Google Scholar]
  15. Hržić, F.; Tschauner, S.; Sorantin, E.; Štajduhar, I. XAOM: A Method for Automatic Alignment and Orientation of Radiographs for Computer-Aided Medical Diagnosis. Comput. Biol. Med. 2021, 132, 104300. [Google Scholar] [CrossRef] [PubMed]
  16. Sim, J.Z.T.; Ting, Y.-H.; Tang, Y.; Feng, Y.; Lei, X.; Wang, X.; Chen, W.-X.; Huang, S.; Wong, S.-T.; Lu, Z.; et al. Diagnostic Performance of a Deep Learning Model Deployed at a National COVID-19 Screening Facility for Detection of Pneumonia on Frontal Chest Radiographs. Healthcare 2022, 10, 175. [Google Scholar] [CrossRef] [PubMed]
  17. Lenga, M.; Schulz, H.; Saalbach, A. Continual Learning for Domain Adaptation in Chest X-Ray Classification. In Proceedings of the Third Conference on Medical Imaging with Deep Learning, PMLR, Montreal, QC, Canada, 6–8 July 2020; Volume 121, pp. 413–423. [Google Scholar]
  18. Baltruschat, I.M.; Nickisch, H.; Grass, M.; Knopp, T.; Saalbach, A. Comparison of Deep Learning Approaches for Multi-Label Chest X-Ray Classification. Sci. Rep. 2019, 9, 1–10. [Google Scholar] [CrossRef]
  19. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  20. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  21. Sugimori, H.; Hamaguchi, H.; Fujiwara, T.; Ishizaka, K. Classification of type of brain magnetic resonance images with deep learning technique. Magn. Reson. Imaging 2021, 77, 180–185. [Google Scholar] [CrossRef]
  22. Szepesi, P.; Szilágyi, L. Detection of Pneumonia Using Convolutional Neural Networks and Deep Learning. Biocybern. Biomed. Eng. 2022, 42, 1012–1022. [Google Scholar] [CrossRef]
  23. Claessens, M.; Oria, C.S.; Brouwer, C.L.; Ziemer, B.P.; Scholey, J.E.; Lin, H.; Witztum, A.; Morin, O.; El Naqa, I.; Van Elmpt, W.; et al. Quality Assurance for AI-Based Applications in Radiation Therapy. Semin. Radiat. Oncol. 2022, 2, 421–431. [Google Scholar] [CrossRef]
  24. Chan, M.F.; Witztum, A.; Valdes, G. Integration of AI and machine learning in radiotherapy QA. Front. Artif. Intell. 2020, 3, 577620. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Concept of the QA system.
Figure 2. Outline of the original convolutional neural network.
Figure 3. Summary of the accuracy and area under the curve of the receiver operating characteristics analysis. * p < 0.05.
Figure 4. Images of each patient's position before and after evaluation through the QA system.
Figure 5. Confusion matrix of the overall test.
Table 1. Summary of datasets for each training and the overall test.

| Task | Method | Source | Train Images | Test Images | Remarks | Pre-Processing |
|---|---|---|---|---|---|---|
| Orientation | Classification | Chest-14 | 16,000 | 4000 | Including 1/4 each of 0°, 90°, 180°, and 270° rotations. | Images under 15 years old and inappropriate images by visual review were excluded; the first exam of each patient was used. |
| Angle | Regression | Chest-14 | 16,000 | 4000 | Random rotation from −25 to 25 degrees. | Same as above. |
| Left-right reversal | Classification | Chest-14 | 16,000 | 4000 | Half of the images underwent horizontal flips. | Same as above. |
| Patient's position | Classification | Clinical images | 7800 | 1200 | Including 3000 images of each position (standing, sitting, and lying). | DICOM images were converted to JPEG format. |
| Overall test | — | Clinical images | — | 120 | Including 40 images of each position (standing, sitting, and lying); 30% of the images underwent horizontal flips; random rotation from 0 to −359 degrees. | DICOM images were converted to JPEG format; the existing marker was masked. |
Table 2. Summary of the accuracy and area under the curve of the best-performing models.

| Method | Task | Best CNN | Accuracy | AUC |
|---|---|---|---|---|
| Classification | Orientation | ResNet-50 | 1.0000 | 1.0000 |
| Classification | Left-right reversal | ResNet-50 | 0.9895 | 0.99579 |
| Classification | Position | ResNet-50 | 0.9525 | 0.99017 |

| Method | Task | Best CNN | MAE (degrees) | r |
|---|---|---|---|---|
| Regression | Degree | Original CNN | 0.19020 | 0.99977 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

