J. Imaging, Volume 9, Issue 6 (June 2023) – 19 articles

Cover Story: Similar to classical watermarking techniques for multimedia content, DNN watermarking should be highly robust against attacks such as fine-tuning, neuron pruning, and overwriting. In this study, we extended a method based on a constant weight code so that the watermark can be embedded into any convolution layer of a DNN model, and we designed a watermark detector based on a statistical analysis of the extracted weight parameters to evaluate whether a model is watermarked. The figure displays the statistical distribution of weight parameters in three typical CNN models trained on the ImageNet dataset. Based on the characteristics of these distributions, the parameters for embedding a watermark can be chosen to balance the trade-off among robustness, transparency, and capacity.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 6559 KiB  
Article
Predicting the Tumour Response to Radiation by Modelling the Five Rs of Radiotherapy Using PET Images
by Rihab Hami, Sena Apeke, Pascal Redou, Laurent Gaubert, Ludwig J. Dubois, Philippe Lambin, Dimitris Visvikis and Nicolas Boussion
J. Imaging 2023, 9(6), 124; https://doi.org/10.3390/jimaging9060124 - 20 Jun 2023
Viewed by 1665
Abstract
Despite the intensive use of radiotherapy in clinical practice, its effectiveness depends on several factors. Several studies have shown that the tumour response to radiation differs from one patient to another. The non-uniform response of the tumour is mainly caused by multiple interactions between the tumour microenvironment and healthy cells. To understand these interactions, five major biological concepts called the “5 Rs” have emerged. These concepts include reoxygenation, DNA damage repair, cell cycle redistribution, cellular radiosensitivity, and cellular repopulation. In this study, we used a multi-scale model, which included the five Rs of radiotherapy, to predict the effects of radiation on tumour growth. In this model, the oxygen level was varied in both time and space. When radiotherapy was given, the sensitivity of cells depending on their position in the cell cycle was taken into account. The model also considered cell repair by assigning different probabilities of survival after radiation to tumour and normal cells. We developed four fractionation protocol schemes and used simulated images and positron emission tomography (PET) images acquired with the hypoxia tracer 18F-flortanidazole (18F-HX4) as input data for our model. In addition, tumour control probability curves were simulated. The results showed the evolution of tumour and normal cells. An increase in cell number after radiation was seen in both normal and malignant cells, which shows that repopulation was included in this model. The proposed model predicts the tumour response to radiation and forms the basis for a more patient-specific clinical tool in which related biological data will be included. Full article
(This article belongs to the Topic Medical Image Analysis)
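The abstract describes assigning dose-dependent survival probabilities that differ between tumour and normal cells and depend on oxygenation. As a hedged illustration only, and not the authors' multi-scale model, the sketch below uses the standard linear-quadratic survival model with a crude oxygen enhancement factor; all parameter values (alpha, beta, oer) are illustrative assumptions.

```python
import numpy as np

def lq_survival(dose_gy, alpha, beta, oer=1.0):
    """Linear-quadratic survival fraction for a single dose (Gy).
    `oer` scales the effective dose to mimic the reduced radiosensitivity
    of hypoxic cells (illustrative only, not the paper's model)."""
    d_eff = dose_gy / oer
    return np.exp(-(alpha * d_eff + beta * d_eff ** 2))

# Illustrative (not paper-specific) parameters: tumour cells are assumed
# more radiosensitive than normal cells, hypoxic tumour cells less so.
dose = 2.0  # Gy per fraction
print("normal cells  :", lq_survival(dose, alpha=0.15, beta=0.030))
print("oxic tumour   :", lq_survival(dose, alpha=0.35, beta=0.035))
print("hypoxic tumour:", lq_survival(dose, alpha=0.35, beta=0.035, oer=2.5))
```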
12 pages, 1084 KiB  
Article
Segmentation of 4D Flow MRI: Comparison between 3D Deep Learning and Velocity-Based Level Sets
by Armando Barrera-Naranjo, Diana M. Marin-Castrillon, Thomas Decourselle, Siyu Lin, Sarah Leclerc, Marie-Catherine Morgant, Chloé Bernard, Shirley De Oliveira, Arnaud Boucher, Benoit Presles, Olivier Bouchot, Jean-Joseph Christophe and Alain Lalande
J. Imaging 2023, 9(6), 123; https://doi.org/10.3390/jimaging9060123 - 19 Jun 2023
Viewed by 1755
Abstract
A thoracic aortic aneurysm is an abnormal dilatation of the aorta that can progress and lead to rupture. The decision to conduct surgery is made by considering the maximum diameter, but it is now well known that this metric alone is not completely reliable. The advent of 4D flow magnetic resonance imaging (MRI) has allowed for the calculation of new biomarkers for the study of aortic diseases, such as wall shear stress. However, the calculation of these biomarkers requires the precise segmentation of the aorta during all phases of the cardiac cycle. The objective of this work was to compare two different methods for automatically segmenting the thoracic aorta in the systolic phase using 4D flow MRI. The first method is based on a level set framework and uses the velocity field in addition to 3D phase contrast magnetic resonance imaging. The second method is a U-Net-like approach that is applied only to the magnitude images from 4D flow MRI. The dataset was composed of 36 exams from different patients, with ground truth data for the systolic phase of the cardiac cycle. The comparison was performed based on selected metrics, such as the Dice similarity coefficient (DSC) and Hausdorff distance (HD), for the whole aorta and for three aortic regions. Wall shear stress was also assessed, and the maximum wall shear stress values were used for comparison. The U-Net-based approach provided statistically better results for the 3D segmentation of the aorta, with a DSC of 0.92 ± 0.02 vs. 0.86 ± 0.5 and an HD of 21.49 ± 24.8 mm vs. 35.79 ± 31.33 mm for the whole aorta. The absolute difference between the wall shear stress and the ground truth slightly favored the level set method, but not significantly (0.754 ± 1.07 Pa vs. 0.737 ± 0.79 Pa). These results show that the deep learning-based method should be considered for the segmentation of all time steps in order to evaluate biomarkers based on 4D flow MRI. Full article
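The comparison in this paper relies on the Dice similarity coefficient and the Hausdorff distance. A minimal sketch of these two evaluation metrics for binary masks follows; the toy masks and voxel spacing are assumptions standing in for real aorta segmentations.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between two binary masks."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy 3D masks standing in for aorta segmentations (illustrative only).
gt = np.zeros((32, 32, 32), dtype=bool); gt[8:24, 8:24, 8:24] = True
pred = np.zeros_like(gt); pred[9:25, 8:24, 8:24] = True
print(f"DSC = {dice(pred, gt):.3f}, HD = {hausdorff(pred, gt):.1f} mm")
```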
18 pages, 1746 KiB  
Article
A Robust Approach to Multimodal Deepfake Detection
by Davide Salvi, Honggu Liu, Sara Mandelli, Paolo Bestagini, Wenbo Zhou, Weiming Zhang and Stefano Tubaro
J. Imaging 2023, 9(6), 122; https://doi.org/10.3390/jimaging9060122 - 19 Jun 2023
Cited by 6 | Viewed by 3740
Abstract
The widespread use of deep learning techniques for creating realistic synthetic media, commonly known as deepfakes, poses a significant threat to individuals, organizations, and society. As the malicious use of these data could lead to unpleasant situations, it is becoming crucial to distinguish between authentic and fake media. Nonetheless, although deepfake generation systems can create convincing images and audio, they may struggle to maintain consistency across different data modalities, such as producing a realistic video sequence in which both the visual frames and the speech are fake yet consistent with one another. Moreover, these systems may not accurately reproduce semantic and temporal aspects of the content. All of these elements can be exploited to perform robust detection of fake content. In this paper, we propose a novel approach for detecting deepfake video sequences by leveraging data multimodality. Our method extracts audio-visual features from the input video over time and analyzes them using time-aware neural networks. We exploit both the video and audio modalities to leverage the inconsistencies between and within them, enhancing the final detection performance. The peculiarity of the proposed method is that we never train on multimodal deepfake data, but on disjoint monomodal datasets that contain visual-only or audio-only deepfakes. This frees us from relying on multimodal datasets during training, which is desirable given their scarcity in the literature. Moreover, at test time, it allows us to evaluate the robustness of the proposed detector on unseen multimodal deepfakes. We test different fusion techniques between data modalities and investigate which one leads to more robust predictions by the developed detectors. Our results indicate that a multimodal approach is more effective than a monomodal one, even if trained on disjoint monomodal datasets. Full article
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
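The paper tests several fusion strategies between the audio and video detector outputs; those exact strategies are not reproduced here. The sketch below shows a generic score-level (late) fusion of per-segment fakeness scores from two monomodal detectors, including a simple mismatch cue; all score values are made-up placeholders.

```python
import numpy as np

def late_fusion(video_scores, audio_scores, strategy="mean"):
    """Fuse per-segment fakeness scores from two monomodal detectors.
    Scores are assumed to lie in [0, 1]; higher means 'more likely fake'."""
    v, a = np.asarray(video_scores), np.asarray(audio_scores)
    if strategy == "mean":
        fused = (v + a) / 2.0
    elif strategy == "max":          # flag if either modality looks fake
        fused = np.maximum(v, a)
    elif strategy == "mismatch":     # large |v - a| hints at cross-modal inconsistency
        fused = np.abs(v - a)
    else:
        raise ValueError(strategy)
    return float(fused.mean())       # video-level decision score

video = [0.1, 0.2, 0.9, 0.8]   # e.g. per-window video detector outputs (made up)
audio = [0.7, 0.8, 0.9, 0.9]   # e.g. per-window audio detector outputs (made up)
for s in ("mean", "max", "mismatch"):
    print(s, round(late_fusion(video, audio, s), 3))
```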
13 pages, 1800 KiB  
Article
Live Cell Light Sheet Imaging with Low- and High-Spatial-Coherence Detection Approaches Reveals Spatiotemporal Aspects of Neuronal Signaling
by Mariana Potcoava, Donatella Contini, Zachary Zurawski, Spencer Huynh, Christopher Mann, Jonathan Art and Simon Alford
J. Imaging 2023, 9(6), 121; https://doi.org/10.3390/jimaging9060121 - 16 Jun 2023
Viewed by 1226
Abstract
Light sheet microscopy in live cells requires minimal excitation intensity and resolves three-dimensional (3D) information rapidly. Lattice light sheet microscopy (LLSM) works similarly but uses a lattice configuration of Bessel beams to generate a flatter, diffraction-limited z-axis sheet suitable for investigating subcellular compartments, with better tissue penetration. We developed an LLSM method for investigating cellular properties of tissue in situ. Neural structures provide an important target. Neurons are complex 3D structures, and signaling between cells and subcellular structures requires high-resolution imaging. We developed an LLSM configuration based on the Janelia Research Campus design for in situ recording that allows simultaneous electrophysiological recording. We give examples of using LLSM to assess synaptic function in situ. In presynapses, evoked Ca2+ entry causes vesicle fusion and neurotransmitter release. We demonstrate the use of LLSM to measure stimulus-evoked localized presynaptic Ca2+ entry and track synaptic vesicle recycling. We also demonstrate the resolution of postsynaptic Ca2+ signaling in single synapses. A challenge in 3D imaging is the need to move the emission objective to maintain focus. We have developed an incoherent holographic lattice light-sheet (IHLLS) technique that replaces the LLS tube lens with a dual diffractive lens to obtain 3D images of spatially incoherent light diffracted from an object as incoherent holograms. The 3D structure is reproduced within the scanned volume without moving the emission objective. This eliminates mechanical artifacts and improves temporal resolution. We focus on LLS and IHLLS applications and data obtained in neuroscience and emphasize increases in temporal and spatial resolution using these approaches. Full article
(This article belongs to the Special Issue Fluorescence Imaging and Analysis of Cellular System)
18 pages, 14851 KiB  
Article
A Computational Approach to Hand Pose Recognition in Early Modern Paintings
by Valentine Bernasconi, Eva Cetinić and Leonardo Impett
J. Imaging 2023, 9(6), 120; https://doi.org/10.3390/jimaging9060120 - 15 Jun 2023
Cited by 1 | Viewed by 2036
Abstract
Hands represent an important aspect of pictorial narration but have rarely been addressed as an object of study in art history and digital humanities. Although hand gestures play a significant role in conveying emotions, narratives, and cultural symbolism in the context of visual art, a comprehensive terminology for the classification of depicted hand poses is still lacking. In this article, we present the process of creating a new annotated dataset of pictorial hand poses. The dataset is based on a collection of European early modern paintings, from which hands are extracted using human pose estimation (HPE) methods. The hand images are then manually annotated based on art historical categorization schemes. From this categorization, we introduce a new classification task and perform a series of experiments using different types of features, including our newly introduced 2D hand keypoint features, as well as existing neural network-based features. This classification task represents a new and complex challenge due to the subtle and contextually dependent differences between depicted hands. The presented computational approach to hand pose recognition in paintings represents an initial attempt to tackle this challenge, which could potentially advance the use of HPE methods on paintings, as well as foster new research on the understanding of hand gestures in art. Full article
(This article belongs to the Special Issue Pattern Recognition Systems for Cultural Heritage)
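The classification experiments use, among other inputs, 2D hand keypoint features obtained from human pose estimation. The sketch below shows one plausible way, assumed here and not necessarily the paper's exact feature, to turn 21 detected (x, y) keypoints into a translation- and scale-invariant descriptor for a downstream classifier.

```python
import numpy as np

def keypoint_features(kpts_xy):
    """Turn 21 (x, y) hand keypoints into a pose descriptor that is
    invariant to translation and scale (a plausible variant, not
    necessarily the feature used in the paper)."""
    k = np.asarray(kpts_xy, dtype=float)          # shape (21, 2)
    k -= k[0]                                     # wrist keypoint as origin
    scale = np.linalg.norm(k, axis=1).max() or 1.0
    return (k / scale).ravel()                    # 42-D feature vector

rng = np.random.default_rng(0)
fake_hand = rng.uniform(0, 1, size=(21, 2))       # stand-in for HPE output
print(keypoint_features(fake_hand).shape)         # (42,)
```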
13 pages, 711 KiB  
Article
Digital Breast Tomosynthesis: Towards Dose Reduction through Image Quality Improvement
by Ana M. Mota, João Mendes and Nuno Matela
J. Imaging 2023, 9(6), 119; https://doi.org/10.3390/jimaging9060119 - 11 Jun 2023
Cited by 1 | Viewed by 1628
Abstract
Currently, breast cancer is the most commonly diagnosed type of cancer worldwide. Digital Breast Tomosynthesis (DBT) has been widely accepted as a stand-alone modality to replace Digital Mammography, particularly in denser breasts. However, the image quality improvement provided by DBT is accompanied by an increase in the radiation dose to the patient. Here, a method based on 2D Total Variation (2D TV) minimization was proposed to improve image quality without the need to increase the dose. Two phantoms were used to acquire data at different dose ranges (0.88–2.19 mGy for Gammex 156 and 0.65–1.71 mGy for our phantom). A 2D TV minimization filter was applied to the data, and the image quality was assessed through the contrast-to-noise ratio (CNR) and the detectability index of lesions before and after filtering. The results showed a decrease in 2D TV values after filtering, with variations of up to 31%, indicating increased image quality. The increase in CNR values after filtering showed that it is possible to use lower doses (−26%, on average) without compromising image quality. The detectability index showed substantial increases (up to 14%), especially for smaller lesions. Thus, the proposed approach not only allowed for the enhancement of image quality without increasing the dose, but also improved the chances of detecting small lesions that could otherwise be overlooked. Full article
(This article belongs to the Section Medical Imaging)
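The paper applies a 2D total-variation minimization filter and evaluates the contrast-to-noise ratio (CNR) before and after filtering. As a rough stand-in only, since the authors' exact TV algorithm and phantom data are not reproduced here, the sketch below runs a generic Chambolle TV denoiser from scikit-image on a synthetic noisy phantom and reports the CNR change; the ROI layout, lesion contrast, and noise level are assumptions.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def cnr(img, lesion_mask, background_mask):
    """Contrast-to-noise ratio between a lesion ROI and a background ROI."""
    signal = img[lesion_mask].mean() - img[background_mask].mean()
    return abs(signal) / img[background_mask].std()

rng = np.random.default_rng(1)
phantom = np.full((128, 128), 0.4) + rng.normal(0, 0.08, (128, 128))
phantom[56:72, 56:72] += 0.1                          # synthetic low-contrast lesion
lesion = np.zeros_like(phantom, bool); lesion[56:72, 56:72] = True
backgr = np.zeros_like(phantom, bool); backgr[8:40, 8:40] = True

filtered = denoise_tv_chambolle(phantom, weight=0.1)  # generic 2D TV filter
print(f"CNR before: {cnr(phantom, lesion, backgr):.2f}")
print(f"CNR after : {cnr(filtered, lesion, backgr):.2f}")
```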
11 pages, 2684 KiB  
Article
Short-Term Precision and Repeatability of Radiofrequency Echographic Multi Spectrometry (REMS) on Lumbar Spine and Proximal Femur: An In Vivo Study
by Carmelo Messina, Salvatore Gitto, Roberta Colombo, Stefano Fusco, Giada Guagliardo, Mattia Piazza, Jacopo Carlo Poli, Domenico Albano and Luca Maria Sconfienza
J. Imaging 2023, 9(6), 118; https://doi.org/10.3390/jimaging9060118 - 11 Jun 2023
Cited by 4 | Viewed by 1213
Abstract
The aim of this study was to determine the short-term intra-operator precision and inter-operator repeatability of radiofrequency echographic multi-spectrometry (REMS) at the lumbar spine (LS) and proximal femur (FEM). All patients underwent an ultrasound scan of the LS and FEM. Both precision and repeatability, expressed as the root-mean-square coefficient of variation (RMS-CV) and least significant change (LSC), were obtained using data from two consecutive REMS acquisitions performed by the same operator or by two different operators, respectively. Precision was also assessed in the cohort stratified according to BMI classification. The mean (±SD) age of our subjects was 48.9 ± 6.8 years for LS and 48.3 ± 6.1 years for FEM. Precision was assessed in 42 subjects at the LS and 37 subjects at the FEM. Mean (±SD) BMI was 24.71 ± 4.2 for LS and 25.0 ± 4.84 for FEM. The intra-operator precision error (RMS-CV) and LSC were 0.47% and 1.29%, respectively, at the spine and 0.32% and 0.89% at the proximal femur. The inter-operator variability investigated at the LS yielded an RMS-CV error of 0.55% and an LSC of 1.52%, whereas for the FEM the RMS-CV was 0.51% and the LSC was 1.40%. Similar values were found when subjects were divided into BMI subgroups. The REMS technique provides a precise estimation of US-BMD independent of differences in subjects’ BMI. Full article
(This article belongs to the Section Medical Imaging)
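Precision and repeatability are reported as the root-mean-square coefficient of variation (RMS-CV) and the least significant change (LSC). A minimal sketch of how these are conventionally computed from paired acquisitions is shown below; the ISCD-style factor 2.77 for the 95% LSC and the example US-BMD values are assumptions, not data from the study.

```python
import numpy as np

def rms_cv_and_lsc(scan1, scan2):
    """Short-term precision from paired measurements (one pair per subject).
    RMS-CV follows the usual densitometry convention; the 95% LSC is taken
    as 2.77 x the precision error (ISCD-style, assumed here)."""
    x1, x2 = np.asarray(scan1, float), np.asarray(scan2, float)
    sd = np.abs(x1 - x2) / np.sqrt(2.0)         # per-subject SD for duplicates
    cv = sd / ((x1 + x2) / 2.0)                 # per-subject CV
    rms_cv = np.sqrt(np.mean(cv ** 2)) * 100.0  # in percent
    return rms_cv, 2.77 * rms_cv                # (RMS-CV %, LSC %)

# Made-up US-BMD values (g/cm^2) for two consecutive REMS acquisitions.
s1 = [0.912, 1.034, 0.876, 0.990]
s2 = [0.915, 1.030, 0.880, 0.985]
rms_cv, lsc = rms_cv_and_lsc(s1, s2)
print(f"RMS-CV = {rms_cv:.2f}%, LSC = {lsc:.2f}%")
```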
19 pages, 776 KiB  
Article
White Box Watermarking for Convolution Layers in Fine-Tuning Model Using the Constant Weight Code
by Minoru Kuribayashi, Tatsuya Yasui and Asad Malik
J. Imaging 2023, 9(6), 117; https://doi.org/10.3390/jimaging9060117 - 09 Jun 2023
Cited by 2 | Viewed by 1256
Abstract
Deep neural network (DNN) watermarking is a potential approach for protecting the intellectual property rights of DNN models. Similar to classical watermarking techniques for multimedia content, the requirements for DNN watermarking include capacity, robustness, transparency, and other factors. Previous studies have focused on robustness against retraining and fine-tuning; however, less important neurons in the DNN model may also be pruned. Moreover, although an encoding approach renders DNN watermarking robust against pruning attacks, the watermark is assumed to be embedded only into the fully connected layer of the fine-tuning model. In this study, we extended the method such that it can be applied to any convolution layer of the DNN model and designed a watermark detector based on a statistical analysis of the extracted weight parameters to evaluate whether the model is watermarked. Using a non-fungible token mitigates overwriting of the watermark and makes it possible to check when the watermarked DNN model was created. Full article
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
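The abstract mentions a watermark detector based on a statistical analysis of extracted weight parameters. The toy sketch below is only schematic: it is not the paper's constant-weight-code embedding, just a key-driven selection of convolution-layer weights followed by a one-sample t-test; the embedding strength, key, and significance threshold are all made up.

```python
import numpy as np
from scipy import stats

def detect_watermark(conv_weights, key, n_sel=256, alpha=1e-3):
    """Toy white-box detector: a secret key selects weight positions and a
    one-sample t-test checks whether their mean is shifted away from zero.
    Schematic stand-in only, not the paper's detector."""
    w = np.asarray(conv_weights).ravel()
    idx = np.random.default_rng(key).choice(w.size, size=n_sel, replace=False)
    t, p = stats.ttest_1samp(w[idx], popmean=0.0)
    return p < alpha, p

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=(64, 3, 3, 3))        # unmarked conv layer
marked = weights.copy().ravel()
sel = np.random.default_rng(42).choice(marked.size, 256, replace=False)
marked[sel] += 0.01                                        # crude "embedding"
print("unmarked:", detect_watermark(weights, key=42))
print("marked  :", detect_watermark(marked,  key=42))
```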
19 pages, 570 KiB  
Article
An Optimization-Based Family of Predictive, Fusion-Based Models for Full-Reference Image Quality Assessment
by Domonkos Varga
J. Imaging 2023, 9(6), 116; https://doi.org/10.3390/jimaging9060116 - 08 Jun 2023
Cited by 1 | Viewed by 1282
Abstract
Given the reference (distortion-free) image, full-reference image quality assessment (FR-IQA) algorithms seek to assess the perceptual quality of the test image. Over the years, many effective, hand-crafted FR-IQA metrics have been proposed in the literature. In this work, we present a novel framework for FR-IQA that combines multiple metrics and tries to leverage the strength of each by formulating FR-IQA as an optimization problem. Following the idea of other fusion-based metrics, the perceptual quality of a test image is defined as the weighted product of several already existing, hand-crafted FR-IQA metrics. Unlike other methods, the weights are determined in an optimization-based framework and the objective function is defined to maximize the correlation and minimize the root mean square error between the predicted and ground-truth quality scores. The obtained metrics are evaluated on four popular benchmark IQA databases and compared to the state of the art. This comparison has revealed that the compiled fusion-based metrics are able to outperform other competing algorithms, including deep learning-based ones. Full article
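The framework defines the fused quality score as a weighted product of existing FR-IQA metrics and tunes the weights to maximize correlation and minimize RMSE against ground-truth scores. The sketch below reproduces that general idea on synthetic data; the specific objective combination, optimizer, and metric values are assumptions rather than the paper's exact setup.

```python
import numpy as np
from scipy.optimize import minimize

def fused_quality(metric_scores, weights):
    """Weighted product of per-image FR-IQA metric scores: prod_i m_i**w_i."""
    return np.prod(metric_scores ** weights, axis=1)

def objective(weights, metric_scores, mos):
    """Encourage high Pearson correlation and low RMSE against the MOS
    (the exact objective used in the paper is not reproduced here)."""
    q = fused_quality(metric_scores, weights)
    plcc = np.corrcoef(q, mos)[0, 1]
    rmse = np.sqrt(np.mean((q - mos) ** 2))
    return -plcc + rmse

rng = np.random.default_rng(0)
m = rng.uniform(0.2, 1.0, size=(200, 3))    # stand-ins for 3 hand-crafted metrics
mos = fused_quality(m, np.array([0.5, 1.5, 0.8])) + rng.normal(0, 0.02, 200)
res = minimize(objective, x0=np.ones(3), args=(m, mos), method="Nelder-Mead")
print("learned exponents:", np.round(res.x, 2))
```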
34 pages, 6681 KiB  
Review
Imaging of Gastrointestinal Tract Ailments
by Boyang Sun, Jingang Liu, Silu Li, Jonathan F. Lovell and Yumiao Zhang
J. Imaging 2023, 9(6), 115; https://doi.org/10.3390/jimaging9060115 - 08 Jun 2023
Viewed by 4679
Abstract
Gastrointestinal (GI) disorders comprise a diverse range of conditions that can significantly reduce quality of life and can even be life-threatening in serious cases. The development of accurate and rapid detection approaches is of essential importance for the early diagnosis and timely management of GI diseases. This review mainly focuses on the imaging of several representative gastrointestinal ailments, such as inflammatory bowel disease, tumors, appendicitis, Meckel’s diverticulum, and others. Various imaging modalities commonly used for the gastrointestinal tract are summarized, including magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), photoacoustic tomography (PAT), and multimodal imaging with mode overlap. These achievements in single and multimodal imaging provide useful guidance for the improved diagnosis, staging, and treatment of the corresponding gastrointestinal diseases. The review evaluates the strengths and weaknesses of different imaging techniques and summarizes the development of imaging techniques used for diagnosing gastrointestinal ailments. Full article
12 pages, 14005 KiB  
Article
Clinical Utility of 18Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography (18F-FDG PET/CT) in Multivisceral Transplant Patients
by Shao Jin Ong, Lisa M. Sharkey, Kai En Low, Heok K. Cheow, Andrew J. Butler and John R. Buscombe
J. Imaging 2023, 9(6), 114; https://doi.org/10.3390/jimaging9060114 - 07 Jun 2023
Viewed by 998
Abstract
Multivisceral transplant (MVTx) refers to a composite graft from a cadaveric donor, which often includes the liver, the pancreaticoduodenal complex, and the small intestine transplanted en bloc. It remains rare and is performed in specialist centres. Post-transplant complications are reported at a higher rate in multivisceral transplants because of the high levels of immunosuppression used to prevent rejection of the highly immunogenic intestine. In this study, we analyzed the clinical utility of 28 18F-FDG PET/CT scans in 20 multivisceral transplant recipients in whom previous non-functional imaging was deemed clinically inconclusive. The results were compared with histopathological and clinical follow-up data. The accuracy of 18F-FDG PET/CT was determined to be 66.7% in cases where a final diagnosis was confirmed clinically or via pathology. Of the 28 scans, 24 (85.7%) directly affected patient management, of which 9 led to the start of new treatments and 6 resulted in an ongoing treatment or planned surgery being stopped. This study demonstrates that 18F-FDG PET/CT is a promising technique for identifying life-threatening pathologies in this complex group of patients. It would appear that 18F-FDG PET/CT has a good level of accuracy, including for those MVTx patients suffering from infection, post-transplant lymphoproliferative disease, and malignancy. Full article
(This article belongs to the Section Medical Imaging)
21 pages, 14377 KiB  
Article
An Enhanced Photogrammetric Approach for the Underwater Surveying of the Posidonia Meadow Structure in the Spiaggia Nera Area of Maratea
by Francesca Russo, Silvio Del Pizzo, Fabiana Di Ciaccio and Salvatore Troisi
J. Imaging 2023, 9(6), 113; https://doi.org/10.3390/jimaging9060113 - 31 May 2023
Cited by 1 | Viewed by 1302
Abstract
The Posidonia oceanica meadows represent a fundamental biological indicator for the assessment of the marine ecosystem’s state of health. They also play an essential role in the conservation of coastal morphology. The composition, extent, and structure of the meadows are conditioned by the biological characteristics of the plant itself and by the environmental setting, considering the type and nature of the substrate, the geomorphology of the seabed, the hydrodynamics, the depth, the light availability, the sedimentation speed, etc. In this work, we present a methodology for the effective monitoring and mapping of Posidonia oceanica meadows by means of underwater photogrammetry. To reduce the effect of environmental factors on the underwater images (e.g., the bluish or greenish cast), the workflow is enhanced through the application of two different restoration algorithms. The 3D point cloud obtained using the restored images allowed for a better categorization of a wider area than the one obtained using the original images. This work therefore presents a photogrammetric approach for the rapid and reliable characterization of the seabed, with particular reference to Posidonia coverage. Full article
(This article belongs to the Section Image and Video Processing)
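The workflow applies two image-restoration algorithms to remove the bluish or greenish cast before photogrammetric processing; those specific algorithms are not reproduced here. The sketch below shows a simple gray-world colour-balance step as a generic stand-in for that idea.

```python
import numpy as np

def gray_world(img):
    """Simple gray-world colour-cast removal for an RGB image in [0, 1].
    A stand-in for the (unspecified here) underwater restoration algorithms
    used before photogrammetric reconstruction."""
    img = np.asarray(img, dtype=float)
    gains = img.mean() / img.reshape(-1, 3).mean(axis=0)   # per-channel gain
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(0)
frame = rng.uniform(0, 1, size=(4, 4, 3)) * np.array([0.4, 0.8, 1.0])  # bluish cast
print(gray_world(frame).reshape(-1, 3).mean(axis=0))  # channel means roughly equalised
```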
11 pages, 33561 KiB  
Article
Terahertz Constant Velocity Flying Spot for 3D Tomographic Imaging
by Abderezak Aouali, Stéphane Chevalier, Alain Sommier and Christophe Pradere
J. Imaging 2023, 9(6), 112; https://doi.org/10.3390/jimaging9060112 - 31 May 2023
Cited by 1 | Viewed by 998
Abstract
This work reports on a terahertz tomography technique using constant-velocity flying spot scanning as illumination. The technique is based on the combination of a hyperspectral thermoconverter and an infrared camera used as a sensor, a terahertz source mounted on a translation scanner, and a vial of hydroalcoholic gel used as a sample and mounted on a rotating stage for the measurement of its absorbance at several angular positions. From the projections, acquired in 2.5 h and expressed as sinograms, the 3D volume of the absorption coefficient of the vial is reconstructed by a back-projection method based on the inverse Radon transform. This result confirms that the technique is usable on samples of complex, non-axisymmetric shapes; moreover, it allows 3D qualitative chemical information, with possible phase separation in the terahertz spectral range, to be obtained in heterogeneous and complex semitransparent media. Full article
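Reconstruction here rests on back-projection via the inverse Radon transform. As an illustration of that principle only, since the terahertz acquisition, thermoconverter, and 3D stacking are not modelled, the sketch below builds a sinogram from a standard phantom with scikit-image and reconstructs it by filtered back-projection.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Filtered back-projection from a sinogram: the same inverse-Radon principle
# as in the paper's reconstruction, demonstrated on a standard test phantom.
image = rescale(shepp_logan_phantom(), 0.25)           # stand-in absorption slice
angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # angular positions (deg)
sinogram = radon(image, theta=angles)                  # forward projections
reco = iradon(sinogram, theta=angles)                  # back-projection
print("mean reconstruction error:", float(np.abs(reco - image).mean()))
```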
17 pages, 10707 KiB  
Article
Lithium Metal Battery Quality Control via Transformer–CNN Segmentation
by Jerome Quenum, Iryna V. Zenyuk and Daniela Ushizima
J. Imaging 2023, 9(6), 111; https://doi.org/10.3390/jimaging9060111 - 31 May 2023
Cited by 1 | Viewed by 1524
Abstract
The lithium metal battery (LMB) has the potential to be the next-generation battery system because of its high theoretical energy density. However, defects known as dendrites are formed by heterogeneous lithium (Li) plating, which hinders the development and utilization of LMBs. Non-destructive techniques to observe dendrite morphology often use X-ray computed tomography (XCT) to provide cross-sectional views. To retrieve three-dimensional structures inside a battery, image segmentation becomes essential for quantitatively analyzing XCT images. This work proposes a new semantic segmentation approach using a transformer-based neural network called TransforCNN that is capable of segmenting dendrites from XCT data. In addition, we compare the performance of the proposed TransforCNN with that of three other algorithms, U-Net, Y-Net, and E-Net, which together constitute an ensemble network model for XCT analysis. Our results show the advantages of using TransforCNN when evaluated with segmentation metrics, such as mean intersection over union (mIoU) and mean Dice similarity coefficient (mDSC), as well as through several qualitative comparative visualizations. Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
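Performance is reported with the mean intersection over union (mIoU) and the mean Dice similarity coefficient (mDSC). The sketch below computes both generically for integer label maps; the toy "dendrite" masks are synthetic placeholders, not data from the paper.

```python
import numpy as np

def miou_mdsc(pred, target, n_classes):
    """Mean IoU and mean Dice over classes for integer label maps
    (the evaluation metrics named in the abstract, computed generically)."""
    ious, dscs = [], []
    for c in range(n_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                      # class absent in both maps
        ious.append(inter / union)
        dscs.append(2 * inter / (p.sum() + t.sum()))
    return float(np.mean(ious)), float(np.mean(dscs))

rng = np.random.default_rng(0)
gt = rng.integers(0, 2, size=(64, 64))                    # 0 = background, 1 = dendrite
pred = gt.copy(); pred[rng.random((64, 64)) < 0.05] ^= 1  # 5% flipped labels
print("mIoU=%.3f  mDSC=%.3f" % miou_mdsc(pred, gt, n_classes=2))
```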
12 pages, 797 KiB  
Article
A Convolutional Neural Network-Based Connectivity Enhancement Approach for Autism Spectrum Disorder Detection
by Fatima Zahra Benabdallah, Ahmed Drissi El Maliani, Dounia Lotfi and Mohammed El Hassouni
J. Imaging 2023, 9(6), 110; https://doi.org/10.3390/jimaging9060110 - 31 May 2023
Cited by 2 | Viewed by 1486
Abstract
Autism spectrum disorder (ASD) remains an ongoing obstacle for many researchers aiming to achieve early diagnosis with high accuracy. To advance developments in ASD detection, the corroboration of findings presented in the existing body of autism-based literature is of high importance. Previous works put forward theories of under- and over-connectivity deficits in the autistic brain, and an elimination approach based on methods theoretically comparable to these theories supported the existence of such deficits. Therefore, in this paper, we propose a framework that takes into account the properties of under- and over-connectivity in the autistic brain using an enhancement approach coupled with deep learning through convolutional neural networks (CNNs). In this approach, image-like connectivity matrices are created, and the connections related to connectivity alterations are then enhanced. The overall objective is to facilitate early diagnosis of this disorder. After conducting tests using data from the large multi-site Autism Brain Imaging Data Exchange (ABIDE I) dataset, the results show that this approach achieves a prediction accuracy of up to 96%. Full article
(This article belongs to the Topic Medical Image Analysis)
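The approach builds image-like connectivity matrices and then enhances connections related to under- and over-connectivity before feeding them to a CNN. The sketch below only illustrates the data-preparation idea: a Pearson-correlation connectome from ROI time series and a crude, assumed form of deviation amplification (the paper's actual enhancement rule and CNN architecture are not reproduced).

```python
import numpy as np

def connectivity_image(roi_timeseries):
    """Pearson-correlation functional connectivity matrix, usable as an
    image-like CNN input."""
    return np.corrcoef(roi_timeseries)            # (n_roi, n_roi), values in [-1, 1]

def enhance(conn, group_mean, factor=2.0):
    """Amplify deviations from a group-average connectome to highlight
    under- and over-connectivity before classification (illustrative only)."""
    return np.clip(group_mean + factor * (conn - group_mean), -1.0, 1.0)

rng = np.random.default_rng(0)
ts = rng.normal(size=(116, 200))                  # e.g. 116 ROIs x 200 time points
conn = connectivity_image(ts)
enhanced = enhance(conn, group_mean=np.zeros_like(conn))
print(conn.shape, enhanced.min(), enhanced.max())
```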
11 pages, 2015 KiB  
Article
Gender, Smoking History, and Age Prediction from Laryngeal Images
by Tianxiao Zhang, Andrés M. Bur, Shannon Kraft, Hannah Kavookjian, Bryan Renslo, Xiangyu Chen, Bo Luo and Guanghui Wang
J. Imaging 2023, 9(6), 109; https://doi.org/10.3390/jimaging9060109 - 29 May 2023
Viewed by 1401
Abstract
Flexible laryngoscopy is commonly performed by otolaryngologists to detect laryngeal diseases and to recognize potentially malignant lesions. Recently, researchers have introduced machine learning techniques to facilitate automated diagnosis using laryngeal images and achieved promising results. The diagnostic performance can be improved when patients’ demographic information is incorporated into models. However, the manual entry of patient data is time-consuming for clinicians. In this study, we made the first endeavor to employ deep learning models to predict patient demographic information to improve the detector model’s performance. The overall accuracy for gender, smoking history, and age was 85.5%, 65.2%, and 75.9%, respectively. We also created a new laryngoscopic image set for the machine learning study and benchmarked the performance of eight classical deep learning models based on CNNs and Transformers. The results can be integrated into current learning models to improve their performance by incorporating the patient’s demographic information. Full article
14 pages, 3162 KiB  
Article
Transformative Effect of COVID-19 Pandemic on Magnetic Resonance Imaging Services in One Tertiary Cardiovascular Center
by Tatiana A. Shelkovnikova, Aleksandra S. Maksimova, Nadezhda I. Ryumshina, Olga V. Mochula, Valery K. Vaizov, Wladimir Y. Ussov and Nina D. Anfinogenova
J. Imaging 2023, 9(6), 108; https://doi.org/10.3390/jimaging9060108 - 28 May 2023
Viewed by 1542
Abstract
The aim of this study was to investigate the transformative effect of the COVID-19 pandemic on magnetic resonance imaging (MRI) services in one tertiary cardiovascular center. This retrospective observational cohort study analyzed data from MRI studies (n = 8137) performed from 1 January 2019 to 1 June 2022. A total of 987 patients underwent contrast-enhanced cardiac MRI (CE-CMR). Referrals, clinical characteristics, diagnoses, gender, age, past COVID-19, MRI study protocols, and MRI data were analyzed. The annual absolute numbers and rates of CE-CMR procedures in our center significantly increased from 2019 to 2022 (p-value < 0.05). Increasing temporal trends were observed for hypertrophic cardiomyopathy (HCMP) and myocardial fibrosis (p-value < 0.05). The CE-CMR findings of myocarditis, acute myocardial infarction, ischemic cardiomyopathy, HCMP, postinfarction cardiosclerosis, and focal myocardial fibrosis prevailed in men compared with the corresponding values in women during the pandemic (p-value < 0.05). The frequency of myocardial fibrosis occurrence increased from ~67% in 2019 to ~84% in 2022 (p-value < 0.05). The COVID-19 pandemic increased the need for MRI and CE-CMR. Patients with a history of COVID-19 had persistent and newly occurring symptoms of myocardial damage, suggesting chronic cardiac involvement consistent with long COVID-19 and requiring continuous follow-up. Full article
0 pages, 5370 KiB  
Article
A Siamese Transformer Network for Zero-Shot Ancient Coin Classification
by Zhongliang Guo, Ognjen Arandjelović, David Reid, Yaxiong Lei and Jochen Büttner
J. Imaging 2023, 9(6), 107; https://doi.org/10.3390/jimaging9060107 - 25 May 2023
Cited by 2 | Viewed by 1776 | Correction
Abstract
Ancient numismatics, the study of ancient coins, has in recent years become an attractive domain for the application of computer vision and machine learning. Though rich in research problems, the predominant focus in this area to date has been on the task of attributing a coin from an image, that is, of identifying its issue. This may be considered the cardinal problem in the field, and it continues to challenge automatic methods. In the present paper, we address a number of limitations of previous work. Firstly, the existing methods approach the problem as a classification task. As such, they are unable to deal with classes with no or few exemplars (which would be most, given over 50,000 issues of Roman Imperial coins alone), and require retraining when exemplars of a new class become available. Hence, rather than seeking to learn a representation that distinguishes a particular class from all the others, herein we seek a representation that is overall best at distinguishing classes from one another, thus relinquishing the demand for exemplars of any specific class. This leads to our adoption of the paradigm of pairwise coin matching by issue, rather than the usual classification paradigm, and the specific solution we propose in the form of a Siamese neural network. Furthermore, while adopting deep learning, motivated by its successes in the field and its unchallenged superiority over classical computer vision approaches, we also seek to leverage the advantages that transformers have over the previously employed convolutional neural networks, and in particular their non-local attention mechanisms, which ought to be particularly useful in ancient coin analysis by associating semantically but not visually related distal elements of a coin’s design. Evaluated on a large data corpus of 14,820 images and 7605 issues, using transfer learning and only a small training set of 542 images of 24 issues, our Double Siamese ViT model is shown to surpass the state of the art by a large margin, achieving an overall accuracy of 81%. Moreover, our further investigation of the results shows that the majority of the method’s errors are unrelated to the intrinsic aspects of the algorithm itself but are rather a consequence of unclean data, a problem that can easily be addressed in practice by simple pre-processing and quality checking. Full article
(This article belongs to the Special Issue Pattern Recognition Systems for Cultural Heritage)
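Instead of closed-set classification, the model matches a query coin against reference exemplars pairwise, so unseen issues need no retraining. The sketch below illustrates that inference pattern with random vectors standing in for Siamese ViT embeddings; the "RIC"-style issue labels are placeholders, not data from the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attribute_by_matching(query_emb, reference_embs, reference_issues):
    """Pairwise matching instead of closed-set classification: the query coin
    is assigned the issue of its most similar reference exemplar, so new
    issues only require new reference embeddings, not retraining."""
    sims = [cosine(query_emb, r) for r in reference_embs]
    best = int(np.argmax(sims))
    return reference_issues[best], sims[best]

rng = np.random.default_rng(0)
refs = rng.normal(size=(5, 128))                    # stand-ins for ViT embeddings
issues = ["RIC 207", "RIC 756", "RIC 44", "RIC 1021", "RIC 333"]   # placeholders
query = refs[2] + rng.normal(scale=0.05, size=128)  # noisy view of the third issue
print(attribute_by_matching(query, refs, issues))
```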
21 pages, 23878 KiB  
Article
Manipulating Pixels in Computer Graphics by Converting Raster Elements to Vector Shapes as a Function of Hue
by Tajana Koren Ivančević, Nikolina Stanić Loknar, Maja Rudolf and Diana Bratić
J. Imaging 2023, 9(6), 106; https://doi.org/10.3390/jimaging9060106 - 23 May 2023
Viewed by 1189
Abstract
This paper proposes a method for changing pixel shape by converting a CMYK raster image (pixel) to an HSB vector image, replacing the square cells of the CMYK pixels with different vector shapes. The replacement of a pixel by the selected vector shape is done depending on the detected color values for each pixel. The CMYK values are first converted to the corresponding RGB values and then to the HSB system, and the vector shape is selected based on the obtained hue values. The vector shape is drawn in the defined space, according to the row and column matrix of the pixels of the original CMYK image. Twenty-one vector shapes are introduced to replace the pixels depending on the hue. The pixels of each hue are replaced by a different shape. The application of this conversion has its greatest value in the creation of security graphics for printed documents and the individualization of digital artwork by creating structured patterns based on the hue. Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
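The conversion chain described in the abstract is CMYK to RGB to HSB, with the hue selecting one of twenty-one vector shapes per pixel. The sketch below follows that chain for a single pixel using a uniform hue binning; the shape names and the equal-width binning are assumptions, since the paper's exact hue-to-shape mapping is not given here.

```python
import colorsys

SHAPES = [f"shape_{i:02d}" for i in range(21)]   # placeholders for the 21 vector shapes

def cmyk_pixel_to_shape(c, m, y, k):
    """CMYK (0-1) -> RGB -> HSB; the hue then selects one of 21 vector shapes,
    mirroring the per-pixel substitution described in the abstract."""
    r, g, b = (1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)       # h in [0, 1)
    return SHAPES[min(int(h * len(SHAPES)), len(SHAPES) - 1)]

print(cmyk_pixel_to_shape(0.0, 0.8, 0.9, 0.1))   # warm, red-ish pixel
print(cmyk_pixel_to_shape(0.9, 0.2, 0.0, 0.0))   # blue-ish pixel
```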