Journal Description
Journal of Imaging is an international, multi/interdisciplinary, peer-reviewed, open access journal of imaging techniques published online monthly by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), PubMed, PMC, dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: CiteScore - Q2 (Computer Graphics and Computer-Aided Design)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 21.2 days after submission; the time from acceptance to publication is 4.8 days (median values for papers published in this journal in the second half of 2022).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Latest Articles
White Box Watermarking for Convolution Layers in Fine-Tuning Model Using the Constant Weight Code
J. Imaging 2023, 9(6), 117; https://doi.org/10.3390/jimaging9060117 - 09 Jun 2023
Abstract
Deep neural network (DNN) watermarking is a potential approach for protecting the intellectual property rights of DNN models. Similar to classical watermarking techniques for multimedia content, the requirements for DNN watermarking include capacity, robustness, transparency, and other factors. Studies have focused on robustness against retraining and fine-tuning. However, less important neurons in the DNN model may be pruned. Moreover, although the encoding approach renders DNN watermarking robust against pruning attacks, the watermark is assumed to be embedded only into the fully connected layer in the fine-tuning model. In this study, we extended the method such that the model can be applied to any convolution layer of the DNN model and designed a watermark detector based on a statistical analysis of the extracted weight parameters to evaluate whether the model is watermarked. Using a nonfungible token mitigates the overwriting of the watermark and enables checking when the DNN model with the watermark was created.
Full article
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
Open Access Article
An Optimization-Based Family of Predictive, Fusion-Based Models for Full-Reference Image Quality Assessment
J. Imaging 2023, 9(6), 116; https://doi.org/10.3390/jimaging9060116 - 08 Jun 2023
Abstract
Given the reference (distortion-free) image, full-reference image quality assessment (FR-IQA) algorithms seek to assess the perceptual quality of the test image. Over the years, many effective, hand-crafted FR-IQA metrics have been proposed in the literature. In this work, we present a novel framework for FR-IQA that combines multiple metrics and tries to leverage the strength of each by formulating FR-IQA as an optimization problem. Following the idea of other fusion-based metrics, the perceptual quality of a test image is defined as the weighted product of several already existing, hand-crafted FR-IQA metrics. Unlike other methods, the weights are determined in an optimization-based framework and the objective function is defined to maximize the correlation and minimize the root mean square error between the predicted and ground-truth quality scores. The obtained metrics are evaluated on four popular benchmark IQA databases and compared to the state of the art. This comparison has revealed that the compiled fusion-based metrics are able to outperform other competing algorithms, including deep learning-based ones.
Full article
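The fusion idea described in this abstract — quality as a weighted product of existing metric scores, with weights chosen to maximize correlation and minimize RMSE against ground-truth scores — can be sketched as follows. The metric scores, opinion scores, and the plain random-search optimizer are illustrative stand-ins, not the paper's actual data or solver:

```python
import numpy as np

def fused_quality(weights, metric_scores):
    # Weighted product of per-image metric scores: Q = prod_i m_i ** w_i
    return np.prod(metric_scores ** weights, axis=1)

def objective(weights, metric_scores, mos):
    # Maximize Pearson correlation and minimize RMSE against the ground-truth
    # scores, combined here into one scalar (one plausible formulation).
    q = fused_quality(weights, metric_scores)
    r = np.corrcoef(q, mos)[0, 1]
    rmse = np.sqrt(np.mean((q - mos) ** 2))
    return -r + rmse

# Synthetic stand-in data: 50 images scored by 3 hypothetical FR-IQA metrics.
rng = np.random.default_rng(0)
scores = rng.uniform(0.1, 1.0, size=(50, 3))
mos = scores @ np.array([0.5, 0.3, 0.2])  # synthetic mean opinion scores

# Plain random search as an illustrative optimizer.
best_w = np.full(3, 1 / 3)
best_val = objective(best_w, scores, mos)
for _ in range(2000):
    w = rng.uniform(0.0, 2.0, size=3)
    val = objective(w, scores, mos)
    if val < best_val:
        best_w, best_val = w, val
```

The weighted-product form keeps each metric's monotonic behaviour while letting the optimizer decide how much each metric contributes.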

Open Access Review
Imaging of Gastrointestinal Tract Ailments
J. Imaging 2023, 9(6), 115; https://doi.org/10.3390/jimaging9060115 - 08 Jun 2023
Abstract
Gastrointestinal (GI) disorders comprise a diverse range of conditions that can significantly reduce the quality of life and can even be life-threatening in serious cases. The development of accurate and rapid detection approaches is of essential importance for early diagnosis and timely management of GI diseases. This review mainly focuses on the imaging of several representative gastrointestinal ailments, such as inflammatory bowel disease, tumors, appendicitis, Meckel’s diverticulum, and others. Various imaging modalities commonly used for the gastrointestinal tract, including magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), and photoacoustic tomography (PAT), as well as multimodal imaging with mode overlap, are summarized. These achievements in single and multimodal imaging provide useful guidance for improved diagnosis, staging, and treatment of the corresponding gastrointestinal diseases. The review evaluates the strengths and weaknesses of different imaging techniques and summarizes the development of imaging techniques used for diagnosing gastrointestinal ailments.
Full article

Open Access Article
Clinical Utility of 18Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography (18F-FDG PET/CT) in Multivisceral Transplant Patients
J. Imaging 2023, 9(6), 114; https://doi.org/10.3390/jimaging9060114 - 07 Jun 2023
Abstract
Multivisceral transplant (MVTx) refers to a composite graft from a cadaveric donor, which often includes the liver, the pancreaticoduodenal complex, and small intestine transplanted en bloc. It remains rare and is performed in specialist centres. Post-transplant complications are reported at a higher rate in multivisceral transplants because of the high levels of immunosuppression used to prevent rejection of the highly immunogenic intestine. In this study, we analyzed the clinical utility of 28 18F-FDG PET/CT scans in 20 multivisceral transplant recipients in whom previous non-functional imaging was deemed clinically inconclusive. The results were compared with histopathological and clinical follow-up data. In our study, the accuracy of 18F-FDG PET/CT was determined as 66.7%, where a final diagnosis was confirmed clinically or via pathology. Of the 28 scans, 24 (85.7%) directly affected patient management: 9 led to the start of new treatments, and 6 resulted in an ongoing treatment or planned surgery being stopped. This study demonstrates that 18F-FDG PET/CT is a promising technique in identifying life-threatening pathologies in this complex group of patients. It would appear that 18F-FDG PET/CT has a good level of accuracy, including for those MVTx patients suffering from infection, post-transplant lymphoproliferative disease, and malignancy.
Full article
(This article belongs to the Section Medical Imaging)
Open Access Article
An Enhanced Photogrammetric Approach for the Underwater Surveying of the Posidonia Meadow Structure in the Spiaggia Nera Area of Maratea
J. Imaging 2023, 9(6), 113; https://doi.org/10.3390/jimaging9060113 - 31 May 2023
Abstract
The Posidonia oceanica meadows represent a fundamental biological indicator for the assessment of the marine ecosystem’s state of health. They also play an essential role in the conservation of coastal morphology. The composition, extent, and structure of the meadows are conditioned by the biological characteristics of the plant itself and by the environmental setting, considering the type and nature of the substrate, the geomorphology of the seabed, the hydrodynamics, the depth, the light availability, the sedimentation speed, etc. In this work, we present a methodology for the effective monitoring and mapping of the Posidonia oceanica meadows by means of underwater photogrammetry. To reduce the effect of environmental factors on the underwater images (e.g., the bluish or greenish effects), the workflow is enhanced through the application of two different algorithms. The 3D point cloud obtained using the restored images allowed for a better categorization of a wider area than the one made using the original image elaboration. Therefore, this work aims at presenting a photogrammetric approach for the rapid and reliable characterization of the seabed, with particular reference to the Posidonia coverage.
Full article
(This article belongs to the Section Image and Video Processing)
Open Access Article
Terahertz Constant Velocity Flying Spot for 3D Tomographic Imaging
J. Imaging 2023, 9(6), 112; https://doi.org/10.3390/jimaging9060112 - 31 May 2023
Abstract
This work reports on a terahertz tomography technique using constant velocity flying spot scanning as illumination. This technique is essentially based on the combination of a hyperspectral thermoconverter and an infrared camera used as a sensor, a source of terahertz radiation held on a translation scanner, and a vial of hydroalcoholic gel used as a sample and mounted on a rotating stage for the measurement of its absorbance at several angular positions. From the projections made in 2.5 h and expressed in terms of sinograms, the 3D volume of the absorption coefficient of the vial is reconstructed by a back-projection method based on the inverse Radon transform. This result confirms that this technique is usable on samples of complex and nonaxisymmetric shapes; moreover, it allows 3D qualitative chemical information with a possible phase separation in the terahertz spectral range to be obtained in heterogeneous and complex semitransparent media.
Full article
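The reconstruction step this abstract mentions — recovering the volume from angular projections via a back-projection method — can be illustrated slice by slice with a minimal, unfiltered back-projection in pure NumPy. The disk phantom, nearest-neighbour sampling, and projection geometry are simplifying assumptions; the paper's actual pipeline uses a filtered inverse-Radon reconstruction:

```python
import numpy as np

def rotate_nn(img, theta):
    # Nearest-neighbour rotation of a square image about its centre.
    n = img.shape[0]
    c = (n - 1) / 2
    ys, xs = np.mgrid[0:n, 0:n]
    xr = np.cos(theta) * (xs - c) - np.sin(theta) * (ys - c) + c
    yr = np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.clip(np.round(xr).astype(int), 0, n - 1)
    yi = np.clip(np.round(yr).astype(int), 0, n - 1)
    return img[yi, xi]

def sinogram(img, angles):
    # One projection per angle: rotate the slice, then sum along columns.
    return np.array([rotate_nn(img, th).sum(axis=0) for th in angles])

def backproject(sino, angles):
    # Smear each projection back across the image plane and average.
    n = sino.shape[1]
    recon = np.zeros((n, n))
    for proj, th in zip(sino, angles):
        recon += rotate_nn(np.tile(proj, (n, 1)), -th)
    return recon / len(angles)

# Disk phantom standing in for one axial slice of an absorbing sample.
n = 64
ys, xs = np.mgrid[0:n, 0:n]
phantom = ((xs - n / 2) ** 2 + (ys - n / 2) ** 2 < (n / 6) ** 2).astype(float)

angles = np.linspace(0, np.pi, 90, endpoint=False)
recon = backproject(sinogram(phantom, angles), angles)
```

Unfiltered back-projection yields a blurred version of the slice; the inverse Radon transform adds a ramp filter to each projection before the smearing step to sharpen the result.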

Open Access Article
Lithium Metal Battery Quality Control via Transformer–CNN Segmentation
J. Imaging 2023, 9(6), 111; https://doi.org/10.3390/jimaging9060111 - 31 May 2023
Abstract
The lithium metal battery (LMB) has the potential to be the next-generation battery system because of its high theoretical energy density. However, defects known as dendrites are formed by heterogeneous lithium (Li) plating, which hinders the development and utilization of LMBs. Non-destructive techniques to observe the dendrite morphology often use X-ray computed tomography (XCT) to provide cross-sectional views. To retrieve three-dimensional structures inside a battery, image segmentation becomes essential to quantitatively analyze XCT images. This work proposes a new semantic segmentation approach using a transformer-based neural network called TransforCNN that is capable of segmenting out dendrites from XCT data. In addition, we compare the performance of the proposed TransforCNN with three other algorithms: U-Net, Y-Net, and E-Net, an ensemble network model for XCT analysis. Our results show the advantages of using TransforCNN when evaluated on segmentation metrics, such as mean intersection over union (mIoU) and mean Dice similarity coefficient (mDSC), as well as through several qualitative comparative visualizations.
Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)
Open Access Article
A Convolutional Neural Network-Based Connectivity Enhancement Approach for Autism Spectrum Disorder Detection
J. Imaging 2023, 9(6), 110; https://doi.org/10.3390/jimaging9060110 - 31 May 2023
Abstract
Autism spectrum disorder (ASD) remains an obstacle for many researchers seeking early diagnosis with high accuracy. To advance developments in ASD detection, corroborating the findings presented in the existing body of autism-based literature is of high importance. Previous works put forward theories of under- and over-connectivity deficits in the autistic brain. An elimination approach based on methods theoretically comparable to these theories confirmed the existence of these deficits. Therefore, in this paper, we propose a framework that takes into account the properties of under- and over-connectivity in the autistic brain using an enhancement approach coupled with deep learning through convolutional neural networks (CNNs). In this approach, image-alike connectivity matrices are created, and then connections related to connectivity alterations are enhanced. The overall objective is the facilitation of early diagnosis of this disorder. After conducting tests using data from the large multi-site Autism Brain Imaging Data Exchange (ABIDE I) dataset, the results show that this approach provides a prediction accuracy reaching up to 96%.
Full article
(This article belongs to the Topic Medical Image Analysis)
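The "image-alike connectivity matrices" described above are typically built by correlating regional fMRI time series. A hedged sketch of that construction follows, with a naive enhancement step that amplifies the strongest connections; the ROI count, synthetic data, and enhancement rule are illustrative assumptions, not the paper's actual ABIDE-based procedure:

```python
import numpy as np

def connectivity_matrix(timeseries):
    # timeseries: (n_rois, n_timepoints) regional fMRI signals.
    # Pairwise Pearson correlation yields a square, image-like matrix
    # that a CNN can consume directly.
    return np.corrcoef(timeseries)

def enhance(conn, factor=1.5, top_frac=0.2):
    # Naive enhancement: scale up the strongest |connections| and clip to
    # the valid correlation range (an illustrative stand-in for the paper's
    # connectivity-alteration enhancement).
    out = conn.copy()
    thresh = np.quantile(np.abs(conn), 1 - top_frac)
    mask = np.abs(conn) >= thresh
    out[mask] = np.clip(out[mask] * factor, -1.0, 1.0)
    return out

# Synthetic stand-in: 16 hypothetical ROIs, 120 timepoints each.
rng = np.random.default_rng(0)
ts = rng.normal(size=(16, 120))
conn = connectivity_matrix(ts)
enhanced = enhance(conn)
```

The enhanced matrix keeps the symmetric, image-like layout, so it can be fed to a 2D CNN exactly like a single-channel image.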
Open Access Article
Gender, Smoking History, and Age Prediction from Laryngeal Images
J. Imaging 2023, 9(6), 109; https://doi.org/10.3390/jimaging9060109 - 29 May 2023
Abstract
Flexible laryngoscopy is commonly performed by otolaryngologists to detect laryngeal diseases and to recognize potentially malignant lesions. Recently, researchers have introduced machine learning techniques to facilitate automated diagnosis using laryngeal images and achieved promising results. The diagnostic performance can be improved when patients’ demographic information is incorporated into models. However, the manual entry of patient data is time-consuming for clinicians. In this study, we made the first endeavor to employ deep learning models to predict patient demographic information to improve the detector model’s performance. The overall accuracy for gender, smoking history, and age was 85.5%, 65.2%, and 75.9%, respectively. We also created a new laryngoscopic image set for the machine learning study and benchmarked the performance of eight classical deep learning models based on CNNs and Transformers. The results can be integrated into current learning models to improve their performance by incorporating the patient’s demographic information.
Full article

Open Access Article
Transformative Effect of COVID-19 Pandemic on Magnetic Resonance Imaging Services in One Tertiary Cardiovascular Center
J. Imaging 2023, 9(6), 108; https://doi.org/10.3390/jimaging9060108 - 28 May 2023
Abstract
The aim of this study was to investigate the transformative effect of the COVID-19 pandemic on magnetic resonance imaging (MRI) services in one tertiary cardiovascular center. The retrospective observational cohort study analyzed data of MRI studies (n = 8137) performed from 1 January 2019 to 1 June 2022. A total of 987 patients underwent contrast-enhanced cardiac MRI (CE-CMR). Referrals, clinical characteristics, diagnosis, gender, age, past COVID-19, MRI study protocols, and MRI data were analyzed. The annual absolute numbers and rates of CE-CMR procedures in our center significantly increased from 2019 to 2022 (p-value < 0.05). The increasing temporal trends were observed in hypertrophic cardiomyopathy (HCMP) and myocardial fibrosis (p-value < 0.05). The CE-CMR findings of myocarditis, acute myocardial infarction, ischemic cardiomyopathy, HCMP, postinfarction cardiosclerosis, and focal myocardial fibrosis prevailed in men compared with the corresponding values in women during the pandemic (p-value < 0.05). The frequency of myocardial fibrosis occurrence increased from ~67% in 2019 to ~84% in 2022 (p-value < 0.05). The COVID-19 pandemic increased the need for MRI and CE-CMR. Patients with a history of COVID-19 had persistent and newly occurring symptoms of myocardial damage, suggesting chronic cardiac involvement consistent with long COVID-19 requiring continuous follow-up.
Full article
(This article belongs to the Topic Fighting against COVID-19: Latest Advances, Challenges and Methodologies)
Open Access Article
A Siamese Transformer Network for Zero-Shot Ancient Coin Classification
J. Imaging 2023, 9(6), 107; https://doi.org/10.3390/jimaging9060107 - 25 May 2023
Abstract
Ancient numismatics, the study of ancient coins, has in recent years become an attractive domain for the application of computer vision and machine learning. Though rich in research problems, the predominant focus in this area to date has been on the task of attributing a coin from an image, that is of identifying its issue. This may be considered the cardinal problem in the field and it continues to challenge automatic methods. In the present paper, we address a number of limitations of previous work. Firstly, the existing methods approach the problem as a classification task. As such, they are unable to deal with classes with no or few exemplars (which would be most, given over 50,000 issues of Roman Imperial coins alone), and require retraining when exemplars of a new class become available. Hence, rather than seeking to learn a representation that distinguishes a particular class from all the others, herein we seek a representation that is overall best at distinguishing classes from one another, thus relinquishing the demand for exemplars of any specific class. This leads to our adoption of the paradigm of pairwise coin matching by issue, rather than the usual classification paradigm, and the specific solution we propose in the form of a Siamese neural network. Furthermore, while adopting deep learning, motivated by its successes in the field and its unchallenged superiority over classical computer vision approaches, we also seek to leverage the advantages that transformers have over the previously employed convolutional neural networks, and in particular their non-local attention mechanisms, which ought to be particularly useful in ancient coin analysis by associating semantically but not visually related distal elements of a coin’s design. 
Evaluated on a large data corpus of 14,820 images and 7605 issues, using transfer learning and only a small training set of 542 images of 24 issues, our Double Siamese ViT model is shown to surpass the state of the art by a large margin, achieving an overall accuracy of 81%. Moreover, our further investigation of the results shows that the majority of the method’s errors are unrelated to the intrinsic aspects of the algorithm itself, but are rather a consequence of unclean data, which is a problem that can be easily addressed in practice by simple pre-processing and quality checking.
Full article
(This article belongs to the Special Issue Pattern Recognition Systems for Cultural Heritage)
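The pairwise-matching paradigm the authors adopt — score a query coin against exemplars of each issue with a learned similarity, instead of classifying into a fixed label set — can be sketched with plain cosine similarity over embeddings. The embeddings, issue labels, and gallery here are random stand-ins, not the paper's Double Siamese ViT outputs:

```python
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_issue(query_emb, gallery_embs, gallery_issues):
    # Rank gallery exemplars by similarity to the query. Adding a new issue's
    # exemplar to the gallery requires no retraining, unlike a classifier
    # with a fixed output layer.
    sims = [cosine_sim(query_emb, g) for g in gallery_embs]
    return gallery_issues[int(np.argmax(sims))]

# Toy gallery: one embedding per known issue (labels are hypothetical).
gallery = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
issues = ["issue-A (hypothetical)", "issue-B (hypothetical)"]
query = np.array([0.9, 0.1, 0.05])
predicted = match_issue(query, gallery, issues)
```

This is why the approach scales to the tens of thousands of Roman Imperial issues: the network only has to learn what "same issue" looks like, not a separate decision boundary per issue.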
Open Access Article
Manipulating Pixels in Computer Graphics by Converting Raster Elements to Vector Shapes as a Function of Hue
J. Imaging 2023, 9(6), 106; https://doi.org/10.3390/jimaging9060106 - 23 May 2023
Abstract
This paper proposes a method for changing pixel shape by converting a CMYK raster image (pixel) to an HSB vector image, replacing the square cells of the CMYK pixels with different vector shapes. The replacement of a pixel by the selected vector shape is done depending on the detected color values for each pixel. The CMYK values are first converted to the corresponding RGB values and then to the HSB system, and the vector shape is selected based on the obtained hue values. The vector shape is drawn in the defined space, according to the row and column matrix of the pixels of the original CMYK image. Twenty-one vector shapes are introduced to replace the pixels depending on the hue. The pixels of each hue are replaced by a different shape. The application of this conversion has its greatest value in the creation of security graphics for printed documents and the individualization of digital artwork by creating structured patterns based on the hue.
Full article
(This article belongs to the Topic Color Image Processing: Models and Methods (CIP: MM))
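The colour pipeline described above (CMYK → RGB → HSB, then a hue-indexed choice among 21 vector shapes) can be sketched as follows. The equal-width hue binning and the shape names are assumptions for illustration; the paper's exact hue-to-shape mapping is not reproduced here:

```python
import colorsys

def cmyk_to_rgb(c, m, y, k):
    # Standard CMYK -> RGB conversion, all channels in [0, 1].
    return ((1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k))

# 21 hypothetical vector shapes, one per hue bin.
SHAPES = [f"shape_{i:02d}" for i in range(21)]

def shape_for_pixel(c, m, y, k):
    r, g, b = cmyk_to_rgb(c, m, y, k)
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)  # HSB and HSV are the same model
    idx = min(int(h * len(SHAPES)), len(SHAPES) - 1)  # equal-width hue bins
    return SHAPES[idx]
```

For example, a pure-cyan pixel (hue 180°, i.e. h = 0.5) falls in the middle bin, while a pure-red pixel (hue 0°) falls in the first.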
Open Access Article
Perceptual Translucency in 3D Printing Using Surface Texture
J. Imaging 2023, 9(5), 105; https://doi.org/10.3390/jimaging9050105 - 22 May 2023
Abstract
We propose a method of reproducing perceptual translucency in three-dimensional printing. In contrast to most conventional methods, which reproduce the physical properties of translucency, we focus on the perceptual aspects of translucency. Humans are known to rely on simple cues to perceive translucency, and we develop a method of reproducing these cues using the gradation of surface textures. Textures are designed to reproduce the intensity distribution of the shading and thus provide a cue for the perception of translucency. In creating textures, we adopt computer graphics to develop an image-based optimization method. We validate the effectiveness of the method through subjective evaluation experiments using three-dimensionally printed objects. The results of the validation suggest that the proposed method using texture may increase perceptual translucency under specific conditions. As a method for translucent 3D printing, our method has the limitation that it depends on the observation conditions; however, it provides knowledge to the field of perception that the human visual system can be deceived by surface textures alone.
Full article
(This article belongs to the Special Issue Imaging Technologies for Understanding Material Appearance)
Open Access Article
Combining CNNs and Markov-like Models for Facial Landmark Detection with Spatial Consistency Estimates
J. Imaging 2023, 9(5), 104; https://doi.org/10.3390/jimaging9050104 - 22 May 2023
Abstract
The accurate localization of facial landmarks is essential for several tasks, including face recognition, head pose estimation, facial region extraction, and emotion detection. Although the number of required landmarks is task-specific, models are typically trained on all available landmarks in the datasets, limiting efficiency. Furthermore, model performance is strongly influenced by scale-dependent local appearance information around landmarks and the global shape information generated by them. To account for this, we propose a lightweight hybrid model for facial landmark detection designed specifically for pupil region extraction. Our design combines a convolutional neural network (CNN) with a Markov random field (MRF)-like process trained on only 17 carefully selected landmarks. The advantage of our model is the ability to run different image scales on the same convolutional layers, resulting in a significant reduction in model size. In addition, we employ an approximation of the MRF that is run on a subset of landmarks to validate the spatial consistency of the generated shape. This validation process is performed against a learned conditional distribution, expressing the location of one landmark relative to its neighbor. Experimental results on popular facial landmark localization datasets such as 300W, WFLW, and HELEN demonstrate the accuracy of our proposed model. Furthermore, our model achieves state-of-the-art performance on a well-defined robustness metric. In conclusion, the results demonstrate the ability of our lightweight model to filter out spatially inconsistent predictions, even with significantly fewer training landmarks.
Full article
(This article belongs to the Topic Computer Vision and Image Processing)
Open Access Article
Tomosynthesis-Detected Architectural Distortions: Correlations between Imaging Characteristics and Histopathologic Outcomes
J. Imaging 2023, 9(5), 103; https://doi.org/10.3390/jimaging9050103 - 19 May 2023
Abstract
Objective: to determine the positive predictive value (PPV) of tomosynthesis (DBT)-detected architectural distortions (ADs) and evaluate correlations between ADs’ imaging characteristics and histopathologic outcomes. Methods: biopsies performed between 2019 and 2021 on ADs were included. Images were interpreted by dedicated breast imaging radiologists. Pathologic results after DBT-vacuum-assisted biopsy (DBT-VAB) and core needle biopsy were compared with ADs detected by DBT, synthetic 2D mammography (synt2D), and ultrasound (US). Results: US was performed to assess a correlation for ADs in all 123 cases; a US correlate was identified in 12/123 (9.7%) cases, which underwent US-guided core needle biopsy (CNB). The remaining 111/123 (90.2%) ADs were biopsied under DBT guidance. Among the 123 ADs included, 33/123 (26.8%) yielded malignant results. The overall PPV for malignancy was 30.1% (37/123). The imaging-specific PPV for malignancy was 19.2% (5/26) for DBT-only ADs, 28.2% (24/85) for ADs visible on both DBT and synt2D mammography, and 66.7% (8/12) for ADs with a US correlate, with a statistically significant difference among the three groups (p = 0.01). Conclusions: DBT-only ADs demonstrated a lower PPV for malignancy than ADs also visible on synt2D mammography, although not low enough to avoid biopsy. As the presence of a US correlate was found to be related to malignancy, it should increase the radiologist’s level of suspicion, even when CNB returns a B3 result.
Full article
(This article belongs to the Topic Medical Image Analysis)
Open Access Review
Intraoperative Gamma Cameras: A Review of Development in the Last Decade and Future Outlook
J. Imaging 2023, 9(5), 102; https://doi.org/10.3390/jimaging9050102 - 16 May 2023
Abstract
Portable gamma cameras suitable for intraoperative imaging are in active development and testing. These cameras utilise a range of collimation, detection, and readout architectures, each of which can have significant and interacting impacts on the performance of the system as a whole. In this review, we provide an analysis of intraoperative gamma camera development over the past decade. The designs and performance of 17 imaging systems are compared in depth. We discuss where recent technological developments have had the greatest impact, identify emerging technological and scientific requirements, and predict future research directions. This is a comprehensive review of the current and emerging state-of-the-art as more devices enter clinical practice.
Full article
(This article belongs to the Special Issue Imaging Technology for Nuclear Medicine: Recent Advances and Future Outlook)
Open Access Article
Examination for the Factors Involving to Joint Effusion in Patients with Temporomandibular Disorders Using Magnetic Resonance Imaging
J. Imaging 2023, 9(5), 101; https://doi.org/10.3390/jimaging9050101 - 16 May 2023
Abstract
Background: This study investigated the factors involving joint effusion in patients with temporomandibular disorders. Methods: The magnetic resonance images of 131 temporomandibular joints (TMJs) of patients with temporomandibular disorders were evaluated. Gender, age, disease classification, duration of manifestation, muscle pain, TMJ pain, jaw opening disturbance, disc displacement with and without reduction, deformation of the articular disc, deformation of bone, and joint effusion were investigated. Differences in the appearance of symptoms and observations were evaluated using cross-tabulation. The differences in the amounts of synovial fluid in joint effusion vs. duration of manifestation were analyzed using the Kruskal–Wallis test. Multiple logistic regression analysis was performed to analyze the factors contributing to joint effusion. Results: Manifestation duration was significantly longer when joint effusion was not recognized (p < 0.05). Arthralgia and deformation of the articular disc were related to a high risk of joint effusion (p < 0.05). Conclusions: The results of this study suggest that joint effusion recognized in magnetic resonance imaging was easily observed when the manifestation duration was short, and arthralgia and deformation of the articular disc were related to a higher risk of joint effusion.
Full article

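The study above compares the amount of synovial fluid across manifestation-duration groups with the Kruskal–Wallis test. As an illustrative aside only (not the authors' code, and using made-up numbers), the tie-free form of the H statistic can be computed directly from pooled ranks:

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic without tie correction:
    H = 12 / (N (N + 1)) * sum(R_i^2 / n_i) - 3 (N + 1),
    where R_i is the rank sum of group i over the pooled sample."""
    # Rank all observations jointly, remembering which group each came from.
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    n = len(pooled)
    return 12.0 / (n * (n + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Toy example: two hypothetical groups of synovial-fluid scores.
print(round(kruskal_h([1, 2, 3], [4, 5, 6]), 3))  # → 3.857
```

The resulting H would then be compared against a chi-squared distribution with k − 1 degrees of freedom; in practice a statistics package that also applies the tie correction is preferable.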
Open Access Article
Assessing the Design of Interactive Radial Data Visualizations for Mobile Devices
J. Imaging 2023, 9(5), 100; https://doi.org/10.3390/jimaging9050100 - 14 May 2023
Abstract
The growing use of mobile devices in daily life has led to an increased demand for displaying large amounts of data. In response, radial visualizations have emerged as a popular type of visualization in mobile applications due to their visual appeal. However, previous research has highlighted issues with these visualizations, namely misinterpretation caused by their column lengths and angles. This study aims to provide guidelines for designing interactive visualizations on mobile devices, along with new evaluation methods, based on the results of an empirical study. The perception of four types of circular visualizations on mobile devices was assessed through user interaction. All four types were found to be suitable for use within mobile activity-tracking applications, with no statistically significant difference in responses by type of visualization or interaction. However, distinguishing characteristics of each visualization type emerged depending on the category in focus (memorability, readability, understanding, enjoyment, and engagement). The research outcomes provide guidelines for designing interactive radial visualizations on mobile devices, enhance the user experience, and introduce new evaluation methods. These results have significant implications for the design of visualizations on mobile devices, particularly in activity-tracking applications.
Full article

Open Access Article
Future Prediction of Shuttlecock Trajectory in Badminton Using Player’s Information
J. Imaging 2023, 9(5), 99; https://doi.org/10.3390/jimaging9050099 - 11 May 2023
Abstract
Video analysis has become an essential aspect of net sports, such as badminton. Accurately predicting the future trajectory of balls and shuttlecocks can significantly benefit players by enhancing their performance and enabling them to devise effective game strategies. This paper aims to analyze data to provide players with an advantage in the fast-paced rallies of badminton matches. The paper delves into the innovative task of predicting future shuttlecock trajectories in badminton match videos and presents a method that takes into account both the shuttlecock position and the positions and postures of the players. In the experiments, players were extracted from the match video, their postures were analyzed, and a time-series model was trained. The results indicate that the proposed method improved accuracy by 13% compared to methods that solely used shuttlecock position information as input, and by 8.4% compared to methods that employed both shuttlecock and player position information as input.
Full article
(This article belongs to the Special Issue Computer Vision and Deep Learning: Trends and Applications)

Open Access Article
Multispectral Satellite Image Analysis for Computing Vegetation Indices by R in the Khartoum Region of Sudan, Northeast Africa
J. Imaging 2023, 9(5), 98; https://doi.org/10.3390/jimaging9050098 - 11 May 2023
Abstract
Desertification is one of the most destructive climate-related issues in the Sudan–Sahel region of Africa. As the assessment of desertification is possible through satellite image analysis using vegetation indices (VIs), this study reports on the technical advantages and capabilities of scripting the ‘raster’ and ‘terra’ R-language packages for computing the VIs. The test area covers the confluence of the Blue and White Niles at Khartoum, Sudan, in northeast Africa; Landsat 8–9 OLI/TIRS images taken in 2013, 2018 and 2022 were chosen as the test datasets. The VIs used here are robust indicators of plant greenness and, combined with vegetation coverage, are essential parameters for environmental analytics. Five VIs were calculated to compare both the status and dynamics of vegetation through the differences between the images collected within the nine-year span. Using scripts to compute and visualise the VIs over Sudan reveals previously unreported vegetation patterns and their relationships with climate. The ability of the R packages ‘raster’ and ‘terra’ to process spatial data was enhanced through scripting to automate image analysis and mapping, and choosing Sudan as the case study presents new perspectives for image processing.
Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
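The entry above computes vegetation indices with the R packages ‘raster’ and ‘terra’. As a language-swapped sketch only (plain Python rather than the authors' R scripts, with invented toy reflectance values), the most common such index, NDVI = (NIR − Red) / (NIR + Red), reduces to a pixel-wise operation on two band arrays:

```python
def ndvi(nir, red):
    """Pixel-wise NDVI = (NIR - Red) / (NIR + Red) over two
    equally sized band rasters given as nested lists of reflectances."""
    return [[(n - r) / (n + r) if (n + r) != 0 else 0.0
             for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir, red)]

# Toy 1x2 reflectance grids standing in for Landsat 8 OLI band 5 (NIR)
# and band 4 (Red); the values are made up for illustration.
nir_band = [[0.5, 0.4]]
red_band = [[0.1, 0.2]]
print([[round(v, 3) for v in row] for row in ndvi(nir_band, red_band)])
# → [[0.667, 0.333]]
```

Packages such as ‘terra’ perform the same arithmetic directly on raster layers; the zero-denominator guard above mirrors the masking such tools apply to no-data pixels.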
Topics
Topic in
Applied Sciences, Electronics, Modelling, J. Imaging
Computer Vision and Image Processing
Topic Editor: Silvia Liberata Ullo
Deadline: 30 June 2023
Topic in
Applied Sciences, Sensors, J. Imaging, MAKE
Applications in Image Analysis and Pattern Recognition
Topic Editors: Bin Fan, Wenqi Ren
Deadline: 31 August 2023
Topic in
AI, Applied Sciences, BDCC, Digital, Healthcare, J. Imaging, Signals
Research on the Application of Digital Signal Processing
Topic Editors: KC Santosh, Alejandro Rodríguez-González
Deadline: 30 September 2023
Topic in
Applied Sciences, Electronics, J. Imaging, Sensors, Signals
Visual Object Tracking: Challenges and Applications
Topic Editors: Shunli Zhang, Xin Yu, Kaihua Zhang, Yang Yang
Deadline: 31 October 2023

Special Issues
Special Issue in
J. Imaging
Computer Vision and Deep Learning: Trends and Applications
Guest Editor: Pier Luigi Mazzeo
Deadline: 30 June 2023
Special Issue in
J. Imaging
Fluorescence Imaging and Analysis of Cellular System
Guest Editor: Ashutosh Sharma
Deadline: 1 August 2023
Special Issue in
J. Imaging
Advances in PET/CT Imaging for Diagnosis in Sarcoidosis
Guest Editor: Marco Tana
Deadline: 1 September 2023
Special Issue in
J. Imaging
Explainable AI for Image-Aided Diagnosis
Guest Editors: António Cunha, Paulo A.C. Salgado, Teresa Paula Perdicoúlis
Deadline: 30 September 2023