J. Imaging, Volume 7, Issue 9 (September 2021) – 31 articles

Cover Story: Photoacoustic imaging is a rapidly expanding diagnostic technique based on the formation of ultrasonic waves following the absorption of pulsed optical radiation, providing in-depth information within turbid media. In this study, we evaluate the capabilities of a reflection-mode photoacoustic imaging prototype for revealing hidden graphite layers covered by various paints in thick artwork mock-ups. We demonstrate that the highly transmissive photoacoustic waves can recover underlying sketches with up to 8 times improved contrast compared to standard near-infrared images, thus paving the way for further applications in the field of cultural heritage diagnostics. View this paper
18 pages, 1489 KiB  
Review
To Grasp the World at a Glance: The Role of Attention in Visual and Semantic Associative Processing
by Nurit Gronau
J. Imaging 2021, 7(9), 191; https://doi.org/10.3390/jimaging7090191 - 20 Sep 2021
Cited by 4 | Viewed by 2545
Abstract
Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli—either objects, scenes or a scene and an object—were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention. Full article
(This article belongs to the Special Issue Human Attention and Visual Cognition)

40 pages, 1107 KiB  
Review
A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms
by Parita Oza, Paawan Sharma, Samir Patel and Alessandro Bruno
J. Imaging 2021, 7(9), 190; https://doi.org/10.3390/jimaging7090190 - 18 Sep 2021
Cited by 32 | Viewed by 6111
Abstract
Breast cancer is one of the most common causes of death among women all over the world. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Though mammography has seen considerable success in biomedical imaging, detecting suspicious areas remains a challenge: examinations are performed manually, masses vary in shape, size and other morphological features, and mammography accuracy changes with the density of the breast. Furthermore, analyzing many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools to help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool to help radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys different scientific methodologies and techniques to detect suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent novelties in AI-based approaches. Both theoretical and practical grounds are provided across the paper sections to highlight the pros and cons of different methodologies. The paper’s main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic. Full article

18 pages, 1042 KiB  
Article
Per-COVID-19: A Benchmark Dataset for COVID-19 Percentage Estimation from CT-Scans
by Fares Bougourzi, Cosimo Distante, Abdelkrim Ouafi, Fadi Dornaika, Abdenour Hadid and Abdelmalik Taleb-Ahmed
J. Imaging 2021, 7(9), 189; https://doi.org/10.3390/jimaging7090189 - 18 Sep 2021
Cited by 16 | Viewed by 3538
Abstract
COVID-19 infection recognition is a very important step in the fight against the COVID-19 pandemic. In fact, many methods have been used to recognize COVID-19 infection, including Reverse Transcription Polymerase Chain Reaction (RT-PCR), X-ray scan, and Computed Tomography scan (CT-scan). In addition to recognizing COVID-19 infection, CT scans can provide more important information about the evolution of this disease and its severity. With the extensive number of COVID-19 infections, estimating the COVID-19 percentage can help intensive care units free up resuscitation beds for critical cases and follow other protocols for less severe cases. In this paper, we introduce a COVID-19 percentage estimation dataset built from CT-scans, where the labeling process was accomplished by two expert radiologists. Moreover, we evaluate the performance of three Convolutional Neural Network (CNN) architectures: ResNeXt-50, DenseNet-161, and Inception-v3. For the three CNN architectures, we use two loss functions: MSE and Dynamic Huber. In addition, two pretraining scenarios are investigated (ImageNet pretrained models and models pretrained on X-ray data). The evaluated approaches achieved promising results on the estimation of COVID-19 infection. Inception-v3 using the Dynamic Huber loss function and models pretrained on X-ray data achieved the best slice-level results: 0.9365, 5.10, and 9.25 for Pearson Correlation coefficient (PC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), respectively. The same approach achieved 0.9603, 4.01, and 6.79 for PCsubj, MAEsubj, and RMSEsubj, respectively, at the subject level. These results prove that CNN architectures can provide an accurate and fast solution for estimating the COVID-19 infection percentage and monitoring the evolution of the patient state. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Analysis, Volume II)
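The slice-level figures quoted above (PC, MAE, RMSE) are standard regression metrics. A minimal sketch of how they can be computed with NumPy/SciPy follows; the per-slice percentages are hypothetical stand-ins, not data from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

def regression_metrics(y_true, y_pred):
    """Pearson Correlation (PC), MAE and RMSE for percentage estimates."""
    pc, _ = pearsonr(y_true, y_pred)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return pc, mae, rmse

# Hypothetical per-slice infection percentages (ground truth vs. prediction)
y_true = np.array([0.0, 12.5, 40.0, 75.0, 90.0])
y_pred = np.array([1.2, 10.9, 43.5, 70.1, 93.0])
pc, mae, rmse = regression_metrics(y_true, y_pred)
print(f"PC={pc:.4f}, MAE={mae:.2f}, RMSE={rmse:.2f}")
```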

19 pages, 22144 KiB  
Article
A Low Redundancy Wavelet Entropy Edge Detection Algorithm
by Yiting Tao, Thomas Scully, Asanka G. Perera, Andrew Lambert and Javaan Chahl
J. Imaging 2021, 7(9), 188; https://doi.org/10.3390/jimaging7090188 - 17 Sep 2021
Cited by 4 | Viewed by 2273
Abstract
Fast edge detection of images can be useful for many real-world applications. Edge detection is not an end application but often the first step of a computer vision application. Therefore, fast and simple edge detection techniques are important for efficient image processing. In this work, we propose a new edge detection algorithm using a combination of the wavelet transform, Shannon entropy and thresholding. The new algorithm is based on the concept that each wavelet decomposition level has an assumed level of structure that enables the use of Shannon entropy as a measure of global image structure. The proposed algorithm is developed mathematically and compared to five popular edge detection algorithms. The results show that our solution has low redundancy, is noise resilient, and is well suited to real-time image processing applications. Full article
(This article belongs to the Special Issue Edge Detection Evaluation)
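As an illustration of the ingredients named in the abstract (wavelet decomposition, Shannon entropy, thresholding), here is a minimal sketch using PyWavelets; it is a generic combination of those steps, not the authors' algorithm, and the wavelet choice and quantile threshold are assumptions.

```python
import numpy as np
import pywt

def shannon_entropy(coeffs, bins=256):
    """Shannon entropy of a coefficient array (global structure measure)."""
    hist, _ = np.histogram(coeffs, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def wavelet_edges(image, wavelet="haar", keep=0.1):
    """One-level 2D wavelet transform; strongest detail coefficients = edges."""
    _, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
    detail = np.sqrt(cH**2 + cV**2 + cD**2)      # combined detail magnitude
    print("detail entropy:", shannon_entropy(detail))
    threshold = np.quantile(detail, 1.0 - keep)  # keep the strongest 10%
    return detail >= threshold

edges = wavelet_edges(np.random.rand(64, 64))    # stand-in for a real image
```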

36 pages, 9461 KiB  
Article
Detecting Salient Image Objects Using Color Histogram Clustering for Region Granularity
by Seena Joseph and Oludayo O. Olugbara
J. Imaging 2021, 7(9), 187; https://doi.org/10.3390/jimaging7090187 - 16 Sep 2021
Cited by 3 | Viewed by 3196
Abstract
Salient object detection is a novel preprocessing stage of many practical image applications in the discipline of computer vision. Saliency detection is generally a complex process that mimics the human vision system in the processing of color images. It is a convoluted process because of the countless properties inherent in color images that can hamper performance. Due to diversified color image properties, a method that is appropriate for one category of images may not necessarily be suitable for others. The selection of image abstraction is a decisive preprocessing step in saliency computation, and region-based image abstraction has become popular because of its computational efficiency and robustness. However, the performance of existing region-based salient object detection methods depends strongly on the selection of an optimal region granularity. An incorrect selection of region granularity is prone to under- or over-segmentation of color images, which can lead to non-uniform highlighting of salient objects. In this study, color histogram clustering was used to automatically determine suitable homogenous regions in an image. A region saliency score was computed as a function of color contrast, contrast ratio, spatial feature, and center prior. Morphological operations were ultimately performed to eliminate undesirable artifacts that may be present at the saliency detection stage. Thus, we have introduced a novel, simple, robust, and computationally efficient color histogram clustering method that combines color contrast, contrast ratio, spatial feature, and center prior for detecting salient objects in color images. Experimental validation with different categories of images selected from eight benchmarked corpora has indicated that the proposed method outperforms 30 bottom-up non-deep learning and seven top-down deep learning salient object detection methods on the standard performance metrics. Full article
(This article belongs to the Special Issue Advancing Color Image Processing)

22 pages, 1451 KiB  
Article
Food Tray Sealing Fault Detection in Multi-Spectral Images Using Data Fusion and Deep Learning Techniques
by Mohamed Benouis, Leandro D. Medus, Mohamed Saban, Abdessattar Ghemougui and Alfredo Rosado-Muñoz
J. Imaging 2021, 7(9), 186; https://doi.org/10.3390/jimaging7090186 - 16 Sep 2021
Cited by 5 | Viewed by 2471
Abstract
Correct food tray sealing is required to preserve food properties and safety for consumers. Traditional food packaging inspections are made by human operators to detect seal defects. Recent advances in the field of food inspection involve hyperspectral imaging technology and automated vision-based inspection systems. A deep learning-based approach for food tray sealing fault detection using hyperspectral images is described. Several pixel-based image fusion methods are proposed to obtain 2D images from the 3D hyperspectral image datacube, which feeds the deep learning (DL) algorithms. Instead of considering all spectral bands in the region of interest around a contaminated or faulty seal area, only relevant bands are selected using data fusion. These techniques greatly improve the computation time while maintaining a high classification ratio, showing that the fused image contains enough information for checking the sealing state of a food tray (faulty or normal) and avoids feeding a large image datacube to the DL algorithms. Additionally, the proposed DL algorithms do not require any prior handcrafted approach, i.e., no manual tuning of the parameters in the algorithms is required, since the training process adjusts the algorithm. The experimental results, validated using an industrial dataset for food trays, along with different deep learning methods, demonstrate the effectiveness of the proposed approach. In the studied dataset, an accuracy of 88.7%, 88.3%, 89.3%, and 90.1% was achieved for the Deep Belief Network (DBN), Extreme Learning Machine (ELM), Stacked Auto Encoder (SAE), and Convolutional Neural Network (CNN), respectively. Full article
(This article belongs to the Special Issue Hyperspectral Imaging and Its Applications)
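A minimal sketch of the pixel-based fusion idea described above: collapsing a hyperspectral datacube onto selected relevant bands to obtain a single 2D image for the DL classifier. The band indices and the mean-fusion rule are illustrative assumptions; the paper compares several fusion methods.

```python
import numpy as np

def fuse_bands(cube, band_idx):
    """Pixel-wise mean fusion of selected spectral bands into one 2D image.

    cube: (H, W, B) hyperspectral datacube; band_idx: relevant bands only.
    """
    return cube[:, :, band_idx].mean(axis=2)

cube = np.random.rand(128, 128, 224)                 # hypothetical datacube
fused = fuse_bands(cube, band_idx=[40, 41, 42, 90])  # hypothetical band choice
print(fused.shape)  # (128, 128) -- a far smaller input for the DL model
```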

8 pages, 1304 KiB  
Article
Lobular Breast Cancer Conspicuity on Digital Breast Tomosynthesis Compared to Synthesized 2D Mammography: A Multireader Study
by Giovanna Romanucci, Lisa Zantedeschi, Anna Ventriglia, Sara Mercogliano, Maria Vittoria Bisighin, Loredana Cugola, Paola Bricolo, Rossella Rella, Marta Mandarà, Chiara Benassuti, Andrea Caneva and Francesca Fornasa
J. Imaging 2021, 7(9), 185; https://doi.org/10.3390/jimaging7090185 - 12 Sep 2021
Cited by 3 | Viewed by 3214
Abstract
Objectives: To compare the conspicuity of lobular breast cancers at digital breast tomosynthesis (DBT) versus synthesized 2D mammography (synt2D). Materials and methods: Seventy-six women (mean age 61.2 years, range 50–74 years) who underwent biopsy in our institution, from 2019 to 2021, with proven invasive lobular breast cancer (ILC) were enrolled in this retrospective study. The participants underwent DBT and synt2D. Five breast radiologists, with different years of experience in breast imaging, independently assigned a conspicuity score (ordinal 6-point scale) to DBT and synt2D. Lesion conspicuity was compared, for each reader, between the synt2D overall conspicuity interpretation and the DBT overall conspicuity interpretation using a Wilcoxon matched pairs test. Results: A total of 50/78 (64%) cancers were detected on both synt2D and DBT by all the readers, while 28/78 (36%) cancers were not recognized by at least one reader on synt2D. For each reader, in comparison with synt2D, DBT significantly increased the conspicuity of ILC (p < 0.0001). The raw proportion of high versus low conspicuity by modality confirmed that cancers were more likely to have high conspicuity at DBT than synt2D. Conclusions: ILCs were more likely to have high conspicuity at DBT than at synt2D, increasing the chances of detection of ILC breast cancer. Full article
(This article belongs to the Special Issue X-ray Digital Radiography and Computed Tomography)
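The comparison above relies on a Wilcoxon matched pairs test on per-reader conspicuity scores. A minimal sketch with SciPy follows; the paired 6-point scores are invented for illustration only.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired conspicuity scores (6-point scale) for one reader
dbt    = np.array([5, 6, 4, 5, 6, 3, 5, 6, 4, 5])
synt2d = np.array([3, 4, 4, 2, 5, 2, 4, 5, 3, 3])

stat, p = wilcoxon(dbt, synt2d)   # paired, non-parametric test
print(f"Wilcoxon statistic={stat}, p={p:.4f}")
```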

20 pages, 4388 KiB  
Article
New Approach to Dental Morphometric Research Based on 3D Imaging Techniques
by Armen V. Gaboutchian, Vladimir A. Knyaz and Dmitry V. Korost
J. Imaging 2021, 7(9), 184; https://doi.org/10.3390/jimaging7090184 - 12 Sep 2021
Cited by 8 | Viewed by 2370
Abstract
Recent progress in imaging and image processing techniques has provided improvements in a variety of aspects of odontological research. Thus, the presented method has been developed precisely for the metric assessment of 3D reconstructions of teeth. Rapidly and accurately obtained data of wide range and appropriate density are sufficient for morphometric studies, rather than the tooth size assessments inherent to conventional techniques. The main contributions providing for holistic and objective morphometric analysis of teeth are the following: (1) interpretation of basic dental morphological features; (2) automated orientational coordinate system setup based on tooth surface analysis; (3) new tooth morphometric parameters which could not be obtained through conventional odontometric techniques; (4) methodological novelty for an automated odontomorphometric analysis pipeline. The tomographic imaging used for obtaining 3D models further expands the potential of the proposed method by providing detailed and comprehensive reconstructions of teeth. The current study was conducted on unique material from the archaeological site of Sunghir related to the Upper Palaeolithic period. Metric assessments of external and internal morphological layers of teeth were performed in common orientation and sectioning. The proposed technique allowed a more profound analysis of the Sunghirian teeth, which date back to the times of modern human morphology formation. Full article

12 pages, 2569 KiB  
Article
Revealing Hidden Features in Multilayered Artworks by Means of an Epi-Illumination Photoacoustic Imaging System
by George J. Tserevelakis, Antonina Chaban, Evgenia Klironomou, Kristalia Melessanaki, Jana Striova and Giannis Zacharakis
J. Imaging 2021, 7(9), 183; https://doi.org/10.3390/jimaging7090183 - 10 Sep 2021
Cited by 8 | Viewed by 2750
Abstract
Photoacoustic imaging is a novel, rapidly expanding technique, which has recently found several applications in artwork diagnostics, including the uncovering of hidden layers in paintings and multilayered documents, as well as the thickness measurement of optically turbid paint layers with high accuracy. However, thus far, all the presented photoacoustic-based imaging technologies dedicated to such measurements have been strictly limited to thin objects due to the detection of signals in transmission geometry. Unavoidably, this issue seriously restricts the applicability of the imaging method, hindering investigations over a wide range of cultural heritage objects with diverse geometrical and structural features. Here, we present an epi-illumination photoacoustic apparatus for diagnosis in heritage science, which integrates laser excitation and signal detection on the same side, aiming to provide information on objects of arbitrary thickness and shape. To evaluate the capabilities of the developed system, we imaged thickly painted mock-ups, in an attempt to reveal hidden graphite layers covered by various optically turbid paints, and compared the measurements with standard near-infrared (NIR) imaging. The obtained results prove that photoacoustic signals reveal underlying sketches with up to 8 times improved contrast, thus paving the way for more relevant applications in the field. Full article

13 pages, 4374 KiB  
Article
Recycling-Oriented Characterization of Post-Earthquake Building Waste by Different Sensing Techniques
by Oriana Trotta, Giuseppe Bonifazi, Giuseppe Capobianco and Silvia Serranti
J. Imaging 2021, 7(9), 182; https://doi.org/10.3390/jimaging7090182 - 08 Sep 2021
Cited by 10 | Viewed by 1931
Abstract
In this paper, a methodological approach based on hyperspectral imaging (HSI) working in the short-wave infrared range (1000–2500 nm) was developed and applied for the recycling-oriented characterization of post-earthquake building waste. In more detail, the presence of residual cement mortar on the surface of tile fragments that can be recycled as aggregates was estimated. The acquired hyperspectral images were analyzed by applying different chemometric methods: principal component analysis (PCA) for data exploration and partial least-squares-discriminant analysis (PLS-DA) to build classification models. Micro-X-ray fluorescence (micro-XRF) maps were also obtained on the same samples in order to validate the HSI classification results. Results showed that it is possible to identify cement mortar on the surface of the recycled tiles and to evaluate their degree of liberation. The recognition is automatic and non-destructive and can be applied for sorting purposes at recycling plants. Full article
(This article belongs to the Special Issue Hyperspectral Imaging and Its Applications)

16 pages, 2979 KiB  
Article
Effective Recycling Solutions for the Production of High-Quality PET Flakes Based on Hyperspectral Imaging and Variable Selection
by Paola Cucuzza, Silvia Serranti, Giuseppe Bonifazi and Giuseppe Capobianco
J. Imaging 2021, 7(9), 181; https://doi.org/10.3390/jimaging7090181 - 08 Sep 2021
Cited by 8 | Viewed by 2508
Abstract
In this study, effective solutions for polyethylene terephthalate (PET) recycling, based on hyperspectral imaging (HSI) coupled with a variable selection method, were developed and optimized. Hyperspectral images of post-consumer plastic flakes, composed of PET and small quantities of other polymers considered as contaminants, were acquired in the short-wave infrared range (SWIR: 1000–2500 nm). Different combinations of preprocessing sets coupled with a variable selection method, called competitive adaptive reweighted sampling (CARS), were applied to reduce the number of spectral bands needed to detect the contaminants in the PET flow stream. Prediction models based on partial least squares-discriminant analysis (PLS-DA) for each preprocessing set, combined with CARS, were built and compared to evaluate their efficiency. The best performance was obtained by a PLS-DA model using a multiplicative scatter correction + derivative + mean center preprocessing set and selecting only 14 wavelengths out of 240. Sensitivity and specificity values in the calibration, cross-validation and prediction phases ranged from 0.986 to 0.998. HSI combined with the CARS method can represent a valid tool for identifying plastic contaminants in a PET flake stream, increasing the processing speed as required by sensor-based sorting devices working at the industrial level. Full article
(This article belongs to the Special Issue Hyperspectral Imaging and Its Applications)
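To make the pipeline concrete, here is a minimal PLS-DA sketch with scikit-learn on a reduced band subset. CARS itself is not part of scikit-learn, so a random 14-band selection stands in for its output; the spectra and labels are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 240))          # synthetic flake spectra, 240 bands
y = rng.integers(0, 2, size=200)         # 1 = PET, 0 = contaminant

selected = rng.choice(240, size=14, replace=False)   # stand-in for CARS
pls = PLSRegression(n_components=2).fit(X[:, selected], y)
y_hat = (pls.predict(X[:, selected]).ravel() > 0.5).astype(int)  # PLS-DA rule
print("training accuracy:", (y_hat == y).mean())
```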

19 pages, 1033 KiB  
Article
Incremental Learning for Dermatological Imaging Modality Classification
by Ana C. Morgado, Catarina Andrade, Luís F. Teixeira and Maria João M. Vasconcelos
J. Imaging 2021, 7(9), 180; https://doi.org/10.3390/jimaging7090180 - 07 Sep 2021
Cited by 3 | Viewed by 2165
Abstract
With the increasing adoption of teledermatology, there is a need to improve the automatic organization of medical records, with dermatological image modality being a key filter in this process. Although there has been considerable effort in the classification of medical imaging modalities, this effort has not extended to the field of dermatology. Moreover, as various devices are used in teledermatological consultations, image acquisition conditions may differ. In this work, two models (VGG-16 and MobileNetV2) were used to classify dermatological images from the Portuguese National Health System according to their modality. Afterwards, four incremental learning strategies were applied to these models, namely naive, elastic weight consolidation, averaged gradient episodic memory, and experience replay, enabling their adaptation to new conditions while preserving previously acquired knowledge. The evaluation considered catastrophic forgetting, accuracy, and computational cost. The MobileNetV2 trained with the experience replay strategy, with 500 images in memory, achieved a global accuracy of 86.04% with only 0.0344 of forgetting, which is 6.98% less than the second-best strategy. Regarding efficiency, this strategy took 56 s per epoch longer than the baseline and required, on average, 4554 megabytes of RAM during training. Promising results were achieved, proving the effectiveness of the proposed approach. Full article
(This article belongs to the Special Issue Continual Learning in Computer Vision: Theory and Applications)
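Experience replay, the best-performing strategy above, keeps a small memory of past examples (here 500, matching the paper) and mixes them into each new training batch. The sketch below shows one common way to maintain such a memory (reservoir sampling); the paper does not specify this exact policy, so treat it as an assumption.

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory of past examples for experience replay."""
    def __init__(self, capacity=500):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:                                  # reservoir sampling keeps a
            j = random.randrange(self.seen)    # uniform sample of the stream
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Examples to mix into the current batch alongside new data."""
        return random.sample(self.data, min(k, len(self.data)))
```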

30 pages, 408 KiB  
Article
A Survey of Brain Tumor Segmentation and Classification Algorithms
by Erena Siyoum Biratu, Friedhelm Schwenker, Yehualashet Megersa Ayano and Taye Girma Debelee
J. Imaging 2021, 7(9), 179; https://doi.org/10.3390/jimaging7090179 - 06 Sep 2021
Cited by 70 | Viewed by 10505
Abstract
A brain magnetic resonance imaging (MRI) scan of a single individual consists of several slices across the 3D anatomical view. Therefore, manual segmentation of brain tumors from magnetic resonance (MR) images is a challenging and time-consuming task. In addition, automated brain tumor classification from an MRI scan is non-invasive, so it avoids biopsy and makes the diagnosis process safer. Since the late nineties and the beginning of this millennium, the effort of the research community to come up with automatic brain tumor segmentation and classification methods has been tremendous. As a result, there is ample literature in the area focusing on segmentation using region growing, traditional machine learning and deep learning methods. Similarly, a number of works have addressed brain tumor classification into respective histological types, and impressive performance results have been obtained. Considering state-of-the-art methods and their performance, the purpose of this paper is to provide a comprehensive survey of three recently proposed major brain tumor segmentation and classification techniques, namely region growing, shallow machine learning and deep learning. The works included in this survey also cover technical aspects such as the strengths and weaknesses of different approaches, pre- and post-processing techniques, feature extraction, datasets, and models’ performance evaluation metrics. Full article
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)

13 pages, 5498 KiB  
Article
Noise Reduction for Single-Shot Grating-Based Phase-Contrast Imaging at an X-ray Backlighter
by Stephan Schreiner, Bernhard Akstaller, Lisa Dietrich, Pascal Meyer, Paul Neumayer, Max Schuster, Andreas Wolf, Bernhard Zielbauer, Veronika Ludwig, Thilo Michel, Gisela Anton and Stefan Funk
J. Imaging 2021, 7(9), 178; https://doi.org/10.3390/jimaging7090178 - 05 Sep 2021
Cited by 4 | Viewed by 2369
Abstract
X-ray backlighters allow the capture of sharp images of fast dynamic processes due to extremely short exposure times. Moiré imaging enables simultaneously measuring the absorption and differential phase-contrast (DPC) of these processes. Acquiring images with one single shot limits the X-ray photon flux, which can result in noisy images. Increasing the photon statistics by repeating the experiment to gain the same image is not possible if the investigated processes are dynamic and chaotic. Furthermore, to reconstruct the DPC and transmission image, an additional measurement captured in the absence of the object is required. For these reference measurements, shot-to-shot fluctuations in X-ray spectra and source position complicate the averaging of several reference images for noise reduction. Here, two approaches to processing multiple reference images in combination with one single object image are evaluated regarding image quality. We found that with only five reference images, the contrast-to-noise ratio can be improved by approximately 13% in the DPC image. This promises improvements for short-exposure single-shot acquisitions of rapid processes, such as laser-produced plasma shock-waves in high-energy density experiments at backlighter X-ray sources such as the PHELIX high-power laser facility. Full article
(This article belongs to the Special Issue X-ray Digital Radiography and Computed Tomography)
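The 13% figure refers to the contrast-to-noise ratio (CNR) of the DPC image. A minimal sketch of a CNR computation between a signal region and the background is given below, on synthetic data with an assumed region mask.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal region and the background."""
    sig, bg = image[signal_mask], image[background_mask]
    return np.abs(sig.mean() - bg.mean()) / bg.std()

img = np.random.normal(1.0, 0.1, (64, 64))   # synthetic noisy DPC-like image
img[20:40, 20:40] += 0.5                     # hypothetical phase feature
mask = np.zeros_like(img, dtype=bool)
mask[20:40, 20:40] = True
print(f"CNR = {cnr(img, mask, ~mask):.2f}")
```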

13 pages, 940 KiB  
Article
Deep Features for Training Support Vector Machines
by Loris Nanni, Stefano Ghidoni and Sheryl Brahnam
J. Imaging 2021, 7(9), 177; https://doi.org/10.3390/jimaging7090177 - 05 Sep 2021
Cited by 12 | Viewed by 2298
Abstract
Features play a crucial role in computer vision. Initially designed to detect salient elements by means of handcrafted algorithms, features are now often learned using different layers in convolutional neural networks (CNNs). This paper develops a generic computer vision system based on features extracted from trained CNNs. Multiple learned features are combined into a single structure to work on different image classification tasks. The proposed system was derived by testing several approaches for extracting features from the inner layers of CNNs and using them as inputs to support vector machines (SVMs) that are then combined by a sum rule. Several dimensionality reduction techniques were tested to reduce the high dimensionality of the inner layers so that they can work with SVMs. The empirically derived generic vision system, based on applying a discrete cosine transform (DCT) separately to each channel, is shown to significantly boost the performance of standard CNNs across a large and diverse collection of image data sets. In addition, an ensemble of different topologies taking the same DCT approach and combined with global mean thresholding pooling obtained state-of-the-art results on a benchmark image virus data set. Full article
(This article belongs to the Special Issue Deep Learning for Visual Contents Processing and Analysis)
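A minimal sketch of the DCT-then-SVM idea: apply a 2D DCT to each channel of an inner-layer activation map, keep a fixed number of coefficients per channel, and train an SVM on the result. The activations here are random placeholders rather than real CNN features, and the coefficient count is an assumption.

```python
import numpy as np
from scipy.fft import dct
from sklearn.svm import SVC

def dct_features(channel_maps, k=100):
    """2D DCT per channel; keep the first k coefficients of each channel."""
    feats = []
    for ch in channel_maps:                  # ch: (H, W) activation map
        coeffs = dct(dct(ch, axis=0, norm="ortho"), axis=1, norm="ortho")
        feats.append(coeffs.ravel()[:k])
    return np.concatenate(feats)

rng = np.random.default_rng(1)
acts = rng.normal(size=(50, 8, 16, 16))      # placeholder inner-layer features
X = np.stack([dct_features(a) for a in acts])
y = rng.integers(0, 2, size=50)
clf = SVC(probability=True).fit(X, y)        # sum rule averages such SVM scores
```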

24 pages, 11300 KiB  
Article
Visible and Thermal Image-Based Trunk Detection with Deep Learning for Forestry Mobile Robotics
by Daniel Queirós da Silva, Filipe Neves dos Santos, Armando Jorge Sousa and Vítor Filipe
J. Imaging 2021, 7(9), 176; https://doi.org/10.3390/jimaging7090176 - 03 Sep 2021
Cited by 19 | Viewed by 3395
Abstract
Mobile robotics in forests is currently a hugely important topic due to the recurring appearance of forest wildfires. Thus, on-site management of forest inventory and biomass is required. To tackle this issue, this work presents a study on ground-level detection of forest tree trunks in visible and thermal images using deep learning-based object detection methods. For this purpose, a forestry dataset composed of 2895 images was built and made publicly available. Using this dataset, five models were trained and benchmarked to detect the tree trunks. The selected models were SSD MobileNetV2, SSD Inception-v2, SSD ResNet50, SSDLite MobileDet and YOLOv4 Tiny. Promising results were obtained; for instance, YOLOv4 Tiny was the best model, achieving the highest AP (90%) and F1 score (89%). The inference time of these models was also evaluated on CPU and GPU. The results showed that YOLOv4 Tiny was the fastest detector running on GPU (8 ms). This work will enhance the development of vision perception systems for smarter forestry robots. Full article

42 pages, 6844 KiB  
Article
iDocChip: A Configurable Hardware Accelerator for an End-to-End Historical Document Image Processing
by Menbere Kina Tekleyohannes, Vladimir Rybalkin, Muhammad Mohsin Ghaffar, Javier Alejandro Varela, Norbert Wehn and Andreas Dengel
J. Imaging 2021, 7(9), 175; https://doi.org/10.3390/jimaging7090175 - 03 Sep 2021
Cited by 1 | Viewed by 2912
Abstract
In recent years, there has been an increasing demand to digitize and electronically access historical records. Optical character recognition (OCR) is typically applied to scanned historical archives to transcribe them from document images into machine-readable texts. Many libraries offer special stationary equipment for scanning historical documents. However, to digitize these records without removing them from where they are archived, portable devices that combine scanning and OCR capabilities are required. An existing end-to-end OCR software called anyOCR achieves high recognition accuracy for historical documents. However, it is unsuitable for portable devices, as it exhibits high computational complexity resulting in long runtime and high power consumption. Therefore, we have designed and implemented a configurable hardware-software programmable SoC called iDocChip that makes use of anyOCR techniques to achieve high accuracy. As a low-power and energy-efficient system with real-time capabilities, the iDocChip delivers the required portability. In this paper, we present the hybrid CPU-FPGA architecture of iDocChip along with the optimized software implementations of the anyOCR. We demonstrate our results on multiple platforms with respect to runtime and power consumption. The iDocChip system outperforms the existing anyOCR by 44× while achieving 2201× higher energy efficiency and a 3.8% increase in recognition accuracy. Full article
(This article belongs to the Special Issue Image Processing Using FPGAs 2021)

12 pages, 569 KiB  
Article
Study on Data Partition for Delimitation of Masses in Mammography
by Luís Viegas, Inês Domingues and Mateus Mendes
J. Imaging 2021, 7(9), 174; https://doi.org/10.3390/jimaging7090174 - 02 Sep 2021
Cited by 1 | Viewed by 2607
Abstract
Mammography is the primary medical imaging method used for routine screening and early detection of breast cancer in women. However, the process of manually inspecting, detecting, and delimiting tumoral masses in 2D images is a very time-consuming task, subject to human errors due to fatigue. Therefore, integrated computer-aided detection systems have been proposed, based on modern computer vision and machine learning methods. In the present work, mammogram images from the publicly available INbreast dataset are first converted to pseudo-color and then used to train and test a Mask R-CNN deep neural network. The most common approach is to start with a dataset and split the images into train and test sets randomly. However, since there are often two or more images of the same case in the dataset, the way the dataset is split may have an impact on the results. Our experiments show that random partition of the data can produce unreliable training, so the dataset must be split using case-wise partition for more stable results. In experimental results, the method achieves an average true positive rate of 0.936 with 0.063 standard deviation using random partition and 0.908 with 0.002 standard deviation using case-wise partition, showing that case-wise partition must be used for more reliable results. Full article
(This article belongs to the Special Issue Advanced Computational Methods for Oncological Image Analysis)
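The case-wise partition advocated above maps directly onto grouped splitting utilities. A minimal sketch with scikit-learn's GroupShuffleSplit follows, using a hypothetical assignment of images to cases, so that no case contributes images to both train and test sets.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

images = np.arange(25)                                # 25 mammogram images
cases  = np.repeat(np.arange(10), [3, 2, 3, 2, 3, 2, 3, 2, 3, 2])  # case ids

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(images, groups=cases))

# Case-wise partition: no case appears in both sets (no leakage)
assert not set(cases[train_idx]) & set(cases[test_idx])
```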

12 pages, 1712 KiB  
Article
Optimizing the Simplicial-Map Neural Network Architecture
by Eduardo Paluzo-Hidalgo, Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo and Jónathan Heras
J. Imaging 2021, 7(9), 173; https://doi.org/10.3390/jimaging7090173 - 01 Sep 2021
Cited by 1 | Viewed by 2123
Abstract
Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust to adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We have shown experimentally that such a refined neural network is equivalent to the original network as a classification tool but requires much less storage. Full article
(This article belongs to the Special Issue Formal Verification of Imaging Algorithms for Autonomous System)

27 pages, 105251 KiB  
Review
Micro-CT for Biological and Biomedical Studies: A Comparison of Imaging Techniques
by Kleoniki Keklikoglou, Christos Arvanitidis, Georgios Chatzigeorgiou, Eva Chatzinikolaou, Efstratios Karagiannidis, Triantafyllia Koletsa, Antonios Magoulas, Konstantinos Makris, George Mavrothalassitis, Eleni-Dimitra Papanagnou, Andreas S. Papazoglou, Christina Pavloudi, Ioannis P. Trougakos, Katerina Vasileiadou and Angeliki Vogiatzi
J. Imaging 2021, 7(9), 172; https://doi.org/10.3390/jimaging7090172 - 01 Sep 2021
Cited by 26 | Viewed by 5696
Abstract
Several imaging techniques are used in biological and biomedical studies. Micro-computed tomography (micro-CT) is a non-destructive imaging technique that allows the rapid digitisation of internal and external structures of a sample in three dimensions and with great resolution. In this review, the strengths and weaknesses of some common imaging techniques applied in biological and biomedical fields, such as optical microscopy, confocal laser scanning microscopy, and scanning electron microscopy, are presented and compared with the micro-CT technique through five use cases. Finally, the ability of micro-CT to non-destructively create 3D anatomical and morphological data at sub-micron resolution, and the necessity of developing methods complementary to other imaging techniques in order to overcome the limitations of each technique, are emphasised. Full article
(This article belongs to the Special Issue X-ray Digital Radiography and Computed Tomography)

22 pages, 428 KiB  
Article
On the Efficacy of Handcrafted and Deep Features for Seed Image Classification
by Andrea Loddo and Cecilia Di Ruberto
J. Imaging 2021, 7(9), 171; https://doi.org/10.3390/jimaging7090171 - 31 Aug 2021
Cited by 11 | Viewed by 2338
Abstract
Computer vision techniques have become important in agriculture and plant sciences due to their wide variety of applications. In particular, the analysis of seeds can provide meaningful information on their evolution, the history of agriculture, the domestication of plants, and knowledge of diets in ancient times. This work proposes an exhaustive comparison of several different types of features in the context of multiclass seed classification, leveraging two public plant seed data sets to classify their families or species. In detail, we studied possible optimisations of five traditional machine learning classifiers trained with seven different categories of handcrafted features. We also fine-tuned several well-known convolutional neural networks (CNNs) and the recently proposed SeedNet to determine whether, and to what extent, using their deep features may be advantageous over handcrafted features. The experimental results demonstrated that CNN features are appropriate to the task and representative of the multiclass scenario. In particular, SeedNet achieved a mean F-measure of at least 96%. Nevertheless, in several cases the handcrafted features performed well enough to be considered a valid alternative. In detail, we found that the Ensemble strategy combined with all the handcrafted features can achieve a mean F-measure of at least 90.93%, in considerably less time. We consider the obtained results an excellent preliminary step towards realising an automatic seed recognition and classification framework. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

32 pages, 7118 KiB  
Article
Designing a Computer-Vision Application: A Case Study for Hand-Hygiene Assessment in an Open-Room Environment
by Chengzhang Zhong, Amy R. Reibman, Hansel A. Mina and Amanda J. Deering
J. Imaging 2021, 7(9), 170; https://doi.org/10.3390/jimaging7090170 - 30 Aug 2021
Cited by 7 | Viewed by 2677
Abstract
Hand-hygiene is a critical component for safe food handling. In this paper, we apply an iterative engineering process to design a hand-hygiene action detection system to improve food-handling safety. We demonstrate the feasibility of a baseline RGB-only convolutional neural network (CNN) in the restricted case of a single scenario; however, since this baseline system performs poorly across scenarios, we also demonstrate the application of two methods to explore potential reasons for its poor performance. This leads to the development of our hierarchical system that incorporates a variety of modalities (RGB, optical flow, hand masks, and human skeleton joints) for recognizing subsets of hand-hygiene actions. Using hand-washing video recorded from several locations in a commercial kitchen, we demonstrate the effectiveness of our system for detecting hand hygiene actions in untrimmed videos. In addition, we discuss recommendations for designing a computer vision system for a real application. Full article
(This article belongs to the Special Issue Deep Learning for Visual Contents Processing and Analysis)

25 pages, 26277 KiB  
Article
Robust 3D Face Reconstruction Using One/Two Facial Images
by Ola Lium, Yong Bin Kwon, Antonios Danelakis and Theoharis Theoharis
J. Imaging 2021, 7(9), 169; https://doi.org/10.3390/jimaging7090169 - 30 Aug 2021
Cited by 2 | Viewed by 4160
Abstract
Being able to robustly reconstruct 3D faces from 2D images is a topic of pivotal importance for a variety of computer vision branches, such as face analysis and face recognition, whose applications are steadily growing. Unlike 2D facial images, 3D facial data are less affected by lighting conditions and pose. Recent advances in the computer vision field have enabled the use of convolutional neural networks (CNNs) for the production of 3D facial reconstructions from 2D facial images. This paper proposes a novel CNN-based method which targets 3D facial reconstruction from two facial images, one from the front and one from the side, as are often available to law enforcement agencies (LEAs). The proposed CNN was trained on both synthetic and real facial data. We show that the proposed network was able to predict 3D faces in the MICC Florence dataset with greater accuracy than the current state-of-the-art. Moreover, a scheme for using the proposed network in cases where only one facial image is available is also presented. This is achieved by introducing an additional network whose task is to generate a rotated version of the original image, which, in conjunction with the original facial image, makes up the image pair used for reconstruction via the previous method. Full article
(This article belongs to the Special Issue 3D Human Understanding)

18 pages, 10158 KiB  
Article
SEDIQA: Sound Emitting Document Image Quality Assessment in a Reading Aid for the Visually Impaired
by Jane Courtney
J. Imaging 2021, 7(9), 168; https://doi.org/10.3390/jimaging7090168 - 30 Aug 2021
Cited by 4 | Viewed by 2222
Abstract
For visually impaired people (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in optical character recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera to text which can then be read aloud. However, all of these reading aids suffer from a key issue—the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function—no small task for VIPs. In this work, a sound-emitting document image quality assessment metric (SEDIQA) is proposed which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations, to identify the most significant contributors to accuracy reduction. The proposed no-reference image quality assessor (NR-IQA) is validated alongside established NR-IQAs and this work includes insights into the performance of these NR-IQAs on document images. SEDIQA is found to consistently select the best image for OCR accuracy. The full system includes a document image enhancement technique which introduces improvements in OCR accuracy with an average increase of 22% and a maximum increase of 68%. Full article
(This article belongs to the Special Issue Image and Video Quality Assessment)
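SEDIQA's own quality metric is described in the paper; as a generic illustration of no-reference quality scoring for best-frame selection, the sketch below uses the variance-of-Laplacian sharpness measure, a common NR-IQA proxy and an assumption here, not the SEDIQA metric.

```python
import cv2
import numpy as np

def sharpness(gray):
    """Variance of the Laplacian: a simple no-reference quality proxy."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frame(frames):
    """Pick the frame most likely to OCR well, per the quality score."""
    return max(frames, key=sharpness)

# Stand-in frames; in a reading aid these would come from the camera stream
frames = [np.random.randint(0, 256, (480, 640), np.uint8) for _ in range(5)]
chosen = best_frame(frames)
```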

14 pages, 4561 KiB  
Article
Mobile-Based 3D Modeling: An In-Depth Evaluation for the Application in Indoor Scenarios
by Martin De Pellegrini, Lorenzo Orlandi, Daniele Sevegnani and Nicola Conci
J. Imaging 2021, 7(9), 167; https://doi.org/10.3390/jimaging7090167 - 29 Aug 2021
Cited by 2 | Viewed by 1884
Abstract
Indoor environment modeling has become a relevant topic in several application fields, including augmented, virtual, and extended reality. With the digital transformation, many industries have investigated two possibilities: generating detailed models of indoor environments, allowing viewers to navigate through them; and mapping surfaces so as to insert virtual elements into real scenes. The scope of the paper is twofold. We first review the existing state-of-the-art (SoA) of learning-based methods for 3D scene reconstruction based on structure from motion (SFM) that predict depth maps and camera poses from video streams. We then present an extensive evaluation using a recent SoA network, with particular attention to the capability of generalizing to new, unseen data of indoor environments. The evaluation was conducted using the absolute relative (AbsRel) measure of the depth map prediction as the baseline metric. Full article
(This article belongs to the Section Mixed, Augmented and Virtual Reality)
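The AbsRel baseline metric used in the evaluation has a standard definition: the mean of |pred - gt| / gt over valid pixels. A minimal sketch on synthetic depth maps:

```python
import numpy as np

def abs_rel(depth_pred, depth_gt, eps=1e-6):
    """Absolute relative error: mean(|pred - gt| / gt) over valid pixels."""
    valid = depth_gt > eps
    return np.mean(np.abs(depth_pred[valid] - depth_gt[valid]) / depth_gt[valid])

gt   = np.random.uniform(0.5, 5.0, (240, 320))     # synthetic depth map [m]
pred = gt + np.random.normal(0.0, 0.1, gt.shape)   # noisy prediction
print(f"AbsRel = {abs_rel(pred, gt):.4f}")
```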

25 pages, 21086 KiB  
Article
Visible Light Spectrum Extraction from Diffraction Images by Deconvolution and the Cepstrum
by Mikko E. Toivonen, Topi Talvitie, Chang Rajani and Arto Klami
J. Imaging 2021, 7(9), 166; https://doi.org/10.3390/jimaging7090166 - 28 Aug 2021
Viewed by 2248
Abstract
Accurate color determination in variable lighting conditions is difficult and requires special devices. We considered the task of extracting the visible light spectrum using ordinary camera sensors, to facilitate low-cost color measurements using consumer equipment. The approach uses a diffractive element attached to a standard camera and a computational algorithm for forming the light spectrum from the resulting diffraction images. We present two machine learning algorithms for this task, based on alternative processing pipelines using deconvolution and cepstrum operations, respectively. The proposed methods were trained and evaluated on diffraction images collected using three cameras and three illuminants to demonstrate the generality of the approach, measuring the quality by comparing the recovered spectra against ground truth measurements collected using a hyperspectral camera. We show that the proposed methods are able to reconstruct the spectrum, and, consequently, the color, with fairly good accuracy in all conditions, but the exact accuracy depends on the specific camera and lighting conditions. The testing procedure followed in our experiments suggests a high degree of confidence in the generalizability of our results; the method works well even for a new illuminant not seen in the development phase. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
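Of the two pipelines, the cepstrum one builds on a classical operation: the inverse FFT of the log-magnitude spectrum, which separates a smooth spectral envelope from periodic diffraction structure. A minimal 1D sketch, with a synthetic signal standing in for an image row through the diffraction pattern:

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum."""
    spectrum = np.fft.fft(signal)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

row = np.random.rand(1024)       # stand-in for a row of a diffraction image
ceps = real_cepstrum(row)
```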

26 pages, 19959 KiB  
Review
Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey
by Vasudevan Lakshminarayanan, Hoda Kheradfallah, Arya Sarkar and Janarthanam Jothi Balaji
J. Imaging 2021, 7(9), 165; https://doi.org/10.3390/jimaging7090165 - 27 Aug 2021
Cited by 51 | Viewed by 8241
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Deep-learning (DL)- and machine-learning (ML)-based approaches make it possible to extract features from the images and to detect the presence of DR, grade its severity and segment associated lesions. This review covers the literature on AI approaches to DR, such as ML and DL for classification and segmentation, published in the open literature within six years (2016–2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented. Full article
(This article belongs to the Special Issue Frontiers in Retinal Image Processing)

16 pages, 7030 KiB  
Article
SpineDepth: A Multi-Modal Data Collection Approach for Automatic Labelling and Intraoperative Spinal Shape Reconstruction Based on RGB-D Data
by Florentin Liebmann, Dominik Stütz, Daniel Suter, Sascha Jecklin, Jess G. Snedeker, Mazda Farshad, Philipp Fürnstahl and Hooman Esfandiari
J. Imaging 2021, 7(9), 164; https://doi.org/10.3390/jimaging7090164 - 27 Aug 2021
Cited by 2 | Viewed by 2543
Abstract
Computer aided orthopedic surgery suffers from low clinical adoption, despite increased accuracy and patient safety. This can partly be attributed to cumbersome and often radiation-intensive registration methods. Emerging RGB-D sensors combined with artificial intelligence data-driven methods have the potential to streamline these procedures. However, developing such methods requires vast amounts of data. To this end, a multi-modal approach that enables the acquisition of large amounts of clinical data, tailored to pedicle screw placement, using RGB-D sensors and a co-calibrated high-end optical tracking system was developed. The resulting dataset comprises RGB-D recordings of pedicle screw placement along with individually tracked ground truth poses and shapes of spine levels L1–L5 from ten cadaveric specimens. Besides a detailed description of our setup, quantitative and qualitative outcome measures are provided. We found a mean target registration error of 1.5 mm. The median deviation between measured and ground truth bone surface was 2.4 mm. In addition, a surgeon rated the overall alignment based on 10% random samples as 5.8 on a scale from 1 to 6. Generation of labeled RGB-D data for orthopedic interventions with satisfactory accuracy is feasible, and its publication shall promote future development of data-driven artificial intelligence methods for fast and reliable intraoperative registration. Full article

18 pages, 53028 KiB  
Article
Comparative Study of Data Matrix Codes Localization and Recognition Methods
by Ladislav Karrach and Elena Pivarčiová
J. Imaging 2021, 7(9), 163; https://doi.org/10.3390/jimaging7090163 - 27 Aug 2021
Cited by 4 | Viewed by 6150
Abstract
We provide a comprehensive and in-depth overview of the various approaches applicable to the recognition of Data Matrix codes in arbitrary images. All presented methods use the typical “L” shaped Finder Pattern to locate the Data Matrix code in the image. Well-known image processing techniques such as edge detection, adaptive thresholding, or connected component labeling are used to identify the Finder Pattern. The recognition rate of the compared methods was tested on a set of images with Data Matrix codes, which is published together with the article. The experimental results show that methods based on adaptive thresholding achieved a better recognition rate than methods based on edge detection. Full article
(This article belongs to the Special Issue Edge Detection Evaluation)
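The two families of techniques compared above are both available in OpenCV. Here is a minimal sketch of the better-performing family: adaptive thresholding followed by connected component labeling to obtain Finder Pattern candidates. The block size, offset and area threshold are assumptions, and a random image stands in for a real photo.

```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (200, 200), np.uint8)  # stand-in for a photo

# Adaptive thresholding: robust to uneven illumination
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 25, 10)

# Connected component labeling: candidate regions for the "L" Finder Pattern
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
candidates = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 100]
```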

11 pages, 1831 KiB  
Article
Ground Truth Data Generator for Eye Location on Infrared Driver Recordings
by Sorin Valcan and Mihail Gaianu
J. Imaging 2021, 7(9), 162; https://doi.org/10.3390/jimaging7090162 - 27 Aug 2021
Cited by 5 | Viewed by 1715
Abstract
Labeling is a very costly and time-consuming process that aims to generate datasets for training neural networks in several functionalities and projects. It has a huge impact in the automotive field of driver monitoring, where much of the budget is used for image labeling. This paper presents an algorithm that will be used for generating ground truth data for 2D eye location in infrared images of drivers. The algorithm is implemented with many detection restrictions, which makes it very accurate but not necessarily very consistent. The resulting dataset shall not be modified by any human factor and will be used to train neural networks, which we expect to have very good accuracy and much better consistency for eye detection than the initial algorithm. This paper proves that we can automatically generate very good quality ground truth data for training neural networks, which is still an open topic in the automotive industry. Full article
(This article belongs to the Special Issue Imaging Studies for Face and Gesture Analysis)
