Applications of Video, Digital Image Processing and Deep Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 October 2023) | Viewed by 13762

Special Issue Editors


Guest Editor
Electrical and Computer Engineering Department, University of Peloponnese, 221 00 Tripoli, Greece
Interests: digital signal processing; biomedical signal processing; blind source separation; speech recognition; pattern recognition; EEG and MEG data analysis

Guest Editor
Department of Mathematics, University of Thessaly, 351 00 Lamia, Greece
Interests: image processing; computer vision; artificial intelligence; deep learning; biomedical applications

Special Issue Information

Dear Colleagues,

Image and video processing are two research areas of great interest to academics and industry professionals worldwide. Their wide-ranging applications include, but are not limited to, medicine, robotic vision, autonomous driving, industrial inspection, forensics, security, and biometrics. At the same time, machine learning, and deep learning in particular, has taken the research community by storm, with a wide range of deep learning networks and architectures being applied to a great number of computer vision and image processing problems. The aim of this Special Issue, "Advances and Applications of Video and Digital Image Processing and Deep Learning", is to bring together professionals from academia, research, and industry to share ideas about problems and solutions relating to the multifaceted aspects of these disciplines. The Special Issue provides a venue for a wide and diverse audience to survey recent research advances and challenges in deep learning for image and video processing applications, presenting novel, optimized, high-performance, and hybrid deep-learning-based approaches.

Topics of interest may include, but are not limited to, the following:

  • medical imaging;
  • image registration;
  • image restoration;
  • image segmentation;
  • image tracking;
  • image processing in industrial applications;
  • video processing, video classification;
  • transfer learning;
  • video or image annotation.

Dr. Athanasios Koutras
Dr. Stavros Karkanis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • image registration
  • image restoration
  • image segmentation
  • image tracking
  • image processing in industrial applications
  • video processing, video classification
  • transfer learning
  • video or image annotation

Published Papers (7 papers)


Research


12 pages, 2135 KiB  
Article
A Lightweight Reconstruction Model via a Neural Network for a Video Super-Resolution Model
by Xinkun Tang, Ying Xu, Feng Ouyang and Ligu Zhu
Appl. Sci. 2023, 13(18), 10165; https://doi.org/10.3390/app131810165 - 09 Sep 2023
Viewed by 944
Abstract
Super-resolution in image and video processing has long been a challenge in computer vision, and progress in it has substantial practical ramifications. Video super-resolution methods, in particular, aim to restore spatial detail while preserving temporal coherence between frames. Nevertheless, the large parameter counts and high computational demands of existing deep convolutional neural networks hinder their deployment on mobile platforms. In response, this research investigates deep convolutional neural networks in depth and proposes a lightweight model for video super-resolution, the Deep Residual Recursive Network (DRRN), that reduces computational load. The model applies residual learning to stabilize recurrent neural network (RNN) training while adopting depth-wise separable convolution to improve the efficiency of the super-resolution operations. Thorough experimental evaluations show that the proposed model excels in computational efficiency and generates refined, temporally consistent results for video super-resolution. This research therefore presents an important step toward applying video super-resolution on resource-limited devices.
(This article belongs to the Special Issue Applications of Video, Digital Image Processing and Deep Learning)
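To make the efficiency claim concrete, the following sketch compares the parameter count of a standard convolution with that of a depth-wise separable convolution (one depth-wise filter per input channel followed by a 1x1 point-wise convolution), the building block the abstract credits for DRRN's reduced computational load. The layer sizes are illustrative assumptions, not values from the paper.

```python
# Parameter-count comparison: standard vs. depth-wise separable convolution.

def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """A k x k convolution mapping c_in channels to c_out channels."""
    return k * k * c_in * c_out

def separable_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise k x k convolution (one filter per input channel)
    followed by a 1 x 1 point-wise convolution across channels."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

c_in, c_out, k = 64, 64, 3
std = standard_conv_params(c_in, c_out, k)   # 36864 parameters
sep = separable_conv_params(c_in, c_out, k)  # 4672 parameters
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a typical 3x3 layer with 64 input and output channels, the separable variant needs roughly an eighth of the parameters, which is the kind of saving that makes mobile deployment plausible.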

8 pages, 4377 KiB  
Communication
Confirmation of Final Bolt Tightening via Deep Learning-Based Image Processing
by Tomotaka Fukuoka, Takahiro Minami and Makoto Fujiu
Appl. Sci. 2023, 13(13), 7573; https://doi.org/10.3390/app13137573 - 27 Jun 2023
Viewed by 869
Abstract
In Japan, the correct final tightening of bolts is verified by visually checking how markings made during the technician's temporary tightening have shifted. However, an engineer must check a large number of bolts; the confirmation work is time-consuming, and no objective record of the confirmation results can be kept. To solve these problems, we developed a system that automates confirmation of final bolt tightening using deep learning-based image processing. The proposed system takes videos of bolt fastening points as input, extracts individual bolts, extracts the markings on each extracted bolt, and makes a tightening decision based on those markings. Because the system processes information on each bolt where a marking is detected, that information can be retained as objective data. In this paper, we evaluated the accuracy of each automated step using video of an actual bridge and compared the confirmation time with human confirmation. The proposed method reduces the confirmation time by about 33% in comparison to human confirmation.

20 pages, 2479 KiB  
Article
Image-Based Crack Detection Using Total Variation Strain DVC Regularization
by Zaira Manigrasso, Wannes Goethals, Pierre Kibleur, Matthieu N. Boone, Wilfried Philips and Jan Aelterman
Appl. Sci. 2023, 13(12), 6980; https://doi.org/10.3390/app13126980 - 09 Jun 2023
Viewed by 868
Abstract
Introduction: Accurately detecting cracks is crucial for assessing the health of materials. Manual detection methods are time-consuming, which has led to automatic detection techniques based on image processing and machine learning. These methods identify cracks using morphological image processing and material deformation analysis through Digital Image or Volume Correlation (DIC/DVC). However, the strain field derived from DIC/DVC tends to be noisy, and traditional denoising methods sacrifice spatial resolution, limiting their ability to capture abrupt structural deformations such as fractures. Method: In this study, a novel DVC regularization method is proposed to obtain a sharper and less noisy strain field. The method minimizes the total variation of the spatial strain field components, based on the assumption of approximately constant strain within each material phase. Results: The proposed methodology is validated on simulated data and on actual 4D μ-CT experimental data. Compared with classical denoising methods, the proposed DVC regularization provides more reliable crack detection with fewer false positives. Conclusions: These results show that a low-noise strain field can be estimated without relying on a spatial smoothness assumption, improving the accuracy and reliability of crack detection.
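The regularizer at the heart of the method, total variation, can be sketched in a few lines of NumPy. The anisotropic form below, and the synthetic field and perturbation, are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

def total_variation(field: np.ndarray) -> float:
    """Anisotropic total variation of a 2-D field: the sum of absolute
    differences between neighbouring samples along each axis. Minimizing
    TV favours piecewise-constant fields, matching the assumption of
    approximately constant strain within each material phase, while
    still permitting sharp jumps such as cracks."""
    return float(np.abs(np.diff(field, axis=0)).sum()
                 + np.abs(np.diff(field, axis=1)).sum())

# A piecewise-constant field with one sharp, crack-like jump has low TV;
# a deterministic checkerboard perturbation (standing in for DVC noise)
# raises it sharply, which is exactly what the minimization penalizes.
clean = np.zeros((8, 8))
clean[:, 4:] = 1.0                                    # single discontinuity
checker = 0.1 * (np.indices((8, 8)).sum(axis=0) % 2)  # 0 / 0.1 pattern
print(total_variation(clean), total_variation(clean + checker))
```

The sharp jump contributes only a fixed amount to the TV, whereas noise contributes at every pixel; that asymmetry is why TV minimization can denoise without smearing the discontinuity the way a smoothness prior would.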

12 pages, 20350 KiB  
Article
A New Instrument Monitoring Method Based on Few-Shot Learning
by Beini Zhang, Liping Li, Yetao Lyu, Shuguang Chen, Lin Xu and Guanhua Chen
Appl. Sci. 2023, 13(8), 5185; https://doi.org/10.3390/app13085185 - 21 Apr 2023
Viewed by 942
Abstract
As an important part of the industrialization process, fully automated instrument monitoring and identification are finding increasingly wide application in industrial production, autonomous driving, and medical experimentation. However, digital instruments usually display multi-digit readings, i.e., numbers greater than 10, so the recognition accuracy of traditional algorithms such as threshold segmentation and template matching is low, and instrument monitoring still relies heavily on human labor. Manual monitoring is costly and unsuitable for hazardous experimental environments such as those involving radiation or contamination. The development of deep neural networks has opened up new possibilities for fully automated instrument monitoring; however, neural networks generally require large training datasets and costly data collection and annotation. To solve these problems, this paper proposes a new instrument monitoring method based on few-shot learning (FLIMM). With an effective data augmentation method, FLIMM raises the average accuracy (ACC) of the model to 99% from only 16 original images. Moreover, because the generation of simulated images is controllable, FLIMM can automatically generate annotations for the simulated numbers, which greatly reduces the cost of data collection and annotation.

9 pages, 2148 KiB  
Article
DFA-UNet: Efficient Railroad Image Segmentation
by Yan Zhang, Kefeng Li, Guangyuan Zhang, Zhenfang Zhu and Peng Wang
Appl. Sci. 2023, 13(1), 662; https://doi.org/10.3390/app13010662 - 03 Jan 2023
Cited by 8 | Viewed by 3337
Abstract
Image segmentation in computer vision faces particular challenges in high-speed railroad scenes: frequent scene changes, low segmentation accuracy, and serious information loss. We propose a segmentation algorithm, DFA-UNet, based on an improved U-Net network architecture. The model uses the same encoder-decoder structure as U-Net. To extract image features efficiently and further integrate the weights of each channel feature, we embed the DFA attention module in the encoder part of the model for adaptive adjustment of feature-map weights. We evaluated the performance of the model on the RailSem19 dataset. Compared with U-Net, our model achieved improvements of 2.48%, 0.22%, 3.31%, 0.97%, and 2.2% in mIoU, F1-score, Accuracy, Precision, and Recall, respectively. The model can effectively segment railroad images.

19 pages, 7376 KiB  
Article
Identification of Corrosion on the Inner Walls of Water Pipes Using a VGG Model Incorporating Attentional Mechanisms
by Qian Zhao, Lu Li and Lihua Zhang
Appl. Sci. 2022, 12(24), 12731; https://doi.org/10.3390/app122412731 - 12 Dec 2022
Viewed by 1648
Abstract
To accurately classify and identify the different corrosion patterns on the inner walls of water-supply pipes, which vary in morphology and appear against complex and variable backgrounds, an improved VGG16 convolutional neural network classification model is proposed. First, an SE (squeeze-and-excitation) attention mechanism is added to the traditional VGG network model; it distinguishes the importance of each channel of the feature map and re-weights the feature map through globally computed channel attention. Second, a joint loss function is used to further improve the classification performance of the model. The experimental results show that the proposed model can effectively identify different pipe-corrosion patterns with an accuracy of 95.266%, higher than the unimproved VGG and AlexNet models.
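The SE mechanism described in the abstract (global average pooling per channel, a small bottleneck of fully connected layers, and a sigmoid gate that re-weights each channel) can be sketched with NumPy. The weight shapes and reduction ratio below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def se_block(fmap: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation channel re-weighting.
    fmap: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r).
    Squeeze: global average pool per channel. Excite: two fully
    connected layers (ReLU, then sigmoid) produce one attention weight
    per channel. Scale: each channel is multiplied by its weight."""
    squeezed = fmap.mean(axis=(1, 2))                 # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))       # sigmoid, in (0, 1)
    return fmap * gate[:, None, None]

# Sanity check: with all-zero weights the gate is sigmoid(0) = 0.5
# everywhere, so every channel is simply halved.
c, r = 8, 4
fmap = np.arange(c * 4 * 4, dtype=float).reshape(c, 4, 4)
out = se_block(fmap, np.zeros((c // r, c)), np.zeros((c, c // r)))
```

In the paper's setting these weights are learned end-to-end inside VGG16; here they are zero-initialized placeholders so the channel-scaling behaviour is easy to verify by hand.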

Review


22 pages, 2606 KiB  
Review
Image Processing Approach for Grading IVF Blastocyst: A State-of-the-Art Review and Future Perspective of Deep Learning-Based Models
by Iza Sazanita Isa, Umi Kalsom Yusof and Murizah Mohd Zain
Appl. Sci. 2023, 13(2), 1195; https://doi.org/10.3390/app13021195 - 16 Jan 2023
Cited by 2 | Viewed by 3683
Abstract
The development of intelligence-based methods and application systems for selecting quality blastocysts in in vitro fertilization (IVF) has expanded. Significant models in assisted reproductive technology (ART) have been reported, including ones that process morphological images and extract attributes of blastocyst quality. In this study, (1) the state of the art in ART is established for automated deep learning approaches, applications for grading blastocysts in IVF, and related image processing techniques; (2) an extensive literature search of databases, using several relevant sets of keywords, identified thirty final publications on IVF and deep learning among full-text English articles published between 2012 and 2022; and (3) this scoping review introduces the notion of automated blastocyst grading using deep learning applications, showing that these automated methods can frequently match, or even outperform, skilled embryologists on particular tasks. The review adds to our understanding of the procedure for selecting embryos that are suitable for implantation and offers important data for the creation of an automated, deep learning-based computer system for grading blastocysts.
