
Advances in AI-Based Processing of Image and Video Data Acquired by Various Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editors

Guest Editor
School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Be’er-Sheva 84105001, Israel
Interests: image and video processing/compression; deep learning in various emerging applications in computer vision

Guest Editor
School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Be’er-Sheva 84105001, Israel
Interests: image and video correction and analysis

Special Issue Information

Dear Colleagues,

Different types of imaging sensors produce a large variety of images, covering different electromagnetic regimes and various characteristics of the imaged objects. The field of artificial intelligence (AI) has made significant strides in recent years, particularly in the areas of image and video processing. With the proliferation of high-resolution imaging sensors and the ever-increasing amounts of visual data being generated by them, AI-based techniques have become indispensable for efficient and accurate image and video processing and analysis.

This Special Issue aims to bring together researchers and practitioners from academia and industry to showcase the latest advancements in AI-based image- and video-processing technology. We welcome original research papers, review articles, and short communications on topics including but not limited to:

  • Deep-learning techniques for image and video processing
  • Computer vision and pattern recognition
  • Object detection and tracking in video streams
  • Semantic segmentation and image classification
  • Image and video restoration and enhancement
  • Generative models for image and video synthesis
  • Multimodal data fusion and analysis
  • Applications of AI-based image and video processing in various domains
  • Super-resolution techniques based on AI
  • Image and video compression based on AI
  • Medical image processing with AI

We encourage submissions that showcase novel techniques and applications, as well as contributions that demonstrate the practicality and scalability of AI-based image- and video-processing solutions. All papers will undergo a rigorous peer-review process to ensure high quality and relevance to the theme of this Special Issue.

Dr. Ofer Hadar
Dr. Yitzhak Yitzhaky
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning in computer vision
  • AI in computer vision
  • image and video processing with AI

Published Papers (3 papers)


Research

23 pages, 10188 KiB  
Article
Optimized OTSU Segmentation Algorithm-Based Temperature Feature Extraction Method for Infrared Images of Electrical Equipment
by Xueli Liu, Zhanlong Zhang, Yuefeng Hao, Hui Zhao and Yu Yang
Sensors 2024, 24(4), 1126; https://doi.org/10.3390/s24041126 - 08 Feb 2024
Cited by 3
Abstract
Infrared image processing is an effective method for diagnosing faults in electrical equipment, in which target device segmentation and temperature feature extraction are key steps. Target device segmentation separates the device to be diagnosed from the image, while temperature feature extraction analyzes whether the device is overheating and has potential faults. However, the segmentation of infrared images of electrical equipment is slow due to its high computational complexity, and the extracted temperature information lacks accuracy because the non-linear relationship between image grayscale and temperature is insufficiently considered. Therefore, in this study, we propose a maximum between-class variance (Otsu) threshold segmentation algorithm optimized with the Gray Wolf Optimization (GWO) algorithm, which accelerates segmentation by using GWO to optimize the threshold determination process. The experimental results show that, compared to the non-optimized method, the optimized segmentation method reduces the threshold calculation time by more than 83.99% while maintaining similar segmentation results. Building on this, to address the insufficient accuracy of temperature feature extraction, we propose a temperature value extraction method for infrared images based on the K-nearest neighbor (KNN) algorithm. The experimental results demonstrate that, compared to traditional linear methods, this method improves the maximum absolute residual of the extracted temperature values by 73.68% and the average absolute residual by 78.95%.
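The Otsu (maximum between-class variance) criterion at the heart of the segmentation step can be sketched in plain NumPy. This is a generic, exhaustive-search illustration of Otsu thresholding, not the authors' GWO-accelerated implementation; the function name and loop structure are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the 8-bit threshold that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):  # exhaustive search; GWO would sample this space instead
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class is empty; variance is undefined
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

A swarm optimizer such as GWO replaces the exhaustive loop with a guided search over candidate thresholds, which is where a speed-up in threshold calculation can come from.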

15 pages, 3288 KiB  
Article
Offshore Oil Spill Detection Based on CNN, DBSCAN, and Hyperspectral Imaging
by Ce Zhan, Kai Bai, Binrui Tu and Wanxing Zhang
Sensors 2024, 24(2), 411; https://doi.org/10.3390/s24020411 - 10 Jan 2024
Abstract
Offshore oil spills have the potential to inflict substantial ecological damage, underscoring the critical importance of timely offshore oil spill detection and remediation. At present, offshore oil spill detection typically combines hyperspectral imaging with deep learning techniques. While these methodologies have made significant advancements, they prove inadequate in scenarios requiring real-time detection due to limited model detection speeds. To address this challenge, a method for detecting oil spill areas is introduced, combining convolutional neural networks (CNNs) with the DBSCAN clustering algorithm. This method aims to improve the efficiency of oil spill area detection in real-time scenarios, offering a potential solution to the limitations posed by the intricate structures of existing models. The proposed method applies a pre-feature-selection process to the spectral data, followed by pixel classification using a CNN model. Subsequently, the DBSCAN algorithm is employed to segment oil spill areas from the classification results. To validate the proposed method, we simulate an offshore oil spill environment in the laboratory, using a hyperspectral sensing device to collect data and create a dataset. We then compare our method with three other models (DRSNet, CNN-Visual Transformer, and GCN), conducting a comprehensive analysis to evaluate the advantages and limitations of each model.
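The clustering step can be illustrated with a minimal, self-contained DBSCAN over 2-D pixel coordinates. This is a brute-force sketch, not the authors' pipeline (which first classifies hyperspectral pixels with a CNN); the `eps` and `min_pts` values are illustrative, and a spatial index would replace the full distance matrix at realistic image sizes:

```python
import numpy as np

def dbscan(points, eps=1.5, min_pts=4):
    """Minimal DBSCAN: label each point with a cluster id, -1 for noise."""
    n = len(points)
    labels = np.full(n, -1)
    # brute-force pairwise distances; use a KD-tree for large point sets
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue  # already expanded, or not a core point
        # grow a new cluster outward from core point i
        visited[i] = True
        labels[i] = cluster
        stack = [i]
        while stack:
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster  # border or core point joins the cluster
                if not visited[k] and len(neighbors[k]) >= min_pts:
                    visited[k] = True
                    stack.append(k)  # only core points keep expanding
        cluster += 1
    return labels
```

Points labeled -1 are noise; each non-negative label marks one spatially connected region, which is how scattered per-pixel classifications can be grouped into candidate spill areas.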

10 pages, 4888 KiB  
Communication
Highway Visibility Estimation in Foggy Weather via Multi-Scale Fusion Network
by Pengfei Xiao, Zhendong Zhang, Xiaochun Luo, Jiaqing Sun, Xuecheng Zhou, Xixi Yang and Liang Huang
Sensors 2023, 23(24), 9739; https://doi.org/10.3390/s23249739 - 10 Dec 2023
Abstract
Poor visibility has a significant impact on road safety and can even lead to traffic accidents. Traditional means of visibility monitoring no longer meet current needs in terms of temporal and spatial accuracy. In this work, we propose a novel deep network architecture for estimating visibility directly from highway surveillance images. Specifically, we employ several image feature extraction methods to extract detailed structural, spectral, and scene depth features from the images. Next, we design a multi-scale fusion network to adaptively extract and fuse the features vital for estimating visibility. Furthermore, we create a real-scene dataset for model learning and performance evaluation. Our experiments demonstrate the superiority of the proposed method over existing methods.
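The idea of extracting image features at several scales can be illustrated with a toy NumPy sketch. This is not the authors' network; `multiscale_contrast` is a hypothetical hand-crafted proxy (mean gradient magnitude over an average-pooled pyramid) for the kind of structural detail that fog suppresses:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a grayscale image by an integer factor (one pyramid level)."""
    h, w = img.shape
    h2, w2 = h // factor * factor, w // factor * factor
    img = img[:h2, :w2]  # crop so the image tiles evenly
    return img.reshape(h2 // factor, factor, w2 // factor, factor).mean(axis=(1, 3))

def multiscale_contrast(img, factors=(1, 2, 4)):
    """Mean gradient magnitude at several scales: a crude structural-detail feature."""
    feats = []
    for f in factors:
        level = downsample(img.astype(float), f)
        gy, gx = np.gradient(level)
        feats.append(np.hypot(gx, gy).mean())
    return np.array(feats)
```

A learned fusion network would adaptively weight and combine per-scale features like these (together with spectral and depth cues) rather than simply stacking them.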
