
Special Issue "Deep Learning-Based Image and Signal Sensing and Processing"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 1 September 2023 | Viewed by 2138

Special Issue Editors

Prof. Dr. Jian-Jiun Ding
Graduate Institute of Communication Engineering, National Taiwan University, Taipei 10617, Taiwan
Interests: digital signal processing; digital image processing
Prof. Dr. Feng-Tsun Chien
Institute of Electronics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
Interests: signal processing; deep learning; green learning; wireless communications
Dr. Chih-Chang Yu
Department of Information and Computer Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
Interests: pattern recognition; computer vision; deep learning; learning analytics

Special Issue Information

Dear Colleagues,

Deep learning is highly effective in signal sensing, computer vision, and object recognition, and many of the advanced image and signal sensing and processing algorithms proposed in recent years are based on it. In image processing, deep learning techniques have been widely applied to object detection, object recognition, object tracking, image denoising, image quality improvement, and medical image analysis. In signal processing, they can be applied to speech recognition, musical signal recognition, source separation, signal quality improvement, ECG and EEG signal analysis, and medical signal processing. Deep learning techniques are therefore important for both academic research and product design. In this Special Issue, we encourage authors to submit manuscripts related to algorithms, architectures, solutions, and applications that adopt deep learning techniques. Potential topics include, but are not limited to:

  • Face detection and recognition
  • Learning-based object detection
  • Learning-based object tracking and ReID
  • Hand gesture recognition
  • Human motion recognition
  • Semantic, instance, and panoptic segmentation
  • Image denoising and quality enhancement
  • Medical image processing
  • Learning-based speech recognition
  • Music signal recognition
  • Source separation and echo removal for vocal signals
  • Signal denoising and quality improvement
  • Medical signal analysis

Prof. Dr. Jian-Jiun Ding
Prof. Dr. Feng-Tsun Chien
Dr. Chih-Chang Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensing
  • object detection
  • object recognition
  • tracking
  • medical image processing
  • denoising
  • signal enhancement
  • speech
  • music signal recognition
  • medical signal processing

Published Papers (4 papers)


Research

Article
Obstacle Detection System for Navigation Assistance of Visually Impaired People Based on Deep Learning Techniques
Sensors 2023, 23(11), 5262; https://doi.org/10.3390/s23115262 - 01 Jun 2023
Viewed by 241
Abstract
Visually impaired people seek social integration, yet their mobility is restricted. They need a personal navigation system that can provide privacy and increase their confidence for better life quality. In this paper, based on deep learning and neural architecture search (NAS), we propose an intelligent navigation assistance system for visually impaired people. The deep learning model has achieved significant success through well-designed architecture. Subsequently, NAS has proved to be a promising technique for automatically searching for the optimal architecture and reducing human efforts for architecture design. However, this new technique requires extensive computation, limiting its wide use. Due to its high computation requirement, NAS has been less investigated for computer vision tasks, especially object detection. Therefore, we propose a fast NAS to search for an object detection framework by considering efficiency. The NAS is used to explore the feature pyramid network and the prediction stage of an anchor-free object detection model. The proposed NAS is based on a tailored reinforcement learning technique. The searched model was evaluated on a combination of the COCO dataset and the Indoor Object Detection and Recognition (IODR) dataset. The resulting model outperformed the original model by 2.6% in average precision (AP) with acceptable computational complexity. The achieved results prove the efficiency of the proposed NAS for custom object detection.
(This article belongs to the Special Issue Deep Learning-Based Image and Signal Sensing and Processing)

Article
Tool Wear Condition Monitoring Method Based on Deep Learning with Force Signals
Sensors 2023, 23(10), 4595; https://doi.org/10.3390/s23104595 - 09 May 2023
Viewed by 602
Abstract
Tool wear condition monitoring is an important component of mechanical processing automation, and accurately identifying the wear status of tools can improve processing quality and production efficiency. This paper studied a new deep learning model to identify the wear status of tools. The force signal was transformed into a two-dimensional image using continuous wavelet transform (CWT), short-time Fourier transform (STFT), and Gramian angular summation field (GASF) methods. The generated images were then fed into the proposed convolutional neural network (CNN) model for further analysis. The calculation results show that the accuracy of tool wear state recognition proposed in this paper was above 90%, higher than that of AlexNet, ResNet, and other models. The accuracy of the images generated using the CWT method and identified with the CNN model was the highest, which is attributed to the fact that the CWT method can extract local features of an image and is less affected by noise. Comparing the precision and recall values of the model verified that the images obtained by the CWT method had the highest accuracy in identifying tool wear state. These results demonstrate the potential advantages of transforming a force signal into a two-dimensional image for tool wear state recognition and of applying CNN models in this area. They also indicate the wide application prospects of this method in industrial production.
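The signal-to-image step described in the abstract can be sketched generically. The snippet below uses a plain NumPy short-time Fourier transform (one of the three encodings the paper compares; CWT and GASF are analogous) to turn a one-dimensional force-like signal into a two-dimensional time-frequency image suitable as CNN input. The synthetic signal, frame length, hop size, and sampling rate are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def stft_image(signal, frame_len=128, hop=32):
    """Turn a 1-D signal into a 2-D magnitude spectrogram (freq x time)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i*hop : i*hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # (time, freq) magnitudes
    return spec.T                                # (freq, time) image

# Synthetic "force" signal: a drifting tone plus noise (illustrative only).
fs = 1000.0
t = np.arange(0, 2.0, 1/fs)
signal = (np.sin(2*np.pi*(50 + 20*t)*t)
          + 0.1*np.random.default_rng(0).normal(size=t.size))

img = stft_image(signal)
print(img.shape)  # (frame_len//2 + 1, n_frames) -> (65, 59)
```

The resulting 2-D array can be normalised and stacked into batches exactly like ordinary grayscale images, which is what lets standard CNN architectures be reused for signal classification.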

Article
PW-360IQA: Perceptually-Weighted Multichannel CNN for Blind 360-Degree Image Quality Assessment
Sensors 2023, 23(9), 4242; https://doi.org/10.3390/s23094242 - 24 Apr 2023
Viewed by 499
Abstract
Image quality assessment of 360-degree images is still in its early stages, especially when it comes to solutions that rely on machine learning. There are many challenges to be addressed related to training strategies and model architecture. In this paper, we propose a perceptually weighted multichannel convolutional neural network (CNN) using a weight-sharing strategy for 360-degree IQA (PW-360IQA). Our approach involves extracting visually important viewports based on several visual scan-path predictions, which are then fed to a multichannel CNN using DenseNet-121 as the backbone. In addition, we account for users’ exploration behavior and human visual system (HVS) properties by using information regarding visual trajectory and distortion probability maps. The inter-observer variability is integrated by leveraging different visual scan-paths to enrich the training data. PW-360IQA is designed to learn the local quality of each viewport and its contribution to the overall quality. We validate our model on two publicly available datasets, CVIQ and OIQA, and demonstrate that it performs robustly. Furthermore, the adopted strategy considerably decreases the complexity when compared to the state-of-the-art, enabling the model to attain comparable, if not better, results while requiring less computation.
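The idea of learning a local quality score per viewport together with its contribution to the overall quality can be illustrated with a simple weighted pooling step. The viewport scores and weights below are made-up stand-ins for the two outputs such a network would produce; only the aggregation arithmetic is being shown, not the paper's exact formulation.

```python
import numpy as np

def aggregate_quality(viewport_scores, viewport_weights):
    """Pool per-viewport quality scores into one image-level score,
    normalising the non-negative weights so they sum to 1."""
    q = np.asarray(viewport_scores, dtype=float)
    w = np.asarray(viewport_weights, dtype=float)
    w = w / w.sum()
    return float((w * q).sum())

# Hypothetical outputs for 4 viewports of one 360-degree image:
scores  = [3.8, 4.2, 2.5, 4.0]   # local quality (MOS-like scale)
weights = [0.9, 0.4, 0.2, 0.5]   # learned perceptual importance
print(round(aggregate_quality(scores, weights), 3))  # 3.8
```

Weighting by predicted perceptual importance lets a heavily distorted but rarely viewed region pull the global score down less than a distortion in a region most observers fixate on.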

Article
SR-FEINR: Continuous Remote Sensing Image Super-Resolution Using Feature-Enhanced Implicit Neural Representation
Sensors 2023, 23(7), 3573; https://doi.org/10.3390/s23073573 - 29 Mar 2023
Viewed by 541
Abstract
Remote sensing images often have limited resolution, which can hinder their effectiveness in various applications. Super-resolution techniques can enhance the resolution of remote sensing images, and arbitrary resolution super-resolution techniques provide additional flexibility in choosing appropriate image resolutions for different tasks. However, for subsequent processing, such as detection and classification, the resolution of the input image may vary greatly for different methods. In this paper, we propose a method for continuous remote sensing image super-resolution using feature-enhanced implicit neural representation (SR-FEINR). Continuous remote sensing image super-resolution means users can scale a low-resolution image into an image with arbitrary resolution. Our algorithm is composed of three main components: a low-resolution image feature extraction module, a positional encoding module, and a feature-enhanced multi-layer perceptron module. We are the first to apply implicit neural representation in a continuous remote sensing image super-resolution task. Through extensive experiments on two popular remote sensing image datasets, we have shown that our SR-FEINR outperforms the state-of-the-art algorithms in terms of accuracy. Our algorithm showed an average improvement of 0.05 dB over the existing method at ×30 scale across three datasets.
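The positional encoding module mentioned in the abstract typically maps continuous pixel coordinates to a higher-dimensional Fourier feature vector, which lets an MLP represent high-frequency detail at arbitrary query positions (and hence arbitrary output resolutions). The sketch below shows the standard sin/cos encoding; the number of frequency bands and the coordinate range are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def positional_encoding(coords, n_freqs=6):
    """Map coordinates in [-1, 1] to Fourier features.

    coords: (N, D) array of query positions (e.g. D=2 for x, y).
    Returns (N, D * 2 * n_freqs): sin(2^k*pi*p) and cos(2^k*pi*p) per band k.
    """
    coords = np.asarray(coords, dtype=float)
    freqs = 2.0 ** np.arange(n_freqs) * np.pi          # (n_freqs,)
    angles = coords[..., None] * freqs                 # (N, D, n_freqs)
    feats = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return feats.reshape(coords.shape[0], -1)

# Encode a 3x3 grid of continuous (x, y) query positions:
xs, ys = np.meshgrid(np.linspace(-1, 1, 3), np.linspace(-1, 1, 3))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)    # (9, 2)
print(positional_encoding(coords).shape)  # (9, 24)
```

Because the encoding is defined for any real-valued coordinate, the downstream MLP can be queried on a pixel grid of any density, which is what makes arbitrary-scale (e.g. ×30) upsampling possible.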
