
Smart Image Recognition and Detection Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 5113

Special Issue Editors


Dr. Zhijie Zhang
Guest Editor
Department of Geography, University of Connecticut, U-4148, Storrs, CT, USA
Interests: potential landslide detection; landslide susceptibility mapping; deep learning

Dr. Jiakui Tang
Guest Editor
College of Resources and Environment (CRE), University of Chinese Academy of Sciences, Beijing 100049, China
Interests: environmental remote sensing; ecological remote sensing; remote sensing assessment of aerosol effects

Special Issue Information

Dear Colleagues,

The rapid advance of smart image recognition has opened new opportunities for remote sensing and Earth observation technology. Advances in sensor technology have likewise multiplied the ways in which remote sensing data, especially very high-resolution images, can be acquired. As the volume of data grows overwhelming, more automatic and accurate algorithms are needed to interpret the imagery. Over the last decade, deep learning has provided a new route to smart image processing, and deep-learning-based smart image processing therefore has very broad prospects in remote sensing.

This Special Issue aims to share the latest advances in smart image recognition related to remote sensing. Research employing innovative methods is very welcome. We especially encourage the use of freely available remote sensing data and open-source processing software, as this enables analysis to be conducted anywhere in the world and promotes data equality. Authors are encouraged to upload their data and code as supplementary material and share them with the public. Review papers will also be considered.

Potential topics for this Special Issue may include, but are not limited to, the following:

  • Classification and segmentation of remote sensing images
  • Object recognition and detection using optical or SAR sensors
  • Remote sensing scene classification
  • Remote sensing image change detection
  • Remote sensing applications of unmanned aerial vehicles (UAVs)

Dr. Zhijie Zhang
Dr. Jiakui Tang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • land use and land cover
  • object detection
  • remote sensing scene classification
  • UAV remote sensing
  • change detection
  • machine learning
  • deep learning

Published Papers (5 papers)


Research

21 pages, 6112 KiB  
Article
CM-YOLOv8: Lightweight YOLO for Coal Mine Fully Mechanized Mining Face
by Yingbo Fan, Shanjun Mao, Mei Li, Zheng Wu and Jitong Kang
Sensors 2024, 24(6), 1866; https://doi.org/10.3390/s24061866 - 14 Mar 2024
Cited by 1 | Viewed by 653
Abstract
With the continuous development of deep learning, the application of object detection based on deep neural networks in the coal mine has been expanding. Simultaneously, as the production applications demand higher recognition accuracy, most research chooses to enlarge the depth and parameters of the network to improve accuracy. However, due to the limited computing resources in the coal mining face, it is challenging to meet the computation demands of a large number of hardware resources. Therefore, this paper proposes a lightweight object detection algorithm designed specifically for the coal mining face, referred to as CM-YOLOv8. The algorithm introduces adaptive predefined anchor boxes tailored to the coal mining face dataset to enhance the detection performance of various targets. Simultaneously, a pruning method based on the L1 norm is designed, significantly compressing the model's computation and parameter volume without compromising accuracy. The proposed algorithm is validated on the coal mining dataset DsLMF+, achieving a compression rate of 40% on the model volume with less than a 1% drop in accuracy. Comparative analysis with other existing algorithms demonstrates its efficiency and practicality in coal mining scenarios. The experiments confirm that CM-YOLOv8 significantly reduces the model's computational requirements and volume while maintaining high accuracy.
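The L1-norm pruning criterion mentioned in this abstract is a standard one: filters whose weights have a small L1 norm contribute little to the output and are removed first. The following PyTorch sketch illustrates only that ranking step, under our own naming; it is not the authors' implementation, and the paper's actual pruning schedule and fine-tuning procedure are not shown.

```python
import torch
import torch.nn as nn

def rank_filters_by_l1(conv: nn.Conv2d) -> torch.Tensor:
    """Rank the output filters of a conv layer by the L1 norm of their weights."""
    # weight shape: (out_channels, in_channels, kH, kW)
    l1_norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return torch.argsort(l1_norms)  # ascending: prune from the front

def keep_mask(conv: nn.Conv2d, prune_ratio: float) -> torch.Tensor:
    """Boolean mask of filters to keep after pruning `prune_ratio` of them."""
    order = rank_filters_by_l1(conv)
    n_prune = int(prune_ratio * conv.out_channels)
    mask = torch.ones(conv.out_channels, dtype=torch.bool)
    mask[order[:n_prune]] = False
    return mask

# Example: drop the 40% of filters with the smallest L1 norms,
# mirroring the 40% compression rate reported in the abstract.
conv = nn.Conv2d(64, 128, kernel_size=3)
mask = keep_mask(conv, prune_ratio=0.4)
print(f"keeping {int(mask.sum())} of {conv.out_channels} filters")
```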

22 pages, 91538 KiB  
Article
Study on the Influence of Label Image Accuracy on the Performance of Concrete Crack Segmentation Network Models
by Kaifeng Ma, Mengshu Hao, Wenlong Shang, Jinping Liu, Junzhen Meng, Qingfeng Hu, Peipei He and Shiming Li
Sensors 2024, 24(4), 1068; https://doi.org/10.3390/s24041068 - 6 Feb 2024
Viewed by 480
Abstract
A high-quality dataset is a basic requirement to ensure the training quality and prediction accuracy of a deep learning network model (DLNM). To explore the influence of label image accuracy on the performance of a concrete crack segmentation network model in a semantic segmentation dataset, this study uses three labelling strategies, namely pixel-level fine labelling, outer contour widening labelling and topological structure widening labelling, respectively, to generate crack label images and construct three sets of crack semantic segmentation datasets with different accuracy. Four semantic segmentation network models (SSNMs), U-Net, High-Resolution Net (HRNet)V2, Pyramid Scene Parsing Network (PSPNet) and DeepLabV3+, were used for learning and training. The results show that the datasets constructed from the crack label images with pixel-level fine labelling are more conducive to improving the accuracy of the network model for crack image segmentation. The U-Net had the best performance among the four SSNMs. The Mean Intersection over Union (MIoU), Mean Pixel Accuracy (MPA) and Accuracy reached 85.47%, 90.86% and 98.66%, respectively. The average difference between the quantized width of the crack image segmentation obtained by U-Net and the real crack width was 0.734 pixels, the maximum difference was 1.997 pixels, and the minimum difference was 0.141 pixels. Therefore, to improve the segmentation accuracy of crack images, the pixel-level fine labelling strategy and U-Net are the best choices.
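For reference, the MIoU, MPA, and Accuracy figures quoted above are standard semantic segmentation metrics derived from a confusion matrix. A minimal NumPy sketch (illustrative only, not code from the paper; it assumes every class appears in the ground truth):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, label: np.ndarray, n_classes: int):
    """Compute MIoU, MPA, and overall Accuracy from a confusion matrix."""
    # Confusion matrix: rows = ground-truth class, cols = predicted class
    cm = np.bincount(
        label.flatten() * n_classes + pred.flatten(),
        minlength=n_classes ** 2,
    ).reshape(n_classes, n_classes)
    tp = np.diag(cm)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)  # per-class IoU
    pa = tp / cm.sum(axis=1)                           # per-class pixel accuracy
    return iou.mean(), pa.mean(), tp.sum() / cm.sum()

# Toy two-class (background/crack) example
pred = np.array([[0, 1], [1, 1]])
label = np.array([[0, 1], [0, 1]])
miou, mpa, acc = segmentation_metrics(pred, label, n_classes=2)
```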

21 pages, 5860 KiB  
Article
Road-MobileSeg: Lightweight and Accurate Road Extraction Model from Remote Sensing Images for Mobile Devices
by Guangjun Qu, Yue Wu, Zhihong Lv, Dequan Zhao, Yingpeng Lu, Kefa Zhou, Jiakui Tang, Qing Zhang and Aijun Zhang
Sensors 2024, 24(2), 531; https://doi.org/10.3390/s24020531 - 15 Jan 2024
Viewed by 587
Abstract
Current road extraction models from remote sensing images based on deep learning are computationally demanding and memory-intensive because of their high model complexity, making them impractical for mobile devices. This study aimed to develop a lightweight and accurate road extraction model, called Road-MobileSeg, to address the problem of automatically extracting roads from remote sensing images on mobile devices. The Road-MobileFormer was designed as the backbone structure of Road-MobileSeg. In the Road-MobileFormer, the Coordinate Attention Module was incorporated to encode both channel relationships and long-range dependencies with precise position information for the purpose of enhancing the accuracy of road extraction. Additionally, the Micro Token Pyramid Module was introduced to decrease the number of parameters and computations required by the model, rendering it more lightweight. Moreover, three model structures, namely Road-MobileSeg-Tiny, Road-MobileSeg-Small, and Road-MobileSeg-Base, which share a common foundational structure but differ in the quantity of parameters and computations, were developed. These models varied in complexity and were available for use on mobile devices with different memory capacities and computing power. The experimental results demonstrate that the proposed models outperform the compared typical models in terms of accuracy, lightweight structure, and latency and achieve high accuracy and low latency on mobile devices. This indicates that the models that integrate with the Coordinate Attention Module and the Micro Token Pyramid Module surpass the limitations of current research and are suitable for road extraction from remote sensing images on mobile devices.
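Coordinate attention is a published building block (Hou et al., CVPR 2021). Since the abstract does not spell out the exact variant used in Road-MobileFormer, the PyTorch sketch below shows the generic form: the feature map is pooled along the height and width axes separately, so the resulting attention weights encode channel relationships together with positional information along each axis.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Generic coordinate attention: attention factorized along H and W."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Pool along each spatial axis separately, keeping the other axis intact
        x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (n, c, 1, w)
        return x * a_h * a_w  # position-aware reweighting of the input
```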

17 pages, 44970 KiB  
Article
Research on the Efficiency of Bridge Crack Detection by Coupling Deep Learning Frameworks with Convolutional Neural Networks
by Kaifeng Ma, Xiang Meng, Mengshu Hao, Guiping Huang, Qingfeng Hu and Peipei He
Sensors 2023, 23(16), 7272; https://doi.org/10.3390/s23167272 - 19 Aug 2023
Cited by 1 | Viewed by 1180
Abstract
Bridge crack detection based on deep learning is a research area of great interest and difficulty in the field of bridge health detection. This study aimed to investigate the effectiveness of coupling a deep learning framework (DLF) with a convolutional neural network (CNN) for bridge crack detection. A dataset consisting of 2068 bridge crack images was randomly split into training, verification, and testing sets with a ratio of 8:1:1, respectively. Several CNN models, including Faster R-CNN, Single Shot MultiBox Detector (SSD), You Only Look Once (YOLO)-v5(x), U-Net, and Pyramid Scene Parsing Network (PSPNet), were used to conduct experiments using the PyTorch, TensorFlow2, and Keras frameworks. The experimental results show that the Harmonic Mean (F1) values of the detection results of the Faster R-CNN and SSD models under the Keras framework are relatively large (0.76 and 0.67, respectively, in the object detection model). The YOLO-v5(x) model of the TensorFlow2 framework achieved the highest F1 value of 0.67. In semantic segmentation models, the U-Net model achieved the highest detection result accuracy (AC) value of 98.37% under the PyTorch framework. The PSPNet model achieved the highest AC value of 97.86% under the TensorFlow2 framework. These experimental results provide optimal coupling efficiency parameters of a DLF and CNN for bridge crack detection. A more accurate and efficient DLF and CNN model for bridge crack detection has been obtained, which has significant practical application value.
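The 8:1:1 random split described above is simple to reproduce. A minimal sketch, assuming a list of image file paths; the fixed seed is our own choice for reproducibility, as the paper does not state one:

```python
import random

def split_dataset(paths, seed=42, ratios=(0.8, 0.1, 0.1)):
    """Randomly split image paths into training/verification/testing sets
    with an 8:1:1 ratio."""
    rng = random.Random(seed)
    paths = paths[:]          # copy so the caller's list is untouched
    rng.shuffle(paths)
    n_train = int(ratios[0] * len(paths))
    n_val = int(ratios[1] * len(paths))
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

# Example: 2068 images -> 1654 / 206 / 208
train, val, test = split_dataset([f"img_{i}.png" for i in range(2068)])
```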

20 pages, 4324 KiB  
Article
Sand-Dust Image Enhancement Using Chromatic Variance Consistency and Gamma Correction-Based Dehazing
by Jong-Ju Jeon, Tae-Hee Park and Il-Kyu Eom
Sensors 2022, 22(23), 9048; https://doi.org/10.3390/s22239048 - 22 Nov 2022
Cited by 5 | Viewed by 1540
Abstract
In sand-dust environments, the low quality of images captured outdoors adversely affects many remote-based image processing and computer vision systems, because of severe color casts, low contrast, and poor visibility of sand-dust images. In such cases, conventional color correction methods do not guarantee appropriate performance in outdoor computer vision applications. In this paper, we present a novel color correction and dehazing algorithm for sand-dust image enhancement. First, we propose an effective color correction method that preserves the consistency of the chromatic variances and maintains the coincidence of the chromatic means. Next, a transmission map for image dehazing is estimated using the gamma correction for the enhancement of color-corrected sand-dust images. Finally, a cross-correlation-based chromatic histogram shift algorithm is proposed to reduce the reddish artifacts in the enhanced images. We performed extensive experiments for various sand-dust images and compared the performance of the proposed method to that of several existing state-of-the-art enhancement methods. The simulation results indicated that the proposed enhancement scheme outperforms the existing approaches in terms of both subjective and objective qualities.
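The color-correction step can be pictured as aligning the first- and second-order statistics of the chromatic channels. The NumPy sketch below is a deliberately simplified illustration, not the paper's method: it merely rescales the R and B channels to match the mean and standard deviation of the G channel, whereas the paper's chromatic-variance-consistency formulation is more involved.

```python
import numpy as np

def align_chromatic_stats(img: np.ndarray) -> np.ndarray:
    """Crude color cast removal for an RGB uint8 image: shift and scale the
    R and B channels so their means and standard deviations match the G
    channel's. Sand-dust casts typically inflate R relative to B."""
    out = img.astype(np.float64)
    g = out[..., 1]
    for ch in (0, 2):  # R and B channels
        c = out[..., ch]
        out[..., ch] = (c - c.mean()) * (g.std() / (c.std() + 1e-8)) + g.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Example on random data standing in for a sand-dust photograph
dusty = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
corrected = align_chromatic_stats(dusty)
```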
