
Object Detection and IOU Based on Sensors: Methods and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (15 March 2024) | Viewed by 5520

Special Issue Editor


Prof. Dr. Bin Cao
Guest Editor
School of Artificial Intelligence, Hebei University of Technology, Tianjin 300130, China
Interests: artificial intelligence (computational intelligence); parallel evolutionary algorithms and their applications; Internet of Things and sensor networks; multi-objective evolution of deep neural networks

Special Issue Information

Dear Colleagues,

With the rapid development of sensors, sensor-based object detection has been widely applied across many fields. Intersection over Union (IoU) is an evaluation metric used to measure the accuracy of an object detector on a particular dataset. IoU can be used to evaluate the performance of both classical detectors, such as HOG + linear SVM, and convolutional neural network detectors (R-CNN, Faster R-CNN, YOLO, etc.).
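As a concrete illustration (not tied to any particular detector), the following minimal Python sketch computes IoU for two axis-aligned boxes; the (x1, y1, x2, y2) corner format is an assumed but common convention:

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Example: a predicted box against a ground-truth box.
print(iou((10, 10, 50, 50), (20, 20, 60, 60)))  # ~0.39
```

A detection is typically counted as correct when its IoU with the ground-truth box exceeds a chosen threshold (0.5 is a common choice).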

This Special Issue focuses on “Object Detection and IOU Based on Sensors: Methods and Applications”. It aims to provide a state-of-the-art overview of object detection, object tracking, and object recognition. Potential topics include, but are not limited to, the following:

  • Intersection over union (IoU) for object detection and recognition;
  • Convolutional neural networks (CNN) for object detection and recognition;
  • YOLO (real-time object detection);
  • Sensor and sensing technologies for object detection and recognition;
  • Image classification;
  • Image segmentation;
  • Three-dimensional computer vision;
  • Three-dimensional object detection and recognition;
  • Visual surveillance and monitoring.

Prof. Dr. Bin Cao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

14 pages, 5585 KiB  
Article
Vision-Based On-Road Nighttime Vehicle Detection and Tracking Using Improved HOG Features
by Li Zhang, Weiyue Xu, Cong Shen and Yingping Huang
Sensors 2024, 24(5), 1590; https://doi.org/10.3390/s24051590 - 29 Feb 2024
Cited by 1 | Viewed by 485
Abstract
The lack of discernible vehicle contour features in low-light conditions poses a formidable challenge for nighttime vehicle detection under hardware cost constraints. Addressing this issue, an enhanced histogram of oriented gradients (HOG) approach is introduced to extract relevant vehicle features. Initially, vehicle lights are extracted using a combination of background illumination removal and a saliency model. Subsequently, these lights are integrated with a template-based approach to delineate regions containing potential vehicles. In the next step, superpixel and HOG (S-HOG) features are fused within these regions, and a support vector machine (SVM) is employed for classification. A non-maximum suppression (NMS) method is applied to eliminate overlapping detections, incorporating the fusion of vertical histograms of symmetrical oriented-gradient features (V-HOGs). Finally, a Kalman filter is used to track candidate vehicles over time. Experimental results demonstrate a significant improvement in nighttime vehicle recognition accuracy with the proposed method.
(This article belongs to the Special Issue Object Detection and IOU Based on Sensors: Methods and Applications)
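The S-HOG and V-HOG variants above are specific to this paper, but the underlying HOG + linear SVM classification stage can be sketched with standard libraries. The following is a generic approximation using scikit-image and scikit-learn with placeholder training arrays, not the authors' implementation:

```python
import numpy as np
from skimage.feature import hog   # standard HOG descriptor
from sklearn.svm import LinearSVC

def hog_descriptor(patch):
    """Standard HOG features for a 64x64 grayscale candidate region."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder training data: candidate regions labeled vehicle (1) / non-vehicle (0).
train_patches = [np.random.rand(64, 64) for _ in range(20)]
train_labels = np.array([1] * 10 + [0] * 10)

X = np.array([hog_descriptor(p) for p in train_patches])
clf = LinearSVC().fit(X, train_labels)

# Classify a new candidate region produced by the light-based region proposal step.
candidate = np.random.rand(64, 64)
is_vehicle = clf.predict([hog_descriptor(candidate)])[0]
```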

17 pages, 6837 KiB  
Article
Improved YOLOv7-Based Algorithm for Detecting Foreign Objects on the Roof of a Subway Vehicle
by Weijun Wang, Jinyuan Chen, Zucheng Huang, Hai Yuan, Peng Li, Xuyao Jiang, Xintong Wang, Cheng Zhong and Qunxu Lin
Sensors 2023, 23(23), 9440; https://doi.org/10.3390/s23239440 - 27 Nov 2023
Cited by 2 | Viewed by 1155
Abstract
Subway vehicle roofs must be inspected when entering and exiting the depot to ensure safe operation. This paper presents an improved method for detecting foreign objects on subway vehicle roofs based on the YOLOv7 algorithm. First, images of foreign objects are captured with a line-scan camera at the depot entrance and exit to create a dataset of roof foreign objects. Subsequently, shortcomings of the YOLOv7 algorithm are addressed by introducing the Ghost module, an improved weighted bidirectional feature pyramid network (WBiFPN), and the Wise Intersection over Union (WIoU) bounding-box regression loss function. These enhancements are incorporated to build the improved YOLOv7-based roof foreign object detection model, referred to as YOLOv7-GBW. The experimental results demonstrate the practicality of the proposed method: YOLOv7-GBW achieves a detection accuracy of 90.29% at 54.3 frames per second (fps) with a parameter count of 15.51 million, outperforming mainstream detection algorithms in accuracy, speed, and parameter count. These findings confirm that the method meets the requirements for detecting foreign objects on subway vehicle roofs.
(This article belongs to the Special Issue Object Detection and IOU Based on Sensors: Methods and Applications)
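The WIoU loss mentioned above replaces a plain IoU loss in bounding-box regression. The PyTorch sketch below assumes the v1 formulation of Wise-IoU (the IoU loss scaled by a distance-based focusing coefficient computed against the smallest enclosing box); the exact variant used in YOLOv7-GBW is not specified here, so treat this as illustrative only:

```python
import torch

def iou_xyxy(pred, target, eps=1e-7):
    """IoU for (N, 4) boxes in (x1, y1, x2, y2) format."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    return inter / (area_p + area_t - inter + eps)

def wiou_v1_loss(pred, target, eps=1e-7):
    """Wise-IoU v1 style loss: IoU loss re-weighted by a focusing term."""
    iou = iou_xyxy(pred, target, eps)
    # Centre distance between predicted and target boxes.
    cpx = (pred[:, 0] + pred[:, 2]) / 2
    cpy = (pred[:, 1] + pred[:, 3]) / 2
    ctx = (target[:, 0] + target[:, 2]) / 2
    cty = (target[:, 1] + target[:, 3]) / 2
    # Diagonal of the smallest enclosing box (detached so it only re-weights).
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    wg2 = ((ex2 - ex1) ** 2 + (ey2 - ey1) ** 2).detach()
    focus = torch.exp(((cpx - ctx) ** 2 + (cpy - cty) ** 2) / (wg2 + eps))
    return (focus * (1.0 - iou)).mean()
```

The focusing term grows with the centre distance between prediction and target, so poorly localized boxes contribute more to the gradient than near-perfect ones.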

18 pages, 8420 KiB  
Article
TMNet: A Two-Branch Multi-Scale Semantic Segmentation Network for Remote Sensing Images
by Yupeng Gao, Shengwei Zhang, Dongshi Zuo, Weihong Yan and Xin Pan
Sensors 2023, 23(13), 5909; https://doi.org/10.3390/s23135909 - 26 Jun 2023
Cited by 1 | Viewed by 1316
Abstract
Pixel-level information from remote sensing images is of great value in many fields. CNNs are strong at extracting backbone image features, but owing to the locality of the convolution operation it is difficult to directly capture global feature information and contextual semantic interactions, which makes it hard for a pure CNN model to achieve high-precision semantic segmentation of remote sensing images. Inspired by the Swin Transformer's global feature-encoding capability, we design a two-branch multi-scale semantic segmentation network (TMNet) for remote sensing images. The network adopts a double-encoder, single-decoder structure. The Swin Transformer is used to increase the ability to extract global feature information. A multi-scale feature fusion module (MFM) is designed to merge shallow spatial features from images at different scales into deep features. In addition, a feature enhancement module (FEM) and a channel enhancement module (CEM) are added to the dual encoder to strengthen feature extraction. Experiments on the WHDLD and Potsdam datasets verify the strong performance of TMNet.
(This article belongs to the Special Issue Object Detection and IOU Based on Sensors: Methods and Applications)
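TMNet's dual-encoder design pairs a convolutional branch with a Swin Transformer branch. The toy PyTorch sketch below shows only the general two-branch-then-fuse pattern, with a plain convolution standing in for the transformer branch; the class name and all layer sizes are hypothetical, not taken from the paper:

```python
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    """Toy two-branch encoder: a local CNN branch plus a stand-in global
    branch, merged by a 1x1 convolution. Channel sizes are illustrative."""

    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        # Local branch: plain convolutions.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        # Global branch: stand-in for a Swin Transformer stage.
        self.glob = nn.Sequential(
            nn.Conv2d(in_ch, ch, 7, stride=2, padding=3), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(2 * ch, ch, 1)  # merge the two feature maps

    def forward(self, x):
        return self.fuse(torch.cat([self.cnn(x), self.glob(x)], dim=1))

# feats = DualEncoderFusion()(torch.randn(1, 3, 256, 256))  # -> (1, 64, 128, 128)
```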

Review

16 pages, 3941 KiB  
Review
Overview of Image Datasets for Deep Learning Applications in Diagnostics of Power Infrastructure
by Bogdan Ruszczak, Paweł Michalski and Michał Tomaszewski
Sensors 2023, 23(16), 7171; https://doi.org/10.3390/s23167171 - 14 Aug 2023
Cited by 2 | Viewed by 1920
Abstract
The power sector is one of the most important engineering sectors, with large amounts of equipment, often spread over wide areas, that must be properly maintained. With recent advances in deep learning, many applications can be developed to automate the power line inspection process, replacing previously manual activities. However, in addition to these novel algorithms, this approach requires specialized datasets: collections that have been properly curated and labeled with the help of domain experts. For visual inspection processes, these data are mainly images of various types. This paper consists of two main parts. The first presents information about datasets used in machine learning, especially deep learning; the need to create domain-specific datasets is justified using the example of power infrastructure data, selected repositories of different collections are compared, and selected digital image collections are characterized in more detail. The latter part of the review discusses an original dataset containing 2630 high-resolution labeled images of power line insulators and comments on potential applications of this collection.
(This article belongs to the Special Issue Object Detection and IOU Based on Sensors: Methods and Applications)
