Computer Vision Applications for Autonomous Vehicles

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electrical and Autonomous Vehicles".

Deadline for manuscript submissions: closed (15 April 2024) | Viewed by 2289

Special Issue Editors


Dr. Yuseok Ban
Guest Editor
School of Electronics Engineering, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
Interests: machine learning; artificial intelligence

Dr. Kyungjae Lee
Guest Editor
School of Artificial Intelligence, Yong In University, Yongin 17092, Republic of Korea
Interests: multi-sensor-based computer vision; advanced driver assistance systems

Special Issue Information

Dear Colleagues,

Computer vision (CV) methods are widely used by engineers to address a variety of practical vision challenges. We invite researchers to share experimental and theoretical findings on the practical application of CV techniques to autonomous platforms such as cars, drones, and robots, across all scientific and engineering domains. Submitted papers should highlight innovative applications of CV in real-world engineering contexts related to autonomous vehicles. There are no limitations on paper length. Electronic files or software detailing calculations or experimental procedures that cannot be conventionally published may be provided as supplementary digital content.

The focal points of this Special Issue include but are not limited to innovative applications of:

  • Image and video interpretation;
  • Video analysis and captioning;
  • Image retrieval;
  • Image enhancement;
  • Vision-based robotics;
  • Sensor fusion;
  • Multimedia;
  • 3D reconstruction and localization;
  • Object detection and tracking;
  • Event prediction.

Dr. Yuseok Ban
Dr. Kyungjae Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • deep learning
  • machine learning
  • computer vision
  • autonomous vehicle
  • image and video understanding

Published Papers (2 papers)


Research

17 pages, 8829 KiB  
Article
Multiple Moving Vehicles Tracking Algorithm with Attention Mechanism and Motion Model
by Jiajun Gao, Guangjie Han, Hongbo Zhu and Lyuchao Liao
Electronics 2024, 13(1), 242; https://doi.org/10.3390/electronics13010242 - 04 Jan 2024
Viewed by 1241
Abstract
With the acceleration of urbanization and the increasing demand for travel, current road traffic is experiencing rapid growth and more complex spatio-temporal logic. Vehicle tracking on roads presents several challenges, including complex scenes with frequent foreground–background transitions, fast and nonlinear vehicle movements, and the presence of numerous unavoidable low-score detection boxes. In this paper, we propose AM-Vehicle-Track, following the proven-effective paradigm of tracking by detection (TBD). At the detection stage, we introduce the lightweight channel block attention mechanism (LCBAM), facilitating the detector to concentrate more on foreground features with limited computational resources. At the tracking stage, we innovatively propose the noise-adaptive extended Kalman filter (NSA-EKF) module to extract vehicles’ motion information while considering the impact of detection confidence on observation noise when dealing with nonlinear motion. Additionally, we borrow the Byte data association method to address unavoidable low-score detection boxes, enabling secondary association to reduce ID switches. We achieve 42.2 MOTA, 51.2 IDF1, and 364 IDs on the test set of VisDrone-MOT with 72 FPS. The experimental results showcase our approach’s highly competitive performance, attaining SOTA tracking performance with a fast speed. Full article
(This article belongs to the Special Issue Computer Vision Applications for Autonomous Vehicles)
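
For readers unfamiliar with confidence-adaptive observation noise in a Kalman-filter tracker, the following minimal Python/NumPy sketch shows the general idea of scaling the measurement noise by the detection score (a common NSA-style formulation). It is an illustration under assumed names and values, not the authors' NSA-EKF implementation.

import numpy as np

def kalman_update(x, P, z, confidence, R_base):
    """One measurement update for a constant-velocity state x = [cx, cy, vx, vy].

    z          : observed box centre [cx, cy]
    confidence : detector score in [0, 1]; low-score boxes get inflated noise
    R_base     : nominal 2x2 measurement noise covariance
    """
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])   # observe position only
    R = (1.0 - confidence) * R_base        # confidence-adaptive noise scaling
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Toy usage: a high-confidence detection near the predicted position.
x1, P1 = kalman_update(np.array([10.0, 20.0, 1.0, 0.0]), np.eye(4),
                       np.array([10.5, 20.2]), confidence=0.9, R_base=np.eye(2))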

13 pages, 3140 KiB  
Article
Prex-Net: Progressive Exploration Network Using Efficient Channel Fusion for Light Field Reconstruction
by Dong-Myung Kim, Young-Suk Yoon, Yuseok Ban and Jae-Won Suh
Electronics 2023, 12(22), 4661; https://doi.org/10.3390/electronics12224661 - 15 Nov 2023
Viewed by 675
Abstract
Light field (LF) reconstruction is a technique for synthesizing views between LF images and various methods have been proposed to obtain high-quality LF reconstructed images. In this paper, we propose a progressive exploration network using efficient channel fusion for light field reconstruction (Prex-Net), which consists of three parts to quickly produce high-quality synthesized LF images. The initial feature extraction module uses 3D convolution to obtain deep correlations between multiple LF input images. In the channel fusion module, the extracted initial feature map passes through successive up- and down-fusion blocks and continuously searches for features required for LF reconstruction. The fusion block collects the pixels of channels by pixel shuffle and applies convolution to the collected pixels to fuse the information existing between channels. Finally, the LF restoration module synthesizes LF images with high angular resolution through simple convolution using the concatenated outputs of down-fusion blocks. The proposed Prex-Net synthesizes views between LF images faster than existing LF restoration methods and shows good results in the PSNR performance of the synthesized image. Full article
(This article belongs to the Special Issue Computer Vision Applications for Autonomous Vehicles)
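
For readers unfamiliar with pixel-shuffle-based channel fusion, the short PyTorch sketch below shows the general mechanism of rearranging channel groups into spatial positions, mixing them with a convolution, and folding them back. Layer sizes and names are assumptions made for illustration, not the published Prex-Net architecture.

import torch
import torch.nn as nn

class PixelShuffleFusion(nn.Module):
    """Toy fusion block: spread channels into space, convolve, fold back."""

    def __init__(self, channels: int, scale: int = 2):
        # `channels` is the channel count after the shuffle (input has channels * scale**2)
        super().__init__()
        self.shuffle = nn.PixelShuffle(scale)       # (C*r^2, H, W) -> (C, H*r, W*r)
        self.mix = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.unshuffle = nn.PixelUnshuffle(scale)   # (C, H*r, W*r) -> (C*r^2, H, W)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.shuffle(x)      # rearrange channel groups into spatial positions
        y = self.mix(y)          # fuse neighbouring (formerly per-channel) pixels
        return self.unshuffle(y) # restore the original channel layout

# Toy usage on a random feature map: 64 channels = 16 * 2^2.
feat = torch.randn(1, 64, 32, 32)
fused = PixelShuffleFusion(channels=16, scale=2)(feat)
print(fused.shape)  # torch.Size([1, 64, 32, 32])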