
Object Recognition with Vision Sensors Based on Machine Learning and Deep Learning

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 10 July 2024 | Viewed by 5425

Special Issue Editors


Guest Editor
School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China
Interests: image understanding; machine learning; language understanding; open-world semantic understanding

Guest Editor
School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
Interests: pattern recognition; computer vision

Special Issue Information

Dear Colleagues,

As vision sensors have been deployed on a massive scale, object recognition has achieved significant success, driven by visual big data and advanced machine learning methods (e.g., deep learning). However, several challenges remain, e.g., the closed-world recognition assumption, the requirement for large-scale labeled images, the problem of few-shot learning, and poor model interpretability. This Special Issue aims to collect object recognition methods with vision sensors designed to meet these challenges.

Dr. Meng Yang
Dr. Jianjun Qian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • open-world object recognition
  • semi-supervised object recognition
  • weakly supervised object recognition
  • unsupervised object recognition
  • few-shot object recognition
  • interpretable object recognition
  • object recognition based on deep learning
  • object recognition based on machine learning

Published Papers (2 papers)


Research

15 pages, 9386 KiB  
Article
Three-Dimensional Positioning for Aircraft Using IoT Devices Equipped with a Fish-Eye Camera
by Junichi Mori, Makoto Morinaga, Takumi Asakura, Takenobu Tsuchiya, Ippei Yamamoto, Kentaro Nishino and Shigenori Yokoshima
Sensors 2023, 23(22), 9108; https://doi.org/10.3390/s23229108 - 10 Nov 2023
Viewed by 852
Abstract
Radar is an important sensing technology for three-dimensional positioning of aircraft. This method requires detecting the response from the object to the signal transmitted from the antenna, but the accuracy becomes unstable due to effects such as obstruction and reflection from surrounding buildings at low altitudes near the antenna. Accordingly, there is a need for a ground-based positioning method with high accuracy. Among the positioning methods using cameras that have been proposed for this purpose, we have developed a multisite synchronized positioning system using IoT devices equipped with a fish-eye camera, and have been investigating its performance. This report describes the details and calibration experiments for this technology. In addition, a case study was performed in which flight paths measured by existing GPS positioning were compared with results from the proposed method. Although the results obtained by each of the methods showed individual characteristics, the three-dimensional coordinates were a good match, showing the effectiveness of the positioning technology proposed in this study.
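At its core, the multisite camera positioning described in this abstract amounts to triangulation: each station converts a detection in its fish-eye image into a bearing ray, and the aircraft position is the point closest to all rays. The following is a minimal least-squares sketch of that geometric step only (the station coordinates and bearings below are hypothetical, and the paper's calibration and synchronization machinery is omitted):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of 3D rays.

    origins: (n, 3) camera positions; directions: (n, 3) bearing vectors
    toward the target (e.g., derived from fish-eye pixel angles).
    Returns the 3D point minimizing the summed squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to d
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Two hypothetical ground stations sighting an aircraft at (100, 50, 300) m:
target = np.array([100.0, 50.0, 300.0])
origins = np.array([[0.0, 0.0, 0.0], [200.0, 0.0, 0.0]])
directions = target - origins  # ideal, noise-free bearings
print(triangulate(origins, directions))
```

With noisy bearings from more than two stations, the same solve returns the point of closest approach to all rays, which is why adding synchronized sites improves stability.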
24 pages, 3227 KiB  
Article
An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images
by Saydirasulov Norkobil Saydirasulovich, Mukhriddin Mukhiddinov, Oybek Djuraev, Akmalbek Abdusalomov and Young-Im Cho
Sensors 2023, 23(20), 8374; https://doi.org/10.3390/s23208374 - 10 Oct 2023
Cited by 7 | Viewed by 4086
Abstract
Forest fires rank among the costliest and deadliest natural disasters globally. Identifying the smoke generated by forest fires is pivotal in facilitating the prompt suppression of developing fires. Nevertheless, existing techniques for detecting forest fire smoke encounter persistent issues, including a slow identification rate, suboptimal detection accuracy, and challenges in distinguishing smoke originating from small sources. This study presents an enhanced YOLOv8 model customized to the context of unmanned aerial vehicle (UAV) images to address the challenges above and attain heightened detection accuracy. Firstly, the research incorporates Wise-IoU (WIoU) v3 as a regression loss for bounding boxes, supplemented by a reasonable gradient allocation strategy that prioritizes samples of common quality. This approach enhances the model's capacity for precise localization. Secondly, the conventional convolutional process within the intermediate neck layer is substituted with the Ghost Shuffle Convolution mechanism, which reduces model parameters and expedites the convergence rate. Thirdly, recognizing the challenge of inadequately capturing salient features of forest fire smoke within intricate wooded settings, this study introduces the BiFormer attention mechanism, which directs the model's attention towards the feature intricacies of forest fire smoke while suppressing the influence of irrelevant, non-target background information. The experimental findings highlight the enhanced YOLOv8 model's effectiveness in smoke detection, achieving an average precision (AP) of 79.4%, a notable 3.3% improvement over the baseline. The model's performance extends to average precision small (APS) and average precision large (APL), registering robust values of 71.3% and 92.6%, respectively.
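The WIoU v3 regression loss mentioned in this abstract builds on the plain intersection-over-union overlap between a predicted and a ground-truth box. As a minimal sketch of that underlying measure only (WIoU's dynamic gradient-weighting is omitted, and the box coordinates are hypothetical):

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted smoke box partially overlapping a ground-truth box:
pred = (10.0, 10.0, 50.0, 50.0)
gt = (30.0, 30.0, 70.0, 70.0)
print(box_iou(pred, gt))  # 400 / 2800, about 0.143
```

An IoU-family loss is typically 1 minus such an overlap score; WIoU v3 additionally scales the gradient per sample so that boxes of ordinary quality dominate training, which is the "gradient allocation strategy" the abstract refers to.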
