
Deep Neural Networks Sensing for RGB-D Motion Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (10 March 2024) | Viewed by 2121

Special Issue Editor

Dr. Saeed Anwar
College of Engineering & Computer Science, Australian National University, Canberra, Australia
Interests: low-level vision; image classification; scene understanding; machine learning; computer vision; optimization

Special Issue Information

Dear Colleagues,

Human motion recognition is one of the essential branches of human-centered research. In recent years, motion recognition based on RGB-D data has attracted much attention. Alongside the development of artificial intelligence, deep learning techniques have achieved remarkable success in computer vision. In particular, convolutional neural networks (CNNs) have proven highly effective for image-based tasks, while recurrent neural networks (RNNs) are well suited to sequence-based problems. Accordingly, deep learning methods built on CNN and RNN architectures have been adopted for motion recognition using RGB-D data.
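As a toy illustration of the CNN-then-RNN pipeline described above (this sketch is not from any paper in this issue; the shapes, pooling step, and random weights are illustrative assumptions), a stand-in per-frame feature extractor can feed a vanilla RNN that accumulates motion evidence across an RGB-D clip:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RGB-D clip: T frames, 4 channels (R, G, B, depth), H x W pixels.
T, C, H, W = 8, 4, 32, 32
clip = rng.standard_normal((T, C, H, W))

def frame_features(frame, w):
    """Stand-in for a CNN: global-average-pool each channel, then a linear map."""
    pooled = frame.mean(axis=(1, 2))   # (C,)
    return np.tanh(w @ pooled)         # (F,)

def rnn_step(h, x, w_h, w_x):
    """One step of a vanilla (Elman) RNN over the per-frame features."""
    return np.tanh(w_h @ h + w_x @ x)

F, HID = 16, 32
w_feat = rng.standard_normal((F, C)) * 0.1
w_h = rng.standard_normal((HID, HID)) * 0.1
w_x = rng.standard_normal((HID, F)) * 0.1

h = np.zeros(HID)
for frame in clip:                     # CNN per frame, RNN across time
    h = rnn_step(h, frame_features(frame, w_feat), w_h, w_x)

# The final hidden state summarizes the clip; a classifier head would map
# it to action labels.
print(h.shape)  # (32,)
```

In a real system the pooling function would be a trained CNN backbone and the recurrence an LSTM or GRU, but the data flow (spatial features per frame, temporal aggregation across frames) is the same.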

In this Special Issue, we hope to discuss in detail the latest progress in RGB-D-based motion recognition, as well as future research directions.

Dr. Saeed Anwar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human gait
  • artificial vision
  • RGB-D camera
  • people tracking
  • neurological disease
  • fall prevention
  • machine learning
  • mobile robot
  • RGB-D 3D reconstruction
  • RGB-D human activity understanding
  • RGB-D calibration
  • RGB-D SLAM
  • RGB-D data processing

Published Papers (2 papers)


Research

16 pages, 10789 KiB  
Article
Vision System for a Forestry Navigation Machine
by Tiago Pereira, Tiago Gameiro, José Pedro, Carlos Viegas and N. M. Fonseca Ferreira
Sensors 2024, 24(5), 1475; https://doi.org/10.3390/s24051475 - 24 Feb 2024
Viewed by 475
Abstract
This article presents the development of a vision system designed to enhance the autonomous navigation capabilities of robots in complex forest environments. Leveraging RGB-D and thermal cameras, specifically the Intel RealSense D435i and FLIR ADK, the system integrates diverse visual sensors with advanced image processing algorithms. This integration enables robots to make real-time decisions, recognize obstacles, and dynamically adjust their trajectories during operation. The article focuses on the architectural aspects of the system, emphasizing the role of the sensors and the formulation of the algorithms crucial for ensuring safety during robot navigation in challenging forest terrain. Additionally, the article discusses training on two datasets specifically tailored to forest environments, aiming to evaluate their impact on autonomous navigation. Tests conducted in real forest conditions affirm the effectiveness of the developed vision system. The results underscore the system's pivotal contribution to the autonomous navigation of robots in forest environments.
(This article belongs to the Special Issue Deep Neural Networks Sensing for RGB-D Motion Recognition)
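The depth-based obstacle recognition described in the abstract can be caricatured with a simple threshold on the depth image (a hypothetical simplification, not the authors' algorithm; the synthetic frame, stopping distance, and NaN handling are assumptions for illustration):

```python
import numpy as np

# Synthetic depth frame in metres; NaN marks invalid returns from the sensor.
rng = np.random.default_rng(1)
depth = rng.uniform(0.5, 10.0, size=(48, 64))
depth[10:20, 10:20] = 0.8   # a nearby obstacle, e.g. a tree trunk
depth[0, 0] = np.nan        # an invalid pixel

def obstacle_mask(depth_m, stop_dist=1.5):
    """Flag pixels closer than the stopping distance, ignoring invalid returns."""
    valid = np.isfinite(depth_m)
    return valid & (depth_m < stop_dist)

mask = obstacle_mask(depth)
# Fraction of the field of view occupied by near obstacles -- a trivial
# "slow down / replan" trigger for the navigation stack.
coverage = mask.mean()
print(bool(mask[15, 15]), coverage > 0)
```

A production system would combine such depth cues with learned detectors and the thermal channel, but a per-pixel proximity mask like this is a common first safety gate.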

14 pages, 4857 KiB  
Article
Pruning Points Detection of Sweet Pepper Plants Using 3D Point Clouds and Semantic Segmentation Neural Network
by Truong Thi Huong Giang and Young-Jae Ryoo
Sensors 2023, 23(8), 4040; https://doi.org/10.3390/s23084040 - 17 Apr 2023
Cited by 4 | Viewed by 1261
Abstract
Automation in agriculture can save labor and raise productivity. Our research aims to enable robots to prune sweet pepper plants automatically in smart farms. In previous research, we studied detecting plant parts with a semantic segmentation neural network. In this research, we detect the pruning points of leaves in 3D space using 3D point clouds, so that robot arms can move to these positions and cut the leaves. We propose a method to create 3D point clouds of sweet peppers by combining semantic segmentation neural networks, the ICP algorithm, and ORB-SLAM3, a visual SLAM application, with a LiDAR camera. The resulting 3D point cloud consists of the plant parts recognized by the neural network. We also present a method to detect the leaf pruning points in 2D images and in 3D space using these point clouds. Furthermore, the PCL library was used to visualize the 3D point clouds and the pruning points. Many experiments were conducted to demonstrate the method's stability and correctness.
(This article belongs to the Special Issue Deep Neural Networks Sensing for RGB-D Motion Recognition)
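The ICP registration mentioned in the abstract rests on a closed-form rigid-alignment step. Below is a minimal numpy sketch of that inner step (the standard Kabsch/SVD solution on synthetic points, not the authors' implementation); full ICP wraps it in a loop that re-estimates nearest-neighbor correspondences each iteration:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst (Kabsch/SVD).
    This is the least-squares step of ICP once correspondences are fixed."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t

rng = np.random.default_rng(2)
pts = rng.standard_normal((50, 3))

# Ground-truth motion: a 30-degree yaw plus a translation.
theta = np.deg2rad(30)
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = pts @ r_true.T + t_true

r_est, t_est = rigid_align(pts, moved)
err = np.abs(pts @ r_est.T + t_est - moved).max()
print(err < 1e-6)
```

With exact correspondences, as here, the recovery is numerically exact; with scanned point clouds the estimate improves over repeated correspondence-and-align iterations.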
