Deep Learning for 3D Image and Point Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 2926

Special Issue Editors


Dr. Andrew R. Willis
Guest Editor
Department of Electrical and Computer Engineering, University of North Carolina-Charlotte, Charlotte, NC 28223-0001, USA
Interests: computer vision; pattern recognition; image processing

Dr. Hamed Tabkhi
Guest Editor
Department of Electrical and Computer Engineering, University of North Carolina-Charlotte, Charlotte, NC 28223-0001, USA
Interests: deep learning; real-time AI; image processing

Special Issue Information

Dear Colleagues,

This Special Issue seeks submissions that use 3D sensor technologies in combination with AI algorithms (e.g., deep learning) to solve problems in visual analysis. Sensor technologies of interest include RGBD sensors, LiDAR sensors, and 2D image sensors from which 3D information is to be extracted (e.g., via structure-from-motion (SfM)).
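As an illustration of how 3D information is recovered from such sensors, the snippet below back-projects an RGBD depth map into a point cloud under the pinhole camera model. This is a minimal NumPy sketch; the function name and toy intrinsics are illustrative only and are not drawn from any submission.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) to an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# toy 2x2 depth map with one invalid pixel, unit focal lengths
d = np.array([[1.0, 2.0], [0.0, 1.0]])
pts = depth_to_points(d, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Real pipelines would use calibrated intrinsics from the sensor driver; the filtering step matters because consumer depth sensors report zero at dropout pixels.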

Potential submission topics include:

  • deep learning architectures for 3D AI
  • neuromorphic architectures for 3D AI
  • 3D pose recognition
  • 3D trajectory estimation
  • 3D object recognition
  • 3D surface reconstruction
  • sensor data compression
  • 3D object completion
  • 3D from single images
  • 3D scene analysis
  • 3D datasets for AI
  • explainable AI for 3D inference

While this Special Issue places emphasis on deep learning methods, applications of deep learning systems and novel integrations of deep learning systems as system-level AI solutions for complex problems are also relevant.

Dr. Andrew R. Willis
Dr. Hamed Tabkhi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • RGBD sensors
  • point cloud

Published Papers (2 papers)


Research

11 pages, 2384 KiB  
Article
Global Guided Cross-Modal Cross-Scale Network for RGB-D Salient Object Detection
by Shuaihui Wang, Fengyi Jiang and Boqian Xu
Sensors 2023, 23(16), 7221; https://doi.org/10.3390/s23167221 - 17 Aug 2023
Viewed by 725
Abstract
RGB-D saliency detection aims to accurately localize salient regions using the complementary information of a depth map. Global contexts carried by the deep layers are key to salient object detection, but they are diluted when transferred to shallower layers. Moreover, depth maps may contain misleading information due to imperfections in depth sensors. To tackle these issues, in this paper, we propose a new cross-modal cross-scale network for RGB-D salient object detection, where global context information provides global guidance to boost performance in complex scenarios. First, we introduce a global guided cross-modal and cross-scale module named G2CMCSM to realize global guided cross-modal cross-scale fusion. Then, we employ feature refinement modules for progressive refinement in a coarse-to-fine manner. In addition, we adopt a hybrid loss function to supervise the training of G2CMCSNet over different scales. With all these modules working together, G2CMCSNet effectively enhances both salient object details and salient object localization. Extensive experiments on challenging benchmark datasets demonstrate that our G2CMCSNet outperforms existing state-of-the-art methods.
(This article belongs to the Special Issue Deep Learning for 3D Image and Point Sensors)
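The global guidance idea described in the abstract, squeezing deep-layer context into a signal that steers cross-modal fusion at shallower scales, can be caricatured in a few lines. The NumPy sketch below shows one common realization (global average pooling followed by a sigmoid gate applied to both modalities); it is not the authors' G2CMCSM implementation, and all function names are hypothetical.

```python
import numpy as np

def global_guidance(deep_feat):
    """Squeeze the deepest feature map (C x H x W) into per-channel
    weights via global average pooling + sigmoid, one common way to
    carry global context down to shallower layers."""
    g = deep_feat.mean(axis=(1, 2))   # global average pool -> (C,)
    return 1.0 / (1.0 + np.exp(-g))   # sigmoid gate in (0, 1)

def fuse(rgb_feat, depth_feat, gate):
    """Cross-modal fusion at one scale: modulate both modalities with
    the shared global gate, then combine additively."""
    g = gate[:, None, None]           # broadcast over spatial dims
    return g * rgb_feat + g * depth_feat

rng = np.random.default_rng(0)
deep = rng.standard_normal((8, 4, 4))     # deepest (coarsest) features
rgb = rng.standard_normal((8, 16, 16))    # RGB features at one scale
dep = rng.standard_normal((8, 16, 16))    # depth features at same scale
fused = fuse(rgb, dep, global_guidance(deep))
```

In a real network the gate would be learned (e.g., with a small MLP after the pooling) and fusion repeated at every decoder scale; the sketch only shows the data flow.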

10 pages, 2003 KiB  
Article
RGBD Salient Object Detection, Based on Specific Object Imaging
by Xiaolian Liao, Jun Li, Leyi Li, Caoxi Shangguan and Shaoyan Huang
Sensors 2022, 22(22), 8973; https://doi.org/10.3390/s22228973 - 19 Nov 2022
Cited by 1 | Viewed by 1632
Abstract
RGBD salient object detection based on convolutional neural networks has achieved rapid development in recent years. However, existing models often focus on detecting salient object edges instead of whole objects. Importantly, detecting whole objects can more intuitively display the complete information of the detection target. To address this issue, we propose an RGBD salient object detection method based on specific object imaging, which can quickly capture and process important information on object features and effectively screen out the salient objects in the scene. The screened target objects include not only the edges of the objects but also their complete feature information, which realizes the detection and imaging of the salient objects. We conduct experiments on benchmark datasets and validate with two common metrics, and the results show that our method reduces the error by 0.003 and 0.201 (MAE) on D3Net and JLDCF, respectively. In addition, our method still achieves very good detection and imaging performance when the training data are greatly reduced.
(This article belongs to the Special Issue Deep Learning for 3D Image and Point Sensors)
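The MAE figures quoted above use the standard saliency-detection error metric: the mean absolute difference between the predicted saliency map and the ground-truth mask, with both scaled to [0, 1]. A minimal sketch (the example arrays are illustrative, not from the paper):

```python
import numpy as np

def mae(pred, gt):
    """Mean absolute error between a predicted saliency map and the
    ground-truth mask, both assumed to lie in [0, 1]."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    return np.abs(pred - gt).mean()

pred = np.array([[0.9, 0.1], [0.8, 0.2]])
gt   = np.array([[1.0, 0.0], [1.0, 0.0]])
err = mae(pred, gt)  # (0.1 + 0.1 + 0.2 + 0.2) / 4 = 0.15
```

Lower is better; benchmark differences of a few thousandths, as reported in the abstract, are typical margins on saturated SOD datasets.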
