Computer Vision and Virtual Reality: Technologies and Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 20 October 2024

Special Issue Editors

Guest Editor
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: computer vision; virtual reality; digital twin

Guest Editor
Department of Computer Science and Information Systems, Texas A&M University, Commerce, TX 75428, USA
Interests: autonomous driving; computer vision; cyber physical systems; cyber security

Guest Editor
Department of Computer Science, North China Electric Power University, Baoding 071003, China
Interests: computer animation; deep learning

Guest Editor
School of Data Science and Media Intelligence, Communication University of China, Beijing 100024, China
Interests: virtual reality; 3D vision; digital humans

Special Issue Information

Dear Colleagues,

Computer vision (CV) is one of the fundamental technologies behind immersive virtual reality (VR) and augmented reality (AR) systems, in which cameras are often used to capture real-world information. Sensor-captured images and the intelligent processing algorithms applied to them support 3D reconstruction, scene understanding, gesture recognition, eye tracking, and object detection and tracking, all of which contribute to creating a more realistic, interactive, and engaging virtual world.

This Special Issue is open to multidisciplinary research on the convergence of CV and VR/AR. It welcomes original research articles, reviews, and communications in this domain on topics that include, but are not limited to, the following:

  • Deep learning in image processing;
  • Image segmentation;
  • Object detection and recognition;
  • Vision-based tracking and sensing;
  • Pose estimation;
  • Human–computer interaction;
  • 3D reconstruction and computer graphics;
  • SLAM;
  • Scene understanding;
  • Augmented reality;
  • Emerging VR/AR applications and systems based on CV technologies.

We look forward to receiving your contributions.

Dr. Hai Huang
Dr. Yuehua Wang
Dr. Xuqiang Shao
Dr. Ming Meng
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • computer graphics
  • virtual reality
  • augmented reality
  • machine learning
  • deep learning
  • human–computer interaction
  • 3D reconstruction
  • 3D vision

Published Papers (2 papers)

Research

20 pages, 9499 KiB  
Article
Part2Point: A Part-Oriented Point Cloud Reconstruction Framework
by Yu-Cheng Feng, Sheng-Yun Zeng and Tyng-Yeu Liang
Sensors 2024, 24(1), 34; https://doi.org/10.3390/s24010034 - 20 Dec 2023
Abstract
Three-dimensional object modeling is necessary for developing virtual and augmented reality applications. Traditionally, application engineers must manually edit object shapes in art software or use LiDAR to scan physical objects in order to construct 3D models, which is very time-consuming and costly. Fortunately, GPUs have recently provided a cost-effective solution for massive data computation. With GPU support, many studies have proposed 3D model generators based on different learning architectures that can automatically convert 2D object pictures into 3D object models with good performance. However, as the demanded model resolution increases, the required computing time and memory space grow as significantly as the parameters of the learning architecture, which seriously degrades the efficiency of 3D model construction and the feasibility of resolution improvement. To resolve this problem, this paper proposes a part-oriented point cloud reconstruction framework called Part2Point. This framework segments the object into parts, reconstructs a point cloud for each part, and combines the part point clouds into the complete object point cloud. It can therefore reduce the number of network parameters required at a given resolution, effectively minimizing the computation time and memory space needed. Moreover, it can improve the resolution of the reconstructed point cloud so that the reconstructed model presents more details of the object's parts.
(This article belongs to the Special Issue Computer Vision and Virtual Reality: Technologies and Applications)
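
The abstract above describes a per-part pipeline: segment the object into parts, reconstruct a point cloud for each part with a small network, and merge the results. The minimal Python sketch below illustrates that flow only; segment_parts and reconstruct_part are hypothetical placeholders standing in for the learned networks, not the Part2Point models themselves.

    import numpy as np

    def segment_parts(image, num_parts=4):
        # Placeholder segmentation: split the image into vertical strips.
        return np.array_split(image, num_parts, axis=1)

    def reconstruct_part(part_image, points_per_part=256):
        # Placeholder per-part reconstructor: a real system would use a small
        # learned network here; we just emit deterministic pseudo-random points.
        rng = np.random.default_rng(int(part_image.sum()) % (2**32))
        return rng.uniform(-1.0, 1.0, size=(points_per_part, 3))

    def part_oriented_reconstruction(image):
        # Reconstruct each part independently, then merge into one point cloud.
        part_clouds = [reconstruct_part(p) for p in segment_parts(image)]
        return np.concatenate(part_clouds, axis=0)

    fake_image = np.zeros((128, 128, 3), dtype=np.float32)
    print(part_oriented_reconstruction(fake_image).shape)  # (1024, 3)

Because each network only handles one part, the parameter count of any single network stays small even as the total number of reconstructed points, i.e., the effective resolution, grows.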

16 pages, 2490 KiB  
Article
E2LNet: An Efficient and Effective Lightweight Network for Panoramic Depth Estimation
by Jiayue Xu, Jianping Zhao, Hua Li, Cheng Han and Chao Xu
Sensors 2023, 23(22), 9218; https://doi.org/10.3390/s23229218 - 16 Nov 2023
Abstract
Monocular panoramic depth estimation has various applications in robotics and autonomous driving due to its ability to perceive the entire field of view. However, panoramic depth estimation faces two significant challenges: capturing global context and handling panoramic distortion. In this paper, we propose a new framework for panoramic depth estimation that addresses both challenges simultaneously, thereby improving performance. Specifically, we introduce an attention mechanism into the multi-scale dilated convolution and adaptively adjust the receptive field size between different spatial positions, yielding an adaptive attention dilated convolution module that effectively perceives distortion. At the same time, we design a global scene understanding module that integrates global context information into the feature maps produced by the feature extractor. Finally, we trained and evaluated our model on three benchmark datasets, which contain virtual and real-world RGB-D panorama data. The experimental results show that the proposed method achieves competitive performance, comparable to existing techniques in both quantitative and qualitative evaluations. Furthermore, our method has fewer parameters and more flexibility, making it a scalable solution for mobile AR.
(This article belongs to the Special Issue Computer Vision and Virtual Reality: Technologies and Applications)
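
The adaptive attention dilated convolution described above combines several dilation rates and lets a learned, per-position attention decide how much each receptive field contributes. The PyTorch sketch below is purely an assumption about how such a module could be wired, not the E2LNet implementation: AttentionDilatedConv mixes three dilated 3x3 branches with softmax attention weights.

    import torch
    import torch.nn as nn

    class AttentionDilatedConv(nn.Module):
        # Hypothetical module: several dilated 3x3 branches fused by per-position
        # softmax attention, illustrating an adaptively sized receptive field.
        def __init__(self, channels, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
                for d in dilations
            )
            self.attention = nn.Sequential(
                nn.Conv2d(channels, len(dilations), kernel_size=1),
                nn.Softmax(dim=1),  # weights over branches at every pixel
            )

        def forward(self, x):
            weights = self.attention(x)                      # (B, num_branches, H, W)
            outs = [branch(x) for branch in self.branches]   # each (B, C, H, W)
            return sum(w.unsqueeze(1) * o
                       for w, o in zip(weights.unbind(dim=1), outs))

    x = torch.randn(1, 32, 64, 128)           # e.g. a panoramic feature map
    print(AttentionDilatedConv(32)(x).shape)  # torch.Size([1, 32, 64, 128])

Because the attention weights vary per spatial position, regions with strong panoramic distortion can lean on larger receptive fields while less distorted regions keep finer ones.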
