
Challenges and Future Trends of 3D Image Sensing, Visualization, and Processing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 November 2024

Special Issue Editor


Dr. Miguel Oliveira
Guest Editor
Polytechnic School of Design, Management and Production Technologies Aveiro-Norte, University of Aveiro, Estrada do Cercal 449, 3720-509 Santiago de Riba-Ul, Oliveira de Azeméis, Portugal
Interests: software engineering; AI; 3D modelling and programming; mobile development; distributed systems

Special Issue Information

Dear Colleagues,

The field of 3D image sensing, visualization, and processing has thrived and grown over the last decade, driven by increasing demand for accurate and efficient 3D sensing technologies across multiple industries, such as robotics, autonomous vehicles, and virtual reality. Despite this remarkable progress, the field still faces many challenges and points to numerous future trends.

Key challenges include improving the accuracy, effectiveness, and robustness of 3D sensing technologies, which entails developing more efficient algorithms for processing and analyzing large amounts of 3D data (big data) and finding ways to make 3D visualization and interaction more intuitive and user-friendly (UI/UX).

Near-future trends point towards the development of more advanced 3D sensors and the integration of 3D sensing into mobile devices and other consumer products. Machine learning, deep learning, and artificial intelligence will play an increasingly important role in processing and analyzing 3D data, while virtual and augmented reality will become more prevalent in industries such as production, gaming, healthcare, and education. In short, the future of 3D image sensing, visualization, and processing is likely to be characterized by continued innovation and growth.

Dr. Miguel Oliveira
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D image sensing
  • visualization
  • processing
  • machine learning
  • deep learning
  • artificial intelligence
  • virtual reality
  • augmented reality
  • Industry 5.0
  • healthcare

Published Papers (3 papers)


Research

22 pages, 11286 KiB  
Article
Analysis of the Photogrammetric Use of 360-Degree Cameras in Complex Heritage-Related Scenes: Case of the Necropolis of Qubbet el-Hawa (Aswan, Egypt)
by José Luis Pérez-García, José Miguel Gómez-López, Antonio Tomás Mozas-Calvache and Jorge Delgado-García
Sensors 2024, 24(7), 2268; https://doi.org/10.3390/s24072268 - 02 Apr 2024
Abstract
This study shows the results of the analysis of the photogrammetric use of 360-degree cameras in complex heritage-related scenes. The goal is to take advantage of the large field of view provided by these sensors and reduce the number of images used to cover the entire scene compared to those needed using conventional cameras. We also try to minimize problems derived from camera geometry and lens characteristics. In this regard, we used a multi-sensor camera composed of six fisheye lenses, applying photogrammetric procedures to several funerary structures. The methodology includes the analysis of several types of spherical images obtained using different stitching techniques and the comparison of the results of image orientation processes considering these images and the original fisheye images. Subsequently, we analyze the possible use of the fisheye images to model complex scenes by reducing the use of ground control points, thus minimizing the need to apply surveying techniques to determine their coordinates. In this regard, we applied distance constraints based on a previous extrinsic calibration of the camera, obtaining results similar to those obtained using a traditional schema based on points. The results have allowed us to determine the advantages and disadvantages of each type of image and configuration, providing several recommendations regarding their use in complex scenes.
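
To make the spherical imaging model behind such workflows concrete, the sketch below maps equirectangular panorama pixels to unit viewing rays, the relationship on which photogrammetric orientation of stitched 360-degree images is built. It is a minimal illustration, not the authors' pipeline; the axis convention, the half-pixel offset, and the example resolution are assumptions.

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map equirectangular pixel coordinates to unit viewing rays.

    Assumed convention: azimuth theta in [-pi, pi) increases
    left-to-right, elevation phi in [-pi/2, pi/2] decreases
    top-to-bottom; the camera looks down +z with y pointing up.
    """
    theta = (u + 0.5) / width * 2.0 * np.pi - np.pi   # azimuth
    phi = np.pi / 2.0 - (v + 0.5) / height * np.pi    # elevation
    x = np.cos(phi) * np.sin(theta)
    y = np.sin(phi)
    z = np.cos(phi) * np.cos(theta)
    return np.stack([x, y, z], axis=-1)

# Example: rays for every pixel of a hypothetical 5376 x 2688 panorama
u, v = np.meshgrid(np.arange(5376), np.arange(2688))
rays = equirect_pixel_to_ray(u, v, 5376, 2688)  # shape (2688, 5376, 3)
```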

25 pages, 11159 KiB  
Article
Multi-Resolution 3D Rendering for High-Performance Web AR
by Argyro-Maria Boutsi, Charalabos Ioannidis and Styliani Verykokou
Sensors 2023, 23(15), 6885; https://doi.org/10.3390/s23156885 - 03 Aug 2023
Abstract
In the context of web augmented reality (AR), 3D rendering that maintains visual quality and frame rate requirements remains a challenge. The lack of a dedicated and efficient 3D format often results in the degraded visual quality of the original data and compromises the user experience. This paper examines the integration of web-streamable view-dependent representations of large-sized and high-resolution 3D models in web AR applications. The developed cross-platform prototype exploits the batched multi-resolution structures of the Nexus.js library as a dedicated lightweight web AR format and tests it against common formats and compression techniques. Built with AR.js and Three.js open-source libraries, it allows the overlay of the multi-resolution models by interactively adjusting the position, rotation and scale parameters. The proposed method includes real-time view-dependent rendering, geometric instancing and 3D pose regression for two types of AR: natural feature tracking (NFT) and location-based positioning for large and textured 3D overlays. The prototype achieves up to a 46% speedup in rendering time compared to optimized glTF models, while a 34 M vertices 3D model is visible in less than 4 s without degraded visual quality in slow 3D networks. The evaluation under various scenes and devices offers insights into how a multi-resolution scheme can be adopted in web AR for high-quality visualization and real-time performance.
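
At the core of view-dependent multi-resolution schemes such as the batched structures mentioned above is a screen-space-error test: render a coarser level whenever its geometric error projects below a pixel tolerance. The sketch below illustrates that generic criterion only; it is not the Nexus.js API, and `node_errors`, `focal_px`, and `tolerance_px` are hypothetical names under a simple pinhole-projection assumption.

```python
def select_lod(node_errors, distance, focal_px, tolerance_px=1.0):
    """Pick the coarsest resolution level whose projected geometric
    error stays below a screen-space tolerance.

    node_errors: per-level geometric error in object units, coarse to fine.
    distance: camera-to-node distance in the same units.
    focal_px: focal length expressed in pixels.
    """
    for level, err in enumerate(node_errors):
        # Pinhole projection of the geometric error onto the screen
        projected = err * focal_px / max(distance, 1e-6)
        if projected <= tolerance_px:
            return level  # coarsest acceptable level
    return len(node_errors) - 1  # fall back to the finest level

# Example: levels with errors 8, 4, 2, 1, 0.5 units viewed from 50 units
# with a 1000 px focal length; level 3 projects to 1 * 1000 / 50 = 20 px.
print(select_lod([8, 4, 2, 1, 0.5], distance=50.0,
                 focal_px=1000.0, tolerance_px=20.0))  # -> 3
```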

15 pages, 12087 KiB  
Article
Computational Integral Imaging Reconstruction via Elemental Image Blending without Normalization
by Eunsu Lee, Hyunji Cho and Hoon Yoo
Sensors 2023, 23(12), 5468; https://doi.org/10.3390/s23125468 - 09 Jun 2023
Cited by 1
Abstract
This paper presents a novel computational integral imaging reconstruction (CIIR) method using elemental image blending to eliminate the normalization process in CIIR. Normalization is commonly used in CIIR to address uneven overlapping artifacts. By incorporating elemental image blending, we remove the normalization step in CIIR, leading to decreased memory consumption and computational time compared to those of existing techniques. We conducted a theoretical analysis of the impact of elemental image blending on a CIIR method using windowing techniques, and the results showed that the proposed method is superior to the standard CIIR method in terms of image quality. We also performed computer simulations and optical experiments to evaluate the proposed method. The experimental results showed that the proposed method enhances the image quality over that of the standard CIIR method, while also reducing memory usage and processing time.
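
For context, the sketch below implements the standard shift-and-sum CIIR baseline, including the per-pixel normalization map that the proposed blending method eliminates. The disparity model, grayscale elemental images, and integer shifts are simplifying assumptions, and the paper's blending scheme itself is not reproduced here.

```python
import numpy as np

def ciir_reconstruct(elemental, pitch_px, depth_ratio):
    """Standard CIIR by shift-and-sum with explicit normalization.

    elemental: array of shape (rows, cols, h, w) of grayscale
    elemental images; pitch_px: lenslet pitch in pixels;
    depth_ratio: simplified disparity factor for the chosen
    reconstruction depth (a hypothetical parameterization).
    """
    rows, cols, h, w = elemental.shape
    shift = pitch_px / depth_ratio  # disparity between adjacent images
    H = int(h + (rows - 1) * shift)
    W = int(w + (cols - 1) * shift)
    acc = np.zeros((H, W))
    count = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            y0, x0 = int(r * shift), int(c * shift)
            acc[y0:y0 + h, x0:x0 + w] += elemental[r, c]
            count[y0:y0 + h, x0:x0 + w] += 1.0
    # The per-pixel division below is the normalization step that the
    # paper removes by blending elemental images before accumulation.
    return acc / np.maximum(count, 1.0)
```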
