Geometry Reconstruction from Images (2nd Edition)

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Visualization and Computer Graphics".

Deadline for manuscript submissions: 31 May 2024

Special Issue Editor


Dr. Daniel Meneveaux
Guest Editor
XLIM Institute, UMR CNRS 7252, University of Poitiers, 86073 Poitiers, France
Interests: computer graphics; lighting simulation; reflectance models; image-based rendering

Special Issue Information

Dear Colleagues,

Our Special Issue on "Geometry Reconstruction from Images" was a success, with 10 published papers (https://www.mdpi.com/journal/jimaging/special_issues/geometry_reconstruction), and the Journal of Imaging now proposes a second issue in the same area. Recovering 3D content from images has been a tremendous source of research advances, spanning many different focuses, target applications, needs, and scientific starting points. A wide range of approaches is now employed in industry for many purposes, including, for instance, quality control in engineering production, video-based security, and 3D modeling for games and films. Yet reconstructing a representation of a scene observed through a camera remains challenging in general, and the specific question of producing a (static or dynamic) geometric model has driven decades of research and still corresponds to a very active scientific domain. Sensors are continuously evolving, bringing ever more accuracy and resolution, and new opportunities for reconstructing objects' shapes and/or detailed geometric variations.

This new call is dedicated to, but not limited to, 3D reconstruction from videos, multispectral images, or time-of-flight sensors. We wish to encourage original contributions that focus on the power of imaging methods to recover geometric representations of objects or parts of objects. Contributions may follow various approaches, including shape-from-X, deep learning, photometric stereo, NeRF-based methods, etc.

We hope this new call will be of interest to many authors.

Dr. Daniel Meneveaux
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • geometry from images and videos
  • reconstruction from 3D images
  • reconstruction from time of flight sensors
  • reconstruction from multispectral images
  • reconstruction from lightfield cameras
  • multiview reconstruction
  • photometric stereo
  • epipolar geometry
  • space carving and coloring
  • differential geometry
  • deep learning based reconstruction
  • medical imaging
  • radar, satellites
  • cultural heritage
  • virtual and augmented reality

Published Papers (3 papers)

Research

18 pages, 10168 KiB  
Article
Single-Image-Based 3D Reconstruction of Endoscopic Images
by Bilal Ahmad, Pål Anders Floor, Ivar Farup and Casper Find Andersen
J. Imaging 2024, 10(4), 82; https://doi.org/10.3390/jimaging10040082 - 28 Mar 2024
Abstract
A wireless capsule endoscope (WCE) is a medical device designed for the examination of the human gastrointestinal (GI) tract. Three-dimensional models based on WCE images can assist in diagnostics by effectively detecting pathology. These 3D models provide gastroenterologists with improved visualization, particularly in areas of specific interest. However, the constraints of the WCE, such as its lack of controllability and its reliance on expensive equipment that is often unavailable, pose significant challenges to conducting comprehensive experiments aimed at evaluating the quality of 3D reconstruction from WCE images. In this paper, we employ a single-image-based 3D reconstruction method on an artificial colon captured with an endoscope that behaves like a WCE. The shape from shading (SFS) algorithm can reconstruct a 3D shape from a single image; it is therefore employed to reconstruct the 3D shapes of the colon images. The camera of the endoscope has also undergone comprehensive geometric and radiometric calibration. Experiments are conducted on well-defined primitive objects to assess the method's robustness and accuracy. This evaluation compares the reconstructed 3D shapes of the primitives with ground truth data, quantified through measurements of root-mean-square error and maximum error. Afterward, the same methodology is applied to recover the geometry of the colon. The results demonstrate that our approach is capable of reconstructing the geometry of a colon captured with a camera that has an unknown imaging pipeline and significant image noise. Finally, the same procedure is applied to WCE images, and preliminary results illustrate the applicability of our method for reconstructing 3D models from WCE images.
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
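
For readers unfamiliar with the underlying model, shape from shading inverts a Lambertian image-formation equation, I(x, y) = albedo * max(0, n(x, y) . l), and the evaluation above compares reconstructed depth maps to ground truth via root-mean-square and maximum error. The following minimal Python/NumPy sketch illustrates both pieces on a synthetic hemisphere; the function names, light direction, and noise level are illustrative assumptions, not taken from the paper.

    import numpy as np

    def lambertian_image(depth, light=(0.0, 0.0, 1.0), albedo=1.0):
        # Render the Lambertian image that shape from shading inverts:
        # I(x, y) = albedo * max(0, n(x, y) . l). Illustrative sketch only.
        gy, gx = np.gradient(depth)                     # surface gradients
        n = np.dstack((-gx, -gy, np.ones_like(depth)))  # unnormalized normals
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        l = np.asarray(light, dtype=float)
        l /= np.linalg.norm(l)
        return albedo * np.clip(n @ l, 0.0, None)

    def depth_errors(reconstructed, ground_truth):
        # Root-mean-square and maximum error between two depth maps, the
        # two metrics used above to assess accuracy on primitive objects.
        diff = reconstructed - ground_truth
        return np.sqrt(np.mean(diff**2)), np.max(np.abs(diff))

    # Synthetic hemisphere as a stand-in for a calibrated primitive object.
    y, x = np.mgrid[-1:1:128j, -1:1:128j]
    gt = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
    image = lambertian_image(gt)  # the input an SFS method would invert
    rmse, emax = depth_errors(gt + 0.01 * np.random.randn(*gt.shape), gt)
    print(f"RMSE = {rmse:.4f}, max error = {emax:.4f}")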

21 pages, 10758 KiB  
Article
Neural Radiance Field-Inspired Depth Map Refinement for Accurate Multi-View Stereo
by Shintaro Ito, Kanta Miura, Koichi Ito and Takafumi Aoki
J. Imaging 2024, 10(3), 68; https://doi.org/10.3390/jimaging10030068 - 8 Mar 2024
Abstract
In this paper, we propose a method to refine the depth maps obtained by Multi-View Stereo (MVS) through iterative optimization of a Neural Radiance Field (NeRF). MVS accurately estimates depths on object surfaces, while NeRF accurately estimates depths at object boundaries. The key ideas of the proposed method are to combine MVS and NeRF so as to exploit the advantages of both in depth map estimation, and to use NeRF for depth map refinement. We also introduce a Huber loss into the NeRF optimization to improve the accuracy of the refinement; the Huber loss reduces the estimation error in the radiance fields by constraining errors larger than a threshold. Through a set of experiments on the Redwood-3dscan and DTU datasets, which are public datasets of multi-view images, we demonstrate the effectiveness of the proposed method compared to conventional methods: COLMAP, NeRF, and DS-NeRF.
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
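
For reference, the Huber loss penalizes residuals quadratically up to a threshold and linearly beyond it, so large depth errors contribute bounded gradients during optimization. A minimal Python/NumPy sketch follows; the threshold value delta=0.1 is an illustrative assumption, not a value reported by the paper.

    import numpy as np

    def huber_loss(pred, target, delta=0.1):
        # Quadratic for |error| <= delta, linear beyond: outliers are
        # down-weighted, which constrains errors larger than the threshold.
        err = np.abs(pred - target)
        quadratic = 0.5 * err**2
        linear = delta * (err - 0.5 * delta)
        return np.mean(np.where(err <= delta, quadratic, linear))

    # Example: residuals between rendered and reference depth samples.
    print(huber_loss(np.array([0.02, 0.5, -0.3]), np.zeros(3)))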

15 pages, 2033 KiB  
Article
Fast Data Generation for Training Deep-Learning 3D Reconstruction Approaches for Camera Arrays
by Théo Barrios, Stéphanie Prévost and Céline Loscos
J. Imaging 2024, 10(1), 7; https://doi.org/10.3390/jimaging10010007 - 27 Dec 2023
Abstract
In the last decade, many neural network algorithms have been proposed to solve depth reconstruction. Our focus is on reconstruction from images captured by multi-camera arrays: grids of vertically and horizontally aligned, uniformly spaced cameras. Training these networks with supervised learning requires data with ground truth, but existing datasets simulate specific configurations, for example, a fixed-size camera array or a fixed spacing between cameras. When the distance between cameras is small, the array is said to have a short baseline; light-field cameras, with baselines under a centimeter, fall into this category. Conversely, an array with large spacing between cameras is said to have a wide baseline. In this paper, we present a purely virtual data generator that creates large training datasets and can adapt to any camera array configuration; its parameters include, for instance, the array size (number of cameras) and the distance between two cameras. The generator creates virtual scenes by randomly selecting objects and textures, following user-defined parameters such as the disparity range or image parameters (resolution, color space). Generated data are used only for the learning phase. They are unrealistic but present concrete challenges for disparity reconstruction, such as thin elements, and textures are assigned to objects at random to avoid color bias. Our experiments focus on the wide-baseline configuration, which requires more data. We validate the generator by testing the generated datasets with known deep-learning approaches as well as depth reconstruction algorithms; these validation experiments proved successful.
(This article belongs to the Special Issue Geometry Reconstruction from Images (2nd Edition))
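
As context for what such a generator must emit, the ground-truth disparity between two rectified cameras follows d = f * B / Z, so the baseline B and the scene depth range directly determine the disparity range a network must cover. A minimal Python/NumPy sketch under these assumptions is given below; the focal length, baseline, and depth range are illustrative values, not taken from the paper.

    import numpy as np

    def disparity_from_depth(depth, focal_px, baseline):
        # Ground-truth disparity (in pixels) for a rectified camera pair:
        # d = f * B / Z; a generator can emit this alongside each image.
        return focal_px * baseline / np.maximum(depth, 1e-6)

    # Hypothetical wide-baseline setup: 0.2 m between adjacent cameras,
    # 1000 px focal length, scene depths drawn between 2 m and 10 m.
    depth = np.random.uniform(2.0, 10.0, size=(270, 480))
    disp = disparity_from_depth(depth, focal_px=1000.0, baseline=0.2)
    print(disp.min(), disp.max())  # the disparity range a network must learn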
