
Special Issue "Photogrammetry Meets AI"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: 15 December 2023 | Viewed by 9969

Special Issue Editors

Dr. Fabio Remondino
3D Optical Metrology Unit, Bruno Kessler Foundation (FBK), Via Sommarive 18, 38123 Trento, Italy
Interests: photogrammetry; automation; data fusion; artificial intelligence; geospatial data analytics
Dr. Rongjun Qin
1. Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, OH 43210, USA
2. Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA
Interests: data analytics; aerial/satellite photogrammetry; remote sensing; image processing; machine learning; 3D computer vision; 3D modeling/change detection; deformation analysis; unmanned aerial vehicles; image dense matching

Special Issue Information

Dear Colleagues,

For many years, photogrammetry has been the leading methodology for deriving accurate 3D metric information from imagery at different scales (from satellite to aerial, terrestrial, and underwater) and from different sensors (linear, frame, panoramic). The inclusion of computer vision and robotics solutions has increased the level of automation in image processing and 3D data generation, leading to mainstream automatic solutions and massive 3D digitization processes. The recent advent of artificial intelligence methods based on machine and deep learning is once again changing photogrammetric processes, enabling automated solutions that could truly revolutionize the mapping and 3D documentation sector.

This Special Issue focuses on this recent shift for 3D geometric tasks and seeks high-quality papers that explore the potential offered by AI for photogrammetric problems. Papers should report progress in supporting, integrating, and boosting key areas of photogrammetry with AI-based methods. In particular, submissions should address the following topics:

  • Image matching and learning-based tie points extraction;
  • Outlier removal;
  • Structure from motion and bundle adjustment;
  • Camera projection loss and calibration;
  • Simultaneous localization and mapping (SLAM) in the era of deep learning;
  • Monocular depth estimation;
  • Multi-view stereo (MVS) and dense point cloud generation with neural networks;
  • 3D representation and reconstruction with neural radiance field (NeRF);
  • Implicit methods for 3D representation from images and mesh reconstruction;
  • 3D fusion of heterogeneous datasets;
  • Learning-based DSM inpainting;
  • Point cloud editing, cleaning and filtering;
  • Quantitative evaluations and analyses within applications.
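As a minimal illustration of the outlier-removal topic listed above, the following sketch estimates a 2D translation between matched tie points with RANSAC on synthetic data. The function name, noise model, and thresholds are invented for this example; a real photogrammetric pipeline would robustly estimate a fundamental matrix or homography (e.g., with OpenCV) rather than a translation.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2D translation between matched tie points with RANSAC.

    src, dst: (N, 2) arrays of corresponding image coordinates.
    Returns the estimated translation and a boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one correspondence
        t = dst[i] - src[i]                  # candidate translation
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers for a least-squares estimate
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers

# synthetic tie points: 80 true matches shifted by (5, -3), 20 gross outliers
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (100, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (100, 2))
dst[80:] += rng.uniform(20, 50, (20, 2))     # corrupt the last 20 matches

t, inliers = ransac_translation(src, dst)
print(t.round(1), inliers.sum())
```

Learning-based tie-point extractors change how correspondences are found, but a robust estimator of this kind is still needed downstream to reject mismatches before bundle adjustment.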

Dr. Fabio Remondino
Dr. Rongjun Qin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • photogrammetry
  • machine/deep learning
  • structure from motion
  • 3D reconstruction
  • NeRF
  • data integration and fusion
  • quantitative analyses and comparisons

Published Papers (3 papers)


Research


23 pages, 8734 KiB  
Article
Motorcycle Detection and Collision Warning Using Monocular Images from a Vehicle
Remote Sens. 2023, 15(23), 5548; https://doi.org/10.3390/rs15235548 - 28 Nov 2023
Viewed by 483
Abstract
Motorcycle detection and collision warning are essential features in advanced driver assistance systems (ADAS) to ensure road safety, especially in emergency situations. However, detecting motorcycles from videos captured from a car is challenging due to the varying shapes and appearances of motorcycles. In this paper, we propose an integrated and innovative remote sensing and artificial intelligence (AI) methodology for motorcycle detection and distance estimation based on visual data from a single camera installed in the back of a vehicle. Firstly, MD-TinyYOLOv4 is used for detecting motorcycles, refining the neural network through SPP (spatial pyramid pooling) feature extraction, the Mish activation function, data augmentation techniques, and optimized anchor boxes for training. The proposed algorithm outperforms eight existing YOLO versions, achieving a precision of 81% at a speed of 240 fps. Secondly, a refined disparity map of each motorcycle’s bounding box is estimated by training Monodepth2 with a bilateral filter for distance estimation. The proposed fusion model (motorcycle detection and distance from the vehicle) is evaluated against depth measurements from a stereo camera, and the results show that 89% of warning scenes are correctly detected, with an alarm notification time of 0.022 s for each image. The outcomes indicate that the proposed integrated methodology provides an effective solution for ADAS, with promising results for real-world applications, and is suitable for running on mobility services or embedded computing boards instead of the expensive, powerful systems used in some high-tech unmanned vehicles.
(This article belongs to the Special Issue Photogrammetry Meets AI)
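The distance-estimation step described in the abstract above rests on the standard stereo relation Z = f·B/d between depth and disparity. The following sketch converts a detection box's disparity into a distance and applies a warning threshold; every parameter value here is assumed for illustration and is not taken from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper):
FOCAL_PX = 1000.0     # focal length in pixels
BASELINE_M = 0.54     # baseline used to scale disparity to metres
WARN_DIST_M = 10.0    # distance below which a collision warning is raised

def distance_from_disparity(disparity_px):
    """Depth from the standard stereo relation Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, FOCAL_PX * BASELINE_M / np.maximum(d, 1e-6), np.inf)

def collision_warning(box_disparities):
    """Median disparity inside a detection box -> distance -> warning flag."""
    z = distance_from_disparity(np.median(box_disparities))
    return float(z), bool(z < WARN_DIST_M)

z, warn = collision_warning([60.0, 62.0, 58.0])   # disparities in pixels
print(round(z, 2), warn)   # median disparity 60 px -> 9.0 m -> warning
```

Taking the median disparity inside the bounding box, rather than a single pixel, makes the estimate robust to the stray background pixels a box inevitably contains.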

22 pages, 5258 KiB  
Article
A Critical Analysis of NeRF-Based 3D Reconstruction
Remote Sens. 2023, 15(14), 3585; https://doi.org/10.3390/rs15143585 - 18 Jul 2023
Cited by 7 | Viewed by 5958
Abstract
This paper presents a critical analysis of image-based 3D reconstruction using neural radiance fields (NeRFs), with a focus on quantitative comparisons with respect to traditional photogrammetry. The aim is, therefore, to objectively evaluate the strengths and weaknesses of NeRFs and provide insights into their applicability to different real-life scenarios, from small objects to heritage and industrial scenes. After a comprehensive overview of photogrammetry and NeRF methods, highlighting their respective advantages and disadvantages, various NeRF methods are compared using diverse objects with varying sizes and surface characteristics, including texture-less, metallic, translucent, and transparent surfaces. We evaluated the quality of the resulting 3D reconstructions using multiple criteria, such as noise level, geometric accuracy, and the number of required images (i.e., image baselines). The results show that NeRFs exhibit superior performance over photogrammetry in terms of non-collaborative objects with texture-less, reflective, and refractive surfaces. Conversely, photogrammetry outperforms NeRFs in cases where the object’s surface possesses cooperative texture. Such complementarity should be further exploited in future works.
(This article belongs to the Special Issue Photogrammetry Meets AI)
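Quantitative comparisons like the one described above typically rely on cloud-to-cloud criteria such as accuracy (reconstruction-to-ground-truth distances) and completeness (fraction of ground truth covered by the reconstruction). A brute-force sketch of these two metrics on toy point clouds follows; the threshold and helper names are assumptions, not the paper's exact protocol.

```python
import numpy as np

def nn_distances(a, b):
    """For each point in a, distance to its nearest neighbour in b (brute force)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1)

def evaluate(reconstruction, ground_truth, tau=0.05):
    """Accuracy: mean recon->GT distance. Completeness: fraction of GT
    points lying within tau of the reconstruction."""
    acc = nn_distances(reconstruction, ground_truth)
    comp = nn_distances(ground_truth, reconstruction)
    return {"mean_accuracy": float(acc.mean()),
            "completeness": float((comp < tau).mean())}

# toy example: reconstruction equals GT plus small noise, with one GT region missed
rng = np.random.default_rng(0)
gt = rng.uniform(0, 1, (200, 3))
recon = gt[:150] + rng.normal(0, 0.005, (150, 3))  # 50 GT points unreconstructed
metrics = evaluate(recon, gt, tau=0.05)
print(metrics)
```

The asymmetry between the two directions is the point: a noisy-but-dense NeRF output and a clean-but-gappy photogrammetric cloud fail on different metrics, which is why both must be reported. For real clouds, replace the O(N²) brute force with a KD-tree (e.g., `scipy.spatial.cKDTree`).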

Other


18 pages, 27623 KiB  
Technical Note
DDL-MVS: Depth Discontinuity Learning for Multi-View Stereo Networks
Remote Sens. 2023, 15(12), 2970; https://doi.org/10.3390/rs15122970 - 7 Jun 2023
Cited by 1 | Viewed by 1483
Abstract
We propose an enhancement module called depth discontinuity learning (DDL) for learning-based multi-view stereo (MVS) methods. Traditional methods are known for their accuracy but struggle with completeness. While recent learning-based methods have improved completeness at the cost of accuracy, our DDL approach aims to improve accuracy while retaining completeness in the reconstruction process. To achieve this, we introduce the joint estimation of depth and boundary maps, where the boundary maps are explicitly utilized for further refinement of the depth maps. We validate our idea by integrating it into an existing learning-based MVS pipeline where the reconstruction depends on high-quality depth map estimation. Extensive experiments on various datasets, namely DTU, ETH3D, “Tanks and Temples”, and BlendedMVS, show that our method improves reconstruction quality compared to our baseline, PatchmatchNet. Our ablation study demonstrates that incorporating the proposed DDL significantly reduces the depth map error, for instance, by more than 30% on the DTU dataset, and leads to improved depth map quality in both smooth and boundary regions. Additionally, our qualitative analysis has shown that the reconstructed point cloud exhibits enhanced quality without any significant compromise on completeness. Finally, the experiments reveal that our proposed model and strategies exhibit strong generalization capabilities across the various datasets.
(This article belongs to the Special Issue Photogrammetry Meets AI)
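The core idea above, estimating boundary maps and using them to keep depth refinement from smearing discontinuities, can be mimicked in a small non-learned sketch: derive a discontinuity mask from depth gradients and smooth only away from it. This is an illustration under assumed thresholds, not the paper's network.

```python
import numpy as np

def boundary_map(depth, thresh=0.5):
    """Mark pixels whose depth jump to a neighbour exceeds `thresh`."""
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1]))
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return np.maximum(gx, gy) > thresh

def refine(depth, boundaries):
    """3x3 box-average the depth map, but keep boundary pixels untouched
    so discontinuities are not smeared across object edges."""
    padded = np.pad(depth, 1, mode="edge")
    h, w = depth.shape
    smooth = sum(padded[dy:dy + h, dx:dx + w]
                 for dy in range(3) for dx in range(3)) / 9.0
    return np.where(boundaries, depth, smooth)

# toy depth map: a near object (depth 2 m) in front of a far plane (depth 10 m)
depth = np.full((8, 8), 10.0)
depth[2:6, 2:6] = 2.0
b = boundary_map(depth, thresh=0.5)
refined = refine(depth, b)
print(int(b.sum()), float(refined[0, 0]))
```

In DDL the boundary map is predicted jointly with depth rather than thresholded from gradients, but the gating role it plays in refinement is the same.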
