Advances in Photogrammetry and Remote Sensing: Data Processing and Innovative Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 28874

Special Issue Editors

1. JOANNEUM RESEARCH Forschungsgesellschaft mbH, DIGITAL, Remote Sensing and Geoinformation, Steyrergasse 17, 8010 Graz, Austria
2. Institute of Geodesy, Graz University of Technology, Steyrergasse 30, 8010 Graz, Austria
Interests: photogrammetry; computer vision; remote sensing; machine learning

Guest Editor
Leibniz Universität Hannover, Institute for Photogrammetry and Geoinformation, 30167 Hannover, Germany
Interests: photogrammetry; remote sensing; height models

Special Issue Information

Dear Colleagues,

Photogrammetric remote sensing is evolving, especially with the introduction of novel optical and SAR sensors and the emergence of novel processing methods and innovative applications. Methods from computer vision, machine learning, and deep learning influence remote-sensing-based metrology and foster novel applications, for example, forest assessment, city modeling, land cover and land use classification, carbon reporting, farm land monitoring, change detection, glacier observation, flood prediction, coastal mapping, determination of subsidence, or disaster damage mapping. The driving force is advances in photogrammetry and remote sensing that allow the generation of higher-quality source material from, for instance, stereo matching, 3D reconstruction, and neural networks operating on 2D images and 3D point clouds.

This Special Issue aims to collect papers discussing such advances and breakthroughs in photogrammetric remote sensing. Submitted manuscripts should focus mainly on novelties introduced by recent approaches that link photogrammetry and remote sensing, for example, on the following topics:

  • 3D remote sensing with SAR and optical sensors;
  • Image orientation and geo-referencing;
  • Discrete 3D representation of the surface of the Earth;
  • Digital surface, elevation and terrain models (DSMs, DEMs, DTMs);
  • Forest assessment;
  • City modeling;
  • Land cover and land use classification;
  • Carbon reporting;
  • Farm land monitoring;
  • Change detection;
  • Glacier observation;
  • Flood prediction;
  • Coastal mapping;
  • Determination of subsidence;
  • Disaster damage mapping.

Dr. Roland Perko
Prof. Mattia Crespi
Dr. Karsten Jacobsen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Photogrammetry
  • Remote sensing
  • Computer vision
  • Machine learning
  • Deep learning

Published Papers (8 papers)


Research


23 pages, 17592 KiB  
Article
Partial Scene Reconstruction for Close Range Photogrammetry Using Deep Learning Pipeline for Region Masking
by Mahmoud Eldefrawy, Scott A. King and Michael Starek
Remote Sens. 2022, 14(13), 3199; https://doi.org/10.3390/rs14133199 - 03 Jul 2022
Cited by 4 | Viewed by 2357
Abstract
3D reconstruction is a beneficial technique for generating the 3D geometry of scenes or objects for various applications such as computer graphics, industrial construction, and civil engineering. There are several techniques to obtain the 3D geometry of an object. Close-range photogrammetry is an inexpensive, accessible approach to obtaining high-quality object reconstruction. However, state-of-the-art software systems need a stationary scene or a controlled environment (often a turntable setup with a black background), which can be a limiting factor for object scanning. This work presents a method that reduces the need for a controlled environment and allows the capture of multiple objects with independent motion. We achieve this by creating a preprocessing pipeline that uses deep learning to transform a complex scene from an uncontrolled environment into multiple stationary scenes with a black background, which are then fed into existing software systems for reconstruction. Our pipeline achieves this by using deep learning models to detect and track objects through the scene. The detection and tracking pipeline uses semantic-based detection and tracking and supports available pretrained or custom networks. We develop a correction mechanism to overcome some detection and tracking shortcomings, namely object re-identification and multiple detections of the same object. We show that detection and tracking are effective techniques for addressing scenes with multiple motion systems and that objects can be reconstructed with limited or no knowledge of the camera or the environment.
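The core preprocessing idea described in the abstract, turning one uncontrolled multi-object scene into per-object "stationary" scenes with black backgrounds, can be sketched as follows. The segmentation network and tracker are left abstract; `mask_to_black` and `split_scene` are hypothetical names for illustration, not the authors' API.

```python
import numpy as np

def mask_to_black(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only one object's pixels; set everything else to black.

    `frame` is an H x W x 3 image, `mask` an H x W boolean array as a
    (hypothetical) instance-segmentation network would produce for one
    tracked object.
    """
    out = np.zeros_like(frame)
    out[mask] = frame[mask]
    return out

def split_scene(frames, masks_per_object):
    """Turn one multi-object image sequence into one sequence per object.

    `masks_per_object[obj_id][t]` is the mask of that object in frame t,
    as the detection-and-tracking stage would provide. Each returned
    sequence mimics a turntable-style scene and could be handed to a
    standard photogrammetry tool unchanged.
    """
    return {
        obj_id: [mask_to_black(f, m) for f, m in zip(frames, masks)]
        for obj_id, masks in masks_per_object.items()
    }
```

In practice the masks would come from a pretrained or custom network plus the paper's re-identification correction; the masking step itself is this simple.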

27 pages, 2670 KiB  
Article
Quality Control for the BPG Lossy Compression of Three-Channel Remote Sensing Images
by Fangfang Li, Vladimir Lukin, Oleg Ieremeiev and Krzysztof Okarma
Remote Sens. 2022, 14(8), 1824; https://doi.org/10.3390/rs14081824 - 10 Apr 2022
Cited by 7 | Viewed by 1818
Abstract
This paper deals with providing the desired quality in Better Portable Graphics (BPG)-based lossy compression of color and three-channel remote sensing (RS) images. Quality is described by the Mean Deviation Similarity Index (MDSI), which has been shown to be one of the best metrics for characterizing compressed image quality due to its high conventional and rank-order correlation with Mean Opinion Score (MOS) values. The MDSI properties are studied and three main areas of interest are determined. It is shown that quite different quality and compression ratios (CR) can be observed for the same value of the quality parameter Q that controls compression, depending on the compressed image complexity. To provide the desired quality, a modified two-step procedure is proposed and tested. It has a preliminary stage carried out offline (in advance). At this stage, an average rate-distortion curve (MDSI on Q) is obtained, and it is available by the time a given image has to be compressed. In the first step, an image is compressed using the starting Q determined from the average rate-distortion curve for the desired MDSI. After this, the image is decompressed and the produced MDSI is calculated. In the second step, if necessary, the parameter Q is corrected using the average rate-distortion curve, and the image is compressed with the corrected Q. This procedure decreases the MDSI variance by around one order of magnitude after two steps compared to the variance after the first step. This is important for MDSI values of approximately 0.2–0.25, corresponding to the distortion invisibility threshold. The BPG performance is compared to some other coders, and examples of its application to real-life RS images are presented.
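A minimal numeric sketch of the two-step procedure described above, assuming an illustrative average rate-distortion curve and a `codec(q)` round-trip function; both are stand-ins (the real curve and MDSI values would come from BPG and the metric itself, and the exact correction rule in the paper may differ):

```python
# Illustrative offline average rate-distortion curve (Q -> mean MDSI).
# These numbers are invented for the sketch, not taken from the paper.
AVG_CURVE = [(25, 0.10), (30, 0.15), (35, 0.22), (40, 0.30), (45, 0.40)]

def q_for_mdsi(target_mdsi: float) -> int:
    """Pick the Q whose average-curve MDSI is closest to the target."""
    return min(AVG_CURVE, key=lambda qm: abs(qm[1] - target_mdsi))[0]

def two_step_compress(codec, target_mdsi: float):
    """Two-step quality control in the spirit of the abstract.

    `codec(q)` stands in for a real BPG compress/decompress round trip
    that returns the measured MDSI of the decompressed image.
    """
    # Step 1: start from the Q predicted by the offline average curve.
    q1 = q_for_mdsi(target_mdsi)
    measured = codec(q1)
    # Step 2: correct Q by the gap between measured and desired MDSI,
    # again read off the average curve, and compress once more.
    q2 = q_for_mdsi(target_mdsi - (measured - target_mdsi))
    return q2, codec(q2)
```

For an image whose rate-distortion curve sits a constant offset above the average, the correction step moves the measured MDSI noticeably closer to the target, which is the variance reduction the abstract reports.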

21 pages, 5141 KiB  
Article
Building Height Extraction from GF-7 Satellite Images Based on Roof Contour Constrained Stereo Matching
by Chenni Zhang, Yunfan Cui, Zeyao Zhu, San Jiang and Wanshou Jiang
Remote Sens. 2022, 14(7), 1566; https://doi.org/10.3390/rs14071566 - 24 Mar 2022
Cited by 20 | Viewed by 3750
Abstract
Building height is a basic piece of geographic information for planning and analysis in urban construction. It is still very challenging to estimate the accurate height of complex buildings from satellite images, especially for buildings with podiums. This paper proposes a solution for building height estimation from GF-7 satellite images using a roof contour constrained stereo matching algorithm and DSM (Digital Surface Model)-based bottom elevation estimation. First, an object-oriented roof matching algorithm based on building contours is proposed to extract accurate building roof elevations from the GF-7 stereo images, and a DSM generated from the GF-7 stereo images is then used to obtain building bottom elevations. Second, roof contour constrained stereo matching is conducted between backward and forward image blocks, in which the difference of standard deviation maps is used as the similarity measure. To deal with the multi-height problem of podium buildings, the gray difference image is adopted to segment podium buildings, and re-matching is conducted to find their actual heights. Third, the building height is obtained from the elevation difference between the building top and bottom, in which the elevation of the building bottom is calculated from the elevation histogram statistics of the building buffer in the DSM. Finally, two GF-7 stereo satellite images, collected in Yingde, Guangdong, and Xi’an, Shaanxi, are used for performance evaluation. In addition, an aerial LiDAR point cloud is used for absolute accuracy evaluation. The results demonstrate that, compared with other methods, our solution clearly improves the accuracy of height estimation for high-rise buildings. The MAE (Mean Absolute Error) of the estimated building heights in Yingde is 2.31 m, and the MAEs of the estimated elevations of the building top and bottom are approximately 1.57 m and 1.91 m, respectively; the corresponding RMSEs (Root Mean Square Errors) are 2.01 m and 2.57 m. As for the Xi’an dataset, with 7 podium buildings out of 40 buildings, the MAE of the estimated building height is 1.69 m and the RMSE is 2.34 m. The proposed method can be an effective solution for building height extraction from GF-7 satellite images.
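The final height computation, roof elevation from contour-constrained matching minus a bottom elevation read from the elevation histogram of a DSM buffer around the footprint, might be sketched like this. The histogram statistic (most frequent bin) and the bin width are assumptions for illustration, not necessarily the paper's exact choices:

```python
import numpy as np

def bottom_elevation(dsm_buffer: np.ndarray, bin_m: float = 0.5) -> float:
    """Estimate the ground elevation around a building from a DSM buffer.

    Builds an elevation histogram of the buffer zone and returns the
    center of the most populated bin, on the assumption that most buffer
    pixels are ground rather than neighbouring structures.
    """
    vals = dsm_buffer[np.isfinite(dsm_buffer)].ravel()
    bins = np.arange(vals.min(), vals.max() + bin_m, bin_m)
    hist, edges = np.histogram(vals, bins=bins)
    i = int(np.argmax(hist))
    return float(0.5 * (edges[i] + edges[i + 1]))

def building_height(roof_elev: float, dsm_buffer: np.ndarray) -> float:
    """Height = roof elevation (from stereo matching) minus bottom elevation."""
    return roof_elev - bottom_elevation(dsm_buffer)
```

The histogram step makes the bottom estimate robust to buffer pixels that fall on adjacent buildings or vegetation, which a simple minimum or mean would not be.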

23 pages, 5782 KiB  
Article
Multi-Epoch and Multi-Imagery (MEMI) Photogrammetric Workflow for Enhanced Change Detection Using Time-Lapse Cameras
by Xabier Blanch, Anette Eltner, Marta Guinau and Antonio Abellan
Remote Sens. 2021, 13(8), 1460; https://doi.org/10.3390/rs13081460 - 09 Apr 2021
Cited by 21 | Viewed by 4137
Abstract
Photogrammetric models have become a standard tool for the study of surfaces, structures and natural elements. As an alternative to Light Detection and Ranging (LiDAR), photogrammetry allows 3D point clouds to be obtained at a much lower cost. This paper presents an enhanced workflow for image-based 3D reconstruction of high-resolution models designed to work with fixed time-lapse camera systems, based on multi-epoch multi-images (MEMI) to exploit redundancy. This workflow is part of a fully automatic setup that includes all steps, from capturing the images to obtaining clusters from change detection. The workflow obtains photogrammetric models of higher quality than the classic Structure from Motion (SfM) time-lapse photogrammetry workflow. The MEMI workflow reduced the error by up to a factor of 2 compared to the previous approach, allowing for an M3C2 standard deviation of 1.5 cm. In terms of absolute accuracy, using LiDAR data as a reference, our proposed method is 20% more accurate than models obtained with the classic workflow. The automation of the method, as well as the improvement in the quality of the 3D reconstructed models, enables accurate 4D photogrammetric analysis in near-real time.

19 pages, 21031 KiB  
Article
Improved Real-Time Natural Hazard Monitoring Using Automated DInSAR Time Series
by Krisztina Kelevitz, Kristy F. Tiampo and Brianna D. Corsa
Remote Sens. 2021, 13(5), 867; https://doi.org/10.3390/rs13050867 - 25 Feb 2021
Cited by 3 | Viewed by 2834
Abstract
As part of the collaborative GeoSciFramework project, we are establishing a monitoring system for the Yellowstone volcanic area that integrates multiple geodetic and seismic data sets into an advanced cyber-infrastructure framework that will enable real-time streaming data analytics and machine learning and allow us to better characterize associated long- and short-term hazards. The goal is to continuously ingest both remote sensing (GNSS, DInSAR) and ground-based (seismic, thermal and gas observations, strainmeter, tiltmeter and gravity measurements) data and to query and analyse them in near-real time. In this study, we focus on DInSAR data processing and the effects of various atmospheric corrections and real-time orbits on the automated processing and results. We find that the atmospheric correction provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) is currently the best suited for automated DInSAR processing and that the use of real-time orbits is sufficient for the early-warning application in question. We present an analysis of atmospheric corrections and real-time orbits in a test case over the Kilauea volcanic area in Hawaii. Finally, using these findings, we present displacement time series for the Yellowstone area between May 2018 and October 2019, which are in good agreement with GNSS data where available. These results will contribute to a baseline model for a future early-warning system that will be continuously updated with new DInSAR data acquisitions.

20 pages, 3987 KiB  
Article
Robust Feature Matching with Spatial Smoothness Constraints
by Xu Huang, Xue Wan and Daifeng Peng
Remote Sens. 2020, 12(19), 3158; https://doi.org/10.3390/rs12193158 - 26 Sep 2020
Cited by 8 | Viewed by 3636
Abstract
Feature matching detects and matches corresponding feature points in stereo pairs and is one of the key techniques for accurate camera orientation. However, several factors limit feature matching accuracy, e.g., image textures, viewing angles of stereo cameras, and resolutions of stereo pairs. To improve the feature matching accuracy against these limiting factors, this paper imposes spatial smoothness constraints over the whole feature point set, with the underlying assumption that feature points should have matching results similar to those of their surrounding high-confidence points, and proposes a robust feature matching method with spatial smoothness constraints (RMSS). The core algorithm constructs a graph structure from the feature point set and then formulates the feature matching problem as the optimization of a global energy function with first-order spatial smoothness constraints based on the graph. For computational purposes, the global optimization of the energy function is broken into sub-optimizations for each feature point, and an approximate solution of the energy function is iteratively derived as the matching result for the whole feature point set. Experiments on close-range datasets affected by the limiting factors above show that the proposed method greatly improves the matching robustness and matching accuracy of several feature descriptors (e.g., the Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF)). After the optimization of the proposed method, the inlier numbers of SIFT and SURF increased on average by 131.9% and 113.5%, the inlier percentages (inlier number over total match number) of SIFT and SURF increased on average by 259.0% and 307.2%, and the absolute matching accuracy of SIFT and SURF improved on average by 80.6% and 70.2%.
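The underlying principle, that each feature point should agree with the displacements of its spatial neighbours, can be illustrated with a much-simplified iterated-conditional-modes-style loop. This is not the paper's energy function or solver, only a sketch of the first-order smoothness idea, with all names and parameters chosen for the example:

```python
import numpy as np

def refine_matches(points, candidates, k=4, iters=5):
    """Per-point re-selection of match candidates under a smoothness prior.

    `points` is an (N, 2) array of feature locations in one image;
    `candidates[i]` is an (M_i, 2) array of possible displacement vectors
    for point i (e.g. ranked by descriptor distance). Each point
    iteratively re-picks the candidate closest to the median displacement
    of its k nearest spatial neighbours.
    """
    n = len(points)
    # start from each point's descriptor-best (first) candidate
    disp = np.array([c[0] for c in candidates], dtype=float)
    # k nearest spatial neighbours of every point
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]
    for _ in range(iters):
        for i in range(n):
            local = np.median(disp[nbrs[i]], axis=0)  # smoothness target
            j = int(np.argmin(((candidates[i] - local) ** 2).sum(-1)))
            disp[i] = candidates[i][j]
    return disp
```

An outlier whose best descriptor match contradicts its neighbours gets pulled back to a consistent candidate, which is the mechanism by which smoothness constraints raise the inlier rate.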

19 pages, 7503 KiB  
Article
Point Cloud Stacking: A Workflow to Enhance 3D Monitoring Capabilities Using Time-Lapse Cameras
by Xabier Blanch, Antonio Abellan and Marta Guinau
Remote Sens. 2020, 12(8), 1240; https://doi.org/10.3390/rs12081240 - 13 Apr 2020
Cited by 11 | Viewed by 4974
Abstract
The emerging use of photogrammetric point clouds in three-dimensional (3D) monitoring processes has revealed some constraints with respect to the use of LiDAR point clouds. Oftentimes, point clouds (PCs) obtained by time-lapse photogrammetry have lower density and precision, especially when Ground Control Points (GCPs) are not available or the camera system cannot be properly calibrated. This paper presents a new workflow called Point Cloud Stacking (PCStacking) that overcomes these restrictions by making the most of the iterative solutions for both camera position estimation and internal calibration parameters obtained during bundle adjustment. The basic principle of the stacking algorithm is straightforward: it computes the median of the Z coordinates of each point over multiple photogrammetric models to give a resulting PC with greater precision than any of the individual PCs. The different models are reconstructed from images taken simultaneously from at least five points of view, reducing the systematic errors associated with the photogrammetric reconstruction workflow. The algorithm was tested using both a synthetic point cloud and a real 3D dataset from a rock cliff. The synthetic data were created using mathematical functions that emulate the photogrammetric models. Real data were obtained by very low-cost photogrammetric systems specially developed for this experiment. The resulting point clouds were improved when applying the algorithm in both synthetic and real experiments; e.g., the 25th and 75th error percentiles were reduced from 3.2 cm to 1.4 cm in synthetic tests and from 1.5 cm to 0.5 cm in real conditions.
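The stacking principle itself is nearly a one-liner: given M co-registered models of the same N points, replace each point's Z by the median across models. A sketch, assuming point correspondences are already established and taking X, Y from the first model (that last choice is an assumption of this example):

```python
import numpy as np

def pc_stack(models: np.ndarray) -> np.ndarray:
    """Point Cloud Stacking as described in the abstract.

    `models` has shape (M, N, 3): M photogrammetric models of the same N
    points, assumed co-registered and in correspondence. X and Y are taken
    from the first model; Z is the per-point median across the M models,
    which suppresses model-to-model reconstruction noise.
    """
    stacked = models[0].copy()
    stacked[:, 2] = np.median(models[:, :, 2], axis=0)
    return stacked
```

The median (rather than the mean) is what makes the stack robust to the occasional badly reconstructed model, in the same way median filtering rejects outliers.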

Other


15 pages, 608 KiB  
Technical Note
A New Combined Adjustment Model for Geolocation Accuracy Improvement of Multiple Sources Optical and SAR Imagery
by Niangang Jiao, Feng Wang and Hongjian You
Remote Sens. 2021, 13(3), 491; https://doi.org/10.3390/rs13030491 - 30 Jan 2021
Cited by 5 | Viewed by 2390
Abstract
Numerous Earth observation datasets obtained from different platforms have been widely used in various fields, and geometric calibration is a fundamental step for these applications. Traditional calibration methods are developed based on the rational function model (RFM), which is produced by image vendors as a substitute for the rigorous sensor model (RSM). Generally, the fitting accuracy of the RFM is much better than 1 pixel, whereas it degrades to several pixels in mountainous areas, especially for Synthetic Aperture Radar (SAR) imagery. Therefore, this paper proposes a new combined adjustment for geolocation accuracy improvement of multi-source satellite SAR and optical imagery. Tie points are extracted with a robust image matching algorithm, and relationships between the parameters of the Range-Doppler (RD) model and the RFM are established by transforming them into the same geodetic coordinate system. At the same time, a heterogeneous weight strategy is designed for better convergence. Experimental results indicate that our proposed model can achieve much higher geolocation accuracy, with approximately 2.60 pixels in the X direction and 3.50 pixels in the Y direction. Compared with traditional methods based on the RFM, our proposed model provides a new way to make synergistic use of multi-source remote sensing data.
