
Photogrammetry and Remote Sensing in Environmental and Engineering Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 43592

Special Issue Editors


Guest Editor
Department of Photogrammetry, Remote Sensing of Environment and Spatial Engineering, Faculty of Mining Surveying and Environmental Engineering, AGH University of Science and Technology, Krakow, Poland
Interests: image processing; image classification; image analysis; machine learning; predictive models; land-use and land cover change monitoring; geoinformation; spatial analysis

Guest Editor
Department of Photogrammetry, Remote Sensing of Environment and Spatial Engineering, Faculty of Mining Surveying and Environmental Engineering, AGH University of Science and Technology, Krakow, Poland
Interests: geoinformatics; spatial data uncertainty; DTM; GeoScience for decision support; risk analysis; image processing; image classification; machine learning; application of remote sensing in agriculture; monitoring of reclaimed post-mining areas; water monitoring; photogrammetry and remote sensing in cultural heritage applications; creating 3D models and time series analysis; 4D models

Guest Editor
Department of Photogrammetry, Remote Sensing of Environment and Spatial Engineering, Faculty of Mining Surveying and Environmental Engineering, AGH University of Science and Technology, Krakow, Poland
Interests: photogrammetry; remote sensing; laser scanning in engineering applications; digital image processing; UAV; AI; GIS

Special Issue Information

Dear Colleagues,

For a long time, photogrammetry and remote sensing have been considered valuable and often irreplaceable tools in various applications. In recent years, we have observed continuous development in these fields, resulting in improved accuracy and reliability of the acquired data, as well as faster and more efficient technologies for data processing and analysis. As a result, photogrammetry and remote sensing tools have been successfully used to solve applied research problems in many areas.

This Special Issue is focused on broadly defined environmental and/or engineering applications. We would like to invite research papers presenting innovative approaches to the implementation of photogrammetric and remote sensing methods in practice.

Submissions may be related to the use of close-range, terrestrial, aerial, and/or satellite-based sensors of various kinds. Studies dedicated to the integration of photogrammetric and remote sensing techniques and/or novel methods of their application are especially welcome.

Dr. Wojciech Drzewiecki
Prof. Dr. Beata Hejmanowska
Dr. Sławomir Mikrut
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Photogrammetry
  • Remote sensing
  • Laser scanning
  • Close-range sensors
  • Airborne sensors
  • Satellite sensors

Published Papers (14 papers)


Research


36 pages, 11769 KiB  
Article
CRBeDaSet: A Benchmark Dataset for High Accuracy Close Range 3D Object Reconstruction
by Grzegorz Gabara and Piotr Sawicki
Remote Sens. 2023, 15(4), 1116; https://doi.org/10.3390/rs15041116 - 18 Feb 2023
Cited by 3 | Viewed by 2147
Abstract
This paper presents the CRBeDaSet—a new benchmark dataset designed for evaluating close range, image-based 3D modeling and reconstruction techniques—and the first empirical experiences of its use. The test object is a medium-sized building whose elevations are characterized by diverse textures. The dataset contains: the geodetic spatial control network (12 stabilized ground points determined using iterative multi-observation parametric adjustment) and the photogrammetric network (32 artificial signalized and 18 defined natural control points) measured using a Leica TS30 total station; 36 terrestrial, mainly convergent photos acquired from elevated camera standpoints with a non-metric Nikon D5100 digital single-lens reflex camera (ground sample distance approx. 3 mm); the comprehensive results of the bundle block adjustment with simultaneous camera calibration performed in the Pictran software package; and the colored point clouds (ca. 250 million points) from terrestrial laser scanning acquired using the Leica ScanStation C10 and post-processed in the Leica Cyclone™ SCAN software (ver. 2022.1.1), which were denoised, filtered, and classified according to the LoD3 standard (ca. 62 million points). Existing datasets and benchmarks are also described and evaluated in the paper. The proposed photogrammetric dataset was experimentally tested in the open-source application GRAPHOS and the commercial suites ContextCapture, Metashape, PhotoScan, Pix4Dmapper, and RealityCapture. As a first evaluation, the difficulties and errors that occurred in the software used during digital processing of the dataset are shown and discussed. The proposed CRBeDaSet benchmark dataset allows high-accuracy (millimeter-range) photogrammetric 3D object reconstruction at close range, based on multi-view uncalibrated imagery, dense image matching techniques, and generated dense point clouds. Full article

17 pages, 5238 KiB  
Article
The Lidargrammetric Model Deformation Method for Altimetric UAV-ALS Data Enhancement
by Antoni Rzonca and Mariusz Twardowski
Remote Sens. 2022, 14(24), 6391; https://doi.org/10.3390/rs14246391 - 17 Dec 2022
Cited by 2 | Viewed by 1591
Abstract
The altimetric accuracy of aerial laser scanning (ALS) data is one of the most important issues in ALS data processing. In this paper, the authors present a previously unknown, yet simple and efficient method for the altimetric enhancement of ALS data based on the concept of lidargrammetry. The well-known photogrammetric theory of stereo model deformations caused by errors in the relative orientation parameters of a stereo pair was applied for the continuous correction of lidar data based on ground control points. The preliminary findings suggest that the method is correct, efficient, and precise, and that the correction of the point cloud is continuous. The theory of the method and its implementation within the research software are presented in the text. Several tests were performed on synthetic and real data. The most significant results are presented and discussed in the article, together with a discussion of the potential of lidargrammetry; the main directions of future research are also mapped out. These results confirm that this study resolves the research gap in the area of altimetric enhancement of ALS data without additional trajectory data. Full article

18 pages, 5348 KiB  
Article
Monitoring Lightning Location Based on Deep Learning Combined with Multisource Spatial Data
by Mingyue Lu, Yadong Zhang, Min Chen, Manzhu Yu and Menglong Wang
Remote Sens. 2022, 14(9), 2200; https://doi.org/10.3390/rs14092200 - 4 May 2022
Cited by 4 | Viewed by 2358
Abstract
Lightning is an important cause of casualties, and of the interruption of power supply and distribution facilities. Monitoring lightning locations is essential in disaster prevention and mitigation. Although there are many ways to obtain lightning information, there are still substantial problems in intelligent lightning monitoring. Deep learning combined with weather radar data and land attribute data can lay the foundation for future monitoring of lightning locations. Therefore, based on the residual network, the Lightning Monitoring Residual Network (LM-ResNet) is proposed in this paper to monitor lightning location. Furthermore, comparisons with GoogLeNet and DenseNet were also conducted to evaluate the proposed model. The results show that the LM-ResNet model has significant potential in monitoring lightning locations. In this study, we converted the lightning monitoring problem into a binary classification problem and then obtained weather radar product data (including the plan position indicator (PPI), composite reflectance (CR), echo top (ET), vertical integral liquid water (VIL), and average radial velocity (V)) and land attribute data (including aspect, slope, land use, and NDVI) to establish a lightning feature dataset. During model training, the focal loss function was adopted as a loss function to address the constructed imbalanced lightning feature dataset. Moreover, we conducted stepwise sensitivity analysis and single factor sensitivity analysis. The results of stepwise sensitivity analysis show that the best performance can be achieved using all the data, followed by the combination of PPI, CR, ET, and VIL. The single factor sensitivity analysis results show that the ET radar product data are very important for the monitoring of lightning locations, and the NDVI land attribute data also make significant contributions. Full article
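The focal loss adopted above to handle the imbalanced lightning feature dataset can be illustrated with a short NumPy sketch; the γ and α values below are common defaults, not necessarily those used in the paper:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights well-classified examples so that
    the rare positive (lightning) class dominates the training signal.
    p: predicted probability of the positive class; y: 0/1 labels."""
    p_t = np.where(y == 1, p, 1.0 - p)             # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less loss than an error:
easy = float(focal_loss(np.array([0.9]), np.array([1]))[0])
hard = float(focal_loss(np.array([0.1]), np.array([1]))[0])
```

With γ = 2, the loss of the easy example is suppressed by the modulating factor (1 − 0.9)² = 0.01, which is what lets the scarce lightning pixels drive the gradients.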

29 pages, 8617 KiB  
Article
Assessing the Performance of WRF Model in Simulating Heavy Precipitation Events over East Africa Using Satellite-Based Precipitation Product
by Isaac Kwesi Nooni, Guirong Tan, Yan Hongming, Abdoul Aziz Saidou Chaibou, Birhanu Asmerom Habtemicheal, Gnim Tchalim Gnitou and Kenny T. C. Lim Kam Sian
Remote Sens. 2022, 14(9), 1964; https://doi.org/10.3390/rs14091964 - 19 Apr 2022
Cited by 15 | Viewed by 3115
Abstract
This study investigated the capability of the Weather Research and Forecasting (WRF) model to simulate seven different heavy precipitation (PRE) events that occurred across East Africa in the summer of 2020. The WRF model outputs were evaluated against high-resolution satellite-based observations, which were obtained from prior evaluations of several satellite observations with 30 stations’ data. The synoptic conditions accompanying the events were also investigated to determine the conditions that are conducive to heavy PRE. The verification of the WRF output was carried out using the area-related root mean square error (RMSE)-based fuzzy method. This method quantifies the similarity of PRE intensity distribution between forecast and observation at different spatial scales. The results showed that the WRF model reproduced the heavy PRE with PRE magnitudes ranging from 6 to >30 mm/day. The spatial pattern from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification-Climate Data Record (PERSIANN-CCS-CDR) was close to that of the WRF output. The area-related RMSE with respect to observation showed that the error in the model tended to reduce as the spatial scale increased for all the events. The WRF and high-resolution satellite data had an obvious advantage when validating the heavy PRE events in 2020. This study demonstrated that WRF may be used for forecasting heavy PRE events over East Africa when high resolutions and subsequent simulation setups are used. Full article
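The scale dependence described above (error shrinking as the spatial scale grows) can be reproduced in miniature by block-averaging both fields over increasingly large windows before computing the RMSE. This is only a sketch of the area-related, fuzzy-verification idea on synthetic data; the field construction and window sizes are assumptions:

```python
import numpy as np

def area_rmse(forecast, obs, scale):
    """RMSE between forecast and observation after block-averaging both
    fields over scale x scale windows (a sketch of the area-related,
    fuzzy-verification idea)."""
    h, w = forecast.shape
    h2, w2 = h - h % scale, w - w % scale
    def block_mean(a):
        return a[:h2, :w2].reshape(h2 // scale, scale, w2 // scale, scale).mean(axis=(1, 3))
    f, o = block_mean(forecast), block_mean(obs)
    return float(np.sqrt(np.mean((f - o) ** 2)))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=(60, 60))            # synthetic rain field (mm/day)
fcst = obs + rng.normal(0, 3.0, size=(60, 60))      # forecast = obs + random error
errors = [area_rmse(fcst, obs, s) for s in (1, 3, 5, 15)]
```

Because the forecast error here is spatially uncorrelated, it averages out within larger windows, so the area-related RMSE decreases with scale, mirroring the behavior reported for the WRF simulations.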

19 pages, 9811 KiB  
Article
Uncertainty-Guided Depth Fusion from Multi-View Satellite Images to Improve the Accuracy in Large-Scale DSM Generation
by Rongjun Qin, Xiao Ling, Elisa Mariarosaria Farella and Fabio Remondino
Remote Sens. 2022, 14(6), 1309; https://doi.org/10.3390/rs14061309 - 8 Mar 2022
Cited by 6 | Viewed by 3609
Abstract
The generation of digital surface models (DSMs) from multi-view high-resolution (VHR) satellite imagery has recently received great attention due to the increasing availability of such space-based datasets. Existing production-level pipelines primarily adopt a multi-view stereo (MVS) paradigm, which exploits the statistical depth fusion of multiple DSMs generated from individual stereo pairs. To make this process scalable, these depth fusion methods often adopt simple approaches such as the median filter or its variants, which are efficient in computation but lack the flexibility to adapt to the heterogeneous information of individual pixels. These simple fusion approaches generally discard ancillary information produced by MVS algorithms (such as measurement confidence/uncertainty) that is otherwise extremely useful for enabling adaptive fusion. To make use of such information, this paper proposes an efficient and scalable approach that incorporates the matching uncertainty to adaptively guide the fusion process. This seemingly straightforward idea has a higher-level advantage: first, the uncertainty information is obtained from global/semiglobal matching methods, which inherently populate global information of the scene, making the fusion process nonlocal. Secondly, these globally determined uncertainties are operated locally to achieve efficiency for processing large-sized images, making the method extremely practical to implement. The proposed method can exploit results from stereo pairs with small intersection angles to recover details in areas with dense buildings and narrow streets, while also benefiting from highly accurate 3D points generated in flat regions under large intersection angles. The proposed method was applied to DSMs generated from Worldview, GeoEye, and Pleiades stereo pairs covering a large area (400 km2). Experiments showed that we achieved an RMSE (root-mean-squared error) improvement of approximately 0.1–0.2 m over a typical median filter approach for fusion (equivalent to a 5–10% relative accuracy improvement). Full article
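The contrast between a plain median and an uncertainty-guided fusion can be sketched per pixel as an inverse-uncertainty weighted average. This is an illustration of the general idea only, on toy data; the paper's exact weighting scheme is not reproduced here:

```python
import numpy as np

def fuse_dsms(dsms, uncertainties):
    """Per-pixel fusion of stacked DSMs: instead of a plain median,
    weight each stereo pair's height by the inverse of its matching
    uncertainty. dsms, uncertainties: arrays of shape (n_pairs, H, W)."""
    w = 1.0 / (uncertainties + 1e-6)
    return (w * dsms).sum(axis=0) / w.sum(axis=0)

# Three noisy height maps of a flat 10 m surface; the third is unreliable.
rng = np.random.default_rng(1)
heights = 10.0 + rng.normal(0, [[[0.1]], [[0.1]], [[2.0]]], size=(3, 4, 4))
sigma = np.array([0.1, 0.1, 2.0])[:, None, None] * np.ones((3, 4, 4))
fused = fuse_dsms(heights, sigma)
plain_median = np.median(heights, axis=0)
```

The unreliable third DSM contributes only a small weight to the fused result, whereas a plain median treats all three pairs as equally trustworthy.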

19 pages, 8697 KiB  
Article
Water Quality Chl-a Inversion Based on Spatio-Temporal Fusion and Convolutional Neural Network
by Haibo Yang, Yao Du, Hongling Zhao and Fei Chen
Remote Sens. 2022, 14(5), 1267; https://doi.org/10.3390/rs14051267 - 5 Mar 2022
Cited by 25 | Viewed by 3595
Abstract
The combination of remote sensing technology and traditional field sampling provides a convenient way to monitor inland water. However, limited by the resolution of remote sensing images and cloud contamination, the current water quality inversion products do not provide both high temporal resolution and high spatial resolution. By using the spatio-temporal fusion (STF) method, high spatial resolution and temporal fusion images were generated with Landsat, Sentinel-2, and GaoFen-2 data. Then, a Chl-a inversion model was designed based on a convolutional neural network (CNN) with the structure of 4-(136-236-340)-1-1. Finally, the results of the Chl-a concentrations were corrected using a pixel correction algorithm. The images generated from STF can maintain the spectral characteristics of the low-resolution images with the R2 between 0.7 and 0.9. The Chl-a inversion results based on the spatio-temporal fused images and CNN were verified with measured data (R2 = 0.803), and then the results were improved (R2 = 0.879) after further combining them with the pixel correction algorithm. The correlation R2 between the Chl-a results of GF2-like and Sentinel-2 were both greater than 0.8. The differences in the spatial distribution of Chl-a concentrations in the BYD lake gradually increased from July to August. Remote sensing water quality inversion based on STF and CNN can effectively achieve high frequency in time and fine resolution in space, which provide a stronger scientific basis for rapid diagnosis of eutrophication in inland lakes. Full article

21 pages, 16903 KiB  
Article
A Detection Method for Collapsed Buildings Combining Post-Earthquake High-Resolution Optical and Synthetic Aperture Radar Images
by Chao Wang, Yan Zhang, Tao Xie, Lin Guo, Shishi Chen, Junyong Li and Fan Shi
Remote Sens. 2022, 14(5), 1100; https://doi.org/10.3390/rs14051100 - 23 Feb 2022
Cited by 4 | Viewed by 2539
Abstract
The detection of collapsed buildings based on post-earthquake remote sensing images is conducive to eliminating the dependence on pre-earthquake data, which is of great significance for carrying out an emergency response in time. The difficulty of obtaining elevation information, which provides strong evidence of whether a building has collapsed, is the main challenge in the practical application of this method. On the one hand, the introduction of double bounce features in synthetic aperture radar (SAR) images is helpful in judging whether buildings have collapsed. On the other hand, because SAR images are limited by their imaging mechanism, it is necessary to introduce the spatial details of optical images as a supplement in the detection of collapsed buildings. Therefore, a detection method for collapsed buildings combining post-earthquake high-resolution optical and SAR images was proposed by mining complementary information between traditional visual features and double bounce features from multi-source data. In this method, a strategy of optical and SAR object set extraction based on an inscribed center (OpticalandSAR-ObjectsExtraction) was first put forward to extract a unified optical-SAR object set. Based on this, a quantitative representation of collapse semantic knowledge in double bounce (DoubleBounceCollapseSemantic) was designed to bridge the semantic gap between double bounce and the collapse features of buildings. Ultimately, the final detection results were obtained based on improved active learning support vector machines (SVMs). Multi-group experimental results on post-earthquake multi-source images show that the overall accuracy (OA) and the detection accuracy for collapsed buildings (Pcb) of the proposed method reach more than 82.39% and 75.47%, respectively. The proposed method is therefore significantly superior to many advanced methods used for comparison. Full article

31 pages, 29726 KiB  
Article
3D Modeling of Urban Area Based on Oblique UAS Images—An End-to-End Pipeline
by Valeria-Ersilia Oniga, Ana-Ioana Breaban, Norbert Pfeifer and Maximilian Diac
Remote Sens. 2022, 14(2), 422; https://doi.org/10.3390/rs14020422 - 17 Jan 2022
Cited by 10 | Viewed by 3790
Abstract
3D modelling of urban areas is an attractive and active research topic, as 3D digital models of cities are becoming increasingly common for urban management as a consequence of the constantly growing number of people living in cities. Viewed as a digital representation of the Earth’s surface, an urban area modeled in 3D includes objects such as buildings, trees, vegetation and other anthropogenic structures, highlighting the buildings as the most prominent category. A city’s 3D model can be created based on different data sources, especially LiDAR or photogrammetric point clouds. This paper’s aim is to provide an end-to-end pipeline for 3D building modeling based on oblique UAS images only, the result being a parametrized 3D model with the Open Geospatial Consortium (OGC) CityGML standard, Level of Detail 2 (LOD2). For this purpose, a flight over an urban area of about 20.6 ha has been taken with a low-cost UAS, i.e., a DJI Phantom 4 Pro Professional (P4P), at 100 m height. The resulting UAS point cloud with the best scenario, i.e., 45 Ground Control Points (GCP), has been processed as follows: filtering to extract the ground points using two algorithms, CSF and terrain-mark; classification, using two methods, based on attributes only and a random forest machine learning algorithm; segmentation using local homogeneity implemented into Opals software; plane creation based on a region-growing algorithm; and plane editing and 3D model reconstruction based on piece-wise intersection of planar faces. The classification performed with ~35% training data and 31 attributes showed that the Visible-band difference vegetation index (VDVI) is a key attribute and 77% of the data was classified using only five attributes. The global accuracy for each modeled building through the workflow proposed in this study was around 0.15 m, so it can be concluded that the proposed pipeline is reliable. Full article
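The VDVI attribute that proved decisive in the classification above is computed from the visible bands only, which makes it usable with RGB-only UAS imagery. A minimal sketch (the sample pixel values are hypothetical):

```python
import numpy as np

def vdvi(r, g, b, eps=1e-9):
    """Visible-band difference vegetation index from RGB bands:
    VDVI = (2G - R - B) / (2G + R + B).
    Green-dominated (vegetated) pixels give clearly positive values;
    grey surfaces such as roofs stay near zero."""
    r, g, b = (np.asarray(x, dtype=float) for x in (r, g, b))
    return (2 * g - r - b) / (2 * g + r + b + eps)

veg = float(vdvi(60, 120, 50))      # green-dominated (vegetation) pixel
roof = float(vdvi(120, 110, 100))   # grey (roof/ground) pixel
```

This separation between vegetated and built-up pixels is what makes VDVI such a strong attribute for filtering vegetation out of photogrammetric point clouds before building reconstruction.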

16 pages, 3234 KiB  
Article
Emerging Sensor Platforms Allow for Seagrass Extent Mapping in a Turbid Estuary and from the Meadow to Ecosystem Scale
by Johannes R. Krause, Alejandro Hinojosa-Corona, Andrew B. Gray and Elizabeth Burke Watson
Remote Sens. 2021, 13(18), 3681; https://doi.org/10.3390/rs13183681 - 15 Sep 2021
Cited by 7 | Viewed by 2708
Abstract
Seagrass meadows are globally important habitats, protecting shorelines, providing nursery areas for fish, and sequestering carbon. However, both anthropogenic and natural environmental stressors have led to a worldwide reduction in seagrass habitats. For purposes of management and restoration, it is essential to produce accurate maps of seagrass meadows over a variety of spatial scales, resolutions, and at temporal frequencies ranging from months to years. Satellite remote sensing has been successfully employed to produce maps of seagrass in the past, but turbid waters and difficulty in obtaining low-tide scenes pose persistent challenges. This study builds on the increased availability of affordable high temporal frequency imaging platforms, using seasonal unmanned aerial vehicle (UAV) surveys of seagrass extent at the meadow scale, to inform machine learning classifications of satellite imagery of a 40 km2 bay. We find that object-based image analysis is suitable to detect seasonal trends in seagrass extent from UAV imagery and find that trends vary between individual meadows at our study site Bahía de San Quintín, Baja California, México, during our study period in 2019. We further suggest that compositing multiple satellite imagery classifications into a seagrass probability map allows for an estimation of seagrass extent in turbid waters and report that in 2019, seagrass covered 2324 ha of Bahía de San Quintín, indicating a recovery from losses reported for previous decades. Full article
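The compositing step suggested above can be sketched as a per-pixel vote over repeated binary classifications. This is an illustration of the general idea only; the arrays below are toy data, not the study's maps:

```python
import numpy as np

def seagrass_probability(classifications):
    """Composite several independent binary seagrass/no-seagrass maps
    (0/1 arrays from different scenes) into a per-pixel probability by
    averaging. Thresholding the probability gives an extent estimate
    that is robust to any single turbid or mis-classified scene."""
    stack = np.stack(classifications).astype(float)
    return stack.mean(axis=0)

maps = [np.array([[1, 0], [1, 1]]),
        np.array([[1, 0], [0, 1]]),
        np.array([[1, 1], [1, 1]])]
prob = seagrass_probability(maps)
extent = int((prob >= 0.5).sum())   # pixels seagrass in a majority of scenes
```

Pixels flagged in only one of three scenes fall below the 0.5 threshold, so a single turbid-water false positive does not inflate the mapped extent.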

20 pages, 11822 KiB  
Article
Continued Monitoring and Modeling of Xingfeng Solid Waste Landfill Settlement, China, Based on Multiplatform SAR Images
by Yanan Du, Haiqiang Fu, Lin Liu, Guangcai Feng, Debao Wen, Xing Peng and Huaxiang Ding
Remote Sens. 2021, 13(16), 3286; https://doi.org/10.3390/rs13163286 - 19 Aug 2021
Cited by 4 | Viewed by 3302
Abstract
Continued settlement monitoring and modeling of landfills are critical for land redevelopment and safety assurance. This paper adopts a multi-temporal InSAR (MTInSAR) technique for time-series monitoring of the Xingfeng landfill (XFL) settlement. A major challenge is that the frequent and significant settlement in the initial stage after the closure of a landfill can affect the coherence of interferograms, thus hindering the monitoring of settlement by MTInSAR. We analyzed the factors that can directly affect the temporal decorrelation of landfills and adopted a 3D phase unwrapping approach to correct the phase unwrapping errors caused by such deformation gradients. SAR images from four platforms, comprising 50 Sentinel-1A, 12 Radarsat-2, 4 ALOS-2, and 2 TerraSAR-X/TanDEM-X images, were collected to measure the settlement and thickness of the landfill. The settlement accuracy is evaluated by a cross-evaluation between Radarsat-2 and Sentinel-1A, which have similar temporal coverage. We analyzed the spatial characteristics of the settlement and the relationship between settlement and thickness. Further, we modeled the future settlement of the XFL with a hyperbolic function model. The results showed that the coherence in the initial stage after closure of the XFL is primarily affected by temporal decorrelation caused by the considerable deformation gradient rather than by spatial decorrelation. Settlement occurs primarily in the forward slope of the XFL, and the maximum line-of-sight (LOS) settlement rate reached 0.808 m/year from August 2018 to May 2020. The correlation between settlement and thickness is 0.62, indicating an obvious relationship between the two. In addition, the settlement of younger areas is usually greater than that of older areas. Full article
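The hyperbolic settlement model commonly used for landfill prediction has the classic form S(t) = t/(a + b·t), whose parameters can be fitted from a time series via the linearized form t/S = a + b·t. A sketch on synthetic data; the parameter names and values are generic, not the paper's:

```python
import numpy as np

def hyperbolic_settlement(t, a, b):
    """Hyperbolic settlement model: S(t) = t / (a + b*t).
    As t grows, S approaches the ultimate settlement 1/b."""
    return t / (a + b * t)

# Fit a, b to an InSAR-like time series via least squares on the
# linearized form t/S = a + b*t (slope = b, intercept = a).
t = np.array([0.5, 1.0, 1.5, 2.0])            # years since closure
s = hyperbolic_settlement(t, a=0.8, b=1.2)    # synthetic observations
coef = np.polyfit(t, t / s, 1)
b_fit, a_fit = float(coef[0]), float(coef[1])
```

The fitted 1/b then gives the predicted ultimate settlement, which is how such a model extrapolates InSAR-observed settlement into the future.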

18 pages, 2462 KiB  
Article
Machine Learning for Mineral Identification and Ore Estimation from Hyperspectral Imagery in Tin–Tungsten Deposits: Simulation under Indoor Conditions
by Agustin Lobo, Emma Garcia, Gisela Barroso, David Martí, Jose-Luis Fernandez-Turiel and Jordi Ibáñez-Insa
Remote Sens. 2021, 13(16), 3258; https://doi.org/10.3390/rs13163258 - 18 Aug 2021
Cited by 19 | Viewed by 3701
Abstract
This study aims to assess the feasibility of delineating and identifying mineral ores from hyperspectral images of tin–tungsten mine excavation faces using machine learning classification. We compiled a set of hand samples of minerals of interest from a tin–tungsten mine and analyzed two types of hyperspectral images: (1) images acquired with a laboratory set-up under close-to-optimal conditions, and (2) a scan of a simulated mine face using a field set-up, under conditions closer to those in the gallery. We analyzed the following minerals: cassiterite (tin ore), wolframite (tungsten ore), chalcopyrite, malachite, muscovite, and quartz. Classification (Linear Discriminant Analysis, Support Vector Machines, and Random Forest) of laboratory spectra had a very high overall accuracy rate (98%), slightly lower if the 450–950 nm and 950–1650 nm ranges are considered independently, and much lower (74.5%) for simulated conventional RGB imagery. Classification accuracy for the simulation was lower than in the laboratory but still high (85%), likely a consequence of the lower spatial resolution. All three classification methods performed similarly in this case, with Random Forest producing results of slightly higher accuracy. The user’s accuracy for wolframite was 85%, but cassiterite was often confused with wolframite (user’s accuracy: 70%). A lumped ore category achieved 94.9% user’s accuracy. Our study confirms the suitability of hyperspectral imaging to record the spatial distribution of ore mineralization in progressing tungsten–tin mine faces. Full article
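The per-pixel classification setup can be sketched as follows, assuming scikit-learn and synthetic Gaussian-shaped spectra in place of the real hyperspectral data; the mineral labels and band count are placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch: classify per-pixel reflectance spectra into mineral classes with
# a random forest, one of the three methods compared in the study.
rng = np.random.default_rng(0)
n_bands = 50

def make_spectra(center, n=100):
    """Synthetic spectra: a Gaussian-shaped feature plus sensor noise."""
    base = np.exp(-0.5 * ((np.arange(n_bands) - center) / 5.0) ** 2)
    return base + rng.normal(0, 0.05, size=(n, n_bands))

X = np.vstack([make_spectra(10), make_spectra(25), make_spectra(40)])
y = np.repeat(["cassiterite", "wolframite", "quartz"], 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

On real mine-face spectra the classes overlap far more than these clean synthetic curves, which is where the reported confusion between cassiterite and wolframite arises.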

23 pages, 5844 KiB  
Article
Reliable Crops Classification Using Limited Number of Sentinel-2 and Sentinel-1 Images
by Beata Hejmanowska, Piotr Kramarczyk, Ewa Głowienka and Sławomir Mikrut
Remote Sens. 2021, 13(16), 3176; https://doi.org/10.3390/rs13163176 - 11 Aug 2021
Cited by 9 | Viewed by 2570
Abstract
The study presents an analysis of the possible use of a limited number of Sentinel-2 and Sentinel-1 images to check if the crop declarations that EU farmers submit to receive subsidies are true. The declarations used in the research were randomly divided into two independent sets (training and test). Based on the training set, supervised classification of both single images and their combinations was performed using the random forest algorithm in SNAP (ESA) and our own Python scripts. A comparative accuracy analysis was performed on the basis of two forms of confusion matrix (the full confusion matrix commonly used in remote sensing and the binary confusion matrix used in machine learning) and various accuracy metrics (overall accuracy, accuracy, specificity, sensitivity, etc.). The highest overall accuracy (81%) was obtained in the simultaneous classification of multitemporal images (three Sentinel-2 and one Sentinel-1). An unexpectedly high accuracy (79%) was achieved in the classification of one Sentinel-2 image from the end of May 2018. Noteworthy is the fact that the accuracy of the random forest method trained on the entire training set is equal to 80%, while with the sampling method it is ca. 50%. Based on the analysis of various accuracy metrics, it can be concluded that the metrics used in machine learning, for example specificity and accuracy, are always higher than the overall accuracy. These metrics should be used with caution because, unlike the overall accuracy, they count not only true positives but also true negatives as successes, giving the impression of higher accuracy. Correct calculation of overall accuracy values is essential for comparative analyses. Reporting the mean accuracy value for the classes as overall accuracy gives a false impression of high accuracy. In our case, the difference was 10–16% for the validation data, and 25–45% for the test data. Full article
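The caveat above, that per-class binary "accuracy" and specificity count true negatives as successes and therefore sit above the overall accuracy, can be verified on a toy multi-class result (the crop names and predictions are placeholders):

```python
import numpy as np

def binary_metrics(y_true, y_pred, positive):
    """Per-class binary metrics for a multi-class result, illustrating
    why they typically exceed the overall (multi-class) accuracy:
    every correct rejection of the class counts as a true negative."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == positive) & (y_pred == positive))
    tn = np.sum((y_true != positive) & (y_pred != positive))
    fp = np.sum((y_true != positive) & (y_pred == positive))
    fn = np.sum((y_true == positive) & (y_pred != positive))
    return {"accuracy": (tp + tn) / len(y_true),
            "specificity": tn / (tn + fp)}

y_true = ["wheat"] * 3 + ["maize"] * 3 + ["rape"] * 3
y_pred = ["wheat", "wheat", "maize", "maize", "maize", "rape",
          "rape", "rape", "wheat"]
overall = float(np.mean(np.array(y_true) == np.array(y_pred)))  # 6/9
wheat = binary_metrics(y_true, y_pred, "wheat")
```

Here the overall accuracy is 6/9, yet the binary "accuracy" for wheat is 7/9, exactly the inflation the abstract warns about when per-class metrics are reported in place of overall accuracy.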
Other

13 pages, 20527 KiB  
Technical Note
Spatio-Temporal Estimation of Rice Height Using Time Series Sentinel-1 Images
by Huijin Yang, Heping Li, Wei Wang, Ning Li, Jianhui Zhao and Bin Pan
Remote Sens. 2022, 14(3), 546; https://doi.org/10.3390/rs14030546 - 24 Jan 2022
Cited by 7 | Viewed by 3135
Abstract
Rice height, as a fundamental biophysical attribute, is a controlling factor in crop phenology estimation and yield estimation. The aim of this study was to use time-series Sentinel-1A images to estimate the spatio-temporal distribution of rice height. In this study, a particle filter (PF) was applied for the real-time estimation of rice height and compared with a simplified water cloud model (SWCM) on the basis of rice mapping and transplanting date. It was found that the VH backscatter (σ⁰VH) can be applied to estimate rice height more accurately than the VV backscatter (σ⁰VV), the σ⁰VH/σ⁰VV ratio, or the Radar Vegetation Index (RVI, 4σ⁰VH/(σ⁰VH + σ⁰VV)). The results show that rice height estimation by PF produced a better result, with a root-mean-square error (RMSE) of 7.36 cm and a coefficient of determination (R²) of 0.95, than SWCM (RMSE = 12.59 cm and R² = 0.86). Moreover, rice height in the south and east of the study area was greater than in the north and west, because the south and east are nearer to the South China Sea, with higher temperatures and earlier transplanting. Altogether, our results demonstrate the potential of PF and σ⁰VH for studying the spatio-temporal distribution of crop height. As a result, the PF method can contribute greatly to improvements in crop monitoring.
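The polarimetric quantities the abstract compares can be computed directly from Sentinel-1 backscatter, provided the values are first converted from dB to linear power units (a common pitfall). The backscatter values below are hypothetical, not the study's data:

```python
import numpy as np

def db_to_linear(db):
    """Convert backscatter from dB to linear power units."""
    return 10.0 ** (db / 10.0)

# Hypothetical Sentinel-1 backscatter values in dB for three pixels
sigma_vh_db = np.array([-18.0, -15.5, -13.0])
sigma_vv_db = np.array([-10.0,  -9.0,  -8.0])

vh = db_to_linear(sigma_vh_db)
vv = db_to_linear(sigma_vv_db)

ratio = vh / vv                 # cross-/co-polarized ratio
rvi = 4.0 * vh / (vh + vv)      # Radar Vegetation Index, as defined in the abstract

print("VH/VV ratio:", ratio)
print("RVI:        ", rvi)
```

Because vegetation canopies depolarize the signal, both the ratio and the RVI rise as crops grow, which is why they are candidate predictors of rice height alongside σ⁰VH itself.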
11 pages, 3753 KiB  
Technical Note
Vegetation Filtering of a Steep Rugged Terrain: The Performance of Standard Algorithms and a Newly Proposed Workflow on an Example of a Railway Ledge
by Martin Štroner, Rudolf Urban, Martin Lidmila, Vilém Kolář and Tomáš Křemen
Remote Sens. 2021, 13(15), 3050; https://doi.org/10.3390/rs13153050 - 3 Aug 2021
Cited by 23 | Viewed by 3041
Abstract
Point clouds derived using structure from motion (SfM) algorithms from unmanned aerial vehicles (UAVs) are increasingly used in civil engineering practice. This includes areas such as (vegetated) rock outcrops or faces above linear constructions (e.g., railways), where accurate terrain identification, i.e., ground filtering, is highly difficult but, at the same time, important for safety management. In this paper, we evaluated the performance of standard geometrical ground filtering algorithms (a progressive morphological filter (PMF), a simple morphological filter (SMRF), and a cloth simulation filter (CSF)) and a structural filter, CANUPO (CAractérisation de NUages de POints), for ground identification in a point cloud derived by SfM from UAV imagery in such an area (a railway ledge and the adjacent rock face). The performance was evaluated both in the original position and after levelling the point cloud (its transformation into the horizontal plane). The poor results of the geometrical filters (total errors of approximately 6–60%, with PMF performing the worst) and the mediocre result of CANUPO (approximately 4%) led us to combine these complementary approaches, yielding total errors of 1.2% (CANUPO+SMRF) and 0.9% (CANUPO+CSF). This new technique could represent an excellent solution for ground filtering of high-density point clouds of such steep vegetated areas, for example in civil engineering practice.
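The combination the abstract describes can be sketched as a two-stage pipeline: a structural classifier (such as CANUPO) first removes vegetation, and a geometric filter (such as SMRF or CSF) then classifies only the surviving candidates. The sketch below is a minimal illustration of that composition, assuming both filters are exposed as boolean masks; the threshold-based `geo` function is a hypothetical stand-in for a real geometric filter:

```python
import numpy as np

def combine_filters(structural_ground, geometric_filter, points):
    """Sequentially combine two ground filters: the geometric filter is
    applied only to points the structural classifier kept as candidates.
    Returns a boolean ground mask over the full point cloud."""
    candidates = points[structural_ground]
    geo_mask = geometric_filter(candidates)      # boolean mask on candidates
    final = np.zeros(len(points), dtype=bool)
    final[np.flatnonzero(structural_ground)[geo_mask]] = True
    return final

# Toy cloud: four (x, y, z) points
pts = np.array([[0.0, 0.0, 0.1],
                [0.0, 1.0, 2.5],
                [1.0, 0.0, 0.2],
                [1.0, 1.0, 3.0]])

structural = np.array([True, True, True, False])   # stand-in for CANUPO output
geo = lambda p: p[:, 2] < 1.0                      # stand-in for SMRF/CSF

print(combine_filters(structural, geo, pts))
```

A point is kept as ground only if both stages agree, which is why the combined total errors (1.2% and 0.9%) fall below those of either filter alone.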
