
Machine Learning Methods for Image Processing in Remote Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (31 December 2019) | Viewed by 12277

Special Issue Editors


Guest Editor
Department of Information Engineering, Università Politecnica delle Marche, 60121 Ancona, Italy
Interests: computer vision; robotics; machine learning; deep learning

Guest Editor
Università Politecnica delle Marche, Dipartimento di Ingegneria Civile, Edile e dell’Architettura (DICEA), Via Brecce Bianche, 60131 Ancona, Italy
Interests: GIS; geomatics; remote sensing; classification; cultural heritage; photogrammetry

Guest Editor
Dipartimento di Ingegneria dell’Informazione (DII), Università Politecnica delle Marche, 60131 Ancona, Italy
Interests: deep learning; computer vision; geoinformatics

Special Issue Information

Dear Colleagues,

Given the exponentially increasing volume of data produced by satellite and aerial platforms, end users can now exploit a huge number of images of very different natures. The key challenge is their thorough exploitation through fully automated analysis methods, which can now draw on new tools to extract reliable and expressive information. In addition, recent advances in the computer vision domain (namely, the use of artificial intelligence for image classification and segmentation) have shown their potential in the remote sensing domain. The purpose of this Special Issue is to collect research articles proposing innovative solutions for the use of artificial intelligence in the following application domains (including but not limited to):

(1) Land cover/land use analysis (including forestry, building damage, hazard monitoring, and change detection);
(2) Photointerpretation support;
(3) Image segmentation;
(4) Multi/hyperspectral image analysis;
(5) Cartographic feature extraction;
(6) Multi-resolution analysis of aerial/satellite imagery;
(7) Precision agriculture;
(8) GIS applications.

Prof. Dr. Emanuele Frontoni
Prof. Dr. Eva Savina Malinverni
Dr. Marina Paolanti
Dr. Roberto Pierdicca
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing 
  • machine learning
  • image processing 
  • dataset annotation
  • artificial intelligence 
  • pattern recognition
  • multi-resolution analysis

Published Papers (4 papers)


Research

18 pages, 3908 KiB  
Article
Quantifying Physical Activity in Young Children Using a Three-Dimensional Camera
by Aston K. McCullough, Melanie Rodriguez and Carol Ewing Garber
Sensors 2020, 20(4), 1141; https://doi.org/10.3390/s20041141 - 19 Feb 2020
Cited by 3 | Viewed by 2401
Abstract
The purpose of this study was to determine the feasibility and validity of using three-dimensional (3D) video data and computer vision to estimate physical activity intensities in young children. Families with children (2–5 years old) were invited to participate in semi-structured 20-minute play sessions that included a range of indoor play activities. During the play session, children’s physical activity (PA) was recorded using a 3D camera. PA video data were analyzed via direct observation, and 3D PA video data were processed and converted into triaxial PA accelerations using computer vision. PA video data from children (n = 10) were analyzed using direct observation as the ground truth, and the Receiver Operating Characteristic Area Under the Curve (AUC) was calculated to determine the classification accuracy of a Classification and Regression Tree (CART) algorithm for estimating PA intensity from video data. The CART algorithm accurately estimated the proportion of time that children spent sedentary (AUC = 0.89), in light PA (AUC = 0.87), and in moderate-vigorous PA (AUC = 0.92) during the play session, and there were no significant differences (p > 0.05) between the directly observed and CART-determined proportions of time spent in each activity intensity. A computer vision algorithm and 3D camera can be used to estimate the proportion of time that children spend in all activity intensities indoors.
(This article belongs to the Special Issue Machine Learning Methods for Image Processing in Remote Sensing)
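A minimal sketch of the evaluation step described above, assuming per-window triaxial acceleration features and direct-observation intensity labels are already extracted (the 3D video processing itself is not shown); scikit-learn's DecisionTreeClassifier stands in for the CART algorithm, and the placeholder data, window counts, and tree depth are illustrative assumptions:

```python
# Illustrative sketch: CART classification of PA intensity with per-class AUCs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART-style decision tree
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))      # placeholder per-window triaxial accelerations
y = rng.integers(0, 3, size=600)   # placeholder labels: 0 sedentary, 1 light, 2 MVPA

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
cart = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)

# One-vs-rest ROC AUC per intensity class, mirroring the per-class AUCs reported.
proba = cart.predict_proba(X_te)
for cls, name in enumerate(["sedentary", "light PA", "moderate-vigorous PA"]):
    auc = roc_auc_score((y_te == cls).astype(int), proba[:, cls])
    print(f"{name}: AUC = {auc:.2f}")
```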

20 pages, 35765 KiB  
Article
One View Per City for Buildings Segmentation in Remote-Sensing Images via Fully Convolutional Networks: A Proof-of-Concept Study
by Jianguang Li, Wen Li, Cong Jin, Lijuan Yang and Hui He
Sensors 2020, 20(1), 141; https://doi.org/10.3390/s20010141 - 24 Dec 2019
Cited by 3 | Viewed by 3099
Abstract
The segmentation of buildings in remote-sensing (RS) images plays an important role in monitoring landscape changes. Quantification of these changes can be used to balance economic and environmental benefits and, most importantly, to support sustainable urban development. Deep learning has been advancing the techniques for RS image analysis, but it requires a large-scale data set for hyper-parameter optimization. To address this issue, the concept of “one view per city” is proposed: a single RS image is used to fit the model parameters, and the trained model then handles the remaining images of the same city. This concept stems from the observation that buildings of the same city in single-source RS images exhibit similar intensity distributions. To verify its feasibility, a proof-of-concept study is conducted in which five fully convolutional networks are evaluated on five cities in the Inria Aerial Image Labeling database. Experimental results suggest that the concept can be exploited to decrease the number of images required for model training, enabling competitive building-segmentation performance with reduced time consumption. Based on model optimization and universal image representation, there is considerable potential to improve the segmentation performance, to enhance the generalization capacity, and to extend the application of the concept in RS image analysis.
(This article belongs to the Special Issue Machine Learning Methods for Image Processing in Remote Sensing)
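A minimal sketch of the "one view per city" training idea, assuming patches sampled from a single annotated image suffice to fit a model that is then applied to the rest of the city; the tiny network, patch size, and placeholder tensors below are illustrative assumptions, not the five FCNs evaluated in the paper:

```python
# Illustrative sketch: train a tiny FCN on patches from one annotated city image.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Fully convolutional net producing a per-pixel building logit."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
    def forward(self, x):
        return self.net(x)

def random_patches(image, mask, n=8, size=128):
    """Sample training patches from the single 'one view' image."""
    _, h, w = image.shape
    for _ in range(n):
        top = torch.randint(0, h - size, (1,)).item()
        left = torch.randint(0, w - size, (1,)).item()
        yield (image[:, top:top + size, left:left + size],
               mask[:, top:top + size, left:left + size])

image = torch.rand(3, 512, 512)                  # placeholder single-city RS image
mask = (torch.rand(1, 512, 512) > 0.5).float()   # placeholder building mask

model = TinyFCN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):                           # fit on the one annotated view
    for patch, target in random_patches(image, mask):
        opt.zero_grad()
        loss = loss_fn(model(patch.unsqueeze(0)), target.unsqueeze(0))
        loss.backward()
        opt.step()
# The trained model would then be applied to the remaining images of the city.
```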

12 pages, 2327 KiB  
Article
Improved Classification Method Based on the Diverse Density and Sparse Representation Model for a Hyperspectral Image
by Na Li, Ruihao Wang, Huijie Zhao, Mingcong Wang, Kewang Deng and Wei Wei
Sensors 2019, 19(24), 5559; https://doi.org/10.3390/s19245559 - 16 Dec 2019
Cited by 1 | Viewed by 2288
Abstract
To solve the small sample size (SSS) problem in hyperspectral image classification, a novel classification method based on diverse density and sparse representation (NCM_DDSR) is proposed. In the proposed method, dictionary atoms learned from the diverse density model are used to address the noise interference problems of spectral features, and an improved matching pursuit model is presented to obtain the sparse coefficients. Airborne hyperspectral data collected by the push-broom hyperspectral imager (PHI) and the airborne visible/infrared imaging spectrometer (AVIRIS) are used to evaluate the performance of the proposed classification method. Results show that the overall accuracies of the proposed model for the classification of PHI and AVIRIS images reach 91.59% and 92.83%, respectively, and the kappa coefficients reach 0.897 and 0.91.
(This article belongs to the Special Issue Machine Learning Methods for Image Processing in Remote Sensing)
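A minimal sketch of the sparse-representation classification step, assuming class-wise sub-dictionaries are already given (the diverse-density dictionary learning of NCM_DDSR is not reproduced here); scikit-learn's OrthogonalMatchingPursuit stands in for the improved matching pursuit model, and the class is chosen by the smallest reconstruction residual:

```python
# Illustrative sketch: sparse-representation classification by residual.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, dictionaries, n_nonzero=5):
    """Return the class whose sub-dictionary best reconstructs spectrum x."""
    residuals = []
    for D in dictionaries:  # D: (n_bands, n_atoms) sub-dictionary of one class
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False)
        omp.fit(D, x)
        residuals.append(np.linalg.norm(x - D @ omp.coef_))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
n_bands = 200                                      # placeholder spectral bands
dicts = [rng.normal(size=(n_bands, 20)) for _ in range(3)]  # 3 placeholder classes

coef_true = np.zeros(20)
coef_true[[2, 7]] = [1.0, -0.5]                    # sparse code over class-1 atoms
x = dicts[1] @ coef_true + rng.normal(scale=0.01, size=n_bands)
print("predicted class:", src_classify(x, dicts))  # expected: 1
```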

16 pages, 4016 KiB  
Article
Hough Transform and Clustering for a 3-D Building Reconstruction with Tomographic SAR Point Clouds
by Hui Liu, Lei Pang, Fang Li and Ziye Guo
Sensors 2019, 19(24), 5378; https://doi.org/10.3390/s19245378 - 5 Dec 2019
Cited by 9 | Viewed by 3451
Abstract
Tomographic synthetic aperture radar (TomoSAR) produces 3-D point clouds with unavoidable noise or false targets that seriously deteriorate the quality of 3-D images and of building reconstruction over urban areas. In this paper, a Hough transform was adopted to detect the outline of a building. However, the outline obtained with the Hough transform was broken, and some of the broken lines belonged to the same segment of the building outline while their line parameters differed slightly, so a single segment of the outline ended up being represented by multiple different parameters. Therefore, an unsupervised clustering method was employed to cluster these line parameters: lines gathered in the same cluster were considered to correspond to the same segment of the building outline, the different parameters of that segment were merged into one, and the continuous outline of the building was thus recovered from the point cloud. The steps of the proposed data processing method were as follows. First, the Hough transform was used to detect lines on the tomography plane in the TomoSAR point cloud; these detected lines lay on the outline of the building but were broken due to the density variation of the point cloud. Second, the detected lines were grouped into a data set and unsupervised clustering was used to classify them into several clusters. The cluster number was determined automatically by the clustering algorithm, yielding the number of straight segments of the building edge; within each cluster, which represents one segment of the building edge, a repaired straight line was constructed. Third, between each two adjacent segments of the building outline, the joint point was estimated by extending the two segments, so that the building outline was recovered as completely as possible. Finally, taking the estimated building outline as the clustering center, a supervised learning algorithm was used to separate the building points from the noise (or false targets), refining the building point cloud. The refined and unrefined data were then fed into a neural network for 3-D building reconstruction. The comparison results show the correctness and effectiveness of the improved method.
(This article belongs to the Special Issue Machine Learning Methods for Image Processing in Remote Sensing)
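A minimal sketch of the outline-repair idea, assuming the point cloud has already been rasterized onto the tomography plane; scikit-image's Hough transform detects the broken fragments, and DBSCAN stands in for the paper's unsupervised clustering of line parameters (the raster, fragment layout, and eps value are illustrative assumptions):

```python
# Illustrative sketch: merge broken Hough lines by clustering their parameters.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks
from sklearn.cluster import DBSCAN

img = np.zeros((200, 200), dtype=bool)
img[50, 20:90] = True     # one fragment of a (nearly) straight building edge
img[53, 110:180] = True   # second fragment of the same edge, slightly offset

# Hough transform: each fragment yields its own (theta, rho) peak.
hspace, thetas, dists = hough_line(img)
_, peak_thetas, peak_dists = hough_line_peaks(hspace, thetas, dists,
                                              min_distance=1, min_angle=1)

# Cluster line parameters; fragments of one edge share similar (theta, rho).
params = np.column_stack([peak_thetas, peak_dists])
labels = DBSCAN(eps=5.0, min_samples=1).fit_predict(params)

# One repaired line per cluster: the mean of its members' parameters.
for k in np.unique(labels):
    theta, rho = params[labels == k].mean(axis=0)
    print(f"merged line {k}: theta = {theta:.3f} rad, rho = {rho:.1f} px")
```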
