Topic Editors

State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
Department of Geography, Environment and Society, University of Minnesota, Twin Cities, MN 55455, USA
State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China

Geocomputation and Artificial Intelligence for Mapping

Abstract submission deadline
closed (31 October 2023)
Manuscript submission deadline
31 December 2023
Viewed by
9200

Topic Information

Dear Colleagues,

In the era of big data, massive new data sources create both opportunities and challenges for mapping. With the rapid development of geocomputation and AI, remote sensing (RS) mapping theory, deep learning models, and programming frameworks are advancing research across geospatial mapping. In particular, the introduction of deep learning models and frameworks has greatly improved the accuracy and efficiency of geospatial mapping. However, new technology brings new challenges along with new opportunities. Why are geocomputation and artificial intelligence needed for mapping? Can specific problems be solved better with artificial intelligence techniques than with traditional methods? Why does cartography need artificial intelligence, and how can artificial intelligence technology improve the speed and accuracy of RS mapping? What new directions can we expect AI techniques to open in the broader fields of mapping and cartographic generalization?

The aim of this Topic is to provide an opportunity to explore these challenges in remote sensing mapping using computer vision, deep learning, and artificial intelligence. Topics may cover, but are not limited to, the following: object detection, change detection, map style transfer, and automated workflows for map generalization and mapping.

  • Mapping object information extraction from remote sensing and street view imagery;
  • Automatic extraction of map symbols and text annotations on maps and imagery;
  • Change detection and mapping based on artificial intelligence;
  • Artificial intelligence for RS Mapping;
  • Object recognition through artificial intelligence techniques;
  • Cartographic relief shading with neural networks;
  • Map style transfer using generative adversarial networks;
  • Integration of artificial intelligence and map design;
  • Automated workflow of cartographic generalization;
  • Spatially explicit neural networks for GeoAI applications;
  • AI mapping of urban socioeconomic patterns;
  • Intelligent spatial analytics for Earth process modeling and RS mapping.

Dr. Lili Jiang
Dr. Di Zhu
Dr. An Zhang
Topic Editors

Keywords

  • artificial intelligence
  • deep learning
  • AI for mapping
  • map style transfer
  • spatial patterns
  • GeoAI
  • map design
  • remote sensing mapping
  • object detection

Participating Journals

Journal (abbreviation) | Impact Factor | CiteScore | Launched | First Decision (median) | APC
Geomatics (geomatics) | - | - | 2021 | 18.1 days | CHF 1000
ISPRS International Journal of Geo-Information (ijgi) | 3.4 | 6.2 | 2012 | 35.2 days | CHF 1700
Remote Sensing (remotesensing) | 5.0 | 7.9 | 2009 | 21.1 days | CHF 2700

Preprints.org is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (10 papers)

21 pages, 23946 KiB  
Article
A Deep-Learning-Based Multimodal Data Fusion Framework for Urban Region Function Recognition
ISPRS Int. J. Geo-Inf. 2023, 12(12), 468; https://doi.org/10.3390/ijgi12120468 - 21 Nov 2023
Viewed by 378
Abstract
Accurate and efficient classification maps of urban functional zones (UFZs) are crucial to urban planning, management, and decision making. Owing to the complex socioeconomic properties of UFZs, it is increasingly challenging to identify urban functional zones using remote-sensing images (RSIs) alone. Point-of-interest (POI) data and remote-sensing image data both play important roles in UFZ extraction. However, many existing methods use only a single type of data or simply combine the two, failing to exploit their complementary strengths. We therefore designed a deep-learning framework that integrates the two data types to identify urban functional areas. In the first part, the complementary feature-learning and fusion module, we use a convolutional neural network (CNN) to extract visual and social features. Specifically, we extract visual features from the RSI data, while the POI data are converted into a distance heatmap tensor that is fed into a CNN with gated attention mechanisms to extract social features. A feature fusion module (FFM) with adaptive weights then fuses the two types of features. The second part is the spatial-relationship-modeling module: a new spatial-relationship-learning network based on a vision transformer with long- and short-distance attention, which simultaneously learns the global and local spatial relationships of the urban functional zones. Finally, a feature aggregation module (FGM) utilizes the two spatial relationships efficiently. The experimental results show that the proposed model fully extracts visual features, social features, and spatial relationship features from RSIs and POIs for more accurate UFZ recognition. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
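The adaptive-weight fusion step described in the abstract can be sketched in plain Python. This is a hypothetical simplification: the paper's FFM learns its gate weights end to end during training, whereas here the weights come from a fixed softmax over mean feature magnitudes.

```python
import math

def adaptive_fusion(visual, social):
    """Fuse a visual and a social feature vector with adaptive weights.

    Hypothetical sketch: the weights are a softmax over mean feature
    magnitudes; the paper's FFM learns them end to end instead.
    """
    mv = sum(abs(x) for x in visual) / len(visual)
    ms = sum(abs(x) for x in social) / len(social)
    ev, es = math.exp(mv), math.exp(ms)
    wv, ws = ev / (ev + es), es / (ev + es)
    # Weighted element-wise sum of the two feature vectors
    return [wv * v + ws * s for v, s in zip(visual, social)]
```

With equal inputs the two weights are both 0.5 and the fusion is an average; a stronger visual signal shifts the fused vector toward the visual features.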

16 pages, 14995 KiB  
Article
Automatic Detection and Mapping of Dolines Using U-Net Model from Orthophoto Images
ISPRS Int. J. Geo-Inf. 2023, 12(11), 456; https://doi.org/10.3390/ijgi12110456 - 07 Nov 2023
Viewed by 545
Abstract
A doline is a natural closed depression formed as a result of karstification, and it is the most common landform in karst areas. These depressions damage living areas and engineering structures, and such collapse events create natural hazards for human safety, agricultural activities, and the economy. It is therefore important to detect dolines and reveal their properties. In this study, a solution that automatically detects dolines is proposed and applied in a region with many dolines in the northwestern part of Sivas City, Turkey. A U-Net model with transfer learning was used for this task: ResNet34, EfficientNetB3, and DenseNet121 were tested as backbones with the U-Net model, and DenseNet121 gave the best results for segmenting the dolines. The Intersection over Union (IoU) and F-score were used as model evaluation metrics; the DenseNet121 model reached an IoU of 0.78 and an F-score of 0.87 on the test data. Dolines were successfully predicted for the selected test area, and the results were converted into a georeferenced vector file. Doline inventory maps can be created easily and quickly using this method, and the results can be used in geomorphology, susceptibility, and site selection studies. In addition, this method can be used to segment other landforms in earth science studies. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
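The IoU and F-score used to evaluate the doline masks follow directly from the true-positive, false-positive, and false-negative counts; a minimal sketch for flat binary masks:

```python
def iou_and_f1(pred, truth):
    """IoU and F-score for two flat binary masks (lists of 0/1)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return iou, f1
```

The two metrics are monotonically related (F1 = 2·IoU / (1 + IoU)), which is why the paper's F-score of 0.87 sits above its IoU of 0.78.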

24 pages, 14625 KiB  
Article
Semantic Segmentation of Urban Airborne LiDAR Point Clouds Based on Fusion Attention Mechanism and Multi-Scale Features
Remote Sens. 2023, 15(21), 5248; https://doi.org/10.3390/rs15215248 - 05 Nov 2023
Viewed by 680
Abstract
Semantic segmentation of the point clouds provided by airborne LiDAR surveys of urban scenes is a great challenge, because points at the boundaries of different object types are easily mixed and geometrically similar, and because 3D descriptions of the same type of object occur at different scales. To address these problems, a fusion attention convolutional network (SMAnet) was proposed in this study. The fusion attention module includes a self-attention module (SAM) and a multi-head attention module (MAM). The SAM captures feature information from the correlation of adjacent points and can effectively distinguish mixed point clouds with similar geometric features. The MAM strengthens connections among points according to different subspace features, which helps distinguish point clouds at different scales. For feature extraction, lightweight multi-scale feature extraction layers make effective use of local information from different neighborhoods. Additionally, to alleviate the feature externalization problem and expand the network's receptive field, a SoftMax-stochastic pooling (SSP) algorithm is proposed to extract global features. The ISPRS 3D Semantic Labeling Contest dataset was chosen for the point cloud segmentation experiments. Results showed that the overall accuracy and average F1-score of SMAnet reach 85.7% and 75.1%, respectively, exceeding current common algorithms. The proposed model also achieved good results on the GML(B) dataset, which demonstrates good generalization ability. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
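The two headline metrics above, overall accuracy and the macro-averaged F1-score, can be computed from predicted and reference label lists; a minimal sketch:

```python
def oa_and_avg_f1(pred, truth):
    """Overall accuracy and macro-averaged F1 for flat label lists."""
    n = len(truth)
    oa = sum(1 for p, t in zip(pred, truth) if p == t) / n
    f1s = []
    for c in sorted(set(truth)):
        tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, truth) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, truth) if p != c and t == c)
        f1s.append(2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0)
    return oa, sum(f1s) / len(f1s)
```

Macro-averaging weighs every class equally, so rare classes (frequent in urban LiDAR scenes) pull the average F1 below the overall accuracy, as in the 85.7% vs. 75.1% reported above.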

17 pages, 7646 KiB  
Article
Large-Scale Automatic Identification of Industrial Vacant Land
ISPRS Int. J. Geo-Inf. 2023, 12(10), 409; https://doi.org/10.3390/ijgi12100409 - 05 Oct 2023
Viewed by 708
Abstract
Many cities worldwide have large amounts of industrial vacant land (IVL) due to development and transformation, posing a growing problem. However, large-scale identification of IVL is hindered by obstacles such as high cost, high variability, and closed-source data. Moreover, it is difficult to distinguish industrial vacant land from operational industrial land based solely on image features. To address these issues, we propose a method for the large-scale automatic identification of IVL. The framework uses deep learning to train a semantic segmentation model on remote-sensing images of potential industrial vacant land, and then filters the model predictions using population density and surface temperature data. The feasibility of the proposed methodology was validated through a case study in Tangshan City, Hebei Province, China. The study indicates two major conclusions: (1) the proposed IVL identification framework can efficiently generate industrial vacant land maps; (2) after training, HRNet exhibits the highest accuracy and strongest robustness among the compared semantic segmentation backbones, ensuring high-quality, stable performance, as evidenced by a model accuracy of 97.84%. Given these advantages, the identification framework provides a reference method for countries and regions to identify industrial vacant land on a large scale, which is of great significance for advancing the research and transformation of industrial vacant land worldwide. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
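The post-classification filtering step can be sketched as a simple threshold test over candidate parcels. The thresholds and the direction of the temperature test are hypothetical (assuming vacant parcels show low population density and low surface temperature); the paper's exact criteria may differ.

```python
def filter_ivl(candidates, pop_density, surface_temp,
               pop_max=50.0, temp_max=30.0):
    """Keep candidate parcels whose population density and land-surface
    temperature are both low, taken here as signatures of vacancy.

    Thresholds (pop_max, temp_max) are illustrative assumptions only.
    """
    return [c for c in candidates
            if pop_density[c] <= pop_max and surface_temp[c] <= temp_max]
```

Combining segmentation output with such ancillary layers is what lets the framework reject operational industrial land that looks identical in imagery.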

22 pages, 9732 KiB  
Article
Gated Recurrent Unit Embedded with Dual Spatial Convolution for Long-Term Traffic Flow Prediction
ISPRS Int. J. Geo-Inf. 2023, 12(9), 366; https://doi.org/10.3390/ijgi12090366 - 05 Sep 2023
Viewed by 660
Abstract
Considering the spatial and temporal correlation of traffic flow data is essential for improving the accuracy of traffic flow prediction. This paper proposes a traffic flow prediction model named Dual Spatial Convolution Gated Recurrent Unit (DSC-GRU). In particular, the GRU is embedded with the DSC unit to enable the model to capture spatiotemporal dependence synchronously. When considering spatial correlation, current prediction models consider only nearest-neighbor spatial features and ignore or simply overlay global spatial features. The DSC unit models adjacent spatial dependence with the traditional static graph and global spatial dependence through a novel dependency graph, which is generated by computing the correlation coefficient between nodes. Moreover, the DSC unit quantifies the different contributions of adjacent and global spatial correlation with a modified gating mechanism. Experimental results on two real-world datasets show that the DSC-GRU model can effectively capture the spatiotemporal dependence of traffic data, with prediction precision better than baseline and state-of-the-art models. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
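The dependency graph built from correlation coefficients can be sketched directly: connect two sensor nodes whenever the Pearson correlation of their flow series exceeds a threshold. The threshold value is an assumption for illustration; the paper does not fix one here.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def dependency_graph(series, threshold=0.8):
    """Adjacency matrix linking nodes whose flow series correlate strongly."""
    n = len(series)
    return [[1 if i != j and abs(pearson(series[i], series[j])) >= threshold
             else 0 for j in range(n)]
            for i in range(n)]
```

Unlike the static road-network graph, this graph can link distant nodes with similar traffic rhythms, which is the "global" dependence the DSC unit exploits.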

22 pages, 7358 KiB  
Article
Multi-Objective Multi-Satellite Imaging Mission Planning Algorithm for Regional Mapping Based on Deep Reinforcement Learning
Remote Sens. 2023, 15(16), 3932; https://doi.org/10.3390/rs15163932 - 08 Aug 2023
Viewed by 659
Abstract
Satellite imaging mission planning is used to optimize how satellites obtain target images efficiently. Many evolutionary algorithms (EAs) have been proposed for satellite mission planning. EAs typically require evolutionary parameters, such as the crossover and mutation rates, and their performance is considerably affected by parameter settings. However, in most current EAs the parameters are configured manually, without jointly considering multiple parameters; the configuration therefore becomes suboptimal and the EAs cannot be used effectively. To obtain satisfactory optimization results, an EA compensates by extending the number of generations or improving the evolutionary strategy, which significantly increases computational cost. In this study, a multi-objective learning evolutionary algorithm (MOLEA) was proposed to solve the optimal configuration problem of multiple evolutionary parameters and applied to imaging satellite task planning for regional mapping. In the MOLEA, population state encoding provides comprehensive population information for configuring the evolutionary parameters. The evolutionary parameters of each generation are configured autonomously through deep reinforcement learning (DRL), enabling each generation's parameters to yield the best evolutionary benefit for future evolution. Furthermore, the hypervolume (HV) of the multi-objective evolutionary algorithm (MOEA) was used to guide the reinforcement learning. The superiority of the proposed MOLEA was verified by comparing its optimization performance, stability, and running time with existing multi-objective optimization algorithms, using four satellites to image two regions of Hubei and Congo (K). The experimental results showed that the optimization performance of the MOLEA was significantly improved, and better imaging satellite task planning solutions were obtained. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
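The hypervolume (HV) indicator that guides the reinforcement learning can be computed exactly in two dimensions by summing the rectangles between a sorted Pareto front and a reference point; a minimal sketch for minimization problems:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front for a minimization problem.

    `front` holds mutually non-dominated (f1, f2) points and `ref` is a
    reference point dominated by all of them; a larger HV means a better
    front.
    """
    pts = sorted(front)  # ascending in the first objective
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        # Rectangle dominated exclusively up to the next point's f1
        hv += (next_x - x) * (ref[1] - y)
    return hv
```

Because HV rewards both convergence and spread of the front, it is a natural scalar reward signal for a learning agent that tunes evolutionary parameters.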

18 pages, 5135 KiB  
Article
Research on SUnet Winter Wheat Identification Method Based on GF-2
Remote Sens. 2023, 15(12), 3094; https://doi.org/10.3390/rs15123094 - 13 Jun 2023
Cited by 4 | Viewed by 825
Abstract
Introduction: Winter wheat plays a crucial role in ensuring food security and sustainable agriculture. Accurate identification and recognition of winter wheat in remote sensing images are essential for monitoring crop growth and yield estimation. In recent years, attention-based convolutional neural networks have shown promising results in various image recognition tasks. Therefore, this study aims to explore the application of attention-based convolutional neural networks for winter wheat identification on GF-2 high-resolution images and propose improvements to enhance recognition accuracy. Method: This study built a multi-band winter wheat sample dataset based on GF-2 images. To highlight the characteristics of winter wheat, two bands, NDVI and NDVIincrease, were added to the dataset, and an SUNet network model was proposed. A batch normalization layer was added to the basic UNet structure to speed up network convergence and improve accuracy. In the skip connections, shuffle attention was applied to the shallow features extracted by the encoder for feature optimization, and the result was concatenated with the deep features produced by upsampling. SUNet thus makes the network attend to the important features, improving winter wheat recognition accuracy. To overcome the sample imbalance problem, the focal loss function was used instead of the traditional cross-entropy loss function. Result: The experimental data show that the mean intersection over union, overall classification accuracy, recall, F1 score, and kappa coefficient are 0.9514, 0.9781, 0.9707, 0.9663 and 0.9501, respectively, better than those of the other compared methods. Compared with UNet, these indicators increased by 0.0253, 0.0118, 0.021, 0.0185, and 0.0272, respectively.
Conclusion: The SUNet network can effectively improve winter wheat recognition accuracy in multi-band GF-2 images. Furthermore, with the support of a cloud platform, it can provide data guarantee and computing support for winter wheat information extraction. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
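The focal loss used above to handle sample imbalance can be written for the binary case in a few lines. This is a generic sketch of the standard formulation, not the paper's exact implementation; the default gamma and alpha values are common choices, not values taken from the study.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction p in (0, 1) and label y in {0, 1}.

    With gamma = 0 and alpha = 0.5 this reduces to half the usual
    cross-entropy; larger gamma down-weights easy, well-classified samples.
    """
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balancing weight
    return -a * (1.0 - pt) ** gamma * math.log(pt)
```

The `(1 - pt) ** gamma` factor is what shifts the training signal toward the scarce, hard-to-classify pixels, which is why it helps with imbalanced wheat/background samples.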

22 pages, 14614 KiB  
Article
Mean Inflection Point Distance: Artificial Intelligence Mapping Accuracy Evaluation Index—An Experimental Case Study of Building Extraction
Remote Sens. 2023, 15(7), 1848; https://doi.org/10.3390/rs15071848 - 30 Mar 2023
Viewed by 1067
Abstract
Mapping is a fundamental application of remote sensing images, and accurate evaluation of remote sensing image information extraction using artificial intelligence is critical. However, the existing evaluation method, based on Intersection over Union (IoU), is limited in evaluating the boundary accuracy of the extracted information and is insufficient for determining mapping accuracy. Furthermore, traditional remote sensing mapping methods struggle to match the inflection points produced by artificial intelligence contour extraction. To address these issues, we propose the mean inflection point distance (MPD) as a new segmentation evaluation method. MPD calculates error values accurately and handles the multiple inflection points that traditional remote sensing mapping cannot match. We tested three algorithms on the Vaihingen dataset: Mask R-CNN, Swin Transformer, and PointRend. The results show that MPD is highly sensitive to mapping accuracy, calculates error values accurately, and applies across different scales of mapping accuracy while maintaining high visual consistency. This study helps to assess the accuracy of automatic mapping using remote sensing artificial intelligence. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
(This article belongs to the Section AI Remote Sensing)
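One plausible reading of such a boundary metric is the mean distance from each predicted inflection point to its nearest ground-truth inflection point. The sketch below is a simplified, one-directional version under that assumption; the paper's actual MPD matching rules may differ in detail.

```python
import math

def mean_inflection_point_distance(pred_pts, true_pts):
    """Mean distance from each predicted inflection point to its nearest
    ground-truth inflection point.

    Simplified one-directional sketch of an MPD-style boundary metric;
    not the paper's exact definition.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return sum(min(dist(p, q) for q in true_pts)
               for p in pred_pts) / len(pred_pts)
```

Unlike IoU, which saturates once regions mostly overlap, a vertex-distance metric keeps penalizing boundary error, which is the sensitivity the abstract highlights.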

17 pages, 3626 KiB  
Article
The Effect of Negative Samples on the Accuracy of Water Body Extraction Using Deep Learning Networks
Remote Sens. 2023, 15(2), 514; https://doi.org/10.3390/rs15020514 - 15 Jan 2023
Cited by 1 | Viewed by 1228
Abstract
Water resources are important strategic resources related to human survival and development. Water body extraction from remote sensing images is an important research topic for monitoring global and regional surface water changes. Deep learning networks are among the most effective approaches, and training data are indispensable for ensuring that a network extracts water bodies accurately. The training data for water body extraction include water body samples and non-water negative samples. Cloud shadows are essential negative samples because of their high similarity to water bodies, yet few studies have quantitatively evaluated the impact of cloud shadow samples on water body extraction accuracy. Therefore, training datasets with different proportions of cloud shadows were produced, each including two types of cloud shadow samples: manually labeled cloud shadows and unlabeled cloud shadows. The training datasets were applied to a novel transformer-based water body extraction network to investigate how the negative samples affect its accuracy. The trained network achieved an Overall Accuracy (OA) of 0.9973, a mean Intersection over Union (mIoU) of 0.9753, and a Kappa of 0.9747, and it was found that when the training dataset contains a certain proportion of cloud shadows, the network handles the misclassification of cloud shadows well and extracts water bodies more accurately. Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
(This article belongs to the Section AI Remote Sensing)
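The Kappa statistic reported above corrects raw pixel agreement for the agreement expected by chance, which matters when one class (non-water) dominates; a minimal sketch:

```python
def cohens_kappa(pred, truth):
    """Cohen's kappa between predicted and reference label lists:
    observed agreement corrected for chance agreement from the marginals."""
    n = len(truth)
    po = sum(1 for p, t in zip(pred, truth) if p == t) / n
    labels = set(pred) | set(truth)
    pe = sum((sum(1 for p in pred if p == c) / n) *
             (sum(1 for t in truth if t == c) / n) for c in labels)
    return (po - pe) / (1 - pe)
```

On a scene that is almost entirely non-water, overall accuracy can look high by default, while kappa only approaches 1 when the network genuinely separates water from lookalikes such as cloud shadows.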

23 pages, 7463 KiB  
Article
Supervised versus Semi-Supervised Urban Functional Area Prediction: Uncertainty, Robustness and Sensitivity
Remote Sens. 2023, 15(2), 341; https://doi.org/10.3390/rs15020341 - 06 Jan 2023
Viewed by 1563
Abstract
To characterize community-scale urban functional areas using geo-tagged data and available land-use information, several supervised and semi-supervised models are presented and evaluated in Hong Kong to compare their uncertainty, robustness and sensitivity. The following results are noted: (i) As the training set grows, model accuracy improves, particularly for the multi-layer perceptron (MLP) and random forest (RF); the graph convolutional network (GCN) is the most accurate when training samples cover less than 10% of the functional areas, while MLP or RF is the most accurate above that share; (ii) with a large number of training samples, MLP shows the highest prediction accuracy and good cross-validation performance, but less stability on the same training sets; (iii) with a small number of training samples, GCN provides viable results by incorporating the auxiliary information in the proposed semantic linkages, which is meaningful in real-world predictions; (iv) when training samples are below 10%, one should be cautious when tuning MLP's epochs for the best accuracy, due to its overfitting problem. These insights could support efficient and scalable urban functional area mapping, even with insufficient land-use information (e.g., covering only ~20% of Beijing in the case study). Full article
(This article belongs to the Topic Geocomputation and Artificial Intelligence for Mapping)
(This article belongs to the Section AI Remote Sensing)
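The idea of spreading scarce labels through semantic linkages, which gives GCN its edge in the low-sample regime, can be illustrated with a basic label-propagation sketch. This is an illustrative stand-in under stated assumptions, not the paper's GCN: labeled nodes stay fixed while unlabeled nodes repeatedly average their neighbors' values.

```python
def propagate_labels(adj, seed, steps=20):
    """Propagate known node labels over a graph by iterative neighbor
    averaging; seeded nodes keep their known values.

    `adj` maps node -> list of neighbors, `seed` maps node -> known label.
    Simplified stand-in for graph convolution, for illustration only.
    """
    values = {node: seed.get(node, 0.5) for node in adj}
    for _ in range(steps):
        new = {}
        for node, nbrs in adj.items():
            if node in seed:
                new[node] = seed[node]          # labeled nodes are fixed
            else:
                new[node] = sum(values[m] for m in nbrs) / len(nbrs)
        values = new
    return values
```

Even this toy version shows why graph structure helps when labels are scarce: an unlabeled functional area inherits a plausible label from its semantically linked neighbors.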
