Big Data Computing for Geospatial Applications

A special issue of ISPRS International Journal of Geo-Information (ISSN 2220-9964).

Deadline for manuscript submissions: closed (31 May 2020) | Viewed by 58929

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors

Guest Editor
Department of Geography, University of North Carolina at Charlotte, Charlotte, NC 28223-0001, USA
Interests: geographic information science; spatial cyberinfrastructure; agent-based modeling; land use and land cover change; complex adaptive spatial systems

Guest Editor
Department of Geography, University of Wisconsin-Madison, Madison, WI 53706-1491, USA
Interests: spatial big data analytics and mining; cloud computing, distributed computing, and high-performance computing; remote sensing; natural hazards

Guest Editor
Department of Geography, Environment, and Society, University of Minnesota, Minneapolis, MN, USA
Interests: geographic information science; cyberGIS; geocomputing; big data analytics and modeling; social media data analytics

Guest Editor
School of Geography and Information Engineering & National Engineering Research Center of GIS, China University of Geosciences, Wuhan, China
Interests: big geospatial data; GeoAI; high-performance GeoComputation; spatiotemporal modeling; land-use and land-cover change; urban informatics

Special Issue Information

Dear Colleagues,

Earth observation systems and model simulations are generating massive volumes of disparate, dynamic, and geographically distributed geospatial data at increasingly fine spatiotemporal resolutions. Meanwhile, the proliferation of smart devices and social media provides extensive geo-information about daily life activities. Efficiently analyzing these geospatial big data streams enables us to investigate unknown and complex patterns and to develop new decision-support systems, thus providing unprecedented value for business, science, and engineering.

However, handling the "Vs" (volume, variety, velocity, veracity, and value) of big data is a challenging task. This is especially true for geospatial big data since the massive datasets often need to be analyzed in the context of dynamic space and time. Following a series of successful sessions organized at AAG, this special issue on “Big Data Computing for Geospatial Applications” by the ISPRS International Journal of Geo-Information aims to capture the latest efforts on utilizing, adapting, and developing new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges for supporting geospatial applications in different domains such as climate change, disaster management, human dynamics, public health, and environment and engineering.

Potential topics include (but are not limited to) the following:

  • Geo-cyberinfrastructure integrating spatiotemporal principles and advanced computational technologies (e.g., high-performance computing, cloud computing, and deep learning).
  • New computing and programming frameworks, architectures, or parallel computing algorithms for geospatial applications.
  • New geospatial data management strategies and data storage models coupled with high-performance computing for efficient data query, retrieval, and processing (e.g., new spatiotemporal indexing mechanisms).
  • New computing methods considering spatiotemporal collocation (locations and relationships) of users, data, and computing resources.
  • Geospatial big data processing, mining and visualization methods using high-performance computing and artificial intelligence.
  • Integrating scientific workflows in cloud computing and/or high-performance computing environments.
  • Any other research, development, education, and visions related to geospatial big data computing.

Interested authors are encouraged to notify the guest editors of their intention by sending an abstract to Dr. Zhenlong Li (zhenlong@sc.edu). The deadline for submissions of the final papers is May 31, 2020. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website.

Dr. Zhenlong Li
Assoc. Prof. Wenwu Tang
Dr. Qunying Huang
Dr. Eric Shook
Prof. Qingfeng Guan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. ISPRS International Journal of Geo-Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Editorial

7 pages, 219 KiB  
Editorial
Introduction to Big Data Computing for Geospatial Applications
by Zhenlong Li, Wenwu Tang, Qunying Huang, Eric Shook and Qingfeng Guan
ISPRS Int. J. Geo-Inf. 2020, 9(8), 487; https://doi.org/10.3390/ijgi9080487 - 12 Aug 2020
Cited by 9 | Viewed by 4540
Abstract
The convergence of big data and geospatial computing has brought challenges and opportunities to GIScience with regard to geospatial data management, processing, analysis, modeling, and visualization. This special issue highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, while demonstrating the opportunities of using big data for geospatial applications. Crucial to the advancements highlighted here is the integration of computational thinking and spatial thinking and the transformation of abstract ideas and models into concrete data structures and algorithms. This editorial first introduces the background and motivation of this special issue, followed by an overview of the ten included articles. Conclusions and future research directions are provided in the last section.

Research

14 pages, 3900 KiB  
Communication
Terrain Analysis in Google Earth Engine: A Method Adapted for High-Performance Global-Scale Analysis
by José Lucas Safanelli, Raul Roberto Poppiel, Luis Fernando Chimelo Ruiz, Benito Roberto Bonfatti, Fellipe Alcantara de Oliveira Mello, Rodnei Rizzo and José A. M. Demattê
ISPRS Int. J. Geo-Inf. 2020, 9(6), 400; https://doi.org/10.3390/ijgi9060400 - 17 Jun 2020
Cited by 43 | Viewed by 8494
Abstract
Terrain analysis is an important tool for modeling environmental systems. Aiming to use the cloud-based computing capabilities of Google Earth Engine (GEE), we customized an algorithm for calculating terrain attributes, such as slope, aspect, and curvatures, for different resolutions and geographical extents. The calculation method is based on geometry and elevation values estimated within a 3 × 3 spheroidal window, and it does not rely on projected elevation data. Thus, partial derivatives of terrain are calculated considering the great circle distances of reference nodes of the topographic surface. The algorithm was developed using the JavaScript programming interface of the online code editor of GEE and can be loaded as a custom package. The algorithm also provides an additional feature for visualizing terrain maps with a dynamic legend scale, which is useful for mapping different extents, from local to global. We compared the consistency of the proposed method with an available but limited terrain analysis tool of GEE, which resulted in correlations of 0.89 and 0.96 for aspect and slope, respectively, over a near-global scale. In addition, we compared the slope, aspect, horizontal curvature, and vertical curvature of a reference site (Mount Ararat) to their equivalent attributes estimated with the System for Automated Geoscientific Analyses (SAGA), which achieved correlations between 0.96 and 0.98. The visual correspondence between TAGEE (Terrain Analysis in Google Earth Engine) and SAGA outputs confirms the method's potential for terrain analysis. The proposed algorithm can make terrain analysis scalable and adaptable to customized needs, benefiting from the high-performance interface of GEE.
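
The paper's TAGEE package targets GEE's JavaScript code editor. For orientation, the minimal sketch below calls the limited built-in terrain tool that the authors benchmark against, via the Earth Engine Python API; the DEM asset and the approximate Mount Ararat coordinates are illustrative assumptions, and an authenticated GEE account is required.

```python
import ee

ee.Initialize()  # assumes prior authentication (earthengine authenticate)

# Global SRTM digital elevation model at ~30 m resolution.
dem = ee.Image('USGS/SRTMGL1_003')

# The built-in tool computes slope, aspect, and hillshade in one call.
# Unlike TAGEE's spheroidal 3x3 window, it works on projected elevations.
terrain = ee.Terrain.products(dem)
slope = terrain.select('slope')    # degrees
aspect = terrain.select('aspect')  # degrees clockwise from north

# Sample the slope near Mount Ararat, the paper's reference site.
ararat = ee.Geometry.Point([44.2993, 39.7020])  # lon, lat (approximate)
print(slope.reduceRegion(ee.Reducer.first(), ararat, 30).getInfo())
```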

29 pages, 9577 KiB  
Article
Advanced Cyberinfrastructure to Enable Search of Big Climate Datasets in THREDDS
by Juozas Gaigalas, Liping Di and Ziheng Sun
ISPRS Int. J. Geo-Inf. 2019, 8(11), 494; https://doi.org/10.3390/ijgi8110494 - 02 Nov 2019
Cited by 5 | Viewed by 2875
Abstract
Understanding the past, present, and changing behavior of the climate requires close collaboration among a large number of researchers from many scientific domains. At present, the necessary interdisciplinary collaboration is greatly limited by difficulties in discovering, sharing, and integrating climatic data due to the tremendously increasing data size. This paper discusses the methods and techniques for solving the inter-related problems encountered when transmitting, processing, and serving metadata for heterogeneous Earth System Observation and Modeling (ESOM) data. A cyberinfrastructure-based solution is proposed to enable effective cataloging and two-step search on big climatic datasets by leveraging state-of-the-art web service technologies and crawling the existing data centers. To validate its feasibility, the big dataset served by the UCAR THREDDS Data Server (TDS), which provides petabyte-level ESOM data and updates hundreds of terabytes of data every day, is used as the case study dataset. A complete workflow is designed to analyze the metadata structure in TDS and create an index for data parameters. A simplified registration model is constructed which defines constant information, delimits secondary information, and exploits spatial and temporal coherence in metadata. The model derives a sampling strategy for a high-performance concurrent web crawler bot, which is used to mirror the essential metadata of the big data archive without overwhelming network and computing resources. The metadata model, crawler, and standards-compliant catalog service form an incremental search cyberinfrastructure, allowing scientists to search the big climatic datasets in near real time. The proposed approach has been tested on the UCAR TDS, and the results prove that it achieves its design goal by boosting the crawling speed by at least 10 times and reducing the redundant metadata from 1.85 gigabytes to 2.2 megabytes, a significant step toward making currently non-searchable climate data servers searchable.
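
A minimal, polite version of catalog crawling can be sketched with the Siphon client library; this is not the authors' high-performance concurrent crawler, and the depth limit below exists only to keep the walk small. The entry point is the UCAR TDS top-level catalog.

```python
from siphon.catalog import TDSCatalog

def crawl(url, depth=0, max_depth=1):
    """Recursively walk nested THREDDS catalog references, yielding dataset names."""
    cat = TDSCatalog(url)
    yield from cat.datasets          # datasets registered in this catalog
    if depth < max_depth:            # shallow limit: a full TDS walk is huge
        for ref in cat.catalog_refs.values():
            yield from crawl(ref.href, depth + 1, max_depth)

top = 'https://thredds.ucar.edu/thredds/catalog.xml'
for name in crawl(top):
    print(name)
```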

15 pages, 4176 KiB  
Article
MapReduce-Based D_ELT Framework to Address the Challenges of Geospatial Big Data
by Junghee Jo and Kang-Woo Lee
ISPRS Int. J. Geo-Inf. 2019, 8(11), 475; https://doi.org/10.3390/ijgi8110475 - 24 Oct 2019
Cited by 9 | Viewed by 3170
Abstract
The conventional extracting–transforming–loading (ETL) system is typically operated on a single machine, which is not capable of handling huge volumes of geospatial big data. To deal with the considerable amount of big data in the ETL process, we propose D_ELT (delayed extracting–loading–transforming), which utilizes MapReduce-based parallelization. Among the various kinds of big data, we concentrate on geospatial big data generated by sensors using Internet of Things (IoT) technology. In the IoT environment, the update latency for sensor big data is typically short and old data are not worth further analysis, so the speed of data preparation is even more significant. We conducted several experiments measuring the overall performance of D_ELT and compared it with both traditional ETL and extracting–loading–transforming (ELT) systems, using different data sizes and complexity levels of analysis. The experimental results show that D_ELT outperforms the other two approaches, ETL and ELT. In addition, the larger the amount of data or the higher the complexity of the analysis, the greater the parallelization effect of the transform step in D_ELT, leading to better performance than the traditional ETL and ELT approaches.
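
The ordering difference between ETL and D_ELT can be shown in a few lines of plain Python. The sketch below is purely conceptual: in the actual system the deferred transform runs inside Hadoop map tasks, whereas here it is simulated with an in-process map, and the sensor-record format is invented for illustration.

```python
def etl(records, transform, store):
    # Traditional ETL: transform up front, on a single machine, before loading.
    store.extend(transform(r) for r in records)

def d_elt(records, store):
    # D_ELT: load raw records immediately and defer all transformation.
    store.extend(records)

def analyze(store, transform):
    # The deferred transform runs at analysis time, where MapReduce can
    # parallelize it across map tasks (simulated here with plain map()).
    return list(map(transform, store))

to_point = lambda r: {'x': float(r[0]), 'y': float(r[1])}  # toy IoT record parser
raw = [('126.9', '37.5'), ('127.0', '37.6')]

warehouse = []
d_elt(raw, warehouse)                # fast load, no up-front transform cost
print(analyze(warehouse, to_point))  # transform only when analysis demands it
```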

20 pages, 6356 KiB  
Article
Parallel Cellular Automata Markov Model for Land Use Change Prediction over MapReduce Framework
by Junfeng Kang, Lei Fang, Shuang Li and Xiangrong Wang
ISPRS Int. J. Geo-Inf. 2019, 8(10), 454; https://doi.org/10.3390/ijgi8100454 - 13 Oct 2019
Cited by 28 | Viewed by 4571
Abstract
The Cellular Automata Markov model combines the cellular automata (CA) model's ability to simulate the spatial variation of complex systems with the long-term prediction capability of the Markov model. In this research, we designed a parallel CA-Markov model based on the MapReduce framework. The model was divided into two main parts: a parallel Markov model based on MapReduce (Cloud-Markov), and a comprehensive evaluation method for land-use changes based on cellular automata and MapReduce (Cloud-CELUC). Choosing Hangzhou as the study area and using Landsat remote-sensing images from 2006 and 2013 as the experiment data, we conducted three experiments to evaluate the parallel CA-Markov model in a Hadoop environment. Efficiency evaluations compared Cloud-Markov and Cloud-CELUC across different data volumes. The results showed that the acceleration ratios of Cloud-Markov and Cloud-CELUC were 3.43 and 1.86, respectively, compared with their serial algorithms. The validity of the prediction algorithm was tested by using the parallel CA-Markov model to simulate land-use changes in Hangzhou in 2013 and analyzing the relationship between the simulation results and the interpretation results of the remote-sensing images. The Kappa coefficients of construction land, natural-reserve land, and agricultural land were 0.86, 0.68, and 0.66, respectively, which demonstrates the validity of the parallel model. Hangzhou land-use changes in 2020 were then predicted and analyzed. The results show that construction land in the central area is rapidly increasing due to a developed transportation system and is mainly converted from agricultural land.
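
The Markov half of the model reduces to estimating a transition-probability matrix between two classified rasters. The sketch below is a serial NumPy version of that counting step on toy data; the paper's Cloud-Markov contribution is distributing exactly this counting over MapReduce.

```python
import numpy as np

def transition_matrix(lu_t1, lu_t2, n_classes):
    """Row-normalized probabilities of a cell moving from class i to class j."""
    counts = np.zeros((n_classes, n_classes))
    for i, j in zip(lu_t1.ravel(), lu_t2.ravel()):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Toy rasters: 0 = construction, 1 = natural reserve, 2 = agricultural.
lu_2006 = np.array([[0, 1, 2], [2, 2, 1], [0, 2, 2]])
lu_2013 = np.array([[0, 1, 0], [2, 0, 1], [0, 2, 0]])
print(transition_matrix(lu_2006, lu_2013, 3))
```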

20 pages, 4960 KiB  
Article
Integrating Geovisual Analytics with Machine Learning for Human Mobility Pattern Discovery
by Tong Zhang, Jianlong Wang, Chenrong Cui, Yicong Li, Wei He, Yonghua Lu and Qinghua Qiao
ISPRS Int. J. Geo-Inf. 2019, 8(10), 434; https://doi.org/10.3390/ijgi8100434 - 30 Sep 2019
Cited by 10 | Viewed by 4093
Abstract
Understanding human movement patterns is of fundamental importance in transportation planning and management. We propose to examine complex public transit travel patterns over a large-scale transit network, which is challenging since it involves thousands of transit passengers and massive data from heterogeneous sources. Additionally, efficient representation and visualization of discovered travel patterns is difficult given the large number of transit trips. To address these challenges, this study leverages advanced machine learning methods to identify time-varying mobility patterns based on smart card data and other urban data. The proposed approach delivers a comprehensive solution to pre-process, analyze, and visualize complex public transit travel patterns. The approach first fuses smart card data with other urban data to reconstruct original transit trips. We then use two machine learning methods: a clustering algorithm to extract transit corridors that represent primary mobility connections between different regions, and a graph-embedding algorithm to discover hierarchical mobility community structures. We also devise compact and effective multi-scale visualization forms to represent the discovered travel behavior dynamics. An interactive web-based mapping prototype is developed to integrate these machine learning methods with specific visualizations to characterize transit travel behavior patterns and to enable visual exploration of transit mobility patterns at different scales and resolutions over space and time. The proposed approach is evaluated using multi-source big transit data (e.g., smart card data, transit network data, and bus trajectory data) collected in Shenzhen, China. Evaluation of our prototype demonstrates that the proposed visual analytics approach offers a scalable and effective solution for discovering meaningful travel patterns across large metropolitan areas.
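
As a rough illustration of the corridor-extraction idea, the sketch below clusters reconstructed origin-destination trips so that trips with similar endpoints form candidate corridors. KMeans and the toy Shenzhen-area coordinates are generic stand-ins, not the paper's actual algorithm or data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each reconstructed trip: origin lon/lat and destination lon/lat (toy values).
trips = np.array([
    [114.05, 22.54, 114.10, 22.60],
    [114.06, 22.55, 114.11, 22.61],
    [113.95, 22.50, 114.00, 22.52],
    [113.96, 22.49, 114.01, 22.53],
])

corridors = KMeans(n_clusters=2, n_init=10, random_state=0).fit(trips)
print(corridors.labels_)  # trips sharing a label form one candidate corridor
```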

19 pages, 4004 KiB  
Article
High-Performance Overlay Analysis of Massive Geographic Polygons That Considers Shape Complexity in a Cloud Environment
by Kang Zhao, Baoxuan Jin, Hong Fan, Weiwei Song, Sunyu Zhou and Yuanyi Jiang
ISPRS Int. J. Geo-Inf. 2019, 8(7), 290; https://doi.org/10.3390/ijgi8070290 - 26 Jun 2019
Cited by 16 | Viewed by 4349
Abstract
Overlay analysis is a common task in geographic computing that is widely used in geographic information systems, computer graphics, and computer science. With breakthroughs in Earth observation technologies, particularly the emergence of high-resolution satellite remote sensing, geographic data have grown explosively, and the overlay analysis of massive, complex geographic data has become a computationally intensive task. Distributed parallel processing in a cloud environment provides an efficient solution to this problem. The cloud computing paradigm represented by Spark has become the de facto standard for massive data processing in industry and academia due to its large-scale and low-latency characteristics, and it has attracted further attention for solving the overlay analysis of massive data. Existing studies mainly focus on how to implement parallel overlay analysis in a cloud computing paradigm but pay less attention to the impact of the graphical complexity of spatial data on parallel computing efficiency, especially the data skew caused by differences in graphic complexity. Geographic polygons often have complex graphical structures, such as large numbers of vertices and composite structures including holes and islands. When the Spark paradigm is used for the overlay analysis of massive geographic polygons, its computational efficiency is closely related to factors such as data organization and algorithm design. Considering the influence of polygon shape complexity on the performance of overlay analysis, we design and implement a parallel processing algorithm based on the Spark paradigm. Based on an analysis of polygon shape complexity, overlay analysis is accelerated via reasonable data partitioning, a distributed spatial index, a minimum bounding rectangle filter, and other optimizations, while high speed and parallel efficiency are maintained.
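
A much-simplified Spark sketch of the idea appears below: vertex count serves as the shape-complexity proxy, pairs are ordered by complexity before repartitioning to mitigate skew, and a bounding-box test filters pairs before the expensive intersection. The paper's distributed spatial index is omitted, and the two single-polygon layers are toy data.

```python
from pyspark import SparkContext
from shapely.geometry import Polygon

sc = SparkContext(appName='overlay-sketch')

layer_a = [Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])]
layer_b = [Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])]

def complexity(poly):
    return len(poly.exterior.coords)  # vertex count as a complexity proxy

def mbr_overlap(a, b):
    # Minimum-bounding-rectangle filter: cheap rejection before intersection.
    ax0, ay0, ax1, ay1 = a.bounds
    bx0, by0, bx1, by1 = b.bounds
    return ax1 >= bx0 and bx1 >= ax0 and ay1 >= by0 and by1 >= ay0

pairs = (sc.parallelize([(a, b) for a in layer_a for b in layer_b])
           .sortBy(lambda ab: complexity(ab[0]) + complexity(ab[1]))
           .repartition(4))  # crude balancing of heavy geometries across tasks

areas = (pairs.filter(lambda ab: mbr_overlap(*ab))
              .map(lambda ab: ab[0].intersection(ab[1]).area))
print(areas.collect())  # the two toy squares overlap in a 2 x 2 area: [4.0]
```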

24 pages, 6322 KiB  
Article
Geographic Knowledge Graph (GeoKG): A Formalized Geographic Knowledge Representation
by Shu Wang, Xueying Zhang, Peng Ye, Mi Du, Yanxu Lu and Haonan Xue
ISPRS Int. J. Geo-Inf. 2019, 8(4), 184; https://doi.org/10.3390/ijgi8040184 - 08 Apr 2019
Cited by 55 | Viewed by 9742
Abstract
Formalized knowledge representation is the foundation of big data computing, mining, and visualization. Current knowledge representations regard information as items linked to relevant objects or concepts by tree or graph structures. However, geographic knowledge differs from general knowledge in its focus on temporal, spatial, and evolving information. Thus, it is difficult for discrete knowledge items to represent geographic states, evolutions, and mechanisms, e.g., the process of a storm: “{9:30-60 mm-precipitation}-{12:00-80 mm-precipitation}-…”. The underlying problem lies in the constructors of the logical foundation (the ALC description language) of current geographic knowledge representations, which cannot provide such descriptions. To address this issue, this study designed a formalized geographic knowledge representation called GeoKG and supplemented the constructors of the ALC description language. An evolution case of the administrative divisions of Nanjing was then represented with GeoKG. To evaluate the capabilities of the formalized model, two knowledge graphs were constructed from the administrative division case, one using GeoKG and one using YAGO. A set of geographic questions was then defined and translated into queries. The query results show that GeoKG's results are more accurate and complete than YAGO's, owing to the enhanced state information. A user evaluation verified these improvements, indicating that GeoKG is a promising and powerful model for geographic knowledge representation.
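
The core idea, attaching explicit time-stamped states to a geographic entity so that an evolution sequence can be queried back out, can be loosely sketched with generic RDF triples via rdflib. The namespace and property names (hasState, atTime, precipitation) are illustrative, not the paper's GeoKG vocabulary.

```python
from rdflib import Graph, Namespace, Literal

EX = Namespace('http://example.org/geo#')  # illustrative namespace
g = Graph()

storm = EX.storm_001
for i, (time, mm) in enumerate([('9:30', 60), ('12:00', 80)]):
    state = EX[f'storm_001_state_{i}']        # one node per observed state
    g.add((storm, EX.hasState, state))
    g.add((state, EX.atTime, Literal(time)))
    g.add((state, EX.precipitation, Literal(mm)))  # mm of precipitation

# Query the storm's states back out (RDF itself imposes no ordering).
for s in g.objects(storm, EX.hasState):
    print(g.value(s, EX.atTime), g.value(s, EX.precipitation))
```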

17 pages, 17124 KiB  
Article
A Novel Method of Missing Road Generation in City Blocks Based on Big Mobile Navigation Trajectory Data
by Hangbin Wu, Zeran Xu and Guangjun Wu
ISPRS Int. J. Geo-Inf. 2019, 8(3), 142; https://doi.org/10.3390/ijgi8030142 - 14 Mar 2019
Cited by 16 | Viewed by 3836
Abstract
With the rapid development of cities, the geographic information of urban blocks is also changing rapidly. However, traditional methods of updating road data cannot keep up with this development because they require a high level of professional expertise to operate and are very time-consuming. In this paper, we develop a novel method for extracting missing roads by reconstructing road topology from big mobile navigation trajectory data. The three main steps are filtering the original navigation trajectory data, extracting road centerlines from the navigation points, and establishing the topology of the missing roads. First, data from pedestrians and drivers on existing roads were deleted from the raw data. Second, the centerlines of city block roads were extracted using the RSC (ring-stepping clustering) method proposed herein. Finally, the topologies of the missing roads and the connections between missing and existing roads were built. A complex urban block with an area of 5.76 square kilometers was selected as the case study area. The validity of the proposed method was verified using a dataset consisting of five days of mobile navigation trajectory data. The experimental results showed that the average absolute error of the length of the generated centerlines was 1.84 m. Comparative analysis with existing road extraction methods showed that the F-score performance of the proposed method was much better than that of previous methods.
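
The RSC algorithm is the paper's own contribution; as a generic stand-in, the sketch below groups leftover navigation points with DBSCAN so that each cluster approximates one missing road segment, then reduces each cluster to a crude centerline point. All coordinates are toy values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy navigation points (lon, lat) remaining after trajectories on
# existing roads have been filtered out.
points = np.array([
    [121.4801, 31.2301], [121.4802, 31.2302], [121.4803, 31.2303],  # road A
    [121.4901, 31.2401], [121.4902, 31.2402],                       # road B
])

labels = DBSCAN(eps=0.0005, min_samples=2).fit_predict(points)
for lab in sorted(set(labels)):
    cluster = points[labels == lab]
    print(f'road {lab}: candidate centerline point {cluster.mean(axis=0)}')
```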

23 pages, 6670 KiB  
Article
Social Media Big Data Mining and Spatio-Temporal Analysis on Public Emotions for Disaster Mitigation
by Tengfei Yang, Jibo Xie, Guoqing Li, Naixia Mou, Zhenyu Li, Chuanzhao Tian and Jing Zhao
ISPRS Int. J. Geo-Inf. 2019, 8(1), 29; https://doi.org/10.3390/ijgi8010029 - 15 Jan 2019
Cited by 31 | Viewed by 7075
Abstract
Social media contains abundant geographic information and has become one of the most important data sources for hazard mitigation. Compared with traditional methods of collecting disaster-related geographic information, social media provides information in real time and at low cost. Owing to the development of big data mining technologies, it is now easier to extract useful disaster-related geographic information from social media big data, and many researchers have used related technologies to study social media for disaster mitigation. However, few researchers have considered the extraction of public emotions (especially fine-grained emotions) as an attribute of disaster-related geographic information to aid in disaster mitigation. Combined with the powerful spatio-temporal analysis capabilities of geographical information systems (GISs), the public emotional information contained in social media can help us understand disasters in more detail than traditional methods allow. However, social media data are quite complex and fragmented, both in format and in semantics, especially for Chinese social media, so a more efficient algorithm is needed. In this paper, we take the earthquake that occurred in Ya’an, China in 2013 as a case study and introduce a deep learning method to extract fine-grained public emotional information from Chinese social media big data to assist in disaster analysis. By combining this with other geographic information data (such as population density distribution data, POI (point of interest) data, etc.), we can further assist in assessing affected populations, explore the movement patterns of public emotions, and optimize disaster mitigation strategies.
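
A minimal sketch of the fine-grained emotion classification step is given below, using a tiny PyTorch bag-of-embeddings model as a stand-in for the paper's deep learning architecture. The emotion labels, vocabulary size, and token ids are illustrative, and the model is untrained.

```python
import torch
import torch.nn as nn

EMOTIONS = ['fear', 'sadness', 'anger', 'gratitude', 'hope']  # illustrative labels

class EmotionNet(nn.Module):
    def __init__(self, vocab_size, embed_dim=32, n_classes=len(EMOTIONS)):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)  # mean of word vectors
        self.fc = nn.Linear(embed_dim, n_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.embed(token_ids, offsets))

model = EmotionNet(vocab_size=5000)
# Two toy microblog posts, tokenized to ids and packed into one flat tensor.
tokens = torch.tensor([11, 42, 7, 99, 3])  # post 1: ids 11, 42, 7; post 2: 99, 3
offsets = torch.tensor([0, 3])             # where each post starts in `tokens`
logits = model(tokens, offsets)
print([EMOTIONS[i] for i in logits.argmax(dim=1)])  # untrained, so arbitrary
```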

18 pages, 6298 KiB  
Article
A Task-Oriented Knowledge Base for Geospatial Problem-Solving
by Can Zhuang, Zhong Xie, Kai Ma, Mingqiang Guo and Liang Wu
ISPRS Int. J. Geo-Inf. 2018, 7(11), 423; https://doi.org/10.3390/ijgi7110423 - 31 Oct 2018
Cited by 6 | Viewed by 4543
Abstract
In recent years, the rapid development of cloud computing and web technologies has led to significant advances in chaining geospatial information services (GI services) to solve complex geospatial problems. However, the construction of a problem-solving workflow requires considerable expertise from end-users, and few studies have designed a knowledge base to capture and share geospatial problem-solving knowledge. This paper abstracts a geospatial problem as a task that can be further decomposed into multiple subtasks. Tasks are distinguished at three granularities: Geooperator, Atomic Task, and Composite Task. A task model is presented to define the outline of a problem solution at a conceptual level that closely reflects the processes of problem-solving. A task-oriented knowledge base that leverages an ontology-based approach is built to capture and share task knowledge, offering the potential for reusing task knowledge when a similar problem is faced. Finally, the implementation details are described using a meteorological early-warning analysis as an example.
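
The three task granularities can be pictured with a small data-structure sketch, given below as Python dataclasses. The class and field names are illustrative assumptions, not the paper's ontology terms.

```python
from dataclasses import dataclass, field

@dataclass
class Geooperator:        # finest granularity: a single GI service operation
    service_url: str

@dataclass
class AtomicTask:         # solvable by exactly one Geooperator
    name: str
    operator: Geooperator

@dataclass
class CompositeTask:      # decomposes into subtasks (atomic or composite)
    name: str
    subtasks: list = field(default_factory=list)

warning = CompositeTask('meteorological early-warning analysis')
warning.subtasks.append(AtomicTask(
    'interpolate station rainfall',
    Geooperator('http://example.org/wps/idw')))  # hypothetical service URL
print(warning)
```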
