Article

Integrating Visualization and Interaction Tools for Enhancing Collaboration in Different Public Participation Settings

by Patrick Postert 1,*,†, Anna E. M. Wolf 2,† and Jochen Schiewe 1,†
1 Lab for Geoinformatics and Geovisualization (g2lab), HafenCity University Hamburg, Henning-Voscherau-Platz 1, 20457 Hamburg, Germany
2 FTZ Digital Reality, Hamburg University of Applied Sciences, Berliner Tor 5, 20099 Hamburg, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
ISPRS Int. J. Geo-Inf. 2022, 11(3), 156; https://doi.org/10.3390/ijgi11030156
Submission received: 13 January 2022 / Accepted: 15 February 2022 / Published: 22 February 2022
(This article belongs to the Special Issue Public Participation in 2021: New Forms, New Modes, New Questions?)

Abstract: The demand for participant engagement in urban planning shows a great need for tools that enable communication between stakeholders and make planning processes more transparent. So far, common methods use different tools and platforms independently, which prevents the full potential for effective, efficient, and creative collaboration from being realized. Hence, this paper presents an approach that combines different participation settings (off-site, on-site, and online) by using an interactive touch table and an additional screen, as well as virtual reality (VR) and augmented reality (AR) devices, and synchronizing them in real-time. To fulfill the collaboration requirements, three major technical aspects are addressed in the concept and prototype implementation: Firstly, the demands of various settings and devices require a uniform, cross-device interaction concept. Secondly, all changes in the course of the participation (e.g., adding, manipulating, or removing objects) must be synchronized across all devices in real-time, with very low latency. Thirdly, the various states should be saved persistently during the collaboration process. Detailed empirical usability studies are still pending; however, pretests indicate that the concept is appreciated and transferable to other planning processes.

1. Introduction

Current social and cultural change is evident, among other things, in the increasing number of citizens’ petitions and referendums, as well as in the high level of engagement with mega topics such as climate protection or construction projects that directly affect citizens [1,2]. When Arnstein described her “ladder of participation” in 1969 [3], she already pointed out that real participation is more than mere information. Instead of simply informing the public or allowing them to choose from predefined drafts, there is a call for a change in thinking: citizens should be involved in the design planning process at an early stage, and their special expertise on the district and the neighbourhood should be considered [1,4,5,6,7]. This is often referred to as “Collaboration” or “Co-Creation”. In the context of urban planning, this means fostering the participation of diverse people with different expertise. It allows different types of knowledge to be combined in a participatory process, thus generating new, often innovative knowledge [8]. One key aspect of co-creation is the early involvement of citizens, developing together with them rather than for them [8]. Another critical factor is enabling communication between laypeople and professionals on an equal footing.
So far, research has mainly focused on isolated innovative participation approaches without pursuing an integrated concept of real-time collaboration that covers different types of devices and multiple scenarios. This is where the project PaKOMM (the German abbreviation for participation, collaboration, multimedia) of the HafenCity University Hamburg and Hamburg University of Applied Sciences steps in. The novel approach of PaKOMM is to combine three settings (off-site, on-site, and online) by using an interactive touch table and an additional screen, as well as virtual reality (VR) and augmented reality (AR) devices, and synchronizing them in real-time. The task is to develop and test application-specific solutions and workflows for combined visualizations and interactions that, together with the integration of gamification elements to increase motivation, enable added value in the collaborative participation process.
To implement this overarching objective, the PaKOMM project uses not only the technical expertise from the fields of geoinformatics, media design, and mixed reality, but also the expertise from social sciences to be able to evaluate the effects of the developed approaches on digital citizen participation in the governance of a city.
Almost all participation procedures are based on spatio-temporal data, which has so far been conveyed only through individual, isolated forms of representation. In this context, simple and easy-to-understand visualizations are required for quick and efficient communication. At the same time, Gebetsroither-Geringer et al. [2] state that static visualizations often do not meet the needs of stakeholders, making interactive visualizations a better alternative. Furthermore, the variety of requirements, together with those of combined on-site/online variants, necessitates an integrated use of display forms. Special attention is paid to the potential benefit of 3D city models and the further development of adapted Mixed Reality solutions (“Mixed Reality” is used here as an umbrella term for “Virtual Reality”, “Augmented Reality”, etc.). These ideas are based on the knowledge that both 3D models and Mixed Reality applications can potentially improve spatial imagination and create experiences that are shared in real-time and superimposed on the built environment on-site [9].
The overarching research question of the project is whether the use of modern multiuser and multimedia approaches can increase the effectiveness and efficiency of participatory planning processes. In addition, this article deals with the subordinate research question of how a cross-device interaction and visualization concept must be designed to enable various application-dependent co-creation scenarios. This includes the requirements of application-dependent synchronization across devices and persistent storage of states and intermediate results.
After a presentation of the related work in Section 2 and a more detailed description of the overall concept of PaKOMM (Section 3), Section 4 deals with three aspects that are of central importance for the fulfilment of these requirements: Firstly, the demands of the various settings (off-site, on-site, and online) and devices (touch table, monitor, VR) call for a uniform, cross-device interaction concept. Secondly, all changes in the course of the participation (e.g., adding, manipulating, or removing objects) must be synchronized across all devices in real-time with very low latency. Thirdly, the various states should be saved persistently during the collaboration process. Section 5 reports on a first pretest, Section 6 summarizes the development status, and Section 7 gives an outlook on future work.

2. State of the Art

One key component of participation and evidence-based decision-making is geospatial media; especially geospatial media arising from cartography have traditionally been used in urban planning. Prominent examples of map-based urban planning are the Cerdà Plan for the extension of Barcelona [10] and the Hobrecht Plan, which was supposed to address migration from rural areas to the city of Berlin [11,12]. Although both plans were often criticized, characteristic elements are still recognizable in both cities today.
Planners like Frederick Law Olmsted, the designer of Manhattan’s Central Park, already proposed addressing the growing complexity of urban planning by separating different demands and planning factors into layers that should also be adaptable [13]. Layer-based planning with GIS still relies on this concept today. With digitization, GIS has also become established in planning processes, and interactive low-threshold tools empower collaborative citizen engagement.
Current participation approaches typically rely on either on-site procedures that invite people to participate in person, or online procedures that enable participation regardless of time and place. In the on-site context, one innovation was the use of an interactive touch table in citizen workshops to find places for temporary refugee shelters [14]. Furthermore, Gottwald et al. [15] utilized touch tables for boundary management in river landscapes and assessed the resulting impact on the ecosystem. In the online context, WebGIS in particular has proven to be easily accessible for participants. To support participants’ decision-making, for example, Mansourian et al. [16] presented a framework with analytic and deliberative components. To combine online with off-site participation scenarios, the research project DIPAS (Digital Participation System) [1] presents an integrated application that relies on a web application for the online scenario and a touch table application for off-site workshops.
Arising from traditional cartography, 3D representations are becoming more popular as a means of increasing spatial awareness. As Herbert and Chen [17] indicate, complex assessment and interaction tasks in particular can benefit from the ability to manipulate the perspective. One prominent example of a 3D web application built to foster participant engagement is the smarticipateApp [18], which provides automatic analysis of citizen input and generates feedback to support informed choices. In the presented case study, citizens had to choose locations for tree plantations. The case study indicates that these supportive features can be effective in participation processes, although the effort to extend the approach to other cases can be high, as domain experts have to be involved in defining domain vocabulary and rules for the implementation. Furthermore, the authors assume that extensive real-time knowledge and feedback generation may have high hardware requirements.
Besides 3D web applications, a large volume of published studies indicates that urban planning processes can benefit from AR and VR approaches. In the context of VR applications in participation processes, for example, van Leeuwen et al. [19] propose that immersive VR applications might provide higher engagement than 3D renderings on screens. Furthermore, Ma et al. [20] argue that VR-driven visualizations of spatial relationships in the built environment can make complex questions more accessible for stakeholders. In the context of AR, the UrbanPlanAR application has received attention more recently; it used depth-buffering to determine whether parts of a Building Information Model (BIM) are occluded by parts of the built environment and thus should not be visible and rendered [21].
As presented above, there are various examples of off-site, online, and on-site participation approaches. However, these approaches mostly specialize in one of the three scenarios. To integrate these scenarios into one comprehensive concept, PaKOMM describes an approach that allows real-time collaboration on different types of devices.

3. PaKOMM Concept

PaKOMM combines an interactive touch table application with Virtual and Augmented Reality to enable interactive visualizations and adapt to different settings and users. Changes are synchronized and persistently stored in real-time, enabling hybrid forms of collaboration. A uniform and cross-device interaction concept (see Section 4.2) allows quick learning and easy use of the different applications.
To involve as many citizens as possible, we distinguish between three different settings with the appropriate end devices, shown in Figure 1:
(a) An off-site scenario that combines an interactive touch table with a second screen and Virtual Reality (VR).
(b) An online scenario where people can participate via a website from desktop computers, tablets, or VR headsets.
(c) An on-site scenario that uses mobile devices for Augmented Reality (AR) and possibly standalone headsets for VR with camera see-through.
Scenarios that combine these three settings can also be realized with a backend for synchronization and persistent data storage. Ideally, it should be possible in all settings to actively visualize suggestions and annotate the suggestions of others.
As an example of such a setting, a group of people can discuss an issue at the touch table, learn about the circumstances, and develop an initial proposal together. Subsequently, this proposal can be experienced and further elaborated in small groups in VR. The proposals that emerge in this way can finally be superimposed on the touch table and compared. The developed compromise(s) can also be viewed on-site using the AR app, commented on, and adjusted if necessary.
Apart from this scenario, other combinations and hybrid use of the touch table, VR, and AR are also possible. The representation of VR users as avatars and the transmission of their movements allow real-time collaboration between VR users and between VR and touch table users.
The prototype implemented so far, which focuses on off-site workshops (a) (framed in green in Figure 1), is presented in Section 4.

4. Implementation of PaKOMM

Besides the proper choice of the development environment and the data basis presented in Section 4.1, there are three main requirements we address to allow participants to communicate, collaborate and co-create with each other on different devices. The first requirement is an identical 3D environment and a cohesive menu across the platforms to avoid placing an additional cognitive load on the participants when using different target devices. Therefore, Section 4.2 presents the overall design approach of our implementation for the touch table and the VR headsets. The second requirement is that during the participants’ usage of the applications, all changes in the environment, the movements of the collaborators, and their voices have to be synchronized in real-time with low latency (Section 4.3) as synchronization with high latency can cause breaks in the feeling of presence in VR [22]. Finally, for the third requirement, which is concerned with the review, continuation, and evaluation of planning results, we present an approach to persistent data storage in Section 4.4.

4.1. Implementation Basis

To realize the concept shown in Figure 1, PaKOMM targets the devices described in Section 4.1.1. With the multi-platform development presented in Section 4.1.2, we demonstrate an efficient development approach to distribute the same cohesive 3D environment to all devices used, including devices other than those mentioned above. The available data used for the 3D environment are described in Section 4.1.3.

4.1.1. Devices

Inspired by previous research projects, in which visualizations are projected onto LEGO tiles [23] or in which 3D-printed objects are combined with overlaid AR visualizations [24], we use a touch table with object recognition via object markers. This allows us to add haptic elements to the interaction by sticking 3D-printed objects on top of them (see Section 4.2.2). The selected system is “Nexus” from eyefactive [25] with a 65” UHD screen from NEC and PCAP technology from 3M. The touch recognition relies on the TUIO protocol [26], which allows multitouch.
Since we assume that many users of the PaKOMM applications have little prior experience with VR, it is vital to keep the entry barriers low. A standalone headset allows a high degree of freedom of movement, without a cable’s length limiting the range of motion or users getting tangled up in it or stumbling over it. We use the Oculus Quest 1 and 2 [27], which are comparatively affordable consumer products in the standalone headset category. Nevertheless, as the development of standalone headsets moves very fast, the next version of our prototype should also be compatible with newly released headsets.

4.1.2. Multi-Platform Development

Game Engines have proven their effectiveness in the game industry for many years. One of the critical requirements of the game industry is platform-independent development of applications that can easily be compiled and adapted to popular target devices and their Software Development Kits (SDKs). SDKs are developed by device manufacturers and provide programming tools and program libraries to use device-specific functions. As a result, the development of large, realistic, and nevertheless high-performance 3D worlds with multi-platform support, which is also needed in PaKOMM, is accessible at a low threshold.
Examples of Game Engines are Unity3D, the Unreal Engine, and the CryEngine, which are free for non-commercial use. There are also open-source Game Engines like Godot, which is under the MIT license. In the PaKOMM project, we rely on Unity3D. The main reason for this is that Unity offers extensive cross-platform development interfaces, which allow us to develop the applications for the device categories touch table, head-mounted displays (VR), and smartphones and tablets (AR) simultaneously. Without adapting the implementation, the devices within each category remain easily interchangeable and replaceable by newer hardware. Besides the cross-platform capabilities, Unity currently also has a vast (semi-)professional developer community.
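To illustrate how such cross-platform development is typically organized in Unity, the following minimal C# sketch uses Unity’s built-in scripting define symbols to branch into device-specific initialization; the class name and log messages are illustrative assumptions, not code from the PaKOMM prototype.

```csharp
using UnityEngine;

// Minimal sketch: one code base, compiled per target platform via Unity's
// scripting define symbols (hypothetical class, illustrative branches only).
public class PlatformBootstrap : MonoBehaviour
{
    void Awake()
    {
#if UNITY_STANDALONE_WIN
        // Touch table build (Windows): set up multitouch/TUIO input handling.
        Debug.Log("Initializing touch table input");
#elif UNITY_ANDROID
        // Standalone VR headset (e.g., Oculus Quest) or Android AR build.
        Debug.Log("Initializing XR input");
#elif UNITY_IOS
        // iOS/iPadOS AR build.
        Debug.Log("Initializing mobile AR input");
#endif
    }
}
```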

4.1.3. Data

To create appealing 3D environments from spatial data, several tools are available that facilitate data import into Unity3D. The challenge is to ensure a correct representation of information and of the surrounding environment for informed decision-making. Therefore, the curation and preparation of the spatial data used should be an essential component of applications used in participation processes. While Höhl [28] provides a general systematic overview of pipelines to import spatial data into Game Engines, Keil et al. [29] give a detailed explanation targeting the Unity Game Engine. The latter also provide guidelines and arguments for choosing between official spatial data and volunteered geographic information (VGI) in individual cases.
In the first PaKOMM scenario, which is concerned with converting an industrial site, the official data turned out to be more suitable as the digital elevation model and the green area were missing in the VGI.
The data used include three static components which the participants cannot modify. First, we utilize a digital elevation model (DEM), whose spatial information is essential for planning purposes as it delimits the planning area with landscape elements such as visual edges (railway embankment) and barriers (water channel). The DEM is also used to shape a terrain object in Unity; in VR, the participants can move freely on the terrain by teleporting. Second, we rely on a CityGML model, which contains the building geometries. The buildings are represented with the highest available Level of Detail (LOD) 2. In LOD 2, the buildings have differentiated roof structures, but they have neither openings like windows or doors nor architectural structures like balconies or bays. The buildings instead serve as landscape elements for orientation and as information carriers. Third, we use a base map to texturize the DEM. Due to the large map extract and the low zoom level, the street names included in the base map provide orientation for the participants using the touch table, whereas in VR the labels are too large to be perceived or read by the users.
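As an illustration of the terrain component, the following sketch shows how a DEM that has already been resampled into a normalized heightmap array could be turned into a Unity terrain object. It relies only on Unity’s standard TerrainData API; the method and parameter names are our own assumptions, not the project’s actual import pipeline.

```csharp
using UnityEngine;

// Minimal sketch, assuming the DEM has already been resampled into a
// square, normalized float[,] heightmap (values in [0,1]); names and
// sizes are illustrative, not the project's actual import pipeline.
public static class DemImport
{
    public static Terrain BuildTerrain(float[,] heights, Vector3 worldSize)
    {
        var data = new TerrainData();
        // The resolution must be set before the world size.
        data.heightmapResolution = heights.GetLength(0);
        data.size = worldSize; // width, maximum height, length in metres
        data.SetHeights(0, 0, heights);

        // Creates a GameObject with Terrain and TerrainCollider components.
        GameObject go = Terrain.CreateTerrainGameObject(data);
        return go.GetComponent<Terrain>();
    }
}
```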

4.2. Interaction and User Interface

Three-dimensional representations, and especially immersive experiences of virtual environments in VR, create powerful images. Therefore, the level of abstraction of virtual elements should be tailored to the planning stage and question. For instance, choosing a concrete tree species might be irrelevant, even distracting, in the early planning stages, while it could become key in a later iteration. Moreover, the chosen level of abstraction also determines the hardware requirements for providing an immersive experience. With the given hardware and use case, PaKOMM has so far relied on an abstract representation of objects (Section 4.2.1). As mentioned in Section 3, the interaction concept should be similar for all devices. For this reason, we use the same menu structure and a similar layout on the touch table (Section 4.2.2) and in VR (Section 4.2.3), each adapted to the respective interaction features.

4.2.1. Editable Objects

Besides the three official spatial data components described in Section 4.1, an additional component of the 3D environment is a set of editable objects. Editable objects can be added, manipulated, and removed by the participants on all devices and in all scenarios presented in Figure 1.
As a preselection for the editable objects, we identified five object categories for urban planning processes as needed in our first scenario. These categories listed below also serve as superordinate terms used in the user interface.
  • Trees
  • Plants
  • City Furniture
  • Play and Sports
  • Industrial Objects

4.2.2. Touch Table

We developed an application with object recognition for the interactive touch table to introduce users to the three-dimensional environment and foster interactions with the editable objects in an easy and intuitive way. 3D-printed objects attached to the object markers enable a playful interaction during the planning process (see Figure 2). Each of the previously mentioned object categories belongs to an object marker with a 3D-printed object symbolizing it. By pressing one of the buttons arranged in a radial layout around the marker [30], new objects can be instantiated and placed on the terrain by “drag and drop”. Using well-known touch gestures to move (drag and drop), scale (increase the distance between two fingers to scale up, decrease it to scale down), and rotate (rotate two fingers), the objects can be modified afterwards.
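The two-finger gestures described above can be realized with Unity’s touch input, as the following sketch illustrates; the reference to the selected object and the overall structure are illustrative assumptions rather than the prototype’s actual gesture handling.

```csharp
using UnityEngine;

// Minimal sketch of the two-finger scale and rotate gestures described
// above, based on Unity's touch input; the "selected" reference is an
// illustrative assumption.
public class TwoFingerGestures : MonoBehaviour
{
    public Transform selected; // the editable object currently being edited
    Vector2 prev0, prev1;

    void Update()
    {
        if (selected == null || Input.touchCount != 2) return;

        Touch t0 = Input.GetTouch(0), t1 = Input.GetTouch(1);
        if (t0.phase == TouchPhase.Began || t1.phase == TouchPhase.Began)
        {
            prev0 = t0.position; prev1 = t1.position; // gesture starts
            return;
        }

        // Scale: ratio of the current to the previous finger distance.
        float factor = Vector2.Distance(t0.position, t1.position) /
                       Mathf.Max(Vector2.Distance(prev0, prev1), 1f);
        selected.localScale *= factor;

        // Rotate: change of the angle of the line between both fingers.
        float angle = Vector2.SignedAngle(prev1 - prev0, t1.position - t0.position);
        selected.Rotate(0f, -angle, 0f, Space.World);

        prev0 = t0.position; prev1 = t1.position;
    }
}
```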
Since users often hesitate to put on a Virtual Reality headset and isolate themselves from the outside world, we expanded the interaction options on the touch table to achieve functionality similar to that in VR, even though the same level of immersion and perception cannot be reached. Furthermore, to allow changes in perspective for the touch table users as well, we implemented different camera positions which can be switched in real-time via two object markers:
  • The first object marker is for a flying camera, which films from an angled bird’s-eye view and provides a good overview of the terrain and the surroundings.
  • The second object marker is for a person camera, which films from the perspective of a person moving around the site to get a closer look and better understand the person’s view.
The concept of the person camera (see Figure 2) is based on the well-known functionality of Google Street View [31]. The real-time stream of the camera is shown on a second screen.
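Unity’s multi-display support is one straightforward way to realize such a second-screen stream; the following sketch is a minimal illustration of this mechanism, with the camera reference as an assumption.

```csharp
using UnityEngine;

// Minimal sketch of routing the person camera to the second screen via
// Unity's multi-display support; the camera reference is an assumption.
public class SecondScreenSetup : MonoBehaviour
{
    public Camera personCamera; // street-level perspective (cf. Figure 2)

    void Start()
    {
        // Display 0 is the touch table itself; activate a second screen if present.
        if (Display.displays.Length > 1)
            Display.displays[1].Activate();

        // Render the person camera's stream on the second screen.
        personCamera.targetDisplay = 1;
    }
}
```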

4.2.3. Virtual Reality

Putting on a Head-Mounted Display (HMD) and immersing oneself in a virtual world allows users to experience and interact with the virtual environment (VE) [22]. The world is shown from a first-person perspective, so that head movements such as turning the head lead to immediate changes in the camera stream. Due to the often limited space available in the physical world, different modes of locomotion have been and are being developed for VR [32]. Teleportation is a prevalent method, which allows jumping to different locations within the VE. For PaKOMM VR, we decided to allow the user to teleport anywhere on the terrain and thus enable free exploration of the environment.
Users can interact with each other and with the surrounding objects in the VE. Following a laser pointer metaphor, the right controller emits a blue ray. Objects hit by the ray are recognized and can be grabbed and moved using the trigger button of the controller. Afterwards, the selected object can be rotated or scaled. In this way, users can manipulate objects directly and create their vision of the site in the virtual environment. Since interacting with existing objects is not sufficient for planning purposes, users can also instantiate new objects by pressing the corresponding button and place them by pointing at the terrain. A user interface similar to the touch table UI described in Section 4.2.2 is attached to the left controller. It allows switching between object categories and choosing a new object to be instantiated.
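A ray-based selection of this kind can be sketched with a simple raycast from the controller, as shown below; the controller transform, the tag-based object filter, and the trigger mapping are illustrative placeholders, since the prototype’s interaction is built on the headset SDK (see Section 7).

```csharp
using UnityEngine;

// Minimal sketch of the laser-pointer selection: a ray from the right
// controller selects editable objects, which are grabbed while the trigger
// is held. The controller transform, the "Editable" tag, and the trigger
// mapping are illustrative placeholders for the headset SDK's input.
public class LaserPointer : MonoBehaviour
{
    public Transform rightController;
    public LineRenderer ray;        // visualizes the blue ray
    public float maxDistance = 50f;

    Transform grabbed;

    void Update()
    {
        Vector3 origin = rightController.position;
        Vector3 direction = rightController.forward;
        ray.SetPosition(0, origin);
        ray.SetPosition(1, origin + direction * maxDistance);

        bool trigger = Input.GetButton("Fire1"); // placeholder for the trigger

        if (trigger && grabbed == null &&
            Physics.Raycast(origin, direction, out RaycastHit hit, maxDistance) &&
            hit.transform.CompareTag("Editable"))
        {
            grabbed = hit.transform; // grab: follow the controller
            grabbed.SetParent(rightController, worldPositionStays: true);
        }
        else if (!trigger && grabbed != null)
        {
            grabbed.SetParent(null); // release at the current pose
            grabbed = null;
        }
    }
}
```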
Collaboration and Co-Creation, as described in Section 1, require the presence of several people. To enable creation and co-creation, we distinguish between a single-user and a multiuser mode. In the multiuser mode, other persons are represented by avatars in the VE. If users teleport in the VE, their avatars are moved accordingly. Movements of the controllers and the head-mounted display are tracked in the physical world and transferred to the avatar’s hands and head. This allows simple gestures like pointing at an object or showing approval by raising a thumb, facilitating communication (see Figure 3). In addition to this non-verbal communication, speech is transmitted for verbal communication. To provide the overall communication and collaboration capabilities required for co-creative participation, we developed a multi-component backend.

4.3. Real-Time Co-Creation—Synchronizing Objects and Avatars

Besides the users’ avatars, controllers, and voice, changes applied to the editable objects have to be transmitted to all devices involved in a real-time multiuser participation session. At first glance, real-time Database Management System (DBMS) services, like Google Firebase and Amazon’s AWS AppSync, appear suitable for near real-time synchronization of game states on different devices. However, these DBMS-based synchronization approaches focus on sequential correctness and data integrity rather than on optimized low-latency transmission between devices, which is an integral requirement for the feeling of presence in multiuser VR environments. Therefore, apart from the DBMS component for persistent data storage described in Section 4.4, the backend also contains a multiuser game backend component (MBC) that provides low-latency synchronization between devices. MBCs have already proven themselves in the gaming industry, as they are easy to integrate into Game Engines like Unity.
Several MBCs are available that can be self-hosted on dedicated servers, which eliminates data protection concerns, especially in sensitive planning processes. Examples of MBCs are the Normcore [33] and Photon PUN2 [34] packages. To find the right MBC to integrate into Unity, Ref. [35] provides a report with an overview of a subset of popular MBCs. Due to its high-level abstraction and thus ease of implementation, for the first version of the PaKOMM prototype we utilize the Photon PUN2 package to synchronize interactions with editable objects (Section 4.2.1) and avatars in real-time, and the Photon Voice 2 SDK for voice transmission.
MBCs organize the real-time synchronization and voice transmission between users in a session. A session starts when the first user joins and ends when the last user leaves. The fact that MBCs are designed to manage several sessions simultaneously ensures that multiple groups of people can use the developed tools at the same time. This has two advantages in the PaKOMM context: Firstly, the prototype is scalable, as the number of users and their grouping is not generally limited. Secondly, workshops can be designed in new ways, as groups of users can be subdivided and merged for joint discussions, and multiple co-creation sessions can be hosted at the same time. Thus, different independent variants can be created in parallel. After their creation, these variants can be elaborated, compared, and discussed with people from different sessions within the same workshop.
Before the participants enter a 3D environment in the PaKOMM prototype and communicate with each other, they automatically enter the lobby room. The lobby room’s purpose is to present the available planning versions and their associated MBC sessions to the participants, so they can select the one in which they want to collaborate. With the selection, the participants join the session and the network connection to the other participants is established. This so-called matchmaking and the lobby room are part of most available MBCs.
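With Photon PUN2, the matchmaking flow described above maps onto a few callbacks, as the following sketch illustrates; the room naming, player limit, and “Avatar” prefab are illustrative assumptions, not the prototype’s actual configuration.

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Minimal sketch of the matchmaking flow with Photon PUN2; room naming,
// player limit, and the "Avatar" prefab are illustrative assumptions.
public class SessionManager : MonoBehaviourPunCallbacks
{
    void Start() => PhotonNetwork.ConnectUsingSettings();

    public override void OnConnectedToMaster() => PhotonNetwork.JoinLobby();

    // Called from the lobby UI once a participant has picked a planning version.
    public void JoinPlanningVersion(string versionId)
    {
        var options = new RoomOptions { MaxPlayers = 20 };
        PhotonNetwork.JoinOrCreateRoom(versionId, options, TypedLobby.Default);
    }

    public override void OnJoinedRoom()
    {
        // Spawn the local user's avatar; it is synchronized to all session members.
        PhotonNetwork.Instantiate("Avatar", Vector3.zero, Quaternion.identity);
    }
}
```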
After users join a session, the MBC transmits each user’s additions, manipulations, or deletions of editable objects to the other users of the scene in real-time. Thus, editable objects and their edits are instantly visible to all users of the same session. The movements during the manipulation of editable objects are also synchronized, allowing the other users to observe the placement process, including the search for the right spot.
However, this engagement of all users in a manipulation process can also lead to interference when multiple users want to manipulate the same editable object simultaneously. Therefore, we apply the concept of object ownership. Each editable object is owned by one user, who can manipulate the object in terms of its position, rotation, and size, or remove it. Initially, ownership belongs to the user who added the editable object to the scene. A takeover of ownership by another user is possible as long as the current owner is not actively manipulating the editable object at that moment; in that case, the takeover is refused to avoid interference. To keep the concept of ownership and takeovers intact after a user leaves a session, their ownerships are transferred to another user in the session.
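With PUN2 as the MBC, this takeover rule can be expressed through the ownership callbacks, as sketched below (assuming the PhotonView is set to the “Request” ownership mode); the manipulation flag is an assumption standing in for the prototype’s actual edit state. The “Request” mode fits this rule well because it lets the current owner’s client decide whether a takeover is granted.

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Minimal sketch of the takeover rule using PUN2's ownership callbacks,
// assuming the PhotonView is set to the "Request" ownership mode; the
// isBeingManipulated flag stands in for the prototype's actual edit state.
public class EditableObject : MonoBehaviourPun, IPunOwnershipCallbacks
{
    public bool isBeingManipulated; // true while the owner actively edits

    void OnEnable() => PhotonNetwork.AddCallbackTarget(this);
    void OnDisable() => PhotonNetwork.RemoveCallbackTarget(this);

    // Called locally when the user wants to take over another user's object.
    public void TryTakeOver()
    {
        if (!photonView.IsMine)
            photonView.RequestOwnership();
    }

    // Executed on the current owner's client, which decides on the request.
    public void OnOwnershipRequest(PhotonView targetView, Player requestingPlayer)
    {
        if (targetView != photonView || !photonView.IsMine) return;

        // Refuse the takeover while actively manipulating; grant it otherwise.
        if (!isBeingManipulated)
            photonView.TransferOwnership(requestingPlayer);
    }

    public void OnOwnershipTransfered(PhotonView targetView, Player previousOwner) { }
    public void OnOwnershipTransferFailed(PhotonView targetView, Player senderOfFailedRequest) { }
}
```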

4.4. Persistent Data Storage in a Graph DBMS

With the MBC in Section 4.3, we addressed the required low-latency real-time synchronization of editable objects for multiuser use cases. However, as MBCs generally do not offer persistent saving, the edits to the editable objects are so far only kept in memory, and the plans made are lost after the last user leaves the session. Therefore, we use a DBMS for central persistent data storage, facilitating the systematic analysis and evaluation of the planning results after the participation processes. To address the potential data loss caused by connection failures, each time the addition, manipulation, or deletion of an editable object is completed, the applied change is instantly mirrored to the DBMS. For mirroring, the MBC used offers a server-side interface that could be utilized. However, such interlocking between the MBC and the DBMS would reduce the application’s transferability and hinder the replacement of individual components. Therefore, we rely on a client-side connection to the DBMS.
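Such a client-side mirroring call can be as simple as posting a GraphQL mutation once an edit is completed, as the following sketch illustrates; the endpoint, mutation, and field names are illustrative assumptions, not the project’s actual schema.

```csharp
using System.Collections;
using System.Globalization;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Minimal sketch of mirroring a completed edit to the DBMS via a client-side
// GraphQL call; endpoint, mutation, and field names are illustrative
// assumptions, not the project's actual schema.
public class EditMirror : MonoBehaviour
{
    const string Endpoint = "https://example.org/graphql"; // hypothetical

    public void MirrorEdit(string id, Vector3 pos, float rotY, float scale)
    {
        string mutation = string.Format(CultureInfo.InvariantCulture,
            "mutation {{ updateEditableObject(input: {{ id: \"{0}\", " +
            "x: {1}, y: {2}, z: {3}, rotY: {4}, scale: {5} }}) {{ id }} }}",
            id, pos.x, pos.y, pos.z, rotY, scale);
        StartCoroutine(Post(mutation));
    }

    IEnumerator Post(string query)
    {
        string body = JsonUtility.ToJson(new GraphQLRequest { query = query });
        using (var request = new UnityWebRequest(Endpoint, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");
            yield return request.SendWebRequest();

            if (request.result != UnityWebRequest.Result.Success)
                Debug.LogWarning("Mirroring edit failed: " + request.error);
        }
    }

    [System.Serializable]
    class GraphQLRequest { public string query; }
}
```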
Real-time DBMS services like Google Firebase and Amazon’s AWS AppSync are not suitable for the required low-latency transmission between users in the multiuser mode, as argued in Section 4.3, but they are capable of efficient persistent data storage. Both service providers offer SDKs for the Unity Game Engine; notwithstanding, we argue that they are impractical for participation scenarios for various reasons: First, the DBMSs are offered as proprietary services; if they are discontinued, the support and availability of the developed application would also be affected. Second, the DBMSs are generally not designed to interface with spatial data and spatial data infrastructures, which reduces interoperability with planning industry-standard software. Third, the DBMSs are often hosted in countries that do not comply with the legally required data protection regulations.
In conclusion, we argue that the chosen DBMS should be open source or hostable on a dedicated server. Several conventional DBMSs fulfil these requirements, but no database driver is available that can be used with the Unity Game Engine and meets our expectation of cross-platform support. Therefore, for the first prototype, we rely on GraphQL to connect the DBMS to the clients. Technically, we utilize the Graph DBMS Dgraph [36], as it allows the schema to be flexibly adapted to new requirements of the application and the PaKOMM concept with little effort.
In addition to the ability to store data persistently, the DBMS also allows the creation of planning versions. The implementation of planning versions supports three core components of the participation workshop: the creation, comparison, and discussion of the participants’ ideas. In the first PaKOMM prototype, we created a template with editable objects that is automatically loaded when a new planning version is created. The saved planning versions can be loaded and edited from any device. The loading of curated content can be extended at any stage, up to the systematic creation of stakeholder- and subtask-specific predefined environments.

5. Pretest

The presented version of the prototype was exhibited at a regional photography fair in Hamburg in October 2021, where pretests with visitors were carried out. The pretests focused on the system’s usability and on how the use of the prototype for participation processes in urban planning is generally assessed.
Unfortunately, only a few visitors took part, so a total of only 13 persons (6 female, 7 male, 0 diverse) aged between 20 and 56 (average age = 32, SD = 12) filled in the questionnaire completely. Seven of them tested both VR and the touch table, two tested only VR, and four tested only the touch table app.
The visitors received a short briefing and were then able to try out the application on the touch table, in VR, or both, with no time limit per participant. These test phases were accompanied by discussions in which the supervisors provided support and asked which aspects of handling were difficult and which additional functions would be desirable for the future. The suggestions for improvement mentioned by the visitors were recorded by the supervisors. Since this procedure is aimed at identifying potential improvements rather than providing a scientific evaluation, an additional questionnaire was used: after finishing the test phase, visitors were asked to fill in an online questionnaire, which consisted of the System Usability Scale [37] and the following qualitative questions:
  • What did you like about using the system?
  • What did you not like or miss about using the system?
  • How do you evaluate the use of the system for citizen participation procedures?
Even though the small number of participants and the non-laboratory test conditions do not allow any well-founded conclusions, the results nevertheless show a tendency. Three of the four touch table users and four users of the combination of touch table and VR mentioned that the prototype was intuitive and easy to use. The discussions revealed that many of them support the use of the setup in participation processes and consider the prototype easily transferable to other planning use cases. In addition, many practical hints on how to improve the usability of the prototypes were given. Potential additional functionalities mentioned included, for example, features to duplicate objects and to align objects on the touch table and in VR.

6. Discussion

The demand for participant engagement in urban planning shows a need for tools that enable communication between different stakeholders and make planning processes more transparent. On the one hand, common methods such as the singular use of interactive touch table applications, 3D web applications, or physical models made of plasterboard are first steps towards enhancing spatial perception. On the other hand, the potential to explore complex spatio-temporal data via Mixed Reality in a playful and low-threshold way has not been fully exploited yet. This paper addressed this gap by developing a novel, general technical concept that covers the combined use of applications for interactive touch tables, AR, and VR in collaborative and co-creative off-site, online, and on-site participation scenarios.
Within this concept, we proposed three main technical requirements for the setup: firstly, a cohesive user interface and interaction concept for all applications; secondly, a multiuser backend component to enable contemporaneous collaboration and co-creation with low-latency transmission of avatars and edited objects; and thirdly, persistent storage to keep, re-edit, and discuss the developed planning versions.
Addressing these main requirements, we introduced the first prototype’s setup that contains a horizontally placed touch screen with object detection and a second vertically placed screen. This stationary setup is combined with a VR application that allows collaborative planning with other VR users and participants using the touch table.
General transferability of the implementation was already an essential requirement for the first prototype. To address the high standards of data protection, which are often mandatory for planning processes, we selected backend components that can be hosted on dedicated servers. Additionally, to ensure that the two backend components used, the DBMS and the MBC, can be replaced independently, we did not rely on the MBC’s server-side interface to store the users’ edits in the DBMS. Instead, we utilized GraphQL to establish a client-side connection to the DBMS. With this, we keep the multiuser network logic separate from the saving of the planning versions.
With the Game Engine chosen for the implementation, we relied on a development environment capable of multi-platform development. We thus enabled the parallel development for Windows, Android, iOS, and iPadOS devices. Furthermore, as all common HMD manufacturers have provided SDKs for the integration into the Game Engine so far, we expect that the VR application can be easily extended to support HMD devices to be released in the future.
We expect a minor effort to adapt the prototype regarding the backend and the targeting of new devices. However, the effort required to change the environment can be considerably higher. Since the availability and quality of spatial data vary from area to area, a review and curation of the data are necessary, and no standard import pipeline in the Game Engines can be provided. In the use case of the first prototype, for example, manual post-processing had to be applied after importing the DEM into the Unity Game Engine, as some extracts of the DEM deviated considerably from the real terrain in heavily vegetated areas. Apart from the static environment components, the editable objects also have to be replaced: they must be created or purchased tailored to the particular participation process. This paper presented five preselected object categories, which can vary depending on the use case. Due to the chosen menu component, the editable objects can be changed easily.
In the discussion about the use of mixed reality technologies, it is often asked whether the effort justifies the benefit. We argue that visualizations of three-dimensional objects in a three-dimensional medium such as mixed reality can provide a more realistic impression and allow more informed assessments of the situation than reduced two-dimensional representations. For example, two-dimensional renderings used in architectural competitions do not readily convey the visibility from or to existing buildings. With an interactive three-dimensional representation, on the other hand, viewers can take any point of view and thus look at and evaluate the object from several perspectives. In which use cases a benefit arises that exceeds conventional means is what we would like to find out in our future work. To this end, we will test and evaluate the technical setting described here in various workshop formats with diverse stakeholders. If necessary, we will then adapt the aforementioned requirements for the system or add new ones. Based on our experience, the following aspects have to be weighed: the effort and costs against a better information base, the increase in motivation through new technologies and gamification elements, and the transferability to other use cases and planning areas.

7. Outlook on Future Work

The presented first prototype is still under development and has not yet been evaluated systematically. As described in Section 5, first pretests have been conducted, but so far only with a few participants. Nevertheless, the experience from the pretests points to the future work: the further development of the prototype, followed by a more extensive user study.
The further development of the VR and touch table prototype includes refinements for higher usability and additional features like copying and aligning objects, a comprehensive tutorial, and user guidance. As mentioned in Section 4.1.1, we will also modify the VR prototype to allow the use of HMDs other than Oculus by changing the SDK for user interaction. These changes are currently in active development.
Subsequently, the prototype will be further developed and adapted to cover more of the introduced concept, such as annotations that can be viewed and created on all devices included in the PaKOMM setting. In addition to the off-site-focused prototypes for VR and the touch table, we will next develop the AR application for the presented on-site scenario and connect it with the presented prototype.
An extensive user study is planned. As the presented prototype already covers many potential workshop settings, the next development steps will be accompanied by concrete co-creation workshops with diverse stakeholders to evaluate the current prototype’s implementation and the proposed concept. We want to find out whether the use of the described technologies can make collaboration and co-creation processes more effective and efficient. For this purpose, we will conduct semi-structured interviews, e.g., to find out whether
  • a three-dimensional representation supports expert-layman communication better than previous visualizations;
  • other groups of people, which have been poorly represented in participation processes so far, can be involved through the use of the technologies;
  • an increase in motivation takes place.
In our future work, we will test different workshop formats and determine which combinations of synchronous and asynchronous use of the devices are promising to answer our research questions.

Author Contributions

Conceptualization, Patrick Postert, Anna E. M. Wolf and Jochen Schiewe; data curation, Patrick Postert and Anna E. M. Wolf; funding acquisition, Jochen Schiewe; investigation, Patrick Postert and Anna E. M. Wolf; methodology, Patrick Postert, Anna E. M. Wolf and Jochen Schiewe; project administration, Jochen Schiewe; software, Patrick Postert and Anna E. M. Wolf; supervision, Jochen Schiewe; visualization, Patrick Postert and Anna E. M. Wolf; writing–original draft, Patrick Postert, Anna E. M. Wolf and Jochen Schiewe; writing–review and editing, Patrick Postert, Anna E. M. Wolf and Jochen Schiewe. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Hamburg Ministry of Science, Research and Equality, grant number LFF FV80 (project: “Partizipation: kollaborativ und multimedial”).

Acknowledgments

We would like to thank Gesa Ziemer, Roland Greule, Hilke Berger and Imanuel Schipper for their contribution to the concept. Furthermore, we would like to thank Matthias Kuhr for his contribution to the implementation of the first prototype.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lieven, C.; Lüders, B.; Kulus, D.; Thoneick, R. Enabling Digital Co-creation in Urban Planning and Development. In Human Centred Intelligent Systems; Zimmermann, A., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2021; pp. 415–430. [Google Scholar]
  2. Gebetsroither-Geringer, E.; Stollnberger, R.; Peters-Anders, J. Interactive Spatial Web-Applications as New Means of Support for Urban Decision-Making Processes. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 59–66. [Google Scholar] [CrossRef] [Green Version]
  3. Arnstein, S.R. A Ladder Of Citizen Participation. J. Am. Inst. Plan. 1969, 35, 216–224. [Google Scholar] [CrossRef] [Green Version]
  4. Dörk, M.; Monteyne, D. Urban Co-Creation: Envisioning New Digital Tools for Activism and Experimentation in the City. In Proceedings of the CHI Conference, Vancouver, BC, Canada, 7–12 May 2011; pp. 7–12. [Google Scholar]
  5. Meijer, A.; Bolívar, M.P.R. Governing the smart city: A review of the literature on smart urban governance. Int. Rev. Adm. Sci. 2016, 82, 392–408. [Google Scholar] [CrossRef]
  6. Lund, D.H. Co-creation in urban governance: From inclusion to innovation. Scand. J. Public Adm. 2018, 22, 3–17. [Google Scholar]
  7. Berger, H.M.; Postert, P.; Schipper, I.; Wolf, A.E.M. Taking Mixed Reality Serious: Co-Creation in the City of the Future. In Digital City Science; Schwegmann, R., Ziemer, G., Noennig, J.R., Eds.; Jovis: Berlin, Germany, 2021; pp. 144–149. [Google Scholar]
  8. Bason, C. Leading Public Sector Innovation 2E: Co-Creating for a Better Society; Policy Press: Bristol, UK, 2018. [Google Scholar]
  9. Skwarek, M. Augmented Reality Activism. In Augmented Reality Art: From an Emerging Technology to a Novel Creative Medium; Geroimenko, V., Ed.; Springer: Cham, Switzerland, 2018; pp. 3–40. [Google Scholar]
  10. Aibar, E.; Bijker, W.E. Constructing a City: The Cerdà Plan for the Extension of Barcelona. Sci. Technol. Hum. Values 1997, 22, 3–30. [Google Scholar] [CrossRef]
  11. Bentlin, F. Understanding the Hobrecht Plan. Origin, composition, and implementation of urban design elements in the Berlin expansion plan from 1862. Plan. Perspect. 2018, 33, 633–655. [Google Scholar] [CrossRef]
  12. Bernet, C. The ‘Hobrecht Plan’ (1862) and Berlin’s urban structure. Urban Hist. 2004, 31, 400–419. [Google Scholar] [CrossRef]
  13. Klaus, S.L. Efficiency, Economy, Beauty: The City Planning Reports of Frederick Law Olmsted, Jr., 1905–1915. J. Am. Plan. Assoc. 1991, 57, 456–470. [Google Scholar] [CrossRef]
  14. Hälker, N.; Hovy, K.; Ziemer, G. Das Projekt „FindingPlaces“. Ein Bericht aus der Praxis zwischen Digitalisierung und Partizipation. In Interdisziplinäre Perspektiven zur Zukunft der Wertschöpfung; Redlich, T., Moritz, M., Wulfsberg, J.P., Eds.; Springer: Wiesbaden, Germany, 2018; pp. 273–284. [Google Scholar]
  15. Gottwald, S.; Brenner, J.; Janssen, R.; Albert, C. Using Geodesign as a boundary management process for planning nature-based solutions in river landscapes. Ambio 2021, 50, 1477–1496. [Google Scholar] [CrossRef] [PubMed]
  16. Mansourian, A.; Taleai, M.; Fasihi, A. A Web-Based Spatial Decision Support System to Enhance Public Participation in Urban Planning Processes. J. Spat. Sci. 2011, 56, 269–282. [Google Scholar] [CrossRef]
  17. Herbert, G.; Chen, X. A comparison of usefulness of 2D and 3D representations of urban planning. Cartogr. Geogr. Inf. Sci. 2015, 42, 22–32. [Google Scholar] [CrossRef]
  18. Khan, Z.; Dambruch, J.; Peters-Anders, J.; Sackl, A.; Strasser, A.; Fröhlich, P.; Templer, S.; Soomro, K. Developing Knowledge-Based Citizen Participation Platform to Support Smart City Decision Making: The Smarticipate Case Study. Information 2017, 8, 47. [Google Scholar] [CrossRef] [Green Version]
  19. van Leeuwen, J.P.; Hermans, K.; Jylhä, A.; Quanjer, A.J.; Nijman, H. Effectiveness of Virtual Reality in Participatory Urban Planning: A Case Study. In Proceedings of the 4th Media Architecture Biennale Conference, Beijing, China, 13–16 November 2018; pp. 128–136. [Google Scholar]
  20. Ma, Y.; Wright, J.; Gopal, S.; Phillips, N. Seeing the invisible: From imagined to virtual urban landscapes. Cities 2020, 98, 102559. [Google Scholar] [CrossRef]
  21. Carozza, L.; Valero, E.; Bosché, F.; Banfill, G.; Mall, R.; Nguyen, M. Urbanplanar: BIM Mobile Visualisation in Urban Environments with Occlusion-Aware Augmented Reality. In Proceedings of the Joint Conference on Computing in Construction, Heraklion, Greece, 4–7 July 2017; Heriot-Watt University: Heraklion, Greece, 2017; pp. 229–236. [Google Scholar]
  22. Slater, M.; Wilbur, S. A Framework for Immersive Virtual Environments (FIVE): Speculations on the Role of Presence in Virtual Environments. Presence Teleoperat. Virtual Environ. 1997, 6, 603–616. [Google Scholar] [CrossRef]
  23. Alonso, L.; Zhang, Y.R.; Grignard, A.; Noyman, A.; Sakai, Y.; ElKatsha, M.; Doorley, R.; Larson, K. CityScope: A Data-Driven Interactive Simulation Tool for Urban Design. Use Case Volpe. In Unifying Themes in Complex Systems IX; Morales, A.J., Gershenson, C., Braha, D., Minai, A.A., Bar-Yam, Y., Eds.; Springer: Cham, Switzerland, 2018; pp. 253–261. [Google Scholar]
  24. Välkkynen, P.; Siltanen, S.; Väätänen, A.; Oksman, V.; Honkamaa, P.; Ylikauppila, M. Developing Mixed Reality Tools to Support Citizen Participation in Urban Planning. In Proceedings of the 6th International Conference on Communities and Technologies, Munich, Germany, 30 June 2013. [Google Scholar]
  25. UHD MultiTouch Table with Object-Recognition NEXUS. Available online: https://www.eyefactive.com/en/touchscreen-table-nexus (accessed on 10 November 2021).
  26. TUIO. Available online: https://tuio.org/ (accessed on 10 November 2021).
  27. Oculus Quest 2: Our Most Advanced New All-in-One VR-Headset|Oculus. Available online: https://www.oculus.com/quest-2/ (accessed on 10 November 2021).
  28. Höhl, W. Official Survey Data and Virtual Worlds—Designing an Integrative and Economical Open Source Production Pipeline for xR-Applications in Small and Medium-Sized Enterprises. Big Data Cogn. Comput. 2020, 4, 26. [Google Scholar] [CrossRef]
  29. Keil, J.; Edler, D.; Schmitt, T.; Dickmann, F. Creating Immersive Virtual Environments Based on Open Geospatial Data and Game Engines. J. Cartogr. Geogr. Inf. 2021, 71, 53–65. [Google Scholar] [CrossRef]
  30. Radial Layouts, Nice and Simple in Unity3Ds UI System—Just a Pixel. Available online: http://www.justapixel.co.uk/2015/09/14/radial-layouts-nice-and-simple-in-unity3ds-ui-system/ (accessed on 10 November 2021).
  31. Anguelov, D.; Dulong, C.; Filip, D.; Frueh, C.; Lafon, S.; Lyon, R.; Ogale, A.; Vincent, L.; Weaver, J. Google Street View: Capturing the World at Street Level. Computer 2010, 43, 32–38. [Google Scholar] [CrossRef]
  32. Di Luca, M.; Seifi, H.; Egan, S.; Gonzalez-Franco, M. Locomotion Vault: The Extra Mile in Analyzing VR Locomotion Techniques. Available online: https://locomotionvault.github.io/ (accessed on 10 November 2021).
  33. Normcore Private. Available online: https://normcore.io/normcore-private (accessed on 10 November 2021).
  34. On-Premises Cross Platform Multiplayer Game Backend|Photon Engine. Available online: https://www.photonengine.com/en-US/Server (accessed on 10 November 2021).
  35. Choosing the Right Netcode for Your Game|Unity Blog. Available online: https://blog.unity.com/technology/choosing-the-right-netcode-for-your-game (accessed on 10 November 2021).
  36. Dgraph. Available online: https://github.com/dgraph-io/dgraph (accessed on 10 November 2021).
  37. Brooke, J. SUS-A quick and dirty usability scale. Usabil. Eval. Ind. 1996, 189, 4–7. [Google Scholar]
Figure 1. PaKOMM combines three participation approaches: Off-site (a), Online (b), and On-site (c) collaboration [7]. The green frame indicates the first prototype’s implementation.
Figure 2. PaKOMM’s touch table setup consists of a horizontal main screen where interactions are performed and a vertical screen where the participants can observe their edits from a different perspective, which can be manipulated on the touch table as well.
Figure 3. Each co-user is shown as an avatar with a head and hands, allowing simple gestures like pointing, thumb up or waving.
