Article

Autonomous Indoor Scanning System Collecting Spatial and Environmental Data for Efficient Indoor Monitoring and Control

Department of Software, Catholic University of Pusan, Busan 46252, Korea
* Author to whom correspondence should be addressed.
Processes 2020, 8(9), 1133; https://doi.org/10.3390/pr8091133
Submission received: 23 July 2020 / Revised: 3 September 2020 / Accepted: 7 September 2020 / Published: 11 September 2020
(This article belongs to the Special Issue Applications of Process Control in Energy Systems)

Abstract

As activities related to entertainment, business, shopping, and conventions increasingly take place indoors, the demand for indoor spatial information and indoor environmental data is growing. Unlike in outdoor environments, obtaining spatial information in indoor environments is difficult. Given the absence of GNSS (Global Navigation Satellite System) signals, various technologies for indoor positioning, mapping, and modeling have been proposed, and related business models for indoor space services, safety, convenience, facility management, and disaster response have been suggested. This paper proposes an autonomous scanning system for the collection of indoor spatial and environmental data. The proposed system collects spatial data suitable for extracting a two-dimensional indoor drawing and capturing spatial images, as well as indoor environmental data on temperature, humidity, and particulate matter. For these operations, the system has two modes, manual and autonomous; the autonomous mode is the system's main function, and the manual mode is implemented additionally. The system can be applied in facilities that lack infrastructure for indoor data collection, both for routine indoor data collection and for immediate data collection in cases of emergency (e.g., accidents, disasters).

1. Introduction

People are known to spend more than 90% of their entire lives indoors. As this time increases, indoor spaces’ various elements and environments will inevitably impact human life. Additionally, the expression and understanding of indoor spaces have grown more complicated as commercial buildings and large-scale facilities have increased in number. Accordingly, the demand for location-based services that can process space and location information is increasing. In order to realize such a service, basic indoor spatial information must be provided. This includes detailed indoor maps and models that can be used for route planning, navigation guidance, and many other applications, as well as location information for pedestrians and goods in indoor spaces.
Unlike in outdoor environments, obtaining spatial information in indoor environments is difficult. In the absence of GNSS (Global Navigation Satellite System) signals, various technologies for indoor positioning, mapping, and modeling have been proposed. Indoor positioning technology determines an object's or person's location and then tracks its movement to the next location. Indoor mapping and modeling entail the generation of spatial information based on data that already exists or is obtained by surveying, followed by detection of changes and corresponding updates. Related business models for indoor space services, convenience, facility management, safety, and disaster response have been suggested.
To meet the market demand for indoor applications, techniques for deriving indoor spatial information, together with performance analyses of related systems, are presented in [1,2]; they typically suggest schemes for deriving indoor maps and models from collected point cloud data. In addition, smart home and building infrastructures integrating IoT technology, based on sensors and actuators for monitoring and control of indoor spaces such as homes or buildings, have been proposed [3,4]. We therefore consider a system that can autonomously collect indoor data for real-time monitoring and control of indoor spaces where no such infrastructure exists. The data include indoor spatial data as well as environmental data.
In this paper, an autonomous scanning system for the collection of indoor spatial and environmental data, and thereby for efficient monitoring and control of indoor spaces, is proposed. The system collects indoor environmental data on temperature, humidity, and particulate matter, along with spatial data suitable for extracting a two-dimensional indoor drawing and capturing spatial images. For these operations, the system has two modes, manual and autonomous. It comprises three main components: a user terminal, a gateway, and an autonomous scanner. The user terminal controls the autonomous scanner and immediately displays scanned information relayed through the gateway. The gateway receives control commands from the user terminal and sends monitoring information back to the user. Finally, the autonomous scanner comprises a mobile robot, a lidar sensor, a camera sensor, a temperature/humidity sensor, and a particulate matter sensor; it overcomes the range limitation of the lidar sensor in collecting indoor spatial information and performs autonomous scanning. The proposed system can be applied in facilities that lack infrastructure for indoor data collection, both for routine indoor data collection and for immediate data collection in cases of emergency (e.g., accidents, disasters). The remainder of this paper is organized as follows. In Section 2, the background and motivation of the present study are discussed. Section 3 introduces the autonomous scanning system's design and prototype implementation. Section 4 presents the system's experimental results. Section 5 concludes the paper.

2. Background and Motivation

As an indoor space is a deliberately designed space, it can have many characteristics according to its purpose. Therefore, various related information is available. In this section, we survey the research on indoor mapping and modeling, indoor positioning and localization, and application services utilizing indoor spatial and environmental information. In addition, we present the motivation for the present research.

2.1. Indoor Mapping and Modeling

Various techniques have been developed for efficiently constructing indoor data and expressing and visualizing it, for example, in the form of indoor maps or models. Among those techniques are automatic or semi-automatic extraction from architectural blueprints, Building Information Management (BIM) systems that use specialized software, and scanning-based indoor map construction.
Autonomous mobile robots are widely employed in many areas. A mobile robot with a Laser Range Finder (LRF) developed for environment-scanning purposes is presented in [5,6]. In that study, the mobile robot was operated in a known environment; as such, path following was the easiest navigation mode to adopt. Path following moves and navigates the robot along a preprogrammed path. When the robot reaches each of the marked waypoints (black points), it stops moving in order to scan the environment; then, after the scanned data has been completely saved in the computer, the robot continues along its path to the end point [6].
Adán et al. present a critical review of current mobile scanning systems in the field of automatic 3D building-modeling in [7]. Díaz-Vilariño et al. propose a method for the 3D modeling of indoor spaces from point cloud data in [8]. Some work has evaluated and analyzed the performance of conventional indoor scanning and mapping systems [9,10] in hand-held, backpack, or trolley form. A simultaneous localization and mapping approach based on data captured by a 2D laser scanner and a monocular camera is introduced in [11].

2.2. Indoor Positioning and Localization

Indoor positioning technology identifies and determines the location of an object or person and tracks its movement to a new location. It is a key element in services that utilize indoor spatial information.
In outdoor scenarios, the mobile terminal position can be obtained with a high degree of accuracy, thanks to GNSS. However, GNSS encounters problems in indoor environments and scenarios involving deep shadowing effects. Various indoor and outdoor technologies and methodologies, including Time of Arrival (ToA), Time Difference of Arrival (TDoA), and Received Signal Strength (RSS)-based fingerprinting, as well as hybrid techniques, are surveyed in [12], which emphasizes indoor methodologies and concepts. Additionally, it reviews various localization-based applications to which location-estimation information is critical.
Recently, a number of new techniques have been introduced, along with wireless technologies and mechanisms that leverage the Internet of Things (IoT) and ubiquitous connectivity to provide indoor localization services to users. Other indoor localization techniques, such as Angle of Arrival (AoA), Time of Flight (ToF), and Return Time of Flight (RTOF), in addition to the above-noted RSS, are analyzed in [13,14,15,16]; these are based on WiFi, Radio Frequency Identification (RFID), Bluetooth, Ultra-Wideband (UWB), and other technologies proposed in the literature.
For indoor positioning, various algorithms utilizing WiFi signals are investigated in [17,18,19,20,21,22] to improve localization accuracy and performance. With regard to UWB ranging systems, an improved target detection and tracking scheme for moving objects is discussed in [23], and a novel approach to precise TOA estimation and ranging error mitigation is presented in [24].
Thanks to the widespread deployment of WiFi and the support of the IEEE 802.11 standard by the majority of mobile devices, most proposed indoor localization systems are based on WiFi technologies. However, the inherent noise and instability of wireless signals usually degrade accuracy and robustness in dynamically changing environments. A recent approach applies deep learning-based frameworks to improve localization accuracy. Abbas et al. propose an accurate and robust WiFi fingerprinting localization technique based on a deep neural network in [25]. A deep learning framework for joint activity recognition and indoor localization using WiFi channel state information (CSI) fingerprints is presented in [26]. Hoang et al. describe RNNs (Recurrent Neural Networks) for WiFi fingerprinting indoor localization, focusing on trajectory positioning, in [27].

2.3. Applications Utilizing Indoor Spatial and Environmental Information

Applications utilizing indoor spatial information facilitate indoor living and can be defined as services provided through wired/wireless terminals by acquiring and managing information directly or indirectly related to the indoor space. Such services can be categorized into those enhancing occupants' security and convenience, facility management and disaster response, marketing, and other businesses built on basic services that locate their client occupants.
Security services in high-rise and large buildings rely on the effective utilization of indoor spatial information. These services include software and smart equipment to maintain the security of occupants in the building and to assist in firefighting and rescue activities.
Convenience enhancement services include user location-based route guidance services that guide visitors through complex facilities, such as large-scale buildings, underground facilities, and complex space facilities.
Facility management and disaster response services can be classified into space and asset management, BEMS (Building Energy Management System), and PSIM (Physical Security Information Management). BEMS is an integrated building energy control and management system that enables facility managers to achieve rational energy consumption by using ICT (Information and Communication Technology) to efficiently maintain a pleasant and functional work environment for users. PSIM is a service for responding to disaster situations by identifying such situations and assisting evacuees [28].
As one research direction for efficient energy control of buildings using ICT, video/image processing technologies have been explored to monitor human thermal comfort in a contactless way, providing feedback signals that allow a BEMS to create energy-efficient, pleasant, and functional work environments [29,30,31].
Martinez-Sala et al. describe an indoor navigation system for visually impaired people in [32]. Plagerasa et al. propose a system for collecting and managing sensor data in a smart building operating in an IoT environment in [33]. The importance of indoor environmental monitoring for human safety and health is emphasized in [34]. Mujan et al. review the state-of-the-art literature and establish a connection between the factors that influence health and productivity in any given indoor environment, whether residential or commercial, in [35]. A systematic review of the relevance of indoor sensors to managing optimal energy saving, thermal comfort, visual comfort, and indoor air quality in the built environment is presented in [36]. Therefore, indoor environment scanning can be useful for investigating, monitoring, and studying such indoor applications.
This paper proposes an autonomous scanning system for the acquisition of indoor environmental data, such as temperature, humidity, particulate matter, and indoor images, as well as spatial data, for indoor environment status awareness and indoor visualization. The proposed system can be applied in facilities that lack infrastructure for indoor data collection, both for routine indoor data collection and for immediate data collection in cases of emergency (e.g., accidents, disasters).

3. Autonomous Scanning System

This section presents the architecture, functional design and prototype implementation of the proposed autonomous scanning system.

3.1. Architecture of Autonomous Scanning System

The conceptual architecture of the autonomous scanning system is defined as shown in Figure 1. It is composed of three main parts: a user terminal, a gateway, and an autonomous scanner.
The user terminal is configured to receive and confirm the indoor spatial and environmental data results and to input control commands. The gateway is configured to relay control commands input from the user terminal and the spatial and environmental data collected by the autonomous scanner. The autonomous scanner is configured to facilitate indoor data collection and is designed with autonomous driving capability. In addition, various sensor equipment is used for indoor environmental data collection.

3.2. Functional Design of Autonomous Scanning System

In the design of the autonomous scanning system, we utilized prototyping platforms because, for the purpose of detailed function definition, it was necessary to first confirm the components to be used. Figure 2 shows the proposed system’s components.
A smartphone (Galaxy J6) is used for user-terminal monitoring and control. A Raspberry Pi 3 Model B, a single-board computer, is used as the gateway. It has a 64-bit quad-core processor, on-board WiFi, Bluetooth, and USB boot capabilities. The Raspberry Pi's various communication and interface options and its computing capabilities are suitable for rapid gateway prototyping and experimentation. The autonomous scanner is composed of a mobile robot, a lidar sensor, a camera sensor, a temperature/humidity sensor, and a particulate matter sensor. iRobot Create 2, based on the Roomba 600, is used as the mobile robot. This is a programmable robot platform designed to enable the user to set various functions such as motion, sound, and light. As such, it is equipped with LEDs and various sensors and can be programmed for specific sounds, behavior patterns, LED lighting, and so on [37].
A Raspberry Pi Camera V2 is used as the camera sensor. It is attached to the Raspberry Pi 3 via a Camera Serial Interface (CSI). A DHT22 is used as a low-cost digital temperature/humidity sensor. It uses a capacitive humidity sensor and a thermistor to measure the surrounding air and outputs a digital signal on its data pin. A Plantower PMS7003 is used as the particulate matter sensor. It is a digital, universal particle-concentration sensor that measures the number of suspended particles in the air, i.e., the particle concentration, and outputs the result over a digital interface. The main outputs are the mass concentration and the number of particles of different sizes per unit volume, where the particle-count unit volume is 0.1 L and the unit of mass concentration is μg/m³. Lidar is an acronym for light detection and ranging; it is a surveying method that measures the distance to a target by illuminating the target with pulsed laser light and measuring the reflected pulses with a sensor. The RPLidar A2, used as the lidar sensor, is a 360-degree 2D laser scanner that can take up to 4000 laser-ranging samples per second, with a scanning range of 12 m [38].
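For reference, a particulate matter reading of this kind could be obtained over a serial port roughly as in the following minimal sketch; the port name is hypothetical, and the byte offsets follow the commonly documented 32-byte Plantower output frame, so they should be checked against the PMS7003 datasheet rather than taken as the implementation used here.

```python
import serial

# Hypothetical port name; on Raspberry Pi a USB-serial adapter typically
# appears as /dev/ttyUSB0.
PORT = "/dev/ttyUSB0"

def read_pms7003_frame(ser):
    """Read one 32-byte Plantower frame and return (pm1_0, pm2_5, pm10)
    atmospheric concentrations in ug/m3 (offsets per the public datasheet)."""
    # Synchronize on the 0x42 0x4D frame header.
    while True:
        if ser.read(1) == b'\x42' and ser.read(1) == b'\x4d':
            break
    frame = ser.read(30)  # remaining bytes of the 32-byte frame
    data = [int.from_bytes(frame[i:i + 2], 'big') for i in range(0, 30, 2)]
    # data[0] is the frame length; data[4..6] are the atmospheric PM values.
    return data[4], data[5], data[6]

if __name__ == "__main__":
    with serial.Serial(PORT, baudrate=9600, timeout=2) as ser:
        pm1, pm25, pm10 = read_pms7003_frame(ser)
        print(f"PM 1.0: {pm1} PM 2.5: {pm25} PM 10.0: {pm10}")
```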

3.2.1. Function Definitions

The user terminal has two modes for indoor data collection, either manual or autonomous. In the manual mode, the movement of the mobile robot, as well as the lidar and camera sensors, is manually controlled. When the autonomous mode is selected, the mobile robot performs autonomous driving, avoiding obstacles to collect and provide sensed data periodically.
The gateway receives and interprets the user's commands and executes a program that drives the autonomous scanner accordingly. It stores the sensed data collected by the autonomous scanner: it generates spatial data by converting the data collected from the lidar sensor into a two-dimensional drawing and stores the periodically measured temperature/humidity and particulate matter data. These data are transmitted to the user in response to the user's requests. Additionally, a program that controls the movement of the robot is executed according to the driving mode of the mobile robot; the robot drives forward or backward and makes left and right turns according to the user's movement commands. The gateway also defines and operates the robot's autonomous driving for autonomous scanning.
An autonomous scanner is defined as a driving vehicle that receives commands from the user through the gateway. The mobile robot equipped with sensors moves in the indoor space according to the user’s command, collects spatial data and environmental data, and transmits the data to the gateway. The lidar sensor collects indoor spatial data and transmits it to the gateway. The temperature/humidity sensor and particulate matter sensor collect the corresponding data and transmit them to the gateway. The mobile robot operates in the manual mode or the autonomous mode according to the command received through the gateway. The robot can move forward, backward and turn left and right.

3.2.2. Autonomous Scanning Algorithm

For the autonomous scanning operation, it is necessary to define the mobile robot's autonomous driving technique. For the autonomous driving of the robot, we utilize the spatial data collected by the lidar sensor. The lidar scan data contain the distance and angle from the current position of the mobile robot to surrounding obstacles. From the collected data, we extract the direction and distance to the robot's next destination. The detailed algorithm is as follows.
We sort the scanned data in descending order of distance, extract a certain number of top-ranked samples, and average their angles. The number of extracted samples is determined in the implementation by considering the sampling rate and scanning frequency of the lidar sensor. The average of the angles is obtained by the mean of circular quantities, as in Equation (1).
$$\bar{\theta} = \begin{cases} \arctan\left(\dfrac{\bar{s}}{\bar{c}}\right) & \text{if } \bar{s} > 0,\ \bar{c} > 0 \\ \arctan\left(\dfrac{\bar{s}}{\bar{c}}\right) + 180^{\circ} & \text{if } \bar{c} < 0 \\ \arctan\left(\dfrac{\bar{s}}{\bar{c}}\right) + 360^{\circ} & \text{if } \bar{s} < 0,\ \bar{c} > 0 \end{cases} \tag{1}$$
In Equation (1), $\bar{s}$ is the mean sine of the angles, $\bar{c}$ is the mean cosine of the angles, and $\bar{\theta}$ is the mean angle.
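As a concrete illustration, Equation (1) can be evaluated with the two-argument arctangent, which covers the quadrant cases implicitly; the following is a minimal sketch, and the sample angles are hypothetical.

```python
import math

def circular_mean_deg(angles_deg):
    """Mean of angles in degrees using the mean of circular quantities."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / len(angles_deg)
    # atan2 covers the quadrant cases of Equation (1); fold into [0, 360).
    return math.degrees(math.atan2(s, c)) % 360.0

# Hypothetical sample: angles near 0/360 degrees average to ~0, not ~180.
print(circular_mean_deg([350.0, 10.0]))  # ~0.0
```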
The angle thus obtained becomes the direction of movement toward the next destination. The moving distance is set in consideration of the distance values used in the calculation and the measurement range of the lidar sensor. Once the initial direction is defined, the backward portion of the scan, corresponding to the rotation angle behind the robot, is excluded to prevent the robot from going back.
The detailed operations of the autonomous driving algorithm are presented in Figure 3, Figure 4 and Figure 5.
Additionally, as shown in Figure 4, the algorithm works so that the proper movement direction can be extracted even if the mobile robot’s position is not centered.
Once the initial direction is defined, to prevent the robot from going back, the back part of the angle corresponding to the rotation angle from the scan is excluded. The next direction is calculated by excluding the backward angle based on the current direction, as shown in Figure 5 (left, middle). Even when the robot approaches the wall, the next direction can be derived according to the proposed autonomous driving algorithm, as presented in Figure 5 (right).

3.3. Prototype Implementation

In this section, we present the implementation of the autonomous scanning system. The prototype architecture of the autonomous scanning system is defined as shown in Figure 6. It is composed of three main parts: a user terminal, a gateway, and an autonomous scanner.
The user-terminal controls the mobile robot as well as the lidar and camera sensors. It monitors data in the forms of scanned images from the lidar sensor, captured images from the camera sensor, and data from the temperature/humidity sensor and particulate matter sensor. It exchanges this data with the gateway via TCP/IP communication. The gateway receives control commands from the user terminal and sends monitoring data back to the user terminal via TCP/IP communication.
The autonomous scanner components are connected to the gateway directly. The mobile robot, particulate matter sensor, and lidar sensor are connected and communicate via a USB serial interface. The camera sensor is attached to the gateway via a CSI (Camera Serial Interface), and the temperature/humidity sensor is connected to the gateway via a GPIO (General Purpose Input and Output) port. The gateway controls the autonomous scanner according to user commands and obtains the scanned data.
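To illustrate the command/data exchange, the following is a minimal sketch of a gateway-side TCP server; the port number and the simple text commands are assumptions made for illustration and do not reflect the actual message format of the implementation.

```python
import socket

HOST, PORT = "0.0.0.0", 9000  # hypothetical listening address/port

def handle_command(cmd: str) -> bytes:
    """Map a text command from the user terminal to a response payload."""
    if cmd == "ENV":
        # In the real system this would return the latest stored sensor data.
        return b"Temp=23.6C Humidity=29.1% PM2.5=20"
    if cmd in ("FWD", "BACK", "LEFT", "RIGHT"):
        # In the real system this would drive the mobile robot accordingly.
        return b"OK"
    return b"UNKNOWN"

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(handle_command(data.decode().strip()))
```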

3.3.1. Mobile Application for User Terminal

For the purposes of the user terminal, we developed a mobile application to monitor and control the system. Some screenshots of the mobile application are presented in Figure 7.
The right of the figure shows the control screen. When the user touches the ‘click to release’ menu, temperature, humidity, and particulate matter data are updated periodically. The operation mode of the autonomous scanner can be selected between the autonomous mode and the manual mode; the autonomous mode is the system's main function, and the manual mode is implemented additionally. When the autonomous mode is set, the mobile robot moves according to the autonomous scanning algorithm and collects indoor spatial data, image data, temperature, humidity, and particulate matter data. The initial state of the operation mode is manual, in which the mobile robot is controlled by the user using the four directional buttons below: the upward button is for forward movement; the downward button is for backward movement; the leftward button is for left turns, and the rightward button is for right turns.
If the user touches the camera icon, a picture-taking command is transmitted to the gateway, and the gateway controls the camera sensor and stores the captured image. If the user touches the gallery icon, the gateway transmits the captured image to the user terminal. The scan icon controls the lidar sensor. When the gateway receives a scan command, it controls the lidar sensor and obtains and draws 2D point coordinates as a scanned image. If the user touches the map icon, the gateway transmits the scanned image to the user.
The main difference between the autonomous mode and the manual mode is the control of the mobile robot. In the manual mode, the user must control the movement of the mobile robot, and in the autonomous mode, the mobile robot is driven to avoid obstacles according to the autonomous driving algorithm. In the autonomous mode, indoor scanning and camera control are automatically performed at every movement step in the collection of sensor data, whereas in the manual mode, they must be performed manually by the user. Even so, both temperature and humidity data are collected periodically and sent to the user terminal in the manual mode.

3.3.2. Gateway

The gateway operates in direct connection with mobile robots and sensors. According to the user’s command, it operates the associated control program and stores various data collected by the autonomous scanner.
On the Raspberry Pi, three main Python scripts are implemented. The first script processes the user's commands and operates the iRobot in the manual or autonomous mode. The second script controls the RPLidar A2 and the Pi Camera V2. The third script controls the temperature/humidity and particulate matter sensors. Data collected from each sensor is stored as a file on the Raspberry Pi. The format of a temperature/humidity sensor data file is as follows.
Temp = 23.6 °C Humidity = 29.1% Fri Mar 20 16:53:27 2020
The format of a particulate matter sensor data file is as follows.
PM 1.0: 15 PM 2.5: 20 PM 10.0: 22 Fri Mar 20 17:55:46 2020
For the purposes of tracing and analysis, the measurement date and time are added to each sensor reading.
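A log line in the format shown above might be produced roughly as in the following sketch; it assumes the commonly used Adafruit_DHT Python library, a hypothetical GPIO pin, and a hypothetical file name, so details may differ from the actual scripts.

```python
import time
import Adafruit_DHT  # assumed library for reading the DHT22 sensor

DHT_PIN = 4  # hypothetical GPIO pin
LOG_FILE = "temp_humidity.log"  # hypothetical file name

def log_temperature_humidity():
    """Read the DHT22 and append a line in the format shown above."""
    humidity, temperature = Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, DHT_PIN)
    if humidity is None or temperature is None:
        return  # sensor read failed; skip this cycle
    timestamp = time.strftime("%a %b %d %H:%M:%S %Y")
    line = f"Temp = {temperature:.1f} °C Humidity = {humidity:.1f}% {timestamp}\n"
    with open(LOG_FILE, "a") as f:
        f.write(line)

if __name__ == "__main__":
    while True:
        log_temperature_humidity()
        time.sleep(60)  # periodic measurement interval (assumed)
```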
The following Table 1 shows sample angle and distance data obtained from the RPLidar A2 lidar sensor. The data of the lidar sensor is provided to the user by converting it to a scanned image in the form of a two-dimensional drawing, and is also used to calculate the movement direction and distance of the iRobot when it is operating in the autonomous mode.
For the autonomous scan operation, autonomous driving of the iRobot is implemented as follows. First, the gateway collects angle and distance data from the current position of the iRobot to surrounding obstacles using the RPLidar A2 and sorts the data in descending order by distance. Then, it extracts the top-ranked 550~1100 samples and averages their angles using the mean of circular quantities formula. To prevent the robot from going back, code was added that excludes the backward portion of the scan, corresponding to the rotation angle behind the robot. The angle of the robot's current heading is always 0°; therefore, the excluded angular range was set from 120° to 260° in this implementation.
The angle thus obtained becomes the direction of movement toward the next destination. The moving distance is set in consideration of the distance value used in the calculation and the measurement range of the lidar sensor. In this implementation, we take the largest distance value used in the calculation, divide it by 160 (a factor determined heuristically), and set the result as the movement distance.
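Putting these steps together, a minimal sketch of the next-direction and distance computation is given below; the list of (angle, distance) pairs is assumed to come from the lidar driver, and the constants (550 samples, the 120° to 260° exclusion, the divisor 160) are taken from the description above.

```python
import math

EXCLUDE_MIN, EXCLUDE_MAX = 120.0, 260.0  # backward angular range to exclude
TOP_N = 550                              # number of top-ranked samples used
DISTANCE_DIVISOR = 160.0                 # heuristic scaling of the top distance

def next_move(scan):
    """scan: list of (angle_deg, distance_mm) pairs from one lidar rotation.
    Returns (heading_deg, move_distance) for the next destination."""
    # Drop the backward portion of the scan so the robot does not turn back.
    forward = [(a, d) for a, d in scan if not (EXCLUDE_MIN <= a <= EXCLUDE_MAX)]
    # Sort by distance, descending, and keep the top-ranked samples.
    ranked = sorted(forward, key=lambda p: p[1], reverse=True)[:TOP_N]
    # Circular mean of the selected angles (Equation (1), via atan2).
    s = sum(math.sin(math.radians(a)) for a, _ in ranked) / len(ranked)
    c = sum(math.cos(math.radians(a)) for a, _ in ranked) / len(ranked)
    heading = math.degrees(math.atan2(s, c)) % 360.0
    # Movement distance derived from the largest distance in the selection.
    move_distance = ranked[0][1] / DISTANCE_DIVISOR
    return heading, move_distance
```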

3.3.3. Autonomous Scanner

The autonomous scanner is a driving vehicle that actually operates according to a command issued by the user via the gateway. The mobile robot equipped with sensors moves in the indoor space, collects spatial data and environmental data, and transmits the data to the gateway. The movement of iRobot and control of sensors are implemented using Python scripts in Raspberry Pi, as mentioned in Section 3.3.2.
Figure 8 shows the iRobot sensors that are mainly related to driving. The wall sensor allows the robot to identify the right-hand wall and continue to move along it. The light bumper sensor detects an obstacle in advance while the robot is driving so that the robot can avoid it. The bumper sensor detects a collision with an obstacle that the robot has not detected beforehand and enables the robot to continue moving after handling the collision.
In this implementation of the autonomous scanning algorithm, when the bumper sensor is activated, the event is judged to be a collision with an unexpected obstacle, and the robot stops and moves backward a predefined distance. Then, the next destination is recalculated according to the autonomous scanning algorithm.
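The collision handling described above might be structured as in the following sketch; the helper functions (read_bumper, stop, drive_backward, drive_toward) are hypothetical placeholders for the iRobot Open Interface commands actually used, next_move is the computation sketched in Section 3.3.2, and the backup distance is an assumed value.

```python
# Hypothetical helpers standing in for the iRobot Open Interface commands.
def read_bumper() -> bool: ...              # True if either bumper is pressed
def stop() -> None: ...                     # stop the wheels
def drive_backward(mm: float) -> None: ...  # back up by the given distance
def drive_toward(heading_deg: float, distance: float) -> None: ...

BACKUP_MM = 100.0  # assumed predefined backup distance

def step(scan):
    """One movement step of the autonomous scanning loop."""
    heading, distance = next_move(scan)  # next_move: see the earlier sketch
    drive_toward(heading, distance)
    if read_bumper():
        # Unexpected obstacle: stop, back up, and recalculate the destination.
        stop()
        drive_backward(BACKUP_MM)
        return "recalculate"
    return "ok"
```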

4. Experiments

In this section, we present the experimental results for the proposed autonomous scanning system. In order to verify the operation of the proposed system, we selected some indoor spaces and performed experiments. The experiments were performed in two parts, manual scanning and autonomous scanning.

4.1. Manual Scanning Experiment

The manual scanning experiment was carried out in the lobby shown in Figure 9. The initial state of the mobile robot was the manual-movement mode. The user controlled the lidar and camera sensors.
Figure 10 presents the environmental data and captured image for the indoor lobby along with the scanned result. The user could check temperature/humidity, particulate matter data, captured images and the scanned image immediately via smartphone. The indoor spatial and environmental data was also maintained in the gateway.
In the second experiment, the user controlled the mobile robot’s movement in the manual mode. Figure 11 shows the results of movement.
Figure 12 presents the environmental data and captured image and scanned result after movement of the robot.

4.2. Autonomous Scanning Experiment

The autonomous scanning experiment was carried out in the corridor. Figure 13 shows an example of the autonomous mode operation of the proposed system. The user activates autonomous mode operation using a smartphone, and then the autonomous scanner operates in autonomous mode and the user obtains the scanned result.
The first autonomous scanning experiment was carried out in the corridor shown in Figure 14.
Figure 15 shows the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner. Each number represents a point moved to by autonomous driving from the starting position of the autonomous scanner. In the process of integrating the scan data, we found that a calibration scheme is needed; for example, there is some noise between numbers 5 and 6 in the following scanned result. This issue will be held over for future research.
In the autonomous operation mode, the mobile robot moves according to the autonomous scanning algorithm and collects indoor spatial data, image data, temperature, humidity and particulate matter data at each moving position. The gateway stores collected data from each sensor.
The initial position in the indoor space (corridor) for the first autonomous scanning experiment is shown in Figure 16.
Figure 17 presents the environmental data and captured image for the indoor corridor along with the scanned result at initial position. The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone. The indoor spatial and environmental data was also maintained in the gateway.
Figure 18 shows the position (marked 4 in Figure 15) in the corridor space where the first autonomous scanning experiment was performed.
Figure 19 presents the environmental data and captured image and scanned result at the position (marked 4 in Figure 15) from which the first autonomous scanning experiment was performed.
Figure 20 shows the corridor space at the position (marked 7 in Figure 15) from which the first autonomous scanning experiment was performed.
Figure 21 presents the environmental data, captured image and scanned result for the position (marked 7 in Figure 15) from which the first autonomous scanning experiment was performed.
The second and third autonomous scanning experiments were carried out in the corridor shown in Figure 22.
Figure 23 shows the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner. Each number represents the point moved to by autonomous driving from the starting position of the autonomous scanner.
In the autonomous operation mode, the mobile robot moves according to the autonomous scanning algorithm and collects indoor spatial data, image data, temperature, humidity and particulate matter data at each moving position. The gateway stores collected data from each sensor.
The initial position in the indoor space for the second autonomous scanning experiment is shown in Figure 24.
Figure 25 presents the environmental data and captured image for the indoor corridor along with the scanned result at the initial position in the second autonomous scanning experiment. The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone. The indoor spatial and environmental data was also maintained in the gateway.
Figure 26 shows the corridor space at the position (marked 6 in Figure 23) where the second autonomous scanning experiment was performed.
Figure 27 presents the environmental data, the captured image and the scanned result at position (marked 6 in Figure 23) from which the second autonomous scanning experiment was performed.
Figure 28 shows the corridor space and scanned result at the position (marked 12 in Figure 23) from which the second autonomous scanning experiment was performed.
The third autonomous scanning experiment was carried out in the corridor shown in Figure 22 again. Figure 29 shows the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner. Each number represents a point moved to by autonomous driving from the starting position of the autonomous scanner. As can be seen in the results, the movement path of the robot differs even though the second and third experiments were performed in the same indoor space, because the direction to the next destination is derived dynamically from the scanned data in the proposed autonomous driving algorithm.
Figure 30 shows the position (marked 2 in Figure 29) in the corridor space where the third autonomous scanning experiment was performed.
Figure 31 presents the environmental data and captured image for the indoor corridor along with the scanned result at the position (marked 2 in Figure 29). The user could check temperature/humidity, particulate matter data, the captured image and the scanned image immediately via smartphone. The indoor spatial and environmental data was also maintained in the gateway.
Figure 32 shows the position (marked 5 in Figure 29) in the corridor space where the third autonomous scanning experiment was performed.
Figure 33 presents the environmental data and captured image and scanned result at the position (marked 5 in Figure 29) from which the third autonomous scanning experiment was performed.
An additional autonomous scanning experiment was carried out in the corridor with an obstacle, and the 2D drawing result obtained by integrating the scan data collected by the autonomous scanner is shown in Figure 34. Each number represents the point moved to by autonomous driving from the starting position of the autonomous scanner. As described in the algorithm, the direction and distance to the next destination were extracted in consideration of not only the wall but also other obstacles.

4.3. Results and Discussion

As in the above experiment, we selected an indoor space and performed experiments on manual scanning and autonomous scanning in order to verify the operation of the proposed system. Table 2 shows portions of the temperature, humidity and particulate matter data collected in the previous autonomous scanning experiment.
Table 3 shows partial data for angle and distance obtained from the RPLidar A2 lidar sensor. Such data is provided to the user by converting it to a scanned image in the form of a two-dimensional drawing and is also used to calculate the movement direction and distance of the iRobot when it is operating in the autonomous mode.
Figure 35 shows a Python script for the generation of a two-dimensional drawing using the angle and distance data obtained from the RPLidar A2 lidar sensor. The data are divided into four cases according to the angle value; in each case, the angle is converted to radians, and the X and Y coordinates are generated from the distance and the angle. The two-dimensional drawing is then generated by plotting those X and Y coordinates.
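Since the script in Figure 35 is not reproduced here, the following is a minimal sketch of such a conversion; it uses a single trigonometric expression instead of four explicit angle cases, and the sample data and output file name are hypothetical.

```python
import math
import matplotlib.pyplot as plt

def to_xy(scan):
    """Convert (angle_deg, distance) lidar samples to X/Y coordinates."""
    xs, ys = [], []
    for angle_deg, distance in scan:
        rad = math.radians(angle_deg)
        xs.append(distance * math.cos(rad))
        ys.append(distance * math.sin(rad))
    return xs, ys

# Hypothetical sample of (angle in degrees, distance in mm) pairs.
sample_scan = [(0.0, 1200.0), (90.0, 800.0), (180.0, 1500.0), (270.0, 950.0)]
xs, ys = to_xy(sample_scan)
plt.scatter(xs, ys, s=2)
plt.axis("equal")
plt.savefig("scanned_image.png")  # hypothetical output file
```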
As interest in data on indoor spaces has increased, research on scanning and positioning methods for indoor space information collection and location recognition has been actively carried out, and relevant systems have been proposed. Such systems incorporate indoor mapping and modeling, positioning, and platform-related technologies. In addition, a smart home or building infrastructure integrating IoT technology based on sensors and actuators for monitoring and control of indoor spaces such as homes or buildings has been proposed. In this paper, we consider a system that can collect indoor data autonomously for monitoring and control of indoor spaces in real time where there is no infrastructure for the purpose. The data include indoor spatial data as well as environmental data.
In the design and implementation of the proposed system, we utilized prototyping platforms and performed experiments to verify the operation of the system. As shown in the results, the proposed system functions properly: it provides temperature, humidity, particulate matter, and image data as well as spatial data to the user in real time, and it can drive in an indoor space while avoiding obstacles in the autonomous mode.
The mobile robot and each sensor are connected to the gateway directly. The user-terminal exchanges control commands and sensor data with the gateway via TCP/IP communication. In the proposed system, each sensor data is stored in the gateway; therefore, in cases where the communication environment is inadequate, it can be checked against stored data in addition to real-time checking.
The iRobot Create 2 used as a prototyping tool has a battery that runs for 2 h when fully charged and has limitations in operating in more complex environments such as disaster areas. It is expected that if the proposed prototype’s robot and sensor functions and performance are improved, it will find use in various applications.

5. Conclusions

People spend most of their time indoors, and the indoor environment influences their health, wellbeing, and productivity. Accordingly, the demand for location-based services that can process space and location information, as well as for smart indoor environments with an emphasis on safe, healthy, comfortable, affordable, and sustainable living, is increasing.
In order to meet the demand for indoor applications, first, techniques for deriving indoor spatial information must be provided. Usually, schemes for deriving indoor maps and models are based on collected point cloud data. In addition, keeping indoor data up to date and verifying it is problematic, although it is critical for supporting rapid intervention in emergency situations. Information about building layouts and indoor object occupancy is a critical factor for efficient and safer emergency response in disaster management.
Second, the convergence of various new technologies such as sensing and actuation, advanced control, and data analytics has been proposed for smart indoors. Therefore, we consider a system that can collect indoor data autonomously for monitoring and control of indoor spaces in real time where there is no infrastructure for the purpose. The data include indoor spatial data as well as environmental data.
We proposed herein an autonomous indoor scanning system that can be employed for acquisition of indoor environmental data including temperature, humidity and particulate matter information along with indoor imaging and spatial data for the purposes of indoor environment status awareness and indoor visualization.
The system collects indoor data autonomously, monitoring and controlling indoor spaces in real time where there is no infrastructure for such purposes. For design and implementation of the proposed system, we utilized prototyping platforms including iRobot Create 2 and Raspberry Pi 3, along with the RPLidar A2 lidar sensor, Raspberry Pi Camera V2, the DHT22 temperature/humidity sensor, and the Plantower PMS7003 particulate matter sensor. A smartphone was used as the user terminal for monitoring and control of the system, to which ends, we implemented a mobile application.
The results of our implementation and experimentation indicated proper functioning of the proposed system. It provides temperature, humidity, particulate matter, and image data as well as spatial data to the user in real time, and it can drive in indoor spaces while avoiding obstacles in the autonomous mode. In addition, each sensor's data are stored in the gateway; therefore, in cases where the communication environment is inadequate, the stored data can be checked in addition to real-time checking. In the process of integrating the scan data, however, we found that a calibration scheme is needed. This issue will be held over for future research.
The proposed system can be applied in facilities that lack infrastructure for indoor data collection, both for routine indoor data collection and for immediate data collection in cases of emergency. Being based on a prototype, the system has limitations in operating in more complex environments such as disaster areas. It is expected that if the prototype's robot and sensor functions and performance are improved, the system will find use in various applications.
In future work, we will improve system performance by means of an advanced lidar sensor and mobile robot, and we will improve system functionality by installation of additional sensors. Additionally, a correction method for improved accuracy of scanned data and an autonomous-driving algorithm upgrade are required.

Author Contributions

S.H. prepared the paper structure, surveyed the related research, designed the proposed idea and research, and wrote the paper; D.P. implemented the prototype of the proposed system and performed the experiments. Both authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2017R1A2B4009167).

Acknowledgments

The first draft of this paper was presented at the 15th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2019) [39], Barcelona, Spain, 21–23 October 2019.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, Y.; Tang, J.; Jiang, C.; Zhu, L.; Lehtomäki, M.; Kaartinen, H.; Kaijaluoto, R.; Wang, Y.; Hyyppä, J.; Hyyppä, H.; et al. The Accuracy Comparison of Three Simultaneous Localization and Mapping (SLAM)-Based Indoor Mapping Technologies. Sensors 2018, 18, 3228. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Nikoohemata, S.; Diakitéb, A.A.; Zlatanovab, S.; Vosselmana, G. Indoor 3D reconstruction from point clouds for optimal routing in complex buildings to support disaster management. Autom. Constr. 2020, 113, 103109. [Google Scholar] [CrossRef]
  3. Dakheel, J.A.; Pero, C.D.; Aste, N.; Leonforte, F. Smart buildings features and key performance indicators: A review. Sustain. Cities Soc. 2020, 61, 102328. [Google Scholar] [CrossRef]
  4. Jia, R.; Jin, B.; Jin, M.; Zhou, Y.; Konstantakopoulos, I.C.; Zou, H.; Kim, J.; Li, D.; Gu, W.; Arghandeh, R.; et al. Design Automation for Smart Building Systems. Proc. IEEE 2018, 106, 1680–1699. [Google Scholar] [CrossRef] [Green Version]
  5. Markom, M.A.; Adom, A.H.; Tan, E.S.M.M.; Shukor, S.A.A.; Rahim, N.A.; Shakaff, A.Y.M. A mapping mobile robot using RP Lidar scanner. In Proceedings of the 2015 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Langkawi, Malaysia, 18–20 October 2015. [Google Scholar]
  6. Markom, M.A.; Shukor, S.A.A.; Adom, A.H.; Tan, E.S.M.M.; Shakaff, A.Y.M. Indoor Scanning and Mapping using Mobile Robot and RP Lidar. Int’l J. Adv. Mech. Autom. Engg. (IJAMAE) 2016, 3, 42–47. [Google Scholar]
  7. Adán, A.; Quintana, B.; Prieto, S.A. Autonomous Mobile Scanning Systems for the Digitization of Buildings: A Review. Remote Sens. 2019, 11, 306. [Google Scholar] [CrossRef] [Green Version]
  8. Díaz-Vilariño, L.; Khoshelham, K.; Martínez-Sánchez, J.; Arias, P. 3D modeling of building indoor spaces and closed doors from imagery and point clouds. Sensors 2015, 15, 3491–3512. [Google Scholar] [CrossRef] [Green Version]
  9. Lehtola, V.V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.T.; Virtanen, J.-P.; et al. Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef] [Green Version]
  10. Masiero, A.; Fissore, F.; Guarnieri, A.; Pirotti, F.; Visintini, D.; Vettore, A. Performance Evaluation of Two Indoor Mapping Systems: Low-Cost UWB-Aided Photogrammetry and Backpack Laser Scanning. Appl. Sci. 2018, 8, 416. [Google Scholar] [CrossRef] [Green Version]
  11. Oh, T.; Lee, D.; Kim, H.; Myung, H. Graph structure-based simultaneous localization and mapping using a hybrid method of 2D laser scan and monocular camera image in environments with laser scan ambiguity. Sensors 2015, 15, 15830–15852. [Google Scholar] [CrossRef] [Green Version]
  12. Yassin, A.; Nasser, Y.; Awad, M.; Al-Dubai, A.; Liu, R.; Yuen, C.; Raulefs, R.; Aboutanios, E. Recent Advances in Indoor Localization: A Survey on Theoretical Approaches and Applications. IEEE Commun. Surv. Tutor. 2017, 19, 1327–1346. [Google Scholar] [CrossRef] [Green Version]
  13. Zafari, F.; Gkelias, A.; Leung, K.K. A Survey of Indoor Localization Systems and Technologies. IEEE Commun. Surv. Tutor. 2019, 21, 2568–2599. [Google Scholar] [CrossRef] [Green Version]
  14. Chow, J.C.K.; Peter, M.; Scaioni, M.; Al-Durgham, M. Indoor Tracking, Mapping, and Navigation: Algorithms, Technologies, and Applications. J. Sens. 2018, 2018, 3. [Google Scholar] [CrossRef] [Green Version]
  15. Huh, J.-H.; Seo, K. An Indoor Location-Based Control System Using Bluetooth Beacons for IoT Systems. Sensors 2017, 17, 2917. [Google Scholar] [CrossRef] [Green Version]
  16. Huh, J.-H.; Bu, Y.; Seo, K. Bluetooth-tracing RSSI sampling method as basic technology of indoor localization for smart homes. Int. J. Smart Home 2016, 10, 9–22. [Google Scholar] [CrossRef]
  17. Caso, G.; de Nardis, L.; di Benedetto, M.-G. A mixed approach to similarity metric selection in affinity propagation based WiFi fingerprinting indoor positioning. Sensors 2015, 15, 27692–27720. [Google Scholar] [CrossRef] [PubMed]
  18. Castañón–Puga, M.; Salazar, A.; Aguilar, L.; Gaxiola-Pacheco, C.; Licea, G. A novel hybrid intelligent indoor location method for mobile devices by zones using Wi-Fi signals. Sensors 2015, 15, 30142–30164. [Google Scholar] [CrossRef] [PubMed]
  19. Ma, L.; Xu, Y. Received signal strength recovery in green WLAN indoor positioning system using singular value thresholding. Sensors 2015, 15, 1292–1311. [Google Scholar] [CrossRef] [Green Version]
  20. Ma, R.; Guo, Q.; Hu, C.; Xue, J. An improved WiFi indoor positioning algorithm by weighted fusion. Sensors 2015, 15, 21824–21843. [Google Scholar] [CrossRef]
  21. Zhou, M.; Zhang, Q.; Xu, K.; Tian, Z.; Wang, Y.; He, W. Primal: Page rank-based indoor mapping and localization using gene-sequenced unlabeled WLAN received signal strength. Sensors 2015, 15, 24791–24817. [Google Scholar] [CrossRef]
  22. Zou, H.; Lu, X.; Jiang, H.; Xie, L. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine. Sensors 2015, 15, 1804–1824. [Google Scholar] [CrossRef] [PubMed]
  23. Nguyen, V.-H.; Pyun, J.-Y. Location detection and tracking of moving targets by a 2D IR-UWB radar system. Sensors 2015, 15, 6740–6762. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Yin, Z.; Cui, K.; Wu, Z.; Yin, L. Entropy-based TOA estimation and SVM-based ranging error mitigation in UWB ranging systems. Sensors 2015, 15, 11701–11724. [Google Scholar] [CrossRef] [PubMed]
  25. Abbas, M.; Elhamshary, M.; Rizk, H.; Torki, M.; Youssef, M. WiDeep: WiFi-based Accurate and Robust Indoor Localization System using Deep Learning. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications (PerCom 2019), Kyoto, Japan, 11–15 March 2019. [Google Scholar]
  26. Wang, F.; Feng, J.; Zhao, Y.; Zhang, X.; Zhang, S.; Han, J. Joint Activity Recognition and Indoor Localization with WiFi Fingerprints. IEEE Access 2019, 7, 80058–80068. [Google Scholar] [CrossRef]
  27. Hoang, M.-T.; Yuen, B.; Dong, X.; Lu, T.; Westendorp, R.; Reddy, K. Recurrent Neural Networks for Accurate RSSI Indoor Localization. IEEE Internet Things J. 2019, 6, 10639–10651. [Google Scholar] [CrossRef] [Green Version]
  28. Kim, M.-C.; Jang, M.-K.; Hong, S.-M.; Kim, J.-H. Practices on BIM-based indoor spatial information implementation and location-based services. J. KIBIM 2015, 5, 41–50. [Google Scholar] [CrossRef]
  29. Cheng, X.; Yang, B.; Olofsson, T.; Liu, G.; Li, H. A pilot study of online non-invasive measuring technology based on video magnification to determine skin temperature. Build. Environ. 2017, 121, 1–10. [Google Scholar] [CrossRef]
  30. Yang, B.; Cheng, X.; Dai, D.; Olofsson, T.; Li, H.; Meier, A. Real-time and contactless measurements of thermal discomfort based on human poses for energy efficient control of buildings. Build. Environ. 2019, 162, 106284. [Google Scholar] [CrossRef]
  31. Cheng, X.; Yang, B.; Hedman, A.; Olofsson, T.; Li, H.; Gool, L.V. NIDL: A pilot study of contactless measurement of skin temperature for intelligent building. Energy Build. 2019, 198, 340–352. [Google Scholar] [CrossRef]
  32. Martinez-Sala, A.; Losilla, F.; Sánchez-Aarnoutse, J.; García-Haro, J. Design, implementation and evaluation of an indoor navigation system for visually impaired people. Sensors 2015, 15, 32168–32187. [Google Scholar] [CrossRef]
  33. Plagerasa, A.P.; Psannisa, K.E.; Stergiou, C.; Wang, H.; Gupta, B.B. Efficient IoT-based sensor BIG Data collection–processing and analysis in smart buildings. Future Gener. Comput. Syst. 2018, 82, 349–357. [Google Scholar] [CrossRef]
  34. Yang, C.-T.; Chen, S.-T.; Den, W.; Wang, Y.-T.; Kristiani, E. Implementation of an Intelligent Indoor Environmental Monitoring and management system in cloud. Future Gener. Comput. Syst. 2019, 96, 731–749. [Google Scholar] [CrossRef]
  35. Mujan, I.; AnCelkovi, A.S.; Muncan, V.; Kljajic, M.; Ruzic, D. Influence of indoor environmental quality on human health and productivity—A review. J. Clean. Prod. 2019, 217, 646–657. [Google Scholar] [CrossRef]
  36. Dong, B.; Prakash, V.; Feng, F.; O’Neill, Z. A review of smart building sensing system for better indoor environment control. Energy Build. 2019, 199, 29–46. [Google Scholar] [CrossRef]
  37. iRobot® Create® 2 Open Interface (OI) Specification Based on the iRobot® Roomba® 600. Available online: https://www.irobotweb.com/-/media/MainSite/Files/About/STEM/Create/2018-07-19_iRobot_Roomba_600_Open_Interface_Spec.pdf (accessed on 10 September 2018).
  38. RPLidar A2 Development Kit User Manual. Shanghai Slamtec. Co. Ltd. Available online: http://bucket.download.slamtec.com/a7a9b856b9f8e57aad717da50a2878d5d021e85f/LM204_SLAMTEC_rplidarkit_usermanual_A2M4_v1.1_en.pdf (accessed on 11 September 2018).
  39. Kim, H.; Shin, B.; Lee, Y.; Hwang, S. Design and Implementation of Mobile Indoor Scanning System. In Proceedings of the 15th IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2019), Barcelona, Spain, 21–23 October 2019. [Google Scholar]
Figure 1. Conceptual architecture of proposed autonomous scanning system. It is composed of three main parts: a user terminal, a gateway, and an autonomous scanner.
Figure 2. Components of proposed autonomous scanning system. We utilized prototyping platforms such as Raspberry Pi and iRobot Create 2.
Figure 3. Derivation of direction for next destination in proposed autonomous driving algorithm.
Figure 4. Derivation of direction for next destination in proposed autonomous driving algorithm when mobile robot is at corner.
Figure 5. Exclusion of back part of angle corresponding to rotation angle from scan to derive direction for next destination in proposed autonomous driving algorithm.
Figure 6. Prototype architecture of proposed autonomous scanning system. The user terminal exchanges data with the gateway via TCP/IP communication. The autonomous scanner components are connected to the gateway directly.
Figure 7. Screenshots of mobile application in proposed system. Intro screen of application (left), and control screen of application (right).
Figure 8. Sensors of iRobot mainly related to driving: light bumper sensors, wall sensor, bumper sensors.
Figure 9. Indoor space (lobby) wherein experiment was performed. This was for the manual operation experiment.
Figure 10. Environmental data result (left), captured image (middle) and scanned result (right) at initial position in lobby.
Figure 11. Indoor space (lobby) wherein experiment was performed. This is the movement result of iRobot in the manual operation mode.
Figure 12. Environmental data result (left), captured image (middle) and scanned result (right) for movement in lobby.
Figure 13. Autonomous mode operation example. Activation of autonomous mode (left), autonomous scanning operation of system (middle), and scanned result in autonomous mode (right).
Figure 14. Indoor space (corridor) wherein the first autonomous scanning experiment was performed (panoramic view).
Figure 15. Integrated image of scanned result in first autonomous scanning experiment. The numbers indicate the positions after movement from the starting position of the autonomous scanner (left); the trajectory of the autonomous scanner is also indicated (right).
Figure 16. Initial position in indoor space where first autonomous scanning experiment was performed.
Figure 17. Environmental data result (left), captured image (middle), and scanned result (right) at initial position from which first autonomous scanning experiment was performed.
Figure 18. Position (marked 4 in Figure 15) in indoor space where first autonomous scanning experiment was performed.
Figure 19. Environmental data result (left), captured image (middle), and scanned result (right) at position (marked 4 in Figure 15) from which first autonomous scanning experiment was performed.
Figure 20. Position (marked 7 in Figure 15) in indoor space where first autonomous scanning experiment was performed.
Figure 21. Environmental data result (left), captured image (middle), and scanned result (right) at position (marked 7 in Figure 15) from which first autonomous scanning experiment was performed.
Figure 22. Indoor space (corridor) wherein second and third autonomous scanning experiments were performed (panoramic view).
Figure 23. Integrated image of scanned result in second autonomous scanning experiment. The numbers indicate the positions after movement from the starting position of the autonomous scanner (left); the trajectory of the autonomous scanner is also indicated (right).
Figure 24. Initial position in indoor space where second autonomous scanning experiment was performed. Front view of autonomous scanner (left); backward view of autonomous scanner (right).
Figure 25. Environmental data result (left), captured image (middle) and scanned result (right) at initial position from which second autonomous scanning experiment was performed.
Figure 26. Indoor space at position (marked 6 in Figure 23) from which second autonomous scanning experiment was performed.
Figure 27. Environmental data result (left), captured image (middle), and scanned result (right) at position (marked 6 in Figure 23) from which second autonomous scanning experiment was performed.
Figure 28. Indoor space (left) and scanned result (right) at position (marked 12 in Figure 23) from which second autonomous scanning experiment was performed.
Figure 29. Integrated image of scanned result in third autonomous scanning experiment. The numbers indicate the positions after movement from the starting position of the autonomous scanner (left); the trajectory of the autonomous scanner is also indicated (right).
Figure 30. Position (marked 2 in Figure 29) in indoor space where third autonomous scanning experiment was performed.
Figure 31. Environmental data result (left), captured image (middle), and scanned result (right) at position (marked 2 in Figure 29) from which third autonomous scanning experiment was performed.
Figure 32. Position (marked 5 in Figure 29) in indoor space where third autonomous scanning experiment was performed.
Figure 33. Environmental data result (left), captured image (middle), and scanned result (right) at position (marked 5 in Figure 29) from which third autonomous scanning experiment was performed.
Figure 34. Indoor space (corridor with an obstacle) wherein the autonomous scanning experiment was performed (left) and integrated image of scanned result in the experiment. The numbers indicate the positions after movement from the starting position of the autonomous scanner (middle); the trajectory of the autonomous scanner is also indicated (right).
Figure 35. Python code for generation of two-dimensional drawing using angle and distance data obtained from RPLidar A2 lidar sensor.
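Figure 35 presents the authors' Python listing as an image. The code below is a minimal reconstruction of the same idea, assuming the RPLidar A2 reports angles in degrees and distances in millimeters: each (angle, distance) pair is converted to Cartesian coordinates and plotted as one point of the two-dimensional drawing. The use of matplotlib and the function name to_xy are assumptions, not the code shown in the figure.

import math
import matplotlib.pyplot as plt

def to_xy(angle_deg, distance_mm):
    """Convert one polar lidar measurement to Cartesian coordinates (mm)."""
    rad = math.radians(angle_deg)
    return distance_mm * math.cos(rad), distance_mm * math.sin(rad)

# Sample measurements in the style of Table 1 (angle in degrees, distance in mm)
scan = [(108.203125, 1243.0), (109.765625, 1249.75),
        (111.34375, 1264.0), (112.984375, 1278.25)]

# Convert the whole scan and draw it as a 2D point plot
xs, ys = zip(*(to_xy(a, d) for a, d in scan))
plt.scatter(xs, ys, s=2)
plt.gca().set_aspect("equal")
plt.xlabel("x (mm)")
plt.ylabel("y (mm)")
plt.title("2D drawing from lidar scan")
plt.show()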
Table 1. Sample angle and distance data obtained from RPLidar A2 lidar sensor.
Angle (°)      Distance (mm)      Angle (°)      Distance (mm)
108.203125     1243.0             111.34375      1264.0
109.765625     1249.75            112.984375     1278.25
Table 2. Temperature, humidity and particulate matter data collected in experiment.
Temperature (°C)   Humidity (%)   PM 1.0   PM 2.5   PM 10.0
19.4               36.9           17       20       22
19.2               37.6           17       22       23
19.1               37.6           19       22       23
19.1               37.8           17       21       21
19.1               37.7           16       25       21
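The readings in Table 2 lend themselves to a simple record representation on the gateway. The dataclass below is only a sketch of one possible structure (the field names and the µg/m³ unit assumed for the PM values are our own), together with a small helper that averages the sampled values.

from dataclasses import dataclass
from statistics import mean

@dataclass
class EnvSample:
    temperature_c: float   # °C
    humidity_pct: float    # %
    pm1_0: int             # µg/m³ (assumed unit)
    pm2_5: int
    pm10_0: int

# Values taken from Table 2
samples = [
    EnvSample(19.4, 36.9, 17, 20, 22),
    EnvSample(19.2, 37.6, 17, 22, 23),
    EnvSample(19.1, 37.6, 19, 22, 23),
    EnvSample(19.1, 37.8, 17, 21, 21),
    EnvSample(19.1, 37.7, 16, 25, 21),
]

# Average the sampled values for reporting
print("mean temperature:", round(mean(s.temperature_c for s in samples), 2), "°C")
print("mean PM 2.5:", round(mean(s.pm2_5 for s in samples), 1))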
Table 3. Lidar sensor data collected in experiment.
Angle (°)      Distance (mm)      Angle (°)      Distance (mm)
127.265625     1068.25            135.718750     1127.75
128.984375     1079.75            137.421875     1146.50
130.734375     1088.75            139.078125     1162.75
132.343750     1100.75            140.734375     1182.75
134.031250     1116.25            142.421875     1207.25
