Article

Humans-as-a-Sensor for Buildings—Intensive Longitudinal Indoor Comfort Models

Building and Urban Data Science (BUDS) Lab, National University of Singapore (NUS), Singapore 117566, Singapore
*
Author to whom correspondence should be addressed.
Buildings 2020, 10(10), 174; https://doi.org/10.3390/buildings10100174
Submission received: 25 August 2020 / Revised: 23 September 2020 / Accepted: 27 September 2020 / Published: 1 October 2020
(This article belongs to the Special Issue New Approaches to Modelling Occupant Comfort)

Abstract

Evaluating and optimising human comfort within the built environment is challenging due to the large number of physiological, psychological and environmental variables that affect occupant comfort preference. Human perception could be helpful in capturing these disparate phenomena and interpreting their impact; the challenge is collecting spatially and temporally diverse subjective feedback in a scalable way. This paper presents a methodology to collect intensive longitudinal subjective feedback of comfort-based preference using micro ecological momentary assessments on a smartwatch platform. An experiment with 30 occupants over two weeks produced 4378 field-based surveys of thermal, light, and noise preference. The occupants and the spaces in which they left feedback were then clustered according to these preference tendencies. These groups were used to create different feature sets with combinations of environmental and physiological variables for use in a multi-class classification task. The classification models were trained on feature sets developed from time-series attributes, environmental and near-body sensors, heart rate, and the historical preferences of both the individual and the assigned comfort group. The most accurate model had multi-class classification F1-micro scores of 64%, 80% and 86% for thermal, light, and noise preference, respectively. The discussion outlines how these models can enhance comfort preference prediction when supplementing data from installed sensors. The approach presented prompts reflection on how the building analysis community evaluates, controls, and designs indoor environments through balancing the measurement of variables with occupant preferences in an intensive longitudinal way.

1. Introduction

Many office workers are familiar with the battle of the thermostat, or that co-worker who talks loudly on the phone. Many researchers in indoor comfort are also aware of the high rates of discomfort amongst office workers [1,2]. Vast global efforts have been undertaken to evaluate this discomfort, and with that knowledge, build models that can be used for the design and control of buildings. In the realm of thermal comfort, for example, two dominant models are in use. The first is the Predicted Mean Vote (PMV) that models comfort based on heat transfer characteristics between the human and their surrounding environment [3]. The other, more modern version, is the Adaptive Comfort model that includes the human adaptability to climate, drawing a linear relationship between the indoor and outdoor environments [4].
The underlying issue with modelling human comfort is the sheer number of variables present and the difficulty in accurately measuring them. Figure 1 highlights this issue by detailing a list of studied physiological, psychological, and environmental variables that influence thermal, visual, and aural comfort. While the empirical models in the academic literature are capable of incorporating a handful of these variables, the exclusion of the rest can cause significant errors. One reason is that the interrelationship between different indoor environmental parameters is not well-known [5]. A recent study showed that the lowest indoor environmental satisfaction factor drives overall satisfaction [6]. For example, while one can measure the temperature and humidity of a room, the type of meal a person ate, and even the spices present in the meal, can put the human body in a different state of thermal perception [7,8]. Furthermore, most of the studies that measure environmental variables face problems related to accuracy and calibration [9]. This includes the use of mobile carts with mounted sensors [10,11,12] or low-cost continuous-sensing devices [13]. While comprehensive in capturing most comfort-related factors, Figure 1 excludes literature about physical and mental ailments, which would further add variance to the models.
It is, therefore, not surprising to find that preference prediction models with only a handful of the factors in Figure 1 have low accuracy. For example, the previously mentioned PMV model uses personal and environmental parameters such as temperature, humidity, mean radiant temperature, air movement, and clothing and metabolism levels to predict thermal comfort. A recent analysis showed that this model is accurate only 34% of the time [50]. In the control of real buildings, these models are further simplified, and usually only temperature, illuminance, and noise levels are used to evaluate thermal, visual, and aural comfort, respectively.

1.1. Can Longitudinal Human Perception Feedback Supplement Sensors?

The human nervous system detects sensation and converts it into thoughts and feelings, which are the very foundation of the word comfort. What if occupants in buildings were asked about their subjective preference in spaces, instead of only measuring environmental variables and using them to infer comfort? Collecting enough comfort preference feedback from a single person over days or weeks would take advantage of a human’s ability to evaluate dozens of variables simultaneously, including those that are difficult to measure. How can this type of methodology be accomplished in a scalable way without annoying occupants too much or inducing survey fatigue? Can this approach provide insight into comfort problem areas where contemporary sensors are too expensive or too problematic to implement?
The goal of this paper was to test the ability of an intensive longitudinal method to capture large amounts of environmental preference feedback from experimental participants in a field setting. This study uses Micro Ecological Momentary Assessments (EMA) as a subjective feedback methodology that overcomes many of the challenges presented by traditional methods [51]. Micro-EMA is a method of using a smartwatch interface to prompt and collect momentary, right-here-right-now subjective feedback from a single person over several weeks [52]. Receiving a large amount of feedback from a single person across a diversity of spaces and comfort exposures provided the ability to understand that person’s comfort preference tendencies. It is proposed that these behavioural tendencies can be used to segment people into groups related to how they perceive their environment. Grouping people with similar comfort preferences could, therefore, increase the accuracy of predicting where a person will be comfortable and how systems can respond without additional sensors. Additionally, collecting large amounts of subjective preference data from numerous people in a particular space can characterise the comfort-related attributes of that space to supplement data being collected from the installed sensors. If technically scalable and not too disruptive to occupants, using humans-as-a-sensor in buildings could change the way post-occupancy evaluations, building and system design, and controls and automation are done. There would be opportunities for people to provide feedback for short-term episodic uses (days or weeks), such as building commissioning, or over the long term (months or years) for continuous system control and management. This work complements the momentum from other disciplines focused on the use of humans as sensors for applications in detecting events using social media data [53], detecting emergencies [54], and cybersecurity [55].

1.2. Paper Overview

This paper presents how high-frequency micro-EMA, combined with sensor data time-series analysis, can enable the evaluation, control, and rethinking of the design of indoor environments. Section 2 first gives a more detailed overview of foundational work in indoor preference capture and modelling and the novelty being proposed. Section 3 provides a comprehensive explanation of the design and deployment of a smartwatch-based subjective preference data collection and environmental variable measurement system. Section 4 details the results from a field-based implementation at the SDE4 building at the National University of Singapore (NUS) and the testing of various preference models based on intensive longitudinal data. Finally, Section 5 and Section 6 discuss integration methods in buildings, limitations, future work, and details on how to reproduce the study using open data and code.

2. Background and Novelty

This work builds upon previous literature focused on the measurement of factors that may influence thermal, visual, and aural comfort in the built environment. These modelling techniques are combined with an intensive longitudinal experience-sampling technique that is common in the medical and psychological communities, but only emerging in the analysis of buildings. This section covers previous work in the building context using intensive longitudinal data and an overview of the novelty of the work in this paper as compared to the literature.

2.1. Indoor Environmental Comfort Variables and Models

There are generally two model types used in the literature for indoor comfort assessment: (1) objective-subjective, and (2) objective-criteria [56]. Which method to use is decided based on the aim of the evaluation. On the one hand, the objective-subjective model combines the indoor environmental measurements from sensors with the subjective feedback from users, mostly in the form of post-occupancy evaluation (POE) surveys [57,58,59,60]. On the other hand, the objective-criteria model is used in ranking or rating a building by comparing the indoor measurements from IEQ sensors with building performance measurement protocols such as LEED or WELL certifications [12]. Both methods have drawbacks, both in measuring the environmental data and in surveying occupants [56].
In terms of environmental measurements, work has been done using accurate sensors mounted on movable carts [12,61]. However, these sensors were not affordable in all building operations scenarios [56]. The affordability challenge was addressed using low-cost continuous-sensing devices that required frequent calibration [13]. Nevertheless, the placement of these sensors in buildings, and the interpolation of their readings, still represented a challenge in the literature, given that indoor spaces are heterogeneous [62]. On the other side of the spectrum, surveys pose problems related to the questions themselves, for example, what to ask, whom to ask, and how to interpret the results [56]. Additionally, Porter et al. [63] discussed the term survey fatigue, in which users feel overwhelmed by questions, which may lead to misrepresentative responses and reduced response rates.
A related area of recent focus is the use of wearable and infrared radiation sensors to capture near-body physiological data that define the environmental conditions close to or at the skin surface of an occupant. A recent study focused on creating personalised comfort models from these data in the context of field-based deployment on 14 subjects [64]. This deployment and the models produced used wrist and ankle skin temperature from several sensors placed on the participants and a smartphone application to collect surveys. Further work in the indoor context showed that both wearable sensors and infrared radiation cameras led to a 3–4% increase in accuracy of thermal comfort sensation prediction, marginally justifying the cost of implementation in a field setting [65].

2.2. Ecological Momentary Assessments (EMA)

The next area of background focuses on the challenge of collecting large amounts of longitudinal data from a person. Many fields of study have relied upon the ecological momentary assessment [51] methodology to meet this challenge. This method is a type of intensive longitudinal experience sampling most often utilised in studying human behaviour. The word ecological describes the fact that the measurement is taken in the subjects’ natural environment without impacting their task at hand. The word momentary pertains to the fact that feedback is requested at the moment of experience, as opposed to asking a subject to recall a past experience. And finally, the assessments are not static one-off outcomes but occur over time, thus accounting for temporal dynamics. Traditional methods found in the literature, such as surveys, are insufficient as their sampling rates are low, they require the occupant to completely stop their task at hand to focus on the survey, and, in many cases, they ask for a recollection of past experiences. There is the further issue of survey fatigue [63], and even when occupants are willing to participate, there is a concern about how accurate their responses are [66]. The use of a smartwatch for data collection, coined micro ecological momentary assessment, is so user-friendly that it does not significantly disrupt any ongoing activity [52]. Furthermore, an eight-fold increase in sampling frequency can be obtained, in comparison to smartphone use, without burdening the user. Recent work has used ecological momentary assessments to assess the built environment through the use of smartphones [67]. While such applications are a step in the right direction, they were only able to collect eight feedback points per occupant, which is insufficient for time-series analysis.

2.3. Similar Work in Intensive Longitudinal Data Collection in the Built Environment

Intensive longitudinal methodologies have begun to emerge as a way to characterise occupants for various built environment objectives. In the urban context, several studies have deployed sensors on people to understand their experiences across their daily lives. A large study based in Singapore used thousands of wearable sensors in populations of students to discover travel patterns [68], collect information about thermal parameters [69], and even infer the impact of public spaces on happiness [70]. Work has also been done in a controlled outdoor field study to understand the impact of the urban context on various emotions and physiological responses of humans [71]. In the indoor setting, targeted work on collecting longitudinal data for more specific purposes has also emerged. The previously mentioned wearable study focusing on thermal comfort collected extensive data from 14 participants over a 2–4 week study [64]. Another recent study deployed a cyber-physical system to collect longitudinal data in offices, focusing on occupant concentration [72]. The work in this paper is most directly related to previous work in collecting longitudinal comfort feedback from smartphone interfaces for the allocation of activity-based workspaces [73] and through a sustainability tour in a university campus building [74].

2.4. Novelty of Proposed Approach

Despite the momentum in field-based intensive longitudinal methodologies, there are still several barriers to their implementation in real-world settings. Not the least is the challenge of getting human occupants to give data for comfort surveys, install applications, or wear devices. Working from this knowledge, the authors developed cozie, as seen in Figure 2, an open-source, smartwatch clock-face designed to conduct micro-EMA surveys for high-frequency data collection [75]. The application is open-sourced and free to download and use on the Fitbit gallery (https://cozie.app/).
The innovations outlined in this work as compared to the previously mentioned studies are:
  • The hardware and software deployment methodology has a focus on practicality in scalable, field-based implementations. Experimental participants were only asked to wear a single smartwatch device and answer survey questions that take a relatively small amount of time. The focus was on testing a configuration that was easily applied in a real-world context. The modelling methodology was designed to maximise data capture in the field without constant control and verification of sensor proximity and accuracy.
  • A series of pre-processing steps were developed to convert intensive longitudinal data into model input features that characterise the tendency of groups of people to have similar comfort preferences. A simple example of this concept is the commonly discussed, yet often anecdotal, person who seems always to need more cooling, even when the temperature is already low relative to the comfort zone. In this study, clustering was used to group people into comfort preference types as an input feature to a preference prediction model.
  • This paper introduces and tests a simple form of a cold start variant to the preference models that could be used to predict an occupant’s preference. Cold start refers to a model with limited or no data about the occupant’s preference history in a particular space or according to particular objective measurements such as temperature, humidity, or other factors. This model enables the deployment of the cozie data collection methodology by a set of participants in a building and then the creation of prediction models that could accommodate future occupants regardless of whether they have worn a smartwatch in those spaces.
  • The process seeks to show that comfort-based preference prediction can be accurate even in the absence of environmental sensors if enough intensive longitudinal data have been collected from a sufficient number of occupants. The context of this experiment was a relatively uncontrolled, field-based setting as opposed to laboratory conditions.

3. Methodology

To collect intensive longitudinal data in a field setting, the cozie platform was built on the Fitbit smartwatch (https://www.fitbit.com/) and various time-series database technologies. The details of this technology stack are explained in the context of a deployment on 30 test participants in buildings at the School of Design and Environment (SDE) at NUS. In this study, an occupant was defined as a test participant who wore the smartwatch, and a manager as the person who coordinated the study. Thirty participants were recruited via an online form, screened against the inclusion criteria for the study, and on-boarded according to an approved ethics review application. Priority was given to participants who work full-time in the SDE-related buildings on campus, and they were selected to maintain an even gender distribution.
The technology used in the deployment of this study can be sub-sectioned into individual tiers as described in Figure 3, with each level requiring additional resources to implement. For the experiment in the SDE4 building, all tiers were incorporated.

3.1. Tier 1: Smartwatch for Micro-EMA

Tier 1 is the core methodology presented in this paper, which uses the cozie clock-face, as shown in Figure 2. The occupants were asked to wear a Fitbit Versa smartwatch during daytime hours while on the NUS campus but were also welcome to wear the device for the entire duration of the study. Participants were asked to leave momentary assessment feedback on their comfort preferences at different points throughout the day on the watch face of the Fitbit device. Each time they responded to the survey, they were asked about their thermal, visual, and aural preference using the options found in Figure 2. Comfort preference was chosen as the feedback type most applicable to the methodology because its three-point scale is well suited to frequent watch-based surveys. Preference surveys also provide more meaningful information by indicating how the occupant would want the environment to change, as opposed to satisfaction or sensation survey types that only capture how the occupant feels. The participants were asked to answer the questions when they moved from one environment to another, which amounted to approximately 5–15 assessments per day. The smartwatch also prompted the occupants with a small vibration that requested feedback from them at different timed points in the day. This prompt only occurred during daytime hours when the subject was active. The momentary assessment took less than 15 s to complete. Throughout the experiment, the cumulative amount of time spent answering the momentary assessments was approximately 20–40 min.
Detailed documentation for using cozie, along with its source code, can be found in an open-source GitHub repository (https://github.com/buds-lab/cozie). The platform can also collect sensation, satisfaction, and objective feedback such as clothing and activity levels. These features were added after the experiments outlined in this paper and were not used in this study.

3.2. Tier 2: Indoor Localisation

Tier 1 is likely sufficient for experiments conducted in small office spaces. If only a few different zones exist, then an occupant’s location could be quickly determined through a supplementary question in the question flow of the survey. However, in a large building, such as SDE4 where the outlined experiment was conducted, a more sophisticated indoor localisation system was required. The SDE4 building has six different floors, a gross floor area of around 8500 square meters, and a large variety of different indoor environments. To determine an occupant’s location in the building, 100 Bluetooth beacons and the Steerpath (https://steerpath.com/) platform were installed throughout the building. These beacons communicated with a custom-built smartphone application, called the Yak App [76], to determine the occupant’s location with one-meter precision. The location data was then used to geo-fence the occupant within various zones of the building and was merged with the subjective preference feedback data in the cloud.
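The geo-fencing step itself is conceptually simple. The sketch below is illustrative only and is not the Steerpath/Yak pipeline: it assumes zone outlines are available as floor-plan polygons and maps a beacon-derived position to the zone that contains it (the zone coordinates and the geofence helper are hypothetical).

```python
from shapely.geometry import Point, Polygon

# Hypothetical zone outlines in floor-plan coordinates (metres); the real outlines
# would come from the building model used by the localisation platform.
zones = {
    "office-1": Polygon([(0, 0), (10, 0), (10, 8), (0, 8)]),
    "studio-2": Polygon([(10, 0), (25, 0), (25, 8), (10, 8)]),
}

def geofence(x, y):
    """Return the name of the zone containing the (x, y) position, or None."""
    point = Point(x, y)
    return next((name for name, poly in zones.items() if poly.contains(point)), None)

print(geofence(3.2, 4.5))  # -> "office-1"
```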

3.3. Tier 3: Preference Data Convergence with Environmental Sensors

Tier 3 included the deployment of 45 indoor and outdoor environmental quality (IEQ) sensors in the experimental context. This data collection tier was used to compare the results of the subjective feedback with existing environmental models. The IEQ sensors were WiFi-connected and were deployed by the company SenSING (https://sensing.online/) as part of a campus-wide installation of sensors. These sensor kits measured temperature, humidity, noise level, and illuminance. At least one sensor device was installed in each zone of the building, and the data was pulled from an API and merged with the subjective preference data in the cloud.
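As a rough illustration of this merging step, the sketch below attaches the nearest-in-time reading from the IEQ sensor in the occupant’s zone to each feedback record using pandas; the file names and column names are assumptions, not the actual cloud schema.

```python
import pandas as pd

# Assumed schemas: one row per preference vote, one row per IEQ reading.
feedback = pd.read_csv("cozie_feedback.csv", parse_dates=["timestamp"]).sort_values("timestamp")
sensors = pd.read_csv("ieq_readings.csv", parse_dates=["timestamp"]).sort_values("timestamp")

# Attach the nearest-in-time reading from the sensor in the same zone to each vote;
# votes with no reading within 15 minutes are left unmatched (NaN).
merged = pd.merge_asof(
    feedback,
    sensors[["timestamp", "zone", "temperature", "humidity", "lux", "noise_db"]],
    on="timestamp",
    by="zone",
    direction="nearest",
    tolerance=pd.Timedelta("15min"),
)
```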

3.4. Tier 1b: Strap-Mounted Sensor Kit

Tier 1b included a temperature sensor from mbient labs (https://mbientlab.com/), which was attached to the watch strap through a custom three-dimensional (3D) printed case. The design file for this case can be found online (https://myhub.autodesk360.com/ue29ab3ac/g/shares/SH919a0QTf3c32634dcfe0a71457c4729699). The mbient device logged data locally, which was transferred to the cloud database at the end of the experiment.

3.5. Occupant and Room Preference Clustering

In this step, the preference history of occupants was used in a simple clustering-based segmentation to group occupants according to their raw feedback preference tendencies. This analysis included the hypothesis that the feedback of one occupant in such a group could be used to characterise the preferences of all group members for a particular space or set of conditions. For example, occupants who more frequently indicated prefer-cooler as compared to no change would be grouped together. This strategy was a simplified version of this type of clustering as it neglects other context-based variables (environmental and physiological measurements). This choice was made to keep the method feasible even in situations in which other measurements are not available.
Given its widespread usage in the related literature, occupant and room clustering was calculated using the k-means clustering algorithm with Euclidean distance, using the scikit-learn package (https://scikit-learn.org/stable/). The features used for clustering were the ratios of votes of each feedback class value for each subject. For example, the ratio of prefer-cooler votes for a given participant, or room, was calculated as $\frac{\#\text{ prefer-cooler votes}}{\#\text{ total votes}}$. This calculation was repeated for all types of feedback responses for thermal, light, and aural feedback. The number of clusters was then chosen to match the number of possible responses per type of feedback, which initially led to $k = 9$; however, given that there were no data points with prefer-louder responses, the clusters were merged into eight.
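A minimal sketch of this clustering step is shown below, assuming a long-format vote table with hypothetical column names; the ratios of each response class per subject are the features passed to k-means (which uses Euclidean distance by default in scikit-learn). Room clustering follows the same recipe with rooms in place of subjects.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Toy stand-in for the feedback table (one row per vote); column names are assumptions.
votes = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 2],
    "thermal": ["prefer-cooler", "no-change", "prefer-cooler",
                "no-change", "no-change", "prefer-warmer"],
    "light":   ["no-change"] * 6,
    "aural":   ["prefer-quieter", "no-change", "no-change"] + ["no-change"] * 3,
})

def preference_ratios(df, column):
    """Ratio of each response class per subject, e.g. # prefer-cooler votes / # total votes."""
    return df.groupby("user_id")[column].value_counts(normalize=True).unstack(fill_value=0.0)

features = pd.concat([preference_ratios(votes, c) for c in ["thermal", "light", "aural"]], axis=1)

# In the study, one cluster per possible response class was used (eight after merging,
# since no prefer-louder votes occurred); k=2 here only because the toy data has two subjects.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```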

3.6. Occupant Comfort Preference Prediction

The metric of comparison in this study was the prediction improvement of a machine learning model using added feature sets extracted from the intensive longitudinal preference data. This structure matches implementation-based environmental comfort studies outlined in the literature that showed the predictive improvement of additional data [64,65]. This approach can be compared to more controlled, lab-based methods that seek to isolate variables and individually test their influence.
The prediction problem translates to predicting the right class value, or, in this case, the preference feedback response, given the feature values. A random forest classifier from the scikit-learn package was chosen to handle this comfort prediction. Random forest classifiers have been shown to have the highest accuracy at predicting personal comfort in one previous study [77] and are among the best performing models in other recent studies [64,65,78]. The decision was made to focus on the implementation of a single model type that has been proven effective and is straightforward to use based on documentation and ease of tuning. With this in mind, we fixed the hyper-parameters for the random forest classifier to 1000 trees, the Gini criterion for node splitting, and a minimum of two samples per split.
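A minimal sketch of this classifier configuration is shown below; all other settings are left at scikit-learn defaults, and the fixed random seed is an assumption added for reproducibility rather than a detail stated above.

```python
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=1000,    # 1000 trees
    criterion="gini",     # Gini impurity for node splitting
    min_samples_split=2,  # minimum of two samples required to split a node
    random_state=0,       # assumed seed for reproducibility (not specified in the text)
)
```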
Additionally, the prediction problem was divided into an individual and a grouped prediction task. The former refers to a model developed specifically for a given occupant, using only part of that occupant’s data to train the model and testing it on the remaining data. The latter approach consists of combining all occupants’ training data subsets to train a single model and testing it on all occupants’ remaining data.
The data of each occupant was split into a 60:40 train-test split based on time. That is, the first 60% of votes from each occupant was used in their training set, and the remaining 40% was used for testing. The sets were split by time to prevent the scenario of future data being used to predict the past. For the grouped model, all the occupants’ training sets (60% of each occupant’s data) were combined and used as one training set, and the remaining 40% of each occupant’s data was combined and used as one test set.
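The chronological split and the construction of the grouped training set could be sketched as follows, assuming the merged table from the earlier sketch with hypothetical `user_id` and `timestamp` columns.

```python
import pandas as pd

def time_split(df, frac_train=0.6):
    """Chronological split: the first 60% of an occupant's votes train, the rest test."""
    df = df.sort_values("timestamp")
    cut = int(len(df) * frac_train)
    return df.iloc[:cut], df.iloc[cut:]

# Individual models: each occupant's data is split separately.
splits = {uid: time_split(group) for uid, group in merged.groupby("user_id")}

# Grouped model: all occupants' training subsets form one training set,
# and all remaining subsets form one test set.
train_all = pd.concat([train for train, _ in splits.values()])
test_all = pd.concat([test for _, test in splits.values()])
```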
A primary component of the method was to test combinations of input feature sets to quantify their prediction power. The method used six combinations of these feature sets to test the influence each has in the predictive capability of the overall model. The following is an overview of these feature categories developed for testing:
  • Time was created through feature engineering of the time stamp of when an occupant gave feedback. This feature was a cyclical representation of the hour of the day and the day of the week (a sketch of one possible cyclical encoding follows this list). This simple feature type detects whether certain cyclical habits or components have a role in preference prediction and was included in all scenarios.
  • Environmental Sensors were features extracted from measurement data from lighting (lux level), noise (dB level), temperature (deg. Celsius), and relative humidity (RH%) measurement. These variables were collected from the IEQ sensors that were in the same zone as the occupant, and closest spatially and temporally to the occupant when they gave feedback.
  • Near Body Temperature was a feature created from the temperature sensor mounted on the smartwatch strap that had temporal proximity to the time-stamp of when the occupant gave feedback.
  • Heart Rate was collected from the Fitbit smartwatch device as an instantaneous value collected when the occupant gave feedback.
  • Room was a feature that was encoded to a numerical preference type based on the history of feedback in the room in which the survey was taken. This feature was designed to increase the prediction accuracy by complementing data from rooms of similar comfort profiles. For example, if an occupant only works from their office, the model will still be able to accurately predict how that occupant may feel in other rooms that have a similar comfort profile to their office.
  • Preference History features are similar to the Room features. These features use the ratio of responses of each type (thermal, visual, and aural) calculated for each user. This ratio was only calculated for the responses prefer-cooler, prefer-warmer, prefer-dimmer, prefer-brighter, prefer-quieter, and prefer-louder. For example, the ratio of prefer-cooler responses of a given occupant is calculated as $\frac{\#\text{ prefer-cooler votes}}{\#\text{ total votes}}$.
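As an illustration of the Time and Preference History features above, the sketch below encodes hour-of-day and day-of-week as sine/cosine pairs (one common way to build a cyclical representation; the exact encoding used in the study is not specified here) and computes a response ratio from training data only.

```python
import numpy as np
import pandas as pd

def cyclical_time_features(timestamps: pd.Series) -> pd.DataFrame:
    """Encode hour-of-day and day-of-week cyclically so that, e.g., 23:00 and 00:00 are close."""
    hour = timestamps.dt.hour + timestamps.dt.minute / 60.0
    dow = timestamps.dt.dayofweek
    return pd.DataFrame({
        "hour_sin": np.sin(2 * np.pi * hour / 24),
        "hour_cos": np.cos(2 * np.pi * hour / 24),
        "dow_sin": np.sin(2 * np.pi * dow / 7),
        "dow_cos": np.cos(2 * np.pi * dow / 7),
    })

def history_ratio(training_responses: pd.Series, label: str) -> float:
    """Preference-history feature, e.g. # prefer-cooler votes / # total votes."""
    return float((training_responses == label).mean())
```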
Model classification results were calculated using the F1-micro score (as shown in Equation (1)), which is equivalent to accuracy in a multi-class classification problem, since precision and recall are averaged across all classes, that is, across the subjective comfort response values. As the objective was to provide a comparison among different feature sets with a standard metric, F1-micro was chosen due to its usage for benchmarking different aspects of the modelling pipeline in thermal comfort datasets [78].
$$F_1 = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \quad (1)$$
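With scikit-learn this amounts to the following (illustrative labels only):

```python
from sklearn.metrics import f1_score

y_true = ["prefer-cooler", "no-change", "no-change", "prefer-warmer"]
y_pred = ["prefer-cooler", "no-change", "prefer-cooler", "prefer-warmer"]

# Micro-averaged F1 over all classes; for single-label multi-class data this equals accuracy.
print(f1_score(y_true, y_pred, average="micro"))  # 0.75
```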

4. Results

The results presented in this section are complemented with an interactive web application (https://sde4demo.herokuapp.com/) and interactive code (https://github.com/buds-lab/humans-as-a-sensor-for-buildings), which enable the reader to regenerate all the plots. During the two-week collection period with 30 participants, 4378 comfort preference votes were collected, an average of 146 feedback points per person. From this set, 1474 data points were successfully localised to building environmental sensors. To allow for comparison with those data, this subset was used for the analysis and machine learning in the following sections.

4.1. Grouping Comfort Preference Tendencies

Figure 4 illustrates an overview of the intensive longitudinal preference history data for each person according to the three preference categories. These feedback responses were only those collected in the SDE4 building, and a maximum of 75 votes is shown. A simple clustering step was applied in this figure to represent the segmentation according to each preference category on its own. This visualisation shows how this simplified clustering step captured the tendency of an occupant to lean more towards one feedback response over the others. This segmentation was independent of the environmental parameters of the spaces to maintain the simplicity of the approach. The subsequent modelling steps were designed to test the effectiveness of doing this type of simplified segmentation.
Figure 5a is an aggregated representation of the segmentation process for each occupant, this time with all three preference categories being used in the clustering process. This figure summarises each occupant as a row of data, and the colour of the box represents the percentage of votes given to a particular preference category, where dark colours indicate higher preference. These clusters provided segmentation of the users according to their preference tendency types that were used in the preference models. Even in a sample size of 30 occupants, there were varying comfort tendencies present, which complemented the concept of a personal comfort model tested by Kim et al. [79]. This clustering step provided the foundation for the creation of the individual versus grouped models used in the prediction step.

4.2. Tagging the Spatial Context with Preference Feedback

While the subjective feedback highlighted varying comfort tendencies within a building, localisation also enabled the characterisation of preference tendencies in certain zones. Figure 5b presents each room as a row, where the colour of each cell represents the percentage of a preference vote given for a particular room. The utilisation of k-means clustering once again enabled the splitting and labelling of these zones, this time by the tendency for different comfort preferences to be left by occupants in those spaces. This result first serves as an overview for facility managers to understand the office spaces they manage and to take action to improve comfort. A visualisation of the subjective thermal preference data can be found in Figure 6 and online (https://sde4demo.herokuapp.com/).

4.3. Correlation with Indoor Environmental Quality Variables

One standard aspect of environmental comfort studies is the comparison of feedback to objective environmental measurements. For the data collected in this study, standard distribution plots of the environmental sensor data are summarised in Figure 7. Intuitive insights can be observed in the data, such as the absence of prefer-brighter votes above an illuminance threshold of 250 lux. Nevertheless, there was significant overlap between classes for each of the environmental parameters, which was likely attributable to the numerous unmeasured variables described in Figure 1 and the varying comfort tendencies shown in Figure 5. This result reinforces the evidence that environmental measurements are not descriptive enough to characterise a person’s preferences, which results in poor prediction as found in previous studies [50].

4.4. Predicting Field-Based Indoor Preference Using Intensive Longitudinal Data

In this section, the time-series feedback was used to predict comfort preference. Figure 8 shows a comparison of the various models built with the feature sets and process outlined in Section 3. The individual comfort model uses the occupant’s own training data for prediction, while the grouped comfort model uses the input data for the groupings outlined in Figure 5. The top of Figure 8 shows a table in which each row represents the feature set that was used to train the model in that column.
Several insights were evident from this modelling analysis. Firstly, there were only small differences in the F1 scores between the different feature sets for the visual and aural preference models. These models, in general, had higher F1 scores than thermal preference prediction. Aural preference prediction had the highest F1 score, which is a consequence of reducing the prediction problem from a multi-class classification task to a binary classification task, given that no user voted prefer-louder during the data collection phase (bottom image in Figure 4). As for the visual preference prediction performance, we hypothesise that the model takes advantage of the non-environmental features to better discern between the classes. Figure 8 shows how the lowest-performing visual preference model is the one that relies on environmental features alone, where the distribution of Environmental Light values already shows a clear overlap between the different visual feedback classes (top middle distribution in Figure 7).
Thermal preference prediction had more diversity across the feature sets tested as compared to the other preference categories. Merely using the conventional time-series and environmental sensor features had the lowest F1 score. Adding the physiological attributes of heart rate and near body temperature provided marginal improvements. The best thermal preference model used the physiological, room, and preference history features while excluding the environmental sensor data.
For all three preference categories, the grouped comfort model performed better than the individual version. Participants with similar comfort preferences became clustered together, thus increasing the training dataset for that particular occupant type. This result showed the impact that assigning a variety of peer groups can have on preference prediction.

4.5. Cold-Start Comfort Preference Prediction

The success of the preference models using grouping allowed for models that can predict an occupant’s preferences without their own personal data being present. This scenario was labelled as a cold-start situation as it emulates when an occupant does not wear a watch to collect data in a particular building, but relies on crowdsourced data from peers that have a similar comfort preference. The line graphs to the right of Figure 8 show the results of this type of analysis. They illustrate the number of occupants required to sufficiently crowdsource the data for an average occupant for each of the preference categories. The orange line represents an ordinary person who does not wear a smartwatch, whereas the blue line is a smartwatch owner who is regularly giving feedback. In this study, nine and five users were sufficient on average to crowdsource the thermal and visual comfort prediction, respectively, to the same accuracy as a user wearing a watch.
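A rough sketch of this cold-start evaluation, continuing the hypothetical `splits` dictionary from Section 3: for a target occupant, the model is trained only on the other occupants’ training data and evaluated on the target’s test data (the feature and label column names below are assumptions).

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def cold_start_f1(target_uid, splits, feature_cols, label_col="thermal_preference"):
    """F1-micro for an occupant who contributed no training data of their own."""
    train = pd.concat([tr for uid, (tr, _) in splits.items() if uid != target_uid])
    _, test = splits[target_uid]
    model = RandomForestClassifier(n_estimators=1000, random_state=0)
    model.fit(train[feature_cols], train[label_col])
    return f1_score(test[label_col], model.predict(test[feature_cols]), average="micro")
```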

4.6. Predicting Continuous Comfort Preference without Sensors

Since the preference feedback in this methodology was at a much higher frequency than a typical survey or than occupants acting on the thermostat, this study had preference data with relatively high temporal and spatial diversity. The random forest classifier was used to predict comfort preference based on a time-stamp input for each zone to create a continuous prediction over time. This approach emulates the concept of using human feedback as a type of sensor. Figure 9 illustrates the prediction for two different zones, an office and an outdoor space, for a typical week using this model output. First, one can see that the office was generally a comfortable space, while the outdoor seating had an overall higher preference for cooling. Time-dependent fluctuations show how the model was able to predict comfort preference for different parts of the day or days of the week. The office had a peak of warmer preference around mid-day. Finally, it was observed how the model, often inaccurately, tried to predict comfort at times when no data were present. The square peaks in the office for aural and visual prediction between the hours of 22:00 and 7:00 were due to an absence of data during these times.
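The continuous profiles in Figure 9 can be approximated by querying a trained model at regular time stamps for a given zone. The sketch below assumes a model trained only on the cyclical time features and a room-history encoding from the earlier sketches; `room_encoding`, the zone name, and the date range are all hypothetical.

```python
import pandas as pd

# Hourly time stamps for one illustrative week.
hours = pd.date_range("2019-09-02", "2019-09-08 23:00", freq="H")
X_zone = cyclical_time_features(pd.Series(hours))
X_zone["room"] = room_encoding["office-1"]  # assumed numeric room-history feature

# Class probabilities over time for this zone, e.g. prefer-cooler / no-change / prefer-warmer.
profile = pd.DataFrame(model.predict_proba(X_zone), index=hours, columns=model.classes_)
```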

5. Discussion

The results of this implementation showed the potential of collecting intensive longitudinal feedback from occupants in the built environment. This approach revealed that the deployment and implementation of such a methodology were effective, and that preference models for visual, aural, and thermal comfort can perform similarly to models based on sensor measurements. The key focus in this section is to discuss the practical uses and limitations of the proposed methodology.

5.1. Practical Application of Intensive Longitudinal Data in Industry

At the foundation of the method, the creation of larger amounts of occupant feedback information in the form of preferences was successful. The utilisation of this type of high-frequency subjective feedback data has potential for building evaluation and occupant comfort optimisation. It changes the paradigm in which facility managers operate a building. For example, instead of saying light levels are below the comfort threshold in Office-1, the new conclusions could state that a higher frequency of prefer-brighter votes is recorded in Office-1. Furthermore, due to the high-frequency sampling rate provided by the micro ecological momentary assessment methodology, these periods of discomfort can also be mapped to particular times and certain groups of people. The time-series comfort profiles could also serve as input data for occupant-centric control efforts of building systems, which can then optimise for human comfort and energy use. Notwithstanding, the deployment of this longitudinal data collection framework involves a specific device, the Fitbit smartwatch, which incurs an overhead cost per participant. The Fitbit Versa ranges in price from US$99–229 depending on the features included. To reduce the cost of implementation in practice, Section 4.5 has shown that not every occupant requires a smartwatch in cold-start scenarios, which reduces hardware costs. Furthermore, the authors believe that wearable technology adoption is growing among the younger population who will soon be part of the workforce, thus further reducing the cost of deployment. The cozie platform is currently under development for other platforms such as the Apple Watch (watchOS).

5.1.1. Post-Occupancy Evaluation, Commissioning, and Sensor Calibration

A focus on post-occupancy evaluation could be a key target for this type of data collection. In this scenario, a particular sample of non-transient occupants of a recently constructed building would be given smartwatches and asked to wear them for 2–4 weeks. These data could supplement the installed systems by characterising whether there are blind spots, that is, comfort-influencing phenomena that the sensors do not pick up. For example, it is rare to measure mean radiant temperature in most buildings; therefore, hot spots might exist that are undetectable by thermostats and might be a result of inadequate shading or control of shading systems. Adapting the presented methodology to this context is straightforward, as the two-week time frame of the experiment is similar. The current method involved asking each participant to wear the smartwatch until 100 data points were recorded. In a real-world setting, a similar approach could be deployed. At that point, perhaps the occupant could choose to return the device and rely on the data of co-workers for comfort prediction, or continue to wear it and help crowdsource the data for others. It is strongly recommended that these deployments use automated indoor localisation to put the feedback in the spatial context without user intervention.

5.1.2. Potential for Spatial Recommendation Systems and Impact on Activity-Based Workspace Design

A less typical application for intensive longitudinal data might be the development of spatial recommendation engines for occupants in activity-based workspaces. In these spaces, an occupant does not have a constant workspace but instead finds a space that matches their immediate needs. This paradigm could prove to be an integral part of future working styles, especially in light of social distancing due to global pandemics such as COVID-19 that force less conventional spatial working arrangements. This recommendation engine might work in a way that an occupant’s comfort tendencies could be matched with the comfort zones of the building. For example, those who prefer warmer conditions can be recommended to work in areas that have a higher percentage of prefer-cooler votes. Previous work in this direction showed progress using a platform known as Spacematch [73]. Smartwatch-based longitudinal feedback could enhance the model development process for this type of platform. However, the fact that modern office buildings are shifting towards more open-plan styles, or the aforementioned activity-based spaces, makes it hard to divide floors into multiple thermal zones. Nevertheless, recent work has found that existing deployed systems such as ceiling fans can help customise the nearby environment of occupants and achieve even higher thermal acceptance [80]. If the customisation is done at a group level, that is, for participants with similar or shared preferences, far fewer thermal zones would be required compared to a scenario of a completely personalised experience for each occupant.
Additionally, the aspect of testing group-based models in this study is essential for this context as building owners cannot expect all occupants to be willing to wear or use devices. And those that do agree will likely have a limited amount of patience for giving feedback over long periods. This paper tested the ability to cluster occupants such that it was not necessary for everyone in an office to wear a smartwatch. The only requirement for this type of system to work is that each new employee would wear a smartwatch during a two-week data collection phase, which is sufficient to build their comfort preference tendency history as described in Section 4.1. The experiment also showed that not everyone in an office space needed to be using the smartwatch application all the time. In this particular experiment, six occupants were sufficient, on average, to crowdsource the prediction for the remaining 24. This value is not generalisable amongst all buildings and would change depending on the different comfort tendencies the building occupants might present. The higher the variation in comfort preferences, the greater the number of the occupants needed to crowdsource the data.
In terms of office space design, the collection of intensive longitudinal preference data could facilitate floor plan design decisions. Understanding the breakdown of comfort needs according to the tendencies of the occupants would enable architects to design or retrofit buildings with different comfort zones to match the different types of people. For example, if the zones with more cooling were popular and being used to their capacity, then the floor plans or systems control could respond by creating more spaces of that type to increase the probability of a person feeling comfortable.

5.1.3. Integration into Building Control Systems

Intensive longitudinal data has the opportunity to influence the control and automation systems of buildings through the use of preference feedback data in the control logic. Most building control systems rely on optimising a set-point temperature that is considered comfortable for the average occupant or comfort standard [81]. In that scenario, discomfort, rather than comfort, is evaluated as the difference between the current thermostat reading and the HVAC system set-point temperature [82], via occupancy density estimation, or via more traditional ways such as PMV [83]. While some of these approaches have dealt with single-occupant offices or Personal Comfort Systems (PCS), there is a distinction between controlling the actual HVAC system and allowing the occupant to control their immediate space. PCS systems are those that locally condition the occupant independently of the centralised HVAC system [84]. The intensive longitudinal data and the models developed in this study could help the controls field take the next step forward in occupant-centred building controls through the use of reinforcement learning [85]. The feedback mechanism in reinforcement control is generally the standard occupant-building interface such as a switch or thermostat [86]. Intensive longitudinal data could be used to enhance that interaction by focusing on finding the motivations behind those control actions. This work is a strong focus of the occupant-centric building operations in the IEA Annex 79 project [87].

5.2. Prediction Models Are Only as Good as the Training Data

The primary limitation of the presented approach was that it would only work where data were present. As seen in Figure 9, there were errors in the prediction when data were absent, for example, no data were collected during the night. Furthermore, since there was a reliance on other subjects’ historical preferences, that is, crowdsourced preferences, to evaluate environments, an office space that was rarely used would have a poor prediction of occupant comfort. Classical comfort models based on sensor data do not have this issue, as spaces that are not used can still be characterised by the measured data. Furthermore, this particular study was conducted in Singapore, which does not have seasons and has minimal variability in temperature. For seasonal countries, the day of the year would be an added feature that may take up to a year’s worth of data to train. Further work could investigate the opportunity of using sensor data to characterise a space and then continuously refine the comfort prediction by crowdsourcing the occupants’ preferences in that space.

6. Conclusions

This paper presents how micro ecological momentary assessments of subjective comfort can generate sufficiently large intensive longitudinal data for occupant comfort prediction and enhancement that can supplement objective environmental sensor data and empirical comfort models. Results of an implementation of the platform on 30 occupants showed the segmentation and variation of indoor occupant comfort tendencies and highlighted the shortcomings of one-size-fits-all comfort models that are commonly applied in real buildings. Furthermore, the use of a smartwatch enabled data collection at a sufficient frequency to build time-series models of indoor spaces. These models could be used to detect building anomalies, serve as input data for subjective-feedback-driven building control, or recommend spaces that best match the comfort preference tendencies of each occupant. The optimum technological set-up uses a smartwatch for subjective data collection, combined with a method for localising the occupant in the building. This localisation may be achieved by asking the occupant directly through the smartwatch, or through Bluetooth or WiFi signals.

Author Contributions

P.J.: hardware, software, infrastructure development, experimental design, implementation and lead author of the paper; M.Q.: software, infrastructure development, data analysis, machine learning lead and author of the paper; M.A.: software, infrastructure development, and author of the paper; C.M.: funding, project leadership, experimental design, the corresponding author of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The Singapore Ministry of Education (MOE) (R296000181133 and R296000214114) and the National University of Singapore (R296000158646) provided support for the development and implementation of this research.

Acknowledgments

This research contributes to the body of work for the International Energy Agency (IEA) Energy in Building and Communities (EBC) Annex 79—Occupant-Centric Building Design and Operation. The authors would like to acknowledge Teo Yi Ting for her assistance in deployment of the experiment.

Conflicts of Interest

Two of the authors, P.J. and C.M., provided consulting services for the deployment of a similar methodology that occurred independently of the experiments and analysis outlined in this publication.

References

  1. Frontczak, M.; Schiavon, S.; Goins, J.; Arens, E.; Zhang, H.; Wargocki, P. Quantitative relationships between occupant satisfaction and satisfaction aspects of indoor environmental quality and building design. Indoor Air 2012, 22, 119–131.
  2. Sakellaris, I.; Saraga, D.; Mandin, C.; Roda, C.; Fossati, S.; De Kluizenaar, Y.; Carrer, P.; Dimitroulopoulou, S.; Mihucz, V.; Szigeti, T.; et al. Perceived indoor environment and occupants’ comfort in European “modern” office buildings: The OFFICAIR study. Int. J. Environ. Res. Public Health 2016, 13, 444.
  3. Fanger, P.O. Thermal Comfort. Analysis and Applications in Environmental Engineering; McGraw-Hill: New York, NY, USA, 1970.
  4. De Dear, R.; Brager, G.S. Developing an adaptive model of thermal comfort and preference. ASHRAE Trans. 1998, 104, 145–167.
  5. Asadi, I.; Hussein, I.; Palanisamy, K.; et al. A survey of evaluation methods used for holistic comfort assessment. PLoS ONE 2016, 953–954, 1513–1519.
  6. Tang, H.; Ding, Y.; Singer, B. Interactions and comprehensive effect of indoor environmental quality factors on occupant satisfaction. Build. Environ. 2020, 167.
  7. Henry, C.; Emery, B. Effect of spiced food on metabolic rate. Hum. Nutr. Clin. Nutr. 1986, 40, 165–168.
  8. Swaminathan, R.; King, R.; Holmfield, J.; Siwek, R.; Baker, M.; Wales, J. Thermic effect of feeding carbohydrate, fat, protein and mixed meal in lean and obese subjects. Am. J. Clin. Nutr. 1985, 42, 177–181.
  9. Nicol, J.F.; Wilson, M. A critique of European Standard EN 15251: Strengths, weaknesses and lessons for future standards. Build. Res. Inf. 2011, 39, 183–193.
  10. Webster, T.; Bauman, F.; Anwar, G. CBE Portable Wireless Monitoring System (PWMS): UFAD Systems Commissioning Cart Design Specifications and Operating Manual. In Internal Report; Center for the Built Environment, UC Berkeley: Berkeley, CA, USA, 2007; p. 4.
  11. Choi, J.H.; Loftness, V.; Aziz, A. Post-occupancy evaluation of 20 office buildings as basis for future IEQ standards and guidelines. Energy Build. 2012, 46, 167–175.
  12. Kim, H. Methodology for Rating a Building’s Overall Performance based on the ASHRAE/CIBSE/USGBC Performance Measurement Protocols for Commercial Buildings. J. Chem. Inf. Model. 2013, 53, 1689–1699.
  13. Parkinson, T.; Parkinson, A.; de Dear, R. Continuous IEQ monitoring system: Context and development. Build. Environ. 2019, 149, 15–25.
  14. Schiavon, S.; Yang, B.; Donner, Y.; Chang, V.C.; Nazaroff, W.W. Thermal comfort, perceived air quality, and cognitive performance when personally controlled air movement is used by tropically acclimatized persons. Indoor Air 2017, 27, 690–702.
  15. Hodder, S.G.; Parsons, K. The effects of solar radiation on thermal comfort. Int. J. Biometeorol. 2007, 51, 233–250.
  16. Kräuchi, K. How is the circadian rhythm of core body temperature regulated? Clin. Auton. Res. 2002, 12, 147–149.
  17. Chinazzo, G.; Wienold, J.; Andersen, M. Daylight affects human thermal perception. Sci. Rep. 2019, 9, 1–15.
  18. Halawa, E.; van Hoof, J.; Soebarto, V. The impacts of the thermal radiation field on thermal comfort, energy consumption and control—A critical overview. Renew. Sustain. Energy Rev. 2014, 37, 907–918.
  19. Fukazawa, T.; Havenith, G. Differences in comfort perception in relation to local and whole body skin wettedness. Eur. J. Appl. Physiol. 2009, 106, 15–24.
  20. Kingma, B.; Frijns, A.; van Marken Lichtenbelt, W. The thermoneutral zone: Implications for metabolic studies. Front. Biosci. (Elite Ed.) 2012, 4, 1975–1985.
  21. Tikuisis, P.; Ducharme, M.B. The effect of postural changes on body temperatures and heat balance. Eur. J. Appl. Physiol. Occup. Physiol. 1996, 72, 451–459.
  22. Gagge, A.P.; Stolwijk, J.; Hardy, J. Comfort and thermal sensations and associated physiological responses at various ambient temperatures. Environ. Res. 1967, 1, 1–20.
  23. Gaesser, G.A.; Brooks, G.A. Muscular efficiency during steady-rate exercise: Effects of speed and work rate. J. Appl. Physiol. 1975, 38, 1132–1139.
  24. Havenith, G.; Kuklane, K.; Fan, J.; Hodder, S.; Ouzzahra, Y.; Lundgren, K.; Au, Y.; Loveday, D.L. A database of static clothing thermal insulation and vapor permeability values of non-western ensembles for use in ASHRAE Standard 55, ISO 7730, and ISO 9920 CH-15-018 (RP-1504). ASHRAE Trans. 2015, 121, 1.
  25. Johnson, J.M.; Kellogg, D.L., Jr. Thermoregulatory and thermal control in the human cutaneous circulation. Front. Biosci. (Schol. Ed.) 2010, 2, 825–853.
  26. Zhang, H.; Hedge, A. Laptop heat and models of user thermal discomfort. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting; SAGE Publications: Los Angeles, CA, USA, 2014; Volume 58, pp. 1456–1460.
  27. Karjalainen, S. Thermal comfort and gender: A literature review. Indoor Air 2012, 22, 96–109.
  28. Huang, H.W.; Wang, W.C.; Lin, C.C.K. Influence of age on thermal thresholds, thermal pain thresholds, and reaction time. J. Clin. Neurosci. 2010, 17, 722–726.
  29. Havenith, G.; Holmér, I.; Parsons, K. Personal factors in thermal comfort assessment: Clothing properties and metabolic heat production. Energy Build. 2002, 34, 581–591.
  30. Frank, S.M.; Raja, S.N.; Bulcao, C.F.; Goldstein, D.S. Relative contribution of core and cutaneous temperatures to thermal comfort and autonomic responses in humans. J. Appl. Physiol. 1999, 86, 1588–1593.
  31. Boulant, J.A. Role of the preoptic-anterior hypothalamus in thermoregulation and fever. Clin. Infect. Dis. 2000, 31, S157–S161.
  32. Cabanac, M. Physiological role of pleasure. Science 1971, 173, 1103–1107.
  33. Brainard, G.C.; Hanifin, J.P.; Greeson, J.M.; Byrne, B.; Glickman, G.; Gerner, E.; Rollag, M.D. Action spectrum for melatonin regulation in humans: Evidence for a novel circadian photoreceptor. J. Neurosci. 2001, 21, 6405–6412.
  34. Perez, R.; Ineichen, P.; Seals, R.; Michalsky, J.; Stewart, R. Modeling daylight availability and irradiance components from direct and global irradiance. Sol. Energy 1990, 44, 271–289.
  35. Leather, P.; Pyrgas, M.; Beale, D.; Lawrence, C. Windows in the workplace: Sunlight, view, and occupational stress. Environ. Behav. 1998, 30, 739–762.
  36. Dai, Q.; Hao, L.; Lin, Y.; Cui, Z. Spectral optimization simulation of white light based on the photopic eye-sensitivity curve. J. Appl. Phys. 2016, 119, 053103.
  37. Slater, A.I.; Boyce, P.R. Illuminance uniformity on desks: Where is the limit? Light. Res. Technol. 1990, 22, 165–174.
  38. Baron, R.A.; Rea, M.S.; Daniels, S.G. Effects of indoor lighting (illuminance and spectral distribution) on the performance of cognitive tasks and interpersonal behaviors: The potential mediating role of positive affect. Motiv. Emot. 1992, 16, 1–33.
  39. Wilkins, A.J.; Nimmo-Smith, I.; Slater, A.I.; Bedocs, L. Fluorescent lighting, headaches and eyestrain. Light. Res. Technol. 1989, 21, 11–18.
  40. Main, A.; Dowson, A.; Gross, M. Photophobia and phonophobia in migraineurs between attacks. Headache J. Head Face Pain 1997, 37, 492–495.
  41. Yin, J.; Zhu, S.; MacNaughton, P.; Allen, J.G.; Spengler, J.D. Physiological and cognitive performance of exposure to biophilic indoor environment. Build. Environ. 2018, 132, 255–262.
  42. Hopkinson, R.G. Glare from daylighting in buildings. Appl. Ergon. 1972, 3, 206–215.
  43. Pierrette, M.; Parizet, E.; Chevret, P.; Chatillon, J. Noise effect on comfort in open-space offices: Development of an assessment questionnaire. Ergonomics 2015, 58, 96–106.
  44. Job, R. Community response to noise: A review of factors influencing the relationship between noise exposure and reaction. J. Acoust. Soc. Am. 1988, 83, 991–1001. [Google Scholar] [CrossRef]
  45. Kim, J.; De Dear, R. Workspace satisfaction: The privacy-communication trade-off in open-plan offices. J. Environ. Psychol. 2013, 36, 18–26. [Google Scholar] [CrossRef] [Green Version]
  46. Templeton, D.; Saunders, D. Acoustic Design; Elsevier: Amsterdam, The Netherlands, 2014. [Google Scholar]
  47. Lee, S.Y.; Brand, J. Can personal control over the physical environment ease distractions in office workplaces? Ergonomics 2010, 53, 324–335. [Google Scholar] [CrossRef] [PubMed]
  48. Kjellberg, A.; Sköldström, B. Noise annoyance during the performance of different nonauditory tasks. Percept. Mot. Skills 1991, 73, 39–49. [Google Scholar] [CrossRef] [PubMed]
  49. Banbury, S.P.; Berry, D.C. Office noise and employee concentration: Identifying causes of disruption and potential improvements. Ergonomics 2005, 48, 25–37. [Google Scholar] [CrossRef]
  50. Cheung, T.; Schiavon, S.; Parkinson, T.; Li, P.; Brager, G. Analysis of the accuracy on PMV–PPD model using the ASHRAE Global Thermal Comfort Database II. Build. Environ. 2019, 153, 205–217. [Google Scholar] [CrossRef] [Green Version]
  51. Stone, A.A.; Shiffman, S.; Atienza, A.A.; Nebeling, L. Historical roots and rationale of ecological momentary assessment (EMA). In The Science of Real-Time Data Capture: Self-Reports in Health Research; Oxford University Press: Oxford, UK, 2007; pp. 3–10. [Google Scholar]
  52. Intille, S.; Haynes, C.; Maniar, D.; Ponnada, A.; Manjourides, J. μEMA: Microinteraction-based ecological momentary assessment (EMA) using a smartwatch. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 1124–1128. [Google Scholar]
  53. Wang, D.; Amin, M.T.; Li, S.; Abdelzaher, T.; Kaplan, L.; Gu, S.; Pan, C.; Liu, H.; Aggarwal, C.C.; Ganti, R.; et al. Using humans as sensors: An estimation-theoretic perspective. In Proceedings of the 13th International Symposium on Information Processing in Sensor Networks, Berlin, Germany, 15–17 April 2014; pp. 35–46. [Google Scholar]
  54. Avvenuti, M.; Cimino, M.G.C.A.; Cresci, S.; Marchetti, A.; Tesconi, M. A framework for detecting unfolding emergencies using humans as sensors. Springerplus 2016, 5, 43. [Google Scholar] [CrossRef]
  55. Vielberth, M.; Menges, F.; Pernul, G. Human-as-a-security-sensor for harvesting threat intelligence. Cybersecurity 2019, 2, 23. [Google Scholar] [CrossRef] [Green Version]
  56. Heinzerling, D.; Schiavon, S.; Webster, T.; Arens, E. Indoor environmental quality assessment models: A literature review and a proposed weighting and classification scheme. Build. Environ. 2013, 70, 210–222. [Google Scholar] [CrossRef] [Green Version]
  57. Ncube, M.; Riffat, S. Developing an indoor environment quality tool for assessment of mechanically ventilated office buildings in the UK—A preliminary study. Build. Environ. 2012, 53, 26–33. [Google Scholar] [CrossRef] [Green Version]
  58. Wong, L.T.; Mui, K.W.; Hui, P.S. A multivariate-logistic model for acceptance of indoor environmental quality (IEQ) in offices. Build. Environ. 2008, 43, 1–6. [Google Scholar] [CrossRef]
  59. Lai, A.C.; Mui, K.W.; Wong, L.T.; Law, L.Y. An evaluation model for indoor environmental quality (IEQ) acceptance in residential buildings. Energy Build. 2009, 41, 930–936. [Google Scholar] [CrossRef]
  60. Cohen, R.; Standeven, M.; Bordass, B.; Leaman, A. Assessing building performance in use 1: The Probe process. Build. Res. Inf. 2001, 29, 85–102. [Google Scholar] [CrossRef]
  61. Webster, T.; Arens, E.; Anwar, G.; Bonnell, J.; Bauman, F.; Brown, C. UFAD Commissioning Cart: Design Specifications and Operating Manual. In Internal Report; Center for the Built Environment; UC Berkeley: Berkeley, CA, USA, 2007. [Google Scholar]
  62. Jin, M.; Liu, S.; Schiavon, S.; Spanos, C. Automated mobile sensing: Towards high-granularity agile indoor environmental quality monitoring. Build. Environ. 2018, 127, 268–276. [Google Scholar] [CrossRef] [Green Version]
  63. Porter, S.R.; Whitcomb, M.E.; Weitzer, W.H. Multiple surveys of students and survey fatigue. New Dir. Inst. Res. 2004, 2004, 63–73. [Google Scholar] [CrossRef]
  64. Liu, S.; Schiavon, S.; Das, H.P.; Jin, M.; Spanos, C.J. Personal thermal comfort models with wearable sensors. Build. Environ. 2019, 162, 106281. [Google Scholar] [CrossRef] [Green Version]
  65. Aryal, A.; Becerik-Gerber, B. A comparative study of predicting individual thermal sensation and satisfaction using wrist-worn temperature sensor, thermal camera and ambient temperature sensor. Build. Environ. 2019, 160, 106223. [Google Scholar] [CrossRef]
  66. Clear, A.K.; Mitchell Finnigan, S.; Olivier, P.; Comber, R. ThermoKiosk: Investigating Roles for Digital Surveys of Thermal Experience in Workplace Comfort Management. Proc. CHI 2018, 1–12. [Google Scholar] [CrossRef] [Green Version]
  67. Engelen, L.; Held, F. Understanding the office: Using ecological momentary assessment to measure activities, posture, social interactions, mood, and work performance at the workplace. Buildings 2019, 9, 54. [Google Scholar] [CrossRef] [Green Version]
  68. Monnot, B.; Wilhelm, E.; Piliouras, G.; Zhou, Y.; Dahlmeier, D.; Lu, H.Y.; Jin, W. Inferring Activities and Optimal Trips: Lessons From Singapore’s National Science Experiment. In Complex Systems Design & Management Asia; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 247–264. [Google Scholar]
  69. Wilhelm, E.; Zhou, Y.; Zhang, N.; Kee, J.; Loh, G.; Tippenhauer, N. Sensg: Large-Scale Deployment of Wearable Sensors for Trip and Transport Mode Logging. In Proceedings of the Transportation Research Board 95th Annual Meeting, Washington, DC, USA, 10–14 January 2016. [Google Scholar]
  70. Benita, F.; Bansal, G.; Tunçer, B. Public spaces and happiness: Evidence from a large-scale field experiment. Health Place 2019, 56, 9–18. [Google Scholar] [CrossRef]
  71. Ojha, V.K.; Griego, D.; Kuliga, S.; Bielik, M.; Buš, P.; Schaeben, C.; Treyer, L.; Standfest, M.; Schneider, S.; König, R.; et al. Machine learning approaches to understand the influence of urban environments on human’s physiological response. Inf. Sci. 2019, 474, 154–169. [Google Scholar] [CrossRef]
  72. Rahaman, M.S.; Liono, J.; Ren, Y.; Chan, J.; Kudo, S.; Rawling, T.; Salim, F.D. An Ambient-Physical System to Infer Concentration in Open-plan Workplace. IEEE Internet Things J. 2020, 99, 1. [Google Scholar] [CrossRef]
  73. Sood, T.; Janssen, P.; Miller, C. Spacematch: Using environmental preferences to match occupants to suitable activity-based workspaces. Front. Built Environ. 2020, 6, 113. [Google Scholar] [CrossRef]
  74. Sood, T.; Quintana, M.; Jayathissa, P.; AbdelRahman, M.; Miller, C. The SDE4 Learning Trail: Crowdsourcing occupant comfort feedback at a net-zero energy building. J. Phys. Conf. Ser. 2019, 1343, 012141. [Google Scholar] [CrossRef]
  75. Jayathissa, P.; Quintana, M.; Sood, T.; Nazarian, N.; Miller, C. Is your clock-face cozie? A smartwatch methodology for the in-situ collection of occupant comfort data. J. Phys. Conf. Ser. 2019, 1343, 012145. [Google Scholar] [CrossRef]
  76. Abdelrahman, M.M.; Jayathissa, P.; Miller, C. YAK: An Indoor Positioning App for Spatial-Temporal Indoor Environmental Quality. ResearchGate 2019. [Google Scholar] [CrossRef]
  77. Kim, J.; Zhou, Y.; Schiavon, S.; Raftery, P.; Brager, G. Personal comfort models: Predicting individuals’ thermal preference using occupant heating and cooling behavior and machine learning. Build. Environ. 2018, 129, 96–106. [Google Scholar] [CrossRef] [Green Version]
  78. Luo, M.; Xie, J.; Yan, Y.; Ke, Z.; Yu, P.; Wang, Z.; Zhang, J. Comparing machine learning algorithms in predicting thermal sensation using ASHRAE Comfort Database II. Energy Build. 2020, 210, 109776. [Google Scholar] [CrossRef]
  79. Kim, J.; Schiavon, S.; Brager, G. Personal comfort models—A new paradigm in thermal comfort for occupant-centric environmental control. Build. Environ. 2018, 132, 114–124. [Google Scholar] [CrossRef] [Green Version]
  80. Lipczynska, A.; Schiavon, S.; Graham, L.T. Thermal comfort and self-reported productivity in an office with ceiling fans in the tropics. Build. Environ. 2018, 135, 202–212. [Google Scholar] [CrossRef] [Green Version]
  81. Enescu, D. A review of thermal comfort models and indicators for indoor environments. Renew. Sustain. Energy Rev. 2017, 79, 1353–1379. [Google Scholar] [CrossRef]
  82. Barrios, L.; Kleiminger, W. The Comfstat—Automatically sensing thermal comfort for smart thermostats. In Proceedings of the 2017 IEEE International Conference on Pervasive Computing and Communications, PerCom 2017, Kona, HI, USA, 13–17 March 2017; pp. 257–266. [Google Scholar] [CrossRef]
  83. Park, J.Y.; Nagy, Z. Comprehensive analysis of the relationship between thermal comfort and building control research—A data-driven literature review. Renew. Sustain. Energy Rev. 2018, 82, 2664–2679. [Google Scholar] [CrossRef]
  84. Zhang, H.; Arens, E.; Zhai, Y. A review of the corrective power of personal comfort systems in non-neutral ambient environments. Build. Environ. 2015, 91, 15–41. [Google Scholar] [CrossRef] [Green Version]
  85. Park, J.Y.; Ouf, M.M.; Gunay, B.; Peng, Y.; O’Brien, W.; Kjærgaard, M.B.; Nagy, Z. A critical review of field implementations of occupant-centric building controls. Build. Environ. 2019, 165, 106351. [Google Scholar] [CrossRef]
  86. Park, J.Y.; Dougherty, T.; Fritz, H.; Nagy, Z. LightLearn: An adaptive and occupant centered controller for lighting based on reinforcement learning. Build. Environ. 2019, 147, 397–414. [Google Scholar] [CrossRef]
  87. O’Brien, W.; Wagner, A.; Schweiker, M.; Mahdavi, A.; Day, J.; Kjærgaard, M.B.; Carlucci, S.; Dong, B.; Tahmasebi, F.; Yan, D.; et al. Introducing IEA EBC annex 79: Key challenges and opportunities in the field of occupant-centric building design and operation. Build. Environ. 2020, 178, 106738. [Google Scholar] [CrossRef]
Figure 1. Graphical review of physiological, psychological and environmental factors influencing human comfort. Thermal—clockwise from top left: adaptation to outdoor environment [4], air flow [14], solar radiation [15], circadian rhythm [16], daylight perception [17], environmental long-wave radiation [18], perspiration [19], diet-induced thermogenesis [8], subcutaneous fat thickness [20], posture [21], temperature/humidity [3,22], physical activity [23], clothing level [24], vascular anatomy [25], near-body heat sources [26], gender [27], age [28], basal metabolic rate [29], thermal regulation [30,31], alliesthesia [32]. Visual—clockwise from top left: circadian calibration [33], daylight [34], view [35], spectrum [36], uniformity [37], illuminance [38], flicker [39], susceptibility to migraines [40], biophilia [41], glare [42]. Aural—clockwise from top: seniority in a company [43], subjective sensitivity [44], sound privacy [45], sound absorption [46], controllability [47], task [48], variable and constant noise [49].
Figure 2. The cozie watch-face, built on the Fitbit smartwatch platform, was used to collect subjective feedback. The phone paired with the Fitbit can be used to set up additional questions.
Figure 3. Overview of the experimental deployment in the NUS SDE buildings in four distinct tiers: Tier 1 is the base methodology, which is production-ready for real-building deployment and requires only a smartwatch with the cozie clock-face installed. Tier 1b extends the base methodology by adding a temperature sensor to the watch. Tier 2 adds building-wide indoor localisation; in this experiment, Steerpath Bluetooth beacons communicated with the occupant’s smartphone to determine the occupant’s location. Tier 3 merges the localised feedback points with environmental sensors in the same comfort zone as the occupant.
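To make the Tier 3 step concrete, the minimal sketch below joins each localised feedback vote with the most recent reading from the environmental sensors in the occupant’s comfort zone. It assumes pandas; the file names, column names, and the 10-minute matching tolerance are hypothetical and not taken from the study.

```python
import pandas as pd

# Hypothetical schemas: Tier 1/2 feedback (with zone from indoor localisation)
# and Tier 3 zone-level environmental sensor readings.
feedback = pd.read_csv("cozie_feedback.csv", parse_dates=["timestamp"])
sensors = pd.read_csv("zone_sensors.csv", parse_dates=["timestamp"])

# merge_asof requires both frames to be sorted on the time key.
feedback = feedback.sort_values("timestamp")
sensors = sensors.sort_values("timestamp")

# Attach the most recent sensor reading from the same comfort zone to each
# feedback vote, tolerating up to a 10-minute gap (assumed threshold).
merged = pd.merge_asof(
    feedback,
    sensors,
    on="timestamp",
    by="zone_id",
    direction="backward",
    tolerance=pd.Timedelta("10min"),
)
print(merged.head())
```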
Figure 4. Overview of the intensive longitudinal data collected from the occupants according to the three categories. Each row is an occupant, and each box in that row shows that occupant’s feedback answers in the order collected. The visualisation is diagrammatic: vertical alignment of the boxes between different occupants does not imply identical timestamps.
Figure 5. K-means clustering of preference tendencies, quantified by the share of votes, by occupant (a) and by room (b) within the test building. Each row presents the percentage of votes that fell into the respective preference category; darker cells indicate a higher percentage.
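The clustering behind Figure 5 groups occupants (and, analogously, rooms) by the proportion of their votes in each preference category. The following is a minimal sketch of that style of clustering, assuming scikit-learn and pandas; the file name, column names, and the choice of three clusters are illustrative assumptions rather than the study’s exact configuration.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical feedback table: one row per vote, with an occupant id and the
# thermal preference answer ("prefer warmer" / "no change" / "prefer cooler").
votes = pd.read_csv("cozie_feedback.csv")

# Per-occupant preference tendency: share of votes in each category.
tendencies = (
    votes.groupby("user_id")["thermal_preference"]
    .value_counts(normalize=True)
    .unstack(fill_value=0.0)
)

# Cluster occupants into three tendency groups, as in Figure 5a.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
tendencies["cluster"] = kmeans.fit_predict(tendencies)
print(tendencies.sort_values("cluster"))
```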
Figure 6. Interactive visualisation of the data collection in the SDE4 building, highlighting the spatial distribution of subjective thermal preference feedback in three dimensions. Blue dots indicate prefer cooler responses, yellow dots no change, and red dots prefer warmer.
Figure 7. Distribution of sensor data by preference vote. While trends can be observed, many feedback votes overlap for the same environmental or physiological measurement. This is possibly due to the different comfort tendencies shown in Figure 5, or to the numerous other variables described in Figure 1 that are not accounted for. Near-body temperature and noise appear to show the most distinct differentiation.
Figure 8. Left: comparison of prediction F1-micro score between grouped and individual comfort models using different feature sets. For the thermal model, the feature set that excluded environmental sensor data had the highest F1-score, while minimal differences in F1-score were noted between feature sets for the visual and aural models. Right: accuracy in predicting the comfort of an individual as further participants are added to the training set. The blue line includes the test participant’s own data in the training set, while the orange line excludes it, meaning that the prediction depends on crowdsourced feedback from other occupants.
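Figure 8 reports F1-micro scores for multi-class preference prediction. The sketch below shows one way such per-occupant scores could be computed, assuming scikit-learn; the random-forest classifier, feature names, and train/test split are illustrative assumptions and not necessarily the models or features used in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical merged dataset: one row per vote, with sensor/physiological
# features and the thermal preference label. Column names are illustrative.
data = pd.read_csv("merged_feedback.csv")
features = ["near_body_temp", "zone_temp", "heart_rate", "hour_of_day"]

scores = {}
for user_id, user_data in data.groupby("user_id"):
    X_train, X_test, y_train, y_test = train_test_split(
        user_data[features],
        user_data["thermal_preference"],
        test_size=0.25,
        random_state=42,
    )
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    # F1-micro over the three preference classes, as reported in Figure 8.
    scores[user_id] = f1_score(y_test, model.predict(X_test), average="micro")

print(pd.Series(scores).describe())
```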
Figure 9. Comfort prediction for two zones for an average occupant over a week. The grey circles indicate votes given for each category, and the shaded-out sections indicate times when no data were present. These time-series predictions can be used to detect anomalies, such as the mid-day peak in warmer preference in the office, or the general discomfort in the outdoor seating area. Note that no data were collected in these zones between 22:00 and 7:00 or on the weekend; this lack of data caused inaccurate predictions, as seen in the square-shaped peaks in the office.
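Figure 9 aggregates model outputs into weekly time-series profiles per zone. As a rough illustration of that aggregation (not the authors’ exact procedure), the sketch below resamples hypothetical per-vote predictions into hourly shares of prefer warmer responses per zone, leaving gaps where no data exist, such as the 22:00–7:00 and weekend periods noted in the caption.

```python
import pandas as pd

# Hypothetical per-vote model predictions with zone and timestamp columns.
pred = pd.read_csv("zone_predictions.csv", parse_dates=["timestamp"])

# Hourly share of "prefer warmer" predictions for each zone over the week,
# similar in spirit to the profiles in Figure 9. Empty hours yield NaN.
pred["prefer_warmer"] = (pred["predicted_thermal"] == "prefer warmer").astype(int)
profile = (
    pred.set_index("timestamp")
        .groupby("zone_id")["prefer_warmer"]
        .resample("1H")
        .mean()
        .unstack("zone_id")
)
print(profile.head(24))
```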
