Article

Impact of Virtual Reality-Based Design Review System on User’s Performance and Cognitive Behavior for Building Design Review Tasks

1
Department of Construction Engineering & Management, National University of Sciences and Technology (NUST), Risalpur 24080, Pakistan
2
School of Architectural, Civil, Environment and Energy Engineering, Kyungpook National University, Daegu 41566, Korea
3
Department of Civil and Environmental Engineering, Hanyang University, Seoul 04763, Korea
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2022, 12(14), 7249; https://doi.org/10.3390/app12147249
Submission received: 29 June 2022 / Revised: 13 July 2022 / Accepted: 17 July 2022 / Published: 19 July 2022
(This article belongs to the Section Environmental Sciences)

Abstract: Virtual reality (VR) can potentially enhance various design- and construction-assessment-intensive tasks, such as construction design and review. However, it may lead to cognitive overload, adversely affecting participants' performance. Understanding the effects of VR on cognitive behavior is therefore critical for implementing VR technology in the construction industry. The principal objective of this study was to investigate participants' cognitive load (CL), task performance (TP), and situational awareness (SA) in the VR environment during the evaluation of building design review tasks. Participants were asked to review the design task, based on their knowledge and understanding, in one of three environments: paper-based, monitor-based, and immersive virtual environment. Participants' CL was measured using the National Aeronautics and Space Administration Task Load Index (NASA TLX), TP was evaluated on completion time and the number of errors correctly detected, and SA was assessed using the Situational Awareness and Review Technique (SART). The statistical results show a higher CL and better performance in the immersive virtual environment. These findings can contribute to a better understanding of cognitive process characteristics and capabilities for design review activities in the VR environment.

1. Introduction

The Architecture, Engineering and Construction (AEC) industry produces complex, customized, temporary, and unique products. The design process is iterative and relies strongly on individual experience [1]. Designers use their mental abilities and supporting informational documents to develop the AEC design model. One major challenge the AEC industry faces is an inefficient construction design approval process, owing to the slow adoption of modern technologies such as Building Information Modelling (BIM), Virtual Reality (VR), Augmented Reality (AR), and cloud computing [2,3,4]. These modern technologies have the potential to address these deficiencies, and their multifaceted implementations across the industry make them well suited to construction projects [5]. BIM allows stakeholders to examine the design and functional properties of a building and to perform other tasks such as cost estimation, project planning, scheduling, resource management, and structural analysis [6,7,8]. It can also help the construction industry improve safety planning, on-site communication, constructability, and design review meetings [9,10,11]. Furthermore, VR technology has emerged recently and revolutionized multiple sectors. The AEC sector has embraced the combination of VR with BIM to improve the visualization of a virtual world and the interaction with the real world and its components, as VR technologies couple information systems with immersive environments.
BIM-based VR technologies enable project stakeholders to walk through a virtual environment (VE) while viewing a 1:1-scale three-dimensional (3D) model. Users can navigate the model at the same scale as the actual building and review its design, e.g., the sill level of windows and doors, ceiling height, and beam and column sizes [12]. Understanding the significance of the Virtual Reality Environment (VRE) is important because it provides an intermediate design representation that the organization can evaluate during critical analysis [4]. Most of the time, design review meetings in the VRE involve modifications to different components of the architectural design [13]. During the project phase, important decisions regarding cost, quality, and schedule influence the overall construction estimate. Therefore, in the design review meeting, each activity and its related materials and specifications are discussed. In the end, amendments are made to the initial draft before the commencement of actual construction.
VR in construction design visualization reproduces the experiential, human-scale architecture of the real world and is expected to boost construction efficiency and save cost and time [14]. As a result, VR supports cognitive-based construction tasks, including assembly placement [15] and arrangement and inspection [16], while minimizing mental effort and task completion time. VR can improve spatial and conceptual learning, immersion and presence, and cognitive and psychomotor outcomes. However, studies have demonstrated that, when it comes to cognitive effects such as knowledge and understanding, Immersive Virtual Reality (IVR) does not outperform traditional approaches or Non-Immersive Virtual Reality (NIVR), even though, as a 3D, 360° experience, IVR would likely provide more information than traditional methods [17]. Immersive interaction can lead to cognitive side effects, wasted time, loss of access to reality, and powerful emotions [18]. Augmented reality frequently supports users' cognitive abilities by providing superimposed information; however, such information can cause cognitive overload, which might negatively impact participants' performance [19]. Zhong found that VR training may help individuals with their cognitive and executive function [20]. Another study stated that Cognitive Load (CL), which concerns the transfer of knowledge from working memory to long-term memory, will play a major part in future high-end IVR device applications, and many researchers want to explore CL in these new environments [21]. To date, there have been no comprehensive studies investigating the impact of VR on user performance and cognitive behavior for design review tasks in the building construction industry.
This study set out to explore the impact of VR-based construction design review tasks on construction professionals by investigating the CL, Task Performance (TP), and Situational Awareness (SA) of participants in three distinct environments: 3D monitor-based VR, head-mounted-display-based VR, and paper-based design review. An experimental methodological approach was adopted to achieve the research objective: three participant groups were given residential building design review tasks using one of three techniques: a VR headset, a monitor screen, or traditional paper drawings. TP was evaluated on task completion time and error rate, CL was calculated using the National Aeronautics and Space Administration Task Load Index (NASA-TLX) [22], and SA was assessed using the Situational Awareness and Review Technique (SART) [23] in a site-like design simulation setting. The results provide important insights into participants' TP, CL, and SA in the three environments and into the impact of VR-based construction design tasks. The present research makes an important contribution as the first extensive study to examine user performance and cognitive behavior in the VRE for design review tasks in the construction industry.

2. Literature Review

VR is a simulation of an environment, or a computer-generated VE, that allows participants to experience a place or event different from where they are physically present; a flight simulator is an early example of VR technology [24]. In 1838, Charles Wheatstone's stereoscope featured two mirrors positioned at a 45° angle to the user's eyes, each reflecting an image located off to the side; it was the first concept to give VR-like sensory immersion [25]. In the 1950s, Morton Heilig invented the Sensorama, the first sensory display [26]. It offered a scripted, arcade-like experience, and eleven years later he also invented the first head-mounted display (HMD) prototype, which provided a stereoscopic image with stereo sound, although without interactive response or motion tracking. According to previous studies, commercial VR development began in 1988, and in 1991 the first commercial VR entertainment system, called "Virtuality", was unveiled [27]. In 1992, Steuer defined VR as a type of human experience enabled by the sensation of being present in a given environment [28].
Owing to technological advancements, VR is now used in many applications across medical sciences, video gaming, cinema and entertainment, education and training, engineering, architecture, and urban planning. Palmer Luckey designed the prototype of the Oculus Rift, which had the capability of rotational tracking [29]. In 2015, HTC and Valve Corporation collaborated on the HTC VIVE VR headset and motion controllers, both built on Valve's SteamVR platform. This release introduced novel positional tracking technology, which used infrared light and specially designed wall-mounted base stations to track the user's location. At the start of 2017, Sony developed a similar tracking system for PlayStation VR and used the same technology to create a wireless headset. In 2019, the standalone Oculus Quest headset and the Oculus Rift S were launched by Oculus. These headsets used inside-out tracking, which differed from the outside-in tracking used in earlier headsets [30]. Later in 2019, Valve introduced a headset with notable features: a 130° field of view and off-ear headphones for comfort and immersion; its open-handed controllers support individual finger tracking, and the headset includes front-facing cameras and a front expansion slot designed for extensibility [31]. Oculus introduced the Oculus Quest 2 in 2020 with improved performance, a lower price, and a better screen; to use this new headset, users must sign in with a Facebook account [32]. In 2021, the European Union Aviation Safety Agency (EASA) approved the first VR-based Flight Simulation Training Device. The device improves rotorcraft pilot safety by letting pilots practice dangerous maneuvers in a virtual environment [33]. As COVID-19 regulations were enacted in 2020 and 2021, the virtual reality industry witnessed a rapid boom.

2.1. Virtual Reality in the Design and Construction

Recent technological advancements have enabled construction practitioners to improve the construction methodology and quality of project designs. In the early design process, 2D architectural drawings cannot represent and communicate the full range of possible solutions. Evaluating a design against construction requirements and specifications is known as a design review [34]. Previously, a common way to conduct a design review was using two-dimensional (2D) computer-aided designs and physical assets [35]. Design review has evolved to include different visualization tools, owing to the rapid development of technology in the construction industry [36,37]. The use of computer-generated designs and visualizations has improved in the recent past, and this continual improvement in visualization has reduced design review problems [36,38].
A visualization technology gaining interest in design reviews is IVR. The study in [39] found that design reviews in the VE result in a better understanding of the proposed design, more efficient meetings, and better team management. The study in [40] proposed that VR engages reviewers by reducing the effort required to contemplate the design; it also concluded that the level of detail in the VR model is important, because too much detail may disproportionately distract from the original purpose of the review. VR applications have also been seen in industries other than construction, such as reviewing the performance of nuclear power plants [41], medical patient rooms [42], education [43], and safety training [44]. Paes and Irizarry compared the traditional workspace with IVR platforms and found that users' spatial perception improved in an immersive virtual environment (IVE) [45]. Florio suggests that the design reviewer uses visualization tools, models, and prototypes to "confirm or reject each hypothesis" during this experimental process, known as design review or critical analysis [14]. Moreover, the use of virtual 3D models helps stakeholders who are not fluent in 2D symbols and notations to understand the design, resulting in improved communication, more collaborative tasks, and the development of more integrated solutions [46,47,48].

2.2. Impact of Virtual Reality on the Cognitive Load, Task Performance, and Situational Awareness

According to research, the study of cognition arose in tandem with the advancement of computers and artificial intelligence (AI) [14]. The term cognition is associated with computing and analyzing information; researchers define it as the ability to acquire knowledge, involving rich information, through reasoning and perception. Human cognition involves gathering information and developing experiences through interactions with the environment, as shown in Figure 1 [49]. Every human perceives, processes, and creates a mental portrayal of their particular reality. According to [50], designers think about what they are doing while doing it, a process called "reflection-in-action".
Consumer-grade VR headsets are still relatively scarce; thus, measuring the TP of these commercial VR systems is difficult. Furthermore, these VR systems are composed of various components, including VR headsets, desktop monitors, smartphones, and VR applications, each of which has a direct impact on the user's performance. TP measures assume that an individual's mental workload while interacting with the system during a task is a good indicator of CL [51]. Task completion time and error identification rate are examples of CL and TP metrics [52].
The NASA Task Load Index (TLX) is a subjective workload assessment tool for evaluating participants working with different human–machine interface systems. In 1988, Hart and Staveland developed the NASA TLX questionnaire to quantify the physical and mental load associated with performing a given task [53]. NASA TLX uses a six-dimensional rating system to calculate an overall CL score, based on the weighted average of ratings on six subscales: mental demand, physical demand, temporal demand, performance, effort, and frustration. NASA TLX has been used to measure CL in physical, virtual, simulation, and lab settings [15,19,54,55].
In their examination of SA measurement, Salman et al. [56,57] categorized existing techniques into five categories: (1) physiological methods [58], which rely on heart rate, electroencephalography (EEG) [59,60], and, most recently, electrodermal activity (EDA) [61]; (2) performance methods, such as task success or failure and detection of hazards; (3) observer rating techniques, such as the situational awareness behavioral rating technique [62]; (4) self-rating techniques, such as the crew and mission awareness scale and the situational awareness review technique (SART) [23]; and (5) freeze rating techniques, such as the situational awareness global assessment technique (SAGAT). All of the above techniques have benefits and drawbacks. According to researchers, SART is generally acknowledged as low cost, simple to perform, and easy to analyze [63,64]. This technique has three dimensions: (1) demand on attentional resources (D), (2) supply of attentional resources (S), and (3) understanding of the situation (U).

3. Research Methodology

This paper proposes a new methodology to achieve the research objectives. To this end, this study created a realistic design review experiment for a residential building, in which construction experts searched for design errors. The VRE was created in the university BIM laboratory. Design review tasks were assigned to the participants in one of the three modalities shown in Figure 2: one group used the Oculus Quest 2 headset-based IVE, the second group used a monitor-based non-immersive virtual environment (NIVE), and the third group used traditional paper-based drawings. The experiment measured the impact of the VEs and the traditional paper-based review on the participants' task performance (TP) (number of errors and task completion time), their CL using NASA-TLX, and their SA using SART. The experiment steps and how they were performed are shown in Figure 2 and Figure 3.

3.1. Participants

Participants were selected based on their knowledge of the AEC domain from among post-graduate students of the Civil Engineering Department of the National University of Sciences and Technology. Ninety-six participants accepted the invitation to participate in the research after being informed through email and face-to-face interaction. All participants were post-graduate students with civil engineering knowledge: 43 were from the construction engineering and management department, 29 from the structural engineering department, and 24 from the transportation engineering department. Among them, 33 had one to four years of field experience. Participants were 22–30 years old, with an average age of 26. Of the 96 participants, 64 were male and 32 were female, and 22 had prior experience with virtual reality. The participants were divided into three equal groups of thirty-two: one group for the immersive environment using the Oculus Quest 2, a second group for the monitor-based VE, and a third group for the paper-based drawings. The immersive group included the 22 participants with prior VR experience and 10 willing participants without any. These 10 VR-inexperienced participants were given 25–30 min of VR experience through games at least one day before the experiment to avoid biases in the data. The participants' demographics were gathered to see how they would affect the investigation's findings. Participants were asked about their familiarity with VR games because the experiment uses a similar virtual interface, and prior exposure to such environments can affect participants' performance and sense of presence.

3.2. Task Overview

All participants were asked to find design errors in the drawings and design of a residential four-story building. Typical design errors and their categorization were collected from industry experts through interviews and a literature review [65]. The industry experts came from various construction sectors, including clients, consultants, contractors, and education. These design errors were incorporated into the building model of our study. Each participant played the role of a construction design reviewer tasked with identifying 12 types of design errors: (1) stair not connected to the upper floor, (2) slab and door/window clash, (3) column and door/window clash, (4) stair and beam clash, (5) stair and slab clash, (6) stair and column clash, (7) sill height error, (8) sill height of windows error, (9) beam size changed, (10) column size changed, (11) extra beam, and (12) floor level changed.

3.3. Experimental Procedure

In the paper-based design review experiment, participants were asked to identify design errors of each of the twelve types discussed above using their mental abilities, as shown in Figure 4 and Figure 5. The second group performed the same task in the NIVE, a monitor-based design review in which participants navigate a 3D model of the building and assess its errors. The 3D building design model was drawn in Revit 2020 and converted into a game-like VE, which the participants navigated using computer hardware devices. The last group of participants performed the same task in an IVE using the Oculus Quest 2 (Figure 6).
The participants in all three groups were asked to complete the design review task as quickly and effectively as possible, while their review time and the number of errors they identified were recorded. NASA-TLX was then used to calculate the CL at the end of each group's experiment. To support the measurement of participants' SA in the two VEs, a realistic construction environment was created using the sounds of a construction site. Participants' CL and TP were measured using the techniques discussed above, while their SA was measured using SART at the end of each session.

3.4. Measurements

The NASA-TLX method was used to measure the participants' CL. It is widely adopted because it is low-cost and provides a subjective mental workload (MWL) assessment. It contains six elements: mental demand, physical demand, temporal demand, effort, frustration, and performance. All of these elements were applied to measure the participants' CL except physical demand, i.e., the "physical effort required to do a task", which was not relevant in any of the three environments in our study. Performance, although already present among the NASA-TLX elements, was also measured directly because the NASA-TLX performance element incorporates self-esteem, satisfaction, and motivation. As a result, participants in each experiment rated mental demand, temporal demand, effort, frustration, and performance on a scale of 1 = Low to 5 = High, as shown in Table 1.
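As a concrete illustration of this scoring, the sketch below (not the authors' code; treating the overall CL as the unweighted mean of the five rated elements is our assumption) computes a participant's overall CL from the 1–5 ratings:

```python
# Illustrative sketch (assumption: overall CL = unweighted mean of the five
# NASA-TLX elements retained in this study, each rated on a 1-5 scale).
def overall_cognitive_load(ratings: dict) -> float:
    """ratings maps each retained NASA-TLX element name to a 1-5 rating."""
    elements = ["mental_demand", "temporal_demand", "effort",
                "frustration", "performance"]
    for e in elements:
        if not 1 <= ratings[e] <= 5:
            raise ValueError(f"{e} must be rated 1-5, got {ratings[e]}")
    return sum(ratings[e] for e in elements) / len(elements)

# Example: one hypothetical participant's ratings
sample = {"mental_demand": 3, "temporal_demand": 2, "effort": 2,
          "frustration": 1, "performance": 4}
print(overall_cognitive_load(sample))  # 2.4
```

A weighted average (with weights from pairwise comparisons, as in the original NASA-TLX procedure) would replace the plain mean if weighting were used.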
SART is a widely renowned technique for measuring SA. It is a subjective rating technique for assessing a participant's SA after a trial; SA was measured at the end of each experiment using a seven-point Likert scale ranging from 1 = low to 7 = high. The technique contains ten elements: (1) information quantity, (2) information quality, (3) familiarity, (4) instability of the situation, (5) variability of the situation, (6) complexity of the situation, (7) arousal, (8) concentration, (9) division of attention, and (10) spare mental capacity, as shown in Table 2. Furthermore, these factors are divided into three categories: the allocation of attentional resources to the present situation (S), attentional resource demand (D), and understanding of the surrounding conditions (U), where U is the sum of (1)–(3), D is the sum of (4)–(6), and S is the sum of (7)–(10). The participants' overall SART score can be calculated using Equation (1). Finally, TP was measured directly using the task completion time, i.e., how much time participants required to complete their design review task, and the number of errors identified correctly in each environment during the experimental session.
SA = U − [D − S]
where SA is situational awareness, U is understanding, D is attentional demand, and S is attentional supply.
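Equation (1) can be made concrete with a short scoring sketch (illustrative only; the element ordering follows the list of ten elements given above):

```python
# Illustrative SART scoring per Equation (1): SA = U - (D - S).
# ratings: 10 Likert values (1-7) ordered as elements (1)-(10) in the text.
def sart_score(ratings: list) -> int:
    if len(ratings) != 10 or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("expected 10 ratings, each between 1 and 7")
    U = sum(ratings[0:3])   # understanding: info quantity, info quality, familiarity
    D = sum(ratings[3:6])   # demand: instability, variability, complexity
    S = sum(ratings[6:10])  # supply: arousal, concentration, division, spare capacity
    return U - (D - S)

# Hypothetical participant: U = 15, D = 9, S = 16 -> SA = 15 - (9 - 16) = 22
print(sart_score([5, 5, 5, 3, 3, 3, 4, 4, 4, 4]))  # 22
```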

3.5. Data Analysis Techniques

Normality tests are sensitive to the sample size. The Shapiro–Wilk and Kolmogorov–Smirnov tests are the most well-known normality tests [66]. The Shapiro–Wilk test is a commonly used approach for determining data normality in sample sizes of fewer than 50 participants [67]. This test has become a popular normality check because of its good power properties [68]. It detects deviations from normality due to skewness, kurtosis, or both [69], and gives good results even with small sample sizes. The Kolmogorov–Smirnov test also checks the normality of data; it is more general but less powerful than the former [68]. In this test, the distribution of the statistic is independent of the cumulative distribution function being tested. In this research, both tests were performed, as the sample size was 32. The null hypothesis for both tests is that the data are normally distributed, and the alternative hypothesis is that they are not. A significance level (p) of 0.05 was used to test normality. If the p-value is greater than 0.05, we fail to reject the null hypothesis and conclude that the data are consistent with a normal distribution; if it is less than 0.05, we reject the null hypothesis and conclude that the data are not normally distributed. Based on these test results, we chose a parametric or non-parametric test.
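For illustration, both normality checks can be run with SciPy as follows (synthetic data, not the study's; note that the K-S p-value is approximate when the normal parameters are estimated from the sample, so a Lilliefors correction would be stricter):

```python
# Hedged sketch: Shapiro-Wilk and Kolmogorov-Smirnov normality checks
# on a synthetic group of 32 ratings (mirroring the study's group size).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=0.5, size=32)  # synthetic CL ratings

sw_stat, sw_p = stats.shapiro(sample)

# K-S needs a fully specified reference distribution, so standardize first;
# estimating mean/std from the data makes this p-value approximate.
z = (sample - sample.mean()) / sample.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

alpha = 0.05
print(f"Shapiro-Wilk p = {sw_p:.3f} -> "
      f"{'fail to reject normality' if sw_p > alpha else 'reject normality'}")
print(f"Kolmogorov-Smirnov p = {ks_p:.3f} -> "
      f"{'fail to reject normality' if ks_p > alpha else 'reject normality'}")
```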
The non-parametric Kruskal–Wallis H test determines whether there are statistically significant differences between three or more independently sampled groups [70]. This test has four assumptions: (i) the dependent variable is measured at least at the ordinal level, (ii) the independent variable has two or more categories, (iii) there is no relationship between the observations in each group or among the groups themselves, and (iv) the shape of each distribution must be determined for the interpretation of the results. The null hypothesis of the Kruskal–Wallis H test is that there is no significant difference among the sample distributions, and the alternative hypothesis is that there is a significant difference. A significance level (p) of 0.05 was used. If the p-value is less than 0.05, we reject the null hypothesis and conclude that there is a statistically significant difference among the sample distributions; if it is greater than 0.05, we fail to reject the null hypothesis. These tests assess any statistically significant difference between the paper-, monitor-, and Oculus Quest 2-based design review tasks.
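A minimal sketch of this comparison with SciPy, using synthetic completion times (not the study's data; the group means loosely echo the magnitudes reported later, but the spread is assumed):

```python
# Hedged sketch: Kruskal-Wallis H test across the three review media,
# with synthetic completion-time samples of 32 participants each.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
paper   = rng.normal(24.7, 3.0, 32)   # minutes, paper-based group (synthetic)
monitor = rng.normal(19.4, 3.0, 32)   # monitor-based group (synthetic)
oculus  = rng.normal(17.5, 3.0, 32)   # Oculus Quest 2 group (synthetic)

h_stat, p_value = stats.kruskal(paper, monitor, oculus)
if p_value < 0.05:
    print(f"H = {h_stat:.2f}, p = {p_value:.4g}: reject H0 -> groups differ")
else:
    print(f"H = {h_stat:.2f}, p = {p_value:.4g}: fail to reject H0")
```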

4. Results and Discussion

4.1. Data Analysis

After collection, the data from the different participants were analyzed. The significance values (p) that the Shapiro–Wilk and Kolmogorov–Smirnov tests produced for this study's data were less than 0.05, meaning that the data are not normally distributed, as shown in Table 3. All the p-values for the five elements of CL were less than 0.05 in both tests across the three instructional media. The non-parametric Kruskal–Wallis H test, which is more appropriate for non-normally distributed data, was therefore applied in the subsequent analysis.

4.2. Experiment

As discussed above, the Kruskal–Wallis H test was carried out to determine how traditional paper-based, monitor-based, and Oculus Quest 2-based design review tasks would impact the user's CL. The results are shown in Figure 7. An insignificant difference was found in the mental demand between the paper- and monitor-based environments, as well as between the monitor- and Oculus Quest 2-based environments; however, there was a statistically significant difference between the paper and Oculus Quest 2 environments. For temporal demand, no significant difference was found between the first two media, while all other medium combinations showed significant differences. The performance, effort, and frustration elements of CL showed no significant differences across all three media. The overall CL results show a statistically insignificant difference (p > 0.05) among the three environments for design review tasks. Nevertheless, the immersive environment of the Oculus Quest 2 was the most cognitively demanding of the three modalities. Sweller and Rogers explained that a review task will be impaired or fail if the required CL exceeds the limits of working memory. After a detailed comparison of the three design review groups' NASA-TLX data, the results show that NIVE participants perceived a lower CL than those who used the IVE.
Based on the above experimental data, participants' mean completion times, shown in Figure 8, and error identification rates, shown in Table 4, were compared using the Kruskal–Wallis H test. Figure 8 compares the completion times of the three design review groups: the Oculus Quest 2 group took 17.49 min to complete the task, significantly less than the monitor-based and paper-based groups, which took 19.37 min and 24.72 min, respectively. The average completion times of the paper-based, monitor-based, and Oculus Quest 2-based groups (24.72, 19.37, and 17.49 min), with a p-value less than 0.05, indicate a statistically significant difference. The Kruskal–Wallis H test was also used to examine the three groups' error identification, as shown in Table 4.
The SART score was calculated using Equation (1): the Oculus Quest 2 group (12.34) had the highest cumulative SA score, the monitor-based group scored 12.00, and the paper-based group (10.00) had the lowest score. The differences in the SA values of these three media were statistically significant (p < 0.05). For every error type in each medium, the number of errors placed, the mean (SD) and Kruskal–Wallis H values, and the significance (p-value) across the three media are shown in Table 4. As discussed above, 22 errors of 12 types were placed.
A statistically significant difference was found in the average performance of the three groups in detecting the errors in the changed stair beam and column size. On the other hand, there was no statistically significant difference between the three groups for the remaining errors: stair not connected to upper floor, slab and door/window clash, column and door/window clash, stair and beam clash, stair and column clash, sill height error, sill height of bathroom windows/exhaust fan, beam size changed, floor level changed, and extra beam. Overall, out of the 22 design errors intentionally placed in the building design model, the Oculus Quest 2 group identified an average of 12.28 errors, the monitor-based group 12.13, and the paper-based group 10.42. The p-value was greater than 0.05, meaning no significant difference was found in the total number of errors detected across the three media, as shown in Table 4. The Kruskal–Wallis H test was also applied to the experiment to analyze SA, which is the ability to perceive, comprehend, and predict the factors and variables that can affect a participant's performance in a specific situation or environment [23]. Lastly, we determined the SART score from the same experiment by asking the participants to rate themselves. Table 5 shows the cumulative means of SA and their standard deviations, the Kruskal–Wallis H test values, and their respective levels of significance.
As stated in the above section, the SART measurement elements are divided into three main groups, D, S, and U, and further into ten subgroups. A statistically significant difference was found in the understanding (U) and attentional supply (S) main groups of SART (p < 0.05); on the other hand, there was no statistically significant difference in attentional demand (D) (p > 0.05). Of the ten SART elements, seven (instability of the situation, variability of the situation, division of attention, spare mental capacity, information quantity, information quality, and familiarity) showed statistically insignificant differences, while three (complexity of the situation, arousal, and concentration) showed significant differences. Examining the cumulative means of the three main groups, the Oculus Quest 2 produced higher understanding U (10.69) than the monitor- (10.00) and paper-based environments (8.13). The cumulative mean of attentional demand D was higher for the paper-based drawings (9.25) than for the monitor-based environment (8.88) and the Oculus Quest 2 (8.46), and attentional supply S was also highest for the paper-based drawings (11.13), compared with the monitor-based environment (10.88) and the Oculus Quest 2 (10.12). Attentional demand had a p-value greater than 0.05, an insignificant difference, while attentional supply and understanding showed significant differences among the three media. The overall SA score was highest for the Oculus Quest 2 (12.34), with the monitor- and paper-based media scoring lower (12.0 and 10.0, respectively). There was also a significant difference in overall SA among the media.

4.3. Cognitive Load and Task Performance

The NASA TLX experiment results showed that the CL of participants in the immersive and non-immersive environments increased compared with paper-based drawings. Both VEs can be distracting and overstimulating: as a realistic three-dimensional 360° experience, they supply learners with far more information than the traditional medium does [71]. This is why, in our experiment, the participants' mental demand was slightly higher in the VEs. Temporal demand was also lower for the paper-based and monitor-based media than for the Oculus Quest 2, because the immersive environment has a higher level of immersion and temporal dissociation [72].
Participants experienced a higher CL in the Oculus Quest 2 (2.15) than in the monitor-based (2.045) and paper-based (2.005) environments. Still, their performance was better in the VRE in terms of task completion time and the number of errors identified during the design review, because participants could focus on a single source of information or display system at a time. The simulated VEs also reduced the participants' design review effort because they did not have to shift their gaze between pages; consequently, the effort was lower in these environments than in a traditional paper-based review. Eliminating page shifts between drawings positively affected the participants' performance. The virtual display systems also conveyed dimensional and depth information in the drawings, such as the distance between the slab and the doors and windows, the slab and floor heights, and the beam and column sizes, and whether those sizes were optimal, because participants could adjust their height in the VE. This increased the participants' performance in the VEs compared with the traditional environment when finding errors in the building design. Moreover, the participants in the latter two environments completed the task earlier than the first group.
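The group CL scores quoted above are consistent with an unweighted ("raw") averaging of the five NASA-TLX dimensions from Table 1. A minimal sketch of that scoring follows; the dimension names and the rating scale are illustrative assumptions, since the paper does not publish its scoring code (the original NASA-TLX also supports pairwise weighting of dimensions [22]):

```python
def nasa_tlx_raw(ratings):
    """Unweighted ('raw') NASA-TLX score: the mean of the subscale
    ratings for the five dimensions used in this study (Table 1).
    The numeric scale of `ratings` is illustrative."""
    dims = ("mental", "temporal", "performance", "effort", "frustration")
    return sum(ratings[d] for d in dims) / len(dims)

# Example: a participant rating every dimension 2 except effort (3)
score = nasa_tlx_raw({"mental": 2, "temporal": 2, "performance": 2,
                      "effort": 3, "frustration": 2})
print(score)  # -> 2.2
```

Averaging such scores within each group would produce the per-medium CL values reported above (2.15, 2.045, 2.005).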

4.4. Situational Awareness

In our experiment, the SART results showed that participants in the Oculus Quest 2 and monitor-based groups performed better than those in the traditional paper-based group. Oculus Quest 2 users also remained aware of their surroundings thanks to the headset's ability to switch on its passthrough camera with a double tap; this feature makes it unique and supported better performance. The 3D design model of the simulated VE on the monitor and the Oculus Quest 2 appeared to help the participants understand their task; with respect to SA, both VE groups performed better than the traditional group. Attentional demand D and attentional supply S were lower in the latter two environments than in the first, whereas understanding U showed the opposite pattern. Participants using the Oculus Quest 2 focused solely on the immersive VE, which made their surroundings easy to comprehend by drawing on the cognitive resources of attentional supply (arousal, concentration, division of attention, and spare mental capacity). A design reviewer must perform various cognitive tasks simultaneously, such as analyzing, comprehending, remembering, and considering alternatives, if any, and must be fully aware of the surroundings to perform well. In general, these virtual headsets or HMDs could benefit the industry through better performance and a reduced risk of errors arising during project execution.

4.5. Limitations

Although the study was conducted successfully, some limitations must be considered. First, although the experiment yielded mixed results, both statistically significant and insignificant, the small number of participants may limit the generalizability of the study due to human heterogeneity; i.e., how someone performs when a task involves a new technology such as virtual reality can depend on the participant's acceptance of the technology and prior performance. Although participants were given comprehensive pre-training sessions, learning abilities still vary, and participants' cognitive abilities to find the design errors in the building model differ, which may lead to misinterpretation of the results. The self-assessment questionnaire may also skew the results: participants might interpret the questions differently and answer according to their own understanding, which may mislead the results. Future studies should minimize these human-variation issues, which would strengthen the conclusions of our analysis. Second, we measured the participants' CL, SA, and TP subjectively due to limited resources, although these can be measured with advanced objective techniques: electroencephalography (EEG) can assess mentally demanding tasks and inform optimal task allocation, workplace efficiency, and workspace safety, and electrodermal activity (EDA) can measure skin response during cognitively demanding activity. Third, this research used only the Oculus Quest 2 VR headset; other headsets with different resolutions and refresh rates may produce different results. A future study investigating user performance across VR headsets with different resolutions and refresh rates would be worthwhile.
Despite the limitations mentioned above, this research extends knowledge and understanding of the impact of VR and of cognitive issues in design review tasks in the construction industry. Further research might explore more dimensions by considering a more realistic environment, adding sensory dimensions to the simulation such as hearing, touch, and smell, as well as interaction with and effects on the surroundings, to measure SA.

5. Conclusions

In recent years, VR has been adopted in the AEC industry; it helps construction stakeholders collaborate effectively and better understand and visualize information. However, this additional information may impose cognitive stress on participants, negatively impacting their performance.
This research examined the impact of VR-based design review tasks on construction professionals. It analyzed TP, CL, and SA for building design review tasks in three distinct environments, with each of three groups working in one of them: paper-based, 3D monitor-based VR (non-immersive), and head-mounted-display-based VR (immersive). The design model of the building was created in BIM from 2D paper drawings, and the design errors were incorporated into both the paper-based drawings and the BIM model. Participants were tasked with identifying the design errors in one of the environments. Before performing the task, participants were given a demonstration of the medium they would use and of the task they would perform. At the end of the task, they answered questionnaires, and their performance was analyzed.
The results of this study indicate that participants performed better in the virtual environments, identifying more errors there than in the traditional drawings; however, the total cognitive load score was also greater in the virtual environments. The IVR increased the participants' understanding while making them less attentive to their surroundings. Participants' task completion time was shorter in the two virtual environments than with the traditional paper-based drawings. The key finding of this study is that the virtual environment affects the participants' TP, CL, and SA in design review tasks. The findings suggest that these VREs aided the construction professionals by providing exhaustive information in different formats. This research also improves the understanding of cognitively demanding problems and can help in designing construction documents more appropriately, enabling professionals to work more efficiently in virtual environments.

Author Contributions

Conceptualization, M.U., A.S. and J.S.; methodology, M.U., A.S. and D.-E.L.; software, M.U. and A.S.; validation, A.S. and J.S.; data curation, M.U.; writing—original draft preparation, M.U. and A.S.; writing—review and editing, A.S., J.S. and D.-E.L.; supervision, A.S. and J.S.; funding acquisition, J.S. and D.-E.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (No. NRF 2018R1A5A1025137).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (No. NRF 2018R1A5A1025137). Muhammad Umair would like to thank the National University of Sciences and Technology for providing support during the research.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ahmed, S. A Review on Using Opportunities of Augmented Reality and Virtual Reality in Construction Project Management. Organ. Technol. Manag. Constr. Int. J. 2019, 11, 1839–1852.
2. Mehrbod, S.; Staub-French, S.; Mahyar, N.; Tory, M. Characterizing interactions with BIM tools and artifacts in building design coordination meetings. Autom. Constr. 2019, 98, 195–213.
3. Hartmann, T.; Fischer, M.; Haymaker, J. Implementing information systems with project teams using ethnographic–action research. Adv. Eng. Inform. 2009, 23, 57–67.
4. Khan, M.S.; Kim, J.; Park, S.; Seo, J. BIM-Based Augmented Reality (AR) Framework for Visualization and Monitoring of Underground Utilities. In Proceedings of the Korean Society of Civil Engineers Conference, Gwangju, Korea, 20–22 October 2021; pp. 13–14.
5. Leite, F.; Cho, Y.; Behzadan, A.H.; Lee, S.; Choe, S.; Fang, Y.; Akhavian, R.; Hwang, S. Visualization, Information Modeling, and Simulation: Grand Challenges in the Construction Industry. J. Comput. Civ. Eng. 2016, 30, 04016035.
6. Davidson, J.; Fowler, J.; Pantazis, C.; Sannino, M.; Walker, J.; Sheikhkhoshkar, M.; Pour Rahimian, F. Integration of VR with BIM to facilitate real-time creation of bill of quantities during the design phase: A proof of concept study. Front. Eng. Manag. 2020, 7, 396–403.
7. Herr, C.M.; Fischer, T. BIM adoption across the Chinese AEC industries: An extended BIM adoption model. J. Comput. Des. Eng. 2019, 6, 173–178.
8. Shin, J.; Rajabifard, A.; Kalantari, M.; Atazadeh, B. Applying BIM to support dispute avoidance in managing multi-owned buildings. J. Comput. Des. Eng. 2020, 7, 788–802.
9. Kim, J.I.; Li, S.; Chen, X.; Keung, C.; Suh, M.; Kim, T.W. Evaluation framework for BIM-based VR applications in design phase. J. Comput. Des. Eng. 2021, 8, 910–922.
10. Abbas, A.; Choi, M.; Seo, J.; Cha, S.H.; Li, H. Effectiveness of Immersive Virtual Reality-based Communication for Construction Projects. KSCE J. Civ. Eng. 2019, 23, 4972–4983.
11. Khan, M.; Park, J.; Seo, J. Geotechnical Property Modeling and Construction Safety Zoning Based on GIS and BIM Integration. Appl. Sci. 2021, 11, 4004.
12. Sampaio, A.Z. Enhancing BIM Methodology with VR Technology. In State of the Art Virtual Reality and Augmented Reality Knowhow; InTech: London, UK, 2018.
13. Dunston, P.S.; Arns, L.L.; Mcglothlin, J.D.; Lasker, G.C.; Kushner, A.G. An Immersive Virtual Reality Mock-Up for Design Review of Hospital Patient Rooms. In Collaborative Design in Virtual Environments; Springer: Dordrecht, The Netherlands, 2011; pp. 167–176.
14. Florio, W. Análise do processo de projeto sob a teoria cognitiva: Sete dificuldades no atelier. Arquitetura Rev. 2011, 7, 161–171.
15. Hou, L.; Wang, X.; Bernold, L.; Love, P.E.D. Using Animated Augmented Reality to Cognitively Guide Assembly. J. Comput. Civ. Eng. 2013, 27, 439–451.
16. Zhou, Y.; Luo, H.; Yang, Y. Implementation of augmented reality for segment displacement inspection during tunneling construction. Autom. Constr. 2017, 82, 112–121.
17. Jensen, L.; Konradsen, F. A review of the use of virtual reality head-mounted displays in education and training. Educ. Inf. Technol. 2017, 23, 1515–1529.
18. Feng, Y. Facilitator or Inhibitor? The Use of 360-Degree Videos for Immersive Brand Storytelling. J. Interact. Advert. 2018, 18, 28–42.
19. Abbas, A.; Seo, J.; Kim, M. Impact of Mobile Augmented Reality System on Cognitive Behavior and Performance during Rebar Inspection Tasks. J. Comput. Civ. Eng. 2020, 34, 04020050.
20. Zhong, D.; Chen, L.; Feng, Y.; Song, R.; Huang, L.; Liu, J.; Zhang, L. Effects of virtual reality cognitive training in individuals with mild cognitive impairment: A systematic review and meta-analysis. Int. J. Geriatr. Psychiatry 2021, 36, 1829–1847.
21. Sweller, J.; van Merriënboer, J.J.G.; Paas, F. Cognitive Architecture and Instructional Design: 20 Years Later. Educ. Psychol. Rev. 2019, 31, 261–292.
22. Hart, S.G.; Staveland, L.E. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Advances in Psychology; North-Holland: Amsterdam, The Netherlands, 1988; Volume 52, pp. 139–183.
23. Taylor, R. Situational Awareness Rating Technique (SART): The Development of a Tool for Aircrew Systems Design. In Situational Awareness; Routledge: London, UK, 2017; pp. 111–128.
24. Oberhauser, M.; Dreyer, D. A virtual reality flight simulator for human factors engineering. Cogn. Technol. Work 2017, 19, 263–277.
25. Wade, N.J. Charles Wheatstone (1802–1875). Perception 2002, 31, 265–272.
26. Regrebsubla, N. Determinants of Diffusion of Virtual Reality; GRIN: Munich, Germany, 2015; Volume 5.
27. Cruz-Neira, C.; Fernández, M.; Portalés, C. Virtual Reality and Games. Multimodal Technol. Interact. 2018, 2, 8.
28. Steuer, J. Defining Virtual Reality: Dimensions Determining Telepresence. J. Commun. 1992, 42, 73–93.
29. Harley, D. Palmer Luckey and the rise of contemporary virtual reality. Converg. Int. J. Res. Into New Media Technol. 2020, 26, 1144–1158.
30. Hautamäki, J. Interfacing Extended Reality and Robotic Operating System 2. Ph.D. Thesis, Tampere University, Tampere, Finland, 2021.
31. Angelov, V.; Petkov, E.; Shipkovenski, G.; Kalushkov, T. Modern virtual reality headsets. In Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 26–28 June 2020; pp. 1–5.
32. Kelly, J.W.; Doty, T.A.; Ambourn, M.; Cherep, L.A. Distance Perception in the Oculus Quest and Oculus Quest 2. Front. Virtual Real. 2022, 3, 1–7.
33. Moesl, B.; Schaffernak, H.; Vorraber, W.; Holy, M.; Herrele, T.; Braunstingl, R.; Koglbauer, I.V. Towards a More Socially Sustainable Advanced Pilot Training by Integrating Wearable Augmented Reality Devices. Sustainability 2022, 14, 2220.
34. Liu, Y.; Messner, J.I.; Leicht, R.M. A process model for usability and maintainability design reviews. Arch. Eng. Des. Manag. 2018, 14, 457–469.
35. Henry, D.; Furness, T. Spatial perception in virtual environments: Evaluating an architectural application. In Proceedings of the IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 18–22 September 1993; pp. 33–40.
36. Arayici, Y.; Aouad, G. Computer integrated construction: An approach to requirements engineering. Eng. Constr. Arch. Manag. 2005, 12, 194–215.
37. Tanoli, W.A.; Seo, J.W.; Sharafat, A.; Lee, S.S. 3D Design Modeling Application in Machine Guidance System for Earthwork Operations. KSCE J. Civ. Eng. 2018, 22, 4779–4790.
38. Sharafat, A.; Khan, M.S.; Latif, K.; Seo, J. BIM-Based Tunnel Information Modeling Framework for Visualization, Management, and Simulation of Drill-and-Blast Tunneling Projects. J. Comput. Civ. Eng. 2021, 35, 04020068.
39. Liu, Y.; Castronovo, F.; Messner, J.; Leicht, R. Evaluating the Impact of Virtual Reality on Design Review Meetings. J. Comput. Civ. Eng. 2020, 34, 04019045.
40. Liu, Y.; Lather, J.; Messner, J. Virtual Reality to Support the Integrated Design Process: A Retrofit Case Study. In Proceedings of the 2014 International Conference on Computing in Civil and Building Engineering, Orlando, FL, USA, 23–25 June 2014; pp. 801–808.
41. Lee, H.; Cha, W.C. Virtual Reality-Based Ergonomic Modeling and Evaluation Framework for Nuclear Power Plant Operation and Control. Sustainability 2019, 11, 2630.
42. Heydarian, A.; Carneiro, J.P.; Gerber, D.; Becerik-Gerber, B.; Hayes, T.; Wood, W. Immersive virtual environments versus physical built environments: A benchmarking study for building design and user-built environment explorations. Autom. Constr. 2015, 54, 116–126.
43. Alizadehsalehi, S.; Hadavi, A.; Huang, J.C. Virtual Reality for Design and Construction Education Environment. In AEI 2019; American Society of Civil Engineers: Reston, VA, USA, 2019; pp. 193–203.
44. Getuli, V.; Capone, P.; Bruttini, A.; Isaac, S. BIM-based immersive Virtual Reality for construction workspace planning: A safety-oriented approach. Autom. Constr. 2020, 114, 103160.
45. Paes, D.; Arantes, E.; Irizarry, J. Immersive environment for improving the understanding of architectural 3D models: Comparing user spatial perception between immersive and traditional virtual reality systems. Autom. Constr. 2017, 84, 292–303.
46. Wang, X. BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors; John Wiley & Sons: New York, NY, USA, 2012; Volume 12.
47. Sharafat, A.; Khan, M.; Latif, K.; Tanoli, W.; Park, W.; Seo, J. BIM-GIS-Based Integrated Framework for Underground Utility Management System for Earthwork Operations. Appl. Sci. 2021, 11, 5721.
48. Tanoli, W.A.; Sharafat, A.; Park, J.; Seo, J.W. Damage prevention for underground utilities using machine guidance. Autom. Constr. 2019, 107, 102893.
49. Paes, D.; Irizarry, J. Virtual Reality Technology Applied in the Building Design Process: Considerations on Human Factors and Cognitive Processes. Adv. Intell. Syst. Comput. 2016, 485, 3–15.
50. Damon, S. Educating the Reflective Practitioner. Towards a New Design for Teaching and Learning in the Professions; Jossey-Bass: San Francisco, CA, USA, 1992; Volume 1.
51. Lee, B.C.; Chung, K.; Kim, S.-H. Interruption Cost Evaluation by Cognitive Workload and Task Performance in Interruption Coordination Modes for Human–Computer Interaction Tasks. Appl. Sci. 2018, 8, 1780.
52. Longo, L. Experienced mental workload, perception of usability, their interaction and impact on task performance. PLoS ONE 2018, 13, e0199661.
53. Nikolaev, V.B.; Olimpiev, D.N. Complex analysis and evaluation of the condition of reinforced-concrete components in power-generating structures. Power Technol. Eng. 2009, 43, 280–286.
54. Dadi, G.B.; Goodrum, P.M.; Taylor, T.R.B.; Carswell, C.M. Cognitive Workload Demands Using 2D and 3D Spatial Engineering Information Formats. J. Constr. Eng. Manag. 2014, 140, 4014001.
55. Bhandary, S.; Lipps, J.; Winfield, S.R.; Abdel-Rasoul, M.; Stoicea, N.; Pappada, S.M.; Papadimos, T.J. NASA Task Load Index Scale to Evaluate the Cognitive Workload during Cardiac Anesthesia Based Simulation Scenarios. Int. J. Anesthesiol. Res. 2016, 4, 300–304.
56. Salmon, P.; Stanton, N.; Walker, G.; Jenkins, D.; Ladva, D.; Rafferty, L.; Young, M. Measuring Situation Awareness in complex systems: Comparison of measures study. Int. J. Ind. Ergon. 2009, 39, 490–500.
57. Salmon, P.; Stanton, N.; Walker, G.; Green, D. Situation awareness measurement: A review of applicability for C4i environments. Appl. Ergon. 2006, 37, 225–238.
58. Paas, F.; Renkl, A.; Sweller, J. Cognitive Load Theory and Instructional Design: Recent Developments. Educ. Psychol. 2003, 38, 1–4.
59. Gevins, A.; Smith, M.E.; Leong, H.; McEvoy, L.; Whitfield, S.; Du, R.; Rush, G. Monitoring Working Memory Load during Computer-Based Tasks with EEG Pattern Recognition Methods. Hum. Factors J. Hum. Factors Ergon. Soc. 1998, 40, 79–91.
60. Ke, Y.; Qi, H.; He, F.; Liu, S.; Zhao, X.; Zhou, P.; Zhang, L.; Ming, D. An EEG-based mental workload estimator trained on working memory task can work well under simulated multi-attribute task. Front. Hum. Neurosci. 2014, 8, 703.
61. Shi, Y.; Ruiz, N.; Taib, R.; Choi, E.; Chen, F. Galvanic skin response (GSR) as an index of cognitive load. In Proceedings of the CHI’07 Extended Abstracts on Human Factors in Computing Systems, San Jose, CA, USA, 28 April–3 May 2007; pp. 2651–2656.
62. Matthews, M.D.; Martinez, S.G.; Eid, J.; Johnsen, B.H.; Boe, O.C. A Comparison of Observer and Incumbent Ratings of Situation Awareness. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2005, 49, 548–551.
63. Endsley, M.R.; Garland, D.J. Situation Awareness Analysis and Measurement; CRC Press: Boca Raton, FL, USA, 2000.
64. Stanton, N.A.; Salmon, P.M.; Rafferty, L.A.; Walker, G.H.; Baber, C.; Jenkins, D.P. Human Factors Methods: A Practical Guide for Engineering and Design, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2013.
65. Peansupap, V.; Ly, R. Evaluating the Impact Level of Design Errors in Structural and Other Building Components in Building Construction Projects in Cambodia. Procedia Eng. 2015, 123, 370–378.
66. Hanusz, Z.; Tarasińska, J. Normalization of the Kolmogorov–Smirnov and Shapiro–Wilk tests of normality. Biom. Lett. 2015, 52, 85–93.
67. Ahad, N.A.; Yin, T.S.; Othman, A.R.; Yaacob, C.R. Sensitivity of normality tests to non-normal data. Sains Malays. 2011, 40, 637–641.
68. Wah, Y.B.; Razali, N.M. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. J. Stat. Model. Anal. 2011, 2, 21–33.
69. Althouse, L.A.; Ware, W.B.; Ferron, J.M. Detecting Departures from Normality: A Monte Carlo Simulation of a New Omnibus Test Based on Moments. 1998. Available online: https://eric.ed.gov/?id=ED422385 (accessed on 10 June 2022).
70. Ostertagová, E.; Ostertag, O.; Kováč, J. Methodology and Application of the Kruskal-Wallis Test. Appl. Mech. Mater. 2014, 611, 115–120.
71. Josephsen, J. Cognitive Load Theory and Nursing Simulation: An Integrative Review. Clin. Simul. Nurs. 2015, 11, 259–267.
72. Rutkowski, A.-F.; Saunders, C.; Vogel, D.; Van Genuchten, M. “Is It Already 4 a.m. in Your Time Zone?”: Focus immersion and temporal dissociation in virtual teams. Small Group Res. 2007, 38, 98–129.
Figure 1. Human factors, cognitive aspects, and knowledge representation in building design.
Figure 2. Experimental methodology.
Figure 3. Experimental procedure steps.
Figure 4. Participants performing design review tasks in (a) paper-based drawing, (b) monitor-based drawing, and (c) Oculus Quest 2-based drawing.
Figure 5. (a) paper-based drawing, (b) non-immersive monitor-based drawings, (c,d) Oculus Quest 2 based IVR drawings.
Figure 6. Controller setting for the participants in the Oculus Quest 2 virtual environment and participants performing the task in the immersive virtual environment.
Figure 7. Cognitive load score of participants in the different groups.
Figure 8. Average task completion time.
Table 1. Questions that were asked to measure cognitive load using NASA-TLX.
Dimension | Question
Mental demand | Was the task mentally demanding?
Temporal demand | Was the task temporally demanding (time pressure for completing the task)?
Performance | How successful were you in completing the task?
Effort | How hard did you have to work to accomplish the task?
Frustration | How insecure, discouraged, irritated, or stressed were you during the task?
Table 2. Situational awareness rating technique questions.
Domain | Element | Question
Attentional Demand (D) | Instability of situation | How much was the situation in the surroundings changing during the experimental session?
Attentional Demand (D) | Complexity of situation | How complex was the surrounding situation?
Attentional Demand (D) | Variability of situation | Were several different factors in the surrounding environment changing?
Attentional Supply (S) | Arousal | How alert were you to the surrounding situation?
Attentional Supply (S) | Concentration | How much did you concentrate on the surroundings?
Attentional Supply (S) | Division of attention | What proportion of your attention was devoted to the surroundings instead of the design review task?
Attentional Supply (S) | Spare mental capacity | How much mental capacity did you have to spare for the surroundings?
Understanding (U) | Information quantity | How much information about the surroundings did you take in?
Understanding (U) | Information quality | How well did you understand/comprehend the information about the surroundings that you took in?
Understanding (U) | Familiarity | How familiar were you with the surroundings during the task?
Table 3. Tests of normality.
CL Dimension | Medium | Kolmogorov–Smirnov * (Statistic, df, Sig.) | Shapiro–Wilk (Statistic, df, Sig.)
Mental Demand | Paper | 0.229, 32, 0.000 | 0.864, 32, 0.001
Mental Demand | Monitor | 0.220, 32, 0.000 | 0.883, 32, 0.002
Mental Demand | Oculus Quest 2 | 0.205, 32, 0.001 | 0.912, 32, 0.013
Temporal Demand | Paper | 0.273, 32, 0.000 | 0.803, 32, 0.000
Temporal Demand | Monitor | 0.275, 32, 0.000 | 0.783, 32, 0.000
Temporal Demand | Oculus Quest 2 | 0.224, 32, 0.000 | 0.900, 32, 0.006
Performance | Paper | 0.169, 32, 0.021 | 0.891, 32, 0.004
Performance | Monitor | 0.157, 32, 0.044 | 0.922, 32, 0.023
Performance | Oculus Quest 2 | 0.252, 32, 0.000 | 0.892, 32, 0.004
Frustration | Paper | 0.249, 32, 0.000 | 0.826, 32, 0.000
Frustration | Monitor | 0.182, 32, 0.008 | 0.902, 32, 0.007
Frustration | Oculus Quest 2 | 0.244, 32, 0.000 | 0.888, 32, 0.003
Effort | Paper | 0.299, 32, 0.000 | 0.805, 32, 0.000
Effort | Monitor | 0.178, 32, 0.011 | 0.849, 32, 0.000
Effort | Oculus Quest 2 | 0.222, 32, 0.000 | 0.878, 32, 0.002
Note: * Lilliefors significance correction.
Table 4. The number of design errors identified by the three design review groups.
Design Error | No. of Errors Placed | Mean (SD): Paper / Monitor / Oculus Quest 2 | Kruskal–Wallis H | Significance (p)
Stair not connected to upper floor | 2 | 0.73 (0.45) / 0.88 (0.65) / 0.97 (0.64) | 3.48 | 0.176 **
Slab and door/window | 1 | 0.61 (0.50) / 0.82 (0.39) / 0.78 (0.44) | 3.977 | 0.137 **
Column and door/window | 2 | 0.76 (0.56) / 1.00 (0.79) / 0.91 (0.58) | 2.838 | 0.242 **
Stair and beam | 1 | 0.52 (0.51) / 0.82 (0.39) / 0.81 (0.39) | 8.261 | 0.016 *
Stair and slab | 1 | 0.67 (0.48) / 0.64 (0.49) / 0.72 (0.45) | 0.864 | 0.650 **
Stair and column | 1 | 0.55 (0.51) / 0.70 (0.47) / 0.81 (0.39) | 6.045 | 0.051 **
Sill height error | 4 | 1.73 (1.26) / 2.09 (0.72) / 1.94 (1.20) | 2.471 | 0.291 **
Sill height of bathroom windows/exhaust fan | 2 | 1.21 (0.60) / 1.27 (0.52) / 1.06 (0.66) | 2.16 | 0.34 **
Beam size changed | 2 | 1.03 (0.53) / 0.97 (0.53) / 1.06 (0.68) | 0.45 | 0.79 **
Column size changed | 2 | 0.58 (0.66) / 0.94 (0.70) / 0.97 (0.66) | 6.04 | 0.049 *
Floor level changed | 1 | 0.67 (0.48) / 0.64 (0.49) / 0.66 (0.49) | 0.089 | 0.95 **
Extra beam | 3 | 1.18 (0.58) / 1.36 (1.06) / 1.53 (0.80) | 1.741 | 0.419 **
Total number of errors | 22 | 10.28 (7.11) / 12.13 (7.20) / 12.28 (7.28) | 5.898 | 0.052 **
Note: * Significant difference (p < 0.05), ** insignificant difference (p > 0.05).
Table 5. Situational awareness rating technique descriptive statistics and Kruskal–Wallis test score.
Sr No. | SART Element | Paper Mean (SD) | Monitor Mean (SD) | Oculus Quest 2 Mean (SD) | Kruskal–Wallis H | Significance (p)
1 | Instability of Situation | 2.94 (0.76) | 3.03 (0.93) | 2.94 (0.88) | 0.106 | 0.949 **
2 | Complexity of Situation | 3.66 (0.9) | 2.81 (0.82) | 2.91 (0.73) | 15.330 | 0.000 *
3 | Variability of Situation | 2.66 (0.9) | 3.03 (0.93) | 2.63 (0.79) | 3.132 | 0.209 **
4 | Arousal | 2.59 (0.76) | 2.94 (0.80) | 2.69 (0.82) | 14.460 | 0.002 *
5 | Concentration | 2.66 (0.75) | 2.88 (0.79) | 2.41 (0.71) | 7.042 | 0.030 *
6 | Division of Attention | 2.72 (0.58) | 2.41 (0.67) | 2.41 (0.71) | 4.932 | 0.085 **
7 | Spare Mental Capacity | 3.16 (0.85) | 2.66 (0.75) | 2.63 (0.75) | 8.733 | 0.013 **
8 | Information Quantity | 2.81 (1.03) | 3.19 (0.78) | 3.41 (0.87) | 0.802 | 0.670 **
9 | Information Quality | 2.72 (0.73) | 3.56 (1.16) | 3.75 (0.98) | 12.337 | 0.002 **
10 | Familiarity | 2.59 (0.98) | 3.25 (0.76) | 3.53 (1.05) | 4.522 | 0.104 **
11 | Attentional Demand (D) | 9.25 (1.79) | 8.88 (1.77) | 8.46 (1.74) | 3.724 | 0.155 **
12 | Attentional Supply (S) | 11.12 (1.60) | 10.88 (1.54) | 10.12 (1.58) | 7.473 | 0.024 *
13 | Understanding (U) | 8.12 (1.95) | 10.0 (1.85) | 10.69 (1.97) | 9.812 | 0.007 *
14 | SA = U − [D − S] | 10.0 (2.6) | 12.0 (2.27) | 12.34 (2.67) | 7.676 | 0.022 *
Note: * Significant difference (p < 0.05), ** insignificant difference (p > 0.05).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
