Systematic Review

Virtual Humans in Museums and Cultural Heritage Sites

Department of Electrical and Computer Engineering, University of Patras, 26504 Patras, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9913; https://doi.org/10.3390/app12199913
Submission received: 19 July 2022 / Revised: 11 September 2022 / Accepted: 24 September 2022 / Published: 1 October 2022
(This article belongs to the Special Issue Advanced Technologies in Digitizing Cultural Heritage)

Abstract

This article presents the results of a survey on the use of digital avatars and agents in museums and places of cultural interest. The optimization of virtual agents in the cultural heritage domain is an interdisciplinary undertaking; this paper investigates pertinent research and solutions and suggests ways forward. The research questions examined relate to (a) the technological characteristics of cultural heritage-related uses of users’ avatars and virtual agents, and the patterns that emerge, and (b) suggestions for future research based on this article’s findings. We reviewed relevant publications and analysed the approaches presented to identify trends and issues, draw conclusions on the existing state of the field and, moreover, infer and suggest future directions. The main findings point to a trend toward onsite, sophisticated installations or applications, with increasing investment in mixed reality. Moreover, emphasis is shifting toward optimising agents as virtual guides or companions, mediators of cultural content and engaging facilitators. Behavioural realism (BR), featured mostly in virtual reality installations, greatly fosters engagement according to the reviewed research, and we conclude that mixed reality (MR) onsite applications, which are gathering pace, should reach a comparable degree of sophistication and combine the strengths of both MR and BR.

1. Introduction

The word avatar comes from the Sanskrit word avatarah, which means the “descent of a deity into a terrestrial form”. The term first appeared in video games in the mid-1980s, in Ultima IV: Quest of the Avatar (1985) and Habitat (1986). However, it was only in 1992 that Neal Stephenson first used the term avatar in the sense of a digital representation of a person in virtual environments in his science fiction novel Snow Crash [1]. Over two decades ago, avatars were already used in games as a representation of the player in the game’s world, as well as in customer service as automated online assistants. They were then used in online worlds, such as Second Life, as customised 2D or 3D representations of users in various forms (e.g., human-like, animals, legendary creatures) with interaction and conversation capabilities.
The term avatar is broadly defined as a representation of a human actor in digital space (typically the user), as Nowak and Fox explain [3]. They note that some authors adopt a more inclusive approach, regarding avatars as also covering representations of computer-generated agents such as bots, citing Nowak and Rauh [2].
However, most authors in the field distinguish between digital representations of human actors, described as avatars, and computer-generated representations, called ‘agents’. We use the term “virtual human” exclusively to describe a digital agent, i.e., a computer-generated representation, and the term “digital human avatar” in a broader sense that includes representations of human users (e.g., in Second Life) as well as instances of a digital bot/computer-generated anthropomorphic representation; therefore, in this paper, “digital human avatar” covers all instances of actual users’ representations (for example, when the visual representation resembles the actual human actor/narrator, as is the case in the Asinou church AR application).
Digital human agents have evolved over time; nowadays, not only can they have a realistic human appearance, but with the help of Artificial Intelligence, they can also simulate human behaviour and adopt methods of human verbal and nonverbal communication to interact and engage with real humans, as well as to express and provoke emotions. In multi-user virtual worlds, digital human avatars are widely used in social VR applications. For more than two decades, digital humans have been used in various virtual representations, and virtual agents have served a range of purposes across several technologies. The scope of this paper is to present the results of a survey concerning the use of virtual humans in museum environments; its main aim is to discuss trends in the use of virtual agents and avatars, the technologies employed, and their deployment onsite and online.

2. Methods and Context of the Study

2.1. Related Work

Nowak and Fox [3] provide a systematic review of digital human avatars’ definitions, uses, and effects; they note that definitions of digital human avatars vary to an extent, and they outline the convergences of existing definitions as well as their divergences, i.e., similarities and differences in conceptualization. The authors examine digital humans’ uses as well as the perceptions around them and their effects. The focus is on issues regarding their level of realism in terms of appearance and behaviour, and agency. The social categorization of avatars, e.g., in terms of racial characteristics and gender, is presented, and issues of embodiment and self-expression through the choice of avatar are analysed. Moreover, the effect of avatars on human communication is investigated, addressing the emerging issues from multiple perspectives.
Extensive surveys have been conducted concerning the presence of digital humans and avatars in cultural heritage Information and Communication Technologies (ICT) applications [4] and in virtual worlds [5]. They present the state of the art concerning the use of digital human agents and avatars in virtual applications, not only from the user’s point of view but also from the designer’s. In [4], a table is provided that offers an overview of the main characteristics of digital human agent and avatar uses in cultural heritage-related applications, such as the types of technologies employed, the level of interactivity and the key implementation features. Furthermore, the authors offer a set of recommendations and good practices for the use of digital human agents and avatars in VR-based cultural heritage-related applications, as well as an assessment of the strengths and weaknesses of existing applications. In [5], the survey focuses on the role of human-like characters as equivalents of real human users and as embodied agents driven by artificial intelligence. Information is provided concerning the crafting of virtual humans, more specifically their appearance and their use as embodied agents. Furthermore, the authors focus on four application domains: environmental design, training, cultural heritage, and healthcare.
In another paper [6], the first results of a survey on the use of digital humans (agents) and avatars in museums and places of cultural interest are presented. The research questions examined were related to (a) the technological characteristics of cultural heritage-related uses of digital humans/avatars and patterns that emerge, (b) what affects their potential for audience engagement, and (c) what directions may be suggested for future research based on the findings of this review.
In this article, we reviewed all relevant publications and analysed the trends in technology, the online and onsite use of DHs, and their role as virtual guides or companions, as mediators of cultural content and as engaging facilitators. According to the virtuality continuum of Milgram and Kishino, which provides an overview of the possibilities between the real environment and the virtual environment, virtual reality (VR) permits viewing, moving around and interacting in a fully immersive environment; augmented reality (AR) ‘augments’ the real environment by superimposing digital elements in the physical world; and mixed reality (MR) allows the superimposition of digital elements in the physical world and interaction with them. We focus less on the technical characteristics of these technologies and more on establishing the types of digital human, as well as the type of technology involved (i.e., AR or VR and the devices used), in relation to the types of museums/cultural heritage (CH) sites to identify emerging patterns and trends that could inform future research.

2.2. Research Questions

The research questions that underpin this review are the following:
(i)
What are the types and roles of virtual agents and avatars used in the cultural heritage sector, and how do they evolve over time?
(ii)
What are the trends in the use of technologies such as VR and AR, and how do they evolve over time?
(iii)
What are the trends in the use of virtual agents and avatars in relation to whether they are deployed onsite or online, and what are their characteristics?
(iv)
What conclusions can be drawn for future research and pertinent practices regarding uses of digital human agents and avatars in the field of cultural heritage?

2.3. Search Strategy

We examined the ACM and IEEE Digital Libraries. The databases were searched on 1 July 2022 for studies related to virtual humans/avatars in museums and cultural heritage environments. Our search strategy required the words “avatar” or “digital human avatar” and “museum” or “cultural heritage” to be present in the paper.
The search terms in the IEEE command search (advanced search with commands) were ((“Abstract”: “museum” OR “Abstract”: “cultural heritage”) AND (“Abstract”: “virtual human” OR “Abstract”: “virtual guide” OR “Abstract”: “digital human” OR “Abstract”: “digital guide” OR “Abstract”: “virtual agent” OR “Abstract”: “digital agent” OR “Abstract”: “avatar”)); likewise, the search terms for the ACM were ((“Abstract”: “museum”) OR (“Abstract”: “cultural heritage”)) AND ((“Abstract”: “virtual human”) OR (“Abstract”: “digital guide”) OR (“Abstract”: “virtual guide”) OR (“Abstract”: “digital human”) OR (“Abstract”: “virtual agent”) OR (“Abstract”: “digital agent”) OR (“Abstract”: “avatar”)). We excluded only the publications that were not relevant to the topic. In these cases, even though keywords were present in the abstract, the research project presented did not directly pertain to cultural heritage. We focused on these digital libraries as they offer a comprehensive and representative picture of the research conducted in the field surveyed.
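The short Python sketch below is purely illustrative: the searches described above were run through the digital libraries’ own advanced-search interfaces, and this snippet merely shows how the same abstract-field Boolean queries can be assembled programmatically. All function and variable names are ours, not part of the original study.

    # Assemble the abstract-field Boolean query described in the search strategy.
    CONTEXT_TERMS = ["museum", "cultural heritage"]
    AGENT_TERMS = [
        "virtual human", "virtual guide", "digital human", "digital guide",
        "virtual agent", "digital agent", "avatar",
    ]

    def abstract_clause(terms):
        """OR-join a list of phrases, each restricted to the Abstract field."""
        return "(" + " OR ".join(f'"Abstract": "{t}"' for t in terms) + ")"

    def build_query(context_terms, agent_terms):
        """Combine the context clause and the agent clause with AND, as in the review."""
        return f"({abstract_clause(context_terms)} AND {abstract_clause(agent_terms)})"

    if __name__ == "__main__":
        print(build_query(CONTEXT_TERMS, AGENT_TERMS))

Running the sketch prints a query with the same structure as the IEEE command search quoted above; the ACM query differs only in the placement of parentheses.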

3. Results

The papers that matched our set of criteria yielded a small number of results, a fact that relates to the specificity of the topic as well as to the rather strict exclusion criteria, which ruled out a considerable number of publications. In particular, we reviewed 39 publications out of the initial 61 (47 from the IEEE and 14 from the ACM), as 22 were excluded due to a lack of direct relevance to the cultural heritage sector.

3.1. Typology of Virtual Agents and Avatars and Trends across Time

Initially, the 39 papers were reviewed in relation to the exact nature and function of the virtual agents and avatars. The following graph (Figure 1a) provides an overview of the cases that involved a user’s avatar, a digital human or both.
More specifically, in Figure 1b, we distinguish between digital humans that act as facilitators or guides (virtual agents), described as ‘mediators’, and those that are inert (described as inert DHs) in the sense that they do not interact and mostly function as visual elements (e.g., a ‘digital crowd’).
Moreover, we identified instances in which users’ avatars are solitary or interact socially with those of other users, described as social avatars (Figure 1c). The overall picture that emerges is that, as time progresses, published research increasingly features DHs as communicative agents that engage users.
Ten (10) out of thirteen (13) publications in the last four years characteristically refer to digital agents (mediators), while one more regards both mediator and user avatars. This shift indicates increased interest and investment in the capabilities such digital agents can offer. Additionally, in publications before 2014, in addition to the emphasis on users’ avatars, there was considerable focus on the social aspect: research concentrated on ways that users’ avatars may interact with each other, as well as with a mediator (5 out of 15 cases, i.e., a third). Moreover, VR experiences in which users’ avatars come across inert DHs were examined in further cases; thus, there was notable investment in looking into how populated virtual environments (given that before 2015, all cases were VR-related) can enhance the user experience through social interaction or co-presence. Second Life-like environments and the social aspect seem to wane in favour of more apt facilitator DHs (virtual agents). This shift also relates to the increasing prevalence of mixed reality, in which the embodiment of the user in an avatar is of little relevance, while the importance of a mediator for exhibits or sites that are physically present and visible increases. Nevertheless, onsite installations and applications still investigate VR-based solutions, mostly on account of their ability to achieve greater levels of behavioural realism, especially concerning natural spoken language.

3.2. Uses of VR and MR

As time progresses, mixed reality applications, virtually all of which are augmented reality with only one instance of augmented virtuality (Trajkova et al., 2020 [9]), have become more and more frequent. This is a natural outcome of the adoption of newer technologies (namely MR in this case) and the exploration of their potential. However, this is not to say that the use of VR recedes, as the number of such cases remains rather stable; nevertheless, MR applications have claimed a considerable percentage since 2015. VR and MR have their own distinct strengths and weaknesses; therefore, they are employed depending on priorities and contexts. One main difference, for example, between MR apps and VR installations is that the former offer flexibility, as visitors can use their hand-held devices and combine actual objects/imagery with virtual ones. VR installations can offer immersive, interactive experiences, e.g., communication with digital humans that can approximate natural language, taking advantage of the greater processing power the hardware of fixed installations can have. Therefore, there is a balancing act between the capabilities mixed reality unlocks and its limitations as a technology that mostly runs on users’ own portable devices and that cannot match onsite VR installations, which offer superior levels of behavioural realism embedded in the DHs that function as guides. Both VR and MR offer characteristics and capabilities that are valuable, and, as described in the conclusions section, there is scope for trying to combine their strengths in future research.
Current research in VR-based user avatars (Li et al., 2022 [46]) is often oriented toward novel technological possibilities such as the Brain–Computer Interface (BCI). Moreover, recent VR-based solutions often have a specific area of interest, for example, how to optimise the parameters of a dancer’s avatar to educate users on traditional dance movements (Kico et al., 2020 [7]; Stergiou and Vosinakis, 2022 [8]). Such VR-related research typically takes place in an experimental setting in a lab, while recent publications regarding MR often concern applications actively used in museum settings, with actual visitors as opposed to experiment participants (e.g., Trajkova et al., 2020 [9]; Teixeira et al., 2021 [10]).
MR applications are even more prevalent in publications that present case studies of avatar/digital human use in visitable venues such as museums (e.g., Breuss-Schneeweis, 2016 [11]), as opposed to research conducted in labs. While this underlines the shift toward MR in the cultural sector, it also must be noted that very often, museums/CH sites try to combine the best of both worlds, by offering onsite MR experiences, and VR applications for remote users (e.g., Geigel et al., 2020 [12]) or allowing users to choose between VR and MR onsite (Trajkova et al., 2020 [9]).
Furthermore, recent publications (e.g., Rivera-Gutierrez et al., 2014 [13]; Sylaiou et al., 2020 [14]; Stylianidis et al., 2022 [15]) that investigate the ability of virtual agents who present exhibits to users in VR settings (e.g., museum guides or companions) to engage visitors do so in experiments. This allows for more focused research on a set of variables (avatars’ clothing, appearance and, most importantly, modes of address, i.e., the style and register of verbal communications with users; these can range from impersonal, authoritative, and almost didactic to informal and convivial, resembling a peer-to-peer interaction). Moreover, the effectiveness of agents that undertake the presentation of exhibits to users has been tested in experiments in which human-like, realistic digital agents have been used in comparison with robot-like (Rzayev et al. 2019a [16] and 2019b [17]), skeletal or simplistic dummy doll-like DHs (Stergiou and Vosinakis, 2022 [8]) to gauge their respective impact on visitors.
The persisting occurrence of research on VR applications must be connected to the more controlled environment they offer: experiments that, in an MR setting, would have to counterbalance distractions and the complexity of stimuli arising when the virtual and actual worlds are combined. Moreover, such VR-based experiments also foreground the increasing investment in virtual humans as mediators of cultural content, given the effort put into testing which digital agent configurations and profiles are more effective in terms of audience engagement. While research in VR environments provides a more amenable context to generate new knowledge in the field, the insights gained may easily be applied in MR applications in actual museum settings or CH sites (Figure 2).

3.3. Uses of Digital Humans Onsite and Online

The clearest trend that transpires from this review’s findings is the prevalence of onsite uses of DHs (both users’ own and digital agents) in recent years, as opposed to online applications. This underlines a turn toward including emerging technologies in the quest to amplify the impact of an actual visit to a museum or site and engage audiences. Moreover, state-of-the-art MR or VR-based onsite installations can function as informative and fascinating instances of optimal uses of technology that may be regarded (especially in technology-related museums) as exhibits in their own right (Figure 3).
We identified the instances of published research in which DHs play a mediating role, as opposed to cases in which users’ avatars only interact with exhibits, or in which non-communicative avatars populate a virtual environment for illustrative purposes. The publications that refer to virtual agents that facilitate users comprise 23 of a total of 39. Furthermore, we established that amongst the 23 cases in which DHs play the role of mediator (e.g., presenting artwork), the prevalence of onsite uses of avatars, as opposed to online, is considerably higher than in the total of 39. Figure 4 illustrates this point:
One of the reasons for this clear trend is that behavioural realism may be better supported onsite: onsite installations can accommodate the hardware and sensors needed to achieve behavioural realism, which demands significant computing power and resources, well beyond the existing capacity of portable devices. Moreover, cultural institutions may feel inclined to reciprocate the manifested interest of visitors who are physically present for their exhibits by offering the best available means to facilitate their engagement.
Figure 5a,b provide an overview of the percentage, respectively, of VR- and MR-related publications, and the sites of their employment, i.e., types of museums/sites.
The data in Figure 5a,b are plotted in Figure 6 to illustrate the occurrence of VR-based and MR-based DH uses in accordance with the respective site of employment, i.e., virtual museums or actual museums and CH sites. It must be noted that practically half of the actual museums employ MR applications, while CH sites gravitate toward VR or mixed VR and MR solutions.
The main finding here is that actual museums increasingly invest in MR experiences to engage audiences, given that cases of MR-based DHs cluster in the period from 2016 to 2021, whereas research on VR-based DHs is spread rather evenly over the period 2008–2021 (one case each in 2008 and 2011, two cases in 2014, one case in 2017 and one case in 2021). Figure 7 illustrates the distribution of VR/MR occurrences in actual museums across time and shows the increasing prevalence of MR applications in them, which indicates the potential of such technology to foster visitors’ experience when they are physically present and interacting with the exhibits.
This paper expands on, as well as draws upon, another article [6] by the same authors that presents the first results of a survey on the use of virtual humans in museums and places of cultural interest. The main difference between the two publications is that the preliminary one focused exclusively on the characteristics and trends of DH uses in the cultural domain as facilitators, guides and generally mediating agents between cultural content and visitors. The present survey includes all types of DHs, practically shedding light on users’ avatars as well. Therefore, we will refer to examples and conclusions of the early survey. At this point, we clarify that, as the scope of this article is broader, the present paper draws on publications from the IEEE and ACM digital libraries, which offer a representative cross-section of the research that pertains to the topic. Nevertheless, we will refer to examples discussed from other sources as well as those presented in [1].
In a nutshell, the main findings lead to the conclusion that research gravitates toward digital agents that foster users’ engagement and, as time progresses, less so toward users’ avatars. Figure 8 below illustrates the increasing percentage of research on agents as mediators of cultural content (blue bars) relative to user avatar-specific publications (yellow bars) and distinguishes the cases in which both user avatars and DHs have been investigated (mixed cases, green bars); interestingly, before 2014, mixed approaches were as numerous as those related to DH avatars only. This is in stark contrast to the picture from 2015 onwards, in which, apart from one mixed case in 2021, there is an exclusive focus on avatars as mediators. This fact, on top of the overwhelming percentage of publications addressing DHs from 2019 onwards (11 out of 13), shows the shift in researchers’ interest toward investigating ways to improve digital agents as a means to foster users’ experience.
While the interest in users’ avatars recedes as a percentage, it is almost stable in absolute numbers, and the increasing volume of total publications appears to correlate with the respective increase in DHs (agents), which seem to account for most of the additional papers published in the last decade. New technological breakthroughs provide opportunities to tap into the potential of digital guides or facilitators, given the increased complexity of such functions in comparison to digital embodiments of users who browse media, e.g., a virtual gallery. The latter do not require the development of elaborate behavioural or visual realism, even in instances of social user avatars, in which users cluster and interact, often in the presence of a mediator/digital guide avatar (three out of five cases, as shown in Figure 1).
The tables below (Table 1 and Table 2) show the source data set on which the previous tables/figures are based. We followed a different approach here in order to gather the data in as concise a manner as possible and present them in a compact tabular form. Therefore, we avoided including mixed cases (e.g., publications on a research project that includes both VR and MR) as a separate column and marked such instances in both columns concerned (in the above example, both VR and MR, as opposed to including a further column, namely ‘VR and MR’, as we did in the rest of the tables with colour-coded bars). The same principle is applied throughout the table for online/onsite uses and for the types of DHs (social avatars are marked as such, as are ‘inert DHs’, within the columns under user avatar and DH, respectively). Lastly, the far-right column presents instances of research held in controlled environments (referred to as ‘Labs’), in which human subjects are experiment participants, as opposed to ‘Museums/CH sites’, in which the people involved are visitors and, therefore, the locus of the research is a visitable environment.
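To make the coding scheme concrete, the following is a small sketch of our own (not the authors’ actual data file) that encodes two publications discussed in this review under the conventions just described; the column names and the pandas-based layout are assumptions made for illustration only.

    import pandas as pd

    # Hypothetical re-encoding (ours) of two publications discussed in this review.
    # A study that uses both technologies or both deployment modes is marked in both columns.
    records = [
        {
            "Reference": "Geigel et al., 2020 [12]",
            "VR": True, "MR": True,            # VR for remote users, AR/MR onsite
            "Onsite": True, "Online": True,
            "User avatar": False, "DH (mediator)": True,
            "Setting": "Museum/CH site",
        },
        {
            "Reference": "Trajkova et al., 2020 [9]",
            "VR": True, "MR": True,            # visitors may choose VR or MR onsite; includes augmented virtuality
            "Onsite": True, "Online": False,
            "User avatar": True, "DH (mediator)": False,
            "Setting": "Museum/CH site",
        },
    ]

    coding_table = pd.DataFrame.from_records(records)
    print(coding_table.to_string(index=False))

Tallying such boolean columns per year is all that is needed to reproduce counts of the kind plotted in Figures 1 and 5–8.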

4. Discussion

The findings suggest that researchers’ interest shifts from users’ avatars to digital humans used as mediators of cultural content (e.g., virtual guides), as the data presented in Figure 8 illustrate. In the last five years (2018 to 2022), only four out of a total of fifteen publications focused exclusively on users’ avatars, as the interest shifted toward the potential of digital agents. Moreover, these four instances of research on user avatars investigate very specific aspects or uses of avatars, e.g., testing new technologies such as the Brain–Computer Interface or addressing users with special needs. Therefore, recent research on users’ avatars has examined very specific applications rather than the potential of avatars as such, in contradistinction to research on digital humans, in which researchers appear to invest in promoting engagement with CH. More specifically, [40] examines the effect of non-realistic representations of humans in a social setting, [41] investigates the ability of avatars to foster the interest of users with autism, [46] presents research on directing an avatar through the Brain–Computer Interface, and [9], in an onsite installation, projects users’ avatars as an on-screen digital human. Recent publications, e.g., in 2018, that focus on user avatars, whether in a social setting [40] or not [41], include minimal visual and (in the case of social avatars) interaction or communication functionalities. In fact, users’ avatars in [40] are rendered as pillars slightly larger than their actual body size, with no anatomic detail whatsoever. The authors based their minimalist approach on existing research that showed no significant contribution of visual realism to the sense of social presence, and embarked on investigating specific parameters such as the visualization of points of mutual interest on exhibits, captured by eye-tracking sensors, as well as of eye contact amongst users. In particular, they quote [47], who did not find significant differences between low- and high-fidelity avatars, as well as [48], who found that users tend to have higher acceptance of avatars that are as realistic as their own bodies, although this does not affect their perception of social presence.
Interestingly, Roth et al. [40] did not find any significant changes in learning or the enjoyment of the experience or, put otherwise, in overall engagement, apart from the fact that the visualization of points of interest attracted users’ gaze (mutually visible bright spots hovering before the exhibit, seen by all participants in the experiment) and increased curiosity about these specific parts of the artefact. Moreover, even though the sense of co-presence was amplified by the pillar-like avatar visualisations, this did not translate into tangible benefits in terms of engagement with the exhibition. Lastly, even the innovative visualization of mutual gaze (eye contact amongst users) with floating, coloured shapes, despite being assumed to be a positive manifestation of social behaviour, added little to the interaction with the exhibits (see Figure 9a–c). This shows that users’ own avatars may generate a sense of co-presence, but neither behavioural nor visual realism adds significantly to the ways visitors relate to cultural content.
This comes in contradistinction to the important effect that digital human guides or other avatars have on users’ perceptions of exhibitions and CH sites. Sorce et al. [41], who did investigate the engaging potential of interacting with (in fact, directing) one’s own avatar in a virtual museum (VM) environment, did so in an experiment with users who have some form of autism. They did find a positive correlation between engagement with exhibits and users’ projection of themselves onto an avatar that views artwork, but this is evidently case-specific research that does not support a safe conclusion about the balance between user avatars and virtual agents as factors that foster a VM experience. In a similar vein, the authors of [46] investigate the potential of directing a VR-based user avatar through the novel technological possibility of the Brain–Computer Interface (BCI) in a controlled environment. In the fourth case of research on user avatars in the last five years [9], the user avatar appears and functions as a digital agent, appearing on a large screen, facing the user and other visitors, and interacting with informational content.
In this instance, within an onsite installation in an actual museum setting, visitors’ own avatars can be visualised in different ways: skeletal, robot-like, or realistically through an augmented virtuality application that projected their image through video in real time. In effect, users were projected onto a large screen, in essence watching themselves unpack knowledge by interacting with the exhibits. This resembles watching a digital human provide access to information on demand, as it transposes the user into an entity that literally stands opposite them. Another element this example brings to the fore is the frequently observed mixed approach in which users may choose between different visualizations or modes. This trend is also seen in cultural institutions that employ both VR and MR applications for the same cultural content (e.g., Geigel et al., 2020 [12]), as well as in diversified modalities users can choose from within the same setting.
There is a significant turn toward examining the potential of digital humans to communicate with and engage the viewer, as the following instances illustrate. The fact that one’s own virtual appearance or behavioural features have far less significance in affecting meaning-making and learning in museums and CH sites [40], in comparison to the respective characteristics of digital guides, comes as no surprise. This is not drastically different from a real-world situation in a museum or gallery, in which the attitude, perceived professionalism, mode of address and looks of the person mediating the exhibition are far more important to us than our appearance and the behavioural modalities that we employ. Therefore, pertinent research puts emphasis on the qualitative characteristics of virtual agents that attempt to engage the viewer. At this point, two examples are offered to draw attention to the cutting-edge technology incorporated into such digital agents, as well as the emphasis put on the behavioural realism they feature as they engage in life-like conversations with people.
As explained in [6], in 2005 Kopp, Gesellensetter, Krämer and Wachsmuth [49] introduced an embodied conversational agent, Max, as a museum guide in a real-world setting, in a computer museum in Paderborn, Germany. Max was the size of a real human and appeared on a screen, face-to-face with museum visitors. Their research addressed issues of how conversational behaviour is achieved and evaluated users’ acceptance and engagement. In a similar vein, Swartout et al. [50] present the results of the InterFaces project, implemented by the USC Institute for Creative Technologies and the Museum of Science, Boston, concerning the use of two ‘life-sized, photorealistic’ virtual museum guides, Ada and Grace. Visitors can ask questions in their natural language and interact with the virtual humans with the help of museum staff members.
A large category comprises the application of agents in museum spaces onsite, in which, with the use of portable devices, users prompt agents’ narrations by, e.g., scanning a target or choosing a topic by tapping on the screen. In these cases, responses are predetermined and limited to the number of topics/items of interest. Narrations may nevertheless come in the form of answers to verbal questions, as is the case with ‘Ada and Grace’ [50] and Max [49]; these involve elaborate mechanisms for processing input in the form of questions and for verbally providing the most apposite answer from a large pool of possible replies, establishing the most appropriate response in each case. This even includes off-topic questions that are addressed as such, with generic replies maintaining a natural communication protocol.
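As a minimal illustration of this response-selection pattern (ours, and far simpler than the language processing of the actual systems [49,50]), the sketch below matches a visitor’s question against a small pool of scripted answers and falls back to a generic off-topic reply when no candidate is close enough; the answer pool and the threshold are invented for the example.

    from difflib import SequenceMatcher

    ANSWER_POOL = {
        "when was this exhibit made": "This piece dates from the early twentieth century.",
        "who created this exhibit": "It was built by the museum's own research laboratory.",
        "where does the exhibit come from": "It was donated by a regional science collection.",
    }
    OFF_TOPIC_REPLY = "That's an interesting question, but let's get back to the exhibit."
    MATCH_THRESHOLD = 0.6  # assumed cut-off for accepting a scripted answer

    def respond(question: str) -> str:
        """Return the scripted answer whose key best matches the question, or a generic reply."""
        best_key, best_score = None, 0.0
        for key in ANSWER_POOL:
            score = SequenceMatcher(None, question.lower(), key).ratio()
            if score > best_score:
                best_key, best_score = key, score
        return ANSWER_POOL[best_key] if best_score >= MATCH_THRESHOLD else OFF_TOPIC_REPLY

    print(respond("Who created this exhibit?"))          # scripted answer
    print(respond("What's the weather like today?"))     # generic off-topic reply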
These instances, especially when paired with increased photorealism in the agents’ depictions, foster engagement, particularly as the reviewed instances address school students. Nevertheless, these virtual interlocutors, which appear on static screens in fixed installations, are akin to an information kiosk that is somewhat detached from the flow of the exhibition halls and their path-related design for learning.
It is notable that the above applications [49,50] and the one presented by Rivera-Gutierrez et al. [13] are 3D VR installations belonging to technology and science museums. To an extent, they are expositions of advanced technology in themselves, and therefore they both enhance interest in technology and foreground the institutions’ engagement with advancements in ICT and science in general.
Recent research on proactive virtual agents shows that we are already at the point at which virtual humans do not only respond to commands by providing information but take the initiative to address users first (see Figure 10 and Figure 11). They are able to detect human presence and, moreover, distinguish specific characteristics of visitors such as the colour of their clothing; they can even infer users’ emotional state by decoding their movement or body posture [51], or their gaze direction and the time span of their attention toward the exhibit presented by the DH [45]. In response, they adopt voice tonalities, body language and an expression/gaze that enhance their effect. The images from Ko et al. [44] show an onsite installation in which a life-size on-screen agent representing a famous Korean admiral addresses a visitor, attracting their attention by inviting them to play a game before offering historical information and responding to queries through interactive navigation (see Figure 10).
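The following is a deliberately simplified sketch of such a proactive behaviour loop; it is our illustration, not the systems of [44,45], and the sensor and rendering interfaces, the attention threshold and the spoken lines are all assumptions.

    import time

    ATTENTION_TIMEOUT_S = 5.0  # assumed threshold; the surveyed systems do not report one

    class ProactiveDocent:
        def __init__(self, sensors, renderer):
            self.sensors = sensors      # assumed wrapper around presence/gaze tracking
            self.renderer = renderer    # assumed wrapper around speech/animation output

        def run(self):
            visitor = self.sensors.wait_for_visitor()
            # Take the initiative, e.g. by referring to a detected trait such as clothing colour
            # and inviting the visitor to play a game, as the docent in [44] does.
            self.renderer.say(f"Hello! I like your {visitor.clothing_colour} jacket. "
                              "Would you like to play a short game about this exhibit?")
            last_attentive = time.monotonic()
            while self.sensors.visitor_present():
                if self.sensors.gaze_on_exhibit():
                    last_attentive = time.monotonic()
                elif time.monotonic() - last_attentive > ATTENTION_TIMEOUT_S:
                    # Interject to regain attention, as the DH in [45] does.
                    self.renderer.say("Over here! There is one more detail worth a look.")
                    last_attentive = time.monotonic()
                time.sleep(0.1)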
The case study presented by Bönsch et al. [45] discusses the effect on users (participants in an experiment) of the mode of address; two modes were tested: a so-called virtual guide, who acted in a more predictable and professional manner, and a virtual companion, who at times became more convivial. Judging from users’ relative positions and gaze, the DH interjected at times, trying to ensure she still had their attention when participants seemed to lose interest and their gaze wandered away from the exhibit presented to them. Interestingly, the findings suggest that when the DH became too casual in her address, this caused some discomfort rather than appreciation of the naturalness of the behaviour, even though the participants could justify every comment she made.
Sylaiou et al. [14] offer insights on the correlation between a virtual agent’s appearance, its assumed professionalism, and its mode of address/register of spoken language, in an experiment involving three types of virtual guides presenting a Roman statue and, in fact, the narrative that underpins the historical scene it depicts. Even though this instance does not include mutual conversation, it is indicative of the variables that condition users’ acceptance of agents and, in turn, the effect on their experience. Natural conversation, incorporating the mutual recognition of emotional state judged by visual clues, appears to be a way forward; however, as the hardware and software demands of such ventures are exceptionally high, even nowadays, such examples were initially to be found in technology museums, doubling as exhibits.
It is difficult to separate the naturalness of interaction from the discomfort of entering the uncanny valley that a visitor may experience when encountering a less-than-predictable agent, especially one with no clearly defined role. Likewise, little separates the presence of a highly advanced virtual agent that engages in conversation about exhibits from an exhibit of technological prowess on the institution’s part which, in the last analysis, may distract from the exhibition when deployed, e.g., in an art museum.
While these balances need to be kept, the potential for the innovative inclusion of behavioural realism is so great that more research needs to be conducted in the direction of not only honing existing methodologies and approaches, but also expanding such advancements into the field of MR, and even hand-held device applications. This is because such applications were shown to offer a balance between immersion and retaining contact with the actuality of spaces and artefacts, which, in the last analysis, should be the epicentre of attention. Augmented reality offers the means to maintain contact with the physical exhibits throughout museum/cultural heritage site spaces, thus weaving itself into the flow of a visit. However, the behavioural realism of digital humans is a demanding task in terms of the processing power needed and is not present in any of the surveyed applications designed for hand-held devices. Nevertheless, there is scope for future research to investigate ways to combine the merits of using portable devices, and especially MR applications, with the experience of natural interaction with a digital human.
Mixed methodological approaches are evident in the tables and seem to be adopted to diversify the services offered, as well as to combine the optimal aspects of each technology. For example, Geigel et al. [12] present an approach based on both VR and AR, online and onsite at a museum, respectively. A 3D digital human agent provides narrations on predetermined topics that users choose, with the help of VR and AR, for interactive storytelling experiences at the museum or remotely through a browser (see Figure 12 below). Prompting narrations is a widespread approach, especially in the use of AR digital guides. A characteristic example is described by Breuss-Schneeweis [11], in which a digital human agent in the form of an ancient Celt explains the exhibits when smartphone app users scan specific targets at the Salzburg Museum of Celtic Heritage. While in these instances users prompt VH narrations by choosing topics, there is no possibility for direct communication, given the resources such functionality would require, especially considering the limitations hand-held devices pose in terms of processing power. Users may provide non-verbal input to DHs so that they can provide the most apposite narration or information in different ways, apart from gesture commands or choosing options from a menu; in Rivera-Gutierrez et al. [13], virtual humans acting as medical doctors interactively provide lessons on a series of health issues within an installation at a science museum, offering advice to visitors on issues such as weight loss. Overall, the surveyed research appears to investigate the potential uses of differing approaches, methodologies and technologies. As outlined in the following Conclusions section, there is scope for future research to aim at combining the strengths of existing (and often diverging) approaches. Regarding applications that can highlight the merits of MR, hand-held devices with the capabilities of onsite installations that support DHs communicating with users in their natural language are an example of the technological and methodological convergences that could foster cultural experiences in the near future.

5. Conclusions

The characteristics of both user avatars and three-dimensional animated agents depend on their intended use, function, and role. While users’ avatars were more intensely investigated until the mid-2010s, especially in relation to multi-user learning environments, the focus has shifted to the potential of DHs to engage audiences. Regarding agents, appearance does matter for their effectiveness, to an extent. As Yee and Bailenson explain, users who engage in self-disclosure tasks tend to be more intimate when addressing a more attractive virtual agent [52]. This is an indication that the use of, or engagement with, avatars or virtual humans follows similar patterns to social exchanges in real life. Behavioural realism, however, appears to be the most critical factor in the quest to foster visitors’ cultural experience. Virtual agents’ ability to sense, decode and respond to human gestures, body language and patterns of movement, and to reciprocate with verbal communications and facial expressions in accordance with the situation, is already embedded in experimental systems [44,45]. While reciprocal, natural conversation capacity for agents has been developed and employed, this typically takes place in science museums. This underlines the fact that such technologies are seen both as a fulcrum to engage audiences and as an exhibit of state-of-the-art technology.
In several cases, users prompt DHs’ narrations and elicit information by choosing an option from a given menu through different means of interaction, such as gesture commands or scanning tags. Useful as this may be, incorporating the capacity to understand visitors’ verbal or even written questions about exhibits would also have the added benefit of, e.g., archaeological museums gaining insights into audiences’ actual interests, queries and thoughts about their collection items. Even if it is still a tall order to present systems that are able to maintain a natural conversation, reciprocity in verbal/written exchanges is something that would greatly enhance mutual communication. The main point that this review foregrounds is that multimodal and multi-digital solutions can combine the strengths of AR, namely versatility and discreet presence, with the interactivity capacities of VR installations. While the more advanced functionalities naturally occur in digital technology museums, where the use of virtual humans provides not only a mediating agent between audiences and exhibitions but also an exhibit in its own right, it may serve the entire cultural heritage sector to undertake and adapt such capacities in order to engage visitors by providing meaningful connections with cultural treasures in exhilarating and informative ways that acknowledge the specificity and demands of the user.

Author Contributions

Conceptualization, S.S. and C.F.; methodology, S.S. and C.F.; formal analysis, S.S. and C.F.; investigation, S.S. and C.F.; resources, S.S. and C.F.; writing—original draft preparation, S.S. and C.F.; writing—review and editing, S.S. and C.F.; visualization, S.S. and C.F.; supervision, S.S. and C.F.; project administration, C.F.; funding acquisition, C.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the operational program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH–CREATE–INNOVATE (project code: T1EDK-2-01392).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Kolesnichenko, A.; McVeigh-Schultz, J.; Isbister, K. Understanding Emerging Design Practices for Avatar Systems in the Commercial Social VR Ecology. In Proceedings of the DIS ’19: Designing Interactive Systems Conference, San Diego, CA, USA, 23–28 June 2019. [Google Scholar] [CrossRef]
  2. Nowak, K.L.; Rauh, C. The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction. J. Comput.-Mediat. Commun. 2005, 11, 153–178. [Google Scholar] [CrossRef]
  3. Nowak, K.L.; Fox, J. Avatars and computer-mediated communication: A review of the definitions, uses, and effects of digital representations on communication. Rev. Commun. Res. 2018, 6, 30–53. [Google Scholar] [CrossRef]
  4. Machidon, O.M.; Duguleana, M.; Carrozzino, M. Virtual humans in cultural heritage ICT applications: A review. J. Cult. Herit. 2018, 33, 249–260. [Google Scholar] [CrossRef]
  5. Carrozzino, M.A.; Galdieri, R.; Machidon, O.M.; Bergamasco, M. Do Virtual Humans Dream of Digital Sheep? IEEE Comp. Graph. Appl. 2020, 40, 71–83. [Google Scholar] [CrossRef] [PubMed]
  6. Sylaiou, S.; Fidas, C. First results of a survey concerning the use of digital human avatars in museums and cultural heritage sites. In Proceedings of the 2nd International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET 2022), Nicosia, Cyprus, 4–7 October 2022. [Google Scholar]
  7. Kico, I.; Zelníček, D.; Liarokapis, F. Assessing the Learning of Folk Dance Movements Using Immersive Virtual Reality. In Proceedings of the 24th International Conference Information Visualisation (IV), Melbourne, Australia, 7–11 September 2020; pp. 587–592. [Google Scholar] [CrossRef]
  8. Stergiou, M.; Vosinakis, S. Exploring costume-avatar interaction in digital dance experiences. In Proceedings of the 8th International Conference on Movement and Computing (MOCO ’22), Chicago, IL, USA, 22–25 June 2022. [Google Scholar] [CrossRef]
  9. Trajkova, M.; Alhakamy, A.; Cafaro, F.; Mallappa, R.; Kankara, S.R. Move Your Body: Engaging Museum Visitors with Human-Data Interaction. In Proceedings of the Conference on Human Factors in Computing Systems (CHI ’20), Honolulu, HI, USA, 25–30 April 2020. [Google Scholar] [CrossRef]
  10. Teixeira, N.; Lahm, B.; Peres, F.F.F.; Mauricio, C.R.M.; Xavier Natario Teixeira, J.M. Augmented Reality on Museums: The Ecomuseu Virtual Guide. In Proceedings of the Symposium on Virtual and Augmented Reality (SVR’21), Virtual Event, Brazil, 18–21 October 2021. [Google Scholar] [CrossRef]
  11. Breuss-Schneeweis, P. The speaking celt. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’16), Heidelberg, Germany, 12–16 September 2016. [Google Scholar] [CrossRef]
  12. Geigel, J.; Shitut, K.S.; Decker, J.; Doherty, A.; Jacobs, G. The Digital Docent: XR storytelling for a Living History Museum. In Proceedings of the 26th ACM Symposium on Virtual Reality Software and Technology (VRST ’20), Virtual Event, Canada, 1–4 November 2020. [Google Scholar] [CrossRef]
  13. Rivera-Gutierrez, D.; Ferdig, R.; Li, J.; Lok, B. Getting the Point Across: Exploring the Effects of Dynamic Virtual Humans in an Interactive Museum Exhibit on User Perceptions. IEEE Trans. Vis. Comput. Graph. 2014, 20, 636–643. [Google Scholar] [CrossRef] [PubMed]
  14. Sylaiou, S.; Kasapakis, V.; Gavalas, D.; Djardanova, E. Avatars as Storytellers: Affective Narratives in Virtual Museums. J. Pers. Ubiquitous Comput. 2020, 24, 829–841. [Google Scholar] [CrossRef]
  15. Stylianidis, E.; Evangelidis, K.; Vital, R.; Dafiotis, P.; Sylaiou, S. 3D Documentation and Visualization of Cultural Heritage Buildings through the Application of Geospatial Technologies. Heritage 2022, 5, 2818–2832. [Google Scholar] [CrossRef]
  16. Rzayev, R.; Karaman, G.; Wolf, K.; Henze, N.; Schwind, V. The Effect of Presence and Appearance of Guides in Virtual Reality Exhibitions. In Proceedings of the Mensch und Computer 2019 (MuC’19), Hamburg, Germany, 8–11 September 2019. [Google Scholar] [CrossRef]
  17. Rzayev, R.; Karaman, G.; Henze, N.; Schwind, V. Fostering Virtual Guide in Exhibitions. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’19), Taipei, Taiwan, 1–4 October 2019. [Google Scholar] [CrossRef]
  18. Kim, Y.; Kesavadas, T.; Paley, S.M.; Sanders, D.H. Real-time animation of King Ashur-nasir-pal II (883–859 BC) in the virtual recreated Northwest Palace. In Proceedings of the Seventh International Conference on Virtual Systems and Multimedia, Berkeley, CA, USA, 25–27 October 2001; pp. 128–136. [Google Scholar] [CrossRef]
  19. Chen, J.X.; Yang, Y.; Loffin, B. MUVEES: A PC-based multi-user virtual environment for learning. In Proceedings of the IEEE Virtual Reality, Los Angeles, CA, USA, 22–26 March 2003; pp. 163–170. [Google Scholar] [CrossRef]
  20. Tavares, T.A.; Oliveira, S.A.; Canuto, A.; Goncalves, L.M.; Filho, G.S. An infrastructure for providing communication among users of virtual cultural spaces. In Proceedings of the WebMedia and LA-Web, Ribeirao Preto, Brazil, 15 October 2004; pp. 54–61. [Google Scholar] [CrossRef]
  21. Sari, R.F. Interactive Object and Collision Detection Algorithm Implementation on a Virtual Museum based on Croquet. In Proceedings of the Innovations in Information Technologies (IIT), Dubai, United Arab Emirates, 18–20 November 2007; pp. 685–689. [Google Scholar] [CrossRef]
  22. Schulman, D.; Sharma, M.; Bickmore, T.W. The identification of users by relational agents. In Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems (AAMAS), Estoril, Portugal, 12–16 May 2008; pp. 105–111. [Google Scholar]
  23. Xinyu, D.; Pin, J. Three Dimension Human Body Format and Its Virtual Avatar Animation Application. In Proceedings of the Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 21–22 December 2008; pp. 1016–1019. [Google Scholar] [CrossRef]
  24. Pan, Z.; Chen, W.; Zhang, M.; Liu, J.; Wu, G. Virtual Reality in the Digital Olympic Museum. IEEE Comput. Graph. Appl. 2009, 29, 91–95. [Google Scholar] [CrossRef] [PubMed]
  25. Mu, B.; Yang, Y.; Zhang, J. Implementation of the Interactive Gestures of Virtual Avatar Based on a Multi-user Virtual Learning Environment. In Proceedings of the International Conference on Information Technology and Computer Science, Beijing, China, 8–11 August 2009; pp. 613–617. [Google Scholar] [CrossRef]
  26. Nimnual, R.; Chaisanit, S.; Suksakulchai, S. Interactive virtual reality museum for material packaging study. In Proceedings of the ICCAS 2010, Gyeonggi-do, Korea, 27–30 October 2010; pp. 1789–1792. [Google Scholar] [CrossRef]
  27. Dantas, R.R.; de Melo, J.C.P.; Lessa, J.; Schneider, C.; Teodósio, H.; Gonçalves, L.M.G. A path editor for virtual museum guides. In Proceedings of the IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems, Taranto, Italy, 6–8 September 2010; pp. 136–140. [Google Scholar] [CrossRef]
  28. Pan, Z.; Jiang, R.; Liu, G.; Shen, C. Animating and Interacting with Ancient Chinese Painting—Qingming Festival by the Riverside. In Proceedings of the Second International Conference on Culture and Computing, Kyoto, Japan, 20–22 October 2011; pp. 3–6. [Google Scholar] [CrossRef]
  29. Oliver, I.; Miller, A.; Allison, C. Mongoose: Throughput Redistributing Virtual World. In Proceedings of the 21st International Conference on Computer Communications and Networks (ICCCN), Munich, Germany, 30 July–2 August 2012; pp. 1–9. [Google Scholar] [CrossRef]
  30. Hill, V.; Mystakidis, S. Maya Island virtual museum: A virtual learning environment, museum, and library exhibit. In Proceedings of the 18th International Conference on Virtual Systems and Multimedia, Milan, Italy, 2–5 September 2012; pp. 565–568. [Google Scholar] [CrossRef]
  31. Kyriakou, P.; Hermon, S. Building a dynamically generated virtual museum using a game engine. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; p. 443. [Google Scholar] [CrossRef]
  32. Dawson, T.; Vermehren, A.; Miller, A.; Oliver, I.; Kennedy, S. Digitally enhanced community rescue archaeology. In Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; pp. 29–36. [Google Scholar] [CrossRef] [Green Version]
  33. Tsoumanis, G.; Kavvadia, E.; Oikonomou, K. Changing the look of a city: The v-Corfu case. In Proceedings of the 5th International Conference on Information, Intelligence, Systems and Applications (IISA 2014), Chania, Greece, 7–9 July 2014; pp. 419–424. [Google Scholar] [CrossRef]
  34. Aguirrezabal, P.; Peral, R.; Pérez, A.; Sillaurren, S. Designing history learning games for museums. In Proceedings of the Virtual Reality International Conference (VRIC ’14), Laval, France, 9–11 April 2014. [Google Scholar] [CrossRef]
  35. Moreno, I.; Prakash, E.C.; Loaiza, D.F.; Lozada, D.A.; Navarro-Newball, A.A. Marker-less feature and gesture detection for an interactive mixed reality avatar. In Proceedings of the 20th Symposium on Signal Processing, Images and Computer Vision (STSIVA), Bogotá, Colombia, 2–4 September 2015; pp. 1–7. [Google Scholar] [CrossRef]
  36. Cafaro, A.; Vilhjálmsson, H.H.; Bickmore, T. First Impressions in Human-Agent Virtual Encounters. ACM Trans. Comput.-Hum. Interact. 2016, 23, 1–40. [Google Scholar] [CrossRef]
  37. Ghani, I.; Rafi, A.; Woods, P. Sense of place in immersive architectural virtual heritage environment. In Proceedings of the 2016 22nd International Conference on Virtual System & Multimedia (VSMM), Kuala Lumpur, Malaysia, 17–21 October 2016; pp. 1–8. [Google Scholar] [CrossRef]
  38. Bruno, F.; Lagudi, A.; Ritacco, G.; Agrafiotis, P.; Skarlatos, D.; Cejka, J.; Kouril, P.; Liarokapis, F.; Philpin-Briscoe, O.; Poullis, C.; et al. Development and integration of digital technologies addressed to raise awareness and access to European underwater cultural heritage. An overview of the H2020 i-MARECULTURE project. In Proceedings of the OCEANS 2017, Aberdeen, Scotland, 19–22 June 2017. [Google Scholar] [CrossRef]
  39. Linssen, J.; Theune, M. R3D3: The Rolling Receptionist Robot with Double Dutch Dialogue. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’17), Vienna, Austria, 6–9 March 2017. [Google Scholar] [CrossRef]
  40. Roth, D.; Kleinbeck, C.; Feigl, T.; Mutschler, C.; Latoschik, M.E. Beyond Replication: Augmenting Social Behaviors in Multi-User Virtual Realities. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18–22 March 2018; pp. 215–222. [Google Scholar] [CrossRef]
  41. Sorce, S.; Gentile, V.; Oliveto, D.; Barraco, R.; Malizia, A.; Gentile, A. Exploring Usability and Accessibility of Avatar-based Touchless Gestural Interfaces for Autistic People. In Proceedings of the 7th ACM International Symposium on Pervasive Displays (PerDis ’18), Munich, Germany, 6–8 June 2018. [Google Scholar] [CrossRef]
  42. Ali, G.; Le, H.-Q.; Kim, J.; Hwang, S.-W.; Hwang, J.-I. Design of Seamless Multi-modal Interaction Framework for Intelligent Virtual Agents in Wearable Mixed Reality Environment. In Proceedings of the 32nd International Conference on Computer Animation and Social Agents (CASA ’19), Paris, France, 1–3 July 2019. [Google Scholar] [CrossRef]
  43. Ye, Z.-M.; Chen, J.-L.; Wang, M.; Yang, Y.-L. PAVAL: Position-Aware Virtual Agent Locomotion for Assisted Virtual Reality Navigation. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bari, Italy, 4–8 October 2021; pp. 239–247. [Google Scholar] [CrossRef]
  44. Ko, J.-K.; Koo, D.W.; Kim, M.S. A Novel Affinity Enhancing Method for Human Robot Interaction—Preliminary Study with Proactive Docent Avatar. In Proceedings of the 21st International Conference on Control, Automation and Systems (ICCAS), Jeju, Korea, 12–15 October 2021; pp. 1007–1011. [Google Scholar] [CrossRef]
  45. Bönsch, A.; Hashem, D.; Ehret, J.; Kuhlen, T.W. Being Guided or Having Exploratory Freedom. In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents (IVA ’21), Fukuchiyama, Kyoto, Japan, 14–17 September 2021. [Google Scholar] [CrossRef]
  46. Li, P.; Wei, A.; Peng, F.; Zhang, N.; Chen, C.; Wei, Q. Virtual Reality Roaming System Design Based on Motor Imagery-Based Brain-Computer Interface. In Proceedings of the IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 4–6 March 2022; pp. 1619–1623. [Google Scholar] [CrossRef]
47. Bente, G.; Rüggenberg, S.; Krämer, N.C.; Eschenburg, F. Avatar-Mediated Networking: Increasing Social Presence and Interpersonal Trust in Net-Based Collaborations. Hum. Commun. Res. 2008, 34, 287–318. [Google Scholar] [CrossRef]
  48. Latoschik, M.E.; Roth, D.; Gall, D.; Achenbach, J.; Waltemate, T.; Botsch, M. The effect of avatar realism in immersive social virtual realities. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST), Gothenburg, Sweden, 8–10 November 2017. [Google Scholar]
  49. Kopp, S.; Gesellensetter, L.; Krämer, N.C.; Wachsmuth, I. A conversational agent as museum guide—Design and evaluation of a real-world application. In IVA 2005. LNCS (LNAI); Panayiotopoulos, T., Gratch, J., Aylett, R., Ballin, D., Olivier, P., Rist, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3661, pp. 329–343. [Google Scholar] [CrossRef]
  50. Swartout, W.; Traum, D.; Artstein, R.; Noren, D.; Debevec, P.; Bronnenkant, K.; Williams, J.; Leuski, A.; Narayanan, S.; Piepol, D.; et al. Ada and Grace: Toward Realistic and Engaging Virtual Museum Guides. In Intelligent Virtual Agents. IVA 2010. Lecture Notes in Computer Science; Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6356. [Google Scholar] [CrossRef]
  51. Sénécal, S.; Cadi, N.; Arévalo, M.; Magnenat-Thalmann, N. Modelling Life Through Time: Cultural Heritage Case Studies. In Mixed Reality and Gamification for Cultural Heritage; Ioannides, M., Magnenat-Thalmann, N., Papagiannakis, G., Eds.; Springer: Cham, Switzerland, 2017. [Google Scholar] [CrossRef]
  52. Yee, N.; Bailenson, J. The Proteus Effect: The Effect of Transformed Self-Representation on Behavior. Hum. Commun. Res. 2007, 33, 271–290. [Google Scholar] [CrossRef] [Green Version]
Figure 1. (a) Publications referring to user avatars, digital humans or both. (b) Publications referring to user avatars, digital humans or both, distinguishing between virtual agents and inert DHs (that are not interactive). (c) Publications referring to user avatars, digital humans or both, foregrounding uses of social avatars.
Figure 2. VR; VR and AR; and AR applications that use digital humans.
Figure 3. Digital humans (both avatars and virtual agents) and users, online and onsite.
Figure 4. Digital humans and users’ avatars online and onsite, in publications presenting uses of DHs as mediators of cultural content (virtual agents).
Figure 5. (a) Publications regarding VR and/or AR; (b) publications on VR and AR applications according to type of museum/site.
Figure 6. Occurrence of VR-based DH uses and MR uses, according to museum/site type.
Figure 7. VR; VR and AR; and AR applications that use DHs in actual museums.
Figure 8. Publications concerning user avatars; user avatars and mediator (virtual agent); and mediator.
Figure 9. (a) Visualization of mutual gaze (eye contact amongst users) with floating, coloured shapes (pink bubbles); (b) visualization of points of interest that attract users’ gaze; and (c) a combination of the above (Roth et al., 2018 [40]).
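For illustration only (this is not the implementation used in [40]), the mutual-gaze cue in Figure 9a can be approximated by testing whether each head-tracked user’s gaze direction points at the other user’s head within a small angular tolerance; the visualization layer then renders the floating “bubble” whenever this predicate holds. A minimal Python sketch, in which all names and the 10° threshold are illustrative assumptions:

```python
import numpy as np

def mutual_gaze(pos_a, dir_a, pos_b, dir_b, max_angle_deg=10.0):
    """Return True if two users are looking at each other.

    pos_*: head positions as 3D vectors; dir_*: normalized gaze directions.
    A user "looks at" the other if the angle between their gaze direction
    and the direction towards the other head is below max_angle_deg.
    """
    def looks_at(pos_from, gaze_dir, pos_to):
        to_other = pos_to - pos_from
        to_other = to_other / np.linalg.norm(to_other)
        cos_angle = np.clip(np.dot(gaze_dir, to_other), -1.0, 1.0)
        return np.degrees(np.arccos(cos_angle)) <= max_angle_deg

    return looks_at(pos_a, dir_a, pos_b) and looks_at(pos_b, dir_b, pos_a)

# Example: two users facing each other along the x-axis
a_pos, a_dir = np.array([0.0, 1.6, 0.0]), np.array([1.0, 0.0, 0.0])
b_pos, b_dir = np.array([2.0, 1.6, 0.0]), np.array([-1.0, 0.0, 0.0])
if mutual_gaze(a_pos, a_dir, b_pos, b_dir):
    print("render eye-contact bubble between the two avatars")
```

In a real installation, the head positions and gaze vectors would be read from the HMDs’ head- or eye-tracking data every frame.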
Figure 10. The virtual human detects human presence and addresses the user (Ko et al., 2021 [44]).
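The proactive behaviour in Figure 10 can be thought of as a simple sensing loop: when a tracked visitor enters a predefined engagement zone, the docent avatar switches from an idle state to a greeting state. The sketch below is purely illustrative and is not the method of Ko et al. [44]; read_visitor_distance and play_greeting are hypothetical placeholders for whatever camera, sensor, animation and speech pipeline an installation actually uses.

```python
import time

ENGAGE_DISTANCE_M = 2.0   # a visitor closer than this triggers the docent (assumed value)
COOLDOWN_S = 30.0         # avoid greeting the same visitor repeatedly (assumed value)

def read_visitor_distance():
    """Hypothetical placeholder for a depth-camera/proximity-sensor reading in metres."""
    ...

def play_greeting():
    """Hypothetical placeholder for the avatar's greeting animation and speech."""
    ...

def docent_loop():
    last_greeting = 0.0
    while True:
        distance = read_visitor_distance()
        if distance is not None and distance < ENGAGE_DISTANCE_M:
            if time.time() - last_greeting > COOLDOWN_S:
                play_greeting()          # e.g., turn towards the visitor and say hello
                last_greeting = time.time()
        time.sleep(0.1)                  # poll the sensor at roughly 10 Hz
```

The cooldown keeps the avatar from re-greeting the same visitor every few frames while they remain inside the engagement zone.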
Figure 11. Virtual character interaction with passing people in a tunnel. Sensors are located above the people and allow interaction. Virtual Ada thanks the people for their visit to the exhibition and wishes them a good journey back (MIRALab) (© 2015 Uni. Geneva/MIRALab) [51].
Figure 12. (a) A visitor interacts with a DH using an HMD in an AR application; (b) use of a portable device (AR application); and (c) a VR application for remote users.
Table 1. Basic technology type (VR/AR), location (online or onsite), type of avatar, the environment in which it is employed and, lastly, the site of the experiment, research or application described in each publication.
Publication | VR | AR | Online | Onsite | Avatar of User | Digital Human | Virtual Museum | CH Site | Actual Museum | Lab or Museum/CH Site
1Kim, Y. et al. (2001) [18]X X XX InertX Lab
2Chen, J. X. et al. (2003) [19]X X X SocialXX Lab
3Tavares T. A. et al. (2004) [20]X X X SocialXX Lab
4Sari R. F. and Muliawan (2007) [21]X X X X Lab
5Schulman D. et al. (2008) [22]X X X XM/CHs
6Xinyu D. and J. Pin J. (2008) [23]X X X X Lab
7Pan Z. et al. (2009) [24]X X X X M/CHs
8Mu B. et al. (2009) [25]X XXX Social X Lab
9Nimnual B. et al. (2010) [26]X X X X Lab
10Dantas R. R. et al. (2010) [27]X X XX Lab
11Pan Z. et al. (2011) [28]X X X XM/CHs
12Oliver I. et al. (2012) [29]X X X Social X Lab
13Hill V. and Mystakidis S. (2012) [30]X X X SocialXX M/CHs
14Kyriakou P. & Hermon S. (2013) [31]X XXX X Lab
15Dawson T. et al. (2013) [32]X XXXX InertX XM/CHs
16Tsoumanis G. et al. (2014) [33]X X X X M/CHs
17Aguirrezabal, P. et al. (2014) [34]X XXX XLab
18Rivera-Gutierrez D. et al. (2014) [13]X X X XMuseum
19Moreno I. et al. (2015) [35]XX XX XM/CHs
20Cafaro A. (2016) [36]X X XX Lab
21Breuss-Schneeweis, P. (2016) [11] X X X XM/CHs
22Ghani I. et al. (2016) [37]X X X InertXX Lab
23Bruno F. et al. (2017) [38]XX XX X M/CHs
24Linssen, J., & Theune, M. (2017) [39]X X X XLab
25Roth D. et al. (2018) [40]X XX Social X Lab
26Sorce, S. (2018) [41]X XX X Lab
27Rzayev, R. et al. (2019a) [16] X X XX Lab
28Rzayev, R. et al. (2019b) [17]X X X XLab
29Ali G. et al. (2019) [42] X X X XLab
30Geigel J. et al. (2020) [12]XXXX XX XM/CHs
31Sylaiou S. et al. (2020) [14]X X XX Lab
32Trajkova M. et al. (2020) [9] X XX XM/CHs
33Kico I. et al. (2020) [7]X XXXX Lab
34Teixeira N. (2021) [10] X X X XM/CHs
35Ye Z. -M. et al. (2021) [43]X XX XX Lab
36Ko J. -K. et al. (2021) [44]X X X XM/CHs
37Bönsch A. et al. (2021) [45]X X XX Lab
38Li P. et al. (2022) [46]X XX X Lab
39Stergiou M. & Vosinakis S. (2022) [8]X X X Lab
Table 2. Two main aspects of each reviewed research project, in brief: (1) interface, device and type of technology; (2) description of use and interaction type. The acronym VE stands for ‘virtual environment’ and DH for ‘digital human’.
Publication | Interface Device and Type of Technology | Description of Use and Interaction Type
1. Kim Y. et al. (2001) [18] | VR stereo glasses, large screen/cave | User interacts with content; inert DH
2. Chen J.X. et al. (2003) [19] | VR, PC screen, 3D VE | Multi-user learning VE
3. Tavares T.A. et al. (2004) [20] | VR, PC screen, 3D VE | Multi-user learning VE, virtual guide
4. Sari R.F. and Muliawan (2007) [21] | VR, PC screen, 3D VE | User interacts with content
5. Schulman D. et al. (2008) [22] | VR, large screen, onsite installation | User interacts with virtual guide
6. Xinyu D. and Pin J. (2008) [23] | VR, PC screen, 3D VE | Crowd user avatar; no interaction
7. Pan Z. et al. (2009) [24] | VR, PC screen, 3D VE | User interacts with content
8. Mu B. et al. (2009) [25] | VR, PC screen, 3D VE | User avatars’ gestural interaction
9. Nimnual B. et al. (2010) [26] | VR, PC screen, 3D VE | User interacts with content
10. Dantas R.R. et al. (2010) [27] | VR museum, PC screen | DH agent guides users
11. Pan Z. et al. (2011) [28] | VR, multi-screen projection, VE | Gesture interactions with system and DH
12. Oliver I. et al. (2012) [29] | VR, PC screen, 3D VE | Multi-user learning VE
13. Hill V. and Mystakidis S. (2012) [30] | VR, PC screen, 3D VE | DH agent provides information
14. Kyriakou P. & Hermon S. (2013) [31] | VR, PC screen, 3D VE | User interacts with content
15. Dawson T. et al. (2013) [32] | VR, multiple screens, 3D VE | Non-interactive VHs (illustrative)
16. Tsoumanis G. et al. (2014) [33] | VR, PC or projection screen, 3D VE | User interacts with content
17. Aguirrezabal P. et al. (2014) [34] | VR, PC or projection screen, 3D VE | User interacts with content
18. Rivera-Gutierrez D. et al. (2014) [13] | VR, large screen, onsite installation | DH agent provides information to users
19. Moreno I. et al. (2015) [35] | VR/AR headset, Kinect sensor | User interacts with content
20. Cafaro A. (2016) [36] | Tablet, large screen, VE | User responds to DH questions
21. Breuss-Schneeweis P. (2016) [11] | AR app, smartphone | Visitors prompt VH narrations
22. Ghani I. et al. (2016) [37] | VR headset, 3D VE | User interacts with content
23. Bruno F. et al. (2017) [38] | VR headset/controller and tablet | User interacts with content
24. Linssen J. & Theune M. (2017) [39] | Screen held by actual robot | User interacts with VH on screen
25. Roth D. et al. (2018) [40] | VR headset, immersive 3D VE | User interacts with content
26. Sorce S. (2018) [41] | Projection on screen, sensors | User interacts with content
27. Rzayev R. et al. (2019a) [16] | VR headset, immersive 3D VE | DH provides information
28. Rzayev R. et al. (2019b) [17] | AR headset | DH provides information
29. Ali G. et al. (2019) [42] | AR headset | User interacts with content
30. Geigel J. et al. (2020) [12] | AR headset/mobile, VR headset/PC | Users prompt VH narrations
31. Sylaiou S. et al. (2020) [14] | VR headset, immersive 3D VE | Users prompt VH narrations
32. Trajkova M. et al. (2020) [9] | Large screen, PC, sensors, camera | Users interact with content, gesture commands
33. Kico I. et al. (2020) [7] | VR headset, 3D VE | Users watch and mimic DH
34. Teixeira N. (2021) [10] | AR app, portable device | Users interact with content, DH
35. Ye Z.-M. et al. (2021) [43] | VR headset, 3D VE | Users interact with content, DH
36. Ko J.-K. et al. (2021) [44] | Large screen, PC, sensors, camera | Users interact with content, DH
37. Bönsch A. et al. (2021) [45] | VR HMD, 3D VE, sensors | Users interact with content, DH
38. Li P. et al. (2022) [46] | BCI equipment/sensors, PC screen | Users interact with content
39. Stergiou M. & Vosinakis S. (2022) [8] | VR headset, 3D VE | Users watch and mimic DH
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
