Article

Linking Personality and Trust in Intelligent Virtual Assistants

Department Management, Communication & IT, MCI | The Entrepreneurial School, 6020 Innsbruck, Austria
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Multimodal Technol. Interact. 2023, 7(6), 54; https://doi.org/10.3390/mti7060054
Submission received: 23 March 2023 / Revised: 8 May 2023 / Accepted: 19 May 2023 / Published: 25 May 2023

Abstract

Throughout the last years, Intelligent Virtual Assistants (IVAs), such as Alexa and Siri, have increasingly gained in popularity. Yet, privacy advocates raise serious concerns regarding the amount and type of data these systems collect and consequently process. Among many other things, it is technology trust which seems to be of high significance here, particularly when it comes to the adoption of IVAs, for they usually provide little transparency as to how they function and use personal and potentially sensitive data. While technology trust is influenced by many different socio-technical parameters, this article focuses on human personality and its connection to respective trust perceptions, which in turn may further impact the actual adoption of IVA products. To this end, we report on the results of an online survey (n = 367). Findings show that on a scale from 0 to 100%, people trust IVAs 51.59% on average. Furthermore, the data point to a significant positive correlation between people’s propensity to trust in general technology and their trust in IVAs. They also show that those who exhibit a higher propensity to trust in technology tend to also have a higher affinity for technology interaction and are consequently more likely to adopt IVAs.

1. Introduction

Intelligent Virtual Assistants (IVAs) are voice-enabled applications that provide users with a large variety of services. Providers of IVAs include Amazon with its Alexa, Apple with Siri, Microsoft with Cortana, Google with its Assistant, and Samsung with Bixby. Not only are these systems available whenever needed, but their initially rather low acceptance rates have also significantly increased in recent years. This can be inferred from the steadily growing adoption of respective IVA-driven products illustrated in Figure 1.
With respect to this increase in adoption, one may even argue that the IVA field has started experiencing what Attig et al. describe as “a shift from computer anxiety and technophobia to nomophobia, the fear to be without a digital device” ([2], p. 26). To this end, we have already entered an era of “ubiquitous listening” where we are surrounded by devices that are capable of constantly listening to their environment, and where the technological advancements in speech recognition and natural language processing allow for complete device control, without the need for pressing buttons or shifting levers. Continuous technical improvements, as well as the growing speed and expansion of networks, have further let IVAs outgrow their initial task as simple information providers and increasingly allowed them to fill the role of daily companions. However, this evolution comes at a cost, illustrated by privacy concerns among users, who are now increasingly afraid of their private conversations being tapped into by companies, governments, and other parties who may have an interest in processing and (mis)using personal data [3]. Consequently, several consumers and privacy advocates keep raising their voices against technology companies and their potentially privacy-violating products. Samsung, for example, received negative publicity after a lawsuit against their speech-enabled SmartTV. As a result, researchers as well as policy makers have called for the development of more open and privacy-preserving IVA technology [4].
Strongly connected to this notion of privacy is the concept of trust, both in people and organizations, as well as in technology. Especially for intelligent technology systems, such as IVAs, which provide little transparency of their functioning to users [5], technology trust has a significant impact on whether or not they are accepted. This is true even more so than with more traditional software, as IVAs apply various statistical methods and learning systems and access multiple sources to execute tasks for the user, while the inner workings of these solutions usually remain hidden and thus difficult (or even impossible) for people to apprehend.
Many researchers have investigated technology trust, e.g., in e-commerce [6], virtual reality [7], or cloud services [8]. Respective findings support the assumption that technology trust is a rather complex, multifaceted construct, which is influenced by various socio-technological parameters. Relevant key factors include the characteristics of the trustor [9,10], the characteristics of the technology itself [11], the technology provider and its communication strategy [12], and the context or situation of technology use [13], as well as people’s past experiences (with the technology) [14]. Furthermore, personality was found to affect technology trust, and consequently, the acceptance of a given product [15,16,17]. To this end, different personality traits, for instance extraversion, as well as interaction styles, have been shown to impact how individuals approach and perceive technology and thus how likely they are to trust it.
Aiming to expand upon the existing body of knowledge around trust in technology, the work reported in this paper investigates the connection between personality and perceived trust in IVAs. Our respective analysis starts by discussing the necessary theory and relevant previous work in Section 2. Next, Section 3 outlines our research methodology, and Section 4 and Section 5 present results and respective hypotheses evaluations. Finally, Section 6 reflects upon our findings, and Section 7 concludes with some limitations and points towards potential future research directions.

2. Theoretical Background and Related Work

Gartner defines a Virtual Assistant as “a conversational, computer-generated character that simulates a conversation to deliver voice- or text-based information to a user via a Web, kiosk or mobile interface” [18]. Other terms for the technology include Smart Assistant, Virtual Digital Assistant, Voice-Controlled Agent, Intelligent Personal Assistant, Personal Virtual Assistant, Intelligent Software Assistant, Conversational Agent, Dialogue System, Chatbot, Voice Assistant, Digital Assistant or, as we chose to refer to it in this article, Intelligent Virtual Assistant (IVA). Although one may find subtle differences in functionality between all these definitions, they share the same goal, i.e., to provide information to and communicate with human interlocutors and execute small tasks (e.g., play a song). Furthermore, they are available on most PC and smartphone platforms [19].

2.1. IVA Technology Ecosystem

Generally, the IVA ecosystem consists of three key components (cf. Figure 2): (1) the IVA itself, (2) IVA-enabled devices, and (3) companion applications.
Through algorithmic learning, which is usually deployed in the cloud, IVAs are becoming increasingly smarter [21], adapting to users’ speech patterns over time, and advancing in understanding speech in context [3]. Often, they are thus already referred to as speech-based Natural User Interfaces (NUI) ([22], p. 241). IVAs are accessed through either discrete devices (e.g., Amazon Alexa) or companion applications (e.g., Apple Siri and Google Assistant); (note that both Amazon Alexa and Google Assistant are currently also accessible through respective devices, such as Apple’s HomePod or Google’s Nest/Home speaker series). In both cases, the integration of third-party apps, including music and video apps, banking apps, fitness apps, food delivery apps, games, and travel apps, allows for IVAs to significantly expand their skill set [20], so that consequently they may become a single interface for different digital services [4].

2.2. IVAs and Their Function in Everyday Life

IVAs have grown in popularity as their variety of services has increased. Core functions include the provision of quick answers to information requests, such as the current time, upcoming weather, or traffic status. Moreover, answers to concrete mathematical functions or more domain-specific questions may swiftly be provided [21]. To this end, a study by Lopatovska et al. [23] showed that the most frequent interactions with Amazon’s Alexa were these types of quick searches. They were followed by asking Alexa to play music or other tasks, such as managing correspondence, e.g., reading and answering messages and emails, dialing phone numbers and answering calls, and managing calendars and lists, as well as controlling timers, alarms, and reminders. Recently, it has particularly been Internet of Things (IoT) applications, such as lights or thermostats, which have been connected to IVA control [19]. While service providers continuously work on new application scenarios for voice control, research is more concerned with interaction characteristics such as type of voice, dialog structure, or error recovery strategies. So, while today’s IVA voices are often perceived as being neutral, clear, warm, and rather emotionless, ongoing research is working on making them more expressive and emotional or able to adopt certain styles [24]. These advances seem particularly important if one envisions the use of the technology in more sensitive domains, such as mental health care or other social dialog settings [19]. Apart from supporting social engagement, IVAs may further encourage users to perform physical exercises and establish healthy nutrition habits, so as to enhance one’s quality of life [25] and counter rising healthcare costs resulting from lifestyle diseases such as atherosclerosis, obesity, or type 2 diabetes. Elderly people in particular seem to be targeted by these self-management and self-care capabilities future IVAs may offer [26]. For this to be achieved, however, IVAs require high-level dialog skills. They need to be able to interpret natural cues, recognize and appropriately use emotions, and exhibit social dialogue competencies. Underlining these requirements, Bell et al. [27] point out that empathy, conversational smoothness, and trust, as well as a sense of relationship, are essential aspects influencing the success of IVAs used in such health care domains.

2.3. Making IVAs More Humanlike

Numerous researchers have argued for the necessity of making IVAs more humanlike in order to increase user satisfaction and enhance user experience. Keeping “Sorry I didn’t understand that” or similar responses to a minimum helps avoid user frustration and demonstrates real advancements in IVA technology [28]. Yet, by increasing the human likeness of an IVA, one risks descending into the so-called “uncanny valley” [24], an effect that occurs when human mimicry is perceived as “creepy” [29]. On the other hand, measuring potential effects of naturalness and correctness, López et al. [22] investigated four different IVA interaction contexts (i.e., shopping, traveling, administrative tasks, and miscellaneous) and found that although Siri had the highest number of correct responses, and Google Assistant was perceived to be the most natural IVA, there was no clear preference for any of the IVAs.
Building upon work by Heater [30] and Biocca [31], and consequently Li and Nass [32], Lankton et al. [33] focused on “social presence” as a measure of naturalness, arguing that technology is more humanlike if it is capable of evoking a sense of personalness and contact with the user through features such as adapted voice characteristics and (dialog) interactivity. To this end, Cho [21] found that users report a heightened level of social presence when interacting with the IVA via voice compared with text. However, this effect was only present in contexts and conversations that involved low amounts of sensitive data. When users conversed with the IVA about highly sensitive personal topics, they reported high levels of social presence, regardless of whether they used a voice or text interface. A difference was also discovered between users with low and high levels of privacy concerns, in that individuals with low privacy concerns exhibited more positive attitudes toward IVAs when using the speech over the text interface. Finally, copying elements of human–human relationships, emotions and empathy [34] have been considered key elements to increase naturalness in human–IVA interaction. While human empathy is defined as “the natural ability to understand the emotions and feelings of others” ([35], p. 71), IVA engineers aim to equip their products with the capability to perceive and process users’ emotional cues so as to respond in an appropriate empathetic way [36]. Here, previous studies point to positive user perceptions of IVAs that display emotions [37], yet this effect is reversed if said emotional responses are inaccurate. In other words, IVAs’ expressed emotions have to meet users’ expectations [38]. If this is achieved, IVAs are perceived to be more caring, trustworthy, and likable, as well as nicer, safer, more intelligent, and more pleasant conversational partners compared with nonempathetic systems [36,39]. Similarly, Paiva et al. argue that IVAs should possess the ability to “recognize and understand people’s emotions, put themselves into our shoes, and act in an empathic manner” ([40], p. 35), in order to increase the authenticity and naturalness of an interaction.

2.4. IVA Privacy Concerns

As IVA-enabled devices use speech recognition technology, thus being able to listen to their surroundings at all times, they are considered “always on” devices. However, there is a difference between devices, in that there are some which are designed to continuously process data, such as smart home security cameras, and others, such as most IVAs, which are activated on demand, e.g., via a spoken wake-up phrase. The wake-up phrase is in this case processed locally, i.e., on the device, and not usually transmitted to or stored in the cloud. Yet, previous work has pointed to the danger of such wake-up phrases, as speaker dependency is not always guaranteed, meaning that a third party may potentially also activate the IVA device [41]. Furthermore, critics call for an investigation into which data are really collected, processed, stored, and shared by IVAs, as well as whether the privacy of users is sufficiently preserved [42,43]. Country-dependent differences in policies and legal environments play a particularly important role here, as the storage, processing, and transmission of (personal) data may be subject to different, sometimes contradicting legal regulations [44]. In the European Union (EU), for example, service users are protected under the General Data Protection Regulation (GDPR) [45], which came into force in May 2018. The respective laws govern not only the personal data of EU citizens but of any individual present in the EU. According to the GDPR, companies are advised to adopt a “Privacy by Design” approach, i.e., privacy needs to be accounted for in every stage of the product and service life cycle. Transparency as to how, when, and where data are being transmitted, processed, and stored is also relevant when service providers aim to build up or increase consumers’ trust in a system or service [46]. As for IVAs, users need to understand default data processing settings and how to change them. This includes understandable privacy declarations presented during initial setup of a device and clarification of when data are being processed. Additionally, lights or other visual cues may be used to indicate when a device is recording audio signals. Furthermore, with reference to the GDPR, users should have the possibility to access and delete all audio files being processed and stored for IVA improvement. To this end, Flikkema and Cambou [47] point out that consumers must become aware of what happens with their data and become advocates of Data Collection Transparency (DCT). Based on the statement “We cannot understand what we cannot monitor”, they propose a DCT infrastructure concept which permits users to monitor which data are being exchanged between a personal device and the cloud, thereby increasing IVA transparency and helping users make informed decisions.

2.5. IVAs and Trust

Luhmann [48] argues that trust can only exist in a familiar context, because only familiarity allows an individual to anticipate future events with certainty. Respectively, familiarity can lead to both trust and distrust based on past experiences. Digital technology is increasingly present in everyday life, whether it be in private or professional environments. Being able to cope successfully with technology is therefore of importance to achieving one’s goals. Franke et al. [16] argue that users find it easier to use new systems if these systems are similar to the ones they already know. Thus, the higher the user’s familiarity with a technological system, the easier their coping with new but similar systems. To this end, Lee and Moray [9] conducted research in the field of automated systems and found that trust levels increase the more familiar the user becomes with the system. System errors, on the other hand, hamper existing trust.
Previous work has used different models to measure trust. Depending on the degree of humanness, technology may be classified as systemlike or humanlike. Consequently, it can be evaluated with either systemlike characteristics, such as functionality, reliability, and helpfulness, or with humanlike characteristics, such as ability, benevolence, and integrity [33]. Focusing particularly on IVAs, Gulati et al. [49] found that perceived system benevolence, competence, and honesty impact users’ trust in Apple’s Siri. Clark et al. [50], on the other hand, found that trust in IVAs is mostly gauged based on the provided level of security, privacy, and transparency. That is, perceived trustworthiness depends (among other things) on which data are collected, how they are processed, who has access to them, as well as which security features are built into the technology using the data. This is also supported by Neururer et al. [51], who argue that trust in conversational agents is built by predictability and transparency. Finally, in 2019, the High-Level Expert Group on Artificial Intelligence set up by the European Commission defined a framework of principles which aims to outline key characteristics of what makes AI-driven products such as IVAs trustworthy [52].

2.6. Linking IVA Privacy and Trust Concerns

Previous work by Vimalkumar et al. [53] identified a strong positive correlation between perceived privacy risks and perceived concerns with IVAs. In other words, if people perceive IVAs to pose a potential threat to their privacy, they seem to also exhibit a higher level of concern about the technology. Moreover, it was found that the higher the perceived privacy risk, the lower the level of perceived technology trustworthiness. Consequently, it may be argued that privacy risk perception has a significant impact on the level of trust users put in a technology. Yet, it should also be noted that this influence seems to be moderated by the perceived usefulness of the IVA. That is, if the IVA’s utility is rated low, privacy concerns become more prominent, and thus users are less likely to adopt the technology. This paradox is widely known as the “privacy–utility trade-off” or “privacy calculus”. Still, a study by Burbach et al. [54] has shown that privacy, and not pricing, is often the most important factor for the acceptance of a given technology. Furthermore, Liao et al. [55] found that IVA users exhibit lower levels of general privacy concerns and higher trust and confidence in IVAs that are offered by providers who obey privacy, safety, and security regulations. Moreover, survey participants who considered adopting such a technology in the future showed trust levels which were similar to those of existing IVA users. In contrast, IVA nonusers exhibited lower levels of trust in IVA providers and their intentions towards meeting privacy, safety, and security requirements, and they were thus less likely to adopt the technology. This is also supported by recent work by Jo [56], which suggests that trust and privacy have not only an impact on technology adoption but also on its continuance intention. A promising way ahead may thus be seen in the findings of Brunotte et al. [57], who show that privacy concerns regarding IVAs may be reduced, and consequently trust in technology raised, by providing clear and transparent explanations as to which and how data are collected, stored, processed, and potentially shared.

2.7. IVAs and Personality

Previous research has shown that people’s personality significantly impacts their level of acceptance of (new) technology [15]. Respective research usually uses the Big Five personality dimensions [58], i.e., Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness, as a means of measurement. People who show high extraversion scores tend to be more talkative, assertive, active, energetic, outgoing, outspoken, dominant, forceful, enthusiastic, show-offy, sociable, spunky, adventurous, noisy, and bossy, whereas those who score low tend to be reserved, shy, and quiet. Adjectives which describe people with high agreeableness scores include sympathetic, kind, appreciative, affectionate, soft-hearted, warm, generous, trusting, helpful, forgiving, pleasant, good-natured, friendly, cooperative, gentle, unselfish, praising, and sensitive. On the opposite side, one finds antagonistic people, characterized by being fault-finding, cold, unfriendly, unforgiving, and stubborn. Individuals who exhibit high values in conscientiousness appear to be organized, thorough, planful, efficient, responsible, reliable, dependable, precise, practical, deliberate, and painstaking. Low scores describe a person that is careless, disorderly, irresponsible, lazy, and forgetful. The neuroticism dimension describes the emotional stability of a person or lack thereof. Individuals who score high tend to be tense, anxious, nervous, moody, worrying, touchy, fearful, high-strung, self-pitying, temperamental, unstable, self-punishing, despondent, and emotional. In comparison, emotionally stable people are self-confident and respond better to stressors. Finally, high openness scores relate to people who have wide interests and are more likely to be imaginative, intelligent, original, insightful, curious, sophisticated, artistic, clever, inventive, sharp-witted, or ingenious. Individuals who find themselves on the lower end of the openness spectrum usually have narrow interests and tend to be commonplace, shallow, and simple [59].
Investigating connections between the Big Five personality dimensions and technology use, Gessl et al. [17] found that agreeableness had the most eminent relationship with technology acceptance, as it positively correlates with six subdimensions, i.e., social influence, perceived sociability, attitude, perceived usefulness, intention to use, and enjoyment. Behrenbruch et al. [15], on the other hand, found empirical support for an existing relationship between extraversion and trust, as well as perceived usefulness of technology. Furthermore, it was found that people’s personality has an influence on how they approach technology, making them embrace or avoid respective interactions. Franke et al. termed this personality trait as Affinity for Technology Interaction (ATI), which describes “the tendency to actively engage in intensive technology interaction” ([16], p. 456). It is considered a key personal resource and allows individuals to better cope with technology challenges in daily life.

3. Methodology

Building upon previous work on IVA use and respective privacy and trust concerns, our goal was to investigate a potential connection between people’s personality traits and their trust in IVAs (note that although in Section 2.6 we show a clear link between IVA trust and privacy concerns, we intentionally excluded privacy aspects from our investigation so as to prevent potentially interfering measurement constructs). This investigation was consequently guided by the following research question: “What is the relationship between personality and trust in IVAs?”

3.1. Research Model and Hypotheses Development

Based on results by McKnight et al. [10], who found a relationship between different forms of trust and technology use, we propose a research model which integrates people’s trust in technology with their Big Five personality traits, as well as their affinity for technology interaction. Figure 3 provides an overview of this model, depicting McKnight et al.’s trust components in black and our additionally proposed components in red.
Following this model proposition, we deduced and consequently tested 11 hypotheses, subdivided into four different hypothesis categories (i.e., Hypotheses Categories A–D).

3.1.1. Hypotheses Category A—Personality and Propensity to Trust

With reference to the literature discussed in Section 2.7, individuals who show high scores in agreeableness tend to be more trusting. Furthermore, people who score high in neuroticism tend to be anxious and worrying. Consequently, we expected to find relationships between these two personality dimensions and different determinants of propensity to trust, expressed by the following hypotheses (cf. Figure 4):
  • H-A01: Agreeableness positively correlates with trusting stance.
  • H-A02: Agreeableness positively correlates with faith in general technology.
  • H-A03: Neuroticism negatively correlates with trusting stance.
  • H-A04: Neuroticism negatively correlates with faith in general technology.

3.1.2. Hypotheses Category B—Personality and Trusting Beliefs in a Specific Technology

Following the assumption that those who score high on extraversion tend to be more talkative, outgoing, and sociable, we hypothesize that they would spend more time interacting with and exploring the functions of an IVA (i.e., a specific technology), which should in turn lead to higher levels of trusting beliefs in said technology. Furthermore, it is stated that conscientious people tend to aim for efficiency. As IVAs help increase the efficiency of information retrieval by offering natural, voice-based interaction, conscientious people may also exhibit higher levels of trusting beliefs in this specific technology. Finally, personality traits related to the openness dimension include being open to experience and having wide interests. IVAs provide users with a range of different services; therefore, we expect that individuals with high scores in openness find IVAs helpful and functional, which in turn should be reflected in higher levels of trusting beliefs. In summary, we thus predicted relationships between the personality dimensions extraversion, agreeableness, conscientiousness, and openness and trusting beliefs in a specific technology, described by the following hypotheses (cf. Figure 5).
  • H-B01: Extraversion positively correlates with trusting beliefs in a specific technology.
  • H-B02: Agreeableness positively correlates with trusting beliefs in a specific technology.
  • H-B03: Conscientiousness positively correlates with trusting beliefs in a specific technology.
  • H-B04: Openness positively correlates with trusting beliefs in a specific technology.

3.1.3. Hypotheses Category C—Propensity to Trust and Trusting Beliefs in a Specific Technology

With reference to McKnight et al. [10], who found significant effects of propensity to trust on trusting beliefs in a specific technology, we propose that these relationships also apply in the context of IVAs. That is, we assume that if an individual shows a higher trusting stance towards technology, he/she will also have stronger trusting beliefs in IVAs. We further assume that individuals who have confidence in technology and who believe that technology is designed to be effective also have higher trusting beliefs in IVAs. Consequently, we hypothesize the following (cf. Figure 6):
  • H-C01: Trusting stance positively correlates with trusting beliefs in a specific technology.
  • H-C02: Faith in general technology positively correlates with trusting beliefs in a specific technology.

3.1.4. Hypotheses Category D—Affinity for Technology Interaction and Trusting Beliefs in a Specific Technology

Finally, we assume a relationship between affinity for technology interaction (ATI) and trusting beliefs in a specific technology. Individuals with high ATI scores tend to actively engage in technology interaction and thus may acquaint themselves with new systems more easily. Komiak and Benbasat [60] further found that familiarity with a technology increases trust. Thus, we hypothesize the following (cf. Figure 7):
  • H-D01: ATI positively correlates with trusting beliefs in a specific technology.

3.2. Survey Design and Operationalization

To investigate the relationship between people’s personality and their trust in IVAs, we used a five-part survey instrument to collect data on people’s IVA Use, Trust, Personality, Affinity for Technology Interaction, and Demographics. We opted for a bilingual survey design (i.e., German and English), as we expected the majority of participants to come from the DACH area (Germany, Austria, and Switzerland).

3.2.1. IVA Use

The survey opened with five questions on people’s use of IVAs. For cases where regular IVA use was reported, we also included questions regarding the type of IVA used (e.g., Alexa, Siri, Cortana, Google Assistant, Bixby, and others), the frequency of IVA use, the domain of IVA use, and the specific tasks connected to IVA use.

3.2.2. Trust

Next, we built upon the measures developed by McKnight et al. [10]. Consequently, we started with questions concerning institution-based trust, subdivided into structural assurance (four questions) and situational normality (four questions), before the focus moved towards trusting beliefs in a specific technology, measured through the three subconcepts of functionality (three questions), reliability (four questions), and helpfulness (four questions). All questions were reworded so as to tackle the IVA as the specific technology of interest. Lastly, an individual’s general propensity to trust, expressed by one’s trusting stance (three questions) and his/her faith in general technology (four questions), was evaluated. All question items used 5-point Likert-scaled answering, ranging from 1 = Strongly Disagree to 5 = Strongly Agree. An additional 0 = No Answer option was available as well.

3.2.3. Personality

In order to determine the personality of participants, we employed the Big Five Inventory (BFI) instrument. The BFI uses bipolar scales to measure the core features of the five personality dimensions extraversion, agreeableness, conscientiousness, neuroticism, and openness. Based on the original 44-item BFI scales proposed by John [61], Rammstedt and John [62] created a 10-item ultrashort version of the BFI, which measures psychometric characteristics with only two items per scale. Although it comprises less than 25% of the 44-item scale, it was shown to predict close to 70% of the variance of the complete scale. Yet, since noticeable losses were found in the agreeableness construct, it was suggested to include an additional third item for this one dimension. Consequently, we employed this ultrashort version of the BFI scale, including the third item for the agreeableness construct.

3.2.4. Affinity for Technology Interaction

We used the ATI scale by Franke et al. [16], as it has been shown to reliably measure a person’s tendency to interact with technology. It consists of nine question items, each of which requires answering on a 6-point Likert scale ranging from 1 = Completely Disagree to 6 = Completely Agree. The scale does not allow for a central value, thus circumventing neutral responses. An additional 0 = No Answer option, however, was available in our survey.

3.2.5. Demographics

Finally, the survey concluded with a set of questions collecting demographic information on gender, age, nationality, education, and occupation.

3.3. Pretest and Sampling

The survey was distributed to seven individuals for testing. Five of them evaluated the German version of the questions (all native German speakers) and two focused on the English version. Based on their feedback, the following amendments to the questions were made. The German term for IVA was changed from “Virtueller Assistent” to “Digitaler Assistent”, whereas in the English version, we kept the term “Virtual Assistant”. Questions in the survey section on trust were slightly modified, and an additional information field was added so as to foster comprehension. A subsequent test with four individuals did not produce any new recommendations, confirming the survey’s appropriateness. Since the goal of the study was to evaluate general trust towards IVAs, we targeted both users and nonusers of these systems. Consequently, we used convenience sampling and distributed the survey online via social media (i.e., Facebook, LinkedIn, and Xing), as well as via messenger networks and mailing lists.

4. Results

The survey was available for eight weeks, during which the respective link received 416 views. From these, a total of 367 valid responses were collected (i.e., 262 responses to the German and 105 responses to the English questionnaire). The respective data were categorized and coded. Reverse-scored Likert items were inverted before IBM SPSS Statistics 26 was used for further analysis (note that all anonymized data and a description of the column labels and the variables’ numeric range can be found at https://doi.org/10.5281/zenodo.7764249 (accessed on 22 April 2023)).
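To illustrate this recoding step, the following Python sketch shows one way reverse-scored items could be inverted and the 0 = No Answer option treated as missing; note that the actual analysis was carried out in IBM SPSS, and column names such as trust_rel_2r or bfi_01r are purely hypothetical.

```python
import numpy as np
import pandas as pd

def recode_likert(df: pd.DataFrame, reversed_cols: list[str], scale_max: int = 5) -> pd.DataFrame:
    """Treat 0 ('No Answer') as missing and invert reverse-scored Likert items.

    For a 5-point item, a response x becomes (scale_max + 1) - x,
    so 1 <-> 5, 2 <-> 4, and 3 stays 3.
    """
    out = df.copy()
    likert_cols = [c for c in out.columns if c.startswith(("trust_", "bfi_", "ati_"))]
    out[likert_cols] = out[likert_cols].replace(0, np.nan)   # 0 = "No Answer" -> missing
    for col in reversed_cols:
        out[col] = (scale_max + 1) - out[col]                # invert reverse-keyed items
    return out

# Hypothetical usage:
# survey = pd.read_csv("responses.csv")
# survey = recode_likert(survey, reversed_cols=["trust_rel_2r", "bfi_01r"])
```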
The collected sample (n = 367) shows an almost equal gender distribution (i.e., 180 male, 183 female, and 4 other). Participants were on average 29.16 years old (SD = 9.00, Median = 26) and came predominantly from Europe (90.5%), with 217 of them (i.e., 59.1%) being Austrian. With regards to IVA use, 138 (37.6%) of the survey participants reported previous IVA experiences. Here, we also see that more of the participants who completed the English survey questions were IVA users (53.3%) compared with those who completed the German survey questions (31.3%).

4.1. Trust

Cronbach’s α values ≥ 0.70 attest that all trust-related survey constructs have acceptable reliability (cf. Table 1).
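As a minimal, self-contained sketch (not the SPSS routine used for the reported values), Cronbach’s α for one scale can be computed from its item columns as follows; the column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of a single scale.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    items = items.dropna()                      # listwise deletion of incomplete cases
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical usage for the four 'faith in general technology' items:
# alpha = cronbach_alpha(survey[["faith_1", "faith_2", "faith_3", "faith_4"]])
```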
In order to yield single values for the different trust constructs (i.e., propensity to trust, institution-based trust, and trusting beliefs in a specific technology), we computed mean scores over their subconstructs. Each subconstruct was equally weighted and reflected in the overall construct mean. Consequently, a mean score of 1 indicates an extremely negative response, whereas a mean score of 5 depicts a highly positive one. To generate the mean scores for propensity to trust, we considered the reported values for trusting stance and faith in general technology. Due to significant differences between respondents of the German survey and those of the English survey, we report separate means for both languages (cf. Table 2).
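A minimal sketch of this two-step aggregation is given below; the item column names and groupings are illustrative assumptions, not the original SPSS syntax.

```python
import pandas as pd

# Hypothetical item columns for the two propensity-to-trust subconstructs.
survey = pd.DataFrame({
    "stance_1": [4, 5], "stance_2": [3, 4], "stance_3": [4, 4],
    "faith_1": [2, 5], "faith_2": [3, 4], "faith_3": [3, 5], "faith_4": [2, 4],
})

subconstructs = {
    "trusting_stance": ["stance_1", "stance_2", "stance_3"],
    "faith_in_general_technology": ["faith_1", "faith_2", "faith_3", "faith_4"],
}

# Step 1: mean of the items within each subconstruct (1 = very negative, 5 = very positive).
for name, cols in subconstructs.items():
    survey[name] = survey[cols].mean(axis=1, skipna=True)

# Step 2: equally weighted mean of the subconstruct scores yields the construct score.
survey["propensity_to_trust"] = survey[list(subconstructs)].mean(axis=1)
```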
Furthermore, the data show a significant difference in trusting stance between users and nonusers in both the German (t(259) = 3.248, p = 0.001) and the English sample (t(103) = 5.011, p = 0.000). Moreover, we found a significant difference concerning faith in general technology between IVA users and nonusers (German: t(258) = 2.289, p = 0.023; English: t(103) = 2.843, p = 0.005). The respective data are depicted in Table 3.
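Such user/nonuser comparisons are independent-samples t-tests; a minimal sketch (assuming hypothetical iva_user and language columns; the published figures come from SPSS) could look as follows.

```python
import pandas as pd
from scipy import stats

def compare_users_nonusers(df: pd.DataFrame, score_col: str) -> tuple[float, float]:
    """Independent-samples t-test of a trust score between IVA users and nonusers."""
    users = df.loc[df["iva_user"] == 1, score_col].dropna()
    nonusers = df.loc[df["iva_user"] == 0, score_col].dropna()
    t, p = stats.ttest_ind(users, nonusers)   # classic t-test assuming equal variances
    return t, p

# Hypothetical usage, run separately per survey language:
# t, p = compare_users_nonusers(survey[survey["language"] == "de"], "trusting_stance")
```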
Next, to generate a single value for institution-based trust, we computed the mean scores over structural assurance and situational normality. Again, significant differences between respondents of the German survey and those of the English survey were found, for which we report separate means for both languages (cf. Table 4).
Finally, in order to yield a single value for trusting beliefs in a specific technology, we computed the means of functionality, reliability, and helpfulness. Here, the data do not point to a difference between respondents of the German and those of the English survey, so we report the overall means (cf. Table 5).
Aiming to provide an overall rating for participants’ reported trusting beliefs in IVA technology, we added up the values of all 11 survey items for trusting beliefs in a specific technology. This overall IVA trust score may consequently lie between 0 (i.e., each of the 11 items responded to with “No Answer”) and 55 (i.e., each of the 11 items responded to with “Strongly Agree”). As can be seen in Figure 8, 50% of participants report a respective IVA trust score that is higher than 35.5.
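As a sketch of this scoring step (assuming hypothetical column names for the 3 functionality, 4 reliability, and 4 helpfulness items), the 0–55 score is simply the per-respondent sum, with “No Answer” counted as 0:

```python
# Hypothetical item columns for the 11 'trusting beliefs in a specific technology' items,
# each coded 0 = No Answer, 1 = Strongly Disagree ... 5 = Strongly Agree.
belief_items = (
    [f"func_{i}" for i in range(1, 4)]    # 3 functionality items
    + [f"rel_{i}" for i in range(1, 5)]   # 4 reliability items
    + [f"help_{i}" for i in range(1, 5)]  # 4 helpfulness items
)

# Assuming 'survey' is a pandas DataFrame containing these columns:
# survey["iva_trust_score"] = survey[belief_items].fillna(0).sum(axis=1)  # range 0-55
# survey["iva_trust_score"].median()                                      # reported as 35.5
```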

4.2. Affinity for Technology Interaction

Looking at the ATI, the data show a significant difference between the German (Mean = 3.95, SD = 1.04) and the English (Mean = 3.68, SD = 0.85) sample: t(364) = 2.578, p = 0.011. Furthermore, as illustrated in Table 6, a significant difference in ATI scores was found between IVA users and nonusers, in both the German (t(258) = 2.289, p = 0.023) and the English sample (t(103) = 2.843, p = 0.005). Finally, in the German sample, the data suggest a gender difference (Male: Mean = 4.44, SD = 0.88; Female: Mean = 3.52, SD = 1.00; t(255) = 7.854, p = 0.000), while in the English sample no such difference was found: t(103) = 0.496, p = 0.621.

5. Hypotheses Testing

In the following, we focus on evaluating the hypotheses defined in Section 3.1. We perform Pearson or Spearman correlation analyses, depending on whether the data are normally distributed or not. Furthermore, if significant differences were found between responses to the German and the English questionnaire, correlation tests were performed for both languages independently. The significance level was set to p = 0.05. Effect sizes were evaluated based on Spearman’s or Bravais–Pearson’s correlation coefficient r, where r ≥ 0.10 signifies a small effect size, r ≥ 0.30 a medium effect size, and r ≥ 0.50 a large effect size. A summary of which hypotheses are supported by the collected data and which are not is found in Table 7.
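The decision rule between Pearson and Spearman described above can be sketched as follows; this is our illustration under the stated assumptions (Shapiro–Wilk as the normality check), not the SPSS procedure actually used.

```python
import numpy as np
from scipy import stats

def correlate(x, y, alpha: float = 0.05):
    """Pearson correlation if both variables pass a Shapiro-Wilk normality check, else Spearman."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    _, p_x = stats.shapiro(x)
    _, p_y = stats.shapiro(y)
    normal = (p_x > alpha) and (p_y > alpha)
    r, p = stats.pearsonr(x, y) if normal else stats.spearmanr(x, y)
    return r, p, "pearson" if normal else "spearman"

# Effect sizes are then read off r: about 0.10 = small, 0.30 = medium, 0.50 = large.
```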

5.1. Category A—Personality and Propensity to Trust

The data from the German survey support a small but significant positive correlation between a person’s agreeableness and his/her trusting stance (r = 0.138, p = 0.025). No such correlation was found in the data from the English survey (r = 0.084, p = 0.395). Moreover, with respect to faith in general technology, the data point to a small yet significant positive correlation with the German respondents (r = 0.228, p = 0.000). Again, the data from the English respondents did not show such a correlation (r = 0.176, p = 0.072). Focusing on potential relationships with neuroticism, the data show that the characteristic correlates with neither trusting stance (German: r = 0.057, p = 0.359; English: r = 0.122, p = 0.215) nor faith in general technology (German: r = 0.072, p = 0.247; English: r = 0.137, p = 0.163). Finally, a significant negative correlation with a small effect size was found between openness and faith in general technology in the data from the German sample (r = −0.165, p = 0.008). Again, the data from the English sample do not point to such a relationship (r = 0.052, p = 0.596).
As for the proposed hypotheses of category A (cf. Section 3.1.1), we may thus conclude that H-A01 and H-A02 are supported by the data from the German sample, yet need to be rejected for the English sample. Hypotheses H-A03 and H-A04, on the other hand, need to be rejected for both the German and the English sample.

5.2. Category B—Personality and Trusting Beliefs in a Specific Technology

Contrary to our expectations, the data show a small yet significant negative correlation between extraversion and trusting beliefs in a specific technology in the English sample (r = −0.264, p = 0.049). The data from the German sample, however, do not support this relationship (r = 0.119, p = 0.286). Similarly, the English survey data uphold a negative correlation between extraversion and functionality (r = −0.296, p = 0.027). Again, no correlation was found in the data from the German survey (r = 0.021, p = 0.852). Moreover, in the English data, extraversion correlates with reliability (r = 0.273, p = 0.042), whereas in the German data, no such connection exists (r = 0.171, p = 0.128). Finally, neither the English nor the German data point to a correlation between extraversion and helpfulness (English: r = 0.101, p = 0.401; German: r = 0.000, p = 0.998). Similarly, with agreeableness, we see no connections with any of the other trust constructs, neither in the German nor in the English data (i.e., trusting beliefs in a specific technology: r = 0.066, p = 0.443; functionality: r = 0.111, p = 0.195; reliability: r = 0.021, p = 0.810; helpfulness: r = 0.026, p = 0.776). The same holds for conscientiousness (trusting beliefs in a specific technology: German: r = 0.076, p = 0.497, English: r = 0.050, p = 0.719; functionality: German: r = 0.090, p = 0.420, English: r = 0.035, p = 0.801; reliability: German: r = 0.018, p = 0.875, English: r = 0.009, p = 0.950; helpfulness: German: r = 0.146, p = 0.226, English: r = 0.114, p = 0.434) and for openness (trusting beliefs in a specific technology: r = 0.079, p = 0.358; functionality: r = 0.070, p = 0.416; reliability: r = 0.098, p = 0.257; helpfulness: r = 0.012, p = 0.892).
Concerning the proposed hypotheses of Category B (cf. Section 3.1.2), we may thus conclude that none of the assumptions are supported by the data. Quite on the contrary, we found a small but significant negative correlation between extraversion and trusting beliefs in a specific technology in the English sample.

5.3. Category C—Propensity to Trust and Trusting Beliefs in a Specific Technology

Here, the data indicate a positive correlation between trusting stance and trusting beliefs in a specific technology. The effect size is stronger in the German (r = 0.312, p = 0.005) than in the English sample data (r = 0.280, p = 0.036). Moreover, a correlation was found between trusting stance and functionality. This time, the effect size is stronger in the English (r = 0.308, p = 0.021) than in the German (r = 0.220, p = 0.048) sample. A correlation between trusting stance and reliability, however, is not supported by the data, neither in the German (r = 0.199, p = 0.075) nor the English sample (r = 0.203, p = 0.134). Looking at helpfulness, a significant positive correlation with trusting stance was found in the German data (r = 0.406, p = 0.000) but not in the English ones (r = 0.182, p = 0.206). A significant positive correlation was further found between faith in general technology and trusting beliefs, with the effect size being stronger in the English (r = 0.397, p = 0.002) than in the German sample (r = 0.292, p = 0.009). Similarly, a significant positive correlation was found between faith in general technology and functionality. Again, the effect size seems to be stronger in the English (r = 0.359, p = 0.007) than in the German (r = 0.245, p = 0.028) sample. Another correlation is supported between faith in general technology and reliability. However, the connection is only significant in the data from the English survey (r = 0.363, p = 0.006). Conversely, the data point to a significant correlation between faith in general technology and helpfulness in the German sample (r = 0.311, p = 0.009), whereas the English sample does not support this assumption (r = 0.238, p = 0.097).
In summary, we may thus conclude that both hypotheses of category C (cf. Section 3.1.3), i.e., H-C01 and H-C02, are (at least partly) supported by our data.

5.4. Category D—Affinity for Technology Interaction and Trusting Beliefs in a Specific Technology

Finally, investigating the hypotheses of Category D (cf. Section 3.1.4), the data do not support any correlations between ATI and trusting beliefs in a specific technology (German: r = 0.038, p = 0.736; English: r = 0.030, p = 0.828), ATI and functionality (German: r = 0.073, p = 0.518; English: r = 0.003, p = 0.982), or ATI and reliability (German: r = 0.039, p = 0.732; English: r = 0.111, p = 0.415). Yet, the German data point to a significant medium correlation between ATI and helpfulness (r = 0.386, p = 0.006), although the English sample does not confirm this connection (r = 0.081, p = 0.504).

5.5. Additional Investigations

Although our data do not point to a general negative impact of increasing age on trust in technology (r = 0.004, p = 0.960), as was found by previous work [63], we tested for significant differences concerning trusting beliefs in a specific technology between Generation Z (Age ≤ 25) and other age groups. Our assumption was that people who were exposed to IVAs at a young age would show higher trust levels towards the technology. However, the data do not support this assumption (t(136) = 1.082, p = 0.281). Additionally, no gender difference was found here (t(135) = 0.177, p = 0.860).

6. Discussion

Our analysis shows that on a scale from 0 to 100%, users trust IVAs 51.59% on average. Consequently, we see significant improvement potential to enhance users’ trusting beliefs in this technology. Besides technical shortcomings, it seems to be particularly the growing prevalence of privacy concerns that triggers mistrust [64]. Addressing these concerns may be a relevant first step in reducing mistrust. For example, a recent study by Brunotte et al. [57] indicates that a clear explanation as to why certain data are needed and how they are used can significantly alleviate people’s privacy concerns and thus increase their trust in a software system. Furthermore, another study by Jain et al. [65] found that higher brand credibility reduces users’ perception of privacy risks connected to IVAs. In other words, if IVA producers such as Amazon, Microsoft, Apple, Samsung, or Google manage to increase their credibility, privacy concerns with respect to their IVA products may decline. On the other hand, if the perception remains that these companies increasingly “misuse” collected data for consumer profiling purposes, privacy concerns are likely to stay where they are, or even increase. Neither explained data use nor brand credibility were included in our study design, yet we believe they should be considered as possible pathways towards tackling privacy concerns in future IVA studies.
Our study, on the other hand, focused particularly on the relationship between people’s personality and their trusting beliefs in IVA technology. To this end, four of our eleven proposed hypotheses were supported by the collected data (cf. Table 7). Two of them are only partially supported, as only one of the two survey languages (i.e., the German one) yielded significant results. Interestingly, however, we found evidence in the data which clearly contradicts some of our hypotheses. For example, the English sample shows a negative correlation between the Big Five personality dimension extraversion and a respondent’s trusting beliefs in IVAs. This is surprising, as similar studies on human–human interaction have shown that extraverted individuals seem to exhibit higher levels of trust towards their counterparts (e.g., [66,67]). One explanation for our results may be found in the assumption that current IVAs still lack the capability to meet people’s social needs. That is, extraverts tend to be more talkative and sociable and thus expect IVAs to live up to what is often promised in their respective commercials [61]. Consequently, if these expectations are not satisfied, trusting beliefs in the technology may be hampered. This also aligns with the findings of Elson et al. [68], who showed that wrong or unsatisfying agent responses significantly decrease extraverts’ levels of trust in said technology. It seems interesting though that none of the other personality characteristics, i.e., agreeableness, openness, and conscientiousness, show a significant connection with trusting beliefs in IVAs. Consequently, one may argue that the error proneness, which is still inherent to many IVAs, annihilates the trust advantage the technology may otherwise have with highly extraverted individuals.
Our findings further point to a relationship between certain personality traits and propensity to trust, which can be seen in a positive correlation between agreeableness and both trusting stance and faith in general technology. This indicates that the higher a person scores in the agreeableness dimension, the more he/she tends to be willing to depend on technology. Although this effect was only observable in the German sample, our findings confirm previous work suggesting that more agreeable people are also more likely to trust [61]. While generally this may be considered advantageous for the adoption and continuous use of IVA technology, it also bears a certain risk, as respective people could more easily fall victim to wrongful advice—an aspect which has recently become a pressing issue with the increasing use of generative conversational AI tools such as ChatGPT [69].
In accordance with Gessl et al. [17], we also observed a significant negative correlation between openness and trust, i.e., faith in general technology. Additionally, consistent with McKnight et al. [10], we found that propensity to trust positively correlates with trusting beliefs in a specific technology, in our case with trusting beliefs in IVAs. In other words, rather cautious people also put little faith in technology, which is why they remain a difficult user group for Alexa and Co. On the other hand, people who exhibit a certain general level of trust are also more inclined to trust IVAs and consequently are more likely to adopt them.
Furthermore, our study data point to a positive correlation between trusting stance and trusting beliefs in IVAs, as well as faith in general technology and trusting beliefs in IVAs. Thus, on the one hand, one may argue that when an individual shows a higher trusting stance towards technology, he/she also has stronger trusting beliefs in IVAs. On the other hand, individuals who show higher confidence in technologies in general and who believe that technologies are designed to be effective also have higher trusting beliefs in IVAs.
Interestingly, our data do not show a connection between people’s affinity for technology interaction and their trusting beliefs in IVAs. Consequently, we may argue that an individual’s tendency to actively engage in intensive IVA interaction is not triggered by his/her trusting beliefs in this technology. Rather, it is influenced by a deeper assessment of the utility the technology may potentially offer [70]. This is consistent with Gulati et al. [71], who also found that there is no relationship between people’s trust and their motivation or willingness to interact with Siri, although IVA users show a significantly higher ATI score.
Lastly, it is worth pointing out that we found IVA users to have a higher propensity to trust than nonusers.

7. Conclusions, Limitations, and Future Outlook

We presented results from a study investigating trust in IVA technology. Since IVAs such as Siri, Alexa, Cortana, Bixby, and Google Assistant all process personal data and act autonomously, trust counts as an important prerequisite for widespread technology adoption [26]. To this end, our work focused particularly on the impact personality has on people’s trust perception and found that none of the Big Five dimensions (i.e., openness, conscientiousness, extraversion, agreeableness, and neuroticism) seem to have a respective effect, and neither does a person’s affinity for technology interaction. However, a significant positive correlation was observed between the personality trait propensity to trust in general technology and overall trusting beliefs in IVA technology.
There are a number of limitations to our study which may limit the generalizability of our findings. First, we used convenience sampling and focused on the online distribution of questionnaires, for which certain population groups are not represented or are underrepresented in the sample, e.g., people without Internet access or older generations. Moreover, the standardized questionnaire instrument developed by McKnight et al. [10] was translated into German, so that discrepancies related to ambiguous words and adapted sentence structures cannot be ruled out. Significant differences between the German and the English sample were discovered in some key variables, and it is unclear where these differences stem from and whether they are rooted in comprehension problems or in culture. As for the measured personality traits, we used the BFI-10 scale. Even though it counts as a standardized and rather reliable instrument to investigate personality traits, it cannot substitute full-length personality tests. Finally, one may argue that our findings represent a current snapshot and are strongly influenced by the growing number of services and capabilities, i.e., “skills”, IVAs are increasingly equipped with. Thus, additional long-term studies are needed to investigate if trust levels fluctuate and which factors may potentially impact this development. Furthermore, in accordance with Mayer et al. [72], we suggest shifting the key question from “Do you trust IVAs?” to “Do you trust IVAs to …” so as to gain a deeper understanding of IVA trust. Finally, we recommend extending studies to include other trust constructs which may impact IVA adoption.

Author Contributions

The article was a collaborative effort by all three co-authors. Conceptualization, L.S. and S.S.; methodology, L.S.; validation, S.S. and A.G.; formal analysis, L.S.; investigation, L.S.; data curation, L.S.; writing—original draft preparation, L.S. and S.S.; writing—review and editing, S.S. and A.G.; visualization, S.S.; supervision, S.S.; project administration, S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of MCI—The Entrepreneurial School (https://www.mci.edu (accessed on 22 April 2023)).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are openly available on Zenodo at https://doi.org/10.5281/zenodo.7764249 (accessed on 22 April 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IVA	Intelligent Virtual Assistant
NUI	Natural User Interfaces
GDPR	General Data Protection Regulation
DCT	Data Collection Transparency
ATI	Affinity for Technology Interaction
BFI	Big Five Inventory

References

  1. Statista, G. Anzahl der Nutzer Virtueller Digitaler Assistenten Weltweit in den Jahren von 2015 bis 2021. Available online: https://de.statista.com/statistik/daten/studie/620321/umfrage/nutzung-von-virtuellen-digitalen-assistenten-weltweit/ (accessed on 30 September 2022).
  2. Attig, C.; Wessel, D.; Franke, T. Assessing personality differences in human-technology interaction: An overview of key self-report scales to predict successful interaction. In Proceedings of the International Conference on Human-Computer Interaction, Vancouver, BC, Canada, 9–14 July 2017; pp. 19–29. [Google Scholar]
  3. Gray, S. Always on: Privacy implications of microphone-enabled devices. In Proceedings of the Future of Privacy Forum, Washington, DC, USA, 25 April 2016; pp. 1–10. [Google Scholar]
  4. Campagna, G.; Ramesh, R.; Xu, S.; Fischer, M.; Lam, M.S. Almond: The architecture of an open, crowdsourced, privacy-preserving, programmable virtual assistant. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 April 2017; pp. 341–350. [Google Scholar]
  5. Glass, A.; McGuinness, D.L.; Wolverton, M. Toward establishing trust in adaptive agents. In Proceedings of the 13th International Conference on Intelligent User Interfaces, Gran Canaria, Spain, 13–16 January 2008; pp. 227–236. [Google Scholar]
  6. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  7. Salanitri, D.; Lawson, G.; Waterfield, B. The relationship between presence and trust in virtual reality. In Proceedings of the European Conference on Cognitive Ergonomics, Nottingham, UK, 5–8 September 2016; pp. 1–4. [Google Scholar]
  8. Ding, S.; Yang, S.; Zhang, Y.; Liang, C.; Xia, C. Combining QoS prediction and customer satisfaction estimation to solve cloud service trustworthiness evaluation problems. Knowl.-Based Syst. 2014, 56, 216–225. [Google Scholar] [CrossRef]
  9. Lee, J.; Moray, N. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 1992, 35, 1243–1270. [Google Scholar] [CrossRef] [PubMed]
  10. McKnight, D.H.; Carter, M.; Thatcher, J.B.; Clay, P.F. Trust in a specific technology: An investigation of its components and measures. ACM Trans. Manag. Inf. Syst. (TMIS) 2011, 2, 1–25. [Google Scholar] [CrossRef]
  11. Detweiler, C.; Broekens, J. Trust in online technology: Towards practical guidelines based on experimentally verified theory. In Proceedings of the International Conference on Human-Computer Interaction, San Diego, CA, USA, 19–24 July 2009; pp. 605–614. [Google Scholar]
  12. Hengstler, M.; Enkel, E.; Duelli, S. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technol. Forecast. Soc. Chang. 2016, 105, 105–120. [Google Scholar] [CrossRef]
  13. Lee, M.K.; Turban, E. A trust model for consumer internet shopping. Int. J. Electron. Commer. 2001, 6, 75–91. [Google Scholar] [CrossRef]
  14. Bente, G.; Dratsch, T.; Kaspar, K.; Häßler, T.; Bungard, O.; Al-Issa, A. Cultures of trust: Effects of avatar faces and reputation scores on German and Arab players in an online trust-game. PLoS ONE 2014, 9, e98297. [Google Scholar] [CrossRef]
  15. Behrenbruch, K.; Söllner, M.; Leimeister, J.M.; Schmidt, L. Understanding diversity—The impact of personality on technology acceptance. In Proceedings of the IFIP Conference on Human-Computer Interaction, Cape Town, South Africa, 2–6 September 2013; pp. 306–313. [Google Scholar]
  16. Franke, T.; Attig, C.; Wessel, D. A personal resource for technology interaction: Development and validation of the affinity for technology interaction (ATI) scale. Int. J. Hum.-Comput. Interact. 2019, 35, 456–467. [Google Scholar] [CrossRef]
  17. Gessl, A.S.; Schlögl, S.; Mevenkamp, N. On the perceptions and acceptance of artificially intelligent robotics and the psychology of the future elderly. Behav. Inf. Technol. 2019, 38, 1068–1087. [Google Scholar] [CrossRef]
  18. Gartner, Inc. Virtual Assistant (VA). Gartner IT Glossary. Available online: https://www.gartner.com/it-glossary/virtual-assistant-va/ (accessed on 30 September 2022).
  19. Hoy, M.B. Alexa, Siri, Cortana, and more: An introduction to voice assistants. Med. Ref. Serv. Q. 2018, 37, 81–88. [Google Scholar] [CrossRef]
  20. Chung, H.; Iorga, M.; Voas, J.; Lee, S. Alexa, can I trust you? Computer 2017, 50, 100–104. [Google Scholar] [CrossRef] [PubMed]
  21. Cho, E. Hey Google, can I ask you something in private? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–9. [Google Scholar]
  22. López, G.; Quesada, L.; Guerrero, L.A. Alexa vs. Siri vs. Cortana vs. Google Assistant: A comparison of speech-based natural user interfaces. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Los Angeles, CA, USA, 17–21 July 2017; pp. 241–250. [Google Scholar]
  23. Lopatovska, I.; Rink, K.; Knight, I.; Raines, K.; Cosenza, K.; Williams, H.; Sorsche, P.; Hirsch, D.; Li, Q.; Martinez, A. Talk to me: Exploring user interactions with the Amazon Alexa. J. Librariansh. Inf. Sci. 2019, 51, 984–997. [Google Scholar] [CrossRef]
  24. Aylett, M.P.; Cowan, B.R.; Clark, L. Siri, echo and performance: You have to suffer darling. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–10. [Google Scholar]
  25. Čaić, M.; Odekerken-Schröder, G.; Mahr, D. Service robots: Value co-creation and co-destruction in elderly care networks. J. Serv. Manag. 2018, 29, 178–205. [Google Scholar] [CrossRef]
  26. Looije, R.; Neerincx, M.A.; Cnossen, F. Persuasive robotic assistant for health self-management of older adults: Design and evaluation of social behaviors. Int. J. Hum.-Comput. Stud. 2010, 68, 386–397. [Google Scholar] [CrossRef]
  27. Bell, S.; Wood, C.; Sarkar, A. Perceptions of chatbots in therapy. In Proceedings of the Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–6. [Google Scholar]
  28. Dale, R. The return of the chatbots. Nat. Lang. Eng. 2016, 22, 811–817. [Google Scholar] [CrossRef]
  29. Mori, M.; MacDorman, K.F.; Kageki, N. The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 2012, 19, 98–100. [Google Scholar] [CrossRef]
  30. Heater, C. Being there: The subjective experience of presence. Presence Teleoperators Virtual Environ. 1992, 1, 262–271. [Google Scholar] [CrossRef]
  31. Biocca, F. The cyborg’s dilemma: Progressive embodiment in virtual environments. J. Comput.-Mediat. Commun. 1997, 3, JCMC324. [Google Scholar] [CrossRef]
  32. Lee, K.M.; Nass, C. Designing social presence of social actors in human computer interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Fort Lauderdale, FL, USA, 5–10 April 2003; pp. 289–296. [Google Scholar]
  33. Lankton, N.K.; McKnight, D.H.; Tripp, J. Technology, humanness, and trust: Rethinking trust in technology. J. Assoc. Inf. Syst. 2015, 16, 1. [Google Scholar] [CrossRef]
  34. Eisenberg, N.; Strayer, J. Empathy and Its Development; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  35. Decety, J.; Jackson, P.L. The functional architecture of human empathy. Behav. Cogn. Neurosci. Rev. 2004, 3, 71–100. [Google Scholar] [CrossRef]
  36. Rodrigues, S.H.; Mascarenhas, S.; Dias, J.; Paiva, A. A process model of empathy for virtual agents. Interact. Comput. 2015, 27, 371–391. [Google Scholar] [CrossRef]
  37. Ochs, M.; Pelachaud, C.; Sadek, D. An empathic virtual dialog agent to improve human-machine interaction. In Proceedings of the 7th International joint Conference on Autonomous Agents and Multiagent Systems—Volume 1, Estoril, Portugal, 12–16 May 2008; pp. 89–96. [Google Scholar]
  38. Ghafurian, M.; Budnarain, N.; Hoey, J. Role of emotions in perception of humanness of virtual agents. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, Montreal, QC, Canada, 13–17 May 2019; pp. 1979–1981. [Google Scholar]
  39. Lisetti, C.; Amini, R.; Yasavur, U.; Rishe, N. I can help you change! an empathic virtual agent delivers behavior change health interventions. ACM Trans. Manag. Inf. Syst. (TMIS) 2013, 4, 1–28. [Google Scholar] [CrossRef]
  40. Paiva, A.; Leite, I.; Boukricha, H.; Wachsmuth, I. Empathy in virtual agents and robots: A survey. ACM Trans. Interact. Intell. Syst. (TiiS) 2017, 7, 1–40. [Google Scholar] [CrossRef]
  41. Alepis, E.; Patsakis, C. Monkey says, monkey does: Security and privacy on voice assistants. IEEE Access 2017, 5, 17841–17851. [Google Scholar] [CrossRef]
  42. Graeff, T.R.; Harmon, S. Collecting and using personal data: Consumers’ awareness and concerns. J. Consum. Mark. 2002, 19, 302–318. [Google Scholar] [CrossRef]
  43. Spiekermann, S.; Cranor, L.F. Engineering privacy. IEEE Trans. Softw. Eng. 2008, 35, 67–82. [Google Scholar] [CrossRef]
  44. Srinivas, J.; Reddy, K.V.S.; Qyser, A.M. Cloud computing basics. Int. J. Adv. Res. Comput. Commun. Eng. 2012, 1, 343–347. [Google Scholar]
  45. Wolford, B. What Is GDPR, the EU’s New Data Protection Law? Available online: https://gdpr.eu/what-is-gdpr/ (accessed on 30 September 2022).
  46. Peslak, A.R. Internet privacy policies of the largest international companies. J. Electron. Commer. Organ. (JECO) 2006, 4, 46–62. [Google Scholar] [CrossRef]
  47. Flikkema, P.G.; Cambou, B. When things are sensors for cloud AI: Protecting privacy through data collection transparency in the age of digital assistants. In Proceedings of the 2017 Global Internet of Things Summit (GIoTS), Linz, Austria, 22–25 October 2017; pp. 1–4. [Google Scholar]
  48. Luhmann, N. Vertrauen—Ein Mechanismus der Reduktion sozialer Komplexität, 5th ed.; UTB: Stuttgart, Germany, 2014. [Google Scholar]
  49. Gulati, S.; Sousa, S.; Lamas, D. Modelling trust: An empirical assessment. In Proceedings of the IFIP Conference on Human-Computer Interaction, Mumbai, India, 25–29 September 2017; pp. 40–61. [Google Scholar]
  50. Clark, L.; Pantidi, N.; Cooney, O.; Doyle, P.; Garaialde, D.; Edwards, J.; Spillane, B.; Gilmartin, E.; Murad, C.; Munteanu, C.; et al. What makes a good conversation? Challenges in designing truly conversational agents. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–12. [Google Scholar]
  51. Neururer, M.; Schlögl, S.; Brinkschulte, L.; Groth, A. Perceptions on authenticity in chat bots. Multimodal Technol. Interact. 2018, 2, 60. [Google Scholar] [CrossRef]
  52. European Commission; Directorate-General for Communications Networks, Content and Technology (CNECT). Ethics Guidelines for Trustworthy AI; European Commission: Brussels, Belgium, 2019. [Google Scholar] [CrossRef]
  53. Vimalkumar, M.; Sharma, S.K.; Singh, J.B.; Dwivedi, Y.K. ‘Okay google, what about my privacy?’: User’s privacy perceptions and acceptance of voice based digital assistants. Comput. Hum. Behav. 2021, 120, 106763. [Google Scholar] [CrossRef]
  54. Burbach, L.; Halbach, P.; Plettenberg, N.; Nakayama, J.; Ziefle, M.; Valdez, A.C. “Hey, Siri”, “Ok, Google”, “Alexa”. Acceptance-Relevant Factors of Virtual Voice-Assistants. In Proceedings of the 2019 IEEE International Professional Communication Conference (ProComm), Aachen, Germany, 23–26 July 2019; pp. 101–111. [Google Scholar]
  55. Liao, Y.; Vitak, J.; Kumar, P.; Zimmer, M.; Kritikos, K. Understanding the role of privacy and trust in intelligent personal assistant adoption. In Proceedings of the Information in Contemporary Society: 14th International Conference, iConference 2019, Washington, DC, USA, 31 March–3 April 2019; pp. 102–113. [Google Scholar]
  56. Jo, H. Impact of Information Security on Continuance Intention of Artificial Intelligence Assistant. Procedia Comput. Sci. 2022, 204, 768–774. [Google Scholar] [CrossRef]
  57. Brunotte, W.; Specht, A.; Chazette, L.; Schneider, K. Privacy explanations—A means to end-user trust. J. Syst. Softw. 2023, 195, 111545. [Google Scholar] [CrossRef]
  58. Tupes, E.C.; Christal, R.E. Recurrent personality factors based on trait ratings. J. Personal. 1992, 60, 225–251. [Google Scholar] [CrossRef] [PubMed]
  59. John, O.P.; Srivastava, S. The Big-Five trait taxonomy: History, measurement, and theoretical perspectives. In Handbook of Personality: Theory and Research; Pervin, L.A., John, O.P., Eds.; Guilford Press: New York, NY, USA, 1999; pp. 102–138. [Google Scholar]
  60. Komiak, S.Y.X.; Benbasat, I. The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents. MIS Q. 2006, 30, 941–960. [Google Scholar] [CrossRef]
  61. John, O.P.; Donahue, E.M.; Kentle, R.L. Big five inventory. J. Personal. Soc. Psychol. 1991. [Google Scholar] [CrossRef]
  62. Rammstedt, B.; John, O.P. Measuring personality in one minute or less: A 10-item short version of the Big Five Inventory in English and German. J. Res. Personal. 2007, 41, 203–212. [Google Scholar] [CrossRef]
  63. Evans, A.M.; Athenstaedt, U.; Krueger, J.I. The development of trust and altruism during childhood. J. Econ. Psychol. 2013, 36, 82–95. [Google Scholar] [CrossRef]
  64. Cowan, B.R.; Pantidi, N.; Coyle, D.; Morrissey, K.; Clarke, P.; Al-Shehri, S.; Earley, D.; Bandeira, N. “What can i help you with?” infrequent users’ experiences of intelligent personal assistants. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, Vienna, Austria, 4–7 September 2017; pp. 1–12. [Google Scholar]
  65. Jain, S.; Basu, S.; Dwivedi, Y.K.; Kaur, S. Interactive voice assistants—Does brand credibility assuage privacy risks? J. Bus. Res. 2022, 139, 701–717. [Google Scholar] [CrossRef]
  66. Furumo, K.; de Pillis, E.; Green, D. Personality influences trust differently in virtual and face-to-face teams. Int. J. Hum. Resour. Dev. Manag. 2009, 9, 36–58. [Google Scholar] [CrossRef]
  67. Tov, W.; Nai, Z.L.; Lee, H.W. Extraversion and Agreeableness: Divergent Routes to Daily Satisfaction with Social Relationships. J. Personal. 2016, 84, 121–134. [Google Scholar] [CrossRef]
  68. Elson, J.; Derrick, D.; Ligon, G. Examining trust and reliance in collaborations between humans and automated agents. In Proceedings of the HICSS Hawaii International Conference on System Sciences affiliated Conference on Processes and Technologies for Small and Large Team Collaboration, Waikoloa Village, HI, USA, 3–6 January 2018; pp. 430–439. [Google Scholar]
  69. Oviedo-Trespalacios, O.; Peden, A.E.; Cole-Hunter, T.; Costantini, A.; Haghani, M.; Kelly, S.; Torkamaan, H.; Tariq, A.; Newton, J.D.A.; Gallagher, T.; et al. The risks of using ChatGPT to obtain common safety-related information and advice. SSRN 2023. [Google Scholar] [CrossRef]
  70. Hesse, L.S.; Walter, G.; Tietze, S. Influence of personality, affinity for technology and risk awareness on technology acceptance using the example of voice control. In Proceedings of the Mensch und Computer 2020, Magdeburg, Germany, 6–9 September 2020; pp. 211–221. [Google Scholar]
  71. Gulati, S.; Sousa, S.; Lamas, D. Modelling trust in human-like technologies. In Proceedings of the 9th Indian Conference on Human Computer Interaction, Bangalore, India, 16–18 December 2018; pp. 1–10. [Google Scholar]
  72. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
Figure 1. Yearly worldwide adoption of Intelligent Virtual Assistants (IVAs) in millions of users [1].
Figure 2. IVA ecosystem adapted from Chung et al. [20].
Figure 3. Proposed research model building upon the work of McKnight et al. [10].
Figure 4. Hypotheses Category A—Personality and Propensity to Trust.
Figure 5. Hypotheses Category B—Personality and Trusting Beliefs in a Specific Technology.
Figure 6. Hypotheses Category C—Propensity to Trust and Trusting Beliefs in a Specific Technology.
Figure 7. Hypotheses Category D—Affinity for Technology Interaction and Trusting Beliefs in IVAs.
Figure 8. Distribution of overall trusting beliefs in the specific technology IVA (x-axis: overall trust score; y-axis: number of respondents).
Table 1. Construct Reliability (Cronbach’s α).

Construct | Overall | German | English
Propensity to trust
  Trusting stance | 0.86 | 0.87 | 0.83
  Faith in general technology | 0.75 | 0.72 | 0.78
Institution-based trust
  Structural assurance | 0.90 | 0.88 | 0.90
  Situational normality | 0.76 | 0.70 | 0.81
Trusting beliefs in a specific technology
  Functionality | 0.82 | 0.77 | 0.87
  Reliability | 0.87 | 0.87 | 0.89
  Helpfulness | 0.87 | 0.87 | 0.85
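For readers who want to reproduce reliability coefficients such as those reported in Table 1, the sketch below shows the standard computation of Cronbach’s α from a response matrix (one row per respondent, one column per item of a construct). The response data are hypothetical and only serve to illustrate the formula.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix."""
    k = items.shape[1]                                  # number of items
    item_variances = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)      # variance of sum scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses to a three-item construct (1-5 Likert scale):
responses = np.array([[4, 4, 5],
                      [3, 3, 3],
                      [5, 4, 4],
                      [2, 3, 2],
                      [4, 5, 4]])
print(round(cronbach_alpha(responses), 2))
```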
Table 2. Propensity to Trust.

Propensity to Trust | No. of Items | German Mean | German SD | English Mean | English SD
Trusting stance | 3 | 3.2139 | 0.9790 | 3.4778 | 0.9781
Faith in general technology | 4 | 3.6590 | 0.6135 | 3.9143 | 0.7146
Mean | | 3.4365 | | 3.6961 |
Table 3. Differences in Propensity to Trust between IVA Users and Nonusers.

Propensity to Trust | German Mean | German SD | English Mean | English SD
Trusting stance
  Users | 3.5021 | 0.9024 | 3.8810 | 0.8538
  Non-users | 3.0843 | 0.9868 | 3.0170 | 0.9118
Faith in general technology
  Users | 3.7885 | 0.5594 | 4.0938 | 0.5772
  Non-users | 3.6014 | 0.6290 | 3.7092 | 0.8026
Table 4. Institution-Based Trust.

Institution-Based Trust | No. of Items | German Mean | German SD | English Mean | English SD
Structural assurance | 4 | 2.8240 | 0.9484 | 3.4122 | 0.9541
Situational normality | 4 | 3.3537 | 0.7422 | 3.6964 | 0.8198
Mean | | 3.0889 | | 3.5543 |
Table 5. Trusting Beliefs in a Specific Technology.

Trusting Beliefs in a Specific Technology | No. of Items | Mean | SD
Functionality | 3 | 3.6437 | 0.7931
Reliability | 4 | 3.0036 | 0.9115
Helpfulness | 4 | 3.5021 | 0.8002
Mean | | 3.3718 |
Table 6. Differences in ATI score between IVA users and nonusers, split by the respondents of the German and the English survey.

ATI | German Mean | German SD | English Mean | English SD
Users | 4.2948 | 0.9581 | 3.9238 | 0.8190
Nonusers | 3.7945 | 1.0444 | 3.3991 | 0.8029
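Group differences such as those summarized in Tables 3 and 6 can be examined with a two-sample location test. The sketch below uses Welch’s t-test as one possible choice and generates hypothetical ATI scores whose means and standard deviations are loosely based on the German sample in Table 6; it is an illustration under these assumptions, not a re-run of the analysis reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical six-point ATI scores for IVA users and nonusers
# (location/scale loosely based on the German sample in Table 6):
users    = rng.normal(loc=4.29, scale=0.96, size=120).clip(1, 6)
nonusers = rng.normal(loc=3.79, scale=1.04, size=150).clip(1, 6)

# Welch's t-test does not assume equal variances between the two groups.
t_stat, p_value = stats.ttest_ind(users, nonusers, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```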
Table 7. Overview of Results of Hypotheses Testing.

Hypothesis | German | English
Category A—Personality and Propensity to Trust
  H-A01: Agreeableness positively correlates with trusting stance. | Yes | No
  H-A02: Agreeableness positively correlates with faith in general technology. | Yes | No
  H-A03: Neuroticism negatively correlates with trusting stance. | No | No
  H-A04: Neuroticism negatively correlates with faith in general technology. | No | No
Category B—Personality and Trusting Beliefs in a Specific Technology
  H-B01: Extraversion positively correlates with trusting beliefs in a specific technology. | No | No
  H-B02: Agreeableness positively correlates with trusting beliefs in a specific technology. | No | No
  H-B03: Conscientiousness positively correlates with trusting beliefs in a specific technology. | No | No
  H-B04: Openness positively correlates with trusting beliefs in a specific technology. | No | No
Category C—Propensity to Trust and Trusting Beliefs in a Specific Technology
  H-C01: Trusting stance positively correlates with trusting beliefs in a specific technology. | Yes | Yes
  H-C02: Faith in general technology positively correlates with trusting beliefs in a specific technology. | Yes | Yes
Category D—Affinity for Technology Interaction and Trusting Beliefs in a Specific Technology
  H-D01: ATI positively correlates with trusting beliefs in a specific technology. | No | No
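Each hypothesis in Table 7 posits a (positive or negative) correlation between two questionnaire constructs. As an illustration, the sketch below tests such a relationship on hypothetical construct scores using Pearson’s r; whether Pearson’s r or Spearman’s ρ is the more appropriate coefficient depends on the scale assumptions made for Likert-based construct scores, so this is not meant to mirror the exact procedure used in our analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical construct scores per respondent (e.g., for H-C01):
trusting_stance  = rng.normal(3.3, 0.95, size=300).clip(1, 5)
trusting_beliefs = (0.5 * trusting_stance
                    + rng.normal(1.7, 0.6, size=300)).clip(1, 5)

# Pearson's r; Spearman's rho (stats.spearmanr) would relax the
# interval-scale assumption for ordinal Likert-based scores.
r, p_value = stats.pearsonr(trusting_stance, trusting_beliefs)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```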
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
