Article

No Movie to Watch: A Design Strategy for Enhancing Content Diversity through Social Recommendation in the Subscription-Video-On-Demand Service

1 Department of Interaction Science, Sungkyunkwan University, Seoul 03063, Republic of Korea
2 School of Industrial and Management Engineering, Korea University, Seoul 02841, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(1), 279; https://doi.org/10.3390/app13010279
Submission received: 26 October 2022 / Revised: 25 November 2022 / Accepted: 22 December 2022 / Published: 26 December 2022
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract
Increasing diversity is becoming crucial in recommender systems to address the “filter bubble” issue caused by accuracy-based algorithms, and diversity-oriented algorithms have been developed to solve this problem. However, this diversification has made it difficult for users to discover what they really want from the variety of information the algorithm provides. Users spend their time wandering around the recommended content space but fail to find content they want to watch. They therefore rely on external services to gather information that does not appear on the recommended list, which could reduce a service’s ability to compete with other subscription video on-demand (SVOD) services. To address this problem, this study proposes a human-centered approach to diversification through social recommendations. We conducted an experiment to understand how perceived diversity affects user perceptions and attitudes. Specifically, by incorporating social recommendations into an SVOD service, the experiment compared the following conditions: (1) influencers vs. online friends, and (2) human recommendation lists vs. algorithmic recommendation lists. The findings indicated that perceived diversity influences how users perceive information quality and playfulness, both of which have a positive effect on their intention to use the service. Additionally, participants’ perceptions of information quality were greater in the human-recommendation scenario than in the algorithmic-recommendation scenario. This study contributes to the development of a theoretical framework based on perceived diversity through social recommendations and to the design of an SVOD interface with social recommendations that provides better user experiences.

1. Introduction

Our world is now more vibrant and colorful because of new discoveries. It is natural for users to expect novel and interesting material when using a recommender system. However, current recommender systems do not completely satisfy user expectations. Recommender systems based on collaborative filtering treat accuracy as the most important criterion; therefore, the recommended items are primarily based on user profiles and past preferences. This often limits recommendations to items with similar properties, producing a “filter bubble” that narrows the range of suggestions and keeps users from exploring new information [1]. To address this issue, various beyond-accuracy strategies have been proposed to increase user satisfaction with recommendations. For example, the interesting items users find in their content space cannot be explained by recommendation accuracy alone.
To move beyond accuracy, several platforms use social factors to help users find new content with entertainment elements. On Amazon, people browse products listed by other users through social tagging. Spotify allows users to access celebrities’ playlists and see which songs they are listening to, and Spotify’s Listening Together offers users the opportunity to explore, stream, and share playlists from around the world. On YouTube, users chat and communicate with others while watching live broadcasts, which enhances their social presence and entertainment. Relatedly, Grange et al. [2] showed that designs incorporating online friendships positively affect users’ access to unexpected and relevant information in online shopping environments. Social factors can make searching easier and reduce risk aversion, thus introducing users to novel and enjoyable items.
Subscription video on demand (SVOD) services, however, adopt relatively few social factors. An SVOD service is a video streaming service that users access by subscription (e.g., Netflix, Amazon Prime Video, and Disney+). Most SVOD companies employ recommendation algorithms to suggest items that individuals may find interesting. Netflix held a competition and awarded the “Netflix Prize” to the algorithm with the best recommendation performance; the winning method, a model-based collaborative filtering technique, is currently the basis of the Netflix algorithms. However, many users still spend a great deal of time exploring the content space and searching for the items they really want to watch. They often feel there is nothing new to watch when using SVOD services, which they find boring [3].
Users often take longer to select a video, which has a relatively longer running time than songs or short video clips. Videos watched through SVOD services typically run between 20 min and 2 h, and users may need to watch a few episodes, or an entire season of a series, to determine whether it satisfies their wants. In addition, the recommendation algorithms in SVOD services tend not to offer new content to the user. Industry-leading companies such as Netflix have already abandoned the star rating system and have been trying to find and utilize users’ hidden behaviors [4]. This implies that traditional methods, such as star ratings, have obvious limitations in understanding user preferences. These methods often prevent users from being exposed to diverse content in new or unfamiliar genres or themes.
In 2019, Netflix announced that 75% of movie viewing followed its recommendations. What does this imply for the remaining 25%? As recommendation algorithms advance, the proportion of viewers watching based on recommendations will rise. However, is reaching 100% possible? It does not follow that users will simply accept the recommendations and stop exploring other content. Helpers must encourage users to click on one of the hundreds of content items exposed by algorithms, and such helpers can exist in both online and offline social relationships. SVOD companies are trying to shape public opinion through social media: campaigns on platforms such as Twitter encourage user engagement via social networking and increase viewership [5]. However, this strategy can reduce member retention by giving viewers an opportunity to switch to competing services.
The goal of this study is to propose a new SVOD interface by examining the effects of diversity in social recommendations. Although several studies have demonstrated the effects of social recommendations, SVOD services have rarely addressed this issue. We propose an SVOD interface in which users can access content lists recommended by other users. In addition, the credibility of the information sources and the superiority of human recommendations were investigated in the proposed interface. The diversity of social recommendations is considered a key factor affecting users’ intention to use and the perceived value of the information.

2. Background

2.1. Diversity in Recommender Systems

Accuracy has traditionally been considered an important criterion in the design of recommender systems. However, overfitting and hyper-personalization issues arise because of the limited range of options produced by recommendation algorithms [6]. Several studies have demonstrated the value of diversity in reducing monotony and fostering better recommendations. Diversity is often described as the opposite of similarity [7]. A higher level of diversity is associated with a lower level of accuracy because of the trade-off between the two. Although conventional recommendation algorithms still prioritize accuracy over diversity, exposing viewers to diverse content is becoming increasingly important. Ziegler et al. [8] stated that users’ overall satisfaction with recommendation lists depends not only on accuracy but also on diversity. Abbas et al. [9] reported that a recommender system with high diversity encourages users to explore various alternative options rather than passively receive recommended results. In addition, Knijnenburg et al. [10] found that system performance and perceived recommendation variety are positively correlated. Ekstrand et al. [11] demonstrated that recommendation diversity has a positive influence on users’ choice and satisfaction.
As diversity has been found to be an influential factor in recommendation quality, several studies have examined its effect by focusing on algorithmic manipulations. Aytekin and Karakaya [12] developed a novel method to increase the diversity of recommendations wherein users could adjust the diversity level they wanted. Abdollahpouri et al. [13] suggested re-ranking approaches as a personalized diversification method to solve a long-tail problem. Wang et al. [14] addressed the dilemma between accuracy and diversity and proposed a new collaborative filtering-based recommendation method by employing an approximate nearest neighbor (ANN) search method called locality-sensitive hashing (LSH). Panteli and Boutsinas [15] developed a flexible recommender system using the trade-off between diversity and similarity. The proposed system regulates the diversity/similarity ratios depending on the sizes of the recommendation sets.
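The diversification methods above all manipulate some measure of list-level dissimilarity. As a minimal illustration of how such a measure is typically defined (a generic sketch, not any of the cited authors’ methods), intra-list diversity can be computed as the average pairwise dissimilarity among the recommended items; the genre-set dissimilarity below is a hypothetical example of one such measure.

```python
from itertools import combinations

def intra_list_diversity(items, dissimilarity):
    """Average pairwise dissimilarity of a recommendation list.

    Higher values indicate a more diverse list; 0 means all items are
    identical under the given dissimilarity measure.
    """
    pairs = list(combinations(items, 2))
    if not pairs:
        return 0.0
    return sum(dissimilarity(a, b) for a, b in pairs) / len(pairs)

# Hypothetical example: items represented as genre sets,
# dissimilarity = 1 - Jaccard overlap of the genres.
def genre_dissimilarity(a, b):
    return 1 - len(a & b) / len(a | b)

narrow = [{"action"}, {"action"}, {"action", "thriller"}]
broad = [{"action"}, {"romance"}, {"documentary"}]
print(intra_list_diversity(narrow, genre_dissimilarity))  # ~0.333
print(intra_list_diversity(broad, genre_dissimilarity))   # 1.0 (no overlap)
```

A diversity-oriented re-ranker then trades this quantity off against predicted accuracy when ordering the final list.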
However, owing to limited cognitive capacity, users cannot perceive diversity beyond a certain level, so algorithmic efforts alone are insufficient for recommendation efficacy. There has been a demand for interface designs that help users recognize recommendation diversity, yet only a few studies have addressed this issue. Hu and Pu [16] developed an organization-based interface to encourage users to find diverse items. Their findings showed that users gained more helpful and supportive information when they perceived greater diversity, which gave them more confidence in their choices. This reiterates the importance of the recommendation interface, which has often been ignored in favor of algorithmic perspectives. Tsai and Brusilovsky [17] proposed a visual diversity-promoting interface for a better perception of diversity. Their findings showed that the visual interface could help users perceive or become aware of recommendation diversity. Similarly, our study explores how diversity can be promoted in SVOD interfaces and examines its implications for information quality and consumers’ intention to use such services.

2.2. Social Recommendation

The value of social recommendation has grown manifold because other users’ opinions about items can reduce information overload and help users form an opinion of item quality before purchasing or using them. Previous studies have developed different methods of implementing social recommendations from algorithmic perspectives [18,19,20], and the effects of social recommendations have also been examined. Chakravarty et al. [21] discovered that infrequent moviegoers were more influenced by word-of-mouth, whereas frequent moviegoers depended more on reviews. Although word-of-mouth and reviews did not equally affect users’ movie evaluations, the effects of both forms of social opinion were found to be significant. Duan et al. [22] examined how online film reviews influence users’ purchase decisions and found that the volume of online posts is a key factor owing to an awareness effect. In addition, Xu [23] emphasized social recommendation as a major factor influencing attention, click likelihood, and the evaluation of news credibility. According to Grange et al. [2], social circle designs based on users’ social connections encourage serendipity.
Social recommendations play a pivotal role in inducing changes in users’ behaviors; however, their consequences differ depending on the information source. This is why many companies hire influencers as part of their sales strategies. An influencer, according to Berger and the Keller Fay Group [24], is “a reliable and credible channel with a real impact in swaying consumer behavior”. Influencers are thus seen as more credible, trustworthy, and persuasive, and these characteristics affect general user behavior. Hovland and Weiss [25] showed that when communicators are viewed as credible information sources, people are much more likely to change their opinions toward them. Pornpitakpan [26] analyzed the effects of source credibility and described a higher-credibility source as generally more persuasive in changing consumers’ behaviors in advertisements. Hsu et al. [27] examined how online bloggers’ recommendations affect consumers’ purchase intentions in relation to bloggers’ reputations, finding that users tended to consider recommendations from higher-reputation bloggers reliable and to hold more positive attitudes and intentions toward them.
User-generated playlists have emerged as a new way for users to express their preferences and recommend them to others. Millions of people actively create and consume their own playlists on YouTube, and the fact that several music streaming services such as Spotify have been hiring both experts and ordinary people to create playlists highlights the role and effect of user-generated playlists [28]. User-generated playlists as information sources are numerous and diverse, and users enjoy discovering valuable content by exploring playlists generated by others. According to Duhan et al. [29], weak-tie recommendation sources are not limited to the social circle of a decision-maker. User-generated playlists can be regarded as weak-tie recommendation sources, from which users have a greater likelihood of finding novel and meaningful content.

2.3. Perceived Playfulness

Perceived playfulness is defined as the degree of attentiveness, curiosity, enjoyment, or interest derived from interaction with an information technology [30]. According to [30], perceived playfulness, as an intrinsic motivational factor, significantly affects users’ intention to use. Using the extended expectation-confirmation theory (ECT) model, Lin et al. [31] found that perceived playfulness had a positive effect on continuance intention. Venkatesh [32] demonstrated a significant relationship between perceived playfulness and new-technology adoption, mediated by perceived ease of use. These findings indicate that perceived playfulness is a pivotal determinant of users’ use of web services.
Padilla-Meléndez et al. [33] found direct effects of perceived playfulness on the intention to use a system in a blended learning environment. Marza et al. [34] demonstrated the positive impact of perceived enjoyment on users’ attitudes towards online shopping. In addition, Kasilingam [35] observed that users who enjoy using chatbots have a positive attitude towards them and continue using them. Based on the technology acceptance model (TAM), Lee et al. [36] showed that consumers’ behaviors in adopting VR devices are strongly affected by perceived enjoyment. Similarly, other studies consistently argued that perceived playfulness is one of the strongest predictors of users’ intention to use [37,38,39]. Moreover, positive emotions such as creativity, arousal, excitement, imagination, curiosity, and exploratory experiences are often referred to as playfulness [40,41,42]. These studies contend that when users improve their competency in various situations, playfulness (positive emotions) can be attained. Hampton et al. [43] found that social interactions in a variety of settings help users access a variety of information, which fosters playfulness.

2.4. Algorithm vs. Human

Previous research on source bias has stated that users’ levels of algorithm aversion can differ depending on the characteristics of the tasks they are performing. Computer-based algorithms are considered unsuitable for subjective judgments that involve understanding and expressing emotional states. According to Waytz and Norton [44], people often think that computers and robots have less emotional ability than humans. Castelo et al. [45] found that click rates for the human-based option were significantly higher than those for the algorithm-based advice for more subjectively perceived decisions, such as dating advice. Similarly, Lee [46] stated that task characteristics have a significant influence on users’ perceptions of algorithmic and human decisions. In addition, Herlocker et al. [47] found that users are dissatisfied with algorithmic recommendations because they selectively present a few options and often fail to provide diverse information, depriving users of the chance to make a serendipitous discovery.
These findings suggest that algorithmic choices may not sufficiently reflect user viewpoints in subjective tasks requiring a greater human touch. Therefore, it is necessary to determine whether users’ tasks are subjective or objective and to provide an appropriate tool. Because movies inherently involve users’ emotions, user tasks in SVOD services (such as choosing which film to watch) involve subjective judgment processes. However, few studies have examined the difference between people’s perceptions of algorithmic and human recommendations in SVOD domains.

2.5. Problem Statement

Considering the previous literature comprehensively, there are two main research gaps regarding social recommendations. First, although diversity in recommender systems has been considered a critical factor in user experience, previous studies have neglected to investigate the effects of perceived diversity in the context of social recommendation. To bridge this gap, it is necessary to explore how users perceive diversity and how it relates structurally to other subjective attitudes toward the acceptance of social recommendations.
Second, there is a lack of investigation into source-related effects, such as information source and recommender type, especially in SVOD services. The literature review showed that when users receive information, their perception varies depending on whether the source is human or algorithmic. Furthermore, a human recommender can be of different types depending on source credibility. This implies that comparative studies are needed to examine users’ perceptual differences regarding information source and recommender type for social recommendations in SVOD services.

3. Hypotheses

This study aimed to examine how perceived diversity in social recommendations affects users’ attitudes in the context of SVOD. Figure 1 shows the hypotheses (H1s and H2s) of the relationships between perceived diversity, information quality, perceived playfulness, and intention to use.
Social interactions can improve user competence by providing various types of information [40]. Diverse recommendations help users make new and informative discoveries [48]. Positive emotions accompanied by diversity lead to user enjoyment [40,43]. We derived the following hypotheses (H1a and H1b) in the SVOD domain:
Hypothesis 1a (H1a).
Users’ perceptions of diversity in social recommendations in SVOD services positively affect their perception of information quality.
Hypothesis 1b (H1b).
Users’ perceptions of diversity in social recommendations in SVOD services positively affect their perception of playfulness.
Previous studies show that the quality of information in an information system is a strong predictor of users’ propensity to use it [49,50]. Additionally, perceived playfulness has been found to positively influence intention to use [30,51]. Thus, the following hypotheses were proposed:
Hypothesis 2a (H2a).
Users’ perceptions of information quality mediate the effect of perceived diversity on the intention to use an SVOD system.
Hypothesis 2b (H2b).
Users’ perceptions of playfulness mediate the effect of perceived diversity on the intention to use an SVOD system.
The additional hypotheses focus on which aspects of social recommendations affect perceived diversity and information quality. First, these perceptions depend on the source of the social recommendations. Related to users’ bias towards algorithms, algorithmic advice on a subjective decision is deemed less reliable than advice from real people [46]. In addition, it is often considered that a reputed person accesses diverse environments and trustworthy information [52,53]. Thus, we propose the following hypotheses:
Hypothesis 3a (H3a).
Users’ perceptions of diversity in SVOD services are higher for recommendations based on human choices than for those based on algorithmic choices.
Hypothesis 3b (H3b).
Users’ perceptions of information quality in SVOD services are higher for recommendation lists based on human choices than for those based on algorithmic choices.
Hypothesis 4a (H4a).
Users’ perceptions of diversity in SVOD services are higher for influencers’ recommendations than for online friends’ recommendations.
Hypothesis 4b (H4b).
Users’ perceptions of information quality in SVOD services are higher for influencers’ recommendations than for online friends’ recommendations.

4. Method

4.1. Participants

Thirty-nine volunteers were recruited from a university located in Seoul, Korea. Five were disqualified because they misunderstood the experiment. Of the remaining 34 participants, 13 were male and 21 were female, with ages ranging from 19 to 31 (mean = 25.5; standard deviation = 3.067). All participants had previously used SVOD services, and 29 indicated that Netflix was the SVOD service they used most frequently. Figure 2 displays the participants’ SVOD usage. The participants were randomly assigned to one of four experimental scenarios: influencers’ recommendation lists based on human choices (n = 9), influencers’ recommendation lists based on algorithmic choices (n = 9), online friends’ recommendation lists based on human choices (n = 8), and online friends’ recommendation lists based on algorithmic choices (n = 8).

4.2. Stimuli and Experimental Design

For the experiment, we built digital prototypes that could be interactively manipulated to investigate the videos recommended to the participants. The prototypes were developed using Framer, a popular design and prototyping graphical user interface tool in the industry [54]. Because the participants were all Korean, all experimental materials were designed in the Korean language. The prototype design was inspired by the Netflix interface. We expected that this design would allow participants to use our prototype without additional training because Netflix is the most popular SVOD service both globally and in South Korea [55]. The lists of video recommendations in the prototype were created using data from the actual accounts of three Netflix users who provided their details. We did not build our own recommendation system for this research because the purpose of the study was not to evaluate recommendations objectively but rather to examine users’ perceptions of the recommendations. Our experiment used a 2 (information source: online friend or influencer) × 2 (recommender type: algorithm or human) between-subjects design, and four interactive prototypes were developed for these experimental conditions. The details of the manipulation of each condition are described below.
To separate the two conditions of information source operationally (online friend vs. influencer), we controlled an instruction message for social recommendation on the starting page. On this page, the participants were asked to select one profile from among the three people. For this selection, the message “these are friends’ playlists” was displayed in the online-friend condition, whereas “these are influencers’ playlists” was displayed in the influencer condition (see Figure 3). Especially in the influencer condition, for their immersion in this selection, participants were informed that the three influencers were famous video content reviewers who had over three hundred thousand followers on YouTube. The same profiles were provided between the two conditions, including usernames and avatars, to control for any potential nuisance effects.
Next, we offered several welcome messages for content exploration at the top of the recommendation page for the manipulation of the recommender type (algorithm vs. human). Our prototype provides a message to facilitate the exploration of a recommended list created by a chosen profile or a list generated by an algorithm in which the profile might be interested, depending on the recommender type. More specifically, the message “these are [recommended] content X may like” was displayed in the algorithm condition, whereas “these are content X would like to recommend” was displayed in the human condition (see Figure 4). An identical recommendation list was provided for all experimental conditions to reduce any unintended consequences of various recommendations.
The prototype design and functionalities of the main page are described in more detail in Figure 4. First, an avatar and a welcome message are shown at the top of the page. Below that area, a menu is provided to filter genres (romance, documentary, comedy, horror, and reality); if a participant selects one, the recommended content below is filtered to the selected genre. Below the genre area are rows of recommended content, each with a title that briefly describes the criterion by which the content was selected (e.g., “watch it again”, “trending now”, and “continue watching”). When a thumbnail is clicked, the content’s title, seasonal information, and plot information are displayed. The content comprises 210 items in total: the first page consists of 35 items in seven rows, and each genre-specific page consists of 25 items in five rows.

4.3. Tasks and Procedure

An interview was strategically conducted to familiarize the participants with the experiment. Self-perception theory [56] states that this facilitation can be achieved by explicitly mentioning the experiences induced by the recommender system while using the SVOD service. The interview consisted of three open questions about the participant’s perspective on current recommender systems’ content recommendations: “What kind of SVOD services have you used?”, “Have you ever thought that recommendation lists are tedious, or that there is nothing to watch?”, and “What kind of strategies do you use to find content to watch?”
In the main experiment, the participants were given a 10.1″ iPad Air to use with our prototype. First, they were asked to create their own profiles using information such as nickname, age, and gender. On the next page, the participants were instructed to write their preferred genre or content in text form; no cues or options were offered, so that users would recall items on their own while interacting with the system [57]. Subsequently, the participants moved to another page presenting others’ profiles, as shown in Figure 3, and were asked to select one profile from among three people. The people could be influencers or the participant’s online friends, depending on the experimental condition. When they selected a profile, they accessed the recommendation list of that profile, as shown in Figure 4. On this page, the participants were asked to browse the recommendation list freely for as long as they wanted. During this exploration, they were also encouraged to press “like”, a heart icon in the upper right corner of a thumbnail, if they wanted to see the content. Finally, to determine how seriously and honestly they engaged in the experiment, they were asked to tell the researcher the titles of at least three favorite pieces of content and the reasons for their choices.
Following the main session, the participants were asked to respond to questionnaires on perceived diversity, information quality, choice satisfaction, perceived playfulness, and intention to use. Perceived diversity was measured using four items based on [58]. Information quality was measured using six items modified from [59]. Perceived playfulness and intention to use were measured using six and three items, respectively [30]. All responses were rated on a 7-point Likert scale (e.g., 1 = “I do not agree at all,” 7 = “I totally agree”).
Finally, following this quantitative assessment, a follow-up interview was conducted to gain a deeper understanding of user perceptions and of how these social recommender interfaces affect users. This was a semi-structured interview, with questions asked flexibly based on the favorite content the participant had reported at the end of the main session. Table 1 lists the interview questions. Specifically, the participants were first asked to answer four questions regarding diversity perception and their activities in the experiment. They were then asked to imagine how they would feel if exposed to different conditions and to answer questions comparing those conditions with the one they were assigned. Next, they were asked whether they would be willing to share their own or others’ recommendation lists for social recommendations. Lastly, the researchers asked for their opinions on the existing rating system for content evaluation.

5. Results

To test Hypotheses 1s and 2s statistically, we first conducted mediation analyses based on ordinary least squares (OLS) regression using PROCESS, as explained by [60]. This is one of the most reliable statistical methods for analyzing complex causal relationships between multiple variables. The data analyses were conducted using the PROCESS macro, a third-party extension implemented in SPSS. Model 4 of PROCESS was chosen for our analyses because it supports simple mediation analysis [60]. Two-sample t-tests comparing the conditions were then performed to test Hypotheses 3s and 4s.

5.1. Effects of Diversity and Mediating Effects of Information Quality and Perceived Playfulness

To test Hypotheses 1s and 2s, we conducted simple mediation analyses to validate whether information quality and perceived playfulness mediate the effect of diversity on the intention to use. Specifically, the effects of diversity in the social recommendations on information quality and perceived playfulness (H1a and H1b), as well as the mediating effects of these two variables on intention to use, were tested (H2a and H2b). Figure 5 shows the overall results of both the full models.
First, the results indicated that perceived diversity in social recommendations was a significant predictor of both information quality (B = 0.331, SE = 0.140, p = 0.033) and perceived playfulness (B = 0.387, SE = 0.151, p = 0.016) at the 0.05 significance level. Thus, Hypotheses H1a and H1b were both supported (see Table 2).
Both information quality (B = 0.799, SE = 0.219, p = 0.001) and perceived playfulness (B = 1.041, SE = 0.154, p < 0.001) were significant predictors of intention to use at the 0.05 significance level. Accordingly, Hypotheses 2a and 2b were supported (see Table 3). However, perceived diversity did not show significant effects directly on intention to use in both models, including information quality (B = 0.223, SE = 0.186, p = 0.240) and perceived playfulness (B = 0.070, SE = 0.145, p = 0.633).
Lastly, to examine the significance of the mediating effects of information quality and perceived playfulness, we tested for complete mediation using bootstrapping with 5000 resamples. The results again showed no direct effect of perceived diversity on intention to use, whereas the indirect effects were statistically significant in both models: information quality (B = 0.249, SE = 0.107, 95% CI = [0.062, 0.485]) and perceived playfulness (B = 0.402, SE = 0.180, 95% CI = [0.032, 0.733]) (see Table 4). Overall, the results indicate that perceived diversity affects intention to use only indirectly, through information quality or perceived playfulness (see Figure 5).
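The percentile-bootstrap test of an indirect effect used above can be sketched as follows. This is a minimal illustration of the general technique, not the authors' PROCESS code; the synthetic data and variable names are hypothetical, chosen only to mimic a simple mediation structure (diversity → information quality → intention to use).

```python
import numpy as np

def indirect_effect(x, m, y):
    """Estimate the indirect effect a*b from two OLS regressions:
    path a (x -> m) and path b (m -> y, controlling for x)."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    """Percentile bootstrap CI for the indirect effect (PROCESS-style)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        boots[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(boots, [2.5, 97.5])

# Synthetic data with a built-in mediation path (hypothetical values,
# not the study's data): x ~ perceived diversity, m ~ information
# quality, y ~ intention to use.
rng = np.random.default_rng(1)
n = 40
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.5, size=n)
y = 0.8 * m + rng.normal(scale=0.5, size=n)

point = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
print(f"indirect effect = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The mediation is deemed significant when the bootstrap confidence interval excludes zero, which is the criterion applied to the intervals reported above.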

5.2. Effects of Information Source and Recommender Type

To test Hypothesis 3s, we conducted independent sample t-tests to compare the effects of the two information sources. Before the main analyses, we checked the statistical assumptions for the t-tests, specifically the Shapiro–Wilk test for normality and the Levene test for homogeneity of variance. All assumptions were satisfied: normality for perceived diversity (W = 0.960, p = 0.251) and information quality (W = 0.972, p = 0.512), and homogeneity of variance for perceived diversity (F = 1.874, p = 0.181) and information quality (F = 0.0104, p = 0.919). For H3a, the information source had no significant effect on perceived diversity (t = 0.065, p = 0.949) at the 0.05 significance level; there was no difference between participants in the influencer condition (M = 5.194, SD = 1.133) and those in the online-friend condition (M = 5.172, SD = 0.865). For H3b, the information source likewise had no significant effect on perceived information quality (t = 0.961, p = 0.344): the influencer condition (M = 5.333, SD = 0.868) did not differ from the online-friend condition (M = 5.052, SD = 0.834). Thus, Hypotheses 3a and 3b were not supported (see Table 5).
To test Hypothesis 4s, we conducted independent sample t-tests to compare the effects of the two recommender types. Again, the statistical assumptions for the t-tests were checked and satisfied: normality for perceived diversity (W = 0.956, p = 0.186) and information quality (W = 0.978, p = 0.710), and homogeneity of variance for perceived diversity (F = 0.111, p = 0.741) and information quality (F = 1.193, p = 0.283). For H4a, perceived diversity did not differ significantly (t = 1.074, p = 0.291) between participants in the algorithm condition (M = 5.000, SD = 1.031) and those in the human condition (M = 5.367, SD = 0.965). For H4b, perceived information quality differed significantly (t = 2.077, p = 0.046) between the algorithm condition (M = 4.912, SD = 0.934) and the human condition (M = 5.490, SD = 0.667). Therefore, only Hypothesis 4b was supported (see Table 6).
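The comparison procedure above (assumption checks, then a two-sample t-test) can be sketched in a few lines with SciPy. This is a generic illustration, not the authors' analysis script; the group means, standard deviations, and sample sizes below are hypothetical placeholders, not the study's raw data.

```python
import numpy as np
from scipy import stats

def compare_conditions(group_a, group_b, alpha=0.05):
    """Check normality (Shapiro-Wilk) and homogeneity of variance
    (Levene), then run an independent two-sample t-test."""
    _, p_norm_a = stats.shapiro(group_a)
    _, p_norm_b = stats.shapiro(group_b)
    _, p_levene = stats.levene(group_a, group_b)
    equal_var = p_levene > alpha  # fall back to Welch's t-test if violated
    t, p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
    return {
        "normality_ok": min(p_norm_a, p_norm_b) > alpha,
        "equal_var": bool(equal_var),
        "t": float(t),
        "p": float(p),
    }

# Hypothetical 7-point scale ratings for two recommender-type
# conditions (illustrative numbers only).
rng = np.random.default_rng(0)
algorithm = rng.normal(loc=4.9, scale=0.9, size=29)
human = rng.normal(loc=5.5, scale=0.7, size=30)

result = compare_conditions(algorithm, human)
print(result)
```

When the Levene test rejects equal variances, the sketch switches to Welch's t-test rather than the pooled-variance test, which is a common safeguard when the homogeneity assumption fails.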

6. Discussion

This study investigated the effects of perceived diversity, playfulness, and information quality in an SVOD system with social recommendations. Previous studies have pointed out that appropriate social factors must be considered when designing an interface for a particular context [2,23,59]. Despite these concerns, the implementation of social design in SVOD services has rarely been studied. Therefore, the objective of this study was to propose a social recommendation that users would find more entertaining while discovering superior video content on an SVOD service. Our experiments yielded three key findings: (1) perceived diversity in recommender systems positively affects both perceived playfulness and information quality; (2) both perceived playfulness and information quality influence users’ intention to use; and (3) users perceive higher information quality when recommendation lists are created by humans rather than algorithms. However, the results showed no significant differences in perceived diversity and information quality between the influencer and online-friend conditions. One possible reason is that the participants did not trust influencers they did not know well or follow. P10, P11, P18, P35, P37, and P39 reported in the follow-up interview that only limited information about the introduced influencers was available (i.e., “the influencers are YouTube creators who have more than three-hundred-thousand followers.”). This lack of information might make it difficult to build trust in uncertain information sources. Thus, it may be incorrect to conclude that the information source and its reputation have no effect. Researchers have provided abundant evidence of the significant role of influencers [53,61]. We believe the information source would have a significant effect if the interface provided sufficient information or the profile of a well-known influencer.
Our findings and their theoretical and practical implications are discussed in the next sections (Section 6.1 and Section 6.2).

6.1. Theoretical Implications

The results of this study have several theoretical implications. First, our findings show that perceived diversity plays an important role in the use of recommender systems, demonstrating the need to consider diversity as an essential factor for SVOD services. To increase diversity, this study proposes a novel social recommendation strategy that enables users to access various people’s recommendation lists. Tsai and Brusilovsky [17] stressed that interfaces for recommender systems must assist users in exploring the various relevance prospects of recommended items through diversity-oriented tasks. Likewise, our social design makes diverse content beyond a user’s own recommendation results more accessible, through the user’s voluntary selection of other people’s profiles.
Although we did not find a statistical difference in perceived diversity between human and algorithmic recommenders, the qualitative feedback suggests that our social recommendation design increased perceived diversity. In the follow-up interview, the majority of participants in the human recommender condition stated that they were able to discover novel content they had not known before by exploring other people’s recommendation lists. Moreover, many participants found the algorithmic recommendations boring: P3, P4, P5, P15, and P21 expressed a negative attitude towards the algorithm compared with human recommendations; for example, “algorithmic recommendations seem [too] obvious.”
In addition, we found that perceived diversity affects two key factors—perceived information quality and perceived playfulness—for users’ continued use of a service. These findings are consistent with those of previous studies [10,40,41]. Our findings suggest that some benefits accrue from increased diversity. First, users may perceive social recommendations with high diversity as having high quality without increasing accuracy through algorithmic manipulations. Furthermore, diversification with social activity, as Hampton et al. [43] pointed out, can arouse a user’s playful feelings, potentially reducing boredom in existing recommender systems.
Second, our findings highlight a limitation of algorithm-based recommender systems and its origins. Users’ perceptions of information quality differed depending on whether a human or an algorithm produced the recommendation; specifically, most participants preferred human recommendations. Twelve participants in the algorithm recommender condition gave similar responses; for example, one commented, “I can trust the recommended items more when the lists are made by humans.” Consistent with [45], this finding is reasonable because people tend to be skeptical of algorithmic results, especially for subjective tasks. The curation of video content is a subjective task that requires emotion and intuition; thus, human recommendation is preferred, as demonstrated in our study.
This preference for human recommendation appears to derive from the shared understanding between human users. Many participants believed that there must be a rational basis for online friends or influencers recommending specific content, whereas several responded that the algorithmically recommended results were not understandable. P37 answered, “I cannot figure out why the algorithms chose these contents, but in human recommendations, there is a reason anyway”. A previous study pointed out that it is difficult to explain to end users how a recommender model produced its recommendations [62]; such algorithms are too complex for users to comprehend the recommended results. Thus, it can be concluded that the degree of empathy is fundamentally responsible for the difference between algorithmic and human recommendations.

6.2. Practical Implications for SVOD Recommender Systems

The findings of this study provide practical insights into how designers can implement social factors in SVOD services to eliminate the old stigma of “there is nothing to see.” First, this study showed that social recommendations through media sharing can play a pivotal role in guiding users to discover diversity voluntarily. This design strategy could help users of recommender systems overcome the limited capacity of human perception, which cannot recognize diversity beyond a certain degree [8,16]. Regarding this problem, Tsai and Brusilovsky [17] introduced a diversity-enhanced interface that reduces the effort users spend exploring information; such interfaces enable users to recognize diversity and encourage them to explore the information space. Compared to algorithmically driven content consumption, such user-driven content consumption has been reported to result in longer-term use of the service [63]. Moreover, this strategy can nudge users to view content that they have not yet watched. In the interview, P3 responded, “I became interested again in this content that I dropped out of before.” P13 also pointed out, “some content I would like to watch is what I already knew, and these recommendation lists reminded me of them”. These responses indicate that social recommendation is a viable strategy for encouraging users to explore and consume diverse content.
Second, in terms of service management, our social design offers commercial advantages. Ten participants in the human recommender condition responded that they wanted to compare their own and other people’s movie preferences. For example, P4 reported, “This system enables me to assume the owner’s taste, and I can decide whether it is right for me or not with my own criteria”. Some participants mentioned that this comparison is more helpful than existing indices such as the star rating system. P10 responded, “star rating system is not trustworthy because it is based on unspecified users’ preference, however, this system gives me the recommendation from those who have similar tastes with me”. This implies that, by adopting social recommendation to provide other people’s playlists or recommended lists, an SVOD service can prevent users from leaving the service to find information elsewhere. Furthermore, our design strategy is helpful for churn management. Previous studies have indicated that people with weak ties who receive socially shared information are more likely to spread this information to others [64], suggesting that recommendations from influencers or online friends could spread information and increase information flow. Consequently, our social design might facilitate users’ sharing of the content they discover, providing a new business opportunity for service providers.

6.3. Limitations

This study has some limitations in its research design and experiment. First, owing to resource constraints, we could recruit only a relatively small number of participants. However, we believe that our analytic results retain a certain level of statistical power. Specifically, this study used the PROCESS macro for the mediation analyses [60], a method known to be less susceptible to small sample sizes in simple mediation analysis [65]. In addition, the statistical assumptions of normality and homogeneity of variance were satisfied for the independent t-tests. These considerations indicate that, despite the small sample size, our statistical results are reasonably reliable.
Next, our study did not control for the potential effects of participants’ individual preferences, for practical reasons. Collecting all participants’ playlists and comparing them with the recommendation lists we provided in the experiment would have been too time-consuming, and calculating all the similarities between each participant’s playlist and our recommendation lists would have required a complex process. Instead, we strategically exposed participants to three different recommendation lists through the profile selection in the experiment. This approach may not fully control for individual preferences, but it should reduce potential biases.
Lastly, although we emphasized the importance of diversity through social recommendations in the SVOD context, this study did not explore users’ perceptions in a real-friend scenario. In the follow-up interview, 19 participants suggested that it would be more interesting if they could see what content their friends have recommended or are looking to recommend. Some also mentioned that recommendation lists from their friends would be more trustworthy because they already know their friends’ preferences. Previous research suggests that recommending products to friends can increase users’ interest in and attitudes towards certain products [66,67]. In the future, we plan to conduct a comparative study on diversity and social recommendations among influencers, online friends, and real-life friends.

7. Conclusions

The present study investigated how perceived diversity can be increased in a recommender system by implementing social recommendations in an SVOD service. To gain a deeper understanding of the underlying mechanisms among the related concepts, we first examined how perceived diversity influences intention to use through the mediating effects of information quality and perceived playfulness. Next, we investigated how perceived diversity differs according to information source and recommender type. The results demonstrated that perceived diversity affects perceived information quality and playfulness; that the effects of perceived diversity on users’ intention to use are mediated by perceived information quality and playfulness; and that human recommendation lists produce higher perceived information quality in SVOD services. In conclusion, these findings have several theoretical implications, highlighting (1) the importance of perceived diversity in recommender systems and (2) the limitations of algorithm-based recommender systems compared to social recommendation. Practically, the research establishes (1) the need to support users in voluntarily discovering diversity in SVOD interfaces and (2) the commercial advantages of social design, especially for churn management in SVOD services.

Author Contributions

Conceptualization, S.K., I.H., and S.L.; Formal analysis, S.K.; Investigation, I.H.; Methodology, I.H.; Resources, S.K.; Supervision, S.L.; Writing—original draft, S.K. and I.H.; Writing—review and editing, S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a Korea University Grant and by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2022S1A5A2A01047148).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of Sungkyunkwan University, South Korea (No. 2021-10-033).

Informed Consent Statement

All subjects gave their informed consent for inclusion before they participated in the study.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

This study was supported by a Korea University Grant. This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2022S1A5A2A01047148).

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Pariser, E. The Filter Bubble: What the Internet is Hiding from You; Penguin: London, UK, 2011. [Google Scholar]
  2. Grange, C.; Benbasat, I.; Burton-Jones, A. With a little help from my friends: Cultivating serendipity in online shopping environments. Inf. Manag. 2019, 56, 225–235. [Google Scholar] [CrossRef] [Green Version]
  3. Choi, H.S.; Kim, S.I. A study on user experience of OTT service-focused on Netflix, Watcha and Wavve. J. Digit. Converg. 2020, 18, 425–431. [Google Scholar] [CrossRef]
  4. Gomez-Uribe, C.A.; Hunt, N. The Netflix recommender system: Algorithms, business value, and innovation. ACM Trans. Manag. Inf. Syst. 2016, 6, 1–19. [Google Scholar] [CrossRef] [Green Version]
  5. Fernández Gómez, E.; Martín Quevedo, J. Connecting with audiences in new markets: Netflix’s Twitter strategy in Spain. J. Media Bus. Stud. 2018, 15, 127–146. [Google Scholar] [CrossRef]
  6. Kunaver, M.; Požrl, T. Diversity in recommender systems—A survey. Knowl.-Based Syst. 2017, 123, 154–162. [Google Scholar] [CrossRef]
  7. Smyth, B.; McClave, P. Similarity vs. diversity. In International Conference on Case-Based Reasoning; Springer: Berlin/Heidelberg, Germany, 2001; pp. 347–361. [Google Scholar] [CrossRef]
  8. Ziegler, C.-N.; McNee, S.M.; Konstan, J.A.; Lausen, G. Improving recommendation lists through topic diversification. In Proceedings of the 14th International Conference on World Wide Web—WWW’05, Chiba, Japan, 10–14 May 2005; pp. 22–32. [Google Scholar] [CrossRef] [Green Version]
  9. Abbas, F.; Najjar, N.; Wilson, D. Increasing diversity through dynamic critique in conversational recipe recommendations. In Proceedings of the 13th International Workshop on Multimedia for Cooking and Eating Activities, Taipei, Taiwan, 21 August 2021; pp. 9–16. [Google Scholar] [CrossRef]
  10. Knijnenburg, B.P.; Willemsen, M.C.; Gantner, Z.; Soncu, H.; Newell, C. Explaining the user experience of recommender systems. User Model. User-Adapt. Interact. 2012, 22, 441–504. [Google Scholar] [CrossRef] [Green Version]
  11. Ekstrand, M.D.; Harper, F.M.; Willemsen, M.C.; Konstan, J.A. User perception of differences in recommender algorithms. In Proceedings of the 8th ACM Conference on Recommender Systems—RecSys’14, Foster City, CA, USA, 6–10 October 2014; pp. 161–168. [Google Scholar] [CrossRef]
  12. Aytekin, T.; Karakaya, M.Ö. Clustering-based diversity improvement in top-n recommendation. J. Intell. Inf. Syst. 2014, 42, 1–18. [Google Scholar] [CrossRef]
  13. Abdollahpouri, H.; Burke, R.; Mobasher, B. Managing popularity bias in recommender systems with personalized re-ranking. In Proceedings of the 32nd International Florida Artificial Intelligence Research Society Conference (FLAIRS-32), Sarasota, FL, USA, 19–22 May 2019; pp. 413–418. [Google Scholar]
  14. Wang, L.; Zhang, X.; Wang, R.; Yan, C.; Kou, H.; Qi, L. Diversified service recommendation with high accuracy and efficiency. Knowl.-Based Syst. 2020, 204, 106196. [Google Scholar] [CrossRef]
  15. Panteli, A.; Boutsinas, B. Improvement of similarity–diversity trade-off in recommender systems based on a facility location model. In Neural Computing and Applications; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar] [CrossRef]
  16. Hu, R.; Pu, P. Helping Users Perceive Recommendation Diversity. In Proceedings of the DiveRS 2011 ACM RecSys 2011 Workshop on Novelty and Diversity in Recommender Systems (RecSys’11), Chicago, IL, USA, 23–27 October 2011; pp. 43–50. [Google Scholar]
  17. Tsai, C.-H.; Brusilovsky, P. Exploring Social Recommendations with Visual Diversity-Promoting Interfaces. ACM Trans. Interact. Intell. Syst. 2019, 10, 1–34. [Google Scholar] [CrossRef] [Green Version]
  18. Ahmadian, S.; Joorabloo, N.; Jalili, M.; Ren, Y.; Meghdadi, M.; Afsharchi, M. A social recommender system based on reliable implicit relationships. Knowl.-Based Syst. 2020, 192, 105371. [Google Scholar] [CrossRef]
  19. Lai, C.-H.; Lee, S.-J.; Huang, H.-L. A social recommendation method based on the integration of social relationship and product popularity. Int. J. Hum.-Comput. Stud. 2019, 121, 42–57. [Google Scholar] [CrossRef]
  20. Zhou, X.; Liang, W.; Huang, S.; Fu, M. Social Recommendation with Large-Scale Group Decision-Making for Cyber-Enabled Online Service. IEEE Trans. Comput. Soc. Syst. 2019, 6, 1073–1082. [Google Scholar] [CrossRef]
  21. Chakravarty, A.; Liu, Y.; Mazumdar, T. The differential effects of online word-of-mouth and critics’ reviews on pre-release movie evaluation. J. Interact. Mark. 2010, 24, 185–197. [Google Scholar] [CrossRef]
  22. Duan, W.; Gu, B.; Whinston, A.B. Do online reviews matter?—An empirical investigation of panel data. Decis. Support Syst. 2008, 45, 1007–1016. [Google Scholar] [CrossRef]
  23. Xu, Q. Social Recommendation, Source Credibility, and Recency. Journal. Mass Commun. Q. 2013, 90, 757–775. [Google Scholar] [CrossRef]
  24. Berger, J.; Keller Fay Group. Research Shows Micro-Influencers Have More Impact than Average Consumers. 2016. Available online: http://go2.experticity.com/rs/288-azs-731/images/experticitykellerfaysurveysummary.pdf (accessed on 2 November 2021).
  25. Hovland, C.I.; Weiss, W. The influence of source credibility on communication effectiveness. Public Opin. Q. 1951, 15, 635–650. [Google Scholar] [CrossRef]
  26. Pornpitakpan, C. The Persuasiveness of Source Credibility: A Critical Review of Five Decades’ Evidence. J. Appl. Soc. Psychol. 2004, 34, 243–281. [Google Scholar] [CrossRef]
  27. Hsu, C.; Chuan-Chuan Lin, J.; Chiang, H. The effects of blogger recommendations on customers’ online shopping intentions. Internet Res. 2013, 23, 69–88. [Google Scholar] [CrossRef]
  28. Webster, J. Taste in the platform age: Music streaming services and new forms of class distinction. Inf. Commun. Soc. 2020, 23, 1909–1924. [Google Scholar] [CrossRef]
  29. Duhan, D.F.; Johnson, S.D.; Wilcox, J.B.; Harrell, G.D. Influences on consumer use of word-of-mouth recommendation sources. J. Acad. Mark. Sci. 1997, 25, 283–295. [Google Scholar] [CrossRef]
  30. Moon, J.-W.; Kim, Y.-G. Extending the TAM for a World-Wide-Web context. Inf. Manag. 2001, 38, 217–230. [Google Scholar] [CrossRef]
  31. Lin, C.S.; Wu, S.; Tsai, R.J. Integrating perceived playfulness into expectation-confirmation model for web portal context. Inf. Manag. 2005, 42, 683–693. [Google Scholar] [CrossRef]
  32. Venkatesh, V. Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model. Inf. Syst. Res. 2000, 11, 342–365. [Google Scholar] [CrossRef] [Green Version]
  33. Padilla-Meléndez, A.; del Aguila-Obra, A.R.; Garrido-Moreno, A. Perceived playfulness, gender differences and technology acceptance model in a blended learning scenario. Comput. Educ. 2013, 63, 306–317. [Google Scholar] [CrossRef]
  34. Marza, S.; Idris, I.; Abror, A. The Influence of Convenience, Enjoyment, Perceived Risk, And Trust On The Attitude Toward Online Shopping. In Proceedings of the 2nd Padang International Conference on Education, Economics, Business and Accounting (PICEEBA-2 2018), Padang, Indonesia, 24–25 November 2018. [Google Scholar] [CrossRef]
  35. Kasilingam, D.L. Understanding the attitude and intention to use smartphone chatbots for shopping. Technol. Soc. 2020, 62, 101280. [Google Scholar] [CrossRef]
  36. Lee, J.; Kim, J.; Choi, J.Y. The adoption of virtual reality devices: The technology acceptance model integrating enjoyment, social interaction, and strength of the social ties. Telemat. Inform. 2019, 39, 37–48. [Google Scholar] [CrossRef]
  37. Salloum, S.A.; AlAhbabi, N.M.N.; Habes, M.; Aburayya, A.; Akour, I. Predicting the intention to use social media sites: A hybrid SEM-machine learning approach. In International Conference on Advanced Machine Learning Technologies and Applications; Springer: Cham, Switzerland, 2021; pp. 324–334. [Google Scholar] [CrossRef]
  38. Hsu, C.-L.; Lin, J.C.-C. Understanding continuance intention to use online to offline (O2O) apps. Electron. Mark. 2020, 30, 883–897. [Google Scholar] [CrossRef]
  39. Joo, Y.J.; Park, S.; Shin, E.K. Students’ expectation, satisfaction, and continuance intention to use digital textbooks. Comput. Hum. Behav. 2017, 69, 83–90. [Google Scholar] [CrossRef]
  40. Xiao, Q.; Zhuang, W.; Hsu, M.K. Using Social Networking Sites: What Is the Big Attraction? Exploring a Mediated Moderation Relationship. J. Internet Commer. 2014, 13, 45–64. [Google Scholar] [CrossRef]
  41. Nambisan, S.; Baron, R.A. Interactions in virtual customer environments: Implications for product support and customer relationship management. J. Interact. Mark. 2007, 21, 42–62. [Google Scholar] [CrossRef]
  42. Novak, T.P.; Hoffman, D.L.; Yung, Y.-F. Measuring the Customer Experience in Online Environments: A Structural Modeling Approach. Mark. Sci. 2000, 19, 22–42. [Google Scholar] [CrossRef] [Green Version]
  43. Hampton, K.N.; Goulet, L.S.; Rainie, L.; Purcell, K. Social Networking Sites and Our Lives; Pew Internet & American Life Project: Washington, DC, USA, 2011; Volume 1, pp. 1–85. [Google Scholar]
  44. Waytz, A.; Norton, M.I. Botsourcing and outsourcing: Robot, British, Chinese, and German workers are for thinking—Not feeling—Jobs. Emotion 2014, 14, 434–444. [Google Scholar] [CrossRef] [PubMed]
  45. Castelo, N.; Bos, M.W.; Lehmann, D. Let the machine decide: When consumers trust or distrust algorithms. NIM Mark. Intell. Rev. 2019, 11, 24–29. [Google Scholar] [CrossRef]
  46. Lee, M.K. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data Soc. 2018, 5, 205395171875668. [Google Scholar] [CrossRef]
  47. Herlocker, J.L.; Konstan, J.A.; Terveen, L.G.; Riedl, J.T. Evaluating collaborative filtering recommender systems. ACM Trans. Inf. Syst. 2004, 22, 5–53. [Google Scholar] [CrossRef]
  48. Makri, S.; Blandford, A.; Woods, M.; Sharples, S.; Maxwell, D. “Making my own luck”: Serendipity strategies and how to support them in digital information environments. J. Assoc. Inf. Sci. Technol. 2014, 65, 2179–2194. [Google Scholar] [CrossRef]
  49. Lin, J.C.-C.; Lu, H. Towards an understanding of the behavioural intention to use a web site. Int. J. Inf. Manag. 2000, 20, 197–208. [Google Scholar] [CrossRef]
  50. Yang, X. Determinants of consumers’ continuance intention to use social recommender systems: A self-regulation perspective. Technol. Soc. 2021, 64, 101464. [Google Scholar] [CrossRef]
  51. van der Heijden, H. User Acceptance of Hedonic Information Systems. MIS Q. 2004, 28, 695. [Google Scholar] [CrossRef]
  52. Burt, R.S. The social capital of opinion leaders. ANNALS Am. Acad. Political Soc. Sci. 1999, 566, 37–54. [Google Scholar] [CrossRef]
  53. Jin, S.V.; Muqaddam, A.; Ryu, E. Instafamous and social media influencer marketing. Mark. Intell. Plan. 2019, 37, 567–579. [Google Scholar] [CrossRef]
  54. Framer. 2021. Available online: https://www.framer.com (accessed on 1 October 2021).
  55. Statista. Video Streaming (SVoD). 2021. Available online: https://www.statista.com/outlook/dmo/digital-media/video-on-demand/video-streaming-svod/south-korea (accessed on 30 October 2021).
  56. Bem, D.J. Self-perception theory. In Advances in Experimental Social Psychology; Academic Press: Cambridge, MA, USA, 1972; Volume 6, pp. 1–62. [Google Scholar] [CrossRef]
  57. McNee, S.M.; Lam, S.K.; Konstan, J.A.; Riedl, J. Interfaces for Eliciting New User Preferences in Recommender Systems; Springer: Berlin/Heidelberg, Germany, 2003; pp. 178–187. [Google Scholar] [CrossRef]
  58. Kamehkhosh, I.; Bonnin, G.; Jannach, D. Effects of recommendations on the playlist creation behavior of users. User Model. User-Adapt. Interact. 2020, 30, 285–322. [Google Scholar] [CrossRef] [Green Version]
  59. Filieri, R.; Alguezaui, S.; McLeay, F. Why do travelers trust Tripadvisor? Antecedents of trust towards consumer-generated media and its influence on recommendation adoption and word of mouth. Tour. Manag. 2015, 51, 174–185. [Google Scholar] [CrossRef] [Green Version]
  60. Hayes, A.F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, 2nd ed.; Guilford Publications: New York, NY, USA, 2018. [Google Scholar]
  61. Kadekova, Z.; Holienčinova, M. Influencer marketing as a modern phenomenon creating a new frontier of virtual opportunities. Commun. Today 2018, 9, 90–105. [Google Scholar]
  62. Herlocker, J.L.; Konstan, J.A.; Riedl, J. Explaining Collaborative Filtering Recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work—CSCW’00, Philadelphia, PA, USA, 14–18 October 2000; pp. 241–250. [Google Scholar] [CrossRef]
  63. Anderson, A.; Maystre, L.; Anderson, I.; Mehrotra, R.; Lalmas, M. Algorithmic effects on the diversity of consumption on Spotify. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 2155–2165. [Google Scholar] [CrossRef]
  64. Bakshy, E.; Rosenn, I.; Marlow, C.; Adamic, L. The role of social networks in information diffusion. In Proceedings of the 21st International Conference on World Wide Web 2012, Lyon, France, 16–20 April 2012; pp. 519–528. [Google Scholar] [CrossRef] [Green Version]
  65. Creedon, P.J.; Hayes, A.F. Small sample mediation analysis: How far can you push the bootstrap? In Proceedings of the Annual Conference of the Association for Psychological Science, New York, NY, USA, 21–24 May 2015. [Google Scholar]
  66. McNeill, L.S. “My friend posted it and that’s good enough for me!”: Source Perception in Online Information Sharing. J. Am. Folk. 2018, 131, 493–499. [Google Scholar] [CrossRef]
  67. Windels, K.; Heo, J.; Jeong, Y.; Porter, L.; Jung, A.-R.; Wang, R. My friend likes this brand: Do ads with social context attract more attention on social networking sites? Comput. Hum. Behav. 2018, 84, 420–429. [Google Scholar] [CrossRef]
Figure 1. Hypotheses 1s and 2s.
Figure 2. Answers to “how many days do you use SVOD services in a week?”.
Figure 3. Stimuli of both information sources’ conditions: influencer (left) and online-friend (right).
Figure 4. Stimuli representing both recommendation lists made by human (left, “Content X would like to recommend”) and algorithm (right, “Content X may like”).
Figure 5. Path coefficients and significance (* p < 0.05, ** p < 0.01, *** p < 0.001).
Table 1. Question lists of the follow-up interview.

Question Type       Questionnaire Items
Content choice      Q1. Can you find diverse and novel contents while using this service compared with your own recommendation list?
                    Q2. Have you ever heard about the content you found in this service?
                    Q3. Do you think this recommendation list convinced you to choose the favorite contents?
                    Q4. Do you think this recommendation list reminded you of the contents you already know about?
Other conditions    Q1. What if this recommendation list is your real friends’/online-friends’/influencers’?
                    Q2. What if this recommendation list is produced by its owner/algorithms?
Sharing             Q1. Did you enjoy/feel interested while using this service?
                    Q2. Do you want to share your recommendation list?
                    Q3. Do you worry about privacy violations? (What if anonymous?)
Rating system       Q1. Which one do you prefer, star-rating or recommendation sharing? (Why?)
Table 2. Results of the mediation analyses of the effects of perceived diversity on two mediating variables for H1a and H1b.

Path | B | S.E. | t | LLCI | ULCI
Perceived diversity → Information quality | 0.331 | 0.140 | 2.224 * | 0.026 | 0.597
(F = 4.944 *, R² = 0.134)
Perceived diversity → Perceived playfulness | 0.387 | 0.151 | 2.552 * | 0.078 | 0.695
(F = 6.515 *, R² = 0.169)
* p < 0.05.
Table 3. Results of the mediation analyses of the effects of information quality and perceived playfulness on intention to use for H2a and H2b.

Path | B | S.E. | t | LLCI | ULCI
Information quality → Intention to use | 0.799 | 0.219 | 3.647 ** | 0.352 | 1.246
Perceived playfulness → Intention to use | 1.041 | 0.154 | 6.766 *** | 0.727 | 1.355
** p < 0.01, *** p < 0.001.
Table 4. Results of the mediation analyses of the indirect effects of two mediating variables.

Path | B | Boot S.E. | Boot LLCI (95%) | Boot ULCI (95%)
Perceived diversity → Information quality → Intention to use | 0.249 | 0.107 | 0.062 | 0.485
Perceived diversity → Perceived playfulness → Intention to use | 0.402 | 0.180 | 0.032 | 0.733
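The bootstrapped confidence intervals in Table 4 follow the usual percentile-bootstrap logic: resample cases with replacement, re-estimate the a-path and b-path regressions on each resample, and take the 2.5th/97.5th percentiles of the a × b products; the indirect effect is judged significant when that interval excludes zero, as in both rows above. A minimal sketch of the procedure, using synthetic data and ordinary least squares (the function name, variable names, and simulation parameters are illustrative assumptions, not the study's data or code):

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap 95% CI for the indirect effect (a*b) of x on y via mediator m."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                       # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                      # a-path: mediator regressed on predictor
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]      # b-path: outcome on mediator, controlling for x
        effects[i] = a * b
    point = effects.mean()
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return point, lo, hi

# Synthetic data with a built-in x -> m -> y path; n = 34 mirrors the study's small sample.
rng = np.random.default_rng(1)
n = 34
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)
y = 0.8 * m + rng.normal(scale=0.8, size=n)
point, lo, hi = bootstrap_indirect_effect(x, m, y)
```

A Boot LLCI above zero is then read exactly as in Table 4: the mediated path is credibly positive.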
Table 5. Results of the independent sample t-tests for H3a and H3b.

Variable | Group | n | t | p | M | SD
Perceived diversity | Influencers | 18 | 0.065 | 0.949 | 5.194 | 1.133
Perceived diversity | Online friends | 16 | | | 5.172 | 0.865
Information quality | Influencers | 18 | 0.961 | 0.344 | 5.333 | 0.868
Information quality | Online friends | 16 | | | 5.052 | 0.834
Table 6. Results of the two-way t-tests for H4a and H4b.

Variable | Group | n | t | p | M | SD
Perceived diversity | Algorithm | 17 | 1.074 | 0.291 | 5.000 | 1.031
Perceived diversity | Human | 17 | | | 5.367 | 0.965
Information quality | Algorithm | 17 | 2.077 | 0.046 * | 4.912 | 0.934
Information quality | Human | 17 | | | 5.490 | 0.667
* p < 0.05.
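The group statistics reported in Tables 5 and 6 are sufficient to reproduce the t values, since the pooled-variance Student's t statistic depends only on each group's mean, SD, and n. A small sketch (the helper function is ours, not the authors'):

```python
from math import sqrt

def t_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Pooled-variance Student's t for two independent groups, from summary statistics."""
    sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)  # pooled variance
    return (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Information quality, human vs. algorithmic recommendation list (Table 6)
t = t_from_summary(5.490, 0.667, 17, 4.912, 0.934, 17)
# t ≈ 2.08, consistent with the reported t = 2.077 (p = 0.046)
```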
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Kim, S.; Huh, I.; Lee, S. No Movie to Watch: A Design Strategy for Enhancing Content Diversity through Social Recommendation in the Subscription-Video-On-Demand Service. Appl. Sci. 2023, 13, 279. https://doi.org/10.3390/app13010279