Article

How the Multiplicity of Suggested Information Affects the Behavior of a User in a Recommender System

1 Agency for Defense Development, Daejeon 34068, Korea
2 School of Artificial Intelligence, Yong In University, Yongin 17092, Korea
* Author to whom correspondence should be addressed.
First author.
Electronics 2021, 10(6), 741; https://doi.org/10.3390/electronics10060741
Submission received: 27 February 2021 / Revised: 17 March 2021 / Accepted: 19 March 2021 / Published: 20 March 2021
(This article belongs to the Section Computer Science & Engineering)

Abstract

Many researchers have suggested improving user retention on a digital platform by means of a recommender system. Recent studies show that there are many potential ways to assist users in finding interesting items, other than high-precision rating prediction. In this paper, we study how the diverse types of information suggested to a user can influence their behavior. The types are divided into visual, evaluative, categorial, and narrational information. Based on our experimental results, we analyze how the different types of supplementary information affect the performance of a recommender in terms of encouraging users to click on more items or spend more time on the digital platform.

1. Introduction

Recommender systems have been broadly used to provide a personalized user experience in various digital platforms for video streaming, online shopping, etc. Users face overwhelming numbers of options on a platform, and a recommender system helps them make decisions. Many studies have been conducted on finding recommendable items that can satisfy a user with certain areas of interest. These studies have not only academic but also industrial value, as the satisfaction of a user usually results in increased retention, thereby increasing the profitability of a digital platform. These days, most of the influential platforms rely on their recommendations, which account for many of the users’ selections. A recommendation is based on how a user has behaved, for instance, clicking on items, watching content, and buying goods. It has to consider the long-term satisfaction of a user under the condition that the user’s tastes usually change over time [1]. It is important to point out that the recommendation task has long been regarded as equivalent to accurately predicting the ratings of a user. However, many recent practical studies focus on how to increase user retention [2,3], because it is often unclear to what extent the precision of rating prediction is correlated with the business success of a recommender [3].
The information used for recommending an item is usually stored in large databases which dynamically grow and build the source of knowledge regarding user behavior, so that recommendations of items can be suggested [4]. The decision of a user in a digital platform is likely to follow the cost–benefit theory of information presentation. This theory suggests that the strategy for a decision is adapted so that the joint cost of effort and errors in making a decision is minimized [5]. In general, the numerosity of different types of information tends to benefit certain sorts of options more than others and therefore has systematic effects on the choice of a user [6]. Zhang et al. [7] proposed Joint Representation Learning (JRL) as a general framework for a recommender that learns user–item representations in a unified space. The framework can incorporate different types of information, and its extendable version can integrate new types of information. Liu et al. [8] suggested a novel attention neural network which exploits an item’s multimodal features to estimate the user’s attention to various aspects of the item and eventually improve the accuracy of recommendations. They introduced a weight vector for each user–item pair, which uniquely describes the user’s attention to the different aspects of an item. Cheng et al. [9] proposed a novel aspect-aware latent factor model which first uses both textual reviews and images to learn the preferences of users on different aspects and then integrates the learned preferences into a rating-based matrix factorization model for accurate rating prediction. Still, these works propose methods for incorporating different features to improve the accuracy of a recommender, rather than studying the systematic effects of suggesting varying types of information about a recommended item for the betterment of user retention and engagement in a recommender-based platform.
In this paper, we study how suggesting various types of information about a recommended item influences the behavior of a user. The various types of information include visual, evaluative, categorial, and narrational information. Based on movie datasets, the effect of each type of information has been verified by providing additional images, providing averaged rating information, providing genre information, and providing shortened synopsis information. The experimental results show how each type of information has affected user retention as well as user engagement and whether combining various types always leads to betterment. Eventually, our study can be useful in designing the digital platform based on a recommender system in which users can click on more items or spend more time on the platform.
The rest of this paper is organized as follows: Section 2 reviews related work, Section 3 presents our proposed approach, Section 4 presents the comparative experiments and discussion, and Section 5 concludes the paper.

2. Background

Recommenders are roughly divided into three categories: collaborative, content-based, and hybrid approaches [10,11]. A collaborative approach bases its predictions on the similarity among users measured from historical interactions, on the premise that similar users show similar patterns of preference and that similar items receive similar ratings [12]. For instance, a collaborative recommender for TV shows could predict which TV show a user should like based on the behavioral histories of other users’ feedback. Aggarwal (SAR) [13] suggested the Simple Algorithm for Recommendation (SAR), a fast and scalable method for personalized recommendations using user transaction history. The recommendations are explainable and interpretable. SAR recommends items similar to the ones that the user has already liked. Two items are determined to be similar if many users who rated one item well are also likely to leave positive feedback on the other. The method provides fast training and fast scoring: only simple counting is needed to construct the matrices used for training, and scoring only involves the multiplication of a similarity matrix with an affinity vector. He et al. (NCF) [14] introduced Neural Collaborative Filtering (NCF), a neural matrix factorization model ensembling Generalized Matrix Factorization (GMF) and a Multi-Layer Perceptron (MLP) to unify the linearity of matrix factorization and the non-linearity of the MLP for modeling the latent structures between users and items. The authors had GMF and the MLP learn separate embeddings to allow for flexibility in the fused model and combined the two models by concatenating their last hidden layers. He et al. (L-GCN) [15] proposed to learn user and item embeddings by linearly propagating them over the user–item interactions, based on the neighborhood aggregation of a graph convolutional network (GCN) [16]; the weighted summation of the embeddings from all layers was used.
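To make the counting-based construction concrete, the following minimal sketch (an illustration under our own assumptions, not the reference SAR implementation) builds an item-to-item similarity matrix by simple co-occurrence counting and scores items by multiplying it with a user's affinity vector:

```python
import numpy as np

# Binary user-item interaction matrix (3 users x 4 items); a stand-in for
# real transaction history.
R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1]], dtype=float)

# Item-item co-occurrence via simple counting: entry (i, j) is the number
# of users who interacted with both item i and item j.
cooccurrence = R.T @ R

# Rescale counts into a similarity matrix (Jaccard here; SAR supports
# several rescaling choices).
item_counts = np.diag(cooccurrence)
union = item_counts[:, None] + item_counts[None, :] - cooccurrence
similarity = cooccurrence / np.maximum(union, 1)

# Scoring is a single matrix-vector product: the user's affinity vector
# times the similarity matrix.
scores = R[0] @ similarity
print(scores)  # unseen items with high scores become recommendations
```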
The content-based approach is based on the description of an item and the profile of a user’s preferences. It treats the recommendation task as user-specific classification and learns a classifier for the user’s preferences based on the features of an item. For example, keywords are used to describe a set of movies, and a user profile is built to indicate the types of movie the user prefers. Yu et al. (SLi-Rec) [17] proposed a deep learning-based model that captures both long-term and short-term user preferences. It adopts the attentive feature of asymmetric singular value decomposition [18] for long-term modeling and accounts for both time irregularity and semantic irregularity by modifying the gating logic in LSTM [19]. It also uses an attention mechanism to dynamically fuse the long-term and short-term components. Juan et al. (FFM) [20] introduced the Field-aware Factorization Machine (FFM), an extension of the Factorization Machine (FM) [21]. Unlike the FM, this method uses different factorized latent factors for different groups (fields) of features. By putting features into fields, it addresses the issue that latent factors shared across features describing intuitively different categories of information might not generalize the correlations well.
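For reference, the field-aware pairwise interaction that distinguishes the FFM from a plain FM can be written as follows (standard formulation from the FFM literature), where $f_j$ denotes the field of feature $j$ and $\mathbf{w}_{j_1, f_{j_2}}$ is the latent vector that feature $j_1$ uses when interacting with field $f_{j_2}$:

$$\phi_{\mathrm{FFM}}(\mathbf{w}, \mathbf{x}) = \sum_{j_1=1}^{n} \sum_{j_2=j_1+1}^{n} \left\langle \mathbf{w}_{j_1, f_{j_2}}, \mathbf{w}_{j_2, f_{j_1}} \right\rangle x_{j_1} x_{j_2}$$

Because each feature keeps a separate latent vector per field, correlations learned against one field do not leak into another.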
Hybrid approaches combine the collaborative and content-based approaches [10,22]. Many studies have aggregated the predictions of the two approaches or integrated the characteristics of one into the other. Kula (L-FM) [23] suggested a hybrid matrix factorization which describes users and items as linear combinations of the latent factors of their content features. Cheng et al. (W&D) [24] proposed a framework that jointly trains feed-forward neural networks with embeddings and a feature transformation-based linear model for generic recommenders with sparse inputs.
Many of the recent studies have focused on enhancing the accuracy of rating prediction rather than improving user engagement or retention. It is therefore also important to study how the strategy for presenting recommendations can influence the behavior of a user in recommender systems.

3. Strategy: Different Types of Supplementary Information

3.1. Research Question

Before describing the strategy of suggesting different types of supplementary information, we state the main research question to clarify our proposed approach. Information cues can have different dimensions with varying influences on information load and, accordingly, on the decision of a user [25]. YouTube, for example, presents an item using the title, the thumbnail image, the upload time, the number of views, the channel label, etc. The way an item is presented indirectly but clearly has a strong influence on the behavior of a user.
We start by questioning whether different types of information lead to different consequences in a user’s response in a recommender system and, if so, how much the degree of influence differs between the various types of suggested information. We therefore study the effects of suggesting different types of information, namely visual, evaluative, categorial, and narrational information, as listed in Table 1, when the title and the poster image of a movie are provided to the user by default.

3.2. Visual Information

Visual information helps a user understand increasingly rich databases of recommended items and biases the decision of a user by focusing attention on a limited set of options [9,26]. Many studies report that the content of a movie poster is generally meant to match the kind of motivation a user has when selecting the movie [27]. To verify the impact of the quantity of visual information, presenting a single movie poster was compared to presenting multiple images that include different versions of movie posters and scenes from the movie. (A Python script was used to automatically obtain the additional images by searching Google with the title and release year, along the lines of the sketch below.)
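A minimal sketch of such a script follows; it is illustrative only (Google's result markup changes frequently and automated requests may be blocked, so a production pipeline would rather rely on an official image search API), and the function and parameter names are our own:

```python
import requests
from bs4 import BeautifulSoup

def fetch_candidate_images(title: str, year: int, max_images: int = 10) -> list:
    """Collect candidate image URLs for a movie by title and release year."""
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": f"{title} {year} movie poster", "tbm": "isch"},
        headers={"User-Agent": "Mozilla/5.0"},  # plain clients are often rejected
        timeout=10,
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    # Keep only absolute image URLs found in the result page.
    urls = [img["src"] for img in soup.find_all("img")
            if img.get("src", "").startswith("http")]
    return urls[:max_images]

print(fetch_candidate_images("The Matrix", 1999))
```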
While exploring the additionally suggested images, a user has the chance to find out that a specific actor appears in the movie or that a certain background scene is interesting. Users tend to be more interested in a movie when they actively explore more visual information about it. In this experiment, the supplementary visual information of an item was suggested to users in order to study its effect on their behavior. The Netflix dataset was used, and Figure 1 shows an example of the additional images, as well as the poster, of a movie that a participant could explore during the experiment.

3.3. Evaluative Information

A number of platforms allow users to submit reviews of a movie and aggregate the collected ratings into an average. Users interactively express their opinions on movies through community-driven reviews, and the results are presented as an overall rating for each movie. Ratings from a community lead to a bandwagon effect [28], meaning that an item with a positive rating is likely to be clicked more frequently [29]. As such, ratings as evaluative information play an important role when a user makes a decision in the recommendation environment. Items with a high rating are perceived as more credible than items with a low rating. In this experiment, the Tweetings dataset was used, which contains around 894K ratings obtained from around 70K users’ tweets on the Twitter platform. The average rating of each item was calculated, as in the sketch below, and a user could refer to this rating information during the experiment. Figure 2 depicts examples of items with their average rating scores.
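As a minimal sketch of this computation (the file and column names are assumptions based on the MovieTweetings distribution format), the average rating per movie can be obtained as follows:

```python
import pandas as pd

# MovieTweetings ratings are distributed as "user_id::movie_id::rating::timestamp".
ratings = pd.read_csv("ratings.dat", sep="::", engine="python",
                      names=["user_id", "movie_id", "rating", "timestamp"])

# Average rating per movie (0-10 scale), shown next to each recommended item.
avg_rating = ratings.groupby("movie_id")["rating"].mean().round(1)
```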

3.4. Categorial Information

The category of a movie, its so-called genre, is a motion-picture category based on similarities in narrative, aesthetic, or emotional responses to the movie. Users are generally sensitive to within-category correlations [30], such that an item from a preferred category is more likely to be selected by a user. The poster and title of a movie often do not sufficiently convey an impression of the movie. The genre can subsidiarily provide general information before the user makes a decision, as it affects the familiarity association as well as the visual and emotional associations in the decision-making process of a user [31]. MovieLens contains the genre information of each movie, and a movie can belong to more than one genre (see the sketch after this paragraph). In this experiment, we presented the categorial information for each recommended item to the participant, who was then more likely to better guess what a movie could be about. Figure 3 shows examples of movies with their genre information.
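Since a movie can carry several genres at once, the pipe-separated genre strings in MovieLens can be expanded into multi-hot indicators, e.g., with the sketch below (file and column names follow the MovieLens 25M release; this is an illustration, not part of our pipeline):

```python
import pandas as pd

# MovieLens stores genres as a pipe-separated string, e.g., "Adventure|Comedy".
movies = pd.read_csv("movies.csv")  # columns: movieId, title, genres

# One indicator column per genre; a movie can be marked in several columns.
genre_flags = movies["genres"].str.get_dummies(sep="|")
movies = pd.concat([movies, genre_flags], axis=1)
```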

3.5. Narrational Information

A synopsis is a brief summary that gives users a depictive idea of what a movie is about. It provides an overview of the storyline and other defining factors of the movie for the benefit of a potential viewer. Consistency between the movie genre and the linguistic cues in a movie synopsis positively affects the decision of a user when the cues confirm the user’s expectancy; conversely, a synopsis might negatively affect the decision when the cues disconfirm that expectancy [32].
In this experiment, a Yahoo dataset, which contains the synopsis data of movies, was used to provide a user with the narrational information of a movie. Figure 4 shows examples of providing the synopsis of a movie.

4. Experiment

4.1. Experimental Setup

4.1.1. Dataset

  • Netflix dataset [33]: The Netflix Prize dataset contains over 100 million ratings from 480 thousand randomly chosen customers on over 17 thousand movies and shows (see Figure 5). The data were collected between 1998 and 2005, with ratings on a scale from 1 to 5. Each customer ID was replaced by a random ID to protect privacy. The dataset includes the date of each rating and the title and year of release of each movie.
  • Tweetings dataset [34]: The MovieTweetings dataset was gathered automatically and therefore depends on the continued availability of the IMDb apps and the Twitter API (see Figure 6). The dataset consists of 893,866 ratings extracted from tweets by 69,832 users. It contains 36,737 different movies and shows, and the ratings are scaled from 0 to 10.
  • MovieLens dataset [35]: The MovieLens (25M) dataset was made by the GroupLens Research Project (University of Minnesota) and consists of 25,000,095 ratings ranging from 1 to 5 (see Figure 7). The feedback was collected between 1995 and 2019 from 162,541 users on 62,423 movies through the MovieLens website. Users who wrote fewer than 20 ratings were excluded. A movie can belong to multiple genres among 18 different genres.
  • Yahoo dataset [36]: The Yahoo movie dataset was collected by Yahoo! Research through the Yahoo Movies website (see Figure 8). 7642 users rated 5808 movies using a 13-level rating scale ranging from A+ to F (or from 1 to 13), along with a large amount of descriptive information about the movies, including synopsis, genre, ratings, etc.

4.1.2. Evaluation

The click-through rate (CTR) [38,39,40,41], a common user retention metric [37], was adopted to directly evaluate the impact of the methodology. The metric measures the ratio of clicks to recommendations, as presented in Equation (1).
$$\mathrm{CTR} = \frac{\text{Number of click-throughs}}{\text{Number of recommendations}} \times 100\ (\%) \tag{1}$$
Additionally, the mean average precision [42,43], $MAP@K$, reflects both the sequence of feedback and the total number of item engagements of a user (see Equation (2), where $S$ is the number of samples). The average precision, $AP@K$, is defined as the sum of the precision values at engaged positions divided by the number of item engagements, $m$, where $P(i)$ indicates the precision at position $i$ and $\delta(i)$ is the indicator function for engagement (as in Equations (3)–(5)).
$$MAP@K = \frac{1}{S} \sum_{n=1}^{S} \left( AP@K \right)_n \tag{2}$$
$$AP@K = \frac{1}{m} \sum_{i=1}^{K} P(i) \times \delta(i) \tag{3}$$
$$P(i) = \frac{\left.\text{Number of relevant recommendations}\right|_{i}}{\left.\text{Number of recommendations}\right|_{i}} \tag{4}$$
$$\delta(i) = \begin{cases} 1 & \text{if the } i\text{-th recommendation is engaged,} \\ 0 & \text{otherwise.} \end{cases} \tag{5}$$
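A minimal sketch of these metrics, assuming per-user lists of engagement flags ordered by recommendation rank (and taking $m$ as the number of engagements within the top $K$), could look as follows:

```python
def ctr(num_clicks: int, num_recommendations: int) -> float:
    """Click-through rate of Equation (1), in percent."""
    return num_clicks / num_recommendations * 100


def ap_at_k(engaged: list, k: int) -> float:
    """AP@K of Equations (3)-(5); engaged[i] flags whether the (i+1)-th
    recommendation was engaged."""
    hits, acc = 0, 0.0
    for i in range(min(k, len(engaged))):
        if engaged[i]:                 # delta(i) = 1
            hits += 1
            acc += hits / (i + 1)      # precision P(i) at an engaged rank
    return acc / hits if hits else 0.0


def map_at_k(samples: list, k: int) -> float:
    """MAP@K of Equation (2), averaged over S samples."""
    return sum(ap_at_k(s, k) for s in samples) / len(samples)


print(map_at_k([[1, 0, 1, 0], [0, 1, 1, 1]], k=4))
```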
A/B experiments [2,37,44] were conducted to verify the effect of diversifying the information suggested to a user, based on different baseline methods. We collected data from 33 participants who were asked to click on all the items that they were interested in. Seven baseline methods were compared: SAR [13], NCF [14], L-GCN [15], SLi-Rec [17], FFM [20], L-FM [23], and W&D [24]. We did not inform the participants of which method was applied in each experiment. The default parameter settings of the baseline methods’ open-source implementations were used.

4.2. Result and Discussion

4.2.1. The Visual Information

We tested the effect of providing additional visual information on the decisions of users. At most 10 images, including different posters and scenes, were available before a decision was made. The posters often contained important keywords, and the scenes provided information about major figures or backgrounds of the movie. Pictures played an influential role in guiding a user to take an action, exhibiting the picture superiority effect [45], i.e., the conceptual or perceptual processing advantage of visual information. Compared to being shown a single image, users tended to take more interest in a recommended movie when multiple images were provided. Each user has specific visual preferences that are more likely to be satisfied by multiple images than by a single image.
Table 2 presents the averaged results of the click-through rate and the mean average precision comparatively, showing the difference obtained by providing more cues of visual information (also see Figure 9). User retention increased by a significant degree for every baseline method, and the improvement was largely consistent in terms of both the click-through rate and the mean average precision. It should also be pointed out that the collaborative approach (SAR, NCF, and L-GCN) benefited from the complementary visual information more than the content-based approach (SLi-Rec and FFM) or the hybrid approach (L-FM and W&D).

4.2.2. The Evaluative Information

We then tested the effect of providing evaluative information on the decisions of users. The given cue of ratings functioned effectively because users mostly place trust in the statistics of other users. Users started paying attention to the movies with a relatively higher rating or finally made up their minds by finding the rational basis for a decision in the statistics. When making a choice is difficult, the rating information from many other users carries as much credibility as an expert recommendation. Compared to the condition excluding the evaluative information, user retention was substantially improved by including it. Users were inclined to turn their attention to movies with a high rating, even when they did not find those movies interesting at first.
Table 3 shows the averaged results of the click-through rate and mean average precision, which highlight the difference made by providing the evaluative information of a movie. User retention was remarkably improved by the evaluative information in all cases, as shown in Figure 10. Similar to the previous case of visual information, the collaborative approach was more positively influenced by the evaluative information than the content-based approach. In this case, however, the hybrid approach showed an even greater benefit from supplementarily suggesting the rating data of a movie.

4.2.3. The Categorial Information

Next, the effect of providing categorial information was investigated. A movie can be classified into multiple genres, and users have a favorable view of movies that belong to their favorite genres. The genre information efficiently supplements the limited information conveyed by the title and poster image, because users usually begin by identifying whether a movie belongs to their preferred genres. Although user retention was improved by the categorial information, it was not as effective as the evaluative or visual information. Still, the additional information helped users become interested in certain movies and induced click-throughs to a greater or lesser extent.
Table 4 shows the averaged results of the click-through rate and mean average precision, and the effect of suggesting the categorial information of a movie can be observed. Unlike the visual information and evaluative information, the content-based approach benefited more from the categorial information. Figure 11 also visually conveys that the content-based approach, which is based on learning user-specific classifiers for the user’s preferences based on the description of an item, takes advantage of the categorial information.

4.2.4. The Narrational Information

The effect of providing narrational information was verified using the synopsis of a movie. A user was able to open and read the attached synopsis, which satisfies the user’s curiosity about the details of a recommended movie. However, reading straight through a synopsis takes quite a long time; as such, many users skimmed the synopsis for salient keywords.
The results intuitively convey that the additional information helped encourage user retention, but the narrational information was not as fruitful as the visual or evaluative information. Table 5 presents the averaged results of the click-through rate and mean average precision and compares the difference created by providing the synopsis of a movie. The hybrid approach benefited the most from the narrational information (also see Figure 12).

4.2.5. Aggregation of Supplementary Information

Providing additional information about an option can both help and hinder users’ evaluation of items [46]. We studied whether multiple types of supplementary information always lead to better user retention in a recommender system. The test was conducted under the circumstance that a recommender system simultaneously gives users evaluative, categorial, and narrational information. Interestingly, providing all three types produced a lower click-through rate and mean average precision than providing only the evaluative information. Figure 13 visualizes the averaged results of the click-through rate and mean average precision of the different methods compared in Table 6. This diminishing effect of abundant supplementary information offers contrary evidence that adding more information does not always help a user click more or spend more time on a recommender-based platform.
To summarize the experimental results, Figure 14 depicts the level of improvement achieved with each type of supplementary information listed in Table 7: visual, evaluative, categorial, and narrational information, as well as the aggregation of several types. In short, averaged over the baseline methods, the click-through rate was enhanced by 12.84% with visual information, by 25.14% with evaluative information, by 8.20% with categorial information, by 14.04% with narrational information, and by 18.02% with aggregated information.

5. Conclusions

We studied the effects of suggesting different types of supplementary information to a user in terms of improving user retention in a recommender system. By understanding these effects, a digital platform can better design a structure that satisfies users, who would then click on more items suggested by the recommender system and spend more time on the platform. This study is derived from the idea that there are many potential ways to assist users in finding interesting items, other than high-precision rating prediction. We found that visual, evaluative, categorial, and narrational information affect the decision of a user differently. Firstly, we observed that providing supplementary information is generally effective in improving user retention. It is worth noting that the rating of an item, as evaluative information, plays an important role in giving a user the confidence to click on the item. Secondly, certain types of information can be more effective for certain approaches: for example, visual information better helps the collaborative approach, categorial information has much to do with the content-based approach, and the hybrid approach benefits more from evaluative and narrational information. In addition, we showed a simple counterexample in which the richness of supplementary information did not improve user retention. This leaves the open question of how different combinations of supplementary information influence the decision of a user.
In future work, we plan to extend this approach to examine the effects of other types of information, such as acoustic and dynamic information (e.g., video highlights). Furthermore, the effect of different combinations of information should be studied. Finally, further research is needed on which practical, cognitive-science-based approaches, e.g., regarding visual saliency and interaction with user feedback, can lead to an improved recommender system.

Author Contributions

Y.B. conceived, designed, and performed the experiments; Y.B. and K.L. analyzed the data and wrote the paper. Both authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1G1A1102041).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jagerman, R.; Markov, I.; de Rijke, M. When people change their mind: Off-policy evaluation in non-stationary recommendation environments. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 447–455.
  2. Gomez-Uribe, C.A.; Hunt, N. The Netflix recommender system: Algorithms, business value, and innovation. ACM Trans. Manag. Inf. Syst. 2015, 6, 1–19.
  3. Jannach, D.; Jugovac, M. Measuring the business value of recommender systems. ACM Trans. Manag. Inf. Syst. 2019, 10, 1–23.
  4. Pajuelo-Holguera, F.; Gómez-Pulido, J.A.; Ortega, F. Performance of two approaches of embedded recommender systems. Electronics 2020, 9, 546.
  5. Vessey, I. The effect of information presentation on decision making: A cost-benefit analysis. Inf. Manag. 1994, 27, 103–119.
  6. Gupta, S.; Kapoor, S.; Gupta, P. Synthesis of a face image at a desired pose from a given pose. Pattern Recognit. Lett. 2012, 33, 1942–1950.
  7. Zhang, Y.; Ai, Q.; Chen, X.; Croft, W.B. Joint representation learning for top-n recommendation with heterogeneous information sources. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, Singapore, 6–10 November 2017; pp. 1449–1458.
  8. Liu, F.; Cheng, Z.; Sun, C.; Wang, Y.; Nie, L.; Kankanhalli, M. User diverse preference modeling by multimodal attentive metric learning. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1526–1534.
  9. Cheng, Z.; Chang, X.; Zhu, L.; Kanjirathinkal, R.C.; Kankanhalli, M. MMALFM: Explainable recommendation by leveraging reviews and images. ACM Trans. Inf. Syst. 2019, 37, 1–28.
  10. Shah, K.; Salunke, A.; Dongare, S.; Antala, K. Recommender systems: An overview of different approaches to recommendations. In Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–4.
  11. Patel, B.; Desai, P.; Panchal, U. Methods of recommender system: A review. In Proceedings of the 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–4.
  12. Chen, S.; Wu, M. Attention Collaborative Autoencoder for Explicit Recommender Systems. Electronics 2020, 9, 1716.
  13. Aggarwal, C.C. An introduction to recommender systems. In Recommender Systems; Springer: Berlin, Germany, 2016; pp. 1–28.
  14. He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, Perth, Australia, 3–7 May 2017; pp. 173–182.
  15. He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; Wang, M. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Xi’an, China, 25–30 July 2020; pp. 639–648.
  16. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
  17. Yu, Z.; Lian, J.; Mahmoody, A.; Liu, G.; Xie, X. Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), Macao, China, 10–16 August 2019; pp. 4213–4219.
  18. Pu, L.; Faltings, B. Understanding and improving relational matrix factorization in recommender systems. In Proceedings of the 7th ACM Conference on Recommender Systems, Hong Kong, China, 12–16 October 2013; pp. 41–48.
  19. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  20. Juan, Y.; Zhuang, Y.; Chin, W.S.; Lin, C.J. Field-aware factorization machines for CTR prediction. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 17 September 2016; pp. 43–50.
  21. Rendle, S. Factorization machines. In Proceedings of the 2010 IEEE International Conference on Data Mining, Sydney, Australia, 13–17 December 2010; pp. 995–1000.
  22. Singh, R.; Rani, A. A Survey on the Generation of Recommender Systems. Int. J. Inf. Eng. Electron. Bus. 2017, 9, 26.
  23. Kula, M. Metadata Embeddings for User and Item Cold-start Recommendations. In Proceedings of the 2nd Workshop on New Trends on Content-Based Recommender Systems Co-Located with 9th ACM Conference on Recommender Systems (RecSys 2015), Vienna, Austria, 16–20 September 2015; Bogers, T., Koolen, M., Eds.; Volume 1448, pp. 14–21.
  24. Cheng, H.T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, Boston, MA, USA, 15 September 2016; pp. 7–10.
  25. Iselin, E.R. The effects of information load and information diversity on decision quality in a structured decision task. Account. Organ. Soc. 1988, 13, 147–164.
  26. Lurie, N.H.; Mason, C.H. Visual representation: Implications for decision making. J. Mark. 2007, 71, 160–177.
  27. Stokmans, M. Effectiveness of promotional film posters. In Proceedings of the 10th International Conference on Arts and Cultural Management, Aix-en-Provence, France, 28 June–1 July 2015.
  28. Nadeau, R.; Cloutier, E.; Guay, J.H. New evidence about the existence of a bandwagon effect in the opinion formation process. Int. Political Sci. Rev. 1993, 14, 203–213.
  29. Riedl, C.; Blohm, I.; Leimeister, J.M.; Krcmar, H. The effect of rating scales on decision quality and user attitudes in online innovation communities. Int. J. Electron. Commer. 2013, 17, 7–36.
  30. Chin-Parker, S.; Ross, B.H. The effect of category learning on sensitivity to within-category correlations. Mem. Cogn. 2002, 30, 353–362.
  31. Kork, Y. The Influence of Film Genres on the Tourist’s Decision Making Process; University of Exeter: Exeter, UK, 2013.
  32. Hung, Y.C.; Guan, C. Winning box office with the right movie synopsis. Eur. J. Mark. 2020.
  33. Netflix. Netflix Prize Dataset. Available online: netflixprize.com (accessed on 15 January 2021).
  34. Dooms, S.; De Pessemier, T.; Martens, L. MovieTweetings: A movie rating dataset collected from Twitter. In Proceedings of the Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys, Hong Kong, China, 12–16 October 2013; Volume 2013, p. 43.
  35. Harper, F.M.; Konstan, J.A. The MovieLens datasets: History and context. ACM Trans. Interact. Intell. Syst. 2015, 5, 1–19.
  36. Yahoo! Yahoo! Movies User Ratings and Descriptive Content Information. Available online: webscope.sandbox.yahoo.com (accessed on 15 January 2021).
  37. Chen, M.; Beutel, A.; Covington, P.; Jain, S.; Belletti, F.; Chi, E.H. Top-k off-policy correction for a REINFORCE recommender system. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 456–464.
  38. Haruna, K.; Akmar Ismail, M.; Suhendroyono, S.; Damiasih, D.; Pierewan, A.C.; Chiroma, H.; Herawan, T. Context-aware recommender system: A review of recent developmental process and future research direction. Appl. Sci. 2017, 7, 1211.
  39. Feng, Y.; Lv, F.; Hu, B.; Sun, F.; Kuang, K.; Liu, Y.; Liu, Q.; Ou, W. MTBRN: Multiplex Target-Behavior Relation Enhanced Network for Click-Through Rate Prediction. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Virtual Event, Ireland, 19–23 October 2020; pp. 2421–2428.
  40. Zhang, D.; Liu, L.; Wei, Q.; Yang, Y.; Yang, P.; Liu, Q. Neighborhood Aggregation Collaborative Filtering Based on Knowledge Graph. Appl. Sci. 2020, 10, 3818.
  41. Huang, R.; McIntyre, S.; Song, M.; Ou, Z. An Attention-Based Latent Information Extraction Network (ALIEN) for High-Order Feature Interactions. Appl. Sci. 2020, 10, 5468.
  42. Schütze, H.; Manning, C.D.; Raghavan, P. Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2008; Volume 39.
  43. Ban, Y.; Lee, K. Re-Enrichment Learning: Metadata Saliency for the Evolutive Personalization of a Recommender System. Appl. Sci. 2021, 11, 1733.
  44. Kohavi, R.; Longbotham, R. Online Controlled Experiments and A/B Testing. Encycl. Mach. Learn. Data Min. 2017, 7, 922–929.
  45. Stenberg, G. Conceptual and perceptual factors in the picture superiority effect. Eur. J. Cogn. Psychol. 2006, 18, 813–847.
  46. Keller, K.L.; Staelin, R. Effects of quality and quantity of information on decision effectiveness. J. Consum. Res. 1987, 14, 200–213.
Figure 1. An example of providing additional images (Netflix dataset).
Figure 2. Examples of providing rating information (Tweetings dataset).
Figure 3. Examples of providing genre information (MovieLens dataset).
Figure 4. Examples of providing synopsis information (Yahoo dataset).
Figure 5. Examples of items in the Netflix dataset.
Figure 6. Examples of items in the Tweetings dataset.
Figure 7. Examples of items in the MovieLens dataset.
Figure 8. Examples of items in the Yahoo dataset.
Figure 9. Visualization of A/B experiments for visual information (Netflix dataset).
Figure 10. Visualization of A/B experiments for evaluative information (Tweetings dataset).
Figure 11. Visualization of A/B experiments for categorial information (MovieLens dataset).
Figure 12. Visualization of A/B experiments for narrational information (Yahoo dataset).
Figure 13. Visualization of A/B experiments for the aggregation of information (Yahoo dataset).
Figure 14. Comparing the degree of improvement in different cases.
Table 1. Hypothesized cues for the research question.

Question                | Content
Visual information      | The images of an item (e.g., movie scenes)
Evaluative information  | The rating of an item (e.g., movie rating)
Categorial information  | The category of an item (e.g., movie genre)
Narrational information | The linguistic description of an item (e.g., movie synopsis)
Table 2. The results of A/B experiments for visual information on the Netflix dataset (A: single image; B: multiple images).

Method  | CTR: A | CTR: B | B−A    | (B−A)/A  | MAP@K: A | MAP@K: B | B−A    | (B−A)/A
SAR     | 18.58% | 21.89% | +3.31% | ↑ 17.81% | 0.314    | 0.335    | +0.021 | ↑ 6.73%
NCF     | 19.75% | 22.59% | +2.84% | ↑ 14.38% | 0.323    | 0.341    | +0.018 | ↑ 5.48%
L-GCN   | 19.93% | 22.72% | +2.79% | ↑ 14.00% | 0.327    | 0.346    | +0.019 | ↑ 5.88%
SLi-Rec | 16.84% | 18.90% | +2.06% | ↑ 12.23% | 0.304    | 0.329    | +0.025 | ↑ 8.30%
FFM     | 15.02% | 16.83% | +1.81% | ↑ 12.05% | 0.272    | 0.286    | +0.014 | ↑ 5.00%
L-FM    | 15.33% | 16.78% | +1.45% | ↑ 9.46%  | 0.277    | 0.289    | +0.013 | ↑ 4.63%
W&D     | 15.17% | 16.68% | +1.51% | ↑ 9.95%  | 0.257    | 0.277    | +0.020 | ↑ 7.83%
Table 3. The results of A/B experiments for evaluative information on the Tweetings dataset (A: w/o rating; B: w/rating).

Method  | CTR: A | CTR: B | B−A    | (B−A)/A  | MAP@K: A | MAP@K: B | B−A    | (B−A)/A
SAR     | 18.55% | 22.64% | +4.09% | ↑ 22.05% | 0.305    | 0.339    | +0.034 | ↑ 11.02%
NCF     | 19.42% | 24.79% | +5.37% | ↑ 27.65% | 0.311    | 0.345    | +0.033 | ↑ 10.74%
L-GCN   | 19.69% | 25.15% | +5.46% | ↑ 27.73% | 0.309    | 0.341    | +0.031 | ↑ 10.16%
SLi-Rec | 16.73% | 20.55% | +3.82% | ↑ 22.83% | 0.293    | 0.320    | +0.026 | ↑ 9.00%
FFM     | 15.43% | 18.73% | +3.30% | ↑ 21.39% | 0.283    | 0.313    | +0.030 | ↑ 10.52%
L-FM    | 15.05% | 19.32% | +4.27% | ↑ 28.37% | 0.267    | 0.310    | +0.043 | ↑ 16.07%
W&D     | 15.51% | 19.54% | +4.03% | ↑ 25.98% | 0.288    | 0.329    | +0.041 | ↑ 14.26%
Table 4. The results of A/B experiments for categorial information on the MovieLens dataset (A: w/o genre; B: w/genre).

Method  | CTR: A | CTR: B | B−A    | (B−A)/A  | MAP@K: A | MAP@K: B | B−A    | (B−A)/A
SAR     | 16.98% | 18.23% | +1.25% | ↑ 7.36%  | 0.297    | 0.305    | +0.009 | ↑ 2.90%
NCF     | 17.95% | 19.10% | +1.15% | ↑ 6.41%  | 0.303    | 0.314    | +0.011 | ↑ 3.57%
L-GCN   | 18.20% | 19.25% | +1.05% | ↑ 5.77%  | 0.307    | 0.319    | +0.011 | ↑ 3.64%
SLi-Rec | 15.29% | 16.53% | +1.24% | ↑ 8.11%  | 0.285    | 0.296    | +0.012 | ↑ 4.08%
FFM     | 13.28% | 14.93% | +1.65% | ↑ 12.42% | 0.248    | 0.262    | +0.014 | ↑ 5.64%
L-FM    | 13.59% | 14.98% | +1.39% | ↑ 10.23% | 0.254    | 0.267    | +0.013 | ↑ 5.20%
W&D     | 14.85% | 15.91% | +1.06% | ↑ 7.12%  | 0.256    | 0.274    | +0.018 | ↑ 6.83%
Table 5. The results of A/B experiments for narrational information on the Yahoo dataset (A: w/o synopsis; B: w/synopsis).

Method  | CTR: A | CTR: B | B−A    | (B−A)/A  | MAP@K: A | MAP@K: B | B−A    | (B−A)/A
SAR     | 15.92% | 17.61% | +1.69% | ↑ 10.62% | 0.296    | 0.311    | +0.015 | ↑ 5.07%
NCF     | 16.69% | 18.79% | +2.10% | ↑ 12.58% | 0.310    | 0.331    | +0.020 | ↑ 6.51%
L-GCN   | 17.05% | 18.98% | +1.93% | ↑ 11.32% | 0.314    | 0.337    | +0.023 | ↑ 7.28%
SLi-Rec | 14.32% | 16.46% | +2.14% | ↑ 14.94% | 0.313    | 0.320    | +0.007 | ↑ 2.27%
FFM     | 13.26% | 14.83% | +1.57% | ↑ 11.84% | 0.259    | 0.270    | +0.010 | ↑ 4.01%
L-FM    | 13.67% | 16.13% | +2.46% | ↑ 18.00% | 0.287    | 0.304    | +0.017 | ↑ 5.89%
W&D     | 13.40% | 15.94% | +2.54% | ↑ 18.96% | 0.272    | 0.291    | +0.019 | ↑ 6.80%
Table 6. The results of A/B experiments for the aggregation of information on the Yahoo dataset (A: w/o rating-genre-synopsis; B: w/rating-genre-synopsis).

Method  | CTR: A | CTR: B | B−A    | (B−A)/A  | MAP@K: A | MAP@K: B | B−A    | (B−A)/A
SAR     | 15.92% | 18.56% | +2.64% | ↑ 16.58% | 0.296    | 0.307    | +0.011 | ↑ 3.61%
NCF     | 16.69% | 19.85% | +3.16% | ↑ 18.93% | 0.310    | 0.332    | +0.021 | ↑ 6.83%
L-GCN   | 16.51% | 19.74% | +3.23% | ↑ 19.56% | 0.308    | 0.330    | +0.022 | ↑ 7.13%
SLi-Rec | 14.32% | 16.94% | +2.62% | ↑ 18.30% | 0.313    | 0.323    | +0.009 | ↑ 3.00%
FFM     | 13.26% | 15.56% | +2.30% | ↑ 17.35% | 0.259    | 0.273    | +0.014 | ↑ 5.44%
L-FM    | 13.60% | 15.99% | +2.39% | ↑ 17.57% | 0.287    | 0.310    | +0.023 | ↑ 8.09%
W&D     | 13.45% | 15.85% | +2.40% | ↑ 17.84% | 0.272    | 0.299    | +0.027 | ↑ 9.85%
Table 7. The degree of improvement in different cases (CTR / MAP@K).

Method  | Visual          | Evaluative       | Categorial      | Narrational     | Aggregation
SAR     | 17.81% / 6.73%  | 22.05% / 11.02%  | 7.36% / 2.90%   | 10.62% / 5.07%  | 16.58% / 3.61%
NCF     | 14.38% / 5.48%  | 27.65% / 10.74%  | 6.41% / 3.57%   | 12.58% / 6.51%  | 18.93% / 6.83%
L-GCN   | 14.00% / 5.88%  | 27.73% / 10.16%  | 5.77% / 3.64%   | 11.32% / 7.28%  | 19.56% / 7.13%
SLi-Rec | 12.23% / 8.30%  | 22.83% / 9.00%   | 8.11% / 4.08%   | 14.94% / 2.27%  | 18.30% / 3.00%
FFM     | 12.05% / 5.00%  | 21.39% / 10.52%  | 12.42% / 5.64%  | 11.84% / 4.01%  | 17.35% / 5.44%
L-FM    | 9.46% / 4.63%   | 28.37% / 16.07%  | 10.23% / 5.20%  | 18.00% / 5.89%  | 17.57% / 8.09%
W&D     | 9.95% / 7.83%   | 25.98% / 14.26%  | 7.12% / 6.83%   | 18.96% / 6.80%  | 17.84% / 9.85%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
