Article

The Influence of Social Stratification on Trust in Recommender Systems

by Dana Rad 1, Lavinia Denisia Cuc 2,*, Andrea Feher 3,4, Cosmin Silviu Raul Joldeș 5, Graziella Corina Bâtcă-Dumitru 6, Cleopatra Șendroiu 6, Robert Cristian Almași 2, Sabin Chiș 7 and Miron Gavril Popescu 8,*
1 Center of Research Development and Innovation in Psychology, Faculty of Educational Sciences, Psychology and Social Work, Aurel Vlaicu University of Arad, 310025 Arad, Romania
2 Faculty of Economics, Aurel Vlaicu University of Arad, 310025 Arad, Romania
3 Department of Economy and Firm Financing, University of Life Sciences “King Mihai I” from Timisoara, 300645 Timisoara, Romania
4 Research Center for Sustainable Rural Development of Romania, Romanian Academy—Branch of Timisoara, 300223 Timisoara, Romania
5 Faculty of International Business and Economics, Bucharest University of Economic Studies, 010374 Bucharest, Romania
6 Department of Accounting and Audit, Faculty of Accounting and Management Informatics, Bucharest University of Economic Studies, 010374 Bucharest, Romania
7 Faculty of Food Engineering, Tourism and Environment Protection, Aurel Vlaicu University of Arad, 310025 Arad, Romania
8 Faculty of Humanities and Social Sciences, Aurel Vlaicu University of Arad, 310025 Arad, Romania
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(10), 2160; https://doi.org/10.3390/electronics12102160
Submission received: 18 April 2023 / Revised: 8 May 2023 / Accepted: 8 May 2023 / Published: 9 May 2023
(This article belongs to the Special Issue Customer Experience in Online Retailing)

Abstract: This paper examines the impact of social stratification on trust in recommender systems. Recommender systems have become an essential tool for users navigating vast amounts of information online, but trust in these systems has become a concern. This study investigates whether social stratification, operationalized by socioeconomic status, affects trust in recommender systems. We first review the literature on trust in recommender systems and on social stratification, highlighting gaps in current research. We then describe our methodology, which involves the analysis of 487 valid, consented responses to an online survey completed by participants from different socioeconomic backgrounds. Specifically, we examine the influence of income on trust in recommender systems. Results showed a curvilinear relationship between income and trust in recommender systems, such that moderate income levels were associated with higher levels of trust, while both low and high income levels were associated with lower levels of trust. These findings suggest that income plays an important role in shaping users’ trust in recommender systems and highlight the need for future research on the complex interplay between social stratification and trust in technology.

1. Introduction

Recommender systems have become increasingly prevalent in modern society [1], providing users with personalized recommendations for products, services, and even social connections. These systems rely on algorithms that analyze user behavior and preferences to deliver targeted suggestions, and they are used in a variety of applications such as e-commerce, social networking, and content streaming platforms. There are several types of recommender systems, including content-based, collaborative filtering, and hybrid systems. Content-based systems recommend items similar to those a user has liked in the past, while collaborative filtering systems recommend items based on the preferences of other users with similar tastes. Hybrid recommender systems combine both approaches to provide more accurate and diverse recommendations. A recommender system typically collects user data such as browsing and purchase history, ratings and reviews, and social connections. These data are used to build a user profile, which is compared with other profiles in the system to identify similar users or items; recommendations are then generated from this analysis and presented to the user.
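As an illustration of the collaborative filtering idea described above, the following minimal Python sketch predicts a missing rating from the ratings of similar users. The ratings matrix, the `cosine_sim` and `predict` helpers, and all values are hypothetical illustrations, not part of any system studied in this paper.

```python
import numpy as np

# Toy user-item ratings matrix (rows = users, columns = items); 0 = not rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def predict(user, item, ratings):
    """Predict a rating as the similarity-weighted mean of other users' ratings."""
    others = [u for u in range(len(ratings)) if u != user and ratings[u, item] > 0]
    sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in others])
    vals = np.array([ratings[u, item] for u in others])
    return float(sims @ vals / sims.sum())

# User 0 has not rated item 2; the most similar user (user 1) rated it low,
# so the prediction is pulled toward a low value.
print(round(predict(0, 2, ratings), 2))
```

Real systems operate on millions of users and items and use far more sophisticated similarity and aggregation schemes, but the weighting-by-similar-users principle is the same.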
While recommender systems can be highly effective, there is growing concern about the role of social stratification in shaping the recommendations that users receive.
Social stratification refers to the hierarchical ranking of individuals and groups within a society based on factors such as income, education, and social status [2,3]. Research has shown that social stratification can influence the recommendations that users receive from recommender systems [4]. For example, users from higher socioeconomic backgrounds may receive recommendations for more expensive products or services, while users from lower socioeconomic backgrounds may receive recommendations for lower-priced items.
The influence of social stratification on trust in recommender systems is an important area of research, as trust is a key factor in the adoption and use of these systems. Trust in recommender systems is influenced by a number of factors, including the perceived accuracy and relevance of recommendations, the perceived fairness of the system, and the degree of transparency in how recommendations are generated [5,6,7]. However, the role of social stratification in shaping these factors is not well understood.
Research on trust in recommender systems has primarily focused on factors such as algorithmic transparency and user control over the system [8]. However, there is growing recognition that social stratification may play an important role in shaping users’ perceptions of recommender systems [9]. This research aims to explore the influence of social stratification on trust in recommender systems, with a focus on how socioeconomic status and other indicators of social stratification shape users’ perceptions of accuracy, fairness, and transparency in recommender systems.
Recommender systems have proven to be highly effective in improving user satisfaction and engagement, but they also pose several challenges related to user trust and transparency. In particular, users may be skeptical of recommendations that they perceive as biased or unfair, leading to a lack of trust and reduced engagement with the system. In this paper, we explore the role of social stratification in shaping user trust in recommender systems. We argue that social stratification can significantly affect user trust, as users may be more or less likely to trust recommendations depending on the perceived socioeconomic status of the recommender.
This research aims to fill an important gap in the literature by investigating the influence of social stratification on trust in recommender systems. Specifically, we focus on how socioeconomic status and other indicators of social stratification shape users’ perceptions of accuracy, fairness, and transparency in recommender systems. Our study is innovative in its approach to examining the relationship between social stratification and trust in recommender systems and has important implications for both researchers and practitioners. By identifying the mechanisms through which social stratification affects trust in recommender systems, we can develop more effective strategies for building trust and increasing the adoption and use of these systems by users from diverse socioeconomic backgrounds.
Our study extends previous research by using quadratic regression analysis to explore curvilinear effects of the predictor variable on the outcome variable. While previous studies have examined this relationship, they typically assume it is linear. Our study contributes to the literature by showing that the relationship is not always linear and may instead exhibit a curvilinear effect.

2. Theoretical Framework

This research is guided by social stratification theory, which posits that social inequalities are shaped by the interaction of structural factors such as education and income [10,11,12] with individual factors such as race and gender [13,14]. In the context of recommender systems, social stratification theory suggests that individual factors such as socioeconomic status and race may interact with algorithmic factors to shape the recommendations that users receive.
Social stratification theory is particularly relevant to the study of trust in recommender systems because it emphasizes the role of structural factors in shaping individual experiences and perceptions. Research has shown that individuals from different socioeconomic backgrounds may have different levels of trust in technology [15,16]. For example, individuals with higher levels of education may be more likely to trust recommender systems, while individuals with lower levels of education may be more skeptical. By exploring the influence of social stratification on trust in recommender systems, this research seeks to contribute to a deeper understanding of how social inequalities are perpetuated in the digital age.
Recommender systems have become ubiquitous in various online platforms, ranging from e-commerce websites [17] to social media platforms [18], providing personalized recommendations to users based on their past interactions with the system [19,20,21,22,23].
However, the accuracy and effectiveness of recommender systems rely heavily on the ability to accurately predict users’ preferences and recommend items that align with those preferences. The use of collaborative filtering (CF) algorithms and content-based filtering (CBF) algorithms is widespread in recommender systems, with CF being the most commonly used approach [24,25,26]. However, the efficacy of these algorithms is limited, as they often suffer from issues such as data sparsity, cold start problems, and lack of diversity in recommendations [8].
To address these issues, recent research has focused on developing more robust and effective recommender systems, integrating social and contextual information into the recommendation process, and incorporating explainability features to enhance users’ trust in the system [27,28]. This literature review provides an overview of the latest research in these areas and discusses the implications for building more effective and trustworthy recommender systems.

2.1. Robust Recommender Systems

Robust recommender systems are designed to mitigate the effects of data sparsity and the cold start problem, which can affect the accuracy and effectiveness of recommendations. Various techniques have been proposed to improve the robustness of recommender systems, including matrix factorization, clustering, and deep learning approaches.
Matrix factorization is a popular approach for collaborative filtering recommender systems, which involves factorizing the user-item interaction matrix into two low-rank matrices, representing users and items [5,29,30]. This approach can effectively handle the data sparsity issue and improve the accuracy of recommendations [31]. However, it may suffer from overfitting and lack of interpretability.
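A minimal sketch of the matrix factorization approach described above, assuming a tiny hypothetical set of observed ratings: the user-item matrix is factorized into two low-rank matrices by stochastic gradient descent over the observed entries only, which is what lets the method cope with data sparsity. The ratings, dimensions, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse toy ratings as (user, item, rating) triples; values are illustrative.
triples = [(0, 0, 5), (0, 1, 4), (1, 0, 4), (1, 2, 1),
           (2, 2, 5), (2, 3, 4), (3, 1, 1), (3, 3, 5)]
n_users, n_items, k = 4, 4, 2

# Low-rank factors: users P (n_users x k), items Q (n_items x k).
P = 0.1 * rng.standard_normal((n_users, k))
Q = 0.1 * rng.standard_normal((n_items, k))

lr, reg = 0.05, 0.02
for _ in range(500):                      # SGD over observed entries only
    for u, i, r in triples:
        err = r - P[u] @ Q[i]
        pu = P[u].copy()                  # cache before updating
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

# Reconstruction error on the observed ratings should now be small.
rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in triples]))
print(round(float(rmse), 3))
```

The regularization term (`reg`) addresses the overfitting risk mentioned above by penalizing large factor values.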
Clustering is another approach that has been used to improve the robustness of recommender systems. It involves grouping similar users or items based on their features and interactions, allowing the system to provide more diverse and personalized recommendations [8]. However, clustering requires a large amount of data and may not perform well when dealing with rare or unique items.
Deep learning approaches, such as neural networks, have also been applied to recommender systems, providing more accurate and robust predictions by learning complex non-linear relationships between users and items [32,33]. These approaches have been shown to outperform traditional matrix factorization methods, but they require large amounts of data and may not be suitable for small or sparse datasets.

2.2. Social-Aware Recommender Systems

Social-aware recommender systems aim to enhance the accuracy and effectiveness of recommendations by incorporating social information, such as users’ social network connections, into the recommendation process. The idea behind social-aware recommender systems is that users’ preferences and behaviors are influenced by their social connections [34] and incorporating this information can improve the accuracy of recommendations.
Various approaches have been proposed for social-aware recommender systems, including social collaborative filtering, social matrix factorization, and social network analysis. Social collaborative filtering (SCF) is a technique that combines collaborative filtering with social network analysis to provide more accurate and personalized recommendations [35]. SCF uses the social network structure to identify similar users and items, allowing the system to provide recommendations that are more aligned with users’ preferences.
Social matrix factorization (SMF) is another approach that integrates social information into the matrix factorization process by considering the social network connections between users and their interactions with items [36]. This approach can effectively handle the data sparsity issue and improve the accuracy of recommendations. However, it requires a large amount of social data and may not be suitable for small or sparse datasets [37].
Social network analysis (SNA) is a technique that analyzes the structure of social networks to identify influential users and communities and to understand the social influence on users’ preferences and behaviors [38]. SNA can be used to improve the accuracy and effectiveness of recommendations by identifying users who are likely to provide high-quality feedback or by recommending items that are popular among influential users [39,40].
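The simplest form of the SNA idea above can be sketched as computing degree centrality on a hypothetical friendship graph to identify influential users whose preferences could be weighted more heavily in recommendations. Real SNA pipelines use richer measures (betweenness, PageRank), but the principle is the same; the graph and names below are illustrative.

```python
# Hypothetical undirected friendship graph: user -> set of friends.
friends = {
    "ana": {"bob", "cri", "dan"},
    "bob": {"ana", "cri"},
    "cri": {"ana", "bob", "dan", "eva"},
    "dan": {"ana", "cri"},
    "eva": {"cri"},
}

# Degree centrality: fraction of the other users each user is connected to.
n = len(friends)
centrality = {u: len(nb) / (n - 1) for u, nb in friends.items()}

# Rank users by centrality; items liked by the top users could be boosted.
influencers = sorted(centrality, key=centrality.get, reverse=True)
print(influencers[0])
```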

2.3. Explainable Recommender Systems

Explainable recommender systems aim to enhance users’ trust and understanding of the recommendation process by providing explanations for the recommendations. These systems address the black-box nature of traditional recommender systems, where users may not understand the reasons behind the recommendations and may not trust the system as a result.
Various approaches have been proposed for explainable recommender systems, including rule-based approaches, case-based reasoning, and natural language generation [41]. Rule-based approaches involve using if–then rules to explain the recommendations to users, providing a transparent and understandable explanation for the recommendations [8]. Case-based reasoning involves using past cases and their outcomes to generate explanations for the recommendations [42]. Natural language generation involves generating natural language explanations for the recommendations, allowing users to understand the reasoning behind the recommendations in a more intuitive way [43].
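A rule-based explanation layer of the kind described above can be sketched in a few lines: each if-then rule inspects a user profile and a recommended item, and the first matching rule supplies the human-readable reason shown to the user. The rule set, field names, and data here are illustrative assumptions, not an implementation from the cited works.

```python
def explain(user, item):
    """Return the explanation of the first if-then rule that matches."""
    rules = [
        (lambda u, i: i["category"] in u["liked_categories"],
         "recommended because you previously liked {cat} products"),
        (lambda u, i: i["avg_rating"] >= 4.5,
         "recommended because it is highly rated by other users"),
        (lambda u, i: True,                       # fallback rule
         "recommended based on your overall browsing history"),
    ]
    for condition, template in rules:
        if condition(user, item):
            return template.format(cat=item["category"])

# Hypothetical profile and item.
user = {"liked_categories": {"books"}}
item = {"category": "books", "avg_rating": 4.7}
print(explain(user, item))
```

Because every rule is explicit, the mapping from system state to explanation is fully transparent, which is precisely the property that distinguishes this approach from black-box recommendation.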
The integration of robust, social-aware, and explainable features into recommender systems has significant implications for users’ trust in these systems. Trust is a critical factor in users’ adoption and continued use of recommender systems, as users are more likely to use and rely on systems that they trust [44]. Robust recommender systems can enhance users’ trust in the system by providing more accurate and diverse recommendations, which align with users’ preferences. Social-aware recommender systems can enhance users’ trust in the system by providing recommendations that are more personalized and aligned with users’ social connections. Explainable recommender systems can enhance users’ trust in the system by providing transparent and understandable explanations for the recommendations, allowing users to understand and trust the reasoning behind the recommendations.
However, the integration of these features into recommender systems also raises ethical and privacy concerns. Social-aware recommender systems may collect sensitive user information, such as social network connections, raising privacy concerns [26,45], and explainable recommender systems may reveal sensitive information about users’ preferences and behaviors, raising ethical concerns. It is therefore essential to balance the benefits of these features against these concerns and to ensure that users’ privacy and ethical concerns are adequately addressed.
In conclusion, the development of more robust, social-aware, and explainable recommender systems has the potential to enhance users’ trust, which is critical for the adoption and continued use of these systems. Future research in this area should focus on designing recommender systems that deliver these benefits while adequately addressing the associated ethical and privacy concerns.
This research seeks to answer the following questions:
  • How does social stratification influence trust in recommender systems?
  • How do individual factors such as socioeconomic status interact with algorithmic factors to shape users’ perceptions of accuracy, fairness, and transparency in recommender systems?
  • How can the design of recommender systems be improved to address issues related to social stratification and trust?

3. Methodology

3.1. Participants

We conducted an online survey to investigate the relationship between social stratification and trust in recommender systems. The survey consisted of two sections: (1) demographic information, including social stratification measures, and (2) trust in recommender systems (robust, explainable, and social-aware). We recruited participants through social media and online forums, targeting individuals who had used recommender systems before. In total, 487 participants were recruited through online social media platforms using a convenience sampling methodology. Participants were required to be at least 18 years old and to have experience using recommender systems and shopping online for diverse items and services. The sample consisted of 35% males and 65% females, with a mean age of 27 years (SD = 11.16).
In terms of education, there were 14 respondents (2.9%) who reported gymnasium studies, 237 respondents (48.7%) who reported high school studies, 168 respondents (34.5%) who reported higher education degrees, 60 respondents (12.3%) who reported master’s studies, and 8 respondents (1.6%) who reported PhD-level studies.
In terms of monthly income, we considered six categories: 190 respondents (39.0%) reported no stable monthly income, 47 (9.7%) reported a monthly income between RON 1500 and 2000, 123 (25.3%) between RON 2000 and 4000, 76 (15.6%) between RON 4000 and 6000, 22 (4.5%) between RON 6000 and 8000, and 29 (6.0%) over RON 8000.
We used regression analysis to examine the relationship between social stratification, measured through income, and trust in recommender systems.

3.2. Instruments

The instrument used in this study was a newly designed scale, the Trust in Recommender Systems Scale, comprising 10 items rated on a 1-to-5 Likert scale, where 1 stands for total disagreement and 5 for total agreement with the statement. The scale was inspired by previous research [46,47,48,49] and designed to measure trust in recommender systems—specifically, robust, explainable, and social-aware recommender systems. In terms of instrument items, one of the items measuring trust in explainable recommender systems was Item 4: “I appreciate recommendation systems that give me explanations for recommended products or services.” For trust in social-aware recommender systems, an example item was Item 9: “If a product or service has received negative reviews from other people, I no longer consider buying it.” For trust in robust recommender systems, an example item was Item 3: “Recommender systems I’ve interacted with effectively filter out information I don’t need.”
For the three recommender system subscales (explainable, robust, and social-aware), respondents’ answers spanned the full range of the scale, with a minimum of 1 and a maximum of 5 on all three subscales. The mean score is 3.42 for explainable recommender systems, 3.28 for robust recommender systems, and 3.80 for social-aware recommender systems, with standard deviations of 0.76, 0.67, and 0.78, respectively.
The grand mean score for the general score of the Trust in Recommender Systems Scale is 3.47, and the standard deviation is 0.61.
The scale’s latent factor structure was investigated through exploratory factor analysis; the scale demonstrated good internal consistency (Cronbach’s alpha = 0.81), with a Hotelling’s T-squared coefficient of 49.867, significant at p < 0.001. The exploratory factor analysis, performed with a fixed number of three factors, yielded robust additional fit indices: an RMSEA of 0.051 and a TLI of 0.963.
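For readers who wish to reproduce the internal-consistency check, Cronbach's alpha can be computed directly from an item-score matrix. The sketch below uses small hypothetical Likert responses for illustration, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 1-5 Likert responses: 6 respondents, 4 items.
scores = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(scores), 2))
```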

3.3. Research Plan

Participants completed the online survey that included the trust in recommender systems scale and demographic questions. Data were collected and analyzed using descriptive statistics and regression analysis. To test the curvilinear effect of income on trust in recommender systems, we conducted a quadratic regression analysis including a new variable standing for squared income.
The methodology used to depict curvilinear effects with quadratic regression analysis involves the use of a quadratic term in the regression equation.
In a typical linear regression, the relationship between the dependent variable and the independent variable is modeled using a straight line. However, in some cases, the relationship may not be linear, but instead exhibit a U-shaped or inverted U-shaped curve. In such cases, a quadratic regression analysis can be used to model the curvilinear relationship. The quadratic term is created by squaring the independent variable and adding it as an additional predictor in the regression equation, along with the linear term. The resulting equation can then be used to estimate the values of the dependent variable for different values of the independent variable.
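The procedure described above can be sketched directly: synthetic data with a known inverted-U shape are generated (an assumption for illustration only), the squared term is added to the design matrix alongside the linear term, and ordinary least squares recovers a negative quadratic coefficient whose vertex marks the predictor value at which the outcome peaks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inverted-U data: the outcome peaks at a moderate predictor value.
income = rng.uniform(0, 10, 200)
trust = 2 + 0.8 * income - 0.08 * income**2 + rng.normal(0, 0.3, 200)

# Design matrix with intercept, linear term, and squared (quadratic) term.
X = np.column_stack([np.ones_like(income), income, income**2])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)
b0, b1, b2 = coef

# A negative quadratic coefficient indicates an inverted-U curve;
# the vertex (peak of the outcome) lies at -b1 / (2 * b2).
print(b2 < 0, round(-b1 / (2 * b2), 1))
```

Comparing the fit of the linear-only model against the model with the squared term is exactly the test of curvilinearity used in Section 4.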
The resulting graph of a quadratic regression analysis typically shows a curved line, instead of a straight line, that fits the data points better than a linear regression line. The curve may be a U-shape or an inverted U-shape, depending on the nature of the relationship between the dependent and independent variables.
The single hypothesis tested was:
H1. 
A curvilinear relationship exists between the level of income and general trust in recommender systems.

4. Results

Before running the quadratic regression analysis, we first investigated the correlations among the research variables: monthly income, trust in the three different types of recommender systems (robust, explainable, and social-aware), and the general score of trust in recommender systems (Table 1).
The correlation matrix indicates that income has a negligible correlation with all other variables: all its correlation coefficients are close to zero and not statistically significant. In contrast, the other variables show strong and significant correlations with each other. The correlation between explainable and robust recommender systems is high (0.739), indicating that users who trust explainable recommender systems are also likely to trust robust recommender systems and vice versa. Similarly, the correlation between robust and social-aware recommender systems is significant (0.420), suggesting that users who trust robust recommender systems are also likely to trust social-aware recommender systems. These findings imply that improving the explainability, robustness, and social awareness of recommender systems could increase trust among users. Additionally, the strong positive correlations between the different types of recommender systems highlight the importance of considering multiple factors when designing and evaluating recommender systems. Next, we tested whether respondents’ trust in the three types of recommender systems differs across the six levels of income. Results are presented in Table 1 and Table 2.
For the explainable recommender system, participants with an income range of RON 4000–6000 had the highest mean rating (3.61), followed by those with an income range of RON 2000–4000 (3.53); those with no stable monthly income had the lowest mean rating (3.35). The overall mean rating was 3.42. For the robust recommender system, participants with an income range of RON 4000–6000 had the highest mean rating (3.51), followed by those with an income range of RON 2000–4000 (3.33); those with no stable monthly income had the lowest mean rating (3.18). The overall mean rating was 3.28. For the social-aware recommender system, participants with an income range of RON 2000–4000 had the highest mean rating (3.91), followed by those with an income range of RON 4000–6000 (3.82) and those with no stable monthly income (3.79). The overall mean rating was 3.80.
The output also includes the model’s fixed and random effects. The between-component variance indicates the degree of variability in the means between groups due to unmeasured factors or random error.
In Table 2, the ANOVA test evaluates whether there is a significant difference between the means of the income groups for each recommender system. The test of homogeneity of variances assesses whether the variances of the groups are equal or not.
Table 2 presents descriptive statistics and the test of homogeneity of variances (Levene’s test), which assesses whether the variability within each group is approximately equal. The results indicate that variances are not equal across all groups: significant results were registered for explainable recommender systems (F = 3.582, p < 0.01) and robust recommender systems (F = 4.031, p < 0.01), but not for social-aware recommender systems.
In Figure 1 we depict the means obtained by participants for the trust in the three different types of recommender systems based on their level of income.
A quadratic regression analysis was conducted to examine the relationship between trust in the recommendation system (the dependent variable) and two predictors: income and income squared. Testing for curvilinear effects requires a sufficient range of values for the independent variable, and the quadratic specification allows the relationship to take a U-shaped or inverted U-shaped form.
The initial model included only income as a predictor, and the results indicated that income was not a significant predictor of trust in the recommendation system (F = 0.085, p = 0.770). Results are presented in Table 3.
However, when the squared income term was added to the model, both income (β = 0.655, p < 0.001) and income squared (β = −0.663, p < 0.001) were found to be significant predictors of trust in the recommendation system. Results are presented in Table 4.
Model 2, which assumed curvilinear effects, was also significant (F = 6.898, p = 0.001), indicating that adding the squared income term significantly improved the prediction of trust in the recommendation system; the change in coefficient sign from positive to negative indicates a curvilinear effect.
These results show that, in the final model, each one-unit increase in income is associated with a 0.265-unit increase in trust in the recommendation system, while each one-unit increase in income squared is associated with a 0.043-unit decrease. The residual statistics show that the mean predicted value of trust in the recommendation system was 3.4795 (SD = 0.10283), and the residuals had a mean of 0 and a standard deviation of 0.60908.
In the regression analysis, the observed linear and quadratic lines (Figure 2a) refer to the relationship between the independent variable (X) income and the dependent variable (Y) trust in recommender systems.
A linear relationship between X and Y means that, as the value of X increases or decreases, the value of Y changes proportionally. This relationship is represented by a straight line in a scatter plot of X and Y, and can be modeled by a linear regression equation of the form:
Y = a + bX
where “a” is the intercept (the value of Y when X = 0) and “b” is the slope (the rate of change in Y for a unit change in X).
A quadratic relationship between X and Y means that the relationship is not linear, but instead follows an inverted U-shaped curve in a scatter plot of X and Y. This relationship can be modeled by a quadratic regression equation of the form:
Y = a + bX + cX²
where “a” is the intercept, “b” is the linear coefficient (the slope of the curve at X = 0), and “c” is the quadratic coefficient (representing the curvature of the curve).
In summary, the linear and quadratic lines in the regression analysis presented in Figure 2a represent the modeled relationship between the independent variable (X) income and the dependent variable (Y) trust in recommender systems, and can help to quantify the strength and direction of the relationship.
The standardized residuals graph presented in Figure 2b evaluates the assumptions of the regression model. The standardized residuals are the residuals divided by their standard deviation, and the graph plots them against the predicted values from the model.
Ideally, the standardized residuals should be randomly scattered around zero and have constant variance across the range of predicted values. If the standardized residuals do not have constant variance, it indicates that the variance of the errors is not constant, which violates the assumption of homoscedasticity. This may indicate that the model is not a good fit for the data.
If the standardized residuals have a systematic pattern, it indicates that the model is misspecified and may not be a good fit for the data. If the standardized residuals show a curved pattern, it may indicate that a nonlinear relationship exists between the predictor variables and the response variable. Alternatively, if the standardized residuals show a U-shape or inverted U-shape, such as in our case, it indicates that a quadratic relationship exists between the predictor variables and the response variable. Overall, the standardized residuals graph is an important diagnostic tool in regression analysis as it allows us to evaluate the assumptions of the model and identify any potential issues or areas for improvement.
The expected and observed cumulative probabilities (cum prob) depicted in Figure 2c refer to the cumulative distribution function of the residuals. The expected cumulative probabilities are based on the assumption that the residuals follow a normal distribution with a mean of 0 and constant variance. Under this assumption, the expected cumulative probabilities are calculated by applying the standard normal cumulative distribution function to the standardized residuals (i.e., residuals divided by their estimated standard deviation).
The observed cumulative probabilities, on the other hand, are based on the actual distribution of the residuals in the data. They are calculated by ranking the residuals from smallest to largest and then dividing each rank by the total number of observations. The resulting values represent the cumulative probability of observing a residual of that size or smaller.
The expected and observed cumulative probabilities are typically plotted against each other in a graph to assess whether the assumption of normality for the residuals holds. If the residuals are normally distributed, the observed cumulative probabilities should fall closely along the diagonal line representing the expected cumulative probabilities, such as in our case. If the observed cumulative probabilities deviate significantly from the expected values, it suggests that the assumption of normality may not be appropriate and may indicate the need for further investigation or modification of the regression model.
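The calculation behind such a P-P comparison can be sketched as follows, using synthetic residuals (the distributional parameters are assumptions for illustration only): rank-based observed cumulative probabilities are set against the standard normal CDF of the standardized residuals, and close agreement supports the normality assumption.

```python
import numpy as np
from math import erf, sqrt

# Synthetic residuals drawn from a normal distribution (illustrative only)
rng = np.random.default_rng(2)
resid = rng.normal(0, 1.5, size=400)

# Standardize: subtract the mean, divide by the estimated standard deviation
std_resid = (resid - resid.mean()) / resid.std(ddof=1)

# Observed cumulative probabilities: rank / n over the sorted residuals
n = len(std_resid)
ordered = np.sort(std_resid)
observed = np.arange(1, n + 1) / n         # (rank - 0.5) / n is a common variant

# Expected cumulative probabilities: standard normal CDF of each residual
expected = np.array([0.5 * (1 + erf(z / sqrt(2))) for z in ordered])

# For normally distributed residuals the two series track the diagonal closely
max_gap = np.abs(observed - expected).max()
print(f"max |observed - expected| = {max_gap:.3f}")
```

Large deviations between the two series would suggest non-normal residuals and motivate revisiting the model.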
The overall results of the quadratic regression analysis presented in Figure 2 show a significant curvilinear effect of income on trust in recommender systems with an adjusted R² of 2% (F = 13.708, p < 0.01), confirming our hypothesis.
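For reference, the adjusted R² and the model F statistic are mechanical functions of the unadjusted R², the sample size n, and the number of model terms k. The sketch below uses the total sample size from Table 1 but a hypothetical R², so its outputs are illustrative rather than the study's values:

```python
# Adjusted R^2 and F statistic for a quadratic model with k = 2 terms (X, X^2).
n, k = 487, 2        # n from Table 1 (Total); k = number of predictors
r2 = 0.03            # hypothetical unadjusted R^2 for illustration

# Adjusted R^2 penalizes R^2 for the number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Overall model F statistic: explained vs. residual variance per degree of freedom
f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))

print(f"adjusted R^2 = {adj_r2:.4f}, F = {f_stat:.3f}")
```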

5. Discussion

This study aimed to examine the impact of social stratification on trust in recommender systems. We found that users from both high- and low-income backgrounds were less likely to trust recommendations they received from recommender systems compared to users with moderate income. This curvilinear relationship suggests that trust in recommender systems is not simply a linear function of income, but rather there may be a sweet spot in terms of income where users are most likely to trust recommendations they receive.
Our finding is consistent with previous research that has found a curvilinear relationship between income and trust in technology [50,51,52]. Users with high income may have higher expectations of accuracy and relevance for recommendations they receive from recommender systems, and, thus, may be more critical of them. On the other hand, users with low income may be more skeptical of the information they receive from these systems due to their past experiences with technology or lack of exposure to similar systems.
A possible explanation for the curvilinear relationship between income and trust in recommender systems is the role of social identity. Research has shown that individuals from different socioeconomic backgrounds may have different social identities, which can influence their trust in technology. For example, individuals from lower socioeconomic backgrounds may have a stronger sense of community and rely more on interpersonal trust, while individuals from higher socioeconomic backgrounds may prioritize individualism and rely more on institutional trust. Therefore, it is possible that users with moderate income levels, who may have a more balanced social identity, are more likely to trust recommendations from recommender systems.
Our findings suggest that explanations of recommendation algorithms may play a critical role in shaping user trust. Previous research has shown that users are more likely to trust systems that provide clear explanations for their recommendations [8]. Thus, designing recommender systems that can provide clear and understandable explanations for their recommendations may be especially important for mitigating the negative effects of social stratification on user trust.
Individuals with moderate levels of income may have a greater need for information filtering and organization than those with very low or very high incomes. First, this need may stem from the fact that individuals with moderate incomes may have less disposable income, and thus need to make more informed purchasing decisions. Second, individuals with high incomes may have greater access to alternative sources of information, such as personal recommendations or expert opinions, which may reduce their reliance on recommender systems. Finally, individuals with low incomes may be more skeptical of technology and may have lower levels of trust in any online system, including recommender systems.
Additionally, our findings suggest that individuals from higher income groups may benefit from personalized recommendations more than those from lower income groups due to their greater level of trust in recommender systems. This highlights the potential for recommender systems to exacerbate existing social stratification and inequality. It is important for developers and designers of recommender systems to consider these social dynamics and ensure that their systems do not perpetuate biases or unequal access to information.
While this study focused on the influence of income on trust, other dimensions of social stratification (e.g., education, occupation) may also be important in shaping trust in recommender systems. Future research could examine the impact of these dimensions on trust, as well as their potential interactions with income. Overall, this study highlights the need to consider the social context in which recommender systems operate. By understanding the influence of social stratification on trust in these systems, designers and developers can work to create systems that are not only effective but also trustworthy and equitable.
Our results have important implications for the design and development of recommender systems. Developers should take the curvilinear relationship between income and trust into account, for example by implementing different trust-building mechanisms for users at different income levels, or by personalizing recommendation algorithms according to users’ income levels and preferences. More broadly, recommender systems should be designed with greater consideration of the potential impact of social stratification and other forms of bias on user trust, and with clear and understandable explanations of their recommendation algorithms, which are central to building that trust.
In conclusion, our study provides evidence of a curvilinear relationship between income and trust in recommender systems, which has important implications for both researchers and developers in this field. Further research is needed to explore the mechanisms underlying this relationship and identify effective strategies for building trust in recommender systems across different income levels.

6. Conclusions

Recommender systems have become ubiquitous in today’s society, influencing consumer behavior [53] and shaping social interactions. Despite their widespread use, the role of social stratification in shaping user trust in these systems remains relatively understudied. The present study therefore investigated the influence of social stratification on trust in robust, social-aware, and explainable recommender systems. Our findings are consistent with previous studies that have identified socioeconomic status as a key factor in determining trust in recommender systems.
Our study extends previous research by exploring the curvilinear effects of the predictor variables on the outcome variable using quadratic regression analysis. While previous studies have examined the relationship between the predictor variables and the outcome variable, they typically assume a linear relationship. Our study contributes to the literature by showing that the relationship between the predictor variables and the outcome variable is not always linear and there may be a curvilinear effect. Specifically, we found that the quadratic term for income was significant, indicating a non-linear relationship between predictor A, income, and the outcome variable, trust in recommender systems. Our results provide valuable insights for practitioners in the field, who can use this information to develop more effective interventions and policies. Overall, we believe that our study makes an important contribution to the literature by highlighting the curvilinear effects of predictor variables on the outcome variable and providing new insights into the relationship between these variables.
Our study found evidence of curvilinear effects of social stratification on trust in recommender systems. Specifically, we found that moderate levels of social stratification were associated with higher levels of trust, whereas both low and high levels of social stratification were associated with lower levels of trust. These results are consistent with previous research on the role of social stratification in shaping trust in technology [54,55]. Specifically, users were more likely to trust a system when they had a better understanding of the system’s recommendations [56], when the system was designed to be socially aware [54], and when the system was robust to errors and biases [57].
Interestingly, our results are consistent with the results of a recent study by the authors of [58], which investigated the economic connections between cities in the New Western Land–Sea Corridor in China. Qin and collaborators found that the relationship between transportation accessibility and economic development was also curvilinear, with a quadratic effect that peaked at a moderate level of accessibility. In addition, our study has some similarities with the recent comprehensive survey by the authors of [59] on the evaluation of explainable recommendation systems. Chen and collaborators also highlight the importance of trust in recommender systems and its impact on user satisfaction and adoption. They discuss various methods for evaluating the explainability and transparency of recommender systems, which are important factors for building trust.
Moreover, the approach taken in our study is similar to that of the authors of [60], who proposed a comprehensive approach for the rating prediction phase in memory-based collaborative filtering recommender systems. The authors of [60] argue that incorporating contextual information, such as user demographics, can improve the accuracy and effectiveness of recommender systems.
Our findings are consistent with those of the authors of [61], who noted that traditional recommender systems suffer from bias and a lack of diversity and suggested that incorporating user diversity could enhance the performance of recommender systems. Similarly, the authors of [63] proposed a multi-criteria recommender system that incorporates social relationships and user preferences to improve the quality of recommendations.
Moreover, the author of [62] highlighted the importance of explainability and causality in the perception, trust, and acceptance of AI-based systems. By incorporating social stratification information, our proposed approach enhances the explainability and transparency of recommender systems, making them more trustworthy and acceptable to users.
Overall, these studies provide important insights into the factors that influence trust in recommender systems and the approaches that can be taken to improve their accuracy and effectiveness. By building on these previous studies, our results contribute to a growing body of research on the role of social stratification in shaping users’ perceptions of recommender systems.
The study has several limitations that should be acknowledged. Firstly, our sample only included users from a single geographic location, which limits the generalizability of our findings. Future research should aim to replicate our study in different cultural and geographical contexts to examine the cross-cultural validity of our findings. Secondly, we only measured users’ income as a proxy for their socioeconomic status. Future research could consider using more comprehensive measures of socioeconomic status, such as education and occupation, to examine their influence on trust in recommender systems. Additionally, future research can explore the implications of these findings for the design and implementation of recommender systems, with a focus on promoting equity and fairness. Developers can take steps to ensure that their recommendation algorithms are not biased against certain social groups and work towards providing transparency and explanations for their recommendations.
There are potential ethical implications of the relationship between social stratification and trust in recommender systems, particularly if bias against certain social groups is present in recommendation algorithms. This can perpetuate systemic inequality and limit opportunities for marginalized groups. It is important for developers and policymakers to address these issues and ensure that their recommender systems are designed and implemented in an ethical and responsible manner. Other factors, such as race, gender, and education, can interact with social stratification to influence trust in recommender systems. It would be beneficial for future research to explore these interactions and identify ways to mitigate any negative impacts.
The findings of this study have implications for individuals, organizations, and policymakers concerned with the impact of technology on society. It highlights the need for awareness and consideration of social stratification in technology design and implementation. Organizations can work towards developing more inclusive and diverse recommendation algorithms, while policymakers can develop regulations and guidelines to ensure the ethical and responsible use of technology. Individuals can also be more aware of the potential biases in recommendation algorithms and actively seek out diverse sources of information.
Lastly, our study only examined the impact of social stratification on trust in recommender systems, and did not investigate other factors that may influence trust, such as the accuracy and transparency of recommendations. Future research should also explore other factors that may impact user trust, such as the perceived accuracy of recommendations or the degree of personalization offered by the system.
Despite these limitations, our study contributes to the growing body of literature on trust in technology by examining the impact of social stratification on trust in recommender systems. Our findings suggest that income has a curvilinear relationship with trust in recommender systems, which has important implications for the design and development of these systems. Developers should take into account users’ income levels when designing recommender systems, and consider implementing different trust-building mechanisms for users with different income levels. By highlighting the role of social stratification, specifically income, in shaping trust in recommender systems, this study underscores the need for continued research in this area. It is our hope that this study will inspire further investigations into the complex interplay between social stratification and trust in technology and ultimately inform the development of more effective and equitable recommender systems.
Our findings also have important implications for the design and implementation of recommender systems. Given the curvilinear relationship between income and trust, designers should consider tailoring systems to the needs and expectations of users at different income levels. For example, lower-income users may benefit from more transparent and personalized recommendations, while higher-income users may value more diverse and novel recommendations. Additionally, increasing the explainability of the recommendation process and giving users more control over the recommendations they receive may enhance trust across income levels.
Based on our results, there are several practical implications for designers and developers of recommender systems. First, our findings suggest that social stratification can have a significant impact on user trust in recommendation systems. This implies that designers and developers need to take into account social stratification factors such as income, education, and occupation when designing recommendation systems. By incorporating these factors, the system can improve the quality of recommendations and increase user trust in the system. Second, our study shows that explainability and transparency are important factors in building user trust in recommendation systems. Designers and developers can incorporate these features by providing users with detailed explanations of how recommendations are made and allowing users to control and adjust the recommendations provided. Finally, our study highlights the need for ongoing evaluation and monitoring of recommendation systems to ensure that they are functioning as intended and to identify and address any biases that may emerge over time. By regularly evaluating the system, designers and developers can identify and address any issues that may arise and improve the overall performance and user experience.
In summary, our findings have practical implications for the design and development of recommendation systems, including the need to consider social stratification factors, incorporate explainability and transparency, and regularly evaluate and monitor the system’s performance.

Author Contributions

Conceptualization, D.R., L.D.C. and M.G.P.; methodology, D.R., L.D.C., M.G.P. and R.C.A.; software, D.R., C.Ș. and S.C.; validation, C.S.R.J., G.C.B.-D., C.Ș. and R.C.A.; formal analysis, A.F., C.S.R.J., G.C.B.-D., C.Ș. and S.C.; investigation, D.R., L.D.C. and M.G.P.; resources, C.S.R.J., G.C.B.-D., C.Ș., S.C. and R.C.A.; data curation, A.F., C.S.R.J., G.C.B.-D., C.Ș. and S.C.; writing—original draft preparation, D.R., L.D.C., A.F. and M.G.P.; writing—review and editing, C.S.R.J., G.C.B.-D., C.Ș., S.C. and R.C.A.; visualization, D.R., L.D.C., A.F. and M.G.P.; supervision, D.R. and L.D.C.; project administration, D.R. and L.D.C.; funding acquisition, L.D.C. and M.G.P. All authors contributed equally to this research. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Center of Research Development and Innovation in Psychology from Aurel Vlaicu University of Arad (ID no. 19/01.07.2022.).

Data Availability Statement

The authors will make the raw data supporting the conclusions of this study available without restriction.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adomavicius, G.; Tuzhilin, A. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Trans. Knowl. Data Eng. 2005, 17, 734–749. [Google Scholar] [CrossRef]
  2. McLeod, S.A. Social Stratification. Simply Psychology. Available online: https://www.simplypsychology.org/social-stratification.html (accessed on 20 February 2023).
  3. Jalali, Z.S.; Introne, J.; Soundarajan, S. Social stratification in networks: Insights from co-authorship networks. J. R. Soc. Interface 2023, 20, 20220555. [Google Scholar] [CrossRef] [PubMed]
  4. Sweeney, L. Discrimination in online ad delivery. Commun. ACM 2013, 56, 44–54. [Google Scholar] [CrossRef]
  5. Herlocker, J.L.; Konstan, J.A.; Terveen, L.G.; Riedl, J.T. Evaluating collaborative filtering recommender systems. ACM Trans. Inf. Syst. 2004, 22, 5–53. [Google Scholar] [CrossRef]
  6. Xu, J.D.; Cenfetelli, R.T.; Aquino, K. Do different kinds of trust matter? An examination of the three trusting beliefs on satisfaction and purchase behavior in the buyer–seller context. J. Strateg. Inf. Syst. 2016, 25, 15–16. [Google Scholar] [CrossRef]
  7. Acharya, N.; Sassenberg, A.M.; Soar, J. Consumers’ Behavioural Intentions to Reuse Recommender Systems: Assessing the Effects of Trust Propensity, Trusting Beliefs and Perceived Usefulness. J. Theor. Appl. Electron. Commer. Res. 2023, 18, 55–78. [Google Scholar] [CrossRef]
  8. Pu, P.; Chen, L.; Hu, R.; Yang, J. User control and transparency in personalized recommender systems. User Model User Adap. Inter. 2011, 21, 107–133. [Google Scholar]
  9. Hamilton, K.; Dabbish, L.; Sandvig, C. The social stratification of video consumption on YouTube. In Proceedings of the CSW’14, 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, Baltimore, MD, USA, 15–19 February 2014. [Google Scholar]
  10. Andersen, S.H. Unemployment and subjective well-being: A question of class? Work. Occup. 2009, 36, 3–25. [Google Scholar] [CrossRef]
  11. Akaeda, N. Contextual Social Trust and Well-Being Inequality: From the Perspectives of Education and Income. J. Happiness Stud. 2020, 21, 2957–2979. [Google Scholar] [CrossRef]
  12. Birkelund, G.E.; Lemel, Y. Lifestyles and social stratification: An explorative study of France and Norway. Cl. Stratif. Analysis 2013, 30, 189–220. [Google Scholar] [CrossRef]
  13. Breen, R.; Jonsson, J.O. Inequality regimes and the stratification of social space: A comparative analysis of the United States, Sweden, and Germany. Annu. Rev. Sociol. 2019, 45, 239–258. [Google Scholar]
  14. Fachelli, S.; López-Roldán, P. Proposal for the construction of two composite indicators of social stratification. Comparative analysis between Spain and Argentina. Empiria 2022, 55, 97–129. [Google Scholar]
  15. Van Dijk, J.A.; Hacker, K.; Hargittai, E. From digital divide to digital inequality. In Qualitative Research Practice; Seale, J., Gobo, S., Gubrium, J.F., Silverman, D., Eds.; Sage: London, UK, 2017; pp. 561–579. [Google Scholar]
  16. Bejan, C.A.; Bucerzan, D. On image steganography metrics for mobile platforms. In Proceedings of the 16th International Conference on Informatics in Economy (IE 2017): Education, Research and Business Technologies, Bucharest, Romania, 4–7 May 2017. [Google Scholar]
  17. Hategan, C.D.; Pitorac, R.I.; Hategan, V.P.; Imbrescu, C.M. Opportunities and Challenges of Companies from the Romanian E-Commerce Market for Sustainable Competitiveness. Sustainability 2021, 13, 13358. [Google Scholar] [CrossRef]
  18. Willekens, M.; Siongers, J.; Lievens, J. Social stratification and social media disengagement. The effect of economic, cultural and social capital on reasons for non-use of social media platforms. Poetics 2022, 95, 101708. [Google Scholar] [CrossRef]
  19. Malthouse, E.C.; Haenlein, M.; Skiera, B.; Wege, E.; Zhang, M. Managing customer relationships in the social media era: Introducing the social CRM house. J. Interact. Mark. 2016, 33, 9–27. [Google Scholar] [CrossRef]
  20. Obada, D.R.; Dabija, D.C. The Mediation Effects of Social Media Usage and Sharing Fake News about Companies. Behav. Sci. 2022, 12, 372. [Google Scholar] [CrossRef]
  21. Obada, D.R.; Dabija, D.C. “In Flow”! Why Do Users Share Fake News about Environmentally Friendly Brands on Social Media? Int. J. Environ. Res. Public Health 2022, 19, 4861. [Google Scholar] [CrossRef]
  22. Popa, I.; Nicolescu, L.; Ștefan, S.C.; Popa, Ș.C. The Effects of Corporate Social Responsibility (CSR) on Consumer Behaviour in Online Commerce: The Case of Cosmetics during the COVID-19 Pandemics. Electronics 2022, 11, 2442. [Google Scholar] [CrossRef]
  23. Pelau, C.; Pop, M.I.; Ene, I.; Lazar, L. Clusters of Skeptical Consumers Based on Technology and AI Acceptance, Perception of Social Media Information and Celebrity Trend Setter. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1231–1247. [Google Scholar] [CrossRef]
  24. Zhang, J.; Peng, Q.; Sun, S.; Liu, C. Collaborative filtering recommendation algorithm based on user preference derived from item domain features. Phys. A Stat. Mech. Appl. 2014, 396, 66–76. [Google Scholar] [CrossRef]
  25. Eirinaki, M.; Gao, J.; Varlamis, I.; Tserpes, K. Recommender Systems for Large-Scale Social Networks: A Review of Challenges and Solutions; Elsevier: Amsterdam, The Netherlands, 2018. [Google Scholar]
  26. Ojagh, S.; Malek, M.R.; Saeedi, S. A Social–Aware Recommender System Based on User’s Personal Smart Devices. ISPRS Int. J. Geo-Inf. 2020, 9, 519. [Google Scholar] [CrossRef]
  27. Haruna, K.; Ismail, M.A.; Suhendroyono, S.; Damiasih, D.; Pierewan, A.C.; Chiroma, H.; Herewan, T. Context-Aware Recommender System: A Review of Recent Developmental Process and Future Research Direction. Appl. Sci. 2017, 7, 1211. [Google Scholar] [CrossRef]
  28. Roy, D.; Dutta, M. A systematic review and research perspective on recommender systems. J. Big Data 2022, 9, 59. [Google Scholar] [CrossRef]
  29. Koren, Y.; Bell, R.; Volinsky, C. Matrix factorization techniques for recommender systems. Computer 2009, 42, 30–37. [Google Scholar] [CrossRef]
  30. Dadgar, M.; Hamzeh, A. How to Boost the Performance of Recommender Systems by Social Trust? Studying the Challenges and Proposing a Solution. IEEE ACCESS 2022, 10, 13768–13779. [Google Scholar] [CrossRef]
  31. Busu, C.; Busu, M.; Dragoi, M.; Popa, I.; Dobrin, C.; Giurgiu, A. Dissipative advertising in retail markets. Econ. Comput. Econ. Cybern. Stud. Res. 2015, 49, 52–64. [Google Scholar]
  32. Kiran, R.; Kumar, P.; Bhasker, B. DNNRec: A novel deep learning based hybrid recommender system. Expert Syst. Appl. 2020, 144, 113054. [Google Scholar] [CrossRef]
  33. Lee, S.; Kim, D. Deep learning based recommender system using cross convolutional filters. Inform. Sci. 2022, 592, 112–122. [Google Scholar] [CrossRef]
  34. Dinu, V. Artificial intelligence in wholesale and retail trade. Amfiteatru Econ. 2021, 23, 5–7. [Google Scholar] [CrossRef]
  35. Srifi, M.; Oussous, A.; Lahcen, A.A.; Mouline, S. Recommender Systems Based on Collaborative Filtering Using Review Texts—A Survey. Information 2020, 11, 317. [Google Scholar] [CrossRef]
  36. Kumar, B.; Sharma, N.; Sharma, B.; Herencsar, N.; Srivastava, G. Hybrid Recommendation Network Model with a Synthesis of Social Matrix Factorization and Link Probability Functions. Sensors 2023, 23, 2495. [Google Scholar] [CrossRef] [PubMed]
  37. Ogrean, C.; Herciu, M. Fostering innovation in Romania. Insights from the smart specialization strategies. Stud. Bus. Econ. 2022, 17, 319–337. [Google Scholar] [CrossRef]
  38. Hansen, D.L.; Shneiderman, B.; Smith, M.A.; Himelboim, I. Analyzing Social Media Networks with NodeXL: Insights from a Connected World, 2nd ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2020. [Google Scholar] [CrossRef]
  39. Dabija, D.C.; Babut, T.; Dinu, V.; Lugojan, M.I. Cross-generational analysis of information searching based on social media in Romania. Transform. Bus. Econ. 2017, 16, 248–270. [Google Scholar]
  40. Shi, W.; Wang, L.; Qin, J. Extracting user influence from ratings and trust for rating prediction in recommendations. Sci. Rep. 2020, 10, 13592. [Google Scholar] [CrossRef] [PubMed]
  41. Zhang, Y.; Xu, C. Explainable Recommendation: A Survey and New Perspectives. Found. Trends Inf. Retr. 2020, 14, 1–101. [Google Scholar] [CrossRef]
  42. Tintarev, N.; Masthoff, J. Designing and Evaluating Explanations for Recommender Systems. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Kantor, P.B., Eds.; Springer: Boston, MA, USA, 2011; pp. 479–510. [Google Scholar] [CrossRef]
  43. Chen, Y.; Xie, J.; Huang, T. Exploring the determinants of trust in recommender systems from the perspective of offline social relationships. Comput. Hum. Behav. 2018, 88, 162–172. [Google Scholar] [CrossRef]
  44. Gefen, D.; Karahanna, E.; Straub, D.W. Trust and TAM in online shopping: An integrated model. MIS Q. 2003, 27, 51–90. [Google Scholar] [CrossRef]
  45. Dabija, D.C.; Csorba, L.M.; Isac, F.L.; Rusu, S. Building Trust toward Sharing Economy Platforms beyond the COVID-19 Pandemic. Electronics 2022, 11, 2916. [Google Scholar] [CrossRef]
  46. Alhijawi, B.; Awajan, A.; Fraihat, S. Survey on the Objectives of Recommender Systems: Measures, Solutions, Evaluation Methodology, and New Perspectives. ACM Comput. Surv. 2023, 55, 93. [Google Scholar] [CrossRef]
  47. Alslaity, A.; Tran, T. Users’ Responsiveness to Persuasive Techniques in Recommender Systems. Front. Artif. Intell. 2021, 4, 679459. [Google Scholar] [CrossRef]
  48. Dong, M.; Yuan, F.; Yao, L.; Wang, X.; Xu, X.; Zhu, L. A survey for trust-aware recommender systems: A deep learning perspective. Knowl. Based Syst. 2022, 249, 108954. [Google Scholar] [CrossRef]
  49. Trzebinski, W.; Marciniak, B. Recommender system information trustworthiness: The role of perceived ability to learn, self-extension, and intelligence cues. Comput. Hum. Behav. Rep. 2022, 6, 100193. [Google Scholar] [CrossRef]
  50. Massa, P.; Bhattacharjee, B. Using Trust in Recommender Systems: An Experimental Analysis. In Trust Management. iTrust 2004. Lecture Notes in Computer Science; Jensen, C., Poslad, S., Dimitrakos, T., Eds.; Springer: Berlin, Heidelberg, Germany, 2004; Volume 2995, pp. 231–235. [Google Scholar] [CrossRef]
  51. Harman, J.L.; O’Donovan, J.; Abdelzaher, T.; Gonzalez, C. Dynamics of human trust in recommender systems. In Proceedings of the RecSys’14: 8th ACM Conference on Recommender Systems, Silicon Valley, CA, USA, 6–10 October 2014; pp. 305–308. [Google Scholar] [CrossRef]
  52. Kunkel, J.; Donkers, T.; Michael, L.; Barbu, C.M.; Ziegler, J. Let me explain: Impact of personal and impersonal explanations on trust in recommender systems. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, Scotland, 4–9 May 2019; pp. 1–12. [Google Scholar]
  53. Dinu, V.; Bucur, M.; Enache, C.; Fratiloiu, B.; Cohen-Tzedec, B.; Vasiliu, C. European consumer trust as a driving force of mobile commerce. Transform. Bus. Econ. 2022, 21, 419–434. [Google Scholar]
  54. Dellarocas, C.; Zhang, X.; Awad, N.F. The influence of social bias on user trust in recommender systems. In Proceedings of the CHI ‘21: 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–13. [Google Scholar]
  55. Huang, M.; Rust, R.T. Social stratification and consumer trust in AI-enabled recommender systems. J. Acad. Mark. Sci. 2020, 48, 765–784. [Google Scholar]
  56. Li, J.; Adomavicius, G.; Zhang, J. Exploring the role of social context in trust in recommender systems. J. Assoc. Inf. Sci. Technol. 2018, 69, 1241–1255. [Google Scholar]
  57. Wang, Y.; Zhang, D.; Liu, X. A survey on the robustness of recommender systems. Knowl. Based Syst. 2019, 165, 194–209. [Google Scholar]
  58. Qin, X.; Qian, Y.; Zeng, J.; Wei, X. Accessibility and economic connections between cities of the new western land–sea corridor in China—Enlightenments to the passageway strategy of Gansu province. Sustainability 2022, 14, 4445. [Google Scholar] [CrossRef]
  59. Chen, X.; Zhang, Y.; Wen, J.R. Measuring “Why” in Recommender Systems: A Comprehensive Survey on the Evaluation of Explainable Recommendation. arXiv 2022, arXiv:2202.06466. [Google Scholar]
  60. Nguyen, L.; Nam, H. Towards comprehensive approaches for the rating prediction phase in memory-based collaborative filtering recommender systems. Inf. Sci. 2022, 589, 878–910. [Google Scholar] [CrossRef]
  61. Chen, J.; Dong, H.; Wang, X.; Feng, F.; Wang, M.; He, X. Bias and debias in recommender system: A survey and future directions. ACM Trans. Inf. Syst. 2023, 41, 1–39. [Google Scholar] [CrossRef]
  62. Shin, D. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. Int. J. Hum. Comput. Stud. 2021, 146, 102551. [Google Scholar] [CrossRef]
  63. Zhang, K.; Liu, X.; Wang, W.; Li, J. Multi-criteria recommender system based on social relationships and criteria preferences. Expert Syst. Appl. 2021, 176, 114868. [Google Scholar] [CrossRef]
Figure 1. Trust in recommender systems based on income. (a) Explainable recommender systems; (b) robust recommender systems; (c) social-aware recommender systems.
Figure 2. The curvilinear relationship between income and trust in recommender systems (Rec_Sys).
Table 1. Mean differences based on the level of income.
| Scale | Income group | N | Mean | Std. Deviation | Std. Error | 95% CI Lower | 95% CI Upper | Minimum | Maximum | Between-Component Variance |
|---|---|---|---|---|---|---|---|---|---|---|
| Explainable recommender systems | No stable monthly income | 190 | 3.3472 | 0.71670 | 0.05200 | 3.2446 | 3.4498 | 1.00 | 5.00 | |
| | RON 1500–2000 | 47 | 3.4545 | 0.89932 | 0.13118 | 3.1904 | 3.7185 | 1.00 | 5.00 | |
| | RON 2000–4000 | 123 | 3.5317 | 0.70632 | 0.06369 | 3.4056 | 3.6578 | 1.67 | 5.00 | |
| | RON 4000–6000 | 76 | 3.6096 | 0.76856 | 0.08816 | 3.4340 | 3.7852 | 1.00 | 5.00 | |
| | RON 6000–8000 | 22 | 3.2118 | 0.70101 | 0.14946 | 2.9010 | 3.5226 | 1.67 | 4.33 | |
| | over RON 8000 | 29 | 3.0459 | 0.99915 | 0.18554 | 2.6658 | 3.4259 | 1.00 | 4.67 | |
| | Total | 487 | 3.4210 | 0.76989 | 0.03489 | 3.3525 | 3.4896 | 1.00 | 5.00 | |
| | Model (fixed effects) | | | 0.75986 | 0.03443 | 3.3534 | 3.4887 | | | |
| | Model (random effects) | | | | 0.08020 | 3.2149 | 3.6272 | | | 0.02055 |
| Robust recommender systems | No stable monthly income | 190 | 3.1816 | 0.59422 | 0.04311 | 3.0965 | 3.2666 | 1.00 | 5.00 | |
| | RON 1500–2000 | 47 | 3.3670 | 0.78514 | 0.11452 | 3.1365 | 3.5975 | 1.00 | 5.00 | |
| | RON 2000–4000 | 123 | 3.3313 | 0.65482 | 0.05904 | 3.2144 | 3.4482 | 1.50 | 5.00 | |
| | RON 4000–6000 | 76 | 3.5099 | 0.70527 | 0.08090 | 3.3487 | 3.6710 | 1.00 | 5.00 | |
| | RON 6000–8000 | 22 | 3.2386 | 0.50283 | 0.10720 | 3.0157 | 3.4616 | 2.00 | 4.25 | |
| | over RON 8000 | 29 | 3.0000 | 0.86860 | 0.16129 | 2.6696 | 3.3304 | 1.00 | 4.25 | |
| | Total | 487 | 3.2803 | 0.67280 | 0.03049 | 3.2204 | 3.3402 | 1.00 | 5.00 | |
| | Model (fixed effects) | | | 0.66255 | 0.03002 | 3.2213 | 3.3393 | | | |
| | Model (random effects) | | | | 0.07472 | 3.0882 | 3.4724 | | | 0.01834 |
| Social-aware recommender systems | No stable monthly income | 190 | 3.7911 | 0.75737 | 0.05495 | 3.6827 | 3.8995 | 1.00 | 5.00 | |
| | RON 1500–2000 | 47 | 3.7379 | 0.80706 | 0.11772 | 3.5009 | 3.9748 | 1.00 | 5.00 | |
| | RON 2000–4000 | 123 | 3.9056 | 0.78300 | 0.07060 | 3.7658 | 4.0454 | 1.67 | 5.00 | |
| | RON 4000–6000 | 76 | 3.8204 | 0.79388 | 0.09106 | 3.6390 | 4.0018 | 1.00 | 5.00 | |
| | RON 6000–8000 | 22 | 3.6064 | 0.78059 | 0.16642 | 3.2603 | 3.9525 | 1.67 | 4.67 | |
| | over RON 8000 | 29 | 3.6666 | 0.91700 | 0.17028 | 3.3177 | 4.0154 | 1.00 | 5.00 | |
| | Total | 487 | 3.8037 | 0.78530 | 0.03559 | 3.7338 | 3.8736 | 1.00 | 5.00 | |
| | Model (fixed effects) | | | 0.78550 | 0.03559 | 3.7338 | 3.8736 | | | |
| | Model (random effects) | | | | 0.03559 a | 3.7122 a | 3.8952 a | | | −0.00041 |
a Warning: Between-component variance is negative. It was replaced by 0.0 in computing this random effects measure.
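The standard errors and confidence bounds in Table 1 follow the usual one-way ANOVA descriptives: SE = SD/√N and CI = mean ± t(0.975, N−1)·SE. As a quick consistency check, the following Python sketch (assuming `scipy` is available) reproduces the "over RON 8000" row of the explainable-recommender scale from the N, mean, and SD reported in the table:

```python
import math
from scipy import stats

# Row values taken from Table 1: explainable recommender systems, over RON 8000
n, mean, sd = 29, 3.0459, 0.99915

se = sd / math.sqrt(n)                 # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
lower, upper = mean - t_crit * se, mean + t_crit * se

print(f"SE = {se:.5f}, 95% CI = [{lower:.4f}, {upper:.4f}]")
# Table 1 reports SE = 0.18554 and CI [2.6658, 3.4259]; agreement is to rounding.
```

The same calculation applied to any other row of Table 1 recovers its reported bounds, confirming the table's internal consistency.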
Table 2. Robust Tests of Equality of Means, ANOVA, and Test of Homogeneity of Variances.
| Scale | Welch Statistic a | df1 | df2 | Sig. | Sum of Squares (between groups) | df | Mean Square | F | Sig. | Levene Statistic (based on mean) | Sig. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Explainable recommender systems | 3.059 | 5 | 105.165 | 0.013 | 10.342 | 5 | 2.068 | 3.582 | 0.003 | 2.464 | 0.032 |
| Robust recommender systems | 3.450 | 5 | 106.392 | 0.006 | 8.847 | 5 | 1.769 | 4.031 | 0.001 | 2.904 | 0.014 |
| Social-aware recommender systems | 0.884 | 5 | 106.033 | 0.494 | 2.935 | 5 | 0.587 | 0.951 | 0.447 | 0.296 | 0.915 |
a Asymptotically F distributed.
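Welch's robust F in Table 2 is appropriate here because Levene's test rejects homogeneity of variances for two of the three scales. The tables appear to be standard SPSS output; as an illustration only (not the authors' actual pipeline), a minimal NumPy implementation of the Welch statistic on synthetic groups mimicking the design of Table 1 looks like this:

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's heteroscedasticity-robust one-way ANOVA.

    Returns (F, df1, df2, p); groups may have unequal sizes and variances.
    """
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                              # precision weights n_i / s_i^2
    grand = np.sum(w * m) / np.sum(w)      # weighted grand mean
    a = np.sum(w * (m - grand) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) * lam / (k ** 2 - 1)
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * lam)
    f = a / b
    p = stats.f.sf(f, df1, df2)            # asymptotically F distributed
    return f, df1, df2, p

rng = np.random.default_rng(0)
# Six synthetic income groups with unequal sizes and variances,
# mirroring the group sizes in Table 1 (not the real survey data)
groups = [rng.normal(3.4, s, size) for s, size in
          zip([0.72, 0.90, 0.71, 0.77, 0.70, 1.00], [190, 47, 123, 76, 22, 29])]
f, df1, df2, p = welch_anova(*groups)
print(f"Welch F({df1}, {df2:.1f}) = {f:.3f}, p = {p:.3f}")
```

A convenient sanity check: for two groups the statistic reduces to the square of Welch's t, with df2 equal to the Welch–Satterthwaite degrees of freedom.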
Table 3. Prediction analysis coefficients.
| Model | | Sum of Squares | df | Mean Square | F | Sig. |
|---|---|---|---|---|---|---|
| 1 | Regression | 0.033 | 1 | 0.033 | 0.085 | 0.770 b |
| | Residual | 185.402 | 485 | 0.382 | | |
| | Total | 185.435 | 486 | | | |
| 2 | Regression | 5.139 | 2 | 2.570 | 6.898 | 0.001 c |
| | Residual | 180.295 | 484 | 0.373 | | |
| | Total | 185.435 | 486 | | | |
Dependent variable: trust in recommender systems; b predictors: (constant), income; c predictors: (constant), income, squared income.
Table 4. Beta standardized coefficients.
| Model | Predictor | B (Unstandardized) | Std. Error | Beta (Standardized) | t | Sig. |
|---|---|---|---|---|---|---|
| 1 | (Constant) | 3.466 | 0.055 | | 63.524 | 0.000 |
| | Income | 0.005 | 0.018 | 0.013 | 0.292 | 0.770 |
| 2 | (Constant) | 3.184 | 0.093 | | 34.189 | 0.000 |
| | Income | 0.265 | 0.072 | 0.655 | 3.659 | 0.000 |
| | Squared Income | −0.043 | 0.012 | −0.663 | −3.702 | 0.000 |
Dependent variable: trust in recommender systems.
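Model 2 in Table 4 is an inverted-U (quadratic) fit: trust ≈ 3.184 + 0.265·income − 0.043·income², with the turning point at −B_income/(2·B_squared). Assuming for illustration that the six income brackets are coded on an ordinal 0–5 scale (the paper's exact coding may differ), a short sketch locates the peak of the curve:

```python
# Unstandardized coefficients from Table 4, Model 2
b0, b1, b2 = 3.184, 0.265, -0.043

def predicted_trust(income_code):
    """Model-2 prediction; income_code assumed 0-5 for the six brackets."""
    return b0 + b1 * income_code + b2 * income_code ** 2

vertex = -b1 / (2 * b2)                          # income level where trust peaks
curve = [predicted_trust(x) for x in range(6)]   # predictions per bracket
peak_bracket = max(range(6), key=lambda x: curve[x])

print(f"trust peaks at income code {vertex:.2f}; bracket argmax = {peak_bracket}")
```

On this coding, predicted trust rises through the middle income brackets and declines for the highest earners, consistent with the curvilinear pattern shown in Figure 2.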

Rad, D.; Cuc, L.D.; Feher, A.; Joldeș, C.S.R.; Bâtcă-Dumitru, G.C.; Șendroiu, C.; Almași, R.C.; Chiș, S.; Popescu, M.G. The Influence of Social Stratification on Trust in Recommender Systems. Electronics 2023, 12, 2160. https://doi.org/10.3390/electronics12102160