Article

Human-Centric Aggregation via Ordered Weighted Aggregation for Ranked Recommendation in Recommender Systems

1 Department of Computer Science and Engineering, Jamia Hamdard, New Delhi 110062, India
2 Department of Computer Engineering, Aligarh Muslim University, Aligarh 202002, India
3 Faculty of Computing and Information Technology, King Abdul Aziz University, Jeddah 21431, Saudi Arabia
4 USN School of Business, University of South-Eastern Norway, 3511 Hønefoss, Norway
* Authors to whom correspondence should be addressed.
Appl. Syst. Innov. 2023, 6(2), 36; https://doi.org/10.3390/asi6020036
Submission received: 9 January 2023 / Revised: 1 March 2023 / Accepted: 2 March 2023 / Published: 6 March 2023

Abstract

In this paper, we propose an approach to recommender systems that incorporates human-centric aggregation via Ordered Weighted Aggregation (OWA) to prioritize the suggestions of expert rankers over the usual recommendations. We advocate for ranked recommendations where rankers are assigned weights based on their ranking position. Our approach recommends books to university students using linguistic data summaries and the OWA technique. We assign higher weights to the highest-ranked university to improve recommendation quality. Our approach is evaluated on eight parameters and outperforms traditional recommender systems. We claim that our approach saves storage space and solves the cold start problem by not requiring prior user preferences. Our proposed scheme can be applied to decision-making problems, especially in the context of recommender systems, and offers a new direction for human-specific task aggregation in recommendation research.

1. Introduction

Recent research in decision-making has explored new directions of natural language usuality, giving rise to a new perspective on information aggregation that can be termed ‘human-centric aggregation’. The primary objective of this paper is two-fold: first, to incorporate the concept of human-centric aggregation via Ordered Weighted Aggregation (OWA), recently presented by J. Kacprzyk, R. Yager, and J.M. Merigo in a memorial issue of IEEE Computational Intelligence Magazine [1]; second, to design a recommender system that assigns rankings to voters or rankers and weights them accordingly, which in turn may produce recommendations where experts’ suggestions are given priority over usual recommendations [2].
It is interesting to explore the ways in which human-centric aggregation can be useful in solving real-life problems. By human-centric aggregation, we mean quantifying those qualitative attributes that are human-specific such as judgment, intelligence, intention, vision, etc. OWA has been extensively used in the literature to explore different aspects of human-specific problems [3,4,5,6,7,8], especially in decision-making problems. In this paper, we try to explore its diversity and strength for recommender systems. Although OWA has been used in the context of recommendations in the past [9,10], we have modified the weight assignment method that is used. The new method for assigning weights gives more weight to the best ranked to improve recommendation quality. We use the ‘most preferred first’ linguistic quantifier with OWA for this purpose.
In this paper, our aim is to recommend top books to university students by aggregating the suggestions of experts from top-ranked institutions. The idea is supported by previous work which uses OWA for making book recommendations; however, priority to the best-ranked university or subject experts is not adequately provided [11]. The issue lies with weight assignment with some of the fuzzy linguistic quantifiers, which generate zero values for the top-ranked institution. Therefore, we have assigned weights so that the higher-ranked universities achieve higher weights compared to lower-ranked universities. The proposed scheme will also save space, as it does not need to record prior preferences, which most of the existing recommender systems do. In addition to this, the proposed approach fills the gap of satisfying new users who do not have any prior logged information and face a cold start issue. The proposed method identifies the inclusiveness in the recommendation process and, hence, provides a better, consensus-driven recommendation.
The results of the proposed OWA (most preferred first) are compared with the previous positional aggregation-based scoring (PAS) technique and OWA using other linguistic quantifiers on eight different parameters. The results reveal that the proposed scheme has about 17% improved performance compared to PAS and up to 65% improved performance compared to ordered weighted aggregation, where weights are not assigned according to the order of ranking. In addition, we elaborate on how the OWA can be perceived as human-centric aggregation. It is suggested that the scheme proposed in this study can be very useful in addressing various decision-making problems, especially for recommender systems. In addition, it provides new directions on how various human-specific tasks can be numerically aggregated. To this end, our main contribution lies in presenting a human aggregation approach to inclusively consider human intelligence and a ranking mechanism that helps design a recommendation system capable of providing recommendations to new users and users with some specific requirements through an expert consensus.
Section 2 discusses the perspective of human-centric aggregation, including a background to OWA. Section 3 explains the proposed scheme for recommending books using OWA operators. In Section 4, a detailed discussion of the experiments, dataset, and results is given, and the performance evaluation strategy is illustrated with a diagram. Finally, we conclude in Section 5.

2. Human-Centric Aggregation

In our daily lives, we often encounter issues where human perception plays an important role, and without which, decision-making becomes difficult. Therefore, human beings aggregate different opinions to reach a conclusion [12,13], examples being problems where consensus is required, such as decision-making, voting results, etc. In the same way, the aggregation of numerical values is important. R. Yager has significantly contributed to the science of aggregation by introducing Ordered Weighted Aggregation (OWA) [5,14]. We describe OWA in the following section.

Ordered Weighted Aggregation (OWA)

OWA has been applied extensively in the research literature, especially as a way to deal with uncertainty [12,13,15,16,17,18,19,20,21]. Authors have used various OWA-based applications, which include techniques for randomized queries for searching the Web [22,23], applying aggregation operators based on fuzzy concepts for recommender systems [24], social networking [25], GIS applications [6,17], environments [26,27,28], and a combination of OWA and opinion mining for book recommendation [29]. OWA is also used in the context of sport management [30] and for analyzing the talents and skills of players in different sports [3]. The frequent use of OWA in multi-criteria decision-making has been reported in a range of studies [5,19,31,32,33,34,35], while fuzzy methods have also been reported to yield impressive results in decision-making problems [36,37,38,39,40,41].
Ordered weighted aggregation (OWA) can be termed a function from $\mathbb{R}^n \to \mathbb{R}$ with an associated weight vector $W$ such that $\sum_{k=1}^{n} W_k = 1$ and $W_k \in [0, 1]$. Mathematically, it is given as:

$$\mathrm{OWA}(d_1, d_2, \ldots, d_n) = \sum_{k=1}^{n} W_k C_k \tag{1}$$

where $C_k$ is the $k$th largest of the values $d_1, d_2, \ldots, d_n$. We primarily intended to incorporate human-specific aggregation into the (book) recommendation process with the help of OWA [42]. Questions such as how many books to recommend and how many users to involve (some, almost all, most, etc.) must therefore be addressed, which is why we use linguistic quantifiers. For a fuzzy linguistic quantifier, we define the function $Q(r)$ for a relative quantifier as:

$$Q(r) = \begin{cases} 0 & \text{if } r < a \\ \dfrac{r - a}{b - a} & \text{if } a \le r \le b \\ 1 & \text{if } r > b \end{cases} \tag{2}$$

where $Q(0) = 0$, there exists $r \in [0, 1]$ such that $Q(r) = 1$, and $a, b \in [0, 1]$. Under these conditions, $Q: [0, 1] \to [0, 1]$.

The weights $W_k$ for the OWA operator are calculated by the following equation [4,17]:

$$W_k = Q\!\left(\frac{k}{m}\right) - Q\!\left(\frac{k - 1}{m}\right) \tag{3}$$

where $k = 1, 2, \ldots, m$.
Different weights can be obtained by using different linguistic quantifiers. For example, the ‘most’ linguistic quantifier has $a = 0.3$ and $b = 0.8$; using this quantifier, books that are recommended by most universities are preferred. Similarly, ‘as many as possible’ and ‘at least half’ are other quantifiers, for which the values of $(a, b)$ are $(0.5, 1)$ and $(0, 0.5)$, respectively. Graphical representations of the fuzzy linguistic quantifiers ‘most’, ‘as many as possible’, and ‘at least half’ are shown in Figure 1.
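As a sketch of Equations (2) and (3), the following snippet (ours, not from the paper) generates the quantifier-based OWA weights for m = 7 rankers. Note that the ‘most’ quantifier assigns zero weight to the first (best-ranked) position, which is precisely the weight-assignment issue that motivates the modification proposed next.

```python
def relative_quantifier(a, b):
    """Relative fuzzy quantifier Q(r) of Equation (2), for parameters a < b."""
    def Q(r):
        if r < a:
            return 0.0
        if r <= b:
            return (r - a) / (b - a)
        return 1.0
    return Q

def owa_weights(Q, m):
    """OWA weights of Equation (3): W_k = Q(k/m) - Q((k-1)/m)."""
    return [Q(k / m) - Q((k - 1) / m) for k in range(1, m + 1)]

most = owa_weights(relative_quantifier(0.3, 0.8), 7)            # 'most'
at_least_half = owa_weights(relative_quantifier(0.0, 0.5), 7)   # 'at least half'
as_many = owa_weights(relative_quantifier(0.5, 1.0), 7)         # 'as many as possible'
# Note: most[0] == 0, i.e., the top-ranked position receives zero weight.
```

In each case the weights telescope to Q(1) = 1, so they sum to one as required.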
Now we consider a situation in which the voters need to be ranked, i.e., rankers are valued and influence decision-making [43,44]. We suggest OWA with a modification to the weight assignment of Equation (3) and term it OWA (most preferred first). For this, we define the weight assignment as:

$$W_k = \frac{u + 1 - k}{N} \tag{4}$$

where $u$ is the total number of universities, $N = \sum_{k=1}^{u} k = \frac{u(u+1)}{2}$, and $k$ takes values from 1 to $u$. As required, $W_k \in [0, 1]$ and $\sum_{k=1}^{u} W_k = 1$.

OWA (most preferred first) is given as:

$$\mathrm{OWA\ (most\ preferred\ first)} = \sum_{k=1}^{n} W_k C_k \tag{5}$$

where $W_k$ is the weight obtained from Equation (4) and $C_k$ is the $k$th largest of the positional scores given to a book (cf. Equation (1)).
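A minimal sketch of Equations (4) and (5), with our own function names: generate the rank-based weights and apply them to the descending-ordered positional scores. Unlike the quantifier-based weights, the best rank always receives the largest, nonzero weight.

```python
def mpf_weights(u):
    """Weights of Equation (4): W_k = (u + 1 - k) / N, with N = u(u+1)/2."""
    n_total = u * (u + 1) // 2
    return [(u + 1 - k) / n_total for k in range(1, u + 1)]

def owa_most_preferred_first(scores, weights):
    """Equation (5): weighted sum over the descending-ordered scores."""
    ordered = sorted(scores, reverse=True)
    return sum(w * c for w, c in zip(weights, ordered))

weights = mpf_weights(7)   # seven ranked universities, as in the paper
# weights[0] == 0.25, and the weights decrease strictly with rank.
```

For u = 7 the weights are (7/28, 6/28, …, 1/28); they sum to one and decrease strictly, so no ranker is zeroed out.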
The application of the above-suggested concept can influence human-specific problems where rankers or experts need to be assigned weights, i.e., recommendations are given by the experts. These problems may have a wide domain, including judgment, human intelligence, voting results, and any problem that involves consensus [45,46].

3. Proposed Recommendation Strategy Using OWA (Most Preferred First)

We intend to give higher weights to the best-ranked entity; therefore, $W_k > W_m$ whenever $k < m$, i.e., for three ordered ranked universities Univ_1, Univ_2, and Univ_3, we must have $W_1 > W_2 > W_3$. Here, we aim to propose a recommendation strategy for books. Thus, we claim that it is important to know the experts’ ranking. The suggested technique, represented by Equation (5), can weight those experts and, hence, generate more appropriate recommendations. Keeping this concept in mind, we proceed with the recommendation.
We intend to recommend books for computer science undergraduate students. Initially, all possible books were considered for the experiment. However, without any limit or criteria, this would have resulted in huge data and wasted storage. Therefore, we filtered the data by applying the PAS concept [11] and incorporated books from top universities as prescribed in their syllabi. Positional aggregation-based scoring (PAS) is used in information retrieval, search engines, and ranking problems. The basic idea behind PAS is to score a document or item based on the relevance of its content to the query while also taking into account the position of the query terms or items under consideration within the document. PAS works by first identifying the position of each query term within the document. Once the position of each query term has been identified, PAS aggregates these positions to produce a score for the document. This aggregation can be done in a number of ways, such as by taking the sum, mean, or maximum position of the query terms within the document; in our case, a score is given to the ranking position of each book, and the final value is then aggregated. A detailed explanation can be found in [47,48].
The intuition behind this approach is that documents containing query terms in close proximity to each other are more likely to be relevant to the query. By taking into account the position of the query terms within the document, PAS is able to produce more accurate and relevant search results. The top Indian universities from the QS ranking [49] have been used; specifically, the top Indian institutes included in the world ranking are examined in our work. Only ‘computer science’ is taken as the subject of interest, as we only intended to show how human-centric aggregation can perform; once demonstrated on a smaller dataset, the approach can easily be extended to a larger one. Hence, courses in computer science, such as computer networks, database concepts, etc., are searched in these institutions, and their recommended books are stored. Only seven Indian universities have a position in the QS World Ranking (Table 1).
The positional aggregation-based scoring (PAS) technique has been used to quantify the ranking, which assigns a numerical value corresponding to the rank of the books. For details, please see [48].
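The exact PAS formula is specified in [48]; purely as an illustration, a simple linear position-based score (our assumption, not necessarily the formula of [48]) can be sketched as:

```python
def positional_score(rank, list_length):
    """Hypothetical linear positional score: rank 1 -> 1.0,
    decreasing by 1/list_length per position (illustrative only)."""
    return (list_length - rank + 1) / list_length
```

For example, with 16 candidate books, the top-ranked book would score 1.0 and the second-ranked 0.9375 under this sketch.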
The PAS technique tries to assign a maximum value to the best-ranked university and quantifies every ranking to a numerical value. The OWA-based approach for Book Recommendation (most preferred first) is shown in Figure 2. Once we find the positional score for books, we assign weights for ranked universities. We use Equations (4) and (5) to find the final score of books. The sorted value gives the book’s ranking.
Only those courses which are kept in the curriculum by these leading institutions have been taken into consideration for the experiment. A complete list is given in Table 2, where a check mark shows that the course is available and a cross indicates non-availability.
The universities usually display the recommended books on their websites, and we have, to the best of our ability, tried to fetch all those books and categorize distinct books differently. There were some courses at some universities where the books were not displayed, and in these cases, we reached out by email to gather the required information. Ten different courses have been added in Table 2. It is clear from the table that not all universities have published lists of recommended books on a topic on their respective websites. At the same time, not every book has been recommended by all universities. As a result, we collected 158 different books for the above 10 courses, which were then included in the experimental procedure. The process of selecting books only from ranked universities reduces the huge number of related books available, making the procedure easier.

4. Results and Discussions

4.1. Evaluation Metrics

There are different metrics for the evaluation of recommender systems in the literature [47,50,51,52,53,54]. Some are veracity measures, i.e., they measure accuracy: the higher the value, the better the result. Others are fallacy measures, i.e., a lower value indicates better performance. We have used four veracity measures and four fallacy measures to assess the proposed mechanism’s performance. As veracity measures, we have used P@10, Mean Average Precision (MAP), Mean Reciprocal Rank (MRR), and the Modified Spearman’s Rank Correlation Coefficient (MSRCC). As fallacy measures, we have used FPR@10, FNR@10, Root Mean Square Error (RMSE), and Mean Absolute Error (MAE). The list of evaluation metrics is given below.
i. P@10
ii. FPR@10
iii. FNR@10
iv. Mean Average Precision (MAP)
v. Mean Absolute Error (MAE)
vi. Mean Reciprocal Rank (MRR)
vii. Root Mean Square Error (RMSE)
viii. Modified Spearman’s Rank Correlation Coefficient (MSRCC)

4.1.1. P@10

We denote the precision at the top-10 positions as P@10 and define it for our purpose as:

$$P@10 = \frac{\text{Number of books endorsed by the user and recommended in the top 10 positions}}{10} \tag{6}$$
P@10 is obtained by comparing the ranking that emerges by applying OWA (most preferred first) with the experts’ ranking. These values are shown in Figure 3a.

4.1.2. FPR@10

FPR@10 denotes the false positive rate for the top 10 positions, which is defined as follows:

$$FPR@10 = \frac{\text{Number of books in the top 10 positions that are not liked by the customer}}{10} \tag{7}$$

A “false positive” occurs when recommended items differ from the user’s preferred items. This situation leads to customer irritation and is hence treated as the worst-case scenario: it may damage customer relationships and make the customer less likely to make additional purchases.
FPR@10 is obtained by the comparison of the ranked position that appears by applying OWA (most preferred first) techniques with the experts’ ranking. These values are shown in Figure 3b.

4.1.3. FNR@10

The false negative rate is an error that captures the situation in which the recommendation technique misses items that are preferred by customers. The false negative rate for the top 10 positions is denoted FNR@10. In the context of our problem, we define it as follows:

$$FNR@10 = \frac{\text{Number of books liked by the customer but not recommended in the top 10 positions}}{10} \tag{8}$$
FNR@10 is shown in Figure 7. These values are obtained by comparing the ranking obtained by applying OWA with linguistic quantifier techniques with the experts’ ranking.
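The three top-10 metrics above can be sketched as follows (our own helper names; the lists are assumed to hold book identifiers):

```python
def p_at_10(recommended_top10, liked):
    """P@10 (Equation (6)): fraction of the top-10 recommendations the user endorsed."""
    return sum(1 for b in recommended_top10[:10] if b in liked) / 10

def fpr_at_10(recommended_top10, liked):
    """FPR@10 (Equation (7)): fraction of the top-10 recommendations not liked."""
    return sum(1 for b in recommended_top10[:10] if b not in liked) / 10

def fnr_at_10(recommended_top10, liked_top10):
    """FNR@10 (Equation (8)): fraction of the user's top-10 books missing from
    the recommendations."""
    return sum(1 for b in liked_top10[:10] if b not in recommended_top10) / 10
```

When both rankings are complete lists of ten items each, FPR@10 and FNR@10 necessarily coincide, which matches the observation made later in the discussion.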

4.1.4. Mean Average Precision

Mean Average Precision (MAP) is mathematically defined as:

$$\mathrm{MAP} = \frac{1}{n} \sum_{i=1}^{n} p(C_i) \tag{9}$$

where $p(C_i)$ represents the precision for the $i$th customer and $n$ is the total number of customers involved in the experiment.

4.1.5. Mean Absolute Error

The Mean Absolute Error (MAE) measures how close the outcomes are to the actual results. It is given by:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} |O_i - A_i| \tag{10}$$
In the above Equation (10), the observed values and actual values are represented as Oi and Ai, respectively, where n symbolizes the number of observations.
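Equations (9) and (10) reduce to simple means; a sketch, with our own function names:

```python
def mean_average_precision(per_customer_precision):
    """MAP (Equation (9)): mean of the per-customer precision values."""
    return sum(per_customer_precision) / len(per_customer_precision)

def mean_absolute_error(observed, actual):
    """MAE (Equation (10)): mean of |O_i - A_i| over n observations."""
    return sum(abs(o - a) for o, a in zip(observed, actual)) / len(observed)
```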

4.1.6. Mean Reciprocal Rank

Let $r$ denote the rank that the proposed OWA (most preferred first) approach assigns to an item that is ranked first in the experts’ ranking. The Reciprocal Rank (RR) is given as:

$$\mathrm{RR} = \frac{1}{r} \tag{11}$$

The Mean Reciprocal Rank (MRR) is calculated over the first-ranked products of all the items. Mathematically, it is given by:

$$\mathrm{MRR} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{RR}_i \tag{12}$$

where $n$ represents the total number of items and $\mathrm{RR}_i$ is the reciprocal rank of the $i$th item. MRR measures how relevant the product is for a customer, as it concerns the best item. If the position of the item ranked first by the experts coincides with its position in the proposed scheme, the item is of great interest; in this case, the MRR comes out to be 1, which is the best case.

4.1.7. Root Mean Square Error

The root mean square error is used to measure the error value. It is defined as:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (Y_i - y_i)^2} \tag{13}$$

where $Y_i$ denotes the actual ranking, which is basically the experts’ recommendation, and $y_i$ denotes the system’s prediction.
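Equations (11)–(13) can be sketched as follows (our own helper names; `first_item_ranks` holds, for each item, the rank r it received):

```python
import math

def mean_reciprocal_rank(first_item_ranks):
    """MRR (Equation (12)): mean of 1/r over the observed ranks."""
    return sum(1 / r for r in first_item_ranks) / len(first_item_ranks)

def rmse(actual, predicted):
    """RMSE (Equation (13)): root of the mean squared rank difference."""
    n = len(actual)
    return math.sqrt(sum((Y - y) ** 2 for Y, y in zip(actual, predicted)) / n)
```

As noted above, MRR reaches its best value of 1 exactly when every first-choice item is also ranked first by the system.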

4.1.8. Modified Spearman’s Rank Correlation Coefficient

The modified Spearman’s rank correlation coefficient was suggested after Spearman’s correlation coefficient proved incapable of producing correct results for partial lists. It is mathematically defined as:

$$r_s = 1 - \frac{\sum_{i=1}^{m} (i - V_i)^2}{m\left(\left[\max_{j=1,\ldots,m} V_j\right]^2 - 1\right)} \tag{14}$$

where the full list and the partial list are given by $[1, 2, \ldots, m]$ and $[V_1, V_2, \ldots, V_m]$, respectively.
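Assuming the reconstruction of Equation (14) given above, a sketch (ours; `partial[i-1]` holds the position V_i assigned to item i, and max V_j is assumed greater than 1):

```python
def msrcc(partial):
    """Modified Spearman's rank correlation coefficient (Equation (14)),
    under the reconstruction and assumption stated in the lead-in."""
    m = len(partial)
    numerator = sum((i - v) ** 2 for i, v in enumerate(partial, start=1))
    denominator = m * (max(partial) ** 2 - 1)
    return 1 - numerator / denominator
```

A partial list that matches the full list exactly yields r_s = 1; any displacement lowers the coefficient.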

4.2. Experimental Results

The methods discussed in Section 3 are illustrated here. Since the procedure is the same for the books of every course, we demonstrate it using the books of only one course; for simplicity, the books on ‘Data Structure’ are considered. The ranked universities rank the books differently: the same book may appear in several universities’ lists, or a book may appear in only one. The respective university rankings of the books on Data Structure are listed in Table 3. The books on Data Structure (DS) are represented by the codes ‘DS1’, ‘DS2’, etc., and we can easily see that the book DS1 is placed at the first rank by Univ_1. At the same time, no two universities have recommended the same book at the first rank. Moreover, no book is included in the rankings of more than two universities; only the books DS1 and DS3 appear twice in the top-ranked universities’ book rankings.
A total of 16 books on Data Structure have been included in the ranking of the top seven universities. As can be seen, the first-ranked university has only one book in its prescribed syllabus. The positional aggregation technique-based score has been obtained by the procedure stated in [11] and discussed in Section 3. For the sake of simplicity and to save space, we have represented the final score of the books of one course (Data Structure) only (Table 4). In this paper, we intend to show how human-centric aggregation can perform and how the changes in weight assignment will enhance the aggregation for a problem that incorporates human-specific decisions. Therefore, the results that have been obtained by applying OWA (most preferred first) are shown. The method for a single book is illustrated below. Here, we have focused on criteria related to the selection of books by a university. Therefore, we have classified seven selection criteria, which is basically the total number of universities under consideration. The weight assignment formula, as suggested in Equation (4), is used to calculate weights for the seven criteria. The weights for OWA (most preferred first) are given in Table 5.
Furthermore, the existing strategies have been compared with the proposed mechanism, which will be discussed in the subsequent section. The final values for positional score, OWA (at least half), OWA (as many as possible), OWA (most), and OWA (most preferred first) for books on Data Structure (DS) are shown in Table 6. The tabulated score indicates which method to consider the best for the books concerned.
The procedure for calculation of OWA (most preferred first) using positional score is as follows:
The positional scores for DS1, one entry per university in rank order, are:

$$d = (1, 0, 0, 0.9375, 0, 0, 0)$$

These values are re-ordered in descending order, giving:

$$C = (1, 0.9375, 0, 0, 0, 0, 0)$$

From Equations (4) and (5), we get:

$$\mathrm{OWA\ (most\ preferred\ first)} = \sum_{k=1}^{7} W_k C_k = (0.25, 0.21428, 0.17857, 0.14285, 0.10714, 0.07142, 0.03571) \cdot (1, 0.9375, 0, 0, 0, 0, 0)$$

$$= 0.25 \times 1 + 0.21428 \times 0.9375 + 0 + \cdots + 0 = 0.45089$$
These scores are calculated for all the books, and the scores help in sorting the books, which provides a platform for the ranking of the books for the above method. The final OWA scores with the ranking of the respective books for the linguistic quantifier ‘most preferred first’ are shown in Table 6.
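The DS1 calculation can be reproduced in a few lines (a sketch; the weights follow Equation (4) with u = 7):

```python
weights = [(8 - k) / 28 for k in range(1, 8)]   # Equation (4): (u+1-k)/N, N = 28
scores_ds1 = [1, 0, 0, 0.9375, 0, 0, 0]         # positional scores for DS1
ordered = sorted(scores_ds1, reverse=True)       # descending re-ordering
owa_score = sum(w * c for w, c in zip(weights, ordered))
# owa_score is approximately 0.4509
```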
The OWA (most preferred first) score comes out to be approximately 0.4509 for DS1, as computed above. In the same way, the scores for all books have been calculated, which lays the foundation for ranking the books under the different approaches. The ranking of the DS books for OWA (most preferred first) is given in Table 6. Likewise, we have calculated the rankings of all books for the different courses by all related methods. All of these rankings are considered in the comparison section, but the scores are not shown, to save space and avoid repetition. In total, 158 different books were filtered from the huge number of available books, which eases the complexities of the recommendation process. All the related approaches, namely PAS, OWA (at least half), OWA (as many as possible), and OWA (most), have been tested on the same set of books, and all these values have been stored. The performance of OWA with the quantifier ‘most preferred first’ for all the books of all the courses, with respect to eight different parameters, is shown in Figure 3.

Performance Evaluation Mechanism

The above methods, discussed in the preceding sections, explain how books are recommended for students on a particular subject. The QS ranking [49] is considered to be the base of the complete recommendation process. The books recommended by the top-ranked Indian Institute, according to the QS World University Ranking, are considered for experiments. Several techniques have been applied previously to filter the vast records for recommending the most promising books that have been recommended to readers. These techniques include the PAS technique, OWA (at least half), OWA (as many as possible), and OWA (most). These techniques are compared here with the proposed methodology, which incorporates OWA with modified weight assignment formula. We argue that doing it this way is more human-centric, and we term it as OWA (most preferred first). Since the PAS technique does not involve any weights to be assigned in its process of recommendation, we argue that the process is an unweighted aggregation. The scores are calculated using the PAS technique, and these scores are assigned fuzzy weights using OWA ((at least half), (most), and (as many as possible)).
In the proposed scheme, we assign weights to the universities according to their position in the QS ranking. We claim that in this way an appropriate recommendation can be made, as a priority-based weight assignment is used, which advocates a best-gets-best philosophy. However, who decides which recommendation process performs well, or which is the best? Since there is no clear protocol for designing a recommender system or judging a recommendation process, it is necessary to evaluate the system, and this evaluation is obviously relative rather than absolute. Usually, prediction accuracy is considered the de facto parameter for evaluating recommender systems [51]: it indicates how accurately the recommendation has been made by the adopted approach, and users obviously prefer a more accurate system. Generally, accuracy is classified into the accuracy of rating predictions, the accuracy of usage predictions, and the accuracy of item rankings.
Since we are advocating in favor of human-centric aggregation, we have taken experts’ suggestions for the evaluation of the proposed system. These experts are senior academicians and computer scientists familiar with the Indian education system. We provided the details of the books of the respective courses to them. We adopted an evaluation scheme based on explicit feedback. The experts’ feedback on their choice of books was recorded, and the books were ranked. This ranking was then compared with the ranking of the books obtained by the aforementioned schemes. Therefore, the human evaluation by experts would boost the performance evaluation procedure of the adopted recommendation process. The performance evaluation mechanism is presented diagrammatically in Figure 4.

4.3. Discussion

The comparison is made based on eight different parameters. These parameters, with their details, have been mentioned in Section 4.1. In addition, four parameters are used as veracity measures, and another four are used as fallacy measures. The values of the veracity measures are shown in Figure 5. It is very clear from the graph that OWA (most preferred first) has outperformed the other techniques for all these parameters. The OWA with at least half quantifier, however, has the same P@10 and MAP as that obtained by OWA (most preferred first). In contrast, the value of the modified Spearman’s correlation coefficient and mean reciprocal rank is better for OWA (most preferred first) than OWA (at least half).
The improvement in the performance of OWA (most preferred first) for Mean Absolute Error (MAE) with respect to OWA (at least half), (as many as possible) and (most) are 6.66%, 28.20%, and 23.28%, respectively. In contrast, with respect to PAS, there is an improvement of 8.94%. For Root Mean Square Error (RMSE), 14.64% improved results have been achieved with respect to PAS, whereas, while comparing OWA (most preferred first) with OWA (at least half), (as many as possible) and (most), it is remarkable that percentages of improvements are 4.24%, 33.46%, and 24.55%, respectively (Figure 6). In addition to these results, we can say that with the help of ranked weights, the errors can be reduced, which leads to more accurate results.
In Figure 7, we can easily notice that FPR@10 and FNR@10 have the same values for all the approaches. This is because we have a complete list; for a partial list, these two parameters may take different values. Interestingly, the result for OWA (at least half) is the same as for OWA (most preferred first), and these results are on par with the other operators. The false rate is reduced by 36% with respect to OWA (as many as possible) and by 17.85% with respect to PAS. These improvements clearly indicate how powerful OWA can be when adapted to human-centric situations and how closely it relates to human perception and aggregation.
This suggests that these quantifiers can be useful in many areas of application, especially in recommender systems. Furthermore, human-centric aggregation does not need any prior information about user activity. Thus, it enhances the approach by reducing time and saving storage space that is usually required to acquire this knowledge.
Further, with the help of OWA and the modified weight assignment formula for OWA (most preferred first), it is observed that human-centric terms, such as intention, intelligence, judgment, and vision, have been incorporated into this work. Each step in the procedure is based upon human perception, which we argue is human-centric. After all, the primary objective of the paper is to apply human-centric aggregation. Initially, the intention of the ranked universities is taken by involving their recommended books, and then the books recommended by experts are considered, which keeps the vision of those experts. Next, performance evaluation shows the judgment, and finally, what we recommend is classified as intelligence. Thus, different aspects of human perceptions have been assimilated. The concept is diagrammatically demonstrated in Figure 8.

5. Conclusions

In this paper, we incorporate Yager’s perspective on linguistic data summaries to explore the usefulness of human-centric aggregation in decision-making problems and to gauge how effective it can be in designing recommendation systems. The proposed OWA (most preferred first), which modifies the weight assignment of the OWA operator, has shown improved results with respect to eight parameters over the previous OWA with different linguistic quantifiers. The results clearly suggest how powerfully human-centric aggregation can serve this purpose. The presented methodology is envisaged as a platform for the future participation of these aggregation techniques in the design of recommender systems. A key point of the presented idea is its independence from the rating scale: recommendations have been made with more than 78% accuracy, a 50% improvement over previous approaches, without exploiting the rating scale. Another key contribution of the work is its clarity and its independence from users’ prior preferences, which provides a solution to cold start issues. The proposed approach not only helps students and graduates find the best books but also lays a foundation for designing recommender systems using numerical aggregation from human perspectives.
Exploring other soft computing approaches, such as the Analytic Hierarchy Process (AHP) and TOPSIS, would be interesting and is a potential direction for future research. However, the primary aim here is to frame a human-centric aggregation approach for ranking the top books for students. In addition, the approach can be integrated into recommender system design wherever human experts play an important role, especially for recommendations to new users or for specialized domains such as scientific events or medical treatment and diagnosis. Further, the scientific community may look into incorporating similar approaches to build fair and inclusive ranking, recommendation, and aggregation systems.

Author Contributions

Conceptualization, S.S.S.; writing—original draft preparation, S.S.S.; Data curation, S.S.S.; Methodology, S.S.S.; Visualization, S.S.S.; writing—review and editing, A.A., M.A.A., R.A., S.H.H. and D.Ø.M.; project administration, R.A., S.S.S., M.A.A. and D.Ø.M.; supervision, R.A. and M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are available on request from the corresponding authors.

Conflicts of Interest

On behalf of all authors, the corresponding authors state that there is no conflict of interest.

References

1. Kacprzyk, J.; Yager, R.R.; Merigo, J.M. Towards human-centric aggregation via ordered weighted aggregation operators and linguistic data summaries: A new perspective on Zadeh's inspirations. IEEE Comput. Intell. Mag. 2019, 14, 16–30.
2. Qin, Y.; Qi, Q.; Shi, P.; Lou, S.; Scott, P.J.; Jiang, X. Multi-Attribute Decision-Making Methods in Additive Manufacturing: The State of the Art. Processes 2023, 11, 497.
3. Ahamad, G.; Naqvi, S.K.; Beg, M.M.S. An OWA-Based Model for Talent Enhancement in Cricket. Int. J. Intell. Syst. 2015, 31, 763–785.
4. Lee, M.-C.; Chang, J.-F.; Chen, J.-F. Fuzzy preference relations in group decision making problems based on ordered weighted averaging operators. Int. J. Artif. Intell. Appl. Smart Devices 2014, 2, 11–22.
5. Yager, R. On ordered weighted averaging aggregation operators in multicriteria decisionmaking. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190.
6. Rinner, C.; Malczewski, J. Web-enabled spatial decision analysis using Ordered Weighted Averaging (OWA). J. Geogr. Syst. 2002, 4, 385–403.
7. Yager, R.R.; Kacprzyk, J.; Beliakov, G. Recent Developments in the Ordered Weighted Averaging Operators: Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2011.
8. Yager, R.R.; Kacprzyk, J. The Ordered Weighted Averaging Operators: Theory and Applications; Springer Science & Business Media: Berlin, Germany, 2012.
9. Zhou, L.; Chen, H. A generalization of the power aggregation operators for linguistic environment and its application in group decision making. Knowl.-Based Syst. 2012, 26, 216–224.
10. Du, X.; Lu, K.; Nie, Y.; Qiu, S. Information Fusion Model of Group Decision Making Based on a Combinatorial Ordered Weighted Average Operator. IEEE Access 2023, 11, 4694–4702.
11. Sohail, S.S.; Siddiqui, J.; Ali, R. An OWA-Based Ranking Approach for University Books Recommendation. Int. J. Intell. Syst. 2017, 33, 396–416.
12. Yager, R.R. Using fuzzy measures for modeling human perception of uncertainty in artificial intelligence. Eng. Appl. Artif. Intell. 2019, 87, 103228.
13. Yager, R.R. OWA aggregation with an uncertainty over the arguments. Inf. Fusion 2019, 52, 206–212.
14. Ahn, B.S. The OWA Aggregation With Uncertain Descriptions on Weights and Input Arguments. IEEE Trans. Fuzzy Syst. 2007, 15, 1130–1134.
15. Kacprzyk, J.; Zadrożny, S. Towards a general and unified characterization of individual and collective choice functions under fuzzy and nonfuzzy preferences and majority via the ordered weighted average operators. Int. J. Intell. Syst. 2008, 24, 4–26.
16. Emrouznejad, A.; Marra, M. Ordered Weighted Averaging Operators 1988–2014: A Citation-Based Literature Survey. Int. J. Intell. Syst. 2014, 29, 994–1014.
17. Malczewski, J.; Rinner, C. Exploring multicriteria decision strategies in GIS with linguistic quantifiers: A case study of residential quality evaluation. J. Geogr. Syst. 2005, 7, 249–268.
18. Herrera, F.; Herrera-Viedma, E.; Verdegay, J. A model of consensus in group decision making under linguistic assessments. Fuzzy Sets Syst. 1996, 78, 73–87.
19. Yager, R.R. Multicriteria Decision-Making Using Fuzzy Measures. Cybern. Syst. 2015, 46, 150–171.
20. Kacprzyk, J.; Zadrożny, S. Linguistic summarization of the contents of Web server logs via the Ordered Weighted Averaging (OWA) operators. Fuzzy Sets Syst. 2016, 285, 182–198.
21. Yager, R.R. Applications and extensions of OWA aggregations. Int. J. Man-Mach. Stud. 1992, 37, 103–122.
22. Beg, M.S. A subjective measure of web search quality. Inf. Sci. 2005, 169, 365–381.
23. Beg, M.S. User feedback based enhancement in web search quality. Inf. Sci. 2005, 170, 153–172.
24. Yager, R.R. Fuzzy logic methods in recommender systems. Fuzzy Sets Syst. 2003, 136, 133–149.
25. Yager, R.R. Intelligent social network analysis using granular computing. Int. J. Intell. Syst. 2008, 23, 1197–1219.
26. Malczewski, J. Ordered weighted averaging with fuzzy quantifiers: GIS-based multicriteria evaluation for land-use suitability analysis. Int. J. Appl. Earth Obs. Geoinf. 2006, 8, 270–277.
27. Rasmussen, B.M.; Melgaard, B.; Kristensen, B. GIS for Decision Support: Designation of Potential Wetlands (in Danish with English Summary); 2002. Available online: https://www.researchgate.net/publication/266078073_GIS_for_decision_support_designation_of_potential_wetlands_In_Danish_with_English_summary (accessed on 1 March 2023).
28. Zabihi, H.; Alizadeh, M.; Langat, P.K.; Karami, M.; Shahabi, H.; Ahmad, A.; Said, M.N.; Lee, S. GIS Multi-Criteria Analysis by Ordered Weighted Averaging (OWA): Toward an Integrated Citrus Management Strategy. Sustainability 2019, 11, 1009.
29. Sohail, S.S.; Siddiqui, J.; Ali, R. Book Recommender System using Fuzzy Linguistic Quantifier and Opinion Mining. In ISTA 2016: Intelligent Systems Technologies and Applications; Advances in Intelligent Systems and Computing; Corchado Rodriguez, J., Mitra, S., Thampi, S., El-Alfy, E.S., Eds.; Springer: Cham, Switzerland, 2016; Volume 530.
30. Merigó, J.M.; Gil-Lafuente, A.M. Decision making with the OWA operator in sport management. Expert Syst. Appl. 2011, 38, 10408–10413.
31. Yager, R. Multiple objective decision-making using fuzzy sets. Int. J. Man-Mach. Stud. 1977, 9, 375–382.
32. Xu, Z.S.; Yager, R.R. Dynamic intuitionistic fuzzy multi-attribute decision making. Int. J. Approx. Reason. 2008, 48, 246–262.
33. Yager, R. OWA Aggregation Over a Continuous Interval Argument With Applications to Decision Making. IEEE Trans. Syst. Man Cybern. Part B 2004, 34, 1952–1963.
34. Yager, R.R.; Gumrah, G.; Reformat, M.Z. Using a web Personal Evaluation Tool—PET for lexicographic multi-criteria service selection. Knowl.-Based Syst. 2011, 24, 929–942.
35. Makropoulos, C.; Butler, D. Spatial ordered weighted averaging: Incorporating spatially variable attitude towards risk in spatial multi-criteria decision-making. Environ. Model. Softw. 2006, 21, 69–84.
36. Ishibuchi, H.; Nozaki, K.; Yamamoto, N.; Tanaka, H. Selecting fuzzy if-then rules for classification problems using genetic algorithms. IEEE Trans. Fuzzy Syst. 1995, 3, 260–270.
37. Fernandez, A.; Herrera, F.; Cordon, O.; del Jesus, M.J.; Marcelloni, F. Evolutionary Fuzzy Systems for Explainable Artificial Intelligence: Why, When, What for, and Where to? IEEE Comput. Intell. Mag. 2019, 14, 69–81.
38. Couso, I.; Borgelt, C.; Hullermeier, E.; Kruse, R. Fuzzy Sets in Data Analysis: From Statistical Foundations to Machine Learning. IEEE Comput. Intell. Mag. 2019, 14, 31–44.
39. Wilbik, A.; Vanderfeesten, I.; Bergmans, D.; Heines, S.; Turetken, O.; van Mook, W. Towards a Flexible Assessment of Compliance with Clinical Protocols Using Fuzzy Aggregation Techniques. Algorithms 2023, 16, 109.
40. Zhou, S.-M.; Chiclana, F.; John, R.I.; Garibaldi, J.M. Type-1 OWA operators for aggregating uncertain information with uncertain weights induced by type-2 linguistic quantifiers. Fuzzy Sets Syst. 2008, 159, 3281–3296.
41. Zhou, S.-M.; John, R.I.; Chiclana, F.; Garibaldi, J.M. On aggregating uncertain information by type-2 OWA operators for soft decision making. Int. J. Intell. Syst. 2010, 25, 540–558.
42. Merigo, J.M. Probabilities in the OWA operator. Expert Syst. Appl. 2012, 39, 11456–11467.
43. Zhou, L.; Chen, H.; Liu, J. Generalized power aggregation operators and their applications in group decision making. Comput. Ind. Eng. 2012, 62, 989–999.
44. Hussain, W.; Merigo, J.M.; Gao, H.; Alkalbani, A.M.; Rabhi, F.A. Integrated AHP-IOWA, POWA Framework for Ideal Cloud Provider Selection and Optimum Resource Management. IEEE Trans. Serv. Comput. 2021, 370–382.
45. Hussain, W.; Merigó, J.M.; Gil-Lafuente, J.; Gao, H. Complex nonlinear neural network prediction with IOWA layer. Soft Comput. 2023, 1–11.
46. Xu, Z.; Yager, R.R. Power-Geometric Operators and Their Use in Group Decision Making. IEEE Trans. Fuzzy Syst. 2009, 18, 94–105.
47. Sohail, S.S.; Siddiqui, J.; Ali, R. Book Recommendation technique using rank based scoring method. In Proceedings of the National Conference on Recent Innovations & Advancements in Information Technology (RIAIT-2014), BGSBU, Rajouri, India, 26–27 November 2014; pp. 140–146.
48. Sohail, S.S.; Siddiqui, J.; Ali, R. A Novel Approach for Book Recommendation using Fuzzy based Aggregation. Indian J. Sci. Technol. 2017, 10, 1–30.
49. QS World University Rankings by Subject 2015—Computer Science & Information Systems. Available online: https://www.topuniversities.com/university-rankings/university-subject-rankings/2015/computer-science-information-systems (accessed on 20 February 2023).
50. Sohail, S.S.; Siddiqui, J.; Ali, R. A comprehensive approach for the evaluation of recommender systems using implicit feedback. Int. J. Inf. Technol. 2018, 11, 549–567.
51. Sohail, S.S.; Siddiqui, J.; Ali, R. Feature-Based Opinion Mining Approach (FOMA) for Improved Book Recommendation. Arab. J. Sci. Eng. 2018, 43, 8029–8048.
52. Sohail, S.S.; Siddiqui, J.; Ali, R. Book recommendation system using opinion mining technique. In Proceedings of the 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Mysore, India, 22–25 August 2013; pp. 1609–1614.
53. Bylinskii, Z.; Judd, T.; Oliva, A.; Torralba, A.; Durand, F. What do different evaluation metrics tell us about saliency models? IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 740–757.
54. Shani, G.; Gunawardana, A. Evaluating Recommendation Systems. In Recommender Systems Handbook; Ricci, F., Rokach, L., Shapira, B., Kantor, P., Eds.; Springer: Boston, MA, USA, 2011; pp. 257–297.
Figure 1. Graphical representation of the linguistic quantifiers 'as many as possible', 'at most', and 'at least half', colored violet, blue, and brown, respectively.
Figure 2. OWA-based approach for Book Recommendation (most preferred first).
Figure 3. Results of different parameters using OWA (most preferred first) for books of all courses: (a) P@10; (b) FPR@10; (c) FNR@10; (d) Mean Absolute Error; (e) Modified Spearman's Rank Correlation coefficient; (f) Mean Reciprocal Ranking; (g) Root Mean Square Error.
Figure 4. Performance evaluation scheme for the book recommendation approach.
Figure 5. Comparison of related recommendation approaches for veracity measures including P@10, MAP, MRR, and modified Spearman's rank correlation coefficient.
Figure 6. Comparison of related recommendation approaches for RMSE and MAE.
Figure 7. Comparison of related recommendation approaches for FPR@10 and FNR@10.
Figure 8. Book recommendation from the perspective of the human-centric aggregation approach. The human-centric factors are shown via double and curved arrows representing the connections in the recommendation mechanism.
Table 1. Top-ranked seven universities of India in QS ranking [49].

| Rank Position | University Name |
| 1 | IIT, Bombay |
| 2 | IIT, Delhi |
| 3 | IIT, Kanpur |
| 4 | IIT, Madras |
| 5 | IISC, Bangalore |
| 6 | IIT, Kharagpur |
| 7 | IIT, Roorkee |
Table 2. List of computer science courses that have been included in the syllabus at top universities (✓ = included, ✗ = not included).

| Sequence | Course Title | Univ_1 | Univ_2 | Univ_3 | Univ_4 | Univ_5 | Univ_6 | Univ_7 |
| 1. | Artificial Intelligence | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ |
| 2. | Compiler Design | ✓ | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ |
| 3. | Computer Networks | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ |
| 4. | Discrete Mathematics | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ |
| 5. | Data Structure | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| 6. | Graphics | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ | ✓ |
| 7. | Operating Systems | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ |
| 8. | Principles of Database Systems | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ |
| 9. | Software Engineering | ✓ | ✓ | ✓ | ✗ | ✓ | ✓ | ✓ |
| 10. | Theory of Computation | ✓ | ✓ | ✗ | ✗ | ✓ | ✓ | ✓ |
Table 3. Ranking of books on Data Structure by top universities.

| Rank Position | U1 | U2 | U3 | U4 | U5 | U6 | U7 |
| 1st | DS1 | DS2 | DS4 | DS9 | DS12 | DS9 | DS15 |
| 2nd | x | DS3 | DS5 | DS1 | DS8 | DS14 | DS16 |
| 3rd | x | x | DS6 | DS10 | DS13 | DS10 | x |
| 4th | x | x | DS7 | DS11 | DS3 | x | x |
| 5th | x | x | DS8 | x | x | x | x |
| 6th | x | x | x | x | x | x | x |
| 7th | x | x | x | x | x | x | x |
| 8th | x | x | x | x | x | x | x |
| 9th | x | x | x | x | x | x | x |
| 10th | x | x | x | x | x | x | x |
Table 4. Quantified Positional Score for books on Data Structure.

| Book Code | U1 | U2 | U3 | U4 | U5 | U6 | U7 |
| DS.1 | 1 | 0 | 0 | 0.9375 | 0 | 0 | 0 |
| DS.2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| DS.3 | 0 | 0.9375 | 0 | 0 | 0.8125 | 0 | 0 |
| DS.4 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| DS.5 | 0 | 0 | 0.9375 | 0 | 0 | 0 | 0 |
| DS.6 | 0 | 0 | 0.875 | 0 | 0 | 0 | 0 |
| DS.7 | 0 | 0 | 0.8125 | 0 | 0 | 0 | 0 |
| DS.8 | 0 | 0 | 0.75 | 0 | 0.9375 | 0 | 0 |
| DS.9 | 0 | 0 | 0 | 1 | 0 | 1 | 0 |
| DS.10 | 0 | 0 | 0 | 0.875 | 0 | 0.875 | 0 |
| DS.11 | 0 | 0 | 0 | 0.8125 | 0 | 0 | 0 |
| DS.12 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| DS.13 | 0 | 0 | 0 | 0 | 0.875 | 0 | 0 |
| DS.14 | 0 | 0 | 0 | 0 | 0 | 0.9375 | 0 |
| DS.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| DS.16 | 0 | 0 | 0 | 0 | 0 | 0 | 0.9375 |
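The scores in Table 4 are consistent with assigning a book at rank position p (from Table 3) the value (m − p + 1)/m, where m = 16 is the total number of distinct Data Structure books. This mapping is our inference from the tabulated values, not a formula stated in this section; a minimal Python sketch under that assumption:

```python
def positional_score(p, m=16):
    # Quantified positional score for a book ranked at position p
    # (1 = most preferred) out of m distinct books; unranked books score 0.
    # Inferred mapping: rank 1 -> 1.0, rank 2 -> 0.9375, rank 5 -> 0.75 (cf. Table 4).
    return (m - p + 1) / m

# U5 ranks DS12 first, DS8 second, and DS13 third (Table 3):
scores = [positional_score(p) for p in (1, 2, 3)]
print(scores)  # [1.0, 0.9375, 0.875]
```

These values match the nonzero entries of the U5 column in Table 4, which is why we adopt this mapping for illustration.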
Table 5. Weights assigned to universities by using OWA (most preferred first).

| Ranked University | Weight Assigned |
| Univ_1 | W1 = 0.25 |
| Univ_2 | W2 = 0.21428 |
| Univ_3 | W3 = 0.17857 |
| Univ_4 | W4 = 0.14285 |
| Univ_5 | W5 = 0.10714 |
| Univ_6 | W6 = 0.07142 |
| Univ_7 | W7 = 0.03571 |
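The Table 5 weights follow a linearly decreasing scheme over the n = 7 ranked universities. A short Python sketch, assuming the weights are generated as w_i = 2(n + 1 − i)/(n(n + 1)), an inference on our part that reproduces the tabulated values (which appear truncated rather than rounded):

```python
def most_preferred_first_weights(n):
    # Linearly decreasing OWA weights (most preferred first): the top-ranked
    # position gets the largest weight, and the weights sum to 1.
    # Assumed formula: w_i = 2 * (n + 1 - i) / (n * (n + 1)), i = 1..n.
    return [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

weights = most_preferred_first_weights(7)
print([round(w, 5) for w in weights])
# [0.25, 0.21429, 0.17857, 0.14286, 0.10714, 0.07143, 0.03571]
```

Because the weights sum to 1 and decrease with rank, the aggregation emphasizes the opinion of the highest-ranked university without discarding the others.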
Table 6. Final scores and ranked list of books on Data Structure using OWA (most preferred first).

| Rank Position | Book Code | Score Obtained Using OWA (Most Preferred First) Technique |
| 1st | DS.9 | 0.4642 |
| 2nd | DS.1 | 0.450813 |
| 3rd | DS.3 | 0.408413 |
| 4th | DS.10 | 0.406175 |
| 5th | DS.8 | 0.395025 |
| 6th | DS.4 | 0.25 |
| 7th | DS.12 | 0.25 |
| 8th | DS.15 | 0.25 |
| 9th | DS.5 | 0.234375 |
| 10th | DS.14 | 0.234375 |
| 11th | DS.16 | 0.234375 |
| 12th | DS.6 | 0.21875 |
| 13th | DS.13 | 0.21875 |
| 14th | DS.2 | 0.2142 |
| 15th | DS.7 | 0.203125 |
| 16th | DS.11 | 0.203125 |
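The final scores in Table 6 follow from the standard OWA step: sort a book's per-university positional scores (Table 4) in descending order and take their weighted sum with the Table 5 weights. A hedged Python sketch (the weight vector is regenerated from the linear scheme we infer from Table 5; small differences from Table 6, e.g. 0.4643 vs. the listed 0.4642, are rounding effects):

```python
def owa(arguments, weights):
    # Ordered Weighted Averaging: reorder the arguments in descending
    # order, then take the dot product with the positional weights.
    ordered = sorted(arguments, reverse=True)
    return sum(w * a for w, a in zip(weights, ordered))

n = 7
# Assumed linear weight scheme inferred from Table 5.
weights = [2 * (n + 1 - i) / (n * (n + 1)) for i in range(1, n + 1)]

# Per-university positional scores (U1..U7) from Table 4:
ds9 = [0, 0, 0, 1, 0, 1, 0]       # ranked 1st by both U4 and U6
ds1 = [1, 0, 0, 0.9375, 0, 0, 0]  # 1st by U1, 2nd by U4

print(round(owa(ds9, weights), 4))  # 0.4643 (Table 6 lists 0.4642)
```

Note that the two top weights (0.25 and roughly 0.2143) are applied to DS.9's two first-place scores regardless of which universities produced them; this reordering step is what distinguishes OWA from a plain weighted average over fixed universities.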
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sohail, S.S.; Aziz, A.; Ali, R.; Hasan, S.H.; Madsen, D.Ø.; Alam, M.A. Human-Centric Aggregation via Ordered Weighted Aggregation for Ranked Recommendation in Recommender Systems. Appl. Syst. Innov. 2023, 6, 36. https://doi.org/10.3390/asi6020036
