Article

Multiobjective Learning to Rank Based on the (1 + 1) Evolutionary Strategy: An Evaluation of Three Novel Pareto Optimal Methods

by Walaa N. Ismail 1,2, Osman Ali Sadek Ibrahim 3, Hessah A. Alsalamah 4,5,* and Ebtesam Mohamed 2

1 Department of Management Information Systems, College of Business, Al Yamamah University, Riyadh 11512, Saudi Arabia
2 Faculty of Computers and Information, Minia University, Minia 61519, Egypt
3 Department of Computer Science, Minia University, Minia 61519, Egypt
4 Information Systems Department, College of Computer and Information Sciences, King Saud University, Riyadh 4545, Saudi Arabia
5 Department of Computer Engineering, College of Engineering and Architecture, Al Yamamah University, Riyadh 11512, Saudi Arabia
* Author to whom correspondence should be addressed.
Electronics 2023, 12(17), 3724; https://doi.org/10.3390/electronics12173724
Submission received: 3 August 2023 / Revised: 25 August 2023 / Accepted: 29 August 2023 / Published: 4 September 2023
(This article belongs to the Special Issue Evolutionary Computation Methods for Real-World Problem Solving)

Abstract:
In this research, the authors combine multiobjective evaluation metrics in the (1 + 1) evolutionary strategy with three novel methods of the Pareto optimal procedure to address the learning-to-rank (LTR) problem. From the results obtained, the Cauchy distribution as a random number generator for mutation step sizes outperformed the other distributions used. The aim of using the chosen Pareto optimal methods was to determine which method can give a better exploration–exploitation trade-off for the solution space to obtain the optimal or near-optimal solution. The best combination for that in terms of winning rate is the Cauchy distribution for mutation step sizes with method 3 of the Pareto optimal procedure. Moreover, different random number generators were evaluated and analyzed versus datasets in terms of NDCG@10 for testing data. It was found that the Levy generator is the best for both the MSLR and the MQ2007 datasets, while the Gaussian generator is the best for the MQ2008 dataset. Thus, random number generators clearly affect the performance of ES-Rank based on the dataset used. Furthermore, method 3 had the highest NDCG@10 for MQ2008 and MQ2007, while for the MSLR dataset, the highest NDCG@10 was achieved by method 2. Along with this paper, we provide a Java archive for reproducible research.

1. Introduction

In information retrieval (IR), ranking retrieved documents according to their relevance to a user query is an important task. After receiving the user's query, a ranking system is used to order the retrieved documents by relevance, as shown in Figure 1. Such a ranking system uses an optimization model to order the collection of available documents [1,2]. A number of unsupervised term vector models (TVMs), including the vector space model (VSM), TF-IDF and Okapi BM25, were used in early IR research [2,3]. Based on these models, retrieved documents were rated for relevance to the user's search terms using a single term-weighting scheme (TWS). These methods proved insufficient for developing effective IR systems, in part because scoring approaches such as Okapi BM25 and various language models are limited in their ability to return appropriate search results based on relevance judgments [3,4]. Consequently, multiple scoring methods should be combined to rank retrieved documents for a user's query. Other aspects should also be considered, such as the business importance of documents on the web; among other desirable features, the host server is taken into account when ranking documents. A statistical machine learning approach traditionally focuses on solving a single-objective optimization problem [4,5], that is, minimizing the average loss over a training set. Several additional quantities, including model complexity, are either addressed implicitly by the choice of model class or incorporated into the main objective through weighted regularization terms.
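To make the single-scoring TVM idea concrete, the minimal Java sketch below computes an Okapi BM25 score using the standard formula. It is illustrative only; the class name, parameter defaults and data structures are ours, not taken from the paper or its code archive.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch: Okapi BM25 relevance score of one document for a query.
public class Bm25 {
    static final double K1 = 1.2, B = 0.75;   // commonly used default parameters

    /** idf(t) = ln((N - df + 0.5) / (df + 0.5) + 1), with N = corpus size, df = document frequency. */
    static double idf(int corpusSize, int docFreq) {
        return Math.log((corpusSize - docFreq + 0.5) / (docFreq + 0.5) + 1.0);
    }

    /** BM25 score given the document's term frequencies, its length and corpus statistics. */
    static double score(List<String> query, Map<String, Integer> docTf, int docLen,
                        double avgDocLen, int corpusSize, Map<String, Integer> docFreq) {
        double s = 0.0;
        for (String term : query) {
            int tf = docTf.getOrDefault(term, 0);
            int df = docFreq.getOrDefault(term, 0);
            // Term-frequency saturation with document-length normalization.
            double norm = tf * (K1 + 1) / (tf + K1 * (1 - B + B * docLen / avgDocLen));
            s += idf(corpusSize, df) * norm;
        }
        return s;
    }
}
```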
Recently, the machine learning community has focused on additional quantities of interest, such as the fairness, robustness, efficiency or interpretability of learned models. Optimizing these can conflict with the goal of reducing training loss, so task-specific trade-offs need to be considered. The problem with hard-coding such trade-offs is that they may have undesirable consequences, and selecting them is cumbersome when multiple objectives are at stake. Interest in multiobjective learning has grown in recent years as a way to avoid the need for a priori trade-offs. By performing multiobjective optimization while training the actual model, the optimization either finds promising trade-off parameters simultaneously or computes multiple solutions that reflect different trade-offs, ideally along the Pareto frontier. Although the field is algorithmically rich, the theory of multiobjective optimization and learning has been little studied; in particular, learning theory results such as generalization bounds are almost completely absent. To overcome the mentioned limitations, a new approach is proposed in this study that combines multiobjective evaluation metrics in the (1 + 1) evolutionary strategy with three different methods and examines their effectiveness against a single-objective evolutionary strategy. The contributions can be summarized as follows:
  • A hybrid multiobjective algorithm is proposed for a more accurate exploration of the IR problem search space. This objective is achieved by devising the multiobjective evolutionary strategy with three different methods.
  • The performance of the multiobjective evolutionary strategy is enhanced by automatically choosing and optimizing search results using three novel multiobjective functions, which determine which set of solutions is nondominated with respect to the others and superior to the rest of the search space.
  • A comprehensive experiment was conducted to validate the effectiveness of the proposed strategy and to compare its performance against that of state-of-the-art single-objective evolutionary algorithms.
  • Detailed code for all experiments has been posted at https://www.researchgate.net/publication/364265942_Multi-Objective_11-Evolutionary_Strategy_Multi-ESRank_for_Learning_to_Ranking_Problem, accessed on 20 October 2022. This provides the first multiobjective learning-to-rank Java archive implementing the proposed method, supporting research reproducibility.

2. Related Work

In this section, we discuss related studies that applied multiobjective methods to LTR problems.
A learning-to-rank process aims to produce a ranking model that can accurately predict the relevance of a set of queries and items, improving user satisfaction and engagement. Obtaining a ranking function requires a structured process involving several steps. First, a dataset is gathered that includes queries, items and relevance labels, ensuring a variety of scenarios for robustness. Next, relevant features are extracted from both queries and items, capturing the critical aspects that affect their relative relevance. Once the training data have been obtained, they are used to develop a ranking function. Finally, the ranking function is used to create a ranked list of documents for a new query [7,8].
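For the linear ranking models considered later in this paper, the learned function is simply a weighted sum of query–document features. The following minimal sketch, with names and structure of our own choosing rather than from the released archive, shows how such a model is applied in the final step of the pipeline.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: apply a learned linear ranking function F(q, d) = W · x(q, d)
// to order the candidate documents of a query by descending score.
public class LinearRanker {
    private final double[] weights;   // the learned feature-weight vector W

    public LinearRanker(double[] weights) { this.weights = weights; }

    /** Score one query-document feature vector. */
    public double score(double[] features) {
        double s = 0.0;
        for (int i = 0; i < weights.length; i++) s += weights[i] * features[i];
        return s;
    }

    /** Sort the candidate feature vectors of a query by descending score. */
    public List<double[]> rank(List<double[]> candidates) {
        candidates.sort(Comparator.comparingDouble(this::score).reversed());
        return candidates;
    }
}
```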
The study in [8] presented a multiobjective LTR approach for commercial search engines using LambdaMART, a state-of-the-art ranking algorithm. They modified the λ functions to solve two associated problems with the current LambdaMART λ-gradient. The goal was to stop the ranking model from trying to separate documents that were already ordered and separated in addition to making ranking mistakes that persisted long into training. Their proposed approach achieved significant improvements in terms of accuracy over the baseline state-of-the-art ranker LambdaMART. The experiments were performed on a large real-world dataset in which each query–URL pair had 860 features. However, the dataset itself and the authors’ code package are not available to researchers for research reproducibility.
The incorporation of relevant and well-engineered features into the dataset enhances the model's ability to generalize and provide informed ranking results. Several evolutionary multiobjective feature-selection ranking algorithms have been proposed in recent years [7,8]. Li et al. [9] proposed a new decomposition-based multiobjective immune algorithm called MOIA/DFSRank for feature selection in L2R. To ensure greater convergence and diversity of the initial populations, representative features are selected for generation based on their importance and redundancy scores. The algorithm utilizes two effective operators, clonal selection and mutation: the clonal selection operator generates clones to facilitate the search direction during evolution, while the mutation operator retains excellent features with high probability during evolution. Kundu and Mitra [10] employed the NSGA-II algorithm framework to introduce a feature-selection method utilizing an SNN-based distance metric, which concurrently optimizes the number of selected features and the classification accuracy. Zhang et al. [11] utilized an enhanced MOPSO algorithm to effectively diminish the Hamming loss, even when utilizing a reduced number of features. In a related context, Das and Das [12] presented a multiobjective evolutionary algorithm centered on relevance and redundancy considerations, which demonstrated superior classification outcomes while utilizing a reduced set of selected features. Mahapatra et al. [14] addressed the multiobjective optimization (MOO) problem associated with multilabel LTR (training a model using different relevance criteria); essentially, their framework can consume any first-order gradient-based MOO algorithm to train a ranking model. Cheng et al. [13], on the other hand, addressed the learning-to-rank problem by devising an algorithm grounded in the NSGA-II framework, yielding commendable results; nevertheless, classification accuracy within this framework still needs further enhancement.
For commercial search engine preferences, query–item relevance can be judged based on different criteria. For instance, in a product search, the engine may rank products based on their quality or on the user's price preferences. The research study in [4] applied several multiobjective optimization methods with preference directions, such as the traditional Pareto optimal search, to LTR problems. The approach was applied to three LTR datasets and worked effectively for all of them. The datasets included the Microsoft Learning-to-Rank web search dataset (MSLR-WEB30K) [15], which is represented by a 136-dimensional feature vector, and E-commerce datasets. The authors presented the maximum weighted loss as a novel model evaluation metric and used the gradient-boosted regression tree (GBRT or MART) [16] algorithm. They found that single-objective MART outperformed multiobjective MART, so they proposed a smooth remedy procedure to improve the performance of multiobjective MART compared to using the traditional Pareto optimal method in this algorithm.
Multiobjective optimization methods have been developed and used for multitask learning, especially for combinatorial optimization; however, their applications to LTR problems are still a novel research topic.
A different line of research presented multiobjective learning frameworks in which the authors used relevance labels and adjusted remedies for the ranking function to satisfy multiple objectives and produce results meeting specific criteria, such as scale calibration [17] and fairness [18,19]. They used the Rank Neural Network (RankNET), LambdaMART and Listwise Neural Network (ListNET) approaches [16]. Remedy procedures were used to close the performance gaps between the single-objective approaches and the multiobjective ones [15]. On the other hand, evolutionary strategy LTR (ES-Rank) outperformed MART, RankNET, LambdaMART and ListNET in previous research [16]. Furthermore, single-objective ES-Rank outperformed 14 well-known evolutionary and machine learning approaches.
Hence, the principal objective of this research is to introduce a multiobjective algorithm based on innovative search-space exploration procedures and the Pareto optimal approach. These procedures serve as a remedy for the performance gap between the single- and multiobjective versions of the same algorithm, demonstrating that the multiobjective version of the LTR algorithm can outperform the single-objective version under some exploration circumstances. Empirical findings attest to the heightened performance of the introduced algorithm in tackling the learning-to-rank problem.

3. Proposed Approach

In the field of optimization, metaheuristic algorithms are computational techniques for solving complex optimization problems. Traditional optimization methods may struggle with this type of problem because of its size, its nonlinearity or the presence of multiple conflicting objectives. A metaheuristic is an approach to optimization that differs from an exact optimization algorithm: while exact algorithms promise the best solution given enough time and resources, metaheuristics offer approximate solutions that are often of excellent quality. A single-objective heuristic addresses optimization problems based on a single objective, where the goal is to maximize or minimize a single criterion, and an optimal solution to the given objective function is sought. Multiobjective heuristics, on the other hand, are designed to solve optimization problems with multiple conflicting objectives. This paper uses the (1 + 1) evolutionary strategy algorithm for learning to rank (ES-Rank) in two variations, with single-objective and multiobjective evaluation metrics (as shown in Figure 2).
The single-objective ES-Rank was compared with 14 evolutionary and machine learning methods in a previous study [16], where it outperformed them. In such problems, it is often necessary to optimize multiple criteria simultaneously, and these objectives often conflict. In general, no solution optimizes all objectives simultaneously due to inherent trade-offs. Several studies have shown that multiobjective optimization is usually less accurate than optimizing each fitness function individually. However, our method can be a strong rival to the single-objective ES-Rank.
This study aims to identify the most effective method for multiobjective learning to rank. It introduces three methods, two of which are novel in the field of multiobjective optimization.
Our proposed optimization algorithm, the multiobjective (1 + 1) evolutionary strategy (multi-ES-Rank), is a novel approach for tackling complex multiobjective LTR problems. Such a problem involves multiple objectives to be optimized, and no single solution may be the best across all objectives. In this algorithm, the decision variables are assigned random values for a population of "individuals", each representing a potential solution. In order to rank these individuals, the algorithm employs the Pareto principle. In a multiobjective optimization problem, the Pareto optimal set, also known as the Pareto frontier, is the set of solutions that are not dominated by any other solution: no other solution is better in every objective, and improving any one objective would compromise at least one other. The framework of the proposed multiobjective (1 + 1) evolutionary strategy is as follows.
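The dominance relation that defines this frontier can be made precise with a small helper; the sketch below, with names of our choosing, assumes all objectives are to be maximized.

```java
// Pareto dominance for maximized objectives: a dominates b if a is no worse
// on every objective and strictly better on at least one. Solutions that no
// other solution dominates form the Pareto frontier.
public final class Pareto {
    static boolean dominates(double[] a, double[] b) {
        boolean strictlyBetter = false;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < b[i]) return false;       // worse on some objective: no dominance
            if (a[i] > b[i]) strictlyBetter = true;
        }
        return strictlyBetter;
    }
}
```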

3.1. Step 1: Initialization

Set initial values for the maximum algorithm iterations and population size, and then generate an initial population of candidate solutions (individuals), denoted as P(0), and assign random values to the decision variables for each individual.

3.2. Step 2: Termination

If the maximum number of iterations has not been reached, continue; otherwise, output the Pareto optimal set from P.

3.3. Step 3: Mutation

Using the objective function values for each individual, calculate its fitness value. A ranking-based approach, such as nondominated sorting, can be used for this calculation. We derived 3 multiobjective methods from the single-objective fitness function of ES-Rank. These 3 multiobjective ES-Rank methods use the Pareto frontier approach for the cumulative objective function MFitness, which is calculated by Equation (1).
$MFitness = \sum_{i=1}^{5} C_i \times Fitness_i$   (1)
where $C_i$ is the Pareto frontier coefficient corresponding to the fitness evaluation metric $Fitness_i$, and $i$ is an integer between 1 and 5. The fitness evaluation metrics used in this study are the mean average precision (MAP), normalized discounted cumulative gain (NDCG@10), reciprocal rank (RR@10), expected reciprocal rank (ERR@10) and precision (P@10) at the top 10 documents retrieved [20]. The 3 multiobjective ES-Rank methods use 3 different representations for $C_i$ (a code sketch follows the list below):
  • The first multiobjective ES-Rank approach uses fixed coefficients $C_i = 1$ for all $i \in \{1, 2, 3, 4, 5\}$.
  • The second multiobjective ES-Rank approach uses a traditional real random number generator to assign a real value to the coefficient $C_i$ of every fitness function in every evolving iteration, subject to the constraint that $\sum_{i=1}^{5} C_i = 1$ in every evolving iteration.
  • The third multiobjective ES-Rank approach uses a ziggurat Gaussian random number generator to assign a real value to the coefficient $C_i$ of every fitness function in every evolving iteration, subject to the same constraint that $\sum_{i=1}^{5} C_i = 1$. The ziggurat Gaussian random number generator [21] generates a normalized Gaussian random number between 0 and 1, rather than between −50 and 50 as in the traditional Gaussian random number generator.
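The Java sketch below illustrates raw coefficient generation for the three methods as we read them. The fixed method-1 coefficients follow our reading of the text above, and the ziggurat generator of [21] is approximated here with folded java.util.Random Gaussian draws, so this is a sketch under assumptions rather than the authors' implementation. Methods 2 and 3 are normalized afterwards via Equation (2) in Step 4.

```java
import java.util.Random;

// Sketch of raw Pareto-coefficient generation for the three multiobjective
// ES-Rank methods; normalization to a unit sum (Equation (2)) happens in Step 4.
public class CoefficientDraw {
    static final Random RNG = new Random();

    /** Method 1: fixed, equal coefficients (our reading of the text). */
    static double[] method1() {
        return new double[] {1, 1, 1, 1, 1};
    }

    /** Method 2: traditional uniform random draws in [0, 1), one per metric. */
    static double[] method2() {
        double[] c = new double[5];
        for (int i = 0; i < 5; i++) c[i] = RNG.nextDouble();
        return c;   // normalized later so the coefficients sum to 1
    }

    /** Method 3: Gaussian draws folded into (0, 1), standing in for the
     *  ziggurat Gaussian generator of [21] (an approximation, not the same RNG). */
    static double[] method3() {
        double[] c = new double[5];
        for (int i = 0; i < 5; i++) {
            c[i] = Math.min(Math.abs(RNG.nextGaussian()), 0.9999);
        }
        return c;   // normalized later so the coefficients sum to 1
    }
}
```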

3.4. Step 4: Population Evolution

To guarantee that the constraints on the Pareto frontier coefficients in the second and third multiobjective ES-Rank methods are met, assume that the five coefficients generated by the random number generators in each evolving iteration are $C_i = \{C_1, C_2, C_3, C_4, C_5\}$. Without a normalization factor, there is no guarantee that these coefficients sum to 1. The normalization factor $N_{factor}$ is calculated by Equation (2).
$N_{factor} = \frac{1}{C_1 + C_2 + C_3 + C_4 + C_5}$   (2)
Then, the Pareto coefficients are recalculated as $C_i = N_{factor} \times C_i$, where $i \le 5$.
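Equation (2) translates directly into a few lines of Java; the class and method names here are ours.

```java
// Normalization of the Pareto frontier coefficients per Equation (2):
// Nfactor = 1 / (C1 + C2 + C3 + C4 + C5), then Ci = Nfactor * Ci.
public final class CoefficientNormalizer {
    static double[] normalizeToUnitSum(double[] c) {
        double sum = 0.0;
        for (double v : c) sum += v;
        double nFactor = 1.0 / sum;                 // Equation (2)
        for (int i = 0; i < c.length; i++) c[i] *= nFactor;
        return c;                                   // now sums to 1 (up to rounding)
    }
}
```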
During each iteration, methods 2 and 3 use multiobjective randomization functions based on the traditional and ziggurat Gaussian distributions for the Pareto coefficients. In this manner, more exploration of the multiobjective solution space can be achieved, while exploitation is constrained by keeping the Pareto coefficient sum fixed. A rank $r$ is assigned to each individual, with a lower rank indicating higher fitness. Ranks and fitness values are then used to select parents for reproduction: the probability of becoming a parent increases for individuals with a lower rank and a higher fitness value.

3.5. Step 5: Population Update

The (1 + 1) evolutionary strategy maintains two solutions: the current solution (parent) and a candidate solution (offspring) produced by perturbing the parent. If the offspring is not at least as good as its parent, it is discarded from consideration for the following generation. The chromosome, a vector of weights, represents the evolving ranking model.
Algorithm 1 outlines the multi-ES-Rank algorithm. The training and validation sets of query–document pairs provide a means of assessing evolving solutions in each iteration, and the output of this algorithm is a ranking model for the dataset used in the evolving phase. PCh is the parent chromosome, in which each gene is a real number representing the significance of the corresponding feature for ranking the training and validation data instances, where the data instances are queries and documents. Each gene in the parent chromosome vector is initialized in steps 1 through 4. The Boolean parameter Good indicates whether the mutation steps of the previous generation should be repeated; it is initialized to FALSE in step 5.
A copy of PCh is assigned to OffCh in step 6. The evolving process is repeated until the maximum generation MaxGenerations is reached; the number of iterations is 1300 in this paper. The evolving procedure begins in step 7 and ends in step 24. Steps 8–16 manage the mutation procedure by choosing the number of genes to mutate (RM). Four probability distributions are used to determine the mutation step (steps 11 to 15): Gaussian, Cauchy, Levy and uniform. A successful evolution process (one that produced good offspring) in evolving iteration G − 1 is repeated in evolving iteration G, as illustrated in step 9; otherwise, the mutation procedure's settings are reset, as demonstrated in steps 11 to 15. Using the fitness metrics, steps 17 to 23 determine whether PCh or OffCh survives. Finally, in step 25, the relationship between the evolved feature weights and the query–document pairs is represented by the mathematical transposition of the feature weight vector (i.e., the multi-ES-Rank ranking function).
Algorithm 1: MultiES-Rank: Multiobjective Evolutionary Strategy Ranking Approach
Input: A training set α(q, d) and a validation set ɳ(q, d) of query–document pairs of feature vectors.
Output: A linear ranking function F(q, d) that assigns a weight to every query–document pair indicating its relevancy degree.
1   Initialization:
2   For (Gen_i ∈ PCh) do
3       Gen_i = 0.0;
4   end
5   Good = FALSE;
6   OffCh = PCh;
7   For (G = 1 to MaxGenerations) do
8       If (Good == TRUE) Then
9           Use the same mutation process of generation (G − 1) on OffCh to mutate the next OffCh, that is, mutate the same RM genes using the same Mutation Step;
10      Else
11          Choose the number of genes to mutate, RM, at random from 1 to M;
12          For (j = 1 to RM) do
13              Choose a random Gen_i in OffCh for mutation;
14              Mutate Gen_i using a Mutation Step drawn from the probability distribution used;
15          end
16      end
17      If ((Fitness(PCh, α(q, d)) < Fitness(OffCh, α(q, d))) && (Fitness(PCh, ɳ(q, d)) ≤ Fitness(OffCh, ɳ(q, d)))) Then
18          PCh = OffCh;
19          Good = TRUE;
20      Else
21          OffCh = PCh;
22          Good = FALSE;
23      end
24  end
25  Return: the linear ranking function F(q, d) = PCh; that is, at the end of MaxGenerations, PCh contains the evolved vector W of M feature weights.
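The following self-contained Java sketch condenses Algorithm 1 into runnable form. It simplifies the Good-flag repetition and the dual training/validation acceptance test into a single fitness comparison, uses a toy placeholder for the cumulative MFitness of Equation (1), and samples Levy steps with Mantegna's algorithm; none of this is the authors' released implementation.

```java
import java.util.Random;

// Condensed, simplified sketch of the (1 + 1) ES loop of Algorithm 1.
public class MultiEsRankSketch {
    static final Random RNG = new Random();

    enum Dist { GAUSSIAN, CAUCHY, LEVY, UNIFORM }

    /** Draw one mutation step from the chosen probability distribution. */
    static double mutationStep(Dist d) {
        double u = RNG.nextDouble();
        switch (d) {
            case GAUSSIAN: return RNG.nextGaussian();
            case CAUCHY:   return Math.tan(Math.PI * (u - 0.5));   // standard Cauchy
            case LEVY:
                // Mantegna's algorithm for a Levy-stable step (beta = 1.5).
                double beta = 1.5;
                double sigma = Math.pow(
                        gamma(1 + beta) * Math.sin(Math.PI * beta / 2)
                                / (gamma((1 + beta) / 2) * beta * Math.pow(2, (beta - 1) / 2)),
                        1 / beta);
                return RNG.nextGaussian() * sigma
                        / Math.pow(Math.abs(RNG.nextGaussian()), 1 / beta);
            default:       return 2 * u - 1;                       // uniform in [-1, 1]
        }
    }

    /** Stirling approximation of the gamma function; adequate for this sketch. */
    static double gamma(double x) {
        return Math.sqrt(2 * Math.PI / x) * Math.pow(x / Math.E, x);
    }

    /** Toy placeholder for MFitness (Equation (1)); a real run would evaluate
     *  MAP, NDCG@10, RR@10, ERR@10 and P@10 on the training/validation data. */
    static double mFitness(double[] weights) {
        double s = 0.0;
        for (double w : weights) s -= Math.abs(w - 0.5);
        return s;
    }

    public static void main(String[] args) {
        int m = 46;                       // number of features, e.g., MQ2007/MQ2008
        int maxGenerations = 1300;        // iteration budget used in the paper
        Dist dist = Dist.CAUCHY;
        double[] pCh = new double[m];     // parent chromosome (constant start, steps 1-4)

        for (int g = 1; g <= maxGenerations; g++) {
            double[] offCh = pCh.clone();
            int rm = 1 + RNG.nextInt(m);  // number of genes to mutate (step 11)
            for (int j = 0; j < rm; j++) {
                offCh[RNG.nextInt(m)] += mutationStep(dist);   // steps 12-15
            }
            if (mFitness(offCh) >= mFitness(pCh)) {            // survivor selection
                pCh = offCh;                                   // steps 17-19 (simplified)
            }
        }
        System.out.println("First evolved weight: " + pCh[0]);
    }
}
```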

4. Experimental Results

This section presents a thorough experimental investigation that compares the three proposed learning-to-rank multiobjective strategies with an existing single-objective approach in terms of five accuracy fitness metrics: mean average precision (MAP), reciprocal rank (RR), expected reciprocal rank (ERR), normalized discounted cumulative gain (NDCG) and precision (P) at the top 10 documents retrieved, as stated in Section 4.1. To evaluate the performance of an LTR approach, the LTR technique is first applied to the training set. Afterwards, the ranking model's performance is evaluated on the test set to determine how well the LTR algorithm makes predictions.

4.1. Benchmark Datasets and Evaluation Fitness Metrics

Three benchmarking datasets are considered in this paper, as follows:
The MSLR-WEB30K dataset [22]: This dataset provides a comprehensive and realistic set of query–document pairs with relevance labels. Additionally, there is a set of features associated with each query–document pair that capture various aspects of the query and the document. Among these features are textual features, numerical features and other metadata that can be used to determine the degree of relevance of a document with respect to a specific query.
LETOR 4.0 [23,24]: This is part of the LETOR (Learning to Rank for Information Retrieval) dataset collection. A significant number of query–document pairs are included, each associated with a relevance label. Additionally, a variety of features are included that capture the characteristics of both queries and documents, including textual attributes, numerical attributes and other metadata. These features are designed to aid ranking algorithms in determining the relevance of documents to a query.
As can be seen in Table 1, these datasets differ in several characteristics. Compared to the LETOR 4.0 datasets (MQ2007 and MQ2008), the Microsoft Bing Search dataset (MSLR-WEB30K) has a much higher number of query–document pairs and features. Each query–document pair is associated with several low-level features, such as term frequency and inverse document frequency, computed for all document parts (title, anchor, body and whole document). In addition, there are high-level features that indicate how well the queries and documents correspond. Hybrid features employed in previous SIGIR conference papers are also included, such as LMIR.ABS (Language Model with Absolute Discounted Smoothing), LMIR.JM (Language Model with Jelinek–Mercer smoothing) and LMIR.DIR (Language Model with Dirichlet smoothing) [22,23,24,25]. There are 30,000 queries in the MSLR-WEB30K dataset; MQ2008 contains fewer than 1000 queries, whereas MQ2007 contains 1692 queries. Each query has a variety of query–document combinations based on a set of relevant and irrelevant documents. A relevance label indicates the level of relevance of a document to its query. As a general rule, relevance labels are classified as 0 (totally irrelevant), 1 (moderately relevant) and 2 (very relevant). The exception is the MSLR-WEB30K dataset, where values range from 0 (irrelevant) to 4 (perfectly relevant).
In this research, MAP, NDCG@10, P@10, RR@10 and ERR@10 were used as five distinct fitness functions on the training sets [1]. They were also used as assessment measures for the ranking algorithms on the test sets. These fitness functions are described in detail in [20].
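As an example of these metrics, NDCG@10 can be computed as follows. The gain/discount form $(2^{rel} - 1)/\log_2(rank + 1)$ is the standard one for these benchmarks, although the paper defers exact definitions to [20]; class and method names are ours.

```java
import java.util.Arrays;

// Sketch of NDCG@k over the relevance labels of a ranked result list.
public class Ndcg {
    /** DCG@k with gain 2^rel - 1 and discount log2(rank + 1). */
    static double dcgAtK(int[] rels, int k) {
        double dcg = 0.0;
        for (int i = 0; i < Math.min(k, rels.length); i++) {
            dcg += (Math.pow(2, rels[i]) - 1) / (Math.log(i + 2) / Math.log(2));
        }
        return dcg;
    }

    /** NDCG@k = DCG@k of the ranking divided by DCG@k of the ideal (sorted) ranking. */
    static double ndcgAtK(int[] rankedRels, int k) {
        int[] ideal = rankedRels.clone();
        Arrays.sort(ideal);                                   // ascending...
        for (int i = 0; i < ideal.length / 2; i++) {          // ...then reverse to descending
            int t = ideal[i];
            ideal[i] = ideal[ideal.length - 1 - i];
            ideal[ideal.length - 1 - i] = t;
        }
        double idcg = dcgAtK(ideal, k);
        return idcg == 0 ? 0 : dcgAtK(rankedRels, k) / idcg;
    }

    public static void main(String[] args) {
        int[] rels = {2, 0, 1, 0, 2};   // hypothetical labels on the MQ2007/MQ2008 scale
        System.out.printf("NDCG@10 = %.4f%n", ndcgAtK(rels, 10));
    }
}
```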

4.2. Result Analysis and Discussion

This section gives an overview of the progress achieved using multiobjective LTR. From the results obtained, we can say that using the Cauchy probability distribution as a random number generator for mutation step sizes in multiobjective ES-Rank outperformed the Gaussian, Levy and uniform distributions. It also outperformed single-objective ES-Rank, although which multiobjective method dominates when using the Cauchy distribution depends on the particular dataset used. Figure 3, Figure 4 and Figure 5 compare the performance of the proposed methodologies for LTR on the three datasets used.
From Figure 3 and Figure 4, for both the MSLR-WEB10K and MQ2008 datasets, the performance of single-objective ES-Rank is higher than that of multiobjective ES-Rank. This degradation in performance is accepted in order to gain the benefits of multiobjective ranking. From Figure 5, for the MQ2007 dataset, multiobjective ES-Rank with method 1 using the uniform distribution and multiobjective ES-Rank with method 2 using the Cauchy distribution as random number generators for mutation step sizes both achieve high performance, with winning rates of 6 and 7, respectively. This is better than the overall performance of single-objective ES-Rank. These results confirm the effectiveness of our proposed methods for both single-objective and multiobjective optimization. Moreover, the dataset affects the performance of ES-Rank for all the methods used.
To evaluate the random number generator distributions, Figure 6 illustrates the NDCG@10 on the test set of the MSLR dataset. From Figure 6, we can conclude that Levy is the best distribution for single-objective ES-Rank, while for the multiobjective variants, Figure 6 groups the results by optimization method and shows that Levy is the best for methods 1, 2 and 3. Thus, the Levy probability distribution as a random number generator for mutation step sizes is recommended for single-objective and multiobjective ES-Rank using all three methods. Moreover, method 2 with Levy achieves the highest NDCG@10 for the MSLR dataset.
For analyzing and evaluating different random number generators, Figure 6, Figure 7 and Figure 8 illustrate the NDCG@10 on the testing data for MSLR, MQ2007 and MQ2008. For the MQ2008 dataset, Figure 7 illustrates the NDCG@10 on the test set. From Figure 7, it is found that the Gaussian probability distribution as a random number generator for mutation step sizes is recommended for single-objective and multiobjective ES-Rank using all three methods. Moreover, method 3 with Gaussian achieves the highest NDCG@10 for the MQ2008 dataset.
For the MQ2007 dataset, Figure 8 illustrates the NDCG@10 on the test set. From Figure 8, it is found that the Levy probability distribution as a random number generator for mutation step sizes is recommended for multiobjective ES-Rank using all three methods, whereas Gaussian is recommended for single-objective ES-Rank. Moreover, method 3 with Levy achieves the highest NDCG@10 for the MQ2007 dataset. Thus, random number generators clearly affect the performance of ES-Rank based on the dataset used.
Multi-ES-Rank is an evolutionary strategy that uses a cumulative fitness function to determine the quality of each evolving ranking model in each iteration. Additionally, as the Pareto frontier contains no dominant solution, there is no other solution that performs better on all objectives at the same time. The developed strategy explores the search space and produces diverse solutions reflecting different trade-offs between multiple objectives through the cumulative fitness function. As a result, developed algorithms provide decision-makers with a variety of options from which to select so that they can make informed decisions based on their individual preferences.
In summary, this paper introduces a multiobjective evolutionary strategy (multi-ES-Rank) approach for learning-to-rank problems. In addition, we propose three novel Pareto optimal methods in continuous optimization research. Furthermore, we provide the Java archive package of the proposed approach for research reproducibility. From the experimental results, multi-ES-Rank can outperform single-objective ES-Rank in some circumstances of mutation step sizes and Pareto optimal methods for LTR data, as given in Appendix A. The best performance can be gained with the method using Cauchy as a random number generator for mutation step sizes in terms of winning rate. This causes the multi-ES-Rank to outperform the single-objective ES-Rank in certain conditions. Moreover, the different random number generators are evaluated and analyzed versus the three datasets in terms of NDCG@10 for testing data. It was found that the Levy generator is the best for both the MSLR and MQ2007 datasets while the Gaussian generator is the best for the MQ2008 dataset. Thus, random number generators clearly affect the performance of ES-Rank based on the dataset used. Furthermore, method 3 achieved the highest NDCG@10 for MQ2008 and MQ2007, while for the MSLR dataset, the highest NDCG@10 was achieved by method 2.
An important limitation of this study is the sensitivity of the evolutionary fitness function to configuration parameters. The results of this study highlight the importance of careful parameter tuning, but they also demonstrate that it is difficult to identify a universally optimal configuration because it is often dependent upon specific datasets and problem domains. Since there may not be one configuration suitable for different LTR tasks and datasets, developing automated hyperparameter optimization techniques may mitigate this limitation in the future. This study is also limited by the lack of dedicated multiobjective optimization packages for comparison. Most research focuses on learning-to-rank models with single objectives, such as mean squared error or pairwise ranking. However, in real-world applications, it is often necessary to optimize conflicting objectives simultaneously. The proposed techniques can be further evaluated in more complex optimization scenarios where the performance of the proposed techniques can be evaluated on a broader scale in future research.

5. Conclusions and Future Work

In this paper, we describe a general framework for learning to rank using the multiobjective (1 + 1) evolutionary strategy, which can be used with any type of data. As a multiobjective method, the multi-ES-Rank algorithm is based on novel methodologies for calculating cumulative fitness functions. A principled approach to maintaining the relative quality of rankings based on different relevance criteria is provided. Three types of trade-off (fitness calculation) specifications are formalized. The framework was validated using three public datasets, and the source code package is available for reproducible research. A number of directions will be explored to improve the current multi-ES-Rank algorithm in the future, including applying the Pareto optimal methods on some other metaheuristic methods and on some other optimization research domains, and additionally, enhancing the developed package by combining offline and online learning components. The process of offline optimization typically involves the use of historical data in order to train and refine the ranking model, whereas the process of online optimization involves continuous adaptation of the ranking model as a result of real-time user interactions. Using a hybrid approach can enhance the performance and relevance of multi-ES-Rank in dynamic environments by combining the stability and quality of offline optimization with the real-time adaptation of online optimization.

Author Contributions

All authors wrote and reviewed the paper (they contributed equally). All authors have read and agreed to the published version of the manuscript.

Funding

This research project was supported by a grant from the “Research Center of College of Computer and Information Sciences”, Deanship of Scientific Research, King Saud University.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: https://www.microsoft.com/en-us/research/project/mslr/.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Average results of the proposed methods using MSLR-WEB30K.

| Set | Method | MAP | NDCG@10 | RR@10 | ERR@10 | P@10 | Winning Rate |
|---|---|---|---|---|---|---|---|
| Training | ES-Rank Gaussian | 0.5659 | 0.3766 | 0.7544 | 0.2587 | 0.5879 | 3 |
| | Multi-ES-Rank 1 Gaussian | 0.5386 | 0.3356 | 0.7317 | 0.2623 | 0.5536 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.5594 | 0.3547 | 0.7541 | 0.2711 | 0.5813 | 0 |
| | Multi-ES-Rank 3 Gaussian | 0.5548 | 0.3622 | 0.7588 | 0.2834 | 0.5815 | 2 |
| | ES-Rank Cauchy | 0.4953 | 0.2441 | 0.5865 | 0.1833 | 0.4592 | 1 |
| | Multi-ES-Rank 1 Cauchy | 0.4839 | 0.2457 | 0.6006 | 0.1874 | 0.4598 | 4 |
| | Multi-ES-Rank 2 Cauchy | 0.4678 | 0.2251 | 0.5578 | 0.1687 | 0.4321 | 0 |
| | Multi-ES-Rank 3 Cauchy | 0.4758 | 0.2408 | 0.5949 | 0.1869 | 0.4492 | 0 |
| | ES-Rank Levy | 0.5827 | 0.3837 | 0.8020 | 0.3151 | 0.6079 | 3 |
| | Multi-ES-Rank 1 Levy | 0.5761 | 0.3948 | 0.7982 | 0.3095 | 0.6098 | 0 |
| | Multi-ES-Rank 2 Levy | 0.5734 | 0.4016 | 0.7992 | 0.3102 | 0.6144 | 2 |
| | Multi-ES-Rank 3 Levy | 0.5516 | 0.3507 | 0.7602 | 0.2722 | 0.5745 | 0 |
| | ES-Rank uniform | 0.4818 | 0.2357 | 0.5934 | 0.1906 | 0.4624 | 4 |
| | Multi-ES-Rank 1 uniform | 0.4659 | 0.2316 | 0.5690 | 0.1836 | 0.4278 | 0 |
| | Multi-ES-Rank 2 uniform | 0.4701 | 0.2336 | 0.5824 | 0.1805 | 0.4383 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4749 | 0.2397 | 0.5893 | 0.1862 | 0.4463 | 1 |
| Validation | ES-Rank Gaussian | 0.5763 | 0.3784 | 0.7657 | 0.2644 | 0.5970 | 3 |
| | Multi-ES-Rank 1 Gaussian | 0.5461 | 0.3415 | 0.7364 | 0.2721 | 0.5608 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.5671 | 0.3851 | 0.7565 | 0.2789 | 0.5908 | 1 |
| | Multi-ES-Rank 3 Gaussian | 0.5624 | 0.3653 | 0.7613 | 0.2915 | 0.5860 | 1 |
| | ES-Rank Cauchy | 0.5041 | 0.2490 | 0.5929 | 0.1896 | 0.4675 | 2 |
| | Multi-ES-Rank 1 Cauchy | 0.4916 | 0.2507 | 0.6074 | 0.1941 | 0.4653 | 3 |
| | Multi-ES-Rank 2 Cauchy | 0.4743 | 0.2276 | 0.5807 | 0.1752 | 0.4343 | 0 |
| | Multi-ES-Rank 3 Cauchy | 0.4832 | 0.2441 | 0.6025 | 0.1929 | 0.4529 | 0 |
| | ES-Rank Levy | 0.5913 | 0.3911 | 0.8099 | 0.3252 | 0.6188 | 4 |
| | Multi-ES-Rank 1 Levy | 0.5835 | 0.4003 | 0.8078 | 0.3161 | 0.6186 | 0 |
| | Multi-ES-Rank 2 Levy | 0.5780 | 0.4021 | 0.8070 | 0.3158 | 0.6168 | 1 |
| | Multi-ES-Rank 3 Levy | 0.5604 | 0.3579 | 0.7761 | 0.2815 | 0.5831 | 0 |
| | ES-Rank uniform | 0.4910 | 0.2389 | 0.5962 | 0.1976 | 0.4699 | 4 |
| | Multi-ES-Rank 1 uniform | 0.4722 | 0.2339 | 0.5757 | 0.1882 | 0.4287 | 0 |
| | Multi-ES-Rank 2 uniform | 0.4760 | 0.2352 | 0.5842 | 0.1859 | 0.4401 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4819 | 0.2427 | 0.5909 | 0.1913 | 0.4491 | 1 |
| Testing | ES-Rank Gaussian | 0.5685 | 0.3710 | 0.7540 | 0.2569 | 0.5879 | 3 |
| | Multi-ES-Rank 1 Gaussian | 0.5404 | 0.3316 | 0.7317 | 0.2623 | 0.5559 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.5605 | 0.3486 | 0.7541 | 0.2711 | 0.5849 | 0 |
| | Multi-ES-Rank 3 Gaussian | 0.5576 | 0.1581 | 0.7588 | 0.2834 | 0.5874 | 2 |
| | ES-Rank Cauchy | 0.4971 | 0.2455 | 0.5709 | 0.1875 | 0.4662 | 2 |
| | Multi-ES-Rank 1 Cauchy | 0.4871 | 0.2457 | 0.6006 | 0.1874 | 0.4665 | 3 |
| | Multi-ES-Rank 2 Cauchy | 0.4696 | 0.2228 | 0.5778 | 0.1687 | 0.4354 | 0 |
| | Multi-ES-Rank 3 Cauchy | 0.4789 | 0.2396 | 0.5949 | 0.1869 | 0.4533 | 0 |
| | ES-Rank Levy | 0.5819 | 0.3789 | 0.7973 | 0.3162 | 0.6071 | 2 |
| | Multi-ES-Rank 1 Levy | 0.5775 | 0.3902 | 0.7982 | 0.3095 | 0.6152 | 1 |
| | Multi-ES-Rank 2 Levy | 0.5734 | 0.3957 | 0.7992 | 0.3102 | 0.6132 | 2 |
| | Multi-ES-Rank 3 Levy | 0.557 | 0.3492 | 0.7602 | 0.2722 | 0.5815 | 0 |
| | ES-Rank uniform | 0.4856 | 0.2390 | 0.5906 | 0.1944 | 0.4664 | 4 |
| | Multi-ES-Rank 1 uniform | 0.4673 | 0.2311 | 0.5921 | 0.1836 | 0.4332 | 1 |
| | Multi-ES-Rank 2 uniform | 0.4717 | 0.2315 | 0.5824 | 0.1805 | 0.4414 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4769 | 0.2366 | 0.5893 | 0.1862 | 0.4487 | 0 |
Table A2. The performance of the proposed methods using MSLR-WEB30K.

| Method | Distribution | Evolving Winning Rate | Predictive Winning Rate |
|---|---|---|---|
| Single-objective ES-Rank | Gaussian | 6 | 3 |
| Single-objective ES-Rank | Cauchy | 3 | 2 |
| Single-objective ES-Rank | Levy | 7 | 2 |
| Single-objective ES-Rank | Uniform | 8 | 4 |
| Multiobjective ES-Rank, Method 1 | Gaussian | 0 | 0 |
| Multiobjective ES-Rank, Method 1 | Cauchy | 7 | 3 |
| Multiobjective ES-Rank, Method 1 | Levy | 0 | 1 |
| Multiobjective ES-Rank, Method 1 | Uniform | 0 | 1 |
| Multiobjective ES-Rank, Method 2 | Gaussian | 1 | 0 |
| Multiobjective ES-Rank, Method 2 | Cauchy | 0 | 0 |
| Multiobjective ES-Rank, Method 2 | Levy | 3 | 2 |
| Multiobjective ES-Rank, Method 2 | Uniform | 0 | 0 |
| Multiobjective ES-Rank, Method 3 | Gaussian | 3 | 2 |
| Multiobjective ES-Rank, Method 3 | Cauchy | 0 | 0 |
| Multiobjective ES-Rank, Method 3 | Levy | 0 | 0 |
| Multiobjective ES-Rank, Method 3 | Uniform | 2 | 0 |
Table A3. Average results of the proposed methods using MQ2008.

| Set | Method | MAP | NDCG@10 | RR@10 | ERR@10 | P@10 | Winning Rate |
|---|---|---|---|---|---|---|---|
| Training | ES-Rank Gaussian | 0.4745 | 0.5020 | 0.5287 | 0.9900 | 0.2793 | 1 |
| | Multi-ES-Rank 1 Gaussian | 0.4798 | 0.5022 | 0.5460 | 0.0984 | 0.2763 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.4781 | 0.5041 | 0.5492 | 0.0991 | 0.2757 | 0 |
| | Multi-ES-Rank 3 Gaussian | 0.486 | 0.5088 | 0.5551 | 0.0998 | 0.2763 | 4 |
| | ES-Rank Cauchy | 0.4649 | 0.4832 | 0.5121 | 0.0915 | 0.2685 | 1 |
| | Multi-ES-Rank 1 Cauchy | 0.4403 | 0.4713 | 0.5074 | 0.0913 | 0.2683 | 0 |
| | Multi-ES-Rank 2 Cauchy | 0.4512 | 0.4822 | 0.5179 | 0.0939 | 0.2723 | 0 |
| | Multi-ES-Rank 3 Cauchy | 0.456 | 0.4856 | 0.5262 | 0.0940 | 0.2725 | 4 |
| | ES-Rank Levy | 0.4827 | 0.5010 | 0.5479 | 0.0996 | 0.2785 | 3 |
| | Multi-ES-Rank 1 Levy | 0.4776 | 0.5032 | 0.5508 | 0.0995 | 0.2753 | 1 |
| | Multi-ES-Rank 2 Levy | 0.4789 | 0.5037 | 0.5440 | 0.0987 | 0.2761 | 1 |
| | Multi-ES-Rank 3 Levy | 0.4728 | 0.4972 | 0.5385 | 0.0962 | 0.2734 | 0 |
| | ES-Rank uniform | 0.4509 | 0.4756 | 0.5333 | 0.0943 | 0.2712 | 2 |
| | Multi-ES-Rank 1 uniform | 0.4419 | 0.4749 | 0.5013 | 0.0910 | 0.2693 | 0 |
| | Multi-ES-Rank 2 uniform | 0.4518 | 0.4830 | 0.5184 | 0.0932 | 0.2704 | 3 |
| | Multi-ES-Rank 3 uniform | 0.4461 | 0.4765 | 0.5155 | 0.0926 | 0.2676 | 0 |
| Validation | ES-Rank Gaussian | 0.5462 | 0.5613 | 0.5750 | 0.1013 | 0.2787 | 1 |
| | Multi-ES-Rank 1 Gaussian | 0.5344 | 0.5663 | 0.6093 | 0.1047 | 0.2768 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.5392 | 0.5728 | 0.6113 | 0.1056 | 0.2799 | 4 |
| | Multi-ES-Rank 3 Gaussian | 0.5255 | 0.5572 | 0.6067 | 0.1016 | 0.2749 | 0 |
| | ES-Rank Cauchy | 0.5118 | 0.5307 | 0.5528 | 0.0959 | 0.2698 | 1 |
| | Multi-ES-Rank 1 Cauchy | 0.4863 | 0.5256 | 0.5486 | 0.0941 | 0.2698 | 0 |
| | Multi-ES-Rank 2 Cauchy | 0.4982 | 0.5333 | 0.5574 | 0.0975 | 0.2710 | 3 |
| | Multi-ES-Rank 3 Cauchy | 0.4992 | 0.5319 | 0.5517 | 0.0963 | 0.2717 | 1 |
| | ES-Rank Levy | 0.5353 | 0.5636 | 0.6010 | 0.1059 | 0.2812 | 5 |
| | Multi-ES-Rank 1 Levy | 0.5262 | 0.5594 | 0.5885 | 0.1013 | 0.2787 | 0 |
| | Multi-ES-Rank 2 Levy | 0.5226 | 0.5558 | 0.5937 | 0.1015 | 0.2761 | 0 |
| | Multi-ES-Rank 3 Levy | 0.5148 | 0.5501 | 0.5765 | 0.0991 | 0.2787 | 0 |
| | ES-Rank uniform | 0.4939 | 0.5373 | 0.5589 | 0.0980 | 0.2723 | 5 |
| | Multi-ES-Rank 1 uniform | 0.4920 | 0.5288 | 0.5455 | 0.0958 | 0.2710 | 0 |
| | Multi-ES-Rank 2 uniform | 0.4849 | 0.5252 | 0.5368 | 0.0952 | 0.2704 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4888 | 0.5308 | 0.5499 | 0.0959 | 0.2710 | 0 |
| Testing | ES-Rank Gaussian | 0.455 | 0.4849 | 0.5056 | 0.0939 | 0.2630 | 0 |
| | Multi-ES-Rank 1 Gaussian | 0.4626 | 0.4848 | 0.5460 | 0.0985 | 0.2636 | 1 |
| | Multi-ES-Rank 2 Gaussian | 0.4521 | 0.4807 | 0.5492 | 0.0991 | 0.2630 | 0 |
| | Multi-ES-Rank 3 Gaussian | 0.4599 | 0.4862 | 0.5551 | 0.0998 | 0.2698 | 4 |
| | ES-Rank Cauchy | 0.4526 | 0.4643 | 0.4690 | 0.9130 | 0.2611 | 1 |
| | Multi-ES-Rank 1 Cauchy | 0.4372 | 0.4630 | 0.5074 | 0.0913 | 0.2617 | 0 |
| | Multi-ES-Rank 2 Cauchy | 0.4416 | 0.4655 | 0.5179 | 0.0939 | 0.2655 | 1 |
| | Multi-ES-Rank 3 Cauchy | 0.4452 | 0.4728 | 0.5262 | 0.0940 | 0.2636 | 3 |
| | ES-Rank Levy | 0.4505 | 0.4831 | 0.4945 | 0.0956 | 0.2643 | 1 |
| | Multi-ES-Rank 1 Levy | 0.4509 | 0.4732 | 0.5508 | 0.0995 | 0.2623 | 2 |
| | Multi-ES-Rank 2 Levy | 0.4492 | 0.4793 | 0.5440 | 0.0987 | 0.2655 | 1 |
| | Multi-ES-Rank 3 Levy | 0.4521 | 0.4731 | 0.5385 | 0.0962 | 0.2636 | 1 |
| | ES-Rank uniform | 0.4455 | 0.4617 | 0.4888 | 0.0946 | 0.2623 | 2 |
| | Multi-ES-Rank 1 uniform | 0.4440 | 0.4675 | 0.5013 | 0.0910 | 0.2617 | 0 |
| | Multi-ES-Rank 2 uniform | 0.4410 | 0.4701 | 0.5184 | 0.0932 | 0.2636 | 3 |
| | Multi-ES-Rank 3 uniform | 0.4343 | 0.4667 | 0.5155 | 0.0926 | 0.2611 | 0 |
Table A4. The performance results of the proposed methods using MQ2008.

| Method | Distribution | Evolving Winning Rate | Predictive Winning Rate |
|---|---|---|---|
| Single-objective ES-Rank | Gaussian | 2 | 0 |
| Single-objective ES-Rank | Cauchy | 2 | 1 |
| Single-objective ES-Rank | Levy | 8 | 1 |
| Single-objective ES-Rank | Uniform | 7 | 2 |
| Multiobjective ES-Rank, Method 1 | Gaussian | 0 | 1 |
| Multiobjective ES-Rank, Method 1 | Cauchy | 0 | 0 |
| Multiobjective ES-Rank, Method 1 | Levy | 1 | 2 |
| Multiobjective ES-Rank, Method 1 | Uniform | 0 | 0 |
| Multiobjective ES-Rank, Method 2 | Gaussian | 4 | 0 |
| Multiobjective ES-Rank, Method 2 | Cauchy | 3 | 1 |
| Multiobjective ES-Rank, Method 2 | Levy | 1 | 1 |
| Multiobjective ES-Rank, Method 2 | Uniform | 3 | 3 |
| Multiobjective ES-Rank, Method 3 | Gaussian | 4 | 4 |
| Multiobjective ES-Rank, Method 3 | Cauchy | 5 | 3 |
| Multiobjective ES-Rank, Method 3 | Levy | 0 | 1 |
| Multiobjective ES-Rank, Method 3 | Uniform | 0 | 0 |
Table A5. Average results of the proposed methods using MQ2007.

| Set | Method | MAP | NDCG@10 | RR@10 | ERR@10 | P@10 | Winning Rate |
|---|---|---|---|---|---|---|---|
| Training | ES-Rank Gaussian | 0.4436 | 0.4234 | 0.5664 | 0.0983 | 0.3777 | 2 |
| | Multi-ES-Rank 1 Gaussian | 0.4487 | 0.4271 | 0.5656 | 0.0994 | 0.3641 | 3 |
| | Multi-ES-Rank 2 Gaussian | 0.4435 | 0.4234 | 0.5646 | 0.0989 | 0.3661 | 0 |
| | Multi-ES-Rank 3 Gaussian | 0.4444 | 0.4219 | 0.5540 | 0.0980 | 0.3649 | 0 |
| | ES-Rank Cauchy | 0.4327 | 0.4111 | 0.5419 | 0.0960 | 0.3469 | 1 |
| | Multi-ES-Rank 1 Cauchy | 0.4322 | 0.4117 | 0.5421 | 0.0964 | 0.3548 | 0 |
| | Multi-ES-Rank 2 Cauchy | 0.4317 | 0.4127 | 0.5429 | 0.0969 | 0.3576 | 4 |
| | Multi-ES-Rank 3 Cauchy | 0.4252 | 0.4041 | 0.5385 | 0.0945 | 0.3524 | 0 |
| | ES-Rank Levy | 0.4522 | 0.4292 | 0.5488 | 0.1000 | 0.3728 | 1 |
| | Multi-ES-Rank 1 Levy | 0.4523 | 0.4325 | 0.5654 | 0.0997 | 0.3688 | 1 |
| | Multi-ES-Rank 2 Levy | 0.4531 | 0.4295 | 0.5662 | 0.0981 | 0.3706 | 1 |
| | Multi-ES-Rank 3 Levy | 0.4526 | 0.4322 | 0.5696 | 0.0986 | 0.3755 | 2 |
| | ES-Rank uniform | 0.4350 | 0.4195 | 0.5331 | 0.0936 | 0.3414 | 1 |
| | Multi-ES-Rank 1 uniform | 0.4374 | 0.4175 | 0.5473 | 0.0977 | 0.3601 | 3 |
| | Multi-ES-Rank 2 uniform | 0.4196 | 0.4000 | 0.5344 | 0.0941 | 0.3465 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4294 | 0.4104 | 0.5395 | 0.0959 | 0.3560 | 1 |
| Validation | ES-Rank Gaussian | 0.4742 | 0.4520 | 0.5930 | 0.1066 | 0.3832 | 4 |
| | Multi-ES-Rank 1 Gaussian | 0.4682 | 0.4511 | 0.5818 | 0.1054 | 0.3714 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.4744 | 0.4515 | 0.5876 | 0.1047 | 0.3755 | 1 |
| | Multi-ES-Rank 3 Gaussian | 0.472 | 0.4499 | 0.5925 | 0.1049 | 0.3714 | 0 |
| | ES-Rank Cauchy | 0.4546 | 0.4336 | 0.5535 | 0.1006 | 0.3575 | 0 |
| | Multi-ES-Rank 1 Cauchy | 0.4563 | 0.4325 | 0.5588 | 0.1005 | 0.3563 | 0 |
| | Multi-ES-Rank 2 Cauchy | 0.4573 | 0.4383 | 0.5715 | 0.1027 | 0.3593 | 5 |
| | Multi-ES-Rank 3 Cauchy | 0.4536 | 0.4293 | 0.5675 | 0.1008 | 0.3525 | 0 |
| | ES-Rank Levy | 0.4742 | 0.4628 | 0.5931 | 0.1081 | 0.3838 | 3 |
| | Multi-ES-Rank 1 Levy | 0.476 | 0.4541 | 0.6115 | 0.1071 | 0.3791 | 2 |
| | Multi-ES-Rank 2 Levy | 0.4726 | 0.4524 | 0.5848 | 0.1046 | 0.3802 | 0 |
| | Multi-ES-Rank 3 Levy | 0.4756 | 0.4560 | 0.5923 | 0.1072 | 0.3805 | 0 |
| | ES-Rank uniform | 0.4601 | 0.4439 | 0.5495 | 0.0977 | 0.3525 | 0 |
| | Multi-ES-Rank 1 uniform | 0.4664 | 0.4474 | 0.5853 | 0.1053 | 0.3643 | 4 |
| | Multi-ES-Rank 2 uniform | 0.4437 | 0.4218 | 0.5458 | 0.0983 | 0.3504 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4574 | 0.4357 | 0.5616 | 0.1013 | 0.3581 | 1 |
| Testing | ES-Rank Gaussian | 0.4774 | 0.4650 | 0.5683 | 0.1094 | 0.4006 | 4 |
| | Multi-ES-Rank 1 Gaussian | 0.4816 | 0.4623 | 0.5656 | 0.0994 | 0.3887 | 0 |
| | Multi-ES-Rank 2 Gaussian | 0.4688 | 0.4528 | 0.5646 | 0.0989 | 0.3896 | 0 |
| | Multi-ES-Rank 3 Gaussian | 0.4818 | 0.4623 | 0.5540 | 0.0980 | 0.3929 | 1 |
| | ES-Rank Cauchy | 0.4561 | 0.4378 | 0.5524 | 0.1028 | 0.3694 | 2 |
| | Multi-ES-Rank 1 Cauchy | 0.4621 | 0.4375 | 0.5421 | 0.0964 | 0.3723 | 1 |
| | Multi-ES-Rank 2 Cauchy | 0.4611 | 0.4395 | 0.5429 | 0.0964 | 0.3735 | 2 |
| | Multi-ES-Rank 3 Cauchy | 0.4534 | 0.4326 | 0.5385 | 0.0945 | 0.3726 | 0 |
| | ES-Rank Levy | 0.4833 | 0.4590 | 0.5640 | 0.1074 | 0.3900 | 1 |
| | Multi-ES-Rank 1 Levy | 0.4825 | 0.4669 | 0.5654 | 0.0997 | 0.3970 | 0 |
| | Multi-ES-Rank 2 Levy | 0.4833 | 0.4675 | 0.5662 | 0.0981 | 0.3976 | 1 |
| | Multi-ES-Rank 3 Levy | 0.4844 | 0.4678 | 0.5696 | 0.0986 | 0.3958 | 3 |
| | ES-Rank uniform | 0.4635 | 0.4448 | 0.5462 | 0.1001 | 0.3586 | 1 |
| | Multi-ES-Rank 1 uniform | 0.4706 | 0.4472 | 0.5473 | 0.0977 | 0.3777 | 4 |
| | Multi-ES-Rank 2 uniform | 0.4509 | 0.4301 | 0.5344 | 0.0941 | 0.3676 | 0 |
| | Multi-ES-Rank 3 uniform | 0.4622 | 0.4385 | 0.5395 | 0.0959 | 0.3732 | 0 |

References

  1. Li, H. Theory of learning to rank. In Learning to Rank for Information Retrieval and Natural Language Processing, 2nd ed.; Springer International Publishing: Cham, Switzerland, 2015; pp. 81–86. [Google Scholar]
  2. Manning, C.D.; Raghavan, P.; Schütze, H. Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
  3. Urbano, J. Test collection reliability: A study of bias and robustness to statistical assumptions via stochastic simulation. Inf. Retr. J. 2016, 19, 313–350. [Google Scholar] [CrossRef]
  4. Momma, M.; Dong, C.; Chen, Y. Multi-objective Ranking with Directions of Preferences. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, Madrid, Spain, 11–15 July 2022. [Google Scholar]
  5. Drucker, H.; Shahrary, B.; Gibbon, D.C. Support vector machines: Relevance feedback and information retrieval. Inf. Process. Manag. 2002, 38, 305–323. [Google Scholar] [CrossRef]
  6. Al-Tashi, Q.; Abdulkadir, S.J.E. Approaches to multi-objective feature selection: A systematic literature review. IEEE Access 2020, 8, 125076–125096. [Google Scholar] [CrossRef]
  7. Xue, B.; Zhang, M.; Browne, W.N.; Yao, X. A survey on evolutionary computation approaches to feature selection. IEEE Trans. Evol. Comput. 2016, 20, 606–626. [Google Scholar] [CrossRef]
  8. Svore, K.M.; Volkovs, M.N.; Burges, C.J.C. Learning to Rank with Multiple Objective Functions. In Proceedings of the 20th International World Wide Web Conference, New York, NY, USA, 28 March–1 April 2011. [Google Scholar]
  9. Li, W.; Chai, Z.; Tang, Z. A decomposition-based multi-objective immune algorithm for feature selection in learning to rank. Knowl.-Based Syst. 2021, 234, 107577. [Google Scholar] [CrossRef]
  10. Kundu, P.P.; Mitra, S. Multi-objective optimization of shared nearest neighbor similarity for feature selection. Appl. Soft Comput. 2015, 37, 751–762. [Google Scholar] [CrossRef]
  11. Zhang, Y.; Gong, D.W.; Sun, X.Y.; Guo, Y.N. A PSO-based multi-objective multi-label feature selection method in classification. Sci. Rep. 2017, 7, 376. [Google Scholar]
  12. Das, A.; Das, S. Feature weighting and selection with a pareto optimal tradeoff between relevancy and redundancy. Pattern Recognit. Lett. 2017, 88, 12–19. [Google Scholar] [CrossRef]
  13. Cheng, F.; Guo, W.; Zhang, X. Mofsrank: A multi-objective evolutionary algorithm for feature selection in learning to rank. Complexity 2018, 2018, 14. [Google Scholar] [CrossRef]
  14. Mahapatra, D.; Dong, C.; Chen, Y.; Meng, D.; Momma, M. Multi-Label Learning to Rank through Multi-Objective Optimization. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 September 2023. [Google Scholar]
  15. Pang, L.; Xu, J.; Ai, Q.; Lan, Y.; Cheng, X.; Wen, J. Setrank: Learning a Permutation-invariant Ranking Model for Information Retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, New York, NY, USA, 25–30 July 2020. [Google Scholar]
  16. Ibrahim, O.A.S.; Silva, D.L. An evolutionary strategy with machine learning for learning to rank in information retrieval. Soft Comput. 2018, 22, 3171–3185. [Google Scholar] [CrossRef]
  17. Yan, L.; Qin, Z.; Wang, X.; Bendersky, M.; Najork, M. Scale Calibration of Deep Ranking Models. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 14–18 August 2022. [Google Scholar]
  18. Singh, A.; Joachims, T. Policy Learning for Fairness in Ranking. In Proceedings of the 2019 Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019. [Google Scholar]
  19. Morik, M.; Singh, A.; Hong, J.; Joachims, T. Controlling Fairness and Bias in Dynamic Learning-to-Rank. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, New York, NY, USA, 25–30 July 2020. [Google Scholar]
  20. Ibrahim, O.A.S.; Younis, E.M.G. Hybrid online offline learning to rank using simulated annealing strategy based on dependent click model. Knowl. Inf. Syst. 2022, 64, 2833–2847. [Google Scholar] [CrossRef]
  21. Loshchilov, I. A Computationally Efficient Limited Memory CMA-ES for Large Scale Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, Vancouver, BC, Canada, 12–16 July 2014. [Google Scholar]
  22. Qin, T.; Liu, T. Introducing LETOR 4.0 datasets. arXiv 2013, arXiv:1306.2597v1. [Google Scholar]
  23. Liu, T. The LETOR dataset. In Learning to Rank for Information Retrieval; Springer: Berlin, Germany, 2011; pp. 133–143. [Google Scholar]
  24. Qin, T.; Liu, T.; Xu, J.; Li, H. Letor: A benchmark collection for research on learning to rank for information retrieval. Inf. Retr. 2010, 13, 346–374. [Google Scholar] [CrossRef]
  25. Ibrahim, O.A.S.; Landa-Silva, D. Es-rank: Evolution Strategy Learning to Rank Approach. In Proceedings of the 32nd ACM SIGAPP Symposium on Applied Computing, Marrakech, Morocco, 4–6 April 2017. [Google Scholar]
Figure 1. Learning-to-rank process view according to [6].
Figure 2. Flowchart representation of the proposed algorithm.
Figure 3. Winning rate for single-objective vs. multiobjective ES-Rank (MSLR-WEB10K).
Figure 4. Winning rate for single-objective vs. multiobjective ES-Rank (MQ2008).
Figure 5. Winning rate for single-objective vs. multiobjective ES-Rank (MQ2007).
Figure 6. NDCG@10 relevance range for testing data (MSLR dataset) grouped by method.
Figure 7. NDCG@10 relevance range for testing data (MQ2008 dataset) grouped by method.
Figure 8. NDCG@10 relevance range for testing data (MQ2007 dataset) grouped by method.
Table 1. Properties of the benchmark datasets used in the experimental study.

| Dataset | Queries | Query–Document Pairs | Features | Relevance Labels | No. of Folds |
|---|---|---|---|---|---|
| MQ2007 | 1692 | 69,623 | 46 | {0, 1, 2} | 5 |
| MQ2008 | 784 | 15,211 | 46 | {0, 1, 2} | 5 |
| MSLR-WEB30K | 30,000 | 3,771,125 | 136 | {0, 1, 2, 3, 4} | 5 |
