Article

Recommending Reforming Trip to a Group of Users

by Rizwan Abbas 1,*, Gehad Abdullah Amran 2,*, Ahmed Alsanad 3,*, Shengjun Ma 4, Faisal Abdulaziz Almisned 5, Jianfeng Huang 6, Ali Ahmed Al Bakhrani 7, Almesbahi Belal Ahmed 2 and Ahmed Ibrahim Alzahrani 8

1 College of Software Engineering, Northeastern University, Shenyang 110169, China
2 Department of Management Science Engineering, Faculty of Management and Economics, Dalian University of Technology, Dalian 116024, China
3 STC’s Artificial Intelligence Chair, Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
4 College of Computer Science and Engineering, Northeastern University, Shenyang 110169, China
5 Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
6 College of Computer Science and Engineering, Peking University, Beijing 100871, China
7 Department of Computer Science, Technique Leaders College, Sanaa 31220, Yemen
8 Computer Science Department, Community College, King Saud University, Riyadh 11451, Saudi Arabia
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(7), 1037; https://doi.org/10.3390/electronics11071037
Submission received: 24 February 2022 / Revised: 21 March 2022 / Accepted: 22 March 2022 / Published: 25 March 2022
(This article belongs to the Special Issue Recommender Systems: Approaches, Challenges and Applications)

Abstract: With the rapid evolution of mobile apps and trip-guidance technologies, trip recommenders that suggest sequential points of interest (POIs) to travelers have emerged and recently gained popularity. Whereas most existing recommenders suggest only the next single POI, our research centers on POI sequence recommendation. We present an advanced POI-sequence recommendation system named Recommending Reforming Trip (RRT), which recommends a dynamic sequence of POIs to a group of users. Its input is the historical trajectory, and its output is the sequence of POIs expected to interest the group. To handle this sequence-to-sequence problem, an effective design based on a deep neural network (DNN) is implemented. Throughout the workflow, RRT allows the input to change over time, smoothly recommending a dynamic sequence of POIs. Moreover, two new metrics, adjusted precision (AP) and sequence-mindful precision (SMP), are introduced to evaluate the accuracy of a recommended POI sequence; they consider both the POIs themselves and their visiting order. We evaluate our algorithm using users’ travel histories extracted from the Weeplaces dataset and argue that it outperforms various baselines in satisfying user interests during trips.

1. Introduction

Location recommendation for planning an enjoyable trip is becoming popular and attracting researchers’ attention. After a tough time at work, many users want a great trip to enjoy the holidays. POIs can be restaurants, hotels, beaches, events, cinemas, parks, matches, or even a viewpoint along the road. Over the last few years, location-based social networks (LBSNs) have been a rapidly developing field. The volume of data created by LBSNs allows data miners to extract precise user information to offer better support in end-user applications. To share visited locations, users most frequently update their trips on LBSNs [1], which makes life enjoyable and exciting and helps researchers extract the data needed to provide pleasant trips to users. LBSNs provide a wealth of critical information for studying patterns of human behavior. This information has great potential in applications such as question answering, advertising, and recommendation [2]; among these, point-of-interest (POI) recommendation has recently attracted increasing attention from analysts. Compared to recommending a single POI, recommending a sequence of POIs is challenging, and only a few studies focus on POI sequence recommendation.
Behavioral consistency implies that individuals follow certain patterns and preferences over a given period. Accordingly, a POI recommender is needed that can first capture users’ behavioral patterns and preferences from their historical trajectories and then extrapolate them to make future suggestions. However, most current POI recommenders suggest only the next single POI [3,4], whereas a successive POI sequence recommendation is often more desirable. For instance, when planning a route, a user looks not for a solitary POI but for a recommended sequence of POIs. A POI sequence contains many POIs and follows a proper travel order. Route planning is an extraordinarily tedious task, since users must always consider time constraints, distance limitations, cost requirements, and so on. Compared to a single-POI recommendation, a POI sequence is more difficult for the following significant reasons: (i) a sequence recommendation must produce a relevant and interesting sequence of POIs that precisely matches the user’s interests and preferences, rather than a single POI; (ii) users’ preferences may change in the middle of a trip, which increases the difficulty of dynamic recommendation; and (iii) the order of the POI sequence depends on multiple factors (e.g., dependency, time, and other conditions) [5]. To model the POI sequence recommendation task, some researchers have proposed popularity-based approaches [6,7], which aim to find a POI sequence that maximizes POI popularity; under these approaches, all users receive the same recommendation. Other personalization-based approaches [8,9] have also been developed to suggest an excellent travel plan for each traveler based on their interests and preferences.
Some researchers have considered the time factor when designing trip recommendation models [10]. Most previous frameworks share two disadvantages: (i) they require the start and end POIs to be declared for each recommendation, and (ii) they cannot capture changes in user preferences over time, making it difficult to recommend a POI sequence dynamically.
This research introduces a progressive POI-sequence recommender for groups of users, named Recommending Reforming Trip (RRT). The trip recommender is designed so that the input is a historical trajectory and the output is the dynamic sequence of POIs to be suggested. Numerous studies have addressed the sequence-to-sequence learning problem, e.g., [6,7]. Given the extraordinary achievements of deep neural networks (DNNs) in various fields [11], the architecture of RRT is based on a DNN. More explicitly, RRT consists essentially of an encoder combined with a decoder. The encoder learns the contextual information implied in the input sequence; the decoder then produces the next POI one by one to form a POI-sequence recommendation. In addition, this model recommends the POI sequence to a group of users. The proposed technique is assessed on the Weeplaces dataset; the experimental results demonstrate the effectiveness of RRT for groups of users.
Our contributions are as follows:
  • This research introduces a progressive POI-sequence recommendation framework named RRT, which recommends a sequence of POIs dynamically according to the historical trajectory;
  • To exploit diverse information about POIs, such as geographical data, RRT combines POI embeds, category embeds, the geographical effect, and positional encoding of the historical sequence;
  • This research also introduces two new metrics, adjusted precision (AP) and sequence-mindful precision (SMP), which consider both POI identity and traveling order and can evaluate the recommendation accuracy of POI sequences;
  • This research recommends a series of POIs to a group of users, not only to a single user;
  • We assess the performance of RRT on real-world data from Weeplaces and compare it with several baselines. The experimental results show that RRT is superior to other trip recommendation techniques.
Section 2 of this paper surveys related work. Section 3 describes the proposed strategy. Section 4 presents the experimental results. Finally, Section 5 concludes this research.

2. Related Work

Frameworks for recommending locations to a group of two or more users in a tourism setting are especially valuable, as people usually travel in groups (family, friends, and so on). In this section we review several POI recommendation frameworks: itinerary recommendation, long- and short-term preference learning for next-POI recommendation, and a serendipity-oriented next POI recommendation model (SNPR).
Itinerary Recommendation. Itinerary recommenders provide POI recommendations for travel, which is similar to our concern here. In one of the best early examples of this class of recommenders [6], the authors utilized geotagged Flickr photographs to infer user interest and suggested multi-day itineraries using a recursive greedy algorithm. In [12], mined user check-in data and a priori-based analysis were used to discover ideal excursions under various constraints, although the computational cost was high. In [13], customized tours in metropolitan areas were suggested by using POI categories and recommending various kinds of venues. More recently, Ref. [14] proposed time-sensitive user interest modeling and demonstrated benefits over frequency-based popularity measures for travel personalization. In [15], customized travel itineraries for different seasons were suggested by combining text-based information and aspect data extracted from images.
These works differ from ours, as we incorporate dynamicity into our trip recommendation system.
Long- and Short-Term Preference Learning for Next-POI Recommendation [16]. The next-POI recommendation problem has been widely studied. The objective is to recommend the next POI for a user at a specific time, given the user’s travel history. It is therefore pivotal to model users’ general taste as well as their sequential visiting behavior. Additionally, context information, such as the category and check-in time, is essential for capturing user preference. To address this problem, the long- and short-term preference learning model (LSPL) was proposed, considering both sequential and contextual information. The long-term module learns the contextual features of POIs and leverages an attention mechanism to capture users’ preferences. The short-term module uses LSTM to learn users’ sequential visiting behavior. In particular, to better learn the different influences of locations and categories of POIs, two LSTM models are trained, one for location-based sequences and one for category-based sequences. The long- and short-term results are then combined to recommend the next POI. The model was assessed on two real-world datasets, and the experimental results show that it outperforms other benchmark methods for next-POI recommendation.
That work recommends only a single POI, while we recommend a sequence of POIs. Another difference is that we recommend a dynamic series of POIs to a group of users to achieve user satisfaction.
SNPR: A Serendipity-Oriented Next POI Recommendation Model [11]. Personalized trip recommendation attempts to recommend a sequence of points of interest (POIs) to a user. A problem many POI recommenders face is that recommending overly familiar, predictable locations bores users; seeing the same kind of POI again and again can be irritating and tedious. To deal with the challenges of discovering and evaluating user satisfaction, this work designed the serendipity-oriented next POI recommendation model (SNPR). A compelling recommendation algorithm should not just prescribe what we are likely to enjoy, but should also recommend unexpected yet relevant items to keep a window open to other worlds and discoveries. The algorithm was evaluated using user travel histories extracted from a Foursquare dataset, and the impact of unexpectedness on increasing user satisfaction and social aims was empirically confirmed. On this basis, SNPR recommends a next POI with high expected user satisfaction to maximize the user experience, and it outperforms various recommendation methods in satisfying user interests during a trip.
That work recommends the next POI only to a single user, not a group, and it uses serendipity to achieve user satisfaction. We suggest a trip, which is a sequence of POIs, to a group of users, and we include dynamicity in our trip recommendation model.
All the above trip recommendation frameworks use aggregation strategies to recommend exciting locations to visit. By comparison, our methodology recommends the sequence of POIs dynamically to a group of users.

3. Our Approach

3.1. Problem Statement

To recommend a sequence of POIs, we must consider the order of the POIs, and two consecutive POIs should not be similar. The problem of static POI sequences is the subject we address in this research. Previous trip recommendation models recommend stationary POI sequences to users. However, what if users want to change their plans in the middle of the trip? A trip is a combination of a sequence of POIs; a trip recommended entirely at the start is static, and the users cannot adapt it dynamically on the way to their destination.
We address this issue by recommending the trip dynamically to a group of users. We input the previous two or three POIs and obtain the next POI stepwise, rather than producing the whole POI sequence at the start. For example, given a group’s previous two POIs (POIs = a, b), we recommend the third POI, so the sequence becomes POIs = a, b, c. We then take the previous sequence POIs = a, b, c as input and recommend the next POI, so the sequence becomes POIs = a, b, c, d.
Our purpose is to recommend the trip to a group of users dynamically. POI series recommendation for a trip is a Seq2Seq learning task. Seq2Seq is an encoder–decoder technique from machine translation and language processing that maps an input sequence to an output sequence using tags and attention values. The idea is to use two recurrent networks that cooperate, with a special token, to predict the next state sequence from the previous one. The historical trajectory is the input of the Seq2Seq model, and the recommended POI series is the output.
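The stepwise extension described above can be sketched as follows. This is a minimal illustration, not the trained model: `recommend_next` is a hypothetical stand-in for RRT’s decoder, replaced here by a trivial rule so the loop is runnable.

```python
def recommend_next(history):
    # Hypothetical stand-in for the trained Seq2Seq model: it simply
    # returns the first POI in a fixed vocabulary not yet visited.
    vocabulary = ["a", "b", "c", "d", "e", "f"]
    for poi in vocabulary:
        if poi not in history:
            return poi
    return None

def extend_trip(history, steps):
    """Grow the trip one POI at a time, re-reading the (possibly
    user-modified) history before every step -- the dynamic setting."""
    trip = list(history)
    for _ in range(steps):
        nxt = recommend_next(trip)
        if nxt is None:
            break
        trip.append(nxt)
    return trip
```

Because the model is consulted once per step, the group can alter the history mid-trip and the next recommendation adapts immediately.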

3.2. Summing-Up the Designed Framework

Generally, related trip recommendation technologies recommend trips based only on user profiles. Our goal for POI-sequence recommendation is to capture users’ preferences and behavior and merge them to dynamically suggest the upcoming POI sequence. Given the success of the transformer model [17], we used a neural translation architecture, which is impressive for modeling sequence-to-sequence tasks. This research proposes the new Recommending Reforming Trip (RRT) model. The transformer follows an encoder–decoder structure but does not rely on recurrence or convolutions to produce an output.
The encoder’s job, as depicted on the left half of the transformer model, is to map an input sequence to a sequence of continuous representations, which is then fed into a decoder. RRT mainly consists of an encoder combined with a decoder. As Figure 1 shows, combining an encoder with a decoder is the way to model the sequence-to-sequence learning task. In particular, the encoder learns the contextual information implied in the input sequence H, while the decoder produces the POI-sequence recommendation R. To capture rich information from the input POI sequence, RRT incorporates four feature components as the encoder input; the details are given in the explanation of the model design. In the decoder, the main component suggests the next POIs one by one to form a POI-sequence recommendation, and two auxiliary components predict the categories and locations of the corresponding POIs. These auxiliary components impose additional constraints that help train the model. Note, however, that the POIs are produced one at a time, and when generating the next POI, the decoder also takes the previously generated POIs as extra input.

3.3. Generating a Trip for a Group of Users

To generate an order of POIs for a group of users, we divide the max-range user’s number of POIs by each other user’s number of POIs. Then, based on the result, we repeatedly insert the other users’ POIs into the max-range user’s POI list. For example, consider a group of three users, group = {user1, user2, user3}, whose POI lists are user1 = {Pc, Pg}, user2 = {Pa, Pb, Pe, Pf}, and user3 = {Pb, Pc, Pf}. We use the following Equation (1) to generate the order of POIs for the group:
$$\mathrm{Order\ of\ POIs} = \frac{\text{max-range}}{\text{other range}}$$
We can see that the max-range of POIs is four, for user2. We divide the max-range by the other users’ ranges, which are two for user1 and three for user3. Dividing the max-range (four, for user2) by user1’s range (two) gives Order of POIs = 4/2 = 2, so we insert a POI of user1 after every two POIs of user2, yielding Order of POIs = Pa, Pb, Pc, Pe, Pf, Pg. We make the same calculation for user3, and likewise for groups of more than three users. Finally, we remove repeated POIs from the list; two consecutive POIs should not be similar. For example, if Pb is a hotel, it is not appropriate to recommend Pb again.
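A rough sketch of this group-ordering procedure, under the assumptions that Equation (1)’s ratio is taken as integer division and that duplicate POIs are dropped after merging (the function name `group_order` is ours):

```python
def group_order(users):
    """Merge per-user POI lists into one group itinerary, following the
    paper's Equation (1): ratio = max-range / other range."""
    base = max(users, key=len)          # the max-range user's POIs
    merged = list(base)
    for other in users:
        if other is base:
            continue
        ratio = max(1, len(base) // len(other))  # assumed integer division
        out, i = [], 0
        for idx, poi in enumerate(merged, start=1):
            out.append(poi)
            if idx % ratio == 0 and i < len(other):
                out.append(other[i])    # insert after every `ratio` POIs
                i += 1
        out.extend(other[i:])           # any leftovers go at the end
        merged = out
    # drop POIs already seen so no POI (e.g. a hotel) repeats
    seen, dedup = set(), []
    for poi in merged:
        if poi not in seen:
            seen.add(poi)
            dedup.append(poi)
    return dedup
```

With user1 = [c, g] and user2 = [a, b, e, f], the ratio is 4/2 = 2 and a POI of user1 is inserted after every two POIs of user2, matching the worked example.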

3.4. Explanation of Model Design

The RRT substructure can be divided into three parts: an input part, an encoder–decoder part, and an output part.
In the Recommending Reforming Trip (RRT) model, four features of POI sequences are included in the input part: POI embeds, category embeds, positional encoding, and the geographical effect, as displayed in Figure 1.

3.4.1. Input Part

POI Embeds. Intuitively, a POI sequence is represented by the POIs’ unique IDs. However, a unique ID is inadequate for describing a POI. Feature embedding is a compelling representation-learning strategy that can map the original feature into a more powerful vector representation [18]. Accordingly, a learnable POI embedding is used to map every POI’s unique ID to a d-dimensional latent vector, which captures the intrinsic characteristics of the POI. Formally, the POI embeds of a POI sequence S of length l are denoted by the matrix $f_{pe} \in \mathbb{R}^{l \times d}$, where l is the length of the POI sequence and d is the dimension of the latent vector. The POI embeds are first randomly initialized and then learned during neural network training.
Category Embeds. Category is a significant attribute of POIs that is widely utilized in many POI recommendation frameworks, for example, by Ding et al. [19], Bolzoni et al. [7], and Lin et al. [20]. The categorical influence of a POI sequence is likewise considered in RRT. A POI sequence corresponds to a sequence of POI categories. Like the POI embeds, a learnable category embedding is used to map every POI category to a d-dimensional latent vector. In this way, the category embeds of a POI sequence S can be denoted by a matrix $f_{ce} \in \mathbb{R}^{l \times d}$.
Geographical Effect. It has been demonstrated that geographical factors fundamentally affect POI recommendation [21]. Geographical effects exist among POIs and between POIs and users: according to the law of geography, two nearer points of interest are more closely related. Building on geographical coordinates therefore provides an effective method for estimating the similarity between points of interest. The geographical effect can also mirror users’ location preferences; for instance, users may enjoy traveling near their homes. The geographical effect is thus a significant part of POI attributes. To model it, we use a multi-layer perceptron (MLP) [22] to convert the coordinates of every POI to a d-dimensional vector $f_{ge}$, as shown in Equation (2):
$$f_{ge}(c) = \mathrm{ReLU}(\mathrm{ReLU}(c W_1 + b_1) W_2 + b_2)$$
where c = (dX, dY) is the coordinate of a POI, $b_1$ and $b_2$ are the biases, $W_1$ and $W_2$ are the weights, and ReLU [23] denotes the rectified linear activation unit. The biases and weights are variables to be trained. A matrix $f_{ge} \in \mathbb{R}^{l \times d}$ is then used to describe the geographical effect of a POI sequence S.
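Equation (2) can be sketched numerically as follows. This is a minimal illustration: the hidden size (16) and the random placeholder weights are our assumptions; in RRT the weights are learned during training.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def f_ge(coord, W1, b1, W2, b2):
    """Two-layer MLP of Eq. (2): maps a POI's (x, y) coordinates to a
    d-dimensional geographic feature vector."""
    return relu(relu(coord @ W1 + b1) @ W2 + b2)

d, hidden = 8, 16                       # assumed sizes for illustration
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, d)), np.zeros(d)
vec = f_ge(np.array([40.71, -74.00]), W1, b1, W2, b2)  # e.g. New York
```

Applying `f_ge` row-wise to the l coordinates of a sequence yields the matrix $f_{ge} \in \mathbb{R}^{l \times d}$.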
Positional Encoding. To utilize the order of the POI sequence, which provides indispensable contextual information, “positional encoding” is brought into RRT. As in Vaswani et al. [17], we encode each position into a d-dimensional vector; in this way, a matrix $f_{pos} \in \mathbb{R}^{l \times d}$ is developed to represent the positional encoding of the POI sequence. In particular, every element of $f_{pos}$ is given by Equation (3):
$$f_{pos}(p, e) = \begin{cases} \sin\!\left(\dfrac{p}{1000^{e/d}}\right), & \text{if } e \text{ is even} \\[4pt] \sin\!\left(\dfrac{p}{1000^{(e-1)/d}}\right), & \text{otherwise} \end{cases}$$
where $p \in \{1, 2, \ldots, l\}$ denotes the POI’s position in the sequence, and $e \in \{1, 2, \ldots, d\}$ denotes the e-th dimension of the d-dimensional vector.
Once the above features of the POI sequence are determined, they are combined by the following sum:
$$f_{int} = f_{pe} + f_{ce} + f_{ge} + f_{pos}$$
Then, $f_{int}$ is passed to the encoder–decoder part described below.
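The input part can be sketched end to end as follows. This is a toy illustration: the sequence length, embedding size, and random feature matrices are our placeholders; in RRT the embeds are learned and $f_{ge}$ comes from the MLP of Equation (2). The positional encoding follows Equation (3) as written (base 1000, sine in both branches).

```python
import numpy as np

def positional_encoding(l, d):
    """Positional encoding per the paper's Eq. (3)."""
    f = np.zeros((l, d))
    for p in range(l):
        for e in range(d):
            exp = e if e % 2 == 0 else e - 1   # even dim uses e, odd uses e-1
            f[p, e] = np.sin(p / 1000 ** (exp / d))
    return f

# Eq. (4): the four l x d feature matrices are simply summed.
l, d = 5, 8                        # toy sequence length / embedding size
rng = np.random.default_rng(0)
f_pe = rng.normal(size=(l, d))     # POI embeds (learned in practice)
f_ce = rng.normal(size=(l, d))     # category embeds (learned in practice)
f_ge = rng.normal(size=(l, d))     # geographical features (from the MLP)
f_int = f_pe + f_ce + f_ge + positional_encoding(l, d)
```

Because the four matrices share the same shape, the sum in Equation (4) keeps the input to the encoder at a fixed $l \times d$ size regardless of how many feature types are added.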

3.4.2. Encoder and Decoder Part

The encoder–decoder infrastructure is strong for sequence-to-sequence learning [23,24] and is also used in this design. In particular, the encoder captures the contextual information implied in the input sequence H; the decoder then generates the POI-sequence recommendation R from the output of the encoder, as displayed in Figure 1.
Encoder. The encoder comprises N identical blocks. The attention mechanism has been demonstrated to be a powerful approach for sequence-modeling tasks in deep neural networks [25]. It can be seen as mapping a set of key–value pairs (K, V) and a query (Q) to an output via the attention function, where the keys, values, and queries are all obtained from different transformations of the integrated features. The output is a weight assigned to each value that depends on the similarity of the query to the corresponding key. Here, a multi-head attention (MLHA) structure similar to that of Vaswani et al. [17] is added into RRT. MLHA is calculated by Equation (5):
$$\mathrm{MLHA}(K, V, Q) = \mathrm{Concat}(\mathrm{Att}_1, \mathrm{Att}_2, \ldots, \mathrm{Att}_m) W^O$$
where the Concat function concatenates $\mathrm{Att}_1, \ldots, \mathrm{Att}_m$, and
$$\mathrm{Att}_i = \mathrm{Attention}(K W_i^K, V W_i^V, Q W_i^Q), \quad i \in \{1, 2, \ldots, m\}$$
The Attention function is denoted by Equation (7):
$$\mathrm{Attention}(V, K, Q) = \mathrm{softmax}\!\left(\frac{Q K^T}{\sqrt{d_k}}\right) V$$
where $d_k$ represents the dimension of K, and the softmax function normalizes the output probabilities. The above weight matrices $W_i^K$, $W_i^V$, $W_i^Q$, and $W^O$ are all trainable. Each block also contains a two-layer feed-forward network (FFN), given by Equation (8):
$$\mathrm{FFN}(x) = \mathrm{ReLU}(x W_1^{FFN} + b_1^{FFN}) W_2^{FFN} + b_2^{FFN}$$
where $W_1^{FFN}$ and $W_2^{FFN}$ are the weights and $b_1^{FFN}$ and $b_2^{FFN}$ are the biases. In addition, the residual connection [26] is employed to boost the results.
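Equations (5)–(8) can be sketched together as follows. This is a minimal NumPy illustration with random placeholder weights and toy dimensions of our choosing (sequence length 5, model size 8, m = 2 heads of size 4); it omits layer normalization and masking.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(V, K, Q):
    """Scaled dot-product attention, Eq. (7)."""
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def mlha(V, K, Q, head_weights, Wo):
    """Multi-head attention, Eqs. (5)-(6): each head applies its own
    projections W^K, W^V, W^Q; outputs are concatenated and projected."""
    heads = [attention(V @ WV, K @ WK, Q @ WQ) for WK, WV, WQ in head_weights]
    return np.concatenate(heads, axis=-1) @ Wo

def ffn(x, W1, b1, W2, b2):
    """Two-layer position-wise feed-forward network, Eq. (8)."""
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # integrated input features f_int
head_weights = [tuple(rng.normal(size=(8, 4)) for _ in range(3))
                for _ in range(2)]
Wo = rng.normal(size=(8, 8))
out = mlha(X, X, X, head_weights, Wo)
out = out + X                      # residual connection [26]
out = ffn(out, rng.normal(size=(8, 16)), np.zeros(16),
          rng.normal(size=(16, 8)), np.zeros(8))
```

Note that self-attention (K, V, and Q all derived from the same sequence) preserves the $l \times d$ shape, so N such blocks can be stacked.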
Decoder. The decoder also comprises N identical blocks. Compared with an encoder block, each decoder block has an additional masked multi-head attention (Masked-MLHA) layer that takes the previously output POIs as input while generating the next POI in the sequence. The aim is to ensure that the POI series can be recommended dynamically, not statically as in previous trip recommendation models. Moreover, the MLHA layer inside the decoder uses the output of the encoder as input, followed by the two-layer FFN. The residual connection is also utilized within the decoder.

3.4.3. Output Part

The output part has three components, i.e., a principal component and two auxiliary components. The main component suggests the upcoming POIs one by one to form the POI-sequence recommendation. The two auxiliary components respectively predict the locations and categories of the corresponding POIs, which adds extra constraints to the learning task to help train the model and improve performance. The total loss is composed as:
$$\mathcal{L} = \sum_{i=1}^{k} \mathrm{CE}(y_i, \hat{y}_i) + \sum_{i=1}^{k} \mathrm{CE}(cat_i, \hat{cat}_i) + \sum_{i=1}^{k} \mathrm{MSE}(loc_i, \hat{loc}_i)$$
where CE represents the cross-entropy function, $y_i$ is the one-hot encoding of the true POI’s unique ID at position i, and $\hat{y}_i$ contains the recommendation probabilities of all POIs at position i. Likewise, $cat_i$ is the one-hot encoding of the true POI category, and $\hat{cat}_i$ contains the predicted probabilities of each category at position i. MSE represents the mean squared error function, and $loc_i$ and $\hat{loc}_i$ are, respectively, the true and predicted locations of the POI at position i. Training aims to minimize the loss function $\mathcal{L}$.
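The three-term loss of Equation (9) can be sketched as follows, assuming one-hot targets and already-normalized prediction probabilities (toy sizes of our choosing: k = 3 positions, 4 POIs, 2 categories, 2-D locations).

```python
import numpy as np

def cross_entropy(onehot, prob, eps=1e-12):
    """CE over the last axis; `onehot` is the true one-hot row,
    `prob` the predicted distribution."""
    return -np.sum(onehot * np.log(prob + eps), axis=-1)

def total_loss(y, y_hat, cat, cat_hat, loc, loc_hat):
    """Eq. (9): POI cross-entropy + category cross-entropy + location
    MSE, each summed over the k positions of the recommended sequence."""
    poi_term = cross_entropy(y, y_hat).sum()
    cat_term = cross_entropy(cat, cat_hat).sum()
    loc_term = np.sum(np.mean((loc - loc_hat) ** 2, axis=-1))
    return poi_term + cat_term + loc_term

# perfect predictions -> loss (essentially) zero
y = np.eye(4)[[0, 1, 2]]            # true POI IDs at 3 positions
cat = np.eye(2)[[0, 1, 0]]          # true categories
loc = np.zeros((3, 2))              # true coordinates
perfect = total_loss(y, y, cat, cat, loc, loc)
```

Because the auxiliary category and location terms share the decoder, they act as extra supervision rather than separate models.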

3.5. Dynamic Recommendation

Users’ historical trajectories change over an extended period, which implies that their interests and preferences also change over time. For this reason, a dynamic recommendation framework is much more viable than a static one. Once the model is trained, RRT provides dynamic suggestions by permitting the input POI sequence to alter with time. When the previously entered POI order changes, RRT dynamically captures the users’ new preference and then suggests a suitable POI sequence. As opposed to other methodologies, which require costly computational resources to re-compute the features of the evolving input, RRT offers a POI sequence that is dynamic from start to finish.

4. Experiments

In this section, a series of experiments is conducted to assess the performance of the proposed RRT system.

4.1. Experimental Setup

4.1.1. The Dataset

In this research, the experiments are based on a dataset [27] gathered from Weeplace, a site that maps user check-in activities in LBSNs. The category and location of every POI are also given in that dataset. The dataset was separated into various regions, and the proposed strategy was assessed in some major urban areas, including New York (NY), Brooklyn (BK), San Francisco (SF), and London (LDN). Table 1 shows the statistical details of the data for every city. Table 2 shows the number of groups in every city along with the number of users in each group.
We removed users with under 40 check-ins and POIs with under ten visits from the dataset, so that every user has a long historical trajectory. In this manner, numerous samples can be built from every user’s history by moving a lag. In each sample, the first POIs form the input POI sequence, denoted by $I = \{i_1, i_2, \ldots, i_n\}$, and the target POI sequence is denoted by $T = \{t_1, t_2, \ldots, t_k\}$.
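The lag-based sample construction can be sketched as follows (the function name `build_samples` is ours):

```python
def build_samples(trajectory, n, k):
    """Slide a lag over one user's check-in history: each sample pairs
    an input sequence I of n consecutive POIs with the target sequence T
    of the k POIs that follow."""
    samples = []
    for start in range(len(trajectory) - n - k + 1):
        I = trajectory[start:start + n]
        T = trajectory[start + n:start + n + k]
        samples.append((I, T))
    return samples
```

A history of m check-ins thus yields m - n - k + 1 training samples, which is why users with short trajectories are filtered out.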

4.1.2. Evaluation Metrics

In the single-POI recommendation task, precision@k and recall@k are typically chosen as evaluation metrics; each is assessed on the top k POIs of a ranked recommendation list. However, what must be recommended in the POI-sequence task is not a ranked POI list but an ordered POI sequence, so precision@k and recall@k do not fit this study. The most commonly used metrics for assessing POI-sequence recommendations are precision, recall, and F1 score [20,28,29,30]; the F1 score combines precision and recall into a single measure of test accuracy. A common disadvantage of these metrics, however, is that they all disregard the order of POIs in the sequence, while the order of POIs in a trip is very important. Two new metrics, named adjusted precision (AP) and sequence-mindful precision (SMP), are therefore introduced to assess the recommendation precision of POI sequences, considering both POI identity and traveling order. Adjusted precision is computed by Equation (10):
$$\mathrm{Adjusted\ Precision} = \frac{\sum_{i=1}^{k} I(g_i = r_i)}{k}$$
where $g_i$ and $r_i$ represent the ground truth and the recommendation at position i, respectively, and
$$I(g_i = r_i) = \begin{cases} 1, & \text{if } g_i = r_i \\ 0, & \text{otherwise} \end{cases}$$
Equation (10) shows that the adjusted precision equals 1 only when every position in the sequence is accurately recommended. As such, if a recommended POI sequence contains the right POIs but in the wrong order, its adjusted precision is 0. A more lenient yet useful metric, sequence-mindful precision, is therefore also introduced, given by Equation (12):
$$\mathrm{Sequence\text{-}mindful\ Precision} = K_{G,R} \cdot \frac{M}{C}$$
where $K_{G,R}$ estimates the level of overlap between the recommended POI sequence R and the ground truth G (regardless of order), C is the number of all POI pairs in the covered (overlapping) subsequence, and M denotes the number of those pairs in the right order.
For instance, if a recommended POI sequence is (P_b, P_a, P_d, P_c, P_f) and the corresponding ground truth is (P_a, P_b, P_c, P_d, P_e), the adjusted precision is immediately 0. For sequence-aware precision, the covered sequence (P_b, P_a, P_d, P_c) is obtained first; it shares four POIs with the ground truth, so K_GR = 4/5 = 0.8. The ordered POI pairs in the covered sequence are (P_b, P_a), (P_b, P_d), (P_b, P_c), (P_a, P_d), (P_a, P_c), and (P_d, P_c), i.e., C = 6. Only (P_b, P_d), (P_b, P_c), (P_a, P_d), and (P_a, P_c) have the correct order with respect to the ground truth, so M = 4, and the sequence-aware precision is 0.8 × 4/6 ≈ 0.533. This example shows that adjusted precision is far stricter than sequence-aware precision, and that sequence-aware precision accounts for both the degree of overlap between the sequences and the order within the sequence.
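The two metrics follow directly from these definitions. The following Python helpers (the function names are ours, not from the paper) reproduce the worked example above:

```python
from itertools import combinations

def adjusted_precision(ground, rec):
    # Equation (10): fraction of positions whose recommended POI matches exactly.
    return sum(g == r for g, r in zip(ground, rec)) / len(ground)

def sequence_aware_precision(ground, rec):
    # Covered sequence: recommended POIs that also occur in the ground truth,
    # kept in recommendation order.
    covered = [p for p in rec if p in ground]
    if len(covered) < 2:
        return 0.0
    k_gr = len(covered) / len(ground)           # overlap ratio K_GR (order ignored)
    pos = {p: i for i, p in enumerate(ground)}  # ground-truth position of each POI
    pairs = list(combinations(covered, 2))      # all ordered POI pairs -> C
    m = sum(pos[a] < pos[b] for a, b in pairs)  # pairs in the correct order -> M
    return k_gr * m / len(pairs)
```

For the example above, `adjusted_precision` returns 0.0 and `sequence_aware_precision` returns 0.8 × 4/6 ≈ 0.533.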
Besides adjusted precision and sequence-aware precision, the most widely used metrics, precision and recall, are also used to evaluate the models in this study. Both can be derived from the confusion matrix and are averaged over all POIs.
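Under our reading of this description (per-POI counts of true positives, false positives, and false negatives, averaged over POIs), the averaging can be sketched as follows; the helper name and counting scheme are our assumptions:

```python
from collections import Counter

def macro_precision_recall(recommended, ground_truth):
    # Count per-POI true positives, false positives and false negatives
    # across all trajectories, then average precision/recall over POIs.
    tp, fp, fn = Counter(), Counter(), Counter()
    for rec, gt in zip(recommended, ground_truth):
        rec_set, gt_set = set(rec), set(gt)
        for p in rec_set & gt_set:
            tp[p] += 1
        for p in rec_set - gt_set:
            fp[p] += 1
        for p in gt_set - rec_set:
            fn[p] += 1
    pois = set(tp) | set(fp) | set(fn)
    precision = sum(tp[p] / (tp[p] + fp[p]) if tp[p] + fp[p] else 0.0 for p in pois) / len(pois)
    recall = sum(tp[p] / (tp[p] + fn[p]) if tp[p] + fn[p] else 0.0 for p in pois) / len(pois)
    return precision, recall
```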

4.1.3. Baselines and Implementation Details

To assess the performance of the introduced model, we chose the following four recommendation models as baselines.
  • LOcation REcommendation (LORE) [30]. LORE first mines sequential patterns from POI sequences and represents them as a dynamic location–location transition graph (LLTG). Based on the LLTG and geographical influence, LORE predicts the probability of the user visiting each POI.
  • Additive Markov Chain (AMC) [31]. AMC recommends POI sequences by exploiting sequential influence. Specifically, given the historical trajectory S_u of user u, when recommending the POI at position p, AMC first estimates the probability of the user visiting each POI based on the full set of POIs before position p, and then recommends the POI with the highest probability at position p.
  • RAND. This random baseline recommends POIs uniformly at random among the candidates the target user has not yet visited.
  • LSTM-Seq2Seq [24]. This method uses a multilayer long short-term memory (LSTM) network to map the input sequence to a fixed-dimensional vector, and then uses another deep LSTM to decode the target sequence from that vector.
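As a rough illustration of the additive idea behind AMC (not the exact model of [31]), every POI seen so far can contribute to the score of each candidate next POI; the geometric decay weight and transition table below are illustrative assumptions:

```python
from collections import defaultdict

def amc_next_poi(history, transitions, alpha=0.5):
    # Score every candidate next POI by summing contributions from all POIs
    # in the history; alpha is an assumed decay giving recent check-ins
    # more weight. `transitions` maps a POI to {candidate: probability}.
    scores = defaultdict(float)
    n = len(history)
    for j, poi in enumerate(history):
        weight = alpha ** (n - 1 - j)
        for cand, prob in transitions.get(poi, {}).items():
            scores[cand] += weight * prob
    # Recommend the candidate with the highest accumulated score.
    return max(scores, key=scores.get) if scores else None
```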

4.1.4. The Variable Setting

In the experiments, all samples were partitioned into three parts: a training set (70%), a validation set (20%), and a test set (10%). The training set was used to learn the model parameters of RRT, the validation set to tune the hyperparameters, and the test set to assess the performance of the model. The dimension c of the input features was set to 72 and the number of blocks N to 2.
The number of heads m of the MLHA layer was set to 9. The experiments were conducted with two settings of the input and output POI sequence lengths (n, k): (18, 3) and (15, 6). To evaluate the proposed algorithm reliably and reduce the risk of over-fitting, each experiment was repeated multiple times, and the mean and variance of the outcomes were used to quantify the model's performance. All experiments were implemented with TensorFlow [32].
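The 70/20/10 partition can be sketched as follows; the helper name and random seed are ours:

```python
import random

def train_val_test_split(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    # Shuffle a copy of the samples once, then cut the list into
    # training / validation / test parts according to the ratios.
    rng = random.Random(seed)
    samples = samples[:]
    rng.shuffle(samples)
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```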

4.2. Experimental Results

A series of experiments was performed using the above setup. To show the importance of POI order in trip recommendation, the adjusted precision and sequence-aware precision results are reported in Table 3 and Table 4. The two tables differ in the input and output POI sequence lengths: (18, 3) for the former and (15, 6) for the latter.
Tables 3 and 4 show that RRT substantially outperforms the other baselines on both adjusted precision and sequence-aware precision in all four cities. The RAND method makes recommendations by random choice without any additional information, so its precision is extremely low but stable. AMC and LORE achieve comparable performance: when the input sequence is longer (18) and the output shorter (3), LORE is usually better than AMC, whereas when the input becomes shorter (15) and the output longer (6), AMC almost always surpasses LORE. Moreover, the values of sequence-aware precision are consistently higher than those of adjusted precision, confirming that, as discussed above, adjusted precision is the stricter of the two metrics.
In addition to adjusted precision and sequence-aware precision, we used precision and recall to assess the above models. Table 5 and Table 6 present the experimental outcomes: Table 5 for an input length of 18 and an output length of 3, and Table 6 for an input length of 15 and an output length of 6.
The results show that RRT still surpasses practically all of the competing approaches when measured by precision and recall. Moreover, as the input is shortened and the output is lengthened, the POI sequence recommendation task becomes harder; RRT nevertheless achieves clearly better performance, which demonstrates that RRT is well suited to the POI sequence recommendation task.

4.3. The Impact of Components

As described in Section 3.4.1, four features make up the input part of RRT: the POI embedding (PE), the geographical effect (GE), the positional encoding (POS), and the category embedding (CE). This section explores the impact of each component: every feature is removed from the input in turn to show the contribution of the corresponding part. The experimental findings are shown in Table 7.
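The leave-one-feature-out ablation can be sketched as concatenating the remaining feature blocks; the per-feature dimensions are not stated in the paper, so the shapes below are illustrative (four 18-dimensional blocks would match the overall input dimension c = 72 used above):

```python
import numpy as np

FEATURES = ["PE", "GE", "POS", "CE"]

def build_input(feature_blocks, drop=None):
    # Concatenate the per-POI feature matrices along the last axis,
    # optionally leaving one block out for an ablation run.
    kept = [feature_blocks[name] for name in FEATURES if name != drop]
    return np.concatenate(kept, axis=-1)
```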
According to the results in Table 7, removing any of the four features lowers both the adjusted precision and the sequence-aware precision, and removing PE or POS causes by far the largest drops. This implies that PE and POS are the most important features for POI sequence recommendation, while CE and GE contribute less.

4.4. Cold-Start Problem

Recommendation frameworks routinely face the cold-start problem: a system struggles to produce reliable recommendations because of an initial lack of data. Likewise, RRT suffers from cold start when the input information is limited. In this research, therefore, users with fewer than 18 but more than 3 check-ins were used to validate the proposed algorithm under cold-start conditions. In particular, each incomplete trajectory was taken from a sequence of fixed length (18); the first 15 POIs were fed to the model as input for generating a recommendation, while the remaining 3 POIs were used to assess the recommendation precision. The experimental outcomes are presented in Table 8. The performance of all the methods except RAND deteriorates compared with Table 3; despite this, RRT remains substantially better than the other baselines.
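The paper's exact handling of trajectories shorter than 18 check-ins is not spelled out; under our reading of the protocol, a cold-start example can be sketched as below, where the left-padding token is an assumption:

```python
def cold_start_example(checkins, n_in=15, n_out=3, pad="<PAD>"):
    # Keep the last n_out check-ins as the evaluation target and pad the
    # remaining prefix on the left up to n_in input positions.
    assert len(checkins) > n_out
    prefix, target = checkins[:-n_out], checkins[-n_out:]
    prefix = [pad] * max(0, n_in - len(prefix)) + prefix[-n_in:]
    return prefix, target
```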

5. Conclusions

This research proposes the RRT system, which recommends POI sequences based on the historical trajectories of a group of users. RRT first models POI sequence recommendation as a sequence-to-sequence learning problem and then develops a DNN-based architecture to solve it. RRT takes the POIs of the historical trajectories, together with their categorical and geographical features, as input, and outputs a POI sequence recommendation for a group of users. As an end-to-end pipeline, RRT can readily provide dynamic POI sequence recommendations by allowing the inputs to change over time. Moreover, in addition to precision and recall, two new metrics, adjusted precision and sequence-aware precision, are proposed to evaluate the accuracy of dynamic POI sequence recommendations for a group of users. Although they differ in strictness, both metrics take the visiting order of the POI sequence into account, which gives a more sensible way to assess the accuracy of a recommended POI sequence. The experimental outcomes on all of the above metrics illustrate the clear advantages of RRT in making dynamic trip recommendations to a group of users.
RRT has shown its effectiveness; however, several topics deserve further investigation. In particular, future work should find a way to include spontaneity when recommending a dynamic sequence of POIs to a group of users.

Author Contributions

Conceptualization, R.A. and G.A.A.; methodology, R.A., A.A. and G.A.A.; software, R.A.; validation, S.M., F.A.A. and J.H.; formal analysis, R.A., F.A.A. and G.A.A.; investigation, R.A. and A.A.; resources, A.A.A.B. and A.B.A.; data curation, R.A.; writing—original draft preparation, R.A.; writing—review and editing, R.A. and A.I.A.; visualization, R.A., F.A.A. and G.A.A.; supervision, R.A.; project administration, A.A.; funding acquisition, F.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors are grateful to the Deanship of Scientific Research, King Saud University for funding through Vice Deanship of Scientific Research Chairs.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.

References

  1. Zheng, Y.; Zhou, X. Computing with Spatial Trajectories, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2011. Available online: https://www.springer.com/gp/book/9781461416289 (accessed on 20 February 2022).
  2. Mei, T.; Hsu, W.H.; Luo, J. Knowledge Discovery from Community-Contributed Multimedia. IEEE Multimed. 2010, 17, 16–17. Available online: https://ieeexplore.ieee.org/document/5638050 (accessed on 20 February 2022).
  3. Cheng, C.; Yang, H.; Lyu, M.R.; King, I. Where you like to go next: Successive point-of-interest recommendation. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Beijing, China, 3–9 August 2013; pp. 2605–2611. Available online: https://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6633 (accessed on 20 February 2022).
  4. Liu, Q.; Wu, S.; Wang, L.; Tan, T. Predicting the Next Location: A Recurrent Model with Spatial and Temporal Contexts. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 194–200. Available online: https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11900 (accessed on 20 February 2022).
  5. Baral, R.; Li, T. Exploiting the roles of aspects in personalized POI recommender systems. Data Min. Knowl. Discov. 2017, 32, 320–343.
  6. Choudhury, M.D.; Feldman, M.; Amer-Yahia, S.; Golbandi, N.; Lempel, R.; Yu, C. Automatic construction of travel itineraries using social breadcrumbs. In Proceedings of the 21st ACM Conference on Hypertext and Hypermedia, Toronto, ON, Canada, 13–16 June 2010; pp. 35–44.
  7. Bolzoni, P.; Helmer, S.; Wellenzohn, K.; Gamper, J.; Andritsos, P. Efficient Itinerary Planning with Category Constraints. In Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Dallas, TX, USA, 4–7 November 2014; ACM: New York, NY, USA, 2014; pp. 203–212.
  8. Bin, C.; Gu, T.; Sun, Y.; Chang, L.; Sun, W.; Sun, L. Personalized POIs Travel Route Recommendation System Based on Tourism Big Data. In Proceedings of the 15th Pacific Rim International Conference on Artificial Intelligence, Nanjing, China, 28–31 August 2018; pp. 290–299. Available online: https://www.springerprofessional.de/en/personalized-pois-travel-route-recommendation-system-based-on-to/15986358 (accessed on 20 February 2022).
  9. Lim, K.H.; Chan, J.; Leckie, C.; Karunasekera, S. Personalized trip recommendation for tourists based on user interests, points of interest visit durations and visit recency. Knowl. Inf. Syst. 2018, 54, 375–406.
  10. Porras, C.; Pérez-Cañedo, B.; Pelta, D.A.; Verdegay, J.L. A Critical Analysis of a Tourist Trip Design Problem with Time-Dependent Recommendation Factors and Waiting Times. Electronics 2022, 11, 357.
  11. Zhang, M.; Yang, Y.; Abbas, R.; Deng, K.; Li, J.; Zhang, B. SNPR: A Serendipity-Oriented Next POI Recommendation Model. In Proceedings of the CIKM ’21 30th ACM International Conference on Information & Knowledge Management, Gold Coast, Australia, 1–5 November 2021.
  12. Lu, E.H.C.; Chen, C.Y.; Tseng, V.S. Personalized trip recommendation with multiple constraints by mining user check-in behaviors. In Proceedings of the SIGSPATIAL ’12 20th International Conference on Advances in Geographic Information Systems, Beach, CA, USA, 6–9 November 2012; pp. 209–218.
  13. Gionis, A.; Lappas, T.; Pelechrinis, K.; Terzi, E. Customized tour recommendations in urban areas. In Proceedings of the WSDM ’14 7th ACM International Conference on Web Search and Data Mining, New York, NY, USA, 24–28 February 2014; pp. 313–322.
  14. Gavalas, D.; Konstantopoulos, C.; Mastakas, K.; Pantziou, G.; Vathis, N. Heuristics for the time dependent team orienteering problem: Application to tourist route planning. Comput. Oper. Res. 2015, 62, 36–50. Available online: https://www.sciencedirect.com/science/article/abs/pii/S0305054815000817 (accessed on 20 February 2022).
  15. Jiang, S.; Qian, X.; Mei, T.; Fu, Y. Personalized Travel Sequence Recommendation on Multi-Source Big Social Media. IEEE Trans. Big Data 2016, 2, 43–56. Available online: https://ieeexplore.ieee.org/document/7444160 (accessed on 20 February 2022).
  16. Wu, Y.; Li, K.; Zhao, G.; Qian, X. Long- and Short-Term Preference Learning for Next POI Recommendation. In Proceedings of the CIKM ’19 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 2301–2304.
  17. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 5998–6008. Available online: https://arxiv.org/pdf/1706.03762.pdf (accessed on 20 February 2022).
  18. Covington, P.; Adams, J.; Sargin, E. Deep Neural Networks for YouTube Recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, Boston, MA, USA, 15–19 September 2016; ACM: New York, NY, USA, 2016; pp. 191–198.
  19. Ding, R.; Chen, Z. A deep neural network for personalized POI recommendation in location-based social networks. Int. J. Geogr. Inf. Sci. 2018, 32, 1631–1648.
  20. Lin, I.C.; Lu, Y.S.; Shih, W.Y.; Huang, J.L. Successive POI Recommendation with Category Transition and Temporal Influence. In Proceedings of the 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), Tokyo, Japan, 23–27 July 2018. Available online: https://ieeexplore.ieee.org/document/8377830/ (accessed on 20 February 2022).
  21. Wang, H.; Shen, H.; Ouyang, W.; Cheng, X. Exploiting POI-Specific Geographical Influence for Point-of-Interest Recommendation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 3877–3883.
  22. Hornik, K. Approximation Capabilities of Multilayer Feedforward Networks. Neural Netw. 1991, 4, 251–257. Available online: https://linkinghub.elsevier.com/retrieve/pii/089360809190009T (accessed on 20 February 2022).
  23. Cho, K.; van Merrienboer, B.; Gülçehre, Ç.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 175–191.
  24. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2014; pp. 3104–3112. Available online: https://proceedings.neurips.cc/paper/2014/hash/a14ac55a4f27472c5d894ec1c3c743d2-Abstract.html (accessed on 20 February 2022).
  25. Kim, Y.; Denton, C.; Hoang, L.; Rush, A.M. Structured Attention Networks. arXiv 2017, arXiv:1702.00887.
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. Available online: https://ieeexplore.ieee.org/document/7780459 (accessed on 20 February 2022).
  27. Liu, X.; Liu, Y.; Aberer, K.; Miao, C. Personalized Point-of-interest Recommendation by Mining Users’ Preference Transition. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, San Francisco, CA, USA, 27 October–1 November 2013; pp. 733–738.
  28. Zhao, P.; Zhu, H.; Liu, Y.; Li, Z.; Xu, J.; Sheng, V.S. Where to Go Next: A Spatio-temporal LSTM model for Next POI Recommendation. arXiv 2018, arXiv:1806.06671.
  29. Baral, R.; Iyengar, S.S.; Li, T.; Zhu, X. HiCaPS: Hierarchical Contextual POI Sequence Recommender. In Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Seattle, WA, USA, 6–9 November 2018; pp. 436–439.
  30. Zhang, J.D.; Chow, C.Y.; Li, Y. LORE: Exploiting Sequential Influence for Location Recommendations. In Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, Dallas, TX, USA, 4–7 November 2014; pp. 103–112.
  31. Usatenko, O. Random Finite-Valued Dynamical Systems: Additive Markov Chain Approach. In Kharkov Series in Physics and Mathematics; Cambridge Scientific Publishers: Cambridge, UK, 2009. Available online: https://iucat.iu.edu/iun/11414611 (accessed on 20 February 2022).
  32. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. Available online: https://www.usenix.org/conference/osdi16/technical-sessions/presentation/abadi (accessed on 20 February 2022).
Figure 1. The architecture of RRT. The details of the model are explained in Section 3.4.
Table 1. Statistics of datasets.

Items              NY        SF        BK        LDN
Total users        4811      3220      2724      1935
Total POIs         28,333    13,366    7334      10,405
Total check-ins    720,350   330,975   159,946   147,610
Table 2. Number of groups for each group size (2–9 users) in each city.

Cities   2        3        4        5        6        7        8        9
NY       15,915   18,685   19,451   17,774   15,602   13,789   11,386   7951
SF       9633     9486     6777     3369     1121     265      45       4
BK       8750     9552     8633     6142     3452     1579     585      160
LDN      3398     1716     711      269      102      36       9        1
Table 3. Performance comparison on four cities (NY, SF, BK, and LDN) for adjusted precision (AP) and sequence-aware precision (SMP) with input length 18 and output length 3. RRT achieves the best result in every column.

Methods   NY AP    NY SMP   SF AP    SF SMP   BK AP    BK SMP   LDN AP   LDN SMP
RAND      0.09     0.010    0.004    0.064    0.009    0.165    0.009    0.023
AMC       5.232    8.454    6.565    8.454    7.343    11.343   9.454    14.655
LORE      6.354    8.234    7.675    8.345    8.454    10.343   11.354   13.564
LSTM      7.243    11.345   9.354    13.344   8.456    20.456   12.453   25.454
RRT       8.443    18.453   12.454   17.344   9.453    21.453   13.453   27.344
Table 4. Performance comparison on four cities (NY, SF, BK, and LDN) for adjusted precision (AP) and sequence-aware precision (SMP) with input length 15 and output length 6. RRT achieves the best result in every column.

Methods   NY AP    NY SMP   SF AP    SF SMP   BK AP    BK SMP   LDN AP   LDN SMP
RAND      0.005    0.024    0.005    0.024    0.009    0.024    0.014    0.024
AMC       5.543    9.453    4.656    9.345    6.453    12.564   9.344    18.353
LORE      4.344    8.634    4.233    6.345    6.345    9.344    10.344   14.454
LSTM      6.454    17.433   6.345    17.345   7.345    16.345   10.343   25.343
RRT       7.353    19.445   7.454    18.434   10.354   20.345   13.343   28.344
Table 5. Performance comparison on four cities (NY, SF, BK, and LDN) for precision and recall with input length 18 and output length 3. RRT achieves the best result in every column.

Methods   NY Prec   NY Rec   SF Prec   SF Rec   BK Prec   BK Rec   LDN Prec   LDN Rec
RAND      0.004     0.004    0.003     0.002    0.004     0.008    0.001      0.001
AMC       0.633     0.750    1.644     1.435    3.644     12.453   2.634      9.453
LORE      1.455     5.345    4.345     0.453    0.534     9.453    3.453      9.543
LSTM      2.453     7.454    4.454     2.454    5.454     13.544   4.454      10.345
RRT       4.444     10.454   5.454     5.434    7.544     14.454   6.544      13.343
Table 6. Performance comparison on four cities (NY, SF, BK, and LDN) for precision and recall with input length 15 and output length 6. RRT achieves the best result in every column.

Methods   NY Prec   NY Rec   SF Prec   SF Rec   BK Prec   BK Rec   LDN Prec   LDN Rec
RAND      0.001     0.002    0.013     0.013    0.014     0.032    0.031      0.029
AMC       0.454     1.444    3.454     0.988    1.454     9.454    1.454      4.454
LORE      0.454     5.454    1.454     0.865    0.234     10.454   0.454      6.454
LSTM      0.745     6.454    6.454     1.454    2.454     11.454   1.454      8.545
RRT       2.454     7.545    7.454     3.454    5.454     13.344   3.645      10.454
Table 7. Impact of removing individual components on four cities (NY, SF, BK, and LDN), measured by adjusted precision (AP) and sequence-aware precision (SMP) with input length 18 and output length 3.

Methods   NY AP    NY SMP   SF AP    SF SMP   BK AP    BK SMP   LDN AP   LDN SMP
RRT       8.443    18.453   12.454   17.344   9.453    21.453   13.453   27.344
No PE     1.435    2.343    2.534    4.433    2.444    5.234    3.443    6.453
No CE     5.454    8.354    6.344    9.354    7.345    13.345   10.544   16.354
No GE     6.454    8.454    7.354    9.545    7.544    12.454   11.543   12.354
No POS    2.454    3.454    5.454    8.454    6.344    10.454   8.454    11.454
Table 8. Performance comparison under cold-start conditions on four cities (NY, SF, BK, and LDN), measured by adjusted precision (AP) and sequence-aware precision (SMP). RRT achieves the best result in every column.

Methods   NY AP    NY SMP   SF AP    SF SMP   BK AP    BK SMP   LDN AP   LDN SMP
RAND      0.003    0.024    0.008    0.054    0.016    0.032    0.013    0.024
AMC       1.434    5.345    1.434    2.453    3.454    5.453    2.345    5.533
LORE      3.543    6.543    1.453    2.434    4.453    6.534    5.534    7.534
LSTM      3.453    7.453    3.453    6.453    6.753    15.435   8.454    18.345
RRT       4.434    10.453   5.434    9.643    6.953    16.454   8.745    22.453
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
