Article

Incentive Mechanism for Improving Task Completion Quality in Mobile Crowdsensing

1 School of Computer Science and Engineering, Central South University, Changsha 410075, China
2 State Grid Ningxia Electric Power Co., Ltd. Information and Communication Branch, Yinchuan 750000, China
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(4), 1037; https://doi.org/10.3390/electronics12041037
Submission received: 15 January 2023 / Revised: 11 February 2023 / Accepted: 14 February 2023 / Published: 19 February 2023
(This article belongs to the Special Issue Applications of Big Data and AI)

Abstract

Due to the randomness of participants' movement and the selfishness and dishonesty of individuals in crowdsensing, the quality of the sensing data collected by the server platform is uncertain. A reasonable incentive mechanism is therefore needed to keep the quality of the sensing data stable. Most existing incentive mechanisms for data quality in crowdsensing are based on traditional economics, which assumes that a participant decides to complete a task only when the benefit of the task exceeds the cost of completing it. Behavioral economics, however, shows that people are affected by costs already invested in the past, which biases their decision-making. Unlike existing incentive mechanism research, this paper therefore considers the impact of sunk cost on user decision-making and proposes an incentive mechanism based on sunk cost, called IMBSC, to motivate participants to improve data quality. IMBSC stimulates the participants' sunk cost effect through an effort sensing reference factor and a withhold factor, prompting them to improve the quality of their own data. The effectiveness of IMBSC is verified by simulation experiments from three aspects: platform utility, participant utility, and the number of tasks completed. The simulation results show that, compared with a system without the IMBSC mechanism, platform utility increases by more than 100%, the average utility of participants increases by about 6%, and the number of completed tasks increases by more than 50%.

1. Introduction

Crowdsensing refers to forming a sensing network from people's existing mobile devices and publishing sensing tasks to individuals or groups in the network for completion, thereby helping professionals or the public to collect data, analyze information, and share knowledge [1]. At present, crowdsensing is applied in many areas of everyday life, such as traffic monitoring systems [2], environmental monitoring [3], and road condition monitoring [4]. Many problems remain in these applications. Participants spend personal time and consume the resources of their terminal devices, such as data traffic, battery power, and computing resources, in the process of completing a sensing task. Participants therefore expect a certain economic reward as compensation for this resource consumption, so the server platform pays appropriate rewards to incentivize participation in sensing tasks. There may be malicious participants in the crowdsensing system who hope to obtain higher rewards at a small cost or even submit false data directly. Participants may also submit data of unsatisfactory quality because of limited time, the low accuracy of the sensing modules in their terminal devices, or their own negligence. To solve these problems, it is necessary to design a reasonable mechanism that motivates participants to submit high-quality data.
At present, many scholars have focused on incentive mechanisms for data quality [5,6,7,8,9,10], but most of this work is based on traditional economics. It assumes that users aim to maximize their individual future utility when making decisions and only consider future costs and benefits [11]. However, a growing body of research in behavioral economics shows that people are affected by the costs they have invested in the past, which biases their decisions; these past investments are sunk costs. A sunk cost is a cost that has already been invested and cannot be recovered, and the sunk cost effect refers to people choosing to persist in a previous behavior in order to avoid losses [12]. Sunk costs play an important role in decision-making and are pervasive in daily life, such as commodity consumption and market investment. It is therefore meaningful to introduce the sunk cost effect into the design of crowdsensing incentive mechanisms [13].
Different from existing incentive mechanism research [5,6,7,8,9,10], this paper considers the impact of sunk cost on user decision-making and proposes an incentive mechanism for improving data quality based on the sunk cost effect. Taking participants as the incentive object, reverse auction as the carrier, and data quality as the incentive goal, the mechanism uses an effort sensing reference factor and a withhold factor to motivate participants to perform sensing tasks and submit high-quality sensing data. Participants evaluate their own effort level through the effort sensing reference factor provided by the server platform, and the server platform withholds part of the participants' reward through the withhold factor to stimulate their sunk cost effect. In order to win the withheld reward, participants resubmit higher-quality data to the platform, which ultimately improves data quality.
The remainder of the paper is organized as follows. Section 2 reviews incentive mechanisms for data quality in crowdsensing and the sunk cost effect. Section 3 presents the Incentive Mechanism Based on Sunk Cost (IMBSC) in detail. Section 4 verifies, through simulations, the effectiveness of IMBSC in improving task completion quality, and Section 5 concludes the paper.

2. Related Work

2.1. Research on Incentive Mechanism for Data Quality in Crowdsensing

At present, research on crowdsensing incentive mechanisms mainly focuses on social welfare, protecting the privacy of participants, task allocation algorithms, guaranteeing the truthfulness of the incentive mechanism, and ensuring data quality. In order to incentivize participants to submit high-quality data, Yang Di et al. [5] proposed a user reputation evaluation mechanism that combines the requester's reputation feedback on a participant, objective time reliability, and the reliability of the provided data size; they designed an online incentive algorithm based on reputation updating that considers the update mechanism of user history and displayed reputation records. Yang Jing et al. [6] proposed analyzing the historical trust status of participants through the willingness degree and data quality of historical tasks, evaluating their historical reputation, and dynamically updating participants' reputation values in subsequent tasks; the server platform then selects participants according to reputation value, so as to collect high-quality sensing data accurately and in real time. Dai et al. [7] proposed guaranteeing data quality from two aspects: constraining the physical distance between participants and tasks, and using a linear model to check whether the aggregate quality of the participants meets the quality threshold. Peng et al. [8] proposed an expectation maximization algorithm combining maximum likelihood and Bayesian inference to evaluate data quality, so as to reward participants fairly and further incentivize users to submit high-quality data. Wen et al. [9] designed a probability model for indoor positioning scenarios to evaluate data reliability and paid participants according to the number of data requesters, which both controls data quality and effectively motivates participants to submit high-quality data next time. Most research on participant reputation and quality guarantees evaluates the reputation or quality of participants from historical data, but these studies ignore an important feature of historical data: the temporal relationship between data points. If the time characteristics of historical data are ignored, the evaluation results for reputation or quality also lose their temporal characteristics. Wang et al. [10] proposed modeling participant quality as a latent time series based on a linear dynamical system, using an Expectation Maximization (EM) algorithm to periodically estimate participant hyper-parameters and ensure the accuracy of quality assessment. Ref. [14] evaluated a user's task performance through data accuracy and response time, with the user's task reward depending on that performance; the mechanism both motivates users to participate in sensing tasks and ensures task completion quality. Ref. [15] exploited the diversity of inherently inaccurate data from many users to aggregate crowd-sensed data, improving the accuracy of data aggregation, and paid according to the quality of individually submitted data and the truth value of the aggregated data. Ref. [16] proposed an allocation algorithm that effectively uses and manages idle resources to obtain maximum benefit and incentivizes users proportionally to the quality of task completion, which improves both the user participation rate and data quality.
The reward paid by the platform to a user is determined by the quality of the data the user provides. Ref. [17] proposed a payment mechanism named Theseus that deals with users' strategic behavior and incentivizes high-effort sensing. It ensures that, at the Bayesian Nash Equilibrium of the non-cooperative game induced by Theseus, all participating users spend their maximum possible effort on sensing, which improves their data quality.

2.2. Sunk Cost Effect

The sunk cost effect [12] refers to people choosing to persist in a previous behavior in order to avoid losses. The sunk cost effect has many sources [13]; here we mainly use the mental account from the cognitive mechanism [18]. The mental account refers to people's psychological estimation of an outcome, that is, their psychological measurement of gains and losses. "Gains" refers to the utility obtained, and "losses" refers to the total amount paid in the process of consumption, that is, the sunk cost. The mental account reaches equilibrium only if the gain utility exceeds the sunk cost. The mental account value function is generally used to represent people's gains and losses [18], as shown in Formula (1):
$$V(x) = \begin{cases} (x - x_0)^{\alpha}, & x - x_0 \geq 0 \\ -\lambda\,(x_0 - x)^{\beta}, & x - x_0 < 0 \end{cases} \tag{1}$$
where $x_0$ represents the reference point. It can be seen from Formula (1) that when people judge their gains and losses, they do so relative to the reference point rather than in terms of the absolute value of gains and losses. The sunk cost effect has been studied in many fields: Ref. [19] studies the influence of psychological distance on sunk cost, Ref. [20] studies the sunk cost effect in engineering change decisions in engineering management, and Ref. [21] reviews the positive and negative effects of the sunk cost effect on consumers.
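To make Formula (1) concrete, the following minimal Python sketch evaluates the mental-account value function. The function name and the sample parameter values (alpha = beta = 0.88, lambda = 2.25, the classical prospect theory estimates from Ref. [23]) are illustrative assumptions, not quantities fixed by Formula (1) itself.

def mental_account_value(x, x0, alpha=0.88, beta=0.88, lam=2.25):
    """Mental-account value function of Formula (1).

    Outcomes are judged relative to the reference point x0: gains
    (x >= x0) are valued concavely, losses (x < x0) convexly and
    weighted by the loss-aversion factor lam.
    """
    if x - x0 >= 0:
        return (x - x0) ** alpha
    return -lam * (x0 - x) ** beta

# A loss of 10 below the reference point hurts more than a gain of 10 pleases:
print(mental_account_value(110, 100))  # ~7.59
print(mental_account_value(90, 100))   # ~-17.07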
As the above shows, the sunk cost effect has been studied in many fields, but it has hardly been applied to incentive mechanisms for crowdsensing. In this paper, the sunk cost effect is introduced into the crowdsensing incentive mechanism to motivate participants to improve their own data quality, and thus to raise the data quality of the whole crowdsensing system.

3. Design of IMBSC

This section first presents the system model and then explains the design process of IMBSC.

3.1. System Model

As shown in Figure 1, the IMBSC system model mainly includes two parts: the server platform and the participants. The server platform hopes to obtain more high-quality data at a lower cost, thereby improving its own utility. IMBSC focuses on this goal: by setting suitable parameters and exploiting the sunk cost effect, it influences the decision-making of participants and improves data quality.
In IMBSC, one run (that is, one task completion process) includes three stages: task publishing, selection reward payment, and final reward payment. When the final reward payment stage of one run is completed, the task publishing stage of the next run begins. As shown in Figure 1, in the task publishing stage the platform first publishes the task, and users then submit bidding information based on the task information and their own capabilities. Next, the platform selects the winning participants through a reverse auction according to the participants' bids. The winning participants perform the task and submit their sensing data, and the platform assesses the quality of the submitted data. Participants who meet the quality threshold are paid their full reward, while participants who do not are paid only a certain proportion of it. Those whose quality does not reach the threshold evaluate their own effort according to the effort sensing reference factor fed back by the platform. Affected by sunk costs, these participants continue the task and improve the quality of their sensing data in order to obtain the withheld reward, and the platform pays the remaining reward once their data quality reaches the threshold.
Next, the process of each stage is elaborated in detail.
Task publishing stage: The server platform publishes the task set $T = \{t_1, t_2, \ldots, t_n\}$ and the task budget set $Budget = \{budget_1, budget_2, \ldots, budget_n\}$ to the participants $P = \{p_1, p_2, \ldots, p_m\}$ in specific areas according to the task characteristics; that is, the $j$th task $t_j$ has a task budget $budget_j$. In addition to the public task budgets, the server platform also has a private task value attribute $V = \{v_1, v_2, \ldots, v_n\}$; that is, the $j$th task $t_j$ has a task value $v_j$. The $i$th participant $p_i$ evaluates which tasks are suitable to complete according to the task information given by the server platform and his own ability level, and submits a tuple $bid_i = \langle T_i, B_i \rangle$ to the server platform. $T_i$ denotes the set of tasks that participant $p_i$ wants to complete, and for any $i$, $T_i \subseteq T$. $B_i$ denotes the set of bids from participant $p_i$ corresponding to the task set $T_i$. $BID = \{bid_1, bid_2, \ldots, bid_m\}$ denotes the set of bidding information submitted by all participants.
Selection reward payment stage: After receiving the bidding information, the server platform selects appropriate participants as winners according to their bids, forming a winner set; participants who are not selected do not belong to the winner set and need not perform tasks. Each winner executes the task and submits data to the server platform, which evaluates the data quality. A winner whose data quality exceeds the quality threshold is called a quality achiever; conversely, a winner whose data quality does not exceed the threshold is called a quality underachiever. The server platform pays quality achievers the reward stated in their bidding information, while quality underachievers receive only a certain proportion $\sigma$ of their reward, determined by their bid and data quality. The server platform also feeds back an effort sensing reference factor $\delta$ to each quality underachiever, who then evaluates his own effort according to $\delta$. Quality underachievers perceive a gap between the proportion of reward received in the selection reward payment stage and their own effort [13], which stimulates their sunk cost effect. In order to recover the sunk cost and obtain the remaining proportion $(1 - \sigma)$ of the reward, quality underachievers continue to perform the task, improve the quality of their data, and resubmit the data to the server platform.
Final reward payment stage: The server platform performs a quality inspection on the resubmitted data. When the data resubmitted by a quality underachiever reaches the quality threshold, that participant is called a quality re-achiever, and the server platform pays the remaining proportion of the reward. Conversely, quality underachievers whose resubmitted data still does not meet the quality threshold receive no further compensation at this stage.
Some parameters used in this paper are listed in Table 1.

3.2. The Influence of Sunk Cost Effect on Participants’ Decision-Making

3.2.1. Effort Sensing Reference Factor

The platform judges whether the data submitted by participant $p_i$ for the first time is qualified according to the task quality threshold $Q_j^r$. If the quality of the data first submitted by participant $p_i$ in the $r$th run satisfies $q_{i,j}^r < Q_j^r$, the participant is only paid $\sigma_{i,j}^r \times b_{i,j}^r$. The platform then sends the data quality $q_{i,j}^r$ and a percentage $\delta_{i,j}^r$ to each participant whose first-submission quality is not qualified, where $\delta_{i,j}^r$ indicates what proportion of the other participants the participant's first-submission quality exceeds. The purpose is to let participants use $\delta_{i,j}^r$ as a reference to evaluate their own effort. The reason we do not directly send the real percentage $\theta_{i,j}^r$ is that the higher a participant's self-perceived effort, the more likely he is to continue improving data quality to complete the task; appropriately increasing the effort sensing reference factor therefore strengthens the participant's effort perception. So $\theta_{i,j}^r$ is kept private to the platform. Definition 1 defines the real proportion $\theta_{i,j}^r$.
Definition 1 (Real proportion $\theta_{i,j}^r$).
$\theta_{i,j}^r$ is the proportion of participants in the winner set whose first-submission data quality is smaller than $q_{i,j}^r$. The calculation formula of $\theta_{i,j}^r$ is shown in Formula (2):
$$\theta_{i,j}^r = \frac{|W_i|}{|W_w|} \times 100\% \tag{2}$$
where $W_w$ is the set of winners in the $r$th run, $W_i$ is the set of winners whose first-submission data quality is smaller than $q_{i,j}^r$, and $W_i \subseteq W_w$. The effort sensing reference factor is defined in Definition 2.
Definition 2 (Effort sensing reference factor $\delta_{i,j}^r$).
$\delta_{i,j}^r$ is the proportion, fed back by the platform in the $r$th run to each participant whose data quality satisfies $q_{i,j}^r < Q_j^r$, of winners whose first-submission quality is smaller than $q_{i,j}^r$. It provides participant $p_i$, whose data quality is not qualified, with a reference for perceiving his own effort, and $p_i$ evaluates his effort in collecting the first-round data based on this feedback. When $\theta_{i,j}^r$ is smaller than 0.5, the feedback would be too small to motivate the participant even after being increased; to keep such participants completing the task, the platform directly sets $\delta_{i,j}^r$ to a constant above 0.5 in that case. $\delta_{i,j}^r$ is calculated as in Formula (3):
$$\delta_{i,j}^r = \begin{cases} \log_2 1.5, & \theta_{i,j}^r \in [0, 0.5) \\ \log_2(\theta_{i,j}^r + 1), & \theta_{i,j}^r \in [0.5, 1] \end{cases} \tag{3}$$
When $\theta_{i,j}^r \in [0, 0.5)$, we set $\delta_{i,j}^r$ to the constant $\log_2 1.5$ to ensure the continuity of $\delta_{i,j}^r$. Since $\theta_{i,j}^r$ is a proportion, its value range is [0, 1]; since $\delta_{i,j}^r$ is also a proportion, its value range must be a subset of [0, 1]. As Definition 2 shows, $\delta_{i,j}^r$ always satisfies $\delta_{i,j}^r \geq \theta_{i,j}^r$. Theorem 1 proves that $\delta_{i,j}^r$ as defined by Formula (3) satisfies these two requirements, which illustrates the rationality of the formula.
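As a minimal sketch (in Python, not the authors' Java simulation), the following code computes the real proportion of Formula (2), kept as a fraction in [0, 1], and maps it to the effort sensing reference factor of Formula (3). The function names and sample qualities are hypothetical.

import math

def real_proportion(q_first, winner_qualities):
    """Formula (2): share of winners whose first-round quality is below q_first."""
    below = sum(1 for q in winner_qualities if q < q_first)
    return below / len(winner_qualities)

def effort_sensing_factor(theta):
    """Formula (3): platform feedback delta, never smaller than theta (Theorem 1)."""
    if theta < 0.5:
        return math.log2(1.5)          # constant ~0.58 keeps weak performers motivated
    return math.log2(theta + 1.0)      # increasing, reaches 1 at theta = 1

winner_qualities = [0.62, 0.70, 0.75, 0.80, 0.83, 0.90]
theta = real_proportion(0.80, winner_qualities)   # 3 of 6 winners below 0.80 -> 0.5
delta = effort_sensing_factor(theta)              # log2(1.5) ~ 0.585 >= theta
print(theta, delta)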
Theorem 1.
When $\theta_{i,j}^r \in [0, 1]$, the value range of $\delta_{i,j}^r$ is a subset of [0, 1], and $\delta_{i,j}^r \geq \theta_{i,j}^r$.
Proof of Theorem 1.
First, we prove that when $\theta_{i,j}^r \in [0, 1]$, the value range of $\delta_{i,j}^r$ is a subset of [0, 1].
Case 1: When $\theta_{i,j}^r \in [0, 0.5)$, $\delta_{i,j}^r = \log_2 1.5$, so the value range of $\delta_{i,j}^r$ is $\{\log_2 1.5\}$, which is a subset of [0, 1].
Case 2: $\delta_{i,j}^r = \log_2(\theta_{i,j}^r + 1)$ is increasing on $\theta_{i,j}^r \in [0.5, 1]$; when $\theta_{i,j}^r = 0.5$, $\delta_{i,j}^r = \log_2 1.5 \approx 0.58$, and when $\theta_{i,j}^r = 1$, $\delta_{i,j}^r = 1$. So the value range of $\delta_{i,j}^r$ on $[0.5, 1]$ is $[\log_2 1.5, 1]$, which is a subset of [0, 1].
Then, we prove that when $\theta_{i,j}^r \in [0, 1]$, we always have $\delta_{i,j}^r \geq \theta_{i,j}^r$.
Case 1: When $\theta_{i,j}^r \in [0, 0.5)$, $\delta_{i,j}^r = \log_2 1.5 \approx 0.58$, so obviously $\delta_{i,j}^r \geq \theta_{i,j}^r$.
Case 2: When $\theta_{i,j}^r \in [0.5, 1]$, it suffices to show that $f(\theta_{i,j}^r) = \log_2(\theta_{i,j}^r + 1) - \theta_{i,j}^r \geq 0$ on this interval. We have $f'(\theta_{i,j}^r) = \frac{1}{\ln 2\,(\theta_{i,j}^r + 1)} - 1$. Setting $f'(\theta_{i,j}^r) = 0$ gives $\theta_{i,j}^r = \frac{1}{\ln 2} - 1 < 0.5$, so $f$ is monotonically decreasing on $[\frac{1}{\ln 2} - 1, 1]$, and $f(1) = 0$. Therefore $f(\theta_{i,j}^r) \geq 0$ for all $\theta_{i,j}^r \in [0.5, 1]$, that is, $\delta_{i,j}^r \geq \theta_{i,j}^r$. In summary, $\delta_{i,j}^r \geq \theta_{i,j}^r$ for all $\theta_{i,j}^r \in [0, 1]$. □
Definition 3 (Withhold factor $\sigma_{i,j}^r$).
In the $r$th run, when $q_{i,j}^r$ is smaller than $Q_j^r$, the platform withholds the reward $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$ of participant $p_i$ and only pays $\sigma_{i,j}^r \times b_{i,j}^r$. When participant $p_i$ submits data a second time and its quality reaches $Q_j^r$, the platform pays the remaining $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$. $\sigma_{i,j}^r$ is calculated as in Formula (4):
$$\sigma_{i,j}^r = \max\left\{0,\ 1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C\right\} \times \frac{q_{i,j}^r}{Q_j^r}, \quad \text{s.t. } q_{i,j}^r < Q_j^r \tag{4}$$
where $C$ is a constant with $C \geq 1$. $C$ is set to exclude participants in $W_l$ whose $\theta_{i,j}^r$ is too small, so that only a participant $p_i$ satisfying $\delta_{i,j}^r \geq (\delta_{i,j}^r - \theta_{i,j}^r) \times C$ is qualified to receive an immediate payment. The platform evaluates the quality $q_{i,j}^r$ of the data first submitted by participant $p_i$; when $q_{i,j}^r$ does not reach the quality threshold $Q_j^r$, the platform returns $\delta_{i,j}^r$, computed from $\theta_{i,j}^r$, to participant $p_i$, who judges his own effort by $\delta_{i,j}^r$. For task $t_j^r$, the larger the $q_{i,j}^r$ and $\delta_{i,j}^r$ that participant $p_i$ receives, the more effort he believes he has put into the task and the more payment he thinks he should get after submitting the data. Since $\delta_{i,j}^r$ is based on $\theta_{i,j}^r$, as $\theta_{i,j}^r$ grows the payment $\sigma_{i,j}^r \times b_{i,j}^r$ also grows. Theorem 2 proves that for task $t_j^r$, when the $q_{i,j}^r$ submitted by participant $p_i$ is fixed, the greater the proportion $\theta_{i,j}^r$ of participants he outperforms, the larger the payment $\sigma_{i,j}^r \times b_{i,j}^r$; that is, when $Q_j^r$ and $q_{i,j}^r$ are fixed, $\sigma_{i,j}^r$ is non-decreasing in $\theta_{i,j}^r$.
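The sketch below evaluates the withhold factor of Formula (4) under the stated constraint q < Q; the choice C = 1.5 and the sample values are assumptions for illustration (the mechanism only requires C >= 1).

import math

def withhold_factor(q, Q, theta, delta, C=1.5):
    """Formula (4): fraction of the bid paid immediately when q < Q.

    The max{0, .} term zeroes the immediate payment for participants whose
    feedback delta exceeds their real proportion theta by too much
    (i.e. delta < (delta - theta) * C), and the q / Q factor scales the
    payment by how close the first submission came to the threshold.
    """
    assert q < Q, "Formula (4) only applies to quality underachievers"
    base = max(0.0, 1.0 - (delta - theta) / delta * C)
    return base * q / Q

theta = 0.6
delta = math.log2(theta + 1.0)         # ~0.678, from Formula (3)
sigma = withhold_factor(q=0.70, Q=0.85, theta=theta, delta=delta)
bid = 5.0
print(sigma * bid, (1 - sigma) * bid)  # immediate payment vs. withheld reward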
Theorem 2.
For task $t_j^r$, when $Q_j^r$ and $q_{i,j}^r$ are fixed and $\delta_{i,j}^r \geq (\delta_{i,j}^r - \theta_{i,j}^r) \times C$, $\sigma_{i,j}^r$ is non-decreasing in $\theta_{i,j}^r$.
Proof of Theorem 2.
Because $\delta_{i,j}^r \geq (\delta_{i,j}^r - \theta_{i,j}^r) \times C$, the max term $\max\{0,\ 1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C\}$ in Formula (4) equals $1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C$, and $Q_j^r$ and $q_{i,j}^r$ are fixed, so it suffices to prove that $1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C$ is non-decreasing in $\theta_{i,j}^r$.
(1) When $\theta_{i,j}^r \in [0, 0.5)$, $\delta_{i,j}^r$ is a constant, so as $\theta_{i,j}^r$ increases, $(\delta_{i,j}^r - \theta_{i,j}^r)$ decreases and therefore $1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C$ increases.
(2) Theorem 1 shows that when $\theta_{i,j}^r \in [0.5, 1]$, $(\delta_{i,j}^r - \theta_{i,j}^r)$ is decreasing in $\theta_{i,j}^r$, and by Formula (3) $\delta_{i,j}^r$ is increasing in $\theta_{i,j}^r$ on this interval, so $1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C$ is again increasing.
(3) It remains to check the transition at $\theta_{i,j}^r = 0.5$. By (1), for $\theta_{i,j}^r \in [0, 0.5)$ the value of $1 - \frac{\delta_{i,j}^r - \theta_{i,j}^r}{\delta_{i,j}^r} \times C$ is smaller than $1 - \frac{\log_2 1.5 - 0.5}{\log_2 1.5} \times C$, while by (2), for $\theta_{i,j}^r \in [0.5, 1]$ it is at least $1 - \frac{\log_2 1.5 - 0.5}{\log_2 1.5} \times C$. Hence the values to the left of $\theta_{i,j}^r = 0.5$ are smaller than those to the right. Combining (1)–(3), for task $t_j^r$ with $Q_j^r$ and $q_{i,j}^r$ fixed, $\sigma_{i,j}^r$ is non-decreasing in $\theta_{i,j}^r$. □
Combining Theorem 2 and Formula (3), when $Q_j^r$ and $q_{i,j}^r$ are fixed, the larger the feedback $\delta_{i,j}^r$, the larger the payment $\sigma_{i,j}^r \times b_{i,j}^r$ that participant $p_i$ receives, which is in line with economic principles. Although the relationship between $q_{i,j}^r$ and $\sigma_{i,j}^r$ has not been discussed explicitly, it is clear that as $q_{i,j}^r$ increases, $\sigma_{i,j}^r$ also tends to increase: a larger $q_{i,j}^r$ increases the probability that $\theta_{i,j}^r$ increases, and Formula (4) shows that the increase of $q_{i,j}^r$ also acts directly on $\sigma_{i,j}^r$.

3.2.2. Data Quality Update Based on Sunk Cost Effect

As mentioned in Section 3.2.1, the platform uses the quality threshold $Q_j^r$ required by the requester of task $t_j^r$ to judge whether the quality $q_{i,j}^r$ of the data submitted by participant $p_i$ is qualified: if $q_{i,j}^r \geq Q_j^r$, $p_i$ is deemed qualified; otherwise, $p_i$ is deemed unqualified. For each unqualified participant $p_i$ in the set $W_l$, the platform withholds the payment $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$, feeds back $q_{i,j}^r$ and $\delta_{i,j}^r$, and gives $p_i$ an additional opportunity to submit data. $p_i$ judges his own quality and effort based on $q_{i,j}^r$ and $\delta_{i,j}^r$, and decides whether to use this extra opportunity to recover the withheld reward.
For $p_i$ in $W_l$, the quality of the second data submission is not only related to the participant himself but is also affected by $\delta_{i,j}^r$ and the sunk cost effect. Let ${q'}_{i,j}^r$ denote the quality of the second submission by $p_i$ under the sunk cost effect; then Formula (5) holds:
$${q'}_{i,j}^r = (1 - \gamma) \times q_{i,j}^r + \gamma \times (1 + \delta_{i,j}^r) \times q_{i,j}^r \tag{5}$$
where $\gamma \in [0, 1]$ is the internal reference weight, representing the degree of influence of the sunk cost effect on ${q'}_{i,j}^r$. When $\gamma$ is close to 0, the sunk cost effect has a small impact on ${q'}_{i,j}^r$, and vice versa. From the perspective of traditional economics, the cost paid by the participant is $q_{i,j}^r$, but the participant is affected by the effort sensing reference factor $\delta_{i,j}^r$ we provide when evaluating the value of his own effort, over-judges that effort, and regards the sunk cost as $(1 + \delta_{i,j}^r) \times q_{i,j}^r$. Theorem 3 illustrates how sunk costs affect participants.
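A small illustrative sketch of the quality update in Formula (5); the sample first-submission quality, feedback delta, and gamma values are assumptions.

def updated_quality(q_first, delta, gamma):
    """Formula (5): second-submission quality under the sunk cost effect.

    A fraction gamma of the participant's attention is anchored on the
    over-estimated sunk cost (1 + delta) * q_first, so the result is never
    below the first-submission quality (Theorem 3).
    """
    return (1 - gamma) * q_first + gamma * (1 + delta) * q_first

q_first = 0.60
delta = 0.678                          # feedback from the platform, Formula (3)
for gamma in (0.1, 0.5, 0.9):
    print(gamma, updated_quality(q_first, delta, gamma))
# 0.1 -> ~0.64, 0.5 -> ~0.80, 0.9 -> ~0.97: stronger anchoring, higher quality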
Theorem 3.
In the $r$th run, when $p_i \in W_l$ and $\gamma > 0$, ${q'}_{i,j}^r \geq q_{i,j}^r$.
Proof of Theorem 3.
${q'}_{i,j}^r - q_{i,j}^r = \gamma \times (1 + \delta_{i,j}^r) \times q_{i,j}^r - \gamma \times q_{i,j}^r = \gamma \times \delta_{i,j}^r \times q_{i,j}^r$. Since the internal reference weight $\gamma > 0$, the first-submission quality $q_{i,j}^r \geq 0$, and $\delta_{i,j}^r > 0$ by Formula (3), we have ${q'}_{i,j}^r - q_{i,j}^r = \gamma \times \delta_{i,j}^r \times q_{i,j}^r \geq 0$, that is, ${q'}_{i,j}^r \geq q_{i,j}^r$. □
Theorem 3 shows that $p_i$ is affected by $\delta_{i,j}^r$ and overestimates his own effort cost, which makes it easier to trigger his psychological sunk cost effect, focuses his attention on the cost already paid, and prompts him to improve the quality of the second submission. From the proof, when $q_{i,j}^r = 0$ we obtain ${q'}_{i,j}^r = q_{i,j}^r$: in this case $p_i$ did not submit any data or invest any effective effort, so the sunk cost effect naturally does not exist. In most cases, the sunk cost effect improves the data quality of $p_i$ ($p_i \in W_l$).

3.3. Incentive Mechanism Design Based on Sunk Cost Effect

Taking the reverse auction as the carrier and fully considering the sunk cost effect in participants' decision-making, an incentive mechanism based on the sunk cost effect is designed. One run is divided into three stages in sequence: the task publishing stage, the selection reward payment stage based on the sunk cost effect, and the final reward payment stage based on the sunk cost effect.

3.3.1. Task Publishing

The process of the task publishing stage is shown in Algorithm 1.
In the task publishing stage, the server platform publishes the tasks and their budgets on the platform (lines 1–4). Participants register on the platform, choose suitable tasks according to their own preferences and abilities, evaluate the cost of performing the tasks, and send the selected task set and bidding information to the server platform (lines 5–11).
Algorithm 1: Task publishing
 1:   FOR $t_j \in T$ DO
 2:       Assign budget $budget_j$ to $t_j$
 3:       Release $t_j$ on the platform
 4:   END FOR
 5:   FOR $p_i \in P$ DO
 6:       Register on the platform
 7:       Select the tasks $p_i$ is willing to execute
 8:       Evaluate the cost of the selected tasks
 9:       Generate $bid_i$
 10:      Submit $bid_i$ to the platform
 11:  END FOR
The selection reward payment stage can be divided into two parts: the participant selection phase and the reward payment phase based on the sunk cost effect. In the participant selection phase, the server platform selects suitable participants to complete the tasks according to the $BID$ submitted by all participants; these participants are called winners. In the reward payment phase based on the sunk cost effect, the winners perform the tasks and submit data to the server platform, which then pays each winner according to the data quality. At the same time, feedback is given to those who fail to meet the quality standard to motivate them to improve their data quality and resubmit.

3.3.2. Participant Selection

In the participant selection stage, the server platform first orders the tasks and then generates winners task by task. When generating winners for a task, the participants are arranged in ascending order of $b_{i,j}^r$, and the first few participants in this order are selected as the winners of the task.
Definition 4 (Task winner conditions).
For a task $t_j$ in the task set $T$, the server platform sorts all participants who bid for task $t_j$ in ascending order of $b_{i,j}^r$. If $l$ is the largest number of participants satisfying Formula (6), then these first $l$ participants are considered the winners of task $t_j$:
$$\sum_{k=1}^{l} b_{k,j}^r < v_j \quad (k \in \mathbb{N}^+) \tag{6}$$
where $l$ represents the number of winners for task $t_j$.
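A minimal Python sketch of the winner selection rule of Definition 4 and Formula (6), assuming bids are scanned in ascending order and winners are added greedily while their cumulative bids stay below the task value; the bid values are hypothetical.

def select_winners(bids, v_j):
    """Definition 4 / Formula (6): greedy winner selection for one task.

    bids: dict mapping participant id -> bid b_{i,j} for task t_j.
    Returns the largest prefix (in ascending bid order) whose bids sum to
    less than the task value v_j.
    """
    winners, total = [], 0.0
    for pid, bid in sorted(bids.items(), key=lambda kv: kv[1]):
        if total + bid >= v_j:
            break
        winners.append(pid)
        total += bid
    return winners

bids = {"p1": 2.0, "p2": 3.5, "p3": 1.5, "p4": 4.0}
print(select_winners(bids, v_j=8.0))   # ['p3', 'p1', 'p2'] with total 7.0 < 8.0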

3.3.3. Reward Payment Stage Based on Sunk Cost Effect

Based on the participant selection, the server platform obtains the set of winners $W_w$. For any $p_i \in W_w$, the server platform pays $p_i$ according to the first-submission data quality $q_{i,j}^r$ and the bid $b_{i,j}^r$.
The process of the whole selection reward payment stage is shown in Algorithm 2.
Algorithm 2: Selection reward payment
 1:   Input: $P$, $T$, $BID$, $V$, $W_w = \varnothing$, $W_l = \varnothing$
 2:   Output: $W_l$
 3:   Sort all $t_j \in T$ in descending order of $(v_j - budget_j)$
 4:   FOR $t_j \in T$ DO
 5:       Sort all $p_i$ in ascending order of $b_{i,j}^r$
 6:       Find the largest $l$ such that $\sum_{i=1,\ times_i > 0}^{l} b_{i,j}^r < budget_j$
 7:       IF such $l$ exists THEN
 8:           FOR all $p_i$ s.t. $i \leq l$ DO
 9:               $x_{i,j} \leftarrow$ TRUE
 10:              $W_w \leftarrow W_w \cup \{p_i\}$
 11:          END FOR
 12:      END IF
 13:  END FOR
 14:  FOR $p_i \in W_w$ DO
 15:      $p_i$ performs his tasks
 16:      $p_i$ submits his data to the platform
 17:  END FOR
 18:  The platform tests the quality of the submitted data
 19:  FOR $p_i \in W_w$ DO
 20:      IF $q_{i,j}^r < Q_j^r$ THEN
 21:          Pay $p_i$ according to Definition 3
 22:          $W_l \leftarrow W_l \cup \{p_i\}$
 23:          Feed back $q_{i,j}^r$ and $\delta_{i,j}^r$ to $p_i$
 24:      ELSE
 25:          Pay $p_i$ the bid $b_{i,j}^r$
 26:      END IF
 27:  END FOR
Algorithm 2 describes the participant selection and reward payment process in one run. In the participant selection, the server platform first sorts the tasks in the task set $T$ in descending order of $(v_j - budget_j)$ (line 3). The server platform then selects winners for each task in turn (lines 4–13): the participants bidding for task $t_j$ are sorted in ascending order of $b_{i,j}^r$, and the largest $l$ participants with $times_i > 0$ satisfying Definition 4 are found (lines 5–6); if such an $l$ exists, the first $l$ participants are made winners of task $t_j$ (lines 7–13). The next step is the payment stage: according to Definition 3, the participants in $W_w$ are paid their rewards, and feedback is given to those who fail to meet the quality standard.
In the final reward settlement stage, after receiving the feedback information, those who failed to meet the quality standard are encouraged to improve their data quality and submit data to the server platform again. When the resubmitted data reaches the quality threshold, the server platform returns the withheld reward to those who re-qualify. The process of the final reward settlement stage is shown in Algorithm 3.
Algorithm 3: Final reward settlement
 1:   Input: $W_l$
 2:   FOR $p_i \in W_l$ DO
 3:       $p_i$ continues to perform his tasks and improve quality
 4:       $p_i$ resubmits his data to the platform
 5:   END FOR
 6:   The platform tests the quality of the resubmitted data
 7:   FOR $p_i \in W_l$ DO
 8:       IF ${q'}_{i,j}^r \geq Q_j^r$ THEN
 9:           Pay $p_i$ the $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$
 10:      END IF
 11:  END FOR
According to the selection reward payment and the final reward settlement, when $p_i$ submits first-round data of quality $q_{i,j}^r \geq Q_j^r$, $p_i$ is paid $b_{i,j}^r$ as the reward; otherwise, $\sigma_{i,j}^r \times b_{i,j}^r$ is obtained. When the resubmitted data quality of $p_i$ satisfies ${q'}_{i,j}^r \geq Q_j^r$, the remaining reward $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$ is obtained; otherwise, no further reward is obtained. The total reward of $p_i$ in the $r$th run is therefore given by Formula (7):
$$pay_i^r = \sum_{\substack{t_j \in T_i,\\ q_{i,j}^r \geq Q_j^r \ \text{or}\ {q'}_{i,j}^r \geq Q_j^r}} b_{i,j}^r \;+\; \sum_{\substack{t_j \in T_i,\\ q_{i,j}^r < Q_j^r \ \text{and}\ {q'}_{i,j}^r < Q_j^r}} \sigma_{i,j}^r \times b_{i,j}^r \tag{7}$$
where $T_i$ here represents the set of tasks for which participant $p_i$ is selected as the winner.
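To illustrate Formula (7), the following sketch sums a winner's reward over hypothetical per-task records; the record fields are assumptions introduced only for this example.

def total_pay(task_records):
    """Formula (7): total reward of one winner in a run.

    task_records: list of dicts with keys
      'bid' - b_{i,j}, 'sigma' - withhold factor,
      'q1'  - first-submission quality, 'q2' - resubmission quality (or None),
      'Q'   - quality threshold of the task.
    """
    pay = 0.0
    for rec in task_records:
        passed_first = rec["q1"] >= rec["Q"]
        passed_second = rec["q2"] is not None and rec["q2"] >= rec["Q"]
        if passed_first or passed_second:
            pay += rec["bid"]                  # full bid, possibly in two installments
        else:
            pay += rec["sigma"] * rec["bid"]   # only the immediate share is kept
    return pay

records = [
    {"bid": 5.0, "sigma": 0.68, "q1": 0.90, "q2": None, "Q": 0.85},  # passed directly
    {"bid": 4.0, "sigma": 0.60, "q1": 0.70, "q2": 0.88, "Q": 0.85},  # passed on resubmission
    {"bid": 3.0, "sigma": 0.50, "q1": 0.60, "q2": 0.75, "Q": 0.85},  # never reached threshold
]
print(total_pay(records))   # 5.0 + 4.0 + 1.5 = 10.5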

3.4. Utility Analysis

This section analyzes the utility of IMBSC from the perspective of the participants and of the server platform. For participants, IMBSC encourages them to improve their data quality through the effort sensing reference factor $\delta_{i,j}^r$ and the withhold factor $\sigma_{i,j}^r$, and through the quality threshold $Q_j^r$ it makes $p_i$ realize that the reason rewards are not paid is that the submitted data quality is not up to standard, so there is no other psychological burden. For the server platform, IMBSC improves platform utility by encouraging participants to improve data quality and complete more tasks.

3.4.1. Participant Utility

According to the incentive mechanism described in Section 3.3, the server platform provides different payment schemes according to the data quality of the tasks completed by the winners. In the $r$th run, for a participant $p_i$ completing task $t_j$ with $q_{i,j}^r < Q_j^r$ and ${q'}_{i,j}^r \geq Q_j^r$, the theory of utility = benefit minus cost in traditional economics gives the utility of completing task $t_j$ as:
$$u_{i,j}^r = \sigma_{i,j}^r \times b_{i,j}^r - c_{i,j}^r + (1 - \sigma_{i,j}^r) \times b_{i,j}^r = b_{i,j}^r - c_{i,j}^r \tag{8}$$
However, according to the sunk cost effect of behavioral economics, participants overestimate their own effort under the influence of the effort sensing reference factor, and the withhold factor $\sigma_{i,j}^r$ retains part of their reward, so that their psychological focus shifts to the sunk cost. When the quality of the data submitted by participant $p_i$ for the second time satisfies ${q'}_{i,j}^r \geq Q_j^r$, the withheld reward $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$ is returned, and participant $p_i$ psychologically magnifies the value of this part of the reward. So for the $r$th run, when task $t_j$ is completed, the utility of a participant $p_i$ with $q_{i,j}^r < Q_j^r$ and ${q'}_{i,j}^r \geq Q_j^r$ is:
$$u_{i,j}^r = \sigma_{i,j}^r \times b_{i,j}^r - c_{i,j}^r + \alpha \times \left((1 + \delta_{i,j}^r) \times (1 - \sigma_{i,j}^r) \times b_{i,j}^r\right)^{\beta} \tag{9}$$
where $\alpha$ is the value amplification factor and $\beta$ is the risk factor. $\sigma_{i,j}^r \times b_{i,j}^r$ represents the reward obtained in the selection reward payment stage and $c_{i,j}^r$ represents the cost of completing task $t_j$. $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$ is the reward withheld after the first data submission and obtained again after the second submission. $(1 + \delta_{i,j}^r)$ represents how much participant $p_i$ values the reward $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$: the larger $\delta_{i,j}^r$, the greater participant $p_i$'s perceived effort and the greater the perceived value of $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$.
The above analyzes the utility of this class of participants. Participants whose data quality satisfies $q_{i,j}^r \geq Q_j^r$ are not affected by the effort sensing reference factor and withhold factor fed back by the server platform, so their utility is simply benefit minus cost. Participants whose data quality satisfies $q_{i,j}^r < Q_j^r$ and ${q'}_{i,j}^r < Q_j^r$ improve their data quality under the influence of the effort sensing reference factor and withhold factor, but they learn from the quality threshold given by the server platform that the reason they are not paid is that the submitted data quality is not up to standard, so they carry no other psychological burden; their utility is also benefit minus cost. Therefore, for the $r$th run, when a task is completed, the utility of the winner is:
$$u_{i,j}^r = \begin{cases} b_{i,j}^r - c_{i,j}^r, & q_{i,j}^r \geq Q_j^r \\ \sigma_{i,j}^r \times b_{i,j}^r - c_{i,j}^r + \alpha \times \left((1 + \delta_{i,j}^r)(1 - \sigma_{i,j}^r) b_{i,j}^r\right)^{\beta}, & q_{i,j}^r < Q_j^r \text{ and } {q'}_{i,j}^r \geq Q_j^r \\ \sigma_{i,j}^r \times b_{i,j}^r - c_{i,j}^r, & q_{i,j}^r < Q_j^r \text{ and } {q'}_{i,j}^r < Q_j^r \end{cases} \tag{10}$$
The utility of participant $p_i$ in the $r$th run is the sum of the utilities of all tasks for which he is selected as a winner:
$$u_i^r = \sum_{t_j \in T,\ x_{i,j}^r = \mathrm{TRUE}} u_{i,j}^r \tag{11}$$
where $x_{i,j}^r = \mathrm{TRUE}$ indicates that the participant is a winner of task $t_j$.
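A hedged sketch combining Formulas (10) and (11); the default alpha = 2.25 and beta = 0.88 follow the values adopted in Section 4.1, and the per-task records are hypothetical.

def task_utility(bid, cost, sigma, delta, q1, q2, Q, alpha=2.25, beta=0.88):
    """Formula (10): per-task utility of a winner under the sunk cost effect."""
    if q1 >= Q:                                   # qualified on the first submission
        return bid - cost
    if q2 is not None and q2 >= Q:                # qualified on the resubmission
        recovered = (1 + delta) * (1 - sigma) * bid
        return sigma * bid - cost + alpha * recovered ** beta
    return sigma * bid - cost                     # never qualified

def run_utility(task_records):
    """Formula (11): utility of a participant is the sum over tasks he won."""
    return sum(task_utility(**rec) for rec in task_records)

records = [
    dict(bid=5.0, cost=2.0, sigma=0.0, delta=0.0, q1=0.90, q2=None, Q=0.85),
    dict(bid=4.0, cost=1.5, sigma=0.6, delta=0.68, q1=0.70, q2=0.88, Q=0.85),
]
print(run_utility(records))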

3.4.2. Platform Utility

The incentive mechanism IMBSC proposed in this paper focuses on data quality, which should be taken into account when calculating platform utility. For any participant $p_i \in W_w$, if $\max\{q_{i,j}^r, {q'}_{i,j}^r\} < Q_j^r$, the corresponding task cannot be regarded as completed. So the platform utility of IMBSC is:
$$U^r = V_{T_w} - \sum_{p_i \in W_w} pay_i^r \tag{12}$$
where $T_w$ represents the union of tasks performed by participants $p_i \in W_w$ that satisfy $\max\{q_{i,j}^r, {q'}_{i,j}^r\} \geq Q_j^r$, $V_{T_w}$ represents the sum of the values of all tasks in $T_w$, and $pay_i^r$ is given in Formula (7).
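A small sketch of Formula (12): platform utility is the total value of the tasks actually completed minus all rewards paid out. The data structures are assumptions for illustration.

def platform_utility(task_values, completed_task_ids, payments):
    """Formula (12): sum of values of completed tasks minus total payments.

    task_values: dict task id -> value v_j
    completed_task_ids: tasks for which some winner reached the threshold
                        (max(q1, q2) >= Q)
    payments: list of pay_i^r values over all winners (Formula (7))
    """
    value = sum(task_values[t] for t in completed_task_ids)
    return value - sum(payments)

task_values = {"t1": 20.0, "t2": 15.0, "t3": 30.0}
print(platform_utility(task_values, {"t1", "t3"}, payments=[10.5, 8.0, 6.5]))  # 50 - 25 = 25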

4. Simulation Experiment and Result Analysis

4.1. Experimental Setup

In this section, we compare IMBSC with the STATIC mechanism. STATIC does not introduce the sunk cost effect: it selects the winner set $W_w$ with the same reverse auction selection algorithm as IMBSC, but it simply pays $\sigma_{i,j}^r \times b_{i,j}^r$ to any winner $p_i$ satisfying $q_{i,j}^r < Q_j^r$ and does not influence the data quality of $p_i$ through $\delta_{i,j}^r$.
To ensure a fair comparison, IMBSC and STATIC use the same experimental environment and parameters. The quality threshold $Q_j^r$ is set to 0.85, the same value as in reference [22]. The loss aversion factor $\alpha$ and the risk attitude factor $\beta$ take the classical values 2.25 and 0.88 from reference [23]. The IMBSC and STATIC mechanisms are implemented in a Java simulation, and all experimental results are averaged over 1000 runs. The other experimental parameter settings are listed in Table 2.
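For orientation, the following sketch shows one way the ranges of Table 2 could be sampled in a simulation; the uniform sampling and the helper name are assumptions, while the ranges and the fixed values Q = 0.85, alpha = 2.25, beta = 0.88 follow Section 4.1 and Table 2.

import random

def sample_run_parameters(m=200, n=200, seed=0):
    """Draw one set of simulation parameters within the ranges of Table 2."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(n):
        v = rng.uniform(1, 80)                               # single task value
        tasks.append({"value": v,
                      "budget": rng.uniform(max(0.0, v - 20), v)})  # budget in [v-20, v], non-negative
    participants = []
    for _ in range(m):
        cost = rng.uniform(0.5, 4)                           # cost of completing a task
        participants.append({
            "cost": cost,
            "bid": rng.uniform(cost, 8),                     # bid never below cost
            "times": rng.randint(1, 5),                      # max tasks per run
        })
    return tasks, participants

Q_THRESHOLD, ALPHA, BETA = 0.85, 2.25, 0.88                  # fixed values from Section 4.1
tasks, participants = sample_run_parameters()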

4.2. Experimental Results

In order to verify the effectiveness of IMBSC, this section will analyze the server platform utility, the average utility of participants, and the number of tasks completed through simulation experiments. When verifying the impact of the number of participants m on the server platform utility and the average utility of participants, the number of tasks n is set to 200. When verifying the impact of the number of tasks n on the server platform utility and the average utility of participants, the number of participants m is set to 200.

4.2.1. Server Platform Utility

The utility of the server platform is used to measure the rationality of the revenue and expenditure of the incentive mechanism. Formula (12) gives the platform utility for one run; this section evaluates platform utility by averaging it over all runs.
Figure 2 shows how the server platform utility of the two mechanisms changes with the number of tasks and the number of participants. As Figure 2a shows, the platform utility of both mechanisms increases with the number of tasks $n$: as the number of tasks grows, participants have more tasks to choose from, more tasks are completed, and platform utility rises. When $n$ becomes large, platform utility grows more slowly, because with a fixed number of participants $m$ and limited personal energy, the number of tasks completed per round saturates and the growth of platform utility decreases. As Figure 2b shows, the platform utility of both mechanisms also increases with the number of participants $m$: more participants means more competition and more candidates per task, which raises the probability of task completion. The platform utility of IMBSC is better than that of STATIC, mainly because IMBSC uses the effort sensing reference factor $\delta_{i,j}^r$ and the withhold factor $\sigma_{i,j}^r$ to motivate winners to improve their own data quality; when winners who failed to meet the quality standard resubmit higher-quality data, some previously unfinished tasks become completed, which improves platform utility.

4.2.2. Average Utility of Participants

The average utility of participants reflects, to some extent, the feasibility of the incentive mechanism. The average utility reported in this section is the average utility of the participants in $W_w$; Formula (11) gives the utility of a participant for one run.

Figure 3 shows how the average utility of the participants of the two mechanisms changes with the number of tasks and the number of participants. The average participant utility of IMBSC is better than that of STATIC: for a participant $p_i$ whose first-submission data quality is unqualified, STATIC only pays $\sigma_{i,j}^r \times b_{i,j}^r$, whereas under IMBSC the remaining reward $(1 - \sigma_{i,j}^r) \times b_{i,j}^r$ is paid once the second-submission quality is qualified, which raises the average utility of the participants. According to Figure 3a, the average utility of the participants of both mechanisms increases with the number of tasks $n$, because more tasks provide more opportunities to perform tasks and a single participant performs more tasks. According to Figure 3b, the average utility of the participants of both mechanisms decreases with the number of participants $m$, because as $m$ grows the participants become saturated relative to the task execution opportunities, each participant performs fewer tasks on average, and the average utility decreases.

4.2.3. Number of Tasks Completed

Figure 4 shows how the number of completed tasks varies with the number of tasks and the number of participants. The number of tasks completed under IMBSC is greater than under STATIC: suppose $p_i \in P$ executes task $t_j$ with $q_{i,j}^r < Q_j^r$, so that $t_j$ is not completed; when $p_i$ improves his data and resubmits with quality ${q'}_{i,j}^r \geq Q_j^r$, task $t_j$ becomes completed and the number of completed tasks increases by one. In Figure 4a, the number of completed tasks increases with the number of tasks $n$, because a larger $n$ increases the number of tasks performed and completed by participants; the increase is small because the number of tasks $n$ is close to saturation relative to the number of participants $m$, so further increasing $n$ cannot significantly increase the number of completed tasks. In Figure 4b, the number of completed tasks also increases with the number of participants $m$: here the number of tasks $n$ is saturated while the number of participants $m$ is not, so increasing $m$ allows more tasks to be completed and increases the number of tasks completed by the system.
Figure 5 shows the influence of the internal reference weight factor $\gamma$ on the number of completed tasks. According to Figure 5a, the number of completed tasks increases with the number of tasks $n$ and with $\gamma$; according to Figure 5b, it increases with the number of participants $m$ and with $\gamma$. Because $\gamma$ reflects how strongly $\delta$ improves the data quality of participants, a larger $\gamma$ leads to a larger quality improvement and thus more completed tasks. The conclusion drawn from Figure 5 is therefore consistent with Formula (5).

5. Summary

A crowdsensing incentive mechanism based on the sunk cost effect is designed to motivate participants to improve the quality of their sensing data. IMBSC takes the reverse auction as the carrier and, through the design of the effort sensing reference factor and the withhold factor, urges participants to perform tasks and submit high-quality sensing data. Participants evaluate their own effort through the effort sensing reference factor provided by the server platform, and the server platform further withholds part of the participants' reward through the withhold factor to stimulate their psychological sunk cost effect. Affected by the sunk cost effect, participants resubmit data to the server platform in order to win the withheld reward, thereby improving the quality of the sensing data. Finally, the effectiveness of IMBSC is verified through simulation experiments from three aspects: server platform utility, average utility of participants, and the number of tasks completed. The simulation results show that, compared with a system without the IMBSC mechanism, platform utility increases by more than 100%, the average utility of participants increases by about 6%, and the number of completed tasks increases by more than 50%. This paper mainly studies how the sunk cost effect improves the data quality of participants within one run and does not study its impact on the next run. In fact, the impact of the sunk cost effect on participants who did not receive the withheld reward after the second data submission will persist into the next run. In future work, we will focus on how the sunk cost effect affects participants' data quality across multiple runs.

Author Contributions

K.W., Z.C. and L.Z. designed the project and drafted the manuscript, as well as collected the data. J.L. and B.L. wrote the code and performed the analysis. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Project of State Grid Ningxia Electric Power Co., Ltd. Information and Communication Branch, (Research on Key Technologies of Anomaly Detection of Electric Power Information Equipment Based on Deep Learning). The contract number is: SGNxxt00xxJS2200078.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Capponi, A.; Fiandrino, C.; Kantarci, B.; Foschini, L.; Kliazovich, D.; Bouvry, P. A Survey on Mobile Crowdsensing Systems: Challenges, Solutions, and Opportunities. IEEE Commun. Surv. Tutor. 2019, 21, 2419–2465. [Google Scholar] [CrossRef] [Green Version]
  2. Liu, Y.; Li, H.; Guan, X.; Yuan, K.; Zhao, G.; Duan, J. Review of incentive mechanism for mobile crowd sensing. J. Chongqing Univ. Posts Telecommun. (Nat. Sci. Ed.) 2018, 30, 147–158. [Google Scholar]
  3. Liu, Z.D.; Jiang, S.Q.; Zhou, P.F. A participatory urban traffic monitoring system: The power of bus riders. IEEE Trans. Intell. Transp. Syst. 2017, 18, 2851–2864. [Google Scholar] [CrossRef]
  4. Dutta, P.; Aoki, P.M.; Kumar, N.; Mainwaring, A.; Myers, C.; Willett, W.; Woodruff, A. Demo Abstract: Common Sense: Participatory urban sensing using a network of handheld air quality monitors. In Proceedings of the International Conference on Embedded Networked Sensor Systems, Berkeley, CA, USA, 4–6 November 2009; pp. 349–350. [Google Scholar]
  5. Yang, D.; Li, Z.; Yang, L.; Xu, Q. Reputation—Updating Online Incentive Mechanism for Mobile Crowd Sensing. J. Data Acquis. Proc. 2019, 34, 797–807. [Google Scholar]
  6. Yang, J.; Li, P.-C.; Yan, J.-J. MCS data collection mechanism for participants’ reputation awareness. Chin. J. Eng. 2017, 39, 1922–1934. [Google Scholar]
  7. Dai, W.; Wang, Y.; Jin, Q.; Ma, J. Geo-QTI: A quality aware truthful incentive mechanism for cyber-physical enabled Geographic crowdsensing. Future Gener. Comput. Syst. 2018, 79 Pt 1, 447–459. [Google Scholar] [CrossRef]
  8. Peng, D.; Wu, F.; Chen, G. Data Quality Guided Incentive Mechanism Design for Crowdsensing. IEEE Trans. Mob. Comput. 2018, 17, 307–319. [Google Scholar] [CrossRef]
  9. Wen, Y.; Shi, J.; Zhang, Q.; Tian, X.; Huang, Z.; Yu, H.; Cheng, Y.; Shen, X. Quality-Driven Auction-Based Incentive Mechanism for Mobile Crowd Sensing. Veh. Technol. IEEE Trans. 2015, 64, 4203–4214. [Google Scholar] [CrossRef]
  10. Wang, H.; Guo, S.; Cao, J.; Guo, M. Melody: A Long-Term Dynamic Quality-Aware Incentive Mechanism for Crowdsourcing. In Proceedings of the IEEE International Conference on Distributed Computing Systems, Atlanta, GA, USA, 5–8 June 2017; IEEE: Piscataway, NJ, USA, 2017. [Google Scholar]
  11. Kai, H.; He, H.; Jun, L. Quality-Aware Pricing for Mobile Crowdsensing. IEEE/ACM Trans. Netw. 2018, 26, 1728–1741. [Google Scholar]
  12. Nash, J.S.; Imuta, K.; Nielsen, M. Behavioral Investments in the Short Term Fail to Produce a Sunk Cost Effect. Psychol. Rep. 2018, 122, 1766–1793. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Wang, J.; Zhang, B.; Liang, S.; Li, J. Sunk cost effects hinge on the neural recalibration of reference points in mental accounting. Prog. Neurobiol. 2022, 208, 102178. [Google Scholar] [CrossRef] [PubMed]
  14. Xu, C.; Si, Y.; Zhu, L.; Zhang, C.; Sharif, K.; Zhang, C. Pay as how you behave: A truthful incentive mechanism for mobile crowdsensing. IEEE Internet Things J. 2019, 6, 10053–10063. [Google Scholar] [CrossRef]
  15. Gong, X.; Shroff, N.B. Truthful mobile crowdsensing for strategic users with private data quality. IEEE/ACM Trans. Netw. 2019, 27, 1959–1972. [Google Scholar] [CrossRef]
  16. Saha, S.; Habib, M.A.; Adhikary, T.; Razzaque, M.A.; Rahman, M.M.; Altaf, M.; Hassan, M.M. Quality-of-experience-aware incentive mechanism for workers in mobile device cloud. IEEE Access 2021, 9, 95162–95179. [Google Scholar] [CrossRef]
  17. Jin, H.; He, B.; Su, L.; Nahrstedt, K.; Wang, X. Data-driven pricing for sensing effort elicitation in mobile crowd sensing systems. IEEE/ACM Trans. Netw. 2019, 27, 2208–2221. [Google Scholar] [CrossRef]
  18. Christoph Merkley, J.M.A. Closing a Mental Account: The Realization Effect for Gains and Losses. Exp. Econ. 2021, 24, 303–329. [Google Scholar] [CrossRef]
  19. Zhang, Q. The Effect of Psychological Distance on Sunk Cost Effect: The Mediating Role of Expected Regret[D]. Ph.D. Thesis, Shanghai Normal University, Shanghai, China, 2019. [Google Scholar]
  20. Jin, L.; Li, C.; Zheng, X.; Hao, S.; Han, L.; Liang, Q. Cost effects analysis of engineering change decision under different construal levels. J. Ind. Eng. Eng. Manag. 2018, 32, 202–206. [Google Scholar]
  21. Xiang, P.; Xu, F.; Shi, Y.; Zhang, H. Review of research on the effect of sunk costs in consumer decision making. Psychol. Res. 2017, 10, 30–34. [Google Scholar]
  22. Zhu, X.; An, J.; Yang, M.; Xiang, L.; Yang, Q.; Gui, X. A fair incentive mechanism for crowdsourcing in crowd sensing. IEEE Internet Things J. 2017, 3, 1364–1372. [Google Scholar] [CrossRef]
  23. Tversky, A.; Kahneman, D. Advances in Prospect Theory: Cumulative representation of uncertainty. J. Risk Uncertain. 1992, 5, 297–323. [Google Scholar] [CrossRef]
Figure 1. IMBSC system model.
Figure 2. Platform utility depending on the number of tasks and participants.
Figure 3. Average utility of participants.
Figure 4. Number of tasks completed.
Figure 5. Impact of the internal reference weight factor on the number of tasks completed.
Table 1. Common parameters and their meanings.
Parameter | Meaning
$T$, $n$, $t_j$ | set of tasks, number of tasks, the $j$th task
$P$, $m$, $p_i$ | set of participants, number of participants, the $i$th participant
$V$, $v_j$ | set of values of all tasks, value of the $j$th task
$Budget$, $budget_j$ | task budget set of all tasks, budget of the $j$th task
$c_{i,j}^r$ | cost for $p_i$ of completing task $t_j$ in run $r$
$T_i$ | set of tasks that $p_i$ wants to complete
$B_i$, $b_{i,j}^r$ | set of bids for all tasks in $T_i$, bid of $p_i$ for $t_j$ in the $r$th run
$BID$, $bid_i$ | set of all participants' bidding information, $p_i$'s bidding information
$times_i$ | upper limit on the number of tasks performed by $p_i$
$Q_j^r$ | quality threshold of the $j$th task in the $r$th run
$q_{i,j}^r$ | quality of the data for $t_j$ submitted by $p_i$ for the first time in the $r$th run
$W_w$, $W_i$ | set of all winners, set of winners whose first-submission data quality is less than $q_{i,j}^r$
$W_f$, $W_l$ | set of winners meeting the quality standard, set of winners failing to meet it
$W_f'$, $W_l'$ | set of winners re-attaining the quality standard, set of winners finally failing to attain it
$\theta_{i,j}^r$ | percentage of winners exceeded by $p_i$ in the $r$th run
$\delta_{i,j}^r$ | effort sensing reference factor
$\sigma_{i,j}^r$ | withhold factor
$x_{i,j}$ | Boolean value; TRUE means $p_i$ is a winner of $t_j$, FALSE means $p_i$ was not selected as a winner of $t_j$
$u_{i,j}^r$, $u_i^r$, $U^r$ | utility of $p_i$ completing task $t_j$ in the $r$th run, utility of $p_i$ in the $r$th run, utility of the platform in the $r$th run
Table 2. Experimental parameters.
Parameter | Value | Meaning
$m$ | 50~500 | number of users in the participant set
$n$ | 50~500 | number of tasks in the task set
$v_j$ | 1~80 | single task value
$budget_j$ | $v_j - 20$ ~ $v_j$ (0~60) | single task budget
$c_{i,j}^r$ | 0.5~4 | cost for $p_i$ of completing the task
$b_{i,j}^r$ | $c_{i,j}^r$~8 | bid of $p_i$ for the task
$times_i$ | 1~5 | maximum number of completed tasks
$q_{i,j}^r$ | 0~1 | data quality
$\gamma$ | 0.1~0.9 | degree of influence of $\delta$ on quality
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
