Article

On the Impact of an Intermediary Agent in the Ultimatum Game

1 Desautels Faculty of Management, McGill University, Montreal, QC H3A 1G5, Canada
2 Faculty of Social Welfare and Health Science, University of Haifa, Haifa 3498838, Israel
* Authors to whom correspondence should be addressed.
Games 2022, 13(3), 43; https://doi.org/10.3390/g13030043
Submission received: 7 April 2022 / Revised: 29 April 2022 / Accepted: 24 May 2022 / Published: 31 May 2022
(This article belongs to the Section Behavioral and Experimental Game Theory)

Abstract
Delegating bargaining to an intermediary agent is common practice in many situations. The proposer, while not actively bargaining, sets constraints on the intermediary agent’s offer. We study ultimatum games in which proposers delegate bargaining to an intermediary agent by setting a boundary on either end of the offer. As a benchmark, we use a nonbinding setting in which the proposer simply states the offer they would like to have made. We find that, once censoring is accounted for, specifying a constraint on the intermediary has the same effect as the benchmark suggestion. That is, an agent treats a price ceiling or price floor the same as a directly expressed price wish, as long as the constraint is not binding. We discuss the implications of these findings in terms of the importance of communication and the role of constraints in bargaining with intermediaries.
JEL:
C79; C91; D86

1. Introduction

In many domains, commercial and legal activity is carried out through intermediaries [1]. Past literature investigates the value of different intermediaries, such as car dealers [2], real estate agents [3,4], sports agents [5], and talent management agents [6,7]. However, there is no consensus on the value of these intermediaries. Some [8] argue that such intermediaries provide invaluable expertise. Others [9,10] argue that any expertise advantage is offset in a prisoner’s dilemma-like fashion when all individuals acquire their own intermediary agents.
We isolate a particular function of intermediaries as having an influence over price, and examine whether the expressed wishes of the represented principals are being incorporated into price decisions by the intermediaries, and how. Specifically, we consider whether principals who delegate pricing decisions to agents are better off communicating their wishes as lower or upper bounds, rather than desired prices.
To motivate the problem studied here, it is helpful to consider the illustrative example of a real estate agent. Real estate agents facilitate matching and communicate offers and counteroffers between buyers and sellers. Real estate agents are often reluctant to seek the best attainable prices for their clients because the effort needed to obtain those prices is not commensurate with incremental increases in agents’ commissions [11]. Thus, to exert greater control, a seller might wish to set a hard price floor for their agent, rather than a requested price, and this is one of the delegation formats we examine.
Note that in the above example, an implicit assumption was made that some bargaining delegation takes place, where the agent negotiates for the principal. This may not be the case for all real estate transactions. Nevertheless, it is a mainstream assumption in the intermediation literature that intermediary agents influence prices on behalf of the principal [12,13,14], and this is the assumption we proceed with here.
We study an abstract bargaining setting using the ultimatum game [15]. The ultimatum game is a two-player interaction commonly used in the literature to model abstract bargaining settings. In the two-player interaction, the two players are named the “proposer” and “responder”. The proposer has the first move and offers a split of a sum of money to the responder. The responder can accept or reject the proposed split, where a rejection leads to zero payoffs to both participants.
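The payoff structure just described is simple enough to state in a few lines of code. This is a generic sketch of the standard game (the function name and token amounts are illustrative, not the experimental software):

```python
def ultimatum_payoffs(pie, offer, accept):
    """Payoffs (proposer, responder) in a one-shot ultimatum game.

    The proposer offers `offer` out of `pie` to the responder;
    rejection yields zero for both players."""
    if accept:
        return pie - offer, offer
    return 0, 0
```

For example, with a pie of 10 tokens and an accepted offer of 4, the proposer keeps 6 and the responder gets 4; rejection of any offer leaves both with nothing.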
In the extensive empirical literature on laboratory and field ultimatum games, low offers (below 20% of the pie) are often rejected, and about 50% of the proposers offer a fair split of the pie.
To study the role of the intermediary agent, we extend the simple ultimatum game with the addition of an intermediary agent who acts on behalf of the proposer. This intermediary agent may have restrictions on their bargaining power, imposed by the proposer.
The proposer might want to restrict intermediary agents for various reasons. For example, the proposer might not fully trust that there is an incentive alignment with the intermediary agent, or might be afraid that, with no constraints, the outcome of the negotiation might be disadvantageous. However, restricting the intermediary agent can be detrimental to the probability of success of the bargaining process, and leads to avoidable inefficiencies [13].
We analyze the impact of eight different conditions corresponding to four proposal formats × two visual presentations. The most restrictive proposal format is a binding proposal, where the proposer’s offer is binding to the agent. This is the benchmark, which is theoretically equivalent to the standard two-person ultimatum game. However, the intermediary agent serves as a third party observer, which can alter the proposer’s behavior [16,17]. For example, Hamman et al. [12] show that the presence of an intermediary agent alleviates social pressure, resulting in more selfish behavior. We find no evidence in support of that assertion.
The second proposal format is a nonbinding proposal, which allows us to see both whether and how the intermediary agent deviates from the proposer’s instructions, presumably in the direction of the intermediary agent’s own preferences, and whether the agent is responsive to the proposer’s directive.
The last two proposal formats are the proposer’s upper bound and lower bound. The upper bound is the maximum offer the proposer is willing to make [13]. The lower bound could represent the “guilt point”, below which the proposer is uncomfortable taking additional surplus from the responder. Both constraints have strong anchoring effects on negotiations [18], and can lead to impasse if they are too extreme.
The present investigation explores how an intermediary agent without direct stake in the outcome of the negotiation responds to the proposer’s directed constraints. The intermediary agent could interpret constraints as different levels of the proposer’s preferences [19]. For example, constraints could be interpreted as strict directives, suggestive indications of the proposer’s preferences, or as a license to follow the intermediary agent’s own preferences when the constraints are not too restrictive.

2. Background

Incentivized vs. Unincentivized Intermediation. A distinction between incentivized and unincentivized agents was made by Konow et al. [20]. Specifically, Konow et al. [20] studied a setting where a dictator divided earnings between two people who had obtained these earnings in a previous stage through a real-effort task. Konow et al. [20] distinguish between impersonal third parties—Spectators paid a fixed amount to allocate anonymously among recipients—and Stakeholders, who have a stake in the outcome of the game. Konow et al. found that, in contrast to stakeholders, spectators were more likely to account for the stakeholders’ entitlements due to their efforts in a previous stage. In our case, this insight could be extended to the message posed by the proposer. That is, we conjecture that the agent considers the entitlement of the proposer as expressed through the proposer’s message.
While our focus is on the case of unincentivized (on outcome) intermediaries, we briefly review both sets of literature to highlight the gap we address. Specifically, agent constraints have been studied in incentivized settings but not in unincentivized settings, and there are several relevant insights that we adopt from the incentivized setting.
Incentivized Intermediaries. Several studies argue that delegation to agents serves to alleviate one’s own responsibility [21,22]. Notably, Hamman et al. [12] studied delegation to agents in dictator games. Agents in that framework were indirectly incentivized: proposers could choose agents based on past actions, and agents received a fixed fee per transaction. Thus, over time, agents who were more lucrative to the proposer (i.e., less kind to the recipient) were chosen increasingly often, and the awards to the recipient shifted downward towards the selfish dictator outcome. The findings of Hamman et al. [12] for the early part of the game (the initial five periods), where settings with agents resemble settings without agents, suggest that the addition of the agent would make very little difference in our setting. Nevertheless, the social diffusion hypothesis they raise is compelling, and we revisit it in our ultimatum game setting.
Incentivized intermediaries have been studied in the ultimatum game as well. Fershtman and Kalai (1997) [23] analyzed a simple ultimatum game with unobserved delegation, and showed the conditions under which delegation, even when unobservable, may affect the outcome of the game. Fershtman and Gneezy (2001) [14] studied an ultimatum bargaining game in which players delegate their actions to an intermediary agent directly incentivized to earn them higher returns. They found that delegation, with intermediary agents whose incentives are aligned with the proposer, moves proposals away from equal split and more towards the proposer’s advantage. We examine this possibility as well.
Our focus in the present article is on agent constraints. The role of agency in bargaining games with agent constraints—constraints that are structurally similar to those we study here—was first considered in a seminal work by Schotter et al. [13]. Their games were not ultimatum games in the strict sense, because agents would go to a room and bargain for a period of time. However, the proposer did have an ultimatum role in that they could set a binding constraint on the agent prior to bargaining. Unlike our setting, there was asymmetric information about value and cost between the proposer and responder, but the main issue in that article was the effect of agency on the impasse rate. Schotter et al. [13] found that relative to face-to-face bargaining between proposer and responder, the impasse rate rose from 6% to roughly 15% in the simple variation, where the proposer got a one-shot opportunity to instruct the agent. Surprisingly, incentives provided to the agent (fixed fee or commission) proved relatively unimportant in affecting the impasse rate, despite making the agent aligned with the proposer under the commission and completely misaligned under the fixed fee. Indeed, proposers compensated for the misalignment by tightening their price constraints. The key driver of impasse in Schotter et al. [13] was the proposer’s constraint. We will examine the responder’s behavior in light of that documented relationship.
Unincentivized Intermediaries. Our focus is on the use of constrained unincentivized intermediary agents that are not personally affected by the outcome. As indicated earlier, this is a gap in the literature that we aim to fill.
In that respect, we build on a rich literature on unincentivized dictators acting as agents for a single party (no allocation between two parties). This literature is primarily concerned with risk taking on behalf of others (e.g., [22,24,25,26,27,28]). Risk preferences are tangentially related here as well, because it can be assumed that the intermediary seeks to obtain a settlement with high probability, which itself may or may not involve risk preference on the part of intermediary and the principal parties.

3. Theory and Hypotheses

3.1. Agent Preferences

As seen above, the extant literature treats intermediary agents’ own allocation preferences as largely tangential. The present investigation departs from this assumption. One aspect that may tilt the agent’s preferences toward one side of the transaction or the other is which side is able to communicate its preferences. For example, in real estate transactions, the agent for the buyer is typically paid at closing by the seller. Both sides’ agents could be called seller’s agents on account of being paid by the seller. However, the buyer’s agent is selected by the buyer, and the buyer’s preferences are communicated to them, whereas the seller’s preferences are not. This presumably tilts the buyer’s agent’s allocative preferences in favor of the buyer’s interests. The same could be said for attorneys (who can be paid by the opposing party on settlement) and paid negotiators.
One could speculate that if the fairness-minded or altruistic agents in Hamman et al. [12] had been truly free to set dictator transfers, they would have set them at an equal split between the proposer (dictator) and recipient, because they would have been indifferent between the interests of the two. However, proposers could communicate their preferences to agents by selecting or avoiding them, and this created feedback to the agents on whether they satisfied proposers’ interests. In [14], proposers could communicate their preferences to agents by setting up the incentives for each offer. In addition to incentivizing the agents towards particular allocations, this mechanism also effectively communicated the shape of the proposer’s utility function with respect to allocation, although pursuing two objectives (incentivizing the agent and communicating one’s own preferences) through a single tool can be difficult. In the Schotter et al. [13] study, proposers and agents had two minutes of allotted time alone to discuss preferences and strategies. The recorded guidance from the proposer to the agent was the specified constraint, but guidance about proposer preferences was given verbally, separate from the constraint, and not reported. In a different treatment, called the four-consult treatment, the experimenters gave proposers and agents four opportunities to consult. In these heightened consultation treatments, the party with an agent (proposer or responder) received an even bigger share of the pie. They reported no regressions associated with the relationship between the constraint and the relative share of the pie. Even if they had, the undocumented private communication would have seriously interfered with the estimation. This is where our work picks up.
We conjecture that the specification from the proposer, be it an upper bound, a lower bound, or a nonbinding proposal, serves as an anchor, and thus pulls the agent’s offer towards that communicated specification. There are two broad explanations for this anchoring effect. Table 1 briefly lists these two broad explanations.
Model 1 posits that the intermediary agent ignores the proposer’s communication when the constraint is not binding, and adheres to it when it is. If we assume that the agent has fairness considerations that are resistant to communications from the proposer, then model 1 could describe an equitable agent. An “equitable agent” is defined as follows:
Definition 1.
An equitable agent is defined as an intermediary agent who ignores the proposer’s proposal and instead splits the surplus between the proposer and responder according to the agent’s notion of equality.
Model 2 can describe a proposer’s agent or a responsive agent. A proposer’s agent is formally defined as follows:
Definition 2.
A proposer’s agent is an agent who adheres precisely to the proposer’s expressed wish.
Between the two extremes, there is a responsive agent.
Definition 3.
A responsive agent is defined as an intermediary agent who is responsive to the proposer’s proposal, but adjusts it upwards or downwards.
For Model 2 to describe a proposer’s agent, the agent would have to be fully responsive to direct communication (Nonbinding proposal). Model 2 makes no predictions for a proposer’s agent in Upper and Lower Bound proposal formats. Model 2 could describe a responsive agent, where the agent would monotonically, but not strictly, respond to the proposal in the setting of a Nonbinding proposal; this monotonic relationship would be observed in Upper and Lower Bound settings as well.

3.2. Hypotheses

If Model 1 captures the true underlying data-generating process, then the constraints will be binding only at the edges of the proposal space. That is, the constraints will be binding only when either the upper bound is below or the lower bound is above the agent’s internal reference point for equitable allocation.
Model 2 differs in the interior of the proposal space. It argues that the interior of the proposal space should be responsive to the communication, but indistinguishable between settings, as long as the constraint is not binding.
Hypotheses corresponding to the models:
Hypothesis 1a (H1a).
In a Nonbinding proposal setting, the agent’s offer is nonresponsive to the proposer’s communication.
Hypothesis 1b (H1b).
A censoring specification for Upper Bound and Lower Bound settings fully accounts for the agent’s response function, leaving no responsiveness to the proposer’s proposal as an explanatory variable.
Hypothesis 2 (H2).
The agent’s response function is identical across Nonbinding proposal, Upper Bound, and Lower Bound settings, as long as the constraint is not binding.
H1a and H1b correspond to Model 1. Model 1 states that the constraint censors the offer. This means that the agents’ latent intended offers are identical across proposal formats. Nevertheless, the constraints put a floor or ceiling on the latent offers, making the observed agents’ offers bound between these constraints. When the constraint is not binding, all conditions should produce the same agents’ offers over the entire range of communications—this is what nonresponsive means.
H2 corresponds to Model 2. Model 2 allows agents to be responsive to proposers’ communication, but the type of communications is not important, as long as it is not binding. That is, the latent (intended) offer by the agent, while sensitive to proposer’s communication, is the same across conditions for any given communication. In terms of an observed agent’s offer, this identity is only observed for nonbinding proposer’s communication.
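The two models can be summarized in a small sketch. Under both, an Upper Bound caps and a Lower Bound floors the observed offer; they differ only in whether the latent intended offer responds to the proposer’s communication. The function names and parameter values below are hypothetical illustrations, not estimates from the data:

```python
def latent_offer_model1(proposal, equal_split=5.0):
    """Model 1: the latent intended offer ignores the proposer's
    communication (an equitable agent's internal reference point)."""
    return equal_split

def latent_offer_model2(proposal, intercept=2.5, slope=0.5):
    """Model 2: the latent offer responds to the communicated proposal
    (a responsive agent), with hypothetical anchor parameters."""
    return intercept + slope * proposal

def observed_offer(latent, fmt, constraint):
    """Censor the latent offer according to the proposal format."""
    if fmt == "upper":         # ceiling: offer cannot exceed the bound
        return min(latent, constraint)
    if fmt == "lower":         # floor: offer cannot fall below the bound
        return max(latent, constraint)
    return latent              # nonbinding: no censoring
```

For instance, under Model 1 with an upper bound of 3 the observed offer is 3 (binding), while with an upper bound of 7 it is the equal split of 5 (nonbinding), identical to the Nonbinding setting—exactly the censoring pattern H1b describes.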

4. Experimental Design

The experimental study used an ultimatum game with an intermediary. That is, we added an intermediary agent who intermediates between the proposer and the responder. Importantly, as mentioned above, the intermediary agent nominally represents the proposer, but has no real stake in the outcome. We study eight conditions, corresponding to four proposal formats × two menu displays.
In the first proposal format—the Nonbinding proposal—the agent has complete power to make any offer he wishes. At the other extreme—the Binding proposal—the agent is passive and has no role other than to merely carry out the proposer’s offer. In the Upper Bound and Lower Bound proposal formats, the proposer sets the maximal and minimal offers the agent is allowed to make, respectively. We summarize the eight conditions in Table 2.
The recruitment platform is Cloud Research. In accordance with Cloud Research’s options, we only allowed vetted high-quality participants into the study. These are participants who have passed Cloud Research’s attention and engagement measures, have been verified to be human with nonduplicate geocoding, and have otherwise been checked for consistency and past approval. Forty Cloud Research respondents were recruited in each setting described in Table 2, and each setting was run in two slight visual variations. The two variations pertained only to the presentation of choices for the agent in the restricted conditions with upper and lower bounds, such that only allowed choices were displayed.
The instructions for the unrestricted menu of the experiment are included in Appendix A (except for demographic questions). The restricted menu instructions appear the same, but omit invalid responses from the menu of possibilities. The two display menus are identical in terms of the permissible action space, and only differ in the manner of display. To make sure the participants read the instructions carefully, we used one comprehension quiz question and an attention filter.

5. Results

5.1. Proposer Behavior

While our stated objectives do not involve direct hypotheses regarding proposer behavior, the proposer’s behavior is nevertheless of importance. Recall that there were two alternative proposed models of agent behavior. In Model 1, the agent was unresponsive to any proposer’s nonbinding proposal, whereas in Model 2 the agent was responsive to nonbinding proposals, but not responsive in the interior to the proposal format in which the proposal was sent. The latter model would make sense in particular if the proposer’s proposal itself was insensitive to the proposal format.
Thus, our first check is whether the proposer’s proposal is sensitive to the proposal format. Proposals are shown in Table 3.
The mean proposals in the Nonbinding proposal and Binding proposal formats are 4.825 (0.354) and 5.100 (0.377), respectively, under the unrestricted menu, and 5.100 (0.377) and 5.125 (0.351), respectively, under the restricted menu. None are significantly different from one another, suggesting that proposal differences on the part of the agent are solely due to constraints. The Upper Bound and Lower Bound proposal formats stand apart from the rest. An F-test confirms that we can jointly reject pooling the four settings. We state this as Result 1.
Result 1.
The proposals in Nonbinding proposal and Binding proposal settings are statistically the same and have the same median at 50% of the pie. The Lower Bound and Upper Bound are statistically significantly different from the rest.

5.2. Responder Behavior

We next examine the responder’s reaction to the agent’s offer, as presented in Table 4.
The key measure of the responder’s reaction is the minimum acceptable offer (MAO). This is the lowest offer that the responder is willing to accept. The responder knows the setting they are in and the agent’s offer, but does not observe the proposer’s proposal. This is a consequence of the experimental design: under the strategy method we used, eliciting a conditional response for every possible combination of agent offer and proposer proposal would have been too cumbersome for subjects. We see that these MAOs are statistically similar across settings.
From this pattern, we can conclude that the responder’s response function does not statistically differ between settings. This holds with and without alpha adjustments for the four pairwise comparisons.

5.3. Agent Behavior

Before testing the hypotheses in Section 3.2 pertaining to the agent, it is useful to visualize the agent’s offer as a function of the proposer’s proposal. Figure 1 shows the agent’s offer in the different conditions.
Figure 1 shows that the agent’s offer is increasing in the proposer’s proposal; however, this in itself does not indicate responsiveness, because the Upper Bound and Lower Bound settings have binding limits, which must be accounted for with censoring per H1b. The Nonbinding proposal setting, by contrast, has no limits, and so nonresponsiveness can be tested with a linear relationship between the proposer’s proposal and the agent’s offer.
We proceed to characterize the agent’s response function using the framework of H1a and H1b via the linear regression reported in Table 5. For all settings, we report clustered standard errors (N = 40). We apply Tobit censoring for both Upper Bound and Lower Bound settings.
From the regression for Nonbinding proposal setting shown in Table 5, we see that we can statistically reject H1a, that the agent in Nonbinding proposal setting is unresponsive to proposals (the null hypothesis is that the proposal coefficient is equal to zero in a Nonbinding proposal setting). This is true for both the unrestricted and restricted menus.
The censoring in the Upper Bound and Lower Bound settings improves fit, but does not nullify the significance of the slope. Therefore, we can conclude that even where the bounds are unlikely to be strictly binding, the bounds proposed by the proposers serve as impactful anchors to the agents, in addition to being strict limits. Thus, we reject H1b (the null hypothesis is that the proposal coefficient is equal to zero in the Upper Bound and Lower Bound settings, after accounting for censoring), and conclude that participants in the role of agents are responsive to proposers’ proposals. This is true for both the unrestricted and restricted menus.
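As an illustration of the censored specification, the sketch below fits a Tobit regression in which each observation is censored at its own bound, as in the Upper Bound setting, where the proposer’s communicated bound is both the anchor and the ceiling. This is a generic sketch with synthetic data and hypothetical parameter values, not the authors’ code or estimates:

```python
import numpy as np
from scipy import stats, optimize

def tobit_negloglik(params, y, x, bound):
    """Negative log-likelihood with an observation-specific upper bound."""
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    at_bound = y >= bound  # offer pinned to the proposer's ceiling
    ll = np.where(
        at_bound,
        stats.norm.logcdf((mu - bound) / sigma),          # P(latent >= bound)
        stats.norm.logpdf((y - mu) / sigma) - log_sigma,  # density of latent
    )
    return -ll.sum()

# Synthetic data: the latent offer is anchored on the communicated bound,
# and the bound itself censors the observed offer from above.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 400)                   # proposer's communicated bound
latent = 2.0 + 0.5 * x + rng.normal(0, 1.0, 400)
y = np.minimum(latent, x)                     # ceiling censors the offer

fit = optimize.minimize(tobit_negloglik, x0=[0.0, 0.0, 0.0],
                        args=(y, x, x), method="Nelder-Mead",
                        options={"maxiter": 2000})
b0_hat, b1_hat, _ = fit.x                     # slope recovered near 0.5
```

A significant slope after this censoring adjustment is what distinguishes an anchoring effect of the bound from mere mechanical truncation.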
Clearly, the proposer’s proposal means different things in the different treatments: a preferred proposal in one, an upper bound in another, and a lower bound in another. However, we conjecture that whenever the lower and upper bound are not binding they serve as anchors, in much the same way that a preferred proposal does. The censored regressions are critical in testing this conjecture because they essentially focus the comparison on the areas of the offer space where the bounds are not binding.
From the difference between the pooled setting’s log likelihood and the sum of the non-pooled settings’ log likelihoods, we can reject H2, which states that the agent reacts to proposals in the same way across settings. That is, twice the difference between the “Pooled LL” in Table 5 and the sum of the non-pooled LL from the three separate regressions should have a chi-square distribution with four degrees of freedom (the number of constraints) under the null hypothesis that the settings can be pooled. This is the likelihood ratio test, which rejects pooling at p < 0.001. This is true for both the unrestricted and restricted menus.
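The pooling test in the paragraph above can be sketched as follows; the log-likelihood values here are placeholders, not the paper’s estimates:

```python
from scipy.stats import chi2

def lr_pooling_test(pooled_ll, sum_unpooled_ll, df):
    """Likelihood-ratio test: under the null that the settings can be
    pooled, twice the log-likelihood gap is chi-square with df degrees
    of freedom. Returns the test statistic and its p-value."""
    stat = 2.0 * (sum_unpooled_ll - pooled_ll)
    return stat, chi2.sf(stat, df)

# Placeholder values: the separate regressions fit better than the pooled one.
stat, p = lr_pooling_test(pooled_ll=-120.0, sum_unpooled_ll=-105.0, df=4)
```

With this 15-point log-likelihood gap, the statistic is 30 on 4 degrees of freedom, far in the tail of the chi-square distribution, so pooling would be rejected.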
We further examine Figure 1 and see that the agent’s response function in the Nonbinding proposal setting nearly coincides with that of the Upper Bound setting in the upper half of the support (where the upper bound is not highly restrictive), and with that of the Lower Bound setting in the lower half of the support (where the lower bound is not highly restrictive). Note that at the binding limits, the response function coincides with the 45 degree line (displayed in Figure 1).
From this pattern, we can conclude that while the agent’s response function statistically differs between settings, it nevertheless largely coincides with the nonbinding setting whenever the bounds are not restrictive. In these ranges, the bounds serve the same purpose as nonbinding proposals.
This suggests a pattern of binding ceilings and floors at the respective settings, but only when the agent would have chosen an offer above a ceiling in the Upper Bound setting or below a floor in the Lower Bound setting. We articulate this pattern more formally using the previously defined terminology of proposer’s agent, equitable agent, and responsive agent.
Pattern 1.
An agent facing an Upper Bound order from the proposer will act as a proposer’s agent at the lower half of the offer spectrum, and as a responsive agent for the upper half of the offer. The reverse is true for a Lower Bound order, where the agent acts as a responsive agent for the lower half, and proposer’s agent for the upper half.
We conducted several robustness checks to make sure that this result is not an artifact of the specification or the choice of explanatory variables. One possibility we were concerned with, given the strategy method, is that because each person plays both the agent role and the proposer role, there might be a correlation between an individual’s choice as a proposer and the same individual’s choice as an agent. Specifically, if the individual’s choice as proposer affects their choice as an agent, then this could distort the estimated relationship between an agent’s choice and their partnered proposer’s proposal. Appendix B investigates this issue. We find that while adding the agent’s own proposal as an explanatory variable is significant in all of our equations (in Appendix B), there is no statistically significant effect on the coefficient on the partner’s proposal—the slope of the relationship we examined above. Since there is no underlying reason why the agent’s own proposal should affect the agent’s offer rather than merely being correlated with it, we consider Table 5 to be the better specification, but it is important to stress that the slopes in every setting in Table 5 are statistically comparable under both specifications.

6. Conclusions

Consider the Nonbinding proposal setting, where the agent has complete power while the proposer has no autonomy over their fate. The proposer is completely at the mercy of the agent—somewhat more so than the responder, who at least has an accept/reject decision to make. One difference between the two beneficiaries, from the point of view of the agent, is that the proposer can express their preference, while the responder cannot. If we accept this simplification, then the Nonbinding setting explores the role of expressed preferences on intermediary agents’ behavior. The intermediary agent has no incentive to abide by the proposer’s wishes. On the one hand, they can follow the proposer’s wishes as their representative. On the other hand, they can exercise their own judgement according to their own moral code and ignore the proposer’s wishes. One important contribution of the present work is in showing where the agent falls on this spectrum. We find that the agent lies between the two extreme modes, with a moderate response slope. In all the settings we examined, intermediary agents were best described as “responsive” agents. Thus, we can safely conclude that expressing preferences has an impact on the agent’s behavior, even when there is no way to enforce the desired preference.
A second contribution pertains to the mode (or contract) of preference delegation to the agent. The ideal delegation mode for the proposer depends greatly on the agent’s behavioral inclinations. We propose three possible types of agent behavior—proposer’s agents, who strictly maximize the proposer’s welfare by catering to the proposer’s wishes; equitable agents, who have their own equity preferences; and responsive agents, who split the surplus based on the proposer’s indicated preferences. Depending on the agent’s inclination on the behavioral spectrum described above, the proposer may prefer different types of agency contracts. In the delegation mode corresponding to the Binding proposal setting, the proposer may elect to leave no wiggle room to the agent. By doing so, the proposer would not only reduce the risk of encountering an equitable agent disadvantageous to the proposer, but also take upon themself the full social burden of their actions. Alternatively, the proposer could leave wiggle room to the agent on either end of the distribution with one-sided limits, captured by the Upper and Lower Bound settings (or two-sided limits, which we did not study). We find that agents treat limits, at both ends, much as they would treat a nonbinding proposer suggestion, as long as the limit does not strictly truncate the offer the agent would have made. Thus, the lower range of lower constraints and the upper range of upper constraints are treated in a similar manner to nonbinding proposer suggestions. In the upper range of lower limits and the lower range of upper limits, the limit is strictly adhered to. This means that limits by themselves can serve the same function as nonbinding communication, and that agents react to them similarly. In other words, limits serve to reduce risk while potentially delegating some social cost that the agents take on themselves.

Author Contributions

Methodology, E.H. and Y.R.; writing—original draft preparation, E.H. and Y.R.; writing—review and editing, E.H. and Y.R. The authors contributed equally to all aspects of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

Funding was provided by Social Sciences and Humanities Research Council of Canada, Insight Development Grant, grant #430-2020-0035.

Institutional Review Board Statement

These studies were approved by the IRB at the University of Texas at Dallas (MR 17-039 and MR 18-104).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All data is available upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Instructions for the Experiment

Welcome to this study of decision making. If you follow the instructions carefully, you can earn a bonus, in addition to your $1 show up fee, to be paid to you at the end of the experiment.
You are assigned the role of “Proposer”, “Responder”, or “Agent”. Each participant is matched with two other participants in the other roles.
You will be paid for one of your decisions. In each of the decisions, the proposer will split a pie of 10 tokens between themself and the responder.
One token is worth 20 cents.
The offer is in terms of how many tokens the responder (the receiver) obtains. The amount that the proposer can offer to the responder can be anything from 0 to 10 tokens.
Please note that if you are chosen as the proposer, the amount you will receive in cash for a chosen round is $2 (equal to 10 tokens) minus the accepted offer.
For example, if you are the proposer, your offer is 1 token, and, if the offer is accepted, you—the proposer—will receive 9 tokens (=$1.80). Alternatively, if the offer is 9 tokens, you will receive 1 token (=20 cents).
The offer must be made through a participant in the role of an “agent”. The proposer tells the agent how much they wish to offer. This is called the Proposed Offer.
The agent can then decide how much to finally offer the responder on behalf of the proposer, subject to a Lower Bound and an Upper Bound. This final offer is called the Agent’s Offer.
There are four possible settings (each participant only observes their respective condition):
  • Setting 1 ‘Nonbinding proposal’: The proposed offer is nonbinding. The agent is not obligated to make the proposed offer.
  • Setting 2 ‘Binding proposal’: The proposed offer is binding. The agent is obligated to make the proposed offer.
  • Setting 3 ‘Upper Bound’: The proposer sets a binding Upper Bound. The agent is obligated to make an offer equal to or lower than the binding Upper Bound.
  • Setting 4 ‘Lower Bound’: The proposer sets a binding Lower Bound. The agent is obligated to make an offer equal to or higher than the binding Lower Bound.
  • Payments:
If the offer is rejected by the responder, all three participants receive 0 tokens.
If the offer is accepted by the responder, participants are paid as follows:
Proposer payment for the round = 10 − Offer
Responder payment for the round = Offer
Agent payment for the round = 0
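The payment rules above can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not part of the experiment software.

```python
# Payoffs in dollars for one round, per the rules above (1 token = $0.20).
TOKEN_VALUE = 0.20
PIE = 10  # tokens to split

def payoffs(offer, accepted):
    """Return (proposer, responder, agent) dollar earnings for the round."""
    if not accepted:
        return (0.0, 0.0, 0.0)  # rejection: all three participants receive 0
    return ((PIE - offer) * TOKEN_VALUE,  # proposer keeps the remainder
            offer * TOKEN_VALUE,          # responder receives the offer
            0.0)                          # the agent earns no tokens either way
```

An accepted offer of 1 token yields $1.80 for the proposer and $0.20 for the responder, matching the worked example in the instructions.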
 
After the final decision, we will randomly choose one of your decisions to pay you, provided that you were in a non-agent role. Your payment will be equal to your payoff for that round.
You will face 12 decisions.
 
Only one decision will be picked at random for the purpose of payment.
1 token = 20 cents.
 
Remember, this is REAL money. Your choices determine YOUR ACTUAL PAYMENT for this study (in the bonus portion of your payment).
 
This is a screening comprehension question that you must pass in order to continue. Assume you are the proposer in the setting “Binding proposal”, where your offer is binding. Suppose you offer 2 tokens and the offer is accepted. If this decision is the decision picked for payment, and you are the Proposer, how much will you earn in this experiment in addition to your $1 show up fee?
 
You are in the role of Proposer/Responder/Agent (the wording for agent was slightly modified to reflect their role)
Your task as a (insert the relevant role here)
The Proposer is to propose an offer. If the responder accepts an offer, they receive the amount offered, and you—the proposer—receive 10 tokens minus that offer.
The Responder is to accept or reject the offer by the agent. If you reject, you and the proposer receive 0 tokens. If you accept, you receive the amount offered, and the proposer receives 10 tokens minus that offer.
Recall that the offer you receive is NOT the proposed offer the proposer originally made, but rather an offer the agent made on the proposer’s behalf.
Remember, this is REAL money. Your choice below could REDUCE YOUR ACTUAL PAYMENT for this study (in the bonus portion of your payment).
 
  • Proposer choice screen:
1 token = 20 cents
What is your proposed offer to be paid to the Responder, to be made at your own expense? [radio buttons labeled 1 through 10]
  • Agent choice screen in the Nonbinding proposal condition (the screen in other conditions was slightly modified to reflect the boundaries of the respective condition):
1 token = 20 cents
For each possible proposed offer X from 0 to 10 tokens: What is your offer if the Proposer’s proposed offer is X? [radio buttons labeled 1 through 10]
  • Responder choice screen:
1 token = 20 cents.
For each possible Agent’s offer X from 0 to 10 tokens: Do you accept or reject an offer of X? [Reject / Accept]

Appendix B. Investigating Whether the Agent’s Own Proposal in the Proposer’s Role Affects Their Response to the Partnered Proposer’s Proposal

Table A1. Accounting for the agent’s own proposal when they are in the proposer’s role, in addition to the partnered proposer’s proposal. Estimates for linear regressions of the agent’s response function in the Nonbinding (‘No Bounds’) setting, the Upper Bound and Lower Bound settings, and the pooled settings. Right and left censoring was conducted via Tobit in the Upper Bound and Lower Bound settings and in the pooled setting. Robust standard errors (40 clusters) are reported in all cases.

Parameter | Unrestricted Menu | Restricted Menu

Setting ‘No Bounds’ (440 observations, 40 clusters)
Intercept | 0.859 (1.215) | 0.640 (0.677)
Proposer’s Proposal | 0.430 * (0.073) | 0.334 * (0.058)
Agent’s own proposal in proposer’s role | 0.317 (0.185) | 0.553 * (0.127)
LL | −1022.663 | −906.399

Setting ‘Upper Bound’ (440 observations, 40 clusters)
Intercept | 3.931 * (1.363) | −0.744 (0.811)
Proposal | 0.219 * (0.081) | 0.377 * (0.054)
Agent’s own proposal in proposer’s role | 0.071 (0.156) | 0.568 * (0.133)
LL | −563.805 | −619.838

Setting ‘Lower Bound’ (440 observations, 40 clusters)
Intercept | −0.069 (1.333) | −0.126 (0.868)
Proposal | −0.067 (0.134) | 0.288 * (0.069)
Agent’s own proposal in proposer’s role | 0.612 * (0.193) | 0.723 * (0.192)
LL | −433.637 | −580.520

Pooled settings (1320 observations, 40 clusters)
Intercept | 1.077 (0.872) | 0.378 (0.500)
Proposal | 0.345 * (0.060) | 0.313 * (0.046)
Agent’s own proposal in proposer’s role | 0.332 * (0.128) | 0.571 * (0.104)
LL | −2045.113 | −2152.925
Sum of non-pooled LL from the three earlier regressions | −2020.105 | −2106.757

* p < 0.05. In each setting, N = 440 with 40 clusters per cell. In the pooled regressions, there are 1320 observations in each column.

References

  1. Müller-Freienfels, W. Law of agency. Am. J. Comp. Law 1957, 6, 165–188.
  2. Desai, P.S.; Purohit, D. “Let me talk to my manager”: Haggling in a competitive environment. Mark. Sci. 2004, 23, 219–233.
  3. Sawyer, S.; Crowston, K.; Wigand, R.T.; Allbritton, M. The social embeddedness of transactions: Evidence from the residential real-estate industry. Inf. Soc. 2003, 19, 135–154.
  4. Northcraft, G.B.; Neale, M.A. Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions. Organ. Behav. Hum. Decis. Processes 1987, 39, 84–97.
  5. Weiss, M.A. The Regulation of Sports Agents: Fact or Fiction. Sports Law J. 1994, 1, 329–357.
  6. Bielby, W.T.; Bielby, D.D. Organizational mediation of project-based labor markets: Talent agencies and the careers of screenwriters. Am. Sociol. Rev. 1999, 64, 64–85.
  7. Rajgopal, S.; Taylor, D.; Venkatachalam, M. Frictions in the CEO labor market: The role of talent agents in CEO compensation. Contemp. Account. Res. 2012, 29, 119–151.
  8. Neale, M.A.; Northcraft, G.B. Experts, amateurs, and refrigerators: Comparing expert and amateur negotiators in a novel task. Organ. Behav. Hum. Decis. Processes 1986, 38, 305–317.
  9. Ashenfelter, O.; Dahl, G.B. Bargaining and the Role of Expert Agents: An Empirical Study of Final Offer Arbitration. Rev. Econ. Stat. 2012, 94, 116–132.
  10. Ashenfelter, O.C.; Bloom, D.E.; Dahl, G.B. Lawyers as Agents of the Devil in a Prisoner’s Dilemma Game: Evidence from Long Run Play; No. w18834; National Bureau of Economic Research: Cambridge, MA, USA, 2013.
  11. Agarwal, S.; He, J.; Sing, T.F.; Song, C. Do real estate agents have information advantages in housing markets? J. Financ. Econ. 2019, 134, 715–735.
  12. Hamman, J.R.; Loewenstein, G.; Weber, R.A. Self-interest through delegation: An additional rationale for the principal-agent relationship. Am. Econ. Rev. 2010, 100, 1826–1846.
  13. Schotter, A.; Zheng, W.; Snyder, B. Bargaining through Agents: An Experimental Study of Delegation and Commitment. Games Econ. Behav. 2000, 30, 248–292.
  14. Fershtman, C.; Gneezy, U. Strategic Delegation: An Experiment. RAND J. Econ. 2001, 32, 352–368.
  15. Güth, W.; Schmittberger, R.; Schwarze, B. An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 1982, 3, 367–388.
  16. Fiedler, M.; Haruvy, E. The effect of third party intervention in the trust game. J. Behav. Exp. Econ. 2017, 67, 65–74.
  17. Erev, I.; Gilboa Freedman, G.; Roth, Y. The impact of rewarding medium effort and the role of sample size. J. Behav. Decis. Mak. 2019, 32, 507–520.
  18. Galinsky, A.D.; Mussweiler, T. First offers as anchors: The role of perspective-taking and negotiator focus. J. Personal. Soc. Psychol. 2001, 81, 657–669.
  19. Yechiam, E.; Druyan, M.; Ert, E. Observing others’ behavior and risk taking in decisions from experience. Judgm. Decis. Mak. 2008, 3, 493–500.
  20. Konow, J.; Saijo, T.; Akai, K. Equity versus equality: Spectators, stakeholders and groups. J. Econ. Psychol. 2020, 77, 102171.
  21. Ertac, S.; Gumren, M.; Gurdal, M.Y. Demand for decision autonomy and the desire to avoid responsibility in risky environments: Experimental evidence. J. Econ. Psychol. 2020, 77, 102200.
  22. Füllbrunn, S.; Luhan, W.J. Responsibility and limited liability in decision making for others–An experimental consideration. J. Econ. Psychol. 2020, 77, 102186.
  23. Fershtman, C.; Kalai, E. Unobserved Delegation. Int. Econ. Rev. 1997, 38, 763–774.
  24. Chakravarty, S.; Harrison, G.W.; Haruvy, E.E.; Rutström, E.E. Are you risk averse over other people’s money? South. Econ. J. 2011, 77, 901–913.
  25. Eriksen, K.W.; Kvaløy, O.; Luzuriaga, M. Risk-taking on behalf of others. J. Behav. Exp. Financ. 2020, 26, 100283.
  26. Polman, E.; Wu, K. Decision making for others involving risk: A review and meta-analysis. J. Econ. Psychol. 2020, 77, 102184.
  27. Ifcher, J.; Zarghamee, H. Behavioral economic phenomena in decision-making for others. J. Econ. Psychol. 2020, 77, 102180.
  28. Pahlke, J.; Strasser, S.; Vieider, F.M. Risk-taking for others under accountability. Econ. Lett. 2012, 114, 102–105.
Figure 1. (A) Agent’s Offer—Unrestricted Menu; (B) Agent’s Offer—Restricted Menu. [figure image not reproduced]
Table 1. Two sets of models.
Model 1. The agent’s offer is independent of the proposer’s specification, absent a binding constraint.
Model 2. The agent perceives any constraint as a communication and reacts to such communication monotonically and similarly across settings.
Table 2. The eight conditions of the experimental study.

Condition | Unrestricted Menu | Restricted Menu
Nonbinding Proposal | 40 participants | 40 participants
Binding Proposal | 40 participants | 40 participants
Upper Bound | 40 participants | 40 participants
Lower Bound | 40 participants | 40 participants

Manipulation | Description
Proposal Format
Nonbinding proposal | The proposal is nonbinding. The agent is not obligated to make the proposed offer.
Binding proposal | The proposal is binding. The agent is obligated to make the proposed offer.
Upper Bound | The proposer sets a binding Upper Bound. The agent is obligated to make an offer equal to or lower than the upper bound.
Lower Bound | The proposer sets a binding Lower Bound. The agent is obligated to make an offer equal to or higher than the lower bound.
Menu Display
Unrestricted Menu | The agent is shown the full range of contingent offers for each potential proposal, but is restricted to choosing only qualified offers that do not violate proposer-imposed boundaries.
Restricted Menu | The agent is shown only the range of qualified offers for each potential proposal that do not violate proposer-imposed boundaries.
Table 3. Proposer’s average proposal by setting (standard errors in parentheses). Forty participants per cell, for a total of 320 participants.

Proposal Format | Unrestricted Menu | Restricted Menu
Nonbinding proposal | 4.825 (0.354) | 5.125 (0.362)
Binding proposal | 5.100 (0.377) | 5.125 (0.351)
Upper Bound | 5.600 (0.347) | 6.000 (0.330)
Lower Bound | 3.675 (0.344) | 4.400 (0.363)
F-test for the joint effect of Setting (all 4 settings) | F(3, 156) = 5.36, p = 0.002 | F(3, 156) = 3.46, p = 0.018
Table 4. Responder’s MAO in each setting (N = 40 per cell, for a total of 320 participants).

Proposal Format | Unrestricted Menu | Restricted Menu
Nonbinding proposal | 2.750 (0.301) | 2.200 (0.315)
Binding proposal | 2.750 (0.323) | 2.200 (0.313)
Upper Bound | 2.725 (0.302) | 2.275 (0.297)
Lower Bound | 2.400 (0.288) | 2.325 (0.321)
F-test for the joint effect of setting (all 4 settings) | F(3, 156) = 0.32, p = 0.813 | F(3, 156) = 0.04, p = 0.990
Note. MAO is defined as the crossover between rejection and acceptance—the first offer that gets accepted.
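The MAO definition in the note can be made concrete with a short sketch. The helper name is ours, and it assumes the responder’s contingent decisions (from the strategy-method screen) are stored as a list indexed by offer size:

```python
def minimum_acceptable_offer(decisions):
    """`decisions[k]` is True if the responder accepts an offer of k tokens.
    The MAO is the crossover between rejection and acceptance: the first
    (smallest) accepted offer. Returns None if every offer is rejected."""
    for offer, accepted in enumerate(decisions):
        if accepted:
            return offer
    return None
```

This simple scan assumes a single crossover (once a responder accepts some offer, they accept all larger offers), which is the behavior the crossover definition presumes.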
Table 5. Estimates for linear regressions of the agent’s response function for each of the settings (Nonbinding Proposal, Upper Bound, and Lower Bound) as well as the pooled settings. Right and left censoring was conducted via Tobit in the Upper Bound and Lower Bound settings, respectively, and two-sided censoring was performed in the pooled regression. Robust standard errors (40 clusters) are reported in all cases. There are 40 participants per cell, and 320 participants in total.

Parameter | Unrestricted Menu | Restricted Menu

Setting 1 ‘Nonbinding Proposal’ (440 observations, 40 clusters)
Intercept | 2.703 * (0.484) | 3.476 * (0.441)
Proposal | 0.430 * (0.073) | 0.334 * (0.058)
LL | −1021.193 | −985.711

Setting 2 ‘Upper Bound’ (440 observations, 40 clusters)
Intercept | 4.400 * (0.707) | 2.768 * (0.483)
Proposal | 0.219 * (0.081) | 0.360 * (0.057)
LL | −564.320 | −662.964

Setting 3 ‘Lower Bound’ (440 observations, 40 clusters)
Intercept | 2.770 * (0.702) | 2.980 * (0.316)
Proposal | −0.067 (0.118) | 0.296 * (0.064)
LL | −454.986 | −629.412

Pooled settings (1320 observations, 40 clusters)
Intercept | 2.891 * (0.416) | 3.209 * (0.368)
Proposal | 0.365 * (0.059) | 0.341 * (0.043)
LL | −2093.569 | −2298.462
Sum of non-pooled LL from the three earlier regressions | −2040.499 | −2278.087
F-test for the joint effect of Setting (excluding ‘Binding Proposal’) | F(2, 1314) = 50.38, p < 0.001 | F(2, 1314) = 98.61, p < 0.001
F-test for the joint effect of Setting × Proposal | F(2, 1314) = 24.51, p < 0.001 | F(3, 1314) = 42.21, p < 0.001
Note. * p < 0.05.
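As an illustration of the Tobit approach described in the caption (this is our sketch, not the authors’ code), a minimal right-censored Tobit estimator can be written with SciPy. The data here are simulated under assumed parameters, and a single common bound stands in for the proposer-specific bounds in the experiment:

```python
import numpy as np
from scipy import stats, optimize

def tobit_right_nll(params, y, x, bound):
    """Negative log-likelihood for y* = a + b*x + e, observed as y = min(y*, bound)."""
    a, b, log_sigma = params
    sigma = np.exp(log_sigma)  # parameterize on the log scale to keep sigma > 0
    mu = a + b * x
    censored = y >= bound
    ll = np.where(censored,
                  stats.norm.logsf(bound, mu, sigma),  # P(latent offer >= bound)
                  stats.norm.logpdf(y, mu, sigma))     # density of uncensored offers
    return -ll.sum()

# Simulated data under assumed parameters (intercept 3.0, slope 0.35),
# right-censored at a hypothetical upper bound of 6 tokens.
rng = np.random.default_rng(0)
x = rng.integers(0, 11, 400).astype(float)
latent = 3.0 + 0.35 * x + rng.normal(0.0, 1.5, 400)
y = np.minimum(latent, 6.0)

fit = optimize.minimize(tobit_right_nll, x0=[0.0, 0.0, 0.0],
                        args=(y, x, 6.0), method="Nelder-Mead",
                        options={"maxiter": 2000})
a_hat, b_hat = fit.x[0], fit.x[1]
```

Ordinary least squares on the censored offers would flatten the estimated slope; maximizing the mixed likelihood above recovers the latent intercept and slope, which is the purpose of the Tobit corrections reported in the table.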

