Review

Functional Representation of the Intentional Bounded Rationality of Decision-Makers: A Laboratory to Study the Decisions a Priori

by Carlos Sáenz-Royo 1,*, Francisco Chiclana 2,* and Enrique Herrera-Viedma 3

1 Department of Organization and Business Management, University of Zaragoza, 50009 Zaragoza, Spain
2 Institute of Artificial Intelligence, School of Computer Science and Informatics, De Montfort University, Leicester LE1 9BH, UK
3 Andalusian Research Institute on Data Science and Computational Intelligence (DaSCI), University of Granada, 18071 Granada, Spain
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(5), 739; https://doi.org/10.3390/math10050739
Submission received: 24 January 2022 / Revised: 23 February 2022 / Accepted: 24 February 2022 / Published: 26 February 2022
(This article belongs to the Special Issue Reviews in Mathematics and Applications)

Abstract:
The judgments of decision-makers are frequently the best way to process the information on complex alternatives. However, the performances of the alternatives are often not observable in their entirety, which prevents researchers from conducting controlled empirical studies. This paper justifies a functional representation that, due to its good predictive results, has been widely used ad hoc in studies across different branches of knowledge. It formalizes aspects of the human mental structure that influence people's ability to decide, together with their intentional bounded rationality, and it subsequently analyzes how the reliability of decision-makers is affected by the difficulty of the problem and by the expertise and beliefs of the decision-maker. The main research objective of this paper is to derive explicitly a general functional form that represents the behavior of a decision-maker linked to their way of thinking. This functional form allows a laboratory to be created to study a priori the performance of human decisions, i.e., the probability of choosing each of the alternatives, once the returns of the alternatives, the level of expertise, and the initial beliefs of the decision-maker are known exogenously. This laboratory will allow (1) the evaluation of decision support techniques; (2) the creation of agent-based models that anticipate group performance arising from individual interactions; and (3) the development of other investigations based on statistical simulations.

1. Introduction

People face high-uncertainty decisions, where relationships between variables are blurred and cannot be formalized. Extraordinary complexity makes decisions unique, and the only known performances are those in the course of action, which prevents the evaluation of the decision-making process [1]. To address this issue in an epistemological framework in which there is an optimal decision, this work proposes a probabilistic modeling of the behavior of the decision-maker to allow evaluation of the way decisions are made. A fully accepted hypothesis in decision theory is that individual behavior is rationally intentional but bounded [2]. This means that cognitive boundaries of the brain lead individuals to make mistakes in judgment [3]. Aware of these limitations, scientists design systems that aim to minimize the consequences of mistakes for organizations [4].
Although psychology is deeply concerned with the mechanisms that govern human cognition, there is a certain gap with respect to studies focused on decision-making [5,6,7]. It seems obvious that a theoretical analysis of decision-making should include a solid basis that can link individual human behavior with the decision-making mechanisms. This basis must meet three criteria: (1) it must include the way of thinking; (2) it must include the possibility that individuals may be wrong, i.e., make errors; and (3) it must be efficient, i.e., specific aspects of human behavior that are not necessary to understand the decision-making process should not be modeled. Many authors have shown that the traditional rationality assumption does not produce satisfactory scientific predictability [8,9,10,11].
Different lines of research [12,13,14,15,16] have justified empirically the use of probabilistic functional forms of ad hoc choice, similar to the one presented here. The difference in the research reported here lies in how the “intentional bounded rationality” of individuals is interpreted. In our approach, the errors depend on personal aspects and difficulties, i.e., they are not purely random [17]. This allows innovative theoretical relationships to be established between the individual decision-maker’s characteristics and the performance of the decision.
Before choosing between discrete alternatives, the decision-makers have a starting point, prior knowledge/beliefs about the expected performance of alternatives, and the possibility of processing signals (information) to improve their prediction [18]. However, processing signals until certainty is achieved about the performance of each alternative is costly, due to inherent human limitations, which means that in practice some uncertainty persists about the alternatives’ performances when a decision-maker chooses between them. This internal uncertainty implies that the alternative ultimately chosen by a decision-maker may not be the one that gives the highest performance, referred to above as an error.
Individuals base their choices on logical deductions that process part of the available information and/or on previous beliefs [19]. Since this process is internal and invisible, the characterization of a decision-maker can be made using his level of error relativized with respect to an exogenous level of difficulty [20]. The probability with which a decision-maker chooses an alternative has been explored by modeling an internal process of maximization of the expected performance with constraints regarding the cost of processing the information. The result is that decision-makers improve their decision through knowledge when they expect the performance of reducing their error to be greater than the cost involved. The main difference of this approach from the classical rational choice model is that the latter considers the level of information as an exogenous variable, leading to a unique relationship between information and the response of the decision-maker: knowing the level of information in the rational system, the response is unique, and any other result is classified as non-rational. The research reported here aims to model the internal cognitive process of the decision-maker; it shows that the representation can be obtained with a logistic probability model, where the choice depends on the signals that the individual decides to process at all times. Specifically, the main research contributions of this paper are:
  • A general functional form, which we call the function of intentional bounded rationality, is proposed and Bayesian-justified to link it with the procedures of the human mind.
  • The decision depends exogenously on the difference in the performance of the alternatives, as a representation of the complexity of the decision.
  • The particular reliability of a decision-maker depends on two idiosyncratic parameters that represent their beliefs and expertise.
  • We describe a framework where the information is complete, and the uncertainty in the choice is the product of the limited capacity for individual judgment and the consequence of the internal cost of processing the information.
In summary, the objective of this work is to establish a general functional form that represents intentional bounded rationality, where the behavior of a decision-maker is characterized by the probability of success or failure in each decision, rather than to predict the behavior of a specific decision-maker. Thus, a formal model relating the parameters of intentional bounded rationality to the final result of the decisions is hypothesized. Since the decision-making process is rarely observed directly by researchers, we propose to study it theoretically. The correct modeling of the individual human decision, based on the level of expertise and on thinking driven by beliefs or logical deductions, allows a laboratory to be built to: (1) evaluate the performance of different group decision mechanisms: authority, majorities, consensus, hierarchy, polyarchy, disagreement, and others; (2) evaluate decision support techniques; and (3) evaluate interaction in agent-based models. In short, this paper focuses on determining a general functional form that represents well the cognitive aspects of human decisions.

2. Intentional Bounded Rationality

The basic elements of bounded rationality have been confirmed by empirical studies and laboratory cognitive experiments. Unlike the completely rational choice model, Simon’s model is consistent with the reality of human cognitive abilities. The approach of Newell, Shaw, and Simon [21] reflects the elements that affect human decisions psychologically. Their findings on how people reason are part of modern cognitive psychology. The scientific principles of behaviorism form a clearly inductive line, with strengths in observation and quantification but weaknesses in theory. Bounded rationality rests on four basic principles [22]:
  • Principle of expected rationality. This principle refers to the processing of information as an extensive system that includes human limitations in the ability to compute, and other aspects such as attention, emotion, habit, and memory. Decision-makers are oriented toward objectives, but often do not achieve them because their cognitive architectures fail to unravel the complexity of the environments they face [2]: the means are not appropriate to achieve the objectives and the result is that decision-makers are wrong. In short, it takes a great deal of effort for decision-makers to deal with complex problems, and sometimes their behavior is guided by emotional shortcuts (beliefs) that avoid having to process all the information. Simon referred to these elements as irrational and non-rational elements that limit the area of rationality.
  • Principle of adaptation. The processing of information (processed signals) allows inferences to be formed from data and prevents simple heuristics and stereotypical inferences (beliefs) from taking command of the decision. With enough time, human thought adapts to problems, i.e., human thought is adaptive and basically rational. In other words, there is a learning process that over time approaches the optimal solution. From this principle arises the inference that, in general, the longer the time available to make a decision, the more likely it is that the optimal decision will be made and the human cognitive limitations fade [23].
  • Principle of uncertainty. Studies of human choice in the real world or in laboratory situations repeatedly show that people have great difficulties in working with probabilities, assessing risk, and making inferences when there is uncertainty [24]. If understanding one of the causal factors involved in a problem is confusing or ambiguous, then uncertainty impacts the entire thought process by reducing expectations of improving processed signals and putting decision-makers, at least in part, in the hands of their beliefs.
  • Principle of compensation. There is a trade-off between improving the decision, due to processing a greater number of signals, and the cost involved. The first behavioral tool for understanding compensation was Simon’s notion of satisficing. His idea is that a decision-maker chooses alternatives that are “good enough” given cognitive limitations. Lupia, McCubbins, and Popkin [25] claim that limited rationality is consistent with maximizing behavior, since the term satisfice relates to maximization subject to information costs. Intentional rationality itself entails a purported maximization in a world of uncertainty resulting from human limitations. Satisficing describes the cognitive difficulties that decision-makers have with compensation, and therefore describes an expectation of compensation. As a result, compensation targets are very difficult to compute. The answer, Simon argued, was for people to set aspiration levels for the goals they wish to achieve. If a choice is good enough for all objectives, then it is chosen.
From the above four principles, it can be said that decision-makers must decide how much information they are willing to process (signals to be processed) to improve the outcome of beliefs or intuitions, knowing that this action has an economic cost in terms of effort and time. In this process, both additional performances and additional costs present internal uncertainty, which causes decision-makers to naturally assume a certain level of error in their decisions. The cost of compensation (latent cost of processing a higher level of signals) depends on the ability of the decision-maker, which in this study is called expertise. Characterizing bounded rationality means that the relationship between the amounts relating to success and error must be established, given the expertise level of the decision-maker.
Another element to keep in mind is that problems are not equally complex. The most difficult choices are those in which the performances to be compared have the least difference [17,20,26] since their comparison requires greater precision and certainly more information. The more information that is required for a choice, the closer it is to hypothetical behavioral rationality in the sense that there is a lower error ratio. The quality of decisions is measured based on the likelihood of judging correctly or wrongly. This approach is in line with the non-completeness of the decision process, i.e., a decision-maker does not always choose one option against another [27], but he does it with a certain probability, making mistakes of perception that can lead him to make further mistakes.

3. Modeling of Intentional Bounded Rationality

The behavior of the decision-maker when the performances of the alternatives are latent is modeled from a Bayesian probability perspective. Within this framework, when quantifying an uncertain phenomenon, the decision-maker begins with a model that directly provides a calculus of the probability of the phenomenon, which a priori reflects individual beliefs and synthesizes the emotional and intuitive part of the decision process. Subsequently, the decision-maker decides whether to process more information through repeated signals (logical part) and evaluates how the model behaves when compared to the actual occurrences of the phenomenon. As the number of signals processed increases, the measure of the suitability of the model improves. This approach is similar to the cross-entropy procedure, one of the best numerical optimization methods for latent variables. Goodfellow, Bengio, and Courville [28] argue that cross-entropy provides very large gradient values, which are especially valuable for gradient descent, currently the most successful optimization method available. Cross-entropy far outperforms other probabilistic forecast measures simply because it places much more emphasis on rare events.
Starting from his beliefs, the decision-maker can improve his informational position while weighing its cost. To increase performance, better anticipate the natural state, and choose the alternative most likely to be the best, a higher level of signal processing is required of the decision-maker, which in turn generates a cost. An observer does not know which signals a decision-maker chooses to process, which justifies possible changes in a decision-maker’s choice within his intentional but bounded rationality. The intentional bounded rationality will be characterized by a trade-off between the cost of improving information and the expectation of profit; thus, given the cost of avoiding mistakes, an erroneous decision can be consciously accepted as an acceptable decision. This approach does not establish the system by which a decision-maker learns, but only characterizes the level of information he decides to process and therefore his probability of success.
Let us assume that a decision-maker with intentional bounded rationality must choose from a finite number $n$ of mutually exclusive alternatives, which form a comprehensive set $A$ since it includes all possible alternatives at one time. Let $x$ be a continuous random variable, with probability distribution $f(x)$, representing the set of all natural states. The latent performance of each alternative regarding the natural state is denoted $a_1(x), \ldots, a_n(x)$. Thus, $a_i(X)$ is the latent performance of alternative $i$ given the state of nature $X$. Therefore, if $y$ and $\hat{y}$ are two continuous random variables, where $y$ is the result of processing a partial set of the signals in $\hat{y}$, and $\hat{y}$ is the result of processing the signals that predict each state of nature $x$ accurately, their probability distributions will be $g(y)$ and $f(\hat{y})$, respectively, where $f(\hat{y}) = f(x)$.
At the starting point of the decision process, the decision-maker has a set of a priori information (beliefs), and therefore it is possible to define a set of signals obtained by the decision-maker, $y_0$, along with an “a priori” probability distribution of those signals, $g(y_0)$, that gives each alternative $A_i$ a probability $p_{0i}$ of being chosen. In other words, it is possible to discretize the probability distribution of the processed signals, $g(y_0)$, and the a priori probability of choosing alternative $A_i$ will be $p_{0i}$.
For each set of signals processed, the decision-maker tries to maximize their performance. Therefore, the processed signals are discretized and grouped into convex subsets, with subset $y_i$ containing those signals whose consequence is that the decision-maker chooses alternative $A_i$ [29,30]. Since $y$ can give perfect information on $x$ if the information $\hat{y}$ is processed in its entirety, the subset $\hat{y}_i$ predicts all natural states $x_i$ for which $a_i(x_i)$ is maximal, i.e., $a_i(x_i) = \max_j a_j(x_i)$. It can be said that the subsets of signals perfectly anticipate the subsets of the states of nature in which each alternative is a maximum. In these cases, the conditional probability of the subsets is equal to one, $p(x_i|\hat{y}_i) = 1$, which means that $p(x_i) = p(\hat{y}_i)$. If all alternatives are eligible, then there will be as many subsets of signals as there are alternatives ($n$).
As mentioned above, the individual decides how much information to process, attending to the cost (time, effort, money) of this action. When the decision-maker chooses not to process all the information, he will have a set of partial signals (not fully processed), and in this case $y$ will not provide complete information about $x$. The subsets $x_i$ and $y_i$ will not match, because the information is not perfect, and therefore $p(x_i|y_i) \neq 1$. Thus, based on $y_i$, the decision-maker can make mistakes in predicting the subset $x_i$ of the natural state: $A_i$ can be chosen when its performance is not maximal, or an alternative other than the maximum-performance alternative $A_i$ can be chosen.
Shannon’s entropy is used as a measure of the reduction in uncertainty [31]. The alternative $A_i$ whose latent performance is $a_i(x)$ has a probability of being chosen $p(A_i) = p(y_i)$ (once the information $y$ is processed, the probability of choosing alternative $A_i$ matches the probability of the subset of signals $y_i$), while the a priori probability is $p_{0i}$ and the initial entropy is $E[-\ln p_{0j}] = -\sum_{j=1}^{n} p_{0j} \ln p_{0j}$ (note that the a priori probability entropy increases as the number of alternatives grows).
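The parenthetical note above can be illustrated numerically: with uniform priors over $n$ alternatives the prior entropy equals $\ln n$ and so grows with $n$. A minimal sketch (the helper name `prior_entropy` is ours, not the paper's):

```python
import math

def prior_entropy(p0):
    """Shannon entropy H0 = -sum_j p0_j * ln(p0_j) of the prior choice probabilities."""
    return -sum(p * math.log(p) for p in p0 if p > 0)

# With uniform priors over n alternatives, H0 = ln(n): entropy grows with n.
for n in (2, 4, 8):
    print(n, prior_entropy([1.0 / n] * n))  # ln 2, ln 4, ln 8
```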
The decision-maker may decide to process an amount of information $y_i$, which results in the entropy variation:

$$\Delta H(p_{0i}, p(y_i)) = \sum_{j=1}^{n} p_{0j} \ln p_{0j} + \int \Big( \sum_{j=1}^{n} p(y_j|x) \ln p(y_j|x) \Big) f(x)\, dx$$

where $p(y_i) = \int p(y_i|x) f(x)\, dx$.
The individual accepts a cost (effort) to improve the level of information (deliberation) and modify the level of entropy. It is to be expected that the longer the deliberation, the more likely it is that a better alternative will be chosen, due to the reduction of entropy. The individual’s problem becomes that of deciding the amount of information $p(y_i|x)$ to be processed to optimize his decision. Thus, the decision-maker chooses the probability $p(y_i|x)$ that maximizes performance considering the cost of the entropy variation

$$\max_{p(y_i|x)} \int \sum_{j=1}^{n} a_j(x)\, p(y_j|x) f(x)\, dx - L \left[ \sum_{j=1}^{n} p_{0j} \ln p_{0j} + \int \Big( \sum_{j=1}^{n} p(y_j|x) \ln p(y_j|x) \Big) f(x)\, dx \right]$$

subject to the imposition to choose one of the alternatives

$$\sum_{j=1}^{n} p(y_j|x) = 1.$$
The Lagrangian with respect to $p(y_i|x)$ is

$$\varphi = \int \sum_{j=1}^{n} a_j(x)\, p(y_j|x) f(x)\, dx - L \left[ \sum_{j=1}^{n} p_{0j} \ln p_{0j} + \int \Big( \sum_{j=1}^{n} p(y_j|x) \ln p(y_j|x) \Big) f(x)\, dx \right] - \int \delta(x) \Big( \sum_{j=1}^{n} p(y_j|x) - 1 \Big) f(x)\, dx$$

where $L$ is the unit cost of entropy variation and $\delta(x)$ is the Lagrange multiplier. Thus, for $i = 1, \ldots, n$:

$$\int a_i(x) f(x)\, dx - \int \delta(x) f(x)\, dx + L \big( \ln p_{0i} + 1 - \ln p(y_i|x) - 1 \big) = 0$$
Denoting $a_i(\bar{x}) = \int a_i(x) f(x)\, dx$ and $\delta(\bar{x}) = \int \delta(x) f(x)\, dx$, we obtain

$$p(y_i|x) = p_{0i}\, e^{\frac{a_i(\bar{x}) - \delta(\bar{x})}{L}} \quad (1)$$
Since $\sum_{j=1}^{n} p(y_j|x) = 1$, it follows that $\sum_{j=1}^{n} p_{0j}\, e^{a_j(\bar{x})/L} = e^{\delta(\bar{x})/L}$, which when plugged into (1) yields

$$p(y_i|x) = \frac{p_{0i}\, e^{a_i(\bar{x})/L}}{\sum_{j=1}^{n} p_{0j}\, e^{a_j(\bar{x})/L}} = \frac{p_{0i}\, e^{\beta a_i(\bar{x})}}{\sum_{j=1}^{n} p_{0j}\, e^{\beta a_j(\bar{x})}} \quad (2)$$
The parameter $L$ represents the cost of processing more information; thus, $\beta = 1/L$ represents the decision-maker’s ability to improve their chance of success logically, which supports its being referred to herein as the expertise of the decision-maker. Hence, beliefs ($p_{0i}$) provide an improvement in performance in terms of cost and speed, at the expense of accuracy. The obtained functional form (2) allows a laboratory to be built to obtain the probability of choosing each alternative by knowing the latent performances of the alternatives, the alternatives’ a priori probabilities, and the expertise of the decision-maker.
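Such a laboratory can be sketched in a few lines. The following is a minimal illustrative implementation of the functional form (2); the function name and numeric values are our own choices, not taken from the paper:

```python
import math

def choice_probabilities(perf, priors, beta):
    """Functional form (2): p_i proportional to p0_i * exp(beta * a_i),
    where a_i is the latent mean performance of alternative i, p0_i the
    prior belief, and beta the expertise (inverse of the cost L).
    The max-shift keeps exp() from overflowing; it cancels in the normalization."""
    m = max(perf)
    weights = [p0 * math.exp(beta * (a - m)) for a, p0 in zip(perf, priors)]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative values: three alternatives with latent performances
# 1.0 < 1.2 < 1.5, uniform beliefs, moderate expertise.
probs = choice_probabilities([1.0, 1.2, 1.5], [1/3, 1/3, 1/3], beta=4.0)
```

The best alternative always receives the highest probability, yet errors remain possible; increasing `beta` concentrates the distribution on the best alternative, mirroring the role of expertise in the text.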

3.1. Functional Form Interpretation

Kahneman [32] postulates two main processes that characterize thought: the rapid decision-making system that stems from intuition based on emotions, vivid images, associative memory, and consolidated knowledge as beliefs; and the slow logical system that intervenes when beliefs are insufficient. Logical intervention occurs when an event contradicting the world model based on beliefs is detected. While the rapid system employs a selection of actions on a stimulus within one or two cognitive cycles, the slow system requires more cognitive cycles in its deliberative decision-making [33,34].
In the above, this has been modeled as an individual internal process of maximum entropy, i.e., a procedure that generates probability distributions in a systematic and objective way. This procedure becomes completely natural when probability is considered as an extension of logic that allows reasoning in incomplete-information situations [35]. Functional form Equation (2), provided by this procedure, represents the probability of choosing each alternative, and it clearly distinguishes the two thought processes indicated by Kahneman: (i) the a priori probabilities ($p_{0i}$) indicate the attraction of the decision-maker to each alternative based on his intuition and beliefs; and (ii) the exponential of the latent performance of each alternative naturally attracts the decision-maker to the alternative with greater value. However, this second part also depends on the cost ($L$) of increasing certainty regarding the performance. Thus, functional form Equation (2) states that both “beliefs” and “cost of improvement” are the specific characteristics of a decision-maker that determine their level of success/error. Moreover, it can be deduced that logical information processing follows a logistic distribution, whose mean is the relative performance of the alternatives to be compared, and whose dispersion is the inverse of the decision-maker’s expertise ($\beta$).

3.1.1. Role of Beliefs

Decision-makers assume their beliefs to be true relationships. On the one hand, belief-based mechanisms establish simple and fast decision rules that the human mind uses to choose from the available alternatives at any given time [36]. On the other hand, the quality of the decision (coincidence between belief and reality) determines the level of success/error.
The lower the expertise of the decision-maker, the higher the cost of processing information and the more functional form Equation (2) will depend on belief-based prior probabilities. In the limit case $\beta \to 0$ ($L \to \infty$), it is $e^{\beta a_i(\bar{x})} \to 1$, which results in the limit value of functional form Equation (2) shown below:

$$p(y_i|x) = \frac{p_{0i}}{\sum_{j=1}^{n} p_{0j}} = p_{0i} \quad (3)$$
Thus, if it is assumed that, at the starting point of the decision process, the decision-maker has no beliefs (that is, he knows nothing of the problem a priori, not even the distribution of latent performances of the alternatives), then it is possible to define a set of signals obtained by the decision-maker ($y_0$) along with an “a priori” probability distribution of those signals ($g(y_0)$) that gives all alternatives the same probability of being chosen ($p_{0i} = 1/n$ for all $i$). However, the opposite assumption at the starting point of the decision process means that the decision-maker’s beliefs allow him to be totally sure about which alternative is the best, i.e., his belief-based prior probabilities will verify $p_{0i} = 1$ (for one $i$) while $p_{0j} = 0$ (for $j \neq i$).
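Both limit cases can be checked numerically. The sketch below (the helper `p_choice` and the numbers are illustrative assumptions, not the paper's) evaluates the functional form for near-zero expertise and for dogmatic beliefs:

```python
import math

def p_choice(perf, priors, beta):
    # Functional form (2): p_i = p0_i * exp(beta * a_i) / sum_j p0_j * exp(beta * a_j)
    weights = [p0 * math.exp(beta * a) for a, p0 in zip(perf, priors)]
    total = sum(weights)
    return [w / total for w in weights]

perf = [1.0, 2.0, 3.0]      # illustrative latent performances
priors = [0.7, 0.2, 0.1]    # beliefs favouring the worst alternative

# Limit beta -> 0 (cost L -> infinity): the choice reproduces the priors.
low_expertise = p_choice(perf, priors, beta=1e-9)

# Dogmatic beliefs p0_i = 1 for one alternative: no performance gap moves the choice.
dogmatic = p_choice(perf, [1.0, 0.0, 0.0], beta=5.0)
```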

3.1.2. Role of Logic

Human cognition consists of a continuous and overlapping iteration of cognitive cycles, each of which is a cognitive atom, from which higher-order processes are built [34]. The assumption that the decision-maker has no beliefs implies that the choice of each alternative will depend exclusively on the logical part, since in this case $p_{0i} = 1/n$ for all $i$, and functional form Equation (2) becomes:

$$p(y_i|x) = \frac{\frac{1}{n}\, e^{\beta a_i(\bar{x})}}{\frac{1}{n} \sum_{j=1}^{n} e^{\beta a_j(\bar{x})}} = \frac{e^{\beta a_i(\bar{x})}}{\sum_{j=1}^{n} e^{\beta a_j(\bar{x})}} \quad (4)$$
The two main elements in Equation (4) are: (i) the difficulty of the problem, as represented by the latent performance of each alternative compared to the others; and (ii) the cost of processing the information, or expertise. The closer the performances of the alternatives are, the more difficult it is to choose correctly, and at the same time, this case also disincentivizes deliberation. In any case, Equation (4) shows that the decision-maker is not entirely in the dark: the decision-maker is attracted to the alternative with the best performance, so the probability of choosing the best alternative is always greater than that of choosing any other alternative. Thus, Equation (4) allows mechanisms of confrontation to be established, e.g., the exchange of ideas in a group decision, that improve individual performance in approaching the “best” option.
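Equation (4) is a softmax over latent performances. A brief sketch (illustrative values and a function name of our own) shows how well-separated performances let the best option dominate, while near-equal performances push the probabilities toward uniformity, making errors likely:

```python
import math

def softmax_choice(perf, beta):
    """Equation (4): with no prior beliefs (uniform p0_i = 1/n), the choice
    probability reduces to a softmax over the latent performances."""
    m = max(perf)  # max-shift for numerical stability; cancels in the ratio
    weights = [math.exp(beta * (a - m)) for a in perf]
    total = sum(weights)
    return [w / total for w in weights]

# Easy problem: well-separated performances -> the best option dominates.
easy = softmax_choice([1.0, 2.0], beta=3.0)
# Hard problem: nearly equal performances -> probabilities near 1/2, errors likely.
hard = softmax_choice([1.00, 1.05], beta=3.0)
```

In both cases the best alternative keeps the highest probability, consistent with the claim that the decision-maker is never entirely in the dark.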

3.2. Some Empirical Evidence Regarding a Priori Beliefs and Deliberation

Tetlock [37] presents an interesting study on how establishing accountability mechanisms (pressure to justify one’s own a priori beliefs to others) leads individuals to process more information, which in turn reduces the “undue” influence of a priori beliefs. Another outstanding study on the balance between a priori beliefs and deliberation is that of Tetlock and Gardner [38], based on an impressive corpus of empirical evidence collected over years of research (more than 25,000 forecasters who made more than one million predictions on diverse topics), in which the authors showed that the people with better decision-making capacity are those whose “way of thinking” allows them to keep an open mind (free from a priori beliefs), paying special attention to the evidence and related information (deliberation). The work emphasizes that this way of facing decisions provides better results than those of analysts with privileged information, with degrees of precision 60% higher than in the average population.

4. Ad Hoc Functional Form

Empirical data repeatedly suggest that decision-makers do not necessarily make the same choice when a decision is repeated. Different disciplines have studied how the decision-maker processes information, trying to anticipate their choice [14,15,16,27,39], and different ad hoc algebraic forms have been sought to fit the behaviors recorded in the data, leaving unresolved the relationship between the algebraic structure and the management of the stimuli that occur after the choice.
The main lines of research that have used a functional form similar to that obtained in Equation (2) are summarized in Table 1, with further details as follows:
(1) Luce was a pioneer in considering human decisions from a probabilistic point of view [27]. He proposed an ad hoc algebraic structure whose results were in close agreement with experimental practice and with the way most people structure explicit choice situations [13], such as when deciding on a mode of travel (for example, car, bus, or plane), or a patient’s choice between alternative therapies (for example, doing nothing, surgery, radiation, or chemotherapy).
(2) Luce’s probabilistic approach allowed the development of an extensive learning literature that focused on studying which elements increased the probability of choosing the best option. Erev and Roth [40] used a probabilistic functional form of choice, based on Luce’s proposal, to calibrate different learning algorithms with real experiments. The argument for using this functional form is that it satisfies the law of effect and the power law of practice, which ensure that pure strategies that have been played successfully tend to be played more frequently over time than those that have had less success. Camerer and Hua Ho [12] calibrated the experience-weighted attraction learning model (EWA) using experimental data. They interpreted the functional form of probability as a way of evaluating the attractiveness of each strategy.
(3) The psychological approach ([14,15,16] among others) was based on choosing an ad hoc functional form that predicts human behavior well, focusing on the aspects that characterize the choices in terms of psychophysical factors (decreased sensitivity to probabilities and results), psychological factors (risk aversion and loss aversion), and simple principles of information processing.
(4) Finally, the discrete choice models of McFadden [41] and Train [42] showed that any random utility model can be approximated to any level of precision by a mixed logit model. This functional form approach is the consequence of trying to obtain the parameters associated with the causes that make individuals choose one alternative over another. These models try to evaluate the utility that each individual assigns to each alternative based on his choices; in short, they try to recover the performances subjectively assigned to each alternative. These models do not recognize errors in the choice, but simply different tastes: apparent errors in individuals’ behavior are attributed to the observer’s inability to perceive all the elements that affect the choice of an alternative. Nevertheless, these models have generated a large number of empirical studies that show good results using this functional form.
Our research presented herein does not consider how the decision-maker treats the signals to create an ordering of alternatives (as the theory of decision behavior does), but rather it tries to determine the reliability of the decision-maker. In our approach, the performances of the alternatives are latent, and the decision-maker tries to determine them (intentionality) but, due to their limitations (bounded rationality), cannot decipher them in their entirety. For us, this process comes naturally to the decision-maker, who tries to choose the best alternative to survive, focusing their limited attention on one signal or another. Our model simply establishes the probability distribution of the decision-maker when choosing each alternative, based on their beliefs and expertise, evaluating the result (successful or unsuccessful) and not the procedure followed to obtain it. For us, logical processes and beliefs are built by the decision-maker on values that are not well defined for most objects, questions, etc., but on which a probabilistic regularity can be built. This conceptual framework allows the evaluation of decision support techniques and agent-based systems, evidencing the contribution of each element (communication structures, decision rules, consistency requirements, etc.) to the quality of the final decision.

5. Function Characterization of Intentional Bounded Rationality

The expertise ($\beta$) is a measure of the decision-maker's information processing skills. Its value will depend on the overall ability to deal with a specific problem, whose difficulty is determined by the difference between the alternatives' latent performances. Without loss of generality, it can be assumed that the latent performance values of the alternatives are normalized, $v_i = a_i(x)/\sum_{j=1}^{n} a_j(x)$, i.e., we use a set of performance values verifying $\sum_{i=1}^{n} v_i = 1$ and $v_i \ge 0\ \forall i$.
From a logical point of view, the probability of choosing any of the given alternatives will depend on the set of performance values and the expertise of the decision-maker. Thus, for a given expertise value $\beta\ (\ge 0)$, the greater the performance of alternative $A_i$, the greater the probability of its being chosen by the decision-maker, $p(y_i|X) = p_i$. In other words, there is an increasing function $h_\beta : [0,\infty) \to [0,\infty)$ such that

$$p_{\beta i} = \frac{h_\beta(v_i)}{\sum_{j=1}^{n} h_\beta(v_j)}$$
As per Yager [43], the expertise value $\beta\ (\ge 0)$ can be considered as an indication of the power or importance that decision support techniques must give to the individual in the group decision: the higher the expertise value, the more important the decision-maker. In a fuzzy context, the methodology for implementing importance values associated with decision-makers usually involves a t-norm operator, in particular the product t-norm. In this approach, $h_\beta$ is assumed to be of the form $h_\beta(v_i) = \beta\, h(v_i)$. The alternative power implementation of the importance values approach considers $h_\beta(v_i) = h(v_i)^\beta$. In both cases, $h : [0,\infty) \to [0,\infty)$ is an increasing function.
In our framework, the power implementation of importance values is superior to the multiplicative implementation, since in the latter case the reliability value would not play any role in determining the probabilities above. Indeed, the multiplicative approach to implementing importance values yields probabilities that do not depend on the expertise value of the individual making the decision, which is not expected:

$$p_{\beta i} = \frac{h_\beta(v_i)}{\sum_{j=1}^{n} h_\beta(v_j)} = \frac{\beta\, h(v_i)}{\sum_{j=1}^{n} \beta\, h(v_j)} = \frac{h(v_i)}{\sum_{j=1}^{n} h(v_j)}$$
It is therefore assumed that there exists an increasing function $h : [0,\infty) \to [0,\infty)$ such that $h_\beta(v_i) = h(v_i)^\beta$.
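As a quick numeric illustration of why the power implementation is retained (the score values below are hypothetical, chosen only to make the cancellation visible), the multiplicative weighting leaves the choice probabilities unchanged for any $\beta$, while the power weighting does not:

```python
import math

def normalize(weights):
    """Turn nonnegative weighted scores into choice probabilities."""
    total = sum(weights)
    return [w / total for w in weights]

h_vals = [1.2, 1.5, 2.0]  # hypothetical values of h(v_i) for three alternatives

for beta in (0.5, 2.0):
    p_mult = normalize([beta * h for h in h_vals])  # multiplicative: beta cancels
    p_pow = normalize([h ** beta for h in h_vals])  # power: beta shapes the distribution
    print(f"beta={beta}: multiplicative={p_mult}, power={p_pow}")
```

For both values of $\beta$ the multiplicative probabilities coincide, confirming that $\beta$ plays no role under that implementation.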
Whether $\beta \ge 0$ is considered as an importance value or as a measure of the decision processing skills, it is expected that the probability of choosing an alternative will be greater than the probability of choosing another alternative with a lower performance value. In other words, the following rule is assumed:

$$v_i > v_j \;\Rightarrow\; p_{\beta i} \ge p_{\beta j}$$
First, we will show that the range of function $h$ does not include $[0,1)$. If this were the case, i.e., if $h(v_i) \in [0,1)$, then $h(v_i)^\beta \xrightarrow{\beta \to 0} 1$ and $h(v_i)^\beta \xrightarrow{\beta \to \infty} 0$. In the first case, as the reliability value decreases ($\beta \to 0$), the probabilities of alternatives that do not present the highest performance value ($h(v_i) < 1$) will increase, until they all become equal in the case where $\beta = 0$. This limit case is expected (lack of decision processing skills translates into treating all alternatives equally irrespective of their performance). However, the second limit case is counterintuitive, since unlimited decision processing skills mean that the decision-maker would be able to differentiate between the alternatives no matter how small their difference in performance, i.e., the decision-maker would be able to find the best performance value for any given set of different performance values. Based on this, it is therefore assumed that the increasing function $h$ has range $[1,\infty)$, i.e., $h : [0,\infty) \to [1,\infty)$. Secondly, without loss of generality, the following boundary condition can be assumed: $h(0) = 1$. Indeed, for $h(0) > 1$, the function $h(x)/h(0)$ will verify the mentioned boundary condition and will lead to the same probability values as those obtained with function $h(x)$ via Equation (5).
Thus, there exists an increasing function $h : [0,\infty) \to [1,\infty)$, verifying $h(0) = 1$, such that $h_\beta(v_i) = h(v_i)^\beta$. An example of such a function is $h(x) = e^x$ or, in general, $h(x) = e^{q(x)}$, where $q : [0,\infty) \to [0,\infty)$ is an increasing function verifying $q(0) = 0$. The selection of a function $q$ different from the identity function does not change the ordering of the probability values associated with a set of performance values verifying $\sum_{i=1}^{n} v_i = 1$ and $v_i \ge 0\ \forall i$. Indeed, since there is a permutation $\sigma : \{1, 2, \dots, n\} \to \{1, 2, \dots, n\}$ such that $v_{\sigma(i)} \ge v_{\sigma(j)}$ when $i < j$, it can be assumed that $v_1 \ge v_2 \ge \cdots \ge v_n$. The monotonicity of $q$ implies that $q(v_1) \ge q(v_2) \ge \cdots \ge q(v_n)$, and therefore $e^{\beta q(v_1)} \ge e^{\beta q(v_2)} \ge \cdots \ge e^{\beta q(v_n)}$. Therefore, $p_{\beta 1} \ge p_{\beta 2} \ge \cdots \ge p_{\beta n}$. Thus, without loss of generality, $h_\beta(v_i) = e^{\beta v_i}$ and
$$p_{\beta i} = \frac{e^{\beta v_i}}{\sum_{j=1}^{n} e^{\beta v_j}} = \frac{1}{\sum_{j=1}^{n} e^{\beta(v_j - v_i)}} = \frac{1}{1 + \sum_{j \ne i} e^{\beta(v_j - v_i)}}$$
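The derived choice rule can be sketched directly in code; the normalized performance values below are illustrative, not data from the paper:

```python
import math

def choice_probabilities(v, beta):
    """p_i = exp(beta * v_i) / sum_j exp(beta * v_j): the derived choice rule."""
    weights = [math.exp(beta * vi) for vi in v]
    total = sum(weights)
    return [w / total for w in weights]

# Three alternatives with normalized latent performances (summing to 1).
v = [0.5, 0.3, 0.2]
print(choice_probabilities(v, beta=5.0))  # higher performance -> higher probability
```

The best alternative receives the highest probability, but lower-performing alternatives keep a non-zero probability of being chosen: the decision-maker is fallible, not random.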
The judgment of the decision-maker is distributed as a logistic function whose mean matches the true difference in performance $(v_i - v_j)$ and where $\beta$ is the inverse of the standard deviation; this coincides with the expression in (4). The analysis below shows the goodness of the derived expression (7).
Following the modeling of the probability representing a decision-maker without beliefs, it is noticed that the minimum value of the reliability parameter, $\beta = 0$, leads to $p_{0i} = 1/n\ \forall i = 1, 2, \dots, n$. In this case, the probability of choosing a particular alternative does not depend on its performance; in other words, the decisions made are purely random with no account taken of the alternatives' performance. This is in agreement with the interpretation of the expertise parameter as a measure of the decision processing skills, i.e., the specific knowledge that the decision-maker has about the alternatives compared. This extreme case of lack of knowledge about how to process a comparison of the alternatives' performances means that for such a decision-maker, all alternatives are treated equally.
Suppose now that $\beta > 0$. First, the case where all performance values of the alternatives are positive and different,

$$v_1 > v_2 > \cdots > v_n\ (> 0),$$

is analyzed. If $i < k$, since $v_i > v_k$, then

$$v_j - v_i < v_j - v_k\ \forall j \;\Rightarrow\; \beta(v_j - v_i) < \beta(v_j - v_k)\ \forall j \;\Rightarrow\; \sum_{j=1}^{n} e^{\beta(v_j - v_i)} < \sum_{j=1}^{n} e^{\beta(v_j - v_k)}$$
Thus, in this case, $p_{\beta 1} > p_{\beta 2} > \cdots > p_{\beta n}$. Secondly, suppose that in the above case some of the performance values are equal. Let us assume that only two consecutive performance values are equal in the above ordering, and denote their positions as $k, k+1$:

$$v_1 > v_2 > \cdots > v_k = v_{k+1} > v_{k+2} > \cdots > v_n\ (> 0).$$

Since $v_k = v_{k+1}$,

$$\sum_{j=1}^{n} e^{\beta(v_j - v_k)} = \sum_{j=1}^{n} e^{\beta(v_j - v_{k+1})} \;\Rightarrow\; p_{\beta k} = p_{\beta\, k+1},$$

and

$$p_{\beta 1} > p_{\beta 2} > \cdots > p_{\beta k} = p_{\beta\, k+1} > p_{\beta\, k+2} > \cdots > p_{\beta n}.$$
Thirdly, the case where performance values are allowed to be zero is discussed. Suppose that in the above case some performance values are equal to zero, i.e., there exists $k > 1$ such that $v_k = 0$. Then for $k_1 \ge k$, $v_{k_1} = 0$ and

$$p_{\beta k_1} = \frac{e^{\beta v_{k_1}}}{\sum_{j=1}^{n} e^{\beta v_j}} = \frac{1}{\sum_{j=1}^{k-1} e^{\beta v_j} + (n - k + 1)} \quad \forall k_1 \ge k,$$

which implies

$$p_{\beta 1} > p_{\beta 2} > \cdots > p_{\beta k} = \cdots = p_{\beta n}.$$
Summarizing:
$$v_1 \ge v_2 \ge \cdots \ge v_n\ (\ge 0) \;\Leftrightarrow\; p_{\beta 1} \ge p_{\beta 2} \ge \cdots \ge p_{\beta n},$$
with corresponding strict inequalities in both statements of the above equivalence.
Finally, the limit case $\beta \to \infty$ is analyzed. First, it is assumed that the maximum of the set of performance values is unique, i.e., $v_1 > v_2$. It has been proved above that for any $\beta > 0$, $p_{\beta 1} > p_{\beta 2} \ge \cdots \ge p_{\beta n}$. Next, it is proved that $p_{\beta 1}$ tends to $1$ when $\beta \to \infty$. Since $\beta(v_j - v_1) < 0\ \forall j > 1$,

$$\beta(v_j - v_1) \xrightarrow{\beta \to \infty} -\infty\ \forall j > 1 \;\Rightarrow\; e^{\beta(v_j - v_1)} \xrightarrow{\beta \to \infty} 0\ \forall j > 1 \;\Rightarrow\; \sum_{j>1} e^{\beta(v_j - v_1)} \xrightarrow{\beta \to \infty} 0.$$

Therefore,

$$p_{\beta 1} = \frac{1}{1 + \sum_{j>1} e^{\beta(v_j - v_1)}} \xrightarrow{\beta \to \infty} 1.$$
Consequently, in this limit case the decision-maker will choose alternative $A_1$ with probability $1$, and the rest will have an associated probability of $0$. If there is more than one maximum performance value ($v_1 = v_2 = \cdots = v_k > v_{k+1}$), the limit case $\beta \to \infty$ will assign equal probability ($1/k$) to the corresponding alternatives ($A_1, A_2, \dots, A_k$) and $0$ to the rest of the alternatives.
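The limit cases can be checked numerically with the same choice rule (the performance values are illustrative only):

```python
import math

def choice_probabilities(v, beta):
    """p_i = exp(beta * v_i) / sum_j exp(beta * v_j)."""
    weights = [math.exp(beta * vi) for vi in v]
    total = sum(weights)
    return [w / total for w in weights]

v = [0.5, 0.3, 0.2]
print(choice_probabilities(v, 0.0))    # beta = 0: uniform, 1/n for each alternative
print(choice_probabilities(v, 100.0))  # large beta: mass concentrates on v_1

# Tied maxima share the limit probability equally.
print(choice_probabilities([0.4, 0.4, 0.2], 100.0))  # close to 1/2, 1/2, 0
```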

6. Binary Decision Analysis

The maximum simplification of complex decision problems focuses on the choice between two alternatives. In fact, the choice between multiple alternatives can be seen as a problem of sequenced binary choices. In this sequential approach, the maximum simplification is represented by the problem of choosing between an alternative and the status quo [44,45]. Another advantage of binary analysis is that it allows relative rather than absolute analyses, making it possible to dispense with scale problems. The pairwise comparison is a well-established method of assisting decision-makers [46,47]. In this case, the decision-maker does not need to evaluate the specific values of the alternatives' performances $a_i(X) \ge 0\ (i = 1, \dots, n)$; it is sufficient to estimate the comparison of each alternative with respect to another by means of a judgment relationship $a_{ij} = a_i(X)/a_j(X)\ (i \ne j)$, which, as mentioned above, avoids scale problems. Now, the decision-maker compares the intensity of the performance of a pair of alternatives ($a_{ij}$) against its inverse ($a_{ji}$) to try to determine which is greater.
To derive the probability $p_{\beta ij}$ of choosing alternative $A_i$ over alternative $A_j$ by a decision-maker with expertise $\beta$, our previous intentional bounded rationality expression (7), applied to the case of a pairwise comparison of alternatives, becomes

$$p_{\beta ij} = \frac{e^{\beta \frac{a_i(X)}{a_j(X)}}}{e^{\beta \frac{a_i(X)}{a_j(X)}} + e^{\beta \frac{a_j(X)}{a_i(X)}}} = \frac{1}{e^{\beta\left(\frac{a_j(X)}{a_i(X)} - \frac{a_i(X)}{a_j(X)}\right)} + 1} = \frac{1}{e^{\beta(a_{ji} - a_{ij})} + 1}$$
Using (8), it is possible to obtain the probability that the judgment of the decision-maker shows that the intensity $a_{ij}$ is greater than $a_{ji}$ ($p_{\beta ij}$), and vice versa ($1 - p_{\beta ij}$). A higher relative performance of alternative $A_i$ ($a_i(X)$) increases the likelihood that this alternative will be preferred in binary comparisons. The probability of expressing a preference for the relative value $a_{ij}$ over the relative value $a_{ji}$ ($p_{\beta ij}$) depends on the relative difference in their valuations, $\frac{a_j(X)}{a_i(X)} - \frac{a_i(X)}{a_j(X)}$. A rational person without limits ($\beta = \infty$) would show a preference with probability $1$ when $a_i(X) > a_j(X)$. Therefore, when $a_i(X) > a_j(X)$, the probability of making the mistake of showing a preference for the relative value with the lower relative performance is $1 - p_{\beta ij} = p_{\beta ji}$. When the intensities of the two alternatives are the same, the probability of choosing each of them is $1/2$, and the decision-maker must opt for one relative value. The probability of making a mistake decreases when the difference between $a_i(X)$ and $a_j(X)$ increases, and is higher when the performances $a_i(X)$ and $a_j(X)$ are relatively close.
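A minimal sketch of the pairwise rule in Equation (8); the performance values and expertise levels below are illustrative assumptions:

```python
import math

def pairwise_prob(a_i, a_j, beta):
    """Probability of preferring A_i over A_j from the intensities a_ij = a_i/a_j."""
    a_ij = a_i / a_j
    a_ji = a_j / a_i
    return 1.0 / (1.0 + math.exp(beta * (a_ji - a_ij)))

print(pairwise_prob(3.0, 3.0, beta=2.0))  # equal intensities -> exactly 0.5
print(pairwise_prob(3.0, 2.0, beta=2.0))  # better alternative preferred, p > 0.5
print(pairwise_prob(3.0, 2.0, beta=0.2))  # lower expertise -> closer to 0.5
```

Note that the two error directions are complementary: the probability of preferring $A_i$ over $A_j$ and that of preferring $A_j$ over $A_i$ sum to one.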
In paired judgments, Equation (8) for intentional bounded rationality shows that the decision-maker is not entirely in darkness, since they are attracted to the higher-performance option: the probability of choosing the best option is always higher than $0.5$ if $\beta > 0$. This allows decision support techniques to be established that improve individual performance by approaching the “best” option.
Notice that it is possible to directly apply Equation (5) to the case of a set of two alternatives $(A_i, A_j)$ with associated performance values $(a_i(X), a_j(X))$. Implementing this strategy would imply that the probability of choosing alternative $A_i$ over alternative $A_j$ for a decision-maker with expertise $\beta$, $p_{\beta ij}$, would be

$$p_{\beta ij} = \frac{p_{\beta i}}{p_{\beta i} + p_{\beta j}} = \frac{h_\beta(v_i)}{h_\beta(v_i) + h_\beta(v_j)} = \frac{e^{\beta v_i}}{e^{\beta v_i} + e^{\beta v_j}} = \frac{1}{1 + e^{\beta(v_j - v_i)}}$$
The above expression, although valid, is not restricted to the comparison of the intensities of alternatives, because the performance values of the rest of the alternatives are considered indirectly through the normalization of the performance values: $v_i = \frac{a_i(X)}{\sum_{k=1}^{n} a_k(X)}$. Even if this is circumscribed to the set of alternatives of cardinality 2 in the pairwise comparison at hand, i.e., by applying Equation (1) to the set $\{A_i, A_j\}$ with associated performance values $\{a_i(X), a_j(X)\}$, we would have $p_{\beta ij} = \frac{e^{\beta v_i}}{e^{\beta v_i} + e^{\beta v_j}}$, where $v_i = \frac{a_i(X)}{a_i(X) + a_j(X)}$ and $v_j = \frac{a_j(X)}{a_i(X) + a_j(X)}$. This approach continues to be a comparison of values, not of intensities, and it could be criticized on the grounds that when comparing one alternative with each of the others in the set of alternatives, a different normalized performance value $v_i$ would be needed.
There is an alternative method of deriving the targeted pairwise probabilities of the decision-maker with expertise $\beta$, which derives from the preference modeling framework methodology developed in [48], based on performance values of alternatives captured in a multiplicative preference relation whose elements represent the preference values of alternatives when compared pairwise. Given a set of alternatives $A_1, A_2, \dots, A_n$ with associated performance values $a_i(X) \ge 0\ (i = 1, \dots, n)$, according to [48] there is a multiplicative preference relation $A = (a_{ij})$, where the intensity of alternative $i$ over alternative $j$, $a_{ij}$, is

$$a_{ij} = \left(\frac{a_i(X)}{a_j(X)}\right)^{c}, \quad c > 0$$
The probability of choosing alternative $A_i$ over alternative $A_j$, $p_{ij}$, based on their performance values can be defined in terms of intensity as follows:

$$p_{ij} = \frac{a_{ij}}{a_{ij} + a_{ji}}$$
As discussed in Section 5, the implementation of the decision-maker's level of expertise, $\beta\ (\ge 0)$, would lead to the following pairwise preference values:

$$a_{ij} = h_\beta\!\left[\left(\frac{a_i(X)}{a_j(X)}\right)^{c}\right] = e^{\beta\left(\frac{a_i(X)}{a_j(X)}\right)^{c}}, \quad c > 0$$

Therefore, the corresponding probability of choosing alternative $i$ over alternative $j$ by a decision-maker with expertise $\beta$, $p_{\beta ij}$, would be:

$$p_{\beta ij} = \frac{e^{\beta\left(\frac{a_i(X)}{a_j(X)}\right)^{c}}}{e^{\beta\left(\frac{a_i(X)}{a_j(X)}\right)^{c}} + e^{\beta\left(\frac{a_j(X)}{a_i(X)}\right)^{c}}}, \quad c > 0$$
In particular, and for computational efficiency, we consider the value $c = 1$:

$$p_{\beta ij} = \frac{e^{\beta \frac{a_i(X)}{a_j(X)}}}{e^{\beta \frac{a_i(X)}{a_j(X)}} + e^{\beta \frac{a_j(X)}{a_i(X)}}} = \frac{1}{1 + e^{\beta\left(\frac{a_j(X)}{a_i(X)} - \frac{a_i(X)}{a_j(X)}\right)}} = \frac{1}{1 + e^{\beta \frac{(a_j(X) - a_i(X))(a_j(X) + a_i(X))}{a_i(X)\, a_j(X)}}}$$
Expression (10) only requires the intensity values between alternatives, with no normalization required. When the absolute value of the difference between the performance values of the alternatives compared increases, the absolute value of the difference between the probabilities of the compared alternatives also increases.
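As a quick numerical sanity check (the performance values are arbitrary), the two ways of writing the exponent in Expression (10) give identical probabilities, and the probability gap widens as the performance difference grows:

```python
import math

def p_direct(a_i, a_j, beta):
    """Pairwise probability with the exponent as a difference of intensity ratios."""
    return 1.0 / (1.0 + math.exp(beta * (a_j / a_i - a_i / a_j)))

def p_factored(a_i, a_j, beta):
    """Same probability with the exponent factored as (a_j-a_i)(a_j+a_i)/(a_i*a_j)."""
    return 1.0 / (1.0 + math.exp(beta * (a_j - a_i) * (a_j + a_i) / (a_i * a_j)))

for a_i, a_j in [(2.0, 1.0), (1.5, 1.4), (0.3, 0.9)]:
    assert abs(p_direct(a_i, a_j, 1.0) - p_factored(a_i, a_j, 1.0)) < 1e-12

# Larger performance gap -> probability farther from the coin-flip value 0.5.
print(p_direct(1.1, 1.0, 2.0), p_direct(2.0, 1.0, 2.0))
```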

7. A Laboratory to Study a Priori Decisions

The established formalization of intentional bounded rationality is a key contribution since it provides a contingent strategic framework where the probability of choosing each alternative depends on internal elements (the decision-makers’ experience and beliefs) and on external environmental elements (complexity). Encoding human decisions by assigning values to these three elements through the associated parameters allows the creation of an a priori testing laboratory to simulate the performance with respect to:
(1)
Different forms of employee participation in organizational decisions;
(2)
How to combine the different levels of experience of the members of a group;
(3)
Different processes for the dissemination of ideas and knowledge in organizations and markets;
(4)
Different levels of hierarchy and communication, with decision rules linking all these simulations with the performance of each alternative in a probabilistic way.
The proposed function for individual behavior allows agent-based models to be built to evaluate the diffusion and adoption of judgments, innovations, or projects. The progress of an opinion or innovation in a network structure is based on establishing the rules by which an agent can change his judgment. It is usual to allow judgment changes when there is a random interaction between agents with different judgments that are interconnected. The interactions allow the change of judgment but, because of learning, they also improve the expertise of some of the agents. The dynamics of judgment changes or innovations adoption in agents will be conditioned by the types of communication structures and by previous beliefs. The fact that individual probability depends on latent returns makes it possible to quantify group performance based on the characteristics of the agents that make up the organization, such as the dispersion of expertise and/or beliefs. In addition, it makes it possible to analyze the level of performance difference from which it is guaranteed that innovation will be adopted by a high percentage of the organization, as well as to study whether the learning processes of innovative technologies improve their diffusion, which is an important element when heterogeneous networks are considered. Finally, it is possible to evaluate the way in which social pressure, understood as the influence of the environment on individual judgment, can block the process of change in any organizational structure. These analyses can help in understanding how different factors influence the spread of judgments or innovations in different communities and organizations, or how they influence decisions of another nature.
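The diffusion dynamics described above can be sketched as a minimal agent-based model. Everything below (the random-meeting rule, population size, latent performance values, and the fixed seed) is an illustrative assumption, not the authors' calibrated model: agents holding different judgments meet at random, and the reconsidering agent switches with an Equation (8)-style pairwise probability of intentional bounded rationality:

```python
import math
import random

def switch_prob(beta, perf_own, perf_other):
    """Probability (Eq. (8)-style) that the agent judges the other's option better."""
    a_ij = perf_other / perf_own
    a_ji = perf_own / perf_other
    return 1.0 / (1.0 + math.exp(beta * (a_ji - a_ij)))

def simulate_diffusion(n_agents=100, steps=5000, beta=3.0,
                       perf=(1.0, 1.5), seeds=5, rng_seed=0):
    """Fraction of agents adopting the innovation (judgment 1) after random meetings."""
    rng = random.Random(rng_seed)
    judgments = [1] * seeds + [0] * (n_agents - seeds)
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)
        if judgments[i] != judgments[j]:
            # Agent i reconsiders against agent j's judgment; fallible switch.
            if rng.random() < switch_prob(beta, perf[judgments[i]], perf[judgments[j]]):
                judgments[i] = judgments[j]
    return sum(judgments) / n_agents

print(simulate_diffusion())  # with beta > 0 the higher-performing option tends to spread
```

Varying `beta`, the performance gap in `perf`, or the interaction rule would reproduce, in this toy setting, the kinds of questions listed above: how expertise dispersion, prior beliefs, or communication structure condition whether the innovation is adopted by a high percentage of the organization.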
Our formalization also quantifies the losses due to omission errors and commission errors [44,45]. Omission error remains invisible in empirical studies because only the success or failure of the decision made (commission error) can be observed. This characteristic is especially important when studying the performance of an organization in group decisions. The structure of individual decisions (beliefs, expertise, and difficulty) affects the performance of the different group decision mechanisms (authority, majorities, consensus, hierarchy, polyarchy, disagreement, etc.).
The contingent analysis implies that the individual aspects and the group decision mechanisms must be adapted to the environments in which the organization is immersed. The asymmetry in the error cost is an additional factor relevant to the group configuration. There are many situations where choosing the right alternative is critical because error may be too costly [49,50]. Some decision-makers must minimize the commission error: justice must not convict innocents, nuclear power plants must not commit safety errors, etc. In many strategic decisions, commission errors can have serious consequences for the continuity of the organization. If an organization chooses a project that generates significant losses, its reputation will be negatively affected, and it could even lead to its closure. Sometimes it is possible to block a decision and not invest, committing an omission error, i.e., an opportunity loss occurs in the organization but is not observable by third parties. These special situations can change the convenience of decision mechanisms since they require ensuring high levels of quality. In addition, sunk agency costs occur when adjustments are made for reduced performance (administrative costs, increased controls, employee uncertainty, etc.). Myers [51] justifies the fact that safer organizations obtain cheaper resources, and therefore face lower costs, stating that investors tend to underestimate the intangibles of organizations that have incurred losses at some point and, in general, their growth opportunities are reduced. There are other sectors, such as mobile telephony or the pharmaceutical industry, where innovation is crucial and omission errors can cause the loss of the organization's market share. There are also other environments where time has a very high discount rate due to the rapid deterioration of alternative actions.
Examples of these environments are the military or disaster management, where decision-makers must make decisions in a short time. In summary, the heterogeneity in the cost of errors justifies an analysis in terms of fallibility, accommodating decision support techniques or collective decision mechanisms that best suit each problem.

8. Conclusions

This study considered how the human way of thinking can be formalized mathematically. Before making a choice, individuals decide how much information to process to improve their choice. This way of acting synthesizes the four basic principles of bounded rationality: the principle of expected rationality, the principle of adaptation, the principle of uncertainty, and the principle of compensation. A formalization was proposed whose development focused on the human cognitive process from a Bayesian point of view, where the decision-maker tries to maximize the total performance of the decision, considering the cost associated with increasing the amount of information to be processed.
Three aspects determine the performance of human decision-making: complexity, as a latent performance difference between alternatives; beliefs, as an a priori attraction system over the alternatives ($p_{0i}$) (quick and intuitive thinking); and expertise, as the ability to process relevant information that allows the decision to be improved ($\beta$) (slow and reflective thinking). Traditionally, the literature has considered that the decision-maker makes the best decision given the level of available information (classical rationality). However, our approach started from the premise that perfect information is available, and the decision-maker chooses how much information to process, assuming the possibility of making an error. Our functional form showed a relationship between error and the idiosyncratic characteristics of the decision-maker (beliefs and expertise), as well as an inversely proportional relationship with complexity (an element exogenous to the decision-maker).
The functional representation obtained for the human mental process of choosing an alternative was a general “logit function”. Different particular cases of the proposed functional form have been used in the empirical literature in an ad hoc way. The characterization analyzed how the extreme values of beliefs and expertise condition the functional form. When expertise is high, the decision-maker has a low probability of being wrong, even with small performance differences, due to the low cost of processing information, while when expertise is low, beliefs become more relevant. The binary comparison is especially important because (i) it allows a scale-free comparison, and (ii) it has been used in the main decision support techniques.
Social systems are chaotic systems since small variations of the parameters can lead to important changes in the whole. Our formalization allows a social laboratory to be established to evaluate a priori the contributions of decision support techniques, facilitates a base of individual behavior in the models based on agents that study interactions, and visualizes management based on the types of error (omission and commission). In this way, the way that a collective decision mechanism affects the balance between omission errors and commission errors can be analyzed, regarding the characteristics of the people who compose it and the difficulty of the problems they study. This analysis will allow decision systems to be adapted to the organization’s objectives (error cost).
The expertise ($\beta = 1/L$) represented the effort (cost) of reducing the error for each relative unit of performance. The higher the expertise $\beta$, the higher the probability of choosing the correct option, since the cost of processing information is low, and vice versa. On the other hand, beliefs ($p_{0i}$) could change the probability with which each alternative was chosen. It has been widely demonstrated that affective states strongly influence the mechanisms of the human mind that establish the relationship between beliefs and the logical part [52]. Decision-makers face cognitive dissonances (internal tensions between ideas, beliefs, and evidence) that they must resolve to allow their beliefs to evolve, and therefore affective states play an important role [53,54,55]. By considering the “a priori” elements of psychological variables such as antecedents or consequences of affective states, personality traits, and beliefs, the black box of decision making can be decoded.
The intentional bounded rationality depicted here may have a promising future application in agent-based models (ABM), decision support techniques (DST), and collective decision mechanisms, by extending the model with the trade-offs that occur between mindset, complexity, and decision structures.

Author Contributions

Conceptualization, C.S.-R., F.C. and E.H.-V.; methodology, C.S.-R., F.C. and E.H.-V.; formal analysis, C.S.-R., F.C. and E.H.-V.; investigation, C.S.-R., F.C. and E.H.-V.; resources, C.S.-R., F.C. and E.H.-V.; writing—original draft preparation, C.S.-R., F.C. and E.H.-V.; writing—review and editing, C.S.-R., F.C. and E.H.-V.; visualization, C.S.-R., F.C. and E.H.-V.; supervision, C.S.-R., F.C. and E.H.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Spanish Ministerio de Economía y Competitividad, [ECO2013-48496-C4-3-R and MTM2016-77642-C2-2-P], the Diputación General de Aragón (DGA) and the European Social Fund [CREVALOR], the Spanish State Research Agency Projects PID2019-10380RBI00/AEI/10.13039/501100011033, and the Andalusian Government Project P20_00673.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, S.; Li, D.; Jia, J.; Liu, Y. A Self-Learning Based Preference Model for Portfolio Optimization. Mathematics 2021, 9, 2621. [Google Scholar] [CrossRef]
  2. Simon, H.A. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations, 4th ed.; Free Press: New York, NY, USA, 1997; ISBN 978-0-684-83582-2. [Google Scholar]
  3. Salas-Fumás, V.; Sáenz-Royo, C.; Lozano-Rojo, Á. Organisational Structure and Performance of Consensus Decisions through Mutual Influences: A Computer Simulation Approach. Decis. Support Syst. 2016, 86, 61–72. [Google Scholar] [CrossRef]
  4. March, J.G.; Simon, H.A.; Guetzkow, H.S. Organizations, 2nd ed.; Blackwell: Cambridge, MA, USA, 1993; ISBN 978-0-631-18631-1. [Google Scholar]
  5. Jia, W.; Qiu, X.; Peng, D. An Approximation Theorem for Vector Equilibrium Problems under Bounded Rationality. Mathematics 2020, 8, 45. [Google Scholar] [CrossRef] [Green Version]
  6. Ueda, M. Effect of Information Asymmetry in Cournot Duopoly Game with Bounded Rationality. Appl. Math. Comput. 2019, 362, 124535. [Google Scholar] [CrossRef] [Green Version]
  7. Zhang, M.; Wang, G.; Xu, J.; Qu, C. Dynamic Contest Model with Bounded Rationality. Appl. Math. Comput. 2020, 370, 124909. [Google Scholar] [CrossRef]
  8. Kahneman, D.; Slovic, P.; Tversky, A. Judgment under Uncertainty: Heuristics and Biases, 1st ed.; Cambridge University Press: Cambridge, UK; New York, NY, USA, 1982; ISBN 978-0-521-24064-2. [Google Scholar]
  9. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision under Risk. In World Scientific Handbook in Financial Economics Series; World Scientific: Singapore, 2013; Volume 4, pp. 99–127. ISBN 978-981-4417-34-1. [Google Scholar]
  10. Cook, K.S.; Levi, M. The Limits of Rationality; University of Chicago Press: Chicago, IL, USA, 2008. [Google Scholar]
  11. Hogarth, R.; Einhorn, H.J.; Hogarth, R.M. Insights in Decision Making: A Tribute to Hillel J. Einhorn; University of Chicago Press: Chicago, IL, USA, 1990; ISBN 978-0-226-34856-8. [Google Scholar]
  12. Camerer, C.; Hua Ho, T. Experience-Weighted Attraction Learning in Normal Form Games. Econometrica 1999, 67, 827–874. [Google Scholar] [CrossRef] [Green Version]
  13. Luce, R.D. Utility of Gains and Losses: Measurement-Theoretical and Experimental Approaches; Psychology Press: London, UK, 2014; ISBN 978-1-138-00344-6. [Google Scholar]
  14. Pachur, T.; Suter, R.S.; Hertwig, R. How the Twain Can Meet: Prospect Theory and Models of Heuristics in Risky Choice. Cogn. Psychol. 2017, 93, 44–73. [Google Scholar] [CrossRef] [Green Version]
  15. Scheibehenne, B.; Pachur, T. Using Bayesian Hierarchical Parameter Estimation to Assess the Generalizability of Cognitive Models of Choice. Psychon. Bull. Rev. 2015, 22, 391–407. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 1998; ISBN 978-0-262-19398-6. [Google Scholar]
  17. Csaszar, F.A. An Efficient Frontier in Organization Design: Organizational Structure as a Determinant of Exploration and Exploitation. Organ. Sci. 2013, 24, 1083–1101. [Google Scholar] [CrossRef]
  18. Yi, G.; Zhou, C.; Cao, Y.; Hu, H. Hybrid Assembly Path Planning for Complex Products by Reusing a Priori Data. Mathematics 2021, 9, 395. [Google Scholar] [CrossRef]
  19. Vinogradova-Zinkevič, I. Application of Bayesian Approach to Reduce the Uncertainty in Expert Judgments by Using a Posteriori Mean Function. Mathematics 2021, 9, 2455. [Google Scholar] [CrossRef]
  20. Christensen, M.; Knudsen, T. Design of Decision-Making Organizations. Manag. Sci. 2010, 56, 71–89. [Google Scholar] [CrossRef]
  21. Newell, A.; Shaw, J.C.; Simon, H.A. Elements of a Theory of Human Problem Solving. Psychol. Rev. 1958, 65, 151–166. [Google Scholar] [CrossRef]
  22. Jones, B.D. Bounded Rationality and Political Science: Lessons from Public Administration and Public Policy. J. Public Adm. Res. Theory 2003, 13, 395–412. [Google Scholar] [CrossRef]
  23. Newell, A. Unified Theories of Cognition; Harvard University Press: Cambridge, MA, USA, 1994; ISBN 978-0-674-92101-6. [Google Scholar]
  24. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. Science 1974, 185, 1124–1131. [Google Scholar] [CrossRef] [PubMed]
  25. Lupia, A.; McCubbins, M.D.; Popkin, S.L. Beyond Rationality: Reason and the Study of Politics. In Elements of Reason; Lupia, A., McCubbins, M.D., Popkin, S.L., Eds.; Cambridge University Press: Cambridge, UK, 2000; pp. 1–20. ISBN 978-0-521-65329-9. [Google Scholar]
  26. Knudsen, T.; Levinthal, D.A. Two Faces of Search: Alternative Generation and Alternative Evaluation. Organ. Sci. 2007, 18, 39–54. [Google Scholar] [CrossRef] [Green Version]
27. Luce, R.D. Semiorders and a Theory of Utility Discrimination. Econometrica 1956, 24, 178–191. [Google Scholar] [CrossRef] [Green Version]
  28. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; Adaptive Computation and Machine Learning; The MIT Press: Cambridge, MA, USA, 2016; ISBN 978-0-262-03561-3. [Google Scholar]
29. DeGroot, M.H. Optimal Statistical Decisions; Wiley-Blackwell: Hoboken, NJ, USA, 2004; ISBN 978-0-471-68029-1. [Google Scholar]
30. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1997; ISBN 978-0-691-01586-6. [Google Scholar]
  31. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef] [Green Version]
  32. Kahneman, D. Thinking, Fast and Slow, 1st ed.; Farrar, Straus and Giroux: New York, NY, USA, 2013; ISBN 978-0-374-53355-7. [Google Scholar]
  33. James, W. The Principles of Psychology; Cosimo Classics: New York, NY, USA, 2007; Volume 1, ISBN 1-60206-314-1. [Google Scholar]
  34. Faghihi, U.; Estey, C.; McCall, R.; Franklin, S. A Cognitive Model Fleshes out Kahneman’s Fast and Slow Systems. Biol. Inspired Cogn. Archit. 2015, 11, 38–52. [Google Scholar] [CrossRef]
  35. Jaynes, E.T.; Rosenkrantz, R.D. Papers on Probability, Statistics and Statistical Physics; Synthese Library; Reprinted; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1989; ISBN 978-0-7923-0213-1. [Google Scholar]
  36. Hertwig, R.; Herzog, S.M. Fast and Frugal Heuristics: Tools of Social Rationality. Soc. Cogn. 2009, 27, 661–698. [Google Scholar] [CrossRef]
  37. Tetlock, P.E. Accountability and the Perseverance of First Impressions. Soc. Psychol. Q. 1983, 46, 285–292. [Google Scholar] [CrossRef]
  38. Tetlock, P.; Gardner, D. Superforecasting: The Art and Science of Prediction; Random House: New York, NY, USA, 2015; ISBN 978-1-4481-6659-6. [Google Scholar]
  39. Payne, J.W.; Bettman, J.R.; Luce, M.F. Behavioral Decision Research: An Overview. In Measurement, Judgment and Decision Making, 2nd ed.; Birnbaum, M.H., Ed.; Handbook of Perception and Cognition; Academic Press: San Diego, CA, USA, 1998; pp. 303–359. ISBN 978-0-12-099975-0. [Google Scholar]
  40. Erev, I.; Roth, A.E. Predicting How People Play Games: Reinforcement Learning in Experimental Games with Unique, Mixed Strategy Equilibria. Am. Econ. Rev. 1998, 88, 848–881. [Google Scholar]
  41. McFadden, D. Economic Choices. Am. Econ. Rev. 2001, 91, 351–378. [Google Scholar] [CrossRef]
  42. Train, K.E. Discrete Choice Methods with Simulation; Cambridge University Press: Cambridge, UK, 2009; ISBN 978-1-139-48037-6. [Google Scholar]
  43. Yager, R.R. Fuzzy Decision Making including Unequal Objectives. Fuzzy Sets Syst. 1978, 1, 87–95. [Google Scholar] [CrossRef]
44. Sah, R.K.; Stiglitz, J.E. The Architecture of Economic Systems: Hierarchies and Polyarchies. Am. Econ. Rev. 1986, 76, 716–727. [Google Scholar]
45. Sah, R.K.; Stiglitz, J.E. Committees, Hierarchies and Polyarchies. Econ. J. 1988, 98, 451–470. [Google Scholar] [CrossRef] [Green Version]
  46. Saaty, T.L. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation; McGraw-Hill International Book Co.: New York, NY, USA; London, UK, 1980; ISBN 978-0-07-054371-3. [Google Scholar]
  47. Siraj, S.; Mikhailov, L.; Keane, J.A. Contribution of Individual Judgments toward Inconsistency in Pairwise Comparisons. Eur. J. Oper. Res. 2015, 242, 557–567. [Google Scholar] [CrossRef] [Green Version]
  48. Herrera, F.; Herrera-Viedma, E.; Chiclana, F. Multiperson Decision-Making Based on Multiplicative Preference Relations. Eur. J. Oper. Res. 2001, 129, 372–385. [Google Scholar] [CrossRef]
  49. Catalani, M.S.; Clerico, G.F. How and When Unanimity Is a Superior Decision Rule. In Decision Making Structures; Contributions to Management Science; Physica-Verlag HD: Heidelberg, Germany, 1996; pp. 15–29. ISBN 978-3-7908-0895-7. [Google Scholar]
  50. Csaszar, F.A. Organizational Structure as a Determinant of Performance: Evidence from Mutual Funds. Strat. Mgmt. J. 2012, 33, 611–632. [Google Scholar] [CrossRef] [Green Version]
  51. Myers, S.C. Capital Structure Puzzle; National Bureau of Economic Research: Cambridge, MA, USA, 1984. [Google Scholar]
52. Edwards, K.; Smith, E.E. A Disconfirmation Bias in the Evaluation of Arguments. J. Personal. Soc. Psychol. 1996, 71, 5–24. [Google Scholar] [CrossRef]
  53. Brehm, J.W.; Cohen, A.R. Explorations in Cognitive Dissonance; John Wiley & Sons Inc.: Hoboken, NJ, USA, 1962. [Google Scholar]
54. Festinger, L. A Theory of Cognitive Dissonance; Reprint ed.; Stanford University Press: Stanford, CA, USA, 2001; ISBN 978-0-8047-0911-8. [Google Scholar]
55. Festinger, L.; Riecken, H.W.; Schachter, S. Reactions to Disconfirmation. In When Prophecy Fails; University of Minnesota Press: Minneapolis, MN, USA, 1956. [Google Scholar]
Table 1. Research lines that use the logistic functional form.

Sources | Approach | Description
Luce [13] | Mathematical | Carries out an extensive study of the properties of different functional forms chosen ad hoc, as well as of their predictive capacity in experimental practice.
Erev and Roth [40]; Camerer and Hua Ho [12] | Learning | Different learning formulations are calibrated through real experiments to trace how an ad hoc probabilistic choice function evolves.
Sutton and Barto [16]; Scheibehenne and Pachur [15]; Pachur, Suter, and Hertwig [14] | Psychological | Human behavior is predicted through an ad hoc functional form that incorporates psychophysical, psychological, and information-processing constraints.
McFadden [41]; Train [42] | Econometric | The parameters associated with the different causes that provoke one choice or another are estimated from empirical evidence.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.