Article

Framework for Incorporating Artificial Somatic Markers in the Decision-Making of Autonomous Agents

1
Escuela de Ingeniería Informática, Universidad de Valparaíso, 2362905 Valparaíso, Chile
2
Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, 2362807 Valparaíso, Chile
3
Departamento de Ciencias Naturales y Tecnología, Universidad de Aysén, 5952039 Coyhaique, Chile
4
Escuela de Comercio, Pontificia Universidad Católica de Valparaíso, 2340031 Valparaíso, Chile
*
Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(20), 7361; https://doi.org/10.3390/app10207361
Submission received: 21 September 2020 / Revised: 15 October 2020 / Accepted: 17 October 2020 / Published: 21 October 2020
(This article belongs to the Special Issue Multi-Agent Systems 2020)

Abstract: The somatic marker hypothesis proposes that when a person faces a decision scenario, many thoughts arise and different “physical consequences” are fleetingly observable. It is generally accepted that the affective dimension influences cognitive capacities. Several proposals for including affectivity within artificial systems have been presented. However, to the best of our knowledge, no proposal has yet considered the incorporation of artificial somatic markers in a disaggregated and specialized way for the different phases that make up a decision-making process. Thus, this research work proposes a framework that incorporates artificial somatic markers in the different phases of the decision-making of autonomous agents: recognition of the decision point; determination of the courses of action; analysis of decision options; decision selection and performing; and memory management. Additionally, a unified decision-making process and a general architecture for autonomous agents are presented. This proposal offers a qualitative perspective following a grounded theory approach, which is suggested when existing theories or models cannot fully explain or understand a phenomenon or circumstance under study. This research work represents a novel contribution to the body of knowledge guiding the incorporation of this biological concept, in artificial terms, within autonomous agents.

1. Introduction

The somatic marker hypothesis proposes that when a person faces a decision scenario, many thoughts arise and different “physical consequences” are fleetingly observable (somatic consequences being momentary changes in the body) [1]. The body sends signals in terms of sudden and immediate physical changes. Some activation signals of a somatic marker are sweating, rapid heartbeat, or a momentary contraction of the body. These bodily changes can anticipate decision-making.
Human life is recorded in terms of experience, which is accompanied by emotional associations and bodily reactions. These (somatic) memories are evoked or brought to the present when experiencing or facing a situation similar to a past episode [1,2]. A somatic marker can focus attention on the positive or negative outcome of a given action and act as an automatic alarm. This signal could cause a person, for example, to immediately discard a course of action before analysis and selection, forcing the search for alternatives. This automatic step can drastically reduce the number of options to choose from [1].
It is also clear that somatic markers themselves are not enough to make decisions, since it is necessary to implement a process of reasoning and a final selection of the most convenient alternative [1]. Considering the above, it is interesting to explore how the biological capacity to experience somatic markers could be designed to be incorporated into artificial entities that must face decision-making scenarios. In this sense, the agent paradigm arises as an interesting option to consider because evidence based on theoretical and empirical research shows how agent technology has been successfully used for developing and implementing decision scenarios in criminology [3], logistics, transportation and urban issues [4,5,6,7,8], e-markets [9,10,11,12], and catastrophe management [13], to name a few. This technology is based on the agent concept, which represents an entity with the capability to undertake autonomous actions in a determined environment considering specific design goals [14,15,16].
It is generally accepted that emotions and the affective dimension influence cognitive capacities [17]. Considering the above, several proposals for including these elements within autonomous agents have been presented in videogames [18], human-robot interaction [19], e-markets [20], agent societies [21,22], health [23], and learning [24], to name a few. The authors of the present research work have, in the past, designed software frameworks [25,26] and explored the incorporation of artificial emotions in autonomous agents, specifically in purchasing decisions [27] and investment decisions [28,29,30,31]. In particular, the incorporation of primary emotions (such as joy or fear) in investment profiles implemented within autonomous agents has been analyzed to observe the effect of emotional containment and recovery. However, in no previous work have the authors explored the incorporation of artificial somatic markers into autonomous agents. The number of proposals that suggest the incorporation of somatic markers in autonomous agents is considerably smaller [32,33,34]. To the best of our knowledge, a proposal that considers the incorporation of artificial somatic markers in a disaggregated and specialized way for the different phases that make up a decision-making process has not yet been observed. In this sense, it is possible to consider the use of artificial somatic markers both in a specialized way, in a particular decision phase that an autonomous agent needs to perform, and in an integrated way, that is, in all the different moments of a decision, arranged as a unified decision-making process.
The present research work offers a qualitative perspective following a grounded theory approach [35], which is suggested when existing theories or models cannot fully explain or understand a phenomenon, context, or the circumstances under study. In this sense, there is currently no body of knowledge that can fully guide the incorporation of this biological concept in artificial terms within autonomous agents. In this light, the current research work is novel in the sense that it (1) designs a set of algorithms that consider the use of artificial somatic markers in different phases of a decision; (2) defines a unified decision-making process considering the different phases of a decision; (3) designs a general architecture for considering artificial somatic markers in autonomous agents; (4) defines and analyzes a conceptual case study for visualizing the applicability of the framework.
The remainder of this work is organized as follows: Section 2 includes the conceptual background and literature review. Section 3 includes methodological aspects. Section 4 presents the design of artificial somatic markers for the decision-making of autonomous agents, in terms of decision phases, a set of algorithms, a unified decision-making process, and a general architecture. Section 5 includes the definition of a conceptual case study that allows visualization of the applicability of the general framework. Section 6 presents a discussion and analysis derived from the conceptual case study and the current research proposal. Finally, Section 7 presents conclusions of the work done and possibilities for future work.

2. Background

The expected utility model, initiated by J. Bernoulli in the eighteenth century, sought to explain people’s preferences over a set of possible choices and the utility expected from each one under uncertainty. The model considers principles such as rationality, decision transitivity, and procedural invariance; aspects that were contested by several authors [36,37,38]. Prospect theory and ideas such as the use of heuristics, representativeness, availability, and the existence of biases in human decisions were concepts introduced by Kahneman and Tversky in several works [39,40,41], with the aim of proposing a reference-based utility function to explain how people make decisions under uncertainty. In line with advances in psychology and neuroscience, it is generally accepted that emotions influence human decision-making processes [42,43]. There are several approaches to the concept of emotion [44,45,46]. In the current research work, it will be understood on the basis of the approach presented by [1]: “emotion is the combination of a mental assessment process, simple or complex, with responses to that process emanating from dispositional representations, addressed mainly to the body, resulting in an emotional body state; and also oriented towards the brain itself, resulting in additional mental changes”.
The human brain constantly produces a high quantity of mental images (for example, when a stimulus is detected). The brain probably highlights an image by generating an emotional state that accompanies the image. The degree of emotion serves as a “marker” for the relative importance of the image [2]. The mental image-processing machinery could then be guided by reflection and used for effective anticipation of situations, previewing of possible outcomes, navigation of the possible future, and invention of management solutions [2]. The somatic marker does not need to be a fully formed emotion which is overtly experienced as a feeling. It can be a covert, emotion-related signal of which the subject is not aware [2]. The somatic marker hypothesis offers a mechanism for how the brain would execute a value-based selection of images. The principle for the selection of images is connected to life management needs [2].
There are many possible uses or interpretations of somatic markers [47]. There exists a proposal for the possible use of somatic markers in decision-making, where they could intervene in the recognition of a decision point, option generation, deliberation, evaluation, and execution [47]. It is important to mention that the authors do not propose any implementation, design of a real scenario, results analysis, or consideration of implementing somatic markers in artificial terms within autonomous agents. Meanwhile, an architecture that incorporates the use of artificial somatic markers in decision-making was presented by [48]. The decisions that are made correspond to the elementary actions of a robot (e.g., move, stop, wave). Similarly, an implementation of somatic markers for social robots was presented in [34]. The Iowa gambling task [49] was considered for evaluating the proposal. It is important to note that the agents are not aware of the causes that activate the somatic markers. In this sense, it is not clear how the somatic memories are recorded and then used over time. In turn, an abstract cognitive architecture that considers somatic markers was presented in [50], whose emphasis is on moral schemes and semantic maps. As with the previous cases, there are no procedural or algorithmic details that illustrate how artificial somatic markers could be used and implemented within an autonomous agent in the different phases of the decision-making process.
On the other hand, several proposals regarding agent frameworks have been presented, such as for improving resource utilization in a smart city [51], for non-cooperative multiagent planning [52], for bus holding control in transport context [53], for controlling an indoor intelligent lighting system [54], for embodied conversational agents while considering an empathy approach [55], for handling disruptions in chemical supply chains [56], for including an affective dimension within a belief–desire–intention (BDI) model [57], and for designing artificial emotional systems [58], to name a few. It is important to note that none of the aforementioned proposals considers the use of artificial somatic markers.
To the best of our knowledge, the present research work represents the first proposal that considers the incorporation of artificial somatic markers within the complete decision-making process in autonomous agents. There is a knowledge gap on how to guide the incorporation of the biological concept artificially in autonomous agents.
The proposal is not intended to replace any current decision-making mechanism in autonomous agents but, rather, to extend the current frontier of knowledge in their design and implementation. The use of artificial somatic markers in the decision-making of autonomous agents offers several advantages. First, new mechanisms become available for the recognition of decision points, where artificial somatic associations can alert the autonomous agent to a particular situation. Second, artificial somatic associations can guide the process of searching for and selecting decision options. Third, artificial somatic markers can reinforce or discourage the selection of a decision option (through somatic rewards or punishments).

3. Methodology

The present research work offers a qualitative perspective following a grounded theory approach [35], which is suggested when existing theories or models cannot fully explain or understand a phenomenon, context, or the circumstances under study. The grounded theory approach provides categories of the process or phenomenon under study and their relationships. In the same way, it allows the emergence of theories and models that help explain or understand the process or phenomenon under study. In grounded theory, the processes, actions, and interactions between individuals are examples of study objects. Its explanations are limited to a specific area, but they have an interpretative richness and provide new visions of a phenomenon [35]. Grounded theory has been applied in several contexts, such as the design of theoretical frameworks [59,60] and decision-making [61,62].
The present research work addresses the incorporation of artificial somatic markers in the decision-making of autonomous agents. In this sense, there is currently no body of knowledge that can fully guide the incorporation of this biological concept in artificial terms within autonomous agents. Thus, base documents are used for the derivation of the analysis categories, particularly, the somatic marker hypothesis [1,2], Ekman’s basic emotions [63,64], and the proposal about the potential uses of biological somatic markers in different phases of human decision-making [47].
Considering all of the above, and to contribute to the formation of a body of knowledge that guides the incorporation of artificial somatic markers in autonomous agents, first, several preliminary categories involved both in decision-making and in the somatic marker hypothesis are defined. This is called open coding. A category corresponds to a relevant concept, idea, or fact to which it is possible to associate a meaning. Second, these categories are grouped into main categories or themes to identify the central categories of the process or phenomenon under study. This is called axial coding. Third, these central categories or themes are connected or associated, for example, to form a model. This is called selective coding. At the end, a story or narrative is written that links the categories and describes the process or phenomenon [35].
Following the previous explanation, Table 1 shows a list of preliminary categories, main categories, and core categories associated with decision-making, somatic markers, and their approach as artificial somatic markers for autonomous agents. In order to illustrate the association between the different types of categories identified in Table 1, first, a set of algorithms that consider artificial somatic markers in decision-making is designed. Second, a unified decision-making process that considers artificial somatic markers is defined. Third, a general architecture for considering artificial somatic markers in autonomous agents is designed. Finally, to visualize the applicability of the framework, a conceptual case study is presented.

4. Design of Artificial Somatic Markers for Decision-Making of Autonomous Agents

4.1. Defining Phases in the Decision-Making of Autonomous Agents

Considering a decision-making process performed by an autonomous agent, Figure 1 shows a general view of the potential incorporation of artificial somatic markers. Each of the phases is explained below:
Recognition of Decision Point: Upon detection of some type of stimulus (internal or external), an artificial somatic marker can act as a mechanism which captures the attention of the autonomous agent and focuses it on interpreting the detected stimulus. Depending on the case, each stimulus can represent a threat or an opportunity, or be previously associated with feelings of joy, sadness, fear, surprise, or anger. It is also possible that there is no previous association, in which case the stimulus is assumed to be neutral.
Determination of the Courses of Action: An artificial somatic marker can act as a mechanism for generating decision options, that is, for determining alternative courses of decision. This allows the autonomous agent to verify past goals, actions, success levels, and emotional effects associated with a specific stimulus, and then define (or discard) possible actions and goals for the current scenario. The above can imply, e.g., promoting or inhibiting components of a solution to a specific problem.
Analysis of Decision Options: An artificial somatic marker can participate in the analysis of the decision options under evaluation. The existence of somatic associations between a goal to be achieved and a specific available decision option (oriented to achieving that goal) can be represented in numerical or conceptual terms, which could reinforce that decision option within a set of candidate decision options. Additionally, an artificial somatic marker could limit or interrupt the analysis of decision options if any directive is enabled that limits the time available for that process or the number of options to be evaluated. This use of an artificial somatic marker corresponds to the context in which the feeling that “there is no more time, and it is necessary to act” arises.
Decision Selection and Performing: Regardless of the prior existence of a process of analysis and evaluation of decision options, an artificial somatic marker can participate in the final selection of a decision option by strongly guiding the choice of an option through the use of rewards (e.g., an increase in a feeling of well-being) or punishments (e.g., an increase in feelings of displeasure, sadness, or anger). This means that, eventually, the decision option analysis process may suggest a path, and the decision selection and performing process may reinforce or promote the execution of a different path. It should be noted that, particularly in this potential use of an artificial somatic marker, it is necessary to modulate the intensity of rewards or punishments.
Memory Management: An artificial somatic marker can guide the experience recording of an autonomous agent. In this sense, memory management is vital in two aspects: first, the somatic activations that have been observed in the last decision-making process can be temporarily registered in the working memory of the autonomous agent, in order to be considered in a subsequent decision-making process. It is important to note that this use is suggested considering a short-term temporal dimension. Second, the decision made and its affective effects must be recorded in the long-term memory of the autonomous agent in terms of a somatic association, in order to extend the current experience of the autonomous agent. It is important to note that this use is suggested considering a long-term temporal dimension.
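As an illustration, the five phases above can be chained into one decision cycle. The following Python sketch is a hedged illustration only; the function name, the `phases` dictionary of callables, and the data shapes are assumptions of this sketch, not definitions from the framework.

```python
# Minimal sketch of the unified cycle, chaining the five phases above.
# All names and data shapes are illustrative assumptions.

def decision_cycle(stimulus, phases):
    """Run one decision episode; each phase may consult artificial somatic markers.

    `phases` is a dict of callables supplied by a concrete agent:
    'recognize', 'determine', 'analyze', 'select', 'remember'.
    """
    stimulus_type = phases["recognize"](stimulus)                  # phase 1
    actions, goals = phases["determine"](stimulus, stimulus_type)  # phase 2
    labeled_options = phases["analyze"](actions, goals)            # phase 3
    decision = phases["select"](labeled_options)                   # phase 4
    phases["remember"](decision)                                   # phase 5
    return decision
```

A concrete agent would plug in its own phase implementations; the dictionary-of-callables wiring simply makes the sequencing of the five phases explicit.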

4.2. Designing Algorithms for the Decision-Making of Autonomous Agents

This subsection includes the design of a set of algorithms for the decision-making of autonomous agents that incorporates artificial somatic markers. The main objective of algorithm 1 is to recognize a decision context. It begins with the identification of a new stimulus. It should be noted that in the present research, a stimulus is understood as a signal (internal or external) that can generate some type of reaction in the autonomous agent. After a stimulus is detected, it is required to determine its origin. It is important to note that if the stimulus is the product of an autonomous agent’s own process (cognitive or of another nature), its origin is understood as “internal”. Conversely, if the stimulus is triggered outside the autonomous agent, then its origin is understood as “external”. Another relevant aspect is determining the type of stimulus detected. In the present research work, the types of stimuli considered are associated with the evocation of specific and distinct sensations, and by themselves have the potential capacity to generate a reaction in the autonomous agent, such as threat, opportunity, joy, sadness, fear, surprise, or anger. A “neutral” stimulus type is also considered. Except for the neutral type, the emotional state is updated in each case (according to the type of stimulus detected). Then, for all types of stimulus, algorithm 2 is called, indicating the received stimulus and its type in the call.
Algorithm 1 Recognition of Decision Point
Begin
1. newStimulus = getStimuli()
2. stimulusOrigin = determineOrigin(newStimulus)
3. /* stimulusOrigin ∈ {‘external’; ‘internal’} */
4. stimulusType = getStimulusType(newStimulus) /* using past somatic reactions */
   /* stimulusType ∈ {‘threat’; ‘opportunity’; ‘joy’; ‘sadness’; ‘fear’; ‘surprise’; ‘anger’; ‘neutral’} */
5. If (stimulusType != ‘neutral’)
6.   emotional_effects = getEmotionalEffects(newStimulus, stimulusType) /* emotional updating */
7. End If
8. Determination of the Courses of Action (newStimulus, stimulusType)
End Algorithm 1
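As a hedged Python sketch of Algorithm 1 (not the authors’ implementation), the classification step can be approximated with dictionaries standing in for past somatic reactions and the emotional state; both data structures are assumptions of this sketch.

```python
# Illustrative sketch of Algorithm 1. The dict-based stores for past
# somatic reactions and the emotional state are assumptions.

STIMULUS_TYPES = {"threat", "opportunity", "joy", "sadness",
                  "fear", "surprise", "anger", "neutral"}

def recognize_decision_point(stimulus, origin, somatic_reactions, emotional_state):
    """Classify a stimulus from past somatic reactions (steps 1-7)."""
    assert origin in ("internal", "external")
    # Step 4: look up the type recorded by past somatic reactions.
    stimulus_type = somatic_reactions.get(stimulus, "neutral")
    assert stimulus_type in STIMULUS_TYPES
    if stimulus_type != "neutral":
        # Steps 5-6: emotional updating tied to the detected type.
        emotional_state[stimulus_type] = emotional_state.get(stimulus_type, 0) + 1
    return stimulus_type  # handed on to the next phase (step 8)
```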
On the other hand, algorithm 2 aims to determine alternative courses of action based on the stimulus detected. For this, first, a list of past goals that are associated with the detected stimulus or its type is obtained. A past goal corresponds to an objective that the autonomous agent tried to achieve, and, therefore, there is a record of it within its long-term memory. To reach such a goal, some actions had to be identified and executed. Thus, step two obtains the record of actions carried out based on past goals, always associated with the detected stimulus. Next, both the list of past goals and the list of past actions give rise to a list of levels of success achieved. In the next step, a list of emotional effects is obtained considering the past actions and the levels of success achieved.
Algorithm 2 Determination of the Courses of Action
Input: {stimulus; stimulusType}
Begin
1. past_goals = getPastGoals(stimulus, stimulusType)
2. performed_actions = getPastActions(past_goals)
3. success_levels = getPastSuccessLevel(past_goals, performed_actions)
4. emotional_effects = getPastEmotionalEffects(performed_actions, success_levels)
5. If {stimulusType} ∈ {‘threat’; ‘fear’}
6.   newEmotion = ‘trust’
7. Else If {stimulusType} ∈ {‘opportunity’; ‘sadness’}
8.   newEmotion = ‘joy’
9. Else If {stimulusType} ∈ {‘joy’; ‘surprise’; ‘neutral’}
10.   newEmotion = ‘neutral’
11. Else If {stimulusType} ∈ {‘anger’}
12.   newEmotion = ‘tranquility’
13. End If
14. refined_actions_list = selectSpecificActions(performed_actions, emotional_effects, newEmotion)
15. refined_goal_list = selectSpecificGoals(refined_actions_list, past_goals)
16. priority_level = getPriority(stimulusType) /* only ‘threat’ and ‘opportunity’ have ‘high priority’ */
17. Analysis of Decision Options (refined_actions_list, refined_goal_list, stimulusType, priority_level)
End Algorithm 2
From step 5 onwards, depending on the stimulus detected, actions and goals followed in the past are sought for stimuli of the same nature, in order to generate specific lists of actions and goals that can be chosen by the autonomous agent at present.
In the case of stimuli of the “threat” type, first, specific actions are sought whose execution has given “confidence” to the autonomous agent. In turn, these specific actions are linked to specific goals. Finally, a call is made to the “Analysis of Decision Options” algorithm, providing a refined list of actions, a refined list of goals, and the indication that the detected stimulus is a threat and that it requires a “high priority” response.
Meanwhile, in the case of stimuli of the “opportunity” type, first, all specific actions are sought whose execution has given “joy” to the autonomous agent. These specific actions are linked to specific goals. Finally, a call is made to the “Analysis of Decision Options” algorithm, providing a refined list of actions, a refined list of goals, and the indication that the detected stimulus is an opportunity and that it requires a “high priority” response.
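The branching of Algorithm 2 (steps 5–13) and its action filtering (step 14) can be sketched in Python as follows. This is a hedged illustration: the function names and the dictionary mapping actions to recorded emotional effects are assumptions of this sketch.

```python
# Sketch of Algorithm 2's stimulus-to-emotion mapping (steps 5-13)
# and action filtering (step 14). Data shapes are assumptions.

def compensating_emotion(stimulus_type):
    """Emotion sought in past actions, given the detected stimulus type."""
    if stimulus_type in ("threat", "fear"):
        return "trust"
    if stimulus_type in ("opportunity", "sadness"):
        return "joy"
    if stimulus_type == "anger":
        return "tranquility"
    return "neutral"  # joy, surprise, neutral

def select_specific_actions(performed_actions, emotional_effects, new_emotion):
    """Keep past actions whose recorded emotional effect matches new_emotion."""
    return [a for a in performed_actions
            if emotional_effects.get(a) == new_emotion]
```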
Algorithm 3 Analysis of Decision Options
Input: {actions_list; goal_list; stimulusType; priority_level}
Begin
1. If (priority_level = ‘high priority’ and stimulusType = ‘threat’)
2.   reduced_options_list = select by Maximizing {‘trust’} on {actions_list, goal_list}
3.   Decision Selection and Performing (reduced_options_list, goal_list, priority_level)
4. Else If (priority_level = ‘high priority’ and stimulusType = ‘opportunity’)
5.   reduced_options_list = select by Maximizing {‘joy’} on {actions_list, goal_list}
6.   Decision Selection and Performing (reduced_options_list, goal_list, priority_level)
7. Else If (priority_level = ‘normal priority’)
8.   options_list = select by {stimulusType} on {actions_list, goal_list}
9.   For each {option} ∈ {options_list}
10.     conflicts_list = checkConflicts(option, goal_list, global_goals)
11.     possible_emotional_effects = analyzeEmotionalEffects(option, goal_list, global_goals)
12.     option_somatic_memory = evaluateSomaticAssociations(option, conflicts_list, possible_emotional_effects)
13.     If (option_somatic_memory = ‘positive’)
14.       Add (option; ‘positive’) in {extended_option_list}
15.     Else If (option_somatic_memory = ‘negative’)
16.       Add (option; ‘negative’) in {extended_option_list}
17.     Else
18.       Add (option; ‘neutral’) in {extended_option_list}
19.     End If
20.   End For
21.   Decision Selection and Performing (extended_option_list, goal_list, priority_level)
22. End If
End Algorithm 3
On the other hand, for the rest of the stimulus types, it is important to note that they all suggest a “normal” (not “fast”) treatment. For stimuli of the “joy” type, actions and goals with a “neutral” profile are sought. Meanwhile, for stimuli of the “sadness” type, actions and goals that have given “joy” in the past are sought. Similarly, for stimuli of the “fear” type, actions and goals that have given “confidence” in the past are sought. For stimuli of the “surprise” type, actions and goals with a “neutral” profile are sought. Meanwhile, for stimuli of the “anger” type, actions and goals are sought that in the past have given “tranquility”. In all these cases, a call is made to the “analysis of decision options” algorithm, providing a refined list of actions and a refined list of goals, also indicating the type of stimulus and that a “normal priority” response is required. In the case of a “neutral” stimulus, a general list of past actions and a list of goals are provided, also indicating the type of stimulus and that a “normal priority” response is required.
On the other hand, algorithm 3 divides its scope of action in three senses. If the decision scenario has a “high priority” level and the type of stimulus is a “threat”, a subselection of options is generated from both the list of actions and the list of goals, considering those oriented to maximize the sense of trust in the autonomous agent. Meanwhile, if the decision scenario has a “high priority” level and the type of stimulus is an “opportunity”, the particular subselection of options is generated considering those oriented to maximize the feeling of joy in the autonomous agent.
In both of the previous cases, algorithm 4 “Decision Selection and Performing” is quickly called, providing the reduced list of possible options, the list of agent goals, and the priority level associated with the detected stimulus. Otherwise, if the decision scenario has a “normal priority” level, based on the type of stimulus detected, and for each option that could be followed by the autonomous agent, possible conflicts are checked between the particular option, the list of (specific) goals, and the global goals of the agent, that is, those fundamental objectives or guidelines that structurally define the work of the autonomous agent. Then, the possible emotional effects of following the option under analysis are obtained. Subsequently, somatic associations recorded in the long-term memory of the autonomous agent are searched for. If the somatic association is positive, then the option under evaluation is labeled “positive” and loaded into an updated list of options. Similarly, if the somatic association is negative, then the option under evaluation is labeled “negative” and loaded into an updated list of options. If neither association applies, the option under evaluation is labeled “neutral”. Finally, algorithm 4 “Decision Selection and Performing” is called.
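The normal-priority labeling loop of Algorithm 3 can be sketched in Python as follows. This is an illustrative sketch only: the dictionary lookup standing in for the long-term somatic memory is an assumption, not the paper’s mechanism for evaluating somatic associations.

```python
# Sketch of the normal-priority branch of Algorithm 3 (steps 8-20):
# each option is tagged with the somatic association stored in
# long-term memory. The dict-based memory lookup is an assumption.

def label_options(options_list, somatic_memory):
    """Tag each option 'positive', 'negative', or 'neutral'."""
    extended_option_list = []
    for option in options_list:
        label = somatic_memory.get(option, "neutral")
        extended_option_list.append((option, label))
    return extended_option_list
```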
Algorithm 4 Decision Selection and Performing
Input: {decision_options_list, goal_list, priority_level}
Begin
1. If (priority_level = ‘high priority’)
2.   {final_decision} = apply Fast Decision Rules to {decision_options_list}
3. Else If (priority_level = ‘normal priority’)
4.   For each {option} ∈ {decision_options_list}
5.     decision_reward = determineSomaticReward(option, goal_list, global_goals)
6.     decision_punishment = determineSomaticPunishment(option, goal_list, global_goals)
7.     decision_selectivity_index = apply Decision Rules to {option, decision_reward, decision_punishment}
8.     Add (option; decision_reward; decision_punishment; decision_selectivity_index) in {candidate_decision_list}
9.   End For
10.   Sort {candidate_decision_list} by {decision_selectivity_index}
11.   {final_decision} = get {option} by Max {decision_selectivity_index} from {candidate_decision_list}
12. End If
13. Get {actions} associated to {final_decision}
14. Activate {executive_processes} to perform {actions}
15. Memory Management (final_decision, decision_selectivity_index, actions, decision_reward, decision_punishment)
End Algorithm 4
Algorithm 4 guides the selection of a decision and its execution. If the priority to decide is high, then a specific set of decision rules geared towards scenarios that require urgent attention is applied. Conversely, if the priority to decide is normal, then for each candidate option to be chosen, a “somatic reward” and a “somatic punishment” are determined, corresponding to positive or negative sensations that may remain latent in the working memory of the autonomous agent, and which try to emulate a feeling of “well-being” when a satisfactory decision is made, and a feeling of “regret or discomfort” when a partially (or totally) unsatisfactory decision is made. In the next step, a set of decision rules is applied to the option, the reward for the decision, and the punishment for the decision, yielding a decision selectivity index. This index can be defined on an ad hoc scale and its main objective is to facilitate the comparison of different decision options. Subsequently, the list of possible decisions to be chosen is ordered through their selectivity index, and the one that represents the highest value of the mentioned index is chosen. The specific actions that allow the chosen decision to be performed are then obtained, simultaneously activating the corresponding executive processes. Finally, algorithm 5 “Memory Management” is called.
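The normal-priority selection of Algorithm 4 can be sketched as follows. Note that taking reward minus punishment as the selectivity index is one possible ad hoc decision rule assumed here for illustration; the paper leaves the rule and scale open.

```python
# Sketch of Algorithm 4's normal-priority selection (steps 4-11).
# reward_of / punishment_of are caller-supplied scoring functions;
# "reward - punishment" is an assumed, illustrative decision rule.

def select_decision(decision_options_list, reward_of, punishment_of):
    """Score each option and pick the one with the highest selectivity index."""
    candidate_decision_list = []
    for option in decision_options_list:
        reward = reward_of(option)          # somatic reward
        punishment = punishment_of(option)  # somatic punishment
        index = reward - punishment         # decision selectivity index
        candidate_decision_list.append((option, reward, punishment, index))
    # Sort by descending selectivity index and take the best candidate.
    candidate_decision_list.sort(key=lambda c: c[3], reverse=True)
    return candidate_decision_list[0][0] if candidate_decision_list else None
```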
Algorithm 5 records the decision-making process. To this end, the reward of the decision, the punishment of the decision, and the actions taken are recorded in the autonomous agent’s working memory. Meanwhile, a set of decision variables is recorded in the autonomous agent’s long-term memory: the stimulus detected, the origin of the stimulus, the type of stimulus, the emotional effects, the list of conflicts, the possible emotional effects, the somatic memory of each option, the decision made, its selectivity index, its reward, its punishment, and the performed actions.
Algorithm 5 Memory Management
Input: {final_decision, decision_selectivity_index, actions, decision_reward, decision_punishment}
Begin
1. /*Working Memory*/
2.Add (decision_reward) in {Working Memory}
3.Add (decision_punishment) in {Working Memory}
4.Add (actions) in {Working Memory}
5./*Long-Term Memory*/
6.Add (stimulus) in {Long-Term Memory}
7.Add (stimulusOrigin) in {Long-Term Memory}
8.Add (stimulusType) in {Long-Term Memory}
9.Add (emotional_effects) in {Long-Term Memory}
10.Add (conflicts_list) in {Long-Term Memory}
11.Add (possible_emotional_effects) in {Long-Term Memory}
12.Add (option_somatic_memory) in {Long-Term Memory}
13.Add (final_decision) in {Long-Term Memory}
14.Add (decision_selectivity_index) in {Long-Term Memory}
15.Add (decision_reward) in {Long-Term Memory}
16.Add (decision_punishment) in {Long-Term Memory}
17.Add (actions) in {Long-Term Memory}
End Algorithm 5
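A minimal sketch of Algorithm 5 in Python, assuming the working memory is a dictionary for the ongoing decision and the long-term memory an append-only list of records; the `context` parameter is a hypothetical convenience for passing the stimulus-related variables that the pseudocode reads from the agent's state.

```python
working_memory = {}
long_term_memory = []

def memory_management(final_decision, decision_selectivity_index, actions,
                      decision_reward, decision_punishment, context):
    """Record a decision process, following the structure of Algorithm 5."""
    # Working memory: latent sensations and the actions under execution
    working_memory["decision_reward"] = decision_reward
    working_memory["decision_punishment"] = decision_punishment
    working_memory["actions"] = actions
    # Long-term memory: the full record, reusable in future decisions
    record = dict(context)  # stimulus, stimulusOrigin, stimulusType, ...
    record.update(
        final_decision=final_decision,
        decision_selectivity_index=decision_selectivity_index,
        decision_reward=decision_reward,
        decision_punishment=decision_punishment,
        actions=actions,
    )
    long_term_memory.append(record)

memory_management("active_pause", 0.7, ["stop_vehicle"], 0.8, 0.1,
                  {"stimulus": "traffic congestion", "stimulusType": "threat"})
```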
Figure 2 shows a general diagram of a unified decision-making process that incorporates artificial somatic markers based on the algorithms described above. There are different types of input stimuli (such as a variation in some investment, third-party comments, or even the agent’s own thoughts). In the recognition of a decision point, a distinction is made between internal and external stimuli. At this level, it is key to determine the type of stimulus; for this, past somatic reactions are retrieved based on the detected stimulus.
Determining courses of action, in turn, requires recalling past goals, actions, emotional effects, and observed levels of success. In the case of a high-priority decision (threat-type or opportunity-type stimulus), a quick analysis of possible decision options follows. In the case of a normal-priority decision, possible conflicts are checked between each decision option, the current goals, and the global goals. Likewise, the emotional effects of each possible decision option are analyzed. In the same way, past somatic associations between each possible decision option, the potential conflicts identified, and the possible emotional effects are evaluated. Then, depending on the type of somatic association, each possible decision option is labeled positive, negative, or neutral.
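The positive/negative/neutral labeling step can be sketched as follows, under the assumption (not fixed by the framework) that a past somatic association is summarized as a single signed strength value; the threshold is an illustrative parameter.

```python
def label_option(association_strength, threshold=0.1):
    """Label a decision option from its net somatic association.

    association_strength: signed summary of past somatic associations
    (positive values indicate attraction, negative values aversion).
    """
    if association_strength > threshold:
        return "positive"
    if association_strength < -threshold:
        return "negative"
    return "neutral"
```

The dead band around zero yields the neutral label for options whose past associations are weak or conflicting.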
Next, in decision selection and performing, if the decision has high priority, a set of fast decision rules is applied, yielding the final decision. Otherwise, if the decision has normal priority, for each possible decision option both a somatic reward and a somatic punishment are determined (corresponding to positive or negative sensations that may remain latent in the autonomous agent’s working memory), from which a selectivity index is obtained. The decision option with the highest selectivity value is then selected. Subsequently, the actions associated with the decision are activated. Simultaneously with the performance of the actions, a record is generated in the working memory, including the somatic reward, the somatic punishment, and the actions currently under execution. Meanwhile, the long-term memory stores all those relevant aspects that can be considered in subsequent decision-making processes, including the stimulus received, the somatic reactions, the emotional effects, the details of the analysis of the possible decision options, the decision taken, and its context, among others.

4.3. General Architecture for Incorporating Artificial Somatic Markers in Autonomous Decision-Making

Figure 3 shows a general architecture for considering artificial somatic markers within autonomous decision-making processes. The Stimulus Manager identifies and manages the stimuli detected. The Stimulus Detector determines when a new stimulus is present, and the Stimulus Engine determines the origin and analyzes the type of stimulus detected.
Meanwhile, the Memory Manager manages both long-term and working memories. Long-Term Memory manages the record and memory of past experiences (e.g., events, relationships between elements, past goals, actions performed in the past, emotional effects, somatic reactions, levels of success). Working Memory manages the registration and use of information in ongoing decision processes (e.g., actions under execution).
The Affective Manager analyzes the emotional effects of stimuli and decisions. It also evaluates somatic associations and determines rewards and punishments for the decisions made. The Emotional Engine obtains the emotional effects of each stimulus based on emotional update rules and analyzes possible emotional effects of a decision based on current and global goals.
The Somatic Engine evaluates somatic associations between a decision option, its possible conflicts, and its possible emotional effects. It determines somatic rewards and punishments, in terms of positive and negative sensations that can remain latent in working memory.
The Decision Manager analyzes and defines the goals to be achieved, along with defining and analyzing decision options. It also applies decision rules and activates the execution of actions to achieve specific and global goals. The Decision Engine analyzes past goals and actions, defines possible decision courses, and verifies the existence of conflicts between decision options and the current goals to achieve. The Decision Selector generates and uses the selectivity index for candidate decisions, evaluates decisions using decision rules, and makes decisions and activates the executive processes.
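A structural sketch of the components described above, assuming each manager is a class; the method names are illustrative placeholders rather than a prescribed API.

```python
class StimulusManager:
    """Identifies and manages detected stimuli."""
    def detect(self, environment):
        # Stimulus Detector: determine whether a new stimulus is present
        raise NotImplementedError
    def analyze(self, stimulus):
        # Stimulus Engine: determine the origin and type of the stimulus
        raise NotImplementedError

class MemoryManager:
    """Manages both long-term and working memories."""
    def __init__(self):
        self.long_term_memory = []  # past experiences, somatic reactions
        self.working_memory = {}    # ongoing decision process

class AffectiveManager:
    """Emotional Engine plus Somatic Engine."""
    def emotional_effects(self, stimulus, goals):
        raise NotImplementedError
    def somatic_association(self, option, conflicts, possible_effects):
        raise NotImplementedError

class DecisionManager:
    """Decision Engine plus Decision Selector."""
    def courses_of_action(self, memory):
        raise NotImplementedError
    def select(self, candidates):
        # Decision Selector: highest selectivity index wins
        raise NotImplementedError
```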

5. Case Study

In order to illustrate the applicability of the general framework, a conceptual case study on the transportation of people in a tourism context is presented. Figure 4 shows a map of a hypothetical city that has a Central Quarter, an East Quarter, a West Quarter, a Great Tower area, and a Financial District. It should be noted that the distances on the map are merely referential.
The Central Quarter has a series of tourist places such as the Tourism Agency (point “A”), the National Bank (point “B”), the Contemporary Museum (point “C”), the Old Warehouse (point “D”), the Chocolate Factory (point “E”), the Urban Park (point “F”), and the Archeology Museum (point “G”). Outside the Central Quarter, in the East Quarter, is located the Poet’s House (point “H”). In the West Quarter stand the Six Towers (point “I”), famous for appearing in several internationally acclaimed films. Another tourist area much more distant from the Central Quarter is the Great Tower area (point “J”), a sector characterized by beautiful gardens and forests, whose main attraction is a high tower. On the outskirts of the city lies the famous Financial District (point “K”), known for its modern buildings with offices for financial services.
This conceptual case study considers the existence of a tourist bus, which begins its journey at point “A” and must follow the travel itinerary A-C-G-H-J-K. The vehicle has an autonomous agent built into its computing and navigation system. This autonomous agent receives as input data the travel itinerary to be followed during the day, the profile of the passengers transported (e.g., youth, senior citizens), and real-time information on the weather conditions and city vehicle traffic. It is capable of receiving voice messages from the driver, and at the same time, it delivers relevant information through audio and an interactive screen.
The journey begins smoothly from the Tourism Agency, the origin point “A”. The profile of the passengers transported is “foreign delegation”. The vehicle arrives safely at the Contemporary Museum, point “C”. Tourists get out of the vehicle and visit the place. Upon leaving the museum, the tourists board the vehicle again, which at that time receives information about traffic congestion in the vicinity of the Archeology Museum (point “G”), the next destination to visit. This information represents an external stimulus for the autonomous agent, which recognizes a decision point of the “threat” type. It is interpreted in this way since the received stimulus jeopardizes the fulfillment of the autonomous agent’s goals, specifically, complying with the travel itinerary and meeting the expectations of tourists.
The autonomous agent determines courses of action by searching for past actions related to “traffic congestion” and “foreign delegation”, observing in each case the emotional effect of the actions taken. Then, in the analysis of decision options, stimuli of the “threat” type have “high priority”; therefore, this type of stimulus receives rapid treatment through the generation of a reduced list of possible decision options. The aim is to select the decision option that offers greater confidence in the service, that is, the decision that generates greater satisfaction or well-being for the tourists of the foreign delegation.
Considering the above, the autonomous agent has several options: go directly to the next destination (point “G”) under the risk that, given the traffic congestion near the mentioned place, passengers spend a longer time in the vehicle without receiving greater service attention; keep all the tourist destinations of the travel itinerary but change the order of arrival at each place; eliminate the tourist destination under conflict (point “G”) from the travel itinerary; or carry out an “active pause” in which all passengers temporarily get off the vehicle, without modifying the tourist destinations of the travel itinerary.
In the analysis of decision options, the option of changing the order of arrival is quickly discarded (it requires more analysis time), deriving a reduced list of decision options for the next process of decision selection and performing of the final decision. In this context, fast decision rules apply, particularly choosing the option that generates greater confidence in the service. The autonomous agent determines that the most appropriate option is an active pause, suggesting a temporary stop at the “Chocolate Factory” while waiting for the traffic congestion problem to be solved within a reasonable period of time. The selection of this decision option is based on the fact that the Chocolate Factory is highly rated by tourists in online tourism systems. In particular, the autonomous agent’s own records indicate that visits to the Chocolate Factory have generated positive emotional effects in the past.
The decision made by the autonomous agent is communicated to the driver of the vehicle, both audibly and visually through a message on the driver’s interactive screen. In this context, it is important to note that in the present conceptual case study the autonomous agent does not govern the vehicle; the final decision on driving (routes followed, vehicle stops, speed) lies with the human driver. The autonomous agent records the decision made and all the data associated with it in both its working memory and its long-term memory. The driver, upon receiving the messages, decides to follow the suggestion of the autonomous agent and reports the situation to the foreign delegation on board. The vehicle stops, and the tourists get off to visit the Chocolate Factory. They take advantage of the moment to rest and shop for chocolates.
Moments before continuing the trip, with the passengers still not back in the vehicle, the autonomous agent independently activates an analysis of the possibilities related to the travel itinerary. This is interpreted as a stimulus of internal origin, recognizing a decision point under the label “fear”. The foregoing is based on the existing uncertainty about the degree of compliance with the initially defined travel itinerary. The autonomous agent obtains new information on the traffic congestion near point “G” and learns that its cause persists, since the Archeology Museum (point “G”) is receiving new material of historical value.
Considering the above, the autonomous agent determines courses of action by searching for past actions related to “traffic congestion”, “active pause”, “Archeology Museum”, “maintenance”, and “foreign delegation”, observing in each case the emotional effect of the actions taken. Then, in the analysis of decision options, stimuli of the “fear” type have “normal priority”. The aim is to generate a more complete list of decision options than in a “high priority” context, first analyzing potential conflicts between each possible decision option and the goals of the autonomous agent (that is, complying with the travel itinerary and meeting the expectations of tourists). Then, the possible emotional effects of each possible decision option are analyzed. Subsequently, the potential conflicts and emotional effects are used to search for somatic memories registered in long-term memory. If there is a positive somatic association, the decision option is labeled “positive”; if there is a negative somatic association, it is labeled “negative”; otherwise, it is labeled “neutral”. In this sense, the autonomous agent has several decision options: go directly to the Archeology Museum (“negative” label); take an additional active pause, waiting for traffic congestion to decrease (“negative” label); take the vehicle as close as possible to the Archeology Museum and wait with the passengers on board (“negative” label); or remove the Archeology Museum from the travel itinerary and continue with the rest of the itinerary (“neutral” label).
The list of decision options is sent to the next process, decision selection and performing of the final decision. In this context, given that the decision has “normal priority”, somatic rewards and punishments are first determined for each decision option. For the negative-labeled decision options, somatic punishment manifests as a high penalty in the respective selectivity index. In turn, the neutral-labeled decision option carries a somatic punishment for canceling the visit to a previously programmed tourist point and, at the same time, a somatic reward for reducing the “time lost” while waiting for an unavailable tourist point. The autonomous agent’s own records indicate that there is no ideal decision in this situation. However, the prolonged waiting time has generated a chain effect on the following points of the travel itinerary. In this sense, removing the point of conflict and continuing with the travel itinerary conveys greater reliability of the service and its ability to adapt to the context.
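As a hypothetical numeric illustration of this step, the somatic labels can be mapped to penalties and rewards that shape each selectivity index; the option names and weights below are illustrative only and not part of the framework.

```python
# Illustrative somatic penalty per label and reward per option
PENALTY = {"negative": 0.9, "neutral": 0.3, "positive": 0.0}
REWARD = {"remove_destination": 0.5}  # reduces the "time lost"

# The four options at this point of the case study and their labels
options = {
    "go_directly":        "negative",
    "additional_pause":   "negative",
    "wait_near_museum":   "negative",
    "remove_destination": "neutral",
}

# Selectivity index: somatic reward minus somatic punishment
selectivity = {opt: REWARD.get(opt, 0.0) - PENALTY[label]
               for opt, label in options.items()}
final_decision = max(selectivity, key=selectivity.get)
```

Even though the neutral option carries a punishment, its reward leaves it ahead of the heavily penalized negative options, matching the outcome described in the text.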
The autonomous agent decides to cancel the visit to the Archeology Museum and continue with the travel itinerary. This decision is communicated to the driver of the vehicle, both by voice and visually through a message on the driver’s interactive screen. The autonomous agent records the decision made and all the data associated with it in both its working memory and its long-term memory. The driver, upon receiving the messages, decides to follow the suggestion of the autonomous agent. At that moment, the tourists return to the vehicle and tell the driver how pleasant the visit to the Chocolate Factory was. This information is communicated to the autonomous agent through the driver’s voice. Likewise, complementary information on the perception of tourists is afterward incorporated into the autonomous agent, both from the application of surveys and from the processing of online reviews in social networks.
The driver informs the tourists about the cancellation of the visit to the Archeology Museum, which generates mixed reactions. The vehicle then heads to point “H”, the Poet’s House, where the visit takes place without any major events. Upon leaving point “H”, the autonomous agent receives information about a problem in accessing point “J”, the next tourist destination on the travel itinerary: there are improvement works in the access to the “Great Tower”, so only the option of following an alternative route (much longer than usual) remains. This information represents an external stimulus for the autonomous agent, which recognizes a decision point of the “threat” type. It is interpreted in this way since the stimulus received jeopardizes the fulfillment of the autonomous agent’s goals, specifically, complying with the travel itinerary and meeting the expectations of tourists.
The option of following the alternative route to the Great Tower generates another effect: the time would be insufficient to visit the Financial District (point “K”). Considering the above, the autonomous agent determines courses of action, searching for past actions related to “great tower”, “financial district”, “cancellation”, and “foreign delegation”, observing in each case the emotional effect of the actions executed. Then, in the analysis of decision options, stimuli of the “threat” type have “high priority”. Therefore, this type of stimulus receives rapid treatment through the generation of a reduced list of possible decision options. The aim is to select the decision option that offers greater confidence in the service, that is, the decision that generates greater satisfaction or well-being for the tourists of the foreign delegation. In this sense, the autonomous agent has the following options: take the alternative route to point “J” (Great Tower) and cancel the visit to point “K” (Financial District); or take the route to point “K”, additionally photographing the Six Towers located at point “I”, and cancel the visit to point “J”.
In the analysis of decision options, it is highlighted that keeping point “J” in the current travel itinerary opens the possibility of offering a discount voucher for visiting points “I” and “K”, since they correspond to the same route to follow on the way to the airport (located on the outskirts of the city). There are records of this decision being related to positive emotional effects in the past. The reduced list of decision options is sent to the next process of decision selection and performing of the final decision. In this context, fast decision rules apply, particularly choosing the option that generates greater confidence in the service. The autonomous agent selects the option to follow the alternative route and visit the “Great Tower”. The autonomous agent records the decision made and all the data associated with it in both its working memory and its long-term memory. The driver, upon receiving the messages, decides to follow the suggestion of the autonomous agent and reports the situation to the foreign delegation on board. The reactions of the tourists are mixed. However, offering a voucher for a new itinerary keeps alive the hope of visiting the still-missing tourist spots.

6. Discussion

The previous section described a conceptual case study of a possible application of the general framework in the domain of passenger transportation in a tourism context. The case study considers a tourist bus equipped with an autonomous agent incorporated into its computing and navigation system, with the ability to perform autonomous decision-making processes. The autonomous agent can receive information in real time from the outside, which is interpreted and processed as stimuli of external origin. In the same way, the autonomous agent can activate analysis processes independently, which are interpreted as stimuli of internal origin. The scenario of the case study shows a plausible decision-making context in the domain of passenger transportation where, even when a travel itinerary is defined in advance, the capacity to adapt that itinerary according to problems or contingencies arising on the route is necessary.
When the vehicle tries to travel from point “C” (Contemporary Museum) to point “G” (Archeology Museum), it is observed that, without an on-board mechanism for route assistance (i.e., the autonomous agent), the bus would likely have approached and entered deep into the area saturated with vehicular congestion, such that a return or change of route at that moment would have had a higher cost in time and a negative effect on the perception of the service by the foreign delegation on board. In the first attempt to travel to point “G”, the autonomous agent suggests taking an “active pause” at point “E” (Chocolate Factory). This suggestion is not whimsical or random; it is the product of a decision-making process performed according to the description given both in the different algorithms proposed in this research work and in their graphic description as a unified decision-making process (see Figure 2). The information about the traffic congestion near point “G” was interpreted as a threat, which ultimately enabled a quick decision. A search and analysis of actions that in the past increased confidence in the service and were associated with positive emotional effects was activated. The positive perception of the foreign delegation regarding this decision was later incorporated as feedback, which will allow reinforcing this type of somatic association in the autonomous agent.
On the other hand, the persistence of vehicular congestion on the route to point “G” makes the autonomous agent decide to remove the mentioned point from the travel itinerary. This decision was based on the fact that all other possible decision options had negative somatic associations, whereas the option of removing point “G” and continuing with the travel itinerary had a neutral somatic association. It is important to note that there is no positive way out in situations of this nature. In this sense, the autonomous agent relies on past somatic associations to guide its current decision.
Upon receiving information about problems in accessing point “J” (Great Tower), the autonomous agent again interprets this stimulus as a threat. The current threat differs from the previous one (the visit to the Archeology Museum), since the autonomous agent must now decide which destination to visit (and which destination to discard). The current decision should also consider the previous additional stop at the Chocolate Factory (active pause) and the removal of the Archeology Museum from the travel itinerary. The availability of a travel voucher to complete the travel itinerary at another time avoids further loss of confidence in the service. This option arises because in the past it has been associated with positive emotional effects.
It is important to note that the autonomous agent does not seek to minimize the operational cost associated with providing the service; nor does it seek to increase the operational cost unnecessarily. Essentially, an autonomous agent derived from this general framework seeks to make the most appropriate decision for each identified decision scenario. To do so, the autonomous agent requires somatic associations that allow it to determine what each detected stimulus represents. It also requires the availability of information on past events and goals, levels of success, and emotional effects. It further requires mechanisms to determine actions that maximize confidence or joy and to label or classify a decision option in positive, negative, or neutral terms using somatic associations. The existence of somatic rewards or punishments affects the selectivity index of a decision. In this way, the prevalence of a positive or negative feeling in the autonomous agent also influences its final decision.
Regarding the architecture, the general framework provides mechanisms for the following: the detection and analysis of stimuli; the management of somatic associations and emotional effects; the analysis of each decision option; the selection and performing of a decision option; the recording of the decision made in the working memory; and the recording of the decision-making process in the long-term memory. The layer of executive processes allows operationalizing the decisions made according to each scenario. In the conceptual case study, this layer materializes in terms of audio output and an interactive screen. Meanwhile, the stimulus management layer allows the input and management of external stimuli and is materialized through voice input, the interactive screen, and real-time data obtained through Internet connectivity.
The choice of threat and opportunity as types of stimulus is supported by the fact that they are easily linked to biological somatic markers. Meanwhile, the emotions considered correspond to basic emotions [63,64]. The advantage of considering these stimuli is that they allow clearly identifying “what is happening” and, then, associating a detected stimulus with possible decisions and actions. There are other so-called “secondary” emotions [1] that correspond to mixtures of primary emotions and other states. Considering secondary emotions, for example, would require specializing (modulating) the response according to the detected stimulus in much more precise terms. This could be considered as an additional line of research work.
The rationale for the choice of parameters is based on the categories identified from the methodology followed in this research work. The identification of categories led to their incorporation into the different algorithms, the unified decision-making process, and the general architecture. In the past, the use of primary emotions within artificial agents has been analyzed within stock markets [31], to examine the effect of emotional containment using emotional bands [30], and to examine the effect of emotional recovery (resilience) [29]. However, to date, there is no analysis of the incorporation of artificial somatic markers into the different phases of decision-making in autonomous agents.
The framework design abstracts the type of stimulus received. In this way, a real implementation can consider different ways of incorporating the mentioned stimuli (e.g., a numerical variation of a stock market indicator; the receipt of a message from third parties whose content evokes a memory; an individual analysis of the autonomous agent that generates a result to activate a decision-making process). For example, a real implementation may represent a stimulus as an “object”, that is, a particular instance of a class named “stimulus” (object-oriented paradigm), which is compatible with the agent-oriented paradigm.
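Under that object-oriented reading, a stimulus could be sketched as a simple Python dataclass whose fields mirror the framework's stimulus variables; the field names are illustrative, not prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    origin: str      # "internal" or "external"
    type: str        # e.g., "threat", "opportunity", or a primary emotion
    payload: object  # e.g., an indicator variation or a received message

# An external threat-type stimulus, as in the case study
s = Stimulus(origin="external", type="threat",
             payload="traffic congestion near point G")
```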
On the other hand, Algorithm 3 considers “high priority” decisions (for stimuli of the threat and opportunity types) and “normal priority” decisions. The advantage of “high priority” decision-making is that it guides a quick response from the autonomous agent. The results may be affected in the sense that not all possible decision options are examined in favor of responding quickly to the stimulus (as sometimes happens in humans). Another option is to modulate the type of threat or opportunity using a continuous scale and, in this way, obtain a “gradual response” according to the received stimulus, allowing intermediate stimulus types to emerge. This could be considered as future work.
A decision selectivity index is suggested in order to compare the different possible decision options and guide the choice among them. Another possible mechanism is for the autonomous agent to choose the first available decision option (without further deliberation), which would bring the autonomous agent closer to an “impulsive” profile where, essentially, decisions are made without performing the analysis processes. Having a selectivity index also offers a flexible perspective on the available decision options, since it eventually allows each autonomous agent to have its own mechanism for evaluating decision options, that is, to define its own priorities and decide based on them.
The conceptual case study, presented in terms of a map of a hypothetical city and the circumstances described in the story, was specifically designed to present and visualize possible applications of the general framework for incorporating artificial somatic markers in the decision-making of autonomous agents. It is not intended to present an ideal scenario where all situations occur according to plan but, rather, to show how an autonomous agent could perform in an interactive environment with real humans, in a decision scenario under pressure, and with decision-making factors that can change over time. The context of on-demand transportation of people is a scenario complex enough to illustrate the possibilities of application of the current proposal.
There are other scenarios that could also illustrate the applicability of the current proposal. For example, in an online shopping context, a limited-time offer may represent an external stimulus. In the same way, in a stock market context, a deep fall in market indicators can also constitute an external stimulus. In both cases, somatic memories can be recalled and considered within the decision-making process.
In general, this framework has greater applicability when the decision scenario can be defined within the variables and processes considered in the current proposal. This implies, for example, that external stimuli must belong to one of the two currently defined types (threat or opportunity) and that internal stimuli are associated with one of the primary emotions considered in the current proposal. In this sense, purchasing assistant systems or systems for risk analysis in investments can represent application examples.
On the other hand, there are scenarios in which the direct application of this framework would be more difficult. For example, the current version does not consider in its design the availability of stimuli that facilitate communication with other autonomous agents within the contexts of cooperation and competition. External stimuli such as empathy or compassion could modify, for example, the recognition of a decision point in a context of cooperation. Similarly, aspects such as loyalty could require the existence of additional mechanisms for the elaboration of decision options. On the other hand, external stimuli such as rivalry or enmity could also modify the recognition of a decision point. Examples of frameworks for agent collaboration and competition are [65,66,67,68].
The present framework does not have special considerations for mobile autonomous agents, that is, agents that must travel through different environments (or “containers”), adapting to specific conditions of each environment in which they must make decisions. For this, the present framework could be extended by incorporating a new type of stimulus (mobility), adding greater complexity and richness at the algorithmic level and within the unified decision-making process. Examples of frameworks for mobile agents are [69,70,71]. Moreover, the current proposal considers only two major priority levels (high priority and normal priority), so if a disaggregated level of priorities is required (e.g., different alert levels), an extension of the features of this proposal would be required. The above could be explored using soft computing techniques [72,73,74].
In the literature, it is possible to find different architectural proposals for the design and implementation of autonomous agents. However, to the best of our knowledge, no existing framework supports incorporating artificial somatic markers in the different phases of decision-making in autonomous agents. This opens a new line of research on the design and implementation of systems based on autonomous agents in various application fields, where the decision-making process can be guided by the availability and use of artificial somatic markers.
The present research work does not seek to analyze the decision-making of an autonomous agent from the standpoint of game theory but, rather, to offer a perspective from artificial intelligence, cognitive psychology, and neuroscience. In this sense, it is recognized, for example, that decision-making is the consequence of a permanent rational-emotional process in which, depending on the context, the affective and somatic dimension may carry greater weight in each choice. Aspects such as intransitivity of preferences or procedure invariance are not specifically studied in this work. However, incorporating artificial somatic markers in autonomous agents could help illustrate how humans decide in certain scenarios and, in this way, increase the available knowledge about human decision-making.

7. Conclusions

A general framework for incorporating artificial somatic markers in the decision-making of autonomous agents has been presented. This framework considers a collection of algorithms for different phases of a decision: recognition of decision point; determination of the courses of action; analysis of decision options; decision selection and performing; and memory management. These phases have been integrated into a unified decision-making process. Likewise, a general architecture that guides the implementation of artificial somatic markers in autonomous agents has been designed. A conceptual case study on the transportation of people in a tourism context is presented.
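The five phases named above can be sketched as a simple loop in which stored somatic markers bias each step. This is an illustrative skeleton only, under our own assumptions: the helper functions are placeholders, and the marker store is simplified to a dictionary of accumulated reward/punishment scores per option.

```python
def is_decision_point(percept) -> bool:
    # Placeholder: any non-empty percept triggers a decision.
    return bool(percept)

def courses_of_action(percept):
    # Placeholder: the percept directly lists the available options.
    return list(percept)

def perform(choice) -> float:
    # Placeholder outcome signal; a real agent would observe the environment
    # and derive a somatic reward (positive) or punishment (negative) here.
    return 1.0

def decide(percept, somatic_memory):
    """Skeleton of the unified decision-making process (phases 1-5)."""
    if not is_decision_point(percept):             # 1. recognize decision point
        return None
    options = courses_of_action(percept)           # 2. determine courses of action
    scored = {o: somatic_memory.get(o, 0.0)        # 3. analyze options, biased by
              for o in options}                    #    stored somatic markers
    choice = max(scored, key=scored.get)           # 4. select and perform
    outcome = perform(choice)
    somatic_memory[choice] = scored[choice] + outcome  # 5. memory management
    return choice
```

Options previously associated with somatic reward are thereby favored in future decisions, which is the essence of the marker mechanism; the framework's actual algorithms disaggregate each phase far beyond this sketch.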
The scope of this work is to present the design of a framework for incorporating artificial somatic markers within autonomous agents. To the best of our knowledge, no comparable proposal exists. Each future line of research represents, by itself, an independent contribution given the complexity involved in implementing all the possible decision environments.
A limitation of this research work is that it considers only two main types of external stimuli: threats and opportunities. Another limitation is that it considers only five primary emotions: joy, sadness, fear, surprise, and anger. A third limitation is the simplification of the cognitive processes that underlie human decision-making. A further limitation concerns the presentation of a conceptual case study, which does not make it possible to fully visualize the benefits and possible difficulties of implementing artificial somatic markers in autonomous agents for real-life scenarios.
These limitations mean that a gap remains in the analysis and knowledge of the possibilities of incorporating artificial somatic markers in autonomous agents. For example, considering an external stimulus such as empathy could lead to an analysis of how this type of stimulus guides the decision-making of an autonomous agent within a negotiation or collaborative work scenario. The same applies, for example, to the consideration of secondary emotions and to the inclusion of more complex cognitive processes that go beyond the deliberative (e.g., reflective processes).
A possible future line of work is to extend the types of stimuli considered within the general framework, specifically by incorporating stimuli that represent other aspects of human affectivity, such as empathy, sarcasm, and humor. This could increase the sensitivity and precision of the autonomous agent with respect to the type of stimulus identified, something especially useful in interactive processes. Another future line corresponds to extending the architecture design by incorporating new components, or by disaggregating the currently defined main components. This would allow, for example, further specialization of the architecture components according to the specific nature of each decision. Finally, future work includes the definition of different real application scenarios in which to implement and test the incorporation of artificial somatic markers within autonomous agents.

Author Contributions

Conceptualization, D.C.; investigation, D.C. and C.C.; methodology, D.C., C.C., E.U. and R.M.; project administration, D.C.; writing—original draft preparation, D.C., C.C., E.U. and R.M.; writing—review and editing, D.C., C.C., E.U. and R.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by ANID Chile through FONDECYT INICIACION Project No. 11190370.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Damasio, A. Descartes’ Error: Emotion, Rationality and the Human Brain; Putnam: New York, NY, USA, 1994; ISBN 0380726475. [Google Scholar]
  2. Damasio, A. Self Comes to Mind: Constructing the Conscious Brain; Pantheon Books: New York, NY, USA, 2010; ISBN 9780307378750. [Google Scholar]
  3. Caskey, T.R.; Wasek, J.S.; Franz, A.Y. Deter and protect: Crime modeling with multi-agent learning. Complex Intell. Syst. 2018. [Google Scholar] [CrossRef] [Green Version]
  4. Wuthishuwong, C.; Traechtler, A. Distributed control system architecture for balancing and stabilizing traffic in the network of multiple autonomous intersections using feedback consensus and route assignment method. Complex Intell. Syst. 2020. [Google Scholar] [CrossRef] [Green Version]
  5. Arokiasami, W.A.; Vadakkepat, P.; Tan, K.C.; Srinivasan, D. Interoperable multi-agent framework for unmanned aerial/ground vehicles: Towards robot autonomy. Complex Intell. Syst. 2016. [Google Scholar] [CrossRef] [Green Version]
  6. Vizzari, G.; Crociani, L.; Bandini, S. An agent-based model for plausible wayfinding in pedestrian simulation. Eng. Appl. Artif. Intell. 2020. [Google Scholar] [CrossRef]
  7. Cubillos, C.; Díaz, R.; Urra, E.; Cabrera, G.; Lefranc, G.; Cabrera-Paniagua, D. An agent-based solution for the berth allocation problem. Int. J. Comput. Commun. Control 2013. [Google Scholar] [CrossRef] [Green Version]
  8. Cabrera-Paniagua, D.; Herrera, G.; Cubillos, C.; Donoso, M. Towards a model for dynamic formation and operation of virtual organizations for transportation. Stud. Inform. Control 2011. [Google Scholar] [CrossRef]
  9. Cubillos, C.; Donoso, M.; Rodríguez, N.; Guidi-Polanco, F.; Cabrera-Paniagua, D. Towards open agent systems through dynamic incorporation. Int. J. Comput. Commun. Control 2010. [Google Scholar] [CrossRef] [Green Version]
  10. Briola, D.; Micucci, D.; Mariani, L. A platform for P2P agent-based collaborative applications. Softw. Pract. Exp. 2019. [Google Scholar] [CrossRef] [Green Version]
  11. Shiang, C.W.; Tee, F.S.; Halin, A.A.; Yap, N.K.; Hong, P.C. Ontology reuse for multiagent system development through pattern classification. Softw. Pract. Exp. 2018. [Google Scholar] [CrossRef] [Green Version]
  12. Iribarne, L.; Asensio, J.A.; Padilla, N.; Criado, J. Modeling Big data-based systems through ontological trading. Softw. Pract. Exp. 2017. [Google Scholar] [CrossRef] [Green Version]
  13. Vallejo, D.; Castro-Schez, J.J.; Glez-Morcillo, C.; Albusac, J. Multi-agent architecture for information retrieval and intelligent monitoring by UAVs in known environments affected by catastrophes. Eng. Appl. Artif. Intell. 2020. [Google Scholar] [CrossRef]
  14. Weiss, G. Multiagent Systems, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2016; ISBN 9780262533874. [Google Scholar]
  15. Wooldridge, M. An Introduction to MultiAgent Systems, 2nd ed.; Wiley: Glasgow, Scotland, 2009; ISBN 978-0-470-51946-2. [Google Scholar]
  16. Sterling, L.; Taveter, K. The Art of Agent-Oriented Modeling, 1st ed.; The MIT Press: Cambridge, MA, USA, 2009; ISBN 9780262013116. [Google Scholar]
  17. Salovey, P.; Detweiler-Bedell, B.; Detweiler-Bedell, J.; Mayer, J. Emotional intelligence. In Handbook of Emotions; Lewis, M., Haviland-Jones, J., Feldman, L., Eds.; The Guilford Press: New York, NY, USA, 2010; pp. 533–547. ISBN 9781609180447. [Google Scholar]
  18. Tikhomirova, D.V.; Chubarov, A.A.; Samsonovich, A.V. Empirical and modeling study of emotional state dynamics in social videogame paradigms. Cogn. Syst. Res. 2020. [Google Scholar] [CrossRef]
  19. Desideri, L.; Ottaviani, C.; Malavasi, M.; di Marzio, R.; Bonifacci, P. Emotional processes in human-robot interaction during brief cognitive testing. Comput. Human Behav. 2019. [Google Scholar] [CrossRef]
  20. Araujo, T. Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput. Hum. Behav. 2018. [Google Scholar] [CrossRef]
  21. Rincon, J.A.; Costa, A.; Villarrubia, G.; Julian, V.; Carrascosa, C. Introducing dynamism in emotional agent societies. Neurocomputing 2018. [Google Scholar] [CrossRef]
  22. Rincon, J.A.; de la Prieta, F.; Zanardini, D.; Julian, V.; Carrascosa, C. Influencing over people with a social emotional model. Neurocomputing 2017. [Google Scholar] [CrossRef]
  23. Yokotani, K.; Takagi, G.; Wakashima, K. Advantages of virtual agents over clinical psychologists during comprehensive mental health interviews using a mixed methods design. Comput. Hum. Behav. 2018. [Google Scholar] [CrossRef]
  24. Reis, R.C.D.; Isotani, S.; Rodriguez, C.L.; Lyra, K.T.; Jaques, P.A.; Bittencourt, I.I. Affective states in computer-supported collaborative learning: Studying the past to drive the future. Comput. Educ. 2018. [Google Scholar] [CrossRef]
  25. Cabrera, D.; Cubillos, C. Multi-agent framework for a virtual enterprise of demand-responsive transportation. In Advances in Artificial Intelligence, Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 1st ed.; Bergler, S., Ed.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 66–71. [Google Scholar]
  26. Urra, E.; Cubillos, C.; Cabrera-Paniagua, D.; Mellado, R. hMod: A software framework for assembling highly detailed heuristics algorithms. Softw. Pract. Exp. 2019. [Google Scholar] [CrossRef]
  27. Cabrera, D.; Araya, N.; Jaime, H.; Cubillos, C.; Vicari, R.M.; Urra, E. Defining an Affective Algorithm for Purchasing Decisions in E-Commerce Environments. IEEE Lat. Am. Trans. 2015. [Google Scholar] [CrossRef]
  28. Cabrera-Paniagua, D.; Primo, T.T.; Cubillos, C. Distributed stock exchange scenario using artificial emotional knowledge. In Advances in Artificial Intelligence-IBERAMIA 2014, Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 1st ed.; Bazzan, A.L.C., Pichara, K., Eds.; Springer: Cham, Switzerland, 2014; pp. 649–659. [Google Scholar] [CrossRef]
  29. Cabrera, D.; Rubilar, R.; Cubillos, C. Resilience in the Decision-Making of an Artificial Autonomous System on the Stock Market. IEEE Access 2019. [Google Scholar] [CrossRef]
  30. Cabrera, D.; Cubillos, C.; Cubillos, A.; Urra, E.; Mellado, R. Affective Algorithm for Controlling Emotional Fluctuation of Artificial Investors in Stock Markets. IEEE Access 2018. [Google Scholar] [CrossRef]
  31. Cabrera-Paniagua, D.; Cubillos, C.; Vicari, R.; Urra, E. Decision-making system for stock exchange market using artificial emotions. Expert Syst. Appl. 2015. [Google Scholar] [CrossRef]
  32. Hoefinghoff, J.; Steinert, L.; Pauli, J. Implementation of a Decision Making Algorithm Based on Somatic Markers on the Nao Robot. In Autonomous Mobile Systems 2012; Levi, P., Zweigle, O., Häußermann, K., Eckstein, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 69–77. [Google Scholar]
  33. Hoogendoorn, M.; Merk, R.-J.; Treur, J. A Decision Making Model Based on Damasio’s Somatic Marker Hypothesis. In Proceedings of the 9th International Conference on Cognitive Modeling, Manchester, UK, 24–26 July 2009; pp. 1001–1009. [Google Scholar]
  34. Cominelli, L.; Mazzei, D.; Pieroni, M.; Zaraki, A.; Garofalo, R.; De Rossi, D. Damasio’s somatic marker for social robotics: Preliminary implementation and test. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer International Publishing: Cham, Switzerland, 2015. [Google Scholar]
  35. Hernandez, R.; Fernandez, C.; Baptista, M. Metodologia de la Investigación; McGrawHill: Mexico City, Mexico, 2014; ISBN 9781456223960. [Google Scholar]
  36. Smith, E.; Kosslyn, S. Procesos Cognitivos: Modelos y Bases Neurales; Pearson Prentice Hall: Madrid, Spain, 2008; ISBN 9788483223963. [Google Scholar]
  37. Schoemaker, P.J.H. The Expected Utility Model: Its Variants, Purposes, Evidence and Limitations. J. Econ. Lit. 1982, 20, 529–563. [Google Scholar]
  38. Simon, H.A. A behavioral model of rational choice. Q. J. Econ. 1955. [Google Scholar] [CrossRef]
  39. Tversky, A.; Kahneman, D. Advances in prospect theory: Cumulative representation of uncertainty. J. Risk Uncertain. 1992. [Google Scholar] [CrossRef]
  40. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision under Risk Daniel. Econometrica 1979. [Google Scholar] [CrossRef] [Green Version]
  41. Tversky, A.; Kahneman, D. Judgment under uncertainty: Heuristics and biases. Science 1974. [Google Scholar] [CrossRef]
  42. So, J.; Achar, C.; Han, D.H.; Agrawal, N.; Duhachek, A.; Maheswaran, D. The psychology of appraisal: Specific emotions and decision-making. J. Consum. Psychol. 2015. [Google Scholar] [CrossRef]
  43. Sellers, M. Toward a comprehensive theory of emotion for biological and artificial agents. Biol. Inspired Cogn. Archit. 2013. [Google Scholar] [CrossRef]
  44. Bloch, S. Surfeando la Ola Emocional; Uqbar Editores: Santiago, Chile, 2008. [Google Scholar]
  45. LeDoux, J.E. The Emotional Brain: The Mysterious Underpinnings of Emotional Life; Simon & Schuster: New York, NY, USA, 1998. [Google Scholar]
  46. Ortony, A.; Clore, G.L.; Collins, A. The Cognitive Structure of Emotions; Cambridge University Press: Cambridge, UK, 1988. [Google Scholar]
  47. Bartol, J.; Linquist, S. How do somatic markers feature in decision making? Emot. Rev. 2015. [Google Scholar] [CrossRef] [Green Version]
  48. Höfinghoff, J.; Steinert, L.; Pauli, J. An easily adaptable Decision Making Framework based on Somatic Markers on the Nao-Robot. Kogn. Syst. 2013, 2013. [Google Scholar] [CrossRef]
  49. Buelow, M.T.; Suhr, J.A. Construct validity of the Iowa gambling task. Neuropsychol. Rev. 2009, 19, 102–114. [Google Scholar] [CrossRef] [PubMed]
  50. Samsonovich, A.V. Socially emotional brain-inspired cognitive architecture framework for artificial intelligence. Cogn. Syst. Res. 2020. [Google Scholar] [CrossRef]
  51. Chang, K.C.; Chu, K.C.; Wang, H.C.; Lin, Y.C.; Pan, J.S. Agent-based middleware framework using distributed CPS for improving resource utilization in smart city. Future Gener. Comput. Syst. 2020. [Google Scholar] [CrossRef]
  52. Jordán, J.; Bajo, J.; Botti, V.; Julian, V. An abstract framework for non-cooperative multi-agent planning. Appl. Sci. 2019, 9, 5180. [Google Scholar] [CrossRef] [Green Version]
  53. Wang, J.; Sun, L. Dynamic holding control to avoid bus bunching: A multi-agent deep reinforcement learning framework. Transp. Res. Part C Emerg. Technol. 2020. [Google Scholar] [CrossRef]
  54. Sun, F.; Yu, J. Indoor intelligent lighting control method based on distributed multi-agent framework. Optik 2020. [Google Scholar] [CrossRef]
  55. Yalçın, Ö.N. Empathy framework for embodied conversational agents. Cogn. Syst. Res. 2020. [Google Scholar] [CrossRef]
  56. Behdani, B.; Lukszo, Z.; Srinivasan, R. Agent-oriented simulation framework for handling disruptions in chemical supply chains. Comput. Chem. Eng. 2019. [Google Scholar] [CrossRef]
  57. Sánchez, Y.; Coma, T.; Aguelo, A.; Cerezo, E. ABC-EBDI: An affective framework for BDI agents. Cogn. Syst. Res. 2019. [Google Scholar] [CrossRef]
  58. Rosales, J.H.; Rodríguez, L.F.; Ramos, F. A general theoretical framework for the design of artificial emotion systems in Autonomous Agents. Cogn. Syst. Res. 2019. [Google Scholar] [CrossRef]
  59. Maysami, A.M.; Elyasi, G.M. Designing the framework of technological entrepreneurship ecosystem: A grounded theory approach in the context of Iran. Technol. Soc. 2020. [Google Scholar] [CrossRef]
  60. Huang, B.; Li, H.; Chen, M.; Lin, N.; Wang, Z. Theoretical framework construction on care complexity in Chinese hospitals: A grounded theory study. Int. J. Nurs. Sci. 2019. [Google Scholar] [CrossRef]
  61. King, E.L.; Snowden, D.L. Serving on multiple fronts: A grounded theory model of complex decision-making in military mental health care. Soc. Sci. Med. 2020. [Google Scholar] [CrossRef] [PubMed]
  62. Božič, B.; Siebert, S.; Martin, G. A grounded theory study of factors and conditions associated with customer trust recovery in a retailer. J. Bus. Res. 2020. [Google Scholar] [CrossRef]
  63. Ekman, P. Emotion in the Human Face; Cambridge University Press: Cambridge, UK, 1982; ISBN 0521239923. [Google Scholar]
  64. Ekman, P. An Argument for Basic Emotions. Cogn. Emot. 1992. [Google Scholar] [CrossRef]
  65. Florez-Lozano, J.; Caraffini, F.; Parra, C.; Gongora, M. Cooperative and distributed decision-making in a multi-agent perception system for improvised land mines detection. Inf. Fusion 2020. [Google Scholar] [CrossRef]
  66. Hawley, L.; Suleiman, W. Control framework for cooperative object transportation by two humanoid robots. Rob. Auton. Syst. 2019. [Google Scholar] [CrossRef] [Green Version]
  67. Di Febbraro, A.; Sacco, N.; Saeednia, M. An agent-based framework for cooperative planning of intermodal freight transport chains. Transp. Res. Part C Emerg. Technol. 2016. [Google Scholar] [CrossRef]
  68. Chahla, G.A.; Zoughaib, A. Agent-based conceptual framework for energy and material synergy patterns in a territory with non-cooperative governance. Comput. Chem. Eng. 2019. [Google Scholar] [CrossRef]
  69. Xiong, W.; Lu, Z.; Li, B.; Wu, Z.; Hang, B.; Wu, J.; Xuan, X. A self-adaptive approach to service deployment under mobile edge computing for autonomous driving. Eng. Appl. Artif. Intell. 2019. [Google Scholar] [CrossRef]
  70. Bottarelli, L.; Bicego, M.; Blum, J.; Farinelli, A. Orienteering-based informative path planning for environmental monitoring. Eng. Appl. Artif. Intell. 2019. [Google Scholar] [CrossRef]
  71. Zitouni, M.S.; Sluzek, A.; Bhaskar, H. Visual analysis of socio-cognitive crowd behaviors for surveillance: A survey and categorization of trends and methods. Eng. Appl. Artif. Intell. 2019. [Google Scholar] [CrossRef]
  72. Vaughan, N.; Gabrys, B. Scoring and assessment in medical VR training simulators with dynamic time series classification. Eng. Appl. Artif. Intell. 2020. [Google Scholar] [CrossRef]
  73. Fan, Y.; Xu, K.; Wu, H.; Zheng, Y.; Tao, B. Spatiotemporal Modeling for Nonlinear Distributed Thermal Processes Based on KL Decomposition, MLP and LSTM Network. IEEE Access 2020. [Google Scholar] [CrossRef]
  74. Shamshirband, S.; Rabczuk, T.; Chau, K.W. A Survey of Deep Learning Techniques: Application in Wind and Solar Energy Resources. IEEE Access 2019. [Google Scholar] [CrossRef]
Figure 1. General view of decision-making phases.
Figure 2. Unified decision-making process considering artificial somatic markers.
Figure 3. General architecture for considering artificial somatic markers within decision making.
Figure 4. Map of a hypothetical city (Source: Own elaboration).
Table 1. Open, axial and selective coding.

Open Coding (Preliminary Categories) | Axial Coding (Main Categories or Themes) | Selective Coding (Core Categories)
External stimulus; changes in the environment; internal stimulus; own analysis; own goals. | Stimulus origin | Stimulus
Threat; opportunity; joy; sadness; fear; surprise; anger; neutrality. | Stimulus type | Stimulus
High priority; normal priority. | Priority level | Decision priority
Positive somatic memory; negative somatic memory; neutral somatic memory. | Type of somatic memory | Artificial somatic memory
Somatic reward; somatic punishment. | Somatic feelings | Artificial somatic feelings
Past goals; past actions; past success level; past emotional effects; decision options; decision rules; decision conflicts; emotional effects; selectivity index. | Decision goals, rules and effects | Decision factors
Selecting actions for achieving goals. | Current actions | Executive processes
Long-term memories; short-term memories. | Long-term/working memory | Memory

Cabrera, D.; Cubillos, C.; Urra, E.; Mellado, R. Framework for Incorporating Artificial Somatic Markers in the Decision-Making of Autonomous Agents. Appl. Sci. 2020, 10, 7361. https://doi.org/10.3390/app10207361
