Article

Interdependent Autonomous Human–Machine Systems: The Complementarity of Fitness, Vulnerability and Evolution

by
William F. Lawless
Departments of Mathematics and Psychology, Paine College, Augusta, GA 30901, USA
Entropy 2022, 24(9), 1308; https://doi.org/10.3390/e24091308
Submission received: 27 June 2022 / Revised: 25 August 2022 / Accepted: 6 September 2022 / Published: 15 September 2022

Abstract

For the science of autonomous human–machine systems, traditional causal-time interpretations of reality in known contexts are sufficient for rational decisions and actions to be taken, but not for uncertain or dynamic contexts, nor for building the best teams. First, unlike game theory where the contexts are constructed for players, or machine learning where contexts must be stable, when facing uncertainty or conflict, a rational process is insufficient for decisions or actions to be taken; second, as supported by the literature, rational explanations cannot disaggregate human–machine teams. In the first case, interdependent humans facing uncertainty spontaneously engage in debate over complementary tradeoffs in a search for the best path forward, characterized by maximum entropy production (MEP); however, in the second case, signified by a reduction in structural entropy production (SEP), interdependent team structures make it rationally impossible to discern what creates better teams. In our review of evidence for SEP–MEP complementarity for teams, we found that structural redundancy for top global oil producers, replicated for top global militaries, impedes interdependence and promotes corruption. Next, using UN data for Middle Eastern North African nations plus Israel, we found that a nation’s structure of education is significantly associated with MEP by the number of patents it produces; this conflicts with our earlier finding that a U.S. Air Force education in air combat maneuvering was not associated with the best performance in air combat, but air combat flight training was. These last two results exemplify that SEP–MEP interactions by the team’s best members are made by orthogonal contributions. We extend our theory to find that competition between teams hinges on vulnerability, a complementary excess of SEP and reduced MEP, which generalizes to autonomous human–machine systems.

1. Introduction

Our objective for this work-in-progress is to develop mathematical physics models of autonomous human–machine teams and systems (A-HMT-S) to guide human–machine systems. Our success thus far began by attributing the social sciences replication crisis [1] to a science based on independent individuals [2]; this resulted in “causing” many static theories in social science to not only become invalid (e.g., self-esteem [3]; the implicit attitudes test [4]; ego depletion theory [5]), but also non-generalizable to theories of human–human interaction, let alone to autonomous human–machine teams and systems (A-HMT-S).
To better illustrate this problem in social science, the implicit attitudes test (IAT) has become widely used at great expense to industry, even though the concept is invalid [3] and its interventions ineffective [6]. Later, the lead scientist on the team that found the IAT to be invalid, Tetlock, wrote a book about training practitioners to become “superforecasters” [7]; Tetlock started a public website with challenges for his hand-picked and “trained superforecasters” to make predictions; his first two superforecasts were that Brexit would not happen and that Trump would not become President of the U.S. (http://goodjudgment.com/superforecasting/index.php/2016/11/03/is-donald-trump-mr-brexit/; replaced with gjopen.com (accessed on 15 December 2016)); both predictions failed. Additionally, of note, the leader of the group that found self-esteem to be invalid, Baumeister, produced ego-depletion theory, which became one of the leading theories in social psychology [8]; it is a concept that has since been found to be questionable [5].
Machine learning has limits, too. Machine-learned intelligence is non-generalizable, e.g., [9] reports that “current machine learning systems typically cannot identify differences in contexts, let alone adapt to them,” likely due to machine learning’s “unreliable empirical risk minimizer” on known sets of training data [10]. Machine learning programs trained with big data are satisfactory for known, stable contexts [11], but not for new, uncertain, or dynamic contexts (e.g., social contexts), where the failure of big data has been attributed to its being independent and identically distributed (i.i.d.) [12], precluding Shannon information from being able to reconstruct whatever social event is being observed.
To counter these problems and to build the science of autonomy, investigators have begun to call for a new direction [12]. Echoing this call, the National Security Commission on Artificial Intelligence (NSCAI; in [13]) reported that the “profound” changes being wrought by AI (and machine learning) posed technological concerns that had to be addressed to prevent war and to safeguard freedom. To prepare for autonomy, in this article, we address these calls for a new direction and some of the concerns raised about AI. We propose that the new direction must address environmental uncertainty, and we explain in this study why models based on logic cannot. The National Academy of Sciences provided support for this new direction by reviewing the research gaps in this new field, by reviewing interdependence and by addressing the need for open science research conducted in the field as opposed to laboratory research [14]; we cover in this study: autonomy; uncertainty; and open science field research.

2. History: Complementarity and Interdependence

Bohr [15] noted for “the ideal of causality, that the behavior of a physical object relative to a given system of coordinates is uniquely determined, quite independently of whether it is observed or not” (p. 290). Causality ended at the atomic level, however, when Heisenberg’s uncertainty relations for an object’s pairs of conjugate factors (e.g., position–momentum; energy–time) established that one member of a pair could be known precisely, but not both simultaneously [15]. Bohr wanted to generalize Heisenberg’s principle of uncertainty. He learned that William James, the psychologist, distinguished between a concept and its expression: “consciousness may be split into parts which coexist but mutually ignore each other, and share the objects of knowledge between them. More remarkable still, they are complementary” ([16], p. 204).
With this idea, Bohr began to replace causality with complementarity to reflect dependence between a human actor and observer. According to Bohr, the dependency in complementarity occurs when an object interacts with a measurement instrument, that is, when they become interdependent. Bohr [17] generalized complementarity to the behavior of individuals; to actors and observers; and to Culture A and Culture B. In these and other relationships, Bohr used the term complementarity to describe interdependent elements that combined concepts and behaviors. He concluded that these dependent roles, say of an actor and an observer of the actor, both collect different information from reality.
For this new direction, there is research in social science that can help. One theory in social science that is generalizable separates team structure [18] from performance [19,20]. Another generalizable social concept, borrowed from Schrödinger [21], from Systems Engineers [22] and from Social Psychologists ([23], p. 146), is the claim that the whole is greater than the sum of its parts. Schrödinger [21,24], the quantum theorist who discovered entanglement, concluded that the dependency among parts interacting inside of a whole meant that a whole with dependent parts could be known, but not its parts simultaneously. From our perspective, if the parts reflect autonomous, interdependent humans acting in orthogonal roles, the information that they generate may not correlate, contributing to the replication crisis and yet advancing the science of complementary interdependence [20].
We present evidence in this article that rational models derived from individuals fail when exposed to conflict and uncertainty [25], whereas complementarity is effective and generalizable.

3. Background: Why Team Fitness and Entropy?

The motivation for human–machine teams and systems is being driven by the need to make decisions faster, more accurately and more safely than can humans alone. Examples exist across the whole of society, with the arrival of autonomous transportation [26], the introduction of autonomy into mundane banking and advanced finance processes [27], the sophisticated applications of “medical and industrial robots, agricultural and manufacturing facilities, traffic management systems” [28], and their introduction into complex weapons systems such as hypersonic missiles (e.g., [29]). However, for weapons, the editors of the New York Times [30] warned of the threat to humanity from errant autonomous systems, citing the UN Secretary General, Antonio Guterres, who stated that “machines with the power and discretion to take lives without human involvement … should be prohibited by international law.” The Editorial recommended that “humans [should] never completely surrender life and death choices in combat to machines.”
Our response to the New York Times exemplifies the interdependence found in every aspect of society. Interdependence has long been a part of the human condition, a thread running through diplomacy (e.g., Henry Kissinger), economics (Adam Smith), the law (Richard Freer’s law student casebook), war (von Clausewitz and Emile Simpson) and governance (Kissinger, Montesquieu and Madison). For example, according to Kissinger [31], “diplomacy is the art of restraining power.” In Smith’s [32] view of business, the economic wealth of a nation is promoted when its citizens are free to pursue their own interest in their own way: “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest.” Freer and Purdue [33] write that justice in the courtroom is found from the arguments of equally competent lawyers. According to von Clausewitz [34], “war is not merely a political act but a real political instrument, a continuation of political intercourse, a carrying out of the same by other means” (p. 280); Simpson [35] adds that war is a competition to impose meaning (p. 35). For the least corrupt form of governance, according to Kissinger ([31], pp. 18, 21), the “realities of interdependence” self-organize the balance of power shared among nations. Within a nation, Montesquieu’s concept of checks and balances to reduce corruption was adopted by Madison [36] so that “ambition must be made to counteract ambition,” a concept now enshrined in the U.S. Constitution. These examples of interdependence provide the backdrop for our response to the Editors of the New York Times: What about a machine that can interdependently counter a copilot who might be intent on committing suicide, or interdependently guard against a distracted train engineer or a negligent crew aboard a ship? Should we not permit the most capable agents, human or machine, to save humans under threat [37], even when that mechanism is a machine? An example of this is the case of unconscious fighter pilots saved by their AI: https://futurism.com/the-byte/fighter-pilots-blacked-out-saved-ai (accessed on 15 March 2021).
What we have established at this point is that social science is not generalizable to autonomous human–machine teams and systems and that a similar problem exists with machine learning, but that human life entails interdependence in every context. Yet, these new teams and systems are soon to arrive. How to construct them and measure their performance are critical questions. Part of the problem with social science is that it has relied on questionnaires, interviews, observations and other measurements that interact with the individual humans being observed [20]. Like the measurement of quantum objects, we have known for decades that interdependent social objects are affected when they are being observed (i.e., the Hawthorne effect [38]). In contrast to the traditional social science of individuals, we propose two new measures for teams: structural entropy production (SEP) and maximum entropy production (MEP). We shall argue that these two entropy measures offer a more objective means to determine the performance of autonomous systems.
The plan for this manuscript is as follows: next, we review the theory of complementarity and interdependence, the mathematics and its relationship to entropy, and then our Research Design, Results, Discussion and Conclusions.

4. Theory

Entropy is a thermodynamic quantity in a system that reduces the system’s available energy to perform useful work [39]. We will argue that the structure (the configuration, the skills and roles of team members, etc.) of a team is complementary to its performance: the more available energy is expended to make a team’s structure effective, the less available energy the team or system has left to perform useful work.
Trusted artificial intelligence (AI) in human–machine teams places more emphasis on a solution than the science of individuals can support [14]. We first addressed this issue in a book chapter [40] followed by an Editorial in a Special Issue of AI Magazine [41], leading us to conclude that a context with uncertainty can only be determined interdependently by intelligent teams in competition or conflict. In causal terms understood by machines and humans, Pearl [42,43] recommended that human–machine teams must be able to communicate interdependently. In 2018, the concept of team interdependence was violated by Uber’s self-driving car when it fatally injured a pedestrian: even though the car had detected an object in the road 5 s before its human operator did, the car, acting independently, did not communicate that information to its human teammate [24,44]. In contrast, rejecting interdependence, Kenny and colleagues ([45], p. 235) assumed that only independent, identically distributed data are of value for experimental social data, which is somewhat akin to treating quantum dependency at the atomic level as an irritation, not a resource; we have identified Kenny’s assumption as a root cause of the failure of replication in the social sciences and of its inability to generalize to teams and machines [46].
We know that beliefs correlate with other beliefs (e.g., the beliefs of individuals on their self-esteem correlate significantly with beliefs in their academic and work abilities); however, self-esteem does not correlate with actual academics or work performance (reviewed in [20]). In sum, we need a theory that accounts for the failure of observational social science (e.g., self-esteem) to predict behavior; for the failure of behavior (e.g., reinforcement learning) to predict cognitive states; and for the failure of social scientists to produce a theory that accounts for both. Without this knowledge, how else are we to theorize, design and operate human–machine teams?

5. Interdependence: Structure, Performance and Uncertainty

We have associated interdependence with three effects: bistable perspectives of reality, such as those that spontaneously arise between two tribes, teammates, or political parties [11]; a measurement problem arising with the collapse of bistability; and non-factorability, the difficulty of unraveling the causes of social conflict or competition [20]. We argue that the ignorance of these effects contributes to the replication crisis in psychology.
First, interdependence is entailed in all social interactions [47], the original premise of game theory [2]. Conant [48] generalized Shannon to organizations, concluding that interdependence should be minimized. Contradicting Conant, Cummings (in [46]; also, see [49]) found that an interdependent team is significantly more productive than the same individuals in the team when they are working independently. Moreover, contradicting the National Academy of Sciences’ claim that “more hands make light work” ([50], p. 33), and in contrast to swarms, we propose and found—based on studies on international oil firms [2], replicated with militaries around the globe [51]—that redundancy in an interdependent team impairs its performance, e.g., in 2018, although both companies produced about the same amount of oil, Exxon had one eighth as many employees as Sinopec [52].
Second, interdependent states include bistable interpretations of reality [53], which are spontaneously evident when uncertainty in a team prevents action. The brain only “interprets” one aspect of bistability at a time [54], where bistable refers to multiple interpretations, e.g., a bistable illusion is the faces–vase illusion (see p. E3351 in [55]). The split-brain evidence from Gazzaniga [56] showed that the two halves of a human brain perceive independent versions of reality. However, the brain apparently stitches these independent visual memories from the hemifields of the left and right brain together [57], possibly by managing constructive and destructive interference from the information produced by the brain’s two halves as the brain constructs a coherent view of reality. Bistability supports Bohr’s claim of the complementarity between action and observation, tribal disagreements, etc. The power from bistable interpretations also supports the claim that interdependence between society and technology drives the evolution of humans [58].
How this stitching of “reality” occurs in the brain’s “mind’s eye” is unknown, but may depend on the development of a new theory of information value; similarly, as we will discuss, we do not yet know how political information is aggregated beyond the realm of logic or our present knowledge of the process. We speculate that both are related to the theory of the non-factorable value of information, which we leave to a future project.
Third, indeed, many aspects of social reality are non-factorable, requiring an adversarial process to de-construct interpretations of reality, e.g., a high-profile child custody case; the NTSB’s determination of responsibility for the Uber pedestrian fatality in 2018 [24,44]; and Weinberg’s failure to reach consensus about the meaning of quantum mechanics (compare [59] with [60]). Humans pursue adversarial struggles to determine the non-factorable reality hidden in uncertain contexts. As non-factorable information is processed in an adversarial process, it tests the bistable interpretations of an uncertain reality. For example, Justice Ginsburg ([61], p. 3) concluded that the lengthy appeals process for a case reaching the U.S. Supreme Court should not be short-circuited so that the Justices can best gain an “informed assessment of competing interests.” Ginsburg’s decision exemplifies the processing of interdependent (bistable) information, and the brain’s “mind’s eye” appears to function interdependently to produce a similar effect.
Interdependence is fragile and has to be contained inside of stable boundaries to preclude interference. Boundaries suggest an engine: the constraints serve the dual function of stabilizing the interaction and blocking it from interference, allowing the interactions of a team to coherently increase the team’s performance toward its maximum entropy production (MEP) [19,20].
Subsequently, in an earlier study using UN Human Development Index (HDI) data for Middle Eastern North African (MENA) nations plus Israel, mostly the data on schooling and patents produced, we theorized and found that a nation’s structure of education is significantly associated with a nation’s MEP, characterized by the number of patents it produces; in this manuscript, we revisit these results. However, our results conflict with our earlier finding that a United States Air Force education in air combat maneuvering was not associated with the best performance in air combat, but air combat physical training was. Together, these two results exemplify that the different structures built to exploit the interactions in a team to achieve MEP are based not on independent individuals, but on interdependent teammates who function in orthogonal roles [51].
There are two basic approaches to social science, both occurring most often in the laboratory: behavior and cognition. First, with the focus on fixed choices, games can impede a “proof of concept” in the field for operating human–machine teams [2]. Briefly, games work by assuming that the preferences of players are implicit based on observed behaviors (viz., game choices; in [62]). Von Mises makes the same assumption in his theory of entrepreneurship [63]. Second, and in contrast, cognitive psychology is based on implicit behaviors [64]. The implicit assumptions in both traditional disciplines help to make these concepts rational. Both approaches function well within the limits of their assumptions, but when it comes to generalizations for operations in the field, a new direction is needed in the situations where uncertainty and conflict operate; this is exactly where rational theories fail, despite heroic effort [25]. Namely, the lack of field research, also known as “open science,” is the largest gap in research reported by the National Academy of Sciences. The Academy blamed this gap on the emphasis by social scientists on conducting research in the laboratory that often cannot be generalized and applied in the field [14].
For example, a modern fighter jet is “trained” to take control from its human operator to save the pilot and plane when the machine (plane) recognizes that its fighter pilot’s high “g” maneuver has caused its pilot to suffer a loss of consciousness, known as G-LOC. Similar technology can be applied to counter dysfunctional human operators, e.g., an attempt to commit suicide by a commercial pilot can be countered by allowing a plane to safe itself and its passengers (e.g., the Germanwings copilot who committed suicide and killed all aboard in 2015 [20]).
More applicable in the field is a model that performs similarly to how autonomous humans combine behavior and cognition to process the information interdependently generated during competition, by testing whatever implicit assumptions arise. A foundational problem for traditional social science is that, we have found, the theory of independent individuals, not surprisingly, does not scale up from individuals to teams. In contrast, our theory not only scales from teams to firms and systems, but also scales downward from teams to individuals [20,46].

6. Mathematical Physics of Complementarity

When two agents or two groups, A and B, are independent, we can represent the information derived from them using Shannon entropy in Equation (1) [65]:
$$ S_{A,B} \geq S_A,\; S_B \qquad (1) $$
Alternatively, when the two agents or two groups are fully independent, we revise Equation (1) to become:
$$ S_{A,B} = S_A + S_B \qquad (2) $$
Equations (1) and (2) represent a state of independence; based on this, the recommendation from computational Shannon models is to minimize interdependence [48]. Equations (1) and (2) identify a lack of interdependence and reflect either social independence, or the breakdown of a preexisting state of social interdependence. In the former case, such a situation could reflect completely unrelated or independent activities; in the latter situation, Equation (2) could reflect social suppression [24], e.g., the Stasi secret police in East Germany made individuals fearful of divulging personal information, creating forcible isolation among East German individuals [66].
From Cummings [67], however, we know that the most coherent and productive teams of scientists are highly interdependent. In contrast to Equations (1) and (2), we assume that a state of dependence, with one agent dependent on the other, is sufficient to reduce its contribution to their joint entropy, and that this reduced state forms a state of interdependence or mutual dependence, e.g., representing a team (a business unit, a marriage, or a boss–employee relationship). In these cases, the degrees of freedom are correspondingly reduced for teams [24]. Further, in the limit, as a team forms structurally into a unit, this reduction produces a joint entropy, $S_{A,B}$, that, in theory, can become zero for a perfect team:
$$ S_{A,B} = \lim_{\sum_{n}\,\mathrm{individuals}\;\rightarrow\;\mathrm{team}} \mathrm{dof}_{\mathrm{unit}} \log(\mathrm{dof}) = 0 \qquad (3) $$
When this lowest state of structural entropy production occurs, because the entropy produced by one structural part of a team is offset by another structural part of the team, the entropy becomes sub-additive, reflected as:
$$ S_{A,B} \leq S_A + S_B \qquad (4) $$
Another interpretation of Equation (4) shows, counterintuitively, that the structure of the whole is producing less entropy than its parts:
$$ S_{\mathrm{Whole}} \leq \sum_{i=1}^{n} S_i \qquad (5) $$
In Equation (5), the summation of the structural entropy occurs over all of the elements, i, or parts of the whole, from 1 to n, where n represents the whole team. For the first time, Equation (5) not only reflects Lewin’s [23] claim, but also the claim by Systems Engineers [22] and the assertion by Schrödinger that the whole has become greater than the sum of its parts [21]. This result occurs when the parts of a whole contribute to the performance of the whole, and each part is dependent on the other, precluding an objective determination of each part’s contribution. We found support in the assertion by the National Academy of Sciences, in their recent report, that the “performance of a team is not decomposable to, or an aggregation of, individual performances” ([14], p. 11). Equations (4) and (5) account for this assertion.
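As a numerical illustration of Equations (1)–(5), the following minimal Python sketch (our own illustration with made-up joint distributions, not data from this study) contrasts the joint Shannon entropy of two independent binary agents with that of two interdependent agents; the interdependent pair’s joint entropy falls below the sum of its parts, the sub-additivity of Equations (4) and (5).

```python
# Minimal sketch (illustrative distributions only): joint Shannon entropy for
# independent vs. interdependent agents A and B.
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distributions over (A, B) in {00, 01, 10, 11}, both with P(1) = 0.5 marginals.
p_independent = np.array([0.25, 0.25, 0.25, 0.25])    # product of the marginals
p_interdependent = np.array([0.49, 0.01, 0.01, 0.49])  # A and B act almost as one unit

for label, p_joint in [("independent", p_independent),
                       ("interdependent", p_interdependent)]:
    S_A = entropy(p_joint.reshape(2, 2).sum(axis=1))  # marginal entropy of A
    S_B = entropy(p_joint.reshape(2, 2).sum(axis=0))  # marginal entropy of B
    S_AB = entropy(p_joint)                           # joint entropy
    print(f"{label}: S_A + S_B = {S_A + S_B:.2f} bits, S_AB = {S_AB:.2f} bits")

# independent:    S_A + S_B = 2.00, S_AB = 2.00  (Equation (2))
# interdependent: S_A + S_B = 2.00, S_AB ~ 1.14  (sub-additive, Equations (4)-(5))
```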
However, what about generalizing this result to two teams? We address this question with a simulation. If we let b be the belief in a concept to solve a problem, a belief that increases negative entropy (negentropy) as it circulates in a series circuit against the social resistance, R (which impedes the acceptance of a concept); the social capacitance, C (which acts to store the energy behind opposing beliefs); and the inductance, L (which creates a field to resist the directed flow of energy for a concept, inducing a concept’s flow in the opposite direction when the field reverses), we obtain:
$$ L\ddot{b} + R\dot{b} + \frac{1}{C}\,b = 0 \qquad (6) $$
when L = 1 unit, R = 0 units, and C = 1 unit, we obtain b = ± i for points 1 and 2 on the y axis in Figure 1. When R = 2 units, solving for b gives −1 and −1 for points 3 and 4 on the x axis in Figure 1. When R = 3 units, b = −2.6 and −0.4, reflected by points 6 and 5 in Figure 1.
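The points quoted above can be checked with a minimal sketch, assuming (as the values suggest) that b solves the characteristic polynomial Ls² + Rs + 1/C = 0 of Equation (6); the unit values of L, R and C are those given in the text.

```python
# Hedged sketch: roots of the characteristic polynomial L*s^2 + R*s + 1/C = 0
# of Equation (6), reproducing the points plotted in Figure 1.
import numpy as np

L, C = 1.0, 1.0                     # 1 unit each, as in the text
for R in (0.0, 2.0, 3.0):           # social resistance, in units
    roots = np.roots([L, R, 1.0 / C])
    print(f"R = {R}: b = {np.round(roots, 2)}")

# R = 0.0: b = [0.+1.j 0.-1.j]  -> points 1 and 2 (pure oscillation)
# R = 2.0: b = [-1. -1.]        -> points 3 and 4 (critical damping)
# R = 3.0: b = [-2.62 -0.38]    -> points 6 and 5 (a decision is reached)
```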
Figure 1 simulates a confrontation between two individuals or two teams in an argument that cycles back and forth (e.g., a competition, a debate, or collaborative decision-making) that precedes and may be followed by a decision to act (action is more likely to occur under majority rules than consensus-seeking rules; for an example in European matters, see [69], p. 29).
The Department of Energy’s (DOE’s) cleanup of its nuclear waste mismanagement best represents Figure 1 [70]. As an example of a pure oscillation, in 2005, the DOE’s high-level radioactive waste (HLW) cleanup of its tanks had been stopped by a lawsuit, but was restarted by U.S. Congress. As part of this new law, the U.S. Nuclear Regulatory Commission (NRC) was given sufficient oversight to overrule the DOE’s technical decisions for its HLW tank closure program. However, from 2005 to 2011, the DOE proposed a restart of its HLW tank closure program, but NRC equivocated, then required the DOE to make another proposal. This pure oscillation is represented by the back-and-forth between points 1 and 2 in Figure 1. The back-and-forth, approaching 7 years, lasted until the DOE complained to its Citizens Advisory Board that it was in danger of missing its legally mandated milestone to restart its tank closures. The citizens demanded, in public, that the DOE and NRC settle their differences and immediately restart tank closures, which happened, reflected by point 6 in Figure 1. If we let the column vector (1, 0) represent 1 on the x axis and 0 on the y axis, we simulate the rotation in the minds of the NRC, shown by Equation (7), as an orthogonal rotation of 90 degrees:
$$ \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \cos(\pi/2) \\ \sin(\pi/2) \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \qquad (7) $$
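A minimal sketch of the 90-degree rotation in Equation (7): the belief vector (1, 0) on the real axis is rotated onto (0, 1) on the imaginary axis.

```python
# Rotate the belief vector (1, 0) by theta = pi/2, as in Equation (7).
import numpy as np

theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
belief = np.array([1.0, 0.0])
print(np.round(rotation @ belief, 10))   # -> [0. 1.]
```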
In Figure 2 below, we plot a simplified simulation of the NRC’s decisions over time.
We simulated the DOE–NRC stalemated oscillations using an online five-step Markov chain with transitions initially set at 0.6 from State 0 to State 1, etc., which we changed to create an oscillation from State 0 to State 2. With game theory played online, we re-ran the DOE–NRC standoff, which lasted almost 7 years, as a Nash equilibrium. A Nash equilibrium occurs when neither side is a winner, or when both sides win. We conceptualized the DOE–NRC standoff as a quasi-Nash equilibrium because neither side won; however, the American public was losing until the DOE’s Citizens Advisory Board was told about the standoff, and then confronted both Federal agencies in public and demanded that they make a decision to allow the tank closures to resume [70]. The tank closures resumed almost immediately in what one DOE official described as the fastest action he had ever witnessed in his career.
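A minimal sketch (our own stand-in for the online tool used above; the transition probabilities are illustrative, not those of the original run) of a small Markov chain tuned so that the DOE–NRC exchange oscillates between State 0 (DOE proposes) and State 2 (NRC equivocates).

```python
# Hedged sketch: a three-state Markov chain whose transition matrix produces
# a stalemated oscillation between State 0 and State 2.
import numpy as np

rng = np.random.default_rng(0)
# Rows = current state, columns = next state; probabilities are illustrative.
P = np.array([[0.0, 0.0, 1.0],    # State 0 -> State 2 (proposal meets equivocation)
              [0.6, 0.0, 0.4],    # State 1 (an unused intermediate state)
              [1.0, 0.0, 0.0]])   # State 2 -> State 0 (a new proposal is demanded)
state = 0
trajectory = [state]
for _ in range(10):
    state = int(rng.choice(3, p=P[state]))
    trajectory.append(state)
print(trajectory)   # e.g., [0, 2, 0, 2, 0, 2, ...]: the back-and-forth of Figure 2
```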
To summarize, the DOE–NRC situation is represented in Figure 1 and plotted over time in Figure 2. Figure 1 reflects harmonic motion where beliefs are exchanged but no decision to act is made (on Figure 1’s imaginary axis, an oscillation is between points 1 and 2, then 1, etc.), resulting in a compromise won by neither agency (points 3 and 4 on the real axis), or, as in this case, in resistance by the audience and a decision it has forced on the agencies (points 5 and 6).
Competition. In the literature (e.g., von Mises, in [63]), competition is often described as a struggle between two or more organisms, humans or businesses to be more efficient [32]; to market a better product or an existing product more quickly; to win a position or grant (as in education); or, from ecology, a struggle “in which birth, growth and death depend on gaining a share of a limited environmental resource.” (https://www.lexico.com/en/definition/competition (accessed on 26 June 2022)) However, in the many and varied definitions of competition, the “how of competition” is often omitted, ignored or assumed to have happened. Shifting to autonomous human–machine systems makes intuition unsatisfactory. To rectify this situation, we build on the rotations represented by Equation (7) to introduce a decision advantage, DA, with torque, τ; force, F; the radius of a turn, r; and rotational velocity, ω; giving v = rω and τ = Fr; then, with power, P, we obtain P = Fv = τω = τv/r, becoming [40]:
$$ \mathrm{DA} = \tau_A / \tau_B \qquad (8) $$
In Equation (8), we mean by DA that one team was quicker than another in its oscillations in the back-and-forth between competitors during a debate, or that one team’s grasp of the issues was more forceful, or that one team’s knowledge of its solution was held with more conviction than the other.
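A minimal sketch of Equation (8), using purely illustrative numbers (not taken from this study): each team’s torque is τ = F·r, its power is P = τ·ω, and the decision advantage is the ratio of torques.

```python
# Hedged sketch of Equation (8): decision advantage as a ratio of torques.
def torque(force, radius):
    """Torque tau = F * r."""
    return force * radius

def power(force, radius, omega):
    """Power P = tau * omega = F * v, with v = r * omega."""
    return torque(force, radius) * omega

# Illustrative values: Team A argues more forcefully than Team B over the same turn.
tau_A = torque(force=10.0, radius=1.0)
tau_B = torque(force=6.0, radius=1.0)
DA = tau_A / tau_B
print(f"DA = {DA:.2f}")   # DA > 1: Team A holds the decision advantage
```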
Equation (8) has support in the literature and also in the field. According to McConnell [71], the Director of National Intelligence, “… strategic advantage is the ability to rapidly and accurately anticipate and adapt to complex challenges ([71], p. 6). … the key to intelligence-driven victories may not be the collection of objective ‘truth’ so much as the gaining of an information edge or competitive advantage over an adversary” ([71], p. 8) (see also J. Sims, Director of Intelligence Studies, Georgetown University, quoted in McConnell [71]). In this environment, one prerequisite for decision advantage is global awareness: the ability to develop, digest and manipulate vast and disparate data streams about the world as it is today. Another requirement is strategic foresight: the ability to probe existing conditions and use the responses to consider alternative hypotheses and scenarios, and to determine linkages and possibilities ([71], p. 9). Secrecy, however, is only one technique that may lead to decision advantage; so may speed, relevance or collaboration ([71], p. 9).
The purpose of decision advantage in combat, or in preparation for it, is to “exploit vulnerabilities” ([72], p. 7). Speed and quality decisions are important in business, too [73]. Additionally, the same is true in advertisements that promote athletic performance (e.g., [74]).
In Equation (5), we reported on how the whole is greater than the sum of its parts for a well-functioning team. However, internal conflict is all-too-common, reversing the relationship between the whole and its parts. In the literature, business “Executives underestimate the inevitability of conflict, but also … its importance to the organization” [75]. In the field, in the Wall Street Journal [76], after J&J discovered that Emergent had ruined a batch of COVID-19 vaccines, J&J replaced its Emergent BioSolution contract manufacturer “in response to the recent discovery of the contaminated batch.” According to the New York Times explaining that a candidate faltered because of internal conflict [77]: “Dianne Morales, a former nonprofit executive, once appeared poised to be a left-wing standard-bearer, but her standing suffered amid internal campaign turmoil …” Another example from the Bangkok Post about the destruction of the newspaper business in an attempt to save it [78] states that “As a major Tribune shareholder, Alden backed cuts last year at the company—amid a steep, pandemic-driven advertising downturn—that slashed a third of the total payroll and closed about half its news offices, with no stated plans to open new ones.”
In these examples and numerous others (divorce; forced business spin-offs; hostile mergers; war, etc.), the whole may be significantly less than the sum of its parts, which we represent in Equation (9) to show counterintuitively that the whole is now superadditive, producing more entropy than the sum of its parts:
$$ S_{\mathrm{Whole}} \geq \sum_{i=1}^{n} S_i \qquad (9) $$
For the purpose of illustration, in Figure 3 below, we show how we integrated these two concepts for the two Equations (5) and (9).
From the discussion, we next assume that the ability of a team to be productive is related to its ability to minimize the team’s configuration entropy produced by its structure [46]. We show that achieving minimal or the least entropy production through a team’s structure (SEP), as suggested by Equations (4) and (5), allows a team to direct more of its available free energy to increasing its productivity [39], possibly even to achieve maximum entropy production (MEP; [19]). An example of MEP would be a highly productive team, e.g., a marriage devoted to raising its children; a business such as Apple Technology Company becoming ever more productive in its struggle to reach MEP based on its structure; an alliance devoting all of its available free energy to solving the problems placed before it; or a nation’s people educated to search through potential solution spaces to solve the problems faced by a nation with solutions that can be patented [52].
Finally, we assume that uncertainties for both the structural entropy production in Equation (4) and the maximum entropy production in a highly productive team are interdependent. We conceptualize this relationship in Equation (10):
$$ C \leq \Delta \mathrm{SEP} \cdot \Delta \mathrm{MEP} \qquad (10) $$
Equation (10) represents a tradeoff between SEP and MEP. It indicates that a team cannot fully know both SEP and MEP simultaneously. However, Equation (10) also indicates that as uncertainty in the structural entropy production (SEP) decreases to a minimum, the team is able to devote more and more of its available free energy to maximizing its performance (MEP).
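A minimal numerical sketch of the tradeoff in Equation (10), with an assumed constant C and illustrative ΔSEP values (none of these numbers come from the paper): holding the product at or above C, a drop in one factor must be offset by a rise in the other.

```python
# Hedged sketch: the complementary tradeoff of Equation (10),
# delta_SEP * delta_MEP >= C, with illustrative values.
C = 1.0
for delta_SEP in (2.0, 1.0, 0.5, 0.25):
    delta_MEP_min = C / delta_SEP   # smallest delta_MEP consistent with the bound
    print(f"delta_SEP = {delta_SEP:4.2f}  ->  delta_MEP >= {delta_MEP_min:4.2f}")
```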
In the literature about the inevitability of the need to resolve conflict for an organization to be coherent [75], coupled with the example from the field of J&J’s replacement of its Emergent contracted manufacturer (i.e., [76]), Equation (10) also indicates that achieving structural fitness is key: if a team or organization is freely able to choose the best available candidate to be a member of the best performing team, it reduces entropy in a way that is similar to adding the last element to complete the structure of a molecule (e.g., when ions of sodium and chlorine combine to form a crystal of salt, their combined entropy drops). What happens when a well-trained team member is replaced by a highly capable but new teammate? Initially, structural entropy increases dramatically while the newcomer builds relationships with existing teammates, but as the new teammate settles in and masters the role of the individual replaced, the team’s structural entropy production drops, allowing the team’s performance to recover [50].
On the other hand, by inverting this concept, according to Equation (10), when a team increases its structural entropy production for whatever reason, all things being equal, the team must correspondingly reduce its maximum entropy production. What could lead to such a reduction? Of all of the possible causes, we focus on a vulnerability discovered in one’s own team structure, leading to a search for new candidates among the best available for a merger; a vulnerability created in a team by an old merger; a vulnerability created purposefully during an attack; or a vulnerability discovered in an opponent, leading to the exploitation of the opponent.

7. Research Design

First, we will review mergers and how vulnerability affects firms. Second, we will revisit the most recent MENA data with a correlation matrix to provide insight into vulnerabilities in the MENA countries (the UN designation for Middle East North African countries).

8. Results

8.1. A Hypothesis from Case Studies of Vulnerability in the Field

To illustrate the vulnerabilities posed by Equation (10), we first look to the literature, and then to the field, to find examples of teams (firms, organizations, alliances and systems) that fit into one of three categories: mergers to reduce a vulnerability in a business, spin-offs of a previous merger that has failed, or the exploitation of a vulnerability created in another business or organization.

8.1.1. Mergers Designed to Prevent a Vulnerability in Being Able to Compete

According to the literature [79], mergers inform us about teamwork, cooperation and competition; about how systems and alliances may function; and about how well markets are performing. Alliances can form among rivals when facing emergencies, such as national defense or the COVID pandemic in 2020–2021 (for alliances promoting national defense, see [80]). In the field, mergers may reflect a vulnerability from a lack of the technology to compete; of the assets and infrastructure needed to address an overwhelming problem; or of the need for scale to cope with an international emergency, such as the development of new vaccines (e.g., [81]). Mergers can be used to enter new markets. For example, Allied Universal’s bid for G4S PLC was designed to create a private security-services giant that would strengthen Allied’s operations in the U.S. and the U.K. while extending its operations in Europe, the Middle East and Africa [82]. Mergers may be used by a firm to strengthen its competitiveness after gaining access to new technology, such as Northvolt’s acquisition of Cuberg to increase the range of its electric-vehicle batteries [83]. A merger can also be designed to save on expenses while preparing to invest in the adoption of a new technology, such as the 5G wireless service business in Canada [84], or can be pursued to expand services, such as DoorDash’s purchase of Chowbotics, a robotic fresh-salad maker [85].
In the literature, once a vulnerability is discovered by a firm in itself, action is taken through the mechanism of a merger to offset its perceived weakness [79]. In the field, the same step is taken whenever leaders are thought to be the problem, e.g., the change in managers at Credit Suisse after their involvement in the collapse of the Greensill Capital supply-chain finance funds [86].

8.1.2. Spin-Offs to Reduce Vulnerability

In the literature [87], not all mergers or acquisitions work. In the field, AT&T purchased DirecTV near the peak of the market for payTV, which ended shortly thereafter due to the cord-cutting movement; AT&T has now split off this same unit, which has struggled and lost significant value since it was purchased [88]. After International Business Machines’ (IBM) artificial intelligence system “Watson” bested humans on a quiz show, it suggested a possible new business in health care, one that has not lived up to its hype [89]: “Watson Health has struggled for market share in the U.S. and abroad and currently is not profitable.” Since competition has tightened returns on solar plants for what was “once the world’s most ambitious solar investor”, SoftBank Corporation is now selling its renewable energy firm to a rival, Adani Green Energy Ltd. [90]. With its investors frustrated with the company’s performance [91], Dell plans to “spin off its stake … in cloud-software company VMware … to strengthen its financial position as it looks to enter new markets and works to reach clients revamping their digital strategies.” New management at Citigroup has motivated the bank to reduce its competitive vulnerabilities by “moving to exit consumer banking franchises in 13 countries in Asia, Europe and the Middle East” [92]. Reflecting the poor situation currently existing for European banks, HSBC Holdings PLC in the UK is planning to sell its French retail bank for EUR 1, a significant loss for a bank it had bought for over USD 10 billion just two decades ago [93].
The DuPont Company was once the largest employer and philanthropist in the state of Delaware, funding schools, libraries and theaters [81]. Joe Biden, now President, celebrated his victory for political office in 1972 in the Hotel DuPont. “More than four decades later … DuPont, struggling to boost profits, was targeted by an activist shareholder, sold the hotel, eased out its chief executive, merged with another company, split into three pieces and cut its Delaware workforce by one-fourth [94].”

8.1.3. Exploitation of Vulnerability: Deception

Equation (10) gives guidance on how a machine, as part of a team, can blend in with its teammates should its function be to deceive its teammates, as the deceiver seeks to create a vulnerability in a team, firm or system. Using deception, an individual playing an instrumental role in a team could double as a spy against the team. To keep from being perceived as a “bad” actor within the team by fellow teammates, based on Equation (10), a deceiver must reduce the entropy it produces structurally, making it particularly difficult in cyber-warfare to identify an unwelcome intrusion [95] when an interloper does not stand out (e.g., in the literature, [82]). An example from the field is what happened with Volkswagen [96], an organization selling cars by deceiving the public and the public’s regulators regarding its faithful application of the rules, a deception that succeeded until it was uncovered years later; namely, six years ago, Volkswagen admitted to the deceitful gambit that its emissions failed to meet regulatory standards [83]: “U.S. authorities charged Volkswagen with conspiracy to commit fraud, making false statements on goods brought for sale in the U.S. and obstruction of justice. The U.S. probe uncovered a decade-long ploy by Volkswagen to rig millions of diesel-powered vehicles to cheat emissions tests and later attempt to cover up the cheat.”
In the literature, dependencies can become vulnerabilities (e.g., in cyber, see [97]). In the field, the fight between Apple and Facebook [98] directly addresses the exploitation of a created vulnerability [84]: “For the past few weeks, Facebook has been running an ad campaign in defense of personalized advertisements, arguing that targeted ads are key to the success of small businesses. The catalyst for the campaign has been an ongoing battle between the social media company and Apple. The battle focuses on a unique device identifier on every iPhone and iPad called the IDFA. Facebook and others that sell mobile advertisements rely on this ID to help target ads to users and estimate how effective they are. … apps that want to use IDFA will have to ask users to opt in to tracking when the app is first launched. If users opt out, it will make these ads a lot less effective. Facebook has warned investors that these looming changes could hurt its advertising business …” In the Fall of 2020, during the height of the pandemic in the United States and the election season, Southwest Airlines seized the opportunity to exploit the struggles of its competitors [85]: “The pandemic is forcing many airlines to defend their turf. Southwest is using it to invade. Even as air travel languished this fall, Southwest Airlines Co. executives fanned out to cities from Palm Springs, Calif., to Sarasota, Fla., to scope out potential new markets. The airline is adding four more cities to its network this year and announced plans for another six in 2021.”

8.2. Results of Correlational Tests for Theses of Vulnerability in MENA Countries

Theses: The following theses focus on vulnerability. Based on Equation (10), should a team become vulnerable (characterized by a higher SEP), it is likely to struggle with defending itself and with being more productive.
Thesis 1: As education (HDI) improves across a population, a nation becomes more intelligent and less vulnerable, reflected by its patent productivity increasing. The results in the correlation matrix (CM) in Table 1 below support the hypothesis; as HDI increased, patent productivity rankings in the MENA countries increased significantly (r = −0.81, p < 0.001).
Thesis 2: As freedom scores increase and teams are freer to replace underperforming teammates (Equation (10)), a team’s improved fitness leads to less vulnerability and an increase in patent productivity rankings (PPR). The results in the CM below support the hypothesis (Table 1); as the freedom ranking of a MENA nation increased, its patent productivity rankings increased significantly (r = −0.60, p < 0.02).
Thesis 3: As a population increases, an increase in redundancy (the number of free riders) reduces its ability to evolve, reflected by its lower patent productivity rankings (PPR); the results (Table 1), while not significant, agree with the hypothesis in that as the population increased in a MENA country, its patent productivity rankings decreased (r = 0.28, p n.s.).
Thesis 4: As corruption perceptions (CPI) increase across a society, if they are not sufficiently free to root out the corruption or deception, its ability to evolve decreases, reflected by a decrease in its patent productivity rankings (PPR). The results of the association in the CM support this hypothesis (Table 1); as CPI scores decreased in the MENA countries, their patent productivity rankings decreased (r = −0.76, p < 0.001), freedom decreased (r = 0.55, p < 0.05) and population levels increased (r = −0.48, p n.s.).
Other findings: In the MENA populations surveyed in Table 1, as their populations increased, freedom scores decreased (r = −0.70, p < 0.01) and HDI scores also decreased, but not significantly (r = −0.31, p n.s.). For these populations sampled in the MENA countries, we also found that Gross Domestic Product (GDP) was not significantly related to HDI scores (r = 0.24, p n.s.), freedom scores (r = −0.47, p n.s.), patent productivity (r = −0.17, p n.s.) or CPI scores (r = 0.06, p n.s.).
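For readers who wish to reproduce this style of analysis, a minimal sketch follows; the country values are made up for illustration and are not the UN or Table 1 data, and the variable names are our own.

```python
# Hedged sketch: computing one entry of a correlation matrix (CM) for
# MENA-style indicators using illustrative (made-up) values.
import numpy as np
from scipy import stats

# Hypothetical per-country values: HDI education index and patent
# productivity ranking (1 = most productive).
hdi = np.array([0.91, 0.78, 0.70, 0.66, 0.62, 0.58, 0.55, 0.50])
patent_rank = np.array([1, 3, 2, 4, 6, 5, 8, 7])

r, p = stats.pearsonr(hdi, patent_rank)
print(f"HDI vs. patent productivity ranking: r = {r:.2f}, p = {p:.3f}")
# A negative r indicates better (lower-numbered) patent rankings as HDI rises,
# the direction of association reported for Thesis 1.
```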

9. Discussion

While we recognize its importance in decisions, we discuss time and its effects on the interaction only briefly, from a theoretical perspective, later in this manuscript (e.g., [99,100,101,102,103,104]).

9.1. Mergers

We have made several points. First, in support of causality, Pearl [42,43] has been forceful in communicating the need for humans and machines in a team to be able to communicate clearly in a language that both understand, a need that was certainly violated in the Uber self-driving car’s fatal injury of a pedestrian [24,44]. The Uber car’s failure to communicate with its human operator strengthens the value of interdependent communications among human and machine teammates to let each other know when either perceives a threat.
Second, and in support of Bohr’s [15,17] theory of complementarity, we adduced equations as a way to organize the plentiful evidence from the literature and in the field, in support of Equation (10), which states that tradeoffs exist in the struggles of humans to survive when confronted with uncertainty, conflict or both.
Third, during a competition between two evenly matched teams, if an attack finds a vulnerability that triggers a tradeoff between an opposing team’s structure and its performance (i.e., in the literature, dependencies create vulnerabilities [97]; e.g., the vulnerability of Facebook exposed by Apple’s change in its new app-store rules [105]), excess structural entropy characterizes the vulnerability [46]; this is a discovery that both machines and humans can use mathematically to determine a weakness in an opponent, or in one’s own team. Additionally, as an example of coherence, protecting a team’s vulnerability occurs by trusting its agents [106], but if they are in orthogonal roles, orthogonality reduces the confirmatory evidence when those roles work. Should these two results be validated in the field, they suggest how the defeat of an opponent [107], or the remediation of the weaknesses in one’s own team, is enacted, but indirectly, by reducing the intrinsic value of trust.
Fourth, as interactions become less restricted (e.g., a free market [93]), Equation (10) represents the tradeoffs that occur between structure and performance. The blocking of these tradeoffs under oppressive leaders, which exploits a people by making their survival contingent on vulnerability, accounts for how gangs, authoritarians and monopolies construct barriers to growth, wittingly or not; however, by suppressing alternative viewpoints, the very key to innovation [52], these regimes become hampered by the suppression they inflict on others when they themselves need to innovate, especially when facing uncertainty or conflict.
Regimes of oppression can innovate by using deception to steal ideas [108]. For example, although praised by Charlie Munger, who stated that “China has managed to lift about 100 million impoverished people out of extreme poverty since 2012,” (https://news.cgtn.com/news/2021-02-26/Charlie-Munger-staggered-by-China-s-poverty-elimination-success-YbFCa6WjyE/index.html (accessed on 15 August 2022)), enforced cooperation in China has led to its vulnerability and its need to steal technology (e.g., [109,110]). China’s widespread political suppression has impaired its ability to innovate, leading it to hide new developments, e.g., according to the New York Times [111], “Sinovac, which is produced by a Chinese pharmaceutical company and promoted by the Chinese government, has been criticized for a lack of transparency about its clinical trials.” Similarly, by rejecting the latest offer for the “establishment of permanent peace on the Korean Peninsula” [112], North Korea continues to trap its own people; unable to feed or to help themselves, many of them are tricked into slavery by promises of a “Socialist Paradise” [113].
Not surprisingly, the inability of China to innovate with a new vaccine also occurs with business monopolies. In the U.S., the company Amazon, being a monopoly, has created its own vulnerability from its lack of innovation, motivating its need to steal ideas from some of its best clients [114].

9.2. Discussion: Correlation Matrix for the MENA Countries

As a team’s fitness improves, as in Equation (10), the improvement is characterized by a reduction in its structural entropy production (SEP) along with a complementary increase in the team’s productivity (MEP); this is best illustrated in the MENA region by the technological leadership of Israel. Team fitness, however, depends on a team’s ability to freely choose its members in order to reduce its vulnerabilities, to increase its strengths and to learn from its mistakes, consequently advancing its social evolution. The primary reduction in a team’s vulnerabilities comes from reducing redundancy among its team members (free riders). The key to understanding this aspect of team size is that teams work best when interdependence is allowed to reach a maximum [67].
Freedom allows a team to self-organize to minimize its SEP (a reduction in free riders and the ability to freely choose a member’s replacement over a forced choice), leading to an increase in its evolutionary competitiveness, reflected by a complementary increase in MEP.
In contrast, in agreement with Equation (10), authoritarian regimes depend on social compliance; they do not fear corruption getting in their way, but instead, exploit it to assist with social compliance by forcing the family members or associates of key allies to be hired by productive teams, increasing a team’s level of redundancy, but thereby reducing its productivity. The key to understanding corruption administered in the lower levels of a corrupt hierarchy, business or government agency (e.g., the military; [51]) is knowing that redundancy helps to increase social compliance by reducing social turmoil, causing a reduction in evolutionary pressure.
Complementary tradeoffs by authoritarian leaders are evident in the results. In agreement with Equation (10), the citizenry in an authoritarian regime must be controlled, even if that means impeding the primary path to innovation available through free markets and free speech (Lawless, 2019); the tradeoff is that increased SEP costs reduce a team’s performance outcomes by reducing its MEP, and, across an authoritarian society, the ability of the country to evolve becomes compromised.

10. Future Research

First, many years after Bohr, but clearly capturing Bohr’s concept of the tradeoffs entailed by complementarity, Cohen [100] described signal theory in terms of transformations between Fourier pairs, noting that a “narrow waveform yields a wide spectrum, and a wide waveform yields a narrow spectrum and that both the time waveform and frequency spectrum cannot be made arbitrarily small simultaneously” (p. 45).
In this study, we have focused on complementarity and causality, open science and interdependence and the effects that machines can discern. Equation (10) tells us, however, that the arrow of time [101] for the perfect team in action should be harder to link causally to the actions performed by the team. Supported by the literature, this appears to be the case for the perception of time: “relatively passive activities appear longer than do those requiring active participation” [102]. Moreover, “if causal asymmetry is mind-dependent … then we cannot appeal to it in accounting for our experience of temporal asymmetry—the difference between past and future” [103]. To the point and for further investigation, in Rovelli’s The Order of Time [104], time is the direction in which entropy increases for a closed system from which we gain information [101]. Regardless, if the perception of time is obscured during a performance by a perfect team, it suggests that the structures of perfect, the best or even good teams (marriages, mergers, alliances and human–machine teams) depend on trial and error, and a successful match, identified using Equation (10), is characterized by a reduction in structural entropy as a team is assembled. As we previously mentioned, this speculation is supported by the National Academy of Sciences, who claim that the “performance of a team is not decomposable to, or an aggregation of, individual performances” (p. 11 [14]).
Second, we are pursuing metrics for the physics of the interdependence between the structure and performance of teams (Equation (10)). How much is a team’s productivity amplified versus the individuals who compose it? What happens to an autonomous team or system when issues arise unexpectedly such as confronting contexts of uncertainty, conflict or competition? Additionally, how might the governance of an A-HMT-S be conducted? Namely, how much authority should be given to human or machine members in an interdependent team to override human or AI counterparts in the event that an operator or leader becomes dysfunctional?
Third, among the recommendations that we plan to pursue, the National Security Commission on Artificial Intelligence (NSCAI) Final Report (p. 12, https://reports.nscai.gov/final-report/table-of-contents/ (accessed on 15 August 2022)) not only motivates human–AI teaming for national security, but also offers a model for increasing user and public faith in AI. We believe that our model of the adversarial pursuit of context agrees with Ginsburg [61], in that it could increase transparency by testing for vulnerability in opposing teams or in one’s own team, and it could promote human–machine governance [20,46].
Fourth, another problem with machine learning that we plan to study is its lack of generalizability [9] from one context to another [11]. However, if future machines can combine one-shot learning (reviewed in [10]) with prior learning, and if a more generalizable AI is combined with complementarity, we believe that machines, with human help, may be able to reprogram themselves on the spot, overcoming present design limitations on the determination of context. This concept addresses what we see as von Neumann’s [115] problem with the aggregation of self-reproducing automata.
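As a rough sketch of what such learning might look like (our illustration, not a method drawn from [10]), the code below classifies new inputs from a single labeled example per class by nearest prototype in an embedding space; the embedding is a placeholder identity map over synthetic two-dimensional features, whereas a real system would learn it from prior data.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    return x  # assumption: a pretrained embedding would normally replace this

# One labeled example ("shot") per novel class.
support = {"class_A": embed(np.array([0.0, 0.0])),
           "class_B": embed(np.array([5.0, 5.0]))}

def classify(x):
    """Assign x to the class whose single support example is nearest in embedding space."""
    z = embed(x)
    return min(support, key=lambda c: np.linalg.norm(z - support[c]))

# Query points drawn near each class; accuracy stays high despite one shot per class.
queries = [(np.array([0.3, -0.2]) + rng.normal(0, 0.5, 2), "class_A"),
           (np.array([5.2, 4.7]) + rng.normal(0, 0.5, 2), "class_B")]
for x, truth in queries:
    print(truth, "->", classify(x))
```

The design choice that matters is the quality of the prior embedding, not the classifier itself; this is where prior learning carries the burden of generalization across contexts.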
Fifth, in the future, we hope to address a limitation of Shannon’s information theory: its inability to serve as a theory of information value. Suppression in China and North Korea to achieve a stable society (simulated using NetLogo’s Ising model initialized with 100% spin-up at a low social temperature and allowed to evolve toward a mix) contrasts starkly with the approach taken by freer societies, where humans struggle adversarially to determine uncertain contexts. The need for an “informed assessment of competing interests” recognized by Justice Ginsburg ([61], p. 3) exemplifies the processing of interdependent (bistable) information. Ginsburg’s decision implies that a context facing uncertainty can only be resolved interdependently by teams in competition or conflict [41]. We speculate that Equation (10) points toward a theory of information value.
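For readers without NetLogo, the following minimal Metropolis sketch (our analogue of the Ising simulation mentioned above; the lattice size, temperatures and sweep counts are assumptions) starts from a 100% spin-up state and reports the net “magnetization,” a proxy for enforced consensus, at a low and at a high social temperature.

```python
import numpy as np

def run_ising(n=32, temperature=0.5, sweeps=200, seed=1):
    """2-D Metropolis Ising model with periodic boundaries, started 100% spin-up."""
    rng = np.random.default_rng(seed)
    spins = np.ones((n, n), dtype=int)               # uniform "compliance" at t = 0
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n, size=2)
        # Energy change from flipping spin (i, j) against its 4 nearest neighbours.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / temperature):
            spins[i, j] *= -1
    return spins.mean()                               # net magnetization (consensus)

print("low social temperature :", run_ising(temperature=0.5))   # stays near +1
print("high social temperature:", run_ising(temperature=5.0))   # drifts toward a mix near 0
```

At a low social temperature, the enforced consensus persists; raising the temperature, an analogue of freer adversarial exchange, lets the population drift toward a mix of views.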
Sixth, in a future project, we plan to apply Equation (10) to address minority housing. We know from Equation (10) that structure is important, but it also tells us that to “fit in” means not emitting excess structural entropy, suggesting that deception, while sometimes a curse, can be a cure; for example, the Electronic Benefit Transfer (EBT) card, which looks like a credit card, allows minorities on welfare to avoid, when buying groceries, the stigma of using Food Stamps in a public transaction (https://www.fns.usda.gov/snap/short-history-snap (accessed on 15 August 2022)).

11. Conclusions

First, from the correlation matrix and Equation (10), a complementarity exists between a team’s structural entropy production (SEP) and its performance, characterized by its maximum entropy production (MEP). If a team has internal conflict (e.g., a prior merger’s failure, divorce, infighting, or power struggles), lacks motivation (e.g., due to social suppression by an authoritarian regime or a command economy), or even lacks the ability to claim ownership of a patent, its productivity will suffer.
Second, if it can be shown that a mathematical physics of autonomous human–machine teams is not only possible in theory but also applicable in the field, the potential impact of our work-in-progress on a new direction for the science of interaction would be profound. Directly applied to an A-HMT-S, it would allow a human–machine team to determine, by observation, whether an opponent’s team was performing at a maximum level, was malfunctioning, or was vulnerable and how to exploit that vulnerability; alternatively, it would allow the team to determine whether it was itself vulnerable and in need of a trial-and-error process, by retraining or replacing an existing member, to reduce that vulnerability.
The human brain versus the mind: Gazzaniga’s [56] study of split-brain patients led him to discover that “the left half did not know what the right half was processing” (p. 57). Brincat et al. [57] reached the same conclusion in research on visual working memory operating as part of a seamless whole. This seamless combination of information from the left and right hemispheres may determine reality similarly in the social dimension, where information passes from one debating team to another, or from the prosecuting attorney to the defense attorney to the jury and the judge, with ideas flowing from one side of a debate and triggering a response from the other. Each side is able to identify the key idea summoned by the winning side, but is sometimes unsure of how it arose or was prompted; together, they stitch reality into a whole. This underscores the value of the bicameral U.S. Congress and the checks and balances afforded by the U.S. Constitution, but it also highlights how an audience makes causal sense while immersed in a fast-moving narrative as the key idea takes shape and unifies the narrative into a coherent whole [101].
Third, causal sense is important in building a narrative over time for a story, a movie, or a communication between a machine and its operator. However, what happens when uncertainty or conflict beclouds an ongoing narrative? Then, among free humans, a unified narrative is spontaneously displaced by debate between bistable, opposing viewpoints in a search for the least vulnerable path forward [46]. Bistability admits multiple interpretations [55]; however, the brain only “sees” one interpretation of a bistable reality at a single moment in time [54]. Given Gazzaniga’s [56] finding that the two halves of the brain are independent, and Brincat et al.’s [57] finding that these independent halves stitch sensory information together into a coherent view of reality, it is important for AI to address this process further, and it has: AI has recently engaged champion debaters, a contest in which “humans still prevail,” but the AI is improving over time [117].
Lastly, in conclusion, the science of autonomous human–machine systems has to be open, applicable in the field and practical; once a discovery has been made in the field, however, it should be replicated in the laboratory and used to build new theory [23]. While traditional causal interpretations of reality may suffice in known contexts (roadways, airports, mining operations, etc.), we have challenged the traditional model of interaction in situations where autonomous teams face uncertainty or conflict over debated narratives (e.g., see [116,117]), and in the rational design of the best human–machine teams. By separating team structure and performance, we extended our theory of interdependent complementarity, borrowed from Bohr, to propose a new direction for research. With this, we discovered that competition between evenly matched teams hinges on vulnerability, characterized by excess SEP and reduced MEP, which we believe generalizes to autonomous human–machine systems. We conclude that Shannon entropy generated by independent and identically distributed (i.i.d.) data does not generalize; however, interdependent complementarity does. This supports our call for a new science of autonomous human–machine teams and systems [118].
To generalize, autonomous human–machine teams operating in a less free system will be vulnerable to command decisions, less fit to make decisions in the interests of their society, at an evolutionary disadvantage and less able to step in to take over when a human leader becomes dysfunctional (e.g., when a copilot is attempting to commit suicide).

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Nosek, B. Open Collaboration of Science: Estimating the reproducibility of psychological science. Science 2015, 349, 943. [Google Scholar] [CrossRef]
  2. Lawless, W.F. The entangled nature of interdependence. Bistability, irreproducibility and uncertainty. J. Math. Psychol. 2017, 78, 51–64. [Google Scholar] [CrossRef]
  3. Blanton, H.; Klick, J.; Mitchell, G.; Jaccard, J.; Mellers, B.; Tetlock, P.E. Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT. J. Appl. Psychol. 2009, 94, 567–582. [Google Scholar] [CrossRef] [PubMed]
  4. Baumeister, R.F.; Campbell, J.D.; Krueger, J.I.; Vohs, K.D. Exploding the self-esteem myth. Sci. Am. 2005, 292, 84–91. Available online: https://www.uvm.edu/~wgibson/PDF/Self-Esteem%20Myth.pdf (accessed on 15 January 2006). [CrossRef]
  5. Hagger, M.S.; Chatzisarantis, N.; Alberts, H.; Anggono, C.O.; Batailler, C.; Birt, A.R.; Brand, R.; Brandt, M.; Brewer, G.; Bruyneel, S.; et al. A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspect. Psychol. Sci. 2016, 11, 546–573. [Google Scholar] [CrossRef]
  6. Singal, J. Psychology’s Favorite Tool for Measuring Racism Isn’t up to the Job. Almost Two Decades after Its Introduction, the Implicit Association Test Has Failed to Deliver on Its Lofty Promises. New Yorker Magazine. 2017. Available online: https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html (accessed on 21 August 2021).
  7. Tetlock, P.E.; Gardner, D. Superforecasting: The Art and Science of Prediction; Crown: New York, NY, USA, 2015. [Google Scholar]
  8. Baumeister, R.F.; Vohs, K.D. Self-Regulation, ego depletion, and motivation. Soc. Personal. Psychol. Compass 2007, 1, 115–128. [Google Scholar] [CrossRef]
  9. Futoma, J.; Simons, M.; Panch, T.; Doshi-Velez, F.; Celi, L.A. The myth of generalisability in clinical research and machine learning in health care. Lancet 2020, 2, e489–e492. [Google Scholar] [CrossRef]
  10. Wang, Y.; Yao, Q.; Kwok, J.T.; Ni, L.M. Generalizing from a few examples: A survey on few-shot learning. ACM Comput. Surv. 2020, 53, 63. [Google Scholar] [CrossRef]
  11. Peterson, J.C.; Bourgin, D.D.; Agrawal, M.; Reichman, D.; Griffiths, T.L. Using large-scale experiments and machine learning to discover theories of human decision-making. Science 2021, 372, 1209–1214. [Google Scholar] [CrossRef]
  12. Schölkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Towards Causal Representation Learning. arXiv 2021, arXiv:2102.11107. [Google Scholar] [CrossRef]
  13. National Security Commission on Artificial Intelligence (NSCAI). The Final Report 2021. Available online: https://reports.nscai.gov/final-report/table-of-contents/ (accessed on 15 October 2021).
  14. Endsley, M.R. Human-AI Teaming: State-of-the-Art and Research Needs; The National Academies of Sciences-Engineering-Medicine; National Academies Press: Washington, DC, USA, 2022; Available online: https://www.nap.edu/catalog/26355/human-ai-teaming-state-of-the-art-and-research-needs (accessed on 27 December 2021).
  15. Bohr, N. Causality and Complementarity. Philos. Sci. 1937, 4, 289–298. Available online: http://www.jstor.org/stable/184445 (accessed on 28 February 2021). [CrossRef]
  16. James, W.; Burkhardt, F.; Bowers, F.; Skrupskelis, I.K. The Principles of Psychology; Dover Publications: Mineola, NY, USA, 1890. [Google Scholar]
  17. Bohr, N. Science and the unity of knowledge. In The Unity of Knowledge; Leary, L., Ed.; Doubleday: New York, NY, USA, 1955; pp. 44–62. [Google Scholar]
  18. Bisbey, T.M.; Reyes, D.L.; Traylor, A.M.; Salas, E. Teams of psychologists helping teams: The evolution of the science of team training. Am. Psychol. 2019, 74, 278–289. [Google Scholar] [CrossRef] [PubMed]
  19. Martyushev, L.M. Entropy and entropy production: Old misconceptions and new breakthroughs. Entropy 2013, 15, 1152–1170. [Google Scholar] [CrossRef]
  20. Lawless, W.F. The Interdependence of Autonomous Human-Machine Teams: The Entropy of Teams, but Not Individuals, Advances Science. Entropy 2019, 21, 1195. [Google Scholar] [CrossRef]
  21. Schrödinger, E. Discussion of Probability Relations Between Separated Systems. Proc. Camb. Philos. Soc. 1935, 31, 555–563. [Google Scholar] [CrossRef]
  22. Walden, D.D.; Roedler, G.J.; Forsberg, K.J.; Hamelin, R.D.; Shortell, T.M. (Eds.) Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, 4th ed.; Prepared by the International Council on Systems Engineering (INCOSE-TP-2003-002-04); John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  23. Lewin, K. Field Theory in Social Science: Selected Theoretical Papers; Cartwright, D., Ed.; Harper & Brothers: New York, NY, USA, 1951. [Google Scholar]
  24. Lawless, W.F. Toward a physics of interdependence for autonomous human-machine systems: The case of the Uber fatal accident, 2018. Front. Phys. 2022, 10, 879171. [Google Scholar] [CrossRef]
  25. Mann, R.P. Collective decision making by rational individuals. Proc. Natl. Acad. Sci. USA 2018, 115, E10387–E10396. [Google Scholar] [CrossRef]
  26. USDOT. USDOT Automated Vehicles Activities; US Department of Transportation: Washington, DC, USA, 2021. Available online: https://www.transportation.gov/AV (accessed on 27 February 2021).
  27. Saracco, R. Autonomous Systems in Finance. IEEE Future Systems. 2019. Available online: https://cmte.ieee.org/futuredirections/2019/08/06/autonomous-systems-in-finance/ (accessed on 27 February 2021).
  28. Harel, D.; Marron, A.; Sifakis, J. Autonomics: In search of a foundation for next-generation autonomous systems. Proc. Natl. Acad. Sci. USA 2020, 117, 17491–17498. [Google Scholar] [CrossRef]
  29. Bechai, D. Northrop Grumman Wins as Chinese and Russian Hypersonic Threats Increase. Seeking Alpha. 2021. Available online: https://seekingalpha.com/article/4408686-northrop-grumman-wins-chinese-and-russian-hypersonic-threat (accessed on 27 February 2021).
  30. Editors. “Ready for weapons with free will?” New York Times, 26 June 2019. Available online: https://www.nytimes.com/2019/06/26/opinion/weapons-artificial-intelligence.html (accessed on 15 August 2022).
  31. Kissinger, H. Diplomacy; Simon & Schuster: New York, NY, USA, 1994. [Google Scholar]
  32. Smith, A. An Inquiry into the Nature and Causes of the Wealth of Nations; University of Chicago Press: Chicago, IL, USA, 1977. [Google Scholar]
  33. Freer, R.D.; Perdue, W.C. Civil Procedure; Anderson: Cincinnati, OH, USA, 1996. [Google Scholar]
  34. von Clausewitz, C. On War; Graham, J.J., Translator, of Vom Kriege; Clausewitz Bookstores: London, UK, 1832; Chapter 9. [Google Scholar]
  35. Simpson, E. War from the Ground Up. Twenty-First Century Combat as Politics (Crises in World Politics); Oxford University Press: Oxford, UK, 2012. [Google Scholar]
  36. Madison, J. The Federalist No. 51. The Structure of the Government Must Furnish the Proper Checks and Balances Between the Different Departments. In The Federalist Papers; Hamilton, A., Madison, J., Jay, J., Eds.; Bantam Books: Toronto, ON, USA, 1788. [Google Scholar]
  37. Sofge, D. (Referee); Mittu, R. (Con Bet); Lawless, W.F. (Pro Bet). AI Bookie Bet: How likely is it that an AI-based system will self-authorize taking control from a human operator? AI Mag. 2019, 40, 79–84. [Google Scholar]
  38. Roethlisberger, F.; Dickson, W. Management and the Worker; Cambridge University Press: Cambridge, UK, 1939. [Google Scholar]
  39. Roach, T.N.F. Use and Abuse of Entropy in Biology: A Case for Caliber. Entropy 2020, 22, 1335. [Google Scholar] [CrossRef]
  40. Lawless, W.F.; Sofge, D.A. The Intersection of Robust Intelligence and Trust: Hybrid Teams, Firms and Systems. In Autonomy and Artificial Intelligence: A Threat or Savior? Lawless, W.F., Mittu, R., Sofge, D., Russell, S., Eds.; Springer: New York, NY, USA, 2017; pp. 255–270. [Google Scholar]
  41. Lawless, W.F.; Mittu, R.; Sofge, D.A.; Hiatt, L. Editorial (Introduction to the Special Issue). Artificial intelligence (AI), autonomy and human-machine teams: Interdependence, context and explainable AI. AI Mag. 2019, 40, 5–13. [Google Scholar]
  42. Pearl, J. Reasoning with Cause and Effect. AI Mag. 2002, 23, 95–111. Available online: https://aaai.org/ojs/index.php/aimagazine/article/download/1612/1511 (accessed on 15 December 2018).
  43. Pearl, J.; Mackenzie, D. AI Can’t Reason Why. The current data-crunching approach to machine learning misses an essential element of human intelligence. Wall Str. J. 2018. Available online: https://www.wsj.com/articles/ai-cant-reason-why-1526657442 (accessed on 15 December 2018).
  44. NTSB. Vehicle Automation Report. National Transportation Safety Board. 2019. Available online: https://dms.ntsb.gov/pubdms/search/document.cfm?docID=477717&docketID=62978&mkey=96894 (accessed on 3 December 2020).
  45. Kenny, D.A.; Kashy, D.A.; Bolger, N. Data analyses in social psychology. In Handbook of Social Psychology, 4th ed.; Gilbert, D.T., Fiske, S.T., Lindzey, G., Eds.; McGraw-Hill: Boston, MA, USA, 1998; Volume 1, pp. 233–265. [Google Scholar]
  46. Lawless, W.F. Quantum-Like Interdependence Theory Advances Autonomous Human–Machine Teams (A-HMTs). Entropy 2020, 22, 1227. [Google Scholar] [CrossRef]
  47. Jones, E.E. Major developments in five decades of social psychology. In The Handbook of Social Psychology; Gilbert, D.T., Fiske, S.T., Lindzey, G., Eds.; McGraw-Hill: Boston, MA, USA, 1998; Volume I, pp. 3–57. [Google Scholar]
  48. Conant, R.C. Laws of information which govern systems. IEEE Trans. Syst. Man Cybern. 1976, 6, 240–255. [Google Scholar] [CrossRef]
  49. Cooke, N.J.; Lawless, W.F. Effective Human-Artificial Intelligence Teaming. In Engineering Science and Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  50. Committee on the Science of Team Science; Board on Behavioral, Cognitive, and Sensory Sciences; Division of Behavioral and Social Sciences and Education; National Research Council. Enhancing the Effectiveness of Team Science; Cooke, N.J., Hilton, M.L., Eds.; National Academies Press: Washington, DC, USA, 2015. [Google Scholar]
  51. Lawless, W.F. The physics of teams: Interdependence, measurable entropy and computational emotion. Front. Phys. 2017, 5, 30. [Google Scholar] [CrossRef]
  52. Lawless, W.F. Interdependence for human-machine teams. Found. Sci. 2021, 26, 129–142. [Google Scholar] [CrossRef]
  53. Miller, G. What is the biological basis of consciousness? Science 2005, 309, 79. [Google Scholar] [CrossRef]
  54. Eagleman, D.M. Visual illusions and neurobiology. Nat. Rev. Neurosci. 2001, 2, 920–926. [Google Scholar] [CrossRef]
  55. Wang, M.; Arteaga, D.; He, B.J. Brain mechanisms for simple perception and bistable perception. Proc. Natl. Acad. Sci. USA 2013, 110, E3350–E3359. [Google Scholar] [CrossRef]
  56. Gazzaniga, M.S. Who’s in Charge? Free Will and the Science of the Brain; Ecco: New York, NY, USA, 2011. [Google Scholar]
  57. Brincat, S.L.; Donoghue, J.A.; Mahnke, M.K.; Kornblith, S.; Lundqvist, M.; Miller, E.K. Interhemispheric transfer of working memories. Neuron 2021, 109, 1055–1066. [Google Scholar] [CrossRef]
  58. de León, M.S.P.; Bienvenu, T.; Marom, A.; Engel, S.; Tafforeau, P.; Warren, J.L.A.; Lordkipanidze, D.; Kurniawan, I.; Murti, D.B.; Suriyanto, R.A.; et al. The primitive brain of early Homo. Science 2021, 372, 165–171. [Google Scholar] [CrossRef] [PubMed]
  59. Weinberg, S. The Trouble with Quantum Mechanics, The New York Review of Books. 2017. Available online: http://www.nybooks.com/articles/2017/01/19/trouble-with-quantum-mechanics/ (accessed on 19 January 2017).
  60. Weinberg, S. Steven Weinberg and the Puzzle of Quantum Mechanics, Replies by N. David Mermin, Jeremy Bernstein, Michael Nauenberg, Jean Bricmont, and Sheldon Goldstein; et al. In Response to: The Trouble with Quantum Mechanics from the 19 January 2017 Issue; The New York Review of Books. 2017. Available online: http://www.nybooks.com/articles/2017/04/06/steven-weinberg-puzzle-quantum-mechanics (accessed on 6 April 2017).
  61. Ginsburg, R.B. Supreme Court of the United States; American Electric Power Co. Inc., v. Connecticut: Columbus, OH, USA, 2011; No. 10–174. Available online: http://www.supremecourt.gov/opinions/10pdf/10-174.pdf (accessed on 11 May 2017).
  62. Amadae, S.M. Rational Choice Theory. Political Science and Economics. In Encyclopaedia Britannica; Encyclopaedia Britannica, Ltd.: London, UK, 2017; Available online: https://www.britannica.com/topic/rational-choice-theory (accessed on 17 November 2017).
  63. Klein, P.G. The State of Modern Economics. Austrian 2021, 7, 4–15. Available online: https://mises.org/library/state-modern-economics (accessed on 1 December 2021).
  64. Thagard, P. Cognitive Science. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2019; Available online: https://plato.stanford.edu/archives/spr2019/entries/cognitive-science (accessed on 15 November 2020).
  65. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]
  66. Cameron, J.D. Stasi. In Encyclopedia Britannica; Encyclopedia Britannica, Ltd.: London, UK, 2021; Available online: https://www.britannica.com/topic/Stasi (accessed on 20 August 2021).
  67. Cummings, J. Team Science Successes and Challenges. National Science Foundation Sponsored Workshop on Fundamentals of Team Science and the Science of Team Science (June 2), Bethesda MD. 2015. Available online: https://www.ohsu.edu/xd/education/schools/school-of-medicine/departments/clinical-departments/radiation-medicine/upload/12-_cummings_talk.pdf (accessed on 2 June 2015).
  68. Suleiman, Y. A War of Words. Language and Conflict in the Middle East; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  69. WP. White Paper. European Governance (COM (2001) 428 Final; Brussels, 25.7.2001); Commission of the European Community: Brussels, Belgium, 2001. [Google Scholar]
  70. Lawless, W.F.; Akiyoshi, M.; Angjellari-Dajcic, F.; Whitton, J. Public consent for the geologic disposal of highly radioactive wastes and spent nuclear fuel. Int. J. Environ. Stud. 2014, 71, 41–62. [Google Scholar] [CrossRef]
  71. McConnell, J.M. Vision 2015. A Globally Networked and Integrated Intelligence Enterprise. The Director of National Intelligence. 2015. Available online: https://www.dni.gov/files/documents/Newsroom/Reports%20and%20Pubs/Vision_2015.pdf (accessed on 21 June 2021).
  72. James, L.D.; LtGen, U.S.A.F. Delivering Decision Advantage. Air Space Power J. 2012, 6, 4–10. Available online: https://www.airuniversity.af.edu/Portals/10/ASPJ/journals/Volume-26_Issue-6/SLP-James.pdf (accessed on 15 January 2013).
  73. Hughes, J.; Maxwell, J.R.; Weiss, L. Reimagine Decision Making to Improve Speed and Quality. Inefficient Decision Making Wastes Time, Money and Productivity. As Leaders Respond to Today’s Paradigm Shift, Companies Can Pursue Four Actions to Adopt and Sustain High-Velocity Decision Making; McKinsey, Company: New York, NY, USA, 2020; Available online: https://www.mckinsey.com/business-functions/organization/our-insights/the-organization-blog/reimagine-decision-making-to-improve-speed-and-quality (accessed on 22 June 2021).
  74. Cohn, P. Be Decisive to Improve Sports Performance. Peak Performance Sports. 2021. Available online: https://www.peaksports.com/sports-psychology-blog/be-decisive-to-improve-sports-performance/ (accessed on 22 June 2021).
  75. Weiss, J.; Hughes, J. Want Collaboration?: Accept—and Actively Manage—Conflict. Harvard Business Review: 2005. Available online: https://hbr.org/2005/03/want-collaboration-accept-and-actively-manage-conflict (accessed on 20 August 2021).
  76. Loftus, P.; Leary, A. J&J Takes Over Contractor’s COVID-19 Vaccine Manufacturing Plant. The health-products giant assumes full responsibility for making its vaccine’s main ingredient at a plant that recently suffered a quality lapse. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/j-j-takes-over-contractors-covid-19-vaccine-manufacturing-plant-11617557744 (accessed on 25 June 2021).
  77. Glueck, K. Adams Leads in N.Y.C. Mayoral Primary, but Ranked-Choice Awaits. Voters on Tuesday Took Part in the City’s First Mayoral Election Using Ranked-Choice Voting, a System That May Delay the Declaration of a Winner until Mid-July. New York Times. 2021. Available online: https://www.nytimes.com/2021/06/22/nyregion/nyc-mayor.html? (accessed on 23 June 2021).
  78. Alpert, L.I.; Lombardo, C. Meet the Hedge Fund Boss Who Just Bought Tribune’s Newspapers. Heath Freeman, President of Alden Global Capital, Is Known for Pushing Big Cost Reductions, Which He Says Help to Save Newspapers. Bangkok Post. 2021. Available online: https://www.bangkokpost.com/business/2121731/meet-the-hedge-fund-boss-who-just-bought-tribunes-newspapers (accessed on 25 June 2021).
  79. Andrade, G.; Stafford, E. Investigating the Economic Role of Mergers. J. Corp. Financ. 2004, 10, 1–36. [Google Scholar] [CrossRef]
  80. Mattis, J. Summary of the 2018 National Defense Strategy of The United States of America Sharpening the American Military’s Competitive Edge; US Department of Defense: Washington, DC, USA, 2018. Available online: https://dod.defense.gov/Portals/1/Documents/pubs/2018-National-Defense-Strategy-Summary.pdf (accessed on 24 February 2021).
  81. Hopkins, J.S. To Make More COVID-19 Vaccines, Rival Drugmakers Team Up. Sanofi and Novartis Are among the Big Pharmaceutical Companies That Have Agreed to Help Make a Competitor’s Shots. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/to-make-more-covid-19-vaccines-rival-drugmakers-team-up-11614081601 (accessed on 24 February 2021).
  82. Dummett, B. Allied Universal Set to Create Private-Security Giant with G4S Deal. Battle to Acquire G4S Was All but Won by a $5.3 Billion Bid from Warburg-Backed Allied Universal, as a Rival Bidder Bowed Out. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/allied-universal-set-to-create-private-security-giant-with-g4s-deal-11613987683 (accessed on 23 February 2021).
  83. Boston, W. Northvolt Purchases U.S. Startup Cuberg to Boost Battery Technology. Swedish Battery Maker Accelerates Ramp-Up as European EV Sales Surge. Wall Street J. 2021. Available online: https://www.wsj.com/articles/northvolt-purchases-u-s-startup-cuberg-to-boost-battery-technology-11615370401 (accessed on 3 November 2021).
  84. Sebastian, D. Rogers, Shaw to Combine in $16 Billion Deal. Canadian communications companies seek to invest in rollout of 5G wireless service. Wall Street J. 2021. Available online: https://www.wsj.com/articles/rogers-shaw-to-combine-in-16-billion-deal-11615814257?mod=hp_lista_pos2 (accessed on 15 March 2021).
  85. Durbin, D.A. DoorDash buys robot food prep company Chowbotics. Seattle Times. 2021. Available online: https://www.seattletimes.com/business/doordash-buys-robot-food-prep-company-chowbotics/ (accessed on 29 March 2021).
  86. Patrick, M. Credit Suisse Sidelines Employees Who Oversaw Greensill Funds. Swiss bank’s European head of asset management and two fund managers are no longer performing their roles, according to an internal memo. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/credit-suisse-employees-who-oversaw-greensill-funds-step-aside-11615367827 (accessed on 11 March 2021).
  87. Vantrappen, H.; Polastro, E. How to Know If a Spin-Off Will Succeed? In Harvard Business Review; Harvard Business Publisher: Boston, MA, USA, 2015; Available online: https://hbr.org/2015/02/how-to-know-if-a-spin-off-will-succeed (accessed on 15 March 2015).
  88. FitzGerald, D.; Gottfried, M. AT&T Carves Out Pay-TV Business in Deal With TPG. Private-equity giant to pay $1.8 billion for 30% interest in new DirecTV unit. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/at-t-to-spin-off-directv-unit-in-deal-with-tpg-11614288845 (accessed on 26 February 2021).
  89. Hernandez, D.; Fitch, A. IBM’s Retreat from Watson Highlights Broader AI Struggles in Health. Watson Health was billed as a ‘bet the ranch’ move by Big Blue; now the company is prepared to throw in the towel. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/ibms-retreat-from-watson-highlights-broader-ai-struggles-in-health-11613839579 (accessed on 23 February 2021).
  90. Dvorak, P. SoftBank Sells Renewable-Energy Unit in India. Adani conglomerate takes control in deal that values solar-power operator at $3.5 billion. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/softbank-sells-renewable-energy-unit-in-india-11621423025 (accessed on 25 June 2021).
  91. Maidenberg, M. Dell Plans Spinoff of $52 Billion Stake in VMware. PC maker intends to use net proceeds from the deal to pay down debt. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/dell-plans-spinoff-of-52-billion-stake-in-vmware-11618434822 (accessed on 25 June 2021).
  92. Demos, T. Citigroup’s Long Trip Home. An exit from some global retail markets and a focus on wealth are moves investors should watch closely. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/citigroups-long-trip-home-11623148200 (accessed on 25 June 2021).
  93. Clark, S. HSBC to Take $3 Billion in Losses on Sale of Troubled French Bank. Lender controlled by private-equity firm Cerberus will buy HSBC’s French retail operations for a symbolic one euro. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/hsbc-to-take-3-billion-in-losses-on-sale-of-troubled-french-bank-11624027292 (accessed on 25 June 2021).
  94. Schlesinger, J.M. DuPont’s Up-and-Down History Shaped Biden’s Views on Business. ’Uncle Dupie’ was a leading employer and philanthropist in Delaware, but declining profits and global competition put it in the crosshairs of shareholder activist Nelson Peltz. Wall Str. J. 2020. Available online: https://www.wsj.com/articles/duponts-up-and-down-history-shaped-bidens-views-on-business-11606157282 (accessed on 25 June 2021).
  95. Lawless, W.F.; Mittu, R.; Moskowitz, I.S.; Sofge, D.; Russell, S. Cyber-(in)security, revisited: Proactive cyber-defenses, interdependence and autonomous human-machine teams (A-HMTs). In Adversary Aware Learning Techniques and Trends in Cyber Security; Dasgupta, P., Collins, J.B., Mittu, R., Eds.; Springer Nature: London, UK, 2020. [Google Scholar]
  96. Boston, W. Volkswagen to Seek Damages from Former CEO Winterkorn in Diesel Scandal. Decision caps years-long internal investigation into the origins of one of Europe’s biggest corporate scandals. Wall Str. J. 2021. Available online: https://www.wsj.com/articles/volkswagen-to-seek-damages-from-former-ceo-winterkorn-in-diesel-scandal-11616773838 (accessed on 29 March 2021).
  97. Lambert, N. Brits-Krieg: The Strategy of Economic Warfare. As national economies become increasingly dependent on and intertwined with cyberspace, the topic of cybersecurity moves steadily up the defense agenda. In Understanding Cyber Conflict: 14 Analogies; Perkovich, G., Levite, A., Eds.; Georgetown University Press: Washington, DC, USA, 2017; Chapter 8. [Google Scholar]
  98. Rodriguez, S. Former Facebook insiders explain why the company is making such a big fuss over Apple’s upcoming privacy change. CNBC 2021. Available online: https://www.cnbc.com/2021/03/11/why-facebook-is-so-upset-about-apple-idfa-change-insiders-spill.html (accessed on 14 March 2021).
  99. Sider, A. ‘Predatory and Opportunistic’: Southwest Airlines Seizes the Moment as Rivals Struggle. Through its history, the no-frills airline has leapt at opportunities to expand into new cities. It’s doing it again. Wall Str. J. 2020. Available online: https://www.wsj.com/articles/southwest-airlines-covid-expansion-airports-11605296205 (accessed on 25 June 2021).
  100. Cohen, L. Time-frequency analysis: Theory and applications. In Prentice Hall Signal Processing Series; Prentice Hall Inc.: Hoboken, NJ, USA, 1995. [Google Scholar]
  101. Hooft, G. ’t. Time, the Arrow of Time, and Quantum Mechanics. Front. Phys. 2013, 134, 4002. [Google Scholar] [CrossRef]
  102. Fraisse, P.; West, L.J. Time perception. Encyclopedia Britannica. 2009. Available online: https://www.britannica.com/science/time-perception (accessed on 28 June 2021).
  103. Le Poidevin, R. The Experience and Perception of Time. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2019; Available online: https://plato.stanford.edu/archives/sum2019/entries/time-experience/ (accessed on 28 June 2021).
  104. Rovelli, C. The Order of Time; Riverhead Books: New York, NY, USA, 2018. [Google Scholar]
  105. Abril, D. Facebook Is Still Taking Shots at Apple. Fortune. Available online: https://fortune.com/2021/04/29/facebook-apple-criticism-q1-earnings-privacy/ (accessed on 27 May 2021).
  106. McLeod, C. Trust. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford University: Stanford, CA, USA, 2020; Available online: https://plato.stanford.edu/archives/fall2020/entries/trust/ (accessed on 30 May 2021).
  107. Tzu, S. The Art of War; Sawyer, R.D. (Tr), Ed.; Basic Books: New York, NY, USA, 1994. [Google Scholar]
  108. Hayek, F. The Road to Serfdom; University of Chicago Press: Chicago, IL, USA, 1944. [Google Scholar]
  109. Baker, G. Michael Hayden Says U.S. Is Easy Prey for Hackers. Former CIA and NSA Chief Says ‘Shame on Us’ for Not Protecting Critical Information Better. Interview by the Wall Street Journal’s Editor in Chief. 2015. Available online: http://www.wsj.com/articles/michael-hayden-says-u-s-is-easy-prey-for-hackers-1434924058 (accessed on 21 June 2015).
  110. Ratcliffe, J. China Is National Security Threat No. 1. Resisting Beijing’s attempt to reshape and dominate the world is the challenge of our generation. Wall Str. J. 2020. Available online: https://www.wsj.com/articles/china-is-national-security-threat-no-1-11607019599 (accessed on 8 May 2021).
  111. Beech, H.; Sukhartono, M. Indonesia’s Doctors Got Vaccinated with Sinovac, and Got Sick. Facing increased cases and a deadly variant, the Indonesian health system is under strain, as doctors become patients. New York Times. 2021. Available online: https://www.nytimes.com/2021/06/25/world/asia/indonesia-covid-sinovac.html? (accessed on 25 June 2021).
  112. Eberstadt, N. Biden and Moon Are Getting North Korea Wrong. New York Times. 2021. Available online: https://www.nytimes.com/2021/05/26/opinion/north-korea-biden-moon-jae-in.html (accessed on 26 June 2021).
  113. Sang-Hun, C. They Were Promised a Socialist Paradise, and Ended Up in ‘Hell’. Lee Tae-kyung has campaigned tirelessly in South Korea to share the story of the 93,000 people who migrated from Japan to North Korea from 1959 to 1984 and were forced to stay. New York Times. 2021. Available online: https://www.nytimes.com/2021/05/05/world/asia/north-korea-japan-migration.html (accessed on 26 June 2021).
  114. Mattioli, D. Amazon Accused of Using Monopoly Power as E-Commerce ‘Gatekeeper’. Online retailer says it welcomes regulatory scrutiny; congressional report mirrors findings from Wall Street Journal investigations. Wall Str. J. 2020. Available online: https://www.wsj.com/articles/amazon-accused-of-using-monopoly-power-in-rise-as-e-commerce-gatekeeper-11602084168 (accessed on 8 May 2021).
  115. Von Neumann, J. Theory of Self-Reproducing Automata; Burks, A.W., Ed.; University of Illinois Press: Urbana, IL, USA, 1966. [Google Scholar]
  116. Domínguez, E. Going Beyond the Classic News Narrative Convention: The Background to and Challenges of Immersion in Journalism. Front. Digit. Humanit. 2017, 4, 10. [Google Scholar] [CrossRef]
  117. Slonim, N.; Bilu, Y.; Alzate, C.; Bar-Haim, R.; Bogin, B.; Bonin, F.; Choshen, L.; Cohen-Karlik, E.; Dankin, L.; Edelstein, L.; et al. An autonomous debating system. Nature 2021, 591, 379–384. [Google Scholar] [CrossRef]
  118. Davies, P. Does new physics lurk inside living matter? Phys. Today 2020, 73, 34–40. [Google Scholar] [CrossRef]
Figure 1. The y axis, or imaginary axis, represents beliefs held not in physical reality but in the minds of the debaters and re-constructed in the collective minds of an audience. When no decision for action is forthcoming, especially when there is no audience and the debate is endless, the debate is located at points 1 and 2; in the extreme, it is observed as a “war of words” [68]. The x axis represents physical reality (e.g., action). Often encouraged by an audience (e.g., a jury), when debaters reach a compromise for action, that decision is represented by points 3 and 4. When debate is stopped by resistance from the audience, with a decision acceptable to the majority of the audience, that decision is reflected by points 5 and 6 ([20,46]).
Figure 2. Rotations over time (compressed) to reflect the NRC’s decision to allow the DOE to proceed based on an orthogonal rotation of positive feedback, then negative, then positive, etc., which continued for almost 7 years until the NRC conceded to a “yes” decision. Since then, six more tanks have been closed safely under regulatory oversight (https://www.srs.gov/general/news/factsheets/srr_radioactive_tank_farms.pdf (accessed on 26 June 2022)).
Figure 3. Shown in the lower left in this figure is the ground state for SEP (Equation (5)). Shown in the upper right is the excited state for SEP (Equation (9)). In the ground state for SEP, an extra arrow denotes that the interference flowing through the team produces the team’s more productive state. For the excited state, however, all of a team’s free energy is being directed at tearing apart the team’s structure, maintaining its interference in a destructive state.
Table 1. The correlation matrix (CM) calculated from the data for the Middle Eastern North African (MENA) a–d countries (the significance of all two-way associations was determined with two-way Pearson correlations: p < 0.05 for correlations of 0.51 or above; p < 0.02 for those of 0.59 or above; p < 0.01 for those of 0.64 or stronger; and p < 0.001 for those of 0.76 or greater; for this data set, N = 15, giving 13 degrees of freedom).
Valences of Factors (in Parentheses)              HDI     Freedom   Population   PPR      GDP     CPI
Human Development Index (HDI; higher) a           1
Freedom (more) c                                  0.53    1
Population (greater) b                           −0.31   −0.70      1
Patent productivity ranking (PPR; inverted) b    −0.81   −0.60      0.28         1
GDP (greater) b                                   0.24   −0.47      0.70        −0.17     1
Corruption Perceptions Index (CPI; inverted) d    0.76    0.55     −0.48        −0.76     0.06    1
Notes on Sources: a. Human development insights; education: https://hdr.undp.org/data-center/country-insights#/ranks (accessed on 15 August 2022). b. Patent productivity rankings (PPR); GDP; population (five MENA countries were omitted due to their lack of a patent productivity ranking: Iraq; Libya; Palestine; Sudan; Syria): https://www.globalinnovationindex.org/Home (accessed on 15 August 2022). c. Freedom Index: https://worldpopulationreview.com/country-rankings/freedom-index-by-country (accessed on 15 August 2022). d. Corruption Perceptions Index: https://www.transparency.org/en/cpi/2021 (accessed on 15 August 2022).
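For transparency about how a matrix of this kind is assembled, the sketch below shows one way to compute pairwise Pearson correlations and their two-tailed p-values with SciPy; the data here are random placeholders, not the UN/MENA values tabulated above, and the variable names are only illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 15                                   # 15 countries -> 13 degrees of freedom
factors = {"HDI": rng.normal(size=n),
           "Freedom": rng.normal(size=n),
           "Population": rng.normal(size=n),
           "PPR": rng.normal(size=n),
           "GDP": rng.normal(size=n),
           "CPI": rng.normal(size=n)}

names = list(factors)
for a in names:
    row = []
    for b in names:
        r, p = stats.pearsonr(factors[a], factors[b])   # r and two-tailed p-value
        row.append(f"{r:5.2f}{'*' if p < 0.05 else ' '}")
    print(f"{a:>10}: " + " ".join(row))
# With N = 15 (df = 13), |r| >= 0.51 corresponds roughly to p < 0.05, consistent
# with the thresholds stated in the caption of Table 1.
```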
