Article

Info-Autopoiesis and the Limits of Artificial General Intelligence

Department of Mechanical Engineering, University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250, USA
Computers 2023, 12(5), 102; https://doi.org/10.3390/computers12050102
Submission received: 10 April 2023 / Revised: 2 May 2023 / Accepted: 4 May 2023 / Published: 7 May 2023

Abstract
Recent developments, set in motion by the ascending spiral of anticipated endless prospects for ChatGPT, promote artificial intelligence (AI) as an indispensable tool and commodity whose time has come. Yet the sinister specter of a technology with hidden and unmanageable attributes that might be harmful to society looms in the background, as does the likelihood that it will never deliver on the purported promise of artificial general intelligence (AGI). Currently, the prospects for the development of AI and AGI are more a matter of opinion than the result of a consistent methodological approach. Thus, there is a need to take a step back and develop a general framework from which to evaluate current AI efforts, one that also permits determination of the limits of its future prospects as AGI. To gain insight into the development of such a framework, a key question needs to be resolved: what is the connection between human intelligence and machine intelligence? This question demands a response because humans are at the center of AI creation, and without an understanding of how we become what we become, we have no chance of finding an answer. This work proposes info-autopoiesis, the self-referential, recursive, and interactive process of self-production of information, as the needed general framework. Info-autopoiesis shows how the key ingredient of information is fundamental to an insightful resolution of this crucial question and allows predictions as to the present and future of AGI.

1. Introduction

Generative Pre-trained Transformer 3 (GPT-3) was released on 11 June 2020 by OpenAI, a San Francisco-based artificial intelligence research laboratory. This third-generation program is capable of learning how to write like a human without being explicitly taught by a computer scientist. GPT-3 is trained on a massive amount of written text, drawn from millions of articles and books online, and produces work that has perfect grammar, correct punctuation, and no spelling mistakes. It may be used to write songs, stories, press releases, guitar tabs, interviews, essays, and technical manuals; after analyzing a text based on a user’s input, it can capture the style of writing and produce new articles in that style [1]. The creation of texts containing misleading or false information is something that is readily achieved.
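As a rough illustration of this kind of autoregressive text generation, the sketch below uses the open-source Hugging Face transformers library with the much smaller, publicly available GPT-2 model in place of GPT-3 (whose weights are not public); the prompt and generation parameters are arbitrary assumptions.

```python
# Minimal sketch of autoregressive text generation, assuming the Hugging Face
# "transformers" library is installed. GPT-2 substitutes for GPT-3 here, since
# GPT-3 itself is only reachable through OpenAI's hosted service.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a press release issued this morning, the laboratory announced"
# The model continues the prompt token by token, imitating the style of its
# training text; nothing in this process checks whether the output is true.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```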
DALL-E, in a nod to both “WALL-E”, the 2008 animated movie about an autonomous robot, and Salvador Dalí, the surrealist painter, was released on 5 January 2021, also by OpenAI. This technology lets users create digital images simply by describing what they want to see, and it can combine concepts, attributes, and styles [2,3]. It is also a worrisome development because the fake digital images it can generate may easily help spread online disinformation campaigns [4,5].
The subsequent launch of ChatGPT as a prototype by OpenAI on 30 November 2022 created the sensation that AI had achieved the status of mass use [6]. However, an Open Letter calling for a pause on AI experiments seems to accentuate the sinister specter of a technology with hidden and unmanageable attributes that might be harmful to society. It begins with the acknowledgment that “AI systems with human-competitive intelligence can pose profound risks to society and humanity as shown by extensive research and acknowledged by top AI labs…Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources”. It then identifies “AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control”.
Furthermore, the letter continues, “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” After this, a call is made for a pause of at least six months [7].
This Open Letter calls for the development and implementation of shared safety protocols, as well as refocusing research to make these systems “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal” (emphasis added), “to dramatically accelerate development of robust AI governance systems”, and to include “new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”
These statements show the unbridled imagination and optimism with which AI is welcomed and condemned at the same time. It is particularly noteworthy (see emphasis above) that these systems are to be “…aligned, trustworthy, and loyal”. This may be interpreted as a prediction of sentient AI systems, and it also reveals that the current prospects for the achievement or non-achievement of AI and AGI are more a matter of opinion than the result of a consistent methodological approach. This is true of researchers who predict sentient AGI or superintelligence as a matter of faith [8,9], as well as of those who consider AGI a myth promoted without a scientific basis [10]. Thus, there is a need to take a step back and develop a general framework (non-existent at present) from which to evaluate current AI efforts, one that also permits determination of the limits of its future prospects as AGI.
This is analogous to achieving flight in heavier-than-air vehicles, following the example of birds. Clearly, planes are not birds, but the Bernoulli principle is fundamental to the flight of both. In a similar way, we generally recognize that computers are not humans, but we seem to want to equate the inner hardware and software workings of computers to those of the human brain. Machine learning using neural networks is said to parallel the activities of the neural networks of the human brain. Does this mean that the accepted general framework for duplicating the human brain is that of neural networks? If that is the case, then incremental steps in understanding how combinations, permutations, and improved or newly developed neural networks perform will eventually yield AGI. However, what is the basis for believing that this will indeed be the end result? Currently, this is more a matter of faith than of solidly grounded knowledge.
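To make concrete what a neural network actually computes, the following minimal sketch (plain NumPy; layer sizes, weights, and input are arbitrary assumptions) shows that a forward pass consists of nothing more than matrix arithmetic and elementwise nonlinearities, i.e., purely formal operations whose outputs acquire meaning only when a human assigns it.

```python
# A minimal two-layer neural network forward pass: purely numerical operations
# on arrays. Layer sizes, weights, and the input vector are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # an input "stimulus" vector
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

h = np.maximum(0, x @ W1 + b1)       # hidden layer: matrix product plus ReLU
y = h @ W2 + b2                      # output layer: another matrix product
print(y)                             # two numbers; any "meaning" is assigned by us
```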
To gain insight into the development of a general framework, a key query needs resolution: what is the connection between human intelligence and machine intelligence? This is the crucial question that needs a response because humans are at the center of AI creation. Without an understanding of how we become what we become intelligence-wise, we have no chance of finding a solution as to how to make intelligent machines. A similar argument is posed by Larson [10], drawing on the distinction between ingenuity and intuition proposed by Turing [11]. Ingenuity is the capacity to subdivide a problem into parts that are then tackled in a mechanical way to reach a solution. Intuition, on the other hand, relies on human judgements whose basis we are incapable of discerning. This work proposes info-autopoiesis, the self-referential, recursive, and interactive process of self-production of information, as the needed general framework. Info-autopoiesis shows how the key ingredient of information is fundamental to an insightful resolution of this crucial question and allows predictions as to the present and future of AGI.
To gain some perspective on the concept of information, we have to be aware that it is a concept that generally eludes definition and explanation and has gained applicability in many spheres of human knowledge. For example, most scientists rely on a definition that postulates information to be a fundamental quantity in the Universe existing independently in the environment [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35]. This widely held but erroneous perspective can be summarized as “it from bit” [36]. In short, when something is postulated to explain everything, it is a good sign that it explains nothing.
An alternative explanation that naturalizes information relies on Gregory Bateson’s definition of information as “a difference which makes a difference” to a living being [37] (p. 453). This is the definition that is used as the basis for info-autopoiesis. Succinctly, info-autopoiesis (information self-creation/self-production) is defined as the individuated sensory commensurable, self-referential, recursive, and interactive homeorhetic feedback process relevant to all living beings in their endless quest to satisfy physiological and/or relational needs [38], leading to the naturalized, factual observation that self-produced information is not a fundamental quantity of the Universe. This puts sensorial percepts as the source of signals that play an important role in making the external environment meaningful to any living organism. This suggests that information does not pre-exist living beings in the environment, and if we indeed find information in the environment it is due to the actions of living beings.
To discuss the process of info-autopoiesis and its potential impact on AGI, this paper is divided into several sections. Firstly, the concept of information is naturalized by examining its etymological origins as well as Gregory Bateson’s characterization as “a difference which makes a difference.” Then, the process of info-autopoiesis is presented as central to the self-production of information. This includes consideration of the human organism-in-its-environment; and the auto-generation, processing, and transmission of information, including a critical examination of the mathematical theory of communication by Claude Shannon. Lastly, several examples are presented to illustrate the implications of info-autopoiesis for all human creations and AGI, as well as suggestions for overcoming any identified limitations.

2. What Is Information?

The concept of information, even from a cursory examination of the literature, has great diversity and reach. Because of its broad applicability in many fields of knowledge, a wide range of criteria seem to apply in its study [39,40,41,42]. The gamut includes Shannon or syntactic information [43], Bateson information [37], biological, heritable, or intrinsic information [12,13,44], functional information [45,46], non-heritable learned or creative information [29], measured physical information [29], and ecological information [47]. These various types certainly demonstrate the diversity of the concept, but what would be ideal is to identify the more general characteristics of information that naturalize its meaning.
If we inquire into the etymology of the word information, we find its Latin roots in the word informatio, which comes from the verb informare (to inform) in the sense of giving shape to something material, as well as the act of communicating knowledge to another person [41,48,49,50]. In addition, in parallel to its etymological origins, Bateson defines information as “a difference which makes a difference” [37] (p. 453). The common dynamic of these two approaches to information is the implicit notion of interactivity and recursivity. Its etymology reflects the human capacity to interact recursively with matter in the environment and with other similar beings to achieve an end. The Batesonian approach manifests the human ability for sensorial commensurable spatial/temporal comparison (between two instances) in pursuit of the satisfaction of its physiological (internal/external) and/or relational needs. The first, objective “difference” is sufficiently large in magnitude, yet suitably slow to be detectable by human sensory organs. The second, subjective “difference which makes a difference” is assessed by the human organism as able to satisfy its physiological (internal/external) and/or relational needs. These “differences” reflect the dynamic self-referential, interactive, and recursive acts of the human organism-in-its-environment in an ascending virtuous spiral of sensation-information-action-sensation. In summary, information is not a fundamental quantity of the Universe but rather, the most important element created by human organisms-in-their-environment to discover themselves in their environment.
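The two “differences” described above can be sketched as two successive filters: an objective one (is the change large enough, and slow enough, for a sensory element to register?) and a subjective one (does the registered change bear on a current need?). The thresholds and the toy need below are invented solely for illustration.

```python
# Illustrative sketch of Bateson's two "differences": detection and relevance.
# All thresholds and the needs list are assumptions made up for this example.
DETECTION_THRESHOLD = 0.5     # minimum magnitude a sensory element can register
MAX_RATE = 10.0               # changes faster than this are not resolvable

def objective_difference(prev, curr, rate):
    """A difference: large enough and slow enough to be sensed."""
    return abs(curr - prev) >= DETECTION_THRESHOLD and rate <= MAX_RATE

def difference_that_makes_a_difference(prev, curr, rate, needs):
    """A difference which makes a difference: it bears on a current need."""
    if not objective_difference(prev, curr, rate):
        return False
    return any(need["relevant"](curr) for need in needs)

needs = [{"name": "warmth", "relevant": lambda temp: temp < 15.0}]
print(difference_that_makes_a_difference(prev=20.0, curr=12.0, rate=1.0, needs=needs))  # True
```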
Another important aspect of Batesonian information is that “difference” and “idea” are found to be synonymous and homeorhetic [37] (p. 453). In addition, the endless self-referential, recursive, and interactive process of sensation-information-action-sensation related to Bateson’s ‘difference which makes a difference’ is illustrated by the actions of a woodcutter on a tree:
“Consider a tree and a man and an axe. We observe that the axe flies through the air and makes certain sorts of gashes in a pre-existing cut in the side of the tree. If now we want to explain this set of phenomena, we shall be concerned with differences in the cut face of the tree, differences in the retina of the man, differences in his central nervous system, differences in his efferent neural messages, differences in the behavior of his muscles, differences in how the axe flies, to the differences which the axe then makes on the face of the tree. Our explanation (for certain purposes) will go round and round that circuit. In principle, if you want to explain or understand anything in human behavior, you are always dealing with total circuits, completed circuits. This is the elementary cybernetic thought.” [37] (pp. 458–459).
This description evolves from Bateson’s cybernetic perspective of the world. The woodcutter performs self-referencing, interactive, and recursive labor to harvest wood to use to build a fire for warmth and/or food preparation, a description applicable to many job tasks pertinent to shaping matter in our environment. The word cybernetics is used here not in its homeostatic sense, but rather in a homeorhetic sense. This implies that the woodcutter’s activity converges towards a dynamic trajectory, following a moving and developing target during the activity, i.e., shaping the surface of the tree to make it fall. Repetition of an activity such as cutting down trees generally leads to skill improvement and efficiency, unlike homeostatic adaptations that tend to return to a state of equilibrium.
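The contrast between homeostatic and homeorhetic regulation can be sketched numerically: a homeostatic controller keeps returning to a fixed setpoint, while a homeorhetic one tracks a target that itself advances as the activity progresses (as the cut face of the tree develops). The gains, setpoint, and target trajectory are arbitrary assumptions.

```python
# Sketch contrasting homeostasis (fixed setpoint) with homeorhesis (moving target).
# Gains, the setpoint, and the target trajectory are arbitrary assumptions.
def homeostatic(state, setpoint=0.0, gain=0.3, steps=10):
    for _ in range(steps):
        state += gain * (setpoint - state)   # always pulled back to the same point
    return state

def homeorhetic(state, gain=0.3, steps=10):
    target = 0.0
    for _ in range(steps):
        target += 1.0                        # the goal itself advances (the cut deepens)
        state += gain * (target - state)     # the activity chases a developing trajectory
    return state, target

print(homeostatic(5.0))    # relaxes back toward the fixed setpoint 0.0
print(homeorhetic(0.0))    # follows the moving target upward
```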
In summary, the naturalization of information allows us to identify information as something palpable in our daily living. Information is something that we impart to matter and other living beings, noticing differences that permit the control of those actions. In addition, we internalize those changes as ideas that help us promote the ever-expanding and spiraling cycle of sensation-information-action-sensation.

3. Info-Autopoiesis

Info-autopoiesis (info = information; auto = self; poiesis = production/creation) is proposed as a general framework to answer the key question: what is the connection between human intelligence and machine intelligence? Info-autopoiesis is the self-referential, recursive, and interactive process of self-production of information. This is accomplished by finding the “difference which makes a difference” [37] (p. 453) from the spatially/temporally separated “Sensorial Signals” of the noisy environment in which all living beings live, in their motivated efforts to satisfy their physiological and/or relational needs to improve their ability to engage in their ever-changing environment [38,51,52,53,54]. Through their sensory organs, they discover the bountifulness of matter and/or energy as expressions of their environmental spatial/temporal motion/change.
Info-autopoiesis results in the generation of semantic (internal or endogenous) and syntactic (external or exogenous) information relevant to humans in their environment. Internal or semantic information generation is motivated by the individuated satisfaction of physiological (internal and external) and relational needs, where sensorial percepts play an important role and make the external environment meaningful. One characteristic of internal information is its inaccessibility, which may be remedied if an individual is willing to share its contents through external expressions using language, gestures, pictographs, musical instruments, sculptures, writing, coding, etc., which is syntactic in nature and corresponds to Shannon information. This means that Shannon/syntactic information is a metaphor for everything that is produced by all living beings. In the case of human beings, this includes all our artificial creations in the arts and sciences and all human artifacts which surround us. In this regard, recent research shows that [55,56,57] “Humanity has reached a new milestone in its dominance of the planet: human-made objects may now outweigh all of the living beings on Earth.
Roads, houses, shopping malls, fishing vessels, printer paper, coffee mugs, smartphones, and all the other infrastructure of daily life now weigh in at approximately 1.1 trillion metric tons—equal to the combined dry weight of all plants, animals, fungi, bacteria, archaea, and protists on the planet. The creation of this human-made mass has rapidly accelerated over the past 120 years: Artificial objects have gone from just three percent of the world’s biomass in 1900 to on par with it today. In addition, the amount of new stuff being produced every week is equivalent to the average body weight of all 7.7 billion people.” [56].
The implication is that there is no information in the environment or in the Universe independent of humans. For this reason, a need exists to critically examine the implications of the syntactic nature of externalized information on Artificial Intelligence. Recognizing the mediating role of information as a self-referential, recursive, and interactive process of sensation-information-action-sensation between the human organism-in-its-environment and matter/energy, it becomes necessary to detail how the generation of different types of information is carried out and the function that they fulfill.

3.1. Info-Autopoiesis and the Human Organism-in-Its-Environment

The representation in Figure 1 shows a human ORGANISM-in-its-ENVIRONMENT, illustrating the self-referential, interactive, and recursive process of sensation-information-action-sensation between the ORGANISM and the ENVIRONMENT. The direction of the arrowheads in the image of the human organism-in-the-environment in Figure 1 shows that the flow begins as ENVIRONMENTAL NOISE; it then transforms into Sensorial Signals after detection and transduction by the SENSES; which are then converted, in the box identified as INFO-AUTOPOIESIS, into useful/meaningful information for the organism-in-its-environment; and, as a result, elicits an ACTION that is exerted on the environment and identified as an ACTION RESULT.
Since its unicellular beginnings in the womb, the human organism-in-its-environment lives in a milieu where ENVIRONMENTAL NOISE is the norm. The ability of the human ORGANISM-in-its-ENVIRONMENT to distinguish what is relevant to satisfy its physiological (internal and external) and/or relational needs is what guides its actions. This is a constant for human beings during their lifetimes and drives an always changing homeorhetic becoming.
Internal and external circuits are distinguished in the dynamic of the human organism-in-its-environment. The internal homeorhetic cybernetic self-referential circuit is the one that makes effective the definition of information of Bateson [37] as “a difference that makes a difference” and is represented in Figure 1 as a box identified as INFO-AUTOPOIESIS. The external circuit allows the organism to influence its environment in a self-referential, interactive, and recursive way in line with the internal circuit. This external circuit is defined to begin from the ENVIRONMENTAL NOISE, which is admitted and processed by the organism in the INFO-AUTOPOIESIS box in response to its physiological (internal/external) and/or relational needs, which results in an ACTION that impacts the environment as an ACTION RESULT, as well as the SENSES of the human organism. The feedback of the ACTION into the human organism is by way of the SENSES and the ACTION RESULT that cuts through the ENVIRONMENTAL NOISE since it is an expected signal by the human organism-in-its-environment. The self-referential, interactive, and recursive process of sensation-information-action-sensation between the ORGANISM and the ENVIRONMENT in this external circuit allows for the achievement of greater efficiency of the ORGANISM in dealing in space and time with its ENVIRONMENT.
The Sensorial Signals identified in Figure 1 result from the sensory interaction of the organism with its environment, from which comes the ENVIRONMENTAL NOISE or white noise that impacts the SENSES. In this figure, when we talk about SENSES, we are not only talking about the five most common senses: touch, sight, smell, hearing, and taste; these SENSES comprise millions of sensory elements throughout our body. It should be noted that each sensory element acts in a way that is commensurable when activated repeatedly by the ENVIRONMENTAL NOISE according to the specificity of its sensory capacity. For example, a sensory element may be attuned to only measure pressure/force. ENVIRONMENTAL NOISE must be of sufficient intensity (strong enough to be detected yet not so strong that it damages the sensory organs) and duration (long enough to be differentiated yet not so short as to be ignored), as well as being of interest to the organism, to generate Sensorial Signals. The motivation of the human organism-in-its-environment to recognize a particular characteristic of the ENVIRONMENTAL NOISE or white noise impinging on the SENSES is the satisfaction of its physiological (internal/external) and/or relational needs. For example, an infant in its gestation phase outside the womb seeks its mother’s nipple to feed. That does not necessarily mean that the infant realizes what she is doing, or even that seeking a nipple means she is hungry or seeks nourishment, even if her hunger is eventually satisfied. She only knows that she must instinctively look for something that feels like a nipple, rejecting all other sensory artifacts. Indeed, she uses constitutive absence [58] as a beacon to satisfy her unspecified needs. This also means that the generation of information might run the gamut from voluntary to involuntary.
In Figure 1, after the Sensorial Signals reach the box labeled INFO-AUTOPOIESIS, a meaning-making box, they are processed and converted into semantic, meaningful, or internalized (endogenous) information. It is here that, after an accumulation of Sensorial Signals, the organism deploys its capacity for ACTION that is specific to the ACTION RESULT that is sought in its environment to pursue the satisfaction of physiological (internal/external) and/or relational needs. This ACTION leading to an ACTION RESULT may be characterized as syntactic or externalized (exogenous) information, which is a consequence of the externalization of the internalized semantic information. In other words, an externalized syntactic informational homeorhetic ACTION occurs depending on the internalized semantic information generated by the human organism-in-its-environment due to the acquired Sensorial Signals identified by the self-referenced needs of the organism. This is a never-ending self-referential, interactive, and recursive sensation-information-action-sensation cycle. This arrangement of components serves to illustrate, in a rudimentary way, how the mind-body problem might be addressed since it shows how the world and our images of the world are coordinated [59,60] (p. 165).
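The cycle just described can be summarized as a loop in which environmental noise is transduced by the senses, filtered info-autopoietically into meaningful (semantic) information, and discharged as an action whose result is sensed again. In the sketch below, only the ordering of the blocks follows Figure 1; every function body is a placeholder assumption rather than the author’s model.

```python
# A schematic rendering of the sensation-information-action-sensation loop of
# Figure 1. Every function body is a placeholder assumption; only the order of
# the blocks (SENSES -> INFO-AUTOPOIESIS -> ACTION -> ACTION RESULT) follows the text.
import random

def senses(environmental_noise):
    """Transduce noise into Sensorial Signals (only some of it registers)."""
    return [s for s in environmental_noise if abs(s) > 0.5]

def info_autopoiesis(signals, need):
    """Find the 'difference which makes a difference' for a current need."""
    return [s for s in signals if need(s)]       # semantic (endogenous) information

def act(information):
    """Externalize an ACTION in the environment (exogenous, syntactic)."""
    return sum(information)                      # ACTION -> ACTION RESULT

need = lambda s: s > 0                           # e.g., seek warmth or nourishment
environment = [random.uniform(-1, 1) for _ in range(20)]   # ENVIRONMENTAL NOISE

for _ in range(3):                               # the loop never ends in a living being
    result = act(info_autopoiesis(senses(environment), need))
    environment.append(result)                   # the ACTION RESULT feeds back via the SENSES
```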
Recent research serves to illustrate this type of voluntary/involuntary behavior. Researchers discovered that bacterial spores of Bacillus subtilis, which are partially dehydrated cells, can analyze their environment to survive disadvantageous environmental conditions, despite being in a lethargic state and considered physiologically dead for years. They continue to generate information from short-lived environmental signals, leaving their dormant state after accumulating a certain number of sensorial signals that confirm that they can activate again and return to life under now more favorable environmental conditions [61,62]. This is precisely what we want to represent as occurring within the INFO-AUTOPOIESIS block of Figure 1.
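That spore behaviour can be caricatured as a simple accumulator: each short-lived favourable signal adds to a stored tally, and germination is triggered only once the accumulated evidence crosses a threshold. The threshold and signal values below are illustrative assumptions, not measurements from the cited studies.

```python
# Illustrative accumulator caricature of dormant spores integrating short-lived
# environmental signals before "returning to life". The threshold and signal
# values are assumptions, not data from the cited studies.
def germinates(signal_stream, threshold=5.0):
    accumulated = 0.0
    for signal in signal_stream:
        accumulated += signal            # each brief favourable signal is stored
        if accumulated >= threshold:     # enough evidence: leave dormancy
            return True
    return False                         # remain dormant

print(germinates([1.0, 0.5, 2.0, 1.0]))        # False: not enough evidence yet
print(germinates([1.0, 0.5, 2.0, 1.0, 1.0]))   # True: threshold crossed
```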
The internal and external circuits define an asymmetrical relationship between the organism and its environment. The ENVIRONMENTAL NOISE that impacts the SENSES of the human ORGANISM-in-its-ENVIRONMENT is not a reflection of the actions of the organism in the environment, although they are related. Our SENSES (touch, hearing, sight, smell, and taste) are the only window that allow us the possibility of ACTION on our environment to succeed in satisfying our physiological (internal/external) and/or relational needs. The result of an ACTION on our environment is an expression of the externalized syntactic information that results from internalized semantic information.
This means that there is no information in the environment except for the syntactic information that we externalize. Our SENSES are incapable of identifying information in the environment. We are only capable of capturing Sensorial Signals using our SENSES that need interpretation to info-autopoietically create internalized semantic information. This is in contradiction to the postulate of many scientists who believe that information exists in the environment, as noted above.

3.2. Auto-Generation, Processing, and Transmission of Information

Figure 2 shows a condensed version of the info-autopoiesis of Sensorial Signals into the semantic and syntactic information previously shown in Figure 1. Sensorial Signals are the basis for our interactions with the environment in seeking the satisfaction of our physiological and/or relational needs through a process of info-autopoiesis that enables meaning-making and its externalization in a triadic process involving Personal-Subjective-Relative (PSR-I), Impersonal-Objective-Absolute (IOA-I), and Shannon-Distilled (SD-I) information. The details of this triadic process may be consulted elsewhere [38,53]. This triadic process shows the evolution from endogenous (internalized) semantic information to exogenous (externalized) syntactic information. The externalized syntactic information of the human organism-in-its-environment depicted in Figure 2 is in the form of oral sounds.
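As a rough schematic of the triad, the sketch below traces the passage from internal, inaccessible information to an externalized, transmissible utterance; the class names follow the abbreviations in the text, but the fields and transformations are invented placeholders (the detailed account is given in [38,53]).

```python
# Schematic of the triadic passage from internal to externalized information.
# Field choices and transformations are placeholders, not the author's formalism.
from dataclasses import dataclass

@dataclass
class PSR_I:                 # Personal-Subjective-Relative: felt, first-person
    felt_quality: str

@dataclass
class IOA_I:                 # Impersonal-Objective-Absolute: stabilized concept
    concept: str

@dataclass
class SD_I:                  # Shannon-Distilled: externalized, syntactic symbols
    utterance: str

def externalize(p: PSR_I) -> SD_I:
    i = IOA_I(concept=p.felt_quality.upper())              # private abstraction of the percept
    return SD_I(utterance=f"I feel {i.concept.lower()}")   # only this syntactic string leaves the body

print(externalize(PSR_I(felt_quality="cold")))             # SD_I(utterance='I feel cold')
```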
A further implication is that Shannon/Distilled Information (SD-I) is secondary to PSR-I and IOA-I and cannot exist independently. SD-I is the basis for the existence of this artificial world that we inhabit. A characteristic of an individual’s PSR-I and IOA-I is their inaccessibility: no one can have access to our innermost thoughts and feelings. While it can be assumed that the individual PSR-I and IOA-I are extensive in their content, most of us are unable to externalize all our complex emotions, feelings, and learning, whether our intention is to make them intelligible to those around us or not.

3.3. Info-Autopoietic Communication

Figure 3 illustrates info-autopoietic communication, i.e., the transformation of Sensorial Signals into semantic and syntactic information, between an individual on the left side of the figure with a similar individual on the right side of the figure. The distillation of PSR-I and IOA-I for externalization and communication transforms PSR-I and IOA-I into Shannon/Distilled Information (SD-I), or syntactic information [43].
The represented process of communication is of a general nature and follows the depicted sequence: the syntactic information externalized by the individual on the left side of the figure is initially transformed into electrical signals at the Information Source, such as a microphone. The microphone transforms the sound waves into analog/digital electrical signals which are then fed into the Transmitter. The Transmitter then directs them into a Channel. This Channel may be a wire that carries the electrical signal, or it might involve the generation of electromagnetic waves that are sent into the ether, subject to capture by any number of Receivers. Either of these two options may be subjected to random noise from an extraneous Noise Source. Upon detection by the Receiver, the signal is amplified, denoised, and, if needed, transformed into an analog/digital electrical signal. The signal is then sent to the Destination to be interpreted into an understandable format to be printed out, or voice synthesized for someone to hear or record the communicated message.
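The pipeline just described, from Information Source to Destination with a Noise Source acting on the Channel, can be sketched end to end; the repetition code used for the Transmitter/Receiver and the bit-flip noise model are illustrative assumptions, not Shannon’s own constructions.

```python
# End-to-end sketch of the communication pipeline: source -> transmitter ->
# noisy channel -> receiver -> destination. The repetition code and bit-flip
# noise are illustrative assumptions.
import random

def transmitter(message: str) -> list[int]:
    bits = [int(b) for ch in message.encode() for b in format(ch, "08b")]
    return [b for b in bits for _ in range(3)]          # repeat each bit 3 times

def channel(signal: list[int], flip_prob=0.02) -> list[int]:
    return [b ^ 1 if random.random() < flip_prob else b for b in signal]   # Noise Source

def receiver(signal: list[int]) -> str:
    bits = [int(sum(signal[i:i + 3]) >= 2) for i in range(0, len(signal), 3)]   # majority vote
    chars = [int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode(errors="replace")

received = receiver(channel(transmitter("hello, world")))
print(received)   # usually "hello, world"; the Destination still has to interpret it
```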
Quoting Shannon [43], we note that the source of information ‘… produces a message or sequence of messages to be communicated to the receiving terminal.’ The transmitter ‘… operates on the message in some way to produce a signal suitable for transmission over the channel.’ For example, ‘… In telegraphy we have an encoding operation which produces a sequence of dots, dashes and spaces on the channel corresponding to the message’. The channel is ‘… the medium used to transmit the signal from transmitter to receiver’, which accumulates noise from multiple sources in its path, some predictable, some not. The receiver ‘… performs the inverse operation of that done by the transmitter, reconstructing the message from the signal.’ Finally, destination ‘… is the person (or thing) for whom the message is intended.’
The fundamental problem of communication is defined as ‘that of reproducing at one point either exactly or approximately a message selected at another point’. Although messages can be syntactically designed to have meaning, these semantic aspects of communication are irrelevant to the engineering problem, although in some cases engineering aspects may reveal or involve semantic content. One aspect of this communication system is that it can be analyzed mathematically in detail, even incorporating probabilistic prediction to recognize the originally sent message among all possible messages that could have been sent. A typical example is the autocorrect feature on our cell phones, which corrects us as we write and can introduce mistakes into what we want to say, so that we sometimes end up blaming the corrector. It is also clear that only a human being at the Destination can make use of the content transmitted syntactically in the messages.
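That phone corrector is, at bottom, an instance of picking the most probable intended message out of all the messages that could have been sent. A toy version, with an invented word-frequency table and a crude similarity score in place of a proper channel model, is sketched below.

```python
# Toy noisy-channel word correction: choose the candidate word that maximizes
# (prior frequency) x (likelihood of the observed typo). The frequency table
# and the distance-based likelihood are invented for illustration.
from difflib import SequenceMatcher

WORD_FREQ = {"there": 0.004, "three": 0.002, "theme": 0.001}   # assumed priors

def likelihood(observed: str, intended: str) -> float:
    return SequenceMatcher(None, observed, intended).ratio()   # crude channel model

def correct(observed: str) -> str:
    return max(WORD_FREQ, key=lambda w: WORD_FREQ[w] * likelihood(observed, w))

print(correct("thre"))   # picks whichever candidate best balances prior and similarity
```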
Shannon’s purpose in devising this analysis was to understand and solve the communication problem from an engineering perspective by emphasizing the syntactic aspects of communication. The impact of these developments on digital communications is there for all to see. If we are going to naturalize the communication process, we might wonder if there are missing elements that deserve to be included for a more exhaustive analysis. For example, how does the sender of the message produce the message to be encoded for transmission? What is the historical and technical process that allows human beings to develop the technology, design, build, and use the apparatus that allows communication to take place? Indeed, how do humans educate and prepare themselves not only to produce advanced technological developments for communication, but to be able to express themselves by taking advantage of their use?
Phylogenetically, not so long ago we lived a hand-to-mouth existence where communication was, at best, by signs and/or direct oral communication. Ontogenetically, we developed from a state in which we could hardly communicate to a state in which oral communication is part of our nature. These questions seem relevant if we want to understand information from a more general perspective. Having no answers to these questions suggests that we may suffer from alienation, or an inability to recognize our work in the products of that labor. We seem to forget that the communication system we are describing is due to our work. In addition, there is a human being on the far left and far right of the communication system in Figure 3. The human being on the far left generates a message as a result of an internal or endogenous process of creating semantic information and encodes it as syntactic information in order to externalize it, i.e., convert it into Shannon information/exogenous syntactic spoken language, which the communication apparatus then digitally encodes as syntactic information and sends to the human on the far right. After the digitized message acquires noise in the channel, the noise is removed and the message is decoded in the receiver to convert it into synthetic language, which reaches the ears of the human being on the far right. The individual must then decode this syntactic, synthetic speech and interpret the message based on their previous experience and knowledge. This process leads to recognizing syntactic information and interpreting it as semantic information. The same message can have different meanings for different individuals.
A more general interpretation of Figure 3 is that it defines all types of communication processes, all of which require the externalization of information by the human organism-in-its-environment. This implies that all artifacts produced by humans are syntactic in nature, all needing to be interpreted by other humans by means of Sensorial Signals. They all embody the capability to transform semantic information into syntactic information in the process of the satisfaction of a need, of design, of the development of the means of manufacture, and the finishing process. This is true of all human creations, whether oral, written, musical, scientific creations, sculptures, humanities, arts-oriented works, etc.
Let us examine in more detail the impact of the syntactic nature of human creations in the realm of science and scientific accomplishment by looking at several examples relevant to our inquiry.

3.4. Examples Showing the Impact of Info-Autopoiesis on Syntactic Creation

To gain a measure of what we mean when we refer to syntactic elements in nature, we quote Pattee when he states, “For my argument here, I will mean by matter and energy those aspects of our experience that are normally associated with physical laws” [63] (p. 213). In other words, when we observe nature and apply science and the scientific method to make sense of what we observe, we build an understanding that is based on our syntactic conceptualizations. We observe, experiment, and theorize using our syntactic creations, including mathematics, physics, and chemistry, to gain access to the world that surrounds us so that we can change it in our own image to serve our needs. What this means is that all of what we discover and build is subject to interpretation by someone, so we have to teach every new generation how to understand and interpret our scientific creations. If for some reason this chain gets broken, access may be lost: for example, we were unable to decipher Egyptian hieroglyphic script, and it was only because of the Rosetta Stone, the first Ancient Egyptian bilingual text recovered in modern times, that we were able to regain access to the inscribed knowledge. The explanations and practical achievements of science need to be reevaluated since they all are the result of syntactic creation. In short, syntactic creation is only able to explain other syntactic elements in our environment. It cannot explain nor create life, an element in nature that is capable of semantic interpretation for its own benefit as well as syntactic creation to close the circle of its metabolic connection with nature. This has the implication that all efforts to use chemistry to attempt to create life are doomed to failure [64,65,66,67,68,69,70].
The Chinese room argument by John Searle holds that a symbol-processing machine cannot have the ability to understand, no matter how intelligently it behaves [71,72,73]. The argument involves a computer (inside a room) that takes Chinese characters as input and follows the instructions of a program to produce other Chinese characters, which it presents as output. The computer does this so convincingly that it comfortably convinces an external human Chinese speaker that it is itself a human Chinese speaker—effectively it passes the Turing Test [74], as it fools humans into believing that it is, itself, human. The arguments presented for and against the computer having the ability to understand are quite contrived and extensive. In the end, Searle makes a convincing argument against the computer being capable of understanding. Using info-autopoiesis, we can dispense with the contrived discussion by pointing out that the computer is a human syntactic creation that inputs and processes syntactic creations in the form of Chinese characters. It then outputs other syntactic Chinese characters for someone to interpret their correctness. Since all the processes consist of syntactic elements from beginning to end, we can be 100% certain that these syntactic creations are incapable of understanding.
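The room itself is easy to render as a lookup table: input characters are matched to output characters by rule, and nowhere in the program is there anything corresponding to understanding. The rule book below is, of course, a toy assumption.

```python
# A toy Chinese room: purely syntactic symbol substitution driven by a rule
# book (a dictionary). The table entries are made up; no understanding occurs
# anywhere in this program, only pattern matching.
RULE_BOOK = {
    "你好吗": "我很好，谢谢",      # "How are you?" -> "I am fine, thank you"
    "你会说中文吗": "会一点",       # "Do you speak Chinese?" -> "A little"
}

def chinese_room(input_symbols: str) -> str:
    # The "operator" mechanically looks up the symbols and copies out the answer.
    return RULE_BOOK.get(input_symbols, "请再说一遍")   # default: "Please say that again"

print(chinese_room("你好吗"))
```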
Another common argument that is made is that we live in a computer simulation [75]. This argument assumes the existence of an advanced civilization of unknown superbeings that has developed the computational capacity to simulate anything it wishes. The result is that they choose to simulate us and the world in which we live. Great effort in this argument is devoted to making the case that we are the product of someone’s imagination and creativity and are part of a computer simulation. This whole contention can be very readily dispensed with by noting that all computer simulations are syntactic in nature, whether by an advanced civilization or not. Therefore, we do not live in and were not created as part of a computer simulation.
Consider the case where an individual falls into a coma or is neurologically incapacitated to communicate or move. Different approaches are used in attempts to communicate with these individuals. All require the detection of neurons firing which are assumed to be reflections of brain functionality using a brain-computer interface (BCI). These techniques run the gamut, from non-invasive to invasive recordings, as well as real-time assessment [76,77,78,79,80,81,82,83,84]. All these approaches, since they rely on instrumentation which is syntactic in nature, can only produce syntactic information that needs to be interpreted by humans to be useful. In short, the only way to access the innermost thoughts of individuals is by using syntactic creations that are only capable of generating syntactic information subject to interpretation.
Since the design, construction, and use of computing machines also fall under the umbrella of syntactic creations, it means that the nature of artificial intelligence (AI) is also syntactic. This would seem to put a damper on the potential for the achievement of artificial general intelligence (AGI), although this does not preclude the development of many interesting AI applications such as ChatGPT [85]. We also need to be careful when we assume that we can find semantic content in our environment which a machine will be able to interpret, as syntactic content only exists in the environment because we put it there. This fact is formalized as the Central Dogma of Information, which states: ‘Once semantic information has got into syntactic information it can’t get out again’ [53]. Thus, any arguments for the achievement of sentient artificial general intelligence or superintelligence [57] are incorrect because of their dependence on syntactic computations by syntactic machines. As noted above, the notion of meaning-making is beyond the capabilities of any machine, no matter how sophisticated.
From an engineering design perspective, if we acknowledge the fact that our creations are syntactic in nature, then we can treat this fact as a boundary condition for AI research. This could possibly lead to a change of approach to developments in AI since we need to acknowledge that AGI is not achievable. The use and abuse of AI systems is an ethical dilemma not easily resolved.
All these findings have a common and fundamental basis, i.e., that information does not exist in the environment; rather, information is self-produced by living beings through sensorial interactions with the environment motivated by the need to satisfy physiological and/or relational needs. Info-autopoiesis results in endogenous, semantic information, which becomes exogenous, and syntactic information, which is synonymous with ordered structure and artificial creation. Syntactic information does not have the capacity for meaning-making, whatever its configuration. For example, all our basic sciences, including physics, chemistry, biology, and mathematics are syntactic constructions, hence science cannot encompass life. Info-autopoiesis is a new information paradigm that we can use to think about the world in which we live to determine what is possible within the bounds of syntactic information creation.

4. Discussion

There is a tendency among most researchers to postulate the existence of information as a mysterious quantity that is to be found everywhere in our environment, even though it is difficult to describe and no identifiable sense organs seem to detect it. Yet, the notion that information can be identified gives credence to the colloquial expression “I know it when I see it”.
Norbert Wiener attempted to be more specific when he stated that “Information is information, not matter or energy. No materialism which does not admit this can survive at the present day” [34] (p. 132). While this describes information in terms of itself, it points to a general belief among many scientists that information is a third quantity of the Universe besides matter and/or energy, a belief that is wholly dependent on a postulate with no basis in fact.
The approach promoted in this work is that information is paramount to the functioning of the human organism-in-its-environment. An etymological perspective ties information to giving shape to matter and to using communication as the means to shape the minds of other individuals. Gregory Bateson’s definition of information as ‘a difference which makes a difference’ likewise makes it possible for information to be identified and analyzed. Indeed, both perspectives coincide in promoting a naturalized and dynamic view of information. They promote the view that information is a means to describe change in matter and/or energy, and that humans have an individuated role to play in acting to promote and observe that change. This approach ties the search for an answer to phenomena that may be observed in the daily lives of humans and in how they interact with their environment.
The result is a new paradigm to study information, that of info-autopoiesis, or the self-referential, sensory commensurable, recursive, and interactive homeorhetic feedback process immanent to Bateson’s ‘difference which makes a difference’, by which a human organism-in-its-environment pursues the satisfaction of its physiological (internal/external) and/or relational needs. This yields the discovery that the information process is a never-ending sensation-information-action-sensation cycle that allows us to discover and act on our environment.
The process of info-autopoiesis transforms the “Sensorial Signals” of the noisy environment in which all living beings live, through their motivated efforts to satisfy their physiological and/or relational needs, to improve their ability to engage in their ever-changing environment. The human organism-in-its-environment, through a triadic process involving Personal-Subjective-Relative, Impersonal-Objective-Absolute, and Shannon-Distilled information, can internally generate semantic information that it can then externalize as syntactic information. Our syntactic creations are all the artificial objects that we have created and that surround us, some very rudimentary, others of great sophistication and technological scope.
Previously, it has been possible to make the argument that there exists a connection between the mind-body problem, information, and meaning. In addition, Figure 1 is elucidatory of the possibility of resolving the mind-body problem by illustrating how errors/differences/information/ideas are self-produced from Sensorial Signals. These errors/differences/information/ideas, as images of the world, reflect the meaning that helps the human being coordinate its actions in the external environment [59,60] (p. 165). Generalizing, the self-produced errors/differences/information/ideas are always meaningful for the organism, whether they are the result of voluntary or involuntary info-autopoiesis. Determining differences is the source of satisfaction of our most basic physiological needs such as breathing and eating, of changing our surroundings by acting on our environment, and of engaging in discussions with others. This dynamic view of the process of homeorhetic cybernetic human actions, or constitutive absence [58], may be revealed as engaging every instant of our lives. We might not know exactly what it is that motivates our sensory engagements, but we cannot deny that the satisfaction of physiological and relational needs is a driver. In other words, life is info-autopoiesis, or info-autopoiesis is life.
The traditional perspective on life is that “life is chemistry”, leading to the continuous search for primordial organic molecules as precursors to life [66,69], although it is then difficult to find the path of how these precursors become life. A current perspective is centered on the search for a ‘protoribosome’ or a primitive RNA machine capable of linking two amino acids together [64,65,68,70]. This is an attempt “to recapitulate a milestone on the road from primordial organic molecules to the ribosome used by the last common ancestor of all living things” [67] (p. 23). While this might be considered a promising approach to discovering the origin of life, the question that needs to be asked and answered is how does this mechanism for life become part of a cellular structure and what motivates its incorporation? Indeed, what motivates its information self-production?
This info-autopoietic approach to the study of information allows the examination of many instances where other approaches may fail. Such is the case of artificial general intelligence (AGI), where the potential benefits/dangers are more a matter of speculation than of the certainty that info-autopoiesis provides by defining its limits as those of syntactical creations: syntactical expressions that are unable to engage in semantical responses. This also impacts the limits of the arts and sciences and what we can understand and achieve in their pursuit. Finally, info-autopoiesis allows us to find that there exists a connection between the mind-body problem, information, meaning, and life.

5. Summary and Conclusions

Info-autopoiesis is a new paradigm to understand information in the context of all living beings-in-their-environment. Info-autopoiesis is the process of self-production of information; an individuated sensory commensurable, self-referential, recursive, interactive homeorhetic feedback process immanent to Bateson’s ‘difference which makes a difference’. A basic premise of info-autopoiesis is that information is not a fundamental quantity of the Universe, yet its importance for human organisms-in-their-environment cannot be overestimated. Humans self-produce information to discover the bountifulness of matter and/or energy as expressions of their environmental spatial/temporal motion/change, as information or ‘differences which make a difference’, to satisfy their physiological (internal/external) and relational needs.
Info-autopoiesis defines a connection between the mind-body problem, information, meaning, and life, and results in the generation of internalized and externalized information relevant to human organisms-in-their-environment. The self-production of inaccessible internalized or semantic information makes the external environment meaningful. Semantic information is made accessible by communicating through externalized syntactic expressions using language, gestures, pictographs, musical instruments, sculptures, writing, coding, etc. We live in and are surrounded by our artificial creations [55,56,57]. This means that all externalized expressions of human-created knowledge are syntactic in nature and require re-interpretation by other peers through Sensorial Signals. Syntactic artificial creations surround us and make us believe that information exists in the environment, yet there is no information in the environment or in the Universe independent of humans. Info-autopoiesis is the link between the living and non-living. Despite our ability for externalized syntactic information creation, however sophisticated, we are unable to make these syntactic creations produce semantic information. This includes all human knowledge creations in the arts and sciences. Information cannot but be the primary element that allows humans their unique existence. Since the design, construction, and use of computing machines also fall under the umbrella of syntactic creations, it means that the nature of artificial intelligence (AI) is also syntactic. This would seem to put a damper on the potential for the achievement of artificial general intelligence (AGI), although this does not preclude the development of many interesting AI applications such as ChatGPT [85]. This fact is formalized as the Central Dogma of Information which states: ‘once semantic information has got into syntactic information it can’t get out again’ [53].

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

To the memory of JCCN who inspired me to think about novel fundamental universals. The author wants to express his appreciation to the reviewers for their insightful suggestions with a critical eye for improvement of the manuscript.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Shead, S. Why Everyone Is Talking about the A.I. Text Generator Released by an Elon Musk-Backed Lab; CNBC: Englewood Cliffs, NJ, USA, 2020; Available online: https://www.cnbc.com/2020/07/23/openai-gpt3-explainer.html (accessed on 30 March 2023).
  2. Darby, C. How to Use DALL-E 2 to Turn Your Creative Visions into AI-Generated Art; ZDNET 2023. Available online: https://www.zdnet.com/article/how-to-use-dall-e-2-to-turn-your-creative-visions-into-ai-generated-art/ (accessed on 30 March 2023).
  3. Metz, C. Meet DALL-E, the A.I. That Draws Anything at Your Command. New York Times. 6 April 2022. Available online: https://www.nytimes.com/2022/04/06/technology/openai-images-dall-e.html (accessed on 30 March 2023).
  4. Devlin, K.; Cheetham, J. Fake Trump Arrest Photos: How to Spot an AI-Generated Image. BBC News. 24 March 2023. Available online: https://www.bbc.com/news/world-us-canada-65069316 (accessed on 30 March 2023).
  5. Tolentino, D. AI-Generated Images of Pope Francis in Puffer Jacket Fool the Internet. NBC News. 27 March 2023. Available online: https://www.nbcnews.com/tech/pope-francis-ai-generated-images-fool-internet-rcna76838 (accessed on 30 March 2023).
  6. Roose, K. The Brilliance and Weirdness of ChatGPT. New York Times. 5 December 2022. Available online: https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html (accessed on 30 March 2023).
  7. Pause Giant AI Experiments: An Open Letter. 2023. Available online: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed on 30 March 2023).
  8. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press, Inc.: Oxford, UK, 2014. [Google Scholar]
  9. Yudkowsky, E. Pausing AI Developments Isn’t Enough. We Need to Shut It All Down. Time. 29 March 2023. Available online: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ (accessed on 30 March 2023).
  10. Larson, E.J. The Myth of Artificial Intelligence—Why Computers Can’t Think the Way We Do; Harvard University Press: Cambridge, MA, USA, 2021. [Google Scholar]
  11. Turing, A.M. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1937, s2–s42, 230–265. [Google Scholar] [CrossRef]
12. Barbieri, M. What is Information? Biosemiotics 2012, 5, 147–152.
13. Barbieri, M. The Paradigms of Biology. Biosemiotics 2013, 6, 33–59.
14. Battail, G. Applying Semiotics and Information Theory to Biology: A Critical Comparison. Biosemiotics 2009, 2, 303.
15. Battail, G. Biology Needs Information Theory. Biosemiotics 2013, 6, 77–103.
16. Bayne, T.; Chalmers, D.J. What is the Unity of Consciousness? In The Unity of Consciousness: Binding, Integration, and Dissociation; Oxford University Press: New York, NY, USA, 2003; pp. 23–58.
17. Brier, S. Biosemiotics and the foundation of cybersemiotics: Reconceptualizing the insights of ethology, second-order cybernetics, and Peirce’s semiotics in biosemiotics to create a non-Cartesian information science. Semiotica 1999, 127, 169–198.
18. Brier, S. Cybersemiotics: Why Information Is Not Enough! University of Toronto Press: Toronto, ON, Canada, 2008; Volume xx, 477p.
19. Burgin, M. Theory of Information—Fundamentality, Diversity and Unification; World Scientific Series in Information Studies—Vol. 1; World Scientific Publishing Co. Pte. Ltd.: Singapore, 2010.
20. Chalmers, D. Facing up to the problem of consciousness. J. Conscious. Stud. 1995, 2, 200–219.
21. Clark, A.; Chalmers, D. The Extended Mind. Analysis 1998, 58, 7–19.
22. Chalmers, D. Is the Hard Problem of Consciousness Universal? J. Conscious. Stud. 2020, 27, 227–257.
23. Floridi, L. Information: A Very Short Introduction; Oxford University Press: Oxford, UK, 2010.
24. Floridi, L. The Philosophy of Information; Oxford University Press: Oxford, UK, 2011; Volume xviii, 405p.
25. Hidalgo, C.A. Why Information Grows: The Evolution of Order, from Atoms to Economies; Basic Books: New York, NY, USA, 2015; Volume xxi, 232p.
26. Koch, C. A Theory of Consciousness. Sci. Am. Mind 2009, 20, 16–19.
27. Koch, C. The Feeling of Life Itself: Why Consciousness Is Widespread But Can’t Be Computed; The MIT Press: Cambridge, MA, USA, 2019.
28. Lloyd, S. Programming the Universe; Alfred A. Knopf: New York, NY, USA, 2006.
29. Pattee, H.H. Epistemic, Evolutionary, and Physical Conditions for Biological Information. Biosemiotics 2013, 6, 9–31.
30. Rovelli, C. Meaning = Information + Evolution. arXiv 2016, arXiv:1611.02420.
31. Stonier, T. Information and Meaning—An Evolutionary Perspective; Springer: Berlin/Heidelberg, Germany, 1997.
32. Umpleby, S.A. Physical Relationships among Matter, Energy and Information. Syst. Res. Behav. Sci. 2007, 24, 369–372.
33. Vedral, V. Decoding Reality—The Universe as Quantum Information; Oxford University Press: Oxford, UK, 2010.
34. Wiener, N. Cybernetics: Or Control and Communication in the Animal and the Machine; John Wiley: New York, NY, USA, 1948.
35. Yockey, H.P. Information Theory, Evolution, and the Origin of Life; Cambridge University Press: Cambridge, UK, 2005.
36. Wheeler, J.A. Sakharov revisited: “It from Bit”. In Proceedings of the First International A D Sakharov Memorial Conference on Physics, Moscow, Russia, 27–31 May 1991; Nova Science Publishers: Commack, NY, USA.
37. Bateson, G. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology; Chandler Publications for Health Sciences; Ballantine Books: New York, NY, USA, 1978; Volume xxviii, 545p.
38. Cárdenas-García, J.F. The Process of Info-Autopoiesis—The Source of all Information. Biosemiotics 2020, 13, 199–221.
39. Bawden, D.; Robinson, L. Introduction to Information Science, 2nd ed.; Facet Publishing: London, UK, 2022.
40. Burgin, M.; Hofkirchner, W. (Eds.) Information Studies and the Quest for Transdisciplinarity—Unity Through Diversity; World Scientific Series in Information Studies; World Scientific Publishing Company: Hackensack, NJ, USA, 2017; 560p.
41. Capurro, R.; Hjørland, B. The Concept of Information. Annu. Rev. Inf. Sci. Technol. 2003, 37, 343–411.
42. Shannon, C.E.; Sloane, N.J.A.; Wyner, A.D. Claude Elwood Shannon: Collected Papers; Wiley: Hoboken, NJ, USA, 1993.
43. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
44. Crick, F. Central dogma of molecular biology. Nature 1970, 227, 561–563.
45. Sharov, A.A. Coenzyme Autocatalytic Network on the Surface of Oil Microspheres as a Model for the Origin of Life. Int. J. Mol. Sci. 2009, 10, 1838–1852.
46. Sharov, A.A. Evolution of Natural Agents: Preservation, Advance, and Emergence of Functional Information. Biosemiotics 2016, 9, 103–120.
47. Heras-Escribano, M.; de Jesus, P. Biosemiotics, the Extended Synthesis, and Ecological Information: Making Sense of the Organism-Environment Relation at the Cognitive Level. Biosemiotics 2018, 11, 245–262.
48. Capurro, R. Past, present, and future of the concept of information. TripleC 2009, 7, 125–141.
49. Díaz Nafría, J.M. What is information? A multidimensional concern. TripleC 2010, 8, 77–108.
50. Peters, J.D. Information: Notes Toward a Critical History. J. Commun. Inq. 1988, 12, 9–23.
51. Cárdenas-García, J.F.; Ireland, T. The Fundamental Problem of the Science of Information. Biosemiotics 2019, 12, 213–244.
52. Burgin, M.; Cárdenas-García, J.F. A Dialogue Concerning the Essence and Role of Information in the World System. Information 2020, 11, 406.
53. Cárdenas-García, J.F. The Central Dogma of Information. Information 2022, 13, 365.
54. Cárdenas-García, J.F.; Ireland, T. Bateson Information Revisited: A New Paradigm. Proceedings 2020, 47, 5.
55. Elhacham, E.; Ben-Uri, L.; Grozovski, J.; Bar-On, Y.M.; Milo, R. Global human-made mass exceeds all living biomass. Nature 2020, 588, 442–444.
  56. Pappas, S. Human-Made Stuff Now Outweighs All Life on Earth; Scientific American: New York, NY, USA, 2020; Available online: https://www.scientificamerican.com/article/human-made-stuff-now-outweighs-all-life-on-earth/# (accessed on 30 March 2023).
57. Thompson, A. Taking Stock of Life. Sci. Am. 2018, 319, 16.
58. Deacon, T.W. Emergence: The Hole at the Wheel’s Hub. In The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion; Clayton, P., Davies, P., Eds.; Oxford University Press: Oxford, UK, 2008; pp. 111–150.
59. Pattee, H.H. Cell Psychology: An Evolutionary Approach to the Symbol-Matter Problem. Cogn. Brain Theory 1982, 5, 325–341.
60. Pattee, H.H. Cell Psychology: An Evolutionary Approach to the Symbol-Matter Problem. In Laws, Language and Life—Howard Pattee’s Classic Papers on the Physics of Symbols with Contemporary Commentary; Pattee, H.H., Rączaszek-Leonardi, J., Eds.; Springer: Dordrecht, The Netherlands, 2012; pp. 165–179.
61. Kikuchi, K.; Galera-Laporta, L.; Weatherwax, C.; Lam, J.Y.; Moon, E.C.; Theodorakis, E.A.; Garcia-Ojalvo, J.; Süel, G.M. Electrochemical potential enables dormant spores to integrate environmental signals. Science 2022, 378, 43–49.
62. Lombardino, J.; Burton, B.M. An electric alarm clock for spores. Science 2022, 378, 25–26.
63. Pattee, H.H. Evolving Self-reference: Matter, Symbols, and Semantic Closure. In Laws, Language and Life—Howard Pattee’s Classic Papers on the Physics of Symbols with Contemporary Commentary; Pattee, H.H., Rączaszek-Leonardi, J., Eds.; Springer: Dordrecht, The Netherlands, 2012; pp. 211–226.
64. Bose, T.; Fridkin, G.; Bashan, A.; Yonath, A. Origin of Life: Chiral Short RNA Chains Capable of Non-Enzymatic Peptide Bond Formation. Isr. J. Chem. 2021, 61, 863–872.
65. Bose, T.; Fridkin, G.; Davidovich, C.; Krupkin, M.; Dinger, N.; Falkovich, A.H.; Peleg, Y.; Agmon, I.; Bashan, A.; Yonath, A. Origin of life: Protoribosome forms peptide bonds and links RNA and protein dominated worlds. Nucleic Acids Res. 2022, 50, 1815–1828.
66. Criado-Reyes, J.; Bizzarri, B.M.; García-Ruiz, J.M.; Saladino, R.; Di Mauro, E. The role of borosilicate glass in Miller–Urey experiment. Sci. Rep. 2021, 11, 21009.
67. Dance, A. How did life begin? One key ingredient is coming into view. Nature 2023, 615, 22–25.
68. Kawabata, M.; Kawashima, K.; Mutsuro-Aoki, H.; Ando, T.; Umehara, T.; Tamura, K. Peptide Bond Formation between Aminoacyl-Minihelices by a Scaffold Derived from the Peptidyl Transferase Center. Life 2022, 12, 573.
69. Miller, S.L. A Production of Amino Acids Under Possible Primitive Earth Conditions. Science 1953, 117, 528–529.
70. Sharma, A.; Czégel, D.; Lachmann, M.; Kempes, C.P.; Walker, S.I.; Cronin, L. Assembly theory explains and quantifies the emergence of selection and evolution. arXiv 2022, arXiv:2206.02279.
71. Searle, J.R. Minds, brains, and programs. Behav. Brain Sci. 1980, 3, 417–424.
72. DiMatteo, L.A.; Poncibò, C.; Cannarsa, M. (Eds.) The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics; Cambridge University Press: Cambridge, UK, 2022.
73. Warwick, K. Artificial Intelligence: The Basics; Routledge: Milton Park, UK, 2011.
74. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, LIX, 433–460.
75. Bostrom, N. Are We Living in a Computer Simulation? Philos. Q. 2003, 53, 243–255.
76. Barbosa, L.S.; Marshall, W.; Streipert, S.; Albantakis, L.; Tononi, G. A measure for intrinsic information. Sci. Rep. 2020, 10, 18803.
77. Chari, A.; Budhdeo, S.; Sparks, R.; Barone, D.G.; Marcus, H.J.; Pereira, E.A.; Tisdall, M.M. Brain–Machine Interfaces: The Role of the Neurosurgeon. World Neurosurg. 2021, 146, 140–147.
78. Dingle, A.M.; Moxon, K.; Shokur, S.; Strauss, I. Editorial: Getting Neuroprosthetics Out of the Lab: Improving the Human-Machine Interactions to Restore Sensory-Motor Functions. Front. Robot. AI 2022, 9, 147.
79. Martini, M.L.; Oermann, E.K.; Opie, N.L.; Panov, F.; Oxley, T.; Yaeger, K. Sensor Modalities for Brain-Computer Interface Technology: A Comprehensive Literature Review. Neurosurgery 2020, 86, E108–E117.
80. Opie, N.L.; John, S.E.; Gerboni, G.; Rind, G.S.; Lovell, T.J.; Ronayne, S.M.; Wong, Y.T.; May, C.N.; Grayden, D.B.; Oxley, T.J. Neural Stimulation with an Endovascular Brain-Machine Interface. In Proceedings of the 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER), San Francisco, CA, USA, 20–23 March 2019.
81. Opie, N.L.; Ronayne, S.M.; Rind, G.S.; Yoo, P.E.; Oxley, T.J. Mechanical suitability of an endovascular brain-computer interface. In Proceedings of the 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 26–28 February 2020.
82. Oxley, T.J.; Yoo, P.E.; Rind, G.S.; Ronayne, S.M.; Lee, C.S.; Bird, C.; Hampshire, V.; Sharma, R.P.; Morokoff, A.; Williams, D.L.; et al. Motor neuroprosthesis implanted with neurointerventional surgery improves capacity for activities of daily living tasks in severe paralysis: First in-human experience. J. Neurointerv. Surg. 2021, 13, 102–108.
83. Soldozy, S.; Young, S.; Kumar, J.S.; Capek, S.; Felbaum, D.R.; Jean, W.C.; Park, M.S.; Syed, H.R. A systematic review of endovascular stent-electrode arrays, a minimally invasive approach to brain-machine interfaces. Neurosurg. Focus 2020, 49, E3.
84. Tononi, G.; Boly, M.; Massimini, M.; Koch, C. Integrated information theory: From consciousness to its physical substrate. Nat. Rev. Neurosci. 2016, 17, 450–461.
85. Hutson, M. Could AI help you to write your next paper? Nature 2022, 611, 192–193.
Figure 1. The human ORGANISM-in-its-ENVIRONMENT and INFO-AUTOPOIESIS.
Figure 2. Info-autopoiesis of Sensorial Signals into semantic and syntactic information (Adapted from [53]).
Figure 3. Info-autopoietic communication: transformation of Sensorial Signals into semantic and syntactic information (Adapted from [53]).