Article

Fourth Generation Human Rights in View of the Fourth Industrial Revolution

by
Manuel Jesús López Baroni
Center for Nanoscience and Sustainable Technologies (CNATS), Universidad Pablo de Olavide, 41013 Sevilla, Spain
Philosophies 2024, 9(2), 39; https://doi.org/10.3390/philosophies9020039
Submission received: 18 December 2023 / Revised: 8 March 2024 / Accepted: 13 March 2024 / Published: 19 March 2024
(This article belongs to the Special Issue The Ethics of Modern and Emerging Technology)

Abstract

We are at the dawn of the Fourth Industrial Revolution, characterised by the interaction of so-called disruptive technologies (biotechnology, synthetic biology, nanotechnology, neurotechnology and artificial intelligence). We believe that the challenges posed by technoscience cannot be met by the three generations of human rights that already exist. The need to create a fourth generation of human rights is, therefore, explored in this article. For that purpose, the state of the art will be analysed from a scientific and ethical perspective. We will consider the position of academic doctrines on the issues that a fourth generation of human rights should tackle. And, finally, in this fourth generation, we will propose the principles of identity and precaution as reference values, equivalent to the role played by freedom, equality and solidarity in the first three generations of human rights.

1. Introduction

We are in the prolegomena of the Fourth Industrial Revolution. Given the lack of precedents, it is practically impossible to anticipate its potential implications for our world. Essentially, we are developing technologies that make it possible to manipulate, create, re-create or modify matter, both inert and living, the human species included. Racing against the clock, we have set ourselves the task of restructuring everything that has been handed down to us, whether cultural or natural, and to do so, we make hasty decisions with little debate or collective reflection.
In order to provide a degree of security in the face of possible scenarios, a set of legal norms is being created. These norms are referred to as the “fourth generation of human rights”. The most advanced of these are being developed in the context of the European Union, with the handicap that such standards arise from the drive for single-market homogeneity. However, since the counterpoint is the anomie of the rest of the world, they are an unavoidable point of reference.
Specifically, we have to make decisions on the following questions as soon as possible: whether or not we are in favour of human germ-line modification, and where we draw the line, if at all; whether or not we share genetic material with animals; whether or not we replace non-human living beings with others that have been designed in the laboratory and adapted to our needs; how to deal with regulating artificial intelligence or neurotechnologies, etc.
First, this paper reviews the state of the art from a scientific and ethical point of view. For this purpose, we will take into account the interrelationship between the disciplines involved, the calls from the scientific community for moratoria on research and the potential social impact of such research. Second, we will briefly analyse the position of the academic world on the issues that a hypothetical fourth generation of human rights should address; scholars are divided between those who focus on the regulation of artificial intelligence and those who focus on the “bio” disciplines. Third, we will propose, as reference values for the fourth generation of human rights, the principle of human identity in its various manifestations and the principle of precaution, as currently understood by the European Union. We believe that these principles can fulfil the functions that freedom, equality and solidarity have played in the first three generations of human rights. To conclude, we will take stock of the overall situation.

2. State of Play

We are at the beginning of the Fourth Industrial Revolution [1,2]. In order to understand the extent to which this affects our rights as human beings, we will distinguish three closely related levels. These levels can lead us to justify the need for a new generation of human rights. Specifically, we will analyse the following: (a) what is happening; (b) what is causing it; and (c) how the scientific community is responding to the ethical and social implications of its research.
(a) Firstly, the current industrial revolution is blurring boundaries between perspectives, entities or categories that our ancestors never had occasion to discuss.
(a.1.) The line between the cure and enhancement of human beings is being blurred [3]. In fact, with today’s genome editing techniques, genes can be transferred from one species to another, or silenced or activated in a targeted, non-random way. Characteristics, skills or traits from other species could be introduced into the human gene pool and even passed on to offspring, warns one of CRISPR’s co-discoverers [4]. Even a split of the human species (a species is a group of individuals that can reproduce and produce fertile offspring) would be biologically feasible in the medium term. Should we ban all forms of human germline intervention, as laid down in the European Convention on Bioethics [5,6], or only those that are not therapeutic, as the European Union’s Charter of Fundamental Rights seems to suggest [7]? Do we use this technique for prevention [8], as a kind of genetic vaccine? Last but not least, why not transfer to a human embryo the natural mutations already present in our species, which are the guarantee of above-average health [9]?
(a.2.) The lines between animal and human are being blurred. The above-mentioned techniques of genome editing are making it possible to humanise certain animals [10]. Although the purpose may be legitimate (developing human organs in animals that can be transplanted without risk of rejection or virus transmission [11], humanising a mammalian brain for research into Parkinson’s, Alzheimer’s, dementia, etc.), the risks are unprecedented [12].
We are not talking about genetically modifying a non-human being to make it more like us or, if necessary, to make it extinct (objectives that raise their own, no less disturbing, questions, as is the case with gene drives [13]). Rather, we are talking about the introduction of human biological material into an animal embryo, material that could develop [14], by accident or intentionally, into human neurons (or glial cells), increasing the animal’s cognitive capacity [15], a scenario only surpassed by the possibility of such traits being passed on to offspring [16]. What is the legal and ontological status of an animal with more consciousness or cognitive capacity than it previously had? At what point does an ape abandon its animal nature and become confused with ours [17,18,19]?
(a.3.) The differences between inert matter and living matter are becoming increasingly blurred [20]. Genes are indeed the building blocks of life, but they are not living entities in themselves. To what extent can they be re-used, re-engineered or re-formulated by synthetic biology to explore new possibilities (for example, George Church’s levorotatory life forms; Craig Venter’s artificial cell [21,22]), given that the end product will in fact be a living being? In how many ways can a human cell be developed, differentiated or synthesised so that, if implanted into a woman’s womb, it has the capacity to become a human being? On the other hand, how do we regulate the industrial ownership of entities such as genes or the products of synthetic biology [23]?
Indeed, embryonic stem cells and IPS cells may, in the medium term, make sperm and eggs a thing of the past. What biological entity will a human quasi-embryo be for legal purposes [24]? Should a synthetic human genome (with the capacity to create a human being) be equated with embryos for the purposes of the European Convention on Bioethics, which prohibits their creation for research (only leftovers from in vitro fertilisation are allowed for research)? And if an IPS cell can be fully reprogrammed to develop into a human embryo, what will be the status of skin cells, hair cells, etc.? When will an IPS cell be able to dedifferentiate up to the embryonic stage, as is already the case in mice [25]? Is every single cell in the human body going to be an embryo for the purposes of the law [26]? Moreover, should human brain organoids be considered human, or only if they suffer and/or are conscious? Looking deeper into this question prompts the following: can a human brain organoid really become conscious and even feel [27]? If so, what is the ontological and legal status of such a biological entity? Is it in legal limbo, as is currently the case with pseudo-embryos [28]?
To paraphrase Pérez Luño, if the holder of third-generation human rights is the “interconnected” individual [29], the holder of the fourth generation would be, in Rodotà’s felicitous expression, the “disseminated” human being [30], or interconnected, but with the planet [31]. And even then we fall short, as we shall see later, because for the first time, we have to consider the legal status of entities, not necessarily biological, with human attributes.
(a.4.) The boundaries between the real, the imagined and/or the digital are becoming blurred [32]. Can an AI create [33], invent [34] (the Dabus case [35]), reflect and, in short, surpass us? What, in particular, should regulation cover [36]? What is, ontologically and legally, a set of instructions (algorithms) that is capable of manipulating us, deceiving us, distorting our will and even endangering our societies [37]? How do we regulate algorithms that generate new algorithms, which in turn generate further algorithms, and so on without end? Is the black box of an artificial intelligence comparable to our consciousness, in the sense of being inscrutable or incomprehensible? In short, if we are dealing only with algorithms, even if they have fabulous predictive capabilities, then what are we afraid of? But if they are something else, what exactly are they?
On the other hand, will neuro-rights be enough of a barrier to protect our brains from being invaded by political, economic and religious powers when they try to interfere with our minds through external or internal interfaces? Or is it still too early to regulate this matter without descending into the realms of science fiction [38]?
And finally, will AI algorithms be able to understand us better than we do, to the point of anticipating our desires, imagining or facilitating our actions and channelling them appropriately, long before we are conscious (e.g., Libet case [39]), if these two disciplines, AI and neurotechnologies, are intertwined?
(a.5.) As we have seen, we are witnessing an interaction between different disciplines that is blurring spheres, levels or subjects that we have always kept separate. This process of convergence also affects the disciplines themselves, so that it is no longer easy to identify which specialist is dealing with what. In this way, the algorithms of artificial intelligence can be used, in a non-exhaustive list, to do the following: (a) accelerate discoveries in other areas of knowledge (e.g., the prediction of the three-dimensional structure of proteins); (b) design a living entity with the ability not only to survive but also to replicate itself (e.g., biobots) [40]; (c) use biological material to support computing (e.g., biological computing); (d) create an entity with intelligence equal to or greater than ours (e.g., strong artificial intelligence), and so on. In all these cases, it is difficult to find an expression that captures the complexity of the interaction between the “bio” and “digital” disciplines: we are witnessing a fusion of specialities with very different objectives but with an undeniable interaction, and therefore, a potential “emergence” of unprecedented events.
(b) Secondly, we ask ourselves what makes this progressive process of blurring possible, which confronts us with scenarios halfway between the hope of improving our quality of life and the most unimaginable dystopia [41].
The technologies responsible for these questions are biotechnology, synthetic biology, nanotechnology, neurotechnologies and artificial intelligence. Together, they are leading the Fourth Industrial Revolution, but what matters most is not what each can do on its own but how they overlap, interrelate and feed into each other, so that advances in one have an impact on the others, something that had already been noticed in the USA in 2003 and was replicated in the European context the following year [42,43].
For this reason, these technologies are described as exponential (for their ability to increase our capacities on an exponential basis), emergent (playing on the double meaning of the word “emergence”, where the whole is more than the sum of its parts, as is the case with life or consciousness), disruptive (because of their potential to substantially alter our societies) and, finally, to paraphrase Ricoeur, technologies of suspicion (because of the intuitive sense that everything we have taken for granted until now, such as life, human beings or reality, can be irreversibly replaced) [44].
(c) Thirdly and lastly, we consider how the scientific community reacts to the possible collateral effects of what we have described so far, a yardstick by which we can gauge the true extent of the present technoscientific revolution.
Indeed, it is through two closely related events that we can best understand the uniqueness of our times. We refer to the scientific moratorium called for by biotechnologists in 1973 (Paul Berg et al., Asilomar [45]) and the more recent one, in March 2023, proposed by those researching artificial intelligence [46,47]. Meanwhile, another moratorium was called for in 2019, this time demanding that experiments with the CRISPR genome editing technique be suspended. To complete the circle of interconnection between living matter, inert matter and algorithms, suffice it to recall that the Asilomar Principles on Artificial Intelligence (2017) are named precisely in honour of their biotechnological counterpart, the moratorium invoked in such a symbolic place in the 1970s [48].
The common link in this period, almost half a century, is that scientists have become aware of the uniqueness of their research, i.e., the fact that we have no precedent for the experiments, objectives or projects they are developing, hence their perplexity and even fear. It should be noted that the scientific community is not proposing to ban research but to postpone it until the risks to our societies have been properly assessed and the moral and legal limits have been established. It is for this reason that the possibility of such moratoria being legally binding has been under consideration.
In short, the issue we are analysing is a consequence of this progressive process of blurring of boundaries, which did not need to be addressed by previous generations of human rights; of the process of convergence of the aforementioned technologies, which did not need to be addressed by our ancestors; and, finally, of the warnings and fears expressed time and again by the various scientific communities about the imminent risks facing our societies.
Fourth generation human rights seek to provide a legal/philosophical response to this scenario.

3. Theoretical Justification

There is some consensus in the academic world about the need to create a fourth generation of human rights, which indirectly implies acceptance of the traditional tripartite classification. However, this starting axiom does not entail a consensus on which facts in particular justify its existence, a question that directly affects its content. Therefore, we would have to make a distinction between the following:
The position of those who are in favour of the need for a fourth generation of human rights [49,50,51,52] but either disregard traditional academic classification or separate it from technoscience [53].
The position of those who argue that current technoscientific advances justify the need for a fourth generation of human rights [54,55] but do not agree on which disciplines, in particular, justify this requirement. Thus, we must distinguish between the following:
The first sector of the doctrine focuses on the needs of the digital world [56,57,58,59,60,61,62], including artificial intelligence [63].
A second sector of the doctrine focuses on genetic and biomedical issues [64,65], which in practice represents a juridification of bioethics mediated by the term “bio-law” [66,67,68,69,70,71] and which has led to a degree of sectoralisation (e.g., neurolaw/neurorights).
A third sector refers to both fields (digital and biotech) but not to the interaction between the two. This nuance is important because the problems continue to be treated in watertight compartments.
Finally, we would like to stake out our own position at the following point: The justification for arguing for the emergence of a fourth generation of human rights lies not in a particular technology or technologies, let alone a political issue, but in the interaction and feedback of exponential or disruptive technologies; that is, in the way they feed back into one another and advance in unison, blurring the academic boundaries between disciplines [72], raising the troubling questions we discussed in the previous section (can an AI create viruses completely autonomously, even if it lacks self-awareness, communicating with laboratories via the internet without us noticing [73]?) and forcing us to carry out a holistic analysis for which we have no precedents [74,75].
The challenge is to find values that can operate transversally across these technologies (just as the technologies themselves present themselves to us as an interwoven whole), which would then allow for subsequent normative concretisation.
However, if we take a closer look at the recent legislation on data protection, digital services or AI, or at the more-than-outdated legislation on biotechnology (there is practically nothing on synthetic biology or neurotechnologies), we see that it has been drafted not from the perspective of human rights but from the point of view of the needs of the internal market, where the citizen is a customer and his or her demands are those of a consumer. Hence the need to create a new generation of human rights, to draw red lines that cannot be crossed by a technoscientific development essentially driven by the capitalist system.

4. Intersection of the Fourth Generation with All Generations of Human Rights

With the need for a fourth generation of human rights justified by the technological revolution, we must ask ourselves what values inspire this generation and what relationship we can find between the previous three and the fourth generation.
Until now, each generation of human rights has hinged on a value that served as an axiom, which then allowed it to be broken down into different values of a secondary or subordinate nature. The challenge in the case of the fourth is to find a starting point that avoids falling into extreme casuistry on the one hand and being overtaken by the rapid advances of technoscience on the other. That is, we must find a value that fulfils the functions that freedom, equality and solidarity, respectively, performed for the first three generations of human rights and then examine how these generations interrelate with the fourth.
However, we must not forget that the three most characteristic values of the first three generations of human rights are far from clear or uniform in their meaning, so it would not be reasonable to impose on the fourth generation the systematic requirements which, for various reasons, were not imposed on their counterparts.
To analyse these questions, we will compare the fourth generation with the first two and then with the third generation.

4.1. Confrontation of the Fourth Generation of Rights with the First Two Generations of Rights

I believe that the “freedom/equality” pair that characterises each of the first two generations of rights has been replaced or complemented in the case of the fourth generation by the idea of “human identity”.
This right to identity can be broken down into at least four levels: (a) Subjective identity, which would imply the right that technoscience does not make us doubt or even lose our sense of self (e.g., neurotechnologies [76]). (b) Objective identity, which would imply the right that technoscience does not make us doubt or even lose our sense of reality (e.g., deep fakes or ultra-counterfeits in the EU AI Act). (c) Species identity [77], which would imply the right not to be left in doubt as to whether certain biological entities (e.g., chimaeras, brain organoids, etc.), non-biological entities (e.g., artificial intelligences) or hybrids (e.g., products of synthetic biology, such as Boeke et al.’s synthetic human genome, or entities designed by AI, such as the emerging biobots) are comparable to human beings. (d) Identity in the sense of historical continuity, which would imply the right for our world to remain recognisable as such and not to be transformed or completely replaced (e.g., by nanotechnologies and other disruptive technologies, gene drives in ecosystems, biological diversification of the human species, etc.), i.e., to avoid transhumanist objectives or any dystopian singularity that implies an absolute rupture with our past [78].
In this sense, we can see how this value implicitly or explicitly inspires both the Oviedo Convention and the four Protocols that supplement it (e.g., the prohibition on modifying the germline, even for medical reasons, in order to prevent the introduction into the human gene pool of variants that are foreign to us; the prohibition of human reproductive cloning, which prevents the biological identity of subjects from being artificially multiplied through the creation of twins). The Charter of Fundamental Rights of the European Union reinforces this value by prohibiting the human body or parts of it as such from being turned into an object of profit, which would blur or extend the identity of the subject to the extent that their DNA could be incorporated into other biological entities or used for spurious purposes (e.g., think of IPS cells and their use for animal/human chimaeras). The Explanatory Memorandum to the Charter links the principle of autonomy (Art. 3.2.1) to moral integrity (physical and psychological, as it appears in the first paragraph), citing the jurisprudence of the EU Court of Justice [7]. The Court’s ban on the patenting of totipotent human cells (cells capable of giving rise to a human being), whatever name, embryo, pseudo-embryo, etc., may be given to them, is also based on this idea of respecting “human identity”. What matters is whether the entity is identifiable as human or not, hence its withdrawal from the market (Sandel’s famous book, “What Money Can’t Buy”, is part of this attempt to put limits on the market), reinforcing this idea of “subjective identity”. (The biological material of a person is not to be equated with an object that can be appropriated and/or transferred.)
When the first article of the Universal Declaration on the Human Genome and Human Rights (UNESCO, 1997) states that, “in a symbolic sense, the human genome is the heritage of humanity”, it strengthens our identity as a collective and focuses our definition as a species not on metaphysical abilities (consciousness, language, transcendence, symbolism, etc.) but on something tangible and material: our DNA. A human being is someone who shares a genetic identity with the other members of our species, which is thereby bounded and defined. All other living or inert entities are not human. This is why it is so important to avoid blurring our genetic identity by sharing it with other entities, especially animals that are evolutionarily close to us (e.g., the chimaera problem).
The idea underlying all these rules is that our genome transcends the individual (it belongs not just to one person but to the human species as a whole), hence the need to set direct and indirect limits on how it can be isolated, modified, recreated or mixed with that of other species. The final result, whatever it may be, must not violate our “identity”, an undoubtedly ambiguous term (in any case, no more so than “freedom” or “equality”, which inspired the other two generations) but the only intellectual handle for dealing with certain experiments. This is why the first article of the Oviedo Convention links “dignity” to “identity”, i.e., that whatever happens, a biological entity should be unmistakably human, or not, but without ambiguities or disturbing tertium genus.
But as we have discussed, the Fourth Industrial Revolution affects not only living matter but also inert matter, both in itself and in its interaction with living matter. As far as this issue is concerned, human beings are caught between two technologies that can seriously condition our nature as morally autonomous individuals: neurotechnologies (external and internal interfaces) and artificial intelligence. As a result, and for the first time in our history, we are taking action not against the actions of other human beings but against living or inert entities that we could never before have imagined as an affront to our dignity or as blurring our sense of reality and/or subjectivity (identity).
The expression that best encapsulates the translation of the principle of identity to our interaction with these new technologies is that of “meaningful human contact”, with content that expands as the possibilities for conditioning us grow.
Thus, it is not enough for the interaction to be voluntary (equivalent to the principle of autonomy in bioethics), but it is essential that the subject is aware that he or she is facing an artificial intelligence, i.e., that he or she identifies the interlocutor as “non-human”. As the ability to blur reality through images or sound is increasing by leaps and bounds, the European Union has created the concept of “deep fake” (ultra-falsification), so that it is mandatory to inform the citizen not only that he or she is dealing with an AI but also that what he or she is seeing, however real it may seem, is artificially created.
The line between confusion and manipulation is a fine one, so fine that the EU AI Act prohibits any form of altering a person’s behaviour that could endanger their physical or psychological safety or that of others. In this sense, we must emphasise how the possibilities of neurotechnologies border on the implausible, from the blurring of personality to the possible emergence of new properties from interconnected brains (who knows if also connected to the internet). For this reason, neuro-rights seek to complement the incipient regulation of AI in order to protect our most intimate sphere, the human mind, as much as possible, hence the special precautions for the most vulnerable groups, such as minors, the elderly or the disabled, in both neurotechnologies and AI.
The term “meaningful human contact” also implies the reservation of some human facets that cannot be delegated to AIs, especially those related to life and death. It is not about the red button (there must always be a human to take responsibility for actions) but about avoiding a total and absolute transfer of our lives as human beings, including our most intimate ones, to AIs, however safe they may be.
In short, the Fourth Industrial Revolution takes us to the vertiginous heights of situations where ultimately our identity as human beings is at stake, from our genomic integrity to our moral integrity, that is, our ability to make free, conscious and responsible choices without external interference that manipulates, conditions or limits us and without being able to confuse the reality or nature of our interlocutors. This is why we believe that “identity” should be the intersection of disruptive or exponential technologies and, therefore, the key axiom of this fourth generation of human rights.
Finally, unlike the value of “dignity”, which can also be used and is in fact constantly invoked in relation to technoscientific issues, “identity” allows us to establish more objective parameters, albeit not without difficulty, for assessing when it is violated and when it is not. For all these reasons, we could base our argument on the following reflection of the Academy of Medical Sciences:
“Whether or not a blended embryo is predominantly ‘human’ is an expert judgement, including an assessment of the likely phenotype, but neither the precise final composition of an individual embryo nor the phenotypic effect of blending will be readily predictable at the present state of knowledge [10].”

4.2. Confrontation of the Fourth Generation of Rights with the Third Generation

If the third-generation rights are concentrated around the value of “solidarity”, I believe that the fourth-generation rights revolve around the precautionary principle [79], the legal translation of Jonas’s principle of responsibility [80].
The EU Charter of Fundamental Rights did not enshrine the precautionary principle (it only established a “high level of protection” for the environment). Neither did the International Covenants on Civil and Political Rights or on Economic, Social and Cultural Rights. Moreover, other iconic documents, such as the Universal Declaration of Human Rights or the European Convention on Human Rights, make no reference to this principle. It could be said that these are earlier legal documents, but we could counter this argument by showing how the European Convention on Bioethics (1997) and the Universal Declaration on Bioethics and Human Rights (2005) continue this line of silence on a principle that is particularly inconvenient for reasons that are not strictly legal but rather political and economic.
In fact, the precautionary principle obliges us to take action against risks that may themselves be highly unlikely, bordering on the impossible. It is what Ravetz calls “ignorance squared” [81] (we do not know what we do not know), which forces us to refine predictions in situations of maximum uncertainty. That is exactly what is happening with the exponential, or disruptive, technologies leading to the Fourth Industrial Revolution.
It is about taking action on issues such as the following: (a) If gene drives are applied to mosquitoes in Brazil or mammals in Australia, could they have an impact on ecosystems across the planet? (b) If human germline modification becomes widespread, could it have a structural impact on our species? (c) If we introduce human biological material into animal embryos, are we taking an existential risk? (d) If we continue to increase the potential of AI, could it surpass the human mind? (e) If we continue to increase the potential of AI, could we be taking an existential risk? (f) Can brains really be connected to each other or to a computer network?
In fact, the precautionary principle was not originally conceived to answer these questions. Its context is that of environmental protection in the 1960s. However, the emergence of these and many other unanswered questions has led to the extension of this principle to any scientific activity, technology or research that might pose a risk, however unlikely, to the planet. It is, therefore, perhaps fair to recognise that the precautionary principle, as we know it today, has been a product of the European Union, not because it had not been legally enshrined before but because the EU has given it a broader meaning, in line with the technologies discussed above.
Indeed, a European Commission Communication on the use of the precautionary principle explains how the principle’s field of application has been extended beyond the environment, so that it may be invoked when “scientific information is incomplete or inconclusive and the risk is considered too high to be imposed on society” [82].
In order to understand the meaning of certain terms, it is necessary to clarify that “risk” is measured not only by the probability of an event occurring but also by the impact it would have if it did occur (e.g., a nuclear power plant is unlikely to explode, but the consequences would be intolerable if it did) and that this impact may affect people, the environment or even the planet.
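As a minimal sketch in our own notation (it does not appear in the EU texts), this weighting can be made explicit as a simple product:
R = p × I
where R is the risk used in the precautionary assessment, p is the probability that the event occurs and I is the magnitude of the harm if it does. Even a p close to zero cannot offset an intolerable I, as in the nuclear-plant example, which is why such an assessment cannot be guided by probability alone.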
Its most representative practical application was the 2018 ruling of the Court of Justice of the European Union, which invoked this very principle in relation to the CRISPR genome-editing technique [83]. According to this ruling, if an organism is genetically modified using this technique, the 2001 Directive on GMOs will apply [84]. This means that it will be subject to the same controls as transgenesis, even though no transgenesis has technically taken place and even if the end result is indistinguishable from a natural organism or from one modified by non-ionising radiation or chemical products (critics of the decision make precisely this argument: why apply the above-mentioned directive to a modified biological entity if it cannot be distinguished from the rest?). Regardless of how right the ruling is, we consider it to be the most representative example of the times we are living in. Such a ruling would be unthinkable in the United States.
Of course, the application of the precautionary principle comes at a high price. If we compare the policies of the European Union with those of the United States, not to mention China, we can see how this principle prevents the marketing of products or services that are widely used outside the European continent. As a result, foreign patents have to be paid for, leading European research companies have to relocate and the pace of growth slows down. This explains the difference in the treatment of GMOs in the US and Europe. Recent legislation in the US has effectively put genetic engineering on an equal footing with traditional agriculture, so that there is almost no filter on the marketing of GMOs [85], while the EU is governed by a clearly outdated 2001 directive and the aforementioned European Court of Justice ruling, which restricts GMOs as much as possible. It is, after all, a principle that can be mistakenly invoked to justify protectionist economic practices or to defend the irrationality of technophobia.
In light of the above, I believe that the precautionary principle, in the sense in which it is currently recognised by the EU, can imply at least three rights: (a) the right that technoscience does not substantially alter our model of civilisation; (b) the right that technoscience does not pose a risk to ecosystems; (c) the right that no experiments are carried out or technologies developed that pose an existential risk to the human species and even to life itself on the planet (extinction).
In short, if the value of solidarity represents the third generation of human rights, I believe that the precautionary principle, as interpreted in the above-mentioned European Commission Communication, i.e., with an application that goes beyond the environmental context, characterises the fourth generation of rights under consideration. This is a consequence of the potentially disruptive nature of the technologies at the forefront of the Fourth Industrial Revolution.

4.3. Extension of the Rights of the First Three Generations to the Fourth

To summarise what has been discussed in the last two subsections, the first generation of rights would be represented by liberty, the second by equality, the third by solidarity and the fourth by identity and precaution.
This does not mean, of course, that they are watertight compartments. One need only recall what has already been discussed about how difficult it is to assign a particular right to a particular generation, because it is sometimes spread over three generations.
Notwithstanding the fact that the fourth generation of rights is fundamentally based on the values mentioned above, this generation has refined the rights acquired in the previous three generations.
In this way, the liberal/bourgeois “freedom” of the first generation has given way to the “autonomy” of bioethics, with repercussions in many areas, such as the voluntary nature of human experimentation, the acceptance or refusal of medical treatment (including active euthanasia), transplants (including between living donors), predictive genetic diagnosis, etc. The right of children to know who provided the biological material that allowed them to be born (in vitro fertilisation with an anonymous donor) is part of the right to the free development of the personality, and although it is not yet recognised in Spain, other European countries, such as Portugal, have not hesitated to put an end to anonymity (cf. Article 7 of the Convention on the Rights of the Child). Discussions about whether an AI can create and/or patent an invention seek to extend freedom of thought, creation or research to a format for which these freedoms were not originally conceived but in which they can undoubtedly be included.
Finally, respect for free will as a neuroright is about adapting individual freedom to the context of neurotechnological interfaces and their infinite possibilities for conditioning our will.
Worker/socialist equality and its flip side, the prohibition of discrimination, has carried over into the fourth generation as efforts to prevent bias and the objectification of women in artificial intelligence, to ensure the non-discrimination of those subject to biomedical intervention (e.g., children born in vitro, hence no civil registration) and to avoid the stigmatisation of ethnic minorities in research involving genetic material.
Finally, echoes of solidarity can also be found in the fourth generation in policies aimed at socialising scientific advances (Art. 2(f) of the UNESCO Universal Declaration on Bioethics and Human Rights), including possible genetic (e.g., Singer [86]) or neurological (Yuste et al. [76]) improvements of the human species. That is, together with a certain awareness of our obligations towards future generations, so that they do not inherit a worse world than ours, the objective at the moment is that no one is left behind in what seems to be a qualitative leap in our structural constitution as living beings. All or nothing seems to be the offer from the third generation to the fourth.

5. Conclusions

Looking at the interaction between the five disruptive or exponential technologies and the three generations of human rights examined, we can draw the following conclusions:
1. Unlike the other three generations, the fourth generation of human rights cannot be linked to a social or political revolution or to a specific declaration of rights. If the third generation was founded in 1979, it seems obvious that the fourth generation was born later. It can also be observed that the use of the expression we are analysing has increased since 2000, perhaps due to the symbolism of the turn of the century. However, the advances that would, in our view, justify the need for a fourth generation of human rights are somewhat later and more gradual (e.g., neuroethics appeared in 2002; the leap in artificial intelligence took place in 2005; the CRISPR genome editing technique appeared in 2015, etc.). In other words, the term “fourth generation of human rights” is materially justified by the progress made after the term was first used, essentially over the last twenty years.
On the other hand, the need for these rights is justified by the Fourth Industrial Revolution, and more specifically by the way in which the technologies at the forefront of that revolution intersect. They appeared unconnected throughout the 20th century (AI in the 1950s, biotechnology in the 1970s, etc.), but what is relevant is not the specific date on which each appeared separately but the moment when they began to interact with and feed back into each other, something that was noted at the beginning of this millennium on both sides of the Atlantic in the two documents already cited.
2. We need to find a way of attributing subjective rights to entities, biological or otherwise, to which the law has never paid attention. In fact, during the first generations of human rights, there were bitter arguments about whether the rights holders were only individuals or whether, on the contrary, they also included human groups. Over time, this debate evolved into the no less problematic question of whether we have duties to animals or whether they have rights in themselves as sentient beings.
With the Fourth Industrial Revolution, the debates have taken on a new dimension. The question arises as to the point at which an entity, natural or otherwise, can be a holder of rights, not because we grant them to it, as we do to animals or groups of humans, but because its cognitive structure allows it to demand them. Thus, entities such as a Neanderthal brought back to life, a mammal with human neurons or, who knows, an artificial intelligence may prove either fully comparable to us (e.g., hominins) or sufficiently close to animals (or distant from them, depending on the perspective adopted, as with an ape carrying human biological material) to force us to fundamentally rethink the question of the subjective ownership of rights. In between, we will have to regulate the status of entities as strange to our legal tradition as biobots (cells designed by an AI), brain organoids, creations of synthetic biology, virtual or augmented reality, etc. We will have to deal with surreal debates, such as whether a group of interconnected human brains will lead not only to the emergence of new properties, as neurotechnologists predict or rather describe, but also to a group subjective right. We will have to narrow the concept of the individual, since a human being can not only reproduce after death by in vitro fertilisation but also clone himself or herself or, more worryingly, share cells, organs or genetic material no longer with an animal but with a lineage of animals (which raises the question of how to deal legally with the relationship between the owner of genetic material and a cohort of animal descendants with whom he or she shares this material). And what about computing with biological material?
Finally, to the extent that a simple skin cell can become a totipotent cell (IPS cell), we will have to address the legal status of biological material that we have not paid much attention to so far (e.g., hair, skin).
On the other hand, we will have to face the regulation of the intellectual and/or commercial property of entities with diffuse individuality (nucleotides, chromosomes, genes, sequences and algorithms), which may, in turn, contain the rules or instructions for creating living beings that can reproduce themselves (biotechnology, synthetic biology), at a time when artificial intelligence may come to equal or even surpass us, if only for commercial purposes.
In short, the number of entities that are subject to legal regulation, biological or otherwise, is growing exponentially, so that from the fourth generation of human rights onwards, both the ontological nature of these entities (what they are, what they can be equated with) and their legal status will have to be addressed.
3. The Fourth Industrial Revolution can be seen either as, in effect, simply the fourth in a series of revolutions or as a hiatus in human history. In any case, I believe that this dichotomy, rupture or simple evolution, allows us to advocate the need for a fourth generation of human rights, in line with the interesting canon proposed by González Álvarez to justify the birth of a new generation of human rights [87], without falling into a mere modulation of the previous generations or an adaptation to new threats that never leaves their framework.
We need to ask how we can regulate the implicit or explicit, conscious or unconscious, voluntary or involuntary claim to create something superior to ourselves.
The “something” we seek may be an enhanced human being, an artificial intelligence, a hybrid, a chimaera or some kind of entity currently unimaginable to us. For all practical purposes, it makes no difference. The axiom that drives us is that if we can, we will. Many things can happen in the meantime, from enormous benefits for all humanity (the AI unravels all possible combinations of proteins) to dystopias (we achieve, in effect, the first goal but not the second). The only thing that can stop it is the laws of nature, i.e., insurmountable limits (e.g., the speed of light; Pauli’s exclusion principle, etc.); everything that is technologically feasible, even if not morally acceptable, will become a reality once we have made sufficient progress. The weakness of the law that we have examined, a reformulation of the classic laissez faire, laissez passer, is the legal response to this lack of reflection on where we are going.
Our connection with future generations is, therefore, different from the one our ancestors had with us. We can shape their world in structural and irreversible ways. And that is the best case, since there is also room for other scenarios, unlikely but no longer merely imaginary, in which a collective hecatomb (Bostrom’s “hiatus”) occurs. For these reasons, the fourth generation of human rights must address the question of how to give legal value to scientific moratoria, so that they are not merely well-intentioned declarations; how to restrict or ban certain research (e.g., gain-of-function research on viruses, gene drives, synthetic biology, artificial intelligence); how to regulate the democratisation of technology (e.g., biohackers experimenting with CRISPR); and a long etcetera that could endanger everything we know. Treaties on the non-proliferation of nuclear weapons, the banning of biological weapons, etc. are inadequate because of the scale and unpredictability of the scenarios that can arise. In short, we need to give legal status to Cecchetto’s “ethics for the absent” [88].
To this end, we take shelter behind “identity” and “precaution”, a mixture of crypto-iusnaturalism and extreme utilitarian pragmatism. Nevertheless, once the axioms have been established, we must be as quick as we are cautious in setting clear, precise and binding global limits on the morally insurmountable red lines of the current technological revolution, lest it lead, at the very least, to legal/moral involution.
Last but not least, neither atoms and molecules nor non-human beings are aware of our concerns and limitations, so it seems obvious that we cannot regulate at a regional, let alone national, level a technoscientific revolution that is taking place on a planetary scale. For the first time, we can manipulate living and inert matter in ways that are unprecedented both in their scale and in the symbolism of the objects involved. We are, therefore, faced with philosophical/legal problems of the first order, ranging from the gradual granting of rights to non-human entities that are close to our nature to the preservation of the planet for future generations, hence the need to develop, protect and deepen the fourth generation of human rights.
4. Three institutions are currently focusing international attention on AI. The most protective standard from a human rights perspective has been created by UNESCO [89], which also attributes the most risks to AI. But it is only a recommendation, i.e., a symbolic, non-binding statement. The Council of Europe’s draft Convention on AI is the second relevant standard [90]. However, it will only bind those countries that voluntarily sign it. Its immediate predecessor, the Convention on Bioethics, was not signed by most of the major European countries for one reason or another, and it has also been rendered outdated by the rapid advances in biotechnology and biomedicine. It seems reasonable to expect the same to happen with the AI Convention. Thirdly and finally, the EU regulation on AI has the advantage of being binding on all 27 EU countries, but it is less ambitious than the other two instruments mentioned above, not least because its ultimate aim is the unity of the single market and not the protection of human rights. Moreover, the fact that it does not come into force until 2026 suggests that it will be outdated in the face of rapid technological progress. Finally, although the Bletchley Declaration, in line with a doctrinal sector concerned about these links, points in that direction, none of the three standards links AI to biotechnology [91].
Any hypothetical international human rights norm on disruptive technologies will face the same problems: how to reconcile the protection of citizens with the sovereignty of states that are reluctant to accept any regulation that might impede technoscientific progress; how to adapt the rules to the dizzying advance of technoscience; and, finally, how to interrelate such disparate disciplines.
Despite these limitations, it is clear that the only laws that can make sense in the context analysed in this paper are binding international standards. We need to achieve universal common minimum standards that transcend the framework of the nation-state, that are flexible enough to adapt to new realities and that manage to regulate the progress of the various disruptive disciplines in a cross-cutting way.
In summary, we believe the following: (a) it is undeniable that we are at the beginning of the Fourth Industrial Revolution; (b) this revolution is being led by disruptive technologies, i.e., biotechnology, synthetic biology, nanotechnology, neurotechnology and artificial intelligence; (c) the challenges we face in the short term cannot be met by the human rights established in the three previous generations; (d) the right to human identity can serve as a basis for addressing both the challenges of AI and the challenges of the “bio” disciplines, including neurotechnology; (e) the precautionary principle, understood not only as an environmental principle but also as it is understood by the Court of Justice of the European Union and by the EU itself, can serve as a point of reference when taking decisions that go beyond the temporal and/or spatial framework of technological progress, allowing them to be concretised as the situation requires.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. World Economic Forum. Health and Healthcare in the Fourth Industrial Revolution Global Future Council on the Future of Health and Healthcare 2016–2018; World Economic Forum: Geneva, Switzerland, 2019. [Google Scholar]
  2. European Parliament. Resolution of 3 May 2022 on Artificial Intelligence in the Digital Age (2020/2266(INI)); European Parliament: Strasbourg, France, 2022; pp. 8–10. [Google Scholar]
  3. López Baroni, M.J.; Marfany, G.; Lecuona, I.D.; Corcoy, M.; Boada, M.; Royes, A.; Santaló, J.; Casado, M. La edición genómica aplicada a seres humanos: Aspectos éticos, jurídicos y sociales. Rev. Derecho Genoma 2017, 46, 317–340. [Google Scholar]
  4. Doudna, J.; Sternberg, S. Una Grieta en la Creación. CRISPR, La Edición Génica y El Increíble Poder de Controlar la Evolución; Alianza Editorial: Madrid, Spain, 2020. [Google Scholar]
  5. Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine; Council of Europe: Oviedo, Spain, 1997.
  6. Parliamentary Assembly of the Council of Europe. Recommendation No. 934 of the Parliamentary Assembly of the Council of Europe, on Genetic Engineering; Parliamentary Assembly of the Council of Europe: Strasbourg, France, 1982. [Google Scholar]
  7. Explanations on the Charter of Fundamental Rights. Official Journal of the European Union. C 303/17. 12/12/2007. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32007X1214%2801%29 (accessed on 30 November 2023).
  8. Cyranoski, D.; Ledford, H. Genome-edited baby claim provokes international outcry. Nature 2018, 563, 607–608. [Google Scholar] [CrossRef] [PubMed]
  9. DrXaverius. George Church’s Project. Xataka 20/11/2018. Available online: https://www.xataka.com/medicina-y-salud/10-variantes-geneticas-ventajosas-efectos-secundarios-que-george-church-quiere-meter-nuestros-hijos-crispr (accessed on 30 November 2023).
  10. Academy of Medical Sciences. Exploring the Boundaries: Report on a Public Dialogue into Animals Containing Human Material. September 2010. Available online: https://www.ipsos.com/en-uk/exploring-boundaries-public-dialogue-animals-containing-human-material (accessed on 30 November 2023).
  11. Wang, J.; Xie, W.; Li, N.; Li, W.; Zhang, Z.; Fan, N.; Ouyang, Z.; Zhao, Y.; Lai, C.; Li, H.; et al. Generation of a humanised mesonephros in pigs from induced pluripotent stem cells via embryo complementation. Cell Stem Cell 2023, 30, 1235–1245. [Google Scholar] [PubMed]
  12. Tan, T.; Wu, J.; Si, C.; Dai, S.; Zhang, Y.; Sun, N.; Zhang, E.; Shao, H.; Si, W.; Yang, P.; et al. Chimeric contribution of human extended pluripotent stem cells to monkey embryos ex vivo. Cell 2021, 184, 2020–2032. [Google Scholar]
  13. Esvelt, K. Conservation demands safe gene drive. PLoS Biol. 2017, 15, e2003850. [Google Scholar]
  14. Aksoy, I.; Rognard, C.; Moulin, A.; Marcy, G.; Masfaraud, E.; Wianny, F.; Cortay, V.; Bellemin-Ménard, A.; Doerflinger, N.; Dirheimer, M.; et al. Apoptosis, G1 Phase Stall, and Premature Differentiation Account for Low Chimeric Competence of Human and Rhesus Monkey Naïve Pluripotent Stem Cells. Stem Cell Rep. 2021, 16, 56–74. [Google Scholar]
  15. Han, X.; Chen, M.; Wang, F.; Windrem, M.; Wang, S.; Shanz, S.; Xu, Q.; Oberheim, N.A.; Bekar, L.; Betstadt, S.; et al. Forebrain engraftment by human glial progenitor cells enhances synaptic plasticity and learning in adult mice. Cell Stem Cell 2013, 12, 342–353. [Google Scholar]
  16. Belmonte, J.C.I. Órganos humanos fabricados dentro de animales. Investig. Ciencia 2017, 484, 26–34. [Google Scholar]
  17. Counihan, D. Neurological Chimeras and the Moral Staircase. In Chimera Research. Methods and Protocols; Hyun, I., De los Ángeles, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  18. Greeley, H.; Farahany, N. Advancing the ethical dialogue about monkey/human chimeric embryos. Cell 2021, 184, 1962–1963. [Google Scholar] [CrossRef]
  19. Sawai, T.; Hatta, T.; Fujita, M. Japan Significantly Relaxes Its Human-Animal Chimeric Embryo Research Regulations. Cell Stem Cell 2019, 24, 513–514. [Google Scholar]
  20. Deplazes, A.; Huppenbauer, M. Synthetic organism and living machines. Syst. Synth. Biol. 2009, 3, 55–63. [Google Scholar] [CrossRef] [PubMed]
  21. Church, G.M.; Regis, E. Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves; Basic Books: New York, NY, USA, 2012. [Google Scholar]
  22. Venter, J. Synthetic Life. Transcript Press Conference. 2010. Available online: https://www.ted.com/talks/craig_venter_unveils_synthetic_life/trasncript (accessed on 30 November 2023).
  23. Boeke, J.D.; Church, G.; Hessel, A.; Kelley, N.J.; Arkin, A.; Cai, Y.; Carlson, R.; Chakravarti, A.; Cornish, V.W.; Holt, L.; et al. The Genome Project-Write. Science 2016, 353, 126–127. [Google Scholar] [CrossRef] [PubMed]
  24. López Baroni, M.J. El criterio de demarcación en las Biopatentes. An. Cátedra Fr. Suárez 2018, 52, 131–153. [Google Scholar] [CrossRef]
  25. Oldak, B.; Wildschutz, E.; Bondarenko, V.; Comar, M.Y.; Zhao, C. Complete Human Day 14 Post-Implantation Embryo Models From Naïve ES Cells. Nature 2023, 622, 562–573. [Google Scholar] [CrossRef] [PubMed]
  26. Takahashi, K.; Yamanaka, S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell 2006, 126, 663–676. [Google Scholar] [CrossRef]
  27. Farahany, N.; Greely, H.T.; Hyman, S.; Koch, C.; Grady, C.; Pașca, S.P.; Sestan, N.; Arlotta, P.; Bernat, J.L.; Ting, J.; et al. The ethics of experimenting with human brain tissue. Nature 2018, 556, 429–432. [Google Scholar] [CrossRef] [PubMed]
  28. Ilic, D.; Ogilvie, C.; Noli, L.; Kolundzic, N.; Khalaf, Y. Human embryos from induced pluripotent stem cell-derived gametes: Ethical and quality considerations. Regen. Med. 2017, 12, 681–691. [Google Scholar] [CrossRef]
  29. Pérez Luño, A. La Tercera Generación de Derechos Humanos; Aranzadi: San Sebastian, Spain, 2006. [Google Scholar]
  30. García Manrique, R. El Cuerpo Diseminado. Estatuto, Uso y Disposición de los Biomateriales Humanos; Civitas-Thomson Reuters: Ann Arbor, MI, USA, 2018. [Google Scholar]
  31. Villalobos Antúnez, J.; Hernández, J.P.; Palmar, M. El Estatuto Bioético de los Derechos Humanos de Cuarta Generación. Fronesis 2012, 19, 350–371. [Google Scholar]
  32. Comfort, N. How science has shifted our sense of identity. Nature 2019, 574, 167–168. [Google Scholar] [CrossRef]
  33. Spanish Patent and Trademark Office. Patenting Software? Rules and Practices at the European Patent Office. Available online: https://www.oepm.es/cs/OEPMSite/contenidos/Folletos/FOLLETO_3_PATENTAR_SOFTWARE/017-12_EPO_software_web.html (accessed on 30 November 2023).
  34. Abbott, R. I think, therefore I invent: Creative computers and the future of patent law. Boston Coll. Law Rev. 2016, 57, 1079–1126. [Google Scholar] [CrossRef]
  35. Gibson, S.; Newman, J. What Happens When AI Invents: Is the Invention Patentable? AI Mag. 2021, 41, 96–99. Available online: https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/7374 (accessed on 30 November 2023).
  36. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). European Union, COM/2021/206 final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206 (accessed on 30 November 2023).
  37. López de Mántaras, R.; Meseguer, P. Inteligencia Artificial; CSIC y Los Libros de la Catarata: Madrid, Spain, 2017. [Google Scholar]
  38. De Asís, R. Sobre la propuesta de los neuroderechos. Derechos y Libertades. Rev. Filos. Derecho Derechos Hum. 2022, 51–70. [Google Scholar] [CrossRef]
  39. Libet, B. Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action. Behav. Brain Sci. 1985, 8, 529–566. [Google Scholar]
  40. Kriegman, S.; Blackiston, D.; Levin, M.; Bongard, J. Kinematic self-replication in reconfigurable organisms. Proc. Natl. Acad. Sci. USA 2021, 118, e2112672118. [Google Scholar] [CrossRef] [PubMed]
  41. O’Brien, J.; Nelson, C. Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology. Health Secur. 2020, 18, 219–227. [Google Scholar] [CrossRef] [PubMed]
  42. Roco, M.C.; Bainbridge, W.S. (Eds.); National Science Foundation. Converging Technologies for Improving Human Performance. Available online: https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/bioecon-%28%23%20023SUPP%29%20NSF-NBIC.pdf (accessed on 30 November 2023).
  43. Nordmann, A.; European Commission. Converging Technologies – Shaping the Future of European Societies. 2004. Available online: https://pure.iiasa.ac.at/id/eprint/12590/1/Converging%20Technologies.pdf (accessed on 30 November 2023).
  44. López Baroni, M.J. El bioderecho y la bioética ante las tecnologías de la sospecha. In Temas Clave de la Filosofía del Derecho y Política: Comentarios Críticos; Tecnos: Logroño, Spain, 2019; pp. 125–150. [Google Scholar]
  45. Berg, P.; Baltimore, D.; Boyer, H.W.; Cohen, S.N.; Davis, R.W.; Hogness, D.S.; Nathans, D.; Roblin, R.; Watson, J.D.; Weissman, S.; et al. Potential Biohazards of Recombinant DNA Molecules. Science 1974, 185, 303. [Google Scholar] [CrossRef] [PubMed]
  46. Pause Giant AI Experiments: An Open Letter. Available online: https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed on 30 November 2023).
  47. Lander, E.S.; Baylis, F.; Zhang, F.; Charpentier, E.; Berg, P.; Bourgain, C.; Friedrich, B.; Joung, J.K.; Li, J.; Liu, D.; et al. Adopt a moratorium on heritable genome editing. Nature 2019, 567, 165–168. [Google Scholar] [CrossRef]
  48. Asilomar AI Principles. Asilomar Conference. Available online: https://futureoflife.org/open-letter/ai-principles/ (accessed on 30 November 2023).
  49. Donaire Sánchez, P. Los Derechos Humanos; pp. 193–199. Available online: http://www.rtfd.es/numero5/15-5.pdf (accessed on 30 November 2023).
  50. Macedo, D. El pensamiento Político de Paulo Bonavides; Instituto de Investigaciones Jurídicas UNAM: Ciudad de México, Mexico, 2013; pp. 971–989. Available online: https://archivos.juridicas.unam.mx/www/bjv/libros/7/3455/42.pdf (accessed on 30 November 2023).
  51. Rodríguez Palop, M.E. La Nueva Generación de Derechos Humanos: Origen y Justificación; Dykinson: Madrid, Spain, 2010. [Google Scholar]
  52. Aquino Britos, A. La Cuarta Generación de Derechos. Democracia Constitucional como Meta-Garantía. Una Mirada desde Argentina. 2018. Available online: https://repositorio.unne.edu.ar/handle/123456789/27813 (accessed on 30 November 2023).
  53. Morales Lamberti, A. Derechos de la naturaleza y Justicia ecológica intergeneracional. Prometeica 2019, 18, 13–23. [Google Scholar] [CrossRef]
  54. Regal, T. Rights-Based Order Under a Multipolar International Regime. IUP J. Int. Relat. 2023, 17, 25–42. [Google Scholar]
  55. Gómez Sánchez, Y. La protección de los datos genéticos: El derecho a la autodeterminación informativa. DS Derecho Salud 2008, 16, 59–78. [Google Scholar]
  56. Bustamante Donas, J. Hacia la Cuarta Generación de Derechos Humanos: Repensando la Condición Humana en la Sociedad Tecnológica. Available online: https://www.corteidh.or.cr/tablas/r22470.pdf (accessed on 30 November 2023).
  57. Nuredin, A. Fourth-Generation Human Rights and the Violation of the Concept of Privacy. Vis. Int. Sci. J. 2023, 8, 9–23. [Google Scholar]
  58. Cova Fernández, E. Derechos humanos y derechos digitales en la Sociedad de la Información. Rev. Derechos Hum. Educ. 2022, 6, 61–80. [Google Scholar]
  59. Aguirre, A.; Manasia, N. Cuarta Generación de Derechos Humanos: Inclusion social y democratización del conocimiento. Telématique 2015, 14, 2–16. [Google Scholar]
  60. Morales Aguilera, P. Entre el prisma discursivo y el ciberhumanismo: Algunas reflexiones sobre Derechos Humanos de cuarta generación. Franciscanum 2018, 169, 39–86. [Google Scholar] [CrossRef]
  61. Riofrío Martínez-Villalba, J. La cuarta ola de derechos humanos: Los derechos digitales. Rev. Latinoam. Derechos Hum. 2014, 25, 15–45. [Google Scholar]
  62. University of Deusto. Declaración DEUSTO Derechos Humanos en Entornos Digitales. Available online: https://www.deusto.es/es/inicio/privacidad/declaracion-deusto-derechos-humanos-en-entornos-digitales (accessed on 30 November 2023).
  63. Llano Alonso, F. Singularidad Tecnológica, Metaverso e Identidad Personal: Del Homo Faber al Novo Homo Ludens. In Inteligencia Artificial y Filosofía del Derecho; Laborum Ediciones: Murcia, Spain, 2022; pp. 189–215. [Google Scholar]
  64. Bobbio, N. El Tiempo de los Derechos; Editorial Sistema: Madrid, Spain, 1991. [Google Scholar]
  65. Ivanii, O.; Kuchuk, A.; Orlova, O. Biotechnology as Factor for the Fourth Generation of Human Rights Formation. J. Hist. Cult. Art Res. 2020, 9, 115–121. [Google Scholar] [CrossRef]
  66. Valdés, E. Bioética, Genética y Derechos Humanos. Análisis de los Alcances Jurídicos del Bioderecho Europeo y Su Posible Aplicación En Estados Unidos Como Fuente de Derechos Humanos de Cuarta Generación; Universidad Carlos III de Madrid: Madrid, Spain, 2013; pp. 139–163. [Google Scholar]
  67. Valdés, E. Bioderecho, daño genético y derechos humanos de cuarta generación. Bol. Mex. Derecho Comp. 2015, 48 (144), 1197–1228. [Google Scholar] [CrossRef]
  68. Kemp, P.; Rendtorff, J. The Barcelona Declaration. Towards an Integrated Approach to Basic Ethical Principles. Synth. Philos. 2008, 23, 239–251. [Google Scholar]
  69. Rendtorff, J. Update of European bioethics: Basic ethical principles in European bioethics and biolaw. Bioeth. Update 2015, 1, 113–129. [Google Scholar] [CrossRef]
  70. Rendtorff, J.; Kemp, P. Basic Ethical Principles in European Bioethics and Biolaw; Centre for Ethics and Law: Copenhagen, Denmark; Instituto Borja de Bioética: Barcelona, Spain, 2000; Volume 2. [Google Scholar]
  71. Cornescu, A.V. The Generations of Human's Rights. Dny Práva 2009. Available online: https://www.law.muni.cz/sborniky/dny_prava_2009/files/prispevky/tvorba_prava/Cornescu_Adrian_Vasile.pdf (accessed on 30 November 2023).
  72. Risse, M. The Fourth Generation of Human Rights: Epistemic Rights in Digital Lifeworlds. Moral Philos. Politics 2021, 8, 351–378. [Google Scholar]
  73. Bracero, F. Artificial Intelligence: ‘AI Fallacies Endanger Democracy’. Interview with López de Mántaras. La Vanguardia, 31 March 2023. Available online: https://www.lavanguardia.com/tecnologia/20230331/8866778/falsedades-ia-ponen-peligro-democracia.html (accessed on 30 November 2023).
  74. Zakaria, S.; Marler, T.; Cabling, M.; Genc, S.; Honich, A.; Virdee, M.; Stockwell, S. Machine Learning and Gene Editing at the Helm of a Societal Evolution; RAND Corporation: Santa Monica, CA, USA, 2023; Available online: https://www.rand.org/pubs/research_reports/RRA2838-1.html (accessed on 30 November 2023).
  75. Urbina, F.; Lentzos, F.; Invernizzi, C.; Ekins, S. Dual use of artificial-intelligence-powered drug discovery. Nat. Mach. Intell. 2022, 4, 189–191. [Google Scholar] [CrossRef]
  76. Yuste, R.; Goering, S.; Arcas, B.A.Y.; Bi, G.; Carmena, J.M.; Carter, A.; Fins, J.J.; Friesen, P.; Gallant, J.; Huggins, J.E.; et al. Four ethical priorities for neurotechnologies and AI. Nature 2017, 551, 159–163. [Google Scholar] [CrossRef]
  77. Dolin, V.A. Concept of Fourth Generation of Human Rights: Attempt of Philosophical and Anthropological Rationale. Антиномии 2018, 18, 7–20. [Google Scholar]
  78. Pérez Luño, A. El posthumanismo no es un humanismo. Doxa. Cuad. Filos. Derecho 2021, 44, 291–312. [Google Scholar] [CrossRef]
  79. Herrera, A. El Principio De Precaución Como Fundamento De Los Derechos De Cuarta Generación. In Globalización y Derecho. Una Aproximación Desde Europa y América Latina; Lima Torrado, J., Olivas, E., Ortiz-Arce, A., Eds.; DILEX: Logroño, Spain, 2007; pp. 269–290. [Google Scholar]
  80. Jonas, H. The Principle of Responsibility; (original title Das Prinzip Verantwortung, Insel Verlag; translated by Javier Fernández Retenaga); Empresa Editorial Herder SA: Barcelona, Spain, 1995. [Google Scholar]
  81. Tabara, D.; Polo, D.; Lemkow, L. Precaución, Riesgo y Sostenibilidad en Los Organismos Agrícolas Genéticamente Modificados. Política Soc. 2003, 40, 81–104. [Google Scholar]
  82. European Commission. Communication from the Commission on the Precautionary Principle. Brussels, 2.2.2000. COM(2000) 1 final. Available online: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2000:0001:FIN:en:PDF (accessed on 30 November 2023).
  83. Judgment of the Court of Justice of the European Union (Grand Chamber) of 25 July 2018, Case C-528/16. Available online: https://curia.europa.eu/juris/liste.jsf?num=C-528/16&language=EN (accessed on 30 November 2023).
  84. Directive 2001/18/EC of the European Parliament and of the Council of 12 March 2001 on the Deliberate Release into the Environment of Genetically Modified Organisms. Available online: https://eur-lex.europa.eu/eli/dir/2001/18/oj (accessed on 30 November 2023).
  85. Animal and Plant Health Inspection Service (APHIS); USDA. Final Rule: Movement of Certain Genetically Engineered Organisms (2020 Rule). Fed. Regist. 2020, 85, 29790–29838. [Google Scholar]
  86. Singer, P. De compras por el supermercado genético. Translated by Julio Seoane. Isegoría 2002, 27, 10–40. [Google Scholar]
  87. Álvarez, R.G. Aproximaciones A Los Derechos Humanos de Cuarta Generación. Available online: https://www.tendencias21.es/derecho/attachment/113651/ (accessed on 30 November 2023).
  88. Cecchetto, S. ¿Una ética de cara al futuro? Derechos humanos y responsabilidades de la generación presente frente a las generaciones por venir. Andamios 2007, 3, 61–80. [Google Scholar] [CrossRef]
  89. UNESCO. Recommendation on the Ethics of Artificial Intelligence. 2021. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000381137 (accessed on 30 November 2023).
  90. Committee on Artificial Intelligence (CAI). Consolidated Working Draft on the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Strasbourg, 7 July 2023. Available online: https://rm.coe.int/cai-2023-18-consolidated-working-draft-framework-convention/1680abde66 (accessed on 30 November 2023).
  91. UK Government. The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023. Available online: https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023 (accessed on 30 November 2023).
