Article

Reluctant Republic: A Positive Right for Older People to Refuse AI-Based Technology

Department of Communication Studies, National University of Political Studies and Public Administration, 012244 Bucharest, Romania
Societies 2023, 13(12), 248; https://doi.org/10.3390/soc13120248
Submission received: 7 October 2023 / Revised: 25 November 2023 / Accepted: 27 November 2023 / Published: 1 December 2023

Abstract

Societies in the global North face a future of accelerated ageing. In this context, advanced technology, especially that involving artificial intelligence (AI), is often presented as a natural counterweight to stagnation and decay. While it is a reasonable expectation that AI will play important roles in such societies, the manner in which it affects the lives of older people needs to be discussed. Here I argue that older people should be able to exercise, if they so choose, a right to refuse AI-based technologies, and that this right cannot be purely negative. There is a public duty to provide minimal conditions to exercise such a right, even if majorities in the relevant societies disagree with skeptical attitudes towards technology. It is crucial to recognize that there is nothing inherently irrational or particularly selfish in refusing to embrace technologies that are commonly considered disruptive and opaque, especially when the refusers have much to lose. Some older individuals may understandably decide that they indeed stand to lose a whole world of familiar facts and experiences, competencies built in decades of effort, and autonomy in relation to technology. The current default of investigating older people’s resistance to technology as driven by fear or exaggerated emotion in general, and therefore as something to be managed and extinguished, is untenable.

1. Preamble

One sunny morning, a couple walking their dog on the shores of the island of Reluctant Republic witnessed something extraordinary. Bright geometrical shapes were growing on the northwestern horizon, and soon the shapes became huge ships, which presently dropped anchor not far from the white pebbles of the Beach of No Return. The couple alerted the other islanders, and they gathered to meet the newcomers. By the time the locals elected a welcome committee and descended to the seaside with yesterday’s cookies, the strangers were already inspecting, with poorly disguised skepticism, the unkempt pebbles and anachronistic shells under their feet.
A polite conversation followed as these strangers spoke fluent Republican, although adorned here and there with Unknown Terms, which they seemed to enjoy greatly, making sure to use at least one per sentence. After meteorological circumstances were exhausted, important things began to be said. It became clear that while the islanders were a people of their rock, the strangers were accustomed to always traveling and calling everywhere home. As their voices began to eclipse those of the locals, darker threads filled the otherwise well-mannered distance between the two groups.
The welcome committee silently reconsidered the concept of welcoming as they learned that these strangers did not see themselves as guests and had no intention of leaving. Pedagogically towering over the local audience, the nomads explained that this was but one more island in a myriad of essentially similar islands and that they intended to finally give it the attention it deserved. They even apologized for the delay—there had always been demand elsewhere, but also a certain failure of the Archives, which, for a while, forgot about the very existence and nature of the Reluctant Republic. Well, past failures were to be fixed. There was a tangible air of self-satisfaction as they expressed their view that, while their lives were many decades long and relatively painless, and the islanders’ much shorter and often touched by illness, still the locals could now finally live Full and Meaningful lives, which meant lives such as the strangers liked to live. There was, These Days, no need to suffer unkempt pebbles and antediluvian shells because so much better was at hand. They pointed to their ships and landing craft, full of lights against the darkening sky, and then they presented their Gifts, accompanying the ceremony with well-weighted click sounds.
Each of the islanders gathered on the shore was given a Device. There was also a clear suggestion that the absentees should get their Devices as soon as possible. The newcomers explained that such Devices were already a Fact of Life on all the other islands, their usefulness nothing short of miraculous. Most needs would soon be satisfied by simply redirecting one’s thoughts to the Device. Escaping from boredom, loneliness, or pain, getting food or medicine, lifting heavy items—the Devices would make all effortless—almost. To enjoy all these Favors, only a small sacrifice of Changing their Ways was required of the locals. Not that at this point, the strangers noted with empathetic smiles, there was much of a choice. They had come to communicate Necessity.
There was silence now, which meant the Sermon was over. The locals smiled back, and then reminded the newcomers of the meaning of their island’s name. They said “no”.
Future Historians, missing actual records after this moment, have, as is well known, been reduced to debating the nature of that refusal. Perhaps it was just an expression of infantile panic in an already decaying society, unable to grasp its predicament or what it was offered. Or was it that other variety of fear, which raises its head recurrently, pretending it is original, each time New Things supersede Old Ways? It could have been worse: not panic, but calm, cold entitlement—privilege being what privilege is. It remains a task for Future Future Historians to investigate why not many of their predecessors thought of Reluctant Republicans as within their Rights when they said “no” to the strangers.
*
The sketchy thought experiment above is an invitation to think about old age and technological change in a manner that does not default to delegitimizing rejection of new technology. Indeed, I want to sketch some reasons for looking at such a refusal as a right. Many details will have to be left out; still, I think a basic picture will become reasonably clear as we go on. This is a picture in which those older individuals who reject digital technology, especially that involving artificial intelligence (AI) (see the next section for a clarification of the term “AI”), may do so as a matter of enjoying a right of refusal which should include at least some minimal positive components. That is to say that this right will generate corresponding obligations that, at least in principle, go beyond letting be. The physiognomy of such obligations should not be confused with what is typically offered as a benefit or indulgence to those older persons who navigate the world of tech with difficulty. They will be obligations stemming from a “no” taken at face value, not the pervasive yes-buttery of institutional and corporate PR; not somewhat larger fonts.
It is worth insisting on taking this kind of refusal seriously to indicate its weight and nature and the injustice of its casual dismissal—which tends to be the rule currently. I should stress, however, that I do not intend to make a sociological point, suggesting, for example, that most older individuals are skeptical about recent technology. The matter is one of principle. Let me explain.
The old are a vast and diverse demographic, not a group or culture in any usable sense. Still, one way to look at old age is as a world of its own—or rather a series of related worlds—within the larger adult society. This need not take sophisticated phenomenological meanings, though there can be little doubt that specific experiences are part of what constitutes such worlds. What the experiences are experiences of should also be included, and this extends to technological implements and the habits sedimented around them. The reader could also think in the admittedly flexible terms of Wittgensteinian “forms of life”. What matters is that the term “world”, as used here, carries the idea of precarious—and ultimately untenable—autonomy. An autonomy that still needs to be respected, even if it fades. While the process of ageing is, by its very nature, a process of being displaced, of seeing one’s world gradually evaporate, older people, especially in the global North, are now faced with accelerated, and thus potentially brutal, uprooting, driven by technology and associated social mores. In rapidly ageing societies in which digital technology and tech capital are dominant forces, they face an imperative to adapt by adopting current technologies, especially those driven by AI, into their lives. The question is whether they have a serious claim to a right to ignore or reject this imperative.
The question has to be faced, even if the older people resisting technology are or become a small minority, and even if one is (tacitly) inclined to regard that minority as backward, reactionary, or selfish. Perhaps most older individuals are, in fact, motivated by a sense of belonging to whatever carries the fragrances of youth. Perhaps they truly want to give up the past and its endless trail of broken objects, obsolete technology, and gloomy memories of dead people—and just be creatures of now. If that is the case, then it will likely help, say, to train AI on inclusive, ageism-controlled data sets or to have a robot in the home that is friendly, reliable, and cheap, and to program it to deliver small talk, jokes, and dementia-retarding memory games. Paint as rosy a picture as one may like, it will not, however, realistically eliminate the possibility that some older people will not want to partake in this game. For them, the problem will not be to make technology friendly because they desire no such friend. They do not want it in their world.
The allusion in the story above to colonial aggression dressed as mission civilisatrice should be read in this light. It should be unambiguous that there is a right to resist, to hold on to one’s world. I am aware this is a problematic analogy to make, and it is not my intention to make the application of the concept of colonization cheap or to treat colonization as an off-the-shelf metaphor. Still, I think this may be an imperfect way to focus attention on a particular agglutination of processes. One can ignore the allusion as long as one keeps in mind that these processes involve, simultaneously, displacement and replacement of people and of things invested with meaning, discourses that euphemize loss and mask the exercise of power, and exploitative commercial strategies.
It is part of this logic, for example, that there is a palpable rush to make old age and ageing technological problems with technological solutions, and that it comes with a particularly offensive, bland, chatbot-like discourse of sunlit horizons. Briefly, the old should give up their world for the promise of a better world offered by enlightened younger individuals. The submerged side of this image is less pleasant and usually merely gestured towards. They should give up their world because, like hoarders, they live among ruins that inconvenience the neighbors. They should give up their world because, frankly, their property rights expired last year—no country for old men—and they should not be squeamish about what the generosity of others leaves them with. They should let go because that is absolution from the sin of becoming a burden.
This is morally scandalous, not because no older person would naturally want to embrace new things, including AI-driven technology, with all its advantages and drawbacks. Again, perhaps most would, and who is to doubt their reasons? Nonetheless, this amounts to imposition from the outside because what counts as a reasonable choice is predefined. One knows from the outset and the outside what is rational and admirable and what is not. What a good life looks like in its final episodes. One should want to get old like that tanned guy climbing the Himalayas, his smartwatch glimmering in the thin air, not like that grumpy fellow dusting 1980s magazines in a small apartment. Whoever does not want that is to be educated because it must be, in the final analysis, a version of ignorance that motivates their refusal.
I think there is a right to resist this possibly benevolent remaking of old age, and that it is substantial. Older people have a right to refuse technologies that other individuals—including overwhelming majorities in a given society—think are helpful, safe, easy to use, efficient, cost-saving, life-saving, unavoidable-given-the-context, etc. The old have this right even if, by others’ lights, they are misinformed, selfish, or unreasonable in their refusal. They have this right even if, in practice, there are no means to currently satisfy it adequately (it cannot be a purely negative right).
This is a risky proposition, and I think it is best to lay it out without the usual hedging, even if I will not be able to provide it with a full defense here. The sketch that I do present is meant to at least erode the “certainties” that paint especially AI-driven technology as a natural ally of old age. I proceed as follows: I begin from afar by discussing general attitudes toward technology, especially the relation between fear and rejection of technology. Fearing technology may be a poor reason to resist it, but not all refusal should be attributed to fear. I then focus on issues generated by technologies that use AI and a series of reasons that older people may invoke to resist AI. I will suggest that proposed remedies to the drawbacks of AI, such as safeguarding fair representation in datasets, do not justify making such technologies quasi-mandatory. Representation is one thing, participation is something else. A right to refuse participation should not be understood, at least in the case of older people, as merely protection from interference. I will present a brief defense of a positive right to refuse AI-driven technology, that is, a right that will involve putting in place some minimal provisions for refusers. To put things into perspective, I will conclude with a set of analogies, which invite comparison between refusal of technology by older people and situations such as refusal of medical treatment or refusal of technology for religious reasons.

2. Refusal of Technology as Rational Attitude

One typical perspective from which resistance to (or refusal of) technology is discussed merges social science and social engineering. The default and often tacit starting point assumes that the benefits of technology are obvious, therefore, resistance is (1) something to be explained—a scientific task—and (2) something to be managed and overcome—a social engineering task. Models of “technology acceptance” such as the Technology Acceptance Model—TAM [1], the Unified Theory of Acceptance and Use of Technology—UTAUT [2], or the Technology Readiness Index—TRI [3] are generally deployed following this logic. Within and beyond such schemata, behavior is generally modeled in order to facilitate acceptance or mitigate resistance. Bluntly put, rejection of technology is something to be extinguished. Understanding resistance as justified, and thus as a way of acting with a legitimate claim to endure, is comparatively rare. This is even more so in the case of populations suspected of ignorance or excessive emotionality, such as the old or the ill.
Here are a few run-of-the-mill examples:
“In the Knowledge, Attitude, and Practice Model (KAP), knowledge is the base, belief is a motivator, and the formation and changing of behaviors is the ultimate goal. We design a subjects-centered program that starts with lots of hands-on, engaging, game-like experiments to inspire learning motivation, improve subject knowledge, and overcome technophobia.”
[4]
“Resistance to new technologies or procedures should be recognized by medical professionals and their customers. Once recognized, these forms of resistance can be overcome by carefully planned and appropriate interventions.”
[5]
“[N]on-digital adults seemed to experience computer-related anxiety, making them feel technophobic or unconfident regarding digital solutions. In other words, their inclusivity in digital living is inadequate and likely affects their quality of life. [….] Future development of this research will involve […] formulation of innovative educational and clinical pathways.”
[6]
“[W]e need to ensure that older adults understand the multiple benefits of [mobile-assisted language learning] for their lives, as for seniors to adopt an innovation, those benefits must outweigh the effort required to learn using it.”
[7]
This is all reasonable as an expression of a general stance. The assumption that technology is a force for good is largely justified, historically, and the questioning of technological progress ad nauseam is a dangerous vice. Nonetheless, it would be equally delusional to imagine that all technological developments are inherently benign or that any agenda hitching a ride on them is itself a manifestation of progress. Some innovations are bound to generate disruptions and risks, and digital technology, especially in the form of AI, seems to belong in this category. There is thus nothing irrational, as things stand, in rejecting it; not that this is, or should be, everyone’s preferred choice. Indeed, there are good reasons, including age-specific good reasons, to refuse to join the trend. I will return to this shortly.
There is then the graver danger posed by grander ideas associated with such hazardous technologies. Engineering acceptance of technology has the potential to save communities or to make them thrive. It can also develop nasty prescriptive habits. For example, persistently helping older people overcome putative technophobia may begin to look suspiciously similar to requiring them to use technologies and devices against their judgment and will. Circumventing their agency by overdiagnosing fear and ignorance cannot be defensible, especially since what they are asked to sacrifice is not nothing. A world with no cash, no paper documents, and no cashiers, no call center operators and no taxi drivers, little privacy and little physical presence may be, for some demographics, a world—their world—erased. I suspect it is so for numerous older people, and individuals in other age groups may share their perspective.
One can take note of obvious facts, such as the challenge posed by the abrupt ageing in the global North [8], and have few illusions about advanced technology, in the form of AI, becoming ubiquitous and permeating old age [9,10,11,12], without accepting the imposition of norms in such matters. Prescriptions, including an imperative to embrace technology, are to be discussed in their own terms and logic; they cannot be mere conclusions from facts. Even if the right of refusal that I defend here turns out to be permanently infringed upon, for example, due to financial constraints, reminding ourselves of such bracketed rights will still have its uses. It could, for example, help older people negotiate better terms in their surrender to the promises of digital apparatchiks and the whims of tech bros.
If one must, one could also describe my argument as an invitation to reframe refusal of technology from an attitude largely motivated by fear in a context of ignorance (an emotion, therefore, a token of irrationality) to one which may just as well be a valid conclusion from commonsense beliefs and desires (as rational as any commonly held position reached by inference). Reframing would mean that one does not start from problematizing rejection but actually looks at the reasons that motivate it, assuming instead that one deals with rational adults in the exercise of their rights and in the possession of their cognitive faculties.
This is not to deny the fear of technology in the old. What I meant above by its “overdiagnosis” is the tendency to understate the importance of the following fact: One may fear technology and have good reasons to fear it and have other good reasons, besides fear, to reject it. While technophobia in the old continues to be extensively researched, e.g., [6,13,14,15,16,17], some researchers have noted that the assumption of fear may be part of a stereotypical portrayal of the old, and thus a driver of ageism [18,19], and others have suggested that older individuals are not always more anxious or passive about technology than younger people [20,21,22]. Moreover, the literature does provide evidence of utilitarian reasons for adopting technology, such as its (perceived) usefulness or ease of use [23,24,25,26] which ipso facto are also reasons to reject technology. Perhaps even more interesting, and less researched, are what one may call ideological or value-based reasons to refuse technology.
Resistance to technology for ideological reasons may be thought of as a spectrum, ranging from token gestures, such as digitally advertising one’s imminent absconding on a digital detox retreat, to more active or militant “digital disconnection” [27,28,29], which can emphasize responsible decision-making [27,30] and take unmistakably political forms [31,32]. While the latter phenomenon mostly concerns digital media, it seems safe in this context to extrapolate to the technological ecosystem that underlies digital media, AI now included. After all, disconnection sometimes means refusing even the relevant physical devices. For example, Rosenberg and Vogelman-Natan [33] have described “ideologists” among mobile phone refusers. One could compare this with older attempts to modify or domesticate devices, as was the case with the so-called “kosher phone” [34].
Older people may themselves be among such dissenters, or they can be ideological opponents along different lines. Their refusal is probably shaped more often by relative powerlessness than by militancy. Older individuals tend to have less of a public voice, less money and fewer powerful connections, less tech and media savviness, and less of the good health needed for campaigning. Even under this assumption, one needs to take seriously the possibility of a refusal driven by fundamental ideas and values that are reasonable, not extreme or bizarre.
Are the reasons, then, that some older individuals may invoke to refuse digital technology, and especially AI, good reasons? Let us focus specifically on AI in its current understanding, i.e., deep learning software that is based on neural network architectures and which capitalizes on advancements in processing power and the availability of massive data sets to establish and manipulate patterns, with applications such as natural language processing or artificial vision. Large language models (LLMs) are currently a topic of popular discussion, but AI applications are already legion, from client service chatbots to autonomous vehicles, from visual background fillers to medical diagnostic systems. Future developments are likely to be more powerful and challenging.
Other generations have, of course, faced technological changes in their time, but the pressure to adapt experienced nowadays is, in an important sense, unprecedented when it involves AI-driven technologies. This has to do with the pace, pervasiveness, and power of technology but, more to the point, with its very nature. Living with tech takes on a different meaning when, quite literally, it implies sharing a world with things that have an agency of their own, or even deferring to the agency of those things. Great expectations and dystopian sci-fi scenarios aside, older opponents could invoke problems such as the following:
1. Opacity
The very nature of AI, in the sense sketched above, implies that its workings are opaque. They allow for mathematical description, but this is not typically translatable into common terms. How an AI system does its thing will remain, in this sense, incomprehensible even to its programmers. Compare witnessing oneself speak with thinking of what one’s brain may be doing in the background to “support” speaking. The latter may be described in neurobiological terms but remains opaque to an ordinary observer and impossible to translate into “folk psychological” terms. It is in such terms that we become transparent to each other, and extending them to agents that are completely unlike ourselves is problematic, to say the least. Thus, one cannot claim to comprehend how, e.g., a generative AI system works just by interacting with it—that is, from its behavior. Its use of language may be very human-like, but its workings will still be opaque.
AI opacity poses obvious challenges when decisions reached by, or with the help of, AI systems have major consequences. Thus, opacity is often discussed in the medical context [35,36,37,38] or in the context of autonomous driving [39,40]. It should be clear, however, that opacity is a general issue inherent in this kind of technology. As such, it is a legitimate ground to resist AI-based applications. Typically, people prefer to know why other agents behave as they do and do their best to reduce opacity. This is a meaningless effort with AI agents. To be told, for example, that GPT-4 works by adjusting 1.76 trillion parameters will do nothing in this respect. It is thus perfectly understandable that some people will be skeptical about a technology they cannot, as a matter of principle, understand. Sociologically, perhaps more older people will see things this way, but this has nothing to do with older individuals being incompetent about technology. Indeed, it may reflect a firmer grasp on what AI is and is not.
2. Bias
AI, in its current conceptualization, is software that learns from very large collections of data—mostly text but also images and other formats. Aside from how learning algorithms themselves are designed, what is present in the datasets from which AI learns is crucial. Misrepresentations and biases in the data, especially if they are prevalent, are likely to be perpetuated in the autonomous behavior of the AI, sometimes in less than obvious manners. Worries have been raised, for example, in matters related to gender bias [41,42,43] or racial profiling [44,45,46].
Predictably, the same kind of concern can take as its object bias in the form of ageism. Indeed, there are analyses that problematize the development of AI technologies from the standpoint of the representation of old people and old age [47,48]. If older people are not adequately represented in datasets, or their traces are peripheral, then AI will, by necessity, be blind or biased toward these individuals. The opacity of function discussed above, and the added opacity of dataset acquisition, management, and training procedures, aggravate this problem. An emerging field—critical dataset studies—is targeting precisely this area [49], as is the expanding debate on AI ageism [50,51]—a division of the larger digital ageism literature [52].
Older people may thus reject AI because of proven and probable biases directed against them—an age-specific reason for refusal. They would, moreover, be entitled to be skeptical about proposed remedies. Even curated datasets will likely be curated beyond their control by entities and people who do not share their station in life. How responsible corporate and state behavior can be in such matters remains an open question, given the intense competition [53], the sheer volume of data, in which even pirated books of famous authors can get lost [54], and the incentives to allow as much data as possible in datasets—even the open internet, with its armies of trolls [55]. This is admittedly only the darker half of a more nuanced image. On the very optimistic assumption that all these issues will be fixed and representation will be adequate, older people may still justifiably refuse participation. More on this below.
3. Cost
I have mentioned utilitarian reasons for embracing or rejecting technology above. If, for example, a technology is hard to use or has a steep learning curve, that will enter a cost-benefit analysis and may lead, justifiably, to rejection. One could reply that AI only makes the digital world more accessible and intuitive. Using natural language instead of programming languages to interact with devices is a prime example thereof. Perhaps, but cost can have a more age-specific meaning. Learning is more difficult for older people, and what may seem intuitive to a “prompt engineer” could be confusing for another person. The same goes for “affordable”, “easy”, “fun”, “safe”, “reliable”, etc. Even the psychological cost of accepting change, or the need to learn, may be substantial since it will involve accepting interactions with autonomous artificial agents.
There is then a simpler and deeper question to ask in this context. Why should an older person use her or his time, of all things, for an effort to understand and live with the latest tech? There is a trivial answer to that: because societies are making it increasingly hard to do otherwise. But the question is not intended to be trivial. As one ages, time takes on different meanings. Its finite character becomes conspicuous. One needs to preserve a life of sense in a shrinking and eventually collapsing bubble of time. For some older individuals, investing their last decades in something that matters to them could mean something other than adapting to a world changed by AI and becoming second-rate cogs in a machinery they have no control over. If one has fewer or no second chances, trudging toward something irrelevant will have incalculable costs.
4. Loss
AI is routinely recognized as disruptive, and often, this is done in a celebratory fashion, as if breaking things were the royal road to progress. An example is the legion of reports on the future of work in which the potentially massive loss of employment is treated as an acceptable nuisance [56,57,58]. As noted above, change has costs, but costs are not distributed equally. Vulnerable categories carry most of them. How believable can empathy and concern be when, for example, some of the last stable, skilled, and well-paying professions for blue-collar workers (long-haul driver, welder, etc.) are marked for elimination?
In the case of older people, things more fundamental than a job are at stake, so it would be euphemistic to talk merely about costs. AI is already in the process of remaking most of their experiences: how they communicate with others, how they shop and pay utilities, how they manage their finances, how they get their news, how they vote, how they travel, how they access healthcare services, how they are assisted at home or in a home. A whole world is being replaced, which means a whole world is lost. This is not to claim that the lost world was better—which would be meaningless rather than false—but it is to claim that it was home to those who built it. A loss of this magnitude cannot be simply brushed aside, even when those same older people have been persuaded to be polite about it in public. It is reason enough to refuse to partake in what replaces one’s native habitat, especially when one is fragile and cannot conjure any immovable objects in the path of change.
5. Worldview
There could be many other sensible motives for older people to reject AI-based technology. I will only add one more here, as an extension to the discussion above and as a potential umbrella category for a variety of ideological reasons. Older people may not only mourn their world. They may also actively dislike what AI is doing to the new world and to humanity. What could justify such a perspective? The same grounds that justify it for younger critics. They may suspect that AI is contributing to the loss of privacy and individuality, that it widens already pathological differences in wealth and chances, that it is aggravating political polarization, that it transforms democracies into empty, procedural hulls, that it weakens bargaining positions for most employees, that it invites epidemic deskilling, that it dehumanizes interactions, and that it blurs moral and legal responsibilities. These are all debatable claims, but they are not outlandish. Some have even been defended by tech moguls [59], who have much less to lose if this kind of criticism is true. Now, if such views carry weight when they are expressed by other demographics and by self-proclaimed leaders, they should also be taken seriously when voiced by the old.
None of these reasons is conclusive in the sense of justifying a rejection of AI and related technology across contexts. But that was not the point. What needs to be rejected is only the default construal of older refusers of technology as individuals dominated by childish fears that can be alleviated with a quick dose of adult supervision. Refusal of AI—by this demographic, and by others, for both age-specific and general reasons—is, in principle, as rational as any other common attitude toward technology. It is not its faulty rationality that makes it a minority position—assuming that that is indeed the case.
Even as a minority view, this stance toward current technology is worthy of protection. Democracies are protective of even obviously false ideas when those ideas are important to people and not a grave danger to others. No one would thus deny that an attitude that could well be right has the right to manifest itself and endure. To defend a right to refuse AI may accordingly seem redundant. There is no gun held to anybody’s head. Young or old, just walk away and ignore the tech; no one will stop you. What this rejoinder expresses is a presumably undisputed negative right. However, even this negative right is already being eroded, and, more importantly, it is not the kind of right democracies should establish in this area. What is needed is a positive right to refuse technologies such as AI.

3. A Positive Right to Refuse AI-Based Technology

In his classic essay on negative and positive liberty, Isaiah Berlin contemplates the dangers of a positive conception of liberty pushed by the darker currents of a rationalist conception of human beings. If negative liberty is liberty-from, and, as such, tries to define areas of non-interference, positive liberty is liberty-to—and may come to mean liberty to behave rationally, even against one’s will or wishes. One may be freed from oneself, if need be, by those who have self-declared privileged access to one’s rational core. The question has been historically recognized and acted upon
“of how, in practice, men were to be made rational in this way. Clearly, they must be educated. For the uneducated are irrational, heteronomous, and need to be coerced, if only to make life tolerable for the rational if they are to live in the same society and not be compelled to withdraw to a desert or some Olympian height. But the uneducated cannot be expected to understand or co-operate with the purposes of their educators” [60]
I have, in fact, borrowed above from this classical liberal preference for a negative conception of liberty when I criticized the view of older people as ignorant, child-like, and in need of education. A discussion in terms of kinds of rights is not equivalent to one in terms of kinds of liberty, but the distinctions are not fundamental in this context. We can approximate negative liberty as the enjoyment of negative rights, with more or less the same equation for positive rights.
Is there a negative right to refuse technology? As already mentioned, I take this as generally undisputed, perhaps with some provisos. As with all rights, this cannot be seen as absolute but as balanced by other rights and by the rights of others. For example, individuals with a history of violent offenses may be coerced to wear tracking devices in order to enforce the right of others to be free from violence. Such exceptions aside, democratic societies have not, historically, made technologies mandatory. Opting out may be hard or frowned upon, but it is not forbidden, even in cases where there may be consequences for others. An individual may, for instance, routinely refuse a technology that can save her or his life, such as a pacemaker, even if this can marginally raise the risk for others, as in the situation of having a heart attack while operating machinery. In the specific case of older people refusing AI-driven technology, even this presumably benign and established negative right seems to be challenged.
The point is not that life is made difficult for those who refuse technology. That is certainly the case, but it bears on the positive right to refuse technology, which I will discuss below. What may lead to an infringement of the negative right is the fact that older people are not, in fact, left alone. Refusal in their case is seen, as I suggested already, as a legitimate area of public intervention. This can range from research grants for academics to education and training programs, persuasion campaigns, introducing infrastructure in the relevant communities, or creating institutions and bureaucracies tasked with preparing the growing elderly population for life with tech. As well intended as all these may be, they may have the combined effect of cornering those older persons who simply want no part in this. It may be too strong to call this an infringement of rights, but I think it is clear that the private space for refusal is gradually eroded.
Let us now turn to what is inherently more contentious. Is there also a positive right to refuse technology? This would mean that, in addition to not interfering in cases of refusal, there are also obligations to provide proper conditions for refusal—to make it, for example, an affordable choice. Many political and legal thinkers have been skeptical of positive rights. Berlin, in the example above, expresses a moderate position, but it is not unheard of to simply dismiss such rights [61] or to reject them in context [62]. Still, most positions admit that at least some rights have positive components, i.e., they entail obligations to act, for example, obligations to protect and help [63]. Should this be the case with the right to refuse technology?
I suggest that we look at this question through the lens of a metaphor I used above, that of older individuals being natives of a world or collection of worlds. I think these are worlds worthy of a level of minimal protection since contemporary tech, and especially AI, threatens to erase these worlds while their natives are still alive. It is not enough to refrain from interfering because that will make refusal of technology unbearably costly or reduce it to a merely theoretical possibility. Societies should be active in preserving the worlds of the old to a reasonable extent for at least the following reasons:
1. Older people have a right to be at home in the world
Older individuals are world-builders, not free-riders in the world of the young. One may be critical of outdated ways of life, anachronistic ideas, or unfashionable technologies. Still, one has to recognize that all these are fruits of work and embodiments of meaning. They are also part of what continues to give meaning to the lives of some older persons by keeping realities minimally familiar. Societies should recognize the importance of this fact, as they do, for example, in the case of archaic (sub)cultures, such as the “uncontacted peoples” of the Amazon.
What could this mean in practical terms? Should older individuals be entitled to ask, for example, that landlines and rotary dial phones be maintained with taxpayers’ money? Struggling to make things sound absurd may backfire. For example, landlines still have their advantages, especially in emergency situations. We are not, in fact, in the dark when it comes to positive rights of this kind, and we have conceptual frameworks, such as intergenerational justice [64], that could be extended to deal with distributive arrangements between, as it were, contemporary generations. Societies generally provide what they, in a context, think is reasonable or what different parties regard as an acceptable compromise. In this case, they may provide or maintain familiar alternatives to AI-driven technology. For example, they may make it easy for older people to speak directly to a person when contacting utility companies instead of forcing them to interact with a chatbot. They may provide the option of not having one’s medical data fed into AI systems—for diagnostic, insurance, or other purposes. Or they may set a default of accessing basic online services, such as messaging, email, or search engines, without data being collected for further analysis by, or training for, AI algorithms. Nothing exotic or enormously costly needs to happen to have a positive right of refusal in place and then to adjust it to accommodate everybody’s interests.
2. Older people are vulnerable and deserve support
Provisions such as those suggested above may even be general desiderata, things that not only older people may want to claim as a right. Societies, however, do not have limitless resources. Some positive rights will have a narrow scope, and they will only be enforced for the most vulnerable, for those who have the most to lose, and for those who are most deserving. Old age matters here as a proxy for vulnerability, and there may be other vulnerable categories in the context of social change driven by technology. To vary examples, take the phenomenon of “cashless exclusion” [65,66]. The old are affected, as are the poor, the least educated, migrants, and other categories. Ideally, something will be done for all those affected, but that may not be feasible. If the latter is the case, one will need to consider unpleasant questions, such as “who is hurt the most?”, or “who is more worthy of support?”. I want, thus, to leave it open whether rejection of technology, specifically of AI, should be protected for every kind of refuser.
In a scenario of scarcity—which tends to be the rule—it is important to understand that older people have important things to lose. Short time is precious time. And a whole world, built in long decades, is not a bagatelle. As sensitive as the issue may be, one also needs to stress that they are at least as deserving of support as other vulnerable categories. Those long decades have, after all, been spent laying the foundations for the relatively long, healthy, and prosperous lives which continue to be the rule in the global North.
3. Preserving the world of the old does not hurt others
Older people are sometimes painted, indiscriminately, as ridiculously entitled and close to intolerable: lifelong polluters and oppressors who continue to be simpletons (xenophobic, technophobic, chronophobic). This version of ageism needs to be denounced, not answered. Still, some worries about accommodating older refusers of technology need to be taken seriously. Is not preserving a world for the old, with its obsolete trinkets, hurting others? Is not refusing technologies of proven effectiveness ipso facto an antisocial action? Antisocial attitudes cannot be elevated to the status of rights.
It is true that positive rights entail costs for others. Fulfilling a legitimate obligation and being hurt are different things, however. Providing familiar alternatives to AI technology for some older people will not deprive affluent societies of a future. What would be provided for is not the exercise of risky or violent behavior or of hurtful attitudes. It is simply the partial continuation of benign forms of life for those who prefer it. Moreover, this needs to be properly contextualized. In the relevant societies, many clearly risky and even hurtful behaviors are protected. Often, this is done to avoid worse outcomes, as is the case with providing clean needles to drug users or with police cordons for neo-Nazi marches. Otherwise, ways of doing things are protected because, bluntly put, some people like them. Some European highways have no speed limits, despite the elevated pollution this causes; some towns make room for noisy music festivals, even if many locals protest the noise. Older people refusing a technology with debatable credentials should not be treated worse than troublesome drug users or petrolheads.
There may be good reasons to criticize the refusal of AI, especially by people susceptible to loneliness and illness. Some may label it irrational, and this may be true in some cases, even if I presented some reasons against this being generally true. But a positive right for older persons to refuse AI should not depend on what others think of their rationality. Older people refusing technology, even if irrationally, are not on par with, say, antivaxxers. Irrational behavior in such a case is a danger to others because it threatens herd immunity. There is no danger to society as a whole if some older people continue their lives without AI. It is implausible, in any case, that irrationality can be attributed to a majority of older refusers as if this were a case of folie à plusieurs.

4. Some Analogies

I will conclude by looking at a few other situations of technology rejection. The case of AI-driven technologies that are de facto imposed on older individuals will be clearer against this background, in the sense that the pedagogical stance that propels prescriptions here will look particularly dubious. The relevant situations are those, as already suggested, in which resisting particular technologies does not amount to a danger to or to an infringement of the rights of others. These others, even if displeased, have accordingly no grounds to make life harder for the people who resist technology.
I begin the analogies with cases that are both structurally close to that of older people refusing AI-driven technology and also relatively straightforward morally, and then I look at an extreme example. The omnipresence of technology may obscure from the public discourse the fact that there are minorities who resist it. These can be religious minorities, such as the Amish in the United States, or ideological minorities who are critical of technology for principled but non-religious reasons and act accordingly in decisive ways. I have already mentioned such ideological refusers in Section 2 above. They may practice systematic disconnection or refuse to own devices, such as smartphones, to the extent that their worries are mainly related to digital tech. Of course, other kinds of worries and other ideological inclinations may lead to the rejection of other technologies. One may refuse to own a car or to travel by plane due to ecological considerations.
To keep the analogies relevant, the situation of such groups needs to be distinguished from a more pervasive phenomenon, which I have already mentioned. For obvious reasons having to do with the metabolism of popular media, critical behavior that is far more limited in scope and substance, such as digital detox, is more visible publicly than serious refusal. This is part of a long tradition that has mixed diluted skepticism about dominant media and technology with the allure of the real and authentic—i.e., unmediated—life [29]. Self-centered fads are not relevant to our discussion.
1. Refusing technology on religious grounds
Religious grounds for refusing technology may be a sensitive topic, but even in a schematic discussion, some distinctions can be made. It would not do to accept just any justification of anti-technology or, far worse, anti-scientific attitudes that calls itself religious or faith-based. Romanticizing bizarre beliefs and the fringe movements they stir is an ordinary intellectual sin that may lead to extraordinarily costly consequences. What is generally protected in democratic societies is the way of life of functional communities, which are not particularly dangerous or violent internally or externally. To take the example above, the Amish obviously count as such a community. Even their rejection of technology has nuances since they are negotiating their relationship with technology in the light of their values, being selective and sometimes disagreeing about options [67,68,69,70]. Outsiders may reasonably be critical of Amish values and choices, but it would not be reasonable to limit the exercise of such choices. This community is clearly not of a kind with, say, a suicide sect. One could add here other well-known examples, such as ultra-Orthodox Jews [71].
Older people who refuse technology are not a marginal religious group, and it would be ridiculous to think of them as sharing, globally or within one culture, a small set of values. Still, their reasons for rejecting technology may be, to a certain extent, analogous to those of religious groups. For example, the rejection may be based on matters of principle or on the perception that important values, such as privacy and personal autonomy, are threatened by technology, particularly by AI. Closer to what I have suggested above, older individuals may feel that their world, the aggregation of the familiar facts of their experience of a lifetime and of a certain way of life, is endangered, and they may act to preserve it. They may even do it explicitly, thinking of themselves as part of fading or past communities. In doing this, older individuals may, in fact, be in a worse position than an organized religious community. This, as I explained above, should be part of the reasoning when deciding on a positive right to refuse technology scaled to the needs and situation of older refusers.
2. Refusing medical treatment
The right to refuse medical treatment is sometimes discussed together with the right to die [72], but I want to separate them here. Some authors have stressed similar distinctions for other reasons, for example, to detach a largely uncontested right to refuse treatment from problematic positive components, sometimes described dramatically as “a right to be killed” [73]. To keep things clearer and in a certain progression, it is best for now to consider cases in which death is not a likely direct outcome, in the short or medium term, of a decision to go against medical advice and refuse treatment.
As with the case of religious refusal, caution is needed here to avoid situations in which others may be hurt and to set an appropriate bar for justification. Refusing treatment is sometimes likely to harm others. The obvious case here is vaccination. Preventing contagious diseases cannot be a purely personal decision since prevention, by definition, works only if a majority of people get vaccines. Other examples can be added in which refusing medication makes one more likely to be dangerous to others (which may be the case with a minority of psychotic patients) or in which refusal to wear medical devices impairs one’s professional performance, which puts others at risk. If one is at risk of serious arrhythmia, one may need to wear an implantable cardioverter defibrillator if one wants to drive.
Refusal of birth control is an interesting example of resisting medical intervention. Experts may, for example, recommend IUDs, but the rate of acceptance may still be low due to personal concerns, culture, or level of education [74,75,76]. This may be unreasonable from the point of view of health experts, but obviously, mandatory, invasive procedures, especially related to reproduction, should be out of the question in a sane society. Maintaining and offering easy access to alternatives are ways to achieve similar prosocial aims.
An example that overlaps with our topic is the refusal of diagnosis or treatment planning by AI. A recent debate in the literature [77,78] has emphasized that resisting the involvement of AI in medical matters is justified, the point of disagreement being the nature of the reasons that constitute the justification. Ploug and Holm have emphasized different kinds of rational reasons that patients may give for their refusal, whereas de Miguel Beriain has suggested that values such as personal autonomy should be taken as decisive, also noting limitations to the right of refusal (e.g., because of potential harm to others). Other researchers have shown that patients are reluctant to accept AI, trusting it less than doctors, even when it issues similar recommendations [79]. Trust may not be a fully rational matter, but we do allow it to play a major role in our affairs routinely. There is no reason to deny its importance in medicine.
Medical intervention is a deeply personal area, and the fragility brought by illness adds a layer of care and concern in such matters. Refusal of treatment, including that which specifically involves technologies, devices, and impersonal data handling, is to be seen in this light. The similarities to a (partly overlapping) refusal by some older persons to use AI-based tech run quite deep. In the latter case too, preserving autonomy is crucial in a state of increased vulnerability—for reasons that are both general and situation-specific. Recognizing the patient as a decision-maker will typically not be equivalent to adopting a non-interference position. Some positive elements, such as the provision of alternatives, where these are available at reasonable costs, will be in the cards. It should be the same with older people who are skeptical of AI technology beyond its medical applications.
3. Refusing to prolong one’s life
It may seem sinister to discuss the right to die in an essay on ageing and technology, but perhaps such an extreme example will provide useful context. My goal here is not to defend a substantive position regarding the right to die but to very briefly explore the analogy between this putative, controversial right and something which, by comparison, should be much less contentious—the right to refuse AI technology.
Without digressing further, I should remind the reader that this has traditionally been, for good reasons, a territory regarded with revulsion or worse. One could point to a locus classicus, such as Kant’s condemnation of suicide—contra the Stoics he otherwise admired—or one could be sympathetic to the arguments of those who consider the very expression “the right to die” contradictory [72]. Whatever one may think on this topic, I think it is worth exploring whether such a putative right could have positive elements to it, on the assumption that there is a right to die to begin with. If one accepts the assumption, it may be sensible to see the right to die as not a purely negative right. Bluntly put, this is because, presumably, societies would not want people who choose to exercise this right to end their lives miserably. We arguably have a shared interest in preserving the dignity of even those who have taken radical, private decisions (which do not inherently harm others, as per above).
Let us focus on a few narrowly circumscribed situations in which obligations seem to have a more solid grounding. Doron [80] has discussed the particular case of the right to die at home, suggesting that, without some positive components, it would tend to be nominal or void of content. Kompatsiari [81] has discussed the more stringent case of those who wish to end their lives but are incapable of committing suicide due to extremely debilitating illnesses in their final stages. The author argues that this is a situation in which the denial of a positive right to die would amount to the imposition of an obligation to live. Such an obligation, unless emanating from a religious or metaphysical prior commitment, is hard to defend. In other words, looking at the right to die as entailing only non-intervention or omission has consequences which, at least locally, with people in extreme situations, approach moral bankruptcy.
If, in such an extreme case, some societies are inclined to affirm certain (public) duties, then perhaps we will find ourselves less skeptical about a positive right of older people to pursue a life divorced from the dominant technologies of the day. Therefore, putting in place an infrastructure for choices about technology, including refusal, should be in the cards. Refusing AI is a lesser sin than refusing life; it should be forgiven accordingly.

5. Conclusions

Old age is not a second childhood. This is a platitude that seems to be permanently bracketed by both old customs and modern crises. To it, one may add another platitude: technological developments are not implacable forces of nature, but the results of human ingenuity—and human decisions. The development of disruptive technologies, as is the case currently with AI, is not on par with gravity. How we arrange our life with AI is thus not a given but a result of ongoing negotiations. Even declarations that AI should be embraced by everybody—or else—are part of such negotiations. They can and should be opposed.
To the list of reasonable positions on older people in ageing societies and the advent of AI, we should add one that maintains a robust right to refuse AI. Perhaps only a small minority of older persons will choose to exercise this right. The Reluctant Republic may be tiny. But that does not make its potential oppression or erasure a forgivable injustice.

Funding

This work has been supported through the project City & Co, funded by the EU Horizon 2020 research and innovation program under grant agreement No 101003758.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Davis, F.D. User Acceptance of Information Systems: The Technology Acceptance Model (TAM); University of Michigan: Ann Arbor, MI, USA, 1987. [Google Scholar]
  2. Williams, M.D.; Rana, N.P.; Dwivedi, Y.K. The unified theory of acceptance and use of technology (UTAUT): A literature review. J. Enterp. Inf. Manag. 2015, 28, 443–488. [Google Scholar] [CrossRef]
  3. Parasuraman, A. Technology Readiness Index (Tri): A Multiple-Item scale to measure readiness to embrace new technologies. J. Serv. Res. 2000, 2, 307–320. [Google Scholar] [CrossRef]
  4. Wang, C.-C.; Chen, J.-J. Overcoming technophobia in poorly-educated elderly–the HELPS-seniors service learning program. Int. J. Autom. Smart Technol. 2015, 5, 173–182. [Google Scholar] [CrossRef]
  5. Safi, S.; Thiessen, T.; Schmailzl, K.J. Acceptance and Resistance of New Digital Technologies in Medicine: Qualitative Study. JMIR Res. Protoc. 2018, 7, e11072. [Google Scholar] [CrossRef] [PubMed]
  6. Di Giacomo, D.; Ranieri, J.; D’amico, M.; Guerra, F.; Passafiume, D. Psychological Barriers to Digital Living in Older Adults: Computer Anxiety as Predictive Mechanism for Technophobia. Behav. Sci. 2019, 9, 96. [Google Scholar] [CrossRef] [PubMed]
  7. Puebla, C.; Fievet, T.; Tsopanidi, M.; Clahsen, H. Mobile-assisted language learning in older adults: Chances and challenges. ReCALL 2022, 34, 169–184. [Google Scholar] [CrossRef]
  8. United Nations Department of Economic and Social Affairs. World Population Ageing 2020: Highlights. 2020. Available online: https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/undesa_pd-2020_world_population_ageing_highlights.pdf (accessed on 25 August 2023).
  9. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare applications. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020; pp. 25–60. [Google Scholar] [CrossRef]
  10. Ho, A. Are we ready for artificial intelligence health monitoring in elder care? BMC Geriatr. 2020, 20, 358. [Google Scholar] [CrossRef]
  11. Qian, K.; Zhang, Z.; Yamamoto, Y.; Schuller, B.W. Artificial intelligence internet of things for the elderly: From assisted living to health-care monitoring. IEEE Signal Process. Mag. 2021, 38, 78–88. [Google Scholar] [CrossRef]
  12. Zhu, J.; Shi, K.; Yang, C.; Niu, Y.; Zeng, Y.; Zhang, N.; Liu, T.; Chu, C.H. Ethical issues of smart home-based elderly care: A scoping review. J. Nurs. Manag. 2022, 30, 3686–3699. [Google Scholar] [CrossRef]
  13. Di Giacomo, D.; Guerra, F.; Perilli, E.; Ranieri, J. Technophobia as emerging risk factor in aging: Investigation on computer anxiety dimension. Health Psychol. Res. 2020, 8, 1–4. [Google Scholar] [CrossRef]
  14. Jeng, M.-Y.; Pai, F.-Y.; Yeh, T.-M. Antecedents for older adults’ intention to use smart health wearable devices-technology anxiety as a moderator. Behav. Sci. 2022, 12, 114. [Google Scholar] [CrossRef] [PubMed]
  15. Volkmann, T.; Miller, I.; Jochems, N. Addressing fear and lack of knowledge of older adults regarding social network sites. In Human Aspects of IT for the Aged Population. Technology and Society; Springer: Berlin/Heidelberg, Germany, 2020; pp. 114–130. [Google Scholar]
  16. Wang, K.H.; Chen, G.; Chen, H.-G. A model of technology adoption by older adults. Soc. Behav. Pers. Int. J. 2017, 45, 563–572. [Google Scholar] [CrossRef]
  17. Zafrani, O.; Nimrod, G.; Edan, Y. Between fear and trust: Older adults’ evaluation of socially assistive robots. Int. J. Hum.-Comput. Stud. 2023, 171. [Google Scholar] [CrossRef]
  18. Ivan, L.; Cutler, S.J. Ageism and technology: The role of internalized stereotypes. Univ. Tor. Q. 2021, 90, 127–139. [Google Scholar] [CrossRef]
  19. Neves, B.B.; Amaro, F. Too old for technology? How the elderly of Lisbon use and perceive ICT. J. Community Inform. 2012, 8, 1–12. [Google Scholar] [CrossRef]
  20. Dyck, J.L.; Smither, J.A.-A. Age differences in computer anxiety: The role of computer experience, gender and education. J. Educ. Comput. Res. 1994, 10, 239–248. [Google Scholar] [CrossRef]
  21. Ha, J.-G.; Page, T.; Thorsteinsson, G. A Study on technophobia and mobile device design. Int. J. Contents 2011, 7, 17–25. [Google Scholar] [CrossRef]
  22. Loos, E.; Peine, A.; Fernández-Ardèvol, M. Older People as Early Adopters and Their Unexpected and Innovative Use of New Technologies: Deviating from Technology Companies’ Scripts. In Proceedings of the International Conference on Human-Computer Interaction, Malaga, Spain, 22–24 September 2021; pp. 156–167. [Google Scholar]
  23. Guner, H.; Acarturk, C. The use and acceptance of ICT by senior citizens: A comparison of technology acceptance model (TAM) for elderly and young adults. Univers. Access Inf. Soc. 2020, 19, 311–330. [Google Scholar] [CrossRef]
  24. Iancu, I.; Iancu, B. Designing mobile technology for elderly. A theoretical overview. Technol. Forecast. Soc. Chang. 2020, 155, 119977. [Google Scholar] [CrossRef]
  25. Syed-Abdul, S.; Malwade, S.; Nursetyo, A.A.; Sood, M.; Bhatia, M.; Barsasella, D.; Liu, M.F.; Chang, C.-C.; Srinivasan, K.; Li, Y.C.J. Virtual reality among the elderly: A usefulness and acceptance study from Taiwan. BMC Geriatr. 2019, 19, 1–10. [Google Scholar] [CrossRef]
  26. Yoo, H.-S.; Suh, E.-K.; Kim, T.-H. A Study on Technology acceptance of elderly living alone in smart city environment: Based on AI speaker. J. Ind. Distrib. Bus. 2020, 11, 41–48. [Google Scholar] [CrossRef]
  27. Kaun, A. Ways of seeing digital disconnection: A negative sociology of digital culture. Convergence 2021, 27, 1571–1583. [Google Scholar] [CrossRef]
  28. Lomborg, S.; Ytre-Arne, B. Advancing digital disconnection research: Introduction to the special issue. Convergence 2021, 27, 1529–1535. [Google Scholar] [CrossRef]
  29. Syvertsen, T.; Enli, G. Digital detox: Media resistance and the promise of authenticity. Convergence 2019, 26, 1269–1283. [Google Scholar] [CrossRef]
  30. Moe, H.; Madsen, O.J. Understanding digital disconnection beyond media studies. Convergence 2021, 27, 1584–1598. [Google Scholar] [CrossRef]
  31. Casemajor, N.; Couture, S.; Delfin, M.; Goerzen, M.; Delfanti, A. Non-participation in digital media: Toward a framework of mediated political action. Media Cult. Soc. 2015, 37, 850–866. [Google Scholar] [CrossRef]
  32. Kaun, A.; Treré, E. Repression, resistance and lifestyle: Charting (dis)connection and activism in times of accelerated capitalism. Soc. Mov. Stud. 2020, 19, 697–715. [Google Scholar] [CrossRef]
  33. Rosenberg, H.; Vogelman-Natan, K. The (other) two percent also matter: The construction of mobile phone refusers. Mob. Media Commun. 2022, 10, 216–234. [Google Scholar] [CrossRef]
  34. Rashi, T. The kosher cell phone in ultra-Orthodox society. In Digital Religion: Understanding Religious Practice in New Media Worlds; Campbell, H., Ed.; Routledge: Abingdon, UK, 2012; pp. 173–181. [Google Scholar]
  35. Amann, J.; Blasimme, A.; Vayena, E.; Frey, D.; Madai, V.I. Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. Mak. 2020, 20, 310. [Google Scholar] [CrossRef]
  36. Durán, J.M.; Jongsma, K.R. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med Ethics 2021, 47, 329–335. [Google Scholar] [CrossRef]
  37. Smith, H. Clinical AI: Opacity, accountability, responsibility and liability. AI Soc. 2021, 36, 535–545. [Google Scholar] [CrossRef]
  38. von Eschenbach, W.J. Transparency and the black box problem: Why we do not trust AI. Philos. Technol. 2021, 34, 1607–1622. [Google Scholar] [CrossRef]
  39. Malik, S.; Khan, M.A.; El-Sayed, H.; Khan, J.; Ullah, O. How do autonomous vehicles decide? Sensors 2022, 23, 317. [Google Scholar] [CrossRef]
  40. Zablocki, E.; Ben-Younes, H.; Pérez, P.; Cord, M. Explainability of deep vision-based autonomous driving systems: Review and challenges. Int. J. Comput. Vis. 2022, 130, 2425–2452. [Google Scholar] [CrossRef]
  41. Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics; Auerbach Publications: Boca Raton, FL, USA, 2022; pp. 296–299. [Google Scholar]
  42. Gross, N. What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI. Soc. Sci. 2023, 12, 435. [Google Scholar] [CrossRef]
  43. Sun, L.; Wei, M.; Sun, Y.; Suh, Y.J.; Shen, L.; Yang, S. Smiling Women Pitching Down: Auditing Representational and Presentational Gender Biases in Image Generative AI. arXiv 2023, arXiv:2305.10566. [Google Scholar] [CrossRef]
  44. Huang, J.; Galal, G.; Etemadi, M.; Vaidyanathan, M. Evaluation and mitigation of racial bias in clinical machine learning models: Scoping review. JMIR Med. Inform. 2022, 10, e36388. [Google Scholar] [CrossRef]
  45. Malek, A. Criminal courts’ artificial intelligence: The way it reinforces bias and discrimination. AI Ethics 2022, 2, 233–245. [Google Scholar] [CrossRef]
  46. Taylor, S.M.; Gulson, K.N.; McDuie-Ra, D. Artificial Intelligence from Colonial India: Race, Statistics, and Facial Recognition in the Global South. Sci. Technol. Hum. Values 2023, 48, 663–689. [Google Scholar] [CrossRef]
  47. Kamikubo, R.; Wang, L.; Marte, C.; Mahmood, A.; Kacorri, H. Data Representativeness in Accessibility Datasets: A Meta-Analysis. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, Athens, Greece, 23–26 October 2022. [Google Scholar]
  48. World Health Organization. Ageism in Artificial Intelligence for Health: WHO Policy Brief. 2022. Available online: https://apps.who.int/iris/rest/bitstreams/1408281/retrieve (accessed on 25 August 2023).
  49. Thylstrup, N.B. The ethics and politics of data sets in the age of machine learning: Deleting traces and encountering remains. Media Cult. Soc. 2022, 44, 655–671. [Google Scholar] [CrossRef]
  50. Chu, C.H.; Nyrup, R.; Leslie, K.; Shi, J.; Bianchi, A.; Lyn, A.; McNicholl, M.; Khan, S.; Rahimi, S.; Grenier, A. Digital Ageism: Challenges and Opportunities in Artificial Intelligence for Older Adults. Gerontol. 2022, 62, 947–955. [Google Scholar] [CrossRef] [PubMed]
  51. Stypinska, J. AI ageism: A critical roadmap for studying age discrimination and exclusion in digitalized societies. AI Soc. 2021, 38, 665–677. [Google Scholar] [CrossRef] [PubMed]
  52. Rosales, A.; Fernández-Ardèvol, M. Ageism in the era of digital platforms. Convergence 2020, 26, 1074–1087. [Google Scholar] [CrossRef] [PubMed]
  53. Sheehan, M.-F.; Matt, C. AI Is Winning the AI Race. Foreign Policy. Available online: https://foreignpolicy.com/2023/06/19/us-china-ai-race-regulation-artificial-intelligence/ (accessed on 19 June 2023).
  54. Reisner, A. Revealed: The Authors Whose Pirated Books Are Powering Generative AI. The Atlantic. Available online: https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/ (accessed on 19 August 2023).
  55. ChatGPT Users Can Now Browse Internet, OpenAI Says. Reuters. Available online: https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/ (accessed on 27 September 2023).
  56. PricewaterhouseCoopers. Workforce of the Future—The Competing Forces Shaping 2030. 2018. Available online: https://www.pwc.com/gx/en/services/workforce/publications/workforce-of-the-future.html (accessed on 7 July 2023).
  57. World Economic Forum. The Future of Jobs Report 2023. 2023. Available online: https://www.weforum.org/reports/the-future-of-jobs-report-2023/ (accessed on 7 July 2023).
  58. McKinsey Global Institute. The Future of Work after COVID-19. 2021. Available online: https://www.mckinsey.com/featured-insights/future-of-work/the-future-of-work-after-covid-19 (accessed on 7 July 2023).
  59. Metz, C.; Schmidt, G. Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society.’ The New York Times. 2023. Available online: https://www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html (accessed on 29 March 2023).
  60. Berlin, I. Two Concepts of Liberty. In Liberty: Incorporating Four Essays on Liberty; Oxford University Press: Oxford, UK, 2002; pp. 166–217. [Google Scholar]
  61. Currie, D.P. Positive and Negative Constitutional Rights. Univ. Chic. Law Rev. 1986, 53, 864. [Google Scholar] [CrossRef]
  62. Sunstein, C.R. Against positive rights. East Eur. Const. Rev. 1993, 2, 35. [Google Scholar]
  63. Shue, H. Basic Rights: Subsistence, Affluence, and US Foreign Policy; Princeton University Press: Princeton, NJ, USA, 2020. [Google Scholar]
  64. Rawls, J. A Theory of Justice; Oxford University Press: Oxford, UK, 1971. [Google Scholar]
  65. Lupo-Pasini, F. Is it a wonderful life? Cashless societies and monetary exclusion. Rev. Bank. Financ. Law 2020, 40, 153. [Google Scholar]
  66. Warchlewska, A. Will the development of cashless payment technologies increase the financial exclusion of senior citizens? Acta Scientiarum Polonorum. Oeconomia 2020, 1, 87–96. [Google Scholar] [CrossRef]
  67. Basham, R.E. Technology and community in a rural culture: The Amish. J. Cult. Relig. Stud. 2019, 7, 639–659. [Google Scholar] [CrossRef]
  68. Ems, L. Amish workarounds: Toward a dynamic, contextualized view of technology use. J. Amish Plain Anabapt. Stud. 2014, 2, 42–58. [Google Scholar] [CrossRef]
  69. Johnson-Weiner, K.M. Technological diversity and cultural change among contemporary Amish groups. Mennon. Q. Rev. 2014, 88, 5–23. [Google Scholar]
  70. Umble, D.Z. The Amish and the Telephone: Resistance and reconstruction. In Consuming Technologies: Media and Information in Domestic Spaces; Silverstone, R., Hirsch, E., Eds.; Routledge: Abingdon, UK, 1992; pp. 183–194. [Google Scholar]
  71. Rosenberg, H.; Blondheim, M. Authority: The passive-aggressive Haredi campaign against the smartphone. In Digital Religion, 2nd ed.; Campbell, H., Ed.; Routledge: Abingdon, UK, 2021; pp. 196–204. [Google Scholar]
  72. Beschle, D.L. Autonomous Decisionmaking and Social Choice: Examining the Right to Die. Ky. Law J. 1988, 77, 319. [Google Scholar]
  73. Lowe, S.L. The right to refuse treatment is not a right to be killed. J. Med. Ethics 1997, 23, 154–163. [Google Scholar] [CrossRef] [PubMed]
  74. Dorney, E.; Botfield, J.R.; Robertson, S.; McGeechan, K.; Bateson, D. Acceptability of the copper intrauterine device as a form of emergency contraception in New South Wales, Australia. Eur. J. Contracept. Reprod. Health Care 2020, 25, 114–119. [Google Scholar] [CrossRef] [PubMed]
  75. Elkhateeb, R.R.; Kishk, E.; Sanad, A.; Bahaa, H.; Hagazy, A.R.; Shaheen, K.; Moustafa, E.; Fares, H.; Gomaa, K.; Mahran, A. The acceptability of using IUDs among Egyptian nulliparous women: A cross-sectional study. BMC Women’s Health 2020, 20, 1–6. [Google Scholar] [CrossRef] [PubMed]
  76. Kraft, M.B.d.P.L.; Miadaira, M.; Marangoni, M.; Juliato, C.R.T.; Surita, F.G. Postplacental placement of intrauterine devices: Acceptability, reasons for refusal and proposals to increase its use. Rev. Bras. Ginecol. Obstet. 2021, 43, 172–177. [Google Scholar] [CrossRef] [PubMed]
  77. Beriain, I.d.M. Should we have a right to refuse diagnostics and treatment planning by artificial intelligence? Med. Health Care Philos. 2020, 23, 247–252. [Google Scholar] [CrossRef] [PubMed]
  78. Ploug, T.; Holm, S. The right to refuse diagnostics and treatment planning by artificial intelligence. Med. Health Care Philos. 2020, 23, 107–114. [Google Scholar] [CrossRef]
  79. Yokoi, R.; Eguchi, Y.; Fujita, T.; Nakayachi, K. Artificial intelligence is trusted less than a doctor in medical treatment decisions: Influence of perceived care and value similarity. Int. J. Hum.-Comput. Interact. 2021, 37, 981–990. [Google Scholar] [CrossRef]
  80. Doron, I. Caring for the dying: From a “negative” to a “positive” legal right to die at home. Care Manag. J. 2005, 6, 22–28. [Google Scholar] [CrossRef]
  81. Kompatsiari, E. The Contribution of the Element of ‘Fellowship’ to the Recognition of a Positive Right to Die for People Incapable of Committing Suicide. In The New Law: Suggestions for Reforms and Improvements of Existing Legal Norms and Principles; Lorenzmeier, S., Miler, D., Eds.; Nomos: Berlin, Germany, 2018; pp. 327–340. [Google Scholar]
