Proceeding Paper

Forecasting Transitions in Digital Society: From Social Norms to AI Applications †

Daniel Ullrich 1 and Sarah Diefenbach 2
1 Department of Computer Science, Ludwig-Maximilians-Universität München, 80337 Munich, Germany
2 Department of Psychology, Ludwig-Maximilians-Universität München, 80802 Munich, Germany
* Author to whom correspondence should be addressed.
Presented at the 9th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 12–14 July 2023.
Eng. Proc. 2023, 39(1), 88; https://doi.org/10.3390/engproc2023039088
Published: 14 July 2023
(This article belongs to the Proceedings of The 9th International Conference on Time Series and Forecasting)

Abstract

The use of AI and digitalization in many areas of everyday life holds great potential but also introduces significant societal transitions. This paper takes a closer look at three exemplary areas of central social and psychological relevance that might serve as a basis for forecasting transitions in the digital society: (1) social norms in the context of digital systems; (2) surveillance and social scoring; and (3) artificial intelligence as a decision-making aid or decision-making authority. For each of these areas, we highlight current trends and developments and then present future scenarios that illustrate possible societal transitions, related questions to be answered, and how such predictions might inform responsible technology design.

1. Introduction

“I am sorry, but I have to inform you that we cannot undertake the surgery.” The news was upsetting for Anna, as surgery was the only option to stop the potentially fatal disease. Sure, surgery was risky, too. But without surgery, there was no hope left other than the hope that the disease would heal on its own. Anna had only recently been diagnosed with the rare disease, and the outcome was difficult to predict. The treatment decision, however, was not her doctor’s alone. In fact, her doctor based all his decisions on “Health Guardian”, an artificial intelligence (AI) system generating treatment recommendations from vast amounts of data. In Anna’s case, Health Guardian recommended against surgery. Anna’s mother, who joined the consultation, desperately asked whether there could be a mistake and whether the doctor was of the same opinion. The doctor was in a dilemma: Personally, he was not necessarily against surgery. He would even have argued in favor of surgery, had Health Guardian voiced any uncertainty. But he knew that, compared to his own, naturally limited, perspective, the AI could factor in far more data. And that was what it came down to. Although the AI results were called “recommendations”, they were actually decisions. As the responsible doctor, he would have to present extremely good reasons to oppose the AI, but no such reasons were apparent in the current case. So the doctor had no choice but to console Anna and her family. At least there was still a sliver of hope for a natural recovery.
In recent years, artificial intelligence (AI) has achieved impressive successes in various domains such as visual perception [1], pattern recognition [2], expert and decision-making systems, games such as Chess and Go [3,4], or computer strategy games [5]. At the same time, critics still question whether these performances represent “real intelligence” [6,7].
In fact, the formation of “intelligence” in such systems is hardly comprehensible to us and exceeds the horizon of human understanding [8]. This is compounded by the fact that such systems can hardly be repaired. While a human may understand the basic mechanisms, the specific design becomes so complex that it is no longer possible for a human to discern how to fix bad parts of the system without damaging other parts. Often, the only solution is a completely new start, namely the training of a new system with modified start parameters that hopefully will not end up with the same errors.
The general lack of transparency in AI technologies [9] is one of the factors in the doctor’s dilemma in the above-mentioned example of AI in the operating room: AI decisions are hardly traceable by nature. AI systems refer to patterns detected in the example material (from the past) and then try to make predictions for the future and come up with new examples. However, which exact variables are considered, how these are weighted, and which correlations exist between these variables remain hidden from the user (and mostly also from the programmer) [10]. Hence, returning to the case of the decision for or against surgery, the deciding factors behind the Health Guardian’s decision remain obscured. Was it only about the predicted effectiveness and risk of the intervention? How were other variables taken into account, such as the cost-to-benefit ratio, budget of the healthcare system, and bed occupancy rate of hospitals? What about other waiting patients who needed surgery more urgently? It does not appear unlikely that artificial intelligence will consider the constraints of relevant stakeholders, especially in societies where resources in the healthcare system are more limited than in others. The sketch below illustrates this opacity.
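To make the opacity concrete, the following is a minimal, hypothetical sketch (in no way the actual Health Guardian or any real medical system): a tiny neural network is trained on synthetic data, and the only “explanation” of its recommendations is a pile of learned weights that offers no human-readable rationale. All names, data, and parameters here are invented for illustration.
```python
# Minimal sketch: a tiny neural network trained on synthetic "patient" data.
# The learned parameters, printed at the end, encode the decision rule yet
# offer no human-readable rationale. Everything here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 4 features per patient (e.g., age, risk markers);
# binary label: 1 = "recommend surgery", 0 = "do not recommend".
X = rng.normal(size=(200, 4))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=200) > 0).astype(float)

# One hidden layer, sigmoid activations, plain gradient descent.
W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()      # predicted probability
    grad_out = (p - y)[:, None] / len(y)  # gradient at the output layer
    grad_hid = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * (X.T @ grad_hid); b1 -= 0.5 * grad_hid.sum(0)

print("accuracy:", ((p > 0.5) == y).mean())
# The "explanation" of any single recommendation is this pile of numbers:
print(W1, W2)
```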

2. Overview and Method

The AI case is exemplary of the current challenges and questions around digital transitions that our society is faced with: What does it mean if current technological trends and developments continue? What are the psychological effects and consequences for social interaction? Which moral considerations play a role, and which decisions have to be made? This paper takes a closer look at three exemplary areas of central social and psychological relevance that might serve as a basis for forecasting transitions in the digital society: (1) social norms in the context of digital systems; (2) surveillance and social scoring; and (3) artificial intelligence as a decision-making aid or decision-making authority.
For each of these areas, we highlight current trends and developments and use the method of future scenarios to illustrate possible societal transitions and related questions to be answered. Thereby, we aim to contribute to several fields of research. First, the examples and implications discussed here may inspire future research and design in the fields of human–computer interaction (HCI) and AI. Moreover, regarding forecasting and future studies in general, this article may illustrate how qualitative analyses and future-related reflections on current technological and societal trends may complement more quantitative and statistical methods.
Of course, the method of future scenarios applied here comes with particular limitations, related to some fundamental problems of prediction. Typically, when trying to forecast the future, current developments are analyzed, projected into the future, and their interactions with other developments anticipated. However, numerous examples demonstrate how difficult this is, even for experts.
In the 1950s, when people were asked how they imagined the year 2000, they assumed that people would travel in flying cars powered by miniaturized nuclear engines, as depicted in an artwork by Frank Rudolph Paul, a science fiction magazine illustrator of that time [11]. Two technologies that were successful at the time, the car and nuclear power, were taken as a basis and projected into the future. However, the dangers and technical limits of nuclear power could not be anticipated. Could the people of the past have made a better prediction if they had studied nuclear power more intensively? Possibly. But even if one misjudgment is taken into account and corrected, there are still many others.
In the second half of the last century, researchers at the Massachusetts Institute of Technology (MIT) published a study on the future of the world economy [12]. The key aim was to predict the (assumed) inevitable collapse of an economic system based on exponential growth. Numerous parameters, such as population growth and density, aging of society, movement of goods, government budget, and debt, were considered in the prediction model. According to the model calculations, the collapse would occur within the next 100 years, i.e., around the year 2070. However, many parameters and events that later turned out to be relevant were naturally not considered, e.g., the disintegration of the Soviet Union, the rapid rise of China as a world power, and the significance of climate change for the planet. The significance of these developments was not foreseeable when the calculations were made and thus was not adequately taken into account in the forecast model. In the meantime, the forecast model has been updated with new parameters [13]; we will see whether the predictions hold true this time.
A basic problem here is that so-called disruptive events, findings, or technologies are not taken into account. Disruptive technologies are technical innovations that replace or displace established products or services and interrupt the success of previously prevailing approaches [14]. One example would be the Internet, which has opened up many new areas of business, but at the same time brought about the collapse of many previously successful business models. A few years earlier, no one would have predicted the disruptive character of the Internet, and in turn, many predictions that disregarded the influence of the Internet were faulty.
In the end, we must remind ourselves that predictions are still a kind of thought experiment and do not allow for perfect knowledge of what will actually happen. However, this should not diminish the importance of such thought experiments. Even non-perfect thought experiments are still better than not thinking at all. Such thought experiments reveal what could happen and indicate possible alternative courses of action. Thought experiments emphasize that we are not mere passengers being overrun by the future but can actively help to shape it.

3. Social Norms

Social norms are the unwritten rules of beliefs, attitudes, and behaviors that are considered acceptable in a particular social group or culture [15]. Social norms represent shared beliefs regarding appropriate ways to feel, think, and behave [16]. In this way, social norms provide order and predictability in society [15]. For example, in German culture, if we make an appointment, we expect the other person to arrive on time. In contrast to legal norms (e.g., laws), social norms occur spontaneously rather than being planned deliberately and are enforced informally [17]. Typically, social norms only become evident when conflict arises, i.e., if someone’s behavior contradicts our informal understanding of what is appropriate, such as cutting in line, entering an office without knocking, or starting to eat before everyone is seated at the table [18]. The same seems to apply to the digital space. Many conflicts in the context of social media and digital communication can be interpreted as social norm conflicts [19].
Regarding the forecast of societal transitions in the digital age, the differences and transfer of norms between the digital and non-digital spaces are an interesting aspect. In order to understand and possibly foresee such transitions, we will first take a look at some particular possibilities and characteristics of the digital space, which in turn affect the formation, change, and enforcement of certain social norms.
  • Distance between interaction partners: In many channels of digital communication, interaction consists only of writing and reading text. Social cues we adhere to in face-to-face conversation (human characteristics such as appearance, voice, and physical presence) are missing. It is therefore all too easy to forget that one is not interacting with texts but with humans, who have their own motives, their own value system, feelings, and emotions, and who can be hurt or offended by one’s own actions. In consequence, one might not even notice having hurt the counterpart on the other end of the digital channel, and empathic mechanisms that could reveal the consequences of one’s own actions are not activated [20];
  • Avatar and control, instead of authenticity: On many social media platforms, users are represented by an avatar, which can easily be exchanged if this seems convenient. A new account is quickly created, allowing one to restart with a clean slate (assuming interactions in an anonymous or pseudonymous space). Such a new start and identity change are very difficult in the non-digital space. And even on platforms where the avatar/identity cannot be easily changed, the user has much greater control over what information is revealed about him or her. In particular, involuntary aspects of communication (facial expressions, affective reactions, and tone of voice) are greatly reduced in digital space [21];
  • Felt anonymity: The fact that other interaction partners often appear as avatars and the fact that you yourself do not know who the other person is exactly create an illusion of complete anonymity. Even though, technically speaking, users can actually be identified and are only anonymous to each other, this feeling of anonymity still has psychological consequences. This pseudo-anonymity can be sufficient to make people feel “safe” and disregard regular social norms. Like hooded demonstrators, seemingly anonymous users may no longer feel obliged to follow social rules [21,22]. Not all users make use of this “freedom,” but a significant portion do;
  • Digital-exclusive mechanisms: The digital space provides various interaction mechanics that are unknown or even impossible in the non-digital space. One example is ghost banning, a technique typically used against so-called trolls (i.e., internet forum troublemakers who derive satisfaction from provoking other users with polarizing statements). If a troll were simply banned (i.e., the account deleted), this would not solve the problem for long, since the user could easily create a new account and start again. Ghost banning, however, is a process through which a user is invisibly banned from a social network, website, or online community. The user retains the ability to browse and use the available features without knowing that his or her actions are invisible to other users. This, in turn, prevents the user from interfering with other users [23]. Colloquially speaking, when an admin ghost bans a troll, this puts the troll in an invisible cage where they are unaware that other users cannot see their posts [24]. Initially, the ghost-banned troll has no way to detect his or her invisibility to others and can at best wonder about the lack of reactions to his or her provocations. Only by logging in with another user’s account and obtaining that user’s perspective on the online world could the troll find out what is going on. Transferring the technique of ghost banning to the non-digital space, one could imagine an invisibility cloak you can put on troublemakers without the person noticing. What is pure fiction in the real world is everyday life in the digital realm: every user receives his or her own individual view of the (digital) world, and the differences are seldom communicated (see the sketch following this list).
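To illustrate how such an individualized view can be realized, the following is a minimal sketch under assumed data structures (a Post record and a moderation list); it is not any specific platform’s implementation.
```python
# A minimal sketch of ghost banning; all names and data are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

ghost_banned = {"troll42"}  # hypothetical moderation list

def visible_feed(viewer: str, posts: list[Post]) -> list[Post]:
    """Each viewer gets an individual view of the feed: posts by a
    ghost-banned author remain visible to that author alone, so the
    ban is undetectable from the author's own perspective."""
    return [p for p in posts
            if p.author not in ghost_banned or p.author == viewer]

posts = [Post("alice", "Hello!"), Post("troll42", "provocation...")]
print([p.text for p in visible_feed("troll42", posts)])  # sees both posts
print([p.text for p in visible_feed("bob", posts)])      # sees only Alice's
```
This is exactly why the troll can only detect the ban by borrowing another user’s perspective: from their own account, the feed looks unchanged.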
Already today, due to the ubiquitous use of digital interaction channels, corresponding digital norms, which are in turn shaped by the peculiarities of the digital space, are gaining more and more weight.

A Possible Future Scenario

Social norms are implicitly learned and adhered to, and norms from the non-digital space influence those from the digital space, and vice versa [19]. We can conclude that as digitally mediated social interaction becomes more and more pervasive in everyday life, we are exposed to norms from the digital space to a greater extent. This, in turn, increases the relative influence of these norms. Ultimately, this could lead to a situation where norms from the digital space dominate over traditional norms that originated in the non-digital world.
Taking into account the characteristics of the digital space mentioned above, this could result in a greater level of rudeness and less consideration of the other’s emotional world. A side effect could also be the development of avoidance strategies against direct, non-digital interaction. In particular, people might stick to non-synchronous digital channels, such as text messaging, as a protective shield to insulate themselves from the possibly distressing reactions of the interaction partner [25], the so-called buffer effect [26]. In fact, there is already a perceptible trend among younger people to avoid direct synchronous interaction, such as face-to-face conversations or telephone calls (e.g., [27]). Instead, they are turning to more distant, mediated communication wherever possible. Rather than dealing with their own empathic reactions, people increasingly evade non-digital contact. As a result, empathic skills are used and trained less frequently, which, again, increases the preference for digital channels: a self-reinforcing dynamic.
Along with these predictions, we must also consider that, of course, the repertoire of traditional norms acquired over centuries in the non-digital world continues to shape our behavior. In other words, the currently observable state is still skewed in favor of conservative norms, and the influence of norms from the digital world will become even stronger in the future. A fictitious society starting from “zero” would presumably be influenced even more strongly by norms from the digital world. Following these thoughts, every existing society will be increasingly influenced by digital norms over time, if only because older people, who tend to be representatives of conservative norms, die and are replaced by those who come after them and who are more strongly influenced by norms from the digital world.

4. Surveillance and Social Scoring

When the Internet emerged, the first goal was to create a failsafe communication infrastructure that would continue to function even if parts of it broke down [28]. Only later did additional (primarily economic-driven) goals emerge, such as creating specific social networks, tracking users’ paths, and presenting targeted advertising. Thus, the early days of the Internet were characterized primarily by freedom: Freedom in users’ actions and freedom from control. This period is also referred to as the golden age of the Internet or the Wild West period without rules [29].
However, as the popularity of the Internet increased, the economic potential of big data and large user groups became more and more recognized. First and foremost, this meant displaying advertisements and creating numerous digital marketplaces [30]. In addition, the dissemination of news and information played an increasingly important role. With more and more people obtaining their information from the Internet, the senders of information gained a steadily growing reach [31]. A natural follow-up question was how to maximize influence on users and how to establish information sovereignty: who determines which of two contradictory pieces of information is “correct”?
Accordingly, it did not take long for various stakeholders to discover the World Wide Web and its users as a vehicle for their interests, and they began to extend their influence: politicians, news portals, the advertising industry, providers of consumer products, activists, and individual opinion leaders as well as “influencers” [32]. In this respect, the Internet can be seen as the antithesis of the classic model of public communication, in which information sovereignty is concentrated in the hands of the state or a small group of people. On the Internet, on the other hand, everyone is a sender and a receiver; everyone can potentially participate in opinion formation [33,34] and is, thus, a potential competitor to the major established media, a state of affairs that (traditional) media and politicians, faced with losing control, do not necessarily find desirable. This is accompanied by attempts at surveillance and information control, such as upload filters or the sabotage of encryption technologies. Typically, these are justified with popular goals such as criminal prosecution, referring to relatively small groups of offenders (e.g., child pornography, illegal black markets). However, the negative effects and potential misuse of surveillance technologies affect all users equally.

A Possible Future Scenario

With the increasing digitalization of everyday life, the potential for surveillance increases as well. With every online action, users leave digital footprints, becoming more and more transparent citizens. On the users’ side, awareness of being monitored leads to adapted behavior, and even the mere awareness of potentially being monitored creates distress, a phenomenon also known as the “chilling effect” [35]. Of course, this chilling effect can be deliberately utilized to steer user behavior in the desired direction. Since not everyone actually needs to be monitored, this method is also cost-effective.
At the same time, alternative ways of surveillance, such as AI-based algorithms, will become more popular. Where once actual humans had to detect offenses in the social media world, algorithms can slip into the monitoring role. For example, such algorithms can automatically detect copyright infringements, (child) pornography, or certain keywords that are taboo on the platforms. However, the effect of such interventions has so far been negligible, since even being banned from a platform does not generally represent a serious consequence for these users.
With the introduction of social scoring, this fundamentally changes. Social scoring takes the monitoring aspect to a new level and turns implicit, casual influence into explicit, targeted influence: with the use of social scoring (citizens receive points for desired behaviors and deductions for undesired behaviors), desirable behavior is explicitly prescribed (e.g., [36,37]). When such social scores affect real-life chances (e.g., when looking for a job or when searching for an apartment), violations of desired behaviors have specific and tangible consequences for users. Naturally, any criticism of this system will be classified as an undesirable action as well. Withdrawal from such a system will become almost impossible as soon as critical functionalities (freedom to travel, payment functions, prioritization in the search for housing and jobs, hiring criteria analogous to a police clearance certificate) are linked to the social score. In the end, the self-reinforcing spiral of social scoring systems may result in ever more extreme and comprehensive rules until all areas of human behavior are covered (see the sketch below).
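To make the mechanics concrete, the following is a minimal, hypothetical sketch of the scoring-and-gating logic described above; all behaviors, point values, and thresholds are invented for illustration and do not describe any existing system.
```python
# Hypothetical social scoring: points for desired behaviors, deductions
# for undesired ones, and critical everyday functions gated by the score.
SCORE_RULES = {"volunteering": +5, "jaywalking": -10, "criticizing_system": -50}
GATES = {"travel_permit": 600, "housing_priority": 700, "job_shortlist": 750}

def apply_behavior(score: int, behavior: str) -> int:
    # Criticism of the system itself is classified as undesired behavior.
    return score + SCORE_RULES.get(behavior, 0)

def allowed(score: int, function: str) -> bool:
    # Linking critical functions to the score is what makes withdrawal
    # from the system practically impossible.
    return score >= GATES[function]

score = 650
for b in ["jaywalking", "criticizing_system"]:
    score = apply_behavior(score, b)
print(score, allowed(score, "travel_permit"))  # 590 False
```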
As these considerations reveal, the basic idea of social scoring already contains much negative potential. Therefore, regardless of whether some future disruptive event might stop such a system, it seems important to consider now whether we want to prevent the establishment of such a concept through our actions today.

5. AI as Decision-Making Aid or Decision-Making Authority

Artificial intelligence is already being used to support complex decisions, for example in the fields of insurance [38,39], medicine (e.g., diagnostics and pattern recognition in image processing, as described by Kermany et al. [40] and Esteva et al. [41]), and human resources (HR), where artificial intelligence can help identify the most suitable candidate for an advertised position [42,43]. Across all these applications, the possibilities of artificial intelligence (in particular, machine learning) are limited by three main factors:
  • The specification of the method, algorithm, or network topology;
  • The computing power for training the AI;
  • The amount of available data sets matching possible input data to the correct output data (for example, a large collection of different animal images, each with an indication of which animal is depicted).
In many application domains, the current technical possibilities regarding all three factors are sufficient to create AIs that deliver results equal or superior to those of humans. Especially for the last factor, i.e., the data sets that link input patterns with correct results, progress arises as a kind of by-product of the activities of current users (e.g., of social media platforms). Every new set of stored user data generates new training data. Hence, the situation is becoming better every day, at least for those who can store and utilize the data. The sketch below illustrates this by-product effect.
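As a minimal illustration, the following hypothetical sketch shows how ordinary platform use (here, invented photo tagging) yields labeled (input, correct output) training pairs as a by-product; it is not any platform’s actual pipeline.
```python
# Supervised training data as (input, correct output) pairs, generated
# as a by-product of ordinary platform use. All names are hypothetical.
training_set: list[tuple[str, str]] = []

def user_tags_photo(photo_id: str, tag: str) -> None:
    # Ordinary platform use: the user tags a photo...
    # ...and the platform stores a new (input, label) training pair.
    training_set.append((photo_id, tag))

user_tags_photo("img_001.jpg", "cat")
user_tags_photo("img_002.jpg", "dog")
print(len(training_set), "labeled examples available for training")
```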

A Possible Future Scenario

As soon as AI methods are able to replace human labor or skills at equal quality, it is no longer a question of whether these methods will be applied. Not using such methods would result in a significant competitive disadvantage, perhaps even being forced out of the market. As methods and data collections continue to evolve, AI will find its way into more and more fields as a decision support tool, such as jurisdiction [44,45], partner choice [46,47], and many more.
With the invasion of AI into ever new domains, many questions arise, beginning with the most fundamental one: Should AI be allowed to enter all domains of human society or are there any barriers?
Moreover, what if AI delivers recommendations that are politically incorrect and therefore undesirable? How can it be ensured that the training data is “neutral” so that no bias is transferred to the trained AI?
Moreover, who is responsible for the indirect consequences of AI recommendations, and what kind of events can be traced back to an algorithm?
For example, in a recent case before the US Supreme Court, a mother whose daughter died along with 130 other people in connection with the ISIS terrorist attacks of November 2015 in Paris alleged that Google’s YouTube algorithms effectively amplified Islamic State-produced materials in support of the extremists that killed her daughter [48]. As with many online media platforms, YouTube’s recommendation algorithm basically aims to suggest relevant items by directing users to videos that are similar to those they have previously selected and watched. YouTube’s recommendations thus mirror the user’s apparent interest. However, the family of the terror victim argues that YouTube’s recommendations expose people to (ever more) hateful content, radicalize viewers, and ultimately encourage them to carry out terrorist attacks of their own [49]. To date (as of February 2023), the case is still pending. With ever more complex AI systems and algorithms in the future, such legal and moral questions will probably become more complex as well.
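To clarify the basic mechanism at issue, the following is a minimal sketch of similarity-based recommendation, not YouTube’s actual algorithm: items are represented as feature vectors (invented here), and the system suggests the unwatched item closest to the user’s viewing history, thereby mirroring and reinforcing prior choices.
```python
# Minimal similarity-based recommender; videos and features are invented.
import numpy as np

videos = {  # hypothetical feature vector per video
    "cooking_101": np.array([1.0, 0.0, 0.0]),
    "cooking_202": np.array([0.9, 0.1, 0.0]),
    "extreme_politics": np.array([0.0, 1.0, 0.2]),
}

def recommend(watched: list[str]) -> str:
    # Profile = average of watched videos; suggest the most similar
    # unwatched video (cosine similarity).
    profile = np.mean([videos[v] for v in watched], axis=0)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = {v: cos(profile, f) for v, f in videos.items()
                  if v not in watched}
    return max(candidates, key=candidates.get)

# Recommendations mirror prior choices, reinforcing the same direction:
print(recommend(["cooking_101"]))  # -> "cooking_202"
```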
In connection to this, another block of questions refers to the transparency of AI: Is there a right to understand on what basis an AI makes concrete recommendations—and how could such a right ever be realized if, by nature, AI decisions remain a black box to some extent?
With the current state of technology, it is certain that AI can neither offer error-free decision-making nor transparent reasons for its decisions. At the same time, these shortcomings do not mean that AI will not be applied, especially when considering the advantages on the other side.
What will be essential, then, is how people feel about AI and its role in important decisions in society. Would it be desirable, for example, if an AI that has access to your data and takes your interests into account were to decide on a country’s future and regulations instead of human politicians?
When people were asked exactly this question in a survey, the results showed overall high approval of AI: In the European region, the approval rate is 51% on average, with particularly strong support for AI in Spain (66%), Italy (59%), and Estonia (56%). In China, 75% are in favor of AI as a political decision-maker, whereas in the USA, only 40% want to delegate political decisions to AI [50].
Independent from the application domain, it seems likely that the use of AI will become more mainstream and that technological progress will more or less override the discussion about which applications are desirable or ethical.

6. Outlook

The use of AI and digitalization in many areas of work and private life will continue to increase in the future and hold great potential overall. Unpleasant tasks can be delegated to technology; AI can take over tasks that overwhelm or bore humans (and possibly vice versa). However, what we need to keep in focus are the major societal changes that might come with the use of AI. A system based on supply and demand for (human) work performance can hardly be maintained in its current form if artificial agents are competing with humans. New ideas for living and working together are needed. While there is probably still some time left before the big breakthrough of artificial agents, no one knows exactly how much time. When that day comes, there needs to be an action plan defining the space we want to grant AI in society. Otherwise, we will only be able to react to a factual reality instead of designing a desirable future.
Altogether, these considerations show that the innocent golden age of AI and digitalization is over. Simply accepting their effects and side effects on our society is not an option. Conscious technology design requires us to predict how technology will continue to develop, what effects we can expect on our society, and how we can counter these influences with foresight. As in the physical world, our behavior in the digital space is influenced by design decisions [19]. In order to promote desired, prosocial behavior and reduce antisocial behavior, we must deliberately consider how certain features of technology affect social dynamics and the world we live in. Not everything that is technically feasible is morally acceptable. There is no such thing as neutral design.
Even with conscious design decisions, developing solutions that actually work flawlessly continues to be a challenge. For example, the approaches chosen to promote prosocial behavior can again have undesirable side effects. Trying to prevent antisocial behavior by making users completely transparent means trading one problem for another. The same applies to surveillance and social scoring. The negative effects of social scores must be researched in advance so as not to create a factual situation from which it will be nearly impossible to escape later on.
In sum, the development of good solutions that are morally and socially acceptable is one of the current core tasks in the context of digitalization.

Author Contributions

Conceptualization, D.U. and S.D.; methodology, D.U.; writing—original draft preparation, D.U.; writing—review and editing, S.D.; project administration, D.U. and S.D.; funding acquisition, S.D. All authors have read and agreed to the published version of the manuscript.

Funding

Part of this research was funded by the German Research Foundation (DFG), Project PerforM (425412993) as part of the Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This is a revised version of “Ullrich, D. (2022). Zukunftsvisionen. In: S. Diefenbach and P. von Terzi (Eds.), Digitale Gesellschaft neu denken. Chancen und Herausforderungen in Alltags- und Arbeitswelt aus psychologischer Perspektive. Stuttgart, Germany: Kohlhammer,” published in German by the Kohlhammer publisher. Permission was granted by the Kohlhammer publisher.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the gap to human-level performance in face verification. 2014 IEEE Conf. Comput. Vis. Pattern Recognit. 2014, 1701–1708.
  2. Foggia, P.; Percannella, G.; Vento, M. Graph matching and learning in pattern recognition in the last 10 years. Int. J. Pattern Recognit. Artif. Intell. 2014, 28, 1450001.
  3. Schrittwieser, J.; Antonoglou, I.; Hubert, T.; Simonyan, K.; Sifre, L.; Schmitt, S.; Guez, A.; Lockhart, E.; Hassabis, D.; Graepel, T.; et al. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature 2020, 588, 604–609.
  4. Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
  5. Vinyals, O.; Babuschkin, I.; Czarnecki, W.M.; Mathieu, M.; Dudzik, A.; Chung, J.; Choi, D.H.; Powell, R.; Ewalds, T.; Georgiev, P.; et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 2019, 575, 350–354.
  6. Fjelland, R. Why general artificial intelligence will not be realized. Humanit. Soc. Sci. Commun. 2020, 7, 10.
  7. Crawford, K. Microsoft’s Kate Crawford: ‘AI Is Neither Artificial nor Intelligent’ (Z. Corbyn, Interviewer) [Interview]. 2021. Available online: https://www.theguardian.com/technology/2021/jun/06/microsofts-kate-crawford-ai-is-neither-artificial-nor-intelligent (accessed on 4 April 2023).
  8. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215.
  9. Larsson, S.; Heintz, F. Transparency in artificial intelligence. Internet Policy Rev. 2020, 9, 1–16.
  10. Kim, T.W.; Routledge, B.R. Why a right to an explanation of algorithmic decision-making should exist: A trust-based approach. Bus. Ethics Q. 2021, 32, 75–102.
  11. Novak, M. The World Will Be Wonderful in the Year 2000! Available online: https://www.smithsonianmag.com/history/the-world-will-be-wonderful-in-the-year-2000-110060404/ (accessed on 4 April 2023).
  12. Meadows, D.H.; Meadows, D.L.; Randers, J.; Behrens, W.W. The Limits to Growth: A Report for the Club of Rome’s Project on the Predicament of Mankind; Universe Books: New York, NY, USA, 1972.
  13. Herrington, G. Update to limits to growth: Comparing the World3 model with empirical data. J. Ind. Ecol. 2021, 25, 614–626.
  14. Danneels, E. Disruptive technology reconsidered: A critique and research agenda. J. Prod. Innov. Manag. 2004, 21, 246–258.
  15. McLeod, S.A. Social Roles. Simply Psychology. 2008. Available online: www.simplypsychology.org/social-roles.html (accessed on 4 April 2023).
  16. Turner, J.C. Social Influence; Thomson Books: Belmont, CA, USA, 1991.
  17. Hechter, M.; Opp, K.-D. Social Norms; Russell Sage Foundation: New York, NY, USA, 2001.
  18. Diefenbach, S. Social norms in digital spaces: Experience reports on wellbeing and conflict in the teleworking context and implications for design. Z. Für Arb. 2022, 77, 56–77.
  19. Diefenbach, S.; Ullrich, D. Disrespectful technologies: Social norm conflicts in digital worlds. In Advances in Usability, User Experience and Assistive Technology; Ahram, T.Z., Falcão, C., Eds.; Springer International Publishing: Basel, Switzerland, 2019; pp. 44–56.
  20. Carrier, L.M.; Spradlin, A.; Bunce, J.P.; Rosen, L.D. Virtual empathy: Positive and negative impacts of going online upon empathy in young adults. Comput. Hum. Behav. 2015, 52, 39–48.
  21. Suler, J. The online disinhibition effect. CyberPsychology Behav. 2004, 7, 321–326.
  22. Macdonald, C. Avatars, disconnecting agents: Exploring the nuances of the avatar effect in online discourse. Open Sci. J. 2020, 5, 1–8.
  23. Techopedia. Ghost Banning. 2019. Available online: https://www.techopedia.com/definition/29190/ghost-banning (accessed on 4 April 2023).
  24. Slang.net. Ghost Banning. Censoring a Social Media User’s Posts. 2022. Available online: https://slang.net/meaning/ghost_banning (accessed on 4 April 2023).
  25. O’Sullivan, B. What you don’t know won’t hurt me: Impression management functions of communication channels in relationships. Hum. Commun. Res. 2000, 26, 403–431.
  26. Tretter, S.; Diefenbach, S. The buffer effect: Strategic choice of communication media and the moderating role of interpersonal closeness. J. Media Psychol. Theor. Methods Appl. 2022, 34, 265–276.
  27. Colbert, A.; Yee, N.; George, G. The digital workforce and the workplace of the future. Acad. Manag. J. 2016, 59, 731–739.
  28. Leiner, B.M.; Cerf, V.G.; Clark, D.D.; Kahn, R.E.; Kleinrock, L.; Lynch, D.C.; Postel, J.; Roberts, L.G.; Wolff, S. A brief history of the internet. ACM SIGCOMM Comput. Commun. Rev. 2009, 39, 22–31.
  29. Palacios, A. The Internet’s “Wild West” Era: A Love Letter to the Early 00’s Internet. 2019. Available online: https://medium.com/@alejandropalacios_98575/the-internets-wild-west-era-a-love-letter-to-the-early-00-s-internet-3075722f79ae (accessed on 4 April 2023).
  30. Taylor, K. One Statistic Shows How Much Amazon Could Dominate the Future of Retail. Business Insider. 2021. Available online: https://www.businessinsider.com/retail-apocalypse-amazon-accounts-for-half-of-all-retail-growth-2017-11 (accessed on 15 September 2022).
  31. Beisch, N.; Schäfer, C. Ergebnisse der ARD/ZDF-Onlinestudie 2020. Internetnutzung mit großer Dynamik: Medien, Kommunikation, Social Media. Media Perspekt. 2020, 9, 462–481.
  32. Moffett, S.; Santos, J. Social media as an influencer of public policy, cultural engagement, societal change and human impact. In Proceedings of the European Conference on Social Media: ECSM 2014, Brighton, UK, 10–11 July 2014; pp. 312–319.
  33. Bakshy, E.; Rosenn, I.; Marlow, C.; Adamic, L. The role of social networks in information diffusion. In Proceedings of the 21st International Conference on World Wide Web (WWW ’12), Lyon, France, 16–20 April 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 519–528.
  34. Burbach, L.; Halbach, P.; Ziefle, M.; Calero Valdez, A. Opinion formation on the internet: The influence of personality, network structure, and content on sharing messages online. Front. Artif. Intell. 2020, 3, 45.
  35. Büchi, M.; Festic, N.; Latzer, M. The chilling effects of digital dataveillance: A theoretical model and an empirical research agenda. Big Data Soc. 2022, 9, 20539517211065368.
  36. Hoffrage, U.; Marewski, J.N. Social Scoring als Mensch-System-Interaktion. In Social Credit Rating; Everling, O., Ed.; Springer Fachmedien: Wiesbaden, Germany, 2020; pp. 305–329.
  37. Kostka, G. China’s social credit systems and public opinion: Explaining high levels of approval. New Media Soc. 2019, 21, 1565–1593.
  38. Eling, M.; Nuessle, D.; Staubli, J. The impact of artificial intelligence along the insurance value chain and on the insurability of risks. Geneva Pap. Risk Insur. Issues Pract. 2021, 47, 205–241.
  39. Riikkinen, M.; Saarijärvi, H.; Sarlin, P.; Lähteenmäki, I. Using artificial intelligence to create value in insurance. Int. J. Bank Mark. 2018, 36, 1145–1168.
  40. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018, 172, 1122–1131.
  41. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
  42. Upadhyay, A.K.; Khandelwal, K. Applying artificial intelligence: Implications for recruitment. Strateg. HR Rev. 2018, 17, 255–258.
  43. Nawaz, N.; Mary, A. Artificial intelligence chatbots are new recruiters. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 1–5.
  44. Sourdin, T. Judges, Technology and Artificial Intelligence: The Artificial Judge; Edward Elgar Publishing: Cheltenham, UK, 2021.
  45. Vermeys, N. The computer as the court: How will artificial intelligence affect judicial processes? In New Pathways to Civil Justice in Europe; Kramer, X., Biard, A., Hoevenaars, J., Themeli, E., Eds.; Springer: Basel, Switzerland, 2021.
  46. Agudo, U.; Matute, H. The influence of algorithms on political and dating decisions. PLoS ONE 2021, 16, e0249454.
  47. Scavarelli, C.M. The Future of Dating (No. 6) [Song]. 2018. Available online: https://soundcloud.com/user-145965453 (accessed on 4 April 2023).
  48. ABC News. Family of American Terror Victim Asks Supreme Court to Curb Immunity for Social Media. Available online: https://abcnews.go.com/Politics/family-american-terror-victim-asks-supreme-court-curb/story?id=96463560 (accessed on 4 April 2023).
  49. LegalEagle. The Supreme Court Could Destroy the Internet Next Week. 2023. Available online: https://www.youtube.com/watch?v=hzNo5lZCq5M (accessed on 4 April 2023).
  50. Jonsson, O.; de Tena, C.L. European Tech Insights 2021. Part II: Embracing and Governing Technological Disruption. Center for the Governance of Change. 2021. Available online: https://docs.ie.edu/cgc/IE-CGC-European-Tech-Insights-2021-%28Part-II%29.pdf (accessed on 4 April 2023).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
