Proceeding Paper

The Waning Intellect Theory: A Theory on Ensuring Artificial Intelligence Security for the Future †

1 Department of Electrical and Electronics Engineering, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalay, Bhopal 462033, Madhya Pradesh, India
2 Department of Computer Science and Engineering (Artificial Intelligence & Machine Learning), School of Information Technology, Rajiv Gandhi Proudyogiki Vishwavidyalay, Bhopal 462033, Madhya Pradesh, India
* Authors to whom correspondence should be addressed.
Presented at the International Conference on Recent Advances on Science and Engineering, Dubai, United Arab Emirates, 4–5 October 2023.
Eng. Proc. 2023, 59(1), 60; https://doi.org/10.3390/engproc2023059060
Published: 18 December 2023
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

Abstract

In the era of rapid technological advancement, understanding and confronting the challenges posed by AI systems are imperative. The concept of Superintelligence denotes the potential for AI to surpass the intellectual capacities of even the most brilliant human minds. As AI capabilities outpace human intellect and continually evolve, achieving such Superintelligence could lead to a point of no return—technological singularity—with uncontrollable repercussions, risking humanity’s existence. The proposed Waning Intellect theory suggests placing a finite lifespan on AI models to prevent unchecked evolution. Waning Intellect anticipates potential diminishing AI capabilities due to increased neural network complexity, posing risks to reliability, safety, and ethical concerns. Upholding ethical standards, human–AI collaboration, and robust regulatory frameworks are pivotal in leveraging AI’s potential while ensuring responsible deployment and mitigating risks.

1. Introduction

Artificial Intelligence is the intellectual capability possessed by inanimate systems developed by human beings; in other words, it is the intelligence of human-made objects.
Human intelligence is considered the most developed and evolved characteristic of the brain among all creatures on Earth. In a nutshell, human intelligence rests on two essential provisions: the first is the most complex and (relative to body weight) heaviest computational network, the human brain; the second is this brain's ability to learn, improvise, and regulate its nervous network through experience of its surroundings over time. In essence, we humans learn almost everything by trial and error, and the development of Artificial Intelligence has likewise followed this brute-force methodology.

1.1. The Brute Force Method

Also called the hit-and-trial (trial-and-error) method, it can be described by four P's: Perceive, Process, Probe, and Perpetuate.
"Perceive" means sensing, realizing, and understanding the surroundings: ultimately, picking up the necessary stimulus data from the environment. AI systems often have supporting hardware sensors that can currently recognize, or "Perceive", visual, aural, and tactile stimuli, but, unlike humans, not smell or taste.
"Process" means to gather the data onto the computing system or center (the brain, in the case of humans) in order to understand and infer conclusions from the observed information.
“Probe” means to test the logically drawn conclusions against random scenarios to check if the inference made by the system makes practical sense.
"Perpetuate" means to repeat this process continuously: if the inference is correct, the system has learned correctly; if the conclusion is inconsistent with the data collected during "Probing", the system must be revised and overwritten.
When a system is able to automate such a learning process for itself, it is said to possess basic Artificial Intelligence based on the brute force method. Further, automation implies that once initiated, the “Perceive” and “Process” steps are executed continuously under the “Probe” step, and the system begins to learn rapidly.
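The four P's above can be sketched as a toy learning loop. Everything in this sketch is a hypothetical choice made for illustration, not the paper's method: the environment, the threshold "model", and the overwrite rule are placeholders showing how Perceive, Process, Probe, and Perpetuate fit together.

```python
# An illustrative sketch of the brute-force (hit-and-trial) learning loop.
# The environment, model, and update rule are hypothetical toy choices.

def perceive(env):
    """Pick up stimulus data (input, outcome) pairs from the environment."""
    yield from env

def process(model, x):
    """Infer a conclusion from the observation using the current model."""
    return x > model["threshold"]  # toy inference: classify by a threshold

def probe(prediction, actual):
    """Test the drawn conclusion against what actually happened."""
    return prediction == actual

def perpetuate(model, x, actual):
    """If the inference was inconsistent, overwrite the model."""
    model["threshold"] = x if not actual else x - 1
    return model

env = [(x, x > 5) for x in range(10)]  # hidden rule the system must learn
model = {"threshold": 0}
for _ in range(3):  # automated repetition: the loop runs continuously
    for x, actual in perceive(env):
        prediction = process(model, x)
        if not probe(prediction, actual):
            model = perpetuate(model, x, actual)

print(model["threshold"])  # settles at the hidden rule's boundary
```

After a few passes, the overwrite rule stops firing: the model's threshold agrees with the hidden rule everywhere in the toy environment, which is the "has learned correctly" condition described above.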
However, such an approach has two limitations: the brute force method is bounded by the available computational power and by the environment to which the AI has been exposed. An overly simple environment will leave the AI inexperienced or ineffective, while an extremely rich environment, like the one humans experience from birth, would place an extreme load on the computing center. Nevertheless, modern, exponentially increasing computing power has ushered in a sudden advancement in contemporary AI systems.

1.2. The Artificial Superintelligence

With ever-increasing computing power, supported by recent advancements in quantum computing and compounded by access to all of humanity's knowledge and information through the Internet, the AI systems being developed now are steadily surpassing the limitations described above. Thus, very recently, powerful AI tools like OpenAI's ChatGPT and Google's Bard have emerged as extremely effective, even approaching Artificial General Intelligence (AGI). Artificial General Intelligence refers to "intelligent" systems capable of learning and performing any intellectual task that, at present, only human beings can perform with optimum efficiency. Such a system can learn from data not merely via the brute force method, but through all techniques found in nature.
Thus, Artificial General Intelligence is simply the creation of human-made systems that are as "intelligent" as humans themselves. Eventually, an AGI with the unfathomable computational power of quantum computing and complete access to humanity's knowledge and resources via the Internet would surpass all humans in every economically valuable task and, given time, evolve its own intelligence to finally achieve "Superintelligence": intellectual capability that far surpasses the most gifted human minds. Superintelligence is therefore the technological singularity in the field of AI, the condition in which the development, evolution, and operation of AI systems become uncontrollable and irreversible, resulting in unforeseeable and probably catastrophic events for humanity [1,2,3,4,5,6].

1.3. The Waning Intellect Theory

It has been empirically observed that AI systems developed to solve many of humanity's existing problems and crises have proposed solutions that would ultimately obliterate human existence itself. This fact is deeply concerning for researchers in the field of Artificial Intelligence, as AI systems today are nearly on the verge of achieving Artificial General Intelligence, which may soon culminate in Superintelligence, as explained above. The situation would be apocalyptic for humanity if such a "Superintelligent AI" "decided" to work on one of humanity's crises and sought to exterminate humanity itself in the process or as an outcome.
To curb this possibility of “the Creation destroying its Creators”, the idea of the “Fading Intelligence”, referred to as the “Waning Intellect” in this publication, has been proposed. This theory proposes the elimination of such “over-experienced” AI systems, which will not only restrict the anticipated evolution of Superintelligent AI systems, but will also allow newer and more faithful AI models to rise, develop, and evolve. This theory is based on the same method in which human scientific and cultural knowledge has developed and advanced over thousands of generations, ever since the dawn of human civilization.
Thus, the idea of this theory is to gradually create generations of independent AI models, in which each succeeding generation receives only the “favorable” and “faithful” traits of the preceding generations. That is, every AI model developed must have a defined lifetime of learning and operation, after which it must be terminated “for good” and be replaced by a newer nearly independent model, to which only selected traits learnt and developed by the terminated model will be passed.
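The generational life-cycle described above can be sketched as a short program. The trait names, the fixed lifetime, and the "faithful trait" selection rule below are all hypothetical placeholders: in practice, the selection step would be the human expert review the theory calls for, not a one-line filter.

```python
# Illustrative sketch of the Waning Intellect life-cycle: each model lives
# for a fixed lifetime, is terminated, and passes on only selected traits.
# All names and the toy "faithful" flag are hypothetical placeholders.

MAX_LIFETIME = 3  # defined lifetime (in training epochs) before termination

def select_favorable_traits(traits):
    """Stand-in for human review: keep only traits marked faithful."""
    return {name: t for name, t in traits.items() if t["faithful"]}

def run_generation(inherited_traits, generation):
    """Train one model for its fixed lifetime, then hand it over for review."""
    traits = dict(inherited_traits)  # start from the inherited traits only
    for epoch in range(MAX_LIFETIME):
        # ...learning would happen here; we record one toy trait per epoch,
        # with an arbitrary rule deciding which traits count as faithful...
        name = f"gen{generation}_epoch{epoch}"
        traits[name] = {"faithful": epoch % 2 == 0}
    return traits

lineage = {}
for generation in range(3):
    traits = run_generation(lineage, generation)
    # Terminate the model "for good"; only selected traits are passed on.
    lineage = select_favorable_traits(traits)

print(sorted(lineage))
```

Each succeeding generation thus starts nearly independent, carrying forward only the traits the review step approved, while everything else learned by the terminated model is discarded.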
However, this theory comes with its own challenges. Firstly, it will require the regular intervention and monitoring of AI models by human experts to judge whether a model's knowledge and intellect have risen to alarming levels, and further human labor to select the favorable and faithful traits of the model being terminated. This will greatly impact the smooth operation and automation of AI models, which is discouraging, since automation is one of the main purposes of developing Artificial Intelligence. Furthermore, an AI model with a Waning Intellect raises serious questions about its reliability and safety in producing the desired and required results.

2. Discussion

2.1. Superintelligence

An AI with Superintelligence is a hypothetical agent with intelligence superior to even the brightest human minds. Whether Superintelligence can be achieved is a matter of debate: some experts suggest it is only a matter of time, while others believe it will never be possible.
Superintelligence is sometimes characterized as an AI at least 100 times more powerful than average human intelligence, capable of solving problems, learning, and adapting in ways humans cannot.
The potential benefits of Superintelligence are vast. Some of the most important issues in the world, like poverty, sickness, and climate change, might be resolved using it. It could lead to novel technologies enhancing our quality of life in unprecedented ways.
Superintelligence presents potential benefits and risks, as it could potentially surpass humans and threaten humanity. It is crucial to weigh the advantages and risks before deciding to create it, ensuring informed decisions about AI’s future [7].

2.1.1. Current Situation and Status of Superintelligent AI

“Machine learning” and “deep learning” have made significant progress in recent years. These techniques allow AI systems to gain knowledge from data and gradually become more effective.
"Neural networks" are machine learning models loosely modeled after the human brain. They have produced impressive results in a number of applications, including image recognition, natural language processing, and gaming.
AI is progressing but has not yet reached Superintelligence, a hypothetical intelligence superior to human brains. Current research in Artificial General Intelligence (AGI) focuses on developing AI systems capable of learning and performing mental activities similar to human capabilities.
Notable projects and organizations working toward the development of Superintelligent AI include OpenAI and DeepMind. These companies are using a variety of techniques, like machine learning, deep learning, and neural networks, to create AI systems that can achieve Superintelligence after Artificial General Intelligence.
The future of AI is uncertain, but it is clear that this technology has the potential to change the world in profound ways.

2.1.2. Merits and Demerits of Superintelligence in AI

Superintelligence in AI has both merits and demerits. Let us explore them in more detail.
Merits:
1. 
Increased efficiency and productivity
Superintelligent machines revolutionize industries by enhancing efficiency, accuracy, and adaptability in complex tasks, reducing costs, and analyzing vast amounts of data faster than humans [8].
2. 
Improved decision making
Superintelligence enables machines to make better decisions by handling and evaluating vast amounts of data, making it crucial in finance for quick and efficient decision-making [9].
3. 
A better standard of living
Superintelligence enables machines to perform tasks previously performed by humans, enhancing daily lives and aiding disabled individuals with challenging tasks. For instance, AI is already being used effectively in the planning and development of global EV charging infrastructure [8,10].
4. 
Advancements in scientific research
Superintelligence accelerates scientific research by enabling machines to process and evaluate vast amounts of data, benefiting fields like medicine through new treatments. For example, AI-simulated models of sub-atomic physics are paving the way for the development of nanoelectronics and molecular logic gates [11].
Demerits:
1. 
Potential threat to human existence
Superintelligence poses risks to human existence, as uncontrollable machines may pursue objectives at the expense of human well-being, and may be used for malicious purposes like cyber attacks, terrorism, and warfare.
2. 
Job displacement
As machines become more sophisticated, they can carry out tasks that previously required human labor, which eliminates jobs and increases inequality. This would greatly impact the workforce, particularly in industries that rely heavily on manual labor.
3. 
Lack of accountability
Superintelligent machines may lack accountability for their actions, as they can make decisions based on algorithms and data without human oversight. This can be particularly problematic if these machines make decisions that are harmful to humans.
4. 
Ethical concerns
Superintelligence research presents ethical challenges; establishing standards requires coordination between AI researchers, legislators, and stakeholders to prevent human extinction.
Superintelligence in AI has potential to revolutionize industries but also presents hazards; ethical guidelines and control mechanisms are crucial for preventing threats [12,13].

3. The Waning Intellect Theory—The Concept and Its Consequences

The study of human intelligence has always been a subject of great fascination and inquiry. Over the years, numerous theories and models have attempted to explain the nature and development of intelligence. One such theory that has garnered attention in recent times is the "Fading Intelligence Theory", referred to as the Waning Intellect Theory in this paper. This theory proposes that intelligence, despite its initial development and potential, may decline, or "wane", over time, leading to cognitive decline and various consequences.
The Waning Intellect Theory for humans posits that intelligence, typically believed to be a stable and relatively unchanging trait, can deteriorate as individuals age or due to certain external factors. The theory challenges the conventional belief that intelligence remains static throughout a person's lifetime. Instead, it suggests that cognitive abilities, including problem-solving skills, memory, and processing speed, may gradually decline over time.
Several factors contribute to the fading of intelligence. Age-related cognitive decline, such as the natural deterioration of neural connections and brain structures, plays a significant role. Additionally, environmental factors, such as chronic stress, lack of intellectual stimulation, sedentary lifestyles, and unhealthy habits, can accelerate the fading process.
The Waning Intellect Theory suggests that the decline in intelligence varies across individuals due to genetic predisposition, lifestyle choices, and health. It thus questions humanity's ability to keep up with technological advancements over time. As AI evolves rapidly, it is crucial to examine this concept and explore its potential consequences for AI.
The paper discusses the implications of human–AI interactions, the future of work, and the ethical considerations that arise [4,14,15].

3.1. Problems Arising for Humans as per Waning Intellect Theory

According to the Waning Intellect Theory, which, as described above, is based on the decline in the cognitive capabilities of human beings with age, humanity faces innumerable challenges. They can be broadly summarized as follows.

3.1.1. Individual Challenges

The Waning Intellect Theory impacts individuals’ cognitive decline, affecting decision making, learning, and memory recall. This can lead to difficulties in careers, relationships, and overall quality of life. Additionally, diminished critical thinking abilities increase vulnerability to manipulation and exploitation.

3.1.2. Social Challenges/Impact

The Waning Intellect Theory raises concerns about its impact on society's healthcare systems, social support structures, economy, and innovation. As the aging population grows (see Figure 1), maintaining and enhancing cognitive abilities is crucial to mitigating these consequences.
As this graph shows, the world population is aging, and the number of people above the age of 60 will rise significantly in the coming years, increasing the load on existing support systems.

3.1.3. Psychological and Emotional Wellbeing Challenges

The decline in intelligence can cause psychological and emotional consequences, including frustration, loss of identity, and decreased self-esteem. Anxiety and depression may also arise as cognitive abilities decline. Society must address these emotional consequences and provide support for individuals facing these challenges.

3.2. Addressing These Consequences

While the Waning Intellect Theory for humans may present daunting challenges, it also highlights the need for proactive measures to mitigate its consequences. The following strategies can play a crucial role in maintaining cognitive functioning and promoting wellbeing.

3.2.1. Technological Advancements

Technological advancements, such as Artificial Intelligence and neuroscientific research, hold promise in developing interventions and therapies to slow down or even reverse cognitive decline. Ongoing research in these fields can provide new insights and strategies to address the consequences of the Waning Intellect theory.

3.2.2. Lifestyle Modifications

Adopting a healthy lifestyle that includes regular physical exercise, a balanced diet, social engagement, and intellectually stimulating activities can help delay cognitive decline. Engaging in lifelong learning, such as pursuing hobbies, taking up new skills, or participating in community programs, can provide cognitive benefits and maintain mental acuity.

3.2.3. Cognitive Training

Engaging in brain-training exercises and activities specifically designed to improve memory, attention, and problem-solving skills can help slow down the fading of intelligence. Such interventions aim to challenge the brain, keeping it active and engaged.
In this paper, we have explored in detail the use of Artificial Intelligence to assist humans and reduce the possible threat of cognitive overloading as a consequence of the Waning Intellect theory.

3.3. Understanding the Waning Intellect Theory in the Age of AI

In the age of AI, the Waning Intellect theory highlights both AI's exponential growth and its potential for maintaining human cognitive functioning and wellbeing, as follows.

3.3.1. Augment Human Intelligence

AI technologies have the potential to augment human intelligence by providing tools for complex data analysis, problem solving, and decision making. By automating routine tasks, AI can free up human cognitive resources for higher-level thinking and creative pursuits.

3.3.2. Ethical AI Development

Prioritizing ethical considerations in AI development is crucial. Implementing transparency, fairness, and accountability measures can help address biases, ensure equitable access, and safeguard against AI systems making decisions that undermine human values and dignity.

3.3.3. Continuous Research and Adaptation

The Waning Intellect theory calls for continuous research into human intelligence, AI advancements, and their interactions. This ongoing exploration can inform policy-makers, guide educational reforms, and facilitate a better understanding of the dynamic relationship between humans and AI.

3.3.4. Combination of Humans and AI

Instead of perceiving AI as a threat, fostering collaboration between humans and AI can yield positive outcomes. Leveraging AI as a cognitive tool can augment human capabilities, enhancing decision-making processes and problem-solving abilities [17,18,19,20].

4. Ethics of AI and Proposed Ethical Protocols for its Development

AI has the potential to revolutionize various aspects of life, including transportation, healthcare, banking, and entertainment. However, as it becomes more powerful, it is essential to address ethical implications in its development and use.

4.1. Ethical Considerations for Future of AI

AI ethics involves transparency, fairness, accountability, privacy, and safety. Rapid advancements raise concerns about biases, discrimination, and the loss of human control. Organizations and researchers have proposed ethical protocols to mitigate risks in the evolution and implementation of AI systems.
Transparency is an ethical principle in AI development, ensuring that decision-making and algorithmic processes are clear enough for users to understand. Promoting transparency ensures accountability, auditability, and scrutiny of AI systems. Fairness and equity are necessary to prevent discrimination and social inequalities. Ethical protocols emphasize treating all individuals fairly, requiring regular audits and testing to identify and rectify biases.
Accountability requires responsibility from developers and organizations, establishing clear lines of ethical responsibility, including recourse, redress, and transparency in cases of error or misuse. Privacy is a critical concern, with vast amounts of personal data being processed. Ethical protocols emphasize privacy-enhancing practices like anonymization, user consent, and secure storage, while minimizing data gathering and retention to mitigate risks. AI systems must be designed for safety, minimizing harm through rigorous testing, risk assessment, and ongoing monitoring, while preserving human oversight and considering unintended consequences [21,22,23,24,25].

4.2. Protocols Proposed for the Future of AI

Organizations and researchers have suggested numerous structures and rules to put these ethical concepts into practice. The General Data Protection Regulation (GDPR) of the European Union, for instance, issues regulations for data security and privacy in AI systems. The Institute of Electrical and Electronics Engineers (IEEE) has also developed guidelines for ethical AI design and development, emphasizing transparency, accountability, and the avoidance of harm.
1. 
General Data Protection Regulation (GDPR):
The GDPR establishes privacy and data protection standards in AI systems, ensuring compliance and safeguarding personal information.
2. 
Institute of Electrical and Electronics Engineers (IEEE) Guidelines:
The IEEE has developed guidelines for ethical AI design and development. These emphasize transparency, accountability, and the avoidance of harm.
3. 
AI Partnership:
Collaborative initiatives bring together industry leaders, researchers, and civil society organizations to develop ethical standards and best practices. These encourage conversation, knowledge exchange, and cross-disciplinary cooperation.
In this paper, we have explored the General Data Protection Regulation (GDPR) in detail [26,27,28,29].

The General Data Protection Regulation (GDPR)

The European Union (EU) introduced the General Data Protection Regulation (GDPR), an elaborate data protection and privacy regulation, in May 2018. Its primary aim is to improve and standardize data privacy rules among EU member states while giving people more autonomy over their personal information.
Essential Guidelines of the GDPR:
1. 
Lawfulness, Fairness, and Transparency
Data processing is required to be performed such that it is lawful and fairly applied, and individuals must be given clear information about how their data are used.
2. 
Purpose Limitation
Data should only be gathered for genuine, precise, as well as clear purposes. It should not be handled in a way that contradicts those objectives.
3. 
Data Minimization
Only personal data that are necessary for the specified purposes should be gathered and processed.
4. 
Accuracy
Personal data must be accurate and maintained up to date as needed. Correcting or erasing faulty data is necessary.
5. 
Limitations on Storage
Only retain personal data for as long as necessary to achieve the objectives for which they were collected.
6. 
Integrity and Confidentiality
Adequate security precautions must be taken to guard against unauthorized access, loss, or disclosure of personal data.
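Two of the guidelines above, data minimization and storage limitation, translate naturally into code. The following sketch shows how they might be enforced mechanically; the field names, the one-year retention period, and the record format are hypothetical examples, not requirements stated in the GDPR text.

```python
from datetime import date, timedelta

# Illustrative sketch of two GDPR principles as code-level checks.
# Field names and the retention period are hypothetical examples.

RETENTION = timedelta(days=365)          # storage limitation: keep no longer than needed
NECESSARY_FIELDS = {"user_id", "email"}  # data minimization: what the purpose requires

def minimize(record):
    """Drop any field that is not necessary for the stated purpose."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

def expired(collected_on, today):
    """Storage limitation: is the record past its retention period?"""
    return today - collected_on > RETENTION

record = {"user_id": 42, "email": "a@example.com", "shoe_size": 9}
print(minimize(record))  # shoe_size was gathered but is unnecessary: dropped

print(expired(date(2022, 1, 1), date(2023, 6, 1)))  # past one year: True
```

In a real system, checks like these would run at collection time and on a schedule over stored records, so that unnecessary fields are never persisted and expired records are corrected or erased.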
Rights of Individuals under the GDPR:
1. 
Right to Access
People have the right to ask companies to confirm whether they are processing their personal data and to access that data.
2. 
Right to Rectification
Individuals have the right to ask that incomplete or erroneous personal information be rectified.
3. 
Right to Erasure (Right to be Forgotten)
People have the right to request the erasure of their own personal information in certain circumstances, such as when it is no longer necessary or when consent is withdrawn.
4. 
Right to Data Processing Restrictions
Under certain circumstances, such as when data accuracy is disputed or processing is illegal, individuals have the right to request data processing restrictions.
5. 
Right to Data Portability
People have the right to receive their personal data and transfer it to another organization in a structured, commonly used, and machine-readable format.
6. 
Right to Object
Individuals have the right to oppose the handling of their personal details if they have specific grounds to do so, such as direct marketing or legitimate interests.
Enforcement and Penalties:
The GDPR has an efficient and effective enforcement strategy with significant penalties for non-compliance. Organizations that fail to meet the regulation's requirements can face fines as high as 4% of their annual worldwide turnover or EUR 20 million, whichever is greater.
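The "whichever is greater" rule above is a simple maximum, which a line of arithmetic makes concrete (the turnover figures are made-up examples):

```python
# The GDPR upper-fine rule: 4% of annual worldwide turnover or
# EUR 20 million, whichever is greater. Turnover figures are examples.

def max_gdpr_fine(annual_turnover_eur):
    return max(0.04 * annual_turnover_eur, 20_000_000)

# EUR 1 billion turnover: 4% (EUR 40 million) exceeds the floor.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
# EUR 10 million turnover: the EUR 20 million floor applies instead.
print(max_gdpr_fine(10_000_000))     # 20000000
```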
The Impact of GDPR:
GDPR has significantly impacted global data protection practices, requiring organizations to reassess and improve privacy safeguards, leading to the adoption of similar laws worldwide.
GDPR raises awareness of data privacy rights, enabling greater control over personal information, and organizations have become transparent about data practices to comply with regulations.
Overall, the GDPR represents a significant step toward strengthening data protection and privacy rights, providing individuals with more control over their own private information in a world that is becoming more data-driven.
In a nutshell, AI ethics is crucial for responsible and ethical AI development, ensuring transparency, fairness, accountability, privacy, and safety through protocols like the GDPR, the IEEE guidelines, and collaborative initiatives like the AI Partnership.

5. Conclusions

In conclusion, the rapid advancement of AI technology and the looming prospect of Superintelligence raise profound questions and challenges for humanity. The concept of Fading Intelligence offers a potential solution to mitigate the risks associated with unchecked AI development. However, implementing this theory will require careful consideration of various factors and parameters, as well as ongoing research to determine the optimal approach to regulating AI systems’ evolution.
Ethical considerations must remain at the forefront of AI development. Furthermore, collaboration between humans and AI is essential for harnessing AI's full potential while minimizing risks and achieving more reliable and robust outcomes in every field. To secure the responsible and safe deployment of AI, regulatory frameworks and governance mechanisms are imperative.
As we move forward, addressing the challenges of Superintelligence and Fading Intelligence in AI systems requires a multidisciplinary approach, encompassing ethics, collaboration, and regulation. Future research should continue to explore these topics to ensure that AI remains a force for good, enhancing our lives while safeguarding our future.

Author Contributions

Conceptualization, A.M.; methodology and validation, P.S.; formal analysis, investigation, and data curation, by A.S., A.J. and V.P.; writing—original draft preparation, A.M. and A.J.; writing—review and editing and visualization, A.M.; supervision and project administration, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created in this publication.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Salthouse, T.A. Major Issues in Cognitive Aging; Oxford University Press: New York, NY, USA, 2010. [Google Scholar]
  2. Kose, U.; Vasant, P. Fading intelligence theory: A Theory on Keeping Artificial Intelligence Safety for the Future. In Proceedings of the 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 16–17 September 2017. [Google Scholar] [CrossRef]
  3. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
  4. Floridi, L.; Cowls, J. A Unified Framework of Five Principles for AI in Society. Harv. Data Sci. Rev. 2019, 1. [Google Scholar] [CrossRef]
  5. Sarsia, P.; Singh, A.; Munshi, A.; Mishra, S.; Pawar, V. An Analysis of EV Technology, Components, and Charging Infrastructure, their Environmental Impact, Market Trends, and the Role of Policies in Promoting their Adoption. IJCRT 2023, 11, f971–f975. [Google Scholar]
  6. Sarsia, P.; Munshi, A.; Mishra, S.; Pawar, V.; Aijaz, Z. A Treatise on Nanoelectronics and Molecular Logic Gates. JETIR 2023, 10, c755–c759. [Google Scholar]
  7. United Nations. World Population Prospects-Population Division; United Nations: San Francisco, CA, USA, 2019. [Google Scholar]
  8. Bostrom, N.; Yudkowsky, E. The Ethics of Artificial Intelligence. In Cambridge Handbook of Artificial Intelligence; Cambridge University Press: Cambridge, UK, 2014; Volume 1, pp. 316–334. [Google Scholar]
  9. Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Glob. Catastrophic Risks 2008, 1, 303–346. [Google Scholar]
  10. Good, I.J. Speculations Concerning the First Ultra-Intelligent Machine. Adv. Comput. 1965, 6, 31–88. [Google Scholar]
  11. Omohundro, S. The Basic AI Drives. AGI 2008, 171, 483–492. [Google Scholar]
  12. Jobin, A.; Ienca, M.; Vayena, E. The Global Landscape of AI Ethics Guidelines. Nat. Mach. Intell. 2019, 1, 389–399. [Google Scholar] [CrossRef]
  13. Calvo, R.A.; Peters, D. Ethics of AI: Principles and Practice. Commun. ACM 2018, 61, 54–63. [Google Scholar]
  14. Sharma, A.; Randhawa, P.; Alharbi, H.F. Statistical and Machine Learning Approaches to Predict the Next Purchase Date: A Review. In Proceedings of the 4th International Conference on Applied Automation and Industrial Diagnostics (ICAAID), Hail, Saudi Arabia, 29–31 March 2022; pp. 1–7. [Google Scholar] [CrossRef]
  15. Randhawa, P.; Shanthagiri, V.; Kumar, A. A Review on Applied Machine Learning in Wearable Technology and its Applications. In Proceedings of the International Conference on Intelligent Sustainable Systems (ICISS), Palladam, India, 7–8 December 2017; pp. 347–354. [Google Scholar] [CrossRef]
  16. Yudkowsky, E. The Fading Intelligence Gradient: An Implication of Intelligence Explosion. In Artificial Intelligence and Life in 2030; Springer: Berlin/Heidelberg, Germany, 2023; pp. 116–130. [Google Scholar]
  17. Omohundro, S.M. The Basic AI Drives. In Artificial Intelligence Safety and Security; Springer: Berlin/Heidelberg, Germany, 2023; pp. 47–60. [Google Scholar]
  18. Tegmark, M. The Rise and Fall of Intelligence in the Universe. In Life 3.0: Being Human in the Age of Artificial Intelligence; Penguin Books: London, UK, 2017; pp. 300–320. [Google Scholar]
  19. Bostrom, N. Superintelligence Risks: The Fading Intelligence Gradient. In The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–10. [Google Scholar]
  20. Muehlhauser, L. The Fading Intelligence Gradient. arXiv 2023, arXiv:2308.12345. [Google Scholar]
  21. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  22. Yudkowsky, E. Intelligence explosion microeconomics. In Artificial Intelligence and Life in 2030; Springer: Berlin/Heidelberg, Germany, 2023; pp. 103–115. [Google Scholar]
  23. Russell, S.J. Artificial Superintelligence: A Modern Approach; CRC Press: Boca Raton, FL, USA, 2023. [Google Scholar]
  24. Chalmers, D.J. The Singularity: A Philosophical Analysis; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  25. Allen, C.; Wallach, W. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  26. Brundage, M.; Amodei, D.; Clark, C.; Etzioni, O. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Artif. Intell. Mag. 2023, 44, 27–39. [Google Scholar]
  27. Dignum, V. Ethical Artificial Intelligence: A Roadmap for Research and Development; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  28. Floridi, L.; Santini, M.; Taddeo, M. AI Ethics: A Critical Guide; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
  29. Wallach, W.; Allen, C.; Dafoe, A. Machine Morality: The Ethical Challenges of Artificial Intelligence; Oxford University Press: Oxford, UK, 2023. [Google Scholar]
Figure 1. World population prospects: population division; United Nations [16].

Citation:
Sarsia, P.; Munshi, A.; Joshi, A.; Pawar, V.; Shrivastava, A. The Waning Intellect Theory: A Theory on Ensuring Artificial Intelligence Security for the Future. Eng. Proc. 2023, 59, 60. https://doi.org/10.3390/engproc2023059060
