Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2024

Special Issue Editors


Prof. Dr. Herman Tavani
Guest Editor
Philosophy Department, Rivier University, 420 South Main Street, Nashua, NH 03060, USA
Interests: information and computer ethics; AI ethics; privacy; data (science) ethics; public health ethics; ethical aspects of emerging technologies

Dr. Jeffrey Buechner
Guest Editor
Department of Philosophy, Rutgers University, Newark, NJ 07102, USA
Interests: information and computer ethics; machine ethics; privacy; ethical aspects of bioinformatics; computational genomics; emerging technologies

Special Issue Information

Dear Colleagues,

We invite you to submit a paper for consideration in our Special Issue, titled “Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?”. This Special Issue of Information examines a wide range of trust- and privacy-related questions raised by the relatively recent deployment of AI chatbots in general, and OpenAI’s ChatGPT3 in particular.

This Special Issue also builds on a cluster of trust and privacy concerns examined in an earlier (2011) Special Issue, “Privacy and Trust in a Networked World”, published in Vol. 1 of Information. Since that publication, some of those concerns have been significantly exacerbated by the impact of (AI) chatbots. These concerns are reflected in the following questions:

  1. Are current philosophical and legal theories of privacy adequate in an era of AI chatbots?
  2. Do current privacy regulations, including the EU’s GDPR, need updating and expanding to meet challenges posed by AI chatbots?
  3. How does the kind of disinformation created by chatbots diminish either one's privacy on the Internet or one's trust in Internet transactions?
  4. How high should the bar be set for whistleblowing concerning cases where chatbots violate canons of either privacy or trust?
  5. Do we need to expand, or possibly redefine, our conventional concepts of trust and trustworthiness in the chatbot era?
  6. To what extent can we trust chatbots to act in our best interests?
  7. Can we trust Big Tech corporations to comply with external regulations, or to regulate themselves, in the further development of chatbots?
  8. How can chatbots be regulated in ways that would prevent them from exacerbating problems already associated with “deep fakes”?
  9. How much, if any, autonomy should chatbots be given by their developers?
  10. To what extent can chatbots be (genuinely) autonomous, both in a philosophical and in a practical sense?
  11. Could overreliance on chatbots to do one's work produce human automatons—human beings who have no mental life of their own and who could be exploited by unscrupulous humans for nefarious political and economic purposes?
  12. Could chatbots someday achieve consciousness?
  13. In which ways do chatbots threaten democracy and free elections?
  14. Can a chatbot be a genuine author of an academic or literary work?
  15. Can a chatbot be a “creator” of artistic works, and, if so, who should be granted legal ownership of creative works generated by chatbots?
  16. Is there a danger that chatbots could learn to psychoanalyze a human being and then use that information to direct that human in ways that are tantamount to mind control?
  17. Does focusing so much of our recent attention on ethical aspects of chatbots obscure, and possibly threaten to minimize, the attention also needed for analyzing serious ethical issues raised by other forms of AI technology?

The above theoretical and applied ethics questions are by no means intended to be exhaustive.

Prof. Dr. Herman Tavani
Dr. Jeffrey Buechner
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • ethics
  • chatbots
  • ChatGPT3
  • privacy
  • trust

Published Papers (1 paper)


Research

24 pages, 1873 KiB  
Article
Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework
by Anum Faraz, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos and Andreas Kanavos
Information 2024, 15(4), 233; https://doi.org/10.3390/info15040233 - 19 Apr 2024
Abstract
This study introduces Protectbot, an innovative chatbot framework designed to improve safety in children’s online gaming environments. At its core, Protectbot incorporates DialoGPT, a conversational Artificial Intelligence (AI) model rooted in Generative Pre-trained Transformer 2 (GPT-2) technology, engineered to simulate human-like interactions within gaming chat rooms. The framework is distinguished by a robust text classification strategy, rigorously trained on the Publicly Available Natural 2012 (PAN12) dataset, aimed at identifying and mitigating potential sexual predatory behaviors through chat conversation analysis. By utilizing fastText for word embeddings to vectorize sentences, we have refined a support vector machine (SVM) classifier, achieving remarkable performance metrics, with recall, accuracy, and F-scores approaching 0.99. These metrics not only demonstrate the classifier’s effectiveness, but also signify a significant advancement beyond existing methodologies in this field. The efficacy of our framework is additionally validated on a custom dataset, composed of 71 predatory chat logs from the Perverted Justice website, further establishing the reliability and robustness of our classifier. Protectbot represents a crucial innovation in enhancing child safety within online gaming communities, providing a proactive, AI-enhanced solution to detect and address predatory threats promptly. Our findings highlight the immense potential of AI-driven interventions to create safer digital spaces for young users.
(This article belongs to the Special Issue Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?)
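For readers curious how the pipeline summarized in the abstract might look in practice, the following is a minimal sketch in Python of a fastText-plus-SVM text classifier. It is an illustration only, not the authors' implementation: the load_pan12() helper, the corpus file name, the embedding dimension, and the SVM hyperparameters are all assumptions.

```python
# Minimal sketch of the pipeline described in the abstract:
# fastText sentence vectors feeding a support vector machine (SVM).
# NOTE: load_pan12(), file names, and hyperparameters are hypothetical
# placeholders, not details taken from the paper.

import fasttext                       # pip install fasttext
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.svm import SVC


def load_pan12():
    """Hypothetical loader: returns (texts, labels), where label 1 marks
    predatory chat messages and 0 marks benign ones."""
    raise NotImplementedError("Replace with real PAN12 preprocessing.")


texts, labels = load_pan12()

# Train unsupervised fastText embeddings on the raw chat text; fastText
# then averages word vectors into one fixed-length vector per message.
with open("pan12_corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(texts))
ft = fasttext.train_unsupervised("pan12_corpus.txt", model="skipgram", dim=100)

# get_sentence_vector() rejects embedded newlines, so strip them first.
X = np.array([ft.get_sentence_vector(t.replace("\n", " ")) for t in texts])
y = np.array(labels)

# Hold out a test split and fit the SVM on the sentence vectors.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Averaging fastText word vectors is one common way to obtain sentence vectors; the abstract does not specify the exact vectorization or SVM settings, so the choices above should be treated as defaults to be tuned rather than the reported configuration.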
