Artificial Intelligence for Online Safety

A special issue of Big Data and Cognitive Computing (ISSN 2504-2289).

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 17330

Special Issue Editors


Dr. Marco Fisichella
Guest Editor
L3S Research Center, Leibniz University of Hannover, 30167 Hannover, Germany
Interests: artificial intelligence; federated learning; data mining; information retrieval; generative models; event detection; clustering methods based on statistical approaches; near-duplicate detection

Dr. Antonia Russo
Guest Editor Assistant
Department of Information Engineering, Infrastructure and Sustainable Energy (DIIES), University Mediterranea of Reggio Calabria, 89122 Reggio Calabria, Italy
Interests: security; privacy; access control; social network analysis

Special Issue Information

Dear Colleagues,

Nearly every aspect of our lives is impacted by the Internet, including entertainment, education, health, commerce, government, social interaction, and more. The profound impact that these recent changes are having on our lives requires us to rethink how we make decisions in these areas. In addition to questions about costs and benefits, users also face important questions about trust and security. For example, we often find cases where people avoid online banking or buying products online because they fear becoming victims of fraudulent activity. In addition, informed users are increasingly reluctant to trust information sources. It is therefore of utmost importance to create a trustworthy environment for users—an environment where both users and content are trustworthy.

When fighting threats, companies face a dilemma, given that no system is perfect. In e-commerce, on the one hand, threats and the resulting losses should be reduced; on the other hand, users want neither to be accused of fraud nor to be treated like criminals. In other areas, such as e-health, the problems associated with data abuse and security leaks could result in damage even more severe than purely financial losses. Finally, platforms that allow users to share information and non-curated content have recently faced the complex trade-off between free expression and moderation. In this latter case, the spread of misinformation poses a threat to society, health, and even democracy.

Given the exponential growth of these vulnerabilities and their exploitation against businesses and societies, online service providers are looking for automated solutions that can mitigate such problems, including automated threat detection, malicious user activity detection, and automated content curation.

In recent years, AI has taken on an increasingly central role in threat prevention. AI techniques are popular and widely used in the threat and fraud detection industry because of (i) their computational power in analyzing and processing data and extracting new patterns; (ii) their scalability, as models become more accurate and effective at prediction the more data they receive; and (iii) their efficiency in obtaining results compared to manual efforts.

Although both industry and academia have always invested significant effort in tackling the above-mentioned problems, we have identified a gap between the two fields. On the one hand, industry, which is affected firsthand by these problems, has a wealth of data and suboptimal solutions (which are not always shared) to mitigate these risks. On the other hand, academia has cutting-edge solutions but often has limited access to data and users.

This Special Issue aims at bringing together research from a wide array of disciplines (mathematics, computer science, economics, philosophy, social science) to (i) understand the causes and motivations of fraudulent activities in online environments, (ii) find AI solutions to detect and analyze threats, malicious activities, and the spread of misinformation, and (iii) derive means to prevent them.

We invite the submission of ongoing and completed research work with a particular focus on the following topics:

  • User modeling of fraudulent and malicious users
  • Feature engineering in the online threat detection domain
  • Outlier and anomaly detection
  • Fraud detection
  • Detecting, preventing, and predicting anomalies in Web data (e.g., fake content, spam, algorithmic and data biases)
  • Fraud detection in the streaming domain
  • Distributed fraud detection systems
  • Malicious user activity in Web-based systems
  • Safeguarding and governance of the Web, including anonymity, security, and trust
  • Accountable use of the Web
  • Online safety in the medical domain
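As a concrete illustration of the outlier and anomaly detection topic above, a minimal robust-statistics sketch (not drawn from any of the published papers; the function name, sample values, and threshold are illustrative) can flag transaction amounts whose modified z-score, computed from the median absolute deviation rather than the outlier-sensitive standard deviation, exceeds a cutoff:

```python
from statistics import median

def flag_outliers(values, threshold=3.5):
    """Return indices whose modified z-score (median/MAD based) exceeds threshold."""
    med = median(values)
    abs_dev = [abs(v - med) for v in values]
    mad = median(abs_dev)
    if mad == 0:
        # Degenerate case: most values identical; flag anything that deviates at all.
        return [i for i, d in enumerate(abs_dev) if d > 0]
    # 0.6745 rescales the MAD to be comparable to a standard deviation under normality.
    return [i for i, d in enumerate(abs_dev) if 0.6745 * d / mad > threshold]

amounts = [10.0, 12.0, 11.0, 13.0, 500.0]
print(flag_outliers(amounts))  # → [4], the anomalous 500.0 transaction
```

The median-based score is used here because a single extreme transaction inflates the mean and standard deviation enough to mask itself; robust estimators avoid that failure mode.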

Dr. Marco Fisichella
Guest Editor

Dr. Antonia Russo
Guest Editor Assistant

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Big Data and Cognitive Computing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • fraud detection
  • outlier and anomaly detection
  • anonymity
  • security and trust
  • privacy
  • accountability and auditability
  • federated learning

Published Papers (4 papers)


Research

16 pages, 756 KiB  
Article
A New Approach to Data Analysis Using Machine Learning for Cybersecurity
by Shivashankar Hiremath, Eeshan Shetty, Allam Jaya Prakash, Suraj Prakash Sahoo, Kiran Kumar Patro, Kandala N. V. P. S. Rajesh and Paweł Pławiak
Big Data Cogn. Comput. 2023, 7(4), 176; https://doi.org/10.3390/bdcc7040176 - 21 Nov 2023
Cited by 1 | Viewed by 3101
Abstract
The internet has become an indispensable tool for organizations, permeating every facet of their operations. Virtually all companies leverage Internet services for diverse purposes, including the digital storage of data in databases and cloud platforms. Furthermore, the rising demand for software and applications has led to a widespread shift toward computer-based activities within the corporate landscape. However, this digital transformation has exposed the information technology (IT) infrastructures of these organizations to a heightened risk of cyber-attacks, endangering sensitive data. Consequently, organizations must identify and address vulnerabilities within their systems, with a primary focus on scrutinizing customer-facing websites and applications. This work aims to tackle this pressing issue by employing data analysis tools, such as Power BI, to assess vulnerabilities within a client’s application or website. Through a rigorous analysis of data, valuable insights and information will be provided, which are necessary to formulate effective remedial measures against potential attacks. Ultimately, the central goal of this research is to demonstrate that clients can establish a secure environment, shielding their digital assets from potential attackers. Full article
(This article belongs to the Special Issue Artificial Intelligence for Online Safety)

16 pages, 975 KiB  
Article
Federated Learning to Safeguard Patients Data: A Medical Image Retrieval Case
by Gurtaj Singh, Vincenzo Violi and Marco Fisichella
Big Data Cogn. Comput. 2023, 7(1), 18; https://doi.org/10.3390/bdcc7010018 - 18 Jan 2023
Cited by 9 | Viewed by 3050
Abstract
Healthcare data are distributed and confidential, making it difficult to use centralized automatic diagnostic techniques. For example, different hospitals hold the electronic health records (EHRs) of different patient populations; however, transferring these data between hospitals is difficult due to the sensitive nature of the information. This presents a significant obstacle to the development of efficient and generalizable analytical methods that require a large amount of diverse Big Data. Federated learning (FL) allows multiple institutions to work together to develop a machine learning algorithm without sharing their data: organizations share only the parameters of their models with each other, which allows them to reap the benefits of a model developed with a richer data set while protecting the confidentiality of their data. We conducted a systematic study to analyze the current state of FL in the healthcare industry and to explore both the limitations of this technology and its potential. Standard methods for large-scale machine learning, distributed optimization, and privacy-friendly data analytics need to be fundamentally rethought to address the new problems posed by training on diverse networks that may contain large amounts of data. In this article, we discuss the particular qualities and difficulties of federated learning, provide a comprehensive overview of current approaches, and outline several directions for future work that are relevant to a variety of research communities. Full article
(This article belongs to the Special Issue Artificial Intelligence for Online Safety)
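The parameter-sharing scheme described in the abstract above can be sketched in a few lines: each institution trains locally and shares only its model parameters, which a coordinator averages, weighted by local data size. This is a minimal illustration of federated averaging under stated assumptions (the function name, hospital labels, and numbers are hypothetical, not the paper's implementation):

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors,
    weighting each client by the size of its local data set."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[j] * n / total for w, n in zip(client_weights, client_sizes))
        for j in range(n_params)
    ]

# Two hospitals share only parameter vectors, never raw patient records.
hospital_a = [0.2, 0.4]   # model trained on 100 local records
hospital_b = [0.6, 0.8]   # model trained on 300 local records
global_model = federated_average([hospital_a, hospital_b], [100, 300])
print(global_model)  # a model close to [0.5, 0.7]
```

The weighting by data-set size is what lets institutions with richer data contribute proportionally more to the shared model while each keeps its records in house.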

19 pages, 705 KiB  
Article
PRIVAFRAME: A Frame-Based Knowledge Graph for Sensitive Personal Data
by Gaia Gambarelli and Aldo Gangemi
Big Data Cogn. Comput. 2022, 6(3), 90; https://doi.org/10.3390/bdcc6030090 - 26 Aug 2022
Cited by 3 | Viewed by 2619
Abstract
The pervasiveness of dialogue systems and virtual conversation applications raises an important theme: the potential of sharing sensitive information, and the consequent need for protection. To guarantee the subject’s right to privacy, and to avoid the leakage of private content, it is important to treat sensitive information. However, any treatment first requires identifying sensitive text, and appropriate techniques for doing so automatically. The Sensitive Information Detection (SID) task has been explored in the literature in different domains and languages, but there is no common benchmark. Current approaches are mostly based on artificial neural networks (ANNs) or transformer models. Our research focuses on identifying categories of personal data in informal English sentences by adopting a new logical-symbolic approach, and eventually hybridising it with ANN models. We present a frame-based knowledge graph built for personal data categories defined in the Data Privacy Vocabulary (DPV). The knowledge graph is designed through the logical composition of already existing frames, and it has been evaluated as background knowledge for a SID system against a labeled sensitive information dataset. The accuracy of PRIVAFRAME reached 78%. By comparison, a transformer-based model achieved 12% lower performance on the same dataset. The top-down logical-symbolic frame-based model allows a granular analysis and does not require a training dataset. These advantages lead us to use it as a layer in a hybrid model, where the logical SID is combined with an ANN-based SID tested in a previous study by the authors. Full article
(This article belongs to the Special Issue Artificial Intelligence for Online Safety)

21 pages, 1199 KiB  
Article
Enhancing Marketing Provision through Increased Online Safety That Imbues Consumer Confidence: Coupling AI and ML with the AIDA Model
by Yang-Im Lee and Peter R. J. Trim
Big Data Cogn. Comput. 2022, 6(3), 78; https://doi.org/10.3390/bdcc6030078 - 12 Jul 2022
Cited by 4 | Viewed by 7519
Abstract
To enhance the effectiveness of artificial intelligence (AI) and machine learning (ML) in online retail operations and avoid succumbing to digital myopia, marketers need to be aware of the different approaches to utilizing AI/ML in terms of the information they make available to appropriate groups of consumers. This can be viewed as utilizing AI/ML to improve the customer journey experience. Reflecting on this, the main question to be addressed is: how can retailers utilize big data through the implementation of AI/ML to improve the efficiency of their marketing operations so that customers feel safe buying online? To answer this question, we conducted a systematic literature review and posed several subquestions that resulted in insights into why marketers need to pay specific attention to AI/ML capability. We explain how different AI/ML tools/functionalities can be related to different stages of the AIDA (Awareness, Interest, Desire, and Action) model, which in turn helps retailers to recognize potential opportunities as well as increase consumer confidence. We outline how digital myopia can be reduced by focusing on human inputs. Although challenges still exist, it is clear that retailers need to identify the boundaries in terms of AI/ML’s ability to enhance the company’s business model. Full article
(This article belongs to the Special Issue Artificial Intelligence for Online Safety)
