J. Cybersecur. Priv., Volume 2, Issue 4 (December 2022) – 8 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
17 pages, 531 KiB  
Review
User Reputation on E-Commerce: Blockchain-Based Approaches
by Maria José Angélico Gonçalves, Rui Humberto Pereira and Marta Alexandra Guerra Magalhães Coelho
J. Cybersecur. Priv. 2022, 2(4), 907-923; https://doi.org/10.3390/jcp2040046 - 19 Dec 2022
Cited by 2 | Viewed by 2425
Abstract
User trust is a fundamental issue in e-commerce. To address this problem, recommendation systems have been widely used in different application domains, including social media, healthcare, e-commerce, and others. In this paper, we present a systematic review of the literature in the area of blockchain-based reputation models and discuss the obtained results, answering the initial research questions. These findings lead us to conclude that the existing systems are based on a trusted third party (TTP) to collect and store reputation data, which does not provide transparency on users’ reputation scores. In the recent literature, on the one hand, blockchain-based reputation systems have been highlighted as possible solutions that effectively provide the necessary transparency, as well as effective identity management. On the other hand, new challenges are posed in terms of user privacy and performance, due to the specific characteristics of the blockchain. According to the literature, two major approaches have been proposed, based on public and permissioned blockchains. Each approach applies adjusted models for calculating reputation scores. Despite the undoubted advantages added by a blockchain, the problem is only partially solved, since there is no effective way to prevent blockchain oracles from feeding the chain with false, unfair, or biased data. In our future work, we intend to explore the two approaches discussed in the literature in order to propose a new blockchain-based model for deriving user reputation scores. Full article
(This article belongs to the Section Cryptography and Cryptology)
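To make the idea of a reputation model concrete, the following is a minimal, hypothetical sketch of a reputation calculation (time-decayed, rater-weighted averaging). It is not the model proposed by the authors, and all names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of a simple reputation score: each rating is weighted by the
# rater's own reputation and decays with age. Not the authors' model; names and
# parameters (e.g., the 90-day half-life) are assumptions for illustration only.
from dataclasses import dataclass
import math
import time

@dataclass
class Rating:
    rater_reputation: float  # rater's current reputation in [0, 1]
    score: float             # rating given to the seller, in [0, 1]
    timestamp: float         # Unix time the rating was recorded

def reputation(ratings, half_life_days=90.0, now=None):
    """Time-decayed, rater-weighted mean of ratings; returns 0.5 (neutral) if empty."""
    now = now if now is not None else time.time()
    num = den = 0.0
    for r in ratings:
        age_days = (now - r.timestamp) / 86400.0
        weight = r.rater_reputation * math.exp(-math.log(2) * age_days / half_life_days)
        num += weight * r.score
        den += weight
    return num / den if den else 0.5
```

In a blockchain-based design, the ratings would be recorded on-chain for transparency while a model of this kind derives the score; the oracle problem noted in the abstract (false, unfair, or biased input data) remains.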
25 pages, 2104 KiB  
Article
An Understanding of the Vulnerability of Datasets to Disparate Membership Inference Attacks
by Hunter D. Moore, Andrew Stephens and William Scherer
J. Cybersecur. Priv. 2022, 2(4), 882-906; https://doi.org/10.3390/jcp2040045 - 14 Dec 2022
Cited by 1 | Viewed by 2125
Abstract
Recent efforts have shown that training data is not secured through the generalization and abstraction of algorithms. This vulnerability of the training data has been expressed through membership inference attacks that seek to discover the use of specific records within the training dataset of a model. Additionally, disparate membership inference attacks have been shown to achieve better accuracy compared with their macro attack counterparts. These disparate membership inference attacks use a pragmatic approach to attack individual, more vulnerable sub-sets of the data, such as underrepresented classes. While previous work in this field has explored model vulnerability to these attacks, this effort explores the vulnerability of datasets themselves to disparate membership inference attacks. This is accomplished through the development of a vulnerability-classification model that classifies datasets as vulnerable or secure to these attacks. To develop this model, a vulnerability-classification dataset is developed from over 100 datasets, including frequently cited datasets within the field. These datasets are described using a feature set of over 100 features and assigned labels developed from a combination of various modeling and attack strategies. By averaging the attack accuracy over 13 different modeling and attack strategies, the authors explore the vulnerabilities of the datasets themselves as opposed to a particular modeling or attack effort. The in-class observational distance, width ratio, and the proportion of discrete features are found to dominate the attributes defining dataset vulnerability to disparate membership inference attacks. These features are explored in deeper detail and used to develop exploratory methods for hardening these class-based sub-datasets against attacks, showing preliminary mitigation success with combinations of feature-reduction and class-balancing strategies. Full article
(This article belongs to the Section Security Engineering & Applications)
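For readers unfamiliar with the attack family, below is a minimal sketch of a common confidence-threshold membership inference baseline; it is not the disparate attacks or the vulnerability-classification model described above, and the function names and threshold are assumptions.

```python
# Illustrative confidence-threshold membership inference baseline (an assumption for
# illustration, not the paper's disparate attack): records on which the target model
# is unusually confident are guessed to be members of its training set.
import numpy as np

def mi_attack_confidence(target_predict_proba, X, threshold=0.9):
    """Guess membership per record from the target model's top-class confidence."""
    probs = target_predict_proba(X)      # shape: (n_samples, n_classes)
    confidence = probs.max(axis=1)
    return confidence >= threshold       # True = guessed training member

def attack_accuracy(guesses, is_member):
    """Fraction of correct membership guesses against ground-truth labels."""
    return float(np.mean(np.asarray(guesses) == np.asarray(is_member)))
```

A disparate variant would apply such an attack separately to the more vulnerable sub-sets (e.g., underrepresented classes) rather than to the dataset as a whole.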
20 pages, 573 KiB  
Article
Differentially Private Block Coordinate Descent for Linear Regression on Vertically Partitioned Data
by Jins de Jong, Bart Kamphorst and Shannon Kroes
J. Cybersecur. Priv. 2022, 2(4), 862-881; https://doi.org/10.3390/jcp2040044 - 9 Nov 2022
Viewed by 1784
Abstract
We present a differentially private extension of the block coordinate descent algorithm by means of objective perturbation. The algorithm iteratively performs linear regression in a federated setting on vertically partitioned data. In addition to a privacy guarantee, we derive a utility guarantee; a tolerance parameter indicates how much the differentially private regression may deviate from the analysis without differential privacy. The algorithm’s performance is compared with that of the standard block coordinate descent algorithm on both artificial test data and real-world data. We find that the algorithm is fast and able to generate practical predictions with single-digit privacy budgets, albeit with some accuracy loss. Full article
(This article belongs to the Section Privacy)
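The sketch below illustrates the general shape of noisy block coordinate descent for least squares on vertically partitioned features. It is a simplification that adds Gaussian noise to each block update rather than implementing the paper's objective-perturbation mechanism, and the noise scale is an arbitrary placeholder with no formal privacy accounting.

```python
# Simplified sketch of noisy block coordinate descent for least squares on vertically
# partitioned data. NOT the paper's objective-perturbation algorithm: noise_std is an
# arbitrary placeholder and no privacy budget is tracked here.
import numpy as np

def noisy_block_cd(X_blocks, y, noise_std=0.1, n_rounds=20, rng=None):
    """X_blocks: list of per-party feature matrices (same rows, disjoint columns)."""
    rng = rng or np.random.default_rng(0)
    w_blocks = [np.zeros(Xk.shape[1]) for Xk in X_blocks]
    residual = np.asarray(y, dtype=float).copy()
    for _ in range(n_rounds):
        for k, Xk in enumerate(X_blocks):
            # Local least-squares update of block k against the current residual.
            delta, *_ = np.linalg.lstsq(Xk, residual, rcond=None)
            delta += rng.normal(0.0, noise_std, size=delta.shape)  # perturb before release
            w_blocks[k] += delta
            residual -= Xk @ delta
    return w_blocks
```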
9 pages, 243 KiB  
Article
Cybersecurity in Hospitals: An Evaluation Model
by Mohammed A. Ahmed, Hatem F. Sindi and Majid Nour
J. Cybersecur. Priv. 2022, 2(4), 853-861; https://doi.org/10.3390/jcp2040043 - 26 Oct 2022
Cited by 3 | Viewed by 4391
Abstract
Hospitals have historically been known for their strong risk mitigation policies and designs, which are becoming neither easier nor simpler to plan and operate. Currently, new technologies and devices are developed every day in the medical industry. These devices, systems, and personnel are in an ever-higher state of connection to the network and servers, which necessitates the use of stringent cybersecurity policies. Therefore, this work aims to comprehensively identify, quantify, and model the cybersecurity status quo in healthcare facilities. The developed model will allow healthcare organizations to understand the imminent operational risks and to identify which measures to improve or add to their systems in order to mitigate those risks. Thus, in this work we develop a novel assessment tool that provides hospitals with a proper reflection of their status quo and assists hospital designers in adding the suggested cyber risk mitigation measures to the design itself before operation. Full article
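As a rough illustration of what such an assessment tool might compute, here is a hypothetical weighted-checklist score; the controls, weights, and scale are invented for the example and are not the paper's model.

```python
# Hypothetical weighted-checklist evaluation score (not the paper's actual model or
# criteria). Each control has a weight and a compliance level in [0, 1]; the facility
# score is the weighted mean, scaled to [0, 100].
def cyber_score(controls):
    """controls: iterable of (weight, compliance) pairs."""
    controls = list(controls)
    total_weight = sum(w for w, _ in controls)
    if total_weight == 0:
        return 0.0
    return 100.0 * sum(w * c for w, c in controls) / total_weight

# Example: segmentation fully in place (1.0), device patching partial (0.5),
# offline backups missing (0.0) -> roughly 66.7 out of 100.
print(round(cyber_score([(3.0, 1.0), (2.0, 0.5), (1.0, 0.0)]), 1))
```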
23 pages, 720 KiB  
Article
Calibrating the Attack to Sensitivity in Differentially Private Mechanisms
by Ayşe Ünsal and Melek Önen
J. Cybersecur. Priv. 2022, 2(4), 830-852; https://doi.org/10.3390/jcp2040042 - 18 Oct 2022
Cited by 2 | Viewed by 2024
Abstract
This work studies the power of adversarial attacks against machine learning algorithms that use differentially private mechanisms, which the adversary turns into a weapon. In our setting, the adversary aims to modify the content of a statistical dataset via the insertion of additional data without being detected, using differential privacy to his/her own benefit. The goal of this study is to evaluate how easy it is to detect such attacks (anomalies) when the adversary makes use of Gaussian and Laplacian perturbation, using both statistical and information-theoretic tools. To this end, firstly, via hypothesis testing, we characterize statistical thresholds for the adversary in various settings, which balance the privacy budget and the impact of the attack (the modification applied to the original data) in order to avoid detection. In addition, we establish the privacy-distortion trade-off in the sense of the well-known rate-distortion function for the Gaussian mechanism by using an information-theoretic approach. Accordingly, we derive an upper bound on the variance of the attacker’s additional data as a function of the sensitivity and the original data’s second-order statistics. Lastly, we introduce a new privacy metric based on Chernoff information for anomaly detection under differential privacy, as a stronger alternative to (ϵ,δ)-differential privacy in Gaussian mechanisms. Analytical results are supported by numerical evaluations. Full article
(This article belongs to the Section Privacy)
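For context, the sketch below shows the standard Gaussian mechanism with its noise scale calibrated to the query's L2 sensitivity under the classical (ϵ, δ) bound; it does not implement the paper's attack thresholds or Chernoff-information metric, and the example parameters are placeholders.

```python
# Standard Gaussian mechanism sketch: noise scale calibrated to L2 sensitivity via the
# classical sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps bound (valid for eps < 1).
# Background only; not the paper's detection thresholds or its new privacy metric.
import math
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, eps, delta, rng=None):
    """Return the query value perturbed with (eps, delta)-calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: a counting query (sensitivity 1) released with eps = 0.5, delta = 1e-5.
print(gaussian_mechanism(42.0, l2_sensitivity=1.0, eps=0.5, delta=1e-5))
```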
30 pages, 4151 KiB  
Review
A Survey of the Recent Trends in Deep Learning Based Malware Detection
by Umm-e-Hani Tayyab, Faiza Babar Khan, Muhammad Hanif Durad, Asifullah Khan and Yeon Soo Lee
J. Cybersecur. Priv. 2022, 2(4), 800-829; https://doi.org/10.3390/jcp2040041 - 28 Sep 2022
Cited by 31 | Viewed by 9188
Abstract
Monitoring Indicators of Compromise (IOCs) enables malware detection by identifying malicious activity. Malicious activities potentially lead to a system breach or data compromise. Various tools and anti-malware products exist for the detection of malware and cyberattacks utilizing IOCs, but all have several shortcomings. For instance, anti-malware systems make use of malware signatures, requiring a database containing such signatures to be constantly updated. Additionally, this technique does not work for zero-day attacks or variants of existing malware. In the quest to fight zero-day attacks, the research paradigm shifted from primitive methods to classical machine-learning-based methods. Primitive methods are limited in catering to anti-analysis techniques against zero-day attacks. Hence, research moved towards methods utilizing classical machine learning; however, these methods also come with certain limitations. These include, but are not limited to, the latency introduced by the feature-engineering phase over the entire training dataset, as opposed to the real-time analysis requirement. Likewise, additional layers of data engineering to cater to the increasing volume of data introduce further delays. This led to the use of deep-learning-based methods for malware detection. With the speedy emergence of zero-day malware, researchers have experimented with few-shot learning so that reliable solutions can be produced for malware detection even with a small amount of training data at hand. In this paper, we survey several possible strategies to support the real-time detection of malware and propose a hierarchical model to discover security events or threats in real time. A key focus of this survey is the use of deep-learning-based methods, which dominate this research area by providing automatic feature engineering, the capability of dealing with large datasets, the mining of features from limited data samples, and support for one-shot learning. We compare deep-learning-based approaches with conventional machine-learning-based approaches and primitive (statistical-analysis-based) methods commonly reported in the literature. Full article
(This article belongs to the Special Issue Secure Software Engineering)
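As a point of reference for the methods compared in the survey, here is a minimal, hypothetical baseline that represents a binary by its byte histogram and trains a small neural network; the feature choice, labels, and hyperparameters are assumptions, not taken from the surveyed works.

```python
# Hypothetical byte-histogram baseline for malware classification (illustration only;
# features, labels, and hyperparameters are assumptions, not from the survey).
import numpy as np
from sklearn.neural_network import MLPClassifier

def byte_histogram(path):
    """256-bin normalized byte-frequency feature vector for one file."""
    data = np.fromfile(path, dtype=np.uint8)
    hist = np.bincount(data, minlength=256).astype(float)
    return hist / max(hist.sum(), 1.0)

# X: stacked byte histograms, y: 1 = malware, 0 = benign (labels from your own corpus).
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X, y)
# predictions = clf.predict(X_test)
```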
22 pages, 497 KiB  
Article
A Distributed Model for Privacy Preserving V2I Communication with Strong Unframeability and Efficient Revocation
by Panayiotis Kalogeropoulos, Dimitris Papanikas and Panayiotis Kotzanikolaou
J. Cybersecur. Priv. 2022, 2(4), 778-799; https://doi.org/10.3390/jcp2040040 - 20 Sep 2022
Viewed by 9227
Abstract
Although Vehicle-to-Infrastructure (V2I) communications greatly improve the efficiency of early warning systems for car safety, communication privacy is an important concern. Although solutions exist in the literature for privacy-preserving VANET communications, they usually require high trust assumptions for a single authority. In this paper, we propose a distributed trust model for privacy-preserving V2I communications. Trust is distributed among a certification authority that issues the vehicles’ credentials and a signing authority that anonymously authenticates V2I messages in a zero-knowledge manner. Anonymity is based on bilinear pairings and partially blind signatures. In addition, our system supports enhanced conditional privacy, since both authorities and the relevant RSU need to collaborate to trace a message back to a vehicle, while efficient certificateless revocation is supported. Moreover, our scheme provides strong unframeability for honest vehicles: even if all the entities collude, it is not possible to frame an honest vehicle by tracing a forged message back to it. The proposed scheme concurrently achieves conditional privacy and strong unframeability for vehicles without assuming a fully trusted authority. Our evaluation results show that the system allows RSUs to efficiently handle multiple messages per second, which suffices for real-world implementations. Full article
(This article belongs to the Special Issue Cybersecurity in the Transportation Ecosystem)
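The paper's pairing-based construction is not reproduced here. As a much simpler illustration of the "no single authority can trace a vehicle" idea, the sketch below XOR-secret-shares the pseudonym-to-vehicle mapping between two authorities, so tracing requires both shares; all names are hypothetical.

```python
# Simplified illustration of distributed tracing trust (NOT the paper's bilinear-pairing
# or partially blind signature protocol): the pseudonym-to-vehicle mapping is XOR
# secret-shared between two authorities, so neither can trace a message alone.
import os

def split_mapping(vehicle_id: bytes):
    """Split a vehicle identifier into two XOR shares, one per authority."""
    share_a = os.urandom(len(vehicle_id))
    share_b = bytes(x ^ y for x, y in zip(vehicle_id, share_a))
    return share_a, share_b

def trace(share_a: bytes, share_b: bytes) -> bytes:
    """Recover the vehicle identifier only when both authorities cooperate."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

a, b = split_mapping(b"VEH-1234")
assert trace(a, b) == b"VEH-1234"
```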
14 pages, 760 KiB  
Article
Detection of SQL Injection Attack Using Machine Learning Techniques: A Systematic Literature Review
by Maha Alghawazi, Daniyal Alghazzawi and Suaad Alarifi
J. Cybersecur. Priv. 2022, 2(4), 764-777; https://doi.org/10.3390/jcp2040039 - 20 Sep 2022
Cited by 29 | Viewed by 14335
Abstract
SQL injection attacks usually occur when attackers modify, delete, read, or copy data from database servers, and they are among the most damaging web application attacks. A successful SQL injection attack can affect all aspects of security, including confidentiality, integrity, and data availability. SQL (Structured Query Language) is used to represent queries to database management systems. Detection and deterrence of SQL injection attacks, for which techniques from different areas can be applied to improve the detectability of the attack, is not a new area of research, but it remains relevant. Artificial intelligence and machine learning techniques have been tested and used to control SQL injection attacks, showing promising results. The main contribution of this paper is to cover relevant work related to the different machine learning and deep learning models used to detect SQL injection attacks. With this systematic review, we aim to keep researchers up to date and contribute to the understanding of the intersection between SQL injection attacks and the artificial intelligence field. Full article
(This article belongs to the Collection Machine Learning and Data Analytics for Cyber Security)
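To ground the kind of models such reviews cover, the following is a toy sketch of one common approach (character n-gram TF-IDF features with a linear classifier); the two-query training set and hyperparameters are placeholders, not drawn from the reviewed studies.

```python
# Toy sketch of one frequently reviewed approach: treat query strings as text, extract
# character n-gram TF-IDF features, and train a linear classifier. The tiny training
# set and hyperparameters are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

queries = [
    "SELECT name FROM users WHERE id = 42",           # benign
    "SELECT name FROM users WHERE id = 1 OR 1=1 --",  # injected tautology
]
labels = [0, 1]  # 0 = benign, 1 = SQL injection

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(queries, labels)
print(model.predict(["SELECT * FROM accounts WHERE user = '' OR '1'='1'"]))
```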