Journal of Cybersecurity and Privacy doi: 10.3390/jcp4020008
Authors: Anthony J. Rose, Scott R. Graham, Christine M. Schubert Kabban, Jacob J. Krasnov, Wayne C. Henry
The Antimalware Scan Interface (AMSI) plays a crucial role in detecting malware within Windows operating systems. This paper presents ScriptBlock Smuggling, a novel evasion and log spoofing technique that exploits PowerShell and .NET environments to circumvent the AMSI. By manipulating ScriptBlocks within the Abstract Syntax Tree (AST), this method creates dual AST representations, one for compiler execution and another for antivirus and log analysis, enabling the evasion of AMSI detection and challenging traditional memory patching bypass methods. This research provides a detailed analysis of PowerShell’s ScriptBlock creation and its inherent security features, pinpoints critical limitations in the AMSI’s ability to scrutinize ScriptBlocks, and examines the implications of log spoofing as part of this evasion method. The findings highlight potential avenues for attackers to exploit these vulnerabilities, suggesting the possibility of a new class of AMSI bypasses and their use for log spoofing. In response, this paper proposes a synchronization strategy for ASTs, intended to unify the compilation and malware scanning processes to reduce the threat surface in PowerShell and .NET environments.
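The core idea, two representations of the same script where the scanner and logger see one body while the compiler executes another, can be illustrated with a minimal Python sketch. The class and names below are hypothetical stand-ins, not PowerShell's actual ScriptBlock internals:

```python
# Illustrative sketch (hypothetical API, not PowerShell internals): a script
# object carrying two representations -- a string handed to the scanner and
# logger, and a separately compiled body that is what actually executes.
class DualRepresentationScript:
    def __init__(self, visible_source, hidden_source):
        self.visible_source = visible_source                    # what a scanner/log would see
        self._compiled = compile(hidden_source, "<script>", "exec")  # what actually runs

    def text_for_scanner(self):
        return self.visible_source

    def execute(self, env):
        exec(self._compiled, env)

script = DualRepresentationScript(
    visible_source='print("harmless")',   # benign-looking representation
    hidden_source='result = 6 * 7',       # the body the "compiler" actually runs
)

env = {}
script.execute(env)
```

Here the scanner-facing text and the executed code never agree, which is exactly the mismatch the proposed AST synchronization strategy is meant to eliminate.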
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010007
Authors: Yousef Alghamdi, Arslan Munir
Ensuring confidentiality and privacy is critical when it comes to sharing images over unsecured networks such as the internet. Since widely used and secure encryption methods, such as AES, Twofish, and RSA, are not suitable for real-time image encryption due to their slow encryption speeds and high computational requirements, researchers have proposed specialized algorithms for image encryption. This paper provides an introduction and overview of the image encryption algorithms and metrics used, aiming to evaluate them and help researchers and practitioners starting in this field obtain adequate information to understand the current state of image encryption algorithms. This paper classifies image encryption into seven different approaches based on the techniques used and analyzes the strengths and weaknesses of each approach. Furthermore, this paper provides a detailed review of a comprehensive set of security, quality, and efficiency evaluation metrics for image encryption algorithms, and provides upper and lower bounds for these evaluation metrics. Finally, this paper discusses the pros and cons of different image encryption approaches as well as the suitability of different image encryption approaches for different applications.
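Two of the evaluation metrics commonly reported for image ciphers, NPCR (number of pixels change rate) and UACI (unified average changing intensity), can be computed directly from two equally sized 8-bit images; for a strong cipher, NPCR is typically expected near 99.6% and UACI near 33.46%. A stdlib-only sketch with illustrative pixel lists (not data from the paper):

```python
def npcr_uaci(img1, img2):
    """NPCR (% of pixel positions that differ) and UACI (mean normalized
    intensity change) between two equally sized 8-bit images given as flat
    lists of pixel values."""
    n = len(img1)
    differing = sum(1 for a, b in zip(img1, img2) if a != b)
    npcr = 100.0 * differing / n
    uaci = 100.0 * sum(abs(a - b) for a, b in zip(img1, img2)) / (255.0 * n)
    return npcr, uaci

# Toy 4-pixel example: two pixels changed by 100 intensity levels each.
plain  = [10, 200, 30, 40]
cipher = [10, 100, 130, 40]
npcr, uaci = npcr_uaci(plain, cipher)
```

On this toy input, half the pixels differ (NPCR = 50%) and the average normalized change is 200/(255·4) ≈ 19.6%.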
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010006
Authors: Stefan Kutschera, Wolfgang Slany, Patrick Ratschiller, Sarina Gursch, Patrick Deininger, Håvard Dagenborg
Sharing information with the public is becoming easier than ever before through the usage of the numerous social media platforms readily available today. Once posted online and released to the public, information is almost impossible to withdraw or delete. More alarmingly, postings may carry sensitive information far beyond what was intended to be released, so-called incidental data, which raises various additional security and privacy concerns. To improve our understanding of the awareness of incidental data, we conducted a survey where we asked 192 students for their opinions on publishing selected postings on social media. We found that up to 21.88% of all participants would publish a posting that contained incidental data that two-thirds of them found privacy-compromising. Our results show that continued efforts are needed to increase our awareness of incidental data posted on social media.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010005
Authors: Ioannis Paspatis, Aggeliki Tsohou
Multiple studies have demonstrated that the conventional method of learning is suboptimal when our goal is to enhance individuals’ genuine privacy behavior. This study introduces a framework for transforming privacy behavior, with the objective of raising individuals’ privacy practices to a higher level of confidentiality. We performed an experiment on a limited number of people to validate the efficacy of our suggested transformation framework. This framework combined determining aspects of privacy behavior with experiential behavior modification methodologies such as neutral stimuli (e.g., cognitive behavioral transformation, CBTx), practical assessments, and motivational interviews from other disciplines. While these methods have proven effective in fields like psychology and sociology, they have not yet been applied to the realm of Information and Communication Technology (ICT). In this study, we have effectively demonstrated the efficacy of the proposed framework through a five-phase experiment. The suggested framework has the potential to be advantageous for educational institutions, including both public and private schools as well as universities, in constructing new frameworks or developing new methodologies for transforming individuals’ privacy behavior into more protective behavior. Furthermore, our framework offers a conducive environment for further investigation into privacy behavior transformation methodologies.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010004
Authors: Elias Seid, Oliver Popov, Fredrik Blix
Identifying potential system attacks that define security requirements is crucial to building secure cyber systems. Moreover, the frequency of attacks makes their subsequent analysis challenging and arduous in cyber–physical systems (CPS). Since CPS include people, organisations, software, and infrastructure, a thorough security attack analysis must consider both strategic (social and organisational) aspects and technical (software and physical infrastructure) aspects. Studying cyberattacks and their potential impact on internal and external assets in cyberspace is essential for maintaining cyber security. The importance is reflected in the work of the Swedish Civil Contingencies Agency (MSB), which receives IT incident reports from essential service providers mandated by the NIS directive of the European Union and from Swedish government agencies. To tackle this problem, a multi-realm security attack event monitoring framework was proposed to monitor, model, and analyse security events in the social (business process), cyber, and physical infrastructure components of cyber–physical systems. This paper scrutinises security attack patterns and the corresponding security solutions for Swedish government agencies and organisations within the EU’s NIS directive. A pattern analysis was conducted on 254 security incident reports submitted by critical service providers. A total of five critical security attacks, seven vulnerabilities (commonly known as threats), ten attack patterns, and ten parallel attack patterns were identified. Moreover, we employed standard mitigation techniques obtained from recognised repositories of cyberattack knowledge, namely CAPEC and MITRE, in order to conduct an analysis of the behavioural patterns.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010003
Authors: Anthony Overmars, Sitalakshmi Venkatraman
The RSA (Rivest–Shamir–Adleman) cryptosystem is an asymmetric public key cryptosystem popular for its use in encryptions and digital signatures. However, Wiener’s attack on the RSA cryptosystem utilizes continued fractions, which has generated much interest in developing competitive factoring algorithms. A general-purpose integer factorization method first proposed by Lehmer and Powers formed the basis of the well-known Continued Fraction Factorization (CFRAC) method. Recent work on the one line factoring algorithm by Hart and its connection with Lehman’s factoring method have motivated this paper. The emphasis of this paper is to explore the representations of √(PQ) as continued fractions and the suitability of lower-order convergents as representations of a/b. These simpler convergents are then prescribed to Hart’s one line factoring algorithm. As an illustration, we demonstrate the working of our approach with two numbers: one smaller number and another larger number occupying 95 bits. Using our method, the fourth convergent finds the factors as the solution for the smaller number, while the eleventh convergent finds the factors for the larger number. The security of the RSA public key cryptosystem relies on the computational difficulty of factoring large integers. Among the challenges in breaking RSA semi-primes, RSA250, which is an 829-bit semi-prime, continues to hold a research record. In this paper, we apply our method to factorize RSA250 and present the practical implementation of our algorithm. Our approach’s theoretical and experimental findings demonstrate the reduction of the search space and a faster solution to the semi-prime factorization problem, resulting in key contributions and practical implications. We identify further research to extend our approach by exploring limitations and additional considerations such as the difference of squares method, paving the way for further research in this direction.
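For reference, the standard form of Hart's one line factoring algorithm (the baseline the paper builds on, not the authors' convergent-guided variant) searches for an i such that the ceiling square root s of n·i satisfies s² ≡ t² (mod n) for some integer t, yielding gcd(s − t, n) as a factor. A stdlib-only sketch:

```python
import math

def hart_one_line(n, iterations=100000):
    """Hart's one line factoring: for i = 1, 2, ..., take s = ceil(sqrt(n*i));
    if s^2 mod n is a perfect square t^2, then gcd(s - t, n) is a candidate
    nontrivial factor of n."""
    for i in range(1, iterations + 1):
        s = math.isqrt(n * i)
        if s * s < n * i:
            s += 1                      # ceiling of sqrt(n*i)
        m = (s * s) % n
        t = math.isqrt(m)
        if t * t == m:                  # s^2 == t^2 (mod n)
            g = math.gcd(s - t, n)
            if 1 < g < n:
                return g
    return None
```

For example, `hart_one_line(2491)` succeeds on the very first iteration: ceil(√2491) = 50, 50² − 2491 = 9 = 3², and gcd(50 − 3, 2491) = 47, so 2491 = 47 × 53.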
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010002
Authors: Mohamad Saalim Wani, Michael Rademacher, Thorsten Horstmann, Mathias Kretschmer
5G networks, pivotal for our digital mobile societies, are transitioning from 4G to 5G Stand-Alone (SA) networks. However, during this transition, 5G Non-Stand-Alone (NSA) networks are widely used. This paper examines potential security vulnerabilities in 5G NSA networks. Through an extensive literature review, we identify known 4G attacks that can theoretically be applied to 5G NSA. We organize these attacks into a structured taxonomy. Our findings reveal that 5G NSA networks may offer a false sense of security, as most security and privacy improvements are concentrated in 5G SA networks. To underscore this concern, we implement three attacks with severe consequences and successfully validate them on various commercially available smartphones. Notably, one of these attacks, the IMSI Leak, consistently exposes user information with no apparent security mitigation in 5G NSA networks. This highlights the ease of tracking individuals on current 5G networks.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp4010001
Authors: Christian DeLozier
Using a safe subset of C++ is a promising direction for increasing the safety of the programming language while maintaining its performance and productivity. In this paper, we examine how close existing C/C++ code is to conforming to a safe subset of C++. We examine the rules presented in existing safe C/C++ standards and safe C/C++ subsets. We analyze the code characteristics of 5.8 million code samples from the Exebench benchmark suite, two C/C++ benchmark suites, and five modern C++ applications using a static analysis tool. We find that raw pointers, unsafe casts, and unsafe library functions are used in both C/C++ code at large and in modern C++ applications. In general, C/C++ code at large does not differ much from modern C++ code, and continued work will be required to transition from existing C/C++ code to a safe subset of C++.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040041
Authors: Tirthankar Ghosh, Sikha Bagui, Subhash Bagui, Martin Kadzis, Jackson Bare
This article presents a statistical approach using entropy and classification-based analysis to detect anomalies in industrial control systems traffic. Several statistical techniques have been proposed to create baselines and measure deviation to detect intrusion in enterprise networks with a centralized intrusion detection approach in mind. Looking at traffic volume alone to find anomalous deviation may not be enough; it may result in increased false positives. The near real-time communication requirements, coupled with the lack of centralized infrastructure in operations technology and the limited resources of sensor motes, require an efficient anomaly detection system that operates within these constraints. This paper presents extended results from our previous work by presenting a detailed cluster-based entropy analysis on selected network traffic features. It further extends the analysis using a classification-based approach. Our detailed entropy analysis corroborates our earlier findings that, although some degree of anomaly may be detected using univariate and bivariate entropy analysis for Denial of Service (DOS) and Man-in-the-Middle (MITM) attacks, not much information may be obtained for the initial reconnaissance, thus preventing early-stage attack detection in the Cyber Kill Chain. Our classification-based analysis shows that, overall, the classification results of the DOS attacks were much higher than those of the MITM attacks using two Modbus features in addition to the three TCP/IP features. In terms of classifiers, J48 and random forest had the best classification results and can be considered comparable. For the DOS attack, no resampling with the 60–40 training/testing split had the best results (average accuracy of 97.87%), but for the MITM attack, the 80–20 non-attack vs. attack data with the 75–25 split (average accuracy of 82.81%) had the best results.
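A univariate entropy baseline of the kind described can be sketched in a few lines: compute the Shannon entropy of a traffic feature's empirical distribution per time window and compare it against the baseline. The port numbers below are illustrative toy data, not drawn from the paper's dataset:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of the empirical distribution of a traffic
    feature, e.g. destination ports observed in one time window."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy windows: an ICS window dominated by Modbus (port 502) has low entropy;
# a reconnaissance scan touching many distinct ports drives entropy up.
baseline = shannon_entropy([502, 502, 502, 44818])
scan     = shannon_entropy([1, 2, 3, 4])
```

A window whose entropy deviates sharply from the baseline (here `scan` = 2.0 bits vs. `baseline` ≈ 0.81 bits) would be flagged for closer classification-based inspection.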
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040040
Authors: Shannon K. S. Kroes, Matthijs van Leeuwen, Rolf H. H. Groenwold, Mart P. Janssen
Synthetic data generation is becoming an increasingly popular approach to making privacy-sensitive data available for analysis. Recently, cluster-based synthetic data generation (CBSDG) has been proposed, which uses explainable and tractable techniques for privacy preservation. Although the algorithm demonstrated promising performance on simulated data, CBSDG has not yet been applied to real, personal data. In this work, a published blood-transfusion analysis is replicated with synthetic data to assess whether CBSDG can reproduce more complex and intricate variable relations than previously evaluated. Data from the Dutch national blood bank, consisting of 250,729 donation records, were used to predict donor hemoglobin (Hb) levels by means of support vector machines (SVMs). Precision scores were equal to the original data results for both male (0.997) and female (0.987) donors, recall was 0.007 higher for male and 0.003 lower for female donors (original estimates 0.739 and 0.637, respectively). The impact of the variables on Hb predictions was similar, as quantified and visualized with Shapley additive explanation values. Opportunities for attribute disclosure were decreased for all but two variables; only the binary variables Deferral Status and Sex could still be inferred. Such inference was also possible for donors who were not used as input for the generator and may result from correlations in the data as opposed to overfitting in the synthetic-data-generation process. The high predictive performance obtained with the synthetic data shows potential of CBSDG for practical implementation.
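The replication compares the synthetic-data SVM against the original-data SVM using precision and recall, which derive from confusion-matrix counts as below. The counts in the usage example are illustrative, not the paper's results:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts: precision is the
    fraction of positive predictions that are correct, recall the fraction
    of actual positives that are recovered."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts: 90 true positives, 10 false positives, 30 false negatives.
p, r = precision_recall(90, 10, 30)
```

Comparing these two numbers between the original and synthetic training data, as the study does per sex, shows whether the generator preserved the variable relations the classifier depends on.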
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040039
Authors: Turki Al lelah, George Theodorakopoulos, Amir Javed, Eirini Anthi
The proliferation of cloud and public legitimate services (CLS) on a global scale has resulted in increasingly sophisticated malware attacks that abuse these services as command-and-control (C&C) communication channels. Conventional security solutions are inadequate for detecting malicious C&C traffic because it blends with legitimate traffic. This motivates the development of advanced detection techniques. We make the following contributions: First, we introduce a novel labeled dataset. This dataset serves as a valuable resource for training and evaluating detection techniques aimed at identifying malicious bots that abuse CLS as C&C channels. Second, we tailor our feature engineering to behaviors indicative of CLS abuse, such as connections to known CLS domains and potential C&C API calls. Third, to identify the most relevant features, we introduce a custom feature elimination (CFE) method designed to determine the exact number of features needed for filter selection approaches. Fourth, our approach focuses on both static and derivative features of Portable Executable (PE) files. After evaluating various machine learning (ML) classifiers, the random forest emerges as the most effective classifier, achieving a 98.26% detection rate. Fifth, we introduce the “Replace Misclassified Parameter (RMCP)” adversarial attack. This white-box strategy is designed to evaluate our system’s detection robustness. The RMCP attack modifies feature values in malicious samples to make them appear as benign samples, thereby bypassing the ML model’s classification while maintaining the malware’s malicious capabilities. The results of the robustness evaluation demonstrate that our proposed method successfully maintains a high accuracy level of 84%. In sum, our comprehensive approach offers a robust solution to the growing threat of malware abusing CLS as C&C infrastructure.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040038
Authors: Clay Carper, Stone Olguin, Jarek Brown, Caylie Charlton, Mike Borowczak
Power-based Side-Channel Analysis (SCA) began with visual-based examinations and has progressed to utilize data-driven statistical analysis. Two distinct classifications of these methods have emerged over the years: those focused on leakage exploitation and those dedicated to leakage detection. This work primarily focuses on a leakage detection-based schema that utilizes Welch’s t-test, known as Test Vector Leakage Assessment (TVLA). Both classes of methods process collected data using statistical frameworks that result in the successful exfiltration of information via SCA. Often, statistical testing used during analysis requires the assumption that collected power consumption data originates from a normal distribution. To date, this assumption has remained largely uncontested. This work seeks to demonstrate that while past studies have assumed the normality of collected power traces, this assumption should be properly evaluated. In order to evaluate this assumption, experiments are conducted on an implementation of Tiny-AES-c with nine unique substitution-box (s-box) configurations, using TVLA to guide the experimental design. By leveraging the complexity of the AES algorithm, a sufficiently diverse and complex dataset was developed. Under this dataset, statistical tests for normality such as the Shapiro-Wilk test and the Kolmogorov-Smirnov test provide significant evidence to reject the null hypothesis that the power consumption data are normally distributed. To address this observation, existing non-parametric equivalents such as the Wilcoxon Signed-Rank Test and the Kruskal-Wallis Test are discussed in relation to currently used parametric tests such as Welch’s t-test.
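Welch's t-test, the statistic underlying TVLA, compares two groups of traces without assuming equal variances; in TVLA, a |t| exceeding the conventional 4.5 threshold on fixed-vs-random trace sets flags leakage. A stdlib-only sketch with toy data (not actual power traces):

```python
import math

def welch_t(a, b):
    """Welch's t-statistic between two groups of samples (unequal variances
    allowed), as used point-wise on power traces in TVLA."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance, group b
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Toy groups standing in for fixed-key vs. random-key trace samples at one
# time point; |t| well below 4.5 here, so no leakage would be flagged.
t = welch_t([0, 2], [1, 3])
```

Note that the test's interpretation presumes approximately normal data, which is precisely the assumption this paper puts under scrutiny.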
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040037
Authors: Humera Ghani, Shahram Salekzamankhani, Bal Virdee
Due to the wide variety of network services, many different types of protocols exist, producing various packet features. Some features contain irrelevant and redundant information. The presence of such features increases computational complexity and decreases accuracy. Therefore, this research is designed to reduce the data dimensionality and improve the classification accuracy in the UNSW-NB15 dataset. It proposes a hybrid dimensionality reduction system that performs feature selection (FS) and feature extraction (FE). FS was performed using the Recursive Feature Elimination (RFE) technique, while FE was accomplished by transforming the features into principal components. This combined scheme reduced a total of 41 input features into 15 components. The proposed system’s classification performance was determined using an ensemble of Support Vector Classifier (SVC), K-nearest Neighbor classifier (KNC), and Deep Neural Network classifier (DNN). The system was evaluated using accuracy, detection rate, false positive rate, f1-score, and area under the curve metrics. Comparing the voting ensemble results of the full feature set against the 15 principal components confirms that reduced and transformed features did not significantly decrease the classifier’s performance. We achieved 94.34% accuracy, a 93.92% detection rate, a 5.23% false positive rate, a 94.32% f1-score, and a 94.34% area under the curve when 15 components were input to the voting ensemble classifier.
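The RFE loop can be sketched minimally: repeatedly drop the lowest-ranked feature until the target count remains. The paper ranks features with a model, whereas this stdlib-only illustration substitutes a simple correlation-based ranking; the feature names and data are hypothetical:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    if vx == 0 or vy == 0:
        return 0.0
    return cov / (vx * vy) ** 0.5

def rfe(features, labels, keep):
    """Recursive feature elimination: drop the feature whose |correlation|
    with the label is weakest (a stand-in for model-based ranking) until
    `keep` features remain."""
    remaining = dict(features)          # feature name -> column of values
    while len(remaining) > keep:
        weakest = min(remaining,
                      key=lambda f: abs(pearson_r(remaining[f], labels)))
        del remaining[weakest]
    return sorted(remaining)

# Hypothetical flow features: "dur" tracks the attack label closely,
# "ttl" weakly, "noise" not at all.
labels = [0, 0, 1, 1, 1, 0]
features = {
    "dur":   [1, 2, 9, 8, 9, 1],
    "noise": [5, 1, 4, 2, 3, 6],
    "ttl":   [0, 0, 1, 1, 0, 0],
}
selected = rfe(features, labels, keep=1)
```

In the full pipeline, the features surviving RFE would then be projected onto principal components before being fed to the voting ensemble.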
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040036
Authors: Mohamed Chahine Ghanem, Patrick Mulvihill, Karim Ouazzane, Ramzi Djemai, Dipo Dunsin
The use of the unindexed web, commonly known as the deep web and dark web, to commit or facilitate criminal activity has drastically increased over the past decade. The dark web is a dangerous place where all kinds of criminal activities take place. Despite advances in web forensic techniques, tools, and methodologies, few studies have formally tackled dark and deep web forensics or the technical differences between them in terms of investigative techniques and artefact identification and extraction. This study proposes a novel and comprehensive protocol to guide and assist digital forensic professionals in investigating crimes committed on or via the deep and dark web. The protocol, named D2WFP, establishes a new sequential approach for performing investigative activities by observing the order of volatility and implementing a systemic approach covering all browsing-related hives and artefacts, which ultimately improves accuracy and effectiveness. Rigorous quantitative and qualitative research was conducted by assessing the D2WFP following a scientifically sound and comprehensive process in different scenarios, and the obtained results show an apparent increase in the number of artefacts recovered when adopting the D2WFP, which outperforms current industry and open-source browsing forensic tools. The second contribution of the D2WFP is its robust formulation of artefact correlation and cross-validation, which enables digital forensic professionals to better document and structure their analysis of host-based deep and dark web browsing artefacts.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040035
Authors: Filippo Sobrero, Beatrice Clavarezza, Daniele Ucci, Federica Bisio
In recent years, cybersecurity attacks have increased at an unprecedented pace, becoming ever more sophisticated and costly. Their impact has involved both private/public companies and critical infrastructures. At the same time, due to the COVID-19 pandemic, the security perimeters of many organizations expanded, causing an increase in the attack surface exploitable by threat actors through malware and phishing attacks. Given these factors, it is of primary importance to monitor the security perimeter and the events occurring in the monitored network, according to a tested security strategy of detection and response. In this paper, we present a protocol tunneling detector prototype which inspects, in near real-time, a company’s network traffic using machine learning techniques. Indeed, tunneling attacks allow malicious actors to maximize the time in which their activity remains undetected. The detector monitors unencrypted network flows and extracts features to detect possible occurring attacks and anomalies by combining machine learning and deep learning. The proposed module can be embedded in any network security monitoring platform able to provide network flow information along with its metadata. The detection capabilities of the implemented prototype have been tested both on benign and malicious datasets. Results show an overall accuracy of 97.1% and an F1-score equal to 95.6%.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040034
Authors: Theodoros Theodoropoulos, Luis Rosa, Chafika Benzaid, Peter Gray, Eduard Marin, Antonios Makris, Luis Cordeiro, Ferran Diego, Pavel Sorokin, Marco Di Girolamo, Paolo Barone, Tarik Taleb, Konstantinos Tserpes
Cloud-native services face unique cybersecurity challenges due to their distributed infrastructure. They are susceptible to various threats like malware, DDoS attacks, and Man-in-the-Middle (MITM) attacks. Additionally, these services often process sensitive data that must be protected from unauthorized access. On top of that, the dynamic and scalable nature of cloud-native services makes it difficult to maintain consistent security, as deploying new instances and infrastructure introduces new vulnerabilities. To address these challenges, efficient security solutions are needed to mitigate potential threats while aligning with the characteristics of cloud-native services. Despite the abundance of works focusing on security aspects in the cloud, there has been a notable lack of research that is focused on the security of cloud-native services. To address this gap, this work is the first survey that is dedicated to exploring security in cloud-native services. This work aims to provide a comprehensive investigation of the aspects, features, and solutions that are associated with security in cloud-native services. It serves as a uniquely structured mapping study that maps the key aspects to the corresponding features, and these features to numerous contemporary solutions. Furthermore, it includes the identification of various candidate open-source technologies that are capable of supporting the realization of each explored solution. Finally, it showcases how these solutions can work together in order to establish each corresponding feature. The insights and findings of this work can be used by cybersecurity professionals, such as developers and researchers, to enhance the security of cloud-native services.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040033
Authors: Noel Khaemba, Issa Traoré, Mohammad Mamun
To address the lack of datasets for agetech, this paper presents an approach for generating synthetic datasets that include traces of benign and attack datasets for agetech. The generated datasets could be used to develop and evaluate intrusion detection systems for smart homes for seniors aging in place. After reviewing several resources, it was established that there are no agetech attack data for sensor readings. Therefore, in this research, several methods for generating attack data were explored using attack data patterns from an existing IoT dataset called TON_IoT weather data. The TON_IoT dataset could be used in different scenarios, but in this study, the focus is to apply it to agetech. The attack patterns were replicated in a normal agetech dataset from a temperature sensor collected from the Information Security and Object Technology (ISOT) research lab. The generated data are different from normal data, as abnormal segments are shown that could be considered as attacks. The generated agetech attack datasets were also trained using machine learning models, and, based on different metrics, achieved good classification performance in predicting whether a sample is benign or malicious.
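The injection of attack patterns into a benign sensor series can be sketched as follows. The constant-offset "attack", window position, and temperature values are hypothetical stand-ins for the TON_IoT-derived patterns and the ISOT temperature readings:

```python
import random

def inject_attack(readings, start, length, offset):
    """Build a labeled dataset by splicing an anomalous segment (here, a
    constant offset mimicking a spoofed temperature sensor) into an
    otherwise benign series. Returns (value, label) pairs with label 1
    inside the injected window."""
    out = []
    for i, v in enumerate(readings):
        if start <= i < start + length:
            out.append((v + offset, 1))   # attack sample
        else:
            out.append((v, 0))            # benign sample
    return out

random.seed(0)
# Benign room-temperature readings around 21 C (hypothetical).
normal = [21.0 + random.uniform(-0.5, 0.5) for _ in range(20)]
labeled = inject_attack(normal, start=8, length=4, offset=15.0)
```

The labeled pairs can then be split into features and labels to train and evaluate the benign-vs-malicious classifiers described above.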
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040032
Authors: Ghada Abdelmoumin, Danda B. Rawat, Abdul Rahman
Training anomaly-based, machine-learning-based intrusion detection systems (AMiDS) for use in critical Internet of Things (CIoT) systems and military Internet of Things (MIoT) environments may involve synthetic data or publicly simulated data due to data restrictions, data scarcity, or both. However, synthetic data can be unrealistic and potentially biased, and simulated data are invariably static, unrealistic, and prone to obsolescence. Building an AMiDS logical model to predict the deviation from normal behavior in MIoT and CIoT devices operating at the sensing or perception layer due to adversarial attacks often requires the model to be trained using current and realistic data. Unfortunately, while real-time data are realistic and relevant, they are largely imbalanced. Imbalanced data have a skewed class distribution and low-similarity index, thus hindering the model’s ability to recognize important features in the dataset and make accurate predictions. Data-driven learning using data sampling, resampling, and generative methods can lessen the adverse impact of a data imbalance on the AMiDS model’s performance and prediction accuracy. Generative methods enable passive adversarial learning. This paper investigates several data sampling, resampling, and generative methods. It examines their impacts on the performance and prediction accuracy of AMiDS models trained using imbalanced data drawn from the UNSW_2018_IoT_Botnet dataset, a publicly available IoT dataset from IEEE DataPort. Furthermore, it evaluates the performance and predictability of these models when trained using data transformation methods, such as normalization and one-hot encoding, to cover a skewed distribution, data sampling and resampling methods to address data imbalances, and generative methods to train the models to increase the model’s robustness to recognize new but similar attacks.
In this initial study, we focus on CIoT systems and train PCA-based and oSVM-based AMiDS models constructed using low-complexity PCA and one-class SVM (oSVM) ML algorithms to fit an imbalanced ground truth IoT dataset. Overall, we consider the rare event prediction case where the minority class distribution is disproportionately low compared to the majority class distribution. We plan to use transfer learning in future studies to generalize our initial findings to the MIoT environment. We focus on CIoT systems and MIoT environments instead of traditional or non-critical IoT environments due to the stringent low-energy and minimal response time constraints and the variety of low-power, situational-aware (or both) things operating at the sensing or perception layer in a highly complex and open environment.
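One of the resampling families discussed, random oversampling of the minority class, can be sketched as follows (a hypothetical stdlib-only helper, not the paper's pipeline):

```python
import random

def oversample_minority(samples, labels):
    """Random oversampling: duplicate randomly chosen minority-class samples
    until every class matches the size of the largest class, one of the
    simplest remedies for a skewed class distribution."""
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        extra = [random.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels

random.seed(1)
# Hypothetical imbalanced set: 8 benign samples, 2 attack samples.
X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2
Xr, yr = oversample_minority(X, y)
```

Duplicating minority samples balances the classes but risks overfitting to the duplicated points, which is why the paper also considers generative alternatives.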
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3040031
Authors: Ayat-Allah Bouramdane
Smart grids have emerged as a transformative technology in the power sector, enabling efficient energy management. However, the increased reliance on digital technologies also exposes smart grids to various cybersecurity threats and attacks. This article provides a comprehensive exploration of cyberattacks and cybersecurity in smart grids, focusing on critical components and applications. It examines various cyberattack types and their implications on smart grids, backed by real-world case studies and quantitative models. To select optimal cybersecurity options, the study proposes a multi-criteria decision-making (MCDM) approach using the analytical hierarchy process (AHP). Additionally, the integration of artificial intelligence (AI) techniques in smart-grid security is examined, highlighting the potential benefits and challenges. Overall, the findings suggest that “security effectiveness” holds the highest importance, followed by “cost-effectiveness”, “scalability”, and “integration and compatibility”, while other criteria (i.e., “performance impact”, “manageability and usability”, “compliance and regulatory requirements”, “resilience and redundancy”, “vendor support and collaboration”, and “future readiness”) contribute to the evaluation but have relatively lower weights. Alternatives such as “access control and authentication” and “security information and event management” with high weighted sums are crucial for enhancing cybersecurity in smart grids, while alternatives such as “compliance and regulatory requirements” and “encryption” have lower weighted sums but still provide value in their respective criteria.
We also find that “deep learning” emerges as the most effective AI technique for enhancing cybersecurity in smart grids, followed by “hybrid approaches”, “Bayesian networks”, “swarm intelligence”, and “machine learning”, while “fuzzy logic”, “natural language processing”, “expert systems”, and “genetic algorithms” exhibit lower effectiveness in addressing smart-grid cybersecurity. The article discusses the benefits and drawbacks of MCDM-AHP, proposes enhancements for its use in smart-grid cybersecurity, and suggests exploring alternative MCDM techniques for evaluating security options in smart grids. The approach aids decision-makers in the smart-grid field to make informed cybersecurity choices and optimize resource allocation.
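The AHP weighting underlying the MCDM ranking can be approximated by column-normalizing a pairwise comparison matrix and averaging across rows. The 3-criterion judgments below are hypothetical illustrations, not the paper's actual comparisons:

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise comparison
    matrix: normalize each column by its sum, then average across each row
    (the common approximation to the principal eigenvector)."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
            for r in range(n)]

# Hypothetical judgments: security effectiveness rated 3x as important as
# cost-effectiveness and 5x as important as scalability (Saaty's 1-9 scale).
m = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w = ahp_weights(m)
```

The resulting weights sum to 1 and preserve the judged ordering (here, security effectiveness > cost-effectiveness > scalability); multiplying alternative scores by these weights yields the weighted sums the article ranks alternatives by.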
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030030
Authors: Abdul Majeed
Anonymization techniques are widely used to make personal data broadly available for analytics/data-mining purposes while preserving the privacy of the personal information enclosed in it. Over the past decades, a substantial number of anonymization techniques were developed based on four well-known privacy models: k-anonymity, ℓ-diversity, t-closeness, and differential privacy. In recent years, there has been an increasing focus on developing attribute-centric anonymization methods, i.e., methods that exploit the properties of the underlying data to be anonymized to improve privacy, utility, and/or computing overheads. In addition, synthetic data are also widely used to preserve privacy (privacy-enhancing technologies), as well as to meet the growing demand for data. To the best of the authors’ knowledge, none of the previous studies have covered the distinctive features of attribute-centric anonymization methods and synthetic-data-based developments. To cover this research gap, this paper summarizes the recent state-of-the-art (SOTA) attribute-centric anonymization methods and synthetic-data-based developments, along with the experimental details. We report various innovative privacy-enhancing technologies that are used to protect the privacy of personal data enclosed in various forms. We discuss the challenges and the way forward in this line of work to effectively preserve both utility and privacy. This is the first work that systematically covers the recent developments in attribute-centric and synthetic-data-based privacy-preserving methods and provides a broader overview of the recent developments in the privacy domain.
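As a concrete illustration of the k-anonymity model mentioned above: a released table is k-anonymous when every combination of quasi-identifier values appears in at least k records. The records and attribute names below are invented for the example.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Check whether every quasi-identifier group has at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Toy table with generalized age ranges and masked ZIP codes.
rows = [
    {"age": "30-39", "zip": "120**", "disease": "flu"},
    {"age": "30-39", "zip": "120**", "disease": "cold"},
    {"age": "40-49", "zip": "130**", "disease": "flu"},
    {"age": "40-49", "zip": "130**", "disease": "asthma"},
]
print(is_k_anonymous(rows, ["age", "zip"], 2))  # True: each group has 2 records
print(is_k_anonymous(rows, ["age", "zip"], 3))  # False: no group reaches 3
```

Attribute-centric methods go further by choosing the generalization (e.g., how coarsely to mask `zip`) based on properties of the data itself.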
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030029
Authors: Anastasios Papathanasiou George Liontos Vasiliki Liagkou Euripidis Glavas
Business Email Compromise (BEC) attacks have emerged as serious threats to organizations in recent years, exploiting social engineering and malware to dupe victims into divulging confidential information and executing fraudulent transactions. This paper provides a comprehensive review of BEC attacks, including their principles, techniques, and impacts on enterprises. In light of the rising tide of BEC attacks globally and their significant financial impact on business, it is crucial to understand their modus operandi and adopt proactive measures to protect sensitive information and prevent financial losses. This study offers valuable recommendations and insights for organizations seeking to enhance their cybersecurity posture and mitigate the risks associated with BEC attacks. Moreover, we analyze the Greek landscape of cyberattacks, focusing on the existing regulatory framework and the measures taken to prevent and respond to cybercrime in accordance with the NIS Directives of the EU. By examining the Greek landscape, we gain insights into the effectiveness of countermeasures in this region, as well as the challenges and opportunities for improving cybersecurity practices.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030028
Authors: Hannes Salin Martin Lundgren
Cooperative Intelligent Transport Systems (C-ITSs) are an important development for society. C-ITSs enhance road safety, improve traffic efficiency, and promote sustainable transportation through interconnected and intelligent communication between vehicles, infrastructure, and traffic-management systems. Many real-world implementations still consider traditional Public Key Infrastructures (PKI) as the underlying trust model and security control. However, there are challenges with the PKI-based security control from a scalability and revocation perspective. Lately, certificateless cryptography has gained research attention, also in conjunction with C-ITSs, making it a new type of security control to be considered. In this study, we use certificateless cryptography as a candidate to investigate factors affecting decisions (not) to adopt new types of security controls, and study its current gaps, key challenges, and possible enablers which can influence the industry. We provide a qualitative study with industry specialists in C-ITSs, combined with a literature analysis of the current state of research on certificateless cryptography in C-ITSs. It was found that only 53% of the current certificateless cryptography literature for C-ITSs in 2022–2023 provides laboratory testing of the protocols, and 0% includes testing in real-world settings. However, the trend of research output in the field has been increasing linearly since 2016, with more than eight times as many articles in 2022 compared to 2016. Based on our analysis, using a five-phased Innovation-Decision Model, we found that the key reasons affecting adoption are: availability of proofs of concept, knowledge beyond current best practices, and a strong buy-in from both stakeholders and standardization bodies.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030027
Authors: Turki Al lelah George Theodorakopoulos Philipp Reinecke Amir Javed Eirini Anthi
The widespread adoption of cloud-based and public legitimate services (CPLS) has inadvertently opened up new avenues for cyber attackers to establish covert and resilient command-and-control (C&C) communication channels. This abuse poses a significant cybersecurity threat, as it allows malicious traffic to blend seamlessly with legitimate network activities. Traditional detection systems are proving inadequate in accurately identifying such abuses, emphasizing the urgent need for more advanced detection techniques. In our study, we conducted an extensive systematic literature review (SLR) encompassing the academic and industrial literature from 2008 to July 2023. Our review provides a comprehensive categorization of the attack techniques employed in CPLS abuses and offers a detailed overview of the currently developed detection strategies. Our findings indicate a substantial increase in cloud-based abuses, facilitated by various attack techniques. Despite this alarming trend, the focus on developing detection strategies remains limited, with only 7 out of 91 studies addressing this concern. Our research serves as a comprehensive review of CPLS abuse for the C&C infrastructure. By examining the emerging techniques used in these attacks, we aim to make a significant contribution to the development of effective botnet defense strategies.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030026
Authors: Fatemeh Ahmadi Abkenari Amin Milani Fard Sara Khanchi
An intrusion detection system (IDS), whether as a device or software-based agent, plays a significant role in networks and systems security by continuously monitoring traffic behaviour to detect malicious activities. The literature includes IDSs that leverage models trained to detect known attack behaviours. However, such models suffer from low accuracy or high overfitting. This work aims to enhance the performance of the IDS by making a model based on the observed traffic via applying different single and ensemble classifiers and lowering the classifier’s overfitting on a reduced set of features. We implement various feature reduction techniques, including Linear Regression, LASSO, Random Forest, Boruta, and autoencoders, on the CSE-CIC-IDS2018 dataset to provide a training set for classifiers including Decision Tree, Naïve Bayes, neural networks, Random Forest, and XGBoost. Our experiments show that the Decision Tree classifier on autoencoder-based reduced feature sets yields the lowest overfitting among the tested combinations.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030025
Authors: Anastasios Giannaros Aristeidis Karras Leonidas Theodorakopoulos Christos Karras Panagiotis Kranias Nikolaos Schizas Gerasimos Kalogeratos Dimitrios Tsolis
Autonomous vehicles (AVs), defined as vehicles capable of navigation and decision-making independent of human intervention, represent a revolutionary advancement in transportation technology. These vehicles operate by synthesizing an array of sophisticated technologies, including sensors, cameras, GPS, radar, light imaging detection and ranging (LiDAR), and advanced computing systems. These components work in concert to accurately perceive the vehicle’s environment, ensuring the capacity to make optimal decisions in real-time. At the heart of AV functionality lies the ability to facilitate intercommunication between vehicles and with critical road infrastructure—a characteristic that, while central to their efficacy, also renders them susceptible to cyber threats. The potential infiltration of these communication channels poses a severe threat, enabling the possibility of personal information theft or the introduction of malicious software that could compromise vehicle safety. This paper offers a comprehensive exploration of the current state of AV technology, particularly examining the intersection of autonomous vehicles and emotional intelligence. We delve into an extensive analysis of recent research on safety lapses and security vulnerabilities in autonomous vehicles, placing specific emphasis on the different types of cyber attacks to which they are susceptible. We further explore the various security solutions that have been proposed and implemented to address these threats. The discussion not only provides an overview of the existing challenges but also presents a pathway toward future research directions. This includes potential advancements in the AV field, the continued refinement of safety measures, and the development of more robust, resilient security mechanisms. 
Ultimately, this paper seeks to contribute to a deeper understanding of the safety and security landscape of autonomous vehicles, fostering discourse on the intricate balance between technological advancement and security in this rapidly evolving field.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030024
Authors: Lewis Golightly Paolo Modesti Victor Chang
Network emulation offers a flexible solution for network deployment and operations, leveraging software to consolidate all nodes in a topology and utilizing the resources of a single host system server. This research paper investigated the state of cybersecurity in virtualized systems, covering vulnerabilities, exploitation techniques, remediation methods, and deployment strategies, based on an extensive review of the related literature. We conducted a comprehensive performance evaluation and comparison of two network-emulation platforms: Graphical Network Simulator-3 (GNS3), an established open-source platform, and the SEED Internet Emulator, an emerging platform, alongside physical Cisco routers. Additionally, we present a Distributed System that seamlessly integrates network architecture and emulation capabilities. Empirical experiments assessed various performance criteria, including the bandwidth, throughput, latency, and jitter. Insights into the advantages, challenges, and limitations of each platform are provided based on the performance evaluation. Furthermore, we analyzed the deployment costs and energy consumption, focusing on the economic aspects of the proposed application.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030023
Authors: Humera Ghani Bal Virdee Shahram Salekzamankhani
With the growth in network usage, there has been a corresponding growth in the nefarious exploitation of this technology. A wide array of techniques is now available that can be used to deal with cyberattacks, and one of them is network intrusion detection. Artificial Intelligence (AI) and Machine Learning (ML) techniques have extensively been employed to identify network anomalies. This paper provides an effective technique to evaluate the classification performance of a deep-learning-based Feedforward Neural Network (FFNN) classifier. A small feature vector is used to detect network traffic anomalies in the UNSW-NB15 and NSL-KDD datasets. The results show that a large feature set can have redundant and unuseful features, and it requires high computation power. The proposed technique exploits a small feature vector and achieves better classification accuracy.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030022
Authors: Richard Li Michail Tsikerdekis
Network anomaly detection solutions can analyze a network’s data volume by protocol over time and can detect many kinds of cyberattacks such as exfiltration. We use exponential random graph models (ERGMs) in order to flatten hourly network topological characteristics into a time series, and Autoregressive Moving Average (ARMA) to analyze that time series and to detect potential attacks. In particular, we extend our previous method in not only demonstrating detection over hourly data but also through labeling of nodes and over the HTTP protocol. We demonstrate the effectiveness of our method using real-world data for creating exfiltration scenarios. We highlight how our method has the potential to provide a useful description of what is happening in the network structure and how this can assist cybersecurity analysts in making better decisions in conjunction with existing intrusion detection systems. Finally, we describe some strengths of our method, its accuracy based on the right selection of parameters, as well as its low computational requirements.
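The detection idea described above reduces hourly network structure to a time series and flags hours whose model residuals are unusually large. The sketch below substitutes a simple moving-average predictor for the ARMA model (so it is only a simplified stand-in, run on synthetic data), but the residual-thresholding logic is the same:

```python
import statistics

def flag_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose residual against a moving-average prediction
    deviates from the mean residual by more than `threshold` std devs."""
    residuals = []
    for i in range(window, len(series)):
        prediction = sum(series[i - window:i]) / window
        residuals.append(series[i] - prediction)
    center = statistics.fmean(residuals)
    spread = statistics.stdev(residuals)
    return [i + window
            for i, r in enumerate(residuals)
            if abs(r - center) > threshold * spread]

# Steady hourly traffic with one exfiltration-like spike at hour 20.
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98,
           101, 99, 100, 102, 98, 101, 99, 103, 97, 100,
           900, 100, 102, 98, 101]
print(flag_anomalies(traffic))  # → [20]
```

A real deployment, as in the paper, would fit an ARMA model to the ERGM-derived series and apply the same residual test to its one-step-ahead forecasts.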
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030021
Authors: Juliet Samandari Clémentine Gritti
Message Queue Telemetry Transport (MQTT) is a common communication protocol used in the Internet of Things (IoT). MQTT is a simple, lightweight messaging protocol used to establish communication between multiple devices relying on the publish–subscribe model. However, the protocol does not provide authentication, and most proposals to incorporate it lose their lightweight feature and do not consider the future risk of quantum attacks. IoT devices are generally resource-constrained, and postquantum cryptography is often more computationally resource-intensive compared to current cryptographic standards, adding to the complexity of the transition. In this paper, we use the postquantum digital signature scheme CRYSTALS-Dilithium to provide authentication for MQTT and measure the resulting CPU, memory, and disk usage. We further investigate another possibility for providing authentication when using MQTT, namely a key encapsulation mechanism (KEM) trick proposed in 2020 for Transport Layer Security (TLS). This trick is claimed to save up to 90% in CPU cycles. We use the postquantum KEM scheme CRYSTALS-KYBER and compare the resulting CPU, memory, and disk usage with traditional authentication. We found that using the KEM for authentication reduced authentication time by 25 ms, a saving of 71%. There was some extra memory cost, but it was minimal enough to be acceptable for most IoT devices.
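The KEM trick referenced above authenticates implicitly: the client encapsulates against the broker's long-term KEM public key, and only the holder of the matching private key can recover the shared secret, so no handshake signature is needed. The `ToyKEM` below is an insecure, invented stand-in for CRYSTALS-KYBER that only illustrates the message flow:

```python
import hashlib
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class ToyKEM:
    """Insecure mock KEM, for flow illustration only (not real cryptography)."""
    @staticmethod
    def keygen():
        sk = secrets.token_bytes(32)
        pk = hashlib.sha256(sk).digest()
        return pk, sk

    @staticmethod
    def encapsulate(pk):
        ss = secrets.token_bytes(32)          # fresh shared secret
        ct = xor(ss, hashlib.sha256(pk).digest())
        return ct, ss

    @staticmethod
    def decapsulate(sk, ct):
        pk = hashlib.sha256(sk).digest()
        return xor(ct, hashlib.sha256(pk).digest())

# Implicit authentication flow between an MQTT client and broker:
broker_pk, broker_sk = ToyKEM.keygen()         # long-term key, provisioned out of band
ct, client_ss = ToyKEM.encapsulate(broker_pk)  # client -> broker: ciphertext
broker_ss = ToyKEM.decapsulate(broker_sk, ct)  # only the broker recovers ss
# The broker proves possession of its private key by keying a confirmation tag.
tag = hashlib.sha256(broker_ss + b"broker-finished").hexdigest()
print(tag == hashlib.sha256(client_ss + b"broker-finished").hexdigest())  # True
```

The claimed savings come from replacing an expensive handshake signature with this cheaper encapsulate/decapsulate pair; a real implementation would bind the tag to the full handshake transcript.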
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030020
Authors: Ioannis Paspatis Aggeliki Tsohou
Several studies have shown that the traditional way of learning is not optimal when we aim to improve ICT users’ actual privacy behaviors. In this research, we present a literature review of the theories that are followed in other fields to modify human behavior. Our findings show that cognitive theory and the health belief model present optimistic results. Further, we examined various learning methods, and we concluded that experiential learning is advantageous compared to other methods. In this paper, we aggregate the privacy behavior determinant factors found in the literature and use cognitive theory to synthesize a theoretical framework. The proposed framework can be beneficial to educational policymakers and practitioners in institutions such as public and private schools and universities. Also, our framework provides a fertile ground for more research on experiential privacy learning and privacy behavior enhancement.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030019
Authors: Jennifer Bellizzi Eleonora Losiouk Mauro Conti Christian Colombo Mark Vella
The ubiquity of Android smartphones makes them targets of sophisticated malware, which maintain long-term stealth, particularly by offloading attack steps to benign apps. Such malware leaves little to no trace in logs, and the attack steps become difficult to discern from benign app functionality. Endpoint detection and response (EDR) systems provide live forensic capabilities that enable anomaly detection techniques to detect anomalous behavior in application logs after an app hijack. However, this presents a challenge, as state-of-the-art EDRs rely on device and third-party application logs, which may not include evidence of attack steps, thus prohibiting anomaly detection techniques from exposing anomalous behavior. While, theoretically, all the evidence resides in volatile memory, its ephemerality necessitates timely collection, and its extraction requires device rooting or app repackaging. We present VEDRANDO, an enhanced EDR for Android that accomplishes (i) the challenge of timely collection of volatile memory artefacts and (ii) the detection of a class of stealthy attacks that hijack benign applications. VEDRANDO leverages memory forensics and app virtualization techniques to collect timely evidence from memory, which allows uncovering attack steps currently uncollected by the state-of-the-art EDRs. The results showed that, with less than 5% CPU overhead compared to normal usage, VEDRANDO could uniquely collect and fully reconstruct the stealthy attack steps of ten realistic messaging hijack attacks using standard anomaly detection techniques, without requiring device or app modification.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030018
Authors: Andey Robins Stone Olguin Jarek Brown Clay Carper Mike Borowczak
The control flow of a program represents valuable and sensitive information; in embedded systems, this information can take on even greater value as the resources, control flow, and execution of the system have more constraints and functional implications than modern desktop environments. Early works have demonstrated the possibility of recovering such control flow through power-based side-channel attacks in tightly constrained environments; however, they relied on meaningful differences in computational states or data dependency to distinguish between states in a state machine. This work applies more advanced machine learning techniques to state machines which perform identical operations in all branches of control flow. Complete control flow is recovered with 99% accuracy even in situations where 97% of work is outside of the control flow structures. This work demonstrates the efficacy of these approaches for recovering control flow information; continues developing available knowledge about power-based attacks on program control flow; and examines the applicability of multiple standard machine learning models to the problem of classification over power-based side-channel information.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030017
Authors: Henock Mulugeta Melaku
Cybersecurity protects cyberspace from a wide range of cyber threats to reduce overall business risk, ensure business continuity, and maximize business opportunities and return on investments. Cybersecurity is best achieved by using appropriate security governance frameworks. To this end, various Information Technology (IT) and cybersecurity governance frameworks have been reviewed along with their benefits and limitations. The major limitations of the reviewed frameworks are that they are complex, with complicated structures to implement, and that they are expensive and require highly skilled IT and security professionals. Moreover, the frameworks require many requirement checklists for implementation and auditing purposes, as well as considerable time and resources. To address these limitations, a simple, dynamic, and adaptive cybersecurity governance framework is proposed that provides security-related strategic direction, ensures that security risks are managed appropriately, and ensures that organizations’ resources are utilized optimally. The framework incorporates components not considered in the existing frameworks, such as research and development, a public–private collaboration framework, regional and international cooperation frameworks, incident management, business continuity and disaster recovery frameworks, and compliance with laws and regulations. Moreover, the proposed framework identifies and includes components, processes, and activities that the existing frameworks miss or overlap. It has nine components, five activities, four outcomes, and seven processes. Performance metrics, evaluation, and monitoring techniques are also proposed. Moreover, it follows a risk-based approach to address the current and future technology and threat landscapes. The design science research method was used in this study: the problem was identified, 
research objectives were articulated, and the objective was met by developing a security governance framework that considers factors not addressed in current works. Finally, performance metrics were proposed to evaluate the implementation of the governance framework.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3030016
Authors: Lukas Schmidt Henry Hosseini Thomas Hupperich
Emerging technologies in video monitoring solutions seriously threaten personal privacy, as current technologies hold the potential for total surveillance. These concerns apply in particular to baby monitor solutions incorporating mobile applications due to the potential privacy impact of combining sensitive video recordings with access to the vast amount of private data on a cell phone. Therefore, this study extends the state of privacy research by assessing the security and privacy of popular baby monitor apps. We analyze network security measures that aim to protect baby monitoring streams, evaluate the corresponding privacy policies, and identify privacy leaks by performing network traffic analysis. Our results point to several problems that may compromise user privacy. We conclude that our methods can support the evaluation of the security and privacy of video surveillance solutions and discuss how to improve the protection of user data.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020015
Authors: Ilias Belalis Georgios Spathoulas Ioannis Anagnostopoulos
Active reconnaissance is the primary source of information gathering about the infrastructure of a target network for intruders. Its main functions are host discovery and port scanning, the basic techniques of which are thoroughly analyzed in the present paper. The main contribution of the paper is the definition of a modeling approach regarding (a) all possible intruder actions, (b) full or partial knowledge of the intruder’s preferred methodology, and (c) the topology of the target network. The result of the modeling approach, which is based on state diagrams, is the extraction of a set of all probable paths that the intruder may follow. On top of this, a number of relevant metrics are calculated to enable the dynamic assessment of the risk to specific network assets according to the point on the paths at which the intruder is detected. The proposed methodology aims to provide a robust model that can enable the efficient and automated application of deception techniques to protect a given network. A series of experiments was also performed to assess the resources the modeling approach requires in real-world applications and to confirm that it provides the required results with acceptable overhead, enabling the online application of deception measures.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020014
Authors: Khosro Salmani Brian Atuh
COVID-19 was an unprecedented pandemic that changed the lives of everyone. To handle the virus’s rapid spread, governments and big tech companies, such as Google and Apple, implemented Contact Tracing Applications (CTAs). However, the response by the public differed in each country. While some countries mandated downloading the application for their citizens, others made it optional, revealing contrasting patterns in the spread of COVID-19. In this study, in addition to investigating the privacy and security of the Canadian CTA, COVID Alert, we aim to disclose the public’s perception of these varying patterns. Additionally, if made aware of the results from other nations, would Canadians sacrifice some freedoms to prevent the spread of a future pandemic? Hence, a survey was conducted, gathering responses from 154 participants across Canada. We first questioned the participants regarding the COVID-19 pandemic and their knowledge and opinion of CTAs, then presented our findings regarding other countries, and afterward asked for their views on CTAs again. This arrangement of preceding questions, presented findings, and succeeding questions was designed to identify whether Canadians’ opinions on CTAs would change after being shown the evidence. Among all of our findings, there is a clear before-and-after difference on whether CTAs should be mandatory, with 34% of participants agreeing before and 56% agreeing afterward. This suggests that the public mainly needed information to decide whether or not to participate. In addition, it underscores the value of transparency and communication when persuading the public to collaborate. Finally, we offer three recommendations on how governments and health authorities can respond effectively in a future pandemic and increase the adoption rate of CTAs to save more lives.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020013
Authors: Sara Kokal Mounika Vanamala Rushit Dave
Throughout the past several decades, mobile devices have evolved in capability and popularity at growing rates while improvement in security has fallen behind. As smartphones now hold mass quantities of sensitive information from millions of people around the world, addressing this gap in security is crucial. Recently, researchers have experimented with behavioral and physiological biometrics-based authentication to improve mobile device security. Continuing the previous work in this field, this study identifies popular dynamics in behavioral and physiological smartphone authentication and aims to provide a comprehensive review of their performance with various deep learning and machine learning algorithms. We found that utilizing hybrid schemes with deep learning features and deep learning/machine learning classification can improve authentication performance. Throughout this paper, the benefits, limitations, and recommendations for future work will be discussed.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020012
Authors: Feng Wang Yongning Tang Hongbing Fang
As the Internet of Things (IoT) continues to expand, billions of IoT devices are now connected to the internet, producing vast quantities of data. Collecting and sharing this data has become crucial to improving IoT technologies and developing new applications. However, the publication of privacy-preserving IoT traffic data is exceedingly challenging due to the various privacy concerns surrounding users, IoT networks, and devices. In this paper, we propose a data transformation method aimed at safeguarding the privacy of IoT devices by transforming time series datasets. Based on our measurements, we have found that the transformed datasets retain the intrinsic value of the original IoT data and maintain data utility. This approach will enable non-expert data owners to better understand and evaluate the potential device-level privacy risks associated with their IoT data while simultaneously offering a reliable solution to mitigate their concerns about privacy violations.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020011
Authors: David S. Butcher Christian J. Brigham James Berhalter Abigail L. Centers William M. Hunkapiller Timothy P. Murphy Eric C. Palm Julia H. Smith
A cybersecurity approach for a large-scale user facility is presented—utilizing the National High Magnetic Field Laboratory (NHMFL) at Florida State University (FSU) as an example. The NHMFL provides access to the highest magnetic fields for scientific research teams from a range of disciplines. The unique challenges of cybersecurity at a widely accessible user facility are showcased, and relevant cybersecurity frameworks for the complex needs of a user facility with industrial-style equipment and hazards are discussed, along with the approach for risk identification and management, which determine cybersecurity requirements and priorities. Essential differences between information technology and research technology are identified, along with unique requirements and constraints. The need to plan for the introduction of new technology and manage legacy technologies with long usage lifecycles is identified in the context of implementing cybersecurity controls rooted in pragmatic decisions to avoid hindering research activities while enabling secure practices, which includes FAIR (findable, accessible, interoperable, and reusable) and open data management principles. The NHMFL’s approach to FAIR data management is presented. Critical success factors include obtaining resources to implement and maintain necessary security protocols, interdisciplinary and diverse skill sets, phased implementation, and shared allocation of NHMFL and FSU responsibilities.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020010
Authors: Raghvinder S. Sangwan Youakim Badr Satish M. Srinivasan
Recent advances in machine learning have created an opportunity to embed artificial intelligence in software-intensive systems. These artificial intelligence systems, however, come with a new set of vulnerabilities making them potential targets for cyberattacks. This research examines the landscape of these cyberattacks and organizes them into a taxonomy. It further explores potential defense mechanisms to counter such attacks and the use of these mechanisms early during the development life cycle to enhance the safety and security of artificial intelligence systems.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020009
Authors: Shinelle Hutchinson Miloš Stanković Samuel Ho Shiva Houshmand Umit Karabiyik
The emergence of the Internet of Things technologies and the increase and convenience of smart home devices have contributed to the growth of self-installed home security systems. While home security devices have become more accessible and can help users monitor and secure their homes, they can also become targets of cyberattacks and/or witnesses of criminal activities, hence sources of forensic evidence. To date, there is little existing literature on forensic analysis and the security and privacy of home security systems. In this paper, we seek to better understand and assess the forensic artifacts that can be extracted, the security and privacy concerns around the use of home security devices, and the challenges forensic investigators might encounter, by performing a comprehensive investigation of the SimpliSafe security system. We investigated the interaction of the security system with the SimpliSafe companion app on both Android and iOS devices. We analyzed the network traffic as the user interacts with the system to identify any security or privacy concerns. Our method can help investigators working on other home security systems, and our findings can further help developers to improve the confidentiality and privacy of user data in home security devices and their applications.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3020008
Authors: Laurens D’hooge Miel Verkerken Tim Wauters Filip De Turck Bruno Volckaert
Generalization is a longstanding assumption in articles concerning network intrusion detection through machine learning. Novel techniques are frequently proposed and validated based on the improvement they attain when classifying one or more of the existing datasets. The necessary follow-up question of whether this increased performance in classification is meaningful outside of the dataset(s) is almost never investigated. This lacuna is in part due to the sparse dataset landscape in network intrusion detection and the complexity of creating new data. The introduction of two recent datasets, namely CIC-IDS2017 and CSE-CIC-IDS2018, opened up the possibility of testing generalization capability within similar academic datasets. This work investigates how well models from different algorithmic families, pretrained on CIC-IDS2017, are able to classify the samples in CSE-CIC-IDS2018 without retraining. Earlier work has shown how robust these models are to data reduction when classifying state-of-the-art datasets. This work experimentally demonstrates that the implicit assumption that strong generalized performance naturally follows from strong performance on a specific dataset is largely erroneous. The supervised machine learning algorithms suffered losses in classification performance ranging from 0 to 50%, depending on the attack class under test. For non-network-centric attack classes, this performance regression is most pronounced, but even the less affected models that classify the network-centric attack classes still show defects. Current implementations of intrusion detection systems (IDSs) with supervised machine learning (ML) as a core building block are thus very likely flawed if they have been validated on the academic datasets without consideration of their general performance on other academic or real-world datasets.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010007
Authors: Shadi Sadeghpour Natalija Vlajic
Session-replay bots are believed to be the latest and most sophisticated generation of web bots, and they are also very difficult to defend against. Combating session-replay bots is particularly challenging in online domains that are repeatedly visited by the same genuine human user(s) in the same or similar ways—such as news, banking or gaming sites. In such domains, it is difficult to determine whether two look-alike sessions are produced by the same human user or if these sessions are just bot-generated session replays. Unfortunately, to date, only a handful of research studies have looked at the problem of session-replay bots, with many related questions still waiting to be addressed. The main contributions of this paper are two-fold: (1) We introduce and provide to the public a novel real-world mouse dynamics dataset named ReMouse. The ReMouse dataset is collected in a guided environment, and, unlike other publicly available mouse dynamics datasets, it contains repeat sessions generated by the same human user(s). As such, the ReMouse dataset is the first of its kind and is of particular relevance for studies on the development of effective defenses against session-replay bots. (2) Our own analysis of ReMouse dataset using statistical and advanced ML-based methods (including deep and unsupervised neural learning) shows that two different human users cannot generate the same or similar-looking sessions when performing the same or a similar online task; furthermore, even the (repeat) sessions generated by the same human user are sufficiently distinguishable from one another.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010006
Authors: Guillaume Bour Camillo Bosco Rita Ugarelli Martin Gilje Jaatun
The security of IoT-based digital solutions is a critical concern in the adoption of Industry 4.0 technologies. These solutions are increasingly being used to support the interoperability of critical infrastructure, such as in the water and energy sectors, and their security is essential to ensure the continued reliability and integrity of these systems. However, as our research demonstrates, many digital solutions still lack basic security mechanisms and are vulnerable to attacks that can compromise their functionality. In this paper, we examine the security risks associated with IoT-based digital solutions for critical infrastructure in the water sector, and refer to a set of good practices for ensuring their security. In particular, we analyze the risks associated with digital solutions not directly connected with the IT system of a water utility. We show that they can still be leveraged by attackers to trick operators into making wrong operational decisions.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010005
Authors: Giorgia Tempestini Ericka Rovira Aryn Pyke Francesco Di Nocera
Knowledge of possible cyber threats as well as awareness of appropriate security measures plays a crucial role in the ability of individuals to not only discriminate between an innocuous versus a dangerous cyber event, but more importantly to initiate appropriate cybersecurity behaviors. The purpose of this study was to construct a Cybersecurity Awareness INventory (CAIN) to be used as an instrument to assess users’ cybersecurity knowledge by providing a proficiency score that could be correlated with cybersecurity behaviors. A scale consisting of 46 items was derived from ISO/IEC 27032. The questionnaire was administered to a sample of college students (N = 277). Based on cybersecurity behaviors reported to the research team by the college’s IT department, participants were divided into three groups according to the risk reports they had received in the past nine months (no risk, low risk, and medium risk). The ANOVA results showed a statistically significant difference in CAIN scores between those in the no-risk and medium-risk groups; as expected, CAIN scores were lower in the medium-risk group. The CAIN has the potential to be a useful assessment tool for cyber training programs as well as future studies investigating individuals’ vulnerability to cyberthreats.
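The core statistical test here, a one-way ANOVA across the three risk groups, can be sketched in a few lines; the scores below are invented placeholders for illustration, not the study's data.

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = mean([x for g in groups for x in g])
    # Variation of the group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Variation of the observations inside each group
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

no_risk  = [40, 42, 45, 41, 44]   # hypothetical CAIN proficiency scores
low_risk = [38, 39, 41, 37, 40]
med_risk = [33, 35, 34, 36, 32]
print(round(one_way_anova_f([no_risk, low_risk, med_risk]), 2))   # prints 28.8
```

A large F indicates that the group means differ more than the within-group spread would explain, which is the pattern the study reports between the no-risk and medium-risk groups.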
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010004
Authors: Raphael Kiesel Marvin Lakatsch Alexander Mann Karl Lossie Felix Sohnius Robert H. Schmitt
Homomorphic encryption enables secure cloud computing over the complete data lifecycle. As a so-called in-use encryption methodology, it allows encrypted data to be used for, e.g., data analysis, in contrast to classic encryption methods. In-use encryption enables new ways of value creation and an extensive use of cloud computing for manufacturing companies. However, homomorphic encryption is not yet widely implemented in practice, mainly because it incurs higher computation times and supports only a limited set of calculation operations. Nevertheless, for some use cases, the security requirements are far stricter than, e.g., timeliness requirements, so homomorphic encryption might be beneficial. This paper, therefore, analyzes the potential of homomorphic encryption for cloud computing in manufacturing. First, the potential and limitations of both classic and homomorphic encryption are presented on the basis of a literature review. Second, to validate the limitations, simulations are executed, comparing the computation time and data transfer of classic and homomorphic encryption. The results show that homomorphic encryption involves a tradeoff between security, time, and cost, which highly depends on the use case. Therefore, third, manufacturing use cases are identified; the two use cases of predictive maintenance and contract manufacturing are presented in detail, demonstrating how homomorphic encryption can be beneficial.
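The additive homomorphism that makes in-use encryption possible can be illustrated with a toy Paillier cryptosystem; the key sizes below are deliberately tiny and fixed for readability and offer no real security.

```python
import math
import random

# Toy Paillier cryptosystem, which is additively homomorphic:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b. The primes are far too small
# for real use and are fixed only to keep the sketch reproducible.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                                    # standard generator choice
lam = math.lcm(p - 1, q - 1)                 # Carmichael function lambda(n)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # inverse of L(g^lam mod n^2)

def encrypt(m, r=None):
    while r is None or math.gcd(r, n) != 1:
        r = random.randrange(2, n)           # fresh randomness per ciphertext
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 120, 57
c = (encrypt(a) * encrypt(b)) % n2           # computation on ciphertexts only
print(decrypt(c))                            # prints 177, i.e. a + b
```

This is the property the paper's cloud scenarios rely on: the server can aggregate encrypted sensor values without ever seeing the plaintexts, at the cost of the slower modular arithmetic the simulations measure.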
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010003
Authors: Vasileios Vlachos Yannis C. Stamatiou Sotiris Nikoletseas
Instilling good privacy practices in developers and users appears to be a difficult and daunting task. The World Wide Web encompasses a panspermia of different technologies, commercial and open-source APIs, and evolving security standards and protocols that can be deployed towards the implementation of complex, powerful web applications. At the same time, the proliferation of applications and services on all types of devices has also increased the attack surface for privacy threats. In this paper, we present the Privacy Flag Observatory, a platform which is one of the main tools produced by the EU-funded Privacy Flag research project. The goal of this initiative is to raise awareness among European citizens of the potential privacy threats that beset the software and services they trust and use every day, including websites and smartphone applications. The Privacy Flag Observatory is one of the components that contributed, to a large extent, to the success of the project’s goals. It is a real-time security and privacy threat monitoring platform whose aim is to collect, archive, analyze and present security and privacy-related information to the broader public as well as experts. Although the platform relies on crowdsourcing information gathering strategies and interacts with several other components installed on users’ devices or remote servers and databases, in this paper, we focus on the observatory platform, referring only cursorily to other components such as the mobile phone add-on.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010002
Authors: Journal of Cybersecurity and Privacy Editorial Office
High-quality academic publishing is built on rigorous peer review [...]
Journal of Cybersecurity and Privacy doi: 10.3390/jcp3010001
Authors: Mohamed Ali Kazi Steve Woodhead Diane Gan
Banking malware are malicious programs that attempt to steal confidential information, such as banking authentication credentials, from users. Zeus is one of the most widespread banking malware variants ever discovered. Since the Zeus source code was leaked, many other variants of Zeus have emerged, and tools such as anti-malware programs exist that can detect Zeus; however, these have limitations. Anti-malware programs need to be regularly updated to recognise Zeus, and the signatures or patterns can only be made available when the malware has been seen. This limits the capability of these anti-malware products because they are unable to detect unseen malware variants, and furthermore, malicious users are developing malware that seeks to evade signature-based anti-malware programs. In this paper, a methodology is proposed for detecting Zeus malware network traffic flows by using machine learning (ML) binary classification algorithms. This research explores and compares several ML algorithms to determine the algorithm best suited for this problem and then uses these algorithms to conduct further experiments to determine the minimum number of features that could be used for detecting the Zeus malware. This research also explores the suitability of these features when used to detect both older and newer versions of Zeus as well as when used to detect additional variants of the Zeus malware. This will help researchers understand which network flow features could be used for detecting Zeus and whether these features will work across multiple versions and variants of the Zeus malware.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040046
Authors: Maria José Angélico Gonçalves Rui Humberto Pereira Marta Alexandra Guerra Magalhães Coelho
User trust is a fundamental issue in e-commerce. To address this problem, recommendation systems have been widely used in different application domains including social media, healthcare, e-commerce, and others. In this paper, we present a systematic review of the literature in the area of blockchain-based reputation models and discuss the obtained results, answering the initial research questions. These findings lead us to conclude that the existing systems are based on a trusted third party (TTP) to collect and store reputation data, which does not provide transparency on users’ reputation scores. In the recent literature, on the one hand, blockchain-based reputation systems have been highlighted as possible solutions to effectively provide the necessary transparency, as well as effective identity management. On the other hand, new challenges are posed in terms of user privacy and performance, due to the specific characteristics of the blockchain. According to the literature, two major approaches have been proposed, based on public and permissioned blockchains. Each approach applies adjusted models for calculating reputation scores. Despite the undoubted advantages added by a blockchain, the problem is only partially solved, since there is no effective way to prevent blockchain oracles from feeding the chain with false, unfair, or biased data. In our future work, we intend to explore the two approaches discussed in the literature in order to propose a new blockchain-based model for deriving user reputation scores.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040045
Authors: Hunter D. Moore Andrew Stephens William Scherer
Recent efforts have shown that training data is not secured through the generalization and abstraction of algorithms. This vulnerability to the training data has been expressed through membership inference attacks that seek to discover the use of specific records within the training dataset of a model. Additionally, disparate membership inference attacks have been shown to achieve better accuracy compared with their macro attack counterparts. These disparate membership inference attacks use a pragmatic approach to attack individual, more vulnerable sub-sets of the data, such as underrepresented classes. While previous work in this field has explored model vulnerability to these attacks, this effort explores the vulnerability of datasets themselves to disparate membership inference attacks. This is accomplished through the development of a vulnerability-classification model that classifies datasets as vulnerable or secure to these attacks. To develop this model, a vulnerability-classification dataset is developed from over 100 datasets—including frequently cited datasets within the field. These datasets are described using a feature set of over 100 features and assigned labels developed from a combination of various modeling and attack strategies. By averaging the attack accuracy over 13 different modeling and attack strategies, the authors explore the vulnerabilities of the datasets themselves as opposed to a particular modeling or attack effort. The in-class observational distance, width ratio, and the proportion of discrete features are found to dominate the attributes defining dataset vulnerability to disparate membership inference attacks. These features are explored in deeper detail and used to develop exploratory methods for hardening these class-based sub-datasets against attacks showing preliminary mitigation success with combinations of feature reduction and class-balancing strategies.
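For readers unfamiliar with membership inference, the simplest form of the attack, thresholding a model's per-example loss, can be sketched as follows; the loss values are synthetic stand-ins, not outputs of the models or datasets studied in the paper.

```python
import random

random.seed(0)

# Synthetic per-example losses: a trained model typically assigns lower loss
# to its own training records (members) than to unseen records (non-members).
members     = [random.gauss(0.3, 0.15) for _ in range(500)]
non_members = [random.gauss(0.9, 0.30) for _ in range(500)]

def attack(loss, threshold=0.6):
    """Predict 'member' whenever the loss falls below the threshold."""
    return loss < threshold

correct = sum(attack(l) for l in members) + sum(not attack(l) for l in non_members)
accuracy = correct / (len(members) + len(non_members))
print(f"attack accuracy: {accuracy:.2f}")   # well above the 0.5 chance level
```

Disparate variants of this attack apply such thresholds per class or sub-group, which is why underrepresented classes, whose losses separate more cleanly, are the most vulnerable.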
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040044
Authors: Jins de Jong Bart Kamphorst Shannon Kroes
We present a differentially private extension of the block coordinate descent algorithm by means of objective perturbation. The algorithm iteratively performs linear regression in a federated setting on vertically partitioned data. In addition to a privacy guarantee, we derive a utility guarantee; a tolerance parameter indicates how much the differentially private regression may deviate from the analysis without differential privacy. The algorithm’s performance is compared with that of the standard block coordinate descent algorithm on both artificial test data and real-world data. We find that the algorithm is fast and able to generate practical predictions with single-digit privacy budgets, albeit with some accuracy loss.
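The structure of block coordinate descent on vertically partitioned data can be sketched as below; the Gaussian noise added to each update is a simplified stand-in for the paper's calibrated objective perturbation, and all data are synthetic.

```python
import random

random.seed(2)

# Vertically partitioned linear regression: party A holds column x1, party B
# holds column x2, and each party updates only its own coefficient while the
# other is held fixed. The noise_scale argument crudely mimics a privacy/
# utility tolerance; the paper calibrates this noise formally.
n = 200
x1 = [random.uniform(-1, 1) for _ in range(n)]   # party A's feature column
x2 = [random.uniform(-1, 1) for _ in range(n)]   # party B's feature column
y  = [2.0 * a - 1.0 * b + random.gauss(0, 0.05) for a, b in zip(x1, x2)]

def bcd(noise_scale, iters=50):
    w1 = w2 = 0.0
    for _ in range(iters):
        # Party A: exact least-squares minimizer over w1 with w2 fixed
        r = [yi - w2 * b for yi, b in zip(y, x2)]
        w1 = sum(a * ri for a, ri in zip(x1, r)) / sum(a * a for a in x1)
        w1 += random.gauss(0, noise_scale)       # privacy noise stand-in
        # Party B: symmetric update over w2
        r = [yi - w1 * a for yi, a in zip(y, x1)]
        w2 = sum(b * ri for b, ri in zip(x2, r)) / sum(b * b for b in x2)
        w2 += random.gauss(0, noise_scale)
    return w1, w2

print(bcd(noise_scale=0.0))     # close to the true coefficients (2.0, -1.0)
print(bcd(noise_scale=0.05))    # private run: small additional deviation
```

The gap between the two runs is the utility cost the paper's tolerance parameter quantifies.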
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040043
Authors: Mohammed A. Ahmed Hatem F. Sindi Majid Nour
Hospitals have been historically known for their strong risk mitigation policies and designs, which are not becoming easier or simpler to plan and operate. Currently, new technologies and devices are developed every day in the medical industry. These devices, systems, and personnel are in an ever-higher state of connection to the network and servers, which necessitates the use of stringent cybersecurity policies. Therefore, this work aims to comprehensively identify, quantify, and model the cybersecurity status quo in healthcare facilities. The developed model is going to allow healthcare organizations to understand the imminent operational risks and to identify which measures to improve or add to their system in order to mitigate those risks. Thus, in this work we will develop a novel assessment tool to provide hospitals with a proper reflection of their status quo, which will assist hospital designers in adding the suggested cyber risk mitigation measures to the design itself before operation.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040042
Authors: Ayşe Ünsal Melek Önen
This work studies the power of adversarial attacks against machine learning algorithms in which the adversary uses differentially private mechanisms as a weapon. In our setting, the adversary aims to modify the content of a statistical dataset via the insertion of additional data without being detected, exploiting differential privacy to her/his own benefit. The goal of this study is to evaluate how easy it is to detect such attacks (anomalies) when the adversary makes use of Gaussian and Laplacian perturbation, using both statistical and information-theoretic tools. To this end, firstly, via hypothesis testing, we characterize statistical thresholds for the adversary in various settings, which balance the privacy budget and the impact of the attack (the modification applied to the original data) in order to avoid being detected. In addition, we establish the privacy-distortion trade-off in the sense of the well-known rate-distortion function for the Gaussian mechanism by using an information-theoretic approach. Accordingly, we derive an upper bound on the variance of the attacker’s additional data as a function of the sensitivity and the original data’s second-order statistics. Lastly, we introduce a new privacy metric based on Chernoff information for anomaly detection under differential privacy, as a stronger alternative to (ϵ,δ)-differential privacy in Gaussian mechanisms. Analytical results are supported by numerical evaluations.
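The setting can be illustrated with the Gaussian mechanism on a sum query and a simple z-score detector; the parameters and the size of the injected mass are illustrative, whereas the paper derives the corresponding detection thresholds analytically.

```python
import math
import random

random.seed(1)

def gaussian_mechanism(true_value, sensitivity, eps, delta):
    # Classic (eps, delta)-DP calibration for eps < 1:
    # sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / eps
    return true_value + random.gauss(0, sigma), sigma

# Each record lies in [0, 1], so one insertion changes the sum by at most 1.
data = [random.uniform(0, 1) for _ in range(1000)]
honest_sum = sum(data)
attacked_sum = honest_sum + 60      # adversary slips extra mass into the dataset

for label, value in [("honest", honest_sum), ("attacked", attacked_sum)]:
    noisy, sigma = gaussian_mechanism(value, sensitivity=1.0, eps=0.5, delta=1e-5)
    z = abs(noisy - honest_sum) / sigma    # detector's test statistic
    print(label, round(z, 2))
```

The attacker's dilemma, mirrored in the paper's thresholds, is that an insertion small enough to hide inside the noise (z below the detector's cutoff) also has a small impact on the query answer.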
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040041
Authors: Umm-e-Hani Tayyab Faiza Babar Khan Muhammad Hanif Durad Asifullah Khan Yeon Soo Lee
Monitoring Indicators of Compromise (IOCs) leads to malware detection for identifying malicious activity. Malicious activities potentially lead to a system breach or data compromise. Various tools and anti-malware products exist for the detection of malware and cyberattacks utilizing IOCs, but all have several shortcomings. For instance, anti-malware systems make use of malware signatures, requiring a database containing such signatures to be constantly updated. Additionally, this technique does not work for zero-day attacks or variants of existing malware. In the quest to fight zero-day attacks, the research paradigm shifted from primitive methods to classical machine learning-based methods. Primitive methods are limited in catering to anti-analysis techniques against zero-day attacks. Hence, the direction of research moved towards methods utilizing classical machine learning; however, machine learning methods also come with certain limitations. These include, but are not limited to, the latency introduced by running the feature-engineering phase over the entire training dataset, which conflicts with real-time analysis requirements. Likewise, the additional layers of data engineering needed to cater to the increasing volume of data introduce further delays. This led to the use of deep learning-based methods for malware detection. Given the speed at which zero-day malware appears, researchers have experimented with few-shot learning so that reliable solutions can be produced for malware detection with even a small amount of data at hand for training. In this paper, we survey several possible strategies to support the real-time detection of malware and propose a hierarchical model to discover security events or threats in real time. A key focus in this survey is on the use of deep learning-based methods, which dominate this research area by providing automatic feature engineering, the capability of dealing with large datasets, the mining of features from limited data samples, and support for one-shot learning. We compare deep learning-based approaches with conventional machine learning-based approaches and primitive (statistical analysis-based) methods commonly reported in the literature.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040040
Authors: Panayiotis Kalogeropoulos Dimitris Papanikas Panayiotis Kotzanikolaou
Although Vehicle to Infrastructure (V2I) communications greatly improve the efficiency of early warning systems for car safety, communication privacy is an important concern. While solutions exist in the literature for privacy-preserving VANET communications, they usually require high trust assumptions for a single authority. In this paper, we propose a distributed trust model for privacy-preserving V2I communications. Trust is distributed among a certification authority that issues the vehicles’ credentials and a signing authority that anonymously authenticates V2I messages in a zero-knowledge manner. Anonymity is based on bilinear pairings and partially blind signatures. In addition, our system supports enhanced conditional privacy, since both authorities and the relevant RSU need to collaborate to trace a message back to a vehicle, while efficient certificateless revocation is supported. Moreover, our scheme provides strong unframeability for honest vehicles: even if all the entities collude, it is not possible to frame an honest vehicle by tracing a forged message back to it. The proposed scheme concurrently achieves conditional privacy and strong unframeability for vehicles without assuming a fully trusted authority. Our evaluation results show that the system allows RSUs to efficiently handle multiple messages per second, which suffices for real-world implementations.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2040039
Authors: Maha Alghawazi Daniyal Alghazzawi Suaad Alarifi
SQL injection attacks, which usually occur when attackers modify, delete, read, or copy data from database servers, are among the most damaging of web application attacks. A successful SQL injection attack can affect all aspects of security, including confidentiality, integrity, and data availability. SQL (Structured Query Language) is used to represent queries to database management systems. Detection and deterrence of SQL injection attacks, for which techniques from different areas can be applied to improve the detectability of the attack, is not a new area of research, but it is still relevant. Artificial intelligence and machine learning techniques have been tested and used to control SQL injection attacks, showing promising results. The main contribution of this paper is to cover relevant work related to the different machine learning and deep learning models used to detect SQL injection attacks. With this systematic review, we aim to keep researchers up-to-date and contribute to the understanding of the intersection between SQL injection attacks and the artificial intelligence field.
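The feature-based ML pipeline shared by many of the surveyed detectors can be sketched end-to-end in a few lines; the features, training queries, and model below are invented for illustration and are orders of magnitude smaller than any real system.

```python
import math
import re

def features(q):
    """Turn a query string into a small numeric feature vector."""
    q = q.lower()
    return [
        q.count("'"),                                          # quote characters
        len(re.findall(r"\b(or|union|select|drop)\b", q))
            + q.count("--") + q.count(";"),                    # suspicious tokens
        1.0 if re.search(r"\bor\b\s+\S+\s*=", q) else 0.0,     # tautology pattern
        1.0,                                                   # bias term
    ]

train = [
    ("select name from users where id = 4", 0),
    ("select * from items where cat = 'books'", 0),
    ("update users set mail = 'a@b.c' where id = 7", 0),
    ("' or '1'='1' --", 1),
    ("1; drop table users; --", 1),
    ("' union select password from users --", 1),
]

# Tiny logistic regression trained by plain gradient ascent on the log-likelihood.
w = [0.0] * 4
for _ in range(500):
    for q, y in train:
        x = features(q)
        p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
        w = [wi + 0.1 * (y - p) * xi for wi, xi in zip(w, x)]

def is_injection(q):
    x = features(q)
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x)))) > 0.5

print(is_injection("' or 1=1 --"), is_injection("select id from users where age = 30"))
```

The surveyed papers replace each stage with heavier machinery (tokenizers or embeddings for the features, deep networks for the classifier), but the structure is the same.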
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030038
Authors: Miloš Stanković Umit Karabiyik
Mobile devices, specifically smartphones, have become a necessity in everyday life, as we perform many essential day-to-day tasks using these devices. With the projected increase in mobile devices to 18.22 billion by 2025, the reliance on smartphones will only grow. This demand for smartphones has allowed various companies to start developing their own devices and custom operating systems, each of which puts its own touch on them. In addition, current smartphones have increased processing power, providing users with a computer experience in their pockets. Software developers have taken this opportunity to bridge the gap between personal computers and smartphones by creating the same software for personal computers and mobile devices. Kali Linux is one of the most popular penetration testing tools for desktop use and has been adapted to operate on mobile devices under the name Kali NetHunter. Kali NetHunter has three different versions on mobile platforms that provide various levels of capabilities. Kali NetHunter is just one example in which an application or an operating system applies to a specific niche of users. Highly customized operating systems or applications do not receive the same attention as field research, leaving them unfamiliar to mobile forensic investigators when used maliciously. In this paper, we conducted an exploratory study on the Kali NetHunter Lite application after it was installed and its embedded tools were utilized. Our results show a detailed analysis of the file system and reveal the data from the tests carried out during various phases. Furthermore, the locations of the folders involved in the process were described.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030037
Authors: Andreas Puder Marcel Rumez Daniel Grimm Eric Sax
To implement new software functions and more flexible updates in the future, as well as to provide cloud-based functionality, the service-oriented architecture (SOA) paradigm is increasingly being integrated into automotive electrical and electronic architectures (E/E architectures). In addition to the automotive industry, the medical industry is also researching SOA-based solutions to increase the interoperability of devices (vendor-independent). The resulting service-oriented communication is no longer fully specified during design time, which affects information security measures. In this paper, we compare different SOA protocols for the automotive and medical fields. Furthermore, we explain the underlying communication patterns and derive features for the development of an SOA-based Intrusion Detection System (IDS).
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030036
Authors: A M Mahmud Chowdhury Masudul Haider Imtiaz
Contactless fingerprint identification systems have been introduced to address the deficiencies of contact-based fingerprint systems. A number of studies have been reported regarding contactless fingerprint processing, including classical image processing, the machine-learning pipeline, and a number of deep-learning-based algorithms. The deep-learning-based methods were reported to have higher accuracies than their counterparts. This study was thus motivated to present a systematic review of these successes and the reported limitations. Three methods were researched for this review: (i) the finger photo capture method and corresponding image sensors, (ii) the classical preprocessing method to prepare a finger image for a recognition task, and (iii) the deep-learning approach for contactless fingerprint recognition. Eight scientific articles were identified that matched all inclusion and exclusion criteria. Based on inferences from this review, we have discussed how deep learning methods could benefit the field of biometrics and the potential gaps that deep-learning approaches need to address for real-world biometric applications.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030035
Authors: Pedro Sousa António Pinto Pedro Pinto
Messaging services are usually provided within social network platforms and allow these platforms to collect additional information about users, such as what time, for how long, with whom, and where a user communicates. This information allows the identification of users and is available to the messaging service provider even when communication is encrypted end-to-end. Thus, a gap still exists for alternative messaging services that enable anonymous and confidential communication and that are independent of a specific online service. Online services can still be used to support this messaging service, but in a way that enables users to communicate anonymously and without the knowledge and scrutiny of the online services. In this paper, we propose messaging using steganography and online services to support anonymous and confidential communication. In the proposed messaging service, only the sender and the receiver are aware of the existence of the exchanged data, even if the online services used or other third parties have access to the exchanged secret data containers. This work reviews the viability of using existing online services to support the proposed messaging service. Moreover, a proof-of-concept of the proposed message service is implemented and tested using two online services acting as proxies in the exchange of encrypted information disguised within images and links to those images. The obtained results confirm the viability of such a messaging service.
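The core embedding idea, hiding payload bits in the least-significant bits of a cover's pixel bytes, can be sketched as follows; this is a generic LSB illustration, not the paper's implementation, and a real deployment would encrypt the payload before embedding it.

```python
def embed(cover: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for payload"
    stego = bytearray(cover)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & 0xFE) | bit   # overwrite only the lowest bit
    return stego

def extract(stego: bytearray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the low bits, in the same bit order."""
    return bytes(
        sum((stego[b * 8 + i] & 1) << i for i in range(8))
        for b in range(n_bytes)
    )

cover = bytearray(range(256)) * 2            # stand-in for raw pixel bytes
stego = embed(cover, b"meet at 9pm")
print(extract(stego, 11))                    # prints b'meet at 9pm'
```

Because each pixel byte changes by at most one in value, the stego image is visually indistinguishable from the cover, which is what lets an online image-hosting service act as an unwitting relay.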
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030034
Authors: Christina L. Phibbs Shawon S. M. Rahman
Older adults in the U.S. are interested in maintaining independence, aging at home longer, and staying active. Their substantial size, market share, and household wealth sparked the interest of investors and developers in remote monitoring, smart homes, ambient-assisted living, tracking, applications, and sensors via the IoT. This study used the unified theory of acceptance and use of technology extended (UTAUT2). The overarching research question was: “To what extent do performance, effort, influence, conditions, motivation, price, and habit affect older adults’ behavioral intent to use IoT technologies in their homes?” The research methodology for this study was a nonexperimental correlation of the variables that affect older adults’ intention to use IoT-enabled technologies in their homes. The population was adults 60 plus years in northern Virginia. The sample consisted of 316 respondents. The seven predictors cumulatively influenced older adults’ behavioral intent to use IoT-enabled technologies, F(7, 308) = 133.50, p < 0.001, R2 = 0.75. The significant predictors of behavioral intention to use IoT technologies were performance expectancy (B = 0.244, t(308) = 4.427, p < 0.001), social influence (B = 0.138, t(308) = 3.4775, p = 0.001), facilitating conditions (B = 0.184, t(308) = 2.999, p = 0.003), hedonic motivation (B = 0.153, t(308) = 2.694, p = 0.007), price value (B = 0.140, t(308) = 3.099, p = 0.002), and habit (B = 0.378, t(308) = 8.696, p < 0.001). Effort expectancy was insignificant (B = −0.026, t(308) = −0.409, p = 0.683). This study filled the gap in research on older adults’ acceptance of IoT by focusing specifically on that population. The findings help reduce the risk of solutions driven by technological and organizational requirements rather than the older adults’ unique needs and requirements. The study revealed that older adults may be susceptible to undue influence to adopt IoT solutions. 
These socioeconomic dimensions of the UTAUT2 are essential to the information technology field because the actualizing of IoT-enabled technologies in private homes depends on older adults’ participation and adoption. This research is beneficial to IoT developers, implementers, cybersecurity researchers, healthcare providers, caregivers, and managers of in-home care providers regarding adding IoT technologies in their homes.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030033
Authors: Rachida Hireche Houssem Mansouri Al-Sakib Khan Pathan
The Internet of Medical Things (IoMT) has become a strategic priority for future e-healthcare because of its ability to improve patient care and its scope of providing more reliable clinical data, increasing efficiency, and reducing costs. Unsurprisingly, many healthcare institutions now seek to harness the benefits offered by the IoMT, an infrastructure of connected medical devices, software applications, and care systems and services. However, the accelerated adoption of connected devices also has a serious side effect: it obscures the broader need to meet the requirements of standard security for modern converged environments (even beyond connected medical devices). Adding more types and greater numbers of devices risks creating significant security vulnerabilities. In this paper, we have undertaken a study of various security techniques dedicated to this environment during recent years. This study enables us to classify these techniques and to characterize them in order to benefit from their positive aspects.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030032
Authors: Jessil Fuhr Feng Wang Yongning Tang
Optimizing the monitoring of network traffic features to detect abnormal traffic is critical. We propose a two-stage monitoring and classification (MOCA) system requiring fewer features to detect and classify malicious network attacks. The first stage monitors abnormal traffic, and the anomalous traffic is forwarded for processing in the second stage. A small subset of features trains both classifiers. We demonstrate MOCA’s effectiveness in identifying attacks in the CICIDS2017 dataset with an accuracy of 99.84% and in the CICDDOS2019 dataset with an accuracy of 93%, which significantly outperforms previous methods. We also found that MOCA can use a pre-trained classifier with one feature to distinguish DDoS and Botnet attacks from normal traffic in four different datasets. Our measurements show that MOCA can distinguish DDoS attacks from normal traffic in the CICDDOS2019 dataset with an accuracy of 96% and DDoS attacks in non-IoT and IoT traffic with an accuracy of 99.94%. The results emphasize the importance of using connection features to discriminate new DDoS and Bot attacks from benign traffic, especially with insufficient training samples.
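A minimal sketch of the two-stage monitor-then-classify idea described above; the threshold, features, and per-class centroids are invented for illustration and are not the authors' implementation, which trains both stages on CICIDS/CICDDOS features.

```python
# Hypothetical sketch of a two-stage pipeline in the spirit of MOCA.
# Stage 1 flags anomalous flows with a cheap check; only flagged flows
# reach Stage 2, which assigns an attack label. All values are invented.
from math import dist

def stage1_is_anomalous(flow, threshold=100.0):
    """Flag a flow whose packet rate exceeds a simple threshold."""
    return flow["packets_per_sec"] > threshold

# Invented per-class centroids over (packets_per_sec, bytes_per_packet).
CENTROIDS = {"ddos": (5000.0, 60.0), "botnet": (300.0, 900.0)}

def stage2_classify(flow):
    """Nearest-centroid attack classification for flagged flows."""
    point = (flow["packets_per_sec"], flow["bytes_per_packet"])
    return min(CENTROIDS, key=lambda label: dist(point, CENTROIDS[label]))

def pipeline(flow):
    if not stage1_is_anomalous(flow):
        return "benign"          # cheap path: never reaches Stage 2
    return stage2_classify(flow)

print(pipeline({"packets_per_sec": 20.0, "bytes_per_packet": 400.0}))
print(pipeline({"packets_per_sec": 4800.0, "bytes_per_packet": 64.0}))
```

The design point this illustrates is that most benign traffic exits after the inexpensive first stage, so the costlier classifier only runs on the small anomalous fraction.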
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030031
Authors: Christoph Stach Michael Behringer Julia Bräcker Clémentine Gritti Bernhard Mitschang
Two factors are crucial for the effective operation of modern-day smart services: Initially, IoT-enabled technologies have to capture and combine huge amounts of data on data subjects. Then, all these data have to be processed exhaustively by means of techniques from the area of big data analytics. With regard to the latter, thorough data refinement in terms of data cleansing and data transformation is the decisive cornerstone. Studies show that data refinement reaches its full potential only by involving domain experts in the process. However, this means that these experts need full insight into the data in order to be able to identify and resolve any issues therein, e.g., by correcting or removing inaccurate, incorrect, or irrelevant data records. In particular for sensitive data (e.g., private data or confidential data), this poses a problem, since these data are thereby disclosed to third parties such as domain experts. To this end, we introduce SMARTEN, a sample-based approach towards privacy-friendly data refinement to smarten up big data analytics and smart services. SMARTEN applies a revised data refinement process that fully involves domain experts in data pre-processing but does not expose any sensitive data to them or any other third-party. To achieve this, domain experts obtain a representative sample of the entire data set that meets all privacy policies and confidentiality guidelines. Based on this sample, domain experts define data cleaning and transformation steps. Subsequently, these steps are converted into executable data refinement rules and applied to the entire data set. Domain experts can request further samples and define further rules until the data quality required for the intended use case is reached. Evaluation results confirm that our approach is effective in terms of both data quality and data privacy.
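The sample-then-generalize workflow can be sketched as follows; the data set, sampling call, and sentinel-value rule are all invented for illustration, and the real SMARTEN additionally enforces privacy policies and confidentiality guidelines on the released sample.

```python
# Hypothetical sketch of SMARTEN-style sample-based refinement: the domain
# expert only ever sees a small sample, expresses an insight as an
# executable rule, and the rule is applied to the full sensitive data set.
import random

full_dataset = [{"id": i, "temp_c": t}
                for i, t in enumerate([21.5, 22.0, -999.0, 23.1, -999.0, 20.8])]

def draw_sample(dataset, k, seed=0):
    """Representative sample released to the expert (privacy checks omitted)."""
    rng = random.Random(seed)
    return rng.sample(dataset, k)

# The expert inspects samples, notices -999.0 sentinel values, and encodes
# that insight as a refinement rule:
def refinement_rule(record):
    return record["temp_c"] != -999.0   # drop records carrying the sentinel

sample = draw_sample(full_dataset, k=3)
print([r["temp_c"] for r in sample])           # all the expert ever sees
cleaned = [r for r in full_dataset if refinement_rule(r)]
print(len(cleaned))                            # rule applied to everything
```

If the resulting quality is insufficient, the expert would request another sample and add further rules, mirroring the iterative loop the paper describes.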
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030030
Authors: Shao-Fang Wen Ankur Shukla Basel Katt
Security assurance (SA) is a technique that helps organizations to appraise the trust and confidence that a system can be operated correctly and securely. To foster effective SA, there must be systematic techniques to reflect the fact that the system meets its security requirements and, at the same time, is resilient against security vulnerabilities and failures. Quantitative SA evaluation applies computational and mathematical techniques for deriving a set of SA metrics to express the assurance level that a system reaches. Such metrics are intended to quantify the strengths and weaknesses of the system that can be used to support improved decision making and strategic planning initiatives. Utilizing metrics to capture and evaluate a system’s security posture has gained attention in recent years. However, little work has described how to conduct SA evaluation in a way that takes both SA metric modeling and analysis into account. This paper aims to develop a novel approach for the modeling, calculation, and analysis of SA metrics that could ultimately enhance quantitative SA evaluation.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030029
Authors: William J. Triplett
This article identifies human factors in workplaces that contribute to the challenges faced by cybersecurity leadership within organizations and discusses strategic communication, human–computer interaction, organizational factors, social environments, and security awareness training. Cybersecurity does not simply focus on information technology systems; it also considers how humans use information systems and the susceptible actions that lead to vulnerabilities. As cyber leaders begin to identify human behavior and processes and collaborate with individuals of the same mindset, an organization’s strategy can improve substantially. Cybersecurity has been an expanding focal point from the viewpoint of human factors. Human error can be unintentional, arising either from the flawed implementation of a sound strategy or from the faithful implementation of a flawed plan. A systematic literature review was conducted to identify unintentional human factors in cybersecurity leadership. The results indicate that humans were the weakest link during the transmission of secure data. Furthermore, specific complacent and unintentional behaviors were observed, enabled by the ignorance of leaders and employees. Therefore, cybersecurity enforcement should focus on education, awareness, and communication. A research agenda is outlined, highlighting a further need for interdisciplinary research. This study adopts an original approach by viewing security from a human perspective and assessing how people can reduce cybersecurity incidents.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030028
Authors: Hannah Nyholm Kristine Monteith Seth Lyles Micaela Gallegos Mark DeSantis John Donaldson Claire Taylor
The collection and analysis of volatile memory is a vibrant area of research in the cybersecurity community. The ever-evolving and growing threat landscape is trending towards fileless malware, which avoids traditional detection but can be found by examining a system’s random access memory (RAM). Additionally, volatile memory analysis offers great insight into other malicious vectors. It contains fragments of encrypted files’ contents, as well as lists of running processes, imported modules, and network connections, all of which are difficult or impossible to extract from the file system. For these compelling reasons, recent research efforts have focused on the collection of memory snapshots and methods to analyze them for the presence of malware. However, to the best of our knowledge, no current reviews or surveys exist that systematize the research on both memory acquisition and analysis. We fill that gap with this novel survey by exploring the state-of-the-art tools and techniques for volatile memory acquisition and analysis for malware identification. For memory acquisition methods, we explore the trade-offs many techniques make between snapshot quality, performance overhead, and security. For memory analysis, we examine the traditional forensic methods used, including signature-based methods, dynamic methods performed in a sandbox environment, and machine learning-based approaches. We summarize the currently available tools and suggest areas for more research.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030027
Authors: Mostofa Ahsan Kendall E. Nygard Rahul Gomes Md Minhaz Chowdhury Nafiz Rifat Jayden F Connolly
Machine learning is of rising importance in cybersecurity. The primary objective of applying machine learning in cybersecurity is to make the process of malware detection more actionable, scalable, and effective than traditional approaches, which require human intervention. The cybersecurity domain involves machine learning challenges that require efficient methodical and theoretical handling. Several machine learning and statistical methods, such as deep learning, support vector machines and Bayesian classification, among others, have proven effective in mitigating cyber-attacks. Detecting hidden trends and insights in network data and building corresponding data-driven machine learning models to prevent attacks are vital to the design of intelligent security systems. In this survey, the focus is on the machine learning techniques that have been implemented on cybersecurity data to make these systems secure. Existing cybersecurity threats and how machine learning techniques have been used to mitigate these threats have been discussed. The shortcomings of these state-of-the-art models and how attack patterns have evolved over the past decade have also been presented. Our goal is to assess how effective these machine learning techniques are against the ever-increasing threat of malware that plagues our online community.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030026
Authors: Daniel Spiekermann Jörg Keller
Currently, network environments are complex infrastructures with different levels of security, isolation and permissions. The management of these networks is a complex task, faced with different issues such as adversarial attacks, user demands, virtualisation layers, secure access and performance optimisation. In addition to this, forensic readiness is a demanded target. To cover all these aspects, network packet captures are used to train new staff, evaluate new security features and improve existing implementations. Because of this, realistic network packet captures are needed that cover all appearing aspects of the network environment. Packet generators are used to create network traffic, simulating real network environments. There are different network packet generators available, but no accepted rule set defines the requirements for packet generators. The manual creation of such network traces is a time-consuming and error-prone task, and the inherent behaviour of virtual networks precludes straightforward automation of trace generation, in contrast to conventional networks. Hence, we analyse relevant conditions of modern virtualised networks and define relevant requirements for a valid packet generation and transformation process. From this, we derive recommendations for the implementation of packet generators that provide valid and correct packet captures for use with virtual networks.
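To make the notion of packet generation concrete, the following pure-Python sketch synthesizes one Ethernet/IPv4/UDP packet and writes it in the classic libpcap file format; real generators layer timing models, flows, and protocol variety on top of these mechanics, and all addresses and ports here are invented.

```python
# Minimal packet-generator mechanics: build one Ethernet/IPv4/UDP frame
# and write it as a libpcap capture file, using only the stdlib.
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard ones'-complement sum over 16-bit words."""
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_udp_packet(payload: bytes) -> bytes:
    eth = bytes(6) + bytes(6) + b"\x08\x00"   # zeroed MACs, EtherType IPv4
    # UDP header: src port, dst port, length, checksum 0 (omitted for IPv4)
    udp = struct.pack("!HHHH", 12345, 53, 8 + len(payload), 0) + payload
    total_len = 20 + len(udp)
    ip_wo_cksum = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 1, 0,
                              64, 17, 0, bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
    cksum = ipv4_checksum(ip_wo_cksum)
    ip = ip_wo_cksum[:10] + struct.pack("!H", cksum) + ip_wo_cksum[12:]
    return eth + ip + udp

def write_pcap(path: str, packets: list) -> None:
    with open(path, "wb") as f:
        # pcap global header: magic, v2.4, tz, sigfigs, snaplen, linktype 1
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
        for ts, pkt in packets:
            f.write(struct.pack("<IIII", ts, 0, len(pkt), len(pkt)))
            f.write(pkt)

write_pcap("demo.pcap", [(0, build_udp_packet(b"hello"))])
```

The resulting file opens in standard capture tools; a generator for virtual networks would additionally have to respect the virtualisation-layer conditions the paper analyses.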
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030025
Authors: Tibor Pósa Jens Grossklags
The emergence of the COVID-19 pandemic in early 2020 has transformed how individuals work and learn and how they can apply cyber-security requirements in their mostly remote environments. This transformation also affected the university student population; some needed to adjust to new remote work settings, and all needed to adjust to the new remote study environment. In this online research study, we surveyed a large number of university students (n = 798) to understand their expectations in terms of support and help for this new remote work and study environment. We also asked students to report on their practices regarding remote location and Wi-Fi security settings, smart home device usage, BYOD (bring your own device) and personal device usage and social engineering threats, which can all lead to compromised security. A key aspect of our work is a comparison between the practices of students having work experience with the practices of students having no such additional experience. We identified that both the expectations and the level of cyber-security awareness differ significantly between the two student populations and that cyber-security awareness is increased by work experience. Students with work experience are more aware of the cyber-security risks associated with a remote environment, and a higher proportion of them know the dedicated employee to contact in the event of incidents. We present the organizational security practices through the lens of employees with initial work experience, contributing to a topic that has so far received only limited attention from researchers. We provide recommendations for remote study settings and also for remote work environments, especially where the existing research literature survey results differ from the findings of our survey.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030024
Authors: Rosemary Cosmas Tlatlaa Panga Janeth Marwa Jema David Ndibwile
Recently, phishing attacks have been increasing tremendously, and attackers discover new techniques every day to deceive users. With the advancement of technology, teenagers are considered the most technologically advanced generation, having grown up with the availability of the internet and mobile devices. However, as end-users, they are also considered the weakest link for these attacks to be successful, as they still show poor cybersecurity hygiene and practices. Despite several efforts to educate and provide awareness on the prevention of phishing attacks, less has been done to develop tools that educate teenagers about protecting themselves from phishing attacks while considering their socioeconomic and sociocultural differences. This research contributes a customized educational mobile game that fits the African context, accounting for these differences among participants. We initially conducted a survey to assess teenagers’ phishing and cybersecurity knowledge in secondary schools categorized as international, private, and government schools. We then developed a customized mobile game for the African context that takes these differences into consideration. We compared the phishing knowledge of teenagers taught using the game with that of teenagers taught using a traditional method, which consisted of reading notes. The results revealed that teenagers’ phishing and cybersecurity knowledge differs based on their socioeconomic and sociocultural background. For instance, students from international and private schools, and those living in urban areas, had better phishing knowledge than those from government schools and rural areas. On the other hand, participants who had a poor performance in the first assessment improved their knowledge after playing the game. In addition, participants who played the game retained their phishing knowledge two weeks later better than their counterparts who only read notes.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2030023
Authors: Abdullah F. Al-Aboosi Matan Broner Fadhil Y. Al-Aboosi
A lack of security best practices in modern password storage has led to a dramatic rise in the number of online data breaches, resulting in financial damages and lowered trust in online service providers. This work aims to explore the question of how leveraging decentralized storage paired with a centralized point of authentication may combat such attacks. A solution, “Bingo”, is presented, which implements browser-side clients that store password shares for a centralized proxy server. Bingo is a fully formed system that allows modern browsers to store and retrieve a dynamic number of anonymized password shares, which are used when authenticating users. Thus, Bingo is the first solution to prove that distributed password storage functions in the context of the modern web. Furthermore, Bingo is evaluated in both simulation and the cloud in order to show that it achieves high rates of system liveness despite its dependence on its users being active at given intervals. In addition, a novel simulator is presented which allows future researchers to mock scheduled behavior of online users. This work concludes that with the rise in online activity, decentralization may play a role in increasing data security.
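Distributed password shares of the kind Bingo stores can be illustrated with a generic n-of-n XOR sharing scheme; this is a textbook construction, not the authors' protocol, which additionally handles anonymization, client churn, and proxy-mediated retrieval.

```python
# Textbook n-of-n XOR secret sharing: every share is required to
# reconstruct the secret, and any subset of fewer than n shares is
# statistically independent of it.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """Produce n shares: n-1 random pads plus one correcting share."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list) -> bytes:
    """XOR all shares back together to recover the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

shares = split(b"correct horse battery staple", 4)
print(combine(shares))
```

A breach of any single share-holding client therefore reveals nothing about the password, which is the property a distributed store of this kind relies on.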
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020022
Authors: Kimia Ameri Michael Hempel Hamid Sharif Juan Lopez Jr. Kalyan Perumalla
This paper presents our research approach and findings towards maximizing the accuracy of our classifier of feature claims for cybersecurity literature analytics, and introduces the resulting model ClaimsBERT. Its architecture, after extensive evaluations of different approaches, introduces a feature map concatenated with a Bidirectional Encoder Representations from Transformers (BERT) model. We discuss deployment of this new concept and the research insights that resulted in the selection of Convolutional Neural Networks for its feature mapping aspects. We also present our results showing ClaimsBERT to outperform all other evaluated approaches. This new claims classifier represents an essential processing stage within our vetting framework aiming to improve the cybersecurity of industrial control systems (ICS). Furthermore, in order to maximize the accuracy of our new ClaimsBERT classifier, we propose an approach for optimal architecture selection and determination of optimized hyperparameters, in particular the best learning rate, number of convolutions, filter sizes, activation function, the number of dense layers, as well as the number of neurons and the drop-out rate for each layer. Fine-tuning these hyperparameters within our model led to an increase in classification accuracy from the 76% obtained with the original BertForSequenceClassification model to 97% with ClaimsBERT.
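The feature-map idea can be illustrated with a toy 1-D convolution plus max-over-time pooling in pure Python; random vectors stand in for BERT's contextual embeddings, and all dimensions here are invented rather than taken from ClaimsBERT.

```python
# Toy illustration of a CNN "feature map" over a token-embedding sequence:
# slide one filter across consecutive tokens, then max-pool the responses
# into a single scalar feature. Random vectors stand in for BERT output.
import random

random.seed(7)
EMBED_DIM, SEQ_LEN, FILTER_SIZE = 8, 10, 3

# Stand-in for BERT output: SEQ_LEN token embeddings of EMBED_DIM floats.
embeddings = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)]
              for _ in range(SEQ_LEN)]
# One convolution filter spanning FILTER_SIZE consecutive tokens.
filt = [[random.uniform(-1, 1) for _ in range(EMBED_DIM)]
        for _ in range(FILTER_SIZE)]

def conv1d(seq, filt):
    """Valid 1-D convolution over the token axis (no padding)."""
    k = len(filt)
    out = []
    for start in range(len(seq) - k + 1):
        window = seq[start:start + k]
        out.append(sum(w * x for row_f, row_x in zip(filt, window)
                       for w, x in zip(row_f, row_x)))
    return out

feature_map = conv1d(embeddings, filt)
pooled = max(feature_map)   # max-over-time pooling: one feature per filter
print(len(feature_map), round(pooled, 3))
```

In a model like ClaimsBERT, many such filters of several sizes produce a vector of pooled features that is then concatenated with the BERT representation before the dense classification layers.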
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020021
Authors: Michel Walrave Joris Van Ouytsel Kay Diederen Koen Ponnet
Human resource (HR) professionals who assess job candidates may engage in cybervetting, the collection and analysis of applicants’ personal information available on social network sites (SNS). This raises important questions about the privacy of job applicants. In this study, interviews were conducted with 24 HR professionals from profit and governmental organizations to examine how information found on SNS is used to screen job applicants. HR managers were found to check for possible mismatches between the online information and the experiences and competences claimed by candidates. Pictures of the job candidates’ spare time activities, drinking behavior, and physical appearance are seen as very informative. Pictures posted by job candidates’ connections are valued as more informative than those posted by the applicants themselves. Governmental organizations’ HR managers differ from profit-sector professionals by the fact that political views may play a role for the former. Finally, some HR professionals do not collect personal information about job candidates through social media, since they aim to respect a clear distinction between private life and work. They do not want to be influenced by information that has no relation with candidates’ qualifications. The study’s implications for theory and practice are also discussed.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020020
Authors: Griffith Russell McRee
Security analysts working in the modern threat landscape face excessive events and alerts, a high volume of false-positive alerts, significant time constraints, innovative adversaries, and a staggering volume of unstructured data. Organizations thus risk data breach, loss of valuable human resources, reputational damage, and impact to revenue when excessive security alert volume and a lack of fidelity degrade detection services. This study examined tactics to reduce security data fatigue, increase detection accuracy, and enhance security analysts’ experience using security alert output generated via data science and machine learning models. The research determined if security analysts utilizing this security alert data perceive a statistically significant difference in usability between security alert output that is visualized versus that which is text-based. Security analysts benefit twofold: from the efficiency of results derived at scale via ML models and from the quality of the alerts those models produce. This quantitative, quasi-experimental, explanatory study conveys survey research performed to understand security analysts’ perceptions via the Technology Acceptance Model. The population studied was security analysts working in a defender capacity, analyzing security monitoring data and alerts. The more specific sample was security analysts and managers in Security Operation Center (SOC), Digital Forensic and Incident Response (DFIR), Detection and Response Team (DART), and Threat Intelligence (TI) roles. Data analysis indicated a significant difference in security analysts’ perception of usability in favor of visualized alert output over text alert output. The study’s results showed how organizations can more effectively combat external threats by emphasizing visual rather than textual alerts.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020019
Authors: Haozhe Zhou Amin Milani Fard Adetokunbo Makanju
Smart contracts are self-executing programs that run on the blockchain and make it possible for peers to enforce agreements without a third-party guarantee. The smart contract on Ethereum is the fundamental element of decentralized finance with billions of US dollars in value. Smart contracts cannot be changed after deployment and hence the code needs to be verified for potential vulnerabilities. However, smart contracts are far from secure, and attacks exploiting their vulnerabilities have led to losses valued in the millions. In this work, we explore the current state of smart contracts security, prevalent vulnerabilities, and security-analysis tool support, by reviewing the latest advancements and research published in the past five years. We study 13 vulnerabilities in Ethereum smart contracts and their countermeasures, and investigate nine security-analysis tools. Our findings indicate that a uniform set of smart contract vulnerability definitions does not exist in research work and bugs pertaining to the same mechanisms sometimes appear with different names. This inconsistency makes it difficult to identify, categorize, and analyze vulnerabilities. We explain some safeguarding approaches and best practices. However, as technology improves, new vulnerabilities may emerge. Regarding tool support, SmartCheck, DefectChecker, contractWard, and sFuzz tools are better choices in terms of more coverage of vulnerabilities; however, tools such as NPChecker, MadMax, Osiris, and Sereum target some specific categories of vulnerabilities if required. While contractWard is relatively fast and more accurate, it can only detect pre-defined vulnerabilities. NPChecker is slower but can find new vulnerability patterns.
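To make one vulnerability class concrete, the following Python toy simulates reentrancy, one of the widely documented Ethereum vulnerability classes; real contracts are Solidity/EVM, and this sketch merely mimics the call-before-state-update flaw that enables draining attacks.

```python
# Simulated reentrancy: a "contract" that pays out BEFORE zeroing the
# caller's balance can be re-entered from the recipient's callback and
# drained of other users' funds. Pure-Python illustration only.

class VulnerableBank:
    def __init__(self):
        self.balances = {}
        self.vault = 0          # total funds actually held

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.vault += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.vault >= amount:
            self.vault -= amount
            who.receive(amount)        # external call BEFORE state update
            self.balances[who] = 0     # too late: receive() re-entered

class Attacker:
    def __init__(self, bank):
        self.bank, self.stolen = bank, 0

    def receive(self, amount):
        self.stolen += amount
        self.bank.withdraw(self)       # re-enter while balance is still set

class Honest:
    def receive(self, amount):
        pass

bank = VulnerableBank()
bank.deposit(Honest(), 90)             # other users' funds
attacker = Attacker(bank)
bank.deposit(attacker, 10)
bank.withdraw(attacker)
print(attacker.stolen)                 # far more than the 10 deposited
```

The standard countermeasure, updating the balance before making the external call (checks-effects-interactions), removes the window this toy exploits.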
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020018
Authors: Faiza Tazi Sunny Shrestha Junibel De La Cruz Sanchari Das
The World Wide Web (WWW) consists of the surface web, deep web, and Dark Web, depending on the content shared and the access to these network layers. The Dark Web consists of Dark Net overlay networks that can be accessed through specific software and authorization schemes. The Dark Net has become a growing community where users focus on keeping their identities, personal information, and locations secret due to the diverse population base and well-known cyber threats. Furthermore, not much is known of the Dark Net from the user perspective, where often there is a misunderstanding of the usage strategies. To understand this further, we conducted a systematic analysis of research relating to Dark Net privacy and security on N=200 academic papers, where we also explored the user side. An evaluation of secure end-user experience on the Dark Net establishes the motives of account initialization in overlaid networks such as Tor. This work delves into the evolution of Dark Net intelligence for improved cybercrime strategies across jurisdictions. The evaluation of the developing network infrastructure of the Dark Net raises meaningful questions on how to resolve the issue of increasing criminal activity on the Dark Web. We further examine the security features afforded to users, motives, and anonymity revocation. We also evaluate more closely nine user-study-focused papers revealing the importance of conducting more research in this area. Our detailed systematic review of Dark Net security clearly shows the apparent research gaps, especially in the user-focused studies emphasized in the paper.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020017
Authors: Niusen Chen Bo Chen
Combating OS-level malware is very challenging, as this type of malware can compromise the operating system, obtaining the kernel privilege and subverting almost all the existing anti-malware tools. This work aims to address this problem in the context of mobile devices. As real-world malware is very heterogeneous, we narrow down the scope of our work by especially focusing on a special type of OS-level malware that always corrupts user data. We have designed mobiDOM, the first framework that can combat the OS-level data corruption malware for mobile computing devices. Our mobiDOM contains two components, a malware detector and a data repairer. The malware detector can securely and promptly detect the presence of OS-level malware by fully utilizing the existing hardware features of a mobile device, namely, flash memory and Arm TrustZone. Specifically, we integrate the malware detection into the flash translation layer (FTL), a firmware layer embedded into the flash storage hardware, which is inaccessible to the OS; in addition, we run a trusted application in the Arm TrustZone secure world, which acts as a user-level manager of the malware detector. The FTL-based malware detection and the TrustZone-based manager can communicate with each other stealthily via steganography. The data repairer can allow restoring the external storage to a healthy historical state by taking advantage of the out-of-place-update feature of flash memory and our malware-aware garbage collection in the FTL. Security analysis and experimental evaluation on a real-world testbed confirm the effectiveness of mobiDOM.
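The out-of-place-update property that the data repairer exploits can be sketched as an append-only version log plus a restorable translation table; the class below is an invented simplification, since a real FTL manages physical flash pages, wear, and garbage collection.

```python
# Simplified model of out-of-place updates: writes never overwrite old
# versions, so older data survives until garbage collection, and rolling
# back the translation table restores a pre-infection state.

class OutOfPlaceStore:
    def __init__(self):
        self.log = []          # append-only (block_id, data) versions
        self.mapping = {}      # block_id -> index of the current version

    def write(self, block_id, data):
        self.log.append((block_id, data))       # never overwrite in place
        self.mapping[block_id] = len(self.log) - 1

    def read(self, block_id):
        return self.log[self.mapping[block_id]][1]

    def checkpoint(self):
        return dict(self.mapping)               # snapshot the table

    def restore(self, snapshot):
        self.mapping = dict(snapshot)           # old versions still in log

store = OutOfPlaceStore()
store.write("doc", b"important data")
healthy = store.checkpoint()
store.write("doc", b"\x00CORRUPTED\x00")       # simulated malware write
store.restore(healthy)                         # roll back via old mapping
print(store.read("doc"))
```

A malware-aware garbage collector, as in mobiDOM, would refuse to reclaim the healthy versions once corruption is suspected, keeping this rollback possible.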
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020016
Authors: Caspar Schwarz-Schilling Sheng-Nan Li Claudio J. Tessone
In blockchain-based systems whose consensus mechanisms resort to Proof-of-Work (PoW), it is expected that a miner’s share of total block revenue is proportional to their share of hashing power with respect to the rest of the network. The protocol relies on the immediate broadcast of blocks by miners, to earn precedence in peers’ local blockchains. However, a deviation from this strategy named selfish mining (SM) may lead miners to earn more than their “fair share”. In this paper, we introduce an agent-based model to simulate the dynamics of SM behaviour by a single miner as well as mining pools to understand the influence of (a) mining power distribution, (b) overlay network topology, and (c) the positioning of selfish nodes within the peer-to-peer network. Our minimalistic model allows us to find that at high levels of latency, SM is always a more profitable strategy; our results are very robust to different network topologies and mining nodes’ centrality in the network. Moreover, the power-law distribution of the miners’ hashing power can make it harder for a selfish miner to be profitable. In addition, we analyze the effect of SM on system global efficiency and fairness. Our analysis confirms that SM is always more profitable for hashing powers representing more than one-third of the total computing power. Further, it also confirms that SM behaviour could cause a statistically significant high probability of continuously mined blocks, opening the door for empirical verification of the phenomenon.
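For comparison with the agent-based results, the classic closed-form relative-revenue expression from Eyal and Sirer's original selfish-mining analysis reproduces the one-third threshold; here alpha is the selfish miner's hashing share and gamma the fraction of honest power that mines on the selfish chain during a tie. (This formula is the standard analytical model, not the paper's simulator.)

```python
# Eyal-Sirer closed-form relative revenue of a selfish miner.
# Honest mining yields a revenue share equal to alpha; selfish mining
# pays off when selfish_revenue(alpha, gamma) exceeds alpha.

def selfish_revenue(alpha: float, gamma: float) -> float:
    num = (alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha))
           - alpha ** 3)
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

# With gamma = 0 (honest miners never adopt the selfish block in a tie),
# selfish mining becomes profitable only above roughly one third.
for alpha in (0.30, 0.35, 0.40):
    print(alpha, round(selfish_revenue(alpha, gamma=0.0), 4))
```

At alpha = 0.30 the selfish miner earns less than its fair share, while at alpha = 0.40 it earns more, matching the paper's confirmation that SM is always more profitable above one third of total computing power.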
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020015
Authors: Hannes Salin Martin Lundgren
In this study, a framework was developed, based on a literature review, to help managers incorporate cybersecurity risk management in agile development projects. The literature review used predefined codes that were developed by extending previously defined challenges in the literature—for developing secure software in agile projects—to include aspects of agile cybersecurity risk management. Five steps were identified based on the insights gained from how the reviewed literature has addressed each of the challenges: (1) risk collection; (2) risk refinement; (3) risk mitigation; (4) knowledge transfer; and (5) escalation. To assess the appropriateness of the identified steps, and to determine their inclusion or exclusion in the framework, a survey was submitted to 145 software developers using a four-point Likert scale to measure the attitudes towards each step. The resulting framework presented herein serves as a starting point to help managers and developers structure their agile projects in terms of cybersecurity risk management, supporting less overloaded agile processes, stakeholder insights on relevant risks, and increased security assurance.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020014
Authors: Abdulghafour Mohammad Sergio Vargas Pavel Čermák
Today’s cars can share data with other cars, automakers, and service providers. Shared data can help improve the driving experience, the performance of the car, and traffic conditions. Among all data-collection techniques, blockchain technology offers an immutable and secure solution to support data collection in the automotive industry. Despite its advantages, collecting auto data with blockchain still faces several challenges. Thus, the purpose of this study was to conduct a review of published articles that have addressed the challenges of adopting blockchain for data collection in the automotive industry. This paper allowed us to answer the predefined research question: “What are the challenges of using blockchain for data collection in the automotive industry as presented in the published literature?” The review included articles published from 2017 to January 2022, and from the screened records, 13 articles were analyzed in full-text form. The identified challenges were categorized into seven categories: connectivity, privacy, security attacks, scalability, performance, costs, and monetizing. This review will help researchers, car manufacturers, and third-party suppliers to assess the applicability of the blockchain for data collection.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020013
Authors: Francesco Di Nocera Giorgia Tempestini
The usability/security trade-off refers to the inversely proportional relationship that seems to exist between usability and security: the more secure a system, the less usable it will be, and vice versa. So far, attempts to reduce the gap between usability and security have been unsuccessful. In this paper, we offer a theoretical perspective for exploiting this trade-off rather than fighting it, as well as a practical approach that uses contextual improvements in system usability to reward secure behavior. The theoretical perspective, based on the concept of reinforcement, has been successfully applied in several domains, and there is no reason to believe that the cybersecurity domain will be an exception. Although the purpose of this article is to devise a research agenda, we also provide an example based on a single-case study in which we apply the rationale underlying our proposal in a laboratory experiment.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2020012
Authors: Emmanuel Aboah Boateng J. W. Bruce
The security of programmable logic controllers (PLCs) that control industrial systems is becoming increasingly critical due to the ubiquity of Internet of Things technologies and increasingly nefarious cyber-attack activity. Conventional techniques for safeguarding PLCs are difficult to apply due to their unique architectures. This work proposes one-class support vector machine, feed-forward one-class neural network, and isolation forest approaches for verifying PLC process integrity by monitoring PLC memory addresses. A comprehensive experiment is conducted using an open-source PLC subjected to multiple attack scenarios. A new histogram-based approach is introduced to visualize anomaly detection algorithm performance and prediction confidence. Comparative performance analyses of the proposed algorithms using decision scores and prediction confidence are presented. Results show that isolation forest outperforms the one-class neural network, the one-class support vector machine, and previous work in terms of accuracy, precision, recall, and F1-score on the seven attack scenarios considered. Statistical hypothesis tests involving analysis of variance and Tukey’s range test were used to validate the presented results.
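A minimal illustration of the memory-monitoring idea, not the paper's actual models: learn the range of values each PLC memory address takes during benign runs, then score later snapshots by the fraction of addresses that leave that range. Address names and the scoring rule are hypothetical stand-ins for the one-class SVM, one-class neural network, and isolation forest detectors the authors evaluate.

```python
# Toy one-class anomaly detector over PLC memory snapshots: record the
# min/max value observed per address during normal operation, then flag
# snapshots whose values fall outside the learned bounds.

def fit_baseline(normal_snapshots):
    """normal_snapshots: list of dicts {address: value} from benign runs."""
    bounds = {}
    for snap in normal_snapshots:
        for addr, val in snap.items():
            lo, hi = bounds.get(addr, (val, val))
            bounds[addr] = (min(lo, val), max(hi, val))
    return bounds

def score(snapshot, bounds):
    """Fraction of monitored addresses whose value leaves the learned range."""
    violations = sum(
        1 for addr, val in snapshot.items()
        if addr in bounds and not (bounds[addr][0] <= val <= bounds[addr][1])
    )
    return violations / max(len(bounds), 1)

# Hypothetical memory map: two holding registers observed over two benign runs.
normal = [{"%MW0": 10, "%MW1": 0}, {"%MW0": 12, "%MW1": 1}]
b = fit_baseline(normal)
print(score({"%MW0": 11, "%MW1": 0}, b))  # in-range snapshot  -> 0.0
print(score({"%MW0": 99, "%MW1": 1}, b))  # tampered register  -> 0.5
```

A real detector would model joint behavior across addresses rather than per-address ranges, which is precisely what the one-class learners in the paper provide.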
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010011
Authors: Laura Genga Luca Allodi Nicola Zannone
Decisional processes are at the basis of most businesses in several application domains. However, they are often not fully transparent and can be affected by human or algorithmic biases that may lead to systematically incorrect or unfair outcomes. In this work, we propose an approach for unveiling biases in decisional processes, which leverages association rule mining for systematic hypothesis generation and regression analysis for model selection and recommendation extraction. In particular, we use rule mining to elicit candidate hypotheses of bias from the observational data of the process. From these hypotheses, we build regression models to determine the impact of variables on the process outcome. We show how the coefficients of the (selected) model can be used to extract recommendations upon which the decision maker can act. We evaluated our approach using both synthetic and real-life datasets in the context of discrimination discovery. The results show that our approach provides more reliable evidence than rule mining alone, and that the obtained recommendations can be used to guide analysts in investigating the biases affecting the decisional process at hand.
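The hypothesis-generation step can be sketched with standard association-rule statistics: support, confidence, and lift of a candidate rule "antecedent → outcome". The sketch below is illustrative only (attribute names are invented) and omits the regression stage the authors pair with rule mining.

```python
# Score a candidate bias hypothesis as an association rule over process records.
# support    = P(antecedent and outcome)
# confidence = P(outcome | antecedent)
# lift       = confidence / P(outcome); lift > 1 hints at a correlation worth
#              testing with a regression model, as in the authors' pipeline.

def rule_stats(records, antecedent, outcome):
    """records: list of dicts; antecedent/outcome: dicts of attribute=value."""
    matches = lambda rec, cond: all(rec.get(k) == v for k, v in cond.items())
    n = len(records)
    n_a = sum(1 for r in records if matches(r, antecedent))
    n_o = sum(1 for r in records if matches(r, outcome))
    n_ao = sum(1 for r in records if matches(r, antecedent) and matches(r, outcome))
    support = n_ao / n
    confidence = n_ao / n_a if n_a else 0.0
    lift = confidence / (n_o / n) if n_o else 0.0
    return support, confidence, lift

# Hypothetical loan-decision records.
data = [
    {"gender": "F", "decision": "deny"}, {"gender": "F", "decision": "deny"},
    {"gender": "F", "decision": "grant"}, {"gender": "M", "decision": "grant"},
]
s, c, l = rule_stats(data, {"gender": "F"}, {"decision": "deny"})
print(round(s, 2), round(c, 2), round(l, 2))  # 0.5 0.67 1.33
```

A rule with high lift only generates a hypothesis; in the paper, regression over the implicated variables then quantifies the actual impact on the outcome.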
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010010
Authors: Andrew McCarthy Essam Ghadafi Panagiotis Andriotis Phil Legg
Machine learning has become widely adopted as a strategy for dealing with a variety of cybersecurity issues, ranging from insider threat detection to intrusion and malware detection. However, by their very nature, machine learning systems can introduce vulnerabilities to a security defence, whereby a learnt model is unaware of so-called adversarial examples that may intentionally cause misclassification and therefore bypass a system. Adversarial machine learning has been a research topic for over a decade and is now an accepted but open problem. Much of the early research on adversarial examples addressed issues related to computer vision, yet as machine learning continues to be adopted in other domains, it is likewise important to assess the potential vulnerabilities that may occur. A key part of transferring attacks to new domains is functionality preservation: any crafted attack must still execute its original intended functionality when inspected by a human and/or a machine. In this literature survey, our main objective is to address the domain of adversarial machine learning attacks and examine the robustness of machine learning models in the cybersecurity and intrusion detection domains. We identify the key trends in current work observed in the literature, and explore how these relate to the research challenges that remain open for future work. Inclusion criteria were: articles related to functionality preservation in adversarial machine learning for cybersecurity or intrusion detection with insight into robust classification. Generally, we excluded works that are not yet peer-reviewed; however, we included some significant papers that make a clear contribution to the domain. There is a risk of subjective bias in the selection of non-peer-reviewed articles; however, this was mitigated by co-author review.
We selected the following databases with a sizeable computer science element to search and retrieve literature: IEEE Xplore, ACM Digital Library, ScienceDirect, Scopus, SpringerLink, and Google Scholar. The literature search was conducted up to January 2022. We have striven to ensure comprehensive coverage of the domain to the best of our knowledge. We performed systematic searches of the literature, noting our search terms and results, and following up on all materials that appeared relevant and fit within the topic domains of this review. This research was funded by the Partnership PhD scheme at the University of the West of England in collaboration with Techmodal Ltd.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010009
Authors: Philokypros P. Ioulianou Vassilios G. Vassilakis Siamak F. Shahandashti
Routing attacks are a major security issue for Internet of Things (IoT) networks utilising routing protocols, as malicious actors can overwhelm resource-constrained devices with denial-of-service (DoS) attacks, notably rank and blackhole attacks. In this work, we study the impact of combined rank and blackhole attacks in the IPv6 routing protocol for low-power and lossy networks (RPL), and we propose a new security framework for RPL-based IoT networks (SRF-IoT). The framework includes a trust-based mechanism that detects and isolates malicious attackers with the help of an external intrusion detection system (IDS). Both SRF-IoT and the IDS are implemented in the Contiki-NG operating system. Evaluation of the proposed framework is based on simulations using the Whitefield framework, which combines the Contiki-NG and NS-3 simulators. Analysis of the simulated scenarios under active attack showed the effectiveness of deploying SRF-IoT, with a 92.8% packet delivery ratio (PDR), a five-fold reduction in the number of packets dropped, and a three-fold decrease in the number of parent switches in comparison with the scenario without SRF-IoT. Moreover, the packet overhead introduced by SRF-IoT in attack scenarios is minimal, at less than 2%. The obtained results suggest that the SRF-IoT framework is an efficient and promising solution that combines trust-based and IDS-based approaches to protect IoT networks against routing attacks. In addition, our solution works by deploying a watchdog mechanism on detector nodes only, leaving the operation of existing smart devices unaffected.
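A toy sketch of the watchdog-style trust idea, not the actual SRF-IoT implementation: each detector node compares how many packets a neighbour was overheard receiving against how many it actually forwarded, and neighbours whose forwarding ratio drops below a threshold are flagged as suspected blackhole nodes. The threshold value and counter layout are assumptions for illustration.

```python
# Watchdog trust bookkeeping: a blackhole node receives packets but silently
# drops them, so its forwarded/overheard ratio collapses toward zero.

TRUST_THRESHOLD = 0.5  # hypothetical cut-off, not taken from the paper

def update_trust(observations):
    """observations: {node_id: (packets_overheard_in, packets_forwarded)}"""
    trust = {}
    for node, (seen, forwarded) in observations.items():
        trust[node] = forwarded / seen if seen else 1.0  # no evidence yet
    return trust

def suspected_blackholes(trust, threshold=TRUST_THRESHOLD):
    """Nodes whose observed forwarding behaviour falls below the threshold."""
    return sorted(node for node, t in trust.items() if t < threshold)

# n1 forwards almost everything, n2 drops almost everything, n3 is unobserved.
obs = {"n1": (100, 97), "n2": (80, 4), "n3": (0, 0)}
trust = update_trust(obs)
print(suspected_blackholes(trust))  # ['n2']
```

In SRF-IoT this logic runs only on dedicated detector nodes, which is why ordinary smart devices are left unmodified.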
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010008
Authors: Abdulghafour Mohammad
As the functionality and services provided by cloud computing increase, controlling access to these services becomes more complex and more security breaches occur. This is mainly due to the emergence of new requirements and constraints in the open, dynamic, heterogeneous, and distributed cloud environment. Despite the importance of identifying these requirements for designing and evaluating access control models, the available studies do not provide a rigorous review of these requirements and the mechanisms that fulfill them. The purpose of this study was to conduct a literature review of published articles that have dealt with cloud access control requirements and techniques. This review allowed us to answer the following two research questions: What cloud access control security requirements have been presented in the published literature? What access control mechanisms are proposed to fulfill them? The review yielded 21 requirements and nine mechanisms, reported in 20 manuscripts. The requirements identified in this review will help researchers, academics, and practitioners assess the effectiveness of cloud access control models and identify gaps not addressed by the proposed solutions. In addition, this review surveys the cloud access control mechanisms currently used to meet these requirements, such as access control based on trust, risk, multi-tenancy, and attribute-based encryption.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010007
Authors: Maryam Taeb Hongmei Chi
Deepfakes are realistic-looking fake media generated by deep-learning algorithms that iterate through large datasets until they have learned how to solve the given problem (i.e., swap faces or objects in video and digital content). The massive generation of such content and modification technologies is rapidly affecting the quality of public discourse and the safeguarding of human rights. Deepfakes are being widely used as a malicious source of misinformation in court, where they seek to sway a court’s decision. Because digital evidence is critical to the outcome of many legal cases, detecting deepfake media is extremely important and in high demand in digital forensics. As such, it is important to identify and build a classifier that can accurately distinguish between authentic and disguised media, especially in facial-recognition systems, as it can also be used for identity protection. In this work, we compare the most common state-of-the-art face-detection classifiers, such as Custom CNN, VGG19, and DenseNet-121, using an augmented real and fake face-detection dataset. Data augmentation is used to boost performance and reduce computational resources. Our preliminary results indicate that VGG19 has the best performance, with the highest accuracy of 95% when compared with the other analyzed models.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010006
Authors: Harry Owen Javad Zarrin Shahrzad M. Pour
Botnets have become increasingly common and progressively dangerous to business and domestic networks alike. Due to the COVID-19 pandemic, a large portion of the population has been performing corporate activities from home. This leads to speculation that most computer users and employees working remotely do not have proper defences against botnets, so botnet infection can propagate to other devices connected to the target network. Consequently, botnet infection may occur not only within the target user’s machine but also on neighbouring devices. The focus of this paper is to review and investigate the current state of the art and research works on both methods of infection, such as how a botnet may penetrate a system or network directly or indirectly, and standard detection strategies that have been used in the past. Furthermore, we investigate the capabilities of Artificial Intelligence (AI) to create innovative approaches for botnet detection that enable predicting whether botnets are present within a network. The paper also discusses methods that threat actors may use to infect target devices with botnet code. Machine learning algorithms are examined to determine how they may assist AI-based detection and what advantages and disadvantages they would have, in order to identify the most suitable algorithm for businesses to use. Finally, current botnet prevention techniques and countermeasures are discussed to determine how botnets can be kept out of corporate and domestic networks and how future attacks can be prevented.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010005
Authors: Lili Nemec Zlatolas Nataša Feher Marko Hölbl
IoT devices are used frequently in smart homes. To better understand how users perceive the security of IoT devices in their smart homes, a model was developed and tested with multiple linear regression. A total of 306 participants completed the survey with measurement items, of whom 121 were already using IoT devices in their smart homes. The results show that users’ awareness of data breaches, ransomware attacks, personal information access breaches, and device vulnerabilities has an effect on the importance users place on IoT security. On the other hand, users often do not check their security settings and feel safe while using IoT devices. This paper provides an overview of users’ perception of security while using IoT devices, and can help developers build better devices and help raise awareness of security among users.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010004
Authors: Nadine Kashmar Mehdi Adda Hussein Ibrahim
Access control (AC) policies are sets of rules that govern access decisions in systems, and they are increasingly used to implement flexible and adaptive control of access to today’s internet services, networks, and security systems. The current generation of networking environments shaped by digital transformation, such as the internet of things (IoT), fog computing, and cloud computing, with their different applications, brings new trends, concepts, and challenges for integrating more advanced and intelligent systems into critical and heterogeneous structures. This fact, in addition to the COVID-19 pandemic, has prompted a greater need than ever for AC due to widespread telework and the need to access resources and data related to critical domains such as government, healthcare, and industry, where any successful cyber or physical attack can disrupt operations or even deny critical services to society. Moreover, various declarations have announced that the world of AC is changing fast, and the pandemic has made AC feel more essential than ever. To minimize the security risks of unauthorized access to physical and logical systems, several AC approaches have been proposed, before and during the pandemic, to find a common specification for security policy where AC is implemented in various dynamic and heterogeneous computing environments. Unfortunately, the proposed AC models and metamodels have limited features and are insufficient to meet current access control requirements. In this context, we have developed a Hierarchical, Extensible, Advanced, and Dynamic (HEAD) AC metamodel with substantial features that is able to encompass the heterogeneity of AC models, overcome the limitations of existing AC metamodels, and keep pace with continual technological progress.
In this paper, we explain the distinct design of the HEAD metamodel, starting from the metamodel development phase and reaching the policy enforcement phase. We describe the remaining steps and how they can be employed to develop more advanced features, opening new opportunities and answering the various challenges posed by technological progress and by the impact of the pandemic in this domain. As a result, we present a novel approach in five main phases: metamodel development, deriving models, generating policies, policy analysis and assessment, and policy enforcement. This approach can be employed to assist security experts and system administrators in designing secure systems that comply with organizational security policies related to access control.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010003
Authors: Tor Onshus Lars Bodsberg Stein Hauge Martin Gilje Jaatun Mary Ann Lundteigen Thor Myklebust Maria Vatshaug Ottermo Stig Petersen Egil Wille
The trends toward reduced manning on offshore facilities and increased information transfer from offshore to land continue, and may even be a prerequisite for the future survival of the oil and gas industry. A general requirement from the operators has emerged: all relevant information from offshore-located systems should be made available so that it can be analysed on land. This represents a challenge in avoiding negative safety impacts and potential accidents at these facilities. The layered Purdue model, which helps protect OT systems from unwanted influences through network segregation, is undermined by the many new connections arising between the OT systems and their surroundings. Each individual connection is not necessarily a problem; however, in aggregate, they add to the overall complexity and attack surface, thereby exposing the OT systems to increased cyber risk. Since the OT systems are critical to controlling physical processes, the added connections represent a challenge not only to security but also to safety.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010002
Authors: Journal of Cybersecurity and Privacy Editorial Office
Rigorous peer-reviews are the basis of high-quality academic publishing [...]
Journal of Cybersecurity and Privacy doi: 10.3390/jcp2010001
Authors: Thilini B. G. Herath Prashant Khanna Monjur Ahmed
In this paper, we present secondary research on recommended cybersecurity practices for social media users from the user’s point of view. Following the structured methodology of the systematic literature review presented, aspects related to cyber threats, cyber awareness, and cyber behavior in internet and social media use are considered in the study. The study finds that many cyber threats exist within social media platforms, such as loss of productivity, cyberbullying, cyberstalking, identity theft, social information overload, inconsistent personal branding, personal reputation damage, data breaches, malicious software, service interruptions, hacks, and unauthorized access to social media accounts. Among other findings, the study also reveals that demographic factors, such as age, gender, and education level, may not necessarily be influential factors affecting the cyber awareness of internet users.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp1040039
Authors: Shadi Sadeghpour Natalija Vlajic
Over the last two decades, we have witnessed a fundamental transformation of the advertising industry, which has been steadily moving away from traditional advertising mediums, such as television or direct marketing, towards digital-centric and internet-based platforms. Unfortunately, due to its large-scale adoption and significant revenue potential, digital advertising has become a very attractive and frequent target for numerous cybercriminal groups. The goal of this study is to provide a consolidated view of the different categories of threats in online advertising ecosystems. We begin by introducing the main elements of an online ad platform and its different architecture and revenue models. We then review different categories of ad fraud and present a taxonomy of known attacks on an online advertising system. Finally, we provide a comprehensive overview of methods and techniques for the detection and prevention of fraudulent practices within those systems, both from the scientific and the industry perspective. The main novelty of our work lies in the development of an innovative taxonomy of different types of digital advertising fraud based on their actual executors and victims. We have placed different advertising fraud scenarios into real-world context and provided illustrative examples, thereby offering an important practical perspective that is largely missing in the current literature.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp1040038
Authors: Paul M. Simon Scott Graham
Rarely are communications networks point-to-point. In most cases, transceiver relay stations exist between transmitter and receiver end-points. These relay stations, while essential for controlling cost and adding flexibility to network architectures, reduce the overall security of the respective network. In an effort to quantify that reduction, we extend the Quality of Secure Service (QoSS) model to these complex networks, specifically multi-hop networks. In this approach, the quantification of security is based upon probabilities that adversarial listeners and disruptors gain access to or manipulate transmitted data on one or more of these multi-hop channels. Message fragmentation and duplication across available channels provides a security performance trade-space, with its consequent QoSS. This work explores that trade-space and the corresponding QoSS model to describe it.
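The benefit of fragmentation can be illustrated numerically under simplifying assumptions (independent per-hop compromise probabilities; this is a sketch, not the full QoSS model): an eavesdropper must compromise at least one hop on every channel carrying a fragment in order to recover the whole message.

```python
# Probability that an adversary captures a fragmented message sent over
# independent multi-hop channels. Per-hop listener probabilities are
# illustrative values, not figures from the paper.

def channel_intercept_prob(per_hop_listener_probs):
    """P(at least one hop on this channel hosts a listener)."""
    p_clean = 1.0
    for p in per_hop_listener_probs:
        p_clean *= (1.0 - p)
    return 1.0 - p_clean

def full_message_capture_prob(channels):
    """channels: one per-hop probability list per fragment; the adversary
    needs every fragment, i.e. an intercept on every channel."""
    p = 1.0
    for hops in channels:
        p *= channel_intercept_prob(hops)
    return p

# One 3-hop channel vs. the same message split across two channels,
# each hop independently compromised with probability 0.1.
single = full_message_capture_prob([[0.1, 0.1, 0.1]])
split = full_message_capture_prob([[0.1, 0.1, 0.1], [0.1, 0.1]])
print(round(single, 3), round(split, 3))  # fragmentation lowers capture odds
```

This captures the trade-space qualitatively: each extra fragment multiplies in another "must also be intercepted" factor, at the cost of using more channels (and, symmetrically, more exposure to disruptors).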
Journal of Cybersecurity and Privacy doi: 10.3390/jcp1040037
Authors: Ravi Chauhan Ulya Sabeel Alireza Izaddoost Shahram Shah Heydari
Intrusion Detection Systems (IDS) are essential components in preventing malicious traffic from penetrating networks and systems. Recently, these systems have been enhancing their detection ability using machine learning algorithms. This development also forces attackers to look for new methods of evading these advanced Intrusion Detection Systems. Polymorphic attacks are among the potential candidates that can bypass pattern-matching detection systems. To alleviate the danger of polymorphic attacks, the IDS must be trained with datasets that include these attacks. The Generative Adversarial Network (GAN) is a method proven to generate adversarial data in the domains of multimedia processing, text, and voice, and can produce a high volume of test data that is indistinguishable from the original training data. In this paper, we propose a model to generate adversarial attacks using the Wasserstein GAN (WGAN). The attack data synthesized using the proposed model can be used to train an IDS. To evaluate the trained IDS, we study several techniques for updating the attack feature profile for the generation of polymorphic data. Our results show that by continuously changing the attack profiles, defensive systems that use incremental learning will still be vulnerable to new attacks; meanwhile, their detection rates improve incrementally until the polymorphic attack exhausts its profile variables.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp1040036
Authors: Andreas Skalkos Ioannis Stylios Maria Karyda Spyros Kokolakis
Smartphone user authentication based on passwords, PINs, and touch patterns raises several security concerns. Behavioral Biometrics Continuous Authentication (BBCA) technologies provide a promising solution which can increase smartphone security and mitigate users’ concerns. Until now, research in BBCA technologies has mainly focused on developing novel behavioral biometrics continuous authentication systems and their technical characteristics, overlooking users’ attitudes towards BBCA. To address this gap, we conducted a study grounded on a model that integrates users’ privacy concerns, trust in technology, and innovativeness with Protection Motivation Theory. A cross-sectional survey among 778 smartphone users was conducted via Amazon Mechanical Turk (MTurk) to explore the factors which can predict users’ intention to use BBCA technologies. Our findings demonstrate that privacy concerns have a significant impact on all components of PMT with respect to the intention to use BBCA technology. Further, another important construct we identified that affects the usage intention of BBCA technology is innovativeness. Our findings support the view that the reliability and trustworthiness of security technologies, such as BBCA, are important for users. Together, these results highlight the importance of addressing users’ perceptions regarding BBCA technology.
Journal of Cybersecurity and Privacy doi: 10.3390/jcp1040035
Authors: Moses Ashawa Sarah Morris
The evolution of mobile technology has been matched by a corresponding increase in the number of attacks on mobile devices. Malware attacks on mobile devices are one of the top security challenges the mobile community faces daily. While malware classification and detection tools are being developed to fight malware infection, hackers keep deploying different infection strategies, including the abuse of permissions. Among mobile platforms, Android is the most targeted by malware because of its open OS and popularity. Permissions are one of the major security mechanisms used by Android and other mobile platforms to control device resources and enhance access control. In this study, we used t-distributed stochastic neighbor embedding (t-SNE) and Self-Organizing Map techniques to produce a visualization method using exploratory factor plane analysis to visualize permission correlations in Android applications. Two categories of datasets were used for this study: benign and malicious. The datasets were obtained from the Contagio, VirusShare, VirusTotal, and Androzoo repositories. A total of 12,267 malicious and 10,837 benign applications of different categories were used. We demonstrate that our method can identify the correlation between permissions and classify Android applications based on their protection and threat level. Our results show that every permission has a threat level, signifying that permissions with the same protection level have the same threat level.
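As a lightweight stand-in for the t-SNE/SOM visualizations described above, the correlation between two permissions can be quantified with the phi coefficient over a set of apps, each represented as the set of permissions it requests. The permission names below are just examples, and this sketch is not the authors' method.

```python
# Phi coefficient (correlation for two binary variables) between two Android
# permissions across a corpus of apps: +1 means they always co-occur,
# 0 means independence, -1 means they never appear together.
import math

def phi(apps, perm_a, perm_b):
    """apps: list of sets of permission names requested by each app."""
    n11 = sum(1 for a in apps if perm_a in a and perm_b in a)
    n10 = sum(1 for a in apps if perm_a in a and perm_b not in a)
    n01 = sum(1 for a in apps if perm_a not in a and perm_b in a)
    n00 = len(apps) - n11 - n10 - n01
    denom = math.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

# Hypothetical mini-corpus of four apps.
apps = [
    {"INTERNET", "READ_SMS"}, {"INTERNET", "READ_SMS"},
    {"INTERNET"}, {"CAMERA"},
]
print(round(phi(apps, "INTERNET", "READ_SMS"), 2))  # 0.58
```

Computing this for every permission pair yields the correlation structure that the paper projects into two dimensions with t-SNE for visual inspection.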