Information, Volume 15, Issue 1 (January 2024) – 66 articles

Cover Story: Doubts about the ownership or provenance authenticity of smart contracts running in blockchain networks are a major entry barrier to this technology. Block verifiers are services that check authenticity, but their inherent overhead may negatively impact time-sensitive domains such as cyber-physical systems. This work determines the temporal cost of the provenance verification process for smart contracts, which serves as the basis for analyzing whether block verifiers are suitable for time-sensitive domains such as cyber-physical systems. We design and validate a middleware that manages the process of determining the overhead of the verification services, and we provide a middleware implementation over a real blockchain network and verifier services.
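The overhead-measurement idea behind the cover story can be illustrated with a minimal sketch: wrap a verifier call and time it. The `verify_provenance` stub and its name are hypothetical; the paper's middleware operates against a real blockchain network and real verifier services.

```python
import time

def timed(fn):
    """Wrap a call and record its wall-clock overhead, mimicking what an
    overhead-measuring middleware would do around a block-verifier call.
    Illustrative only; not the paper's actual middleware."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_overhead = time.perf_counter() - start
        return result
    wrapper.last_overhead = None
    return wrapper

@timed
def verify_provenance(contract_id):
    # Stand-in for a real provenance-verification service call.
    return True

ok = verify_provenance("0xabc")
```

A time-sensitive system could consult `verify_provenance.last_overhead` to decide whether the verification latency fits its deadline budget.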
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 1931 KiB  
Article
Baseline Structural Connectomics Data of Healthy Brain Development Assessed with Multi-Modal Magnetic Resonance Imaging
by David Mattie, Zihang Fang, Emi Takahashi, Lourdes Peña Castillo and Jacob Levman
Information 2024, 15(1), 66; https://doi.org/10.3390/info15010066 - 22 Jan 2024
Viewed by 1303
Abstract
Diffusion magnetic resonance imaging (MRI) tractography is a powerful tool for non-invasively studying brain architecture and structural integrity by inferring fiber tracts based on water diffusion profiles. This study provided a thorough set of baseline data of structural connectomics biomarkers for 809 healthy participants between the ages of 1 and 35 years. The data provided can help to identify potential biomarkers that may be helpful in characterizing physiological and anatomical neurodevelopmental changes linked with healthy brain maturation and can be used as a baseline for comparing abnormal and pathological development in future studies. Our results demonstrate statistically significant differences between the sexes, representing a potentially important baseline from which to establish healthy growth trajectories. Biomarkers that correlated with age, potentially representing useful methods for assessing brain development, are also presented. This baseline information may facilitate studies that identify abnormal brain development associated with a variety of pathological conditions as departures from healthy sex-specific age-dependent neural development. Our findings are the result of combining the use of mainstream analytic methods with in-house-developed open-source software to help facilitate reproducible analyses, inclusive of many potential biomarkers that cannot be derived from existing software packages. Assessing relationships between our identified regional tract measurements produced by our technology and participant characteristics/phenotypic data in future analyses has tremendous potential for the study of human neurodevelopment. Full article
(This article belongs to the Special Issue Multi-Modal Biomedical Data Science—Modeling and Analysis)
25 pages, 2717 KiB  
Article
Machine Learning and Blockchain: A Bibliometric Study on Security and Privacy
by Alejandro Valencia-Arias, Juan David González-Ruiz, Lilian Verde Flores, Luis Vega-Mori, Paula Rodríguez-Correa and Gustavo Sánchez Santos
Information 2024, 15(1), 65; https://doi.org/10.3390/info15010065 - 22 Jan 2024
Viewed by 2361
Abstract
Machine learning and blockchain technology are fast-developing fields with implications for multiple sectors. Both have attracted a lot of interest and show promise in security, IoT, 5G/6G networks, artificial intelligence, and more. However, challenges remain in the scientific literature, so the aim is to investigate research trends around the use of machine learning in blockchain. A bibliometric analysis is proposed based on the PRISMA-2020 parameters in the Scopus and Web of Science databases. An objective analysis of the most productive and highly cited authors, journals, and countries is conducted. Additionally, a thorough analysis of keyword validity and importance is performed, along with a review of the most significant topics by year of publication. Co-occurrence networks are generated to identify the most crucial research clusters in the field. Finally, a research agenda is proposed to highlight future topics with great potential. This study reveals a growing interest in machine learning and blockchain. Topics are evolving towards IoT and smart contracts. Emerging keywords include cloud computing, intrusion detection, and distributed learning. The United States, Australia, and India are leading the research. The research proposes an agenda to explore new applications and foster collaboration between researchers and countries in this interdisciplinary field. Full article
(This article belongs to the Special Issue Machine Learning for the Blockchain)
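As a loose illustration of how a keyword co-occurrence network is built (the study uses dedicated bibliometric tooling; the paper data below are invented), each pair of keywords appearing in the same paper contributes one unit of edge weight:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each unordered pair of keywords appears together
    in the same paper's keyword list."""
    counts = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            counts[(a, b)] += 1
    return counts

# Invented keyword lists for three hypothetical papers:
papers = [
    ["blockchain", "machine learning", "iot"],
    ["blockchain", "smart contracts"],
    ["machine learning", "iot", "intrusion detection"],
]
edges = cooccurrence_counts(papers)
```

The resulting weighted edges are what clustering algorithms operate on to surface the research clusters the abstract describes.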
16 pages, 289 KiB  
Article
Agile Logical Semantics for Natural Languages
by Vincenzo Manca
Information 2024, 15(1), 64; https://doi.org/10.3390/info15010064 - 21 Jan 2024
Viewed by 1351
Abstract
This paper presents an agile method of logical semantics based on high-order Predicate Logic. An operator of predicate abstraction is introduced that provides a simple mechanism for logical aggregation of predicates and for logical typing. Monadic high-order logic is the natural environment in which predicate abstraction expresses the semantics of typical linguistic structures. Many examples of logical representations of natural language sentences are provided. Future extensions and possible applications in the interaction with chatbots are briefly discussed as well. Full article
(This article belongs to the Special Issue Computational Linguistics and Natural Language Processing)
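The flavor of predicate abstraction over monadic predicates can be gestured at in code, treating monadic predicates as Boolean-valued functions and abstraction as their aggregation into a single predicate usable as a logical type. This is an informal sketch, not the paper's formal operator; the predicates and individuals are invented.

```python
# Monadic predicates as Boolean-valued functions of one individual.
def Man(x):    return x in {"socrates", "plato"}
def Mortal(x): return x in {"socrates", "plato", "fido"}

def abstract(*preds):
    """A toy 'predicate abstraction' operator: aggregate several monadic
    predicates into one (their conjunction), which can then act as a
    logical type for individuals satisfying all of them."""
    return lambda x: all(p(x) for p in preds)

MortalMan = abstract(Man, Mortal)
```

Here `MortalMan("socrates")` holds while `MortalMan("fido")` does not, since Fido is mortal but not a man.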
18 pages, 7127 KiB  
Article
Benchmarking Automated Machine Learning (AutoML) Frameworks for Object Detection
by Samuel de Oliveira, Oguzhan Topsakal and Onur Toker
Information 2024, 15(1), 63; https://doi.org/10.3390/info15010063 - 21 Jan 2024
Viewed by 2031
Abstract
Automated Machine Learning (AutoML) is a subdomain of machine learning that seeks to expand the usability of traditional machine learning methods to non-expert users by automating various tasks which normally require manual configuration. Prior benchmarking studies on AutoML systems—whose aim is to compare and evaluate their capabilities—have mostly focused on tabular or structured data. In this study, we evaluate AutoML systems on the task of object detection by curating three commonly used object detection datasets (Open Images V7, Microsoft COCO 2017, and Pascal VOC2012) in order to benchmark three different AutoML frameworks—namely, Google’s Vertex AI, NVIDIA’s TAO, and AutoGluon. We reduced the datasets to only include images with a single object instance in order to understand the effect of class imbalance, as well as dataset and object size. We used the metrics of the average precision (AP) and mean average precision (mAP). Solely in terms of accuracy, our results indicate AutoGluon as the best-performing framework, with a mAP of 0.8901, 0.8972, and 0.8644 for the Pascal VOC2012, COCO 2017, and Open Images V7 datasets, respectively. NVIDIA TAO achieved a mAP of 0.8254, 0.8165, and 0.7754 for those same datasets, while Google’s VertexAI scored 0.855, 0.793, and 0.761. We found the dataset size had an inverse relationship to mAP across all the frameworks, and there was no relationship between class size or imbalance and accuracy. Furthermore, we discuss each framework’s relative benefits and drawbacks from the standpoint of ease of use. This study also points out the issues found as we examined the labels of a subset of each dataset. Labeling errors in the datasets appear to have a substantial negative effect on accuracy that is not resolved by larger datasets. Overall, this study provides a platform for future development and research on this nascent field of machine learning. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
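For readers unfamiliar with the reported metric, mAP is the unweighted mean of per-class average precision (AP) scores. A minimal sketch (class names and values invented; computing each AP itself, which requires IoU box matching and precision-recall integration, is omitted):

```python
def mean_average_precision(ap_per_class):
    """mAP = unweighted mean of per-class average precision.
    ap_per_class maps class name -> AP in [0, 1]."""
    if not ap_per_class:
        raise ValueError("need at least one class AP")
    return sum(ap_per_class.values()) / len(ap_per_class)

mAP = mean_average_precision({"cat": 0.9, "dog": 0.8})
```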
22 pages, 5833 KiB  
Article
Software Platform for the Comprehensive Testing of Transmission Protocols Developed in GNU Radio
by Mihai Petru Stef and Zsolt Alfred Polgar
Information 2024, 15(1), 62; https://doi.org/10.3390/info15010062 - 20 Jan 2024
Viewed by 1306
Abstract
With the constant growth of software-defined radio (SDR) technologies in fields related to wireless communications, the need for efficient ways of testing and evaluating the physical-layer (PHY) protocols developed for these technologies in real-life traffic scenarios has become more critical. This paper proposes a software testbed that enhances the creation of network environments that allow GNU radio applications to be fed with test traffic in a simple way and through an interoperable interface. This makes the use of any traffic generator possible—existing ones or one that is custom-built—to evaluate a GNU radio application. In addition, this paper proposes an efficient way to collect PHY-specific monitoring data to improve the performance of the critical components of the message delivery path by employing the protocol buffers library. This study considers the entire testing and evaluation ecosystem and demonstrates how PHY-specific monitoring information is collected, handled, stored, and processed as time series to allow complex visualization and real-time monitoring. Full article
(This article belongs to the Special Issue Advances in Telecommunication Networks and Wireless Technology)
20 pages, 540 KiB  
Article
Unsupervised Learning in NBA Injury Recovery: Advanced Data Mining to Decode Recovery Durations and Economic Impacts
by George Papageorgiou, Vangelis Sarlis and Christos Tjortjis
Information 2024, 15(1), 61; https://doi.org/10.3390/info15010061 - 20 Jan 2024
Cited by 2 | Viewed by 1755
Abstract
This study utilized advanced data mining and machine learning to examine player injuries in the National Basketball Association (NBA) from 2000–01 to 2022–23. By analyzing a dataset of 2296 players, including sociodemographics, injury records, and financial data, this research investigated the relationships between injury types and player recovery durations, and their socioeconomic impacts. Our methodology involved data collection, engineering, and mining; the application of techniques such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN), isolation forest, and the Z score for anomaly detection; and the application of the Apriori algorithm for association rule mining. Anomaly detection revealed 189 anomalies (1.04% of cases), highlighting unusual recovery durations and factors influencing recovery beyond physical healing. Association rule mining indicated shorter recovery times for lower extremity injuries and a 95% confidence level for quick returns from “Rest” injuries, affirming the NBA’s treatment and rest policies. Additionally, economic factors were observed, with players in lower salary brackets experiencing shorter recoveries, pointing to a financial influence on recovery decisions. This study offers critical insights into sports injuries and recovery, providing valuable information for sports professionals and league administrators. This study will impact player health management and team tactics, laying the groundwork for future research on long-term injury effects and technology integration in player health monitoring. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
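Of the three anomaly detectors named in the abstract, the Z score is the simplest to sketch. The recovery durations below are invented, and the study's actual pipeline combines this with DBSCAN and isolation forests:

```python
from statistics import mean, stdev

def zscore_anomalies(durations, threshold=2.5):
    """Flag recovery durations whose sample Z score exceeds the threshold.
    (With only a handful of points the sample Z score is bounded near
    sqrt(n), so a threshold below 3 is used for this tiny example.)"""
    mu, sigma = mean(durations), stdev(durations)
    return [d for d in durations if abs(d - mu) / sigma > threshold]

# Nine routine recoveries (in days) and one extreme outlier:
flagged = zscore_anomalies([7, 9, 10, 11, 10, 9, 8, 12, 10, 200])
```

The flagged cases are then the candidates for the kind of case-by-case inspection of non-physical recovery factors the study describes.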
44 pages, 7889 KiB  
Article
Mapping the Landscape of Misinformation Detection: A Bibliometric Approach
by Andra Sandu, Ioana Ioanăș, Camelia Delcea, Laura-Mădălina Geantă and Liviu-Adrian Cotfas
Information 2024, 15(1), 60; https://doi.org/10.3390/info15010060 - 19 Jan 2024
Cited by 4 | Viewed by 2089
Abstract
The proliferation of misinformation presents a significant challenge in today’s information landscape, impacting various aspects of society. While misinformation is often confused with terms like disinformation and fake news, it is crucial to distinguish that misinformation involves, in most cases, inaccurate information without the intent to cause harm. In some instances, individuals unwittingly share misinformation, driven by a desire to assist others without thorough research. However, there are also situations where misinformation involves negligence, or even intentional manipulation, with the aim of shaping the opinions and decisions of the target audience. Another key factor contributing to misinformation is its alignment with individual beliefs and emotions. This alignment magnifies the impact and influence of misinformation, as people tend to seek information that reinforces their existing beliefs. As a starting point, some 56 papers containing ‘misinformation detection’ in the title, abstract, or keywords, marked as “articles”, written in English, published between 2016 and 2022, were extracted from the Web of Science platform and further analyzed using Biblioshiny. This bibliometric study aims to offer a comprehensive perspective on the field of misinformation detection by examining its evolution and identifying emerging trends, influential authors, collaborative networks, highly cited articles, key terms, institutional affiliations, themes, and other relevant factors. Additionally, the study reviews the most cited papers and provides an overview of all selected papers in the dataset, shedding light on methods employed to counter misinformation and the primary research areas where misinformation detection has been explored, including sources such as online social networks, communities, and news platforms.
Recent events related to health issues stemming from the COVID-19 pandemic have heightened interest within the research community regarding misinformation detection, an interest also reflected in the fact that half of the top 10 most-cited papers in the dataset address this subject. The insights derived from this analysis contribute valuable knowledge to address the issue, enhancing our understanding of the field’s dynamics and aiding in the development of effective strategies to detect and mitigate the impact of misinformation. The results show that IEEE Access ranks first by number of published papers, that King Saud University is the top contributing institution for misinformation detection, and that the five countries contributing most to this area are the USA, India, China, Spain, and the UK. Moreover, the study supports the promotion of verified and reliable sources of data, fostering a more informed and trustworthy information environment. Full article
(This article belongs to the Special Issue Recent Advances in Social Media Mining and Analysis)
23 pages, 3140 KiB  
Article
An ART Tour de Force on Mental Imagery: Vividness, Individual Bias Differences, and Complementary Visual Processing Streams
by Amedeo D’Angiulli, Christy Laarakker and Derrick Matthew Buchanan
Information 2024, 15(1), 59; https://doi.org/10.3390/info15010059 - 19 Jan 2024
Viewed by 1337
Abstract
Grossberg’s adaptive resonance theory (ART) provides a framework for understanding possible interactions between mental imagery and visual perception. Our purpose was to integrate, within ART, the phenomenological notion of mental image vividness and thus investigate the possible biasing effects of individual differences on visual processing. Using a Vernier acuity task, we tested whether indirect estimation of relative V1 size (small, medium, large) and self-reported vividness, in three subgroups of 53 observers, could predict significant effects of priming, interference, or more extreme Perky effects (negative and positive), which could be induced by imagery, impacting acuity performance. The results showed that small V1 was correlated with priming and/or negative Perky effects independently of vividness; medium V1 was related to interference at low vividness but priming at high vividness; and large V1 was related to positive Perky effects at high vividness but negative Perky effects at low vividness. Our interpretation of ART and related modeling based on ARTSCAN contributes to expanding Grossberg’s comprehensive understanding of how and why individually experienced vividness may drive the differential use of the dorsal and ventral complementary visual processing pathways, resulting in the observed effects of imagery on concurrent perception. Full article
17 pages, 4757 KiB  
Article
Integrated Generative Adversarial Networks and Deep Convolutional Neural Networks for Image Data Classification: A Case Study for COVID-19
by Ku Muhammad Naim Ku Khalif, Woo Chaw Seng, Alexander Gegov, Ahmad Syafadhli Abu Bakar and Nur Adibah Shahrul
Information 2024, 15(1), 58; https://doi.org/10.3390/info15010058 - 18 Jan 2024
Cited by 2 | Viewed by 1846
Abstract
Convolutional Neural Networks (CNNs) have garnered significant utilisation within automated image classification systems. CNNs possess the ability to leverage the spatial and temporal correlations inherent in a dataset. This study delves into the use of cutting-edge deep learning for precise image data classification, focusing on overcoming the difficulties brought on by the COVID-19 pandemic. In order to improve the accuracy and robustness of COVID-19 image classification, the study introduces a novel methodology that combines the strength of Deep Convolutional Neural Networks (DCNNs) and Generative Adversarial Networks (GANs). This proposed study helps to mitigate the lack of labelled coronavirus (COVID-19) images, which has been a standard limitation in related research, and improves the model’s ability to distinguish between COVID-19-related patterns and healthy lung images. The study presents a thorough case study using a sizable dataset of chest X-ray images covering COVID-19 cases, other respiratory conditions, and healthy lung conditions. The integrated model outperforms conventional DCNN-based techniques in terms of classification accuracy after being trained on this dataset. To address the issue of an unbalanced dataset, the GAN produces synthetic images, and deep features are extracted from every image. A thorough understanding of the model’s performance in real-world scenarios is also provided by the study’s meticulous evaluation using a variety of metrics, including accuracy, precision, recall, and F1-score. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
36 pages, 5353 KiB  
Article
SFS-AGGL: Semi-Supervised Feature Selection Integrating Adaptive Graph with Global and Local Information
by Yugen Yi, Haoming Zhang, Ningyi Zhang, Wei Zhou, Xiaomei Huang, Gengsheng Xie and Caixia Zheng
Information 2024, 15(1), 57; https://doi.org/10.3390/info15010057 - 17 Jan 2024
Viewed by 1304
Abstract
As the feature dimension of data continues to expand, the task of selecting an optimal subset of features from a pool of limited labeled data and extensive unlabeled data becomes more and more challenging. In recent years, some semi-supervised feature selection (SSFS) methods have been proposed to select a subset of features, but they still have some drawbacks limiting their performance; e.g., many SSFS methods underutilize the structural distribution information available within labeled and unlabeled data. To address this issue, we proposed a semi-supervised feature selection method based on an adaptive graph with global and local constraints (SFS-AGGL) in this paper. Specifically, we first designed an adaptive graph learning mechanism that can consider both the global and local information of samples to effectively learn and retain the geometric structural information of the original dataset. Secondly, we constructed a label propagation technique integrated with the adaptive graph learning in SFS-AGGL to fully utilize the structural distribution information of both labeled and unlabeled data. The proposed SFS-AGGL method is validated through classification and clustering tasks across various datasets. The experimental results demonstrate its superiority over existing benchmark methods, particularly in terms of clustering performance. Full article
(This article belongs to the Section Artificial Intelligence)
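Label propagation itself (decoupled from the adaptive graph learning that is the paper's contribution) can be sketched as a repeated majority vote over neighbours on a fixed graph. The toy graph and seed labels below are invented:

```python
def propagate_labels(adj, labels, iters=20):
    """One common label-propagation scheme: each unlabeled node repeatedly
    takes the majority label among its labeled neighbours; seed labels
    stay fixed throughout."""
    current = dict(labels)
    for _ in range(iters):
        updated = dict(current)
        for node, neighbours in adj.items():
            if node in labels:          # keep seed labels fixed
                continue
            votes = [current[n] for n in neighbours if n in current]
            if votes:
                updated[node] = max(set(votes), key=votes.count)
        current = updated
    return current

# A chain 0-1-2-3 with a single labeled seed at node 0:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = propagate_labels(adj, {0: "a"})
```

On this chain the seed label spreads hop by hop until every node is labeled, which is the mechanism the paper couples with its learned graph.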
16 pages, 3143 KiB  
Article
KEGGSum: Summarizing Genomic Pathways
by Chaim David and Haridimos Kondylakis
Information 2024, 15(1), 56; https://doi.org/10.3390/info15010056 - 17 Jan 2024
Cited by 1 | Viewed by 1350
Abstract
Over time, the renowned Kyoto Encyclopedia of Genes and Genomes (KEGG) has grown to become one of the most comprehensive online databases for biological procedures. The majority of the data are stored in the form of pathways, which are graphs that depict the relationships between the diverse items participating in biological procedures, such as genes and chemical compounds. However, the size, complexity, and diversity of these graphs make them difficult to explore and understand, as well as making it difficult to extract a clear conclusion regarding their most important components. In this regard, we present KEGGSum, a system enabling the efficient and effective summarization of KEGG pathways. KEGGSum receives a KEGG identifier (Kid) as an input, connects to the KEGG database, downloads a specialized form of the pathway, and determines the most important nodes in the graph. To identify the most important nodes in the KEGG graphs, we explore multiple centrality measures that have been proposed for generic graphs, showing their applicability to KEGG graphs as well. Then, we link the selected nodes in order to produce a summary graph out of the initial KEGG graph. Finally, our system visualizes the generated summary, enabling an understanding of the most important parts of the initial graph. We experimentally evaluate our system, and we show its advantages and benefits. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
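The node-importance step can be sketched with the simplest of the centrality measures the paper considers, degree centrality, on an invented toy pathway graph (the actual system works on pathways downloaded from KEGG):

```python
def degree_centrality(adj):
    """Degree centrality: the fraction of the other nodes each node
    is connected to, for an undirected adjacency-list graph."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def top_k_nodes(adj, k):
    """Return the k nodes with the highest degree centrality; these
    would seed the summary graph."""
    cent = degree_centrality(adj)
    return sorted(cent, key=cent.get, reverse=True)[:k]

# Invented toy "pathway" with a hub A and a secondary hub D:
pathway = {"A": ["B", "C", "D"], "B": ["A"], "C": ["A"],
           "D": ["A", "E"], "E": ["D"]}
```

Linking the selected nodes back together then yields the summary graph described in the abstract; the paper also evaluates other centrality measures on KEGG graphs.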
28 pages, 1639 KiB  
Article
IT Risk Management: Towards a System for Enhancing Objectivity in Asset Valuation That Engenders a Security Culture
by Bilgin Metin, Sefa Duran, Eda Telli, Meltem Mutlutürk and Martin Wynn
Information 2024, 15(1), 55; https://doi.org/10.3390/info15010055 - 17 Jan 2024
Viewed by 1957
Abstract
In today’s technology-centric business environment, where organizations encounter numerous cyber threats, effective IT risk management is crucial. An objective risk assessment—based on information relating to business requirements, human elements, and the security culture within an organisation—can provide a sound basis for informed decision making, effective risk prioritisation, and the implementation of suitable security measures. This paper focuses on asset valuation, supply chain risk, and enhanced objectivity—via a “segregation of duties” approach—to extend and apply the capabilities of an established security culture framework. The resultant system design aims at mitigating subjectivity in IT risk assessments, thereby diminishing personal biases and presumptions to provide a more transparent and accurate understanding of the real risks involved. Survey responses from 16 practitioners working in the private and public sectors confirmed the validity of the approach but suggest it may be more workable in larger organisations where resources allow dedicated risk professionals to operate. This research contributes to the literature on IT and cyber risk management and provides new perspectives on the need to improve objectivity in asset valuation and risk assessment. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)
15 pages, 311 KiB  
Article
Self-Bilinear Map from One Way Encoding System and i𝒪
by Huang Zhang, Ting Huang, Fangguo Zhang, Baodian Wei and Yusong Du
Information 2024, 15(1), 54; https://doi.org/10.3390/info15010054 - 17 Jan 2024
Viewed by 1005
Abstract
A bilinear map whose domain and target sets are identical is called a self-bilinear map. Original self-bilinear maps are defined over cyclic groups. Since the map itself reveals information about the underlying cyclic group, the Decisional Diffie–Hellman Problem (DDH) and the computational Diffie–Hellman (CDH) problem may be solved easily in some specific groups. This imposes significant limitations on constructing secure self-bilinear schemes. As a compromise, a self-bilinear map with auxiliary information was proposed in CRYPTO’2014. In this paper, we construct this weak variant of a self-bilinear map from generic sets and indistinguishable obfuscation. These sets must possess several properties. A new notion, One Way Encoding System (OWES), is proposed to summarize these properties. The new Encoding Division Problem (EDP) is defined to complete the security proof. The OWES can be built by making use of one level of graded encoding systems (GES). To construct a concrete self-bilinear map scheme, the Garg, Gentry, and Halevi (GGH13) GES is adopted in our work. Even though the security of GGH13 was recently broken by Hu et al., their algorithm does not threaten our applications. At the end of this paper, some further considerations for the EDP for concrete construction are given to improve the confidence that EDP is indeed hard. Full article
(This article belongs to the Section Information Security and Privacy)
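Why secure self-bilinear maps are delicate can be seen from the textbook toy example e(x, y) = xy mod N on the additive group Z_N: it is perfectly bilinear, yet the map itself leaks enough structure to break Diffie–Hellman-style assumptions, which is what motivates the auxiliary-information variant the paper builds on. A toy check of the bilinearity property (modulus and values invented):

```python
N = 101  # a toy modulus; the group is (Z_N, +)

def e(x, y):
    """A toy self-bilinear map on the additive group Z_N:
    e(x, y) = x*y mod N. Bilinear, i.e. e(a*x, y) = a*e(x, y),
    but insecure as-is, since the map reveals multiplicative
    structure of the group."""
    return (x * y) % N

a, x, y = 7, 12, 34
left_first  = e(a * x % N, y)          # scale the first argument
left_second = e(x, a * y % N)          # scale the second argument
right       = a * e(x, y) % N          # scale the result
```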
18 pages, 3164 KiB  
Article
Fast Object Detection Leveraging Global Feature Fusion in Boundary-Aware Convolutional Networks
by Weiming Fan, Jiahui Yu and Zhaojie Ju
Information 2024, 15(1), 53; https://doi.org/10.3390/info15010053 - 17 Jan 2024
Viewed by 1246
Abstract
Endoscopy, a pervasive instrument for the diagnosis and treatment of hollow anatomical structures, conventionally necessitates the arduous manual scrutiny of seasoned medical experts. Nevertheless, the recent strides in deep learning technologies proffer novel avenues for research, endowing it with the potential for amplified robustness and precision, accompanied by the pledge of cost abatement in detection procedures, while simultaneously providing substantial assistance to clinical practitioners. Within this investigation, we usher in an innovative technique for the identification of anomalies in endoscopic imagery, christened as Context-enhanced Feature Fusion with Boundary-aware Convolution (GFFBAC). We employ the Context-enhanced Feature Fusion (CEFF) methodology, underpinned by Convolutional Neural Networks (CNNs), to establish equilibrium amidst the tiers of the feature pyramids. These intricately harnessed features are subsequently amalgamated into the Boundary-aware Convolution (BAC) module to reinforce both the faculties of localization and classification. A thorough exploration conducted across three disparate datasets elucidates that the proposition not only surpasses its contemporaries in object detection performance but also yields detection boxes of heightened precision. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)

19 pages, 3879 KiB  
Article
“Like a Virtual Family Reunion”: Older Adults Defining Requirements for an Augmented Reality Communication System
by Veronika Mikhailova, Melisa Conde and Nicola Döring
Information 2024, 15(1), 52; https://doi.org/10.3390/info15010052 - 17 Jan 2024
Cited by 1 | Viewed by 1653
Abstract
Leading a socially engaged life is beneficial for the well-being of older adults. Immersive technologies, such as augmented reality (AR), have the potential to provide more engaging and vivid communication experiences compared to conventional digital tools. This qualitative study adopts a human-centered approach to discern the general attitudes and specific requirements of older adults regarding interpersonal communication facilitated by AR. We conducted semi-structured individual interviews with a sample of N = 30 older adults from Germany. During the interviews, participants evaluated storyboard illustrations depicting a fictional AR-enabled communication scenario centered around a grandparent and their adult grandchildren, which were represented as avatars within the AR environment. The study identified technological, emotional, social, and administrative requirements of older adults regarding the AR communication system. Based on these findings, we provide practical recommendations aimed at more inclusive technology design, emphasizing the significance of addressing the emotional needs of older adults, especially the perceived intimacy of AR-based interpersonal communication. Acknowledging and catering to these emotional needs is crucial, as it impacts the adoption of immersive technologies and the realization of their social benefits. This study contributes to the development of user-friendly AR systems that effectively promote and foster social engagement among older adults. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)

13 pages, 2755 KiB  
Article
Measuring Trajectory Similarity Based on the Spatio-Temporal Properties of Moving Objects in Road Networks
by Ali Dorosti, Ali Asghar Alesheikh and Mohammad Sharif
Information 2024, 15(1), 51; https://doi.org/10.3390/info15010051 - 17 Jan 2024
Viewed by 1291
Abstract
Advancements in navigation and tracking technologies have resulted in a significant increase in movement data within road networks. Analyzing the trajectories of network-constrained moving objects makes a profound contribution to transportation and urban planning. In this context, the trajectory similarity measure enables the discovery of inherent patterns in moving object data. Existing methods for measuring trajectory similarity in network space are relatively slow and neglect the temporal characteristics of trajectories. Moreover, these methods focus on relatively small volumes of data. This study proposes a method that maps trajectories onto a network-based space to overcome these limitations. This mapping considers geographical coordinates, travel time, and the temporal order of trajectory segments in the similarity measure. Spatial similarity is measured using the Jaccard coefficient, quantifying the overlap between trajectory segments in space. Temporal similarity, on the other hand, incorporates time differences, including common trajectory segments, start time variation and trajectory duration. The method is evaluated using real-world taxi trajectory data. The processing time is one-quarter of that required by existing methods in the literature. This improvement allows for spatio-temporal analyses of a large number of trajectories, revealing the underlying behavior of moving objects in network space. Full article
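The spatial component of the similarity measure, the Jaccard coefficient over shared road segments, can be sketched as follows. This is a minimal illustration under assumed inputs (hypothetical segment IDs, trajectories reduced to segment sets); the paper's full measure additionally weighs travel time and the temporal order of segments:

```python
def jaccard_similarity(segs_a, segs_b):
    """Jaccard coefficient over the sets of road segments two trajectories traverse."""
    a, b = set(segs_a), set(segs_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical trajectories given as ordered lists of road-segment IDs.
t1 = ["s1", "s2", "s3", "s4"]
t2 = ["s2", "s3", "s5"]
print(jaccard_similarity(t1, t2))  # 2 shared segments / 5 distinct = 0.4
```

Mapping trajectories to segment sets like this is what allows fast set operations in place of point-by-point geometric comparison.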

17 pages, 1848 KiB  
Article
From Cybercrime to Digital Balance: How Human Development Shapes Digital Risk Cultures
by Răzvan Rughiniș, Emanuela Bran, Ana Rodica Stăiculescu and Alexandru Radovici
Information 2024, 15(1), 50; https://doi.org/10.3390/info15010050 - 17 Jan 2024
Viewed by 1673
Abstract
This article examines configurations of digital concerns within the European Union (EU27), a leading hub of innovation and policy development. The core objective is to uncover the social forces shaping technology acceptance and risk awareness, which are essential for fostering a resilient digital society in the EU. The study draws upon Bourdieu’s concept of capital to discuss technological capital and digital habitus and Beck’s risk society theory to frame the analysis of individual and national attitudes towards digital risks. Utilizing Eurobarometer data, the research operationalizes technological capital through proxy indicators of individual socioeconomic status and internet use, while country-level development indicators are used to predict aggregated national risk perception. The article's contributions rely on individual- and country-level statistical analysis. Specifically, the study reveals that digital concerns are better predicted at the national level than at the individual level, being shaped by infrastructure, policy, and narrative rather than by personal technological capital. Key findings highlight that digital advancement correlates positively with cybersecurity fears and negatively with digital literacy concerns. HDI and DESI are relevant country-level predictors of public concerns, while CGI values are not. Using cluster analysis, we identify and interpret four digital risk cultures within the EU, each with varying foci and levels of concern, corresponding to economic, political, and cultural influences at the national level. Full article
(This article belongs to the Special Issue Digital Privacy and Security)

19 pages, 1754 KiB  
Article
A Fair Energy Allocation Algorithm for IRS-Assisted Cognitive MISO Wireless-Powered Networks
by Chuanzhe Gao, Shidang Li, Mingsheng Wei, Siyi Duan and Jinsong Xu
Information 2024, 15(1), 49; https://doi.org/10.3390/info15010049 - 16 Jan 2024
Viewed by 1232
Abstract
With the rapid development of wireless communication networks and Internet of Things (IoT) technology, higher requirements have been placed on spectrum resource utilization and system performance. To further improve both, this paper proposes an intelligent reflecting surface (IRS)-assisted fair energy allocation algorithm for cognitive multiple-input single-output (MISO) wireless-powered networks. The goal is to maximize the minimum received power across energy receivers, subject to the signal-to-interference-plus-noise ratio (SINR) threshold of the information receiver in the secondary network, the maximum transmission power at the cognitive base station (CBS), and the interference power threshold imposed by the secondary network on the primary network. Because the variables are coupled, this paper uses an iterative optimization algorithm that solves for them alternately: when solving for the active beamforming variables, the passive beamforming variables are fixed; the obtained active beamforming variables are then fixed, and the passive beamforming variables are solved. Through continued iteration, the system converges. Simulation results verify the effectiveness of the proposed algorithm. Full article
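The alternating fix-one-solve-the-other iteration described above can be sketched on a toy scalar objective. This illustrates only the iteration pattern; the paper's actual subproblems involve beamforming vectors and SINR constraints and would be handled by a convex solver:

```python
def alternating_maximize(f, x, y, best_x_given_y, best_y_given_x,
                         tol=1e-9, max_iter=200):
    """Fix one variable, optimize the other, and alternate until the
    objective stops improving (the iteration pattern, on a toy objective)."""
    prev = f(x, y)
    for _ in range(max_iter):
        x = best_x_given_y(y)  # stand-in for the "active beamforming" step
        y = best_y_given_x(x)  # stand-in for the "passive beamforming" step
        cur = f(x, y)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return x, y

# Toy concave objective: f(x, y) = -(x - 3)^2 - (y - x)^2.
f = lambda x, y: -(x - 3) ** 2 - (y - x) ** 2
x, y = alternating_maximize(f, 0.0, 0.0,
                            best_x_given_y=lambda y: (3 + y) / 2,
                            best_y_given_x=lambda x: x)
print(round(x, 3), round(y, 3))  # both converge towards 3.0
```

Each per-block subproblem is solved exactly here, so the objective is monotonically non-decreasing, which is what makes the alternating scheme converge.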
(This article belongs to the Special Issue Advances in Telecommunication Networks and Wireless Technology)

22 pages, 7459 KiB  
Article
Metaverse Applications in Bioinformatics: A Machine Learning Framework for the Discrimination of Anti-Cancer Peptides
by Sufyan Danish, Asfandyar Khan, L. Minh Dang, Mohammed Alonazi, Sultan Alanazi, Hyoung-Kyu Song and Hyeonjoon Moon
Information 2024, 15(1), 48; https://doi.org/10.3390/info15010048 - 15 Jan 2024
Cited by 1 | Viewed by 1967
Abstract
Bioinformatics and genomics are driving a healthcare revolution, particularly in the domain of drug discovery for anticancer peptides (ACPs). The integration of artificial intelligence (AI) has transformed healthcare, enabling personalized and immersive patient care experiences. These advanced technologies, coupled with the power of bioinformatics and genomic data, facilitate groundbreaking developments. The precise prediction of ACPs from complex biological sequences remains an ongoing challenge in genomics. Currently, conventional approaches such as chemotherapy, targeted therapy, radiotherapy, and surgery are widely used for cancer treatment. However, these methods fail to completely eradicate neoplastic cells or cancer stem cells and damage healthy tissues, resulting in morbidity and even mortality. To control such diseases, oncologists and drug designers are keen to develop new preventive techniques with greater efficiency and fewer side effects. Therefore, this research provides an optimized computational framework for discriminating ACPs. The proposed approach intelligently integrates four peptide encoding methods, namely amino acid occurrence analysis (AAOA), dipeptide occurrence analysis (DOA), tripeptide occurrence analysis (TOA), and enhanced pseudo amino acid composition (EPseAAC). To reduce bias and true error, the synthetic minority oversampling technique (SMOTE) is applied to balance the samples across classes. The empirical results over two datasets, where the accuracy of the proposed model is 97.56% on the benchmark dataset and 95.00% on the independent dataset, verify the effectiveness of our ensemble learning mechanism and show remarkable performance compared with state-of-the-art (SOTA) methods. In addition, the application of metaverse technology in healthcare holds promise for transformative innovations, potentially enhancing patient experiences and providing novel solutions in the realm of preventive techniques and patient care. Full article
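The SMOTE balancing step mentioned above interpolates new minority-class samples between existing ones. The sketch below is a toy version under stated simplifications: a random same-class point stands in for a k-nearest neighbor, and real use would rely on a library implementation such as imbalanced-learn's SMOTE:

```python
import random

def synthesize(x, neighbor, rng):
    """SMOTE-style interpolation: a new point on the segment from x to neighbor."""
    gap = rng.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]

def oversample(minority, target_count, seed=0):
    """Grow the minority class to target_count by adding synthetic samples.
    Toy version: neighbors are chosen at random, not by k-nearest neighbors."""
    rng = random.Random(seed)
    samples = list(minority)
    while len(samples) < target_count:
        x = rng.choice(minority)
        neighbor = rng.choice([m for m in minority if m is not x])
        samples.append(synthesize(x, neighbor, rng))
    return samples

# Hypothetical 2-D feature vectors for the minority class.
minority = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
balanced = oversample(minority, 6)
print(len(balanced))  # 6
```

Because each synthetic point lies between two real minority samples, the oversampled class stays inside the region the minority already occupies rather than duplicating exact copies.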
(This article belongs to the Special Issue Applications of Deep Learning in Bioinformatics and Image Processing)

16 pages, 1807 KiB  
Article
Identifying Smartphone Users Based on Activities in Daily Living Using Deep Neural Networks
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Information 2024, 15(1), 47; https://doi.org/10.3390/info15010047 - 15 Jan 2024
Viewed by 1240
Abstract
Smartphones have become ubiquitous, allowing people to perform various tasks anytime and anywhere. As technology continues to advance, smartphones can now sense and connect to networks, providing context-awareness for different applications. Owing to this convenience and accessibility, many individuals store sensitive data, such as financial credentials and personal information, on their devices. However, losing control of this data poses risks if the phone is lost or stolen. While passwords, PINs, and pattern locks are common security methods, they can still be compromised, for example through smudge attacks that exploit residue left by touching the screen. This research explored leveraging smartphone sensors to authenticate users based on behavioral patterns when operating the device. The proposed technique uses a deep learning model called DeepResNeXt, a type of deep residual network, to identify smartphone owners accurately and efficiently from sensor data. Publicly available smartphone datasets were used to train the suggested model and other state-of-the-art networks for user recognition. Multiple experiments validated the effectiveness of this framework, which surpasses previous benchmark models in this area with a top F1-score of 98.96%. Full article

29 pages, 4963 KiB  
Article
A Holistic Approach to Ransomware Classification: Leveraging Static and Dynamic Analysis with Visualization
by Bahaa Yamany, Mahmoud Said Elsayed, Anca D. Jurcut, Nashwa Abdelbaki and Marianne A. Azer
Information 2024, 15(1), 46; https://doi.org/10.3390/info15010046 - 14 Jan 2024
Cited by 1 | Viewed by 2066
Abstract
Ransomware is a type of malicious software that encrypts a victim’s files and demands payment in exchange for the decryption key. It is a rapidly growing and evolving threat that has caused significant damage and disruption to individuals and organizations around the world. In this paper, we propose a comprehensive ransomware classification approach based on the comparison of similarity matrices derived from static analysis, dynamic analysis, and visualization. Our approach uses multiple analysis techniques to extract features from ransomware samples and to generate similarity matrices based on these features. These matrices are then compared using a variety of comparison algorithms to identify similarities and differences between the samples. The resulting similarity scores are used to classify the samples into categories such as families, variants, and versions. We evaluate our approach on a dataset of ransomware samples and demonstrate that it classifies them with a high degree of accuracy. One advantage of our approach is the use of visualization, which allows us to classify and cluster large datasets of ransomware in a more intuitive and effective way. In addition, static analysis has the advantage of being fast and accurate, while dynamic analysis allows us to classify and cluster packed ransomware samples. We also compare our approach to classification approaches based on single analysis techniques and show that ours outperforms them in classification accuracy. Overall, our study demonstrates the potential of a comprehensive approach based on the comparison of multiple analysis techniques, including static analysis, dynamic analysis, and visualization, for the accurate and efficient classification of ransomware. It also highlights the importance of considering multiple analysis techniques in the development of effective ransomware classification methods, especially when dealing with large datasets and packed samples. Full article
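The core classification idea, comparing feature-derived similarities and assigning a sample to its closest group, can be sketched as follows. The family names, feature vectors, and the use of cosine similarity here are illustrative assumptions, not the paper's exact algorithms:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(sample, families):
    """Assign a sample to the family whose best-matching known vector is
    most similar to it. Names and features below are hypothetical."""
    return max(families,
               key=lambda name: max(cosine(sample, v) for v in families[name]))

# Hypothetical binary features (e.g., presence of API calls or behaviors).
families = {"family_a": [[1, 0, 1, 0]], "family_b": [[0, 1, 0, 1]]}
print(classify([1, 0, 1, 1], families))  # family_a
```

Stacking such pairwise scores into a matrix for all samples yields the similarity matrices the abstract compares across static, dynamic, and visual feature sets.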
(This article belongs to the Special Issue Wireless IoT Network Protocols II)

28 pages, 4717 KiB  
Article
ABAC Policy Mining through Affiliation Networks and Biclique Analysis
by Abner Perez-Haro and Arturo Diaz-Perez
Information 2024, 15(1), 45; https://doi.org/10.3390/info15010045 - 12 Jan 2024
Viewed by 1019
Abstract
Policy mining is an automated procedure for generating access rules by mining patterns from single permissions, which are typically registered in access logs. Attribute-based access control (ABAC) is a model that allows security administrators to create a set of rules, known as the access control policy, to restrict access in information systems by means of logical expressions defined over the attribute values of three types of entities: users, resources, and environmental conditions. Policy mining is a must in large-scale ABAC-oriented systems because it is not workable to create rules by hand when the system must manage thousands of users and resources. In the literature on ABAC policy mining, current solutions follow a frequency-based strategy to extract rules; the problem with that approach is that selecting a high frequency support leaves many resources without rules (especially those with few requesters), while a low support leads to an explosion of unreliable rules. Another challenge is the difficulty of collecting a set of test examples for correctness evaluation, since the classes of user-resource pairs available in logs are imbalanced. Moreover, alternative evaluation criteria for correctness, such as peculiarity and diversity, have not been explored for ABAC policy mining. To address these challenges, we propose modeling access logs as affiliation networks and applying network and biclique analysis techniques (1) to extract ABAC rules supported by graph patterns without a frequency threshold, (2) to generate synthetic examples for correctness evaluation, and (3) to create evaluation measures complementary to correctness. We discovered that the rules extracted through our strategy can cover more resources than the frequency-based strategy, without rule explosion; moreover, our synthetic examples are useful for increasing the certainty level of correctness results. Finally, our complementary measures offer a wider evaluation profile for policy mining. Full article
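The affiliation-network view can be sketched in miniature: users and resources form a bipartite graph, and shared-access patterns (bicliques) become rule candidates without any frequency threshold. The example below is limited to two-user bicliques for brevity, and all user and resource names are hypothetical:

```python
from itertools import combinations

def candidate_bicliques(log, min_shared=2):
    """log: {user: set of accessed resources}. Return (user pair, shared
    resources) patterns, i.e., the 2-user bicliques of the affiliation
    network, without a frequency threshold."""
    rules = []
    for u, v in combinations(sorted(log), 2):
        shared = log[u] & log[v]
        if len(shared) >= min_shared:
            rules.append(({u, v}, shared))
    return rules

# Hypothetical access log: two users share r2 and r3, one user is isolated.
log = {"alice": {"r1", "r2", "r3"}, "bob": {"r2", "r3"}, "carol": {"r9"}}
print(candidate_bicliques(log))  # one rule candidate: {alice, bob} share {r2, r3}
```

A full miner would extend such seed bicliques to maximal ones and then generalize the user and resource sets into attribute expressions; only the pattern-extraction seed step is shown here.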
(This article belongs to the Special Issue Complex Network Analysis in Security)

19 pages, 833 KiB  
Article
Radar-Based Invisible Biometric Authentication
by Maria Louro da Silva, Carolina Gouveia, Daniel Filipe Albuquerque and Hugo Plácido da Silva
Information 2024, 15(1), 44; https://doi.org/10.3390/info15010044 - 12 Jan 2024
Viewed by 2452
Abstract
Bio-Radar (BR) systems have shown great promise for biometric applications. Conventional methods can be forged or fooled. Even alternative methods intrinsic to the user, such as the Electrocardiogram (ECG), present drawbacks, as they require contact with the sensor. Therefore, research has turned towards alternatives such as the BR. In this work, a BR dataset of 20 subjects exposed to different emotion-eliciting stimuli (happiness, fearfulness, and neutrality) on different dates was explored. The spectral distributions of the BR signal were studied as the biometric template. Furthermore, this study analysed the respiratory and cardiac signals separately, as well as their fusion. The main test devised was authentication, where a system seeks to validate an individual’s claimed identity. This test allowed us to assess the feasibility of this type of system, obtaining an Equal Error Rate (EER) of 3.48% when the training and testing data are from the same day and the same emotional stimuli. In addition, the dependence of the results on time and emotional state is fully analysed. Complementary tests, such as sensitivity to the number of users, were also performed. Overall, the study provides an evaluation of the potential of BR systems for biometrics. Full article

16 pages, 275 KiB  
Article
A Traceable Universal Designated Verifier Transitive Signature Scheme
by Shaonan Hou, Chengjun Lin and Shaojun Yang
Information 2024, 15(1), 43; https://doi.org/10.3390/info15010043 - 12 Jan 2024
Viewed by 1008
Abstract
A transitive signature scheme enables anyone to obtain the signature on edge (i,k) by combining the signatures on edges (i,j) and (j,k), but it suffers from signature theft and signature abuse. Existing work has solved these problems with a universal designated verifier transitive signature (UDVTS). However, the UDVTS scheme only enables the designated verifier to authenticate signatures, which gives the signer a simple way to deny having signed some messages. Because the UDVTS is not publicly verifiable, the verifier cannot seek help in arbitrating the source of signatures. To address this problem, this paper proposes a traceable universal designated verifier transitive signature (TUDVTS) and its security model. We introduce a tracer into the system who will trace the signature back to its true source after the verifier has submitted an application for arbitration. To show the feasibility of our primitive, we construct a concrete scheme from a bilinear group pair (G, G_T) of prime order and prove that the scheme satisfies unforgeability, privacy, and traceability. Full article
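The transitive property underlying such schemes can be illustrated at the graph level: signatures on (i, j) and (j, k) compose into one on (i, k). The sketch below shows only this closure rule over signed edges; the actual cryptographic composition in a bilinear group is not reproduced:

```python
def close_edges(signed_edges):
    """Edges whose signatures are derivable by transitivity: (i, j) and
    (j, k) compose into (i, k). Graph-level rule only; real schemes compose
    the signature values themselves."""
    edges = set(signed_edges)
    changed = True
    while changed:
        changed = False
        for (i, j) in list(edges):
            for (j2, k) in list(edges):
                if j == j2 and i != k and (i, k) not in edges:
                    edges.add((i, k))
                    changed = True
    return edges

print(sorted(close_edges({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

The "signature theft and abuse" problem arises precisely because anyone holding the edge signatures can perform this composition, which is what designated-verifier variants restrict.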

12 pages, 1442 KiB  
Article
Simulation-Enhanced MQAM Modulation Identification in Communication Systems: A Subtractive Clustering-Based PSO-FCM Algorithm Study
by Zhi Quan, Hailong Zhang, Jiyu Luo and Haijun Sun
Information 2024, 15(1), 42; https://doi.org/10.3390/info15010042 - 12 Jan 2024
Viewed by 1000
Abstract
Signal modulation recognition often relies on clustering algorithms. The fuzzy c-means (FCM) algorithm, which is commonly used for such tasks, often converges to local optima. This presents a challenge, particularly in low signal-to-noise ratio (SNR) environments. We propose an enhanced FCM algorithm that incorporates particle swarm optimization (PSO) to improve the accuracy of recognizing M-ary quadrature amplitude modulation (MQAM) signal orders. The method proceeds in two clustering steps. First, a subtractive clustering algorithm based on the SNR uses the constellation diagram of the received signal to determine the initial number of cluster centers. The PSO-FCM algorithm then refines these centers to improve precision. Accurate signal classification and identification are achieved by evaluating the relative sizes of the radii around the cluster centers within the MQAM constellation diagram and determining the modulation order. The results indicate that the SC-based PSO-FCM algorithm outperforms conventional FCM in clustering effectiveness, notably enhancing modulation recognition rates in low-SNR conditions, when evaluated against QAM signals ranging from 4QAM to 64QAM. Full article
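The FCM membership computation that the refined centers feed into can be sketched as follows, with toy points standing in for a received-signal constellation diagram (m is the usual fuzzifier; points and centers are hypothetical):

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership of each point in each cluster.
    Each row sums to 1; closer centers get higher membership."""
    def dist(p, c):
        # Small floor avoids division by zero when a point sits on a center.
        return max(((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5, 1e-12)
    U = []
    for p in points:
        d = [dist(p, c) for c in centers]
        U.append([1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in d)
                  for di in d])
    return U

# Toy I/Q constellation points near two hypothetical cluster centers.
U = fcm_memberships([(0.1, 0.0), (3.9, 4.0)], centers=[(0, 0), (4, 4)])
print(round(U[0][0], 2))  # near 1.0: the first point belongs to cluster 0
```

In the paper's pipeline, subtractive clustering supplies the number of centers, PSO searches for better center positions, and this membership update scores each candidate placement.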

0 pages, 16129 KiB  
Article
Public Health Implications for Effective Community Interventions Based on Hospital Patient Data Analysis Using Deep Learning Technology in Indonesia
by Lenni Dianna Putri, Ermi Girsang, I Nyoman Ehrich Lister, Hsiang Tsung Kung, Evizal Abdul Kadir and Sri Listia Rosa
Information 2024, 15(1), 41; https://doi.org/10.3390/info15010041 - 11 Jan 2024
Viewed by 1471
Abstract
Public health is an important aspect of community activities, making health research necessary, as it is a crucial field for maintaining and improving quality of life in society as a whole. Research on public health allows for a deeper understanding of the health problems faced by a population, including disease prevalence, risk factors, and other determinants of health. This work explores the potential of hospital patient data analysis as a tool for understanding community implications and deriving insights for effective community health interventions. The study recognises the significance of harnessing the vast amount of data generated within hospital settings to inform population-level health strategies. The methodology involves the collection and analysis of deidentified patient data from a representative hospital in Indonesia. Various data analysis techniques, such as statistical modelling, data mining, and machine learning algorithms, are used to identify patterns, trends, and associations within the data. A program written in Python is used to analyse five years of the hospital's patient data, from 2018 to 2022. The findings are then interpreted within the context of public health implications, considering factors such as disease prevalence, socioeconomic determinants, and healthcare utilisation patterns. The results of the data analysis provide valuable insights into the public health implications of hospital patient data. The research also covers predictions of patient volume at the hospital by disease, age, and geographical residence. The prediction indicates that in 2023 the number of patients will not be considerably affected by infection, but that between March and April 2024 the number will increase significantly, to up to 10,000 patients, following the trend observed at the end of 2022. The resulting recommendations encompass targeted prevention strategies, improved healthcare delivery models, and community engagement initiatives. The research emphasises the importance of collaboration between healthcare providers, policymakers, and community stakeholders in implementing and evaluating these interventions. Full article
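A least-squares trend extrapolation of the kind used for such patient-volume predictions can be sketched as follows. The admission counts below are hypothetical, not the hospital's data, and a real forecast would also model seasonality:

```python
def linear_trend(xs, ys):
    """Ordinary least-squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def forecast(xs, ys, x_new):
    """Extrapolate the fitted trend line to a future point."""
    a, b = linear_trend(xs, ys)
    return a + b * x_new

# Hypothetical yearly admission counts over the study period 2018-2022.
years = [2018, 2019, 2020, 2021, 2022]
admissions = [5200, 6100, 7000, 7900, 8800]
print(round(forecast(years, admissions, 2024)))  # 10600
```

This is the simplest trend model; the study's month-level predictions would fit finer-grained series, but the extrapolation principle is the same.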
(This article belongs to the Special Issue Advances in AI for Health and Medical Applications)

27 pages, 820 KiB  
Article
Secure Genomic String Search with Parallel Homomorphic Encryption
by Md Momin Al Aziz, Md Toufique Morshed Tamal and Noman Mohammed
Information 2024, 15(1), 40; https://doi.org/10.3390/info15010040 - 11 Jan 2024
Viewed by 1236
Abstract
Fully homomorphic encryption (FHE) cryptographic systems enable limitless computations over encrypted data, providing solutions to many of today’s data security problems. While effective FHE platforms can address modern data security concerns in insecure environments, their extended execution time hinders broader application. This project aims to enhance FHE systems through an efficient parallel framework, specifically building upon the existing torus FHE (TFHE) system (Chillotti et al., 2016). The TFHE system was chosen for its superior bootstrapping computations and precise results for countless Boolean gate evaluations, such as AND and XOR. Our first approach was to expand the gate operations within the current system, shifting towards algebraic circuits and using graphics processing units (GPUs) to manage cryptographic operations in parallel. We then applied this GPU-parallel FHE framework to a needed genomic data operation, string search. We utilized popular string distance metrics (Hamming distance, edit distance, set-maximal matches) to ascertain the disparities between multiple genomic sequences in a secure context, with all data and operations under encryption. Our experimental data revealed that our GPU implementation vastly outperforms the former method, providing a 20-fold speedup for any 32-bit Boolean operation and a 14.5-fold increase for multiplications. This paper introduces unique enhancements to existing FHE cryptographic systems using GPUs and additional algorithms to accelerate fundamental computations. Looking ahead, the presented framework can be further developed to accommodate more complex, real-world applications. Full article
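As a plaintext point of reference, the Hamming metric used in the encrypted string search counts mismatching positions between equal-length sequences. The sketch below assumes ordinary unencrypted strings and does not reproduce the FHE or GPU machinery:

```python
def hamming(a, b):
    """Number of mismatching positions between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

print(hamming("GATTACA", "GACTATA"))  # 2 mismatches
```

Under FHE, each position comparison becomes an encrypted XOR followed by an encrypted addition tree, which is why per-gate speedups translate directly into faster distance computations.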
(This article belongs to the Special Issue Digital Privacy and Security)

20 pages, 3183 KiB  
Article
Time Series Forecasting Utilizing Automated Machine Learning (AutoML): A Comparative Analysis Study on Diverse Datasets
by George Westergaard, Utku Erden, Omar Abdallah Mateo, Sullaiman Musah Lampo, Tahir Cetin Akinci and Oguzhan Topsakal
Information 2024, 15(1), 39; https://doi.org/10.3390/info15010039 - 11 Jan 2024
Cited by 2 | Viewed by 2707
Abstract
Automated Machine Learning (AutoML) tools are revolutionizing the field of machine learning by significantly reducing the need for deep computer science expertise. Designed to make ML more accessible, they enable users to build high-performing models without extensive technical knowledge. This study delves into these tools in the context of time series analysis, which is essential for forecasting future trends from historical data. We evaluate three prominent AutoML tools—AutoGluon, Auto-Sklearn, and PyCaret—across various metrics, employing diverse datasets that include Bitcoin and COVID-19 data. The results reveal that the performance of each tool is highly dependent on the specific dataset and its ability to manage the complexities of time series data. This thorough investigation not only demonstrates the strengths and limitations of each AutoML tool but also highlights the criticality of dataset-specific considerations in time series analysis. Offering valuable insights for both practitioners and researchers, this study emphasizes the ongoing need for research and development in this specialized area. It aims to serve as a reference for organizations dealing with time series datasets and a guiding framework for future academic research in enhancing the application of AutoML tools for time series forecasting and analysis. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting)
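The core idea these AutoML tools share — fit many candidate forecasters, score them on held-out data, and keep the best — can be sketched in miniature. The snippet below is a toy illustration only (it does not use AutoGluon, Auto-Sklearn, or PyCaret themselves): three classic baseline forecasters are compared by hold-out mean absolute error, mirroring how such tools rank candidates.

```python
# Toy sketch of the AutoML model-selection loop for time series:
# fit several candidate forecasters, score each on a hold-out
# window, and keep the one with the lowest error.

def naive_forecast(train, horizon):
    # Repeat the last observed value.
    return [train[-1]] * horizon

def mean_forecast(train, horizon):
    # Forecast the historical mean.
    m = sum(train) / len(train)
    return [m] * horizon

def drift_forecast(train, horizon):
    # Extrapolate the average step between observations.
    step = (train[-1] - train[0]) / (len(train) - 1)
    return [train[-1] + step * h for h in range(1, horizon + 1)]

def mae(actual, predicted):
    # Mean absolute error over the hold-out window.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def select_best(series, horizon):
    """Split off the last `horizon` points, score every candidate on
    that hold-out, and return (name, score) of the best model."""
    train, holdout = series[:-horizon], series[-horizon:]
    candidates = {
        "naive": naive_forecast,
        "mean": mean_forecast,
        "drift": drift_forecast,
    }
    scores = {name: mae(holdout, f(train, horizon))
              for name, f in candidates.items()}
    return min(scores.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    trend = [float(i) for i in range(30)]   # steadily rising series
    print(select_best(trend, horizon=5))    # drift wins on a pure trend
```

As the study's results suggest, which candidate wins depends entirely on the dataset: the drift model dominates on a trending series, while the naive model often wins on a random walk such as a price series.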
37 pages, 6465 KiB  
Review
Unmanned Autonomous Intelligent System in 6G Non-Terrestrial Network
by Xiaonan Wang, Yang Guo and Yuan Gao
Information 2024, 15(1), 38; https://doi.org/10.3390/info15010038 - 11 Jan 2024
Cited by 1 | Viewed by 1879
Abstract
A non-terrestrial network (NTN) is a trending topic in the field of communication, as it shows promise for scenarios in which terrestrial infrastructure is unavailable. Unmanned autonomous intelligent systems (UAISs), as a physical form of artificial intelligence (AI), have gained significant attention from academia and industry, with applications in autonomous driving, logistics, area surveillance, and medical services. With the rapid evolution of information and communication technology (ICT), 5G and beyond-5G communication have enabled numerous intelligent applications through the comprehensive use of advanced NTN communication technology and AI. Complex tasks in remote or communication-challenged areas, where reliable terrestrial coverage is unavailable, urgently require reliable, ultra-low-latency networks to support UAIS functions such as localization, navigation, perception, decision-making, and motion planning. The rapid development of NTN communication has shed new light on intelligent applications that require ubiquitous network connections across space, air, ground, and sea, yet applying NTN technology to UAISs raises challenges of its own. Our research examines the advancements and obstacles, in both academic research and industry applications, of NTN technology for UAISs supported by unmanned aerial vehicles (UAVs) and other low-altitude platforms. Edge computing and cloud computing are also crucial for UAISs, which require distributed computation architectures for computationally intensive tasks and massive data offloading.
This paper presents a comprehensive analysis of the opportunities and challenges of UAISs in UAV-supported NTNs, along with NTN-based UAIS applications. A field trial case study demonstrates the application of NTN in UAIS. Full article
23 pages, 2929 KiB  
Review
Parametric and Nonparametric Machine Learning Techniques for Increasing Power System Reliability: A Review
by Fariha Imam, Petr Musilek and Marek Z. Reformat
Information 2024, 15(1), 37; https://doi.org/10.3390/info15010037 - 11 Jan 2024
Viewed by 1789
Abstract
Due to aging infrastructure, technical issues, increased demand, and environmental developments, the reliability of power systems is of paramount importance. Utility companies aim to provide uninterrupted and efficient power supply to their customers. To achieve this, they focus on implementing techniques and methods to minimize downtime in power networks and reduce maintenance costs. In addition to traditional statistical methods, modern technologies such as machine learning have become increasingly common for enhancing system reliability and customer satisfaction. The primary objective of this study is to review parametric and nonparametric machine learning techniques and their applications in relation to maintenance-related aspects of power distribution system assets, including (1) distribution lines, (2) transformers, and (3) insulators. Compared to other reviews, this study offers a unique perspective on machine learning algorithms and their predictive capabilities in relation to the critical components of power distribution systems. Full article
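The parametric/nonparametric distinction the review is organized around can be illustrated with a toy example. The snippet below is purely illustrative and not drawn from the paper: a parametric model (simple linear regression, with a fixed set of two parameters however much data arrives) is contrasted with a nonparametric one (k-nearest neighbours, where the training data itself is the model), applied to a made-up asset-age-versus-failure-score relationship.

```python
# Parametric vs. nonparametric prediction on a toy
# "asset age -> failure score" dataset (made-up numbers).

def fit_linear(xs, ys):
    """Parametric: ordinary least squares for y = a*x + b.
    Two parameters summarize the whole training set."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def knn_predict(xs, ys, x, k=3):
    """Nonparametric: average the targets of the k training points
    nearest to x. Model complexity grows with the data."""
    nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x))[:k]
    return sum(ys[i] for i in nearest) / k

if __name__ == "__main__":
    ages = [float(a) for a in range(1, 11)]     # years in service
    scores = [2.0 * a for a in ages]            # toy linear degradation
    lin = fit_linear(ages, scores)
    print(lin(5.0))                             # parametric estimate
    print(knn_predict(ages, scores, 5.0, k=3))  # nonparametric estimate
```

On this perfectly linear toy data both agree; the trade-off the review surveys appears when the relationship is nonlinear or the data are scarce, where the rigid parametric form and the data-hungry nonparametric one fail in different ways.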