Information, Volume 14, Issue 10 (October 2023) – 64 articles

Cover Story: Computer vision is a powerful tool for healthcare applications, as it can provide the ability to perform objective diagnoses and assessments of pathologies. It can also help speed up population screening, reduce healthcare costs and improve the quality of service. Several articles summarise applications and systems in medical imaging, whereas less research is devoted to surveying approaches for healthcare goals using ambient intelligence, i.e., observing individuals in natural settings. In addition, there is a lack of papers providing a survey of research that exhaustively covers computer vision applications for children’s health. Thus, the aim of this paper is to survey articles covering children’s health-related issues through ambient intelligence methods and systems relying on computer vision.
15 pages, 1514 KiB  
Article
Deep-Learning-Based Multitask Ultrasound Beamforming
by Elay Dahan and Israel Cohen
Information 2023, 14(10), 582; https://doi.org/10.3390/info14100582 - 23 Oct 2023
Viewed by 1577
Abstract
In this paper, we present a new method for multitask learning applied to ultrasound beamforming. Beamforming is a critical component in the ultrasound image formation pipeline. Ultrasound images are constructed using sensor readings from multiple transducer elements, with each element typically capturing multiple acquisitions per frame. Hence, the beamformer is crucial for framerate performance and overall image quality. Furthermore, post-processing, such as image denoising, is usually applied to the beamformed image to achieve high clarity for diagnosis. This work shows a fully convolutional neural network that can learn different tasks by applying a new weight normalization scheme. We adapt our model both to high-frame-rate requirements, by fitting the weight normalization parameters for the sub-sampling task, and to image denoising, by optimizing the normalization parameters for the speckle reduction task. Our model outperforms single-angle delay-and-sum beamforming on pixel-level measures for speckle noise reduction, subsampling, and single-angle reconstruction. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
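A minimal NumPy sketch of the single-angle delay-and-sum baseline that the paper compares against may help as a point of reference; the array geometry, sampling parameters, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, z, x):
    """Single-point delay-and-sum for a 0-degree plane-wave transmit.

    rf        : (n_elements, n_samples) received RF channel data
    element_x : (n_elements,) lateral element positions [m]
    fs        : sampling rate [Hz]; c : speed of sound [m/s]
    z, x      : axial/lateral coordinates of the focal point [m]
    """
    t_tx = z / c                                      # plane wave straight down
    t_rx = np.sqrt(z**2 + (x - element_x) ** 2) / c   # echo back to each element
    idx = np.clip(np.round((t_tx + t_rx) * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()      # coherent sum across elements

# Toy usage: 64 elements at 0.3 mm pitch, synthetic noise as channel data.
rng = np.random.default_rng(0)
rf = rng.standard_normal((64, 2048))
elem_x = (np.arange(64) - 31.5) * 0.3e-3
print(delay_and_sum(rf, elem_x, fs=40e6, c=1540.0, z=20e-3, x=0.0))
```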

17 pages, 1420 KiB  
Article
Mobility Control Centre and Artificial Intelligence for Sustainable Urban Districts
by Francis Marco Maria Cirianni, Antonio Comi and Agata Quattrone
Information 2023, 14(10), 581; https://doi.org/10.3390/info14100581 - 21 Oct 2023
Cited by 3 | Viewed by 3404
Abstract
The application of artificial intelligence (AI) to dynamic mobility management can support the achievement of efficiency and sustainability goals. AI can help to model alternative mobility system scenarios in real time (by processing big data from heterogeneous sources in a very short time) and to identify network and service configurations by comparing phenomena in similar contexts, as well as support the implementation of measures for managing demand that achieve sustainable goals. In this paper, an in-depth analysis of scenarios, with an IT (Information Technology) framework based on emerging technologies and AI to support sustainable and cooperative digital mobility, is provided. The functional architecture of an AI-based mobility control centre is then defined, and the process that has been implemented in a medium-large city is presented. Full article

29 pages, 4177 KiB  
Article
Interoperability and Targeted Attacks on Terrorist Organizations Using Intelligent Tools from Network Science
by Alexandros Z. Spyropoulos, Evangelos Ioannidis and Ioannis Antoniou
Information 2023, 14(10), 580; https://doi.org/10.3390/info14100580 - 21 Oct 2023
Cited by 2 | Viewed by 2049
Abstract
The early intervention of law enforcement authorities to prevent an impending terrorist attack is of utmost importance to ensuring economic, financial, and social stability. From our previously published research, the key individuals who play a vital role in terrorist organizations can be revealed in a timely manner. The problem now is to identify which attack strategy (node removal) is the most damaging to terrorist networks, making them fragmented and, therefore, unable to operate under real-world conditions. We examine several attack strategies on four real terrorist networks. Each node removal strategy is based on: (i) randomness (random node removal), (ii) high strength centrality, (iii) high betweenness centrality, (iv) high clustering coefficient centrality, (v) high recalculated strength centrality, (vi) high recalculated betweenness centrality, and (vii) high recalculated clustering coefficient centrality. The damage of each attack strategy is evaluated in terms of Interoperability, which is defined based on the size of the giant component. We also examine a greedy algorithm, which removes the node corresponding to the maximal decrease of Interoperability at each step. Our analysis revealed that removing nodes based on high recalculated betweenness centrality is the most harmful. In this way, the Interoperability of the communication network drops dramatically, even if only two nodes are removed. This valuable insight can help law enforcement authorities in developing more effective intervention strategies for the early prevention of impending terrorist attacks. Results were obtained based on real data on social ties between terrorists (physical face-to-face social interactions). Full article
(This article belongs to the Special Issue Complex Network Analysis in Security)
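The most damaging strategy above, node removal by recalculated betweenness centrality, can be sketched with networkx as follows; the giant-component size stands in for the paper's Interoperability measure, and the random toy graph replaces the four real terrorist networks.

```python
import networkx as nx

def attack_recalculated_betweenness(G, n_removals):
    """Repeatedly remove the node with the highest betweenness centrality,
    recalculated on the damaged graph after every removal, and record the
    giant-component size (a proxy for Interoperability) at each step."""
    H = G.copy()
    giant_sizes = []
    for _ in range(n_removals):
        bc = nx.betweenness_centrality(H)
        H.remove_node(max(bc, key=bc.get))
        giant = max(nx.connected_components(H), key=len) if len(H) else set()
        giant_sizes.append(len(giant))
    return giant_sizes

# Toy usage on a random network.
G = nx.erdos_renyi_graph(50, 0.08, seed=1)
print(attack_recalculated_betweenness(G, n_removals=5))
```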

14 pages, 2419 KiB  
Article
Improving CS1 Programming Learning with Visual Execution Environments
by Raquel Hijón-Neira, Celeste Pizarro, John French, Pedro Paredes-Barragán and Michael Duignan
Information 2023, 14(10), 579; https://doi.org/10.3390/info14100579 - 20 Oct 2023
Cited by 1 | Viewed by 1319
Abstract
Students in their first year of computer science (CS1) at universities typically struggle to grasp fundamental programming concepts. This paper discusses research carried out using a Java-based visual execution environment (VEE) to introduce fundamental programming concepts to CS1 students. The VEE guides beginner programmers through the fundamentals of programming, utilizing visual metaphors to explain and direct interactive tasks implemented in Java. The study’s goal was to determine if the use of the VEE in the instruction of a group of 63 CS1 students from four different groups enrolled in two academic institutions (based in Madrid, Spain and Galway, Ireland) results in an improvement in their grasp of fundamental programming concepts. The programming concepts covered included those typically found in an introductory programming course, e.g., input and output, conditionals, loops, functions, arrays, recursion, and files. A secondary goal of this research was to examine if the use of the VEE enhances students’ understanding of particular concepts more than others, i.e., whether there exists a topic-dependent benefit to the use of the VEE. The results of the study found that use of the VEE in the instruction of these students resulted in a significant improvement in their grasp of fundamental programming concepts compared with a control group who received instruction without the use of the VEE. The study also found a pronounced improvement in the students’ grasp of particular concepts (e.g., operators, conditionals, and loops), suggesting the presence of a topic-dependent benefit to the use of the VEE. Full article
(This article belongs to the Special Issue Information Technologies in Education, Research and Innovation)

25 pages, 1805 KiB  
Article
A Conceptual Design of an AI-Enabled Decision Support System for Analysing Donor Behaviour in Nonprofit Organisations
by Idrees Alsolbi, Renu Agarwal, Bhuvan Unhelkar, Tareq Al-Jabri, Mahendra Samarawickrama, Siamak Tafavogh and Mukesh Prasad
Information 2023, 14(10), 578; https://doi.org/10.3390/info14100578 - 20 Oct 2023
Viewed by 1337
Abstract
Analysing and understanding donor behaviour in nonprofit organisations (NPOs) is challenging due to the lack of human and technical resources. Machine learning (ML) techniques can analyse and understand donor behaviour at a certain level; however, it remains to be seen how to build and design an artificial-intelligence-enabled decision-support system (AI-enabled DSS) to analyse donor behaviour. Thus, this paper proposes an AI-enabled DSS conceptual design to analyse donor behaviour in NPOs. A conceptual design is created following a design science research approach to evaluate the initial design principles (DPs) and features of an AI-enabled DSS for analysing donor behaviour in NPOs. The evaluation process of the conceptual design applied formative assessment by conducting interviews with stakeholders from NPOs. The interviews were conducted using the Appreciative Inquiry framework to facilitate the interview process. The evaluation of the conceptual design results led to recommendations for efficiency, effectiveness, flexibility, and usability in the requirements of the AI-enabled DSS. This research contributes to the design knowledge base of AI-enabled DSSs for analysing donor behaviour in NPOs. Future research will combine theoretical components to introduce a practical AI-enabled DSS for analysing donor behaviour in NPOs. This research is limited to the analysis of donors who donate money or volunteer time for NPOs. Full article
(This article belongs to the Section Information Systems)

15 pages, 1726 KiB  
Review
Thematic Analysis of Big Data in Financial Institutions Using NLP Techniques with a Cloud Computing Perspective: A Systematic Literature Review
by Ratnesh Kumar Sharma, Gnana Bharathy, Faezeh Karimi, Anil V. Mishra and Mukesh Prasad
Information 2023, 14(10), 577; https://doi.org/10.3390/info14100577 - 20 Oct 2023
Viewed by 1900
Abstract
This literature review explores the existing work and practices in applying natural language processing techniques for thematic analysis to financial data in cloud environments. This work aims to improve two of the five Vs of the big data system. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) approach for the review. We analyzed the research papers published over the last 10 years on the topic in question using a keyword-based search and bibliometric analysis. The systematic literature review was conducted in multiple phases, and filters were applied to exclude papers based initially on the title and abstract, then on the methodology/conclusion, and, finally, after reading the full text. The remaining papers were then considered and are discussed here. We found that automated data discovery methods can be augmented by applying an NLP-based thematic analysis to the financial data in cloud environments. This can help identify the correct classification/categorization and measure data quality for a sentiment analysis. Full article

41 pages, 9365 KiB  
Article
Computing the Sound–Sense Harmony: A Case Study of William Shakespeare’s Sonnets and Francis Webb’s Most Popular Poems
by Rodolfo Delmonte
Information 2023, 14(10), 576; https://doi.org/10.3390/info14100576 - 20 Oct 2023
Viewed by 1775
Abstract
Poetic devices implicitly work towards inducing the reader to associate intended and expressed meaning to the sounds of the poem. In turn, sounds may be organized a priori into categories and assigned presumed meaning as suggested by traditional literary studies. To compute the degree of harmony and disharmony, I have automatically extracted the sound grids of all the sonnets by William Shakespeare and have combined them with the themes expressed by their contents. In a first experiment, sounds were associated with lexically and semantically based sentiment analysis, obtaining 80% agreement. In a second experiment, sentiment analysis was substituted by Appraisal Theory, thus obtaining a more fine-grained interpretation that combines disharmony with irony. The computation for Francis Webb is based on his 100 most popular poems and combines automatic semantically and lexically based sentiment analysis with sound grids. The results produce visual maps that clearly separate poems into three clusters: negative harmony, positive harmony and disharmony, where the latter instantiates the need by the poet to encompass the opposites in a desperate attempt to reconcile them. Shakespeare and Webb have been chosen to prove the applicability of the proposed method in general contexts of poetry, as they exhibit the widest possible gap at all linguistic and poetic levels. Full article
(This article belongs to the Special Issue Computational Linguistics and Natural Language Processing)

23 pages, 5243 KiB  
Article
Generative Adversarial Networks (GANs) for Audio-Visual Speech Recognition in Artificial Intelligence IoT
by Yibo He, Kah Phooi Seng and Li Minn Ang
Information 2023, 14(10), 575; https://doi.org/10.3390/info14100575 - 19 Oct 2023
Cited by 5 | Viewed by 2406
Abstract
This paper proposes a novel multimodal generative adversarial network architecture for audio-visual speech recognition (multimodal AVSR GAN), to improve both the energy efficiency and the AVSR classification accuracy of artificial intelligence Internet of Things (IoT) applications. AVSR is a classical multimodal task that is commonly used in IoT and embedded systems. Examples of suitable IoT applications include in-cabin speech recognition systems for driving, AVSR in augmented reality environments, and interactive applications such as virtual aquariums. The application of multimodal sensor data in IoT requires efficient information processing to meet the hardware constraints of IoT devices. The proposed multimodal AVSR GAN architecture is composed of a discriminator and a generator, each of which is a two-stream network, corresponding to the audio stream and the visual stream, respectively. To validate this approach, we used augmented data from well-known datasets (LRS2-Lip Reading Sentences 2 and LRS3) in the training process, and testing was performed using the original data. The research and experimental results showed that the proposed multimodal AVSR GAN architecture improved the AVSR classification accuracy. Furthermore, in this study, we discuss the domain of GANs and provide a concise summary of the proposed GANs. Full article
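A minimal PyTorch sketch of the two-stream idea, shown for the discriminator side only; the feature dimensions, layers, and fusion strategy are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoStreamDiscriminator(nn.Module):
    """Two-stream discriminator: one branch for audio features, one for
    visual (lip) features, fused before the real/fake decision."""

    def __init__(self, audio_dim=128, visual_dim=256):
        super().__init__()
        self.audio = nn.Sequential(nn.Linear(audio_dim, 64), nn.LeakyReLU(0.2))
        self.visual = nn.Sequential(nn.Linear(visual_dim, 64), nn.LeakyReLU(0.2))
        self.head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, a, v):
        return self.head(torch.cat([self.audio(a), self.visual(v)], dim=-1))

D = TwoStreamDiscriminator()
print(D(torch.randn(8, 128), torch.randn(8, 256)).shape)  # torch.Size([8, 1])
```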

18 pages, 1627 KiB  
Article
Translation Performance from the User’s Perspective of Large Language Models and Neural Machine Translation Systems
by Jungha Son and Boyoung Kim
Information 2023, 14(10), 574; https://doi.org/10.3390/info14100574 - 19 Oct 2023
Cited by 3 | Viewed by 4679
Abstract
The rapid global expansion of ChatGPT, which plays a crucial role in interactive knowledge sharing and translation, underscores the importance of comparative performance assessments in artificial intelligence (AI) technology. This study concentrated on this crucial issue by exploring and contrasting the translation performances of large language models (LLMs) and neural machine translation (NMT) systems. For this aim, the APIs of Google Translate, Microsoft Translator, and OpenAI’s ChatGPT were utilized, leveraging parallel corpora from the Workshop on Machine Translation (WMT) 2018 and 2020 benchmarks. By applying recognized evaluation metrics such as BLEU, chrF, and TER, a comprehensive performance analysis across a variety of language pairs, translation directions, and reference token sizes was conducted. The findings reveal that while Google Translate and Microsoft Translator generally surpass ChatGPT in terms of their BLEU, chrF, and TER scores, ChatGPT exhibits superior performance in specific language pairs. Translations from non-English to English consistently yielded better results across all three systems compared with translations from English to non-English. Significantly, an improvement in translation system performance was observed as the token size increased, hinting at the potential benefits of training models on larger token sizes. Full article
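The three metrics named above can be computed with the sacrebleu library; the sentences below are placeholders for system outputs and references.

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["The cat sat on the mat."]       # system translations
references = [["The cat is on the mat."]]      # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU={bleu.score:.1f}  chrF={chrf.score:.1f}  TER={ter.score:.1f}")
```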

20 pages, 1734 KiB  
Article
The Impact of Data Science Solutions on the Company Turnover
by Marian Pompiliu Cristescu, Dumitru Alexandru Mara, Lia Cornelia Culda, Raluca Andreea Nerișanu, Adela Bâra and Simona-Vasilica Oprea
Information 2023, 14(10), 573; https://doi.org/10.3390/info14100573 - 19 Oct 2023
Viewed by 1480
Abstract
This study explores the potential of data science software solutions, such as Customer Relationship Management (CRM) software, for increasing the revenue generation of businesses. We focused on businesses in the accommodation and food service sector across the European Union (EU). The investigation is contextualized within the rising trend of data-driven decision-making, examining the potential correlation between data science applications and business revenues. By employing a comprehensive evaluation of Eurostat datasets from 2014 to 2021, we used both univariate and multivariate analyses, assessing the percentage of companies that have e-commerce sales across the EU countries, focusing on the usage of big data analytics from any source and the use of CRM tools for marketing purposes or other activities. Big data utilization showed a clear, positive relationship with enhanced e-commerce sales. However, CRM tools exhibited a dualistic impact: while their use in marketing showed no significant effect on sales, their application in non-marketing functions had a negative effect on sales. These findings underscore the potential role of CRM and data science solutions in enhancing business performance in the EU’s accommodation and food service industry. Full article
(This article belongs to the Section Information Processes)

22 pages, 558 KiB  
Article
Prototype Selection for Multilabel Instance-Based Learning
by Panagiotis Filippakis, Stefanos Ougiaroglou and Georgios Evangelidis
Information 2023, 14(10), 572; https://doi.org/10.3390/info14100572 - 19 Oct 2023
Viewed by 1253
Abstract
Reducing the size of the training set by replacing it with a condensed set is a widely adopted practice to enhance the efficiency of instance-based classifiers while trying to maintain high classification accuracy. This objective can be achieved through the use of data reduction techniques, also known as prototype selection or generation algorithms. Although there are numerous algorithms available in the literature that effectively address single-label classification problems, most of them are not applicable to multilabel data, where an instance can belong to multiple classes, and the well-known transformation methods cannot be combined with a data reduction technique for various reasons. The Condensed Nearest Neighbor rule is a popular parameter-free single-label prototype selection algorithm, and the IB2 algorithm is its one-pass variation. This paper proposes variations of these algorithms for multilabel data. Through an experimental study conducted on nine distinct datasets as well as statistical tests, we demonstrate that the eight proposed approaches (four for each algorithm) offer significant reduction rates without compromising the classification accuracy. Full article
(This article belongs to the Special Issue International Database Engineered Applications)
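For orientation, here is the classic single-label IB2 rule that the paper adapts to multilabel data; the multilabel variants themselves are not reproduced, and the Euclidean distance and toy data are assumptions.

```python
import numpy as np

def ib2(X, y):
    """One-pass IB2 condensing: an instance joins the condensed set only if
    the current condensed set misclassifies it under the 1-NN rule."""
    keep_X, keep_y = [X[0]], [y[0]]  # seed with the first instance
    for xi, yi in zip(X[1:], y[1:]):
        dists = np.linalg.norm(np.array(keep_X) - xi, axis=1)
        if keep_y[int(np.argmin(dists))] != yi:
            keep_X.append(xi)
            keep_y.append(yi)
    return np.array(keep_X), np.array(keep_y)

# Toy usage: keep only the instances the growing condensed set gets wrong.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] > 0).astype(int)
Xc, yc = ib2(X, y)
print(f"kept {len(Xc)} of {len(X)} instances")
```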

15 pages, 4049 KiB  
Article
On the Use of Kullback–Leibler Divergence for Kernel Selection and Interpretation in Variational Autoencoders for Feature Creation
by Fábio Mendonça, Sheikh Shanawaz Mostafa, Fernando Morgado-Dias and Antonio G. Ravelo-García
Information 2023, 14(10), 571; https://doi.org/10.3390/info14100571 - 18 Oct 2023
Viewed by 1388
Abstract
This study presents a novel approach for kernel selection based on Kullback–Leibler divergence in variational autoencoders using features generated by the convolutional encoder. The proposed methodology focuses on identifying the most relevant subset of latent variables to reduce the model’s parameters. Each latent variable is sampled from the distribution associated with a single kernel of the last encoder’s convolutional layer, resulting in an individual distribution for each kernel. Relevant features are selected from the sampled latent variables to perform kernel selection, which filters out uninformative features and, consequently, unnecessary kernels. Both the proposed filter method and the sequential feature selection (standard wrapper method) were examined for feature selection. Particularly, the filter method evaluates the Kullback–Leibler divergence between all kernels’ distributions and hypothesizes that similar kernels can be discarded as they do not convey relevant information. This hypothesis was confirmed through the experiments performed on four standard datasets, where it was observed that the number of kernels can be reduced without meaningfully affecting the performance. This analysis was based on the accuracy of the model when the selected kernels fed a probabilistic classifier and the feature-based similarity index to appraise the quality of the reconstructed images when the variational autoencoder only uses the selected kernels. Therefore, the proposed methodology guides the reduction of the number of parameters of the model, making it suitable for developing applications for resource-constrained devices. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
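A minimal sketch of the filter idea: measure a symmetrised Kullback–Leibler divergence between the per-kernel latent Gaussians and flag near-duplicate kernels as discardable; the threshold and toy statistics are assumptions.

```python
import numpy as np

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL(p || q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def redundant_kernel_pairs(mu, var, threshold=0.1):
    """Kernel pairs whose latent distributions are nearly identical
    (symmetrised KL below the threshold); one of each pair can be dropped."""
    pairs = []
    for i in range(len(mu)):
        for j in range(i + 1, len(mu)):
            sym_kl = kl_gauss(mu[i], var[i], mu[j], var[j]) + \
                     kl_gauss(mu[j], var[j], mu[i], var[i])
            if sym_kl < threshold:
                pairs.append((i, j))
    return pairs

# Toy usage: per-kernel posterior means/variances from the encoder.
mu = np.array([0.0, 0.05, 2.0])
var = np.array([1.0, 1.1, 0.5])
print(redundant_kernel_pairs(mu, var))  # kernels 0 and 1 are near-duplicates
```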

19 pages, 6678 KiB  
Article
DAEM: A Data- and Application-Aware Error Analysis Methodology for Approximate Adders
by Muhammad Abdullah Hanif, Rehan Hafiz and Muhammad Shafique
Information 2023, 14(10), 570; https://doi.org/10.3390/info14100570 - 17 Oct 2023
Viewed by 1573
Abstract
Approximate adders are some of the fundamental arithmetic operators that are being employed in error-resilient applications, to achieve performance/energy/area gains. This improvement usually comes at the cost of some accuracy and, therefore, requires prior error analysis, to select an approximate adder variant that provides acceptable accuracy. Most of the state-of-the-art error analysis techniques for approximate adders assume input bits and operands to be independent of one another, while some also assume the operands to be uniformly distributed. In this paper, we analyze the impact of these assumptions on the accuracy of error estimation techniques, and we highlight the need to address these assumptions, to achieve better and more realistic quality estimates. Based on our analysis, we propose DAEM, a data- and application-aware error analysis methodology for approximate adders. Unlike existing error analysis models, we neither assume the adder operands to be uniformly distributed nor assume them to be independent. Specifically, we use 2D joint input probability mass functions (PMFs), populated using sample data, in order to incorporate the data and application knowledge in the analysis. These 2D joint input PMFs, along with 2D error maps of approximate adders, are used to estimate the error PMF of an adder network. The error PMF is then utilized to compute different error measures, such as the mean squared error (MSE) and mean error distance (MED). We evaluate the proposed error analysis methodology on audio and video processing applications, and we demonstrate that our methodology provides error estimates having a better correlation with the simulation results, as compared to the state-of-the-art techniques. Full article
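The core computation described above, propagating a 2D joint input PMF through an adder's error map to obtain the error PMF and the MSE/MED measures, can be sketched as follows; the operand width and the error map are toy assumptions.

```python
import numpy as np
from collections import defaultdict

def error_pmf(joint_pmf, error_map):
    """joint_pmf[a, b] : probability of operand pair (a, b)
    error_map[a, b] : approximate-adder error for that pair
                      (approximate sum minus exact sum)
    Returns the PMF of the error value as {error: probability}."""
    pmf = defaultdict(float)
    for (a, b), p in np.ndenumerate(joint_pmf):
        pmf[int(error_map[a, b])] += p
    return dict(pmf)

def mse_and_med(pmf):
    mse = sum(p * e**2 for e, p in pmf.items())    # mean squared error
    med = sum(p * abs(e) for e, p in pmf.items())  # mean error distance
    return mse, med

# Toy usage with 4-valued operands and a made-up error map.
rng = np.random.default_rng(0)
joint = rng.random((4, 4)); joint /= joint.sum()  # 2D joint input PMF
errs = rng.integers(-1, 2, size=(4, 4))           # hypothetical error map
pmf = error_pmf(joint, errs)
print(pmf, mse_and_med(pmf))
```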

11 pages, 2587 KiB  
Article
An AI-Based Framework for Translating American Sign Language to English and Vice Versa
by Vijayendra D. Avina, Md Amiruzzaman, Stefanie Amiruzzaman, Linh B. Ngo and M. Ali Akber Dewan
Information 2023, 14(10), 569; https://doi.org/10.3390/info14100569 - 15 Oct 2023
Cited by 1 | Viewed by 2265
Abstract
In this paper, we propose a framework to convert American Sign Language (ASL) to English and English to ASL. Within this framework, we use a deep learning model along with rolling average prediction, which captures image frames from videos and classifies the signs from the image frames. The classified frames are then used to construct ASL words and sentences to support people with hearing impairments. We also use the same deep learning model to capture signs from deaf users and convert them into ASL words and English sentences. Based on this framework, we developed a web-based tool for use in real-life applications, and we also present the tool as a proof of concept. In the evaluation, we found that the deep learning model converts the image signs into ASL words and sentences with high accuracy. The tool was also found to be very useful for people who are deaf or hard of hearing. The main contribution of this work is the design of a system to convert ASL to English and vice versa. Full article
(This article belongs to the Section Artificial Intelligence)
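A minimal sketch of the rolling average prediction step described above: per-frame class probabilities are averaged over a sliding window, and a sign label is emitted only once the averaged confidence clears a threshold; the window size and threshold are assumptions.

```python
from collections import deque
import numpy as np

class RollingAveragePredictor:
    """Smooth per-frame class probabilities over the last `window` frames."""

    def __init__(self, window=10, threshold=0.6):
        self.buffer = deque(maxlen=window)
        self.threshold = threshold

    def update(self, frame_probs):
        self.buffer.append(np.asarray(frame_probs))
        avg = np.mean(self.buffer, axis=0)
        best = int(np.argmax(avg))
        return best if avg[best] >= self.threshold else None  # None = undecided

# Toy usage: noisy per-frame softmax outputs from a 3-class sign classifier.
rng = np.random.default_rng(0)
pred = RollingAveragePredictor()
label = None
for _ in range(15):
    label = pred.update(rng.dirichlet([1, 5, 1]))  # class 1 dominates on average
print("stabilised label:", label)
```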

13 pages, 1175 KiB  
Article
Investigating the Relationship of User Acceptance to the Characteristics and Performance of an Educational Software in Byzantine Music
by Konstantinos-Hercules Kokkinidis, Georgios Patronas, Sotirios K. Goudos, Theodoros Maikantis and Nikolaos Nikolaidis
Information 2023, 14(10), 568; https://doi.org/10.3390/info14100568 - 15 Oct 2023
Viewed by 1279
Abstract
The purpose of this study is to examine the impact of educational software characteristics on software performance through the mediating role of user acceptance. Our approach allows for a deeper understanding of the factors that contribute to the effectiveness of educational software by bridging the fields of educational technology, psychology, and human–computer interaction, offering a holistic perspective on software adoption and performance. This study is based on a sample collected from public and private education institutes in Northern Greece and on data obtained from 236 users. The statistical method employed is structural equation modelling (SEM), via SPSS–AMOS estimation. The findings of this study suggest that user acceptance and performance appraisal are strongly interrelated in regard to educational applications. The study argues that user acceptance is positively related to the performance of educational software and constitutes the central mediating construct between educational software characteristics and performance. Additional findings show that computer familiarity and a background in choral music are positively related to the performance of the educational software. Our conclusions help in understanding the psychological and behavioral aspects of technology adoption in the educational setting. Findings are discussed in terms of their practical usefulness in education and further research. Full article

18 pages, 516 KiB  
Article
Automated Assessment of Comprehension Strategies from Self-Explanations Using LLMs
by Bogdan Nicula, Mihai Dascalu, Tracy Arner, Renu Balyan and Danielle S. McNamara
Information 2023, 14(10), 567; https://doi.org/10.3390/info14100567 - 14 Oct 2023
Cited by 1 | Viewed by 1728
Abstract
Text comprehension is an essential skill in today’s information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while understanding Science, Technology, Engineering, and Mathematics (STEM) texts. The experiments relied on a corpus of three datasets (N = 11,833) with self-explanations annotated on four dimensions: three comprehension strategies (i.e., bridging, elaboration, and paraphrasing) and overall quality. Besides FLAN-T5, we also considered GPT-3.5-turbo to establish a stronger baseline. Our experiments indicated that performance improved with fine-tuning, with a larger LLM, and with examples provided via the prompt. Our best model used a pretrained FLAN-T5 XXL model and obtained a weighted F1-score of 0.721, surpassing the 0.699 F1-score previously obtained using smaller models (i.e., RoBERTa). Full article
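A minimal inference sketch with the Hugging Face transformers API for this kind of strategy scoring; the prompt wording and rating scale are assumptions, and a small checkpoint stands in for FLAN-T5 XXL so the example runs on modest hardware.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "google/flan-t5-base"  # the paper's best model used FLAN-T5 XXL
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

prompt = (
    "Rate the paraphrasing in this self-explanation on a scale of 0 to 2.\n"
    "Text: The mitochondria produce energy for the cell.\n"
    "Self-explanation: The cell gets its energy from the mitochondria.\n"
    "Rating:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```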

18 pages, 3213 KiB  
Article
Multi-Beam Radar Communication Integrated System Design
by Hao Ma, Jun Wang, Xin Sun and Wenxin Jin
Information 2023, 14(10), 566; https://doi.org/10.3390/info14100566 - 14 Oct 2023
Viewed by 1623
Abstract
In this paper, we propose a multi-beam integrated radar and communication scheme using a phased-array antenna, in which the same LFM-BPSK integrated waveform is used for both the radar and the communication beams. In the integrated beam design, the radar beam is periodically scanned in different directions for detection, and the communication beam is steered in one direction for communication. The system uses adaptive beamforming technology to receive both radar echoes and communication signals. For the LFM-BPSK integrated waveform used by the system, we propose a method for estimating parameters during communication reception. Simulations show that the proposed beam-pattern design, adaptive beamforming, and parameter estimation scheme can achieve radar and communication functions using phased-array antennas. Full article
(This article belongs to the Section Wireless Technologies)
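A baseband sketch of an LFM-BPSK waveform of the kind named above: a linear-FM chirp whose phase additionally carries BPSK-coded communication bits; the bandwidth, duration, and one-bit-per-chip layout are illustrative assumptions.

```python
import numpy as np

def lfm_bpsk(bits, fs=10e6, T=100e-6, B=5e6):
    """Linear-FM chirp of bandwidth B and duration T whose phase is flipped
    by pi according to the BPSK bits (one bit per equal-length chip)."""
    t = np.arange(int(T * fs)) / fs
    chirp_phase = np.pi * (B / T) * t**2            # LFM (chirp) phase
    chips = np.repeat(np.asarray(bits), len(t) // len(bits))
    chips = np.pad(chips, (0, len(t) - len(chips)), mode="edge")
    bpsk_phase = np.pi * chips                      # 0 or pi per chip
    return np.exp(1j * (chirp_phase + bpsk_phase))  # complex baseband signal

# Toy usage: embed 8 communication bits in one radar chirp.
s = lfm_bpsk(bits=[0, 1, 1, 0, 1, 0, 0, 1])
print(s.shape, s[:3])
```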

14 pages, 2596 KiB  
Article
New Suptech Tool of the Predictive Generation for Insurance Companies—The Case of the European Market
by Timotej Jagrič, Daniel Zdolšek, Robert Horvat, Iztok Kolar, Niko Erker, Jernej Merhar and Vita Jagrič
Information 2023, 14(10), 565; https://doi.org/10.3390/info14100565 - 14 Oct 2023
Viewed by 1610
Abstract
Financial innovation, green investments, and climate change are changing insurers’ business ecosystems, impacting their business behaviour and financial vulnerability. Supervisors and other stakeholders are interested in identifying the path toward deterioration in an insurance company’s financial health as early as possible. Suptech tools enable them to discover more and to intervene in a timely manner. We propose an artificial intelligence approach using Kohonen’s self-organizing maps. The dataset used for development and testing included yearly financial statements, with 4058 observations for European composite insurance companies from 2012 to 2021. In a novel manner, the model investigates the behaviour of insurers, looking for similarities, and forms a map. For the obtained groupings of companies from different geographical origins, a common characteristic was discovered regarding their future financial deterioration. A threshold, defined as a solvency capital requirement (SCR) ratio below 130% in the following year, is applied to the map. On the test sample, the model correctly identified on average 86% of problematic companies and 79% of unproblematic companies. Changing the SCR ratio level enables differentiation into multiple map sections. The model does not rely on traditional methods or on the SCR ratio as a dependent variable but looks for similarities in the insurer’s actual financial behaviour. The proposed approach offers grounds for a Suptech tool of the predictive generation to support the early detection of possible future financial distress of an insurance company. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)
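A minimal sketch of the self-organizing-map step with the minisom library; the map size, training parameters, and the synthetic stand-in for financial-statement features are assumptions.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

# Toy stand-in for yearly financial-statement features (n_observations x n_features).
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 8))

som = MiniSom(10, 10, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=5000)

# Map each insurer-year to its best-matching unit; observations landing in the
# same map region behave similarly and can then be checked against a
# next-year SCR ratio < 130% flag to mark risky regions of the map.
cells = np.array([som.winner(x) for x in X])
print(cells[:5])
```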

19 pages, 299 KiB  
Review
Neural Network Applications in Polygraph Scoring—A Scoping Review
by Dana Rad, Nicolae Paraschiv and Csaba Kiss
Information 2023, 14(10), 564; https://doi.org/10.3390/info14100564 - 13 Oct 2023
Cited by 1 | Viewed by 1896
Abstract
Polygraph tests have been used for many years as a means of detecting deception, but their accuracy has been the subject of much debate. In recent years, researchers have explored the use of neural networks in polygraph scoring to improve the accuracy of deception detection. The purpose of this scoping review is to offer a comprehensive overview of the existing research on the subject of neural network applications in scoring polygraph tests. A total of 57 relevant papers were identified and analyzed for this review. The papers were examined for their research focus, methodology, results, and conclusions. The scoping review found that neural networks have shown promise in improving the accuracy of polygraph tests, with some studies reporting significant improvements over traditional methods. However, further research is needed to validate these findings and to determine the most effective ways of integrating neural networks into polygraph testing. The scoping review concludes with a discussion of the current state of the field and suggestions for future research directions. Full article
24 pages, 1880 KiB  
Article
KVMod—A Novel Approach to Design Key-Value NoSQL Databases
by Ahmed Dourhri, Mohamed Hanine and Hassan Ouahmane
Information 2023, 14(10), 563; https://doi.org/10.3390/info14100563 - 12 Oct 2023
Viewed by 1892
Abstract
The growth of structured, semi-structured, and unstructured data produced by new applications is a result of the development and expansion of social networks, the Internet of Things, web technology, mobile devices, and other technologies. However, as traditional databases became less suitable for managing the rapidly growing quantity of data and variety of data structures, a new class of database management systems named NoSQL was required to satisfy the new requirements. Although NoSQL databases are generally schema-less, significant research has been conducted on their design. The literature review presented in this paper lets us claim the need for modeling techniques that describe how to structure data in NoSQL databases. Key-value is one of the NoSQL families that has received little attention, especially in terms of its design methodology. Most studies have focused on the other families, like column-oriented and document-oriented. This paper aims to present a design approach named KVMod (key-value modeling) specific to key-value databases. The purpose is to provide the scientific community and engineers with a methodology for the design of key-value stores using maximum automation and, therefore, minimum human intervention, which equals a minimum number of errors. A software tool called KVDesign has been implemented to automate the proposed methodology and, thus, the most time-consuming database modeling tasks. The complexity is also discussed to assess the efficiency of our proposed algorithms. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2023)

19 pages, 8333 KiB  
Article
A New Migration and Reproduction Intelligence Algorithm: Case Study in Cloud-Based Microgrid
by Renwu Yan, Yunzhang Liu and Ning Yu
Information 2023, 14(10), 562; https://doi.org/10.3390/info14100562 - 12 Oct 2023
Viewed by 955
Abstract
Inspired by the migration and reproduction of species in nature to explore suitable habitats, this paper proposes a new swarm intelligence algorithm called the Migration and Reproduction Algorithm (MARA). This new algorithm discusses how to transform the behavior of an organism looking for a suitable habitat into a mathematical model, which can solve optimization problems. MARA has some common features with other optimization methods such as particle swarm optimization (PSO) and the fireworks algorithm (FWA), which means MARA can also solve the optimization problems that PSO and FWA are used for, namely, high-dimensional optimization problems. MARA also has some unique features among biology-based optimization methods. In this paper, we articulated the structure of MARA by correlating it with natural biogeography; then, we demonstrated the performance of MARA on sets of 12 benchmark functions. In the end, we applied it to optimize a practical problem of power dispatching in a multi-microgrid system, which proved that it has value in practical applications. Full article
(This article belongs to the Special Issue Data Security and Privacy in Cloud and IoT)

23 pages, 5320 KiB  
Article
Exploring Effective Approaches to the Risk Management Framework (RMF) in the Republic of Korea: A Study
by Giseok Jeong, Kookjin Kim, Sukjoon Yoon, Dongkyoo Shin and Jiwon Kang
Information 2023, 14(10), 561; https://doi.org/10.3390/info14100561 - 12 Oct 2023
Viewed by 1740
Abstract
As the world undergoes rapid digitalization, individuals and objects are becoming more extensively connected through the advancement of Internet networks. This phenomenon has been observed in governmental and military domains as well and has, consequently, been accompanied by a rise in cyber threats. The United States (U.S.), in response to this, has been strongly urging its allies to adhere to the RMF standard to bolster the security of primary defense systems. An agreement has been signed between the Republic of Korea and the U.S. to collaboratively operate major defense systems and cooperate on cyber threats. However, the methodologies and tools required for RMF implementation have not yet been fully provided to several allied countries, including the Republic of Korea, causing difficulties in its implementation. In this study, the U.S. RMF process was applied to a specific system of the Republic of Korea Ministry of National Defense, and the outcomes were analyzed. Emphasis was placed on the initial two stages of the RMF, ‘system categorization’ and ‘security control selection’, presenting actual application cases. Additionally, a detailed description of the methodology used by the Republic of Korea Ministry of National Defense for RMF implementation in defense systems is provided, introducing a keyword-based overlay application methodology. An introduction to the K-RMF Baseline, Overlay, and Tailoring Tool is also given. The methodologies and tools presented are expected to serve as valuable references for allied countries, including the U.S., in effectively implementing the RMF. It is anticipated that the results of this research will contribute to enhancing cyber security and threat management among allies. Full article
(This article belongs to the Special Issue Emerging Information Technologies in the Field of Cyber Defense)

16 pages, 5162 KiB  
Article
Transfer Learning-Based YOLOv3 Model for Road Dense Object Detection
by Chunhua Zhu, Jiarui Liang and Fei Zhou
Information 2023, 14(10), 560; https://doi.org/10.3390/info14100560 - 12 Oct 2023
Cited by 1 | Viewed by 1298
Abstract
Stemming from the overlap of objects and undertraining due to few samples, road dense object detection is confronted with poor object identification performance and the inability to recognize edge objects. Based on this, a transfer learning-based YOLOv3 approach for identifying dense objects on the road has been proposed. Firstly, the Darknet-53 network structure is adopted to obtain a pre-trained YOLOv3 model. Then, transfer training is introduced as the output layer for a special dataset of 2000 images containing vehicles. In the proposed model, a random function is adopted to initialize and optimize the weights of the transfer training model, which is designed separately from the pre-trained YOLOv3. The object detection classifier replaces the fully connected layer, which further improves the detection effect. The reduced size of the network model can further reduce the training and detection time, so it can be better applied to actual scenarios. The experimental results demonstrate that the object detection accuracy of the presented approach is 87.75% on the Pascal VOC 2007 dataset, which is superior to the traditional YOLOv3 and YOLOv5 by 4% and 0.59%, respectively. Additionally, the test was carried out using UA-DETRAC, a public road vehicle detection dataset. The object detection accuracy of the presented approach reaches 79.23%, which is 4.13% better than the traditional YOLOv3 and 1.36% better than the relatively new object detection algorithm YOLOv5. Moreover, the detection speed of the proposed YOLOv3 method reaches 31.2 FPS, which is 7.6 FPS faster than the traditional YOLOv3 and 1.5 FPS faster than the newer object detection algorithm YOLOv7. The proposed YOLOv3 performs 67.36 billion floating-point operations per second in detecting video, which is clearly fewer than both the traditional YOLOv3 and the newer YOLOv5. Full article
(This article belongs to the Topic Lightweight Deep Neural Networks for Video Analytics)

16 pages, 5731 KiB  
Article
Innovative Visualization Approach for Biomechanical Time Series in Stroke Diagnosis Using Explainable Machine Learning Methods: A Proof-of-Concept Study
by Kyriakos Apostolidis, Christos Kokkotis, Evangelos Karakasis, Evangeli Karampina, Serafeim Moustakidis, Dimitrios Menychtas, Georgios Giarmatzis, Dimitrios Tsiptsios, Konstantinos Vadikolias and Nikolaos Aggelousis
Information 2023, 14(10), 559; https://doi.org/10.3390/info14100559 - 12 Oct 2023
Cited by 3 | Viewed by 1380
Abstract
Stroke remains a predominant cause of mortality and disability worldwide. The endeavor to diagnose stroke through biomechanical time-series data coupled with Artificial Intelligence (AI) poses a formidable challenge, especially amidst constrained participant numbers. The challenge escalates when dealing with small datasets, a common scenario in preliminary medical research. While recent advances have ushered in few-shot learning algorithms adept at handling sparse data, this paper pioneers a distinctive methodology involving a visualization-centric approach to navigating the small-data challenge in diagnosing stroke survivors based on gait-analysis-derived biomechanical data. Employing Siamese neural networks (SNNs), our method transforms a biomechanical time series into visually intuitive images, facilitating a unique analytical lens. The kinematic data encapsulated comprise a spectrum of gait metrics, including movements of the ankle, knee, hip, and center of mass in three dimensions for both paretic and non-paretic legs. Following the visual transformation, the SNN serves as a potent feature extractor, mapping the data into a high-dimensional feature space conducive to classification. The extracted features are subsequently fed into various machine learning (ML) models like support vector machines (SVMs), Random Forest (RF), or neural networks (NN) for classification. In pursuit of heightened interpretability, a cornerstone in medical AI applications, we employ the Grad-CAM (Class Activation Map) tool to visually highlight the critical regions influencing the model’s decision. Our methodology, though exploratory, showcases a promising avenue for leveraging visualized biomechanical data in stroke diagnosis, achieving a perfect classification rate in our preliminary dataset. The visual inspection of generated images elucidates a clear separation of classes (100%), underscoring the potential of this visualization-driven approach in the realm of small data. This proof-of-concept study accentuates the novelty of visual data transformation in enhancing both interpretability and performance in stroke diagnosis using limited data, laying a robust foundation for future research in larger-scale evaluations. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
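A minimal sketch of the classification pipeline described above: one twin of the Siamese pair acts as a convolutional feature extractor over image-encoded gait data, and an SVM classifies the embeddings; the architecture and data shapes are assumptions, and the pairwise Siamese training itself is omitted.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class EmbeddingNet(nn.Module):
    """One twin of a Siamese network: maps a gait image to a feature vector."""

    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: embed 20 image-encoded gait series, then fit an SVM (stroke vs. control).
torch.manual_seed(0)
images = torch.randn(20, 1, 64, 64)
labels = np.array([0, 1] * 10)
with torch.no_grad():
    feats = EmbeddingNet()(images).numpy()
clf = SVC().fit(feats, labels)
print(clf.predict(feats[:4]))
```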

18 pages, 7011 KiB  
Article
Practice Projects for an FPGA-Based Remote Laboratory to Teach and Learn Digital Electronics
by Rafael Navas-González, Óscar Oballe-Peinado, Julián Castellanos-Ramos, Daniel Rosas-Cervantes and José A. Sánchez-Durán
Information 2023, 14(10), 558; https://doi.org/10.3390/info14100558 - 12 Oct 2023
Cited by 1 | Viewed by 1416
Abstract
This work presents examples of practice sessions to teach and learn digital electronics using an FPGA-based development platform, accessible either through the on-campus laboratory or online using a remote laboratory developed by the authors. The main tasks proposed in the practice sessions are to design specific modules that will be included as a main block in more complex projects. Each project is adapted and ready once the student modules to be implemented, debugged, and/or tested in the FPGA-based platform are added using the aforementioned accessibility methods. The proposal suggests the use of a web-based remote laboratory to complement (rather than replace) on-campus teaching in response to the growing need for access to laboratory resources beyond regular teaching hours. The paper introduces the main topics on implementing and using the tool, sets out how to adapt regular projects to be executed in the remote lab, and describes several practice projects proposed to students in the final three academic years. The paper concludes with an analysis and evaluation of the user experience taken from surveys conducted with students at the end of the semester. Full article

33 pages, 2449 KiB  
Review
Exploring Blockchain Research in Supply Chain Management: A Latent Dirichlet Allocation-Driven Systematic Review
by Abderahman Rejeb, Karim Rejeb, Steve Simske and John G. Keogh
Information 2023, 14(10), 557; https://doi.org/10.3390/info14100557 - 12 Oct 2023
Cited by 6 | Viewed by 6104
Abstract
Blockchain technology has emerged as a tool with the potential to enhance transparency, trust, security, and decentralization in supply chain management (SCM). This study presents a comprehensive review of the interplay between blockchain technology and SCM. By analyzing an extensive dataset of 943 articles, our exploration utilizes the Latent Dirichlet Allocation (LDA) method to delve deep into the thematic structure of the discourse. This investigation revealed ten central topics ranging from blockchain’s transformative role in supply chain finance and e-commerce operations to its application in specialized areas, such as the halal food supply chain and humanitarian contexts. Particularly pronounced were discussions on the challenges and transformations of blockchain integration in supply chains and its impact on pricing strategies and decision-making. Visualization tools, including PyLDAvis, further illuminated the interconnectedness of these themes, highlighting the intertwined nature of blockchain adoption challenges with aspects such as traceability and pricing. Despite the breadth of topics covered, the paper acknowledges its limitations due to the fast-evolving nature of blockchain developments during and after our analysis period. Ultimately, this review provides a holistic academic snapshot, emphasizing both well-developed and nascent research areas and guiding future research in the evolving domain of blockchain in SCM. Full article
(This article belongs to the Special Issue Blockchain, Technology and Its Application)
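A minimal sketch of the LDA step with gensim, reusing the ten-topic setting reported above; the tokenised toy documents stand in for the 943-article corpus.

```python
from gensim import corpora
from gensim.models import LdaModel

docs = [  # toy stand-ins for preprocessed, tokenised abstracts
    ["blockchain", "supply", "chain", "traceability"],
    ["smart", "contract", "finance", "blockchain"],
    ["halal", "food", "supply", "chain", "traceability"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus, num_topics=10, id2word=dictionary, passes=10, random_state=0)
for topic_id, words in lda.print_topics(num_topics=3, num_words=4):
    print(topic_id, words)
```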

16 pages, 5064 KiB  
Article
Particle Swarm Optimization-Based Control for Maximum Power Point Tracking Implemented in a Real Time Photovoltaic System
by Asier del Rio, Oscar Barambones, Jokin Uralde, Eneko Artetxe and Isidro Calvo
Information 2023, 14(10), 556; https://doi.org/10.3390/info14100556 - 11 Oct 2023
Cited by 2 | Viewed by 1313
Abstract
Photovoltaic panels present an economical and environmentally friendly renewable energy solution, with advantages such as emission-free operation, low maintenance, and noiseless performance. However, their nonlinear power-voltage curves necessitate efficient operation at the Maximum Power Point (MPP). Various techniques, including Hill Climb algorithms, are commonly employed in the industry due to their simplicity and ease of implementation. Nonetheless, intelligent approaches like Particle Swarm Optimization (PSO) offer enhanced tracking accuracy with reduced oscillations. The PSO algorithm, inspired by collective intelligence and animal swarm behavior, stands out as a promising solution due to its efficiency and ease of integration: it relies only on the standard current and voltage sensors already present in these systems, unlike most intelligent techniques, which require additional modeling or sensing that significantly increases the cost of the installation. The primary contribution of this study lies in the implementation and validation of an advanced control system based on the PSO algorithm for real-time Maximum Power Point Tracking (MPPT) in a commercial photovoltaic system, assessing its viability by testing it against the industry-standard controller, Perturbation and Observation (P&O), to highlight its advantages and limitations. Through rigorous experiments and comparisons with other methods, the proposed PSO-based control system's performance and feasibility have been thoroughly evaluated. A sensitivity analysis of the algorithm's search dynamics parameters has been conducted to identify the most effective combination for optimal real-time tracking. Notably, experimental comparisons with the P&O algorithm have revealed the PSO algorithm's ability to reduce settling time by up to a factor of three under similar conditions, resulting in a substantial decrease in energy losses during transient states, from 31.96% with P&O to 9.72% with PSO. Full article
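As a concrete illustration of the optimizer, the following is a minimal PSO sketch that searches a toy power-voltage curve for the MPP; the panel model, swarm size, and coefficients are illustrative assumptions, whereas the paper's controller runs against a real commercial photovoltaic system in real time.

```python
# Minimal PSO sketch for locating the maximum power point of a toy
# P-V curve. Panel model and swarm parameters are illustrative assumptions.
import random

def panel_power(v):
    # Hypothetical unimodal power-voltage curve (21 V open circuit,
    # peak around 16 V); a real panel would be measured, not modeled.
    i = 5.0 * (1.0 - (v / 21.0) ** 8)
    return max(v * i, 0.0)

n, iters = 10, 40
w, c1, c2 = 0.5, 1.5, 1.5                     # inertia, cognitive, social weights
pos = [random.uniform(0.0, 21.0) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                                # per-particle best voltage
gbest = max(pos, key=panel_power)             # swarm-wide best voltage

for _ in range(iters):
    for k in range(n):
        r1, r2 = random.random(), random.random()
        vel[k] = w * vel[k] + c1 * r1 * (pbest[k] - pos[k]) + c2 * r2 * (gbest - pos[k])
        pos[k] = min(max(pos[k] + vel[k], 0.0), 21.0)   # clamp to valid voltages
        if panel_power(pos[k]) > panel_power(pbest[k]):
            pbest[k] = pos[k]
    gbest = max(pbest, key=panel_power)

print(f"Estimated MPP: {gbest:.2f} V, {panel_power(gbest):.1f} W")
```

Unlike P&O, which perturbs the operating point one step at a time, the swarm samples several candidate voltages per iteration, which is what allows the faster settling reported in the abstract.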

41 pages, 4505 KiB  
Systematic Review
Artificial Intelligence and Software Modeling Approaches in Autonomous Vehicles for Safety Management: A Systematic Review
by Shirin Abbasi and Amir Masoud Rahmani
Information 2023, 14(10), 555; https://doi.org/10.3390/info14100555 - 11 Oct 2023
Cited by 4 | Viewed by 4690
Abstract
Autonomous vehicles (AVs) have emerged as a promising technology for enhancing road safety and mobility. However, designing AVs involves various critical aspects, such as software and system requirements, that must be carefully addressed. This paper investigates safety-aware approaches for AVs, focusing on software and system requirements. It reviews the existing methods based on software and system design and analyzes them according to their algorithms, parameters, evaluation criteria, and challenges. It also examines state-of-the-art artificial intelligence-based techniques for AVs, as AI has been a crucial element in advancing this technology. The review reveals that 63% of the studies examined use various AI methods, with deep learning being the most prevalent (34%). The article also identifies current gaps and future directions for AV safety research and can serve as a valuable reference for researchers and practitioners working on AV safety. Full article
(This article belongs to the Special Issue Automotive System Security: Recent Advances and Challenges)

19 pages, 1277 KiB  
Article
Top-Down Models across CPU Architectures: Applicability and Comparison in a High-Performance Computing Environment
by Fabio Banchelli, Marta Garcia-Gasulla and Filippo Mantovani
Information 2023, 14(10), 554; https://doi.org/10.3390/info14100554 - 10 Oct 2023
Viewed by 1259
Abstract
Top-Down models are defined by hardware architects to provide information on the utilization of different hardware components. The goal is to isolate the users from the complexity of the hardware architecture while giving them insight into how efficiently the code uses the resources. In this paper, we explore the applicability of four Top-Down models defined for different hardware architectures powering state-of-the-art HPC clusters (Intel Skylake, Fujitsu A64FX, IBM Power9, and Huawei Kunpeng 920) and propose a model for AMD Zen 2. We study a parallel CFD code used for scientific production to compare these five Top-Down models. We evaluate the level of insight achieved, the clarity of the information, the ease of use, and the conclusions each allows us to reach. Our study indicates that the Top-Down model makes it very difficult for a performance analyst to spot inefficiencies in complex scientific codes without delving deep into micro-architecture details. Full article
(This article belongs to the Special Issue Advances in High Performance Computing and Scalable Software)
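As a concrete example of what a Top-Down model reports, the sketch below computes the classic level-1 breakdown (retiring, frontend bound, bad speculation, backend bound) from slot-based counters on a four-wide Intel-like core; the counter roles and the example readings are assumptions for illustration, not measurements from the paper.

```python
# Minimal sketch of a level-1 Top-Down breakdown for a four-wide core.
# Counter semantics follow the usual slot-accounting formulation; the
# example readings below are made up for illustration.
def topdown_level1(clk, uops_issued, uops_retired, fe_undelivered,
                   recovery_cycles, width=4):
    slots = width * clk                               # total issue slots
    retiring = uops_retired / slots
    frontend_bound = fe_undelivered / slots
    bad_speculation = (uops_issued - uops_retired
                       + width * recovery_cycles) / slots
    backend_bound = 1.0 - retiring - frontend_bound - bad_speculation
    return {"Retiring": retiring,
            "Frontend Bound": frontend_bound,
            "Bad Speculation": bad_speculation,
            "Backend Bound": backend_bound}

breakdown = topdown_level1(clk=1_000_000, uops_issued=2_600_000,
                           uops_retired=2_400_000, fe_undelivered=600_000,
                           recovery_cycles=20_000)
for name, frac in breakdown.items():
    print(f"{name:16s} {frac:6.1%}")
```

The four fractions sum to one by construction, which is precisely why, as the paper concludes, such a coarse view cannot localize an inefficiency without descending into micro-architectural sub-levels.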

23 pages, 860 KiB  
Review
A Survey of Machine Learning Assisted Continuous-Variable Quantum Key Distribution
by Nathan K. Long, Robert Malaney and Kenneth J. Grant
Information 2023, 14(10), 553; https://doi.org/10.3390/info14100553 - 10 Oct 2023
Cited by 1 | Viewed by 1613
Abstract
Continuous-variable quantum key distribution (CV-QKD) shows potential for the rapid development of an information-theoretically secure global communication network; however, the complexities of CV-QKD implementation remain a restrictive factor. Machine learning (ML) has recently shown promise in alleviating these complexities. ML has been applied to almost every stage of CV-QKD protocols, including phase error estimation, excess noise estimation, state discrimination, parameter estimation and optimization, key sifting, information reconciliation, and key rate estimation. This survey provides a comprehensive analysis of the current literature on ML-assisted CV-QKD. In addition, the survey compares the ML algorithms assisting CV-QKD with the traditional algorithms they aim to augment and provides recommendations for future directions in ML-assisted CV-QKD research. Full article
(This article belongs to the Special Issue Quantum Information Processing and Machine Learning)
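To ground one of the surveyed tasks, here is a minimal sketch of channel parameter estimation treated as a least-squares fit of Bob's measured quadratures against Alice's sent ones; the simulated channel, the shot-noise-unit conventions, and all numerical values are assumptions for illustration rather than any protocol from the survey.

```python
# Minimal sketch: estimate CV-QKD channel transmittance T and excess
# noise xi from simulated quadrature pairs. Variances are in shot-noise
# units and excess noise is referred to the channel input; these
# conventions and all values are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
T_true, xi_true, V_mod = 0.5, 0.05, 4.0   # transmittance, excess noise, modulation variance

x = rng.normal(0.0, np.sqrt(V_mod), 100_000)        # Alice's quadratures
noise_var = 1.0 + T_true * xi_true                   # shot noise plus excess noise at Bob
y = np.sqrt(T_true) * x + rng.normal(0.0, np.sqrt(noise_var), x.size)

# Least-squares slope: the simplest baseline among the traditional
# estimators that the surveyed ML methods aim to augment.
t_hat = np.dot(x, y) / np.dot(x, x)
T_hat = t_hat ** 2
xi_hat = (np.var(y - t_hat * x) - 1.0) / T_hat       # residual variance above shot noise

print(f"T: {T_hat:.3f} (true {T_true}), xi: {xi_hat:.3f} (true {xi_true})")
```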